Announcements and Meta

!announcements@lemmy.basedcount.com

[Community Poll] Threads.net, Reddit reposts, alternative frontends and a Lemmy update

There are a few topics on which the Based Count admin team would like to hear your thoughts as our community, before we take action. Instead of making several separate posts, we decided to condense them into a single big announcement, so feel free to speak your mind on any of the following topics. We encourage all Based Count users to participate, but we also welcome feedback from users of other instances.

Lemmy update

Earlier today we updated our instance to version 0.19.0, recently released by the Lemmy developers.

Because of a previously undetected bug in the code of Kaleidoscope, our custom frontend, we had to temporarily disable user flairs and replace our user interface with the latest version of the legacy lemmy-ui. Kaleidoscope and user flairs will be reactivated in the next few days, as soon as we solve the problem.

Furthermore, according to feedback we have received from administrators of other instances, there may still be some bugs in the new Lemmy release, and the server may feel a bit slower due to changes in how federation is handled. If you encounter any errors or issues, please use this thread to report them.

Threads.net

A few days ago, Meta's threads.net app launched in the European Union. Simultaneously, they also started testing integration with the Fediverse through ActivityPub. This reignited old discussions about whether smaller instances like ours should federate with them or not. It should be noted that Threads is pursuing closer integration with microblogging platforms like Mastodon, rather than link aggregators like Lemmy. Also, it appears that Threads posts will at first be read-only for federated servers. In other words, the Fediverse will be able to read content from Threads, but not the other way around.

Arguments FOR federation

  • More content

Arguments AGAINST federation

  • Privacy NIGHTMARE (it's literally Meta)
  • Possible increased storage costs due to having to host additional content
  • Risk of Meta pursuing an Embrace, Extend and Extinguish strategy, despite having denied such plans in the past.

We are currently defederated from threads.net due to privacy concerns, but we'd be open to reconsidering this stance depending on community feedback.

Reddit reposting bots and alien.top

Another hot topic in the last few weeks has been the alien.top instance and the Fediverser project. Its aim is pretty simple: creating digital bridges between Lemmy and Reddit to encourage a migration of Reddit users to the Fediverse. This is mostly done by reposting Reddit posts to Lemmy communities through bots. An example of this would be a bot reposting all content from r/PCM to our !pcm community.

Advantages

  • This would solve the content drought that has caused many users to give up on Lemmy and move back to Reddit

Disadvantages

  • Some perceive this as bot spam and have lamented the excessive amount of bot posts in their feeds

Various instances have already defederated alien.top, including big players such as lemmy.world and feddit.de. What should our stance be? Would you like to see more posts in !pcm even if those posts were made by bots?

Example

Alternative frontends

We would like to enrich our instance by adding support for different user interfaces. Because our current UI features some changes from the OG Lemmy UI, adopting new frontends would require quite a lot of work on our end to ensure that features such as our very own user flairs work seamlessly on the new design.

Because of this, for the time being only one other client would support our custom features. Vote to decide your favourite.

Week-long downtime

As some of you have already noticed, the instance has been down for the last week and a half. I documented the problem and the process of bringing it back online in a thread on our Discord, so if you are curious about that, I'll redirect you there.

The TL;DR is that the instance ran out of disk space, so the database crashed. No database, no Lemmy.

I solved it by moving our 30GB of images to a separate, much cheaper storage (we moved from 3,00€ / month down to just 0,02€ / month for the image storage!), freeing up a bunch of space for the database. This should keep us going for a while and allow us to scale much better in the future.

The new host

The new image host is an object storage service located on a separate machine from the Lemmy server (previously, images resided on the same server as the database and the instance itself). Because of this, you are likely to experience a few milliseconds of extra delay when loading new images, because some back and forth between Lemmy and the image server needs to happen before you can see them: Lemmy downloads the image, sends it to the image storage, the image storage returns a link to the image to Lemmy, and only THEN can you see the image. It takes a while.
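For the technically curious, below is a minimal sketch of how one could sanity-check a migration like this against an S3-compatible object store, by listing the stored objects and totalling their size. It is purely illustrative: the endpoint, bucket name and credentials are placeholders rather than our real configuration, and it uses the generic boto3 client rather than anything pict-rs specific.

```python
# Sanity-check sketch: list the migrated image objects on an
# S3-compatible object store and total their size.
# Endpoint, bucket and credentials are PLACEHOLDERS, not our real setup.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-storage.example.com",  # hypothetical endpoint
    aws_access_key_id="REPLACE_ME",
    aws_secret_access_key="REPLACE_ME",
)

total_bytes = 0
object_count = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="pictrs-images"):  # hypothetical bucket name
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]
        object_count += 1

print(f"{object_count} objects, {total_bytes / 1024**3:.1f} GiB total")
```

If the total comes out to roughly the 30GB mentioned above, the migration most likely copied everything over.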

Next moves

While I'm on a roll with the Lemmy updates, later today I plan on updating our instance to version 0.18.5 of Lemmy. This should give us even more stability and better uptime in the future, but might temporarily break user flairs in !pcm@lemmy.basedcount.com.

I am terribly sorry for the prolonged downtimes, and I really appreciate everyone who joined our Discord server asking if they could somehow help, or simply showing care for our work on the instance.
Please remember that this is mostly a solo project of mine, where I am left handling both the server admin side and the community-facing one. It's a lot for one guy to deal with, so I'm sure you'll understand.

EDIT: I've successfully updated the instance and everything seems to be working fine. Let me know if something feels odd or buggy.

Recent downtimes

As you might have noticed, we've been experiencing repeated crashes and downtimes for the last 12 hours or so. This happened because, after four long and tireless months, our storage drive had finally reached its 50GB capacity.

I noticed this today at around 9:00 UTC and quickly took action. Fortunately, we had already prepared a different partition with an additional 15GB for situations of this kind. It took me another 40 minutes of tinkering to move the database to that partition, but after that the instance was back online without any major issues.

This is our current storage breakdown:

  • Secondary partition (currently: 10GB; max capacity: 15GB)

    • Database (postgres): 10GB
  • Main partition (currently: 38GB; max capacity: 50GB)

    • Image hosting (pict-rs): 23GB
    • Lemmy executables, other services (Kaleidoscope, AutoMod, Flair) and operating system: 15GB

Of these, the only ones expected to grow are the database and the image hosting, with the latter being by far the fastest growing.
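Since a full disk is exactly what caused these crashes, a small watchdog along the lines of the sketch below can warn an admin before a partition fills up again. This is purely illustrative and not part of our actual stack; the mount points and the 90% threshold are made-up example values.

```python
# Illustrative disk-usage watchdog: warn before a partition fills up.
# The mount points and the threshold are example values only.
import shutil

WATCHED_PARTITIONS = ["/", "/mnt/database"]  # hypothetical mount points
THRESHOLD = 0.90                             # warn at 90% usage

for mount in WATCHED_PARTITIONS:
    usage = shutil.disk_usage(mount)
    used_fraction = usage.used / usage.total
    if used_fraction >= THRESHOLD:
        print(f"WARNING: {mount} is {used_fraction:.0%} full "
              f"({usage.free / 1024 ** 3:.1f} GiB free)")
    else:
        print(f"OK: {mount} at {used_fraction:.0%}")
```

Run periodically from cron or a systemd timer, something like this would flag the problem well before the database crashes again.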

In the future we are considering moving pict-rs to a separate, more cost-effective storage solution, but for the time being this should hold. We apologize for the disruption over the last few hours.

Cheers and stay Based!

Refederating feddit.nl

This is an update on the "Why is feddit.nl defederated" post by @witchdoctor@lemmy.basedcount.com in !general@lemmy.basedcount.com.

Some time has passed and the admin appears to have regained control of his instance. Furthermore, I just noticed that AvaddonLFC (a very active lemmy.world admin) has joined their admin team. This is more than enough to satisfy our criteria.
I have also withdrawn our censure against their instance on the Fediseer.

Here's hoping we'll be able to welcome many based Dutch people!

Feature update: introducing Kaleidoscope and plans for the future

https://lemmy.basedcount.com/pictrs/image/7128ea29-f4bf-4602-b1b7-5517a09ae3f4.png?format=webp

Transparency report: potential CSAM attack

We have been informed of another potential CSAM attack on lemmy.ml, an instance we federate with.

After the events of last time, I have preemptively and temporarily defederated us from lemmy.ml until the situation can be assessed more clearly.

I have already deleted the suspicious posts (without looking at them myself, all from the database's command line) and banned the author. To the best of our knowledge, at no point was any CSAM content saved on our server.

EDIT: 2023-09-03 8:40 UTC

There have been no further reports of similar problems arising from lemmy.ml or other instances, so I am re-enabling federation. Thank you for your patience.

Based Count Terms of Service and New Rules

https://basedcount.com/tos

See your r/PoliticalCompassMemes pill count. Avoid censorship, stay based.

Transparency report: broken images and federated CSAM attack

Images posted within the last 48 hours will appear as broken. This is expected and intended.

Yesterday, 2023-08-27, a community on the lemmy.world instance received multiple posts containing CSAM (more commonly known as CP), which spread throughout the federation. We also ended up becoming involuntary hosts of said content.

Due to the severely limited nature of the Lemmy moderation tools, removing or purging the offending posts from the admin UI wasn't sufficient and didn't actually remove the images from our server. Because of this, a nuclear option was required: I have deleted every image saved by our server during the last 48 hours.

Unfortunately this also includes a post on !pcm@lemmy.basedcount.com, as well as multiple posts on !returntomonke@lemmy.basedcount.com. Authors of the affected posts can fix them by re-uploading their images, without the need to recreate the posts.
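For transparency, the "nuclear option" boils down to removing everything the media store saved within the last 48 hours. The sketch below only illustrates that idea and is not the exact procedure we ran: the directory path is a placeholder, and since pict-rs keeps its own metadata about uploads, blindly deleting files like this is not something to copy on a live instance.

```python
# Illustration only: remove every file under a directory that was
# modified within the last 48 hours. The path is a placeholder and this
# is NOT the exact procedure used on our server.
import time
from pathlib import Path

MEDIA_ROOT = Path("/srv/pictrs/files")  # hypothetical media directory
CUTOFF = time.time() - 48 * 60 * 60     # 48 hours ago, as a Unix timestamp

for path in MEDIA_ROOT.rglob("*"):
    if path.is_file() and path.stat().st_mtime >= CUTOFF:
        print(f"deleting {path}")
        path.unlink()
```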

We are sorry for the inconvenience, but hosting CSAM content is highly illegal and we simply can't take any risks on this front.

I am currently discussing with the other admins whether further measures are necessary to prevent this from happening in the future. We'll keep you posted if we have any updates.

EDIT [2023-08-28 10:00 UTC]:

The attack is still ongoing. I have now blocked the community and also deleted the last 15 minutes of images.