I haven't been able to subscribe to any communities hosted on kbin instances. I can see the communities, but they stay at "subscribe pending" and the posts never show up.
I can still see posts and comments from kbin users as long as they're on a Lemmy community, so I know it's not being blocked on my end.
I heard that there was a "temporary outage" regarding federation with kbin.social a while back, so I've been waiting for months for it to start working, but it just never did. I've also tried subscribing to other kbin instances and they haven't worked either...
So has this just always been broken for everybody?
Or is there something I can change on my end to get it working?
https://palia.com/news/palia-open-beta-available-now
Palia Open Beta Available Now, Aug. 10.
I keep seeing posts about these detection tools getting people's hopes up, so let's address the myth.
We're talking about tools that advertise the ability to accurately detect things like deepfake videos or text generated by LLMs (like ChatGPT). We are NOT talking about voluntary watermarking that companies like OpenAI might choose to add in the future.
I mean something with a high level of accuracy: both highly sensitive (few false negatives) and highly specific (few false positives). "High" would probably mean at least 95%, though the exact threshold is ultimately subjective.
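To make those two metrics concrete, here's a minimal sketch. The confusion-matrix counts are made up purely for illustration; they aren't from any real detector:

```python
# Sensitivity and specificity from confusion-matrix counts.
# All numbers below are invented for illustration only.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of actual fakes correctly flagged as fake (low false negatives)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of real content correctly labeled as real (low false positives)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical detector evaluated on 1000 fake and 1000 real samples:
print(sensitivity(true_pos=940, false_neg=60))   # 0.94 -> misses a 95% bar
print(specificity(true_neg=980, false_pos=20))   # 0.98 -> clears a 95% bar
```

Note that a detector can look great on one metric while quietly failing the other, which is why both need to be high before a "fake"/"real" label means anything.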
If you're going to definitively label something as "fake" or "real", you'd better be damn sure about it, because the consequences of being wrong with that label are even worse than having no label at all. You're either telling people to trust a fake they might otherwise have been skeptical about, or you're slandering something real. In both cases you're spreading misinformation, which is worse than just saying "I'm not sure".
To understand this part you need to understand a little bit about how these neural networks are created in the first place. Generative Adversarial Networks (GANs) are a strategy often employed to train models that generate content. They work by pitting two neural networks against each other: one that generates content resembling the training data, and one that tries to distinguish generated content from the real thing. The two networks learn in tandem; every time one gets better, it pushes the other to get better too.
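Here's a toy sketch of that adversarial loop, just to make the feedback concrete. The data, network sizes, and hyperparameters are arbitrary toy choices, not anything from a real deepfake system:

```python
# Minimal GAN training loop (PyTorch) on a toy 2-D "real" distribution.
import torch
import torch.nn as nn

real_data = torch.randn(1000, 2) * 0.5 + 2.0  # stand-in for real content

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # detector
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = real_data[torch.randint(0, 1000, (64,))]
    fake = G(torch.randn(64, 8))

    # Detector step: learn to call real content 1 and generated content 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: use the detector's own gradients to produce
    # fakes that the detector labels as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The key line is the generator step: the detector's judgment is literally the loss function the generator trains against, so every bit of detection skill is immediately converted into generation skill.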
What this means is that building a content generator and a fake-content detector are effectively two sides of the same coin. Improvements to one can always be translated, directly and automatically, into improvements in the other. The end state is that the generator keeps improving until the detector is fooled about 50% of the time, i.e. it can do no better than a coin flip.
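That 50% figure isn't arbitrary; it falls out of the standard GAN analysis (Goodfellow et al., 2014). Sketching the textbook result in notation that isn't in the post: for a fixed generator with output distribution $p_g$, the best possible detector is

```latex
D^*(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)},
\qquad
p_g = p_{\text{data}} \;\Rightarrow\; D^*(x) = \tfrac{1}{2},
```

so once the generator matches the real distribution, even a perfect detector is right exactly half the time.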
Note that not all of these models are trained exactly this way, but the point is that any of them CAN be: even if a GAN wasn't originally used, any improved detector can always be plugged in as the adversary to train an improved generator that beats it. This isn't an ordinary "arms race", because the turnaround time is so fast that the detector side never gets a chance to stay ahead of the curve... the generators will always win.
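A sketch of what "plugging a detector in as the adversary" could look like. `generator` and `pretrained_detector` are hypothetical stand-ins, not real models, and this assumes the detector is differentiable (if it isn't, one could first train a differentiable surrogate on its outputs):

```python
# Turning an existing detector into a training signal for a generator.
import torch
import torch.nn as nn

def adversarial_finetune(generator: nn.Module,
                         pretrained_detector: nn.Module,
                         steps: int = 1000) -> None:
    # Freeze the detector; we only exploit it, we don't update it.
    for p in pretrained_detector.parameters():
        p.requires_grad_(False)

    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    for _ in range(steps):
        fake = generator(torch.randn(64, 8))  # assumed latent size of 8
        # Push the generator toward outputs the frozen detector calls "real".
        loss = bce(pretrained_detector(fake), torch.ones(64, 1))
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The moment someone publishes (or even just exposes an API for) a better detector, this kind of loop converts it into a better generator, which is why the detection side can't hold a lead.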