Yeah, a normal VPS would be too slow for production use, as a GPU is recommended. But you can plug in any home PC to do it without risk.
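To illustrate why any home PC works: the scanner pulls work from your instance over plain HTTPS, so the PC needs no inbound ports and is never exposed publicly. Here's a minimal sketch of that pull loop, with hypothetical endpoint names (`/api/pending`, `/api/verdict`) and a placeholder classifier standing in for the actual fedi-safety protocol:

```python
import time
import requests

INSTANCE = "https://lemmy.example.org"  # hypothetical endpoints below, not
API_KEY = "worker-secret"               # the actual fedi-safety protocol

def scan_image(data: bytes) -> bool:
    """Placeholder for the GPU-backed classifier; True means flagged."""
    raise NotImplementedError

def main() -> None:
    # The home PC polls for work, so the instance never needs to reach it.
    while True:
        resp = requests.get(f"{INSTANCE}/api/pending",
                            headers={"apikey": API_KEY}, timeout=30)
        for job in resp.json():
            image = requests.get(job["url"], timeout=30).content
            requests.post(f"{INSTANCE}/api/verdict",
                          headers={"apikey": API_KEY},
                          json={"id": job["id"], "flagged": scan_image(image)},
                          timeout=30)
        time.sleep(5)  # idle poll interval

if __name__ == "__main__":
    main()
```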
IFTAS is already working with Thorn towards this goal. But you already have access to such technology through my toolset.
haha, nah. People reported some unexpected censoring, and we investigated what part of their prompt might be causing it.
Image processing libraries are at the forefront of almost all web services, including Lemmy, and are extraordinarily robust. I really don’t have the time to go into this in depth, but if you are familiar with this stuff, you will know how extraordinary such an exploit would be; its existence would be causing massive chaos all over the world.
It’s the earliest AI technology striving to expose unreported CSAM at scale.
horde-safety has been out for a year now. Just saying… It’s not an AI model trained in the same way, but it’s still using neural networks (i.e. “AI technology”).
Moreover, Bluesky already feels a bit like how social media would look if a non-TERF version of the Guardian were running it. It’s very liberal, very centrist, and very ‘don’t rock the boat too much’.
We’ll be keeping an eye on this, and for the moment we’ll be posting to both Twitter and Bluesky. We look forward to engaging with you wherever you end up.
“we’re not ready to choose between the cop bar and the Nazi bar, so for now we’ll keep hanging out in both.”
Of course they don’t mention the fediverse as an option. Of course.
You mean an exploit payload embedded in an image, pwning the system that parses that image through Python PIL? While nothing is ever 100% certain, you’re more likely to be struck by lightning than to see this come to pass, and at that level of risk you’re in more danger just from using the internet at all.
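For context, the standard mitigations here are mundane. A minimal Pillow sketch of the usual hardening a service applies to uploads (decompression-bomb limit, structural verification, and re-encoding so that only pixel data survives); this is a generic illustration, not any specific project’s pipeline:

```python
from io import BytesIO
from PIL import Image

Image.MAX_IMAGE_PIXELS = 50_000_000  # Pillow's decompression-bomb guard

def reencode(data: bytes) -> bytes:
    """Validate an upload and re-encode it, dropping any non-pixel payload."""
    Image.open(BytesIO(data)).verify()   # structural sanity check
    img = Image.open(BytesIO(data))      # verify() invalidates the handle, reopen
    out = BytesIO()
    img.convert("RGB").save(out, format="JPEG")  # re-encoding keeps pixels only
    return out.getvalue()
```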
I read most of it, until I reached the point where it turns into a slow-burn advertisement for their own AI assistant.
Why would it be a security risk?
I don’t see how it’s a privacy risk, since you’re not exposing your IP or anything. Likewise, the images are already uploaded to your servers, so there’s no extra privacy risk for the uploader.
Tech journalists never learn anything from history. No VC-funded social media is good.
For now. They’re still in their growth phase. If they ever become dominant and need to make money, they’ll turn into a walled garden like every other platform. Everyone seems to forget that Twitter, Reddit and Facebook were also all about openness at the start.
You can actually run it in async mode without pictrs-safety and just have it scan your newly uploaded images directly from storage. It just doesn’t prevent the upload this way; it deletes flagged images after the fact.
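Roughly, that async mode amounts to a watch-and-delete loop over the pict-rs storage volume. A sketch of the idea, with an assumed storage path and a placeholder classifier (the real fedi-safety logic differs):

```python
import time
from pathlib import Path

STORAGE = Path("/srv/pict-rs/files")  # assumed: wherever your pict-rs volume lives
seen: set[Path] = set()

def scan_file(path: Path) -> bool:
    """Placeholder for the GPU classifier; True means the image is flagged."""
    raise NotImplementedError

# Async mode: nothing sits in the upload path, so a slow or offline scanner
# never blocks uploads; flagged files are simply removed after the fact.
while True:
    for path in STORAGE.rglob("*"):
        if path.is_file() and path not in seen:
            seen.add(path)
            if scan_file(path):
                path.unlink()  # delete instead of preventing the upload
    time.sleep(60)
```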
You don’t get public traffic redirected. That’s not how it works.
It stops doing checks. IIRC you can configure that, yes.
https://github.com/db0/fedi-safety and the companion app https://github.com/db0/pictrs-safety, which can be installed as part of your Lemmy deployment in the docker-compose (or with a var in your Ansible).
Not all web traffic, just the images to check. With any decent bandwidth, it shouldn’t be an issue for most. It’s also set up in such a way as to not cause downtime if the checker goes down.
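To make that failure mode concrete: a gatekeeping proxy in front of pict-rs can simply fail open when the checker is unreachable, so uploads keep working. A minimal Flask sketch with made-up endpoint names and a simplified upload format, not pictrs-safety’s actual code:

```python
import requests
from flask import Flask, Response, request

app = Flask(__name__)
CHECKER = "http://home-pc:8000/check"  # hypothetical checker endpoint
PICTRS = "http://pictrs:8080"          # upstream pict-rs
FAIL_OPEN = True  # accept uploads when the checker is unreachable

@app.post("/image")
def upload() -> Response:
    data = request.get_data()
    try:
        verdict = requests.post(CHECKER, data=data, timeout=10).json()
        if verdict.get("flagged"):
            return Response("rejected", status=400)
    except requests.RequestException:
        if not FAIL_OPEN:  # fail-closed would block uploads here instead
            return Response("checker unavailable", status=503)
    # Forward the (clean or unverified) upload to the real pict-rs
    upstream = requests.post(f"{PICTRS}/image", data=data, timeout=30)
    return Response(upstream.content, status=upstream.status_code)
```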
This approach was developed precisely for the threaded fediverse. The initial use case was protecting my own Lemmy instance from CSAM! Check out fedi-safety and pictrs-safety.