Today Thorn, a prominent child safety organization, in partnership with Hive, a cloud-based AI solutions provider, announced the release of an AI model designed to flag unknown CSAM at upload. It is among the first AI technologies aiming to detect previously unreported CSAM at scale.

  • Ghostie21@lemmy.world · 2 days ago

    How is this proliferating CSAM? Also, how do you expect them to find CSAM without having known images? It gives a really nice way to check based on hashes without having someone look at every picture on someone’s hard drive, and this AI should greatly help in identifying new or unknown images while minimizing the number of actual people who have to see that stuff and get scarred by it (see the sketch below). The only reason to be against this is if you are looking at CP and want it to be harder to find, or if you don’t understand how this technology is being used.
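
    A minimal sketch of the two-tier flow described above, in Python. The hash set, the threshold, and the `classifier_score` stand-in are all hypothetical; real deployments use perceptual hashing (PhotoDNA-style) rather than exact cryptographic hashes, and a trained model rather than this placeholder.

    ```python
    from hashlib import sha256
    from pathlib import Path

    # Hypothetical inputs -- illustrative only.
    KNOWN_HASHES: set[str] = set()  # hex digests from a known-image hash database
    FLAG_THRESHOLD = 0.9            # score above which a file is queued for human review

    def classifier_score(data: bytes) -> float:
        """Stand-in for an ML classifier such as the Thorn/Hive model (hypothetical)."""
        return 0.0  # a real model would return a probability-like score

    def scan(root: str):
        """Yield (path, reason) for files matching known hashes or scoring high."""
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            data = path.read_bytes()
            if sha256(data).hexdigest() in KNOWN_HASHES:
                yield path, "known-hash match"
            elif classifier_score(data) >= FLAG_THRESHOLD:
                yield path, "model flag (needs human review)"
    ```

    Note that only the model-flagged files, not every image on the drive, would ever reach a human reviewer.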

    • Churbleyimyam@lemm.ee · 1 hour ago

      > How is this proliferating CSAM?

      Sharing it with people and companies that it wasn’t being shared with before.

      > Also, how do you expect them to find CSAM without having known images?

      The same way it is now: people reporting it and undercover police accounts. People recognise it.

      > without having someone look at every picture on someone’s hard drive

      If it’s going to get used as evidence in court, a human will have to review and confirm it. I don’t think “Because the AI said so” is going to convince juries.

      > The only reason to be against this is if you are looking at CP

      Or if it’s you, or someone you love, who is in the CP. Making further copies of it on further hard drives, whether so someone can bake it into their AI tool or for any other purpose, is wrong. That’s just my view though.