Inside the shifting plan at Elon Musk’s X to build a new team and police a platform ‘so toxic it’s almost unrecognizable’::X’s trust and safety center was planned for over a year and is significantly smaller than the initially envisioned 500-person team.

  • riot@lemmy.world
    link
    fedilink
    English
    arrow-up
    15
    ·
    edit-2
    10 months ago

    mirror: https://archive.vn/ghN0z

    According to the former X insider, the company has experimented with AI moderation. And Musk’s latest push into artificial intelligence technology through X.AI, a one-year old startup that’s developed its own large language model, could provide a valuable resource for the team of human moderators.

    An AI system “can tell you in about roughly three seconds for each of those tweets, whether they’re in policy or out of policy, and by the way, they’re at the accuracy levels about 98% whereas with human moderators, no company has better accuracy level than like 65%,” the source said. “You kind of want to see at the same time in parallel what you can do with AI versus just humans and so I think they’re gonna see what that right balance is.”

    I don’t believe that for one second. I’d believe it, if those numbers were reversed, but anyone who uses LLMs regularly, knows how easy it is to circumvent them.

    EDIT: Added the paragraph right before the one I originally posted alone, that specifies that their “AI system” is an LLM.

    • Alteon@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      10 months ago

      AI is whatever tech companies say it is. They aren’t saying it for the people, like you, that knows it’s horseshit. They are saying it for the investors, politicians, and ignorant folks. They are essentially saying that “AI” (cue jazz hands and glitter) can fix all of their problems, so don’t stop investing in us.