• cheese_greater@lemmy.world · 1 year ago · edited

    I would be in trouble if this were a thing. My writing naturally resembles the output of a ChatGPT prompt when I’m not joke-answering or shitposting.

      • sebi@lemmy.world · 1 year ago

        Because generative neural networks always have some random noise in their sampling. Read more about it here
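
        To make that concrete, here is a minimal sketch in Python of temperature sampling, the usual source of that randomness. The vocabulary and logits are made up for illustration; the point is that the same context can come out differently on every run.

        ```python
        import numpy as np

        # Toy next-token logits for a made-up 4-word vocabulary (illustrative values).
        vocab = ["cat", "dog", "fish", "bird"]
        logits = np.array([2.0, 1.5, 0.3, 0.1])

        def sample_token(logits, temperature=0.8, rng=None):
            # Softmax over temperature-scaled logits, then draw one token at random.
            rng = rng or np.random.default_rng()
            scaled = logits / temperature
            probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
            probs /= probs.sum()
            return vocab[rng.choice(len(vocab), p=probs)]

        # The same context produces different continuations across runs:
        print([sample_token(logits) for _ in range(5)])
        ```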

          • PetDinosaurs@lemmy.world · 1 year ago

            It almost certainly has some GAN-like pieces.

            GANs are part of the NN toolbox, like CNNs and RNNs and such.

            Basically all commercial algorithms (not just NNs, everything) are what I like to call “hybrid” methods, which means you keep throwing different tools at the problem until things work well enough.

              • PetDinosaurs@lemmy.world · 1 year ago

                It doesn’t matter. Even the training process makes it pretty much impossible to tell these things apart.

                And if we do find a way to distinguish, we’ll immediately incorporate that into the model design in a GAN-like manner, and we’ll soon be unable to distinguish again.
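
                A rough sketch of that feedback loop, assuming a toy PyTorch setup where random vectors stand in for text and `generator`/`detector` are hypothetical placeholder networks: any working detector just becomes the discriminator the generator is trained against.

                ```python
                import torch
                import torch.nn as nn

                dim = 32  # toy embedding size; a real system works on token sequences
                generator = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())  # stands in for the LLM
                detector = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())  # stands in for the classifier
                g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
                d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
                bce = nn.BCELoss()

                for step in range(100):
                    human = torch.randn(64, dim)            # stand-in for human-written samples
                    fake = generator(torch.randn(64, dim))  # stand-in for AI-generated samples

                    # 1) Train the detector to tell the two apart.
                    d_opt.zero_grad()
                    d_loss = bce(detector(human), torch.ones(64, 1)) + \
                             bce(detector(fake.detach()), torch.zeros(64, 1))
                    d_loss.backward()
                    d_opt.step()

                    # 2) Train the generator to fool the detector, erasing whatever tell it found.
                    g_opt.zero_grad()
                    g_loss = bce(detector(fake), torch.ones(64, 1))  # target: "looks human"
                    g_loss.backward()
                    g_opt.step()
                ```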

                • stevedidWHAT@lemmy.world · 1 year ago

                  Which is why hardcoded fingerprints/identifiers are required: they mark the text as coming from a specific model, rather than trying to classify AI vs. human after the fact. That’s what we’re ultimately agreeing on here, beyond the pedantics of the article and its scientific findings:

                  Trying to detect, as AI, a model that is built to pass as human is counterintuitive. The two goals are direct opposites; if one works, the other can’t exist in this implementation.

                  The hard part will obviously be making sure such a “fingerprint” can’t be removed, which will take some wild math and out-of-the-box thinking, I’m sure.

                  Tough problem!
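
                  For a sense of what such a fingerprint could look like: one published approach (Kirchenbauer et al.’s “green list” watermark) biases sampling toward a pseudo-random half of the vocabulary, reseeded from the previous token. The sketch below is only illustrative; the constants and function names are made up, not any vendor’s actual scheme.

                  ```python
                  import hashlib
                  import numpy as np

                  VOCAB_SIZE = 50_000
                  GREEN_FRACTION = 0.5
                  DELTA = 2.0  # logit boost for "green" tokens (illustrative value)

                  def green_list(prev_token: int) -> np.ndarray:
                      # Seed a PRNG from the previous token and mark half the vocab "green".
                      seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % 2**32
                      return np.random.default_rng(seed).random(VOCAB_SIZE) < GREEN_FRACTION

                  def watermark_logits(logits: np.ndarray, prev_token: int) -> np.ndarray:
                      # Nudge sampling toward green tokens; the text still reads naturally.
                      return logits + DELTA * green_list(prev_token)

                  # Detection: a human picks ~50% green tokens by chance; watermarked text
                  # picks noticeably more, which a simple binomial test can flag.
                  ```

                  And, as you say, the open question is whether paraphrasing or light editing strips the signal.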

  • Nioxic@lemmy.dbzer0.com · 1 year ago · edited

    I have to hand in a short report.

    I wrote parts of it and asked ChatGPT for a conclusion.

    So I read that, adjusted a few points, added another couple of points…

    Then rewrote it all in my own wording. (ChatGPT gave me 10 lines out of 10 pages.)

    We are allowed to use ChatGPT though, because we would always have internet access on the job anyway. (Computer science.)

    • TropicalDingdong@lemmy.world · 1 year ago

      I found out on the last screen of a travel grant application that I needed a cover letter.

      I pasted in the requirements for the cover letter and what I had put in my application.

      I pasted the results in as the cover letter without review.

      I got the travel grant.

        • TropicalDingdong@lemmy.world · 1 year ago

          Exactly. But they still need to exist. That’s what ChatGPT is for: letters, bullshit emails, applications. The shit that’s just tedious.

  • Boddhisatva@lemmy.world · 1 year ago

    “OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text. It had an abysmal 26 percent accuracy rate.”

    If you ask this thing whether or not some given text is AI generated, and it is only right 26% of the time, then I can think of a real quick way to make it 74% accurate.
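
    A quick sanity check of that arithmetic, assuming “26% accuracy” meant overall accuracy on a balanced binary task (a toy simulation with made-up data):

    ```python
    import random

    random.seed(0)
    truth = [random.choice([0, 1]) for _ in range(10_000)]  # 1 = AI-written

    # A classifier that is right only 26% of the time on this binary task.
    preds = [t if random.random() < 0.26 else 1 - t for t in truth]

    acc = sum(p == t for p, t in zip(preds, truth)) / len(truth)
    flipped = sum((1 - p) == t for p, t in zip(preds, truth)) / len(truth)
    print(f"accuracy: {acc:.2%}, flipped: {flipped:.2%}")  # ~26% vs ~74%
    ```

    The catch: the reported 26% was apparently the rate at which the tool flagged AI-written text as “likely AI-written” (a true-positive rate), not overall binary accuracy, so inverting the output wouldn’t actually deliver 74%.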

    • Leate_Wonceslace@lemmy.dbzer0.com · 1 year ago

      I feel like this must stem from a misunderstanding of what 26% accuracy means, but for the life of me, I can’t figure out what it would be.

    • notatoad@lemmy.world · 1 year ago

      It seemed like a really weird decision for OpenAI to have an AI classifier in the first place. Their whole business is to generate output that’s good enough that it can’t be distinguished from what a human might produce, and then they went and made a tool to try to point out where they failed.

      • Boddhisatva@lemmy.world · 1 year ago

        That may have been the goal: “Look how good our AI is, even we can’t tell if its output is human-generated or not.”

  • Matriks404@lemmy.world · 1 year ago

    Did human-generated content really become so low quality that it is indistinguishable from AI-generated content?

  • irotsoma@lemmy.world · 1 year ago

    A lot of these detectors relied on common mistakes that “AI” algorithms make but humans generally don’t. As language models improve, those tells are getting harder to detect.
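
    One concrete example of such a tell: early detectors scored text by its perplexity under a reference model, since machine-generated text tends to be more statistically predictable than human writing. A rough sketch with Hugging Face transformers; the model choice and any cutoff are illustrative, not what a specific detector used.

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def perplexity(text: str) -> float:
        # Average per-token perplexity of `text` under GPT-2.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean cross-entropy per token
        return torch.exp(loss).item()

    # Heuristic: unusually low perplexity suggests machine-generated text;
    # any threshold would have to be calibrated on real data.
    print(perplexity("The quick brown fox jumps over the lazy dog."))
    ```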

  • Jargus@lemmy.world · 1 year ago · edited

    So democracy is basically fucked: countries without freedom of expression/speech have an advantage, while our social media becomes a cesspool that divides and weakens our societies. The future looks bright /s

  • nucleative@lemmy.world · 1 year ago · edited

    We need to embrace AI-written content fully. Language is just a protocol for communication. If AI can flesh out the “packets” for us nicely, in a way that fits what the receiving humans need to understand the communication, then that’s a major win. Now I can ask AI to write me a nice letter and prompt it with a short bulleted list of what I want to say. Boom! Done, and time is saved.

    The professional writers who used to slave over a blank Word document are now obsolete, just like the slide rule “computers” of old (the people who could solve complicated mathematics and engineering problems on paper).

    Teachers who thought a hand-written report could be used to prove that “education” has happened are now realizing the idea was a crutch (it was 25 years ago, too, when we could copy/paste Microsoft Encarta articles and use them as our research papers).

    The technology really just shows us that our language capabilities are a means to an end. If a better means arises, we should figure out how to maximize it.