There’s no way for teachers to figure out if students are using ChatGPT to cheat, OpenAI says in new back-to-school guide::AI detectors used by educators to detect use of ChatGPT don’t work, says OpenAI.

  • T156@lemmy.world · 1 year ago

    It makes some sense. If a tool could reliably discern it, the tool would be used to train the model to be more indistinguishable from regular text, putting us right back where we are now.

  • EndOfLine@lemmy.world · 1 year ago

    The core of learning is for students to understand the content being taught. Using tools and shortcuts doesn’t necessarily negate that understanding.

    Using ChatGPT is no different, from an academic evaluation standpoint, than having somebody else do an assignment.

    Teachers should already be incorporating some sort of verbal Q&A session with students to see if their demonstrated in-person comprehension matches their written comprehension. Though from my personal experience, this very rarely happens.

    • Dojan@lemmy.world · 1 year ago

      That’s assuming a person just prompts for an essay and leaves it at that, which, to be fair, is likely the issue. The thing is, the genie is out of the bottle and it’s not going back in. I think at this point it’d be better to adjust the way we teach children, and also to get to know the tools they’ll be using.

      I’ve been using GPT and LLaMA to assist me in writing emails and reports. I provide a foundation, and working with the LLMs I get a good, cohesive output. It saves me time, allowing me to work on other things, and whoever needs to read the report or email gets a well-written document that doesn’t meander the way I normally do.

      I essentially write a draft, have the LLMs write the full version, and then there’s usually some back-and-forth to get the tone and verbiage right, as well as to trim away whatever nonsense the models made up that wasn’t in my original text. Essentially I act as an editor. Writing is a skill I don’t really possess, but now there are tools to make up for that.

      Using an LLM in that way, you’re actively working with the text, and you’re still learning the source material. You’re just leaving the writing to someone else.

    • BrianTheeBiscuiteer@lemmy.world · 1 year ago

      Don’t know why the downvote(s). Like many great technological advancements, it can be used for good or for malice. AI definitely can be a great boon to society, but what sets it apart from something like the computer or vaccines is that the tech is quite new, organizations and governments are scrambling to regulate it, and almost any fool can get their hands on it.

  • TropicalDingdong@lemmy.world · 1 year ago

    Calling it cheating is the wrong way to think about it. If you had a TI 80 whatever in the early 90s, it was practically cheating when everyone else had crap for graphing calculators.

    ChatGPT used effectively isn’t any different from a calculator or an electronic typewriter. It’s a tool. Use it well and you’ll do much better work.

    These hand wringing articles tell us more about the paucity of our approach to teaching and learning than they do about technology.

    • Copernican@lemmy.world · 1 year ago

      Do you understand what definitions are in place for authorship, citation, and plagiarism in regards to academic honesty policies?

      • TropicalDingdong@lemmy.world · 1 year ago

        The policies, and more importantly, the pedagogy are out of date and basically irrelevant in an age where machines can and do create better work than the majority of university students. Teachers used to ban certain levels of calculator from their classrooms because they were considered ‘cheating’ (they still might). Those teachers represent a backward approach to preparing students for a changing world.

        The future isn’t writing essays independent of machine assistance, just like the future of calculus isn’t slide rules.

        • Copernican@lemmy.world · 1 year ago

          I think a big challenge, or gap, here is that writing correlates with vocabulary and with developing the ability to articulate. It pays off not just in the prose you write, but in your ability to speak, discuss, and present ideas. I agree that AI is a tool we will likely be using more in the future. But education is in place to develop skills and knowledge. Does AI help or hinder that goal, if a teacher’s job includes evaluating how much a student has learned and whether they can articulate it?