ChatGPT generates cancer treatment plans that are full of errors

Study finds that ChatGPT provided false information when asked to design cancer treatment plans: researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.

  • drekly@lemmy.world · 1 year ago

    It speeds things up for people who know what they're talking about. The doctor asking for the plan could push back on a few of the errors, GPT would say "oh, you're right, I'll change that to something better", and then it's good to go.

    Yes you can’t just rely on it to be right all the time, but you can often use it to find the right answer with a small conversation, which would be quicker than just doing it alone.

    I recently won a client in my industry with GPT's help.

    I consider myself very knowledgeable in what I do, but to save time I asked it what I should be looking out for, and it gave me a long list of areas to consider in a proposal. That list alone was a great starting block. Some of the list wasn't relevant to me or the client, so I ignored it, but the majority was solid and started me out an hour ahead, essentially handling the planning stage for me.

    If someone outside my industry had used that list verbatim, they would have brought up a lot of irrelevant information and covered topics that make no sense.

    I feel it’s a tool or partner rather than a replacement for experts. It helps me get to where I need to go quicker, and it’s fantastic at brainstorming ideas or potential issues in plans. It takes some of the pressure off as I get things done.