adeoxymus@lemmy.world to Technology@lemmy.world • ChatGPT generates cancer treatment plans that are full of errors: study finds that ChatGPT provided false information when asked to design cancer treatment plans
1 year ago
I'd say that a measurement always trumps argument. At least with a measurement you know how accurate it is; a statement like the following cannot be derived from reasoning alone:
The JAMA study found that 12.5% of ChatGPT’s responses were “hallucinated,” and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.