ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans::Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.
Because if it can crawl all of the scientific publications, then it could try different combinations until one works. Isn't that how it could be, or already is being, used: to test things?