• BertramDitore@lemmy.world
    11 months ago

    Disinformation comes from self-serving, agenda-driven swaths of the world’s population (meaning people, not AI), and it will be amplified by AI-powered tools. The tools themselves are not necessarily the problem (though of course they sometimes are), but if the datasets they steal (sorry, use) to train their models are filled with dis- and misinformation, then obviously their outputs will be too. We should tackle the inputs first; then the outputs will be less likely to misinform.

    For the inputs to be better, we need a quality free press and faith in our public institutions. Most of the world is not in great shape on either front…

    We also need to be able to easily see inside the workings of AI models, so we can pinpoint exactly how the misinformation is being generated and take steps to fix it. I understand this is currently a hard technical problem, but frankly I don’t think AI tools should ever be made public until they are fully transparent about their sourcing.