• capital@lemmy.world · 10 months ago

    Watermarking AI-generated content might sound like a practical approach for legislators to track and regulate such material, but it’s likely to fall short in practice. Firstly, AI technology evolves rapidly, and watermarking methods can become obsolete almost as soon as they’re developed. Hackers and tech-savvy users could easily find ways to remove or alter these watermarks.

    Secondly, enforcing a universal watermarking standard across all AI platforms and content types would be a logistical nightmare, given the diversity of AI applications and the global nature of its development and deployment.

    Additionally, watermarking doesn’t address deeper ethical issues like misinformation or the potential misuse of deepfakes. It’s more of a band-aid solution that might give a false sense of security, rather than a comprehensive strategy for managing the complexities of AI-generated content.

    This comment brought to you by an LLM.

    • Tak@lemmy.ml · 10 months ago

      Plus, what if the creator simply doesn’t live in California? What are they gonna do about it?

  • QuadratureSurfer@lemmy.world · 10 months ago

    The problem here will be when companies start accusing smaller competitors/startups of using AI when they haven’t used it at all.

    It’s getting harder and harder to tell whether a photograph is AI generated. Sometimes it’s obvious, but it makes you second-guess even legitimate photographs of people, because you notice someone has six fingers or their face looks a little off.

    A perfect example of this was posted recently, where 80-90% of people thought the AI pictures were real and the real pictures were AI generated.

    https://web.archive.org/web/20240122054948/https://www.nytimes.com/interactive/2024/01/19/technology/artificial-intelligence-image-generators-faces-quiz.html

    And where do you draw the line? What if I used AI to remove a single item in the background like a trashcan? Do I need to go back and watermark anything that’s already been generated?

    What if I used AI to upscale an image or colorize it? What if I used AI to come up with ideas, and then painted it in?

    And what does this actually solve? Anyone running a misinformation campaign is just going to remove the watermark and it would give us a false sense of “this can’t be AI, it doesn’t have a watermark”.
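
    To illustrate how trivial watermark removal can be: many proposed labeling schemes store the marker as metadata (a PNG text chunk, an EXIF tag, or similar), which disappears the moment the file is re-saved without it. This is a hypothetical sketch, not any real standard — the field names and dict-based "image" are illustrative only.

    ```python
    # Hypothetical sketch: a provenance label stored as metadata survives
    # only as long as nobody deletes it. The "image" here is a plain dict
    # standing in for a real file format.

    def tag_as_ai(image: dict) -> dict:
        """Attach an AI-provenance label in the image's metadata."""
        tagged = dict(image)
        tagged["metadata"] = {**image.get("metadata", {}), "ai_generated": True}
        return tagged

    def strip_metadata(image: dict) -> dict:
        """Re-save the image without metadata -- one line defeats the label."""
        return {"pixels": image["pixels"]}

    original = {"pixels": [0, 255, 128], "metadata": {}}
    watermarked = tag_as_ai(original)
    laundered = strip_metadata(watermarked)

    assert watermarked["metadata"]["ai_generated"] is True
    assert "metadata" not in laundered                 # label is gone
    assert laundered["pixels"] == original["pixels"]   # image unchanged
    ```

    Pixel-level (steganographic) watermarks are harder to strip than metadata, but they too can be degraded by re-encoding, cropping, or adding noise.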

    The actual text in the bill doesn’t offer any answers. So far it’s just a statement that they want to implement something “to allow consumers to easily determine whether images, audio, video, or text was created by generative artificial intelligence.”

    https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB942

  • skarlow181@lemmy.world · 10 months ago

    Completely impractical. Whether something is AI generated or manipulated with Photoshop or in the darkroom really doesn’t make a difference. AI isn’t special here; photo manipulation is about as old as the photograph itself. It would be much better to put some effort into signing authentic images, including a whole chain of trust up to the actual camera. Luckily the Content Authenticity Initiative is already working on that.
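
    The core idea of signing at the camera can be sketched in a few lines: the device signs the pixel data with a key it holds, and any later edit invalidates the signature. This is only an illustration — real systems like the Content Authenticity Initiative’s C2PA spec use public-key certificates and signed manifests, not a shared HMAC key as shown here.

    ```python
    import hashlib
    import hmac

    # Illustrative stand-in for a device signing key; in practice this
    # would be an asymmetric key held in the camera's secure hardware.
    CAMERA_KEY = b"device-secret"

    def sign(pixels: bytes) -> bytes:
        """Camera signs the raw image data at capture time."""
        return hmac.new(CAMERA_KEY, pixels, hashlib.sha256).digest()

    def verify(pixels: bytes, signature: bytes) -> bool:
        """Anyone with the verification key can check authenticity."""
        return hmac.compare_digest(sign(pixels), signature)

    photo = b"\x00\xff\x80"
    sig = sign(photo)

    assert verify(photo, sig)                # untouched image checks out
    assert not verify(photo + b"edit", sig)  # any manipulation breaks it
    ```

    The point is the inversion of the burden of proof: instead of trying to label everything fake, you prove what is authentic, and anything without a valid chain of signatures is simply unverified.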

  • randon31415@lemmy.world · 10 months ago

    … and also requiring abortion doctors to carry medicine that reverses abortion if a woman wants it.

    Come on dems! Republicans are blowing us out of the water on requiring absurd technology that doesn’t exist. We should try to enforce the 3 laws of robotics!