Google to pause Gemini AI image generation after refusing to show White people.

Google will pause the image generation feature of its artificial intelligence model, Gemini, after the model refused to show images of White people when prompted.

  • j4k3@lemmy.world · +27/−8 · 9 months ago

    So what. It means they overtrained, deployed, and had to choose between reverting to a model with known issues or training a new one. They probably tried a temporary fix with a LoRA and it failed, so now they have to wait for the next big version to finish training, and those runs can take weeks even on massive data-center-class hardware. (See the LoRA sketch below.)

    People here don't seem to have any fundamental understanding of AI. It is all static tensor math. There is no persistence or learning inside the model. Any illusion of persistence comes from the loader code that turns your text into math tokens, and that is just standard code. (See the second sketch below.)

    There is no fundamental difference between an offline AI and a proprietary one like Gemini; one's loader code data-mines you while the other's doesn't. Training has a sweet spot. If too much John Oliver is added, everything will generate as John Oliver. Like, absolutely everything.
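
    On the LoRA point, here is a rough sketch of what patching a model with a LoRA adapter looks like in the open-source world, using the Hugging Face diffusers library. Google's internal stack isn't public, so the base model is a stand-in and the adapter path is hypothetical:

    ```python
    # Minimal sketch: patching a frozen text-to-image model with a LoRA
    # adapter via Hugging Face diffusers. The adapter repo name below is
    # hypothetical; Gemini's internal tooling is not public.
    import torch
    from diffusers import StableDiffusionPipeline

    # Load the frozen base model; its weights never change at inference time.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Apply a small low-rank adapter on top of the base weights. Training
    # one of these takes hours on a single GPU; a full retrain of the base
    # model is the weeks-long, data-center-scale job.
    pipe.load_lora_weights("some-org/hypothetical-bias-fix-lora")

    image = pipe("portrait photo of a medieval knight").images[0]
    image.save("knight.png")
    ```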
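
    And on the static tensor math point, a minimal sketch with an open model (GPT-2 via Hugging Face transformers, purely for illustration): the loader code maps your text to integer token IDs, and the frozen weights give identical outputs on identical inputs, because nothing inside the model persists or learns between calls.

    ```python
    # Minimal sketch: the model is a frozen pile of tensors. The tokenizer
    # ("loader code") turns text into integer IDs; two identical calls give
    # identical outputs because no state survives between them.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()  # inference only: the weights are static

    ids = tokenizer("John Oliver is", return_tensors="pt").input_ids
    print(ids)  # just a tensor of integers; your text is now math tokens

    with torch.no_grad():
        out1 = model(ids).logits
        out2 = model(ids).logits

    print(torch.equal(out1, out2))  # True: no persistence, no learning
    ```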

    • slacktoid@lemmy.ml · +13/−2 · 9 months ago

      There's no such thing as too much John Oliver. This guy doesn't know what they're talking about.

  • Prophet@lemmy.world · +3 · 9 months ago

    The guy who leads this group is extremely vocal (almost weirdly so) about white privilege and systemic racism. He is also white. It's true that many AI models have a white bias, and the reasons are multifaceted: our datasets are grossly imbalanced against racial minorities, and, as I understand it, it is also harder for the model to extract relevant features for darker skin tones from the shitty Flickr photos scraped for these models.

    That said, injecting words into the user's prompt to force the model to generate minorities more often is an extremely naive approach. It's kind of like Google appending "reddit" to every search because it worked for some specific test cases, while ignoring that you now never get any site except reddit. The solution here probably looks like paying a lot of money for high-quality datasets, plus investing in user education and in making these tools more explainable.
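
    A toy illustration of that naive approach (the trigger list and injected terms here are made up; Gemini's actual rewriting logic has not been published):

    ```python
    # Toy sketch of naive prompt injection, the approach criticized above.
    # Trigger words and injected terms are hypothetical.
    import random

    DIVERSITY_TERMS = ["South Asian", "Black", "Indigenous", "East Asian"]
    PEOPLE_TRIGGERS = {"person", "man", "woman", "knight", "doctor", "king"}

    def rewrite_prompt(prompt: str) -> str:
        """Blindly prepend a demographic term whenever the prompt mentions people."""
        if any(word in PEOPLE_TRIGGERS for word in prompt.lower().split()):
            return f"{random.choice(DIVERSITY_TERMS)} {prompt}"
        return prompt

    # The failure mode: the rewrite fires even when the prompt already pins
    # down who should appear, just like appending "reddit" to every search.
    print(rewrite_prompt("portrait of a medieval English king"))
    ```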