• 2 Posts
  • 122 Comments
Joined 11 months ago
Cake day: December 18th, 2023



  • The article is fake news. I suggest looking elsewhere for proper information.

    As for your questions: LLMs were certainly not involved here. I can’t guess what techniques were used.

    Racial discrimination is often hard to nail down. Race is implicit in any number of facts: place of birth, current address, school, and so on. You could infer race from such data. If you never look at race but the end result still discriminates, then it’s probably still racial discrimination. I say probably because you are free to discriminate based on any number of factors, as long as it isn’t race, sex, and the like. You certainly may discriminate based on education or wealth. Things being as they are, that will disadvantage minorities, who have systematically lower credit ratings, for example.
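    The proxy effect above is easy to demonstrate. A minimal sketch with entirely synthetic, illustrative numbers (the groups, scores, and threshold are assumptions, not real statistics): a decision rule that never sees race, only a score that happens to correlate with group membership, still approves the two groups at very different rates.

```python
# Minimal sketch with synthetic data: a lender that never looks at race
# but filters on a correlated proxy still produces skewed outcomes.
# All numbers here are illustrative assumptions.
import random

random.seed(0)

applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])  # protected attribute; never used by the rule
    # Assumed correlation: group B has systematically lower scores
    # (e.g. via neighborhood or credit history).
    score = random.gauss(680 if group == "A" else 640, 50)
    applicants.append((group, score))

# Decision rule uses ONLY the score -- race is not an input.
approved = [(g, s) for g, s in applicants if s >= 660]

def rate(group):
    total = sum(1 for g, _ in applicants if g == group)
    ok = sum(1 for g, _ in approved if g == group)
    return ok / total

print(f"approval rate A: {rate('A'):.2f}")  # noticeably higher
print(f"approval rate B: {rate('B'):.2f}")  # noticeably lower
```

    The rule is facially neutral, yet the gap in approval rates falls out of the correlation alone.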

    In the case of generative AI, bias is often not clearly defined. For example, you type “US President” into an image generator. All US presidents so far were male, and all but one were white. But half of all people eligible for the presidency are female, and (I think) a little less than half are non-white. So what’s the non-biased output?
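    The ambiguity is really about which reference distribution the generator should match. A sketch (the weights are rough illustrative assumptions, not exact statistics): two equally defensible samplers for “US President” give very different outputs.

```python
# Minimal sketch: "unbiased" depends on the reference distribution.
# Both samplers below are defensible answers to "generate a US President".
# The weights are illustrative assumptions, not exact statistics.
import random

random.seed(1)

historical = {"white male": 45, "non-white male": 1}          # presidents so far
eligible   = {"white female": 30, "white male": 28,           # rough shares of
              "non-white female": 22, "non-white male": 20}   # eligible citizens

def sample(dist, n=1000):
    # Draw n outputs with probability proportional to the weights.
    people, weights = zip(*dist.items())
    draws = random.choices(people, weights=weights, k=n)
    return {p: draws.count(p) / n for p in people}

print(sample(historical))  # almost entirely white male
print(sample(eligible))    # roughly mirrors the eligible population
```

    Neither sampler is “wrong”; they just answer different questions, which is why “remove the bias” is underspecified without naming the target distribution.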

  • The “battle” is the result of copyright people trying to use open source people for their ends.

    In the past, for software, the focus was entirely on the terms of the license. If you look at OSI’s new definition, you will find no mention of that, even though common licenses in the AI world are not in line with traditional standards. The big focus is data, because that is what copyright people care about. AI trainers are supposed to provide extensive documentation of their training data. That’s exactly the same demand that the copyright lobby managed to get into the European AI Act. They will use it to sue people for piracy.

    Of course, what the copyright people really want is free money. They’re spreading the myth that training data is like source code and training is like compiling. That may seem like a harmless, if flawed, analogy. But the implication is that the people who work and pay to do open source AI have actually done nothing except piracy. If they can convince judges or politicians who don’t understand the implications, this may cause a lot of damage.