• Nobody@lemmy.world · ↑71 · 6 months ago

    Tech company creates best search engine -> world domination -> becomes VC company in a tech trench coat -> destroys search engine to prop up bad investments in artificial intelligence advanced chatbots

    • stellargmite@lemmy.world · ↑31 · 6 months ago

      Then hire cheap human intelligence to correct the AI’s hallucinatory trash, which was trained on actual human-generated content whose original intended audience did understand its nuanced context and meaning in the first place. Wow, it’s more like they’ve shovelled a bucket of horse manure onto the pizza as well as the glue. Added value for the advertisers. AI my arse. I think calling these things language models is being generous. More like energy- and data-hungry vomitrons.

      • WhatAmLemmy@lemmy.world · ↑12 ↓1 · edited · 6 months ago

        Calling these things Artificial Intelligence should be a crime. It’s false advertising! Intelligence requires critical thought. They possess zero critical thought. They’re stochastic parrots, whose only skill is mimicking human language, and they can only mimic convincingly when fed billions of examples.

  • iAvicenna@lemmy.world · ↑67 · 6 months ago

    “Many of the examples we’ve seen have been uncommon queries,”

    Ah the good old “the problem is with the user not with our code” argument. The sign of a truly successful software maker.

    • voluble@lemmy.world · ↑27 · 6 months ago

      “We don’t understand. Why aren’t people simply searching for Taylor Swift?”

    • atrielienz@lemmy.world · ↑12 · 6 months ago

      This is perhaps the most ironic thing about the whole Reddit data-scraping thing and Spez selling out Reddit’s user data to LLMs. Like. We spent so much time posting nonsense. And then a bunch of people became mods to course-correct subreddits where that nonsense could be potentially fatal. And then they got rid of those mods because they protested. And now it’s bots on bots on bots posting nonsense. And they want their LLMs trained on that nonsense because reasons.

  • Maxnmy's@lemmy.world · ↑32 · 6 months ago

    Isn’t the model fundamentally flawed if it can’t appropriately present arbitrary results? It is operating at a scale where human workers cannot catch every concerning result before users see them.

    The ethical thing to do would be to discontinue this failed experiment. The way it presents results is demonstrably unsafe. It will continue to present satire and shitposts as suggested actions.

  • flop_leash_973@lemmy.world · ↑27 · 6 months ago

    If you have to constantly intervene by hand in what your automated solution is doing, then it is probably not doing a very good job, and it might be a good idea to go back to the drawing board.

    • ilinamorato@lemmy.world · ↑2 · 6 months ago

      The problem is that the internet has adapted to the Google of a year ago, which means that setting Google search back to 2009 would just give every “SEO hacker” a field day pushing spam to the top of results without any controls to stop them.

      Google built a search engine optimized for the early internet. Bad actors adapted to siphon money out of Google traffic. Google adapted to stop them. Bad actors adapted again. So began a cat-and-mouse game that ended with the pre-AI Google search we all know and hate today. Through their success, Google has destroyed the internet that was; all that’s left is whatever this is. No matter what happens next, Google search is toast.

      • Aceticon@lemmy.world · ↑3 · edited · 6 months ago

        It’s even broader than that: historically, most of the original protocols for the Internet were designed assuming people wouldn’t do bad things. For example, the original e-mail protocol (SMTP) allowed anybody to connect to an e-mail server using Telnet (a plain-text, unencrypted remote comms terminal) and type a few pretty simple commands (roughly what the sketch below shows) to send an e-mail as if they were any e-mail account on that domain, which was a great way for techies to prank their mates back when I was at Uni in the early 90s. Even now that a lot of it has been tightened up, we’re still suffering from problems like spam and phishing because of the “good faith” approach behind the design of what became one of the most used text communication protocols around.
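
        A minimal Python sketch of the kind of plain-text session being described; mail.example.com and the addresses are made-up placeholders (not from the comment), and any sensibly configured modern server will reject the spoofed sender:

        import socket

        HOST, PORT = "mail.example.com", 25   # hypothetical mail host still accepting plain SMTP on port 25

        def chat(sock, line):
            # Send one SMTP command and return the server's reply.
            sock.sendall(line.encode("ascii") + b"\r\n")
            return sock.recv(1024).decode("ascii", "replace")

        with socket.create_connection((HOST, PORT), timeout=10) as s:
            print(s.recv(1024).decode("ascii", "replace"))    # 220 greeting from the server
            print(chat(s, "HELO prankster.example"))          # identify the client
            print(chat(s, "MAIL FROM:<anyone@example.com>"))  # claim an arbitrary sender address
            print(chat(s, "RCPT TO:<mate@example.com>"))      # pick the recipient
            print(chat(s, "DATA"))                            # 354: server now expects the message body
            print(chat(s, "Subject: prank\r\n\r\nNot really from 'anyone'.\r\n."))  # lone "." line ends the body
            print(chat(s, "QUIT"))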

    • jeffw@lemmy.world (OP) · ↑8 · edited · 6 months ago

      That can’t answer most questions though. For example, I hung a door recently and had some questions that it answered (mostly) accurately. An encyclopedia can’t tell me how to hang a door.

      • Balder@lemmy.world · ↑5 · 6 months ago

        Yeah, there’s a reason this wasn’t done before generative AI. It couldn’t handle anything slightly more specific.

      • linearchaos@lemmy.world · ↑4 · 6 months ago

        Same. I was dealing with a strange piece of software; I searched configs and samples for hours and couldn’t find anything about anybody having problems with the weird language it uses. I finally gave up and asked GPT, and it explained exactly what was going wrong and gave me half a dozen answers to try to fix it.

      • btaf45@lemmy.world · ↑1 ↓1 · 6 months ago

        That can’t answer most questions though.

        It would make AI much more trustworthy. You cannot trust ChatGPT on anything related to science because it tells you stuff like the Andromeda galaxy being inside the Milky Way. The only way to fix that is to directly program basic known science into the AI.

    • ilinamorato@lemmy.world · ↑3 · 6 months ago

      Google wants that to work. That’s why the “knowledge panels” kept popping up at the top of search before now, with links to Wikipedia. They only want to answer the easy questions: definitions, math problems, things they can give you the Wikipedia answer for, Yelp reviews, “Thai Food Near Me,” etc. They don’t want to answer the hard questions, presumably because it’s harder to sell ads against more niche questions and topics. And “harder” means you have to get humans involved. Which is why they’re complaining now that users are asking questions that are “too hard for our poor widdle generative AI to handle :-(”. They don’t want us to ask hard questions.

  • JdW@lemmy.world · ↑22 ↓2 · 6 months ago

    If only there were a way to show the whole world, in one simple example, how enshittification works.

    Google execs: Hold my beer!

    • Lost_My_Mind@lemmy.world · ↑4 · 6 months ago

      Would you be tickled if Mr Yankovic strapped you down to some medical restraining table and then…tickled your feet with a feather???

      Seems like something he’d do.

  • frostmore@lemmy.world · ↑12 ↓1 · 6 months ago

    Allowing Reddit to train Google’s AI was a mistake to begin with. I mean, just look at Reddit and the shitlord that is Spez.

    There are better sources, and Reddit is not one of them.

  • trollbearpig@lemmy.world · ↑18 ↓9 · edited · 6 months ago

    I looove how the people at Google are so dumb that they forgot that anything resembling real intelligence in ChatGPT is just cheap labor in Africa (Kenya, if I remember correctly) picking good training data. So OpenAI, using an army of smart humans and lots of data, built a computer program that sometimes looks smart hahaha.

    But the dumbasses at Google really drank the Kool-Aid hahaha. They really believed that LLMs are magically smart, so they fed theirs Reddit garbage unfiltered hahahaha. Just from a PR perspective it must be a nightmare for them. I really can’t understand what they were thinking here hahaha, it’s so pathetically dumb. Just goes to show that money can’t buy intelligence, I guess.

  • gedaliyah@lemmy.world · ↑9 · 6 months ago

    At this point, it seems like Google is just a platform to message a Google employee to go google it for you.

    • ilinamorato@lemmy.world · ↑3 · 6 months ago

      Does anybody remember “Cha-Cha”? This was literally their model. A person asks a question via text message (this was like 2008), a college student Googles the answer, follows a link, copies and pastes the answer, and the college student gets paid like 20¢.

      Source: I was one of those college students. I never even got paid enough to get a payout before they went under.

  • Aceticon@lemmy.world · ↑7 · 6 months ago

    Probably one of the shitstains in Google’s C-suite after having signed a “wonderful” contract to get access to “all that great data from Reddit” forced the techies to use it against their better judgement and advice.

    It would certainly match the kind of thing I’ve seen more than once, where some MBA makes a costly decision with technical implications without consulting the actual techies first; then the thing turns out to be a massive mistake and, to save themselves, they just double down and force the techies to use it anyway.

    That said, that’s normally about some kind of tooling or framework from a 3rd-party supplier that just makes life miserable for those forced to use it, or simply doesn’t solve the problem, so the techies have to quietly use what they wanted to use all along and then make believe they’re using the useless “solution” that costs lots of $$$ in yearly licensing fees. Stuff like this, which ends up directly and painfully torpedoing, at the customer-facing end, the strategic direction the company is betting on for the next decade, is pretty unusual.

    • btaf45@lemmy.world · ↑1 · 6 months ago

      Probably one of the shitstains in Google’s C-suite after having signed a “wonderful” contract to get access to “all that great data from Reddit”

      If they hadn’t bought and then shut down what became Google Groups to sabotage Usenet, they could have gotten access to just as good a data set for free.

  • Bobmighty@lemmy.world · ↑7 · 6 months ago

    I once had a Christmas Day post blow up and become top of the day from a stupid pic I uploaded. I wonder if some of those comments or a weird version of that pic will pop up. Anyone who had similar things happen should keep an eye out. Anything that blew up probably gets a bit more weight.

    Oh God, cumbox! All of cumbox is in there. I wonder what kind of unrelated search could summon up that bit of fuzzy fun?