• maniel@lemmy.ml · 10 months ago

    You don’t need AI to do that. Seriously, it’s such a buzzword when a relatively simple algorithm would suffice. Don’t tell me it’s harder than the double pendulums or ball-bouncing contraptions tech students have been building for a decade or more.

    • CrayonRosary@lemmy.world · 10 months ago

      Not needing AI isn’t the point. The point is that AI can do it, and it doesn’t require a programmer to design and debug a bespoke algorithm for the task. It would take a human a lot longer than six hours to perfect an algorithm to do this.

  • surewhynotlem@lemmy.world · 10 months ago

    Oh yeah? Can it tilt the board all the way to one corner, then pop the other corner and send the ball flying right to the end?

    No, it’s amateur at best.

    • Blooper@lemmy.world · 10 months ago

      That’s actually addressed in the article. When they found the AI trying to cheat, they had to program it not to.

  • INeedMana@lemmy.world · 10 months ago

    It’s cool, but my question is: did it learn to win the game in general terms, or only this one example? (I didn’t see this addressed in the article or the video, but I might have missed it.) I mean, if the layout of the board were changed, would it still solve it?

    • just_another_person@lemmy.world · 10 months ago

      They don’t discuss it here, but it’s most likely a reinforcement learning model that compares successive generations of learned behavior to decide whether it’s improving or not.

      It would know that the ball going in the hole is “bad”, and then try to avoid that happening. Each move that is “good” is kept in a list of moves it should perform in the next generation of its plan to avoid the “bad” things. Loop -> fail -> update logic -> retry (roughly like the sketch below). After 6 hours, it had mapped a complete list of “good” moves to achieve its final outcome.

      To answer your question: no, it would not be able to use what it learned here on a different board layout. It’s building reactions to events on this one board, bound by the rules. You could use the same ruleset with another board, but it would need to learn it all over again, just as a human would.

      The interesting thing about these models is less whether they will work (it’s assumed they eventually will, through trial and error) than how efficiently they work. The number of generational cycles and retries is usually the benchmark with reinforcement learning, but they don’t discuss that data here either.
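
      For the curious, here’s a minimal sketch of that generational loop, using tabular Q-learning on a toy grid. Everything in it (the board, the rewards, the hyperparameters) is invented for illustration; the article doesn’t say which algorithm was actually used.

```python
# Hypothetical sketch of the reinforcement loop described above:
# tabular Q-learning on a tiny grid "maze". All values are invented.
import random

GRID = ["S..H",
        "..H.",
        ".H..",
        "...G"]  # S = start, G = goal, H = hole ("bad"), . = free
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply a move; walls keep the ball in place, holes end the run."""
    r, c = state
    nr, nc = r + action[0], c + action[1]
    if not (0 <= nr < 4 and 0 <= nc < 4):  # wall: no movement
        nr, nc = r, c
    cell = GRID[nr][nc]
    if cell == "H":
        return (nr, nc), -10.0, True   # "bad": ball fell in a hole
    if cell == "G":
        return (nr, nc), 10.0, True    # "good": reached the goal
    return (nr, nc), -0.1, False       # small cost per extra move

Q = {}  # (state, action) -> learned value of that move
def q(s, a):
    return Q.get((s, a), 0.0)

alpha, gamma, eps = 0.5, 0.9, 0.1
for generation in range(2000):         # the loop -> fail -> retry cycle
    s, done = (0, 0), False
    while not done:
        # explore occasionally; otherwise replay the best-known move
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q(s, act))
        s2, reward, done = step(s, a)
        # keep what worked: nudge the value of (s, a) toward the outcome
        best_next = 0.0 if done else max(q(s2, act) for act in ACTIONS)
        Q[(s, a)] = q(s, a) + alpha * (reward + gamma * best_next - q(s, a))
        s = s2
```

      Note that Q is keyed on positions in this specific GRID: change the layout and the whole table has to be relearned, which is exactly the limitation raised in the question above.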

      • INeedMana@lemmy.world · 10 months ago

        Yes, but that’s kind of my point

        We see it learn something with insane precision, but most often that’s practically an effect of over-training. It would probably take less time to learn another layout, but it isn’t learning the general rules (can’t go through walls, holes are bad, we want to get to X); it learns the specific layout. Each time the layout changes, it has to re-learn it.

        It’s impressive, and it enables automation in a lot of areas, but in the end it’s still just machine learning: adapting weights to a specific scenario.

    • indomara@lemmy.world · 10 months ago

      It did learn to use shortcuts to skip parts of the maze, and had to be told not to. Super interesting!

      • INeedMana@lemmy.world · 10 months ago

        Yes, but only because one generation stumbled onto some random, specific motion that scored better, not because it worked out that a skip should be possible.

  • dangblingus@lemmy.dbzer0.com · 10 months ago

    Not sure which is more interesting: an AI teaching itself the PID instructions needed to deftly move the ball around, or a human programming those PID instructions directly. Sounds like a lot of electricity was used doing it the first way.
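
    For reference, a bare-bones PID loop of the kind mentioned above might look like the sketch below. The gains and the toy ball physics are invented for illustration; a real maze controller would run one such loop per tilt axis.

```python
# Hypothetical single-axis PID controller for the board tilt.
# Gains and the toy physics below are made up for illustration.
def pid_controller(kp, ki, kd, dt):
    integral, prev_error = 0.0, 0.0
    def update(setpoint, measured):
        nonlocal integral, prev_error
        error = setpoint - measured
        integral += error * dt                  # I: accumulated past error
        derivative = (error - prev_error) / dt  # D: how fast error changes
        prev_error = error
        return kp * error + ki * integral + kd * derivative  # P + I + D
    return update

# Toy simulation: steer the ball's x-position toward a waypoint at x = 1.0,
# treating the tilt command as an acceleration on the ball.
dt = 0.01
control = pid_controller(kp=2.0, ki=0.1, kd=0.5, dt=dt)
x, v = 0.0, 0.0
for _ in range(1000):
    tilt = control(setpoint=1.0, measured=x)
    v += tilt * dt
    x += v * dt
print(round(x, 2))  # should end up close to 1.0
```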