• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • Redacted@lemmy.world to Games@lemmy.world · The N64
    6 months ago

    Hard disagree. It was the most trailblazing console ever, with one of the strongest lineups of first/second-party games we’ve ever seen. Yes, there were some shoddy third-party ports, but you didn’t buy it for those.

    People moan about the controller but forget it was the first mainstream console controller with an analog stick, and the only real issue was the redundant left prong. I loved the feel of the Z button in shooting games, coupled with the Rumble Pak.



  • I’d appreciate it if you could share evidence to support these claims.

    Which claims? I am making no claims other than that AIs in their current form do not fully represent what most humans would define as a conscious experience of the world. They therefore do not understand concepts in the way most humans do. My evidence for this is that the hard problem of consciousness is yet to be solved and we don’t fully understand how living brains work. As stated previously, the burden of proof for anything further lies with you.

    What definitions? Cite them.

    The definition of how a conscious being experiences the world. Defining it is half the problem. There are no useful citations, as you have entered the realm of philosophical debate, which has no real answers, just arguments over definitions.

    Explain how I’m oversimplifying, don’t simply state that I’m doing it.

    I already provided a precise example of your reductionist arguing methods. Are you even taking the time to read my responses or just arguing for the sake of not being wrong?

    I’ve already provided my proof. I apologize if I missed it, but I haven’t seen your proof yet. Show me the default scientific position.

    You haven’t provided any proof whatsoever because you can’t. To convince me you’d have to provide compelling evidence of how consciousness arises within the mind and then demonstrate how that can be replicated in a neural network. If that existed it would be all over the news and the Nobel Prizes would be in the post.

    If you have evidence to support your claims, I’d be happy to consider it. However, without any, I won’t be returning to this discussion.

    Again, I don’t need evidence for my standpoint, as it’s the default scientific position and the burden of proof lies with you. It’s like asking me to prove you didn’t see a unicorn.


  • Bringing physically or mentally disabled people into the discussion does not add or prove anything; I think we both agree they understand and experience the world, as they are conscious beings.

    This has, as usual, descended into a discussion about the word “understanding”. We differ in that I actually do consider it mystical to some degree, as it is poorly defined and, to me and to others, implies some aspect of consciousness.

    Your definitions are remarkably vague and lack clear boundaries.

    That’s language for you, I’m afraid: it’s a tool for conveying concepts, and it can easily be misinterpreted. As I’ve previously alluded to, this comes down to definitions, and you can’t really argue your point without reducing the complexity of how living things experience the world.

    I’m not overstating anything (it’s difficult to overstate the complexities of the mind), but I can see how it could be interpreted that way given your propensity to oversimplify all aspects of a conscious being.

    This is an argument from incredulity, repeatedly asserting that neural networks lack “true” understanding without any explanation or evidence. This is a personal belief disguised as a logical or philosophical claim. If a neural network can reliably connect images with their meanings, even for unseen examples, it demonstrates a level of understanding on its own terms.

    The burden of proof here rests on your shoulders, and my view is certainly not just a personal belief; it’s the default scientific position. Repeating my point about the definition of “understanding”, which you failed to counter, does not make it an argument from incredulity.

    If you offer your definition of the word “understanding”, I might be able to agree, as long as it does not evoke human or even animal conscious experience. There’s literally no evidence for that, and, as we know, extraordinary claims require extraordinary evidence.


  • That last sentence you wrote exemplifies the reductionism I mentioned:

    It does, by showing it can learn associations with just limited time from a human’s perspective, it clearly experienced the world.

    Nope, that does not mean it experienced the world; that’s the reductionist view. It’s reductionist because you said it learnt from a human’s perspective, which it didn’t. A human’s perspective is much more than a camera and a microphone in a cot, and experience is much more than being able to link words to pictures.

    In general, you (and others with a similar view) reduce the complexity of words used to describe consciousness, like “understanding”, “experience” and “perspective”, until they no longer carry the weight they were intended to have. At that point you attribute them to neural networks, which are just categorisation algorithms.

    I don’t think being alive is necessarily essential for understanding, I just can’t think of any examples of non-living things that understand at present. I’d posit that there is something more we are yet to discover about consciousness and the inner workings of living brains that cannot be fully captured in the mathematics of neural networks as yet. Otherwise we’d have already solved the hard problem of consciousness.

    I’m not trying to shift the goalposts, it’s just difficult to convey concisely without writing a wall of text. Neither of the links you provided are actual evidence for your view because this isn’t really a discussion that evidence can be provided for. It’s really a philosophical one about the nature of understanding.



  • Whilst everything you linked is great research which demonstrates the vast capabilities of LLMs, none of it demonstrates understanding as most humans know it.

    This argument always boils down to one’s definition of the word “understanding”. For me, that word implies a degree of consciousness; for others, apparently not.

    To quote GPT-4:

    LLMs do not truly understand the meaning, context, or implications of the language they generate or process. They are more like sophisticated parrots that mimic human language, rather than intelligent agents that comprehend and communicate with humans. LLMs are impressive and useful tools, but they are not substitutes for human understanding.




  • You posted the article rather than the research paper and had every chance of altering the headline before you posted it but didn’t.

    You questioned why you were downvoted so I offered an explanation.

    Your attempts to form your own arguments often boil down to “no you”.

    So, as I’ve said all along, we just differ on our definitions of the term “understanding” and have devolved into a semantic exchange. You are now using a bee analogy, but for a start a bee is a living thing, not a mathematical model, which is another indication that you don’t understand nuance. Secondly, again, it’s about definitions. Bees don’t understand the number zero as a point on the number line, but I’d agree they understand the concept of nothing, as in “There is no food.”

    As you can clearly see from the other comments, most people interpret the word “understanding” differently from you and the AI proponents. So I infer you are either not a native English speaker or are trying very hard to shoehorn your oversimplified definition in to support your worldview. I’m not sure which, but your reductionist way of arguing is ridiculous, as others have pointed out, and full of logical fallacies which you don’t seem to comprehend either.

    Regarding what you said about Pythag, I agree and would expect it to outperform statistical analysis. That is because it has arrived at and encoded the theorem within its graph, but I and many others do not define this as knowledge or understanding, because those words carry other connotations for the majority of humans. It wouldn’t, for instance, be able to tell you what a triangle is using that model alone.
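
    To be concrete (a toy sketch of my own, not anything from the research you linked): a tiny network can encode the Pythagorean relation in its weights without any symbolic notion of a triangle.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Leg lengths (a, b), scaled into (0, 1) to keep the demo stable;
    # the target is the hypotenuse c = sqrt(a^2 + b^2).
    X = rng.uniform(0.0, 1.0, size=(10_000, 2))
    y = np.sqrt((X ** 2).sum(axis=1, keepdims=True))

    # One hidden layer; these weights are the "graph" that ends up
    # encoding the relation.
    W1 = rng.normal(0.0, 0.5, (2, 64)); b1 = np.zeros(64)
    W2 = rng.normal(0.0, 0.5, (64, 1)); b2 = np.zeros(1)
    lr = 0.05

    for _ in range(5_000):
        idx = rng.integers(0, len(X), 256)   # mini-batch
        x, t = X[idx], y[idx]
        h = np.tanh(x @ W1 + b1)             # hidden activations
        p = h @ W2 + b2                      # predicted hypotenuse
        g = 2.0 * (p - t) / len(x)           # gradient of mean squared error
        W2 -= lr * h.T @ g;  b2 -= lr * g.sum(axis=0)
        gh = (g @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
        W1 -= lr * x.T @ gh; b1 -= lr * gh.sum(axis=0)

    # A scaled 3-4-5 triangle: prints approximately 0.5.
    print(np.tanh(np.array([[0.3, 0.4]]) @ W1 + b1) @ W2 + b2)
    ```

    Everything it “knows” lives in W1 and W2 as curve fitting; there is no triangle anywhere in the model.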

    I spot another appeal to authority… “Hinton said so-and-so…” It matters not. If Hinton said the sky was green you’d believe it, as you barely think for yourself when others you consider more knowledgeable have stated something, which may or may not be true. That might explain why you have such an affinity for AI…



  • There you go arguing in bad faith again by putting words in my mouth and reducing the nuance of what was said.

    You do know dissertations are articles and don’t constitute any form of rigorous proof in and of themselves? It seems you have a very rudimentary understanding of English, which might explain why you keep struggling with semantics. If that is so, I apologise, because definitions are difficult when it comes to language, let alone ESL.

    I didn’t dispute that NNs can arrive at a theorem. I dispute whether they truly understand the theorem they have encoded in their graphs, as you claim.

    This is a philosophical/semantic debate as to what “understanding” actually is, because there’s not really any evidence that they are any more than clever pattern-recognition algorithms driven by mathematics.


  • Seems to me you are attempting to understand machine learning mathematics through articles.

    That quote is not a retort to anything I said.

    Look up Category Theory. It demonstrates how the laws of mathematics can be derived by forming logical categories. From that you should be able to imagine how a neural network could perform a similar task within its structure.
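
    For concreteness, the axioms themselves are tiny (standard textbook definitions, nothing specific to this thread):

    ```latex
    % Objects and morphisms (arrows), with a composition rule:
    %   f : A -> B and g : B -> C compose to g . f : A -> C
    \[
      f : A \to B,\quad g : B \to C \;\Longrightarrow\; g \circ f : A \to C
    \]
    % Every object has an identity, and composition is associative:
    \[
      f \circ \mathrm{id}_A = f = \mathrm{id}_B \circ f, \qquad
      h \circ (g \circ f) = (h \circ g) \circ f
    \]
    ```

    Everything else is built by composing arrows; the analogy is that weighted connections in a network can compose in the same mechanical way, with no awareness of what they compute.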

    It is not understanding, just encoding to arrive at correct results.



  • So somewhere in there I’d expect nodes connected so as to represent the Othello grid. They wouldn’t necessarily be arranged in a grid, just topologically the same graph.

    Then I’d expect millions of other weighted connections to represent the moves within the grid, including some weightings to prevent illegal moves. It’s all based on mathematics and clever statistical analysis of the training data. If you want to refer to things as tokens then be my guest, but it’s all graphs.
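
    That sort of claim is checkable in principle. Here’s a toy sketch of the probing idea (the data is synthetic, standing in for real hidden states, so treat it as illustrative only):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d_model, cells = 5_000, 128, 64   # samples, hidden width, 8x8 board

    # Synthetic stand-in for a model's hidden states: each activation
    # vector is a fixed linear mixture of the flattened board plus noise.
    board = rng.integers(0, 3, size=(n, cells))   # 0 empty, 1 black, 2 white
    mix = rng.normal(size=(cells, d_model))
    acts = board @ mix + 0.1 * rng.normal(size=(n, d_model))

    # Linear probe: least-squares map from activations back to the board.
    # If the grid is encoded in the activations, the probe will find it.
    probe, *_ = np.linalg.lstsq(acts, board, rcond=None)
    recovered = np.clip(np.rint(acts @ probe), 0, 2)

    print("cells recovered:", (recovered == board).mean())   # close to 1.0
    ```

    If a cheap linear map can read the board back out of the activations, then the grid is encoded in the weights, which is all I’m claiming: representation, not understanding.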

    If you think I’m getting closer to your point can you just explain it properly? I don’t understand what you think a neural network model is or what you are trying to teach me with Pythag.


  • They operate by weighting connections between patterns they identify in their training data. They then use statistics to predict outcomes.
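
    A deliberately crude illustration of “statistics to predict outcomes” (a bigram counter rather than a neural network, but the same spirit of weighting observed patterns):

    ```python
    from collections import Counter, defaultdict

    # Count which word follows which: the "patterns" are co-occurrences,
    # and the "statistics" are the normalised counts used to predict.
    corpus = "the cat sat on the mat the cat ate".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict(word):
        following = counts[word]
        total = sum(following.values())
        return {w: c / total for w, c in following.items()}

    print(predict("the"))   # {'cat': 0.67, 'mat': 0.33} (roughly)
    ```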

    I am not particularly surprised that the Othello models built up an internal model of the game, as their training data were grid moves. Without looking into it, I’d assume the most efficient way of storing that information was in a grid format, with specific nodes weighted towards the successful moves. To me that’s less impressive than the LLMs.