  • The same people who didn’t understand that Google’s ranking algorithm promotes sites regardless of the accuracy of their content (that’s exactly what SEO exploits), so they would trust whatever showed up on the first page.

    If people don’t understand the tools they are using and don’t double-check information from single sources, I think it’s kinda on them. I have a dietician friend, and I usually check back with him after doing my “Google research” for my diets… so much misinformation, even without an AI overview. Search engines are just best-effort sources of information. Anyone using Google for anything of actual importance is using the wrong tool; it isn’t a scholarly or research search engine.


  • It really depends on the type of information you are looking for. Anyone who understands how LLMs work will know when to expect a good overview.

    I usually see the results as quick summaries from an untrusted source. Even if they aren’t exact, they can help me get perspective. Then I know what information to verify if something relevant was pointed out in the summary.

    Today I searched something like “Are owls endangered?”. I knew I was about to get a great overview because it’s a simple question. After getting the summary, I just went into some pages and confirmed what the summary said. The summary helped me know what to look for even if I didn’t trust it.

    It has improved my search experience… But I do understand that people would prefer it to be 100% accurate because it is a search engine. If you refuse to tolerate inaccurate results or you feel your search experience is worse, you can just disable it. Nobody is forcing you to keep it.


  • You can have human-level conversations with a tool. I don’t get your point. Just because it doesn’t have full human intelligence doesn’t mean it isn’t good at conversation. It is better at conversation than most humans. It is obviously not smarter than a human, but it is more eloquent and has more general knowledge than the average human.

    We are the first humans who can have a human level conversation with something that is not a human. What do you think human conversations look like? They are not very deep in general.

    Pretty funny you think you convinced me it is a tool when I just showed you like 8 different ways I use it as a tool.

    I literally started this thing by saying that even if it isn’t perfect, it’s pretty good and a crazy achievement. You just keep saying it is worthless shit, but that’s not true. Just because the tool isn’t perfect doesn’t mean it’s worthless or that it isn’t an achievement.

    But whatever man… You just want to be right, so take your imaginary trophy and walk away.



  • I’m already using Copilot every single day. I love it. It helps me save so much time writing boilerplate code that can be easily guessed by the model.

    It even helps me understand tools faster than the documentation does. I just type a comment and it autocompletes a piece of code that is probably wrong, but that probably contains the APIs I need to learn about. So I go learn those specific APIs, fix the details of the code, and move on.
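
    To give an idea, the flow looks something like this (an illustrative stdlib example, not a real Copilot transcript; the comment is what I type, the rest is the kind of completion I get back):

    ```python
    import json
    from pathlib import Path

    # merge every JSON file under ./config into a single dict
    merged = {}
    for path in Path("config").glob("*.json"):
        with path.open() as f:
            merged.update(json.load(f))
    ```

    Even when the suggestion is wrong, the names it guesses (Path.glob, json.load) tell me exactly which docs to open.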

    I use ChatGPT to help me improve my private blog posts; I’m not a native English speaker, and it makes the text feel more fluent.

    We trained a model on our company’s documentation, so it automatically references the docs when someone asks it questions.
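
    For anyone curious about the shape of it: one common way to build this kind of thing is retrieval-augmented generation. A minimal sketch, assuming a hypothetical search_docs retriever and the official openai Python client (our real setup is more involved):

    ```python
    from openai import OpenAI  # official client, openai>=1.0

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def answer_from_docs(question: str, search_docs) -> str:
        # search_docs is a hypothetical retriever: question -> doc snippets
        context = "\n\n".join(search_docs(question))
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Answer using only the documentation below, "
                            "and point to the section you used.\n\n" + context},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content
    ```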

    I’m using the AI from Jira to automatically generate queries and find what I want as fast as possible. I used to hate searching for stuff in Jira because I never remembered the DSL (JQL).

    I have GPT as a command-line tool because I constantly forget commands, and it helps me remember without having to read the help or open Google.
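
    A toy version of that wrapper, reusing the openai client (the prompt and model are just my choices; I review every command before running it):

    ```python
    #!/usr/bin/env python3
    # gpt-cmd: ask for a shell command in plain English, e.g.
    #   gpt-cmd "tar a directory but exclude node_modules"
    import sys
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system",
             "content": "Reply with a single shell command and nothing else."},
            {"role": "user", "content": " ".join(sys.argv[1:])},
        ],
    )
    print(resp.choices[0].message.content)  # printed, never auto-executed
    ```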

    We have pipelines that read exceptions that would usually be confusing for developers, and GPT automatically generates an explanation of the error in the logs.
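
    Conceptually it’s just this (a toy sketch with a hypothetical helper name; the real pipeline reads tracebacks out of the log stream rather than live exceptions):

    ```python
    import traceback
    from openai import OpenAI

    client = OpenAI()

    def explain_exception(exc: BaseException) -> str:
        # passing the exception object directly needs Python 3.10+
        tb = "".join(traceback.format_exception(exc))
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=[
                {"role": "system",
                 "content": "Explain this Python traceback to a developer in "
                            "two sentences, including the most likely cause."},
                {"role": "user", "content": tb},
            ],
        )
        return resp.choices[0].message.content
    ```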

    I literally ask ChatGPT questions about other areas of technology that I don’t understand. My questions aren’t advanced, so I usually get the right answers and can keep reading about the topic. ChatGPT is literally teaching me how to build front ends, something I hated my whole career but that now feels like a breeze.

    Maybe you should start actually figuring out how to use the tool instead of complaining about it in this echo chamber.




  • Does it matter if it is actually experiencing things? What matters is what you experience while talking to it, not what it experiences while talking to you. When you play videogames, do you actually think the NPCs are experiencing you?

    It’s pretty insane how negative people are. We did something so extraordinary. Imagine if someone told the engineers who built the space shuttle “but it isn’t teleportation”. Maybe stop being so judgemental of what others have achieved.

    “Uhh actually, this isn’t a fully simulated conscious being with a fully formed organic body that resembles my biological structure on a molecular level… Get this shit out of here”






  • You keep asking questions like “can a model build a house” but keep ignoring questions like “can an octopus build a house”. Then you ask “can a model learn in seconds how to escape from a complex enclosure” while ignoring “can a newborn human baby do that?”

    Can an octopus write a poem? Can a baby write an essay? Can an adult human speak every human language, including fictional languages?

    Just because it isn’t as intelligent as a human doesn’t mean this isn’t some type of intelligence.

    Go and check what we call AI in videogames. Do you think that’s a simulated human? Go see what we’ve been calling AI in chess. Is that a simulated human being playing chess? No.

    We’ve been calling things that are waaaaaay dumber than GPTs “artificial intelligence” for decades. Even in academia. Suddenly a group of people decided “artificial intelligence must be equal to human intelligence”. Nope.

    Intelligence doesn’t need to be the same type as human intelligence.



  • Things we know so far:

    • Humans can train LLMs with new data, which means they can acquire knowledge.

    • LLMs have been proven to apply knowledge; they are acing exams that most humans wouldn’t dream of even understanding.

    • We know multi-modal models are possible, which means these models can acquire skills.

    • We already saw that these skills can be applied. If it wasn’t possible to apply their outputs, we wouldn’t use them.

    • We have seen models learn and generate strategies that humans didn’t even conceive of. We’ve seen them solve problems that were unsolvable by human intelligence.

    … What’s missing from that definition of intelligence? The only thing missing is our willingness to create a system that can train and update itself, which is possible.


  • What is intelligence?

    Even if we don’t know what it is with certainty, it’s valid to say that something isn’t intelligence. For example, a rock isn’t intelligent. I think everyone would agree with that.

    Despite that, LLMs are starting to blur the lines, making us wonder whether what matters about intelligence is really the process or the result.

    An LLM will give you much better results in many of the areas currently used to evaluate human intelligence.

    For me, humans are a black box. I give them inputs and they give me outputs. They receive inputs from reality and they generate outputs. I’m not aware of the “intelligent” process of other humans. How can I tell they are intelligent if the only perception I have are their inputs and outputs? Maybe all we care about are the outputs and not the process.

    If there were an LLM capable of simulating a close friend of yours perfectly, would you say the LLM is not intelligent? Would it matter?