So they’ll stop injecting ads in the middle of videos at the worst possible times right?
…
So they’ll stop injecting ads… Right?
Diode emitting lights?
It’s almost as if the highest quality text to train AI on isn’t conservative bullshit.
Honestly, I’m not against ads, I understand that a site with free articles needs to pay the bills somehow. The reason I use ad block is that online ads have become so intrusive that it makes websites unusable, and the way they track you is way over the line. If ads didn’t completely destroy the experience of reading a website and were reasonable in the data they collected I probably wouldn’t bother with ad block.
It’s incredibly obvious when you call the current generation of AI by its full name: generative AI. Creating data is what it does; it’s right there in the name.
Open your eyes, the world isn’t even real. Once you realize that you can take whatever form you want.
As bad as establishment candidates tend to be, Trump is far, far worse. The problem with Trump supporters is that they boil everything down to “X bad, so not-X good!” Trump was an outsider, which is why they supported him, but being an outsider doesn’t automatically make someone good, and in his case it made him far worse.
How is tiktok anything like WikiLeaks or lavabit? How is Lemmy like them either?
Oh, I interpreted “recommended” as meaning tailored to your preferences. What they actually mean is that YouTube won’t show popular videos anymore. Still, that doesn’t necessarily seem like the wrong choice to me, since I’m personally not interested in the kinds of videos that are the most popular.
Recommending things when you’re not logged in means they’re tracking you without your consent. Why would anyone want that?
Same. I hate the unintuitive keyboard shortcuts, the nonsensical drag-and-drop-everything UI, and their ridiculously overcomplicated development system.
It’s great for brainstorming and getting started on a problem, but you need to keep that in mind the whole time and verify its output. I’ve found it really good for troubleshooting: it’s wrong a lot of the time, but it does lead you in the right direction, which is very helpful for problems where it’s hard to know where to even start.
It’s super accurate all the way down to the Coca-Cola
Sounds great, how do we enforce it?
I think worldview is all about simulation and maintaining state. It’s not really about making associations, but about maintaining some kind of up-to-date, imaginary state representing the world that you can simulate on top of. I think it needs to be a very dynamic thing, which is a pretty different paradigm from the ML training methodology.
Yes, I view these things as foundational to free will and imagination, but I’m trying to think at a lower level than that. Simulation facilitates imagination, and reasoning facilitates motivation, which facilitates free will.
Are those things necessary for intelligence? That depends on your definition, and everyone has a different one, ranging from reciting information to full-blown consciousness. Personally, I don’t really care about coming up with a rigid definition; it’s just a word, and I care more about the attributes. I think LLMs are a good knowledge engine, and knowledge is a component of intelligence.
LLMs build on top of the context you provide them and as a result are very impressionable and agreeable. It’s something to keep in mind when trying to get good answers out of them: you need to word questions carefully to avoid biasing the response.
It can easily create a false sense of confidence in people who are just being told what they want to hear but interpret that as validation, which was already a bad enough problem in the pre-LLM world.
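To make the wording point concrete, here’s a tiny sketch (the `ask` helper is a placeholder, not any real API; wire it up to whatever client you use):

```python
# Sketch of how question wording biases an LLM's answer.

def ask(prompt: str) -> str:
    """Placeholder for a real LLM call; plug in your own client here."""
    raise NotImplementedError

# Leading: the premise ("this is slow") is baked into the question, so an
# agreeable model will hunt for slowness whether or not it exists.
leading = "Why is this function so slow?\n\ndef f(xs): return sorted(xs)"

# Neutral: no embedded conclusion, so the model has to assess it first.
neutral = ("How does this function perform, and is it likely to be a "
           "bottleneck?\n\ndef f(xs): return sorted(xs)")
```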
As a developer building on top of LLMs, my advice is to learn software architecture. There’s a shit ton of work that needs to be done to get this unpredictable, non-deterministic tech to work safely and accurately. This is like saying “get out of tech” right before the Internet boom. The hardest part of programming isn’t writing low-level functions; it’s architecting complex systems while keeping them robust, maintainable, and expandable. By the time an AI can do that, all office jobs are obsolete; AIs will be able to replace CEOs before they can replace system architects. Programmers won’t go away. They’ll just have less busywork and will instead need to work at a higher level, but the complexity of those higher-level requirements is about to explode, and we’ll need LLMs to handle the simpler tasks under our oversight to make sure everything gets integrated correctly.
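A rough sketch of the kind of defensive layer I mean (the function names and retry scheme here are my own invention, not any particular library): never trust raw model output, validate it and retry.

```python
import json
from typing import Callable

def ask_for_json(llm: Callable[[str], str], prompt: str,
                 required_keys: set, retries: int = 3) -> dict:
    """Call an LLM and only accept valid JSON objects containing the
    expected keys; on failure, feed the error back and retry."""
    attempt = prompt
    for _ in range(retries):
        raw = llm(attempt)
        try:
            data = json.loads(raw)
            if isinstance(data, dict) and required_keys <= data.keys():
                return data
            problem = "wrong shape or missing keys"
        except json.JSONDecodeError as e:
            problem = f"invalid JSON: {e}"
        # Feed the failure back so the next attempt can self-correct.
        attempt = (f"{prompt}\n\nYour previous reply failed validation "
                   f"({problem}). Respond with only a JSON object.")
    raise ValueError("model never produced acceptable output")
```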
I also recommend still learning the fundamentals, just maybe not as deeply as you used to need to. Knowing how things work under the hood still helps immensely with debugging and with designing better, more efficient architectures, even at a high level.
I will say, I do know developers who specialized in algorithms and are feeling pretty lost right now. They’re perfectly capable of adapting their skills to the new paradigm; their issue is more the personal one of deciding what they want to do next, since algorithms were what they were passionate about.
A worldview is your current representational model of the world around you. For example, you know you’re a human on Earth, in a physical universe with a set of rules; you have a mental representation of your body and its capabilities, your location, and the physicality of the things in that location. It can include abstract things too, like your personality, your relationships, and your understanding of what’s possible in the world.
Basically, you live in reality, but you need a way to store a representation of that reality in your mind in order to be able to interact with and understand that reality.
The simulation part is your ability to imagine manipulating that reality to achieve a goal. If you break that down, you’re trying to convert reality from your perceived current real state A to an imagined desired state B. Reasoning is coming up with a plan to convert the worldview from state A to state B, step by step. Say you want to brush your teeth: you want to convert your worldview from you having dirty teeth to you having clean teeth, and to do that you reason that you need to follow a few steps, like moving your body to the bathroom, retrieving tools (toothbrush and toothpaste), and applying mechanical action to your teeth to clean them. You created a step-by-step plan to change the state of your worldview to a new desired state you came up with. It doesn’t need to be physical, either; it could be an abstract goal, like calculating a tip for a bill, or a grand one, like going to college or creating a mathematical proof.
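Here’s a toy sketch of that idea (entirely my own illustration, not anyone’s actual cognitive model): represent the worldview as a set of facts and search for a sequence of actions that turns state A into state B.

```python
from collections import deque

# Worldview = a set of facts; actions rewrite that state.
# Each action is (preconditions, facts added, facts removed).
ACTIONS = {
    "go_to_bathroom": ({"in_bedroom"}, {"in_bathroom"}, {"in_bedroom"}),
    "grab_toothbrush": ({"in_bathroom"}, {"has_toothbrush"}, set()),
    "brush_teeth": ({"in_bathroom", "has_toothbrush", "dirty_teeth"},
                    {"clean_teeth"}, {"dirty_teeth"}),
}

def plan(state_a: frozenset, goal: set) -> list[str] | None:
    """Breadth-first search for an action sequence from state A to any
    state containing the goal facts (state B)."""
    queue = deque([(state_a, [])])
    seen = {state_a}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, rem) in ACTIONS.items():
            if pre <= state:
                nxt = frozenset((state - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

print(plan(frozenset({"in_bedroom", "dirty_teeth"}), {"clean_teeth"}))
# -> ['go_to_bathroom', 'grab_toothbrush', 'brush_teeth']
```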
LLMs don’t have a representational model of the world. They don’t have a working memory or a world simulation to use as a scratchpad for testing out reasoning. They just take a sequence of words and retrieve the next word that is probabilistically and relationally likely to be a good next word, based on their training data.
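A toy sketch of what that “pick a likely next word” step looks like (made-up numbers, nothing like a real model’s vocabulary or scale):

```python
import math, random

# Pretend the model just saw "the cat sat on the" and scored candidates.
logits = {"mat": 4.0, "floor": 2.5, "moon": 0.5}

# Softmax turns the scores into a probability distribution...
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

# ...and the next word is just sampled from it. No worldview, no plan.
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_word)
```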
They could be a really important cortex that assists in developing a worldview model, but in their current granular state as single-task AI models, they cannot do reasoning on their own.
Knowledge retrieval is an important component that assists in reasoning, though, so it can still play a very important role.
A worldview simulation it can use as a scratchpad for reasoning. I view reasoning as a set of simulated actions that convert a worldview from state A to state B.
It depends on how you define intelligence, though. Normally people define it as human-like, and I think there are three primary subtypes of intelligence needed for cognizance: reasoning, awareness, and knowledge. The current generation is figuring out the knowledge type, but it needs to be combined with the other two to be complete.
I tried out ollama. It was trivially easy to set up.
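For anyone curious, this is roughly all it takes once the ollama server is running (assuming the default local port and that you’ve already pulled a model, here llama3):

```python
import requests

# ollama serves a local HTTP API on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])
```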
Stable Diffusion is a bit more work, but any power user should be able to figure it out.
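If you go the Python route, the diffusers library does most of the heavy lifting. A minimal sketch, assuming a CUDA GPU and this particular checkpoint ID (which may have moved on Hugging Face since):

```python
import torch
from diffusers import StableDiffusionPipeline

# Downloads the weights on first run, then generates an image from a prompt.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```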