Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job matching facial expressions—the tiny movements that can speak for us without words.
But this technological progress also signals a much larger social and cultural shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences.
“I think we might just have to say goodbye to finding out about the truth in a quick way,” says Sandra Wachter, a professor at the Oxford Internet Institute, who researches the legal and ethical implications of AI. “The idea that you can just quickly Google something and know what’s fact and what’s fiction—I don’t think it works like that anymore.”
A reminder for anyone reading this: you are in a universe that behaves at cosmic scales as if it were continuous, singularities and whatnot, and behaves even at small scales as if it were continuous, but as soon as it is interacted with it switches to behaving as if it were discrete.
If the persistent information about those interactions is erased, it goes back to behaving continuously.
If our universe really were continuous even at the smallest scales, it couldn’t be a simulated one, at least not if free will exists, as it would take an infinite amount of information to track how you would interact with it and change it.
But by switching to discrete units when interacted with, it keeps state changes finite, even if they seem unthinkably complex and detailed to us.
We use a very similar paradigm in massive open worlds like No Man’s Sky, where an algorithm procedurally generates a universe with billions of planets that can each be visited, but then converts them to discrete voxels to track how you interact with and change things.
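Roughly the idea, as a toy Python sketch (the seed, function names, and block values here are made up for illustration, not No Man’s Sky’s actual code): the whole world is recomputable on demand from a seed, and only your edits are stored as discrete state.

    import hashlib

    WORLD_SEED = 42   # hypothetical global seed
    edits = {}        # sparse store of player changes: (x, y, z) -> block value

    def generated_voxel(x, y, z):
        # The same coordinates plus the same seed always hash to the same value,
        # so the unvisited universe needs no storage at all.
        h = hashlib.sha256(f"{WORLD_SEED}:{x}:{y}:{z}".encode()).digest()
        return h[0] % 4   # e.g. 0 = air, 1 = rock, 2 = ore, 3 = water

    def voxel(x, y, z):
        # Player modifications override the procedural value; only the
        # interactions are persisted, as finite, discrete state changes.
        return edits.get((x, y, z), generated_voxel(x, y, z))

    def mine(x, y, z):
        edits[(x, y, z)] = 0   # carve the block out

    mine(10, 5, 3)
    print(voxel(10, 5, 3), voxel(10, 5, 4))   # 0, plus whatever the seed dictates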
So you are currently reading an article about how the emerging tech being built is creating increasingly realistic digital copies of humans in virtual spaces, while thinking of yourself as a human inside a universe that behaves in a way that could not be simulated, but then spontaneously switches to a way that can be simulated when interacted with.
I really think people are going to need to prepare for serious adjustments to the way they understand their place in the universe, adjustments that are going to become increasingly hard to ignore as the next few years go by and tech trends like this continue.
Your comment reads like rambling, unless you’re just so much smarter than everybody else. I couldn’t make out many cohesive thoughts, so I’m merely guessing here.
First off, our universe doesn’t change the moment we touch something; otherwise any interaction would create a parallel universe, which is itself fiction and unobservable.
Then you talk about removing persistent information. Why would you do that, and how would you do that? What is the point of even wanting or trying to do that? An AI robot talking and moving isn’t that different from when we had non-AI, case-based reasoning. Even the most random noise AI can produce is based on something. It’s a sum of values. We didn’t and don’t generate a computerized random number any differently.
You can’t prove that our universe is or isn’t simulated; simplified, the simulation would only need to simulate your life in your head, nothing more. Actually, what your eyes see and what your brain receives is already a form of simulation, as it is not exact.
No Man’s Sky is using generic if/else switch cases to generate randomness. Otherwise you’d get donut planets, for instance, or a cat as a planet, but you never will in infinite generations. Just because there’s mathematical randomness from adding noise doesn’t change much about its constraints. Even current AI is deterministic, but the effort to prove that isn’t realistically approachable. I personally believe even a human brain would be provably deterministic, if you could look into the finest details and reproduce it. But we can’t reverse time, so that’s going to be impossible.
However, we can only observe our own current universe. So how would AI change that now? Also, our universe is changing even when you yourself interact with nothing.
It would help if you were more precise about what you’re implying. What change in anyone’s perspective? It doesn’t seem any different from the past, unless you mean the tech-illiterate, like how people reacted on seeing a video/photo of themselves for the first time. It’s not like AI can read your mind and interact with things the same way you would, nor even predict or do the same as you.
AI is just guessing, and that’s often good enough, but it can be totally wrong (for now) at deterministic things with only one solution. It can summarize text but will fail at simple math calculations, because it’s not calculating but guessing by probability, within its realm of constraints.
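A rough Python sketch of that “guessing by probability” point (the candidate answers and their scores are invented for illustration): sampling from a distribution usually lands on the right answer, but nothing forces it to, whereas actually calculating does.

    import random

    # Made-up scores a model might assign to continuations of "2 + 2 =":
    candidates = {"4": 0.62, "5": 0.21, "3": 0.17}
    tokens = list(candidates.keys())
    weights = list(candidates.values())

    print(random.choices(tokens, weights=weights, k=1)[0])  # usually "4", but not guaranteed
    print(2 + 2)                                            # calculating is always exact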
https://en.m.wikipedia.org/wiki/Double-slit_experiment
https://en.m.wikipedia.org/wiki/Quantum_eraser_experiment
If/else statements can’t generate randomness. They can alter behavior based on random input, but they cannot generate randomness in and of themselves.
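A tiny Python example of that distinction (the function is hypothetical, just for illustration): the branch only shapes the output; any randomness has to come in from outside it.

    import random

    def shape(value):
        # Pure if/else: the same input always produces the same output.
        if value < 0.5:
            return "rocky planet"
        return "gas giant"

    print(shape(0.3), shape(0.3))   # identical every run: deterministic
    print(shape(random.random()))   # varies only because the input does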
No, it’s stochastic.