The Darvaza gas crater is a hole in Turkmenistan that’s leaking natural gas and is on fire. I’m quite sure it doesn’t have a “poet laureate”; it’s literally just a hole in the ground.
But even if it were some metropolis, yeah, he’d be just some guy.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
You can get whatever result you want if you’re able to define what “better” means.
Why publish books of it, then?
The whole point of poetry is that it’s an original expression of another human.
Who are you to decide what the “point” of poetry is?
Maybe the point of poetry is to make the reader feel something. If AI-generated poetry can do that just as well as human-generated poetry, then it’s just as good when judged in that manner.
I do get the sense sometimes that the more extreme anti-AI screeds I’ve come across have the feel of narcissistic rage about them. The recognition of AI art threatens things that we’ve told ourselves are “special” about us.
Indeed, there are whole categories of art, such as “found art” or the abstract stuff that involves throwing splats of paint at things, that can’t really convey the intent of the artist because the artist wasn’t involved in specifying how it looked in the first place. The artist is more like the “first viewer” of those particular pieces: they do or find a thing and then decide “that means something” after the fact.
It’s entirely possible to do that with something AI generated. Algorithmic art goes way back. Lots of people find graphs of the Mandelbrot Set to be beautiful.
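The Mandelbrot Set is a good example of art nobody specified in advance. A minimal sketch of how such an image gets drawn (the rendering window and iteration cap here are arbitrary choices, not anything canonical):

```python
# The Mandelbrot Set: a point c belongs if iterating z -> z*z + c
# from z = 0 never escapes to infinity. Nobody "designed" the shape;
# it falls out of this two-line rule.

def mandelbrot_iterations(c: complex, max_iter: int = 100) -> int:
    """Count iterations until |z| exceeds 2 (escape), or return
    max_iter if the point appears to stay bounded (i.e. is in the set)."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter

# Render a tiny ASCII view of the set ('#' marks points inside it).
for y in range(11):
    row = ""
    for x in range(31):
        c = complex(-2 + x * 0.1, -1 + y * 0.2)
        row += "#" if mandelbrot_iterations(c) == 100 else " "
    print(row)
```

The “art” here emerges entirely from the math; the artist’s role, if any, is picking which region to look at and deciding it’s beautiful.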
That’s not how synthetic data generation generally works. It uses AI to process existing data sources, generating well-formed training data based on material that’s not so useful directly, rather than conjuring it entirely from its own imagination.
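A hypothetical sketch of the shape of that pipeline. The `paraphrase` function here is a stand-in for a real model call (an actual pipeline would invoke an LLM); the point is that every output example is grounded in an existing source document:

```python
# Synthetic data generation sketch: rewrite messy existing documents
# into well-formed (source, target) training pairs. Nothing is invented
# from scratch; every target is derived from a real input.

def paraphrase(text: str) -> str:
    # Placeholder for a model call. To keep the sketch runnable it just
    # normalizes whitespace and capitalizes the first letter.
    cleaned = " ".join(text.split())
    return cleaned[0].upper() + cleaned[1:] if cleaned else cleaned

def make_training_pairs(raw_documents: list[str]) -> list[dict]:
    """Turn raw source documents into cleaned training pairs."""
    return [{"source": doc, "target": paraphrase(doc)}
            for doc in raw_documents]

pairs = make_training_pairs(["  the   crater is   on fire ", "it leaks  gas"])
```

Swap `paraphrase` for a real model and you have the usual recipe: the AI reshapes existing data into trainable form instead of hallucinating a corpus.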
The comments assuming otherwise are ironic because it’s misinformation that people keep telling each other.
The “how will we know if it’s real” question has the same answer as it always has. Check if the source is reputable and find multiple reputable sources to see if they agree.
“Is there a photo of the thing” has never been a particularly great way of judging whether something is accurately described in the news. This is just people finding out something they should have already known.
If the concern is over the verifiability of the photos themselves, there are technical solutions that can be used for that problem.
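One family of such solutions is cryptographic signing of the image bytes at capture or publication time (real systems like C2PA attach signed provenance metadata; the key and scheme below are purely illustrative, using an HMAC with a shared secret to keep the sketch self-contained):

```python
# Minimal photo-provenance sketch: sign a hash of the exact image bytes,
# so any later alteration makes verification fail.

import hashlib
import hmac

SECRET = b"camera-signing-key"  # hypothetical key held by the signer

def sign_photo(image_bytes: bytes) -> str:
    """Return a signature bound to these exact bytes."""
    return hmac.new(SECRET, image_bytes, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, signature: str) -> bool:
    """True only if the bytes are unchanged since signing."""
    return hmac.compare_digest(sign_photo(image_bytes), signature)

original = b"\x89PNG...raw image data..."
sig = sign_photo(original)
```

A real deployment would use public-key signatures so anyone can verify without holding the signing secret, but the principle is the same: the question shifts from “does this look real?” to “who vouched for these bytes?”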
Not necessarily. If they’re low on cash then cutting unnecessary costs is not unreasonable. What is Mozilla’s core goal? Perhaps the “advocacy” and “global programs” divisions weren’t all that relevant to it, and so their funding is better put elsewhere.
Entertainment.
If you think it’s supposed to be predictive you’re perhaps confusing it with futurology, which is a more scientific field.
Fearing AI because of what you saw in “The Terminator” is like fearing sleeping pills because of what you saw in “Nightmare on Elm Street.”
There isn’t really much fundamental difference between an image detector and an image generator. The way image generators like Stable Diffusion work is essentially by generating a starting image that’s nothing but random static and telling the generator “find the cat that’s hidden in this noise.”
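A toy sketch of that denoising idea. Here the “denoiser” trivially knows the target it expects to find, which real diffusion models instead learn to predict from training data; the target values and step strength are made-up illustration numbers:

```python
# Toy diffusion-style denoising: start from pure random static and
# repeatedly nudge it toward what the "denoiser" predicts is hidden inside.

import random

random.seed(0)
TARGET = [0.2, 0.9, 0.5, 0.1]  # stand-in for "the cat" the model expects

def denoise_step(image: list[float], strength: float = 0.3) -> list[float]:
    """Move each pixel a fraction of the way toward the predicted clean image."""
    return [p + strength * (t - p) for p, t in zip(image, TARGET)]

image = [random.random() for _ in TARGET]  # step 0: nothing but noise
for _ in range(20):
    image = denoise_step(image)
# after enough steps, the "hidden" image has been pulled out of the static
```

The detector/generator symmetry falls out of this: a model that can score how cat-like a patch of noise is can also be run in a loop to steer that noise toward a cat.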
It’ll probably take a bit of work to rig this child porn detector up to generate images, but I could definitely imagine it happening. It’s going to make an already complicated philosophical debate even more complicated.