Biology, gaming handhelds, meditation and copious amounts of caffeine.

  • 9 Posts
  • 33 Comments
Joined 1 year ago
Cake day: June 11th, 2023




  • I didn’t follow a guide, but there are many good ones online.

    For games, really just install Steam on your main computer and on the TV client, make sure Remote Play is configured to get the most out of your connection, and set it to the desired resolution. That’s about it.

    For torrents, you want a downloading client (I use qBittorrent), software that will automatically grab movies and TV shows based on what you want (Sonarr, Radarr, all the *Arr stuff), and a server that stores the media and organizes it in a “Netflix-like”, easy-to-use interface; for that I use Jellyfin on my main PC.

    So in short, for games, I open Steam Big Picture, select the game, and I’m playing. For media, my PC downloads everything I want at night, and during the day it’s all there with subtitles, episodes, descriptions, etc., ready to play by opening up Jellyfin. It’s mostly hands off, but the initial setup can be a bit painful if you’ve never used these tools before, especially dealing with the *Arr setup.
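    For illustration only - my setup doesn’t need this, since the *Arr apps and Jellyfin handle everything through their own web UIs - but if you ever want to poke at the pieces from a script, qBittorrent exposes a Web API you can query. A minimal Python sketch, assuming the Web UI is enabled on localhost:8080 and using placeholder credentials (adjust for your own instance):

    # Minimal sketch: list finished downloads via qBittorrent's Web API.
    # Assumes the Web UI is enabled at localhost:8080; credentials are placeholders.
    import requests

    QBIT_URL = "http://localhost:8080"
    session = requests.Session()

    # Log in; qBittorrent hands back an SID cookie that the session keeps for us.
    resp = session.post(
        f"{QBIT_URL}/api/v2/auth/login",
        data={"username": "admin", "password": "adminadmin"},
    )
    resp.raise_for_status()

    # Ask only for torrents that have finished downloading.
    torrents = session.get(
        f"{QBIT_URL}/api/v2/torrents/info",
        params={"filter": "completed"},
    ).json()

    for t in torrents:
        print(f"{t['name']} -> {t['save_path']}")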



  • As a biologist, I’m always extremely frustrated at how parts of the general public believe they can just ignore our entire field of study and pretend their common sense and Google are equivalent to our work. “Race is a biological fact!”, “RNA vaccines will change your cells!”, “gender is a biological fact!” I was about to comment that other natural sciences have it good… but thinking about it, everyone suddenly thinks they’re a gravity and quantum physics expert, and I’m sure chemists must also see some crazy shit online, so at the end of the day, everyone must be very frustrated.




  • He’s just extremely angry because our current government made several deals with Chinese electric car manufacturers, which in turn kick-started our electric car industry with several models that are significantly cheaper and better designed than Teslas, essentially making it impossible for his brand to ever get into the Brazilian market. Oh, and we are also one of the largest groups still using Twitter.

    The threat to Tesla and Twitter means he wanted to feed the opposition, which just meant jumping on the easiest, most surface-level right-wing conspiracy theory being shared around - that Moraes and the Supreme Court interfered in the elections to elect Lula and are actually some occult power controlling the country. I would laugh, but a large chunk of the population is dumb enough to believe this tale.



  • The article is actually incomplete. Some Insider builds already lack the old taskbar: it can’t be invoked, and if an application relies on it you simply get a crash.

    This is not new behavior from Windows. When legacy features are about to be removed, Microsoft staggers updates for users who have known conflicting software installed, and it may also throw warnings. That is exactly what we are seeing now.

    Though the fact that this small article is just reporting Reddit information rather than testing Insider builds is neither my fault nor my concern.




  • So no, if this law came into effect, people would just stop using AI. And imo, they probably should stop for cases like this, unless there is direct human oversight of everything coming out of it.

    Then you and I agree. If AI can be advertised as a source of information but at the same time can’t provide safeguarded information, then there should not be commercial AI. Build tools to help with video editing, remove backgrounds from photos, go nuts, but do not position yourself as a source of information.

    Though if fixing AI is at all possible, even if we predict it will only happen after decades of technological improvements, it for sure won’t happen if we are complacent and do not add such legislative restrictions.


  • I wonder how legislation is going to evolve to handle AI. Brazilian law would punish a newspaper or social media platform for claiming that Iran had just attacked Israel - that is dangerous information that could affect somebody’s life.

    If it were up to me, if your AI hallucinated some dangerous information and provided it to users, you’d be personally responsible. I bet that if such a law existed, in less than a month all those AI developers would very quickly abandon the “oh no, you see, it’s impossible to completely avoid hallucinations, the math is just too complex, tee hee” excuse and would actually fix this.