It was one of those one person family bathrooms. I had a 3 hour wait and a bottle of rum.
Also [email protected]. Not a lot of Zeppos out here.
I was pouring rum into it.
I was in an airport bathroom and somehow the auto soap dispenser managed to squirt soap into my open cup of coffee. Fuck those things.
Pretty sure I was able to rearrange icons on my iPod Touch 4 in 2010.
I have an ancient PC with a nice video card, and it plays games from about five years ago quite well, so I haven’t felt a need to upgrade. Unfortunately, I play a couple of games with kernel-level anti-cheat stuff and I don’t think they will work with Linux.
It’s unsettling the way the beans are clinging to her chest.
It’s quite a giveaway to a lucky American business. I vote Google because then they’ll get bored with it and shut it down.
I’m kind of surprised people felt that way about AirPods. I don’t remember that at the time. They seem quite mild to me at this point - people didn’t mind wearing regular earbuds around, why worry if there’s a cord or not?
It’s a native feature of the device that allows its user to get enormous amounts of attention, in real life and subsequently online, by simply wearing it in public.
Sounds horrible. I guess I’m not someone who seeks attention at any cost like some people; public is the last situation I’d use this thing in. I would feel like a complete dumbass wearing it at a coffee shop and waving my hands around.
So about .0017 per email? Probably still profitable for them.
They… accepted that they’re not killing Linux, yet. Back in the 2000s, before Azure, they felt quite threatened by Linux, and ‘Microsoft Linux’ was a joke. The then-CEO Steve Ballmer called Linux a ‘cancer’.
A machine preparing drinks is not ‘AI’.
I’m aware of the pressure sensitivity thing causing a false low, and also how sensors have a delay from reading interstitial fluid. I’ve been doing this for 3 1/2 years, as noted. Even with the jitter of the sensor, I maintain over 95% in range, so I feel like I am fairly attentive and well informed.
I’ve had one for about 3 1/2 years and that’s not my experience, unfortunately. Some of the sensors are right on from the start and stay that way… maybe 30% of them. Some are off when they start and take 1-2 days to start reading correctly. In the meantime, it might say I’m 140 when my meters say 110, or 110 when my meters say 140, or at worst, Dexcom says 90 when I am obviously low and I check and it’s 65. Some sensors are just whacked out and unpredictable, like I’ll be hanging at 100 and it shows a quick drop to 90, 75, 65, and I’m uh, what? And check with a meter and it’s 110. The in-between sensors, they might be reading 30 points off for 3 days before I decide to calibrate and find out oh, it’s been telling me 80 when I’m really at 110. So, it’s always worth confirming.
Dexcom’s own instructions say to never do a ‘correction’, meaning insulin or carbs, without double checking with a meter ‘if your symptoms don’t match the reading’. I can’t always tell whether I’m low or high or normal, so that means realistically, it’s good to double-check. I’ve had times where I was correcting at ‘80’ up to 120 repeatedly for days and once I calibrated it, I found out I had really been ‘correcting’ from 110 to 160.
The Dexcom isn’t always right though. They claim “no fingersticks” and “no calibration required” and both of those claims are complete bullshit.
No, graft is used as a term for things like police demanding weekly payments in exchange for protection, acting like a local gang or mafia, or politicians soliciting bribes. It’s when anyone in an official position abuses their authority in such a way.
Another related confusion in academia recently is the ‘AI detector’. These could easily be defeated with minor rewrites, if they were even accurate in the first place. My favorite misconception was a story of a professor who told students “I asked ChatGPT if it wrote this, and it said yes”, which is just really not how it works.
This is true and well-stated. Mainly what I wish people would understand is that there are currently appropriate uses, like ‘rewrite my marketing email’, but generating information that could result in great harm if inaccurate is an inappropriate use. It’s all about the specific model, though: if you had a ChatGPT-like system trained extensively on medical information, it would be more accurate, but the information would still need expert human review before any decision was made. Mainly I wish the media had been more responsible and accurate in portraying these systems to the public.
Maybe they should have done this about a year ago.