• 0 Posts
  • 8 Comments
Joined 8 months ago
Cake day: March 15th, 2024

  • I had a booth about this at the Bay Area Maker Faire recently.

    If we’re all printing the same object on our 3D printers, it’s probably a lot less trouble to just have someone injection mold it. 3D printers are really great for one-offs, mass-customization, and things like that. And I feel like it’s kind of an under-appreciated problem in 3D printing. Because, yeah, CAD is hard, and we’re never going to reach a world where every 3D printer owner is deeply comfortable with CAD, so it should be a concrete goal for the 3D printing community to focus on this problem. It’s important that every 3D printer owner can do at least some amount of tweaking and customizing; otherwise we’re failing as a community.

    Now, I don’t tinkshame. I spent a lot of time learning Blender, FreeCAD, and OpenSCAD to prove Naomi Wu’s assertion that we should all just get over ourselves and use TinkerCAD. The only real problem with it is that it’s not really free; it’s “free at the pleasure of Autodesk,” where they could raise the “Mission Accomplished” banner at some point and turn it off. And there’s not really an open source version of it, for roughly the same reason that random Thingiverse models are always kinda half-assed and bad: doing a good TinkerCAD-but-actually-free-by-some-definition takes actual work to get everything right and polished and documented and bug-free, and nobody really wants to pay for it.

    Also, maybe I’m pedantic and obsessive, but I don’t really like screwing around too heavily with models in a slicer, so I’d rather they take some of the magical code in the OrcaSlicer/PrusaSlicer/SuperSlicer tree and actually organize it into something TinkerCAD-esque.

    Anyway, the core of my booth’s talk was systems and libraries of 3D printable objects. For example, there’s the Honeycomb Storage Wall system, and some of us have been writing neat lil OpenSCAD libraries and models for it (another group of people has been doing similar things in Fusion). With a parametric model, you can measure your flashlight and print a cute 40mm holder for it based on that measurement, without having to model anything from scratch, and it’ll click into the HSW wall. It’s fine unless you’re married to someone with ommetaphobia, in which case you need to make sure the honeycomb is the same color as the wall. The same is true for Gridfinity, except you can put that in a drawer.
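    To make the “measure your flashlight, get a holder” idea concrete, here’s a minimal OpenSCAD sketch. All the names and dimensions are illustrative, and it’s not a real Honeycomb Storage Wall part — a real model would pull in an HSW library for the click-in mount geometry — but it shows why one measured parameter is all a user has to touch:

    ```openscad
    // Hypothetical parametric flashlight holder (illustrative only --
    // not an actual HSW model; the wall-mount geometry is omitted).

    flashlight_diameter = 40;  // mm, measured from the actual flashlight
    wall_thickness      = 3;   // mm of plastic around and under the bore
    holder_depth        = 25;  // mm, how deep the flashlight sits

    module holder() {
        difference() {
            // outer shell
            cylinder(h = holder_depth,
                     d = flashlight_diameter + 2 * wall_thickness,
                     $fn = 64);
            // bore for the flashlight, raised so the base stays solid
            translate([0, 0, wall_thickness])
                cylinder(h = holder_depth,
                         d = flashlight_diameter,
                         $fn = 64);
        }
    }

    holder();
    ```

    Change `flashlight_diameter` to whatever your calipers say and re-render; that’s the whole appeal — one measurement, no remodeling.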

    There are also a lot of standalone parametric models. I’m not sure what you’re looking to print, but a decent number of people have done stuff in Fusion or FreeCAD or OpenSCAD where you can download the model and change the parameters to get it a lot closer to what you want, without going through all the drama of remodeling it from scratch.

    I love using OpenSCAD. I’ve got a bunch of years of experience with various 3D modelling tools, so I can actually use Blender or FreeCAD quite well, but in the end I make a lot of functional bits, and it’s so darn easy to just write some code, because I’ve been working as a professional software engineer for quite some time.

    So… dunno, it depends on your aspirations? There were a good number of Gridfinity-like systems around before Gridfinity came out, and they were… OK, but not great. Then Gridfinity came along and did a boxy-box system just like what was already there, but with some interesting tweaks that made it more amenable to real customization, and suddenly everybody went gonzo over Gridfinity in particular. So if you borrow an idea and make your own version of it better, you might end up with more than just another copy of a thing that already exists in a dozen forms.

    Also, I learned 3D modelling tools mumble-mumble years ago in a failed attempt and/or dodged bullet, because I’d wanted to do games or special effects as a kid. The software I learned on is long gone, but it turns out that once you’re used to thinking about objects in 3D, it tends to stick? Which means I learned pottery while visualizing the objects I was making on the wheel as if they were in the CAD window of my mind, got good at photographic lighting based on what I’d observed in the 3D program, and then transitioned back to CAD because I wanted to make things. So it’s one of those skills where the time spent probably won’t be wasted.

    tl;dr: I learned OpenSCAD, FreeCAD, and Blender to prove that Naomi Wu is right and we should all get over ourselves and use TinkerCAD and … she’s still probably right, LOL.


  • It’s important to realize that the nerd you saw on the news has always been someone wearing nerd as a costume, and the entire history of technology is loaded with examples of the real nerds being marginalized. It’s just that in ages past the VCs would give a smaller amount of money and require the startup to hit concrete milestones to unlock all of it, so there was more of a chance for the founder’s dreams to smack up against reality before they were $230M in the hole with no product worth selling.


  • While there is arguably a larger pool of people you can reach by not having open racism and the CEO whipping his dick out (and mysteriously not slamming it in his Tesla door, even if that would be a masterful gambit), you can still get a lot of privileged white men who are smart and hardworking and don’t much worry about being on the receiving end of most of the harassment. To them it’s OK as long as they end up part of the winning team, because they’ll get mega stock bucks at the end. And this does extend to the factory floor, at least in the impressions people have when joining it. They won’t be engineers, but they figure they’ll be a supervisor or something?

    It’s kinda unearned? Like, there are stories people tell each other, of questionable veracity. Some set of startups in the days of yore supposedly gave their cleaning staff options, so I think it’s become part of the cultural mythos now, even though the cleaning staff these days are mistreated contractors. Even if it did actually happen then, it won’t happen now.

    And, dunno, once you’ve solved the hard problems early on, there’s less of that drive to do the truly novel things, so you get more of the people who want to be part of a company that’s going to the top and wouldn’t mind coasting and/or failing upwards along the way.

    The problem is that employers tend to presume they can keep abusing people into the future because they’ve gotten away with it so far. Until they do things like yank offers from new college grads or lay off too many of the professional staff, at which point the illusion is shattered.

    tl;dr: Elon sowing: Haha fuck yeah!!! Yes!!

    Elon reaping: Well this fucking sucks. What the fuck.


  • As best I can tell, the touchscreen is added at the concept phase by folks who mostly know what’s going to make people look at the car and want to buy it, several years before the car hits the market and well before the actual car electronics teams are involved.

    So, yeah, car UI/UX sucks right now because we’re seeing all of the things added to cars a few years ago in response to Tesla, implemented by people who think that having programmed a random car-focused microcontroller back in the day means they understand all of the layers involved in a modern Linux or Android or Windows embedded car-electronics unit, including layer 8 of the OSI stack (meaning: interfacing with humans).

    But, yeah, dunno. I don’t actually have my own car. My spouse got a Mazda a bunch of years ago, and it has a pretty good touchscreen interface backed by physical controls: if you want to dig into stuff you can use the touchscreen, but all of the common stuff is switches and knobs. The generation before that had way, way too many buttons, and it was just gag-me-with-a-spoon. The generation after removed the touchscreen because the leadership at Mazda decided people were just not to be trusted with one, and I feel like they went a little too far in the other direction. Meanwhile, in airplane cockpit design, they take great pains to let you navigate by touch where necessary, such that all of the knobs are differently textured or shaped. And, as I said, I don’t actually have my own car, but I have to say that if I did, I’d want it designed like that.


  • Funny, just this morning I woke up to someone commenting on a piece of art I’d posted on Reddit, saying that if I hadn’t explained in the comments how I did it, they’d have thought it was an AI-generated picture.

    It’s super-painful to be a technologist and an artist at the same time right now, because there are way too many people in tech who have no understanding of what it means to create art. There are people in the art community who don’t really get AI either, of course, but since they are trending towards probably the right opinion based on an incomplete understanding of what the things we call AI actually are, it’s much easier to listen to them. If anything, the artists labor under the misapprehension that the current crop of AI tools are doing more than they actually are.

    In the golden age of analog photography, people would make a print that included the raw borders of the image. You’d see sprocket holes if it was 35mm film, or a variety of rough boundaries for other film formats, and it was a known artistic convention that you were showing exactly what you shot: no cropping, no edits, etc. The first version of Instagram decided that those film borders meant “art,” so of course they added fake film borders, and it grated on my nerves because I think they used the edges from a roll of Velvia, which is a brilliant color slide film. And then someone would have a photo with the B&W filter, because that also means “art,” but you would never see a B&W Velvia shot unless you were working really hard at it. So this is far from the first time a bunch of clueless people on the tech side of the fence did something silly out of ego and ignorance.

    The picture I posted is the result of a bunch of work on fabbing, 3D printing, FastLED programming, photographic technique, and having an interesting concept and an existing body of work such that a person would want to show up at some random eccentric’s place for a shoot. And, well… captions on art exist for a reason, right? It adds layers to a work to know that the artist was half-mad when they painted it. Maybe you can tell from the brushwork, or you just know your art history really well, but maybe you can’t, so a caption helps create context for people not skilled in that particular art.

    And there aren’t really “secrets” in art. Lots of curators and art critics will take great pains to explain why Jackson Pollock or Mark Rothko matter, so if you’re still wandering around saying “BUT IT LOOKS LIKE GIANT SQUARES,” that’s intentional ignorance.

    I’ve been exploring my particular weird genre of art for a while now. Before AI, Photoshop was the thing. Much in the same way as I could have thrown a long enough prompt into a spicy-autocomplete image generator, I also could probably have photoshopped it. Then again, the tutorials for the Photoshop version of the technique all refer back to the actual photographic effect.

    Describing something as what it’s not has long been a violation of social norms, one that people stuck in a world of intentional ignorance, ego, and disrespect for the artistic process have always engaged in. In the simultaneous heyday of Second Life and Flickr, people wanting to treat their Second Life as their primary life pushed Flickr to create features so people could mediate that boundary. So, on one level, this isn’t entirely new, and posting AI art to a painting subreddit is no different from posting filtered Second Life shots to a portrait group on Flickr. It’s simple rudeness of the sort that the unglamorous aspects of community moderation exist to solve.

    I have gotten quizzed about how I make my art, but I’ve never seen anybody go off and create a replica of it. They’ve always gone off and created something new and novel and interesting, and you might not even realize that what got them there was tricks I shared with them, it’s so different. Artists don’t see other art in the gallery and autocomplete art that looks like what they saw; they incorporate ideas into their own work with their own flair.

    Thus, there’s more going on than mere rudeness. I’ve been doing this for a long time now, and the AI companies have a habit of misrepresenting exactly what content they have stolen to train their image models. So it’s entirely likely that the cool AI picture someone thinks my art looks like was really just autocompleted using parts of my art. Except I can’t say “no,” and if there were a market for people making art that looks roughly like mine, I’d offer paid workshops or something.