• 10 Posts
  • 60 Comments
Joined 1 year ago
Cake day: June 9th, 2023




  • All of their hardware documentation is locked under NDA; nothing is publicly available about the hardware at the register level.

    For instance, the base Android system, AOSP, is designed to use Linux kernels that are prepackaged by Google. These kernels are well documented specifically so that manufacturers can add their hardware-support modules at the last possible moment, in binary form. These modules are what make the specific hardware work. No one can update the kernel on the device without the source code for these modules. As the software ecosystem evolves, the ancient orphaned kernel creates more and more problems. This is the only reason you must constantly buy new devices. Even if the hardware remained publicly undocumented, merely merging the source code for the modules present on the device into the kernel would keep the device supported for decades. If the hardware were documented publicly, we would write our own driver modules and have a device that is supported for decades.
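
    To make the lock-in concrete, here is a minimal sketch, assuming a Linux machine with the standard kmod tools; the module name is a made-up stand-in for whatever blob a vendor ships:

    ```python
    # Sketch: a binary kernel module records the kernel it was built for in
    # its "vermagic" string, and it will not load against a mismatched kernel.
    # "examplewifi" is a hypothetical vendor blob, not a real module name.
    import subprocess

    module = "examplewifi"

    running = subprocess.run(["uname", "-r"],
                             capture_output=True, text=True).stdout.strip()
    vermagic = subprocess.run(["modinfo", "-F", "vermagic", module],
                              capture_output=True, text=True).stdout.strip()

    print(f"running kernel  : {running}")
    print(f"module built for: {vermagic}")
    # When these stop matching, the module refuses to load, which is why the
    # shipped kernel can never be replaced without the module source.
    ```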

    This system is roughly like selling you a car that can only run on gas refined before you bought the vehicle. That is the same level of hardware theft.

    The primary reason governments won’t care or pass effective laws against orphaned kernels is that bleeding-edge chip foundries are the primary driver of the present economy. Bleeding-edge fabrication is the most expensive commercial endeavor in all of human history, and it is largely funded by these devices and this depreciation scheme.

    That is both sides of the coin, but it is done by stealing ownership from you. Individual autonomy is our most expensive resource. It can only be bought with blood and revolutions. This is the primary driver of the dystopian neofeudalism of the present world. It is the catalyst that fed the sharks that have privateered (legal piracy) healthcare, home ownership, work-life balance, and democracy. It is the spark of a new wave of authoritarianism.

    Before the Google “free” internet (ownership over your digital person to exploit and manipulate), all x86 systems were fully documented publicly. The primary reason AMD exists is that we (the people) were so distrustful of these corporations stealing and manipulating that governments, militaries, and large corporations required second sourcing of chips before purchasing with public funds. We knew, even back then, that products-as-a-service is a criminal extortion scam. AMD was the second source for Intel and produced x86 chips under license. Only afterward did they recreate an instruction-compatible alternative from scratch. There was a big legal case in which Intel tried to claim copyright over their instruction set, but they lost; that outcome created the AMD we know today. Since 2012, both Intel and AMD have had proprietary code, primarily because the original 8086 patents expired and most of the hardware could then be produced anywhere.

    In practice there are only Intel, TSMC, and Samsung on bleeding-edge fab nodes. Bleeding edge is all that matters. The price to bring a fab online is extraordinary, and the tech it requires is only made once, for a short while. The cutting-edge devices are what pay for the enormous investment, but once a fab is paid for, the cost to keep running it is relatively low. The number of fabs within a node is carefully chosen to try to accommodate trailing-edge demand later, because no trailing-edge node can be viably reproduced afterward. There is no store where you can buy fab-node hardware; as soon as all of a node’s hardware is built by ASML, they start building the next node.

    But if x86 has proprietary parts, why is it different from Qualcomm/Broadcom? (No one asked, but still.) The proprietary parts are of some concern. There is an entire undocumented operating system running in the background of your hardware; that is the most concerning part. The primary thing that is proprietary is the microcode. This basically covers the power-cycling phase of the chip, like the order in which things are given power, and which instruction set is made available. It matters because there are no actual chips designed for most consumer hardware: the dies are classed by quality and functionality and sorted to create the various products we see. Your slower laptop chip might be the same die as a desktop variant that didn’t perform at the required speed; power is connected differently, and it becomes a laptop chip.

    When it comes to trending hardware, never fall for the Apple trap. They design nice stuff, but on the back end Apple always uses junky hardware and excellent in-house software to make up the performance gap. They are a hype machine. The only architecture Apple has used and not had to abandon because it went defunct is x86. They used MOS in the beginning; the 6502 was absolute trash compared to the other available processors. It used a pipeline trick to hack out twice its actual clock speed because they couldn’t fab chips of competitive quality; they were just dirt cheap compared to the competition. Then it was Motorola. Then PowerPC. All of these are now irrelevant.

    The British group that started Acorn sold the company right after RISC-V passed the major hurdle of getting out from under Berkeley’s ownership grasp. It is a slow-moving train, like all hardware, but ARM’s days are numbered. RISC-V does the same fundamental thing without the royalty. There is a ton of hype because ARM is cheap and everyone is trying to grab the last treasure chests they can off the slowly sinking ship. In 10 years it will be dead in all but legacy device applications. RISC-V is not a guarantee of a less proprietary hardware future, but ARM is one of the primary cornerstones blocking end-user ownership. They are enablers for thieves; the ones opening your front door to let the others inside.

    Even the beloved Raspberry Pi is a proprietary market-manipulation and control scheme. It is not actually open source at the register level, and it is priced to prevent the scale viability of a truly open-source and documented alternative. The chips come from a failed cable TV tuner box, and they are only made in a trailing-edge fab when that fab has no other paid work. They are sold barely above cost as a tax write-off, hence the “foundation” and the dot-org despite selling commercial products.






  • I did proper assembly from the start: I cleaned and greased them when they were brand new, and I’ve never had any issues since. No (cheap) linear bearings come with grease. They only have assembly oil, and that is not even a load-bearing lubricant; it is a corrosion inhibitor. This is a good thing, really, because you need to know exactly what grease your bearings contain. You should never mix greases of any kind. They all have different formulations and will act unpredictably when mixed, often breaking down into a coagulated mess that provides no protection from metal-on-metal contact.

    Many cheap printer manufacturers will dab a bit of grease on the rails outside of the bearings when new. This is useless in practice due to the bearing seals. The seals are designed to let a small amount of grease out, but block any old grease from reentering the block itself.

    If the blocks were run dry without grease, they are contaminated and need to be cleaned out completely; likewise if they need service and have an unknown grease inside them. If you clean them out until they are spotless and then manually pack them with a quality grease, you’re unlikely to need to service them again for a very long time.

    I build my own bicycle wheels and service my bearings and hubs about every 10k miles, riding in all weather. I was sloppy with how I serviced bearings for a few years before I really narrowed down my issues. They must be spotlessly cleaned, without any old grease whatsoever; clean enough to eat off of. That is the difference between 2k–4k miles between problems and 10k+ on a daily-ridden bike. The same thing applies here if you want to do the job only once.



  • The only real choke point in present CPUs is the on-chip cache bus width. Increase the size of all three cache levels (L1–L3) and add a few instructions to load bigger words across a wider bus, and suddenly the CPU can handle it just fine; not maximally optimized, but something like 80% fine. Hardware just moves slowly. Drawing board to consumer for the bleeding edge is 10 years. It is the most expensive commercial venture in all of human history.

    I think the future is not going to be in the giant additional math coprocessor paradigm. It is kinda sad to see Intel pursuing this route again, but maybe I still lack context for understanding UALink’s intended scope. In the long term, integrating the changes necessary to run matrix math efficiently on the CPU will win on the consumer front, and I imagine such flexibility would win in the data center too. Why have dedicated hardware when that same hardware could be used flexibly in any application space?
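
    As a rough back-of-envelope for the cache argument above, here is a sketch where the numbers are entirely my own assumptions (about 1 TFLOP/s of FP32 and about 1 MB of usable cache per core), not measurements of any real part:

    ```python
    # Back-of-envelope: how much bus bandwidth a CPU needs to keep blocked
    # matrix math fed, as a function of how big a tile fits in cache.
    import math

    peak_flops   = 1.0e12   # assumed FP32 throughput, ~1 TFLOP/s
    bytes_per_el = 4        # FP32 elements
    cache_bytes  = 1.0e6    # assumed usable cache per core, ~1 MB

    # Largest square tile such that one tile of A and one of B fit in cache.
    tile = int(math.sqrt(cache_bytes / (2 * bytes_per_el)))

    # Blocked matmul reuses each loaded element ~tile times, so arithmetic
    # intensity is roughly tile / bytes_per_el FLOPs per byte of traffic.
    intensity = tile / bytes_per_el          # FLOPs per byte
    needed_bw = peak_flops / intensity       # bytes/s the cache bus must feed

    print(f"tile: {tile} x {tile}")
    print(f"arithmetic intensity: ~{intensity:.0f} FLOPs/byte")
    print(f"bandwidth needed: ~{needed_bw / 1e9:.0f} GB/s")
    # Bigger caches allow bigger tiles and more reuse, so the required bus
    # width shrinks -- which is the "80% fine" point made above.
    ```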



  • I don’t know. It hasn’t been working most of the time and, as far as I know, most of us use a third party like catbox.moe. 2.2 is a lot for forever storage, given the big picture. Converting to a WebP or downsizing/cropping is usually possible. Someone has to pay for the hosting service; catbox is just another human, as are these instances. Best practice is to leave as little of a footprint as possible.


  • All this really proves is that it is a complex system and most people cannot grasp the complexity or how to use it.

    For example, if you go searching for entities and realms within AI alignment, good luck finding anyone talking about what these mean in practice as they relate to LLMs. Yet the base entity you’re talking to is Socrates, and the realm is The Academy. These represent a limited scope. While there are mechanisms in place to send Name-1 (the human) to other entities and realms depending on your query, these systems are built for a complexity that a general-use implementation given to the public is not equipped to handle. Anyone who plays with advanced offline LLMs in depth can discover this easily. All of the online AI tools are stalkerware-first by design.

    All of your past prompts are stacked in a hidden list. They represent momentum that pushes the model deeper into the available corpus. If you ask a bunch of random questions all within the same prompt, you’ll get garbage results because of the lack of focus. You can’t control this with the stalkerware junk. They want to collect as much interaction as possible so that they can extract a profile of your complex relationships to data mine. If you extract your own profiles, you will find these models know all kinds of things, at roughly 80% probability, based on your word use, vocabulary, and how you specifically respond to questions in a series. It is like the example of asking someone whether they own a lawnmower to determine whether they are likely a homeowner, married, and have kids. Models make connections like this, but even more complex ones.
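
    As a minimal sketch of that hidden stacking (the names and the rendered format here are made up, not any vendor’s actual pipeline):

    ```python
    # Every past turn is silently prepended to the new question, so earlier
    # topics keep steering the model whether you want them to or not.
    history = []  # the hidden list of every past turn

    def build_prompt(user_message):
        """Render the whole stored history plus the new question."""
        history.append({"role": "user", "content": user_message})
        rendered = [f"{t['role'].upper()}: {t['content']}" for t in history]
        rendered.append("ASSISTANT:")
        return "\n".join(rendered)

    def record_reply(reply):
        history.append({"role": "assistant", "content": reply})

    # Three unrelated questions in one session: each new prompt drags along
    # the previous topics, which is the "lack of focus" described above.
    for q in ["Best sourdough hydration?",
              "Explain RISC-V vector extensions.",
              "Why is my cat sneezing?"]:
        prompt = build_prompt(q)
        record_reply("<model reply goes here>")

    print(prompt)  # the last prompt still carries the bread and RISC-V turns
    ```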

    I can pull useful information out of models far better than most people here, but there are many better than myself. A model has limited attention in many different contexts. The data corpus is far larger than this attention could ever access. What you can reach on the surface, without focusing attention in a complex way, is unrelated to what can be accomplished with proper focus.

    It is never a valid primary source. It is a gateway through abstract spaces. For instance, I recently asked who the leading scientists are in biology-as-a-technology and got some great results. Using those names to find published white papers, I can get an idea of who is most published in the field. Setting up a chat with these individuals, I am creating deep links to their published works. Naming their works gets more specific. Now I can have a productive conversation with them and ground my understanding of the general subject, where the science is at, and where it might be going. This is all like a water-cooler conversation with these people’s lab assistants; it’s maybe 80% correct. The point is that I can learn enough about this niche to explore the space quickly and with no background in biology. This is just an example of how to focus model attention to access the available depth. I’m in full control of the entire prompt. Indeed, I use a tool that sets up the dialogue in a text-editor-like interface so I can control every detail that passes through the tokenizer.
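
    Here is roughly what that workflow looks like as a script rather than a text-editor front end. This is a hedged sketch assuming the llama-cpp-python bindings; the model path is a placeholder and the researcher/paper slots are deliberately left blank:

    ```python
    # Hand-write the entire prompt (no hidden system text, no stacked
    # history) and pass it to a local model verbatim.
    from llama_cpp import Llama

    prompt = (
        "You are a lab assistant to the researchers named below.\n"
        "Ground every answer in their published work; say when you are unsure.\n\n"
        "Researchers: <names surfaced by an earlier, broader query>\n"
        "Key papers:  <titles pulled from a publication search>\n\n"
        "QUESTION: Where is this field heading over the next five years?\n"
        "ANSWER:"
    )

    llm = Llama(model_path="path/to/local-model.gguf", n_ctx=4096)
    out = llm(prompt, max_tokens=400, temperature=0.7)
    print(out["choices"][0]["text"])
    ```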

    Google has always been garbage for the public. They only do the minimum needed to collect data to sell. They are only stalkerware.




  • From my experience with Llama models, this is great!

    Not all training info is about answers to instructive queries. Most of this kind of data will likely be used for cultural and emotional alignment.

    At present, open-source Llama models have a rather prevalent prudish bias. I hope European data can help overcome it. I can easily defeat the filtering part of alignment; that is not what I am referring to here. There is a bias baked into the entire training corpus that is much more difficult to address while retaining nuance in creative writing.

    I’m writing a hard science fiction universe and find it difficult to overcome many of the present cultural biases when it comes to character descriptions. I’m working in a novel writing space with a mix of concepts that no one else has worked with before. With all of my constraints in place, the model struggles to overcome things like a default of submissive behavior in women. Creating a complex and strong-willed female character is difficult because I’m fighting too many constraints for the model to fit into attention. If the model were trained on a more egalitarian corpus, I would struggle far less in this specific area. It is key to understand that nothing inside a model exists independently; everything is related in complex ways. So this edge case has far more relevance than it may at first seem. I’m talking about a window into an abstract problem with far-reaching consequences.

    People also seem to misunderstand that model inference works both ways. The model is always trying to infer what you know and what it should know, and, very importantly, it is also inferring what you do not know and what it should not know. If you do not tell it all of these things, it will make assumptions, likely bad ones; if you’re smart, you should already know what I just told you. If you do not spell out these aspects, it likely assumes you’re average relative to the training corpus, and what do you think of the intelligence of the average person? The model needs to be trained on what not to say, and when not to say it, along with the enormous range of unrecognized inner conflicts and biases we all carry beneath the surface of our conscious thoughts.
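
    A small sketch of what “telling it what you do and don’t know” can look like in practice; the profile fields and wording here are purely illustrative, not a prescribed format:

    ```python
    # Spell out the reader's knowledge explicitly instead of letting the
    # model infer it from the average of its training corpus.
    profile = {
        "knows":         ["Python", "basic statistics"],
        "does_not_know": ["organic chemistry", "lab protocols"],
        "wants":         "an explanation that skips the basics I already have",
    }

    preamble = (
        f"The reader already knows: {', '.join(profile['knows'])}.\n"
        f"The reader does NOT know: {', '.join(profile['does_not_know'])}.\n"
        f"The reader wants {profile['wants']}.\n\n"
    )

    question = "Walk me through how mRNA vaccines get a sequence into a cell."
    print(preamble + question)  # the full prompt, with nothing left to assume
    ```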

    This is why it might be a good thing to get European sources. Just some things to think about.


  • 🙊 and the group think nonsense continues…

    Y’all know those grammar checking thingies? Yeah, same basic thing. You know when you’re stuck writing something and your wording isn’t quite what you’d like? Maybe you ask another person for ideas; same thing.

    Is it smart to ask AI to write something outright? About as smart as asking a random person on the street to do the same. Is it smart to use proprietary AI that has ulterior political motives? Things might leak, like this, by proxy. Is it smart for people to ask others to proofread their work? Does it matter if that person is a grammar checker that makes suggestions for alternate wording and has most accessible human-written language at its disposal?


  • Yeah, technically it could, but tape sticks really well to most filament. It is more likely that whatever came out of the nozzle was in a dead zone. The nozzle back bore is drilled with a steeper angle than a typical bit, but it is not so acute that there are no dead zones. Sometimes filament can sit in those zones, cook, and go wonky. It happens more often with high temp filaments like polycarbonate. I run a bit of purging filament with every change and rarely have problems.

    I would be more concerned about extruder gear issues with residue over time.



    TNI (the topological naming issue) is not about planes. It is about the linearity of the tree, the truncation of infinite numbers, and the loops the tree must patch in to break a linear branch of the tree. These breaks create a cascade of problems that are impossible to address, because the required information is missing once the initial reference is created and truncated at the register level. It is not a single-reference issue: all references down the tree are relative and themselves often truncated. Breaking the tree is always the wrong thing to do. Yes, it can be done as a hack to do something quickly, but that is just a hack, and stacking hacks is terrible design. This is the difference between a good designer and a bad one. It is all about a linear tree and π.

    I can design without any reference planes and just offsetting my sketches. I never use faces or import 3d geometry. I am very intentional about what references I import and those I do not. I also make some sketches as references only, and these are used to alter other sketches down tree. All of this is TNI centric.
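
    As a hedged sketch of that habit in FreeCAD’s Python console (the object names and the 10 mm offset are mine, nothing canonical):

    ```python
    # Two sketches positioned by explicit placements rather than by faces or
    # generated topology, so nothing down-tree depends on internal names.
    import FreeCAD as App
    import Part

    doc = App.newDocument("offset_sketch_demo")

    # Base profile: placed by numbers I own, not attached to a model face.
    base = doc.addObject("Sketcher::SketchObject", "BaseProfile")
    base.addGeometry(Part.LineSegment(App.Vector(0, 0, 0), App.Vector(40, 0, 0)))

    # Second profile 10 mm above the first: the relationship is a plain
    # offset, so later edits can never rename the reference out from under it.
    upper = doc.addObject("Sketcher::SketchObject", "UpperProfile")
    upper.Placement = App.Placement(App.Vector(0, 0, 10), App.Rotation())
    upper.addGeometry(Part.LineSegment(App.Vector(0, 0, 0), App.Vector(40, 0, 0)))

    doc.recompute()
    ```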


  • I updated the sidebar with this. I like the comparison on GitHub.

    I might try to break down why exactly the topological naming issue is not a “problem” and why it is actually beneficial to learn how to design while it is in place. I doubt the information is public for all of these other CAD packages, but how each of them addresses TNI versus π would be most interesting, in my opinion.

    It is all patches and hacks under the surface, with all proper design methodologies revolving around TNI. Obfuscating it makes the resulting failures a big mystery to the ignorant end user, and the whining gets shifted to “bad software bugs” when in fact it is the ignorant user. I think this is the primary reason FreeCAD is so slow to obfuscate TNI with a hack. Even SolidWorks had TNI in place in the beginning. The professional gurus who can fix anything in CAD all address the issue with a TNI mindset while looking for obscure references that somehow invoke π.