To be fair, I had an 8GB M1 Mac mini for about a year and never once felt like it was lacking memory. I could open as many things as I wanted and it didn’t slow down, so I can kinda see where they were going with this. Not saying it makes the situation much better, though.
I think the current base iPhones with 4GB or 6GB suffer far more from lack of memory than the 8GB Macs, and people aren’t talking about this enough.
I needed a cheap laptop for audio, so I decided to pick up a second-hand M1 Air a couple of months ago.
It is honestly pretty impressive for the price, and I generally don’t have issues either. Everything is snappy, and it handles multitasking fine. It’s even faster than my $2000+ PC at several things, which frustrates me greatly.
However… when running Ableton Live (or presumably anything that involves heavy image, video, or audio editing), 8GB of RAM is honestly not enough. If you push it too hard, it hangs for a second, then the offending app just closes.
Also, there is a weird delay in Factorio. Absolutely unacceptable.
Base 8GB MacBooks also tend to have base storage, which on some models means a single NAND package instead of two and roughly half the SSD bandwidth. If you’re relying on virtual memory, it would make sense to get the Mac with the faster SSD. I bought a base M1 Mac Mini for the kids and it’s pretty good for their needs, but they tend to prefer the old i3 Windows 10 PC connected to the same monitor. The M1 Mini could run Intel Civ 6 faster than my 32GB i7 MacBook Pro could, which surprised me.
tell me you run windows without telling me you run windows
You didn’t even ask which several things OP was referring to.
NO! No fair.
I delivered a season of 4K animations for a network show using Motion, AE, C4D, Ps, AI… all on a base model M1 Mini (8/256), with zero problems.
Of course more would be better, but unless you’ve actually used one, it’s hard to imagine how well it works. I tried mentioning this in another post, but it’s all Apple hate all the way down here.
People are, just not PC spec heads that like to compare numbers. Practical use is the only real comparison.
Actually, the opposite is true for a basic spec like RAM. People may not understand CPU/GPU naming conventions, BUT they understand something simple like 16 > 8.
They also understand that their “old slow” PC probably had 8GB, and they want to UPGRADE. So when they see this “new” Mac with the same amount of RAM, they immediately think “slow,” whether it is or not…
Some of the YouTubers comparing the new MacBook found that the 16GB Air smoked the 8GB version for creative tasks and rendering, but they found no difference between 16GB and 24GB. Seems like Apple could up the RAM to 12GB and see a big improvement.
UM on an SoC is not the same thing as RAM on a PC with a CPU and GPU. It’s purely a storage liaison, since data is passed directly from core to core.
It’s not that it’s more efficient, it’s simply used less than in conventional PC architecture.
MacOS is also designed specifically to leverage the hardware, so practical use is the only legitimate comparison to a PC.
Maybe PC Gamer isn’t the most informed reviewer of technology outside of PCs.
It’s not that you’re wrong from a philosophical perspective; it’s that you’re factually incorrect. Memory addresses don’t suddenly shrink or expand depending on where they sit on the bus or the CPU. Being on the SoC doesn’t magically make RAM used less by the OS and applications; the Mach kernel, Darwin, and the various macOS layers still address the same amount of memory as they would on traditional PC architecture.
Memory is memory, just like glass is glass, and glass will still scratch at a level 7 just like 8GB of RAM holds the same amount of information as…8GB of RAM.
The article actually quantitatively tests this too by pointing out their memory usage with Chrome and different numbers of tabs open.
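That kind of spot check is easy to reproduce yourself. Here’s a minimal Python sketch that parses `vm_stat` output, the standard macOS tool for Mach VM counters. The sample output below is made up for illustration (on a real Mac you’d feed it `subprocess.check_output(["vm_stat"], text=True)` instead), and “active + wired” is only a rough proxy for pressure, since macOS deliberately keeps memory busy for caching.

```python
import re

# Made-up sample for illustration; real output comes from running `vm_stat` on a Mac.
SAMPLE = """Mach Virtual Memory Statistics: (page size of 16384 bytes)
Pages free:                               50000.
Pages active:                            300000.
Pages inactive:                          120000.
Pages speculative:                        10000.
Pages wired down:                         80000.
"""

def used_gb(vm_stat_output: str) -> float:
    """Rough in-use memory (active + wired pages) in GB."""
    page_size = int(re.search(r"page size of (\d+) bytes", vm_stat_output).group(1))
    pages = {m.group(1): int(m.group(2))
             for m in re.finditer(r"Pages ([a-z ]+):\s+(\d+)\.", vm_stat_output)}
    return (pages["active"] + pages["wired down"]) * page_size / 1e9

print(f"{used_gb(SAMPLE):.2f} GB in use")  # 6.23 GB with the sample numbers
```

The same counters are what Activity Monitor summarizes; the point is that anyone can put a number on “how full is 8GB” in a couple of minutes.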
Looks like you didn’t read the article.
You should familiarize yourself with the architecture before commenting. The GPU is broken into several cores of the SoC, along with the roles of the CPU. The UM is not part of the SoC. However, data is passed from what could be referred to as the CPU to what could be referred to as the GPU without interacting with UM.
I’m actually deeply familiar with the architecture, and with how caches, memory, and unified memory work. I understand all of that. None of it changes the storage available. Having high memory bandwidth to load/unload memory addresses doesn’t fix the problem of the environment easily exceeding 8GB. I also understand the caching principles and how you actually want RAM utilization to be higher for faster responsiveness. 8GB is still 8GB, and a joke.
Use your experience and analyze Apple’s M SoC before we continue this conversation.
A week-long battery life, efficient cores, rapid response time, and a great software environment make it a great choice… at 16GB, for my needs. I will not recommend 8GB to any user at all going forward. It’s marketing malarkey with no future-proofing, degrading the viable longevity of the machine.
There’s no conversation to continue. Glass is glass, and 8GB is 8GB, as well as being a joke.
If that’s what your needs require, the base model isn’t for you. You can stream video with 30 tabs open in Safari and only use 4.6GB of UM on an M1 Mac. I just verified it for you.
“Your powers are quaint, you must be popular with the children”
What a load of nonsense. You’ve got no idea how a computer works. RAM isn’t just used for passing data between cores; if anything, that’s more the role of cache, although even that isn’t strictly accurate.
Whether a system has a discrete GPU or not doesn’t really factor into the discussion one way or the other. And even if it did, having more RAM would be even more important without a discrete GPU, because a portion of system RAM gets used as VRAM.
This is a truly terrible article.
Like, why not test these things? This just sounds like AI-generated garbage.
That being said, 8GB is an abysmally low amount of RAM in 2024. I had a mid-range Surface in 2014 with that much RAM. And the upcharge for more is quite ridiculous too.
I know it’s PC RAM, but I bought 64GB of DDR4-3600 for about $130. How on earth is Apple charging $200 for 8?!
Looks like you didn’t read the article either. Overall, I’m using 12.5GB of memory and the only application I have open is Chrome. Oh, and did I mention I’m typing this on a 16GB MacBook Air? I used to have an 8GB Apple silicon Air and, to be frank, it was a nightmare, constantly running out of memory just browsing the web.
Earlier it’s mentioned that they have 15 tabs open. I don’t like a lot of things they do in “gaming journalism” but on this article they’re spot on. Apple is full of shit in saying 8GB is enough by today’s standards. 8GB is a fuckin joke, and you can’t add any RAM later.
That doesn’t make sense. I have the 8GB M2 and don’t have any issues with 20+ tabs, video calling, torrents, Luminar, Little Snitch, etc. open right now.
15 tabs of Safari, which is arguably a better browser, in some opinions, due to its efficiency and its available privacy configuration options. What if you prefer Chrome or Firefox?
I will argue in Apple’s defense that their stack includes very effective libraries that intrinsically make applications on macOS better in many regards, but 8GB is still 8GB, and an SoC isn’t upgradeable. The competition has far cheaper 16GB options, and Apple is back to looking like complete assholes again.
I’m using Chrome.
That’s because PC people try to equate specs across dissimilar architectures, with an OS that is not written explicitly to utilize that architecture. They haven’t read enough about it, or experienced it in practice, to have an informed opinion. We can get downvoted together on our “substandard hardware” that works wonderfully. lol
The only memory-utilization-related advantage gained by sharing memory between the CPU and GPU is zero-copy operations between the CPU and GPU. The occasional texture upload and framebuffer access is nowhere near enough to make 8 GiB the functional equivalent of 16 GiB.
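To put a number on that, here’s a toy accounting model (the workload figures are hypothetical, not measurements): on a discrete-GPU system, GPU-visible assets exist twice, once staged in system RAM and once copied into VRAM, while unified memory holds a single copy. The saving is therefore bounded by the size of the GPU-shared assets, which is small next to what CPU-side applications consume.

```python
def footprint_gb(cpu_only_gb: float, gpu_shared_gb: float, unified: bool) -> float:
    """Total resident data across RAM + VRAM for a workload, in GB.

    Discrete GPU: shared assets are staged in RAM and copied to VRAM (x2).
    Unified memory: one copy serves both CPU and GPU (zero-copy).
    """
    copies = 1 if unified else 2
    return cpu_only_gb + gpu_shared_gb * copies

# Hypothetical workload: 6 GB of app/browser working set, 1 GB of textures/framebuffers.
print(footprint_gb(6.0, 1.0, unified=False))  # 8.0 -- discrete
print(footprint_gb(6.0, 1.0, unified=True))   # 7.0 -- unified saves only the shared gig
```

Even granting unified memory its best case, the saving here is one gigabyte, not the doubling that “8 is like 16” claims imply.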
If you want to see something “written explicitly to utilize [a unified memory] architecture,” look no further than the Nintendo Switch. The operating system and applications are designed specifically for the hardware, and even first-party titles are choked by the hardware’s memory capacity and bandwidth.
The Tegra is similar in being an SoC; however, it doesn’t possess nearly as many dedicated independent processing cores designed around specialized tasks.
Those are M1 Pro specs: a 10-core CPU with 8 performance cores and 2 efficiency cores, a 16-core GPU, a 16-core Neural Engine, all with 200GB/s memory bandwidth. The base M1 has an 8-core CPU and roughly a third of that bandwidth.
The M1–M3 is still miles ahead of the Tegra, I don’t disagree. My point was that software designed specifically for a platform can’t make up for the platform’s shortcomings. The SoC itself is excellently designed to meet needs well into the future, but the 8 GiB of total system memory on the base model is unfortunately a shortcoming.
Apple’s use of memory compression stops it from being too noticeable, but it’s going to become a problem as application memory requirements grow (like with the Switch).
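The principle behind memory compression is easy to demo. macOS’s compressor uses a WKdm-family algorithm, not zlib, so this Python sketch is only illustrative: repetitive, pointer-heavy heap pages compress dramatically, while already-compressed media compresses not at all, which is why the compressor only delays, rather than prevents, running out of 8GB.

```python
import os
import zlib

PAGE = 16384  # Apple silicon page size in bytes

# Structured, repetitive data stands in for typical heap pages.
compressible = (b"object-header\x00\x00\x00\x00ptr\x00\x00\x00\x00" * 800)[:PAGE]
# Random bytes stand in for already-compressed media (JPEG, video, etc.).
incompressible = os.urandom(PAGE)

for name, page in (("heap-like page", compressible), ("media-like page", incompressible)):
    ratio = PAGE / len(zlib.compress(page))
    print(f"{name}: {ratio:.1f}x compression")
```

The heap-like page shrinks by orders of magnitude; the media-like page actually grows slightly. A workload full of browser tabs sits somewhere in between, which matches how the compressor buys time without changing the 8GB ceiling.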
Oh no, I read the article; I just don’t consider that testing.
It’s not really apt to measure RAM use in a browser on one computer and extrapolate that to another; there’s a lot of complicated RAM and cache management happening in the background.
Testing would involve getting an 8GB Mac and running common tasks to see whether you can measure poorer performance, be it lag, stutters, or frame drops.
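A crude version of that methodology can be sketched in a few lines: time random page-sized writes over a growing working set and look for the latency cliff once the set stops fitting in free RAM and the pager starts compressing or swapping. The sizes below are deliberately tiny so the sketch is harmless to run anywhere; on an actual 8GB machine you’d ramp well past physical memory.

```python
import random
import time

def touch_latency_ns(working_set_mb: int, touches: int = 50_000) -> float:
    """Average nanoseconds to dirty a random page in the working set.
    Under real memory pressure this climbs sharply once pages start
    being compressed or swapped out."""
    page = 4096
    buf = bytearray(working_set_mb * 1024 * 1024)
    offsets = [random.randrange(0, len(buf), page) for _ in range(touches)]
    start = time.perf_counter_ns()
    for off in offsets:
        buf[off] = 1  # the write forces the page resident and dirty
    return (time.perf_counter_ns() - start) / touches

for mb in (64, 128, 256):  # ramp toward/past physical RAM on a real test machine
    print(f"{mb:>4} MB working set: {touch_latency_ns(mb):.0f} ns/touch")
```

It’s not a substitute for measuring real apps (stutter and frame drops matter more than raw touch latency), but it would at least produce the quantitative 8GB-vs-16GB comparison the article skipped.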
You do have a point, but I think the intent of the article is to convey the common understanding that Apple is leaning on sales tactics to sell people a claim that anyone with technical acumen sees through immediately. Regardless of how efficient Mach/Darwin is, it’s still an apples-to-apples comparison (pun intended) of how quickly 8GB fills up in 2024. Those who need a fully quantitative performance measurement between 8GB and 16GB, with enough applications loaded to show the thrashing that sets in, aren’t really the audience. THAT audience is busy reading about gardening tips, lifestyle, and celebrity gossip.
Your 64 gigs of RAM probably uses 10x the power and takes up significantly more space than the memory chips on the M1–M3’s package. And yet it still has less bandwidth than the M1, and on top of that the M1 utilizes it more efficiently than a “normal” desktop or laptop can, since there’s one pool of memory for both RAM and VRAM.
https://en.wikipedia.org/wiki/Apple_M1 (“the M1 SoC has 66.67GB/s memory bandwidth”)
ChatGPT guesstimates 57GB/s for dual-channel DDR4 at 3600MT/s.
$1000 for 8 gigs of RAM in the Air is whatever. $1200 for 8 gigs of RAM in the Pro was not great. But $1600 for 8 gigs of RAM in the new M3 MBP is really awful.
“the M1 utilizes it more efficiently than a ‘normal’ desktop or laptop can since there’s one pool of memory for RAM and VRAM”

That’s not how it works, unfortunately.
A UMA (unified memory architecture) enables zero-copy texture uploads and frame buffer access, but that’s not likely to constitute notable memory savings outside games or GPU-accelerated photo editing. Most of the memory is going to be consumed by applications running on the CPU anyway, and that’s not something that can be improved by sharing memory between the CPU and GPU.
“And yet [your 64 gigs of RAM] still has less bandwidth than the M1”

It’s by necessity that the M1 has higher memory bandwidth. UMA comes with the drawback of the GPU and CPU having to share that memory, and there’s only so much bandwidth to go around. GPU cores are bandwidth-hungry, which is mitigated either by a pile of L2 cache or by giving the system better memory bandwidth.