How “Unified Memory” Speeds Up Apple’s M1 ARM Macs

An Apple M1 chip. (Image: Apple)

Apple is rethinking how components should exist and function inside a laptop. With the M1 chips in the new Macs, Apple has a new “unified memory architecture” (UMA) that dramatically speeds up memory performance. This is how memory works on Apple Silicon.

How Apple Silicon handles RAM

In case you haven’t heard the news already, Apple announced a new Mac roster in November 2020. The new MacBook Air, MacBook Pro, and Mac mini models use a custom ARM-based processor designed by Apple, called the M1. The change was a long time coming, and it’s the culmination of Apple’s decade of designing ARM processors for the iPhone and iPad.

The M1 is a system on a chip (SoC), which means the package contains not just a processor but other key components as well, including the GPU, the I/O controllers, Apple’s Neural Engine for AI tasks, and, most importantly for our purposes, the physical RAM. To be clear, the RAM is not on the same piece of silicon as the fundamental parts of the SoC. Instead, it sits alongside them, as pictured above.

Adding RAM to an SoC package is nothing new. Smartphone SoCs can include RAM, and Apple has been placing RAM modules alongside its processors since at least 2018. In iFixit’s exploded view of the 11-inch iPad Pro, for example, you can see the RAM sitting next to the A12X processor.

What’s different now is that this approach is also coming to the Mac, a full-fledged computer designed for heavier workloads.

RELATED: What is Apple’s M1 Chip for Mac?

The basics: what are RAM and memory?

Two sticks of DDR4 RAM with black heat sinks. (Image: Corsair)

RAM stands for Random Access Memory. It is the main component of system memory, which is temporary storage space for data that your computer is currently using. It can be anything from files needed to run the operating system, to a spreadsheet you are currently editing, to the contents of open browser tabs.

When you open a text file, your CPU receives that instruction along with the program to use. The CPU then gathers all the data needed for the operation and loads the necessary information into memory. From there, the processor handles changes to the file by accessing and manipulating what’s in memory.
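The load-edit-save cycle described above can be sketched in a few lines of Python. This is a simplified illustration of the general pattern, not how any particular editor is implemented; the filename is made up for the example.

```python
# Minimal sketch of the pattern described above: a program loads a file's
# contents from disk into memory (RAM), edits the in-memory copy, and only
# writes back to disk when it saves.
from pathlib import Path

path = Path("notes.txt")
path.write_text("hello world")      # pretend this file already existed on disk

contents = path.read_text()         # the file's data is now held in memory
contents = contents.replace("hello", "goodbye")  # edits happen in RAM
path.write_text(contents)           # "saving" flushes memory back to disk

print(path.read_text())             # -> goodbye world
```

Until that final save, everything the program does happens against the copy in RAM, which is why memory speed matters so much for everyday responsiveness.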

Typically, RAM exists in the form of those long, thin sticks that fit into specialized slots on your laptop or desktop motherboard, as pictured above. RAM can also be a simple square or rectangular module soldered to the motherboard. Regardless, RAM for PC and Mac has always been a discrete component with its own space on the motherboard.

M1 RAM: the discreet roommate

A diagram showing the different parts of the M1 processor. (Image: Apple)

The physical RAM modules are still separate components, then, but they sit on the same green substrate as the processor. “Big whoop,” I hear you say. “What’s the big deal?” Well, for starters, it means faster access to memory, which improves performance. On top of that, Apple has changed the way memory is used in the system.

Apple calls its approach a “unified memory architecture” (UMA). The basic idea is that the M1’s RAM is a single pool of memory that all parts of the processor can access. That means, first, that if the GPU needs more system memory, its share can grow while other parts of the SoC use less. Better yet, there’s no need to carve off portions of memory for each part of the SoC and then shuttle data between those separate spaces. Instead, the GPU, the CPU, and the other parts of the processor can access the same data at the same memory address.

To understand why this matters, consider a rough outline of how a video game runs on a traditional PC with a discrete graphics card. The CPU first receives all the instructions for the game, then offloads the data the GPU needs onto the graphics card. The graphics card then takes all of that data and works on it with its own processor (the GPU) and its own onboard RAM.

Even if you have a processor with integrated graphics, the GPU usually keeps its own dedicated slice of memory, just like the CPU. The two work on the same data independently and then shuttle results between their memory fiefdoms. If you remove the requirement to move data back and forth, it’s easy to see how keeping everything in the same shared workspace could improve performance.
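A toy Python sketch makes the difference concrete. This is a conceptual model, not Apple’s or any GPU driver’s actual code: the “discrete” workflow pays for copies across two memory pools, while the “unified” workflow operates on one shared buffer.

```python
# Toy model contrasting the two approaches described above. Doubling each
# value stands in for whatever work the GPU would actually do.

def discrete_workflow(data):
    cpu_buffer = list(data)                    # data lives in system RAM
    gpu_buffer = list(cpu_buffer)              # copy #1: system RAM -> VRAM
    gpu_buffer = [x * 2 for x in gpu_buffer]   # "GPU" computes on its copy
    result = list(gpu_buffer)                  # copy #2: VRAM -> system RAM
    return result, 2                           # two cross-pool copies needed

def unified_workflow(data):
    shared = list(data)                        # one pool; CPU and GPU both see it
    for i, x in enumerate(shared):             # "GPU" writes results in place
        shared[i] = x * 2
    return shared, 0                           # no cross-pool copies at all

print(discrete_workflow([1, 2, 3]))   # -> ([2, 4, 6], 2)
print(unified_workflow([1, 2, 3]))    # -> ([2, 4, 6], 0)
```

Both paths produce the same answer; the unified path just skips the copies, which is exactly where the performance win comes from.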

For example, here’s how Apple describes its unified memory architecture on the official M1 website:

“M1 also features our unified memory architecture, or UMA. M1 unifies its high-bandwidth, low-latency memory into a single pool within a custom package. As a result, all of the technologies in the SoC can access the same data without copying it between multiple pools of memory. This dramatically improves performance and power efficiency. Video apps are snappier. Games are richer and more detailed. Image processing is lightning fast. And your whole system is more responsive.”

And it’s not just that each component can access the same data in the same place. As Chris Mellor points out at The Register, Apple is also using high-bandwidth memory here. The memory sits closer to the processor (and the other components), and it’s simply faster to access than a traditional RAM chip connected to the motherboard through a socket interface.

Apple isn’t the first company to try unified memory

An NVIDIA diagram from the early days of the company’s Unified Memory feature, showing how CPU and GPU cores can use it. (Image: NVIDIA)

Apple is not the first company to tackle this problem. NVIDIA, for example, began offering developers a hardware-and-software feature called Unified Memory about six years ago.

For NVIDIA, Unified Memory provides a single memory space that is “accessible from any processor in a system.” In NVIDIA’s world, the CPU and GPU go to the same place for the same data. Behind the scenes, however, the system pages the required data back and forth between separate CPU and GPU memory.
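That behind-the-scenes paging can be modeled with a small Python class. This is a conceptual sketch, not NVIDIA’s implementation: the program sees one logical buffer, but each page physically lives in either the CPU pool or the GPU pool and migrates when the “wrong” processor touches it.

```python
# Toy model of on-demand page migration: one logical address space whose
# pages move between separate CPU and GPU pools on first access.

class ManagedBuffer:
    def __init__(self, data, page_size=2):
        self.page_size = page_size
        self.pages = [data[i:i + page_size]
                      for i in range(0, len(data), page_size)]
        self.location = ["cpu"] * len(self.pages)  # all pages start on the CPU
        self.migrations = 0

    def read(self, device, index):
        page = index // self.page_size
        if self.location[page] != device:          # page fault: wrong pool
            self.location[page] = device           # migrate the page over
            self.migrations += 1
        return self.pages[page][index % self.page_size]

buf = ManagedBuffer([10, 20, 30, 40])
buf.read("gpu", 0)        # GPU touches page 0: migrated from the CPU pool
buf.read("gpu", 1)        # same page, already resident: no migration
buf.read("cpu", 1)        # CPU touches it again: migrated back
print(buf.migrations)     # -> 2
```

The code using the buffer never copies anything itself, which is the point of the abstraction, but the migrations still happen underneath.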

As far as we know, Apple isn’t relying on behind-the-scenes tricks like that. Instead, each part of the SoC accesses the exact same location in memory for the data.

The bottom line with Apple’s UMA is better performance, thanks to faster access to RAM and a shared memory pool that removes the penalty of moving data between separate pools.

How Much RAM Do You Need?

The M1-based MacBook Pro

Apple’s solution isn’t all sunshine and happiness, though. Because the M1 integrates the RAM modules so deeply, you cannot upgrade the memory after purchase. If you choose an 8 GB MacBook Air, you won’t be able to increase that machine’s RAM later. To be fair, upgrading RAM isn’t something you’ve been able to do on a MacBook for quite some time now. It was something previous Mac minis allowed, however, and the newer M1 version doesn’t.

Early M1 Macs also top out at 16 GB: you can get an M1 Mac with 8 GB or 16 GB of memory, but nothing more. It’s no longer just a matter of sticking another RAM module into a slot.

So how much RAM do you need? For Windows PCs, the general advice is that 8 GB is more than enough for basic computing tasks. Gamers are well advised to step up to 16 GB, and “prosumer” work, such as editing large video files at high resolution, likely calls for doubling that again.

Likewise, with the M1 Macs, the base model with 8 GB should be sufficient for most people. In fact, it may cover even fairly demanding everyday use. It’s hard to say for sure, though, since most of the testing we’ve seen puts the M1 through synthetic benchmarks that push the CPU or GPU.

What really matters is an M1 Mac’s ability to keep multiple programs and a multitude of browser tabs open at once. That’s not purely a test of the hardware, mind you, because software optimizations can go a long way toward improving that kind of performance, which is why reviewers have focused on benchmarks that truly push the hardware. Ultimately, though, we’ll assume most people just want to see how the new Macs handle “real world” use.

Stephen Hall at 9to5Mac saw impressive results with an M1 MacBook Air with 8 GB of RAM. To get the laptop to start faltering, he had to open a Safari window with 24 website tabs, six more Safari windows playing 2160p video, and Spotify in the background, and then take a screenshot. Only at that point, he said, did the machine finally stumble.

At TechCrunch, Matthew Panzarino went even further with an M1 MacBook Pro with 16 GB of RAM. He opened 400 tabs in Safari (plus a few other programs), and the machine kept working with no problems. Interestingly, he tried the same experiment with Chrome, and Chrome simply died; the rest of the system, he said, continued to perform well despite the trouble with Google’s browser. He even noticed during his testing that the laptop dipped into swap space at one point, with no noticeable drop in performance.

When your PC runs out of RAM, it uses available SSD or hard drive storage as a temporary memory pool. That usually brings a noticeable slowdown in performance, but apparently not on M1 Macs.

These are casual everyday experiences, not formal tests. Still, they’re probably representative of what to expect from heavy daily use, and given the revamped approach to memory, 8 GB of RAM should be fine for most people who don’t open browser tabs by the hundreds.

However, if you find yourself editing large images or multi-gigabyte video files while browsing a few dozen tabs and playing a movie in the background on an external monitor, then the 16 GB model is probably the better choice.

This isn’t the first time Apple has redesigned its Mac systems and migrated to a new architecture, either.

RELATED: Deja Vu: A Brief History of Each Mac Processor Architecture
