
CPU architectures: What’s the difference between ARM and x86 and why does it matter?



If you’re buying a new computer, there are two main CPU architectures to choose between. Windows PCs are normally built on the x86 platform, used by Intel and AMD, while Apple’s computers use the company’s own M1 and M2 processors, based on the ARM architecture.

There are fundamental differences between these approaches, with significant implications for performance.

ARM vs x86: Instruction sets

The x86 and ARM processor platforms do the same basic job, but they do it in quite different ways. Their internal logic is wired up in different arrangements, with different configurations of internal data registers and different sets of hard-coded instructions. On a fundamental level, they run software in different ways and use different code.

On the x86 platform, the internal structure and instruction set of the processor ultimately trace back to the Intel 8008, an 8-bit CPU that debuted in 1972. The family has remained remarkably backward compatible ever since: machine code written for the 16-bit chips of the late 1970s can still run on the latest processors from Intel or AMD.

Naturally, though, the hardware has evolved considerably since then. After the 8008 came the 8080, then the 16-bit 8086 and its cut-down sibling the 8088, which powered the original IBM PC. In the 1980s these were followed by the 80186, 80286, and so forth – hence the “x86” moniker. 

Over successive generations, new features have been introduced to support multitasking and virtual memory; support has also been added for 32-bit and 64-bit operations, enabling computers to work efficiently with huge data sets and massive amounts of RAM. A series of extensions accelerates specific tasks such as graphics processing, virtualization, and data encryption.

Apple’s processors are based on the competing ARM architecture. This originated at Acorn Computers in the mid-1980s, at a time when the company was looking to create a successor to the hugely popular BBC Micro. Rather than buying chips from an external supplier, as it had with its previous home computers, the company set out to design a new processor that would outperform existing rivals. And it succeeded: at its launch, the ARM-based Acorn Archimedes was the most powerful home computer money could buy. 

Today the ARM platform is owned and developed by Arm Holdings in Cambridge, and like x86, it’s continued to grow and develop since its inception. Successive versions of the platform have added 64-bit support and numerous extensions to speed up common mathematical operations – including, in the latest ARMv9 release, security and artificial intelligence (AI) features. 

RISC vs CISC: The eternal rivalry

While ARM processors can do anything that x86 can, they have different strengths and weaknesses because they follow a different design philosophy, known as reduced instruction set computer (RISC). Indeed, the ARM name itself originally stood for “Acorn RISC Machine”, and later came to mean “Advanced RISC Machines” as the market expanded beyond its original creator. 

It’s an idea that became popular in the 1980s and 1990s, a period when Intel and other chip makers were building more and more features and functions into the silicon, enabling programmers to execute complex operations with just a few lines of code. These processors came to be called complex instruction set computer (CISC) chips.

The RISC philosophy takes the opposite approach, aiming to make a CPU as simple as possible by reducing it to a bare minimum of basic functions. Thus the original ARM instruction set comprised just 34 instructions, which mostly handled simple mathematical operations and moved data between registers and memory locations. By contrast, the Intel 8086 supported 81 instructions, permitting far more advanced data operations – and with subsequent revisions and extensions the x86 set has ballooned to more than 200 instructions.

The RISC approach may seem counterintuitive. The smaller instruction set means that programs need to be longer and more complex to achieve the same results. However, a RISC chip can have a much simpler physical design than a CISC one. This can make it easier and cheaper to manufacture, and it can tear through instructions at a faster rate – in most cases, every operation is completed in a single clock cycle. It can consume less power too, which is why ARM processors are dominant in smartphones, where battery life is key.
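
To make the trade-off concrete, here’s a minimal sketch in Python of the load/store philosophy. It’s a toy model, not real processor behaviour, and the instruction names are invented for illustration rather than being actual x86 or ARM mnemonics:

```python
# A toy machine illustrating the RISC load/store philosophy.
# Instruction names are invented for this example, not real mnemonics.

memory = {"a": 5, "b": 7}       # pretend RAM locations
registers = {"r0": 0, "r1": 0}

# CISC-style: one complex instruction does the load, add, and store
# internally, taking several clock cycles to complete.
def cisc_add_mem(dest, src):
    memory[dest] = memory[dest] + memory[src]

# RISC-style: only simple, single-cycle steps are available, and data
# must pass through registers, so the same work takes four instructions.
def load(reg, addr):
    registers[reg] = memory[addr]

def add(dst, src):
    registers[dst] = registers[dst] + registers[src]

def store(addr, reg):
    memory[addr] = registers[reg]

# The RISC equivalent of cisc_add_mem("a", "b"):
load("r0", "a")
load("r1", "b")
add("r0", "r1")
store("a", "r0")

print(memory["a"])  # 12 either way
```

The RISC sequence issues four instructions where the CISC machine needs one, but each of those four is simple enough to complete in a single clock cycle – exactly the trade-off described above.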

While the CISC and RISC approaches are opposed, the differences aren’t as important as might be imagined. Few programs are written in pure assembly language these days, so developers don’t need to worry about the underlying architecture: they can write in Python, C#, or an alternative language before letting the interpreter or compiler deal with the translation. In fact, Apple’s ARM-based Macs include a real-time translation layer that lets them run programs written for x86 systems with no modifications.
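
That portability is easy to demonstrate. The short Python snippet below uses only the standard library, runs unchanged on either platform, and simply reports which architecture the interpreter happens to be executing on:

```python
import platform

# platform.machine() reports the underlying CPU architecture:
# typically "x86_64" or "AMD64" on Intel and AMD systems, and
# "arm64" or "aarch64" on Apple Silicon and other ARM machines.
arch = platform.machine().lower()

if arch in ("x86_64", "amd64"):
    print(f"Running on an x86 processor ({arch})")
elif arch in ("arm64", "aarch64"):
    print(f"Running on an ARM processor ({arch})")
else:
    print(f"Running on another architecture: {arch}")
```

Interestingly, an x86 build of Python running under Apple’s translation layer will report x86_64 even on ARM hardware – a measure of how transparent the translation is.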

The differences in power consumption are also smaller than they used to be. For many years, Intel struggled to match the low power consumption of ARM chips, not only because of the complexity of its CPU designs but also because its in-house manufacturing facilities were unable to reduce the size of the transistors inside its chips as fast as its rivals. That’s been a point of some embarrassment: the very latest Intel chips are still using a 10nm fabrication process (dubbed “Intel 7”), while Apple’s M-series processors have been using a 5nm process since their launch in 2020.

To help out, Intel’s 12th generation Alder Lake processors, released at the end of 2021, introduced a heterogeneous core design. Where previous Intel chips typically featured four or eight identical cores, current models combine lightweight ‘efficiency cores’ (E-cores) with powerful ‘performance cores’ (P-cores) that only roar into life when they’re needed for the most demanding tasks. This idea was actually pioneered by Arm – it introduced what it called the “big.LITTLE” design in 2011 – but now that Intel has got on board, we’re frequently seeing Windows laptops that can provide more than ten hours of video playback.
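
There’s no single cross-platform way to count P-cores and E-cores, but operating systems do expose the split. The sketch below is a hedged example for Apple Silicon Macs, assuming the hw.perflevel0 and hw.perflevel1 sysctl keys that recent versions of macOS use for performance and efficiency cores respectively; on other systems it simply reports that the information isn’t available:

```python
import subprocess

def sysctl_int(key):
    """Read an integer sysctl value on macOS; None if unavailable."""
    try:
        out = subprocess.run(["sysctl", "-n", key],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.strip())
    except (subprocess.CalledProcessError, FileNotFoundError, ValueError):
        return None

# Assumed macOS keys: perflevel0 = performance, perflevel1 = efficiency.
p_cores = sysctl_int("hw.perflevel0.physicalcpu")
e_cores = sysctl_int("hw.perflevel1.physicalcpu")

if p_cores is not None and e_cores is not None:
    print(f"{p_cores} performance cores, {e_cores} efficiency cores")
else:
    print("Heterogeneous core counts not exposed on this system")
```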

Which tech companies build processors?

Beyond the instruction sets themselves, another notable difference is that, unlike Intel, Arm doesn’t make any processors of its own. Rather, the company licenses its designs to companies that can then customize them as desired and have them manufactured to their own specification. In the case of Apple Silicon, Apple uses the core ARM logic but adds many of its own optimizations, and outsources the manufacturing to TSMC.

The way chips are marketed to end users is different too. While all of Intel’s x86 processors use the same underlying architecture, it’s offered in an enormous number of different configurations. Within each generation of Core CPUs there are Core i3, i5, i7, and i9 variants, which further subdivide into ranges of different models aimed at mobile, desktop, or gaming systems. They all have different numbers of processing cores, different amounts of cache memory, different clock speeds, and different power requirements. It’s confusing, and when you’re choosing a computer there’s a real risk of ending up with a model that’s underpowered for your needs.

Apple, by contrast, offers seven computer chips in total, namely the M1, M1 Pro, M1 Max, M1 Ultra, M2, M2 Pro, and M2 Max, at the time of writing. It’s a much simpler lineup than Intel’s, and even the regular M1 is competitive with a mid-range Intel chip.

How ARM and x86 CPUs access RAM

There’s one last difference between Apple’s chips and Intel’s – and this one isn’t intrinsic to the ARM architecture, but is a design decision Apple has taken itself. Where Intel’s chips rely on external system RAM, Apple builds the memory into the same package as the processor die in its M-series chips. 

This means you can’t ever upgrade the memory on an Apple Silicon computer, which can lead to some agonizing decisions when it comes to choosing a specification. It also means that really large allocations of memory aren’t available at all on the mainstream chips: the M1 is offered with a maximum of 16GB of RAM, while the M2 is limited to 24GB. If you want 32GB or more you need to move up to an expensive M1 Pro, Max or Ultra system. For comparison, all of Intel’s 12th and 13th generation processors can use up to 128GB of RAM.

However, because Apple’s RAM is literally located right next to the processor logic, and connected to it via the fastest possible fabric, its processors can access code and data extremely quickly and efficiently. The standard M1 boasts a maximum memory bandwidth of 68GB/sec, while the M2 goes up to 100GB/sec, and the M1 Pro, Max, and Ultra models go up to 200GB/sec, 400GB/sec, and 800GB/sec respectively. With Intel, it all depends on the specifics of the processor, the RAM, and the motherboard, but even the newest, fastest Core i9 is limited to a theoretical maximum of 90GB/sec. 
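
Those headline figures follow directly from the width and speed of the memory interface: peak bandwidth is simply the transfer rate multiplied by the bus width. The quick calculation below assumes the commonly reported configurations – LPDDR4X-4266 on a 128-bit bus for the M1, LPDDR5-6400 on a 128-bit bus for the M2, and dual-channel DDR5-5600 for a current Core i9 – none of which are spelled out above:

```python
def peak_bandwidth_gb(mt_per_sec, bus_width_bits):
    """Peak memory bandwidth in GB/s: transfer rate x bus width in bytes."""
    return mt_per_sec * 1e6 * (bus_width_bits / 8) / 1e9

print(f"M1 (LPDDR4X-4266, 128-bit): {peak_bandwidth_gb(4266, 128):.1f} GB/s")  # ~68
print(f"M2 (LPDDR5-6400, 128-bit):  {peak_bandwidth_gb(6400, 128):.1f} GB/s")  # ~102
print(f"i9 (2x DDR5-5600 channels): {peak_bandwidth_gb(5600, 128):.1f} GB/s")  # ~90
```

The Pro, Max, and Ultra figures fall out of the same arithmetic, assuming their reported 256-, 512-, and 1,024-bit buses.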

What’s more, Apple uses what it calls a “unified memory architecture”, which means the whole range of memory can be directly accessed by either the CPU or the on-die GPU. This provides huge efficiency benefits compared to a conventional PC architecture, where the CPU and GPU each have separate memory banks, and can’t work together on the same data without copying it back and forth.

What makes AMD better than Intel?

There’s a third major player in the CPU market beyond Intel and Apple. AMD’s chips don’t have such a distinct identity, however, as they use the same core x86 architecture and instruction set as Intel.

Intel and AMD’s symbiotic relationship


Why does Intel let its biggest rival use its proprietary architecture? In the early 1980s, IBM wanted to use Intel’s chips in the original IBM PC, but didn’t want to be reliant on one source of silicon.

It told Intel it'd use x86 processors only if a second company could make hardware under license. AMD was authorized to build Intel 8086, 80186 and 80286 processors. Later, AMD created its own chip designs to rival Intel’s. The K5 and K6, released in the late 1990s, provided x86 compatibility at a lower price than Intel’s Pentium processors. 

After 2000, AMD grafted a new 64-bit processing mode onto the x86 architecture, extending it to work with larger numbers, bigger data sets, and more RAM. Intel licensed these extensions, and the two companies are now effectively reliant on each other.
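
The scale of that extension is easy to quantify: a 32-bit register can address at most 2^32 bytes of memory, while a 64-bit register can address 2^64. In Python:

```python
# Maximum directly addressable memory for each register width
print(f"32-bit: {2**32 / 2**30:.0f} GiB")  # 4 GiB
print(f"64-bit: {2**64 / 2**60:.0f} EiB")  # 16 EiB, about 17 billion GiB
```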

Although AMD’s processors can run the same programs as Intel’s, there are some key differences. AMD sells its own chips, but it doesn’t manufacture them itself; this means it can use whichever foundry offers the best technology. While the first two generations of Ryzen CPUs were produced by Global Foundries, AMD switched to TSMC in 2019 to take advantage of its 7nm fabrication process, and the latest Ryzen 7000-series chips use the company’s 5nm process. That helps AMD chips spend more of their time running at the highest frequencies before they need to slow down and cool off.

AMD’s designs also frequently pack in more cores than similarly priced Intel chips, in part thanks to AMD’s ‘chiplet’ approach. Rather than build everything onto one die, it breaks the design down into multiple smaller dies – chiplets – each containing a group of cores, which it then connects together along with shared resources such as cache memory. Raw core counts can be misleading, though, as both companies use multithreading technologies that allow a single core to service two execution threads at once. Matters are further confused by Intel’s recent adoption of efficiency cores, which don’t contribute to peak performance.
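
You can see the physical-versus-logical gap on your own machine with the third-party psutil package (pip install psutil); on a chip with two-way multithreading, the logical count is typically double the physical one:

```python
import os
import psutil  # third-party: pip install psutil

physical = psutil.cpu_count(logical=False)  # actual hardware cores
logical = psutil.cpu_count(logical=True)    # hardware threads

print(f"Physical cores:  {physical}")
print(f"Logical threads: {logical}")
print(f"os.cpu_count():  {os.cpu_count()}")  # reports the logical figure
```

Note that on a hybrid Intel chip the relationship is less tidy, since only the P-cores are multithreaded.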

Overall, though, you’ll normally get more multicore processing power from an AMD chip – and to support those cores, AMD tends to provide more on-chip memory than Intel. While Ryzen processors don’t bring the whole RAM allocation into the processor package, as Apple’s chips do, they generally have large caches that help them keep processing data and instructions at full speed, without having to wait for information to be fetched from the DIMMs.

The only question is how valuable multicore performance really is. Big database servers and graphics rendering programs may benefit enormously from parallel processing power, but many desktop applications are mostly single-threaded. In practice, you might get a better experience with fewer, faster cores.
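
One way to gauge where your own workload sits is to time a CPU-bound task run serially against the same task spread across cores. A minimal sketch using only the standard library:

```python
import time
from multiprocessing import Pool

def busy(n):
    """A deliberately CPU-bound task: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [5_000_000] * 8

    start = time.perf_counter()
    serial = [busy(n) for n in jobs]
    print(f"Serial:   {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with Pool() as pool:  # one worker per logical core by default
        parallel = pool.map(busy, jobs)
    print(f"Parallel: {time.perf_counter() - start:.2f}s")

    assert serial == parallel
```

On a task like this the extra cores pay off handsomely; on a workload that can’t be split up, they’d sit idle.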


Darien began his IT career in the 1990s as a systems engineer, later becoming an IT project manager. His formative experiences included upgrading a major multinational from token-ring networking to Ethernet, and migrating a travelling sales force from Windows 3.1 to Windows 95.

He subsequently spent some years acting as a one-man IT department for a small publishing company, before moving into journalism himself. He is now a regular contributor to IT Pro, specialising in networking and security, and serves as associate editor of PC Pro magazine with particular responsibility for business reviews and features.

You can email Darien at [email protected], or follow him on Twitter at @dariengs.
