The Component Club

ARM Wants To Be In 50% Of Datacenters



The Growing Challenge to x86

For decades, Intel's x86 architecture has been the heavyweight champ of the computing world. From desktop PCs to data centers, x86 has been the go-to engine driving workloads large and small, and there’s a very good reason for that: raw, unfiltered compute power. When engineers needed muscle, Intel delivered, often at the expense of power efficiency, but with the kind of performance that made everything else a compromise.

Of course, x86 hasn’t been alone on the stage. ARM has been in the background for just as long, quietly powering mobile phones, embedded devices, and increasingly, wearables and IoT applications. RISC-V, the open-source newcomer, is gaining traction as well, offering a blank slate to anyone who wants to design a custom processor. Both of these architectures have historically been praised for their low power usage, efficient instruction sets, and ease of integration into compact, thermally constrained systems.

For years, no one mistook them for performance kings; ARM and RISC-V were the quiet, energy-efficient workhorses of the portable electronics world. But now something has changed: the nature of computation itself is shifting.

Modern workloads are more parallel, more distributed, and often don’t require the kind of single-threaded brute force that x86 was built for. AI inference, edge computing, cloud-native services, and even high-end mobile development all benefit from architectures that are efficient first and powerful second. This shift is giving ARM and RISC-V an unexpected edge.

In response, engineers are getting creative. ARM, once confined to phones and embedded boards, now powers Apple’s M-series chips, blisteringly fast CPUs that go toe-to-toe with the best Intel and AMD have to offer.

Meanwhile, on the open front, RISC-V is slowly evolving from a research darling into a viable platform for custom silicon, with real potential in servers, HPC, and even data centers. Multiple startups, and even some government-backed entities, are exploring RISC-V cores with wide SIMD units, hardware accelerators, and server-grade memory controllers, looking to eat Intel’s lunch in the 24/7 compute market.

Of course, x86 still holds the lion’s share of the software ecosystem, the legacy footprint, and the manufacturing pipeline. But the ground is shifting, engineers today have more choices than ever before, and many are choosing architectures not because they’re better outright, but because they’re better for the job at hand.

ARM looks to provide 50% of data center CPUs

Arm Holdings (NASDAQ: ARM) is setting an audacious goal: to command 50% of the global data center CPU market by the end of 2025. As AI infrastructure spending balloons, the company is betting big on its scalable chip architecture and a growing list of hyperscaler clients. But with current metrics falling short of that ambitious benchmark, analysts remain cautiously optimistic.

According to recent statements from company executives, Arm expects its chip architectures, used under license by major cloud providers, to power half of the world’s AI data center CPUs by year’s end. That would be a sharp rise from the ~15% market share reported in 2024, and well ahead of the ~21% share IDC projects for Arm-based server shipments in calendar year 2025.

This optimism comes as Arm reports explosive growth in customer adoption: the company claims a 14x increase in data center customers since 2021, with wins at Amazon Web Services (AWS), Google Cloud (GCP), Microsoft Azure, and Nvidia. Notably, Nvidia’s Grace CPU and AWS’s Graviton chips are based on Arm’s Neoverse V2 architecture, showing how ARM is expanding its foothold in both general-purpose and AI-specific server workloads.

Despite real momentum, some analysts, including Uttam Dey of Seeking Alpha, are skeptical. Arm’s most recent annualized contract value (ACV) rose to $1.37 billion, up 15.5% year-over-year, and FY25 revenue surpassed $4 billion, a 24% increase from the previous year. However, these figures fall short of the hypergrowth needed to justify a leap to 50% market share in under 12 months.

Research conducted by IDC supports long-term optimism, with Arm-based accelerated servers projected to grow at a 26.3% CAGR over the next five years. Yet even with strong growth in key metrics, it remains unlikely Arm can more than triple its current data center share in the next two quarters.

Arm has momentum, marquee clients, and growing credibility in a space long dominated by x86 incumbents. But transforming that into a majority market share, especially within the constraints of 2025, may be a stretch.

What would an ARM-dominated future look like?

It’s no longer a fringe idea: ARM could very well take the CPU crown. The research shows it, the market is certainly inching that way, and the engineering advantages are undeniable. From battery-powered IoT sensors to high-density cloud servers, ARM is positioning itself to be the dominant compute architecture across the spectrum.

Unlike x86, which is tightly controlled by Intel and AMD, ARM is licensable, meaning that anyone from a startup to a hyperscaler can build custom silicon using ARM’s IP blocks. Want to optimize for thermal footprint, latency, memory bandwidth, or AI acceleration? ARM makes that possible, and with solid software support and a mature toolchain ecosystem, the barriers to adoption are lower than ever.
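That toolchain maturity is easy to demonstrate: mainstream Linux distributions ship ready-made AArch64 cross compilers, so ordinary C code can target ARM with little more than a change to the build command. The sketch below is a deliberately trivial example; the toolchain and package names shown (aarch64-linux-gnu-gcc, gcc-aarch64-linux-gnu) are the common Debian/Ubuntu ones and may differ on other systems.

/* hello_arm.c - a trivial program used to illustrate cross-compilation.
 *
 * Native build:        gcc hello_arm.c -o hello_arm
 * Cross build (ARM64): aarch64-linux-gnu-gcc hello_arm.c -o hello_arm_aarch64
 *   (assumes a Debian/Ubuntu-style "gcc-aarch64-linux-gnu" cross toolchain)
 * Clang alternative:   clang --target=aarch64-linux-gnu hello_arm.c -o hello_arm_aarch64
 */
#include <stdio.h>

int main(void)
{
    printf("Hello from a portable C program.\n");
    return 0;
}

The point is less the program than the workflow: the same source feeds an x86 or an ARM binary, and the cross compiler does the rest.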

In practical terms, an ARM-dominated future looks a lot like what we’re already seeing at AWS and Apple: lots of smaller, efficient cores, tuned to specific tasks, working in highly parallel configurations. Instead of dumping more heat and watts into single-threaded performance, ARM platforms tend to scale out, meaning more cores, more concurrency, and lower energy per operation.
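To make the scale-out model concrete, here is a minimal sketch in portable C: a large array is split across a pool of worker threads, each owning an independent slice, and the partial results are combined at the end. The thread count and array size are arbitrary illustrative values, not figures tied to any particular ARM platform.

/* scale_out_sum.c - a minimal sketch of a scale-out workload:
 * split a large array across many worker threads (one per "small core")
 * instead of relying on a single fast thread.
 *
 * Build: gcc -O2 -pthread scale_out_sum.c -o scale_out_sum
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 8          /* stand-in for "many efficient cores" */
#define N           (1 << 22)  /* ~4M elements of dummy data */

static double data[N];

struct slice {
    size_t start, end;  /* half-open range owned by one worker */
    double partial;     /* this worker's partial sum */
};

static void *worker(void *arg)
{
    struct slice *s = arg;
    double sum = 0.0;
    for (size_t i = s->start; i < s->end; i++)
        sum += data[i];
    s->partial = sum;
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    struct slice slices[NUM_THREADS];
    size_t chunk = N / NUM_THREADS;
    double total = 0.0;

    for (size_t i = 0; i < N; i++)
        data[i] = 1.0;  /* fill with ones so the expected total is N */

    /* Fan the work out: each thread gets an independent slice. */
    for (int t = 0; t < NUM_THREADS; t++) {
        slices[t].start = t * chunk;
        slices[t].end   = (t == NUM_THREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&threads[t], NULL, worker, &slices[t]);
    }

    /* Gather the partial results. */
    for (int t = 0; t < NUM_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += slices[t].partial;
    }

    printf("total = %.0f (expected %d)\n", total, N);
    return 0;
}

The same pattern scales from 8 threads on a laptop to hundreds of threads on a many-core server: the work is divided, not accelerated per thread, which is exactly the trade-off scale-out architectures are built around.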

That shift in compute model could unlock major gains across the board. Servers could hit core counts that seem ridiculous today, with thermal and power demands that are actually manageable. Cloud providers could squeeze more workloads into the same racks, edge devices could run heavier inference tasks without burning through their batteries, and at the data center level, AI training and inference could get cheaper and more energy-efficient.

That said, this kind of architectural pluralism comes with a price. With ARM, RISC-V, and even some niche players gaining ground, software compatibility becomes the next battlefield. A truly multi-ISA world likely means we’ll need better abstraction layers or binary translation to allow code to run fluidly across architectures.
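Today, much of that burden still falls on the source code itself. The short sketch below shows the crudest form of multi-ISA handling: compile-time branches keyed off the standard predefined macros (__aarch64__, __x86_64__, __riscv) that GCC and Clang expose, which is how many portable codebases select architecture-specific fast paths right now, and exactly the kind of friction better abstraction layers would hide.

/* isa_report.c - a small sketch of per-ISA code paths selected at compile time.
 * The same source builds for x86-64, AArch64, and RISC-V; the preprocessor
 * picks the architecture-specific branch.
 *
 * Build: gcc isa_report.c -o isa_report   (or any of the cross compilers above)
 */
#include <stdio.h>

int main(void)
{
#if defined(__aarch64__)
    puts("Built for AArch64 (64-bit ARM): NEON/SVE fast paths would go here.");
#elif defined(__x86_64__) || defined(_M_X64)
    puts("Built for x86-64: SSE/AVX fast paths would go here.");
#elif defined(__riscv)
    puts("Built for RISC-V: vector-extension fast paths would go here.");
#else
    puts("Unknown ISA: falling back to plain portable C.");
#endif
    return 0;
}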

But a world dominated by ARM would also mean less dependence on Intel and AMD, giving customers more pricing power, design freedom, and potentially better long-term silicon roadmaps. No more waiting for one vendor’s fab process to catch up, and certainly no more begging for power envelopes that fit your form factor.

An ARM-led era wouldn’t just mean smaller chips; it would mean a smarter, more flexible compute economy. And frankly, we need that.





About The Author

Robin Mitchell is an electronics engineer, entrepreneur, and the founder of two UK-based ventures: MitchElectronics Media and MitchElectronics. With a passion for demystifying technology and a sharp eye for detail, Robin has spent the past decade bridging the gap between cutting-edge electronics and accessible, high-impact content. Through MitchElectronics Media, Robin leads a technical content agency that specializes in crafting engaging, accurate, and engineer-driven media for the electronics industry. His work includes technical articles, whitepapers, tutorials, and product spotlights—trusted by engineers, technicians, and B2B marketers alike. From datasheets to design stories, Robin knows how to turn complexity into clarity. At the same time, Robin operates MitchElectron...



