AMD vs Intel: Which is better for 2019 and beyond?
With Computex and E3 2019 behind us, now is a great time to look at the AMD vs Intel tug-of-war. Both introduced new products, with AMD slated to launch new CPUs and GPUs in July. Intel won’t launch its new tenth-generation processors until the end of 2019. Intel’s new line of discrete GPUs won’t appear until sometime in 2020.
The state of AMD
AMD entered the x86 processor market as a sub-contractor for Intel. The contract allowed AMD to use Intel’s x86-based 8086 design to manufacture processor clones. These chips would help Intel fulfill orders for IBM’s new PCs.
Once Intel introduced its 32-bit processor, the arrangement stalled, forcing both companies into a legal battle spanning years. AMD went back to making clones until 1996, when it launched its first in-house x86-based processor, the AMD K5.
AMD introduced the first consumer-facing 64-bit processor, the Athlon 64, in 2003. It also launched the Athlon 64 FX for enthusiasts and the Opteron chip for servers. The company’s first consumer-facing dual-core chip, the Athlon 64 X2, arrived in 2005. Its first four-core chip, Phenom, arrived two years later. The Athlon and Phenom desktop parts appeared to be AMD’s prime focus.
That changed when Apple ignited the mobile boom with its first iPhone and iPad.
The Bulldozer years
After a brief presence between 2006 and 2008, AMD rebooted its mobile efforts with Fusion. This initiative introduced AMD’s very first Accelerated Processing Unit, or APU, cramming CPU cores and GPU cores into one chip. It also started a chain reaction that would see AMD fall behind Intel in the desktop space until 2016. AMD’s APU efforts essentially dominated the Bulldozer years.
For example, between 2011 and 2016, AMD’s only desktop efforts were the FX-branded chips. Codenamed Zambezi and Vishera, they were based on AMD’s Bulldozer architecture (Piledriver was a revised Bulldozer). Meanwhile, Intel cranked out desktop and laptop chips every year. The company also focused on the enterprise sector given the uncertainty of desktops.
In AMD’s defense, naysayers predicted tablets and smartphones would kill the desktop and laptop markets. Instead, Ultrabooks, 2-in-1s, and detachables seemingly saved the PC industry and nearly killed tablets in the process. Still, OEMs mostly stick with Intel-based chips in PCs while turning to ARM-based solutions in handheld mobile devices.
Yet despite its heavy APU focus, AMD had a master plan.
Consoles and graphics
AMD’s custom APUs based on its Graphics Core Next GPU architecture landed in the Xbox One, Xbox One X, PlayStation 4, and PlayStation 4 Pro. Developers working on x86-based PCs could now create games that ran across console and PC with far less porting work involved. High-definition PC gaming finally returned.
That’s the flip side to AMD’s minimal CPU presence during the Bulldozer years: it’s also a graphics card manufacturer. AMD acquired ATI Technologies in 2006 and began producing add-in graphics cards for desktops. Intel, by comparison, doesn’t plan to enter the discrete GPU race until 2020.
AMD’s first-generation Graphics Core Next (GCN) architecture appeared in 2012’s Radeon HD 7000 “Southern Islands” add-in card family. The Radeon RX Vega series concluded AMD’s GCN era with the 7nm Radeon VII graphics card manufactured by Gigabyte, Sapphire, XFX, and more.
Similar to its mobile focus, AMD targeted mainstream graphics during the two years between Radeon RX 300 and Radeon RX Vega. That gave Nvidia room to dominate the desktop and notebook spaces with its GeForce GTX 900 and GTX 10 Series. Meanwhile, AMD released budget-friendly RX 400 and 500 cards, bringing low-end VR and Full HD graphics to mainstream desktops.
Meet Zen and Vega
Looking back, AMD experienced a four-year gap between its FX “Vishera” CPU family and its Ryzen 1000 chips. About a year passed between AMD’s final high-end Radeon RX 300 desktop add-in GPUs and its budget-friendly Radeon RX 400 family.
In that time, AMD secretly worked on a new from-scratch CPU architecture. Called Zen, it would set the company back on a competitive course.
AMD also developed the Vega graphics platform based on its fifth-generation GCN architecture. This design served as AMD’s high-end successor to the RX 300 family. In addition, AMD created Radeon DNA (RDNA), the company’s first from-scratch GPU architecture since GCN’s introduction in 2012.
According to AMD, Ryzen CPUs can match the performance of Intel processors at half the cost. While that sounds great, there’s one notable caveat: Ryzen desktop chips do not include integrated graphics. If you’re a PC gamer, that likely won’t matter given you’ll want a dedicated graphics card anyway. If you don’t need a discrete GPU, most Intel desktop and mobile CPUs include integrated graphics, and AMD’s Ryzen-branded APUs do as well.
The original Ryzen 1000 series relies on AMD’s first Zen design using 14nm process technology. The Ryzen 2000 series for desktop relies on an enhanced Zen architecture (aka Zen+) and 12nm process technology.
On the mobile front, AMD’s Ryzen 2000 APUs for laptops and desktops use the original 14nm Zen architecture. The new Ryzen 3000 APUs arriving in July rely on the 12nm Zen Plus (or Zen+) design. Compared to the desktop chips, AMD’s Ryzen-branded APUs are one step behind in Zen’s Gen1-Refresh-Gen2 update model.
The new Ryzen 3000 desktop CPUs are based on AMD’s second-generation Zen architecture (Zen 2) and TSMC’s 7nm process technology. That’s notable given Intel’s 10nm Ice Lake chips won’t appear until the end of 2019. Even more, AMD’s new batch includes the upcoming Ryzen 9 3950X, a 16-core chip clocked up to 4.7GHz for $749. Intel’s closest equivalent, the 16-core Core i9-9960X, costs at least $1,725. Ouch.
But wait! There’s more! AMD’s new Ryzen 3000 series supports PCI Express 4.0 while current Intel products do not. Short for Peripheral Component Interconnect Express, PCI Express is a standard for high-speed connections between the CPU, graphics card, storage, and more. The PCI-SIG approved the PCIe 4.0 specification in October 2017, doubling the per-lane rate to 16GT/s, or roughly 64GB per second of aggregate bandwidth across an x16 link.
Twenty months later, PCI Express 5.0 is now ready for hardware manufacturers. If PCIe 4.0 is any indication, devices supporting the new standard may not arrive for another 20 months. It doubles the per-lane rate again to 32GT/s, promising up to 128GB per second of aggregate bandwidth in an x16 configuration. AMD, Intel, Nvidia, and many others have already pledged to adopt the new standard.
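As a rough sanity check of those throughput figures, effective PCIe bandwidth follows from the per-lane transfer rate, the 128b/130b line encoding used since PCIe 3.0, and the lane count. A minimal Python sketch:

```python
# Usable PCIe bandwidth in one direction for an x16 link.
# PCIe 3.0/4.0/5.0 all use 128b/130b line encoding (older gens used 8b/10b).
def pcie_bandwidth_gb_s(gt_per_s: float, lanes: int = 16,
                        encoding: float = 128 / 130) -> float:
    """Return usable bandwidth in GB/s per direction."""
    bits_per_s = gt_per_s * 1e9 * encoding * lanes
    return bits_per_s / 8 / 1e9  # bits -> bytes -> GB

print(round(pcie_bandwidth_gb_s(16), 1))  # PCIe 4.0 x16 -> 31.5 GB/s
print(round(pcie_bandwidth_gb_s(32), 1))  # PCIe 5.0 x16 -> 63.0 GB/s
```

Note that the commonly quoted 64GB/s and 128GB/s figures count both directions at once; each direction of an x16 link moves roughly half that.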
The performance-per-watt formula
The bottom line with Ryzen is that AMD targets performance-per-watt, delivering more cores and higher clock speeds at a fraction of Intel’s prices. For enthusiasts, AMD offers Ryzen Threadripper CPUs like the 32-core 2990WX for $1,799. Currently, the highest core count in Intel’s enthusiast X-Series CPU family is 18, found in the $1,999 Core i9-9980XE.
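Some back-of-the-envelope math using the list prices and core counts above illustrates AMD’s value pitch (list prices only; street prices and per-core performance vary):

```python
# Price-per-core comparison of the two HEDT flagships mentioned above.
def price_per_core(price_usd: float, cores: int) -> float:
    return round(price_usd / cores, 2)

print(price_per_core(1799, 32))  # Threadripper 2990WX -> 56.22
print(price_per_core(1999, 18))  # Core i9-9980XE      -> 111.06
```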
In the desktop space, AMD is now in a great position. The company doesn’t overload its Ryzen portfolio with an insane number of products. With the Ryzen 2000 Series, AMD provides eight desktop processors, four HEDT processors, ten mobile APUs, and twelve desktop APUs. In Intel’s ninth-generation Coffee Lake refresh family alone, the company sells thirty-four desktop CPUs and nine laptop CPUs. We expect HEDT chips to arrive later this summer.
But keep this in mind: Even though AMD chips are lower in price, they consume more power. Look at this AMD vs Intel comparison:
| Processor | Base speed | Max speed | Power | Price |
|---|---|---|---|---|
| Ryzen 7 3800X | 3.9GHz | 4.5GHz | 105 watts | $399 |
| Core i9-9900K | 3.6GHz | 5.0GHz | 95 watts | $488 |
With Intel Turbo Boost Technology, the Intel chip can hit the 5.0GHz ceiling using two cores. The boost number drops to 4.8GHz using four cores and 4.7GHz using eight cores. Meanwhile, AMD’s Precision Boost 2 technology increases the speed of any number of cores. The increase is based on an analysis of the chip’s current thermal headroom, electrical limits, and utilization, which AMD calls the “reliability triangle.”
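As an illustration only (not Intel’s actual firmware logic), the per-active-core turbo behavior described above can be modeled as a simple lookup; the values for odd core counts are an assumption here, filled in from the next published point:

```python
# Illustrative sketch of per-active-core turbo limits for the Core
# i9-9900K, built from the figures cited above (2 cores -> 5.0GHz,
# 4 cores -> 4.8GHz, 8 cores -> 4.7GHz). Intermediate counts are assumed.
def turbo_limit_mhz(active_cores: int) -> int:
    table = [(2, 5000), (4, 4800), (8, 4700)]  # (max active cores, MHz)
    for max_cores, mhz in table:
        if active_cores <= max_cores:
            return mhz
    raise ValueError("the i9-9900K tops out at eight cores")

print(turbo_limit_mhz(2))  # 5000
print(turbo_limit_mhz(8))  # 4700
```

AMD’s Precision Boost 2, by contrast, picks an opportunistic frequency for however many cores are active based on live thermal and electrical telemetry rather than a fixed table.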
That said, the AMD chip has a base speed advantage while the Intel chip has a higher turbo speed ceiling. And while the Ryzen 7 chip is $89 cheaper, it consumes an additional 10 watts of power. Unfortunately, we don’t have benchmark numbers for comparison given the Ryzen 3000 desktop parts won’t arrive until July.
A new deal with Samsung
AMD is reentering the handheld market thanks to a new deal with Samsung. The company had a brief presence in the non-gaming handheld market after its acquisition of ATI Technologies in 2006. Now it’s back in the game licensing its GPU technology to Samsung.
Before the acquisition, ATI provided two SoCs (system-on-a-chip aka all-in-one processors). Xilleon accelerated video decompression for broadcast networks. Imageon brought integrated graphics to handheld mobile devices supporting 2D and 3D graphics rendering.
After the acquisition, AMD re-branded the chips as AMD Imageon and AMD Xilleon. Two years later, AMD decided to focus primarily on x86-based processors and graphics chips. That meant spinning off its manufacturing operations as GlobalFoundries and selling off its ATI-related SoC divisions. Broadcom purchased the Xilleon technology in 2008, while Qualcomm bought the Imageon technology and re-branded it as Adreno.
The new deal with Samsung brings AMD’s Radeon graphics core technology to Samsung’s Exynos chips used in smartphones and tablets. Samsung typically uses its Exynos chips in devices sold internationally while it relies on Qualcomm Snapdragon chips in North America.
A new deal with Google
Samsung isn’t the only company seeking AMD’s GPU technology. AMD announced in March that Google’s upcoming game streaming service Stadia will utilize custom Radeon-branded GPUs built for datacenters. Based on AMD’s Multiuser GPU technology introduced in 2015, these GPUs include 56 compute units (3,584 stream processors) and dedicated HBM2 memory to produce 10.7 teraflops of graphics processing power.
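Those numbers hang together: GCN packs 64 stream processors per compute unit, and each can retire two floating-point operations per clock via fused multiply-add. A quick check in Python (the derived clock speed is our inference, not an AMD-published figure):

```python
# Rough check of the Stadia GPU figures above.
compute_units = 56
stream_processors = compute_units * 64      # 64 per GCN compute unit
target_tflops = 10.7

# Engine clock implied by the quoted 10.7 TFLOPS (2 FLOPs/clock per SP):
clock_ghz = target_tflops * 1e12 / (stream_processors * 2) / 1e9

print(stream_processors)    # 3584
print(round(clock_ghz, 2))  # 1.49
```

Working backwards, the quoted figure implies an engine clock of roughly 1.5GHz.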
The big misconception during GDC 2019 was that AMD would provide a custom APU like it does for the consoles. That’s not the case. AMD clearly states Google will use its GPUs designed for datacenters. There’s no mention of APUs or AMD-made CPU cores; Google has only described a custom x86 server CPU clocked at 2.7GHz, widely presumed to be built by Intel.
What could be the case is that Google’s datacenters are already filled with Intel-based CPUs. The company likely landed a deal with AMD to install Radeon datacenter GPUs (if they aren’t installed already). Purchasing AMD Opteron APU-based systems may not be ideal given the horsepower needed to run and stream multiple virtual machines. Moreover, AMD’s server APUs target small businesses seeking high performance at a low power cost.
Still, Google Stadia is a big win for both AMD and Intel. Even more, given console games already target custom APUs, there’s no feature trade-off since games run on AMD’s GCN architecture. The only real big “loser” in this scenario is Nvidia.
The state of Intel
Intel really needs no backstory. The doors opened as NM Electronics in 1968, and the name changed to Intel, short for Integrated Electronics, a month later. The x86-based processor era began with Intel’s 8086 design, a variant of which (the 8088) powered IBM’s new personal computer family launched in 1981. Intel’s 80286, 80386, and 80486 microprocessors followed thereafter.
Intel began using a tick-tock production model in 2007. The “tock” represented a change in the CPU microarchitecture while the “tick” crammed the revision into a smaller chip layout. For instance, Intel used its 22nm fourth-generation “Haswell” microarchitecture in processors launched during 2013. Intel’s fifth-generation “Broadwell” CPUs arrived the following year based on a 14nm version of “Haswell.”
The death of Tick Tock
The move to 14nm process technology effectively killed Intel’s Tick-Tock cadence and introduced a new model Intel calls Process-Architecture-Optimization. With its 14nm process node already up and running, Intel designed a new microarchitecture codenamed Skylake. This design served as the foundation for its sixth- through ninth-generation processor families. Intel officially retired tick-tock with the launch of its seventh-generation “Kaby Lake” processors.
Kaby Lake, launched in 2016, relies on the first optimization of Intel’s 14nm process technology (dubbed 14nm+). Intel refreshed Kaby Lake for 2017 in the first wave of eighth-generation mobile processors using the same process node. This updated design increased power efficiency and doubled the core count of the mobile Core i5 family from two to four. The eighth generation didn’t really kick off until Intel’s second Skylake optimization (14nm++), dubbed “Coffee Lake.”
From there we saw a third optimization in 2018 (14nm+++) with “Whiskey Lake,” a mobile-only successor to Kaby Lake Refresh. We also saw the debut of “Amber Lake,” the mobile-only successor to Kaby Lake.
All the while, Intel teased a new processor based on 10nm process technology dubbed Cannon Lake. Still based on Skylake, the eighth-generation chip made an appearance but didn’t go mainstream. What Cannon Lake did accomplish was restart Intel’s Process-Architecture-Optimization engine.
Caffeine and icy waters
That brings us to Intel’s latest processors. Initially launched in October 2018, the ninth-generation family is a refresh of Coffee Lake on the 14nm++ process node. Three desktop CPUs arrived in October followed by six in January during CES 2019 and another twenty-four in April. That rollout number doesn’t even include mobile, server, and HEDT products.
Not stopping there, Intel introduced its tenth-generation “Ice Lake” family during Computex 2019 based on a new “Sunny Cove” architecture. It’s the architecture portion of Intel’s Process-Architecture-Optimization model. The first eleven chips target mobile sporting “U” (ultra-low power) and “Y” (extreme low power) suffixes. You’ll see up to four cores and eight threads, speeds up to 4.1GHz, and GPU speeds up to 1.1GHz.
Unfortunately, we don’t know anything about these chips save for little tidbits provided by Intel. They feature an overhauled integrated GPU architecture (Gen11) promising smooth framerates in Battlefield V at 1080p. They also support DDR4 memory at 3,200MHz. The Intel 300 Series chipsets add Wi-Fi 6 connectivity and Intel Optane Memory support.
Ice Lake CPUs and chipsets are supposedly shipping now to OEMs for laptops arriving during the 2019 holiday season.
AMD vs Intel showdown
As it stands, the AMD vs Intel battle pits third-generation Ryzen “Zen 2” chips against Intel’s ninth-generation “Coffee Lake” products. But as we previously stated, Ryzen 3000 doesn’t ship until July, so we have no benchmarks for comparison.
What we can do is compare a second-generation AMD Ryzen chip with a similar ninth-generation Intel CPU. We dug into Geekbench to find their single- and multi-core scores:
| | Ryzen 7 2700X | Core i9-9900K |
|---|---|---|
| Base speed (GHz) | 3.7 | 3.6 |
| Max speed (GHz) | 4.3 | 5.0 |
| Power | 105 watts | 95 watts |
As the results show, even though the Ryzen 7 2700X has a slightly higher base speed at a reduced cost, it still doesn’t outperform Intel’s Core i9-9900K. That’s a huge argument in the AMD vs Intel debate: Intel’s CPU cores simply execute more instructions per cycle. Even more, AMD’s chip consumes more power and doesn’t ship with integrated graphics. Ultimately, you may be better off spending the extra $205 on the Intel chip.
Let’s do another comparison for laptops:
| | Ryzen 7 2700U | Core i7-8559U |
|---|---|---|
| Base speed (GHz) | 3.3 | 2.7 |
| Max speed (GHz) | 3.8 | 4.5 |
| Power | 25 watts | 28 watts |
Here we see AMD’s second-generation APU consume three watts less power. But despite its higher base speed, the chip falls behind Intel’s eighth-generation laptop CPU in the single-core Geekbench test. It also falls behind in the multi-core test partially due to its lower maximum speed.
According to Intel, “not all cores are created equal, and more cores doesn’t always equate to better overall performance.”
Intel says performance also depends on memory and architecture optimizations. The company made this clear after AMD compared its new second-generation 64-core Epyc “Rome” CPU to Intel’s second-generation 28-core Xeon Platinum 8280 “Cascade Lake” scalable CPU for servers. AMD demonstrated its chip running 2x faster than the Xeon in a benchmark. Intel said AMD didn’t configure the test system correctly, producing lower-than-normal results from the Xeon chip.
Navi vs Xe in 2020
Another problem AMD faces is Intel’s upcoming entry into the add-in graphics card market. Former AMD Radeon chief architect Raja Koduri joined Intel at the end of 2017 to serve as chief architect and senior vice president of a new Core and Visual Computing Group. His first task: Crank out a discrete graphics card by 2020. The deal also saw Radeon cores integrated into Intel-based modules housing Kaby Lake processor cores and HBM2 video memory.
Intel’s new discrete GPUs will be based on its scalable “Xe” architecture, with solutions spanning the datacenter, enthusiast desktops, and notebooks. Expect parallel computing along with hardware-level real-time ray tracing, competing with Nvidia’s just-launched RTX 20 Series GPU family. Nvidia’s older GTX 10 Series only supports ray tracing through GPGPU acceleration or software.
That’s big news, especially for a processor company re-entering the discrete GPU market. Ray tracing on a consumer-based desktop is a big leap anyway, promising photo-realistic rendering without horrible wait times. It’s the New Thing in gaming ignited by Nvidia’s RTX 20 family for desktops and laptops.
AMD CEO Lisa Su talked about hardware and software-based ray tracing in January but didn’t mention anything about ray tracing during her E3 2019 keynote in June. Instead, she revealed the new “Navi” cards arriving July 7, 2019:
| | Radeon RX 5700 XT 50th Anniversary Edition | Radeon RX 5700 XT | Radeon RX 5700 |
|---|---|---|---|
| Performance | 10.14 TFLOPS | 9.75 TFLOPS | 7.95 TFLOPS |
| Competing product | GeForce RTX 2070 | GeForce RTX 2070 | GeForce RTX 2060 |
With Intel entering the discrete GPU space, AMD and Nvidia won’t be the only contenders fighting for your dollars. For Intel, this may be a difficult market to penetrate given AMD and Nvidia’s huge, dedicated customer bases. Hardware-level ray tracing seems like an ace in the hole and a great alternative to Nvidia’s RTX 20 Series. Unfortunately, AMD’s new Radeon RX 5700 Series doesn’t support ray tracing at the hardware level.
So who is winning this war?
AMD vs Intel in Desktops
In the AMD vs Intel battle for the desktop, Intel should continue to dominate for the foreseeable future. Still, AMD poses a significant threat.
AMD’s high core counts and low prices are an attractive selling point. On the flip side, its chips require more power and don’t include integrated graphics. Customers can get a 12-core CPU from AMD for $499 while Intel currently sells no mainstream 12-core chip. The second half of 2019 should see the release of AMD’s third-generation Ryzen chips. We may even see new Threadripper HEDT parts.
Meanwhile, Intel may introduce new X-Series HEDT processors to compete with new Threadrippers. Given Ice Lake CPUs won’t arrive until late 2019, Computex may be the last we hear from Intel in the consumer processor space for a while. Until then, Project Athena, Intel’s Ultrabook successor initiative built around its 10nm CPUs, will likely generate loads of buzz ahead of Ice Lake’s debut.
AMD vs Intel in Laptops
In the AMD vs Intel laptop feud, Intel should continue to dominate simply through momentum.
AMD’s APUs typically resided in budget-friendly laptops until Ryzen’s arrival in 2017. Intel still outnumbers AMD in laptops, but you can find great solutions like Acer’s Predator Helios 500 and Aspire 3. On the Asus front, the ROG Zephyrus and the VivoBook sport Ryzen-branded APUs as well.
Unfortunately, AMD still doesn’t offer an eight-core mobile chip despite all the core-cramming it does in the desktop space. Instead, Intel currently takes the lead with its eight-core i9-9980HK and i9-9880H laptop CPUs. You’ll likely find these in gaming laptops paired with a discrete Nvidia GeForce graphics chip. Heck, Intel sells six-core laptop processors too.
AMD vs Intel – Beyond the PC
Despite its battles with Intel in three major markets, AMD will continue to dominate the console arena. AMD’s close connection to console and PC partners brings it to the gaming forefront on both the hardware and developer ends. With the PlayStation 5 and Project Scarlett using AMD components, this dominance likely won’t change for another five years. Nvidia, meanwhile, has Nintendo.