Integrated Graphics Are About to Get Way Better

Forget buying a dedicated graphics card: you may soon be able to game without one. At least, if you're among the roughly 90% of people who still play at 1080p or below. Recent advances from Intel and AMD mean their integrated GPUs are about to tear up the market for low-end graphics cards.

Why are iGPUs so slow in the first place?

There are two reasons: memory and die size.

The memory part is easy to understand: faster memory means better performance. iGPUs don't get fancy memory technologies like GDDR6 or HBM2; instead, they share the system's RAM with the rest of the computer. That's mostly because putting dedicated memory on the package is expensive, and iGPUs are generally aimed at budget gamers. This won't change anytime soon, at least not from what we know now, but improved memory controllers supporting faster RAM should still help next-generation iGPU performance.
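To put that gap in rough numbers, here's a back-of-the-envelope sketch comparing the peak bandwidth of the shared dual-channel DDR4 an iGPU typically gets against the dedicated GDDR6 on a mid-range discrete card. The specific speeds and bus widths are illustrative assumptions, not figures for any particular product.

```python
# Rough peak-bandwidth comparison: shared system RAM vs. dedicated graphics memory.
# Figures are illustrative assumptions, not measurements of any specific product.

def bandwidth_gbs(transfer_rate_mtps, bus_width_bits):
    """Peak bandwidth in GB/s = transfers per second * bytes per transfer."""
    return transfer_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

# Dual-channel DDR4-3200: 3200 MT/s over a 128-bit (2 x 64-bit) bus,
# and the iGPU has to share it with the CPU.
ddr4 = bandwidth_gbs(3200, 128)

# GDDR6 at 14 Gbps on a 256-bit bus, typical of a mid-range discrete card.
gddr6 = bandwidth_gbs(14000, 256)

print(f"Dual-channel DDR4-3200: {ddr4:.1f} GB/s (shared with the CPU)")
print(f"GDDR6 on a 256-bit bus: {gddr6:.1f} GB/s (dedicated to the GPU)")
```

The gap is nearly an order of magnitude, which is why even a fast iGPU can end up starved for data.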

The second reason, die size, is what's changing in 2019. GPU dies are much larger than CPU dies, and large dies are bad news for silicon manufacturing. It comes down to the defect rate: a larger area has a higher chance of containing a defect, and a single defect in the die can mean the entire processor is toast.

You can see in the (hypothetical) example below that doubling the die size results in much lower yield, because each defect lands on a much larger chip. Depending on where the defects fall, they can render a whole processor worthless. This example isn't exaggerated for effect; depending on the CPU, integrated graphics can take up almost half the die.
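One simple way to see the effect is the classic Poisson yield model, where the share of good dies drops off exponentially with die area. The defect density and die areas below are made-up illustrative values, not real fab data.

```python
import math

# Poisson yield model: yield = exp(-die_area * defect_density).
# The numbers here are illustrative assumptions, not real process data.

DEFECT_DENSITY = 0.2  # defects per cm^2 (assumed)

def poisson_yield(die_area_cm2, defect_density=DEFECT_DENSITY):
    """Fraction of dies expected to come out defect-free."""
    return math.exp(-die_area_cm2 * defect_density)

cpu_only_die = 1.5   # cm^2, hypothetical CPU-only die
doubled_die = 3.0    # cm^2, same CPU with a big iGPU bolted on

print(f"Small die yield:   {poisson_yield(cpu_only_die):.0%}")  # ~74% good
print(f"Doubled die yield: {poisson_yield(doubled_die):.0%}")   # ~55% good
```

Every extra square millimeter multiplies the odds of throwing away the whole chip, which is exactly why a huge on-die GPU is a hard sell.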

Die space is sold at a premium, so it's hard to justify spending a ton of it on a much better iGPU when that area could go to something else, like more CPU cores. It's not that the technology isn't there; if Intel or AMD wanted to make a chip that was 90% GPU, they could, but the yields on such a monolithic design would be so low that it wouldn't be worth it.

Enter: Chiplets

Intel and AMD have both shown their cards, and they're pretty similar. With the newest process nodes having higher-than-normal defect rates, Chipzilla and the red team have chosen to cut their dies into pieces and stitch them back together in the package. Each does things a little differently, but in both cases it means die size is no longer a problem, since they can manufacture smaller, cheaper pieces of silicon and then assemble them into the actual CPU.

In Intel's case, this looks like a mostly economic move. It doesn't seem to change their architecture much, but it lets them choose which node each part of the CPU is built on. However, they do appear to have plans to expand the iGPU, as the upcoming Gen11 graphics offer "64 enhanced execution units, more than double previous Intel Gen9 graphics (24 EUs), designed to break the 1 TFLOPS barrier." A single TFLOP of performance isn't all that much, since the Ryzen 2400G's Vega 11 graphics already hit 1.7 TFLOPS, but Intel's iGPUs have lagged far behind AMD's, so any catching up is welcome.
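For context, those TFLOPS numbers are just peak FP32 throughput: shader count times two FLOPs per clock (for fused multiply-add) times clock speed. The quick check below uses the usual Intel Gen EU and AMD GCN compute-unit layouts; the clock speeds are rough assumptions made for the sake of the estimate.

```python
# Peak FP32 throughput estimate: shaders * 2 FLOPs per clock (FMA) * clock speed.
# EU/CU shader counts follow the usual Intel Gen and AMD GCN layouts;
# the clock speeds are rough assumptions, not official specs.

def tflops(shader_count, clock_ghz):
    return shader_count * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

# Intel Gen11: 64 EUs with 8 FP32 ALUs each, assumed ~1.0 GHz.
gen11 = tflops(64 * 8, 1.0)

# Ryzen 2400G's Vega 11: 11 CUs of 64 shaders each, at 1.25 GHz.
vega11 = tflops(11 * 64, 1.25)

print(f"Intel Gen11 (estimate): {gen11:.2f} TFLOPS")  # ~1.02
print(f"Vega 11 (2400G):        {vega11:.2f} TFLOPS") # ~1.76
```

The arithmetic lines up with both the quoted "1 TFLOPS" target and the 2400G's 1.7 TFLOPS figure.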

Ryzen APUs could kill the low-end GPU market

AMD owns Radeon, the second-largest graphics processor maker, and uses its GPUs in the Ryzen APUs. Looking at their upcoming technology, that bodes well for them, especially with 7nm improvements coming soon. They say their next Ryzen chips will use chiplets, but differently from Intel: their chiplets are fully separate dies, connected by their versatile "Infinity Fabric" interconnect, which allows for more modularity than Intel's design (at the cost of slightly higher latency). They've already used this chiplet approach in their 64-core Epyc processors, announced in early November.

According to some recent leaks, AMD's new Zen 2 lineup includes the 3300G, a chip with one eight-core CPU chiplet and one Navi 20 graphics chiplet (Navi being their upcoming graphics architecture). If that turns out to be true, this single chip could replace entry-level graphics cards entirely. The 2400G, with its 11 Vega compute units, already gets playable frame rates in most games at 1080p, and the 3300G would have almost twice as many compute units on a newer, faster architecture.
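If "Navi 20" does turn out to mean 20 compute units, a naive scaling of the 2400G's throughput gives a sense of the jump, before counting any clock or architectural gains from Navi. This is an assumption layered on a leak, so treat it as a sketch rather than a spec.

```python
# Naive scaling of the 2400G's iGPU throughput to the rumored 3300G,
# assuming "Navi 20" means 20 compute units and ignoring clock and
# architectural improvements entirely. A rough sketch built on a leak.

vega11_tflops = 1.76   # Ryzen 2400G: 11 CUs at 1.25 GHz
rumored_cus = 20       # assumed from the leak
current_cus = 11

scaled = vega11_tflops * rumored_cus / current_cus
print(f"Same clocks, {rumored_cus} CUs: ~{scaled:.1f} TFLOPS")  # ~3.2, before any Navi gains
```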

This isn't just conjecture; it makes a lot of sense. AMD's design lets them connect just about any number of chiplets, the only limiting factors being power and the available space on the package. They'll almost certainly use two chiplets per processor, and all they would have to do to build the best iGPU in the world is swap one of those chiplets for a GPU. They have a good reason to do it, too, since it would be revolutionary not just for PC gaming but also for consoles, as they make the APUs for the Xbox One and PS4 lines.

They could even add faster graphics memory, acting as a sort of L4 cache, but they'll probably stick with system RAM again and count on an improved memory controller for third-generation Ryzen.

In any case, both the blue and red teams now have much more die space to work with, which should guarantee at least some improvement. But who knows, maybe they'll cram in as many CPU cores as they can and try to keep Moore's law going a little longer.
