Sony has finally given us a deep dive into the specs of its upcoming PlayStation 5, and while lead system architect Mark Cerny's talk was aimed mostly at developers, with in-depth explanations of the SSD, GPU, and audio design, gamers are already abuzz with speculation about how the PlayStation 5's performance might compare to the Xbox Series X's.
In particular, everyone’s wondering about that teraflop number: at 10.28 teraflops, is the PS5 going to be a noticeable downgrade from the Xbox Series X’s 12 teraflops? Let’s break it down.
FLOPS stands for floating-point operations per second, and it's a basic measure of a GPU's ability to perform the kinds of calculations it needs when rendering a scene. GPUs today are powerful enough that we measure this ability in teraflops, or trillions of FLOPS. But teraflops aren't the be-all and end-all of performance. A teraflop from one graphics chip isn't always comparable to a teraflop from another: AMD's RDNA GPUs, for example, deliver better performance per teraflop than its earlier GCN-based GPUs, and the same goes for comparing NVIDIA to AMD. So you can't really compare two chips on that single number alone.
If the two GPUs come from the same family, as the PS5's and Xbox Series X's do (both are based on AMD's RDNA 2 architecture), the comparison is slightly more apples to apples, but the teraflop count still isn't the only indicator of performance. Let's look at how that value is calculated: each GPU has a certain number of compute units (CUs), each containing 64 shaders, and a clock speed. Multiply those together, along with the two floating-point operations each shader can perform per clock cycle, and you get a single number, in teraflops, that makes it a little easier to compare performance, however imperfectly.
So, for the PlayStation 5, that calculation looks like:
36 CUs x 64 shaders per CU x 2.23 GHz x 2 operations per clock = 10,276 gigaflops, or 10.28 teraflops (rounded).
The Xbox Series X has a lower clock speed, but more compute units, leading to a higher number of teraflops:
52 CUs x 64 shaders per CU x 1.825 GHz x 2 operations per clock = 12,147 gigaflops, or 12.15 teraflops (rounded).
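To make the arithmetic concrete, here's a minimal sketch in Python that reproduces both figures and the gap between them. The CU counts and clock speeds are the publicly stated specs; the 64 shaders per CU and 2 operations per clock are the RDNA 2 constants used above.

```python
def teraflops(compute_units: int, clock_ghz: float,
              shaders_per_cu: int = 64, ops_per_clock: int = 2) -> float:
    """Peak shader throughput for an RDNA 2-style GPU, in teraflops."""
    gigaflops = compute_units * shaders_per_cu * ops_per_clock * clock_ghz
    return gigaflops / 1000  # clock in GHz yields gigaflops; convert to teraflops

ps5 = teraflops(36, 2.23)        # ~10.28 TFLOPS
series_x = teraflops(52, 1.825)  # ~12.15 TFLOPS

print(f"PS5:      {ps5:.2f} TFLOPS")
print(f"Series X: {series_x:.2f} TFLOPS")
print(f"Series X advantage: {(series_x / ps5 - 1) * 100:.0f}%")  # ~18%
```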
So for that specific measure of computational power, the Xbox Series X comes out ahead by about 18%. But as Cerny said in his talk, teraflops only measure one aspect of a GPU's performance. The clock speed affects many other parts of the GPU, and he argues that a faster clock speed provides more advantages across the board than a lower clock speed paired with more compute units (and thus more teraflops). We'll have to reserve judgment until we see both consoles in the silicon-flesh.
After all, we don't yet know how Microsoft's performance-balancing features, like variable-rate shading, will compare to what Sony is offering. Measuring teraflops alone also ignores how the PS5 will downclock its hardware in certain situations, how the two consoles' SSDs compare, whether the GPU is the bottleneck in a given scene, and how other components may influence performance.
So don't get too hung up on the teraflop number just yet. The PS5 may very well be less powerful than the Xbox Series X, but that doesn't mean it will be exactly 18% slower when playing actual games; we'll need to wait for real-world comparisons before we can truly see how the two consoles stack up. Not to mention how exclusive games, form factor, and price may tip the overall picture.
Source: IGN.com, "PS5 & Xbox Series X: Teraflops Aren't The Only Measure of Power"