Why Some Games Cause the Graphics Card to Run Hotter Than Others

February 16, 2016

You may notice that certain games cause your GPU to run hotter than others, even when GPU monitoring tools show all of them pushing it to 100% usage. For this reason, it’s best to test several games when overclocking your graphics card: a clock speed that is stable and runs at a safe temperature in one game may not hold up in another.

The following tests are on my AMD R9 380X 4 GB. Please note that I intentionally reduced the clock rate to 902 MHz from the stock speed of 990 MHz; I prefer quieter gaming and cooler temperatures over a small increase in performance, and by reducing the clock rate by roughly 10%, the graphics card runs 5-10 °C cooler with less fan noise as well.

I chose the applications Smite and Planetside 2 to demonstrate this concept. They seem to offer the greatest difference in temperatures among my library of games, while still pushing the GPU to a near-constant 100% usage.

Both games are running at the highest settings at 2560×1440, with the exception of anti-aliasing being turned off in Smite and shadows being turned off in Planetside 2. Most serious Planetside 2 players run with shadows off, because shadows are known to cause glitches with the rendering of enemies at long distances, as well as having a massive impact on framerates.

 

[Screenshot: GPU temperatures while running Smite]

When running Smite, GPU usage is almost always near 100%, yet the card only reaches the mid-60s Celsius.

 

After just a couple of minutes of running Planetside 2, temperatures already reach the mid-70s Celsius!

Why is There a Difference?

Modern graphics card dies are made up of three types of units: Shader Units, Texture Mapping Units (TMUs), and Render Output Units (ROPs).

Shader Units do the vast majority of the work in modern games, which generally rely on complex shaders. Newer games feature high-resolution shadows, complex lighting, and detailed materials, all of which demand a lot of compute power from the shader cores.

Texture Mapping Units are responsible for computations related to textures, such as mapping textures onto 3D models and texture filtering like anisotropic filtering.

 

Render Output Units are responsible for the final pixel operations before the image is sent to your display, which can include operations such as anti-aliasing. This means that a graphics card with more ROPs is generally better suited for higher resolutions such as 4K. My Radeon R9 380X is currently the most powerful AMD graphics card with only 32 ROPs, and these tests were done at 2560×1440, which helps push the limits of the ROPs on my card. AMD’s R9 390, 390X, Fury, Fury X, and Nano all contain 64 ROPs. For this reason, I highly recommend at least an R9 390 for playing at resolutions above 1080p. On the NVIDIA side, based on the ROP counts in their cards, I recommend at least a GTX 970 for resolutions above 1080p, and a GTX 980 Ti for 4K and above.
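
To put rough numbers on this, here is a quick back-of-the-envelope sketch in Python. The “ROP writes per pixel” figure below is a simplification that assumes peak pixel fill rate (ROPs × core clock) is the only limit; real frames also involve overdraw, blending, and post-processing, so treat it as an illustration rather than a prediction.

```python
# Back-of-the-envelope sketch: how the ROP "pixel budget" shrinks as
# resolution grows. Peak fill rate = ROPs * core clock is a theoretical
# ceiling; real frames also hit shader, texture, memory, and CPU limits,
# so these numbers are illustrative only.

rops = 32               # R9 380X
core_clock_ghz = 0.99   # stock 990 MHz
target_fps = 60

peak_fill_gpix_s = rops * core_clock_ghz               # ~31.7 Gpixels/s
writes_per_frame = peak_fill_gpix_s * 1e9 / target_fps

resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

for name, (w, h) in resolutions.items():
    pixels = w * h
    budget = writes_per_frame / pixels   # pixel writes available per on-screen pixel
    print(f"{name}: {pixels / 1e6:.2f} Mpixels, "
          f"~{budget:.0f} ROP writes per pixel per 60 FPS frame")
```

Going from 1080p to 4K quarters that per-pixel budget, which is one reason the 64-ROP cards have so much more headroom at high resolutions.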

Keep in mind that the number of shader cores still helps performance at higher resolutions (shader workload scales somewhat with resolution as well), so don’t assume that picking an R9 380X over an R7 370 would offer no performance benefit at very high resolutions; that’s simply not true. While both have the same number of ROPs, the R9 380X has twice as many shader cores, which will definitely help in most games, even at resolutions such as 1440p or 4K.

While these units used to be present in more balanced proportions, recent GPUs tend to contain significantly more Shader Units than anything else. For example, the AMD R9 380X contains 2048 Shader Units, 128 Texture Mapping Units, and 32 Render Output Units.

Different games may rely more heavily on shaders than others. Because the shader cores take up significantly more space on the silicon die than the TMUs and ROPs, games that are bottlenecked by the TMUs or ROPs tend to run cooler than games that are bottlenecked by the shader cores: less silicon is actively working, so less heat is generated.
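
As a loose illustration of that idea, here is a toy Python model. The die-area fractions and per-unit utilization figures are made-up placeholders, not measurements from any real GPU; the only point is that two games can keep the card “busy” while lighting up very different amounts of silicon, and therefore generating different amounts of heat.

```python
# Toy model with made-up numbers: dynamic power (and therefore heat) scales
# roughly with how much of the die is actively switching. The area fractions
# and utilization values are illustrative placeholders, not measurements.

die_area_fraction = {"shaders": 0.60, "tmus": 0.15, "rops": 0.05, "other": 0.20}

def relative_heat(utilization):
    """Weight each block's utilization by its share of the die area."""
    return sum(die_area_fraction[block] * utilization.get(block, 0.0)
               for block in die_area_fraction)

# Both hypothetical games would report "100% GPU usage", but they stress
# different parts of the die.
shader_bound_game = {"shaders": 0.95, "tmus": 0.60, "rops": 0.40, "other": 0.70}
rop_bound_game    = {"shaders": 0.55, "tmus": 0.50, "rops": 0.95, "other": 0.70}

print("shader-bound:", relative_heat(shader_bound_game))  # ~0.82
print("ROP-bound:   ", relative_heat(rop_bound_game))     # ~0.59
```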

To be clear, “100% usage” as reported by many graphics card monitoring applications isn’t truly 100% utilization. It only means that your graphics card is the limiting factor in your system, and that the graphics card is going as fast as it can. It does not mean that every single Shader Unit, Texture Mapping Unit, and Render Output Unit is being fully utilized.
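
If you want to see this on your own system, you can log the reported usage alongside power draw and temperature while each game runs. The sketch below shells out to nvidia-smi, so it only applies to NVIDIA cards and assumes nvidia-smi is on your PATH; owners of AMD cards like mine would need their vendor’s equivalent monitoring tool.

```python
# Quick logging sketch (NVIDIA only): poll reported GPU "usage" together with
# power draw and temperature. Two games can both show ~100% usage while
# drawing very different power and settling at different temperatures.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=utilization.gpu,power.draw,temperature.gpu",
         "--format=csv,noheader,nounits"]

def sample():
    util, power, temp = subprocess.check_output(QUERY, text=True).strip().split(", ")
    return int(util), float(power), int(temp)

for _ in range(30):                      # roughly one minute of samples
    util, power, temp = sample()
    print(f"usage {util:3d}%  power {power:6.1f} W  temp {temp:2d} °C")
    time.sleep(2)
```

Run it once with a cooler-running game and once with a hotter one: the usage column will look the same, while the power and temperature columns tell the real story.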

In addition to internal bottlenecks within the graphics card, some games may hit CPU bottlenecks that prevent the graphics card from reaching its full potential, or may simply be running into their frame rate cap. This can have an even greater impact on the temperature the graphics card runs at.

For this reason, the GPU stress-testing software FurMark is highly recommended for testing the limits of your graphics card. It pushes the card to use the vast majority of its processing units at once, creating a “worst-case scenario” in terms of power consumption and heat output.

Hopefully this clears up some common misconceptions. Many people call a game poorly optimized or badly made if it causes their graphics card to run unusually hot. Running hot says nothing about whether a game is well or poorly optimized; it simply means the game is fully utilizing the hardware and pushing it to its limit.

One thought on “Why Some Games Cause the Graphics Card to Run Hotter Than Others”

  1. DAOWAce

    “.. It only means that your graphics card is the limiting factor in your system, -and that the graphics card is going as fast as it can-.”

    Oh, you mean it’s lag? /s

    Been trying to explain this to people over the last decade. Majority of gamers don’t understand at all. It’s not “lagging”, it’s running as fast as it can. *sigh*
