As much as games are about what we play, they are also about what we see. Seriously, close your eyes and think of your favourite gaming moments. Chances are, you're remembering how they looked as much as (if not more than) how they felt to actually play.

Because of this visual focus, graphics have always been a huge deal in the gaming world. That said, gamers and game critics alike throw around the terms "life-like" and "realistic" far too much when it comes to graphics (seriously, critics in the 2000s were calling games like GoldenEye "life-like"). So instead of rating and reviewing game graphics, let's take a look behind the screen at the technologies which have shaped how games have looked since their inception over half a century ago.

The Early Days: From Pixels to 3D Graphics

Tennis for Two, created in 1958, is widely seen as one of the first video games ever made, and it predates pixels entirely. The game was 'rendered' on an oscilloscope and simulated a two-dimensional game of tennis.

The first commercial games, however, used simple black-and-white, pixel-based graphics and 2D sprites. Games like Space Invaders and Pong are striking examples of this. These graphical limitations were a result of the computational power of the era, or rather the lack thereof.

It wasn't until consoles like the Atari 2600 and NES that games became imbued with bold colours. The release of what are known as fifth-generation games consoles (the Nintendo 64 and PlayStation 1) brought with it the ability to render 3D graphics. While these graphics certainly had their artefacts due to the way their 3D was achieved, they were a huge step forward for gaming: they quite literally introduced gaming to the third dimension.

Graphics Acceleration: The Rise of GPUs

One of the key reasons that fifth-generation consoles could finally bring 3D to the world of games was that they had powerful, dedicated Graphics Processing Units (GPUs).

Consoles before this point had graphics hardware of some sort, but nothing as powerful as the new GPUs developed throughout the 1990s. For example, the 3dfx Voodoo graphics card, released in the mid-90s, was one of the first consumer cards to provide hardware-accelerated 3D graphics. 'Hardware acceleration' is simply a fancy way of saying that demanding tasks, such as rendering 3D graphics, are offloaded from the general-purpose CPU to dedicated hardware built specifically for them.

Hardware acceleration was a breakthrough: it made 3D graphics mainstream and totally changed the gaming landscape.

Shader Technology: Realistic Lighting and Effects

While 3D was now possible, it was still very computationally 'expensive' (hard for computers to do efficiently), and early 3D hardware could only apply a fixed set of built-in effects. Developers quickly got to work on more flexible and efficient ways to render 3D worlds, and one of the key innovations here was shaders.

Through shaders, games could implement far more complex lighting and visual effects, making 3D much more impressive. A shader is a small program, run directly on the GPU, that calculates the appropriate amount of light, shadow and visual effects for the surfaces in a scene. There are plenty of types of shaders, but in the early 2000s vertex shaders and pixel shaders were the ones that most impacted gaming, allowing for more detailed and aesthetically appealing game worlds.
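To make the idea concrete, here is a minimal sketch of the kind of maths a pixel shader performs: Lambertian diffuse lighting, where a surface gets brighter the more directly it faces the light. Real shaders run on the GPU in languages like HLSL or GLSL; this Python version just illustrates the per-pixel calculation.

```python
def normalize(v):
    """Scale a 3D vector to unit length."""
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

def diffuse_shade(surface_normal, light_dir, base_colour):
    """Return the lit colour of a pixel: full colour when the surface
    faces the light head-on, black when it faces away."""
    n = normalize(surface_normal)
    l = normalize(light_dir)
    intensity = max(0.0, dot(n, l))  # clamp: surfaces facing away get no light
    return tuple(c * intensity for c in base_colour)

# A surface facing the light head-on keeps its full colour...
print(diffuse_shade((0, 0, 1), (0, 0, 1), (1.0, 0.2, 0.2)))
# ...while one angled away from it is dimmed.
print(diffuse_shade((0, 1, 1), (0, 0, 1), (1.0, 0.2, 0.2)))
```

A real pixel shader runs this kind of routine once for every pixel on screen, every frame, which is exactly the sort of massively parallel workload GPUs are built for.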

High-Definition Era: HD Resolutions and Textures

As games consoles continued to pack in more and more powerful hardware, game developers were able to take advantage of this new power. The Xbox 360 and PlayStation 3, in particular, ushered in a new era of graphical refinement. Compared to the console generation before them, the hardware in these consoles was seriously impressive. This meant that developers could begin using higher resolutions both for rendering and for in-game textures themselves. The visually impressive game worlds of this era still hold up today because of this leap forward.

Real-Time Ray Tracing: A Game-Changing Innovation

If you were a gamer in the late 2010s, you definitely heard all the hype surrounding ray tracing. Much like the invention of shaders, ray tracing gave game developers a new technique for achieving more realistic lighting in scenes.

In short, ray tracing is a technique that traces the path of light rays through a scene to achieve much more realistic illumination. Graphics cards like NVIDIA's RTX series were the first consumer GPUs to offer ray tracing capabilities, and they certainly let everyone know about it.
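The heart of the technique is firing a ray into the scene and working out what it hits. As a toy illustration (a real renderer does this for millions of rays per frame, bouncing them around to gather light), here is a sketch of testing a single ray against a single sphere:

```python
def ray_hits_sphere(origin, direction, centre, radius):
    """Return the distance along the ray to the nearest hit with the
    sphere, or None if the ray misses. Solves the quadratic formed by
    substituting the ray equation into the sphere equation."""
    oc = tuple(o - c for o, c in zip(origin, centre))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return None  # the ray never touches the sphere
    t = (-b - discriminant ** 0.5) / (2 * a)
    return t if t > 0 else None  # hits behind the camera don't count

# A ray fired down the z-axis from the origin hits a sphere centred
# 5 units away, and misses one that sits off to the side.
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # hit at t = 4.0
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (5, 0, 5), 1.0))  # None: a miss
```

Doing this honestly for every pixel, with multiple bounces per ray, is why ray tracing stayed out of reach of real-time games until dedicated hardware arrived.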

The Present and Future

More recently, two other technologies have come to impact the graphical capabilities of games and gaming devices. These are cloud-based rendering and DLSS.

Cloud-based rendering is exactly what it sounds like. It uses cloud systems (powerful computers elsewhere) to render the images for you and then streams the result to your device. This means your device doesn't need to do the heavy lifting of actually rendering the scene; you simply need a strong enough internet connection to stream the cloud render.

DLSS is a little less intuitive. It stands for Deep Learning Super Sampling. Developed by NVIDIA, it uses machine learning to upscale low-resolution images, meaning the game can render at a lower resolution and have the result upscaled, enhancing performance while keeping image quality very high. DLSS works particularly well with ray-traced games, giving them smoother frame rates.
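The render-low-then-upscale idea can be sketched with the crudest possible upscaler, nearest-neighbour scaling. To be clear, DLSS does not work like this: it replaces this dumb pixel duplication with a trained neural network (fed extra data like motion vectors from the game engine), which is why its results look far sharper. This toy only shows the shape of the pipeline.

```python
def upscale_nearest(image, factor):
    """Upscale a 2D grid of pixel values by repeating each pixel
    `factor` times in both directions (nearest-neighbour scaling)."""
    result = []
    for row in image:
        wide_row = [pixel for pixel in row for _ in range(factor)]
        result.extend([wide_row[:] for _ in range(factor)])
    return result

# A 2x2 'render' becomes a 4x4 image, so only a quarter of the
# displayed pixels were actually computed by the renderer.
small = [[1, 2],
         [3, 4]]
for row in upscale_nearest(small, 2):
    print(row)
```

The performance win comes entirely from that ratio: rendering at half the width and height means shading a quarter of the pixels, and the upscaler fills in the rest.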

Games like Rainbow Six Siege have used DLSS to help achieve better frame rates for many players, a huge bonus for online gamers!

Graphics are a huge part of gaming's identity, and this isn't going to change any time soon. As we move towards higher resolutions and more immersive formats, gaming graphics are sure to continue evolving, bringing us ever closer to the photorealism that gamers and game critics have been shouting about for decades.