Everything You Need to Know About Laptop GPUs

If you’re new to gaming, you have probably heard the term GPU get thrown around a lot without knowing what it actually means.

One of the most common misconceptions is that the GPU and the graphics card are the same thing. The GPU is arguably the most important component of a gaming laptop; but what is it, and what does it do?

If we already have the CPU, why do we need a GPU in our laptops? What do the numbers and letters in the names of graphics cards represent? We will answer all of these questions as we delve into what the GPU actually is.

What is a GPU?

The Graphics Processing Unit (GPU) is the part of your laptop dedicated to rendering graphics. The GPU is responsible for the quality of the images you see on the laptop's display and the speed at which they appear. Just like the CPU, it is an electronic circuit, a chip made up of transistors, and the same advances in transistor technology that transformed CPUs have been replicated in GPUs.

NVIDIA GeForce GTX 965M GPU chip

The GPU and the graphics card are not the same thing. The graphics card consists of the GPU, coolers (fans), the PCB, connectors, and video memory (VRAM). The GPU sits at the heart of the graphics card and executes the processing that produces the images on the display.

Gigabyte's GeForce RTX 2080 Ti graphics card

This does not mean the CPU is unimportant for gaming. Drivers enable the CPU to communicate with the GPU and instruct it to execute rendering commands. In addition, some CPUs come with a built-in graphics processor. For example, Intel's 6th-generation and later processors have integrated graphics capable of driving 4K displays.

You must be wondering: why would you need a discrete graphics card when you can just purchase a laptop whose CPU has an integrated graphics processor? If you are only interested in entry-level gaming, light video editing, and the like, then a CPU with integrated graphics is your best option. On the other hand, if you want to play high-end games on your laptop or use demanding video editing software, integrated graphics will not cut it; you need a laptop with a discrete graphics card.

History of GPU

The demand for higher-quality in-game graphics started as early as the 1970s. However, the devices responsible for graphics in early gaming consoles were not GPUs. Instead, these controllers were hardcoded to display specific graphics for the specific game being played. When general-purpose CPUs came into the picture, they handled graphics processing for both computers and gaming consoles.

In the mid-1980s, the idea of a discrete GPU started to materialize with the Commodore Amiga's graphics system, the first hardware to take graphics processing away from the CPU. Then, in 1987, the IBM 8514/A became popular after graphics interfaces were integrated into operating systems. This video card could process graphics much faster than a CPU.

Amiga computer

Around this time, a small Canadian company, ATI, started manufacturing its own graphics cards, the CPU-dependent ATI Wonder series, which supported switching between different resolutions and display modes. However, these chips were limited to 2D graphics. The era of 3D graphics was ushered in by the release of the first PlayStation console in 1995. The S3 ViRGE, released the same year, was one of the first video cards to combine 2D and 3D acceleration.

NVIDIA marked the start of the modern graphics card era when they released the GeForce 256 in 1999. Around the same time, ATI was releasing the Radeon series, which AMD would later take over. These two (NVIDIA and ATI) were the only major players left by 2001, as other graphics card companies were unable to keep up. In subsequent years, NVIDIA kept releasing improved versions of their GeForce graphics cards (GeForce 2, 3, and so on), and ATI did the same with their Radeon cards. In 2006, AMD acquired ATI, and NVIDIA later introduced their GTX series. Fast forward to 2018, when NVIDIA started releasing their RTX 20-series.

Differences Between Laptop and Desktop GPUs

The underlying design of laptop and desktop GPUs is virtually identical; the differences lie in performance, upgradability, and pricing.

Performance

For GPUs of the same make (model number, generation, etc.), the desktop variant delivers better performance. Desktop cards are designed for high-end tasks and have more space to work with, so incorporating large fans to cool them is not a problem. In addition, they come with higher memory bandwidth, clock speeds, pixel rates, and more compute units and texture mapping units.

Upgradability

Just like the CPU, you can swap your desktop's graphics card for a better one. For example, if you have an NVIDIA GTX card, you can remove it and plug an RTX card into the same slot. Unfortunately, this is not possible with laptop GPUs; they are soldered to the motherboard.

Pricing

Desktop graphics cards are priced separately from the desktop itself. Laptop GPUs, however, come pre-installed, so their cost is bundled into the laptop's price. This is one reason gaming laptops are usually expensive.

Identifying GPUs

Now let's dig into the technical stuff. When shopping for a gaming laptop (or one for video editing), you have probably seen names like NVIDIA GeForce RTX 3090 or AMD Radeon RX 5700 XT under the specifications. So what do these numbers and letters mean? Both NVIDIA and AMD name their products in a similar way, reflecting the following:

Company Name

Since 2001, NVIDIA and AMD Radeon (formerly ATI) have been the two main GPU manufacturers. Both release new products and refreshes every year, and each has its own edge over the other. NVIDIA has ray-tracing graphics cards that AMD lacks, while AMD's GPUs come overclock-ready with a built-in interface for it; to overclock an NVIDIA card, you need third-party software. The name of a graphics card starts with one of these two companies (except in some cases, which I will get into shortly).

Intel also manufactures its own GPUs, such as the Iris and HD Graphics series. Unlike NVIDIA's and AMD's, Intel's GPUs are highly CPU-dependent; for example, HD Graphics P4700 is only available with certain Xeon CPUs. Recently, Intel announced plans to release discrete GPUs, the Xe series. Other companies, such as Gigabyte, build graphics cards around NVIDIA GPUs and sell them under their own brand.

NVIDIA and AMD graphics

Technology

The main technology that separates current GPUs is ray tracing. NVIDIA has adopted this technology to create its RTX series. Ray tracing simulates how light behaves when it encounters virtual objects, resulting in a more realistic image on screen while playing games. Shadows have sharper edges and move smoothly with the path of the light, and reflections from shiny surfaces are captured faithfully. The overall result is noticeably better image quality.

RTX ray tracing

NVIDIA's GTX series and AMD Radeon's RX series are standard graphics cards and don't have this technology. However, many Razer Blade and MSI laptops come with RTX graphics cards, and you can read reviews of them here.

Generation

NVIDIA uses the first two numbers after the GTX or RTX prefix to denote the generation of the GPU: the NVIDIA GeForce RTX 2060 has a 20-series (20th generation) GPU. AMD, on the other hand, uses only the first number, so the AMD Radeon RX 5700 contains a 5th-generation GPU. For both NVIDIA and AMD, the higher the generation number, the better the performance: an RTX 3090 outperforms an RTX 2060. Each new generation of GPUs is an improvement on the previous one.

Model (Code Number)

For NVIDIA, the two numbers after the generation represent the model number; in the NVIDIA GeForce RTX 2060, 60 is the model number. AMD, on the other hand, uses three numbers; in the AMD Radeon RX 5700, 700 is the model number. A higher model number represents a performance step up over the model below it, but that step does not carry over into the next generation. To illustrate, an RTX 2060 performs better than an RTX 2050, but that improvement does not put it ahead of an RTX 3050.

The base models of graphics cards do not have a letter at the end. Subsequent releases within the same generation usually add features that slightly boost the GPU's performance; NVIDIA appends 'Ti' or 'Super' to the model number to mark these additions.

AMD Radeon uses 'XT' to denote that a given model has additional features that give it a performance edge over the base model. Don't confuse Radeon's XT suffix with NVIDIA's RTX series; Radeon does not have a graphics card supporting ray tracing, which is where NVIDIA has a strong edge over them. Integrated graphics from both Intel and AMD follow a similar naming scheme.
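
To make the scheme concrete, here is a minimal Python sketch that decodes a GPU name into brand, generation, model number, and suffix. The regex, the function name, and the output fields are assumptions made purely for illustration; it only handles common NVIDIA GTX/RTX and AMD Radeon RX names and is not an official or exhaustive decoder.

import re

# Illustrative parser for the naming scheme described above (assumed helper,
# not an official tool). Covers common NVIDIA GTX/RTX and AMD Radeon RX names.
PATTERN = re.compile(
    r"(?P<brand>NVIDIA|AMD)\s+(?:GeForce|Radeon)\s+"   # brand and product family
    r"(?P<prefix>GTX|RTX|RX)\s+"                       # product-line prefix
    r"(?P<number>\d{4})"                               # four-digit code, e.g. 2060 or 5700
    r"(?:\s+(?P<suffix>Ti|Super|XT))?",                # optional performance suffix
    re.IGNORECASE,
)

def decode_gpu_name(name: str) -> dict:
    """Split a GPU name into brand, generation, model number, and suffix."""
    match = PATTERN.search(name)
    if not match:
        raise ValueError(f"Unrecognized GPU name: {name}")
    brand = match.group("brand").upper()
    number = match.group("number")
    # NVIDIA: first two digits are the generation, the rest the model (RTX 2060 -> 20 / 60).
    # AMD:    first digit is the generation, the rest the model (RX 5700 -> 5 / 700).
    if brand == "NVIDIA":
        generation, model = number[:2], number[2:]
    else:
        generation, model = number[:1], number[1:]
    return {
        "brand": brand,
        "prefix": match.group("prefix").upper(),
        "generation": generation,
        "model": model,
        "suffix": match.group("suffix") or "base model",
    }

print(decode_gpu_name("NVIDIA GeForce RTX 2060 Super"))
# {'brand': 'NVIDIA', 'prefix': 'RTX', 'generation': '20', 'model': '60', 'suffix': 'Super'}
print(decode_gpu_name("AMD Radeon RX 5700 XT"))
# {'brand': 'AMD', 'prefix': 'RX', 'generation': '5', 'model': '700', 'suffix': 'XT'}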

Currently, NVIDIA's RTX 3090 is the best graphics card out there and handles a wide range of high-end AAA games. In March 2021, AMD announced the RX 6700 XT series to compete with NVIDIA's RTX 30-series. Laptops containing these top-end graphics cards are costly, so if you're on a budget, you might want to opt for a laptop with a lower-tier GPU; the RTX 20-series is still good enough to play many high-end games. So now you know which graphics card to go for. We hope you enjoyed reading this article. Don't forget to leave a comment below letting us know whether it was useful to you.

Chris Martin is a professional tech writer. He's been covering tech tutorials, hardware reviews, and more as a professional writer for over seven years now and it doesn't look like he'll be stopping anytime soon! In addition to writing about the latest gadgets on the market, he also covers topics such as how to set up your home network or troubleshoot any computer problems you may have.
