Unveiling the New: Nvidia’s RTX 4060 Series

In today’s world of fast-paced technology and breathtaking graphics, the unveiling of Nvidia’s RTX 4060, 4060 Ti, and 4060 Ti 16GB models has sparked an excited buzz in the gaming community. Announced today, the 4060 Ti 8GB model is priced at $400, the 16GB variant at $500, and the 4060 at $300. With the 4060 Ti 8GB hitting the market on May 24th and the other two models expected in July, gamers are holding their breath.

Let’s Talk Specs: The Nitty-Gritty of the RTX 4060 Series

When it comes to the newly announced 4060 Ti 8GB, the MSRP matches that of its predecessor, the 3060 Ti. However, the real focus here is the performance relative to the price, or in other words, the overall value. Additionally, with the rumored AMD RX 7600 on the horizon, comparisons and competition are bound to emerge.

For Nvidia’s launch, there’s a twist: no Founders Edition (FE) models will be released for the 4060 Ti 16GB or the 4060. Instead, Nvidia is launching an FE model only for the 4060 Ti 8GB, with board partners providing versions of the other cards. The 4060 Ti models carry a total graphics power (TGP) of 160 Watts. Features like DLSS 3 and AV1 encoding, which were absent on the 30 series, carry over across the 40 series, including the 4060 cards. Meanwhile, the 4060 is available exclusively as an 8GB SKU, with a TGP of 115 Watts, a generational reduction in power target for the 60-class SKU.

Nvidia’s VRAM Choices: A Source of Confusion

Nvidia’s decisions regarding VRAM capacities in their lineup can be a little perplexing. For example, the 4070, which is technically a superior card, has 12GB VRAM, whereas the lower-end 4060 Ti can have up to 16GB VRAM. This mismatch can lead to some confusion among consumers, particularly those less versed in the nuances of GPU specifications.

The Dance of Memory: Cache versus Capacity and Bandwidth

There’s always a delicate balancing act at play in architectural and product design, and Nvidia’s memory design choices for the RTX 4060 series show this perfectly. A key factor to scrutinize is memory bandwidth: going by the published numbers, the 4060 series runs slower than previous generations. However, it’s important to note that, just like CUDA core counts, memory bandwidth needs can shift between generations.
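To see where that on-paper deficit comes from, here is a minimal sketch of how theoretical peak bandwidth falls out of bus width and memory data rate. The figures below are the commonly cited specs for these two cards; treat them as illustrative and verify against the official spec sheets.

```python
# Theoretical peak memory bandwidth = bus width (bits) x data rate (Gbps) / 8
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

# Commonly cited specs (illustrative; check official spec sheets)
cards = {
    "RTX 3060 Ti": (256, 14.0),  # 256-bit bus, 14 Gbps GDDR6
    "RTX 4060 Ti": (128, 18.0),  # 128-bit bus, 18 Gbps GDDR6
}

for name, (bus_bits, rate_gbps) in cards.items():
    print(f"{name}: {peak_bandwidth_gbs(bus_bits, rate_gbps):.0f} GB/s")
# RTX 3060 Ti: 448 GB/s
# RTX 4060 Ti: 288 GB/s
```

On raw numbers alone, that is a sizable step down, which is exactly why the cache story below matters.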

A striking upgrade in the 4060 series is the 32MB of L2 cache, a substantial increase from the previous generation’s 4MB. This increase, combined with a reduction in latency between the L2 cache and the cores, should promote more cache hits.

In discussing effective bandwidth, Nvidia offered an intriguing explanation: they estimate that traffic across the bus to memory is reduced by roughly 50 to 52 percent, a substantial decrease. The reason behind this reduction? The increased L2 cache.
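To make the logic concrete, here is a back-of-the-envelope sketch. It assumes Nvidia’s figure means that roughly half the requests that would have gone out to VRAM are instead served from L2; that is our reading of the claim, not Nvidia’s published methodology.

```python
# If a fraction of memory traffic is absorbed by the L2 cache, the raw VRAM
# bandwidth only has to serve the remainder, so the "effective" bandwidth
# seen by the cores scales up accordingly.
def effective_bandwidth_gbs(raw_gbs: float, traffic_reduction: float) -> float:
    """raw_gbs: theoretical VRAM bandwidth in GB/s.
    traffic_reduction: fraction of requests served by cache (0.0 to 1.0)."""
    return raw_gbs / (1.0 - traffic_reduction)

raw = 288.0  # 4060 Ti's theoretical bandwidth from the sketch above
for reduction in (0.50, 0.52):
    eff = effective_bandwidth_gbs(raw, reduction)
    print(f"{reduction:.0%} traffic cut -> ~{eff:.0f} GB/s effective")
# 50% traffic cut -> ~576 GB/s effective
# 52% traffic cut -> ~600 GB/s effective
```

Under that assumption, the effective figure lands above the 3060 Ti’s 448 GB/s, which is presumably the point Nvidia wants to make; whether real workloads actually hit those cache rates is exactly what independent testing needs to show.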

Charting the Controversy: Are Nvidia’s Performance Claims Accurate?

Nvidia is claiming a performance increase of 15% over the 3060 Ti when tested correctly, or an increase of 70% when tested with entirely different, incomparable technologies that don’t exist on all products under test. Keep in mind that it can be hard to test these cards under different hardware requirements without completely compromising the results, so the comparison is somewhat subjective. Nvidia also provided a chart of the 4060 Ti 8GB versus a few prior cards; however, the data was only for the 4060 Ti with DLSS 3 enabled. DLSS 3 isn’t widely used, and when it isn’t used, that has a real impact on the card’s price-to-performance. Until these cards are available for independent testing, we won’t really know which claims hold up and which have been embellished.
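Since value is the headline question, here is a quick sketch of how the claimed uplift translates into price-to-performance. The 15% figure is Nvidia’s claim and the prices are the announced MSRPs; the assumption that the 16GB card performs about the same as the 8GB card is ours, since the extra VRAM only matters when 8GB is exhausted.

```python
# Normalize performance per dollar against the 3060 Ti as the baseline.
baseline_perf, baseline_price = 1.00, 400.0  # 3060 Ti at its $400 MSRP
claimed_perf, price_8gb = 1.15, 400.0        # 4060 Ti 8GB, Nvidia's +15% claim
price_16gb = 500.0                           # 4060 Ti 16GB MSRP

baseline_value = baseline_perf / baseline_price
value_8gb = claimed_perf / price_8gb
value_16gb = claimed_perf / price_16gb  # assumes ~equal performance (hypothetical)

print(f"4060 Ti 8GB perf-per-dollar vs 3060 Ti:  {value_8gb / baseline_value - 1:+.0%}")
print(f"4060 Ti 16GB perf-per-dollar vs 3060 Ti: {value_16gb / baseline_value - 1:+.0%}")
# 4060 Ti 8GB perf-per-dollar vs 3060 Ti:  +15%
# 4060 Ti 16GB perf-per-dollar vs 3060 Ti: -8%
```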

Cache: More is Better… Or is it?

Cache makes everything better: more residency, fewer transactions, and ultimately faster speeds. So, why not just cram in more cache? While adding more cache might seem like the simplest solution, it’s not always the best. With a silicon product, you have a limited area to play with, and this area can be allocated to things like logic, cache, encoders, or interfaces. More of one means less of the others. Overloading on cache while undersupplying compute can starve the card of other resources, which could make it slower for routine tasks.
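A toy model makes the tradeoff visible. Every number below is hypothetical (real per-MB SRAM and per-SM area costs aren’t published in this form); the only point is that a fixed die budget forces an exchange.

```python
# Hypothetical die-area budget: every mm^2 spent on cache is a mm^2 not
# spent on compute units (SMs). All figures are made up for illustration.
DIE_AREA_MM2 = 190.0    # fixed silicon budget (hypothetical)
FIXED_MM2 = 60.0        # memory interfaces, encoders, I/O (hypothetical)
MM2_PER_MB_CACHE = 1.5  # SRAM area cost per MB (hypothetical)
MM2_PER_SM = 5.0        # area cost per compute unit (hypothetical)

for cache_mb in (4, 32, 64):
    compute_area = DIE_AREA_MM2 - FIXED_MM2 - cache_mb * MM2_PER_MB_CACHE
    sm_count = int(compute_area / MM2_PER_SM)
    print(f"{cache_mb:>2} MB L2 -> room for ~{sm_count} SMs")
#  4 MB L2 -> room for ~24 SMs
# 32 MB L2 -> room for ~16 SMs
# 64 MB L2 -> room for ~6 SMs
```

Past some point, every extra megabyte of cache buys less than the compute it displaces; finding that crossover is the designer’s job.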

You could make the silicon bigger, but this would increase costs significantly. Nvidia aims to keep the die size as small as possible, which is why they don’t pack more cache into everything. While AMD has found a workaround by adding more cache to their CPUs as a separate die, this doesn’t always help, and it increases the cost.

So why all this fuss about cache? Cache is Nvidia’s stated justification for shipping lower memory bandwidth and less memory overall, and understanding that argument should help you form your own opinion. However, what truly matters are the hard performance numbers, which we don’t have yet, as the cards have not been released.

Wrapping It Up

Taking a step back, we’ve traversed the intricacies of GPU benchmarking, its nuances, and how it relates to real-world performance. We’ve explored the role of DLSS (Deep Learning Super Sampling) and why not every benchmarking scenario should enable it by default: it isn’t always supported or preferred, and testing without it gives us a more holistic view of performance.

In the end, however, the primary focus must always be on actual, empirical performance data. All the features, all the arguments, and all the technical explanations fall flat if the real-world performance doesn’t measure up. And so, while we eagerly await more comprehensive data and tests, we remain grounded in the understanding that the ultimate measure of a product’s worth is its performance in the hands of the end-user.
