new midrange champ

GeForce RTX 4060 review: Not thrilling, but a super-efficient $299 workhorse

60-class GPUs are Nvidia's most popular; the 4060 doesn't fix what ain't broke.

Andrew Cunningham
PNY's take on the basic $299 version of the Nvidia GeForce RTX 4060. Credit: Andrew Cunningham

Nvidia's GeForce 1060, 2060, and 3060 graphics cards are some of the most widely used GPUs in all of PC gaming. Four of Steam's top five GPUs are 60-series cards, and the only one that isn't is an even lower-end GTX 1650.

All of this is to say that, despite all the fanfare for high-end products like the RTX 4090, the new GeForce RTX 4060 is Nvidia's most important Ada Lovelace-based GPU. History suggests that it will become a baseline for game developers to aim for and the go-to recommendation for most entry-level-to-mainstream PC gaming builds.

The RTX 4060, which launches this week starting at $299, is mostly up to the task. It's faster and considerably more power efficient than the 3060 it replaces, and it doesn't come with the same generation-over-generation price hike as the higher-end Lovelace GPUs. It's also a solid value compared to the 4060 Ti, typically delivering between 80 and 90 percent of the 4060 Ti's performance for 75 percent of the money.
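For the math-inclined, here's that value comparison as a quick sketch, using the two cards' list prices and the midpoint of the 80 to 90 percent performance range from our testing; the figures are illustrative arithmetic, not benchmark output.

```python
# Back-of-the-envelope value comparison between the RTX 4060 and 4060 Ti,
# using list prices and the midpoint of the 80-90 percent performance
# range observed in our testing. Illustrative only.
prices = {"RTX 4060": 299, "RTX 4060 Ti": 399}
relative_perf = {"RTX 4060": 0.85, "RTX 4060 Ti": 1.00}  # 4060 Ti = baseline

for card in prices:
    perf_per_dollar = relative_perf[card] / prices[card]
    print(f"{card}: {perf_per_dollar * 1000:.2f} performance per $1,000")

# RTX 4060: ~2.84 vs. RTX 4060 Ti: ~2.51 -- roughly 13 percent more
# performance per dollar for the cheaper card at the midpoint.
```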

That doesn't mean there aren't small things to gripe about. Stepping down from the 3060's 12GB of memory to a stingier 8GB doesn't feel great, especially when poorly optimized PC ports seem to want as much video memory as they can get. It's only a modest speed upgrade over the RTX 3060. And DLSS Frame Generation, the big new 4000-series feature that Nvidia plays up in all of its announcements, is less impressive on midrange cards than it is on high-end ones.

The RTX 4060

| Card | CUDA cores | Boost clock | Memory bus width | Memory clock | Memory size | TGP |
| --- | --- | --- | --- | --- | --- | --- |
| RTX 4090 | 16,384 | 2,520 MHz | 384-bit | 1,313 MHz | 24GB GDDR6X | 450 W |
| RTX 4080 | 9,728 | 2,505 MHz | 256-bit | 1,400 MHz | 16GB GDDR6X | 320 W |
| RTX 4070 Ti | 7,680 | 2,610 MHz | 192-bit | 1,313 MHz | 12GB GDDR6X | 285 W |
| RTX 4070 | 5,888 | 2,475 MHz | 192-bit | 1,313 MHz | 12GB GDDR6X | 200 W |
| RTX 4060 Ti | 4,352 | 2,535 MHz | 128-bit | 2,250 MHz | 8GB or 16GB GDDR6 | 160 W |
| RTX 4060 | 3,072 | 2,460 MHz | 128-bit | 2,125 MHz | 8GB GDDR6 | 115 W |
| RTX 3060 Ti | 4,864 | 1,665 MHz | 256-bit | 1,750 MHz | 8GB GDDR6 | 200 W |
| RTX 3060 | 3,584 | 1,777 MHz | 192-bit | 1,875 MHz | 12GB GDDR6 | 170 W |

The RTX 4060 Ti was an outlier compared to the 3060 Ti, shipping with fewer of Nvidia's CUDA cores and half the memory bandwidth and leaning on boosted clock speeds, additional L2 cache, and other architectural upgrades to close the gap. The result was a card that didn't always feel like much of an upgrade, especially at higher resolutions.

The RTX 4060 looks a lot like the 4060 Ti did, narrowing the memory interface a bit compared to the RTX 3060 (from 192-bit to 128-bit) and dropping the number of CUDA cores but adding extra L2 cache and boosting clocks. The 4060 uses Nvidia's AD107 GPU die, the fifth-largest Ada Lovelace die after the AD102 (4090), AD103 (4080), AD104 (4070 series), and AD106 (4060 Ti). There are fewer CUDA cores and less L2 cache here (24MB for the 4060, compared to 32MB for the 4060 Ti), and the card uses eight lanes of PCI Express 4.0 rather than the typical 16. This shouldn't be limiting at all for PCIe 4.0-based systems, but you could see marginal performance impacts if you install the card in an older PCIe 3.0-based system.
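To put rough numbers on that PCIe caveat, here's a quick sketch of the theoretical one-way link bandwidth involved, using the standard per-lane signaling rates and 128b/130b encoding; we didn't measure bus bandwidth ourselves.

```python
# Theoretical one-way bandwidth of the 4060's x8 link on PCIe 3.0 vs. 4.0.
# Both generations use 128b/130b encoding; rates are in gigatransfers/sec.
PER_LANE_GT = {"PCIe 3.0": 8.0, "PCIe 4.0": 16.0}
ENCODING = 128 / 130  # usable fraction after line encoding
LANES = 8             # the RTX 4060 is wired for x8, not x16

for gen, gt in PER_LANE_GT.items():
    gb_per_sec = gt * ENCODING / 8 * LANES  # bits -> bytes, times lanes
    print(f"{gen} x8: ~{gb_per_sec:.1f} GB/s")

# PCIe 4.0 x8 (~15.8 GB/s) matches what PCIe 3.0 x16 would offer; drop
# the card into a PCIe 3.0 slot and the x8 link halves that to ~7.9 GB/s.
```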

Less hardware running at lower speeds also means lower power usage, and the 4060's maximum power usage is just 115 W, compared to 160 W for the 4060 Ti, 170 W for the old 3060, and 165 W for AMD's Radeon RX 7600—the 4060's closest competitor in AMD's 7000-series lineup. Like the other 4000-series cards, the 4060 adds support for Nvidia's DLSS 3 upscaling and frame rate-boosting technologies, plus hardware-accelerated encoding support for the AV1 video codec.

For midrange cards, smaller designs like this PNY model can be preferable to Nvidia's relatively bulky Founders Edition cards (like the 4060 Ti's) and their 12VHPWR dongles.
Three DisplayPorts and one HDMI port; the standard arrangement.

As far as we can tell, Nvidia isn't making a Founders Edition version of the 4060, leaving the job to its card partners. Our review model is an RTX 4060 8GB Verto provided by PNY, and it's the kind of no-frills card you'd expect to get for $299. It's a reasonably sized card that should fit well in any micro ATX case and most mini ITX cases, its two-fan cooler keeps the GPU running cool and quiet, and it gets all its power from a single 8-pin connector, so you don't need to worry about a bulky 12VHPWR adapter (or finding a new ATX 3.0 power supply). It doesn't have LEDs or a particularly flashy design, but it gets the job done.

As with PNY's version of the 4060 Ti, even this relatively modest cooler hangs a few inches past the end of the actual circuit board; hopefully that headroom will eventually translate into more compact designs, though so far GPU makers seem to have spent most of their time and attention on three-fan, triple-slot overkill versions of the 4060 Ti.

Performance and power

As with the 4060 Ti and Radeon RX 7600, we tested the RTX 4060 primarily at 1080p and 1440p, the resolutions that you could reasonably expect to hit with a $300 graphics card. It should be possible to hit 60 fps in lighter and older games at 4K, but the 4060's weaker hardware and 8GB bank of RAM will make 4K a no-go for most modern AAA titles.

In our 1080p tests, the 4060 performs roughly as expected. In both the 3DMark tests and actual games, whether they're using ray tracing or not, the 4060 is generally between 15 and 20 percent faster than the 3060, though it usually falls a hair short of matching the 3060 Ti. The GPU is capable of average frame rates well above 60 fps at 1080p, at least as long as ray tracing isn't enabled, though even at that resolution, you'll need to lean on DLSS and turned-down settings to play at 60 fps in games like Returnal and Cyberpunk 2077.

The best argument against Nvidia here is that the RX 7600 (currently $260 or $270) is usually as fast or faster for a bit less money, at least in games with no ray tracing or upscaling effects turned on—per usual for Radeons, performance does tank with ray-tracing enabled. Intel's Arc A750 ($250-ish) is surprisingly competitive, too, even in ray-traced games, but older DirectX 11 games like Grand Theft Auto V still run worse on Intel's hardware than newer DirectX 12 and Vulkan games. The 4060 costs a bit more than either card, but it does have the benefit of performing consistently well in all kinds of games and consuming very little power while doing it.

The RTX 4060 stretches to hit 1440p, averaging around or a little above 60 fps in our slightly older, non-ray-traced games but with dips below 60 fps that can make games stutter a little when things get busy. Comparing it to the RTX 3060, you also see some signs that performance isn't scaling evenly as you increase resolution—performance increases are closer to 15 percent than 20 percent, whether it's because of the narrower memory bus, stepping down from 12GB to 8GB of memory, the reduced CUDA core count, or all three. The RX 7600 continues to run just about even with the RTX 4060 in most non-ray-traced games, with exceptions like Borderlands 3 where it's actually faster; the 4060 maintains a significant lead in ray-traced games, though 1440p at max settings is firmly out of reach for most of them.

Benchmark charts. Credit: Andrew Cunningham

The RTX 4060's power consumption is right around where Nvidia said it would be—it drew about 115 W on average while running the Borderlands 3 and Hitman 3 benchmarks at 4K, compared to 160 W for the 4060 Ti, 160 to 170 W for the 3060, 140 to 160 W for the RX 7600, and 190 W for the Arc A750. Its performance might not be thrilling, but its power efficiency is seriously impressive.
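If you want to put a number on that efficiency gap, here's a rough sketch using our measured power draws and the observed 15 to 20 percent performance delta; the electricity price and hours of use are assumptions, not measurements.

```python
# Rough performance-per-watt and running-cost comparison from this
# review's measured power draws. Perf figures are normalized to the
# RTX 3060; electricity price and usage hours are assumed, not measured.
cards = {
    "RTX 4060": {"watts": 115, "perf": 1.17},  # ~15-20% faster than a 3060
    "RTX 3060": {"watts": 170, "perf": 1.00},
}
PRICE_PER_KWH = 0.15       # assumed average US residential rate, $/kWh
HOURS_PER_YEAR = 20 * 52   # assumed 20 hours of gaming per week

for name, c in cards.items():
    perf_per_watt = c["perf"] / c["watts"]
    annual_cost = c["watts"] / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH
    print(f"{name}: {perf_per_watt:.4f} perf/W, ~${annual_cost:.0f}/year")

# The 4060 comes out ~70 percent more efficient and saves roughly $8-9
# a year in electricity under these assumptions -- modest, but it adds up.
```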

On DLSS 3 and frame generation (again)

When combining DLSS upscaling with DLSS Frame Generation, Nvidia can (under the right conditions) produce a more detailed, higher-frame-rate image without requiring the raw GPU performance it would normally take to render that image. Credit: Nvidia

Note: Parts of this section also appeared in our RTX 4060 Ti review.

As we've covered in most of our RTX 4000-series GPU reviews, the Ada architecture's DLSS Frame Generation feature promises to "double" your frame rate by creating one AI-interpolated frame for every frame that the GPU renders. DLSS FG works in concert with typical DLSS upscaling—Nvidia says that "seven out of every eight pixels" can be generated by AI when DLSS and DLSS FG are both enabled. The result can be a clean-looking, high-resolution, high-frame-rate image rendered using just a fraction of the GPU performance needed to render the same scene natively.
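The "seven out of every eight pixels" arithmetic is easy to verify, assuming DLSS's Performance mode (which renders at half the output resolution in each dimension) combined with frame generation:

```python
# Deriving Nvidia's "seven out of every eight pixels" claim, assuming
# DLSS Performance mode (renders half the output resolution per axis)
# combined with DLSS Frame Generation (renders every other frame).
upscale_rendered = (1 / 2) ** 2  # 1/4 of each frame's pixels are rendered
frames_rendered = 1 / 2          # 1/2 of displayed frames are rendered

rendered_fraction = upscale_rendered * frames_rendered  # 1/8
ai_fraction = 1 - rendered_fraction                     # 7/8
print(f"Rendered: {rendered_fraction}, AI-generated: {ai_fraction}")
# Rendered: 0.125, AI-generated: 0.875 -- seven of every eight pixels.
```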

Nvidia has always leaned on DLSS FG in its presentations to make the 4000-series GPUs look like even larger leaps than they were. In the case of the 4060, the company optimistically claims that the card is as much as 70 percent faster than the RTX 3060, rather than the 15 to 20 percent we observed when DLSS FG wasn't involved.

The RTX 4060 does 20 percent better than the 3060 in games without DLSS FG, which feels like a better deal when paired with its $30 price cut.
Performance with DLSS FG enabled looks impressive, but remember that visual quality can take a visible dip in games that are running at lower frame rates in the first place.

DLSS FG did markedly improve both average and 1 percent low frame rates in all the games we tested it in, and it can be a handy way to push past 60 fps in a demanding game if you're trying to enjoy the high-refresh-rate monitor you dropped extra money on. But it's not without consequence; DLSS FG can also create extra input latency that can make fast-paced shooters or action games feel less responsive, particularly with V-sync enabled.

One of the games in our test suite, Housemarque and Sony's Returnal, also shows how DLSS FG can impact visuals in a way that DLSS supersampling doesn't. A driving rain falls through the entire benchmark, and between the unpredictable paths that the raindrops take and the way the rain refracts the light that passes through it, DLSS FG can have issues predicting where each drop is going to end up and what it will look like in the interpolated frames it creates.

When using the 4060 Ti, we observed that the rain would phase in and out of existence with DLSS FG turned on—it would be totally visible in some frames and totally gone in the next. The problem is similar but slightly exacerbated on the RTX 4060, which has an even lower base frame rate (45 fps with DLSS and FG off, 66 fps with DLSS on and FG off). I noticed the same blinking effect, and even the frames with visible raindrops sometimes showed fewer than they were supposed to.

The rest of the scene, at least to my eyes, looked mostly fine, and DLSS FG does boost the average and 1 percent low frame rates. But depending on what you play, you might run into situations where the effect looks subtly wrong or strange in a way that I don't particularly notice or care about with DLSS 2.

Add to that the latency issues you can experience and it feels like DLSS FG performance numbers still need an asterisk after them. It's a handy tool when you're experimenting with different settings to make a high-end game playable on a midrange PC, and I would rather have the option than not have it. But I wouldn't say it's a slam-dunk selling point.

The midrange GPU for most people

Midrange GPU, midrange (and extremely common) power connector. Credit: Andrew Cunningham

It's not an exciting upgrade, but if you asked me which GPU I would buy for an $800 to $1,000 gaming PC, the RTX 4060 would be the one I'd point to, especially with so many 3060 cards (as of this writing) still selling for pretty close to the same $300 price.

The AMD Radeon RX 7600 and Intel Arc A750 are definitely worth a look if you want to spend less money, but they both come with caveats about what kinds of games do and don't run well that you just don't need to worry about with Nvidia's cards. Add to that the benefits of DLSS (just regular DLSS, not the benefits-with-caveats of DLSS FG), the fact that most professional and AI apps are tuned to use GeForce cards, and the unbeatable power efficiency, and you have a really nice 1080p to 1440p all-rounder without any major weak points. Anyone upgrading from a 1050, 1060, 1650, 1660, or 2060 card will get a nice upgrade; anyone happily using a 3060 doesn't need to worry about missing much.

The good

  • No price hike in what has (so far) been a pretty price hike-y GPU generation.
  • Modest performance improvement from RTX 3060.
  • Dramatically lower power consumption.
  • There isn't a genre of game, API, or rendering effect that it handles poorly, unlike AMD with ray-tracing and Intel with DirectX 11 titles.
  • Good value-for-money relative to the 4060 Ti.

The bad

  • Radeon RX 7600 is sometimes faster for less money, if you don't care about ray-tracing.
  • DLSS FG remains less useful for midrange GPUs than for high-end ones; the lower base frame rates typical of midrange cards make its interpolation artifacts and added latency more noticeable.

The ugly

  • 4GB less RAM than the last-generation card.

Correction: A previous version of this article stated that the RTX 4060 has more CUDA cores than the RTX 3060; it actually has fewer. This doesn't affect any of our testing results or conclusions. 



Andrew Cunningham Senior Technology Reporter
Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.
Staff Picks
GKH
I saw the 'Ars Approved' seal on the review, which is completely unearned IMO.
That bothered me at first too, but on reflection the author's basic argument isn't bad. If you need an entry level GPU in July 2023, this hits all of the standard highlights of an Nvidia card from recent years: slightly more expensive than the AMD alternative, about the same performance but with special Nvidia bonuses, and with much better power efficiency.

It's only when you look at the name (inviting comparison to last-gen x60 cards instead of x50 cards) and the price inflation that's hit every segment of the GPU market that it becomes a bitter pill. Meaning the card isn't bad; the PC gaming market in general is trash when an "entry level GPU" is ~$300 and an "enthusiast GPU" is ~$1,000+.

The market being trash is kind of orthogonal to "I need something to play games". Nothing you can do about the former if the latter is true; you just need direction to the least-worst option. My personal advice would be one of "buy used", "check out a console", or "forget about gaming until the market is more sane". But if none of those are an option, and you just have to get something right now, this is a very solid choice. Which makes me gag a bit for old man yells at cloud reasons, but current reality is current reality, and ultimately I can't really fault the author for operating within it.
thomsirveaux
To respond more directly to this top part of your post:

If Ars wants to review video cards, then it should place any prospective card into context in the market. That market includes many new cards and should also include at least a mention of used ones as viable alternatives. Not everyone will want to buy used, but it’s hard to take a review seriously that doesn’t even consider that part of the market.

Personally, especially given the recent reviews, I don’t think Ars should spend its resources on GPU reviews.

There’s nothing particularly thorough or insightful about the reviews. They don’t give a broad enough overview of the market. There’s little guidance for people upgrading from 2-3 generations ago. There are technical issues. There are editorial issues. The chain of logic on “buy” or “don’t buy” doesn’t hold up to a critical eye.
Hey y'all, author here.

Caught up on this thread this evening, since I'm working on other things. It's clear this review isn't doing the job for some of you - I can at least try to give a bit more context about my thought process and the realities of recent GPU reviews. I will split this up using pithy subheads so things don't get too lost in the sea of text. :)

I Suddenly Review GPUs Now: An Incredible True Story

I hear and sympathize with the complaints asking for more context, particularly re: last-generation cards, other competing cards, etc. This is mainly a problem of equipment. I started doing GPU reviews in November/December of last year, with the RTX 4080 and 7900 XTX/XT. I had very few GPUs to start with, and a totally different testbed from what Sam had been using - we're a remote workplace without a big centralized store of equipment to draw on, so what's in my closet at any given point is pretty much what I have.

Unlike at Gamers Nexus or LTT or some other big YouTube channels, there is not a "team" here reviewing GPUs or working closely and collaboratively on these things, there's not a person who takes the pictures and a person who runs the tests and a person who draws up the charts and a person who writes the script. It's all mostly just One Guy, and now the One Guy is me.

So I was starting from a position of not really wanting to use numbers from old reviews - I can't be sure how comparable they are, given that I have a totally different CPU/motherboard/RAM/etc. And I only had one or two other GPUs in my possession. I've tried to get more, and had some success, but companies generally don't have old GPU stock just sitting around to send out to people (I've definitely asked).

I have tried to do older-gen comparisons when I can; for the RX 7600 review, I ended up going to a friend's house and borrowing his own personal 5600 XT, something he generously allowed me to do. Not something that's repeatable for many GPUs, though!

This is not to be all woe-is-me, the situation just is what it is. I just want to key y'all in to the behind-the-scenes difficulties I've dealt with while also juggling the work I had already been doing before. I think the CPU reviews have gotten better now that I've been doing it for a couple of generations and I have a deeper well of hardware to draw from; I think the GPU reviews will get there too.

On Used Cards, And The Value Of Efficiency

When I do GPU reviews (or CPU reviews) basically I try to start by pretending that a friend who plays PC games but is not particularly tech-savvy has come to me and told me they want to build something or buy something or upgrade.

I don't really consider the used market a ton in this context, because buying used tech stuff from eBay, Amazon, whatever 3rd-party source you wanna name has a lot of potential problems and pitfalls. You don't know where components have been or what they've been doing, you might get scammed and sent some counterfeit part, you might not be covered by whatever warranty that card originally had.

It's clear I should at least be giving more high-level context about the secondhand market for Ars' readership, though, and I'll try to do that. But I don't think "just go buy a used 3070" is like, slam-dunk super-actionable buying advice for a lot of people.

I will also say that I personally place more value on power consumption and power efficiency than most GPU reviewers. Most people seem to treat it as incidental, and put performance above everything. That's just a matter of perspective, and if you want to buy a 300W GPU that's cheaper than but performs the same as a 200W GPU (just pulling numbers out of the air), that's fine! But assuming you're going to use a GPU for 2, 3, even more years, efficiency does eventually have a dollar value.

This also plays into the "recommend used cards" thing, incidentally, since this gen's new cards have been so much more efficient than last gen's.

Things I Think I Am Right About, Actually!!

I do object to the idea that I'm not considering the wider market at all; consider the "no price hike" bullet point, for example. That's listed in "the good" because the RTX 3060 came out in February of 2021 for $329, which is about $380 in mid-2023 dollars. A super-soft PC market has helped CPU and SSD and RAM prices stay level or go down over the last year or two, but given the way costs of most consumer goods have spiked recently, it is actually meaningful when the price of a GPU doesn't go up compared to last gen. And lest we memory-hole the GPU shortage, it's not like you could actually buy a 3060 for $329 for most of its life. For a GPU to come out on the other end of that and cost less than its predecessor's MSRP is, honestly, pretty miraculous.
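(For anyone checking my math, that inflation figure is just a CPI ratio; a minimal sketch with approximate CPI-U index values:)

```python
# Rough CPI adjustment behind "the $329 RTX 3060 is about $380 in
# mid-2023 dollars." Index values are approximate CPI-U figures.
cpi_feb_2021 = 263.0
cpi_jun_2023 = 305.1
msrp_2021 = 329

adjusted = msrp_2021 * cpi_jun_2023 / cpi_feb_2021
print(f"${adjusted:.0f}")  # ~$382
```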

On the subject of "1080p is SO five years ago, my $300 GPU should play 4K now!," I think this line of argument is a little strange? Even ignoring the thing about most Steam games being played at 1080p, people who make this point sort of act like games haven't gotten more demanding as time has progressed. It's not as though "1080p" is some static target that a 1060 is just as good at hitting now as it was in 2016; a game released in 2023 needs a better GPU to play at 1080p than one released a few years ago.

I'll also say that I have a really hard time caring a ton about the meta-conversation going on around Nvidia's model numbers this gen - that so-and-so card should actually be a 70 series, or a 50 series, or whatever. I feel like performance relative to last-gen cards and current competitors is always the most important and most relevant thing. I get that everyone wants to go back to the days of the 1060. I do too! But we haven't been there in a while - it's not like the 3060 was such a huge upgrade over the 2060 Super it replaced, after all.

I dunno if any of this will satisfy anyone, but it's also important to me to try to earn and maintain trust and there are some valid points being made. I wouldn't change my analysis or conclusions here; I think in the new GPU market as it currently exists, the 4060 is the easiest and most straightforward recommendation, the one with the fewest caveats, and at the end of the day most folks just want to plug something in and play games and not think about whether their game uses an API or rendering effect that the GPU doesn't like.

Action items

So the tl;dr - I will try to pay more attention to the used market, including highlighting the most trustworthy sources for buying used GPUs. I'll keep working on getting more alternate and older GPUs for comparisons, a situation that will also get better over time as I do this for longer. And on things like power efficiency where I'm giving more weight to it than others generally seem to be, I will try to be more upfront about it.

I will also ask you all to keep criticism constructive (as many of you have!) and to remember that I am not personally in charge of setting GPU prices, or naming them, or dictating the rate of inflation. It's also not my job to root for underdogs. I really want AMD and Intel to give Nvidia a run for its money - competition from AMD (and, sort of, Apple/Arm) has shaken up the CPU space in a super exciting way over the last five years. It's just clear from the Steam survey that none of AMD or Intel's recent products are really doing that, and based on the data I've gathered, that's the general conclusion I've come to, too. Lots of cards that are "good, except for/but..."
ScifiGeek
  • Gamers are unhappy with the current state of the GPU market. This has manifested in low uptake of new GPU products, from 4070s sitting on shelves (and retailers needing to bundle them with $100 gift cards to move them) to $170 discounts on 7900 XT cards within the first two months. Being the "least bad" of a generation (and even most of the negative reviews of the 4060 concede that it's at least not as disappointing as the 4060 Ti was) doesn't do much to make a bitter pill any more palatable.

I chuckle whenever someone uses the RTX 4070 as an example of poor GPU sales. This started when some pointed out how terrible it was doing in unit sales at the German retailer Mindfactory. Someone has been conveniently aggregating weekly sales updates for Mindfactory for some time; AMD fans often used them to show how great AMD CPU sales were doing...

So how terrible was it doing? It was their best-selling card. Yeah, I don't get it either. But since then, I've followed the Mindfactory results when available.

The 4070 has been their best-selling card, every single week, since it was released.

Here are the most recent numbers I could find; in week 24, the RTX 4070 is once again the best-selling card.
You can also check the Steam Survey: only six weeks after launch, the RTX 4070 made the cutoff to appear. I don't think I've ever seen a card jump onto the survey this fast.

In the Steam Survey, the RTX 4070 has already almost caught the RX 6800 XT, which has been out for over two years. I remember many saying it was much better to buy an RX 6800 XT with all the great deals on it.

I think this narrative of the new generation selling poorly is more a projection of wanting to see Nvidia punished than an actuality.