The Jevons Paradox for PC Gamers

#tech #politics

I have a confession to make. I’ve been kinda washed this past year. Read too much Substack-slop and now I’m getting rusty on my actual pet topics. So, I’m getting back into reading books again. The first book I’m reading is Slow Down by Kohei Saito – essentially a quick rundown of degrowth and all the arguments related to it.

In short, the book is about the climate crisis and how the most common approaches to tackling it (green technologies, Green New Deals, etc.) are completely unserious and incapable of making any real dent in the problem1.

There are many, many frameworks, facts, and angles presented throughout this book – too many for me to cover extensively here, and this isn’t meant to be a book review2. Instead, I’d like to keep things simple and focus on just one of them: the Jevons Paradox.

The Jevons Paradox

Typically, within climate politics, there tend to be three camps:

  • Those who believe the free market is capable of naturally solving the climate crisis on its own. They believe that our present-day prosperity will eventually trickle down as investors pursue greener, more efficient technologies.
  • Those who are more skeptical of the free market’s ability to power the green transition, but who believe state-driven plans and policies (such as the Green New Deal) can steer an economy’s growth in a green direction.
  • Degrowthers – people who believe that an economy centered around the concept of infinite growth (such as our own) is fundamentally unsustainable on a planet with finite natural limits.

The mainstream consensus has long sat in the first camp3. Saito, instead, belongs to the third.

The argument of the pro-market camp goes like this:

  • Capitalism incentivizes businesses to be as efficient as possible, so they have a reason to want to invent technology that consumes less.
  • The profits we make today get invested into researching said technologies.
  • Eventually, those technologies get invented, and even though we indulged in some short-term profit-seeking along the way, we’ve become greener over the long run.

A classic example of this is the transition from incandescent lightbulbs to LEDs. LEDs are up to 80% more energy-efficient than incandescent bulbs and ended up driving down home electricity costs for consumers. As a result, the market naturally shifted towards LED bulbs as consumers preferred the option that would save them money.

Sounds all well and good, right? Well, let’s take it a step further – a step the pro-marketers tend to deliberately leave out:

  • If a technology allows us to produce something more efficiently, that means we’re producing it at a lower cost – i.e. the good in question becomes cheaper.
  • But when goods become cheaper relative to income, consumers don’t just leave that difference on the table. They go out and buy more stuff now that they can afford it.
  • Suddenly, whatever gains were made through market efficiency find themselves cancelled out by increased consumer demand.

This right here is the essence of the Jevons Paradox. If LEDs bring such dramatic efficiency gains, why hasn’t U.S. household energy consumption meaningfully dropped since the ’80s? Well, because a lot of those energy savings got cancelled out elsewhere.
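To make the mechanism concrete, here’s a toy back-of-envelope sketch in Python. Every number in it is made up for illustration – the point is the shape of the arithmetic, not the specific figures.

```python
# Toy rebound-effect arithmetic. All numbers are illustrative, not measurements.

baseline_energy = 100.0   # arbitrary units of energy spent on lighting today
efficiency_gain = 0.80    # LEDs use roughly 80% less energy per unit of light

# Energy needed to produce the *same* amount of light as before:
same_service = baseline_energy * (1 - efficiency_gain)   # 20 units

# But cheaper light invites more consumption: more screens, bigger screens,
# more always-on fixtures. Suppose demand for "light" quadruples.
demand_multiplier = 4.0
actual_energy = same_service * demand_multiplier          # 80 units

print(f"Savings promised by the efficiency gain: {baseline_energy - same_service:.0f} units")
print(f"Savings left after the rebound:          {baseline_energy - actual_energy:.0f} units")
# If demand had grown 5x or more, total energy use would *exceed* the baseline --
# the strict Jevons case, where the efficiency gain backfires entirely.
```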

Putting It In Gamer Terms

Most studies I’ve found that try to empirically debunk the existence of rebound effects (AKA the Jevons Paradox) with regard to lighting seem to only consider first-order effects like “do people leave the lights on longer?”

But reading all this got me thinking. Let’s not just consider basic household light fixtures. After all, televisions are taking up the largest share of household lighting consumption now. Think about how many LEDs go into an OLED TV. Each pixel is made up of multiple self-emitting diodes – at least one per subpixel. A 1080p screen contains roughly 2 million pixels. Want to upgrade to 4K? Suddenly you’re around 8 million. Now consider, on top of that, how many resources actually go into streaming 4K content to that TV4. And if someone is gaming in 4K, they’ll also need a beefier GPU.
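To put rough numbers on that, here’s a quick sketch. The resolutions are standard; the three-emitters-per-pixel figure is an assumption (a plain RGB layout – some panels use four subpixels).

```python
# Rough pixel and emitter counts for common display resolutions.
# Assumes 3 self-emitting subpixels per pixel (RGB); some OLED panels use 4 (WRGB).
SUBPIXELS_PER_PIXEL = 3

resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

for name, (w, h) in resolutions.items():
    pixels = w * h
    emitters = pixels * SUBPIXELS_PER_PIXEL
    print(f"{name:>5}: {pixels / 1e6:5.1f}M pixels, ~{emitters / 1e6:5.1f}M emitters")
# Going from 1080p to 4K quadruples the number of diodes the panel has to drive;
# 4K to 8K quadruples it again -- before we even count the bandwidth and GPU
# cost of feeding it.
```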

And all the same applies to monitors. Over the past decade, I’ve watched 1080p monitors drop into the budget range while gamers increasingly move to 1440p or 4K displays. Enthusiasts have even started buying the newer 8K displays. All of this despite the fact that these resolutions are running up against the limits of human vision: it’s questionable whether our eyes can even distinguish anything above 1440p at standard desk-monitor sizes and viewing distances, ditto for 4K and TVs. In the PC space, demand can exceed any sort of practical benefit – all just for bragging rights.
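There’s a simple way to sanity-check that claim. 20/20 vision resolves roughly one arcminute of detail, i.e. about 60 pixels per degree; the sketch below assumes a 27-inch monitor viewed from 80 cm, so treat the exact numbers as illustrative.

```python
import math

# Back-of-envelope check of when extra resolution stops being visible.
# Assumptions: 27-inch 16:9 monitor, viewed from 80 cm; ~60 pixels per degree
# is the commonly cited resolving limit of 20/20 vision (one arcminute).
DIAGONAL_IN = 27
VIEW_DIST_CM = 80
ACUITY_LIMIT_PPD = 60

width_cm = DIAGONAL_IN * 2.54 * 16 / math.hypot(16, 9)              # ~59.8 cm
fov_deg = 2 * math.degrees(math.atan(width_cm / 2 / VIEW_DIST_CM))  # ~41 degrees

for name, horiz_px in [("1080p", 1920), ("1440p", 2560), ("4K", 3840), ("8K", 7680)]:
    ppd = horiz_px / fov_deg
    verdict = "past the acuity limit" if ppd > ACUITY_LIMIT_PPD else "still resolvable detail"
    print(f"{name:>5}: {ppd:4.0f} px/degree  ({verdict})")
# Under these assumptions, 1440p already sits right at the limit of what the eye
# can resolve at a normal desk distance; the extra pixels of 4K and 8K are
# mostly invisible.
```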

We see the same with GPUs – around the time performance gains at resolutions like 1080p and 4K stopped mattering, the industry started to push hard on “ray tracing” as the future of games. A feature with debatable visual benefit, one that burdened developers with overhauling how they program their games only to benefit a subset of users, probably would not have taken off in any previous era (see: NVIDIA PhysX).

Prior to the RTX push, NVIDIA cards were actually regarded as relatively power-efficient. But now, as those cards try to push the limits of 8K and ray tracing, you end up with comically power-hungry behemoths like the RTX 5090 – a card that alone consumes more power than an entire PC from 2017. The GTX 1080, despite being nearly a decade old at this point, would probably be more “efficient” than the market’s latest offerings for the purposes of regular 1080p gaming. But the market phasing it out should lead us to question exactly how “rational” all of this is.
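Here’s a rough way to frame that “efficiency” claim as frames-per-watt. The board-power numbers are the cards’ published TDP ratings; the frame rates are placeholder assumptions for a 1080p-class workload, not benchmarks – swap in measured figures if you want a real comparison.

```python
# Crude frames-per-watt comparison. TDPs are published board-power ratings;
# the frame rates are placeholder assumptions, not benchmark results.

def frames_per_watt(avg_fps: float, board_power_w: float) -> float:
    return avg_fps / board_power_w

# Illustrative 1080p scenario where the newer card is largely CPU/engine-limited
# and can't turn its power budget into visible frames. Rated TDP is used as a
# crude proxy for draw; real power under a light load would be somewhat lower.
print(f"GTX 1080: {frames_per_watt(100, 180):.2f} frames/W")   # TDP 180 W
print(f"RTX 5090: {frames_per_watt(160, 575):.2f} frames/W")   # TDP 575 W
# Under these assumptions the nearly-decade-old card wins on frames per watt for
# plain 1080p gaming -- the sense of "efficient" used above. Turn on 4K or ray
# tracing and the picture changes, which is precisely the treadmill.
```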

Circling back to that question of developer burden for a bit: it highlights how it isn’t just plain consumer demand that can fall into this Malthusian trap – software can too. After all, hardware exists to run software.

By 1997, the average hard drive held around 2GB, and RAM typically ranged from 8-32MB. Around the time I stopped tracking the hardware market (2019-ish), 2TB drives and 16-32GB of RAM were squarely within the mainstream. This extends even to our cellphones, which now come with storage and specs that put most old desktops to shame. Even solid-state drives – once both much faster and much more expensive – had come down to price tiers close to those of standard HDDs. Dial-up internet speeds were measured in kilobits per second, whereas modern fiber networks have brought us into the realm of gigabits.
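Lining those figures up side by side (the 56 kbit/s modem speed is an assumption – a typical late-90s dial-up connection – and the rest are the rough values above):

```python
# Rough growth factors between the late-90s figures above and the ~2019 ones.
growth = {
    "Hard drive (2 GB -> 2 TB)":        2e12 / 2e9,
    "RAM (8 MB -> 32 GB)":              32e9 / 8e6,
    "Internet (56 kbit/s -> 1 Gbit/s)": 1e9 / 56e3,
}

for name, factor in growth.items():
    print(f"{name}: ~{factor:,.0f}x")
# Hard drive: ~1,000x   RAM: ~4,000x   Internet: ~17,857x
```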

These are improvements on the order of a thousand-fold to tens-of-thousands-fold. Even accounting for evolving needs, you’d think there’s absolutely no way we’d ever run up against the limits of all this. But then why do our computers still feel sluggish? Why do our phones keep running out of storage? Why do websites take forever to load?

It’s because with increased capacity, businesses and developers become tempted to stretch it to the full. Websites are loaded with flashy JavaScript animations that look pretty for a second until they eat your RAM and grind the page to a crawl. Phones default your grandma to shooting 4K photos and videos that chew through gigabytes of storage. Video games have gone from using clever engineering tricks to fit on a 32KB Game Boy cartridge to freely spending hundreds of gigabytes on uncompressed 4K cutscenes. Eventually, even the frameworks and libraries that become standard are designed around showroom fads rather than anything sustainable, simple, or efficient.
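As a back-of-envelope example of the game-size point (24-bit color and 30 fps are assumptions chosen for illustration):

```python
# Back-of-envelope storage cost of raw (uncompressed) 4K video.
# 24-bit color and 30 fps are assumed for illustration.
width, height = 3840, 2160
bytes_per_pixel = 3   # 24-bit color
fps = 30

frame_mb = width * height * bytes_per_pixel / 1e6   # ~24.9 MB per frame
minute_gb = frame_mb * fps * 60 / 1e3               # ~44.8 GB per minute

print(f"One raw 4K frame:           {frame_mb:.1f} MB")
print(f"One minute of raw 4K video: {minute_gb:.1f} GB")
# Even with compression clawing most of that back, hours of prerendered 4K
# cutscenes are a big part of how modern games balloon into the hundreds of
# gigabytes.
```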

But the market has no problem with this – it incentivizes you to buy the new laptop with even more RAM. It wants you to run out of iPhone storage so you’ll subscribe to iCloud. It benefits from your frustration with laggy internet, because that’s when you’ll be most likely to upgrade.

But what happens when things don’t always go up? What happens when we experience supply shocks, like the one currently tripling RAM prices? Software written for moments of RAM Abundance doesn’t suddenly contract its requirements as soon as we enter RAM Scarcity. The consumer is just left holding the bag.

What about the people in rural areas who are still stuck on dial-up? Politicians will talk about “closing the digital divide” by funding gigabit projects, but why does the modern web demand those speeds in the first place?

The Bigger Picture

Technology is not neutral; it does not exist independently of the context in which we create it. We can marvel at the innovations in efficiency, but what does it all amount to if we just treat them as an excuse to be inefficient in other ways?

Industry loves to pretend every problem is a new one, because it means it can step in and sell us “new solutions”. The basic economics which underpin the climate crisis are not new: ever since the advent of the Industrial Revolution, writers like Jevons, Malthus, and Marx have pointed out fundamental problems that remain relevant to this day. The only thing that’s new is how it keeps getting harder for our elites to bury their heads in the sand. The goalposts keep shifting: from “climate denial” to “green markets” to “green investment” to “climate adaptation”.

A third of Pakistan found itself submerged by floods only a couple of years ago. The Horn of Africa has faced a years-long drought that has led to mass starvation. Even if we were to write off the third world and focus “at home” (itself a loaded term), we still wouldn’t be able to escape the effects. Climate change is to blame for the persistent wildfires that have ravaged California. Frequent natural disasters are wreaking havoc on Florida’s home insurance industry.

It’s easy to compartmentalize, but these are millions of lives being wrecked as we speak – entire communities and cultures being totally upended. And climate change isn’t some mysterious, unfortunate fact of circumstance. As Saito points out, half our emissions come from the wealthiest 10% of the global population, most of it going to luxury and discretionary consumption. The bottom half of the world, by contrast, contributes only 10%.

For those of us living in the developed world, this question should hit closest to home: is it worth it? Are the benefits we’ve been getting from growth really so essential as to make questioning it a taboo?

Is a video game made a thousand or a million times better because of the extreme fidelity of its graphics? Do I really need Target to wow me with their newly “modernized” interface every other year? Do most of the videos I watch on a day-to-day basis actually need to be in 1080p or 4K? Does the internet feel like a better place, does it feel like a more useful presence in our lives than it did before?

Now, of course, these are pretty superficial examples – but they’re all things we’ve come to take for granted over the past five years alone. If we wanted to take it a step further, we could ask whether we even need to be streaming video at all. How much of online shopping needs to exist in the first place? How far could we get if keeping and stretching technology from two or three decades ago were the norm?

But those are all questions for another time. I think what’s worth taking away from this rabbit hole is how deeply interconnected everything is, right down to our habits.

Early software engineering (under the UNIX principles) stressed a philosophy of “keep it simple, stupid” – that there’s value in keeping our designs simple, focused, and modular. I wonder how far we could stretch this principle into our society, so that when we do run up against limits or inequalities, our first instinct isn’t “how do we add another layer to fix this?” but “what can we do away with to lighten the load?”

And to bring some truth to the analogy, there is a role developers can play in tackling this. Saito talks about the digital space as the last frontier for capitalism to conquer. It wasn’t that long ago that the internet was structured around a different ethos – a culture of the commons rather than one of profit. Creativity, simplicity, and reuse were all values integral to hacker culture.

There are multiple generations of working developers who still carry this memory – the memory of an alternative to a system we’re told has no alternative. And it’s this memory that continues to drive people to challenge the conventions of software design, even when the whole world seems to be swimming in the other direction.

Talking about GPUs and design might seem petty amidst so much economic and political turmoil, but there’s a real, powerful wedge that can be driven here. How we interact with the technology we rely on day-to-day forms habits that end up shaping how we think. This is especially true for the developed world – an atomized mass of superconsumers who spend most of their time online.

As engineers, we’re the middlemen who create that interface. There’s a lot of power to be had in reshaping the daily habits of the most powerful polities on earth. The question is: do we take that design for granted and let the market assign us what it will? Or do we work to become conscious of that power, struggle to reclaim it, and turn it towards the betterment of humanity?


  1. I’ve written something similar years back, albeit in a much worse fashion. ↩︎

  2. I might do that later. Will update this section if so. ↩︎

  3. Although the second has started to gain steam in recent years. ↩︎

  4. It actually consumes far more water than AI models. ↩︎