AI and The Black Box Problem
Tariffs were just announced, and they look far worse than what most were expecting. A lot has already been written about how disastrous this will be for the American economy and how universally stupid the decision is considered to be.
What you might not have seen are the allegations that the formula used to calculate the tariffs was not the product of months of careful planning and research, but instead a last-second suggestion written up by ChatGPT.
So why did Trump’s team use it? Well, like plenty of people who’ve realized their homework is due in three hours’ time, it seems like they may have been tempted by AI.
A number of X users have realized that if you ask ChatGPT, Gemini, Claude, or Grok for an “easy” way to solve trade deficits and put the US on “an even playing field”, they’ll give you a version of this “deficit divided by exports” formula with remarkable consistency.
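For illustration, the recipe the chatbots reportedly converge on is trivial to write down. Here’s a minimal sketch of the “deficit divided by exports” formula as described above; the function name and the sample figures are hypothetical, made up purely for demonstration, not anything official.

```python
def naive_tariff_rate(trade_deficit: float, partner_exports: float) -> float:
    """The reported chatbot recipe: a country's bilateral trade deficit
    divided by that partner's exports to the US. A simple ratio, not
    anything resembling sound trade economics."""
    return trade_deficit / partner_exports

# Hypothetical country: $50B deficit on $100B of its exports to the US.
rate = naive_tariff_rate(50.0, 100.0)
print(f"{rate:.0%}")  # prints "50%"
```

The whole “formula” is one division, which is part of why it reads more like a homework shortcut than a policy.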
The fate of the American economy might’ve just been decided by ChatGPT. What a world we live in.
Now, whether or not this specific incident is true, it certainly wouldn’t be the first of its kind. The new administration has taken to relying on AI chatbots to rewrite government documents en masse and to call the shots on which government programs receive funding and which ones don’t. They’ve even taken to leveraging AI to determine which foreign students should be deported for their opinions.
Of course, this is incredibly concerning given that AI chatbot tools such as ChatGPT remain highly susceptible to hallucinations, in which the bot confidently provides incorrect answers to your questions. It’s not clear that this will ever be ironed out, as the flood of AI-generated content online ends up poisoning the quality of the models’ training data (in some cases outright making the models dumber over time). Something so error-prone is not what you want to grant virtually unchecked decision-making power when issues as serious as First Amendment rights and economic stability hang in the balance.
But to be fair, the government isn’t the only one playing fast and loose with these tools. A lot has been made of how students use ChatGPT to cheat, but did you know professors have also taken to using it to perform peer reviews on their behalf? It doesn’t stop there: over half of job applicants are letting ChatGPT write their resumes for them. Meanwhile, business owners fantasize about using AI coding tools to completely replace the need for developers. Lawyers have started getting caught submitting AI-generated filings in court. More and more teens have come to rely on AI apps to replace their friends.
In all of these cases, the AI often spits out complete nonsense that still gets blindly turned in anyway, much to the detriment of society at large. Either other actors waste time calling out the fabricated work, or the integrity of the systems themselves starts eroding as inaccuracies and vulnerabilities go unchecked. But even then, there’s something irresistible about the fantasy AI provides: the idea of anyone being able to “order” creativity like it’s a DoorDash burger.
In engineering, there’s this concept of the “black box system”: a hypothetical system which has nothing but an input, an output, and an opaque black box in the middle. With a black box, you can’t see the circuitry and wires that actually define the logic; it’s just a singular, mysterious brick. All you can do to understand it is provide input and try to read the output.
Engineers will sometimes use this perspective to temporarily simplify a model and not get lost in the weeds. But this is only one way of looking at a system: ideally, you’d always have the option to return to the full white-box view, logic and all.
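As a toy illustration of the idea (the names and the sample function here are my own, not from any particular engineering text): treating something as a black box means characterizing it purely through input/output pairs, never its internals.

```python
from typing import Callable

def probe(box: Callable[[int], int], inputs: list[int]) -> dict[int, int]:
    """All we can do with a black box: feed it inputs, record outputs."""
    return {x: box(x) for x in inputs}

# Pretend this arrived as an opaque binary: we can call it, not read it.
def mystery(x: int) -> int:
    return 3 * x + 1

table = probe(mystery, [0, 1, 2, 3])
print(table)  # prints "{0: 1, 1: 4, 2: 7, 3: 10}"
```

From the outside, that table of observations is the entire extent of our knowledge; the white-box view (the `3 * x + 1` inside) is exactly what the black-box perspective gives up.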
But for the average person, the end-user, there’s an analogy we can make here. A lot of us take the black box for granted. Take, for example, how you interact with a vending machine. The vending machine expects you to put money into it and get a candy bar out of it. You’re not expected to investigate what goes on in between: there’s an almost ritual aspect to the whole ordeal.
Yet this is the pattern we¹ find ourselves increasingly conditioned into. Need to buy supplies? Press the button and the Amazon package will appear at the door. Food? Imports from all across the world just show up on grocery shelves, all in a row next to each other. If you’re willing to shell out the extra cash, you can even get prepared meals delivered straight to your door. Need to talk to your friends, follow the news, or consume content? Download a social app and let “the algorithm” decide from on high what to serve you.
In all of these cases, our daily life may as well be governed by magic: we barely understand so many of the things that govern how we live and run our society day to day, except for the limited ways in which we’re expected to (i.e. spending money).
If you do happen to be the kind of weirdo who wants to find out how the tech you use actually works, you’ll often be met with active resistance from the people who produced the thing in the first place. Want to try opening up your laptop to modify or repair it? Beware: manufacturers will deliberately design devices in ways that punish you for opening them up. In some cases, companies have gone as far as to take legal action against those who dare to tinker with the devices that they own. Instead, these companies want you to respond to your device breaking by forking over even more money, either to have them repair it or to straight up buy another one.
One can find this pattern mirrored elsewhere too: the restrictions against homebrewing, software tinkering, and ROM hacking all speak to the threat posed by activities which pierce the veil. Interacting with your devices in ways that involve actually doing things yourself is treated as forbidden because it breaks this cycle of constant consumption.
This phenomenon isn’t limited to tech; it shows up in any industry which deals in commodities. Fast fashion has been widely criticized for promoting overconsumption and waste, but also worth noting is the other trend that has accompanied its rise: the decline of sewing, tailoring, and stitching as skills among the general public. Clothes that can be cheaply replaced have a much stronger market among a populace that doesn’t know how to fix them. Similarly, we see a generational fall-off in computer literacy, as growing up with tightly-controlled sandboxes has left younger generations without the basic capacity to troubleshoot.
Which leads into a major problem: our culture fetishizes “simplicity” and “seamlessness” to the point that any interruption completely gums up the works. We define “working” in terms of what best enables a constant, uninterrupted stream of passive spending and receiving. This whole flow is so tightly integrated and regulated that we lose the ability to think outside of its paradigm.
One of the key ironies is that this drive for a maximally simple user experience often drives us to engineer overly complex devices. Think about the amount of complexity and effort that goes into building a fingerprint scanner for a phone, and how much more effort goes into engineering it to hide invisibly under your screen. Think about how many more moving parts this adds to your device, how much more complicated and harder to maintain it becomes, and how many possible bugs or vulnerabilities it could introduce. Now compare all that to the amount of time you save by not having to enter your passcode.
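To put rough numbers on that trade-off (every figure here is an assumption invented for the sketch, not a measurement):

```python
# Back-of-envelope comparison: under-display fingerprint reader vs. passcode.
unlocks_per_day = 50    # assumed unlock frequency
passcode_s = 3.0        # assumed seconds to type a passcode
fingerprint_s = 0.5     # assumed seconds for a fingerprint scan

saved_per_day_s = unlocks_per_day * (passcode_s - fingerprint_s)
saved_per_year_h = saved_per_day_s * 365 / 3600
print(f"~{saved_per_year_h:.0f} hours saved per year")  # prints "~13 hours saved per year"
```

Even under these generous assumptions, the saving is on the order of a dozen hours a year, purchased with a whole extra subsystem of hardware, firmware, and attack surface.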
There are diminishing returns on personal efficiency, yet we continue to chase it out of fear that any one small hiccup will break the immersion. But should we be immersed? A phone is an object, not a world. We have a world out there already. Do we really need to optimize every single aspect of it?
Ever since the beginnings of industrial capitalism, there’s been a long tradition of writers who have recognized the ways in which the market is constantly trying to hawk its products as the new mediators of our lives. And regardless of whether it’s 1860 or 1960, the effects they note down are the same: the environment fosters a culture of passivity, helplessness, and detachment from reality. All problems that only become more relevant the closer we get to the present day.
This is why ChatGPT comes so intuitively to us: you don’t need to understand neural networks any more than you need to understand global supply chains. The input is your words, and it’ll spit out all kinds of outputs. That’s all you need to know.
Whereas prior pillars of online reference (such as Wikipedia) were designed from the ground up to stress user contribution and citation, ChatGPT is incredibly opaque about where it gets its sources. It simply provides a response, with no consideration for presenting the context of its construction. While many people just read Wikipedia casually, there’s a good chunk of people for whom it acts as a jumping-off point: either to dive down citation rabbitholes and learn more, or to contribute their own knowledge and research to the site. Chatbots don’t provide that sort of window, because doing so would break the immersion.
It’s easy for anyone to make fun of AI art and how it defeats the purpose of art, but isn’t that what we’re doing to every other aspect of our lives? Traditionally, it was actually working with our means of survival on a concrete level, and taking some level of direct responsibility for our day-to-day lives, that kept us both competent and grounded in reality. With all of that passed off to the backend, we just kind of come to accept everything becoming shoddier.
It probably comes as no surprise, then, that it’s business types more than actual tech workers who seem most obsessed with the AI revolution: money is the only language they understand, and the coin-operated black box is all they see. The fact that it seems able to do anything with a prompt only adds to this, as a relative lack of knowledge about its technical constraints makes the possibilities sound endless. You can mass-produce products, replace workers, even craft economic policy. It really does reveal how powerful the latent desire driving a lot of these companies is: the hope that technology can provide a “cure” for reality and its constraints.
It’s a solid reminder that even our elites can be just as deluded as us (or oftentimes even more so). As individuals, we might rot by overspending or doomscrolling, but institutions rot too. I’ve previously written about how abstraction from political decision-making can end up producing worse decisions, but as we see with recent events, it can also lead to full-on political schizophrenia.
Black-box politics works by demanding one-sided support: political action means either signal-boosting narratives crafted by a party or donating money to it (despite studies showing the latter is largely ineffective). Any form of interaction outside of this is rapidly becoming inconceivable. In the past, communities found their support in civil society, in organizations that advocated very concrete local interests and often directly provided for their own members. Now, most of these institutions have been hijacked by political parties and steered towards pushing their nationalized agendas.
Just like the tech companies, which discourage you from taking matters into your own hands because “they’ll handle everything,” political parties do the same. By removing the other outlets through which we could find alternatives or escape, this creates conditions ripe for hyperpartisan spiralling. Parties artificially raise the stakes in the name of laying claim to more and more of society, until the stakes start becoming real. People, with their entire sense of political agency now wrapped up in these nationalized fights, begin to lose their grip on reality too.
There are countless thinkpieces already written about how stupid the tariffs were, how the U.S. just self-inflicted a massive wound on its own economy for no good reason, and so on. But the stupidity comes not just from ChatGPT’s formula, but from an understanding of business and economics that has spiralled wildly out of touch with reality. The politicians drank their own Kool-Aid; they began to believe in an idealized golden age of True American Manufacturing (which wasn’t necessarily ever the case) that could be revived overnight with just enough political willpower.
This isn’t a bug; it’s a feature of the system. I’ve criticized the abuse of the term “misinformation” before, but one thing is true: we do live in an environment where it’s a lot easier to just full-on invent your own personal set of facts. But what enables this above all else isn’t a lack of media regulation; it’s the fact that nothing has consequences. Nothing feels real because, for a lot of people, every aspect of their life (the same aspects our political elites think they can technocratically govern) is essentially virtual. Things have become so seamless that we can’t even tell the difference anymore.
Of course, in the long term we can’t always expect things to “just work”. Cracks in the facade are inevitably going to emerge, whether due to material constraints or the sheer limits of human psychology. The tariffs are already showing some of these cracks: companies are having to pull pre-orders and un-launch their products, and prices on goods are expected to increase both dramatically and suddenly. The narrow profit margins that made a device like the iPhone possible in the first place can vanish just as easily as they cemented themselves.
If there is a silver lining to all this, it’s that the illusion of seamlessness can be broken rather easily. It doesn’t take long after reality comes knocking for our more superficial attachments to fall by the wayside. In a market of free choice, it’s easy to fall into the path of least resistance, but when that luxury is taken away, you have no choice but to adapt and acknowledge what is in front of you. We’re entering a world in which tinkerers are going from enemies of the system to those who are left to pick up its pieces.
AI won’t save us, not as long as its design continues to reinforce the very behaviors that brought us to this point in the first place. The system won’t save us either, as it has continually refused every offramp provided to it. Only the people, by taking their lives back into their own hands, can rebuild their resilience and be prepared to take on whatever history has in store for us next.
-
¹ Especially talking here within the context of the first-world upper-middle classes that disproportionately comprise our political elite.