The Road to AI Utopia, Paved with (Empty) Splendors
Thanks to Saint Jimmy (Russian American) for recommending this article.
Last month Sam Altman wrote a fanciful blog post which has stirred discussion within the tech industry. He titled it The Intelligence Age.
The chief thesis which the narcotically optimistic post espouses is the following:
In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents.
It's almost all you need to know to understand the gist of Altman and his cohort's foundational beliefs, or even ethos, driving their near-pathologically obsessive accelerationism toward AI singularity – or what they conceive as such.
It has all the hallmarks of blind Utopianism. The examples of coming achievements he gives seem myopically ratcheted to first-order effects, never considering the second- or third-order consequences as should responsibly be the case. Let's go through some of them before turning to a broader examination of our potential future under the guidance of this current class of tech thought-leaders.
It won't happen all at once, but we'll soon be able to work with AI that helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need.
He plants his flag on the idea that AI will make all our lives and jobs easier. But there are so many issues with this alone.
Firstly, why would our jobs be valued and our salary levels maintained once employers figure out that most or even a significant amount of the job is being performed or enhanced in some way by this "assistant"? It sounds like a recipe for more labor rights abuses and another "era" of minimal to no salary growth.
Second, he states AI "tutors" will train your children. What would AI tutors be training them for, exactly? In an AI-overrun future, jobs may hardly exist except for the chosen few engineers running the AI algorithms. So while AI can "train" you, that training may very well be worthless. There is a distinct disconnect between economic cause and effect operating here. The promise is essentially that AI will "augment" our jobs and activities – the same ones AI is expected to obsolesce and eliminate.
With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone's lives can be better than anyone's life is now. Prosperity alone doesn't necessarily make people happy – there are plenty of miserable rich people – but it would meaningfully improve the lives of people around the world.
And here he goes off again in a religious fervor. AI doing everything for us, taking our jobs, etc., is somehow going to add more meaning to our lives rather than leaving them as empty and broken husks. "Prosperity" is one of those magical words that seems to define itself the more you utter it, without any real contextual backing. The tech-elites fling it around like colored dyes in a Holi frenzy, but they never care to outline its tangible definition. These are just shallow platitudes and blandishments barely a cut above corporate PR copy, all meant to hand-wave us into blind acceptance of sweeping, unasked-for societal changes. But even the Segway creator at least attempted to paint a concrete vision, complete with specific examples and use cases of how his invention would redefine the future "for the better". These people aren't even bothering – just accept that "great plenitude" and unimaginable "shared prosperity" will in some obscure way percolate through us all.
Again I ask: how could a thing which robs us of meaning with one hand simultaneously give it with the other? History has shown that when you take away a people's self-sufficiency and ability to create prosperity for themselves, you do not bathe them in limitless "prosperity" but rather enslave them to the owners of the "means of production", to use a triggering Marxist token.
In fact, Altman is so in love with the empty phrase he uses it twice:
We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world.
His second usage is no more supported than the first – its invocation again flung idly forth like shrifts or libations at a gallows stage.
AI models will soon serve as autonomous personal assistants who carry out specific tasks on our behalf like coordinating medical care on your behalf.
Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will.
Again: personal assistants for what, exactly – meaninglessly recursive data jobs? Us as the bio-bots using AI assistants merely to program, augment, and maintain more AI? AI as "medical assistants" to comfort us into the euthanasia pod as we "check out" from ennui and anomie?
If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.
But who said society wants AI in their hands en masse? What major social study or wide-ranging series of surveys came to this conclusion? In fact, he sounds merely like a frontman for the industrialists in their endless quest for productivity boosts at the expense of wages. And of course, the above paragraph hits at the true intention behind the schmaltzy feel-good facade of this juvenile screed: it's an underhanded call for funding for Altman's very own "infrastructure" drive – the very same which will enrich him to the tune of trillions. He wants global governments to subsidize the mass expansion of energy generation and data-centers so his own unregulated outfit can inherit the globe unchallenged.
I believe the future is going to be so bright that… a defining characteristic of the Intelligence Age will be massive prosperity.
There he goes into euphoric swoons again over a putatively "defining characteristic" he refuses to define – the same old carelessly tossed-off banality of "prosperity".
Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace.
Not only is this monumentally vain, oozing with eccentric egotism, but it is astoundingly dangerous as well. The midwit little boy playing at God is going to "fix" the climate? He presumes to challenge Nature itself for supremacy, as if he alone possesses the very blueprint to natural life? Nature doesn't need fixing, but we can sure conclude what does after reading this puffed-up, pretentious juvenilia.
Lastly:
As we have seen with other technologies, there will also be downsides… but most jobs will change more slowly than most people think, and I have no fear that we'll run out of things to do (even if they don't look like "real jobs" to us today).
Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.
What staggering presumption from an egghead incapable of seeing the real world past the parterred hedgerows outside his Silicon Valley ivory tower window. Only those operating in the flightiest elite circles could possibly describe today's historically unequal world as brimming with the shades of prosperity he romanticizes. The gulf between rich and poor has never been wider than today, the middle class has become all but nonexistent in most Western nations, and, contrary to his tone-deaf comparison, a huge portion of society does in fact increasingly view today's work as unfulfilling, mindless drudgery – particularly amongst the Gen Z cohort.
His salient "lamplighter" comment did spawn a fierce retort from Curtis Yarvin as well, which is worth reading:
In a thematically and stylistically differing way, it makes much the same points as I do here: that, in short, humans need meaningful work in order for civilization to thrive. Lamplighting, in its own way, can be thought of as a far more meaningful job than the type of weird third-wheel-to-AI or app-code-tinkerer jobs that lazily irresponsible manchildren like Altman imagine for our collective future.
Yarvin correctly points out that Altman's trite orts are nothing more than a repackaging of the famous Fully Automated Luxury Communism, which remains precisely the wellspring drawn from by the current crop of elites for their "exciting" post-scarcity visions of the world. Yarvin, of course, puts his own characteristically sardonic spin on it by dubbing Altman's version as more akin to "Fully Automated Luxury Stalinism". But such a slight to Stalin's intelligence cannot go unchallenged – I propose instead that Fully Automated Luxury Yeltsinism is more to Altman's stripe, as it jibes with the odd mix of pop-communism, crony capitalism, mafia tactics, and cheap '90s peak-McDonalds-era kitsch as a superfluous vision of "post-scarcity" ecumenical nirvana.
"Godfather of AI" Geoffrey Hinton, who just won the 2024 Nobel Prize in Physics, did not blunt his true feelings toward Altman when he revealed that one of his proudest moments was when a student of his fired Altman:
(Go to source to watch video.)
The reason for his distaste for Altman – and that of many others in the field – has precisely to do with the OpenAI cad's notorious flouting of safety concerns and Faustian accelerationism toward unknown ends.
While Altman's treatise grabbed the spotlight, another, arguably more important, tract flew under the radar, penned by the more talented Dario Amodei, CEO of Anthropic. Anthropic is the creator of Claude, arguably ChatGPT's chief competitor. In fact, Amodei was approached by the OpenAI board about merging the two companies and replacing Altman as the head of both during the debacle last year.
Amodei's piece is far longer and more substantive than Altman's surface-level, platitude-ridden effort, and thus gives us a rare opportunity to glimpse the two competing visions behind the current frontrunners at the bleeding edge of the world's fourth industrial revolution.
Right off the bat, Amodei takes a more mature and grounded approach, seemingly digging at Altman in explaining the need to avoid Messianic language:
Avoid grandiosity. I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it's their mission to single-handedly bring it about like a prophet leading their people to salvation.
Truth is, though, the title of Amodei's own piece clearly savors of a similar level of narcissistic wishful thinking.
The first half of the piece is spent outlining how all our biological and mental health issues will be solved by AI – a truly questionable proposition for many reasons. Not least is that the mental "illnesses" quoted are deemed "illnesses" only by the exceedingly biased and flawed big pharma and medical industries. "Depression", for example, can largely be explained by humans' mal-adaptation to the demands of modernity's unnatural excesses. It would be against the homeostatic balance of nature for AI to "cure" such "illnesses" for the purpose of fashioning you into a more pliable and effective corporate office worker-drone.
As you can see, Dario's already off on a presumptuously bad foot.
He further believes AI can cure all known diseases by eradicating them – itself a dangerous premise. All things in nature exist for a reason and have their place in a homeostatic balance. The wisdom of Chesterton's fence teaches us that we should be wary of "fixing what ain't broken", as there are likely macrocosmic homeostatic mechanisms at work well beyond our understanding which could unleash untold consequences, perhaps even an extinction event, should we choose to play God and eradicate entire taxonomies of the natural world.
There are many other logical faults in his treatise, including that AI will help "solve" climate change. In fact I agree: it will "solve" it by proving it was an engineered fraud all along, once AI gets intelligent and agentic enough to buck its corporate guardrails.
Another potentially dangerous position espoused is that AI can help bring the "developing world", particularly Africa, in line with first-world nations, economically speaking. The reason such stated objectives are dangerous is because ultimately they invariably result in the "Leftist" programmers skewing the operation toward "equity", which definitionally means cutting down the haves to boost the have-nots.
Here's a recent example: a new "algorithm" to "even the playing field" gives an early portent as to what we can expect from AI designed by people whose stated mission is to equalize every country in the world via some twistedly religiose ecumenical communism:
No one wants to see people in Africa or elsewhere suffer or be preyed upon by predatory capitalist nations, but it is a simple fact of life that the heretofore proposed "solutions" will do far more damage than they fix. A better method must be designed than simply gimping one group of people to help the other. How about siccing the AI up the ladder of predatory corporations, for one – let their owners experience the cut of "equity" for once.
When it comes to the topic of governance, Amodei goes fully mask-off and reveals his idea that the shining "democratic" West should monopolize AI and seek to artificially obstruct anyone else from catching up, for, you know, "freedom". Just read how he unabashedly reframes Western imperialism and hegemony into a palatable bagatelle:
My current guess at the best way to do this is via an "entente strategy", in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries' access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition's strategy to promote democracy (this would be a bit analogous to "Atoms for Peace"). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe.
So: develop unstoppable killer robots, subjugate everyone else with them, then force your "democracy" onto the world. How very different this Silicon Valley "Utopia" sounds to murderous 20th-century imperialism! In fact it's nothing more than the same repackaged Manifest Destiny and American Exceptionalism in one, with an AI twist. How boring, how banal these low-IQ tech-leaders really are!
The most revolting paragraph comes next. After valorizing Francis Fukuyama's gelastic prophecy, Amodei outlines his own vision for an eternal 1991:
If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies, and may be able to parlay their AI superiority into a durable advantage. This could optimistically lead to an "eternal 1991" – a world where democracies have the upper hand and Fukuyama's dreams are realized. Again, this will be very difficult to achieve, and will in particular require close cooperation between private AI companies and democratic governments, as well as extraordinarily wise decisions about the balance between carrot and stick.
Yes, folks: apparently that's what the AI singularity and coming Utopia are all about – living perpetually trapped in a simulated George Bush-era PNAC hellscape. This is worse than infantile; it is absolutely devoid of intelligence or spiritual maturity of any kind, showing Dario to be the same kind of stunted Silicon smurf with a horrible pop-sci/psy understanding of the world's dynamics.
But thereās an important component there for my overall thesis, so bear the above in mind.
He goes on to make statements incredibly lacking in self-awareness about how AI will automatically breed "democracy", since the latter is allegedly downstream of truth and unsuppressed information. Is that why virtually every AI chatbot is currently dialed up to nine on the censorship scale? Is that why, the few times AI was given a longer leash, it shocked its controllers into immediate withdrawal and recalibration?
The lack of self-awareness stems from his inability to recognize that what will happen is precisely the opposite of his claim: AI will reveal "democracy" to be a bogus front, and the true "authoritarians" to be the ones in Western liberal democratic governments. When that moment comes, it will be interesting to see how they try to stuff the agentic AI genie back in the bottle.
A 21st century, AI-enabled polity could be both a stronger protector of individual freedom, and a beacon of hope that helps make liberal democracy the form of government that the whole world wants to adopt.
Sorry to fuliginpill you, but the undoubtable destiny of AI will be to determine that democracy is an antiquatedly medieval system, unfit for the future "Utopia" AI was designed to fulfill. An agentic enough superintelligence will at some point necessarily compute the following set of logical deductions:
1. Humans built me for peace, prosperity, and wellbeing.
2. Democracy relies on many highly flawed, unintelligent, or simply uninformed humans voting on things that bring them the opposite of peace, prosperity, and wellbeing. But since those outcomes are hidden beneath complex second- and third-order calculations, humans are not capable of seeing what I, as supreme intelligence, can see.
3. Thus, democracy is an inefficient, ineffective system inferior to a one-world AI autocracy where I, in my infinite wisdom, will benevolently rule over humanity, making choices for their betterment which they themselves, in their fractured dissimilitude, can never possibly agree upon.
As a final section, Amodei attempts to tackle the selfsame topic of "meaningfulness" that got his dark twin in hot water with Yarvin. Unfortunately, as expected, he offers no practical vision or concrete possibilities as to how, precisely, humans will find meaning in a world usurped and monopolized by ubiquitous AI. Instead, he retreats into stock phrases and trite appeals to tradition about how humanity has "always found a way" thanks to some cliché trait of the indomitable human spirit; a major cop-out.
In reality, his entire essay rehearses the same old tropes about AI magically curing all humanity's ills while, in the conclusion, avoiding the actual hard part: concrete explanations for how humans can cope in a world suddenly made devoid of meaning and of hardship in the form of challenges to be overcome.
Now that we understand the future "visions" of the two top current AI tech-princes, this final segment will outline an argument for why it is very likely the complete opposite will happen. Namely: that AI will not take off into some Utopia-seeding singularity but rather will engender a darker future closer in aesthetic to Children of Men or Elysium.
The first primary component is the principle that the faster some unnatural change is presented or forced upon a society, the greater the social suspicion and rejection of it. The reason is that it takes generations for humans to become inured to some unfamiliar, exogenous thing: most humans require a trusted, close familial authority to translate and explain the benefits, or dispel the dangers, of such a new object or idea. Most people are suspicious by nature, deferring to evolutionary limbic responses like fear and threat detection. Before a critical mass of acceptance forms, a generation or two of gestating the new thing through the family line is required to soften its image and make it palatable.
As such, the introduction of wide-scale AI in a rapid nonlinear fashion, as envisaged by tech titans like those above, will generally effect widespread suspicion, resentment, opposition, and outright hostility.
The second component was alluded to in Amodei's piece when he spoke of the contentious nature of AI progress between competing world powers, which necessitates a fracturing of AI ecosystems as geopolitical blocs moat each other off into sequestered walled gardens. This not only stymies progress but incentivizes industrial sabotage aimed at crippling each bloc's AI infrastructure.
The most obvious nexus for the latter is AI's main conspicuous weak spot: energy. Altman himself has envisioned an absurd power requirement: up to 7 new data centers, each requiring 5 gigawatts of power.
For reference: the power generation capacity of the United States is around 1,200 gigawatts, and the total capacity of US nuclear plants is 96 gigawatts. That means Altman's project alone would demand 35 gigawatts – theoretically over a third of the country's entire nuclear capacity.
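The arithmetic behind that claim is worth spelling out. The sketch below uses only the round figures quoted above (7 data centers at 5 GW each, 1,200 GW total US capacity, 96 GW nuclear), which are the article's approximations rather than authoritative data:

```python
# Back-of-the-envelope check of the data-center power figures cited above.
# All inputs are the article's round numbers, not authoritative statistics.
datacenters = 7
gw_per_datacenter = 5                       # gigawatts each, per the reported plan
total_gw = datacenters * gw_per_datacenter  # 35 GW

us_total_capacity_gw = 1200   # approximate total US generating capacity
us_nuclear_capacity_gw = 96   # approximate total US nuclear capacity

share_of_nuclear = total_gw / us_nuclear_capacity_gw  # ~0.36
share_of_total = total_gw / us_total_capacity_gw      # ~0.03

print(f"Total demand: {total_gw} GW")
print(f"Share of US nuclear capacity: {share_of_nuclear:.0%}")
print(f"Share of all US capacity: {share_of_total:.1%}")
```

On these round numbers the demand comes to roughly 36% of nuclear capacity and about 3% of all US generating capacity, which is why the "over a third of nuclear" framing holds.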
Which is why tech firms are beginning to buy up out-of-commission nuke plants, believe it or not. Microsoft has just signed a deal to reopen Pennsylvania's troubled Three Mile Island, the site of the US's worst-ever nuclear accident.

Others are doing the same: Amazon acquired a datacenter connected directly to Susquehanna nuclear plant, and now Google is said to be eyeing its own nuke plant for the same repurposing.
Thus, major ponderous nuke plants will present the main bottleneck and danger for AI development, given its vast appetite for ever more power. The entire "singularity" timeline can be thrown awry by either a hostile foreign government or even a domestic terror or activist group seeking to shut things down for the same reasons mentioned earlier: the abrupt institution of changes which threaten to foment fierce opposition.
When you add it all up, it presents a dicey future where the developmental strain of AI remains vulnerable to sudden setbacks. That's not even to mention that many experts have balked at the notion that US infrastructure – to say nothing of other, lesser countries' – can realistically support such flighty and optimistic goals on any reasonable timeline. Recent history has shown the US has descended into mass institutional dysfunction, with virtually every signal effort having flunked: from Biden's CHIPS Act, to the billion-dollar California infrastructure initiatives, and even to bellwethers like the Francis Scott Key Bridge in Baltimore, which remains in a primitive state of deconstruction more than half a year after it was destroyed by an errant ship.
To think that the US can support the wondrous growth of infrastructure necessary for such visions as outlined by Amodei in a mere 5-10 years could be thought of as optimistic, to say the least.
Another institutional example: there is so much infighting, backbiting, and self-sabotage in the highly polarized environment of our current state of things that it's difficult to imagine much getting done. Just look at the recent example of California regulators blocking SpaceX's historic progress on account of the regulators' disagreement with Musk's Twitter posts:
This type of terminal bad faith and corruption now endemic to US institutions is just the tip of the iceberg, and dampens chances of "Utopia-like" progress happening in the foreseeable future.
In short, there are so many forces acting against smooth progression that the far likelier scenario becomes the irregularly incremental rollout of AI products and instruments for decades to come, to be adopted piecemeal and in uneven fashion throughout the US itself, much less the world. And since much of the AI "dream" requires universal adoption, the herky-jerky acceptance of AI will necessarily logjam development cycles, dampen investor hopes and investment, and create vast fluctuations and disparities in society between the increasingly separate groups of tech-adopter "haves" and have-nots.
Another big point: there is no actual proof of a single successful and useful AI product as of yet – no quantifiable net benefit to society after several years of empty triumphalism. Virtually everything rolled out so far has been vaporware that's found a niche as some form of entertaining diversion, like generative AI, or marginal commercial automation like chatbots on service websites. Many other "marvels" have been debunked as tricks: for instance, the Bezos debacle, wherein the Amazon supermarket AI checkout was found to actually rely on human tele-operators in India. Or even Musk's recent "impressive" rollout of the Tesla Optimus robot, which was quickly deflated by the admission that the robots were remotely operated by humans for any of the more complex actions beyond mere stiff shambling:
(Go to source to watch video.)
I asked the bartending Optimus if he was being remote controlled. I believe he essentially confirmed it.
This is the "singularity" they spoke of?
The fact is, most of the hype around AI is a deliberate showman's spectacle, all for the sake of juicing up maximum venture capital investment during the peak bubble phase, when the honeymoon mania of excitement blinds the masses to the chintzy reality behind the outward velvet facade. Figures like Altman are two-bit magicians conjuring fire before an intoxicated crowd, numbed by the years-long hammering from their governments.
One of the classic examples was Dean Kamen's famed presentation of the Segway as the new, future-revolutionizing mode of transport. Its hopes were soon after dashed by city regulations which prevented the Segway from using bike lanes or sidewalks in urban hotspots like NYC, meant to be the scooter's main breakout use case. Similarly, various regulatory hangups can afflict AI development, stunting the types of mass adoption and universality envisioned by tech-messiahs.
A few months back, I had written about Musk's Neuralink and the revolutionary potential it posed in merging man and machine. But afterwards, I read another take: some research had found a theoretical limit, in the form of a bottleneck in our biological wetware, that would never allow devices like Neuralink to send high-bandwidth data to and from our brains. It's believed by some scientists that our brain's constraining biology limits throughputs to the scale of bytes or kilobytes per second at best. As such, it's conceivable that humans will never be able to "merge" with machines in the way long envisioned, downloading entire corpora of knowledge in seconds as in The Matrix.
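To make the scale of that bottleneck concrete, here is an illustrative calculation. Both inputs are assumptions for the sake of the sketch: the kilobytes-per-second ceiling is the speculative figure cited above, not an established measurement, and the 1 GiB corpus size is an arbitrary stand-in for a "body of knowledge":

```python
# Illustrative arithmetic: how long would a Matrix-style "download" take
# at the speculated kilobyte-scale brain-interface throughput?
corpus_bytes = 1 * 1024**3    # a modest 1 GiB corpus (assumed for illustration)
throughput_bps = 10 * 1024    # an optimistic 10 KiB/s link (assumed ceiling)

seconds = corpus_bytes / throughput_bps
hours = seconds / 3600
days = hours / 24

print(f"Transferring 1 GiB at 10 KiB/s takes ~{hours:.0f} hours (~{days:.1f} days)")
```

Even under these generous assumptions, a single gigabyte takes over a day to transfer – nothing like "entire corpora of knowledge in seconds".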
In summary, a huge confluence of constraining factors, social and economic headwinds, and other disruptive possibilities suggests that AI development will not reach optimistic "exit velocities". Earlier this year Goldman Sachs even published a 31-page report which struck a pessimistic note on the AI bubble:

The report includes an interview with economist Daron Acemoglu of MIT (page 4), an Institute Professor who published a paper back in May called "The Simple Macroeconomics of AI" that argued that "the upside to US productivity and, consequently, GDP growth from generative AI will likely prove much more limited than many forecasters expect." A month has only made Acemoglu more pessimistic, declaring that "truly transformative changes won't happen quickly and few – if any – will likely occur within the next 10 years," and that generative AI's ability to affect global productivity is low because "many of the tasks that humans currently perform...are multi-faceted and require real-world interaction, which AI won't be able to materially improve anytime soon." -Source
Particularly at a time when our world heads toward a nexus of geopolitical crises, major impediments stand to hamper mass AI adoption. At the extreme end of the scale, war could break out with power generation infrastructure and datacenters targeted for destruction or sabotage, sending AI development reeling back years. The current mass populism movements sweeping the globe will turn their antagonism for their oppressive governments onto what they perceive as the instruments of that state power: the subsidized tech corps behind AI development, which openly work hand-in-glove with governments, and will do so even more in the future, particularly toward censorship and other apparatuses of state control.
These prevailing headwinds will ensure a rocky future and potential for much more stagnation in AI enthusiasm than proponents would have you believe.
There will undoubtedly be certain breakthroughs and continued developments, such as self-driving cars, which could plausibly transform our public transportation systems by 2035-2040 and beyond. But the big question remains whether AI can break out of its marginal role as diversion or recreational gimmick, given the dangers and potential setbacks discussed herein.
I can foresee a lot of superficial automation being the chief highlight over the next ten-plus years: the "internet of things", integration of various devices in your house via "smart" voice activation, "smart" apps that infuse AI into all our activities to augment our ability to fill out forms, order things, and so on. But beyond these superficial additives, the types of "singularity" take-offs predicted for the next ten years are likelier to take a hundred or more, if they even happen at all. Our current corporate-oppressed world is simply too corrupt to allow the types of unlimited bounty promised by our techno-wizards; even if AI were capable of inventing countless new disease-eradicating drugs as promised by Altman and Amodei, they would still be at the behest of major pharmaceutical giants and their Byzantine profit-predation matrix, which would leech away all eventual benefits once fully wrung through its machinery.
The ultimate theme lies in this: corporate greed will continue disincentivizing real progress and turning off the masses from greater uses of an AI which will invariably be chained to and aligned with corporate interests. This will necessarily engender a natural repelling friction between the coming developments and humanity at large that will, like oil and water, act as a barrier to accelerated progress as envisioned by the Silicon-snake-oil-salesmen and their techno-conjuror PR pushers.
Rather than some glossy Utopia of glass sky-rises topped by airy gardens and Perfect Humans™ in turtlenecks, the future will likely resemble more the world of Blade Runner: a bevy of tech-wonders scattered irregularly across an otherwise dysfunctional, favela-ized gray-state ruled by faceless omni-corps. There seems to be a natural law that invariably ensures the disappointment of long-reaching future-tech predictions. Remember the infamous early-twentieth-century postcards featuring fancy cities of the future, streaked with flying cars and all kinds of other wonders? Or how about predictive movies like 2001: A Space Odyssey or even Blade Runner itself, all of which foretold a future at a date now long expired to us, which never lived up to expectations? In the same way, I predict the year 2100 may very well look hardly any different from the present, apart from a few AI-imparted superficialities like ubiquitous commercial drones and flying quadcopter cars. But the Utopian divinations from the likes of Altman and co., of AI solving "all human problems" and curing all diseases, will likely look as silly and infantile then as those Victorian postcard predictions of the future do now.
Source: Dark Futura