From ‘Go’ to Geoengineering: AI’s Move 37 Moment
In 2016, the ancient board game of Go, born in China over 2,500 years ago and deeply embedded in East Asian intellectual culture, became the stage for a civilizational moment.
Go is not chess. Its complexity is so much greater that it makes chess seem small by comparison.
With more possible board configurations than atoms in the observable universe, Go has long symbolized human intuition, patience, and strategic depth. In China, it is not merely a game; it is part of classical education, philosophy, and elite intellectual training.
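The "atoms in the universe" comparison is easy to sanity-check. A back-of-the-envelope sketch in Python, using 3^361 as a crude upper bound on board configurations (each of the 361 points on a 19×19 board can be empty, black, or white; the number of *legal* positions is smaller, but still around 10^170):

```python
import math

# Crude upper bound: each of the 361 points on a 19x19 board is
# empty, black, or white. (Legal positions are fewer, ~10^170.)
raw_configs = 3 ** 361

# Commonly cited order-of-magnitude estimate for atoms in the
# observable universe.
atoms = 10 ** 80

print(f"Go configurations ~ 10^{math.floor(math.log10(raw_configs))}")  # ~ 10^172
print(raw_configs > atoms)  # True, by roughly 90 orders of magnitude
```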
When AlphaGo, developed by DeepMind, defeated the legendary Lee Sedol, the shockwaves were global – but particularly profound in China.
This was not just a sporting defeat.
It was symbolic.
Then, in the second game, came the move. Commentators thought it was a mistake. Professional Go players were stunned. It violated centuries of accumulated human intuition.
But it wasn't a mistake.
It was brilliance.
That move pivoted the game, and perhaps our understanding of intelligence itself.
Now popularly known as Move 37, it was more than a technical triumph.
We are entering an era where algorithms make decisions that:
- we don't fully understand
- we cannot fully explain
- yet increasingly, we must trust
As Yuval Noah Harari argues in his book Nexus, we are moving into uncharted territory: a world where intelligence is no longer exclusively human, and where algorithmic systems may produce outcomes superior to human judgment, even when opaque.
This is both exhilarating and unsettling.
Where AI Is Already Making “Move 37” Decisions in Our Lives
Below are a few of the real domains where AI systems are already shaping human destiny:
1️⃣ Criminal Justice: Risk Assessment Algorithms
COMPAS-like tools estimate the likelihood of reoffending and influence sentencing and parole decisions.
Impact: Freedom, incarceration length, life trajectory.
Risk: Embedded bias, lack of explainability.
2️⃣ Banking & Credit: Loan Approval Models
Credit scoring algorithms decide who gets loans and at what interest rate.
Impact: Home ownership, entrepreneurship, economic mobility.
Risk: Algorithmic redlining, opaque rejections.
3️⃣ Hiring & Recruitment
Resume screening AI filters candidates before humans review them.
Impact: Careers shaped before a human even reads your CV.
Risk: Reinforcing historical discrimination patterns.
4️⃣ Healthcare: Diagnostic Algorithms
AI reads radiology scans and predicts disease risks.
Impact: Early detection, treatment decisions, survival rates.
Risk: Over-reliance; black-box diagnoses.
5️⃣ Insurance Pricing: Predictive models calculate risk premiums.
Impact: Who can afford health or life insurance.
6️⃣ Social Media & Information Ecosystems: Recommendation algorithms shape political opinions.
Impact: Democracy, polarization, public discourse.
7️⃣ Autonomous Systems: Self-driving systems decide in milliseconds.
Impact: Life-or-death traffic scenarios.
8️⃣ Climate Modeling & Energy Optimization: AI predicts weather extremes, optimizes grid distribution.
Impact: Disaster preparedness, renewable efficiency.
Here too, the algorithm might make a "Move 37": counterintuitive, but correct.
So what is good about algorithms making decisions?
✅ Pattern Recognition Beyond Human Capacity: AI detects subtle correlations in massive datasets.
✅ Reduced Emotional Bias: Machines do not get angry, tired, or politically pressured.
✅ Speed & Scale: Millisecond decisions at planetary scale.
✅ Climate Modeling Breakthroughs: improved prediction of extreme events, smart grid optimization, carbon capture modeling, precision agriculture.

AI may become one of the most powerful tools in fighting climate change.
But then there are the dangers.
⚠️ Opacity (“Black Box” Intelligence): We may not understand why a decision was made.
Move 37 worked … but what if it didn’t?
⚠️ Bias Amplification: Algorithms trained on historical data inherit historical injustice.
⚠️ Accountability Vacuum: If an AI wrongly denies parole, who is responsible?
⚠️ Power Concentration: Few tech corporations control algorithmic infrastructures.
⚠️ Democratic Erosion: Recommendation engines can polarize societies.
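Bias amplification is concrete enough to demonstrate in a few lines. The sketch below uses purely invented numbers and the simplest possible "model" (the historical approval rate per group), but the mechanism is the one that haunts real systems: learn from biased decisions, and the bias becomes a prediction.

```python
# Hypothetical data, no real lender: group_a was historically approved
# 80% of the time, group_b only 40% of the time.
historical_decisions = (
    [("group_a", "approved")] * 80 + [("group_a", "denied")] * 20 +
    [("group_b", "approved")] * 40 + [("group_b", "denied")] * 60
)

def train(decisions):
    """Learn P(approved | group) from past human decisions."""
    counts = {}
    for group, outcome in decisions:
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + (outcome == "approved"), total + 1)
    return {g: a / t for g, (a, t) in counts.items()}

model = train(historical_decisions)
print(model)  # {'group_a': 0.8, 'group_b': 0.4}
# Identical applicants from the two groups now receive different scores:
# the historical injustice has quietly become a "prediction".
```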
Wicked Problems & Super Wicked Problems
Climate change is often described as a wicked problem:
- multiple stakeholders
- conflicting interests
- no clear solution
- interconnected systems
Scholars go further and call it a super wicked problem:
- time is running out
- those causing the problem are trying to solve it
- there is no central authority
- policies irrationally discount the future
AI could either help solve climate change or quietly complicate it, which is perhaps the most AI answer imaginable. So, can AI help fight climate change?
Potentially, yes, and in ways that are almost poetic. It can run climate simulations at resolutions so detailed that the planet begins to look less like a blurred forecast map and more like a living system under a microscope. It can predict tipping points before glaciers send us resignation letters. It can forecast energy demand with such precision that the sun and wind finally get calendar invites aligned with human consumption.
It can optimize renewable integration so electrons flow with less drama and fewer fossil-fuel cameos. In material science labs, AI can sift through millions of molecular combinations to design better batteries, because apparently even lithium needs strategic guidance. And in agriculture, it can tell farmers exactly when to water, fertilize, or harvest, reducing waste and emissions, a reminder that sometimes saving the planet begins with smarter soil decisions.
The irony, of course, is that we are now asking algorithms to help fix the very optimization mindset that helped create the crisis. But if intelligence, human or artificial, is about learning from mistakes, then perhaps this is evolution's most unexpected collaboration.
AI may generate a "Move 37" in climate policy: a solution humans never imagined.
How AI Can Make It Worse
Of course, AI could also make climate change worse, and not in a villain-with-a-laser-beam way, but in a calm, optimized, spreadsheet-approved way.
Start with data centers. These quiet, windowless warehouses are the monasteries of modern intelligence, except instead of chanting monks, they hum with servers consuming staggering amounts of electricity. We ask AI to calculate pathways to net zero while plugging it into grids still powered by coal. It’s like hiring a personal trainer who insists on being chauffeured in a diesel truck.
Then there’s AI-accelerated fossil fuel exploration. The same pattern-recognition genius that can model glacier melt can also locate untapped oil reserves with surgical precision. AI doesn’t have ideology. If instructed, it will optimize extraction just as enthusiastically as it optimizes decarbonization. Intelligence is neutral; incentives are not.
Optimized consumerism may be the most subtle accelerant. AI doesn't just show you ads; it predicts your impulses before you consciously experience them. It knows when you're likely to upgrade, replace, click, buy. Climate change is not only a production problem; it's a consumption problem. And we've built machines that specialize in stimulating demand at planetary scale. That's not marketing anymore. That's behavioral engineering.
Enter surveillance capitalism, where data about your habits, movements, preferences, and fears becomes raw material. The more precisely human behavior is predicted, the more precisely it can be nudged. And nudging billions toward higher consumption is far more profitable in the short term than nudging them toward restraint.
Then comes the most delicate issue: decision-making detached from democratic oversight. If climate policy increasingly relies on opaque optimization models that only a handful of experts (or corporations) understand, public debate risks becoming ceremonial. “The model recommends it” could quietly replace “the people consent.”
Unchecked, AI could accelerate the super wicked dynamics of climate change: speeding up resource extraction because efficiency improves margins; reinforcing short-term incentives because markets reward quarterly performance, not planetary stability; concentrating decision power in the hands of those who own the algorithms.
The uncomfortable irony? We might not destroy the planet through chaos , but through optimization. Efficiently. Rationally. At scale.
And that may be the most intelligent mistake of all.
Move 37 challenges human exceptionalism.
For centuries, strategy games like Go were considered uniquely human … requiring intuition, creativity, and long-term vision.
When AlphaGo made Move 37, professional players later described it as “beautiful.”
The unsettling part? It was beautiful … and alien.
As Harari notes in Nexus, we are creating systems whose reasoning may outperform us while remaining fundamentally opaque. We are delegating authority to entities that do not experience ethics, responsibility, or meaning.
So What Should We Do?
We need:
- Algorithmic Transparency
- Explainable AI
- Human-in-the-loop governance
- Ethical AI frameworks
- Democratic oversight
- Energy-conscious AI design
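"Human-in-the-loop governance" can be something quite mechanical in practice. A minimal sketch (thresholds and names are illustrative, not from any real system): route low-confidence or high-stakes cases to a human reviewer instead of auto-deciding.

```python
def decide(score: float, high_stakes: bool, threshold: float = 0.9):
    """Return ('auto', decision) or ('human', None) for escalation.

    score: the model's probability of a positive outcome.
    high_stakes: cases (e.g. parole) that must always be escalated.
    """
    if high_stakes or max(score, 1 - score) < threshold:
        return ("human", None)          # defer: a person must review
    return ("auto", score >= 0.5)       # routine: the model decides

print(decide(0.97, high_stakes=False))  # ('auto', True)
print(decide(0.60, high_stakes=False))  # ('human', None)  too uncertain
print(decide(0.99, high_stakes=True))   # ('human', None)  always escalated
```

The design point is the asymmetry: the model is allowed to be fast and scalable on routine cases, while uncertainty and stakes, not convenience, determine when a human re-enters the loop.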
The challenge is not whether AI will make a Move 37 again: it definitely will.
The question is:
Will we understand the board well enough to decide when to trust it?
Move 37 was not just a move in a board game. It was a signal:
- A signal that intelligence is evolving beyond human intuition.
- A signal that we are entering uncharted waters.
- A signal that the next pivot in climate change, governance, justice, and democracy may come from systems we barely comprehend.
The future may belong to those who can collaborate with algorithms without surrendering human judgment.
Because in the end, the most important move is not 37.
It is how we respond to it.
If this resonates, I would love to hear your thoughts:
Are we ready for a world where the best decision may be one we cannot explain?

Jayant Mahajan works where Management, technology, and sustainability meet, usually right before things get complicated. With industry experience in business management and digital transformation, he brings real-world messiness into the classroom (on purpose). As an educator, he designs future-ready curricula around data thinking, governance, and ethics, because technology without judgment scales mistakes faster. Through his Change Before Climate Change mission, Jayant helps institutions act early by fixing skills and incentives, so climate action becomes good management, not emergency management. Bridging policy, practice, and purpose, one syllabus at a time.

