Saturday, May 16, 2026

What AI must learn from Roosevelt, conservation and 1929


A century ago, America entered the Roaring Twenties convinced the future had arrived ahead of schedule. Automobiles, radios, telephones, electrification, mass production and modern finance transformed daily life. Productivity expanded. Markets soared, and confidence became a form of currency.

Then the bill came due.  

But the lesson of the Roaring Twenties did not begin in 1929. It began earlier, with President Theodore Roosevelt’s conservation fight. Roosevelt was not anti-growth. He believed in ambition, enterprise and national development. Yet, he also understood that prosperity could not depend on consuming forests, waters, wildlife and public lands faster than institutions could protect them.

His answer was not to stop progress. It was to govern it. During his presidency, Roosevelt helped protect roughly 230 million acres of public land, created the US Forest Service, and advanced a conservation ethic built on a durable idea: Resources are not truly ours if we consume them in ways that leave less possibility for those who follow.


That stewardship agenda faced backlash. Timber and mining interests saw conservation as overreach, as did some Western critics of federal land policy and some members of Congress. The pattern feels familiar today. Climate action is often framed as a constraint rather than resilience. Long-term risk collides with short-term incentives. Stewardship is attacked as obstruction, while extraction is defended as freedom.

Roosevelt refused that false choice. Conservation was not the enemy of prosperity. It was the condition for prosperity that could last.

The 1929 crash added the second half of the warning. Innovation was real, but so were speculative excess, easy money, weak guardrails, and uneven prosperity. The Federal Reserve’s history of the crash notes that optimism around new technologies coincided with investment trusts, brokerage houses, and margin accounts that allowed investors to buy stocks with borrowed money. Progress was real; the foundation was fragile.

AI is giving the 2020s their own roar

AI can personalize learning, accelerate scientific discovery, improve decision-making, optimize operations and expand access for communities historically left behind. But promise is not readiness, and scale is not sustainability. Across industries, AI is moving faster than many institutions can absorb it. Teams are experimenting. Investors are speculating. Vendors are marketing. Workers are adapting. Regulators are catching up.

AI is now a conservation question as much as a technology question. The resources at stake include energy, water, carbon capacity, data, workforce trust, human judgment, institutional legitimacy and public confidence. The International Energy Agency projects global data center electricity consumption could more than double by 2030, reaching around 945 terawatt-hours in its base case. 


That does not mean organizations should avoid AI. It means AI’s resource demands must be governed with the same seriousness as strategy, risk, finance and reputation.

Sustainable AI cannot be reduced to efficient chips or renewable power purchases, important as those are. It requires environmental, human and institutional sustainability.

Environmental sustainability asks whether AI use cases account for the impact on energy, water, carbon, hardware, grid pressure, e-waste and the local community. Human sustainability asks whether AI strengthens worker capability, dignity, agency, contestability and recourse. Institutional sustainability asks whether organizations know where AI is used. That includes who owns it, what data it uses, what risks it creates, how it is monitored and who remains accountable when something goes wrong.
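The institutional questions above can be made concrete as a lightweight use-case registry. The sketch below is illustrative only: the record fields, the example entry and the gap check are assumptions about what such a registry might capture, not a standard or any organization's actual practice.

```python
from dataclasses import dataclass


@dataclass
class AIUseCaseRecord:
    """One entry in a hypothetical AI use-case registry: where AI is
    used, who owns it, what data it touches, what risks it creates,
    how it is monitored, and what recourse exists when it goes wrong."""
    name: str
    owner: str                  # accountable person or team
    data_sources: list[str]     # data the system ingests
    known_risks: list[str]      # risks identified at intake
    monitoring: str             # how outputs are reviewed over time
    recourse_path: str          # how harmful outcomes get corrected


# Illustrative registry with a single hypothetical entry.
registry: list[AIUseCaseRecord] = [
    AIUseCaseRecord(
        name="resume-screening-assistant",
        owner="HR Technology",
        data_sources=["applicant resumes", "historical hiring data"],
        known_risks=["disparate impact", "automation bias"],
        monitoring="quarterly bias audit",
        recourse_path="candidate appeal to a human reviewer",
    ),
]

# Flag any entry missing a named owner or a recourse path --
# the "who is accountable" and "is there a path to correct harm"
# questions from the text.
gaps = [r.name for r in registry if not r.owner or not r.recourse_path]
```

Even a registry this simple forces the questions the text raises to be answered per system rather than in a policy document no one consults.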

From my vantage point as a technology executive and researcher of AI governance and digital equity, the greatest risk is not that AI becomes too powerful. It is that institutions become too passive. Too many organizations treat AI as a tool-adoption race when it is, in fact, an operating-model transformation.


My doctoral research reinforced that people do not judge AI governance by policy language alone. They judge whether governance is lived in practice. Are boundaries clear? Can people question outputs? Is human judgment preserved? Is there a path to correct harmful outcomes? Is someone answerable?

That is why human-in-the-loop cannot be a final approval checkbox. Human intelligence must shape AI throughout the lifecycle: problem definition, data selection, model design, procurement, deployment, monitoring, escalation, exception handling, training and recourse. The most consequential failures often begin long before a final decision appears on a screen.

What’s next?

Leaders should start with the following five moves:

  1. Move from AI experimentation to AI operating discipline. 

  2. Treat compute, carbon, water and hardware as governance issues. 

  3. Govern data as enterprise risk. 

  4. Keep humans accountable across the lifecycle. 

  5. Require transparency, recourse and reinvestment in workforce capability.

The last Roaring Twenties taught us that progress can be real and still be reckless. Roosevelt’s conservation legacy adds an equally important lesson: Leadership is measured by what it protects, not only by what it builds.

A century from now, people may look back on the 2020s as another roaring decade of technological transformation. The question is whether they will also see that we learned the lesson of 1929, and the conservation lesson that came before it.

