
Coal, Carbon, and Computers
Every email you send has a carbon footprint. So does every TikTok video, every AI prompt, every line of code running in the cloud. But here’s the uncomfortable truth: Nobody really knows how big that footprint is.
In boardrooms from Silicon Valley to Shenzhen, tech CEOs face an inconvenient paradox. They’ve pledged carbon neutrality by 2030, signed climate commitments, and installed solar panels on headquarters. Meanwhile, their actual emissions are skyrocketing—up 48% at Google since its 2019 baseline, nearly 30% at Microsoft since 2020—and the tools to measure what’s actually happening barely exist.
The software industry now rivals aviation in carbon emissions, accounting for 2-4% of global greenhouse gases. AI alone could double data center energy consumption by 2030, pushing it beyond the electricity use of the entire nation of Japan. Yet while we can measure the carbon cost of a gallon of gasoline down to the gram, calculating the emissions from a single ChatGPT query remains somewhere between educated guess and corporate mythology.
This isn’t just an environmental problem. It’s about to become a legal one. The EU’s Corporate Sustainability Reporting Directive hits 49,000 companies in 2025, demanding verified carbon accounting with teeth. California’s climate disclosure laws follow close behind. The SEC’s rules sit in legal limbo, but the direction is clear: measure your emissions or pay the price. And for software companies, nobody’s quite sure how to measure anything.
The Measurement Problem That Won’t Go Away
On a Tuesday morning in May 2024, Microsoft published its annual sustainability report. The transparency was admirable—and alarming. Despite investing billions in renewable energy and carbon removal, the company’s total emissions had climbed nearly 30% since its 2020 baseline. The culprit? “The construction and operation of AI infrastructure,” buried in the kind of bureaucratic language that signals a problem without a solution.
Brad Smith, Microsoft’s president, didn’t sugarcoat it: “We are learning that the pathway to sustainability is not always linear.” Translation: We’re building so much AI infrastructure that we can’t decarbonize fast enough to keep up, and we’re not entirely sure how to measure what we’re building.
This measurement crisis runs deeper than PR difficulties. The Software Carbon Intensity specification—now an official ISO standard after years of development—represents the field’s best attempt at a scientific framework. Its formula looks deceptively simple: energy consumed multiplied by carbon intensity of electricity, plus embodied emissions from hardware, divided by a functional unit.
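Written out in code, the formula fits in a few lines. Here is a minimal sketch of the published SCI equation; the parameter names follow the specification, while the numbers are purely illustrative and come from no real system:

```python
def sci(energy_kwh: float, grid_intensity: float,
        embodied_gco2: float, functional_units: float) -> float:
    """Software Carbon Intensity: SCI = ((E * I) + M) per R.

    E: energy consumed by the software (kWh)
    I: carbon intensity of that electricity (gCO2e per kWh)
    M: embodied hardware emissions amortized to this workload (gCO2e)
    R: the functional unit (users, API calls, transactions, ...)
    """
    return (energy_kwh * grid_intensity + embodied_gco2) / functional_units

# Illustrative only: 1.2 kWh on a 400 gCO2e/kWh grid, 150 g of
# amortized hardware emissions, spread over 10,000 API calls.
print(f"{sci(1.2, 400, 150, 10_000):.3f} gCO2e per API call")  # 0.063
```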
But that simplicity collapses in practice. Should you measure per user? Per API call? Per transaction? Each choice produces radically different results. And what about the cloud? When your application shares a server with fifty others, whose emissions are whose? Amazon, Google, and Microsoft each publish carbon calculators with different methodologies that produce wildly different numbers for identical workloads.
A 2024 survey found that 88% of cloud users consider the objective of a “carbon-neutral cloud” either illusory or insufficiently transparent. AWS’s calculator omits most upstream emissions and uses accounting tricks that show near-zero impact regardless of actual electricity sources. Google leads in transparency but admits its data “has not been third-party verified.” Azure falls somewhere in between, with methodology that changes frequently enough to make year-over-year comparison treacherous.
Even the Green Software Foundation—the Linux Foundation project developing these standards—acknowledges the Gordian knot: “Calculation using granular real-world data is challenging to obtain in some environments, particularly the public cloud.” Their solution? Use “best estimates” and models when data doesn’t exist. Which is most of the time.
When Carbon Becomes Code
The implications extend far beyond environmental accounting. Every line of code carries a carbon weight, compounding across billions of executions. A single inefficient database query, repeated a million times daily, can generate tons of unnecessary CO2. An image compressed at 80% quality instead of 60% bloats file sizes by megabytes, multiplied across every user who loads it. Algorithm choices that seemed inconsequential in computer science class—O(n²) versus O(n log n)—create massive energy differences at scale.
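The compounding is easy to demonstrate. Below is a sketch of the same duplicate check written two ways; with a million records, the quadratic version performs on the order of 5×10^11 comparisons where the sorted version needs one sort and a single pass, a difference that translates directly into CPU time and energy:

```python
def has_duplicates_quadratic(items: list) -> bool:
    # O(n^2): compares every pair; harmless in class, costly at scale.
    return any(items[i] == items[j]
               for i in range(len(items))
               for j in range(i + 1, len(items)))

def has_duplicates_sorted(items: list) -> bool:
    # O(n log n): one sort, then one linear pass over neighbors.
    s = sorted(items)
    return any(a == b for a, b in zip(s, s[1:]))

data = list(range(1_000_000))        # a million unique records
print(has_duplicates_sorted(data))   # False, in roughly a second
# has_duplicates_quadratic(data) needs ~5e11 comparisons: hours of
# CPU time, and correspondingly more energy, for the same answer.
```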
Consider the geography problem. Deploying an application in Oregon versus Mumbai can mean a roughly 10x difference in carbon emissions, because Oregon runs primarily on hydroelectric power while Mumbai’s grid relies heavily on coal. The difference between running a workload at 2 AM and 2 PM can be fourfold, because solar generation peaks midday. Yet most developers never see these numbers, never get feedback on the carbon consequences of their technical decisions.
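The arithmetic behind the geography gap is trivial once intensity data is visible. A sketch with illustrative intensity figures; real values come from services such as Electricity Maps or WattTime, and shift hour by hour:

```python
# Illustrative grid intensities in gCO2e/kWh; not real-time data.
GRID_INTENSITY = {
    "us-west (hydro-heavy)": 80,
    "ap-south (coal-heavy)": 700,
}

def emissions_kg(energy_kwh: float, region: str) -> float:
    # Same workload, different grid: emissions scale with local intensity.
    return energy_kwh * GRID_INTENSITY[region] / 1000

for region in GRID_INTENSITY:
    print(f"{region}: {emissions_kg(500, region):.0f} kg CO2e "
          f"for the same 500 kWh job")
```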
A small but growing ecosystem of tools attempts to surface this invisible impact. CodeCarbon wraps Python code in decorators that estimate CO2 during execution. The ecoCode plugin for SonarQube flags “green code smells”—energy-inefficient patterns in syntax. Google’s Carbon Aware SDK enables applications to schedule workloads when grid electricity is cleanest. Microsoft built temporal shifting into Windows 11, automatically delaying updates to low-carbon periods.
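In CodeCarbon’s case, the instrumentation is a few lines around the workload. A minimal sketch, assuming a recent version of the library; the training function here is a stand-in:

```python
from codecarbon import EmissionsTracker

def train_model():
    # Stand-in for the workload being measured.
    sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="demo-training")
tracker.start()
train_model()
emissions_kg = tracker.stop()  # estimated kg CO2e for the tracked span
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2e")
```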
But these tools face adoption barriers that reveal deeper cultural resistance. Carbon-aware scheduling only works for delay-tolerant workloads—nobody wants their video call postponed to 3 AM for sustainability. Right-sizing over-provisioned cloud resources saves money and carbon, yet engineers resist because “it might impact performance” or “we might need that capacity later.” Security teams push back on resource sharing because isolation reduces attack surface, even though consolidation cuts emissions dramatically.
The trade-offs get existential with AI. Training a large language model can consume more electricity than 100 American homes use in a year. But the trained model, deployed efficiently, might reduce emissions elsewhere by automating tasks that would require far more carbon-intensive work. How do you account for that? The methodologies don’t have good answers.
The Leaders, The Laggards, and The Lies
Some companies are pushing measurement boundaries despite the uncertainty. Etsy made every shipment carbon-neutral in 2019—the first global e-commerce company to do so—for less than a penny per package. Their migration to Google Cloud cut compute energy by 50% while doubling engineering productivity. The calculation? “We worked with third parties to estimate it,” says a company representative, acknowledging the inherent approximation in all carbon claims.
Shopify takes a different approach: deliberately overpaying for carbon removal to build markets that don’t yet exist. The company’s $94 million Sustainability Fund backs 54 early-stage carbon removal technologies, accepting today’s premium prices to drive tomorrow’s scale. It’s venture capital logic applied to climate—except the returns are measured in atmospheric CO2 rather than dollars.
Then there’s the transparency problem masquerading as a solution. Booking Holdings touts an 85% reduction in operational emissions—technically true but misleading, since those emissions represent a tiny fraction of total impact. The actual carbon footprint lives in hotels and flights booked through the platform, conveniently categorized as “Scope 3” and largely unmeasured. It’s the corporate equivalent of claiming your gas station is carbon neutral because the building runs on solar.
This pattern repeats across tech. Apple faces a class action lawsuit alleging its “carbon neutral” claims rely on essentially meaningless offset projects—buying credits for rainforest preservation that likely would have happened anyway. The company’s defense? Everyone uses these methodologies. Which is true, and exactly the problem.
Amazon achieved “100% renewable energy matching” in 2024, six years ahead of schedule. But “matching” is accounting fiction—buying renewable energy certificates equivalent to consumption somewhere else on the grid, not actually running facilities on clean power moment by moment. Google pioneered more rigorous “24/7 carbon-free energy” matching that ensures hourly rather than annual balancing, a significant improvement. But even this leaves massive embodied emissions from hardware manufacturing unmeasured—often 40-80% of total lifecycle impact.
The greenwashing concern runs so deep that the SCI specification explicitly excludes carbon offsets and renewable energy credits from its calculations. The philosophy: “One tonne of carbon eliminated is not the same as one tonne that has been offset.” This puts the standard in direct conflict with corporate carbon neutrality claims, virtually all of which rely heavily on offsets.
What Actually Works (And What Doesn’t)
Strip away the complexity and a few interventions deliver disproportionate impact. Compression algorithms reduce data transfer by 70-80% with negligible CPU overhead—pure efficiency gain. Converting images from JPEG to WebP cuts file sizes 30-50% with no perceptible quality loss. Database indexing eliminates wasteful full table scans. These aren’t even climate interventions; they’re basic engineering hygiene that happens to reduce emissions.
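The image example is nearly a one-liner. A sketch using Pillow, assuming a local photo.jpg and WebP support in the installed build:

```python
import os
from PIL import Image

# Convert JPEG to WebP; quality=80 is typically visually indistinguishable.
Image.open("photo.jpg").save("photo.webp", "WEBP", quality=80)

before = os.path.getsize("photo.jpg")
after = os.path.getsize("photo.webp")
print(f"{before / 1024:.0f} KB -> {after / 1024:.0f} KB "
      f"({100 * (1 - after / before):.0f}% smaller)")
```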
Serverless computing can slash idle capacity waste by 60-70% for variable workloads, but introduces cold start penalties that consume 15x more energy than warm requests. The net benefit depends entirely on usage patterns—nobody’s measuring that either. Carbon-aware scheduling works beautifully for ML training and batch jobs, but Google’s production systems doing this at scale still saw 13% emissions growth in 2023 because AI demand overwhelmed efficiency gains.
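A back-of-the-envelope model shows how sharply the answer swings with traffic. Every number below is an assumption for illustration, not a measurement:

```python
E_WARM = 0.5           # joules per warm request (assumed)
E_COLD = 15 * E_WARM   # 15x cold-start penalty, per the figure above
IDLE_POWER = 10.0      # watts of idle draw for an always-on server (assumed)

def serverless_joules(req_per_hour: float, cold_fraction: float) -> float:
    per_request = cold_fraction * E_COLD + (1 - cold_fraction) * E_WARM
    return req_per_hour * per_request

def provisioned_joules(req_per_hour: float) -> float:
    return IDLE_POWER * 3600 + req_per_hour * E_WARM  # idle + useful work

for rph in (100, 10_000, 1_000_000):
    s = serverless_joules(rph, cold_fraction=0.1)
    p = provisioned_joules(rph)
    print(f"{rph:>9} req/h: serverless {s:>12,.0f} J vs provisioned {p:>12,.0f} J")
```

At low traffic the idle server dominates; at high traffic the cold-start penalty does. The crossover point is exactly the number nobody is measuring.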
Right-sizing over-provisioned resources offers the lowest-hanging fruit. Studies consistently find that 50-70% of cloud resources run underutilized, representing enormous waste. Cloud provider tools identify optimization opportunities worth 20-40% emissions reduction. Yet adoption remains sluggish because engineers over-provision for safety, organizations lack incentives to optimize, and cloud economics reward consumption.
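The detection logic is not the hard part. A sketch over a hypothetical utilization snapshot; real numbers would come from the provider’s monitoring APIs, and the threshold is a judgment call:

```python
# Hypothetical week-average utilization for three instances.
instances = {
    "api-server-1":  {"vcpus": 16, "avg_cpu_pct": 7},
    "batch-worker":  {"vcpus": 8,  "avg_cpu_pct": 71},
    "staging-clone": {"vcpus": 16, "avg_cpu_pct": 2},
}

for name, m in instances.items():
    if m["avg_cpu_pct"] < 20:  # flag sustained low utilization
        print(f"{name}: {m['vcpus']} vCPUs at {m['avg_cpu_pct']}% average "
              f"CPU; candidate for downsizing")
```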
The programming language question reveals uncomfortable truths about sustainable software. C, C++, and Rust deliver 10-100x better performance per watt than Python or JavaScript for many tasks. But optimizing every application in low-level languages would crater developer productivity and slow innovation to a crawl. The field lacks good frameworks for deciding which workloads merit optimization and which should prioritize development velocity.
What clearly doesn’t work: voluntary action at scale. A 2024 survey found that despite growing awareness, only 9% of companies comprehensively report digital carbon footprints. Without regulatory mandates, measurement remains optional. And without measurement, reduction stays hypothetical.
The Regulatory Hammer Falls
The voluntary era is ending. The EU’s Corporate Sustainability Reporting Directive affects 49,000 companies starting with 2024 data reported in 2025, requiring verified emissions across all scopes unless justified otherwise. The rules demand “double materiality”—both climate risks to business and business impacts on climate—with mandatory assurance integrated into audited financial reports. Business models must align with the 1.5°C Paris Agreement pathway. The penalties for non-compliance include fines and market access restrictions.
California’s SB 253 requires companies exceeding $1 billion revenue operating in the state to report Scope 1 and 2 emissions by 2026, Scope 3 by 2027. The law faces legal challenges but signals direction: mandatory disclosure spreading state by state. China requires major companies to report ESG data from 2026. Singapore, the UK, and other major markets implement similar frameworks. A global regulatory convergence is happening whether companies are ready or not.
The SEC’s climate disclosure rule, adopted in March 2024 but stayed pending judicial review, illustrates the political turbulence. If implemented, it would require material climate risk disclosure and emissions reporting from large companies. If killed by courts, state-level regulations fill the void piecemeal. Either way, the trajectory points toward mandatory, verified carbon accounting as table stakes for public companies.
For tech companies specifically, the crunch intensifies because cloud services and software products create complex Scope 3 emissions. When Microsoft sells Azure services, those emissions appear in customers’ Scope 3 reporting—which means customers demand data Microsoft struggles to provide accurately. When Salesforce sells software running in customer data centers, who owns those emissions? The methodologies remain ambiguous, but regulatory deadlines don’t care about measurement difficulties.
The EU’s Energy Efficiency Directive adds another layer, requiring data centers exceeding 500 kW to report energy, water, and waste heat from May 2024. The Carbon Border Adjustment Mechanism taxes carbon-intensive imports. The Ecodesign for Sustainable Products Regulation will require detailed environmental footprints for products including digital services. The regulatory vise tightens from multiple angles simultaneously.
The Race Against Exponential Growth
Behind all this measurement anxiety lurks an arithmetic problem that may prove unsolvable: AI. The International Energy Agency projects data center energy consumption could exceed 1,000 TWh by 2026, equivalent to Japan’s total electricity use. AI training and inference account for accelerating growth—up 30% annually for accelerated servers versus 9% for conventional infrastructure.
A single ChatGPT query requires 10x more electricity than traditional search. Widespread adoption creates unprecedented demand. Microsoft and Google both cite AI infrastructure as the primary driver of emissions growth that overwhelms efficiency gains. The industry faces a Sophie’s choice: slow AI development to meet climate commitments, or blow through emissions targets while scrambling for technological solutions that may not materialize in time.
Some researchers maintain optimism. AI could optimize grid operations, accelerate materials discovery for carbon removal, and improve climate modeling. The net impact might prove positive if AI enables broader sustainability gains exceeding its own footprint. Others counter that this logic excuses unlimited expansion—the carbon cost remains real and immediate while benefits stay speculative and indirect.
The Green Software Foundation’s discussion on AI sustainability oscillated between these poles without resolution. One panelist argued AI is essential for sustainable development goals. Another warned that “breakneck expansion makes 100% green AI seem elusive.” Both are probably right, which doesn’t help companies trying to make decisions today.
What Tech Leaders Need to Do Now
The window for early-mover advantage is closing fast. Companies establishing measurement capabilities today position themselves for regulatory compliance, capture efficiency savings of 5-20%, and build competitive differentiation. Those that delay face penalties, investor flight, and supply chain exclusion.
Start with baseline establishment using the Software Carbon Intensity specification for applications while employing GHG Protocol for corporate reporting. Even rough baselines with estimation models provide starting points. What matters is beginning, documenting methodology transparently, and improving accuracy over time. Perfection is the enemy of progress.
Deploy tools matching organizational scale. Individual developers can profile energy consumption with CodeCarbon. Small teams benefit from Cloud Carbon Footprint for multi-cloud visibility. Kubernetes operators should monitor containers with Kepler. Enterprises require comprehensive platforms like Climatiq or Persefoni for audit-grade accounting.
Integrate carbon metrics into engineering processes—code reviews, CI/CD pipelines, architecture decisions. Make energy profiling part of performance testing. Include carbon impact in sprint planning and technical debt discussions. The goal: make carbon cost visible in technical decisions the same way performance metrics and security implications are visible today.
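One concrete pattern is a carbon budget gate in CI, analogous to a performance budget. A hypothetical sketch: the budget value and the measurement hook are placeholders your pipeline would supply, for instance from a CodeCarbon run in an earlier step:

```python
import sys

CARBON_BUDGET_G = 0.05  # hypothetical per-request budget, gCO2e

def measured_gco2_per_request() -> float:
    # Placeholder: read the estimate produced by an energy-profiling
    # step earlier in the pipeline.
    return 0.041

emissions = measured_gco2_per_request()
if emissions > CARBON_BUDGET_G:
    print(f"FAIL: {emissions:.3f} gCO2e/request exceeds "
          f"budget of {CARBON_BUDGET_G:.3f}")
    sys.exit(1)
print(f"OK: {emissions:.3f} gCO2e/request within budget")
```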
Set science-based targets through the Science Based Targets initiative, which validates alignment with 1.5°C climate pathways. Nearly 5,000 companies have submitted targets, and 2,600 have been approved. These commitments create accountability mechanisms that voluntary pledges lack.
Most importantly: engage suppliers and customers on carbon data exchange. For B2B companies, recognize that your emissions are your customers’ Scope 3 problem—they’ll demand disclosure as procurement criteria. For companies with significant supply chains, cascade requirements downward while providing tools and guidance to facilitate measurement.
The Unmeasurable Future
The honest truth? We’re flying blind. The science exists in theory, the tools exist in prototype, the will exists in pockets. But comprehensive, accurate, verifiable software carbon measurement remains years away from maturity. The field advances through iteration—flawed measurements that improve incrementally beat perfect measurements that never happen.
Cloud providers will eventually provide better transparency because customers and regulators demand it. Standards will converge because fragmentation creates compliance nightmares. Verification infrastructure will emerge because auditors and investors require it. The question is whether this happens fast enough.
The Paris Agreement’s 1.5°C pathway requires a 42% cut in global emissions by 2030—five years away. The software industry must deliver its proportional reduction, on the order of 45% from 2020 baselines. Every company burning fossil fuels in data centers, every product shipping without carbon consideration, every optimization delayed for convenience—these aren’t just environmental failures. They’re business risks materializing in real-time.
Coal powered the Industrial Revolution. Oil powered the twentieth century. Data now powers the twenty-first. But unlike previous energy transitions where we simply moved from one fuel to another, this transition demands something harder: making the invisible visible, measuring what we’ve never measured, and accepting responsibility for impacts we’ve barely begun to understand.
The computers aren’t going away. Neither is the carbon. The only question left is whether we’ll measure it before it’s too late.
Architecture of Influence
Simon Sinek’s Golden Circle consists of three layers. At the center sits Why—your purpose, the belief that drives you. The middle ring is How—the methods and values that distinguish your approach. The outer edge is What—the tangible products or services you deliver.
Traditional marketing marches outward: “We make X product with Y features, and it’s better because of Z.” This speaks to the rational brain, the neocortex, where decisions are analyzed but not felt.
Sinek’s insight is neurological. The limbic system, which governs emotion, trust, and decision-making, has no capacity for language. When you lead with Why, you bypass rational defenses and speak directly to the part of the brain where loyalty lives. People don’t just understand you. They feel you’re right.
The Apple Doctrine
Consider Apple’s approach. They’ve never led with specifications. The message has always been: “We challenge the status quo. We think differently. Want to join us?” The computers, phones, and watches are merely artifacts of that belief.
This is why Apple devotees queue overnight for product launches. They’re not buying superior hardware—they’re buying identity, belonging, and a worldview. The What is interchangeable; the Why is magnetic.
Contrast this with most technology companies, which trumpet processor speeds and screen resolutions. They compete on features, which means competing on price. Apple competes on meaning.
The Biology of Belief
Sinek draws on the structure of the human brain to explain why this works. The neocortex handles rational thought and language—the What and How. But the limbic brain, responsible for feelings and decision-making, processes the Why.
This explains why customers often can’t articulate why they chose one brand over another. “It just feels right,” they say. That’s the limbic system talking. When your messaging starts with Why, you create gut-level resonance that rational arguments can’t match.
This also clarifies why manipulations—discounts, fear-based marketing, aspirational messaging—produce transactions but not loyalty. They target the wrong part of the brain.
The Leadership Imperative
For business leaders, the implications are structural. Your Why isn’t a tagline or a paragraph in the annual report. It’s the filter for every decision: hiring, product development, partnerships, market expansion.
Southwest Airlines’ Why is democratizing air travel. This belief drives their operational How—no assigned seats, point-to-point routes, single aircraft type—and their What—affordable flights. When they considered adding premium seating, they didn’t analyze competitor pricing. They asked whether it served their Why. It didn’t. They didn’t do it.
Companies that lose sight of their Why drift into feature wars and price competition. They optimize for quarterly results and wake up indistinguishable from competitors.
Finding Your Why
Most organizations have a Why, even if they’ve forgotten it. It’s rarely the sanitized version in the mission statement. It’s the reason the founder started the company at three in the morning, risking everything.
The test is simple: If your Why could apply equally to a competitor, it’s not your Why. “To be the best” or “to innovate” means nothing. A genuine Why is specific, belief-driven, and inarguable. It’s not subject to market research because it exists independent of the market.
The hard part isn’t finding your Why. It’s having the discipline to let it shape every decision, especially when that conflicts with short-term gains.
The Competitive Moat
In an era of commoditization and infinite choice, Why is the last sustainable advantage. Products can be copied, talent can be poached, pricing can be undercut. Purpose can’t be replicated because it’s not a strategy—it’s an identity.
The companies that will dominate the next decade aren’t those with the best technology or the largest market share. They’re the ones that give people a reason to care. Not a rational reason. A felt reason.
Your customers don’t buy what you do. They buy why you do it. The question is: Do you know what that is?