GenAI’s Promise and Pitfalls for Sustainability
Generative AI (GenAI) is quickly emerging as one of the defining technologies of our time. When it comes to sustainability outcomes, will GenAI be helpful or a hindrance? This article explores both sides of the coin: three use cases in which GenAI can be leveraged to advance sustainability in finance and in the real economy, the sustainability risks the technology poses, and practical steps that can help ensure it contributes to sustainable value creation.
Use Case 1: Data and Insights for Sustainable Investment
One of GenAI’s most powerful applications in sustainability is its ability to make sense of large amounts of data at scale—especially unstructured information that may be rich in sustainability content but varies widely in format and quality. GenAI can ingest company filings, regulatory reports, vast datasets, and social media posts, then surface hidden patterns, trends, and inconsistencies. This capability not only reduces the need for human intervention but also enables analysts to identify emerging risks, including supply-chain vulnerabilities or governance red flags, before they escalate.
Beyond text, GenAI is being paired with other data sources such as satellite imagery and internet-of-things (IoT) signals to provide real-time insights. For example, satellite data can be processed to monitor deforestation, methane leaks, or coastal flooding, while IoT sensors can capture actual power or water usage at the shop or factory-floor level. When structured properly, these signals can feed into predictive analytics, scenario modelling, and portfolio optimisation, allowing portfolio managers to adjust exposures and run stress tests.
A recent report by CFA Institute on unstructured data and AI[1] demonstrates how AI models can be trained to identify material environmental, social, and governance (ESG)-related tweets and integrate such information into portfolio construction. When backtested, these sustainability-tilted portfolios outperformed the benchmark, with the strongest performance from small-capitalization companies. The result illustrates GenAI’s ability to turn fragmented ESG datapoints into actionable investment insights.
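The mechanics of such a tweet-to-portfolio pipeline can be sketched in miniature. The keyword scorer and `tilt_weights` helper below are hypothetical stand-ins for the fine-tuned language model and portfolio-construction process the report describes—a toy illustration of the idea, not the study’s methodology.

```python
# Toy sketch: score companies on the ESG relevance of their social media
# posts, then tilt benchmark weights toward higher-scoring names.
# (Keyword matching stands in for the fine-tuned LLM used in the report.)

ESG_TERMS = {"emissions", "deforestation", "governance", "diversity", "safety"}

def esg_score(tweets):
    """Fraction of a company's tweets that mention an ESG term."""
    if not tweets:
        return 0.0
    hits = sum(any(term in tweet.lower() for term in ESG_TERMS) for tweet in tweets)
    return hits / len(tweets)

def tilt_weights(base_weights, scores, tilt=0.5):
    """Scale each benchmark weight by (1 + tilt * score), then renormalise."""
    raw = {k: w * (1 + tilt * scores.get(k, 0.0)) for k, w in base_weights.items()}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

if __name__ == "__main__":
    tweets = {"A": ["Cutting emissions 30% by 2030", "New product launch"],
              "B": ["Quarterly results are out"]}
    scores = {k: esg_score(v) for k, v in tweets.items()}
    print(tilt_weights({"A": 0.5, "B": 0.5}, scores))
```

In a production setting, the scorer would be a trained classifier and the tilt would be subject to risk and tracking-error constraints; the renormalisation step, however, is the same.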
Use Case 2: Decarbonisation and Energy Optimisation
Another promising application of GenAI lies in making energy systems more efficient. According to the International Energy Agency (IEA), data centres consumed around 415 TWh (terawatt-hours) of electricity in 2024.[2] The same report notes that GenAI can help grid operators manage the strain on the grid by providing more-accurate demand forecasts and by integrating renewable power resources more effectively. Remote fault-detection systems could cut unnecessary curtailment—electricity output reduced even when the energy could be used or stored—and free up as much as 175 gigawatts of transmission capacity without the need for new infrastructure.
In hard-to-abate sectors—those in which carbon emissions are difficult to reduce—such as steel and cement, which together account for nearly 13% of global carbon dioxide (CO2) emissions, AI tools are being deployed to improve energy efficiency. These applications range from predictive maintenance and process control to optimising furnace temperatures, all of which can reduce waste and lower the carbon intensity of production.[3] The 2024 Net-Zero Industry Tracker published by the World Economic Forum suggests that the use of GenAI could improve capital efficiency by 5%–7% and reduce capital requirements by USD1.5 trillion to USD2.0 trillion.[4]
Encouraging data points are emerging from the Asia-Pacific region as well. For example, at a systemic level, research shows that AI deployment across Chinese cities is correlated with improved energy efficiency, largely by driving green technological innovation and by facilitating the rationalization of industrial structures.[5] The same study also shows that the benefits are more pronounced in cities with stronger oversight and environmental regulations.
The foregoing examples underscore a broader point: When applied to the right problems, GenAI can reduce energy consumption across the real economy. For investors, this potential means that assessing how firms deploy AI can offer valuable signals on both carbon-reduction potential and long-term competitiveness.
Use Case 3: Safety, Resilience, and the Circular Economy
Workplace safety and operational resilience are material sustainability factors but do not often grab headlines. In sectors such as energy, water, and waste management, AI-enabled “digital twins” allow operators to create virtual replicas of physical assets, from treatment plants to distribution networks.
By simulating behaviour under different conditions, digital twins help predict equipment failures, optimise maintenance schedules, and reduce both downtime and wastage. For instance, Veolia, a global utility providing waste and water management services, is deploying AI-driven digital twins in its wastewater treatment plants to detect leaks earlier and allocate repair resources more efficiently, improving both environmental and financial outcomes.[6]
On the ground, companies are turning to autonomous inspection robots to keep workers safe. Boston Dynamics’ robotic dog Spot has been equipped with cameras, sensors, and AI vision to perform inspections in hazardous environments—such as high-voltage substations, chemical facilities, and confined spaces—where human entry would pose serious safety risks. Utilities such as the National Grid are using Spot to capture data, which is then fed back into digital twin models for predictive maintenance and safety assurance.[7]
For investors, these examples illustrate how GenAI can contribute directly to social factors by reducing workplace risks. They also highlight a growing focus on circular efficiency: less waste, fewer accidents, and longer asset lifecycles—all of which strengthen operational resilience and align with sustainable value creation.
Sustainability Risks Posed by GenAI
As GenAI becomes more efficient, affordable, and accessible, concerns about the sustainability risks it poses are mounting. At the Rethink Hong Kong conference in September this year, CFA Institute hosted a roundtable on the intersection of AI and sustainability. We polled nearly 40 participants—mainly sustainability, technology, and finance professionals—and the results were telling: 70% said they were concerned about all three sustainability (environmental, social, and governance) risk pillars when it comes to GenAI, and governance concerns were rated as more pressing than environmental or social ones. So what are these risks, and how can they be mitigated?
Environmental Risks
- Energy. In 2025, Big Tech is expected to invest more than USD300 billion in AI infrastructure. This surge has environmental implications: For example, Google’s total greenhouse gas emissions in 2024 were 51% higher than its 2019 baseline, with Scope 3 emissions up 24% year-on-year, largely driven by electricity demand from data centres powering advanced AI models.[8] Training AI models is energy intensive, and the energy used has been rising with each generation: Training GPT-3 (released in 2020) produced an estimated 588 tons of carbon emissions, versus 5,184 tons for GPT-4 (2023) and 8,930 tons for Llama 3.1 405B (2024).[9]
Many tech companies are actively addressing these concerns. For example, Alibaba Cloud has improved the power usage effectiveness (PUE) and the share of clean energy used in its self-built data centres: In 2025, its PUE dropped to 1.190 from 1.247 in 2022, and clean energy accounted for 64% of the energy used.[10]
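PUE is total facility energy divided by the energy delivered to IT equipment, so an ideal score is 1.0 and everything above 1.0 is cooling and power-conversion overhead. A small sketch (the helper names are mine; the figures are the Alibaba Cloud numbers quoted above) shows why a drop from 1.247 to 1.190 is larger than it looks:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: facility energy over IT energy (1.0 is ideal)."""
    return total_facility_kwh / it_equipment_kwh

def overhead_reduction(pue_old: float, pue_new: float) -> float:
    """Relative cut in non-IT overhead (cooling, power conversion) per unit of IT load."""
    return ((pue_old - 1.0) - (pue_new - 1.0)) / (pue_old - 1.0)

if __name__ == "__main__":
    # Overhead per unit of IT load falls from 0.247 to 0.190,
    # roughly a 23% reduction in non-IT energy use.
    print(f"{overhead_reduction(1.247, 1.190):.1%}")
```

Measured against the overhead rather than the headline ratio, a 0.057-point PUE improvement is close to a one-quarter cut in the energy spent on everything other than computing.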
- Water usage. AI data centres draw and consume large volumes of water for server cooling. In the United States, for example, data centres’ direct water consumption in 2023 was 66.2 billion litres,[11] and one data centre operator reported that 60% of the water it uses is consumed rather than returned.[12] With AI demand rising, global water withdrawal by data centres could reach 4.2 trillion to 6.6 trillion litres by 2027.[13] In Australia, rapid expansion of data centres in New South Wales is projected to use up to 9.6 billion litres of clean water annually—nearly 2% of Sydney’s maximum water supply—raising concerns among residents and local farmers.[14]
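Headline figures like these can be sanity-checked with back-of-envelope arithmetic. The sketch below (variable names are mine) recovers the implied size of Sydney’s maximum water supply from the two numbers quoted above:

```python
# Figures from the Reuters report cited in the text.
projected_dc_water_l = 9.6e9   # litres/year projected for NSW data centres
share_of_supply = 0.02         # "nearly 2%" of Sydney's maximum water supply

# Implied maximum supply: 9.6e9 / 0.02 = 4.8e11 litres/year.
implied_max_supply_l = projected_dc_water_l / share_of_supply
print(f"Implied maximum supply: {implied_max_supply_l / 1e9:.0f} billion litres/year")
```

The implied figure of roughly 480 billion litres per year is consistent with the scale of a major metropolitan supply system, which lends credibility to the "nearly 2%" characterisation.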
I have written previously about the “health warnings” of GenAI, including issues relating to data integrity, transparency, and accountability,[15] as well as potential workforce displacement. The highlights follow:
Social Risks
- Workforce displacement. As AI automates monitoring, decisions, or other roles, certain jobs may disappear. Even with the potential for new jobs created by AI, the transition may be painful for millions of workers, raising issues of unemployment, re-skilling, and inequality.
- Bias reinforcement. Bias can find its way into GenAI through training data and algorithms that mirror societal prejudices and inequities. Such bias can manifest in multiple forms, including gender, racial, and cultural bias.
Governance Risks
- Data quality and integrity. ESG data quality varies significantly; poor inputs can lead to flawed conclusions, and models may inadvertently inherit biases that affect outputs and decisions. In the Rethink poll, 77% of respondents saw data quality as the biggest barrier to AI implementation.
- Lack of transparency, explainability, and accountability. Many AI models are black boxes: It is unclear where the training data comes from, what biases may be baked in, and how decisions or outputs are justified. As argued in the recent CFA Institute report “Explainable AI in Finance,”[16] human–AI collaboration and the provision of real-time explanations are some of the most effective tools for building trust and safeguarding consumer interests with respect to GenAI.
In Summary
As with any tool, GenAI’s impact depends on how it is used and powered. In the right hands, this technology can help investors and companies perform enhanced data analytics, optimise energy systems, and make workplaces safer. Without thoughtful and responsible development, however, it may also strain electricity grids, deplete already-scarce water resources, and raise difficult questions around governance, bias, and ethics. As CFA Institute research shows, explainability and governance are the difference between insight and illusion in the deployment of GenAI, and those with the discipline to use it responsibly will gain the most from it.
[1]Brian Pisaneschi, “Unstructured Data and AI: Fine-Tuning LLMs to Enhance the Investment Process,” CFA Institute (1 May 2024). https://rpc.cfainstitute.org/sites/default/files/-/media/documents/article/industry-research/unstructured-data-and-ai.pdf.
[2]International Energy Agency, “Energy and AI” (April 2025). https://iea.blob.core.windows.net/assets/601eaec9-ba91-4623-819b-4ded331ec9e8/EnergyandAI.pdf.
[3]Ankit Mishra, “Decarbonizing The Industrial Sector: Can AI Catalyze Global Progress?” Forbes (6 August 2025). www.forbes.com/sites/ankitmishra/2025/08/06/decarbonising-the-industrial-sector-can-ai-catalyse-global-progress/.
[4]World Economic Forum, “Net-Zero Industry Tracker: 2024 Edition” (December 2024). https://reports.weforum.org/docs/WEF_Net_Zero_Industry_Tracker_2024.pdf.
[5]Jun Zeng and Tian Wang, “The Impact of China’s Artificial Intelligence Development on Urban Energy Efficiency,” Scientific Reports 15 (6 July 2025). www.nature.com/articles/s41598-025-09319-x.
[6]Louis Larsen and Mathieu Lamotte, “Transforming the Future of Wastewater Treatment Plants with Digital Solutions,” Veolia (March 2022). www.veoliawatertechnologies.com/sites/g/files/dvc2471/files/document/2024/08/WP%20All%20HUBGRADE%20Performance%20Plant%20White%20Paper%20EN.pdf.
[7]Boston Dynamics, “Case Study: National Grid,” accessed 20 October 2025. https://bostondynamics.com/case-studies/national-grid/.
[8]Google, “Environmental Report 2025” (June 2025). www.gstatic.com/gumdrop/sustainability/google-2025-environmental-report.pdf.
[9]Nestor Maslej, Loredana Fattorini, Raymond Perrault, Yolanda Gil, Vanessa Parli, Njenga Kariuki, Emily Capstick, et al., “The AI Index 2025 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University (April 2025). https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf.
[10]Alibaba Group, “Environmental, Social and Governance Report 2025” (June 2025). https://data.alibabagroup.com/ecms-files/1375187346/7377e4a7-cb19-43e6-8bdf-6ba231ecbf0f/2025%20Alibaba%20Group%20Environmental%2C%20Social%20and%20Governance(ESG)Report.pdf.
[11]Arman Shehabi, Sarah J. Smith, Alex Hubbard, Alex Newkirk, Nuoa Lei, Md Abu Bakar Siddik, Billie Holecek, Jonathan Koomey, Eric Masanet, and Dale Sartor, “2024 United States Data Center Energy Usage Report,” Lawrence Berkeley National Laboratory (December 2024). doi:10.71468/P1WC7Q.
[12]Alex Setmajer, “How Data Centers Use Water, and How We’re Working to Use Water Responsibly,” Equinix (19 September 2024). https://blog.equinix.com/blog/2024/09/19/how-data-centers-use-water-and-how-were-working-to-use-water-responsibly/.
[13]Shaolei Ren and Amy Luers, “The Real Story on AI’s Water Use—and How to Tackle It,” IEEE Spectrum (10 September 2025). https://spectrum.ieee.org/ai-water-usage.
[14]Byron Kaye, “Exclusive: In Australia, a Data Centre Boom Is Built on Vague Water Plans,” Reuters (14 September 2025). www.reuters.com/sustainability/land-use-biodiversity/australia-data-centre-boom-is-built-vague-water-plans-2025-09-15/.
[15]Mary Leung, “AI and ESG: Harnessing Innovation to Drive Impactful Outcomes” (10 December 2024). www.linkedin.com/pulse/ai-esg-harnessing-innovation-drive-impactful-outcomes-mary-leung-cfa-ilefc.
[16]Cheryll-Ann Wilson, “Explainable AI in Finance: Addressing the Needs of Diverse Stakeholders,” CFA Institute (7 August 2025). https://rpc.cfainstitute.org/research/reports/2025/explainable-ai-in-finance.