References

1. Y. Hua, X. Yin, and F. Wen, “Artificial intelligence and climate risk: A double machine learning approach,” Int. Rev. Financ. Anal., vol. 103, p. 104169, 2025.
2. S. M. Cheong, K. Sankaran, and H. Bastani, “Artificial intelligence for climate change adaptation,” WIREs Data Min. Knowl. Discov., vol. 12, no. 5, p. e1459, 2022.
3. M. Reichstein, V. Benson, J. Blunk, G. Camps-Valls, F. Creutzig, C. J. Fearnley, B. Han, K. Kornhuber, N. Rahaman, B. Schölkopf, et al., “Early warning of complex climate risk with integrated artificial intelligence,” Nat. Commun., vol. 16, no. 1, p. 2564, 2025.
4. W. Leal Filho, T. Wall, S. A. R. Mucova, G. J. Nagy, A. L. Balogun, J. M. Luetz, A. W. Ng, M. Kovaleva, F. M. S. Azam, F. Alves, et al., “Deploying artificial intelligence for climate change adaptation,” Technol. Forecast. Soc. Change, vol. 180, p. 121662, 2022.
5. E. Strubell, A. Ganesh, and A. McCallum, “Energy and policy considerations for deep learning in NLP,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 2019, pp. 3645–3650.
6. E. Nost, “Governing artificial intelligence, governing climate change?,” Geo: Geogr. Environ., vol. 11, no. 1, p. e00138, 2024.
7. A. Nordgren, “Artificial intelligence and climate change: Ethical issues,” J. Inf. Commun. Ethics Soc., vol. 21, no. 1, pp. 1–15, 2023.
8. P. Bauer, A. Thorpe, and G. Brunet, “The quiet revolution of numerical weather prediction,” Nature, vol. 525, pp. 47–55, 2015.
9. P. Henderson, J. Hu, J. Romoff, E. Brunskill, D. Jurafsky, and J. Pineau, “Towards the systematic reporting of the energy and carbon footprints of machine learning,” J. Mach. Learn. Res., vol. 21, no. 248, pp. 1–43, 2020. Available: https://www.jmlr.org/papers/v21/20-312.html
10. S. Mehryar, V. Yazdanpanah, and J. Tong, “AI and climate resilience governance,” iScience, vol. 27, no. 6, p. 109812, 2024.
11. Organisation for Economic Co-operation and Development (OECD), “OECD AI principles: Updated guidance on climate and sustainability,” 2024. Available: https://www.oecd.org/en/topics/ai-principles.html
12. United Nations Framework Convention on Climate Change (UNFCCC), “Artificial intelligence for climate action in developing countries: Opportunities, challenges and risks,” 2024. Available: https://unfccc.int/ttclear/misc_/StaticFiles/gnwoerk_static/AI4climateaction/28da5d97d7824d16b7f68a225c0e3493/a4553e8f70f74be3bc37c929b73d9974.pdf
13. Intergovernmental Panel on Climate Change (IPCC), Climate Change 2022: Impacts, Adaptation and Vulnerability. Cambridge, U.K.: Cambridge University Press, 2023.
14. World Bank Group, “Climate and development: An agenda for action,” 2022. Available: https://openknowledge.worldbank.org/handle/10986/38220
15. L. Berrang Ford, R. Biesbroek, and J. D. Ford, “A systematic global stocktake of evidence on human adaptation to climate change,” Nat. Clim. Change, vol. 11, no. 11, pp. 989–1000, 2021.
16. D. Patterson, J. Gonzalez, Q. Le, C. Liang, L. M. Munguia, D. Rothchild, D. So, M. Texier, and J. Dean, “Carbon emissions and large neural network training,” arXiv preprint arXiv:2104.10350, 2021. Available: https://arxiv.org/abs/2104.10350
17. R. Schwartz, J. Dodge, N. A. Smith, and O. Etzioni, “Green AI,” Commun. ACM, vol. 63, no. 12, pp. 54–63, 2020.
18. C. Huntingford, E. S. Jeffers, M. B. Bonsall, H. M. Christensen, T. Lees, and H. Yang, “Machine learning and artificial intelligence to aid climate change research and preparedness,” Environ. Res. Lett., vol. 14, no. 12, p. 124007, 2019.
19. H. Jain, R. Dhupper, A. Shrivastava, D. Kumar, and M. Kumari, “AI-enabled strategies for climate change adaptation,” Comput. Urban Sci., vol. 3, no. 1, p. 25, 2023.
20. D. Leslie, “Understanding artificial intelligence ethics and safety,” arXiv preprint arXiv:1906.05684, 2019.
21. U. Gasser and V. A. F. Almeida, “A layered model for AI governance,” IEEE Internet Comput., vol. 21, no. 6, pp. 58–62, 2017.
22. J. Fjeld, N. Achten, H. Hilligoss, A. Nagy, and M. Srikumar, “Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI,” Berkman Klein Center Research Publication No. 2020-1, 2020.
23. B. W. Wirtz, J. C. Weyerer, and C. Geyer, “Artificial intelligence and the public sector: Applications and challenges,” Int. J. Public Adm., vol. 42, no. 7, pp. 596–615, 2019.
24. J. Cinnamon, “Data inequalities and why they matter for development,” Inf. Technol. Dev., vol. 26, no. 2, pp. 214–233, 2020.
25. J. Stilgoe, R. Owen, and P. Macnaghten, “Developing a framework for responsible innovation,” in The Ethics of Nanotechnology, Geoengineering, and Clean Energy, Routledge, 2020, pp. 347–359.
Open Access
Research article

AI-Driven Climate Adaptation: Technical Applications, Ethical Governance, and Social Inclusion

Özden Şentürk*
Institute of Social Sciences, İstanbul University, 34600 Istanbul, Turkey
Acadlore Transactions on AI and Machine Learning | Volume 5, Issue 1, 2026 | Pages 20-31
Received: 11-01-2025,
Revised: 12-29-2025,
Accepted: 01-12-2026,
Available online: 01-22-2026

Abstract:

Climate change, which has intensified into a global governance crisis, demands adaptation strategies that are faster, more precise, and more inclusive than ever before. Artificial intelligence (AI), increasingly positioned at the core of this transformation, offers powerful tools for climate risk forecasting, disaster preparedness, energy optimization, agricultural efficiency, and business resilience. Yet the growing adoption of AI exposes a fundamental paradox: while it promises unprecedented analytical capacity, its benefits remain unevenly distributed across communities. The current study addressed this tension by presenting a comprehensive, governance-oriented analysis of AI-driven climate adaptation. Drawing on an extensive review of academic research and major institutional reports, this paper identified three interlinked challenges, namely methodological limitations, ethical and equity risks, and governance gaps, which continually undermine the effectiveness of AI-enabled adaptation. Predictive models struggled to incorporate complex social vulnerabilities; algorithmic opacity limited trust and accountability; and persistent data inequality prevented low-income regions from leveraging advanced digital tools. In response, the study introduced a multi-layered governance framework encompassing technical capacity, regulatory and ethical infrastructure, and socially inclusive outcomes. The findings revealed that the contributions of AI to climate adaptation were fundamentally shaped by institutional quality, transparent data governance, equitable digital access, and the participation of vulnerable populations in decision making. The paper concluded that AI holds extraordinary potential to strengthen resilience, but only if deployed within governance systems that prioritize fairness, accountability, transparency, ethics, and social inclusion.
By aligning technological innovation with just and sustainable governance, AI becomes not only a predictive instrument but a transformative catalyst for equitable climate adaptation worldwide.
Keywords: Artificial intelligence, Climate adaptation, Data governance, Climate risk

1. Introduction

Climate change constitutes a global crisis that not only affects environmental conditions but also exerts profound and multidimensional impacts on economic sustainability, social welfare, and governance structures. Rising temperatures, extreme weather events, droughts, floods, and rising sea levels pose serious threats to both natural ecosystems and human life. This situation reflects not merely an environmental issue but also a broader governance challenge requiring strategic interventions across diverse domains such as resource management, disaster planning, insurance systems, infrastructure investments, and public policymaking.

In today’s complex risk landscape, where traditional approaches fall short of providing adequate responses, artificial intelligence (AI) technologies offer new opportunities through capabilities such as big data analytics, machine learning, and predictive modeling. The technological potential of AI enables more accurate identification of climate-related risks, testing of climate scenarios through multivariate models, and strengthening of early warning systems for natural disasters. In doing so, it supports the development of more inclusive, timely, and effective adaptation strategies for public institutions, the private sector, and other societal stakeholders. However, the integration of AI into climate policy extends beyond the enhancement of technical capacities. It necessitates the redesign of governance structures, the reinforcement of ethical frameworks, the redefinition of data management principles, and the promotion of broader societal participation. Particularly in developing countries, where disparities in access to technology, infrastructural limitations, and human resource constraints are more pronounced, this transformation must be approached with caution, inclusiveness, and fairness. This study aims to explore the potential of AI technologies in climate change adaptation and risk management through a multidimensional perspective. In line with contemporary approaches in the literature, the study evaluated the application of AI, the limitations encountered, and governance-related challenges. The final section presents concrete recommendations for policymakers, academic communities, and practitioners.

Despite the growing interest in the application of AI for climate adaptation, the existing literature remains fragmented across technical, institutional, and social dimensions. The central problem is that most studies focused either on the predictive capacities of AI or on isolated governance concerns, without addressing how these components interact in the processes of real-world climate adaptation. This creates a research gap, as there is still limited understanding of how AI can be integrated into climate governance in a way that is both technologically effective and socially equitable. The aim of this study is therefore to provide a thorough and interdisciplinary examination of AI-enabled climate adaptation by synthesizing technical applications, governance mechanisms, ethical considerations, and concerns of inclusiveness. The contribution of the paper lies in presenting a holistic governance perspective, developing a multi-layered conceptual framework, and consolidating international policy approaches to offer an integrated roadmap for researchers, policymakers, and practitioners.

AI has gained increasing attention within climate adaptation scholarship due to its capacity to process large-scale datasets, uncover hidden patterns, and enhance decision-making across multiple sectors. Existing studies covered several thematic areas, including climate risk assessment, early warning systems, energy and facility management, business resilience, and agriculture and water optimization, while also highlighting cross-cutting governance, ethics, and equity concerns.

The literature indicated that AI significantly improved climate risk identification by integrating meteorological, hydrological, and geographic data into predictive systems that could produce high-resolution vulnerability maps and scenario-based simulations [1]. These models help detect exposure patterns and anticipate long-term climate trajectories under various emission pathways. However, researchers emphasized that technical accuracy alone was insufficient, as quantitative models could not fully capture community-level resilience, institutional constraints, or socio-economic vulnerabilities. The need for hybrid approaches that combine data-driven insights with contextual knowledge was highlighted [2].

Another strong line of research focused on AI-enabled early warning systems, where deep learning and neural networks enhanced the detection of extreme weather anomalies and improved forecasting accuracy in the short and medium term [3]. Yet, several scholars noted that technologically advanced systems might fail if warnings did not effectively reach vulnerable groups or if communities lacked the capacity to act. This underscored the importance of user-centered system design, the incorporation of local knowledge, and equitable digital access.

The role of AI in energy and facility management forms another prominent strand of the literature. Smart building technologies increasingly rely on predictive analytics to optimize ventilation, temperature control, and lighting, thereby reducing energy use and operational costs [4]. Despite encouraging results, most empirical evidence originated from developed regions with robust data infrastructures, raising concerns about the scalability of these systems in lower-income or data-limited contexts. Comparative analysis showed the proposed framework was more integrative than siloed models (e.g., International Organization for Standardization (ISO) risk standards or sectoral AI applications) and could offer a holistic structure for AI-based adaptation.

In the business sector, AI supports the assessment of climate-induced disruptions, strengthens supply chain resilience, and improves scenario-based financial planning. Firms using asset-level climate risk modeling could better anticipate operational vulnerabilities and allocate resources more efficiently during periods of uncertainty [5]. Research also found that companies with environmentally responsible strategies tended to demonstrate higher adaptive capacity. Nevertheless, unequal access to advanced technologies might widen the performance gap between digitally sophisticated firms and those lacking comparable capacities.

Agriculture and water management also constitute another important domain where AI contributes to climate adaptation. Machine learning enhances irrigation scheduling, monitors crop and soil conditions, and improves drought forecasting. In hydrological systems, AI assists in flood prediction, reservoir operations, and the integration of remote-sensing datasets. While studies showed meaningful productivity gains, scholars highlighted continuing challenges related to high costs, limited digital skills, and inadequate infrastructure, particularly in developing countries.

Across the literature, governance, ethics, and equity emerged as critical considerations that shaped the success of AI-driven adaptation. Concerns included biased datasets, opaque algorithms, insufficient regulatory safeguards, and unequal distribution of digital tools [6]. The FATE (Fairness, Accountability, Transparency, and Ethics) principles were repeatedly emphasized, though their application in climate-related systems remained inconsistent [7]. Additional debates focused on the growing privatization of climate analytics, which might restrict public oversight and create dependencies on proprietary tools.

Finally, international institutions such as the United Nations Framework Convention on Climate Change (UNFCCC), Intergovernmental Panel on Climate Change (IPCC), and the Organization for Economic Cooperation and Development (OECD) underline both the potential and the challenges of integrating AI into climate governance. These organizations advocate for standardized frameworks of data governance, inclusive system design, enhanced capacity building, and cross-country collaboration to ensure that AI contributes effectively and equitably to global adaptation efforts.

2. Originality of the Study

This study proposed a multi-layered conceptual governance framework structured around three interdependent components: technical capacity, governance infrastructure, and social outcomes. Technical capacity refers to the ability of AI to process complex environmental datasets and support predictive modeling. Governance infrastructure encompasses data governance, regulatory mechanisms, ethical standards, and institutional accountability. Social outcomes address digital inclusion, community resilience, and distributional impacts. This integrated structure positions AI not only as a technological instrument but also as a governance actor that shapes decision-making processes within climate adaptation systems. In this framework, each component influences and enables the others through specific mechanisms. For instance, improvements in technical capacity such as more accurate climate risk prediction models require supportive governance infrastructure (e.g., transparent data policies and regulations for algorithmic accountability) to be effectively deployed. Those governance mechanisms, in turn, ensure that AI tools are used in an equitable and accountable manner so that social outcomes like community resilience actually improve. Conversely, observed social outcomes feed back into the system: evidence of unequal benefits or new vulnerabilities could prompt governance reforms (such as updated ethical guidelines or inclusive data-sharing agreements) and guide the development of next-generation technical solutions. For example, a highly accurate AI-based flood forecasting tool (technical capacity) will only lead to reduced disaster losses (social outcomes) if paired with proper governance infrastructure, such as early warning dissemination protocols, accountability for false alarms, and inclusive access that ensures all communities receive the alerts.
Without strong institutional frameworks, technical innovations may fail to reach or benefit vulnerable groups; with robust governance, those innovations translate into broad social resilience.

First, the article introduces a multi-layered governance perspective that integrates the technical functions of AI with institutional structures, ethical frameworks, and social inclusion mechanisms. Unlike existing studies that typically examined these elements separately, this paper brought them together through a unified analytical lens, offering a holistic understanding of AI-driven climate adaptation.

Second, the study synthesizes the diverse applications of AI across key adaptation domains including risk assessment, early warning systems, business continuity, energy optimization, and climate-smart agriculture, and presents an integrated cross-sectoral map. This coordinated perspective fills an important gap in the literature, which often remains fragmented or overly sector-specific.

Third, the article provides a critical examination of the governance challenges associated with AI in climate policymaking. By addressing issues such as algorithmic bias, data inequality, transparency gaps, and private-sector dominance, the study contributes a more nuanced understanding of how AI may simultaneously mitigate and reinforce global climate inequalities.

Fourth, the study consolidates and harmonizes the strategic approaches of leading international organizations including the UNFCCC, IPCC, OECD, and the World Bank and transforms them into a coherent policy roadmap. This synthesis strengthens the theoretical basis of the paper while offering actionable guidance for practitioners and policymakers.

Finally, the article introduces an original conceptual model demonstrating how AI interacts with governance quality, institutional capacity, and social inclusion. This framework advances existing discussions by positioning AI not merely as a predictive tool but as a transformative governance actor. Overall, the originality of this study lies in bridging the technical and governance domains through a unified perspective that is rarely observed in the current literature. By aligning algorithmic insights with regulatory capacity, ethical safeguards, and social inclusion dynamics, the study offers a thorough understanding of the requirements of effective AI-enabled climate adaptation. Unlike earlier frameworks that focused on either technological innovation or governance processes in isolation, the proposed framework explicitly bridges these domains and incorporates social inclusion as a core component, providing a novel and holistic lens for addressing the challenges of climate adaptation.

3. Methodology

This study adopted a conceptual and integrative review methodology grounded in a governance-oriented analytical perspective. Rather than testing empirical hypotheses, the article aims to develop a theoretically informed synthesis of how AI could support climate adaptation while also generating new governance challenges. The research design was based on a conceptual and theory-building approach that investigated the interactions among AI technologies, climate adaptation policies, and institutional governance. This design is suitable for emerging interdisciplinary fields in which conceptual clarification is essential.

Many advanced applications and governance discussions have emerged during the period of 2019–2025. This recent timeframe was selected to capture the surge of research and technological developments in AI-driven climate adaptation over the previous years. We acknowledge that seminal studies prior to 2019 laid important groundwork; therefore, in addition to the core search, we reviewed references in key papers to identify any influential earlier works and included those relevant to our analysis. A structured literature screening process was conducted using Scopus, Web of Science, ScienceDirect, IEEE Xplore, and SpringerLink. The search covered the above period and employed keywords such as:

• "AI AND climate adaptation"

• "machine learning AND climate risk"

• "AI governance"

• "digital inequality AND climate change"

• "AI early warning systems"

High-level institutional reports published by the UNFCCC, IPCC, OECD, and the World Bank were also included due to their relevance for climate governance. In selecting literature from the search results, we applied specific inclusion criteria to ensure quality and relevance: peer-reviewed publication, methodological rigor, and relevance to governance-oriented debates. We also gave weight to studies with high citation counts or those appearing in reputable, high-impact journals, under the assumption that they had made influential contributions to the field; this helped surface seminal articles and reduce the risk of selection bias. The conceptual review method was chosen to clarify theoretical inconsistencies in this rapidly evolving interdisciplinary field. Exclusion decisions targeted narrowly technical engineering papers, low-quality gray literature, and studies lacking verifiable data. Institutional reports were integrated using thematic coding. Limitations of the method included regional imbalances in available research and potential interpretive subjectivity, which were mitigated by cross-validating sources and triangulating institutional and academic materials.

4. Contributions of AI to Climate Adaptation

The solutions offered by AI in the fight against climate change enable adaptation policies to be executed in a more data-driven, flexible, and effective manner. These technologies are utilized across a wide spectrum, from processing environmental data and operating early warning systems to optimizing energy usage and improving agricultural productivity. The contributions of AI in this context can be categorized into five main thematic areas (Figure 1).

Figure 1. Contributions of AI to climate adaptation
4.1 Risk Assessment and Simulations

With its powerful capabilities of processing big data, AI is used to model future scenarios by integrating climate data from various geographical regions. These applications contribute to the development of climate risk maps, analyses of regional vulnerabilities, and prioritization of intervention strategies by decision-makers. Algorithms that analyze the frequency, intensity, and geographical distribution of climate events can forecast future threats based on historical data and simulation-based scenarios. This facilitates targeted resource planning for regions ranging from flood-prone coastal cities to drought-affected agricultural zones [1].

In addition, AI-based tools employed for the spatial modeling of events such as floods, storms, and extreme heat can correlate multilayered datasets (meteorological, topographical, and socio-economic) and produce visualized disaster maps based on these analyses [1]. Such applications enhance decision-making processes in disaster planning, particularly at the level of local governments. For example, the startup Jupiter Intelligence offers an AI-driven climate risk platform that integrates satellite imagery and climate models to generate asset-level risk projections (e.g., estimating the probability of flood damage to specific infrastructure). Likewise, Google’s Earth Engine provides analysts with a machine-learning-enabled toolkit to map climate hazards and ecosystem vulnerabilities, allowing planners to visualize where future hotspots of risk may emerge.
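The kind of multilayer risk mapping described above can be sketched in miniature. The following illustrative example, not tied to any of the platforms named in the text, combines invented meteorological, topographical, and socio-economic features into a composite flood-risk score used to rank districts; all feature names, normalization constants, and weights are assumptions for illustration only.

```python
# Hypothetical composite climate-risk score from multilayered inputs.
# All thresholds and weights are invented for illustration.

def _clip01(x):
    """Clamp a value into the [0, 1] range."""
    return max(0.0, min(1.0, x))

def flood_risk_score(rainfall_anomaly_mm, elevation_m, vulnerability,
                     w_rain=0.5, w_elev=0.3, w_vuln=0.2):
    """Composite risk in [0, 1]: heavy rainfall, low-lying ground, and
    high social vulnerability all push the score upward."""
    rain = _clip01(rainfall_anomaly_mm / 200.0)   # normalise rainfall anomaly
    elev = 1.0 - _clip01(elevation_m / 100.0)     # low ground = more exposed
    return w_rain * rain + w_elev * elev + w_vuln * _clip01(vulnerability)

districts = {
    "coastal_city": flood_risk_score(180.0, 5.0, 0.8),
    "upland_farms": flood_risk_score(60.0, 90.0, 0.4),
}
# Rank districts for intervention priority, highest risk first.
priority = sorted(districts, key=districts.get, reverse=True)
```

A production system would replace the fixed weights with a model learned from observed loss data, but the structure of combining physical exposure with social vulnerability into one actionable ranking is the same.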

4.2 Early Warning Systems and Disaster Management

AI-supported early warning systems are developed to forecast extreme weather events associated with climate change more accurately and at an earlier stage. These systems go beyond traditional meteorological models by integrating machine learning-based analytical tools with real-time data streams. The effectiveness of early warning systems depends not only on algorithmic accuracy but also on how these systems engage with the public. User-friendly interfaces, community participation, and regionally customized alerts strengthen preparedness processes before disasters occur [3]. Moreover, these systems are crucial for rapid post-disaster response. For instance, AI-powered automated decision support systems are used to monitor water levels in real time during floods, automate evacuation decisions, and protect critical infrastructure. A prominent example is Google’s Flood Forecasting Initiative, an AI-powered system now deployed across South Asia to provide residents with flood alerts up to several days in advance [3]. Recent advances in AI and data-driven numerical weather prediction have fundamentally transformed climate and weather forecasting systems, significantly enhancing early warning capacities and institutional preparedness for climate-related risks [8].
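At their core, many early warning pipelines reduce to flagging anomalous readings in a real-time stream. A minimal sketch, assuming a rolling z-score rule rather than any specific deployed system, might monitor water-level readings and raise an alert when a new reading deviates sharply from recent history; the window size and threshold here are illustrative tuning parameters.

```python
from collections import deque
import math

def make_alert_checker(window=24, z_threshold=3.0):
    """Return a stateful checker that flags readings deviating strongly
    from the rolling window of recent history (illustrative parameters)."""
    history = deque(maxlen=window)

    def check(level):
        alert = False
        if len(history) == history.maxlen:           # window fully warmed up
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = math.sqrt(var)
            if std > 0 and (level - mean) / std > z_threshold:
                alert = True
        history.append(level)                         # update rolling window
        return alert

    return check

check = make_alert_checker(window=5, z_threshold=2.0)
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 3.5]   # final reading is a spike
alerts = [check(r) for r in readings]
```

Real systems layer forecasting models, sensor fusion, and dissemination logic on top, but this captures the basic detect-then-alert step that the text argues must be paired with effective public communication.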

4.3 Energy and Facility Management

AI is utilized in smart facility management to improve the efficiency of energy systems and reduce carbon footprints. Predictive maintenance systems help prevent equipment failures in energy-intensive sectors, thereby minimizing both production losses and unnecessary energy consumption. AI-supported smart building technologies analyze indoor parameters such as temperature, humidity, lighting, and air quality to optimize energy usage. This maintains comfortable living conditions while minimizing energy waste [4]. Furthermore, the decarbonization of supply chains is another area where AI supports sustainability goals. In practice, companies like Google have applied AI to their operations; for example, Google’s DeepMind famously reduced data-center cooling energy use by about 30% through real-time AI optimizations, a technique now being adapted in commercial buildings. Electric utilities are also using AI for smart grid management; machine learning models can balance electricity supply and demand by predicting peaks (e.g., higher air-conditioning demand during heatwaves) and adjusting distribution accordingly. Meanwhile, city governments are partnering with tech firms to climate-proof urban infrastructure. For instance, IBM’s collaboration with C40 Cities is developing an AI solution to analyze urban heat island effects and stress on energy resources, in order to help city planners adapt building codes and emergency plans for extreme heat [4]. These scenarios illustrate the role of AI in making energy systems and facilities more adaptive to climate stresses.
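The peak-prediction idea can be illustrated with a deliberately simple model. This sketch, which assumes invented historical readings and made-up thresholds rather than any utility's actual system, fits a linear load-vs-temperature relationship and pre-adjusts a cooling setpoint when the forecast temperature implies a demand peak.

```python
# Illustrative data: outdoor temperature (°C) vs. grid load (MW).
temps = [20.0, 24.0, 28.0, 32.0, 36.0]
loads = [50.0, 58.0, 70.0, 85.0, 102.0]

# Ordinary least squares fit for load = slope * temp + intercept.
n = len(temps)
mean_t = sum(temps) / n
mean_l = sum(loads) / n
slope = (sum((t - mean_t) * (l - mean_l) for t, l in zip(temps, loads))
         / sum((t - mean_t) ** 2 for t in temps))
intercept = mean_l - slope * mean_t

def predicted_load(temp_forecast):
    """Predicted grid load (MW) for a forecast temperature."""
    return slope * temp_forecast + intercept

def cooling_setpoint(temp_forecast, peak_threshold_mw=90.0):
    """Shed load ahead of a predicted peak by raising the setpoint (°C)."""
    return 24.0 if predicted_load(temp_forecast) > peak_threshold_mw else 22.0
```

Operational demand forecasting uses far richer features (calendar effects, humidity, historical load curves), but the same predict-then-act loop underlies the smart-grid balancing described above.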

4.4 Business Resilience and Financial Decision Support Systems

Enhancing the long-term resilience of businesses against climate change is not only an environmental concern but also a financial necessity. AI serves as a critical tool enabling companies to analyze climate-related risks more effectively and develop preventive strategies. Risks faced by businesses, such as supply chain disruptions, physical damage from extreme events, or market volatility due to climate shifts, can be better anticipated with machine learning models. Firms using asset-level climate risk modeling can allocate resources efficiently and strengthen contingency planning [5]. Research also found that companies with proactive, environmentally responsible strategies tended to demonstrate higher adaptive capacity [5]. Concrete applications are emerging in industry: decision-support platforms allow companies to input their facility and supply chain data and receive AI-driven analyses of climate risks tailored to their operations. In the insurance and finance sectors, specialized startups such as Cervest and Jupiter Intelligence provide AI-powered climate risk analytics to corporations, translating hazard data into financial risk metrics that guide investment and insurance decisions. Financial institutions are increasingly integrating these tools to “climate stress-test” their portfolios, showing how AI can inform strategic resilience measures in business. As a result, the measurability of risks in areas like climate finance and insurance is enhanced, and businesses are better equipped to adapt to emerging climate challenges.
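A toy version of such asset-level stress-testing makes the mechanics concrete. In this hedged sketch, with all asset values, hazard probabilities, and damage ratios invented for illustration, expected annual loss is computed per asset as probability × damage ratio × asset value, then compared across a baseline and a warmer scenario.

```python
# Invented portfolio of physical assets (values in million USD).
assets = [
    {"name": "plant_a", "value_musd": 120.0},
    {"name": "depot_b", "value_musd": 40.0},
]

# Per scenario and asset: (annual hazard probability, damage ratio if hit).
scenarios = {
    "baseline":  {"plant_a": (0.02, 0.10), "depot_b": (0.05, 0.30)},
    "high_warm": {"plant_a": (0.06, 0.15), "depot_b": (0.12, 0.40)},
}

def expected_loss(scenario):
    """Expected annual loss (million USD) summed over all assets."""
    total = 0.0
    for asset in assets:
        p, dmg = scenarios[scenario][asset["name"]]
        total += p * dmg * asset["value_musd"]
    return total

baseline = expected_loss("baseline")    # 0.02*0.10*120 + 0.05*0.30*40
stressed = expected_loss("high_warm")   # losses under the warmer scenario
```

Commercial platforms estimate the probability and damage inputs with climate and hazard models; the value of the exercise lies in comparing the same portfolio across scenarios, as in the portfolio stress-tests the text describes.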

4.5 Agriculture and Water Management

The agricultural sector is among the most vulnerable to the impacts of climate change, and AI applications in this area are particularly important for the efficient use of water resources and the sustainability of agricultural production. For farmers, AI-supported systems analyze parameters such as soil moisture, rainfall forecasts, temperature, and plant health to recommend optimal irrigation times, which not only saves water but also improves productivity [2]. In addition, algorithms that develop early warning systems against climate-induced risks such as droughts, frost, or pest outbreaks help enhance the adaptive capacity of small-scale producers in particular. In this context, Cheong et al. [2] showed that supervised and reinforcement learning techniques could generate effective strategies even with low-quality or incomplete data. All these applications contribute to making agricultural production more resilient, efficient, and sustainable in the face of climate change. The climate-smart agriculture programs of the Consultative Group for International Agricultural Research (CGIAR) employ AI-driven advisory tools that deliver localized recommendations to farmers, such as which crop varieties to plant given seasonal climate forecasts, or sowing dates adjusted to predicted rainfall. In the private sector, companies have developed precision farming systems; IBM’s Watson Decision Platform for Agriculture and John Deere’s advanced analytics in farming equipment are two examples that use real-time field data and AI models to optimize irrigation and fertilization. These tools help farmers adapt to variable weather by conserving water during dry spells and protecting yields against extreme climate events. By integrating such AI advisories into daily farm decisions, agricultural communities can better cope with climate variability.
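The irrigation-advisory logic can be sketched as a simple rule. This minimal example, with thresholds, the 60% moisture cutoff, and the weekly crop-need figure all invented for illustration rather than drawn from any named platform, recommends an irrigation depth from soil moisture and the rain forecast.

```python
def irrigation_advice(soil_moisture_pct, rain_forecast_mm, crop_need_mm=25.0):
    """Recommend an irrigation depth (mm) for the coming week.
    All thresholds are illustrative assumptions."""
    if soil_moisture_pct >= 60.0:
        return 0.0                        # soil already wet enough
    expected_deficit = crop_need_mm - rain_forecast_mm
    if expected_deficit <= 0:
        return 0.0                        # forecast rain covers crop need
    # Scale by dryness: the drier the soil, the closer to the full deficit.
    dryness = 1.0 - soil_moisture_pct / 60.0
    return round(expected_deficit * dryness, 1)
```

Deployed advisories learn such relationships from sensor and yield data per crop and soil type, but the decision shape, combining current field state with a forecast to produce a concrete action, is the one the paragraph describes.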

5. Ethical and Equity Considerations

5.1 Methodological Limitations

Although AI holds significant potential in generating data-driven climate policies and supporting risk assessment mechanisms, its practical applications still face serious methodological constraints. In particular, the models used to measure socially constructed and subjective concepts such as resilience, vulnerability, and societal adaptation often lack sufficient depth and contextual sensitivity, hence limiting the ability of decision-makers to conduct comprehensive evaluations [2]. Climate systems are inherently multivariate, temporally volatile, and complex in terms of causal relationships. This complexity restricts both the performance and interpretability of AI-based machine learning models. Modeling long-term climate projections, for instance, presents a challenge in explaining how algorithms relate specific variables and derive outcomes [1]. While AI-powered prediction models based on big data for spatial analysis have proven technically successful in forecasting short-term events such as floods, droughts, or heatwaves, their results are often insufficiently interpretable for policymakers [1]. The lack of robust uncertainty analysis, especially in time-sensitive and resource-critical areas like disaster management, raises concerns about the reliability of AI-based approaches.

5.2 FATE Principles

The origin, inclusiveness, and representativeness of datasets used to develop AI systems are critical factors in ensuring climate justice. However, many current AI models rely on high-quality datasets derived primarily from the Global North, thereby excluding lower-income countries in the Global South and further exacerbating existing digital inequalities [6]. This technological disparity is not only about access to resources but also about whether the AI solutions produced are appropriate for the specific needs of each region. A system developed in a highly resourced country cannot be expected to have the same effectiveness in a region with limited institutional capacity or weak data infrastructure. Many scholars have described this dynamic as a form of “data colonization”, in which data from less-developed regions are extracted or utilized under terms set by more powerful actors, often without empowering local stakeholders. Ensuring data sovereignty, whereby communities and nations maintain control over their own climate data, is crucial to ethical AI governance, so that AI-driven solutions do not simply impose external models but are co-created with local ownership. The effectiveness of AI-driven climate policies depends not only on technical accuracy and data-processing capacity but also on adherence to the principles of ethical and societal responsibility. In this regard, the FATE principles provide a critical governance framework (Figure 2) [7].

Fairness requires that AI systems do not disproportionately benefit certain geographies or social groups, but instead ensure equitable access to climate technologies and data, particularly between the developed and developing countries.

Accountability emphasizes the need for AI-driven decisions to be traceable and subject to oversight, especially in contexts involving high-stakes environmental and social impacts.

Transparency entails that the decision-making logic of AI systems must be explainable to both policymakers and affected communities, to enhance trust and legitimacy.

Ethics underscores the importance of aligning AI applications with fundamental human rights, inclusivity, and environmental sustainability.

Figure 2. FATE principles

Taken together, these principles help guide the responsible design and deployment of AI in climate governance, ensuring that technological innovation is not pursued at the expense of equity, justice, and long-term societal well-being [7]. However, the practical implementation of the FATE principles in AI-driven climate adaptation projects requires deliberate strategies. For example, Fairness must be proactively built into AI climate tools: if an AI model determines where to place air-quality sensors or allocate disaster relief resources, it should use inclusive data; otherwise, it may overlook poorer or rural communities with less digital presence, leading to under-monitoring and under-resourcing. To counter such bias, project developers have begun complementing algorithmic output with on-the-ground surveys and ensuring representation of vulnerable groups in the datasets. In terms of Accountability, some climate early-warning systems now include human-in-the-loop oversight committees. For instance, an AI-based flood forecasting system might have a review board of meteorologists and community leaders who audit the alerts it generates and can adjust thresholds or issue clarifications. This creates a traceable chain of responsibility for AI decisions; if an automated warning fails or misfires, there are accountable governance structures in place to investigate and correct it. Transparency is being addressed by making AI models more interpretable and open. In practice, developers of AI climate tools are increasingly publishing details of the models or providing explainability interfaces: a drought-prediction AI might display the key factors (e.g., rainfall levels and soil-moisture trends) driving its forecasts to local agricultural officers. Furthermore, open-source AI climate platforms (such as open climate risk dashboards) allow experts and stakeholders to inspect and understand the underlying algorithms, which can build trust with end-users.
Lastly, upholding Ethics means aligning AI projects with human rights and local values. One example is ensuring informed consent and data privacy: communities contributing data (for example, farmers sharing crop information for an AI advisory service) should be consulted and their privacy protected under clear ethical guidelines, and uses of AI that could harm vulnerable populations should be rejected.

5.3 Governance Gaps and Lack of Transparency

The integration of AI into climate policymaking is not solely a technological matter but also a profound governance issue. Currently, AI applications in many countries are primarily developed and managed by actors in the private sector, with limited oversight, design input, or regulatory control from public authorities. This raises serious risks in terms of transparency, accountability, and safeguarding of the public interest [6]. Critical questions such as which data is excluded, which interest groups benefit, and which social groups are systematically disadvantaged by AI systems are often not sufficiently addressed. Given that AI is increasingly used as a forecasting tool that influences decision-making, its deployment inherently carries political implications. In addition, the environmental impacts caused by AI, such as energy consumption and carbon footprints, should not be overlooked. The computational power and energy required to run large-scale models sometimes contradict the notion of AI as a climate-friendly technology. Therefore, discussions concerning AI must also examine whether these systems are being developed in alignment with the principles of sustainability [9].

6. Social Inclusion

The effectiveness of AI in climate governance is closely tied to the degree of social inclusion and digital equity embedded in its application. Without deliberate interventions, AI-driven climate strategies risk reinforcing structural inequalities rather than alleviating them [10]. In practice, technological solutions developed for climate adaptation and mitigation could unintentionally benefit the already privileged groups, i.e., those with access to digital tools, education, and financial resources, while leaving marginalized communities further behind.

Vulnerable groups, particularly in emerging economies, often face multiple layers of digital exclusion that limit their participation in AI-enabled climate solutions. These exclusions manifest in different ways: inadequate digital infrastructure, such as unreliable internet connectivity and limited access to hardware; gaps in digital literacy that hinder the ability to effectively use technological platforms; and economic barriers that make advanced digital services unaffordable [11]. As a result, the populations most exposed to climate risks, including smallholder farmers, low-income households, and urban poor communities, become the least able to benefit from early warning systems, climate-smart agriculture platforms, or AI-supported disaster management tools. Ensuring social inclusion in this context requires proactive governance measures. Policymakers should integrate equity-oriented frameworks into AI design and deployment, to ensure that marginalized voices are represented in decision-making processes. Training programs to enhance digital literacy are essential, and they should be differentiated to meet the needs of various groups (for example, specialized curricula for older adults or for smallholder farmers who face unique challenges in adopting new technologies).

Besides, targeted subsidies to improve access to digital infrastructure and community-driven participatory models of AI adoption are crucial steps toward bridging the digital divide. By improving basic digital skills across demographics and expanding affordable access to AI-powered services, governments and NGOs could help empower disadvantaged communities to actively engage with climate adaptation tools. In parallel, involving community stakeholders in the co-design of AI solutions (e.g., incorporating indigenous knowledge and local preferences into AI models) enhances the cultural relevance and acceptance of these technologies. Ultimately, a socially inclusive approach guarantees that AI-driven adaptation measures do not merely trickle down to vulnerable populations but are built from the ground up with their direct input and for their direct benefit.

7. Role and Initiatives of International Organizations: AI in Climate Action

To ensure fair and sustainable integration of AI technologies into global climate action, numerous international organizations have launched multidimensional initiatives (Table 1). These organizations develop normative frameworks and strengthen the capacities of countries through technical and financial support.

Table 1. Research summary of organizations

| Organization | Abbreviation | Summary of Research |
|---|---|---|
| United Nations Framework Convention on Climate Change | UNFCCC | Published an “AI for Climate Action” roadmap (2023) with 14 recommendations for developing countries (digital infrastructure, equity, and open-access tools) [12]. |
| United Nations Environment Programme | UNEP | Works with the UNFCCC and provides technical support on AI and climate technologies through the Climate Technology Centre and Network [12]. |
| Climate Technology Centre and Network | CTCN | Supports technology transfer, training, and AI-based climate solutions for developing nations. |
| Intergovernmental Panel on Climate Change | IPCC | The Sixth Assessment Report (AR6, 2022) emphasized AI's role in mitigation: energy efficiency, system optimization, and carbon reduction [13]. |
| World Bank Group | World Bank | Highlights AI as a leverage point for climate finance, urging digital integration in urban and agricultural projects [14]. |
| Organisation for Economic Co-operation and Development | OECD | Updated AI Principles (2024) to include climate aspects; calls for measuring AI's footprint and reducing digital inequality [11]. |

The UNFCCC, under its “AI for Climate Action” initiative, published a roadmap for developing countries. This document outlined 14 core recommendations, which included expanding digital infrastructure, addressing social inequalities, and promoting open-access AI tools as public goods [12]. In partnership with the UN Environment Programme (UNEP), technical assistance in climate technologies, including AI, was provided through the Climate Technology Centre and Network. The IPCC, in its Sixth Assessment Report, highlighted the potential contributions of digital technologies to mitigation processes, including energy efficiency, system optimization, and carbon reduction through digitalization [13]. Furthermore, projects such as the Global Adaptation Mapping Initiative (GAMI) demonstrated that AI-supported data analysis methods could address knowledge gaps in adaptation policies [15].

The World Bank has assessed the potential of AI as a leverage point in development and climate finance, particularly recommending the integration of digital components into urban infrastructure and agricultural projects. As of 2024, its blogs and policy documents have emphasized the importance of just transition principles and called for the inclusion of climate change themes in national AI strategies [14]. The OECD updated its 2019 AI Principles in 2024 to include environmental and climate dimensions. Its 2023 reports called for measuring the environmental impact of AI (e.g., energy use and carbon footprints), addressing digital inequality, and enhancing international cooperation for a green digital transformation [11].

These examples illustrate that international organizations are not only setting guiding principles but also aiming to drive field-level transformation through capacity building, financing, and technology transfer. However, the success of this process depends on the simultaneous provision of strong governance, technical expertise, data infrastructure, and public engagement.

8. Rebound Effects and Technical Risks

High-energy AI models, such as large-scale neural networks and foundation models (e.g., GPT-3), consume significant computational resources, resulting in considerable greenhouse gas emissions. For instance, Strubell et al. [5] reported that training a single deep learning model for natural language processing could emit as much $\mathrm{CO}_2$ as five average American cars over their entire lifetimes. Similarly, Patterson et al. [16] demonstrated that training Google’s BERT model at scale required hundreds of megawatt-hours of electricity, underscoring the carbon intensity of developing large-scale models.
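The scale of such figures can be reproduced with a back-of-envelope estimate in the spirit of the accounting used by Strubell et al. [5] and Patterson et al. [16]: emissions roughly equal hardware power x training time x datacenter overhead (PUE) x grid carbon intensity. The hardware counts, wattages, and rates below are illustrative assumptions, not measurements from any specific model.

```python
# Back-of-envelope estimate of training-run emissions.
# All parameter values are illustrative assumptions.

def training_co2_kg(gpu_count, gpu_watts, hours,
                    pue=1.5, grid_kg_per_kwh=0.4):
    """Estimated kg of CO2 for one training run.

    pue: power usage effectiveness (datacenter overhead factor)
    grid_kg_per_kwh: carbon intensity of the electricity grid
    """
    energy_kwh = gpu_count * gpu_watts / 1000.0 * hours * pue
    return energy_kwh * grid_kg_per_kwh

# e.g., a hypothetical run: 64 GPUs at 300 W each for two weeks (336 h)
kg = training_co2_kg(gpu_count=64, gpu_watts=300.0, hours=336.0)
print(f"~{kg / 1000.0:.1f} tonnes CO2")   # ~3.9 tonnes under these assumptions
```

Changing the grid carbon intensity or PUE shifts the result substantially, which is precisely why standardized reporting of these factors is advocated in the next section.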

To mitigate this rebound effect, several actionable strategies are being developed and adopted:

• Low-energy model designs: This involves creating energy-efficient architectures such as distilled or pruned neural networks that maintain performance while significantly reducing training time and energy usage. EfficientNet and MobileBERT are prime examples of this trend.

• Carbon labeling of AI systems: Institutions like the Allen Institute for AI advocate for standardized reporting of carbon footprints during the development and deployment of AI models. Schwartz et al. [17] proposed a “Green AI” framework encouraging researchers to disclose energy consumption and emissions as metrics alongside accuracy.

• Lifecycle emission tracking: Henderson et al. [9] suggested incorporating full-cycle tracking, covering data collection, training, storage, and model inference to assess long-term environmental impacts and optimize for sustainability over time. This is increasingly essential in climate applications where tools like geospatial models, satellite imagery processing, and climate simulations are computationally intensive. It is implicitly emphasized that uncertainty in complex models represents a significant risk for effective decision-making, and that advanced analytical methods may help to mitigate this risk [18].
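Lifecycle tracking of the kind Henderson et al. [9] suggest can be approximated with a simple ledger that accumulates estimated energy per phase rather than reporting training alone. The phase names and energy figures below are hypothetical, chosen only to show that inference can dominate a model's lifetime footprint.

```python
# Minimal sketch of lifecycle emission tracking: accumulate estimated
# energy per phase (data collection, training, storage, inference)
# and convert the total to CO2. Figures are hypothetical.

from collections import defaultdict

class EmissionLedger:
    def __init__(self, grid_kg_per_kwh=0.4):
        self.grid = grid_kg_per_kwh
        self.kwh_by_phase = defaultdict(float)

    def log(self, phase, kwh):
        """Record estimated energy use (kWh) for one lifecycle phase."""
        self.kwh_by_phase[phase] += kwh

    def total_co2_kg(self):
        return sum(self.kwh_by_phase.values()) * self.grid

ledger = EmissionLedger()
ledger.log("data_collection", 120.0)
ledger.log("training", 9500.0)
ledger.log("storage", 40.0)
ledger.log("inference", 2600.0)   # accrues for as long as the model serves
print(f"{ledger.total_co2_kg():.0f} kg CO2 over the model lifecycle")
```

Reporting only the training entry would understate the footprint here by more than a fifth, which is the core argument for full-cycle tracking.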

Despite their adaptive potential, AI-powered climate strategies may inadvertently trigger rebound effects by encouraging higher consumption through efficiency gains, and they may create new technological risks, including over-reliance on automated systems, algorithmic bias, and reduced institutional resilience [19]. Taken together, the mitigation strategies above enable a pathway toward sustainable AI deployment in climate adaptation, ensuring that the tools meant to solve environmental challenges do not inadvertently worsen them.

9. Conclusions and Recommendations

This study demonstrated that AI had become a central driver of climate adaptation by improving risk assessment as well as forecasting accuracy, operational efficiency, agricultural productivity, and business resilience. Through its capacity to process complex and large-scale datasets, AI provides actionable insights that traditional analytical tools cannot easily capture, thereby supporting precise and more timely decisions on adaptation. Despite its transformative potential, the analysis revealed that the benefits of AI were unevenly distributed across countries and sectors.

A central conclusion emerging from this study was that the impact of AI on climate adaptation was not determined solely by technological sophistication but by the institutional, social, and ethical context in which these tools were deployed. Effective governance characterized by algorithmic transparency, ethical oversight, and robust accountability structures is critical to ensuring that AI-based systems operate fairly and reliably. Without such safeguards, there is a significant risk that opaque algorithms, biased datasets, and uneven access to digital tools could undermine public trust and exacerbate social disparities. Accordingly, embedding ethics-by-design and safety-by-design principles into AI-driven climate adaptation is essential to mitigate technological risks such as automation bias, accountability gaps, and long-term dependency on opaque systems, thereby safeguarding public trust and institutional resilience [20]. Consequently, technical innovation should be accompanied by governance innovation to ensure that AI supports resilience equitably and sustainably. To operationalize this synchrony between technological and governance advancement, we proposed a multi-pronged governance innovation framework.

First, regulators should adopt an adaptive governance approach: for example, implementing regulatory sandboxes for AI in climate adaptation allows policymakers to experiment with and refine rules as new AI tools emerge [21]. This flexible regulation, coupled with periodic reviews of the impacts of AI climate tools, ensures that policies keep pace with technological change.

Second, multi-stakeholder governance bodies need to be established to bridge expertise and perspectives. Governments could create task forces or working groups that include AI developers, climate scientists, ethicists, local community representatives, and policymakers [22], [23]. These bodies would co-design guidelines and decision-making protocols for AI use (e.g., setting standards for algorithmic transparency in national early warning systems) and ensure all voices are heard in governing AI’s rollout.

Third, investing in governance capacity and institutions is key to forming dedicated units within climate agencies focused on data and AI ethics, or empowering an independent oversight commission to audit and advise on AI-driven climate programs [24]. Such institutions would institutionalize governance innovation by continuously scanning for emerging risks (like new biases or security issues in AI models) and formulating agile responses.

Fourth, we emphasized iterative learning and feedback mechanisms: governance innovation should be an ongoing cycle in which the outcomes of AI deployment (successes and failures) are systematically evaluated and lessons are reflected in policy updates. For instance, if an AI-powered water management system underperforms in a drought due to unforeseen social factors, governance frameworks should be nimble enough to adjust operational protocols or standards of data in response.

For governments and international public agencies, the findings imply a need to strengthen governance frameworks and enact concrete policies that harness AI for the public good. National governments should establish clear ethical and operational guidelines for AI in climate adaptation; for instance, integrating FATE principles into national AI strategies and mandating impact assessments for high-stakes AI climate tools. Regulatory bodies could create adaptive regulations or sandboxes that allow climate-related AI innovations to be tested under oversight, thus enabling learning while managing risks. At the international level, governments should collaborate on setting standards for climate data sharing and AI model validation (through bodies like the UNFCCC), to ensure interoperability and fairness across borders. Public investment is also crucial: governments should invest in open, interoperable, and standardized environmental data infrastructures (e.g., nationwide climate data platforms and sensor networks) that AI systems can rely on. In addition, directing funding and technical support to build AI capacity in the public institutions of developing countries (through technology transfer programs and training) will help level the playing field and address global disparities.

For the private sector (enterprises and industries), an operational plan is needed to align business practices with climate adaptation goals using AI. Companies should incorporate climate resilience into their AI development and deployment. Adopting responsible AI practices, for instance, could ensure climate risk prediction tools are audited for bias (fairness) and aspects of their algorithms are rendered transparent to stakeholders like insurers or local authorities. Enterprises with significant climate expertise, such as tech firms, can partner with governments to co-develop open-access AI tools as public goods as some firms have done by open-sourcing data or AI models related to climate. Importantly, businesses should share relevant climate and weather data while respecting privacy and proprietary limits. Aggregated data on supply chains or operations could improve collective climate risk models. Corporate climate adaptation plans might include investing in AI-driven early warning systems for their facilities and supply chains, and offering those innovations to local communities around them. By treating climate adaptation as part of corporate social responsibility and risk management, enterprises not only protect their assets but also contribute to society’s broader resilience. Sector-specific associations in agriculture, energy, or finance could develop guidelines for their members; for example, insurance companies could agree on standard ways to use AI for assessing climate risks in underwriting, which spreads best practices across the industry.

For local communities, civil society, and non-governmental organizations, empowerment and inclusive action paths are recommended. Community-level actors should be involved as co-creators and beneficiaries of AI-driven adaptation. Expanding digital inclusion initiatives is paramount: programs should provide marginalized communities with affordable internet access, climate information services, and the digital literacy training required to use AI-powered tools (such as apps for weather alerts or farming advice). This ensures AI benefits reach those on the front line of climate change. We also recommend establishing participatory mechanisms in AI projects; for instance, local farmers’ cooperatives or indigenous groups could partner in designing an AI system for water management, thereby contributing their knowledge to ensure the tool fits local conditions. NGOs and community organizations could facilitate these processes by acting as bridges between technologists and residents, translating community needs into technical requirements, and vice versa. Operationally, this could involve setting up community advisory boards for major climate-AI initiatives (to voice concerns and evaluate outcomes) or running pilot projects where communities lead the deployment of a small-scale AI solution (such as a village-level flood alert network using AI forecasts). By supporting such grassroots adaptation efforts and feedback loops, policymakers and funders could ensure that AI adaptation is not only top-down but also bottom-up [23]. In summary, tailoring policy actions to each group in the public sector, private sector, and communities creates a multi-layered implementation pathway: government policies set the enabling environment; enterprises drive innovation and the mobilization of resources; and communities ground-truth and legitimize AI solutions on the front line of climate impact.

The findings also indicate that data inequality remains one of the most persistent barriers to effective AI-driven adaptation. Countries with limited environmental data infrastructures or restricted access to remote-sensing technologies are less capable of leveraging advanced analytical systems. This creates an adaptation divide in which high-income regions benefit disproportionately from AI-enabled climate services, while low- and middle-income regions struggle to integrate such tools into their national adaptation plans [24]. Addressing data inequality is therefore essential to ensuring that AI contributes to global resilience rather than deepening existing climate-related disparities. Likewise, addressing the current regional imbalance in climate adaptation research and practice is crucial. Many AI solutions and case studies to date originate in data-rich and higher-income settings, which can limit their transferability.

Another major implication is the need to integrate social inclusion into every stage of AI deployment. Communities, especially those most vulnerable to climate impacts, must be actively involved in the design, dissemination, and evaluation of AI-supported systems. Without meaningful local engagement, adaptation strategies may remain technically sound but socially disconnected, hence reducing their effectiveness and legitimacy [25]. Similarly, technical capacity alone is insufficient for successful implementation. AI deployment must be supported by strong institutional capacity, well-trained personnel, and public-sector readiness to integrate digital tools into policy cycles, monitoring systems, and long-term planning processes.

In summary, AI holds significant promise as a catalyst for resilient, inclusive, and sustainable climate adaptation. However, realizing this potential requires a comprehensive governance approach that balances technological advancement with ethical safeguards, institutional strengthening, and social justice. When deployed within such a framework, AI could evolve from a technical tool into a transformative enabler of equitable climate resilience.

Data Availability

The data used to support the research findings are available from the corresponding author upon request.

Conflicts of Interest

The author declares no conflicts of interest.

References
1. Y. Hua, X. Yin, and F. Wen, “Artificial intelligence and climate risk: A double machine learning approach,” Int. Rev. Financ. Anal., vol. 103, p. 104169, 2025.
2. S. M. Cheong, K. Sankaran, and H. Bastani, “Artificial intelligence for climate change adaptation,” WIREs Data Min. Knowl. Discov., vol. 12, no. 5, p. e1459, 2022.
3. M. Reichstein, V. Benson, J. Blunk, G. Camps-Valls, F. Creutzig, C. J. Fearnley, B. Han, K. Kornhuber, N. Rahaman, B. Schölkopf, et al., “Early warning of complex climate risk with integrated artificial intelligence,” Nat. Commun., vol. 16, no. 1, p. 2564, 2025.
4. W. Leal Filho, T. Wall, S. A. R. Mucova, G. J. Nagy, A. L. Balogun, J. M. Luetz, A. W. Ng, M. Kovaleva, F. M. S. Azam, F. Alves, et al., “Deploying artificial intelligence for climate change adaptation,” Technol. Forecast. Soc. Change, vol. 180, p. 121662, 2022.
5. E. Strubell, A. Ganesh, and A. McCallum, “Energy and policy considerations for deep learning in NLP,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 2019, pp. 3645–3650.
6. E. Nost, “Governing artificial intelligence, governing climate change?,” Geo: Geogr. Environ., vol. 11, no. 1, p. e00138, 2024.
7. A. Nordgren, “Artificial intelligence and climate change: Ethical issues,” J. Inf. Commun. Ethics Soc., vol. 21, no. 1, pp. 1–15, 2023.
8. P. Bauer, A. Thorpe, and G. Brunet, “The quiet revolution of numerical weather prediction,” Nature, vol. 525, pp. 47–55, 2015.
9. P. Henderson, J. Hu, J. Romoff, E. Brunskill, D. Jurafsky, and J. Pineau, “Towards the systematic reporting of the energy and carbon footprints of machine learning,” J. Mach. Learn. Res., vol. 21, no. 248, pp. 1–43, 2020. Available: https://www.jmlr.org/papers/v21/20-312.html
10. S. Mehryar, V. Yazdanpanah, and J. Tong, “AI and climate resilience governance,” iScience, vol. 27, no. 6, p. 109812, 2024.
11. Organisation for Economic Co-operation and Development (OECD), “OECD AI principles: Updated guidance on climate and sustainability,” 2024. https://www.oecd.org/en/topics/ai-principles.html
12. United Nations Framework Convention on Climate Change (UNFCCC), “Artificial intelligence for climate action in developing countries: Opportunities, challenges and risks,” 2024. https://unfccc.int/ttclear/misc_/StaticFiles/gnwoerk_static/AI4climateaction/28da5d97d7824d16b7f68a225c0e3493/a4553e8f70f74be3bc37c929b73d9974.pdf
13. Intergovernmental Panel on Climate Change (IPCC), Climate Change 2022: Impacts, Adaptation and Vulnerability. Cambridge, U.K.: Cambridge University Press, 2023.
14. World Bank Group, “Climate and development: An agenda for action,” 2022. https://openknowledge.worldbank.org/handle/10986/38220
15. L. Berrang-Ford, R. Biesbroek, and J. D. Ford, “A systematic global stocktake of evidence on human adaptation to climate change,” Nat. Clim. Change, vol. 11, no. 11, pp. 989–1000, 2021.
16. D. Patterson, J. Gonzalez, Q. Le, C. Liang, L. M. Munguia, D. Rothchild, D. So, M. Texier, and J. Dean, “Carbon emissions and large neural network training,” arXiv preprint arXiv:2104.10350, 2021. Available: https://arxiv.org/abs/2104.10350
17. R. Schwartz, J. Dodge, N. A. Smith, and O. Etzioni, “Green AI,” Commun. ACM, vol. 63, no. 12, pp. 54–63, 2020.
18. C. Huntingford, E. S. Jeffers, M. B. Bonsall, H. M. Christensen, T. Lees, and H. Yang, “Machine learning and artificial intelligence to aid climate change research and preparedness,” Environ. Res. Lett., vol. 14, no. 12, p. 124007, 2019.
19. H. Jain, R. Dhupper, A. Shrivastava, D. Kumar, and M. Kumari, “AI-enabled strategies for climate change adaptation,” Comput. Urban Sci., vol. 3, no. 1, p. 25, 2023.
20. D. Leslie, “Understanding artificial intelligence ethics and safety,” arXiv preprint arXiv:1906.05684, 2019.
21. U. Gasser and V. A. F. Almeida, “A layered model for AI governance,” IEEE Internet Comput., vol. 21, no. 6, pp. 58–62, 2017.
22. J. Fjeld, N. Achten, H. Hilligoss, A. Nagy, and M. Srikumar, “Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI,” Berkman Klein Center Research Publication No. 2020-1, 2020.
23. B. W. Wirtz, J. C. Weyerer, and C. Geyer, “Artificial intelligence and the public sector: Applications and challenges,” Int. J. Public Adm., vol. 42, no. 7, pp. 596–615, 2019.
24. J. Cinnamon, “Data inequalities and why they matter for development,” Inf. Technol. Dev., vol. 26, no. 2, pp. 214–233, 2020.
25. J. Stilgoe, R. Owen, and P. Macnaghten, “Developing a framework for responsible innovation,” in The Ethics of Nanotechnology, Geoengineering, and Clean Energy, Routledge, 2020, pp. 347–359.

Cite this:
Şentürk, Ö. (2026). AI-Driven Climate Adaptation: Technical Applications, Ethical Governance, and Social Inclusion. Acadlore Trans. Mach. Learn., 5(1), 20-31. https://doi.org/10.56578/ataiml050103
©2026 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.