From Automation to Policy Intelligence: A Comparative Study of Generative AI in Public Governance in China and the United States
Abstract:
This study examines how generative artificial intelligence (AI) is transforming public governance, shifting from process automation to policy intelligence. By comparing China and the United States, the research analyses how different governance logics, such as state-led centralisation and decentralised innovation, shape AI adoption in public administration. A qualitative comparative case study was conducted using government reports (such as the U.S. Blueprint for an AI Bill of Rights), think tank publications, and scholarly literature. The analysis applied thematic coding to trace trajectories of AI adoption, institutional roles, governance challenges, and strategic framings, interpreted through the frameworks of digital governance and adaptive governance. The two countries integrate AI into public governance in distinct ways. China has organised AI integration through government portals, legal frameworks, and smart cities, relying on high-capacity state coordination and integrated implementation mechanisms. The United States shows less structured but creative uses, with integration occurring at the agency level and an emphasis on ethical safeguards and workforce reform. Ethical issues vary by context: privacy and data-governance risks dominate the agenda in China, while bias and accountability dominate in the United States. The article contributes to knowledge by drawing on the comparatively less explored paradigm of policy intelligence and by presenting a comparative model that brings together structural integration and adaptive flexibility and their implications for international digital governance.
1. Introduction
Public governance has evolved significantly alongside advances in digital technologies, giving rise to overlapping concepts such as e-government, digital governance, and online governance. Early e-government initiatives primarily focused on digitizing administrative processes to enhance efficiency and reduce bureaucratic burdens through tools such as electronic tax payments, service portals, and e-licensing (Malodia et al., 2021). However, recent literature has highlighted a major shift from automation toward more intelligence-driven governance models enabled by artificial intelligence (AI) along with advanced data analytics (Zuiderwijk et al., 2021).
Based on this development, digital governance can be defined as the use of digital infrastructures, data systems, and AI-enabled platforms to reform decision-making, coordination, and accountability processes in public institutions (Grigalashvili, 2023). Adaptive governance, in turn, focuses on institutional flexibility, learning, and responsiveness to uncertainty (van Assche et al., 2021). Digital and adaptive governance are complementary frameworks: digital governance describes how AI can be institutionally embedded in public administration, whereas adaptive governance describes how organisations respond to the risks and uncertainties these technologies create.
Within this context, this study introduces policy intelligence as its central analytical concept. Policy intelligence is defined as the use of advanced AI systems, specifically generative AI, retrieval-augmented generation (RAG), and knowledge graphs, to support anticipatory analysis, scenario simulation, and evidence-informed policymaking, rather than merely automating routine administrative tasks (Gelashvili-Luik et al., 2025). The development of big data analytics, machine learning, and natural language processing has also facilitated the transition of governments toward higher-order governance functions such as predictive analytics, decision support, and interactive policy design (Chen et al., 2025; Zuiderwijk et al., 2021). Through these technologies, public institutions are able to synthesise vast amounts of data, experiment with policy options, and engage citizens through intelligent systems.
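To make this definition concrete, the following minimal sketch illustrates the retrieval-augmented generation pattern referred to above: policy-relevant snippets are retrieved by relevance to an analyst's question and assembled into a grounded prompt for a generative model. The corpus, the keyword-overlap retriever, and the generate() stub are purely hypothetical placeholders, not components of any system discussed in this study; a production deployment would rely on a vector index and a governed model endpoint.

```python
# Illustrative sketch only: a toy retrieval-augmented generation (RAG) loop over
# policy documents. All snippets and the generate() stub are hypothetical.
from collections import Counter

# Hypothetical corpus of policy-document snippets (placeholders, not real sources).
CORPUS = {
    "doc_a": "Interim measures require providers of generative AI services to label synthetic content.",
    "doc_b": "The risk framework recommends impact assessments before deploying automated decision systems.",
    "doc_c": "Smart-city platforms integrate traffic, flood, and emergency-response data for forecasting.",
}

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation (deliberately simple)."""
    return Counter(word.strip(".,").lower() for word in text.split())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank snippets by keyword overlap with the query and return the top k."""
    q = tokenize(query)
    scored = sorted(CORPUS, key=lambda d: sum((q & tokenize(CORPUS[d])).values()), reverse=True)
    return [CORPUS[d] for d in scored[:k]]

def generate(prompt: str) -> str:
    """Stand-in for a generative model call; simply echoes the assembled prompt."""
    return f"[model output would be generated from]\n{prompt}"

def policy_brief(question: str) -> str:
    """Assemble retrieved evidence into a grounded drafting prompt."""
    evidence = "\n".join(f"- {s}" for s in retrieve(question))
    prompt = f"Question: {question}\nEvidence:\n{evidence}\nDraft an evidence-based policy note."
    return generate(prompt)

if __name__ == "__main__":
    print(policy_brief("What safeguards apply to generative AI services?"))
```

The point of the sketch is the grounding step: the generative model is prompted with retrieved documentary evidence rather than asked to answer unaided, which is what distinguishes policy intelligence from free-form text generation.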
Meanwhile, current studies also outline both the opportunities and the threats of this transition. The issues scholars report most extensively in the context of AI-enabled governance include privacy, transparency, accountability, algorithmic influence, and ethical legitimacy (Beckman et al., 2022; Gesk & Leyer, 2022). These issues are not assumed analytically in this study; instead, they are treated as empirically grounded problems that influence institutional reactions to the adoption of AI. According to Gesk & Leyer (2022), the majority of AI applications in the public sector focus on automation and service delivery, while policy intelligence receives little consideration. This motivates the present work, which addresses not only the under-conceptualisation of policy intelligence but also the absence of comparative studies evaluating how different governance systems shape its formation. China and the United States are chosen as comparative cases because they represent contrasting governance principles and are world leaders in AI implementation.
China adheres to a state-led model characterised by central planning, nationwide coordination, and large-scale implementation of AI across governance domains, including smart cities and crisis management (Roberts et al., 2020). The United States, by contrast, follows a model of decentralisation and innovation leadership in which federal agencies, state governments, and the private sector experiment with AI under ethical and risk-based frameworks, including the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework (Pouget & O’Shaughnessy, 2023). For instance, Chinese smart-city systems use AI for predictive urban management, whereas U.S. agencies most commonly deploy AI chatbots and decision-support systems at the agency level (US Government Accountability Office, 2025; Xu et al., 2025). This divergence offers an effective opportunity to analyse how governance logics precondition the formation of AI-based policy intelligence.
Research Aims and Objectives
This research aims to analyse how generative AI can transform public governance from process automation to policy intelligence by comparing the smart government model of China with the decentralised innovation model of the United States.
• To assess how generative AI is institutionally embedded in public administration to support policy intelligence in China and the United States.
• To analyze how differing governance logics shape ethical risks, accountability mechanisms, and adaptive responses to AI deployment.
Research Questions
RQ1: How has generative AI been adopted to support policy intelligence in public administration in China and the United States?
RQ2: How do contrasting governance logics shape ethical challenges and institutional responses to AI-enabled policy intelligence?
2. Literature Review
The idea of e-governance has evolved over the past decades. Milakovich (2021) argued that the first stage, implemented between the 1990s and early 2000s, aimed at automating and digitalising administrative functions to eliminate bureaucratic inefficiencies. At this phase, the focus was service delivery, with tools such as online portals, tax systems, and licensing platforms developed to enhance efficiency and convenience for citizens.
Driven by advances in data analytics and machine learning, the second wave of e-governance introduced big data to enhance policy monitoring, forecasting, and decision support (Hossin et al., 2023). This period was characterised by the application of AI to streamline service delivery, with integration efforts aimed at predictive analytics in policing, health resource management, and city government.
Recent public administration research presents a further evolutionary narrative, showing that AI transforms not only administrative practices but also organisational routines and decision-making patterns. Mergel et al. (2019) asserted that AI-driven digital transformation in administration requires new institutional capabilities, data management, and professional skills. In the same vein, Wirtz et al. (2019) emphasised that the use of AI changes administrative discretion, accountability systems, and interactions between citizens and the state. More recent literature recognises a third stage in which e-governance develops into policy intelligence. At this stage, more sophisticated AI systems, such as generative AI, RAG, and knowledge graphs, do not merely automate tasks but assist in anticipatory analysis, scenario simulation, and evidence-based policymaking (Yun et al., 2024). These systems allow governments to synthesise large amounts of structured and unstructured data, test policy hypotheses, and generate policy-relevant findings. Bullock (2019) further indicated that AI is becoming part of policy advisory procedures, but without a well-defined institutional control framework. In this regard, the use of AI in public administration can be understood as a progression from automation to analytical enhancement and, eventually, a restructuring of how policy is made and government is managed.
AI in public administration can be explained through structural (digital governance) and adaptive governance theoretical approaches. In this research, the term “public administration AI” refers to AI systems deployed within public sector institutions for three purposes: (1) public service delivery and administrative automation, (2) decision support and analytical augmentation, and (3) AI as a governance infrastructure shaping coordination, accountability, and policymaking.
Digital governance theory provides a foundational framework for understanding AI as a governing infrastructure. According to Grigalashvili (2023), digital governance is conceptualised as the integration of digital technologies into administrative systems, decision-making, and state-citizen relationships. Extending this perspective, digital governance scholarship increasingly treats AI systems as infrastructural objects that condition interoperability, transparency, and coordination among governmental bodies. Janssen & Kuk (2016) assert that institutional redesign is necessary to govern data-driven, algorithmic coordination and decision-making across agencies. Similar concerns are raised by Margetts & Dunleavy (2013), who note that AI-driven digital-era governance transforms accountability relationships by introducing automated reasoning into administrative systems.
Adaptive governance provides a complementary analytical perspective. van Assche et al. (2021) define adaptive governance as an institutional response to uncertainty, complexity, and value conflict. In the AI context, adaptive governance helps explain how governments adjust legal frameworks, organisational practices, and ethical safeguards in response to algorithmic risks. Sun & Medaglia (2019) demonstrate that democratic systems of governance tend to focus more on ethical supervision and risk mitigation, which may slow the integration of AI into high-stakes policymaking.
This paper integrates the digital and adaptive governance models: digital governance describes the infrastructural embedding of AI in public administration, whereas adaptive governance captures institutional learning and ethical response.
The past few years have seen the growing application of generative AI in digital governance. Generative AI, in contrast to traditional AI systems, can produce new content, synthesise knowledge, and assist interactive decision-making (Albashrawi, 2025). Governments are experimenting with these tools in three areas: service optimisation, knowledge-based systems, and policy support.
The most common application is service optimisation. Yun et al. (2024) demonstrated that chatbots, natural-language question-answering systems, and translation services use generative AI to enhance interaction with citizens. For example, local governments in the United States have adopted AI-based virtual assistants to help citizens file taxes, benefits claims, and licensing applications (Shorey, 2025). Although these applications improve efficiency and accessibility, researchers point out that they remain largely confined to transactional governance rather than strategic policymaking.
Beyond service delivery, a more advanced application of generative AI is knowledge-based systems. Kibirige & Wandabwa (2025) argue that combining RAG with knowledge graphs enables governments to convert disaggregated data sets into usable knowledge frameworks for policy development. Knowledge graphs promote the interoperability of heterogeneous data sources by connecting them through common semantics (Aisopos et al., 2023). For example, the European Union has piloted AI-based knowledge graphs to improve policy consistency in environmental regulation (Gailhofer et al., 2021), and China has implemented analogous models in smart-city contexts (Zhu et al., 2024).
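As a purely illustrative sketch of the knowledge-graph idea described above, the snippet below represents heterogeneous governance data as subject-predicate-object triples so that records from different sources can be linked through a shared entity. The entities, relations, and the one-hop rule lookup are hypothetical assumptions made for exposition, not drawn from the EU or Chinese systems cited.

```python
# Minimal sketch: linking heterogeneous governance data through shared entities
# in a toy triple store. All entity and relation names are illustrative.

TRIPLES = [
    # From a hypothetical environmental register
    ("district_12", "reports_emission_level", "high"),
    ("district_12", "located_in", "city_north"),
    # From a hypothetical transport dataset, linked via the same district entity
    ("district_12", "has_congestion_index", "0.82"),
    # From a hypothetical regulatory register
    ("city_north", "subject_to", "clean_air_regulation"),
]

def neighbours(entity: str) -> list[tuple[str, str]]:
    """Return all (relation, object) pairs attached to an entity."""
    return [(p, o) for s, p, o in TRIPLES if s == entity]

def applicable_rules(entity: str) -> list[str]:
    """Follow 'located_in' links one hop to find regulations that apply upstream."""
    rules = []
    for p, o in neighbours(entity):
        if p == "located_in":
            rules += [obj for rel, obj in neighbours(o) if rel == "subject_to"]
    return rules

print(neighbours("district_12"))
print(applicable_rules("district_12"))  # -> ['clean_air_regulation']
```

The design point is the common semantics: because the environmental, transport, and regulatory records all refer to the same district identifier, a policy query can traverse across datasets that would otherwise remain siloed.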
The most contested applications are decision support and policymaking. According to Albashrawi (2025), generative AI can be used to simulate policies and produce regulatory analyses. The United States has experimented with AI-based regulatory impact assessment pilots (Kloeppel, 2023), and China has used predictive analytics to shape its crisis management response (Shangguan & Wang, 2022). Nevertheless, researchers warn against excessive reliance on AI systems: according to Beckman et al. (2022), overreliance on algorithmic outputs can erode transparency, fairness, and democratic legitimacy.
Comparative public administration studies indicate that national governance models influence the adoption and regulation of AI. Lodge & Wegrich (2014) suggest that administrative traditions condition states’ problem-solving capacities, including in technological governance. Berryhill et al. (2019) likewise illustrate that countries vary considerably in how they implement AI in public-sector governance, depending on institutional capacity and regulatory philosophy. The Chinese strategy for governing AI is part of the larger project of state-led modernisation. According to Kaiser (2024), the concept of smart government in China centres on large-scale, centrally organised integration of digital technologies to increase state capacity and policy control. Creemers (2018) and Hoffman (2019) argue that AI use in China enhances bureaucratic coordination and anticipatory regulation through predictive analytics and integrated governance platforms. These studies suggest that the Chinese model applies AI to centralised policy intelligence, not merely administrative efficiency.
The United States, by contrast, is decentralised and innovation-oriented (Hambrice, 2025). Kettl (2020) emphasised the fragmented technology adoption that results from combining federalism with agency autonomy. Federal instruments, including the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework, focus on ethical safeguards and risk management (The White House, 2023). This governance logic promotes experimentation but limits the systemic integration of AI into policy intelligence.
While prior studies document these contrasting approaches, few explicitly compare how governance logics shape the transition from automation to policy intelligence. This gap motivates the comparative analysis undertaken in this study.
While the existing literature has assessed AI-enabled service automation and digital service delivery in depth, considerably less research expressly conceptualises the shift to policy intelligence. Additionally, few comparative studies link the adoption of generative AI to underlying governance logics. Previous research tends to focus either on state-led centralised models or on democratic settings, and does not explain how institutional structures condition AI’s role in policymaking.
This study addresses these gaps by introducing policy intelligence as an analytical lens and by comparatively examining China and the United States as contrasting but globally influential cases.
3. Research Methodology
This study employed a qualitative comparative case study design grounded in interpretive policy analysis to analyse generative AI-driven transformations in public governance in China and the United States. The qualitative approach helps generate subtle insights (Lim, 2024) into governance processes, institutional responses, and ethical concerns rather than statistical generalisations. The interpretive nature of qualitative research also supports analysing AI adoption as a socially embedded process driven by political institutions, regulatory frameworks, and cultural contexts. Consequently, through the comparative analysis of China and the US, this study generates insights into how divergent governance models shape the trajectory from automation to policy intelligence.
The study relies on systematic document analysis as its primary qualitative method. Data sources include official policy documents, regulatory frameworks, think tank reports, and peer-reviewed academic literature from both China and the United States. Documents were selected based on relevance, authority, and publication between 2020 and 2025. The research is based on secondary data sources comprising official policy documents, such as the Interim Measures for the Management of Generative AI Services and the U.S. Blueprint for an AI Bill of Rights; peer-reviewed articles, including Feng et al. (2025) and Zhang & Li (2025); and think tank reports such as those by Brookings and Deloitte. As Cheong et al. (2023) note, secondary data is information collected and published for other purposes and available in the public domain.
As the paper employs a comparative case study approach, China and the US have been selected because of the contrasting nature of their political systems and their large-scale deployment of AI in governance. China has a state-led, centralised governance system, whereas the US has a democratic system with multiple layers of governance, including federal agencies and state governments. Both countries also have the advanced technological infrastructure and resources to redefine public administration through AI. China and the US therefore present a contrasting and compelling case study for this research, as China’s centralised, state-driven model of AI governance and the US’s decentralised, innovation-led model provide an insightful pathway for policymakers to identify the impact of political systems and institutional logics on the adoption and governance of generative AI.
Data have been collected from multiple sources to ensure comprehensiveness and credibility. For both China and the US, the data include official policy documents, published articles, white papers, and scholarly articles. Together, these documents represent the official policy trajectory and scholarly critique of China’s state-led approach to AI in governance. For the US, the combination of sources ensures that government priorities and independent perspectives are incorporated into the analysis. In this context, five to seven documents for each country were analysed to balance manageability and comprehensiveness, enabling systematic comparison without diluting analytical depth.
The thematic analysis followed a multi-stage process. First, the researcher became familiar with the policy narratives by reading the documents repeatedly. Second, open coding was used to identify recurring ideas related to AI adoption, decision-making responsibilities, and governance risks. Third, the codes were grouped into higher-order themes consistent with the research questions: public service improvement, policy intelligence, ethical risks, and governance regulation. Lastly, cross-source comparison and triangulation between government and independent analyses were used to validate the themes.
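For transparency about the third step of this process, the sketch below shows, in schematic form, how open codes can be grouped into the four higher-order themes and tallied per country. The code labels and their theme assignments are hypothetical placeholders rather than the study’s actual coding frame, which was developed manually by the researcher.

```python
# Illustrative sketch of grouping open codes into higher-order themes and
# tallying them per country. Codes and assignments are hypothetical placeholders.
from collections import Counter

THEME_MAP = {
    "chatbot_services": "Public Service Enhancement & Capacity Building",
    "workforce_reskilling": "Public Service Enhancement & Capacity Building",
    "scenario_simulation": "Policy Intelligence & Decision Support",
    "predictive_policing": "Smart Cities & Ethical Risks",
    "privacy_concerns": "Smart Cities & Ethical Risks",
    "national_ai_strategy": "Strategic Framing & Governance Regulation",
}

# Hypothetical open codes extracted from the documents for each country.
open_codes = {
    "China": ["chatbot_services", "scenario_simulation", "predictive_policing", "national_ai_strategy"],
    "United States": ["chatbot_services", "workforce_reskilling", "privacy_concerns", "national_ai_strategy"],
}

def theme_counts(codes: list[str]) -> Counter:
    """Map each open code to its higher-order theme and count occurrences."""
    return Counter(THEME_MAP[c] for c in codes)

for country, codes in open_codes.items():
    print(country, dict(theme_counts(codes)))
```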
Credibility is enhanced through triangulation, as government-issued documents are evaluated alongside independent analyses from academics and think tanks. This reduces reliance on a single perspective and strengthens the reliability of findings. Academic integrity is ensured by citing all sources appropriately. The study acknowledges limitations, particularly the dependence on qualitative data, which may not capture all aspects of policymaking or implementation. However, restricting the dataset to roughly ten strong scholarly and governance sources, balanced across the two countries, mitigates this risk by ensuring consistency, comparability, and depth of analysis.
4. Findings
In this section, themes have been inductively derived through thematic analysis of secondary sources. Table 1 summarises the empirical material supporting each theme and provides the foundation for the comparative interpretation that follows.
Data Source | Key Quotation | Theme |
Lu (2025) | Also, generative AI has enhanced the efficiency and accuracy of government services. For instance, the Hangzhou healthcare security bureau has developed an AI-powered integrated service platform called Xiaozhi, which enables online processing of healthcare security services. The rapidly expanding generative AI also significantly enhances cross-departmental collaboration. Besides, open consultation platforms powered by generative AI make it more convenient for diverse stakeholders, including businesses and the public, to participate in policy design. | Public Service Enhancement & Capacity Building; Policy Intelligence & Decision Support |
Zhang & Li (2025) | From a correlation perspective, the average elevation of a city influences the region's infrastructure construction costs, the efficiency of information technology deployment, and the city's ability to attract high-end talent and business clusters to some extent. More broadly, the governance value of AI depends on the degree of coupling between AI systems and local governance structures. A “technology embedding assessment index system” could be established to monitor the real application performance of AI systems in administrative processes, thus promoting institutional reforms and creating space for embedded AI governance. | Public Service Enhancement & Capacity Building; Smart Cities & Ethical Risks |
Feng et al. (2025) | China’s governance modernization has entered a critical phase characterized by the integration of digital technologies into core administrative functions. EU smart governance frameworks' emphasis on resource optimization over technological novelty. China’s governance intelligence assessment has evolved through distinct developmental phases spanning four decades—progressing from office automation evaluation to internet-integrated government service assessment. | Policy Intelligence & Decision Support |
Zhuang (2025) | Data handling concerns and a lack of community engagement are addressed in different ways, with Beijing district a model, analysts said. According to the report, the AI puts a laser focus on patrolling Nanjing’s blind spots – the areas generally ignored by human patrols. | Smart Cities & Ethical Risks |
World Economic Forum (2025) | China provides insights into how nations can align strategy, innovation, and ecosystem development to harness AI's transformative potential. China’s trajectory in AI is underpinned by a structured and phased approach. By integrating AI technologies such as digital twins, predictive maintenance and generative AI, industries such as manufacturing, healthcare, transportation, retail and energy are witnessing transformative advancements. | Strategic Framing & Governance Regulation |
Maier (2021) | The US government has focused on the utilization of artificial intelligence (AI) and machine learning (ML) within the government and across the nation. National Artificial Intelligence Initiative Act of 2020, became law on January 1, 2021, offers coordinated program across the entire federal government to accelerate AI research and application. Furthermore, AI is instrumental in enhancing the resilience of government supply chains against disruptions. | Public Service Enhancement & Capacity Building |
Deloitte (2025) | The city of Sioux Falls deployed an AI, IoT and cloud-based platform (Coronavirus Emergency Response (CoVER) platform) to apply the vast amounts of data within their systems to mitigating the impact of the virus. We’re working with tech companies and municipalities like Jersey City in the United States to help unlock the power of this data in an ethical and anonymised way. | Public Service Enhancement & Capacity Building; Smart Cities & Ethical Risks |
Shorey (2025) | Agencies have announced “AI-first” strategies following a federal hiring freeze and the buyout, firing, or resignation of 23,000 federal workers. Public administration workers are the contact point between constituents and government, initiating bureaucratic processes and facilitating access to benefits programs. When chatbots are used to facilitate these tasks, they typically link to secondary services that allow users to look up details about their case online. | Public Service Enhancement & Capacity Building |
The White House (2023) | To advance President Biden’s vision, the White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways. | Policy Intelligence & Decision Support |
Davtyan (2025) | Recognition of potential improvement in workplaces by AI has sparked rapid experimentation across a range of U.S. federal agencies, despite continued challenges related to data quality and conflicting or unclear regulations and standards, among other issues. To address Americans’ growing skepticism about AI, it remains critical to focus on reducing risks before deploying technological solutions. Voluntary safety standards have been implemented for design, deployment and oversight of AI system. | Smart Cities & Ethical Risks |
White House (2025) | The Trump Administration is committed to strengthening American leadership in artificial intelligence (AI). To oversee and implement the U.S. national AI strategy, the White House established the National Artificial Intelligence Initiative Office in early January 2021. These AI investments continue to emphasize the broad spectrum of challenges in AI, including core AI research, use-inspired and applied AI R&D, computer systems research in support of AI, and cyberinfrastructure and datasets needed for AI. | Strategic Framing & Governance Regulation |
Table 2 presents four themes developed from the collected secondary data. These themes address the two research questions by (a) examining how generative AI supports policy intelligence in public administration (Themes 1 and 2) and (b) analysing how governance logics shape ethical risks and institutional responses (Themes 3 and 4).
Theme | China—Key Insights | U.S.—Key Insights |
1. Public Service Enhancement & Capacity Building | AI expands efficiency and inclusivity in government services such as chatbots, translation, and 24/7 access (Lu, 2025). Empirical evidence (337 cities, 2018–2024) shows AI improves digital service capacity via tech investment and human capital, though unevenly regionally (Zhang & Li, 2025). Smart governance platforms framed as modernisation drivers (Zhang & Li, 2025). | AI is applied in fraud detection, healthcare, and disaster response; local pilots in traffic and waste management (Deloitte, 2025; Maier, 2021). Workforce adaptation (training, new skills) is identified as essential for sustainable AI use (Shorey, 2025). Strong focus on efficiency, cost savings, and responsiveness. |
2. Policy Intelligence & Decision Support | AI supports predictive governance in legal-political systems (courts, policing, justice) (Feng et al., 2025). Generative AI enables scenario simulations and policy drafting (Lu, 2025). China positions AI as critical to governance modernisation. | America’s AI Action Plan 2025 promotes federal AI adoption across agencies for competitiveness & modernisation (The White House, 2023). AI is framed as infrastructure for cross-agency decision-making and policy support. U.S. approach emphasises innovation-driven, decentralised decision support. |
3. Smart Cities & Ethical Risks | “City brain” projects manage traffic, policing, and disaster control (Zhuang, 2025). Ethical concerns such as privacy, data handling, and uneven regional access to AI benefits. Path dependence and institutional inertia limit adaptability (Zhang & Li, 2025). | Local pilots enhance predictive urban governance but raise accountability questions (Deloitte, 2025). Risks of algorithmic bias, fragmentation across states, and lack of transparency in federal systems (Davtyan, 2025). Citizen trust is linked to transparency and fairness. |
4. Strategic Framing & Governance Regulation | National AI strategy, such as Generative AI Service Measures, AI Action Plan 2025, balances innovation and tight regulation (World Economic Forum, 2025). China frames AI as part of state-led modernisation and global leadership ambitions. Governance logic emphasises centralisation, integration, and legitimacy. | U.S. strategic framing highlights AI as central to national leadership and democratic values (White House, 2025). Narrative connects AI to innovation, free speech, and global competitiveness. The federal government plays a dual role: innovation enabler and regulatory safeguard. |
Zhang & Li (2025) revealed that China has used AI extensively for public service delivery and capacity building, as AI improves the digital service capacity of local government through technological investment and human capital accumulation. Although the impact of this strategy is regionally uneven, China has clearly used AI in policymaking for intelligent management. In this context, Lu (2025) also noted that generative AI chatbots and translation systems have expanded citizen access to government services, enhancing round-the-clock administrative responsiveness and reducing administrative burdens. Consequently, AI in government in China has improved administrative efficiency and also become a driver of inclusivity, which aligns with its modernisation agenda.
Comparatively, the US has integrated AI into service delivery through a range of agency-specific and decentralised initiatives. For example, federal agencies have adopted AI for fraud detection, healthcare optimisation, and disaster response, while local governments experiment with AI for traffic control, waste collection, and predictive emergency planning (Deloitte, 2025; Maier, 2021). Unlike in China, AI deployment in US governance is fragmented across municipalities and states, although it is framed in terms of cost efficiency, responsiveness, and workforce augmentation. Workforce adaptation also emerged as a major aspect of capacity building, as Shorey (2025) emphasised that sustainable adoption requires retraining and reskilling government workers to work with AI-supported systems. Thus, even though both the US and China pursue efficiency and responsiveness, China focuses on system-wide integration, whereas the US emphasises agency-specific pilots coupled with human capital development.
Beyond service delivery, China has also demonstrated the use of AI in policy intelligence and decision support. For example, Feng et al. (2025) examined political-legal systems and identified the integration of AI into courts, policing, and justice workflows, where data-driven platforms enable predictive governance and decision automation. Generative AI has also supported policymakers by simulating administrative scenarios and drafting regulatory texts, which demonstrates the integration of AI into governance functions traditionally dominated by human bureaucrats (Lu, 2025). These examples indicate that policy-embedded predictive governance has been accompanied by documented transparency mechanisms and channels for public engagement in data-intensive settings. In short, the governance model in China shows that AI is strategically integrated across service delivery, decision-making, and urban systems, underpinned by a strong regulatory apparatus and a state-led vision for smart governance.
The US, however, has adopted a pluralistic system in which agencies independently test and deploy AI, resulting in fragmented but innovative applications. The White House (2025) emphasised the need for federal agencies to adopt AI to enhance national competitiveness and modernise administrative processes. Actual adoption remains fragmented, as different agencies experiment with AI in their own ways, which fosters innovation but can limit the coherence of AI-supported policymaking. In this context, Davtyan (2025) identified ethical challenges in the US discourse, such as algorithmic bias, black-box decision-making, and lack of accountability in AI-enabled governance, which can impair the efficiency of decision-making. Therefore, while China demonstrates coordinated, system-level integration, the US focuses on agency-led innovation with rights-based guardrails.
With initiatives such as the “city brain” project, China has pioneered AI-driven urban management (Zhuang, 2025). Such urban management platforms integrate data from traffic systems, policing, and environmental analysis to enable predictive management in urban centres. AI is used for forecasting floods, optimising traffic flows, and coordinating emergency responses, demonstrating large-scale deployment beyond service delivery. However, ethical concerns accompany these innovations: Zhang & Li (2025) noted that the government’s considerations include privacy safeguards, structured public-participation mechanisms, and equitable capability across regions. Institutional inertia and path dependence further limit the adaptive capacity of some regions to fully exploit AI’s potential.
On a smaller and more experimental scale, the US has also trialled AI in urban governance. Deloitte (2025) described local-level pilots applying AI to city operations, including waste management, traffic optimisation, and emergency preparedness. These pilots show promising predictive capability but also reveal fragmentation across municipalities. In the US case, the major ethical risks identified include algorithmic bias, accountability, and citizen trust (Davtyan, 2025). In contrast to the privacy and surveillance concerns frequently raised in scholarly and policy discussions about smart-city deployments in China, the US discourse centres on fairness and transparency in algorithmic decision-making. As a result, smart-city initiatives in both China and the US illustrate how AI can be applied to predictive governance, although the ethical debates vary with political and cultural context.
China has positioned AI as part of a state-led strategy of modernisation and global leadership. For example, the Interim Measures for the Management of Generative AI Services (2023) and the Global AI Governance Action Plan (2025) are national strategies that emphasise a dual agenda of encouraging AI innovation and ensuring strict regulatory control (World Economic Forum, 2025). This reflects a governance logic of centralisation, integration, and legitimacy. Smart government functions as an ideological project that aligns technological innovation with administrative reform and national development in China.
Comparatively, the U.S. frames AI as both an administrative innovation and a geopolitical imperative. The Artificial Intelligence for the American People initiative (2020) portrayed AI development as a priority for national competitiveness and technological leadership (White House, 2025). The AI Action Plan similarly frames AI adoption as key to national and global leadership (The White House, 2023). The US government adopts a two-pronged strategy of stimulating innovation through deregulation while protecting democratic governance through ethical guidelines. Therefore, relative to China’s centrally coordinated regulatory approach, the US has adopted a hybrid strategy that combines innovation enablement with institutional checks.
5. Discussion
The discussion section critically interprets the empirical findings in relation to the two research objectives.
The first research objective aimed at assessing how generative AI is institutionally embedded in public administration to support policy intelligence in China and the United States. The findings indicate that AI deployment does not automatically create policy intelligence; rather, policy intelligence is conditioned by how technologies are integrated into governing frameworks.
Generative AI in China is embedded through centrally coordinated policies, smart government programmes, and integrated data infrastructure. This trend is highly consistent with digital governance theory, which stresses the importance of interoperable digital systems, hierarchical coordination, and control of infrastructure to improve state capacity (Zuiderwijk et al., 2021). The Chinese case illustrates the role that generative AI can play in anticipatory governance activities, such as predictive urban management, crisis response, and strategic policy planning. The findings thereby extend digital governance theory by demonstrating that AI-enabled policy intelligence represents a further stage of digital governance, moving beyond service automation toward predictive, knowledge-based decision-making (Malodia et al., 2021).
However, the findings also refine digital governance theory by highlighting its limitations. Although centralised embedding creates scale and coherence, it also results in a high degree of informational consolidation within state institutions. This raises concerns about transparency, contestability, and limited institutional learning that are less evident in the conventional digital governance paradigm (Gesk & Leyer, 2022). Therefore, policy intelligence produced through strong digital governance may emphasise efficiency and control rather than reflexivity and deliberation.
In contrast, the adoption of generative AI in the United States is decentralised and experimental, occurring at the agency level and informed by documents such as the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework. This approach reflects adaptive governance theory, which emphasises flexibility, learning, and responsiveness in the face of an uncertain technological environment (van Assche et al., 2021). The results demonstrate that American agencies use generative AI in decision support, knowledge retrieval, and pilots rather than in system-wide integration. These results refine adaptive governance theory by highlighting a key trade-off: while decentralisation facilitates institutional learning and ethical awareness, it constrains the systemic creation of policy intelligence. Fragmented adoption limits the capacity to integrate information and coordinate across agencies, reducing the possibility of operationalising experimentation into long-term, large-scale policy intelligence (Beckman et al., 2022). Adaptive governance thus permits cautious innovation yet may slow the consolidation of AI as a strategic policy tool.
The second research objective was to examine how contrasting governance logics shape ethical dilemmas, accountability mechanisms, and adaptive responses to AI-enabled policy intelligence. The findings show that ethical risks and governance responses are not merely technical issues but symptoms of deeper institutional norms and power structures.
The governance logic in China emphasises administrative efficiency, system stability, and outcome-oriented governance performance. In this respect, the ethical risks of privacy, surveillance, and algorithmic opacity are primarily addressed through internal regulatory mechanisms and formal administrative processes. This practice is linked to a digital governance logic in which accountability is vertically organised and legitimacy is based on policy performance (Roberts et al., 2020). The findings therefore challenge the assumption in the literature that ethical governance of AI requires participatory and decentralised governance. Rather, the model exemplified by China demonstrates how administrative hierarchies integrate ethical oversight within formal governance structures.
In contrast, the governance logic of the United States is based on legal accountability, the safeguarding of civil rights, and institutional pluralism. Ethical risks are addressed through external oversight, rules, and norms rather than central command. This is consistent with adaptive governance theory, which emphasises reflexivity, stakeholder participation, and normative contestation (Pouget & O’Shaughnessy, 2023). However, the findings show that the rights-based approach is also associated with governance fragmentation, since different agencies interpret and enforce ethical principles unevenly.
The comparative analysis explains why these differences arise. The centrally coordinated governance model of China ensures rapid and coherent AI deployment, while accountability mechanisms are primarily managed within established institutional frameworks. The U.S. system, defined by the separation of powers and decentralised authority, focuses more on ethical safeguards, which can create coordination challenges among agencies. These contrasting outcomes demonstrate trade-offs between control and accountability, and between coherence and adaptability, in the pursuit of policy intelligence.
The findings contribute to theory in several ways. First, they advance digital governance theory by identifying policy intelligence as a more advanced domain of AI-enabled governance, characterised by predictive analytics and anticipatory decision-making rather than automation alone. Second, they develop adaptive governance theory by showing that flexibility and learning alone are inadequate, and that stronger coordination mechanisms are necessary for systemic policy intelligence. Above all, the paper proposes policy intelligence as an integrative analytical concept that emerges from the tension between digital governance and adaptive governance. It demonstrates that balancing infrastructural integration with ethical reflexivity and institutional learning is key to effective AI governance. This conceptual synthesis directly addresses calls in the literature for more sophisticated and comparative approaches to AI governance (Gesk & Leyer, 2022; Zuiderwijk et al., 2021).
In practice, the research findings provide useful policy implications for policymakers. In the case of China, continued attention to transparency mechanisms, procedural clarity, and structured stakeholder engagement would help sustain centrally coordinated policy intelligence systems. In the case of the United States, better inter-agency coordination and shared data infrastructure could enable better use of generative AI without compromising ethical protections.
In a broader context, the findings suggest that governments seeking to adopt generative AI should avoid purely technical solutions and instead align AI deployment with institutional capacity, governance norms, and ethical priorities.
6. Conclusion
This study set out to assess how generative AI is transforming public governance from process automation toward policy intelligence through a comparative analysis of China and the United States. By adopting policy intelligence as its central analytical lens, the paper contributes to theory, methodology, and practice in three distinct ways.
Theoretically, the paper contributes to the current body of research on AI-enabled governance by conceptualising policy intelligence as an advanced phase of digital governance. Although most previous studies have concentrated on service automation and efficiency, the present research shows that generative AI is increasingly assisting anticipatory analysis, scenario modelling, and strategic decision-making. In addition, the findings refine both frameworks by integrating digital governance and adaptive governance. The analysis indicates that digital governance provides scale and coherence but is prone to rigidity and opacity, while adaptive governance provides ethical reflexivity and learning but may limit systemic policy intelligence because of fragmentation. In doing so, the paper augments theoretical arguments by framing policy intelligence as a product of governance design rather than of technology alone.
The study makes a methodological contribution by demonstrating the value of qualitative comparative analysis based on systematic document review for studying emerging AI governance. Through thematic analysis of policy documents, regulatory frameworks, and governance reports, the research provides a clear and replicable method for studying AI adoption where access to primary data is restricted. This procedure is particularly applicable to comparative governance research on politically and institutionally dissimilar systems such as China and the United States.
Practically, the findings yield concrete implications for policymakers and practitioners. In the case of China, addressing the ethical risks of large-scale policy intelligence systems requires supplementing centrally coordinated AI use with more robust transparency measures, more precise accountability criteria, and at least limited forms of public or expert oversight. In the case of the United States, the research identifies the need to enhance cross-agency coordination, shared data infrastructures, and common implementation standards so that decentralised experimentation can be translated into sustained policy intelligence capacity without undermining ethical protections. More broadly, governments aiming to implement generative AI should align technological deployment with institutional capacity, legal frameworks, and ethical priorities rather than treating AI as a stand-alone technical solution.
Despite these contributions, the study has limitations that warrant critical reflection. The reliance on secondary sources limits the ability to capture informal institutional practices, day-to-day decision-making processes, and the everyday interaction between policymakers and AI systems. Official documents may also reflect aspirational accounts rather than implementation outcomes. In addition, the focus on two leading AI powers constrains the generalisability of results to other governance environments, especially developing or hybrid political systems.
Future research should address these limitations by including primary data sources, such as interviews with policymakers, civil servants, and technical experts, and ethnographic or organisational studies of AI use in public agencies. Longitudinal studies could also explore the dynamics of policy intelligence over time as regulatory frameworks, societal demands, and AI capabilities evolve. It would also be worthwhile to broaden the comparative analysis to other political systems in order to better understand how institutional diversity affects AI-enabled governance.
Overall, the governance of generative AI is not just a technical issue but an institutional one. By foregrounding policy intelligence as a central analytical concept, the paper contributes to more informed, theoretically grounded, and practically sound discussions about the future of AI in government.
The data used to support the research findings are available from the corresponding author upon request.
The author declares no conflict of interest.
