Human Behavioral Dynamics in AI-Assisted Decision Making: An Integrated SWOT–AHP–TOPSIS Analysis
Abstract:
The rapid diffusion of artificial intelligence (AI) into decision-making processes has raised critical questions about how AI reshapes human behavior, judgment, and responsibility. While existing studies often emphasize technical performance, less attention has been given to the behavioral dynamics that emerge when humans interact with AI-supported systems. This study addresses this gap by proposing an integrated Strengths, Weaknesses, Opportunities, and Threats–Analytic Hierarchy Process–Technique for Order Preference by Similarity to an Ideal Solution (SWOT–AHP–TOPSIS) framework to systematically evaluate the behavioral impact of AI-assisted decision making. First, key behavioral factors are identified using SWOT analysis, where strengths and weaknesses represent internal human behavioral traits, and opportunities and threats capture external and contextual influences related to human–AI interaction. These factors are then weighted using AHP based on expert judgments, with consistency checks ensuring methodological reliability. Finally, TOPSIS is applied to rank three AI-assisted decision scenarios—human-dominant, shared-control, and AI-dominant decision making—according to their overall behavioral performance. The results indicate that behavioral weaknesses, such as over-reliance on AI and reduced critical thinking, exert the strongest influence on decision quality. Among the evaluated scenarios, human-dominant decision making achieves the highest closeness coefficient, followed by shared-control and AI-dominant scenarios. Sensitivity analysis confirms the robustness of these rankings under reasonable variations in criterion weights. Methodologically, this study demonstrates that the SWOT–AHP–TOPSIS approach, traditionally used in strategic and operational research, can be effectively adapted to behavioral and socio-technical contexts. Substantively, the findings highlight the importance of preserving human cognitive agency in AI-assisted environments. The proposed framework offers a practical and theoretically grounded tool for researchers, designers, and policymakers to assess and guide the behavioral implications of AI-supported decision systems.
1. Introduction
The rapid advancement of artificial intelligence (AI) has fundamentally transformed the way decisions are made across organizational, managerial, and societal contexts. Increasingly, decision makers no longer rely solely on their own judgment but interact with AI-assisted decision systems that provide recommendations, predictions, or ranked alternatives [1]. These systems are now widely adopted in domains such as strategic planning, risk assessment, resource allocation, and policy analysis, where decision environments are complex and characterized by multiple, often conflicting criteria.
Although AI-assisted decision making is frequently promoted for its potential to enhance efficiency, accuracy, and consistency, its implications extend well beyond technical performance. The integration of AI into decision processes inevitably reshapes how humans perceive problems, evaluate alternatives, and assume responsibility for outcomes [2]. Rather than functioning as neutral tools, AI systems actively influence human cognition, trust, autonomy, and reliance patterns during decision making [3]. Consequently, assessing decision quality alone is insufficient; it is equally important to understand how AI assistance alters human decision behavior.
From a behavioral perspective, interaction with AI introduces new dynamics that challenge traditional assumptions of rational decision making. Decision makers may over-rely on algorithmic recommendations, defer responsibility to automated systems, or experience diminished cognitive engagement when AI outputs are perceived as authoritative [4]. At the same time, AI systems may reduce cognitive overload, support reflective thinking, and help mitigate certain cognitive biases under appropriate conditions [5]. These dual and sometimes contradictory effects underscore the need for systematic approaches capable of capturing the behavioral consequences of AI-assisted decision making.
Despite increasing scholarly attention to AI-assisted decision systems, much of the existing research remains concentrated on algorithmic performance [6], optimization accuracy [7], or technical efficiency [8]. Such studies overlook the complex behavioral processes that shape how humans actually interact with AI in real-world decision contexts. As a result, a methodological gap persists between rich behavioral insights and formal decision-analytic tools.
Although prior research acknowledges that AI systems affect decision behavior [9], [10], empirical studies offering structured, multi-criteria evaluations of these behavioral impacts remain limited. Existing approaches tend to focus on isolated constructs—such as trust in AI or reliance on automation—without integrating them into a holistic analytical structure. Moreover, few studies explicitly bridge behavioral theory with decision-analytic methodologies to quantify and compare behavioral effects across alternative AI-assisted decision conditions.
In response to this gap, the present study aims to evaluate the behavioral impact of AI-assisted decision making through a structured, multi-criteria analytical approach. Specifically, it seeks to examine how AI-assisted decision systems influence human decision behavior in complex decision environments, to identify and assess the relative importance of key behavioral factors shaping human–AI interaction, and to propose an integrated framework that enables systematic comparison of behavioral impacts across different AI-assisted decision scenarios. To achieve these objectives, this study employs an integrated Strengths, Weaknesses, Opportunities, and Threats–Analytic Hierarchy Process–Technique for Order Preference by Similarity to an Ideal Solution (SWOT–AHP–TOPSIS) approach, not as an end in itself, but as a means to explicitly model, weight, and interpret behavioral dimensions within AI-assisted decision contexts.
Accordingly, this study addresses the following research questions: (RQ1) How does AI-assisted decision-making influence human decision behavior in complex decision contexts? (RQ2) Which behavioral factors play the most significant role when humans interact with AI-assisted decision systems? and (RQ3) How can an integrated SWOT–AHP–TOPSIS approach be employed to systematically evaluate the behavioral impacts of AI-assisted decision making?
This study contributes to the literature in several important ways. First, it advances theoretical understanding by offering a structured behavioral perspective on how AI-assisted decision systems shape human decision behavior, moving beyond performance-oriented evaluations. Second, it demonstrates how an integrated SWOT–AHP–TOPSIS framework can be adapted to capture and quantify behavioral dimensions within AI-assisted decision contexts, thereby bridging behavioral science and decision-analytic methods. Third, the findings provide practical insights for the design and implementation of AI-assisted decision systems that align technological support with human behavioral tendencies and cognitive constraints. Finally, by integrating concepts from behavioral science and multi-criteria decision analysis, this research contributes to interdisciplinary discussions on the responsible and effective use of AI in decision-making environments.
2. Literature Review and Theoretical Background
The study of human decision making has long recognized that individuals do not behave as perfectly rational agents [11]. Classical rational choice models assume stable preferences and full information processing capabilities; however, behavioral decision-making research demonstrates that human decisions are shaped by cognitive limitations, heuristics, and contextual influences [12], [13], [14]. Concepts such as bounded rationality, cognitive bias, risk perception, and subjective judgment play a central role in explaining why human decision behavior often deviates from purely optimal solutions. These behavioral characteristics are particularly salient in complex decision environments, where multiple criteria, uncertainty, and time constraints place significant cognitive demands on decision makers.
Within this behavioral perspective, decision support systems have traditionally been designed to reduce cognitive load and improve decision quality by structuring information and highlighting relevant criteria [15]. Nevertheless, the introduction of AI into decision support fundamentally alters this dynamic. AI-assisted decision systems do not merely organize information; they generate recommendations, predictions, and rankings that may carry an implicit sense of authority. As a result, human decision behavior becomes increasingly intertwined with algorithmic outputs, raising important questions about autonomy, accountability, and behavioral adaptation [16].
Research on human–AI interaction emphasizes that trust is a critical mediating factor in AI-assisted decision making [17]. Appropriate trust calibration enables users to rely on AI systems when they are reliable and to exercise independent judgment when AI recommendations are uncertain or flawed. However, miscalibrated trust can lead to over-reliance, automation bias, or underutilization of AI support. Studies have shown that individuals often attribute higher credibility to algorithmic recommendations than to human advice, even when the performance of the AI system is imperfect [18]. This tendency may reduce critical scrutiny and shift responsibility away from human decision makers, thereby reshaping decision behavior.
In parallel, research on automation bias and cognitive offloading highlights the behavioral risks associated with AI assistance [19]. Automation bias refers to the propensity of humans to favor suggestions from automated systems while ignoring contradictory information, whereas cognitive offloading describes the transfer of cognitive effort from humans to technological systems. While cognitive offloading can reduce mental workload and improve efficiency, excessive reliance on AI may diminish situational awareness, learning, and long-term decision competence. These findings suggest that AI-assisted decision-making produces both beneficial and detrimental behavioral effects, depending on how humans interact with and interpret AI outputs.
Beyond individual cognitive processes, organizational and contextual factors further influence behavioral responses to AI-assisted decision systems [20]. Decision environments characterized by high stakes, uncertainty, or hierarchical structures may amplify reliance on AI recommendations, as individuals seek to minimize perceived risk or accountability. Conversely, transparent and explainable AI systems may encourage more reflective engagement and balanced human–AI collaboration.
Despite these advances, much of the existing literature examines behavioral factors in isolation, focusing on single constructs such as trust, reliance, or explainability. While such studies provide valuable insights, they often lack integrative frameworks capable of capturing the relative importance and interaction of multiple behavioral dimensions within a unified analytical structure. Consequently, it remains challenging to systematically compare behavioral impacts across different AI-assisted decision scenarios or to translate behavioral insights into actionable decision-analytic evaluations.
From a methodological perspective, multi-criteria decision analysis (MCDA) offers structured tools for evaluating complex decision problems involving multiple, often conflicting criteria. Techniques such as AHP and TOPSIS have been widely applied to support decision making in engineering, management, and policy contexts. These methods enable the weighting and ranking of criteria and alternatives based on systematic comparisons. However, MCDA applications have traditionally emphasized technical, economic, or performance-related criteria, with limited attention to behavioral dimensions.
Recent studies have begun to recognize the potential of adapting MCDA techniques to incorporate qualitative and human-centered factors. Nevertheless, there remains a lack of cohesive frameworks that explicitly integrate behavioral theory with MCDA methods to evaluate the behavioral impact of AI-assisted decision making. In particular, few studies employ MCDA not merely as an optimization tool, but as a mechanism for structuring, quantifying, and interpreting behavioral factors associated with human–AI interaction.
In this context, the integration of SWOT analysis with AHP and TOPSIS provides a promising analytical foundation. SWOT analysis facilitates the systematic identification of internal and external factors related to human behavior, while AHP enables the quantification of their relative importance, and TOPSIS supports comparative evaluation across alternative AI-assisted decision scenarios. When grounded in behavioral theory, such an integrated approach offers the potential to bridge the gap between qualitative behavioral insights and quantitative decision analysis.
In summary, the existing literature highlights the significant influence of AI-assisted decision systems on human decision behavior, encompassing cognitive, affective, and contextual dimensions. However, prior research lacks structured, multi-criteria frameworks that holistically evaluate these behavioral impacts. Addressing this limitation requires an interdisciplinary approach that combines behavioral decision-making theory with decision-analytic methodologies. The present study responds to this need by proposing an integrated SWOT–AHP–TOPSIS framework to systematically assess the behavioral impact of AI-assisted decision making, thereby laying the theoretical and methodological foundation for the research design described in the following section.
3. Methodology
This study adopts a structured, multi-stage research design to systematically evaluate the behavioral impact of AI-assisted decision making. Rather than treating decision outcomes as purely technical results, the proposed design explicitly positions human behavioral responses as the core unit of analysis. The research framework integrates behavioral theory with multi-criteria decision analysis to capture, weight, and compare behavioral factors associated with human–AI interaction across alternative decision scenarios.
The overall methodological framework consists of three interrelated components: (i) the identification and structuring of behavioral factors using SWOT analysis, (ii) the determination of the relative importance of these factors through the Analytic Hierarchy Process, and (iii) the comparative evaluation of AI-assisted decision scenarios using the Technique for Order Preference by Similarity to Ideal Solution. This integrated SWOT–AHP–TOPSIS approach enables the translation of qualitative behavioral insights into a coherent quantitative evaluation framework.
To provide a practical grounding for the conceptual scenarios, the three AI-assisted decision-making configurations were illustrated using a supplier selection task in organizational procurement. In this illustrative context, the human-dominant scenario represents traditional expert-led evaluation with AI providing informational support; the shared-control scenario represents AI-generated rankings that managers can modify; and the AI-dominant scenario represents automated supplier ranking with minimal human intervention. This illustrative case helps clarify the managerial relevance and interpretability of the behavioral evaluation framework.
SWOT is a tool commonly used in strategic analysis [21], [22]. In this study, SWOT is not used for strategic planning in the conventional sense, but rather as a structured mechanism to categorize internal and external behavioral dimensions related to human–AI interaction.
Based on the behavioral SWOT framework, the identified factors were transformed into a hierarchical set of behavioral criteria for the Analytic Hierarchy Process. The overall objective at the top level of the hierarchy is to evaluate the behavioral impact of AI-assisted decision making. At the second level, behavioral factors are grouped into four main dimensions corresponding to the SWOT structure: Strengths, Weaknesses, Opportunities, and Threats, reflecting internal and external, positive and negative behavioral influences. The development of the SWOT criteria followed a multi-stage procedure to ensure conceptual validity and minimize redundancy. First, an initial pool of behavioral factors was derived from an extensive literature review on behavioral decision making, human–AI interaction, automation bias, trust in AI, and cognitive offloading. This initial list contained more than 30 candidate factors.
Second, the candidate factors were screened and clustered based on conceptual similarity. Redundant or highly overlapping constructs were merged or eliminated through expert discussion. For instance, automation bias and over-reliance were distinguished as related but conceptually different constructs: automation bias refers to preferential acceptance of AI recommendations despite contradictory evidence, whereas over-reliance reflects behavioral dependency and skill degradation over time.
Third, experts refined the list to ensure parsimony and analytical tractability. Following common practice in MCDA studies, five representative factors were retained in each SWOT group to balance model comprehensiveness and cognitive manageability in pairwise comparisons. This structure also ensured symmetry across SWOT dimensions and reduced respondent burden in AHP assessments.
Finally, each retained factor was validated by expert consensus regarding relevance to AI-assisted decision contexts. This process resulted in a final set of 20 behavioral criteria structured into SWOT.
Within each dimension, specific behavioral criteria were defined. The Strengths dimension includes cognitive support, consistency of judgment, decision confidence, speed of behavioral response, and learning through feedback. The Weaknesses dimension comprises automation bias, reduced critical thinking, over-reliance and skill degradation, trust miscalibration, and diffusion of responsibility. The Opportunities dimension consists of explainable AI support, personalized decision assistance, behavioral training capability, human-centered interface design, and ethical and regulatory support. The Threats dimension includes algorithmic bias, black-box opacity, high-stakes pressure, accountability ambiguity, and ethical manipulation risk.
These criteria form the third level of the AHP hierarchy, under which alternative AI-assisted decision scenarios are evaluated. Three representative scenarios were defined: (i) human-dominant decision making, where AI provides supportive information while humans retain full control; (ii) shared-control decision making, where AI offers ranked recommendations that humans may accept or override; and (iii) AI-dominant decision making, where AI performs most decision functions with limited human intervention. Pairwise comparisons are then conducted to derive the relative importance of each behavioral criterion, which are subsequently used to assess and rank the alternative scenarios using TOPSIS.
To quantify the relative importance of the identified behavioral factors, this study applies the Analytic Hierarchy Process. AHP is well suited for this purpose because it enables the systematic comparison of both qualitative and quantitative criteria based on expert judgment [23], [24]. By decomposing a complex decision problem into a hierarchical structure, AHP allows decision makers to express relative preferences between behavioral factors through pairwise comparisons [25].
In this study, the AHP hierarchy consists of three levels: the overall research objective (evaluation of behavioral impact of AI-assisted decision making), the main behavioral dimensions derived from the SWOT structure, and the specific behavioral factors within each dimension. Pairwise comparison matrices are constructed to assess the relative importance of these factors. Pairwise comparisons were conducted using Saaty’s [23] 1–9 fundamental scale, where 1 indicates equal importance and 9 indicates extreme importance of one criterion over another. Each expert provided an individual pairwise comparison matrix for the four SWOT dimensions and their associated sub-criteria.
To aggregate individual judgments into a group comparison matrix, the arithmetic mean was applied to each pairwise comparison element, a method commonly used when experts are assumed to carry equal weight and judgments are expressed on a ratio scale.
Consistency of the aggregated matrices was evaluated using the Consistency Ratio (CR). Following Saaty's recommendation, matrices with CR $\leq$ 0.10 were considered acceptable; inconsistent judgments were revised or excluded to ensure the logical coherence of the retained comparisons. Expert judgments were elicited from individuals with relevant academic or practical experience in decision making and AI-assisted systems. Alternative aggregation methods, such as the geometric mean, were also considered; preliminary tests showed no substantive differences in the resulting rankings, so the arithmetic mean was retained for simplicity and transparency.
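To make the weighting step concrete, the sketch below (in Python, using entirely hypothetical comparison matrices rather than the study's data) illustrates arithmetic-mean aggregation, principal-eigenvector weight derivation, and the consistency check, where $CI = (\lambda_{\max} - n)/(n - 1)$ and $CR = CI/RI$:

```python
# A minimal sketch of the group AHP step, not the study's code. The matrices
# below are hypothetical; experts are assumed equally weighted, and judgments
# use Saaty's 1-9 fundamental scale.
import numpy as np

# Saaty's random consistency indices for matrices of order n
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def aggregate(matrices):
    """Element-wise arithmetic mean of the experts' pairwise matrices."""
    return np.mean(matrices, axis=0)

def ahp_weights(A):
    """Principal-eigenvector priorities and consistency ratio (CR)."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenvalue lambda_max
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                             # normalized priority weights
    ci = (eigvals[k].real - n) / (n - 1)     # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0    # CR <= 0.10 is acceptable
    return w, cr

# Hypothetical comparisons of the four SWOT dimensions by two experts
expert1 = np.array([[1, 1/2, 4, 3],
                    [2, 1, 5, 4],
                    [1/4, 1/5, 1, 1/2],
                    [1/3, 1/4, 2, 1]])
expert2 = np.array([[1, 1/3, 3, 2],
                    [3, 1, 5, 4],
                    [1/3, 1/5, 1, 1/2],
                    [1/2, 1/4, 2, 1]])
w, cr = ahp_weights(aggregate([expert1, expert2]))
print(w.round(4), f"CR = {cr:.3f}")
```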
Following the weighting process, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is employed to evaluate and rank AI-assisted decision scenarios based on their behavioral impacts. TOPSIS is a multi-criteria decision-making method that ranks alternatives by their distance from an ideal solution: the best alternative is the one closest to the positive ideal solution (PIS) and farthest from the negative ideal solution (NIS) [26], [27], [28]. The procedure followed standard steps: (i) construction of the decision matrix from expert performance ratings; (ii) normalization of the decision matrix using vector normalization; (iii) multiplication by AHP-derived weights to obtain the weighted normalized matrix; (iv) determination of the PIS and NIS; (v) calculation of separation measures from the PIS and NIS; and (vi) computation of the relative closeness coefficient ($C^*$) for ranking alternatives [26].
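A minimal sketch of these six steps is given below. It assumes, consistent with the ideal solutions reported in Section 4, that all criteria are rated so that higher scores indicate better behavioral performance; the numbers are illustrative, not the study's data:

```python
# A minimal TOPSIS sketch mirroring steps (i)-(vi), not the study's code.
import numpy as np

def topsis(X, w):
    """Closeness coefficients C* for the alternatives in the rows of X."""
    R = X / np.linalg.norm(X, axis=0)          # (ii) vector normalization
    V = R * w                                  # (iii) apply AHP-derived weights
    pis, nis = V.max(axis=0), V.min(axis=0)    # (iv) ideal / anti-ideal points
    s_plus = np.linalg.norm(V - pis, axis=1)   # (v) distance to PIS
    s_minus = np.linalg.norm(V - nis, axis=1)  # (v) distance to NIS
    return s_minus / (s_plus + s_minus)        # (vi) closeness coefficient

# Toy example: three alternatives, three equally weighted criteria
X = np.array([[65.0, 90.0, 85.0],
              [80.0, 65.0, 65.0],
              [90.0, 30.0, 30.0]])
print(topsis(X, np.full(3, 1 / 3)).round(4))
```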
In the context of this study, AI-assisted decision scenarios represent different configurations or conditions under which AI systems support human decision making. Behavioral performance scores for each scenario are determined with respect to the weighted behavioral factors obtained from the AHP analysis. The TOPSIS procedure then calculates the relative closeness of each scenario to the ideal behavioral outcome, allowing for a transparent and interpretable ranking of alternatives.
To ensure both analytical rigor and empirical credibility, this study places strong emphasis on data collection procedures and reliability control throughout the research process. Data were collected to support the identification, weighting, and evaluation of behavioral factors associated with AI-assisted decision making.
Behavioral factors used in the SWOT structure were initially derived from an extensive review of prior literature on behavioral decision making, human–AI interaction, trust in automation, and automation bias. These theoretically grounded factors were then refined through expert consultation to ensure conceptual clarity, relevance, and contextual appropriateness. The expert panel consisted of twelve individuals with substantial academic and professional experience in decision analysis, AI applications, and organizational management. Among them, seven were academic researchers specializing in decision science, behavioral science, or information systems, and five were practitioners with managerial or consulting experience in AI-supported decision systems. The experts' professional experience ranged from 6 to 15 years, and all reported regular involvement in AI-assisted decision-making processes in research, consulting, or organizational contexts. Their expertise ensured informed judgments regarding behavioral dynamics in human–AI interaction scenarios.
For the AHP phase, experts were asked to provide pairwise comparisons of behavioral factors using structured comparison matrices. Their judgments reflected perceived relative importance of behavioral dimensions in AI-assisted decision contexts. To ensure logical coherence, CRs were calculated for each comparison matrix. Only matrices meeting acceptable consistency thresholds were retained, thereby enhancing the reliability of expert judgments and reducing subjectivity-related bias.
In the TOPSIS phase, behavioral performance scores for alternative AI-assisted decision scenarios were collected based on expert evaluations using standardized rating scales. These scores were combined with AHP-derived weights to compute the relative closeness of each scenario to the ideal behavioral outcome. This procedure ensured that scenario evaluation was grounded in systematically weighted behavioral criteria rather than arbitrary judgments.
To further strengthen methodological rigor, several reliability and robustness measures were incorporated. The use of SWOT analysis provided conceptual validity by grounding behavioral factors in established theoretical constructs. AHP enhanced reliability through consistency checks, while TOPSIS contributed robustness by offering a systematic and replicable ranking mechanism. In addition, sensitivity analysis was conducted to examine the stability of results under variations in behavioral factor weights, thereby assessing the robustness of the analytical outcomes.
Together, these procedures yield credible, transparent, and replicable insights into the behavioral impact of AI-assisted decision making.
By integrating SWOT, AHP, and TOPSIS within a behavioral decision-making framework, this study advances methodological practice in the evaluation of AI-assisted decision systems. The proposed approach does not aim to optimize decision outcomes per se, but rather to systematically assess how AI assistance shapes human decision behavior across multiple dimensions.
The results generated through this methodology form the basis for the empirical analysis and discussion presented in the following section. Specifically, the weighted behavioral factors and ranked AI-assisted decision scenarios provide the necessary inputs for interpreting behavioral patterns, discussing theoretical implications, and deriving practical recommendations for the design and implementation of AI-assisted decision systems.
In the illustrative supplier selection context, experts evaluated the behavioral performance of the three decision scenarios using standardized rating scales. The task was chosen because supplier evaluation is a typical multi-criteria managerial decision involving uncertainty, accountability, and reliance on analytical tools, making it suitable for examining behavioral dynamics in AI-assisted decision making. Although the case is illustrative rather than empirical, it provides a realistic managerial anchor for interpreting the results.
4. Results and Discussion
To ensure robustness, pairwise comparison judgments from the 12 experts were collected and aggregated into a unified comparison matrix using the arithmetic mean method described in Section 3. The resulting local, global, and overall weights are shown in Table 1.
The AHP results at the first hierarchical level indicate that behavioral weaknesses (W) receive the highest priority (0.4847), followed by strengths (S) (0.3173), threats (T) (0.1324), and opportunities (O) (0.0656). This distribution shows that, within the context of AI-assisted decision making, experts place greater emphasis on behavioral risks—such as automation bias and reduced critical thinking—than on purely positive or enabling aspects of AI.
At the second level, the most influential sub-criteria are W1 (automation bias) and W2 (reduced critical thinking), both with the highest weights (0.1684 each). This confirms that the dominant concern in human–AI interaction is not technical performance but the potential distortion of human judgment. Other relatively important factors include W3 (over-reliance and skill degradation) and W4 (trust miscalibration), while factors such as ethical and regulatory support (O5) or behavioral training capability (O3) receive comparatively low weights.
The AHP results demonstrate that the behavioral dimension of AI-assisted decision making is strongly risk-oriented, emphasizing the need to control negative behavioral effects rather than merely amplifying technological benefits.
To enhance consistency and reduce individual subjectivity, performance ratings expressed as positive integers were collected from the 12 experts and aggregated using the arithmetic mean. The resulting evaluation matrix reflects the average expert judgment and served as the decision matrix in the TOPSIS method. The decision matrix, the normalized matrix, the weighted normalized matrix, the PIS and NIS, the separation measures (${S}^{+}$ and ${S}^{-}$), and the relative closeness coefficients ($C^*$) are presented sequentially in Tables 2 through 7.
Table 1. AHP local, global, and overall weights of the behavioral criteria.

| Criteria | Sub-Criteria | Local Weight | Global Weight | Overall Weight |
|---|---|---|---|---|
| Strengths (S) | S1. Cognitive support and workload reduction | 0.3130 | 0.3173 | 0.0993 |
| | S2. Consistency of judgment | 0.3130 | 0.3173 | 0.0993 |
| | S3. Decision confidence | 0.1765 | 0.3173 | 0.0560 |
| | S4. Speed of behavioral response | 0.0988 | 0.3173 | 0.0313 |
| | S5. Learning through feedback | 0.0988 | 0.3173 | 0.0313 |
| Weaknesses (W) | W1. Automation bias | 0.3474 | 0.4847 | 0.1684 |
| | W2. Reduced critical thinking | 0.3474 | 0.4847 | 0.1684 |
| | W3. Over-reliance and skill degradation | 0.1202 | 0.4847 | 0.0583 |
| | W4. Trust miscalibration | 0.1202 | 0.4847 | 0.0583 |
| | W5. Diffusion of responsibility | 0.0648 | 0.4847 | 0.0314 |
| Opportunities (O) | O1. Explainable AI support | 0.3261 | 0.0656 | 0.0214 |
| | O2. Personalized decision support | 0.1798 | 0.0656 | 0.0118 |
| | O3. Behavioral training capability | 0.1072 | 0.0656 | 0.0070 |
| | O4. Human-centered interface design | 0.3261 | 0.0656 | 0.0214 |
| | O5. Ethical and regulatory support | 0.0608 | 0.0656 | 0.0040 |
| Threats (T) | T1. Algorithmic bias | 0.3261 | 0.1324 | 0.0432 |
| | T2. Black-box opacity | 0.3261 | 0.1324 | 0.0432 |
| | T3. High-stakes pressure | 0.1072 | 0.1324 | 0.0142 |
| | T4. Accountability ambiguity | 0.1798 | 0.1324 | 0.0238 |
| | T5. Ethical manipulation risk | 0.0608 | 0.1324 | 0.0081 |
Table 2. Decision matrix based on aggregated expert ratings.

| | S1 | S2 | S3 | S4 | S5 | W1 | W2 | W3 | W4 | W5 | O1 | O2 | O3 | O4 | O5 | T1 | T2 | T3 | T4 | T5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| A1 (Human-dominant) | 65 | 60 | 55 | 50 | 70 | 90 | 90 | 85 | 80 | 85 | 70 | 60 | 65 | 75 | 70 | 85 | 85 | 60 | 80 | 75 |
| A2 (Shared-control) | 80 | 75 | 75 | 70 | 75 | 65 | 60 | 60 | 65 | 65 | 80 | 75 | 70 | 80 | 75 | 65 | 60 | 65 | 65 | 65 |
| A3 (AI-dominant) | 90 | 85 | 85 | 90 | 60 | 30 | 25 | 20 | 30 | 35 | 60 | 65 | 50 | 55 | 50 | 30 | 25 | 40 | 30 | 35 |
Table 3. Normalized decision matrix.

| | S1 | S2 | S3 | S4 | S5 | W1 | W2 | W3 | W4 | W5 |
|---|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.4750 | 0.4678 | 0.4365 | 0.4016 | 0.5890 | 0.7826 | 0.8107 | 0.8023 | 0.7452 | 0.7550 |
| A2 | 0.5846 | 0.5848 | 0.5953 | 0.5623 | 0.6311 | 0.5652 | 0.5405 | 0.5663 | 0.6306 | 0.5774 |
| A3 | 0.6577 | 0.6627 | 0.6746 | 0.7229 | 0.5048 | 0.2609 | 0.2252 | 0.1888 | 0.2910 | 0.3109 |

| | O1 | O2 | O3 | O4 | O5 | T1 | T2 | T3 | T4 | T5 |
|---|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.5735 | 0.5174 | 0.6029 | 0.6114 | 0.6134 | 0.7649 | 0.7944 | 0.6180 | 0.7452 | 0.7127 |
| A2 | 0.6554 | 0.6467 | 0.6492 | 0.6521 | 0.6572 | 0.5849 | 0.5607 | 0.6695 | 0.6055 | 0.6176 |
| A3 | 0.4915 | 0.5605 | 0.4637 | 0.4483 | 0.4381 | 0.2700 | 0.2336 | 0.4120 | 0.2794 | 0.3326 |
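As a consistency check on step (ii), the first entry of Table 3 follows from vector normalization of the S1 column of Table 2:

$r_{A1,S1} = \frac{65}{\sqrt{65^{2} + 80^{2} + 90^{2}}} = \frac{65}{\sqrt{18725}} \approx 0.4750$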
Using the weighted criteria derived from AHP, TOPSIS was applied to evaluate three AI-assisted decision-making scenarios:
A1—Human-dominant, A2—Shared-control, and A3—AI-dominant.
The calculated distances from the PIS (${S}^{+}$) and NIS (${S}^{-}$) are:
• A1: ${S}^{+}$ = 0.0315, ${S}^{-}$ = 0.1443
• A2: ${S}^{+}$ = 0.0633, ${S}^{-}$ = 0.0852
• A3: ${S}^{+}$ = 0.1444, ${S}^{-}$ = 0.0314
The corresponding closeness coefficients ($C^*$) are:
• A1: 0.8208
• A2: 0.5738
• A3: 0.1785
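Each closeness coefficient follows directly from the separation measures via $C^{*} = S^{-}/(S^{+} + S^{-})$; for example, for A1:

$C^{*}_{A1} = \frac{0.1443}{0.0315 + 0.1443} \approx 0.8208$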
Based on these values, the final ranking is:
1. A1—Human-dominant decision making
2. A2—Shared-control decision making
3. A3—AI-dominant decision making
This result indicates that scenarios preserving strong human control over decisions are behaviorally most desirable, while highly automated, AI-dominant decision making is least compatible with healthy human decision behavior.
Table 4. Weighted normalized decision matrix.

| | S1 | S2 | S3 | S4 | S5 | W1 | W2 | W3 | W4 | W5 |
|---|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.0472 | 0.0465 | 0.0244 | 0.0126 | 0.0185 | 0.1318 | 0.1365 | 0.0468 | 0.0434 | 0.0237 |
| A2 | 0.0581 | 0.0581 | 0.0333 | 0.0176 | 0.0198 | 0.0952 | 0.0910 | 0.0330 | 0.0367 | 0.0181 |
| A3 | 0.0653 | 0.0658 | 0.0378 | 0.0227 | 0.0158 | 0.0439 | 0.0379 | 0.0110 | 0.0170 | 0.0098 |

| | O1 | O2 | O3 | O4 | O5 | T1 | T2 | T3 | T4 | T5 |
|---|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.0123 | 0.0061 | 0.0042 | 0.0131 | 0.0024 | 0.0330 | 0.0343 | 0.0088 | 0.0177 | 0.0057 |
| A2 | 0.0140 | 0.0076 | 0.0046 | 0.0139 | 0.0026 | 0.0253 | 0.0242 | 0.0095 | 0.0144 | 0.0050 |
| A3 | 0.0105 | 0.0066 | 0.0033 | 0.0096 | 0.0017 | 0.0117 | 0.0101 | 0.0058 | 0.0067 | 0.0027 |
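Each entry of Table 4 is the product of the corresponding normalized value in Table 3 and the overall criterion weight from Table 1; for example, $v_{A1,S1} = 0.4750 \times 0.0993 \approx 0.0472$.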
Table 5. Positive ideal solution (PIS) and negative ideal solution (NIS).

| | S1 | S2 | S3 | S4 | S5 | W1 | W2 | W3 | W4 | W5 |
|---|---|---|---|---|---|---|---|---|---|---|
| PIS | 0.0653 | 0.0658 | 0.0378 | 0.0227 | 0.0198 | 0.1318 | 0.1365 | 0.0468 | 0.0434 | 0.0237 |
| NIS | 0.0472 | 0.0465 | 0.0244 | 0.0126 | 0.0158 | 0.0439 | 0.0379 | 0.0110 | 0.0170 | 0.0098 |

| | O1 | O2 | O3 | O4 | O5 | T1 | T2 | T3 | T4 | T5 |
|---|---|---|---|---|---|---|---|---|---|---|
| PIS | 0.0140 | 0.0076 | 0.0046 | 0.0139 | 0.0026 | 0.0330 | 0.0343 | 0.0095 | 0.0177 | 0.0057 |
| NIS | 0.0105 | 0.0061 | 0.0033 | 0.0096 | 0.0017 | 0.0117 | 0.0101 | 0.0058 | 0.0067 | 0.0027 |
Table 6. Separation measures of the alternatives.

| | $S^{+}$ | $S^{-}$ |
|---|---|---|
| A1 | 0.0315 | 0.1443 |
| A2 | 0.0633 | 0.0852 |
| A3 | 0.1444 | 0.0314 |
Table 7. Relative closeness coefficients of the alternatives.

| | $C^*$ |
|---|---|
| A1 | 0.8208 |
| A2 | 0.5738 |
| A3 | 0.1785 |
The sensitivity scenarios were designed to simulate managerial preference shifts across the SWOT dimensions, reflecting possible organizational strategies such as risk-averse, opportunity-driven, or balanced decision styles; they represent plausible strategic orientations in AI-assisted decision contexts.
The purpose of sensitivity analysis is to examine how stable the final ranking of decision-making scenarios is when the weights of criteria change. Since the AHP results show a strong dominance of behavioral weaknesses (W = 0.4847), this analysis tests whether the preference for the human-dominant scenario (A1) remains valid when the importance of major criteria groups (S, W, O, T) is varied.
Four sensitivity scenarios are considered:
• Scenario 1: Increase the weight of Strengths (S) by 20%, proportionally reducing W, O, and T.
• Scenario 2: Increase the weight of Weaknesses (W) by 20%, proportionally reducing S, O, and T.
• Scenario 3: Increase the weight of Opportunities (O) by 50%, proportionally reducing S, W, and T.
• Scenario 4: Increase the weight of Threats (T) by 30%, proportionally reducing S, W, and O.
In each scenario, the internal distribution of sub-criteria remains unchanged; only the weights of the four main SWOT groups are adjusted.
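As an illustration of this adjustment, the following sketch (assuming the scaled group weight remains below one) redistributes the remaining weight proportionally across the other three SWOT groups:

```python
# A sketch of the proportional reweighting used in the sensitivity scenarios.
# Baseline weights are the AHP dimension weights from Table 1 (S, W, O, T).
import numpy as np

def shift_weight(w, idx, factor):
    """Scale w[idx] by `factor` and shrink the other weights proportionally."""
    w = np.asarray(w, dtype=float)
    new = w.copy()
    new[idx] = w[idx] * factor                      # e.g. factor = 1.2 for +20%
    others = [i for i in range(w.size) if i != idx]
    new[others] = w[others] * (1 - new[idx]) / w[others].sum()
    return new                                      # still sums to one

base = np.array([0.3173, 0.4847, 0.0656, 0.1324])   # S, W, O, T
print(shift_weight(base, 0, 1.2).round(4))          # Scenario 1: S +20%
print(shift_weight(base, 3, 1.3).round(4))          # Scenario 4: T +30%
```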
The TOPSIS procedure was repeated under each scenario. The results show that:
• In all scenarios, A1 (Human-dominant) remains ranked first.
• A2 (Shared-control) consistently remains in second place.
• A3 (AI-dominant) always remains the lowest-ranked alternative.
Although the closeness coefficients ($C^*$) change slightly across scenarios, no scenario leads to a change in the ranking order. The largest variation is observed when the weight of Strengths is increased, which slightly improves the relative performance of A3 due to its high efficiency-related scores, but not enough to overtake A2 or A1. Figure 1 presents the sensitivity analysis of the TOPSIS results under different weighting scenarios for the four primary SWOT dimensions. The figure illustrates how variations in criterion weights influence the closeness coefficients of the three decision-making scenarios.

In addition, a Monte Carlo–style random perturbation test was conducted. Criteria weights were randomly varied within ±10% of their baseline values while maintaining the normalization constraint. The ranking stability was evaluated across 1,000 simulated runs. Results show that the top-ranked scenario remained unchanged in more than 95% of simulations, indicating high robustness.
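A sketch of such a perturbation test is given below; it reuses the hypothetical `topsis` helper from the earlier sketch, with `X` and `w0` standing for the full decision matrix and baseline weight vector:

```python
# Sketch of a Monte Carlo-style robustness check: perturb each criterion
# weight uniformly within +/-10%, renormalize, rerun TOPSIS, and count how
# often the baseline top-ranked alternative keeps first place.
import numpy as np

def rank_stability(X, w0, runs=1000, spread=0.10, seed=42):
    rng = np.random.default_rng(seed)
    top0 = int(np.argmax(topsis(X, w0)))       # baseline top-ranked alternative
    hits = 0
    for _ in range(runs):
        w = w0 * rng.uniform(1 - spread, 1 + spread, size=w0.size)
        w /= w.sum()                           # restore sum-to-one constraint
        hits += int(np.argmax(topsis(X, w)) == top0)
    return hits / runs                         # share of runs with the same winner
```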
Given that the Weakness dimension received the highest global importance, an additional sensitivity test was performed by varying only Weakness weights by ±20% while keeping other dimensions constant. The ranking results remained consistent, suggesting that the dominance of the leading scenario is not sensitive to fluctuations in perceived weaknesses of AI-assisted decision-making.
The dominance of the human-dominant scenario (A1) can be explained by its strong performance on weakness-related criteria. Because A1 minimizes automation bias, preserves critical thinking, and maintains personal responsibility, it performs well on the most heavily weighted criteria. Although it is not the best in terms of speed or cognitive load reduction, these advantages of automation are outweighed by the behavioral risks associated with high AI dominance.
The shared-control scenario (A2) represents a compromise. It benefits from AI’s cognitive support and personalization while still allowing human intervention. However, its moderate exposure to automation bias and partial erosion of critical thinking prevents it from surpassing the human-dominant model.
The AI-dominant scenario (A3) performs strongly on technical strengths such as speed and workload reduction, but performs very poorly on behavioral weaknesses. High dependence on AI, loss of critical thinking, and unclear responsibility lead to extremely low scores on the most important criteria, explaining its low overall ranking.
These findings suggest that behavioral quality in AI-assisted decision making is driven more by how well negative behavioral effects are controlled than by how much efficiency or speed AI can provide.
From a managerial decision perspective, the high closeness coefficient of the human-dominant scenario implies that organizations should avoid full delegation of decision authority to AI systems. The numerical ranking provides quantitative support for governance structures in which AI acts as a decision-support tool rather than an autonomous decision agent.
The results support theories of automation bias and trust miscalibration, which argue that excessive reliance on automated systems can degrade human judgment. The high weights assigned to W1 and W2, combined with the poor performance of the AI-dominant scenario, empirically confirm that human decision quality deteriorates when control is overly shifted to machines.
From a human–AI interaction perspective, the findings reinforce the importance of keeping humans meaningfully “in the loop”. Rather than aiming for full automation, AI systems should be designed to support, not replace, human reasoning processes.
The integration of SWOT with AHP and TOPSIS also demonstrates methodological value: SWOT helps conceptualize behavioral dimensions, AHP quantifies their relative importance, and TOPSIS enables transparent comparison of alternative human–AI interaction designs.
For designers and policymakers, the results imply that:
• AI systems should prioritize supportive and advisory roles rather than full automation.
• Interfaces should encourage critical reflection, not blind acceptance of AI outputs.
• Training programs should focus on maintaining human decision skills in AI-rich environments.
• Accountability structures must clearly define human responsibility even when AI is used.
Organizations adopting AI for decision support should therefore avoid assuming that higher automation automatically leads to better decisions. Instead, they should aim for configurations that preserve human agency and behavioral integrity.
The findings of this study provide actionable guidance for managers, decision committees, and organizations designing AI-assisted decision processes. The results indicate that decision configurations preserving human cognitive agency outperform highly automated AI-dominant configurations in behavioral terms. This has direct implications for organizational decision governance and workflow design.
First, the final decision authority should remain with human managers or decision committees, particularly in high-stakes contexts such as strategic planning, supplier selection, hiring, or investment decisions. AI systems should function primarily as analytical advisors that provide recommendations, risk assessments, and scenario analyses rather than autonomous decision makers.
Second, responsibility and accountability structures must be clearly defined. Organizations should establish explicit policies specifying that human decision makers retain ultimate responsibility for AI-supported decisions, even when AI-generated recommendations are used. This helps prevent diffusion of responsibility and reinforces ethical and legal accountability.
Third, organizations should design hybrid human–AI decision workflows. In such workflows, AI systems perform data processing, pattern recognition, and ranking tasks, while humans perform contextual interpretation, ethical evaluation, and final judgment. Decision committees may use AI outputs as structured inputs during deliberations rather than as deterministic prescriptions.
Finally, training and organizational policies should be implemented to mitigate automation bias and maintain critical thinking skills. Managers and employees should be trained to question AI outputs, understand model limitations, and recognize situations where human expertise should override algorithmic recommendations.
The results suggest that intelligent management decision systems should be designed as human-centered socio-technical systems rather than fully automated decision engines.
5. Conclusion
This study proposed and applied an integrated SWOT–AHP–TOPSIS framework to evaluate human behavioral dynamics in AI-assisted decision making. By shifting the analytical focus from purely technical performance to behavioral responses, the study contributes to a more human-centered understanding of AI-supported decisions.
The results demonstrate that behavioral weaknesses—such as over-reliance on AI and reduced critical reflection—play a more decisive role than strengths or opportunities in shaping overall decision quality. Among the three decision-making scenarios, human-dominant decision making achieved the highest closeness coefficient, followed by shared-control and AI-dominant scenarios. This indicates that preserving human cognitive agency remains essential even in highly automated environments.
Methodologically, the study shows that integrating SWOT with AHP and TOPSIS is not limited to strategic management, but can be effectively adapted to behavioral and socio-technical research. The framework enables the systematic translation of qualitative behavioral constructs into quantitative evaluation results.
The findings provide several implications for managers and organizations designing AI-assisted decision workflows. First, a human-dominant decision mode is preferable in high-stakes, ethically sensitive, or strategic decisions such as executive recruitment, project portfolio selection, and strategic investment planning. In such contexts, human judgment is necessary to interpret contextual information, ethical considerations, and tacit knowledge that AI systems cannot fully capture.
Second, shared-control decision modes are more realistic for operational and semi-structured tasks such as supplier evaluation, risk screening, and performance monitoring. In these cases, AI can provide analytical recommendations, while final decisions remain under human oversight, enabling a balance between efficiency and accountability.
Third, AI-dominant decision modes can be beneficial in highly structured, data-intensive tasks such as fraud detection, demand forecasting, or automated scheduling. However, organizations must implement governance mechanisms to mitigate risks such as automation bias, lack of transparency, and accountability ambiguity. These mechanisms include periodic human audits, explainable AI interfaces, and clear responsibility allocation frameworks.
Organizations should adopt a tiered decision architecture in which the level of AI autonomy is matched to decision complexity, risk level, and ethical sensitivity. Decision committees should define escalation protocols to ensure that critical decisions remain subject to human judgment while routine decisions can be delegated to AI systems.
AI-assisted decision modes are increasingly applied in core management domains such as investment appraisal, human resource selection, supplier evaluation, and enterprise risk management. For example, AI-driven analytics can support capital budgeting decisions, while human committees retain authority for final investment approval. Similarly, AI-based screening tools are widely used in recruitment, but human managers are responsible for ethical oversight and final hiring decisions.
From an organizational perspective, AI-assisted decision systems should be embedded in governance structures that define decision authority, accountability, and escalation procedures. Organizations need formal policies specifying when AI recommendations can be automated and when human approval is mandatory.
6. Limitations
Despite its contributions, this study has several limitations.
First, the behavioral criteria were derived primarily from literature review and expert judgment, which may reflect subjective interpretations. Although consistency checks were applied in AHP, some degree of expert bias is unavoidable.
Second, the study evaluates hypothetical or generalized decision scenarios rather than real-time experimental or longitudinal data. As a result, the findings describe relative behavioral tendencies rather than precise behavioral outcomes in specific organizational settings.
Third, cultural, organizational, and domain-specific factors were not explicitly modeled. Behavioral responses to AI may vary significantly across industries, countries, and professional roles.
Future research should validate the proposed framework using empirical datasets or experimental studies in specific organizational domains to further enhance external validity.
7. Future Research Directions
Future studies can extend this research in several important ways.
First, empirical validation using experimental designs, surveys, or field studies would strengthen the behavioral foundations of the proposed framework.
Second, domain-specific applications—such as healthcare, finance, public administration, or education—could reveal how contextual factors modify behavioral responses to AI.
Third, dynamic models could be developed to examine how human behavior evolves over time with prolonged exposure to AI-assisted systems.
Data Availability: The data used to support the research findings are available from the corresponding author upon request.
Conflict of Interest: The author declares no conflict of interest.
