References

1. P. Skoczyński, “Analysis of solutions improving safety of cyclists in the road traffic,” Appl. Sci., vol. 11, no. 9, p. 3771, 2021.
2. W. Wang, X. Liu, Y. Qin, and Y. Fu, “A risk evaluation and prioritization method for FMEA with prospect theory and Choquet integral,” Saf. Sci., vol. 110, pp. 152–163, 2018.
3. Z. Li, Y. Wang, Y. Xu, Y. Liao, Q. Liu, and X. Qing, “Integrated subjective–objective weighting and fuzzy decision framework for FMEA-based risk assessment of wind turbines,” Systems, vol. 13, no. 12, p. 1118, 2025.
4. A. Ajithkumar and P. Poongavanam, “A systematic framework for the optimum selection of organic PCM in sustainable solar drying process: A multi-criteria decision-making methodology,” J. Energy Storage, vol. 116, p. 116080, 2025.
5. X. Liu and V. Y. Mariano, “The Fire-ViT model for tunnel fire detection with vision transformer improvement,” J. Comput. Cogn. Eng., vol. 4, no. 1, pp. 89–96, 2025.
6. S. H. Tseng, A. M. S. Espiritu, and T. H. T. Duong, “Impact of supply chain risks on company metrics: An integrated multi-criteria decision-making approach,” Int. J. Inf. Technol. Decis. Mak., pp. 1–38, 2025.
7. M. Elangovan, P. Ramya, and C. Goswami, “Sustainable energy options: Qualitative TOPSIS method for challenging scenarios,” in Decision-Making Techniques and Methods for Sustainable Technological Innovation: Strategies and Applications in Industry 5.0, Wiley, 2025, pp. 45–67.
8. E. Akdamar, G. Elidolu, M. Gögebakan, and B. O. Ceylan, “Entropy-based Borda extended weighted expert FMEA approach: Comparison with classical and fuzzy FMEA on a ship system,” Appl. Soft Comput., vol. 188, p. 114424, 2026.
9. S. S. Rawat, H. Dincer, and S. Yüksel, “A hybrid weighting method with a new score function for analyzing investment priorities in renewable energy,” Comput. Ind. Eng., vol. 185, p. 109692, 2023.
10. S. Kaur, R. Kumar, and K. Singh, “Sustainable component-level prioritization of PV panels, batteries, and converters for solar technologies in hybrid renewable energy systems using objective-weighted MCDM models,” Energies, vol. 18, no. 20, p. 5410, 2025.
11. Q. Jiang and H. Wang, “A hybrid intuitionistic fuzzy entropy–BWM–WASPAS approach for supplier selection in shipbuilding enterprises,” Sustainability, vol. 17, no. 4, p. 1701, 2025.
12. A. Chakhrit, I. Djelamda, M. Bougofa, I. H. M. Guetarni, A. Bouafia, and M. Chennoufi, “Integrating fuzzy logic and multi-criteria decision-making in a hybrid FMECA for robust risk prioritization,” Qual. Reliab. Eng. Int., vol. 40, no. 6, pp. 3555–3580, 2024.
13. A. A. Ardebili, A. S. Roshany, M. Pourmadadkar, M. Ghodsi, E. Padoano, and M. Boscolo, “Enhancing risk-based engineering design: A hybrid fuzzy failure analysis with empirical validation,” Front. Mech. Eng., vol. 11, p. 1732819, 2026.
14. S. S. Barjoee, V. Rodionov, R. Babkin, M. Tumanov, and B. S. Barjoee, “An integrated modified failure mode effects analysis Shannon entropy combined compromise solution approach to safety risk assessment in stone crusher unit of ceramic sector,” Int. J. Eng., Trans. B: Appl., vol. 39, no. 8, pp. 1976–1987, 2026.
15. W. A. C. Lopes, A. C. Rusteiko, C. R. Mendes, N. V. C. Honório, and M. T. Okano, “Optimization of new project validation protocols in the automotive industry: A simulated environment for efficiency and effectiveness,” J. Comput. Cogn. Eng., vol. 4, no. 3, pp. 343–354, 2025.
16. U. G. Okoro, A. Mubaraq, E. U. Olugu, S. A. Lawal, and K. Y. Wong, “A dynamic maintenance planning methodology for HVAC systems based on Fuzzy-TOPSIS and Failure Mode and Effect Analysis,” J. Build. Eng., vol. 98, p. 111326, 2024.
17. D. Panchal, “Integrated quality and decision-making approaches-based framework for risk analysis,” Int. J. Ind. Syst. Eng., vol. 48, no. 4, pp. 568–587, 2024.
18. J. He, T. Yu, Y. Liu, C. Ma, and X. Gao, “A novel FMEA method considering dynamic weights and its application to CNC machine tools,” Qual. Reliab. Eng. Int., vol. 42, no. 3, pp. 1347–1368, 2026.
19. F. Kechroud, A. Zoubir, D. Rabah, and A. Moussa, “Risk and reliability analysis of critical boiler components,” Eksploat. i Niezawodność – Maint. Reliab., vol. 28, no. 2, p. 211506, 2026.
20. A. Fiorilli, V. Pezzotta, and C. Fragassa, “Criticality-driven reliability enhancement of pneumatic sand molding cells in foundry applications via FMECA,” J. Eng. Manag. Syst. Eng., vol. 4, no. 3, pp. 188–205, 2025.
21. Y. Yan, Z. Luo, Z. Liu, and Z. Liu, “Risk assessment analysis of multiple failure modes using the fuzzy rough FMECA method: A case of FACDG,” Mathematics, vol. 11, no. 16, p. 3459, 2023.
22. S. Dalal, U. Rani, U. K. Lilhore, N. Dahiya, R. Batra, N. Nuristani, and D. N. Le, “Optimized XGBoost model with whale optimization algorithm for detecting anomalies in manufacturing,” J. Comput. Cogn. Eng., vol. 4, no. 4, pp. 413–423, 2025.
23. R. Kumar and S. Singh, “Selection of vacuum cleaner with Technique for Order Preference by Similarity to Ideal Solution method based upon multi-criteria decision-making theory,” Meas. Control, vol. 53, no. 3–4, pp. 627–634, 2020.
24. R. Kumar, A. Bhattacherjee, A. D. Singh, S. Singh, and C. I. Pruncu, “Selection of portable hard disk drive based upon weighted aggregated sum product assessment method: A case of Indian market,” Meas. Control, vol. 53, no. 7–8, pp. 1218–1230, 2020.
25. A. C. Kutlu and M. Ekmekçioğlu, “Fuzzy failure modes and effects analysis by using fuzzy TOPSIS-based fuzzy AHP,” Expert Syst. Appl., vol. 39, no. 1, pp. 61–67, 2012.
26. X. Peng, Y. Liu, and L. Hao, “A probabilistic performance-based analysis approach for a vibrator-ground interaction system,” Probabilistic Eng. Mech., vol. 76, p. 103626, 2024.
27. I. S. Arafat, R. Premkumar, M. Vidhyalakshmi, C. Priya, and M. Elangovan, “Optimizing sustainable image encryption strategies in Industry 5.0 using VIKOR MCDM methodology,” in Decision-Making Techniques and Methods for Sustainable Technological Innovation, Wiley, 2025, pp. 85–100.
28. R. Kumar, R. Dubey, S. Singh, S. Singh, C. Prakash, Y. Nirsanametla, G. Królczyk, and R. Chudy, “Multiple-criteria decision-making and sensitivity analysis for selection of materials for knee implant femoral component,” Materials, vol. 14, no. 8, p. 2084, 2021.
29. A. S. Sidhu, S. Singh, R. Kumar, D. Y. Pimenov, and K. Giasin, “Prioritizing energy-intensive machining operations and gauging the influence of electric parameters: An industrial case study,” Energies, vol. 14, no. 16, p. 4761, 2021.
30. R. Kumar, S. Singh, V. Aggarwal, S. Singh, D. Y. Pimenov, K. Giasin, and K. Nadolny, “Hand and abrasive flow polished tungsten carbide die: Optimization of surface roughness, polishing time and comparative analysis in wire drawing,” Materials, vol. 15, no. 4, p. 1287, 2022.
31. J. Liu, B. Tang, and M. Xu, “Data-driven statistical nonlinearization technique based on information entropy,” Probabilistic Eng. Mech., vol. 70, p. 103376, 2022.
32. G. Stojić, Ž. Stević, J. Antuchevičienė, D. Pamučar, and M. Vasiljević, “A novel rough WASPAS approach for supplier selection in a company manufacturing PVC carpentry products,” Information, vol. 9, no. 5, p. 121, 2018.
33. M. R. Marjani, M. Habibi, and A. Pazhouhandeh, “An innovative hybrid fuzzy TOPSIS based on design of experiments for multi-criteria supplier evaluation and selection,” Int. J. Oper. Res., vol. 44, no. 2, pp. 171–209, 2022.
34. X. Pang, W. Yang, W. Miao, H. Zhou, and R. Min, “Study on site selection evaluation of emergency material storage based on improved TOPSIS,” Kybernetes, vol. 54, no. 9, pp. 5181–5206, 2025.
35. I. Belošević, M. Kosijer, M. Ivić, and N. Pavlović, “Group decision making process for early stage evaluations of infrastructure projects using extended VIKOR method under fuzzy environment,” Eur. Transp. Res. Rev., vol. 10, no. 2, p. 43, 2018.
36. X. Chen, W. Sun, and R. Zhang, “An interval-valued neutrosophic framework: Improved VIKOR with a preference-aware AHP–entropy weight method for evaluating scalp-detection algorithms,” Appl. Sci., vol. 15, no. 22, p. 11937, 2025.
37. L. T. Wang, Y. Q. Yu, Z. M. Liu, Z. H. Liu, and X. Q. Liu, “An enhanced failure mode and effects analysis risk identification method based on uncertainty and fuzziness,” J. Eng. Manag. Syst. Eng., vol. 3, no. 3, pp. 116–131, 2024.
Open Access
Research article

A Decision-Oriented Framework for Risk-Based Maintenance Planning in High-Performance Mechanical Systems Using Entropy-Integrated FMEA–MCDM Approaches

Dharmpal Deepak 1, Sulakshna Dwivedi 2, Harnam Singh Farwaha 3, Raman Kumar 3,4, Željko Stević 5,6*, Manjunatha Chandra 1,7, Rajender Kumar 8, Anant Prakash Agrawal 9, Vivek John 10

1 Department of Mechanical Engineering, Punjabi University, 147002 Patiala, India
2 School of Business Management and Commerce, Jagat Guru Nanak Dev Punjab State Open University, 147001 Patiala, India
3 Department of Mechanical and Production Engineering, Guru Nanak Dev Engineering College, 141006 Ludhiana, India
4 Department of Mechanical Engineering, Graphic Era (Deemed to be University), Clement Town, 248002 Dehradun, India
5 Department of Industrial Management Engineering, Korea University, 02841 Seoul, South Korea
6 Department of Mobile Machinery and Railway Transport, Faculty of Transport Engineering, Vilnius Gediminas Technical University, 10105 Vilnius, Lithuania
7 Department of Mechanical Engineering, Chandigarh University, 140413 Mohali, India
8 Centre for Research Impact & Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, 140401 Rajpura, India
9 Department of Mechanical Engineering, Noida Institute of Engineering and Technology, 201306 Greater Noida, India
10 Department of Mechanical Engineering, Uttaranchal Institute of Technology, 248007 Dehradun, India
Journal of Engineering Management and Systems Engineering | Volume 5, Issue 2, 2026 | Pages 137-155
Received: 01-15-2026; Revised: 04-07-2026; Accepted: 04-14-2026; Available online: 04-23-2026

Abstract:

Effective maintenance planning in high-performance mechanical systems requires a structured approach to identifying and prioritizing potential failure modes under multiple, often conflicting criteria. Conventional Failure Mode and Effects Analysis (FMEA) relies heavily on subjective judgment, which can limit consistency and transparency in decision-making. To address this limitation, this study develops a decision-oriented framework that integrates Shannon entropy-based weighting with three Multi-Criteria Decision-Making (MCDM) methods: Simple Additive Weighting (SAW), the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and VIKOR. The framework is applied to a representative high-performance mechanical system, in which maintenance-related factors, including failure probability, detection capability, economic impact, repair time, and resource availability, are evaluated in a unified structure. Entropy weighting is employed to derive criterion importance directly from data, reducing reliance on expert bias. The combined use of multiple MCDM techniques enables cross-validation of ranking outcomes and improves the robustness of the prioritization process. The results show a high degree of consistency among the three methods (Spearman’s $\rho>0.80$), indicating stable identification of critical failure modes. The proposed framework provides a transparent basis for risk-informed maintenance planning and supports more effective allocation of inspection and repair resources. From an engineering management perspective, the approach facilitates the transition from experience-driven decisions to data-supported strategies, contributing to improved system reliability and operational efficiency. Although demonstrated in a specific application context, the framework can be extended to other engineering systems where structured failure prioritization is required.
Keywords: Failure Mode and Effects Analysis, Multi-Criteria Decision-Making, Entropy weighting, Maintenance planning, Engineering decision support, System reliability, Risk-based prioritization

1. Introduction

Modern high-performance mechanical systems operate under demanding conditions that expose components to continuous wear, fatigue, and potential failure. In competitive racing bicycles, critical subsystems such as drivetrain components, braking mechanisms, and transmission systems are subjected to dynamic loads, environmental variability, and high operational stress. These conditions increase the likelihood of component degradation and unexpected failures, which can significantly compromise safety, performance, and maintenance efficiency. Therefore, systematic approaches to identifying and prioritizing failure modes are essential for effective maintenance planning and reliable system operation. Well-organized maintenance and resource management are required to advance the performance and durability of bicycles and maintain rider safety while reducing environmental impact [1].

Failure Mode and Effects Analysis (FMEA) is a widely adopted technique for identifying, evaluating, and prioritizing potential failures in engineering systems [2]. Despite its extensive use, traditional FMEA relies heavily on expert judgment and the Risk Priority Number (RPN), introducing subjectivity and limiting its ability to handle multiple conflicting criteria [3]. In complex systems where failure consequences depend on factors such as cost, safety, performance, and maintenance time, conventional FMEA lacks robustness and consistency in decision-making. To overcome these limitations, the integration of Multi-Criteria Decision-Making (MCDM) techniques has been increasingly explored, enabling the systematic evaluation of alternatives across multiple criteria and improving prioritization accuracy [4]. Advanced aggregation-based MCDM approaches further enhance decision consistency by improving the handling of multiple conflicting criteria [5].

Recent advancements in the literature demonstrate the effectiveness of hybrid FMEA–MCDM frameworks for enhancing failure analysis in engineering applications. Various studies have integrated FMEA with advanced decision-making methods such as the AHP [3], VIKOR [3], [6], TOPSIS [6], [7], ELECTRE [2], [8], and fuzzy-based approaches to address uncertainty and improve prioritization consistency [3], [7]. Additionally, objective weighting techniques, particularly Shannon entropy [2], [9], have gained prominence due to their ability to determine criterion importance directly from data, thereby reducing dependence on subjective expert evaluations [7], [10]. Several recent studies have further emphasized the importance of combining subjective and objective weighting schemes to enhance reliability assessment and decision robustness in complex engineering systems [3], [8], [11]. Hybrid FMEA–MCDM frameworks improve the robustness and reliability of failure prioritization in complex engineering systems. Fuzzy and entropy-based Failure Mode Effects and Criticality Analysis (FMECA) models have been proposed to address uncertainty and improve ranking stability in gas turbines and safety-critical mechanical systems [12], [13]. Similarly, entropy-driven and expert-integrated FMEA approaches have shown improved transparency and discrimination capability in maritime and industrial risk assessment applications [8], [14]. Optimization-driven decision frameworks have also been applied in engineering system validation and performance evaluation, demonstrating their effectiveness in improving decision reliability and operational efficiency [15].

In the context of maintenance decision-making, integrated FMEA–MCDM models have been successfully applied to Heating, Ventilation, and Air Conditioning (HVAC) systems and thermal power units to optimize maintenance planning and resource allocation [16], [17]. Furthermore, advanced FMEA extensions incorporating dynamic weighting and predictive maintenance strategies have been developed for Computer Numerical Control (CNC) machines and boiler systems, improving failure detection and prioritization accuracy [18], [19]. Recent reliability-driven studies also highlight the role of FMECA in enhancing system performance and reducing downtime in manufacturing environments [20], while hybrid entropy–MCDM approaches have demonstrated effectiveness in reliability model selection and engineering risk assessment [21]. Advanced expert-based decision systems have also been employed for structured risk assessment, highlighting the role of multi-criteria reasoning in evaluating complex system uncertainties [22]. Despite these advancements, challenges remain in achieving consistent ranking outcomes across multiple decision-making methods and reducing subjectivity in failure prioritization, particularly in high-performance mechanical systems.

Despite these developments, research has been limited in applying integrated FMEA–MCDM frameworks to high-performance bicycle systems, particularly in competitive racing applications where maintenance decisions directly influence operational performance. Existing studies largely focus on general decision-making applications rather than component-level failure prioritization in mechanical systems. Furthermore, the combined use of objective entropy-based weighting with multiple MCDM techniques, such as SAW, TOPSIS, and VIKOR, for comparative and robust failure prioritization remains underexplored in this context. This gap highlights the need for a structured, data-driven framework tailored to failure prioritization in performance-critical mechanical systems.

To address this gap, the current study introduces an integrated FMEA–MCDM framework that combines Shannon entropy-based objective weighting with three popular decision-making methods (SAW, TOPSIS, and VIKOR) to prioritize failure modes in racing bicycle systems. The framework is positioned as a decision-support approach for maintenance planning in engineering systems, providing a structured basis for resource allocation and risk-informed decision-making. It systematically ranks failure modes against multiple operational criteria, helping maintenance planners allocate resources effectively, schedule inspections efficiently, and focus on high-risk components that have a direct impact on system reliability and performance. By merging objective weighting with diverse decision-making approaches, the framework supports consistent and data-informed maintenance planning. Although demonstrated on racing bicycle systems, the method can be extended to a wider range of engineering applications, including automotive, manufacturing, and industrial equipment maintenance, where structured failure prioritization is essential for operational efficiency.

The main contributions of this study are threefold. First, it introduces an entropy-driven multi-method decision framework that improves robustness and reduces subjectivity in failure prioritization. Second, it demonstrates the effectiveness of combining SAW, TOPSIS, and VIKOR to achieve consistent and reliable ranking outcomes. Third, it offers practical insights for maintenance planning by supporting more efficient resource allocation and the prioritization of critical components in high-performance mechanical systems.

2. Methodology

The work aims to identify the most critical failure cause to support efficient maintenance decision-making for road-racing bicycles and to improve maintenance methods using MCDM techniques (SAW, TOPSIS, and VIKOR). First, the criterion weights are calculated using the Shannon Entropy method; then, the three MCDM techniques are applied with these weights, and the resulting ranks are validated using a statistical technique, Spearman’s Rank Correlation Coefficient. The proposed methodology is shown in Figure 1. The rationale for selecting these three MCDM methods lies in their complementary characteristics: VIKOR balances group utility and individual regret, TOPSIS focuses on geometric closeness to an ideal, and SAW uses straightforward additive logic. Employing all three helps triangulate decision outcomes, improving confidence in the final prioritization. By combining them with Shannon Entropy to compute objective weights, the study produces a robust multi-angle decision-support model.
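As a minimal sketch of the rank-agreement check, Spearman’s coefficient for two tie-free rankings can be computed directly from the rank differences; the two example rankings below are hypothetical illustrations, not the study’s results:

```python
import numpy as np

def spearman_rho(rank_a, rank_b):
    """Spearman's rank correlation for two tie-free rankings:
    rho = 1 - 6 * sum(d^2) / (m * (m^2 - 1)), where d are rank differences."""
    a, b = np.asarray(rank_a), np.asarray(rank_b)
    m = len(a)
    d2 = float(((a - b) ** 2).sum())  # sum of squared rank differences
    return 1.0 - 6.0 * d2 / (m * (m ** 2 - 1))

# Hypothetical ranks of five failure modes under two MCDM methods
rho = spearman_rho([1, 2, 3, 4, 5], [1, 3, 2, 4, 5])  # one adjacent swap
```

A value above 0.80, as reported in the abstract, would indicate strong agreement between two methods’ rankings.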

Figure 1. Methodology for maintenance decision-making of road racing bicycles
Note: MCDM = Multi-Criteria Decision-Making.
2.1 Detailed Mathematical Steps of Multi-Criteria Decision-Making Methods

Three MCDM methods, TOPSIS [23], [24], [25], [26], VIKOR [27], and SAW [28], are applied in conjunction with the Shannon Entropy method [29], [30], [31] to determine the weight of the criteria for ordering multiple alternatives.

Let the decision matrix be defined as:

$$\mathrm{X}=\left[x_{i j}\right]_{m \times n}$$

where, $x_{i j}$ denotes the performance score of alternative $i$ under criterion $j$, with $i=1,2, \ldots, m$ and $j=1,2, \ldots, n$. Here, $m$ is the number of alternatives (failure modes) and $n$ is the number of evaluation criteria. In this study, the alternatives correspond to the identified bicycle failure modes, while the criteria correspond to the nine maintenance-related evaluation factors used in the FMEA-based assessment.

For clarity, the symbols used throughout are defined as follows:

• $x_{i j}$: original score of alternative $i$ under criterion $j$

• $r_{i j}$: normalized value of alternative $i$ under criterion $j$

• $p_{i j}$: proportion of alternative $i$ under criterion $j$ used in entropy weighting

• $w_j$: weight of criterion $j$

• $v_{i j}$: weighted normalized value of alternative $i$ under criterion $j$

• $A_i$: overall SAW score of alternative $i$

• $A^{+}$: positive ideal solution in TOPSIS

• $A^{-}$: negative ideal solution in TOPSIS

• $S_i^{+}$: separation of alternative $i$ from the positive ideal solution

• $S_i^{-}$: separation of alternative $i$ from the negative ideal solution

• $C_i$: TOPSIS closeness coefficient of alternative $i$

• $Q_i$: VIKOR compromise index of alternative $i$

• $S_i$: VIKOR group utility measure

• $R_i$: VIKOR individual regret measure

2.1.1 SAW method

The SAW method ranks alternatives by computing a weighted sum of normalized performance scores across criteria [32].

Step 1: Normalize the decision matrix

For benefit criteria, where a larger value is preferred, normalization is performed as:

$r_{i j}=\frac{x_{i j}}{\max _i x_{i j}}$
(1)

For cost criteria, where a smaller value is preferred, normalization is performed as:

$r_{i j}=\frac{\min _i x_{i j}}{x_{i j}}$
(2)

where, $r_{i j}$ is the normalized score of alternative $i$ under criterion $j$.

Step 2: Compute the weighted score

The overall SAW score is computed using Eq. (3).

$A_i=\sum_{j=1}^n w_j r_{i j}$
(3)

where, $A_i$ is the aggregated performance score.

Step 3: Ranking

Alternatives are ranked in descending order of $A_i$. A larger value of $A_i$ indicates a higher priority level in the context of failure mode assessment.
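The three SAW steps above can be sketched in Python as follows; the score matrix, weights, and benefit mask are hypothetical illustrations, not the study’s data:

```python
import numpy as np

def saw_scores(X, weights, benefit):
    """SAW per Eqs. (1)-(3): linear-scale normalization, then weighted sum.
    X: (m, n) score matrix; benefit: boolean mask, True where larger is better."""
    X = np.asarray(X, dtype=float)
    R = np.where(benefit,
                 X / X.max(axis=0),            # Eq. (1): benefit criteria
                 X.min(axis=0) / X)            # Eq. (2): cost criteria
    return R @ np.asarray(weights)             # Eq. (3): aggregated score A_i

# Hypothetical example: 3 failure modes, 2 benefit criteria, equal weights
X = [[9.0, 2.0], [6.0, 4.0], [3.0, 8.0]]
A = saw_scores(X, [0.5, 0.5], benefit=np.array([True, True]))
ranking = np.argsort(-A)  # descending: highest-priority alternative first
```

The `np.where` call applies Eq. (1) or Eq. (2) column by column according to the benefit mask, so mixed benefit/cost problems need no special casing.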

2.1.2 TOPSIS

TOPSIS ranks alternatives by their geometric distance to an ideal-best solution ($A^{+}$) and ideal-worst solution ($A^{-}$); closer to $A^{+}$ (farther from $A^{-}$) is better [33], [34].

Step 1: Vector normalization

The decision matrix is normalized as:

$r_{i j}=\frac{x_{i j}}{\sqrt{\sum_{i=1}^m x_{i j}^2}}$
(4)

where, $r_{i j}$ is the normalized value of alternative $i$ under criterion $j$.

Step 2: Weighted normalized matrix

The weighted normalized values are obtained as:

$v_{i j}=w_j r_{i j}$
(5)

where, $v_{i j}$ is the weighted normalized value of alternative $i$ under criterion $j$.

Step 3: Determine the ideal solutions

The positive ideal solution $A^{+}$ and the negative ideal solution $A^{-}$ are defined as:

$A^{+}=\left\{v_1^{+}, v_2^{+}, \ldots, v_n^{+}\right\}$
(6)
$A^{-}=\left\{v_1^{-}, v_2^{-}, \ldots, v_n^{-}\right\}$
(7)

where, for a benefit criterion:

$$ v_j^{+}=\max _i v_{i j}, v_j^{-}=\min _i v_{i j} $$

and for a cost criterion:

$$ v_j^{+}=\min _i v_{i j}, v_j^{-}=\max _i v_{i j} $$

Thus, $A^{+}$ represents the best attainable criterion values and $A^{-}$ represents the worst attainable criterion values.

Step 4: Separation measures

The Euclidean distance of each alternative from the positive and negative ideal solutions is computed as

$S_i^{+}=\sqrt{\sum_{j=1}^n\left(v_{i j}-v_j^{+}\right)^2}$
(8)
$S_i^{-}=\sqrt{\sum_{j=1}^n\left(v_{i j}-v_j^{-}\right)^2}$
(9)

where, $S_i^{+}$ is the separation from the positive ideal solution and $S_i^{-}$ is the separation from the negative ideal solution.

Step 5: Closeness coefficient

The relative closeness coefficient of alternative $i$ is calculated as:

$C_i=\frac{S_i^{-}}{S_i^{+}+S_i^{-}}$
(10)

where, $0 \leq C_i \leq 1$. A larger $C_i$ indicates that the alternative is closer to the positive ideal solution and farther from the negative ideal solution; therefore, higher $C_i$ values are preferred.

Step 6: Ranking

Alternatives are ranked in descending order of $C_i$. The alternative with the highest $C_i$ is considered the most critical or most preferred according to the TOPSIS framework adopted in this study.
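The six TOPSIS steps can be sketched as follows; the decision matrix, weights, and benefit mask are hypothetical, chosen only to illustrate the computation:

```python
import numpy as np

def topsis(X, weights, benefit):
    """TOPSIS per Eqs. (4)-(10). Returns the closeness coefficients C_i."""
    X = np.asarray(X, dtype=float)
    R = X / np.sqrt((X ** 2).sum(axis=0))      # Eq. (4): vector normalization
    V = R * np.asarray(weights)                # Eq. (5): weighted normalized matrix
    v_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))  # Eq. (6): A+
    v_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))  # Eq. (7): A-
    s_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))          # Eq. (8)
    s_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))          # Eq. (9)
    return s_neg / (s_pos + s_neg)             # Eq. (10): closeness coefficient

# Hypothetical 3x2 example: criterion 1 is benefit, criterion 2 is cost
X = [[7.0, 3.0], [5.0, 5.0], [9.0, 1.0]]
C = topsis(X, [0.6, 0.4], benefit=np.array([True, False]))
```

In this toy matrix the third alternative attains the best value on both criteria, so it coincides with $A^{+}$ and gets $C_i = 1$, while the second coincides with $A^{-}$ and gets $C_i = 0$.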

2.1.3 VIKOR

VIKOR ranks alternatives by balancing group utility ($S_i$) and individual regret ($R_i$) to find a compromise solution closest to the ideal [35], [36].

Step 1: Determine the best and worst values

For each criterion $j$, the best value $f_j^*$ and the worst value $f_j^{-}$ are identified as:

for a benefit criterion:

$$ f_j^*=\max _i x_{i j}, f_j^{-}=\min _i x_{i j} $$

and for a cost criterion:

$$ f_j^*=\min _i x_{i j}, f_j^{-}=\max _i x_{i j} $$

These values serve as reference points for compromise evaluation.

Step 2: Compute the group utility and individual regret

The group utility measure $S_i$ and the individual regret measure $R_i$ are calculated as:

$S_i=\sum_{j=1}^n w_j \frac{f_j^*-x_{i j}}{f_j^*-f_j^{-}}$
(11)
$R_i=\max _j\left[w_j \frac{f_j^*-x_{i j}}{f_j^*-f_j^{-}}\right]$
(12)

where, lower values of $S_i$ and $R_i$ are preferred. Here, $S_i$ reflects the overall distance from the ideal, while $R_i$ captures the worst-case criterion-specific shortfall.

Step 3: Compute the VIKOR index

The compromise index $Q_i$ is given by:

$Q_i=v \frac{S_i-S^*}{S^{-}-S^*}+(1-v) \frac{R_i-R^*}{R^{-}-R^*}$
(13)

where,

$$ S^*=\min _i S_i, S^{-}=\max _i S_i, R^*=\min _i R_i, R^{-}=\max _i R_i $$

and $v$ is the weight of the strategy of majority utility, typically taken as $v=0.5$. A smaller $Q_i$ indicates a better compromise alternative; alternatives are therefore ranked in ascending order of $Q_i$.
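The three VIKOR steps can be sketched as follows. The example matrix is hypothetical, and the sketch assumes every criterion takes at least two distinct values and that $S$ and $R$ are not constant across alternatives (otherwise Eq. (13) would divide by zero):

```python
import numpy as np

def vikor(X, weights, benefit, v=0.5):
    """VIKOR per Eqs. (11)-(13). Returns (S, R, Q); smaller Q is better."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(weights)
    f_star = np.where(benefit, X.max(axis=0), X.min(axis=0))   # best values f_j*
    f_minus = np.where(benefit, X.min(axis=0), X.max(axis=0))  # worst values f_j-
    D = w * (f_star - X) / (f_star - f_minus)  # per-criterion weighted regret
    S = D.sum(axis=1)                          # Eq. (11): group utility
    R = D.max(axis=1)                          # Eq. (12): individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))      # Eq. (13)
    return S, R, Q

# Hypothetical 3x2 example: criterion 1 is benefit, criterion 2 is cost
X = [[7.0, 3.0], [5.0, 5.0], [9.0, 1.0]]
S, R, Q = vikor(X, [0.6, 0.4], benefit=np.array([True, False]))
```

With $v=0.5$ the index weighs group utility and individual regret equally; shifting $v$ toward 1 emphasizes majority consensus instead.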

2.1.4 Rationale for using Shannon entropy over subjective weighting methods

In MCDM, assigning accurate weights to evaluation criteria is crucial, as it shapes the final prioritizations. While methods like the Analytic Hierarchy Process (AHP) and the Best–Worst Method (BWM) rely on expert judgments via pairwise comparisons, they introduce subjectivity and potential bias. In contrast, Shannon Entropy offers an objective, data-driven weighting approach by measuring the diversity of information across alternatives. Criteria with greater variability receive higher weights, enhancing transparency and consistency. This study adopts entropy-based weighting to prioritize failure modes in racing bicycles, emphasizing safety, sustainability, and unbiased decision-making. When integrated with VIKOR, TOPSIS, and SAW, Shannon Entropy enhances methodological robustness by providing normalized, stable weights, which are critical for reliable, repeatable maintenance prioritization in complex systems such as competitive road cycling [10], [37].

2.1.5 Shannon’s entropy method for objective weighting

The Shannon Entropy method objectively determines criterion weights by measuring information variability (entropy) across alternatives. Criteria with higher variability (lower entropy) provide more discriminative information and receive higher weights ($w_j$).

Step 1: Proportion normalization

The original decision matrix is transformed into a proportional matrix as:

$p_{i j}=\frac{x_{i j}}{\sum_{i=1}^m x_{i j}}, \quad i=1,2, \ldots, m ; j=1,2, \ldots, n$
(14)

where, $p_{i j}$ is the normalized proportion of alternative $i$ under criterion $j$, and $\sum_{i=1}^m p_{i j}=1$.

This formulation requires $x_{i j}>0$.

Step 2: Entropy of each criterion

The entropy value of criterion $j$ is computed as:

$e_j=-k \sum_{i=1}^m p_{i j} \ln \left(p_{i j}\right)$
(15)

where, $k=\frac{1}{\ln (m)}$ is a constant used to normalize the entropy value into the interval $0 \leq e_j \leq 1$. Lower entropy indicates greater contrast among alternatives and hence stronger discriminating power of the criterion. When $p_{i j}=0$, the term $p_{i j} \ln \left(p_{i j}\right)$ is taken as zero by continuity.

Step 3: Degree of diversification

The degree of diversification, also called divergence, is obtained as:

$d_j=1-e_j$
(16)

where, $d_j$ expresses the useful information carried by criterion $j$. A larger $d_j$ indicates that the criterion contributes more strongly to discrimination among alternatives.

Step 4: Criterion weights

The normalized entropy weight of each criterion is then calculated by:

$w_j=\frac{d_j}{\sum_{j=1}^n d_j}$
(17)

subject to:

$$ w_j \geq 0, \sum_{j=1}^n w_j=1 $$

These weights are subsequently used in SAW, TOPSIS, and VIKOR.
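The four entropy-weighting steps can be sketched as follows; the toy matrix is hypothetical and deliberately includes one constant criterion to show how uninformative criteria are weighted:

```python
import numpy as np

def entropy_weights(X):
    """Shannon entropy weights per Eqs. (14)-(17); X must be positive, shape (m, n)."""
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    P = X / X.sum(axis=0)                      # Eq. (14): column-wise proportions
    k = 1.0 / np.log(m)                        # normalizing constant
    # p * ln(p) -> 0 as p -> 0 (by continuity), guarded with np.where
    plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -k * plogp.sum(axis=0)                 # Eq. (15): entropy of each criterion
    d = 1.0 - e                                # Eq. (16): degree of diversification
    return d / d.sum()                         # Eq. (17): normalized weights

# Toy matrix: 3 failure modes x 2 criteria; the second criterion is constant
X = np.array([[1.0, 5.0],
              [2.0, 5.0],
              [3.0, 5.0]])
w = entropy_weights(X)
```

Here the constant second criterion has maximum entropy ($e_j = 1$, $d_j = 0$) and therefore receives zero weight, illustrating that criteria with no variation contribute no discriminating information.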

3. Case Study

3.1 Determination of Factors

The traditional FMECA method is a proactive, structured approach for systematically anticipating and identifying failures and their modes in a system or process. Figure 2 illustrates the structured FMEA workflow applied in this study. Its main objectives are to use the data to establish scores for three main components:

• The severity of the risk in a system

• The chance that a failure will occur

• The frequency of non-observation within the system

Figure 2. Failure Mode and Effects Analysis (FMEA): A structured approach
Note: Based on FMEA principles.

The results are then used to determine the criticality of system maintenance. It is primarily applied to macro-level systems. Nonetheless, this case study covers nine criteria to assess the maintenance impact of a failure mode in a system or process, as shown in Figure 3: Failure Probability (FP), Non-Detection of Failure (NDF), Skill Requirements Level (SRL), Economic Risk (ER), Maintenance Duration (MD), Spare Parts Availability (SPA), Machine Reliability (MR), Cost per Kilometer (CPK), and Lead Time for Repair (LTR).

Figure 3. Factors or criteria used in evaluating and maintaining the operational performance of road racing bicycles
3.1.1 Failure Probability

FP represents the frequency of failure occurrence and the extent of associated operational inconvenience. Mean Distance Between Failures (MDBF) is a historical metric derived from maintenance records, empirical data, and expert assessments. A higher failure frequency corresponds to greater criticality. Table 1 presents MDBF values alongside criticality ratings to inform maintenance prioritization. Focusing on high-criticality failure modes enables effective resource allocation, thereby supporting operational reliability and safety.

Table 1. Scores for FP, NDF, and SRL

| Occurrence | MDBF (km) | FP Score | Non-Detection Probability | Non-Detection Severity | NDF Score | Required Skill Level | SRL Score |
|---|---|---|---|---|---|---|---|
| Almost never | 18000 | 1 | $\leq$10% | Intensively less | 1 | Rider | 1 |
| Rare | 16000 | 2 | 11%–20% | Very less | 2 | Helper in the workshop | 2 |
| Very less | 14000 | 3 | 21%–30% | Less | 3 | Technician | 3 |
| Less | 12000 | 4 | 31%–40% | Fair | 4 | Exceptionally talented technician | 4 |
| Medium | 10000 | 5 | 41%–50% | Medium | 5 | Foreman | 5 |
| Slightly huge | 8000 | 6 | 51%–60% | Slightly huge | 6 | Highly competent foreman | 6 |
| Huge | 6000 | 7 | 61%–70% | Huge | 7 | Engineer | 7 |
| Very huge | 4000 | 8 | 71%–80% | Very huge | 8 | Assistant manager | 8 |
| Intensely huge | 2000 | 9 | – | – | – | Manager | 9 |

Note: FP = Failure Probability; NDF = Non-Detection of Failure; SRL = Skill Requirements Level; MDBF = Mean Distance Between Failures.
3.1.2 Non-Detection of Failure

The skill and knowledge of operators and maintenance staff are among the many variables that must come together to successfully identify the cause or mechanism of a failure. These personnel are essential for identifying flaws through visual inspection or planned, routine evaluations. In addition, technological tools such as automated controls, alarms, and sensors dramatically improve the diagnostic process by offering timely alerts and data insights. As indicated in Table 1, these components together provide a comprehensive approach to fault detection, enabling organizations to identify and resolve potential issues before they escalate into severe disruptions or failures.

3.1.3 Skill Requirements Level

A higher score (ranging from level 1, rider, to level 9, manager) reflects greater complexity in the required skills and experience. Lower scores correspond to basic tasks that a rider or workshop helper can perform, whereas higher scores denote roles demanding advanced technical or supervisory capability. This structured scoring system, as presented in Table 1, facilitates matching personnel to maintenance tasks. Accurately identifying the required skill level is essential for efficiently allocating resources in maintenance operations, particularly because higher skill levels are generally associated with higher labor costs.

3.1.4 Economic Risk

ER represents the estimated financial and operational impact associated with a failure event. The scoring reflects the potential economic consequences arising from component damage severity, repair or replacement cost, downtime implications, and secondary system disruption. While mechanical complexity (e.g., the number of interacting parts) contributes to impact assessment, the evaluation primarily focuses on the magnitude of repair costs and the risk of operational interruption. Higher scores indicate failures with greater economic and service continuity consequences. A standardized scoring approach, as outlined in Table 2, ensures consistency in evaluation, highlights high-risk areas, and facilitates informed decision-making. This approach supports prioritizing safety interventions and allocating resources optimally, enabling early hazard mitigation and minimizing operational disruptions.

Table 2. Scores for Economic Risk (ER) and Maintenance Duration (MD)

| Count of Movable Parts (ER) | ER Score | Downtime in Days (MD) | MD Score |
|---|---|---|---|
| $\leq$100 | 1 | $\leq$1 | 1 |
| $\leq$200 | 2 | $\leq$2 | 2 |
| $\leq$300 | 3 | $\leq$3 | 3 |
| $\leq$400 | 4 | $\leq$4 | 4 |
| $\leq$500 | 5 | $\leq$5 | 5 |
| $\leq$600 | 6 | $\leq$6 | 6 |
| $\leq$700 | 7 | $\leq$7 | 7 |
| $\leq$800 | 8 | $\leq$8 | 8 |
| $\leq$900 | 9 | $>$8 | 9 |

3.1.5 Maintenance Duration

MD refers to the overall ease with which equipment can be restored following failure or scheduled maintenance. This metric encompasses not only the duration of the recovery process but also factors such as the availability of replacement parts, technician readiness, equipment complexity, and the efficiency of maintenance procedures. Consistent with Table 2, low MD scores correspond to short downtime and quick restoration, whereas high MD scores indicate prolonged recovery scenarios, often attributable to insufficient infrastructure, outdated technology, or inadequate support systems. MD-based analysis helps identify areas where reliability is degrading, thereby supporting the prioritization of reliability enhancement efforts and the formulation of maintenance strategies that minimize downtime while maintaining acceptable overall performance.

3.1.6 Spare Parts Availability

Access to spare parts is a critical determinant of maintenance effectiveness and plays a significant role in extending equipment lifespan. In this framework, components are prioritized by weighing their functional importance against ease of acquisition, using a classification matrix with three criticality categories on the Y-axis (suitable, important, and vital) and three availability levels on the X-axis (easy, difficult, and scarce). Table 3 supports maintenance prioritization based on urgency and facilitates informed resource planning. By aligning part criticality with availability, this approach enables optimal task scheduling and ensures the continuity of maintenance operations.

Table 3. Ratings for Spare Parts Availability (SPA)

| Criticality \ Accessibility | Easy | Difficult | Scarce |
|---|---|---|---|
| Suitable | 1 | 4 | 7 |
| Important | 2 | 5 | 8 |
| Vital | 3 | 6 | 9 |

3.1.7 Machine Reliability

MR represents the likelihood that a component performs its intended function without failure over a defined usage interval, approximated using expert judgment and historical MTBF. Higher scores indicate greater unreliability. MR is influenced by several factors, including the quality of the machine’s design, the consistency and adequacy of maintenance practices, prevailing operating conditions, the availability of spare parts, and the presence of skilled technical personnel. Adverse conditions and maintenance neglect diminish MR, whereas real-time monitoring and proactive maintenance strategies enhance it. Table 4 presents the prioritization of these contributing factors, offering guidance for improving system uptime, mitigating failure-related risks, and promoting cost-effective, reliable operation of industrial and mechanical systems.

Although FP and MR are related to reliability, they capture different dimensions of system performance. FP reflects the short-term probability of a failure event occurring under current operating conditions, primarily based on observed frequency data. In contrast, MR represents the inherent long-term structural robustness of the component, reflecting design strength and material durability. Therefore, FP measures the likelihood of operational occurrence, whereas MR reflects intrinsic reliability characteristics. Including both allows the model to distinguish between frequently occurring faults and structurally critical weaknesses.

Table 4. Scores for Machine Reliability (MR), Cost per Kilometer (CPK), and Lead Time for Repair (LTR)

| Score | MR | CPK (INR/km) | LTR (hr) |
|---|---|---|---|
| 1 | Intensively less | 15 | 0.5 |
| 2 | Very less | 20 | 1 |
| 3 | Less | 25 | 2 |
| 4 | Fair | 30 | 3 |
| 5 | Medium | 35 | 4 |
| 6 | Slightly huge | 40 | 5 |
| 7 | Huge | 45 | 6 |
| 8 | Very huge | 50 | 7 |
| 9 | Intensively huge | 55 | 8 |
3.1.8 Cost per Kilometer

A CPK scorecard for road bike racing covers expenses for acquisition, maintenance, component upgrades, and participation in competitive events. Key considerations include the bicycle’s quality, the performance of its components, and ancillary racing costs such as travel and event fees. Systematically recording both mileage and financial investment not only provides insight into short-term value but also facilitates evaluation of long-term cost efficiency. The resulting score scale is presented in Table 4. A customized CPK framework enables riders to plan expenditures strategically in alignment with performance objectives, thereby optimizing budget utilization and enhancing racing outcomes in accordance with individual competitive goals.

3.1.9 Lead Time for Repair

LTR refers to the estimated time required (in hours) to fully repair or replace a component after failure, based on historical maintenance records. It is scored using predefined intervals (Table 4), where higher scores indicate longer repair times. LTR reduction in road bike racing can begin in the planning phase through the selection of reliable components and effective maintenance practices. Key strategies include: (1) identifying common points of failure; (2) utilizing high-quality, durable components; (3) ensuring a well-equipped repair kit is available on race days; and (4) scheduling regular preventive maintenance. Additionally, immediate access to skilled technicians who can promptly diagnose and address mechanical issues is essential. The outcomes, as presented in Table 4, include reduced downtime, improved performance, and increased riding hours, all of which are critical for competitive cyclists aiming to achieve consistency and reliability during events.

3.2 Expert Scoring Protocol

The scoring process involved three domain experts (certified bicycle mechanics) with experience in competitive bicycle maintenance and mechanical diagnostics. Each expert independently evaluated the failure modes using the predefined 1–9 ordinal scale for all criteria. After the independent scoring phase, a structured discussion session was held to reconcile significant deviations (differences $\geq$ 2 points). Final scores were determined by consensus averaging once agreement was reached.
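The reconciliation rule described above can be sketched in a few lines of Python. The function names and the rounded-mean consensus rule are illustrative assumptions for demonstration, not part of the study's documented protocol:

```python
# Hypothetical sketch of the expert-scoring protocol: three experts score each
# failure mode on a 1-9 scale, spreads of 2 or more points are flagged for a
# reconciliation discussion, and final scores are consensus averages.

def needs_discussion(scores, threshold=2):
    """Flag a set of expert scores for reconciliation if the spread >= threshold."""
    return max(scores) - min(scores) >= threshold

def consensus(scores):
    """Consensus score after discussion, taken here (an assumption) as the rounded mean."""
    return round(sum(scores) / len(scores))

# Three experts scoring one failure mode on one criterion (1-9 ordinal scale)
expert_scores = [3, 5, 4]
flagged = needs_discussion(expert_scores)   # spread is 2, so this is flagged
final = consensus(expert_scores)            # mean 4.0 -> consensus score 4
```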

To enhance consistency, experts were provided with operational definitions for each criterion and reference examples from maintenance records. No pairwise comparison methods were used, as entropy weighting was applied to minimize subjectivity in assessing the importance of criteria. The finalized score matrix represents the agreed expert consensus supported by documented maintenance evidence.

Within the proposed framework, sustainability is operationalized primarily through resource-efficiency-oriented criteria such as ER, CPK, and LTR. ER reflects the economic impact of failure events, CPK captures lifecycle operating cost implications, and LTR relates to downtime duration and associated resource consumption. While the model does not explicitly incorporate environmental emissions metrics, improved failure prioritization indirectly contributes to sustainability by minimizing material waste, extending component lifespans, and optimizing maintenance resource allocation. Therefore, sustainability benefits are treated as consequential outcomes of enhanced reliability management rather than as standalone environmental indicators.

4. Rating Failure Modes and Assigning Scores

This study presents a case study of a racing road bicycle system. There are many disciplines of bicycle racing, including MTB, track, BMX, and cyclo-cross; the present research focuses on road bikes, the discipline with the largest rider population. Road bikes are bicycles designed primarily for racing on paved roads, and their maintenance is central to this study. A root cause analysis identifies the possible failure modes of the parts, their causes, and their impact on rider performance. According to the scoring system outlined in the preceding section, Table 5 presents numerical scores for the identified failure causes.

Data for the case study were sourced from maintenance records, technical manuals, engineering specifications, and expert evaluations by professional bicycle mechanics. Fifteen high-performance road bicycles were assessed using a structured consensus approach, supported by historical logs, to rate criteria such as failure probability, skill requirements, and part availability. While subjective insights were used where necessary (skill level, availability), objective data, such as MTBF and service times, were prioritized. Structured scoring and entropy-based weighting ensured methodological transparency and reduced bias. However, given the study’s specialized scope, its findings may not be broadly generalizable without domain-specific adaptation. The scores presented in Table 5 were derived from structured expert judgment informed by real-world maintenance observations, MTBF records, and engineering documentation. The evaluation team consisted of three domain experts (certified bicycle mechanics) and two experienced cycling coaches. A consensus approach was used to assign scores based on the nine defined criteria, with an emphasis on repeatability and the representativeness of typical racing scenarios.

The 1–9 ordinal scale was adopted to ensure consistency with established FMEA and MCDM applications, where nine-point scales provide adequate discrimination without introducing unnecessary granularity. The threshold values for each criterion were defined using historical maintenance records, repair duration ranges, and documented operational risk levels to preserve practical relevance. The scale boundaries were fixed prior to expert evaluation to avoid post hoc adjustments and reduce scoring bias.

Table 5. Score assignment matrix

| Main Parts | Major Mode of Failure | Major Effect of Failure | Causes of Failure | Alternatives | FP | NDF | SRL | ER | MD | SPA | MR | CPK | LTR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Frameset | Total bicycle failure | Injury to rider | Shocks from the terrain | D1 | 2 | 9 | 7 | 6 | 1 | 3 | 8 | 1 | 3 |
| | | | Major crash | D2 | 7 | 9 | 9 | 8 | 7 | 8 | 1 | 9 | 9 |
| Drive train unit | Chain slipping off the cassette | Reduction in riding efficiency | Chain wear | D3 | 9 | 5 | 4 | 4 | 2 | 4 | 5 | 3 | 3 |
| | | | Cassette wear | D4 | 7 | 7 | 4 | 3 | 2 | 4 | 5 | 4 | 2 |
| | | | Chain plate wear | D5 | 5 | 5 | 4 | 3 | 2 | 5 | 5 | 2 | 2 |
| | | | Bottom bracket wear in bearings | D6 | 5 | 4 | 5 | 2 | 2 | 5 | 3 | 2 | 3 |
| | | | Wear in the crankset spindle | D7 | 3 | 8 | 3 | 1 | 3 | 7 | 4 | 3 | 2 |
| Steering unit | Bike imbalance | Injury to the rider | Wear in the stem screws and clamps | D8 | 5 | 8 | 4 | 1 | 3 | 2 | 4 | 3 | 2 |
| Brake unit | Brake failure | Injury to the rider | Wear in the brake pads and clamping screws | D9 | 2 | 8 | 4 | 2 | 3 | 2 | 6 | 2 | 2 |
| Wheel | Wheel rocking | Reduction in riding efficiency | Wear in the tyre | D10 | 7 | 6 | 6 | 1 | 1 | 2 | 5 | 4 | 2 |
| | | | Axle nut wear | D11 | 2 | 5 | 5 | 2 | 2 | 3 | 6 | 2 | 3 |
| | | | Hub bearing wear | D12 | 4 | 8 | 3 | 5 | 6 | 2 | 7 | 3 | 5 |

Note: FP = Failure Probability; NDF = Non-Detection of Failure; SRL = Skill Requirements Level; ER = Economic Risk; MD = Maintenance Duration; SPA = Spare Parts Availability; MR= Machine Reliability; CPK= Cost per Kilometer; LTR = Lead Time for Repair.

5. Results and Discussion

First, objective criterion weights were computed using Shannon’s entropy method. All calculations were performed in Microsoft Excel. The process involved normalizing the decision matrix of expert scores (Eq. (14)), computing the entropy value of each criterion (Eq. (15)), evaluating the degree of divergence (Eq. (16)), and calculating the normalized weights (Eq. (17)). Table 6 summarizes the intermediate entropy terms, divergence measures, and final normalized weights to ensure full methodological transparency and reproducibility.
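As a sketch of these steps, the following Python snippet implements entropy weighting on a small synthetic matrix. The column-sum normalization used for Eq. (14) is a common convention assumed here (the study's exact spreadsheet normalization may differ), and the function name is illustrative:

```python
import numpy as np

def entropy_weights(X):
    """Shannon entropy weighting: rows = alternatives, columns = criteria."""
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                      # Eq. (14)-style column-share normalization
    k = 1.0 / np.log(m)                        # scaling constant 1/ln(m)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -k * plogp.sum(axis=0)                 # Eq. (15): entropy of each criterion
    d = 1.0 - e                                # Eq. (16): divergence measure
    return d / d.sum()                         # Eq. (17): normalized weights

# A constant column carries no discriminating information, so it gets weight ~0
X = [[1.0, 5.0],
     [1.0, 9.0]]
w = entropy_weights(X)   # approximately [0.0, 1.0]
```

Criteria whose scores vary strongly across alternatives receive higher weights, which is why ER dominates in Table 6 while the nearly uniform NDF column receives the smallest weight.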

Table 6. Entropy weight calculation summary

The number of alternatives is $n=12$, giving $\ln(n)=2.4849$ and scaling constant $k=1/\ln(n)=0.4024$ for every criterion.

| Step | FP | NDF | SRL | ER | MD | SPA | MR | CPK | LTR |
|---|---|---|---|---|---|---|---|---|---|
| Entropy summation term | -2.4071 | -2.4701 | -2.4488 | -2.2829 | -2.3604 | -2.3819 | -2.4109 | -2.3342 | -2.3412 |
| Entropy value | 0.9687 | 0.9940 | 0.9855 | 0.9187 | 0.9499 | 0.9585 | 0.9702 | 0.9394 | 0.9422 |
| Divergence measure | 0.0313 | 0.0060 | 0.0145 | 0.0813 | 0.0501 | 0.0415 | 0.0298 | 0.0606 | 0.0578 |
| Final weight | 0.0840 | 0.0160 | 0.0389 | 0.2180 | 0.1344 | 0.1112 | 0.0799 | 0.1626 | 0.1551 |

Note: FP = Failure Probability; NDF = Non-Detection of Failure; SRL = Skill Requirements Level; ER = Economic Risk; MD = Maintenance Duration; SPA = Spare Parts Availability; MR= Machine Reliability; CPK= Cost per Kilometer; LTR = Lead Time for Repair.
5.1 Prioritizing the Options Using the SAW Method

Table 7 presents the criterion values normalized for SAW using Eq. (1) and Eq. (2), together with the total score and rank of each alternative calculated using Eq. (3). D2 has the highest total score of 0.9002 and is ranked first, performing well across nearly all criteria; it is a balanced, high-performance option, particularly for ER and CPK. D12 ranks second with a total score of 0.4976, and D3 ranks third with a total score of 0.4800.
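The SAW aggregation can be sketched compactly in Python. The max/min linear normalization used here (benefit: $x/x_{\max}$; cost: $x_{\min}/x$) is a standard convention assumed for illustration, and the toy matrix is not the study's data:

```python
import numpy as np

def saw_scores(X, weights, benefit):
    """Simple Additive Weighting on a decision matrix (rows = alternatives)."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(weights, dtype=float)
    R = np.empty_like(X)
    for j in range(X.shape[1]):
        col = X[:, j]
        # Linear normalization: x/max for benefit criteria, min/x for cost criteria
        R[:, j] = col / col.max() if benefit[j] else col.min() / col
    return R @ w                               # weighted-sum aggregation (Eq. (3)-style)

X = [[7.0, 2.0],
     [4.0, 5.0]]
scores = saw_scores(X, [0.6, 0.4], benefit=[True, False])
ranking = np.argsort(-scores)                  # indices of alternatives, best first
```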

Table 7. Normalized decision matrix and SAW ranks

| Alternative | FP | NDF | SRL | ER | MD | SPA | MR | CPK | LTR | Total | Rank |
|---|---|---|---|---|---|---|---|---|---|---|---|
| D1 | 0.3333 | 0.5556 | 1.0000 | 0.7500 | 0.1429 | 0.6667 | 0.1250 | 0.1111 | 0.4444 | 0.4296 | 5 |
| D2 | 0.8889 | 0.5556 | 1.0000 | 1.0000 | 1.0000 | 0.2500 | 1.0000 | 1.0000 | 1.0000 | 0.9002 | 1 |
| D3 | 1.0000 | 0.8333 | 0.5000 | 0.5000 | 0.5714 | 0.5000 | 0.2000 | 0.3333 | 0.3333 | 0.4800 | 3 |
| D4 | 0.8889 | 0.7143 | 0.6250 | 0.5000 | 0.2857 | 0.5000 | 0.2000 | 0.4444 | 0.2222 | 0.4361 | 4 |
| D5 | 0.6667 | 0.8333 | 0.5000 | 0.3750 | 0.2857 | 0.4000 | 0.2000 | 0.2222 | 0.2222 | 0.3399 | 9 |
| D6 | 0.5556 | 1.0000 | 0.6250 | 0.2500 | 0.2857 | 0.4000 | 0.3333 | 0.2222 | 0.3333 | 0.3388 | 10 |
| D7 | 0.3333 | 0.6250 | 0.5000 | 0.1250 | 0.4286 | 0.2857 | 0.2500 | 0.4444 | 0.2222 | 0.3007 | 12 |
| D8 | 0.5556 | 0.6250 | 0.6250 | 0.1250 | 0.4286 | 1.0000 | 0.2500 | 0.3333 | 0.2222 | 0.3856 | 7 |
| D9 | 0.3333 | 0.6250 | 0.5000 | 0.2500 | 0.4286 | 1.0000 | 0.1667 | 0.2222 | 0.2222 | 0.3646 | 8 |
| D10 | 0.6667 | 0.7143 | 0.5000 | 0.1250 | 0.2857 | 1.0000 | 0.2000 | 0.4444 | 0.2222 | 0.3864 | 6 |
| D11 | 0.3333 | 0.8333 | 0.6250 | 0.2500 | 0.2857 | 0.6667 | 0.1667 | 0.2222 | 0.3333 | 0.3338 | 11 |
| D12 | 0.4444 | 0.6250 | 0.5000 | 0.5000 | 0.7143 | 0.6667 | 0.1429 | 0.3333 | 0.5556 | 0.4976 | 2 |

Note: FP = Failure Probability; NDF = Non-Detection of Failure; SRL = Skill Requirements Level; ER = Economic Risk; MD = Maintenance Duration; SPA = Spare Parts Availability; MR= Machine Reliability; CPK= Cost per Kilometer; LTR = Lead Time for Repair.
5.2 Prioritizing the Options Using the TOPSIS Method

The decision matrix is first normalized using Eq. (4) so that the values of each criterion are comparable across alternatives, and weighted normalization is then performed using Eq. (5). TOPSIS next determines the ideal solution (best possible values) and the negative-ideal solution (worst possible values) using Eq. (6) and Eq. (7), and computes the separation of each alternative from them using Eq. (8) and Eq. (9). The closeness coefficient ($C_i$), obtained from Eq. (10), expresses how close each decision alternative (D1, D2, etc.) is to the ideal solution; a higher $C_i$ indicates better performance. Table 8 presents the separation measures, the TOPSIS scores, and the resulting ranks.
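These steps can be sketched in a compact TOPSIS implementation. The vector normalization and the toy two-alternative matrix below are illustrative assumptions, not the study's data:

```python
import numpy as np

def topsis(X, weights, benefit):
    """TOPSIS closeness coefficients; rows = alternatives, columns = criteria."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(weights, dtype=float)
    V = X / np.sqrt((X ** 2).sum(axis=0)) * w              # Eqs. (4)-(5): weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))  # Eq. (6): ideal solution
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))   # Eq. (7): negative-ideal solution
    d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))       # Eq. (8): separation from ideal
    d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))       # Eq. (9): separation from negative-ideal
    return d_minus / (d_plus + d_minus)                    # Eq. (10): closeness coefficient

X = [[3.0, 4.0],
     [4.0, 3.0]]
c = topsis(X, [0.7, 0.3], benefit=np.array([True, True]))
# The heavier weight on the first criterion pulls the second alternative closer to the ideal
```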

Table 8. Decision matrix normalized, TOPSIS scores, and final ranks

| Alternative | Distance from Ideal Best Solution | Distance from Ideal Worst Solution | Closeness Coefficient | Rank |
|---|---|---|---|---|
| D1 | 0.1441 | 0.0941 | 0.3952 | 3 |
| D2 | 0.0439 | 0.1906 | 0.8128 | 1 |
| D3 | 0.1292 | 0.0781 | 0.3767 | 4 |
| D4 | 0.1375 | 0.0740 | 0.3499 | 5 |
| D5 | 0.1597 | 0.0470 | 0.2274 | 9 |
| D6 | 0.1631 | 0.0418 | 0.2040 | 12 |
| D7 | 0.1688 | 0.0476 | 0.2199 | 10 |
| D8 | 0.1687 | 0.0586 | 0.2576 | 7 |
| D9 | 0.1661 | 0.0542 | 0.2460 | 8 |
| D10 | 0.1673 | 0.0613 | 0.2680 | 6 |
| D11 | 0.1642 | 0.0459 | 0.2185 | 11 |
| D12 | 0.1182 | 0.0883 | 0.4278 | 2 |

The TOPSIS results indicate that D2 has the highest closeness coefficient $\left(C_i=0.8128\right)$, making it the top-ranked alternative. It performs strongly on the most heavily weighted criteria, particularly ER, MD, MR, CPK, and LTR, indicating a well-balanced and robust profile. D12 ranks second with a relatively high closeness coefficient $\left(C_i=0.4278\right)$, showing strong performance across multiple criteria, especially ER and the maintenance-related factors, although its overall performance remains below that of D2. D1 and D3 follow with $C_i=0.3952$ and 0.3767, respectively, indicating moderate proximity to the ideal solution. D3 performs well in FP and NDF, while D1 exhibits relatively balanced performance across most criteria without dominating any specific aspect.

D4 ranks in the mid-range ($C_i=0.3499$), showing acceptable performance in FP and NDF but weaker results in LTR. Alternatives such as D10, D8, and D9 demonstrate moderate performance, indicating some strengths but also noticeable limitations across key criteria.

On the lower end, D5, D7, D11, and D6 exhibit relatively low closeness coefficients, with D6 being the least preferred alternative $\left(C_i=0.2040\right)$. These alternatives are farther from the ideal solution due to weaker performance across multiple critical criteria, particularly ER and CPK, which significantly reduces their prioritization.

Overall, the TOPSIS analysis confirms that D2, D12, and D1 are the most preferred alternatives, as they consistently perform well across the failure, economic, and operational criteria. In contrast, lower-ranked alternatives such as D6 and D7 would require improvement in key areas such as ER, CPK, and LTR to improve their prioritization in future evaluations.

5.3 Prioritizing the Options Using the VIKOR Method

Table 9 presents the weighted normalized decision matrix, from which the group utility measure (Eq. (11)) and the individual regret measure (Eq. (12)) are obtained, together with the VIKOR index values (Eq. (13)) and the final ranks. D2 has the lowest $Q_i$ score of 0.0000, making it the best option; most of its weighted gaps are zero, and it performs exceptionally well across all criteria except SPA, placing it closest to the ideal solution. D12 ranks second with a $Q_i$ score of 0.3886; it performs strongly in MD and SPA and offers a balanced profile across several criteria, making it a good compromise. D3 is ranked third, performing well in FP and NDF but worse than D2 and D12 in ER, CPK, and LTR. D4 ranks fourth, performing well in FP, NDF, and SRL but weaker in ER, MD, and LTR, making it a middle-of-the-pack alternative overall.
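The VIKOR index computation can be sketched as follows. The compromise weight $v=0.5$ and the benefit-only toy matrix are illustrative assumptions, not the study's data:

```python
import numpy as np

def vikor(X, weights, v=0.5):
    """VIKOR index Q for a decision matrix of benefit criteria (rows = alternatives)."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(weights, dtype=float)
    f_best, f_worst = X.max(axis=0), X.min(axis=0)     # best/worst value per criterion
    gap = w * (f_best - X) / (f_best - f_worst)        # weighted normalized gaps
    S = gap.sum(axis=1)                                # Eq. (11): group utility measure
    R = gap.max(axis=1)                                # Eq. (12): individual regret measure
    Q = (v * (S - S.min()) / (S.max() - S.min())       # Eq. (13): compromise index
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q

X = [[9.0, 3.0],
     [5.0, 9.0],
     [1.0, 1.0]]
Q = vikor(X, [0.5, 0.5])   # lower Q is better; the second alternative ranks first here
```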

Table 9. Normalized decision matrix and VIKOR ranks

| Alternative | FP | NDF | SRL | ER | MD | SPA | MR | CPK | LTR | VIKOR Index (Qi) | Rank |
|---|---|---|---|---|---|---|---|---|---|---|---|
| D1 | 0.0840 | 0.0160 | 0.0000 | 0.0623 | 0.1344 | 0.0185 | 0.0799 | 0.1626 | 0.1108 | 0.6256 | 5 |
| D2 | 0.0140 | 0.0160 | 0.0000 | 0.0000 | 0.0000 | 0.1112 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 1 |
| D3 | 0.0000 | 0.0040 | 0.0389 | 0.1245 | 0.0672 | 0.0371 | 0.0457 | 0.1219 | 0.1329 | 0.4166 | 3 |
| D4 | 0.0140 | 0.0080 | 0.0292 | 0.1245 | 0.1120 | 0.0371 | 0.0457 | 0.1016 | 0.1551 | 0.5604 | 4 |
| D5 | 0.0420 | 0.0040 | 0.0389 | 0.1557 | 0.1120 | 0.0556 | 0.0457 | 0.1423 | 0.1551 | 0.6537 | 6 |
| D6 | 0.0560 | 0.0000 | 0.0292 | 0.1868 | 0.1120 | 0.0556 | 0.0228 | 0.1423 | 0.1329 | 0.7896 | 7 |
| D7 | 0.0840 | 0.0120 | 0.0389 | 0.2180 | 0.0896 | 0.0927 | 0.0342 | 0.1016 | 0.1551 | 1.0000 | 12 |
| D8 | 0.0560 | 0.0120 | 0.0292 | 0.2180 | 0.0896 | 0.0000 | 0.0342 | 0.1219 | 0.1551 | 0.9196 | 10 |
| D9 | 0.0840 | 0.0120 | 0.0389 | 0.1868 | 0.0896 | 0.0000 | 0.0571 | 0.1423 | 0.1551 | 0.8101 | 8 |
| D10 | 0.0420 | 0.0080 | 0.0389 | 0.2180 | 0.1120 | 0.0000 | 0.0457 | 0.1016 | 0.1551 | 0.9235 | 11 |
| D11 | 0.0840 | 0.0040 | 0.0292 | 0.1868 | 0.1120 | 0.0185 | 0.0571 | 0.1423 | 0.1329 | 0.8109 | 9 |
| D12 | 0.0700 | 0.0120 | 0.0389 | 0.1245 | 0.0448 | 0.0185 | 0.0685 | 0.1219 | 0.0886 | 0.3886 | 2 |

Note: FP = Failure Probability; NDF = Non-Detection of Failure; SRL = Skill Requirements Level; ER = Economic Risk; MD = Maintenance Duration; SPA = Spare Parts Availability; MR= Machine Reliability; CPK= Cost per Kilometer; LTR = Lead Time for Repair.

Lower-ranked alternatives (D6, D8, D9, D10, D11) have relatively high $Q_i$ scores and are less preferable than D2, D12, and D3. D6 performs poorly in ER, CPK, and MD, while D8 and D10 show large weighted gaps in ER and LTR. Their lower prioritizations suggest they are not ideal solutions. The VIKOR technique promotes compromise solutions, and D2 emerges as the best option, effectively balancing all requirements.

5.4 Comparing Different Approaches

Table 10 compares the results of the three MCDM methods: TOPSIS, VIKOR, and SAW. Each technique evaluates the decision alternatives (D1 to D12) using a different scoring mechanism, yielding somewhat different yet broadly consistent prioritizations. TOPSIS ranks the options by their closeness to the ideal solution, with higher values indicating better performance. VIKOR ranks the options by their $Q_i$ values, with lower $Q_i$ indicating a better choice. SAW ranks alternatives by their overall weighted scores, with higher scores indicating better prioritization. Figure 4 shows the ranks attained by the alternatives under the three MCDM methods.

Table 10. Comparison of different approaches

| Alternatives | TOPSIS Closeness Coefficient (Ci) | TOPSIS Rank | VIKOR Index (Qi) | VIKOR Rank | SAW Score | SAW Rank |
|---|---|---|---|---|---|---|
| D1 | 0.3952 | 4 | 0.6256 | 5 | 0.4296 | 5 |
| D2 | 0.8128 | 1 | 0.0000 | 1 | 0.9002 | 1 |
| D3 | 0.3767 | 3 | 0.4166 | 3 | 0.4800 | 3 |
| D4 | 0.3499 | 5 | 0.5604 | 4 | 0.4361 | 4 |
| D5 | 0.2274 | 9 | 0.6537 | 6 | 0.3399 | 9 |
| D6 | 0.2040 | 11 | 0.7896 | 7 | 0.3388 | 10 |
| D7 | 0.2199 | 12 | 1.0000 | 12 | 0.3007 | 12 |
| D8 | 0.2576 | 7 | 0.9196 | 10 | 0.3856 | 7 |
| D9 | 0.2460 | 6 | 0.8101 | 8 | 0.3646 | 8 |
| D10 | 0.2680 | 8 | 0.9235 | 11 | 0.3864 | 6 |
| D11 | 0.2185 | 10 | 0.8109 | 9 | 0.3338 | 11 |
| D12 | 0.4278 | 2 | 0.3886 | 2 | 0.4976 | 2 |

Figure 4. Ranks attained by different Multi-Criteria Decision-Making (MCDM) methods

D2 is the top-performing alternative, consistently ranked 1st across all three MCDM methods, making it a robust choice under every evaluation technique. Similarly, D12 follows closely, ranking 2nd in all three methods, and is another robust and reliable alternative. D3 ranks 3rd across all three approaches, further indicating a high level of agreement among the methods regarding the relative merits of the top alternatives.

D1 and D4 display slight variability in their prioritizations: D1 ranks 4th in TOPSIS but 5th in both VIKOR and SAW, while D4 ranks 5th in TOPSIS and 4th in VIKOR and SAW. While these alternatives are relatively stable in performance, minor rank shifts across methods reflect subtle differences in how each method weights and aggregates the criteria. Alternative D7 consistently ranks last across all three MCDM methods, indicating poor performance under every approach.

From an engineering perspective, the high priority given to D2 (major frame crash) reflects the structural vulnerability of lightweight racing frames, which are optimized for stiffness-to-weight rather than impact tolerance. The prominence of D12 (hub bearing wear) is associated with cyclic radial loading and insufficient lubrication at high speeds, leading to progressive fatigue degradation. Similarly, D3 (chain wear) is influenced by repetitive tensile loading, contamination, and misalignment during aggressive shifting. These rankings align with known mechanical stress mechanisms and operational load patterns typical of competitive cycling environments.

The differences in ranks highlight the unique strengths of each MCDM method and the criteria they prioritize. The overall agreement between the methods suggests they provide a reliable basis for decision-making, especially when the same alternatives consistently rank at the top or bottom across all three MCDM techniques.

The prioritization results can be understood in terms of underlying mechanical behavior and maintenance implications. The dominance of D2 (major frame crash) is linked to high stress concentration, impact loading, and the structural vulnerability of lightweight frames, which are optimized for stiffness rather than damage tolerance. Similarly, D12 (hub bearing wear) indicates progressive fatigue from cyclic radial loading, lubrication degradation, and contamination, making it a critical reliability concern. D3 (chain wear) is affected by repetitive tensile loading, misalignment, and abrasive conditions, which speed up material degradation and reduce drivetrain efficiency.

The stability of rankings across SAW, TOPSIS, and VIKOR can be attributed to the consistent dominance of key criteria such as ER, CPK, and MD, which have higher entropy weights. Alternatives that perform well in these high-weight criteria stay at the top across different methods, while mid-ranked options vary because of differences in how they aggregate data and their sensitivity to trade-offs among criteria. From a decision-making point of view, this prioritization allows for adaptive maintenance strategies depending on operational constraints. Under cost limitations, emphasis may be on reducing CPK and LTR, while safety-critical situations require focusing on FP and ER. Similarly, in time-sensitive maintenance environments, MD and SPA may be prioritized to ensure quick system recovery. Hence, the proposed framework enables flexible, context-specific maintenance planning by aligning failure priorities with specific operational goals.

From an engineering management perspective, the proposed framework serves as a structured decision-support tool for maintenance planning and resource allocation. Prioritizing failure modes allows for the creation of risk-based inspection schedules, where high-ranked failures, such as D2 and D12, undergo more frequent and detailed inspections, while lower-ranked modes can be monitored less intensively. Regarding maintenance budgeting, the ranking provides a quantitative foundation for distributing financial resources toward components that have the greatest impact on system reliability and operational risk. This approach supports cost-effective maintenance strategies by directing expenditures toward critical failure modes rather than spreading resources evenly.

The framework also guides technician allocation and workforce planning. High-priority failure modes requiring specialized skills (e.g., structural frame inspection or bearing replacement) can be assigned to experienced personnel, while routine tasks involving lower-risk components can be delegated accordingly. This enhances workforce efficiency and minimizes downtime. More broadly, integrating FMEA with entropy-based weighting and multiple MCDM methods creates a transferable decision-making process applicable to other engineering systems. By systematically connecting failure characteristics with operational criteria, the approach supports consistent, data-driven decision-making across maintenance, reliability, and asset management contexts.

The proposed maintenance actions are directly derived from the integrated framework’s scoring structure and prioritization results. Specifically, alternatives with higher closeness coefficients (e.g., D2, D12, and D3) exhibit strong performance across high-weight criteria such as ER, CPK, and MD, indicating a greater impact on system reliability and operational risk. As a result, these failure modes are assigned higher priority for inspection and preventive maintenance.

The inspection frequency and intervention strategies are therefore aligned with both the relative ranking (Ci values) and each criterion’s contribution to the overall score. For example, failure modes with high FP and ER values require more frequent inspection due to their potential safety and reliability implications, while those with elevated LTR or SPA values necessitate proactive resource planning to minimize downtime and supply delays.

Conversely, lower-ranked alternatives (e.g., D6, D7, D11) demonstrate weaker performance across critical criteria and lower Ci values, thereby justifying reduced inspection intensity and deferred maintenance actions under resource constraints. This ensures that maintenance decisions are not generic but systematically derived from the quantitative evaluation framework. Thus, the recommended actions directly translate model outputs into operational decisions, enabling transparent, data-driven maintenance planning.

A statistical technique, Spearman’s Rank Correlation Coefficient, was applied to measure the degree of correlation among the different MCDM ranking methods and to assess whether the rankings are consistent. The Spearman’s Rank Correlation Coefficients between the different ranking methods are as follows:

• TOPSIS vs. VIKOR: 0.825, indicating a strong positive correlation.

• TOPSIS vs. SAW: 0.958, indicating a very strong positive correlation.

• VIKOR vs. SAW: 0.804, also indicating a strong positive correlation.

The prioritizations assigned by the TOPSIS, VIKOR, and SAW approaches are strongly correlated, particularly TOPSIS and SAW, indicating that the three methods rank the alternatives quite consistently. While some minor variations exist, the main prioritization patterns are consistent across approaches.
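For reference, the coefficient can be computed directly from two rank vectors. The sketch below uses hypothetical rankings, not the study's data, and assumes no tied ranks:

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman's rank correlation for two tie-free rankings:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Two hypothetical methods ranking five alternatives (1 = best)
rho = spearman_rho([1, 2, 3, 4, 5], [2, 1, 3, 5, 4])  # -> 0.8
```

A rho near 1 indicates near-identical rank orders, which is how the very strong TOPSIS–SAW agreement reported above should be read.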

While the highest- and lowest-ranked failure modes remain stable across SAW, TOPSIS, and VIKOR, variations among mid-ranked alternatives are primarily driven by differences in criterion sensitivity and compensation structures. SAW applies full compensatory aggregation, allowing high scores in dominant criteria, such as ER and CPK, to offset weaker performance in other criteria. TOPSIS evaluates relative closeness to ideal and anti-ideal solutions, making mid-ranked alternatives sensitive to small variations in normalized distances. VIKOR emphasizes compromise prioritization based on group utility and regret measures, thereby amplifying differences among criteria with higher entropy weights. Consequently, alternatives with balanced but moderate scores exhibit positional shifts across methods.
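To make the closeness-coefficient mechanism concrete, the following sketch implements the standard TOPSIS steps for benefit criteria on a hypothetical 3×2 decision matrix (the matrix and weights are illustrative, not the study's data):

```python
import math

def topsis_closeness(matrix, weights):
    """Closeness coefficients Ci: vector-normalize each column, apply
    weights, then compare each alternative's distance to the ideal and
    anti-ideal solutions (benefit criteria assumed)."""
    n = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]
    anti = [min(col) for col in zip(*v)]
    ci = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to ideal solution
        d_neg = math.dist(row, anti)    # distance to anti-ideal solution
        ci.append(d_neg / (d_pos + d_neg))
    return ci

# Hypothetical example: three alternatives, two benefit criteria
ci = topsis_closeness([[7, 9], [8, 6], [3, 4]], [0.5, 0.5])
```

Because Ci is a ratio of distances, small shifts in the normalized matrix can reorder alternatives whose distances are nearly equal, which is exactly the mid-rank sensitivity described above.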

The first goal was to assign objective weights to the maintenance criteria (Table 11). The Shannon entropy technique was used for this purpose and identified ER as the most significant criterion, followed by CPK, LTR, MD, SPA, FP, MR, SRL, and NDF. The second goal was to determine the most effective maintenance procedures; it was accomplished using three MCDM techniques: TOPSIS, VIKOR, and SAW.

Table 11. Entropy weights of the criteria

Criterion    Weight    Rank
FP           0.0840    6
NDF          0.0160    9
SRL          0.0389    8
ER           0.2180    1
MD           0.1344    4
SPA          0.1112    5
MR           0.0799    7
CPK          0.1626    2
LTR          0.1551    3

Note: FP = Failure Probability; NDF = Non-Detection of Failure; SRL = Skill Requirements Level; ER = Economic Risk; MD = Maintenance Duration; SPA = Spare Parts Availability; MR= Machine Reliability; CPK= Cost per Kilometer; LTR = Lead Time for Repair.
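The entropy weights in Table 11 follow the standard Shannon procedure: normalize each criterion column, compute its entropy, and weight each criterion by its divergence. A minimal sketch with a hypothetical two-criterion matrix (not the study's data):

```python
import math

def entropy_weights(matrix):
    """Shannon entropy weighting: criteria whose values are more
    dispersed across alternatives carry more information and
    therefore receive larger weights."""
    m, n = len(matrix), len(matrix[0])
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]                      # column proportions
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergences.append(1 - e)                         # d_j = 1 - E_j
    s = sum(divergences)
    return [d / s for d in divergences]

# Hypothetical scores: criterion 2 is far more dispersed than criterion 1,
# so it should receive the larger weight
w = entropy_weights([[7, 1], [5, 9], [6, 5]])
```

This dispersion-driven logic explains why ER, whose scores vary widely across failure modes, dominates the weight vector while near-uniform NDF receives the smallest weight.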

The prioritizations of the alternatives, produced independently by the TOPSIS, VIKOR, and SAW methodologies, are as follows:

• SAW: D2 > D12 > D3 > D4 > D1 > D10 > D8 > D9 > D5 > D6 > D11 > D7

• TOPSIS: D2 > D12 > D1 > D3 > D4 > D10 > D8 > D9 > D5 > D7 > D11 > D6

• VIKOR: D2 > D12 > D3 > D4 > D1 > D5 > D6 > D9 > D11 > D8 > D10 > D7

As shown above, among the twelve failure modes evaluated, the most significant according to the SAW, TOPSIS, and VIKOR prioritizations is Major Crashes (D2), followed by Hub Bearing Wear (D12) and Chain Wear (D3), while Wear in the Crankset Spindle (D7) is the least critical. The prioritization orders were found to be nearly identical when the same weights were applied across all three methods. However, the SAW procedure is much simpler and involves fewer steps than TOPSIS and VIKOR. TOPSIS applies vector normalization, whereas VIKOR and SAW apply linear normalization.
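The simplicity of SAW is visible in code: with linear (max) normalization it reduces to one weighted sum per alternative. A sketch with hypothetical benefit-criteria data (not the study's matrix):

```python
def saw_scores(matrix, weights):
    """Simple Additive Weighting: linear (max) normalization of benefit
    criteria, followed by a weighted sum per alternative."""
    col_max = [max(col) for col in zip(*matrix)]
    return [sum(w * x / cm for w, x, cm in zip(weights, row, col_max))
            for row in matrix]

# Hypothetical example: three alternatives, two benefit criteria
scores = saw_scores([[8, 4], [6, 9], [9, 7]], [0.6, 0.4])
# Indices of alternatives ordered from best to worst score
ranking = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
```

The entire procedure is a single normalization and aggregation pass, with no ideal-point or regret computations, which is why SAW involves fewer steps than the other two methods.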

All three MCDM methods, SAW, TOPSIS, and VIKOR, produced identical prioritization for the highest- and lowest-ranked failure modes; however, some differences appeared among the mid-ranked alternatives. These variations arise from the fundamental methodological differences of each approach: SAW calculates a linear, aggregated weighted score, which can reduce the impact of outlier values; TOPSIS evaluates alternatives based on their closeness to the ideal solution and their distance from the negative ideal, potentially penalizing imbalances across criteria; and VIKOR seeks a compromise solution that balances group utility against the maximum individual regret. Alternatives D4 and D5 showed rank shifts across the methods, highlighting trade-offs between performance, cost, and safety impact. Such insights are especially valuable in real-world settings where maintenance planners may shift priorities toward robustness or efficiency, depending on the operational scenario.
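VIKOR's compromise mechanism can be sketched as follows: S aggregates the weighted group-utility losses, R records the worst single-criterion regret, and Q blends the two (hypothetical benefit-criteria data, compromise parameter v = 0.5; the example assumes the S and R ranges are nonzero):

```python
def vikor_q(matrix, weights, v=0.5):
    """VIKOR compromise index Q from group utility S and individual
    regret R; lower Q is better (benefit criteria assumed)."""
    f_best = [max(col) for col in zip(*matrix)]
    f_worst = [min(col) for col in zip(*matrix)]
    S, R = [], []
    for row in matrix:
        # Weighted normalized distance from the best value, per criterion
        terms = [w * (fb - x) / (fb - fw)
                 for w, x, fb, fw in zip(weights, row, f_best, f_worst)]
        S.append(sum(terms))   # group utility loss
        R.append(max(terms))   # worst individual regret
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    return [v * (s - s_min) / (s_max - s_min)
            + (1 - v) * (r - r_min) / (r_max - r_min)
            for s, r in zip(S, R)]

# Hypothetical example: three alternatives, two benefit criteria
q = vikor_q([[7, 9], [8, 6], [3, 4]], [0.5, 0.5])
```

Because R isolates each alternative's single worst criterion, an alternative with one poor score is penalized more heavily under VIKOR than under the fully compensatory SAW, which is one source of the mid-rank shifts noted above.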

The high prioritization of failure modes such as frame cracking and bearing wear can be attributed to repeated cyclic loading, stress concentration at joint interfaces, and exposure to variable operating conditions. These components are subjected to dynamic forces during high-speed cycling, which increases fatigue susceptibility. Similarly, drivetrain-related failures are influenced by frictional losses, lubrication conditions, and alignment issues, underscoring the importance of periodic inspection and preventive maintenance.

In practical terms, the MCDM framework lends itself to integration within contemporary Computerized Maintenance Management Systems or as a supplementary module to IoT-enabled diagnostic tools. When coupled with real-time sensor data, such as vibration, temperature, or usage cycles, the model enables dynamic updates to failure probabilities and maintenance schedules. This data-driven approach enables more efficient resource allocation by minimizing unplanned downtimes and eliminating unnecessary inspections. Furthermore, the framework can be embedded as a decision-making engine within expert systems or AI-driven maintenance platforms, thereby helping technicians prioritize and sequence tasks.

These findings directly support condition-based maintenance planning by identifying components that require higher inspection frequency and preventive replacement strategies.

6. Conclusions

This study developed an integrated FMEA–MCDM framework using Shannon entropy weighting combined with SAW, TOPSIS, and VIKOR to prioritize failure modes in a high-performance mechanical system. The results consistently identified D2, D12, and D3 as the most critical failure modes, demonstrating the robustness of the multi-method approach and the influence of high-weight criteria such as ER, CPK, and MD on prioritization outcomes.

From a methodological perspective, combining objective weighting with multiple decision-making techniques reduces subjectivity and improves ranking stability. The consistency observed across methods confirms the reliability of the proposed framework for complex failure prioritization problems.

From a practical perspective, the framework offers a structured foundation for maintenance decision-making. It supports risk-based inspection planning, targeted resource allocation, and prioritization of critical components, thereby enhancing system reliability and operational efficiency. The findings also provide practical implications for engineering management, particularly in maintenance decision-making and resource allocation. The approach is adaptable to other engineering systems where multiple criteria affect failure severity and maintenance strategies.

The study is limited to a specific case context and predefined evaluation criteria. Future work may expand the framework by adding dynamic operational data, real-time monitoring, and hybrid weighting strategies to further enhance decision accuracy and applicability.

Author Contributions

Conceptualization, D.D. and R.K.; methodology, D.D. and S.D.; software, M.C.; validation, D.D., R.K. and Ž.S.; formal analysis, S.D. and R.K.; investigation, H.S.F. and R.K.; resources, R.K.; data curation, R.K.; writing—original draft preparation, D.D. and S.D.; writing—review and editing, R.K. and Ž.S.; visualization, M.C. and A.P.A.; supervision, R.K. and Ž.S.; project administration, R.K.; funding acquisition, Ž.S. All authors have read and agreed to the published version of the manuscript.

Data Availability

The data used to support the research findings are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.


Cite this:
Deepak, D., Dwivedi, S., Farwaha, H. S., Kumar, R., Stević, Ž., Chandra, M., Kumar, R., Agrawal, A. P., & John, V. (2026). A Decision-Oriented Framework for Risk-Based Maintenance Planning in High-Performance Mechanical Systems Using Entropy-Integrated FMEA–MCDM Approaches. J. Eng. Manag. Syst. Eng., 5(2), 137-155. https://doi.org/10.56578/jemse050202
©2026 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.