A Decision-Oriented Framework for Risk-Based Maintenance Planning in High-Performance Mechanical Systems Using Entropy-Integrated FMEA–MCDM Approaches
Abstract:
Effective maintenance planning in high-performance mechanical systems requires a structured approach to identifying and prioritizing potential failure modes under multiple, often conflicting criteria. Conventional Failure Mode and Effects Analysis (FMEA) relies heavily on subjective judgment, which can limit consistency and transparency in decision-making. To address this limitation, this study develops a decision-oriented framework that integrates Shannon entropy-based weighting with three Multi-Criteria Decision-Making (MCDM) methods, namely SAW, TOPSIS, and VIKOR. The framework is applied to a representative high-performance mechanical system, in which maintenance-related factors, including failure probability, detection capability, economic impact, repair time, and resource availability, are evaluated in a unified structure. Entropy weighting is employed to derive criterion importance directly from data, reducing reliance on expert bias. The combined use of multiple MCDM techniques enables cross-validation of ranking outcomes and improves the robustness of the prioritization process. The results show a high degree of consistency among the three methods (Spearman’s $\rho>0.80$), indicating stable identification of critical failure modes. The proposed framework provides a transparent basis for risk-informed maintenance planning and supports more effective allocation of inspection and repair resources. From an engineering management perspective, the approach facilitates the transition from experience-driven decisions to data-supported strategies, contributing to improved system reliability and operational efficiency. Although demonstrated in a specific application context, the framework can be extended to other engineering systems where structured failure prioritization is required.
1. Introduction
Modern high-performance mechanical systems operate under demanding conditions that expose components to continuous wear, fatigue, and potential failure. In competitive racing bicycles, critical subsystems such as drivetrain components, braking mechanisms, and transmission systems are subjected to dynamic loads, environmental variability, and high operational stress. These conditions increase the likelihood of component degradation and unexpected failures, which can significantly compromise safety, performance, and maintenance efficiency. Therefore, systematic approaches to identifying and prioritizing failure modes are essential for effective maintenance planning and reliable system operation. Well-organized maintenance and resource management are required to advance the performance and durability of bicycles and maintain rider safety while reducing environmental impact [1].
Failure Mode and Effects Analysis (FMEA) is a widely adopted technique for identifying, evaluating, and prioritizing potential failures in engineering systems [2]. Despite its extensive use, traditional FMEA relies heavily on expert judgment and the Risk Priority Number (RPN), introducing subjectivity and limiting its ability to handle multiple conflicting criteria [3]. In complex systems where failure consequences depend on factors such as cost, safety, performance, and maintenance time, conventional FMEA lacks robustness and consistency in decision-making. To overcome these limitations, the integration of Multi-Criteria Decision-Making (MCDM) techniques has been increasingly explored, enabling the systematic evaluation of alternatives across multiple criteria and improving prioritization accuracy [4]. Advanced aggregation-based MCDM approaches further enhance decision consistency by improving the handling of multiple conflicting criteria [5].
Recent advancements in the literature demonstrate the effectiveness of hybrid FMEA–MCDM frameworks for enhancing failure analysis in engineering applications. Various studies have integrated FMEA with advanced decision-making methods such as the AHP [3], VIKOR [3], [6], TOPSIS [6], [7], ELECTRE [2], [8], and fuzzy-based approaches to address uncertainty and improve prioritization consistency [3], [7]. Additionally, objective weighting techniques, particularly Shannon entropy [2], [9], have gained prominence due to their ability to determine criterion importance directly from data, thereby reducing dependence on subjective expert evaluations [7], [10]. Several recent studies have further emphasized the importance of combining subjective and objective weighting schemes to enhance reliability assessment and decision robustness in complex engineering systems [3], [8], [11]. Hybrid FMEA–MCDM frameworks improve the robustness and reliability of failure prioritization in complex engineering systems. Fuzzy and entropy-based Failure Mode Effects and Criticality Analysis (FMECA) models have been proposed to address uncertainty and improve ranking stability in gas turbines and safety-critical mechanical systems [12], [13]. Similarly, entropy-driven and expert-integrated FMEA approaches have shown improved transparency and discrimination capability in maritime and industrial risk assessment applications [8], [14]. Optimization-driven decision frameworks have also been applied in engineering system validation and performance evaluation, demonstrating their effectiveness in improving decision reliability and operational efficiency [15].
In the context of maintenance decision-making, integrated FMEA–MCDM models have been successfully applied to Heating, Ventilation, and Air Conditioning (HVAC) systems and thermal power units to optimize maintenance planning and resource allocation [16], [17]. Furthermore, advanced FMEA extensions incorporating dynamic weighting and predictive maintenance strategies have been developed for Computer Numerical Control (CNC) machines and boiler systems, improving failure detection and prioritization accuracy [18], [19]. Recent reliability-driven studies also highlight the role of FMECA in enhancing system performance and reducing downtime in manufacturing environments [20], while hybrid entropy–MCDM approaches have demonstrated effectiveness in reliability model selection and engineering risk assessment [21]. Advanced expert-based decision systems have also been employed for structured risk assessment, highlighting the role of multi-criteria reasoning in evaluating complex system uncertainties [22]. Despite these advancements, challenges remain in achieving consistent ranking outcomes across multiple decision-making methods and reducing subjectivity in failure prioritization, particularly in high-performance mechanical systems.
Despite these developments, research has been limited in applying integrated FMEA–MCDM frameworks to high-performance bicycle systems, particularly in competitive racing applications where maintenance decisions directly influence operational performance. Existing studies largely focus on general decision-making applications rather than component-level failure prioritization in mechanical systems. Furthermore, the combined use of objective entropy-based weighting with multiple MCDM techniques, such as SAW, TOPSIS, and VIKOR, for comparative and robust failure prioritization remains underexplored in this context. This gap highlights the need for a structured, data-driven framework tailored to failure prioritization in performance-critical mechanical systems.
To address this gap, the current study introduces an integrated FMEA–MCDM framework that combines Shannon entropy-based objective weighting with three popular decision-making methods: SAW, TOPSIS, and VIKOR to prioritize failure modes in racing bicycle systems. The framework is positioned as a decision-support approach for maintenance planning in engineering systems, providing a structured basis for resource allocation and risk-informed decision-making. It systematically ranks failure modes against multiple operational criteria, helping maintenance planners allocate resources effectively, schedule inspections efficiently, and focus on high-risk components that have a direct impact on system reliability and performance. By merging objective weighting with diverse decision-making approaches, the framework supports consistent and data-informed maintenance planning. Although demonstrated on racing bicycle systems, the method can be extended to a wider range of engineering applications, including automotive, manufacturing, and industrial equipment maintenance, where structured failure prioritization is essential for operational efficiency.
The main contributions of this study are threefold. First, it introduces an entropy-driven multi-method decision framework that improves robustness and reduces subjectivity in failure prioritization. Second, it demonstrates the effectiveness of combining SAW, TOPSIS, and VIKOR to achieve consistent and reliable ranking outcomes. Third, it offers practical insights for maintenance planning by supporting more efficient resource allocation and the prioritization of critical components in high-performance mechanical systems.
2. Methodology
The work aims to identify the most critical failure cause to support efficient maintenance decision-making for road-racing bicycles and to improve maintenance methods using MCDM techniques (SAW, TOPSIS, and VIKOR). First, the criterion weights are calculated using the Shannon entropy method; then, the three MCDM techniques are applied with these weights, and the resulting rankings are validated statistically using Spearman’s rank correlation coefficient. The proposed methodology is shown in Figure 1. The rationale for selecting these three MCDM methods lies in their complementary characteristics: VIKOR balances group utility and individual regret, TOPSIS focuses on geometric closeness to an ideal, and SAW uses straightforward additive logic. Employing all three helps triangulate decision outcomes, improving confidence in the final prioritization. By combining them with Shannon entropy to compute objective weights, the study produces a robust multi-angle decision-support model.
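The rank-agreement check described above can be sketched in a few lines of Python. The two rankings below are hypothetical placeholders, not the study's results; the formula assumes rankings without ties.

```python
def spearman_rho(rank_a, rank_b):
    """Spearman's rank correlation for two rankings without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical ranks of five failure modes under two MCDM methods
topsis_ranks = [1, 2, 3, 4, 5]
vikor_ranks = [2, 1, 3, 4, 5]
rho = spearman_rho(topsis_ranks, vikor_ranks)  # d^2 = 1 + 1 = 2 -> rho = 0.9
```

Values above roughly 0.8, as reported in this study, indicate strong agreement between method rankings.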

Three MCDM methods, TOPSIS [23], [24], [25], [26], VIKOR [27], and SAW [28], are applied in conjunction with the Shannon Entropy method [29], [30], [31] to determine the weight of the criteria for ordering multiple alternatives.
Let the decision matrix be defined as:
$$\mathrm{X}=\left[x_{i j}\right]_{m \times n}$$
where, $x_{i j}$ denotes the performance score of alternative $i$ under criterion $j, i=1,2, \ldots, m$, and $j=1,2, \ldots, n$. Here, $m$ is the number of alternatives (failure modes) and $n$ is the number of evaluation criteria. In this study, the alternatives correspond to the identified bicycle failure modes, while the criteria correspond to the nine maintenance-related evaluation factors used in the FMEA-based assessment.
For clarity, the symbols used throughout are defined as follows:
• $x_{i j}$: original score of alternative $i$ under criterion $j$
• $r_{i j}$: normalized value of alternative $i$ under criterion $j$
• $p_{i j}$: proportion of alternative $i$ under criterion $j$ used in entropy weighting
• $w_j$: weight of criterion $j$
• $v_{i j}$: weighted normalized value of alternative $i$ under criterion $j$
• $A_i$: overall SAW score of alternative $i$
• $A^{+}$: positive ideal solution in TOPSIS
• $A^{-}$: negative ideal solution in TOPSIS
• $S_i^{+}$: separation of alternative $i$ from the positive ideal solution
• $S_i^{-}$: separation of alternative $i$ from the negative ideal solution
• $C_i$: TOPSIS closeness coefficient of alternative $i$
• $Q_i$: VIKOR compromise index of alternative $i$
• $S_i$: VIKOR group utility measure
• $R_i$: VIKOR individual regret measure
The SAW method ranks alternatives by computing a weighted sum of normalized performance scores across criteria [32].
Step 1: Normalize the decision matrix
For benefit criteria, where a larger value is preferred, normalization is performed as:

$$r_{i j}=\frac{x_{i j}}{\max _i x_{i j}}$$

For cost criteria, where a smaller value is preferred, normalization is performed as:

$$r_{i j}=\frac{\min _i x_{i j}}{x_{i j}}$$
where, $r_{i j}$ is the normalized score of alternative $i$ under criterion $j$.
Step 2: Compute the weighted score
The overall SAW score is computed using Eq. (3):

$$A_i=\sum_{j=1}^n w_j r_{i j}$$
where, $A_i$ is the aggregated performance score.
Step 3: Ranking
Alternatives are ranked in descending order of $A_i$. A larger value of $A_i$ indicates a higher priority level in the context of failure mode assessment.
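The three SAW steps can be sketched as follows; the decision matrix, weights, and benefit/cost flags below are illustrative assumptions, not the study's data.

```python
def saw_scores(X, weights, is_benefit):
    """SAW: normalize each column (x/max for benefit, min/x for cost),
    then compute the weighted sum A_i for each alternative."""
    m, n = len(X), len(X[0])
    scores = []
    for i in range(m):
        total = 0.0
        for j in range(n):
            col = [X[r][j] for r in range(m)]
            r_ij = X[i][j] / max(col) if is_benefit[j] else min(col) / X[i][j]
            total += weights[j] * r_ij
        scores.append(total)
    return scores

# Illustrative 3-alternative, 3-criterion example with assumed weights
X = [[9, 2, 3], [5, 5, 2], [2, 8, 5]]
w = [0.5, 0.3, 0.2]
A = saw_scores(X, w, is_benefit=[True, True, True])
ranking = sorted(range(len(A)), key=lambda i: -A[i])  # descending A_i
```

Alternatives earlier in `ranking` receive higher priority, mirroring Step 3 above.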
TOPSIS ranks alternatives by their geometric distance to an ideal-best solution ($A^{+}$) and ideal-worst solution ($A^{-}$); closer to $A^{+}$ (farther from $A^{-}$) is better [33], [34].
Step 1: Vector normalization
The decision matrix is normalized as:

$$r_{i j}=\frac{x_{i j}}{\sqrt{\sum_{i=1}^m x_{i j}^2}}$$
where, $r_{i j}$ is the normalized value of alternative $i$ under criterion $j$.
Step 2: Weighted normalized matrix
The weighted normalized values are obtained as:

$$v_{i j}=w_j r_{i j}$$
where, $v_{i j}$ is the weighted normalized value of alternative $i$ under criterion $j$.
Step 3: Determine the ideal solutions
The positive ideal solution $A^{+}$ and the negative ideal solution $A^{-}$ are defined as:

$$A^{+}=\left\{v_1^{+}, v_2^{+}, \ldots, v_n^{+}\right\}, \quad A^{-}=\left\{v_1^{-}, v_2^{-}, \ldots, v_n^{-}\right\}$$
where, for a benefit criterion:
$$ v_j^{+}=\max _i v_{i j}, v_j^{-}=\min _i v_{i j} $$
and for a cost criterion:
$$ v_j^{+}=\min _i v_{i j}, v_j^{-}=\max _i v_{i j} $$
Thus, $A^{+}$ represents the best attainable criterion values and $A^{-}$ represents the worst attainable criterion values.
Step 4: Separation measures
The Euclidean distance of each alternative from the positive and negative ideal solutions is computed as:

$$S_i^{+}=\sqrt{\sum_{j=1}^n\left(v_{i j}-v_j^{+}\right)^2}, \quad S_i^{-}=\sqrt{\sum_{j=1}^n\left(v_{i j}-v_j^{-}\right)^2}$$
where, $S_i^{+}$ is the separation from the positive ideal solution and $S_i^{-}$ is the separation from the negative ideal solution.
Step 5: Closeness coefficient
The relative closeness coefficient of alternative $i$ is calculated as:

$$C_i=\frac{S_i^{-}}{S_i^{+}+S_i^{-}}$$
where, $0 \leq C_i \leq 1$. A larger $C_i$ indicates that the alternative is closer to the positive ideal solution and farther from the negative ideal solution; therefore, higher $C_i$ values are preferred.
Step 6: Ranking
Alternatives are ranked in descending order of $C_i$. The alternative with the highest $C_i$ is considered the most critical or most preferred according to the TOPSIS framework adopted in this study.
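Steps 1–6 of the TOPSIS procedure can be sketched as follows; the example matrix and weights are illustrative assumptions, chosen so that one alternative dominates and should attain $C_i = 1$.

```python
from math import sqrt

def topsis_closeness(X, weights, is_benefit):
    """TOPSIS: vector-normalize, apply weights, find the positive/negative
    ideal solutions, measure Euclidean separations, return closeness C_i."""
    m, n = len(X), len(X[0])
    norms = [sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[weights[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]
    v_plus = [max(V[i][j] for i in range(m)) if is_benefit[j]
              else min(V[i][j] for i in range(m)) for j in range(n)]
    v_minus = [min(V[i][j] for i in range(m)) if is_benefit[j]
               else max(V[i][j] for i in range(m)) for j in range(n)]
    C = []
    for i in range(m):
        s_plus = sqrt(sum((V[i][j] - v_plus[j]) ** 2 for j in range(n)))
        s_minus = sqrt(sum((V[i][j] - v_minus[j]) ** 2 for j in range(n)))
        C.append(s_minus / (s_plus + s_minus))
    return C

# Illustrative data: alternative 0 is best on both (benefit) criteria
C = topsis_closeness([[9, 9], [1, 1], [5, 5]], [0.6, 0.4], [True, True])
```

Ranking by descending `C` reproduces Step 6: the dominating alternative comes first.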
VIKOR ranks alternatives by balancing group utility ($S_i$) and individual regret ($R_i$) to find a compromise solution closest to the ideal [35], [36].
Step 1: Determine the best and worst values
For each criterion $j$, the best value $f_j^*$ and the worst value $f_j^{-}$ are identified as:
for a benefit criterion:
$$ f_j^*=\max _i x_{i j}, f_j^{-}=\min _i x_{i j} $$
and for a cost criterion:
$$ f_j^*=\min _i x_{i j}, f_j^{-}=\max _i x_{i j} $$
These values serve as reference points for compromise evaluation.
Step 2: Compute the group utility and individual regret
The group utility measure $S_i$ and the individual regret measure $R_i$ are calculated as:

$$S_i=\sum_{j=1}^n w_j \frac{f_j^*-x_{i j}}{f_j^*-f_j^{-}}, \quad R_i=\max _j\left[w_j \frac{f_j^*-x_{i j}}{f_j^*-f_j^{-}}\right]$$
where, lower values of $S_i$ and $R_i$ are preferred. Here, $S_i$ reflects the overall distance from the ideal, while $R_i$ captures the worst-case criterion-specific shortfall.
Step 3: Compute the VIKOR index
The compromise index $Q_i$ is given by:

$$Q_i=v \frac{S_i-S^*}{S^{-}-S^*}+(1-v) \frac{R_i-R^*}{R^{-}-R^*}$$
where,
$$ S^*=\min _i S_i, S^{-}=\max _i S_i, R^*=\min _i R_i, R^{-}=\max _i R_i $$
and $v$ is the weight of the strategy of majority utility, typically taken as $v=0.5$. A smaller $Q_i$ indicates a better compromise alternative.
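The VIKOR steps can be sketched as follows; the data are illustrative, not the study's. Note that the single distance formula $(f_j^*-x_{ij})/(f_j^*-f_j^-)$ covers both benefit and cost criteria once $f_j^*$ and $f_j^-$ are set per criterion direction, since numerator and denominator flip sign together.

```python
def vikor_index(X, weights, is_benefit, v=0.5):
    """VIKOR: compute group utility S_i, individual regret R_i, and the
    compromise index Q_i (smaller Q_i = better compromise)."""
    m, n = len(X), len(X[0])
    f_star = [max(X[i][j] for i in range(m)) if is_benefit[j]
              else min(X[i][j] for i in range(m)) for j in range(n)]
    f_minus = [min(X[i][j] for i in range(m)) if is_benefit[j]
               else max(X[i][j] for i in range(m)) for j in range(n)]
    S, R = [], []
    for i in range(m):
        terms = [weights[j] * (f_star[j] - X[i][j]) / (f_star[j] - f_minus[j])
                 for j in range(n)]
        S.append(sum(terms))
        R.append(max(terms))
    s_star, s_worst = min(S), max(S)
    r_star, r_worst = min(R), max(R)
    Q = [v * (S[i] - s_star) / (s_worst - s_star)
         + (1 - v) * (R[i] - r_star) / (r_worst - r_star) for i in range(m)]
    return S, R, Q

# Illustrative data: alternative 0 matches the ideal, so Q_0 = 0
S, R, Q = vikor_index([[9, 9], [1, 1], [5, 5]], [0.6, 0.4], [True, True])
```

Alternatives are then ranked in ascending order of `Q`, as described above.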
In MCDM, assigning accurate weights to evaluation criteria is crucial, as it shapes the final prioritizations. While methods like AHP and BWM rely on expert judgments via pairwise comparisons, they introduce subjectivity and potential bias. In contrast, Shannon Entropy offers an objective, data-driven weighting approach by measuring the diversity of information across alternatives. Criteria with greater variability receive higher weights, enhancing transparency and consistency. This study adopts entropy-based weighting to prioritize failure modes in racing bicycles, emphasizing safety, sustainability, and unbiased decision-making. When integrated with VIKOR, TOPSIS, and SAW, Shannon Entropy enhances methodological robustness by providing normalized, stable weights, which are critical for reliable, repeatable maintenance prioritization in complex systems such as competitive road cycling [10], [37].
The Shannon Entropy method objectively determines criterion weights by measuring information variability (entropy) across alternatives. Criteria with higher variability (lower entropy) provide more discriminative information and receive higher weights ($w_j$).
Step 1: Proportion normalization
The original decision matrix is transformed into a proportional matrix as:

$$p_{i j}=\frac{x_{i j}}{\sum_{i=1}^m x_{i j}}$$
where, $p_{i j}$ is the normalized proportion of alternative $i$ under criterion $j$, and $\sum_{i=1}^m p_{i j}=1$.
This formulation requires $x_{i j}>0$.
Step 2: Entropy of each criterion
The entropy value of criterion $j$ is computed as:

$$e_j=-k \sum_{i=1}^m p_{i j} \ln \left(p_{i j}\right)$$
where, $k=\frac{1}{\ln (m)}$ is a constant used to normalize the entropy value into the interval $0 \leq e_j \leq 1$. Lower entropy indicates greater contrast among alternatives and hence stronger discriminating power of the criterion. When $p_{i j}=0$, the term $p_{i j} \ln \left(p_{i j}\right)$ is taken as zero by continuity.
Step 3: Degree of diversification
The degree of diversification, also called divergence, is obtained as:

$$d_j=1-e_j$$
where, $d_j$ expresses the useful information carried by criterion $j$. A larger $d_j$ indicates that the criterion contributes more strongly to discrimination among alternatives.
Step 4: Criterion weights
The normalized entropy weight of each criterion is then calculated by:

$$w_j=\frac{d_j}{\sum_{j=1}^n d_j}$$
subject to:
$$ w_j \geq 0, \sum_{j=1}^n w_j=1 $$
These weights are subsequently used in SAW, TOPSIS, and VIKOR.
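Steps 1–4 of the entropy weighting procedure can be sketched as follows; the tiny matrix is an illustrative assumption chosen so that a constant column, which carries no discriminating information, receives a weight of (numerically) zero.

```python
from math import log

def entropy_weights(X):
    """Shannon entropy weighting: criteria with more contrast among the
    alternatives (lower entropy e_j) receive larger weights w_j.
    Requires x_ij > 0; p ln p is taken as 0 by continuity when p = 0."""
    m, n = len(X), len(X[0])
    k = 1.0 / log(m)
    d = []
    for j in range(n):
        col = [X[i][j] for i in range(m)]
        total = sum(col)
        p = [x / total for x in col]
        e_j = -k * sum(p_ij * log(p_ij) for p_ij in p if p_ij > 0)
        d.append(1.0 - e_j)  # degree of diversification d_j = 1 - e_j
    sum_d = sum(d)
    return [d_j / sum_d for d_j in d]

# Illustrative: the first column is constant, so nearly all weight
# goes to the second, more discriminating column
w = entropy_weights([[1, 9], [1, 1], [1, 5]])
```

The resulting weight vector sums to one and can be passed directly to SAW, TOPSIS, or VIKOR.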
3. Case Study
The traditional FMECA method is a proactive, structured approach carried out methodically to anticipate or identify failures and their modes in a system or process. Figure 2 illustrates the structured FMEA workflow applied in this study. Its main objectives are to use the data to establish scores for the three main components:
• The severity of the risk in a system
• The chance that a failure will occur
• The likelihood that a failure goes undetected within the system

The results are then used to determine the criticality of system maintenance. FMECA is primarily applied to macro-level systems. Nonetheless, this case study covers nine criteria to assess the maintenance impact of a failure mode in a system or process, as shown in Figure 3. They are Failure Probability (FP), Non-Detection of Failure (NDF), Skill Requirements Level (SRL), Economic Risk (ER), Maintenance Duration (MD), Spare Parts Availability (SPA), Machine Reliability (MR), Cost per Kilometer (CPK), and Lead Time for Repair (LTR).

FP represents the frequency of failure occurrence and the extent of associated operational inconvenience. Mean Distance Between Failures (MDBF) is a historical metric derived from maintenance records, empirical data, and expert assessments. A higher failure frequency corresponds to greater criticality. Table 1 presents MDBF values alongside criticality ratings to inform maintenance prioritization. Focusing on high-criticality failure modes enables effective resource allocation, thereby supporting operational reliability and safety.
| Occurrence (FP) | MDBF (km) | Score | Non-Detection Probability (NDF) | Non-Detection Severity | Score | Required Skill Level (SRL) | Score |
|---|---|---|---|---|---|---|---|
| Almost never | 18000 | 1 | $\leq$10% | Intensively less | 1 | Rider | 1 |
| Rare | 16000 | 2 | 11%–20% | Very less | 2 | Helper in the workshop | 2 |
| Very less | 14000 | 3 | 21%–30% | Less | 3 | Technician | 3 |
| Less | 12000 | 4 | 31%–40% | Fair | 4 | Exceptionally talented technician | 4 |
| Medium | 10000 | 5 | 41%–50% | Medium | 5 | Foreman | 5 |
| Slightly huge | 8000 | 6 | 51%–60% | Slightly huge | 6 | Highly competent foreman | 6 |
| Huge | 6000 | 7 | 61%–70% | Huge | 7 | Engineer | 7 |
| Very huge | 4000 | 8 | 71%–80% | Very huge | 8 | Assistant manager | 8 |
| Intensely huge | 2000 | 9 | – | – | – | Manager | 9 |
Successfully identifying the cause or mechanism of a failure depends on many variables, among them the ability and knowledge of operators and maintenance staff. These personnel are essential in identifying flaws through visual inspection or planned, routine evaluations. Furthermore, incorporating technological tools such as automated controls, alarms, and sensors dramatically improves the diagnostic process by providing timely alerts and data insights. As indicated in Table 1, these components provide a comprehensive approach to fault detection, enabling organizations to quickly identify and resolve potential issues before they escalate into severe disruptions or failures.
A higher score (ranging from level 1, rider, to level 9, manager) reflects greater complexity in the required skills and experience. Lower scores correspond to basic maintenance tasks, whereas higher scores denote roles that demand advanced technical or leadership capability. This structured scoring system, as presented in Table 1, facilitates matching personnel to maintenance tasks. Accurately identifying the required skill level is essential for efficiently allocating resources in maintenance operations, particularly given that higher skill levels are generally associated with higher labor costs.
ER represents the estimated financial and operational impact associated with a failure event. The scoring reflects the potential economic consequences arising from component damage severity, repair or replacement cost, downtime implications, and secondary system disruption. While mechanical complexity (e.g., the number of interacting parts) contributes to impact assessment, the evaluation primarily focuses on the magnitude of repair costs and the risk of operational interruption. Higher scores indicate failures with greater economic and service continuity consequences. A standardized scoring approach, as outlined in Table 2, ensures consistency in evaluation, highlights high-risk areas, and facilitates informed decision-making. This approach supports prioritizing safety interventions and allocating resources optimally, enabling early hazard mitigation and minimizing operational disruptions.
| Count of Movable Parts (ER) | Score | Downtime in Days (MD) | Score |
|---|---|---|---|
| $\leq$100 | 1 | $\leq$1 | 1 |
| $\leq$200 | 2 | $\leq$2 | 2 |
| $\leq$300 | 3 | $\leq$3 | 3 |
| $\leq$400 | 4 | $\leq$4 | 4 |
| $\leq$500 | 5 | $\leq$5 | 5 |
| $\leq$600 | 6 | $\leq$6 | 6 |
| $\leq$700 | 7 | $\leq$7 | 7 |
| $\leq$800 | 8 | $\leq$8 | 8 |
| $\leq$900 | 9 | $>$8 | 9 |
MD refers to the overall ease with which equipment can be restored following failure or scheduled maintenance. This metric encompasses not only the duration of the recovery process but also factors such as the availability of replacement parts, technician readiness, equipment complexity, and the efficiency of maintenance procedures. High MD scores indicate prolonged recovery scenarios, often attributed to insufficient infrastructure, outdated technology, or inadequate support systems, whereas low MD scores correspond to quicker restoration times and enhanced operational continuity. As shown in Table 2, MD-based analysis helps identify areas where reliability is degrading, thereby supporting the prioritization of reliability enhancement efforts and the formulation of optimal maintenance strategies to minimize downtime while maintaining acceptable overall performance.
Access to spare parts is a critical determinant of maintenance effectiveness and plays a significant role in extending equipment lifespan. In this framework, components are prioritized by weighing their functional importance against ease of acquisition, using a classification matrix with three criticality categories on the Y-axis (suitable, important, and vital) and three availability levels on the X-axis (easy, difficult, and scarce). Table 3 supports maintenance prioritization based on urgency and facilitates informed resource planning. By aligning part criticality with availability, this approach enables optimal task scheduling and ensures the continuity of maintenance operations.
| Criticality / Accessibility | Easy | Difficult | Scarce |
|---|---|---|---|
| Suitable | 1 | 4 | 7 |
| Important | 2 | 5 | 8 |
| Vital | 3 | 6 | 9 |
MR represents the likelihood that a component performs its intended function without failure over a defined usage interval, approximated using expert judgment and historical MTBF. Higher scores indicate greater unreliability. The longevity of MR is influenced by several factors, including the quality of the machine’s design, the consistency and adequacy of maintenance practices, prevailing operating conditions, the availability of spare parts, and the presence of skilled technical personnel. Adverse conditions and maintenance neglect diminish MR, whereas real-time monitoring and proactive maintenance strategies enhance it. Table 4 presents the prioritization of these contributing factors, offering guidance for improving system uptime, mitigating failure-related risks, and promoting cost-effective, reliable operation of industrial and mechanical systems.
Although FP and MR are related to reliability, they capture different dimensions of system performance. FP reflects the short-term probability of a failure event occurring under current operating conditions, primarily based on observed frequency data. In contrast, MR represents the inherent long-term structural robustness of the component, reflecting design strength and material durability. Therefore, FP measures the likelihood of operational occurrence, whereas MR reflects intrinsic reliability characteristics. Including both allows the model to distinguish between frequently occurring faults and structurally critical weaknesses.
| Scores | MR | CPK (INR/km) | LTR (hr) |
|---|---|---|---|
| 1 | Intensively less | 15 | 0.5 |
| 2 | Very less | 20 | 1 |
| 3 | Less | 25 | 2 |
| 4 | Fair | 30 | 3 |
| 5 | Medium | 35 | 4 |
| 6 | Slightly huge | 40 | 5 |
| 7 | Huge | 45 | 6 |
| 8 | Very huge | 50 | 7 |
| 9 | Intensively huge | 55 | 8 |
Developing a CPK scorecard for road bike racing includes expenses for acquisition, maintenance, component upgrades, and participation in competitive events. Key considerations include the bicycle’s quality, the performance of its components, and ancillary costs associated with racing, such as travel and event fees. Systematically recording both mileage and financial investments not only provides insight into short-term value but also facilitates evaluation of long-term cost efficiency. The results are presented in Table 4. A customized CPK framework enables riders to strategically plan expenditures in alignment with performance objectives, thereby optimizing budget utilization and enhancing racing outcomes in accordance with individual competitive goals.
LTR refers to the estimated time required (in hours) to fully repair or replace a component after failure, based on historical maintenance records. It is scored using predefined intervals (Table 4), where higher scores indicate longer repair times. Reducing LTR in road bike racing can begin during the planning phase by selecting reliable components and implementing effective maintenance practices. Key strategies include: (1) identifying common points of failure; (2) utilizing high-quality, durable components; (3) ensuring a well-equipped repair kit is available on race days; and (4) scheduling regular preventive maintenance. Additionally, immediate access to skilled technicians who can promptly diagnose and address mechanical issues is essential. The outcomes, as presented in Table 4, include reduced downtime, improved performance, and increased riding hours, all of which are critical for competitive cyclists aiming to achieve consistency and reliability during events.
The scoring process involved three domain experts (certified bicycle mechanics) with experience in competitive bicycle maintenance and mechanical diagnostics. Each expert independently evaluated the failure modes using the predefined 1–9 ordinal scale for all criteria. After the independent scoring phase, a structured discussion session was held to reconcile significant deviations (differences $\geq$ 2 points). Final scores were then determined by consensus averaging once agreement was reached.
To enhance consistency, experts were provided with operational definitions for each criterion and reference examples from maintenance records. No pairwise comparison methods were used, as entropy weighting was applied to minimize subjectivity in assessing the importance of criteria. The finalized score matrix represents the agreed expert consensus supported by documented maintenance evidence.
Within the proposed framework, sustainability is operationalized primarily through resource-efficiency-oriented criteria such as ER, CPK, and LTR. ER reflects the economic impact of failure events, CPK captures lifecycle operating cost implications, and LTR relates to downtime duration and associated resource consumption. While the model does not explicitly incorporate environmental emissions metrics, improved failure prioritization indirectly contributes to sustainability by minimizing material waste, extending component lifespans, and optimizing maintenance resource allocation. Therefore, sustainability benefits are treated as consequential outcomes of enhanced reliability management rather than as standalone environmental indicators.
4. Rating Failure Modes and Assigning Scores
This study presents a case study of a racing road bicycle system. There are many types of racing bicycles, including MTB, track, BMX, and cyclo-cross; however, the present research focuses on road bikes, the discipline with the largest population of competitive riders. Maintenance of the road racing bicycle is one of the primary and most important factors contributing to the success of the current study. Road bikes are bicycles designed primarily for racing on paved roads. Root cause analysis identifies the possible failure modes of the parts, their causes, and their impact on the rider’s performance. According to the scoring system outlined in the preceding section, Table 5 presents numerical scores for the identified failure causes.
Data for the case study were sourced from maintenance records, technical manuals, engineering specifications, and expert evaluations by professional bicycle mechanics. Fifteen high-performance road bicycles were assessed using a structured consensus approach, supported by historical logs, to rate criteria such as failure probability, skill requirements, and part availability. While subjective insights were used where necessary (e.g., skill level and availability), objective data, such as MTBF and service times, were prioritized. Structured scoring and entropy-based weighting ensured methodological transparency and reduced bias. However, given the study’s specialized scope, its findings may not be broadly generalizable without domain-specific adaptation. The scores presented in Table 5 were derived from structured expert judgment informed by real-world maintenance observations, MTBF records, and engineering documentation. The evaluation team consisted of three domain experts (certified bicycle mechanics) and two experienced cycling coaches. A consensus approach was used to assign scores based on the nine defined criteria, with an emphasis on repeatability and the representativeness of typical racing scenarios.
The 1–9 ordinal scale was adopted to ensure consistency with established FMEA and MCDM applications, where nine-point scales provide adequate discrimination without introducing unnecessary granularity. The threshold values for each criterion were defined using historical maintenance records, repair duration ranges, and documented operational risk levels to preserve practical relevance. The scale boundaries were fixed prior to expert evaluation to avoid post hoc adjustments and reduce scoring bias.
Main Parts | Major Mode of Failure | Major Effect of Failure | Causes of Failure | Alternatives | FP | NDF | SRL | ER | MD | SPA | MR | CPK | LTR
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Frameset | Total bicycle failure | Injury to rider | Shocks from the terrain | D1 | 2 | 9 | 7 | 6 | 1 | 3 | 8 | 1 | 3
 | | | Major crash | D2 | 7 | 9 | 9 | 8 | 7 | 8 | 1 | 9 | 9
Drive train unit | Chain slipping off the cassette | Reduction in riding efficiency | Chain wear | D3 | 9 | 5 | 4 | 4 | 2 | 4 | 5 | 3 | 3
 | | | Cassette wear | D4 | 7 | 7 | 4 | 3 | 2 | 4 | 5 | 4 | 2
 | | | Chain plate wear | D5 | 5 | 5 | 4 | 3 | 2 | 5 | 5 | 2 | 2
 | | | Bottom bracket wear in bearings | D6 | 5 | 4 | 5 | 2 | 2 | 5 | 3 | 2 | 3
 | | | Wear in the crankset spindle | D7 | 3 | 8 | 3 | 1 | 3 | 7 | 4 | 3 | 2
Steering unit | Bike imbalance | Injury to the rider | Wear in the stem screws and clamps | D8 | 5 | 8 | 4 | 1 | 3 | 2 | 4 | 3 | 2
Brake unit | Brake failure | Injury to the rider | Wear in the brake pads and clamping screws | D9 | 2 | 8 | 4 | 2 | 3 | 2 | 6 | 2 | 2
Wheel | Wheel rocking | Reduction in riding efficiency | Wear in the tyre | D10 | 7 | 6 | 6 | 1 | 1 | 2 | 5 | 4 | 2
 | | | Axle nut wear | D11 | 2 | 5 | 5 | 2 | 2 | 3 | 6 | 2 | 3
 | | | Hub bearing wear | D12 | 4 | 8 | 3 | 5 | 6 | 2 | 7 | 3 | 5
5. Results and Discussion
Objective criterion weights were first computed using Shannon’s entropy method; all calculations were performed in Microsoft Excel. The process involved normalizing the decision matrix based on the expert scores (Eq. (14)), computing the entropy value for each criterion (Eq. (15)), evaluating the degree of divergence for each criterion (Eq. (16)), and calculating the normalized weights (Eq. (17)). Table 6 summarizes the intermediate entropy terms, divergence measures, and final normalized weights to ensure full methodological transparency and reproducibility.
Step | FP | NDF | SRL | ER | MD | SPA | MR | CPK | LTR
|---|---|---|---|---|---|---|---|---|---|
Number of alternatives (n) | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 12
ln(n) | 2.4849 | 2.4849 | 2.4849 | 2.4849 | 2.4849 | 2.4849 | 2.4849 | 2.4849 | 2.4849
Scaling constant (1/ln(n)) | 0.4024 | 0.4024 | 0.4024 | 0.4024 | 0.4024 | 0.4024 | 0.4024 | 0.4024 | 0.4024
Entropy summation term | -2.4071 | -2.4701 | -2.4488 | -2.2829 | -2.3604 | -2.3819 | -2.4109 | -2.3342 | -2.3412
Entropy value | 0.9687 | 0.9940 | 0.9855 | 0.9187 | 0.9499 | 0.9585 | 0.9702 | 0.9394 | 0.9422
Divergence measure | 0.0313 | 0.0060 | 0.0145 | 0.0813 | 0.0501 | 0.0415 | 0.0298 | 0.0606 | 0.0578
Final weight | 0.0840 | 0.0160 | 0.0389 | 0.2180 | 0.1344 | 0.1112 | 0.0799 | 0.1626 | 0.1551
Table 7 presents the normalized values for each criterion under SAW, obtained using Eq. (1) and Eq. (2), together with the total score and rank of each alternative calculated using Eq. (3). D2 has the highest total score of 0.9002 and is ranked first, performing well across nearly all criteria, particularly the high-weight criteria ER and CPK. D12 ranks second with a total score of 0.4976, and D3 ranks third with a total score of 0.4800.
Alternative | FP | NDF | SRL | ER | MD | SPA | MR | CPK | LTR | Total | Rank |
|---|---|---|---|---|---|---|---|---|---|---|---|
D1 | 0.3333 | 0.5556 | 1.0000 | 0.7500 | 0.1429 | 0.6667 | 0.1250 | 0.1111 | 0.4444 | 0.4296 | 5 |
D2 | 0.8889 | 0.5556 | 1.0000 | 1.0000 | 1.0000 | 0.2500 | 1.0000 | 1.0000 | 1.0000 | 0.9002 | 1 |
D3 | 1.0000 | 0.8333 | 0.5000 | 0.5000 | 0.5714 | 0.5000 | 0.2000 | 0.3333 | 0.3333 | 0.4800 | 3 |
D4 | 0.8889 | 0.7143 | 0.6250 | 0.5000 | 0.2857 | 0.5000 | 0.2000 | 0.4444 | 0.2222 | 0.4361 | 4 |
D5 | 0.6667 | 0.8333 | 0.5000 | 0.3750 | 0.2857 | 0.4000 | 0.2000 | 0.2222 | 0.2222 | 0.3399 | 9 |
D6 | 0.5556 | 1.0000 | 0.6250 | 0.2500 | 0.2857 | 0.4000 | 0.3333 | 0.2222 | 0.3333 | 0.3388 | 10 |
D7 | 0.3333 | 0.6250 | 0.5000 | 0.1250 | 0.4286 | 0.2857 | 0.2500 | 0.4444 | 0.2222 | 0.3007 | 12 |
D8 | 0.5556 | 0.6250 | 0.6250 | 0.1250 | 0.4286 | 1.0000 | 0.2500 | 0.3333 | 0.2222 | 0.3856 | 7 |
D9 | 0.3333 | 0.6250 | 0.5000 | 0.2500 | 0.4286 | 1.0000 | 0.1667 | 0.2222 | 0.2222 | 0.3646 | 8 |
D10 | 0.6667 | 0.7143 | 0.5000 | 0.1250 | 0.2857 | 1.0000 | 0.2000 | 0.4444 | 0.2222 | 0.3864 | 6 |
D11 | 0.3333 | 0.8333 | 0.6250 | 0.2500 | 0.2857 | 0.6667 | 0.1667 | 0.2222 | 0.3333 | 0.3338 | 11 |
D12 | 0.4444 | 0.6250 | 0.5000 | 0.5000 | 0.7143 | 0.6667 | 0.1429 | 0.3333 | 0.5556 | 0.4976 | 2 |
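As a cross-check, the SAW totals can be reproduced by combining the entropy weights of Table 6 with the normalized rows of Table 7. A minimal sketch (weights and rows copied from the published tables, so totals agree only to within rounding):

```python
# Entropy weights from Table 6 (FP, NDF, SRL, ER, MD, SPA, MR, CPK, LTR)
W = [0.084, 0.016, 0.0389, 0.218, 0.1344, 0.1112, 0.0799, 0.1626, 0.1551]

# Normalized scores for two of the alternatives (rows of Table 7)
D2 = [0.8889, 0.5556, 1.0000, 1.0000, 1.0000, 0.2500, 1.0000, 1.0000, 1.0000]
D3 = [1.0000, 0.8333, 0.5000, 0.5000, 0.5714, 0.5000, 0.2000, 0.3333, 0.3333]

def saw_score(row, weights):
    """Eq. (3): simple additive weighting -- the weighted sum of normalized scores."""
    return sum(w * r for w, r in zip(weights, row))
```

For D2 and D3 this reproduces the published totals of 0.9002 and 0.4800 to within about 0.0001, the residual arising from the four-decimal rounding of the weights.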
The decision matrix is first normalized using Eq. (4) so that the values of each criterion are comparable across alternatives, and the weighted normalized matrix is then formed using Eq. (5). TOPSIS next identifies the ideal solution (best attainable values) and the negative-ideal solution (worst attainable values) using Eq. (6) and Eq. (7), and computes each alternative’s separation from them using Eq. (8) and Eq. (9). The closeness coefficient ($C_i$), given by Eq. (10), represents how close each alternative (D1, D2, etc.) is to the ideal solution; a higher $C_i$ indicates better performance. Table 8 presents the separation measures, closeness coefficients, and resulting ranks.
Alternative | Distance from Ideal Best Solution | Distance from Ideal Worst Solution | Closeness Coefficient | Rank |
|---|---|---|---|---|
D1 | 0.1441 | 0.0941 | 0.3952 | 3 |
D2 | 0.0439 | 0.1906 | 0.8128 | 1 |
D3 | 0.1292 | 0.0781 | 0.3767 | 4 |
D4 | 0.1375 | 0.074 | 0.3499 | 5 |
D5 | 0.1597 | 0.047 | 0.2274 | 9 |
D6 | 0.1631 | 0.0418 | 0.204 | 12 |
D7 | 0.1688 | 0.0476 | 0.2199 | 10 |
D8 | 0.1687 | 0.0586 | 0.2576 | 7 |
D9 | 0.1661 | 0.0542 | 0.246 | 8 |
D10 | 0.1673 | 0.0613 | 0.268 | 6 |
D11 | 0.1642 | 0.0459 | 0.2185 | 11 |
D12 | 0.1182 | 0.0883 | 0.4278 | 2 |
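The TOPSIS steps above can be sketched generically as follows. This is a minimal illustration rather than the study’s spreadsheet implementation; the `benefit` flags, which mark each criterion as benefit- or cost-oriented, must be set according to the criterion definitions (the orientations are not restated in this section):

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS closeness coefficients: vector normalization (Eq. 4), weighting (Eq. 5),
    ideal/anti-ideal solutions (Eqs. 6-7), separation measures (Eqs. 8-9), Ci (Eq. 10)."""
    m = len(matrix[0])
    # Eq. (4): vector normalization per criterion
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    # Eq. (5): weighted normalized matrix
    V = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in matrix]
    # Eqs. (6)-(7): ideal and negative-ideal solutions
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]
    ci = []
    for row in V:
        # Eqs. (8)-(9): Euclidean separation from ideal and negative-ideal
        d_pos = math.sqrt(sum((v, a)[0] ** 2 - 2 * v * a + a ** 2 for v, a in zip(row, ideal)))
        d_pos = math.sqrt(sum((v - a) ** 2 for v, a in zip(row, ideal)))
        d_neg = math.sqrt(sum((v - a) ** 2 for v, a in zip(row, anti)))
        ci.append(d_neg / (d_pos + d_neg))  # Eq. (10): closeness coefficient
    return ci
```

For example, `topsis([[3, 4], [1, 2]], [0.5, 0.5], [True, True])` assigns the dominant first alternative a closeness coefficient of 1.0 and the dominated one 0.0.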
The TOPSIS results indicate that D2 has the highest closeness coefficient $\left(C_i=0.8128\right)$, making it the top-ranked alternative. It performs strongly on the high-weight criteria, particularly ER, MD, and CPK, indicating a well-balanced and robust profile. D12 ranks second with a relatively high closeness coefficient $\left(C_i=0.4278\right)$, showing strong performance across multiple criteria, especially ER and the maintenance-related factors, although its overall performance remains well below D2. D1 and D3 follow with $C_i=0.3952$ and 0.3767, respectively, indicating moderate proximity to the ideal solution. D3 performs well on FP and NDF, while D1 exhibits relatively balanced performance across most criteria without dominating any specific aspect.
D4 ranks in the mid-range ($C_i=0.3499$), showing acceptable performance in FP and NDF but weaker results in lead time reduction. Alternatives such as D10, D8, and D9 demonstrate moderate performance, indicating some strengths but also noticeable limitations across key criteria.
On the lower end, D5, D7, D11, and D6 exhibit relatively low closeness coefficients, with D6 being the least preferred alternative $\left(C_i=0.2040\right)$. These alternatives are farther from the ideal solution due to weaker performance across multiple critical criteria, particularly ER and CPK, which significantly reduces their prioritization.
Overall, the TOPSIS analysis confirms that D2, D12, and D1 are the most preferred alternatives, as they consistently perform well across critical failure, environmental, and operational criteria. In contrast, lower-ranked alternatives such as D6 and D7 require improvement in key areas such as lead time reduction, supplier performance, and design risk management to enhance their prioritization in future evaluations.
Table 9 presents the weighted regret terms used to form the group utility measure (Eq. (11)) and the individual regret measure (Eq. (12)), the resulting VIKOR index values (Eq. (13)), and the final ranks. D2 has the lowest $Q_i$ score of 0.0000, making it the best option; it performs exceptionally well across all criteria, with many regret terms equal to zero, and lies closest to the ideal solution. D12 ranks second with a $Q_i$ score of 0.3886; it shows low regret in MD and LTR and offers a balanced profile across several criteria, making it a good compromise. D3 is ranked third and performs well in FP and NDF, but it trails D2 and D12 on high-weight criteria such as ER, CPK, and LTR. D4 ranks fourth, performing well in FP and NDF but weaker in ER and LTR; it is a middle-of-the-pack alternative in terms of overall performance.
Alternative | FP | NDF | SRL | ER | MD | SPA | MR | CPK | LTR | VIKOR Index (Qi) | Rank Qi
|---|---|---|---|---|---|---|---|---|---|---|---|
D1 | 0.0840 | 0.0160 | 0.0000 | 0.0623 | 0.1344 | 0.0185 | 0.0799 | 0.1626 | 0.1108 | 0.6256 | 5 |
D2 | 0.0140 | 0.0160 | 0.0000 | 0.0000 | 0.0000 | 0.1112 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 1 |
D3 | 0.0000 | 0.0040 | 0.0389 | 0.1245 | 0.0672 | 0.0371 | 0.0457 | 0.1219 | 0.1329 | 0.4166 | 3 |
D4 | 0.0140 | 0.0080 | 0.0292 | 0.1245 | 0.1120 | 0.0371 | 0.0457 | 0.1016 | 0.1551 | 0.5604 | 4 |
D5 | 0.0420 | 0.0040 | 0.0389 | 0.1557 | 0.1120 | 0.0556 | 0.0457 | 0.1423 | 0.1551 | 0.6537 | 6 |
D6 | 0.0560 | 0.0000 | 0.0292 | 0.1868 | 0.1120 | 0.0556 | 0.0228 | 0.1423 | 0.1329 | 0.7896 | 7 |
D7 | 0.0840 | 0.0120 | 0.0389 | 0.2180 | 0.0896 | 0.0927 | 0.0342 | 0.1016 | 0.1551 | 1.0000 | 12 |
D8 | 0.0560 | 0.0120 | 0.0292 | 0.2180 | 0.0896 | 0.0000 | 0.0342 | 0.1219 | 0.1551 | 0.9196 | 10 |
D9 | 0.0840 | 0.0120 | 0.0389 | 0.1868 | 0.0896 | 0.0000 | 0.0571 | 0.1423 | 0.1551 | 0.8101 | 8 |
D10 | 0.0420 | 0.0080 | 0.0389 | 0.2180 | 0.1120 | 0.0000 | 0.0457 | 0.1016 | 0.1551 | 0.9235 | 11 |
D11 | 0.0840 | 0.0040 | 0.0292 | 0.1868 | 0.1120 | 0.0185 | 0.0571 | 0.1423 | 0.1329 | 0.8109 | 9 |
D12 | 0.0700 | 0.0120 | 0.0389 | 0.1245 | 0.0448 | 0.0185 | 0.0685 | 0.1219 | 0.0886 | 0.3886 | 2 |
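The VIKOR computation of Eqs. (11)–(13) can be sketched compactly as below. This is a generic illustration, not the study’s spreadsheet; the compromise weight `v = 0.5` is the conventional default (the value used in the study is not restated here), the `benefit` flags encode criterion orientation, and the sketch assumes every criterion varies across alternatives:

```python
def vikor(matrix, weights, benefit, v=0.5):
    """VIKOR: group utility S (Eq. 11), individual regret R (Eq. 12), index Q (Eq. 13).
    Assumes each criterion takes at least two distinct values across alternatives."""
    m = len(matrix[0])
    f_best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*matrix))]
    f_worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*matrix))]
    S, R = [], []
    for row in matrix:
        # Weighted normalized regret terms w_j * (f*_j - f_ij) / (f*_j - f-_j)
        terms = [weights[j] * (f_best[j] - row[j]) / (f_best[j] - f_worst[j])
                 for j in range(m)]
        S.append(sum(terms))  # Eq. (11): group utility
        R.append(max(terms))  # Eq. (12): individual regret
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    return [v * (s - s_min) / (s_max - s_min) + (1 - v) * (r - r_min) / (r_max - r_min)
            for s, r in zip(S, R)]  # Eq. (13): VIKOR index
```

A quick sanity check: for two mirror-image alternatives with unequal weights, e.g. `vikor([[9, 1], [1, 9]], [0.6, 0.4], [True, True])`, the alternative strong on the heavier criterion receives Q = 0 and the other Q = 1.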
Lower-ranked alternatives (D6, D8, D9, D10, D11) have relatively high $Q_i$ scores and are less preferable than D2, D12, and D3. D6 shows high regret in ER and CPK, while D8 and D10 show high regret in ER and LTR. Their lower prioritization indicates that they are far from the compromise solution. The VIKOR technique promotes compromise solutions, and D2 emerges as the best option, effectively balancing all requirements.
Table 10 compares the results of the three MCDM methods: TOPSIS, VIKOR, and SAW. Each technique evaluates the decision alternatives (D1 to D12) using a different scoring mechanism, yielding distinct yet largely consistent prioritizations. TOPSIS ranks the options by their closeness to the ideal solution, with higher values indicating better performance; VIKOR ranks them by their $\mathrm{Q}_{\mathrm{i}}$ values, with lower $\mathrm{Q}_{\mathrm{i}}$ indicating a better choice; and SAW rates alternatives by their overall weighted scores, with higher scores indicating higher priority. Figure 4 shows the ranks attained by the alternatives under the three MCDM methods.
Alternatives | TOPSIS Closeness Coefficient (Ci) | TOPSIS Rank | VIKOR Index (Qi) | VIKOR Rank | SAW Score | SAW Rank
|---|---|---|---|---|---|---|
D1 | 0.3952 | 3 | 0.6256 | 5 | 0.4296 | 5
D2 | 0.8128 | 1 | 0.0000 | 1 | 0.9002 | 1
D3 | 0.3767 | 4 | 0.4166 | 3 | 0.4800 | 3
D4 | 0.3499 | 5 | 0.5604 | 4 | 0.4361 | 4
D5 | 0.2274 | 9 | 0.6537 | 6 | 0.3399 | 9
D6 | 0.2040 | 12 | 0.7896 | 7 | 0.3388 | 10
D7 | 0.2199 | 10 | 1.0000 | 12 | 0.3007 | 12
D8 | 0.2576 | 7 | 0.9196 | 10 | 0.3856 | 7
D9 | 0.2460 | 8 | 0.8101 | 8 | 0.3646 | 8
D10 | 0.2680 | 6 | 0.9235 | 11 | 0.3864 | 6
D11 | 0.2185 | 11 | 0.8109 | 9 | 0.3338 | 11
D12 | 0.4278 | 2 | 0.3886 | 2 | 0.4976 | 2

D2 is the top-performing alternative across the three MCDM methods, consistently ranked $1^{\text{st}}$, and is therefore a robust choice across evaluation techniques. Similarly, D12 ranks $2^{\text{nd}}$ in all three methods and is another robust and reliable alternative. D3 ranks $3^{\text{rd}}$ in both VIKOR and SAW and $4^{\text{th}}$ in TOPSIS, further suggesting a high level of agreement among the methods regarding the relative merits of these top alternatives.
D1 and D4 display slight variability in their prioritizations; D1 ranks $3^{\text{rd}}$ in TOPSIS but $5^{\text{th}}$ in both VIKOR and SAW, while D4 ranks $5^{\text{th}}$ in TOPSIS and $4^{\text{th}}$ in VIKOR and SAW. While these alternatives are relatively stable in performance, minor shifts in rank across methods reflect subtle differences in how each method weights and aggregates the criteria. At the bottom of the rankings, D7 is last in both SAW and VIKOR and near-last in TOPSIS, where D6 ranks last, indicating consistently poor performance for these alternatives.
From an engineering perspective, the high priority given to D2 (major frame crash) reflects the structural vulnerability of lightweight racing frames, which are optimized for stiffness-to-weight rather than impact tolerance. The prominence of D12 (hub bearing wear) is associated with cyclic radial loading and insufficient lubrication at high speeds, leading to progressive fatigue degradation. Similarly, D3 (chain wear) is influenced by repetitive tensile loading, contamination, and misalignment during aggressive shifting. These rankings align with known mechanical stress mechanisms and operational load patterns typical of competitive cycling environments.
The differences in ranks highlight the unique strengths of each MCDM method and the criteria they prioritize. The overall agreement between the methods suggests they provide a reliable basis for decision-making, especially when the same alternatives consistently rank at the top or bottom across all three MCDM techniques.
These failure mechanisms carry direct maintenance implications: crash damage to lightweight frames is sudden and safety-critical, warranting post-incident structural inspection, whereas hub bearing and chain wear are progressive and can be managed through scheduled lubrication, alignment checks, and wear monitoring.
The stability of rankings across SAW, TOPSIS, and VIKOR can be attributed to the consistent dominance of key criteria such as ER, CPK, and MD, which have higher entropy weights. Alternatives that perform well in these high-weight criteria stay at the top across different methods, while mid-ranked options vary because of differences in how they aggregate data and their sensitivity to trade-offs among criteria. From a decision-making point of view, this prioritization allows for adaptive maintenance strategies depending on operational constraints. Under cost limitations, emphasis may be on reducing CPK and LTR, while safety-critical situations require focusing on FP and ER. Similarly, in time-sensitive maintenance environments, MD and SPA may be prioritized to ensure quick system recovery. Hence, the proposed framework enables flexible, context-specific maintenance planning by aligning failure priorities with specific operational goals.
From an engineering management perspective, the proposed framework serves as a structured decision-support tool for maintenance planning and resource allocation. Prioritizing failure modes allows for the creation of risk-based inspection schedules, where high-ranked failures, such as D2 and D12, undergo more frequent and detailed inspections, while lower-ranked modes can be monitored less intensively. Regarding maintenance budgeting, the ranking provides a quantitative foundation for distributing financial resources toward components that have the greatest impact on system reliability and operational risk. This approach supports cost-effective maintenance strategies by directing expenditures toward critical failure modes rather than spreading resources evenly.
The framework also guides technician allocation and workforce planning. High-priority failure modes requiring specialized skills (e.g., structural frame inspection or bearing replacement) can be assigned to experienced personnel, while routine tasks involving lower-risk components can be delegated accordingly. This enhances workforce efficiency and minimizes downtime. More broadly, integrating FMEA with entropy-based weighting and multiple MCDM methods creates a transferable decision-making process applicable to other engineering systems. By systematically connecting failure characteristics with operational criteria, the approach supports consistent, data-driven decision-making across maintenance, reliability, and asset management contexts.
The proposed maintenance actions are directly derived from the integrated framework’s scoring structure and prioritization results. Specifically, alternatives with higher closeness coefficients (e.g., D2, D12, and D3) exhibit strong performance across high-weight criteria such as ER, CPK, and MD, indicating a greater impact on system reliability and operational risk. As a result, these failure modes are assigned higher priority for inspection and preventive maintenance.
The inspection frequency and intervention strategies are therefore aligned with both the relative ranking (Ci values) and each criterion’s contribution to the overall score. For example, failure modes with high FP and ER values require more frequent inspection due to their potential safety and reliability implications, while those with elevated LTR or SPA values necessitate proactive resource planning to minimize downtime and supply delays.
Conversely, lower-ranked alternatives (e.g., D6, D7, D11) demonstrate weaker performance across critical criteria and lower Ci values, thereby justifying reduced inspection intensity and deferred maintenance actions under resource constraints. This ensures that maintenance decisions are not generic but systematically derived from the quantitative evaluation framework. Thus, the recommended actions directly translate model outputs into operational decisions, enabling transparent, data-driven maintenance planning.
Spearman’s rank correlation coefficient was applied to measure the degree of agreement among the rankings produced by the different MCDM methods and to assess whether they are consistent. The coefficients between the ranking pairs are as follows:
• TOPSIS vs. VIKOR: 0.825, indicating a strong positive correlation.
• TOPSIS vs. SAW: 0.958, indicating a very strong positive correlation.
• VIKOR vs. SAW: 0.804, also indicating a strong positive correlation.
The prioritizations assigned by the TOPSIS, VIKOR, and SAW approaches are therefore strongly correlated, particularly TOPSIS and SAW, indicating that these methods rank the alternatives quite consistently. Despite minor variances, the main prioritization patterns are stable across approaches.
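The consistency check is straightforward to reproduce from the rank columns of the tables above. A minimal sketch follows; the closed-form expression is valid here because each method yields untied ranks 1–12:

```python
def spearman_rho(rank_a, rank_b):
    """Spearman's rank correlation for untied ranks:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Ranks for D1..D12 taken from the VIKOR (Table 9) and SAW (Table 7) rank columns
vikor_ranks = [5, 1, 3, 4, 6, 7, 12, 10, 8, 11, 9, 2]
saw_ranks = [5, 1, 3, 4, 9, 10, 12, 7, 8, 6, 11, 2]
```

With these two rank vectors, `spearman_rho(vikor_ranks, saw_ranks)` recovers the reported VIKOR–SAW coefficient of 0.804.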
While the highest- and lowest-ranked failure modes remain stable across SAW, TOPSIS, and VIKOR, variations among mid-ranked alternatives are primarily driven by differences in criterion sensitivity and compensation structures. SAW applies full compensatory aggregation, allowing high scores in dominant criteria, such as ER and CPK, to offset weaker performance in other criteria. TOPSIS evaluates relative closeness to ideal and anti-ideal solutions, making mid-ranked alternatives sensitive to small variations in normalized distances. VIKOR emphasizes compromise prioritization based on group utility and regret measures, thereby amplifying differences among criteria with higher entropy weights. Consequently, alternatives with balanced but moderate scores exhibit positional shifts across methods.
Assigning objective weights to the maintenance criteria was the first goal (Table 11). The Shannon entropy technique identified ER as the most significant criterion, followed by CPK, LTR, MD, SPA, FP, MR, SRL, and NDF. The second goal, determining the most effective maintenance priorities, was accomplished using three MCDM techniques: TOPSIS, VIKOR, and SAW.
Criterion | Weight | Rank
|---|---|---|
FP | 0.0840 | 6
NDF | 0.0160 | 9
SRL | 0.0389 | 8
ER | 0.2180 | 1
MD | 0.1344 | 4
SPA | 0.1112 | 5
MR | 0.0799 | 7
CPK | 0.1626 | 2
LTR | 0.1551 | 3
The following is a systematic display of the prioritization of alternatives that were produced independently using the TOPSIS, VIKOR, and SAW methodologies:
• SAW: D2 > D12 > D3 > D4 > D1 > D10 > D8 > D9 > D5 > D6 > D11 > D7
• TOPSIS: D2 > D12 > D1 > D3 > D4 > D10 > D8 > D9 > D5 > D7 > D11 > D6
• VIKOR: D2 > D12 > D3 > D4 > D1 > D5 > D6 > D9 > D11 > D8 > D10 > D7
As shown above, among the twelve identified causes of failure, the most significant cause based on the SAW, TOPSIS, and VIKOR prioritizations is Major crash (D2), followed by Hub bearing wear (D12) and Chain wear (D3), while Wear in the crankset spindle (D7) ranks last in SAW and VIKOR and near-last in TOPSIS. With the same entropy weights assigned across all three methods, the prioritization orders were found to be nearly identical. However, the SAW procedure is much simpler and involves fewer steps than TOPSIS and VIKOR: TOPSIS uses vector normalization, whereas VIKOR and SAW employ linear normalization.
All three MCDM methods, SAW, TOPSIS, and VIKOR, produced consistent prioritizations for the highest- and lowest-ranked failure modes; however, some differences appeared among the mid-ranked alternatives. These variations arise from the fundamental methodological differences of each approach: SAW calculates a linear, aggregated weighted score, which can dampen the impact of outlier values; TOPSIS evaluates alternatives by their closeness to the ideal solution and distance from the negative-ideal solution, potentially penalizing imbalances across criteria; and VIKOR seeks compromise solutions by minimizing both the group utility loss and the maximum individual regret. Alternatives D4 and D5 showed rank shifts across the methods, highlighting trade-offs among performance, cost, and safety impact. Such insights are especially valuable in real-world settings where maintenance planners may shift priorities toward robustness or efficiency depending on the operational scenario.
The high prioritization of failure modes such as frame cracking and bearing wear can be attributed to repeated cyclic loading, stress concentration at joint interfaces, and exposure to variable operating conditions. These components are subjected to dynamic forces during high-speed cycling, which increases fatigue susceptibility. Similarly, drivetrain-related failures are influenced by frictional losses, lubrication conditions, and alignment issues, underscoring the importance of periodic inspection and preventive maintenance.
In practical terms, the MCDM framework lends itself to integration within contemporary Computerized Maintenance Management Systems or as a supplementary module to IoT-enabled diagnostic tools. When coupled with real-time sensor data, such as vibration, temperature, or usage cycles, the model enables dynamic updates to failure probabilities and maintenance schedules. This data-driven approach enables more efficient resource allocation by minimizing unplanned downtimes and eliminating unnecessary inspections. Furthermore, the framework can be embedded as a decision-making engine within expert systems or AI-driven maintenance platforms, thereby helping technicians prioritize and sequence tasks.
These findings directly support condition-based maintenance planning by identifying components that require higher inspection frequency and preventive replacement strategies.
6. Conclusions
This study developed an integrated FMEA–MCDM framework using Shannon entropy weighting combined with SAW, TOPSIS, and VIKOR to prioritize failure modes in a high-performance mechanical system. The results consistently identified D2, D12, and D3 as the most critical failure modes, demonstrating the robustness of the multi-method approach and the influence of high-weight criteria such as ER, CPK, and MD on prioritization outcomes.
From a methodological perspective, combining objective weighting with multiple decision-making techniques reduces subjectivity and improves ranking stability. The consistency observed across methods confirms the reliability of the proposed framework for complex failure prioritization problems.
From a practical perspective, the framework offers a structured foundation for maintenance decision-making. It supports risk-based inspection planning, targeted resource allocation, and prioritization of critical components, thereby enhancing system reliability and operational efficiency. These findings carry direct implications for engineering management, and the approach is adaptable to other engineering systems where multiple criteria affect failure severity and maintenance strategy.
The study is limited to a specific case context and predefined evaluation criteria. Future work may expand the framework by adding dynamic operational data, real-time monitoring, and hybrid weighting strategies to further enhance decision accuracy and applicability.
Conceptualization, D.D. and R.K.; methodology, D.D. and S.D.; software, M.C.; validation, D.D., R.K. and Ž.S.; formal analysis, S.D. and R.K.; investigation, H.S.F. and R.K.; resources, R.K.; data curation, R.K.; writing—original draft preparation, D.D. and S.D.; writing—review and editing, R.K. and Ž.S.; visualization, M.C. and A.P.A.; supervision, R.K. and Ž.S.; project administration, R.K.; funding acquisition, Ž.S. All authors have read and agreed to the published version of the manuscript.
The data used to support the research findings are available from the corresponding author upon request.
The authors declare no conflicts of interest.
