Open Access
Research article

Does the Performance of MCDM Rankings Increase as Sensitivity Decreases? Graphics Card Selection and Pattern Discovery Using the PROBID Method

Mahmut Baydaş 1*, Mustafa Kavacık 1, Zhiyuan Wang 2,3

1 Faculty of Applied Sciences, Necmettin Erbakan University, 42000 Konya, Türkiye
2 Department of Chemical and Biomolecular Engineering, National University of Singapore, 117585 Singapore, Singapore
3 Artificial Intelligence Research and Computational Optimization (AIRCO) Laboratory, DigiPen Institute of Technology Singapore, 139660 Singapore, Singapore
Journal of Intelligent Management Decision | Volume 3, Issue 2, 2024 | Pages 91-103
Received: 01-31-2024, Revised: 03-09-2024, Accepted: 03-24-2024, Available online: 04-09-2024

Abstract:

In general, a stable and strong system should not respond in an overly sensitive or dependent way to its inputs (unless such sensitivity is consciously planned and desired), as this reduces efficiency. As with other techniques, approaches, and methodologies, if the results of MCDM methods are excessively affected when input parameters change, this situation is identified through sensitivity analyses. Oversensitivity is generally accepted as a problem in the MCDM (Multi-Criteria Decision Making) family of methodologies, which has more than 200 members according to the current literature. The MCDM family is not only sensitive to weight coefficients; MCDM methods can also be sensitive to many other calculation parameters such as data type, normalization, fundamental equation, threshold value, and preference function. Many studies attempting to understand the degree of sensitivity simply monitor whether the ranking position of the best alternative changes. However, this is insufficient for understanding the nature of sensitivity, and more evidence is undoubtedly needed to gain insight into this matter. Observing the holistic change of all alternatives, rather than a single alternative, provides the researcher with more reliable and generalizable evidence, information, or assumptions about the degree of sensitivity of the system. In this study, we assigned a fixed reference point to measure sensitivity with a more robust approach. Thus, we took the distance to this fixed point as the base reference while observing the changing MCDM results. We calculated sensitivity to normalization, not just sensitivity to weight coefficients. In addition, past MCDM studies accept the existing data as the only criterion in sensitivity analysis and generalize too easily. To show that the model proposed in this study is not a coincidence, in addition to the graphics card selection problem, an exploratory validation was performed on another problem with a different set of data, alternatives, and criteria. We comparatively measured sensitivity using the relationship between MCDM-based performance and the static reference point. We statistically measured the sensitivity of the PROBID method with four weighting methods and seven normalization techniques. The striking result, confirmed by 56 different MCDM ranking findings, was this: in general, if the sensitivity of an MCDM method is high, the relationship of that MCDM method to a fixed reference point is low. On the other hand, if the sensitivity is low, a high correlation with the reference point is produced. In short, uncontrolled hypersensitivity disrupts not only the ranking but also external relations, as expected.
Keywords: Multi-Criteria Decision Making (MCDM), Sensitivity analysis, Graphics card selection

1. Introduction

A graphics card is the computer hardware that transforms the digital data processed on the computer into an image the user can understand and sends it to the monitor; a monitor is the device that displays this image. In all computers, the graphics card is either integrated on the mainboard (onboard) or external [1]. A graphics card must also be used for cryptocurrency mining [2]. The processing power of the graphics card determines the quality of the resolution [3]. Located between the processor and the monitor, the graphics card is one of the main parts of the computer. With its software and hardware features, the graphics card enables high-resolution images to be created and projected onto the monitor. It allows graphics, pictures, movies, and videos to be created and transferred to the screen. The display of more brilliant and clear colors in games is directly related to the graphics card; for this reason, good image quality depends on the graphics card [4]. Fast processing of data is also important: a graphics card with this capacity takes the graphics load off the computer, ensuring no performance loss and providing high-resolution graphics [5]. With the latest developments in the information sector, there is almost no field where computers are not used. Böyük et al. [6] used deep learning on computers equipped with advanced graphics cards to detect the location of vehicles in traffic with unmanned aerial vehicles. Due to the complexity of today's video games and the demand for high screen resolution, graphics cards are equipped with high-performance GPUs (graphics processing units) [7]. While “graphics card" is often used interchangeably with “graphics processing unit" (GPU), the GPU is merely one element within the broader graphics card assembly [8].

Undoubtedly, dealing with complex decision problems using only one or a few selection criteria creates inefficiency under today's conditions, and this is quite harmful. The widely used methodology for choosing the best graphics card, or the best alternative in general, is MCDM (Multi-Criteria Decision Making) [9], [10], [11].

It can be said that the existing literature contains only a limited number of MCDM-based studies on the selection of the best graphics card to assist decision-makers. For example, Avunduk et al. [12] conducted a study to decide on the most suitable graphics card for cryptocurrency mining using the BWM-TOPSIS method; according to the resulting model, the most suitable graphics card was found to be the RX 580. Through a case study, Lee et al. [13] analyzed the usage and speed of graphics cards based on Monte Carlo methods. Cook et al. [14] investigated the potential of using graphics processing units (GPUs) for cryptographic processing with symmetric key encryption. Komatitsch et al. [15] ported the numerical simulation of seismic waves generated by earthquakes around the world to graphics cards using CUDA; their tests and measurements showed significant performance acceleration in the best case.

Sensitivity analysis for MCDM is a concept directly related to the degree to which a numerical change in an input parameter affects the final results [16]. Measuring the effect of weight coefficient assignment on the sensitivity of the MCDM ranking is the first application that comes to mind in the literature. However, the common tendency in the literature is to check only whether the position of the best alternative in a ranking has changed, rather than examining the entire ranking. Yet sensitivity analysis can cover all MCDM input components (such as normalization type, alternatives, criteria, data type, threshold value, preference function, the fundamental equation, etc.). There is already sufficient consensus that an MCDM method's sensitivity to input parameters should be low. Determination, stability, perseverance, and low sensitivity are the sought-after and desired characteristics of an MCDM method. However, it is not clear whether the determining factor of sensitivity is the weight coefficient, the MCDM fundamental equation, or other elements such as data type, normalization, and threshold value. Accepting only the weight coefficient as the determinant of sensitivity would be an incomplete approach in this regard. Moreover, in our opinion, all components may share in determining sensitivity. However, it cannot be said that this issue has been discussed comprehensively and in sufficient depth in the literature [17], [18].

In this study, we would like to draw attention to a few important deficiencies in the literature mentioned above. In the literature, sensitivity is often determined simply by looking at whether the best alternative has changed. According to reasonable comparison rules, a sensitivity analysis should instead consider the sensitivity of the entire ranking. Additionally, sensitivity is frequently identified with the original fundamental equation of the MCDM method; however, the normalization component can also change, so this is not a static choice. Normalization can be just as decisive a determinant of sensitivity as the fundamental MCDM equation. Thus, it is necessary to remember that the whole we call MCDM is not just the fundamental equation and has other parts as well. Another point is that sensitivity does not always have to be negative. For example, electronic device sensors are very sensitive, and their benefit lies precisely in that sensitivity. Positive and smart sensitivity solutions may also be developed for the MCDM methodology in the future.

The exploratory model in this study, unlike classical sensitivity analysis, is tested on two different problems rather than a single problem type, thereby expanding the scope. Although we perform the sensitivity analysis on computer graphics card brands, we alternatively also perform it on MCDM calculations of country economic performance. The data ranges, numbers of alternatives, numbers of criteria, and even the weight coefficients of these two problems are completely different from each other. Moreover, we measure sensitivity against a fixed reference ranking. When we changed the input parameters (weight coefficient, normalization, and data type) of the MCDM methods, we obtained 56 different MCDM rankings. We suggest that the change, or sensitivity, in the rankings can only be measured accurately by comparing them with a fixed reference ranking. For example, “price" for a computer graphics card and “GDP per capita" for a country's economic performance can serve as fair reference points. This is a reasonable choice because, as a result of competition, we can expect a close relationship between performance and price. In this study, we focused on discovering the determinants of the degree of sensitivity of MCDM methods through data analytics.

2. Method and Material

In this study, the best computer graphics card, a selection problem for decision-makers, will be selected with an MCDM method and additionally validated with an innovative sensitivity analysis model. In other words, whether choosing the best alternative is a reasonable choice is discussed within the framework of sensitivity analysis. Which weight coefficient and which normalization method should be chosen will also be revealed with this sensitivity analysis approach. Stability, robustness, and the degree of verification will be investigated by performing a sensitivity analysis of the selected MCDM method. As an alternative to classical confirmation analysis, we focused on how the correlation between 56 different MCDM rankings and a fixed reference point (price) changes. Table 1 below summarizes the methodology we use in selecting a computer graphics card.

Table 1. Normalization and MCDM methods, performance criteria and weighting technique used in this study

| Normalization Method | Weighting Method | MCDM Methods | Performance Criteria |
|---|---|---|---|
| Rank Based, Decimal, Z-Score, Sum, Vector, Min-Max and Max normalization | CRITIC, SD, ENTROPY, Mean | PROBID | Memory size, Memory speed (MHz), GPU cores, Memory interface width (bits), Memory bandwidth (GB/s), Graphics card power |

The diagram showing the methodology applied in this study is shown in Figure 1.

Figure 1. The flow chart of the methodology used in this research
2.1 Performance Criteria

Informative explanations regarding the performance criteria defined for the selection of the graphics card used in this study can be seen below.

Memory size: Memory size pertains to the capacity of a computer or device to retain and access data. This capacity is usually quantified in units such as bytes, kilobytes, megabytes, gigabytes, and terabytes. It dictates the volume of information, encompassing files, documents, images, videos, and applications, that can be accommodated on a device concurrently. Greater memory sizes facilitate the storage and retrieval of more data, thereby enhancing the device's operational speed and effectiveness [19].

Memory speed (MHz): It impacts the performance of the graphics processing unit (GPU). Insufficient memory on the graphics card can restrict the range of options available for resolution, textures, shadows, and various other configurations, effectively placing constraints on the visual quality and performance of your system [20].

GPU cores: The GPU functions as a processor comprised of numerous smaller, specialized cores, each optimized for specific tasks, which collectively enable it to handle graphics rendering and computational tasks efficiently [21]. These cores make up the GPU; during processing they work in coordination with each other, resulting in smooth, high-quality gameplay [22].

Memory interface width (bits): A computer has many memory interfaces. The Memory Interface in a GPU essentially serves as the pathway that allows the GPU to communicate with its memory subsystem. It's like the bridge connecting the GPU's processing power with its memory resources. This interface determines how much data can be transferred between the GPU's processing units and its memory modules at any given time. In simpler terms, it's the width of the pipeline through which data flows between the GPU and its memory, influencing the speed and efficiency of data transfer within the graphics processing unit. Bus width denotes the size of the data path that allows information to flow between components, specifying how many bits can be transmitted concurrently to the CPU [23].

Memory bandwidth (GB/s): Bandwidth relates to the volume of data that can be transferred to or from a given location, and in the context of GPUs, the focus is primarily on global memory bandwidth [24].
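The criteria above are not fully independent: memory bandwidth is largely determined by the memory data rate and the interface width. The snippet below is a minimal illustrative calculation (not taken from the paper; the card values are hypothetical) showing this relationship. Note that the effective transfer rate in MT/s used here differs from the base memory clock in MHz listed as a criterion.

```python
# Illustrative only: peak memory bandwidth estimated from the effective memory
# transfer rate and the bus (interface) width. The example values are hypothetical.
def memory_bandwidth_gbs(effective_rate_mtps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = transfer rate (MT/s) x bus width (bits) / 8 bits per byte / 1000."""
    return effective_rate_mtps * bus_width_bits / 8 / 1000

# e.g., a hypothetical card with a 21000 MT/s effective rate on a 384-bit bus
print(memory_bandwidth_gbs(21000, 384))  # ~1008 GB/s
```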

2.2 Normalization, Weighting and Statistical Methods Used in this Study

Table 2 demonstrates the normalization, weighting and statistical methods used in this study.

Table 2. Demonstration of different normalization, weighting and statistical methods and equations

Converter/Normalization Method

Equation

Sum

${{F}_{ij}}=\frac{{{f}_{ij}}}{\mathop{\sum }_{k=1}^{m}{{f}_{kj}}}~~~~i\in \left\{ 1,2,\ldots ,m \right\};~j\in \left\{ 1,2,\ldots ,n \right\}$

Vector

${{F}_{ij}}=\frac{{{f}_{ij}}}{\sqrt{\mathop{\sum }_{k=1}^{m}{{f}_{kj}}^{2}}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\};~j\in \left\{ 1,2,\ldots ,n \right\}$

Minimum-Maximum

${{F}_{ij}}=\frac{{{f}_{ij}}-mi{{n}_{i\in m}}{{f}_{ij}}}{ma{{x}_{i\in m}}{{f}_{ij}}-mi{{n}_{i\in m}}{{f}_{ij}}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\};~~j\in \left\{ 1,2,\ldots ,n \right\}~\text{for benefit objectives}$

${{F}_{ij}}=\frac{ma{{x}_{i\in m}}{{f}_{ij}}-{{f}_{ij}}}{ma{{x}_{i\in m}}{{f}_{ij}}-mi{{n}_{i\in m}}{{f}_{ij}}}~~~~~~i\in \left\{ 1,2,\ldots ,m \right\};~j\in \left\{ 1,2,\ldots ,n \right\} \text{for cost objectives}$

Maximum

${{F}_{ij}}=\frac{{{f}_{ij}}}{ma{{x}_{i\in m}}{{f}_{ij}}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\};~j\in \left\{ 1,2,\ldots ,n \right\}~\text{for benefit objectives}$

${{F}_{ij}}=\frac{mi{{n}_{i\in m}}{{f}_{ij}}}{{{f}_{ij}}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\};~j\in \left\{ 1,2,\ldots ,n \right\}~\text{for cost objectives}$

Ranking Based Converter

For each criterion, ranks are assigned so that rank 1 corresponds to the best value and the last rank to the worst value. The weighted preference value for each criterion column is then computed as:

${{F}_{ij}}={{r}_{ij}}\times {{w}_{j}}$

where ${{r}_{ij}}$ denotes the rank of alternative $i$ for criterion $j$.

Note: Instead of a normalization technique, this ranking-based transformer (data converter) is used, particularly within the FUCA method.

Z-Score

${{n}_{ij}}=\frac{{{x}_{ij}}-{{\mu }_{j}}}{{{\sigma }_{j}}}=\frac{{{x}_{ij}}-\frac{\mathop{\sum }_{i=1}^{m}{{x}_{ij}}}{m}}{\sqrt{\frac{\mathop{\sum }_{i=1}^{m}{{\left( {{x}_{ij}}-{{\mu }_{j}} \right)}^{2}}}{m}}}~~~~\text{for benefit criteria};~~~~{{n}_{ij}}=-\frac{{{x}_{ij}}-{{\mu }_{j}}}{{{\sigma }_{j}}}~~~~\text{for cost criteria}$

Z-score refers to the measurement of the standard deviation of a value from the mean of a given distribution.

Decimal

This technique involves shifting the decimal point of the values within a series. The specific shift depends on the number of digits present in the maximum value of the series. By employing decimal scaling, the series is transformed into a normalized form, where all values fall within the range of 0 to 1. The number of decimal places shifted is determined by the number of digits in the maximum value (denoted as 'd'):

${{F}_{ij}}=\frac{{{f}_{ij}}}{{{10}^{d}}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\};~j\in \left\{ 1,2,\ldots ,n \right\}$

Weighted Methods

Entropy

Normalize the first decision matrix:

${{F}_{ij}}=\frac{{{f}_{ij}}}{\mathop{\sum }_{k=1}^{m}{{f}_{kj}}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\};~j\in \left\{ 1,2,\ldots ,n \right\}$

Compute the entropy of each criterion's values:

${{E}_{j}}=-\frac{1}{\ln \left( m \right)}\mathop{\sum }_{i=1}^{m}({{F}_{ij}}\ln {{F}_{ij}})~~~~~~j\in \left\{ 1,2,\ldots ,n \right\}$

Define the weight of each criterion:

${{w}_{j}}=\frac{1-{{E}_{j}}}{\mathop{\sum }_{j=1}^{n}\left( 1-{{E}_{j}} \right)}~~~~~~~~~~j\in \left\{ 1,2,\ldots ,n \right\}$

SD (Standard Deviation)

${{F}_{ij}}=\frac{{{f}_{ij}}-mi{{n}_{i\in m}}{{f}_{ij}}}{ma{{x}_{i\in m}}{{f}_{ij}}-mi{{n}_{i\in m}}{{f}_{ij}}}~~~~~\text{for benefit criteria}$

${{F}_{ij}}=\frac{ma{{x}_{i\in m}}{{f}_{ij}}-{{f}_{ij}}}{ma{{x}_{i\in m}}{{f}_{ij}}-mi{{n}_{i\in m}}{{f}_{ij}}}~~~~~\text{for cost criteria}$

Compute the standard deviation of each criterion's values:

${{\sigma }_{j}}=\sqrt{\frac{\mathop{\sum }_{i=1}^{m}{{({{F}_{ij}}-\overline{{{F}_{j}}})}^{2}}}{m}}~~~~~j\in \left\{ 1,2,\ldots ,n \right\}$

The weight of each criterion is then proportional to its standard deviation: ${{w}_{j}}=\frac{{{\sigma }_{j}}}{\mathop{\sum }_{k=1}^{n}{{\sigma }_{k}}}$

CRITIC Weighted Method (Criteria Importance Through Intercriteria Correlation)

Phase 1: $m$ is the number of rows (alternatives) and $n$ is the number of columns (criteria);

${{F}_{ij}}=\frac{{{f}_{ij}}-mi{{n}_{i\in m}}{{f}_{ij}}}{ma{{x}_{i\in m}}{{f}_{ij}}-mi{{n}_{i\in m}}{{f}_{ij}}} i\in \left\{ 1,2,\ldots ,m \right\}; j\in \left\{ 1,2,\ldots ,n \right\}~\text{for benefit objectives}$

${{F}_{ij}}=\frac{ma{{x}_{i\in m}}{{f}_{ij}}-{{f}_{ij}}}{ma{{x}_{i\in m}}{{f}_{ij}}-mi{{n}_{i\in m}}{{f}_{ij}}} i\in \left\{ 1,2,\ldots ,m \right\}; j\in \left\{ 1,2,\ldots ,n \right\} \text{for cost objectives}$

Phase 2: A pairwise correlation matrix is formulated to assess the relationship between the criteria.

${{\rho }_{jk}}=\frac{\mathop{\sum }_{i=1}^{m}({{F}_{ij}}-\overline{{{F}_{j}}})\left( {{F}_{ik}}-\overline{{{F}_{k}}} \right)}{\sqrt{\mathop{\sum }_{i=1}^{m}{{({{F}_{ij}}-\overline{{{F}_{j}}})}^{2}}}\sqrt{\mathop{\sum }_{i=1}^{m}{{({{F}_{ik}}-\overline{{{F}_{k}}})}^{2}}}}~~~~~j,k\in \left\{ 1,2,\ldots ,n \right\}$

Phase 3: The standard deviation of each criterion is determined.

${{\sigma }_{j}}=\sqrt{\frac{\mathop{\sum }_{i=1}^{m}{{({{F}_{ij}}-\overline{{{F}_{j}}})}^{2}}}{m}}~~~~~j\in \left\{ 1,2,\ldots ,n \right\}$

Here, $\overline{{{F}_{j}}}=\frac{1}{m}\underset{i=1}{\overset{m}{\mathop \sum }}\,{{F}_{ij}}$ is the average of the $j$th normalized column. Subsequently, the weight coefficient of each criterion is determined as follows.

${{c}_{j}}={{\sigma }_{j}}\mathop{\sum }_{k=1}^{n}(1-{{\rho }_{jk}})~~~~~~~~~~j\in \left\{ 1,2,\ldots ,n \right\}$ ${{w}_{j}}=\frac{{{c}_{j}}}{\mathop{\sum }_{k=1}^{n}{{c}_{k}}}~~~~~~j\in \left\{ 1,2,\ldots ,n \right\}$

Equal

Mean/Equal Weighting Method: The equal weighting method assigns uniform weights to each criterion, operating under the assumption that all criteria hold equal significance. Thus, it treats all $n$ criteria with equal importance by assigning them identical weight coefficients: ${{w}_{j}}=\frac{1}{n}~~~~j\in \left\{ 1,2,\ldots ,n \right\}$

Statistical Method Used

The Spearman rank correlation coefficient evaluates the strength and direction of the relationship between two variables by comparing their rankings rather than their raw data values:

${{r}_{s}}=1-\frac{6\mathop{\sum }\,d_{i}^{2}}{n\left( {{n}^{2}}-1 \right)}$. Here, ${{r}_{s}}$ denotes Spearman's rho coefficient, ${{d}_{i}}$ is the difference between the two rankings for alternative $i$, and $n$ is the total number of alternatives considered in the formula.

Source: [25], [26], [27], [28], [29]
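To make the converters and weighting schemes in Table 2 concrete, the sketch below implements two of the normalization methods (vector and min-max, benefit form) and the Entropy and Equal weighting methods in Python with NumPy. It is a minimal illustration, not the authors' code; the toy decision matrix is hypothetical and assumes strictly positive, benefit-type data.

```python
import numpy as np

def vector_normalize(F):
    """Vector normalization: divide each column by its Euclidean norm."""
    return F / np.sqrt((F ** 2).sum(axis=0))

def min_max_normalize(F):
    """Min-max normalization for benefit criteria: scale each column to [0, 1]."""
    return (F - F.min(axis=0)) / (F.max(axis=0) - F.min(axis=0))

def entropy_weights(F):
    """Entropy weighting: sum-normalize columns, compute entropy E_j, weight by 1 - E_j."""
    P = F / F.sum(axis=0)
    m = F.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)
    return (1 - E) / (1 - E).sum()

def equal_weights(n):
    """Equal (mean) weighting: w_j = 1/n for every criterion."""
    return np.full(n, 1.0 / n)

# Hypothetical decision matrix: 4 alternatives x 3 benefit criteria.
F = np.array([[8.0, 1500.0, 2560.0],
              [12.0, 1750.0, 5888.0],
              [24.0, 1313.0, 16384.0],
              [4.0, 1290.0, 768.0]])
print(vector_normalize(F).round(3))
print(entropy_weights(F).round(3), equal_weights(3).round(3))
```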
2.3 MCDM Method: Preference Ranking On the Basis of Ideal-Average Distance (PROBID) Method

The study utilized Multi-Criteria Decision Making (MCDM) methods, specifically focusing on PROBID. Wang et al. [30] devised the PROBID approach, employing a methodology similar to distance-based methods. Below are the equations of the PROBID method:

Phase 1. Using the vector normalization method, the raw data are converted into a decision matrix with $m$ rows (alternatives) and $n$ columns (criteria).

${{F}_{ij}}=\frac{{{f}_{ij}}}{\sqrt{\mathop{\sum }_{k=1}^{m}{{f}_{kj}}^{2}}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\};~j\in \left\{ 1,2,\ldots ,n \right\}$
(1)

Phase 2. The weighted decision matrix is derived by multiplying each column by a specific weight coefficient:

${{v}_{ij}}={{F}_{ij}}\times {{w}_{j}}~~~~i\in \left\{ 1,2,\ldots ,m \right\};~j\in \left\{ 1,2,\ldots ,n \right\}$
(2)

Phase 3. The ideal solutions are determined: the most positive ideal solution (PIS) $({{A}_{\left( 1 \right)}})$, the 2nd PIS $\left( {{A}_{\left( 2 \right)}} \right)$, the 3rd PIS $({{A}_{\left( 3 \right)}})$, …, up to the $m$th PIS $({{A}_{\left( m \right)}})$, which is the most negative ideal solution (NIS).

${{A}_{\left( k \right)}}=\left\{ ~\left( Large\left( {{v}_{j}},k \right)\text{ }\!\!|\!\!\text{ }j\in J \right),~~\left( Small\left( {{v}_{j}},k \right)\text{ }\!\!|\!\!\text{ }j\in {J}' \right)~ \right\}=\left\{ {{v}_{\left( k \right)1}},~{{v}_{\left( k \right)2}},{{v}_{\left( k \right)3}},~\ldots ,{{v}_{\left( k \right)j}},\ldots ,~{{v}_{\left( k \right)n}} \right\}$
(3)

where $k\in \left\{ 1,2,\ldots ,m \right\}$, $J$ is the set of benefit criteria from $\{1, 2, 3, 4, \ldots, n\}$, ${J}'$ is the set of cost criteria from $\{1, 2, 3, 4, \ldots, n\}$, $Large\left( {{v}_{j}},k \right)$ denotes the $k$th largest value in the $j$th weighted normalized column (i.e., ${{v}_{j}}$), and $Small\left( {{v}_{j}},k \right)$ denotes the $k$th smallest value in the $j$th weighted normalized column. After that, the mean value of each column is calculated.

${{\bar{v}}_{j}}=\frac{\mathop{\sum }_{k=1}^{m}{{v}_{\left( k \right)j}}}{m}~~~~~\text{for }\!\!~\!\!\text{ }~j\in \left\{ 1,2,\ldots ,n \right\}$
(4)

The average solution is then:

$\bar{A}=\left\{ {{{\bar{v}}}_{1}},~{{{\bar{v}}}_{2}},~{{{\bar{v}}}_{3}},\ldots ,{{{\bar{v}}}_{j}},~\ldots ,~{{{\bar{v}}}_{n}}~ \right\}$
(5)

Phase 4. Compute the Euclidean distance of every alternative to each ideal solution as well as to the average solution:

${{S}_{i\left( k \right)}}=\sqrt{\underset{j=1}{\overset{n}{\mathop \sum }}\,{{\left( {{v}_{ij}}-{{v}_{\left( k \right)j}} \right)}^{2}}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\};~k\in \left\{ 1,2,\ldots ,~m \right\}$
(6)

Afterwards, the distance to the average solution is calculated as:

${{S}_{i\left( avg \right)}}=\sqrt{\underset{j=1}{\overset{n}{\mathop \sum }}\,{{\left( {{v}_{ij}}-{{{\bar{v}}}_{j}} \right)}^{2}}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\}$
(7)

Phase 5. In this phase, the overall positive-ideal distance, i.e., the weighted sum of the distances of an alternative to the first half of the ideal solutions, is found:

${{S}_{i\left( PIS \right)}}=\left\{ \begin{matrix} \underset{k=1}{\overset{\frac{m+1}{2}}{\mathop \sum }}\,\frac{1}{k}{{S}_{i\left( k \right)}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\}~\text{when}~m~\text{is odd} \\ \underset{k=1}{\overset{\frac{m}{2}}{\mathop \sum }}\,\frac{1}{k}{{S}_{i\left( k \right)}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\}~\text{when}~m~\text{is even} \\ \end{matrix} \right.$
(8)

Similarly, the overall negative-ideal distance is defined as the weighted sum of the distances of an alternative to the second half of the ideal solutions:

${{S}_{i\left( NIS \right)}}=\left\{ \begin{matrix} \underset{k=\frac{m+1}{2}}{\overset{m}{\mathop \sum }}\,\frac{1}{m-k+1}{{S}_{i\left( k \right)}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\}~\text{when}~m~\text{is odd} \\ \underset{k=\frac{m}{2}+1}{\overset{m}{\mathop \sum }}\,\frac{1}{m-k+1}{{S}_{i\left( k \right)}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\}~\text{when}~m~\text{is even} \\ \end{matrix} \right.$
(9)

In Eq. (9), the weight increases as $k$ increases toward $m$ (i.e., toward the most negative ideal solution). The overall positive-ideal and negative-ideal distances of each alternative ($i=1,2,\ldots ,m$) are thus computed by Eqs. (8) and (9), respectively.

Phase 6. Compute the PIS/NIS ratio (${{R}_{i}}$) and then the performance score (${{P}_{i}}$) of each alternative as follows:

${{R}_{i}}=\frac{{{S}_{i\left( PIS \right)}}}{{{S}_{i\left( NIS \right)}}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\}$
(10)
${{P}_{i}}=\frac{1}{1+R_{i}^{2}}+{{S}_{i\left( avg \right)}}~~~~~i\in \left\{ 1,2,\ldots ,m \right\}$
(11)

The further an alternative is from the NIS and the nearer it is to the PIS, the higher its performance score ${{P}_{i}}$. The alternative with the highest ${{P}_{i}}$ is recommended to the decision-maker.
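For readers who prefer code to equations, the following is a minimal sketch of the PROBID phases in Eqs. (1)-(11) above in Python/NumPy. It is our illustrative reading of the method, not the authors' implementation; the `benefit` flags and the toy call are assumptions for demonstration (all six graphics card criteria in this study are benefit-type).

```python
import numpy as np

def probid(f, w, benefit):
    """Illustrative PROBID score, following Eqs. (1)-(11); larger P_i is better."""
    f, w = np.asarray(f, float), np.asarray(w, float)
    m, n = f.shape
    # Phases 1-2: vector normalization, then weighting
    v = (f / np.sqrt((f ** 2).sum(axis=0))) * w
    # Phase 3: k-th ideal solutions A_(k); k=1 is the most positive ideal solution
    A = np.empty((m, n))
    for j in range(n):
        col = np.sort(v[:, j])
        A[:, j] = col[::-1] if benefit[j] else col   # descending for benefit, ascending for cost
    v_bar = A.mean(axis=0)                           # average solution, Eqs. (4)-(5)
    # Phase 4: Euclidean distances to each A_(k) and to the average solution
    S = np.sqrt(((v[:, None, :] - A[None, :, :]) ** 2).sum(axis=2))   # S[i, k-1] = S_i(k)
    S_avg = np.sqrt(((v - v_bar) ** 2).sum(axis=1))
    # Phase 5: weighted distances to the first / second half of the ideal solutions
    k = np.arange(1, m + 1)
    half = (m + 1) // 2                              # upper limit of the PIS sum (same value for odd/even m)
    lo = half - 1 if m % 2 else half                 # start index of the NIS sum, Eq. (9)
    S_pis = (S[:, :half] / k[:half]).sum(axis=1)     # Eq. (8)
    S_nis = (S[:, lo:] / (m - k[lo:] + 1)).sum(axis=1)
    # Phase 6: PIS/NIS ratio and performance score
    R = S_pis / S_nis
    return 1.0 / (1.0 + R ** 2) + S_avg

# Hypothetical example: 4 cards x 3 benefit criteria, equal weights
scores = probid([[8, 1500, 2560], [12, 1750, 5888], [24, 1313, 16384], [4, 1290, 768]],
                w=[1/3, 1/3, 1/3], benefit=[True, True, True])
print(scores.round(4))   # rank alternatives by descending score
```

Running such a function with each of the seven converters and four weighting vectors would, in principle, reproduce the 28 rankings per problem examined below.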

3. Application

This study aims to test, with an innovative sensitivity analysis, whether it is possible to understand the confirmability and robustness of choosing the best graphics card among the alternatives with an MCDM method. As is known, “sensitivity analysis" is frequently used in the evaluation of MCDM methods, and in this study we questioned the “stability" approach of the sensitivity analysis methodology. Moreover, while classical sensitivity analysis focuses heavily on whether the ranking position of the “best” alternative has changed, we thought it was more convincing to focus on the sensitivity of the entire ranking. We also thought that working with many alternatives was more useful for understanding sensitivity. In this study, we focused on the selection of a graphics card among 50 alternatives evaluated on 6 criteria, using the methods whose formulas and details are given in the Method and Material section. One MCDM method, four weight coefficient assignment methods, and seven normalization methods are adopted to capture the impact of the MCDM components in a comprehensive view for sensitivity analysis. Thus, when we changed an input parameter, we determined the degree to which the results were affected through the Spearman rank correlation, a statistical method; that is, we calculated a rank correlation between the MCDM results and the graphics card price rankings. We obtained the performance criteria data for the graphics cards from “https://www.epey.com" [31], an open-access commercial website. We wanted to see statistically how much the overall final ranking could change when an input parameter changes, to provide more solid evidence of sensitivity. In other words, we performed the sensitivity analysis of the entire ranking by comparing it with a fixed reference ranking. The basis of the innovative sensitivity model here is the comparison of the rankings produced by an MCDM method with the fixed “price" ranking.

Table 4 below presents the Spearman correlation results between PROBID, an MCDM method, and the prices of the graphics card alternatives. Thus, the effect (or sensitivity) of both the weighting and the normalization methods on the PROBID method can be seen at the same time. The analysis can be interpreted by keeping either the weighting method (along a column) or the normalization technique (along a row) constant. The numerical values in the table are Spearman rank correlation values, which express the relationship of the different MCDM rankings with price. The StDv row and column give the standard deviation of these correlation values and express the sensitivity effect proposed in this study; the Mean row and column give the arithmetic average of the correlation values, which gives an idea about the strength of the produced rankings. The first measure captures sensitivity through the standard deviation; the second provides information about the level of the relationship between performance-based MCDM and price, which indicates whether the external relationships of an MCDM ranking break down.
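As a concrete illustration of this evaluation step, the sketch below computes Spearman's rho between a hypothetical PROBID score vector and a hypothetical price vector using SciPy; the numbers are made up for demonstration and are not the study's data.

```python
from scipy.stats import spearmanr

# Hypothetical PROBID scores and prices for five cards (illustrative values only)
probid_scores = [0.91, 0.70, 0.43, 0.12, 0.05]
prices        = [1800, 1200, 800, 150, 300]   # the fixed external reference

rho, p_value = spearmanr(probid_scores, prices)
print(f"Spearman rho = {rho:.3f}")            # here 0.900: the ranking closely tracks price
```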

In Table 3 below, the weight coefficients assigned to the criteria according to the methods can be seen.

Table 3. Weight coefficients of criteria assigned according to methods

| Criteria | Benefit or Cost | Entropy | SD | CRITIC | Equal |
|---|---|---|---|---|---|
| Memory size | Max | 0.212866 | 0.126864 | 0.121205 | 0.166667 |
| Memory speed (MHz) | Max | 0.036408 | 0.143549 | 0.307620 | 0.166667 |
| GPU cores | Max | 0.351013 | 0.178710 | 0.168489 | 0.166667 |
| Memory interface width (bits) | Max | 0.056924 | 0.197264 | 0.146601 | 0.166667 |
| Memory bandwidth (GB/s) | Max | 0.204516 | 0.181304 | 0.121677 | 0.166667 |
| Graphics card power | Max | 0.138273 | 0.172310 | 0.134408 | 0.166667 |

In Table 4 below, the impact of normalization and weighting techniques on PROBID's external relations and the general sensitivity (standard deviation) level of PROBID in different scenarios can be clearly seen.

Table 4. Effect of normalization and weighting methods on PROBID MCDM method

| Normalization | Entropy | Equal | SD | CRITIC | StDv | Mean |
|---|---|---|---|---|---|---|
| Sum | 0.879 | 0.883 | 0.885 | 0.856 | 0.011605 | 0.875750 |
| Vector | 0.876 | 0.875 | 0.877 | 0.850 | 0.011281 | 0.869500 |
| Min-Max | 0.867 | 0.870 | 0.868 | 0.741 | 0.055148 | 0.836500 |
| Max | 0.861 | 0.857 | 0.868 | 0.793 | 0.030136 | 0.844750 |
| Rank Based | 0.421 | 0.514 | 0.524 | 0.413 | 0.051201 | 0.468 |
| Decimal | 0.870 | 0.860 | 0.853 | 0.805 | 0.024990 | 0.847 |
| Z-Score | 0.647 | 0.617 | 0.648 | 0.452 | 0.081213 | 0.591 |
| StDv | 0.16370 | 0.14007 | 0.1329 | 0.17404 | | |
| Mean | 0.77442 | 0.78228 | 0.7890 | 0.70142 | | |

According to the table above, the top row is interpreted as follows. “Sum normalization” was kept constant for PROBID while the Entropy, Equal, SD, and CRITIC weighting methods were applied separately; the obtained correlations with price were 87.9%, 88.3%, 88.5%, and 85.6%, respectively. Their standard deviation is 0.011605 and their mean is 0.87575. When we apply the same procedure to each row, we obtain different rankings and correlations. Looking at the correlation values with the fixed price reference across all rows, it can be seen that the Sum and Vector techniques have low standard deviations while simultaneously producing high correlations. This is the first piece of evidence that a low-sensitivity MCDM ranking also correlates well with price. In fact, this situation (low sensitivity together with high correlation with price) resembles pattern matching based on data analytics. On the other hand, if we read Table 4 column by column with the same approach, this time we see the effects of normalization on the MCDM method. Keeping the “Entropy weighting method” constant, the extent to which the normalization types affect the relationship with price can be seen from the correlation results 87.9%, 87.6%, 86.7%, 86.1%, 42.1%, 87.0%, and 64.7%. This evaluation is repeated for all columns. When we finally evaluate the results, the column standard deviations are 0.16370, 0.14007, 0.13290, and 0.17404, and the average correlation values are 0.77442, 0.78228, 0.78900, and 0.70142. In other words, the best weighting technique is the SD method, which reduces MCDM sensitivity, as can be understood from its average correlation of 0.789.

According to Table 4, it is understood that under conditions where sensitivity is high, the relationship with price decreases, and vice versa. On the other hand, the most efficient ecosystem, in which the highest correlation in the matrix (88.5%) is obtained, is the Sum/SD-PROBID combination. In other words, the two techniques that showed the best normalization and weighting performance were combined with PROBID to produce the best correlation, a result in line with expectations. Moreover, according to Table 4, the CRITIC method, which has the highest sensitivity, produced the lowest correlation on the overall average. As stated above, the SD method produced the highest relationship on average and was also the method with the lowest sensitivity. Rank Based and Z-Score (interestingly, Min-Max and Z-Score are almost indispensable technical choices in artificial intelligence applications) produced the lowest correlations and the highest sensitivity among the MCDM rankings.
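The StDv and Mean cells of Table 4 can be reproduced directly from the correlation values; the short sketch below (using the Sum and Vector rows as examples, weighting order Entropy, Equal, SD, CRITIC) shows the computation, assuming the population standard deviation is used.

```python
import numpy as np

# Correlations with price from Table 4 (weighting order: Entropy, Equal, SD, CRITIC)
corr = {
    "Sum":    [0.879, 0.883, 0.885, 0.856],
    "Vector": [0.876, 0.875, 0.877, 0.850],
}
for norm, r in corr.items():
    r = np.array(r)
    # np.std uses the population formula by default, matching the StDv column up to rounding
    print(f"{norm}: StDv = {r.std():.5f}, Mean = {r.mean():.6f}")
```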

The “Gigabyte GeForce RTX 4090 Gaming" alternative ranks first under the SD-Sum-PROBID combination, the MCDM ranking that produces the best correlation with price (88.5%) among all the MCDM rankings (Table 5).

Table 5. The scores and rankings of the alternatives

| Alternatives | Score | Rank | Alternatives | Score | Rank |
|---|---|---|---|---|---|
| Afox GeForce GTX 1660 | 0.048679 | 39 | MSI GeForce RTX 4090 Ventus 3X 24G OC | 0.907442 | 3 |
| ASRock Radeon RX 6700 XT Challenger D 12GB OC | 0.157482 | 28 | MSI Radeon RX 6750 XT Mech 2X 12G OC | 0.198831 | 27 |
| Asus Cerberus GeForce GTX 1050 Ti OC 4G (1341 MHz) | 0.023327 | 42 | MSI Radeon RX 7900 XT Gaming Trio Classic 20G | 0.628142 | 19 |
| Asus Dual GeForce GTX 1660 Super OC Edition 6GB GDDR6 EVO | 0.058592 | 38 | NVIDIA Quadro P1000 | 0.012070 | 45 |
| Asus Phoenix GeForce GTX 1050 Ti | 0.023327 | 43 | NVIDIA Quadro RTX 4000 | 0.118828 | 32 |
| Asus ROG Strix GeForce RTX 4080 16GB GDDR6X White | 0.699804 | 13 | Palit GeForce RTX 4090 GameRock | 0.907353 | 6 |
| Asus ROG Strix Radeon RX 570 OC 8G | 0.077457 | 37 | PNY GeForce RTX 3080 10GB XLR8 Gaming Revel Epic-X RGB Triple Fan LHR | 0.678167 | 16 |
| Colorful GeForce RTX 4070 Ti NB EX-V | 0.426456 | 22 | PNY GeForce RTX 4070 Verto | 0.276108 | 26 |
| Colorful GeForce RTX 4080 16GB NB EX-V | 0.699804 | 14 | PNY GeForce RTX 4080 16GB TF Verto Edition | 0.705477 | 12 |
| Gainward GeForce RTX 4070 Ti Panther | 0.426456 | 23 | PNY Quadro P2000 | 0.026082 | 41 |
| Gainward GeForce RTX 4080 Phantom | 0.708727 | 11 | PNY RTX A5500 | 0.792270 | 9 |
| Galax GeForce RTX 4090 SG 1-Click OC | 0.907353 | 5 | PNY RTX A6000 | 0.847710 | 8 |
| Gigabyte GeForce RTX 4080 16GB Aero | 0.699804 | 15 | PNY Tesla K20 | 0.128552 | 31 |
| Gigabyte GeForce RTX 4090 Gaming | 0.907442 | 1 | PowerColor Red Devil Radeon RX 580 8GB GDDR5 Golden | 0.107690 | 34 |
| HP Radeon Pro W6800 | 0.456869 | 21 | PowerColor Red Devil Radeon RX 7900 XT 20GB GDDR6 | 0.628142 | 20 |
| HP RTX A2000 12GB | 0.081063 | 36 | Quadro GeForce GT 730 4G D3L | 0.009510 | 47 |
| HP RTX A6000 | 0.847710 | 7 | Quadro GeForce GTX 1050 Ti | 0.022962 | 44 |
| iGame GeForce RTX 4070 Ti Ultra W OC-V | 0.276108 | 25 | Quadro Radeon R7 240 2GD5 | 0.010188 | 46 |
| Inno3D GeForce RTX 2060 Super Twin x2 OC | 0.13665 | 30 | Sapphire Nitro+ Radeon RX 580 8GD5 Special Edition (8 GB / 1430 MHz) | 0.146087 | 29 |
| Intel Arc A770 | 0.331523 | 24 | Sapphire Nitro+ Radeon RX 7900 XT Vapor-X | 0.651587 | 17 |
| Lenovo Quadro K420 2GB | 0.008938 | 49 | Sapphire Pulse Radeon RX 6700 | 0.114952 | 33 |
| Lenovo RTX A5000 | 0.727075 | 10 | Sapphire Pulse Radeon RX 7900 XT 20G | 0.635927 | 18 |
| MSI GeForce GTX 1660 Ti Ventus XS OC (1830 MHz) | 0.043116 | 40 | Seclife GeForce GT 730 2GB | 0.007957 | 50 |
| MSI GeForce RTX 2060 Gaming Z | 0.081317 | 35 | Seclife GeForce GT 730 4GB | 0.009270 | 48 |
| MSI GeForce RTX 4090 Suprim 24G | 0.907442 | 2 | Zotac Gaming GeForce RTX 4090 Trinity | 0.907442 | 4 |

In many MCDM problems, sensitivity analysis results are incorrectly generalized. For example, authors introducing a new MCDM method perform a sensitivity analysis to demonstrate the robustness of the method, ignoring the fact that these sensitivity analysis results are valid only for a specific problem. Although the fundamental equation of an MCDM method is fixed, the weighting method, normalization type, and other parameters vary across different data and problems, and with this change the sensitivity of the MCDM method also changes. For example, using CRITIC on one data type may increase sensitivity, while using Entropy on another data type may decrease it. Sensitivity may also decrease or increase depending on the type of normalization. So, in fact, the fundamental equation of an MCDM method is not the sole determinant of sensitivity; the chosen combination is. There can be many MCDM combinations, and the normalization, weighting, and fundamental equation components are the key parameters that determine sensitivity. Although our main goal is to choose a graphics card, in this study we also test our pattern-matching model on a completely different problem and data type. Below, we determine the best year using the PROBID method, based on ten years of economic performance criteria for the Türkiye economy. The sensitivity analysis results below can be examined by following the same procedure as for the graphics card above.

The analysis findings in Table 6 parallel the previous findings. The low standard deviation and high correlation performance of the Sum and Vector normalization types are again symptoms of low sensitivity and a strong MCDM ranking. On the other hand, the Min-Max and Rank-Based converters produce high standard deviations and low average correlations with the reference, showing that these data converters negatively affect the sensitivity of MCDM and therefore its relationship performance in general. We can also look at how the weight coefficient assignment methods affect sensitivity and performance. In general, the Entropy method provides the lowest standard deviation and the highest correlation-generation performance. The CRITIC method appears to have negatively affected the performance of the PROBID method by producing the highest standard deviation and the lowest correlation, showing a similar trend as in the previous data and problem. On the other hand, the highest single correlation among the 28 PROBID rankings belongs to the Rank Based-Entropy-PROBID combination. Although we propose the low sensitivity-high correlation pattern here, it should be noted that this is a general clustering rule: a good ranking arises from a good combination, but the best alternative may not arise from that combination. While the best ranking combination is Sum- or Vector-Entropy-PROBID, the ranking containing the best alternative belongs to the Rank Based-Entropy-PROBID combination. In the previous problem, the ecosystem with the best ranking was SD-Sum-PROBID, and the best alternative emerged from that ranking. This shows us that the best ranking and the best alternative are located very close to each other, but there is no guarantee that they will be in the same position.

Table 6. The effect of normalization and weighting methods on the PROBID MCDM method for Türkiye's economic performance by year

| Normalization | Entropy | Equal | SD | CRITIC | StDv | Mean |
|---|---|---|---|---|---|---|
| Sum | 0.745 | 0.782 | 0.782 | 0.806 | 0.02181 | 0.77870 |
| Vector | 0.745 | 0.794 | 0.782 | 0.806 | 0.02285 | 0.78170 |
| Min-Max | 0.697 | 0.067 | 0.067 | 0.176 | 0.26088 | 0.25170 |
| Max | 0.697 | 0.685 | 0.685 | 0.636 | 0.02346 | 0.67570 |
| Rank Based | 0.890 | 0.345 | 0.333 | 0.139 | 0.28506 | 0.43000 |
| Decimal | 0.794 | 0.794 | 0.709 | 0.673 | 0.05305 | 0.74250 |
| Z-Score | 0.879 | 0.782 | 0.806 | 0.152 | 0.29245 | 0.65470 |
| StDv | 0.07686 | 0.26659 | 0.26267 | 0.29039 | | |
| Mean | 0.78000 | 0.60700 | 0.59485 | 0.48400 | | |

Note: This MCDM problem includes 6 economic criteria and 10 alternative years. The criteria are as follows: Exports of goods and services (US\$), Imports of goods and services (US\$), Inflation, consumer prices (annual %), Interest payments (% of expense), Total reserves (includes gold, current US\$), and Unemployment, total (% of total labor force) (national estimate). We calculated the country's economic performance with MCDM and chose GDP per capita as the fixed reference point in this study. The results above show the correlations between the country's economic performance and GDP per capita. Source: World Bank data (https://data.worldbank.org/) [32]

4. Discussion

The sensitivity analysis procedure of our study differs from classical sensitivity analyses, which measure only the stability of MCDM methods, in the following ways:

• A key point in all our work was undoubtedly the “price", which we used as a fixed reference point. We produced a total of 56 different MCDM rankings for two different problems and obtained the correlations between these rankings and the reference. The standard deviation results showed that the MCDM rankings that produce the best relationship with price also have low sensitivity. As is known, classical sensitivity analysis has no reference point for MCDM results, and this can cause problems in terms of the direction of sensitivity. The claim that all kinds of sensitivity are negative is not true in an absolute sense.

• Classical sensitivity analysis is not based on the degree to which the entire ranking is affected and is more concerned with the position of a single alternative. The statistical approach we propose here is more convincing.

• Another issue is that we think that all MCDM components determine sensitivity. It is difficult to understand the sensitivity just by the weight coefficient. In this study, the effects of normalization and data type on sensitivity were also investigated.

• This study discovers that low sensitivity and high correlation are reasonable and general pattern matching for understanding the location of the best MCDM rankings. However, although the MCDM methods that produce the best rankings are methods with low sensitivity and high correlation with price, the best alternative may not be in an identical position with the best rankings.

• Results from data analytics showed that among the PROBID variants, the SD-Sum-PROBID combination produced the best available relationship. It is no coincidence that the best alternative appears in the SD and Sum combination that produces the best ranking. While CRITIC increases the sensitivity in weighting, SD and Entropy decrease it. The Z-Score method, which is frequently used in artificial intelligence studies, shows a mediocre performance. Each of the figures below clearly shows the impact of the weighting methods on the MCDM final results, i.e., the innovative sensitivity of the MCDM methods, according to the correlations they produce. The CRITIC method, while having high sensitivity, produced a low correlation with price, whereas the SD and Entropy methods had low sensitivity and produced a high correlation with price.

Figure 2 and Figure 3 show the levels of sensitivity created by the impact of the normalization and weighting methods on the MCDM final results. The Sum and Vector methods had low sensitivity and produced high correlations with price; likewise, the SD weighting method had low sensitivity and produced a high correlation with price. The converse is also true. These results provide strong evidence that there are pattern matches between sensitivity and performance.

Figure 2. Sensitivity production levels of PROBID according to normalization methods
Figure 3. Sensitivity production levels of PROBID according to weight coefficient assignment methods

5. Conclusions

When any input parameter of a system is changed, this may affect, that is change, the final results of the system to a certain extent. If this happens excessively, it causes inefficiency; in general, over-influence reduces the quality of any metric. For this reason, sensitivity analyses have been used in recent years as an important criterion for understanding the extent of this influence on MCDM results. Sensitivity analysis, often used in the sense of stability or confirmation, actually suggests and desires low sensitivity. However, since there is no reference point or direction for sensitivity, there is also confusion about which MCDM method is better, more robust, or more reliable. In this context, in this study focusing on the selection of a graphics card, MCDM final results were compared with the fixed “price" rankings of the same products, a fixed external reference. In other words, while the price order was kept constant, the change in MCDM rankings was observed. We focused on the entire ranking and approached sensitivity holistically. We tested 28 modified combinations of PROBID with 50 graphics card alternatives and 6 decision criteria. We observed the degree of sensitivity with different input parameters (4 weighting methods and 7 data transformation methods). In total, we examined the correlation relationships of the 28 different MCDM rankings with price. All the resulting MCDM rankings told us the same thing: in general, MCDM rankings produced a good relationship with price if they had low sensitivity, and the opposite was also true. We then tested our model again with a completely different data type and a different problem, and the results once again confirmed this pattern match. Moreover, we obtained strong evidence that the location of the best alternative should also be sought in clusters with low sensitivity and high correlation with price. These results are the unique discovery of our study.

Research Suggestions: Although sensitivity is considered negative, there are many positive examples of sensitivity in today's world. Positive and smart sensitivity solutions (like sensor technologies) for the MCDM methodology may also be developed in the future.

Author Contributions

Conceptualization, M.B. and M.K.; methodology, M.B. and Z.W.; validation, M.B. and Z.W.; formal analysis, M.B. and M.K.; investigation, M.B.; resources, M.B.; data curation, M.B.; writing—original draft preparation, M.B. and M.K.; writing—review and editing, M.B., Z.W., and M.K.; visualization, M.B. and M.K.; supervision, M.B. and Z.W. All authors have read and agreed to the published version of the manuscript. The relevant terms are explained in the CRediT taxonomy.

Data Availability

The data used to support the research findings are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References
1.
A. Durgut and Ö. Akçay, “Termal kamera ile ekran kartının 3 boyutlu modelinin oluşturulması,” Anadolu Univ. J. Sci. Tech. A-Appl. Sci. Eng., vol. 17, no. 1, pp. 51–63, 2016. [Google Scholar]
2.
G. Kızılbey, “Comparison and optimization of GPU performances on ethereum mining,” Master's thesis, Ankara Yildirim Beyazit University, Graduate School of Natural and Applied Sciences, Department of Electrical and Electronics Engineering, Ankara, 2019. [Online]. Available: https://acikerisim.ybu.edu.tr/xmlui/handle/20.500.12413/4138 [Google Scholar]
3.
A. P. de Almeida Rocha, G. Reynoso-Meza, R. C. Oliveira, and N. Mendes, “A pixel counting based method for designing shading devices in buildings considering energy efficiency, daylight use and fading protection,” Appl. Energy, vol. 262, p. 114497, 2020. [Google Scholar] [Crossref]
4.
“Ekran kartı nedir?,” ATOM BİLİŞİM BİLGİSAYAR, 2024. https://www.atombilisim.com.tr/ekran-karti-nedir [Google Scholar]
5.
“Ekran kartı ne İşe yarar? Performansa etkileri nelerdir?,” Casper Blog, 2024. https://www.casper.com.tr/blog/ekran-karti-ne-ise-yarar-performansa-etkileri-nelerdir [Google Scholar]
6.
M. Böyük, R. Duvar, and O. Urhan, “Deep learning based vehicle detection with images taken from unmanned air vehicle,” in 2020 Innovations in Intelligent Systems and Applications Conference (ASYU), Istanbul, Turkey, 2020, pp. 1–4. [Google Scholar] [Crossref]
7.
D. J. Bernstein, T. R. Chen, C. M. Cheng, T. Lange, and B. Y. Yang, “Ecm on graphics cards,” 2009, pp. 483–501. [Online]. Available: https://link.springer.com/chapter/10.1007/978-3-642-01001-9 28 [Google Scholar]
8.
“What is a graphics card?,” Business Insider, 2024. https://www.businessinsider.com/guides/tech/what-is-a-graphics-card [Google Scholar]
9.
Z. Wang, M. Baydaş, Ž. Stević, A. Özçil, S. A. Irfan, Z. Wu, and G. P. Rangaiah, “Comparison of fuzzy and crisp decision matrices: An evaluation on PROBID and sPROBID multi-criteria decision-making methods,” Demonstr. Math., vol. 56, no. 1, p. 20230117, 2023. [Google Scholar] [Crossref]
10.
M. Baydaş, T. Eren, Ž. Stević, V. Starčević, and R. Parlakkaya, “Proposal for an objective binary benchmarking framework that validates each other for comparing MCDM methods through data analytics,” PeerJ Comput. Sci., vol. 9, p. e1350, 2023. [Google Scholar] [Crossref]
11.
M. Baydaş, O. E. Elma, and Ž. Stević, “Proposal of an innovative MCDA evaluation methodology: Knowledge discovery through rank reversal, standard deviation, and relationship with stock return,” Financ. Innov., vol. 10, no. 1, pp. 1–35, 2024. [Google Scholar] [Crossref]
12.
H. Avunduk, G. Basmacı, and S. Genç, “Selection of the graphics card to be used in ethereum mining with linear BWM-TOPSIS,” Int. J. Contemp. Econ. Adm. Sci., vol. 11, no. 1, pp. 134–159, 2021. [Google Scholar] [Crossref]
13.
A. Lee, C. Yau, M. B. Giles, A. Doucet, and C. C. Holmes, “On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods,” J. Comput. Graph. Stat., vol. 19, no. 4, pp. 769–789, 2010. [Google Scholar] [Crossref]
14.
D. L. Cook, J. Ioannidis, A. D. Keromytis, and J. Luck, “CryptoGraphics: Secret key cryptography using graphics cards,” in CT-RSA’05: Proceedings of the 2005 International Conference on Topics in Cryptology, San Francisco CA. Springer Berlin Heidelberg, 2005, pp. 334–350. [Google Scholar]
15.
D. Komatitsch, D. Michéa, and G. Erlebacher, “Porting a high-order finite-element earthquake modeling application to NVIDIA graphics cards using CUDA,” J. Parallel Distrib. Comput., vol. 69, no. 5, pp. 451–460, 2009. [Google Scholar] [Crossref]
16.
O. E. Elma, Ž. Stević, and M. Baydaş, “An alternative sensitivity analysis for the evaluation of mcda applications: The significance of brand value in the comparative financial performance analysis of bist high-end companies,” Mathematics, vol. 12, no. 4, p. 520, 2024. [Google Scholar] [Crossref]
17.
E. Bakhtavar and S. Yousefi, “Assessment of workplace accident risks in underground collieries by integrating a multi-goal cause-and-effect analysis method with MCDM sensitivity analysis,” Stoch. Environ. Res. Risk Assess., vol. 32, no. 12, pp. 3317–3332, 2018. [Google Scholar] [Crossref]
18.
Ž. Stević, M. Subotić, E. Softić, and B. Božić, “Multi-criteria decision-making model for evaluating safety of road sections,” J. Intell. Manag. Decis., vol. 1, no. 2, pp. 78–87, 2022. [Google Scholar] [Crossref]
19.
“Memory size calculator.” https://calculator.academy/memory-size-calculator/ [Google Scholar]
20.
“Why do graphics cards have memory? VRAM explained.” https://computerinfobits.com/why-do-graphics-cards-have-memory/ [Google Scholar]
21.
C. Gerum, O. Bringmann, and W. Rosenstiel, “Source level performance simulation of gpu cores,” in 2015 Design, Automation Test in Europe Conference Exhibition (DATE), Grenoble, France, 2015, pp. 217–222. [Google Scholar] [Crossref]
22.
“CPU vs. GPU: What’s the difference?,” 2024. https://www.intel.com/content/www/us/en/products/docs/processors/cpu-vs-gpu.html [Google Scholar]
23.
A. Cosoroaba, “Memory interfaces made easy with xilinx fpgas and the memory interface generator,” Xilinx Corporation, White Paper 260, 2007. [Google Scholar]
24.
S. A. McKee, “Maximizing memory bandwidth for streamed computations,” Ph.D. dissertation, University of Virginia, 1995. [Google Scholar]
25.
F. T. Lima and V. M. Souza, “A large comparison of normalization methods on time series,” Big Data Research, vol. 34, p. 100407, 2023. [Google Scholar] [Crossref]
26.
Z. Wang, S. S. Parhi, G. P. Rangaiah, and A. K. Jana, “Analysis of weighting and selection methods for pareto-optimal solutions of multiobjective optimization in chemical engineering applications,” Ind. Eng. Chem. Res., vol. 59, no. 33, pp. 14850–14867, 2020. [Google Scholar] [Crossref]
27.
A. Aytekin, “Comparative analysis of the normalization techniques in the context of MCDM problems,” Decis. Mak.: Appl. Manag. Eng., vol. 4, no. 2, pp. 1–25, 2021. [Google Scholar] [Crossref]
28.
W. Sałabun and K. Urbaniak, “A new coefficient of rankings similarity in decision-making problems,” in Computational Science–ICCS 2020: 20th International Conference, Amsterdam, The Netherlands, 2020, pp. 632–645. [Google Scholar] [Crossref]
29.
M. Baydaş, T. Eren, and M. İyibildiren, “Normalization technique selection for MCDM methods: A flexible and conjunctural solution that can adapt to changes in financial data types,” Necmettin Erbakan Univ. Siyasal Bilg. Fak. Derg., vol. 5, pp. 148–164, 2023. [Google Scholar] [Crossref]
30.
Z. Wang, G. P. Rangaiah, and X. Wang, “Preference ranking on the basis of ideal-average distance method for multi-criteria decision-making,” Ind. Eng. Chem. Res., vol. 60, no. 30, pp. 11216–11230, 2021. [Google Scholar] [Crossref]
31.
Epey, “Ekran kartı fiyatları - epey - sayfa 3,” 2023. https://www.epey.com/ekran-karti/e/Tjtfczo5OiI0NTU3OkRFU0MiOw==/3/ [Google Scholar]
32.
“The World Bank,” 2023. https://data.worldbank.org/ [Google Scholar]

©2024 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.