This study designs a reconfigurable multi-cloud storage server architecture for dynamic and secure data sharing, improves the security of unstructured data using cryptographic index-based data slicing (CIBDS), and reduces the malicious insider risk through data encryption using the triple data encryption algorithm (3DEA). Focusing on the multi-cloud storage server (MCSS) and a data life cycle of three stages (data input, transition, and utilization), the authors determined the efficiency of reconfigurable data file slicing, standard format, privacy, and trustworthiness of the customers, in contrast to existing methods. Every part of a data file was encrypted using 3DEA, and Rivest-Shamir-Adleman (RSA) was employed to produce the private key that secures the unstructured data. The results show that the proposed framework effectively searches the data files in the MCSS based on tags, such as input file names and private keys. The performance of the framework was measured by the security level and the uploading/downloading latency time of our method against conventional methods, under different data sizes (in MB). Overall, our method reduces the malicious insider risk to 0.23% using 3DEA and RSA during data encryption, shortens the uploading/downloading latency time (s) by 10% and 12%, respectively, compared to the existing USDS-MC, and enhances unstructured data security by 12% in comparison with that method. In this way, the authors managed to improve the self-protection of reconfigurable and secure unstructured data files in huge cloud infrastructures. This research optimizes the data security and privacy of encryption, decryption, and cryptography technologies, and helps with the online process and its security maintenance during cloud storage.
The numerous wireless sensor networks (WSNs) and sensors installed in smart cities generate a significant amount of data that is saved in multiple cloud storage servers. For the purpose of security, it is particularly challenging to identify and categorize the various data formats on cloud servers. Thus, this paper proposes a framework that efficiently searches the MCSS data files using tags, such as the name of the received input file and private keys. The proposed and conventional approaches were compared for performance evaluation, in terms of security level and uploading/downloading latency time under varied data sizes (MB).
Numerous studies have been conducted over time to determine the best way to achieve information security and secrecy in cloud computing. As a way of enforcing information security in the cloud storage scheme, encryption enables explicit access authorization and cryptography. Sharing information demonstrates inclusivity in data security, particularly for cloud storage. However, this method cannot guarantee secure key distribution and management, and internal attacks that use a private cloud database are not tracked. This significantly lowers the procedure's efficiency. Some researchers presupposed that a data distribution process was at work in the cloud computing system, enabling a variety of data in addition to various clouds. However, the strategy fails to adequately take into account the distribution of keys in encrypted data channels. As a result, it could potentially compromise data integrity, something that typically occurs during the recovery process.
Figure 1 shows the system management of structured and unstructured data communication through integration with various cloud storage servers.
When the cloud becomes inaccessible, a common risk is that the techniques for data transfer are adversely affected. The use of cloud computing systems is a common method for data storage. As a result, the breaches that develop over time demand a response to information security threats.
Yau et al. devised a dynamic approach and four different types of algorithms to discuss the data security model for cloud computing in terms of cryptography and steganography. Additional switching operations were conducted on the transferred data, introducing differences from the original information received. These operations significantly lowered the procedure's efficiency. These researchers assumed that the data distribution process effectively enables a variety of data in addition to various clouds in the cloud computing system. This could potentially compromise data integrity, something that typically occurs during the recovery process. To increase the security of the cloud model in the medical industry, Manjunath et al. created an algorithm that makes materials readily accessible and effectively usable in the e-health system. However, the algorithm causes overlapping and slows down transfer rates from the source to the destination when utilized for a large volume of data stored in the cloud.
Triple DES (3DES) and RSA are two technologies for data encryption. However, one challenge in using the 3DES encryption approach is that it is vulnerable to a brute-force attack. Shivaji et al. adopted RSA encryption to increase the security and efficiency of 3DES dynamic file slicing, and proved that this approach helps to consistently encrypt consumer data. The structure of the approach allows for the use of shared encrypted keys in symmetric key cryptography.
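The hybrid pattern described above, in which a symmetric session key is protected with RSA, can be sketched with textbook RSA. This is a minimal toy of our own with tiny primes, not the paper's implementation, and is insecure by design:

```python
def make_rsa_keypair(p=61, q=53, e=17):
    """Textbook RSA key generation from two small (toy) primes."""
    n = p * q
    phi = (p - 1) * (q - 1)          # Euler's totient of n
    d = pow(e, -1, phi)              # private exponent (Python 3.8+)
    return (e, n), (d, n)            # (public key, private key)

def rsa_encrypt(m, public_key):
    e, n = public_key
    return pow(m, e, n)

def rsa_decrypt(c, private_key):
    d, n = private_key
    return pow(c, d, n)

# Wrap a stand-in symmetric key (an integer < n) with the public key.
pub, priv = make_rsa_keypair()
session_key = 1234                   # stand-in for a 3DES secret key
wrapped = rsa_encrypt(session_key, pub)
assert rsa_decrypt(wrapped, priv) == session_key
```

Production systems use large keys and padding schemes such as OAEP; the point here is only the key-wrapping structure that lets the symmetric cipher do the bulk encryption.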
In order to maintain a highly secure data storage service and integrate the various forms of encrypted data, Shekhawat et al. developed a model that decreases the capital expense and the communication procedure. However, it is exceedingly difficult to classify the different data packets processed by different algorithms. Bhadlawala and Chachapara implemented data sharing amongst cloud services to maximize cloud service consumption. However, each cloud retains a vast amount of data during communication on a path, making it very hard to classify the original data.
Therefore, the goal of the current paper is to develop a framework that overcomes the aforementioned limitations. The proposed design involves at least five cloud storage facilities. The proposed configuration makes use of dynamic file slicing to increase privacy. The record coding technique was adopted to enable the framework to maintain a higher level of data integrity. To prevent malicious insiders and additional risks to information, the security of the key sharing processes takes precedence. The proposed approach was proved to be the most effective way to maintain data security in a multi-cloud environment.
The remainder of this paper is organized as follows: Section 2 introduces the proposed approach; Section 3 defines the architecture outline, its constituents, and its functioning with algorithms; Section 4 elucidates the test results; Section 5 concludes the work and outlines future work.
Figure 2 shows the structural design of our approach. Through the interface, the data owner transmits the data file, an image, and the private key. In SRUD-MC, the recorded database is uploaded, and the slicing index and structure are used to locate the uploaded material. The encryption procedure is created using databases that were previously and currently stored on multiple cloud storage servers. Additionally, the private key is encrypted using RSA keys, and part of the RSA public key is given to the owner and to the cloud DB server, respectively. The decryption stage also uses a variety of techniques. Key details are supplied for the decryption and merging procedures after choosing the exact image. The secret key is used to decrypt shared files on the recipient's computer after looking up the file title. In particular, dynamic file sharing is the best active approach, for it includes all safe information transfer methods. The outline provides a clear direction for how the user should handle the file: file slicing happens before data encryption.
The SRUD-MC structure functions as a middleware to connect to the MCSS server. The following encryption techniques are used with the structure outline:
At the outset, the stated establishments are the owners of data, and are responsible for attaching the data to the structure through the framework interface. The security of data is emphasized during the transfer process. As a result, the client supplies the slice identity for the file, together with the secret key and image.
The process divides the shared file into several parts for easier encryption decisions. The shared file parts are directed to local server storage. The 3DES method is then used to encrypt the data by applying the cipher process. The private key is produced using the RSA algorithm, based on the most recent and older data that has been kept on the server.
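Under our reading of this step, slicing tags each part with its index so the later merge can restore the original order regardless of how the parts arrive. A minimal dependency-free sketch (function names are ours, not the paper's):

```python
def slice_file(data: bytes, n: int) -> list[tuple[int, bytes]]:
    """Split `data` into n index-tagged slices of near-equal size."""
    size = -(-len(data) // n)                    # ceiling division
    return [(i, data[i * size:(i + 1) * size]) for i in range(n)]

def merge_slices(slices: list[tuple[int, bytes]]) -> bytes:
    """Reassemble slices by their index tags, whatever order they arrive in."""
    return b"".join(part for _, part in sorted(slices))

original = b"unstructured data shared across multiple cloud servers"
parts = slice_file(original, 5)
# The merge tolerates out-of-order arrival from different cloud servers.
assert merge_slices(list(reversed(parts))) == original
```

The index tag is what later lets each encrypted part be fetched from a different cloud and still be reassembled deterministically.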
The outline transfers the shared file to the multi-cloud server's storage. The file is then only accessible to authorized parties and is protected from any danger. File sharing makes possible the installation or bundling of viruses, worms, spyware, Trojan horses, and other harmful code into files. The likelihood of data being infected by a dangerous virus increases when these files are moved to a multi-cloud storage server. Peer-to-peer (P2P) networks tend to have a higher prevalence of such infections, because it is harder to determine whether a file's source is reliable, but organizations are still at risk. Thus, the secret key and file are uploaded via the framework interface utilizing the proposed SRUD-MC for data storage.
Multi-cloud storage server: This is a collection of various storage facilities linked through a single application interface.
Reception of data: The information holder provides such particulars to the receiver, granting them access to the data held on the server.
This stage consists of the following phases:
- The outline displays a popup whenever the receiver arrives following successful verification, allowing the consumer to choose the particular image for further processing.
- The first step is choosing an image.
- After choosing the image, the customer has the option to provide the system with important information. Finally, the file merging technique takes place.
The receiver obtains the whole data that the owner sent. The cryptographic index-based standard is a useful framework to ensure the security of information exchanged among various locations. When the file title and key are applied as input, the scheme runs an automatic process to merge the disparate parts of the file appropriately.
The architecture of the proposed approach is explained as follows:
Data owner: This refers to the person in charge of choosing an image to add extra security to the 3DES secret key and to the numerous slices of the shared file. After sharing the file, the outline breaks it up and adopts RSA key pair creation to encrypt the 3DES secret key before giving the key to the information holder. Figure 2 illustrates the proposed SRUD-MC architecture, and Table 1 lists the related abbreviations and descriptions.
- Local machine: This is the component in charge of handling temporary data storage for the shared encrypted files.
- Receiver machine: This device receives decrypted files from the multi-cloud server.
- Cloud monitoring server: This server monitors the high-privilege function activities of the provider and the customer. The super-admin of the cloud platform acts as the server manager.
- Cloud key management server: This server is in charge of keeping track of the encrypted and decrypted keys.
- Data receiver: This device reliably receives data sent by the owner. Nevertheless, the receiver must supply the file name and a key.
RSApvk: RSA private key
RSApk: RSA public key
Dn: Customer data name
n: Number of slices
S1, S2, S3, S4, ..., Sn: File slices without encrypted data
E(S1), E(S2), E(S3), E(S4), ..., E(Sn): File slices with encrypted data
Algorithm 1 examines the method whereby records are shared based on the customer-defined number, but are only allowed to be uploaded to different clouds and dynamic cloud storage services. This approach also uses the owner's machine storage for file uploading, index-dependent sharing, and encryption, aiming to protect against malicious content supplied by malicious users. RSA encryption is implemented to protect the private key and also resolve the key escrow issue. In the end, the global public key is acquired by the owner, a second factor is sent to the cloud database server, and all encrypted shared files are then stored on the multi-cloud server.
Algorithm 1: Data file slicing and encryption using SRUD-MC
- Input: Data file upload format → (.xpt, .jpg, .dicm, .pdf, video, etc.) and secret key → n, img.
- Upload the data file (D) and the user-identified secret key (Sk).
- Determine the data file size (Ds).
- Tag the nth index value for each slice of the data file, as defined by the user.
- Create the index-based data file slices (S1, S2, S3, S4, ..., Sn) with the original name extension and store them on the user's local machine.
- Initialize encryption of the sliced data using the RSA and 3DES techniques [utilizing RSApk and RSApvk].
- Encrypt each part of the sliced data files [E(S1), E(S2), E(S3), E(S4), ..., E(Sn)] from the local server to be stored in the MCSS.
- Store the encrypted data file [E(S1), E(S2), E(S3), E(S4), ..., E(Sn)] along with RSApk and RSApvk.
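The steps of Algorithm 1 can be sketched end to end as follows. To keep the example dependency-free, a keyed XOR stream stands in for the 3DES/RSA ciphers named above; the function names and the key-derivation detail are our assumptions, not the authors' code:

```python
import hashlib
from itertools import cycle, islice

def toy_cipher(data: bytes, secret: bytes, index: int) -> bytes:
    """Stand-in symmetric cipher (NOT 3DES): XOR with a per-slice
    keystream derived from the secret key and the slice index."""
    seed = hashlib.sha256(secret + index.to_bytes(4, "big")).digest()
    stream = islice(cycle(seed), len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

def slice_and_encrypt(data: bytes, secret: bytes, n: int):
    """Algorithm 1 sketch: slice the file into n tagged parts S1..Sn,
    then encrypt each part to E(S1)..E(Sn) before upload to the MCSS."""
    size = -(-len(data) // n)                    # ceiling division
    return [(i, toy_cipher(data[i * size:(i + 1) * size], secret, i))
            for i in range(n)]

encrypted = slice_and_encrypt(b"sample unstructured data file", b"Sk", 4)
assert len(encrypted) == 4                       # E(S1)..E(S4), index-tagged
```

In the paper's design, each encrypted part would then be distributed to a different cloud provider, with the key material protected by RSA rather than held alongside the data.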
The procedure for file decryption is described in Algorithm 2. The file name, image, and owner's public key are submitted in this stage by the legitimate document recipient. After recording every detail, the secret key is obtained from the cloud server through RSA decryption. The file names are looked up on the multi-cloud server and then decrypted sequentially on the basis of their indices before processing. The decrypted records are placed in the receiver's location and then concatenated according to their indices.
Algorithm 2: Data file decryption and merging using SRUD-MC
- Input: Consider the data file format (Img) with the file name without extension (.xpt, .jpg, .dicm, .pdf, video, etc.) and RSApk.
- Verify that the correct image was obtained.
- Enter the data file name (Dn) and public key (Pk).
- Associate the search for the data file name with each MCSS tagged directory S1, S2, S3, S4, ..., Sn and obtain the paths of the encrypted files (E(S1), E(S2), E(S3), E(S4), ..., E(Sn)).
- Obtain the user-defined Sk using Pk and PVk from the cloud servers.
- Decrypt all the encrypted data files using the Sk obtained from RSA decryption.
- Merge each sliced part of the decrypted data files [S1, S2, S3, S4, ..., Sn] from the MCSS provider to regenerate the original data file (Dn).
- Remove all encrypted and decrypted tags of each file stored in the respective service.
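The decryption-and-merging flow of Algorithm 2 can be sketched the same way. The XOR stand-in below (whose symmetry means the same routine both encrypts and decrypts) and all names are our own illustration, not the paper's implementation:

```python
import hashlib
from itertools import cycle, islice

def toy_cipher(data: bytes, secret: bytes, index: int) -> bytes:
    """Stand-in symmetric cipher (NOT 3DES); XOR is its own inverse,
    so the same routine both encrypts and decrypts a slice."""
    seed = hashlib.sha256(secret + index.to_bytes(4, "big")).digest()
    return bytes(a ^ b for a, b in zip(data, islice(cycle(seed), len(data))))

def decrypt_and_merge(encrypted_slices, secret: bytes) -> bytes:
    """Algorithm 2 sketch: decrypt each tagged part E(Si) with the
    recovered secret key Sk, then concatenate in index order to
    regenerate the original data file D."""
    return b"".join(toy_cipher(part, secret, i)
                    for i, part in sorted(encrypted_slices))

# Round trip: encrypt three tagged slices, then decrypt and merge them.
slices = list(enumerate([b"multi-", b"cloud ", b"data"]))
encrypted = [(i, toy_cipher(s, b"Sk", i)) for i, s in slices]
assert decrypt_and_merge(encrypted, b"Sk") == b"multi-cloud data"
```

Sorting on the index tags is what makes the merge order-independent, matching the paper's claim that slices fetched from different MCSS providers are concatenated correctly.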
The scheme's advantages include the distribution of keys through untrusted channels, the elimination of file storage in central delivery, the resolution of key escrow-related concerns, the supervision of keys in central monitoring services, and self-protection from malicious files during upload. Additionally, the system ensures that details about the data cannot be accessed by insiders. The approach therefore aims to eliminate integrity issues from the data recovery process.
The effectiveness of our approach was verified using C# (VS2020), which requires the .NET framework and the security protocol. The verification was carried out on a 64-bit machine running Windows 10.
Data has traditionally been categorized based on size and the format in which it is kept on the server. The proposed approach, however, divides the data into at least five different private clouds, which were used for the experiments. Our approach was compared with the standard method regarding communication overheads, mobile devices, and energy usage during data transfer into multilevel servers. The comparison shows that our approach improves the turnaround time and decreases power dissipation. Besides, our method uploads data effectively without interruption or data loss.
Table 2 compares our approach with conventional methods in terms of the uploading/downloading latency time. It can be seen that our approach simplifies the encryption process and reduces the turnaround time for dynamic file slicing. This suits the recent demand for fast, parallel computer operations in multi-level cloud data storage processes.
Figure 3 and Figure 4 compare the performance of our method and that of conventional methods in terms of the uploading/downloading latency time for different sizes and formats. It can be seen that our approach, SRUD-MC, achieves a better delay time than the conventional methods. The improvement might be attributed to the processing steps of our approach. The task threshold size of the record is 201 MB, and the minimum threshold of the provider carriers is 6.
The proposed approach needs to be evaluated along many dimensions of security, such as confidentiality, integrity against insider attacks, and secrecy. This part discusses the data security of our approach and the combined performance of the said indices. Each dimension was evaluated on a scale of 0-10. In many cases, the cloud-based likelihood of incidence was taken into account. The data security of our approach and existing methods is compared in Table 3.
Our approach provides an authenticated procedure for slicing data; only an authorized person has the power to control the slicing of different data formats. Because several third-party servers are used, three unauthorized persons are aware of CP-ABE file sharing, and two users are aware of SSDS-MC and DS-MC. If five unauthorized users are able to access the system, then privacy breaches total 100%. SRUD-MC shows a 100% confidentiality level because only the information owner can notice any file sharing. The framework is designed to outline information end to end, making it simple to obtain without compromising availability.
This factor was likewise assessed using the likelihood of incidence involving third-party servers, denial-of-service attacks, collusion attacks from disgruntled customers, the provider environment, information tampering, and repudiation. In SSDS-MC, three out of ten attacks were successful, while in CP-ABE, five out of ten attacks were successful. Attacks on SRUD-MC and USDS-MC, however, were unsuccessful. Our approach also allowed for the tracking of the insider monitoring service.
The confidentiality was evaluated based on how many authorized users disclose the secret key and how many file slices are used for each technique. In contrast to other models, where many people were involved, only one person under SRUD-MC knew the key, the chosen image, and the number of file slices. The number of file slices becomes known only when the secret key is discovered. If no one can distinguish between the key and the file slices, privacy is guaranteed.
This factor was measured by the extent to which a data owner can upload a malicious folder to compromise the entire cloud setup. SRUD-MC and USDS-MC eliminated all such harms.
Integrity is the ability to ensure that data is not maliciously changed while it is being stored. In other methods, data merging causes so many clashes that it is difficult to tell which piece of information comes first and how the rest is put together. Note that all processes in our approach are automated, such as file sharing, merging, encryption, and decryption, whereas these processes are semi-automated in every other method. None of the six data items entered into SRUD-MC was modified, demonstrating 100% data integrity. In other methods, five out of six pieces of information are altered in various ways. Figure 5 analyzes the security of the multiple methods. As shown in Figure 5, the proposed approach enhances the security of the multi-cloud platform.
Secrecy is assessed by dynamic file sharing. In every contrasted method, all authorized and unauthorized customers have information regarding the file sharing parts, and the secure parts depend on the number of storage providers. By contrast, in the proposed method, only the owner is able to discern the file parts, which considerably increases consumer trust and confidentiality. To ensure anonymity, just one person was able to access the key. When malicious files are posted, no method can help; if such a file is uploaded successfully, the harm affects the owner's computer rather than the cloud configuration. When more authorized users are aware of the specifics of their systems' private keys, there is a greater impact on secrecy. Many techniques store the keys using AES encryption and servers run by other parties, which notably upsets the concealment. The multi-cloud technique measures data integrity as the data is being retrieved.
This work develops a reconfigurable multi-cloud storage server architecture for unstructured data sharing that is dynamic and secure. In contrast to the current USDS-MC approach, our approach improves the security of unstructured data by 12%. Using 3DEA and RSA throughout data encryption, our approach reduces the malicious insider risk to 0.23%. In comparison with USDS-MC, our approach cuts the uploading/downloading latency time (s) by 10% and 12%, respectively. The proposed framework indexes files using slicing technologies after operating on a variety of file formats. It improves the data sharing process, thanks to its better unstructured data security. It is effective in encrypting and decrypting the sizable unstructured storage in multi-cloud storage servers without losing any information.
If the file can be dynamically sliced, the customers will be more confident. This work improves data privacy and the migration of data to multiple clouds. It can be further improved by focusing on the security features of user-defined data; more customized features could help to improve secure data sharing in the cloud. Every sliced file needs a randomly generated key, with data retrieved through a key-aggregate cryptosystem, making it tedious for attackers to view the information. With the dawn of the age of 6G communication, data sharing will reach speeds of 10,000 Gb/s, calling for faster encryption and decryption. To adapt to the new age, the proposed approach needs to be further improved to avoid signal attenuation and information loss.
The data used to support the findings of this study are available from the corresponding author upon request.
The authors would like to thank JSS Science and Technology University, Mysuru, Dayanand Sagar Academy of Technology and Management, Bengaluru, JSS Academy of Technical Education, Bengaluru, Visvesvaraya Technological University (VTU), Belagavi, Vision Group on Science and Technology (VGST), and Karnataka Fund for Infrastructure Strengthening in Science & Technology Level-2 (JSSATEB) for all the support to the research and publication of this paper.
The authors declare that they have no conflicts of interest.