In recent years, a large number of renewable energy power plants have been built across the country, sharply increasing the load on the current centralized electricity trading platform, and existing blockchain-based power trading and access control systems suffer from data storage bottlenecks and data privacy problems. This paper proposes storing the data of a new distributed power trading platform in a distributed way using the InterPlanetary File System (IPFS), so as to relieve the storage burden on physical nodes. An improved ciphertext-policy attribute-based encryption (CP-ABE) scheme that is lightweight, traceable, and supports outsourced decryption is combined with the Chinese national cryptographic (SM) algorithms: the user's unique identity is embedded in the private key, and part of the expensive decryption computation is outsourced to a cloud server, realizing fine-grained data access control in the electricity market. Experimental results show that, with 100 encryption attributes, the decryption time of the proposed scheme drops from 556 ms (for the basic CP-ABE scheme) to 1.1 ms; compared with a blockchain-only solution storing 15 MB of data, gas consumption falls from 2,968,738,133 to 89,360. The scheme greatly improves the storage capacity and operational efficiency of the system and meets practical performance requirements.
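As background for the IPFS-based design above, the following is a minimal Python sketch of the content-addressing pattern IPFS relies on: data is stored off-chain under its own hash, and only the small digest needs to go on-chain. This is an illustrative toy, not the paper's implementation; the class and variable names are assumptions for the example.

```python
import hashlib

class ContentAddressedStore:
    """Toy stand-in for an IPFS node: each block is keyed by its own hash."""
    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()  # content identifier
        self._blocks[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        data = self._blocks[cid]
        # Integrity is self-verifying: the key must equal the data's hash.
        assert hashlib.sha256(data).hexdigest() == cid
        return data

store = ContentAddressedStore()
cid = store.put(b"meter reading: 42.7 kWh")
on_chain_record = {"cid": cid}  # only the short digest is stored on-chain
assert store.get(on_chain_record["cid"]) == b"meter reading: 42.7 kWh"
```

Because the identifier is derived from the content itself, any tampering with the off-chain copy is detectable by rehashing and comparing against the on-chain digest.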
Smart Grid (SG) has driven a new round of technological change in the electric power field. As smart devices of all kinds connect to the network, real-time collection of power data makes it possible to accurately track the operating state of the grid. Managing the huge volumes of data in smart grids raises problems such as third-party cloud storage failing to meet data privacy requirements, the high cost of data sharing between different systems, and the lack of effective sharing incentive mechanisms. To address the inability of third-party cloud storage solutions to provide secure storage and privacy protection, we study a blockchain-based storage solution for smart grid power data. The original power data are hashed to obtain a fixed-length digest, which is uploaded to the blockchain for storage; the original power data and summary information are stored off-chain on IPFS. We design smart contracts to implement user identity authentication and to record query logs that trace the circulation of power data. In particular, we design an Aggregate-signature Byzantine Fault Tolerance (ABFT) protocol based on aggregate signatures, which reduces communication complexity by adjusting how nodes broadcast and is better suited to scenarios with many nodes. Extensive experimental results show that the proposed solution resists common attack types and substantially improves data storage overhead, average latency, and throughput compared with similar solutions.
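As a rough illustration of why signature aggregation reduces communication complexity (the phase counts and constants below are assumptions for the sketch, not the ABFT paper's analysis): replacing all-to-all broadcasts with collect-and-rebroadcast rounds turns quadratic message counts into linear ones.

```python
def all_to_all_messages(n: int, phases: int = 2) -> int:
    """Classic BFT style: in each phase, every replica broadcasts
    to every other replica, giving n*(n-1) messages per phase."""
    return phases * n * (n - 1)

def aggregated_messages(n: int, phases: int = 2) -> int:
    """Aggregate-signature style: each replica sends one signature
    share to a collector, which rebroadcasts a single aggregate,
    giving 2n messages per phase."""
    return phases * 2 * n

for n in (4, 16, 64):
    print(n, all_to_all_messages(n), aggregated_messages(n))
```

At n = 64 replicas this hypothetical model already gives 8,064 versus 256 messages, which is why aggregation pays off precisely in the many-node scenarios the abstract targets.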
High-density optical data storage is a field of growing importance, with abundant research devoted to realizing holographic CDs. Dye-doped gelatin films are promising recording materials for holographic data storage because of their ease of preparation and low cost. In this report we propose certain acid red dyes as useful recording materials for optical data storage. The dyes Acid Red 73 and Acid Red 114, which are completely water-soluble, are used to sensitize gelatin thin films for data storage; both have their absorption peak around 514 nm. Two coherent beams of an Argon-ion laser (514.5 nm) are used to form gratings in the dye-sensitized gelatin films, and the gratings are found to be permanent. The diffraction efficiency of each material as a function of parameters such as dye concentration, writing-beam intensities and their ratios, and spatial frequency has been studied and is presented. An attempt to store data in the sample has also been made.
Scientific workflows (SWFs) play a significant role in scientific research and engineering simulation; they are often data-intensive and have complex data dependencies. Storing the massive intermediate datasets greatly affects the performance and quality of service of an SWF system and has become a difficult, complex task. Analysis of the cost transitive tournament shortest path (CTT-SP) algorithm proposed for intermediate data storage in SWF systems reveals its main disadvantage: sensitivity to the choice of main branch. We propose an improved CTT-SP algorithm based on the critical path method (CPM), aiming to reduce this sensitivity and improve performance. By annotating the arrows with the generation information of the datasets, the data dependency graph of an SWF can be converted into an activity-on-arrow (AOA) network. The critical path of the AOA network is then used as the main branch of the CTT-SP algorithm, which reduces the impact of main-branch selection while preserving the quality of service of the SWF system. Experiments are designed to test the impact of main branches and the performance of the improved CTT-SP algorithm. The results show that main-branch selection significantly affects the performance of the CTT-SP algorithm and that CPM is effective in improving it. Comparison results further demonstrate the effectiveness of the improved CTT-SP algorithm.
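The critical path used as the main branch above is simply the longest path through the AOA network. A minimal Python sketch of finding it in a weighted DAG (the example graph and durations are invented for illustration; the paper's CTT-SP machinery is not reproduced here):

```python
def critical_path(edges):
    """Longest (critical) path in a DAG, given {(u, v): duration} edges."""
    nodes = {n for e in edges for n in e}
    succ = {n: [] for n in nodes}
    indeg = {n: 0 for n in nodes}
    for (u, v) in edges:
        succ[u].append(v)
        indeg[v] += 1
    # Topological order via Kahn's algorithm.
    order, queue = [], [n for n in nodes if indeg[n] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Relax edges in topological order, keeping the LONGEST distance.
    dist = {n: 0 for n in nodes}
    prev = {}
    for u in order:
        for v in succ[u]:
            if dist[u] + edges[(u, v)] > dist[v]:
                dist[v] = dist[u] + edges[(u, v)]
                prev[v] = u
    end = max(dist, key=dist.get)
    path = [end]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[end]

# Hypothetical workflow: generation times on the arrows.
edges = {("a", "b"): 3, ("a", "c"): 2, ("b", "d"): 4, ("c", "d"): 1}
path, length = critical_path(edges)  # -> (["a", "b", "d"], 7)
```

Longest-path search is NP-hard in general graphs but linear-time in a DAG, which is what makes the AOA conversion in the abstract useful.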
Recently, many organizations and industries have been using cloud computing technologies to exchange resources and confidential data. Many cloud services are available for this purpose, and they allow users to be categorized as private or public users accessing their own data from a private or public cloud, respectively. The combination of these two clouds is called a federated cloud, which lets both kinds of cloud users access their own data in the same cloud database. In this scenario, authorization and authentication become complex tasks. To ensure that users can access only their own data in the federated cloud, a new secure data storage and retrieval scheme, the AES and Triple-DES-based Secured Storage and Retrieval Algorithm (ATDSRA), is proposed for storing private and public cloud users' data securely in the cloud database. Here, TDES is used to encrypt the input data, and data merging and aggregation methods are used to group the encrypted data. Moreover, a new dynamic data auditing scheme, the CRT-based Dynamic Data Auditing Algorithm (CRTDDA), is proposed to audit cloud data in the federated cloud and to restrict data access; the auditing mechanism protects the stored data from access violations. In addition, the standard Table64 is used for the encryption and decryption processes. The experimental results prove the efficiency of the proposed model in terms of security level.
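The abstract does not detail CRTDDA, but its name indicates it builds on the Chinese Remainder Theorem. A minimal sketch of that primitive, reconstructing a value from its residues modulo pairwise-coprime moduli (the moduli and "fingerprint" value are invented for illustration):

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem: recover x mod prod(moduli) from its
    residues modulo pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
    return x % M

moduli = [7, 11, 13]                  # pairwise coprime
fingerprint = 123                     # e.g. a small data fingerprint
residues = [fingerprint % m for m in moduli]
assert crt(residues, moduli) == fingerprint
```

In a CRT-based audit, a verifier holding only small residues can check consistency of stored data without transferring the full value; the sketch above shows just the reconstruction step. (Requires Python 3.8+ for `math.prod` and the three-argument `pow` modular inverse.)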
By applying nonlinear dynamics to the dense storage of information, we demonstrate how a single nonlinear dynamical element can store M items, where M is variable and can be large. This provides the capability for naturally storing data in different bases or in different alphabets and can be used to implement multilevel logic. Further we show how this method of storing information can serve as a preprocessing tool for (exact or inexact) pattern matching searches. Since our scheme involves just a single procedural step, it is naturally set up for parallel implementation and can be realized with hardware currently employed for chaos-based computing architectures.
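A toy illustration of the idea that one dynamical element can hold M-level symbols: encode a base-M string into the initial state of the shift map x → M·x mod 1, and read one symbol per iteration. This uses exact rationals to sidestep floating-point drift; it is a conceptual sketch, not the chaos-based hardware scheme the work describes.

```python
from fractions import Fraction

def encode(symbols, M):
    """Pack a base-M symbol string into one state of the map x -> M*x mod 1."""
    x = Fraction(0)
    for i, s in enumerate(symbols):
        x += Fraction(s, M ** (i + 1))
    return x

def decode(x, M, n):
    """Iterate the map; each step reveals the next stored symbol."""
    out = []
    for _ in range(n):
        x *= M
        digit = int(x)   # integer part = current symbol
        out.append(digit)
        x -= digit       # fold back into [0, 1)
    return out

data = [3, 0, 2, 4, 1]   # a single state stores M = 5 levels per step
x0 = encode(data, M=5)
assert decode(x0, M=5, n=5) == data
```

Changing M changes the alphabet without changing the mechanism, which mirrors the abstract's point about naturally storing data in different bases.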
A schema database functions as a repository for interconnected data points, organizing information into tables of rows and columns to make data structures comprehensible. These databases arrange data through established connections, with attribute values linking related tuples. This integrated approach to data management and distributed processing enables schema databases to maintain models even when the working-set size surpasses available RAM. However, challenges persist around data quality, storage, the scarcity of data science professionals, data validation, and sourcing from diverse origins. Notably, while schema databases excel at reviewing transactions, they often fall short at updating them efficiently. To address these issues, a Chimp-based radial basis neural model (CbRBNM) is employed. Initially, the schemaless database was set up and integrated into the Python system. Subsequently, compression functions were applied to both schema and schema-less databases to optimize relational data size by eliminating redundant files. Performance validation involved calculating compression parameters; the proposed method achieves a memory usage of 383.37 MB, a computation time of 0.455 s, a training time of 167.5 ms, and a compression rate of 5.60%. Extensive testing demonstrates that CbRBNM yields a favorable compression ratio and enables direct searching on compressed data, thereby enhancing query performance.
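The CbRBNM model itself cannot be reconstructed from the abstract, but the "direct searching on compressed data" idea it claims can be illustrated generically with dictionary encoding: repeated column values become small integer codes, and an equality search needs only one dictionary lookup plus a scan of the codes. The column data below is invented for the example.

```python
def dict_encode(column):
    """Dictionary-compress a column: repeated values become integer codes."""
    dictionary, codes = {}, []
    for v in column:
        if v not in dictionary:
            dictionary[v] = len(dictionary)
        codes.append(dictionary[v])
    return dictionary, codes

def find(dictionary, codes, value):
    """Equality search directly on the compressed representation:
    look the value up once, then scan the integer codes."""
    code = dictionary.get(value)
    if code is None:
        return []
    return [i for i, c in enumerate(codes) if c == code]

col = ["kWh", "MWh", "kWh", "kWh", "GWh", "MWh"]
d, codes = dict_encode(col)
assert find(d, codes, "kWh") == [0, 2, 3]
```

The query never decompresses the column, which is the general reason compressed layouts can improve rather than hurt query performance.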
The Ribosome – a Restless Molecular Machine
An ICON in Clinical Research
Isilon: Scales Up Big Data Storage
This paper presents research work within Collaborative Research Centre 653, "Gentelligent Components in Their Lifecycle." The term "gentelligent" refers to the genetic and intelligent character of these components: specific data are inherently stored in the components and used during their lifecycle for identification, processing, and reproduction. The present study aims at developing a method to manufacture and utilize gentelligent sintered parts. As data carriers, foreign materials shaped into fonts, logos, or codes are embedded in the powder material, followed by pressing and sintering processes. The foreign material can be applied in the form of particles or compound powder. The information read-out is based on radiographic methods. The objectives of the investigations are determining the process parameters of each method and assessing the impact of the integrated foreign materials on the mechanical properties of the component. The experimental studies are supported by numerical simulations.
Over the past decade, the family of Heusler compounds has attracted tremendous scientific and technological interest in the field of spintronics, essentially owing to their exceptional magnetic properties, which qualify them as promising functional materials for various data-storage devices such as giant-magnetoresistance spin valves, magnetic tunnel junctions, and spin-transfer-torque devices. In this article, we provide a comprehensive review of the applications of the Heusler family in magnetic data storage. In addition to their important roles in improving the performance of these devices, we also point out the challenges, and possible solutions, of current Heusler-based devices. We hope this review will spark further efforts to incorporate this eminent family of materials efficiently into data storage applications and fully realize their intrinsic potential.
Magnetic materials provide the most important form of erasable data storage for information technology today. The demand for increased storage capacity has caused the size of the region used to represent a “1” or “0” of binary data and features of the read–write transducers to be reduced to the nanometer scale. However, increased storage capacity is useful only if there is a commensurate reduction in the time taken to read and write the data. In this chapter the basic principles that determine the behavior of nanomagnetic materials are introduced and their use in data storage systems is described. Particular attention is paid to processes that limit the speed of operation of the data storage system. It is shown that optical pump–probe experiments may be used to characterize dynamic magnetic processes with femtosecond temporal resolution. The macroscopic magnetization of a ferromagnet can be made to precess in response to an optically triggered magnetic field pulse, leading to reduced switching times. Alternatively, an ultrashort laser pulse may be used to manipulate the magnitude of the magnetization on femtosecond time scales, leading to an ultrafast demagnetization in certain ferromagnets, and providing new insight into magnetotransport phenomena. Finally, the outlook for increased record and replay rates is assessed and the prospect of further use of optical techniques within magnetic data storage technology is discussed.
This paper first analyzes the shortcomings of current Hadoop systems used for managing big data from electric power systems, and then proposes a Hadoop-based index management scheme for electric cloud data to overcome the problems of big data storage and querying in a Hadoop framework. Big data from electric power systems can be classified as information data or massive data according to its storage space and access frequency. Based on the characteristics of the different data types, combined with the characteristics of the HBase database, a new index system and query scheme for electric big data is proposed. Using a divide-and-conquer strategy, an inverted index is built on the information data using Zipf's law and term weights. By using an index cluster and an HBase cluster to construct indexes of different granularity on the massive data, the stability and reliability of the system are improved.
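The core of the scheme above is an inverted index with frequency-aware term weights. A minimal Python sketch of that core, using tf-idf as a stand-in weighting (Zipf's law motivates damping the few very frequent terms); the documents are invented, and the paper's HBase clustering is not reproduced:

```python
from collections import Counter, defaultdict
import math

def build_inverted_index(docs):
    """Map each term to the documents containing it, weighted by tf-idf."""
    index = defaultdict(dict)
    n = len(docs)
    # Document frequency: in how many documents does each term appear?
    df = Counter(t for doc in docs for t in set(doc.split()))
    for doc_id, doc in enumerate(docs):
        tf = Counter(doc.split())
        for term, count in tf.items():
            index[term][doc_id] = count * math.log(n / df[term])
    return index

docs = ["voltage current voltage", "current load", "load load voltage"]
idx = build_inverted_index(docs)
assert set(idx["load"]) == {1, 2}
```

Lookups then go term-first instead of document-first, which is what makes the query model efficient once the index is distributed across clusters of different granularity.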