Data classification and grading provide the foundation and guidance for information security management, and the construction of a data security system is inseparable from this cornerstone. In recent years, the high-frequency use and complex interactions of power grid data have created a growing need for automated, efficient and credible data classification and grading. The sensitivity, complexity and multi-dimensional nature of power grid data assets exacerbate the challenges of this task. Because the underlying businesses are scattered and the data categories are complex, an expert in a single field has limited knowledge and cannot label data across all fields, which also makes data labeling very difficult. To address these issues, we present a two-stage unsupervised methodology for hierarchical text classification (TUHTC). First, we leverage the semantic information inherent in hierarchical labels for data augmentation of annotated datasets. Second, we enhance the semantic embedding of labels to facilitate effective classification. We conducted experimental verification on database information from authentic business scenarios, validating the efficacy of the proposed methodology.
Signcryption has drawn considerable attention due to its useful applications in many areas, in particular where computation and communication resources are constrained, for example on lightweight devices. Traditional signcryption schemes do not support the homomorphic property. Recent work by Rezaeibagha et al. (ProvSec 2017) offered the first homomorphic signcryption scheme proven secure under certain restrictions. In this paper, we show that homomorphic signcryption can be extended to a provably secure broadcast signcryption scheme. We allow broadcast signcrypted data items to be aggregated without requiring decryption, which is a desirable feature in distributed environments.
In recent years, intensive use of computing has been the main strategy of investigation in several scientific research projects. Progress in computing technology has opened unprecedented opportunities for the systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago.
This paper focuses on the strategies in use: it reviews the various components necessary for an effective solution ensuring the storage, long-term preservation, and worldwide distribution of the large quantities of data required by a large scientific research project.
The paper also presents several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that must be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.
HEP collaborations are deploying grid technologies to address petabyte-scale data processing challenges. In addition to file-based event data, HEP data processing requires access to terabytes of non-event data (detector conditions, calibrations, etc.) stored in relational databases. Existing database access control technologies for grid computing are limited to encrypted message transfers, which is inadequate for delivering non-event data in these amounts. To overcome these database access limitations one must go beyond the existing grid infrastructure. A proposed hyperinfrastructure of distributed database services implements efficient secure data access methods. We introduce several technologies laying the foundation of this new hyperinfrastructure. We present efficient secure data transfer methods and secure grid query engine technologies federating heterogeneous databases. Lessons learned in the production environment of the ATLAS Data Challenges are presented.
With advances in cloud storage systems, users can access data saved in the cloud and manipulate it without limitations of time and place. As the data owner no longer possesses the data physically, he must be able to verify the integrity of the data stored in the cloud using the public key issued by a public key infrastructure (PKI). Thus the security of the PKI and its certificates is essential. However, the traditional PKI carries numerous security risks, and administering certificates is complex. In this paper we use certificateless public key cryptography to solve these problems, and we employ an elliptic curve group to reduce the computation overhead. We design a certificateless public verification mechanism to check the integrity of data outsourced to the cloud, and further extend it to support multiuser groups through batch verification. Specifically, a public verifier who checks integrity on behalf of the data owner does not need to manage any certificates during the verification process, nor does the verifier need to download the entire file for integrity checking. Theoretical analyses verify the security of our scheme and experimental results show its efficiency.
Road side units (RSUs) can act as fog nodes to perform data aggregation at the edge of the network, which reduces communication overhead and improves the utilization of network resources. However, because the RSU is public infrastructure, this feature may bring data security and privacy risks in data aggregation. In this paper, we propose a secure multi-subinterval data aggregation scheme, named SMDA, with interval privacy preservation for vehicle sensing systems. Specifically, our scheme combines the 1-R encoding theory and proxy re-encryption to protect interval privacy. This ensures that the interval information is known only to the data center, and that the RSU can classify the encrypted data without knowing either the plaintext or the interval information. Meanwhile, our scheme employs Paillier homomorphic encryption to accomplish data aggregation at the RSU, and identity-based batch authentication to provide authentication and data integrity. Finally, the security analysis and performance evaluations demonstrate the security and efficiency of our scheme.
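The additive homomorphism that lets the RSU aggregate ciphertexts without ever decrypting them can be illustrated with a toy Paillier sketch. The primes, key generation, and plaintext values below are illustrative assumptions for readability only; the full SMDA protocol (interval encoding, proxy re-encryption, batch authentication) is not reproduced.

```python
import math
import random

# Toy Paillier keypair -- tiny primes for illustration, far too small for real use.
p, q = 61, 53
n = p * q                        # public modulus
n2 = n * n
g = n + 1                        # standard choice of generator
lam = math.lcm(p - 1, q - 1)     # private key component
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # modular inverse of L(g^lam mod n^2)

def encrypt(m: int) -> int:
    """Paillier encryption: c = g^m * r^n mod n^2, r random and coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Aggregation at the RSU: multiplying ciphertexts adds the plaintexts underneath.
readings = [12, 7, 30]           # hypothetical vehicle sensor readings
aggregate = 1
for m in readings:
    aggregate = (aggregate * encrypt(m)) % n2
assert decrypt(aggregate) == sum(readings)  # only the data center can decrypt
```

The random blinding factor r makes each ciphertext different even for equal readings, which is why the RSU learns nothing from the values it multiplies together.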
In embedded multicore shared-memory systems, processing elements (PEs) are mutually untrusted since they carry out different computing tasks independently. Therefore, the sharing of secret constants (SCs) between PEs, as applied in existing confidentiality protection schemes, leads to the leakage of nonshared data. Moreover, for integrity protection, tree-based checking over the whole counter space increases both memory occupation and the average verification delay. In this paper, we propose a ciphertext-sharing confidentiality protection scheme based on certificateless proxy re-encryption, together with an integrity protection scheme based on a multigranularity scalable hash tree, for secure data sharing between untrusted processing elements (SDSUP). With our schemes, the SC does not need to be shared and the scale of the checking tree is reduced, thus preventing the leakage of nonshared data and reducing the high cost of integrity checking. Results from the Rice Simulator for ILP Multiprocessors (RSIM) show that, compared with an unprotected system, the performance degradation from applying the confidentiality protection scheme is 17.3% on average. Moreover, the performance degradation of the integrity protection scheme is 12.89%, which is superior to 35.36% for the bonsai Merkle tree (BMT), 29.49% for the multigrained hash tree (MGT) and 21.82% for the multigranularity incremental hash tree (MIT).
We live in a digital age in which people, things, and any device with network capability can communicate with one another, and the Internet of Things (IoT) paves the way for it. Almost all domains are adopting IoT, from smart home appliances, smart healthcare and smart transportation to the Industrial IoT and many others. As IoT adoption increases, so does the accumulation of data. Furthermore, digital transformation has introduced more security vulnerabilities, resulting in data breaches and cyber-attacks. One of the most prominent issues in smart environments is the delay in data processing, since IoT smart environments store their data in the cloud and retrieve it for every transaction; with growing data accumulation in the cloud, most smart applications face unprecedented delays. Thus, data security and low-latency response times are mandatory for deploying a robust IoT-based smart environment. Blockchain, a decentralized and immutable distributed ledger technology, is an essential candidate for ensuring secure data transactions, but it faces a variety of challenges in accommodating resource-constrained IoT devices. Edge computing brings data storage and computation closer to the network's edge and can be integrated with blockchain for low-latency data processing. Integrating blockchain with edge computing ensures faster and more secure data transactions, reducing the computation and communication overhead of resource allocation, data transaction and decision-making. This paper discusses the seamless integration of blockchain and edge computing in IoT environments, various use cases, notable blockchain-enabled edge computing architectures in the literature, secured data transaction frameworks, opportunities, research challenges, and future directions.
The Internet of Things (IoT) is gaining a great deal of attention in numerous industries due to its low-cost autonomous sensor operations. IoT devices in healthcare and medical settings establish an environment that recognizes a patient's health status, such as stress level, oxygen supply, pulse and body temperature, and responds quickly in the event of an emergency. Various systems based on low-powered biosensor nodes have been proposed to monitor patients' medical conditions using a Wireless Body Area Network (WBAN), although controlling the growing power usage and communication cost is time-consuming and attention-demanding. Data privacy and integrity in the presence of malicious traffic is another difficult research problem. To overcome these limitations, this research introduces a Safe and Energy-Efficient Framework for e-Healthcare using the Internet of Medical Things (IoMT). Its main goals are to reduce both transmission cost and power usage among biosensor nodes while transmitting health records conveniently, and to protect patients' medical data from unverified and malevolent base stations so as to strengthen confidentiality and protection.
Recently, the wireless body area network (WBAN) has become a hot research topic in advanced healthcare systems. A WBAN plays a vital role in monitoring the physiological parameters of the human body with sensors. The sensors are small and carry small batteries with limited life, so energy is scarce in the multi-hop routing process. Patient data collected by the sensors are transmitted at high energy cost, which can cause failures along the data transmission path; to avoid this, the data transmission process should be optimized. This paper presents an advanced authentication and energy-efficient routing protocol (AAERP) for optimal routing paths in WBANs. In the initial stage, patients' data are aggregated from the WBAN through IoMT devices. To secure the patients' private data, a hybrid mechanism combining the elliptic curve cryptosystem (ECC) and the Paillier cryptosystem is proposed for data encryption. Data security is improved by authenticating the data before transmission using the encryption algorithm. Before routing, the encryption step converts the original plaintext into ciphertext, which helps prevent intrusions in the network. The encrypted data are then optimally routed using the teamwork optimization algorithm (TOA); the optimal path selection of this optimization technique improves the effectiveness and robustness of the system. The experimental setup is implemented in Python. The efficacy of the proposed model is evaluated using metrics such as network lifetime, network throughput, residual energy, success rate, and the numbers of packets received, sent, and dropped, and its performance is measured by comparing the obtained results with several existing models.
In studying the security of large-scale data in universities, it is essential to protect the privacy and integrity of the data. Current schemes use blockchain technology for data sharing, with cloud servers responsible for storing ciphertext and performing partial encryption and decryption. Data encryption can effectively prevent hacker attacks and ensure the integrity of data transmission. However, cloud servers that are only partially trusted may tamper with the ciphertext or return incorrect results. To solve these problems, this paper improves the access-policy privacy, attribute revocation and data authenticity of the traditional CP-ABE algorithm. A protection scheme for large-scale public data in universities is proposed, and a decentralized multi-node network structure is designed to improve the efficiency of data queries. Simulation results show that the scheme performs well when the number of system attributes is moderate and the access policy is not highly complex.
The piecewise logistic map (PLM) is an improved version of the logistic map, specifically designed for cryptographic applications. However, the probability density distribution of the PLM is not uniform. To overcome this shortcoming, a parameter-coupled piecewise logistic map (PCPLM) is presented. Using the PCPLM as a basic unit, we construct a four-dimensional chaotic model with a uniform probability density distribution. The four-dimensional model also has a parallel structure, which is beneficial for running efficiency. Finally, a novel pseudorandom number generator (PRNG) is presented based on this four-dimensional chaotic model. Security analysis and simulation tests on the proposed PRNG both confirm that it is secure and efficient. Therefore, it can be used as a candidate for data security.
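The extraction idea behind chaotic PRNGs of this kind (iterate a map, then quantize the state) can be sketched with the plain logistic map. The PCPLM itself, its coupling parameters and the four-dimensional structure are not specified in the abstract, so this is a minimal illustrative stand-in, not the paper's generator.

```python
def chaotic_prng(seed: float, n_bytes: int, r: float = 3.99) -> bytes:
    """Toy byte generator driven by the logistic map x -> r*x*(1-x).

    The paper's PCPLM replaces this map with a parameter-coupled piecewise
    variant to flatten the probability density; the extraction step
    (iterate, then quantize the state into a byte) is the same idea.
    """
    x = seed                      # seed must lie in (0, 1)
    for _ in range(100):          # discard the transient
        x = r * x * (1 - x)
    out = bytearray()
    for _ in range(n_bytes):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

stream = chaotic_prng(0.3141, 16)
assert len(stream) == 16
# Sensitivity to the seed: a 1e-7 perturbation yields a different stream.
assert stream != chaotic_prng(0.3141001, 16)
```

The seed sensitivity shown in the last line is the chaotic property that makes the state sequence key-dependent; the paper's uniformity and security analyses are what a plain logistic map lacks.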
Although RDF ontologies are expressed in XML syntax, existing methods for protecting XML documents are not suitable for securing RDF ontologies: the graph structure and inference features of RDF ontologies demand new methods of access control. Driven by this goal, this paper proposes a query-oriented model for RDF ontology access control. The model adopts the concept of an ontology view to rewrite user queries. In our approach, ontology views define the accessible ontology concepts and instances a user may visit, and enable a controlled inference capability for the user. The design of the views guarantees that they are free of conflict. Building on this, the paper describes algorithms for rewriting queries according to different views, and provides a system architecture along with an implemented prototype. In our evaluation, the system exhibits promising results in terms of effectiveness and soundness.
Cloud computing allows for access to ubiquitous data storage and powerful computing resources through the use of web services. There are major concerns, however, with data security, reliability, and availability in the cloud. In this paper, we address these concerns by introducing a novel security mechanism for secure and fault-tolerant cloud information storage. The information storage model follows the RAID (Redundant Array of Independent Disks) concept by considering cloud service providers as independent virtual disk drives. As such, the model utilizes multiple cloud service providers as a cloud cluster for information storage, and a service directory for management of the cloud clusters including service query, key management, and cluster restoration. Our approach not only supports maintaining the confidentiality of the stored data, but also ensures that the failure or compromise of an individual cloud provider in a cloud cluster will not result in a compromise of the overall data set. To ensure a correct design, we present a formal model of the security mechanism using hierarchical colored Petri nets (HCPN), and verify some key properties of the model using model checking techniques.
Despite the popularity and many advantages of using cloud data storage, there are still major concerns about the data stored in the cloud, such as security, reliability and confidentiality. In this paper, we propose a reliable and secure distributed cloud data storage scheme using Reed-Solomon codes. Different from existing approaches that achieve data reliability with redundancy at the server side, our proposed mechanism relies on multiple cloud service providers (CSPs) and protects users' cloud data from the client side. In our approach, we view multiple cloud-based storage services as virtual independent disks for storing redundant data encoded with erasure codes. Since each CSP has no access to a user's complete data, the data stored in the cloud cannot be easily compromised. Furthermore, the failure or disconnection of a CSP will not result in the loss of a user's data, as the missing data pieces can be readily recovered. To demonstrate the feasibility of our approach, we developed a prototype distributed cloud data storage application using three major CSPs. The experimental results show that, besides its reliability and security benefits, the application outperforms each individual CSP for uploading and downloading files.
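The core client-side idea, that the loss of any single provider's share is recoverable from the others, can be sketched with the simplest erasure code: one XOR parity share across two data shares (a degenerate single-parity case; the paper's scheme uses general Reed-Solomon codes and real CSP APIs, neither of which is reproduced here).

```python
def encode(data: bytes) -> list:
    """Split data into two data shares plus one XOR parity share,
    each destined for a different storage provider."""
    if len(data) % 2:            # pad to an even length (the true length
        data += b"\x00"          # must be tracked separately in a real system)
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def recover(shares: list) -> bytes:
    """Reassemble the file; at most one share may be None (a lost provider)."""
    a, b, parity = shares
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, parity))
    elif b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))
    return a + b

original = b"cloud-data!!"                      # 12 bytes, even length
a, b, parity = encode(original)
assert recover([a, None, parity]) == original   # provider 2 offline
assert recover([None, b, parity]) == original   # provider 1 offline
```

No single provider holds enough to reconstruct the file on its own, which is the confidentiality argument the abstract makes; Reed-Solomon generalizes this to tolerating multiple simultaneous provider failures.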
In this cryptosystem, we present a novel technique for securing video data using a matrix affine cipher (MAC) combined with the two-dimensional discrete wavelet transform (2D-DWT). Existing schemes for video data security provide only one layer of security, whereas the presented technique provides two. Both the keys and the arrangement of the MAC parameters are imperative to the decryption process: even if an attacker knows all the exact keys but has no information about the specific arrangement of the MAC parameters, the original video cannot be recovered from the encrypted video. Experimental results on standard examples support the robustness and suitability of the presented cryptosystem for video encryption and decryption, and statistical analysis of these results critically examines the behavior of the proposed technique. A comparison with existing video security schemes is also provided to demonstrate the robustness of the proposed cryptosystem.
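A matrix affine cipher layer of the kind named in the abstract can be sketched as C = A*P + B (mod 256) over pixel pairs, which shows why both the key values and their arrangement (which matrix entry goes where) matter for decryption. The abstract does not give the exact construction, and the 2D-DWT layer is omitted, so the matrices and vector below are hypothetical parameters.

```python
# Hypothetical MAC parameters: A must be invertible mod 256 (odd determinant).
A     = [[1, 1], [1, 2]]         # det(A) = 1
A_INV = [[2, 255], [255, 1]]     # inverse of A mod 256
B     = [37, 101]                # additive key vector (arbitrary example)

def mac_encrypt(pixels: list) -> list:
    """Encrypt consecutive pixel pairs as C = A*P + B (mod 256)."""
    out = []
    for i in range(0, len(pixels), 2):
        p0, p1 = pixels[i], pixels[i + 1]
        for row in range(2):
            out.append((A[row][0] * p0 + A[row][1] * p1 + B[row]) % 256)
    return out

def mac_decrypt(cipher: list) -> list:
    """Invert the affine map: P = A^-1 * (C - B) (mod 256)."""
    out = []
    for i in range(0, len(cipher), 2):
        c0 = (cipher[i] - B[0]) % 256
        c1 = (cipher[i + 1] - B[1]) % 256
        for row in range(2):
            out.append((A_INV[row][0] * c0 + A_INV[row][1] * c1) % 256)
    return out

pixels = [10, 200, 55, 0]        # assumes an even number of 8-bit samples
assert mac_decrypt(mac_encrypt(pixels)) == pixels
```

Swapping the entries of A or B (i.e., changing the parameter arrangement while keeping the same values) yields a different, non-inverting transform, which mirrors the abstract's claim that the arrangement itself acts as secret material.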
Microaggregation is a statistical disclosure control technique: raw microdata (i.e. individual records) are grouped into small aggregates prior to publication. With fixed-size groups, each aggregate contains k records to prevent disclosure of individual information. Individual ranking is a common criterion for reducing multivariate microaggregation to the univariate case: the idea is to perform microaggregation independently for each variable in the record. Using distributional assumptions, we show in this paper how to find interval estimates for the original data based on the microaggregated data. Such intervals can be considerably narrower than intervals resulting from subtraction of means, and can be useful for detecting a lack of security in a microaggregated data set. The analytical arguments given in this paper confirm recent empirical results about the insecurity of individual ranking microaggregation.
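Fixed-size individual-ranking microaggregation of a single variable can be sketched as follows: sort the values, cut the ranking into groups of k (the last group absorbs any remainder), and publish each value's group mean. This is a minimal illustration of the masking step only; the paper's interval-estimation attack on it is not shown.

```python
def microaggregate(values: list, k: int = 3) -> list:
    """Replace each value by the mean of its rank-ordered group of size >= k."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])  # indices by rank
    g = max(1, n // k)                                 # number of groups
    bounds = [i * k for i in range(g)] + [n]           # last group takes the tail
    out = [0.0] * n
    for start, end in zip(bounds, bounds[1:]):
        group = order[start:end]
        mean = sum(values[i] for i in group) / len(group)
        for i in group:
            out[i] = mean
    return out

published = microaggregate([10, 2, 8, 4, 6, 12, 0], k=3)
assert published == [9.0, 2.0, 9.0, 2.0, 9.0, 9.0, 2.0]
```

Every published value occurs at least k times, which is the intended disclosure protection; the paper's point is that, because ranking is done per variable, distributional assumptions still allow narrow interval estimates of the originals.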
The opportunities offered by the Internet are increasingly employed in medicine. To obtain data on the extent to which the Internet is used by hand surgeons, survey forms were sent to 1043 participants of the 1998 Congress of the IFSSH in Vancouver. Ninety-four per cent of the respondents use the Internet. Most of the participants use the World Wide Web for literature searches, information on events and reading scientific articles. E-mail is used for general and scientific communication with colleagues and also for the transmission of patient-related data. Perceived concerns include the secure transmission of sensitive data, slow data transmission, and the lack of structure and of an authority to control the contents of the Internet. Virtual congresses and a newsgroup on hand surgery seem to be worthwhile future goals. Some problems pointed out in this survey have already been solved, at least partially, and possible solutions for the rest are discussed.
In order to improve data security and network security in a digital-economy-driven environment, this paper combines data security and network security technologies to build a digital economy security management and control system. The paper describes how data owners encrypt their data before indexing, the composition and construction of the corresponding EncIR tree, and a spatial keyword group query algorithm over the EncIR tree. It then analyzes the experimental performance of index building and spatial querying, and builds an intelligent digital economy security system on the basis of these algorithms. The experimental results verify that the proposed data security and network data system offers good security performance, on which basis follow-up security regulations can be formulated.
In this paper, we investigate the general problem of data hiding and propose an approach for effective cover-noise interference rejection in oblivious applications. We first evaluate the performance of the commonly used direct-sequence modulation approach, in which a low-power signal is embedded into the original cover signal; the optimal detector is derived and its performance analyzed. Second, we study a novel approach to oblivious data hiding, evaluate its performance, and compare it with existing algorithms. Both simulation studies and empirical data hiding results validate its efficiency in oblivious multimedia applications.
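The direct-sequence baseline described in the abstract can be sketched as follows: a key-derived, low-power pseudorandom pattern is added to the cover, and the oblivious detector correlates the received signal with the same pattern, needing no access to the original cover. The pattern construction, gain alpha, and signal model are illustrative assumptions, and the simple sign test below stands in for the optimal detector derived in the paper.

```python
import random

def embed(cover: list, bit: int, key: int, alpha: float = 2.0) -> list:
    """Add a low-power key-derived +/-1 pattern, signed by the hidden bit."""
    rng = random.Random(key)
    w = [rng.choice((-1, 1)) for _ in cover]
    sign = 1 if bit else -1
    return [c + sign * alpha * wi for c, wi in zip(cover, w)]

def detect(signal: list, key: int) -> int:
    """Oblivious detection: correlate with the regenerated pattern.
    The cover acts as zero-mean noise on the correlator, so for a long
    enough signal the sign of the correlation recovers the bit."""
    rng = random.Random(key)
    w = [rng.choice((-1, 1)) for _ in signal]
    corr = sum(s * wi for s, wi in zip(signal, w)) / len(signal)
    return 1 if corr > 0 else 0

rng = random.Random(7)
cover = [rng.uniform(-10, 10) for _ in range(1000)]  # synthetic cover samples
assert detect(embed(cover, 1, key=42), key=42) == 1
assert detect(embed(cover, 0, key=42), key=42) == 0
```

The cover-correlation term shrinks as 1/sqrt(N) while the embedded term stays at alpha, which is exactly the cover-noise interference the paper's proposed approach aims to reject more effectively.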