The piecewise logistic map (PLM) is an improved version of the logistic map, designed specifically for cryptographic applications. However, the probability density distribution of the PLM is not uniform. To overcome this shortcoming, a parameter-coupled piecewise logistic map (PCPLM) is presented. Using the PCPLM as a basic unit, we construct a four-dimensional chaotic model with a uniform probability density distribution. The model also has a parallel structure, which improves running efficiency. Finally, a novel pseudorandom number generator (PRNG) based on this four-dimensional chaotic model is presented. Security analysis and simulation tests both confirm that the proposed PRNG is secure and efficient, so it can serve as a candidate for data security applications.
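As a rough illustration of the chaotic-map PRNG idea described above, the following sketch uses the classic logistic map x_{n+1} = μ·x_n·(1 − x_n) as the chaotic unit; the paper's PCPLM, its parameter coupling, and the four-dimensional parallel structure are not reproduced here, and the byte-extraction step is a common simplification rather than the authors' method.

```python
# Minimal sketch of a chaotic-map PRNG, assuming the classic logistic map
# as the basic unit; the paper's PCPLM and its parameter coupling are not
# reproduced here.

def logistic(x, mu=3.99):
    """One iteration of the logistic map on (0, 1)."""
    return mu * x * (1.0 - x)

def chaotic_prng_bytes(seed, n_bytes, burn_in=1000):
    """Derive pseudorandom bytes from the map's trajectory.
    Extracting 8 bits by scaling the state is illustrative only."""
    x = seed
    for _ in range(burn_in):          # discard the transient
        x = logistic(x)
    out = bytearray()
    for _ in range(n_bytes):
        x = logistic(x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

print(chaotic_prng_bytes(0.123456789, 8).hex())
```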
Privacy and trust in biomedical solutions that capture and share data are issues rising to the center of public attention and discourse. While large-scale academic, medical, and industrial research initiatives must collect increasing amounts of personal biomedical data from patient stakeholders to make precision health a reality, methods for providing sufficient privacy in biomedical databases and for conveying a sense of trust to users are equally crucial if the field of biocomputing is to advance with the support of those stakeholders. If the intended audience does not trust new precision health innovations, funding and support for these efforts will inevitably be limited. It is therefore crucial for the field to address these issues in a timely manner. Here we describe current research directions towards achieving trustworthy biomedical informatics solutions.
The volume of multimedia data is currently growing from gigabytes to petabytes as ever larger quantities of real-world data are produced. Most big data is transmitted via the internet and accumulated on cloud servers. Since cloud computing offers internet-oriented services, it attracts many attackers and malicious users, who attempt to exploit users' private data without authorized access and at times substitute counterfeit data for the real data. As a result, data protection has become a significant concern. This paper aims to establish an optimization-based privacy preservation model that protects multimedia data by selecting an optimal secret key. Encryption and decryption are carried out with an Improved Blowfish cryptographic technique, and the sensitive data on the cloud server is preserved using the optimal key. Optimal key generation is the essential procedure for ensuring integrity and confidentiality. Likewise, data restoration (decryption) is the inverse of sanitization. In both cases, key generation remains the central aspect; the key is chosen optimally by a novel hybrid algorithm termed "Clan based Crow Search with Adaptive Awareness Probability" (CCS-AAP). Finally, an analysis is carried out to validate the improvement offered by the proposed method.
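To make the optimization-based key selection concrete, here is a heavily simplified sketch: a plain random search (standing in for the paper's CCS-AAP metaheuristic) scores candidate keys by the Shannon entropy of the resulting ciphertext, and a SHA-256-derived XOR keystream stands in for the Improved Blowfish cipher. Every function name and the fitness choice are illustrative assumptions, not the authors' construction.

```python
# Illustrative sketch of optimization-based key selection; a random search
# and a toy XOR-stream cipher stand in for CCS-AAP and Improved Blowfish.
import math, os, hashlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Stand-in cipher: XOR with a SHA-256-derived keystream (not Blowfish)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution (max 8 bits/byte)."""
    counts = [data.count(b) for b in set(data)]
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts)

def select_key(plaintext: bytes, candidates: int = 50) -> bytes:
    """Pick the candidate key whose ciphertext has the highest entropy."""
    best_key, best_fit = None, -1.0
    for _ in range(candidates):
        key = os.urandom(16)
        fit = entropy(toy_encrypt(plaintext, key))
        if fit > best_fit:
            best_key, best_fit = key, fit
    return best_key

key = select_key(b"sensitive multimedia payload " * 8)
print(key.hex())
```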
Signcryption has drawn considerable attention due to its useful applications in many areas, particularly where computation and communication resources are constrained, for example, on lightweight devices. Traditional signcryption schemes do not support the homomorphic property. Recent work by Rezaeibagha et al. (ProvSec 2017) offered the first homomorphic signcryption scheme proven secure under certain restrictions. In this paper, we show that homomorphic signcryption can be extended to a provably secure broadcast signcryption scheme. We allow broadcast signcrypted data items to be aggregated without requiring decryption, which is a desirable feature in distributed environments.
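The homomorphic property at stake can be summarized as follows; the operations shown are a generic illustration of an additively homomorphic scheme, not necessarily the exact operations of the Rezaeibagha et al. construction.

```latex
% Generic homomorphism on signcryptexts: an operation \otimes on ciphertexts
% corresponds to an operation \oplus (e.g., addition) on plaintexts, so an
% aggregator can combine broadcast items without ever decrypting them.
\[
  \mathsf{SC}(m_1) \otimes \mathsf{SC}(m_2) = \mathsf{SC}(m_1 \oplus m_2),
  \qquad
  \bigotimes_{i=1}^{n} \mathsf{SC}(m_i) = \mathsf{SC}\!\Big(\bigoplus_{i=1}^{n} m_i\Big).
\]
```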
In recent years, intensive use of computing has become the main strategy of investigation in several scientific research projects. Progress in computing technology has opened unprecedented opportunities for the systematic collection of experimental data, and for associated analyses that were considered impossible only a few years ago.
This paper focuses on the strategies in use: it reviews the components necessary for an effective solution ensuring the storage, long-term preservation, and worldwide distribution of the large quantities of data required by a large scientific research project.
The paper also describes several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that must be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.
As genetic sequencing becomes less expensive and data sets linking genetic data and medical records (e.g., biobanks) become larger and more common, issues of data privacy and computational burden become more pressing to address in order to realize the benefits of these datasets. One possibility for alleviating these issues is the use of already-computed summary statistics (e.g., slopes and standard errors from a regression of a phenotype on a genotype). If groups share summary statistics from their analyses of biobanks, many of the privacy issues and computational challenges surrounding access to these data could be bypassed. In this paper we explore the possibility of using summary statistics from simple linear models of phenotype on genotype to make inferences about more complex phenotypes (those derived from two or more simple phenotypes). We provide exact formulas for the slope, intercept, and standard error of the slope of such linear regressions when combining phenotypes. The derived equations are validated via simulation and tested on a real data set exploring the genetics of fatty acids.
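To see why such exact formulas exist, note that ordinary least squares estimates are linear in the response. The sketch below assumes the derived phenotype is a linear combination z = ax + by and that all regressions use the same subjects and the same genotype g; the exact form of the paper's standard-error formula is not reproduced.

```latex
% Because \hat\beta = (X^\top X)^{-1} X^\top y is linear in y, regressing
% z = a x + b y on the same genotype gives, exactly,
\[
  \hat\beta_z = a\,\hat\beta_x + b\,\hat\beta_y,
  \qquad
  \hat\alpha_z = a\,\hat\alpha_x + b\,\hat\alpha_y .
\]
% The standard error additionally requires the covariance of the two slope
% estimates, which depends on the residual covariance of x and y:
\[
  \operatorname{Var}(\hat\beta_z)
  = a^2 \operatorname{Var}(\hat\beta_x)
  + b^2 \operatorname{Var}(\hat\beta_y)
  + 2ab\,\operatorname{Cov}(\hat\beta_x,\hat\beta_y).
\]
```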
With advances in cloud storage systems, users can access and manipulate data saved in the cloud without limitations of time and place. As the data owner no longer possesses the data physically, the owner must be able to verify the integrity of the data stored in the cloud using a public key issued by a public key infrastructure (PKI), so the security of the PKI and its certificates is essential. However, traditional PKI carries numerous security risks, and administering certificates is complex. In this paper, certificateless public key cryptography is used to solve these problems, and an elliptic curve group is used to reduce computation overhead. We design a certificateless public verification mechanism to check the integrity of data outsourced to the cloud, and we further extend it to support multiuser groups through batch verification. Specifically, a public verifier who checks integrity on behalf of the data owner does not need to manage any certificates during the verification process, nor to download the entire file for integrity checking. Theoretical analyses verify the security of our scheme, and experimental results show its efficiency.
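The property that the verifier need not download the file typically rests on homomorphic authenticators. The following is the generic challenge-response pattern (in the style of Shacham-Waters public auditing), shown only as background; it is not the paper's exact certificateless elliptic-curve construction.

```latex
% The verifier samples a random challenge Q = {(i, v_i)} over block indices;
% the prover aggregates the challenged blocks m_i and their tags \sigma_i:
\[
  \mu = \sum_{(i,\,v_i)\in Q} v_i\, m_i ,
  \qquad
  \sigma = \prod_{(i,\,v_i)\in Q} \sigma_i^{\,v_i}.
\]
% A single constant-size pair (\mu, \sigma) is then checked against the
% public key; in pairing-based variants the check has the form
\[
  e(\sigma,\, g) \;\stackrel{?}{=}\;
  e\Big(\prod_{(i,\,v_i)\in Q} H(i)^{v_i}\cdot u^{\mu},\; \mathrm{pk}\Big),
\]
% so the blocks m_i themselves never leave the cloud.
```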
Road side units (RSUs) can act as fog nodes that perform data aggregation at the edge of the network, reducing communication overhead and improving the utilization of network resources. However, because RSUs are public infrastructure, this role may introduce data security and privacy risks during aggregation. In this paper, we propose a secure multi-subinterval data aggregation scheme, named SMDA, with interval privacy preservation for vehicle sensing systems. Specifically, our scheme combines 1-R encoding theory and proxy re-encryption to protect interval privacy: the interval information is known only to the data center, and the RSU can classify encrypted data without knowing either the plaintext or the interval information. Meanwhile, our scheme employs Paillier homomorphic encryption to accomplish data aggregation at the RSU, and identity-based batch authentication to provide authentication and data integrity. Finally, security analysis and performance evaluations demonstrate the security and efficiency of our scheme.
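A minimal sketch of the Paillier aggregation step mentioned above follows: the product of ciphertexts decrypts to the sum of the plaintexts, so an RSU can aggregate readings it cannot read. The parameters are toy-sized for illustration, and the rest of SMDA (proxy re-encryption, interval encoding, batch authentication) is out of scope here.

```python
# Minimal Paillier sketch showing RSU-side aggregation over ciphertexts.
# Toy parameters only; requires Python 3.8+ for pow(x, -1, n).
from math import gcd
import random

p, q = 293, 433                               # demo primes, far too small for real use
n, n2 = p * q, (p * q) ** 2
g = n + 1                                     # standard choice g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                          # with g = n+1, L(g^lam) = lam mod n

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n            # L(x) = (x - 1) / n
    return (L * mu) % n

# Homomorphic aggregation: multiplying ciphertexts adds the plaintexts.
readings = [17, 42, 5]
aggregate = 1
for m in readings:
    aggregate = (aggregate * encrypt(m)) % n2
assert decrypt(aggregate) == sum(readings)
print(decrypt(aggregate))                     # -> 64
```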
In embedded multicore shared-memory systems, processing elements (PEs) are mutually untrusted since they carry out different computing tasks independently. Therefore, the sharing of secret constants (SCs) between PEs, as applied in existing confidentiality protection schemes, can leak nonshared data. Moreover, for integrity protection, building the checking tree over the whole counter space increases both memory occupation and the average verification delay. In this paper, we propose a ciphertext-sharing confidentiality protection scheme based on certificateless proxy re-encryption, together with an integrity protection scheme based on a multigranularity scalable hash tree, for secure data sharing between untrusted processing elements (SDSUP). With our schemes, the SC does not need to be shared and the scale of the checking tree is reduced, preventing the leakage of nonshared data and reducing the cost of integrity checks. Results from the Rice Simulator for ILP Multiprocessors (RSIM) multicore simulator show that, compared with an unprotected system, the performance degradation from the confidentiality protection scheme is 17.3% on average. The performance degradation of the integrity protection scheme is 12.89%, which is superior to 35.36% for the bonsai Merkle tree (BMT), 29.49% for the multigrained hash tree (MGT), and 21.82% for the multigranularity incremental hash tree (MIT).
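For background on the tree-based integrity checking being compared above, here is a generic Merkle hash tree sketch over memory blocks; the paper's multigranularity scalable tree and its counter handling are not reproduced, and the block contents are hypothetical.

```python
# Generic Merkle hash tree sketch for block integrity checking.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return the list of tree levels, leaves first, root last."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def verify_block(block, index, levels):
    """Recompute the path from one block to the root and compare."""
    node = h(block)
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = level[index ^ 1]
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == levels[-1][0]

blocks = [b"block-%d" % i for i in range(5)]
levels = build_tree(blocks)
print(verify_block(b"block-3", 3, levels))   # True
print(verify_block(b"tampered", 3, levels))  # False
```

Verifying a block touches only one root-to-leaf path, which is why shrinking the tree (as the paper's scheme does) directly reduces the average verification delay.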
Cloud computing allows for access to ubiquitous data storage and powerful computing resources through the use of web services. There are major concerns, however, with data security, reliability, and availability in the cloud. In this paper, we address these concerns by introducing a novel security mechanism for secure and fault-tolerant cloud information storage. The information storage model follows the RAID (Redundant Array of Independent Disks) concept by considering cloud service providers as independent virtual disk drives. As such, the model utilizes multiple cloud service providers as a cloud cluster for information storage, and a service directory for management of the cloud clusters including service query, key management, and cluster restoration. Our approach not only supports maintaining the confidentiality of the stored data, but also ensures that the failure or compromise of an individual cloud provider in a cloud cluster will not result in a compromise of the overall data set. To ensure a correct design, we present a formal model of the security mechanism using hierarchical colored Petri nets (HCPN), and verify some key properties of the model using model checking techniques.
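One simple way to realize the "compromise of one provider reveals nothing" property is (n, n) XOR secret sharing across the cluster, sketched below under that assumption; this covers only the confidentiality side (availability would additionally need redundant shares or erasure coding, in keeping with the RAID analogy), and the paper's HCPN model and service-directory machinery are out of scope.

```python
# Sketch of (n, n) XOR secret sharing across independent cloud providers:
# any single share on its own is uniformly random and reveals nothing.
import os
from functools import reduce

def split(data: bytes, n_providers: int):
    """n-1 random shares plus one share that XORs back to the data."""
    shares = [os.urandom(len(data)) for _ in range(n_providers - 1)]
    last = bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(data, *shares))
    return shares + [last]

def combine(shares):
    """XOR all shares together to recover the data."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*shares))

secret = b"cloud cluster payload"
shares = split(secret, 4)               # one share per provider
assert combine(shares) == secret        # all shares together recover the data
assert combine(shares[1:]) != secret    # any proper subset yields noise
```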
Microaggregation is a statistical disclosure control technique: raw microdata (i.e., individual records) are grouped into small aggregates prior to publication. With fixed-size groups, each aggregate contains k records to prevent disclosure of individual information. Individual ranking is a common criterion for reducing multivariate microaggregation to the univariate case: the idea is to perform microaggregation independently for each variable in the record. Using distributional assumptions, we show in this paper how to find interval estimates for the original data based on the microaggregated data. Such intervals can be considerably narrower than intervals obtained by subtraction of means, and can be useful for detecting a lack of security in a microaggregated data set. The analytical arguments given in this paper confirm recent empirical results about the insecurity of individual-ranking microaggregation.
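The mechanism under analysis is simple enough to sketch. The following shows univariate fixed-size microaggregation with group size k (sort, group consecutive values, publish group means); under individual ranking this is applied to each variable independently. The data values are hypothetical.

```python
# Sketch of univariate fixed-size microaggregation: each value is replaced
# by the mean of its size-k rank group before publication.
def microaggregate(values, k=3):
    """Replace each value by the mean of its size-k rank group."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    groups = [order[i:i + k] for i in range(0, len(order), k)]
    if len(groups) > 1 and len(groups[-1]) < k:   # keep every group >= k records
        groups[-2] += groups.pop()
    out = [0.0] * len(values)
    for group in groups:
        mean = sum(values[i] for i in group) / len(group)
        for i in group:
            out[i] = mean
    return out

# Individual ranking would run this independently per variable of the record.
ages = [23, 57, 31, 45, 29, 62, 38, 41]
print(microaggregate(ages, k=3))
```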
This cryptosystem considers RGB images for two-dimensional (2D) data security. The security of RGB images during transmission is a major, globally discussed concern. This paper proposes a novel technique for color image security using a random Hill cipher (RHC) over the special linear group SL_n(𝔽), combined with the 2D discrete wavelet transform. Existing techniques base image security on the keys alone, providing only one layer of security, but in the proposed cryptosystem both the keys and the arrangement of the RHC parameters are required for correct decryption of the color image data. Additionally, the side of the key multiplication (pre or post) with the RGB image data must be known to correctly decrypt the encrypted image. The proposed cryptosystem therefore provides three layers of security for RGB image data. In this approach, keys are drawn from the special linear group over a field 𝔽, which provides an enormous key space for the cryptosystem. Computer simulations on standard examples are given to support the feasibility of the scheme. A security analysis and a detailed comparison between previously developed techniques and the proposed cryptosystem are also presented to demonstrate its robustness. This method has large potential for use in digital RGB image processing and the security of image data.
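A minimal Hill-cipher sketch with a key from SL_2(Z_p) follows, illustrating why drawing keys from the special linear group is convenient: determinant 1 guarantees the key is invertible, and the inverse is just the adjugate. The paper's random parameter arrangement, pre/post multiplication layer, and 2D DWT stage are not reproduced, and the pixel values are hypothetical.

```python
# Minimal Hill cipher over SL_2(Z_p): det(K) = 1 mod p guarantees
# invertibility, and K^{-1} is simply the adjugate matrix mod p.
p = 257                                 # small prime field for illustration

def sl2_inverse(K):
    """For K in SL_2 (det = 1), the inverse is the adjugate mod p."""
    (a, b), (c, d) = K
    return [[d % p, -b % p], [-c % p, a % p]]

def mat_vec(K, v):
    return [(K[0][0] * v[0] + K[0][1] * v[1]) % p,
            (K[1][0] * v[0] + K[1][1] * v[1]) % p]

K = [[2, 3], [3, 5]]                    # det = 2*5 - 3*3 = 1, so K is in SL_2
assert (K[0][0] * K[1][1] - K[0][1] * K[1][0]) % p == 1

pixels = [12, 200, 34, 99]              # toy stream of channel values (< p)
cipher = [x for i in range(0, len(pixels), 2)
            for x in mat_vec(K, pixels[i:i + 2])]
plain = [x for i in range(0, len(cipher), 2)
           for x in mat_vec(sl2_inverse(K), cipher[i:i + 2])]
assert plain == pixels
print(cipher)
```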