Cloud storage makes it easy for individuals and organizations to share data over the Internet. However, several security issues deter users from outsourcing their data. Among the various approaches introduced to address these issues, attribute-based encryption (ABE) provides secure and flexible access control over shared data and is therefore quite promising. However, the original ABE is not well suited to settings where attributes are organized in a hierarchical structure, such as enterprises and official institutions. Moreover, although the wide use of mobile devices enables users to conveniently access shared data anywhere and anytime, it also increases the risk of key exposure, which can result in unwanted disclosure of the shared data. In this paper, we extend the functionality of the original ABE and enhance its security by providing key generation delegation and forward security. Consequently, the enhanced ABE meets the needs of large organizations with hierarchies and minimizes the damage caused by unexpected key exposures. Specifically, we present a forward-secure ciphertext-policy hierarchical attribute-based encryption scheme in prime-order bilinear groups as the core building block of an attribute-based data sharing scheme. The security of the proposed scheme is proven in the standard model. We conduct experiments to demonstrate its efficiency and practicality.
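The access-structure side of ciphertext-policy ABE can be illustrated with a small sketch. This is not the paper's scheme and contains no cryptography; the attribute names and policy shape are hypothetical, and it only shows how a ciphertext's policy tree is (or is not) satisfied by a key's attribute set — in real CP-ABE this check is bound into the bilinear-group math.

```python
# Toy illustration of CP-ABE access structures: a ciphertext carries a
# policy tree over attributes, and a key's attribute set "decrypts" only
# if it satisfies the tree. All names here are hypothetical examples.

def satisfies(policy, attrs: set) -> bool:
    """policy is ('AND', p1, p2), ('OR', p1, p2), or an attribute string."""
    if isinstance(policy, str):
        return policy in attrs
    op, left, right = policy
    if op == 'AND':
        return satisfies(left, attrs) and satisfies(right, attrs)
    return satisfies(left, attrs) or satisfies(right, attrs)

# Hierarchical attributes could be encoded as prefixed strings,
# e.g. "dept:finance" delegated down to "dept:finance:audit".
policy = ('AND', 'role:manager', ('OR', 'dept:finance', 'dept:legal'))
print(satisfies(policy, {'role:manager', 'dept:finance'}))  # True
print(satisfies(policy, {'role:manager', 'dept:it'}))       # False
```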
Driven by mutual benefits, there is a demand for sharing transactional data among organizations for research or business analysis purposes. Because transactional data may contain sensitive personal information, providing privacy-preserving data sharing while maintaining data utility has become an essential concern. Existing privacy-preserving methods, such as k-anonymity and l-diversity, do not handle high-dimensional sparse data well, since they introduce substantial data distortion during anonymization. In this paper, we use bipartite graphs with node attributes to model high-dimensional sparse data and propose a new privacy-preserving approach for sharing transactional data, in which the bipartite graph is anonymized into a weighted bipartite graph by clustering node attributes. Our approach preserves the privacy of associations between entities and resists attackers with knowledge of partial items. Experiments on real-life data sets measure the information loss and the accuracy of answering aggregate queries. The results show that the approach improves the balance between privacy protection and data utility.
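The anonymization idea can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: entity nodes on one side of the bipartite graph are grouped into clusters of k members using a placeholder rule (fixed-size chunks rather than the paper's attribute-based clustering), and per-entity edges are replaced by weighted cluster-to-item edges.

```python
# Minimal sketch: anonymize a bipartite user-item graph into a weighted
# bipartite graph by grouping users and aggregating their edges. The
# grouping rule (k users per cluster, in sorted order) is a placeholder.

from collections import Counter

def anonymize(edges, k=2):
    """edges: iterable of (user, item). Returns {(cluster_id, item): weight}."""
    users = sorted({u for u, _ in edges})
    cluster_of = {u: i // k for i, u in enumerate(users)}  # k users per cluster
    weighted = Counter((cluster_of[u], item) for u, item in edges)
    return dict(weighted)

edges = [("alice", "milk"), ("alice", "bread"),
         ("bob", "milk"), ("carol", "eggs"), ("dave", "milk")]
print(anonymize(edges, k=2))
# {(0, 'milk'): 2, (0, 'bread'): 1, (1, 'eggs'): 1, (1, 'milk'): 1}
```

The published graph reveals only that *some* member of a cluster bought an item, not which one, which is the association privacy the abstract describes.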
The obvious need to use modern computer networking capabilities to enable effective information sharing has resulted in data-sharing systems, which store and manage large amounts of data. These data need to be effectively searched and analyzed. In particular, in the presence of dirty data, a search for specific information by a standard query (e.g., a search for a name that is misspelled or mistyped) does not return all needed information, as required in homeland security, criminology, and medical applications, among others. Different techniques, such as soundex, phonix, n-grams, and edit distance, have been used to improve the matching rate in these name-matching applications. These techniques have demonstrated varying levels of success, but there is a pressing need for name-matching approaches that provide high accuracy while maintaining low computational complexity. In this paper, such a technique, called ANSWER, is proposed and its characteristics are discussed. Our results demonstrate that ANSWER achieves both high accuracy and high speed, and is superior to other techniques for retrieving fuzzy name matches in large databases.
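Two of the baseline techniques named above, edit distance and n-grams, can be sketched briefly; ANSWER itself is not reproduced here, only the standard measures it is compared against.

```python
# Two classic fuzzy name-matching measures: Levenshtein edit distance
# (dynamic programming) and the Dice coefficient over character n-grams.

def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def ngram_similarity(a: str, b: str, n: int = 2) -> float:
    """Dice coefficient over sets of character n-grams."""
    grams = lambda s: {s[i:i + n] for i in range(len(s) - n + 1)}
    ga, gb = grams(a.lower()), grams(b.lower())
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

print(edit_distance("Johnson", "Jonson"))              # 1
print(ngram_similarity("Smith", "Smyth"))              # 0.5
```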
There is significant interest among neuroscientists in sharing neuroscience data and analytical tools. The exchange of data and tools between groups affords the opportunity to re-analyze previously collected data in new ways, encourages new neuroscience interpretations, fosters otherwise uninitiated collaborations, and provides a framework for the further development of theoretically based models of brain function. Data sharing will ultimately reduce experimental and analytical error. Many small Internet-accessible database initiatives have been developed, and specialized analytical software and modeling tools are distributed within different fields of neuroscience. In addition, however, large-scale international collaborations are required, which involve new mechanisms of coordination and funding. Provided sufficient government support is given to such international initiatives, sharing of neuroscience data and tools can play a pivotal role in human brain research and lead to innovations in neuroscience, informatics, and the treatment of brain disorders. These innovations will enable the application of theoretical modeling techniques to enhance our understanding of the integrative aspects of neuroscience. This article, authored by a multinational working group on neuroinformatics established by the Organisation for Economic Co-operation and Development (OECD), articulates some of the challenges and lessons learned to date in efforts to achieve international collaborative neuroscience.
With the trend toward smart maintenance of the electric multiple unit (EMU), there is a critical need for cross-organisational data sharing among multiple stakeholders. Traditional centralised solutions may not ensure sufficient trust for data sharing. In the maintenance field, blockchain technology has been introduced to improve security and privacy. However, current blockchain solutions for data sharing are based solely on a public chain or a consortium chain, which may suffer from a low transaction processing rate. Moreover, multiple levels of decentralisation are required in cross-organisational data sharing. To provide multiple types of ledger governance and improved performance, this paper presents a method for developing an EMU maintenance data-sharing solution based on a double blockchain. The method is validated through a case study in the Shanghai EMU depot. The proposed solution adopts practical Byzantine fault tolerance (PBFT) as the consensus mechanism, and the InterPlanetary File System (IPFS) is introduced to reduce the payload of big data storage.
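The off-chain storage pattern mentioned above can be sketched as follows. This is a hedged illustration, not the paper's system: the record structure and field names are hypothetical, the "CID" is a plain SHA-256 stand-in for a real IPFS content identifier, and PBFT consensus is out of scope.

```python
# Minimal sketch of the IPFS + blockchain pattern: large maintenance
# records live in content-addressed storage, and only the content hash
# (plus a link to the previous block) is written on-chain.

import hashlib, json

def content_address(record: dict) -> str:
    """Stand-in for an IPFS CID: SHA-256 of the record's canonical JSON."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, record_cid: str) -> dict:
    """Hash-linked block header carrying only the off-chain pointer."""
    header = {"prev": prev_hash, "cid": record_cid}
    header["hash"] = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return header

record = {"unit": "EMU-042", "action": "bogie inspection"}  # hypothetical record
genesis = make_block("0" * 64, content_address(record))
print(genesis["cid"] == content_address(record))  # True: record verifiable off-chain
```

Any stakeholder holding the record can recompute its address and check it against the chain, which is what gives the shared ledger its trust without storing the bulky data on-chain.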
The rapid growth in the quantity of data produced by the connected devices of Internet of Things (IoT) models has opened new potential to improve service quality for emerging applications through data sharing. However, privacy is the main concern that discourages data providers from sharing their data, and the leakage of confidential data causes severe problems beyond the providers' financial losses. A blockchain-based secure data-sharing model is devised for dealing with various kinds of parties. The data-sharing problem is formulated as a machine learning problem by adopting federated learning (FL), so that data privacy is preserved by sharing learned models rather than exposing the genuine data. Finally, FL is integrated into the consensus task of a permissioned blockchain to accomplish federated training. Model learning is executed using a deep maxout network (DMN), which is trained using jellyfish search African vultures optimization (JSAVO). Moreover, data-sharing records are generated to share data between data providers and requestors. The proposed JSAVO-based DMN outperformed competing methods, achieving an accuracy of 93.3%, an FPR of 0.054, a loss of 0.067, a mean square error (MSE) of 0.346, a mean average precision of 94.6, an RMSE of 0.589, a computational time of 17.47 s, and a memory usage of 48.62 MB.
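The federated learning step can be sketched with the standard FedAvg pattern: each provider trains locally and only model weights, never raw data, are shared and averaged. This is a minimal illustration under simplifying assumptions; the paper's DMN model and JSAVO optimizer are replaced here by a one-parameter least-squares model and plain gradient descent.

```python
# Minimal federated-averaging (FedAvg) sketch: providers keep their data
# local and exchange only model weights with the aggregator.

def local_update(weights, data, lr=0.1):
    """One gradient step of least-squares y ~ w*x on a provider's own data."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def fed_avg(global_w, providers, rounds=20):
    for _ in range(rounds):
        local_ws = [local_update(global_w, data) for data in providers]
        global_w = sum(local_ws) / len(local_ws)   # aggregate; raw data never moves
    return global_w

# Two providers whose private data both follow y = 3x (illustrative)
providers = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = fed_avg(0.0, providers)
print(round(w, 2))  # 3.0
```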
Peer data sharing systems use either schema-level or data-level mappings to resolve schema and data heterogeneity among data sources (peers). Schema-level mappings create structural relationships among different schemas; data-level mappings associate data values in two different sources. These two kinds of mappings are complementary, yet existing peer database systems have been based on only one of them. We believe that if both mappings are addressed simultaneously in a single framework, the resulting approach will enhance data sharing and overcome the limitations of the non-combined approaches.
In this paper, we present a model of a peer database management system that allows a bi-level mapping, combining schema-level and data-level mappings into a single relational framework. We present the syntax and semantics of this new kind of mapping. Furthermore, we present an algorithm for query translation that uses the bi-level mappings. Our algorithm relies on tableaux for expressing both queries and mappings.
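The bi-level idea can be sketched concretely. This is an illustration only: the attribute and value names are invented, and the paper's tableau-based translation algorithm is reduced here to rewriting a single equality-selection query through the two mapping levels in turn.

```python
# Sketch of bi-level mappings between two peers A and B: a schema-level
# mapping renames attributes, and a data-level mapping translates values.
# All names below are hypothetical.

schema_map = {"name": "full_name", "dept": "department"}   # A's attr -> B's attr
value_map = {("dept", "CS"): "Computer Science",           # A's value vocabulary
             ("dept", "EE"): "Electrical Engineering"}     # -> B's vocabulary

def translate_query(attr, value):
    """Translate A's selection (attr = value) into B's terms."""
    b_attr = schema_map.get(attr, attr)             # schema-level step
    b_value = value_map.get((attr, value), value)   # data-level step
    return b_attr, b_value

print(translate_query("dept", "CS"))   # ('department', 'Computer Science')
```

Applying only one mapping level would leave the query half-translated (right column name, wrong value, or vice versa), which is the limitation of the non-combined approaches the abstract points to.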
In data sharing systems, peers are acquainted through pair-wise data sharing settings/mappings for sharing and exchanging data. Besides query processing, supporting update exchange between peers is one of the challenging problems in data sharing systems. In update exchange, an update action posed to a peer is applied to the peer's local database instance and then propagated to the related peers. Previous work on update exchange has considered update propagation under schema-level mappings between peers, which is conceptually similar to the view maintenance problem. However, there are data sharing systems where peers are acquainted through instance-level mappings. In such systems, peers use different schemas and data vocabularies to represent semantically identical real-world entities, and the instance-level mappings express how data in one peer relate to data in another. One of the problems in exchanging updates in instance-mapped data sharing systems is to translate updates correctly between heterogeneous peers: the insertions, deletions, and modifications made by an update in a peer and by the translated version of the update in an acquainted peer should be related through the mappings between them. In this paper, we investigate such a mechanism for translating update actions between heterogeneous peer data sources. Before discussing the translation mechanism, the paper first formalizes the notion of update translation and derives conditions under which the mechanism produces correct translations of updates.
Clinical trials generate a large amount of data that has been underutilized due to obstacles that prevent data sharing, including risks to patient privacy, data misrepresentation, and invalid secondary analyses. To address these obstacles, we developed a novel data sharing method that ensures patient privacy while also protecting the interests of clinical trial investigators. Our flexible and robust approach involves two components: (1) an advanced cloud-based querying language that allows users to test hypotheses without direct access to the real clinical trial data, and (2) corresponding synthetic data for the query of interest that allows for exploratory research and model development. Both components can be modified by the clinical trial investigator depending on factors such as the type of trial or the number of patients enrolled. To test the effectiveness of our system, we first implement a simple and robust permutation-based synthetic data generator. We then use the synthetic data generator coupled with our querying language to identify significant relationships among variables in a realistic clinical trial dataset.
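A permutation-based synthetic data generator of the simplest kind can be sketched as follows. This is a generic illustration, not the authors' implementation: each column is shuffled independently, which preserves every marginal distribution exactly while breaking the cross-variable links that could identify a patient. The column names are invented for the example.

```python
# Sketch of a permutation-based synthetic data generator: shuffle each
# column independently so marginals are preserved but rows no longer
# correspond to real patients.

import random

def permute_columns(rows, seed=0):
    """rows: list of dicts with identical keys. Returns synthetic rows."""
    rng = random.Random(seed)
    cols = {k: [r[k] for r in rows] for k in rows[0]}
    for values in cols.values():
        rng.shuffle(values)              # each column shuffled on its own
    return [{k: cols[k][i] for k in cols} for i in range(len(rows))]

real = [{"age": 34, "arm": "drug"}, {"age": 51, "arm": "placebo"},
        {"age": 47, "arm": "drug"}]
synthetic = permute_columns(real)
print(sorted(r["age"] for r in synthetic))  # same ages, relinked: [34, 47, 51]
```

Note that independent shuffling also destroys genuine correlations between variables, which is why the abstract pairs the generator with a querying language run against the real data for hypothesis testing.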
With the decreasing cost of biomedical technologies, the scale of genetic and healthcare data has increased exponentially, and such data have become available to wider audiences. Hence, the privacy of patients and study participants has garnered the attention of researchers and regulators alike. The availability of genetic and healthcare information for uses not anticipated at the time of collection gives rise to privacy concerns: people suffer dignitary harm when their data are used in ways they did not desire or intend, even if no concrete economic damage results. In this workshop, we explore the issues surrounding data use to advance human health from a privacy perspective. Broadly, this field spans two areas: (1) ethics and regulation of privacy: the ethical and regulatory frames through which privacy can be considered, existing and forthcoming privacy regulations, and the implementation of such ethical considerations for data under the new Common Rule; and (2) approaches to ensuring privacy using technology: technologies that allow responsible use and sharing of data, such as encryption, and the quantification of privacy leakage in publicly available data through privacy attacks, enabling better risk-assessment tools.
Open banking regulatory regimes represent an attempt by regulators to take advantage of a wave of digitalization and its associated sea of data to encourage more dynamic and efficient financial services sectors. This chapter explores Australia’s Consumer Data Right regime and how its planned expansion beyond financial services could propel fundamental changes to large sectors of Australia’s economy and society. We explore why these regimes are being put into place now and what these changes could look like, including potential consequences. Lastly, we provide recommendations on how regulators should build data-sharing regulatory regimes. To best achieve the goals of encouraging data-leveraged socioeconomic development, financial inclusion, and innovation through data-sharing requirements, we make three recommendations. First, regulators must ensure the burdens and benefits of these regimes are consistent and fair, as it is very easy to replace one asymmetry with another. Second, consumers must be at the center: their benefit and safety are paramount to the adoption and success of an economy that properly leverages consumer data. Third, open banking regimes must be incorporated as part of a larger national data strategy, taking account of other technological developments, regulations, and trends, if they are to be effectively adopted.
One of the major obstacles to querying distributed textile data is semantic heterogeneity. Our work contributes to the Semantic Web and to textile information interoperability. This paper presents a textile information sharing system architecture and an ontology-based solution, illustrated by the example of three-dimensional body scanning data sharing and apparel classification retrieval. It describes our system, which provides textile information sharing, publishing, and retrieval over heterogeneous textile data sets distributed over the Web. We focus on resolving semantic heterogeneity at the value level to accommodate distributed data sets with conceptually similar attributes whose values are drawn from diverse domains. The proposed system can effectively overcome the structural and semantic heterogeneity of local three-dimensional body scanning databases. By using ontology technology and domain rules, it also offers intelligent, semantic retrieval tools.
Research data sharing is one of the most interesting and challenging issues for researchers in the academic community. To investigate the effect of individual characteristics and organizational contexts on data sharing and reuse behaviors, this study performed a secondary analysis of survey data. The study found that older researchers and those who allocate a lower percentage of their work time to research are more likely to share data and to show a positive attitude toward data sharing. It also found that academic researchers are likely to share data if their funding agency requires them to provide a data management plan and if their organization or project provides the necessary funds and processes to support data management.
A growing number of academic and community clinics are conducting genomic testing to inform treatment decisions for cancer patients (1). In the last 3-5 years, there has been a rapid increase in the clinical use of next-generation sequencing (NGS) based cancer molecular diagnostic (MolDx) testing (2). The increasing availability and decreasing cost of tumor genomic profiling mean that physicians can now make treatment decisions armed with patient-specific genetic information. Accumulating research in cancer biology indicates that there is significant potential to improve cancer patient outcomes by effectively leveraging this rich source of genomic data in treatment planning (3). To achieve truly personalized medicine in oncology, it is critical to catalog cancer sequence variants from MolDx testing for their clinical relevance, along with treatment information and patient outcomes, and to do so in a way that supports large-scale data aggregation and new hypothesis generation. One critical challenge in encoding variant data is adopting a standard for annotating those variants that are clinically actionable. Through the NIH-funded Clinical Genome Resource (ClinGen) (4), in collaboration with NLM’s ClinVar database and more than 50 academic and industry-based cancer research organizations, we developed the Minimal Variant Level Data (MVLD) framework to standardize reporting and interpretation of drug-associated alterations (5). We are currently involved in collaborative efforts to align the MVLD framework with parallel, complementary clinical guidelines for sequence variant interpretation from the Association for Molecular Pathology (AMP) for clinical labs (6). To truly democratize access to MolDx data for care and research needs, these standards must be harmonized to support sharing of clinical cancer variants.
Here we describe the processes and methods developed within ClinGen's Somatic WG, in collaboration with over 60 cancer care and research organizations as well as CLIA-certified, CAP-accredited clinical testing labs, to develop standards for cancer variant interpretation and sharing.
The importance of open data has been increasingly recognized in recent years. Although the sharing and reuse of clinical data for translational research lags behind best practices in biological science, a number of patient-derived datasets exist and have been published, enabling translational research spanning multiple scales, from the molecular to the organ level and from patients to populations. In seeking to replicate metabolomic biomarker results in Alzheimer’s disease, our team identified three independent cohorts in which to compare findings. Accessing the datasets associated with these cohorts, understanding their content and provenance, and comparing variables between studies was a valuable exercise in exploring the principles of open data in practice. It also helped inform steps taken to make the original datasets available for use by other researchers. In this paper, we describe best practices and lessons learned in attempting to identify, access, understand, and analyze these additional datasets to advance research reproducibility, as well as steps taken to facilitate sharing of our own data.