This study explores digitalization in the healthcare sector and presents insights on the use of data for developing and organizing preventive healthcare services for young people, which has been identified as one of the most crucial issues for public decision-makers to tackle. Conducted as a qualitative case study with municipal decision-makers, this study highlights the needs and implications of data as a complex, systemic issue with far-reaching long-term impact, where the right kind of data could improve the health and well-being of society well beyond the healthcare domain.
As biomedical research data grow, researchers need reliable and scalable solutions for storage and compute. There is also a need to build systems that encourage and support collaboration and data sharing, resulting in greater reproducibility. This has led many researchers and organizations to use cloud computing [1]. The cloud not only provides scalable, on-demand resources for storage and compute, but also supports collaboration and continuity during virtual work, and can offer superior security and compliance features. Moving to or adding cloud resources, however, is not trivial or without cost, and may not be the best choice in every scenario. The goal of this workshop is to explore the benefits of using the cloud in biomedical and computational research, along with considerations (pros and cons) for a range of scenarios including individual researchers, collaborative research teams, consortia research programs, and large biomedical research agencies/organizations.
The availability of multiple publicly available datasets studying the same phenomenon has the promise of accelerating scientific discovery. Meta-analysis can address issues of reproducibility and often increases power. The promise of meta-analysis is especially germane to rarer diseases like cystic fibrosis (CF), which affects roughly 100,000 people worldwide. A recent search of the National Institutes of Health's Gene Expression Omnibus revealed 1.3 million data sets related to cancer compared to about 2,000 related to CF. These studies are highly diverse, involving different tissues, animal models, treatments, and clinical covariates. In our search for gene expression studies of primary human airway epithelial cells, we identified three studies with compatible methodologies and sufficient metadata: GSE139078, the Sala Study, and PRJEB9292. Even so, experimental designs were not identical, and we identified significant batch effects that would have complicated functional analysis. Here we present quantile discretization and Bayesian network construction using hill climbing as a powerful tool to overcome experimental differences and reveal biologically relevant responses to the CF genotype itself and to exposure to viruses, bacteria, and drugs used to treat CF. Functional patterns revealed by clusterProfiler included interferon signaling, interferon gamma signaling, interleukin 4 and 13 signaling, interleukin 6 signaling, interleukin 21 signaling, and inactivation of CSF3/G-CSF signaling, all showing significant alterations. These pathways were consistently associated with higher gene expression in CF epithelial cells compared to non-CF cells, suggesting that targeting these pathways could improve clinical outcomes. The success of quantile discretization and Bayesian network analysis in the context of CF suggests that these approaches might be applicable to other contexts where exactly comparable data sets are hard to find.
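As a rough illustration of the approach named in this abstract, the sketch below discretizes expression values into quantile bins and then learns a Bayesian network structure by hill climbing. It is a minimal sketch, not the authors' pipeline: the pgmpy library, the BIC score, three-bin discretization, and the toy gene list are all assumptions introduced here (the abstract's downstream functional analysis uses the R package clusterProfiler, which is not reproduced).

```python
# Minimal sketch: quantile discretization followed by hill-climb Bayesian
# network structure learning. The expression matrix, gene names, bin count,
# and scoring function are illustrative choices, not the authors' pipeline.
import numpy as np
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

# Toy expression matrix: rows = samples, columns = genes (hypothetical set).
rng = np.random.default_rng(0)
expr = pd.DataFrame(
    rng.normal(size=(60, 4)),
    columns=["IFNG", "IL6", "IL13", "CSF3"],
)

# Quantile discretization: map each gene's values into equal-frequency bins,
# so that samples from different studies are compared on rank rather than
# raw scale, which mitigates study-specific batch effects.
binned = expr.apply(
    lambda col: pd.qcut(col, q=3, labels=["low", "mid", "high"])
).astype(str)

# Hill-climb search over candidate network structures, scored by BIC.
search = HillClimbSearch(binned)
dag = search.estimate(scoring_method=BicScore(binned))
print(sorted(dag.edges()))
```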
This chapter focuses on establishing the conceptual foundation of the macroscopic perspective (structure and processes) of human thinking systems, that is, the conceptualization of the general information theory (a component of the complexity-intelligence strategy). In this analysis, the human thinking system is perceived as comprising two components, namely the energy–matter subsystem (the natural component) and the physical symbol subsystem (an artificial component unique to humanity). The general procedure by which the human mind handles and exploits one or more physical symbol systems (symbol manipulation) is analyzed. The conceptual development encompassing the creation of the character set, the capture and transformation of data and information, the acquisition of knowledge and the emergence of wisdom (the four external entities), and the significance of a written language, as well as the additional associated functions, is investigated.
The unique ability to create a character set is confined to humanity, indicating that human thinking systems are the most intense intelligence sources on this planet. The intrinsic and interactive properties of the character set and the language depict the characteristics and sophistication/complexity of the physical symbol system. Besides interacting among themselves, these external entities (data, information, knowledge and wisdom) also interact with the natural entities, the information-coded energy quanta, according to certain rules and principles. Subsequently, the energy quanta interact with the information-coded matter structure (the brain's complex neural network). This unique ability (the presence of a written language) allows knowledge to be stored externally, and abstract concepts (including theory) and complex strategizing to emerge for the first time in the human world. In this respect, in the intense human intelligence source (individual or collective), information and knowledge exist in physical, energy and matter forms, and they are inter-convertible (internalization and externalization). The interactions among the six entities and the conversion from one form to another vividly reveal the uniqueness of the human thinking system, as well as the orgmind, greatly redefining the interactions and dynamics within humanity and its organizations.
Subsequently, the boundaries and objectives of human thinking systems as information processing systems, and the necessity of artificial information systems, are conceived as four postulates. In the intelligent organization theory, a deeper comprehension of human thinking systems (complex adaptive systems) is vital, as they are the key assets closely associated with the nurturing of highly intelligent human organizations (iCAS), including the coherency and contributions of a multi-layer intelligent structure (intelligent biotic macro-structure and agent-agent/system micro-structure). In this respect, the cognitive perspective must be better comprehended and utilized.
This chapter focuses on the macroscopic perspective of the human thinking system. The human thinking system is perceived as comprising two subsystems, namely the energy–matter subsystem and the physical symbol subsystem. The procedure by which the human mind handles one or more physical symbol systems is discussed. The conceptual development encompasses the transformation of data, information, knowledge and wisdom, and how a language emerges. The boundary of a human thinking system and the necessity of artificial information systems are also covered. A better understanding of the human thinking system and the conceptualization of the general information theory are vital to the nurturing of highly intelligent human organizations.
The Data Quality Monitoring (DQM) system of the Compact Muon Solenoid (CMS) experiment is designed to monitor the running conditions and performance of the detectors during online data taking as well as in the offline reconstruction. The DQM framework provides all the tools necessary for the creation, filling and visualisation of data quality information. Each sub-system is responsible for defining its own algorithms and analysis modules to produce the information needed for effective monitoring.
A description of the general DQM system is given, with an emphasis on the Tracker-specific case. A report on the experience gained during cosmic data taking is also presented.
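To make the book/fill workflow concrete, here is a deliberately generic sketch of the pattern a DQM-style framework exposes to its sub-systems: elements are booked once in a central store, filled per event, and later rendered by a separate client. This is not the CMS DQM interface (which is implemented in C++); MonitorStore, book_1d, Histogram1D and all other names are hypothetical.

```python
# Schematic illustration of a DQM-style book/fill workflow. All names are
# hypothetical; the real CMS DQM framework is C++ and its API differs.
from dataclasses import dataclass, field


@dataclass
class Histogram1D:
    """A monitor element: a 1D histogram with fixed, equal-width binning."""
    name: str
    nbins: int
    lo: float
    hi: float
    counts: list = field(default_factory=list)

    def __post_init__(self):
        self.counts = [0] * self.nbins

    def fill(self, value: float) -> None:
        # Increment the bin containing `value`; out-of-range values are dropped.
        if self.lo <= value < self.hi:
            width = (self.hi - self.lo) / self.nbins
            self.counts[int((value - self.lo) / width)] += 1


class MonitorStore:
    """Central registry where each sub-system books its monitor elements."""

    def __init__(self):
        self._elements = {}

    def book_1d(self, name: str, nbins: int, lo: float, hi: float) -> Histogram1D:
        self._elements[name] = Histogram1D(name, nbins, lo, hi)
        return self._elements[name]


# A sub-system (e.g., the Tracker) books its elements once, then fills them
# per event during data taking; a separate client would visualise them.
store = MonitorStore()
cluster_charge = store.book_1d("Tracker/clusterCharge", nbins=50, lo=0.0, hi=500.0)
for event_charge in (120.3, 87.1, 243.9):  # stand-in for real event data
    cluster_charge.fill(event_charge)
```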
Two worldwide developments have opened novel reflections in a vast scientific literature concerning a universal concept, that of "data", understood as the results of observations, whether experimental or generated by the human mind: "Big Data" and the universal use of informatics, which are related to each other and both increasing exponentially.
The capacity and speed of modern computers allow us to obtain immense amounts of data, both from experimental setups and from the execution of algorithms, whether human-built or produced through AI. The handling of these data is also necessarily carried out via informatics means, considered distinct from mathematical means. Apparently, these facts have disconnected data evaluation from its traditional fields, affecting not only science but also the way decisions are made on the basis of data.
This paper contains reflections on the properties and meaning of data in themselves, specifically as the vehicle of information almost universally necessary for making decisions; decision-making is a frame that is intermixed with, and often prevalent in, the literature on big data.