It was long assumed that the pseudorandom distribution of prime numbers was free of biases. Specifically, while the prime number theorem gives an asymptotic measure of the probability of finding a prime number, and Dirichlet’s theorem on arithmetic progressions tells us about the distribution of primes across residue classes, there was no reason to believe that consecutive primes might “know” anything about each other: that they might, for example, tend to avoid ending in the same digit. Here, we show that the Iterated Function System (IFS) method can be a surprisingly useful tool for revealing such unintuitive results and, more generally, for studying structure in number theory. Our experimental findings from a study in 2013 include fractal patterns that reveal “repulsive” phenomena among primes in a wide range of classes having specific congruence properties. Some of the phenomena shown in our computations and interpretation relate to more recent work by Lemke Oliver and Soundararajan on biases between consecutive primes. We explore and extend those results by demonstrating how IFS points to the precise manner in which such biases behave from a dynamical standpoint. We also show that, surprisingly, composite numbers can exhibit a notably similar bias.
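The last-digit bias described above is easy to check numerically. The following sketch does not reproduce the IFS visualization itself; it merely tallies last-digit transitions between consecutive primes (all function names are illustrative, not from the paper). If consecutive primes were independent, same-digit pairs would make up roughly a quarter of all transitions among the residues {1, 3, 7, 9}; empirically the fraction is well below that.

```python
from collections import Counter

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def last_digit_transitions(limit):
    """Tally (last digit of p, last digit of q) over consecutive primes p, q > 5."""
    ps = [p for p in primes_up_to(limit) if p > 5]  # last digits lie in {1, 3, 7, 9}
    return Counter((p % 10, q % 10) for p, q in zip(ps, ps[1:]))

counts = last_digit_transitions(1_000_000)
total = sum(counts.values())
same = sum(v for (a, b), v in counts.items() if a == b)
same_fraction = same / total
# Under independence, same-digit pairs would be ~25% of transitions;
# the observed fraction is noticeably smaller, reflecting the "repulsion".
print(f"same-digit fraction: {same_fraction:.3f}")
```

An IFS rendering of the same data would map each transition to a contraction toward one of four corners (one per residue class); the underrepresented same-digit transitions then appear as visibly sparse regions of the resulting fractal.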
We introduce an information-theoretic framework for a quantitative measure of originality to model the impact of various classes of biases, errors, and error corrections on scientific research. Several open problems are also outlined.
Innovations in human-centered biomedical informatics are often developed with the eventual goal of real-world translation. While biomedical research questions are usually answered in terms of how a method performs in a particular context, we argue that it is equally important to consider and formally evaluate the ethical implications of informatics solutions. Several new research paradigms have arisen from the consideration of ethical issues, including but not limited to privacy-preserving computation and fair machine learning. In the spirit of the Pacific Symposium on Biocomputing, we discuss broad and fundamental principles of ethical biomedical informatics in terms of ʻŌlelo Noʻeau, or Hawaiian proverbs and poetical sayings, that capture Hawaiian values. While we emphasize issues related to privacy and fairness in particular, there are many facets of ethical biomedical informatics that can benefit from a critical analysis grounded in ethics.
Artificial Intelligence (AI) algorithms have the potential to drive a paradigm shift in clinical medicine, especially in medical imaging. Concerns about model generalizability and bias necessitate rigorous external validation of AI algorithms before their adoption into clinical workflows. To address the barriers associated with patient privacy, intellectual property, and diverse model requirements, we introduce ClinValAI, a framework for establishing robust cloud-based infrastructures to clinically validate AI algorithms in medical imaging. By featuring dedicated workflows for data ingestion, algorithm scoring, and output processing, we propose an easily customizable method to assess AI models and investigate biases. A novel orchestration mechanism lets the framework exploit the full potential of the cloud computing environment. ClinValAI’s input auditing and standardization mechanisms ensure that only inputs consistent with a model’s prerequisites reach the algorithm, streamlining validation. The scoring workflow comprises multiple steps to support consistent inference and systematic troubleshooting. The output processing workflow identifies and analyzes samples with missing results and aggregates final outputs for downstream analysis. We demonstrate the usability of our work by evaluating a state-of-the-art breast cancer risk prediction algorithm on a large and diverse dataset of 2D screening mammograms. We perform comprehensive statistical analysis to study model calibration and to evaluate performance across important factors, including breast density, age, and race, in order to identify latent biases. ClinValAI provides a holistic framework to validate medical imaging models and has the potential to advance the development of generalizable AI models in clinical medicine and to promote health equity.
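The three workflows the abstract names (input auditing, scoring, output processing) can be caricatured in a few lines of Python. Every name below is illustrative; the abstract does not describe ClinValAI’s actual APIs, so this is a sketch of the general pattern, not of the framework itself.

```python
def audit_inputs(samples, required_keys):
    """Input auditing: keep only samples that satisfy the model's prerequisites."""
    return [s for s in samples if all(k in s for k in required_keys)]

def score(samples, model, retries=2):
    """Scoring: run the model per sample, retrying transient failures."""
    results = {}
    for s in samples:
        for _ in range(retries + 1):
            try:
                results[s["id"]] = model(s)
                break
            except RuntimeError:
                continue  # hypothetical transient failure; retry
    return results

def process_outputs(samples, results):
    """Output processing: flag samples with missing results, aggregate the rest."""
    missing = [s["id"] for s in samples if s["id"] not in results]
    return {"scores": results, "missing": missing}

# Usage with a dummy risk model: sample 2 fails auditing (no image),
# so only samples 1 and 3 are scored.
samples = [{"id": 1, "image": "a.png"}, {"id": 2}, {"id": 3, "image": "c.png"}]
ok = audit_inputs(samples, ["id", "image"])
out = process_outputs(ok, score(ok, lambda s: 0.5))
```

Keeping the three stages as separate functions mirrors the separation the abstract describes: auditing failures, scoring failures, and missing outputs each surface at a distinct stage rather than silently propagating downstream.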