Hand gestures offer people a convenient way to interact with computers and give them the ability to communicate without physical contact and at a distance, which is essential under today’s health conditions, especially during an epidemic of infectious viruses such as the COVID-19 coronavirus. However, factors such as the complexity of hand gesture patterns and differences in hand size and position can affect the performance of hand gesture recognition and classification algorithms. Researchers have proposed deep learning approaches such as convolutional neural networks (CNNs), capsule networks (CapsNets), and autoencoders to improve the performance of image recognition systems in this field: while CNNs are arguably the most widely used networks for object detection and image classification, CapsNets and autoencoders appear to resolve some of the limitations identified in the first approach. For this reason, this work proposes a specific combination of these networks to effectively solve the American Sign Language (ASL) recognition problem. The results show that the proposed ensemble, combined with a simple data augmentation process, achieves a precision of 99.43%.
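The abstract mentions a "simple data augmentation process" without detail. A minimal sketch of what such a step could look like for small gesture images, assuming 28×28 grayscale inputs in the style of Sign Language MNIST (an assumption for illustration, not the paper's actual setup):

```python
import numpy as np

def augment(images, rng):
    """Apply simple augmentations: random horizontal flips and small shifts.

    images: array of shape (N, H, W) with pixel values in [0, 1].
    """
    out = images.copy()
    for i in range(len(out)):
        if rng.random() < 0.5:
            out[i] = out[i][:, ::-1]          # horizontal flip
        dy, dx = rng.integers(-2, 3, size=2)  # shift by up to 2 pixels
        out[i] = np.roll(out[i], (dy, dx), axis=(0, 1))
    return out

rng = np.random.default_rng(0)
batch = rng.random((8, 28, 28))                # stand-in for gesture images
augmented = augment(batch, rng)
combined = np.concatenate([batch, augmented])  # doubled training set
```

Flips and small translations preserve the label of a static hand gesture while making the classifier less sensitive to hand position, which is one of the nuisance factors the abstract names.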
Prostate Specific Antigen (PSA) level in the serum is one of the most widely used markers for monitoring prostate cancer (PCa) progression, treatment response, and disease relapse. Although significant efforts have been made to analyze the various socioeconomic and cultural factors that contribute to racial disparities in PCa, limited research has been performed to quantitatively understand how, and to what extent, molecular alterations may impact the differential PSA levels present at varied tumor status between African-American and European-American men. Moreover, missing values among patients add another layer of difficulty in precisely inferring their outcomes. In light of these issues, we propose a data-driven, deep learning-based imputation and inference framework (DIIF). DIIF seamlessly encapsulates two modules: an imputation module driven by a regularized deep autoencoder for imputing critical missing information, and an inference module in which two deep variational autoencoders are coupled with a graphical inference model to quantify personalized and race-specific causal effects. Large-scale empirical studies on independent sub-cohorts of The Cancer Genome Atlas (TCGA) PCa patients demonstrate the effectiveness of DIIF. We further found that somatic mutations in TP53, ATM, PTEN, FOXA1, and PIK3CA are statistically significant genomic factors that may explain the racial disparities in different PCa features characterized by PSA.
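The paper's imputation module is a regularized deep autoencoder. As a rough illustration of the idea (not DIIF itself), here is a minimal single-hidden-layer linear autoencoder trained with a masked, L2-regularized reconstruction loss on synthetic low-rank data; all sizes and hyperparameters are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank data standing in for correlated clinical features.
n, d, h = 200, 6, 3
X = rng.normal(size=(n, h)) @ rng.normal(size=(h, d))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize features

# Randomly hide 20% of entries to simulate missing values.
missing = rng.random((n, d)) < 0.2
X_obs = np.where(missing, 0.0, X)          # zero-fill missing entries

# Linear autoencoder with L2 regularization, trained to reconstruct
# only the observed entries (the masked loss skips missing ones).
W1 = 0.1 * rng.normal(size=(d, h))
W2 = 0.1 * rng.normal(size=(h, d))
lr, lam = 0.05, 1e-3
for _ in range(1500):
    Z = X_obs @ W1                                  # encode
    err = np.where(missing, 0.0, Z @ W2 - X) / n    # masked residual
    gW2 = Z.T @ err + lam * W2
    gW1 = X_obs.T @ (err @ W2.T) + lam * W1
    W2 -= lr * gW2
    W1 -= lr * gW1

# Impute: keep observed values, fill missing ones with reconstructions.
X_imputed = np.where(missing, X_obs @ W1 @ W2, X)

mse_zero_fill = float(np.mean((X_obs[missing] - X[missing]) ** 2))
mse_imputed = float(np.mean((X_imputed[missing] - X[missing]) ** 2))
```

Because the features are correlated, the autoencoder can reconstruct a hidden entry from the patient's other observed features, which is what makes autoencoder-based imputation preferable to naive fills.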
Breast cancer is one of the most common types of cancer and a leading cause of death among women. If it is diagnosed early enough, the patient’s probability of being cured increases. Recently, deep neural network techniques have increasingly been used to aid pathologists in their prognosis, but pathologists still do not fully trust these models because they lack interpretability. In light of that, this work investigates whether previously training the models as encoders can enhance their accuracy in both classification and interpretability. Three models were applied to the BreakHis and BreCaHAD datasets: NASNet Mobile, DenseNet201, and MobileNetV2. The experiments show that all three models increased their classification performance and two models improved their interpretability using the proposed strategy. The DenseNet201 encoder performed almost 23% better than its vanilla version in classifying tumors, and the NASNet Mobile encoder improved its tumor interpretation by 28.5%.
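The pretrain-as-encoder strategy can be sketched in miniature: first fit an autoencoder on the images without labels, then train a classifier head on the encoder's features. The toy example below uses synthetic data and a linear autoencoder instead of the paper's CNN backbones, so it only illustrates the two-stage idea, not the reported pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data: class-1 samples are shifted along a fixed direction.
n, d, h = 300, 10, 4
y = rng.integers(0, 2, size=n)
shift = 1.5 * rng.normal(size=d)
X = rng.normal(size=(n, d)) + np.outer(y, shift)

# Stage 1: pretrain a linear autoencoder on the data (no labels used).
W_enc = 0.1 * rng.normal(size=(d, h))
W_dec = 0.1 * rng.normal(size=(h, d))
for _ in range(1000):
    err = (X @ W_enc @ W_dec - X) / n
    gdec = (X @ W_enc).T @ err
    genc = X.T @ (err @ W_dec.T)
    W_dec -= 0.05 * gdec
    W_enc -= 0.05 * genc

# Stage 2: train a logistic-regression head on the frozen encoder features.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

Z = X @ W_enc                       # encoder features
w, b = np.zeros(h), 0.0
for _ in range(500):
    p = sigmoid(Z @ w + b)
    w -= 0.5 * Z.T @ (p - y) / n
    b -= 0.5 * float(np.mean(p - y))

accuracy = float(np.mean((sigmoid(Z @ w + b) > 0.5) == y))
```

The reconstruction objective forces the encoder to keep the directions that explain the data, which here include the class-separating direction; the classifier head then only has to work in that compressed space.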
Probabilistic tsunami hazard and risk assessment (PTHA/PTRA) are vital tools for understanding tsunami risk and planning measures to mitigate impacts. At large scales, their use and scope are currently limited by the cost of numerically intensive simulations, which are not always feasible without large computational resources such as high-performance computing (HPC) facilities and may require reductions in resolution, reductions in the number of scenarios modelled, or the use of simpler approximation schemes. To conduct PTHA/PTRA for large portions of a coast, we therefore need to develop concepts and algorithms for reducing the number of events simulated and for more rapidly approximating the needed simulation results. This case study for a coastal region of Tohoku, Japan, utilizes a limited number of tsunami simulations of submarine earthquakes along the subduction interface to generate a wave propagation and inundation database at different depths, and fits these simulation results to a machine learning (ML) based variational autoencoder model that predicts the intensity measure (water depth, velocity, etc.) of the tsunami at a location of interest. Such a hybrid ML-physical model can be further extended to compute onshore inundation for probabilistic tsunami hazard and risk assessment.
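The core surrogate-modelling idea — fit a cheap ML model to a limited simulation database, then query it instead of re-running the expensive simulation — can be illustrated without the paper's variational autoencoder. A minimal sketch with an invented stand-in "simulation" formula and a least-squares surrogate on log intensity (the formula, parameter ranges, and feature choice are all assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "simulation": maps earthquake magnitude and depth to a
# tsunami intensity measure (e.g. water depth at one coastal site).
def run_simulation(mag, depth):
    return np.exp(0.8 * (mag - 7.0)) / (1.0 + 0.1 * depth)

# Limited scenario database, as if each row were an expensive run.
mags = rng.uniform(7.0, 9.0, size=100)
depths = rng.uniform(5.0, 25.0, size=100)
y = run_simulation(mags, depths)

# Surrogate: quadratic features, least-squares fit on log intensity.
def features(m, d):
    return np.column_stack([np.ones_like(m), m, d, m * d, m**2, d**2])

coef, *_ = np.linalg.lstsq(features(mags, depths), np.log(y), rcond=None)

# Query the surrogate for an unseen scenario instead of simulating it.
m_new, d_new = np.array([8.0]), np.array([15.0])
pred = float(np.exp(features(m_new, d_new) @ coef))
true = float(run_simulation(m_new, d_new)[0])
rel_err = abs(pred - true) / true
```

The same pattern scales up: the scenario database comes from physics-based runs, and a learned model interpolates between them so that thousands of probabilistic scenarios can be evaluated cheaply.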
Electronic Health Records (EHRs) contain a wealth of patient data useful to biomedical researchers. At present, both the extraction of data and the methods for analysis are frequently designed to work with a single snapshot of a patient’s record. Health care providers, however, often perform and record actions in small batches over time. By extracting these care events, a sequence can be formed, providing a trajectory of a patient’s interactions with the health care system. These care events also offer a basic heuristic for the level of attention a patient receives from health care providers. We show that it is possible to learn meaningful embeddings from these care events using two deep learning techniques, unsupervised autoencoders and long short-term memory networks. We compare these methods to traditional machine learning methods, which require a point-in-time snapshot to be extracted from an EHR.
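One simple way to learn embeddings from care-event sequences, in the spirit of the unsupervised-autoencoder approach (though far simpler than the models in the paper), is to encode each sequence as normalized event counts and compress those vectors with a small autoencoder. A toy sketch with invented event codes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy care-event sequences: each patient is a list of event codes.
vocab = ["lab", "rx", "visit", "imaging", "admit"]
patients = [rng.choice(len(vocab), size=int(rng.integers(3, 15))).tolist()
            for _ in range(50)]

# Encode each sequence as a bag-of-events count vector.
X = np.zeros((len(patients), len(vocab)))
for i, seq in enumerate(patients):
    for e in seq:
        X[i, e] += 1
X = X / X.sum(axis=1, keepdims=True)   # normalize by sequence length

# Tiny linear autoencoder: learn a 2-d embedding per patient.
d, h = len(vocab), 2
W1 = 0.1 * rng.normal(size=(d, h))
W2 = 0.1 * rng.normal(size=(h, d))
for _ in range(2000):
    Z = X @ W1
    err = (Z @ W2 - X) / len(X)
    gW2 = Z.T @ err
    gW1 = X.T @ (err @ W2.T)
    W2 -= 0.1 * gW2
    W1 -= 0.1 * gW1

embeddings = X @ W1                    # one 2-d vector per patient

recon_mse = float(np.mean((X @ W1 @ W2 - X) ** 2))
baseline_mse = float(np.mean(X ** 2))
```

Bag-of-events encoding discards ordering, which is exactly what the LSTM variant in the paper would recover; the sketch shows only the autoencoder half of the comparison.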