Parkinson’s disease (PD) is the second most prevalent neurodegenerative disorder globally, afflicting approximately 10 million individuals, and early detection is essential for optimal management. In this paper, we propose deep learning models that discern Parkinson’s disease through the nuanced analysis of handwriting, with the overall objective of achieving transparency and trustworthiness through the integration of Explainable and Interpretable AI.
Leveraging transfer learning from well-established VGG16 and VGG19 architectures and introducing two bespoke CNN models (PD-Detect1 and PD-Detect2), we meticulously scrutinize diverse datasets (HandPD, NewhandPD, Parkinson Drawing) to ascertain the efficacy of our approach. LIME and SHAP Explainable AI techniques are employed to pinpoint specific regions of the spiral drawings that significantly influence the predictions made by the VGG16 and PD-Detect2 models. Additionally, Convolutional Filter Visualization and Grad-CAM are utilized to illustrate how the convolutional layers of the PD-Detect2 model function. Finally, LIME is applied to the PD-Detect2 model to identify visual markers of handwriting symptoms in the spiral drawing.
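As a rough illustration of the Grad-CAM step (not the exact pipeline used with the PD-Detect2 model), the class activation map can be sketched in a few lines of numpy: each feature map of the last convolutional layer is weighted by the spatial mean of the class-score gradient flowing into it, and the weighted sum is rectified into a heatmap. The feature maps and gradients below are synthetic stand-ins for values a framework would extract from a trained network.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM: feature_maps and gradients are (C, H, W) arrays
    from the last conv layer; returns an (H, W) heatmap in [0, 1]."""
    # Channel weights: global-average-pool the gradients (Grad-CAM's alpha_k).
    weights = gradients.mean(axis=(1, 2))            # shape (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalise for display as a heatmap over the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 3 feature maps of size 4x4 with synthetic gradients.
rng = np.random.default_rng(0)
fmaps = rng.random((3, 4, 4))
grads = rng.random((3, 4, 4))
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape)  # (4, 4)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the spiral drawing to show which strokes drove the prediction.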
Remarkable results underscore the reliability of VGG16 and VGG19 for precise identification, both achieving an outstanding 100% accuracy on the wave drawing dataset. PD-Detect1 exhibits commendable performance with an accuracy of 94.44% on the meander subset of the NewhandPD dataset, while VGG16 achieves 95%. On the spiral drawing dataset, VGG16 records 95% accuracy and PD-Detect2 achieves 85%; both results are further bolstered to 100% with the application of classic data augmentation techniques. The positive/negative superpixels from LIME and SHAP highlight the key regions used by VGG16 and PD-Detect2 for predictions, with PD-Detect2 placing more emphasis on disease-related features. Visualizing convolutional filters provided insight into the functionality of each layer within the PD-Detect2 model, while the class activation maps produced by Grad-CAM highlighted the image regions most influential to the model’s decision. Ultimately, LIME’s superpixels identified visual markers of handwriting symptoms associated with Parkinson’s disease.
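The classic augmentations referred to above can be sketched as a minimal pipeline of flips, right-angle rotations, and light pixel noise; this is an illustrative example, not necessarily the exact transform set used in the experiments.

```python
import numpy as np

def augment(image, rng):
    """Classic augmentations for a square grayscale drawing (H, W):
    random horizontal flip, 90-degree rotation, additive Gaussian noise."""
    out = image.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)                      # mirror the drawing
    out = np.rot90(out, k=rng.integers(0, 4))     # rotate by 0/90/180/270 deg
    out = out + rng.normal(0.0, 0.01, out.shape)  # light pixel noise
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(42)
spiral = np.zeros((64, 64))
spiral[32, :] = 1.0                               # stand-in for a spiral image
batch = [augment(spiral, rng) for _ in range(8)]  # 8 augmented variants
print(len(batch))  # 8
```

Each training image yields several label-preserving variants, which is what lifts the effective size of small medical datasets like the spiral set.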
Explainable AI and Interpretable AI enhance the quality of CNN models and support the decision-making process, enabling healthcare professionals to more accurately assess disease probability and monitor treatment responses. This leads to a more effective system for the early diagnosis of Parkinson’s disease through prediction and visual monitoring of handwriting symptoms.
This study contributes to the detection of depression through handwriting/drawing features, aiming to identify quantitative, noninvasive indicators of the disorder for use in algorithms for its automatic detection. For this purpose, an original online approach was adopted to provide a dynamic evaluation of the handwriting/drawing performance of healthy participants with no history of psychiatric disorders (n=28) and patients with a clinical diagnosis of depression (n=27). Both groups were asked to complete seven tasks requiring either writing or drawing on paper, while five categories of handwriting/drawing features (i.e. pressure on the paper, time, ductus, space among characters, and pen inclination) were recorded using a digitizing tablet. The collected records were statistically analyzed. Results showed that, except for pressure, all the considered features successfully discriminate between depressed and nondepressed subjects. In addition, it was observed that depression affects different writing/drawing functionalities. These findings suggest the adoption of writing/drawing tasks in clinical practice as tools to support current depression detection methods, which would have important repercussions in reducing diagnostic times and improving treatment formulation.
This paper presents a system for offline recognition of cursive Arabic handwritten text based on Hidden Markov Models (HMMs). The proposed work reports an effective method that takes into account the context of each character by applying embedded training of HMMs to build and enhance the character models. The system is analytical, requiring no explicit segmentation; the features, extracted after baseline estimation, are both statistical and structural, integrating the peculiarities of the text and the pixel distribution characteristics of the word image. Experiments are conducted on the benchmark IFN/ENIT database. The proposed work shows the effectiveness of embedded training-based HMMs for enhancing the recognition rate, and the obtained results are promising and encouraging.
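The paper’s embedded training procedure is not reproduced here, but the decoding step at the heart of any HMM recognizer can be illustrated with a minimal Viterbi implementation; the two-state model and observation sequence below are purely illustrative.

```python
import numpy as np

def viterbi(log_A, log_B, log_pi):
    """Most likely state path for an HMM, given log transition matrix (N, N),
    per-frame log emission likelihoods (T, N), and log initial probs (N,)."""
    T, N = log_B.shape
    delta = log_pi + log_B[0]              # best log-score ending in each state
    back = np.zeros((T, N), dtype=int)     # backpointers for path recovery
    for t in range(1, T):
        scores = delta[:, None] + log_A    # scores[from, to]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    # Trace the best path backwards from the best final state.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy 2-state model: state 0 prefers symbol 0, state 1 prefers symbol 1.
log = np.log
A = log(np.array([[0.9, 0.1], [0.1, 0.9]]))    # sticky transitions
pi = log(np.array([0.5, 0.5]))
em = log(np.array([[0.9, 0.1], [0.1, 0.9]]))   # P(symbol | state)
obs = [0, 0, 1, 1]                             # observed symbol indices
B = np.array([em[:, o] for o in obs])          # (T, N) frame log-likelihoods
print(viterbi(A, B, pi))  # [0, 0, 1, 1]
```

In a word recognizer, character models are concatenated and the same dynamic program finds the best character sequence over the feature frames.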
The main difficulty in pattern recognition is the interpretation of the iconic picture, which consists of pixels, into primitive features. The definition of a feature classification as regular or singular is recalled and justified. These ideas are applied to line images, and particularly to handwritten words. A regular feature, the axis, is found on the graph representation of a word. It allows segmentation of the word representation and the obtaining of a descriptive chain. Examples of such symbolic descriptions are given with their interpretations as words from a list of 25. The results show 87% success and 3% substitution rates for the handwriting of one person. Preliminary results show that it may be extended to large categories of handwritings.
An algorithmic architecture for a high-performance optical character recognition (OCR) system for hand-printed and handwritten addresses is proposed. The architecture integrates syntactic and contextual post-processing with character recognition to optimise postcode recognition performance, and verifies the postcode against simple features extracted from the remainder of the address to ensure a low error rate.
An enhanced version of the characteristic loci character recognition algorithm was chosen for the system to make it tolerant of variations in writing style. Feature selection for the classifier is performed automatically using the B/W algorithm.
Syntactic and contextual information for hand-printed British postcodes have been integrated into the system by combining low-level postcode syntax information with a dictionary trie structure. A full implementation of the postcode dictionary trie is described. Features which define the town name effectively, and can easily be extracted from a handwritten or hand-printed town name are used for postcode verification.
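A minimal sketch of a dictionary trie of the kind described, here built over a hypothetical mini-dictionary of outward postcode codes: a candidate string can be rejected as soon as its prefix fails to match any stored entry, which is what makes the structure useful for pruning character-recognition hypotheses.

```python
class Trie:
    """Dictionary trie; has_prefix lets a recognizer discard candidate
    character sequences that cannot extend to any legal dictionary entry."""
    def __init__(self, words):
        self.root = {}
        for w in words:
            node = self.root
            for ch in w:
                node = node.setdefault(ch, {})
            node["$"] = True        # end-of-word marker

    def has_prefix(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node:
                return False
            node = node[ch]
        return True

# Hypothetical mini-dictionary of British outward postcode codes.
trie = Trie(["SW1A", "SW19", "EC1A"])
print(trie.has_prefix("SW1"))   # True  -- could still be SW1A or SW19
print(trie.has_prefix("SX"))    # False -- prune this candidate early
```

Combining this lookup with low-level postcode syntax (letter/digit position constraints) narrows the classifier’s alternatives at each character position.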
A database totalling 3473 postcode/address images has been used to evaluate the performance of the complete postcode recognition process. The basic character recognition rate for the full unconstrained alphanumeric character set is 63.1%, compared with an expected maximum attainable 75–80%. The addition of the syntactic and contextual knowledge stages produces an overall postcode recognition rate which is equivalent to an alphanumeric character recognition rate of 86–90%. Separate verification experiments on a subset of 820 address images show that, with the first-order features chosen, an overall correct address feature code extraction rate of around 35% is achieved.
The interpretation of handwritten signature images should be closely related to the writer’s identity. The representation and analysis of the handwritten signature is the major challenge in the field of automatic signature verification. A new concept of representation and interpretation of handwritten signature images is advocated. The segmentation process breaks up the signature into a collection of arbitrarily-shaped primitives. In the next step, a local interpretation process serves as a sophisticated template matching, permitting the labeling of all primitives from the test primitive set. This is followed by the global interpretation process, which permits the evaluation of a similarity measure between two structural graphs. Experimental results obtained from a database of 800 handwritten signature images from 20 writers show a performance with a type I error rate of ε1 = 1.50%, a type II error rate of ε2 = 1.37% and a total error rate of εt = 1.43% in the best strategy proposed, using a minimum-distance classifier and two reference signatures. A complete description of this novel automatic handwritten signature verification system is presented in this paper.
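The decision stage described, a minimum-distance classifier over reference signatures, can be sketched as follows; the feature vectors and threshold below are illustrative stand-ins for the structural-graph similarity measures used in the actual system.

```python
import numpy as np

def verify(test_vec, references, threshold):
    """Accept a questioned signature if its mean distance to the writer's
    reference feature vectors falls below a decision threshold."""
    dists = [np.linalg.norm(test_vec - r) for r in references]
    score = float(np.mean(dists))
    return score <= threshold, score

# Toy feature vectors (e.g. counts of labelled primitives per signature);
# two references per writer, as in the best strategy reported.
refs = [np.array([1.0, 2.0, 3.0]), np.array([1.2, 2.1, 2.9])]
genuine = np.array([1.1, 2.0, 3.0])
forgery = np.array([4.0, 0.5, 1.0])
print(verify(genuine, refs, threshold=0.5)[0])  # True
print(verify(forgery, refs, threshold=0.5)[0])  # False
```

The threshold trades type I errors (rejecting genuine signatures) against type II errors (accepting forgeries), which is how the reported error rates are tuned.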
This paper proposes symbolic and neural classifiers to read unconstrained handwritten worded amounts in bankchecks. Features are extracted from the binary image of the worded amount. Depending on the features extracted, some words are recognized entirely symbolically, some words entirely neurally, and the remaining both symbolically and neurally. Results of experiments at word level and check level are provided.
Parkinson's disease (PD) is a widespread progressive neurodegenerative disease. Patients encounter problems in carrying out ordinary voluntary movements such as walking and writing. Effects of tremor, rigidity, and bradykinesia are seen in writing. Since the diagnosis of PD is based on the presence of cardinal symptoms, voluntary movements like writing could serve as a helpful indicator in this process. Handwriting analysis can be a suitable method to separate patients from normal individuals. We recorded handwriting data from 17 normal and 13 PD subjects, then used mathematical analysis to extract appropriate features. Finally, we used these selected features in a classifier and achieved 93.89% accuracy in the test phase. Hence, this tool may aid in diagnosing both PD and suspected PD individuals in the early stages of the disease.
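As an illustration of the kind of kinematic features such an analysis might extract (the paper's exact feature set is not reproduced here), the sketch below computes speed and jerk statistics from a synthetic pen trajectory; tremor-like ripple in the stroke inflates the jerk measure, which is one conventional marker of impaired motor smoothness.

```python
import numpy as np

def kinematic_features(x, y, dt=0.01):
    """Illustrative pen-trajectory features: mean speed, speed variability,
    and mean absolute jerk (a smoothness measure) from sampled x/y paths."""
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    speed = np.hypot(vx, vy)
    jerk = np.gradient(np.gradient(speed, dt), dt)
    return np.array([speed.mean(), speed.std(), np.abs(jerk).mean()])

# Synthetic strokes: a smooth curve vs. one with 8 Hz tremor-like ripple.
t = np.linspace(0, 1, 200)
smooth = kinematic_features(t, np.sin(2 * np.pi * t))
tremor = kinematic_features(t, np.sin(2 * np.pi * t)
                               + 0.05 * np.sin(2 * np.pi * 8 * t))
print(tremor[2] > smooth[2])  # tremor raises the mean absolute jerk
```

Feature vectors of this kind can then be fed to any standard classifier to separate patient and control recordings.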
In a previous paper, we highlighted the design requirements of a computer-based system for the automated assessment of neuropsychological drawing tasks. In this paper, we examine the implementation of an analysis system, specifically with reference to the software engineering principles utilized and the modular framework within which a flexible implementation can be realized. We highlight some of the implemented modules and, using two actual test batteries as examples, demonstrate the flow of information between each module. We also show the additional reporting and analysis features implemented for clinician support and describe how the framework can be utilized for more generic applications of handwriting/drawing analysis.
Handwriting has always been considered an important human task, and accordingly it has attracted the attention of researchers working in biomechanics, physiology, and related fields. There exist a number of studies on this area. This paper considers the human–machine analogy and relates robots with handwriting. The work is two-fold: it improves the knowledge in biomechanics of handwriting, and introduces some new concepts in robot control. The idea is to find the biomechanical principles humans apply when resolving kinematic redundancy, express the principles by means of appropriate mathematical models, and then implement them in robots. This is a step forward in the generation of human-like motion of robots. Two approaches to redundancy resolution are described: (i) "Distributed Positioning" (DP) which is based on a model to represent arm motion in the absence of fatigue, and (ii) the "Robot Fatigue" approach, where robot movements similar to the movements of a human arm under muscle fatigue are generated. Both approaches are applied to a redundant anthropomorphic robot arm performing handwriting. The simulation study includes the issues of legibility and inclination of handwriting. The results demonstrate the suitability and effectiveness of both approaches.
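The Distributed Positioning and Robot Fatigue models themselves are not reproduced here, but the standard Jacobian-pseudoinverse formulation on which redundancy-resolution schemes are typically built can be sketched briefly: a secondary joint velocity is projected into the Jacobian's null space, so it reshapes the arm posture without disturbing the pen-tip motion.

```python
import numpy as np

def redundancy_resolve(J, x_dot, q0_dot):
    """Track the task-space (pen-tip) velocity x_dot while projecting a
    secondary joint velocity q0_dot (e.g. a posture/fatigue criterion)
    into the null space of the task Jacobian J."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J       # null-space projector
    return J_pinv @ x_dot + N @ q0_dot

# Planar 3-joint arm writing in 2-D: 3 joints, 2 task coordinates (redundant).
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.5]])
x_dot = np.array([0.1, 0.0])                  # desired pen-tip velocity
q_dot = redundancy_resolve(J, x_dot, np.array([0.0, 0.0, 1.0]))
print(np.allclose(J @ q_dot, x_dot))          # True: task still satisfied
```

Because J @ N = 0, any choice of the secondary term leaves the pen trajectory untouched, which is exactly the freedom the biomechanical criteria exploit.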
Essential hand tremor (EHT) is a prevalent neurological condition affecting geriatric populations, yet its underlying mechanisms remain poorly understood. This study aims to investigate the neural substrates associated with motor tasks in EHT patients, illuminating the complex neural activity characterizing this condition. Twenty participants underwent a thorough evaluation to ensure eligibility, excluding factors such as mental illness, drug dependency, or Parkinson’s disease (PD). Functional magnetic resonance imaging (fMRI) was utilized to examine brain activation patterns during non-handwriting tasks (NHWT) and handwriting tasks (HWT). Participants received training to standardize hand and forearm movements for effective fMRI assessment. fMRI data preprocessing included motion correction, filtering, removal of linear trends, normalization to Montreal Neurological Institute (MNI) space, and spatial smoothing. Distinctive patterns of brain activation were observed during motor tasks in individuals with EHT compared to controls. During NHWT, the EHT group showed significantly increased activation in the precentral gyrus, supplementary motor area, thalamus, and posterior cerebellar lobe, highlighting their role in mediating motor challenges in EHT. Similarly, during HWT, the EHT group exhibited heightened activation in the precentral gyrus and supplementary motor areas. In contrast, reduced activation was noted in the caudate nucleus, inferior temporal gyrus, and precuneus during HWT in the EHT group compared to controls. These findings align with previous research on involuntary movement disorders, such as early-stage PD, emphasizing the importance of the caudate nucleus and related regions in EHT. In conclusion, this study sheds light on the intricate neural activity underlying motor tasks in EHT patients. The identified neural regions and their functions offer insights into the neurophysiological basis of EHT-related motor impairments. 
These findings have the potential to enhance the understanding of EHT beyond its surface-level effects. This study has identified the brain regions involved in motor tasks affected by EHT. This sets a foundation for future research to better understand the complexities of this neurological condition. These discoveries may lead to novel therapeutic interventions tailored to address the unique challenges faced by individuals with EHT, representing a significant milestone in understanding and managing EHT.
This article discusses handwriting as a means of personality assessment. Through a discussion of the various internal and external factors, we see how the elements of spatial arrangement, writing form, and writing movement develop handwriting patterns. These are shown to combine in ways that demonstrate a wide range of personality traits in handwriting. The question is addressed whether cursive writing is still important in a digital age.
In this chapter, we analyze several on-line cursive handwriting recognition systems. We find that virtually all such systems involve (a) a preprocessor, (b) a trainable classifier, and (c) a language modeling post-processor. Such architectures are described within the framework of Weighted Finite State Transductions, previously used in speech recognition by Pereira et al. We describe in some detail a recognition system built in our laboratory. It is a writer independent system which can handle a variety of writing styles including cursive script and handprint. The input to the system encodes the pen trajectory as a time-ordered sequence of feature vectors. A Time Delay Neural Network is used to estimate a posteriori probabilities for characters in a word. A Hidden Markov Model segments the word in a way which optimizes the global word score, taking a lexicon into account. The last part of the chapter is devoted to bibliographical notes.