Arabic dialect identification (ADI) is a natural language processing (NLP) task that aims to automatically determine the dialect of an Arabic input text. ADI is a preliminary step toward many NLP applications, including cross-language text generation, multilingual text-to-speech synthesis, and machine translation, and it is the first step in various dialect-sensitive Arabic NLP tasks. ADI involves predicting the dialects associated with textual input and assigning them their respective labels. Consequently, interest in addressing ADI through deep learning (DL) and machine learning (ML) algorithms has grown over the last few decades. This study develops an Arabic multi-class dialect recognition technique using a fast random opposition-based fractals learning Aquila optimizer with DL (FROBLAO-DL). The FROBLAO-DL technique uses an optimized DL model to identify distinct types of Arabic dialects. In the FROBLAO-DL technique, data preprocessing cleans the input Arabic dialect dataset, and the RoBERTa word embedding process generates word embeddings. The technique then applies an attention bidirectional long short-term memory (ABiLSTM) network to identify distinct Arabic dialects, with the ABiLSTM model's hyperparameters tuned by the FROBLAO method. The FROBLAO-DL method is evaluated on an Arabic dialect dataset, and the empirical analysis demonstrates its superiority over recent approaches across various measures.
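The abstract's pipeline of contextual embeddings, a bidirectional recurrent encoder, and attention pooling before a dialect classifier can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the random projections stand in for trained RoBERTa embeddings and LSTM gates, the dialect count and sequence length are arbitrary, and only the attention-pooling step (score each time step, softmax, weighted sum) is shown faithfully.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy stand-ins: embeddings for a 6-token sentence; bidirectional hidden
# states are simulated with two random projections (forward and backward).
seq_len, emb_dim, hid_dim, n_dialects = 6, 8, 4, 5
embeddings = rng.normal(size=(seq_len, emb_dim))   # stand-in for RoBERTa output

W_fwd = rng.normal(size=(emb_dim, hid_dim))        # stand-in for forward LSTM
W_bwd = rng.normal(size=(emb_dim, hid_dim))        # stand-in for backward LSTM
h = np.concatenate([np.tanh(embeddings @ W_fwd),
                    np.tanh(embeddings @ W_bwd)], axis=1)  # (seq_len, 2*hid_dim)

# Attention pooling: score each time step, normalize, take the weighted sum.
v = rng.normal(size=(2 * hid_dim,))                # attention scoring vector
alpha = softmax(h @ v)                             # (seq_len,) attention weights
context = alpha @ h                                # (2*hid_dim,) sentence vector

# Softmax classifier over dialect labels.
W_out = rng.normal(size=(2 * hid_dim, n_dialects))
probs = softmax(context @ W_out)
predicted_dialect = int(probs.argmax())
```

In the actual model, `W_fwd`/`W_bwd` would be full LSTM cells and `v`, `W_out` would be learned; the metaheuristic optimizer would then tune hyperparameters such as `hid_dim` and the learning rate.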
Mental health (MH) assessment and prediction have become critical areas of focus in healthcare, leveraging developments in natural language processing (NLP). Recent advances in machine learning have enabled predictive models for MH based on user-generated comments, but these models have typically overlooked emotional attention mechanisms. They often struggle with contextual nuances and emotional subtleties, leading to suboptimal predictions. The prevailing challenge lies in accurately understanding the emotional context embedded within textual comments, which is crucial for effective prediction and intervention. In this research, we introduce a novel approach employing contextual emotional transformer-based models (CETM) for comment analysis in MH case prediction. CETM leverages state-of-the-art transformer architectures enhanced with contextual embedding layers and emotional attention mechanisms. By incorporating contextual information and emotional cues, CETM captures the underlying emotional states and MH indicators expressed in user comments. In extensive experiments, both RoBERTa and bidirectional encoder representations from transformers (BERT) models achieved higher accuracy, precision, recall, and F1 scores than their counterparts without emotional attention. Notably, with emotional attention, the RoBERTa model attained an accuracy of 94.5%, compared to BERT's 87.6%. Incorporating emotional context into the predictive model thus yielded significant improvements, offering promising avenues for more precise and personalized MH interventions.
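One common way to realize an "emotional attention" mechanism of the kind the abstract describes is to bias the attention logits with per-token emotional salience scores, so emotionally loaded tokens receive more weight in the pooled comment representation. The sketch below shows that idea in NumPy under stated assumptions: the token vectors stand in for contextual transformer embeddings, the salience scores for a hypothetical emotion lexicon, and the bias strength `beta` is an illustrative hyperparameter; none of this is the paper's actual CETM architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy comment; hidden vectors stand in for contextual transformer embeddings.
tokens = ["i", "feel", "completely", "hopeless", "these", "days"]
hidden = rng.normal(size=(len(tokens), 8))

# Hypothetical emotion-lexicon salience per token ("hopeless" scores highest).
emotion_score = np.array([0.0, 0.4, 0.2, 0.9, 0.0, 0.1])

# Emotional attention: content-based score plus a bias from emotional salience.
v = rng.normal(size=(8,))
beta = 2.0                                     # strength of the emotional bias
alpha_plain = softmax(hidden @ v)              # attention without emotion cues
alpha_emo = softmax(hidden @ v + beta * emotion_score)

pooled = alpha_emo @ hidden                    # emotion-aware comment vector
```

Because "hopeless" carries the largest salience bias, its attention weight under `alpha_emo` strictly exceeds its weight under `alpha_plain`, which is exactly the effect an emotional attention layer is meant to produce before the classification head.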