Search Results (454)

Search Parameters:
Keywords = sign language

11 pages, 931 KiB  
Article
Early Detection of Mental Health Crises through Artificial-Intelligence-Powered Social Media Analysis: A Prospective Observational Study
by Masab A. Mansoor and Kashif H. Ansari
J. Pers. Med. 2024, 14(9), 958; https://doi.org/10.3390/jpm14090958 - 9 Sep 2024
Viewed by 291
Abstract
Background: The early detection of mental health crises is crucial for timely interventions and improved outcomes. This study explores the potential of artificial intelligence (AI) in analyzing social media data to identify early signs of mental health crises. Methods: We developed a multimodal deep learning model integrating natural language processing and temporal analysis techniques. The model was trained on a diverse dataset of 996,452 social media posts in multiple languages (English, Spanish, Mandarin, and Arabic) collected from Twitter, Reddit, and Facebook over 12 months. Its performance was evaluated using standard metrics and validated against expert psychiatric assessments. Results: The AI model demonstrated a high level of accuracy (89.3%) in detecting early signs of mental health crises, with an average lead time of 7.2 days before human expert identification. Performance was consistent across languages (F1 scores: 0.827–0.872) and platforms (F1 scores: 0.839–0.863). Key digital markers included linguistic patterns, behavioral changes, and temporal trends. The model showed varying levels of accuracy for different crisis types: depressive episodes (91.2%), manic episodes (88.7%), suicidal ideation (93.5%), and anxiety crises (87.3%). Conclusions: AI-powered analysis of social media data shows promise for the early detection of mental health crises across diverse linguistic and cultural contexts. However, ethical challenges, including privacy concerns, potential stigmatization, and cultural biases, need careful consideration. Future research should focus on longitudinal outcome studies, ethical integration of the method with existing mental health services, and developing personalized, culturally sensitive models.
(This article belongs to the Special Issue Ehealth, Telemedicine, and AI in the Precision Medicine Era)

26 pages, 12522 KiB  
Article
A Vision–Language Model-Based Traffic Sign Detection Method for High-Resolution Drone Images: A Case Study in Guyuan, China
by Jianqun Yao, Jinming Li, Yuxuan Li, Mingzhu Zhang, Chen Zuo, Shi Dong and Zhe Dai
Sensors 2024, 24(17), 5800; https://doi.org/10.3390/s24175800 - 6 Sep 2024
Viewed by 264
Abstract
As a fundamental element of the transportation system, traffic signs are widely used to guide traffic behaviors. In recent years, drones have emerged as an important tool for monitoring the conditions of traffic signs. However, existing image processing techniques rely heavily on image annotations, and building a high-quality dataset with diverse training images and human annotations is time-consuming. In this paper, we introduce the use of vision–language models (VLMs) for the traffic sign detection task. Without the need for discrete image labels, rapid deployment is enabled by multi-modal learning and large-scale pretrained networks. First, we compile a keyword dictionary to explain traffic signs, with the Chinese national standard supplying the shape and color information. Our program uses Bootstrapping Language-Image Pretraining v2 (BLIPv2) to translate representative images into text descriptions. Second, a Contrastive Language-Image Pretraining (CLIP) framework is applied to characterize both drone images and text descriptions, using pretrained encoder networks to create visual features and word embeddings. Third, the category of each traffic sign is predicted according to the similarity between drone images and keywords: cosine similarity and a softmax function yield the class probability distribution. To evaluate performance, we apply the proposed method in a practical application, using drone images captured from Guyuan, China, to record the conditions of traffic signs. Further experiments include two widely used public datasets. The results indicate that our vision–language model-based method achieves acceptable prediction accuracy at a low training cost.
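
The zero-shot matching step above maps cleanly onto off-the-shelf CLIP tooling. Below is a minimal sketch, assuming the Hugging Face transformers implementation and illustrative keyword texts rather than the paper's actual Chinese-standard dictionary:

```python
# Hedged sketch: score a cropped drone image against keyword descriptions
# with CLIP, then softmax the similarities into a class distribution.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative keyword dictionary entries (not the paper's actual dictionary).
keywords = [
    "a red circular prohibition traffic sign",
    "a yellow triangular warning traffic sign",
    "a blue rectangular guide traffic sign",
]

image = Image.open("drone_crop.jpg")  # a sign cropped from a drone image
inputs = processor(text=keywords, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds scaled cosine similarities; softmax yields the
# class probability distribution described in the abstract.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(keywords, probs[0].tolist())))
```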

14 pages, 372 KiB  
Article
Size Matters: Vocabulary Knowledge as Advantage in Partner Selection
by Michael Daller and Zehra Ongun
Languages 2024, 9(9), 297; https://doi.org/10.3390/languages9090297 - 6 Sep 2024
Viewed by 551
Abstract
Partner selection can be studied from different disciplines, such as psychology, sociology, and economics. However, linguistic perspectives have been neglected, which is why we need an interdisciplinary approach that includes language. The present article investigates how important the vocabulary size of a potential partner is for marital choice. Our theoretical framework is mainly that of biological markets, which is still widely used. This framework assumes that human decisions are made on a rational basis, e.g., about the characteristics that a potential partner brings into a marriage, such as economic assets (wealth, education), psychological traits (intelligence, kindness, fairness), or signs of physical and mental health. Partner selection takes place on a biological market where assets are displayed and are part of the negotiation for the best partner. We argue that vocabulary knowledge is such an asset: it is acquired through lengthy and costly education and distinguishes potential partners (or their parents) who can afford the accumulation of this form of human capital. Markets are not fully transparent, and our knowledge about a potential partner might be incomplete or even distorted through false information or outright cheating, as is evident from online dating advertisements. However, we cannot pretend, at least not over a longer period of time, to know words that are not at our disposal. The present study is based on data from 83 couples after more than 15 years of marriage. Their vocabulary scores correlate highly, and this correlation could in principle be the result of accommodation during marriage. However, by statistically partialling out the years of marriage, we conclude that the vocabulary size of each partner was an important factor right from the beginning of their relationship. Those with higher human capital in vocabulary attract similar partners, and this holds for males and females alike. Our participants are all Turkish–English sequential bilinguals, and the question is whether it is vocabulary knowledge in the first or the second language that plays the crucial role in partner selection. Our results show that both languages are important. We argue that it is not knowledge of words at the surface level but knowledge of the concepts underlying both languages that serves as a display of human capital on the biological market of partner selection.
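
For readers unfamiliar with the partialling-out step, a partial correlation can be computed from regression residuals. The sketch below uses synthetic numbers standing in for the couples' scores; none of the values are the study's data:

```python
# Hedged sketch: correlate two partners' vocabulary scores while controlling
# for years of marriage, via residuals of each score regressed on duration.
import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Pearson correlation of x and y, controlling for z."""
    rx = x - np.poly1d(np.polyfit(z, x, 1))(z)  # residual of x ~ z
    ry = y - np.poly1d(np.polyfit(z, y, 1))(z)  # residual of y ~ z
    return stats.pearsonr(rx, ry)

rng = np.random.default_rng(1)
years = rng.uniform(15, 30, 83)                  # 83 couples, as in the study
vocab_a = rng.normal(100, 15, 83)                # synthetic scores
vocab_b = 0.6 * vocab_a + rng.normal(0, 10, 83)  # built-in association

r, p = partial_corr(vocab_a, vocab_b, years)
print(f"partial r = {r:.2f}, p = {p:.3g}")
```
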
24 pages, 8029 KiB  
Article
Real-Time Machine Learning for Accurate Mexican Sign Language Identification: A Distal Phalanges Approach
by Gerardo García-Gil, Gabriela del Carmen López-Armas, Juan Jaime Sánchez-Escobar, Bryan Armando Salazar-Torres and Alma Nayeli Rodríguez-Vázquez
Technologies 2024, 12(9), 152; https://doi.org/10.3390/technologies12090152 - 4 Sep 2024
Viewed by 591
Abstract
Effective communication is crucial in daily life, and for people with hearing disabilities sign language is no exception, serving as their primary means of interaction. Various technologies, such as cochlear implants and mobile sign language translation applications, have been explored to enhance communication and improve the quality of life of the deaf community. This article presents an innovative method that uses real-time machine learning (ML) to accurately identify Mexican Sign Language (MSL) and is adaptable to any sign language. Our method is based on analyzing six features that represent the angles between the distal phalanges and the palm, thus eliminating the need for complex image processing. Our ML approach achieves accurate sign language identification in real time, with an accuracy and F1 score of 99%. These results demonstrate that a simple approach can effectively identify sign language. This advance is significant, as it offers an effective and accessible solution to improve communication for people with hearing impairments. Furthermore, the proposed method has the potential to be implemented in mobile applications and other devices to provide practical support to the deaf community.
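
As a rough illustration of the angle-based features, the sketch below derives six angles from a (21, 3) array of 3-D hand landmarks in the MediaPipe layout. The landmark indices, the palm-normal construction, and the sixth feature are assumptions for illustration, not the authors' exact definitions:

```python
# Hedged sketch: six angle features between finger directions and the palm.
import numpy as np

def angle_between(u: np.ndarray, v: np.ndarray) -> float:
    """Angle in degrees between two 3-D vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def hand_features(lm: np.ndarray) -> np.ndarray:
    """Six angle features from a (21, 3) landmark array (MediaPipe layout)."""
    wrist = lm[0]
    normal = np.cross(lm[5] - wrist, lm[17] - wrist)  # palm plane normal
    tips = [4, 8, 12, 16, 20]                         # fingertip indices
    bases = [2, 6, 10, 14, 18]                        # lower joint indices
    angles = [angle_between(lm[t] - lm[b], normal) for t, b in zip(tips, bases)]
    angles.append(angle_between(lm[4] - wrist, lm[8] - wrist))  # thumb spread
    return np.array(angles)

# Such a 6-feature vector would then feed a lightweight real-time classifier.
```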

29 pages, 9366 KiB  
Article
Multimodal Driver Condition Monitoring System Operating in the Far-Infrared Spectrum
by Mateusz Knapik, Bogusław Cyganek and Tomasz Balon
Electronics 2024, 13(17), 3502; https://doi.org/10.3390/electronics13173502 - 3 Sep 2024
Viewed by 336
Abstract
Monitoring the psychophysical conditions of drivers is crucial for ensuring road safety. However, achieving real-time monitoring within a vehicle presents significant challenges due to factors such as varying lighting conditions, vehicle vibrations, limited computational resources, data privacy concerns, and the inherent variability in driver behavior. Analyzing driver states using visible spectrum imaging is particularly challenging under low-light conditions, such as at night. Additionally, relying on a single behavioral indicator often fails to provide a comprehensive assessment of the driver’s condition. To address these challenges, we propose a system that operates exclusively in the far-infrared spectrum, enabling the detection of critical features such as yawning, head drooping, and head pose estimation regardless of the lighting scenario. It integrates a channel fusion module to assess the driver’s state more accurately and is underpinned by our custom-developed and annotated datasets, along with a modified deep neural network designed for facial feature detection in the thermal spectrum. Furthermore, we introduce two fusion modules for synthesizing detection events into a coherent assessment of the driver’s state: one based on a simple state machine and another that combines a modality encoder with a large language model. This latter approach allows for the generation of responses to queries beyond the system’s explicit training. Experimental evaluations demonstrate the system’s high accuracy in detecting and responding to signs of driver fatigue and distraction.
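
The simpler of the two fusion modules can be pictured as a small state machine over recent detection events. A toy sketch, with thresholds, window size, and event names chosen purely for illustration:

```python
# Hedged sketch: escalate the driver state as fatigue cues accumulate.
import random
from collections import deque

class DriverStateMachine:
    def __init__(self, window: int = 30):
        self.events = deque(maxlen=window)  # last N per-frame detections
        self.state = "alert"

    def update(self, yawn: bool, head_droop: bool) -> str:
        self.events.append(yawn or head_droop)
        ratio = sum(self.events) / len(self.events)
        if ratio > 0.5:
            self.state = "fatigued"  # sustained cues: raise an alarm
        elif ratio > 0.2:
            self.state = "warning"   # intermittent cues: soft warning
        else:
            self.state = "alert"
        return self.state

# Simulate a stream of per-frame detections.
random.seed(0)
sm = DriverStateMachine()
for _ in range(100):
    state = sm.update(yawn=random.random() < 0.3, head_droop=False)
print(state)
```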

21 pages, 822 KiB  
Article
Automated Speech Analysis in Bipolar Disorder: The CALIBER Study Protocol and Preliminary Results
by Gerard Anmella, Michele De Prisco, Jeremiah B. Joyce, Claudia Valenzuela-Pascual, Ariadna Mas-Musons, Vincenzo Oliva, Giovanna Fico, George Chatzisofroniou, Sanjeev Mishra, Majd Al-Soleiti, Filippo Corponi, Anna Giménez-Palomo, Laura Montejo, Meritxell González-Campos, Dina Popovic, Isabella Pacchiarotti, Marc Valentí, Myriam Cavero, Lluc Colomer, Iria Grande, Antoni Benabarre, Cristian-Daniel Llach, Joaquim Raduà, Melvin McInnis, Diego Hidalgo-Mazzei, Mark A. Frye, Andrea Murru and Eduard Vieta
J. Clin. Med. 2024, 13(17), 4997; https://doi.org/10.3390/jcm13174997 - 23 Aug 2024
Viewed by 529
Abstract
Background: Bipolar disorder (BD) involves significant mood and energy shifts reflected in speech patterns. Detecting these patterns is crucial for diagnosis and monitoring, which are currently assessed subjectively. Advances in natural language processing offer opportunities to analyze them objectively. Aims: To (i) correlate speech features with manic-depressive symptom severity in BD, (ii) develop predictive models for diagnostic and treatment outcomes, and (iii) determine the most relevant speech features and tasks for these analyses. Methods: This naturalistic, observational study involved longitudinal audio recordings of BD patients at euthymia, during acute manic/depressive phases, and after response. Patients participated in clinical evaluations, cognitive tasks, standard text readings, and storytelling. After automatic diarization and transcription, speech features, including acoustics, content, formal aspects, and emotionality, will be extracted. Statistical analyses will (i) correlate speech features with clinical scales, (ii) use lasso logistic regression to develop predictive models, and (iii) identify relevant speech features. Results: Audio recordings from 76 patients (24 manic, 21 depressed, 31 euthymic) were collected. The mean age was 46.0 ± 14.4 years, with 63.2% female. The mean YMRS score for manic patients was 22.9 ± 7.1, decreasing to 5.3 ± 5.3 post-response. Depressed patients had a mean HDRS-17 score of 17.1 ± 4.4, decreasing to 3.3 ± 2.8 post-response. Euthymic patients had mean YMRS and HDRS-17 scores of 0.97 ± 1.4 and 3.9 ± 2.9, respectively. Following data pre-processing, including noise reduction and feature extraction, comprehensive statistical analyses will be conducted to explore correlations and develop predictive models. Conclusions: Automated speech analysis in BD could provide objective markers for psychopathological alterations, improving diagnosis, monitoring, and response prediction. This technology could identify subtle alterations, signaling early signs of relapse. Establishing standardized protocols is crucial for creating a global speech cohort, fostering collaboration, and advancing BD understanding.
(This article belongs to the Section Mental Health)
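
As a sketch of the planned modeling step, the snippet below fits an L1-penalized (lasso) logistic regression on synthetic speech-feature data; the feature count, labels, and scoring metric are placeholders, not the CALIBER dataset:

```python
# Hedged sketch: lasso logistic regression over extracted speech features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(76, 40))    # 76 recordings x 40 speech features (synthetic)
y = rng.integers(0, 2, size=76)  # e.g., acute episode vs. euthymia (synthetic)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")  # ~0.5 on random data
```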

15 pages, 364 KiB  
Article
Maintaining the Indigenous Udmurt Language beyond the Community: An Autoethnographic Analysis
by Svetlana Edygarova
Languages 2024, 9(9), 286; https://doi.org/10.3390/languages9090286 - 23 Aug 2024
Viewed by 419
Abstract
In this article, I emphasize the importance of maintaining and transmitting indigenous languages to the next generations, and I explore the motivations and difficulties of indigenous language speakers who do so while living far away from their native language community. The article is an autoethnographic analysis that amplifies the insider’s perspective and reflects on my own thoughts, perceptions, and emotional reactions regarding my language use practices. Specifically, I analyze the use of the Udmurt language with my children and the process of writing a blog in Udmurt. As a researcher of the Udmurt language, I draw on my previous sociolinguistic studies in the analysis and place it within the broader context of indigenous peoples from Russia. Indigenous language use often involves multiple languages simultaneously, including language mixing, which is entirely natural. In societies with a monolingual language ideology, however, such practices are seen as signs of linguistic incompetence, leading to feelings of shame or inferiority among indigenous speakers. This negatively impacts the preservation of indigenous languages. Raising sociolinguistic and emotional awareness about how indigenous languages function and sharing personal experiences, including negative ones, can help overcome these challenges.
(This article belongs to the Special Issue Linguistic Practices in Heritage Language Acquisition)
15 pages, 15745 KiB  
Article
Bengali-Sign: A Machine Learning-Based Bengali Sign Language Interpretation for Deaf and Non-Verbal People
by Md. Johir Raihan, Mainul Islam Labib, Abdullah Al Jaid Jim, Jun Jiat Tiang, Uzzal Biswas and Abdullah-Al Nahid
Sensors 2024, 24(16), 5351; https://doi.org/10.3390/s24165351 - 19 Aug 2024
Viewed by 466
Abstract
Sign language is undoubtedly a common way of communication among deaf and non-verbal people. However, it is not common for hearing people to use sign language to express feelings or share information in everyday life. Therefore, a significant communication gap exists between deaf and hearing individuals, despite both groups experiencing similar emotions and sentiments. In this paper, we developed a convolutional neural network–squeeze-excitation (SE) network to predict sign language signs, along with a smartphone application that provides access to the ML model. The SE block applies attention to the channels of the image, improving the performance of the model, while the smartphone application brings the ML model to people so that everyone can benefit from it. In addition, we used Shapley additive explanations (SHAP) to interpret the black-box nature of the ML model and understand its workings from within. Using our ML model, we achieved an accuracy of 99.86% on the KU-BdSL dataset. The SHAP analysis shows that the model primarily relies on hand-related visual cues to predict sign language signs, aligning with human communication patterns.
(This article belongs to the Section Intelligent Sensors)
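
The squeeze-excitation mechanism credited with the performance gain is compact enough to sketch directly. A minimal PyTorch version, with layer sizes chosen for illustration rather than taken from the paper:

```python
# Hedged sketch of a squeeze-and-excitation (SE) block: channel attention.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # "squeeze": global spatial context
        self.fc = nn.Sequential(             # "excitation": channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight each channel by its learned importance

feats = torch.randn(1, 64, 28, 28)  # dummy CNN feature map
print(SEBlock(64)(feats).shape)     # torch.Size([1, 64, 28, 28])
```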

26 pages, 9899 KiB  
Article
Spatial Cognition, Modality and Language Emergence: Cognitive Representation of Space in Yucatec Maya Sign Language (Mexico)
by Olivier Le Guen and José Alfredo Tuz Baas
Languages 2024, 9(8), 278; https://doi.org/10.3390/languages9080278 - 16 Aug 2024
Viewed by 476
Abstract
This paper analyzes spatial gestures and cognition in a new, or so-called “emerging”, visual language, Yucatec Maya Sign Language (YMSL). This sign language was created by deaf and hearing signers in various Yucatec Maya villages on the Yucatán Peninsula (Mexico). Although the sign language is not a signed version of spoken Yucatec Maya, both languages evolve in a similar cultural setting. Studies have shown that cultures around the world seem to rely on one preferred spatial Frame of Reference (FoR), shaping in many ways how people orient themselves and think about the world around them. Prior research indicated that Yucatec Maya speakers rely on the geocentric FoR. However, contrary to other cultures, this is mainly observable in the production of gestures rather than in speech alone. In the case of space, gestures in spoken Yucatec Maya exhibit linguistic features, having the status of a lexicon. Our research question is the following: if the preferred spatial FoR among the Yucatec Mayas is based on co-expressivity and spatial linguistic content visually transmitted via multimodal interactions, will deaf signers of an emerging language created in the same cultural setting share the same cognitive preference? To answer this question, we conducted three experimental tasks in three different villages where YMSL is in use: a non-verbal rotation task, a Director-Matcher task, and a localization task. Results indicate that YMSL signers share the same preference for the geocentric FoR.

19 pages, 251 KiB  
Article
Insights from a Pre-Pandemic K-12 Virtual American Sign Language Program for a Post-Pandemic Online Era
by Casey W. Guynes, Nora Griffin-Shirley, Kristen Guynes and Leigh Kackley
Educ. Sci. 2024, 14(8), 892; https://doi.org/10.3390/educsci14080892 - 15 Aug 2024
Viewed by 349
Abstract
In the past five years, the number of virtual American Sign Language (ASL) classes has dramatically increased, from a novel option to a common course delivery mode across the country. Yet little is known regarding virtual ASL course design and the implementation of evidence-based practices. Broadly, this programmatic case study sought insight from a small population of experienced virtual ASL teachers who had been teaching ASL online prior to the crisis-teaching phenomenon that laid the foundation for virtual ASL as it stands today. More specifically, the qualitative design utilized questionnaires, semi-structured interviews, member checks, and document reviews of five teachers who had been teaching ASL virtually to K-12 students prior to the onset of the COVID-19 pandemic. Rich qualitative data, analyzed through directed and summative content analysis, revealed many themes specific to virtual ASL education, including differences from traditional ASL instruction, specific job responsibilities, limitations, advantages, disadvantages, and suggestions for improvement. Additionally, aligning with previous literature, we explored teacher, student, and programmatic characteristics that were perceived to be conducive to virtual students’ success. Finally, all participants expressed broader concerns that continue to exist in the field of ASL education. Implications for stakeholders, including K-12 ASL students, their families, teachers, administrators, and teacher training programs, are addressed, followed by suggestions for future research.
(This article belongs to the Topic Advances in Online and Distance Learning)
12 pages, 718 KiB  
Article
Translation, Cultural Adaptation, and Content Validity of the Saudi Sign Language Version of the General Nutrition Knowledge Questionnaire
by Jenan M. Aljubair, Dara Aldisi, Iman A. Bindayel, Madhawi M. Aldhwayan, Shaun Sabico, Tafany A. Alsaawi, Esraa Alghamdi and Mahmoud M. A. Abulmeaty
Nutrients 2024, 16(16), 2664; https://doi.org/10.3390/nu16162664 - 12 Aug 2024
Viewed by 633
Abstract
Profoundly hearing-impaired individuals lack health-promotion education on healthy lifestyles, which may be due to communication barriers and limited awareness of available resources. Providing understandable healthy-eating knowledge and a proper evaluation of that education via a questionnaire is therefore vital. The present study aimed to translate, culturally adapt, and validate the content of a Saudi sign language version of the General Nutrition Knowledge Questionnaire (GNKQ). The study followed the World Health Organization guidelines for translation and cultural adaptation, using a two-phase translation (from English into Arabic and then from Arabic into Saudi sign language) that included forward-translation, back-translation, and pilot testing among profoundly hearing-impaired individuals. A total of 48 videos were recorded to present the GNKQ in Saudi sign language. The scale-level content validity index (S-CVI) was 0.96, and the item-level content validity index (I-CVI) was between 0.9 and 1 for all questions except question 6 in section 1, which scored 0.6; this discrepancy was due to religious, social, and cultural traditions. The translation, cultural adaptation, and content validity of the Saudi sign language version of the GNKQ were satisfactory. Further studies are needed to validate other measurement properties of the present translated version of this questionnaire.
(This article belongs to the Special Issue The Impact of Nutritional Education and Food Policy on Consumers)
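
The validity indices reported above follow a standard construction: I-CVI is the proportion of expert raters scoring an item as relevant (3 or 4 on a 4-point scale), and S-CVI/Ave is the mean of the I-CVIs. A small sketch with made-up ratings, chosen so the flagged item lands at 0.6:

```python
# Hedged sketch: item- and scale-level content validity indices.
ratings = {  # item -> one relevance rating (1-4) per expert (illustrative)
    "q1": [4, 4, 3, 4, 4],
    "q2": [3, 4, 4, 4, 3],
    "q6_s1": [3, 4, 1, 3, 2],  # the culturally sensitive item
}

i_cvi = {item: sum(r >= 3 for r in rs) / len(rs) for item, rs in ratings.items()}
s_cvi_ave = sum(i_cvi.values()) / len(i_cvi)

print(i_cvi)                # {'q1': 1.0, 'q2': 1.0, 'q6_s1': 0.6}
print(round(s_cvi_ave, 2))  # 0.87 for these made-up ratings
```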

22 pages, 12633 KiB  
Article
MediaPipe Frame and Convolutional Neural Networks-Based Fingerspelling Detection in Mexican Sign Language
by Tzeico J. Sánchez-Vicinaiz, Enrique Camacho-Pérez, Alejandro A. Castillo-Atoche, Mayra Cruz-Fernandez, José R. García-Martínez and Juvenal Rodríguez-Reséndiz
Technologies 2024, 12(8), 124; https://doi.org/10.3390/technologies12080124 - 1 Aug 2024
Viewed by 948
Abstract
This research proposes a system to recognize the static signs of the Mexican Sign Language (MSL) dactylological alphabet, using the MediaPipe framework and Convolutional Neural Network (CNN) models to correctly interpret the letters represented by manual signals captured by a camera. Studies of this type allow advances in artificial intelligence and computer vision to be applied to the teaching of Mexican Sign Language (MSL). The best CNN model achieved an accuracy of 83.63% on a set of 336 test images. In addition, considering samples of each letter, the following results were obtained: an accuracy of 84.57%, a sensitivity of 83.33%, and a specificity of 99.17%. An advantage of this system is that it could be implemented on low-power equipment and carry out classification in real time, contributing to the accessibility of its use.
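
A rough sketch of the landmark front end such a system can use, assuming the MediaPipe Hands solution API; the preprocessing and the hand-off to the CNN are assumptions, not the authors' exact pipeline:

```python
# Hedged sketch: extract 21 hand landmarks per frame with MediaPipe Hands.
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmarks_from_frame(bgr_frame: np.ndarray):
    """Return a flat (63,) array of x, y, z landmark coordinates, or None."""
    result = hands.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).ravel()

# A vector like this (or an image crop guided by it) would then be passed
# to the trained CNN to predict one of the static MSL alphabet letters.
frame = cv2.imread("hand.jpg")
if frame is not None:
    print(landmarks_from_frame(frame))
```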

23 pages, 1305 KiB  
Article
Code-Switching at the Interfaces
by Antje Muntendam and M. Carmen Parafita Couto
Languages 2024, 9(8), 258; https://doi.org/10.3390/languages9080258 - 25 Jul 2024
Viewed by 1239
Abstract
One characteristic of multilingual speakers is that in everyday life they may integrate elements from their languages in the same sentence or discourse, a practice known as code-switching. This paper examines code-switching at the interfaces, in particular as related to information structure. Although a core question of modern linguistic theory is how syntactic and information-structural theories interact in accounting for the licensing of different grammatical phenomena, there has been relatively little literature on code-switching and information structure. In this paper, we provide an overview of the available literature on code-switching across different language combinations, focusing in particular on subject pronoun–verb switches, ellipsis, light verbs, topic/focus particles, and code-switching between sign languages. We argue that the study of the interplay between information structure and code-switching sheds light on our understanding of multilingual grammars and language competence more generally. In this regard, we discuss theoretical and methodological considerations to guide future studies.
(This article belongs to the Special Issue Syntax and Discourse at the Crossroads)

20 pages, 4716 KiB  
Article
Novel Wearable System to Recognize Sign Language in Real Time
by İlhan Umut and Ümit Can Kumdereli
Sensors 2024, 24(14), 4613; https://doi.org/10.3390/s24144613 - 16 Jul 2024
Viewed by 1076
Abstract
The aim of this study is to develop a practical software solution for real-time recognition of sign language words using two arms. This will facilitate communication between hearing-impaired individuals and those who can hear. We are aware of several sign language recognition systems developed using different technologies, including cameras, armbands, and gloves. However, the system we propose in this study stands out for its practicality, utilizing surface electromyography (muscle activity) and inertial measurement unit (motion dynamics) data from both arms. We address the drawbacks of other methods, such as high costs, low accuracy due to ambient light and obstacles, and complex hardware requirements, which have limited their practical application. Our software can run on different operating systems using digital signal processing and machine learning methods specific to this study. For the test, we created a dataset of 80 words based on their frequency of use in daily life and performed a thorough feature extraction process. We tested the recognition performance using various classifiers and parameters and compared the results. The random forest algorithm emerged as the most successful, achieving a remarkable 99.875% accuracy, while the naïve Bayes algorithm had the lowest success rate with 87.625% accuracy. The new system promises to significantly improve communication for people with hearing disabilities and ensures seamless integration into daily life without compromising user comfort or lifestyle quality.
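
As a sketch of the classification stage, the snippet below trains a random forest on synthetic windowed feature vectors standing in for the sEMG and IMU features from both arms; the shapes, class count, and split are placeholders, not the study's 80-word dataset:

```python
# Hedged sketch: random forest over combined sEMG + IMU window features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(8000, 120))    # windows x (EMG + IMU) features (synthetic)
y = rng.integers(0, 80, size=8000)  # 80 sign-language words

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")  # ~1/80 on noise
```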

37 pages, 4751 KiB  
Systematic Review
Machine Learning-Based Methods for Code Smell Detection: A Survey
by Pravin Singh Yadav, Rajwant Singh Rao, Alok Mishra and Manjari Gupta
Appl. Sci. 2024, 14(14), 6149; https://doi.org/10.3390/app14146149 - 15 Jul 2024
Viewed by 748
Abstract
Code smells are early warning signs of potential issues in software quality. Various techniques are used in code smell detection, including the Bayesian approach, rule-based automatic antipattern detection, antipattern identification utilizing B-splines, Support Vector Machine direct, SMURF (Support Vector Machines for design smell detection using relevant feedback), and immune-based detection strategies. Machine learning (ML) has made great strides in this area. This survey comprehensively covers relevant studies applying ML algorithms from 2005 to 2024 to provide insight into code smells, the ML algorithms most frequently applied, and software metrics. Forty-two pertinent studies allow us to assess the efficacy of ML algorithms on selected datasets. After evaluating various studies based on open-source and project datasets, this study also examines threats and obstacles to code smell detection, such as the lack of standardized code smell definitions, the difficulty of feature selection, and the challenges of handling large-scale datasets. Prior studies considered only a few factors in identifying code smells, whereas this study includes several potential contributing factors. Several ML algorithms are examined, and various approaches, datasets, dataset languages, and software metrics are presented. This study demonstrates the potential of ML algorithms to produce better results and fills a gap in the body of knowledge by providing class-wise distributions of the ML algorithms. Support Vector Machine, J48, Naive Bayes, and Random Forest models are the most common for detecting code smells. Researchers can find this study helpful in anticipating and addressing software design and implementation issues. The findings, which highlight the practical implications of ML algorithms for software quality improvement, will help software engineers fix problems during software design and development to ensure software quality.
(This article belongs to the Special Issue Artificial Intelligence in Software Engineering)
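
The typical experimental setup the survey covers reduces to training a classifier on software metrics labeled with a smell. A minimal sketch on synthetic data (the metric names, labels, and classifier choice are placeholders; Naive Bayes is one of the models the survey names):

```python
# Hedged sketch: classify code smells from software metrics.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(7)
# Columns stand in for metrics such as LOC, cyclomatic complexity, coupling,
# and cohesion; both values and labels are synthetic.
X = rng.normal(size=(500, 4))
y = rng.integers(0, 2, size=500)  # 1 = smelly (e.g., "god class"), synthetic

print(cross_val_score(GaussianNB(), X, y, cv=5).mean())  # ~0.5 on noise
```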
