An expert in machine learning and speech processing/speaker identification, Najim Dehak is internationally known as the lead developer of the i-vector approach, a factor analysis-based speaker recognition technique that became widely used in speech processing. He introduced this approach during the 2008 summer workshop at Johns Hopkins University’s Center for Language and Speech Processing (CLSP), one of the world’s largest and most influential academic research centers devoted to the science and technology of language and speech. Eight years later, Dehak joined the Johns Hopkins Whiting School of Engineering, where he is now a member of the CLSP and an associate professor in the Department of Electrical and Computer Engineering.
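At its core, the i-vector approach models a recording's GMM mean supervector as M = m + Tw, where m is the universal background model (UBM) mean supervector, T is a low-rank "total variability" matrix, and the low-dimensional latent vector w is the i-vector. The sketch below, with hypothetical toy dimensions and randomly initialized parameters (a real system learns the UBM and T from data), shows how an i-vector is extracted as the posterior mean of w given Baum-Welch statistics:

```python
import numpy as np

# Hypothetical toy dimensions for illustration only.
C, D, R = 8, 4, 3  # GMM components, feature dim, i-vector dim
rng = np.random.default_rng(0)

# UBM parameters and total variability matrix T (assumed pretrained;
# here they are random placeholders).
m = rng.normal(size=(C, D))                  # UBM component means
sigma = rng.uniform(0.5, 1.5, size=(C, D))   # diagonal covariances
T = rng.normal(scale=0.1, size=(C * D, R))   # total variability matrix

def extract_ivector(N, F):
    """Posterior mean of w in M = m + T w, given Baum-Welch statistics.

    N : (C,)   zeroth-order occupation counts per GMM component
    F : (C, D) first-order statistics per GMM component
    """
    # Center the first-order stats around the UBM means, scale by Sigma^-1.
    F_norm = ((F - N[:, None] * m) / sigma).reshape(-1)
    # Posterior precision: L = I + T' Sigma^-1 diag(N) T
    N_exp = np.repeat(N, D)                       # expand counts to CD dims
    L = np.eye(R) + T.T @ ((N_exp / sigma.reshape(-1))[:, None] * T)
    # i-vector = L^-1 T' Sigma^-1 F_centered
    return np.linalg.solve(L, T.T @ F_norm)

# Usage: statistics accumulated over one recording yield one fixed-length
# i-vector, regardless of the recording's duration.
N = rng.uniform(1, 10, size=C)
F = N[:, None] * m + rng.normal(size=(C, D))
w = extract_ivector(N, F)   # shape (R,)
```

The fixed-length w summarizes the whole utterance, which is what makes simple back-end classifiers (cosine scoring, PLDA) effective for speaker comparison.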
Dehak’s research focuses on speech processing and modeling; audio segmentation; and speaker, language, and emotion recognition. One of his interests has been building robust emotion detection systems that can be useful in several areas, including call centers, mental health, and social applications.
Dehak has also been focusing his research on topics related to human aging. There are currently more Americans aged 65 and older (over 49 million) than at any other time in history, according to the U.S. Census Bureau, and the accompanying increase in the number of individuals with severe chronic conditions will have profound social and economic effects on society. Dehak and his team are developing non-invasive, artificial intelligence-based tools to detect, assess, and monitor the functional and cognitive decline of elderly adults. With his team, he has been exploring the use of human language technologies to help in the diagnosis of Alzheimer’s disease and Parkinsonian syndromes. The proposed systems use speech and natural language processing methods to evaluate the cognitive state of Alzheimer’s patients and to detect motor impairments such as dysarthria in Parkinson’s disease patients. Dehak’s team has already achieved up to 94% accuracy in differentiating between Parkinsonian patients and healthy subjects.
Dehak will be designing artificial intelligence-based systems that use physiological signals to represent or gauge neuroanatomical and functional relationships that are commonly perturbed in elderly adults and in related conditions such as Alzheimer’s disease, Parkinson’s disease, frailty, and postoperative delirium. In addition to voice/speech, Dehak’s team will explore combining other bio-signals obtained from extraocular movement, handwriting, and gait to obtain a complete assessment of a patient’s functional and cognitive status.
Dehak came to Johns Hopkins from the Spoken Language Systems Group at the MIT Computer Science and Artificial Intelligence Laboratory, where he was a research scientist. He earned his PhD at École de Technologie Supérieure in Montreal in 2009; his master of science degree at Pierre and Marie Curie University in Paris in 2004; and his bachelor of science degree at the University of Sciences and Technology d’Oran, in Algeria, in 2003. He is the author of more than 185 publications, is a member of IEEE, and previously served on the IEEE Speech and Language Technical Committee.