2024-2025 Faculty Research Awards

The Johns Hopkins University + Amazon Initiative for Artificial Intelligence (AI2AI) has selected eight JHU Whiting School of Engineering (WSE) faculty research projects for its AY 2024-2025 AI2AI Faculty Research Awards.

Reliable and Robust Alignment of Large Language Models

Project Summary

Although Large Language Models (LLMs) perform well across various tasks, they can generate misinformation or harmful content, raising serious safety concerns. Reinforcement Learning from Human Feedback (RLHF) has proven effective in aligning LLMs with human values like helpfulness, accuracy, and safety. However, RLHF is resource-intensive and depends on human-labeled data, which can be biased, inaccurate, or malicious. This research aims to address these challenges by developing more reliable and efficient fine-tuning methods for LLMs.
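The summary does not specify the project's training objective; purely as illustrative background, the sketch below shows the Bradley-Terry preference loss commonly used to train reward models in RLHF, where a human-preferred ("chosen") response should be scored above a rejected one. All names and values here are hypothetical.

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry preference loss for reward-model training in RLHF:
    low when the reward model scores the human-preferred response above
    the rejected one, high when the ranking is inverted."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A reward model that agrees with the human label incurs a small loss;
# one that disagrees incurs a large loss.
assert preference_loss(2.0, -1.0) < preference_loss(-1.0, 2.0)
```

Because this loss depends entirely on human-labeled comparisons, biased or malicious labels directly distort the learned reward, which is the vulnerability the project targets.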

Principal Investigator

Mahyar Fazlyab
Assistant Professor, Department of Electrical and Computer Engineering

Multimodal Shared Representations: Speech, Text, and Video

Project Summary

The main objective of this proposal is to explore both continuous and discrete approaches to learning modality-independent representations. Our proposed direction uses modality-specific modules to map continuous speech, video, and text into a shared space, capturing cross-modal interactions and connections between individual elements. We plan to encode speech, text, and video information into a shared discrete representation created via vector quantization.
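The core operation behind such a shared discrete representation is vector quantization: mapping a continuous embedding to the index of its nearest entry in a learned codebook. The minimal sketch below illustrates only that lookup step, with a toy hand-written codebook; the project's actual encoders and codebook training are not described in the summary.

```python
import math

def vector_quantize(x, codebook):
    """Return the index of the codebook entry nearest to x (Euclidean
    distance) -- the basic assignment step of a VQ layer."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(range(len(codebook)), key=lambda i: dist(x, codebook[i]))

# Toy shared codebook; in the proposed setup, modality-specific encoders
# for speech, text, and video would each map into this common space
# before quantization (values are illustrative only).
codebook = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
assert vector_quantize([0.9, 1.1], codebook) == 1
```

Because all modalities quantize against the same codebook, two inputs from different modalities that land on the same index share one discrete token, which is what makes the representation modality-independent.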

Principal Investigator

Leibny Paola Garcia Perera
Assistant Research Scientist, Center for Language and Speech Processing (also a recipient of a 2023-2024 AI2AI Faculty Research Award)

Multimodal LLM Framework for Heterogeneous Alphanumeric Data

Project Summary

This project uses large language models to analyze high-frequency time series data in combination with contextual data around the time series and their anomalies.

Principal Investigator

Kimia Ghobadi
John C. Malone Assistant Professor, Department of Civil and Systems Engineering

Language Models with Easily Verifiable Generations

Project Summary

LLMs are brittle, so users must verify model responses by cross-checking them against trusted sources. This project aims to enhance LLM safety by developing chatbots that base their responses on direct quotes from reliable sources, minimizing the fact-checking burden on users.

Principal Investigator

Daniel Khashabi
Assistant Professor, Department of Computer Science

Safe and Efficient In-Context Learning via Risk Control

Project Summary

The behavior of language models, such as chatbots, can be hijacked by malicious users. This project aims to statistically control the extent to which a user can adapt the system, without crippling its performance in intended use cases.
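The summary does not describe the project's method in detail; as a hedged illustration of the general distribution-free risk-control recipe it alludes to, the sketch below calibrates, on held-out data, the most permissive adaptation setting whose average loss stays within a risk budget. All names, settings, and loss values are hypothetical.

```python
def calibrate(lambda_to_losses, alpha):
    """Risk-control sketch: among candidate settings lambda (e.g., how
    much in-context adaptation a user is permitted), return the largest
    one whose average held-out loss stays within the budget alpha."""
    feasible = [lam for lam, losses in lambda_to_losses.items()
                if sum(losses) / len(losses) <= alpha]
    return max(feasible) if feasible else None

# Hypothetical 0/1 calibration losses for three adaptation levels:
# more adaptation (larger lambda) tends to incur more risk.
risks = {0.1: [0, 0, 0, 1], 0.5: [0, 1, 0, 1], 0.9: [1, 1, 1, 0]}
assert calibrate(risks, alpha=0.3) == 0.1
```

The calibrated setting then caps how far a user can steer the deployed system while preserving its behavior in intended use cases.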

Principal Investigator

Eric Nalisnick
Assistant Professor, Department of Computer Science

Multimodal Theory of Mind for Engineering Socially Intelligent AI Assistants

Project Summary

This research project focuses on creating AI assistants that can understand and respond to human needs in everyday situations, like helping with online shopping or assisting around the house. By enhancing their social intelligence, these AI systems aim to offer more effective and personalized support in real-world tasks.

Principal Investigator

Tianmin Shu
Assistant Professor, Department of Computer Science

Interactive and Contextualized Evaluation for Large Language Models

Project Summary

The goal of this project is to bridge the gap between how models are currently evaluated and how users actually use them in the real world. To that end, this research will create an interactive and contextualized evaluation framework that accounts for real-world human-AI interaction by studying actual user interactions and incorporating users' diverse, nuanced evaluation criteria.

Principal Investigator

Ziang Xiao
Assistant Professor, Department of Computer Science

Combining LVMs with LLMs to Improve Vision-Language Models

Project Summary

Coming soon!

Principal Investigator

Alan Yuille
Bloomberg Distinguished Professor, Department of Computer Science