The Johns Hopkins University + Amazon Initiative for Interactive AI (AI2AI) has selected eight JHU Whiting School of Engineering (WSE) faculty research projects for its AY 2024-2025 AI2AI Faculty Research Awards.
2024-2025 Faculty Research Awards
Reliable and Robust Alignment of Large Language Models
Project Summary
Although Large Language Models (LLMs) perform well across various tasks, they can generate misinformation or harmful content, raising serious safety concerns. Reinforcement Learning from Human Feedback (RLHF) has proven effective in aligning LLMs with human values like helpfulness, accuracy, and safety. However, RLHF is resource-intensive and depends on human-labeled data, which can be biased, inaccurate, or malicious. This research aims to address these challenges by developing more reliable and efficient fine-tuning methods for LLMs.
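To illustrate the training signal at the heart of RLHF, below is a minimal sketch of the pairwise (Bradley-Terry) preference loss commonly used to train reward models on human-labeled comparisons. It assumes PyTorch; the function and variable names are illustrative, not taken from the project.

```python
# Minimal sketch of the pairwise preference loss used to train an
# RLHF reward model (Bradley-Terry model). Names are illustrative;
# this is not the project's actual implementation.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Encourage the reward model to score the human-preferred
    response above the rejected one."""
    # P(chosen > rejected) = sigmoid(r_chosen - r_rejected);
    # minimize the negative log-likelihood of the human label.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Example: scalar rewards for a batch of three preference pairs.
chosen = torch.tensor([1.2, 0.4, 2.0])
rejected = torch.tensor([0.3, 0.9, 1.1])
print(preference_loss(chosen, rejected).item())
```

Because this loss depends entirely on human comparison labels, biased or malicious labels directly shape the learned reward, which is the failure mode the project targets.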
Principal Investigator
Multimodal Shared Representations: Speech, Text, and Video
Project Summary
The main objective of this proposal is to explore both continuous and discrete representations as a path to modality-independent encodings of speech, text, and video. Our proposed direction utilizes modality-specific modules to map continuous speech, video, and text into a shared space, capturing cross-modal interactions and the connections between individual elements. We plan to encode speech, text, and video information into a shared discrete representation created via vector quantization.
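As an illustration of the discretization step, here is a minimal sketch of vector quantization, assuming NumPy; the codebook size and embedding dimension are invented for the example.

```python
# Minimal sketch of vector quantization: continuous embeddings from any
# modality encoder (speech, text, or video) are snapped to the nearest
# entry of a shared codebook, yielding modality-independent discrete codes.
# Codebook size and dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 256))   # 512 shared codes, 256-dim

def quantize(embeddings: np.ndarray) -> np.ndarray:
    """Return the index of the nearest codebook entry for each embedding."""
    # Pairwise squared Euclidean distances, shape (T, 512).
    dists = ((embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

# Embeddings from different modality-specific encoders land in the same
# discrete space, so their codes can be compared directly.
speech_emb = rng.normal(size=(10, 256))  # e.g., 10 speech frames
text_emb = rng.normal(size=(4, 256))     # e.g., 4 subword tokens
print(quantize(speech_emb), quantize(text_emb))
```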
Principal Investigator
Multimodal LLM Framework for Heterogeneous Alphanumeric Data
Project Summary
This project uses large language models to analyze high-frequency time-series data in combination with the contextual data surrounding those series and their anomalies.
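One plausible (hypothetical) way to combine a time series with its surrounding context for an LLM is to serialize both into a single prompt; the format and field names below are assumptions for illustration, not the project's design.

```python
# Hypothetical sketch: serializing a time-series window plus contextual
# metadata into one text prompt for an LLM. The prompt format and field
# names are illustrative assumptions.
from datetime import datetime, timedelta

def build_prompt(values, start: datetime, step_s: float, context: str) -> str:
    rows = "\n".join(
        f"{(start + timedelta(seconds=i * step_s)).isoformat()}, {v:.3f}"
        for i, v in enumerate(values)
    )
    return (
        "Context: " + context + "\n"
        "Readings (timestamp, value):\n" + rows + "\n"
        "Question: flag any anomalous readings and explain why."
    )

prompt = build_prompt(
    values=[0.98, 1.02, 0.99, 7.45, 1.01],
    start=datetime(2024, 9, 1, 12, 0, 0),
    step_s=0.1,
    context="Pump vibration sensor; maintenance performed at 11:55.",
)
print(prompt)
```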
Principal Investigator
Language Models with Easily Verifiable Generations
Project Summary
LLMs are brittle, so users must verify model responses by cross-checking them against trusted sources. This project aims to enhance LLM safety by developing chatbots that ground their responses in direct quotes from reliable sources, minimizing the fact-checking burden on users.
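Quote-grounded responses make verification nearly mechanical: each quoted span can be checked verbatim against the source document. A minimal sketch, with an illustrative quote-extraction regex:

```python
# Minimal sketch of why quote-grounded responses are easy to verify:
# each quoted span is checked verbatim against the trusted source text.
# The extraction regex and example texts are illustrative assumptions.
import re

def verify_quotes(response: str, source: str) -> dict[str, bool]:
    """Map each double-quoted span in the response to whether it
    appears verbatim in the trusted source."""
    quotes = re.findall(r'"([^"]+)"', response)
    return {q: q in source for q in quotes}

source = "The meeting was moved to Tuesday. Attendance is optional."
response = 'Per the memo, "The meeting was moved to Tuesday" and "lunch is provided".'
print(verify_quotes(response, source))
# {'The meeting was moved to Tuesday': True, 'lunch is provided': False}
```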
Principal Investigator
Safe and Efficient In-Context Learning via Risk Control
Project Summary
The behavior of language models, such as chatbots, can be hijacked by malicious users. This project aims to statistically control the extent to which a user can adapt the system without crippling its performance in intended use cases.
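In the spirit of distribution-free risk control, one way to set such a limit is to calibrate a permissiveness threshold so that empirical risk on held-out data stays below a target level. The sketch below uses synthetic calibration losses; the loss model and threshold grid are assumptions for illustration.

```python
# Minimal sketch in the spirit of distribution-free risk control: scan
# candidate thresholds (how permissive the system is toward user-supplied
# adaptation) and keep the most permissive one whose empirical risk on
# calibration data stays below a target level alpha. The calibration
# losses and threshold grid are illustrative assumptions.
import numpy as np

def calibrate_threshold(losses: np.ndarray, lambdas: np.ndarray,
                        alpha: float) -> float:
    """losses[i, j]: loss of calibration example i at threshold lambdas[j]
    (assumed non-decreasing in lambda). Returns the largest lambda whose
    empirical risk is at most alpha."""
    risks = losses.mean(axis=0)            # empirical risk per threshold
    ok = np.where(risks <= alpha)[0]
    return float(lambdas[ok[-1]]) if ok.size else float(lambdas[0])

rng = np.random.default_rng(1)
lambdas = np.linspace(0.0, 1.0, 11)
# Synthetic 0/1 losses that grow with permissiveness.
losses = (rng.uniform(size=(200, 11)) < lambdas * 0.2).astype(float)
print(calibrate_threshold(losses, lambdas, alpha=0.1))
```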
Principal Investigator
Multimodal Theory of Mind for Engineering Socially Intelligent AI Assistants
Project Summary
This research project focuses on creating AI assistants that can understand and respond to human needs in everyday situations, like helping with online shopping or assisting around the house. By enhancing their social intelligence, these AI systems aim to offer more effective and personalized support in real-world tasks.
Principal Investigator
Interactive and Contextualized Evaluation for Large Language Models
Project Summary
This project aims to bridge the gap between how models are currently evaluated and how users actually use them in the real world. The research will create an interactive and contextualized evaluation framework that accounts for real-world human-AI interaction by studying authentic user interactions and incorporating users' diverse, nuanced evaluation criteria.
Principal Investigator
Combining LVMs with LLMs to Improve Vision-Language Models
Project Summary
Coming soon!
Principal Investigator