Research Proposals for Faculty

The JHU + Amazon Initiative for Interactive Artificial Intelligence (AI2AI) solicits research proposals from faculty for advancing the state of the art in all aspects of interactive AI.

Amazon builds and deploys AI across three technology layers. The bottom layer consists of Amazon’s own high-performance, cost-effective custom chips, as well as a variety of other computing options, including those from third parties. The middle layer focuses on customer choice by providing the broadest selection of foundation models, both Amazon-built and those from other leading providers. At the top layer, Amazon offers generative AI applications and services to improve every customer experience.

There are three things that distinguish Amazon’s approach to the development and deployment of AI:
1) maintaining a strategic focus on improving the customer and employee experience through practical, real-world applications of AI;
2) marshaling world-class data, compute, and talent resources to drive AI innovation; and
3) committing to the development of responsible, reliable, and trustworthy AI.

Topics of interest include, but are not limited to, those below. Please feel free to bring your own unique viewpoint and expertise to these topics:

  • AWS Security:
    • Low-cost computer vision techniques for object interactions for physical security.
    • Near-real-time anomaly detection at scale (e.g., streaming petabytes per hour) to identify malicious activity in Linux cloud-based environments.
    • Running distributed big data, AI/ML, and/or streaming models with minimal latency and cost using host, network, and audit telemetry sources. Applications of such data engineering include selecting interesting subsets of data (including complex, multi-event sequences), improving matching and micro-summarization efficiency for continuous log processing, and improving event-summarization efficiency while maintaining accuracy.
    • Use of AI/ML (e.g., generative AI) to automate and improve security response workflows (for example, from written reports or threat detections), specifically through incident summarization, response planning, and communication management.
  • Artificial General Intelligence & Search:
    • Large Language Models (LLMs):
      • Retrieval-augmented generation (RAG), fine-tuning and alignment (SFT, RLHF), and efficient inference: ensuring accuracy and reducing hallucinations; maintaining privacy and trust; reasoning over long contexts
      • Long-form context methods
      • Improving data efficiency; effectively distilling models for real-time inference; data quality checks
      • Multilingual LLMs and challenges for cross-language defects (e.g., cross-language hallucinations)
      • Synthetic data generation for LLM learning
      • Adapting LLMs for dynamic content (e.g., feeds, web content) in knowledge-augmented scenarios
      • Tool- and code-empowered LLMs
      • External-knowledge- and domain-knowledge-enhanced LLMs, and knowledge updating
    • Vision-Language:
      • Multimodal learning and video understanding: retrieval with multimodal inputs (e.g., video, image, text, speech)
      • Adversarial ML with multimodal inputs
      • Comprehensive video understanding with diverse content (open-vocabulary)
      • Shared multimodal representation spaces; aligned codecs
      • LLM- and VLM-based intelligent agents
    • Search and Retrieval:
      • Personalization in search, semantic retrieval, and conversational search: understanding descriptive and natural-language queries for product search; retrieving information using LLMs’ output
      • Search page optimization (ranking) using heterogeneous content such as related keywords, shoppable images, videos, and ads
      • Tool learning for proactive information seeking
    • Efficient Generative AI:
      • Novel model architectures for improved performance (accuracy and efficiency)
      • Efficient training of large neural network models: high-performance distributed training and inference algorithms for generative AI systems, with quality metrics and evaluations
    • Responsible Generative AI:
      • Fact checking and factual error correction for truthful LLMs
      • Privacy-preserving continual learning/self-learning
      • Responsible AI for audio, image, and video generation, including but not limited to measurement and mitigation, guardrail models, privacy concerns, detecting and mitigating adversarial use cases, and machine unlearning and model disgorgement

Eligibility: Full-time tenure-track and research-track faculty members with a primary appointment in the Whiting School of Engineering are eligible to submit proposals as principal investigators. Collaborative proposals led by WSE faculty with faculty from other JHU divisions are also welcome.

Note: Faculty who will be Amazon Scholars in AY 2024-2025 are eligible to submit proposals, but must adhere to JHU Conflict of Interest policies and procedures. These individuals are encouraged to consult with Laura Evans at [email protected] well in advance of proposal submission.

Important Dates:

  • March 15, 2024: Pre-proposals/Abstracts due (PDF, 1-page max, via the AI2AI portal)
  • March 29, 2024: AI2AI Matchmaking Workshop (Registration required & attendance strongly encouraged)
  • May 6, 2024: Full proposals due
  • June 2024: Award decisions made

The award period is nominally September 1, 2024 to August 31, 2025.

Award types, amounts, and numbers: Two types of awards are available:

  1. Sponsored research awards, with terms comparable to those for other sponsored projects, are expected to involve direct collaboration with Amazon researchers. A few (1-3) awards with up to $100,000 in direct costs are expected to be given in AY 2024-2025.
  2. Gift-funded awards are treated as unrestricted gifts and are expected to support research that is more exploratory in nature. Several (2-4) awards of up to $100,000 each are expected to be given in AY 2024-2025.

The application process for both types of awards is the same. The award type will be decided in consultation with Amazon. Applicable indirect charges will be added to awarded proposals.

Format: Proposals must be single-spaced, in 11-point or larger font, with page margins of at least 0.5 inches.

  1. Project Description (PDF, 3-page maximum)
    1. Title
    2. PIs
    3. Executive Summary
    4. Technical description of the project
    5. Expected deliverables/outcomes, and milestones
  2. Budget and justification (Excel, using the provided template)
  3. References (unlimited)
  4. Biographies of the PIs (up to 3 pages per PI, in NSF format)

Proposals currently under consideration by the Amazon Research Awards program must be identified as concurrent submissions. Proposals will not be selected for funding from both sources.