APEX Proposal Requirements and Submission Instructions

Proposal Requirements

Proposals must clearly address the following areas (page limits are given in the Submission Instructions below):

Scientific Impact

  • Articulate the potential for substantial scientific advancement and impact within
    your field through the proposed AI-driven methodologies.
  • Include a description of how the proposed work could facilitate strong follow-on
    proposals to DOE allocation programs such as ALCC or INCITE.

Current Methods & AI Innovation

  • Provide an overview of current state-of-the-art methods in your scientific domain.
  • Clearly outline how your project intends to apply AI in novel ways, either through
    innovative methods or by advancing AI into new scientific domains.
  • Explain why the proposed work currently needs, or will facilitate future use of, leadership-scale computing resources.

Goals and Required Resources

  • Define clear, measurable objectives and anticipated outcomes.
  • Identify the specific ALCF resources desired (see list below) and indicate whether you are interested in exploring new or unfamiliar systems.
  • For the large-scale resources (Aurora and Polaris), computational requirements should be at start-up levels and must not exceed:
    • Aurora: 250,000 node-hours annually
    • Polaris: 25,000 node-hours annually

Team & Effort Description

  • Detail team composition, roles, expertise, and proposed effort levels.
  • Some teams will be assigned an ALCF-funded postdoc. Explain how this person would be integrated into, and meaningfully contribute to, your project, making the collaboration effective and worthwhile for ALCF.
  • Describe funding sources for the proposed effort. Proposals without current or future funding will not be considered.

Methodological Approach

  • Describe your methodological framework, AI techniques, algorithms, models, and tools.
  • Explain the experimental or exploratory nature of your AI approach.
  • Projects must allow ALCF staff meaningful access to data and software to enable effective collaboration.
     

Example Project Types

Projects of interest may include, but are not limited to:

  • Complex workflows that integrate AI with large-scale simulation, data analysis, or
    parameter optimization.
  • Surrogate modeling to accelerate simulations and enable rapid exploration of parameter
    spaces.
  • AI systems that guide, automate, or adaptively steer scientific experiments or simulations.
  • Novel AI models that advance the application of AI for science.
  • AI models tailored to scientific workloads or adapted to new scientific domains. 
     

Proposal Evaluation

Proposals will be evaluated based on:

  • Originality and creativity in applying AI techniques.
  • Feasibility and clarity of methodology.
  • Alignment with, and contribution to, DOE mission goals.
  • Potential scientific and societal impact.
  • Willingness to collaborate openly with ALCF staff.
  • A demonstrated current or future need for leadership-scale computing.
  • A clearly outlined plan for collaborating with ALCF staff to reach proposal goals.


Reporting

Awardees are required to submit concise quarterly progress reports summarizing achievements, challenges, and upcoming objectives. Failure to do so will result in removal from the program.

Available Resources

Proposals should indicate which of the following resources would be needed, targeted, or explored as part of the project:

  • Aurora: 10,000 nodes each with 6 Intel Data Center Max 1550 GPUs (128GB HBM, 2 GPU Tiles each), 2 Intel Xeon CPU Max 9470C CPUs, 1TB DDR, 128GB HBM.
  • Polaris: 560 nodes each with 4 Nvidia A100 GPUs (40GB HBM each), 1 AMD Zen 3 CPU, 512GB DDR.
  • Sophia: 24 nodes each with 8 Nvidia A100 GPUs (40GB HBM each), 2 AMD Rome CPUs, 1TB DDR. Runs a vLLM inference service for serving open-source LLMs.
  • Crux: 256 nodes each with 2 64-core AMD Rome CPUs, 128GB DDR per CPU.
  • Metis: SambaNova-40 nodes running SambaNova’s custom inference service for serving the latest open-source LLMs.
  • Tara: 600+ Grace-Hopper nodes, each with 2 Grace CPUs and 4 H200 GPUs.
  • AI Testbed: Includes specialized AI hardware for advanced applications.

Inference services on Sophia and Metis can be integrated into larger workflows using LLM inference. ALCF is also willing to host custom models for inference within workflows running on Aurora or Polaris.
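As an illustration of this kind of integration, a workflow step running on Aurora or Polaris could call a hosted inference service through an OpenAI-compatible HTTP API (the interface vLLM exposes). This is a minimal sketch only: the endpoint URL and model name below are placeholders, not actual ALCF service addresses, and access details would be provided by ALCF after an award.

```python
import json
import urllib.request

# Placeholder endpoint and model name (assumptions, not real ALCF addresses).
VLLM_URL = "http://sophia.example/v1/chat/completions"
MODEL = "meta-llama/Llama-3.1-8B-Instruct"

def build_request(prompt: str, max_tokens: int = 256) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completion request for a vLLM server."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        VLLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def query_llm(prompt: str) -> str:
    """Send the request and return the model's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

A simulation-steering workflow might call `query_llm` between campaign stages, e.g. to summarize results or propose the next parameter set, while the heavy computation stays on Aurora or Polaris.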

Submission Instructions

Proposals must be submitted electronically by the deadline (February 27, 2026).
Submit the following:

  • Proposal: strict 5-page limit.
  • Team Details: one paragraph listing:
    • personnel expected to contribute
    • the effort level of each
    • how this effort is funded
  • A CV for each team member.

Proposals can be submitted via this Box Form:
https://anl.app.box.com/f/11c55068d0af4190b9186925ababfb2d

Please send any questions to apex@alcf.anl.gov.