“Foundation models for science” are AI models that can solve a range of scientific problems within one or more science domains. The term typically refers to large models that take scientific data, rather than human language, as input and produce some output.

I’ll put examples of foundation models for science here as I find them.

“Frontier models for science” is a term coined by FASST. I’m not sure whether it is meant to be different from foundation models for science.

Example models

| Model | Creator | Year | Domain | Parameters |
| --- | --- | --- | --- | --- |
| ORBIT | ORNL | 2024 | Climate | 113 billion |
| Aurora | Microsoft | 2024 | Climate | 1.3 billion |
| FourCastNet | NVIDIA | 2022 | Climate | 100 million [1] |

There’s also a paper whose authors “advocate for developing [foundation models] for power grids” [2], but it doesn’t actually present a trained model.

Footnotes

  1. The parameter count has to be inferred, since the paper only reports that the model requires 10 GB of GPU memory and took 1024 GPU-hours of A100 time to train. The architecture uses 8 AFNO blocks (~888K parameters per block) plus a 12-layer vision transformer (~7.08M parameters per layer), which totals roughly 92M parameters, i.e. on the order of 100 million.
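
     The back-of-the-envelope arithmetic can be sketched as follows; the per-component figures are the ones quoted above, not numbers reported directly by the paper:

     ```python
     # Rough parameter-count estimate for FourCastNet, using the per-component
     # figures from this footnote (888K per AFNO block, ~7.08M per ViT layer).
     afno_blocks = 8
     params_per_afno_block = 888_000       # approximate, per block
     vit_layers = 12
     params_per_vit_layer = 7_080_000      # approximate, per layer

     total = afno_blocks * params_per_afno_block + vit_layers * params_per_vit_layer
     print(f"Estimated parameters: {total / 1e6:.1f}M")  # ~92.1M, order of 100 million
     ```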

  2. Foundation models for the electric power grid - ScienceDirect