This is a messy collection of notes I’ve been maintaining with an eventual goal of writing a blog post about this. I don’t know if that will ever happen.
Infrastructure
The DOE Secretary of Energy Advisory Board (SEAB) created a report on Recommendations on Powering Artificial Intelligence and Data Center Infrastructure.
Opinions of others
Matt Hourihan posted a call to action in Why America Must Invest in DOE Labs To Win the AI Race Against China | RealClearDefense which beats the drum that the government should invest in the DOE labs to match China’s institutional investments in research. However, his argument does not acknowledge that, unlike in China, US advances in AI research have come despite, not because of, government investment. There’s an implied message that pouring government money into government institutions to accelerate AI will be as effective in the US as it has been in China.
Bruce Schneier called for government-created foundation models (Essays: Public AI as an Alternative to Corporate AI - Schneier on Security). Ideally, sure, but practical hurdles make this untenable.
The Vanderbilt Policy Accelerator released a report, The National Security Case for Public AI, that I picked apart as well.
Satya Nadella, Brad Smith, Marc Andreessen, and Ben Horowitz wrote an opinion piece called AI for Startups that made the following recommendations:
- Create policies that support a global ecosystem of accessible data (an Open Data Commons) to benefit AI development and cultural institutions.
- Copyright laws must allow machines to learn from data freely, ensuring knowledge and unprotected facts remain accessible to foster innovation in AI. That is, allow AI models to train on copyrighted content as long as it’s publicly available, because that’s how humans learn.
- Regulations should focus on mitigating risks associated with AI misuse while avoiding unnecessary barriers that could hinder the growth and formation of businesses.
Government’s flakiness
Government is flaky and volatile; budget swings of 8% reflect a lack of long-term commitment, and this uncertainty and lack of conviction is deeply harmful to public-private partnerships and talent retention.
From The Returns of Government R&D: Evidence from U.S. Appropriations Shocks:
> Based on a narrative classification of all significant postwar changes in R&D appropriations for five major federal agencies, we find that an increase in nondefense R&D appropriations leads to increases in various measures of innovative activity and higher business-sector productivity in the long run. […] The estimates indicate that government-funded R&D accounts for one quarter of business-sector TFP growth since WWII, and imply substantial underfunding of nondefense R&D.
Government is ice skating uphill
Industry leads innovation in AI research, infrastructure, and people gravity. Consider Eagle (Microsoft) and Meta’s H100 clusters (Meta). Also see anecdotes in LLM training at scale.
50 MW sounds big to the government, but it’s not for industry. Industry isn’t constrained like government is by having to build data centers based on politics rather than space, power, and cooling. For example, see How a small city in Iowa became an epicenter for advancing AI - Source (microsoft.com). A richer discussion is in sustainability in HPC.
Government will never catch up because paying customers are more demanding than public researchers:
- Who cares if Frontier is late and you can’t run HACC at a new breakthrough scale for another six to twelve months?
- By comparison, in industry, fortunes are being made on lead times measured in months.
- This may settle out, but the AI industry will be pushed harder than government.
What is the role of government then?
The ship has sailed on the government playing a significant role in AI leadership, but traditional HPC for scientific computing is still fair game because it has unique requirements:
- High precision arithmetic (FP64)
- Fortran
There are also elements of AI in which the government has unique interest:
- Trustworthiness, safety, and security
- AI for science
What has the government done?
On October 24, 2024, the Biden administration released a national security memorandum that defined guardrails around using AI for defense. It specifically recommended:
- Nuclear weapons will never be put in the hands of an AI.
- Agencies must create annual reports on the risks of frontier models being used to proliferate nuclear, radiological, chemical, and biological weapons and threats.
- AI cannot be used to classify people (e.g., as eligible asylum grantees, known terrorists, etc.) without a human in the loop.
- Frontier models developed by private industry should be protected as national assets.
This memorandum doesn’t have teeth, but it is at least a starting point from which the government can shape its role in AI.
How can government do better?
A few ways the government could move forward:
- Give up: Build staffing plans that assume constant turnover as people graduate from government to industry, like the postdoc programs associated with DOE’s big HPC procurements (NESAP, CAAR, and ESP).
- Lean hard into the right incentives: Sabbaticals, joint appointments, and other ways to raise the ceiling. Work-life balance is a bad motivator; highly motivated people are going to work way too much regardless of where they work, and many big tech companies still offer work-life balance.
Mobilizing Tech Talent: Hiring Technologists to Power Better Government includes a list of “Seven Strategies to Strengthen the Federal Government’s Technical Workforce” which is quite good on its surface:
- Hire, appoint, and empower leaders with knowledge of modern technology
- Use private-sector best practices to recruit and hire tech talent
- Create the conditions for success
- Upgrade the technical skills and competencies of the existing workforce
- Build the brand and tell more stories
- Remove structural barriers and make operational excellence possible
- Consider ideas for future exploration
All of these are applicable to any team, not just the public sector, though the implementation of each may have aspects unique to government. Interestingly, this paper does not discuss pay at all.
My opinion
I am bearish on the government’s ability to add value to the LLM game and wrote about it in my ISC’24 recap blog post.
I read a lot of these calls to action and strategic initiatives, and I’ve responded to many on my blog:
- I am developing my personal response to the Notice of Request for Information (RFI) on Frontiers in AI for Science, Security, and Technology …
- ISC’24 recap (glennklockwood.com) has my most direct, point-by-point refutation of many of the key arguments being made
- Life and leaving NERSC (glennklockwood.com) has an older, pre-joining-Microsoft perspective on where I saw the future of the Labs going
- Thoughts on the NSF Future Directions Interim Report (glennklockwood.com) speaks to the bad trajectory of NSF’s HPC efforts as of 2015.