Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection and SpaceX will provide their resources to help "augment warfighter decision-making in complex operational environments," the US Defence Department said.
Notably absent from the list is AI company Anthropic, after its public dispute and legal fight with the Trump administration over the ethics and safety of AI usage in war.
The US Defence Department has been rapidly accelerating its use of AI in recent years.
The technology can help the military reduce the time it takes to identify and strike targets on the battlefield, while aiding in the organisation of weapons maintenance and supply lines, according to a report from the Brennan Centre for Justice.
But AI has already raised concerns that its use could invade Americans' privacy or allow machines to choose targets on the battlefield.
The Pentagon's latest contracts come at a time of anxiety about potential over-reliance on the technology on the battlefield, said Helen Toner, interim executive director at Georgetown University's Centre for Security and Emerging Technology.
"A lot of modern warfare is based on people sitting in command centres behind monitors, making complicated decisions about confusing, fast-moving situations," said Toner, a former board member of OpenAI.
"AI systems can be helpful in terms of summarising information or looking at surveillance feeds and trying to identify potential targets."
But questions about the appropriate levels of human involvement, risk and training are still being worked out, she said.
Such concerns were raised by Anthropic. The tech company said it wanted contractual assurances that the military would not use its technology for fully autonomous weapons or for the surveillance of Americans.
US Defence Secretary Pete Hegseth said the company must allow for any uses the Pentagon deemed lawful.
Anthropic sued after US President Donald Trump tried to stop all federal agencies from using the company's chatbot Claude. Hegseth sought to label the company a supply chain risk, a designation meant to protect against sabotage of national security systems by foreign adversaries.
One company's agreement with the Pentagon included language that said there should be human oversight over any missions in which the AI systems act autonomously or semi-autonomously, according to a person familiar with the agreement.
The language also said the AI tools must be used in ways that are consistent with constitutional rights and civil liberties.
Those provisions resemble the sticking points Anthropic raised, though OpenAI has previously said it secured similar assurances when it made its own deal with the Pentagon.