
OpenAI, the company behind the popular artificial intelligence chatbot ChatGPT, altered its usage policy to remove a prohibition on using its technology for “military and warfare.”

OpenAI’s usage policy specifically banned the use of its technology for “weapons development, military and warfare” before January 10 of this year, but the policy has since been updated to disallow only uses that would “bring harm to others,” according to a report from Computerworld.

“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property,” an OpenAI spokesperson told Fox News Digital. “There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.”

The quiet change clears the way for OpenAI to work closely with the military, something that has been a point of division among those running the company.

U.S. Marines with Tactical Training and Exercise Control Group, Marine Air-Ground Task Force Training Command and scientists with the Office of Naval Research conduct a proof-of-concept range for the Robotic Goat at Marine Corps Air-Ground Combat Center, Twentynine Palms, California, Sept. 9, 2023. (U.S. Marine Corps photo by Lance Cpl. Justin J. Marty)

But Christopher Alexander, the chief analytics officer of Pioneer Development Group, believes that divide within the company comes from a misunderstanding about how the military would actually use OpenAI’s technology.

“The losing faction is concerned about AI becoming too powerful or uncontrollable and probably misunderstands how OpenAI might support the military,” Alexander told Fox News Digital. “The most likely use of OpenAI is for routine administrative and logistics work, which represents a massive cost savings to the taxpayer. I am glad to see OpenAI’s current leadership understands that improvements to DOD capabilities lead to enhanced effectiveness, which translates to fewer lives lost on the battlefield.”

As AI has continued to grow, so have concerns about the dangers posed by the technology. The Computerworld report pointed to one such example last May, when hundreds of tech leaders and other public figures signed an open letter warning that AI could eventually lead to an extinction event and that putting guardrails in place to prevent that outcome should be a priority.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the letter read.

OpenAI CEO Sam Altman was one of the most prominent figures in the industry to sign the letter, highlighting the company’s apparent long-held desire to limit the dangerous potential of AI.

Sam Altman, CEO of OpenAI (Patrick T. Fallon/AFP via Getty Images)

But some experts believe that such a move was inevitable for the company, noting that American adversaries such as China are already looking toward a future battlefield where AI plays a prominent role.

“This is probably a confluence of events. First, the disempowerment of the nonprofit board probably tipped the balance toward abandoning this policy. Second, the military will have applications that save lives as well as might take lives, and not allowing those uses is hard to justify. And lastly, given the advances in AI with our enemies, I’m sure the U.S. government has asked the model providers to change those policies. We can’t have our enemies using the technology and the U.S. not,” Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation, told Fox News Digital.

Samuel Mangold-Lenett, a staff editor at The Federalist, expressed a similar sentiment, arguing that the best way to prevent a catastrophic event at the hands of an adversary such as China is for the U.S. to build its own robust AI capabilities for military use.

“OpenAI was likely always going to collaborate with the military. AI is the new frontier and is simply too important of a technological development to not use in defense,” Mangold-Lenett told Fox News Digital. “The federal government has made clear its intention to use it for this purpose. CEO Sam Altman has expressed concern over the threats AI poses to humanity; our adversaries, namely China, fully intend to use AI in future military endeavors that will likely involve the U.S.”

But such a need does not lessen the importance of developing AI safely, said American Principles Project Director Jon Schweppe, who told Fox News Digital that leaders and developers will still have to contend with “the runaway AI problem.”

“We not only have to worry about adversaries’ AI capabilities, but we also have to worry about the runaway AI problem,” Schweppe said. “We should be concerned that as AI learns to become a killing machine and more advanced in strategic warfare, that we have safeguards in place to prevent it from being used against domestic assets; or even in the nightmare runaway AI scenario, turning against its operator and engaging the operator as an adversary.”

The Pentagon (Alex Wong/Getty Images)

While the sudden change is likely to deepen divisions within OpenAI’s ranks, some believe the company itself should be viewed with skepticism as it moves toward potential military partnerships. Among them is Jake Denton, a research associate at the Heritage Foundation’s Tech Policy Center, who pointed to the company’s secretive models.

“Companies like OpenAI are not moral guardians, and their pretty packaging of ethics is but a facade to appease critics,” Denton told Fox News Digital. “While adopting advanced AI systems and tools in our military is a natural evolution, OpenAI’s opaque black-box models should give pause. While the company may be eager to profit from future defense contracts, until their models are explainable, their inscrutable design should be disqualifying.”

As the Pentagon fields more offers of potential partnerships from AI companies, Denton argues transparency should be a hallmark of any future deals.

“As our government explores AI applications for defense, we must demand transparency,” Denton said. “Opaque, unexplainable systems have no place in matters of national security.”
