US Army general says he relies on ChatGPT to make decisions
Major General William “Hank” Taylor, who serves as the commander of the 8th Field Army for the United States military in South Korea, confirmed he relies heavily on tools like ChatGPT to make decisions.
He told reporters this week that his relationship with the generative AI chatbot has become indispensable.
“I've become—Chat and I are really close lately,” Maj. Gen. Taylor said during a roundtable at the annual Association of the United States Army (AUSA) conference.
The general, who also holds the title of Director of Operations for United Nations Command and US Forces Korea, emphasized that his goal is to leverage the technology to gain a strategic advantage.
“As a commander, I want to make better decisions,” Taylor explained.
“I want to make sure that I make decisions at the right time to give me the advantage.”
Use Cases and Concerns
While Maj. Gen. Taylor stated he trusts the algorithm to make “key command decisions,” he did not provide specific examples of tactical or operational choices informed by the AI.
He did, however, detail other applications, noting that the field army is “regularly using” AI for predictive analysis related to sustainment and logistics.
He also uses the tools to draft weekly reports and is building models to enhance his and his soldiers’ individual leadership and decision-making skills.
The admission, however, immediately raised concerns among analysts regarding data security and the reliability of using commercial-grade AI for military operations.
Critics highlight that commercial large language models (LLMs) are notorious for generating false or illogical information, a phenomenon known as “hallucination,” which poses a serious risk in high-stakes military contexts.
The Pentagon has previously issued warnings against relying on public models due to the severe risk of exposing sensitive or confidential information.