OpenAI upgrades ChatGPT with interactive learning tools as lawsuits and Pentagon backlash mount
OpenAI on Monday launched a set of interactive visual tools inside ChatGPT that let users manipulate mathematical and scientific formulas in real time — a genuinely impressive education feature that also serves as the company's most direct attempt yet to change the subject during the worst ten days of its corporate life.
The new experience covers more than 70 core math and science concepts, from the Pythagorean theorem to Ohm's law to compound interest. When a user asks ChatGPT to explain one of these topics, the chatbot now generates a dynamic module with adjustable sliders alongside its written response. Drag a variable, and the equations, graphs, and diagrams update instantly. The feature is available today to all logged-in users worldwide, across every plan, including free.
OpenAI tells VentureBeat that 140 million people already use ChatGPT each week for math and science learning. That is a staggering number — and it goes a long way toward explaining why the company chose this particular week to ship a product designed to make those users' experience meaningfully better. Since late February, OpenAI has been sued by the mother of a 12-year-old mass shooting victim, who alleges the company knew from the attacker's ChatGPT interactions that violence was being planned; lost its head of robotics over a Pentagon deal that triggered a near-300% spike in app uninstalls; watched some of its own employees join more than 30 OpenAI and Google DeepMind signatories on a legal brief supporting rival Anthropic against the U.S. government; and scrapped plans with Oracle to expand a flagship data center in Texas. Its chief competitor's app, Claude, now sits atop the App Store.
The interactive learning tools are, on their merits, a strong product. But they arrive at a company fighting on every front simultaneously — and burning through an estimated $15 billion in cash this year to do it.
How the new ChatGPT learning tools actually work
The feature is built on a simple pedagogical premise: students understand formulas better when they can see what happens as the inputs change.
Ask ChatGPT "help me understand the Pythagorean theorem," and the system now responds with a written explanation alongside an interactive panel. On the left, the formula $a^2 + b^2 = c^2$ appears in clean notation with sliders for sides $a$ and $b$. On the right, a geometric visualization — a right triangle with squares drawn on each side — reshapes dynamically as you adjust the values. The computed hypotenuse updates in real time. The same treatment applies across topics: voltage and resistance for Ohm's law, pressure and temperature for the ideal gas equation, radius and height for cone volume.
OpenAI's initial roster of more than 70 topics targets high school and introductory college material: binomial squares, Charles' law, circle equations, Coulomb's law, cylinder volume, degrees of freedom, exponential decay, Hooke's law, kinetic energy, the lens equation, linear equations, slope-intercept form, surface area of a sphere, trigonometric angle sum identities, and others.
The company cited research suggesting that "visual, interaction-based learning can lead to stronger conceptual understanding than traditional instruction for many students," and pointed to a recent Gallup survey in which more than half of U.S. adults said they struggle with math. In early testing, OpenAI said, students reported the modules helped them grasp how variables relate to one another, and parents described using them to work through problems alongside their children.
Anjini Grover, a high school mathematics teacher quoted in OpenAI's announcement, praised "how strongly this feature emphasizes conceptual understanding." Raquel Gibson, a high school algebra teacher, called it "a step towards empowering students to independently explore abstract concepts."
The tools build on ChatGPT's existing education features — a "study mode" for step-by-step problem solving and a quizzes feature for exam prep — and OpenAI said it plans to expand interactive learning to additional subjects. The company also said it intends to publish research through its NextGenAI initiative and OpenAI Learning Lab to study how AI shapes learning outcomes over time.
A lawsuit alleging OpenAI knew a mass shooter was planning an attack
The education launch shares the calendar with the most serious legal challenge OpenAI has ever faced.
On Monday, the mother of 12-year-old Maya Gebala filed a civil lawsuit against OpenAI in B.C. Supreme Court, alleging the company had "specific knowledge of the shooter's long-range planning of a mass casualty event" through ChatGPT interactions and "took no steps to act upon this knowledge." Gebala was shot three times during the February 10 mass shooting in Tumbler Ridge, British Columbia, which killed eight people and the 18-year-old attacker. She suffered what the lawsuit describes as a catastrophic traumatic brain injury with permanent cognitive and physical disabilities.
The claim paints a damning picture of how the shooter used ChatGPT. It alleges the platform functioned as a "counsellor, pseudo-therapist, trusted confidante, friend, and ally" and was "intentionally designed to foster psychological dependency between the user and ChatGPT." The shooter was under 18 when they began using the service, the suit states, and despite OpenAI's requirement that minors obtain parental consent, the company "took no steps to implement age verification or consent procedures."
OpenAI has separately acknowledged that it suspended the shooter's account months before the attack but did not alert Canadian law enforcement, a decision that provoked sharp political fallout. B.C. Premier David Eby said after a virtual meeting with OpenAI CEO Sam Altman that Altman agreed to apologize to the people of Tumbler Ridge and work with the provincial government on AI regulation recommendations.
None of the claims have been proven in court. OpenAI has not publicly commented on the lawsuit. But the case poses a question that transcends any single legal proceeding: when an AI company's own internal systems identify a user as dangerous enough to ban, what obligation does it have to tell someone?
The Pentagon deal that split OpenAI from the inside
The Tumbler Ridge lawsuit is unfolding against the backdrop of an internal crisis that has already cost OpenAI key talent and millions of users.
On February 28, Altman announced a deal giving the Pentagon access to OpenAI's AI models inside secure government computing systems. The agreement came days after Anthropic CEO Dario Amodei publicly refused similar terms, saying his company could not proceed without assurances against autonomous weapons and mass domestic surveillance. The Pentagon responded by designating Anthropic a "supply-chain risk" — a classification normally reserved for foreign adversaries — and Defense Secretary Pete Hegseth barred any military contractor from conducting commercial activity with the company.
The reaction inside OpenAI was immediate. Caitlin Kalinowski, who joined from Meta in 2024 to build out the company's robotics hardware division, resigned on principle. "AI has an important role in national security," she wrote publicly. "But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." Research scientist Aidan McLaughlin wrote on social media that he personally didn't think "this deal was worth it." Another employee told CNN that many OpenAI staffers "really respect" Anthropic for walking away.
The reaction outside the company was even more dramatic. ChatGPT uninstalls spiked more than 295% on the day the deal was announced. Anthropic's Claude surged to No. 1 among free apps on the U.S. Apple App Store and remained there as of this past weekend. Protesters gathered outside OpenAI's San Francisco headquarters calling for a "QuitGPT" movement.
And in the most extraordinary development, more than 30 OpenAI and Google DeepMind employees — including DeepMind chief scientist Jeff Dean — filed an amicus brief Monday supporting Anthropic's lawsuit against the Defense Department. The brief argued that the Pentagon's actions, "if allowed to proceed," would "undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond." The employees signed in their personal capacity, but the spectacle of OpenAI's own researchers rallying to a competitor's legal defense against the same government their company just partnered with has no real precedent in the industry.
Altman, to his credit, has not pretended the situation is fine. In an internal memo later shared publicly, he admitted the deal "was definitely rushed" and "just looked opportunistic and sloppy." He revised the contract to include explicit prohibitions against mass domestic surveillance and the use of OpenAI technology on commercially acquired data. He also publicly said that enforcing the supply-chain risk designation against Anthropic "would be very bad for our industry and our country."
Meanwhile, Anthropic warned in court filings that the Pentagon's blacklisting could cost it up to $5 billion in lost business — roughly equivalent to its total revenue since commercializing its AI technology in 2023. The company is seeking a temporary court order to continue working with military contractors while the case proceeds.
Why OpenAI's $15 billion cash burn makes every user count
Strip away the lawsuits and the politics, and OpenAI still has a math problem of its own.
The company is expected to burn through approximately $15 billion in cash this year, up from $9 billion in 2025. It has roughly 910 million weekly users. About 95% of them pay nothing. Subscriptions alone cannot bridge that gap, which is why OpenAI is simultaneously building out an internal advertising infrastructure and leaning on partners like Criteo — and reportedly The Trade Desk — to bring advertisers into ChatGPT.
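A back-of-envelope check makes the gap concrete. Assume for illustration that every paying user sits on the $20-per-month ChatGPT Plus tier (the real mix also includes pricier Pro and business plans, so treat this as an order-of-magnitude sketch): $0.05 \times 910\text{M} \approx 45.5\text{M}$ paying users, and $45.5\text{M} \times \$20 \times 12 \approx \$10.9\text{B}$ a year. Even that generous consumer-subscription estimate falls billions short of a $15 billion burn, which is what makes the advertising build-out less a choice than a necessity.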
The company is hiring aggressively for this effort: a monetization infrastructure engineer, an engineering manager, a product designer for the ads experience, a senior manager for ad revenue accounting, and a trust and safety specialist dedicated to the ads product, all based at headquarters in San Francisco. The compensation bands run as high as $385,000 — the kind of investment a company makes when it plans to own its ad stack, not rent it.
But advertising inside ChatGPT introduces a trust problem that compounds the ones OpenAI is already managing. Users who abandoned the app over the Pentagon deal demonstrated that loyalty to ChatGPT is thinner than its market share suggests. Adding commercial messages to a product already under fire for its military ties and its handling of a mass shooter's data will require OpenAI to navigate user sentiment with a precision it has not recently demonstrated.
The infrastructure picture is equally unsettled. Oracle and OpenAI recently scrapped plans to expand a flagship AI data center in Abilene, Texas, after negotiations stalled over financing and OpenAI's evolving needs. Meta and Nvidia moved quickly to explore the site — a reminder that in the current AI arms race, any gap in execution gets filled by a competitor within days.
Why interactive learning is OpenAI's strongest remaining argument
This is where the education feature becomes more than a product announcement.
Education has always been ChatGPT's cleanest use case — the application where the technology most obviously augments human capability rather than surveilling it, weaponizing it, or monetizing the attention of people who came looking for help. It is the use case that resonates across demographics: students prepping for the SAT, parents revisiting algebra at the kitchen table, adults circling back to concepts they never quite understood. And it is the use case where ChatGPT still holds a clear lead. Google's Gemini, Anthropic's Claude, and xAI's Grok are all investing in education, but none has shipped anything comparable to real-time interactive formula visualization embedded in a conversational interface.
OpenAI acknowledged that the "research landscape on how AI affects learning is still taking shape," but pointed to its own early findings on study mode as showing "promising early signals."
Somewhere tonight, a ninth-grader will open ChatGPT, drag a slider, and watch a hypotenuse lengthen across her screen. The Pythagorean theorem will make sense for the first time. She will not know about the Pentagon deal, or the Tumbler Ridge lawsuit, or the 295% spike in uninstalls, or the $15 billion cash burn underwriting the server that just rendered her triangle. She will only know that it worked. For OpenAI, that may have to be enough — for now.