
For years, code-editing tools like Cursor, Windsurf, and GitHub's Copilot have been the standard for AI-powered software development. But as agentic AI grows more powerful and vibe-coding takes off, a subtle shift has changed how AI systems interact with software. Instead of working on code in an editor, they're increasingly interacting directly with the shell of whatever system they're installed on. It's a low-profile change in how AI-powered software development happens – but it could have significant implications for where the field goes from here.
The terminal is best known as the black-and-white screen you remember from 90s hacker movies – a very old-school way of running programs and manipulating data. It's not as visually impressive as contemporary code editors, but it's an extremely powerful interface if you know how to use it. And while code-based agents can write and debug code, terminal tools are often needed to get software from written code to something that can actually be used.
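That power comes from composing small programs into pipelines. As a toy illustration (the log file here is invented purely so the example is self-contained), a one-line chain of standard Unix tools can tally the most frequent words in a file:

```shell
# Create a throwaway file, then chain four small standard tools:
# split words onto lines, sort, count duplicates, rank by count.
printf 'error ok error warn error ok\n' > demo.log
tr -s ' ' '\n' < demo.log | sort | uniq -c | sort -rn | head -3
```

Each tool does one small job, and the pipe operator glues them together – the same composability that makes the terminal a natural place for an agent to work.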
The clearest sign of the shift to the terminal has come from major labs. Since February, Anthropic, DeepMind and OpenAI have all released command-line coding tools (Claude Code, Gemini CLI, and Codex CLI respectively), and they're already among the companies' most popular products. That shift has been easy to miss, since they're largely operating under the same branding as previous coding tools. But under the hood, there have been real changes in how agents interact with computers, both online and offline. Some believe those changes are just getting started.
“Our big bet is that there's a future in which 95% of LLM-computer interaction is through a terminal-like interface,” says Alex Shaw, co-creator of the leading terminal-focused benchmark TerminalBench.
Terminal-based tools are also coming into their own just as prominent code-based tools are starting to look shaky. The AI code editor Windsurf has been torn apart by dueling acquisitions, with senior executives hired away by Google and the remaining company acquired by Cognition – leaving the consumer product's long-term future uncertain.
At the same time, new research suggests programmers may be overestimating productivity gains from conventional tools. A METR study testing Cursor Pro, Windsurf's main competitor, found that while developers estimated they were completing tasks 20 to 30 percent faster, they were actually nearly 20 percent slower. In short, the code assistant was costing programmers time.
That has left an opening for companies like Warp, which currently holds the top spot on TerminalBench. Warp bills itself as an “agentic development environment,” a middle ground between IDE programs and command-line tools like Claude Code. But Warp founder Zach Lloyd is still bullish on the terminal, seeing it as a way to tackle problems that would be out of scope for a code editor like Cursor.
“The terminal occupies a very low level in the developer stack, so it's the most versatile place to be running agents,” Lloyd says.
To understand how the new approach differs, it helps to look at the benchmarks used to measure it. The previous, code-based generation of tools focused on solving GitHub issues, the basis of the SWE-Bench test. Each problem on SWE-Bench is an open issue from GitHub – essentially, a piece of code that doesn't work – and models iterate on the code until it passes the project's tests. Integrated products like Cursor have built more sophisticated systems on top, but the GitHub/SWE-Bench model is still at the core of how these tools work: start with broken code and turn it into code that works.
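That loop can be sketched in a few lines of shell. Everything here is illustrative: the buggy file is invented, and the hardcoded sed patch stands in for a fix an LLM would propose.

```shell
# Start with broken code, run the tests, patch, repeat until green.
cat > buggy.py <<'EOF'
def add(a, b):
    return a + b + 1   # off-by-one bug
EOF

run_tests() { python3 -c 'from buggy import add; assert add(2, 2) == 4'; }

for attempt in 1 2 3; do
    if run_tests 2>/dev/null; then
        echo "tests pass on check $attempt"
        break
    fi
    # stand-in for a model-generated patch (GNU sed -i syntax)
    sed -i 's/a + b + 1/a + b/' buggy.py
done
```

The real benchmark harness is far more involved, but the shape is the same: a failing test suite defines "broken," and a passing one defines "done."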
Terminal-based tools take a wider view, looking beyond the code to the whole environment a program runs in. That includes coding, but also more DevOps-oriented tasks like configuring a Git server or troubleshooting why a script won't run. In one TerminalBench problem, the agent is given a decompression program and a target text file and challenged to reverse-engineer a matching compression algorithm. Another asks the agent to build the Linux kernel from source, without mentioning that the agent will have to download the source code itself. Solving these tasks requires the kind of bull-headed problem-solving ability that working programmers need.
“What makes TerminalBench hard is not just the questions that we’re giving the agents,” says Shaw, “it’s the environments that we’re placing them in.”
Crucially, this new approach means tackling a problem step-by-step – the same skill that makes agentic AI so powerful. But even state-of-the-art agentic models can't handle all of those environments. Warp earned its high score on TerminalBench by solving just over half of the problems – a mark of how challenging the benchmark is, but also how much work still needs to be done to unlock the terminal's full potential.
Still, Lloyd believes we're already at a point where terminal-based tools can reliably handle much of a developer's non-coding work – a value proposition that's hard to ignore.
“If you think of the daily work of setting up a new project, figuring out the dependencies and getting it runnable, Warp can pretty much do that autonomously,” says Lloyd. “And if it can't do it, it will tell you why.”