Exploring the 34th Thoughtworks Technology Radar: AI, Security, and Foundational Practices
The Thoughtworks Technology Radar is a biannual report that captures our firsthand experiences with the tech landscape, highlighting tools, techniques, platforms, and languages we've encountered. The 34th edition, released in April 2025, features 118 blips and is heavily shaped by AI, but also revisits core software craftsmanship principles. Below, we answer key questions about this release.
What is the Thoughtworks Technology Radar and what does the 34th edition cover?
The Thoughtworks Technology Radar is a biannual survey that curates our observations from real-world software projects. It offers concise assessments—called blips—on tools, techniques, platforms, and languages that have caught our attention or proven valuable. The 34th edition includes 118 such blips, spanning everything from AI-driven development to security practices. As expected, AI-oriented topics dominate, but the radar also revisits established methods like pair programming, zero trust architecture, mutation testing, and DORA metrics. This mix reflects a deliberate effort to balance AI's rapid pace with time-tested foundational skills.

How does AI influence the topics in this edition, particularly regarding foundational practices?
AI's influence in this radar is twofold. On one hand, it drives new blips around LLM-assisted development and agentic tools. On the other, it forces us to revisit the bedrock of software engineering. Many blips highlight techniques such as clean code, deliberate design, testability, and accessibility—principles that act as a counterweight to the complexity AI can generate. This isn't nostalgia; it's a necessary grounding. The radar also notes a resurgence of the command line, where agentic tools are bringing developers back to terminals as primary interfaces, after years of abstraction.
What are the security concerns around AI agents, and why is a strong security presence vital?
Security is a major theme in this radar, especially with the rise of AI agents that require broad permissions. Agents like OpenClaw and Claude Cowork need access to private data, external communication, and entire codebases—each arguing the payoff justifies the risk. However, safeguards haven't caught up. Prompt injection attacks remain unsolved: models can't reliably distinguish trusted instructions from untrusted input. That's why the addition of security expert Jim Gumbley to the radar writing team is crucial. He brings deep knowledge, including contributions to this site's Threat Modeling Guide, ensuring security blips are well-informed.
What is the "permission hungry" agent problem described in the radar?
The "permission hungry" problem captures a central tension in the current agent moment. The agents worth building—those that supervise real tasks or coordinate swarms across codebases—require broad access to sensitive systems. Yet the appetite for access collides with unresolved security issues. For example, prompt injection means an agent might be tricked into executing harmful actions. The radar uses the metaphor of a skier who just learned to turn and heads straight for a difficult black run: ambition outpaces safety. This bind is a key reason why Harness Engineering—designing guides, sensors, and controls—is a growing focus in this edition.
What is Harness Engineering and how does it relate to this radar?
Harness Engineering is a concept that emerged from discussions during the radar meeting, and it's a major source of ideas for Birgitta's article on the subject. It refers to designing the necessary guides and sensors to keep AI agents and complex systems safely constrained—like a harness. The radar includes several blips suggesting tools and techniques for building such harnesses, especially given the permission-hungry nature of modern agents. The expectation is that the next radar in six months will feature even more blips on this topic, as the community works to solve the safety-versus-access dilemma.
What trends might we expect in the next edition of the radar?
Based on the trajectory observed in the 34th edition, the next radar will likely expand on Harness Engineering, security for AI agents, and the resurgence of foundational practices. The problem of prompt injection and permission management will drive more tooling and techniques. We also anticipate a deeper exploration of agent orchestration and the command-line interfaces that support it. As AI continues to accelerate, the need for deliberate design and safety controls will only grow, making these themes central to future radars.