Lessons from the frontlines of AI-Agile integration
Ryan Fillman has coached Agile teams for nearly 15 years, but the last two have been unlike anything he's experienced. As AI transforms how teams work at Encora and within the industry as a whole, Ryan reveals what stays the same, what changes completely, and what becomes more important than ever before.
The patterns emerging from his frontline experience challenge common assumptions about AI's role in software development. Here are five critical insights that reveal how teams can integrate human-centered AI to deliver value.
Ryan observes a fundamental shift in how teams approach AI integration today. Instead of treating it as yet another development tool, they've started recognizing AI as a collaborative partner.
"As a team, we pair on prompts, inspect outputs together, and rely on retrospectives to reflect on what's working and what isn't," he explains. "We treat AI-generated code as something inspectable and improvable, not as a final product. We reinforce that AI isn't a replacement for anyone on the team—it's a teammate."
This teammate approach works the same way any good partnership does: it needs trust, shared standards, and a commitment to improving together over time.
Human-AI collaboration reveals psychological responses that traditional Agile coaching has never had to address. Ryan has witnessed the full spectrum of human reactions to AI integration.
"Some team members are quick to adopt AI, some over-rely on it, while others are hesitant or skeptical," he notes. "This can create friction among team members about proper AI use."
Managers must facilitate discussions about identity, contribution, and team value while openly addressing fear and uncertainty. This helps the team reframe its role from task executors to strategic guides and quality validators. The key is to create psychological safety for AI experimentation.
When teams have permission to fail with AI tools, space to voice job-security concerns, and clarity about why their human skills remain irreplaceable, they learn faster.
Ryan's perspective on speed changed after a particular project. "One of our UX leaders built a user experience proof-of-concept involving a new process for handling credit card payments. Normally, that's a sprint's worth of work. AI-assisted software development helped us have a working prototype in less than a day."
The real insight, however, was different: "The win wasn't just about speed; it was learning earlier, aligning faster, and reducing risk before we'd invested too much."
The rapid prototype provided immediate stakeholder feedback, exposed edge cases early, and enabled assumption validation with actual working software rather than mockups.
This experience also highlighted a crucial distinction: while traditional Agile optimizes for delivery velocity, meaning how quickly teams can ship features, AI-augmented Agile requires optimizing for learning velocity, allowing teams to validate their assumptions and change direction quickly.
"The truth is that AI can deliver the wrong thing exponentially faster," Ryan warns. When teams generate working prototypes within hours, clarity about what you want becomes more important than how fast it can be built. Teams must clarify directions upfront and check in more often to ensure they're on track.
Agile practitioners evolve into ethics stewards as AI becomes more autonomous in generating code and making decisions. "We ensure that AI use is transparent and reviewable," Ryan explains. "This means Scrum Masters and team leads now have new jobs: auditing AI decisions, setting standards for when humans need to step in, and ensuring accountability for AI's output."
This plays out in his team's everyday work. They track how they use AI, set up checkpoints to review AI suggestions, and know what to do when AI output doesn't feel right. They take AI transparency as seriously as security or code quality.
This mindset extends to the whole organization. Ryan connects what AI can do with the company's values, ensuring that moving faster doesn't mean compromising quality, security, or doing the right thing.
Success in AI-augmented Agile will demand new competencies while strengthening traditional ones.
"Prompting is a new language used to talk with AI. Teams need to learn how to guide AI, audit its output, and understand when it's wrong," explains Ryan. "Orchestration is emerging as a key skill: knowing how to balance what humans do best with what machines can accelerate."
The most successful approach uses established team practices. "Pair on everything and treat it as a learning experience," Ryan advises. Teams use retrospectives to spot skill gaps and make learning AI a team goal, not something individuals must figure out alone.
Beyond technical skills, teams need better judgment. "Success isn't just about velocity. It's about delivering value faster, with fewer mistakes," Ryan notes.
Teams must become skilled at quickly checking AI output, making decisions when conditions are uncertain, and explaining AI-assisted work to stakeholders.
These insights reveal a fundamental evolution in how teams create value. Rather than eliminating the need for human-centered practices, AI makes human intervention more critical.
"Agile isn't dying; it's evolving. We must evolve with it," Ryan reflects. Teams that thrive will be those that use agentic intelligence to amplify human judgment, creativity, and collaboration rather than replace them. The future belongs to teams that master human-AI partnership while doubling down on uniquely human elements of delivery: empathy, strategic thinking, ethical reasoning, and adaptive problem-solving.
Technology provides the speed, but humans provide the wisdom to ensure that speed serves meaningful purposes.
This article is part of the November edition of the Interface, Encora's thought leadership magazine, co-created with AI.