Ana Laura Robles Bencomo's journey mirrors the industry's transformation, moving from mechatronic engineering to back-end development to machine learning. As a machine learning engineer in Encora's AI Practice, she assists clients in integrating AI into their workflows. Her perspective is striking: the more powerful AI becomes, the more critical human essence becomes.
What follows are excerpts from our conversation.
My first approach was ChatGPT. I was surprised that people without deep technical backgrounds could use it in personal, playful ways. Then the code assistants arrived, and I realized: "This is a tool I can use for work."
But the real shock was witnessing this democratization. Students, businesses, and everyone are getting involved. I observed a domino effect—more users led to increased investment, which in turn created more use cases and users.
That's when I understood this wasn't just another tool but a whole paradigm — technology reshaping how we work, create, and solve problems. And it changed my work completely. AI has evolved from being another component to becoming the central core of our projects. Traditional machine learning became eclipsed by LLMs and generative AI. Now, 80 to 90% of my work involves coordinating complex LLM workflows, managing context windows, and orchestrating systems where generative AI is the primary focus.
We assisted a healthcare client in developing Software Requirements Specifications for each of their products. In healthcare, precision is critical. Any ambiguity could impact patient safety or regulatory compliance.
We built an AI system that generated initial drafts based on their existing SRS repository. The AI would analyze the new product, search for similar ones, and combine relevant elements. This saved considerable time while maintaining consistency in terminology and compliance language.
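The workflow described above — analyze the new product, retrieve similar specs, combine the relevant pieces — can be sketched in miniature. This is a hypothetical illustration, not Encora's actual system: the `similarity` and `draft_srs` functions are invented stand-ins (a production pipeline would use embeddings and an LLM call rather than word overlap and string stitching).

```python
# Hypothetical sketch of an SRS drafting workflow: rank existing specs
# by similarity to the new product, then combine the best matches into
# an initial draft for human review. Names and logic are illustrative.

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets -- a stand-in for real embeddings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def draft_srs(new_product: str, repository: dict, top_k: int = 2) -> str:
    """Retrieve the top_k most similar existing SRS docs and stitch them
    into a first draft (a real system would prompt an LLM at this step)."""
    ranked = sorted(
        repository,
        key=lambda name: similarity(new_product, repository[name]),
        reverse=True,
    )
    sections = [f"## Adapted from: {name}\n{repository[name]}"
                for name in ranked[:top_k]]
    return f"# Draft SRS: {new_product}\n\n" + "\n\n".join(sections)

# Toy repository of existing specs (illustrative content only).
repo = {
    "Infusion Pump": "flow rate accuracy alarm thresholds patient safety",
    "Glucose Monitor": "sensor calibration reading accuracy alert patient",
    "Billing Portal": "invoice payment gateway user accounts",
}
print(draft_srs("wearable patient heart rate monitor with alerts", repo))
```

With this toy data, the two medical-device specs outrank the unrelated billing spec, which is the property the real system relies on: consistency in terminology comes from drawing on the closest prior documents.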
When coding, I iterate with AI. Sometimes, you have an idea and ask AI to transform it into code. Usually, you get frustrated that it might not create what you asked for, but now you have a first draft. Even if it doesn't solve everything, it moves you forward.
But it's always iteration. I don't ask models to generate an idea without putting my idea in it. When you ask a model to develop something, it's essentially a summary, but it doesn't capture your essence. You need to iterate to create something meaningful.
Communication has become way more important. You need to explain your ideas clearly—not just to other people but also to AI tools. You're not taking full advantage if you can't articulate what you're trying to build and why.
What really matters now is the idea, not the code itself. An idea can be expressed in many programming languages, but the essence remains the same.
The AI models are the same for everyone: ChatGPT, Claude, and Gemini. But what's unique is the data we use them with, the problems we solve, and how we communicate with them.
We bring creativity, expertise, and context. We understand the unwritten rules, "why things are this way," our understanding of what "good" looks like, and the tribal knowledge that doesn't exist in documentation.
Initially, there was resentment. "Is your code generated? Are you sure that works?"
But confidence has grown. We're at a point where we say, "Ok, it's generated code." That doesn't mean we'll use it as is. This process involves iteration, but there's definitely more confidence.
Sometimes we even suggest to each other: "You might want to ask AI what the best approach is." It's nice to see how we're moving from not trusting AI to trusting it more.
Don't blindly fall into every trend. Right now, everyone is talking about agents. But there's so much noise. You can't solve everything with AI, so you need to be smart about when to use this technology.
It's easy to ask AI to solve everything, but that won't happen. If you want the work to be well done, you'll need to iterate.
I'm excited. I see this as an evolution, not a threat. The skills that make us valuable aren't going away; they're just being applied at a different level.
AI opens up opportunities for people from diverse backgrounds to enter the software development field. As the barrier to entry decreases, we may have more people who can code, including designers and product specialists. More diverse perspectives could lead to better solutions.
For my part, I keep studying AI. Right now, agents are a hot topic. I am focusing on continuous learning and staying curious: understanding how agentic systems work, and how to build and deploy them.
There are three key areas: First, advanced prompt engineering—not just writing prompts but designing entire prompt systems. Second, spec-driven development. We're moving toward a world where the specification is the deliverable. AI can generate much of the implementation if you can articulate precisely what you want.
Third, don't be afraid to experiment. I've seen juniors building impressive stuff because they're willing to play with these tools without overthinking. When you permit yourself to experiment, you're giving yourself space to get to know the tool, see where it works well, and where it falls short. That experimentation mindset is critical.
Ana's perspective cuts through productivity metrics and job anxiety. The more capable AI becomes, the more valuable human judgment, creativity, and context become. The future isn't about humans competing with AI. It's about understanding what only humans can contribute and bringing that essence to everything we build.
Ana's perspective on human essence and iteration offers one lens through which to view this transformation. For another perspective on how AI is reshaping development workflows and what separates teams that truly transform from those that merely work faster, read Rodrigo Vargas Rodriguez's insights on The Mindset Shift That Separates AI-First Teams.
Return to the full exploration: Unlike Anything Before: How AI Has Transformed Software Development