A senior design practitioner embedded in your team, teaching agentic development from inside the codebase
Schedule a Conversation
Deep design skills meet agentic development, working on your code from day one.
AI doesn't replace your team's judgment; it amplifies whatever practices already exist. Strong design discipline plus AI produces remarkable results. Weak practices plus AI produces technical debt at unprecedented speed.1 Paul joins your team part-time, working on real features using tools like Claude Code, developing the practices that make agentic development actually work.
Most consultants advise from the outside. In an embedded engagement, Paul works on real features and real deadlines alongside your developers. The knowledge transfer happens through doing the work together, not through slide decks or recommendations documents.
The metric that matters: Shortest Time to Customer Value.
Not lines of code. Not velocity points. Not hours estimated vs. hours spent. The goal is compressing the time from "we have an idea" to "a customer is using it and we can see that it's working." Every practice in the engagement, from TDD to observability to domain modeling, serves that outcome.
Duration
3-6 months (starting with a 1-month pilot)
Commitment
20-30 hrs/week
Availability
One team at a time
Format
In-person (Denver) or remote
Not vibe coding. Not automation without oversight. A disciplined practice where AI amplifies experienced human judgment.
The rigor doesn't go away when AI writes the code. It moves up a level of abstraction, to discovery, design, specification, and review. That's where the real leverage is.
Agentic development delivers real productivity gains, but the size of the gain depends on your starting point. Here's an honest picture.
Order-of-magnitude gains
When building new, well-scoped features with clear specifications and good test coverage, agentic development compresses weeks of work into days. The AI thrives on consistent patterns, clear guidelines, and well-defined APIs.
This is where the engagement typically starts: a pilot project on new work where the team can experience the full potential of the approach.
Strategic, targeted gains
Codebases with 10 or 20 years of accumulated design decisions are harder for humans and AI alike. Gains come from wrapping existing capabilities in APIs, building test harnesses that make change safe, and strategically choosing where to modernize.
The embedded engagement helps identify where AI-assisted development will have the highest leverage in your existing systems, and where it won't.
Why this matters
Google's DORA research found that most teams adopting AI-assisted development actually experience increased delivery unpredictability, despite feeling faster.1 Accelerating the wrong part of a system makes the whole thing slower. The embedded engagement focuses on identifying and accelerating the right constraints first.
You won't have to wait months to find out. The engagement is designed to produce visible results quickly, so you can make informed decisions about continuing.
"Check this out"
The goal is something to show stakeholders early. Not a plan, not a deck, not an estimate. Something tangible you can point at and react to together.
If the approach doesn't produce visible results quickly on your codebase with your team, you'll know immediately.
Pipeline clarity
Cycle time from idea to deployed feature. Defect rates. Throughput. These pipeline metrics tell you whether the team is actually moving faster with higher quality, or just feeling faster.
The DORA research1 shows most teams feel faster but deliver less predictably. We measure so you don't have to guess.
Customer signal
Usage analytics, value validation, production observability. Building features that nobody uses is still waste, just faster waste. The engagement includes setting up the feedback loops to verify what's landing with customers.
Most teams have pipeline metrics but no customer observability. Both matter.
Direct: Time and cost compression
The time and cost to build new capabilities compresses significantly. How much depends on the work, the team, and the starting point, but the difference is measurable and visible early. This is the number your finance team will care about.
Indirect: Confidence and momentum
Faster, higher-quality delivery changes what your team is willing to take on. Projects that seemed too risky or too expensive get reconsidered. Teams that were hesitant about new approaches gain confidence from seeing real results. This kind of ROI is harder to measure but often matters more.
Why start with one month
A month is enough time to prove the model works on your codebase, with your team, on real work. If the results are there, extending to a multi-month engagement is straightforward. If they're not, you've learned something valuable at a bounded cost. The results from the first month inform what comes next.
Working side-by-side with your developers on production code, using agentic AI tools as part of the workflow. Building the design and testing habits that make AI-generated code trustworthy.
Reviewing architecture decisions, domain models, and bounded context boundaries as part of the team's regular workflow. Catching design issues early, when they're cheap to fix.
Building your team's capabilities in DDD, EventStorming, and software design. The goal is for the team to be self-sufficient after the engagement ends.
Running focused EventStorming sessions when the team encounters a new subdomain or needs to rethink an existing model. Lightweight, just-in-time exploration.
Paul has done this work across industries, team sizes, and technology stacks. Today, every engagement includes agentic development practices. Here are two examples from past work.
Embedded with a product engineering team at a Fortune 500 manufacturer, working on a large Ruby on Rails IoT platform. Led deep refactoring of the domain model across multiple bounded contexts, improving performance and aligning the code with the business language.
The design insights from this engagement became the basis for two conference talks at Explore DDD and DDD Europe.
Joined a financial services team for nearly a year, working on production systems handling sensitive financial data. Paired with developers daily, ran EventStorming sessions to map complex business processes, and helped the team adopt DDD practices that outlasted the engagement.
This is what embedded means: long enough to understand the domain deeply, build real trust, and leave the team stronger than you found them.
AI coding tools generate code faster than ever, but speed without design discipline creates instability. The real leverage is in design skills: knowing what to build, structuring code so AI can work with it, and reviewing generated output critically.
When the business logic is genuinely hard, an extra senior perspective on modeling and design decisions makes a real difference in the quality of the solution.
Strangler fig patterns, bounded context extraction, and incremental migration are easier with someone who has done them before, working in the codebase with you.
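The strangler fig idea fits in a few lines. A minimal sketch, with a hypothetical `create_invoice` capability and hand-rolled routing (a real migration would route at an API or service boundary):

```python
# Strangler-fig facade sketch: calls route to the new implementation
# once a capability is migrated, otherwise fall through to legacy.
# All names here are hypothetical, for illustration only.
MIGRATED = {"create_invoice"}  # grows as extraction proceeds

def legacy_create_invoice(order_id):
    # Stand-in for the old code path
    return {"source": "legacy", "order_id": order_id}

def new_create_invoice(order_id):
    # Extracted bounded-context implementation
    return {"source": "new", "order_id": order_id}

def create_invoice(order_id):
    # The facade is the only caller-visible entry point; callers never
    # know which implementation served them.
    handler = (new_create_invoice if "create_invoice" in MIGRATED
               else legacy_create_invoice)
    return handler(order_id)
```

The facade lets the legacy path keep working while each capability migrates one at a time, which is what makes the incremental approach safe.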
Learning Domain-Driven Design from books only goes so far. An experienced practitioner working alongside you accelerates adoption and helps avoid common pitfalls.
Some teams have a backlog of product ideas that never got built because traditional estimates made them seem too expensive. Agentic development can change what's feasible, but the only way to know by how much is to try it on a real project. The pilot engagement gives you real evidence before committing further.
An embedded engagement is a partnership. It works best when the client side is set up for it.
A small, dedicated team that wants to learn
2-4 developers who can focus on the pilot project rather than being split across other commitments. Eagerness to learn matters more than years of experience. The best results come from developers who are hungry to work differently.
Domain knowledge on the team
At least one person who deeply understands your business domain and can make decisions about requirements and priorities without lengthy approval chains.
Openness to new practices
Agentic development changes how developers work. The team needs to be willing to try test-driven workflows and pair programming with AI tools, and to shift from writing code to specifying and reviewing it.
Executive sponsorship (with discipline)
Someone at the leadership level who can protect the pilot from being deprioritized. Equally important: trust the team to execute without micromanaging, and resist the temptation to pile on requests once the "art of the possible" opens up. Paul manages scope actively and will say no to protect outcomes.
Every engagement begins with a pilot: a project scoped to deliver real value while building your team's agentic development capability.
We talk about your team's situation, the domain you're working in, and what you're trying to accomplish. No pitch, no demo. The goal is to understand whether this model is a good fit for where your team is right now.
Tailored to your situation: Engagements range from focused delivery on a single project to broader transformation work that includes strategic mapping, architecture guidance, and product thinking. The right depth depends on where your team is and what you're trying to accomplish. We figure that out together in this conversation.
Together, we identify the right first project. The sweet spot is a project that's impactful enough that people care about the outcome, but bounded enough that it's not a bet-the-company risk.
What makes a good pilot: New feature work or a self-contained module. A clear business outcome. A team that can be dedicated to the effort. Enough complexity to be meaningful, not so much that it takes six months to see results.
A focused kickoff to build relationships with the team, run an initial discovery or EventStorming session on the pilot domain, get the development environment set up, and establish the working rhythm. In-person is preferred where possible for building trust, but for fully remote teams, a remote kickoff works well too.
Paul joins the tiger team and starts contributing from day one. Pair programming, design reviews, and building features together using agentic development practices. No observation phase. Real work, real deadlines, real code.
Distributed teams: Many teams span time zones and geographies. For distributed tiger teams, we establish overlapping core hours for pairing and live collaboration, with async code review and design discussion filling the gaps. The practices work well in this model; agentic development is inherently asynchronous, with AI doing work between human review cycles.
As the pilot team builds capability, those practices spread to the rest of the organization. The tiger team members become internal champions who can coach their peers. The goal is always for the team to be self-sufficient after the engagement ends. These skills are caught, not taught.
Not here to replace your team. Paul shares credit from day one. Every win is the team's win. The tiger team members get the skills, the confidence, and the track record. When the engagement ends, they're the ones who carry it forward, and everyone in the organization knows it. The worst outcome for an embedded engagement is a team that can't function without the outside person. That's a dependency, not a transformation.
Not ready for a full embedded engagement? Coaching & consulting offers shorter-term options including 6-week pilot projects and monthly retainers.
Most teams adopting agentic development make the same three mistakes. The embedded engagement is designed to catch them early.
1. Building the pipeline before delivering features
The instinct is to build infrastructure first: agent orchestration, custom tooling, internal platforms. Then, once the pipeline is ready, start delivering features. This burns months and budget before producing anything a customer can use. The embedded approach flips this: deliver features from day one, and the repeatable pipeline crystallizes from the delivery work itself.
2. Testing at the end instead of throughout
The typical pipeline is linear: requirements, design, front-end code, back-end code, then tests, then deploy. Testing at the end means defects accumulate and get caught weeks later in bug triage. TDD (red-green-refactor) woven into every step catches defects in seconds, not sprints. The test harness grows with the code, not bolted on as an afterthought at the end.
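The red-green-refactor loop above can be shown in miniature. A minimal sketch, where the pricing rule and function names are hypothetical:

```python
# Red: write the failing test first. Running this before `price` exists
# (or before the discount logic is in place) fails, which is the point.
def test_bulk_discount():
    assert price(quantity=10, unit=4.0) == 36.0  # 10% off at 10+ units
    assert price(quantity=5, unit=4.0) == 20.0   # no discount below 10

# Green: the simplest implementation that makes the test pass.
def price(quantity, unit):
    subtotal = quantity * unit
    return subtotal * 0.9 if quantity >= 10 else subtotal

# Refactor: reshape the code freely; the test stays green throughout.
test_bulk_discount()
```

Each loop takes seconds, so a defect surfaces the moment it is introduced rather than weeks later, and the growing test suite is what makes AI-generated changes safe to accept.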
3. No observability into the pipeline or the customer
Most teams lack visibility in two places. Pipeline observability: cycle time, defect rates, throughput. Without it, you can't tell if you're actually faster or just feeling faster. Customer observability: usage analytics, value validation, production health. Without it, you're deploying features blind. Shipping faster doesn't help if nobody checks whether what you built is actually being used.
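Both kinds of visibility can be computed from very little data. A minimal sketch, where the record fields and numbers are hypothetical:

```python
from datetime import date

# Hypothetical delivery records: when the idea was accepted, when the
# feature shipped, defects found after release, and whether customers
# actually used it (from usage analytics).
records = [
    {"idea": date(2025, 1, 2), "shipped": date(2025, 1, 9),
     "defects": 1, "used": True},
    {"idea": date(2025, 1, 6), "shipped": date(2025, 1, 10),
     "defects": 0, "used": False},
]

# Pipeline observability: cycle time from idea to shipped, defect rate.
cycle_times = [(r["shipped"] - r["idea"]).days for r in records]
avg_cycle_days = sum(cycle_times) / len(cycle_times)
defect_rate = sum(r["defects"] for r in records) / len(records)

# Customer observability: share of shipped features actually being used.
adoption = sum(r["used"] for r in records) / len(records)
```

Even a spreadsheet-level version of this answers the two questions that matter: are we really faster, and is what we ship being used?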
These aren't edge cases. They're the default path for most teams, including experienced ones. The embedded engagement builds the habits that prevent them from the start.
1. Google DORA, 2025 State of AI-Assisted Software Development. Survey of ~5,000 professionals.
Every engagement is tailored to the team's situation, domain, and goals. Let's talk about whether this model is a good fit.
Schedule a Conversation