What This Means
Most design teams bolt AI onto existing processes and call it innovation. That creates tool sprawl, inconsistent outputs, and no measurable improvement. I restructure the workflow itself, embedding AI where it eliminates friction and replacing manual steps that no longer justify their cost. The result is a design operation that runs faster, scales without headcount, and produces consistent output across distributed teams.
How I Approach It
- Map current design workflows end-to-end, identifying manual bottlenecks, redundant handoffs, and steps where AI tooling delivers immediate ROI
- Introduce vibe coding as a production practice, not an experiment. Designers describe intent in natural language; AI generates functional UI, components, and prototypes that feed directly into the build pipeline
- Deploy agentic build systems where AI handles component generation, documentation, QA checks, and design-to-code translation autonomously
- Redesign legacy workflows to integrate AI tooling without requiring team restructuring or new hires. Existing roles absorb new capabilities through structured enablement
- Establish governance frameworks that define where AI outputs require human review and where they ship directly; a minimal sketch of how such a pipeline and policy can be encoded follows this list
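
To make the last two points concrete: one way to express an agentic pipeline with explicit review gates is as plain data that tooling enforces. The sketch below is illustrative only; the stage names, the `callModel` placeholder, and the gate assignments are assumptions for the example, not a prescribed stack.

```typescript
// Hypothetical agentic pipeline definition with explicit review gates.
// Stage names, gate assignments, and the model hook are illustrative
// assumptions, not a recommended toolchain.

type Gate = "auto-ship" | "human-review";

interface PipelineStage {
  name: string;
  run: (input: string) => Promise<string>; // delegates to an AI tool or script
  gate: Gate;                              // does the output need human sign-off?
}

// Each stage's output feeds the next; gated stages pause for review.
const pipeline: PipelineStage[] = [
  { name: "generate-component", run: async (spec) => callModel("component", spec), gate: "human-review" },
  { name: "generate-docs",      run: async (code) => callModel("docs", code),      gate: "auto-ship" },
  { name: "run-qa-checks",      run: async (code) => callModel("qa", code),        gate: "auto-ship" },
];

// Placeholder for whatever model/tool integration the team standardises on.
async function callModel(task: string, input: string): Promise<string> {
  return `[${task} output for input of length ${input.length}]`;
}

async function runPipeline(spec: string): Promise<string> {
  let artifact = spec;
  for (const stage of pipeline) {
    artifact = await stage.run(artifact);
    if (stage.gate === "human-review") {
      // In practice this would block on a review-queue integration;
      // here it only records that a gate applies at this stage.
      console.log(`${stage.name}: awaiting human review before continuing`);
    }
  }
  return artifact;
}
```

Encoding the review boundary as data rather than convention is the point: it becomes inspectable, versioned, and enforceable in CI instead of living in people's heads.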
When You Need This
- Your team is using AI tools individually, but there's no standardised workflow, no governance, and no way to measure impact
- Design cycle times haven't improved despite adopting new tools because the underlying process is unchanged
- You're scaling output but can't justify proportional headcount increases
- Handoffs between design and development remain manual, slow, and error-prone
- Leadership is asking for an AI integration strategy and you need something concrete, not a slide deck
Expected Outcomes
- Design cycle times reduced by restructuring workflows around AI-assisted prototyping and automated component delivery
- Repeatable agentic pipelines that handle documentation, QA, and design-to-code translation without manual intervention
- Clear governance defining AI tool usage, review gates, and quality thresholds across the team
- A measurable reduction in time spent on production tasks, with design capacity reallocated toward strategic and research work
- Team-wide adoption through structured enablement rather than ad-hoc experimentation
Expected Challenges
- AI-generated outputs require calibration. Early iterations will produce inconsistent quality until review gates and prompt standards are established. Budget two to three sprint cycles for tuning before expecting production-grade output
- Designers accustomed to manual workflows will resist adoption if AI tooling feels imposed rather than earned. Structured enablement works; mandates don't. Expect a transition period where parallel workflows run simultaneously
- Governance gaps surface quickly. Without clear rules on where AI outputs require human review, teams either over-review everything (negating speed gains) or under-review (shipping defects). Getting this boundary right takes deliberate iteration
- Tool fragmentation is a real risk. The AI tooling landscape shifts monthly. Committing too early to a single stack creates lock-in; staying too broad creates chaos. The strategy needs built-in flexibility with defined evaluation cycles
- Measuring impact is harder than it looks. Cycle time reduction is straightforward to track; quality improvement and creative output are not. Metrics frameworks need to account for both quantitative and qualitative signals from the start
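
As a sketch of what that dual-signal tracking can look like in practice (the field names, scales, and rollup below are assumptions for illustration, not a finished metrics framework):

```typescript
// Hypothetical per-cycle metrics record combining quantitative and
// qualitative signals. Fields and scales are illustrative assumptions.

interface CycleMetrics {
  cycleId: string;
  // Quantitative: straightforward to instrument automatically
  cycleTimeDays: number;   // idea to shipped component
  reworkCount: number;     // review-gate rejections before ship
  defectsFound: number;    // QA issues traced to this cycle
  // Qualitative: captured via structured human review, not automation
  reviewerQualityScore: 1 | 2 | 3 | 4 | 5; // rubric-scored by a design lead
  reviewerNotes: string;                    // free-text context behind the score
}

// A simple rollup keeps both signal types visible side by side.
function summarize(cycles: CycleMetrics[]) {
  const avg = (xs: number[]) =>
    xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0;
  return {
    avgCycleTimeDays: avg(cycles.map((c) => c.cycleTimeDays)),
    avgQualityScore: avg(cycles.map((c) => c.reviewerQualityScore)),
    totalDefects: cycles.reduce((a, c) => a + c.defectsFound, 0),
  };
}
```

The quantitative fields can be instrumented automatically; the qualitative score only means something if it comes from a consistent rubric, which is why it stays a deliberate human input rather than a derived number.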
What I Bring To This
At Johnson Controls, I directed 39 designers across multiple regions and cut operational costs by 85% through workflow restructuring. At SwissRe, I reduced design defects by 70% and bugs by 47% through systematic process redesign. I now apply the same operational rigour to AI integration, using vibe coding and agentic tooling in my own consultancy practice daily. This isn't theoretical. I build with these tools, ship with these tools, and know where they break.
