Legacy design workflows can't keep up with AI-assisted development & design. I teach teams to close that gap with vibe coding and vibe designing, making Agile work the way it was intended 25 years ago. Designers don't need permission to sit at the top table. Design engineers are already there.
I spent over a decade leading design operations at enterprise scale, directing up to 39 team members across regions, cutting operational costs by 85%, and embedding UX strategy into business roadmaps at Johnson Controls and SwissRe. Mature design systems and KPI frameworks only get you so far. Real parity with engineering comes from AI-native workflows, vibe coding, and vibe designing. That is what I build now.
| Name | Naresh Shan |
|---|---|
| Address | Zurich, Switzerland |
| Phone | +41 78 781 14 78 |
| Email | mail@naresh-shan.com |
| LinkedIn | linkedin.com/in/naresh-shan |
Built an alignment intelligence platform that identifies communication gaps across product teams before they cause rework, which accounts for 80% of issues in design projects.
Founded a consultancy that helps design teams establish metrics frameworks and AI-native workflows, and teaches them how to demonstrate the strategic value of design to stakeholders.
Directed 39 global UX & UI team members across regions, improved workflows, reduced costs by 30%, and boosted productivity. Cut operational costs by 85% through strategic alignment processes. Created data-driven UX standards that improved user satisfaction by 90% across all international teams.
Directed 14-member EMEA design team through organizational transformation; developed 4 direct reports into senior roles. Decreased user-reported bugs 50% and support tickets 30%. Architected scalable design system and restructured DesignOps workflows embedding UX strategy into business objective planning.
Led UX strategy, aligning with business goals to ensure maximum customer value. Managed research teams, cutting non-valuable work by 40%. Established UX standards that reduced UI technical debt by 80%. Measured UX success with data, increasing conversions by 14%.
Collaborated on MVP features and wireframes, improving UI alignment. Refined product tasks for Agile sprints, reducing design-to-code time by 35%. Created design libraries and style guides reducing UI inconsistencies.
Contractor roles delivering front-end design and development across multiple organisations in London, building strong foundations in UI development and user-centred design practices.
Advanced programme combining management workflow methodologies with modern UX design, development practices and vibe coding techniques.
Professional certification in Agile Project Management at Foundation and Practitioner level.
PMI Agile Certified Practitioner training covering agile principles, practices and tools.
Certification combining PRINCE2 project management with agile delivery approaches at Foundation and Practitioner levels.
Comprehensive course covering design leadership, team management, and strategic design operations.
In-depth study of cognitive psychology applied to UX design and advanced journey mapping methodologies.
Undergraduate degree in Digital Media Production, building expertise in digital design, media production, and creative technology.
A senior design leader with 15+ years of experience scaling global UX teams and embedding design thinking into enterprise strategy. Based in Zurich, Switzerland, I specialise in DesignOps, cross-functional alignment, and translating complex workflows into measurable business outcomes.
I help organisations scale design maturity, embed UX strategy into business operations, and build teams that deliver measurable outcomes.
Integrating AI-native tools into design pipelines. Vibe coding, agentic builds, and legacy workflow redesign.
Design direction for global teams. Aligning UX vision with product roadmaps across markets.
Governing scalable component architecture. Coordinating cross-team adoption and delivery cadence.
Process frameworks that reduce costs. Streamlining workflows and resource allocation to maximise output.
KPI and OKR frameworks for design. Quantifying design contribution to business performance.
March 11, 2026
You spend 45 minutes demoing your design implementation. Half the engineering team has their cameras off. The PM asks three questions you already answered in the Figma file. Nobody mentions the edge case Sarah spent two days solving. The sprint review ends. Nothing changes in the next sprint.
This isn't a meeting problem. It's an alignment signal, and it's been compounding for at least two sprints before today.
When developers disengage from sprint reviews, the instinct is to treat it as a motivation issue. Alex schedules a retrospective about the retrospective. Marcus tweaks the agenda. Someone suggests making it more interactive. Those interventions target the symptom. The signal developers are sending is about the structure of the feedback loop itself: it isn't surfacing information they need, and they've learned through experience that showing up doesn't change what they build next.
The Scrum.org community forum documents this pattern explicitly. In a thread titled 'Developers Show Less Interest During Sprint Reviews,' practitioners describe developers who view demos as a waste of time — not because they're lazy, but because the ceremony doesn't connect to anything that affects their work. One Scrum Master noted that developers wanted the Product Owner and stakeholders to handle the review and just relay the feedback, effectively requesting to opt out of a ceremony they no longer trusted to route useful information back to them.
That preference for indirect feedback isn't apathy. It's a rational response to a broken loop.
Developer disengagement from sprint reviews typically surfaces one of three structural breakdowns. First, the demo doesn't reflect what they built. When Sarah's sprint review demo presents the product as designed in Figma rather than as implemented by engineering, developers are watching a version of their own work that doesn't exist. Second, the review isn't surfacing information they need. A sprint review where Marcus asks surface-level questions teaches developers that attendance delivers no new signal. Third, the feedback loop doesn't reach them anyway. In many teams, feedback from sprint reviews travels through the Product Owner before reaching developers. By the time it arrives, it's been filtered, prioritised, or deferred.
The attendance problem you observe in today's sprint review is the output of decisions made two or three sprints ago. The feedback loop broke before the cameras started turning off.
This is why treating sprint review attendance as an engagement metric misframes the problem. Engagement is a lagging signal — it reflects accumulated experience. Alignment is what's being measured. When developers no longer believe that attending the sprint review improves the quality of the decisions they make next sprint, disengagement isn't irrational. It's accurate.
Flowtrace's 2025 State of Meetings report found that 67% of meetings are considered unproductive by the executives who run them. But sprint reviews aren't generic meetings. They're the only ceremony in Scrum specifically designed to inspect the increment and adapt the product backlog. When that ceremony degrades into a demo watched by disengaged attendees, the product's feedback loop has been severed.
The useful diagnostic question isn't: how do we get developers back in the room? It's: what would a developer need to gain from attending that they can't get any other way?
If sprint review attendance functions as an alignment indicator, it can be read diagnostically. Track who's not attending, not just how many. If the same developers consistently opt out, the disengagement isn't random. It reflects which parts of the team have lost faith in the feedback loop. Ask what information is exclusive to the sprint review. If everything discussed in the review is available before or after, then the ceremony isn't generating new signal. Watch the gap between questions asked and answers incorporated.
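Read mechanically, the first of those diagnostics is just a tally. Here is a minimal sketch, assuming you can export per-sprint attendee lists; the names and data shape are hypothetical:

```python
from collections import Counter

# Hypothetical attendance log: one entry per sprint review, listing who
# attended. In practice this might come from calendar or meeting-tool exports.
sprint_reviews = [
    {"sprint": 41, "attended": {"alex", "sarah", "marcus", "priya"}},
    {"sprint": 42, "attended": {"alex", "sarah", "marcus"}},
    {"sprint": 43, "attended": {"sarah", "marcus"}},
    {"sprint": 44, "attended": {"marcus"}},
]

team = {"alex", "sarah", "marcus", "priya"}

# Count absences per person rather than headline attendance:
# the diagnostic question is *who* keeps opting out, not how many.
absences = Counter()
for review in sprint_reviews:
    for person in team - review["attended"]:
        absences[person] += 1

threshold = len(sprint_reviews) // 2
for person, missed in absences.most_common():
    flag = "  <- consistent opt-out" if missed > threshold else ""
    print(f"{person}: missed {missed}/{len(sprint_reviews)} reviews{flag}")
```

Anyone flagged here is not a motivation problem to fix; they are the part of the team whose feedback loop broke first.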
The cameras turn off one by one across two or three sprints until someone notices the attendance has dropped and frames it as a team engagement problem. It's not. It's your alignment score. The developers who aren't in the room have already calculated that attending doesn't improve what they build. That calculation is the signal worth reading, before it becomes a retro item.
Start by asking the developers who've stopped attending one question: what would need to be true about the sprint review for it to be worth your time? The answers will map the alignment gap more precisely than any attendance tracker.
March 3, 2026
You know how this goes. The design team runs three rounds of research, tests multiple prototypes, and brings data to the roadmap review. Then the PM says the sprint cannot wait. The first version ships. No validation. Just assumptions in production.
Atlassian's inaugural State of Product 2026, surveying over 1,000 product professionals, found that 40% of product teams do little or no experimentation at all. That is not a minority edge case. That is four in ten teams making product decisions without checking whether those decisions map to reality.
The default framing is that teams skip validation because they are busy or under-resourced. That framing misses the deeper problem. When a team ships without validating, they are not just skipping a research step. They are shipping their internal assumptions as if those assumptions were verified user needs.
The difference matters. A team that knows it is guessing can at least hedge. A team that treats its assumptions as facts builds confidently toward the wrong target.
This is why the same Atlassian report found that 84% of product teams worry their current products will not succeed in the market. Teams that skip validation are not just operating with less data. They are operating with misplaced confidence: high certainty about things they have not tested, low certainty about what users actually need.
What happens after a feature ships without validation follows a predictable pattern. Adoption comes in lower than expected. Post-launch feedback reveals that users want the feature to work differently. Marcus, the PM, calls for iteration. Alex and the engineering team absorb the rework. Timelines slip. And somewhere in the retrospective, the phrase "design misunderstood the user" appears.
Sarah, the designer, did not misunderstand the user. The team skipped the step that would have surfaced the user's actual mental model. But by that point, the conversation has moved on to delivery timelines, and the root cause gets buried.
The numbers on rework confirm the cost. Forrester's 2025 Total Economic Impact study, commissioned by UserTesting, found that validating designs before development reduces iteration cycles by 25%, saving enterprises approximately $2.5 million in developer costs. Code Climate's analysis of DORA research found that average dev teams rework 26% of their code before release, costing mid-sized companies upwards of $4.7 million annually.
These are not abstract figures. They are the downstream invoice for the decision not to validate.
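A back-of-envelope translation makes the invoice visible. Only the 26% rework share and the 25% cycle reduction come from the reports cited above; the team size and loaded cost below are illustrative assumptions:

```python
# Back-of-envelope rework cost for one organisation. Only rework_share
# (Code Climate / DORA) and the 25% reduction (Forrester) come from the
# cited reports; engineers and loaded_cost are illustrative assumptions.
engineers = 120          # assumed mid-sized engineering org
loaded_cost = 150_000    # assumed annual loaded cost per engineer, USD
rework_share = 0.26      # share of code reworked before release

annual_rework = engineers * loaded_cost * rework_share
saved_by_validation = annual_rework * 0.25

print(f"Estimated annual rework cost: ${annual_rework:,.0f}")
print(f"25% fewer iteration cycles:   ${saved_by_validation:,.0f} saved")
```

With these illustrative inputs, the estimate lands close to the $4.7 million annual figure Code Climate reports for mid-sized companies.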
Every research and testing platform publishes content about better research practices. Maze positions itself as continuous product discovery. UserTesting publishes ROI content for enterprise buyers. Dovetail focuses on methodology for teams already running research. What none of them address is the 40% who do no validation at all. Their content assumes the reader is already a practitioner. That assumption excludes four in ten product teams.
Why do those teams skip validation? Four root causes appear consistently: time pressure compresses the process until research becomes a sprint casualty; competitive mimicry treats "the market leader has it" as sufficient validation; team overconfidence grows from months of proximity to a build; and in many organisations, design is not framed as a revenue function, so its methods are treated as optional overhead.
Each of these is understandable. None of them accounts for the downstream cost.
Airtable's product management trends research found that only one in three product teams say their workflows are truly efficient and repeatable. That statistic and the 40% no-validation figure are not independent data points. They are describing the same underlying condition.
Teams without efficient, repeatable workflows make ad-hoc calls about what to skip. Validation is one of the first things to go. The result is lower confidence in what ships, lower efficiency in how it ships, and a pattern that becomes self-reinforcing: teams that skip validation get burned by post-launch iteration, which creates more time pressure, which leads to more skipped validation.
The same Atlassian report found that 80% of product teams do not involve engineers early in the process. For Alex and engineering teams everywhere, the validation skip is not isolated. It is part of a broader pattern of alignment gaps across the product development cycle.
The solution is not a better research process. Teams that skip validation are not missing better tools or methodologies. They are missing the framing that connects the skip to its cost.
Validation is not a UX formality. It is a cost avoidance decision. The 25 minutes Sarah spends testing a prototype with five users is a lever against thousands of dollars in developer rework per saved cycle. The research round Marcus wants to skip has a price tag attached. That price tag only becomes visible weeks after the feature ships, when it shows up as rework, iteration backlog, and post-launch firefighting.
The disconnect is not between designers who want to research and PMs who do not. It is between the moment a product decision is made and the moment its consequences appear. Validation closes that gap before it becomes a cost. Until teams connect those two moments explicitly, 40% will keep shipping assumptions as features.
February 25, 2026
Every morning, product teams perform the same ritual: 15 minutes of "I'm still working on the same thing as yesterday." The daily standup survives untouched in every agile transformation, every async-first rebrand, every meeting audit. It is the sacred cow of product development, too embedded to question, too habitual to kill.
But in 2026, there is finally a case that the standup is not just inefficient. It is structurally incapable of catching the misalignment that actually costs your team.
The standup is a point-in-time snapshot. Someone says they are working on the checkout redesign. Someone else says the API is nearly done. Nobody mentions the Figma revision from 4pm yesterday. Nobody knows the Linear ticket moved to blocked this morning.
The standup catches what people remember to say out loud. It misses everything that changed in the tools between yesterday's standup and today's.
Only 11% of meetings are rated highly productive by attendees, according to Atlassian's State of Teams 2025 report. The same report found that 72% of workers say the only way to get information is to ask someone, which means scheduling yet another meeting. Meanwhile, Microsoft's Work Trend Index 2025 found that workers are interrupted 275 times a day, with 60% of meetings being unscheduled and ad hoc. The standup is meant to reduce these interruptions. Instead, it often creates the very information gaps that cause them.
In February 2026, Notion launched Custom Agents: autonomous AI teammates that handle entire workflows, monitor channels, route tasks, and compile updates across Slack, Figma, Linear, email, and HubSpot. These agents do not wait for a scheduled sync. They monitor continuously and surface misalignment the moment it happens.
The contrast is concrete. A standup catches what people remember to mention, what they feel is worth raising, and what they can articulate in 90 seconds. A continuous AI alignment agent catches the Figma component updated after yesterday's standup, the Linear ticket that moved to blocked without a comment, the Slack thread where an engineering decision quietly changed the spec, and the calendar conflict that means the designer and the engineer are now working from different assumptions.
Flowtrace's State of Meetings Report 2025 found that meeting time costs an average of $29,000 per employee per year. Harvard Business Review research found that companies reducing meeting time by 40% saw productivity increase by 71%. For a product team of 10, the standup cadence alone represents thousands of dollars annually in direct salary cost, plus the compounding cost of context-switching.
For designers, the standup is particularly frustrating because design work is context-heavy and asynchronous by nature. A designer might spend two days iterating on a flow, only to discover at the next standup that the backend constraint governing the entire interaction had changed on day one. The standup did not catch it. The Figma comment went unread. The Slack message was buried.
An AI agent monitoring Figma activity, Slack threads, and Linear tickets in real-time would have flagged that constraint change the hour it happened, routed a summary to the affected designer, and updated the relevant Notion page automatically. No standup required.
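What does that routing logic look like? A minimal sketch of the core loop follows; everything in it is hypothetical, since a real agent would consume Figma, Slack, and Linear webhooks rather than a hand-rolled event list:

```python
from dataclasses import dataclass

# Hypothetical event shape. A real agent would receive these from
# Figma, Slack, and Linear webhooks, not from a hard-coded list.
@dataclass
class Event:
    source: str       # e.g. "figma", "linear", "slack"
    kind: str         # e.g. "constraint_change", "ticket_blocked"
    summary: str
    affected: list    # people who are working from this assumption

def route(event: Event) -> None:
    # Stand-in for "post a summary to the affected person's channel
    # and update the shared project page".
    for person in event.affected:
        print(f"notify {person}: [{event.source}] {event.summary}")

events = [
    Event("slack", "constraint_change",
          "Backend pagination limit dropped to 50 items per request",
          affected=["designer"]),
    Event("linear", "ticket_blocked",
          "PROJ-214 moved to Blocked with no comment",
          affected=["designer", "pm"]),
]

# The agent's core loop: no schedule, no standup.
# It reacts to each change the hour it happens.
for event in events:
    route(event)
```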
The daily standup was designed to create alignment. By 2026, the tools exist to create alignment continuously, autonomously, and without scheduling a meeting. The question is not whether AI agents can replace the standup. The question is why teams are still treating a 2001 agile ceremony as the best available tool for a 2026 coordination problem.
The 15 minutes every morning are not the cost. The misalignment accumulating quietly in Slack threads, Figma revisions, and unread tickets between standups is the cost. AI agents do not just save the 15 minutes. They catch what the ritual never could.
February 17, 2026
You spend two weeks running user research. You recruit participants, moderate sessions, tag observations in Dovetail, synthesise patterns, and write a report with actual findings. Then that report gets referenced in a PM summary. The PM summary gets distilled into a design brief. The design brief gets interpreted by whoever is opening the Figma file that week.
By that point, the insight is unrecognisable.
This is not a story about bad intentions. Nobody is deliberately discarding research. The degradation is structural, happening in the whitespace between tools: the gap between where research lives and where design decisions actually get made.
Follow a single finding through its lifecycle. Call it Insight Zero: "Users abandon the flow at the pricing step because they do not understand what they are paying for."
That insight, in full context, lives in your research repository. It has supporting quotes, session recordings, pattern tags, confidence ratings, and the demographic breakdown of which user segments said it most. It is specific, traceable, and rich with evidence.
Step 1. Synthesised report. The researcher condenses ten sessions into a slide deck. Insight Zero becomes: "Pricing page clarity is an issue." The supporting context stays in Dovetail. The slide deck gets sent to the product group.
Step 2. PM summary. The PM reads the report, attends the readout, and writes a one-pager for the sprint. Insight Zero becomes a line item: "Improve pricing page comprehension." The slide deck gets filed in a Confluence folder that will never be opened again.
Step 3. Design brief. The designer receives the one-pager. They are already mid-sprint on another feature. Insight Zero becomes a note in the design file: "pricing clarity: rework copy?" The intent is there. The evidence is gone.
Step 4. Figma interpretation. The designer makes the call. They update the headline, add a tooltip, and ship a version. It is a reasonable guess. But the original finding about which users, which step, which mental model mismatch: none of that is connected to the decision they just made.
Four steps. One insight. A completely different output at every stage.
The tools do not overlap. Research tools (Dovetail, Maze, UserTesting, Lookback) exist to capture and store insights. Design tools (Figma, Zeplin, Linear) exist to build from them. Nobody has claimed the space between.
Atlassian's State of Teams 2025 found that 25% of the workweek is wasted searching for information, and 72% of knowledge workers must ask someone else to find what they need. That statistic describes the research-to-design handoff precisely: the information exists, but accessing it requires a person-to-person request rather than a reliable system.
When designers cannot trace their decisions back to user evidence, they make reasonable guesses. Some guesses are right. Many are not. The product drifts from actual user needs, one design decision at a time, with no mechanism to detect the drift.
Organisations that integrate user research into product development report 2.7x better outcomes compared to those that rarely incorporate insights. Teams with democratised research cultures are 2x more likely to report that findings influence strategic decisions. The value of the research is not in question. The problem is the pipeline.
Closing the gap requires the insight to travel with its evidence. The finding, the supporting data, the user context, and the design decision it informed need to exist as a connected chain, not as a series of export-and-import handoffs between tools that do not communicate with each other.
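As a data shape, the connected chain could look something like the sketch below. The field names are illustrative assumptions, not any tool's schema:

```python
from dataclasses import dataclass, field

# Minimal sketch of "the insight travels with its evidence":
# the shape of the chain, not a real integration.
@dataclass
class Evidence:
    quote: str
    session_id: str          # link back to the recording in the repo

@dataclass
class Insight:
    statement: str
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class DesignDecision:
    description: str
    informed_by: Insight     # the decision keeps a live link to its source

insight_zero = Insight(
    statement=("Users abandon the flow at the pricing step because "
               "they do not understand what they are paying for"),
    evidence=[Evidence("I have no idea what 'per seat' covers", "S-07")],
)

decision = DesignDecision(
    description="Rewrite pricing-step copy around per-seat billing",
    informed_by=insight_zero,
)

# Anyone opening the design decision can check the original message:
print(decision.informed_by.statement)
print(decision.informed_by.evidence[0].quote)
```

The structure matters more than the tooling: as long as the decision holds a reference to the insight, and the insight holds its evidence, the telephone game can always be checked against the original message.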
Teams that get this right tend to do a few things differently. They bring researchers and designers into synthesis together rather than passing a finished artefact. They build lightweight rituals (brief weekly standups, linked comment threads, shared tagging systems) that keep evidence accessible at the point of decision.
The degradation chain is real. But it is not inevitable. The telephone game only works if nobody can check the original message.
February 9, 2026
Most people are scared of the wrong thing.
When the conversation turns to AI danger, it gravitates towards familiar territory: sentient machines, mass job displacement, autonomous weapons, existential risk. These are good narratives. They have clear villains and dramatic stakes. They also largely miss what is actually happening.
The thing worth being concerned about is not what AI can do. It is what happens when running AI costs nearly nothing.
In March 2023, accessing GPT-4 cost $30 per million input tokens and $60 per million output tokens. By mid-2024, GPT-4o Mini had reduced that to $0.15 and $0.60 respectively. DeepSeek R1, released by a Chinese research lab, undercut the entire market at $0.55 per million tokens while delivering near-frontier reasoning capability. According to Epoch AI, achieving GPT-3.5-equivalent performance became 280 times cheaper between November 2022 and October 2024.
Andreessen Horowitz has documented the inference cost decline at approximately 10x per year — faster than Moore's Law and faster than the decline in internet bandwidth costs during the early web era. They called this "LLMflation."
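To make those numbers concrete, here is the same hypothetical task priced at the March 2023 and mid-2024 rates quoted above. The 1,000-token sizes are illustrative assumptions:

```python
# One task: 1,000 input tokens and 1,000 output tokens (illustrative sizes),
# priced at the published per-million-token rates quoted above.
tokens_in = tokens_out = 1_000

def cost(price_in_per_m: float, price_out_per_m: float) -> float:
    return (tokens_in * price_in_per_m + tokens_out * price_out_per_m) / 1e6

gpt4_2023 = cost(30.00, 60.00)   # GPT-4, March 2023
mini_2024 = cost(0.15, 0.60)     # GPT-4o Mini, mid-2024

print(f"GPT-4 (2023):       ${gpt4_2023:.5f} per task")   # $0.09000
print(f"GPT-4o Mini (2024): ${mini_2024:.5f} per task")   # $0.00075
print(f"Ratio: {gpt4_2023 / mini_2024:.0f}x cheaper")     # 120x
```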
This is not a projected trend. It has already happened. The cost floor has already moved. And the consequences of that move are only beginning to land.
William Stanley Jevons identified this dynamic in 1865 when he noticed that making steam engines more efficient did not reduce coal consumption. It increased it, because cheaper operation made previously uneconomical applications worth running. The same mechanism applies to AI.
Even as per-token costs collapsed, average monthly AI spending across organisations rose 36%, and the share of companies spending more than $100,000 per month on AI doubled. Agentic tasks and multi-step reasoning chains consume upwards of 100 times more tokens than simple queries. The cheaper each token becomes, the more tokens get burned.
This matters because it means the transition is not gradual. It is not a slow substitution of human work by machine work. It is a step change that occurs when the economic friction that previously kept humans in the loop disappears.
The concern is not that AI becomes powerful. The concern is that AI becomes ubiquitous because it becomes free.
Content at industrial scale becomes trivial. When generating a thousand articles, product descriptions, legal summaries, or social media posts costs a fraction of a penny, the economics of every information industry shift. The throttle on volume was always cost. Remove the throttle and volume becomes essentially unlimited.
Automated decision-making stops being a premium product. Hiring screening, loan assessment, medical triage, benefits administration — these processes have kept humans in the loop partly because of genuine regulatory requirements, but also because automation had a cost. As that cost approaches zero, the case for human review weakens not because anyone decided to remove it, but because the economic argument for keeping it evaporates.
Manipulation becomes industrialised. Political influence operations, social engineering, fraud, and targeted disinformation are not new problems. But cost has always been a natural throttle on their scale. A campaign that required a hundred people to run becomes a campaign that requires a budget line and an API key.
Power concentrates at the infrastructure layer. Training a frontier AI model costs hundreds of millions of dollars. Using one costs a fraction of a cent. This creates a specific and underappreciated asymmetry: the ability to build AI consolidates within a small number of well-capitalised organisations, while the ability to deploy AI becomes universal. The few who control the infrastructure determine the rules. The many who consume it do not.
The sci-fi risk narrative requires AI to have agency. To want something. To choose to harm. This is a useful story because it has a clear protagonist and antagonist, and because it suggests that alignment research is the key variable.
The economic risk requires none of that. It does not require AI to be sentient, or to have goals, or to defect from human instructions. It only requires AI to be cheap enough that the organisations and individuals deploying it no longer have a financial reason to include humans in the process.
The robots do not need to decide to take your job. They just need to make doing your job cost a fraction of a cent.
The questions that matter are not "is this AI smarter than me?" They are: what happens to the value of my cognitive output when producing an equivalent output costs nothing? Who controls the infrastructure on which all of this runs? What governance structures exist when the cost of deploying AI at scale drops below the threshold of meaningful decision-making?
OpenAI was reportedly on track to lose $5 billion in 2024. Anthropic expected to be $2.7 billion in the red by 2025. Prices this low are not sustainable without subsidy or consolidation. The current pricing is a land-grab. When consolidation happens and investor pressure for profitability forces a correction, whoever controls the infrastructure at that moment holds the leverage.
That is the risk worth watching. Not whether AI becomes conscious. Whether it becomes cheap enough that no one notices when it replaces the human in the loop, and powerful enough that the people who built it can name their price when the market matures.
February 1, 2026
Every team knows what misalignment looks like. It is the moment someone presents a feature nobody asked for, or a sprint review reveals that two squads built the same thing differently. That moment is dramatic. It gets a post-mortem. People talk about it.
But misalignment rarely arrives as a single event. It accumulates. Quietly. Over weeks. Between the kickoff where everyone nods in agreement and the launch where everyone realises they were agreeing to different things. That slow, invisible accumulation is what this article names and examines: alignment decay.
Alignment decay is the gradual divergence between a team's shared understanding at the start of a project and each member's operating understanding as the project progresses. It is not a disagreement. Disagreements are visible. Alignment decay is the slow replacement of shared context with individual assumptions, none of which feel like assumptions at the time.
At kickoff, the team occupies a shared mental model. The problem is defined. The scope is agreed. Roles are understood. But from that point forward, every conversation, every design decision, every technical trade-off subtly shifts each person's internal map. Without a mechanism to re-sync those maps, the team drifts. Not away from the goal, but away from each other's version of the goal.
Gallup's research shows that only 46% of employees clearly know what is expected of them at work, a figure that has dropped 10 percentage points since 2020. That is not a failure of kickoffs. It is a failure of maintenance.
One reason alignment decay goes undetected is that leaders and teams have fundamentally different views of how aligned they are. In 2024, 44% of leaders believed their employees were entirely aligned with organisational goals. Only 14% of employees agreed. That 30-point gap is not a communication problem. It is an observability problem.
Leaders see alignment through the lens of what was communicated. Teams experience alignment through the lens of what was understood, and those are different things.
Alignment decay follows a predictable pattern. In weeks one and two, the team operates from the kickoff's shared context. By weeks three and four, small interpretation differences emerge. A scope question gets answered in a Slack thread that three people miss. A design review surfaces a direction nobody explicitly agreed to, but nobody explicitly objects to either.
By the midpoint of a project, the accumulated drift becomes structural. Teams are no longer making decisions from the same base. They are making decisions from their own evolved understanding of the project, which has been shaped by dozens of small, undocumented adjustments.
The final phase is discovery, which usually arrives too late. Integration testing, stakeholder reviews, or launch preparation reveals that what each person built is technically correct but collectively incoherent. The project is not broken. It is fragmented.
McKinsey's research shows that companies whose top executive teams are aligned are almost twice as likely to achieve above-median financial performance. Enterprises with strong alignment deliver three times the shareholder returns of those with weaker execution.
Gallup's 2025 global workplace report quantifies the broader cost: global employee engagement fell to 21% in 2024, with an estimated $438 billion in lost productivity. When alignment decays, expectations become ambiguous by default.
The first step to addressing alignment decay is accepting that kickoffs do not create durable alignment. They create a snapshot. That snapshot has a half-life, and the half-life is shorter than most teams assume.
Three specific practices reduce alignment decay. First, limit active goals to three to five per team per cycle. Second, make assumptions explicit. Regular "assumption audits," where team members articulate what they believe is true about the project's direction, surface divergence before it compounds. Third, measure alignment directly. Ask team members independently what the project's top priority is. If the answers diverge, that is alignment decay made visible.
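The third practice is easy to operationalise. A minimal sketch, with illustrative responses:

```python
from collections import Counter

# Ask each team member independently for the project's top priority
# and check whether the answers converge. Responses are illustrative.
responses = {
    "alex":   "ship the new checkout flow",
    "sarah":  "fix pricing-page comprehension",
    "marcus": "ship the new checkout flow",
    "priya":  "reduce support tickets",
}

counts = Counter(answer.strip().lower() for answer in responses.values())
top_answer, top_count = counts.most_common(1)[0]
agreement = top_count / len(responses)

print(f"Most common priority: '{top_answer}' ({agreement:.0%} agreement)")
if agreement < 0.75:
    print("Answers diverge: alignment decay made visible.")
```

The exact threshold is a judgment call; what matters is asking independently, so each answer reflects an operating understanding rather than a room consensus.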
Alignment decay is not a failure of intention. Every team that kicks off a project intends to stay aligned. The decay happens not because people stop caring, but because the mechanisms that maintained alignment at the start naturally erode as a project progresses. The teams that ship well are not the ones that aligned once. They are the ones that realigned continuously.
February 26, 2026
There is a kind of burnout that does not come with a dramatic exit. It does not surface in a meeting or a performance review. It builds slowly, in the space between what the role was supposed to be and what it has quietly become.
The lone designer on a cross-functional team knows this pattern well.
They joined to do meaningful product work. Within a few months, they are running every piece of design work for a squad of eight to fifteen engineers. They are the only person asking user questions, the only one thinking about interaction patterns, the only one pushing back when the product manager describes a feature that would confuse the people who will actually use it. They are doing everything because there is no one else to do it.
This is not an accident. It is a structural outcome that organisations have been building toward for years.
Nielsen Norman Group's staffing research puts the typical researcher-to-designer-to-developer ratio at 1:5:50. In squad-based organisations, the ratio often looks worse in practice: one designer embedded across a product team of eight to twelve engineers, with no dedicated researcher and no design lead with actual bandwidth to support them.
Peter Merholz, author of "Org Design for Design Orgs," has documented the consequences of this model in detail. When a designer is embedded alone into a cross-functional squad, organisations are asking a single person to deliver across an unrealistic range of capabilities: interaction design, visual design, content, user research, strategy, facilitation, stakeholder communication, and often, production-level execution at the same time.
Senior designers end up doing work far below their level because no one else is available to do it. Junior designers, without mentorship or peers, function effectively as production artists executing a product manager's specifications.
The pipeline collapses. Senior designers burn out and leave. Junior designers never develop. And the organisation keeps hiring for the mythical unicorn designer who can do it all, alone, indefinitely.
Cross-functional teams are optimised for engineering throughput. Sprints are set, tickets are written, and velocity is measured in shipped features. Design is expected to slot in around this cadence, delivering ahead of the engineering queue while also attending every planning session, every standup, and every stakeholder review.
The designer is, by default, the only person in the room who represents user thinking. That is not a small job. But because it has no equivalent in the engineering world, it is often invisible. There is no sprint point for "challenged a feature direction that would have created a confusing flow." There is no metric for the bad path that did not get built.
Memorisely's Burnout Curve analysis captures what happens next. After layoffs and team restructures, senior designers are left to cover every gap: research, strategy, prototyping, visual delivery, facilitation, mentoring, and sometimes even writing production copy or contributing to code. The breadth of responsibility does not plateau. It grows as the team contracts.
Gallup research puts 77% of employees at elevated burnout risk when they feel isolated. Being the only person in your discipline on a team is a structural form of isolation. There is no peer to pressure-test a decision with. No colleague who understands the trade-offs you navigated last week. No one who notices when the quality of your work is starting to slip because you are running at capacity and have been for months.
The most common mistake organisations make with lone designers is mistaking sustained output for sustainable capacity.
A designer who is producing work week after week looks fine from the outside. Tickets are being closed. Designs are being shipped. The machine is running. What is not visible is the cost of that output: the strategic work being deferred, the research that is not happening, the design debt accumulating because there is no time to revisit anything that shipped six months ago.
By the time burnout becomes visible, the designer has already decided to leave. The organisation then spends three to six months' salary replacing them, drops the problem back into the same structural conditions, and wonders why the new hire starts showing the same signs within a year.
This is a systemic problem, but it lands on individuals. If you are a lone designer on a cross-functional team, the most important thing you can do is make your capacity visible before it runs out.
Document the scope of what you are doing, not just the deliverables but the decisions, the unplanned requests, the work that never makes it into a ticket. Make this visible to your manager regularly. Not as a complaint, but as data. Organisations respond to data.
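One lightweight way to turn that documentation into data is a running log with a weekly tally, as sketched below. The categories and entries are hypothetical:

```python
from collections import Counter
from datetime import date

# Illustrative capacity log: every piece of work, including the
# unplanned requests that never become tickets.
log = [
    (date(2026, 1, 5), "planned",   "Checkout flow iteration"),
    (date(2026, 1, 6), "unplanned", "PM asked for pitch-deck mockups"),
    (date(2026, 1, 7), "unplanned", "Marketing banner request via Slack"),
    (date(2026, 1, 8), "planned",   "Design review prep"),
    (date(2026, 1, 9), "unplanned", "Hotfix copy for settings page"),
]

share = Counter(category for _, category, _ in log)
unplanned_pct = share["unplanned"] / len(log)
print(f"Unplanned share of logged work this week: {unplanned_pct:.0%}")
# A number like this, shown weekly, is data a manager can act on.
```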
Set explicit boundaries around research and strategy time. If every hour is reactive, you are functioning as a production resource, not a designer. Those hours need to be scheduled and protected the same way engineering sprint capacity is.
Find external community. The isolation of being the only designer on a team is real. Design communities, peer groups, and mentorship networks do not fix the structural problem, but they close the peer gap enough to protect your thinking and your sanity.
Be honest with yourself about the trajectory. If the scope keeps expanding and the support does not, the situation is not going to improve on its own. That is not pessimism. It is pattern recognition.
None of this is solved by better self-care. The answer is not yoga or time management. It is organisational design.
Merholz argues that the team, not the individual designer, is the atomic unit of a design organisation. Designers need peers, mentors, craft community, and a shared identity beyond the squad they sit in. The embedded model has real advantages for collaboration and shipping velocity. But without a design discipline structure that supports people inside it, the model consumes its own practitioners.
The lone designer always burns out first because the conditions produce exactly that outcome. Until organisations redesign those conditions, the pattern will keep repeating.
January 10, 2026
As we write this article, our team is in full swing redesigning our application, and one thing stood out from our previous MVP: we had focused solely on the old way of working and niched ourselves into an approach that, we now see, is becoming more obsolete by the day. The role of UX and UI designers has changed, and we're hearing more about designers being involved in the development process than ever before.
Back in the day, a designer would also be involved in some form of coding, or do it themselves. Those were the unicorn days, when employers hired designers who could take their own designs to code, and they were better off for it: designers paid attention to the minor details that developers rarely did. Then we moved into an era of specialisation, where designers focused solely on design. They might know some code, but it wasn't necessary, because employers wanted people to specialise in one area so that other team members could focus on another.

Fast forward to today, and the time of the unicorn has come back, this time with the help of AI. With so many tools that let designers turn designs into code quickly, designers are increasingly being asked whether they can also code their designs. This opens opportunities to people who have never done it before. Unlike the original unicorn, who needed real knowledge of code, you now need to be able to write a prompt, and write it well. The quality of your output depends on how clear and concise the context in your prompt is, not so much on technical knowledge (though that helps when you get stuck in a rut with vibe-coding tools). Either way, the effort, time, and resources needed to take something to production keep shrinking. This is where the Design Engineer position has come into full force, giving designers the chance to take their designs to code without needing another team or multiple alignment meetings to get them right.
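What does "clear and concise context" look like in practice? Here is a minimal sketch of one way to structure a prompt; the section names are our own convention, not any tool's required format:

```python
# Illustrative only: one way to assemble clear, concise context for a
# vibe-coding tool. The Goal/Context/Constraints structure is our own
# convention, not any platform's API.
def build_prompt(goal: str, context: str, constraints: list[str]) -> str:
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Goal: {goal}\n\n"
        f"Context: {context}\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        "Output: production-ready component code with comments."
    )

prompt = build_prompt(
    goal="Build the pricing-step screen from the attached design",
    context="B2B SaaS checkout; users abandon here when billing is unclear",
    constraints=[
        "Match the existing design tokens",
        "Keep the per-seat price visible at every step",
    ],
)
print(prompt)
```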
We looked at this in detail when we revised our MVP, because the old way of working in Figma and handing designs over is slowly becoming outdated, time-consuming, and prone to errors and rework. As companies rethink where to cut resources and lean on agentic AI wherever possible, we think designers can still keep their jobs. Uptake is massive at the moment, with more and more designers moving to vibe-coding platforms and skipping the design-in-Figma step altogether. There will still be times when designers work in Figma, but the doors keep opening wider: a lot of work is moving towards prompting, and design is heading into that space as fast as development did.
We think these new ways of working will change workflows in organisations dramatically, and we will see a shift in who owns each part of a digital product, which is what designers have always strived for. The question is whether designers are ready to move towards this new role, or whether they want to stay comfortable in the place they know best. Designing on a digital canvas has been around since the Photoshop years, and being a fully AI-driven designer is still tedious. Either way, it boils down to the design team learning another skill set to stay relevant in an ever-changing career.