The People Side of AI-Native: What Actually Changes for Your Team
George Pappas
Conversations about AI-native digital delivery tend to focus on the technology: which platforms have embedded AI capabilities, which tools generate content or code or test cases, which workflows have been automated. These are reasonable questions, but they tend to obscure a more consequential shift that organisations need to understand before they can realise the value of an AI-native approach. The technology changes the nature of the work itself, and that means it changes what skills matter, how teams need to be structured, and how delivery is measured. Getting the people side of this transition right is not a secondary consideration. In most cases it is the determining one.
What changes for content teams
Content editors and strategists tend to be the first people in a digital organisation to encounter AI-native tooling in a meaningful way, because platforms like Optimizely SaaS CMS and Sitecore AI surface AI capabilities directly inside the authoring environment. Content suggestions, translation, SEO analysis, tone guidance: the tooling is increasingly capable and increasingly present in the day-to-day editorial workflow.
The change this creates is not primarily about producing more content. It is about what the editorial role actually involves. When AI handles a greater share of first-draft production, the work shifts toward directing and evaluating rather than drafting. A content editor working effectively alongside these tools spends more time on brief quality, on evaluating outputs against brand and audience criteria, and on interpreting what platform signals are telling them about what is and isn't resonating with users. The judgment required is not less sophisticated than before. It is differently sophisticated, and it is not a capability that develops automatically from access to the tools.
This has practical implications for how content teams are recruited and developed. The skills that matter most in an AI-native content operation are structured thinking, editorial clarity, an understanding of how content models work, and comfort with performance data. Teams that develop these capabilities will get substantially more from the platform. Teams that don't will find that AI tooling adds friction rather than flow, because the judgment layer required to use it well is not there.
What changes for developers
The individual-level effects of AI on engineering work are well documented: code assistants, AI-supported review, automated testing and documentation generation are now common parts of development practice, and the research on productivity gains is reasonably consistent. The organisational implications are less often discussed, and in some ways more important.
When AI handles a greater proportion of scaffolding, boilerplate, and routine implementation, the high-value engineering work concentrates in architecture, integration design, performance, and the judgment calls that the tools cannot make. Senior engineers who understand this can direct their attention accordingly and get significantly more done. Teams that have not made this shift consciously tend to find that AI tooling raises output without raising quality, because the additional capacity is being used on more of the same work rather than on the work that actually requires skilled judgment.
There is also a governance dimension that organisations frequently underinvest in. Where AI tooling is used in codebases that handle sensitive client data or operate in regulated environments, the rules need to be clear and genuinely understood by the team, not documented in a policy that gets acknowledged once and then forgotten. This is an operational reality rather than a theoretical concern, and any serious AI-native delivery practice needs to address it as part of how the team works rather than as an afterthought.
What changes for UX and design
Design has perhaps the most nuanced relationship with AI-native delivery, because the value of design work is difficult to decompose into discrete tasks in the way that content production or code generation can be. AI can generate concepts, produce variations at scale, and surface patterns in user behaviour that would not be visible through manual analysis. What it cannot do is make considered judgments about what users actually need, or hold the organisational and human context that good experience design reflects.
In practice, the most significant change for design teams is in the pace and scope of early-stage work. Concept generation that previously took days can now take hours. User journey analysis that previously required manual synthesis across multiple data sources can be supported by tools that surface patterns more quickly and completely. This is genuinely useful, and it frees design capacity for the work that requires human judgment: experience strategy, stakeholder alignment, and the upstream decisions about what to optimise for in the first place.
The risk worth being honest about is that the speed of AI-assisted design work can create an impression of thoroughness that is not always warranted. More concepts and more variations are not the same as better thinking about the problem. Design leads working in AI-native environments need to be deliberate about when to use the tools to accelerate and when to slow down, and need the organisational standing to make those calls without pressure to simply produce more.
What changes for digital leaders
For the people accountable for digital platforms and the teams that build and run them, AI-native delivery requires a reframing of what good looks like. The governance models and performance metrics inherited from a pre-AI context were built around a different kind of work, and applying them directly to AI-native delivery tends to measure the wrong things. Neither velocity in story points nor quality measured purely through defect rates captures the ways in which the work has changed, or where the real risks and opportunities now sit.
A more useful frame is outcomes. AI-native teams should be producing better outcomes, faster, and with less rework, and if they are not, the tools are rarely the root cause. The problem is almost always in how the work is structured, how quality is defined within the team, and whether the team's capabilities have developed to match the environment they are now working in.
The talent question is also worth addressing directly. AI-native delivery does not reduce the need for skilled practitioners, but it does change the shape of the skills that matter. Organisations that treat AI tooling primarily as a way to do more with fewer people tend to create conditions where the tools are used at volume but the judgment layer that makes the output valuable is thin or absent. The more durable model is to use AI to extend what skilled practitioners can do: broader coverage, faster iteration, more analytical depth, with strong human judgment applied to the outputs. This is a different staffing and development conversation to the one many organisations are currently having.
The transition is the work
None of this happens automatically as a consequence of platform deployment or tool adoption. The shift to AI-native delivery is an organisational capability change, and it requires deliberate investment in how teams work, what skills they develop, and how performance is understood and evaluated over time.
The organisations that are furthest ahead on this are not necessarily those with the most advanced platforms or the most aggressive adoption of AI tooling. They are the ones that have thought carefully about the human side of the transition: how to bring teams along rather than simply equipping them, and how to structure the work so that the tools amplify what their people can do rather than operating independently alongside them. The technology is genuinely the easier part of this. The people side is where the value is realised or lost.