Just caught up with some interesting findings from the latest DORA research, and honestly, there's a lot here worth paying attention to if you're in tech delivery or software engineering.
So here's what's drawing my attention: 89% of organisations have now integrated AI into their development workflows. That's pretty much mainstream adoption at this point. Three-quarters of developers are using AI tools every single day. This isn't a future trend anymore—it's the operating reality right now.

But here's the thing that caught me—and this is the nuanced part that a lot of people miss. Yes, there are real productivity gains happening. A 25% bump in AI usage correlates with a 3.4% improvement in code quality and a 7.5% improvement in documentation. That's measurable. That's real.

However, and this is critical, delivery stability can actually drop by up to 7% if you're not careful about how you're rolling this out. The teams I've seen getting the most value from AI aren't just throwing it at everything—they're being deliberate. Smaller commits, solid automated testing, tight feedback loops. The fundamentals still matter.

What I found particularly useful is that AI is genuinely reducing the grind work—synthetic test data generation, regression testing automation, that kind of thing. But it's not replacing the human judgment calls. Prioritisation, architecture decisions, those still need people in the room. AI is the augmentation, not the replacement.
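To make the "grind work" point concrete, here's a minimal sketch of the kind of task that's easy to hand off: generating seeded synthetic test data for regression tests. The schema, field names, and value sets are my own assumptions for illustration, not anything prescribed by the DORA research.

```python
import random
import string
from datetime import date, timedelta

def synthetic_users(n, seed=42):
    """Generate n fake user records for regression testing.

    Seeded RNG so repeated test runs see identical data.
    All field names/values are illustrative assumptions.
    """
    rng = random.Random(seed)
    records = []
    for i in range(n):
        # Random lowercase local part keeps emails obviously fake
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        signup = date(2024, 1, 1) + timedelta(days=rng.randint(0, 365))
        records.append({
            "id": i,
            "email": f"{name}@example.test",
            "signup": signup.isoformat(),
            "plan": rng.choice(["free", "pro", "enterprise"]),
        })
    return records

users = synthetic_users(3)
```

The fixed seed matters: deterministic fixtures mean a regression failure points at the code change, not at a data shuffle.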

The governance angle is real too. Organisations with clear AI policies—defined data rules, mandatory code review, security validation—they're seeing higher adoption and more consistent results. It's not restrictive; it's actually enabling.

One more stat that jumped out: structured learning time for developers correlates with a 131% increase in AI adoption compared to teams without it. That's a massive difference. It's basically saying: if you invest in helping your team get comfortable with these tools in a controlled way, adoption accelerates significantly.

The underlying lesson? AI amplifies what's already there. Strong DevOps practices get accelerated. Weak processes get magnified in the worst ways. It's not a silver bullet—it's a multiplier on whatever engineering discipline you already have.

If you're working in regulated industries or just thinking about how to scale your delivery safely, this is worth diving into. The metrics that matter—lead time, deployment frequency, failure rate, recovery time—these are the actual measures of whether AI integration is working or just creating noise.
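If you want to actually track those four metrics, here's a minimal sketch of computing them from a list of deployment records. The record shape (`committed_at`, `deployed_at`, `failed`, `restored_at`) is a hypothetical schema I've assumed for illustration; your deployment tooling will have its own fields.

```python
from datetime import datetime, timedelta
from statistics import median

def dora_metrics(deploys, period_days=28):
    """Compute the four DORA metrics from deployment records.

    Each record is a dict with assumed (illustrative) keys:
      committed_at / deployed_at : datetimes for commit and deploy
      failed                     : whether the deploy caused an incident
      restored_at                : when service recovered (failed deploys only)
    """
    lead_times = [d["deployed_at"] - d["committed_at"] for d in deploys]
    failures = [d for d in deploys if d["failed"]]
    recoveries = [d["restored_at"] - d["deployed_at"] for d in failures]
    return {
        "deployment_frequency": len(deploys) / period_days,  # deploys per day
        "median_lead_time": median(lead_times),
        "change_failure_rate": len(failures) / len(deploys),
        "median_time_to_restore": median(recoveries) if recoveries else timedelta(0),
    }

# Tiny worked example: two deploys over two days, one of which failed.
sample = [
    {"committed_at": datetime(2025, 1, 1, 9), "deployed_at": datetime(2025, 1, 1, 12),
     "failed": False},
    {"committed_at": datetime(2025, 1, 2, 9), "deployed_at": datetime(2025, 1, 2, 10),
     "failed": True, "restored_at": datetime(2025, 1, 2, 11)},
]
metrics = dora_metrics(sample, period_days=2)
```

Medians rather than means are a deliberate choice here: one pathological deploy shouldn't swamp the signal you're using to judge whether AI integration is helping or hurting.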