Continuous delivery is a software development strategy that optimizes your delivery process to get high-quality, valuable software into production as quickly as possible. This approach lets you validate your business hypotheses quickly and then iterate rapidly by testing new ideas on users. Although Continuous Delivery focuses on engineering practices, the concept of continuous delivery has implications for the whole product-delivery process, including the “fuzzy front end” and the design and analysis of features.
Read the rest of this article on InformIT (free, no registration required)
When implementing continuous delivery, it’s easy to focus on automation and tooling, because these are usually the easiest things to start with. However, continuous delivery also relies for its success on optimizing your organizational structure for throughput. One of the biggest barriers to continuous delivery we at ThoughtWorks have seen is teams organized by role or by tier, rather than by business outcome. In this post I’ll address the root cause of this problem and how to overcome it.
Continue reading Organize software delivery around outcomes, not roles: continuous delivery and cross-functional teams
I like to say that feature branches are evil in order to get people’s attention. In reality, however, I lack the determination and confidence to be a zealot. So here is the non-soundbite version.
First, let me say that Mercurial (and more recently Git) has been my workhorse since 2008, and I love distributed version control systems. There are many reasons why I think they represent a huge paradigm shift over existing tools, as discussed in Continuous Delivery (pp393-394). But like all powerful tools, there are many ways you can use them, and not all of them are good. None of my arguments should be construed as attacking DVCS: the practice of feature branching and the use of DVCS are completely orthogonal, and in my opinion, proponents of DVCS do themselves – and the tools – a disservice when they rely on feature branching to sell DVCS.
Continue reading On DVCS, continuous integration, and feature branches
I am delighted to report that Continuous Delivery is being used as the set text for the Agile Engineering Practices course, which forms one of the modules for the Software Engineering MSc at Oxford University.
This is especially sweet for me since I did my BA at Oxford. It’s also where I first got into systems administration: I accidentally reformatted my hard drive, and I couldn’t get hold of a copy of Windows. Instead, I popped down to computing services and picked up a then-new free operating system distribution called Red Hat, so I could write my philosophy essays (using emacs, of course).
Thanks to Dr Robert Chatley (@rchatley), who teaches the course, for letting me know. He says he chose Continuous Delivery since it “gave the best motivation for putting all the technical practices together … [it] gave the big picture of what we were trying to do – minimise the cycle time from idea to delivery, and allow that cycle to be repeated frequently and reliably”. He has a write-up of the course here.
He was kind enough to let me reproduce a picture of the students with their course text. I wish them all the best with their future endeavours.
Translations: 中文 | 한국말
Many development teams are used to making heavy use of branches in version control, and distributed version control systems make this even more convenient. Thus, one of the more controversial statements in Continuous Delivery is that you can’t do continuous integration and use branches: by definition, if you have code sitting on a branch, it isn’t integrated. One common case in which it seems obvious to use branches is when making a large-scale change to your application. However, there is an alternative to using branches: a technique called branch by abstraction.
Branch by abstraction: a pattern for making large-scale changes to your application incrementally on mainline.
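To make the idea concrete, here is a minimal Python sketch of branch by abstraction. The domain (replacing a persistence layer) and all class and function names are purely illustrative, not taken from any real codebase:

```python
class PersistenceLayer:
    """Step 1: introduce an abstraction over the code you want to replace,
    and make all callers go through it."""
    def save(self, record: str) -> str:
        raise NotImplementedError


class LegacyPersistence(PersistenceLayer):
    """Step 2: wrap the existing implementation behind the abstraction."""
    def save(self, record: str) -> str:
        return f"legacy:{record}"


class NewPersistence(PersistenceLayer):
    """Step 3: build the replacement incrementally behind the same
    abstraction, committing to mainline the whole time."""
    def save(self, record: str) -> str:
        return f"new:{record}"


def make_persistence(use_new_implementation: bool) -> PersistenceLayer:
    """A configuration toggle selects which implementation callers get.
    Once the new implementation is complete, the toggle and the legacy
    class (and, optionally, the abstraction itself) are deleted."""
    return NewPersistence() if use_new_implementation else LegacyPersistence()
```

Because callers depend only on the abstraction, the old and new implementations coexist on mainline and every commit stays releasable throughout the migration, with no long-lived branch to merge at the end.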
Continue reading Make Large Scale Changes Incrementally with Branch By Abstraction
I’m often asked what kind of systems continuous delivery is best applied to. While software-as-a-service is the most obvious example of where continuous delivery can be used, keeping systems constantly production-ready from early on in the delivery process, even if you don’t release to users regularly, is beneficial for all kinds of systems. However, the benefits of continuous delivery come at a cost: more intense collaboration, more investment in automation, and more generally the extra effort required to develop software incrementally and deploy it regularly. In fact, the most important criterion for using continuous delivery isn’t the technical nature of the system you deliver, or even the market you work in; it’s whether the system is strategically important to your organization.
Continue reading Strategic vs Utility Services
Many large organizations have heavyweight change management processes that generate lead times of several days or more between asking for a change to be made and having it approved for deployment. This is a significant roadblock for teams trying to implement continuous delivery. Often frameworks like ITIL are blamed for imposing these kinds of burdensome processes.
However, it’s possible to follow ITIL principles and practices in a lightweight way that achieves the goals of effective service management while also enabling rapid, reliable delivery. In this occasional series I’ll be examining how to create such lightweight ITIL implementations. I welcome your feedback and real-life experiences.
I’m starting this series by looking at change management for two reasons. First, it’s often the bottleneck for experienced teams wanting to pursue continuous delivery, because it represents the interface between development teams and the world of operations. Second, it’s the first process in the Service Transition part of the ITIL lifecycle, and it’s nice to tackle these processes in some kind of order.
Continue reading Continuous Delivery and ITIL: Change Management
In our book, Dave and I focused mainly on the principles and technical practices of continuous delivery and its ecosystem: things like automated testing; managing configuration, environments, and data; and implementing a deployment pipeline. One of the things we didn’t spend much time on was the business context and value proposition of continuous delivery. In a way that’s just as well, because the book is already long enough.
However, this context is important, and not just because it helps you convince your boss to implement continuous delivery. One of the big technical memes of the last year has been continuous deployment: the practice of releasing every good version of your software, often multiple times a day. But that practice came out of a business imperative, which finds its general expression in the lean startup movement. In startups, it’s essential to get a minimum viable product in front of users and then iterate rapidly based on real feedback. Sometimes you need to change direction in a more radical way (known as pivoting) if you discover that what you built isn’t valuable.
When building strategic (as opposed to utility) software in a non-startup environment, many of the same imperatives apply. Furthermore, continuous delivery has other important benefits: for example, it reduces the risk of each individual release substantially, and provides a true measure of project progress.
I’ve found myself talking a lot about the value proposition of continuous delivery since I wrote the book, so I thought it would be useful to write it all down. You can get hold of my essay here on InformIT for free. InformIT has also made chapter 5 of my book, which explains the deployment pipeline in some detail, available free of charge.
I was visiting a prospect a few weeks ago when I was delighted to run into Kingsley Hendrickse, a former colleague at ThoughtWorks who left to study martial arts in China. He’s now back in London working as a tester. We were discussing the deployment pipeline pattern, which he somewhat sheepishly informed me he wasn’t a fan of. Of course I took the scientific view that there couldn’t possibly be anything wrong with the theory, and that the problem must be with the implementation.
Kingsley’s problem was that his team had implemented a deployment pipeline such that it was only possible to self-service a new build into his exploratory testing environment once the acceptance tests had been run against it. This typically took an hour or two. So when he found a bug and a developer fixed it, he had to wait ages before he could deploy the build containing the fix into his testing environment to check it.
This problem results from a combination of two anti-patterns that are common when creating a deployment pipeline: insufficient parallelization, and over-constraining your pipeline workflow.
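To illustrate the second anti-pattern, here is a hypothetical Python sketch (not the API of any real CI tool; the environment names and stage names are made up). In the looser workflow, deployment to exploratory testing is gated only on the commit stage, so a tester can self-service a fixed build immediately rather than waiting for the acceptance-test stage to finish:

```python
from dataclasses import dataclass, field


@dataclass
class Build:
    """A candidate release and the pipeline stages it has passed so far."""
    number: int
    stages_passed: set = field(default_factory=set)


# Each environment declares the stages a build must have passed before it
# can be deployed there. An over-constrained workflow would gate every
# environment on "acceptance"; here, exploratory testing needs only the
# fast commit stage, so testers aren't blocked for an hour or two.
REQUIRED_STAGES = {
    "exploratory-testing": {"commit"},
    "staging": {"commit", "acceptance"},
    "production": {"commit", "acceptance", "staging"},
}


def can_deploy_to(build: Build, environment: str) -> bool:
    """A build is deployable when it has passed every stage the target
    environment requires (subset check)."""
    return REQUIRED_STAGES[environment] <= build.stages_passed
```

With this workflow, a build that has only passed the commit stage can go straight into exploratory testing, while staging and production still demand the full set of upstream stages.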
Continue reading Deployment pipeline anti-patterns
I work at ThoughtWorks Studios, where I am product manager for Go (our continuous integration and release management platform, of which more in a future post), and a general big mouth about build and release management. As part of the launch of the book and of Go 2.0, we’ve created some collateral you might find useful – a white paper on release management and a build and release management assessment.
Continue reading Release Management White Paper and Assessment