
Connecting Feature Flags and Dora Metrics

Mark Allen

7/16/2023

Intro to DORA Metrics

Measuring the success and impact of new development processes is crucial to ensuring your team is always well-equipped with tools and processes that actually drive growth. Any team looking to adopt feature flags in its workflow should also adopt a way to properly measure their success and effectiveness. DORA metrics, specifically Deployment Frequency and Lead Time for Change, are two criteria that engineering managers and team leads can use both to motivate their team members to use feature flags and to provide rationale to their organizations for why feature flags are effective.

Even for teams not yet using DORA metrics, evaluating your processes against these two criteria is a great way to get started. Deployment Frequency is easy to automate and measure: the CI/CD pipeline can record each completed deployment to production. Lead Time for Change can be calculated from source control, such as git, by measuring the time from when a change was started to when it was pushed to production.
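
As a concrete illustration, the sketch below computes lead time for change for a single feature branch. It assumes git is available locally and that the CI/CD pipeline records the production deployment time; the branch name and timestamp in the example are made up.

```python
# Minimal sketch: lead time for change for one feature branch.
# Assumes git is available locally and the CI/CD pipeline records the
# production deployment time (passed in here as an ISO-8601 string).
import subprocess
from datetime import datetime

def first_commit_time(branch: str, base: str = "main") -> datetime:
    """Author time of the first commit unique to the feature branch."""
    out = subprocess.run(
        ["git", "log", f"{base}..{branch}", "--reverse", "--format=%aI"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return datetime.fromisoformat(out[0])

def lead_time_for_change(branch: str, deployed_at_iso: str) -> float:
    """Hours from the branch's first commit to its production deployment."""
    deployed_at = datetime.fromisoformat(deployed_at_iso)
    return (deployed_at - first_commit_time(branch)).total_seconds() / 3600

# Hypothetical usage:
# lead_time_for_change("feature/new-checkout", "2023-03-16T14:05:00+00:00")
```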

For a mature software engineering team with a continuous integration system already in place, feature flags can still increase deployment frequency, move the team toward continuous deployment, and improve application stability. Ultimately, feature flags and DORA metrics are a powerful combination that can help even the most mature engineering teams reach elite status.

Measuring the Impact of Feature Flags with DORA Metrics

Feature flags can be easily added to most applications. Once a feature flag service or SDK is enabled in the application, code can be introduced to evaluate the flags for specific users and return an experience based on each flag's value. With a new feature hidden behind a feature flag, developers can merge the feature into trunk and deploy it to production without concern that the new feature will impact the application.
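
The sketch below shows what that evaluation can look like in application code. It is not tied to any particular feature flag vendor; the FlagClient class, the new-checkout-flow key, and the render functions are hypothetical stand-ins for whatever SDK and view layer your application uses.

```python
# Minimal sketch of gating a new feature behind a flag. FlagClient is a toy
# in-memory stand-in for a real feature flag SDK.
class FlagClient:
    """Toy in-memory flag store; a real SDK would call a flag service."""
    def __init__(self, flags: dict):
        self._flags = flags

    def is_enabled(self, key: str, user_id: str, default: bool = False) -> bool:
        # A real client could target by user, cohort, or percentage rollout.
        return self._flags.get(key, default)

def render_checkout(user_id: str, flags: FlagClient) -> str:
    # Evaluate the flag for this specific user and return the matching experience.
    if flags.is_enabled("new-checkout-flow", user_id=user_id):
        return "new checkout experience"
    return "legacy checkout experience"

flags = FlagClient({"new-checkout-flow": True})
print(render_checkout("user-123", flags))  # -> "new checkout experience"
```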

Thus, feature flags reduce lead time for change, as features that are complete (but not quite ready for release) can be deployed to production sooner. They also increase deployment frequency, as small, individual changes can be deployed more often. (Gone are the days of waiting until the end of the month for a massive feature release, only to find numerous bugs and poor performance.)

Additionally, feature flags reduce change failure rate by allowing developers to show new features to product managers, stakeholders and beta customers to get critical feedback about the feature’s stability and performance.

Lastly, feature flags greatly reduce mean time to recovery by enabling dev teams to test in production, or roll out a feature to a small cohort of users to be sure it works before releasing it to their entire user base. If they identify an issue with the feature, developers can simply roll the feature back or turn it off completely while they fix the code, before a large portion of their users see it.
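
One common way to implement such a cohort rollout is to hash each user into a stable bucket and compare it against a rollout percentage, with an enabled switch acting as the kill switch. The sketch below illustrates the idea; the flag settings and hashing scheme are illustrative, not a specific vendor's implementation.

```python
# Minimal sketch of a percentage rollout with a kill switch. Hashing the
# user id gives each user a stable bucket, so the same users stay in the
# rollout cohort across requests. The flag settings are hypothetical.
import hashlib

ROLLOUT = {"new-checkout-flow": {"enabled": True, "percentage": 10}}

def in_rollout(flag_key: str, user_id: str) -> bool:
    settings = ROLLOUT.get(flag_key)
    if not settings or not settings["enabled"]:  # kill switch: set enabled to False
        return False
    bucket = int(hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < settings["percentage"]

# Roughly 10% of users see the feature; flipping "enabled" to False rolls it back instantly.
```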

Results

The charts below provide a visualization of the impact of using feature flags on both lead time for change and deployment frequency. (In this scenario, feature flags were implemented during the week of 3/09.)

Lead time for change was derived from git and the CI/CD pipeline, measured from the first commit on a feature branch to the time that branch was deployed to production.

Deployment frequency comes directly from the CI/CD pipeline, which logs the time of each successful deployment to production.
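
A minimal sketch of that aggregation, assuming the pipeline writes one ISO-8601 timestamp per successful production deployment, might group deployments by ISO week:

```python
# Minimal sketch: weekly deployment frequency from a CI/CD deployment log.
# Assumes one ISO-8601 timestamp per successful production deployment.
from collections import Counter
from datetime import datetime

def deployments_per_week(deploy_times_iso: list) -> Counter:
    weeks = Counter()
    for ts in deploy_times_iso:
        year, week, _ = datetime.fromisoformat(ts).isocalendar()
        weeks[f"{year}-W{week:02d}"] += 1
    return weeks

# Hypothetical log entries:
log = ["2023-03-06T10:12:00", "2023-03-09T16:40:00", "2023-03-10T09:05:00"]
print(deployments_per_week(log))  # Counter({'2023-W10': 3})
```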

Ultimately, both graphs show a positive impact on the two metrics after feature flags were implemented. This is an excellent start, but DORA metrics should be continuously reviewed and evaluated to ensure that feature flags keep having a positive impact over time.
