Mobile Development vs. Operations: The Battle of Competing CI/CD Incentives
Mobile Devs want speed. Ops wants stability. How can they work together? Learn what's behind the conflict and how both sides can effectively reach their goals.
The traditional mobile application development lifecycle often plays out as a battle between mobile application developers and system administrators. The two groups are at odds because their incentives compete. Developers set out to complete new features and push them out the door as quickly as possible; speed is the name of the game. Operations professionals, on the other hand, seek to minimize or slow down code pushes to production in favor of stability. The lack of alignment in key performance indicators (KPIs) between these teams too often creates an us-versus-them mentality rather than a culture of collaboration and shared success. In this article, we’ll dive deeper into how these incentives compete with one another, and what development and operations teams can do to improve alignment and satisfy the objectives of both sides.
Taking a look at specific developer performance metrics, one will often see data around lead time, cycle time, team velocity, sprint burndown, and release burndown. Lead time measures the turnaround from idea to delivered feature. Closely related is cycle time: the duration from when a task is pulled from the backlog and moved to “in progress” until it is marked “done.” Naturally, developers aim to churn through tasks quickly to speed up cycle time and reduce lead time, keeping the feature backlog low. This leads to another key metric: team velocity.
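A minimal sketch of how these two metrics differ, using invented timestamps for a single hypothetical task (real numbers would come from your issue tracker):

```python
from datetime import datetime

# Hypothetical lifecycle timestamps for one task (illustrative data only)
task = {
    "idea_logged": datetime(2023, 5, 1, 9, 0),   # feature first requested
    "in_progress": datetime(2023, 5, 8, 10, 0),  # pulled from the backlog
    "done": datetime(2023, 5, 11, 16, 0),        # marked complete
}

# Lead time: idea to delivery; cycle time: work started to work finished
lead_time = task["done"] - task["idea_logged"]
cycle_time = task["done"] - task["in_progress"]

print(f"lead time:  {lead_time.days} days")   # 10 days
print(f"cycle time: {cycle_time.days} days")  # 3 days
```

The gap between the two numbers is time spent waiting in the backlog, which is exactly what developers try to squeeze out.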
Team velocity measures the number of software units—tasks, features, story points, bugs, etc.—that a team completes in a given sprint (a defined period of development time). Velocity is important because it lets teams forecast with better accuracy. It can also be a controversial measurement: teams are often compared on velocity, and a higher number is misread as one team being more capable than another, when it usually just reflects differences in how each team scores its work.
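To show why velocity enables forecasting, here is a rough sketch (all sprint and backlog numbers are invented): velocity is an average of completed work per sprint, and a forecast is the remaining backlog divided by that average:

```python
import math

# Story points completed over the last five sprints (invented numbers)
completed_per_sprint = [21, 18, 24, 20, 22]

# Velocity: average units of work completed per sprint
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

# Forecast: how many sprints the remaining release backlog should take
backlog_points = 105  # hypothetical remaining scope
sprints_remaining = math.ceil(backlog_points / velocity)

print(f"velocity: {velocity:.1f} points/sprint")           # 21.0
print(f"forecast: about {sprints_remaining} sprints left")  # 5
```

Because story points are scored differently by every team, the forecast is only meaningful within a single team, which is the pitfall the paragraph above warns about.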
Two additional, and related, developer KPIs are sprint burndown and release burndown. Sprint burndown measures how much work is actually completed during a sprint. Release burndown takes a broader scope than a specific sprint to gauge whether the work set aside for a particular release is on or behind schedule. In both cases, developers want to see a steep downward slope, if possible.
From the operations perspective, the most common metrics are mean time to resolve/recover (MTTR), mean time between failures (MTBF), and mean time to failure (MTTF). MTTR is the time it takes to diagnose a failure and complete the repair: in essence, the time to resolve a service ticket. Ideally this window is short and such tickets are infrequent. MTBF measures the average time between one failure and the next; operations staff naturally aim to keep it high. MTTF, similarly, measures the average uptime from the moment one issue is resolved until the next failure occurs. Again, this is a metric operations teams want as high as possible.
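All three metrics fall out of a simple incident log. A minimal sketch with fabricated failure and recovery timestamps (real operations tooling would derive these from monitoring data):

```python
from datetime import datetime, timedelta

# Each incident: (failure start, service restored). Fabricated timestamps.
incidents = [
    (datetime(2023, 6, 1, 2, 0),  datetime(2023, 6, 1, 4, 0)),   # 2-hour outage
    (datetime(2023, 6, 11, 2, 0), datetime(2023, 6, 11, 3, 0)),  # 1-hour outage
    (datetime(2023, 6, 21, 2, 0), datetime(2023, 6, 21, 5, 0)),  # 3-hour outage
]

def hours(delta: timedelta) -> float:
    return delta.total_seconds() / 3600

# MTTR: average time from failure to recovery
mttr = sum(hours(up - down) for down, up in incidents) / len(incidents)

# MTTF: average uptime from one recovery to the next failure
mttf = sum(
    hours(incidents[i + 1][0] - incidents[i][1]) for i in range(len(incidents) - 1)
) / (len(incidents) - 1)

# MTBF: average time between the starts of consecutive failures
mtbf = sum(
    hours(incidents[i + 1][0] - incidents[i][0]) for i in range(len(incidents) - 1)
) / (len(incidents) - 1)

print(f"MTTR: {mttr:.1f}h, MTTF: {mttf:.1f}h, MTBF: {mtbf:.1f}h")
```

Note that MTBF is approximately MTTF plus MTTR: the full failure-to-failure interval is the uptime plus the repair window, which is why shrinking MTTR and stretching MTTF both move MTBF in the right direction.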
Of these measurements, only MTTR is one operations works to drive down, and even that depends heavily on the nature of each failure. The other two reward stability, which explains why operations may push back on the rapid-release practices of the agile process most teams follow today: reducing the rate of releases naturally raises both MTBF and MTTF, because fewer changes reach production and fewer issues need solving.
KPIs, at their core, are tools to push individuals, teams, and businesses in a certain direction. As the saying goes, “what gets measured gets managed.” They also signal what’s most important to the business. Looking closely at the KPIs for development and operations teams, the common theme is time. The problem is that development teams aim to shrink their time metrics, while operations teams look to stretch theirs.
So development and operations teams find themselves at odds. How can each hit its objectives when their goals seemingly contradict? Mobile application development makes the conflict even more visible. The development team is moving fast and pushing commits, but when code is merged and it’s time to cut a release, the build process becomes drastically more complex. It’s not as simple as moving a minified build onto production servers. The build must run on specific hardware: iOS binaries, for instance, can only be compiled on macOS machines. The resulting native app binaries need to be properly code-signed, which means maintaining certificates, provisioning profiles, and keystores. The finished build artifacts then need to be distributed to testers or to the app stores for end-user consumption. These steps are tedious, and not ones operations teams want to handle constantly.
None of this is to say that these teams go around pointing fingers or intentionally hold each other back from achieving their goals. But it goes to show that without the proper process in place, the dynamic veers from an interwoven, collaborative effort toward siloed teams with a touch of tension.
Today, mobile application developers and system administrators have found a way to bridge this gap: DevOps. DevOps tools and practices aim to achieve both objectives, delivering high-quality releases at a faster pace. At its core, DevOps shifts the mindset from individual teams hitting their own KPIs to a shared team and culture with one goal: shipping great products.
In the context of mobile application development, DevOps practices include:
- Automated code review and unit testing as code is checked in
- Managed build infrastructure as a service, so Ops teams don’t have to maintain it themselves
- Code-signed native binary builds without manual intervention
- App bundle deployment to testers and app stores
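In practice, these steps often live in a single CI pipeline definition. The following is a hypothetical GitHub Actions-style sketch, not a prescribed setup: the job names, the `fastlane beta` lane, and the secret name are all invented for illustration, and a managed service such as Appflow replaces the need to maintain much of this yourself:

```yaml
# Hypothetical mobile CI workflow (illustrative names throughout)
name: mobile-ci
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test         # automated unit tests on every check-in
  build-and-sign:
    needs: test                         # broken code never reaches the build phase
    runs-on: macos-latest               # iOS builds require macOS hardware
    steps:
      - uses: actions/checkout@v4
      - run: bundle exec fastlane beta  # invented lane: archive, code-sign, upload to testers
        env:
          MATCH_PASSWORD: ${{ secrets.MATCH_PASSWORD }}  # signing credentials kept out of source
```

The point of the sketch is the shape, not the specifics: tests gate the build, the build runs on managed macOS infrastructure, and signing secrets never touch a developer's laptop.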
As development and operations work more closely together, releases to production become cleaner and the feedback loop gets faster. Sound development practices such as code review, unit testing, and continuous monitoring take hold, and shared tooling adds agility and improves communication. System administrators are happy when broken code never reaches the build phase; that safeguard strengthens the operations team’s KPIs. Mobile app developers are happy when tools take their passing code, spin up cloud-based virtual machines to build the native binaries on the fly, and get new features into users’ hands as fast as possible, keeping their KPIs in good standing. Both teams can agree that a DevOps process that eliminates the everyday burden of maintaining hardware, properly signing apps, distributing builds, and all of the nuances in between leaves them better off. And of course, the real winner of this culture shift is the customer.
The transition of a business culture from where it may sit today to one that fully embraces DevOps isn’t a long shot. Mobile application developers and system administrators are likely using a subset of tools that they need already. The key, however, is finding a platform that takes these disparate tools and brings them together under one roof, and provides its own solutions to the problems these teams face every day. In the case of mobile application development, with its own set of unique challenges it brings to the table, teams turn to Appflow for their Mobile DevOps needs.