We have witnessed a drastic, technology-led transformation over the last few years. New technologies, social platforms, and consumer-facing products are introduced every few days. This has intensified competition and pushed brands to go the extra mile to improve the customer experience. Most brands have started focusing on Agile methodologies and continuous delivery to improve time to market and reach customers before competitors can even envision such ideas.
Companies have started leveraging DevOps to bridge siloed structures and improve overall productivity with shorter release cycles. According to the 2017 State of DevOps Report, high-performing organizations that effectively apply DevOps principles have 440 times faster lead time for changes, along with higher levels of customer satisfaction and operational efficiency.
To leverage DevOps successfully, it is imperative that the development and operations teams function cohesively and collaboratively. A detailed roadmap and assessment framework can help companies kick off their DevOps journey on the right foot. The two main components that companies should look at are:
- A pipeline assessment to study the code lifecycle from the developer’s desk to production and estimate waste
- An assessment of continuous delivery, covering build management practices, release management, environment and deployment structures, and so on.
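To make the first assessment concrete, here is a minimal sketch of estimating lead time and the waiting "waste" between pipeline stages for a single change. The stage names and timestamps are hypothetical, and real assessments would pull these events from a CI system or version control rather than hard-coding them:

```python
from datetime import datetime

# Hypothetical timestamps for one change moving through the pipeline.
stages = [
    ("commit",  datetime(2024, 1, 1,  9,  0)),
    ("build",   datetime(2024, 1, 1, 11, 30)),
    ("test",    datetime(2024, 1, 2, 10,  0)),
    ("release", datetime(2024, 1, 5, 15,  0)),
]

def lead_time_hours(stages):
    """Total elapsed time from first to last stage, in hours."""
    return (stages[-1][1] - stages[0][1]).total_seconds() / 3600

def wait_times(stages):
    """Hours spent between consecutive stages -- candidate waste."""
    return {
        f"{a[0]} -> {b[0]}": (b[1] - a[1]).total_seconds() / 3600
        for a, b in zip(stages, stages[1:])
    }

print(f"lead time: {lead_time_hours(stages):.1f} h")
for hop, hours in wait_times(stages).items():
    print(f"  {hop}: {hours:.1f} h")
```

Even a rough breakdown like this tends to show that most of the lead time is spent waiting between stages rather than working, which is where a DevOps assessment focuses first.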
While these are some of the factors companies should consider, outlined below are some of the misconceptions that companies should avoid while leveraging DevOps.
1. DevOps is all about Automation
Companies often assume that once they have invested in tools and automated their processes, their DevOps strategy is bound to succeed. However, automation alone will leave communication and process gaps between the Development and Operations teams. The collaborative culture driven by people is the primary success factor of DevOps, and it is what carries an effective strategy over the finish line.
2. DevOps doesn’t work with ITIL models
In a traditional setup, the IT Infrastructure Library (ITIL) helps an organization develop resilient, stable, and controlled IT operations through a set of rigid best practices. DevOps is often viewed as the opposite of ITIL because it aims to break down the Ops silos and deliver working software continuously. However, this is not really true. ITIL also features lifecycle management principles that align with the end goal of DevOps, and its guidelines have evolved to give companies flexibility in implementation, making the two far easier to combine.
3. DevOps eliminates the traditional way of working
DevOps strives to tie Dev- and Ops-centric attributes together. It doesn’t eliminate trusted development and build/release processes or radically revamp the traditional way of working. DevOps aims at continuous improvement and is best seen as an evolution of pre-existing Agile IT disciplines. While DevOps-focused companies do pursue new roles such as DevOps engineers, they still need specialized Dev and Ops people; DevOps doesn’t eliminate traditional roles but blends them together with a set of tools to automate the pipeline.
Beyond these misconceptions, and as more companies leverage DevOps as a service, we would also like to bring to the table some real-world DevOps failures and the key takeaways from them.
Real-world DevOps Failures & Lessons
1. SlideShare Failure – Accessibility over Awareness
Back in 2012, when SlideShare was a startup with fewer than 20 people in its workforce, it tried to implement a DevOps model to speed up processes and stay ahead of the competition. The aim of adopting DevOps practices was to create a more interconnected team by overcoming geographic barriers for maximum efficiency, and to spread technical knowledge across teams so that the unavailability of any one person would have limited impact. The development team was split geographically between San Francisco and New Delhi, with a complex infrastructure. In the spirit of promoting a greater sense of ownership, developers were given access to the production servers and production databases.
One software engineer, working on a database-related project, tried out a tool for exploring a MySQL database graphically and reordered the columns in the tool to aid his own understanding. He was unaware that the tool was simultaneously reordering the columns on the actual production database, locking it in the process. This brought down slideshare.net, shutting out the more than 60,000 users trying to access it during that short period.
Key Takeaways from Slideshare
- The first takeaway is to scrutinize which accesses are given to which people and to weigh how valuable each access really is. In this database outage, giving developers access to production databases was more dangerous than useful. The developer could have made the same changes, and extracted the same value, in a staging database without such a massive impact on the company.
- When practicing DevOps, awareness and education about how the infrastructure functions should come before developers are exposed to production, as a first step embedded in the onboarding process of any organization. Not everyone is aware of the fundamentals or of the impact they can have on the project or the organization as a whole.
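One common way to implement that least-privilege lesson is to separate database credentials by environment. The fragment below is an illustrative MySQL configuration sketch; the user, host, and schema names are hypothetical, not SlideShare's actual setup:

```sql
-- Developers can read, but not alter, the production schema.
CREATE USER 'dev_ro'@'%' IDENTIFIED BY '********';
GRANT SELECT ON production_db.* TO 'dev_ro'@'%';

-- Experimentation and schema changes stay confined to staging.
CREATE USER 'dev_rw'@'%' IDENTIFIED BY '********';
GRANT SELECT, INSERT, UPDATE, DELETE, ALTER ON staging_db.* TO 'dev_rw'@'%';
```

With grants like these, a GUI tool connected with the read-only account could not have reordered columns on production, whatever the developer clicked.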
2. IBM Failure – Imbalance between Employees and Processes
When IBM began its journey into DevOps back in 2003, the term wasn’t even in industry use yet. They attempted to integrate Agile methodology into their practices to speed up software releases for their customers. It seemed like an impressive answer to their release woes; in implementation, however, it failed to achieve what it should have. While development picked up the pace, operations could not catch up, were slow to respond, and delayed software releases.
To counter this, IBM began to automate code deployment in addition to adhering to the agile methodology, but still could not deliver the desired results and faster release cycles to clients. After analyzing the situation and taking DevOps consulting from experts, they discovered that the hurdle lay neither in code automation nor in the agile methodology. It was the development and operations environments themselves that obstructed faster product releases: there was a lag between the development and operational sides of the project, and even after implementing Agile and code automation, teams could not see their processes end to end, from initiation to completion. It took IBM a great deal of effort to learn where the gaps were emerging from, to make critical iterations to its workflows, and to implement DevOps in a way that truly reduced development time and brought the company to where it is today.
Key Takeaways from IBM
Design workflows keeping in mind the pace of both processes and people
DevOps, often leveraged to reduce time to delivery and time to market, needs the development and operations environments to move at the same pace. While tools and automation can assist developers and operators significantly, the human element remains imperative to the success of DevOps and must be kept in balance. Workflows should be designed and defined to strike a balance between process and people.
3. A U.S. Government Agency’s Failure
A U.S. government agency onboarded a vendor and outsourced its entire DevOps project. The Application Lifecycle Management (ALM) vendor leveraged DevOps for deployment and configuration, using open-source tools and software with a collaborative approach.
The Failure – Forgetting about people and process
The project had strict, fixed deadlines and involved multiple teams. A robust, scalable platform was in place. However, the processes were not streamlined: because the initiative failed to win buy-in from the stakeholders and developers involved, commits, integration, and merges were routinely deferred. It also meant that actual testing in production-like environments or with real users was never performed, and broken builds were a common phenomenon. In reality, the platform was merely supporting old legacy practices without the integrated approach to people and processes that forms the core of a DevOps strategy.
Once the web application was released, it suffered slow response times in addition to several critical, public failures. Since it had not been tested in a production-like environment for the errors that could arise, extra development cycles were needed to get it up and running and to recover from the initial failures.
Even though fixing the issues spanned a few months, the agency learned the hard way that collaborative practices, people, and processes are the fundamentals of a successful DevOps strategy. Integrating people and processes into their implementation was a multifaceted effort that stretched on for several more months before the web application became fast and user-friendly.
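Part of what was missing can be sketched as a delivery pipeline in which integration, testing, and a production-like staging run gate every release. The fragment below is a hypothetical Jenkins declarative pipeline, offered only as an illustration; the stage names and shell scripts are assumptions, not the agency's actual setup:

```groovy
pipeline {
    agent any
    stages {
        // Every commit is integrated and built -- no deferred merges.
        stage('Build') { steps { sh 'make build' } }
        // Broken builds are caught here, not in production.
        stage('Test') { steps { sh 'make test' } }
        // Exercise the release in a production-like environment.
        stage('Staging') {
            steps { sh './deploy.sh staging && ./smoke-test.sh staging' }
        }
        // Only changes that pass every gate reach real users.
        stage('Deploy') {
            when { branch 'main' }
            steps { sh './deploy.sh production' }
        }
    }
}
```

The tooling matters less than the discipline it encodes: a pipeline like this only helps if developers actually commit and merge continuously, which is exactly the buy-in the agency failed to secure.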
With the number of projects and people engaged in DevOps increasing every year, there is no doubt that it is yielding remarkable results for organizations across the world. It is quickly evolving into a mainstream operating model that accelerates software delivery cycles. However, how cohesively and collaboratively the DevOps principles are integrated rests entirely in the hands of your organization.
Building a winning DevOps strategy is critical. Leveraging DevOps is all about using the right tools, streamlining processes, and training people to adapt and handle change efficiently. Companies that are new to DevOps should consider outsourcing to a vendor who can provide comprehensive DevOps consulting and audits. Having a good partner in place helps in leveraging the right DevOps toolchain, such as Jenkins, Docker, Puppet, Chef, Ansible, and many others. A good DevOps outsourcing partner will also add a lot of value with problem management, release management, change management, and overall infrastructure monitoring.