DevOps is a set of practices, tools, and cultural philosophies that integrates software development and IT operations into a common role, with the purpose of improving a company’s ability to deliver software at a high velocity.
Instead of IT operations and software development being siloed off from each other, DevOps breaks down the traditional boundaries that previously existed between them in order to achieve Continuous Integration and Continuous Delivery (CI/CD) of quality software features and applications to end-users.
DevOps started gaining wide acceptance soon after it was conceived. For example, RightScale’s 2016 “State of the Cloud Report” estimated that about 70% of small and medium businesses (SMBs) had already adopted the practice.
On a basic level, the meaning of DevOps can easily be intuited from the two-word combination from which it derives its name: the practice of fusing software development (Dev) with information technology operations (Ops).
The Ops part encompasses IT professionals like system administrators, system engineers, network engineers, release engineers, DBAs, and operations staff.
However, anyone familiar with relatively new technological terms will appreciate how challenging it can be to pin down exactly what DevOps constitutes, and how to put its methodology into practice.
DevOps is mainly concerned with providing end-users with software applications faster by decreasing the failure rate of builds (releases). DevOps also emphasizes the tools that are needed to achieve this faster turnaround with measurable quality control.
While it isn’t always the practice, it is advisable to tightly integrate security and quality assurance teams within DevOps. DevOps therefore encompasses the integration of all the facets of an organization required to make this regimen work.
In instances where security is the main focus and preoccupation of everyone on the DevOps team, the arrangement is often referred to as a DevSecOps team instead.
At its very core, DevOps is devoted to two primary tenets: improved collaboration and operational efficiency. It is an innovation aimed at delivering high-quality software products to the end-user much faster.
DevOps is as much of a cultural practice as it is a technological optimization, based on applying Agile methods and lean manufacturing principles.
More than just a new technology catchphrase or gimmick, the DevOps approach requires an organization to buy-in to a culture of cross-business effort, with stakeholders collaborating to produce better quality code that is more secure through shorter release cycles supported by a prevalent regimen of testing; preferably, automated testing.
At the end of this process, a build pipeline infrastructure releases the code for delivery or deployment (automatically) to end-users.
Automation is so central to DevOps that a successful and functional DevOps implementation isn’t possible without addressing the need for automation. Good DevOps teams automate processes that have historically been manual, and thus, slow and cumbersome to execute.
These bottlenecks usually fall within the areas of testing, creating builds, and deploying code changes to production.
Coupled with Agile methodology, shortening the software development life cycle requires DevOps practices such as continuous integration, continuous delivery, and automated testing.
One of the most important benefits of DevOps, and by extension the Agile methodology, is the cultural aspect of its adoption.
One major change in practice is that operations personnel now use the same tools and techniques as developers for their work, such as automated testing and source control repositories, and participate in Agile meetings.
DevOps demands a culture of collaboration that provides everyone (stakeholders involved in the project) a seat at the table. And with this inclusion comes an expanded understanding of the value of teamwork, a shared sense of ownership, and cross-pollination of ideas.
DevOps has become so ubiquitous in software engineering circles that you’d be forgiven for not realizing the term didn’t exist until 2009. The distinction of coining the term belongs to Patrick Debois, an independent IT consultant and founder of DevOpsDays.
As is usually the case with historical breakthroughs, a confluence of factors came together to breed the storm of events from which the DevOps movement was born. Though not as intimately linked in the public imagination as it ought to be, DevOps owes a large debt to the lean manufacturing concept.
Lean manufacturing first established its foothold as the Toyota Production System, which strove for process optimization throughout the manufacturing floor.
Its mantra of continuous improvement, which DevOps ultimately borrowed, leads practitioners to continually investigate and evaluate ways to optimize processes and eliminate waste.
A crucial metric of lean manufacturing is the “order to ship” time, which parallels the Continuous Integration/Continuous Delivery (CI/CD) ethos embraced by DevOps.
Lean manufacturing, with the aid of automation, is able to re-engineer processes to produce goods as fast as possible.
DevOps absorbs this mindset through its test-early, test-often strategy, underpinned by automation to accelerate and sustain the continuous release of software to end-users.
The waterfall method, a precursor to DevOps, held sway for a while; but its linear, sequential method of software development didn’t match the reality on the ground.
It eventually gave way to the iterative Agile methodology, which is better suited to the constantly reevaluating, mutually reinforcing nature of modern software development.
In 2007, Patrick Debois helped to crystallize the challenges that existed pre-DevOps after he became frustrated with conflicts between developers and system administrators while working for the Belgium government on a data center migration.
This compelled Debois to begin pondering a solution.
In August 2008, an Agile Conference was held in Toronto where a software developer named Andrew Shafer convened an Agile Architecture session attended by a grand total of one person: yeah, you guessed it, Patrick Debois.
As fate would have it, Shafer initially had the room to himself; presuming no one was interested in the topic, he left his own session. Debois later found him in the hallway, and a productive discussion ensued in which they agreed to form the Agile Systems Administration group.
In June 2009, when the O’Reilly Velocity 09 conference was held, a now-famous talk titled 10 Deploys a Day: Dev and Ops Cooperation at Flickr was given by John Allspaw and Paul Hammond.
On Twitter, Debois lamented that he was unable to attend the conference. Paul Nasrat then challenged him to organize his own Velocity-style event in Belgium.
Debois obliged, calling his October 2009 conference DevOpsDays.
After the conference, discussions surrounding it moved to Twitter. To make the hashtag easy to remember, Debois shortened it to #DevOps. Thus, a name and a movement were born.
There is a range of benefits that organizations can reap by utilizing DevOps.
The two common factors underlying all of these benefits are the impetus to drive collaboration between key stakeholders and the implementation of some continuous activity, such as testing, integration, development, monitoring, or deployment.
The challenge for organizations with IT systems is the tension and mismatch between two important teams: software developers, who are under more pressure than ever to deliver a constant stream of innovation and features to end-users, and IT operations teams, tasked with maintaining increasingly byzantine infrastructure.
However, to many, DevOps remains a largely abstract concept. Unlike agile, there is no ultimate manifesto guiding its practices and objectives.
Even so, without an explicitly defined model and universally agreed-upon goals, we can still focus on the key outcomes most organizations that apply DevOps hope to gain.
DevOps extends beyond the tools and best practices needed to accomplish its implementation. The successful introduction of DevOps demands a change in culture and mindset. Organizations would have to eliminate the barriers separating development and operations teams.
In place of these silos, organizations have to steer teams toward collaboration, communication, and optimization, increasing both the reliability of operations and the productivity of developers.
This culture embraces full ownership of services, with all teams viewing the entire development and infrastructure lifecycle as part of their responsibilities, regardless of their stated roles or titles.
Speed is the rallying cry of DevOps practices. The demand to meet and adapt to the changing needs of the market, customers, and business objectives means the organization's development process and release capabilities need to be extremely nimble and fast.
DevOps processes mitigate or remove the risk of failure, thereby creating an environment where developers feel safe to innovate due to the presence of automated testing, early warning systems of failure, and the ability to immediately restore the last known good system state, in the event of failure.
DevOps’ frequent but small updates have the effect of making deployment less risky. This is because bugs are discovered more easily by identifying the last deployment that caused the error, and subsequently patched more quickly.
The benefit of DevOps practices such as Continuous Delivery is that it provides the capability of having software in an always releasable state. So, when an application or feature is ready to go live, it can go live.
DevOps removes silos between departments and work groups for better synergy of efforts and collaboration. It provides stakeholders full visibility of the incoming software development pipeline so they are fully engaged with the innovation process.
Optimizing the planning and delivery of IT projects to reduce wastage with the aim of eliminating defects.
Since Agile methodologies enable stakeholders to both prioritize features and track them from inception through the delivery pipeline, it is possible to measure expected value against the actual value received over time for the feature.
Constant improvement of an organization’s system of production and service due to a constant feedback loop, QA, and testing. With this improved quality comes an attendant decrease in costs.
Reduce the failure rate of new releases: with automated testing integrated early and often in the development process, defects are quicker to detect and easier to fix.
Continuous Delivery increases the frequency of code deployment with smaller, faster, and more frequent releases. This reduces the time it takes to bring products to the marketplace while shortening the time to recovery.
While DevOps is scaffolded by the Agile methodology, and borrows heavily from the principles of Lean manufacturing, it still relies on a technical set of operations that requires the right tools and techniques to achieve organizational objectives.
In order for a DevOps team to be able to perform their duties adequately, they require a DevOps toolset.
Some of these tools let DevOps engineers independently accomplish tasks (provisioning infrastructure, deploying code changes) that would previously have required the help of other team members.
The effect is that the team’s velocity of application delivery is drastically enhanced.
Docker is at the center of the containerization trend. Containerization bundles an application together with its related libraries, dependencies, and configuration files so that it can run efficiently across multiple, different computing environments.
Another popular tool in the container ecosystem is Kubernetes, which orchestrates containers at scale. The genius of systems like Docker is that they make the secure packaging, deploying, and running of applications easy to accomplish, regardless of the environment.
As the name implies, containerization provides a self-contained environment where every application has its own source code, runtime, supporting files, configuration files, and so on, allowing applications to execute reliably even in remote environments.
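As a concrete illustration, a minimal Dockerfile can bundle a hypothetical Python service with its dependencies; the file names, base image, and port below are invented for the example, not taken from the text:

```dockerfile
# Sketch only: containerize a hypothetical Python web service.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # bundle dependencies
COPY . .                                             # bundle source code
EXPOSE 8000
CMD ["python", "app.py"]
```

Built once with `docker build`, the resulting image runs the same way on a laptop, a CI server, or production.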
Vagrant helps DevOps teams manage their development environments. It ensures that the environment for a project is consistent across the machines of all developers working on it, allowing them to test applications faster without the hassle of setting up configurations by hand.
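A minimal Vagrantfile sketch shows how one definition yields the same virtual machine for every developer; the box name and provisioning commands here are illustrative assumptions:

```ruby
# Sketch only: one consistent dev VM for every team member.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"                      # same base image for all
  config.vm.network "forwarded_port", guest: 8000, host: 8000
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y python3-pip                      # assumed project needs
  SHELL
end
```

Running `vagrant up` on any machine then produces an identical environment.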
If there were a ranking of DevOps tools in order of importance, GitHub would be among the top due to the way it facilitates easy collaboration.
Built on the Git version control system, it allows developers to make quick iterations to the codebase while sending instant notifications to alert other team members of the changes that have been made.
In the event of errors, it allows quick and easy rollback of the codebase to any previous state in a matter of seconds, because Git maintains a complete, branched history of every change made to the repository.
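The rollback workflow described above can be sketched with plain Git commands; the repository, file, and user names are invented for the example:

```shell
# Sketch: commit a known-good state, make a risky change, then roll back.
set -e
git init -q demo-repo && cd demo-repo
git config user.email "dev@example.com"
git config user.name "Dev"

echo "stable" > app.txt
git add app.txt && git commit -qm "known-good release"

echo "buggy" > app.txt
git commit -qam "risky change"

# Roll back to the last known-good state in seconds
git revert -n HEAD && git commit -qm "revert risky change"
cd ..
```

After the revert, `app.txt` contains the known-good content again, and the full history of both changes is preserved.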
Bitbucket, created by Atlassian, is a version control code repository in the mold of GitHub. It has good project management capabilities and offers private repositories at a lower cost than GitHub.
Added bonuses include easy integration with other DevOps tools such as Jira and Trello, plus built-in CI/CD functionality.
Nagios is an open source tool that comes in handy as a network analyzer, performing all manner of network, server, and application monitoring. This is helpful for companies with large fleets of routers, switches, and servers.
Nagios will issue alerts if there is any form of malfunction or failure of any device. It also provides a performance chart that enables system administrators to monitor system trends.
Jenkins is an open source automation server that distributes a project’s workload across multiple machines and platforms. It acts as a continuous delivery hub, helping projects automate building, testing, and deploying their services.
If your organization needs a simple yet effective way of configuring and orchestrating IT assets, Ansible is your answer. It is a configuration management tool loaded with features, yet it doesn’t hog a device’s resources in the background.
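A minimal Ansible playbook illustrates the style: the desired state is declared in YAML, and Ansible makes the hosts match it. The `webservers` host group and the nginx package are illustrative assumptions:

```yaml
# Sketch only: declare the desired state of a fleet of web servers.
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because the playbook describes an end state rather than a sequence of commands, re-running it is safe: hosts already in the desired state are left untouched.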
Sentry is an error-monitoring tool available for several programming languages and used by companies such as Uber and Microsoft.
Sentry tracks errors across an entire application, highlighting problem areas and sending notifications when errors occur in the code.
While the size and volume of updates will undoubtedly differ from one organization to another, a DevOps model allows companies to deploy software updates and application features much more often than those that stick to traditional software practices.
However, to reap the most benefit from the DevOps approach, best practices must be adopted.
A successful DevOps model rests on the following best practices:
CI means that developers regularly merge their code changes into a central repository.
Instead of hanging on to code in private branches and integrating it only at the end of the release cycle, CI requires developers to check in their code to the main trunk of the shared repository every day.
Automated builds are created after this, with tests (regression, integration) subsequently run on the build. The result of this CI practice is that bugs are discovered earlier in the development process and fixed faster, improving overall software quality.
This drastically decreases the time to validate and release new software and updates.
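A CI pipeline of this kind can be declared in a few lines. The sketch below uses GitHub Actions syntax and assumes a Python project with a `requirements.txt` and pytest tests; both the layout and the branch name are assumptions:

```yaml
# Sketch only: build and test every check-in to the main trunk.
name: ci
on:
  push:
    branches: [main]
  pull_request:
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt   # create the build environment
      - run: pytest                            # run the automated test suite
```

Every push triggers the same build and test run, so a broken change is flagged within minutes of check-in rather than at the end of the release cycle.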
After Continuous Integration comes Continuous Delivery; they are really two sides of the same coin. CD expands upon CI by requiring that code changes be automatically built, tested, and prepared for release to production.
Preferably, after the build stage, all coding changes are deployed to a testing environment before ultimately being deployed to production.
Proper implementation of CD ensures that the software development team always has a deployment-ready build artifact that has been validated by passing through a standardized test process.
Microservice architecture allows organizations to decouple and break down large, complex systems into smaller, more manageable, independent projects.
Infrastructure as code enables software developers and system administrators to interact with the software’s infrastructure programmatically, for example, with the use of an API-driven model.
Consequently, the underlying infrastructure of an application is managed and provisioned by utilizing code and software development techniques such as continuous integration and version control.
Using scripts and other tools, the infrastructure-as-code concept allows DevOps engineers to use high-level, descriptive language to write code that engenders more flexible, adaptable provisions and deployment procedures.
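A short sketch in Terraform’s HCL shows the idea: the server below is described declaratively and version-controlled alongside application code. The resource names and image ID are placeholders, not a real configuration:

```hcl
# Sketch only: a server described as code rather than provisioned by hand.
resource "aws_instance" "web" {
  ami           = "ami-placeholder"   # machine image ID (placeholder)
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

Because the infrastructure definition lives in version control, any change to it is reviewed, tested, and rolled back exactly like an application code change.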
Version control is the planetary force around which everything else in the DevOps universe revolves; the glue that holds everything together.
When Ops and Dev share the same version control system, anyone can reproduce the production environment based on what exists in version control.
Test early, test often with automated tools, preferably with Test-Driven Development (TDD).
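The TDD rhythm, write a failing test first and then just enough code to make it pass, can be sketched with Python’s standard unittest module. The `slugify` function is an invented example, not from the text:

```python
import unittest

def slugify(title: str) -> str:
    """Turn a page title into a URL-friendly slug.

    In TDD, the tests below are written first (and fail); this
    implementation is then added to make them pass.
    """
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello DevOps World"), "hello-devops-world")

    def test_collapses_extra_spaces(self):
        self.assertEqual(slugify("  a   b "), "a-b")
```

Run with `python -m unittest` so the suite executes on every check-in; in a CI pipeline this is exactly the automated gate that catches regressions early.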
Microservices allow developers to build a single application as a set of small services. The advantage of microservices in DevOps is that the architecture makes applications more flexible and enables faster innovation.
Without DevOps, the blend of microservices and increased release frequency is bound to present operational challenges.
Adding the crucial ingredient of CI/CD solves these problems and allows companies to achieve rapid delivery velocity in a safe and reliable manner.
There are many benefits that DevOps practices bring to an organization. For clarity, they can be grouped into three broad categories.
Because of the CI/CD pipeline, DevOps makes it possible for businesses to rapidly update existing services and quickly deploy new processes, applications, systems, and features.
DevOps makes business operations more efficient, and thus, reduces the time it takes employees to bring technical products to the end-user.
Shorter software batches, coupled with faster release cycles make it easier to identify points-of-failure and bugs in the system, and therefore, address them before they become a “needle in the haystack” that is difficult to pinpoint.
Hence, there are likely to be fewer bugs in the application. Furthermore, because quality control and automation are tightly integrated in DevOps pipelines, these practices also increase the quality of the software produced.
By using mechanisms such as continuous delivery, automated testing, and automated deployment, the continuous delivery of quality software with minimal overhead is made possible by DevOps.
Because code changes are released in small batches, there is less complexity to manage in DevOps operations.
In addition, it is also easier to detect in what particular software build or release a bug occurred. DevOps also makes it possible to rollback to a stable, working version of the software before the bug was introduced, due to the use of version control.
The benefits of automation are self-evident. Instead of repetitive manual processes that are time consuming and prone to errors, DevOps automation is integrated in a delivery pipeline that is streamlined to simplify various processes in SDLC.
However, DevOps automation isn’t something that is set-up once and then forgotten; it has to be monitored and continuously optimized to resolve bottlenecks in the system.
DevOps contributes to the creation of trust among stakeholders due to its ethos of shared goals, peer reviews, and constant feedback.
Daily Agile Scrum meetings foster continual communication between traditionally separate team members such as developers and system administrators. Sharing and visibility through shared version control repositories also enhance collaboration.
Regardless of a stakeholder’s title, DevOps gives everyone a seat at the table; thanks to Agile, all can contribute ideas to the project. Hence, they feel a more connected stake in its success or failure.
Over time, the constant emphasis on quality assurance (QA) and continuous delivery championed by DevOps translates into an improved business timeline for delivering quality products.
DevOps enables businesses to be flexible and nimble enough to meet the demands of changing markets, customers, and business objectives.
A company is able to achieve this with the maximized output of development (higher rate of features and functionality) and increased frequency of dependable releases.
Businesses can’t afford to rest on their laurels, especially in hyper-competitive markets. The CI/CD pipelines that DevOps relies on force them into a mode of continuous planning.
This is because continuous release and deployment provides businesses the opportunity of getting valuable customer feedback and insight into their products. This feedback invariably leads to optimization and changes.
Also known as split testing, A/B testing is a user experience research method that compares two versions of the same feature (web page, user interface, or other asset) and measures the difference in performance.
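A toy sketch of how an A/B comparison is evaluated; the visitor and conversion counts are invented for illustration:

```python
# Invented counts: 1,000 visitors saw each variant of a page.
visitors = {"A": 1000, "B": 1000}
conversions = {"A": 120, "B": 150}

# Conversion rate per variant, and B's relative lift over A.
rates = {v: conversions[v] / visitors[v] for v in visitors}
lift = (rates["B"] - rates["A"]) / rates["A"]

print(f"A: {rates['A']:.1%}, B: {rates['B']:.1%}, lift: {lift:.0%}")
```

A real A/B test would also check whether the measured difference is statistically significant before declaring a winner.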
A high-level test of an entire system to determine whether it has met requirements. In DevOps, this is done before the system goes to production to ensure that both new and existing features satisfy overall quality standards.
Agile is a software development methodology and cultural philosophy that is focused on delivering quality software to the end-user rapidly through short, iterative software development cycles.
This is the process that enables the packaging and deployment of an application, or application update from its development environment, as the code transitions across various environments until it reaches the production system.
These are tools, scripts, and products that allow DevOps teams to automate application deployment and manage continuous integration/deployment pipelines and release orchestration.
This is a type of test-driven development focused on communication and collaboration between business participants, QA, and non-technical stakeholders on the one hand, and software developers on the other, geared toward defining user stories that meet business outcomes.
Also known as behavioral testing, this is a software testing method in which the inner workings of the item under test are unknown to the tester, compelling them to verify external behavior rather than internal state.
This agent is useful in Continuous Integration, where it is installed locally or remotely and facilitates software builds by exchanging messages with the CI server about how to handle them.
This repository provides central storage for all binaries and artifacts, along with metadata generated during builds. In doing so, it helps make automated deployment scalable and practical by simplifying the build process and dependency management.
These are the frameworks and tools that make it possible to compile source code automatically into releasable binaries. The process usually includes code-level unit testing to verify functional code pieces are performing as expected.
This is a type of release where a new product or application version is provided to only a small subset of production servers, and hence end-users, in order to determine whether it is stable enough, functioning and behaving as expected.
If it is deemed to be functioning alright, it is subsequently rolled out to the entire production environment.
This determines the load, measured in number of users, that a computer, application, or server can support before it fails.
Akin to the concept of mission-creep, configuration drift represents the tendency for hardware and software configurations to become inconsistent with the template version of the system due to ad hoc changes (example, hotfixes) that aren’t committed back to version control, resulting in a significant amount of technical debt.
The process and tools for maintaining consistent functional attributes and settings on a system. It includes system administrative tasks such as IT infrastructure automation.
Containers are valuable because they isolate application software from its environment. This ensures the software application will always run the same, regardless of infrastructure.
As a standard unit of software, a container packages code together with its dependencies, while isolating resources such as CPU, memory, I/O rate, the file system, disk quota, network access, and root privileges.
This OS-level virtualization is much lighter than machine-level virtualization, and therefore enables the application to run faster and more reliably from one environment to another.
Continuous Delivery enables the fast delivery of high-quality software, deployed rapidly, repeatedly, and reliably through a delivery pipeline with minimal human intervention.
When the final release to production happens automatically, the practice is known as Continuous Deployment; it is otherwise identical to Continuous Delivery.
The practice of integrating code frequently, usually several times a day, into a shared repository. Check-ins are verified with automatic builds that allow for early detection of problems.
A constant quest for quality that stretches throughout the software development lifecycle (SDLC), from requirements definition through coding, testing, operations, and pipeline orchestration.
The process of performing unattended automated tests across all environments as part of the software delivery pipeline aimed at detecting bugs and providing feedback on the quality of software.
This is a strategy for going live with a new deployment in which the code implementing the new features is released, but is either not made visible to the end-user, or not activated (fully or partially).
This is a software delivery process that includes a sequence of orchestration and automated tasks to facilitate the seamless deployment of application features to a production system.
This is the group of activities and functions that facilitate making an application available for use to the target environment.
DevOps: Development + Operations
This word is a portmanteau of development and operations. And like its name suggests, it is a blend of software development and IT operations that creates a synergy of processes, practices, and tools that result in the faster delivery of stable software of improved quality.
This is the integration of security with the DevOps process.
This embraces the same philosophy as infrastructure-as-code. In this development technique, all components required to build and deliver software are defined as code, including software environments, infrastructure, deployment packages, dashboards, release templates, and so on.
The advantage of an organization defining their delivery pipeline as code is that it provides them with a controlled, standardized method to on-board applications, projects, and teams.
In this manual testing method, human testers are free to probe areas where they perceive issues may arise that automated tests wouldn’t be able to detect.
Experimenting with new features where they are allowed to fail quickly in order to gain rapid feedback and adapt accordingly.
This is a major principle on which DevOps and CI/CD are based: creating a situation where an organization engenders fast, continuous feedback, first between its development and operations teams, and then with customers and target users. A reliable feedback loop.
In the IT realm, governance ensures tech investments are operating as expected, complying with necessary regulations, and not introducing risk into the system. It is a formal process that allows companies to make sure IT operations are aligned with business goals.
Governance also entails compliance with common industry standards such as those found in CWE/SANS, OWASP, and PCI DSS 3.2.
These are virtual machines hosted on cloud platforms, charged on a pay-as-you-go basis. They provide clients with full control of their machines, although clients have to install and configure additional applications and middleware themselves.
In this system configuration management technique, system assets and components such as operating systems, machines, and network devices are specified in a fully automatable format.
Their blueprint specification is treated as code: executed by provisioning tools, placed in version control systems, and generally subject to the same guidelines used for software development.
This type of testing occurs after unit testing but before validation testing, in order to expose any faults or discrepancies in the interaction between integrated units.
A system reliability metric that measures the average time between system failures.
A metric that measures the average time it takes a system or component to recover from failure and return to production status.
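Both metrics are simple averages over incident history; a toy sketch with invented data:

```python
# Invented incident data, all times in hours.
hours_between_failures = [100, 80, 120]   # uptime before each failure
hours_to_recover = [2, 1, 3]              # downtime after each failure

# MTBF: mean time between failures; MTTR: mean time to recovery.
mtbf = sum(hours_between_failures) / len(hours_between_failures)
mttr = sum(hours_to_recover) / len(hours_to_recover)

print(f"MTBF = {mtbf} h, MTTR = {mttr} h")   # MTBF = 100.0 h, MTTR = 2.0 h
```

DevOps practices aim to push MTBF up and, just as importantly, MTTR down, since fast recovery is what makes frequent releases safe.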
An architectural design pattern in which complex applications are composed of small, modular, independently deployable processes communicating with each other via language-agnostic APIs.
These are tools that enable the automated tasks on which the DevOps Continuous Delivery pipeline is built.
Cloud-based platforms enable their clients to deploy applications on a pay-as-you-go basis, without worrying about the underlying OS and infrastructure.
This is the process of preparing new systems for users, which usually involves configuring the machines and installing the OS and associated middleware. It is often handled by automated system configuration management tools that perform virtualization and instantiation on demand.
End-to-end system testing used to verify that changes made to the system or applications didn’t negatively impact or disrupt existing functionality.
These enterprise-level tools optimize the release pipeline by offering vital, real-time visibility into the release process, release status, and enabling the modification of release plans in an auditable manner. They enforce compliance requirements and can coordinate automated tasks across multiple teams.
Shifting defect resolution left in the SDLC enables teams to detect defects earlier and faster, and consequently fix them at a much cheaper cost. Shifting left entails incorporating security testing, risk assessment, and compliance evaluation early in the delivery pipeline.
A DevOps toolchain is the set of tools that work together to facilitate the development, delivery, and management of a software application.
Verifying that the smallest, individual pieces of code function and behave as expected.
Also known as beta testing, this is the final phase of testing, in which the software application or functionality is tested in real-world situations to determine whether it performs as expected for the end-user.
Instead of using physical machines, applications are run on virtual instances of a computer system simulated on a layer abstracted from hardware. Provides a lot of operational flexibility because virtual machines can be started, stopped, cloned, and discarded in a matter of seconds.
Testing internal structures or behavior of the system to gain an internal perspective of its workings, as opposed to understanding its functionality.
No wiggle room or acceptance of failure of a specific kind. Customers usually have zero-tolerance for failure of a software application; and when this occurs, it can have a catastrophic impact on an organization's reputation.
Customers have grown to expect fast delivery of products; especially Millennials who grew up in the era of Amazon’s overnight delivery system.
These expectations have grown even more astronomical with Cloud-based Software-as-a-Service (SaaS) models that can rapidly respond to changing market needs and add functionality on a whim.
DevOps practices are uniquely suited to meet the accelerating demands of the time in such a way that doesn’t overburden the delivery pipelines or downgrade the quality of the resultant software applications.
To operate optimally, DevOps must fuse both technical techniques and cultural practices in a manner that elicits cooperation and collaboration among key stakeholders.