When I start working on any project, one of the first things I do is figure out my dev environment. This has evolved over the years from a local WAMP setup to a full-blown CI pipeline. This article attempts to summarize what I consider a modern dev environment (which, in reality, generally spans many groups and systems in an organization).
DevOps is a relatively new term describing the collaboration and communication between several disciplines (development and operations). The best illustration of the idea I have seen is the Venn diagram on the DevOps Wikipedia page.
The concept of DevOps is very compelling to me. I started out doing IT (networking), moved into support/QA, and then into development. Moving across all these disciplines exposed me to the reality of the silos that exist: there is a general lack of understanding across disciplines. One of the oddest examples (imho) is web developers who have no apparent understanding of how a network, let alone the internet, works; even what SDN is or how it works is beyond many web developers. But that's what specialization can do: create groups of people who know and understand only a very narrow topic very deeply, which is both good and bad.
There are some real wins to be had with a little cross domain knowledge applied.
The power of a modern dev environment comes from applying cross domain knowledge to create a system that works together to get the most out of Development, QA and Technology Operations.
SCM (Software Configuration Management)
Imagine being able to click a button and deploy a fully configured server, or your entire environment, in minutes. Need a new dev instance? Click go! Screw up the Test/QA environment with some poorly written code? Click go! Need to scale up for a sale? Click go 10 times!
Modern configuration management tools (Puppet, Chef, Ansible, SaltStack) move server and software configuration into scripts, replacing the error-prone manual process an admin would otherwise have to step through. This allows for consistent, repeatable configuration of environments.
You can then put the config management scripts into source control right alongside the code they run with. This lets you version system configuration changes together with the code, so any given version of the code is paired with the system definition it was written for. Over time this is pretty powerful: you don't have to track system configuration changes separately and try to remember which code works on which configuration of your servers.
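To make that concrete, here is a minimal sketch of what such a script might look like, using Ansible as an example. The playbook name, package, and paths are illustrative assumptions, not from any particular project:

```yaml
# webserver.yml -- hypothetical playbook, versioned alongside the app code.
# Running "ansible-playbook webserver.yml" converges any target host to the
# same known state, instead of an admin configuring it by hand.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Deploy the site configuration
      template:
        src: templates/site.conf.j2
        dest: /etc/nginx/conf.d/site.conf
      notify: reload nginx

  handlers:
    - name: reload nginx
      service:
        name: nginx
        state: reloaded
```

Because this file lives in the same repository as the application, checking out an old release also checks out the exact server definition that release was built for.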
Of course, you need some sort of source control in place; Git is my current favorite.
You also need some sort of ticketing system. I have used Jira, activeCollab, Assembla, Bugzilla, Asana, AtTask, Workfront, and several others. They all do basically the same thing; most importantly, they move us away from email.
Continuous Integration / Delivery / Deployment
What is it?
The basic concept of continuous integration is that developer code is integrated with the main project several times a day. This can happen at a specific time, on each commit, or on a merge to specific branches.
In its simplest form, a CI (continuous integration) system can be an auto-deploy script that takes code committed to source control and deploys it to a server.
A CI system can do much more:
- Generate a new target environment (leverage a config management tool & script)
- Compile files (SASS, compiled languages)
- Execute static code analysis to ensure high quality code is being deployed
- Code optimization (Minify js/css, image optimization, byte code optimization)
- Automated tests (immediate feedback to developers, don’t deploy broken code)
- Deploy code to target systems
- Generate production move packages
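As a sketch of how those steps could hang together, here is a hypothetical declarative Jenkinsfile. The stage names, Maven goals, and the deploy script it calls are illustrative assumptions, not prescriptions:

```groovy
// Jenkinsfile -- hypothetical pipeline; each stage maps to one bullet above.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B compile'          // compile sources
            }
        }
        stage('Static analysis') {
            steps {
                sh 'mvn -B checkstyle:check' // fail fast on low-quality code
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B test'             // immediate feedback; broken code never ships
            }
        }
        stage('Package') {
            steps {
                sh 'mvn -B package'          // generate the deployable artifact
                archiveArtifacts 'target/*.jar'
            }
        }
        stage('Deploy to test') {
            steps {
                sh './deploy.sh test'        // hypothetical deploy script for the test env
            }
        }
    }
}
```

Note that the pipeline stops at the test environment: the production artifact is archived but not shipped, which is exactly the distinction drawn next.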
This describes a continuous integration process that goes so far as to generate the deployable artifact but stops short of actually deploying the artifact into production.
This illustration, a simplistic deployment pipeline, helps explain why: once your staff is confident in the process and the tests that are in place, you can choose to promote builds automatically all the way to production (continuous deployment).
Supporting CI tools
This has become a crowded space. Most CI tools are really just a GUI over build scripts. I started off writing custom scripts to implement a CI system, but it's really important to make the process visible to all the developers. I am familiar with Jenkins and Bamboo, but there are a lot of options out there. I really like Jenkins because of all the plugins available, and I am always in the position of having to use on-premises tools. A simple SaaS tool I've leveraged for some very simple automation is Travis CI. The best approach is probably to architect how you want things set up, determine some of the tech you will be using and whether you need on-premises tools, and then narrow your selection based on the support each option has for your chosen architecture.
Automated Testing
I don't believe manual QA will ever go away; automated tests will never be able to catch everything. However, it's vital to have a good suite of automated tests if you want your software development to be nimble.
Why? Manual QA is time consuming and error prone; if you create a test plan and execute the entire plan between each release of a new feature, you aren't going to move very fast. As a system becomes more complex and the development team turns over, you can end up in a situation where no one wants to touch certain parts of the code, because no one understands what they're supposed to do or can tell whether a change has broken something. And if problems are detected within minutes of a developer's work, the QA cycle gets compressed, saving timeline, QA hours, and development time.
It seems like tests are always under-funded and never given the attention they need to be highly effective. QA departments are generally full of overworked manual testers and aren't staffed to provide a good suite of stable automated tests. Because of this, test automation is often left up to the development team. A lot of testing (unit and integration) will always fall to developers, and it requires a great deal of discipline to build up significant test coverage. You could always attempt TDD, though I have yet to see many projects execute that strategy successfully.
Unit and integration tests are great for getting immediate feedback from a CI system, or even from a local dev build, but you won't get good test coverage without a very disciplined dev team that has support from the rest of the organization (so they have time to write tests).
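To make the appeal of unit tests concrete, here is a minimal, dependency-free Java sketch. The class is invented for illustration; in a real project the checks would live in JUnit tests run by the build:

```java
// A pure function is the easiest thing to unit test: no server, no database,
// no deployed environment required -- which is why the feedback is immediate.
public class PriceCalculator {
    /** Returns the price in cents with tax applied, rounded to the nearest cent. */
    public static long totalWithTax(long cents, double taxRate) {
        return Math.round(cents * (1.0 + taxRate));
    }

    // In a real project these checks would be JUnit tests run by Surefire;
    // a plain main() keeps the sketch free of dependencies.
    public static void main(String[] args) {
        if (totalWithTax(1000, 0.08) != 1080) throw new AssertionError("tax case failed");
        if (totalWithTax(0, 0.08) != 0) throw new AssertionError("zero case failed");
        System.out.println("all unit checks passed");
    }
}
```

A CI job that runs checks like these on every commit catches a broken rounding change minutes after it is pushed, instead of days later in manual QA.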
E2E Tests (Browser Automation)
My go-to is end-to-end tests. It's not ideal to lean too heavily on E2E tests, but they are a great tool for testing cross-browser compatibility and can be used even when there are no unit or integration tests. The downside is that they don't narrow down the problem the way unit tests do, and they require the entire system to be deployed; the upside is that they catch complex problems that unit and integration tests miss.
Up until this point my choice for E2E tests has been Selenium. In my early days I used the Firefox plugin to automate tests for my local development needs; currently I manage a Selenium Grid that runs tests written in my projects against Selenium WebDriver. If I were operating in a less locked-down environment I would use a service like Sauce Labs. Running your own Selenium Grid isn't terrible, but it can be a real time suck.
There is a technology stack to figure out underneath all your tests. Working within Java projects I've found myself using Maven, Surefire, JUnit, Cucumber, and Selenium WebDriver; there is a similar stack for many other languages. Cucumber and Selenium WebDriver have been ported to many languages, so you can generally use them in the same language as your project. I really enjoy Cucumber because it allows for human-readable tests and creates reusable units of test code, letting non-programmers write tests with very little support from development once an initial library of steps is written.
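As an illustration of that human readability, a hypothetical Cucumber feature file might read like this (the feature and step wording are invented for the example):

```gherkin
# login.feature -- a hypothetical Cucumber scenario. Once developers have
# implemented step definitions for phrases like these, non-programmers can
# compose new scenarios by reusing them.
Feature: Account login

  Scenario: Registered user signs in successfully
    Given a registered user "alice" with password "s3cret"
    When she signs in with her username and password
    Then she should see her account dashboard
```

Each Given/When/Then line is bound to a small piece of test code (here, driving a browser through Selenium WebDriver), which is what makes the steps reusable across scenarios.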
I think of E2E tests (particularly Cucumber-driven tests) as automated acceptance testing.
This is a pretty high-level overview – maybe I need to come back through and add some diagrams showing how this all works together…