
Early Survey Results: App Releases Often Contain Critical Bugs

We recently published some early results on the topic of predictability in software releases as part of our upcoming Developer Productivity Report, and as of today 700 developers have shared their experiences with us in a 1-page, 5-min survey (which we verified for length).

Donate your brains for 5 minutes here

We’re still looking for more bad@$$ developers, testers and release managers to complete this short survey, hence the plea for your grey matter. We know it’s going to be an awesome contribution to the industry and help us learn a lot about how quality differs across industries, company sizes, toolsets and practices.

So, here’s a sneak peek at some of the answers we recorded when asking developers about software quality:

  • Code quality who? Over half of dev teams surveyed (53%) don’t fix and/or don’t monitor the code quality of their apps (e.g. using Sonar). The rest fix at least some code quality problems.

  • Not so much automation-foo: A large minority (43%) of teams do not automate testing of their app, and only about 1 in 5 teams makes it a priority.

  • Buggy apps for you! Bugs that block functionality go into releases at least ‘sometimes’ for 60% of development teams, and this doesn’t include non-critical issues.

Early results: Code Quality, Automation and Functional Testing not widely embraced

A key question in developer productivity is quality. Without high standards of quality, any amount of code can be produced while creating more rework than value. A wealth of engineering practices, like code reviews, automated testing and code quality monitoring, exist just to help tackle this complex problem.

Although there is no controversy about the need for high quality, or about measuring it, it’s a tougher challenge to measure and compare quality across a wealth of different projects. The number of bugs, the simplest metric, depends on the size of the project as much as on anything else.

Nonetheless, we prepared a list of questions to get at the heart of the matter, and picked these 3 from the larger survey to represent the trend of what’s going on:


Question: Do you monitor and fix code quality problems? (e.g. with Sonar)

We asked if development teams use code quality tools like Sonar (which encompasses FindBugs, PMD and Checkstyle) to monitor their code base for bugs. Nearly 40% of respondents don’t monitor their code at all, and 13% monitor but don’t fix their code. So, over half of the development teams we questioned have little or no support for monitoring and fixing code. However, 48% of teams told us that they fix at least some code quality problems when they arise, and a small sliver (7%) of just-barely-statistically-significant folks fix all the code quality issues they encounter. Let’s all hope to be using THEIR apps. ;-)
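For teams that would like to join that 48%, getting started with Sonar on a Maven project is fairly low-ceremony. The fragment below is a sketch, not a definitive setup: the server URL is an assumption (a locally running Sonar instance on its default port), and the exact property names and plugin versions vary between Sonar releases.

```xml
<!-- pom.xml: tell the build where the Sonar server lives.
     http://localhost:9000 is Sonar's default local address (an assumption here). -->
<properties>
  <sonar.host.url>http://localhost:9000</sonar.host.url>
</properties>
```

With that in place, running `mvn sonar:sonar` analyzes the project and pushes the results (bugs, duplication, rule violations from FindBugs, PMD and Checkstyle) to the Sonar dashboard, where the team can decide which issues to fix.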


Question: How much functionality (not code) is covered by automated tests?

The idea of automation in building and releasing apps is nothing new, yet a troublesome 15% of teams do not automate testing at all (e.g. with Selenium), raising the question of whether they test their apps at all and how effective that manual testing can be. Half (50%) of all respondents cover between 25-75% of their apps’ functionality with automated tests. Only 6% of teams have essentially all functionality covered by automated tests.
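For the curious, one common way to wire functional tests (Selenium-driven or otherwise) into a Java build is the standard Maven Failsafe plugin, which by convention picks up `*IT.java` classes and runs them in a separate phase from unit tests. This is a sketch of one possible setup, not the only way to do it; version numbers are omitted deliberately, since they depend on your Maven setup.

```xml
<!-- pom.xml: run integration/functional tests during the build.
     Failsafe runs them after packaging, so a broken test fails the
     'verify' goal instead of aborting compilation. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

After this, `mvn verify` runs the whole pipeline, so functional coverage gets exercised on every build rather than only when someone remembers to click through the app.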


Question: Do you find critical or blocker bugs after release?

It’s nice to see that nearly 40% of development teams don’t appear to worry about show-stopping bugs sneaking into releases, but a majority, over 60%, have critical bugs in their releases at least sometimes, and 1 in 10 dev teams replied that they ‘often’ or ‘always’ have critical bugs in releases. It would seem that only 5% of development teams can release free of blocker bugs, which is probably the same small group of respondents who monitor and fix all their code quality problems and have automated their functional testing. This is something the other 19 out of 20 dev teams should strive for.

Summing it all up

In the end, after much discussion we decided that the only reliably comparable measure of quality is the number of blocker or critical bugs that are found in production, after the release is made.

If the answer to “do you have a bunch of critical bugs loafing around in your app?” is “No” or “Almost Never” then quality assurance is built well and the development team is productive. If the answer is “Always”, then there is definitely a problem somewhere.

An objection to this metric would be that we are measuring the quality assurance team as much as, if not more than, the development team. However, it is impossible to measure the development team completely apart from the rest of the organization: QA is as important a part of the development process as management and operations, and it’s impossible to be productive as a developer without both dedicating time to quality and having the support of a good QA team.

We’re still looking for more respondents to share their experiences with us, so take 5 minutes out of your busy day to assist the few primary researchers still out there! ;-)

Take the survey!