In answering why cyclomatic complexity (CC) is so widely employed in industry but widely criticized in academic research, we must first answer the question, what parts of the software industry are even aware of current software research?
As a professional programmer, I often receive marketing emails from many sources. They normally compete for my attention by offering insights into my development experience or offering education on some of the latest topics in large-scale software deployment. I subscribe (for free) to InfoQ (www.infoq.com), an organization that hosts many software development conferences and records all session presentations for later Web streaming. The presentations tend to be by professional programmers who work for large commercial organizations on interesting open source projects. They have little marketing content and are highly technical, but they rarely reference academia.
So, to put it bluntly, a divide exists between academic interests and those of day-to-day development. What industry finds useful, academia often does not, and vice versa. This underpins both parties’ views on metrics such as CC.
As a developer working in business, I also have little time to add to my education, and there’s a constant balance to maintain between the urgent and the important. A fundamental part of my skill set is keeping it relevant. So, refreshing skills through constant learning is of paramount importance. If half my (technical) knowledge becomes obsolete every five years, I need to double my knowledge every five years just to stay up to date. Industry thus looks for simple, effective metrics (such as CC) that take very little time to understand, collect, and interpret.
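That cheapness is easy to see in how CC is defined: it is essentially the number of decision points in a routine, plus one. The sketch below is a deliberately simplified illustration in Python, counting only if, for, and while statements; production tools such as the mccabe package also count boolean operators, exception handlers, and other branching constructs.

```python
import ast

# Branching statements counted in this simplified illustration.
DECISION_NODES = (ast.If, ast.For, ast.While)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe CC: decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

sample = '''
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "no small factors"
'''

# One outer if, one for, one nested if: 3 decisions + 1 = 4.
print(cyclomatic_complexity(sample))
```

A metric a manager can compute with twenty lines of code, and explain in one sentence ("more branches, harder to test"), is exactly the kind of metric that survives in industry.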
Of course, development teams face another pressure – accountability. Senior (maybe nontechnical and definitely not computer science academic) management have been taught that they need to measure. If you’re not measuring, how do you know you’re improving? So, well-meaning programs are decreed from on high: Thou shalt measure!
But what should we measure, and how do we measure it? The following quote is from the McCabe homepage (www.mccabe.com):
McCabe IQ has been used to analyse the security, quality, and testing of mission, life, and business critical software worldwide.
How vulnerable is your code? What is the quality of your code? How well tested is the code?
If you are responsible for the development, security, reengineering, or testing of software applications that must not fail, you need answers to these questions. If you can’t answer them with certainty, you need McCabe IQ.
This is a compelling message. But it’s not aimed at software developers. It’s aimed at the people responsible for development. McCabe’s offering solves a different problem than a simple “lines of code” metric would. It lets managers tell senior directors that there’s a metrics program in place, backed by plenty of industrial evidence of its efficacy. And, given the industrial-academic divide, no one in the company has the knowledge to argue with the management’s decision. This suggests that industry looks for metrics that can be communicated across the different levels of management and development stakeholders. CC fits this bill.
James Cain, Principal Architect-Software at SAM