McCabe's Cyclomatic Complexity Metric: academic or practical?

Sep 29, 2004
18,656
68
91
So, I have to figure out a way to get this metric out of our project before the coding effort starts. It is a C/C++ project.

It's easy to spot bad code with my eyes and formal peer review. That is the way to find convoluted code, not some academic paper that was written and for some reason is now accepted. Not to mention that it adds more work in terms of quality checks: we'll probably have to document that we ran the cyclomatic complexity metrics on each unit. And what happens later when things go wrong? It's just a waste of time.

From experience (my last Java project), I find the numbers that come out of cyclomatic complexity metrics to be all over the place. They often make no sense. I can find well-written code that scores horribly by cyclomatic complexity, and horrible code that the metric says is great.

So, what are your thoughts on cyclomatic complexity metrics in practice? Pointless or valuable?

Of course, I'm dealing with people who have a software background, but they are not software leads or developers. They are typically project leads who want to micro-manage. YES! They do not know how to design stuff, and if you read a paper on cyclomatic complexity metrics, it sounds great! How do I convince them otherwise? These are people who are concerned about how we will store things in CVS three months before we start coding.
 

SearchMaster

Diamond Member
Jun 6, 2002
7,791
114
106
By themselves, I don't think they have a ton of merit. Viewed in the context of test-driven development, though, I believe CC is a good measurement. In my experience, if you're doing proper TDD the CC will work itself out, since testing a method with outrageous CC is quite difficult.
 
Sep 29, 2004
18,656
68
91
Sorry, TDD? That's a new acronym for me :)

(TDD == Test Driver Development) ? "Hoo Raaa" : "boooo urnnnns"

I guess I see it the exact same way you do. Writing the test cases themselves will often improve code quality, since you refactor things for testability; the improvement shows up in readability and maintainability.
 

SearchMaster

Diamond Member
Jun 6, 2002
7,791
114
106
Test Driven Development (which I mentioned in my OP ;) ) is basically a philosophy that you don't write any code without a test for it. Cyclomatic Complexity essentially measures the number of different paths through a unit of code. It is very difficult to write tests for a unit of code with many different paths (meaning high CC), so the natural refactoring path would be to split the code up into smaller (more testable) units. So TDD by its very nature tends to lead to code with lower CC, which is what I was trying to state originally.

There are exceptions of course. For example, a unit of code with a large CASE or SWITCH statement will have a high CC because of its algorithm, but is not necessarily difficult to read or difficult to test.
 

Markbnj

Elite Member / Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
Test Driven Development (which I mentioned in my OP ) is basically a philosophy that you don't write any code without a test for it.

I'd say it goes further than that: it says you always write the test for the code first, or at least that's how it has always been presented to me.

I've seen so many different methodologies come and go in the last two decades, that I am fairly cynical about all of them.
 
Sep 29, 2004
18,656
68
91
Originally posted by: Markbnj
I'd say it goes further than that: it says you always write the test for the code first, or at least that's how it has always been presented to me.

I've seen so many different methodologies come and go in the last two decades, that I am fairly cynical about all of them.

Writing the test code first is an academic falsity. In my 9 years of software development, this simply does not happen. Not once have we done things this way. Wonderful in theory, impractical in practice. You really need to develop the code and test drivers in parallel.
 

Markbnj

Elite Member / Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
Originally posted by: IHateMyJob2004

Writing the test code first is an academic falsity. In my 9 years of software development, this simply does not happen. Not once have we done things this way. Wonderful in theory, impractical in practice. You really need to develop the code and test drivers in parallel.

My last job was for a crew of mostly open-source developers working in Java and C++, and they tried like hell to actually make this happen. It's a very hard habit to get into, at least for me and about 80% of their staff, and so it didn't actually happen very often.
 
Sep 29, 2004
18,656
68
91
Originally posted by: Markbnj

My last job was for a crew of mostly open-source developers working in Java and C++, and they tried like hell to actually make this happen. It's a very hard habit to get into, at least for me and about 80% of their staff, and so it didn't actually happen very often.

I've found that treating unit tests as white-box tests is best practice. You might not realize that a method will have certain conditional logic in it until you code it. And if you don't know that before coding, how can you write tests that properly exercise those branches?

The thing is, black-box tests already have a place: it's called integration testing.
 

Argo

Lifer
Apr 8, 2000
10,045
0
0
Cyclomatic complexity, along with any tool that tries to "analyze" source code, is absolutely awful, and I wish they would all go away. The only thing they do is get developers to write code that the tool will like.

There are two major measures of good code: does it do what it's supposed to, and how long would it take an average developer to read and understand what it is trying to do. Neither one can be programmatically analyzed (short of creating a rather successful AI). The second metric especially is completely unmeasurable; it involves things like variable names, the location and content of comments, spacing, etc.
 

SearchMaster

Diamond Member
Jun 6, 2002
7,791
114
106
Originally posted by: Argo
Cyclomatic complexity, along with any tool that tries to "analyze" source code, is absolutely awful, and I wish they would all go away. The only thing they do is get developers to write code that the tool will like.

There are two major measures of good code: does it do what it's supposed to, and how long would it take an average developer to read and understand what it is trying to do. Neither one can be programmatically analyzed (short of creating a rather successful AI). The second metric especially is completely unmeasurable; it involves things like variable names, the location and content of comments, spacing, etc.

I don't entirely disagree with you but I think source code analysis tools can do a decent job of identifying poorly written code as opposed to identifying well written code. For example, our codebase has a single method with almost 2500 lines of code. Does it work? Sure. Would you want to refactor it? If I run an analysis of all recent code check-ins and identify 'out-of-bounds' code, there is a much better chance of having it rewritten now than later if it needs such attention.
 

Argo

Lifer
Apr 8, 2000
10,045
0
0
Originally posted by: SearchMaster

I don't entirely disagree with you but I think source code analysis tools can do a decent job of identifying poorly written code as opposed to identifying well written code. For example, our codebase has a single method with almost 2500 lines of code. Does it work? Sure. Would you want to refactor it? If I run an analysis of all recent code check-ins and identify 'out-of-bounds' code, there is a much better chance of having it rewritten now than later if it needs such attention.

But wouldn't any such code be detected by a code review? I'm of the opinion that any piece of checked in code should be reviewed by at least one other person on the team.
 

Markbnj

Elite Member / Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
Originally posted by: Argo

But wouldn't any such code be detected by a code review? I'm of the opinion that any piece of checked in code should be reviewed by at least one other person on the team.

They aren't mutually exclusive solutions. Code reviews are really valuable, but the process is put under such strain by shortsighted timelines that, in my experience, many times they just don't happen.

Using a tool to identify areas for manual review is also valuable. In my last gig we made extensive use of FlexeLint, Valgrind, and Memcheck, among other tools. But I don't put that kind of use in the category of "applying metrics" to the development process. It's no different from using a micrometer to check the dimension on an engraving.
 
Sep 29, 2004
18,656
68
91
Originally posted by: Markbnj

They aren't mutually exclusive solutions. Code reviews are really valuable, but the whole process is put under such strain by shortsighted timelines that many times they just don't happen, in my experience.

Using a tool to identify areas for manual review is also valuable. In my last gig we made extensive use of Flexelint, Valgrind and Memchk, among other tools. But I don't put this kind of use in the category of "applying metrics" to the development process. It's no different from using a Micrometer to check the dimension on an engraving.

Time, cost, quality .... pick two :)
 

SearchMaster

Diamond Member
Jun 6, 2002
7,791
114
106
Originally posted by: Argo

But wouldn't any such code be detected by a code review? I'm of the opinion that any piece of checked in code should be reviewed by at least one other person on the team.

We push out a LOT of new code every day. Code review on every LOC in our system is virtually impossible. If you can review everything, then sure, automated source analysis is a waste of effort. Otherwise it can raise red flags for refactoring potential.
 

Argo

Lifer
Apr 8, 2000
10,045
0
0
Originally posted by: SearchMaster

We push out a LOT of new code every day. Code reviews on every LOC in our system is virtually an impossibility. If you code review everything, then sure - automated SC analysis is a waste of effort. Otherwise it can raise red flags for refactoring potential.

Code reviews do not necessarily prevent pushing out a lot of code, as long as you build up a certain culture within your team. They also have the hidden benefit of familiarizing the rest of the team with the area each individual is working on, preventing the build-up of narrow domain knowledge.
 

SearchMaster

Diamond Member
Jun 6, 2002
7,791
114
106
Originally posted by: Argo
Code reviews do not necessarily prevent pushing out a lot of code, as long as you build up a certain culture within your team. They also have the hidden benefit of familiarizing the rest of the team with the area each individual is working on, preventing the build-up of narrow domain knowledge.

Each development team has the latitude to do code reviews as they see fit. Most choose to do it on sections of critical new code or with newer developers, which has the benefits you mention. A typical day for us involves deploying 75 changesets from 25 different developers (we're a major website).