Computer Science Approaches: Visualization Tools and Software Metrics
Pages 116-136



From page 116...
... To give a different take on documentation, a different sense of generally representing an electronic document, we asked Tom McCabe to describe general visualization tools and software metrics. Tom founded McCabe and Associates, which creates tools for software development, maintenance, and testing.
From page 117...
... sense of what these algorithms are doing, and I realize that this is a long stretch here so we'll give a quiz, and see where people lose the comprehension. And then we'll talk about using similar metrics for testing and maintaining as well.
From page 118...
... So, it was a real operational, applied kind of business. The kinds of things that go wrong with software are manifold.
From page 119...
... The problem is that documentation and traceability aren't there to let you get to it. We looked at it another way, and we'll talk about reengineering here as well: we reengineered systems and found that 30 percent of the systems were redundant, in the sense that they were doing the same thing.
From page 120...
... One of the problems in the testing of software is that if you had a group of a dozen people and you looked at the criteria for shipping, it was purely ad hoc: it was simply that one guy sent it through testing and a manager would say, "Fine." Well, one person might be good at it and the next guy lousy at it, but that was the criterion. So we're going to talk more about a defined mathematical criterion for completing testing.
From page 121...
... So it would take, for example, with maybe half a million lines of code, about a half hour to parse and represent. So, for a very big system, it might take half a day to parse all the source code and show all the architecture so you could see it.
From page 122...
... Or the data I'm asking about at question six depended upon the outcomes of the data at five, three, one, and two. So it could also be thought of as a data dependency graph of the questions within the questionnaire, and it's been used that way in software as well.
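As an illustration of the dependency structure described above, here is a minimal sketch (my own, not taken from the presentation) that records which questions depend on which answers and derives an order in which they can be asked; the question numbers simply mirror the example in the excerpt.

```python
# Hypothetical questionnaire dependency graph: question 6 reads the
# outcomes of questions 5, 3, 1, and 2 (as in the excerpt above).
deps = {
    6: [5, 3, 1, 2],
    5: [],
    3: [],
    2: [],
    1: [],
}

def answer_order(deps):
    """Return an order in which questions can be answered so that every
    question comes after the questions it depends on (topological sort)."""
    order, seen = [], set()
    def visit(q):
        if q in seen:
            return
        seen.add(q)
        for d in deps.get(q, []):
            visit(d)
        order.append(q)
    for q in deps:
        visit(q)
    return order

print(answer_order(deps))  # e.g. [5, 3, 1, 2, 6]
```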
From page 123...
... If it becomes separated, or takes some time to do it, or it becomes bureaucratic, it won't be done. The point at which this particular stuff is applied is in unit integration testing, and if you don't get the testing done well there, the software starts to blow up as you integrate things and go into field testing, and the cost is incredibly high.
From page 124...
... And then, later, we'll talk a little bit about something called "[module] design complexity" (iv)
From page 125...
... So if you were testing that as an algorithm, you somehow want to come up with 11 paths. Now, typically, if you gave this to a software testing person, you would get answers all over the place.
From page 126...
... But it has eleven basis paths. You can mathematically prove that, if somebody doesn't test eleven basis paths, then that algorithm is overly complex.
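The arithmetic behind that count can be sketched as follows; this is an illustration under the standard assumption of a single connected flow graph with one entry and one exit, not code from McCabe's tools. The cyclomatic complexity v(G) = E - N + 2 (edges minus nodes plus two) gives the number of basis paths a tester needs to cover.

```python
# Minimal sketch (illustration only) of McCabe's cyclomatic complexity
# for one module's control-flow graph: v(G) = E - N + 2.  v(G) is the
# number of linearly independent (basis) paths through the module.
def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Tiny hypothetical flow graph: an if/else followed by a while loop.
edges = [
    ("entry", "if"), ("if", "then"), ("if", "else"),
    ("then", "loop"), ("else", "loop"),
    ("loop", "body"), ("body", "loop"), ("loop", "exit"),
]
print(cyclomatic_complexity(edges))  # 3 -> three basis paths to test
```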
From page 127...
... And it's about unstructured logic. Harlan Mills came out some time ago, with Terry Baker, with papers about structured programming, and it meant a style of logic that looked like it was pretty testable and reliable.35 Now I published a paper some time after that about unstructured programming: what it meant, if you didn't use structured logic, how could you characterize the thought process behind the spaghetti code?
From page 128...
... There's a mathematical definition here, but if I keep going I can essentially reduce this whole algorithm to a linear sequence. So it means I can separate my ...
37. McCabe's essential complexity is computed by analyzing the software module's flow graph and removing all of the most primitive pieces of structured logic: the lowest-level "if," "while," and "repeat" structures embedded within the code.
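A rough counting illustration of that reduction (my own simplification, not McCabe's graph algorithm, and it assumes every decision is a simple binary if/while/repeat): each decision adds one to cyclomatic complexity, and the reduction strips out exactly the decisions sitting inside properly structured, single-entry/single-exit constructs, so only the unstructured decisions survive.

```python
# Simplified counting sketch (illustration only), assuming binary decisions:
#   v(G)  = total decisions + 1
#   ev(G) = unstructured decisions + 1   (fully structured code gives ev = 1)
def essential_complexity(total_decisions, structured_decisions):
    unstructured = total_decisions - structured_decisions
    return 1 + unstructured

print(essential_complexity(10, 10))  # 1: fully structured, reduces to a linear sequence
print(essential_complexity(10, 0))   # 11: spaghetti logic, nothing reduces
```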
From page 129...
... is 1, and module design complexity (iv)
From page 130...
... is 18; essential complexity (ev) is 17, and module design complexity (iv)
From page 131...
... Adding in the dashed line, the resulting flow graph has cyclomatic complexity 11 and essential complexity 10. SOURCE: Workshop presentation by Thomas McCabe.
From page 132...
... But that's something that you can work with. And, by the way (we'll talk about this later), these things have function calls.
From page 133...
... And you make your first set of fixes, and those fixes introduce secondary errors. You fix those secondary errors and you've got tertiary errors.
From page 134...
... So the way you'd think about testing is, you want to know the design complexity, and I'm going to force myself to do the design testing before I ship because I don't want to have the errors coming back in. If I want to do reengineering, I want to see the traces across the architecture when it's being used.
From page 135...
... , is said to be "a measure of the usage of global (external) data within a module" and "is associated with the degree of module encapsulation." Hence, if two modules had equal cyclomatic complexity numbers but different values of gdv, the one with lower gdv would be preferable due to its stronger encapsulation.
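A trivial sketch of the selection rule this excerpt describes; the metric values below are hypothetical and only illustrate the comparison. With equal cyclomatic complexity, the module with the lower gdv is the better-encapsulated one.

```python
# Hypothetical metric values, used only to illustrate the comparison rule:
# with equal v(G), prefer the module with the lower global data complexity (gdv).
modules = [
    {"name": "module_a", "v": 9, "gdv": 6},
    {"name": "module_b", "v": 9, "gdv": 2},
]
preferred = min(modules, key=lambda m: (m["v"], m["gdv"]))
print(preferred["name"])  # module_b: same v(G), lower gdv, stronger encapsulation
```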
From page 136...
... But you find often that the organizations producing quality software are getting it not so much as the result of a new paradigm. It's by engineering principles, stuff like regression testing and the stuff we're talking about.

