Programming complexity is a term that encompasses many properties of a piece of software, all of which affect its internal interactions. Several commentators distinguish between the terms complex and complicated. Complicated implies being difficult to understand, but ultimately knowable given time and effort. Complex, on the other hand, describes the interactions between a number of entities: as the number of entities increases, the number of interactions between them grows exponentially, eventually reaching a point where it is impossible to know and understand all of them. Similarly, higher levels of complexity in software increase the risk of unintentionally interfering with interactions, and so increase the chance of introducing defects when making changes; in more extreme cases, they can make modifying the software virtually impossible.

The idea of linking software complexity to the maintainability of the software has been explored extensively by Professor Manny Lehman, who developed his laws of software evolution from his research. In their oft-cited book, he and his co-author Les Belady explored numerous possible software metrics that could be used to measure the state of software, eventually reaching the conclusion that the only practical solution is to use deterministic complexity models.
Measures
Many measures of software complexity have been proposed. While many of these yield a good representation of complexity, they do not all lend themselves to easy measurement. Some of the more commonly used metrics are described below.
Henry and Kafura introduced Software Structure Metrics Based on Information Flow in 1981, which measures complexity as a function of fan-in and fan-out. They define fan-in of a procedure as the number of local flows into that procedure plus the number of data structures from which that procedure retrieves information. Fan-out is defined as the number of local flows out of that procedure plus the number of data structures that the procedure updates. Local flows relate to data passed to and from procedures that call, or are called by, the procedure in question. Henry and Kafura's complexity value is defined as "the procedure length multiplied by the square of fan-in multiplied by fan-out", that is, length × (fan-in × fan-out)².
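As a concrete illustration, the following short sketch computes the Henry and Kafura value from counts that are assumed to be already available; the function name and the example figures (a 25-line procedure with fan-in 3 and fan-out 2) are hypothetical, chosen only to show the arithmetic.

    def henry_kafura_complexity(length: int, fan_in: int, fan_out: int) -> int:
        # Complexity = procedure length * (fan-in * fan-out) squared.
        return length * (fan_in * fan_out) ** 2

    # Hypothetical figures: a 25-line procedure with 3 incoming and 2 outgoing flows.
    print(henry_kafura_complexity(length=25, fan_in=3, fan_out=2))  # 25 * (3 * 2)**2 = 900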
A Metrics Suite for Object-Oriented Design was introduced by Chidamber and Kemerer in 1994, focusing, as the title suggests, on metrics specifically for object-oriented code. They introduce six OO complexity metrics: weighted methods per class, coupling between object classes, response for a class, number of children, depth of inheritance tree, and lack of cohesion of methods.
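To make two of these metrics concrete, the sketch below computes depth of inheritance tree (DIT) and number of children (NOC) for a toy Python class hierarchy; the hierarchy, the function names, and the convention of measuring depth from (and excluding) Python's implicit object base are illustrative assumptions rather than part of the original definitions.

    class Shape: ...
    class Polygon(Shape): ...
    class Triangle(Polygon): ...
    class Rectangle(Polygon): ...

    def depth_of_inheritance_tree(cls: type) -> int:
        # Number of ancestors between the class and the root of the hierarchy,
        # not counting the class itself or Python's implicit object base.
        return len(cls.__mro__) - 2

    def number_of_children(cls: type) -> int:
        # Immediate subclasses only, as in the NOC definition.
        return len(cls.__subclasses__())

    print(depth_of_inheritance_tree(Triangle))  # 2 (Polygon and Shape above it)
    print(number_of_children(Polygon))          # 2 (Triangle and Rectangle)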
There are several other metrics that can be used to measure programming complexity:
Associated with, and dependent on, the complexity of an existing program is the complexity of changing that program. The complexity of a problem can be divided into two parts:
Essential complexity: Is caused by the characteristics of the problem to be solved and cannot be reduced.
Accidental complexity: Relates to difficulties the programmer faces because of the chosen tools and approach, and can be reduced by selecting better-fitting tools or a higher-level language.
Chidamber and Kemerer Metrics
Chidamber and Kemerer proposed a set of programming complexity metrics that is widely used in software measurement and in academic articles. They are WMC, CBO, RFC, NOC, DIT, and LCOM, described below: