McKinsey & Company published an article observing that “IT efforts often cost much more than planned” (Bloch et al. 2012).
Their research, conducted in collaboration with the University of Oxford, suggested that half of all IT projects (defined as those with an initial price tag exceeding US $15 million) on average run 45% over budget and 7% over time, while delivering 56% less value than predicted.
Numerous studies by other thought leaders reach similar conclusions. Since the 1980s, academics and business leaders alike have revisited the “productivity paradox” (Brynjolfsson 1996): computing power keeps increasing, yet productivity stagnates.
The challenge then is still the challenge today: how to focus on managing strategy, how to master technology and project content, how to build effective teams, how to excel at short delivery cycles, and how to enforce rigorous quality checks?
Today, there is an effective and low-cost solution to this age-old problem.
Welcome to the missing link.
At this point, I guess the immediate response is “why is a business analyst talking about testing?” This is a perfectly normal response, and it is normal because of where we are today in the IT industry. Today, an IT Project comprises many different roles with many different actors, each actor using many different tools and techniques to deliver many different (and often disconnected) artefacts.
First, we have Mr. Requirements: he tells you what he wants, what he really, really wants. He is very results-focused, and sometimes he will mix up requirements with solutions under the assumption that the IT Project will deliver more quickly if the business provides a solution directly rather than indirectly via a statement of requirements.
Without realising it, Mr. Requirements uses different words and concepts to describe the same end-result. He is never too sure if you get what he wants, so chances are he will write up his thoughts and put them in a Word document. He may go further and ask for sign-off. This, if sign-off happens, may give Mr. Requirements that warm, glowing feeling that the IT Project has completely understood the requirements.
From here on until D-Day (aka “Disappointment Day”, aka “oh dear, I appear to have got exactly what I asked for but not exactly what I wanted”, aka “well, the operation was a success but the patient died”), Mr. Requirements will repeat the same question over and over again. Is it finished? Is it ready? Does it work? Can I test it? Believe me: this is exactly the same question; each version simply uses different words to say exactly the same thing. “Is it done?” is yet another example.
Next up, Player 2 enters the game. Here we have Ms. Analyst. Ms. Analyst reads the document, and her knee-jerk reaction tends to be “what the heck?” The document from Mr. Requirements makes absolutely no sense. Ms. Analyst can see that Mr. Requirements used different words to say the same thing. Or did he?
Thus, the IT Project engages in a round of meetings, workshops, presentations and other endless attempts to “nail down” the requirements. Each attempt involves diagrams, charts, flow charts, models, more diagrams, more models, diagrams with arrows, different arrows, two-headed arrows, multiple two-headed arrows pointing to one central humanoid figure entitled “end-user” (my heart goes out to these people: a life constantly at the rough end of the pineapple, knowing that “this” is as good as it gets). These artefacts are all examples of saying exactly the same thing but using different words, pictures, tables, diagrams, charts, etc. to do so.
Next up, Player 3 enters the game. Here we have Mr. Developer. Mr. Developer looks at the diagrams, charts, models, code-generated model artefacts, etc. and tries to link one set of artefacts to another set of artefacts such as Office documents in Word or Excel format. The process of “linking” is just one technique to filter out the “solution” element from the artefacts so that, as a developer, he can focus on whether the solution he implements will satisfy the requirements.
Unsurprisingly, his knee-jerk reaction to all these artefacts is “what the flippin’ heck?” The requirements, analysis and specifications make absolutely no sense. Mr. Developer can see that both Mr. Requirements and Ms. Analyst used different things to mean the same thing. Or was it the same thing?
Thus, the IT Project engages in yet another round of meetings, workshops, presentations and other endless attempts to “nail down” the requirements. Conflict, confusion and chaos are absolutely everywhere.
Under pressure to “deliver something”, often the “inexperienced” developers are the first out of the gate, frantically creating code and creating the illusion that something good is happening: people are busy. Often, they hack together a patchwork of anti-patterns such as copy &amp; paste, “golden hammer” and multiple versions of the “God” object (each new “God” knows slightly more than the previous “God”, but an inherent fear of change prevents all “Gods” from knowing “everything”).
The “inexperienced” developers never develop with the concept of “how to test” in mind. Simply, they do not have the life-experience of software engineering to teach them otherwise. In their “inexperienced” mind-set, testing is “something else” done by “someone else” and “nothing to do with IT”. Through their “inexperienced” actions, they unknowingly push the IT Project further and further into “legacy” development in their all-too-focused attempts to deliver something.
Often, following behind and gaining the reputation of “trouble-maker” are the “experienced” developers. An “experienced” developer will think immediately “how do I test this function, and why does the test need to know things that are not directly related to the function?” The “experienced” developer will challenge the solutions that the “inexperienced” developers implement, for example “why does the instrument class need to know about public holidays if this is a commercial loan?”
When the business analyst takes such questions back to the business, the business does not tend to appreciate the detail. Without understanding the detail, the answer only adds to the confusion. For example: “well, if the end date of the loan’s cash flow falls on a weekend, then you need to go forward to the next business day, but if that next business day crosses into a new month, then you need to roll back to the last business day of the previous month (so the date stays within the current month). So, it’s a requirement to know whether a non-weekend day is actually a business day or not.” It is just so obvious, right? Why can’t the programmer just code it up, deploy “something” and let me test it? What is the problem?
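The date-rolling rule described in that quotation is, in fact, the well-known “modified following” business-day convention. A minimal sketch in Python, assuming a hypothetical holiday calendar (a real system would load one per market), shows just how much hidden detail lurks in that one throwaway sentence:

```python
from datetime import date, timedelta

# Hypothetical holiday calendar; illustrative only.
HOLIDAYS = {date(2024, 1, 1)}

def is_business_day(d: date) -> bool:
    """A day counts as a business day if it is neither a weekend nor a holiday."""
    return d.weekday() < 5 and d not in HOLIDAYS

def modified_following(d: date) -> date:
    """Roll forward to the next business day; if that roll crosses into a
    new month, roll backward to the last business day instead."""
    rolled = d
    while not is_business_day(rolled):
        rolled += timedelta(days=1)
    if rolled.month != d.month:
        # Crossed a month boundary: roll back within the original month.
        rolled = d
        while not is_business_day(rolled):
            rolled -= timedelta(days=1)
    return rolled
```

For instance, 30 June 2024 falls on a Sunday: rolling forward lands on 1 July, which crosses the month, so the convention rolls back to Friday 28 June. This is exactly the kind of rule that is trivial to state in a table of dates and painful to convey in prose.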
These conversations terrify Mr. Requirements. There is just too much detail to engage with. He is missing the “big picture” in the conversation, and in this case he wants to report accounting events for interest rate derivatives under US GAAP rules. After contributing in whatever ways he can, he returns to the thing he knows best: “When will it be ready, and when can I test it?”
Finally, Player 4 enters the game. Here we have Mrs. Tester. Mrs. Tester arrives far too late in the IT Project. By now, the calendar sits somewhere between a “non-crashing system (no idea if it works properly)” and “go live”. In this calendar is a narrow window just to “do some testing”. Often, the IT Project drops UAT phase 1 (no time available), goes straight to UAT phase 2 and asks Mrs. Tester to log defects using her favourite defect-tracking tool.
Unsurprisingly, the IT Project goes to production with a set of known bugs. If the bugs are truly annoying, the release will be a “technical” release and, if the sponsors can be bothered to keep funding the programme, management will give the IT Project an extension to “fix the bugs”.
During this defect-fixing period, Mrs. Tester becomes the focal point of the IT Project, and her repository of defects determines whether the sponsor resigns himself to the fact that “it is as good as it gets” and cuts all further funding or simply “gives up” and cuts all further funding. Giving up means the sponsor will congratulate the IT Project on the successful back-out of their deployment release.
This happens all the time. IT Projects spend too much time delivering too little value for too much money. At this point, I guess the response is “true, funny, but funny because it is true. What does this have to do with testing, and why is a business analyst talking about testing anyway? I’m confused: is there a point to all this?” This is a perfectly normal response.
What we learn from the above is that each role has the same end-result in mind and the same intention to encapsulate the same concept. However, each person saves the artefacts of his or her concept differently: Mr. Requirements may use Word or Excel. Ms. Analyst may use class diagrams or other models. Mr. Developer may use program code or meta-data configurations for an “off the shelf” package. Mrs. Tester may use JIRA and attach screen shots, reports, log files, Word documents or Excel files.
Each artefact is different from the others. Yet the intention is that each artefact should mean the same thing: just expressed differently. Worse, each artefact exists in isolation from the others. If Mrs. Tester creates a new defect, the original business requirements document certainly does not update automatically to reflect the new concept. Similarly, the code does not update automatically to reflect changes to the UML (and so on).
Now, imagine for a moment what would happen if each role saved their concepts to one, common, shared artefact: for example, an Excel document. Moreover, suppose the artefact is so formal, so structured that it is both human-language independent but machine executable.
Suppose the requirement were a set of input and output tables in Excel. Suppose the tables represented the analysis. Suppose the tables represented the model. Suppose the tables represented the test case. Suppose the developer used the same Excel to validate the build artefacts automatically by injecting the input values into the code and comparing the results to the output tables. Suppose the developer used the Excel not only as a specification but as a test case that executes automatically each time the programmer’s code changes.
In this configuration, the requirement equals the specification equals the test equals the build. When the test passes, the build must satisfy the requirement according to the specification provided.
Better, when the test fails, only one of two possible events can take place: either the program contains a bug (which the developer can identify immediately) or the specification is wrong (which means the requirement is not clear and therefore may need to change).
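The mechanism described above can be sketched in a few lines of Python. Here a simple in-memory table stands in for the shared Excel sheet, and `simple_interest` stands in for the build artefact under test; all names and values are illustrative assumptions, not a prescription:

```python
# Hypothetical executable specification: each row is an input/expected-output
# pair, exactly as it might appear in the shared Excel sheet.
SPEC_TABLE = [
    # (principal, annual_rate, years, expected_interest)
    (1000.0, 0.05, 1, 50.0),
    (1000.0, 0.05, 2, 100.0),
    (0.0,    0.05, 1, 0.0),   # the tester's "what if the value is zero?" row
]

def simple_interest(principal: float, rate: float, years: int) -> float:
    """The code under test: the artefact the developer builds."""
    return principal * rate * years

def run_spec(table) -> list:
    """Inject each row's inputs and compare the result to the expected output.
    Any row returned here is either a bug in the code or an error in the
    specification; there is no third possibility."""
    failures = []
    for principal, rate, years, expected in table:
        actual = simple_interest(principal, rate, years)
        if abs(actual - expected) > 1e-9:
            failures.append((principal, rate, years, expected, actual))
    return failures
```

Run as part of every build, an empty failure list means the software satisfies the specification; a non-empty list points the developer straight at the offending row of the shared artefact.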
In this configuration, roles and responsibilities work in harmony. The business analyst may know how to transpose a requirement into a set of input and output tables, but typically the analyst is not so good at testing. Enter the testing specialist. Using the same artefact, the test specialist now has a common form of communication with which to challenge the analyst: what if the value is zero; what if there is no value; what if there are no products for this order; what if the data consumer does not have access rights to this data; and so on. This essential feedback loop between the tester and the business analyst enriches the artefact and obtains further clarification of the requirements by resolving the detailed “what if” scenarios.
In this configuration, the developer regards the artefact as both specification and test case combined. Since the artefact is the same thing in terms of requirements and specification, it follows that if the test passes, the software must satisfy the requirements.
Moreover, the developer has all detailed knowledge described clearly as a table of input and expected output values. This avoids miscommunication in which each actor of the IT Project tries to explain the same understanding using different words, charts, diagrams, etc.
Moreover, all the defects the developer would normally find in a unit-testing phase come to light automatically as part of the IT Build process. The value was not zero when the test case expected zero: bug. The date was in the wrong format according to the test case: bug. A value was expected but the program code did not generate it: bug. And so on.
In this configuration, the concept of a single artefact is, actually, nothing new. Years ago, in the days of punched cards and paper tape, the roles and responsibilities often belonged to the same person. The programmer knew what he or she wanted. So much so that they would punch out the expected results as a test case, and the process of testing was simply to hold the output cards or tape up to the light against the expected cards or tape and see if the holes lined up. The test process itself was an automated affair: red light / green light means the holes lined up or not.
In this configuration: there is no missing link. Everything connects seamlessly via a single artefact that is both test and specification.
Readers may recognise this as the talk by Kent Beck on “rediscovering” test-driven development and may think we already have a solution (such as JUnit). However, in that configuration, the link is missing: the users are not writing Java code for their JUnit tests. The business analysts are not writing Java code for their JUnit tests. The test managers are not writing Java code for their JUnit tests. Instead, the programmer is misinterpreting some other document from elsewhere and inventing what he thinks are “perfectly good” examples of input and output values.
In that configuration, the link is missing. Worse, the programmer invests heavily not only in writing the production code but also in creating a test framework alongside it. Worse, the test team cannot access the JUnit code and therefore continues to log defects in a different medium. Worse, the business analysts never created models using, for example, JIRA, and so we now regard TDD (and its variants) as “things that mean well, but don’t really work in practice”.
The failure is understandable because the link was missing from the outset. If we went back to the time when the link was not missing, when the roles occupied the same ground and the documentation was the same artefact, we would see an end to the thought leaders of tomorrow writing up the “productivity paradox” that we all heard about 30 years ago. Instead, IT Projects would spend less time and deliver more value for less money.
My initial feedback was “the article shows a shift in thinking in which the roles as BA, developer and tester are merged into one role. This is an important shift and you need to think this way when you engage with Tao.”
My uncle Peter feels it “adds interesting value to and supports the TAO approach and is worth sharing to inspire other insights.”