In a forthright post each month, Robin dispels some “truth” that you might otherwise accept unquestioned in his regular Unconventional Wisdom blog. In this latest instalment, Robin addresses a small issue with his previous blog posts.
——————————————————————————————
Santa has let me know of a problem he found when checking whether my Unconventional Wisdom blogs were naughty or nice. Apparently my “conventional we’s dumb” assumptions about how blogs are indexed and accessed were mistaken. As a result, searches for and links to my blog topics were obscured. Thanks to Santa for finding the problem, and apologies to all those potential readers who didn’t know what they were missing. More useful blog topic descriptions and links follow.
My Conventional Wisdom’s Undesirable Outcomes
If you read my first five Unconventional Wisdom blog posts, and I hope you will, you’ll see that each began with what I intended as a kind of branding, rather like the masthead atop stationery or a webpage:
“It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”
– Will Rogers
Welcome to my Unconventional Wisdom blog. Much of conventional wisdom is valid and usefully time-saving. However, too much instead is mistaken and misleadingly blindly accepted as truth, what I call “conventional we’s dumb”. Each month, I’ll share some alternative possibly unconventional ideas and perspectives I hope you’ll find wise and helpful.
Then came the text unique to the individual post, which is what I assumed would show up in response to searches. Imagine my surprise when instead searches displayed the identical descriptive verbiage for each of my first five Unconventional Wisdom blogs:
“It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.” – Will Rogers Welcome to my Unconventional Wisdom blog. Much of conventional wisdom is valid and usefully time-saving. However, too much instead is …
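The identical snippets are easy to reproduce. When a page supplies no per-post description, search engines commonly fall back to roughly the first 160 characters of page text for the result snippet, so any text shared across posts crowds out the unique part. The sketch below illustrates that mechanism; the post titles, body text, and 160-character limit are illustrative assumptions, not the behaviour of any particular search engine.

```python
# Sketch of the shared-masthead problem (assumption: with no per-post
# description, the search snippet is roughly the first 160 characters
# of page text).
MASTHEAD = (
    "\"It isn't what we don't know that gives us trouble, "
    "it's what we know that ain't so.\" - Will Rogers "
    "Welcome to my Unconventional Wisdom blog. Much of conventional "
    "wisdom is valid and usefully time-saving. However, too much "
    "instead is mistaken."
)

# Hypothetical posts: each begins with the same masthead text.
posts = {
    "V1": MASTHEAD + " 'Risks' that aren't risks at all...",
    "V2": MASTHEAD + " Traceability truths...",
    "V3": MASTHEAD + " The testability trap...",
}

def snippet(text, limit=160):
    """Truncate page text to a search-style snippet."""
    return text[:limit].rstrip() + " ..." if len(text) > limit else text

snippets = {title: snippet(body) for title, body in posts.items()}

# The masthead alone exceeds the snippet limit, so every post yields
# the same snippet and the unique text never appears.
assert len(set(snippets.values())) == 1
```

Because the shared masthead is longer than the snippet limit, the truncation happens before any post-specific text is reached, which is exactly why all five posts displayed the same verbiage.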
My Next Conventional Wisdom Mistake
I tried to recover somewhat in the sixth post by starting with a recap of the first five posts, embedding links to each, and tying them all together with V6’s further discussion of guru issues. But the text came through without the embedded links, so only someone who had actually read the prior posts would have realised that the introductory paragraph was in fact referencing each of the five earlier blog posts.
Explicit URLs and Blog Content Introductory Paragraphs
Hopefully these annotated versions will make clearer what each of the six blog posts to date is about and interest you in reading them. I’ll avoid at least these mistaken assumptions in subsequent posts, though I still haven’t figured out how to use the blog editor to embed links; that’s something for another year. Thank you for your interest, and let me know your thoughts about the blog posts.
Unconventional Wisdom V6: Gurus Often Least Likely To Understand Good Ideas
Have you noticed that much of what I call “conventional we’s dumb” is touted by gurus? Gurus are Testing’s Donald Trumps and the main servers of Exploratory’s Kool-Aid. Albeit with the best intentions, gurus lead us into the Testability Trap, into over-reliance on traceability, and into poor practices that are certainties rather than risks. Conventional wisdom suggests such supposed subject domain experts would best recognise, understand, and espouse good ideas; but too often they’re the least likely to.
Unconventional Wisdom V5: Drinking Exploratory’s Kool-Aid
Exploratory testing is a useful technique; but perhaps its greatest success has been convincing many in the testing community to accept as conventional wisdom (getting them to “drink the Kool-Aid,” Google it) that exploratory is the best testing technique.
Unconventional Wisdom V4: Testing’s Donald Trumps
Like a train wreck, people can’t stop watching Donald Trump dominating a most unconventional race to become the Presidential nominee of the Republican Party. We in the US assume, perhaps and hopefully mistakenly, that everyone in the world knows and cares about our internal politics. Thanks to Trump, this year they should and probably do.
Unconventional Wisdom V3: Testability Trap
Lack of testability seems to be what the testing community considers the biggest issue with requirements. In turn, a requirement’s lack of clarity is the main reason one cannot write tests to demonstrate the requirement has been met. Consequently, unclear/untestable requirements are widely accepted as the cause of the creep that produces project budget and schedule overruns.
Unconventional Wisdom V2: Traceability Truths
Traceability and the traceability matrix demonstrating it are widely touted as essential for requirements and testing. At its simplest, a traceability matrix traces forward from each requirement to the tests of that requirement’s implementation and traces backward from each test to the requirement whose implementation it tests.
Unconventional Wisdom V1: ‘Risks’ That Aren’t Risks At All
Let’s start with “risk-based testing.” I find many testers use the term to distinguish a particular type of testing that they perceive as somehow different from what they ordinarily do. Generally it’s raised as an approach to enlist when time and resources are inadequate for needed testing. My experience is that most use “risk-based testing” to mean a fairly formal and explicit analysis of the risks addressed by prospective tests.