Experience-based Testing Strategies

On various software testing resources you will often see the mysterious “Experience-Based Test Techniques” mentioned. Teaching testing, I always had a hard time explaining what they were. It is very hard to teach “experience”. One can design a practical task so that students learn something from it, but it is impossible for them to learn exactly from your experiences.

In testing we constantly deal with philosophical categories that cannot be described precisely in the language of logic: Quality, Useful, Good, Good-enough, Experienced, etc. (Say hello to Plato!)

The people of the Context-Driven School of Software Testing decided to deal with these hard-to-define categories by collecting them all under one deliberately open term: “the context”.

I recall (but fail to find the link to) a blog post by one of them, trying to decide whether testing is a craft or an art.

But when it comes to teaching, one cannot teach art.

Still, I like the poetic idea that testing as true art is wonderful, and that one artist-tester can never fully repeat the masterpieces of another.

As a teacher, though, I need to present the discipline of Software Testing as a set of exercises that one may repeat on the way to becoming a better tester.

So here follows a list of “bug types” I have met in my experience, together with possible ways to identify these bugs, also called “test strategies”. Of course the list is not complete, and your suggestions for expanding it are welcome!

Testing Strategies

User Interface

  • Check labels/messages text
  • Positioning of elements
  • Overlapping of dynamic elements (see the sketch below)
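
If UI checks like the overlap test have to be repeated often, they can be partly automated. Below is a minimal sketch, assuming a Selenium WebDriver session and hypothetical CSS selectors, that compares the bounding boxes of two dynamic elements.

```python
# A rough sketch of an automated overlap check, assuming a Selenium WebDriver
# session and hypothetical CSS selectors for the elements of interest.
from selenium import webdriver
from selenium.webdriver.common.by import By

def rects_overlap(a, b):
    """True if two bounding boxes (dicts with x, y, width, height) intersect."""
    return not (a["x"] + a["width"] <= b["x"] or b["x"] + b["width"] <= a["x"] or
                a["y"] + a["height"] <= b["y"] or b["y"] + b["height"] <= a["y"])

driver = webdriver.Chrome()
driver.get("https://example.test/dashboard")   # hypothetical page under test

tooltip = driver.find_element(By.CSS_SELECTOR, ".tooltip")   # hypothetical selectors
save_btn = driver.find_element(By.CSS_SELECTOR, "button.save")

assert not rects_overlap(tooltip.rect, save_btn.rect), "Dynamic elements overlap"
driver.quit()
```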

UX

  • Compile end-to-end test scenarios oriented towards the end user
  • Watch for possibly redundant steps in the UX flow
  • Clarity of the logic behind element grouping (this can be very subjective stuff 🙂 )
  • Timings: are the regular operations fast enough? (see the sketch below)
  • Does the cursor have to travel too far?
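
The timing point can be turned into a rough, repeatable check. Here is a minimal sketch, assuming the “regular operation” can be driven through a hypothetical HTTP endpoint and that one second is the acceptable budget.

```python
# A minimal timing check; the endpoint URL and the time budget are assumptions.
import time
import statistics
import requests

SEARCH_URL = "https://example.test/api/search?q=invoice"  # hypothetical endpoint
MAX_ACCEPTABLE_SECONDS = 1.0                              # assumed budget

durations = []
for _ in range(10):
    start = time.perf_counter()
    requests.get(SEARCH_URL, timeout=10).raise_for_status()
    durations.append(time.perf_counter() - start)

print(f"median={statistics.median(durations):.3f}s max={max(durations):.3f}s")
assert statistics.median(durations) <= MAX_ACCEPTABLE_SECONDS, "Operation feels too slow"
```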

Data Sheet Calculations

  • For quick checks, if possible, run multiple calculations and then compare the totals (checking for error accumulation); see the sketch after this list
  • Watch the number of digits – similar calculations should produce a similar number of digits
  • Prepare data for volume testing, introducing realistic variety
  • Test for acceptable/unacceptable number types
  • Construct ‘pipelines’ of calculations (error accumulation)
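
To make the “compare totals” idea concrete, here is a tiny sketch that computes the same total through two independent routes; the line-item amounts are made up, and the exact-decimal route stands in for any independent reference calculation.

```python
# A tiny sketch of the "compare totals" strategy: compute the same total through
# two independent routes so accumulated rounding error becomes visible.
from decimal import Decimal

line_items = ["0.10"] * 1000                            # 1000 identical, realistic-looking amounts

float_total = sum(float(x) for x in line_items)         # the "pipeline" under test (binary floats)
reference_total = sum(Decimal(x) for x in line_items)   # independent reference calculation

drift = reference_total - Decimal(str(float_total))
print(f"pipeline={float_total} reference={reference_total} drift={drift}")
# Any non-zero drift here is accumulated rounding error worth reporting.
```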

Inputs

  • Various types of inputs (let the BugMagnet browser plugin inspire you)
  • Field value validations: in-line, on-focus, on-change-focus, on-submit, front-end/back-end.
  • Watch all the places where the input data is shown or used afterwards. Compile an explicit list of such places.
  • Injections: JS (XSS), SQL, format strings (see the sketch below)
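
As a rough illustration of injection probing, the sketch below pushes a handful of problematic values through a hypothetical form endpoint and flags any raw reflection; the URL, field name, and payload list are assumptions.

```python
# A hedged sketch of input probing: submit "nasty" values to a hypothetical form
# handler and check whether they come back unescaped in the response.
import requests

SUBMIT_URL = "https://example.test/profile"    # hypothetical form handler
NASTY_INPUTS = [
    "<script>alert(1)</script>",               # reflected XSS probe
    "' OR '1'='1",                             # naive SQL injection probe
    "%s%s%s%n",                                # format-string probe
    "𝕌𝕟𝕚𝕔𝕠𝕕𝕖",                                  # exotic Unicode
    " " * 5000,                                # oversized whitespace
]

for value in NASTY_INPUTS:
    response = requests.post(SUBMIT_URL, data={"display_name": value}, timeout=10)
    # A raw echo of the payload suggests missing output encoding.
    if value.strip() and value in response.text:
        print(f"Possible unescaped reflection for: {value!r}")
```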

Memory Leak

  • Data volume testing
  • Soak testing
  • Repeat resource-costly actions again and again while watching the performance monitor for growing resource consumption (see the sketch below)
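
One way to script the “repeat and watch” strategy is sketched below: repeat a placeholder costly action and sample the process memory with psutil. The iteration counts and the “monotonic growth” heuristic are assumptions, not a precise leak detector.

```python
# A minimal soak-test sketch: repeat a costly action and sample the process RSS,
# flagging steady growth. `expensive_action` stands in for the real operation.
import psutil

def expensive_action():
    # Placeholder for the real resource-costly operation (report export, import, ...).
    _ = [bytes(1024) for _ in range(1000)]

process = psutil.Process()
samples = []
for i in range(1, 501):
    expensive_action()
    if i % 50 == 0:
        rss_mb = process.memory_info().rss / 1024 / 1024
        samples.append(rss_mb)
        print(f"iteration {i}: {rss_mb:.1f} MiB")

# RSS that keeps climbing between samples is a hint of a leak worth investigating.
if all(later > earlier for earlier, later in zip(samples, samples[1:])):
    print("Memory use grew monotonically across the soak run - possible leak")
```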

File Saving/Reading

  • Overwriting existing files
  • Saving to the file very frequently
  • Watching the changes on each save
  • Manually changing files that are supposed to be generated (see the sketch below)
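
A minimal sketch of the save/read hammering, assuming a hypothetical JSON file: overwrite it repeatedly, verify the round trip, and finally tamper with it by hand to see how the application copes.

```python
# A small sketch of repeated save/read: overwrite the same file many times and
# confirm that what was last written is what comes back. Path and format are assumptions.
import json
from pathlib import Path

target = Path("report.json")                   # hypothetical file under test

for i in range(200):                           # "saving very frequently"
    payload = {"revision": i, "items": list(range(i % 10))}
    target.write_text(json.dumps(payload))
    read_back = json.loads(target.read_text())
    assert read_back == payload, f"Round-trip mismatch at revision {i}"

# Simulate a user hand-editing a "generated" file, then see how the app reads it.
target.write_text('{"revision": "oops", "items": null}')
print("File tampered; now reopen it in the application and watch the error handling")
```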

Integrations and API

  • Demand an independent test environment
  • Populate all fields
  • Enumerate test values, or vary them in your own way, so you can easily recognize each piece of your test data in further processing (see the sketch below)
  • Use pair-wise population (at first)
  • Check every piece of initial data as it travels through the workflow from one end to the other
  • Change format, order, length and other parameters within explicit limits
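
Here is a small sketch of the “enumerate your test values” idea: every field carries a unique marker so it can be recognized on the far side of the integration. The endpoints and field names are hypothetical.

```python
# A sketch of enumerated test data: tag every field with a unique marker so it
# can be spotted at the far end of the workflow. URLs and fields are assumptions.
import requests

CREATE_URL = "https://example.test/api/orders"            # hypothetical entry point
SEARCH_URL = "https://example.test/api/warehouse/search"  # hypothetical downstream system

order = {
    "customer": "TST-001-customer",
    "street":   "TST-001-street",
    "comment":  "TST-001-comment",
}
requests.post(CREATE_URL, json=order, timeout=10).raise_for_status()

# Later, look for each enumerated value on the other end of the integration.
for field, value in order.items():
    found = requests.get(SEARCH_URL, params={"q": value}, timeout=10).json()
    print(f"{field}: {'arrived' if found else 'LOST in transit'}")
```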

Concurrent multi-user interference

  • Enter similar or identical data
  • Change the same records (see the sketch below)
  • Interrupt/enable the same streams
  • Use the same user roles/permissions
  • Access/change the same files
  • Use different clients simultaneously: desktop, mobile, web, API (different levels, if present), mobile web
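
A minimal sketch of two “users” changing the same record at once, looking for lost updates or server errors; the endpoint, record id, and test credentials are assumptions.

```python
# A sketch of concurrent edits to the same record by two different users.
from concurrent.futures import ThreadPoolExecutor
import requests

RECORD_URL = "https://example.test/api/records/42"      # hypothetical shared record

def update_as(user, value):
    return requests.put(
        RECORD_URL,
        json={"note": value},
        auth=(user, "password"),                         # assumed basic-auth test users
        timeout=10,
    ).status_code

with ThreadPoolExecutor(max_workers=2) as pool:
    statuses = list(pool.map(lambda args: update_as(*args),
                             [("alice", "alice-edit"), ("bob", "bob-edit")]))

final = requests.get(RECORD_URL, auth=("alice", "password"), timeout=10).json()
print("statuses:", statuses, "final note:", final.get("note"))
# Both writes reporting success while one silently vanishes is the classic lost update.
```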

Permissions

  • All actions available in the UI are actually allowed for the role
  • All actions that should be allowed are available in the UI for the role
  • Actions that are not allowed are not accessible to the role, even via direct links (or other indirect ways); see the sketch below
  • The set of “allowed actions” may vary through the workflow. Check it at every step
  • Add/remove a permission for the active user/role
  • A newly added function is aligned with the system’s permissions policy
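
The direct-link check can be scripted as a small role/action matrix, as in the sketch below; the actions, URLs, and tokens are all assumptions.

```python
# A sketch of checking that forbidden actions stay forbidden even via direct links.
import requests

BASE = "https://example.test"
ACTIONS = {                                   # hypothetical action -> direct URL
    "view_report":   "/reports/7",
    "delete_report": "/reports/7/delete",
    "manage_users":  "/admin/users",
}
ALLOWED = {                                   # expected permission matrix per role
    "viewer": {"view_report"},
    "admin":  {"view_report", "delete_report", "manage_users"},
}
TOKENS = {"viewer": "token-viewer", "admin": "token-admin"}   # assumed test tokens

for role, token in TOKENS.items():
    for action, path in ACTIONS.items():
        resp = requests.get(BASE + path, headers={"Authorization": f"Bearer {token}"}, timeout=10)
        should_allow = action in ALLOWED[role]
        actually_allowed = resp.status_code < 400
        if should_allow != actually_allowed:
            print(f"{role}: '{action}' expected allowed={should_allow}, got HTTP {resp.status_code}")
```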

General strategies

  • Use “production” or “production-like” test data
  • Extra step. Extra mile. (Just one more little action…)
  • Finish the flow like a “real user” would
  • Watch units of measure throughout the system (don’t add miles to kilometers); see the sketch below
  • At a random time, do a random action that should not affect the main scenario: before, in between, or after the “main” steps.
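
To illustrate the units-of-measure point, here is a tiny sketch of the bug this strategy hunts for, and of keeping the unit next to the number so conversions stay explicit; the conversion helper is just an example.

```python
# A tiny illustration of unit-of-measure confusion: keep the unit with the value
# and convert explicitly, so miles never get added to kilometers by accident.
from dataclasses import dataclass

KM_PER_MILE = 1.609344

@dataclass(frozen=True)
class Distance:
    value: float
    unit: str                      # "km" or "mi"

    def to_km(self) -> float:
        return self.value * KM_PER_MILE if self.unit == "mi" else self.value

leg_a = Distance(12.0, "km")
leg_b = Distance(5.0, "mi")

naive_total = leg_a.value + leg_b.value          # the bug this strategy hunts for
correct_total = leg_a.to_km() + leg_b.to_km()
print(f"naive={naive_total} km(?)  correct={correct_total:.2f} km")
```
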
About the Author

Oleksii Burdin
