Automated Testing ≠ automating manual tests.


  • #16696
    Kasper
    Participant
    @kasper

    Whenever I read about automated testing, it is about automating manual tests or test scripts first described by humans.
    I am working on software that does much more than that.
    Based on the interface or API, the software writes its own tests.
    E.g. the software recognises text fields and will try different inputs, then validates that the results do not contain (uncaught) errors or null pointers.
    Based on interactions, the software learns to recognise errors / bugs – other than technical errors, which are recognised by default.
    Based on interactions and indicators, the software learns what parts of a program are high risk, and what parts are not.

    How would you view such a system?
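    For concreteness, here is a rough sketch (in Python, with a hypothetical endpoint and field names – not my actual tool) of the simplest kind of probing I mean: discover the text fields, try generated inputs, and flag responses that leak uncaught errors.

        # Hypothetical endpoint and field names; illustrative only.
        import requests

        PROBE_VALUES = ["", "0", "-1", "a" * 10_000, "';--", "😀"]
        ERROR_MARKERS = ["NullPointerException", "Traceback (most recent call last)",
                         "Internal Server Error"]

        def probe_form(url: str, fields: list[str]) -> list[dict]:
            """Try each probe value in each discovered field; record suspicious responses."""
            findings = []
            for field in fields:
                for value in PROBE_VALUES:
                    resp = requests.post(url, data={field: value}, timeout=10)
                    leaked = [m for m in ERROR_MARKERS if m in resp.text]
                    if resp.status_code >= 500 or leaked:
                        findings.append({"field": field, "value": value,
                                         "status": resp.status_code, "markers": leaked})
            return findings

        # Example (replace with a real form endpoint and its field names):
        # print(probe_form("http://localhost:8080/signup", ["username", "email"]))

    The interesting part, of course, is everything this sketch does not do: choosing the inputs, classifying the responses, and deciding what counts as a problem.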

    #16744
    sneha shinde
    Participant
    @abhilashawaghm1

    Test automation can bring many benefits to your mobile application testing cycles, enabling you to build better applications with less effort.
    Top benefits of automated testing:
    1. Return on investment
    2. Volume
    3. Continuity
    4. Test execution 24/7
    5. Fewer human resources needed
    [commercial link removed MOD/JO]

    #16831
    Michael
    Participant
    @michaelabolton

    Whenever I read about automated testing, it is about automating manual tests or test scripts first described by humans.

    There’s a reason for that: most of what gets written about “automated testing” is pretty dopey. This is not surprising, since most people think about tools in general in a pretty dopey way. For instance, they talk about “automating manual tests”.

    People who don’t think about testing very much watch a tester at work and see nothing but the pressing of keys. The modeling, studying, conjecture, risk analysis, and critical thinking are invisible to such people. With that, they believe that tests can be automated.

    Tests, and testing, are neither manual nor automated; research, and researching, are neither manual nor automated; investigative journalism, and investigation, are neither manual nor automated. Programs, and programming, are neither manual nor automated. No one ever looks at a compiler and says, “Lo, there be automated programming.”

    I am working on software that does much more than that. Based on the interface or API, the software writes its own tests. E.g. the software recognises text fields and will try different inputs, then validates that the results do not contain (uncaught) errors or null pointers.

    Some aspects of this technology exist, and have existed for some time:  assertions and exception handling can be written into the code; compilers perform some fairly sophisticated checks; fuzzing tools vary input data both randomly and systematically.  There are far more potential problems than errors and null pointers; tools and tooling can be deployed against some of them.
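    To make that concrete, here is the sort of thing even a crude fuzzing harness does (the parser below is a stand-in for code under test, not any particular tool): vary the input both randomly and systematically, and flag only the uncaught surprises.

        import random
        import string

        def parse_quantity(text: str) -> int:        # stand-in for the code under test
            return int(text.strip())

        def random_inputs(n: int = 100):
            for _ in range(n):
                length = random.randint(0, 20)
                yield "".join(random.choice(string.printable) for _ in range(length))

        def systematic_inputs():
            yield from ["", " ", "0", "-1", "+1", "2147483648", "1e3", "NaN"]

        def fuzz(func, inputs):
            failures = []
            for value in inputs:
                try:
                    func(value)
                except ValueError:
                    pass                             # an expected, handled error class
                except Exception as exc:             # anything else is an uncaught surprise
                    failures.append((value, type(exc).__name__))
            return failures

        print(fuzz(parse_quantity, systematic_inputs()))
        print(fuzz(parse_quantity, random_inputs()))

    Notice that the harness can only report what it was told to look for; deciding that a ValueError is acceptable here and anything else is not was a human judgement, made in advance.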

    Based on interactions the software learns to recognise errors / bugs – other than technical errors which are recognised by default.

    Based on interactions and indicators the software learns what parts of a program are high risk, and what parts are not.

    I would be careful to say “the software refines its algorithms” rather than “the software learns”. That might sound like nit-picking, but the distinction is important. As powerful as it might be, software doesn’t learn the way humans do. Software doesn’t really have a notion of risk; it’s not self-aware; it doesn’t have social agency; it doesn’t feel pain or embarrassment when it screws up—because it doesn’t know it’s screwing up. It literally doesn’t know what it’s doing.

    How would you view such a system?

    Ambitious.  People have been trying to get software to “recognize problems” for at least 50 years.  If you think in terms of software being able to recognize inconsistencies between “this” and “that” in accordance with its programming, you may be able to create some powerful tools.  That’s not recognition (except metaphorically), but it can be useful. If you think about software as learning to recognize what parts of the program are high risk, you’ve got a long road ahead of you.
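    One concrete reading of “inconsistency between this and that”: compare the same fact obtained two ways and flag disagreement. (A trivial, illustrative example; the names are made up.)

        def check_consistency(api_total: float, ui_total: float, tolerance: float = 0.005) -> bool:
            """True if the order total shown in the UI matches the figure the API reports."""
            return abs(api_total - ui_total) <= tolerance

        assert check_consistency(19.99, 19.99)
        assert not check_consistency(19.99, 21.49)   # a tool can flag this; deciding whether
                                                     # it matters is still a human's call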

    #16859
    Lada
    Participant
    @lelazg

    If the issues were not null pointers or other unhandled errors, how would this tool recognize a bug?

    I’m not clear on what you meant in your post about how the concept of self-learning would work.

    Is the idea that the tool learns user action patterns by recording usage of the system under test and then predicts the expected actions? Expected actions would then become some sort of oracles to test against, i.e. comparing the results of past actions to the results of test actions.

    If so, then if the user changes his behavior due to changed business needs (without the system itself having to change), it would be difficult for the tool to determine whether the new flow returns the expected result or not.
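    Roughly, the kind of comparison I am imagining is this (illustrative names only, no specific tool implied):

        # Replay recorded actions and diff current results against what was
        # recorded during earlier usage of the system under test.
        recorded_baseline = {
            "POST /cart?item=42": {"status": 200, "items_in_cart": 1},
            "GET /cart/total":    {"status": 200, "total": 19.99},
        }

        def check_against_baseline(baseline: dict, run_action) -> list[str]:
            """run_action(key) re-executes a recorded action and returns its current result."""
            mismatches = []
            for action, expected in baseline.items():
                actual = run_action(action)
                if actual != expected:
                    mismatches.append(f"{action}: expected {expected!r}, got {actual!r}")
            return mismatches

        # If business behavior changes, the baseline itself goes stale: the tool cannot
        # tell a regression from an intentional change without a human updating it.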


    #16864
    Robin
    Participant
    @robingoldsmith

    Exercising the code as written is a form of structural (white box) testing. Tools have been around for years that automate creating and executing tests based on the coded logic and data fields. Some take data field range characteristics into account so they can go beyond inputs that merely cause branches to be true or false and also generate boundary tests for each field. Such automated white box tests no doubt can save some time and grunt work; and they can enforce greater testing discipline, including possibly creating some unusual conditions that ordinarily might be assumed to be the sole province of exploratory testing.
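    For instance, a range-aware generator can emit the classic boundary cases for each field automatically (the field ranges below are made up purely for illustration):

        def boundary_values(minimum: int, maximum: int) -> list[int]:
            """Just outside, at, and just inside each end of a field's declared range."""
            return [minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1]

        fields = {"age": (0, 130), "quantity": (1, 99)}        # hypothetical field ranges
        test_cases = {name: boundary_values(lo, hi) for name, (lo, hi) in fields.items()}
        print(test_cases)   # {'age': [-1, 0, 1, 129, 130, 131], 'quantity': [0, 1, 2, 98, 99, 100]}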

    However, regardless of whether generated manually or in some automated fashion, white box and other tests based solely on what has been written cannot tell whether the code as written is what should have been written, and they’re especially poor at detecting omitted code. Effective testing takes functional (black box) as well as white box tests; and it takes informed analysis of test results.

    Robin F. Goldsmith, JD advises and trains business and systems professionals on risk-based Proactive Software Quality Assurance.

    #16870
    Kasper
    Participant
    @kasper

    Error submitting reply. Will try to replicate my answer below.


    #16877
    Kasper
    Participant
    @kasper

    @michaelabolton
    I agree with most of what you say here. I use the term ‘learning’ because I will be using machine learning algorithms, and machine learning, deep learning, etc. are now part of the common language. Your description is factually better, but ‘learning’ appeals more to non-technical readers.
    In the fields of development and security testing there is software available that can do part of what I want to achieve, some of it open source. I am not aware of any software that does everything I aim for.
    I am aware that my goal is ambitious, and I probably cannot achieve everything I aim for.
    By actually using the parts that are ready as tools, I hope to learn more and accelerate development. By making it open source, I hope others will use the software and possibly contribute to it; I also hope to learn from this.
    Even so, I will probably need years of research and development – even if the result only contains part of the intended functionality.

    Ultimately I do want the software to ‘recognize’ both inconsistencies and elements and parts of the program that are high risk.


    @lelazg

    I intend to use machine learning algorithms to let the software create a model and algorithms, and predict desirable outcomes.
    This will be based on data, not on user interactions.
    If the software used user interactions, it would not do much more than existing test tooling.
    As stated above, I know this is highly ambitious, and the process will absolutely need to be supervised in the beginning. As the data grows and the algorithms improve, less supervision will be needed.
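    As a very rough sketch of the direction (not my actual design – the features and the use of scikit-learn here are purely my own illustration): learn a model of “normal” response characteristics from collected data, then flag deviations for human review while supervision is still in the loop.

        from sklearn.ensemble import IsolationForest

        # Each row: [status_code, response_time_ms, response_length] from past test runs.
        training_data = [
            [200, 120, 5300], [200, 135, 5280], [404, 90, 310],
            [200, 110, 5295], [404, 85, 305], [200, 140, 5310],
        ]
        model = IsolationForest(contamination=0.1, random_state=0).fit(training_data)

        new_observations = [[200, 130, 5290], [500, 2400, 48]]
        for obs, flag in zip(new_observations, model.predict(new_observations)):
            print(obs, "review this" if flag == -1 else "looks normal")   # -1 = outlier

    Obviously that is the easy part; the hard part is learning features and indicators rich enough to say something about risk rather than just about outliers.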

    So user actions are not directly intended as input for the tool, and changes in user actions should not impact it.

    I hope this explains my intentions.


    #17108
    Jeremias
    Participant
    @roesslerj

    What you describe sounds a little bit like monkey testing. If that is the case, there are already tools for that. We even wrote one ourselves (ReTest). We also applied AI to monkey testing, with some interesting results. Yet that way you only find technical errors. I wouldn’t go so far as to call this testing.

    I would argue that testing is a sophisticated intellectual task that cannot be automated until we have strong AI. But then you’d rather automate the development than the testing.

    most of what gets written about “automated testing” is pretty dopey.

    I agree with Michael Bolton that “automated testing” is a pretty misleading term for something that is impossible given current technology.

    #17130
    Kasper
    Participant
    @kasper

    In answer to both @roesslerj and Robin Goldsmith: you don’t really grasp what I am trying to achieve.
    I want to use AI to learn about the software under test. This goes beyond existing white-box tools, and certainly beyond monkey testing.
    At the moment this AI learning ability is not present in any tool I have researched.

    #17133
    Jeremias
    Participant
    @roesslerj

    We created one such tool and are currently improving it, using different AI techniques. I would love to get some feedback on ReTest, if you’re so inclined.

    #17145
    Robin
    Participant
    @robingoldsmith

    @Kasper, it’s not a question of my grasping what you’re trying to achieve, for I have no doubts about your sincere desires. I’m just trying to point out the limits of what can and cannot be learned from the software under test itself, regardless of how intelligent or automated the learner is. I suspect what you want AI to learn is outside those limits. Far more important than what is in the software is what is not there, or is there but shouldn’t be, which you cannot ascertain purely from what is there, except for mechanically ascertainable structural mistakes such as missing “elses.” Wrong and missing requirements, and the resulting design, are the big issues in creating errors in the code that one is highly unlikely to discover just by looking at the code.

    Robin F. Goldsmith, JD advises and trains business and systems professionals on risk-based Proactive Software Quality Assurance.
