    #16696
    @kasper

    Whenever I read about automated testing it is about automating manual tests or test scripts first described by humans.
    I am working on software that does much more than that.
    Based on the interface or API, the software writes its own tests.
    e.g. The software recognises text fields and tries different inputs, then validates that the results do not contain (uncaught) errors or null pointers.
    Based on interactions the software learns to recognise errors / bugs – other than technical errors which are recognised by default.
    Based on interactions and indicators the software learns what parts of a program are high risk, and what parts are not.
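    A loose sketch of the first part of that idea – generated inputs checked for uncaught errors – might look like the following. The `process_field` handler is a hypothetical stand-in for the system under test, not part of any real tool described here:

```python
# Sketch of interface-driven test generation: recognise a text field,
# generate varied inputs, and treat any uncaught exception as a finding.
import random
import string

def process_field(value: str) -> str:
    """Hypothetical system under test: trims a submitted text value."""
    if "\x00" in value:
        raise ValueError("embedded NUL")  # a deliberate bug to find
    return value.strip()

def generate_inputs(n: int = 50):
    """Mix deliberately chosen edge cases with random printable strings."""
    fixed = ["", " ", "0", "-1", "a" * 1000, "'; DROP TABLE users;--", "\u0000"]
    randoms = [
        "".join(random.choices(string.printable, k=random.randint(1, 40)))
        for _ in range(n)
    ]
    return fixed + randoms

def run_generated_tests():
    """Feed each generated input; record uncaught errors and null results."""
    findings = []
    for value in generate_inputs():
        try:
            result = process_field(value)
            if result is None:
                findings.append((value, "returned None"))
        except Exception as exc:  # an uncaught error is a finding
            findings.append((value, repr(exc)))
    return findings
```

    Here only the NUL-byte edge case trips the planted bug; a real tool would also need oracles beyond “no exception was raised”.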

    How would you view such a system?

    #16744
    @abhilashawaghm1

    Test automation can bring many benefits to your mobile application testing cycles, enabling you to develop better applications with less effort.
    Top benefits of automated testing:
    1. Return on investment
    2. Volume
    3. Continuity
    4. Test execution 24/7
    5. Fewer human resources required
    [commercial link removed MOD/JO]

    #16831
    @michaelabolton

    Whenever I read about automated testing it is about automating manual tests or test scripts first described by humans.

    There’s a reason for that: most of what gets written about “automated testing” is pretty dopey. This is not surprising, since most people think about tools in general in a pretty dopey way. For instance, they talk about “automating manual tests”.

    People who don’t think about testing very much watch a tester at work and see nothing but the pressing of keys. The modeling, studying, conjecture, risk analysis, and critical thinking are invisible to such people. With that, they believe that tests can be automated.

    Tests, and testing, are neither manual nor automated; research, and researching, are neither manual nor automated; investigative journalism, and investigation, are neither manual nor automated. Programs, and programming, are neither manual nor automated. No one ever looks at a compiler and says, “Lo, there be automated programming.”

    I am working on software that does much more than that. Based on the interface or API, the software writes its own tests. e.g. The software recognises text fields and tries different inputs, then validates that the results do not contain (uncaught) errors or null pointers.

    Some aspects of this technology exist, and have existed for some time:  assertions and exception handling can be written into the code; compilers perform some fairly sophisticated checks; fuzzing tools vary input data both randomly and systematically.  There are far more potential problems than errors and null pointers; tools and tooling can be deployed against some of them.
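    The random-plus-systematic input variation mentioned above can be illustrated with a toy fuzzer. The `parse_age` function is an invented stand-in for the system under test, not a real fuzzing tool’s API:

```python
# Toy fuzzer: combine deliberately chosen boundary cases with random
# inputs, and bucket the failures by exception type.
import random

def parse_age(text: str) -> int:
    """Hypothetical system under test: parse an age with range checks."""
    value = int(text)  # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

def systematic_cases():
    """Boundary values chosen deliberately, including invalid ones."""
    return ["0", "150", "-1", "151", ""]

def random_cases(n=20, seed=42):
    """Random integer strings, seeded for reproducibility."""
    rng = random.Random(seed)
    return [str(rng.randint(-10**6, 10**6)) for _ in range(n)]

def fuzz():
    """Run all cases; return rejected inputs grouped by failure kind."""
    failures = {}
    for case in systematic_cases() + random_cases():
        try:
            parse_age(case)
        except Exception as exc:
            failures.setdefault(type(exc).__name__, []).append(case)
    return failures
```

    Real fuzzers add coverage feedback, input mutation, and crash triage on top of this basic generate-and-observe loop.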

    Based on interactions the software learns to recognise errors / bugs – other than technical errors which are recognised by default.

    Based on interactions and indicators the software learns what parts of a program are high risk, and what parts are not.

    I would be careful to say the “software refines its algorithms” rather than “the software learns”.  That might sound like nit-picking, but the distinction is important. As powerful as it might be, software doesn’t learn the way humans do. Software doesn’t really have a notion of risk; it’s not self-aware; it doesn’t have social agency; it doesn’t feel pain or embarrassment when it screws up—because it doesn’t know it’s screwing up.  It literally doesn’t know what it’s doing.

    How would you view such a system?

    Ambitious.  People have been trying to get software to “recognize problems” for at least 50 years.  If you think in terms of software being able to recognize inconsistencies between “this” and “that” in accordance with its programming, you may be able to create some powerful tools.  That’s not recognition (except metaphorically), but it can be useful. If you think about software as learning to recognize what parts of the program are high risk, you’ve got a long road ahead of you.

    #16859
    @lelazg

    If the issues were not null pointers and other unhandled errors, how would this tool recognise a bug?

    I’m not clear on what you meant in your post about how the concept of self-learning would work.

    Is the idea that the tool learns user action patterns by recording usage of the system under test and then predicts the expected actions? Expected actions would then become some sort of oracles to test against, i.e. comparing the results of past actions to test actions.

    If so, then in case the user changes their behaviour due to changed business needs (without the system having to change), it would be difficult for the tool to determine whether the new flow returns the expected result or not.

    #16870
    @kasper

    Error submitting reply. Will try to replicate my answer below.

    #16877
    @kasper

    @michaelabolton
    I agree with most of what you say here. I use the term “learning” because I will be using machine learning algorithms, and machine learning, deep learning, etc. are now part of the common language. Your description is factually better, but “learning” appeals more to non-technical readers.
    In the fields of development and security testing there is software available that can do part of what I want to achieve, some of it open source. I am not aware of any software that does everything I aim for.
    I am aware that my goal is ambitious, and I probably cannot achieve everything I aim for.
    By actually using the parts that are ready as tools, I hope to learn more and accelerate development. By making it open source, I also hope others will use the software and possibly contribute to it, and that I can learn from that as well.
    Even so, I will probably need years for research and development – even if the result only contains part of the intended functionality.

    Ultimately I do want the software to ‘recognize’ both inconsistencies and elements and parts of the program that are high risk.

    @lelazg
    I intend to use machine learning algorithms to let the software build a model and predict desirable outcomes.
    This will be based on data, not on the interactions of users.
    If the software used user interactions, it would not do much more than existing test tooling.
    As stated above, I know this is highly ambitious, and the process will absolutely need to be supervised in the beginning. As the data grows and the algorithms improve, supervision can be reduced.

    So user actions are not directly intended input for the tool and changes in user actions should not impact the tool.
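    One very simplified way to picture the data-driven idea – flagging outputs that look unlike anything seen in training data – is a token-familiarity score. This is an illustrative sketch only, with an invented tokenisation and scoring rule, not the actual design being described:

```python
# Sketch: score a response by the fraction of its tokens never seen
# in known-good training responses; higher scores are more suspect.
from collections import Counter

def train(responses):
    """Count token frequencies over known-good responses."""
    counts = Counter()
    for response in responses:
        counts.update(response.lower().split())
    return counts

def anomaly_score(counts, response):
    """Fraction of tokens unseen in training (0.0 familiar, 1.0 alien)."""
    tokens = response.lower().split()
    if not tokens:
        return 1.0
    unseen = sum(1 for token in tokens if counts[token] == 0)
    return unseen / len(tokens)

# Hypothetical known-good responses used as training data.
model = train(["order accepted", "order shipped", "payment accepted"])
```

    A real system would need far richer features and supervision, as discussed above, but the shape is the same: a model derived from data, applied to judge new outcomes.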

    I hope this explains my intentions.
