

    Being new to testing as a career, but experienced in the same business and knowledgeable about the product, I was curious what approaches testers use when learning a new product for the very first time.

    Cursory googling hasn’t brought up much on the specific topic of a tester’s approach to this, just the usual Learning2Learn courses.

    I suppose my question is: do those with experience in switching between entirely different products have any insight or particular approaches, methodologies, etc. they use when learning a new product, to quickly and thoroughly form a basis to begin their testing?


    Look into “Rapid Software Testing”’s heuristics and mnemonics for learning new products quickly.


    Hi Adam,

    Welcome to the testing world!

    This is a challenge for me on pretty much every project I work on. I test hardware and software in the factory automation world, and have worked in R&D for the same company for 16 years now. Not having an engineering background for my education means that I have to learn about new devices regularly. Even with the extensive experience within my company, there are always new products to be developed.

    My approach is usually as follows:
    1 – Research competitor products. Get budget to buy competitor products and follow the manuals for configuring and using them. We do this for software and hardware products. It also helps to compare competitor products to find the best implementation, and to discover what not to do, what is confusing, and what is necessary. Send your feedback to the client and development team. Show demos, and get hands-on with your whole team to share knowledge. This will definitely save time later in the project!
    2 – Meet the client (or product owner, or end users). Sit with them to find out how they will use the product. Try to find any intrinsic requirements, those they expect but have not directly stated before.
    3 – Read the requirements. Ask questions on any that are not immediately clear to you. Make sure you’ve checked for conflicts, and get the conflicts clarified with the client (or product owner, etc). Clarifying requirements will really help you understand the purpose behind the product, which is not always what has been implemented!
    4 – If they exist yet, read the functional specs. These should give you a good idea of how the device works.
    5 – If it exists, read the manual. Follow any procedures in the manual.
    6 – Read the support requests from end users. These can give good insight into how end users actually use it.
    7 – Then the best part… play with it. Follow the happy path, then try to break it.
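    The “happy path first, then try to break it” idea in step 7 can be sketched as a tiny test. The `set_speed` function and its 0–3000 rpm limit are invented here purely for illustration, not from the original post:

```python
# Hypothetical device API: a motor speed setter with an assumed
# valid range of 0-3000 rpm. Invented purely for illustration.

def set_speed(rpm: int) -> int:
    """Set a motor speed; valid range is 0-3000 rpm (assumed spec)."""
    if not isinstance(rpm, int):
        raise TypeError("rpm must be an integer")
    if rpm < 0 or rpm > 3000:
        raise ValueError("rpm out of range")
    return rpm

# Happy path first...
assert set_speed(1500) == 1500

# ...then try to break it: boundaries and invalid inputs.
assert set_speed(0) == 0
assert set_speed(3000) == 3000
for bad in (-1, 3001):
    try:
        set_speed(bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass  # rejected as expected
```

    The boundary values (0, 3000, -1, 3001) come straight from the assumed spec; real limits would come from the requirements or functional specs in steps 3 and 4.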

    I hope this is helpful.


    Hi Adam,

    Exploratory testing has a significant role in learning a new product. Basically, you investigate it and go through its functionality just as a user would, and see how it behaves in various situations. You may also notice possible weak points or vulnerabilities that can affect its behavior. Thus you can better understand the product and see what is OK and what is not; then you can think about test scenarios and how to do the testing.

    Another important thing is reading the requirements. This can be combined with exploratory testing. However, it is possible that not everything is well documented: the documentation may not cover all workflows, may lack detail, or may not be updated with the latest changes.

    Another option is to ask colleagues who have more experience with the product and know it better. It can be anybody on the team: dev, test, ops.

    I also recommend having a look at “Rapid Software Testing”.



    The most positive experience I had learning a new product was going on-site and sitting down with users to watch how they use the software.
    It wasn’t planned for a tester to go on-site (it should have been the BA), but it surely helped me a lot.


    A long time ago I moved from a dev role into test – a new job, software I wasn’t familiar with. The software was being ported from BDE to ADO, and my first task (as the sole tester for the company) was to find all the problems. Armed with the user help, I started at the top of the menu and worked through every screen in the system with every combination of system settings, finding plenty of problems as I went. I still didn’t find all the problems, but having been able to speak to a selection of customers I was able to target key functional areas and follow the most frequently taken paths through the software. Similarly, when testing changes in functionality I try to identify what the most frequent paths are going to be, how the user will translate their skills from the old to the new version of the product (which areas will they find most tricky? These will potentially fail testing on usability grounds), and whether there’s a data translation path from the old to the new.
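    “Every combination of system settings” grows multiplicatively, so it is worth counting the configurations before committing to a full pass through every screen. A minimal sketch (the setting names and values are made up, not from the post above):

```python
# Enumerate every combination of a few hypothetical system settings.
from itertools import product

settings = {
    "db_driver": ["BDE", "ADO"],
    "locale": ["en-GB", "en-US"],
    "rounding": ["bankers", "half-up"],
}

combos = list(product(*settings.values()))
print(len(combos))  # 2 * 2 * 2 = 8 configurations to walk through

for combo in combos:
    config = dict(zip(settings.keys(), combo))
    # ...apply config, then work through every screen under it
```

    With more settings the product explodes quickly, which is why the post falls back on customer conversations to target the most frequently taken paths instead of exhaustive coverage.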

    If there aren’t user guides and specs, then talking to analysts, devs and implementation specialists will help, as will researching similar products. I once used a free demo version of Sage to assist with testing a new Payroll product (in the days before HMRC produced robust test cases), on the premise that Sage was a widely used package and so was unlikely to have any serious problems when calculating payslips. That proved far faster than manually checking calculations, although I did work through a sample to be sure I wasn’t finding bugs in Sage and replicating them (there were a few bugs in Sage).
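    The Sage comparison above is essentially differential testing: run the same inputs through the product under test and a trusted reference, then investigate any disagreement by hand. A minimal sketch, with both calculators as invented stand-ins rather than real Sage or payroll logic:

```python
# Differential-testing sketch: both functions are invented stand-ins.

def payroll_under_test(gross: float, rate: float) -> float:
    """The new product's calculation (hypothetical)."""
    return round(gross * rate, 2)

def reference_payroll(gross: float, rate: float) -> float:
    """Pretend this wraps the trusted package's result."""
    return round(gross * rate, 2)

samples = [(2500.00, 0.20), (1234.56, 0.19), (99.99, 0.0)]
mismatches = [
    (g, r) for g, r in samples
    if payroll_under_test(g, r) != reference_payroll(g, r)
]
assert mismatches == []  # any mismatch gets investigated by hand
```

    As the post notes, a mismatch does not tell you which side is wrong, so spot-checking a sample by hand remains necessary to rule out replicating the reference’s own bugs.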


    We published an eBook recently on this exact topic about adapting to testing quickly in a new context. It offers some tips on what to do. You can get the eBook here


    In my experience, you learn more about a tool when you actually use it to test an application. So my approach is simply to start using and exploring the tool as you do your testing. Always have the documents and tutorials, if any, ready for reference, and read as you test.
