Testing As A Bottleneck: Q&A with Kim
December 7, 2015 at 4:05 pm #10258
Please submit your questions for Kim here.
The webinar recording is available now here.
December 7, 2015 at 4:30 pm #10260
Hey Kim and Ronan,
just wanted to say thank you for a great webinar, I really enjoyed hearing how things work at Microsoft. I especially love hearing about how complex Microsoft’s approach is; the level of analysis going into the decision of whether a test case is worth running repeatedly or not is both impressive and astounding. I’d love to get near a system like that!
I also enjoyed the cost model and it’s something I’ll look into in further detail, as it’s something that gels well with my thought process. Thank you for answering my question relating to different tests for different environments, and apologies for trying to squeeze sixty questions into the one tiny paragraph!
Cheers,
Nick
December 7, 2015 at 4:53 pm #10261
My pleasure. Well, things in industry at big scale are a bit different. So what do you do? Most of the problems stem from the fact that testing did not evolve as quickly as development tools. We can build systems at incredible speed and people believed that would solve the problem. Of course it did not; faster builds just mean we build more often, and this triggers more test runs. The fact that testing was not the number one blocker two years ago did not imply that it would not become one, but as humans we do not look too far ahead.
Let me know if you have more questions. Happy to get any questions or feedback. For most of the stuff I presented, there exist papers that go into a bit more detail.
Cheers,
//Kim
December 7, 2015 at 5:39 pm #10264
Great presentation, Kim. Thanks for sharing this!
I was wondering how you calculate the machine cost per time. Do you only include hardware costs, or do you include software/licensing costs in that as well?
Thanks!
December 7, 2015 at 6:41 pm #10265
Dave,
We used the price of the Azure VM that corresponded to the machines (just take the most expensive there is). 🙂 This includes maintenance and power. However, we do not count setup time or the like; we only used the execution time of the test. Thus, the cost of running a test is actually much higher. But the key point is not so much to get all cost factors 100% correct (you will never get there) but that the relations between the factors are stable and reflect reality. There are very few cases in which tests are nearly as expensive to run as to skip. For the vast majority of tests it is absolutely clear whether you want to run them or not.
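To make the arithmetic concrete, here is a rough sketch of that run-versus-skip comparison. All the numbers are invented, and the skip side (failure probability times the cost of a missed defect) is a simplification rather than the exact model from the webinar; only the structure of VM price times execution time comes from the description above.

```python
# Rough sketch of the run-vs-skip comparison (all numbers invented; the
# machine cost is approximated by an Azure VM hourly price times execution time).
VM_COST_PER_HOUR = 0.90          # hypothetical price of the most expensive VM


def cost_to_run(execution_seconds):
    """Machine cost of one execution; setup time is deliberately ignored."""
    return VM_COST_PER_HOUR * execution_seconds / 3600.0


def cost_to_skip(failure_probability, cost_of_escaped_defect):
    """Simplified expected cost of skipping the test and letting a defect slip through."""
    return failure_probability * cost_of_escaped_defect


# A slow test that almost never finds anything vs. a fast, valuable one:
print(cost_to_run(1800), cost_to_skip(0.0001, 50))   # ~0.45 to run vs. 0.005 to skip
print(cost_to_run(5), cost_to_skip(0.05, 200))       # ~0.00125 to run vs. 10 to skip
# For most tests the comparison is this lopsided; borderline cases are rare.
```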
Hope this casts some light into the shadow.
Cheers
//Kim
December 7, 2015 at 6:42 pm #10266
Hey Kim,
a follow-up question if I may: you said that ideally you’d be mocking services outside of the integration environment, but the cost/effort may exceed the payoff of creating the necessary mock service. I wonder, in the scenario where you’re creating low-level functional tests in an integration environment, do you change your normal approach in comparison to a mocked environment? I.e. overheads are higher, so do you try to keep the automated test cases very few, almost down to a smoke-test subset? Or, because it’s the lowest-level environment you have, do you go ahead with automating at the lowest level you can, despite being in an integration environment? As an example, consider throwing a large amount of variant data at an input box against a mocked service, as opposed to the input box being hooked up to an API which is performing look-ups in real time on the data input.
Cheers,
Nick
December 7, 2015 at 7:12 pm #10267
Yeah, as a quick and dirty estimate that is very helpful. Thanks!
December 8, 2015 at 6:31 am #10270
Nicholas,
That depends on many aspects: the type of product, the test, the scenario, the team, the release cycles, etc. But I would say that most teams on the bigger products, say 500+ engineers, prefer fully automated tests with mocks where needed, but as few as necessary. Usually, the test environment defines this border: say we run tests under non-admin privileges, which means starting a service or spinning up VMs is not possible. If the environment provides them a huge benefit, say fully automated tests running super fast, teams will invest to mock what is necessary. If they cannot make the test a unit or multi-component test that fits into the environment, they prefer to keep it a full-fledged system and integration test, but run it less frequently and maybe extract the most essential parts into lower-level tests.
Having said this, the problem is always that mocks do not behave like real code and that you need to maintain them. If you change the behavior of the code you might need to change the mocks, and most importantly you need to keep track of these changes. This can become a large overhead. So generally, you want to mock as little as you can but as much as necessary.
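To make the trade-off concrete, here is a minimal sketch in the spirit of Nick’s input-box example. It is not our actual tooling, and the names LookupService-style `lookup`, `exists`, and `validate_input` are made up for illustration; the point is only how a mock keeps the test fast while creating the maintenance burden described above.

```python
# Minimal sketch of mocking a real-time lookup dependency (hypothetical names).
from unittest.mock import Mock


def validate_input(value, lookup_service):
    """Accept the value only if it is non-empty and the lookup service knows it."""
    if not value:
        return False
    return lookup_service.exists(value)


def test_validate_input_with_mocked_lookup():
    # The mock stands in for the real API, so the test runs fast and offline...
    lookup = Mock()
    lookup.exists.return_value = True
    assert validate_input("customer-42", lookup)

    # ...but it only behaves the way we told it to. If the real service changes
    # (say it starts rejecting IDs with dashes), this mock must be updated too:
    # that is the maintenance cost of mocking.
    lookup.exists.return_value = False
    assert not validate_input("unknown-id", lookup)
```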
Additionally, the ability to mock depends on the modularity of your code (and on the stability of your API). Services are very good for mocking as they usually are very well componentized. Monolithic code bases are harder to mock. BUT, and this is interesting, if you are componentized the world is not as shiny as it seems, because you will run into versioning issues. Components can be on different release speeds and thus dependencies on specific versions become an issue. Say A depends on B v1.2 and C depends on B v1.3, and A depends on both B and C. Well, you have a conflict. Now who has to change? Do we force B to upgrade? What consequences would that have? How much instability does this introduce, and does A then effectively define the release cadence for all other modules? And this is very much also a testing issue, as we now need tests for different versions and different combinations of versions. But you can mock very well. 🙂
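Here is a toy sketch of that conflict. The package names A, B, and C follow the example above; the manifest structure and the helper are made up just to show how the diamond shows up as two pinned versions of the same dependency.

```python
# Toy illustration of the diamond-dependency conflict described above
# (package names from the example; everything else is invented).
declared = {
    "A": {"B": "1.2", "C": "*"},   # A depends on B v1.2 and on C
    "C": {"B": "1.3"},             # C depends on B v1.3
}


def pinned_versions(package, seen=None):
    """Collect every exact version of each dependency reachable from `package`."""
    seen = seen if seen is not None else {}
    for dep, version in declared.get(package, {}).items():
        if version != "*":
            seen.setdefault(dep, set()).add(version)
        pinned_versions(dep, seen)
    return seen


conflicts = {dep: vs for dep, vs in pinned_versions("A").items() if len(vs) > 1}
print(conflicts)  # {'B': {'1.2', '1.3'}} -> someone has to change, and every
                  # combination that ships still needs to be tested.
```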