- April 23, 2016 at 12:54 pm | #11490 | Paul French (Participant) @paul-french
I’ll be up front and say that whilst I’ve been in the testing world for a long time, my experience is largely in UI-driven functional testing (and I also spent considerable time in the video game sector).
I have recently (over the last few months) been leading a team of mixed experience on a project where a considerable portion of the testing concerns inbound and outbound XML testing. Outbound testing principally consists of ensuring that messages generated in the source system conform to the structure defined in the interface definition, and that the payload in each message is taken from the right places in the database. Inbound testing is checking that the transformation to the target format is performed correctly by the BizTalk layer, and that the target system correctly validates the data and either consumes the message (if the data is valid) or handles the error (if the data is invalid, from both a structure and a config perspective).
I want to describe the high-level approach the team is taking and ask for feedback on the process, as I genuinely feel that we’re not doing this in the smartest way possible. Currently, everything I am describing is done manually.
For outbound messages, we’re testing the XML against the definition, checking that each attribute is present and that the data in the message is correct. The latter is the grindy bit: to be 100% certain we need to query the database to see what the content of the field is for each tag, then ensure this matches the data in the XML message. For messages consisting of hundreds of attributes this takes a long time to complete.
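The tag-by-tag database comparison is exactly the kind of grind that scripts do well. Here is a minimal sketch of the idea in Python, using an in-memory SQLite database as a stand-in for the source system; the field map, table names, and message shape are all hypothetical and would come from your interface definition in practice:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical mapping from XML element path to (table, column).
# In practice this would be generated from the interface definition.
FIELD_MAP = {
    "./Customer/Name": ("customers", "name"),
    "./Customer/Email": ("customers", "email"),
}

def check_payload(xml_text, conn, record_id):
    """Compare each mapped XML field against the source database row."""
    root = ET.fromstring(xml_text)
    mismatches = []
    for path, (table, column) in FIELD_MAP.items():
        node = root.find(path)
        xml_value = node.text if node is not None else None
        # Table/column names come from our own trusted map, so f-string is safe here.
        row = conn.execute(
            f"SELECT {column} FROM {table} WHERE id = ?", (record_id,)
        ).fetchone()
        db_value = row[0] if row else None
        if xml_value != db_value:
            mismatches.append((path, xml_value, db_value))
    return mismatches

# Demo with an in-memory database standing in for the source system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'ada@example.com')")

message = """<Envelope>
  <Customer><Name>Ada</Name><Email>ada@example.com</Email></Customer>
</Envelope>"""

print(check_payload(message, conn, 1))  # An empty list means every field matched.
```

Once the mapping exists, a message with hundreds of attributes costs no more to check than one with ten; the manual effort shifts to maintaining the map.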
For inbound messages there’s a little more to it. First we need to test that the message transformation is done correctly by the BizTalk layer (our process is that the message is converted to a canonical schema, then re-converted into the message forwarded on to the target internal system). This is performed by dropping in a sample canonical message containing the maximum number of attributes and ensuring that the output message conforms to the schema and that data is maintained (or transformed as per the business rules).
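The "data is maintained" half of that check can also be automated. You can’t run a BizTalk map in a unit-test script, so in this sketch a simple tag-rename dictionary stands in for the map; the tag names and rename rules are invented for illustration. The point is the shape of the check: round-trip a max-attribute sample and assert every canonical value survives under its mapped tag:

```python
import xml.etree.ElementTree as ET

# Hypothetical canonical-to-target tag map; the real rules live in the BizTalk map.
RENAME = {"CustName": "Name", "CustMail": "Email"}

def transform(canonical_xml):
    """Stand-in for the BizTalk transformation: rename canonical tags to target tags."""
    root = ET.fromstring(canonical_xml)
    out = ET.Element("Target")
    for child in root:
        ET.SubElement(out, RENAME.get(child.tag, child.tag)).text = child.text
    return out

def assert_data_maintained(canonical_xml, target_root):
    """Every canonical value must survive the transformation, under the mapped tag."""
    for child in ET.fromstring(canonical_xml):
        target_tag = RENAME.get(child.tag, child.tag)
        node = target_root.find(target_tag)
        assert node is not None and node.text == child.text, child.tag

sample = ("<Canonical><CustName>Ada</CustName>"
          "<CustMail>ada@example.com</CustMail></Canonical>")
assert_data_maintained(sample, transform(sample))
print("transformation preserved all fields")
```

In a real setup you would replace `transform` with a call that drops the sample into BizTalk and picks up the output message, keeping `assert_data_maintained` as the reusable check.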
There is generally no validation of the data at BizTalk, so data passes through into the message going to the target system. We then need to test exception handling (all the usual things: giving the system garbage, trying to trick it into overwriting different records, etc.). This is presently being done at a per-attribute level, so again, if there are hundreds of attributes then the exception handling is tested gradually, line by line.
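The per-attribute garbage testing is also highly mechanical: take one known-good message and generate a set of variants, each corrupting exactly one field. A sketch of that generator (the message and garbage values are illustrative; you would extend the garbage list with whatever abuse cases matter to your system):

```python
import xml.etree.ElementTree as ET

# Illustrative garbage values; extend with your own abuse cases.
GARBAGE = ["", "x" * 10000, "<script>", "../../etc/passwd", "'; DROP TABLE--"]

def mutated_messages(valid_xml):
    """Yield one message per (leaf field, garbage value) pair, corrupting a single field each time."""
    base = ET.fromstring(valid_xml)
    n_leaves = sum(1 for e in base.iter() if e.text and not list(e))
    for idx in range(n_leaves):
        for bad in GARBAGE:
            root = ET.fromstring(valid_xml)  # fresh copy so only one field is corrupted
            leaves = [e for e in root.iter() if e.text and not list(e)]
            leaves[idx].text = bad
            yield ET.tostring(root, encoding="unicode")

valid = "<Order><Id>1</Id><Qty>2</Qty></Order>"
cases = list(mutated_messages(valid))
print(len(cases))  # 2 leaf fields x 5 garbage values = 10 test messages
```

Each generated message can then be fed to the target system (or its validation layer), with the expectation that every one of them is rejected or error-handled rather than consumed.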
The final bit is checking the business rules/validation, e.g. if an attribute is blank, what the default value should be (or whether it’s an error). A lot of the attributes have these kinds of specific rules.
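When many attributes carry rules like "blank means default X" or "blank is an error", it can pay to codify the rules as a data table and drive the checks from it, rather than writing one manual test per attribute. A minimal sketch, with entirely hypothetical field names and rules:

```python
# Hypothetical rule table: attribute -> (required?, default applied when blank).
RULES = {
    "Currency": {"required": False, "default": "GBP"},
    "Quantity": {"required": True,  "default": None},
}

def apply_rules(record):
    """Return (processed record, errors) after applying defaults and required-field checks."""
    out, errors = dict(record), []
    for field, rule in RULES.items():
        if not out.get(field):  # blank or missing
            if rule["required"]:
                errors.append(f"{field} is required")
            elif rule["default"] is not None:
                out[field] = rule["default"]
    return out, errors

processed, errors = apply_rules({"Quantity": "5", "Currency": ""})
print(processed["Currency"], errors)  # GBP []
```

The same table then doubles as living documentation of the rules: adding an attribute means adding one row, not one more manual test script.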
The whole process is done manually; it’s very grindy and takes a long time. This just doesn’t feel right to me, and I’m really looking for opinions from others who have done this type of testing, to find ways to make it quicker.
I have some ideas of my own which I’m playing through in my head, but these are more to do with the amount of coverage and sign-off (e.g. is there an acceptable level of coverage and risk for us, rather than trying to check all attributes?) than with getting through the testing quicker at the desired level of rigour.
I appreciate this is a long post, so if anyone has read this far then thanks, and any feedback or techniques you have used would be really, REALLY welcome.
- April 28, 2016 at 3:14 pm | #11557 | Jesper (Participant) @jesper-lindholt-ottosen
Start by codifying the format of your XML streams: make a DTD so that you can express which fields are required, which fields can be repeated, etc. If that’s not possible in a DTD, try setting something up with regexps…
From there you should really, really look into having an XML validator in front of your content processing (and perhaps after it as well). I’m positive that automating the validation will save you a lot of manual time. If your coding platform doesn’t have this, consider having an XML broker written in Java/PHP, as these languages handle XML very well.
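To illustrate the "validator in front of content processing" idea: a real project would use a proper DTD/XSD validator (lxml’s `DTD` and `XMLSchema` classes do this in Python, for instance), but the shape of the gate is simple enough to sketch with the standard library alone. The structure rules below are invented for the example:

```python
import xml.etree.ElementTree as ET

# Hypothetical structural rules codified from the interface definition:
# tag -> (min occurrences, max occurrences or None for unbounded) under the root.
STRUCTURE = {"Id": (1, 1), "Item": (1, None)}

def validate(xml_text):
    """Reject a message before content processing if it is malformed or breaks the structure rules."""
    try:
        root = ET.fromstring(xml_text)  # well-formedness gate first
    except ET.ParseError as exc:
        return [f"not well-formed: {exc}"]
    errors = []
    for tag, (lo, hi) in STRUCTURE.items():
        count = len(root.findall(tag))
        if count < lo or (hi is not None and count > hi):
            errors.append(f"{tag}: expected {lo}..{hi or 'n'}, got {count}")
    return errors

print(validate("<Order><Id>1</Id><Item>a</Item></Order>"))  # []
print(validate("<Order><Item>a</Item></Order>"))            # ['Id: expected 1..1, got 0']
```

Once every message passes through a gate like this on the way in (and optionally on the way out), the manual structural checks disappear entirely, which is the point of the suggestion above.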
Now what is left to test is the CONTENT, not the format. I’m sure the investment will pay back quickly when, instead of churning XML files for breakfast, you can set up a tool to do this over and over again. Perhaps it would even enable you, as a tester, to tweak the rule engine?