We're currently running a couple of sprints to overhaul the look and feel of our web page for mobile. It's been a weird mix of interesting and boring so far, and I want to share an approach I've taken.
Because the goal of the sprint is to tidy up the mobile experience, which involves a huge number of changes, I'm for once somewhat overwhelmed by the builds being dropped into my test environment.
I came up with a checklist of things to cover, which includes:
- confirm all key fields are present
- confirm error text when mandatory fields missing
- confirm error text when junk input given for fields
- confirm error text when duplicate entries (re-enter your password etc) don't match
- do all this on multiple browsers
- do all this for mobile and desktop settings
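To make the first tick on that checklist concrete, here's a minimal sketch of how "confirm all key fields are present" could be scripted with Selenium WebDriver's Python bindings. The field ids and the URL are hypothetical placeholders, not the real page's; swap in your own locators.

```python
# Hypothetical field ids for a signup-style form.
EXPECTED_FIELDS = ["email", "password", "password-confirm"]

def missing_fields(driver, expected_ids=EXPECTED_FIELDS):
    """Return the ids from expected_ids with no matching element on the page."""
    # Selenium 4 accepts the locator strategy as a plain string; "id" is
    # exactly what selenium.webdriver.common.by.By.ID expands to.
    return [fid for fid in expected_ids if not driver.find_elements("id", fid)]

# Real usage needs a browser plus driver binary, roughly:
#   from selenium import webdriver
#   driver = webdriver.Firefox()
#   driver.get("https://example.test/signup")   # hypothetical URL
#   assert missing_fields(driver) == []
```

The same shape extends naturally to the error-text checks: submit with a field blank or junk-filled, then assert the expected message element is present.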
At the end of day one, we'd had three builds delivered. I'd been really dutiful and methodical for the first build, but, human nature being what it is, my concentration was flagging over the huge checklist.
My concern was that by the end of the sprint I might be getting a bit blasé about it all. Ironically, the end of the sprint is when the fine details of the checklist matter the most.
The problem is that our automation checks that we can perform business flows; it doesn't check the fine details of what's on the page. And that was exactly what I needed here.
So I wrote my own using WebDriver. But here's the thing: while it replicated all the above checks, I acknowledged it was only there to help us through the next few sprints whilst we put the site through UI changes. Afterwards it would simply be deleted and never used again, because it didn't make sense as ongoing regression.
It'd do all the checklist "ticks" for me, but leave the browser in a state where I could run my eye over it for anything out of place, and do a little bit of playing.
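That "do the ticks, then hand me the browser" pattern can be sketched like this: run each scripted check, report what failed, and deliberately never call driver.quit(), so the window is left open for a manual poke around. The check names and structure here are illustrative, not lifted from the actual script.

```python
def run_checklist(driver, checks):
    """Run each (name, check_fn) pair against driver; return the failed names."""
    failures = []
    for name, check in checks:
        try:
            passed = check(driver)
        except Exception:
            passed = False  # a check that blows up counts as a failed tick
        if not passed:
            failures.append(name)
    return failures

# Usage sketch -- note there's intentionally no driver.quit() afterwards,
# leaving the browser alive for eyeballing and exploratory poking:
#   failed = run_checklist(driver, [("key fields present", some_check)])
#   print("failed ticks:", failed or "none")
```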
Cool features it had:
- Using multiple drivers for different browsers, so it'd run the checks across browsers for me (mobile proper would still be manual, but hey, it's freeing up my time)
- Using the Selenium setSize command to mimic either mobile or desktop window sizes in my tests
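The window-size trick looks roughly like this in the Python bindings, where setSize is spelled set_window_size: resize the window so the page renders at its mobile or desktop breakpoint, then run whatever check you like. The widths below are common illustrative values, not the actual breakpoints my site keys on.

```python
# Illustrative viewport profiles -- match these to your CSS media queries.
VIEWPORTS = {
    "mobile": (375, 667),    # roughly a phone-sized window
    "desktop": (1366, 768),  # a common laptop resolution
}

def at_viewport(driver, profile, check):
    """Resize the window to the named profile, then run the given check."""
    width, height = VIEWPORTS[profile]
    driver.set_window_size(width, height)
    return check(driver)
```

Looping the same checks over several driver instances, one per browser, then covers the cross-browser tick in the same pass.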
I time-boxed myself to no more than 2 hours on this, and delivered what I wanted in 1.5. As usual I had a lot of ideas for enhancements, but reminded myself that chasing them would take me away from testing (the very thing I was trying to save time for), and ultimately checking these pages' content at this depth wasn't the kind of critical test we'd need in our ongoing suite.
As expected, we had another 4 releases today, and it allowed me to keep better pace with them, whilst focusing my time on key changes for each task as delivered.
It was a good example of how you can knowingly break a lot of the key rules of automation (for instance, I used methods, but not broken down the way I would for more detailed functional testing), and yet keep to the fundamental one: let automation deal with anything that looks like an item to check, and leave the manual tester more freedom to explore.
Now Playing: "History", The Verve