A little over two years ago, I was hired to start up a Selenium automation effort at a SaaS start-up named EchoSign. Six weeks later, Adobe acquired that start-up. I’ll be leaving Adobe EchoSign at the end of next week, but first, I want to do a post-mortem on my two-year Selenium automation effort. What went well? What didn’t go well? What am I going to do differently (better!) in my next Selenium automation start-up effort?
WHAT WENT WELL
Picking Python. In my previous Selenium work, I had used Perl. But since Selenium 2.0 was going to be released without official support for Perl just a month or so after I started work at EchoSign in 2011, I knew I had to pick a different language. Python turned out to be a good choice for many reasons, chiefly because it is so quick to get up to speed with. It took me almost no time to switch from Perl to Python, and a manual tester who wanted to move into our automation team was similarly successful at picking up Python in short order.
Framework: Using an expert’s. Way way too many test developers think they have to create a framework before they can get started developing tests. Yes, they need a framework before they can start developing tests. No, they do not have to develop that framework themselves–they can select an open-source one. I chose one that had just been developed in summer 2011; it has since evolved into pysaunter, developed by Selenium consultant Adam Goucher. Using pysaunter allowed me to get started writing tests the same day I installed the framework. Literally! That kind of instant productivity is really important when working at a chaotic start-up, where one is often dragged into helping with manual testing far more often than one would like.
Framework: Selecting pysaunter. Besides the instant productivity that pysaunter enabled for us early on, the framework also provided us with many useful features, some of which were pass-throughs from the popular pytest framework upon which it is built. One of the most useful was the -m option and the related pytest.mark decorator, which we put over every test.
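For example, a marked test might have looked something like this (the class, method, and marker names here are illustrative, not from the real EchoSign suite):

```python
# Every test carried one or more pytest.mark decorators so that pysaunter's
# -m option could select it at run time. (Illustrative names only.)
import pytest

class TestSend:
    @pytest.mark.regression
    @pytest.mark.send
    def test_send_single_document(self):
        # ... drive the browser through the page objects here ...
        assert True
```

Because every test is tagged this way, any boolean combination of markers can be handed to -m when kicking off a run.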
If we wanted to run the full regression suite, pysaunter -m 'regression' would suffice. If we wanted to run all regression tests except the send and authoring ones, we could specify pysaunter -m 'regression and not (send or authoring)'.
Pysaunter also produced results in the Ant JUnit format understood by Jenkins, which simplified this integration.
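As a sketch, the Jenkins side of that integration needs little more than a freestyle job with a shell build step plus a JUnit report publisher. The paths and marker below are assumptions for illustration, not our actual configuration:

```
# "Execute shell" build step (illustrative):
cd echosign-selenium
pysaunter -m 'regression'

# Post-build action: "Publish JUnit test result report", with the report
# pattern pointed at wherever pysaunter is configured to write its XML
# results, e.g. logs/*.xml
```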
Counting the number of tests in a particular functional area was also a snap. For example, the number of texttags tests in our suite can be determined via pysaunter -m 'texttags' --collectonly | grep Function | wc -l.
API: Choosing RC initially. Selenium 2.0 was a month away from release when my manager and I decided that I should start developing tests with the RC (Remote Control) API instead of WebDriver. We thought we’d wait six months to a year, and let somebody else “get the kinks out of” WebDriver. In my opinion, the RC API (at least in Python) is way easier to use than is the WebDriver one (even today). And the 1.0 docs were in far better shape in the summer of 2011 than were the 2.0 docs. Fewer bugs, improved usability, and better docs all helped make RC the right initial choice for us.
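To give a flavor of the usability difference, here is the same action sketched in each API. This is a generic illustration, not code from our suite; it assumes a running Selenium server for RC and a local Firefox for WebDriver, so the functions are shown but not invoked:

```python
# Side-by-side sketch of the same action in the RC and WebDriver APIs.
# Locators and URLs are invented; neither function is called here.

def send_with_rc():
    from selenium import selenium  # Selenium RC client (1.0-era API)
    s = selenium("localhost", 4444, "*firefox", "https://example.com")
    s.start()
    s.open("/")
    s.type("id=email", "user@example.com")
    s.click("id=send")
    s.stop()

def send_with_webdriver():
    from selenium import webdriver  # WebDriver client (2.0 API)
    d = webdriver.Firefox()
    d.get("https://example.com")
    d.find_element_by_id("email").send_keys("user@example.com")
    d.find_element_by_id("send").click()
    d.quit()
```

In RC, one flat object with string locators does everything; WebDriver makes you find elements first and then act on them, which is more flexible but more to learn.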
Integrating with Jenkins.
After a year and a half of having to download the latest bits onto actual hardware, modify the configuration files, and start up a run via the cmdline, it was a thrill to instead click the Build Now link on a Jenkins project. I wish we hadn't put off this effort for so long!
Mentoring a manual tester. About five months into the two years, a member of our functional (manual) testing group wanted to learn how to contribute to our Selenium project. She learned Python on her own, using Lynda.com courseware. I started her off with Selenium by creating one simple test case for our search functionality and then asking her to copy/modify it to create the rest of the search test cases (from the Test Case Manager we employ–TestLink). Then I did the same for the filter functionality on the EchoSign Manage page–I wrote the first test case and asked her to copy/edit in order to create the rest. Eventually, she moved on to modifying code in our Page Objects, and later still, to creating both new tests and new Page Object methods. After a year of steadily increasing automation work and steadily decreasing manual work, she was allowed to move into the automation team full-time. She now does virtually everything I do, including configuring Jenkins jobs, modifying and maintaining our Perl run script, and analyzing test case failures. I consider her success one of my biggest successes.
WHAT DIDN’T GO WELL
Integrating with Jenkins. Our first attempt at integrating with the EchoSign developers’ Jenkins installation went from huge thrill to huge disappointment in short order. The VMs that were connected to our Jenkins/Selenium projects kept crashing! This led me to temporarily pull the plug on the effort after three frustrating months. Once I got brave enough to try again, I found an expert elsewhere in the company, and by cloning one of his VMs, we were finally up and running with Jenkins for good. But this was two years in–way later than I would have liked.
Framework: Pysaunter snags. We hit a few pysaunter snags, especially in the area of upgrades. Pysaunter’s Google Group has only 22 members and pysaunter documentation is virtually nonexistent; more of both would be very useful.
Pysaunter’s most significant missing feature was the capability to re-run failing tests a second time. I managed to work around that problem via the run script I already had in place which permitted executing both the RC suite and the WD suite from a single command. To support re-running of failing tests, I enhanced that script to parse the .xml results files output by pysaunter, collect the failures, and then re-run each failing test via pysaunter. This was a bit laborious but well worth the effort since web site UI tests are somewhat prone to spurious failures.
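The idea behind that retry pass can be sketched as follows. Our real run script was Perl, so this Python version with invented names is just an illustration of the approach: parse the JUnit-style XML, collect the failures, and build a re-run list:

```python
# Sketch of the retry logic: parse pysaunter's JUnit-style XML results and
# collect the test cases that failed or errored, so they can be re-run.
# (Our actual script was Perl; names here are illustrative.)
import xml.etree.ElementTree as ET

def collect_failures(xml_text):
    """Return the names of test cases with a <failure> or <error> child."""
    root = ET.fromstring(xml_text)
    failed = []
    for case in root.iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            failed.append(case.get("name"))
    return failed

sample = """<testsuite>
  <testcase classname="TestSend" name="test_send_single"/>
  <testcase classname="TestSend" name="test_send_mega">
    <failure message="element not found"/>
  </testcase>
</testsuite>"""

print(collect_failures(sample))  # ['test_send_mega']
```

Each collected name can then be handed back to pysaunter for a second attempt, which filters out most of the spurious UI-timing failures.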
Switching to WebDriver. When I started creating the first RC tests in our suite in 2011, I was the only automation QA engineer in EchoSign. Following the advice of Adam Goucher in a presentation entitled “Create Robust Selenium Tests with Page Objects,” I deliberately did not “flesh out” Page Objects completely as I went. Instead, I created just what I needed for the tests I was writing. This approach worked fine. However, by the time we started using the WebDriver API in fall 2012, the automation team had grown to three. It didn’t work so well to create “just enough Page Objects” when there were other test devs around. We should have taken a more bottom-up approach at that point by creating the PO methods we thought we’d need first, and then writing tests that used them. Or maybe one of us could have had responsibility for all the PO methods, while the other two of us limited ourselves to writing only test cases.
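For reference, the “just enough” page-object style looks something like this minimal sketch. The names and locators are invented, not the EchoSign code, and the object accepts any WebDriver-like driver:

```python
# Minimal page-object sketch in the "just enough" style: only the methods
# the current tests need exist. (Invented names/locators, not our real code.)

class ManagePage:
    FILTER_INPUT_ID = "agreement-filter"  # illustrative locator

    def __init__(self, driver):
        self.driver = driver  # any WebDriver-like object

    def filter_agreements(self, text):
        box = self.driver.find_element_by_id(self.FILTER_INPUT_ID)
        box.clear()
        box.send_keys(text)
        return self  # allow chaining from tests
```

Tests then talk only to methods like filter_agreements, so locator churn stays inside the page object; the coordination problem we hit was simply that three people were all inventing such methods at once.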
WHAT I’LL DO DIFFERENTLY NEXT TIME
- Spend more time up front designing, writing, reviewing, and documenting our PO methods.
- Integrate with Jenkins sooner.
- Do more to help build a larger user community for whatever open-source framework I select so that I’ll have more peer support for my framework questions and problems.
All in all, it was immensely rewarding to start from nothing in the automation department and wind up two years later with a reasonably large test suite that has caught many bugs and is regularly used in all sorts of testing situations–patches, releases, failover/failback dry runs, daily developer work, tech ops changes, and more. I’m hoping to achieve the same success at my next gig, only more so, thanks to the lessons learned these past two years.