The Seven Deadly Sins of Selenium (with Apologies to Jason Carr and The East Bay Agilistry & QA Meetup Group)

An announcement for an East Bay Agilistry & QA Meetup contains a phrase that has piqued my interest: the seven deadly sins of Selenium. Since spotting this intriguing combination of just six words, I’ve spent a few mental cycles almost every day trying to come up with what might be on the list. My idea was to come up with my own list, then “check” my answers against those of Selenium expert Jason Carr, the presenter whose talk abstract contains the phrase. Eventually, I decided to “give up” and just see what his list contained. It turns out the announcement was not for a past Meetup, as I had thought, but for a future one (May 20th, to be precise). Consternation plus! So, I’m back to my original plan. Here’s my personal list of the seven deadly sins of Selenium:

  1. Failing to plan (at least a little!) for the eventual parallel execution of your tests. For example, let’s say you need to automate three tests, each of which changes a particular account setting to one of three available values and then verifies that the change has taken effect. Those three tests can all use the same account if they are to be run sequentially. But if they’re eventually to be run in parallel, each of the three needs its own account; otherwise, “test collisions” can and will occur (see the first sketch following this list). I once saw a suite of several hundred tests, all written without this type of planning and all run in parallel across several VMs. As you can imagine, the spurious failure rate for that suite was quite high.
  2. Writing standalone sleep statements, i.e., sleeps that are not part of a wait-for synchronization loop. In case you’ve missed the hundreds of posts explaining why these are a bad idea, here goes again: there’s no way for a mere mortal to know how long to sleep! If you use a sleep 30 when a particular run needs a sleep 31, your test fails unnecessarily. And if you use a sleep 30 when a sleep 1 would have sufficed, your test wastes 29 seconds. (The second sketch following this list shows the wait-for alternative.)
  3. Creating brittle locators. A lot has been written about this topic too, yet people still create really bad locators, probably with the idea that they’ll come back later and improve them. “Later” never arrives; what does happen is that the tests depending on those locators break easily. I once worked on a suite of tests in which many locators were several inches wide (!) and full of square brackets with numbers inside. Those were the “fine china” of brittle locators: any addition or deletion of one of the many earlier elements within the section of the page covered by such a locator would cause it to stop working. (The third sketch following this list contrasts the two styles.)
  4. Placing hard-coded strings of expected user-visible messages inside your tests (instead of in a centralized file or set of files). This violation of the page object methodology is bad for several reasons. Many of these strings will probably be expected by more than one test, and duplicating a string across a codebase is always bad practice. Why set yourself up to respond to a Product Manager’s trivial wording-change request by editing several places in the test suite instead of just one? It’s also much easier to find and change a string within a file (or files) of messaging strings than within hundreds (or thousands!) of test files. (The fourth sketch following this list shows one approach, which also covers sin #5.)
  5. Failing to plan for the eventual use of your tests for L10N (localization) testing. Any check of a user-visible message, and any text-based locator, needs to pull its strings from a centralized file (or files) that can be replicated and translated for each language supported by the application under test. Even if you avoid placing hard-coded strings within tests (sin #4), you can still go wrong by placing them within page objects, which won’t work well when the tests are later used for L10N (again, see the fourth sketch following this list).
  6. Ignoring the test automation pyramid by creating too many Selenium tests and too few service and unit tests. At a recent South Bay Selenium Meetup featuring a presentation entitled “Selenium at Salesforce Scale,” presenter Greg Wester made a very candid admission: Salesforce’s test automation pyramid was “upside down!” (He also said they were working hard on changing that.) Just as the tip of the food pyramid (fats and sugars) is calorically expensive, the tip of the test automation pyramid (UI tests) is expensive both to develop and, especially, to maintain. We all need to be mindful of this when deciding which tests to automate with Selenium. An excellent article on this is Will Hamill’s “Automated Testing and the Evils of Ice Cream.”
  7. Developing more Selenium tests than your team can properly maintain. This one may sound the same as sin #6, but it’s not. If you’re resource-rich, you can afford a somewhat wider tip on your test automation pyramid. But regardless of your resources, it is never okay to create more Selenium tests than you can manage to maintain, because it is critically important to avoid spurious failures, which have a variety of causes. Some tests will exhibit synchronization issues during runs long after the tests were developed. Some will fail because the devs changed a page without the test devs making an accompanying change to the corresponding page object in the test suite. Sometimes spurious failures are due to an unstable test environment. Whatever the reason, spurious failures are the breeding ground for developer complacency about test results. They can also be a huge time sink for the QA team that has to analyze test run results, which is often the same team responsible for developing tests for new features. If not dealt with promptly, these failures can quickly become an almost insurmountable obstacle. The best way to avoid creating such an obstacle is to ensure that an adequate amount of test developer time is allocated to test maintenance.
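
First, a minimal sketch of the fix for sin #1: one dedicated account per test, so the three setting-change tests can later run in parallel (e.g., via pytest-xdist). The account names and the login_as/set_notification_level/get_notification_level helpers are hypothetical placeholders, not part of any real API:

```python
import pytest

# One dedicated account per setting value, so parallel runs never collide.
ACCOUNTS = {
    "daily": "qa_user_daily@example.com",
    "weekly": "qa_user_weekly@example.com",
    "never": "qa_user_never@example.com",
}

@pytest.mark.parametrize("level", ["daily", "weekly", "never"])
def test_notification_level(level):
    account = ACCOUNTS[level]               # never shared with another test
    session = login_as(account)             # hypothetical helper
    set_notification_level(session, level)  # hypothetical helper
    assert get_notification_level(session) == level  # hypothetical helper
```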
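
Second, for sin #2, the standard wait-for alternative to a standalone sleep, using Selenium’s Python bindings; the URL and element ID here are invented for illustration:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com/settings")  # hypothetical page

# Bad: time.sleep(30) always burns 30 seconds, and still fails on a run
# that needed 31. Good: the explicit wait below returns as soon as the
# element appears, up to a 30-second ceiling.
banner = WebDriverWait(driver, 30).until(
    EC.visibility_of_element_located((By.ID, "confirmation-banner"))
)
```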
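
Third, for sin #3, a quick contrast between a brittle position-based XPath and a stabler attribute-based locator (both selectors are invented for illustration):

```python
from selenium.webdriver.common.by import By

# Brittle: breaks whenever an earlier sibling element is added or removed.
brittle = (By.XPATH, "/html/body/div[2]/div[1]/table/tbody/tr[7]/td[3]/a")

# Stabler: survives layout churn as long as the attribute itself remains.
stable = (By.CSS_SELECTOR, "a[data-test-id='export-report']")

# Either tuple unpacks into a lookup: driver.find_element(*stable)
```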
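
Finally, for sins #4 and #5 together, one way to centralize user-visible strings and key them by locale, so a wording change or a newly supported language touches one file rather than every test (the keys and translations are hypothetical):

```python
# messages.py -- single source of truth for expected user-visible strings.
MESSAGES = {
    "en": {"save_confirmation": "Your changes have been saved."},
    "fr": {"save_confirmation": "Vos modifications ont été enregistrées."},
}

def expected_message(key, locale="en"):
    """Look up the expected string for the given key and locale."""
    return MESSAGES[locale][key]

# In a test, instead of a hard-coded literal:
#     assert banner.text == expected_message("save_confirmation", locale)
```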

Now that you’ve read my list of the seven deadly sins of Selenium, I urge you to do the following:

  • Come up with your own list, publish it, and post the link to your list in a comment on this post.
  • If that sounds like too much work, post a comment here with any “sins” you consider worse than the ones I’ve listed; also specify which ones from my list should be replaced with yours.
  • Attend Jason’s talk on 5/20/14. If that’s not possible, then look for a published recording of the Meetup and/or Jason’s slides after the 20th.

Why am I so intrigued by the seven deadly sins of Selenium as a concept? I think it’s because more Selenium talks focus on “best practices” than “deadly sins.” But most of us are more motivated to avoid a “Selenium train wreck” than to adhere to “best practices.” So, it just might be in our best interest to focus more on avoiding Selenium sins rather than on following Selenium best practices. It’s just a slight shift in focus but one that could have a big impact on the quality of our Selenium test suites.


About Mary Ann May-Pumphrey

I'm a software QA automation engineer, focusing primarily on Selenium/WebDriver automation of the front end of web apps.

2 responses to “The Seven Deadly Sins of Selenium (with Apologies to Jason Carr and The East Bay Agilistry & QA Meetup Group)”

  • Jorge

    Hi Mary Ann,

First of all, thanks for your post; it is quite interesting, as it summarizes the daily annoyances that test engineers must confront when writing automated tests.

I am working for a cool start-up named BugBuster, and we are creating a cloud test platform that can considerably lighten the burden by simplifying test automation. To be honest, six of the seven sins you list are due to a lack of methodology rather than to Selenium itself, but there is at least one where BugBuster can help directly: it deals with asynchronous events and waits for the page to stabilize before moving on to the next action, which means there is no longer any need for sleep commands.

If you would like to take a look at BugBuster, you can try it for free and see how well it fits your projects. We would very much appreciate your feedback!

    More info: http://bugbuster.com

    Best,

    Jorge

  • autumnator

Well, I don’t have specific sins or best practices numbered up to seven, but I do have a rant about bad practices I’ve seen people commit, and what they should do instead:

    http://autumnator.wordpress.com/2013/07/11/developing-selenium-tests-with-proper-abstraction/
