Selenium Conference 2015: A First-Time Attendee’s Recap

Last week’s Selenium Conference in Portland was the fifth of its kind but the first one for me. I’m super glad I went, as several of the talks were noteworthy:

  • Selenium creator Jason Huggins’ World Domination: Next Steps was the keynote talk closing out the second day of the conference (the first day was devoted to workshops). A deeply autobiographical talk that segued into the history of Selenium, it then covered the current status of Jason’s one-man company–Tapster Robotics–before bringing all the threads together with a totally amusing slide showing a futuristic API call to starDriver, where the star refers to the wildcard asterisk character for “anything.” Jason did a super job of interweaving the personal and the technical, leaving the audience with a clear but unspoken message: Follow your dreams!
  • Selenium/WebDriver project lead Simon Stewart’s Selenium: State of the Union was the keynote talk opening the first day of the conference. The highlight of this talk for most audience members (judging by the audible “oohs!”) was the demo of three simulators simultaneously running the iOS Facebook app on the same host, using FBSimulatorControl, which was released during the Conference. Simon also covered a couple of inherent problems with end-to-end automated testing:
    • End-to-end testing is slow, for which he recommended Sauce Labs or BrowserStack as remedies.
    • End-to-end testing is unstable, for which he suggested that instead of focusing on writing better APIs, we might want to focus more on sharing our knowledge and experience with each other. As a Selenium Meetup Organizer, I was particularly intrigued to hear this from Simon, as it supported my view that the move by Selenium Meetup speakers away from tips/advice/best practices toward showing off cool technical inventions is a trend that isn’t necessarily good for the overall Selenium community.
  • Anand Bagmar’s To Deploy or Not To Deploy – Decide Using TTA’s Trend & Failure Analysis drew me into the audience because I wanted to find out what a TTA was! I was totally mesmerized by the TTA’s graphic portrayal of how a test suite measured up against the famous testing pyramid. Too many places where I’ve worked make no effort to have their various types of tests–manual, Selenium/WebDriver, web service, view, JavaScript, integration, unit–in the same type of repo, so no attempt can be made to analyze their product’s test suite as a whole. I saw the TTA’s measurement of the test suite vs. the testing pyramid as an excellent “baby step” for such companies to try to take, a step which would then allow them to move on to the other types of test analysis covered in this excellent talk.
  • Appium project lead Jonathan Lipps’ The Mobile JSON Wire Protocol was a super informative talk, almost a technical lecture in style. It included a primer on HTTP and how it works, along with an explanation of the motivation for the JSONWP before launching into the MJSONWP. Jonathan created excellent slides and presented the material very nicely.
  • Moiz Virani’s Testing the Testing Machine had the single best quotation of the entire conference: Software testers always go to heaven; they’ve already had their fair share of hell.
  • Denali Lumma’s Curing Impostor Syndrome (the keynote opening the third day of the conference) initially made me think she had made up some syndrome name for the purpose of her talk, but I soon found myself Googling “impostor syndrome” where I learned that it’s a well-known phenomenon! As her talk went on, I didn’t find myself thinking that I had impostor syndrome, but as soon as the ending applause was over, the woman sitting next to me gushed about how much the talk had resonated with her! This talk was very well delivered–it even included an exercise!
  • Anurag Singh’s A Year of Implementing Ideas from SeConf’14 was a talk that I think should be given at the Selenium Conference every year! Its subtle message was that every one of us who attended should come up with three things to implement over the course of the next year before the 2016 Selenium Conference, and get started on them as soon as we return to work.

Critic that I am, I only came up with two areas that could use improvement about the Conference:

  • Some of the speakers could really have benefitted from a review by, and feedback from, Conference organizers. For example, some of the non-native English speakers spoke too quickly to be easily understood, given their accents. A fluent speaker presented way too much Java code and didn’t seem to notice that she was literally losing her audience. Another speaker spent way too much time talking about how to test a ballpoint pen; that was fine for an attention-grabbing starter, but it quickly wore thin. As a former Toastmaster whose one year in the organization proved totally invaluable, I think technical presenters would do their careers a huge favor by joining a local Toastmasters club. And I hope next year’s Conference Committee will solicit volunteers to be on a new Pre-Conference Presentation Feedback Subcommittee! I’d like to volunteer for that duty right now!
  • Some speakers appeared on the agenda more than once. Denali Lumma gave a keynote and a talk entitled 2020 Testing. Jason Huggins gave a keynote and was on the Lightning Talks agenda at the end of the second day. Worst of all was Anand Bagmar’s three appearances on the agenda. While his To Deploy or Not To Deploy – Decide Using TTA’s Trend & Failure Analysis talk the second morning was really good, he was back on stage that afternoon co-presenting Say “No” to (More) Selenium Tests. I was particularly annoyed at this latter talk because it employed a couple of the same slides and concepts that had been covered in his morning talk! Fortunately, due to a Southwest Airlines snafu, I had to leave the conference a bit early on the last day, so I missed Anand’s Automate Across Platform, OS, Technologies with TaaS talk. (I might have totally blown a gasket if I’d seen any more duplicate slides!)

    Multiple talks by one presenter are something I think the Conference Committee needs to totally disallow in future years. Nobody but the MC and maybe the Selenium/WebDriver PL should be in front of a mike more than one time per Conference!

Besides the overall excellent keynotes and tech talks, there were a lot of other big pluses I got from attending the Conference:

  • As the San Francisco Selenium Meetup Organizer, I was on the lookout for both speakers and hosts for future Meetups. I came home with five tentative “volunteers.” (Their arms were only mildly twisted!)
  • I bumped into a former immediate co-worker and spent all of the second evening chatting with her about our careers, our work, and lots of other stuff. We both live in San José but somehow we hadn’t gotten together for over a year until the Conference!
  • Due to Southwest Airlines messing up my return travel plans, I wound up on the same flight as Denali Lumma, who, as one of the Conference Committee members, was actually interested in hearing my feedback on the Conference!

Overall, I thought my first Selenium Conference was a super use of my time and money–many thanks to this year’s Conference Committee! I hope to attend again, at least the next time it’s in the U.S. And I’m definitely going to volunteer for the Conference Committee then too!

The Seven Deadly Sins of Selenium (with Apologies to Jason Carr and The East Bay Agilistry & QA Meetup Group)

An announcement for an East Bay Agilistry & QA Meetup Group contains a phrase that has piqued my interest: the seven deadly sins of Selenium. After recently spotting this intriguing combination of just six words, I’ve spent a few mental cycles almost every day trying to come up with what might be on the list. I had the idea that I’d come up with my own list, then “check” my answers against those of Selenium expert Jason Carr, the presenter whose talk abstract contains the phrase. Eventually, I decided to “give up” and just see what his list contained. Turns out, the announcement was not for a past Meetup as I had thought but for a future Meetup (May 20th, to be precise). Consternation plus! So, I’m back to my original plan. Here’s my personal list of the seven deadly sins of Selenium:

  1. Failing to plan (at least a little!) for the eventual parallel execution of your tests. For example, let’s say you need to automate three tests, each of which changes a particular account setting to one of the three available values, then does something to verify that the change has taken effect. Those three tests can all use the same account if the tests are to be run sequentially. But if they’re eventually to be run in parallel, a different account will be needed for each of the three; otherwise, “test collisions” can and will occur. I once saw a test suite consisting of several hundred tests, all of which had been written without this type of planning and all of which were run in parallel across several VMs. As you can imagine, the spurious failure rate for this suite was quite high.
  2. Writing standalone sleep statements, i.e., those that are not part of a wait-for synchronization loop. In case you’ve missed the hundreds of posts that explain why these are a bad idea, here goes again: There’s no way for a mere mortal to know how long to sleep! If you use a sleep 30 when a sleep 31 is needed during a particular run, your test will time out unnecessarily. And if you use a sleep 30 when a sleep 1 would have sufficed during a particular run, your test will literally waste 29 seconds.
  3. Creating brittle locators. A lot has been written about this topic also but people still create really bad locators, probably with the idea that they’ll come back later and improve them. “Later” never arrives, but what does happen is the tests depending on those locators break easily. I once worked on a suite of tests in which many locators were several inches wide (!), and full of square brackets with numbers inside. Those were the “fine China” of brittle locators. Any addition or deletion of one of the many earlier elements within the section of the page covered by such a locator would cause that locator to quit working.
  4. Placing hard-coded strings of expected user-visible messages inside your tests (instead of in a centralized file or set of files). This violation of the page object methodology is bad for several reasons. Many of these strings will probably be expected by more than one test, and having a duplicated string in a software entity is always a bad practice. Why would anyone want to set themselves up for responding to a Product Manager’s trivial wording-change request by making changes in several places within the test suite instead of just one place? It’s also easier to find and change a string within a file (or files) of messaging strings than within hundreds (or thousands!) of test files.
  5. Failing to plan for the eventual utilization of your tests for L10N testing. Any checks for a message displayed to the user or text-based locators need to access a centralized file(s) of strings which can be replicated and modified for each language supported by the application under test. Even if one avoids placing hard-coded strings within tests (sin #4), one can still mess up by placing them within page objects, which probably won’t work well for later L10N utilization of your tests.
  6. Ignoring the test automation pyramid by creating too many Selenium tests and too few service and unit tests. At a recent South Bay Selenium Meetup featuring a presentation entitled Selenium at Salesforce Scale, presenter Greg Wester made a very candid admission–that Salesforce’s test automation pyramid was “upside down!” (He also said they were working hard on changing that.) Just as the tip of the food pyramid–fats and sugars–is very expensive (calorically), the tip of the test automation pyramid–UI tests–is very expensive (both to develop and especially to maintain). We all need to be mindful of this when deciding which tests to automate with Selenium. An excellent article on this is Will Hamill’s Automated Testing and the Evils of Ice Cream.
  7. Developing more Selenium tests than your team can properly maintain. This one may sound the same as sin #6 but it’s not. If you’re resource-rich, you can afford to have the tip of your test automation pyramid be a bit wider. But regardless of your resources, it is never okay to create more Selenium tests than you can manage to maintain. It is critically important to avoid spurious failures, which can have a variety of causes. Some tests will exhibit synchronization issues during execution runs long after the tests were developed. Some tests will fail due to the devs making a change to a page without the test devs making an accompanying change to the corresponding page object in the test suite. Sometimes spurious failures are due to an unstable test environment. Whatever the reason, spurious failures are the breeding ground for developer complacency about test results. These failures can also be a huge time sink for the QA team which has to analyze test run results; this is often the same team responsible for developing tests for new features. If not dealt with promptly, these failures can quickly become an almost insurmountable obstacle. The best way to avoid creating such an obstacle is to ensure that an adequate amount of test developer time is allocated to test maintenance activities.
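Sin #2’s fix deserves a concrete sketch. The sleep belongs inside a wait-for loop, something like this generic helper (a real suite would poll a Selenium condition such as element visibility rather than an arbitrary Python callable):

```python
import time

def wait_for(condition, timeout=30, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns as soon as the condition holds, so a run that needs 1 second
    waits about 1 second, while a run that needs 31 seconds still succeeds.
    """
    deadline = time.time() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.time() >= deadline:
            raise TimeoutError("condition not met within %s seconds" % timeout)
        time.sleep(poll)
```

Selenium’s own WebDriverWait provides exactly this pattern for browser conditions, so you rarely need to hand-roll it.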
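To make sin #3 concrete, here is the flavor of locator I mean, on a hypothetical page:

```python
# Brittle: pure position, "square brackets with numbers inside"; adding or
# removing any earlier element in that section of the page silently breaks it.
BRITTLE = "//div[3]/table[1]/tbody/tr[7]/td[2]/span[1]/a"

# Sturdier: anchored on a stable, meaningful attribute instead of position.
ROBUST = "//a[@id='account-settings-link']"
```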
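Sins #4 and #5 share a cure: a central strings file. A minimal sketch (the keys, wording, and French translation are all invented for illustration):

```python
# One catalog per supported locale; tests look expected strings up by key,
# so a wording change, or adding a language, touches exactly one place.
MESSAGES = {
    "en": {"agreement_sent": "Your agreement has been sent."},
    "fr": {"agreement_sent": "Votre accord a été envoyé."},
}

def expected(key, locale="en"):
    """Return the expected user-visible string for the active locale."""
    return MESSAGES[locale][key]
```

A test then asserts against expected("agreement_sent") instead of a literal, and switching the suite to another locale is a one-argument change.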

Now that you’ve read my list of the seven deadly sins of Selenium, I urge you to do the following:

  • Come up with your own list, publish it, and post the link to your list in a comment on this post.
  • If that sounds like too much work, post a comment here with any “sins” you consider worse than the ones I’ve listed; also specify which ones from my list should be replaced with yours.
  • Attend Jason’s talk on 5/20/14. If that’s not possible, then look for a published recording of the Meetup and/or Jason’s slides after the 20th.

Why am I so intrigued by the seven deadly sins of Selenium as a concept? I think it’s because more Selenium talks focus on “best practices” than “deadly sins.” But most of us are more motivated to avoid a “Selenium train wreck” than to adhere to “best practices.” So, it just might be in our best interest to focus more on avoiding Selenium sins rather than on following Selenium best practices. It’s just a slight shift in focus but one that could have a big impact on the quality of our Selenium test suites.

Starting a Selenium Automation Effort from Scratch: Post-Mortem @ Two Years

A little over two years ago, I was hired to start up a Selenium automation effort at a SaaS start-up named EchoSign. Six weeks later, Adobe acquired that start-up. I’ll be leaving Adobe EchoSign at the end of next week, but first, I want to do a post-mortem on my two-year Selenium automation effort. What went well? What didn’t go well? What am I going to do differently (better!) in my next Selenium automation start-up effort?

What Went Well

Picking Python. In my previous Selenium work, I had used Perl. But since Selenium 2.0 was going to be released without official support for Perl just a month or so after I started work at EchoSign in 2011, I knew I had to pick a different language. Python turned out to be a good choice for many reasons, chiefly because it is so easy to get up to speed in a hurry with Python. It took me almost no time to switch from Perl to Python; a manual tester who wanted to move into our automation team was similarly successful at picking up Python in short order.

Framework: Using an expert’s. Way way too many test developers think they have to create a framework before they can get started developing tests. Yes, they need a framework before they can start developing tests. No, they do not have to develop that framework themselves–they can select an open-source one. I chose one that had just been developed in summer 2011; it has since evolved into pysaunter, developed by Selenium consultant Adam Goucher. Using pysaunter allowed me to get started writing tests the same day I installed the framework. Literally! That kind of instant productivity is really important when working at a chaotic start-up, where one is often dragged into helping with manual testing far more often than one would like.

Framework: Selecting pysaunter. Besides the instant productivity that pysaunter enabled for us early on, the framework also provided us with many useful features, some of which were pass-throughs from the popular pytest framework upon which it is built. One of the most useful was the -m option and the related pytest.mark decorator which we put over every test, e.g.,
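(The sketch below uses stock pytest.mark; pysaunter’s own marker decorator may have been spelled slightly differently.)

```python
import pytest

# Hypothetical test: marked so that both -m 'regression' and -m 'send'
# expressions on the command line would select it.
@pytest.mark.regression
@pytest.mark.send
def test_send_single_document():
    # ...drive the browser through the send flow and assert on the result...
    pass
```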


If we wanted to run the full regression suite, pysaunter -m 'regression' would suffice. If we wanted to run all regression tests except the send and authoring ones, we could specify pysaunter -m 'regression and not (send or authoring)'.

Pysaunter also produced results in the Ant JUnit format understood by Jenkins, which simplified this integration.

Counting the number of tests in a particular functional area was also a snap. For example, the number of texttags tests in our suite could be determined via pysaunter -m 'texttags' --collectonly | grep Function | wc -l.

API: Choosing RC initially. Selenium 2.0 was a month away from release when my manager and I decided that I should start developing tests with the RC (Remote Control) API instead of WebDriver. We thought we’d wait six months to a year, and let somebody else “get the kinks out of” WebDriver. In my opinion, the RC API (at least in Python) is way easier to use than is the WebDriver one (even today). And the 1.0 docs were in far better shape in the summer of 2011 than were the 2.0 docs. Fewer bugs, improved usability, and better docs all helped make RC the right initial choice for us.

Integrating with Jenkins. After a year and a half of having to download the latest bits onto actual hardware, modify the configuration files, and start up a run via the cmdline, it was a thrill to instead push a Build Now link on a Jenkins project. I wish we hadn’t put off this effort so long!

Mentoring a manual tester. About five months into the two years, a member of our functional (manual) testing group wanted to learn how to contribute to our Selenium project. She learned Python on her own, using courseware. I started her off with Selenium by creating one simple test case for our search functionality and then asking her to copy/modify it to create the rest of the search test cases (from the Test Case Manager we employ–TestLink). Then I did the same for the filter functionality on the EchoSign Manage page–I wrote the first test case and asked her to copy/edit in order to create the rest. Eventually, she moved on to modifying code in our Page Objects, and later still, to creating both new tests and new Page Object methods. After a year of steadily increasing automation work and steadily decreasing manual work, she was allowed to move into the automation team full-time. She now does virtually everything I do, including configuring Jenkins jobs, modifying and maintaining our Perl run script, and analyzing test case failures. I consider her success one of my biggest successes.

What Didn’t Go Well

Integrating with Jenkins. Our first attempt at integrating with the EchoSign developers’ Jenkins installation went from huge thrill to huge disappointment in short order. The VMs that were connected to our Jenkins/Selenium projects kept crashing! This led me to temporarily pull the plug on the effort after three frustrating months. Once I got brave enough to try again, I found an expert elsewhere in the company, and by cloning one of his VMs, we were finally up and running with Jenkins for good. But this was two years in–way later than I would have liked.

Framework: Pysaunter snags. We hit a few pysaunter snags, especially in the area of upgrades. Pysaunter’s Google Group has only 22 members and pysaunter documentation is virtually nonexistent; more of both would be very useful.

Pysaunter’s most significant missing feature was the capability to re-run failing tests a second time. I managed to work around that problem via the run script I already had in place which permitted executing both the RC suite and the WD suite from a single command. To support re-running of failing tests, I enhanced that script to parse the .xml results files output by pysaunter, collect the failures, and then re-run each failing test via pysaunter. This was a bit laborious but well worth the effort since web site UI tests are somewhat prone to spurious failures.
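The failure-collection half of that enhancement can be sketched as follows (a Python rendering of the idea; our actual run script was Perl, and the element and attribute names follow the Ant JUnit results format mentioned earlier):

```python
import xml.etree.ElementTree as ET

def failing_tests(results_file):
    """Return 'classname.name' identifiers for every failed or errored
    testcase in a JUnit-style results file, ready to re-run one by one."""
    failures = []
    for case in ET.parse(results_file).getroot().iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            failures.append("%s.%s" % (case.get("classname"), case.get("name")))
    return failures
```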

Switching to WebDriver. When I started creating the first RC tests in our suite in 2011, I was the only automation QA engineer in EchoSign. Following the advice of Adam Goucher in a presentation entitled “Create Robust Selenium Tests with Page Objects,” I deliberately did not “flesh out” Page Objects completely as I went. Instead, I created just what I needed for the tests I was writing. This approach worked fine. However, by the time we started using the WebDriver API in fall 2012, the automation team had grown to three. It didn’t work so well to create “just enough Page Objects” when there were other test devs around. We should have taken a more bottom-up approach at that point by creating the PO methods we thought we’d need first, and then writing tests that used them. Or maybe one of us could have had responsibility for all the PO methods, while the other two of us limited ourselves to writing only test cases.

What I’ll Do Differently Next Time

  • Spend more time up front designing, writing, reviewing, and documenting our PO methods.
  • Integrate with Jenkins sooner.
  • Do more to help build a larger user community for whatever open-source framework I select so that I’ll have more peer support for my framework questions and problems.

All in all, it was immensely rewarding to start from nothing in the automation department and wind up two years later with a reasonably large test suite that has caught many bugs and is regularly used in all sorts of testing situations–patches, releases, failover/failback dry runs, daily developer work, tech ops changes, and more. I’m hoping to achieve the same success at my next gig, only more so, thanks to the lessons learned these past two years.

Selenium Framework: Effective Automation Simplified

Next Tuesday’s San José Selenium Meetup is promising to be one of the best attended yet! Selenium Framework: Effective Automation Simplified will be presented by Sivakumar Anna, Director of Enterprise Services at InfoStretch. RSVP now!

Remember! If you can’t make it to Adobe’s East Tower Park Conference Room to attend the Meetup in person, you can still watch/listen to it live, via Adobe Connect (guest login). And if you don’t have time for it until the next day or even later, you can still watch/listen to it, via a different Adobe Connect URL, one which I’ll post at the bottom of the Meetup event page after the Meetup has ended.

Response to “Starting from Scratch with RC, Python, and Pysaunter”

Better late than never! I just discovered that Pysaunter creator Adam Goucher wrote an almost point-by-point response to my Starting from Scratch with RC, Python, and Pysaunter talk at the 02/28 San José Selenium Meetup.

The entire post by AG is informative, but best of all is that he set up a Saunter Google Group! If you’re interested in either Pysaunter or SaunterPHP, please join the group now so we can start sharing best practices and supporting each other’s test automation work.

“The Restless Are Getting Native: Lessons Learnt While Automating an iOS App” at San José Selenium Meetup

Tonight’s San José Selenium Meetup was one of the best Selenium Meetups I’ve ever attended! Speaker Dante Briones, Principal Consultant for Cochiva, gave a talk entitled The Restless Are Getting Native: Lessons Learnt While Automating an iOS App. He started off with a long discussion of automation best practices, which was really interesting, but made me (as the Organizer) squirm in my chair, worrying over whether he was actually going to talk about automating an iOS app! But much to my relief, he soon showed a single-word slide–</digression> (!), and then launched into a very informative and amusing discussion of his experience automating an iOS app using the NativeDriver API.

If you weren’t there tonight, I heartily recommend you catch Dante’s presentation via Adobe Connect. It’s two-talks-in-one and both are superb!

“Starting from Scratch with RC, Python, and Pysaunter” @ San José Selenium Meetup

Last night’s San José Selenium Meetup, at which I talked about my experience starting up a Selenium automation effort at Adobe EchoSign, can be viewed via Adobe Connect. Hope you enjoy the talk, and hope even more that you try out Pysaunter! I’d love being part of a Pysaunter Users Google group!

Starting from Scratch with RC, Python, and Pysaunter: Installation & Set-Up

Next week’s San José Selenium Meetup–Selenium @ Adobe–features a couple “short talks,” one of which is by yours truly! Starting from Scratch with RC, Python, and Pysaunter will cover my experience starting up a Selenium automation effort literally “from scratch.” I’m sure you’ve all heard of RC and Python, but perhaps not Pysaunter. Pysaunter–developed by Selenium developer and consultant Adam Goucher–is an open-source framework that supports Selenium/Python test development with either the RC or WebDriver APIs.

Part of my goal in giving a talk about my experience with Pysaunter at next Tuesday’s Meetup is to increase the size of the Pysaunter community so that I’ll have more people with whom to discuss automation issues! To help achieve that goal, I want it to be as painless as possible for people to get started developing with Pysaunter. So here forthwith are installation instructions which I created for my co-workers’ use. (I could cover these in my talk next Tuesday, but that would make for a seriously boring couple of slides!) Obviously, you may not need to do every step, depending on what software is already installed on your system.

  1. Install Java (at least 1.6).
  2. Ensure that your path environment variable includes the directory where the java executable is located.
  3. Download the latest selenium-server-standalone-2.x.jar file from SeleniumHQ.
  4. Enter java -jar selenium-server-standalone-2.x.jar at the cmdline of either a Mac OS terminal window or a Windows cmd window. This starts up the Selenium server. Minimize the window and use a separate one for the remaining commands below.
  5. Download/install Python 2.7.2. (Python 3 will not work.)
  6. Ensure that your path environment variable includes the directory where the python executable (python.exe on Windows, python on Mac) is located.
  7. Download/install setuptools from PYPI.
  8. Ensure that your path environment variable includes the directory where the easy_install executable has been installed.
  9. Enter easy_install -U py.saunter at the cmdline. This installs the pysaunter software. Note that you can also use this same command to update to the latest version of py.saunter in the future.
  10. Download py.saunter (in order to get the examples) from GitHub. (Ignore the Sorry, there aren’t any downloads for this repository message!)
  11. From the download location you’ve chosen, enter cd examples/saucelabs (RC example).
  12. Open conf/saunter.ini in an editor and go to the [Selenium] heading.
  13. Modify the server_path line to point to the Selenium server .jar file you just installed.
  14. Modify the browser line to add the appropriate path for Firefox (chrome) on Mac OS. On Windows, be sure to not surround the pathname with double quotes, even if it contains embedded spaces.
  15. Run the examples via pysaunter -m deep -v.

I hope you can make it to next Tuesday’s San José Selenium Meetup, either in person (please RSVP in that case!) or via our Adobe Connect session. If you’re interested in attending but can’t make the live timeslot, I’ll be posting the URL to the recording afterward, so watch for a new post that evening.

Finally, as I said at the beginning of this post, the Meetup will feature a couple of short talks. My colleague from NOIDA–Ashish Gupta–will present Test Data Extraction & Generation and Performance Analysis Using Selenium. Part of Ashish’s presentation deals with his use of Selenium for a purpose other than automated test cases, which I’ve done in the past and have always found intriguing. So, I’m particularly looking forward to hearing his talk!

San José Selenium Meetup!!!

WOOT! A San José Selenium Meetup is now a reality!!!

If you read my recent post about last Tuesday’s San Francisco Selenium Meetup, you know I found it very useful, interesting, thought-provoking, etc. What you don’t know is how exhausting I found it to commute from my job in downtown San José to this meetup and back again on a week night. (I live in downtown San José as well as work there). So, I decided to do something constructive about the matter by setting up the San José Selenium Meetup. And I’m hopeful that I’m not the only South Bay Selenium user who would like to attend a Selenium Meetup close to where we live and work.

Many thanks to Ashley Wilson from Sauce Labs for both her informative post on how to start a Selenium Meetup and her assistance in helping me set up this new one. Thanks also to Sauce Labs CEO John Dunham for his words of encouragement at the recent SF Selenium Meetup. And finally, a huge thank you to Selenium creator and Sauce Labs co-founder Jason Huggins for agreeing to be the speaker at the inaugural San José Selenium Meetup!!!

Thanks also to my employer, Adobe, for providing a fabulous venue, and to several co-workers who helped me (in a myriad of ways!) to pull this off.

Selenium Aficionados in the South Bay: Please go to the brand new San José Selenium Meetup site and RSVP to hear Jason Huggins on 01/18/12! While you’re at the site, suggest a program or speaker for a future Meetup! Better still, volunteer to be the speaker or to coordinate the program for a future Meetup!

Selenium Aficionados elsewhere: Consider starting your own Selenium Meetup! Read Ashley’s post on how to do it. Wait for her promised part-2 if you must. But then, go for it!

Review of SF Selenium Meetup’s Whiteboard Night

Tonight’s SF Selenium Meetup was the first “Whiteboard Night” for me. Two of the speakers were especially interesting….

okta’s QA Lead–Denali Lumma–flabbergasted me with the news that okta doesn’t have any manual testers at all! Engineers are required to create appropriate Selenium tests as part of their development of a new feature. While I’d certainly heard before this of developers being heavily invested in the test automation effort, the idea of no manual testing really intrigued me. Too many companies hire QA engineers to do automation but then feel compelled by the realities of aggressive release schedules to push those automation engineers into doing manual testing. This creates a vicious circle–the product continues to grow more rapidly than does the automation suite, leading to the need for still more manual testers to execute regression tests for those new releases.

Ms. Lumma also made clear how well Sauce Labs had served okta’s Selenium efforts, an opinion she recently expounded on in the okta blog, which I read on my train ride home after the Meetup. Her points really resonated with me. Too many companies don’t consider employing providers like Sauce Labs because of (a) the cost; or (b) a smug we-can-do-that-ourselves attitude. This is seriously short-sighted! The time employees spend dealing with automation infrastructure issues and solving Selenium mysteries is not free. It’s far better to have your automation engineers focus on creating customized automated tests for your product, and “outsource” as much of the rest of the automation effort as possible to providers like Sauce Labs.

The other whiteboard presentation which I found particularly useful was that of Brian O’Neill, Senior QA Engineer at our Meetup host, Eventbrite. Eventbrite’s approach to Selenium automation is quite similar to what I’m doing–Selenium-RC, Python, page objects, Jenkins, etc. Since I’m the only automation engineer at present in the QA team where I work, I treated Brian like a temporary co-worker, peppering him with specific questions about Eventbrite’s test organization and structure. Occasionally other people attempted to ask questions, but eventually I had most of them scared off!

The Whiteboard Night format requires more effort on the part of an individual attendee than any of the other formats I’ve seen used in the SF Selenium Meetups. One has to quickly ascertain whether an individual presenter’s topic is at all useful, and if not, move on to the next speaker. Once one finds a presentation relevant to one’s own work, one must be assertive about asking the speaker questions, sharing opinions, etc. But if one is willing to put in that extra effort, this format can be just as rewarding as the more traditional speaker/audience formats.

Kudos to Sauce Labs for their continued sponsorship of these SF Selenium Meetups!