Five most important automation best practices for browser-based and service-oriented architectures
Considering all dimensions, a lot can be written, and has been written, on automation best practices. In this article, however, I will focus on the five most important automation best practices that I have observed in practice during my automation career. Each best practice is followed by an example for better understanding. These are geared towards automation script development rather than framework development; I will write another article sometime on best practices for framework development. So, here we go:
1. Don’t run all your UI tests on all browsers/mobile devices/OS
I have seen people say that they want to run all their UI automation tests on all browsers. Is that really necessary? I think not! Use your testing skills to select the tests that cover all UI components, keep those as browser-compatibility tests, and run only those across browsers. This way, a lot of effort can be saved on running and maintaining tests for every browser.
Example
Let’s take amazon.com as an example. Say you have a story to test that items are added to the cart successfully. You may have multiple test cases here: adding one item to the cart, adding multiple items, adding different types of items, and so on. For browser compatibility, it is enough to add each type of item to the cart once, to ensure that the UI components for all item types are displayed properly and the user can interact with them. The different combinations of items, however, can be run on a single browser, because all the UI components have already been tested across browsers.
Question: Why can’t we test different combinations of items through services rather than having UI tests at all?
The answer is simple: adding different types and combinations of items is an actual end-user scenario, so we want to test it through the browser, mimicking exact user behavior. Of course, we need to be selective here and derive the most frequent user scenarios from historical production data rather than ending up with hundreds of permutations and combinations.
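One way to put this into practice is to tag only the UI-coverage tests for the full browser matrix and run everything else on a single default browser. A minimal sketch in Python (the test names and the `CROSS_BROWSER` tag are hypothetical, purely for illustration):

```python
# Sketch: only tagged tests run on the full browser matrix; everything else
# runs once on a default browser. Names and tags here are hypothetical.

BROWSERS = ["chrome", "firefox", "edge"]
DEFAULT_BROWSER = "chrome"

# Map each test to the tags it carries.
TESTS = {
    "test_add_book_to_cart": {"CROSS_BROWSER"},       # covers one item type's UI
    "test_add_gift_card_to_cart": {"CROSS_BROWSER"},  # covers another item type's UI
    "test_add_mixed_item_combination": set(),         # combination logic, one browser
}

def plan_runs(tests, browsers, default_browser):
    """Return (test, browser) pairs: tagged tests on every browser, others once."""
    runs = []
    for name, tags in tests.items():
        targets = browsers if "CROSS_BROWSER" in tags else [default_browser]
        runs.extend((name, browser) for browser in targets)
    return runs

if __name__ == "__main__":
    for test, browser in plan_runs(TESTS, BROWSERS, DEFAULT_BROWSER):
        print(f"{test} -> {browser}")
```

With three browsers, the two compatibility tests fan out to six runs while the combination test runs only once, which is exactly the saving this practice is after.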
2. Keep UI or E2E automation to a minimum
Never have tons of UI tests. UI tests are the most expensive to run, develop, and maintain. Instead, use web services tests for whatever can be tested at the services layer. UI tests should exist only for a few important end-user scenarios that cover most of the functionality. Ideally, the maximum number of tests should be covered by unit tests, fewer at the integration level (mostly the services layer), and the minimum left for UI testing. Refer to the test pyramid below:
Example
Let’s say we are testing that Google search displays the desired results. Here, we can have just one UI flow to verify that Google search displays a desired result. Different search strings, including restricted searches, can easily be exercised through a GET call to the search service.
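At the services layer, such a check boils down to building the GET request and asserting on the response. A minimal sketch of the request-building half (the URL and `q` parameter follow Google's public search URL; the helper names are my own, for illustration):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_search_url(query, base="https://www.google.com/search"):
    """Build the GET URL a services-level test would send for a search string."""
    return f"{base}?{urlencode({'q': query})}"

def query_from_url(url):
    """Extract the search string back out of a built URL (handy in assertions)."""
    return parse_qs(urlparse(url).query)["q"][0]

if __name__ == "__main__":
    print(build_search_url("selenium grid setup"))
```

A real test would send this URL with an HTTP client and assert on the status code and response body; the point is that dozens of search strings can be exercised this way without ever opening a browser.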
3. Set aside known test failures from the batch execution
This will save a great deal of time when analyzing automation reports: if everything is fine, you will see all tests green (passed) and won’t need to analyze anything. Later, you can run the test cases with known failures separately to make sure they fail at the expected point, or you can do this step manually if that suits better.
Example
Let’s say you have a defect where your email notifications are failing. You can remove or comment out those steps in your batch run and later run just one test case to check email notification. This way, all your tests will be green, and you won’t end up analyzing and retesting the known failures. For email notification, one or two test cases are enough to check whether it is still failing.
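Rather than commenting code out by hand, the skip can be driven by a known-failures list so it is visible in the report and easy to revert. A minimal sketch with Python's `unittest` (the defect ID, test names, and the `KNOWN_FAILURES` table are all hypothetical):

```python
import unittest

# Hypothetical known-failures list: tests named here are skipped in the main
# batch and exercised separately once the defect is ready for retest.
KNOWN_FAILURES = {
    "test_email_notification": "DEF-1234: email notifications are failing",
}

def skip_if_known_failure(func):
    """Skip a test that is on the known-failures list, recording the reason."""
    reason = KNOWN_FAILURES.get(func.__name__)
    return unittest.skip(reason)(func) if reason else func

class OrderTests(unittest.TestCase):
    def test_place_order(self):
        self.assertTrue(True)  # stands in for a real passing check

    @skip_if_known_failure
    def test_email_notification(self):
        self.fail("would fail until DEF-1234 is fixed")

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderTests)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

The batch stays green, the skip reason shows the defect ID in the report, and retesting the fix is a one-line change: remove the entry from the list.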
4. Consider CI and parallel execution while designing UI tests
Since UI test execution is time-consuming, we run it in parallel using tools like Selenium Grid or BrowserStack (or any other parallel testing tool). Also, CI (continuous integration) is an integral part of most releases nowadays. What I have observed is that, while designing tests, people run them on their local machine and sometimes ignore the fact that the tests are going to run in parallel and in a CI environment. You should always consider these points while designing tests: your tests may break when running in parallel, and they won’t run in a CI environment if they depend on anything on your local machine. One good practice here is to make your tests independent of each other and of anything on your local machine.
Example
Let’s say we are testing an application where login is a prerequisite for most of the test cases. One approach is to create a login once and use it in all test cases. However, this will break your test cases when testing in parallel, because multiple test cases would be using the same login at the same time, creating synchronization problems. Taking PayPal as an example: if one user is trying to add a card in testcase-1 and another is deleting the same card in testcase-2, the tests may break, because the card may not yet have been added by testcase-1 while testcase-2 is trying to delete it. A correct approach here is to create a separate login for each test case (you can create it through an API, if possible, to save time).
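The per-test login idea can be sketched in a few lines. This is an illustration only: `create_user_via_api` stands in for whatever user-creation API your application exposes.

```python
import uuid

def unique_test_user(prefix="autotest"):
    """Generate credentials that no other parallel test will share."""
    suffix = uuid.uuid4().hex[:8]
    return {
        "username": f"{prefix}_{suffix}",
        "email": f"{prefix}_{suffix}@example.com",
        "password": uuid.uuid4().hex,  # throwaway password for this run only
    }

def setup_test(create_user_via_api):
    """Per-test setup: create a fresh login through the API, not the UI."""
    user = unique_test_user()
    create_user_via_api(user)  # hypothetical API call; far faster than UI signup
    return user
```

Because every test owns its user, testcase-1 and testcase-2 can add and delete cards at the same moment without racing on shared state.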
Another thing to keep in mind, from a CI and remote execution perspective, is to keep all your dependent files, such as input files and image files, within your project, so that they are available when a checkout is triggered on the remote machine. Also, make sure your automation does not require any kind of manual intervention, even to type a single letter; otherwise, it won’t run on the CI server when triggered or scheduled automatically.
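Keeping files inside the project is only half the job; the tests must also locate them relative to the project, never via an absolute local path. A sketch, assuming a `testdata` folder checked into the repository next to this module (the folder name is my own choice):

```python
from pathlib import Path

# Anchor all test resources to this file's location, never to the CI agent's
# working directory or a developer's absolute local path like C:\Users\me\...
PROJECT_ROOT = Path(__file__).resolve().parent

def resource(relative_path):
    """Return the absolute path of a data file checked into the project."""
    return PROJECT_ROOT / "testdata" / relative_path
```

After a fresh checkout on any CI agent, `resource("images/logo.png")` resolves correctly, whereas a hard-coded local path would fail the moment the suite leaves the author's machine.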
5. Put a stringent review process in place
It is very important to put a stringent review process in place for your automation development, especially when you have a big team. This is vital because I have come across several situations where tests were broken by another team member’s commit. Ideally, each member of the team should review code to ensure that their own scripts are not broken, but depending on team size this may not be practical. So set a limit, for example: code is only accepted once at least 3-4 team members have completed their reviews. Reviews should not be offline; they should be done and logged using tools like Crucible, GitHub, or Bitbucket, so that everyone can see the review comments and avoid the same issues in their own code going forward.
Example
A very good example here is another team member updating or deleting test data for their own test cases, which can break your test case. That’s why it is advisable to thoroughly review the code in each commit. This will ensure the reliability of your test cases.
So those are the most important best practices I came across in my automation career. I hope you find value in this article and try to implement these practices in your projects. I have used a few terms here like CI, web services, and parallel testing; if you are not familiar with them, please google them to get the most out of this article 🙂
About Author : Mufaddal Munim
“I am a hardcore automation developer and programmer with over 9 years of experience in test automation. My work experience includes test automation and framework development/customization for UI and web services, mostly for browser-based and service-oriented architectures. I am passionate about learning and adopting new technologies. I am also one of the contributors to the open-source framework serenity-demos.
I believe that a programming language is just a means to achieve automation, so I am always keen to learn new languages, but my favorites as of now are Java and Node.js.”
LinkedIn profile: https://www.linkedin.com/in/mufaddalmunim/