Some Dos and Don’ts in Software Testing

26 / Jul / 2015 by Aayush Gupta

A test engineer must work towards breaking the product or software right from the beginning of development until the final release of the application. This post covers not just testing itself but also the tasks surrounding it, from tracking defects to configuration management.

Dos:

  • Ensure that the testing activities are in sync with the test plan.
  • Identify technical areas where you might need assistance or training. Plan and arrange such training to close the gap.
  • Strictly follow the standard testing strategies identified in the test plan, and follow the guidelines outlined in the test plan document.
  • Ask the development team for a release notes document. It contains the details of the release made to QA for testing and should normally include the following:
    • Version of code to be tested
    • Features that are part of the release
    • Features that are not part of the release
    • New features and functionalities added
    • Changes done in the existing functionalities
    • Known issues, etc.
  • Stick to the entry and exit criteria for all testing activities. For example, if the entry criterion for a release is sanity-tested code from the development team, ask for the sanity test results.
  • Update the test results for the test cases at run time, i.e., as and when you execute them.
  • Report the defects you find in the defect tracking tool outlined in the test plan.
  • Take the code for build and installation from configuration management (as identified in the plan).
  • Ensure that the code is version controlled for every release.
  • Classify defects as P1/P2/P3/P4 or Critical/High/Medium/Low to help developers prioritize their fixes.
  • Perform a sanity test whenever a release is made by the development team.
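The release notes document described above might look like the minimal sketch below; the project details, versions, and issue entries are hypothetical placeholders, not a prescribed format:

```
Release Notes – Build handed to QA
Version under test : 2.4.1
Included features  : login rework, audit log export
Excluded features  : payment gateway migration
New functionality  : bulk user import
Changed behaviour  : session timeout raised from 15 to 30 minutes
Known issues       : PDF export fails for files larger than 50 MB
```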
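The P1-P4 classification above can be sketched as a small triage helper. The labels, their meanings, and the sample defects here are illustrative assumptions, not a standard mapping:

```python
from enum import IntEnum

class Priority(IntEnum):
    """Lower value = more urgent; mirrors the P1..P4 scheme."""
    P1 = 1  # Critical: blocks testing or crashes the product
    P2 = 2  # High: major feature broken, workaround exists
    P3 = 3  # Medium: minor functional or usability issue
    P4 = 4  # Low: cosmetic, typo, nice-to-have

def triage(defects):
    """Return defects sorted so developers see the most urgent first."""
    return sorted(defects, key=lambda d: d["priority"])

# Hypothetical defects logged during a test run
defects = [
    {"id": "BUG-7", "priority": Priority.P3},
    {"id": "BUG-2", "priority": Priority.P1},
    {"id": "BUG-5", "priority": Priority.P2},
]
print([d["id"] for d in triage(defects)])  # most urgent first
```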
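A sanity pass like the one above can be a small scripted checklist run against every release before deeper testing starts. The two checks below are placeholders, assumed for illustration; in practice each would launch the build, hit a health endpoint, or log in with a test account:

```python
def check_app_starts():
    # Placeholder: e.g. launch the binary or hit a health endpoint.
    return True

def check_login_works():
    # Placeholder: e.g. log in with a known test account.
    return True

SANITY_CHECKS = [
    ("application starts", check_app_starts),
    ("login works", check_login_works),
]

def run_sanity():
    """Run every check; return True only if the release is testable."""
    results = {name: fn() for name, fn in SANITY_CHECKS}
    for name, ok in results.items():
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
    return all(results.values())

run_sanity()
```

If any check fails, the release goes back to development instead of entering the full test cycle.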

Don’ts:

  • Do not update test cases while executing them. Track the changes and update them later against a written reference (SRS, functional specification, etc.). It is bad practice to update a test case based on the look and feel of the application; the UI may change over time, so document each scenario only with reference to an approved document.
  • Do not track defects in multiple places, e.g., Excel sheets plus different defect tracking tools; this only increases tracking time. Use one centralized repository for defect tracking.
  • Do not spend time testing features that are not part of the current release.
  • Do not focus your testing on non-critical areas (think from the customer’s perspective).
  • Do not fail to document a defect, even one identified as low priority.
  • Do not make assumptions while verifying fixed defects. Clarify anything you are unsure of, and only then close the ticket (if the defect is indeed fixed).
  • Do not update test cases in a hurry, without actually running them, on the assumption that they worked in previous releases.
  • Do not focus only on negative scenarios; they consume a lot of time and are the least exercised by end users. Though they need to be tested at some point, the idea is to prioritize your tests.
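One way to act on that last point is to tag each test case with a priority and execute high-priority cases first, so happy paths are covered before rare negative scenarios. The case names and priority numbers below are assumptions for illustration:

```python
def run_in_priority_order(test_cases):
    """Execute cases lowest-priority-number first; return execution order."""
    executed = []
    for case in sorted(test_cases, key=lambda c: c["priority"]):
        case["run"]()              # run the actual test body
        executed.append(case["name"])
    return executed

# Hypothetical suite: 1 = run first, higher numbers run later
test_cases = [
    {"name": "rare negative path",  "priority": 3, "run": lambda: None},
    {"name": "happy-path checkout", "priority": 1, "run": lambda: None},
    {"name": "common edge case",    "priority": 2, "run": lambda: None},
]
print(run_in_priority_order(test_cases))
```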

Thank You!
