Saturday, April 20, 2013

Unit test coverage

As a Test Engineer, you should validate unit test coverage. Often we don't have access to all the source code, and it can be enough to check metrics in a tool like Sonar.
Otherwise, you can read the unit tests, determine whether programmers are covering the most important cases, and suggest new scenarios.




To do this, I propose the following validations:

Size:
For collections, test:

  • An empty collection
  • A collection with one item
  • The smallest interesting case
  • A collection with several items


Dichotomies:

  • Vowels / Non-vowels
  • Even / Odd 
  • Positive / Negative
  • Empty / Full

Boundaries:

  • If the function behaves differently for values near a particular threshold, test values on both sides of that threshold (see the sketch below).


Order:

  • If the function behaves differently when the values are in different orders, identify each of those orders and test them (see the sketch below).
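
For instance, boundary and order checks might look like this (a minimal sketch; shipping_cost and price_range are hypothetical functions, invented only to illustrate the two heuristics):

import unittest


def shipping_cost(order_total):
    """ Hypothetical: shipping is free for orders of $50 or more. """
    return 0 if order_total >= 50 else 5


def price_range(prices):
    """ Hypothetical: return a (lowest, highest) tuple for a list of prices. """
    return (min(prices), max(prices))


class TestBoundariesAndOrder(unittest.TestCase):

    # Boundaries: exercise values just below and at the threshold.
    def test_just_below_threshold(self):
        self.assertEqual(shipping_cost(49.99), 5)

    def test_at_threshold(self):
        self.assertEqual(shipping_cost(50), 0)

    # Order: the result should not depend on how the input is arranged.
    def test_ascending_order(self):
        self.assertEqual(price_range([1, 2, 3]), (1, 3))

    def test_descending_order(self):
        self.assertEqual(price_range([3, 2, 1]), (1, 3))


if __name__ == '__main__':
    unittest.main(exit=False)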


Example in Python:

import unittest

import a1  # module under test: provides stock_price_summary


class TestStockPriceSummary(unittest.TestCase):
    """ Test class for function a1.stock_price_summary. """

    def test_empty_list(self):
        """
        Return (0, 0) when price_changes is empty.
        """
        price_changes = []
        actual = a1.stock_price_summary(price_changes)
        expected = (0, 0)
        self.assertEqual(actual, expected)

    def test_single_item_positive(self):
        """
        Test when the list only includes a positive item.
        """
        price_changes = [2.45]
        actual = a1.stock_price_summary(price_changes)
        expected = (2.45, 0)
        self.assertEqual(actual, expected)

    def test_single_item_negative(self):
        """
        Test when the list only includes a negative item.
        """
        price_changes = [-2.45]
        actual = a1.stock_price_summary(price_changes)
        expected = (0, -2.45)
        self.assertEqual(actual, expected)

    def test_single_item_zero(self):
        """
        Test when the list contains only an item equal to 0.
        """
        price_changes = [0]
        actual = a1.stock_price_summary(price_changes)
        expected = (0, 0)
        self.assertEqual(actual, expected)

    def test_general_case(self):
        """
        Return a 2-item tuple where the first item is the sum of the gains
        in price_changes and the second is the sum of the losses.
        """
        price_changes = [0.01, 0.03, -0.02, -0.14, 0, 0, 0.10, -0.01]
        actual = a1.stock_price_summary(price_changes)
        expected = (0.14, -0.17)
        # The sums are floats, so compare element-wise with assertAlmostEqual
        # instead of an exact tuple comparison.
        self.assertAlmostEqual(actual[0], expected[0])
        self.assertAlmostEqual(actual[1], expected[1])


if __name__ == '__main__':
    unittest.main(exit=False)

Thursday, April 18, 2013

Test case creation, Scripting, Execution, Cross-browser testing (Steps 5 to 10)


Cross-browser testing is always a challenge for QA. I'm pretty sure everyone has faced issues with IE when trying to run a script created for Mozilla Firefox.
The following process describes my personal approach to gaining coverage gradually as automation is applied.







Every step in the graph involves a task, but initially I recommend automating against Mozilla Firefox and then continuing with Chrome, IE, etc.
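
As a sketch of what browser-agnostic setup could look like, for example with Selenium WebDriver's Python bindings (the BROWSER environment variable and the page under test are assumptions, not part of any framework):

import os
import unittest

from selenium import webdriver


def make_driver(browser):
    """ Return a WebDriver instance for the requested browser. """
    if browser == 'firefox':
        return webdriver.Firefox()
    if browser == 'chrome':
        return webdriver.Chrome()
    if browser == 'ie':
        return webdriver.Ie()
    raise ValueError('Unsupported browser: %s' % browser)


class TestHomePage(unittest.TestCase):

    def setUp(self):
        # Default to Firefox; widen coverage later by re-running the same
        # suite with BROWSER=chrome, BROWSER=ie, etc.
        self.driver = make_driver(os.environ.get('BROWSER', 'firefox'))

    def tearDown(self):
        self.driver.quit()

    def test_title(self):
        self.driver.get('http://example.com')
        self.assertIn('Example', self.driver.title)


if __name__ == '__main__':
    unittest.main(exit=False)

Keeping all browser-specific logic inside make_driver means the tests themselves never change when a new browser is added.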
Next posts will describe each task in more depth.


Tuesday, April 9, 2013

Step 2: Customize your development process to support automated testing (continued)


Tester Tasks:

Be part of the planning meeting for the current sprint: the tester is in charge of providing estimates for data creation, test case/acceptance test design, test execution, framework design and improvements, scripting tasks, environment setup, etc.

Be part of the planning meeting for the next sprint: the tester just needs to determine whether any complex environment must be created, or any major improvement made to the automation framework, before the next sprint starts.

Write acceptance criteria for each item (*) for the next sprint: testers create acceptance criteria for the user stories committed for the next sprint. They help the BA (if the team includes BAs) with suggestions from the QA perspective: standards, user experience, possible performance issues, and likely future bugs. Depending on how agile we are, we will have acceptance criteria and/or some test cases (this also depends on the company; sometimes you need to show evidence of executions, such as test cases).
Creating acceptance criteria can be a very creative task, but we need to validate how feasible they are and whether we are on the same page as the PO, so a validation with the team may be necessary.

Update the AC according to PO comments & create test cases for the next sprint: once we receive the team's feedback, we update the acceptance criteria and start creating test cases (or acceptance tests if we are working with APIs).

Automate APIs (current sprint): if the application is divided into tiers, we will have APIs or services. We can start scripting before programmers finish coding for the current sprint.
At first, all our tests will fail, but as programmers gradually finish their work, our suite will pass.
Automating APIs is easier than automating the UI, and easier to maintain as well. Automation can be approached with tools like SoapUI, JMeter, etc., or scripted directly, as sketched below.
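
If you prefer to script API checks in the same language as your unit tests rather than using SoapUI or JMeter, a minimal sketch with Python's requests library might look like this (the base URL and the /api/stock_summary endpoint are hypothetical):

import unittest

import requests

BASE_URL = 'http://test-env.example.com'  # hypothetical test environment


class TestStockApi(unittest.TestCase):
    """ Acceptance tests written while the endpoint is still being coded:
    they fail at first and turn green as the programmers deliver. """

    def test_summary_returns_gains_and_losses(self):
        response = requests.get(BASE_URL + '/api/stock_summary',
                                params={'symbol': 'ACME'})
        self.assertEqual(response.status_code, 200)
        body = response.json()
        self.assertIn('gains', body)
        self.assertIn('losses', body)


if __name__ == '__main__':
    unittest.main(exit=False)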

Execute AC manually (current sprint): test cases should be executed manually at least once when each user story is done. Manual verification will give us a set of bugs, an idea of the limitations, and a sense of how critical each test case is. It will also identify the automation candidates for the next sprint.

Automate Smoke Test UI / Regressions (previous sprint): automating the UI once the application is stable can save you many headaches. In this stage, we start automating the smoke test for the features already done, or regressions for complex modules. In future iterations we update the smoke test, adding the new features (see Step 1: Automation Feasibility validation. Run the checklist: Execution). A sketch of such a suite follows below.
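
One way to keep the smoke test as its own runnable suite, growing sprint by sprint (all class and test names here are hypothetical):

import unittest


class TestLogin(unittest.TestCase):
    """ Hypothetical UI tests for the login feature. """

    def test_login_with_valid_credentials(self):
        pass  # in a real suite this would drive the browser via WebDriver


class TestCheckout(unittest.TestCase):
    """ Hypothetical UI tests for the checkout feature. """

    def test_checkout_happy_path(self):
        pass  # likewise driven through the UI


def smoke_suite():
    """ Collect only the tests that guard the critical paths; each sprint
    adds its new features here. """
    suite = unittest.TestSuite()
    suite.addTest(TestLogin('test_login_with_valid_credentials'))
    suite.addTest(TestCheckout('test_checkout_happy_path'))
    return suite


if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(smoke_suite())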

Exploratory Testing: this kind of testing will find bugs that no automated test will ever find. Nothing compares to a tester's creativity, and we should apply this kind of testing once every sprint.

Be part of the Review meeting for the current sprint: testers or BAs can lead the internal/external demo. The goal is to show the commitment for the current sprint, and "ideally" we shouldn't show any issue live! So a tester can choose the safest path to demonstrate and point out the known issues as well (an extra slide can be enough). Testers and BAs speak human language; programmers don't. ;)
 
Be part of the Retrospective meeting for the current sprint: in this meeting, everyone participates equally, identifying things done well, things to improve, and actions to apply.

Monday, April 8, 2013

Step 2: Customize your development process to support automated testing.



TESTERS WORKING IN AN AGILE TEAM

The new tester role is more versatile, with a wide range of skills. Testers now have tasks throughout the sprint, and they also program. Yes, as you heard: a tester programs like any other programmer (I don't say "developer", because everyone in the team is part of the development, so everyone is a developer: the BA, the designer, the tester, the tech writer, and the programmers).
New testers don't need to be specialists in any one language; the more languages they know, the better the solutions they will find. A good test engineer should be able to adapt their knowledge to the needs at hand.


Programming Language Hall of Fame: the list of "Programming Language of the Year" award winners, given each year to the programming language with the highest rise in ratings.



Let's examine in greater depth how testers work in a team and how their tasks are distributed along the sprint.
(This process focuses on tester tasks; it may omit tasks for other roles.)






Considering an agile methodology, we can think in terms of the previous, current, and next sprints. Our tester will have tasks related to each of these stages.

(To be continued...)

Thursday, April 4, 2013

Step 1: Automation Feasibility validation. Run the checklist


As I said in the previous post, the first step is to validate automation feasibility.
Based on my previous experience in automation projects, I have created this spreadsheet as a new artifact to validate automation feasibility for software development efforts.

Download Automation feasibility checklist

Analysis

Why Automate?

  • Better ROI
  • More coverage
  • Reusability
  • Less time to execute

When to Automate?

Project inception validations: this is the most important validation we should do. A negative result here ends our run through the checklist.

Sprint validations: if we got "High Feasibility" in the project inception analysis, we focus on the right moment to start automating throughout the project.

Which automation tool?
We should research existing tools and open-source frameworks to make a decision.





Design & Construction
  • Vertical or Horizontal Automation?
    • Vertical: Automate an end-to-end flow where each test case is independent of the others.
    • Horizontal: Execute test cases feature-wise to save execution time.
  • Keyword Driven or Functional Decomposition?
    • Keyword Driven: Each business process is mapped into actions, and each operation is further mapped to a keyword. This makes it easy for non-technical users to create test scenarios without knowing much about the testing tool (see the sketch after this list).
    • Functional Decomposition: Business processes are created first, and while creating the test scenarios and test scripts, each business process is called in sequence.
  • The data design:
    • "Keep everything that changes, or has a chance of changing, separate from the script."
  • Environments:
    • An isolated environment, similar to production, will guarantee reliable test results.
  • Coding standards:
    • Applying specific coding practices and naming conventions will yield good-quality code.
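
A minimal sketch of the keyword-driven idea combined with the data-design rule above (every name here is hypothetical; a real framework would load the scenario from a CSV file or spreadsheet rather than an in-script list):

# Each business operation is exposed as a keyword.
def open_login_page():
    print('navigating to the login page')


def enter_credentials(user, password):
    print('logging in as %s' % user)


def assert_dashboard_visible():
    print('checking that the dashboard is shown')


# Keyword -> action mapping; the framework owns this table.
KEYWORDS = {
    'open_login_page': open_login_page,
    'enter_credentials': enter_credentials,
    'assert_dashboard_visible': assert_dashboard_visible,
}

# The scenario and its data live outside the script (CSV, spreadsheet...);
# anything that changes stays separate from the code.
SCENARIO = [
    ('open_login_page', []),
    ('enter_credentials', ['qa_user', 's3cret']),
    ('assert_dashboard_visible', []),
]

for keyword, args in SCENARIO:
    KEYWORDS[keyword](*args)

Non-technical users only ever touch the scenario rows; programmers maintain the keyword implementations.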

Execution

What to Automate?
If you have reached this point, you are ready to see all the benefits of your efforts. Here we identify the candidates for automation among the test cases, based on the key factors listed in the Execution section of the checklist.


Finally, this artifact will provide you with evidence for accepting or rejecting test automation for your project, and the Analysis section will guide your choice of an automation tool by comparing the main features of the candidates. Once you have decided to apply automation, the Design & Construction section will help you organize your framework following coding and maintenance standards. Then, the Execution section will identify the key factors that make a test case a good candidate for automation.

Hope you find this artifact helpful. If you have better ways to do it, by all means, do share them with us.