Welcome to Saravanan J Blog


Saturday, December 3, 2011

Lonavala Trips with friends

Lonavala Trip Slideshow: Saravanan’s trip from Della Adventure, Lonavala (near Lonavla, Maharashtra, India) to Pune, created with TripAdvisor.

Friday, May 7, 2010

GUI Testing Checklist



GUI Testing Checklist



A checklist to help testers validate GUI screens









CONTENTS:



Section 1 - Windows Compliance Standards

1.1. Application
1.2. For Each Window in the Application
1.3. Text Boxes
1.4. Option (Radio Buttons)
1.5. Check Boxes
1.6. Command Buttons
1.7. Drop Down List Boxes
1.8. Combo Boxes
1.9. List Boxes



Section 2 - Tester's Screen Validation Checklist



2.1. Aesthetic Conditions

2.2. Validation Conditions

2.3. Navigation Conditions

2.4. Usability Conditions

2.5. Data Integrity Conditions

2.6. Modes (Editable Read-only) Conditions

2.7. General Conditions

2.8. Specific Field Tests

       2.8.1. Date Field Checks

       2.8.2. Numeric Fields

       2.8.3. Alpha Field Checks



Section 3 - Validation Testing - Standard Actions



3.1. On every Screen

3.2. Shortcut keys / Hot Keys

3.3. Control Shortcut Keys



Section 4 - Origin & Inspiration



4.1. Document origin

4.2. Sources of Inspiration & information

4.3. Contacting the author.












Section 1 - Windows Compliance Testing










1.1. Application



Start the application by double-clicking its icon. The loading message should show the application name, version number, and a larger pictorial representation of the icon (a 'splash' screen).



No Login is necessary



The main window of the application should have the same caption as the
caption of the icon in Program Manager.



Closing the application should result in an "Are you Sure" message
box



Attempt to start the application twice. This should not be allowed - you should be returned to the main window.



Try to start the application twice as it is loading.



On each window, if the application is busy, then the hourglass should be displayed. If there is no hourglass (e.g. alpha access enquiries), then some 'enquiry in progress' message should be displayed.



All screens should have a Help button; pressing F1 should do the same.

 

 



1.2. For Each Window in the Application



If Window has a Minimise Button, click it.





The window should shrink to an icon at the bottom of the screen. This icon should correspond to the original icon under Program Manager.



Double Click the Icon to return the Window to its original size.



The window caption for every application should have the name of the application and the window name - especially the error messages. These should be checked for spelling, English and clarity, especially at the top of the screen. Check that the title of the window makes sense.

 



If the screen has a Control menu, then use all ungreyed options (see below).





Check all text on window for Spelling/Tense and Grammar



Use TAB to move focus around the Window. Use SHIFT+TAB to move focus
backwards.



Tab order should be left to right, and up to down within a group box on the screen. All controls should get focus - indicated by a dotted box or cursor. Tabbing to an entry field with text in it should highlight the entire text in the field.
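
The tab-order rule lends itself to a quick automated check. A minimal sketch in Python, assuming the tester can obtain each control's name and (x, y) position from the window under test; the control list here is hypothetical:

```python
# A quick check of the left-to-right, top-to-bottom tab-order rule.
# The control list is hypothetical - a real test would read each control's
# name and (x, y) position from the window under test.
controls_in_tab_order = [
    ("Name", (10, 10)), ("Address", (120, 10)),   # first row
    ("City", (10, 40)), ("OK", (120, 40)),        # second row
]
positions = [pos for _, pos in controls_in_tab_order]
# Sorting by row (y) then column (x) should not change the order.
assert positions == sorted(positions, key=lambda p: (p[1], p[0])), \
    "tab order is not left-to-right, top-to-bottom"
```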



The text in the Micro Help line should change - check it for spelling and clarity, including for non-updateable fields, etc.



If a field is disabled (greyed), then it should not get focus. It should not be possible to select it with either the mouse or the TAB key. Try this for every greyed control.



Fields that are never updateable should be displayed with black text on a grey background with a black label.



All text should be left-justified, followed by a colon tight to it.



In a field that may or may not be updateable, the label text and contents change from black to grey depending on the current status.



List boxes are always white background with black text whether they are
disabled or not. All others are grey.



In general, do not use 'goto' screens; use 'gosub'. That is, if a button causes another screen to be displayed, the new screen should not hide the first screen (with the exception of tabs in 2.0).


When returning, return to the first screen cleanly, i.e. no other screens/applications should appear.



In general, double-clicking is not essential. In general, everything can be
done using both the mouse and

the keyboard.



All tab buttons should have a distinct letter.

 

 



1.3. Text Boxes





Move the Mouse Cursor over all Enterable Text Boxes. Cursor
should change from arrow to Insert Bar.

If it doesn't then the text in the box should be grey or non-updateable. Refer
to previous page.



Enter text into Box



Try to overflow the text by typing too many characters - this should be prevented. Check the field width with capital Ws.



Enter invalid characters - letters in amount fields; try special characters like +, -, and * in all fields.



SHIFT and Arrow should Select Characters. Selection should also be possible
with mouse. Double Click should

select all text in box.

 



1.4. Option (Radio Buttons)





Left and Right arrows should move the 'ON' selection. So should Up and Down. Select with the mouse by clicking.

 



1.5. Check Boxes





Clicking with the mouse on the box, or on the text should
SET/UNSET the box. SPACE should do the same.

 




 

 



1.6. Command Buttons





If Command Button leads to another Screen, and if the user
can enter or change details on the other screen then

the Text on the button should be followed by three dots.



All buttons except OK and Cancel should have an access letter, indicated by an underlined letter in the button text. The button should be activated by pressing ALT+letter. Make sure there is no duplication.



Click each button once with the mouse - This should activate

Tab to each button - Press SPACE - This should activate

Tab to each button - Press RETURN - This should activate

The above are VERY IMPORTANT, and should be done for EVERY
command Button.



Tab to another type of control (not a command button). One button on the screen should be the default (indicated by a thick black border). Pressing Return in ANY non-command-button control should activate it.



If there is a Cancel button on the screen, then pressing <Esc> should activate it.



If pressing the command button results in uncorrectable data (e.g. closing an action step), there should be a message phrased positively with Yes/No answers, where Yes results in the completion of the action.

 



1.7. Drop Down List Boxes





Pressing the arrow should give a list of options. This list may be scrollable. You should not be able to type text in the box.



Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing Ctrl+F4 should open/drop down the list box.



Spacing should be compatible with the existing Windows spacing (Word, etc.). Items should be in alphabetical order, with the exception of blank/none, which goes at the top or the bottom of the list box.



Dropping down the list with an item selected should display the list with the selected item at the top.



Make sure only one blank entry appears - there shouldn't be a blank line at the bottom.


 



1.8. Combo Boxes





Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.

 



1.9. List Boxes





Should allow a single selection to be chosen, by clicking
with the mouse, or using the Up and Down Arrow keys.



Pressing a letter should take you to the first item in the list starting
with that letter.



If there is a 'View' or 'Open' button beside the list box, then double-clicking on a line in the list box should act the same way as selecting an item in the list box and then clicking the command button.



Force the scroll bar to appear, and make sure all the data can be seen in the box.












 

 



Section 2 - Screen Validation Checklist



 



2.1. Aesthetic Conditions:



  1. Is the general screen
    background the correct colour?
  2. Are the field prompts the
    correct colour?
  3. Are the field backgrounds the
    correct colour?
  4. In read-only mode, are the
    field prompts the correct colour?
  5. In read-only mode, are the
    field backgrounds the correct colour?
  6. Are all the screen prompts
    specified in the correct screen font?
  7. Is the text in all fields
    specified in the correct screen font?
  8. Are all the field prompts
    aligned perfectly on the screen?
  9. Are all the field edit boxes
    aligned perfectly on the screen?
  10. Are all groupboxes aligned
    correctly on the screen?
  11. Should the screen be
    resizable?
  12. Should the screen be
    minimisable?
  13. Are all the field prompts
    spelt correctly?
  14. Are all character or
    alpha-numeric fields left justified? This is the default unless otherwise
    specified.
  15. Are all numeric fields right
    justified? This is the default unless otherwise specified.
  16. Is all the microhelp text
    spelt correctly on this screen?
  17. Is all the error message text
    spelt correctly on this screen?
  18. Is all user input captured in
    UPPER case or lower case consistently?
  19. Where the database requires a
    value (other than null) then this should be defaulted into fields. The

    user must either enter an alternative valid value or leave the default
    value intact.
  20. Assure that all windows have
    a consistent look and feel.
  21. Assure that all dialog boxes
    have a consistent look and feel.


 



2.2. Validation Conditions:



  1. Does a failure of validation
    on every field cause a sensible user error message?
  2. Is the user required to fix
    entries which have failed validation tests?
  3. Have any fields got multiple
    validation rules and if so are all rules being applied?
  4. If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field), is the invalid entry identified and highlighted correctly with an error message?
  5. Is validation consistently
    applied at screen level unless specifically required at field level?
  6. For all numeric fields check
    whether negative numbers can and should be able to be entered.
  7. For all numeric fields check
    the minimum and maximum values and also some mid-range values allowable?
  8. For all
    character/alphanumeric fields check the field to ensure that there is a
    character limit specified and that this limit is exactly correct for the
    specified database size?
  9. Do all mandatory fields
    require user input?
  10. If any of the database
    columns don't allow null values then the corresponding screen fields must
    be mandatory. (If any field which initially was mandatory has become
    optional then check whether null values are allowed in this field.)


 



2.3. Navigation Conditions:



  1. Can the screen be accessed
    correctly from the menu?
  2. Can the screen be accessed
    correctly from the toolbar?
  3. Can the screen be accessed
    correctly by double clicking on a list control on the previous screen?
  4. Can all screens accessible
    via buttons on this screen be accessed correctly?
  5. Can all screens accessible by
    double clicking on a list control be accessed correctly?
  6. Is the screen modal, i.e. is the user prevented from accessing other functions when this screen is active, and is this correct?
  7. Can a number of instances of
    this screen be opened at the same time and is this correct?


 



2.4. Usability Conditions:



  1. Are all the dropdowns on this
    screen sorted correctly? Alphabetic sorting is the default unless otherwise
    specified.
  2. Is all date entry required in
    the correct format?
  3. Have all pushbuttons on the
    screen been given appropriate Shortcut keys?
  4. Do the Shortcut keys work
    correctly?
  5. Have the menu options which
    apply to your screen got fast keys associated and should they have?
  6. Does the Tab Order specified
    on the screen go in sequence from Top Left to bottom right? This is the
    default unless otherwise specified.
  7. Are all read-only fields
    avoided in the TAB sequence?
  8. Are all disabled fields
    avoided in the TAB sequence?
  9. Can the cursor be placed in
    the microhelp text box by clicking on the text box with the mouse?
  10. Can the cursor be placed in
    read-only fields by clicking in the field with the mouse?
  11. Is the cursor positioned in
    the first input field or control when the screen is opened?
  12. Is there a default button
    specified on the screen?
  13. Does the default button work
    correctly?
  14. When an error message occurs
    does the focus return to the field in error when the user cancels it?
  15. When the user Alt+Tabs to another application, does this have any impact on the screen upon return to the application?
  16. Do all the field edit boxes indicate the number of characters they will hold by their length? E.g. a 30-character field should be visibly longer than a 10-character one.


 



2.5. Data Integrity Conditions:



  1. Is the data saved when the
    window is closed by double clicking on the close box?
  2. Check the maximum field
    lengths to ensure that there are no truncated characters?
  3. Where the database requires
    a value (other than null) then this should be defaulted into fields. The
    user must either enter an alternative valid value or leave the default
    value intact.
  4. Check maximum and minimum
    field values for numeric fields?
  5. If numeric fields accept
    negative values can these be stored correctly on the database and does it
    make sense for the field to accept negative numbers?
  6. If a set of radio buttons
    represent a fixed set of values such as A, B and C then what happens if a
    blank value is retrieved from the database? (In some situations rows can
    be created on the database by other functions which are not screen based
    and thus the required initial values can be incorrect.)
  7. If a particular set of data
    is saved to the database check that each value gets saved fully to the
    database. i.e. Beware of truncation (of strings) and rounding of numeric
    values.


 



2.6. Modes (Editable Read-only) Conditions:



  1. Are the screen and field
    colours adjusted correctly for read-only mode?
  2. Should a read-only mode be
    provided for this screen?
  3. Are all fields and controls
    disabled in read-only mode?
  4. Can the screen be accessed
    from the previous screen/menu/toolbar in read-only mode?
  5. Can all screens available
    from this screen be accessed in read-only mode?
  6. Check that no validation is
    performed in read-only mode.


 



2.7. General Conditions:



  1. Assure the existence of the
    "Help" menu.
  2. Assure that the proper
    commands and options are in each menu.
  3. Assure that all buttons on all tool bars have a corresponding key command.
  4. Assure that each menu command has an alternative (hot-key) sequence which will invoke it where appropriate.
  5. In drop down list boxes,
    ensure that the names are not abbreviations / cut short
  6. In drop down list boxes,
    assure that the list and each entry in the list can be accessed via
    appropriate key / hot key combinations.
  7. Ensure that duplicate hot
    keys do not exist on each screen
  8. Ensure proper usage of the Escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no"
  9. Assure that the Cancel button functions the same as the Escape key.
  10. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
  11. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present - i.e. make sure they don't act on the screen behind the current screen.
  12. When a command button is used
    sometimes and not at other times, assure that it is grayed out when it
    should not be used.
  13. Assure that OK and Cancel
    buttons are grouped separately from other command buttons.
  14. Assure that command button
    names are not abbreviations.
  15. Assure that all field
    labels/names are not technical labels, but rather are names meaningful to
    system users.
  16. Assure that command buttons
    are all of similar size and shape, and same font & font size.
  17. Assure that each command
    button can be accessed via a hot key combination.
  18. Assure that command buttons
    in the same window/dialog box do not have duplicate hot keys.
  19. Assure that each
    window/dialog box has a clearly marked default value (command button, or
    other object) which is invoked when the Enter key is pressed - and NOT the
    Cancel or Close button
  20. Assure that focus is set to
    an object/button which makes sense according to the function of the
    window/dialog box.
  21. Assure that all option
    buttons (and radio buttons) names are not abbreviations.
  22. Assure that option button
    names are not technical labels, but rather are names meaningful to system
    users.
  23. If hot keys are used to
    access option buttons, assure that duplicate hot keys do not exist in the
    same window/dialog box.
  24. Assure that option box names
    are not abbreviations.
  25. Assure that option boxes,
    option buttons, and command buttons are logically grouped together in
    clearly demarcated areas "Group Box"
  26. Assure that the Tab key
    sequence which traverses the screens does so in a logical way.
  27. Assure consistency of mouse
    actions across windows.
  28. Assure that the color red is
    not used to highlight active objects (many individuals are red-green color
    blind).
  29. Assure that the user will
    have control of the desktop with respect to general color and highlighting
    (the application should not dictate the desktop background
    characteristics).
  30. Assure that the screen/window
    does not have a cluttered appearance
  31. Ctrl + F6 opens next tab
    within tabbed window
  32. Shift + Ctrl + F6 opens
    previous tab within tabbed window
  33. Tabbing will open next tab
    within tabbed window if on last field of current tab
  34. Tabbing will go onto the
    'Continue' button if on last field of last tab within tabbed window
  35. Tabbing will go onto the next
    editable field in the window
  36. Banner style, size & display exactly the same as in existing windows
  37. If 8 or less options in a
    list box, display all options on open of list box - should be no need to
    scroll
  38. Errors on continue will cause
    user to be returned to the tab and the focus should be on the field
    causing the error. (i.e the tab is opened, highlighting the field with the
    error on it)
  39. Pressing continue while on
    the first tab of a tabbed window (assuming all fields filled correctly)
    will not open all the tabs.
  40. On open of tab focus will be
    on first editable field
  41. All fonts to be the same
  42. Alt+F4 will close the tabbed
    window and return you to main screen or previous screen (as appropriate),
    generating "changes will be lost" message if necessary.
  43. Microhelp text for every
    enabled field & button
  44. Ensure all fields are
    disabled in read-only mode
  45. Progress messages on load of
    tabbed screens
  46. Return operates continue
  47. If retrieve on load of tabbed
    window fails window should not open


 



2.8. Specific Field Tests



 



2.8.1. Date Field Checks


  • Assure that leap years are
    validated correctly & do not cause errors/miscalculations
  • Assure that month code 00 and
    13 are validated correctly & do not cause errors/miscalculations
  • Assure that 00 and 13 are reported
    as errors
  • Assure that day values 00 and
    32 are validated correctly & do not cause errors/miscalculations
  • Assure that Feb. 28, 29, 30
    are validated correctly & do not cause errors/ miscalculations
  • Assure that Feb. 30 is
    reported as an error
  • Assure that century change is
    validated correctly & does not cause errors/ miscalculations
  • Assure that out of cycle
    dates are validated correctly & do not cause errors/miscalculations
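
These date checks translate directly into unit tests. A minimal sketch in Python, using the standard datetime module as the oracle; the validate_date() helper is hypothetical and would wrap the real field's validation logic:

```python
# A minimal sketch of the date boundary cases above. validate_date() is
# hypothetical - in a real project it would wrap the date field's validation.
from datetime import date

def validate_date(year: int, month: int, day: int) -> bool:
    """Return True if the components form a real calendar date."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

assert validate_date(2000, 2, 29)        # leap year (divisible by 400)
assert not validate_date(1900, 2, 29)    # not a leap year (divisible by 100, not 400)
assert validate_date(2011, 2, 28)        # Feb 28 is always valid
assert not validate_date(2011, 2, 30)    # Feb 30 must be reported as an error
assert not validate_date(2011, 0, 15)    # month code 00
assert not validate_date(2011, 13, 15)   # month code 13
assert not validate_date(2011, 6, 0)     # day value 00
assert not validate_date(2011, 6, 32)    # day value 32
assert validate_date(1999, 12, 31) and validate_date(2000, 1, 1)  # century change
```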


 



2.8.2. Numeric Fields


  • Assure that lowest and
    highest values are handled correctly
  • Assure that invalid values
    are logged and reported
  • Assure that valid values are handled by the correct procedure
  • Assure that numeric fields with a blank in position 1 are processed or reported as an error
  • Assure that fields with a blank in the last position are processed or reported as an error
  • Assure that both + and -
    values are correctly processed
  • Assure that division by zero
    does not occur
  • Include value zero in all
    calculations
  • Include at least one in-range
    value
  • Include maximum and minimum
    range values
  • Include out of range values
    above the maximum and below the minimum
  • Assure that upper and lower
    values in ranges are handled correctly
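
A minimal sketch of the numeric boundary cases above, assuming a hypothetical amount field with a range of -9999.99 to 9999.99; accepts() stands in for whatever drives the real field:

```python
# Boundary-value cases for a hypothetical numeric field.
MIN_VALUE, MAX_VALUE = -9999.99, 9999.99

def accepts(value: float) -> bool:
    """Stand-in for the field's validation: in-range values pass."""
    return MIN_VALUE <= value <= MAX_VALUE

cases = [
    (0, True),                  # include the value zero in all calculations
    (123.45, True),             # at least one in-range value
    (-123.45, True),            # both + and - values
    (MIN_VALUE, True),          # minimum boundary
    (MAX_VALUE, True),          # maximum boundary
    (MIN_VALUE - 0.01, False),  # just below the minimum
    (MAX_VALUE + 0.01, False),  # just above the maximum
]
for value, expected in cases:
    assert accepts(value) == expected, f"boundary failure at {value}"
```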


 



2.8.3. Alpha Field Checks


  • Use blank and non-blank data
  • Include lowest and highest
    values
  • Include invalid characters
    & symbols
  • Include valid characters
  • Include data items with
    first position blank
  • Include data items with last
    position blank
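
The same pattern applies to alpha fields. A short sketch, with a hypothetical is_valid_name() standing in for the field's real validation:

```python
# Alpha-field cases from the checklist. is_valid_name() is hypothetical.
def is_valid_name(text: str) -> bool:
    return bool(text) and text == text.strip() and text.replace(" ", "").isalpha()

assert not is_valid_name("")        # blank data
assert is_valid_name("Mary")        # valid characters
assert not is_valid_name("M@ry!")   # invalid characters & symbols
assert not is_valid_name(" Mary")   # first position blank
assert not is_valid_name("Mary ")   # last position blank
```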


 













Section 3 - Validation Testing - Standard Actions



3.1. Examples of Standard Actions - Substitute your specific commands


Add

View

Change

Delete

Continue - (i.e. continue saving changes or additions)



Add

View

Change

Delete

Cancel - (i.e. abandon changes or additions)



Fill each field - Valid data

Fill each field - Invalid data



Different Check Box / Radio Box combinations



Scroll Lists / Drop Down List Boxes



Help



Fill Lists and Scroll



Tab



Tab Sequence



Shift Tab



 



3.2. Shortcut keys / Hot Keys



Note: The following keys are used in some windows applications, and are
included as a guide.


 

| Key | No Modifier | Shift | CTRL | ALT |
| --- | --- | --- | --- | --- |
| F1 | Help | Enter Help Mode | n/a | n/a |
| F2 | n/a | n/a | n/a | n/a |
| F3 | n/a | n/a | n/a | n/a |
| F4 | n/a | n/a | Close Document / Child window | Close Application |
| F5 | n/a | n/a | n/a | n/a |
| F6 | n/a | n/a | n/a | n/a |
| F7 | n/a | n/a | n/a | n/a |
| F8 | Toggle extend mode, if supported | Toggle Add mode, if supported | n/a | n/a |
| F9 | n/a | n/a | n/a | n/a |
| F10 | Toggle menu bar activation | n/a | n/a | n/a |
| F11, F12 | n/a | n/a | n/a | n/a |
| Tab | Move to next active/editable field | Move to previous active/editable field | Move to next open Document or Child window (adding SHIFT reverses the order of movement) | Switch to previously used application (holding down the ALT key displays all open applications) |
| Alt | Puts focus on first menu command (e.g. 'File') | n/a | n/a | n/a |







3.3. Control Shortcut Keys

 

| Key | Function |
| --- | --- |
| CTRL + Z | Undo |
| CTRL + X | Cut |
| CTRL + C | Copy |
| CTRL + V | Paste |
| CTRL + N | New |
| CTRL + O | Open |
| CTRL + P | Print |
| CTRL + S | Save |
| CTRL + B | Bold* |
| CTRL + I | Italic* |
| CTRL + U | Underline* |




* These shortcuts are suggested for text-formatting applications, in the context for which they make sense. Applications may use other modifiers for these operations.




















4. Origin & Inspiration



I first conceived this checklist for training testers who
were going to be working on a new PowerBuilder

application (on a Windows for Workgroups platform) to be used in the initial
screen validation testing phase.



The initial input for the list came from a site I found on the web (see below); the remainder of the checklist I made up from internal & external design standards, and experience gained over the last few years working in QA.



A long time ago I replied to a query (on comp.software.testing) regarding gui/client-server testing, and when I mentioned this checklist I got quite a few requests for copies; as a result I decided to publish it on the web. I hope it can be of some use to you, and if you have any suggestions - or if anyone wants to update this to Win95/98/NT/XP standards (or even Mac) - I'd love if they'd send me a copy!



thanks, and good luck.

 Bazman  



Important Considerations for Test Automation



 



Often when a test automation tool is introduced to a project, the expectations for the return on investment are very high. Project members anticipate that the tool will immediately narrow the testing scope, reducing cost and schedule. However, I have seen several test automation projects fail - miserably.



 



The following very simple factors largely influence the effectiveness of automated testing; if they are not taken into account, the result is usually a lot of lost effort, and very expensive 'shelfware'.



 



 



  1. Scope -
    It is not practical to try to automate everything, nor is there the time
    available generally. Pick very carefully the functions/areas of the
    application that are to be automated.


 



  2. Preparation Timeframe - The preparation time for automated test scripts has to be taken into account. In general, the preparation time for automated scripts can be up to 2-3 times longer than for manual testing. In reality, chances are that initially the tool will actually increase the testing scope. It is therefore very important to manage expectations. An automated testing tool does not replace manual testing, nor does it replace the test engineer. Initially, the test effort will increase, but when automation is done correctly it will decrease on subsequent releases.


 



  3. Return on Investment - Because the preparation time for test automation is so long, I have heard it stated that the benefit of the test automation only begins to occur after approximately the third time the tests have been run.


 



  4. When is the benefit to be gained? - Choose your objectives wisely, and seriously think about when & where the benefit is to be gained. If your application is changing significantly and regularly, forget about test automation - you will spend so much time updating your scripts that you will not reap many benefits. [However, if only disparate sections of the application are changing, or the changes are minor - or if there is a specific section that is not changing - you may still be able to successfully utilise automated tests.] Bear in mind that you may only ever be able to do a complete automated test run when your application is almost ready for release - i.e. nearly fully tested!! If your application is very buggy, then the likelihood is that you will not be able to run a complete suite of automated tests - due to the failing functions encountered.


 



  5. The Degree of Change - The best use of test automation is for regression testing, whereby you use automated tests to ensure that pre-existing functions (e.g. functions from version 1.0 - i.e. not new functions in this release) are unaffected by any changes introduced in version 1.1. And, since proper test automation planning requires that the test scripts are designed so that they are not totally invalidated by a simple gui change (such as renaming or moving a particular control), you need to take into account the time and effort required to update the scripts. For example, if your application is changing significantly, the scripts from version 1.0 may need to be completely re-written for version 1.1, and the effort involved may be prohibitive at worst, or simply not taken into account at best! However, if only disparate sections of the application are changing, or the changes are minor, you should be able to successfully utilise automated tests to regress these areas.


 



  6. Test Integrity - How do you know (measure) whether a test passed or failed? Just because the tool returns a 'pass' does not necessarily mean that the test itself passed. For example, just because no error message appears does not mean that the next step in the script successfully completed. This needs to be taken into account when specifying test script fail/pass criteria.


 



  7. Test Independence - Test independence must be built in so that a failure in the first test case won't cause a domino effect and either prevent, or cause to fail, the rest of the test scripts in that test suite. However, in practice this is very difficult to achieve (see the sketch after this list).


 



  8. Debugging or "testing" of the actual test scripts themselves - Time must be allowed for this, and to prove the integrity of the tests themselves.


 



  9. Record & Playback - DO NOT RELY on record & playback as the SOLE means to generate a script. The idea is great: you execute the test manually while the test tool sits in the background and remembers what you do, then it generates a script that you can run to re-execute the test. It's a great idea - that rarely works (and proves very little).


 



  10. Maintenance of Scripts - Finally, there is a high maintenance overhead for automated test scripts - they have to be continuously kept up to date, otherwise you will end up abandoning hundreds of hours of work because there have been too many changes to an application to make modifying the test script worthwhile. As a result, it is important that the documentation of the test scripts is kept up to date also.
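
Point 7 above (Test Independence) can be illustrated at the code level. A minimal pytest sketch where every test builds its own state through a fixture, so one failure cannot domino into the next; fresh_cart() is a hypothetical stand-in for per-test setup and teardown:

```python
# Each test gets a fresh fixture, so tests do not depend on each other.
import pytest

@pytest.fixture
def fresh_cart():
    cart = []          # build state from scratch for every test
    yield cart
    cart.clear()       # teardown runs even if the test failed

def test_add_item(fresh_cart):
    fresh_cart.append("book")
    assert len(fresh_cart) == 1

def test_remove_item(fresh_cart):
    # does not depend on test_add_item having run or passed
    fresh_cart.append("book")
    fresh_cart.remove("book")
    assert fresh_cart == []
```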


 



 



Monday, May 3, 2010

How to write a good bug report? Tips and Tricks

Why good Bug report?
If your bug report is effective, chances are higher that it will get fixed - so fixing a bug depends on how effectively you report it. Reporting a bug is a skill, and I will tell you how to develop it.

“The point of writing a problem report (bug report) is to get bugs fixed” - Cem Kaner. If a tester does not report a bug correctly, the programmer will most likely reject it as irreproducible. This can hurt the tester's morale, and sometimes the ego too. (I suggest you keep ego out of it - thoughts like “I reported the bug correctly”, “I can reproduce it”, “Why has he/she rejected the bug?”, “It's not my fault”, etc.)

What are the qualities of a good software bug report?
Anyone can write a bug report. But not everyone can write an effective bug report. You should be able to distinguish between an average bug report and a good one. How do you distinguish a good bug report from a bad one? It's simple: apply the following characteristics and techniques when you report a bug.

1) Having clearly specified bug number:
Always assign a unique number to each bug report. This will help to identify the bug record. If you are using any automated bug-reporting tool then this unique number will be generated automatically each time you report the bug. Note the number and brief description of each bug you reported.

2) Reproducible:
If your bug is not reproducible it will never get fixed. You should clearly mention the steps to reproduce the bug. Do not assume or skip any reproducing step. Step by step described bug problem is easy to reproduce and fix.

3) Be Specific:
Do not write an essay about the problem. Be specific and to the point. Try to summarize the problem in a minimum of words yet in an effective way. Do not combine multiple problems even if they seem similar. Write a separate report for each problem.

How to Report a Bug?

Use following simple Bug report template:
This is a simple bug report format. It may vary depending on the bug-reporting tool you are using. If you are writing the bug report manually, then some fields, like the bug number, need to be mentioned and assigned explicitly.

Reporter: Your name and email address.

Product: The product in which you found this bug.

Version: The product version if any.

Component: These are the major sub modules of the product.

Platform: Mention the hardware platform where you found this bug. The various platforms like ‘PC’, ‘MAC’, ‘HP’, ‘Sun’ etc.

Operating system: Mention all operating systems where you found the bug. Operating systems like Windows, Linux, Unix, SunOS, Mac OS. Mention the different OS versions also if applicable like Windows NT, Windows 2000, Windows XP etc.

Priority:
When should the bug be fixed? Priority is generally set from P1 to P5: P1 means “fix the bug with the highest priority” and P5 means “fix when time permits”.

Severity:
This describes the impact of the bug.
Types of Severity:

* Blocker: No further testing work can be done.
* Critical: Application crash, Loss of data.
* Major: Major loss of function.
* Minor: minor loss of function.
* Trivial: Some UI enhancements.
* Enhancement: Request for new feature or some enhancement in existing one.

Status:
When you are logging the bug in any bug tracking system, the bug status is ‘New’ by default.
Later on, the bug goes through various stages, like Fixed, Verified, Reopened, Won’t Fix, etc.

Assign To:
If you know which developer is responsible for the particular module in which the bug occurred, then you can specify the email address of that developer. Otherwise keep it blank; the bug will be assigned to the module owner, or the manager will assign it to a developer. Possibly add the manager's email address to the CC list.

URL:
The page url on which bug occurred.

Summary:

A brief summary of the bug, ideally in 60 words or fewer. Make sure your summary reflects what the problem is and where it is.

Description:
A detailed description of the bug. Use the following fields within the description:

* Reproduce steps: Clearly mention the steps to reproduce the bug.
* Expected result: How application should behave on above mentioned steps.
* Actual result: What is the actual result on running above steps i.e. the bug behavior.

These are the important parts of a bug report. You can also add a “Report type” field, which describes the bug type.

The report types are typically:
1) Coding error
2) Design error
3) New suggestion
4) Documentation issue
5) Hardware problem
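
For tool-based logging, the template above maps naturally onto a structured record. A minimal Python sketch; the field names follow the template, and the defaults and enumerated values are illustrative, not an authoritative schema:

```python
# A structured version of the bug report template above.
from dataclasses import dataclass, field

@dataclass
class BugReport:
    bug_id: int                      # unique, clearly specified bug number
    reporter: str                    # your name and email address
    product: str
    version: str
    component: str
    platform: str                    # e.g. 'PC', 'Mac'
    operating_system: str            # e.g. 'Windows XP'
    priority: str                    # 'P1' (fix first) .. 'P5' (when time permits)
    severity: str                    # Blocker/Critical/Major/Minor/Trivial/Enhancement
    status: str = "New"              # default for a freshly logged bug
    assign_to: str = ""              # developer email, if known
    url: str = ""
    summary: str = ""                # 60 words or fewer
    reproduce_steps: list[str] = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
```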

Some Bonus tips to write a good bug report:

1) Report the problem immediately: If you find a bug while testing, do not wait to write a detailed bug report later. Write the bug report immediately. This will ensure a good and reproducible bug report. If you decide to write the bug report later, chances are high that you will miss important steps in your report.

2) Reproduce the bug three times before writing the bug report: Your bug should be reproducible. Make sure your steps are robust enough to reproduce the bug without any ambiguity. If your bug is not reproducible every time, you can still file it, mentioning the periodic nature of the bug.

3) Test the same bug occurrence on other similar module:
Sometimes developers use the same code for different, similar modules, so chances are high that a bug in one module will occur in similar modules as well. You can even try to find a more severe version of the bug you found.

4) Write a good bug summary:
A bug summary will help developers quickly analyze the nature of the bug. A poor-quality report will unnecessarily increase development and testing time. Communicate well through your bug report summary. Keep in mind that the bug summary is used as a reference when searching for the bug in the bug inventory.

5) Read bug report before hitting Submit button:
Read all sentences, wording, and steps used in the bug report. See if any sentence creates ambiguity that could lead to misinterpretation. Misleading words or sentences should be avoided in order to have a clear bug report.

6) Do not use Abusive language:
It’s nice that you did good work and found a bug, but do not use this credit to criticize the developer or attack any individual.

Conclusion:

No doubt your bug report should be a high-quality document. Focus on writing good bug reports and spend some time on this task, because this is the main communication point between tester, developer, and manager. Managers should make their teams aware that writing a good bug report is a primary responsibility of any tester. Your efforts towards writing good bug reports will not only save company resources but also create a good relationship between you and the developers.

For better productivity write a better bug report.




Ref: http://www.softwaretestinghelp.com/how-to-write-good-bug-report/

Wednesday, September 16, 2009

Testing Without a Formal Test Plan

A formal test plan is a document that provides and records important information about a test project, for example:

project and quality assumptions
project background information
resources
schedule & timeline
entry and exit criteria
test milestones
tests to be performed
use cases and/or test cases
For a range of reasons -- both good and bad -- many software and web development projects don't budget enough time for complete and comprehensive testing. A quality test team must be able to test a product or system quickly and constructively in order to provide some value to the project. This essay describes how to test a web site or application in the absence of a detailed test plan and facing short or unreasonable deadlines.

Identify High-Level Functions First
High-level functions are those functions that are most important to the central purpose(s) of the site or application. A test plan would typically provide a breakdown of an application's functional groups as defined by the developers; for example, the functional groups of a commerce web site might be defined as shopping cart application, address book, registration/user information, order submission, search, and online customer service chat. If this site's purpose is to sell goods online, then you have a quick-and-dirty prioritization of:

shopping cart
registration/user information
order submission
address book
search
online customer service chat
I've prioritized these functions according to their significance to a user's ability to complete a transaction. I've ignored some of the lower-level functions for now, such as the modify shopping cart quantity and edit saved address functions because they are a little less important than the higher-level functions from a test point-of-view at the beginning of testing.

Your opinion of the prioritization may disagree with mine, but the point here is that time is critical and in the absence of defined priorities in a test plan, you must test something now. You will make mistakes, and you will find yourself making changes once testing has started, but you need to determine your test direction as soon as possible.

Test Functions Before Display
Any web site should be tested for cross-browser and cross-platform compatibility -- this is a primary rule of web site quality assurance. However, wait on the compatibility testing until after the site can be verified to just plain work. Test the site's functionality using a browser/OS/platform that is expected to work correctly -- use what the designers and coders use to review their work.

Running through the site or application first with known-good client configurations allows testers to focus on the way the site functions, and on the more important class of functional defects and problems early in the test project. Spend time up front identifying and reporting those functional-level defects, and the developers will have more time to effectively fix and iteratively deliver new code levels to QA.

If your test team will not be able to exhaustively test a site or application -- and the premise of this essay is that your time is extremely short and you are testing without a formal plan -- you must first identify whether the damned thing can work, and then move on from there.

Concentrate on Ideal Path Actions First
Ideal paths are those actions and steps most likely to be performed by users. For example, on a typical commerce site, a user is likely to

identify an item of interest
add that item to the shopping cart
buy it online with a credit card
ship it to himself/herself
Now, this describes what the user would want to do, but many sites require a few more functions, so the user must go through some more steps, for example:

login to an existing registration account (if one exists)
register as a user if no account exists
provide billing & bill-to address information
provide ship-to address information
provide shipping & shipping method information
provide payment information
agree or disagree to receiving site emails and newsletters
Most sites offer (or force) an even wider range of actions on the user:

change product quantity in the shopping cart
remove product from shopping cart
edit user information (or ship-to information or bill-to information)
save default information (like default shipping preferences or credit card information)
All of these actions and steps may be important to some users some of the time (and some developers and marketers all of the time), but the majority of users will not use every function every time. Focus on the ideal path and identify those factors most likely to be used in a majority of user interactions.

Assume a user who knows what s/he wants to do, and so is not going to choose the wrong action for the task they want to complete. Assume the user won't make common data entry and interface control errors. Assume the user will accept any default form selections -- this means that if a checkbox is checked, the user will leave it checked; if a radio button is selected to a meaningful selection, the user will let that ride. This doesn't mean that non-values that are defaulted -- such as the drop-down menu that shows a "select one" value -- will be left as-is to force errors. The point here is to keep it simple and lowest-common-denominator and not force errors. Test as though everything is right in the world, life is beautiful, and your project manager is Candide.

Once the ideal paths have been tested, focus on secondary paths involving the lower-level functions or actions and steps that are less frequent but still reasonable variations.

Forcing errors comes later, if you have time.

Concentrate on Intrinsic Factors First
Intrinsic factors are those factors or characteristics that are part of the system or product being tested. An intrinsic factor is an internal factor. So, for a typical commerce site, the HTML page code that the browser uses to display the shopping cart pages is intrinsic to the site: change the page code and the site itself is changed. The code logic called by a submit button is intrinsic to the site.

Extrinsic factors are external to the site or application. Your crappy computer with only 8 megs of RAM is extrinsic to the site, so your home computer can crash without affecting the commerce site, and adding more memory to your computer doesn't mean a whit to the commerce site or its functioning.

Given a severe shortage of test time, focus first on factors intrinsic to the site:

does the site work?
do the functions work? (again with the functionality, because it is so basic)
do the links work?
are the files present and accounted for?
are the graphics MIME types correct? (I used to think that this couldn't be screwed up)
Once the intrinsic factors are squared away, then start on the extrinsic points:

cross-browser and cross-platform compatibility
clients with cookies disabled
clients with javascript disabled
monitor resolution
browser sizing
connection speed differences
The point here is that with myriad possible client configurations and user-defined environmental factors to think about, think first about those that relate to the product or application itself. When you run out of time, better to know that the system works rather than that all monitor resolutions safely render the main pages.

Boundary Test From Reasonable to Extreme
You can't just verify that an application works correctly if all input and all actions have been correct. People do make mistakes, so you must test error handling and error states. The systematic testing of error handling is called boundary testing (actually, boundary testing describes much more, but this is enough for this discussion).

During your pedal-to-the-floor, no-test-plan testing project, boundary testing refers to the testing of forms and data inputs, starting from known good values, and progressing through reasonable but invalid inputs all the way to known extreme and invalid values.

The logic for boundary testing forms is straightforward: start with known good and valid values because if the system chokes on that, it's not ready for testing. Move through expected bad values because if those fail, the system isn't ready for testing. Try reasonable and predictable mistakes because users are likely to make such mistakes -- we all screw up on forms eventually. Then start hammering on the form logic with extreme errors and crazy inputs in order to catch problems that might affect the site's functioning.

Good Values
Enter in data formatted as the interface requires. Include all required fields. Use valid and current information (what "valid and current" means will depend on the test system, so some systems will have a set of data points that are valid for the context of that test system). Do not try to cause errors.

Expected Bad Values
Some invalid data entries are intrinsic to the interface and concept domain. For example, any credit card information form will expect expired credit card dates -- and should trap for them. Every form that specifies some fields as required should trap for those fields being left blank. Every form that has drop-down menus that default to an instruction ("select one", etc.) should trap for that instruction. What about punctuation in name fields?

Reasonable and Predictable Mistakes
People will make some mistakes based on the design of the form, the implementation of the interface, or the interface's interpretation of the relevant concept domain(s). For example, people will inadvertently enter in trailing or leading spaces into form fields. People might enter a first and middle name into a first name form field ("Mary Jane").

Not a mistake, per se, but how does the form field handle case? Is the information case-sensitive? Or does the address form handle a PO address? Does the address form handle a business name?

Extreme Errors and Crazy Inputs
And finally, given time, try to kill the form by entering in extreme crap. Test the maximum size of inputs, test long strings of garbage, put numbers in text fields and text in numeric fields.

Everyone's favorite: enter in HTML code. Put your name in BLINK tags, enter in an IMG tag for a graphic from a competitor's site.

Enter in characters that have special meaning in a particular OS (I once crashed a server by using characters this way in a form field).

But remember, even if you kill the site with an extreme data input, the priority is handling errors that are more likely to occur. Use your time wisely and proceed from most likely to less likely.
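
The good-to-extreme progression can be captured as tiers of test data. A minimal sketch for a single hypothetical "first name" field; submit_form() is a toy stand-in for driving the real form:

```python
# Tiers of inputs, ordered from most likely to least likely.
GOOD = ["Mary"]
EXPECTED_BAD = [""]                          # required field left blank
REASONABLE_MISTAKES = [" Mary", "Mary ",     # leading/trailing spaces
                       "Mary Jane"]          # first + middle name in one field
EXTREME = ["x" * 10_000,                     # maximum-size input
           "<blink>Mary</blink>",            # HTML injection
           "12345", "; rm -rf /"]            # numbers, OS-special characters

def submit_form(first_name: str) -> bool:
    """Toy stand-in for the real form: alphabetic, no spaces, 1-50 chars."""
    return 0 < len(first_name) <= 50 and first_name.isalpha()

tiers = [
    (GOOD, True),
    (EXPECTED_BAD, False),
    (REASONABLE_MISTAKES, False),
    (EXTREME, False),
]
for values, should_pass in tiers:            # stop when time runs out
    for value in values:
        assert submit_form(value) == should_pass, f"unexpected result for {value!r}"
```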

Compatibility Test From Good to Bad
Once you get to cross-browser and cross-platform compatibility testing, follow the same philosophy of starting with the most important (as defined by prevalence among expected user base) or most common based on prior experience and working towards the less common and less important.

Do not make the assumption that because a site was designed for a previous version of a browser, OS, or platform it will also work on newer releases. Instead, make a list of the browsers and operating systems in order of popularity on the Internet in general, and then move those that are of special importance to your site (or your marketers and/or executives) to the top of the list.

The most important few configurations should be used for functional testing, then start looking for deviations in performance or behavior as you work down the list. When you run out of time, you want to have completed the more important configurations. You can always test those configurations that attract .01 percent of your user base after you launch.
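
A minimal sketch of this priority-ordered approach, assuming Selenium WebDriver is available (a tool the original essay predates); the browser list, URL, and expected title are placeholders, ordered by prevalence among your expected user base:

```python
# Run the same smoke test across browsers, most important first.
from selenium import webdriver

BROWSERS_BY_PRIORITY = [
    ("Chrome", webdriver.Chrome),
    ("Firefox", webdriver.Firefox),
    ("Edge", webdriver.Edge),
]

def smoke_test(driver) -> None:
    """Placeholder for the functional checks run on each configuration."""
    driver.get("https://example.com/cart")    # hypothetical page
    assert "Shopping Cart" in driver.title    # hypothetical expected title

for name, make_driver in BROWSERS_BY_PRIORITY:
    driver = make_driver()
    try:
        smoke_test(driver)    # deviations here are configuration-specific bugs
    finally:
        driver.quit()         # always release the browser
```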

The Drawbacks of This Testing Approach
Many projects are not mature and are not rational (at least from the point-of-view of the quality assurance team), and so the test team must scramble to test as effectively as possible within a very short time frame. I've spelled out how to test quickly without a structured test plan, and this method is much better than chaos and somewhat better than letting the developers tell you what and how to test.

This approach has definite quality implications:

Incomplete functional coverage -- this is no way to exercise all of the software's functions comprehensively.
No risk management -- this is no way to measure overall risk issues regarding code coverage and quality metrics. Effective quality assurance measures quality over time and starting from a known base of evaluation.
Too little emphasis on user tasks -- because testers will focus on ideal paths instead of real paths. With no time to prepare, ideal paths are defined according to best guesses or developer feedback rather than by careful consideration of how users will understand the system or how users understand real-world analogues to the application tasks. With no time to prepare, testers will be using a very restricted set of input data, rather than using real data (from user activity logs, from logical scenarios, from careful consideration of the concept domain).
Difficulty reproducing -- because testers are making up the tests as they go along, reproducing the specific errors found can be difficult, but also reproducing the tests performed will be tough. This will cause problems when trying to measure quality over successive code cycles.
Project management may believe that this approach to testing is good enough -- because you can do some good testing by following this process, management may assume that full and structured testing, along with careful test preparation and test results analysis, isn't necessary. That misapprehension is a very bad sign for the continued quality of any product or web site.
Inefficient over the long term -- quality assurance involves a range of tasks and foci. Effective quality assurance programs expand their base of documentation on the product and on the testing process over time, increasing the coverage and granularity of tests over time. Great testing requires good test setup and preparation, but success with the kind testplan-less approach described in this essay may reinforce bad project and test methodologies. A continued pattern of quick-and-dirty testing like this is a sign that the product or application is unsustainable in the long run.