
Wednesday, September 16, 2009

Testing Without a Formal Test Plan

A formal test plan is a document that provides and records important information about a test project, for example:

project and quality assumptions
project background information
resources
schedule & timeline
entry and exit criteria
test milestones
tests to be performed
use cases and/or test cases
For a range of reasons -- both good and bad -- many software and web development projects don't budget enough time for complete and comprehensive testing. A quality test team must be able to test a product or system quickly and constructively in order to provide some value to the project. This essay describes how to test a web site or application in the absence of a detailed test plan and in the face of short or unreasonable deadlines.

Identify High-Level Functions First
High-level functions are those functions that are most important to the central purpose(s) of the site or application. A test plan would typically provide a breakdown of an application's functional groups as defined by the developers; for example, the functional groups of a commerce web site might be defined as shopping cart application, address book, registration/user information, order submission, search, and online customer service chat. If this site's purpose is to sell goods online, then you have a quick-and-dirty prioritization of:

shopping cart
registration/user information
order submission
address book
search
online customer service chat
I've prioritized these functions according to their significance to a user's ability to complete a transaction. I've ignored some of the lower-level functions for now, such as modifying shopping cart quantities and editing saved addresses, because at the beginning of testing they are a little less important than the higher-level functions from a test point of view.

Your prioritization may differ from mine, but the point here is that time is critical and, in the absence of defined priorities in a test plan, you must test something now. You will make mistakes, and you will find yourself making changes once testing has started, but you need to determine your test direction as soon as possible.

Test Functions Before Display
Any web site should be tested for cross-browser and cross-platform compatibility -- this is a primary rule of web site quality assurance. However, wait on the compatibility testing until after the site can be verified to just plain work. Test the site's functionality using a browser/OS/platform that is expected to work correctly -- use what the designers and coders use to review their work.

Running through the site or application first with known-good client configurations lets testers concentrate on the way the site functions and on the more important class of functional defects and problems early in the test project. Spend time up front identifying and reporting those functional-level defects, and the developers will have more time to fix them and iteratively deliver new code levels to QA.

If your test team will not be able to exhaustively test a site or application -- and the premise of this essay is that your time is extremely short and you are testing without a formal plan -- you must first identify whether the damned thing can work, and then move on from there.

Concentrate on Ideal Path Actions First
Ideal paths are those actions and steps most likely to be performed by users. For example, on a typical commerce site, a user is likely to

identify an item of interest
add that item to the shopping cart
buy it online with a credit card
ship it to himself/herself
Now, this describes what the user would want to do, but many sites require a few more functions, so the user must go through some more steps, for example:

login to an existing registration account (if one exists)
register as a user if no account exists
provide billing & bill-to address information
provide ship-to address information
provide shipping & shipping method information
provide payment information
opt in or out of receiving site emails and newsletters
Most sites offer (or force) an even wider range of actions on the user:

change product quantity in the shopping cart
remove product from shopping cart
edit user information (or ship-to information or bill-to information)
save default information (like default shipping preferences or credit card information)
All of these actions and steps may be important to some users some of the time (and some developers and marketers all of the time), but the majority of users will not use every function every time. Focus on the ideal path and identify those factors most likely to be used in a majority of user interactions.

Assume a user who knows what s/he wants to do, and so is not going to choose the wrong action for the task they want to complete. Assume the user won't make common data entry and interface control errors. Assume the user will accept any meaningful default form selections -- this means that if a checkbox is checked, the user will leave it checked; if a radio button defaults to a meaningful selection, the user will let that ride. This doesn't mean that defaulted non-values -- such as a drop-down menu that shows a "select one" value -- will be left as-is to force errors; the user would pick a real value. The point here is to keep it simple and lowest-common-denominator and not to force errors. Test as though everything is right in the world, life is beautiful, and your project manager is Candide.
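
To make the ideal path concrete, here is a minimal smoke-test sketch for the commerce example above. Everything in it -- the FakeStore stand-in, its method names, and the test data -- is a hypothetical illustration, not any particular site's or tool's API; only the shape of the test comes from the essay: follow the happy path, accept the defaults, and don't force errors.

class FakeStore:
    """Stands in for the real site so the sketch runs on its own."""

    def __init__(self):
        self.catalog = {"blue widget": 19.99}
        self.cart = {}
        self.logged_in = False

    def search_catalog(self, term):
        # identify an item of interest
        return term if term in self.catalog else None

    def add_to_cart(self, item, quantity=1):
        self.cart[item] = self.cart.get(item, 0) + quantity
        return self.cart

    def login(self, email, password):
        # a known-good, already-registered account
        self.logged_in = True

    def checkout(self, payment, ship_to):
        assert self.logged_in and self.cart
        return "ORDER-0001"   # a confirmation number means the path completed


def test_ideal_checkout_path(store):
    item = store.search_catalog("blue widget")            # find the item
    assert item is not None
    store.add_to_cart(item, quantity=1)                    # accept the default quantity
    store.login("known.good.user@example.com", "secret")   # existing registration
    confirmation = store.checkout(
        payment="known-good test card",                    # valid, current test data
        ship_to="account default address",                 # accept the default selections
    )
    assert confirmation                                     # the transaction completed


if __name__ == "__main__":
    test_ideal_checkout_path(FakeStore())
    print("ideal path OK")

In practice the FakeStore would be replaced by whatever driver the project already has -- an HTTP client, a browser-automation tool, or an in-house harness.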

Once the ideal paths have been tested, focus on secondary paths involving the lower-level functions or actions and steps that are less frequent but still reasonable variations.

Forcing errors comes later, if you have time.

Concentrate on Intrinsic Factors First
Intrinsic factors are those factors or characteristics that are part of the system or product being tested. An intrinsic factor is an internal factor. So, for a typical commerce site, the HTML page code that the browser uses to display the shopping cart pages is intrinsic to the site: change the page code and the site itself is changed. The code logic called by a submit button is intrinsic to the site.

Extrinsic factors are external to the site or application. Your crappy computer with only 8 megs of RAM is extrinsic to the site, so your home computer can crash without affecting the commerce site, and adding more memory to your computer doesn't matter a whit to the commerce site or its functioning.

Given a severe shortage of test time, focus first on factors intrinsic to the site (a quick automated sweep of a couple of these checks is sketched after the list):

does the site work?
do the functions work? (again with the functionality, because it is so basic)
do the links work?
are the files present and accounted for?
are the graphics MIME types correct? (I used to think that this couldn't be screwed up)
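
A couple of the intrinsic checks above -- do the links respond, and are the image MIME types sane? -- are easy to automate even without a test plan. Here is a rough sketch using only the Python standard library; the start URL is an assumption, so point it at your own test environment.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

START_URL = "http://test.example.com/"   # hypothetical test-environment URL


class LinkCollector(HTMLParser):
    """Pulls anchor hrefs and image srcs out of one page."""

    def __init__(self):
        super().__init__()
        self.links, self.images = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])
        if tag == "img" and attrs.get("src"):
            self.images.append(attrs["src"])


def fetch(url):
    with urlopen(url) as resp:            # raises on 4xx/5xx responses
        return resp.headers.get_content_type(), resp.read()


mime, body = fetch(START_URL)
collector = LinkCollector()
collector.feed(body.decode("utf-8", errors="replace"))

for href in collector.links:              # are the links alive?
    try:
        fetch(urljoin(START_URL, href))
    except Exception as exc:
        print("broken link:", href, "-", exc)

for src in collector.images:              # are the graphics served as images?
    image_mime, _ = fetch(urljoin(START_URL, src))
    if not image_mime.startswith("image/"):
        print("suspect MIME type for image:", src, "-", image_mime)

This won't replace looking at the pages, but it flags the plainly broken stuff while your attention stays on functionality.
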
Once the intrinsic factors are squared away, then start on the extrinsic points:

cross-browser and cross-platform compatibility
clients with cookies disabled
clients with javascript disabled
monitor resolution
browser sizing
connection speed differences
The point here is that with myriad possible client configurations and user-defined environmental factors to think about, think first about those that relate to the product or application itself. When you run out of time, better to know that the system works rather than that all monitor resolutions safely render the main pages.

Boundary Test From Reasonable to Extreme
You can't just verify that an application works correctly when all input and all actions have been correct. People make mistakes, so you must test error handling and error states. The systematic testing of error handling is called boundary testing (actually, boundary testing describes much more, but this is enough for this discussion).

During your pedal-to-the-floor, no-test-plan testing project, boundary testing refers to the testing of forms and data inputs, starting from known good values, and progressing through reasonable but invalid inputs all the way to known extreme and invalid values.

The logic for boundary testing forms is straightforward: start with known good and valid values, because if the system chokes on those, it's not ready for testing. Move on to expected bad values, because if the system doesn't trap those, it isn't ready for testing either. Try reasonable and predictable mistakes, because users are likely to make such mistakes -- we all screw up on forms eventually. Then start hammering on the form logic with extreme errors and crazy inputs in order to catch problems that might affect the site's functioning.

Good Values
Enter in data formatted as the interface requires. Include all required fields. Use valid and current information (what "valid and current" means will depend on the test system, so some systems will have a set of data points that are valid for the context of that test system). Do not try to cause errors.

Expected Bad Values
Some invalid data entries are intrinsic to the interface and concept domain. For example, any credit card information form will expect expired credit card dates -- and should trap for them. Every form that specifies some fields as required should trap for those fields being left blank. Every form that has drop-down menus that default to an instruction ("select one", etc.) should trap for that instruction. What about punctuation in name fields?

Reasonable and Predictable Mistakes
People will make some mistakes based on the design of the form, the implementation of the interface, or the interface's interpretation of the relevant concept domain(s). For example, people will inadvertently enter in trailing or leading spaces into form fields. People might enter a first and middle name into a first name form field ("Mary Jane").

Not a mistake, per se, but how does the form field handle case? Is the information case-sensitive? Does the address form handle a PO Box address? Does the address form handle a business name?

Extreme Errors and Crazy Inputs
And finally, given time, try to kill the form by entering in extreme crap. Test the maximum size of inputs, test long strings of garbage, put numbers in text fields and text in numeric fields.

Everyone's favorite: enter in HTML code. Put your name in BLINK tags, enter in an IMG tag for a graphic from a competitor's site.

Enter in characters that have special meaning in a particular OS (I once crashed a server by using characters this way in a form field).

But remember, even if you kill the site with an extreme data input, the priority is handling errors that are more likely to occur. Use your time wisely and proceed from most likely to less likely.
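
As an illustration, here is a minimal sketch of walking a single field -- say, the shopping cart quantity -- through those tiers in order. The validate_quantity function below is a hypothetical stand-in for whatever validation the real site performs; the point is the progression from known-good values out to crazy inputs, not the validator itself.

def validate_quantity(raw):
    """Hypothetical validator: accepts whole numbers from 1 to 99."""
    value = int(raw)                      # raises on non-numeric garbage
    if not 1 <= value <= 99:
        raise ValueError("quantity out of range")
    return value


tiers = [
    ("good values",            ["1", "2", "99"]),
    ("expected bad values",    ["0", "100", ""]),
    ("reasonable mistakes",    [" 3", "3 ", "two"]),
    ("extreme / crazy inputs", ["-1", "9" * 500, "<blink>hi</blink>", "1; DROP TABLE orders"]),
]

for tier_name, samples in tiers:
    for raw in samples:
        try:
            validate_quantity(raw)
            outcome = "accepted"
        except Exception as exc:
            outcome = "rejected ({})".format(type(exc).__name__)
        print(tier_name, repr(raw)[:30], "->", outcome)

Against the real form (through whatever driver you have), the same tiers apply: note what is accepted, what is trapped cleanly, and what slips through.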

Compatibility Test From Good to Bad
Once you get to cross-browser and cross-platform compatibility testing, follow the same philosophy: start with the most important configurations (as defined by prevalence among the expected user base, or the most common in your prior experience) and work towards the less common and less important.

Do not make the assumption that because a site was designed for a previous version of a browser, OS, or platform it will also work on newer releases. Instead, make a list of the browsers and operating systems in order of popularity on the Internet in general, and then move those that are of special importance to your site (or your marketers and/or executives) to the top of the list.

Use the most important few configurations for functional testing, then look for deviations in performance or behavior as you work down the list. When you run out of time, you want to have completed the more important configurations. You can always test those configurations that attract .01 percent of your user base after you launch.
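
Here is a hedged sketch of that ordering expressed as data. The browsers, share figures, and business-priority flags are made-up, 2009-flavored placeholders, not measurements; substitute your own analytics and your marketers' must-have list.

# Hypothetical configurations; "share" and "business_priority" are assumptions.
configs = [
    {"browser": "Internet Explorer 8", "os": "Windows XP",    "share": 0.34, "business_priority": True},
    {"browser": "Firefox 3.5",         "os": "Windows XP",    "share": 0.24, "business_priority": False},
    {"browser": "Internet Explorer 7", "os": "Windows Vista", "share": 0.18, "business_priority": False},
    {"browser": "Safari 4",            "os": "Mac OS X",      "share": 0.05, "business_priority": True},
    {"browser": "Opera 10",            "os": "Windows XP",    "share": 0.02, "business_priority": False},
]

# Business-critical configurations first, then everything else by popularity.
test_order = sorted(configs, key=lambda c: (not c["business_priority"], -c["share"]))
for c in test_order:
    print(c["browser"], "on", c["os"])

The top few entries get the full functional pass; the rest get a lighter look for display and behavioral deviations, in order, until the clock runs out.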

The Drawbacks of This Testing Approach
Many projects are not mature and are not rational (at least from the point-of-view of the quality assurance team), and so the test team must scramble to test as effectively as possible within a very short time frame. I've spelled out how to test quickly without a structured test plan, and this method is much better than chaos and somewhat better than letting the developers tell you what and how to test.

This approach has definite quality implications:

Incomplete functional coverage -- this is no way to exercise all of the software's functions comprehensively.
No risk management -- this is no way to measure overall risk issues regarding code coverage and quality metrics. Effective quality assurance measures quality over time and starting from a known base of evaluation.
Too little emphasis on user tasks -- because testers will focus on ideal paths instead of real paths. With no time to prepare, ideal paths are defined according to best guesses or developer feedback rather than by careful consideration of how users will understand the system or how users understand real-world analogues to the application tasks. With no time to prepare, testers will be using a very restricted set of input data, rather than real data (from user activity logs, from logical scenarios, from careful consideration of the concept domain).
Difficulty reproducing -- because testers are making up the tests as they go along, reproducing the specific errors found can be difficult, and reproducing the tests themselves will be tough as well. This will cause problems when trying to measure quality over successive code cycles.
Project management may believe that this approach to testing is good enough -- because you can do some good testing by following this process, management may assume that full and structured testing, along with careful test preparation and test results analysis, isn't necessary. That misapprehension is a very bad sign for the continued quality of any product or web site.
Inefficient over the long term -- quality assurance involves a range of tasks and foci. Effective quality assurance programs expand their base of documentation on the product and on the testing process over time, increasing the coverage and granularity of tests as they go. Great testing requires good test setup and preparation, but success with the kind of test-plan-less approach described in this essay may reinforce bad project and test methodologies. A continued pattern of quick-and-dirty testing like this is a sign that the product or application is unsustainable in the long run.

Tuesday, September 15, 2009

Common Software Errors

Introduction
This document takes you through a whirlwind tour of common software errors. It is an excellent aid for software testing: it helps you identify errors systematically, increases the efficiency of software testing, and improves testing productivity. For more information, please refer to Testing Computer Software (Wiley edition).

Type of Errors
• User Interface Errors
• Error Handling
• Boundary related errors
• Calculation errors
• Initial and Later states
• Control flow errors
• Errors in Handling or Interpreting Data
• Race Conditions
• Load Conditions
• Hardware
• Source, Version and ID Control
• Testing Errors

Let us go through details of each kind of error.

User Interface Errors
Functionality
Sl No Possible Error Conditions
1 Excessive Functionality
2 Inflated impression of functionality
3 Inadequacy for the task at hand
4 Missing function
5 Wrong function
6 Functionality must be created by user
7 Doesn't do what the user expects


Communication
Missing Information
Sl No Possible Error Conditions
1 No on Screen instructions
2 Assuming printed documentation is already available.
3 Undocumented features
4 States that appear impossible to exit
5 No cursor
6 Failure to acknowledge input
7 Failure to show activity during long delays
8 Failure to advise when a change will take effect
9 Failure to check for the same document being opened twice
Wrong, misleading, confusing information
10 Simple factual errors
11 Spelling errors
12 Inaccurate simplifications
13 Invalid metaphors
14 Confusing feature names
15 More than one name for the same feature
16 Information overload
17 When are data saved
18 Wrong function
19 Functionality must be created by user
20 Poor external modularity
Help text and error messages
21 Inappropriate reading levels
22 Verbosity
23 Inappropriate emotional tone
24 Factual errors
25 Context errors
26 Failure to identify the source of error
27 Forbidding a resource without saying why
28 Reporting non-errors
29 Failure to highlight the part of the screen
30 Failure to clear highlighting
31 Wrong/partial string displayed
32 Message displayed for too long or not long enough
Display Layout
33 Poor aesthetics in screen layout
34 Menu Layout errors
35 Dialog box layout errors
36 Obscured Instructions
37 Misuse of flash
38 Misuse of color
39 Heavy reliance on color
40 Inconsistent with the style of the environment
41 Cannot get rid of on screen information
Output
42 Can't output certain data
43 Can't redirect output
44 Format incompatible with a follow-up process
45 Must output too little or too much
46 Can't control output layout
47 Absurd printout level of precision
48 Can't control labeling of tables or figures
49 Can't control scaling of graphs
Performance
50 Program Speed
51 User Throughput
52 Can't redirect output
53 Perceived performance
54 Slow program
55 slow echoing
56 how to reduce user throughput
57 Poor responsiveness
58 No type ahead
59 No warning that the operation takes long time
60 No progress reports
61 Problems with time-outs
62 Program pesters you


Program Rigidity
User tailorability
Sl No Possible Error Conditions
1 Can't turn off case sensitivity
2 Can't tailor to hardware at hand
3 Can't change device initialization
4 Can't turn off automatic changes
5 Can't slow down/speed up scrolling
6 Can't do what you did last time
7 Failure to execute a customization command
8 Failure to save customization commands
9 Side effects of feature changes
10 Can't turn off the noise
11 Infinite tailorability
Who is in control?
12 Unnecessary imposition of a conceptual style
13 Novice friendly, experienced hostile
14 Surplus or redundant information required
15 Unnecessary repetition of steps
16 Unnecessary limits


Command Structure and Rigidity
Inconsistencies
Sl No Possible Error Conditions
1 Optimizations
2 Inconsistent syntax
3 Inconsistent command entry style
4 Inconsistent abbreviations
5 Inconsistent termination rule
6 Inconsistent command options
7 Similarly named commands
8 Inconsistent Capitalization
9 Inconsistent menu position
10 Inconsistent function key usage
11 Inconsistent error handling rules
12 Inconsistent editing rules
13 Inconsistent data saving rules
Time Wasters
14 Garden paths
15 choice can't be taken
16 Are you really, really sure
17 Obscurely or idiosyncratically named commands
Menus
18 Excessively complex menu hierarchy
19 Inadequate menu navigation options
20 Too many paths to the same place
21 You can't get there from here
22 Related commands relegated to unrelated menus
23 Unrelated commands tossed under the same menu
Command Lines
24 Forced distinction between uppercase and lowercase
25 Reversed parameters
26 Full command names are not allowed
27 Abbreviations are not allowed
28 Demands complex input on one line
29 no batch input
30 can't edit commands
Inappropriate use of keyboard
31 Failure to use cursor, edit, or function keys
32 Non-standard use of cursor and edit keys
33 Non-standard use of function keys
34 Failure to filter invalid keys
35 Failure to indicate keyboard state changes


Missing Commands
State transitions
Sl No Possible Error Conditions
1 Can't do nothing and leave
2 Can't quit mid-program
3 Can't stop mid-command
4 Can't pause
Disaster prevention
5 No backup facility
6 No undo
7 No are you sure
8 No incremental saves
Error handling by the user
9 No user specifiable filters
10 Awkward error correction
11 Can't include comments
12 Can't display relationships between variables
Miscellaneous
13 Inadequate privacy or security
14 Obsession with security
15 Can't hide menus
16 Doesn't support standard OS features
17 Doesn't allow long names

Error Handling

Error prevention
Sl No Possible Error Conditions
1 Inadequate initial state validation
2 Inadequate tests of user input
3 Inadequate protection against corrupted data
4 Inadequate tests of passed parameters
5 Inadequate protection against operating system bugs
6 Inadequate protection against malicious use
7 Inadequate version control


Error Detection
Sl No Possible Error Conditions
1 Ignores overflow
2 Ignores impossible values
3 Ignores implausible values
4 Ignores error flag
5 Ignores hardware fault or error conditions
6 Data comparison

Error Recovery
Sl No Possible Error Conditions
1 Automatic error correction
2 Failure to report an error
3 Failure to set an error flag
4 Where does the program go back to?
5 Aborting errors
6 Recovery from hardware problems
7 No escape from missing disks


Boundary related errors

Sl No Possible Error Conditions
1 Numeric boundaries
2 Equality as boundary
3 Boundaries on numerosity
4 Boundaries in space
5 Boundaries in time
6 Boundaries in loop
7 Boundaries in memory
8 Boundaries with data structure
9 Hardware related boundaries
10 Invisible boundaries
11 Mishandling of boundary case
12 Wrong boundary
13 Mishandling of cases outside boundary
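
As a tiny illustration of "mishandling of boundary case": the classic off-by-one loop that drops the element sitting right on the boundary. The data and function are hypothetical.

def total_with_off_by_one(prices):
    total = 0.0
    for i in range(len(prices) - 1):   # bug: stops one short of the last index
        total += prices[i]
    return total

prices = [10.0, 20.0, 5.0]
print(total_with_off_by_one(prices))   # 30.0 -- the item at the boundary is lost
print(sum(prices))                     # 35.0 -- the correct total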

Calculation Errors

Sl No Possible Error Conditions
1 Bad Logic
2 Bad Arithmetic
3 Imprecise Calculations
4 Outdated constants
5 Calculation errors
6 Impossible parentheses
7 Wrong order of calculations
8 Bad underlying functions
9 Overflow and Underflow
10 Truncation and Round-off error
11 Confusion about the representation of data
12 Incorrect conversion from one data representation to another
13 Wrong Formula
14 Incorrect Approximation
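
Two of the entries above -- truncation/round-off error and confusion about the representation of data -- show up constantly in practice. A small illustration (the prices and quantities are hypothetical):

# Round-off: ten 10-cent items summed as binary floats don't equal exactly one dollar.
total = sum([0.10] * 10)
print(total == 1.0)                 # False: accumulated binary round-off
print(abs(total - 1.0) < 1e-9)      # True: compare with a tolerance instead

# Truncation: int() drops the fractional part, which is not the same as rounding.
price, quantity = 19.99, 3
print(int(price * quantity))        # 59 -- truncated, not rounded
print(round(price * quantity))      # 60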


Race Conditions

Sl No Possible Error Conditions
1 Races in updating data
2 Assumption that one event or task has finished before another begins
3 Assumption that input won't occur during a brief processing interval
4 Assumption that interrupts won't occur during a brief interval
5 Resource races
6 Assumption that a person, device or process will respond quickly
7 Options out of sync during display changes
8 Task starts before its prerequisites are met
9 Messages cross or don't arrive in the order sent
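
The first entry -- a race in updating data -- is easy to reproduce in miniature: several threads read, modify, and write a shared counter without a lock, and updates get lost. A purely illustrative sketch (the counts and thread numbers are arbitrary, and the amount lost is timing-dependent):

import threading

counter = 0

def bump_unsafe(times):
    global counter
    for _ in range(times):
        value = counter        # read
        value += 1             # modify
        counter = value        # write back -- another thread may have written in between

threads = [threading.Thread(target=bump_unsafe, args=(200_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("expected 800000, got", counter)   # typically less: lost, timing-dependent updates

The fix is to make the read-modify-write atomic, for example by holding a threading.Lock around it.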

Initial and Later States

Sl No Possible Error Conditions
1 Failure to set data item to zero
2 Failure to initialize a loop-control variable
3 Failure to initialize or re-initialize a pointer
4 Failure to clear a string
5 Failure to initialize a register
6 Failure to clear a flag
7 Data were supposed to be initialized elsewhere
8 Failure to re-initialize
9 Assumption that data were not re-initialized
10 Confusion between static and dynamic storage
11 Data modifications by side effect
12 Incorrect initialization
Control Flow Errors

Program runs amok
Sl No Possible Error Conditions
1 Jumping to a routine that isn't resident
2 Re-entrance
3 Variable contains embedded command names
4 Wrong returning state assumed
5 Exception handling based exits

Return to wrong place
Sl No Possible Error Conditions
1 Corrupted Stack
2 Stack underflow/overflow
3 GOTO rather than RETURN from sub-routine

Interrupts
Sl No Possible Error Conditions
1 Wrong interrupt vector
2 Failure to restore or update interrupt vector
3 Invalid restart after an interrupt
4 Failure to block or un-block interrupts


Program Stops
Sl No Possible Error Conditions
1 Dead crash
2 Syntax error reported at run time
3 Waiting for impossible condition or combinations of conditions
4 Wrong user or process priority

Loops
Sl No Possible Error Conditions
1 infinite loop
2 Wrong starting value for the loop control variables
3 Accidental change of loop control variables
4 Commands that do or don't belong inside the loop
5 Improper loop nesting

If, Then, Else, or Maybe Not
Sl No Possible Error Conditions
1 Wrong inequalities
2 Comparison sometimes yields wrong result
3 Not equal versus equal when there are three cases
4 Testing floating point values for equality
5 confusion between inclusive and exclusive OR
6 Incorrectly negating a logical expression
7 Assignment equal instead of test equal
8 Commands that belong inside the THEN or ELSE clause
9 Commands that don't belong in either case
10 Failure to test a flag
11 Failure to clear a flag
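
One of the sneakier entries above is "not equal versus equal when there are three cases": a two-way test written against a field that can really take three values. A hypothetical illustration:

def shipping_message(status):
    # Bug: "pending" falls through to "Delivered" because the test was written
    # as if status had only two possible values.
    if status != "shipped":
        return "Delivered"
    return "In transit"

print(shipping_message("shipped"))    # "In transit" -- fine
print(shipping_message("delivered"))  # "Delivered" -- fine, by luck
print(shipping_message("pending"))    # "Delivered" -- wrong; the third case was forgotten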


Multiple Cases
Sl No Possible Error Conditions
1 Missing default
2 Wrong default
3 Missing cases
4 Overlapping cases
5 Invalid or impossible cases
6 Commands being inside the THEN or ELSE clause
7 Case should be sub-divided

Errors in Handling or Interpreting Data

Problems in passing data between routines
Sl No Possible Error Conditions
1 Parameter list variables out of order or missing
2 Data Type errors
3 Aliases and shifting interpretations of the same area of memory
4 Misunderstood data values
5 inadequate error information
6 Failure to clean up data on exception handling
7 Outdated copies of data
8 Related variables get out of sync
9 Local setting of global data
10 Global use of local variables
11 Wrong mask in bit fields
12 Wrong value from table

Data boundaries
Sl No Possible Error Conditions
1 Un-terminated null strings
2 Early end of string
3 Read/Write past end of data structure or an element in it

Read outside the limits of message buffer
Sl No Possible Error Conditions
1 Compiler padding to word boundaries
2 value stack underflow/overflow
3 Trampling another process's code or data

Messaging Problems
Sl No Possible Error Conditions
1 Messages sent to wrong process or port
2 Failure to validate an incoming message
3 Lost or out of synch messages
4 Message sent to only N of N+1 processes

Data Storage corruption
Sl No Possible Error Conditions
1 Overwritten changes
2 Data entry not saved
3 Too much data for receiving process to handle
4 Overwriting a file after an error exit or user abort

Load Conditions

Sl No Possible Error Conditions
1 Required resources are not available
2 No available large memory area
3 Input buffer or queue not deep enough
4 Doesn't clear item from queue, buffer or stack
5 Lost Messages
6 Performance costs
7 Race condition windows expand
8 Doesn't abbreviate under load
9 Doesn't recognize that another process abbreviates output under load
10 Low priority tasks not put off
11 Low priority tasks never done


Doesn't return a resource
Sl No Possible Error Conditions
1 Doesn't indicate that it's done with a device
2 Doesn't erase old files from mass storage
3 Doesn't return unused memory
4 Wastes computer time

Hardware

Sl No Possible Error Conditions
1 Wrong Device
2 Wrong Device Address
3 Device unavailable
4 Device returned to wrong type of pool
5 Device use forbidden to caller
6 Specifies wrong privilege level for the device
7 Noisy Channel
8 Channel goes down
9 Time-out problems
10 Wrong storage device
11 Doesn't check the directory of current disk
12 Doesn't close the file
13 Unexpected end of file
14 Disk sector bug and other length dependent errors
15 Wrong operation or instruction codes
16 Misunderstood status or return code
17 Underutilizing device intelligence
18 Paging mechanism ignored or misunderstood
19 Ignores channel throughput limits
20 Assuming device is or isn't or should be or shouldn't be initialized
21 Assumes programmable function keys are programmed correctly

Source, Version, ID Control

Sl No Possible Error Conditions
1 Old bugs mysteriously reappear
2 Failure to update multiple copies of data or program files
3 No title
4 No version ID
5 Wrong version number on the title screen
6 No copyright message, or a bad one
7 Archived source doesn't compile into a match for shipping code
8 Manufactured disks don't work or contain wrong code or data

Testing Errors

Missing bugs in the program
Sl No Possible Error Conditions
1 Failure to notice a problem
2 You don't know what the correct test results are
3 You are bored or inattentive
4 Misreading the Screen
5 Failure to report problem
6 Failure to execute a planned test
7 Failure to use the most promising test case
8 Ignoring programmer's suggestions


Finding bugs that aren't in the program
Sl No Possible Error Conditions
1 Errors in testing programs
2 Corrupted data files
3 Misinterpreted specifications or documentation


Poor reporting
Sl No Possible Error Conditions
1 Illegible reports
2 Failure to make it clear how to reproduce the problem
3 Failure to say you can't reproduce the problem
4 Failure to check your report
5 Failure to report timing dependencies
6 Failure to simplify conditions
7 Concentration on trivia
8 Abusive language


Poor Tracking and follow-up
Sl No Possible Error Conditions
1 Failure to provide summary report
2 Failure to re-report serious bug
3 Failure to check for unresolved problems just before release
4 Failure to verify fixes

Skill Set For Test Engineer

1. Know Programming.

Might as well start out with the most controversial one. There's a popular myth that testing can be staffed with people who have little or no programming knowledge. It doesn't work, even though it is an unfortunately common approach. There are two main reasons why it doesn't work.

(A) They're testing software. Without knowing programming, they can't have any real insights into the kinds of bugs that come into software and the likeliest place to find them. There's never enough time to test "completely", so all software testing is a compromise between available resources and thoroughness. The tester must optimize scarce resources and that means focusing on where the bugs are likely to be. If you don't know programming, you're unlikely to have useful intuition about where to look.

(B) All but the simplest (and therefore, ineffectual) testing methods are tool- and technology-intensive. The tools, both as testing products and as mental disciplines, all presume programming knowledge. Without programmer training, most test techniques (and the tools based on those techniques) are unavailable. The tester who doesn't know programming will always be restricted to the use of ad-hoc techniques and the most simplistic tools.

Does this mean that testers must have formal programmer training, or have worked as programmers? Formal training and experience are usually the easiest way to meet the "know programming" requirement, but they are not absolutely essential. I met a superb tester whose only training was as a telephone operator. She was testing a telephony application and doing a great job. But, despite the lack of formal training, she had a deep, valid intuition about programming and had even tried a little of it herself. Sure she was good -- good, hell! She was great. How much better would she have been, and how much earlier would she have achieved her expertise, if she had had the benefits of formal training and working experience? She would have been a lot better a lot earlier.

I like to see formal training in programming such as a university degree in Computer Science or Software Engineering, followed by two to three years of working as a programmer in an industrial setting. A stint on the customer-service hot line is also good training.
I don't like the idea of taking entry-level programmers and putting them into a test organization because:

(A) Loser Image.

Few universities offer undergraduate training in testing beyond "Be sure to test thoroughly." Entry-level people expect to get a job as a programmer, and if they're offered a job in a test group, they'll often look upon it as a failure on their part: they believe that they didn't have what it takes to be a programmer in that organization. This unfortunate perception exists even in organizations that value testers highly.

(B) Credibility with Programmers.

Independent testers often have to deal with programmers far more senior than themselves. Unless they've been through a co-op program as undergraduates, all their programming experience is with academic toys: the novice often has no real idea of what programming in a professional, cooperative programming environment is all about. As such, they have no credibility with their programming counterparts, who can brush off their concerns with "Look, kid. You just don't understand how programming is done here, or anywhere else, for that matter." It is setting the novice tester up for failure.
(C) Just Plain Know-How.

The programmer's right: the kid doesn't know how programming is really done. If the novice is a "real" programmer (as contrasted with a "mere tester"), then the senior programmer will often take the time to mentor the junior and set her straight: but for a non-productive "leech" from the test group? Never! It's easier for the novice tester to learn all that nitty-gritty stuff (such as doing a build, configuration control, procedures, process, etc.) while working as a programmer than to have to learn it, without actually doing it, as an entry-level tester.

2. Know the Application.

That's the other side of the knowledge coin. The ideal tester has deep insights into how the users will exploit the program's features and the kinds of cockpit errors that users are likely to make. In some cases, it is virtually impossible, or at least impractical, for a tester to know both the application and programming. For example, to test an income tax package properly, you must know tax laws and accounting practices. Testing a blood analyzer requires knowledge of blood chemistry; testing an aircraft's flight control system requires control theory and systems engineering, and being a pilot doesn't hurt; testing a geological application demands geology. If the application has a depth of knowledge in it, then it is easier to train the application specialist into programming than to train the programmer into the application. Here again, paralleling the programmer's qualification, I'd like to see a university degree in the relevant discipline followed by a few years of working practice before coming into the test group.


3. Intelligence.

Back in the 60's, there were many studies done to try to predict the ideal qualities for programmers. There was a shortage, and we were dipping into other fields for trainees. The most infamous product of these studies was IBM's Programmer Aptitude Test (PAT). Strangely enough, despite the fact that IBM later repudiated this test, it continues to be (ab)used as a benchmark for predicting programmer aptitude. What IBM learned with follow-on research is that the single most important quality for programmers is raw intelligence -- good programmers are really smart people -- and so are good testers.


4. Hyper-Sensitivity to Little Things.

Good testers notice little things that others (including programmers) miss or ignore. Testers see symptoms, not bugs. We know that a given bug can have many different symptoms, ranging from innocuous to catastrophic. We know that the symptoms of a bug are arbitrarily related in severity to the cause. Consequently, there is no such thing as a minor symptom -- because a symptom isn't a bug. It is only after the symptom is fully explained (i.e., fully debugged) that you have the right to say if the bug that caused that symptom is minor or major. Therefore, anything at all out of the ordinary is worth pursuing. The screen flickered this time, but not last time -- a bug. The keyboard is a little sticky -- another bug. The account balance is off by 0.01 cents -- great bug. Good testers notice such little things and use them as an entree to finding a closely-related set of inputs that will cause a catastrophic failure and therefore get the programmers' attention. Luckily, this attribute can be learned through training.


5. Tolerance for Chaos.

People react to chaos and uncertainty in different ways. Some cave in and give up while others try to create order out of chaos. If the tester waits for all issues to be fully resolved before starting test design or testing, she won't get started until after the software has been shipped. Testers have to be flexible and be able to drop things when blocked and move on to another thing that's not blocked. Testers always have many (unfinished) irons in the fire. In this respect, good testers differ from programmers. A compulsive need to achieve closure is not a bad attribute in a programmer -- it certainly serves them well in debugging -- but in testing, it means nothing gets finished. The testers' world is inherently more chaotic than the programmers'. A good indicator of the kind of skill I'm looking for here is the ability to do crossword puzzles in ink. This skill, research has shown, also correlates well with programmer and tester aptitude. It is very similar to the kind of unresolved chaos with which the tester must daily deal. Here's the theory behind the notion. If you do a crossword puzzle in ink, you can't put down a word, or even part of a word, until you have confirmed it by a compatible cross-word. So you keep a dozen tentative entries unmarked, and when, by some process or another, you realize that there is a compatible cross-word, you enter them both. You keep score by how many corrections you have to make -- not by merely finishing the puzzle, because that's a given. I've done many informal polls of this aptitude at my seminars and found a much higher percentage of crossword-puzzles-in-ink aficionados than you'd get in a normal population.


6. People Skills.

Here's another area in which testers and programmers can differ. You can be an effective programmer even if you are hostile and anti-social; that won't work for a tester. Testers can take a lot of abuse from outraged programmers. A sense of humor and a thick skin will help the tester survive. Testers may have to be diplomatic when confronting a senior programmer with a fundamental goof. Diplomacy, tact, a ready smile -- all work to the independent tester's advantage. This may explain one of the (good) reasons that there are so many women in testing. Women are generally acknowledged to have more highly developed people skills than comparable men -- whether it is something innate on the X chromosome, as some people contend, or whether it is that without superior people skills women are unlikely to make it through engineering school and into an engineering career, I don't know and won't attempt to say. But the fact is there, and those sharply-honed people skills are important.

7. Tenacity.

An ability to reach compromises and consensus can come at the expense of tenacity. That's the other side of the people skills. Being socially smart and diplomatic doesn't mean being indecisive or a limp rag that anyone can walk all over. The best testers are both -- socially adept and tenacious where it matters. The best testers are so skillful at it that the programmer never realizes that they've been had. Tenacious -- my picture is that of an angry pit bull fastened on a burglar's rear-end. Good testers don't let go. You can't intimidate them -- even by pulling rank. They'll need high-level backing, of course, if they're to get you the quality your product and market demands.

8. Organized.

I can't imagine a scatter-brained tester. There's just too much to keep track of to trust to memory. Good testers use files, databases, and all the other accouterments of an organized mind. They make up checklists to keep themselves on track. They recognize that they too can make mistakes, so they double-check their findings. They have the facts and figures to support their position. When they claim that there's a bug -- believe it, because if the developers don't, the tester will flood them with well-organized, overwhelming evidence.

A consequence of a well-organized mind is a facility for good written and oral communication. As a writer and editor, I've learned that the inability to express oneself clearly in writing is often symptomatic of a disorganized mind. I don't mean that we expect everyone to write deathless prose like a Hemingway or Melville. Good technical writing is well-organized, clear, and straightforward, and it doesn't depend on a 500,000-word vocabulary. True, there are some unfortunate individuals who express themselves superbly in writing but fall apart in an oral presentation -- but they are a pathological exception. Usually, a well-organized mind results in clear (even if not inspired) writing, and clear writing can usually be transformed through training into good oral presentation skills.

9. Skeptical.

That doesn't mean hostile, though. I mean skepticism in the sense that nothing is taken for granted and everything is fit to be questioned. Only tangible evidence in documents, specifications, code, and test results matters. While testers may patiently listen to the reassuring, comfortable words from the programmers ("Trust me. I know where the bugs are.") -- and do it with a smile -- they ignore all such insubstantial assurances.

10. Self-Sufficient and Tough.

If they need love, they don't expect to get it on the job. They can't be looking for the interaction between them and programmers as a source of ego-gratification and/or nurturing. Their ego is gratified by finding bugs, with few misgivings about the pain (in the programmers) that such finding might engender. In this respect, they must practice very tough love.

11. Cunning.

Or as Gruenberger put it, "low cunning." "Street wise" is another good descriptor, as are insidious, devious, diabolical, fiendish, contriving, treacherous, wily, canny, and underhanded. Systematic test techniques such as syntax testing and automatic test generators have reduced the need for such cunning, but the need is still with us and undoubtedly always will be because it will never be possible to systematize all aspects of testing. There will always be room for that offbeat kind of thinking that will lead to a test case that exposes a really bad bug. But this can be taken to extremes and is certainly not a substitute for the use of systematic test techniques. The cunning comes into play after all the automatically generated "sadistic" tests have been executed.

12. Technology Hungry.

They hate dull, repetitive work -- they'll do it for a while if they have to, but not for long. The silliest thing for a human to do, in their mind, is to pound on a keyboard when they're surrounded by computers. They have a clear notion of how error-prone manual testing is, and in order to improve the quality of their own work, they'll find ways to eliminate all such error-prone procedures. I've seen excellent testers re-invent the capture/playback tool many times. I've seen dozens of home-brew test data generators. I've seen excellent test design automation done with nothing more than a word processor, or earlier, with a copy machine and lots of bottles of white-out. I've yet to meet a tester who wasn't hungry for applicable technology. When asked why they didn't automate such and such, the answer was never "I like to do it by hand." It was always one of the following: (1) "I didn't know that it could be automated", (2) "I didn't know that such tools existed", or worst of all, (3) "Management wouldn't give me the time to learn how to use the tool."

13. Honest.

Testers are fundamentally honest and incorruptible. They'll compromise if they have to, but they'll righteously agonize over it. This fundamental honesty extends to a brutally realistic understanding of their own limitations as a human being. They accept the idea that they are no better and no worse, and therefore no less error-prone than their programming counterparts. So they apply the same kind of self-assessment procedures that good programmers will. They'll do test inspections just like programmers do code inspections. The greatest possible crime in a tester's eye is to fake test results.

So I suggest you refer to more resources -- the Internet, books -- and keep updating your knowledge. Work hard for three months and you will succeed.

"Be Proud To Be A Tester"