
Published: 2007-06-30

WebSite Testing

Dr. Edward Miller, Software Research, Inc.


Introduction


The nearly instant worldwide audience makes a WebSite's quality and reliability crucial to its success. The nature of the WWW and of WebSite software pose unique software testing challenges. Webmasters, WWW applications developers, and WebSite quality assurance managers need tools and methods that meet very specific needs. Our technical approach, based on extending existing WWW browsers, offers many attractive benefits in meeting these needs.

Background


Within minutes of going live, a WWW application can have many thousands more users than a conventional, non-WWW application. The immediacy of a WebSite creates immediate expectations of quality, but the technical complexities of a WebSite and variances in the available browsers make testing and quality control that much more difficult than for "conventional" client/server or application testing. Automated testing of WebSites is thus both an opportunity and a significant challenge.

Defining WebSite Quality and Reliability


As with any complex piece of software, there is no single quality measure that fully characterizes a WebSite.
There are many dimensions of quality, and each measure pertains to a particular WebSite in varying degrees. Here are some of them:
  • Timeliness: How much has the WebSite changed since the last upgrade?
  • Structural Quality: Are all links inside and outside the WebSite working? Do all of the images work?
  • Content: Does the content of critical pages match what is expected?
  • Accuracy and Consistency: Are today's copies of the pages downloaded the same as yesterday's?
  • Response Time and Latency: Does the WebSite server respond to a browser request within certain parameters? In an E-commerce context, what is the end-to-end response time after a SUBMIT?
  • Performance: Is the Browser -> Web -> WebSite -> Web -> Browser connection quick enough? How does the performance vary by time of day, by load and usage?
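
Several of these dimensions lend themselves to automated checks. As a minimal, hypothetical sketch (the HTML and names below are illustrative, not from any tool described in this article), a structural-quality check can first collect every link and image reference on a page, so that each target can then be requested and non-working ones flagged:

```python
# Minimal structural-quality sketch: extract every link and image
# reference from a page's HTML so each target can be verified.
# The HTML fed in below is illustrative only.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href/src targets of anchors and images."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "img" and "src" in attrs:
            self.links.append(attrs["src"])

def extract_links(html):
    """Return all link and image targets found in an HTML string."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

Each extracted target would then be fetched (for example with `urllib.request.urlopen`) and any response other than success reported as a broken link or image.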

Clearly, "Quality" is in the mind of the WebSite user. A poor-quality WebSite, one with many broken pages and faulty images, with Cgi-Bin error messages, etc., may cost a site in poor customer relations, lost corporate image, and even lost sales revenue. Very complex WebSites can sometimes overload the user.

WebSite Architectural Factors


A WebSite can be quite complex, and that complexity can be a real impediment in assuring WebSite Quality.
What makes a WebSite complex? These are the issues test systems have to contend with:
  • Browser. There is a kind of de facto standard: the WebSite must use only those constructs that work with the majority of browsers. But this still leaves room for a lot of creativity, and a range of technical difficulties.
  • Display Technologies. What you see in your browser is actually composed from many sources:

    o HTML: Various versions of HTML must be supported.
    o Java, JavaScript, ActiveX: Obviously JavaScript and Java applets are likely parts of a WebSite, and the quality process must support these.
    o Cgi-Bin Scripts: All of the different types of Cgi-Bin scripts (Perl, awk, shell scripts, etc.) need to be handled; tests will need to check "end to end" operation.
  • Navigation. Navigation in a WebSite often is complex and has to be quick and error free.
  • Object Mode. The display you see in a browser changes dynamically; the only constants are the "objects" that make up the display. Testing ought to be in terms of these objects.
  • Server Response. How fast the WebSite host responds influences whether a user moves, continues, or gives up.
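
The object-mode idea above can be sketched in a few lines: reduce a page to its logical objects (links, images, form fields) and compare those, so a test survives purely cosmetic layout changes. This is an illustrative sketch under that assumption, not the mechanism of any particular tool:

```python
# Object-mode sketch: represent a page by its logical objects rather than
# its raw markup, so layout churn does not break tests.
from html.parser import HTMLParser

class ObjectModel(HTMLParser):
    """Reduce a page to (kind, identifier) objects, ignoring layout tags."""
    TRACKED = {"a": "link", "img": "image", "input": "field", "form": "form"}

    def __init__(self):
        super().__init__()
        self.objects = []

    def handle_starttag(self, tag, attrs):
        if tag in self.TRACKED:
            a = dict(attrs)
            # Use whichever identifying attribute the object carries.
            ident = a.get("name") or a.get("href") or a.get("src")
            self.objects.append((self.TRACKED[tag], ident))

def page_objects(html):
    """Return the logical objects of an HTML page, in document order."""
    model = ObjectModel()
    model.feed(html)
    return model.objects
```

Two renderings of the same page that differ only in layout markup (say, a table versus a div) yield the same object list, which is exactly the stability object-mode testing is after.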

WebSite Test Automation Requirements


Assuring WebSite quality automatically requires conducting sets of tests, automatically and repeatably, that demonstrate required properties and behaviors. Here are some required elements of tools that aim to do this.
  • Browser Independence. Tests should be realistic, but not dependent on a particular browser.
  • No Buffering, Caching. Local caching and buffering should be disabled so that timed experiments are true measures of performance.
  • Object Mode. Object mode operation is essential to protect an investment in test suites and to assure that test suites continue operating when WebSite pages change.
  • Tables and Forms. Even when the layout of a table or form varies in the browser's view, tests of it should continue independent of these factors.

Tests need to operate from the browser level for two reasons: (1) this is where users see a WebSite, so tests based in browser operation are the most realistic; and (2) tests based in browsers can be run locally or across the Web equally well. Local execution is fine for quality control, but not for performance measurement work, where response times that include the Web-variable delays of real-world usage are essential.
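
A timing probe along these lines might look like the following sketch. It sends no-cache request headers so the measurement crosses the Web rather than being satisfied by a local cache; the injectable `opener` parameter is an illustrative convenience for exercising the harness without a live server, not part of any tool described here:

```python
# Timing sketch: fetch a URL with caching discouraged and report how long
# the round trip took, per the "No Buffering, Caching" requirement above.
import time
import urllib.request

def timed_fetch(url, opener=urllib.request.urlopen):
    """Fetch `url` and return (HTTP status, elapsed seconds).

    `opener` defaults to a real network fetch; a fake can be injected
    so the harness itself can be tested offline.
    """
    req = urllib.request.Request(url, headers={
        "Cache-Control": "no-cache",  # ask caches not to serve stored copies
        "Pragma": "no-cache",         # HTTP/1.0 equivalent
    })
    start = time.monotonic()
    resp = opener(req)
    elapsed = time.monotonic() - start
    return resp.status, elapsed
```

Run repeatedly at different times of day, such a probe gives the response-time-by-load picture the quality dimensions above call for.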

WebSite Dynamic Validation


Confirming the validity of what is tested is the key to assuring WebSite quality, and it is the most difficult challenge of all. Here are four key areas where test automation will have a significant impact.
1.    Operational Testing. Individual test steps may involve a variety of checks on individual pages in the WebSite:
    o Page Consistency: Is the entire page identical to a prior version? Are key parts of the text the same or different?
    o Table, Form Consistency: Are all of the parts of a table or form present? Correctly laid out? Can you confirm that selected texts are in the "right place"?
    o Page Relationships: Are all of the links on a page the same as they were before? Are there new or missing links? Are there any broken links?
    o Performance Consistency, Response Times: Is the response time for a user action the same as it was (within a range)?
2.    Test Suites. Typically you may have dozens, hundreds, or even thousands of tests, and you may wish to run them in a variety of modes: unattended, distributed across many machines, in the background, etc.
3.    Content Validation. Apart from how a WebSite responds dynamically, its content should be checkable, either exactly or approximately. Here are some ways that content validation could be accomplished:

    o Structural: All of the links and anchors should match prior baseline data.
    o Checkpoints, Exact Reproduction: One or more text elements in a page should be markable as "required to match".
    o Selected Images/Fragments: The tester should be able to rubber-band sections of an image and require that the selection match a subsequent rendition of it.
4.    Load Simulation. Load analysis needs to proceed by having a special-purpose browser act like a human user. This assures that the performance-checking experiment reflects true performance -- not performance under simulated but unrealistic conditions. There are many "http torture machines" that generate large numbers of http requests, but that is not necessarily the way real-world users generate requests.
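
A bare-bones load simulation along these lines drives N concurrent simulated users and collects a latency per user. The sketch below is illustrative only: a real browser-level driver would also render pages and pace its requests like a human user, and the `fetch` callable is an assumed stand-in for one complete user transaction:

```python
# Load-simulation sketch: run one "user transaction" per simulated user,
# concurrently, and return the per-user latencies for analysis.
import time
from concurrent.futures import ThreadPoolExecutor

def simulate_load(fetch, users=10):
    """Run `fetch` once per simulated user in parallel; return latencies."""
    def one_user(_):
        start = time.monotonic()
        fetch()  # the injected user transaction (e.g., fetch + render a page)
        return time.monotonic() - start

    with ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(one_user, range(users)))
```

Summarizing the returned latencies (mean, worst case) as `users` grows gives a first approximation of how response time degrades under load.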

Testing System Characteristics


Considering all of these disparate requirements, it seems evident that no single product supporting all of these goals will be possible. However, there is one common theme: the majority of the work seems to be based on "...what does it [the WebSite] look like from the point of view of the user?" That is, from the point of view of someone using a browser to look at the WebSite.
This observation led our group to conclude that it would be worthwhile to build certain test features into a "test-enabled web browser", which we called CAPBAK/Web, in the expectation that this approach would let us do the majority of the WebSite quality control functions using that engine as a base.
Browser-Based Solution - With this as a starting point, we determined that the browser-based solution had to meet these additional requirements:
  • Commonly Available Technology Base. The browser had to be based on a well known base (there appear to be only two or three choices).

  • Some Browser Features Must Be Deletable. At the same time, certain requirements imposed limitations on what was to be built. For example, if we were going to have accurate timing data, we had to be able to disable caching; otherwise we would be measuring response times within the client machine rather than "across the web."
  • Extensibility Assured. To permit meaningful experiments, the product had to be extensible enough to permit timings, static analysis, and other information to be extracted.
Taking these requirements into account, and after investigating W3C's Amaya Browser and the open-architecture Mozilla/Netscape Browser, we chose the IE Browser as the initial base for our implementation of CAPBAK/Web.

Originally reprinted from: http://www.ltesting.net