When testing, you must decide how to exercise the program, then do it. The doing is ever so much more interesting than the deciding. A tester's itch to start breaking the program is as strong as a programmer's itch to start writing code - and it has the same effect: design work is skimped, and quality suffers. Paying more attention to running tests than to designing them is a classic mistake. A tester who is not systematic, who does not spend time laying out the possibilities in advance, will overlook special cases. They may be the same subtle ones that the programmers overlooked.
Concentration on execution also results in unreviewed test designs. Just like programmers, testers can benefit from a second pair of eyes. Reviews of test designs needn't be as elaborate as product design reviews, but a short check of the testing approach and the resulting tests can find significant omissions at low cost.
What is a test design?
A test design should contain a description of the setup (including machine configuration for a configuration test), inputs given to the product, and a description of expected results. One common mistake is being too specific about test inputs and procedures.
Let's assume manual test implementation for the moment. A related argument for automated tests will be discussed in the next section. Suppose you're testing a banking application. Here are two possible test designs:
Design 1
Setup: initialize the balance in account 12 with $100.
Procedure:
Start the program.
Type 12 in the Account window.
Press OK.
Click on the 'Withdraw' toolbar button.
In the withdraw popup dialog, click on the 'all' button.
Press OK.
Expect to see a confirmation popup that says "You are about to withdraw all the money from this account. Continue?"
Press OK.
Expect to see a 0 balance in the account window.
Separately query the database to check that the zero balance has been posted.
Exit the program with File->Exit.
Design 2
Setup: initialize the balance with a positive value.
Procedure:
Start the program on that account.
Withdraw all the money from the account using the 'all' button.
It's an error if the transaction happens without a confirmation popup.
Immediately thereafter:
- Expect a $0 balance to be displayed.
- Independently query the database to check that the zero balance has been posted.
The first design style has these advantages:
· The test will always be run the same way. You are more likely to be able to reproduce the bug. So will the programmer.
· It details all the important expected results to check. Imprecise expected results make failures harder to notice. For example, a tester using the second style would find it easier to overlook a spelling error in the confirmation popup, or even that it was the wrong popup.
· Unlike the second style, you always know exactly what you've tested. In the second style, you couldn't be sure that you'd ever gotten to the Withdraw dialog via the toolbar. Maybe the menu was always used. Maybe the toolbar button doesn't work at all!
· By spelling out all inputs, the first style prevents testers from carelessly overusing simple values. For example, a tester might always test accounts with $100, rather than using a variety of small and large balances. (Either style should include explicit tests for boundary and special values.)
However, there are also some disadvantages:
· The first style is more expensive to create.
· The inevitable minor changes to the user interface will break it, so it's more expensive to maintain.
· Because each run of the test is exactly the same, there's no chance that a variation in procedure will stumble across a bug.
· It's hard for testers to follow a procedure exactly. When one makes a mistake - pushes the wrong button, for example - will she really start over?
On balance, I believe the negatives often outweigh the positives, provided there is a separate testing task to check that all the menu items and toolbar buttons are hooked up. (Not only is a separate task more efficient, it's less error-prone. You're less likely to accidentally omit some buttons.)
I do not mean to suggest that test cases should not be rigorous, only that they should be no more rigorous than is justified, and that we testers sometimes err on the side of uneconomical detail.
Detail in the expected results is less problematic than in the test procedure, but too much detail can focus the tester's attention too much on checking against the script he's following. That might encourage another classic mistake: not noticing and exploring "irrelevant" oddities. Good testers are masters at noticing "something funny" and acting on it. Perhaps there's a brief flicker in some toolbar button which, when investigated, reveals a crash. Perhaps an operation takes an oddly long time, which suggests to the attentive tester that increasing the size of an "irrelevant" dataset might cause the program to slow to a crawl. Good testing is a combination of following a script and using it as a jumping-off point for an exploration of the product.
An important special case of overlooking bugs is checking that the product does what it's supposed to do, but not that it doesn't do what it isn't supposed to do. As an example, suppose you have a program that updates a health care service's database of family records. A test adds a second child to Dawn Marick's record. Almost all testers would check that, after the update, Dawn now has two children. Some testers - those who are clever, experienced, or subject matter experts - would check that Dawn Marick's spouse, Brian Marick, also now has two children. Relatively few testers would check that no one else in the database has had a child added. They would miss a bug where the programmer over-generalized and assumed that all "family information" updates should be applied both to a patient and to all members of her family, giving Paul Marick (aged 2) a child.
Ideally, every test should check that all data that should be modified has been modified and that all other data has been unchanged. With forethought, that can be built into automated tests. Complete checking may be impractical for manual tests, but occasional quick scans for data that might be corrupted can be valuable.
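To make that concrete, here is a minimal sketch of how the "nothing else changed" check might be built into an automated test. It is not from any real product: the in-memory SQLite table, the snapshot() helper, and the one-line update standing in for the product operation are all assumptions made for illustration.

```python
# Sketch: snapshot the data before the operation, then verify that only the
# intended rows changed.
import sqlite3

def snapshot(conn, table):
    """Capture every row of a table so it can be diffed after the operation."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY id").fetchall()
    return {row[0]: row for row in rows}

def test_adding_a_child_touches_only_the_intended_records():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE family (id INTEGER PRIMARY KEY, name TEXT, children INTEGER)")
    conn.executemany("INSERT INTO family VALUES (?, ?, ?)",
                     [(1, "Dawn Marick", 1), (2, "Brian Marick", 1), (3, "Paul Marick", 0)])

    before = snapshot(conn, "family")

    # Stand-in for the real product operation: add a child to Dawn's record,
    # which should also be reflected on her spouse's record.
    conn.execute("UPDATE family SET children = children + 1 "
                 "WHERE name IN ('Dawn Marick', 'Brian Marick')")

    after = snapshot(conn, "family")
    intended = {1, 2}   # Dawn and Brian

    # The data that should change did change...
    for row_id in intended:
        assert after[row_id][2] == before[row_id][2] + 1
    # ...and everything else is untouched (this is what catches the over-generalized
    # update that would give two-year-old Paul Marick a child).
    for row_id in before:
        if row_id not in intended:
            assert after[row_id] == before[row_id]
```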
Testing should not be isolated work
Here's another version of the test we've been discussing:
Design 3
Withdraw all with confirmation and normal check for 0.
That means the same thing as Design 2 - but only to the original author. Test suites that are understandable only by their owners are ubiquitous. They cause many problems when their owners leave the company; sometimes many months' worth of work has to be thrown out.
I should note that designs as detailed as Designs 1 or 2 often suffer a similar problem. Although they can be run by anyone, not everyone can update them when the product's interface changes. Because the tests do not list their purposes explicitly, updates can easily make them test a little less than they used to. (Consider, for example, a suite of tests in the Design 1 style: how hard will it be to make sure that all the user interface controls are touched in the revised tests? Will the tester even know that's a goal of the suite?) Over time, this leads to what I call "test suite decay," in which a suite full of tests runs but no longer tests much of anything at all.
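One inexpensive guard against that decay is to record each test's purpose right next to its steps, so that whoever revises the test after an interface change knows what must be preserved. The sketch below is hypothetical: the FakeBank class merely stands in for a scriptable product so the example runs, and the purpose statement mirrors Designs 1 and 2 above.

```python
class FakeBank:
    """Stand-in for the product under test (hypothetical), so the sketch runs."""
    def __init__(self, balance):
        self.displayed_balance = balance
        self.posted_balance = balance
        self.last_popup = ""

    def withdraw_all_via_toolbar(self):
        # In a real suite this would drive the toolbar button, not a menu item.
        self.last_popup = ("You are about to withdraw all the money "
                           "from this account. Continue?")

    def confirm(self):
        self.displayed_balance = 0
        self.posted_balance = 0


def test_withdraw_all_via_toolbar():
    """PURPOSE: reach Withdraw through the *toolbar* (not the menu), check the
    confirmation popup text, and verify the zero balance is both displayed and
    posted to the database. Anyone revising this test after a UI change should
    preserve all three goals, or the suite quietly tests less than it used to."""
    bank = FakeBank(balance=100)
    bank.withdraw_all_via_toolbar()
    assert "withdraw all the money" in bank.last_popup
    bank.confirm()
    assert bank.displayed_balance == 0   # what the user sees
    assert bank.posted_balance == 0      # independent check of the stored data
```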
Another classic mistake involves the boundary between the tester and programmer. Some products are mostly user interface; everything they do is visible on the screen. Other products are mostly internals; the user interface is a "thin pipe" that shows little of what happens inside. The problem is that testing has to use that thin pipe to discover failures. What if complicated internal processing produces only a "yes or no" answer? Any given test case could trigger many internal faults that, through sheer bad luck, don't produce the wrong answer.
In such situations, testers sometimes rely solely on programmer ("unit") testing. In cases where that's not enough, testing only through the user-visible interface is a mistake. It is far better to get the programmers to add "testability hooks" or "testpoints" that reveal selected internal state. In essence, they convert a product whose internals can be observed only through the thin user-interface pipe into one that also exposes selected internal state through a dedicated testing interface.
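Here is a minimal sketch of what such a testpoint might look like, assuming a product written in Python; the LoanApprover class and its thresholds are invented for illustration rather than taken from any real system.

```python
# Sketch of a testpoint: an optional hook reveals internal state the UI never shows.

class LoanApprover:
    """Stand-in for a product whose visible output is only a yes/no answer."""
    def __init__(self, testpoint=None):
        self._testpoint = testpoint          # None in production; a callable in tests

    def approve(self, income, debt, credit_score):
        ratio = debt / income if income else float("inf")
        score_ok = credit_score >= 620
        if self._testpoint:
            # Reveal intermediate results that never reach the user interface.
            self._testpoint({"ratio": ratio, "score_ok": score_ok})
        return ratio < 0.4 and score_ok      # the only externally visible answer

def test_debt_ratio_is_computed_from_income_and_debt():
    seen = []
    approver = LoanApprover(testpoint=seen.append)
    assert approver.approve(income=50_000, debt=10_000, credit_score=700) is True
    # Without the hook, a fault in the ratio calculation could still return the
    # "right" yes/no answer by sheer luck; here the internal value is checked directly.
    assert abs(seen[0]["ratio"] - 0.2) < 1e-9
```

A handful of hooks like this, placed where the computation is hardest to observe from outside, often gives more leverage than another layer of user-interface tests.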
It is often difficult to convince programmers to add test support code to the product. (Actual quote: "I don't want to clutter up my code with testing crud.") Persevere, start modestly, and take advantage of these facts:
1. The test support code is often a simple extension of the debugging support code programmers write anyway.
2. A small amount of test support code often goes a long way.
A common objection to this approach is that the test support code must be compiled out of the final product (to avoid slowing it down). If so, tests that use the testing interface "aren't testing what we ship". It is true that some of the tests won't run on the final version, so you may miss bugs. But, without testability code, you'll miss bugs that don't reveal themselves through the user interface. It's a risk tradeoff, and I believe that adding test support code usually wins. See [Marick95], chapter 13, for more details.
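In a compiled product the gating would typically be an #ifdef-style build option; the Python sketch below uses an invented BUILD_FLAVOR flag to show the same idea, purely as an illustration of how test support code can be made inert in the shipped build.

```python
import os

TESTPOINTS_ENABLED = os.environ.get("BUILD_FLAVOR") == "test"
_sinks = {}   # channel name -> callback registered by a test

def register_sink(channel, callback):
    """Called only from test code to observe a testpoint channel."""
    _sinks[channel] = callback

def testpoint(channel, payload):
    """Product code calls this freely; in the shipped build it does nothing, which
    is exactly why tests that depend on it are not quite testing what you ship."""
    if TESTPOINTS_ENABLED and channel in _sinks:
        _sinks[channel](payload)
```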
In one case, there's an alternative to having the programmer add code to the product: have a tool do it. Commercial tools like Purify, Boundschecker, and Sentinel automatically add code that checks for certain classes of failures (such as memory leaks). They provide a narrow, specialized testing interface. For marketing reasons, these tools are sold as programmer debugging tools, but they're equally test support tools, and I'm amazed that testing groups don't use them as a matter of course.
Testability problems are exacerbated in distributed systems like conventional client/server systems, multi-tiered client/server systems, Java applets that provide smart front-ends to web sites, and so forth. Too often, tests of such systems amount to shallow tests of the user interface component because that's the only component that the tester can easily control.
Finding failures is only the start
It's not enough to find a failure; you must also report it. Unfortunately, poor bug reporting is a classic mistake. Tester bug reports suffer from five major problems:
1. They do not describe how to reproduce the bug. Either no procedure is given, or the given procedure doesn't work. Either case will likely get the bug report shelved.
2. They don't explain what went wrong. At what point in the procedure does the bug occur? What should happen there? What actually happened?
3. They are not persuasive about the priority of the bug. Your job is to have the seriousness of the bug accurately assessed. There's a natural tendency for programmers and managers to rate bugs as less serious than they are. If you believe a bug is serious, explain why a customer would view it the way you do. If you found the bug with an odd case, take the time to reproduce it with a more obviously common or compelling case.
4. They do not help the programmer in debugging. This is a simple cost/benefit tradeoff. A small amount of time spent simplifying the procedure for reproducing the bug or exploring the various ways it could occur may save a great deal of programmer time.
5. They are insulting, so they poison the relationship between developers and testers.
[Kaner93] has an excellent chapter (5) on how to write bug reports. Read it.
Not all bug reports come from testers. Some come from customers. When that happens, it's common for a tester to write a regression test that reproduces the bug in the broken version of the product. When the bug is fixed, that test is used to check that it was fixed correctly.
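As a small illustration of that practice, here is what such a regression test might look like; the bug number, the parse_amount() function, and the failure scenario are all invented stand-ins rather than details from the article.

```python
def parse_amount(text):
    """Stand-in for the fixed product code: accept '$1,000'-style input."""
    return float(text.replace("$", "").replace(",", ""))

def test_customer_report_1234_comma_in_withdrawal_amount():
    """Customer report #1234 (hypothetical): 'Withdraw $1,000' was rejected as an
    invalid amount. This test fails on the broken build, passes once the fix is in,
    and stays in the suite to catch any regression."""
    assert parse_amount("$1,000") == 1000.0
```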
However, adding only regression tests is not enough. A customer bug report suggests two things:
1. That area of the product is buggy. It's well known that bugs tend to cluster.
2. That area of the product was inadequately tested. Otherwise, why did the bug originally escape testing?
An appropriate response to several customer bug reports in an area is to schedule more thorough testing for that area. Begin by examining the current tests (if they're understandable) to determine their systematic weaknesses.
Finally, every bug report is a gift from a customer that tells you how to test better in the future. A common mistake is failing to take notes for the next testing effort. The next product will be somewhat like this one, the bugs will be somewhat like these, and the tests useful in finding those bugs will also be somewhat like the ones you just ran. Mental notes are easy to forget, and they're hard to hand to a new tester. Writing is a wonderful human invention: use it. Both [Kaner93] and [Marick95] describe formats for archiving test information, and both contain general-purpose examples.