Using generated tests

About generated tests

Generated tests fill a different role from either extracted or manual tests. The idea behind generated tests is that because we specify software through its contracts, and because compliance of the software to those contracts can be actively monitored at runtime, we can know two things necessary for building tests:

  1. For any routine, what argument values are valid
  2. For the execution of any routine, what resulting states are acceptable

The first bit of knowledge comes from the preconditions of target routines. The second comes from the postconditions of target routines and the invariants of target classes. Armed with this knowledge, we should be able to generate a series of invocations of target routines using random argument values and evaluate the results. This is what is done by an internal facility of AutoTest that builds generated tests (this facility itself is often also referred to as AutoTest). After many of these randomly generated invocations, AutoTest attempts to synthesize the results of these feature calls into new test classes. The tests in these new test classes contain the calls leading up to, and including, the calls that fail. AutoTest attempts to create only one test for each unique type of failure, so that your test directory doesn't fill up with duplicate tests.
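To make this concrete, here is a minimal sketch of the kind of contract-equipped class AutoTest works from. It is only illustrative: the feature names and contracts are assumptions loosely based on the tutorial's BANK_ACCOUNT, not its actual source. The require clauses tell AutoTest which argument values are valid; the ensure clauses and the class invariant tell it which resulting states are acceptable.

note
    description: "Illustrative sketch only; the tutorial's actual BANK_ACCOUNT may differ."

class
    BANK_ACCOUNT

create
    make

feature -- Initialization

    make
            -- Create an account with a zero balance.
        do
            balance := 0
        ensure
            balance_zero: balance = 0
        end

feature -- Access

    balance: INTEGER
            -- Current balance.

feature -- Element change

    deposit (an_amount: INTEGER)
            -- Add `an_amount' to the account.
        require
                -- Tells AutoTest which argument values are valid.
            amount_positive: an_amount > 0
        do
            balance := balance + an_amount
        ensure
                -- Tells AutoTest which resulting states are acceptable.
            balance_increased: balance = old balance + an_amount
        end

    withdraw (an_amount: INTEGER)
            -- Remove `an_amount' from the account.
        require
            amount_positive: an_amount > 0
            amount_available: an_amount <= balance
        do
            balance := balance - an_amount
        ensure
            balance_decreased: balance = old balance - an_amount
        end

invariant
        -- Also checked after every generated call.
    balance_non_negative: balance >= 0

end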

You may look at a generated test class and think that it is very long and contains a lot of material whose relevance you doubt. This is a fair assessment. The processes that AutoTest uses to build and minimize generated tests are constantly being improved. But for now, generated tests, although useful, retain some of the randomness that went into their creation.

So for the time being, unlike manual and extracted tests, you should not make generated tests a part of your permanent test suite. Rather, you should consider them a disposable means to an end. Use each generated test as a guide for building an effective and readable manual test.

Creating generated tests

If you've been through the discussion of the creation of manual and extracted tests, then it should come as no surprise to learn that you use the New Eiffel test wizard to create generated tests. And much of this process will seem familiar now.

In the drop-down menu for the Create new test button, choose the item Generate tests for custom types.

At this point, you'll see the Test Generation wizard pane. This pane allows you to specify which classes you want to generate tests for. You can also adjust the values of certain parameters used in the test generation.

Let's type the class name BANK_ACCOUNT into the box labeled Class or type name and click the "+" button to add it to the list. Of course, you can remove an item from the list by selecting it and clicking "-".

The rest of the pane is used to configure certain options for the test generation process.

Cutoff (minutes) lets you specify a number of minutes for AutoTest to run random invocations of the routines in your target class(es).

Cutoff (invocations) lets you control how long AutoTest will run random invocations by specifying a maximum number of invocations.

Routine timeout sets an upper limit on how long AutoTest will wait for a random feature call to complete.

Random number generation seed provides a way for you to control the seeding of the random number generator used by AutoTest. When the value is 0, as shown here, the seed is created from the system clock. This is adequate in most cases, but the option exists because you may sometimes want to reproduce a previous test generation run; to do that, you would set the seed to the same value for each run.

The two check boxes Slice minimization and DDmin for minimization let you select the approach used to minimize the size of generated tests. Generally, the default value here is adequate. Slicing and ddmin are two different ways of doing minimization. Tests are generated from calls that fail, and many randomly generated calls may have led up to a failing call. Minimization eliminates most of the unrelated calls, leaving the test code as short as possible. Note that minimization is memory and processor intensive.
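As a rough illustration (the call sequences below are invented for this explanation, not actual AutoTest output), suppose the final call in a long random sequence fails because it violates a contract. Minimization keeps only the calls the failure actually depends on:

class
    MINIMIZATION_ILLUSTRATION

feature -- Illustration

    unminimized_sequence
            -- The kind of random call sequence AutoTest might produce.
        local
            a1, a2: BANK_ACCOUNT
        do
            create a1.make
            a1.deposit (7)
            create a2.make
            a1.withdraw (2)
            a2.deposit (3)        -- Suppose this call fails (e.g. a postcondition violation).
        end

    minimized_sequence
            -- What slicing or ddmin keeps: only the calls the failure depends on.
        local
            a2: BANK_ACCOUNT
        do
            create a2.make
            a2.deposit (3)        -- Still reproduces the failure; everything unrelated is gone.
        end

end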

The last check box, HTML statistics, gives you the option of having AutoTest record the results of a test generation run in a set of files that you can review with a web browser.

We can leave all of these at their default values, with one exception: let's check the HTML statistics box.

During the test generation you can watch the random invocations of your class's routines being logged in the Testing pane of the Outputs tool. When the generation completes, AutoTest directs you to the location of the results:

The file statistics.txt contains a summary of the generation run. If you enabled HTML statistics, you can open the file index.html in your browser to view formatted summary and detail information.

Note: The result directory includes files that summarize the whole generated testing process. Some of these are lengthy because they contain information on test cases used for each target routine.

If we generate tests for the class BANK_ACCOUNT in which we have already fixed the two bugs from the manual and extracted tests, we will see results something like the following:

The important thing to notice here is the status: pass. There were no randomly generated cases that failed. So every valid invocation of a routine for class BANK_ACCOUNT completed satisfactorily. Therefore, no generated test class was created.

If we re-introduce the bug into the deposit procedure of class BANK_ACCOUNT (i.e., remove the line of code balance := balance + an_amount), and then request generated tests again, we get different results:

This time, as we expected, failures were encountered. And a generated test class was created.
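For reference, the broken routine, shown here as a fragment of the sketch given earlier rather than the tutorial's exact source, now looks roughly like this; with the assignment gone, balance never changes, so the postcondition is violated on every call:

    deposit (an_amount: INTEGER)
            -- Add `an_amount' to the account.
        require
            amount_positive: an_amount > 0
        do
                -- Bug re-introduced: `balance := balance + an_amount' has been removed,
                -- so `balance' is never updated and the postcondition below fails.
        ensure
            balance_increased: balance = old balance + an_amount
        end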

A look at a generated test

The generated test class looks like this:

note
    description: "Generated test created by AutoTest."
    author: "Testing tool"

class
    TEST_BANK_ACCOUNT_GENERATED_001

inherit
    EQA_SYNTHESIZED_TEST_SET

feature -- Test routines

    generated_test_1
        note
            testing: "type/generated"
            testing: "covers/{BANK_ACCOUNT}.deposit"
        local
            v_6: BANK_ACCOUNT
            v_7: INTEGER_32
        do
            execute_safe (agent: BANK_ACCOUNT do create {BANK_ACCOUNT} Result end)
            if {l_ot1: BANK_ACCOUNT} last_object then
                v_6 := l_ot1
            end
            v_7 := {INTEGER_32} 3

                -- Final routine call
            set_is_recovery_enabled (False)
            execute_safe (agent v_6.deposit (v_7))
        end

end

Note: If you've been following along by doing these examples in EiffelStudio, you may notice that your generated class looks slightly different.

This test is written a little differently from both the manual test we wrote and the extracted test, but it's not too hard to figure out what's going on. An object of type BANK_ACCOUNT is created (local v_6) and the deposit feature is applied to it with an argument value of 3 (local v_7).
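Rewritten by hand, the same scenario might look something like the sketch below. The creation procedure make, the assertion tag, and the expected balance are assumptions for illustration; they are not taken from the generated class.

class
    TEST_BANK_ACCOUNT_DEPOSIT

inherit
    EQA_TEST_SET

feature -- Test routines

    test_deposit
            -- Deposit 3 into a new account, relying on the contracts
            -- of {BANK_ACCOUNT}.deposit to flag any failure.
        note
            testing: "covers/{BANK_ACCOUNT}.deposit"
        local
            l_account: BANK_ACCOUNT
        do
            create l_account.make
            l_account.deposit (3)
            assert ("balance_updated", l_account.balance = 3)
        end

end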

You can see that this test, although it is implemented differently, is about the same as the manual test we wrote covering {BANK_ACCOUNT}.deposit. Because we have re-introduced the bug in BANK_ACCOUNT, if we run all tests, we see that both our manual test and the generated test are failing ... only the extracted test covering {BANK_ACCOUNT}.withdraw is successful:

If we replace the implementation for {BANK_ACCOUNT}.deposit that we had removed, and then re-execute the tests, all are successful.
