Module doctest :: Class DocTestRunner


Known Subclasses:
DebugRunner

A class used to run DocTest test cases, and accumulate statistics. The run method is used to process a single DocTest case. It returns a tuple (f, t), where t is the number of test cases tried, and f is the number of test cases that failed.

>>> tests = DocTestFinder().find(_TestClass)
>>> runner = DocTestRunner(verbose=False)
>>> tests.sort(key=lambda test: test.name)
>>> for test in tests:
...     print test.name, '->', runner.run(test)
_TestClass -> (0, 2)
_TestClass.__init__ -> (0, 2)
_TestClass.get -> (0, 2)
_TestClass.square -> (0, 1)

The summarize method prints a summary of all the test cases that have been run by the runner, and returns an aggregated (f, t) tuple:

>>> runner.summarize(verbose=1)
4 items passed all tests:
   2 tests in _TestClass
   2 tests in _TestClass.__init__
   2 tests in _TestClass.get
   1 tests in _TestClass.square
7 tests in 4 items.
7 passed and 0 failed.
Test passed.
(0, 7)

The aggregated number of tried examples and failed examples is also available via the tries and failures attributes:

>>> runner.tries
7
>>> runner.failures
0

The comparison between expected outputs and actual outputs is done by an OutputChecker. This comparison may be customized with a number of option flags; see the documentation for testmod for more information. If the option flags are insufficient, then the comparison may also be customized by passing a subclass of OutputChecker to the constructor.
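As a minimal sketch of the subclassing route, the following checker first tries the normal comparison and then falls back to a case-insensitive one. The CaseInsensitiveChecker name and the case-folding rule are illustrative assumptions, not part of doctest:

```python
import doctest

class CaseInsensitiveChecker(doctest.OutputChecker):
    """Illustrative OutputChecker subclass: accepts outputs that
    differ from the expected output only in letter case."""
    def check_output(self, want, got, optionflags):
        # First try the normal comparison, honouring any option flags.
        if doctest.OutputChecker.check_output(self, want, got, optionflags):
            return True
        # Otherwise, compare the two outputs case-insensitively.
        return want.lower() == got.lower()

# The example expects lowercase output but actually prints uppercase:
source = ">>> print('HELLO')\nhello\n"
test = doctest.DocTestParser().get_doctest(source, {}, 'demo', '<demo>', 0)
runner = doctest.DocTestRunner(checker=CaseInsensitiveChecker(), verbose=False)
failed, tried = runner.run(test)   # passes: 0 failures, 1 example tried
```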

The test runner's display output can be controlled in two ways. First, an output function (out) can be passed to DocTestRunner.run; this function will be called with strings that should be displayed, and defaults to sys.stdout.write. If capturing the output is not sufficient, then the display output can also be customized by subclassing DocTestRunner and overriding the methods report_start, report_success, report_unexpected_exception, and report_failure.
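A short sketch of the first approach, capturing the runner's display output in a list instead of letting it reach sys.stdout (the deliberately failing example is illustrative):

```python
import doctest

# A deliberately failing example, used only to produce a failure report:
source = ">>> 1 + 1\n3\n"
test = doctest.DocTestParser().get_doctest(source, {}, 'demo', '<demo>', 0)

# Collect everything the runner would have written to sys.stdout.
chunks = []
runner = doctest.DocTestRunner(verbose=False)
failed, tried = runner.run(test, out=chunks.append)
report = ''.join(chunks)   # the formatted failure report
```

The report contains the usual "Expected:" / "Got:" sections, but nothing is printed during the run.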

Instance Methods
 
__init__(self, checker=None, verbose=None, optionflags=0)
Create a new test runner.
 
report_start(self, out, test, example)
Report that the test runner is about to process the given example.
 
report_success(self, out, test, example, got)
Report that the given example ran successfully.
 
report_failure(self, out, test, example, got)
Report that the given example failed.
 
report_unexpected_exception(self, out, test, example, exc_info)
Report that the given example raised an unexpected exception.
 
_failure_header(self, test, example)
 
__run(self, test, compileflags, out)
Run the examples in test.
 
__record_outcome(self, test, f, t)
Record the fact that the given DocTest (test) generated f failures out of t tried examples.
 
__patched_linecache_getlines(self, filename, module_globals=None)
 
run(self, test, compileflags=None, out=None, clear_globs=True)
Run the examples in test, and display the results using the writer function out.
 
summarize(self, verbose=None)
Print a summary of all the test cases that have been run by this DocTestRunner, and return a tuple (f, t), where f is the total number of failed examples, and t is the total number of tried examples.
 
merge(self, other)
Class Variables
  DIVIDER = '***************************************************...
  __LINECACHE_FILENAME_RE = re.compile(r'<doctest (?P<name>[\w\....
Method Details

__init__(self, checker=None, verbose=None, optionflags=0) (Constructor)

 

Create a new test runner.

Optional keyword arg checker is the OutputChecker that should be used to compare the expected outputs and actual outputs of doctest examples.

Optional keyword arg verbose controls how much is displayed: if true, print lots of stuff; if false, print only failures. By default, it's true iff '-v' is in sys.argv.

Optional argument optionflags can be used to control how the test runner compares expected output to actual output, and how it displays failures. See the documentation for testmod for more information.
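For instance, here is a minimal sketch using doctest.ELLIPSIS, which lets "..." in the expected output match any substring of the actual output (the example itself is illustrative):

```python
import doctest

# With ELLIPSIS, "..." in the expected output absorbs the middle of the list.
source = ">>> list(range(20))\n[0, 1, ...]\n"
test = doctest.DocTestParser().get_doctest(source, {}, 'demo', '<demo>', 0)
runner = doctest.DocTestRunner(verbose=False, optionflags=doctest.ELLIPSIS)
failed, tried = runner.run(test)   # passes despite the abbreviated expectation
```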

report_start(self, out, test, example)

 
Report that the test runner is about to process the given example. (Only displays a message if verbose=True)

report_success(self, out, test, example, got)

 
Report that the given example ran successfully. (Only displays a message if verbose=True)
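The report_* hooks can be overridden to change what happens on each outcome. As a sketch, the following subclass records failing examples instead of printing a report; the CollectingRunner name and the tuple it stores are assumptions for illustration:

```python
import doctest

class CollectingRunner(doctest.DocTestRunner):
    """Illustrative DocTestRunner subclass: silently records each
    failing example instead of printing the default failure report."""
    def __init__(self, *args, **kwargs):
        doctest.DocTestRunner.__init__(self, *args, **kwargs)
        self.failed_examples = []

    def report_failure(self, out, test, example, got):
        # Store (test name, example source, actual output) for later.
        self.failed_examples.append((test.name, example.source, got))

source = ">>> 2 + 2\n5\n"
test = doctest.DocTestParser().get_doctest(source, {}, 'demo', '<demo>', 0)
runner = CollectingRunner(verbose=False)
failed, tried = runner.run(test)   # 1 failure, recorded rather than printed
```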

__run(self, test, compileflags, out)

 
Run the examples in test. Write the outcome of each example with one of the DocTestRunner.report_* methods, using the writer function out. compileflags is the set of compiler flags that should be used to execute examples. Return a tuple (f, t), where t is the number of examples tried, and f is the number of examples that failed. The examples are run in the namespace test.globs.

run(self, test, compileflags=None, out=None, clear_globs=True)

 

Run the examples in test, and display the results using the writer function out.

The examples are run in the namespace test.globs. If clear_globs is true (the default), then this namespace will be cleared after the test runs, to help with garbage collection. If you would like to examine the namespace after the test completes, use clear_globs=False.

compileflags gives the set of flags that should be used by the Python compiler when running the examples. If not specified, then it will default to the set of future-import flags that apply to globs.

The output of each example is checked by the output checker's check_output method, and the results are formatted by the DocTestRunner.report_* methods.
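A minimal sketch of clear_globs=False, keeping the namespace the examples ran in so a result can be inspected afterwards (the example itself is illustrative):

```python
import doctest

# Two examples: the first binds x, the second checks its value.
source = ">>> x = 6 * 7\n>>> x\n42\n"
test = doctest.DocTestParser().get_doctest(source, {}, 'demo', '<demo>', 0)
runner = doctest.DocTestRunner(verbose=False)
failed, tried = runner.run(test, clear_globs=False)

# Because globs were not cleared, the namespace is still available:
value = test.globs['x']   # 42
```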

summarize(self, verbose=None)

 

Print a summary of all the test cases that have been run by this DocTestRunner, and return a tuple (f, t), where f is the total number of failed examples, and t is the total number of tried examples.

The optional verbose argument controls how detailed the summary is. If the verbosity is not specified, then the DocTestRunner's verbosity is used.


Class Variable Details

DIVIDER

Value:
"*" * 70  (a line of 70 asterisks)

__LINECACHE_FILENAME_RE

Value:
re.compile(r'<doctest (?P<name>[\w\.]+)\[(?P<examplenum>\d+)\]>$')