Module docutils.readers.python.moduleparser

Parser for Python modules.

The `parse_module()` function takes a module's text and file name, runs it
through the module parser (using compiler.py and tokenize.py) and produces a
"module documentation tree": a high-level AST full of nodes that are
interesting from an auto-documentation standpoint.  For example, given this
module (x.py)::

    # comment

    '''Docstring'''

    '''Additional docstring'''

    __docformat__ = 'reStructuredText'

    a = 1
    '''Attribute docstring'''

    class C(Super):

        '''C's docstring'''

        class_attribute = 1
        '''class_attribute's docstring'''

        def __init__(self, text=None):
            '''__init__'s docstring'''

            self.instance_attribute = (text * 7
                                       + ' whaddyaknow')
            '''instance_attribute's docstring'''


    def f(x,                            # parameter x
          y=a*5,                        # parameter y
          *args):                       # parameter args
        '''f's docstring'''
        return [x + item for item in args]

    f.function_attribute = 1
    '''f.function_attribute's docstring'''

The module parser will produce this module documentation tree::

    <Module filename="x.py">
        <Comment lineno="1">
            comment
        <Docstring>
            Docstring
        <Docstring lineno="5">
            Additional docstring
        <Attribute lineno="7" name="__docformat__">
            <Expression lineno="7">
                'reStructuredText'
        <Attribute lineno="9" name="a">
            <Expression lineno="9">
                1
            <Docstring lineno="10">
                Attribute docstring
        <Class bases="Super" lineno="12" name="C">
            <Docstring lineno="12">
                C's docstring
            <Attribute lineno="16" name="class_attribute">
                <Expression lineno="16">
                    1
                <Docstring lineno="17">
                    class_attribute's docstring
            <Method lineno="19" name="__init__">
                <Docstring lineno="19">
                    __init__'s docstring
                <ParameterList lineno="19">
                    <Parameter lineno="19" name="self">
                    <Parameter lineno="19" name="text">
                        <Default lineno="19">
                            None
                <Attribute lineno="22" name="self.instance_attribute">
                    <Expression lineno="22">
                        (text * 7 + ' whaddyaknow')
                    <Docstring lineno="24">
                        instance_attribute's docstring
        <Function lineno="27" name="f">
            <Docstring lineno="27">
                f's docstring
            <ParameterList lineno="27">
                <Parameter lineno="27" name="x">
                    <Comment>
                        # parameter x
                <Parameter lineno="27" name="y">
                    <Default lineno="27">
                        a * 5
                    <Comment>
                        # parameter y
                <ExcessPositionalArguments lineno="27" name="args">
                    <Comment>
                        # parameter args
        <Attribute lineno="33" name="f.function_attribute">
            <Expression lineno="33">
                1
            <Docstring lineno="34">
                f.function_attribute's docstring

(Comments are not implemented yet.)
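
For orientation, a tree like the one above could be produced along these
lines.  This is a minimal sketch: it assumes Python 2 (the parser relies on
the old compiler module), and it assumes that printing the returned Module
node renders the indented form shown above::

    from docutils.readers.python.moduleparser import parse_module

    module_text = open('x.py').read()
    tree = parse_module(module_text, 'x.py')
    print tree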

compiler.parse() provides most of what's needed for this doctree, and the
"tokenize" module supplies the rest.  Line numbers come from the
compiler.parse() AST, and the TokenParser.rhs(lineno) method recovers the
remaining details (such as the source text of right-hand-side expressions)
from the token stream.
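
As a rough sketch of these two information sources (Python 2 era APIs; the
exact compiler.ast node layout shown here is my recollection and may differ
in detail)::

    import compiler                      # high-level AST; nodes carry .lineno
    import tokenize, StringIO            # raw token stream with source text

    source = "a = 1\n'''Attribute docstring'''\n"

    mod_ast = compiler.parse(source)
    assign = mod_ast.node.nodes[0]       # the Assign node for ``a = 1``
    print assign.lineno                  # -> 1

    # A TokenParser-style pass walks tokens like these to recover the exact
    # source text (e.g. the right-hand side of the assignment).
    readline = StringIO.StringIO(source).readline
    for token in tokenize.generate_tokens(readline):
        print token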

The Docutils Python reader component will transform this module doctree into a
Python-specific Docutils doctree, and then a `stylist transform`_ will
further transform it into a generic doctree.  Namespaces will have to be
compiled for each of the scopes, but I'm not certain at what stage of
processing.

It's very important to keep all docstring processing out of this, so that it's
completely generic and not tool-specific.

> Why perform all of those transformations?  Why not go from the AST to a
> generic doctree?  Or, even from the AST to the final output?

I want the docutils.readers.python.moduleparser.parse_module() function to
produce a standard documentation-oriented tree that can be used by any tool.
We can develop it together without having to compromise on the rest of our
design (i.e., HappyDoc doesn't have to be made to work like Docutils, and
vice versa).  It would be a higher-level version of what compiler.py provides.

The Python reader component transforms this generic AST into a Python-specific
doctree (it knows about modules, classes, functions, etc.), but this is
specific to Docutils and cannot be used by HappyDoc or others.  The stylist
transform does the final layout, converting Python-specific structures
("class" sections, etc.) into a generic doctree using primitives (tables,
sections, lists, etc.).  This generic doctree does *not* know about Python
structures any more.  The advantage is that this doctree can be handed off to
any of the output writers to create any output format we like.

The latter two transforms are separate because I want to be able to have
multiple independent layout styles (multiple runtime-selectable "stylist
transforms").  Each of the existing tools (HappyDoc, pydoc, epydoc, Crystal,
etc.) has its own fixed format.  I personally don't like the tables-based
format produced by these tools, and I'd like to be able to customize the
format easily.  That's the goal of stylist transforms, which are independent
from the Reader component itself.  One stylist transform could produce
HappyDoc-like output, another could produce output similar to module docs in
the Python library reference manual, and so on.

It's for exactly this reason:

>> It's very important to keep all docstring processing out of this, so that
>> it's completely generic and not tool-specific.

... but it goes beyond docstring processing.  It's also important to keep
style decisions and tool-specific data transforms out of this module parser.


Issues
======

* At what point should namespaces be computed?  Should they be part of the
  basic AST produced by the ASTVisitor walk, or generated by another tree
  traversal?

* At what point should a distinction be made between local variables &
  instance attributes in __init__ methods?

* Docstrings are getting their lineno from their parents.  Should the
  TokenParser find the real line numbers?

* Comments: include them?  How and when?  Only full-line comments, or
  parameter comments too?  (See function "f" above for an example.)

* Module could use more docstrings & refactoring in places.

Classes
=======

* AssignmentVisitor
* Attribute
* AttributeTuple
* AttributeVisitor
* BaseVisitor
* Class
* ClassVisitor
* Comment
* Default
* Docstring
* DocstringVisitor
* ExcessKeywordArguments
* ExcessPositionalArguments
* Expression
* Function
* FunctionVisitor
* Import
* InitMethodVisitor
* Method
* MethodVisitor
* Module
* ModuleVisitor
* Node: Base class for module documentation tree nodes.
* Parameter
* ParameterList
* ParameterTuple
* TextNode
* TokenParser

Function Summary
================

normalize_parameter_name(name)
    Converts a tuple like ('a', ('b', 'c'), 'd') into '(a, (b, c), d)'
parse_module(module_text, filename)
    Return a module documentation tree from module_text.
trim_docstring(text)
    Trim indentation and blank lines from docstring text & return it.

Function Details
================

normalize_parameter_name(name)
    Converts a tuple like ('a', ('b', 'c'), 'd') into '(a, (b, c), d)'
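
    An illustration of the documented behavior (such nested tuples come from
    Python 2 tuple-unpacking parameters)::

        >>> normalize_parameter_name(('a', ('b', 'c'), 'd'))
        '(a, (b, c), d)'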

parse_module(module_text, filename)
    Return a module documentation tree from module_text.

trim_docstring(text)
    Trim indentation and blank lines from docstring text & return it.

    See PEP 257.
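
    A sketch of the expected result, assuming the standard PEP 257 trimming
    algorithm::

        >>> trim_docstring('''
        ...     First line.
        ...
        ...     Second line.
        ...     ''')
        'First line.\n\nSecond line.'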

