[alpha_api]: javascript:alphaApi()
[alpha_implementation]: javascript:alphaImplementation()

Advanced Data Mapping {@name=advdatamapping}
======================

This section details all the options available to Mappers, as well as advanced patterns.

To start, here are the tables we will work with again:

    {python}
    from sqlalchemy import *

    metadata = MetaData()

    # a table to store users
    users_table = Table('users', metadata,
        Column('user_id', Integer, primary_key = True),
        Column('user_name', String(40)),
        Column('password', String(80))
    )

    # a table that stores mailing addresses associated with a specific user
    addresses_table = Table('addresses', metadata,
        Column('address_id', Integer, primary_key = True),
        Column('user_id', Integer, ForeignKey("users.user_id")),
        Column('street', String(100)),
        Column('city', String(80)),
        Column('state', String(2)),
        Column('zip', String(10))
    )

    # a table that stores keywords
    keywords_table = Table('keywords', metadata,
        Column('keyword_id', Integer, primary_key = True),
        Column('name', VARCHAR(50))
    )

    # a table that associates keywords with users
    userkeywords_table = Table('userkeywords', metadata,
        Column('user_id', INT, ForeignKey("users.user_id")),
        Column('keyword_id', INT, ForeignKey("keywords.keyword_id"))
    )


### More On Mapper Properties {@name=properties}

#### Overriding Column Names {@name=colname}

When mappers are constructed, by default the column names in the Table metadata are used as the names of attributes on the mapped class.  This can be customized within the properties by stating the key/column combinations explicitly:

    {python}
    user_mapper = mapper(User, users_table, properties={
        'id' : users_table.c.user_id,
        'name' : users_table.c.user_name,
    })

When column names overlap in a mapper constructed against multiple tables, the overlapping columns may be referenced together with a list:

    {python}
    # join users and addresses
    usersaddresses = sql.join(users_table, addresses_table, users_table.c.user_id == addresses_table.c.user_id)
    m = mapper(User, usersaddresses,   
        properties = {
            'id' : [users_table.c.user_id, addresses_table.c.user_id],
        }
        )

#### Overriding Properties {@name=overriding}

A common request is the ability to create custom class properties that override the behavior of setting/getting an attribute.  Currently, the easiest way to do this in SQLAlchemy is how it would be done in any Python program; define your attribute with a different name, such as "_attribute", and use a property to get/set its value.  The mapper just needs to be told of the special name:

    {python}
    class MyClass(object):
        def _set_email(self, email):
           self._email = email
        def _get_email(self):
           return self._email
        email = property(_get_email, _set_email)
    
    mapper(MyClass, mytable, properties = {
       # map the '_email' attribute to the "email" column
       # on the table
       '_email': mytable.c.email
    })

It is also possible to route the `select_by` and `get_by` functions on `Query` using the new property name, by establishing a `synonym`:

    {python}
    mapper(MyClass, mytable, properties = {
        # map the '_email' attribute to the "email" column
        # on the table
        '_email': mytable.c.email,

        # make a synonym 'email'
        'email' : synonym('_email')
    })
    
    # now you can select_by(email)
    result = session.query(MyClass).select_by(email='john@smith.com')

A synonym can be established with the flag `proxy=True`, which creates a class-level proxy to the actual property.  This has the effect of creating a fully functional synonym on class instances:

    {python}
    mapper(MyClass, mytable, properties = {
        '_email': mytable.c.email,
        'email' : synonym('_email', proxy=True)
    })

    x = MyClass()
    x.email = 'john@doe.com'
    
    >>> x._email
    'john@doe.com'

#### Custom List Classes {@name=customlist}

Feature Status: [Alpha API][alpha_api] 

A one-to-many or many-to-many relationship results in a list-holding element being attached to all instances of a class.  The actual list is an "instrumented" list, which transparently maintains a relationship to a plain Python list.  The implementation of the underlying plain list can be changed to be any object that implements `list`-style `append` and `__iter__` methods.  A common need is for a list-based relationship to actually be a dictionary.  This can be achieved by subclassing `dict` to have `list`-like behavior.

In this example, a class `MyClass` is defined, which is associated with a parent object `MyParent`.  The collection of `MyClass` objects on each `MyParent` object will be a dictionary, storing each `MyClass` instance keyed to its `name` attribute.

    {python}
    # a class to be stored in the list
    class MyClass(object):
        def __init__(self, name):
            self.name = name
            
    # create a dictionary that will act like a list, and store
    # instances of MyClass
    class MyDict(dict):
        def append(self, item):
            self[item.name] = item
        def __iter__(self):
            return iter(self.values())

    # parent class
    class MyParent(object):
        pass
    
    # mappers, constructed normally
    mapper(MyClass, myclass_table)
    mapper(MyParent, myparent_table, properties={
        'myclasses' : relation(MyClass, collection_class=MyDict)
    })
    
    # elements on 'myclasses' can be accessed via string keyname
    myparent = MyParent()
    myparent.myclasses.append(MyClass('this is myclass'))
    myclass = myparent.myclasses['this is myclass']
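The dictionary class itself can be exercised apart from any mapper; the `append`/`__iter__` contract it fulfills is plain Python.  A standalone sketch using the same classes:

```python
# standalone demonstration of the dict-that-acts-like-a-list collection class
class MyClass(object):
    def __init__(self, name):
        self.name = name

class MyDict(dict):
    def append(self, item):
        # the collection receives items via append(); key each one by 'name'
        self[item.name] = item
    def __iter__(self):
        # iteration yields the stored instances, list-style
        return iter(self.values())

collection = MyDict()
collection.append(MyClass('widget'))
collection.append(MyClass('gadget'))

assert collection['widget'].name == 'widget'    # dict-style access by key
assert sorted(item.name for item in collection) == ['gadget', 'widget']    # list-style iteration
```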

Note: SQLAlchemy 0.4 has an overhauled and much improved implementation for custom list classes, with some slight API changes.

#### Custom Join Conditions {@name=customjoin}

When creating relations on a mapper, most examples so far have illustrated the mapper and relationship joining up based on the foreign keys of the tables they represent.  In fact, this "automatic" inspection can be completely circumvented using the `primaryjoin` and `secondaryjoin` arguments to `relation`, as in this example, which creates a User object with a relationship to only those of its Addresses which are in Boston:

    {python}
    class User(object):
        pass
    class Address(object):
        pass
    
    mapper(Address, addresses_table)
    mapper(User, users_table, properties={
        'boston_addresses' : relation(Address, primaryjoin=
                    and_(users_table.c.user_id==addresses_table.c.user_id, 
                    addresses_table.c.city=='Boston'))
    })
        
Many to many relationships can be customized by one or both of `primaryjoin` and `secondaryjoin`, shown below with just the default many-to-many relationship explicitly set:

    {python}
    class User(object):
        pass
    class Keyword(object):
        pass
    mapper(Keyword, keywords_table)
    mapper(User, users_table, properties={
        'keywords':relation(Keyword, secondary=userkeywords_table,
            primaryjoin=users_table.c.user_id==userkeywords_table.c.user_id,
            secondaryjoin=userkeywords_table.c.keyword_id==keywords_table.c.keyword_id
            )
    })

#### Lazy/Eager Joins Multiple Times to One Table {@name=multiplejoin}

The previous example leads in to the idea of joining against the same table multiple times.  Below is a User object that has lists of its Boston and New York addresses:

    {python}
    mapper(User, users_table, properties={
        'boston_addresses' : relation(Address, primaryjoin=
                    and_(users_table.c.user_id==addresses_table.c.user_id, 
                    addresses_table.c.city=='Boston')),
        'newyork_addresses' : relation(Address, primaryjoin=
                    and_(users_table.c.user_id==addresses_table.c.user_id, 
                    addresses_table.c.city=='New York')),
    })

Both lazy and eager loading support multiple joins equally well.

#### Deferred Column Loading {@name=deferred}

This feature allows particular columns of a table to not be loaded by default, instead being loaded later on when first referenced.  It is essentially "column-level lazy loading".  This feature is useful when one wants to avoid loading a large text or binary field into memory when it's not needed.  Individual columns can be lazy loaded by themselves or placed into groups that lazy-load together.

    {python}
    book_excerpts = Table('books', db, 
        Column('book_id', Integer, primary_key=True),
        Column('title', String(200), nullable=False),
        Column('summary', String(2000)),
        Column('excerpt', String),
        Column('photo', Binary)
    )

    class Book(object):
        pass

    # define a mapper that will load each of 'excerpt' and 'photo' in 
    # separate, individual-row SELECT statements when each attribute
    # is first referenced on the individual object instance
    mapper(Book, book_excerpts, properties = {
        'excerpt' : deferred(book_excerpts.c.excerpt),
        'photo' : deferred(book_excerpts.c.photo)
    })

Deferred columns can be placed into groups so that they load together:

    {python}
    book_excerpts = Table('books', db, 
        Column('book_id', Integer, primary_key=True),
        Column('title', String(200), nullable=False),
        Column('summary', String(2000)),
        Column('excerpt', String),
        Column('photo1', Binary),
        Column('photo2', Binary),
        Column('photo3', Binary)
    )

    class Book(object):
        pass

    # define a mapper with a 'photos' deferred group.  when one photo is referenced,
    # all three photos will be loaded in one SELECT statement.  The 'excerpt' will 
    # be loaded separately when it is first referenced.
    mapper(Book, book_excerpts, properties = {
        'excerpt' : deferred(book_excerpts.c.excerpt),
        'photo1' : deferred(book_excerpts.c.photo1, group='photos'),
        'photo2' : deferred(book_excerpts.c.photo2, group='photos'),
        'photo3' : deferred(book_excerpts.c.photo3, group='photos')
    })

You can defer or undefer columns at the `Query` level with the `options` method:

    {python}
    query = session.query(Book)
    query.options(defer('summary')).all()
    query.options(undefer('excerpt')).all()
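As a conceptual aside, "column-level lazy loading" can be pictured as a descriptor which runs a load function on first attribute access and caches the result on the instance.  The following is a rough pure-Python analogy of that idea only; it is not SQLAlchemy's actual implementation, and `DeferredAttribute`/`load_excerpt` are illustrative names:

```python
class DeferredAttribute(object):
    """Load a value on first access, then cache it on the instance (sketch)."""
    def __init__(self, name, loader):
        self.name = name
        self.loader = loader   # stands in for the deferred SELECT

    def __get__(self, instance, owner):
        if instance is None:
            return self
        # first access: run the "load" and cache in the instance dict, which
        # shadows this (non-data) descriptor on subsequent accesses
        value = self.loader(instance)
        instance.__dict__[self.name] = value
        return value

select_count = [0]
def load_excerpt(book):
    select_count[0] += 1   # count simulated SELECT statements
    return "a long excerpt..."

class Book(object):
    excerpt = DeferredAttribute('excerpt', load_excerpt)

b = Book()
assert select_count[0] == 0    # nothing loaded yet
b.excerpt                      # first access triggers the "load"
b.excerpt                      # cached; no second load
assert select_count[0] == 1
```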

#### Working with Large Collections

SQLAlchemy relations are generally simplistic; the lazy loader loads in the full list of child objects when accessed, and the eager load builds a query that loads the full list of child objects.  Additionally, when you are deleting a parent object, SQLAlchemy ensures that it has loaded the full list of child objects so that it can mark them as deleted as well (or to update their parent foreign key to NULL).  It does not issue an en-masse "delete from table where parent_id=?" type of statement in such a scenario.  This is because the child objects themselves may also have further dependencies, and additionally may also exist in the current session in which case SA needs to know their identity so that their state can be properly updated.

So there are several techniques that can be used individually or combined together to address these issues, in the context of a large collection where you normally would not want to load the full list of relationships:

* Use `lazy=None` to disable child object loading (i.e. noload)

        {python}
        mapper(MyClass, table, properties={
            'children':relation(MyOtherClass, lazy=None)
        })

* To load child objects, just use a query.  Of particular convenience is that `Query` is a generative object, so you can return
it as is, allowing additional criteria to be added as needed:

        {python}
        class Organization(object):
            def __init__(self, name):
                self.name = name
            member_query = property(lambda self: object_session(self).query(Member).with_parent(self))

        myorg = sess.query(Organization).get(5)
        
        # get all members
        members = myorg.member_query.list()
        
        # query a subset of members using LIMIT/OFFSET
        members = myorg.member_query[5:10]
        
* Use `passive_deletes=True` to disable child object loading on a DELETE operation, in conjunction with "ON DELETE (CASCADE|SET NULL)" on your database to automatically cascade deletes to child objects.   Note that "ON DELETE" is not supported on SQLite, and requires `InnoDB` tables when using MySQL:

        {python}
        mytable = Table('mytable', meta,
            Column('id', Integer, primary_key=True),
            )

        myothertable = Table('myothertable', meta,
            Column('id', Integer, primary_key=True),
            Column('parent_id', Integer),
            ForeignKeyConstraint(['parent_id'],['mytable.id'], ondelete="CASCADE"),
            )

        mapper(MyOtherClass, myothertable)
        
        mapper(MyClass, mytable, properties={
            'children':relation(MyOtherClass, passive_deletes=True)
        })
        
* As an alternative to using "ON DELETE CASCADE", for very simple scenarios you can create a simple `MapperExtension` that will issue a DELETE for child objects before the parent object is deleted:

        {python}
        class DeleteMemberExt(MapperExtension):
            def before_delete(self, mapper, connection, instance):
                connection.execute(member_table.delete(member_table.c.org_id==instance.org_id))

        mapper(Organization, org_table, extension=DeleteMemberExt(), properties = {
            'members' : relation(Member, lazy=None, passive_deletes=True, cascade="all, delete-orphan")
        })

Note that this approach is not nearly as efficient or general-purpose as "ON DELETE CASCADE", since the database itself can cascade the operation along any number of tables.

The latest distribution includes an example `examples/collection/large_collection.py` which illustrates most of these techniques.    

#### Relation Options {@name=relationoptions}

Options which can be sent to the `relation()` function.  For arguments to `mapper()`, see [advdatamapping_mapperoptions](rel:advdatamapping_mapperoptions).

* **association** - Deprecated; as of version 0.3.0 the association keyword is synonymous with applying the "all, delete-orphan" cascade to a "one-to-many" relationship.  SA can now automatically reconcile a "delete" and "insert" operation of two objects with the same "identity" in a flush() operation into a single "update" statement, which is the pattern that "association" used to indicate.  See the updated example of association mappings in [datamapping_association](rel:datamapping_association).
* **backref** - indicates the name of a property to be placed on the related mapper's class that will handle this relationship in the other direction, including synchronizing the object attributes on both sides of the relation.  Can also point to a `backref()` construct for more configurability.  See [datamapping_relations_backreferences](rel:datamapping_relations_backreferences).
* **cascade** - a string list of cascade rules which determines how persistence operations should be "cascaded" from parent to child. For a description of cascade rules, see [datamapping_relations_lifecycle](rel:datamapping_relations_lifecycle) and [unitofwork_cascade](rel:unitofwork_cascade).
* **collection_class** - a class or function that returns a new list-holding object, which will be used in place of a plain list for storing elements.  See [advdatamapping_properties_customlist](rel:advdatamapping_properties_customlist).
* **foreign_keys** - a list of columns which are to be used as "foreign key" columns.  This parameter should be used in conjunction with explicit
`primaryjoin` and `secondaryjoin` (if needed) arguments, and the columns within the `foreign_keys` list should be present within those join conditions.  Normally, `relation()` will inspect the columns within the join conditions to determine which columns are the "foreign key" columns, based on information in the `Table` metadata.  Use this argument when no `ForeignKey` objects are present in the join condition, or to override the table-defined foreign keys.
* **foreignkey** - deprecated.  Use the `foreign_keys` argument for foreign key specification, or `remote_side` for "directional" logic.
* **lazy=True** - specifies how the related items should be loaded.  A value of True indicates they should be loaded lazily when the property is first accessed.  A value of False indicates they should be loaded by joining against the parent object query, so parent and child are loaded in one round trip (i.e. eagerly).  A value of None indicates the related items are not loaded by the mapper in any case; the application will manually insert items into the list in some other way.  In all cases, items added to or removed from the parent object's collection (or scalar attribute) will cause the appropriate updates and deletes upon flush(); i.e. this option only affects load operations, not save operations.
* **order_by** - indicates the ordering that should be applied when loading these items.  See the section [advdatamapping_orderby](rel:advdatamapping_orderby) for details.
* **passive_deletes=False** - Indicates if lazy-loaders should not be executed during the `flush()` process, which normally occurs in order to locate all existing child items when a parent item is to be deleted.  Setting this flag to True is appropriate when `ON DELETE CASCADE` rules have been set up on the actual tables so that the database may handle cascading deletes automatically.  This strategy is useful particularly for handling the deletion of objects that have very large (and/or deep) child-object collections.  See the example in [advdatamapping_properties_working](rel:advdatamapping_properties_working).
* **post_update** - this indicates that the relationship should be handled by a second UPDATE statement after an INSERT or before a DELETE.  Currently, it also will issue an UPDATE after the instance was UPDATEd as well, although this technically should be improved.  This flag is used to handle saving bi-directional dependencies between two individual rows (i.e. each row references the other), where it would otherwise be impossible to INSERT or DELETE both rows fully since one row exists before the other.  Use this flag when a particular mapping arrangement will incur two rows that are dependent on each other, such as a table that has a one-to-many relationship to a set of child rows, and also has a column that references a single child row within that list (i.e. both tables contain a foreign key to each other).  If a `flush()` operation returns an error that a "cyclical dependency" was detected, this is a cue that you might want to use `post_update` to "break" the cycle.
* **primaryjoin** - a ClauseElement that will be used as the primary join of this child object against the parent object, or in a many-to-many relationship the join of the primary object to the association table.  By default, this value is computed based on the foreign key relationships of the parent and child tables (or association table).
* **private=False** - deprecated.  Setting `private=True` is the equivalent of setting `cascade="all, delete-orphan"`, and indicates the lifecycle of child objects should be contained within that of the parent.   See the example in [datamapping_relations_lifecycle](rel:datamapping_relations_lifecycle).
* **remote_side** - used for self-referential relationships, indicates the column or list of columns that form the "remote side" of the relationship.  See the examples in [advdatamapping_selfreferential](rel:advdatamapping_selfreferential).
* **secondary** - for a many-to-many relationship, specifies the intermediary table.  The `secondary` keyword argument should generally only be used for a table that is not otherwise expressed in any class mapping.  In particular, using the [Association Object Pattern](rel:datamapping_association) is generally mutually exclusive against using the `secondary` keyword argument.
* **secondaryjoin** - a ClauseElement that will be used as the join of an association table to the child object.  By default, this value is computed based on the foreign key relationships of the association and child tables.
* **uselist=(True|False)** - a boolean that indicates if this property should be loaded as a list or a scalar.  In most cases, this value is determined automatically by `relation()`, based on the type and direction of the relationship - one to many forms a list, many to one forms a scalar, many to many is a list.  If a scalar is desired where normally a list would be present, such as a bi-directional one-to-one relationship, set uselist to False.
* **viewonly=False** - when set to True, the relation is used only for loading objects within the relationship, and has no effect on the unit-of-work flush process.  Relations with viewonly can specify any kind of join conditions to provide additional views of related objects onto a parent object.  Note that the functionality of a viewonly relationship has its limits - complicated join conditions may not compile into eager or lazy loaders properly.  If this is the case, use an alternative method, such as those described in [advdatamapping_properties_working](rel:advdatamapping_properties_working), [advdatamapping_resultset](rel:advdatamapping_resultset), or [advdatamapping_selects](rel:advdatamapping_selects).

### Controlling Ordering {@name=orderby}

By default, mappers will attempt to ORDER BY the "oid" column of a table, or the primary key column, when selecting rows.  This can be modified in several ways.

The "order_by" parameter can be sent to a mapper, overriding the per-engine ordering if any.  A value of None means that the mapper should not use any ordering.  A non-None value, which can be a column, an `asc` or `desc` clause, or a list of either, indicates the ORDER BY clause that should be added to all select queries:

    {python}
    # disable all ordering
    mapper = mapper(User, users_table, order_by=None)

    # order by a column
    mapper = mapper(User, users_table, order_by=users_table.c.user_id)
    
    # order by multiple items
    mapper = mapper(User, users_table, order_by=[users_table.c.user_id, desc(users_table.c.user_name)])

"order_by" can also be specified with queries, overriding all other per-engine/per-mapper orderings:

    {python}
    # order by a column
    l = query.filter(users_table.c.user_name=='fred').order_by(users_table.c.user_id).all()
    
    # order by multiple criterion
    l = query.filter(users_table.c.user_name=='fred').order_by([users_table.c.user_id, desc(users_table.c.user_name)]).all()

The "order_by" property can also be specified on a `relation()` which will control the ordering of the collection:

    {python}
    mapper(Address, addresses_table)
    
    # order address objects by address id
    mapper(User, users_table, properties = {
        'addresses' : relation(Address, order_by=addresses_table.c.address_id)
    })
    
    
### Limiting Rows Combined with Eager Loads {@name=limits}

As indicated in the docs on `Query`, you can limit rows using `limit()` and `offset()`.  However, things get tricky when dealing with eager relationships, since a straight LIMIT of rows will interfere with the eagerly-loaded rows.  So here is what SQLAlchemy will do when you use limit or offset with an eager relationship:

    {python}
    class User(object):
        pass
    class Address(object):
        pass
    mapper(User, users_table, properties={
        'addresses' : relation(mapper(Address, addresses_table), lazy=False)
    })
    r = session.query(User).filter(User.c.user_name.like('F%')).limit(20).offset(10).all()
    {opensql}SELECT users.user_id AS users_user_id, users.user_name AS users_user_name, 
    users.password AS users_password, addresses.address_id AS addresses_address_id, 
    addresses.user_id AS addresses_user_id, addresses.street AS addresses_street, 
    addresses.city AS addresses_city, addresses.state AS addresses_state, 
    addresses.zip AS addresses_zip 
    FROM 
    (SELECT users.user_id FROM users WHERE users.user_name LIKE %(users_user_name)s
    ORDER BY users.oid LIMIT 20 OFFSET 10) AS rowcount, 
     users LEFT OUTER JOIN addresses ON users.user_id = addresses.user_id 
    WHERE rowcount.user_id = users.user_id ORDER BY users.oid, addresses.oid
    {'users_user_name': 'F%'}
    
The main WHERE clause as well as the limiting clauses are coerced into a subquery; this subquery represents the desired result of objects.  A containing query, which handles the eager relationships, is joined against the subquery to produce the result.  This is something to keep in mind, as it's a complex query which may be problematic on databases with poor support for LIMIT, such as Oracle, which does not support it natively.

### Mapping a Class with Table Inheritance {@name=inheritance}

Inheritance in databases comes in three forms:  *single table inheritance*, where several types of classes are stored in one table, *concrete table inheritance*, where each type of class is stored in its own table, and *joined table inheritance*, where the parent/child classes are stored in their own tables that are joined together in a select.

There is also the ability to load "polymorphically", meaning that a single query loads objects of multiple types at once.

SQLAlchemy supports all three kinds of inheritance.  Additionally, true "polymorphic" loading is supported in a straightforward way for single table inheritance, and has some more manually-configured features that can make it happen for concrete and joined table inheritance.
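Conceptually, polymorphic loading means the mapper consults a discriminator value in each incoming row (the `polymorphic_on` column) and instantiates the class registered under that value (its `polymorphic_identity`).  A rough pure-Python illustration of the idea, not SQLAlchemy's implementation (`polymorphic_map` and `load_row` are illustrative names):

```python
class Employee(object): pass
class Manager(Employee): pass
class Engineer(Employee): pass

# discriminator value -> class, as established by polymorphic_identity
polymorphic_map = {'manager': Manager, 'engineer': Engineer}

def load_row(row):
    # pick the class based on the row's discriminator column
    cls = polymorphic_map[row['type']]
    obj = cls.__new__(cls)
    obj.__dict__.update(row)   # populate mapped attributes from the row
    return obj

rows = [
    {'employee_id': 1, 'name': 'dilbert', 'type': 'engineer'},
    {'employee_id': 2, 'name': 'pointy', 'type': 'manager'},
]
objects = [load_row(r) for r in rows]
assert isinstance(objects[0], Engineer)
assert isinstance(objects[1], Manager)
assert objects[1].name == 'pointy'
```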

Working examples of polymorphic inheritance come with the distribution in the directory `examples/polymorphic`.

Here are the classes we will use to represent an inheritance relationship:

    {python}
    class Employee(object):
        def __init__(self, name):
            self.name = name
        def __repr__(self):
            return self.__class__.__name__ + " " + self.name

    class Manager(Employee):
        def __init__(self, name, manager_data):
            self.name = name
            self.manager_data = manager_data
        def __repr__(self):
            return self.__class__.__name__ + " " + self.name + " " +  self.manager_data

    class Engineer(Employee):
        def __init__(self, name, engineer_info):
            self.name = name
            self.engineer_info = engineer_info
        def __repr__(self):
            return self.__class__.__name__ + " " + self.name + " " +  self.engineer_info

Each class supports a common `name` attribute, while the `Manager` class has its own attribute `manager_data` and the `Engineer` class has its own attribute `engineer_info`.
        
#### Single Table Inheritance

This will support polymorphic loading via the `Employee` mapper.

    {python}
    employees_table = Table('employees', metadata, 
        Column('employee_id', Integer, primary_key=True),
        Column('name', String(50)),
        Column('manager_data', String(50)),
        Column('engineer_info', String(50)),
        Column('type', String(20))
    )
    
    employee_mapper = mapper(Employee, employees_table, polymorphic_on=employees_table.c.type)
    manager_mapper = mapper(Manager, inherits=employee_mapper, polymorphic_identity='manager')
    engineer_mapper = mapper(Engineer, inherits=employee_mapper, polymorphic_identity='engineer')

#### Concrete Table Inheritance

Without polymorphic loading, you just define a separate mapper for each class.

    {python title="Concrete Inheritance, Non-polymorphic"}
    managers_table = Table('managers', metadata, 
        Column('employee_id', Integer, primary_key=True),
        Column('name', String(50)),
        Column('manager_data', String(50)),
    )

    engineers_table = Table('engineers', metadata, 
        Column('employee_id', Integer, primary_key=True),
        Column('name', String(50)),
        Column('engineer_info', String(50)),
    )

    manager_mapper = mapper(Manager, managers_table)
    engineer_mapper = mapper(Engineer, engineers_table)
    
With polymorphic loading, the SQL query which performs the actual polymorphic load must be constructed, usually as a UNION.  There is a helper function called `polymorphic_union` to create these UNIONs.

    {python title="Concrete Inheritance, Polymorphic"}
    pjoin = polymorphic_union({
        'manager':managers_table,
        'engineer':engineers_table
    }, 'type', 'pjoin')

    employee_mapper = mapper(Employee, pjoin, polymorphic_on=pjoin.c.type)
    manager_mapper = mapper(Manager, managers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='manager')
    engineer_mapper = mapper(Engineer, engineers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='engineer')

#### Joined Table Inheritance

Like concrete table inheritance, this can be done non-polymorphically, or with a little more complexity, polymorphically:

    {python title="Joined Table Inheritance, Non-polymorphic"}
    employees = Table('employees', metadata, 
       Column('person_id', Integer, primary_key=True),
       Column('name', String(50)),
       Column('type', String(30)))

    engineers = Table('engineers', metadata, 
       Column('person_id', Integer, ForeignKey('employees.person_id'), primary_key=True),
       Column('engineer_info', String(50)),
      )

    managers = Table('managers', metadata, 
       Column('person_id', Integer, ForeignKey('employees.person_id'), primary_key=True),
       Column('manager_data', String(50)),
       )

    person_mapper = mapper(Employee, employees)
    mapper(Engineer, engineers, inherits=person_mapper)
    mapper(Manager, managers, inherits=person_mapper)

Polymorphically, joined-table inheritance is easier than concrete, as a simple outer join can usually work:

    {python title="Joined Table Inheritance, Polymorphic"}
    person_join = employees.outerjoin(engineers).outerjoin(managers)

    person_mapper = mapper(Employee, employees, select_table=person_join, polymorphic_on=employees.c.type, polymorphic_identity='employee')
    mapper(Engineer, engineers, inherits=person_mapper, polymorphic_identity='engineer')
    mapper(Manager, managers, inherits=person_mapper, polymorphic_identity='manager')

In SQLAlchemy 0.4, the above mapper setup can load polymorphically *without* the join as well, by issuing distinct queries for each subclass's table.

The join condition in a joined table inheritance structure can be specified explicitly, using `inherit_condition`:

    {python}
    AddressUser.mapper = mapper(
            AddressUser,
            addresses_table, inherits=User.mapper, 
            inherit_condition=users_table.c.user_id==addresses_table.c.user_id
        )

### Mapping a Class against Multiple Tables {@name=joins}

Mappers can be constructed against arbitrary relational units (called `Selectables`) as well as plain `Tables`.  For example, the `join` function from the SQL package creates a neat selectable unit comprised of multiple tables, complete with its own composite primary key, which can be passed to a mapper as the table.

    {python}
    # a class
    class AddressUser(object):
        pass

    # define a Join
    j = join(users_table, addresses_table)
    
    # map to it - the identity of an AddressUser object will be 
    # based on (user_id, address_id) since those are the primary keys involved
    m = mapper(AddressUser, j, properties={
        'user_id':[users_table.c.user_id, addresses_table.c.user_id]
    })

A second example:

    {python}
    # many-to-many join on an association table
    j = join(users_table, userkeywords_table, 
            users_table.c.user_id==userkeywords_table.c.user_id).join(keywords_table, 
               userkeywords_table.c.keyword_id==keywords_table.c.keyword_id)
     
    # a class 
    class KeywordUser(object):
        pass

    # map to it - the identity of a KeywordUser object will be
    # (user_id, keyword_id) since those are the primary keys involved
    m = mapper(KeywordUser, j, properties={
        'user_id':[users_table.c.user_id, userkeywords_table.c.user_id],
        'keyword_id':[userkeywords_table.c.keyword_id, keywords_table.c.keyword_id]
    })

In both examples above, "composite" columns were added as properties to the mappers; these are aggregations of multiple columns into one mapper property, which instructs the mapper to keep both of those columns set at the same value.
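The effect of such a composite property can be pictured in plain Python: one attribute fronts several underlying column slots, and assigning the attribute sets them all to the same value.  This is an illustrative sketch only; the mapper performs the equivalent bookkeeping internally:

```python
# illustrative sketch: a single "user_id" attribute kept in sync across
# two underlying column slots, as a composite mapper property does.
class AddressUser(object):
    def _get_user_id(self):
        return self._users_user_id

    def _set_user_id(self, value):
        # writing the property sets both "columns" to the same value
        self._users_user_id = value
        self._addresses_user_id = value

    user_id = property(_get_user_id, _set_user_id)

au = AddressUser()
au.user_id = 7
```

Reading `au.user_id` and either underlying slot now yields the same value, which is what keeps the joined rows' key columns consistent at flush time.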

### Mapping a Class against Arbitrary Selects {@name=selects}

Similar to mapping against a join, a plain select() object can be used with a mapper as well.  Below, an example select which contains two aggregate functions and a group_by is mapped to a class:

    {python}
    s = select([customers, 
                func.count(orders.c.customer_id).label('order_count'), 
                func.max(orders.c.price).label('highest_order')],
                customers.c.customer_id==orders.c.customer_id,
                group_by=[c for c in customers.c]
                ).alias('somealias')
    class Customer(object):
        pass
    
    m = mapper(Customer, s)
    
Above, the "customers" table is joined against the "orders" table to produce a full row for each customer row, the total count of related rows in the "orders" table, and the highest price in the "orders" table, grouped against the full set of columns in the "customers" table.  That query is then mapped against the Customer class.  New instances of Customer will contain attributes for each column in the "customers" table as well as an "order_count" and "highest_order" attribute.  Updates to the Customer object will only be reflected in the "customers" table and not the "orders" table.  This is because the primary keys of the "orders" table are not represented in this mapper and therefore the table is not affected by save or delete operations.

### Multiple Mappers for One Class {@name=multiple}

The first mapper created for a certain class is known as that class's "primary mapper."  Other mappers can be created as well; these come in two varieties.

* **secondary mapper** - this is a mapper that must be constructed with the keyword argument `non_primary=True`, and represents a load-only mapper.  Objects that are loaded with a secondary mapper will have their save operation processed by the primary mapper.  It is also invalid to add new `relation()`s to a non-primary mapper. To use this mapper with the Session, specify it to the `query` method:

example:

    {python}
    # primary mapper
    mapper(User, users_table)
    
    # make a secondary mapper to load User against a join
    othermapper = mapper(User, users_table.join(someothertable), non_primary=True)
    
    # select
    result = session.query(othermapper).select()

* **entity name mapper** - this is a mapper that is a fully functioning primary mapper for a class, which is distinguished from the regular primary mapper by an `entity_name` parameter.  Instances loaded with this mapper will be totally managed by this new mapper and have no connection to the original one.  Most methods on `Session` include an optional `entity_name` parameter in order to specify this condition.

example:

    {python}
    # primary mapper
    mapper(User, users_table)
    
    # make an entity name mapper that stores User objects in another table
    mapper(User, alternate_users_table, entity_name='alt')
    
    # make two User objects
    user1 = User()
    user2 = User()
    
    # save one in the "users" table
    session.save(user1)
    
    # save the other in the "alternate_users_table"
    session.save(user2, entity_name='alt')
    
    session.flush()
    
    # select from the alternate mapper
    session.query(User, entity_name='alt').select()

### Self Referential Mappers {@name=selfreferential}

A self-referential mapper is a mapper that is designed to operate with an *adjacency list* table.  This is a table that contains one or more foreign keys back to itself, and is usually used to create hierarchical tree structures.  SQLAlchemy's default model of saving items based on table dependencies is not sufficient in this case, as an adjacency list table introduces dependencies between individual rows.  Fortunately, SQLAlchemy will automatically detect a self-referential mapper and do the extra lifting to make it work.  

    {python}
    # define a self-referential table
    trees = Table('treenodes', engine,
        Column('node_id', Integer, primary_key=True),
        Column('parent_node_id', Integer, ForeignKey('treenodes.node_id'), nullable=True),
        Column('node_name', String(50), nullable=False),
        )

    # treenode class
    class TreeNode(object):
        pass

    # mapper defines "children" property, pointing back to TreeNode class,
    # with the mapper unspecified.  it will point back to the primary 
    # mapper on the TreeNode class.
    TreeNode.mapper = mapper(TreeNode, trees, properties={
            'children' : relation(
                            TreeNode, 
                            cascade="all"
                         ),
            }
        )
        
This kind of mapper goes through a lot of extra effort when saving and deleting items, to determine the correct dependency graph of nodes within the tree.
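The "extra effort" amounts to ordering the rows so that each parent node is inserted before its children.  A minimal sketch of that per-row dependency sort in plain Python (not the mapper's actual code; the `insert_order` helper and its dict argument are invented for illustration):

```python
# sketch of per-row dependency ordering for an adjacency list table:
# a node row may only be INSERTed after its parent node's row exists.
def insert_order(nodes):
    """nodes: dict of node_id -> parent_node_id (None for roots).
    Returns node_ids ordered so that parents precede children."""
    ordered, seen = [], set()

    def visit(node_id):
        if node_id is None or node_id in seen:
            return
        visit(nodes[node_id])      # ensure the parent is emitted first
        seen.add(node_id)
        ordered.append(node_id)

    for node_id in nodes:
        visit(node_id)
    return ordered

order = insert_order({3: 1, 1: None, 2: 1, 4: 2})
```

Deletes run the same ordering in reverse, so children are removed before their parents.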
    
A self-referential mapper where there is more than one relationship on the table requires that all join conditions be explicitly spelled out.  Below is a self-referring table that contains a "parent_node_id" column to reference parent/child relationships, and a "root_node_id" column which points child nodes back to the ultimate root node:

    {python}
    # define a self-referential table with several relations
    trees = Table('treenodes', engine,
        Column('node_id', Integer, primary_key=True),
        Column('parent_node_id', Integer, ForeignKey('treenodes.node_id'), nullable=True),
        Column('root_node_id', Integer, ForeignKey('treenodes.node_id'), nullable=True),
        Column('node_name', String(50), nullable=False),
        )

    # treenode class
    class TreeNode(object):
        pass

    # define the "children" property as well as the "root" property
    mapper(TreeNode, trees, properties={
            'children' : relation(
                            TreeNode, 
                            primaryjoin=trees.c.parent_node_id==trees.c.node_id,
                            cascade="all",
                            backref=backref("parent", remote_side=[trees.c.node_id])
                         ),
            'root' : relation(
                    TreeNode,
                    primaryjoin=trees.c.root_node_id==trees.c.node_id, 
                    remote_side=[trees.c.node_id],
                    uselist=False
                )
            }
        )
        
The "root" property on a TreeNode is a many-to-one relationship.  By default, a self-referential mapper declares relationships as one-to-many, so the extra parameter `remote_side`, pointing to a column or list of columns on the remote side of a relationship, is needed to indicate a "many-to-one" self-referring relationship (note the previous keyword argument `foreignkey` is deprecated).

Both TreeNode examples above are available in functional form in the `examples/adjacencytree` directory of the distribution.

### Statement and Result-Set ORM Queries {@name=resultset}

Take any textual statement, constructed statement or result set and feed it into a Query to produce objects.  Below, we define two class/mapper combinations, issue a SELECT statement, and send the result object to the `instances()` method on `Query`:

    {python}
    class User(object):
        pass

    class Address(object):
        pass

    mapper(User, users_table)
    
    mapper(Address, addresses_table)

    # select users and addresses in one query
    # use_labels is so that the user_id column in both tables are distinguished
    s = select([users_table, addresses_table], users_table.c.user_id==addresses_table.c.user_id, use_labels=True)

    # execute it, and process the results, asking for both User and Address objects
    r = session.query(User, Address).instances(s.execute())
    
    # result rows come back as tuples
    for entry in r:
        user = entry[0]
        address = entry[1]

Alternatively, the `from_statement()` method may be used with either a textual string or SQL construct:

    {python}
    s = select([users_table, addresses_table], users_table.c.user_id==addresses_table.c.user_id, use_labels=True)

    r = session.query(User, Address).from_statement(s).all()

    for entry in r:
        user = entry[0]
        address = entry[1]

#### Combining Eager Loads with Statement/Result Set Queries

When full statement/result loads are used with `Query`, SQLAlchemy does not affect the SQL query itself, and therefore has no way of tacking on its own `LEFT [OUTER] JOIN` conditions that are normally used to eager load relationships.  If the query being constructed is created in such a way that it returns rows not just from a parent table (or tables) but also returns rows from child tables, the result-set mapping can be notified as to which additional properties are contained within the result set.  This is done using the `contains_eager()` query option, which specifies the name of the relationship to be eagerly loaded.

    {python}
    # mapping is the users->addresses mapping
    mapper(User, users_table, properties={
        'addresses':relation(Address, addresses_table)
    })
    
    # define a query on USERS with an outer join to ADDRESSES
    statement = users_table.outerjoin(addresses_table).select(use_labels=True)
    
    # construct a Query object which expects the "addresses" results 
    query = session.query(User).options(contains_eager('addresses'))
    
    # get results normally
    r = query.instances(statement.execute())
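What `contains_eager()` asks the mapper to do with those joined rows can be sketched in plain Python: repeated parent rows are collapsed, and the child columns on each row are gathered into the parent's collection.  This is a conceptual sketch only, assuming rows arrive as simple (user_id, user_name, street) tuples; the `collate` helper is invented for illustration:

```python
# conceptual sketch of eager-load row collation: an outer join returns one
# row per (user, address) pair; the loader collapses duplicate parents and
# collects each row's child columns into the parent's "addresses" list.
def collate(rows):
    users, result = {}, []
    for user_id, user_name, street in rows:
        if user_id not in users:           # first time this parent is seen
            users[user_id] = {'name': user_name, 'addresses': []}
            result.append(users[user_id])
        if street is not None:             # outer join may yield no child row
            users[user_id]['addresses'].append(street)
    return result

rows = [(1, 'jack', 'elm st'), (1, 'jack', 'oak st'), (2, 'ed', None)]
users = collate(rows)
```

Two result rows for "jack" collapse into one parent with two addresses, while "ed", whose outer join matched nothing, gets an empty collection.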

If the "eager" portion of the statement is "aliased", the `alias` keyword argument to `contains_eager()` may be used to indicate it.  This is a string alias name or reference to an actual `Alias` object:

    {python}
    # use an alias of the addresses table
    adalias = addresses_table.alias('adalias')
    
    # define a query on USERS with an outer join to adalias
    statement = users_table.outerjoin(adalias).select(use_labels=True)

    # construct a Query object which expects the "addresses" results 
    query = session.query(User).options(contains_eager('addresses', alias=adalias))

    # get results normally
    {sql}r = query.from_statement(statement).all()
    SELECT users.user_id AS users_user_id, users.user_name AS users_user_name, adalias.address_id AS adalias_address_id, 
    adalias.user_id AS adalias_user_id, adalias.email_address AS adalias_email_address, (...other columns...)
    FROM users LEFT OUTER JOIN email_addresses AS adalias ON users.user_id = adalias.user_id

In the case that the main table itself is also aliased, the `contains_alias()` option can be used:

    {python}
    # define an aliased UNION called 'ulist'
    statement = users_table.select(users_table.c.user_id==7).union(users_table.select(users_table.c.user_id>7)).alias('ulist')

    # add on an eager load of "addresses"
    statement = statement.outerjoin(addresses_table).select(use_labels=True)
    
    # create query, indicating "ulist" is an alias for the main table, "addresses" property should
    # be eager loaded
    query = create_session().query(User).options(contains_alias('ulist'), contains_eager('addresses'))
    
    # results
    r = query.instances(statement.execute())
    

### Mapper Keyword Arguments {@name=mapperoptions}

Keyword arguments which can be used with the `mapper()` function.  For arguments to `relation()`, see [advdatamapping_properties_relationoptions](rel:advdatamapping_properties_relationoptions).

* **allow_column_override** - if True, allows the usage of a `relation()` which has the same name as a column in the mapped table. The table column will no longer be mapped.
* **allow_null_pks=False** - indicates that composite primary keys where one or more (but not all) columns contain NULL is a valid primary key. Primary keys which contain NULL values usually indicate that a result row does not contain an entity and should be skipped.
* **always_refresh=False** - if True, all query operations for this mapped class will overwrite all data within object instances that already exist within the session, erasing any in-memory changes with whatever information was loaded from the database.  Note that this option bypasses the usage patterns for which the Session is designed - negative side effects should be expected, and usage issues involving this flag are not supported.  For a better way to refresh data, use `query.load()`, `session.refresh()`,  `session.expunge()`, or `session.clear()`.
* **batch=True** - when False, indicates that when a mapper is persisting a list of instances, each instance will be fully saved to the database before moving onto the next instance. Normally, inserts and updates are batched together per-table, such as for an inheriting mapping that spans multiple tables. This flag is for rare circumstances where custom `MapperExtension` objects are used to attach logic to `before_insert()`, `before_update()`, etc., and the user-defined logic requires that the full persistence of each instance must be completed before moving onto the next (such as logic which queries the tables for the most recent ID). Note that this flag has a significant impact on the efficiency of a large save operation.
* **column_prefix** - a string which will be prepended to the "key" name of all Columns when creating column-based properties from the given Table. Does not affect explicitly specified column-based properties. Setting `column_prefix='_'` is equivalent to defining all column-based properties as `_columnname=table.c.columnname`. See [advdatamapping_properties_colname](rel:advdatamapping_properties_colname) for information on overriding column-based attribute names.
* **concrete** - if True, indicates this mapper should use concrete table inheritance with its parent mapper.  Requires `inherits` to be set.
* **entity_name** - a name to associate with this mapping, distinguishing it from the primary mapper and from other mappings for the same class.  Allows alternate persistence mappings for a single class. See [advdatamapping_multiple](rel:advdatamapping_multiple).
* **extension** - a MapperExtension instance or list of MapperExtension instances which will be applied to all operations by this Mapper. See [advdatamapping_extending](rel:advdatamapping_extending).
* **inherits** - another Mapper or class for which this Mapper will have an inheritance relationship with. See the examples in [advdatamapping_inheritance](rel:advdatamapping_inheritance).
* **inherit_condition** - for joined table inheritance, a SQL expression (constructed ClauseElement) which will define how the two tables are joined; defaults to a natural join between the two tables.
* **non_primary=False** - if True, construct a Mapper that will define only the selection of instances, not their persistence. It essentially creates a mapper that can be used for querying but does not define how instances of the class are stored. A non_primary mapper is always created after a regular primary mapper has already been created for the class. To use one, send it in place of the class argument when creating a query, such as `session.query(somemapper)`. Note that it is usually invalid to define additional relationships on a non_primary mapper as they will conflict with those of the primary. See [advdatamapping_multiple](rel:advdatamapping_multiple).
* **order_by** - a single Column or list of Columns for which selection operations should use as the default ordering for entities. Defaults to the OID/ROWID of the table if any, or the first primary key column of the table. See [advdatamapping_orderby](rel:advdatamapping_orderby).
* **polymorphic_on** - used with mappers in an inheritance relationship, a Column which will identify the class/mapper combination to be used with a particular row. requires the `polymorphic_identity` value to be set for all mappers in the inheritance hierarchy.
* **polymorphic_identity** - a value which will be stored in the Column denoted by `polymorphic_on`, corresponding to the "class identity" of this mapper.  See [advdatamapping_inheritance](rel:advdatamapping_inheritance).
* **primary_key** - a list of Column objects which define the "primary key" to be used against this mapper's selectable unit. The mapper normally determines these automatically from the given `local_table` of the mapper combined against any inherited tables. When this argument is specified, the primary keys of the mapped table if any are disregarded in place of the columns given. This can be used to provide primary key identity to a table that has no PKs defined at the schema level, or to modify what defines "identity" for a particular table.
* **properties** - a dictionary mapping the string names of object attributes to MapperProperty instances, which define the persistence behavior of that attribute. Note that the columns in the mapped table are automatically converted into ColumnProperty instances based on the "key" property of each Column (although they can be overridden using this dictionary).
* **select_table** - used with polymorphic mappers, this is a `Selectable` which will take the place of the `Mapper`'s main table argument when performing queries.
* **version_id_col** - a Column of integer type which will be used to keep a running "version id" of mapped entities in the database. This is used during save operations to ensure that no other thread or process has updated the instance during the lifetime of the entity; otherwise a ConcurrentModificationError exception is raised.
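The check performed by `version_id_col` can be sketched in plain Python: the UPDATE includes the version the instance was loaded with in its WHERE clause, and a zero row count means some other transaction got there first.  The `versioned_update` helper and its dict-based "table" below are invented for illustration; only the ConcurrentModificationError name comes from the library:

```python
# sketch of the version_id_col check: the UPDATE's WHERE clause includes
# the version the instance was loaded with; if no row matches, another
# transaction has modified the row in the meantime.
class ConcurrentModificationError(Exception):
    pass

def versioned_update(table, pk, expected_version, new_values):
    """table: dict of pk -> row dict containing a 'version_id' key."""
    row = table.get(pk)
    # emulates "UPDATE ... WHERE id = :pk AND version_id = :expected"
    if row is None or row['version_id'] != expected_version:
        raise ConcurrentModificationError("stale row for pk %r" % pk)
    row.update(new_values)
    row['version_id'] += 1       # bump the running version id

table = {7: {'version_id': 1, 'name': 'jack'}}
versioned_update(table, 7, 1, {'name': 'ed'})
```

A second update that still carries the old version (1) would now raise, since the stored version has advanced to 2.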

### Extending Mapper {@name=extending}

Mappers can have their functionality augmented or replaced at many points in their execution via the use of the MapperExtension class.  This class is just a series of "hooks" where various functionality takes place.  An application can make its own MapperExtension objects, overriding only the methods it needs.  Methods that are not overridden return the special value `sqlalchemy.orm.mapper.EXT_PASS`, which indicates the operation should proceed normally.

    {python}
    class MapperExtension(object):
        """base implementation for an object that provides overriding behavior to various
        Mapper functions.  For each method in MapperExtension, a result of EXT_PASS indicates
        the functionality is not overridden."""
        def get_session(self):
            """called to retrieve a contextual Session instance with which to
            register a new object. Note: this is not called if a session is 
            provided with the __init__ params (i.e. _sa_session)"""
            return EXT_PASS
        def select_by(self, query, *args, **kwargs):
            """overrides the select_by method of the Query object"""
            return EXT_PASS
        def select(self, query, *args, **kwargs):
            """overrides the select method of the Query object"""
            return EXT_PASS
        def create_instance(self, mapper, selectcontext, row, class_):
            """called when a new object instance is about to be created from a row.  
            the method can choose to create the instance itself, or it can return 
            None to indicate normal object creation should take place.

            mapper - the mapper doing the operation

            selectcontext - SelectionContext corresponding to the instances() call

            row - the result row from the database

            class_ - the class we are mapping.
            """
            return EXT_PASS
        def append_result(self, mapper, selectcontext, row, instance, identitykey, result, isnew):
            """called when an object instance is being appended to a result list.

            If this method returns EXT_PASS, it is assumed that the mapper should do the appending, else
            if this method returns any other value or None, it is assumed that the append was handled by this method.

            mapper - the mapper doing the operation

            selectcontext - SelectionContext corresponding to the instances() call

            row - the result row from the database

            instance - the object instance to be appended to the result

            identitykey - the identity key of the instance

            result - list to which results are being appended

            isnew - indicates if this is the first time we have seen this object instance in the current result
            set.  if you are selecting from a join, such as an eager load, you might see the same object instance
            many times in the same result set.
            """
            return EXT_PASS
        def populate_instance(self, mapper, selectcontext, row, instance, identitykey, isnew):
            """called right before the mapper, after creating an instance from a row, passes the row
            to its MapperProperty objects which are responsible for populating the object's attributes.
            If this method returns EXT_PASS, it is assumed that the mapper should do the population, else
            if this method returns any other value or None, it is assumed that the population was handled by this method.

            Essentially, this method is used to have a different mapper populate the object:

                def populate_instance(self, mapper, selectcontext, instance, row, identitykey, isnew):
                    othermapper.populate_instance(selectcontext, instance, row, identitykey, isnew, frommapper=mapper)
                    return True
            """
            return EXT_PASS
        def before_insert(self, mapper, connection, instance):
            """called before an object instance is INSERTed into its table.

            this is a good place to set up primary key values and such that aren't handled otherwise."""
            return EXT_PASS
        def before_update(self, mapper, connection, instance):
            """called before an object instance is UPDATED"""
            return EXT_PASS
        def after_update(self, mapper, connection, instance):
            """called after an object instance is UPDATED"""
            return EXT_PASS
        def after_insert(self, mapper, connection, instance):
            """called after an object instance has been INSERTed"""
            return EXT_PASS
        def before_delete(self, mapper, connection, instance):
            """called before an object instance is DELETEed"""
            return EXT_PASS
        def after_delete(self, mapper, connection, instance):
            """called after an object instance is DELETEed"""
            return EXT_PASS        

To use MapperExtension, make your own subclass of it and just send it off to a mapper:

    {python}
    m = mapper(User, users_table, extension=MyExtension())

Multiple extensions will be chained together and processed in order; they are specified as a list:

    {python}
    m = mapper(User, users_table, extension=[ext1, ext2, ext3])
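The chaining behavior can be sketched in plain Python: each extension's hook is tried in order, and a return value of EXT_PASS falls through to the next one.  This is illustrative only; the `dispatch` helper and the two extension classes are invented for the sketch:

```python
# sketch of chained extension dispatch: hooks are invoked in order until
# one returns something other than EXT_PASS.
EXT_PASS = object()

class Ext1(object):
    def create_instance(self, row):
        return EXT_PASS            # declines; the next extension is consulted

class Ext2(object):
    def create_instance(self, row):
        return {'made_by': 'ext2', 'row': row}

def dispatch(extensions, method, *args):
    for ext in extensions:
        ret = getattr(ext, method)(*args)
        if ret is not EXT_PASS:
            return ret             # first non-EXT_PASS result wins
    return EXT_PASS                # nobody handled it; mapper default applies

instance = dispatch([Ext1(), Ext2()], 'create_instance', ('a', 'row'))
```

Since `Ext1` passes, `Ext2` gets to create the instance; if every extension passed, the mapper would fall back to its default behavior.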
    
