==============================
What's New in SQLAlchemy 0.8?
==============================

.. admonition:: About this Document

    This document describes changes between SQLAlchemy version 0.7,
    undergoing maintenance releases as of October, 2012,
    and SQLAlchemy version 0.8, which is expected for release
    in early 2013.

    Document date: October 25, 2012
    Updated: March 9, 2013

This guide introduces what's new in SQLAlchemy version 0.8,
and also documents changes which affect users migrating
their applications from the 0.7 series of SQLAlchemy to 0.8.

SQLAlchemy releases are closing in on 1.0, and each new
version since 0.5 has featured fewer major usage changes.  Most
applications that are settled into modern 0.7 patterns
should be movable to 0.8 with no changes.  Applications that
use 0.6 and even 0.5 patterns should be directly migratable
to 0.8 as well, though larger applications may want to test
with each interim version.
Targeting Python 2.5 and Up Now
-------------------------------

SQLAlchemy 0.8 will target Python 2.5 and forward;
compatibility for Python 2.4 is being dropped.

The internals will be able to make use of Python ternaries
(that is, ``x if y else z``), which is an improvement
over the usage of ``y and x or z``, which naturally has
been the source of some bugs, as well as context managers
(that is, ``with:``) and perhaps in some cases
``try:/except:/else:`` blocks, all of which will help with code
readability.

SQLAlchemy will eventually drop 2.5 support as well - when
2.6 is reached as the baseline, SQLAlchemy will move to use
2.6/3.3 in-place compatibility, removing the usage of the
``2to3`` tool and maintaining a source base that works with
Python 2 and 3 at the same time.
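The ``y and x or z`` pitfall mentioned above is easy to demonstrate in plain Python; this illustrative snippet is not from SQLAlchemy itself:

```python
# "flag and a or b" silently returns b whenever a is falsy,
# which is exactly the class of bug the ternary eliminates.
def pick_legacy(flag, a, b):
    return flag and a or b          # pre-2.5 idiom, buggy for falsy a

def pick_ternary(flag, a, b):
    return a if flag else b         # Python 2.5+ conditional expression

print(pick_legacy(True, 0, 99))     # 99 - wrong, because 0 is falsy
print(pick_ternary(True, 0, 99))    # 0 - correct
```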
.. _feature_relationship_08:

Rewritten :func:`.relationship` mechanics
-----------------------------------------

0.8 features a much improved and capable system regarding
how :func:`.relationship` determines how to join between two
entities.  The new system includes these features:

* The ``primaryjoin`` argument is **no longer needed** when
  constructing a :func:`.relationship` against a class that
  has multiple foreign key paths to the target.  Only the
  ``foreign_keys`` argument is needed to specify those
  columns which should be included::

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)
        child_id_one = Column(Integer, ForeignKey('child.id'))
        child_id_two = Column(Integer, ForeignKey('child.id'))

        child_one = relationship("Child", foreign_keys=child_id_one)
        child_two = relationship("Child", foreign_keys=child_id_two)

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)
* Relationships against self-referential, composite foreign
  keys where **a column points to itself** are now
  supported.  The canonical case is as follows::

    class Folder(Base):
        __tablename__ = 'folder'
        __table_args__ = (
            ForeignKeyConstraint(
                ['account_id', 'parent_id'],
                ['folder.account_id', 'folder.folder_id']),
        )

        account_id = Column(Integer, primary_key=True)
        folder_id = Column(Integer, primary_key=True)
        parent_id = Column(Integer)
        name = Column(String)

        parent_folder = relationship("Folder",
                            backref="child_folders",
                            remote_side=[account_id, folder_id]
                        )
  Above, the ``Folder`` refers to its parent ``Folder``
  joining from ``account_id`` to itself, and ``parent_id``
  to ``folder_id``.  When SQLAlchemy constructs an auto-join,
  it can no longer assume all columns on the "remote"
  side are aliased, and all columns on the "local" side are
  not - the ``account_id`` column is **on both sides**.  So
  the internal relationship mechanics were totally rewritten
  to support an entirely different system whereby two copies
  of ``account_id`` are generated, each containing different
  *annotations* to determine their role within the
  statement.  Note the join condition within a basic eager
  load::

    SELECT
        folder.account_id AS folder_account_id,
        folder.folder_id AS folder_folder_id,
        folder.parent_id AS folder_parent_id,
        folder.name AS folder_name,
        folder_1.account_id AS folder_1_account_id,
        folder_1.folder_id AS folder_1_folder_id,
        folder_1.parent_id AS folder_1_parent_id,
        folder_1.name AS folder_1_name
    FROM folder
        LEFT OUTER JOIN folder AS folder_1
        ON folder_1.account_id = folder.account_id
            AND folder.folder_id = folder_1.parent_id
    WHERE folder.folder_id = ? AND folder.account_id = ?
* Previously difficult custom join conditions, like those involving
  functions and/or CASTing of types, will now function as
  expected in most cases::

    class HostEntry(Base):
        __tablename__ = 'host_entry'

        id = Column(Integer, primary_key=True)
        ip_address = Column(INET)
        content = Column(String(50))

        # relationship() using explicit foreign_keys, remote_side
        parent_host = relationship("HostEntry",
                            primaryjoin=ip_address == cast(content, INET),
                            foreign_keys=content,
                            remote_side=ip_address
                        )
The new :func:`.relationship` mechanics make use of a
SQLAlchemy concept known as :term:`annotations`.  These annotations
are also available to application code explicitly via
the :func:`.foreign` and :func:`.remote` functions, either
as a means to improve readability for advanced configurations
or to directly inject an exact configuration, bypassing
the usual join-inspection heuristics::

    from sqlalchemy.orm import foreign, remote

    class HostEntry(Base):
        __tablename__ = 'host_entry'

        id = Column(Integer, primary_key=True)
        ip_address = Column(INET)
        content = Column(String(50))

        # relationship() using explicit foreign() and remote() annotations
        # in lieu of separate arguments
        parent_host = relationship("HostEntry",
                            primaryjoin=remote(ip_address) == \
                                    cast(foreign(content), INET),
                        )

.. seealso::

    :ref:`relationship_configure_joins` - a newly revised section on :func:`.relationship`
    detailing the latest techniques for customizing related attributes and collection
    access.

:ticket:`1401` :ticket:`610`
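The multiple-foreign-key case from the first bullet can be verified end to end without a database; the following self-contained sketch (reusing the ``Parent``/``Child`` names from above) confirms that mapper configuration succeeds with no ``primaryjoin``:

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import configure_mappers, relationship

Base = declarative_base()

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    child_id_one = Column(Integer, ForeignKey('child.id'))
    child_id_two = Column(Integer, ForeignKey('child.id'))

    # foreign_keys alone disambiguates the two paths to Child
    child_one = relationship("Child", foreign_keys=child_id_one)
    child_two = relationship("Child", foreign_keys=child_id_two)

# raises an error if the join conditions were ambiguous
configure_mappers()

print(Parent.child_one.property.primaryjoin)
```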
.. _feature_orminspection_08:

New Class/Object Inspection System
----------------------------------

Lots of SQLAlchemy users are writing systems that require
the ability to inspect the attributes of a mapped class,
including being able to get at the primary key columns,
object relationships, plain attributes, and so forth,
typically for the purpose of building data-marshalling
systems, like JSON/XML conversion schemes and of course form
libraries.

Originally, the :class:`.Table` and :class:`.Column` model were the
inspection points, and these have a well-documented
system.  While SQLAlchemy ORM models are also fully
introspectable, this has never been a fully stable and
supported feature, and users tended to not have a clear idea
how to get at this information.

0.8 now provides a consistent, stable and fully
documented API for this purpose, including an inspection
system which works on mapped classes, instances, attributes,
and other Core and ORM constructs.  The entrypoint to this
system is the core-level :func:`.inspect` function.
In most cases, the object being inspected
is one already part of SQLAlchemy's system,
such as :class:`.Mapper`, :class:`.InstanceState`,
:class:`.Inspector`.  In some cases, new objects have been
added with the job of providing the inspection API in
certain contexts, such as :class:`.AliasedInsp` and
:class:`.AttributeState`.
A walkthrough of some key capabilities follows::

    >>> class User(Base):
    ...     __tablename__ = 'user'
    ...     id = Column(Integer, primary_key=True)
    ...     name = Column(String)
    ...     name_syn = synonym(name)
    ...     addresses = relationship("Address")
    ...

    >>> # universal entry point is inspect()
    >>> b = inspect(User)

    >>> # b in this case is the Mapper
    >>> b
    <Mapper at 0x101521950; User>

    >>> # Column namespace
    >>> b.columns.id
    Column('id', Integer(), table=<user>, primary_key=True, nullable=False)

    >>> # mapper's perspective of the primary key
    >>> b.primary_key
    (Column('id', Integer(), table=<user>, primary_key=True, nullable=False),)

    >>> # MapperProperties available from .attrs
    >>> b.attrs.keys()
    ['name_syn', 'addresses', 'id', 'name']

    >>> # .column_attrs, .relationships, etc. filter this collection
    >>> b.column_attrs.keys()
    ['id', 'name']

    >>> list(b.relationships)
    [<sqlalchemy.orm.properties.RelationshipProperty object at 0x1015212d0>]

    >>> # they are also namespaces
    >>> b.column_attrs.id
    <sqlalchemy.orm.properties.ColumnProperty object at 0x101525090>

    >>> b.relationships.addresses
    <sqlalchemy.orm.properties.RelationshipProperty object at 0x1015212d0>

    >>> # point inspect() at a mapped, class level attribute,
    >>> # returns the attribute itself
    >>> b = inspect(User.addresses)
    >>> b
    <sqlalchemy.orm.attributes.InstrumentedAttribute object at 0x101521fd0>

    >>> # From here we can get the mapper:
    >>> b.mapper
    <Mapper at 0x101525810; Address>

    >>> # the parent inspector, in this case a mapper
    >>> b.parent
    <Mapper at 0x101521950; User>

    >>> # an expression
    >>> print b.expression
    "user".id = address.user_id

    >>> # inspect works on instances
    >>> u1 = User(id=3, name='x')
    >>> b = inspect(u1)

    >>> # it returns the InstanceState
    >>> b
    <sqlalchemy.orm.state.InstanceState object at 0x10152bed0>

    >>> # similar attrs accessor refers to the per-attribute state
    >>> b.attrs.keys()
    ['id', 'name_syn', 'addresses', 'name']

    >>> # attribute interface - from attrs, you get a state object
    >>> b.attrs.id
    <sqlalchemy.orm.state.AttributeState object at 0x10152bf90>

    >>> # this object can give you, current value...
    >>> b.attrs.id.value
    3

    >>> # ... current history
    >>> b.attrs.id.history
    History(added=[3], unchanged=(), deleted=())

    >>> # InstanceState can also provide session state information
    >>> # let's assume the object is persistent
    >>> s = Session()
    >>> s.add(u1)
    >>> s.commit()

    >>> # now we can get primary key identity, always
    >>> # works in query.get()
    >>> b.identity
    (3,)

    >>> # the mapper level key
    >>> b.identity_key
    (<class '__main__.User'>, (3,))

    >>> # state within the session
    >>> b.persistent, b.transient, b.deleted, b.detached
    (True, False, False, False)

    >>> # owning session
    >>> b.session
    <sqlalchemy.orm.session.Session object at 0x101701150>

.. seealso::

    :ref:`core_inspection_toplevel`
New with_polymorphic() feature, can be used anywhere
----------------------------------------------------

The :meth:`.Query.with_polymorphic` method allows the user to
specify which tables should be present when querying against
a joined-table entity.   Unfortunately the method is awkward
and only applies to the first entity in the list, and
otherwise has awkward behaviors both in usage as well as
within the internals.  A new enhancement to the
:func:`.aliased` construct has been added called
:func:`.with_polymorphic` which allows any entity to be
"aliased" into a "polymorphic" version of itself, freely
usable anywhere::

    from sqlalchemy.orm import with_polymorphic
    palias = with_polymorphic(Person, [Engineer, Manager])
    session.query(Company).\
                join(palias, Company.employees).\
                filter(or_(Engineer.language == 'java', Manager.hair == 'pointy'))

.. seealso::

    :ref:`with_polymorphic` - newly updated documentation for polymorphic
    loading control.
of_type() works with alias(), with_polymorphic(), any(), has(), joinedload(), subqueryload(), contains_eager()
--------------------------------------------------------------------------------------------------------------

The :meth:`.PropComparator.of_type` method is used to specify
a specific subtype to use when constructing SQL expressions along
a :func:`.relationship` that has a :term:`polymorphic` mapping as its target.
This method can now be used to target *any number* of target subtypes,
by combining it with the new :func:`.with_polymorphic` function::

    # use eager loading in conjunction with with_polymorphic targets
    Job_P = with_polymorphic(Job, [SubJob, ExtraJob], aliased=True)
    q = s.query(DataContainer).\
                join(DataContainer.jobs.of_type(Job_P)).\
                    options(contains_eager(DataContainer.jobs.of_type(Job_P)))

The method now works equally well in most places a regular relationship
attribute is accepted, including with loader functions like
:func:`.joinedload`, :func:`.subqueryload`, :func:`.contains_eager`,
and comparison methods like :meth:`.PropComparator.any`
and :meth:`.PropComparator.has`::

    # use eager loading in conjunction with with_polymorphic targets
    Job_P = with_polymorphic(Job, [SubJob, ExtraJob], aliased=True)
    q = s.query(DataContainer).\
                join(DataContainer.jobs.of_type(Job_P)).\
                    options(contains_eager(DataContainer.jobs.of_type(Job_P)))

    # pass subclasses to eager loads (implicitly applies with_polymorphic)
    q = s.query(ParentThing).\
            options(
                joinedload_all(
                    ParentThing.container,
                    DataContainer.jobs.of_type(SubJob)
                ))

    # control self-referential aliasing with any()/has()
    Job_A = aliased(Job)
    q = s.query(Job).join(DataContainer.jobs).\
            filter(
                DataContainer.jobs.of_type(Job_A).\
                    any(and_(Job_A.id < Job.id, Job_A.type == 'fred'))
            )

:ticket:`2438` :ticket:`1106`
Events Can Be Applied to Unmapped Superclasses
----------------------------------------------

Mapper and instance events can now be associated with an unmapped
superclass, where those events will be propagated to subclasses
as those subclasses are mapped.   The ``propagate=True`` flag
should be used.  This feature allows events to be associated
with a declarative base class::

    from sqlalchemy import event
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    @event.listens_for(Base, "load", propagate=True)
    def on_load(target, context):
        print "New instance loaded:", target

    # on_load() will be applied to SomeClass
    class SomeClass(Base):
        __tablename__ = 'sometable'
        id = Column(Integer, primary_key=True)
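Since the ``load`` event requires a database round trip, a quick way to see propagation in action without one is the ``init`` instance event, which fires on plain construction.  This is an illustrative sketch, not from the original document:

```python
from sqlalchemy import Column, Integer, event
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
seen = []

# listen on the unmapped Base; propagate=True extends the
# listener to subclasses as they are mapped
@event.listens_for(Base, "init", propagate=True)
def on_init(target, args, kwargs):
    seen.append(type(target).__name__)

class SomeClass(Base):
    __tablename__ = 'sometable'
    id = Column(Integer, primary_key=True)

SomeClass()
print(seen)   # ['SomeClass']
```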
Declarative Distinguishes Between Modules/Packages
--------------------------------------------------

A key feature of Declarative is the ability to refer
to other mapped classes using their string name.  The
registry of class names is now sensitive to the owning
module and package of a given class.  The classes
can be referred to via dotted name in expressions::

    class Snack(Base):
        # ...

        peanuts = relationship("nuts.Peanut",
                primaryjoin="nuts.Peanut.snack_id == Snack.id")

The resolution allows that any full or partial
disambiguating package name can be used.   If the
path to a particular class is still ambiguous,
an error is raised.
New DeferredReflection Feature in Declarative
---------------------------------------------

The "deferred reflection" example has been moved to a
supported feature within Declarative.  This feature allows
the construction of declarative mapped classes with only
placeholder ``Table`` metadata, until a ``prepare()`` step
is called, given an ``Engine`` with which to reflect fully
all tables and establish actual mappings.   The system
supports overriding of columns, single and joined
inheritance, as well as distinct bases-per-engine.  A full
declarative configuration can now be created against an
existing table that is assembled upon engine creation time
in one step::

    class ReflectedOne(DeferredReflection, Base):
        __abstract__ = True

    class ReflectedTwo(DeferredReflection, Base):
        __abstract__ = True

    class MyClass(ReflectedOne):
        __tablename__ = 'mytable'

    class MyOtherClass(ReflectedOne):
        __tablename__ = 'myothertable'

    class YetAnotherClass(ReflectedTwo):
        __tablename__ = 'yetanothertable'

    ReflectedOne.prepare(engine_one)
    ReflectedTwo.prepare(engine_two)

.. seealso::

    :class:`.DeferredReflection`
ORM Classes Now Accepted by Core Constructs
-------------------------------------------

While the SQL expressions used with :meth:`.Query.filter`,
such as ``User.id == 5``, have always been compatible for
use with core constructs such as :func:`.select`, the mapped
class itself would not be recognized when passed to :func:`.select`,
:meth:`.Select.select_from`, or :meth:`.Select.correlate`.
A new SQL registration system allows a mapped class to be
accepted as a FROM clause within the core::

    from sqlalchemy import select

    stmt = select([User]).where(User.id == 5)

Above, the mapped ``User`` class will expand into
the :class:`.Table` to which ``User`` is mapped.
Query.update() supports UPDATE..FROM
------------------------------------

The new UPDATE..FROM mechanics work in ``query.update()``.
Below, we emit an UPDATE against ``SomeEntity``, adding
a FROM clause (or equivalent, depending on backend)
against ``SomeOtherEntity``::

    query(SomeEntity).\
        filter(SomeEntity.id == SomeOtherEntity.id).\
        filter(SomeOtherEntity.foo == 'bar').\
        update({"data": "x"})

In particular, updates to joined-inheritance
entities are supported, provided the target of the UPDATE is local to the
table being filtered on, or if the parent and child tables
are mixed, they are joined explicitly in the query.  Below,
given ``Engineer`` as a joined subclass of ``Person``::

    query(Engineer).\
        filter(Person.id == Engineer.id).\
        filter(Person.name == 'dilbert').\
        update({"engineer_data": "java"})

would produce::

    UPDATE engineer SET engineer_data='java' FROM person
    WHERE person.id=engineer.id AND person.name='dilbert'
rollback() will only roll back "dirty" objects from a begin_nested()
--------------------------------------------------------------------

A behavioral change that should improve efficiency for those
users using SAVEPOINT via ``Session.begin_nested()`` - upon
``rollback()``, only those objects that were made dirty
since the last flush will be expired; the rest of the
``Session`` remains intact.  This is because a ROLLBACK to a
SAVEPOINT does not terminate the containing transaction's
isolation, so no expiry is needed except for those changes
that were not flushed in the current transaction.
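The effect can be sketched with an in-memory SQLite database; the names below are invented for illustration, and the two event listeners are the documented workaround that pysqlite needs for SAVEPOINT to function at all:

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")

# pysqlite workaround: disable its implicit transaction handling
# and emit BEGIN ourselves, so SAVEPOINT works correctly
@event.listens_for(engine, "connect")
def do_connect(dbapi_connection, connection_record):
    dbapi_connection.isolation_level = None

@event.listens_for(engine, "begin")
def do_begin(conn):
    conn.exec_driver_sql("BEGIN")

Base.metadata.create_all(engine)

session = Session(engine)
t1 = Thing(name="a")
session.add(t1)
session.commit()

nested = session.begin_nested()    # SAVEPOINT
t1.name = "b"
session.flush()                    # change flushed inside the savepoint
nested.rollback()                  # ROLLBACK TO SAVEPOINT

# only t1, which was dirty within the savepoint, is expired;
# accessing it reloads the pre-savepoint value while the
# surrounding transaction remains usable
print(t1.name)
session.commit()
```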
Caching Example now uses dogpile.cache
--------------------------------------

The caching example now uses `dogpile.cache <http://dogpilecache.readthedocs.org/>`_.
Dogpile.cache is a rewrite of the caching portion
of Beaker, featuring vastly simpler and faster operation,
as well as support for distributed locking.

Note that the SQLAlchemy APIs used by the Dogpile example as well
as the previous Beaker example have changed slightly, in particular
this change is needed as illustrated in the Beaker example::

    --- examples/beaker_caching/caching_query.py
    +++ examples/beaker_caching/caching_query.py

         if query._current_path:
    -        mapper, key = query._current_path[-2:]
    +        mapper, prop = query._current_path[-2:]
    +        key = prop.key

         for cls in mapper.class_.__mro__:
             if (cls, key) in self._relationship_options:

.. seealso::

    :mod:`dogpile_caching`
Fully extensible, type-level operator support in Core
-----------------------------------------------------

The Core has to date never had any system of adding support
for new SQL operators to Column and other expression
constructs, other than the :meth:`.ColumnOperators.op` method
which is "just enough" to make things work.  There has also
never been any system in place for Core which allows the
behavior of existing operators to be overridden.   Up until
now, the only way operators could be flexibly redefined was
in the ORM layer, using :func:`.column_property` given a
``comparator_factory`` argument.   Third party libraries
like GeoAlchemy therefore were forced to be ORM-centric and
rely upon an array of hacks to apply new operations as well
as to get them to propagate correctly.

The new operator system in Core adds the one hook that's
been missing all along, which is to associate new and
overridden operators with *types*.   Since after all, it's
not really a column, CAST operator, or SQL function that
really drives what kinds of operations are present, it's the
*type* of the expression.   The implementation details are
minimal - only a few extra methods are added to the core
:class:`.ColumnElement` type so that it consults its
:class:`.TypeEngine` object for an optional set of operators.
New or revised operations can be associated with any type,
either via subclassing of an existing type, by using
:class:`.TypeDecorator`, or "globally across-the-board" by
attaching a new :class:`.TypeEngine.Comparator` object to an existing type
class.
For example, to add logarithm support to :class:`.Numeric` types::

    from sqlalchemy.types import Numeric
    from sqlalchemy.sql import func

    class CustomNumeric(Numeric):
        class comparator_factory(Numeric.Comparator):
            def log(self, other):
                return func.log(self.expr, other)

The new type is usable like any other type::

    data = Table('data', metadata,
              Column('id', Integer, primary_key=True),
              Column('x', CustomNumeric(10, 5)),
              Column('y', CustomNumeric(10, 5))
         )

    stmt = select([data.c.x.log(data.c.y)]).where(data.c.x.log(2) < value)
    print conn.execute(stmt).fetchall()
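The prose above also mentions :class:`.TypeDecorator` as a path for this.  A hypothetical sketch (``LoggableNumeric`` is invented here) shows the same ``log()`` operator attached to a wrapping type, checked by string compilation alone:

```python
from sqlalchemy import Column, MetaData, Table, func
from sqlalchemy.types import Numeric, TypeDecorator

class LoggableNumeric(TypeDecorator):
    """A Numeric wrapper adding a log() operator via comparator_factory."""
    impl = Numeric

    class comparator_factory(TypeDecorator.Comparator):
        def log(self, other):
            return func.log(self.expr, other)

data = Table('data', MetaData(),
             Column('x', LoggableNumeric(10, 5)))

# compiles without any database connection
expr = data.c.x.log(2)
print(expr)   # e.g. log(data.x, :log_1)
```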
New features which have come from this immediately include
support for Postgresql's HSTORE type, as well as new
operations associated with Postgresql's ARRAY
type.   It also paves the way for existing types to acquire
lots more operators that are specific to those types, such
as more string, integer and date operators.

.. seealso::

    :ref:`types_operators`
Type Expressions
----------------

SQL expressions can now be associated with types.  Historically,
:class:`.TypeEngine` has always allowed Python-side functions which
receive both bound parameters as well as result row values, passing
them through a Python side conversion function on the way to/back from
the database.   The new feature allows similar
functionality, except on the database side::

    from sqlalchemy.types import String
    from sqlalchemy import func, Table, Column, Integer, MetaData

    class LowerString(String):
        def bind_expression(self, bindvalue):
            return func.lower(bindvalue)

        def column_expression(self, col):
            return func.lower(col)

    metadata = MetaData()
    test_table = Table('test_table', metadata,
        Column('id', Integer, primary_key=True),
        Column('data', LowerString)
    )

Above, the ``LowerString`` type defines a SQL expression that will be emitted
whenever the ``test_table.c.data`` column is rendered in the columns
clause of a SELECT statement::

    >>> print select([test_table]).where(test_table.c.data == 'HI')
    SELECT lower(test_table.data) AS data
    FROM test_table
    WHERE test_table.data = lower(:data_1)

This feature is also used heavily by the new release of GeoAlchemy,
to embed PostGIS expressions inline in SQL based on type rules.

.. seealso::

    :ref:`types_sql_value_processing`
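Because ``bind_expression`` takes effect at statement compilation, it can be observed without a database.  A self-contained sketch mirroring the ``LowerString`` example above:

```python
from sqlalchemy import Column, Integer, MetaData, Table, func
from sqlalchemy.types import String

class LowerString(String):
    def bind_expression(self, bindvalue):
        return func.lower(bindvalue)

    def column_expression(self, col):
        return func.lower(col)

test_table = Table('test_table', MetaData(),
                   Column('id', Integer, primary_key=True),
                   Column('data', LowerString))

# the bound parameter is wrapped as soon as the expression compiles
expr = test_table.c.data == 'HI'
print(expr)   # e.g. test_table.data = lower(:data_1)
```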
Core Inspection System
----------------------

The :func:`.inspect` function introduced in :ref:`feature_orminspection_08`
also applies to the core.  Applied to an :class:`.Engine` it produces
an :class:`.Inspector` object::

    from sqlalchemy import inspect
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    insp = inspect(engine)
    print insp.get_table_names()

It can also be applied to any :class:`.ClauseElement`, which returns
the :class:`.ClauseElement` itself, such as :class:`.Table`, :class:`.Column`,
:class:`.Select`, etc.   This allows it to work fluently between Core
and ORM constructs.
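A runnable sketch of both cases, substituting an in-memory SQLite database for the Postgresql URL above:

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, inspect

# applied to an Engine, inspect() produces an Inspector
engine = create_engine("sqlite://")
insp = inspect(engine)
print(insp.get_table_names())   # [] - nothing has been created yet

# applied to a ClauseElement such as a Table, the element itself comes back
t = Table('t', MetaData(), Column('id', Integer, primary_key=True))
assert inspect(t) is t
```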
New Method :meth:`.Select.correlate_except`
-------------------------------------------

:func:`.select` now has a method :meth:`.Select.correlate_except`
which specifies "correlate on all FROM clauses except those
specified".  It can be used for mapping scenarios where
a related subquery should correlate normally, except
against a particular target selectable::

    class SnortEvent(Base):
        __tablename__ = "event"

        id = Column(Integer, primary_key=True)
        signature = Column(Integer, ForeignKey("signature.id"))

        signatures = relationship("Signature", lazy=False)

    class Signature(Base):
        __tablename__ = "signature"

        id = Column(Integer, primary_key=True)

        sig_count = column_property(
                        select([func.count('*')]).\
                            where(SnortEvent.signature == id).
                            correlate_except(SnortEvent)
                    )

.. seealso::

    :meth:`.Select.correlate_except`
Postgresql HSTORE type
----------------------

Support for Postgresql's ``HSTORE`` type is now available as
:class:`.postgresql.HSTORE`.   This type makes great usage
of the new operator system to provide a full range of operators
for HSTORE types, including index access, concatenation,
and containment methods such as
:meth:`~.HSTORE.comparator_factory.has_key`,
:meth:`~.HSTORE.comparator_factory.has_any`, and
:meth:`~.HSTORE.comparator_factory.matrix`::

    from sqlalchemy.dialects.postgresql import HSTORE

    data = Table('data_table', metadata,
            Column('id', Integer, primary_key=True),
            Column('hstore_data', HSTORE)
        )

    engine.execute(
        select([data.c.hstore_data['some_key']])
    ).scalar()

    engine.execute(
        select([data.c.hstore_data.matrix()])
    ).scalar()

.. seealso::

    :class:`.postgresql.HSTORE`

    :class:`.postgresql.hstore`
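Since the new operators render at compile time, they can be inspected without a Postgresql connection.  A compile-only sketch using the PG dialect:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import HSTORE

data = Table('data_table', MetaData(),
             Column('id', Integer, primary_key=True),
             Column('hstore_data', HSTORE))

# index access compiles to Postgresql's -> operator
expr = data.c.hstore_data['some_key']
print(expr.compile(dialect=postgresql.dialect()))
# renders: data_table.hstore_data -> %(hstore_data_1)s
```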
Enhanced Postgresql ARRAY type
------------------------------

The :class:`.postgresql.ARRAY` type will accept an optional
"dimension" argument, pinning it to a fixed number of
dimensions and greatly improving efficiency when retrieving
results::

    # old way, still works since PG supports N-dimensions per row:
    Column("my_array", postgresql.ARRAY(Integer))

    # new way, will render ARRAY with correct number of [] in DDL,
    # will process binds and results more efficiently as we don't need
    # to guess how many levels deep to go
    Column("my_array", postgresql.ARRAY(Integer, dimensions=2))

The type also introduces new operators, using the new type-specific
operator framework.  New operations include indexed access::

    result = conn.execute(
        select([mytable.c.arraycol[2]])
    )

slice access in SELECT::

    result = conn.execute(
        select([mytable.c.arraycol[2:4]])
    )

slice updates in UPDATE::

    conn.execute(
        mytable.update().values({mytable.c.arraycol[2:3]: [7, 8]})
    )

freestanding array literals::

    >>> from sqlalchemy.dialects import postgresql
    >>> conn.scalar(
    ...     select([
    ...         postgresql.array([1, 2]) + postgresql.array([3, 4, 5])
    ...     ])
    ... )
    [1, 2, 3, 4, 5]

and array concatenation, where below, the right side ``[4, 5, 6]`` is coerced into an array literal::

    select([mytable.c.arraycol + [4, 5, 6]])

.. seealso::

    :class:`.postgresql.ARRAY`

    :class:`.postgresql.array`
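The array literal and its concatenation operator can likewise be checked by compilation alone, with no database in play:

```python
from sqlalchemy.dialects import postgresql

# "+" between two array literals compiles to Postgresql's || operator
expr = postgresql.array([1, 2]) + postgresql.array([3, 4, 5])
compiled = str(expr.compile(dialect=postgresql.dialect()))
print(compiled)   # ARRAY[...] || ARRAY[...]
```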
New, configurable DATE, TIME types for SQLite
---------------------------------------------

SQLite has no built-in DATE, TIME, or DATETIME types, and
instead provides some support for storage of date and time
values either as strings or integers.   The date and time
types for SQLite are enhanced in 0.8 to be much more
configurable as to the specific format, including that the
"microseconds" portion is optional, as well as pretty much
everything else::

    Column('sometimestamp', sqlite.DATETIME(truncate_microseconds=True))
    Column('sometimestamp', sqlite.DATETIME(
                        storage_format=(
                            "%(year)04d%(month)02d%(day)02d"
                            "%(hour)02d%(minute)02d%(second)02d%(microsecond)06d"
                        ),
                        regexp="(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(\d{6})"
                        )
    )
    Column('somedate', sqlite.DATE(
                        storage_format="%(month)02d/%(day)02d/%(year)04d",
                        regexp="(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)",
                    )
    )

Huge thanks to Nate Dub for sprinting on this at Pycon 2012.

.. seealso::

    :class:`.sqlite.DATETIME`

    :class:`.sqlite.DATE`

    :class:`.sqlite.TIME`
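The storage format can be exercised without a database by invoking the type's bind processor directly; the format shown is the ``somedate`` example above:

```python
import datetime

from sqlalchemy.dialects import sqlite

date_type = sqlite.DATE(
    storage_format="%(month)02d/%(day)02d/%(year)04d",
    regexp=r"(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)",
)

# bind_processor returns the function that produces the string
# actually stored in SQLite
proc = date_type.bind_processor(sqlite.dialect())
print(proc(datetime.date(2012, 10, 25)))   # 10/25/2012
```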
"COLLATE" supported across all dialects; in particular MySQL, Postgresql, SQLite
--------------------------------------------------------------------------------

The "collate" keyword, long accepted by the MySQL dialect, is now established
on all :class:`.String` types and will render on any backend, including
when features such as :meth:`.MetaData.create_all` and :func:`.cast` are used::

    >>> stmt = select([cast(sometable.c.somechar, String(20, collation='utf8'))])
    >>> print stmt
    SELECT CAST(sometable.somechar AS VARCHAR(20) COLLATE "utf8") AS anon_1
    FROM sometable
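Because the rendering is backend-agnostic, the same thing can be checked on the default dialect with a freestanding column:

```python
from sqlalchemy import String, cast, column

# the collation renders as part of the CAST's type
expr = cast(column('somechar'), String(20, collation='utf8'))
print(expr)   # e.g. CAST(somechar AS VARCHAR(20) COLLATE "utf8")
```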
"Prefixes" now supported for :func:`.update`, :func:`.delete`
-------------------------------------------------------------

Geared towards MySQL, a "prefix" can be rendered within any of
these constructs.   E.g.::

    stmt = table.delete().prefix_with("LOW_PRIORITY", dialect="mysql")

    stmt = table.update().prefix_with("LOW_PRIORITY", dialect="mysql")

The method is new in addition to those which already existed
on :func:`.insert`, :func:`.select` and :class:`.Query`.

.. seealso::

    :meth:`.Update.prefix_with`

    :meth:`.Delete.prefix_with`

    :meth:`.Insert.prefix_with`

    :meth:`.Select.prefix_with`

    :meth:`.Query.prefix_with`
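A compile-only sketch against the MySQL dialect shows the prefix taking effect (and being restricted to that dialect):

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import mysql

t = Table('t', MetaData(), Column('id', Integer))

# the prefix only renders when the statement compiles for MySQL
stmt = t.delete().prefix_with("LOW_PRIORITY", dialect="mysql")
print(stmt.compile(dialect=mysql.dialect()))   # DELETE LOW_PRIORITY FROM t
```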
.. _legacy_is_orphan_addition:

The consideration of a "pending" object as an "orphan" has been made more aggressive
------------------------------------------------------------------------------------

This is a late add to the 0.8 series, however it is hoped that the new behavior
is generally more consistent and intuitive in a wider variety of
situations.   The ORM has since at least version 0.4 included behavior
such that an object that's "pending", meaning that it's
associated with a :class:`.Session` but hasn't been inserted into the database
yet, is automatically expunged from the :class:`.Session` when it becomes an "orphan",
which means it has been de-associated with a parent object that refers to it
with ``delete-orphan`` cascade on the configured :func:`.relationship`.   This
behavior is intended to approximately mirror the behavior of a persistent
(that is, already inserted) object, where the ORM will emit a DELETE for such
objects that become orphans based on the interception of detachment events.

The behavioral change comes into play for objects that
are referred to by multiple kinds of parents that each specify ``delete-orphan``; the
typical example is an :ref:`association object <association_pattern>` that bridges two other kinds of objects
in a many-to-many pattern.   Previously, the behavior was such that the
pending object would be expunged only when de-associated with *all* of its parents.
With the behavioral change, the pending object
is expunged as soon as it is de-associated from *any* of the parents that it was
previously associated with.  This behavior is intended to more closely
match that of persistent objects, which are deleted as soon
as they are de-associated from any parent.

The rationale for the older behavior dates back
at least to version 0.4, and was basically a defensive decision to try to alleviate
confusion when an object was still being constructed for INSERT.   But the reality
is that the object is re-associated with the :class:`.Session` as soon as it is
attached to any new parent in any case.

It's still possible to flush an object
that is not associated with all of its required parents, if the object was either
not associated with those parents in the first place, or if it was expunged, but then
re-associated with a :class:`.Session` via a subsequent attachment event but still
not fully associated.   In this situation, it is expected that the database
would emit an integrity error, as there are likely NOT NULL foreign key columns
that are unpopulated.   The ORM makes the decision to let these INSERT attempts
occur, based on the judgment that an object that is only partially associated with
its required parents but has been actively associated with some of them,
is more often than not a user error, rather than an intentional
omission which should be silently skipped - silently skipping the INSERT here would
make user errors of this nature very hard to debug.

The old behavior, for applications that might have been relying upon it, can be re-enabled for
any :class:`.Mapper` by specifying the flag ``legacy_is_orphan`` as a mapper
option.

The new behavior allows the following test case to work::
    from sqlalchemy import Column, Integer, String, ForeignKey
    from sqlalchemy.orm import relationship, backref
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'user'
        id = Column(Integer, primary_key=True)
        name = Column(String(64))

    class UserKeyword(Base):
        __tablename__ = 'user_keyword'
        user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
        keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)

        user = relationship(User,
                    backref=backref("user_keywords",
                                    cascade="all, delete-orphan")
                )

        keyword = relationship("Keyword",
                    backref=backref("user_keywords",
                                    cascade="all, delete-orphan")
                )

        # uncomment this to enable the old behavior
        # __mapper_args__ = {"legacy_is_orphan": True}

    class Keyword(Base):
        __tablename__ = 'keyword'
        id = Column(Integer, primary_key=True)
        keyword = Column('keyword', String(64))

    from sqlalchemy import create_engine
    from sqlalchemy.orm import Session

    # note we're using Postgresql to ensure that referential integrity
    # is enforced, for demonstration purposes.
    e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)

    Base.metadata.drop_all(e)
    Base.metadata.create_all(e)

    session = Session(e)

    u1 = User(name="u1")
    k1 = Keyword(keyword="k1")

    session.add_all([u1, k1])

    uk1 = UserKeyword(keyword=k1, user=u1)

    # previously, if session.flush() were called here,
    # this operation would succeed, but if session.flush()
    # were not called here, the operation fails with an
    # integrity error.
    # session.flush()
    del u1.user_keywords[0]

    session.commit()
The after_attach event fires after the item is associated with the Session instead of before; before_attach added
------------------------------------------------------------------------------------------------------------------

Event handlers which use after_attach can now assume the
given instance is associated with the given session::

    @event.listens_for(Session, "after_attach")
    def after_attach(session, instance):
        assert instance in session

Some use cases require that it work this way. However,
other use cases require that the item is *not* yet part of
the session, such as when a query, intended to load some
state required for an instance, emits autoflush first and
would otherwise prematurely flush the target object. Those
use cases should use the new "before_attach" event::

    @event.listens_for(Session, "before_attach")
    def before_attach(session, instance):
        instance.some_necessary_attribute = session.query(Widget).\
                                filter_by(name=instance.widget_name).\
                                first()
Query now auto-correlates like a select() does
----------------------------------------------

Previously it was necessary to call :meth:`.Query.correlate` in
order to have a column- or WHERE-subquery correlate to the
parent::

    subq = session.query(Entity.value).\
                filter(Entity.id == Parent.entity_id).\
                correlate(Parent).\
                subquery()

    session.query(Parent).filter(subq == "some value")

This was the opposite behavior of a plain ``select()``
construct which would assume auto-correlation by default.
The above statement in 0.8 will correlate automatically::

    subq = session.query(Entity.value).\
                filter(Entity.id == Parent.entity_id).\
                subquery()

    session.query(Parent).filter(subq == "some value")

Like in ``select()``, correlation can be disabled by calling
``query.correlate(None)``, or manually set by passing an
entity, ``query.correlate(someentity)``.
.. _correlation_context_specific:

Correlation is now always context-specific
------------------------------------------

To allow a wider variety of correlation scenarios, the behavior of
:meth:`.Select.correlate` and :meth:`.Query.correlate` has changed slightly
such that the SELECT statement will omit the "correlated" target from the
FROM clause only if the statement is actually used in that context. Additionally,
it's no longer possible for a SELECT statement that's placed as a FROM
in an enclosing SELECT statement to "correlate" (i.e. omit) a FROM clause.

This change only makes things better as far as rendering SQL, in that it's no
longer possible to render illegal SQL where there are insufficient FROM
objects relative to what's being selected::

    from sqlalchemy.sql import table, column, select

    t1 = table('t1', column('x'))
    t2 = table('t2', column('y'))
    s = select([t1, t2]).correlate(t1)

Prior to this change, the above would return::

    SELECT t1.x, t2.y FROM t2

which is invalid SQL as "t1" is not referred to in any FROM clause.

Now, in the absence of an enclosing SELECT, it returns::

    SELECT t1.x, t2.y FROM t1, t2

Within a SELECT, the correlation takes effect as expected::

    s2 = select([t1, t2]).where(t1.c.x == t2.c.y).where(t1.c.x == s)

    SELECT t1.x, t2.y FROM t1, t2
    WHERE t1.x = t2.y AND t1.x =
        (SELECT t1.x, t2.y FROM t2)

This change is not expected to impact any existing applications, as
the correlation behavior remains identical for properly constructed
expressions. Only an application that relies, most likely within a
testing scenario, on the invalid string output of a correlated
SELECT used in a non-correlating context would see any change.
.. _metadata_create_drop_tables:

create_all() and drop_all() will now honor an empty list as such
----------------------------------------------------------------

The methods :meth:`.MetaData.create_all` and :meth:`.MetaData.drop_all`
will now accept a list of :class:`.Table` objects that is empty,
and will not emit any CREATE or DROP statements. Previously,
an empty list was interpreted the same as passing ``None``
for a collection, and CREATE/DROP would be emitted for all
items unconditionally.

This is a bug fix, but some applications may have been relying upon
the previous behavior.
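As a quick illustration of the new behavior, the following sketch uses an in-memory SQLite database and a hypothetical table ``t``; the :func:`.inspect` API used to verify the results is itself new in 0.8:

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, inspect

e = create_engine("sqlite://")
metadata = MetaData()
t = Table('t', metadata, Column('id', Integer, primary_key=True))

# 0.8 behavior: an empty ``tables`` list means "create nothing"
metadata.create_all(e, tables=[])
tables_after_empty = inspect(e).get_table_names()

# the default (tables=None) still creates all tables
metadata.create_all(e)
tables_after_default = inspect(e).get_table_names()
```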
Repaired the Event Targeting of :class:`.InstrumentationEvents`
-----------------------------------------------------------------

The :class:`.InstrumentationEvents` series of event targets have
documented that the events will only be fired off according to
the actual class passed as a target. Through 0.7, this wasn't the
case, and any event listener applied to :class:`.InstrumentationEvents`
would be invoked for all classes mapped. In 0.8, additional
logic has been added so that the events will only invoke for those
classes sent in. The ``propagate`` flag here is set to ``True``
by default, as class instrumentation events are typically used to
intercept classes that aren't yet created.
No more magic coercion of "=" to IN when comparing to subquery in MS-SQL
------------------------------------------------------------------------

We found a very old behavior in the MSSQL dialect which
would attempt to rescue users from themselves when
doing something like this::

    scalar_subq = select([someothertable.c.id]).where(someothertable.c.data == 'foo')

    select([sometable]).where(sometable.c.id == scalar_subq)

SQL Server doesn't allow an equality comparison to a scalar
SELECT, that is, "x = (SELECT something)". The MSSQL dialect
would convert this to an IN. The same thing would happen
however upon a comparison like "(SELECT something) = x", and
overall this level of guessing is outside of SQLAlchemy's
usual scope, so the behavior is removed.
Fixed the behavior of :meth:`.Session.is_modified`
--------------------------------------------------

The :meth:`.Session.is_modified` method accepts an argument
``passive`` which basically should not be necessary; the
argument in all cases should be the value ``True``. When
left at its default of ``False``, it would have the effect of
hitting the database, and often triggering autoflush which
would itself change the results. In 0.8 the ``passive``
argument will have no effect, and unloaded attributes will
never be checked for history, since by definition there can
be no pending state change on an unloaded attribute.

.. seealso::

    :meth:`.Session.is_modified`
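A minimal sketch of the repaired behavior, using a hypothetical ``Thing`` mapping against an in-memory SQLite database; in 0.8, the ``is_modified()`` calls themselves emit no SQL:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

e = create_engine("sqlite://")
Base.metadata.create_all(e)

session = Session(e)
t = Thing(name="a")
session.add(t)
session.commit()

clean = session.is_modified(t)   # no pending attribute changes
t.name = "b"
dirty = session.is_modified(t)   # in-memory change is detected
```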
:attr:`.Column.key` is honored in the :attr:`.Select.c` attribute of :func:`.select` with :meth:`.Select.apply_labels`
-----------------------------------------------------------------------------------------------------------------------

Users of the expression system know that :meth:`.Select.apply_labels`
prepends the table name to each column name, affecting the
names that are available from :attr:`.Select.c`::

    s = select([table1]).apply_labels()

Before 0.8, if the :class:`.Column` had a different :attr:`.Column.key`, this
key would be ignored, inconsistently versus when
:meth:`.Select.apply_labels` were not used::

    table1 = Table('t1', metadata,
        Column('col1', Integer, key='column_one')
    )
    s = select([table1])
    s.c.column_one  # would be accessible like this
    s.c.col1  # would raise AttributeError

    s = select([table1]).apply_labels()
    s.c.table1_column_one  # would raise AttributeError
    s.c.table1_col1  # would be accessible like this

In 0.8, :attr:`.Column.key` is honored in both cases::

    table1 = Table('t1', metadata,
        Column('col1', Integer, key='column_one')
    )
    s = select([table1])
    s.c.column_one  # works
    s.c.col1  # AttributeError

    s = select([table1]).apply_labels()
    s.c.table1_column_one  # works
    s.c.table1_col1  # AttributeError

All other behavior regarding "name" and "key" is the same,
including that the rendered SQL will still use the form
``<tablename>_<colname>``; the emphasis here was on
preventing the :attr:`.Column.key` contents from being rendered into the
``SELECT`` statement, so that there are no issues with
special or non-ASCII characters used in the :attr:`.Column.key`.
single_parent warning is now an error
-------------------------------------

A :func:`.relationship` that is many-to-one or many-to-many and
specifies ``cascade="all, delete-orphan"``, which is an
awkward but nonetheless supported use case (with
restrictions), will now raise an error if the relationship
does not specify the ``single_parent=True`` option.
Previously it would only emit a warning, but a failure would
follow almost immediately within the attribute system in any
case.
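The following sketch, using hypothetical ``Parent``/``Child`` mappings, shows the new failure mode; the exception is assumed here to be an :class:`.ArgumentError` raised when the mappers are configured:

```python
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.exc import ArgumentError
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, configure_mappers

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))

    # many-to-one with delete-orphan, but single_parent=True is missing:
    # in 0.8 this raises instead of warning
    parent = relationship(Parent, cascade="all, delete-orphan")

error = None
try:
    configure_mappers()
except ArgumentError as err:
    error = err

# the error message directs the user to set the single_parent=True flag
```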
Adding the ``inspector`` argument to the ``column_reflect`` event
-----------------------------------------------------------------

0.7 added a new event called ``column_reflect``, provided so
that the reflection of columns could be augmented as each
one is reflected. We got this event slightly wrong in
that the event gave no way to get at the current
``Inspector`` and ``Connection`` being used for the
reflection, in the case that additional information from the
database is needed. As this is a new event not widely used
yet, we'll be adding the ``inspector`` argument into it
directly::

    @event.listens_for(Table, "column_reflect")
    def listen_for_col(inspector, table, column_info):
        # ...
Disabling auto-detect of collations, casing for MySQL
-----------------------------------------------------

The MySQL dialect does two calls, one of them very expensive, to
load all possible collations from the database as well as
information on casing, the first time an ``Engine``
connects. Neither of these collections is used for any
SQLAlchemy functions, so these calls will be changed to no
longer be emitted automatically. Applications that might
have relied on these collections being present on
``engine.dialect`` will need to call upon
``_detect_collations()`` and ``_detect_casing()`` directly.
"Unconsumed column names" warning becomes an exception
------------------------------------------------------

Referring to a non-existent column in an ``insert()`` or
``update()`` construct will raise an error instead of a
warning::

    t1 = table('t1', column('x'))
    t1.insert().values(x=5, z=5)  # raises "Unconsumed column names: z"
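A runnable version of the above, assuming the exception raised at statement compilation time is a ``CompileError``:

```python
from sqlalchemy.sql import table, column
from sqlalchemy.exc import CompileError

t1 = table('t1', column('x'))

# 'z' is not a column on t1; under 0.8 this is an error, not a warning
stmt = t1.insert().values(x=5, z=5)

error = None
try:
    str(stmt)   # compiling the statement consumes the values
except CompileError as err:
    error = err
```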
Inspector.get_primary_keys() is deprecated, use Inspector.get_pk_constraint
---------------------------------------------------------------------------

These two methods on ``Inspector`` were redundant, where
``get_primary_keys()`` would return the same information as
``get_pk_constraint()`` minus the name of the constraint::

    >>> insp.get_primary_keys()
    ["a", "b"]

    >>> insp.get_pk_constraint()
    {"name":"pk_constraint", "constrained_columns":["a", "b"]}
Case-insensitive result row names will be disabled in most cases
----------------------------------------------------------------

A very old behavior, the column names in ``RowProxy`` were
always compared case-insensitively::

    >>> row = result.fetchone()
    >>> row['foo'] == row['FOO'] == row['Foo']
    True

This was for the benefit of a few dialects which in the
early days needed this, like Oracle and Firebird, but in
modern usage we have more accurate ways of dealing with the
case-insensitive behavior of these two platforms.

Going forward, this behavior will be available only
optionally, by passing the flag ``case_sensitive=False``
to ``create_engine()``; otherwise, column names
requested from the row must match with respect to casing.
``InstrumentationManager`` and alternate class instrumentation is now an extension
----------------------------------------------------------------------------------

The ``sqlalchemy.orm.interfaces.InstrumentationManager``
class is moved to
``sqlalchemy.ext.instrumentation.InstrumentationManager``.
The "alternate instrumentation" system was built for the
benefit of a very small number of installations that needed
to work with existing or unusual class instrumentation
systems, and generally is very seldom used. The complexity
of this system has been exported to an ``ext.`` module. It
remains unused until imported, typically when a third-party
library imports ``InstrumentationManager``, at which
point it is injected back into ``sqlalchemy.orm`` by
replacing the default ``InstrumentationFactory`` with
``ExtendedInstrumentationRegistry``.
Removed
=======

SQLSoup
-------

SQLSoup is a handy package that presents an alternative
interface on top of the SQLAlchemy ORM. SQLSoup is now
moved into its own project and documented/released
separately; see https://bitbucket.org/zzzeek/sqlsoup.

SQLSoup is a very simple tool that could also benefit from
contributors who are interested in its style of usage.
MutableType
-----------

The older "mutable" system within the SQLAlchemy ORM has
been removed. This refers to the ``MutableType`` interface
which was applied to types such as ``PickleType`` and
conditionally to ``TypeDecorator``, and since very early
SQLAlchemy versions has provided a way for the ORM to detect
changes in so-called "mutable" data structures such as JSON
structures and pickled objects. However, the
implementation was never reasonable and forced a very
inefficient mode of usage on the unit-of-work, which caused
an expensive scan of all objects to take place during flush.
In 0.7, the `sqlalchemy.ext.mutable
<http://docs.sqlalchemy.org/en/latest/orm/extensions/mutable.html>`_
extension was introduced so that user-defined datatypes can appropriately
send events to the unit of work as changes occur.

Today, usage of ``MutableType`` is expected to be low, as
warnings have been in place for some years now regarding its
inefficiency.
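As a sketch of the replacement approach, the ``sqlalchemy.ext.mutable`` extension can wrap a mutable value so that in-place changes emit events to the unit of work; the ``Config`` mapping below is hypothetical, and ``MutableDict`` is assumed to be available (it is present as of 0.8):

```python
from sqlalchemy import create_engine, Column, Integer, PickleType
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.mutable import MutableDict
from sqlalchemy.orm import Session

Base = declarative_base()

class Config(Base):
    __tablename__ = 'config'
    id = Column(Integer, primary_key=True)
    # MutableDict wraps the plain dict so in-place changes emit events
    data = Column(MutableDict.as_mutable(PickleType))

e = create_engine("sqlite://")
Base.metadata.create_all(e)

session = Session(e)
c = Config(data={})
session.add(c)
session.commit()

# mutate in place; the change event marks the parent object as dirty,
# with no flush-time scan of all objects required
c.data['key'] = 'value'
flagged = c in session.dirty
```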
sqlalchemy.exceptions (has been sqlalchemy.exc for years)
---------------------------------------------------------

We had left in an alias, ``sqlalchemy.exceptions``, to attempt
to make it slightly easier for some very old libraries that
hadn't yet been upgraded to use ``sqlalchemy.exc``. Some
users were still being confused by it, however, so in 0.8 we're
taking it out entirely to eliminate any of that confusion.
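For applications still importing the old alias, the migration is a one-line change:

```python
# sqlalchemy.exceptions no longer exists in 0.8; use the canonical name
from sqlalchemy import exc

# all exception classes live here, e.g. the common base class
error_base = exc.SQLAlchemyError
```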