Psycopg -- PostgreSQL database adapter for Python
*************************************************

Psycopg is the most popular PostgreSQL database adapter for the Python programming language. Its main features are the complete implementation of the Python DB API 2.0 specification and thread safety (several threads can share the same connection). It was designed for heavily multi-threaded applications that create and destroy lots of cursors and make a large number of concurrent "INSERT"s or "UPDATE"s.

Psycopg 2 is mostly implemented in C as a libpq wrapper, making it both efficient and secure. It features client-side and *server-side* cursors, *asynchronous communication* and *notifications*, and "COPY TO/COPY FROM" support. Many Python types are supported out-of-the-box and *adapted to matching PostgreSQL data types*; adaptation can be extended and customized thanks to a flexible *objects adaptation system*. Psycopg 2 is both Unicode and Python 3 friendly.

-[ Contents ]-

* Introduction

* Installation

  * Install from a package
  * Install from source
  * Creating a debug build
  * If you still have problems

* Basic module usage

  * Passing parameters to SQL queries
  * Adaptation of Python values to SQL types
  * Transactions control
  * Server side cursors
  * Thread and process safety
  * Using COPY TO and COPY FROM
  * Access to PostgreSQL large objects
  * Two-Phase Commit protocol support

* The "psycopg2" module content

  * Exceptions
  * Type Objects and Constructors

* The "connection" class

* The "cursor" class

* More advanced topics

  * Connection and cursor factories
  * Adapting new Python types to SQL syntax
  * Type casting of SQL types into Python objects
  * Asynchronous notifications
  * Asynchronous support
  * Support for coroutine libraries

* "psycopg2.extensions" -- Extensions to the DB API

  * SQL adaptation protocol objects
  * Database types casting functions
  * Additional exceptions
  * Isolation level constants
  * Transaction status constants
  * Connection status constants
  * Poll constants
  * Additional database types

* "psycopg2.tz" -- "tzinfo" implementations for Psycopg 2

* "psycopg2.pool" -- Connections pooling

* "psycopg2.extras" -- Miscellaneous goodies for Psycopg 2

  * Connection and cursor subclasses
  * Additional data types
  * Fractional time zones
  * Coroutine support

* "psycopg2.errorcodes" -- Error codes defined by PostgreSQL

* Frequently Asked Questions

  * Problems with transactions handling
  * Problems with type conversions
  * Best practices
  * Problems compiling and deploying psycopg2

* Release notes

  * What's new in psycopg 2.5.1
  * What's new in psycopg 2.5
  * What's new in psycopg 2.4
  * What's new in psycopg 2.3
  * What's new in psycopg 2.2
  * What's new in psycopg 2.0

Introduction
************

Psycopg is a PostgreSQL adapter for the Python programming language. It is a wrapper for libpq, the official PostgreSQL client library.

The "psycopg2" package is the current mature implementation of the adapter: it is a C extension and as such it is only compatible with CPython. If you want to use Psycopg on a different Python implementation (PyPy, Jython, IronPython) there is an experimental port of Psycopg based on ctypes, but it is not as mature as the C implementation yet.

The current "psycopg2" implementation supports:

* Python 2 versions from 2.5 to 2.7

* Python 3 versions from 3.1 to 3.3

* PostgreSQL versions from 7.4 to 9.2

Note: "psycopg2" usually depends at runtime on the libpq dynamic library.
However it can connect to PostgreSQL servers of any supported version, independently of the version of the libpq used: just install the most recent libpq version, or the most practical one, without trying to match it to the version of the PostgreSQL server you will have to connect to.

Installation
************

If possible, and usually it is, please *install Psycopg from a package* available for your distribution or operating system.

Compiling from source is a very easy task; however "psycopg2" is a C extension module and as such it requires a few more things in place compared to a pure Python module. So, if you don't have experience compiling Python extension packages, *above all if you are a Windows or a Mac OS user*, please use a pre-compiled package and go straight to the *module usage*, avoiding the gory details.

Install from a package
======================

**Linux**

Psycopg is already packaged in many Linux distributions: look for a package such as "python-psycopg2" using the package manager of your choice. On Debian, Ubuntu and other deb-based distributions you should just need:

   sudo apt-get install python-psycopg2

to install the package with all its dependencies.

**Mac OS X**

Psycopg is available as a fink package in the *unstable* tree: you may install it with:

   fink install psycopg2-py27

The library is also available on MacPorts; try:

   sudo port install py27-psycopg2

**Microsoft Windows**

Jason Erickson maintains a packaged Windows port of Psycopg with an installation executable. Download. Double click. Done.

Install from source
===================

These notes illustrate how to compile Psycopg on Linux. If you want to compile Psycopg on other platforms you may have to adjust some details accordingly.

Psycopg is a C wrapper around the libpq PostgreSQL client library. To install it from source you will need:

* A C compiler.

* The Python header files. They are usually installed in a package such as **python-dev**. A message such as *error: Python.h: No such file or directory* is an indication that the Python headers are missing.

* The libpq header files. They are usually installed in a package such as **libpq-dev**. If you get an *error: libpq-fe.h: No such file or directory* you are missing them.

* The **pg_config** program: it is usually installed by the **libpq-dev** package but sometimes it is not in a "PATH" directory. Having it in the "PATH" greatly streamlines the installation, so try running "pg_config --version": if it returns an error or an unexpected version number then locate the directory containing the **pg_config** shipped with the right libpq version (usually "/usr/lib/postgresql/X.Y/bin/") and add it to the "PATH":

     $ export PATH=/usr/lib/postgresql/X.Y/bin/:$PATH

  You only need it to compile and install "psycopg2", not for its regular usage.

Note: The libpq header files used to compile "psycopg2" should match the version of the library linked at runtime. If you get errors about missing or mismatching libraries when importing "psycopg2" check (e.g. using **ldd**) whether the module "psycopg2/_psycopg.so" is linked to the right "libpq.so".

Use a Python package manager
----------------------------

If the above requirements are satisfied, you can use **easy_install**, **pip** or whatever the Python package manager of the week is:

   $ pip install psycopg2

Please refer to your package manager documentation about performing a local or global installation, **virtualenv** (fully supported by recent Psycopg versions), using different Python versions and other nuances.

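For example, an installation into a **virtualenv** environment might look like the following sketch (the environment name is arbitrary):

   $ virtualenv psycopg2-env
   $ source psycopg2-env/bin/activate
   (psycopg2-env)$ pip install psycopg2
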
Use the source package
----------------------

You can download a copy of the Psycopg source files from the Psycopg download page. Once unpacked, to compile and install the package you can run:

   $ python setup.py build
   $ sudo python setup.py install

If you have less standard requirements, such as:

* creating a *debug build*,

* using a **pg_config** not in the "PATH",

* supporting "mx.DateTime",

then take a look at the "setup.cfg" file.

Some of the options available in "setup.cfg" are also available as command line arguments of the "build_ext" sub-command. For instance you can specify an alternate **pg_config** location using:

   $ python setup.py build_ext --pg-config /path/to/pg_config build

Use "python setup.py build_ext --help" to get a list of the options supported.

Creating a debug build
======================

In case of problems, Psycopg can be configured to emit detailed debug messages, which can be very useful for diagnostics and for reporting a bug. In order to create a debug package:

* Download and unpack the Psycopg source package.

* Edit the "setup.cfg" file adding the "PSYCOPG_DEBUG" flag to the "define" option.

* *Compile and install* the package.

* Set the "PSYCOPG_DEBUG" environment variable:

     $ export PSYCOPG_DEBUG=1

* Run your program (making sure that the "psycopg2" package imported is the one you just compiled and not e.g. the system one): you will have a copious stream of information printed on stdout.

If you still have problems
==========================

Try the following. *In order:*

* Read again the *requirements*.

* Read the *FAQ*.

* Google for "psycopg2" *your error message*. Especially useful the week after the release of a new OS X version.

* Write to the Mailing List.

* Complain on your blog or on Twitter that "psycopg2" is the worst package ever and about the quality time you have wasted figuring out the correct "ARCHFLAGS". Especially useful from the Starbucks near you.

Basic module usage
******************

The basic Psycopg usage is common to all the database adapters implementing the DB API 2.0 protocol. Here is an interactive session showing some of the basic commands:

   >>> import psycopg2

   # Connect to an existing database
   >>> conn = psycopg2.connect("dbname=test user=postgres")

   # Open a cursor to perform database operations
   >>> cur = conn.cursor()

   # Execute a command: this creates a new table
   >>> cur.execute("CREATE TABLE test (id serial PRIMARY KEY, num integer, data varchar);")

   # Pass data to fill the query placeholders and let Psycopg perform
   # the correct conversion (no more SQL injections!)
   >>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)",
   ...     (100, "abc'def"))

   # Query the database and obtain data as Python objects
   >>> cur.execute("SELECT * FROM test;")
   >>> cur.fetchone()
   (1, 100, "abc'def")

   # Make the changes to the database persistent
   >>> conn.commit()

   # Close communication with the database
   >>> cur.close()
   >>> conn.close()

The main entry points of Psycopg are:

* The function "connect()" creates a new database session and returns a new "connection" instance.

* The class "connection" encapsulates a database session. It allows you to:

  * create new "cursor"s using the "cursor()" method to execute database commands and queries,

  * terminate transactions using the methods "commit()" or "rollback()".

* The class "cursor" allows interaction with the database:

  * send commands to the database using methods such as "execute()" and "executemany()",

  * retrieve data from the database *by iteration* or using methods such as "fetchone()", "fetchmany()", "fetchall()" (see the sketch below).

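As a sketch of the last point, a query result can be consumed by iterating on the cursor itself instead of calling the fetch methods (the "test" table is the one created in the session above):

   cur.execute("SELECT id, num, data FROM test;")
   for row in cur:
       print row        # each row is a tuple, e.g. (1, 100, "abc'def")
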
Passing parameters to SQL queries
=================================

Psycopg converts Python variables to SQL literals by type. Many standard Python types are already adapted to the correct SQL representation.

Example: the Python function call:

   >>> cur.execute(
   ...     """INSERT INTO some_table (an_int, a_date, a_string)
   ...         VALUES (%s, %s, %s);""",
   ...     (10, datetime.date(2005, 11, 18), "O'Reilly"))

is converted into the SQL command:

   INSERT INTO some_table (an_int, a_date, a_string)
    VALUES (10, '2005-11-18', 'O''Reilly');

Named arguments are supported too, using "%(*name*)s" placeholders. Using named arguments the values can be passed to the query in any order and many placeholders can use the same value:

   >>> cur.execute(
   ...     """INSERT INTO some_table (an_int, a_date, another_date, a_string)
   ...         VALUES (%(int)s, %(date)s, %(date)s, %(str)s);""",
   ...     {'int': 10, 'str': "O'Reilly", 'date': datetime.date(2005, 11, 18)})

When parameters are used, in order to include a literal "%" in the query you can use the "%%" string.

While the mechanism resembles regular Python string manipulation, there are a few subtle differences you should be aware of when passing parameters to a query:

* The Python string operator "%" is not used: the "execute()" method accepts a tuple or dictionary of values as its second parameter. **Never** use "%" or "+" to merge values into queries.

* The variable placeholder must *always be a* "%s", even if a different placeholder (such as a "%d" for integers or "%f" for floats) may look more appropriate:

     >>> cur.execute("INSERT INTO numbers VALUES (%d)", (42,)) # WRONG
     >>> cur.execute("INSERT INTO numbers VALUES (%s)", (42,)) # correct

* For positional variable binding, *the second argument must always be a sequence*, even if it contains a single variable. And remember that Python requires a comma to create a single element tuple:

     >>> cur.execute("INSERT INTO foo VALUES (%s)", "bar")    # WRONG
     >>> cur.execute("INSERT INTO foo VALUES (%s)", ("bar"))  # WRONG
     >>> cur.execute("INSERT INTO foo VALUES (%s)", ("bar",)) # correct
     >>> cur.execute("INSERT INTO foo VALUES (%s)", ["bar"])  # correct

* Only variable values should be bound via this method: it shouldn't be used to set table or field names. For these elements, ordinary string formatting should be used before running "execute()".

The problem with the query parameters
-------------------------------------

The SQL representation of many data types is often different from their Python string representation. The classic example is with single quotes in strings: SQL uses them as string constant delimiters and requires them to be escaped, whereas in Python single quotes can be left unescaped in strings delimited by double quotes. For this reason a naïve approach to the composition of query strings, e.g. using string concatenation, is a recipe for terrible problems:

   >>> SQL = "INSERT INTO authors (name) VALUES ('%s');" # NEVER DO THIS
   >>> data = ("O'Reilly", )
   >>> cur.execute(SQL % data) # THIS WILL FAIL MISERABLY
   ProgrammingError: syntax error at or near "Reilly"
   LINE 1: INSERT INTO authors (name) VALUES ('O'Reilly')
                                                  ^

If the variable containing the data to be sent to the database comes from an untrusted source (e.g. a form published on a web site) an attacker could easily craft a malformed string, either gaining access to unauthorized data or performing destructive operations on the database. This form of attack is called SQL injection and is known to be one of the most widespread forms of attack on servers.

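If you want to see how Psycopg quotes a given set of arguments without executing anything, you can use the cursor's "mogrify()" method (used in several examples later in this document); a minimal sketch:

   >>> cur.mogrify("INSERT INTO authors (name) VALUES (%s);", ("O'Reilly",))
   "INSERT INTO authors (name) VALUES ('O''Reilly');"
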
Before continuing, please print this page as a memo and hang it on your desk.

Psycopg can automatically convert Python objects to and from SQL literals: using this feature your code will be more robust and reliable. We must stress this point:

Warning: Never, **never**, **NEVER** use Python string concatenation ("+") or string parameter interpolation ("%") to pass variables to a SQL query string. Not even at gunpoint.

The correct way to pass variables in a SQL command is using the second argument of the "execute()" method:

   >>> SQL = "INSERT INTO authors (name) VALUES (%s);" # Note: no quotes
   >>> data = ("O'Reilly", )
   >>> cur.execute(SQL, data) # Note: no % operator

Adaptation of Python values to SQL types
========================================

Many standard Python types are adapted into SQL and returned as Python objects when a query is executed.

The following table shows the default mapping between Python and PostgreSQL types:

   +------------------------+----------------------------------+
   | Python                 | PostgreSQL                       |
   +========================+==================================+
   | "None"                 | "NULL"                           |
   +------------------------+----------------------------------+
   | "bool"                 | "bool"                           |
   +------------------------+----------------------------------+
   | "float"                | "real", "double"                 |
   +------------------------+----------------------------------+
   | "int", "long"          | "smallint", "integer", "bigint"  |
   +------------------------+----------------------------------+
   | "Decimal"              | "numeric"                        |
   +------------------------+----------------------------------+
   | "str", "unicode"       | "varchar", "text"                |
   +------------------------+----------------------------------+
   | "buffer",              | "bytea"                          |
   | "memoryview",          |                                  |
   | "bytearray", "bytes"   |                                  |
   +------------------------+----------------------------------+
   | "date"                 | "date"                           |
   +------------------------+----------------------------------+
   | "time"                 | "time"                           |
   +------------------------+----------------------------------+
   | "datetime"             | "timestamp", "timestamptz"       |
   +------------------------+----------------------------------+
   | "timedelta"            | "interval"                       |
   +------------------------+----------------------------------+
   | "list"                 | "ARRAY"                          |
   +------------------------+----------------------------------+
   | "tuple", "namedtuple"  | Composite types, "IN" syntax     |
   +------------------------+----------------------------------+

The mapping is fairly customizable: see *Adapting new Python types to SQL syntax* and *Type casting of SQL types into Python objects*. You can also find a few other specialized adapters in the "psycopg2.extras" module.

Constants adaptation
--------------------

Python "None" and boolean values "True" and "False" are converted into the proper SQL literals:

   >>> cur.mogrify("SELECT %s, %s, %s;", (None, True, False))
   'SELECT NULL, true, false;'

Numbers adaptation
------------------

Python numeric objects "int", "long", "float", "Decimal" are converted into a PostgreSQL numerical representation:

   >>> cur.mogrify("SELECT %s, %s, %s, %s;", (10, 10L, 10.0, Decimal("10.00")))
   'SELECT 10, 10, 10.0, 10.00;'

Reading from the database, integer types are converted into "int", floating point types are converted into "float", "numeric"/"decimal" are converted into "Decimal".

Note: Sometimes you may prefer to receive "numeric" data as "float" instead, for performance reasons or ease of manipulation: you can configure an adapter to *cast PostgreSQL numeric to Python float*. This of course may imply a loss of precision.

See also: PostgreSQL numeric types

Strings adaptation
------------------

Python "str" and "unicode" are converted into the SQL string syntax. "unicode" objects ("str" in Python 3) are encoded in the connection "encoding" before sending to the backend: trying to send a character not supported by the encoding will result in an error. Data is usually received as "str" (*i.e.* it is *decoded* on Python 3, left *encoded* on Python 2). However it is possible to receive "unicode" on Python 2 too: see *Unicode handling*.

Unicode handling
~~~~~~~~~~~~~~~~

Psycopg can exchange Unicode data with a PostgreSQL database.
Python "unicode" objects are automatically *encoded* in the client encoding defined on the database connection (the PostgreSQL encoding, available in "connection.encoding", is translated into a Python codec using the "encodings" mapping): >>> print u, type(u) àèìòù€ >>> cur.execute("INSERT INTO test (num, data) VALUES (%s,%s);", (74, u)) When reading data from the database, in Python 2 the strings returned are usually 8 bit "str" objects encoded in the database client encoding: >>> print conn.encoding UTF8 >>> cur.execute("SELECT data FROM test WHERE num = 74") >>> x = cur.fetchone()[0] >>> print x, type(x), repr(x) àèìòù€ '\xc3\xa0\xc3\xa8\xc3\xac\xc3\xb2\xc3\xb9\xe2\x82\xac' >>> conn.set_client_encoding('LATIN9') >>> cur.execute("SELECT data FROM test WHERE num = 74") >>> x = cur.fetchone()[0] >>> print type(x), repr(x) '\xe0\xe8\xec\xf2\xf9\xa4' In Python 3 instead the strings are automatically *decoded* in the connection "encoding", as the "str" object can represent Unicode characters. In Python 2 you must register a *typecaster* in order to receive "unicode" objects: >>> psycopg2.extensions.register_type(psycopg2.extensions.UNICODE, cur) >>> cur.execute("SELECT data FROM test WHERE num = 74") >>> x = cur.fetchone()[0] >>> print x, type(x), repr(x) àèìòù€ u'\xe0\xe8\xec\xf2\xf9\u20ac' In the above example, the "UNICODE" typecaster is registered only on the cursor. It is also possible to register typecasters on the connection or globally: see the function "register_type()" and *Type casting of SQL types into Python objects* for details. Note: In Python 2, if you want to uniformly receive all your database input in Unicode, you can register the related typecasters globally as soon as Psycopg is imported: import psycopg2 import psycopg2.extensions psycopg2.extensions.register_type(psycopg2.extensions.UNICODE) psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY) and forget about this story. Binary adaptation ----------------- Python types representing binary objects are converted into PostgreSQL binary string syntax, suitable for "bytea" fields. Such types are "buffer" (only available in Python 2), "memoryview" (available from Python 2.7), "bytearray" (available from Python 2.6) and "bytes" (only from Python 3: the name is available from Python 2.6 but it's only an alias for the type "str"). Any object implementing the Revised Buffer Protocol should be usable as binary type where the protocol is supported (i.e. from Python 2.6). Received data is returned as "buffer" (in Python 2) or "memoryview" (in Python 3). Changed in version 2.4: only strings were supported before. Changed in version 2.4.1: can parse the 'hex' format from 9.0 servers without relying on the version of the client library. Note: In Python 2, if you have binary data in a "str" object, you can pass them to a "bytea" field using the "psycopg2.Binary" wrapper: mypic = open('picture.png', 'rb').read() curs.execute("insert into blobs (file) values (%s)", (psycopg2.Binary(mypic),)) Warning: Since version 9.0 PostgreSQL uses by default a new "hex" format to emit "bytea" fields. Starting from Psycopg 2.4.1 the format is correctly supported. 
Date/Time objects adaptation
----------------------------

The Python builtin "datetime", "date", "time", "timedelta" objects are converted into PostgreSQL's "timestamp[tz]", "date", "time", "interval" data types. Time zones are supported too. The Egenix mx.DateTime objects are adapted the same way:

   >>> dt = datetime.datetime.now()
   >>> dt
   datetime.datetime(2010, 2, 8, 1, 40, 27, 425337)

   >>> cur.mogrify("SELECT %s, %s, %s;", (dt, dt.date(), dt.time()))
   "SELECT '2010-02-08T01:40:27.425337', '2010-02-08', '01:40:27.425337';"

   >>> cur.mogrify("SELECT %s;", (dt - datetime.datetime(2010,1,1),))
   "SELECT '38 days 6027.425337 seconds';"

See also: PostgreSQL date/time types

Time zones handling
~~~~~~~~~~~~~~~~~~~

The PostgreSQL type "timestamp with time zone" (a.k.a. "timestamptz") is converted into Python "datetime" objects with a "tzinfo" attribute set to a "FixedOffsetTimezone" instance.

   >>> cur.execute("SET TIME ZONE 'Europe/Rome';")  # UTC + 1 hour
   >>> cur.execute("SELECT '2010-01-01 10:30:45'::timestamptz;")
   >>> cur.fetchone()[0].tzinfo
   psycopg2.tz.FixedOffsetTimezone(offset=60, name=None)

Note that only time zones with an integer number of minutes are supported: this is a limitation of the Python "datetime" module. A few historical time zones had seconds in the UTC offset: these time zones will have the offset rounded to the nearest minute, with an error of up to 30 seconds.

   >>> cur.execute("SET TIME ZONE 'Asia/Calcutta';")  # offset was +5:53:20
   >>> cur.execute("SELECT '1930-01-01 10:30:45'::timestamptz;")
   >>> cur.fetchone()[0].tzinfo
   psycopg2.tz.FixedOffsetTimezone(offset=353, name=None)

Changed in version 2.2.2: timezones with seconds are supported (with rounding). Previously such timezones raised an error. In order to deal with them in previous versions use "psycopg2.extras.register_tstz_w_secs()".

Infinite dates handling
~~~~~~~~~~~~~~~~~~~~~~~

PostgreSQL can store the representation of an "infinite" date, timestamp, or interval. Infinite dates are not available to Python, so these objects are mapped to "date.max", "datetime.max", "interval.max". Unfortunately the mapping cannot be bidirectional, so these dates will be stored back into the database with their values, such as "9999-12-31".

It is possible to create an alternative adapter for dates and other objects to map "date.max" to "infinity", for instance:

   class InfDateAdapter:
       def __init__(self, wrapped):
           self.wrapped = wrapped
       def getquoted(self):
           if self.wrapped == datetime.date.max:
               return "'infinity'::date"
           elif self.wrapped == datetime.date.min:
               return "'-infinity'::date"
           else:
               return psycopg2.extensions.DateFromPy(self.wrapped).getquoted()

   psycopg2.extensions.register_adapter(datetime.date, InfDateAdapter)

Of course it will not be possible to write the value of "date.max" in the database anymore: "infinity" will be stored instead.

Lists adaptation
----------------

Python lists are converted into PostgreSQL "ARRAY"s:

   >>> cur.mogrify("SELECT %s;", ([10, 20, 30], ))
   'SELECT ARRAY[10,20,30];'

Note: You can use a Python list as the argument of the "IN" operator using the PostgreSQL ANY operator.
   ids = [10, 20, 30]
   cur.execute("SELECT * FROM data WHERE id = ANY(%s);", (ids,))

Furthermore "ANY" can also work with empty lists, whereas "IN ()" is a SQL syntax error.

Note: Reading back from PostgreSQL, arrays are converted to lists of Python objects as expected, but only if the items are of a known type. Arrays of unknown types are returned as represented by the database (e.g. "{a,b,c}"). If you want to convert the items into Python objects you can easily create a typecaster for *arrays of unknown types*.

Tuples adaptation
-----------------

Python tuples are converted into a syntax suitable for the SQL "IN" operator and to represent a composite type:

   >>> cur.mogrify("SELECT %s IN %s;", (10, (10, 20, 30)))
   'SELECT 10 IN (10, 20, 30);'

Note: SQL doesn't allow an empty list in the "IN" operator, so your code should guard against empty tuples. Alternatively you can *use a Python list*.

If you want PostgreSQL composite types to be converted into a Python tuple/namedtuple you can use the "register_composite()" function.

New in version 2.0.6: the tuple "IN" adaptation.

Changed in version 2.0.14: the tuple "IN" adapter is always active. In previous releases it was necessary to import the "extensions" module to have it registered.

Changed in version 2.3: "namedtuple" instances are adapted like regular tuples and can thus be used to represent composite types.

Transactions control
====================

In Psycopg transactions are handled by the "connection" class. By default, the first time a command is sent to the database (using one of the "cursor"s created by the connection), a new transaction is created. The following database commands will be executed in the context of the same transaction -- not only the commands issued by the first cursor, but the ones issued by all the cursors created by the same connection. Should any command fail, the transaction will be aborted and no further command will be executed until a call to the "rollback()" method.

The connection is responsible for terminating its transaction, calling either the "commit()" or "rollback()" method. Committed changes are immediately made persistent in the database. Closing the connection using the "close()" method or destroying the connection object (using "del" or letting it fall out of scope) will result in an implicit rollback.

It is possible to set the connection in *autocommit* mode: this way all the commands executed will be immediately committed and no rollback is possible. A few commands (e.g. "CREATE DATABASE", "VACUUM"...) must be run outside any transaction: in order to be able to run these commands from Psycopg, the connection must be in autocommit mode: you can use the "autocommit" property ("set_isolation_level()" in older versions).

Warning: By default even a simple "SELECT" will start a transaction: in long-running programs, if no further action is taken, the session will remain "idle in transaction", an undesirable condition for several reasons (locks are held by the session, tables bloat...). For long lived scripts, either make sure to terminate a transaction as soon as possible or use an autocommit connection.

A few other transaction properties can be set session-wide by the "connection": for instance it is possible to have read-only transactions or change the isolation level. See the "set_session()" method for all the details.

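Putting this together, a common pattern (only a sketch: the table and the error policy are illustrative) is to commit on success and roll back on any failure:

   conn = psycopg2.connect(DSN)
   cur = conn.cursor()
   try:
       cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1;")
       cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2;")
       conn.commit()       # both changes are made persistent together
   except Exception:
       conn.rollback()     # on any error the whole transaction is discarded
       raise
   finally:
       cur.close()
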
"with" statement ---------------- Starting from version 2.5, psycopg2's connections and cursors are *context managers* and can be used with the "with" statement: with psycopg2.connect(DSN) as conn: with conn.cursor() as curs: curs.execute(SQL) When a connection exits the "with" block, if no exception has been raised by the block, the transaction is committed. In case of exception the transaction is rolled back. In no case the connection is closed: a connection can be used in more than a "with" statement and each "with" block is effectively wrapped in a transaction. When a cursor exits the "with" block it is closed, releasing any resource eventually associated with it. The state of the transaction is not affected. Server side cursors =================== When a database query is executed, the Psycopg "cursor" usually fetches all the records returned by the backend, transferring them to the client process. If the query returned an huge amount of data, a proportionally large amount of memory will be allocated by the client. If the dataset is too large to be practically handled on the client side, it is possible to create a *server side* cursor. Using this kind of cursor it is possible to transfer to the client only a controlled amount of data, so that a large dataset can be examined without keeping it entirely in memory. Server side cursor are created in PostgreSQL using the "DECLARE" command and subsequently handled using "MOVE", "FETCH" and "CLOSE" commands. Psycopg wraps the database server side cursor in *named cursors*. A named cursor is created using the "cursor()" method specifying the *name* parameter. Such cursor will behave mostly like a regular cursor, allowing the user to move in the dataset using the "scroll()" method and to read the data using "fetchone()" and "fetchmany()" methods. Normally you can only scroll forward in a cursor: if you need to scroll backwards you should declare your cursor "scrollable". Named cursors are also *iterable* like regular cursors. Note however that before Psycopg 2.4 iteration was performed fetching one record at time from the backend, resulting in a large overhead. The attribute "itersize" now controls how many records are fetched at time during the iteration: the default value of 2000 allows to fetch about 100KB per roundtrip assuming records of 10-20 columns of mixed number and strings; you may decrease this value if you are dealing with huge records. Named cursors are usually created "WITHOUT HOLD", meaning they live only as long as the current transaction. Trying to fetch from a named cursor after a "commit()" or to create a named cursor when the "connection" transaction isolation level is set to "AUTOCOMMIT" will result in an exception. It is possible to create a "WITH HOLD" cursor by specifying a "True" value for the "withhold" parameter to "cursor()" or by setting the "withhold" attribute to "True" before calling "execute()" on the cursor. It is extremely important to always "close()" such cursors, otherwise they will continue to hold server- side resources until the connection will be eventually closed. Also note that while "WITH HOLD" cursors lifetime extends well after "commit()", calling "rollback()" will automatically close the cursor. Note: It is also possible to use a named cursor to consume a cursor created in some other way than using the "DECLARE" executed by "execute()". 
Note: It is also possible to use a named cursor to consume a cursor created in some other way than using the "DECLARE" executed by "execute()". For example, you may have a PL/pgSQL function returning a cursor:

   CREATE FUNCTION reffunc(refcursor) RETURNS refcursor AS $$
   BEGIN
       OPEN $1 FOR SELECT col FROM test;
       RETURN $1;
   END;
   $$ LANGUAGE plpgsql;

You can read the cursor content by calling the function with a regular, non-named, Psycopg cursor:

   cur1 = conn.cursor()
   cur1.callproc('reffunc', ['curname'])

and then use a named cursor in the same transaction to "steal the cursor":

   cur2 = conn.cursor('curname')
   for record in cur2:     # or cur2.fetchone, fetchmany...
       # do something with record
       pass

Thread and process safety
=========================

The Psycopg module and the "connection" objects are *thread-safe*: many threads can access the same database either using separate sessions and creating a "connection" per thread or using the same connection and creating separate "cursor"s. In DB API 2.0 parlance, Psycopg is *level 2 thread safe*.

The difference between the above two approaches is that, using different connections, the commands will be executed in different sessions and will be served by different server processes. On the other hand, using many cursors on the same connection, all the commands will be executed in the same session (and in the same transaction if the connection is not in *autocommit* mode), but they will be serialized.

The above observations are only valid for regular threads: they don't apply to forked processes nor to green threads. "libpq" connections shouldn't be used by forked processes, so when using a module such as "multiprocessing" or a forking web deploy method such as FastCGI make sure to create the connections *after* the fork.

Connections shouldn't be shared either by different green threads: see *Support for coroutine libraries* for further details.

Using COPY TO and COPY FROM
===========================

Psycopg "cursor" objects provide an interface to the efficient PostgreSQL "COPY" command to move data from files to tables and back. The methods exposed are:

"copy_from()"
   Reads data *from* a file-like object appending them to a database table ("COPY table FROM file" syntax). The source file must have both "read()" and "readline()" methods.

"copy_to()"
   Writes the content of a table *to* a file-like object ("COPY table TO file" syntax). The target file must have a "write()" method.

"copy_expert()"
   Can be used to handle more specific cases and to use all the "COPY" features available in PostgreSQL.

Please refer to the documentation of the individual methods for details and examples.

Access to PostgreSQL large objects
==================================

PostgreSQL offers support for large objects, which provide stream-style access to user data that is stored in a special large-object structure. They are useful with data values too large to be manipulated conveniently as a whole.

Psycopg allows access to large objects using the "lobject" class. Objects are generated using the "connection.lobject()" factory method. Data can be retrieved either as bytes or as Unicode strings.

Psycopg large objects support efficient import/export with file system files using the "lo_import()" and "lo_export()" libpq functions.

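A minimal sketch of a large object round trip (the data is illustrative and the default create/open semantics are assumed; see the "lobject" class documentation for the complete interface):

   lobj = conn.lobject()           # create a new large object
   lobj.write('some binary data')  # write data to it
   oid = lobj.oid                  # keep the OID to find it again
   lobj.close()
   conn.commit()

   lobj = conn.lobject(oid, 'r')   # reopen it for reading
   print lobj.read()
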
Two-Phase Commit protocol support
=================================

New in version 2.3.

Psycopg exposes the two-phase commit features available since PostgreSQL 8.1, implementing the *two-phase commit extensions* proposed by the DB API 2.0.

The DB API 2.0 model of two-phase commit is inspired by the XA specification, according to which transaction IDs are formed from three components:

* a format ID (non-negative 32 bit integer)

* a global transaction ID (string not longer than 64 bytes)

* a branch qualifier (string not longer than 64 bytes)

For a particular global transaction, the first two components will be the same for all the resources. Every resource will be assigned a different branch qualifier.

According to the DB API 2.0 specification, a transaction ID is created using the "connection.xid()" method. Once you have a transaction id, a distributed transaction can be started with "connection.tpc_begin()", prepared using "tpc_prepare()" and completed using "tpc_commit()" or "tpc_rollback()". Transaction IDs can also be retrieved from the database using "tpc_recover()" and completed using the above "tpc_commit()" and "tpc_rollback()".

PostgreSQL doesn't follow the XA standard though, and the ID for a PostgreSQL prepared transaction can be any string up to 200 characters long. Psycopg's "Xid" objects can represent both XA-style transaction IDs (such as the ones created by the "xid()" method) and PostgreSQL transaction IDs identified by an unparsed string.

The format in which the Xids are converted into strings passed to the database is the same employed by the PostgreSQL JDBC driver: this should allow interoperation between tools written in Python and in Java. For example a recovery tool written in Python would be able to recognize the components of transactions produced by a Java program.

For further details see the documentation for the above methods.

The "psycopg2" module content
*****************************

The module interface respects the standard defined in the DB API 2.0.

psycopg2.connect(dsn, connection_factory=None, cursor_factory=None, async=False)
psycopg2.connect(**kwargs, connection_factory=None, cursor_factory=None, async=False)

   Create a new database session and return a new "connection" object.

   The connection parameters can be specified either as a libpq connection string using the *dsn* parameter:

      conn = psycopg2.connect("dbname=test user=postgres password=secret")

   or using a set of keyword arguments:

      conn = psycopg2.connect(database="test", user="postgres", password="secret")

   The two call styles are mutually exclusive: you cannot specify connection parameters as keyword arguments together with a connection string; only the parameters not needed for the database connection (*i.e.* *connection_factory*, *cursor_factory*, and *async*) are supported together with the *dsn* argument.

   The basic connection parameters are:

   * "dbname" -- the database name (only in the *dsn* string)

   * "database" -- the database name (only as keyword argument)

   * "user" -- user name used to authenticate

   * "password" -- password used to authenticate

   * "host" -- database host address (defaults to UNIX socket if not provided)

   * "port" -- connection port number (defaults to 5432 if not provided)

   Any other connection parameter supported by the client library/server can be passed either in the connection string or as a keyword. The PostgreSQL documentation contains the complete list of the supported parameters. Also note that the same parameters can be passed to the client library using environment variables.

   Using the *connection_factory* parameter a different class or connection factory can be specified. It should be a callable object taking a *dsn* string argument. See *Connection and cursor factories* for details.

   If a *cursor_factory* is specified, the connection's "cursor_factory" is set to it. If you only need customized cursors you can use this parameter instead of subclassing a connection.

   Using *async*="True" an asynchronous connection will be created: see *Asynchronous support* to learn about its advantages and limitations.

   Changed in version 2.4.3: any keyword argument is passed to the connection. Previously only the basic parameters (plus "sslmode") were supported as keywords.

   Changed in version 2.5: added the *cursor_factory* parameter.

   See also:

   * libpq connection string syntax

   * libpq supported connection parameters

   * libpq supported environment variables

   DB API extension: The parameters *connection_factory* and *async* are Psycopg extensions to the DB API 2.0.

psycopg2.apilevel

   String constant stating the supported DB API level. For "psycopg2" it is "2.0".

psycopg2.threadsafety

   Integer constant stating the level of thread safety the interface supports. For "psycopg2" it is "2", i.e. threads can share the module and the connection. See *Thread and process safety* for details.

psycopg2.paramstyle

   String constant stating the type of parameter marker formatting expected by the interface. For "psycopg2" it is "pyformat". See also *Passing parameters to SQL queries*.

Exceptions
==========

In compliance with the DB API 2.0, the module makes information about errors available through the following exceptions:

exception psycopg2.Warning

   Exception raised for important warnings like data truncations while inserting, etc. It is a subclass of the Python "StandardError".

exception psycopg2.Error

   Exception that is the base class of all other error exceptions. You can use this to catch all errors with one single "except" statement. Warnings are not considered errors and thus do not use this class as their base. It is a subclass of the Python "StandardError".

   pgerror

      String representing the error message returned by the backend, "None" if not available.

   pgcode

      String representing the error code returned by the backend, "None" if not available. The "errorcodes" module contains symbolic constants representing PostgreSQL error codes.

         >>> try:
         ...     cur.execute("SELECT * FROM barf")
         ... except Exception, e:
         ...     pass

         >>> e.pgcode
         '42P01'
         >>> print e.pgerror
         ERROR:  relation "barf" does not exist
         LINE 1: SELECT * FROM barf
                               ^

   cursor

      The cursor the exception was raised from; "None" if not applicable.

   diag

      A "Diagnostics" object containing further information about the error.

         >>> try:
         ...     cur.execute("SELECT * FROM barf")
         ... except Exception, e:
         ...     pass

         >>> e.diag.severity
         'ERROR'
         >>> e.diag.message_primary
         'relation "barf" does not exist'

      New in version 2.5.

   DB API extension: The "pgerror", "pgcode", "cursor", and "diag" attributes are Psycopg extensions.

exception psycopg2.InterfaceError

   Exception raised for errors that are related to the database interface rather than the database itself. It is a subclass of "Error".

exception psycopg2.DatabaseError

   Exception raised for errors that are related to the database. It is a subclass of "Error".

exception psycopg2.DataError

   Exception raised for errors that are due to problems with the processed data like division by zero, numeric value out of range, etc. It is a subclass of "DatabaseError".

exception psycopg2.OperationalError

   Exception raised for errors that are related to the database's operation and not necessarily under the control of the programmer, e.g. an unexpected disconnect occurs, the data source name is not found, a transaction could not be processed, a memory allocation error occurred during processing, etc. It is a subclass of "DatabaseError".

exception psycopg2.IntegrityError

   Exception raised when the relational integrity of the database is affected, e.g. a foreign key check fails. It is a subclass of "DatabaseError".

exception psycopg2.InternalError

   Exception raised when the database encounters an internal error, e.g. the cursor is not valid anymore, the transaction is out of sync, etc. It is a subclass of "DatabaseError".

exception psycopg2.ProgrammingError

   Exception raised for programming errors, e.g. table not found or already exists, syntax error in the SQL statement, wrong number of parameters specified, etc. It is a subclass of "DatabaseError".

exception psycopg2.NotSupportedError

   Exception raised in case a method or database API was used which is not supported by the database, e.g. requesting a "rollback()" on a connection that does not support transactions or has transactions turned off. It is a subclass of "DatabaseError".

DB API extension: Psycopg may raise a few other, more specialized, exceptions: currently "QueryCanceledError" and "TransactionRollbackError" are defined. These exceptions are not exposed by the main "psycopg2" module but are made available by the "extensions" module. All the additional exceptions are subclasses of standard DB API 2.0 exceptions, so trapping them specifically is not required.

This is the exception inheritance layout:

   "StandardError"
   |__ "Warning"
   |__ "Error"
       |__ "InterfaceError"
       |__ "DatabaseError"
           |__ "DataError"
           |__ "OperationalError"
           |   |__ "psycopg2.extensions.QueryCanceledError"
           |   |__ "psycopg2.extensions.TransactionRollbackError"
           |__ "IntegrityError"
           |__ "InternalError"
           |__ "ProgrammingError"
           |__ "NotSupportedError"

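The layout above means errors can be trapped at the granularity you need; a sketch (the query and the recovery policy are only illustrative):

   try:
       cur.execute("INSERT INTO authors (name) VALUES (%s);", ("O'Reilly",))
   except psycopg2.IntegrityError:
       conn.rollback()     # e.g. a duplicate key: discard and carry on
   except psycopg2.Error, e:
       print e.pgcode, e.pgerror   # any other database error
       conn.rollback()
       raise
   else:
       conn.commit()
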
Type Objects and Constructors
=============================

Note: This section is mostly copied verbatim from the DB API 2.0 specification. While these objects are exposed in compliance with the DB API, Psycopg offers very accurate tools to convert data between Python and PostgreSQL formats. See *Adapting new Python types to SQL syntax* and *Type casting of SQL types into Python objects*.

Many databases need to have the input in a particular format for binding to an operation's input parameters. For example, if an input is destined for a DATE column, then it must be bound to the database in a particular string format. Similar problems exist for "Row ID" columns or large binary items (e.g. blobs or RAW columns). This presents problems for Python since the parameters to the .execute*() method are untyped. When the database module sees a Python string object, it doesn't know if it should be bound as a simple CHAR column, as a raw BINARY item, or as a DATE.

To overcome this problem, a module must provide the constructors defined below to create objects that can hold special values. When passed to the cursor methods, the module can then detect the proper type of the input parameter and bind it accordingly.

A Cursor Object's description attribute returns information about each of the result columns of a query. The type_code must compare equal to one of the Type Objects defined below. Type Objects may be equal to more than one type code (e.g. DATETIME could be equal to the type codes for date, time and timestamp columns; see the Implementation Hints below for details).

The module exports the following constructors and singletons:

psycopg2.Date(year, month, day)

   This function constructs an object holding a date value.

psycopg2.Time(hour, minute, second)

   This function constructs an object holding a time value.

psycopg2.Timestamp(year, month, day, hour, minute, second)

   This function constructs an object holding a time stamp value.

psycopg2.DateFromTicks(ticks)

   This function constructs an object holding a date value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details).

psycopg2.TimeFromTicks(ticks)

   This function constructs an object holding a time value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details).

psycopg2.TimestampFromTicks(ticks)

   This function constructs an object holding a time stamp value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details).

psycopg2.Binary(string)

   This function constructs an object capable of holding a binary (long) string value.

Note: All the adapters returned by the module level factories ("Binary", "Date", "Time", "Timestamp" and the "*FromTicks" variants) expose the wrapped object (a regular Python object such as "datetime") in an "adapted" attribute.

psycopg2.STRING

   This type object is used to describe columns in a database that are string-based (e.g. CHAR).

psycopg2.BINARY

   This type object is used to describe (long) binary columns in a database (e.g. LONG, RAW, BLOBs).

psycopg2.NUMBER

   This type object is used to describe numeric columns in a database.

psycopg2.DATETIME

   This type object is used to describe date/time columns in a database.

psycopg2.ROWID

   This type object is used to describe the "Row ID" column in a database.

The "connection" class
**********************

class connection

   Handles the connection to a PostgreSQL database instance. It encapsulates a database session.

   Connections are created using the factory function "connect()".

   Connections are thread safe and can be shared among many threads. See *Thread and process safety* for details.

   cursor(name=None, cursor_factory=None, scrollable=None, withhold=False)

      Return a new "cursor" object using the connection.

      If *name* is specified, the returned cursor will be a *server side cursor* (also known as a *named cursor*). Otherwise it will be a regular *client side* cursor. By default a named cursor is declared without the "SCROLL" option and "WITHOUT HOLD": set the argument or property "scrollable" to "True"/"False" and/or "withhold" to "True" to change the declaration.

      The name can be a string not valid as a PostgreSQL identifier: for example it may start with a digit and contain non-alphanumeric characters and quotes.

      Changed in version 2.4: previously only valid PostgreSQL identifiers were accepted as cursor name.

      Warning: It is unsafe to expose the *name* to an untrusted source, for instance you shouldn't allow *name* to be read from an HTML form. Consider it as part of the query, not as a query parameter.

      The *cursor_factory* argument can be used to create non-standard cursors. The class returned must be a subclass of "psycopg2.extensions.cursor". See *Connection and cursor factories* for details. A default factory for the connection can also be specified using the "cursor_factory" attribute.

      Changed in version 2.4.3: added the *withhold* argument.

      Changed in version 2.5: added the *scrollable* argument.

      DB API extension: All the function arguments are Psycopg extensions to the DB API 2.0.

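      As an illustration of the *cursor_factory* parameter, the "psycopg2.extras" module (documented separately) provides some ready-made cursor subclasses; a minimal sketch using its "DictCursor", whose rows are accessible by column name:

         import psycopg2.extras

         dict_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
         dict_cur.execute("SELECT 1 AS id, 'foo' AS data;")
         rec = dict_cur.fetchone()
         print rec['id'], rec['data']    # columns accessible by name
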
   commit()

      Commit any pending transaction to the database.

      By default, Psycopg opens a transaction before executing the first command: if "commit()" is not called, the effect of any data manipulation will be lost.

      The connection can also be set in "autocommit" mode: no transaction is automatically opened, commands have immediate effect. See *Transactions control* for details.

      Changed in version 2.5: if the connection is used in a "with" statement, the method is automatically called if no exception is raised in the "with" block.

   rollback()

      Roll back to the start of any pending transaction. Closing a connection without committing the changes first will cause an implicit rollback to be performed.

      Changed in version 2.5: if the connection is used in a "with" statement, the method is automatically called if an exception is raised in the "with" block.

   close()

      Close the connection now (rather than whenever "del" is executed). The connection will be unusable from this point forward; an "InterfaceError" will be raised if any operation is attempted with the connection. The same applies to all cursor objects trying to use the connection. Note that closing a connection without committing the changes first will cause any pending change to be discarded as if a "ROLLBACK" was performed (unless a different isolation level has been selected: see "set_isolation_level()").

      Changed in version 2.2: previously an explicit "ROLLBACK" was issued by Psycopg on "close()". The command could have been sent to the backend at an inappropriate time, so Psycopg currently relies on the backend to implicitly discard uncommitted changes. Some middleware is known to behave incorrectly though when the connection is closed during a transaction (when "status" is "STATUS_IN_TRANSACTION"), e.g. PgBouncer reports an "unclean server" and discards the connection. To avoid this problem you can make sure to terminate the transaction with a "commit()"/"rollback()" before closing.

   -[ Exceptions as connection class attributes ]-

   The "connection" also exposes as attributes the same exceptions available in the "psycopg2" module. See *Exceptions*.

   -[ Two-phase commit support methods ]-

   New in version 2.3.

   See also: *Two-Phase Commit protocol support* for an introductory explanation of these methods.

   Note that PostgreSQL supports two-phase commit since release 8.1: these methods raise "NotSupportedError" if used with an older version server.

   xid(format_id, gtrid, bqual)

      Returns a "Xid" instance to be passed to the "tpc_*()" methods of this connection. The argument types and constraints are explained in *Two-Phase Commit protocol support*.

      The values passed to the method will be available on the returned object as the members "format_id", "gtrid", "bqual". The object also allows access to these members and unpacking as a 3-item tuple.

   tpc_begin(xid)

      Begins a TPC transaction with the given transaction ID *xid*.

      This method should be called outside of a transaction (i.e. nothing may have executed since the last "commit()" or "rollback()" and "connection.status" is "STATUS_READY").

      Furthermore, it is an error to call "commit()" or "rollback()" within the TPC transaction: in this case a "ProgrammingError" is raised.

      The *xid* may be either an object returned by the "xid()" method or a plain string: the latter allows creating a transaction using the provided string as the PostgreSQL transaction id. See also "tpc_recover()".

   tpc_prepare()

      Performs the first phase of a transaction started with "tpc_begin()". A "ProgrammingError" is raised if this method is used outside of a TPC transaction.

      After calling "tpc_prepare()", no statements can be executed until "tpc_commit()" or "tpc_rollback()" is called. The "reset()" method can be used to restore the status of the connection to "STATUS_READY": the transaction will remain prepared in the database and it will be possible to finish it with "tpc_commit(xid)" and "tpc_rollback(xid)".

      See also: the "PREPARE TRANSACTION" PostgreSQL command.

   tpc_commit([xid])

      When called with no arguments, "tpc_commit()" commits a TPC transaction previously prepared with "tpc_prepare()".

      If "tpc_commit()" is called prior to "tpc_prepare()", a single phase commit is performed. A transaction manager may choose to do this if only a single resource is participating in the global transaction.

      When called with a transaction ID *xid*, the database commits the given transaction. If an invalid transaction ID is provided, a "ProgrammingError" will be raised. This form should be called outside of a transaction, and is intended for use in recovery.

      On return, the TPC transaction is ended.

      See also: the "COMMIT PREPARED" PostgreSQL command.

   tpc_rollback([xid])

      When called with no arguments, "tpc_rollback()" rolls back a TPC transaction. It may be called before or after "tpc_prepare()".

      When called with a transaction ID *xid*, it rolls back the given transaction. If an invalid transaction ID is provided, a "ProgrammingError" is raised. This form should be called outside of a transaction, and is intended for use in recovery.

      On return, the TPC transaction is ended.

      See also: the "ROLLBACK PREPARED" PostgreSQL command.

   tpc_recover()

      Returns a list of "Xid" representing pending transactions, suitable for use with "tpc_commit()" or "tpc_rollback()".

      If a transaction was not initiated by Psycopg, the returned Xids will have attributes "format_id" and "bqual" set to "None" and the "gtrid" set to the PostgreSQL transaction ID: such Xids are still usable for recovery. Psycopg uses the same algorithm as the PostgreSQL JDBC driver to encode an XA triple into a string, so transactions initiated by a program using such driver should be unpacked correctly.

      Xids returned by "tpc_recover()" also have extra attributes "prepared", "owner", "database" populated with the values read from the server.

      See also: the "pg_prepared_xacts" system view.

   DB API extension: The above methods are the only ones defined by the DB API 2.0 protocol. The Psycopg connection objects export the following additional methods and attributes.

   closed

      Read-only attribute reporting whether the database connection is open (0) or closed (1).

   cancel()

      Cancel the current database operation.

      The method interrupts the processing of the current operation. If no query is being executed, it does nothing. You can call this function from a different thread than the one currently executing a database operation, for instance if you want to cancel a long running query if a button is pushed in the UI. Interrupting query execution will cause the cancelled method to raise a "QueryCanceledError". Note that the termination of the query is not guaranteed to succeed: see the documentation for "PQcancel()".

      New in version 2.3.

   reset()

      Reset the connection to the default.

      The method rolls back any pending transaction and executes the PostgreSQL "RESET" and "SET SESSION AUTHORIZATION" commands to revert the session to the default values.

      A two-phase commit transaction prepared using "tpc_prepare()" will remain in the database available for recovery.

      New in version 2.0.12.

   dsn

      Read-only string containing the connection string used by the connection.

   set_session([isolation_level,] [readonly,] [deferrable,] [autocommit])

      Set one or more parameters for the next transactions or statements in the current session. See "SET TRANSACTION" for further details.

      Parameters:
         * **isolation_level** -- set the isolation level for the next transactions/statements. The value can be one of the *constants* defined in the "extensions" module or one of the literal values "READ UNCOMMITTED", "READ COMMITTED", "REPEATABLE READ", "SERIALIZABLE".

         * **readonly** -- if "True", set the connection to read only; read/write if "False".

         * **deferrable** -- if "True", set the connection to deferrable; non deferrable if "False". Only available from PostgreSQL 9.1.

         * **autocommit** -- switch the connection to autocommit mode: not a PostgreSQL session setting but an alias for setting the "autocommit" attribute.

      The parameters *isolation_level*, *readonly* and *deferrable* also accept the string "DEFAULT" as a value: the effect is to reset the parameter to the server default.

      The function must be invoked with no transaction in progress. At every function invocation, only the specified parameters are changed. The defaults for these values are defined by the server configuration: see the values of "default_transaction_isolation", "default_transaction_read_only", "default_transaction_deferrable".

      Note: There is currently no builtin method to read the current value of the parameters: use "SHOW default_transaction_..." to read the values from the backend.

      New in version 2.4.2.

   autocommit

      Read/write attribute: if "True", no transaction is handled by the driver and every statement sent to the backend has immediate effect; if "False" a new transaction is started at the first command execution: the methods "commit()" or "rollback()" must be manually invoked to terminate the transaction.

      The autocommit mode is useful to execute commands that must be run outside a transaction, such as "CREATE DATABASE" or "VACUUM".

      The default is "False" (manual commit) as per DBAPI specification.

      Warning: By default, any query execution, including a simple "SELECT", will start a transaction: for long-running programs, if no further action is taken, the session will remain "idle in transaction", an undesirable condition for several reasons (locks are held by the session, tables bloat...). For long lived scripts, either make sure to terminate a transaction as soon as possible or use an autocommit connection.

      New in version 2.4.2.

   isolation_level
   set_isolation_level(level)

      Note: From version 2.4.2, "set_session()" and "autocommit" offer finer control on the transaction characteristics.

      Read or set the transaction isolation level for the current session. The level defines the different phenomena that can happen in the database between concurrent transactions.

      The value set or read is an integer: symbolic constants are defined in the module "psycopg2.extensions": see *Isolation level constants* for the available values.

      The default level is "READ COMMITTED": at this level a transaction is automatically started the first time a database command is executed. If you want an *autocommit* mode, switch to "ISOLATION_LEVEL_AUTOCOMMIT" before executing any command:

         >>> conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

      See also *Transactions control*.

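      A short sketch of the two styles described above (the values are only examples):

         # one set_session() call can change several session characteristics
         conn.set_session(isolation_level='SERIALIZABLE', readonly=False)

         # autocommit mode, e.g. to run commands forbidden inside a transaction
         conn.autocommit = True
         cur = conn.cursor()
         cur.execute("VACUUM;")
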
encoding
set_client_encoding(enc)

   Read or set the client encoding for the current session. The default is the encoding defined by the database. It should be one of the character sets supported by PostgreSQL.

notices

   A list containing all the database messages sent to the client during the session:

      >>> cur.execute("CREATE TABLE foo (id serial PRIMARY KEY);")
      >>> pprint(conn.notices)
      ['NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "foo_pkey" for table "foo"\n',
       'NOTICE: CREATE TABLE will create implicit sequence "foo_id_seq" for serial column "foo.id"\n']

   To avoid a leak in case excessive notices are generated, only the last 50 messages are kept. You can configure what messages to receive using PostgreSQL logging configuration parameters such as "log_statement", "client_min_messages", "log_min_duration_statement" etc.

notifies

   List of "Notify" objects containing asynchronous notifications received by the session. For other details see *Asynchronous notifications*.

   Changed in version 2.3: Notifications are instances of the "Notify" object. Previously the list was composed of 2-item tuples "(*pid*, *channel*)" and the payload was not accessible. To keep backward compatibility, "Notify" objects can still be accessed as 2-item tuples.

cursor_factory

   The default cursor factory used by "cursor()" if the parameter is not specified.

   New in version 2.5.

get_backend_pid()

   Returns the process ID (PID) of the backend server process handling this connection.

   Note that the PID belongs to a process executing on the database server host, not the local host!

   See also: libpq docs for "PQbackendPID()" for details.

   New in version 2.0.8.

get_parameter_status(parameter)

   Look up a current parameter setting of the server.

   Potential values for "parameter" are: "server_version", "server_encoding", "client_encoding", "is_superuser", "session_authorization", "DateStyle", "TimeZone", "integer_datetimes", and "standard_conforming_strings".

   If the server did not report the requested parameter, return "None".

   See also: libpq docs for "PQparameterStatus()" for details.

   New in version 2.0.12.

get_transaction_status()

   Return the current session transaction status as an integer. Symbolic constants for the values are defined in the module "psycopg2.extensions": see *Transaction status constants* for the available values.

   See also: libpq docs for "PQtransactionStatus()" for details.

protocol_version

   A read-only integer representing the frontend/backend protocol being used. Currently Psycopg supports only protocol 3, which allows connections to PostgreSQL servers from version 7.4. Psycopg versions earlier than 2.3 support both protocols 2 and 3.

   See also: libpq docs for "PQprotocolVersion()" for details.

   New in version 2.0.12.

server_version

   A read-only integer representing the backend version. The number is formed by converting the major, minor, and revision numbers into two-decimal-digit numbers and appending them together. For example, version 8.1.5 will be returned as "80105".

   See also: libpq docs for "PQserverVersion()" for details.

   New in version 2.0.12.

status

   A read-only integer representing the status of the connection. Symbolic constants for the values are defined in the module "psycopg2.extensions": see *Connection status constants* for the available values.

lobject([oid[, mode[, new_oid[, new_file[, lobject_factory]]]]])

   Return a new database large object as a "lobject" instance. See *Access to PostgreSQL large objects* for an overview.

   Parameters:
      * **oid** -- The OID of the object to read or write.
        Use 0 to create a new large object and have its OID assigned automatically.

      * **mode** -- Access mode to the object: see below.

      * **new_oid** -- Create a new object using the specified OID. The function raises "OperationalError" if the OID is already in use. The default is 0, meaning to assign a new one automatically.

      * **new_file** -- The name of a file to be imported in the database (using the "lo_import()" function).

      * **lobject_factory** -- Subclass of "lobject" to be instantiated.

   Available values for *mode* are:

   +--------+-------------------------------------------------------------------+
   | *mode* | meaning                                                           |
   +========+===================================================================+
   | "r"    | Open for read only                                                |
   +--------+-------------------------------------------------------------------+
   | "w"    | Open for write only                                               |
   +--------+-------------------------------------------------------------------+
   | "rw"   | Open for read/write                                               |
   +--------+-------------------------------------------------------------------+
   | "n"    | Don't open the file                                               |
   +--------+-------------------------------------------------------------------+
   | "b"    | Don't decode read data (return data as "str" in Python 2 or       |
   |        | "bytes" in Python 3)                                              |
   +--------+-------------------------------------------------------------------+
   | "t"    | Decode read data according to "connection.encoding" (return data  |
   |        | as "unicode" in Python 2 or "str" in Python 3)                    |
   +--------+-------------------------------------------------------------------+

   "b" and "t" can be specified together with a read/write mode. If neither "b" nor "t" is specified, the default is "b" in Python 2 and "t" in Python 3.

   New in version 2.0.8.

   Changed in version 2.4: added the "b" and "t" modes and Unicode support.

-[ Methods related to asynchronous support. ]-

New in version 2.2.0.

See also: *Asynchronous support* and *Support for coroutine libraries*.

async

   Read-only attribute: 1 if the connection is asynchronous, 0 otherwise.

poll()

   Used during an asynchronous connection attempt, or when a cursor is executing a query on an asynchronous connection, to make communication proceed if it wouldn't block.

   Return one of the constants defined in *Poll constants*. If it returns "POLL_OK" then the connection has been established or the query results are available on the client. Otherwise wait until the file descriptor returned by "fileno()" is ready to read or to write, as explained in *Asynchronous support*. "poll()" should also be used by the function installed by "set_wait_callback()" as explained in *Support for coroutine libraries*.

   "poll()" is also used to receive asynchronous notifications from the database: see *Asynchronous notifications* for further details.

fileno()

   Return the file descriptor underlying the connection: useful to read its status during asynchronous communication.

isexecuting()

   Return "True" if the connection is executing an asynchronous operation.

The "cursor" class
******************

class cursor

   Allows Python code to execute PostgreSQL commands in a database session.
Cursors are created by the "connection.cursor()" method: they are bound to the connection for their entire lifetime and all the commands are executed in the context of the database session wrapped by the connection.

Cursors created from the same connection are not isolated, i.e., any changes done to the database by a cursor are immediately visible to the other cursors. Cursors created from different connections may or may not be isolated, depending on the connections' *isolation level*. See also the "rollback()" and "commit()" methods.

Cursors are *not* thread safe: a multithreaded application can create many cursors from the same connection and should use each cursor from a single thread. See *Thread and process safety* for details.

description

   This read-only attribute is a sequence of 7-item sequences. Each of these sequences is a named tuple (a regular tuple if "collections.namedtuple()" is not available) containing information describing one result column:

   * "name": the name of the column returned.

   * "type_code": the PostgreSQL OID of the column. You can use the "pg_type" system table to get more information about the type. This is the value used by Psycopg to decide what Python type to use to represent the value. See also *Type casting of SQL types into Python objects*.

   * "display_size": the actual length of the column in bytes. Obtaining this value is computationally intensive, so it is always "None" unless the "PSYCOPG_DISPLAY_SIZE" parameter is set at compile time. See also "PQgetlength".

   * "internal_size": the size in bytes of the column associated with this column on the server. Set to a negative value for variable-size types. See also "PQfsize".

   * "precision": total number of significant digits in columns of type "NUMERIC". "None" for other types.

   * "scale": count of decimal digits in the fractional part in columns of type "NUMERIC". "None" for other types.

   * "null_ok": always "None" as it is not easy to retrieve from the libpq.

   This attribute will be "None" for operations that do not return rows or if the cursor has not had an operation invoked via the "execute*()" methods yet.

   Changed in version 2.4: if possible, column descriptions are named tuples instead of regular tuples.

close()

   Close the cursor now (rather than whenever "del" is executed). The cursor will be unusable from this point forward; an "InterfaceError" will be raised if any operation is attempted with the cursor.

   Changed in version 2.5: if the cursor is used in a "with" statement, the method is automatically called at the end of the "with" block.

closed

   Read-only boolean attribute: specifies if the cursor is closed ("True") or not ("False").

   DB API extension: The "closed" attribute is a Psycopg extension to the DB API 2.0.

   New in version 2.0.7.

connection

   Read-only attribute returning a reference to the "connection" object on which the cursor was created.

name

   Read-only attribute containing the name of the cursor if it was created as a named cursor by "connection.cursor()", or "None" if it is a client side cursor. See *Server side cursors*.

   DB API extension: The "name" attribute is a Psycopg extension to the DB API 2.0.

scrollable

   Read/write attribute: specifies if a named cursor is declared "SCROLL", hence capable of scrolling backwards (using "scroll()"). If "True", the cursor can be scrolled backwards; if "False" it is never scrollable. If "None" (default) the cursor scroll option is not specified, usually but not always meaning no backward scroll (see the "DECLARE" notes).
   Note: set the value before calling "execute()" or use the "connection.cursor()" *scrollable* parameter, otherwise the value will have no effect.

   New in version 2.5.

   DB API extension: The "scrollable" attribute is a Psycopg extension to the DB API 2.0.

withhold

   Read/write attribute: specifies if a named cursor's lifetime should extend outside of the current transaction, i.e., it is possible to fetch from the cursor even after a "connection.commit()" (but not after a "connection.rollback()"). See *Server side cursors*.

   Note: set the value before calling "execute()" or use the "connection.cursor()" *withhold* parameter, otherwise the value will have no effect.

   New in version 2.4.3.

   DB API extension: The "withhold" attribute is a Psycopg extension to the DB API 2.0.

-[ Commands execution methods ]-

execute(operation[, parameters])

   Prepare and execute a database operation (query or command).

   Parameters may be provided as a sequence or mapping and will be bound to variables in the operation. Variables are specified either with positional ("%s") or named ("%(*name*)s") placeholders. See *Passing parameters to SQL queries*.

   The method returns "None". If a query was executed, the returned values can be retrieved using the "fetch*()" methods.

executemany(operation, seq_of_parameters)

   Prepare a database operation (query or command) and then execute it against all parameter tuples or mappings found in the sequence "seq_of_parameters".

   The function is mostly useful for commands that update the database: any result set returned by the query is discarded.

   Parameters are bound to the query using the same rules described in the "execute()" method.

callproc(procname[, parameters])

   Call a stored database procedure with the given name. The sequence of parameters must contain one entry for each argument that the procedure expects. The result of the call is returned as a modified copy of the input sequence. Input parameters are left untouched; output and input/output parameters are replaced with possibly new values.

   The procedure may also provide a result set as output. This must then be made available through the standard "fetch*()" methods.

mogrify(operation[, parameters])

   Return a query string after arguments binding. The string returned is exactly the one that would be sent to the database running the "execute()" method or similar:

      >>> cur.mogrify("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
      "INSERT INTO test (num, data) VALUES (42, E'bar')"

   DB API extension: The "mogrify()" method is a Psycopg extension to the DB API 2.0.

setinputsizes(sizes)

   This method is exposed in compliance with the DB API 2.0. It currently does nothing but it is safe to call it.

-[ Results retrieval methods ]-

The following methods are used to read data from the database after an "execute()" call.

Note: "cursor" objects are iterable, so, instead of explicitly calling "fetchone()" in a loop, the object itself can be used:

   >>> cur.execute("SELECT * FROM test;")
   >>> for record in cur:
   ...     print record
   ...
   (1, 100, "abc'def")
   (2, None, 'dada')
   (3, 42, 'bar')

Changed in version 2.4: iterating over a *named cursor* fetches "itersize" records at a time from the backend. Previously only one record was fetched per roundtrip, resulting in a large overhead.
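For instance, the sample rows used by the fetch examples below could be loaded with a single "executemany()" call (a sketch, assuming the "test" table used throughout these examples):

   >>> cur.executemany(
   ...     "INSERT INTO test (num, data) VALUES (%s, %s)",
   ...     [(100, "abc'def"), (None, 'dada'), (42, 'bar')])
   >>> cur.connection.commit()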
fetchone()

   Fetch the next row of a query result set, returning a single tuple, or "None" when no more data is available:

      >>> cur.execute("SELECT * FROM test WHERE id = %s", (3,))
      >>> cur.fetchone()
      (3, 42, 'bar')

   A "ProgrammingError" is raised if the previous call to "execute*()" did not produce any result set or no call was issued yet.

fetchmany([size=cursor.arraysize])

   Fetch the next set of rows of a query result, returning a list of tuples. An empty list is returned when no more rows are available.

   The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor's "arraysize" determines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned:

      >>> cur.execute("SELECT * FROM test;")
      >>> cur.fetchmany(2)
      [(1, 100, "abc'def"), (2, None, 'dada')]
      >>> cur.fetchmany(2)
      [(3, 42, 'bar')]
      >>> cur.fetchmany(2)
      []

   A "ProgrammingError" is raised if the previous call to "execute*()" did not produce any result set or no call was issued yet.

   Note there are performance considerations involved with the size parameter. For optimal performance, it is usually best to use the "arraysize" attribute. If the size parameter is used, then it is best for it to retain the same value from one "fetchmany()" call to the next.

fetchall()

   Fetch all (remaining) rows of a query result, returning them as a list of tuples. An empty list is returned if there are no more records to fetch:

      >>> cur.execute("SELECT * FROM test;")
      >>> cur.fetchall()
      [(1, 100, "abc'def"), (2, None, 'dada'), (3, 42, 'bar')]

   A "ProgrammingError" is raised if the previous call to "execute*()" did not produce any result set or no call was issued yet.

scroll(value[, mode='relative'])

   Scroll the cursor in the result set to a new position according to mode.

   If "mode" is "relative" (default), the value is taken as an offset to the current position in the result set; if set to "absolute", the value states an absolute target position.

   If the scroll operation would leave the result set, a "ProgrammingError" is raised and the cursor position is not changed.

   The method can be used both for client-side cursors and *server-side cursors*. Server-side cursors can usually scroll backwards only if declared "scrollable".

   Note: According to the DB API 2.0, the exception raised for a cursor out of bounds should have been "IndexError". The best option is probably to catch both exceptions in your code:

      try:
          cur.scroll(1000 * 1000)
      except (ProgrammingError, IndexError), exc:
          deal_with_it(exc)

arraysize

   This read/write attribute specifies the number of rows to fetch at a time with "fetchmany()". It defaults to 1, meaning to fetch a single row at a time.

itersize

   Read/write attribute specifying the number of rows to fetch from the backend at each network roundtrip during *iteration* on a *named cursor*. The default is 2000.

   New in version 2.4.

   DB API extension: The "itersize" attribute is a Psycopg extension to the DB API 2.0.

rowcount

   This read-only attribute specifies the number of rows that the last "execute*()" produced (for DQL (Data Query Language) statements like "SELECT") or affected (for DML (Data Manipulation Language) statements like "UPDATE" or "INSERT").

   The attribute is -1 if no "execute*()" has been performed on the cursor or if the row count of the last operation can't be determined by the interface.
   Note: The DB API 2.0 interface reserves the right to redefine the latter case to have the object return "None" instead of -1 in future versions of the specification.

rownumber

   This read-only attribute provides the current 0-based index of the cursor in the result set or "None" if the index cannot be determined.

   The index can be seen as the index of the cursor in a sequence (the result set). The next fetch operation will fetch the row indexed by "rownumber" in that sequence.

lastrowid

   This read-only attribute provides the OID of the last row inserted by the cursor. If the table wasn't created with OID support or the last operation is not a single record insert, the attribute is set to "None".

   Note: PostgreSQL currently advises against creating OIDs on tables and the default for "CREATE TABLE" is to not support them. The "INSERT ... RETURNING" syntax, available from PostgreSQL 8.3, allows more flexibility.

query

   Read-only attribute containing the body of the last query sent to the backend (including bound arguments). "None" if no query has been executed yet:

      >>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
      >>> cur.query
      "INSERT INTO test (num, data) VALUES (42, E'bar')"

   DB API extension: The "query" attribute is a Psycopg extension to the DB API 2.0.

statusmessage

   Read-only attribute containing the message returned by the last command:

      >>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
      >>> cur.statusmessage
      'INSERT 0 1'

   DB API extension: The "statusmessage" attribute is a Psycopg extension to the DB API 2.0.

cast(oid, s)

   Convert a value from the PostgreSQL string representation to a Python object.

   Use the most specific of the typecasters registered by "register_type()".

   New in version 2.4.

   DB API extension: The "cast()" method is a Psycopg extension to the DB API 2.0.

tzinfo_factory

   The time zone factory used to handle data types such as "TIMESTAMP WITH TIME ZONE". It should be a "tzinfo" object. A few implementations are available in the "psycopg2.tz" module.

nextset()

   This method is not supported (PostgreSQL does not have multiple data sets) and will raise a "NotSupportedError" exception.

setoutputsize(size[, column])

   This method is exposed in compliance with the DB API 2.0. It currently does nothing but it is safe to call it.

-[ COPY-related methods ]-

DB API extension: The "COPY" command is a PostgreSQL extension to the SQL standard. As such, its support is a Psycopg extension to the DB API 2.0.

copy_from(file, table, sep='\t', null='\\N', size=8192, columns=None)

   Read data *from* the file-like object *file*, appending it to the table named *table*. See *Using COPY TO and COPY FROM* for an overview.

   Parameters:
      * **file** -- file-like object to read data from. It must have both "read()" and "readline()" methods.

      * **table** -- name of the table to copy data into.

      * **sep** -- column separator expected in the file. Defaults to a tab.

      * **null** -- textual representation of "NULL" in the file. The default is the two characters string "\N".

      * **size** -- size of the buffer used to read from the file.

      * **columns** -- iterable with the names of the columns to import. The length and types should match the content of the file to read. If not specified, it is assumed that the entire table matches the file structure.
   Example:

      >>> f = StringIO("42\tfoo\n74\tbar\n")
      >>> cur.copy_from(f, 'test', columns=('num', 'data'))
      >>> cur.execute("select * from test where id > 5;")
      >>> cur.fetchall()
      [(6, 42, 'foo'), (7, 74, 'bar')]

   Changed in version 2.0.6: added the *columns* parameter.

   Changed in version 2.4: data read from files implementing the "io.TextIOBase" interface are encoded in the connection "encoding" when sent to the backend.

copy_to(file, table, sep='\t', null='\\N', columns=None)

   Write the content of the table named *table* *to* the file-like object *file*. See *Using COPY TO and COPY FROM* for an overview.

   Parameters:
      * **file** -- file-like object to write data into. It must have a "write()" method.

      * **table** -- name of the table to copy data from.

      * **sep** -- column separator expected in the file. Defaults to a tab.

      * **null** -- textual representation of "NULL" in the file. The default is the two characters string "\N".

      * **columns** -- iterable with the names of the columns to export. If not specified, export all the columns.

   Example:

      >>> cur.copy_to(sys.stdout, 'test', sep="|")
      1|100|abc'def
      2|\N|dada
      ...

   Changed in version 2.0.6: added the *columns* parameter.

   Changed in version 2.4: data sent to files implementing the "io.TextIOBase" interface are decoded in the connection "encoding" when read from the backend.

copy_expert(sql, file, size=8192)

   Submit a user-composed "COPY" statement. The method is useful to handle all the parameters that PostgreSQL makes available (see the "COPY" command documentation).

   Parameters:
      * **sql** -- the "COPY" statement to execute.

      * **file** -- a file-like object to read or write (according to *sql*).

      * **size** -- size of the read buffer to be used in "COPY FROM".

   The *sql* statement should be in the form "COPY *table* TO STDOUT" to export "*table*" to the *file* object passed as argument or "COPY *table* FROM STDIN" to import the content of the *file* object into "*table*". *file* must be a readable file-like object (as required by "copy_from()") for a "COPY ... FROM STDIN" statement or a writable one (as required by "copy_to()") for "COPY ... TO STDOUT".

   Example:

      >>> cur.copy_expert("COPY test TO STDOUT WITH CSV HEADER", sys.stdout)
      id,num,data
      1,100,abc'def
      2,,dada
      ...

   New in version 2.0.6.

   Changed in version 2.4: files implementing the "io.TextIOBase" interface are dealt with using Unicode data instead of bytes.

More advanced topics
********************

Connection and cursor factories
===============================

Psycopg exposes two new-style classes that can be sub-classed and expanded to adapt them to the needs of the programmer: "psycopg2.extensions.cursor" and "psycopg2.extensions.connection". The "connection" class is usually sub-classed only to provide an easy way to create customized cursors, but other uses are possible. "cursor" is much more interesting, because it is the class where query building, execution and result type-casting into Python variables happens. The "extras" module contains several examples of *connection and cursor subclasses*.

Note: If you only need a customized cursor class, since Psycopg 2.5 you can use the "cursor_factory" parameter of a regular connection instead of creating a new "connection" subclass.
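For example (a sketch using the "LoggingCursor" class defined just below; "DSN" is assumed to be a valid connection string):

   conn = psycopg2.connect(DSN, cursor_factory=LoggingCursor)
   cur = conn.cursor()    # cur is a LoggingCursor instance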
An example of cursor subclass performing logging is:

   import psycopg2
   import psycopg2.extensions
   import logging

   class LoggingCursor(psycopg2.extensions.cursor):
       def execute(self, sql, args=None):
           logger = logging.getLogger('sql_debug')
           logger.info(self.mogrify(sql, args))

           try:
               psycopg2.extensions.cursor.execute(self, sql, args)
           except Exception, exc:
               logger.error("%s: %s" % (exc.__class__.__name__, exc))
               raise

   conn = psycopg2.connect(DSN)
   cur = conn.cursor(cursor_factory=LoggingCursor)
   cur.execute("INSERT INTO mytable VALUES (%s, %s, %s);", (10, 20, 30))

Adapting new Python types to SQL syntax
=======================================

Any Python class or type can be adapted to an SQL string. The adaptation mechanism is similar to the Object Adaptation proposed in **PEP 246** and is exposed by the "psycopg2.extensions.adapt()" function.

The "execute()" method adapts its arguments to the "ISQLQuote" protocol. Objects that conform to this protocol expose a "getquoted()" method returning the SQL representation of the object as a string (the method must return "bytes" in Python 3). Optionally the conforming object may expose a "prepare()" method.

There are two basic ways to have a Python object adapted to SQL:

* the object itself conforms, or knows how to make itself conform. Such an object must expose a "__conform__()" method that will be called with the protocol object as argument. The object can check that the protocol is "ISQLQuote", in which case it can return "self" (if the object also implements "getquoted()") or a suitable wrapper object. This option is viable if you are the author of the object and if the object is specifically designed for the database (i.e. having Psycopg as a dependency and polluting its interface with the required methods doesn't bother you). For a simple example you can take a look at the source code for the "psycopg2.extras.Inet" object.

* If implementing the "ISQLQuote" interface directly in the object is not an option (maybe because the object to adapt comes from a third party library), you can use an *adaptation function*, taking the object to be adapted as argument and returning a conforming object. The adapter must be registered via the "register_adapter()" function. A simple example wrapper is "psycopg2.extras.UUID_adapter" used by the "register_uuid()" function.

A convenient object for writing adapters is the "AsIs" wrapper, whose "getquoted()" result is simply the "str()" conversion of the wrapped object.

Example: mapping of a "Point" class into the "point" PostgreSQL geometric type:

   >>> from psycopg2.extensions import adapt, register_adapter, AsIs

   >>> class Point(object):
   ...     def __init__(self, x, y):
   ...         self.x = x
   ...         self.y = y

   >>> def adapt_point(point):
   ...     return AsIs("'(%s, %s)'" % (adapt(point.x), adapt(point.y)))

   >>> register_adapter(Point, adapt_point)

   >>> cur.execute("INSERT INTO atable (apoint) VALUES (%s)",
   ...             (Point(1.23, 4.56),))

The above function call results in the SQL command:

   INSERT INTO atable (apoint) VALUES ('(1.23, 4.56)');

Type casting of SQL types into Python objects
=============================================

PostgreSQL objects read from the database can be adapted to Python objects through a user-defined adapting function. An adapter function takes two arguments: the object string representation as returned by PostgreSQL and the cursor currently being read, and should return a new Python object.
For example, the following function parses the PostgreSQL "point" representation into the previously defined "Point" class:

   >>> def cast_point(value, cur):
   ...     if value is None:
   ...         return None
   ...
   ...     # Convert from (f1, f2) syntax using a regular expression.
   ...     m = re.match(r"\(([^)]+),([^)]+)\)", value)
   ...     if m:
   ...         return Point(float(m.group(1)), float(m.group(2)))
   ...     else:
   ...         raise InterfaceError("bad point representation: %r" % value)

In order to create a mapping from a PostgreSQL type (either standard or user-defined), its OID must be known. It can be retrieved either from the second column of the "cursor.description":

   >>> cur.execute("SELECT NULL::point")
   >>> point_oid = cur.description[0][1]
   >>> point_oid
   600

or by querying the system catalog for the type name and namespace (the namespace for system objects is "pg_catalog"):

   >>> cur.execute("""
   ...     SELECT pg_type.oid
   ...     FROM pg_type JOIN pg_namespace
   ...         ON typnamespace = pg_namespace.oid
   ...     WHERE typname = %(typename)s
   ...         AND nspname = %(namespace)s""",
   ...     {'typename': 'point', 'namespace': 'pg_catalog'})
   >>> point_oid = cur.fetchone()[0]
   >>> point_oid
   600

Once you know the object OID, you can create and register the new type:

   >>> POINT = psycopg2.extensions.new_type((point_oid,), "POINT", cast_point)
   >>> psycopg2.extensions.register_type(POINT)

The "new_type()" function binds the object OIDs (more than one can be specified) to the adapter function. "register_type()" completes the spell. Conversion is automatically performed when a column whose type is a registered OID is read:

   >>> cur.execute("SELECT '(10.2,20.3)'::point")
   >>> point = cur.fetchone()[0]
   >>> print type(point), point.x, point.y
   <class 'Point'> 10.2 20.3

A typecaster created by "new_type()" can also be used with "new_array_type()" to create a typecaster converting a PostgreSQL array into a Python list.

Asynchronous notifications
==========================

Psycopg allows asynchronous interaction with other database sessions using the facilities offered by the PostgreSQL commands "LISTEN" and "NOTIFY". Please refer to the PostgreSQL documentation for examples about how to use this form of communication.

Notifications are instances of the "Notify" object made available upon reception in the "connection.notifies" list. Notifications can be sent from Python code by simply executing a "NOTIFY" command in an "execute()" call.

Because of the way sessions interact with notifications (see the "NOTIFY" documentation), you should keep the connection in "autocommit" mode if you wish to receive or send notifications in a timely manner.

Notifications are received after every query execution. If the user is interested in receiving notifications but not in performing any query, the "poll()" method can be used to check for new messages without wasting resources.

A simple application could poll the connection from time to time to check if something new has arrived.
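Such a periodic check might look like the following sketch (it assumes an autocommit connection "conn" that has already executed a "LISTEN test" command; the five seconds pause is arbitrary):

   import time

   while 1:
       conn.poll()
       while conn.notifies:
           notify = conn.notifies.pop()
           print "Got NOTIFY:", notify.pid, notify.channel, notify.payload
       time.sleep(5)   # burns no CPU between checks, at the price of some latency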
A better strategy is to use some I/O completion function such as "select()" to sleep until awakened by the kernel when there is some data to read on the connection, thereby using no CPU unless there is something to read:

   import select
   import psycopg2
   import psycopg2.extensions

   conn = psycopg2.connect(DSN)
   conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

   curs = conn.cursor()
   curs.execute("LISTEN test;")

   print "Waiting for notifications on channel 'test'"
   while 1:
       if select.select([conn], [], [], 5) == ([], [], []):
           print "Timeout"
       else:
           conn.poll()
           while conn.notifies:
               notify = conn.notifies.pop()
               print "Got NOTIFY:", notify.pid, notify.channel, notify.payload

Running the script and executing a command such as "NOTIFY test, 'hello'" in a separate **psql** shell, the output may look similar to:

   Waiting for notifications on channel 'test'
   Timeout
   Timeout
   Got NOTIFY: 6535 test hello
   Timeout
   ...

Note that the payload is only available from PostgreSQL 9.0: notifications received from an older server version will have the "payload" attribute set to the empty string.

Changed in version 2.3: Added the "Notify" object and handling of the notification payload.

Asynchronous support
====================

New in version 2.2.0.

Psycopg can issue asynchronous queries to a PostgreSQL database. An asynchronous communication style is established by passing the parameter *async*=1 to the "connect()" function: the returned connection will work in *asynchronous mode*.

In asynchronous mode, a Psycopg connection will rely on the caller to poll the socket file descriptor, checking if it is ready to accept data or if a query result has been transferred and is ready to be read on the client. The caller can use the method "fileno()" to get the connection file descriptor and "poll()" to make communication proceed according to the current connection state.

The following is an example loop using the methods "fileno()" and "poll()" together with the Python "select()" function in order to carry out asynchronous operations with Psycopg:

   def wait(conn):
       while 1:
           state = conn.poll()
           if state == psycopg2.extensions.POLL_OK:
               break
           elif state == psycopg2.extensions.POLL_WRITE:
               select.select([], [conn.fileno()], [])
           elif state == psycopg2.extensions.POLL_READ:
               select.select([conn.fileno()], [], [])
           else:
               raise psycopg2.OperationalError("poll() returned %s" % state)

The above loop of course would block an entire application: in a real asynchronous framework, "select()" would be called on many file descriptors waiting for any of them to be ready. Nonetheless the function can be used to connect to a PostgreSQL server only using nonblocking commands and the connection obtained can be used to perform further nonblocking queries. After "poll()" has returned "POLL_OK", and thus "wait()" has returned, the connection can be safely used:

   >>> aconn = psycopg2.connect(database='test', async=1)
   >>> wait(aconn)
   >>> acurs = aconn.cursor()

Note that there are a few other requirements to be met in order to have a completely non-blocking connection attempt: see the libpq documentation for "PQconnectStart()".
The same loop should also be used to perform nonblocking queries: after sending a query via "execute()" or "callproc()", call "poll()" on the connection available from "cursor.connection" until it returns "POLL_OK", at which point the query has been completely sent to the server and, if it produced data, the results have been transferred to the client and are available using the regular cursor methods:

   >>> acurs.execute("SELECT pg_sleep(5); SELECT 42;")
   >>> wait(acurs.connection)
   >>> acurs.fetchone()[0]
   42

When an asynchronous query is being executed, "connection.isexecuting()" returns "True". Two cursors can't execute concurrent queries on the same asynchronous connection.

There are several limitations in using asynchronous connections: the connection is always in "autocommit" mode and it is not possible to change it. So a transaction is not implicitly started at the first query and it is not possible to use the methods "commit()" and "rollback()": you can manually control transactions using "execute()" to send database commands such as "BEGIN", "COMMIT" and "ROLLBACK". Similarly "set_session()" can't be used, but it is still possible to invoke the "SET" command with the proper "default_transaction_..." parameter.

With asynchronous connections it is also not possible to use "set_client_encoding()", "executemany()", *large objects*, *named cursors*.

*COPY commands* are not supported either in asynchronous mode, but this will probably be implemented in a future release.

Support for coroutine libraries
===============================

New in version 2.2.0.

Psycopg can be used together with coroutine-based libraries and participate in cooperative multithreading.

Coroutine-based libraries (such as Eventlet or gevent) can usually patch the Python standard library in order to enable a coroutine switch in the presence of blocking I/O: the process is usually referred to as making the system *green*, in reference to green threads.

Because Psycopg is a C extension module, it is not possible for coroutine libraries to patch it: Psycopg instead enables cooperative multithreading by allowing the registration of a *wait callback* using the "psycopg2.extensions.set_wait_callback()" function. When a wait callback is registered, Psycopg will use libpq non-blocking calls instead of the regular blocking ones, and will delegate to the callback the responsibility to wait for the socket to become readable or writable.

Working this way, the caller does not have the complete freedom to schedule the socket check whenever they want as with an *asynchronous connection*, but has the advantage of maintaining complete DB API 2.0 semantics: from the point of view of the end user, all Psycopg functions and objects will work transparently in the coroutine environment (blocking the calling green thread and giving other green threads the possibility to be scheduled), allowing unmodified code and third party libraries (such as SQLAlchemy) to be used in coroutine-based programs.

Warning: Psycopg connections are not *green thread safe* and can't be used concurrently by different green threads. Trying to execute more than one command at a time using one cursor per thread will result in an error (or a deadlock on versions before 2.4.2). Therefore, programmers are advised to either avoid sharing connections between coroutines or to use a library-friendly lock to synchronize shared connections, e.g. for pooling.
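For example, a wait callback can be registered once, before opening any connection; this sketch uses the "select()"-based callback shipped as "psycopg2.extras.wait_select()" (a real coroutine framework would register a callback hooked into its own event loop):

   import psycopg2
   import psycopg2.extensions
   import psycopg2.extras

   # From now on, every blocking libpq call goes through the callback.
   psycopg2.extensions.set_wait_callback(psycopg2.extras.wait_select)

   conn = psycopg2.connect(DSN)   # established using non-blocking calls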
Coroutine library authors should provide a callback implementation (and possibly a method to register it) to make Psycopg as green as they want. An example callback (using "select()" to block) is provided as "psycopg2.extras.wait_select()": it boils down to something similar to:

   def wait_select(conn):
       while 1:
           state = conn.poll()
           if state == extensions.POLL_OK:
               break
           elif state == extensions.POLL_READ:
               select.select([conn.fileno()], [], [])
           elif state == extensions.POLL_WRITE:
               select.select([], [conn.fileno()], [])
           else:
               raise OperationalError("bad state from poll: %s" % state)

Providing callback functions for the single coroutine libraries is outside psycopg2's scope, as the callback can be tied to the libraries' implementation details. You can check the psycogreen project for further information and resources about the topic.

Warning: *COPY commands* are currently not supported when a wait callback is registered, but they will probably be implemented in a future release. *Large objects* are not supported either: they are not compatible with asynchronous connections.

"psycopg2.extensions" -- Extensions to the DB API
*************************************************

The module contains a few objects and functions extending the minimum set of functionalities defined by the DB API 2.0.

class psycopg2.extensions.connection(dsn, async=False)

   It is the class usually returned by the "connect()" function. It is exposed by the "extensions" module in order to allow subclassing to extend its behaviour: the subclass should be passed to the "connect()" function using the "connection_factory" parameter. See also *Connection and cursor factories*.

   For a complete description of the class, see "connection".

class psycopg2.extensions.cursor(conn, name=None)

   It is the class usually returned by the "connection.cursor()" method. It is exposed by the "extensions" module in order to allow subclassing to extend its behaviour: the subclass should be passed to the "cursor()" method using the "cursor_factory" parameter. See also *Connection and cursor factories*.

   For a complete description of the class, see "cursor".

class psycopg2.extensions.lobject(conn[, oid[, mode[, new_oid[, new_file]]]])

   Wrapper for a PostgreSQL large object. See *Access to PostgreSQL large objects* for an overview.

   The class can be subclassed: see "connection.lobject()" to find out how to specify a "lobject" subclass.

   New in version 2.0.8.

   oid

      Database OID of the object.

   mode

      The mode in which the object was opened. See "connection.lobject()" for a description of the available modes.

   read(bytes=-1)

      Read a chunk of data from the current file position. If -1 (default) read all the remaining data.

      The result is a Unicode string (decoded according to "connection.encoding") if the file was opened in "t" mode, a bytes string for "b" mode.

      Changed in version 2.4: added Unicode support.

   write(str)

      Write a string to the large object. Return the number of bytes written. Unicode strings are encoded in the "connection.encoding" before writing.

      Changed in version 2.4: added Unicode support.

   export(file_name)

      Export the large object content to the file system. The method uses the efficient "lo_export()" libpq function.

   seek(offset, whence=0)

      Set the lobject current position.

   tell()

      Return the lobject current position.

   truncate(len=0)

      New in version 2.2.0.

      Truncate the lobject to the given size.
      The method will only be available if Psycopg has been built against libpq from PostgreSQL 8.3 or later and can only be used with PostgreSQL servers running these versions. It uses the "lo_truncate()" libpq function.

      Warning: If Psycopg is built with "lo_truncate()" support (i.e. if the **pg_config** used during setup is version >= 8.3), but at runtime an older libpq is found, Psycopg will fail to import. See *the lo_truncate FAQ* about the problem.

   close()

      Close the object.

   closed

      Boolean attribute specifying if the object is closed.

   unlink()

      Close the object and remove it from the database.

class psycopg2.extensions.Notify(pid, channel, payload='')

   A notification received from the backend.

   "Notify" instances are made available upon reception on the "notifies" member of the listening connection. The object can also be accessed as a 2-item tuple returning the members "(*pid*, *channel*)" for backward compatibility.

   See *Asynchronous notifications* for details.

   New in version 2.3.

   channel

      The name of the channel to which the notification was sent.

   payload

      The payload message of the notification.

      Attaching a payload to a notification is only available since PostgreSQL 9.0: for notifications received from previous versions of the server this member is always the empty string.

   pid

      The ID of the backend process that sent the notification.

      Note: if the sending session was handled by Psycopg, you can use "get_backend_pid()" to know its PID.

class psycopg2.extensions.Xid(format_id, gtrid, bqual)

   A transaction identifier used for two-phase commit.

   Usually returned by the connection methods "xid()" and "tpc_recover()". "Xid" instances can be unpacked as 3-item tuples containing the items "(*format_id*, *gtrid*, *bqual*)". The "str()" of the object returns the *transaction ID* used in the commands sent to the server.

   See *Two-Phase Commit protocol support* for an introduction.

   New in version 2.3.

   from_string(s)

      Create a "Xid" object from a string representation. Static method.

      If *s* is a PostgreSQL transaction ID produced by a XA transaction, the returned object will have "format_id", "gtrid" and "bqual" set to the values of the preparing XA id. Otherwise only the "gtrid" is populated with the unparsed string. The operation is the inverse of the one performed by "str(xid)".

   bqual

      Branch qualifier of the transaction. In a XA transaction every resource participating in a transaction receives a distinct branch qualifier. "None" if the transaction doesn't follow the XA standard.

   database

      Database the recovered transaction belongs to.

   format_id

      Format ID in a XA transaction. A non-negative 32 bit integer. "None" if the transaction doesn't follow the XA standard.

   gtrid

      Global transaction ID in a XA transaction. If the transaction doesn't follow the XA standard, it is the plain *transaction ID* used in the server commands.

   owner

      Name of the user who prepared a recovered transaction.

   prepared

      Timestamp (with timezone) at which a recovered transaction was prepared.

class psycopg2.extensions.Diagnostics(exception)

   Details from a database error report.

   The object is returned by the "diag" attribute of the "Error" object. All the information available from the "PQresultErrorField()" function is exposed as attributes of the object, e.g. the "severity" attribute returns the "PG_DIAG_SEVERITY" code. Please refer to the PostgreSQL documentation for the meaning of all the attributes.

   New in version 2.5.
   The attributes currently available are:

   column_name
   constraint_name
   context
   datatype_name
   internal_position
   internal_query
   message_detail
   message_hint
   message_primary
   schema_name
   severity
   source_file
   source_function
   source_line
   sqlstate
   statement_position
   table_name

      A string with the error field if available; "None" if not available. An attribute value is available only if the error was sent by the server: not all the fields are available for all the errors and for all the server versions.

psycopg2.extensions.set_wait_callback(f)

   Register a callback function to block waiting for data.

   The callback should have signature "fun(*conn*)" and is called to wait for data available whenever a blocking function from the libpq is called. Use "set_wait_callback(None)" to revert to the original behaviour (i.e. using blocking libpq functions).

   The function is a hook to allow coroutine-based libraries (such as Eventlet or gevent) to switch when Psycopg is blocked, allowing other coroutines to run concurrently.

   See "wait_select()" for an example of a wait callback implementation.

   New in version 2.2.0.

psycopg2.extensions.get_wait_callback()

   Return the currently registered wait callback. Return "None" if no callback is currently registered.

   New in version 2.2.0.

SQL adaptation protocol objects
===============================

Psycopg provides a flexible system to adapt Python objects to the SQL syntax (inspired by **PEP 246**), allowing serialization in PostgreSQL. See *Adapting new Python types to SQL syntax* for a detailed description. The following objects deal with Python objects adaptation:

psycopg2.extensions.adapt(obj)

   Return the SQL representation of *obj* as a string. Raise a "ProgrammingError" if it is not known how to adapt the object. In order to allow new objects to be adapted, register a new adapter for them using the "register_adapter()" function.

   The function is the entry point of the adaptation mechanism: it can be used to write adapters for complex objects by recursively calling "adapt()" on their components.

psycopg2.extensions.register_adapter(class, adapter)

   Register a new adapter for the objects of class *class*.

   *adapter* should be a function taking a single argument (the object to adapt) and returning an object conforming to the "ISQLQuote" protocol (e.g. exposing a "getquoted()" method). The "AsIs" wrapper is often useful for this task.

   Once an object is registered, it can be safely used in SQL queries and by the "adapt()" function.

class psycopg2.extensions.ISQLQuote(wrapped_object)

   Represents the SQL adaptation protocol. Objects conforming to this protocol should implement a "getquoted()" and optionally a "prepare()" method.

   Adapters may subclass "ISQLQuote", but it is not necessary: it is enough to expose a "getquoted()" method to be conforming.

   _wrapped

      The wrapped object passed to the constructor.

   getquoted()

      Subclasses or other conforming objects should return a valid SQL string representing the wrapped object. In Python 3 the SQL must be returned in a "bytes" object. The "ISQLQuote" implementation does nothing.

   prepare(conn)

      Prepare the adapter for a connection. The method is optional: if implemented, it will be invoked before "getquoted()" with the connection to adapt for as argument.

      A conforming object can implement this method if the SQL representation depends on any server parameter, such as the server version or the "standard_conforming_strings" setting.
      Container objects may store the connection and use it to recursively prepare contained objects: see the implementation of "psycopg2.extensions.SQL_IN" for a simple example.

class psycopg2.extensions.AsIs(object)

   Adapter conforming to the "ISQLQuote" protocol, useful for objects whose string representation is already valid as SQL representation.

   getquoted()

      Return the "str()" conversion of the wrapped object:

         >>> AsIs(42).getquoted()
         '42'

class psycopg2.extensions.QuotedString(str)

   Adapter conforming to the "ISQLQuote" protocol for string-like objects.

   getquoted()

      Return the string enclosed in single quotes. Any single quote appearing in the string is escaped by doubling it according to SQL string constants syntax. Backslashes are escaped too:

         >>> QuotedString(r"O'Reilly").getquoted()
         "'O''Reilly'"

class psycopg2.extensions.Binary(str)

   Adapter conforming to the "ISQLQuote" protocol for binary objects.

   getquoted()

      Return the string enclosed in single quotes. It performs the same escaping as the "QuotedString" adapter, plus it knows how to escape non-printable chars:

         >>> Binary("\x00\x08\x0F").getquoted()
         "'\\\\000\\\\010\\\\017'"

   Changed in version 2.0.14: previously the adapter was not exposed by the "extensions" module. In older versions it can be imported from the implementation module "psycopg2._psycopg".

class psycopg2.extensions.Boolean
class psycopg2.extensions.Float
class psycopg2.extensions.SQL_IN

   Specialized adapters for builtin objects.

class psycopg2.extensions.DateFromPy
class psycopg2.extensions.TimeFromPy
class psycopg2.extensions.TimestampFromPy
class psycopg2.extensions.IntervalFromPy

   Specialized adapters for Python datetime objects.

class psycopg2.extensions.DateFromMx
class psycopg2.extensions.TimeFromMx
class psycopg2.extensions.TimestampFromMx
class psycopg2.extensions.IntervalFromMx

   Specialized adapters for mx.DateTime objects.

psycopg2.extensions.adapters

   Dictionary of the currently registered object adapters. Use "register_adapter()" to add an adapter for a new type.

Database types casting functions
================================

These functions are used to manipulate type casters to convert from PostgreSQL types to Python objects. See *Type casting of SQL types into Python objects* for details.

psycopg2.extensions.new_type(oids, name, adapter)

   Create a new type caster to convert from a PostgreSQL type to a Python object. The object created must be registered using "register_type()" to be used.

   Parameters:
      * **oids** -- tuple of OIDs of the PostgreSQL type to convert.

      * **name** -- the name of the new type adapter.

      * **adapter** -- the adaptation function.

   The object OID can be read from the "cursor.description" attribute or by querying the PostgreSQL catalog.

   *adapter* should have signature "fun(*value*, *cur*)" where *value* is the string representation returned by PostgreSQL and *cur* is the cursor from which data are read. In case of "NULL", *value* will be "None". The adapter should return the converted object.

   See *Type casting of SQL types into Python objects* for a usage example.

psycopg2.extensions.new_array_type(oids, name, base_caster)

   Create a new type caster to convert from a PostgreSQL array type to a list of Python objects. The object created must be registered using "register_type()" to be used.

   Parameters:
      * **oids** -- tuple of OIDs of the PostgreSQL type to convert. It should probably be the OID of the array type (e.g. the "typarray" field in the "pg_type" table).
      * **name** -- the name of the new type adapter.

      * **base_caster** -- a Psycopg typecaster, e.g. created using the "new_type()" function. The caster should be able to parse a single item of the desired type.

   New in version 2.4.3.

   Note: The function can be used to create a generic array typecaster, returning a list of strings: just use "STRING" as the base typecaster. For instance, if you want to receive from the database an array of "macaddr", each address represented by a string, you can use:

      psycopg2.extensions.register_type(
          psycopg2.extensions.new_array_type(
              (1040,), 'MACADDR[]', psycopg2.STRING))

psycopg2.extensions.register_type(obj[, scope])

   Register a type caster created using "new_type()".

   If *scope* is specified, it should be a "connection" or a "cursor": the type caster will only be effective for the specified object. Otherwise it will be globally registered.

psycopg2.extensions.string_types

   The global register of type casters.

psycopg2.extensions.encodings

   Mapping from PostgreSQL encoding names to Python codec names. Used by Psycopg when adapting or casting unicode strings. See *Unicode handling*.

Additional exceptions
=====================

The module exports a few exceptions in addition to the *standard ones* defined by the DB API 2.0.

exception psycopg2.extensions.QueryCanceledError

   (subclasses "OperationalError")

   Error related to SQL query cancellation. It can be trapped specifically to detect a timeout.

   New in version 2.0.7.

exception psycopg2.extensions.TransactionRollbackError

   (subclasses "OperationalError")

   Error causing transaction rollback (deadlocks, serialization failures, etc). It can be trapped specifically to detect a deadlock.

   New in version 2.0.7.

Isolation level constants
=========================

Psycopg2 "connection" objects hold information about the PostgreSQL transaction isolation level. The current transaction level can be read from the "isolation_level" attribute. The default isolation level is "READ COMMITTED". A different isolation level can be set through the "set_isolation_level()" method. The level can be set to one of the following constants:

psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT

   No transaction is started when commands are issued and no "commit()" or "rollback()" is required. Some PostgreSQL commands such as "CREATE DATABASE" or "VACUUM" can't run inside a transaction: to run such commands use:

      >>> conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)

   See also *Transactions control*.

psycopg2.extensions.ISOLATION_LEVEL_READ_UNCOMMITTED

   The "READ UNCOMMITTED" isolation level is defined in the SQL standard but not available in the MVCC (Multiversion concurrency control) model of PostgreSQL: it is replaced by the stricter "READ COMMITTED".

psycopg2.extensions.ISOLATION_LEVEL_READ_COMMITTED

   This is usually the default PostgreSQL value, but a different default may be set in the database configuration.

   A new transaction is started at the first "execute()" command on a cursor and at each new "execute()" after a "commit()" or a "rollback()". The transaction runs in the PostgreSQL "READ COMMITTED" isolation level: a "SELECT" query sees only data committed before the query began; it never sees either uncommitted data or changes committed during query execution by concurrent transactions.

   See also: Read Committed Isolation Level in the PostgreSQL documentation.

psycopg2.extensions.ISOLATION_LEVEL_REPEATABLE_READ

   As in "ISOLATION_LEVEL_READ_COMMITTED", a new transaction is started at the first "execute()" command.
   Transactions run at a "REPEATABLE READ" isolation level: all the queries in a transaction see a snapshot as of the start of the transaction, not as of the start of the current query within the transaction. However applications using this level must be prepared to retry transactions due to serialization failures.

   While this level provides a guarantee that each transaction sees a completely stable view of the database, this view will not necessarily always be consistent with some serial (one at a time) execution of concurrent transactions of the same level.

   Changed in version 2.4.2: The value was an alias for "ISOLATION_LEVEL_SERIALIZABLE" before. The two levels are distinct since PostgreSQL 9.1.

   See also: Repeatable Read Isolation Level in the PostgreSQL documentation.

psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE

   As in "ISOLATION_LEVEL_READ_COMMITTED", a new transaction is started at the first "execute()" command.

   Transactions run at a "SERIALIZABLE" isolation level. This is the strictest transaction isolation level, equivalent to having the transactions executed serially rather than concurrently. However applications using this level must be prepared to retry transactions due to serialization failures.

   Starting from PostgreSQL 9.1, this mode monitors for conditions which could make execution of a concurrent set of serializable transactions behave in a manner inconsistent with all possible serial (one at a time) executions of those transactions. In previous versions the behaviour was the same as the "REPEATABLE READ" isolation level.

   See also: Serializable Isolation Level in the PostgreSQL documentation.

Transaction status constants
============================

These values represent the possible status of a transaction: the current value can be read using the "connection.get_transaction_status()" method.

psycopg2.extensions.TRANSACTION_STATUS_IDLE

   The session is idle and there is no current transaction.

psycopg2.extensions.TRANSACTION_STATUS_ACTIVE

   A command is currently in progress.

psycopg2.extensions.TRANSACTION_STATUS_INTRANS

   The session is idle in a valid transaction block.

psycopg2.extensions.TRANSACTION_STATUS_INERROR

   The session is idle in a failed transaction block.

psycopg2.extensions.TRANSACTION_STATUS_UNKNOWN

   Reported if the connection with the server is bad.

Connection status constants
===========================

These values represent the possible status of a connection: the current value can be read from the "status" attribute.

It is possible to find the connection in states other than the ones shown below. Those below are the only states in which a working connection is expected to be found during the execution of regular Python client code: other states are for internal usage and Python code should not rely on them.

psycopg2.extensions.STATUS_READY

   Connection established. No transaction in progress.

psycopg2.extensions.STATUS_BEGIN

   Connection established. A transaction is currently in progress.

psycopg2.extensions.STATUS_IN_TRANSACTION

   An alias for "STATUS_BEGIN".

psycopg2.extensions.STATUS_PREPARED

   The connection has been prepared for the second phase in a *two-phase commit* transaction. The connection can't be used to send commands to the database until the transaction is finished with "tpc_commit()" or "tpc_rollback()".

   New in version 2.3.

Poll constants
==============

New in version 2.2.0.

These values can be returned by "connection.poll()" during asynchronous connection and communication. They match the values in the libpq enum "PostgresPollingStatusType".
Poll constants ============== New in version 2.2.0. These values can be returned by "connection.poll()" during asynchronous connection and communication. They match the values in the libpq enum "PostgresPollingStatusType". See *Asynchronous support* and *Support for coroutine libraries*. psycopg2.extensions.POLL_OK The data being read is available, or the file descriptor is ready for writing: reading or writing will not block. psycopg2.extensions.POLL_READ Some data is being read from the backend, but it is not available yet on the client and reading would block. Upon receiving this value, the client should wait for the connection file descriptor to be ready *for reading*. For example: select.select([conn.fileno()], [], []) psycopg2.extensions.POLL_WRITE Some data is being sent to the backend but the connection file descriptor can't currently accept new data. Upon receiving this value, the client should wait for the connection file descriptor to be ready *for writing*. For example: select.select([], [conn.fileno()], []) psycopg2.extensions.POLL_ERROR There was a problem during connection polling. This value should actually never be returned: in case of poll error an exception containing the relevant details is usually raised.
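Putting the constants together, a minimal sketch of a blocking wait loop, along the lines of the one shown in *Asynchronous support* (the connection parameters are hypothetical):

   import select
   import psycopg2
   import psycopg2.extensions

   def wait(conn):
       # Loop until the last operation on the connection (connect,
       # execute...) is complete, waiting on the file descriptor when
       # the library reports that reading or writing would block.
       while True:
           state = conn.poll()
           if state == psycopg2.extensions.POLL_OK:
               break
           elif state == psycopg2.extensions.POLL_WRITE:
               select.select([], [conn.fileno()], [])
           elif state == psycopg2.extensions.POLL_READ:
               select.select([conn.fileno()], [], [])
           else:
               raise psycopg2.OperationalError("poll() returned %s" % state)

   aconn = psycopg2.connect("dbname=test", async=1)
   wait(aconn)  # wait for the connection to be established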
"psycopg2.tz" -- "tzinfo" implementations for Psycopg 2 ******************************************************** This module holds two different tzinfo implementations that can be used as the "tzinfo" argument to "datetime" constructors, directly passed to Psycopg functions or used to set the "cursor.tzinfo_factory" attribute in cursors. class class psycopg2.tz.FixedOffsetTimezone(offset=None, name=None) Fixed offset in minutes east from UTC. This is exactly the implementation found in Python 2.3.x documentation, with a small change to the "__init__()" method to allow for pickling and a default name in the form "sHH:MM" ("s" is the sign.). The implementation also caches instances. During creation, if a FixedOffsetTimezone instance has previously been created with the same offset and name that instance will be returned. This saves memory and improves comparability. class class psycopg2.tz.LocalTimezone Platform idea of local timezone. This is the exact implementation from the Python 2.3 documentation. "psycopg2.pool" -- Connections pooling ************************************** Creating new PostgreSQL connections can be an expensive operation. This module offers a few pure Python classes implementing simple connection pooling directly in the client application. class class psycopg2.pool.AbstractConnectionPool(minconn, maxconn, *args, **kwargs) Base class implementing generic key-based pooling code. New *minconn* connections are created automatically. The pool will support a maximum of about *maxconn* connections. **args* and ***kwargs* are passed to the "connect()" function. The following methods are expected to be implemented by subclasses: getconn(key=None) Get a free connection and assign it to *key* if not "None". putconn(conn, key=None, close=False) Put away a connection. If *close* is "True", discard the connection from the pool. closeall() Close all the connections handled by the pool. Note that all the connections are closed, including ones eventually in use by the application. The following classes are "AbstractConnectionPool" subclasses ready to be used. class class psycopg2.pool.SimpleConnectionPool(minconn, maxconn, *args, **kwargs) A connection pool that can't be shared across different threads. Note: This pool class is useful only for single-threaded applications. class class psycopg2.pool.ThreadedConnectionPool(minconn, maxconn, *args, **kwargs) A connection pool that works with the threading module. Note: This pool class can be safely used in multi-threaded applications. class class psycopg2.pool.PersistentConnectionPool(minconn, maxconn, *args, **kwargs) A pool that assigns persistent connections to different threads. Note that this connection pool generates by itself the required keys using the current thread id. This means that until a thread puts away a connection it will always get the same connection object by successive "getconn()" calls. This also means that a thread can't use more than one single connection from the pool. Note: This pool class is mostly designed to interact with Zope and probably not useful in generic applications. "psycopg2.extras" -- Miscellaneous goodies for Psycopg 2 ******************************************************** This module is a generic place used to hold little helper functions and classes until a better place in the distribution is found. Connection and cursor subclasses ================================ A few objects that change the way the results are returned by the cursor or modify the object behavior in some other way. 
Typically "cursor" subclasses are passed as *cursor_factory* argument to "connect()" so that the connection's "cursor()" method will generate objects of this class. Alternatively a "cursor" subclass can be used one-off by passing it as the *cursor_factory* argument to the "cursor()" method. If you want to use a "connection" subclass you can pass it as the *connection_factory* argument of the "connect()" function. Dictionary-like cursor ---------------------- The dict cursors allow to access to the retrieved records using an interface similar to the Python dictionaries instead of the tuples. >>> dict_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor) >>> dict_cur.execute("INSERT INTO test (num, data) VALUES(%s, %s)", ... (100, "abc'def")) >>> dict_cur.execute("SELECT * FROM test") >>> rec = dict_cur.fetchone() >>> rec['id'] 1 >>> rec['num'] 100 >>> rec['data'] "abc'def" The records still support indexing as the original tuple: >>> rec[2] "abc'def" class class psycopg2.extras.DictCursor(*args, **kwargs) A cursor that keeps a list of column name -> index mappings. class class psycopg2.extras.DictConnection A connection that uses "DictCursor" automatically. Note: Not very useful since Psycopg 2.5: you can use "psycopg2.connect""(dsn, cursor_factory=DictCursor)" instead of "DictConnection". class class psycopg2.extras.DictRow(cursor) A row object that allow by-column-name access to data. Real dictionary cursor ---------------------- class class psycopg2.extras.RealDictCursor(*args, **kwargs) A cursor that uses a real dict as the base type for rows. Note that this cursor is extremely specialized and does not allow the normal access (using integer indices) to fetched data. If you need to access database rows both as a dictionary and a list, then use the generic "DictCursor" instead of "RealDictCursor". class class psycopg2.extras.RealDictConnection A connection that uses "RealDictCursor" automatically. Note: Not very useful since Psycopg 2.5: you can use "psycopg2.connect""(dsn, cursor_factory=RealDictCursor)" instead of "RealDictConnection". class class psycopg2.extras.RealDictRow(cursor) A "dict" subclass representing a data record. "namedtuple" cursor ------------------- New in version 2.3. These objects require "collections.namedtuple()" to be found, so it is available out-of-the-box only from Python 2.6. Anyway, the namedtuple implementation is compatible with previous Python versions, so all you have to do is to download it and make it available where we expect it to be... from somewhere import namedtuple import collections collections.namedtuple = namedtuple from psycopg.extras import NamedTupleConnection # ... class class psycopg2.extras.NamedTupleCursor A cursor that generates results as "namedtuple". "fetch*()" methods will return named tuples instead of regular tuples, so their elements can be accessed both as regular numeric items as well as attributes. >>> nt_cur = conn.cursor(cursor_factory=psycopg2.extras.NamedTupleCursor) >>> rec = nt_cur.fetchone() >>> rec Record(id=1, num=100, data="abc'def") >>> rec[1] 100 >>> rec.data "abc'def" class class psycopg2.extras.NamedTupleConnection A connection that uses "NamedTupleCursor" automatically. Note: Not very useful since Psycopg 2.5: you can use "psycopg2.connect""(dsn, cursor_factory=NamedTupleCursor)" instead of "NamedTupleConnection". Logging cursor -------------- class class psycopg2.extras.LoggingConnection A connection that logs all queries to a file or logger object. filter(msg, curs) Filter the query before logging it. 
This is the method to override to filter unwanted queries out of the log or to add some extra data to the output. The default implementation just does nothing. initialize(logobj) Initialize the connection to log to "logobj". The "logobj" parameter can be an open file object or a Logger instance from the standard logging module. class psycopg2.extras.LoggingCursor A cursor that logs queries using its connection's logging facilities. class psycopg2.extras.MinTimeLoggingConnection A connection that logs queries based on execution time. This is just an example of how to sub-class "LoggingConnection" to provide some extra filtering for the logged queries. Both the "initialize()" and "filter()" methods are overridden to make sure that only queries executing for more than "mintime" ms are logged. Note that this connection uses the specialized cursor "MinTimeLoggingCursor".  class psycopg2.extras.MinTimeLoggingCursor The cursor sub-class companion to "MinTimeLoggingConnection".
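For example, a minimal sketch of query logging through a "LoggingConnection"; the connection parameters are hypothetical:

   import logging
   import psycopg2
   import psycopg2.extras

   logging.basicConfig(level=logging.DEBUG)
   logger = logging.getLogger("sql")

   conn = psycopg2.connect(
       "dbname=test",
       connection_factory=psycopg2.extras.LoggingConnection)
   conn.initialize(logger)  # required before using the connection

   curs = conn.cursor()
   curs.execute("SELECT 42")  # the statement is also sent to the logger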
Additional data types ===================== JSON adaptation --------------- New in version 2.5. Psycopg can adapt Python objects to and from the PostgreSQL "json" type. With PostgreSQL 9.2 adaptation is available out-of-the-box; to use JSON data with previous database versions (either with the 9.1 "json" extension, or even to convert text fields to JSON) you can use "register_json()". The Python library used to convert Python objects to JSON depends on the language version: with Python 2.6 and later the "json" module from the standard library is used; with previous versions the simplejson module is used if available. Note that the last "simplejson" version supporting Python 2.4 is 2.0.9. In order to pass a Python object to the database as a query argument you can use the "Json" adapter: curs.execute("insert into mytable (jsondata) values (%s)", [Json({'a': 100})]) Reading from the database, "json" values will be automatically converted to Python objects. Note: You can use "register_adapter()" to adapt any Python dictionary to JSON, either registering "Json" or any subclass or factory creating a compatible adapter: psycopg2.extensions.register_adapter(dict, psycopg2.extras.Json) This setting is global though, so it is not compatible with similar adapters such as the one registered by "register_hstore()". Any other object supported by JSON can be registered the same way, but this will clobber the default adaptation rule, so be careful of unwanted side effects. If you want to customize the adaptation from Python to PostgreSQL you can either provide a custom "dumps()" function to "Json": curs.execute("insert into mytable (jsondata) values (%s)", [Json({'a': 100}, dumps=simplejson.dumps)]) or you can subclass it overriding the "dumps()" method: class MyJson(Json): def dumps(self, obj): return simplejson.dumps(obj) curs.execute("insert into mytable (jsondata) values (%s)", [MyJson({'a': 100})]) Customizing the conversion from PostgreSQL to Python can be done by passing a custom "loads()" function to "register_json()" (or "register_default_json()" for PostgreSQL 9.2). For example, if you want to convert the float values from "json" into "Decimal" you can use: loads = lambda x: json.loads(x, parse_float=Decimal) psycopg2.extras.register_json(conn, loads=loads) class psycopg2.extras.Json(adapted, dumps=None) An "ISQLQuote" wrapper to adapt a Python object to the "json" data type. "Json" can be used to wrap any object supported by the provided *dumps* function. If none is provided, the standard "json.dumps()" is used ("simplejson" for Python < 2.6; "getquoted()" will raise "ImportError" if the module is not available). dumps(obj) Serialize *obj* in JSON format. The default is to call "json.dumps()" or the *dumps* function provided in the constructor. You can override this method to create a customized JSON wrapper. psycopg2.extras.register_json(conn_or_curs=None, globally=False, loads=None, oid=None, array_oid=None) Create and register typecasters converting "json" type to Python objects. Parameters: * **conn_or_curs** -- a connection or cursor used to find the "json" and "json[]" oids; the typecasters are registered in a scope limited to this object, unless *globally* is set to "True". It can be "None" if the oids are provided * **globally** -- if "False" register the typecasters only on *conn_or_curs*, otherwise register them globally * **loads** -- the function used to parse the data into a Python object. If "None" use "json.loads()", where "json" is the module chosen according to the Python version (see above) * **oid** -- the OID of the "json" type if known; if not, it will be queried on *conn_or_curs* * **array_oid** -- the OID of the "json[]" array type if known; if not, it will be queried on *conn_or_curs* The connection or cursor passed to the function will be used to query the database and look for the OID of the "json" type. No query is performed if *oid* and *array_oid* are provided. Raise "ProgrammingError" if the type is not found. psycopg2.extras.register_default_json(conn_or_curs=None, globally=False, loads=None) Create and register "json" typecasters for PostgreSQL 9.2 and later. Since PostgreSQL 9.2 "json" is a builtin type, hence its oid is known and fixed. This function allows specifying a customized *loads* function for the default "json" type without querying the database. All the parameters have the same meaning as in "register_json()". Hstore data type ---------------- New in version 2.3. The "hstore" data type is a key-value store embedded in PostgreSQL. It has been available for several server versions, but with release 9.0 it has been greatly improved in capacity and usefulness with the addition of many functions. It supports GiST or GIN indexes allowing search by keys or key/value pairs as well as regular BTree indexes for equality, uniqueness etc. Psycopg can convert Python "dict" objects to and from "hstore" structures. Only dictionaries with string/unicode keys and values are supported. "None" is also allowed as a value but not as a key. Psycopg uses a more efficient "hstore" representation when dealing with PostgreSQL 9.0, but previous server versions are supported as well. By default the adapter/typecaster are disabled: they can be enabled using the "register_hstore()" function. psycopg2.extras.register_hstore(conn_or_curs, globally=False, unicode=False, oid=None, array_oid=None) Register adapter and typecaster for "dict"-"hstore" conversions. Parameters: * **conn_or_curs** -- a connection or cursor: the typecaster will be registered only on this object unless *globally* is set to "True" * **globally** -- register the adapter globally, not only on *conn_or_curs* * **unicode** -- if "True", keys and values returned from the database will be "unicode" instead of "str". The option is not available on Python 3 * **oid** -- the OID of the "hstore" type if known. If not, it will be queried on *conn_or_curs*. * **array_oid** -- the OID of the "hstore" array type if known. If not, it will be queried on *conn_or_curs*.
The connection or cursor passed to the function will be used to query the database and look for the OID of the "hstore" type (which may be different across databases). If querying is not desirable (e.g. with *asynchronous connections*) you may specify it in the *oid* parameter, which can be found using a query such as "SELECT 'hstore'::regtype::oid". Analogously you can obtain a value for *array_oid* using a query such as "SELECT 'hstore[]'::regtype::oid". Note that, when passing a dictionary from Python to the database, both strings and unicode keys and values are supported. Dictionaries returned from the database have keys/values according to the *unicode* parameter. The "hstore" contrib module must be already installed in the database (executing the "hstore.sql" script in your "contrib" directory). Raise "ProgrammingError" if the type is not found. Changed in version 2.4: added the *oid* parameter. If not specified, the typecaster is installed also if "hstore" is not installed in the "public" schema. Changed in version 2.4.3: added support for "hstore" array.
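For example, a minimal sketch of a "dict" round-trip through "hstore", assuming the extension is installed in the database and that a table "mytable" with an "hstore" column "attrs" exists:

   import psycopg2
   import psycopg2.extras

   conn = psycopg2.connect("dbname=test")
   psycopg2.extras.register_hstore(conn)

   curs = conn.cursor()
   curs.execute("INSERT INTO mytable (attrs) VALUES (%s)",
                [{'width': '100', 'height': '200'}])
   curs.execute("SELECT attrs FROM mytable")
   print(curs.fetchone()[0])  # {'width': '100', 'height': '200'}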
created using the "CREATE TYPE" command * **conn_or_curs** -- a connection or cursor used to find the type oid and components; the typecaster is registered in a scope limited to this object, unless *globally* is set to "True" * **globally** -- if "False" (default) register the typecaster only on *conn_or_curs*, otherwise register it globally * **factory** -- if specified it should be a "CompositeCaster" subclass: use it to *customize how to cast composite types* Returns: the registered "CompositeCaster" or *factory* instance responsible for the conversion Changed in version 2.4.3: added support for array of composite types Changed in version 2.5: added the *factory* parameter class class psycopg2.extras.CompositeCaster(name, oid, attrs, array_oid=None, schema=None) Helps conversion of a PostgreSQL composite type into a Python object. The class is usually created by the "register_composite()" function. You may want to create and register manually instances of the class if querying the database at registration time is not desirable (such as when using an *asynchronous connections*). make(values) Return a new Python object representing the data being casted. *values* is the list of attributes, already casted into their Python representation. You can subclass this method to *customize the composite cast*. New in version 2.5. Object attributes: name The name of the PostgreSQL type. schema The schema where the type is defined. New in version 2.5. oid The oid of the PostgreSQL type. array_oid The oid of the PostgreSQL array type, if available. type The type of the Python objects returned. If "collections.namedtuple()" is available, it is a named tuple with attributes equal to the type components. Otherwise it is just the "tuple" object. attnames List of component names of the type to be casted. atttypes List of component type oids of the type to be casted. Range data types ---------------- New in version 2.5. Psycopg offers a "Range" Python type and supports adaptation between them and PostgreSQL "range" types. Builtin "range" types are supported out-of-the-box; user-defined "range" types can be adapted using "register_range()". class class psycopg2.extras.Range(lower=None, upper=None, bounds='[)', empty=False) Python representation for a PostgreSQL "range" type. Parameters: * **lower** -- lower bound for the range. "None" means unbound * **upper** -- upper bound for the range. "None" means unbound * **bounds** -- one of the literal strings "()", "[)", "(]", "[]", representing whether the lower or upper bounds are included * **empty** -- if "True", the range is empty This Python type is only used to pass and retrieve range values to and from PostgreSQL and doesn't attempt to replicate the PostgreSQL range features: it doesn't perform normalization and doesn't implement all the operators supported by the database. "Range" objects are immutable, hashable, and support the "in" operator (checking if an element is within the range). They can be tested for equivalence but not for ordering. Empty ranges evaluate to "False" in boolean context, nonempty evaluate to "True". Although it is possible to instantiate "Range" objects, the class doesn't have an adapter registered, so you cannot normally pass these instances as query arguments. To use range objects as query arguments you can either use one of the provided subclasses, such as "NumericRange" or create a custom subclass using "register_range()". Object attributes: isempty "True" if the range is empty. lower The lower bound of the range. 
"None" if empty or unbound. upper The upper bound of the range. "None" if empty or unbound. lower_inc "True" if the lower bound is included in the range. upper_inc "True" if the upper bound is included in the range. lower_inf "True" if the range doesn't have a lower bound. upper_inf "True" if the range doesn't have an upper bound. The following "Range" subclasses map builtin PostgreSQL "range" types to Python objects: they have an adapter registered so their instances can be passed as query arguments. "range" values read from database queries are automatically casted into instances of these classes. class class psycopg2.extras.NumericRange(lower=None, upper=None, bounds='[)', empty=False) A "Range" suitable to pass Python numeric types to a PostgreSQL range. PostgreSQL types "int4range", "int8range", "numrange" are casted into "NumericRange" instances. class class psycopg2.extras.DateRange(lower=None, upper=None, bounds='[)', empty=False) Represents "daterange" values. class class psycopg2.extras.DateTimeRange(lower=None, upper=None, bounds='[)', empty=False) Represents "tsrange" values. class class psycopg2.extras.DateTimeTZRange(lower=None, upper=None, bounds='[)', empty=False) Represents "tstzrange" values. Note: Python lacks a representation for "infinity" date so Psycopg converts the value to "date.max" and such. When written into the database these dates will assume their literal value (e.g. "9999-12-31" instead of "infinity"). Check *Infinite dates handling* for an example of an alternative adapter to map "date.max" to "infinity". An alternative dates adapter will be used automatically by the "DateRange" adapter and so on. Custom "range" types (created with "CREATE TYPE" "... AS RANGE") can be adapted to a custom "Range" subclass: psycopg2.extras.register_range(pgrange, pyrange, conn_or_curs, globally=False) Create and register an adapter and the typecasters to convert between a PostgreSQL "range" type and a PostgreSQL "Range" subclass. Parameters: * **pgrange** -- the name of the PostgreSQL "range" type. Can be schema-qualified * **pyrange** -- a "Range" strict subclass, or just a name to give to a new class * **conn_or_curs** -- a connection or cursor used to find the oid of the range and its subtype; the typecaster is registered in a scope limited to this object, unless *globally* is set to "True" * **globally** -- if "False" (default) register the typecaster only on *conn_or_curs*, otherwise register it globally Returns: "RangeCaster" instance responsible for the conversion If a string is passed to *pyrange*, a new "Range" subclass is created with such name and will be available as the "range" attribute of the returned "RangeCaster" object. The function queries the database on *conn_or_curs* to inspect the *pgrange* type and raises "ProgrammingError" if the type is not found. If querying the database is not advisable, use directly the "RangeCaster" class and register the adapter and typecasters using the provided functions. class class psycopg2.extras.RangeCaster(pgrange, pyrange, oid, subtype_oid, array_oid=None) Helper class to convert between "Range" and PostgreSQL range types. Objects of this class are usually created by "register_range()". Manual creation could be useful if querying the database is not advisable: in this case the oids must be provided. Object attributes: range The "Range" subclass adapted. adapter The "ISQLQuote" responsible to adapt "range". typecaster The object responsible for casting. 
UUID data type -------------- New in version 2.0.9. Changed in version 2.0.13: added UUID array support. >>> psycopg2.extras.register_uuid() >>> # Python UUID can be used in SQL queries >>> import uuid >>> my_uuid = uuid.UUID('{12345678-1234-5678-1234-567812345678}') >>> psycopg2.extensions.adapt(my_uuid).getquoted() "'12345678-1234-5678-1234-567812345678'::uuid" >>> # PostgreSQL UUID values are transformed into Python UUID objects. >>> cur.execute("SELECT 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid") >>> cur.fetchone()[0] UUID('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11') psycopg2.extras.register_uuid(oids=None, conn_or_curs=None) Create the UUID type and a uuid.UUID adapter. Parameters: * **oids** -- oid for the PostgreSQL "uuid" type, or a 2-item sequence with the oids of the type and the array. If not specified, use the PostgreSQL standard oids. * **conn_or_curs** -- where to register the typecaster. If not specified, register it globally. class psycopg2.extras.UUID_adapter(uuid) Adapt Python's uuid.UUID type to PostgreSQL's uuid. "inet" data type ---------------- New in version 2.0.9. Changed in version 2.4.5: added inet array support. >>> psycopg2.extras.register_inet() >>> cur.mogrify("SELECT %s", (Inet('127.0.0.1/32'),)) "SELECT E'127.0.0.1/32'::inet" >>> cur.execute("SELECT '192.168.0.1/24'::inet") >>> cur.fetchone()[0].addr '192.168.0.1/24' psycopg2.extras.register_inet(oid=None, conn_or_curs=None) Create the INET type and an Inet adapter. Parameters: * **oid** -- oid for the PostgreSQL "inet" type, or a 2-item sequence with the oids of the type and the array. If not specified, use the PostgreSQL standard oids. * **conn_or_curs** -- where to register the typecaster. If not specified, register it globally. class psycopg2.extras.Inet(addr) Wrap a string to allow for correct SQL-quoting of inet values. Note that this adapter does NOT check the passed value to make sure it really is an inet-compatible address but DOES call adapt() on it to make sure it is impossible to execute an SQL injection by passing an evil value to the initializer. Fractional time zones ===================== psycopg2.extras.register_tstz_w_secs(oids=None, conn_or_curs=None) The function previously used to register an alternate type caster for "TIMESTAMP WITH TIME ZONE", to deal with historical time zones with seconds in the UTC offset. These are now correctly handled by the default type caster, so currently the function doesn't do anything. New in version 2.0.9. Changed in version 2.2.2: the function is a no-op: see *Time zones handling*. Coroutine support ================= psycopg2.extras.wait_select(conn) Wait until a connection or cursor has data available. The function is an example of a wait callback to be registered with "set_wait_callback()". This function uses "select()" to wait for data to become available.
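For example, a minimal sketch enabling the callback, after which regular blocking calls go through "wait_select()" (the connection parameters are hypothetical):

   import psycopg2
   import psycopg2.extensions
   import psycopg2.extras

   # Every wait on a connection will now go through wait_select(), so
   # a coroutine library could provide a cooperative select() instead.
   psycopg2.extensions.set_wait_callback(psycopg2.extras.wait_select)

   conn = psycopg2.connect("dbname=test")
   curs = conn.cursor()
   curs.execute("SELECT 42")
   print(curs.fetchone()[0])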
"psycopg2.errorcodes" -- Error codes defined by PostgreSQL ********************************************************** New in version 2.0.6. This module contains symbolic names for all PostgreSQL error codes and error class codes. Subclasses of "Error" make the PostgreSQL error code available in the "pgcode" attribute. From PostgreSQL documentation: All messages emitted by the PostgreSQL server are assigned five-character error codes that follow the SQL standard's conventions for "SQLSTATE" codes. Applications that need to know which error condition has occurred should usually test the error code, rather than looking at the textual error message. The error codes are less likely to change across PostgreSQL releases, and also are not subject to change due to localization of error messages. Note that some, but not all, of the error codes produced by PostgreSQL are defined by the SQL standard; some additional error codes for conditions not defined by the standard have been invented or borrowed from other databases. According to the standard, the first two characters of an error code denote a class of errors, while the last three characters indicate a specific condition within that class. Thus, an application that does not recognize the specific error code can still infer what to do from the error class. See also: PostgreSQL Error Codes table. An example of the available constants defined in the module: >>> errorcodes.CLASS_SYNTAX_ERROR_OR_ACCESS_RULE_VIOLATION '42' >>> errorcodes.UNDEFINED_TABLE '42P01' Constants representing all the error values documented by PostgreSQL versions between 8.1 and 9.2 are included in the module. psycopg2.errorcodes.lookup(code) Look up an error code or class code and return its symbolic name. Raise "KeyError" if the code is not found. >>> try: ... cur.execute("SELECT ouch FROM aargh;") ... except Exception, e: ... pass ... >>> errorcodes.lookup(e.pgcode[:2]) 'CLASS_SYNTAX_ERROR_OR_ACCESS_RULE_VIOLATION' >>> errorcodes.lookup(e.pgcode) 'UNDEFINED_TABLE' New in version 2.0.14. Frequently Asked Questions ************************** Here are a few gotchas you may encounter using "psycopg2". Feel free to suggest new entries! Problems with transactions handling =================================== Why does "psycopg2" leave database sessions "idle in transaction"? Psycopg normally starts a new transaction the first time a query is executed, e.g. calling "cursor.execute()", even if the command is a "SELECT". The transaction is not closed until an explicit "commit()" or "rollback()". If you are writing a long-living program, you should probably make sure to call one of the transaction closing methods before leaving the connection unused for a long time (which may also be a few seconds, depending on the concurrency level in your database). Alternatively you can use a connection in "autocommit" mode to avoid starting a new transaction at the first command. I receive the error *current transaction is aborted, commands ignored until end of transaction block* and can't do anything else! There was a problem *in the previous* command to the database, which resulted in an error. The database will not recover automatically from this condition: you must run a "rollback()" before sending new commands to the session (if this seems too harsh, remember that PostgreSQL supports nested transactions using the "SAVEPOINT" command). Why do I get the error *current transaction is aborted, commands ignored until end of transaction block* when I use "multiprocessing" (or any other forking system) and not when I use "threading"? Psycopg's connections can't be shared across processes (but are thread safe). If you are forking the Python process make sure to create a new connection in each forked child. See *Thread and process safety* for further information. Problems with type conversions ============================== Why does "cursor.execute()" raise the exception *can't adapt*? Psycopg converts Python objects into a SQL string representation by looking at the object's class. The exception is raised when you are trying to pass as a query parameter an object for which there is no adapter registered for its class.
See *Adapting new Python types to SQL syntax* for more information. I can't pass an integer or a float parameter to my query: it says *a number is required*, but *it is* a number! In your query string, you always have to use "%s" placeholders, even when passing a number. All Python objects are converted by Psycopg into their SQL representation, so they get passed to the query as strings. See *Passing parameters to SQL queries*. >>> cur.execute("INSERT INTO numbers VALUES (%d)", (42,)) # WRONG >>> cur.execute("INSERT INTO numbers VALUES (%s)", (42,)) # correct I try to execute a query but it fails with the error *not all arguments converted during string formatting* (or *object does not support indexing*). Why? Psycopg always requires positional arguments to be passed as a sequence, even when the query takes a single parameter. And remember that to make a single item tuple in Python you need a comma! See *Passing parameters to SQL queries*. >>> cur.execute("INSERT INTO foo VALUES (%s)", "bar") # WRONG >>> cur.execute("INSERT INTO foo VALUES (%s)", ("bar")) # WRONG >>> cur.execute("INSERT INTO foo VALUES (%s)", ("bar",)) # correct >>> cur.execute("INSERT INTO foo VALUES (%s)", ["bar"]) # correct My database is Unicode, but I receive all the strings as UTF-8 "str". Can I receive "unicode" objects instead? The following magic formula will do the trick: psycopg2.extensions.register_type(psycopg2.extensions.UNICODE) psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY) See *Unicode handling* for the gory details. Psycopg converts "decimal"/"numeric" database types into Python "Decimal" objects. Can I have "float" instead? You can register a customized adapter for the PostgreSQL decimal type: DEC2FLOAT = psycopg2.extensions.new_type( psycopg2.extensions.DECIMAL.values, 'DEC2FLOAT', lambda value, curs: float(value) if value is not None else None) psycopg2.extensions.register_type(DEC2FLOAT) See *Type casting of SQL types into Python objects* to read the relevant documentation. If you find "psycopg2.extensions.DECIMAL" not available, use "psycopg2._psycopg.DECIMAL" instead. Transferring binary data from PostgreSQL 9.0 doesn't work. PostgreSQL 9.0 uses by default the "hex" format to transfer "bytea" data: the format can't be parsed by the libpq 8.4 and earlier. The problem is solved in Psycopg 2.4.1, which uses its own parser for the "bytea" format. For previous Psycopg releases, three options to solve the problem are: * set the bytea_output parameter to "escape" in the server; * execute the database command "SET bytea_output TO escape;" in the session before reading binary data; * upgrade the libpq library on the client to at least 9.0. Arrays of *TYPE* are not casted to list. Arrays are only casted to list when their oid is known, and an array typecaster is registered for them. If there is no typecaster, the array is returned unparsed from PostgreSQL (e.g. "{a,b,c}"). It is easy to create a generic array typecaster returning arrays as lists: an example is provided in the "new_array_type()" documentation. Best practices ============== When should I save and re-use a cursor as opposed to creating a new one as needed? Cursors are lightweight objects and creating lots of them should not pose any kind of problem. But note that cursors used to fetch result sets will cache the data and use memory in proportion to the result set size. Our suggestion is to almost always create a new cursor and dispose of old ones as soon as the data is not required anymore (call "close()" on them.)
The only exception is tight loops, where one usually uses the same cursor for a whole bunch of "INSERT"s or "UPDATE"s. When should I save and re-use a connection as opposed to creating a new one as needed? Creating a connection can be slow (think of SSL over TCP) so the best practice is to create a single connection and keep it open as long as required. It is also good practice to rollback or commit frequently (even after a single "SELECT" statement) to make sure the backend is never left "idle in transaction". See also "psycopg2.pool" for lightweight connection pooling. What are the advantages or disadvantages of using named cursors? The only disadvantage is that they use up resources on the server and that there is a little overhead, because at least two queries (one to create the cursor and one to fetch the initial result set) are issued to the backend. The advantage is that data is fetched one chunk at a time: using small "fetchmany()" values it is possible to use very little memory on the client and to skip or discard parts of the result set. Problems compiling and deploying psycopg2 ========================================= I can't compile "psycopg2": the compiler says *error: Python.h: No such file or directory*. What am I missing? You need to install a Python development package: it is usually called "python-dev". I can't compile "psycopg2": the compiler says *error: libpq-fe.h: No such file or directory*. What am I missing? You need to install the development version of the libpq: the package is usually called "libpq-dev". "psycopg2" raises "ImportError" with message *_psycopg.so: undefined symbol: lo_truncate* when imported. This means that Psycopg has been compiled with "lo_truncate()" support, which means that the libpq used at compile time was version >= 8.3, but at runtime an older libpq library is found. You can use: $ ldd /path/to/packages/psycopg2/_psycopg.so | grep libpq to find which libpq version is used at runtime. You can avoid the problem by making sure that the libpq version used by **pg_config** at install time matches the libpq available at runtime. Psycopg raises *ImportError: cannot import name tz* on import in mod_wsgi / ASP, but it works fine otherwise. If "psycopg2" is installed in an egg (e.g. because installed by **easy_install**), the user running the program may be unable to write in the eggs cache. Set the env variable "PYTHON_EGG_CACHE" to a writable directory. With mod_wsgi you can use the WSGIPythonEggs directive. Release notes ************* What's new in psycopg 2.5.1 =========================== * Fixed build on Solaris 10 and 11 where the round() function is already declared (ticket #146). * Fixed comparison of "Range" with non-range objects (ticket #164). Thanks to Chris Withers for the patch. * Fixed double-free on connection dealloc (ticket #166). Thanks to Gangadharan S.A. for the report and fix suggestion. What's new in psycopg 2.5 ========================= New features: * Added *JSON adaptation*. * Added *support for PostgreSQL 9.2 range types*. * "connection" and "cursor" objects can be used in "with" statements as context managers, as specified by a recent DB API 2.0 extension. * Added "Diagnostics" object to get extended info from a database error. Many thanks to Matthew Woodcraft for the implementation (ticket #149). * Added "connection.cursor_factory" attribute to customize the default object returned by "cursor()". * Added support for backward scrollable cursors. Thanks to Jon Nelson for the initial patch (ticket #108).
* Added a simple way to *customize casting of composite types* into Python objects other than namedtuples. Many thanks to Ronan Dunklau and Tobias Oberstein for the feature development. * "connection.reset()" implemented using "DISCARD ALL" on server versions supporting it. Bug fixes: * Properly cleanup memory of broken connections (ticket #148). * Fixed bad interaction of "setup.py" with other dependencies in Distribute projects on Python 3 (ticket #153). Other changes: * Added support for Python 3.3. * Dropped support for Python 2.4. Please use Psycopg 2.4.x if you need it. * "errorcodes" map updated to PostgreSQL 9.2. * Dropped Zope adapter from source repository. ZPsycopgDA now has its own project at . What's new in psycopg 2.4.6 --------------------------- * Fixed 'cursor()' arguments propagation in connection subclasses and overriding of the 'cursor_factory' argument. Thanks to Corry Haines for the report and the initial patch (ticket #105). * Dropped GIL release during string adaptation around a function call invoking a Python API function, which could cause interpreter crash. Thanks to Manu Cupcic for the report (ticket #110). * Close a green connection if there is an error in the callback. Maybe a harsh solution but it leaves the program responsive (ticket #113). * 'register_hstore()', 'register_composite()', 'tpc_recover()' work with RealDictConnection and Cursor (ticket #114). * Fixed broken pool for Zope and connections re-init across ZSQL methods in the same request (tickets #123, #125, #142). * connect() raises an exception instead of swallowing keyword arguments when a connection string is specified as well (ticket #131). * Discard any result produced by 'executemany()' (ticket #133). * Fixed pickling of FixedOffsetTimezone objects (ticket #135). * Release the GIL around PQgetResult calls after COPY (ticket #140). * Fixed empty strings handling in composite caster (ticket #141). * Fixed pickling of DictRow and RealDictRow objects. What's new in psycopg 2.4.5 --------------------------- * The close() methods on connections and cursors don't raise exceptions if called on already closed objects. * Fixed fetchmany() with no argument in cursor subclasses (ticket #84). * Use lo_creat() instead of lo_create() when possible for better interaction with pgpool-II (ticket #88). * Error and its subclasses are picklable, useful for multiprocessing interaction (ticket #90). * Better efficiency and formatting of timezone offset objects thanks to Menno Smits (tickets #94, #95). * Fixed 'rownumber' during iteration on cursor subclasses. Regression introduced in 2.4.4 (ticket #100). * Added support for 'inet' arrays. * Fixed 'commit()' concurrency problem (ticket #103). * Codebase cleaned up using the GCC Python plugin's static analysis tool, which has revealed several unchecked return values, possible NULL dereferences, reference counting problems. Many thanks to David Malcolm for the useful tool and the assistance provided using it. What's new in psycopg 2.4.4 --------------------------- * 'register_composite()' also works with the types implicitly defined after a table row, not only with the ones created by 'CREATE TYPE'. * Values for the isolation level symbolic constants restored to what they were before release 2.4.2 to avoid breaking apps using the values instead of the constants. * Named DictCursor/RealDictCursor honour itersize (ticket #80). * Fixed rollback on error on Zope (ticket #73). 
* Raise 'DatabaseError' instead of 'Error' with empty libpq errors, consistently with other disconnection-related errors: regression introduced in release 2.4.1 (ticket #82). What's new in psycopg 2.4.3 --------------------------- * connect() supports all the keyword arguments supported by the database. * Added 'new_array_type()' function for easy creation of array typecasters. * Added support for arrays of hstores and composite types (ticket #66). * Fixed segfault in case of transaction started with connection lost (and possibly other events). * Fixed adaptation of Decimal type in sub-interpreters, such as in certain mod_wsgi configurations (ticket #52). * Rollback connections in transaction or in error before putting them back into a pool. Also discard broken connections (ticket #62). * Lazy import of the slow uuid module, thanks to Marko Kreen. * Fixed NamedTupleCursor.executemany() (ticket #65). * Fixed --static-libpq setup option (ticket #64). * Fixed interaction between RealDictCursor and named cursors (ticket #67). * Dropped limit on the columns length in COPY operations (ticket #68). * Fixed reference leak with arguments referenced more than once in queries (ticket #81). * Fixed typecasting of arrays containing consecutive backslashes. * 'errorcodes' map updated to PostgreSQL 9.1. What's new in psycopg 2.4.2 --------------------------- * Added 'set_session()' method and 'autocommit' property to the connection. Added support for read-only sessions and, for PostgreSQL 9.1, for the "repeatable read" isolation level and the "deferrable" transaction property. * Psycopg doesn't execute queries at connection time to find the default isolation level. * Fixed bug with multithread code potentially causing loss of sync with the server communication or lock of the client (ticket #55). * Don't fail import if mx.DateTime module can't be found, even if its support was built (ticket #53). * Fixed escape for negative numbers prefixed by minus operator (ticket #57). * Fixed refcount issue during copy. Reported and fixed by Dave Malcolm (ticket #58, Red Hat Bug 711095). * Trying to execute concurrent operations on the same connection through concurrent green threads results in an error instead of a deadlock. * Fixed detection of pg_config on Windows. Report and fix, plus some long-needed setup.py cleanup by Steve Lacy: thanks!
* Dropped repeated dictionary lookups with unicode query/parameters. * Improvements to the named cursors: * More efficient iteration on named cursors, fetching 'itersize' records at a time from the backend. * The named cursor's name can be an invalid identifier. * Improvements in data handling: * Added 'register_composite()' function to cast PostgreSQL composite types into Python tuples/namedtuples. * Adapt types 'bytearray' (from Python 2.6), 'memoryview' (from Python 2.7) and other objects implementing the "Revised Buffer Protocol" to 'bytea' data type. * The 'hstore' adapter can work even when the data type is not installed in the 'public' namespace. * Raise a clean exception instead of returning bad data when receiving bytea in 'hex' format and the client libpq can't parse them. * Empty lists correctly roundtrip Python -> PostgreSQL -> Python. * Other changes: * 'cursor.description' is provided as named tuples if available. * The build script refuses to guess values if 'pg_config' is not found. * Connections and cursors are weakly referenceable. Bug fixes: * Fixed adaptation of None in composite types (ticket #26). Bug report by Karsten Hilbert. * Fixed several reference leaks in less common code paths. * Fixed segfault when a large object is closed and its connection is no longer available. * Added missing icon to ZPsycopgDA package, not available in Zope 2.12.9 (ticket #30). Bug report and patch by Pumukel. * Fixed conversion of negative infinity (ticket #40). Bug report and patch by Marti Raudsepp. What's new in psycopg 2.3.2 --------------------------- * Fixed segfault with middleware not passing DateStyle to the client (ticket #24). Bug report and patch by Marti Raudsepp. What's new in psycopg 2.3.1 --------------------------- * Fixed build problem on CentOS 5.5 x86_64 (ticket #23). What's new in psycopg 2.3 ========================= psycopg 2.3 aims to expose some new features introduced in PostgreSQL 9.0. Main new features: * "dict" to "hstore" adapter and "hstore" to "dict" typecaster, using both 9.0 and pre-9.0 syntax. * Two-phase commit protocol support as per DBAPI specification. * Support for payload in notifications received from the backend. * "namedtuple"-returning cursor. * Query execution cancel. Other features and changes: * Dropped support for protocol 2: Psycopg 2.3 can only connect to PostgreSQL servers with version at least 7.4. * Don't issue a query at every connection to detect the client encoding and to set the datestyle to ISO if it is already compatible with what is expected. * "mogrify()" now supports unicode queries. * Subclasses of a type that can be adapted are adapted as the superclass. * "errorcodes" knows a couple of new codes introduced in PostgreSQL 9.0. * Dropped deprecated Psycopg "own quoting". * Never issue a ROLLBACK on close/GC. This behaviour was introduced as a bug in release 2.2, but trying to send a command while being destroyed has been considered not safe. Bug fixes: * Fixed use of "PQfreemem" instead of "free" in binary typecaster. * Fixed access to freed memory in "conn_get_isolation_level()". * Fixed crash during Decimal adaptation with a few 2.5.x Python versions (ticket #7). * Fixed notices order (ticket #9). What's new in psycopg 2.2.2 --------------------------- Bug fixes: * the call to logging.basicConfig() in pool.py has been dropped: it was messing with some projects using logging (and a library should not initialize the logging system anyway.) * psycopg now correctly handles time zones with seconds in the UTC offset.
The old register_tstz_w_secs() function is deprecated and will raise a warning if called. * Exceptions raised by the column iterator are propagated. * Exceptions raised by executemany() iterators are propagated. What's new in psycopg 2.2.1 --------------------------- Bug fixes: * psycopg now builds again on MS Windows. What's new in psycopg 2.2 ========================= This is the first release of the new 2.2 series, supporting not just one but two different ways of executing asynchronous queries, thanks to Jan and Daniele (with a little help from me and others, but they did 99% of the work so they deserve their names here in the news.) psycopg now supports both classic select() loops and "green" coroutine libraries. It is all in the documentation, so just point your browser to doc/html/advanced.html. Other new features: * truncate() method for lobjects. * COPY functions are now a little bit faster. * All builtin PostgreSQL to Python typecasters are now available from the psycopg2.extensions module. * Notifications from the backend are now available right after the execute() call (before, client code needed to call isbusy() to ensure NOTIFY reception.) * Better timezone support. * Lots of documentation updates. Bug fixes: * Fixed some gc/refcounting problems. * Fixed reference leak in NOTIFY reception. * Fixed problem with PostgreSQL not casting string literals to the correct types in some situations: psycopg now adds an explicit cast to dates, times and bytea representations. * Fixed TimestampFromTicks() and TimeFromTicks() for seconds >= 59.5. * Fixed spurious exception raised when calling C typecasters from Python ones. What's new in psycopg 2.0.14 ---------------------------- New features: * Support for adapting tuples to PostgreSQL arrays is now enabled by default and does not require importing psycopg2.extensions anymore. * "can't adapt" error message now includes full type information. * Thanks to Daniele Varrazzo (piro) psycopg2's source package now includes full documentation in HTML and plain text format. Bug fixes: * No loss of precision when using floats anymore. * decimal.Decimal "nan" and "infinity" correctly converted to PostgreSQL numeric NaN values (note that PostgreSQL numeric type does not support infinity but just NaNs.) * psycopg2.extensions now includes Binary. It seems we're good citizens of the free software ecosystem and that big big big companies and people ranting on the pgsql-hackers mailing list will now not dislike us. *g* (See LICENSE file for the details.) What's new in psycopg 2.0.13 ---------------------------- New features: * Support for UUID arrays. * It is now possible to build psycopg linking to a static libpq library. Bug fixes: * Fixed a deadlock related to using the same connection with multiple cursors from different threads. * Builds again with MSVC. What's new in psycopg 2.0.12 ---------------------------- New features: * The connection object now has a reset() method that can be used to reset the connection to its default state. Bug fixes: * copy_to() and copy_from() now accept a much larger number of columns. * Fixed PostgreSQL version detection. * Fixed ZPsycopgDA version check. * Fixed regression in ZPsycopgDA that made it behave wrongly when receiving serialization errors: now the query is re-issued as it should be by propagating the correct exception to Zope. * Writing "large" large objects should now work. What's new in psycopg 2.0.11 ---------------------------- New features: * DictRow and RealDictRow now use less memory.
If you inherit from them, remember to set __slots__ for your new attributes or be prepared to go back to the old memory usage. Bug fixes: * Fixed exception in setup.py. * More robust detection of PostgreSQL development versions. * Fixed exception in RealDictCursor, introduced in 2.0.10. What's new in psycopg 2.0.10 ---------------------------- New features: * A specialized type-caster that can parse time zones with seconds is now available. Note that after enabling it (see extras.py) "wrong" time zones will be parsed without raising an exception but the result will be rounded. * DictCursor can be used as a named cursor. * DictRow now implements more dict methods. * The connection object now exposes the PostgreSQL server version as the .server_version attribute and the protocol version used as .protocol_version. * The connection object has a .get_parameter_status() method that can be used to obtain useful information from the server. Bug fixes: * None is now correctly always adapted to NULL. * Two double memory free errors provoked by multithreading and garbage collection are now fixed. * Fixed usage of internal Python code in the notice processor; this should fix segfaults when receiving a lot of notices in multithreaded programs. * Should build again on MSVC and Solaris. * Should build with development versions of PostgreSQL (ones with -devel version string.) * Fixed some tests that failed even when psycopg was right. What's new in psycopg 2.0.9 --------------------------- New features: * "import psycopg2.extras" to get some support for handling times and timestamps with seconds in the time zone offset. * DictCursors can now be used as named cursors. Bug fixes: * register_type() now accepts an explicit None as its second parameter. * psycopg2 should build again on MSVC and Solaris. What's new in psycopg 2.0.9 --------------------------- New features: * COPY TO/COPY FROM queries now can be of any size and psycopg will correctly quote separators. * float values Inf and NaN are now correctly handled and can round-trip to the database. * executemany() now returns the total number of INSERTed or UPDATEd rows. Note that, as it has always been, executemany() should not be used to execute multiple SELECT statements and while it will execute the statements without any problem, it will return the wrong value. * copy_from() and copy_to() can now use quoted separators. * "import psycopg2.extras" to get UUID support. Bug fixes: * register_type() now works on connection and cursor subclasses. * fixed a memory leak when using lobjects. What's new in psycopg 2.0.8 --------------------------- New features: * The connection object now has a get_backend_pid() method that returns the current PostgreSQL connection backend process PID. * The PostgreSQL large object API has been exposed through the Cursor.lobject() method. Bug fixes: * Some fixes to ZPsycopgDA have been merged from the Debian package. * A memory leak was fixed in Cursor.executemany(). * A double free was fixed in pq_complete_error(), that caused crashes under some error conditions. What's new in psycopg 2.0.7 --------------------------- Improved error handling: * All instances of psycopg2.Error subclasses now have pgerror, pgcode and cursor attributes. They will be set to None if no value is available. * Exception classes are now chosen based on the SQLSTATE value from the result. (#184) * The commit() and rollback() methods now set the pgerror and pgcode attributes on exceptions. (#152) * errors from commit() and rollback() are no longer considered fatal.
What's new in psycopg 2.0.7
---------------------------

Improved error handling:

* All instances of psycopg2.Error subclasses now have pgerror,
  pgcode and cursor attributes. They will be set to None if no value
  is available.

* Exception classes are now chosen based on the SQLSTATE value from
  the result. (#184)

* The commit() and rollback() methods now set the pgerror and pgcode
  attributes on exceptions. (#152)

* errors from commit() and rollback() are no longer considered
  fatal. (#194)

* If a disconnect is detected during execute(), an exception will be
  raised at that point rather than resulting in "ProgrammingError:
  no results to fetch" later on. (#186)

Better PostgreSQL compatibility:

* If the server uses standard_conforming_strings, perform
  appropriate quoting.

* BC dates are now handled if psycopg is compiled with mxDateTime
  support. If using datetime, an appropriate ValueError is raised.
  (#203)

Other bug fixes:

* If multiple sub-interpreters are in use, do not share the Decimal
  type between them. (#192)

* Buffer objects obtained from psycopg are now accepted by psycopg
  too, without segfaulting. (#209)

* A few small changes were made to improve DB-API compatibility. All
  the dbapi20 tests now pass.

Miscellaneous:

* The PSYCOPG_DISPLAY_SIZE option is now off by default. This means
  that display size will always be set to "None" in
  cursor.description. Calculating the display size was expensive and
  infrequently used, so this should improve performance.

* New QueryCanceledError and TransactionRollbackError exceptions
  have been added to the psycopg2.extensions module. They can be
  used to detect statement timeouts and deadlocks respectively.

* Cursor objects now have a "closed" attribute. (#164)

* If psycopg has been built with debug support, it is now necessary
  to set the PSYCOPG_DEBUG environment variable to turn on debug
  spew.

What's new in psycopg 2.0.6
---------------------------

Better support for PostgreSQL, Python and win32:

* full support for PostgreSQL 8.2, including NULLs in arrays

* support for almost all existing PostgreSQL encodings

* full list of PostgreSQL error codes available by importing the
  psycopg2.errorcodes module

* full support for Python 2.5 and 64 bit architectures

* better build support on win32 platform

Support for per-connection type-casters (used by ZPsycopgDA too:
this fixes a long-standing bug that made different connections use a
random set of date/time type-casters instead of the configured one.)

Better management of times and dates both from Python and in Zope.

copy_to and copy_from now take an extra "columns" parameter.

Python tuples are now adapted to SQL sequences that can be used with
the "IN" operator by default if the psycopg2.extensions module is
imported (i.e., the SQL_IN adapter was moved from extras to
extensions; a short sketch follows the 2.0.4 notes below.)

Fixed some small buglets and build glitches:

* removed double mutex destroy

* removed all non-constant initializers

* fixed PyObject_HEAD declarations to avoid memory corruption on 64
  bit architectures

* fixed several Python API calls to work on 64 bit architectures

* applied compatibility macros from PEP 353

* using more than one argument format now raises an error instead of
  a segfault

What's new in psycopg 2.0.5.1
-----------------------------

* Now it really, really builds on MSVC and older gcc versions.

What's new in psycopg 2.0.5
---------------------------

* Fixed various buglets such as:

  * segfault when passing an empty string to Binary()

  * segfault on null queries

  * segfault and bad keyword naming in .executemany()

  * OperationalError in connection objects was always None

* Various changes to ZPsycopgDA to make it more zope2.9-ish.

* connect() now accepts both integers and strings as the port
  parameter

What's new in psycopg 2.0.4
---------------------------

* Fixed float conversion bug introduced in 2.0.3.
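The tuple adaptation described in the 2.0.6 notes can be used as in
this minimal sketch; the table name and values are hypothetical:

    import psycopg2
    import psycopg2.extensions   # registers the tuple (SQL_IN)
                                 # adapter (not needed since 2.0.14)

    conn = psycopg2.connect("dbname=test")   # placeholder DSN
    cur = conn.cursor()

    # the tuple is adapted to an SQL sequence usable with IN
    cur.execute("SELECT * FROM users WHERE id IN %s", ((10, 20, 30),))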
What's new in psycopg 2.0.3
---------------------------

* Fixed various buglets and a memory leak (see ChangeLog for details)

What's new in psycopg 2.0.2
---------------------------

* Fixed a bug in array typecasting that sometimes made psycopg
  forget about the last element in the array.

* Fixed some minor buglets in string memory allocations.

* Builds again with compilers different from gcc (#warning about
  PostgreSQL version is issued only if __GCC__ is defined.)

What's new in psycopg 2.0.1
---------------------------

* ZPsycopgDA now actually loads.

What's new in psycopg 2.0
=========================

* Fixed handle leak on win32.

* If available the new "safe" encoding functions of libpq are used.

* django and tinyerp people, please switch to psycopg 2 _without_
  using a psycopg 1 compatibility layer (this release was anticipated
  so that you all stop grumbling about psycopg 2 still being in
  beta.. :)

What's new in psycopg 2.0 beta 7
--------------------------------

* Ironed out last problems with times and dates (should be quite
  solid now.)

* Fixed problems with some arrays.

* Slightly better ZPsycopgDA (no more double connection objects in
  the menu and other minor fixes.)

* ProgrammingError exceptions now have three extra attributes:
  .cursor (it is possible to access the query that caused the
  exception using error.cursor.query), .pgerror and .pgcode
  (PostgreSQL original error text and code.)

* The build system uses pg_config when available.

* Documentation in the doc/ directory! (With many kudos to piro.)

What's new in psycopg 2.0 beta 6
--------------------------------

* Support for named cursors (see examples/fetch.py and the sketch
  after the beta 2 notes below).

* Safer parsing of time intervals.

* Better parsing of times and dates, no more locale problems.

* Should now play well with py2exe and similar tools.

* The "decimal" module is now used if available under Python 2.3.

What's new in psycopg 2.0 beta 5
--------------------------------

* Fixed all known bugs.

* The initial isolation level is now read from the server and
  .set_isolation_level() now takes values defined in
  psycopg2.extensions.

* .callproc() implemented as a SELECT of the given procedure.

* Better docstrings for a few functions/methods.

* Some time-related functions like psycopg2.TimeFromTicks() now take
  the local timezone into account. Also a tzinfo object (as per
  datetime module specifications) can be passed to the psycopg2.Time
  and psycopg2.Datetime constructors.

* All classes have been renamed to exist in the psycopg2._psycopg
  module, to fix problems with automatic documentation generators
  like epydoc.

* NOTIFY is correctly trapped (see examples/notify.py for example
  code.)

What's new in psycopg 2.0 beta 4
--------------------------------

* psycopg module is now named psycopg2.

* No more segfaults when a UNICODE query can't be converted to the
  backend encoding.

* No more segfaults on empty queries.

* psycopg2.connect() now takes an integer for the port keyword
  parameter.

* "python setup.py bdist_rpm" now works.

* Fixed lots of small bugs, see ChangeLog for details.

What's new in psycopg 2.0 beta 3
--------------------------------

* ZPsycopgDA now works (except table browsing.)

* psycopg builds again on Python 2.2.

What's new in psycopg 2.0 beta 2
--------------------------------

* Fixed ZPsycopgDA version check (ZPsycopgDA can now be imported in
  Zope.)

* psycopg.extras.DictRow works even after a new query on the
  generating cursor.

* Better setup.py for win32 (should build with MSVC or mingw.)

* Generic fixes and memory leak plugs.
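A minimal sketch of the named (server-side) cursors introduced in
2.0 beta 6; the DSN and table name are placeholders:

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # placeholder DSN
    cur = conn.cursor("fetcher")             # a named cursor

    cur.execute("SELECT * FROM big_table")   # hypothetical table
    for row in cur:
        # rows are fetched from the server as needed instead of
        # loading the whole result set in client memory at once
        print(row)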
What's new in psycopg 2.0 beta 1
--------------------------------

* Officially in beta (i.e., no new features will be added.)

* Array support: list objects can be passed as bound variables and
  are correctly returned for array columns.

* Added the psycopg.psycopg1 compatibility module (if you want
  instant psycopg 1 compatibility just "from psycopg import psycopg1
  as psycopg".)

* Complete support for BYTEA columns and buffer objects.

* Added error codes to error messages.

* The AsIs adapter is now exported by default (also Decimal objects
  are adapted using the AsIs adapter: when str() is called on them
  they already format themselves using the right precision and
  scale.)

* The connect() function now takes "connection_factory" instead of
  "factory" as keyword argument.

* New setup.py code to build on win32 using mingw and better error
  messages on missing datetime headers.

* Internal changes that allow much better user-defined type casters.

* A lot of bugfixes (binary, datetime, 64 bit arches, GIL,
  .executemany())

What's new in psycopg 1.99.13
-----------------------------

* Added missing .executemany() method.

* Optimized type cast from PostgreSQL to Python (psycopg should be
  even faster than before.)

What's new in psycopg 1.99.12
-----------------------------

* .rowcount should be ok and in sync with psycopg 1.

* Implemented the new COPY FROM/COPY TO code when connecting to the
  backend using libpq protocol 3 (this also removes all asprintf
  calls: build on win32 works again.) A protocol 3-enabled psycopg
  *can* connect to an old protocol 2 database and will detect it and
  use the right code.

* getquoted() called for real by the mogrification code.

What's new in psycopg 1.99.11
-----------------------------

* 'cursor' argument in .cursor() connection method renamed to
  'cursor_factory'.

* changed 'tuple_factory' cursor attribute name to 'row_factory'.

* the .cursor attribute is gone and connections and cursors are
  properly gc-managed.

* fixes to the async core.

What's new in psycopg 1.99.10
-----------------------------

* The adapt() function now fully supports the adaptation protocol
  described in PEP 246. Note that the adapters registry now is
  indexed by (type, protocol) and not by type alone. Change your
  adapters accordingly.

* More configuration options moved from setup.py to setup.cfg.

* Fixed two memory leaks: one in cursor deallocation and one in row
  fetching (.fetchXXX() methods.)

What's new in psycopg 1.99.9
----------------------------

* Added simple pooling code (psycopg.pool module); see the reworked
  examples/threads.py for example code.

* Added DECIMAL typecaster to convert postgresql DECIMAL and NUMERIC
  types (i.e., all types with an OID of NUMERICOID.) Note that the
  DECIMAL typecaster does not set scale and precision on the created
  objects but uses Python defaults.

* ZPsycopgDA back in and working using the new pooling code.

* Isn't that enough? :)
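A minimal sketch of the pooling code introduced in 1.99.9, using the
module's current name psycopg2.pool; the DSN is a placeholder:

    import psycopg2.pool

    # a pool keeping between 1 and 5 connections open
    pool = psycopg2.pool.SimpleConnectionPool(1, 5, "dbname=test")

    conn = pool.getconn()
    try:
        cur = conn.cursor()
        cur.execute("SELECT 1")
        print(cur.fetchone())
        conn.commit()
    finally:
        pool.putconn(conn)   # return the connection to the pool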
What's new in psycopg 1.99.8
----------------------------

* added support for UNICODE queries.

* added UNICODE typecaster; to activate it just do:

      psycopg.extensions.register_type(psycopg.extensions.UNICODE)

  Note that the UNICODE typecaster overrides the STRING one, so it
  is not activated by default.

* cursors now really support the iterator protocol.

* solved the rounding errors in time conversions.

* now cursors support .fileno() and .isready() methods, to be used
  in select() calls.

* .copy_from() and .copy_in() methods are back in (still using the
  old protocol, will be updated to use the new one in the next
  release.)

* fixed memory corruption bug reported on win32 platform.

What's new in psycopg 1.99.7
----------------------------

* added support for tuple factories in cursor objects (removed
  factory argument in favor of a .tuple_factory attribute on the
  cursor object); see the new module psycopg.extras for a cursor
  (DictCursor) that returns rows as objects that support indexing
  both by position and column name.

* added support for tzinfo objects in datetime.timestamp objects:
  the PostgreSQL type "timestamp with time zone" is converted to
  datetime.timestamp with a FixedOffsetTimezone initialized as
  necessary.

What's new in psycopg 1.99.6
----------------------------

* sslmode parameter from 1.1.x

* various datetime conversion improvements.

* now psycopg should compile without mx or without native datetime
  (not both, obviously.)

* included various win32/MSVC fixes (pthread.h changes, winsock2
  library, include path in setup.py, etc.)

* ported interval fixes from 1.1.14/1.1.15.

* the last query executed by a cursor is now available in the .query
  attribute.

* conversion of unicode strings to backend encoding now uses a table
  (that still needs to be filled.)

* cursors now have a .mogrify() method that returns the query string
  instead of executing it.

* connection objects now have a .dsn read-only attribute that holds
  the connection string.

* moved psycopg C module to _psycopg and made psycopg a python
  module: this allows for a neat separation of DBAPI-2.0
  functionality and psycopg extensions; the psycopg namespace will
  be also used to provide python-only extensions (like the pooling
  code, some ZPsycopgDA support functions and the like.)

What's new in psycopg 1.99.3
----------------------------

* added support for python 2.3 datetime types (both ways) and made
  datetime the default set of typecasters when available.

* initial working support for unicode bound variables: UTF-8 and
  latin-1 backend encodings are natively supported (and the
  encoding.py example even works!)

* added .set_client_encoding() method on the connection object.

* added examples: dt.py, encoding.py, binary.py, lastrowid.py.

What's new in psycopg 1.99.2
----------------------------

* better typecasting:

  * DateTimeDelta used for postgresql TIME (merge from 1.1)

  * BYTEA is now converted to a real buffer object, not to a string

  * buffer objects are now adapted into Binary objects
    automatically.

* ported scroll method from 1.1 (DBAPI-2.0 extension for cursors)

* initial support for some DBAPI-2.0 extensions:

  * .rownumber attribute for cursors

  * .connection attribute for cursors

  * .next() and .__iter__() methods to have cursors support the
    iterator protocol

* all exception objects are exported to the connection object

What's new in psycopg 1.99.1
----------------------------

* implemented microprotocols to adapt arbitrary types to the
  interface used by psycopg to bind variables in execute;

* moved qstring, pboolean and mxdatetime to the new adapter layout
  (binary is still missing; python 2.3 datetime needs to be written).

What's new in psycopg 1.99.0
----------------------------

* reorganized the whole source tree;

* async core is in place;

* split QuotedString objects from the mx stuff;

* dropped autotools and moved to pythonic setup.py (needs work.)
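To close, a minimal sketch tying together two extensions described
above: the UNICODE typecaster from 1.99.8 (using the module's
current name psycopg2) and the .mogrify() method from 1.99.6; the
DSN and query values are placeholders:

    import psycopg2
    import psycopg2.extensions

    # make psycopg return text columns as unicode objects
    psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)

    conn = psycopg2.connect("dbname=test")   # placeholder DSN
    cur = conn.cursor()

    # mogrify() returns the query after argument binding, without
    # executing it: handy to inspect what is sent to the server
    print(cur.mogrify("SELECT %s, %s", (u"hello", 42)))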