read_sql(sql, con, index_col: 'str | Sequence[str] | None' = None, coerce_float: 'bool' = True, params=None, parse_dates=None, columns=None, chunksize: 'int | None' = None) -> 'DataFrame | Iterator[DataFrame]'
This function is a convenience wrapper around read_sql_table and read_sql_query (for backward compatibility). It will delegate to the specific function depending on the provided input: a SQL query will be routed to read_sql_query, while a database table name will be routed to read_sql_table. Note that the delegated function might have more specific notes about its functionality not listed here.
sql: SQL query to be executed or a table name.
con: Using SQLAlchemy makes it possible to use any DB supported by that library. If a DBAPI2 object, only sqlite3 is supported. The user is responsible for engine disposal and connection closure for the SQLAlchemy connectable; str connections are closed automatically.
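As a hedged sketch, either flavour of connectable can be passed; the PostgreSQL URL, credentials and table name below are placeholders, and the SQLAlchemy lines assume SQLAlchemy plus a suitable driver are installed:

>>> import sqlite3
>>> import pandas as pd
>>> dbapi_conn = sqlite3.connect(':memory:')  # DBAPI2 route: only sqlite3 is supported
>>> from sqlalchemy import create_engine  # doctest:+SKIP
>>> engine = create_engine('postgresql://user:pass@host/db_name')  # doctest:+SKIP
>>> df = pd.read_sql('SELECT * FROM some_table', engine)  # doctest:+SKIP
>>> # The user is responsible for disposing of the engine afterwards.
>>> engine.dispose()  # doctest:+SKIP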
index_col: Column(s) to set as index (MultiIndex).
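For illustration, a small sketch (the table t and its data are invented here):

>>> import sqlite3
>>> import pandas as pd
>>> conn = sqlite3.connect(':memory:')
>>> _ = conn.execute('CREATE TABLE t (key TEXT, val INTEGER)')
>>> _ = conn.execute("INSERT INTO t VALUES ('a', 1), ('b', 2)")
>>> df = pd.read_sql('SELECT key, val FROM t', conn, index_col='key')
>>> df.index.name
'key'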
coerce_float: Attempts to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point, useful for SQL result sets.
params: List of parameters to pass to the execute method. The syntax used to pass parameters is database driver dependent. Check your database driver documentation for which of the five syntax styles, described in PEP 249's paramstyle, is supported. E.g. psycopg2 uses %(name)s, so use params={'name': 'value'}.
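A minimal sketch using sqlite3, whose paramstyle is qmark (?); the users table is invented for illustration. With psycopg2 the query would use %(name)s placeholders and params would be a dict instead:

>>> import sqlite3
>>> import pandas as pd
>>> conn = sqlite3.connect(':memory:')
>>> _ = conn.execute('CREATE TABLE users (id INTEGER, name TEXT)')
>>> _ = conn.execute("INSERT INTO users VALUES (1, 'ada'), (2, 'bob')")
>>> df = pd.read_sql('SELECT name FROM users WHERE id = ?', conn, params=(1,))
>>> list(df['name'])
['ada']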
parse_dates: One of the following:
- List of column names to parse as dates.
- Dict of {column_name: format string} where format string is strftime compatible in case of parsing string times, or is one of (D, s, ns, ms, us) in case of parsing integer timestamps.
- Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime. Especially useful with databases without native Datetime support, such as SQLite.
columns: List of column names to select from SQL table (only used when reading a table).
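A sketch, assuming SQLAlchemy is installed (columns is only honoured when a table name is passed, which requires a SQLAlchemy connectable; the demo table is invented):

>>> import pandas as pd
>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite:///:memory:')
>>> _ = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']}).to_sql('demo', engine, index=False)
>>> list(pd.read_sql('demo', engine, columns=['a']).columns)
['a']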
chunksize: If specified, return an iterator where chunksize is the number of rows to include in each chunk.
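A sketch of the iterator behaviour (the nums table and its three rows are invented):

>>> import sqlite3
>>> import pandas as pd
>>> conn = sqlite3.connect(':memory:')
>>> _ = conn.execute('CREATE TABLE nums (n INTEGER)')
>>> _ = conn.execute('INSERT INTO nums VALUES (1), (2), (3)')
>>> for chunk in pd.read_sql('SELECT n FROM nums', conn, chunksize=2):
...     print(len(chunk))
2
1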
Read SQL query or database table into a DataFrame.
read_sql_query: Read SQL query into a DataFrame.
read_sql_table: Read SQL database table into a DataFrame.
Read data from SQL via either a SQL query or a SQL tablename. When using a SQLite database only SQL queries are accepted; providing only the SQL tablename will result in an error.
>>> from sqlite3 import connect
>>> conn = connect(':memory:')
>>> df = pd.DataFrame(data=[[0, '10/11/12'], [1, '12/11/10']],
...                   columns=['int_column', 'date_column'])
>>> df.to_sql('test_data', conn)
2

>>> pd.read_sql('SELECT int_column, date_column FROM test_data', conn)
   int_column date_column
0           0    10/11/12
1           1    12/11/10
>>> pd.read_sql('test_data', 'postgres:///db_name') # doctest:+SKIP
Apply date parsing to columns through the parse_dates argument

>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
...             conn,
...             parse_dates=["date_column"])
   int_column date_column
0           0  2012-10-11
1           1  2010-12-11
The parse_dates argument calls pd.to_datetime on the provided columns. Custom argument values for applying pd.to_datetime on a column are specified via a dictionary format:

1. Ignore errors while parsing the values of "date_column"

>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
...             conn,
...             parse_dates={"date_column": {"errors": "ignore"}})
   int_column date_column
0           0  2012-10-11
1           1  2010-12-11
2. Apply a dayfirst date parsing order on the values of "date_column"

>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
...             conn,
...             parse_dates={"date_column": {"dayfirst": True}})
   int_column date_column
0           0  2012-11-10
1           1  2010-11-12
3. Apply custom formatting when date parsing the values of "date_column"

>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
...             conn,
...             parse_dates={"date_column": {"format": "%d/%m/%y"}})
   int_column date_column
0           0  2012-11-10
1           1  2010-11-12
The following pages refer to this document either explicitly or contain code examples using this.
pandas.io.sql.SQLDatabase.read_query
pandas.io.sql.read_sql_query
pandas.io.sql.read_sql_table