How do I serialize pyodbc cursor output (from .fetchone, .fetchmany or .fetchall) as a Python dictionary?
I'm using bottlepy and need to return a dict so that it can return it as JSON.
I know this question is old, but it helped me figure out how to do what I needed, which is slightly different from what the OP was asking for, so I thought I'd share to help anyone else who needs what I needed: if you want to fully generalize a routine that performs SQL SELECT queries, but you need to reference the results by an index number rather than a name, you can do this with a list of lists instead of a dictionary.

Each row of returned data is represented in the returned list as a list of field (column) values. The column names can be provided as the first entry of the returned list, so parsing the returned list in the calling routine can be really easy and flexible. In this way, the routine doing the database call doesn't need to know anything about the data it's handling. Here is such a routine:
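A minimal sketch of what such a routine can look like, assuming a pyodbc connection is passed in (the name run_select_query and the parameter handling are illustrative, not a fixed API):

```python
def run_select_query(conn, sql, params=()):
    """Run a SELECT and return a list of lists: the first inner list holds
    the column names, each following inner list holds one row's values."""
    cursor = conn.cursor()
    if params:
        cursor.execute(sql, params)
    else:
        cursor.execute(sql)
    results = [[column[0] for column in cursor.description]]  # header row first
    for row in cursor.fetchall():
        results.append(list(row))
    cursor.close()
    return results
```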
Assuming you know your column names! Also, here are three different solutions; you probably want to look at the last one!
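A sketch of the first approach, assuming cursor has already executed the query and using placeholder column names:

```python
colnames = ['city', 'area', 'street']  # placeholder column names

# Solution 1: dict keyed by row number, each value a {column: value} dict.
data = {}
for rowindex, row in enumerate(cursor.fetchall()):
    data[rowindex] = {}
    for colindex, colname in enumerate(colnames):
        data[rowindex][colname] = row[colindex]
```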
That's an indexed version, not the most beautiful solution, but it will work. Another would be to use the column name as the dictionary key, with a list within each key containing the data in order of row number, by doing:
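Again a sketch, with the same placeholder column names and cursor:

```python
colnames = ['city', 'area', 'street']  # placeholder column names

# Solution 2: dict keyed by column name, each value a list of that
# column's values in row order.
data = {col: [] for col in colnames}
for row in cursor.fetchall():
    colcounter = 0
    for col in colnames:
        data[col].append(row[colcounter])
        colcounter += 1
```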
Writing this, I understand that doing for col in colnames could be replaced by for colindex in range(0, len(colnames)), but you get the idea. The example below, though, would be useful when not fetching all the data at once, but one row at a time, for instance:

Using a dict for each row of data:
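A sketch of that per-row approach (the helper name row_to_dict is illustrative):

```python
def row_to_dict(colnames, row):
    """Map one fetched row onto its column names."""
    data = {}
    for colindex in range(0, len(colnames)):
        data[colnames[colindex]] = row[colindex]
    return data

row = cursor.fetchone()
if row is not None:
    print(row_to_dict(['city', 'area', 'street'], row)['city'])
```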
Getting table names (I think... thanks to Foo Stack):
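One way to do this with pyodbc is the cursor's built-in tables() catalog helper; a small sketch:

```python
# List the user tables visible on the current connection.
for table in cursor.tables(tableType='TABLE'):
    print(table.table_name)
```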
A more direct solution from @beargle below!
Using @Beargle's result with bottlepy, I was able to create this very concise query-exposing endpoint:
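A sketch of what that endpoint can look like (the route path, DSN and port are placeholders, not the original code):

```python
from bottle import route, run
import pyodbc

conn = pyodbc.connect('DSN=mydsn')  # placeholder connection string
cursor = conn.cursor()

@route('/api/query/<sql>')
def query(sql):
    # Only sensible for trusted, internal use: this executes raw SQL from the URL.
    cursor.execute(sql)
    columns = [column[0] for column in cursor.description]
    # Bottle serializes a returned dict to JSON automatically.
    return {'results': [dict(zip(columns, row)) for row in cursor.fetchall()]}

run(host='localhost', port=8080)
```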
Here is a short-form version you might be able to use:
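Something along these lines (written for Python 3, where zip returns an iterator; the query is a placeholder):

```python
cursor.execute("SELECT city, area, street FROM addresses")  # placeholder query

columns = next(zip(*cursor.description))  # zip(*cursor.description)[0] on Python 2

# Pick whichever fits: one row as a dict, or every row as a list of dicts.
single_row = dict(zip(columns, cursor.fetchone()))
multiple_rows = [dict(zip(columns, row)) for row in cursor.fetchall()]
```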
As you might be aware, when you add * to a list you basically strip away the list, leaving the individual list entries as arguments to the function you are calling. By using zip we pick the 1st through nth entries and zip them together, like the zipper in your pants.
So by using zip on an unpacked list of tuples, you get the elements grouped together by position, for instance:
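```python
>>> list(zip(*[('a', 1), ('b', 2)]))  # * unpacks the list into zip's arguments
[('a', 'b'), (1, 2)]
```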
Since description is a tuple of tuples, where each inner tuple describes the header and the data type for each column, you can extract the first element of each of those tuples with:
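```python
headers = next(zip(*cursor.description))  # on Python 2: zip(*cursor.description)[0]
```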
which is equivalent to:
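```python
headers = tuple(column[0] for column in cursor.description)
```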
Mainly going off @Torxed's response, I created a fully generalised set of functions to pull the schema and data into a dictionary:
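A sketch of what such a set of helpers can look like (the names get_column_names, rows_to_dicts and query_to_dicts are illustrative, not the original functions):

```python
def get_column_names(cursor):
    """Column names for the query most recently executed on this cursor."""
    return [column[0] for column in cursor.description]

def rows_to_dicts(cursor, rows):
    """Turn a sequence of fetched rows into a list of {column: value} dicts."""
    columns = get_column_names(cursor)
    return [dict(zip(columns, row)) for row in rows]

def query_to_dicts(conn, sql):
    """Run a SELECT and return its full result set as a list of dicts."""
    cursor = conn.cursor()
    cursor.execute(sql)
    results = rows_to_dicts(cursor, cursor.fetchall())
    cursor.close()
    return results

# Usage sketch (placeholder DSN and query):
# conn = pyodbc.connect('DSN=mydsn')
# print(query_to_dicts(conn, 'SELECT city, area, street FROM addresses'))
```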
Feel free to go all code-golf on this to reduce the lines; but in the meantime, it works! ;)
I like @bryan's and @foo-stack's answers. If you are working with PostgreSQL and you are using psycopg2, you can use some goodies from psycopg2 to achieve the same thing by specifying the cursor factory as DictCursor when creating your cursor from the connection, like this: cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
So now you can execute your SQL query and you'll get a dictionary to fetch your results from, without the need to map them by hand.
Please note that you'll have to import psycopg2.extras for that to work.
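A minimal end-to-end sketch (connection settings and query are placeholders):

```python
import psycopg2
import psycopg2.extras  # required for DictCursor

conn = psycopg2.connect('dbname=mydb user=me')  # placeholder connection settings
cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)

cur.execute('SELECT city, area, street FROM addresses')  # placeholder query
for row in cur.fetchall():
    # Each row behaves like a dict (and still supports index access),
    # and converts to a plain dict for JSON serialization.
    print(row['city'], dict(row))
```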