TL;DR: If fields loaded into a Pandas DataFrame themselves contain JSON documents, how can they be worked with in a Pandas-like fashion?
Currently I'm dumping JSON/dictionary results from a Twitter library (twython) directly into a Mongo collection (called `users` here).
from twython import Twython
from pymongo import MongoClient
tw = Twython(...<auth>...)
# Using mongo as object storage
client = MongoClient()
db = client.twitter
user_coll = db.users
user_batch = ... # collection of user ids
user_dict_batch = tw.lookup_user(user_id=user_batch)
for user_dict in user_dict_batch:
    if user_coll.find_one({"id": user_dict['id']}) is None:
        user_coll.insert(user_dict)
After populating this database I read the documents into Pandas:
# Pull straight from mongo to pandas
cursor = user_coll.find()
df = pandas.DataFrame(list(cursor))
This works like magic.
I'd like to be able to mangle the 'status' field Pandas style (directly accessing attributes). Is there a way?
EDIT: Something like df['status:text']. Status has fields like 'text' and 'created_at'. One option could be flattening/normalizing this JSON field, like the pull request Wes McKinney was working on.
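For what it's worth, that kind of normalization has since shipped in pandas itself as `json_normalize` (top-level `pandas.json_normalize` in pandas 1.0+, `pandas.io.json.json_normalize` in older versions). A minimal sketch, using made-up records shaped like the Twitter user documents:

```python
import pandas as pd

# Hypothetical user documents with a nested 'status' field.
records = [
    {"id": 1, "status": {"text": "hello", "created_at": "Mon"}},
    {"id": 2, "status": {"text": "world", "created_at": "Tue"}},
]

# json_normalize flattens nested dicts into dotted column names,
# e.g. 'status.text' and 'status.created_at'.
flat_users = pd.json_normalize(records)
print(flat_users["status.text"].tolist())  # ['hello', 'world']
```

You'd normalize the documents pulled from the cursor before building the DataFrame, rather than flattening afterwards.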
One solution is just to smash it with the Series constructor:
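A sketch of that, with a toy DataFrame standing in for the Mongo-backed one (the 'status' dicts here are made up):

```python
import pandas as pd

# Each 'status' cell holds a nested dict, as in the Twitter documents.
df = pd.DataFrame({
    "id": [1, 2],
    "status": [
        {"text": "hello", "created_at": "Mon Jan 01"},
        {"text": "world", "created_at": "Tue Jan 02"},
    ],
})

# Applying the Series constructor to the column expands each dict
# into its own set of columns.
status_df = df["status"].apply(pd.Series)
print(status_df["text"].tolist())  # ['hello', 'world']
```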
In some cases you'll want to concat this to the DataFrame in place of the dict row:
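Something along these lines, again with a stand-in DataFrame: drop the raw dict column and concat the expanded columns back on.

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2],
    "status": [
        {"text": "hello", "created_at": "Mon Jan 01"},
        {"text": "world", "created_at": "Tue Jan 02"},
    ],
})

# Replace the dict column with its expanded columns.
flat = pd.concat(
    [df.drop("status", axis=1), df["status"].apply(pd.Series)],
    axis=1,
)
print(list(flat.columns))  # ['id', 'text', 'created_at']
```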
If it goes deeper, you can do this a few times...
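For example, with a doubly nested field (the shape here is hypothetical), just apply the Series constructor at each level:

```python
import pandas as pd

# 'status' contains a nested 'user' dict of its own.
df = pd.DataFrame({
    "status": [
        {"text": "hi", "user": {"screen_name": "alice"}},
        {"text": "yo", "user": {"screen_name": "bob"}},
    ],
})

level1 = df["status"].apply(pd.Series)    # expands 'text' and 'user'
level2 = level1["user"].apply(pd.Series)  # expands 'screen_name'
print(level2["screen_name"].tolist())  # ['alice', 'bob']
```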