I am collecting a large amount of data in MongoDB that I need to analyze. How do I import that data into pandas?
I am new to pandas and numpy.
EDIT: The MongoDB collection contains sensor values tagged with date and time. The sensor values are of float datatype.
Sample data:
{
"_cls" : "SensorReport",
"_id" : ObjectId("515a963b78f6a035d9fa531b"),
"_types" : [
"SensorReport"
],
"Readings" : [
{
"a" : 0.958069536790466,
"_types" : [
"Reading"
],
"ReadingUpdatedDate" : ISODate("2013-04-02T08:26:35.297Z"),
"b" : 6.296118156595,
"_cls" : "Reading"
},
{
"a" : 0.95574014778624,
"_types" : [
"Reading"
],
"ReadingUpdatedDate" : ISODate("2013-04-02T08:27:09.963Z"),
"b" : 6.29651468650064,
"_cls" : "Reading"
},
{
"a" : 0.953648289182713,
"_types" : [
"Reading"
],
"ReadingUpdatedDate" : ISODate("2013-04-02T08:27:37.545Z"),
"b" : 7.29679823731148,
"_cls" : "Reading"
},
{
"a" : 0.955931884300997,
"_types" : [
"Reading"
],
"ReadingUpdatedDate" : ISODate("2013-04-02T08:28:21.369Z"),
"b" : 6.29642922525632,
"_cls" : "Reading"
},
{
"a" : 0.95821381,
"_types" : [
"Reading"
],
"ReadingUpdatedDate" : ISODate("2013-04-02T08:41:20.801Z"),
"b" : 7.28956613,
"_cls" : "Reading"
},
{
"a" : 4.95821335,
"_types" : [
"Reading"
],
"ReadingUpdatedDate" : ISODate("2013-04-02T08:41:36.931Z"),
"b" : 6.28956574,
"_cls" : "Reading"
},
{
"a" : 9.95821341,
"_types" : [
"Reading"
],
"ReadingUpdatedDate" : ISODate("2013-04-02T08:42:09.971Z"),
"b" : 0.28956488,
"_cls" : "Reading"
},
{
"a" : 1.95667927,
"_types" : [
"Reading"
],
"ReadingUpdatedDate" : ISODate("2013-04-02T08:43:55.463Z"),
"b" : 0.29115237,
"_cls" : "Reading"
}
],
"latestReportTime" : ISODate("2013-04-02T08:43:55.463Z"),
"sensorName" : "56847890-0",
"reportCount" : 8
}
Answer 1:
pymongo might give you a hand. The following is some code I use:
import pandas as pd
from pymongo import MongoClient


def _connect_mongo(host, port, username, password, db):
    """ A util for making a connection to mongo """

    if username and password:
        mongo_uri = 'mongodb://%s:%s@%s:%s/%s' % (username, password, host, port, db)
        conn = MongoClient(mongo_uri)
    else:
        conn = MongoClient(host, port)

    return conn[db]


def read_mongo(db, collection, query={}, host='localhost', port=27017, username=None, password=None, no_id=True):
    """ Read from Mongo and Store into DataFrame """

    # Connect to MongoDB
    db = _connect_mongo(host=host, port=port, username=username, password=password, db=db)

    # Make a query to the specific DB and Collection
    cursor = db[collection].find(query)

    # Expand the cursor and construct the DataFrame
    df = pd.DataFrame(list(cursor))

    # Delete the _id
    if no_id:
        del df['_id']

    return df
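For the sample documents above, a minimal usage sketch might look like this (the database name sensor_db and collection name sensor_reports are assumptions for illustration):

# Hypothetical call; adjust the db/collection names and query to your setup.
df = read_mongo('sensor_db', 'sensor_reports', query={'sensorName': '56847890-0'})
print(df[['sensorName', 'reportCount', 'latestReportTime']].head())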
Answer 2:
You can load your MongoDB data into a pandas DataFrame using this code. It works for me; hopefully it works for you too.
import pymongo
import pandas as pd
from pymongo import MongoClient
client = MongoClient()
db = client.database_name
collection = db.collection_name
data = pd.DataFrame(list(collection.find()))
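If the collection is large, it can also help to pass a filter and a projection to find() so only the fields you need are pulled over the wire (the filter value below is just an illustration based on the sample data):

# Fetch selected fields only and drop the MongoDB _id on the server side.
cursor = collection.find(
    {'sensorName': '56847890-0'},
    {'_id': 0, 'sensorName': 1, 'reportCount': 1, 'Readings': 1}
)
data = pd.DataFrame(list(cursor))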
Answer 3:
Monary does exactly that, and it is super fast. (another link)
See this cool post, which includes a quick tutorial and some timings.
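A rough sketch of a Monary query, based on memory of its documentation (the exact signature can differ between Monary versions, and the database, collection, field and type names below are assumptions):

from monary import Monary

# Pull a numeric field directly into NumPy arrays, bypassing Python dicts.
client = Monary('127.0.0.1')
arrays = client.query('sensor_db', 'sensor_reports', {}, ['reportCount'], ['int32'])
client.close()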
Answer 4:
import pandas as pd
from odo import odo
data = odo('mongodb://localhost/db::collection', pd.DataFrame)
Answer 5:
Per PEP 20, simple is better than complex:
import pandas as pd
df = pd.DataFrame.from_records(db.<database_name>.<collection_name>.find())
You can include conditions just as you would when working with a regular MongoDB database, or even use find_one() to get only one element from the database, etc.
Voila!
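A hedged illustration of that point (assuming db is a MongoClient here, with sensor_db / sensor_reports as placeholder database and collection names):

# Filter on a field just like a normal MongoDB query...
df = pd.DataFrame.from_records(db.sensor_db.sensor_reports.find({'reportCount': {'$gte': 8}}))

# ...or fetch a single document.
doc = db.sensor_db.sensor_reports.find_one({'sensorName': '56847890-0'})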
Answer 6:
For processing out-of-core (not fitting into RAM) data efficiently (i.e. with parallel execution), you can try the Python Blaze ecosystem: Blaze / Dask / Odo.
Blaze (and Odo) has out-of-the-box functions to deal with MongoDB.
A few useful articles to start off:
- Introducing Blaze Expressions (with a MongoDB query example)
- ReproduceIt: reddit word count
- Differences between Dask Arrays and Blaze
And an article which shows what amazing things are possible with the Blaze stack: Analyzing 1.7 Billion Reddit Comments with Blaze and Impala (essentially, querying 975 GB of reddit comments in seconds).
P.S. I'm not affiliated with any of these technologies.
Answer 7:
Using
pandas.DataFrame(list(...))
will consume a lot of memory if the iterator/generator result is large.
It is better to build small chunks and concat them at the end:
import pandas as pd

def iterator2dataframes(iterator, chunk_size: int):
    """Turn an iterator into multiple small pandas.DataFrame

    This is a balance between memory and efficiency
    """
    records = []
    frames = []
    for i, record in enumerate(iterator):
        records.append(record)
        if i % chunk_size == chunk_size - 1:
            frames.append(pd.DataFrame(records))
            records = []
    if records:
        frames.append(pd.DataFrame(records))
    return pd.concat(frames)
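A minimal usage sketch, assuming collection is a pymongo collection handle and 10000 is an arbitrary chunk size:

# Stream the cursor into the DataFrame in chunks instead of building one huge list.
df = iterator2dataframes(collection.find(), 10000)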
Answer 8:
Another option I found to be very useful is:
from pandas.io.json import json_normalize
cursor = my_collection.find()
df = json_normalize(cursor)
This way you get the unfolding of nested MongoDB documents for free.
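To expand the nested Readings array from the sample document into one row per reading, json_normalize can also take a record_path and meta fields. A sketch (in recent pandas versions json_normalize is exposed as pandas.json_normalize; the field names come from the sample data):

import pandas as pd

docs = list(my_collection.find())
# One row per element of "Readings", carrying the parent sensorName along.
readings = pd.json_normalize(docs, record_path='Readings', meta=['sensorName'])
print(readings[['sensorName', 'ReadingUpdatedDate', 'a', 'b']].head())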
Answer 9:
http://docs.mongodb.org/manual/reference/mongoexport
Export to CSV and use read_csv,
or export to JSON and use DataFrame.from_records.
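A hedged sketch of that workflow (database, collection and field names are assumptions based on the sample document; check mongoexport --help for the flags your MongoDB version supports):

import pandas as pd

# Shell step, run outside Python (roughly):
#   mongoexport --db sensor_db --collection sensor_reports \
#       --type=csv --fields sensorName,reportCount,latestReportTime --out reports.csv
df = pd.read_csv('reports.csv', parse_dates=['latestReportTime'])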
Answer 10:
Following this great answer by waitingkuo, I would like to add the possibility of doing this with a chunksize, in line with .read_sql() and .read_csv(). I expand on the answer by Deu Leung (the iterator2dataframes answer above) by avoiding going one by one through each "record" of the "iterator"/"cursor". I will borrow the previous read_mongo function.
def read_mongo(db,
               collection, query={},
               host='localhost', port=27017,
               username=None, password=None,
               chunksize=100, no_id=True):
    """ Read from Mongo and Store into DataFrame """

    # Connect to MongoDB
    #db = _connect_mongo(host=host, port=port, username=username, password=password, db=db)
    client = MongoClient(host=host, port=port)

    # Make a query to the specific DB and Collection
    db_aux = client[db]

    # Some variables to create the chunks
    skips_variable = range(0, db_aux[collection].find(query).count(), int(chunksize))
    if len(skips_variable) <= 1:
        skips_variable = [0, len(skips_variable)]

    # Iteration to create the dataframe in chunks.
    for i in range(1, len(skips_variable)):

        # Expand the cursor and construct the DataFrame
        #df_aux = pd.DataFrame(list(cursor_aux[skips_variable[i-1]:skips_variable[i]]))
        df_aux = pd.DataFrame(list(db_aux[collection].find(query)[skips_variable[i-1]:skips_variable[i]]))

        if no_id:
            del df_aux['_id']

        # Concatenate the chunks into a unique df
        if 'df' not in locals():
            df = df_aux
        else:
            df = pd.concat([df, df_aux], ignore_index=True)

    return df
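A quick illustrative call, with assumed database/collection names and an arbitrary chunk size:

df = read_mongo('sensor_db', 'sensor_reports', chunksize=5000)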
Answer 11:
A similar approach to Rafael Valero, waitingkuo and Deu Leung, using pagination:
def read_mongo(
        # db,
        collection, query=None,
        # host='localhost', port=27017, username=None, password=None,
        chunksize=100, page_num=1, no_id=True):

    # Connect to MongoDB
    db = _connect_mongo(host=host, port=port, username=username, password=password, db=db)

    # Calculate number of documents to skip
    skips = chunksize * (page_num - 1)

    # Sorry, this is in spanish
    # https://www.toptal.com/python/c%C3%B3digo-buggy-python-los-10-errores-m%C3%A1s-comunes-que-cometen-los-desarrolladores-python/es
    if not query:
        query = {}

    # Make a query to the specific DB and Collection
    cursor = db[collection].find(query).skip(skips).limit(chunksize)

    # Expand the cursor and construct the DataFrame
    df = pd.DataFrame(list(cursor))

    # Delete the _id
    if no_id:
        del df['_id']

    return df
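A minimal sketch of how the pages could be stitched back together (the collection name and chunk size are assumptions, and it presumes the commented-out connection parameters in read_mongo above have been filled in; no_id=False sidesteps the del on a possibly empty final page):

pages = []
page_num = 1
while True:
    # Fetch one page at a time; an empty page means we are past the end.
    page = read_mongo('sensor_reports', chunksize=1000, page_num=page_num, no_id=False)
    if page.empty:
        break
    pages.append(page)
    page_num += 1

df = pd.concat(pages, ignore_index=True)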
Source: How to import data from mongodb to pandas?