I'm looking for a way to perform a GROUP BY operation on a Datastore query using MapReduce. AFAIK App Engine's GQL doesn't support GROUP BY itself, and a good approach suggested by other developers is to use MapReduce.
I downloaded the source code, studied the demo code, and tried to adapt it to my case, but without success. Here is how I tried to do it. Maybe everything I did is wrong, so if anyone could help me with that, I would be grateful.
What I want to do is: I have a bunch of contacts in the datastore, and each contact has a date. There are a number of repeated contacts with the same date. What I want is a simple GROUP BY: gather the contacts that have the same name and date.
E.g:
Let's say I have these contacts:
- CONTACT_NAME: Foo1 | DATE: 01-10-2012
- CONTACT_NAME: Foo2 | DATE: 02-05-2012
- CONTACT_NAME: Foo1 | DATE: 01-10-2012
So after the MapReduce operation it would be something like this:
- CONTACT_NAME: Foo1 | DATE: 01-10-2012
- CONTACT_NAME: Foo2 | DATE: 02-05-2012
For the GROUP BY functionality I think word count does the job.
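To make that concrete, here is a plain-Python sketch (no App Engine dependencies) of how the word-count pattern doubles as a GROUP BY: the map step emits a composite (name, date) key, the shuffle phase groups identical keys, and the reduce step collapses each group into one record. The dict field names `contact_name` and `date` are assumptions based on the example entities above.

```python
def group_map(contact):
    # Emit a composite key so the shuffle phase groups identical
    # (contact_name, date) pairs together, like word count does for words.
    yield ('%s|%s' % (contact['contact_name'], contact['date']), 1)

def group_reduce(key, values):
    # Called once per distinct key: one output record per group.
    name, date = key.split('|')
    yield {'contact_name': name, 'date': date, 'count': sum(values)}

def run_locally(contacts):
    # Simulate map -> shuffle -> reduce in memory.
    shuffled = {}
    for contact in contacts:
        for key, value in group_map(contact):
            shuffled.setdefault(key, []).append(value)
    results = []
    for key, values in sorted(shuffled.items()):
        results.extend(group_reduce(key, values))
    return results

contacts = [
    {'contact_name': 'Foo1', 'date': '01-10-2012'},
    {'contact_name': 'Foo2', 'date': '02-05-2012'},
    {'contact_name': 'Foo1', 'date': '01-10-2012'},
]
print(run_locally(contacts))
```

Running this on the three example contacts yields two records, one per distinct (name, date) pair, which is exactly the grouping I'm after.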
EDIT
The only thing shown in the log is:
/mapreduce/pipeline/run 200
Running GetContactData.WordCountPipeline((u'2012-02-02',), *{})#da26a9b555e311e19b1e6d324d450c1a
END EDIT
If I'm doing something wrong, or if I'm using the wrong approach to do a GROUP BY with MapReduce, please show me how to do it with MapReduce.
Here is my code:
from Contacts import Contacts
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.api import mail
from google.appengine.ext.db import GqlQuery
from google.appengine.ext import db
from google.appengine.api import taskqueue
from google.appengine.api import users
from mapreduce.lib import files
from mapreduce import base_handler
from mapreduce import mapreduce_pipeline
from mapreduce import operation as op
from mapreduce import shuffler
import simplejson, logging, re
class GetContactData(webapp.RequestHandler):
    # Get the contacts for the given contact id
    def get(self):
        contactId = self.request.get('contactId')
        query_contacts = Contacts.all()
        query_contacts.filter('contact_id =', int(contactId))
        query_contacts.order('-timestamp_')
        contact_data = []
        if query_contacts is not None:
            for contact in query_contacts:
                pipeline = WordCountPipeline(contact.date)
                pipeline.start()
                record = {"contact_id": contact.contact_id,
                          "contact_name": contact.contact_name,
                          "contact_number": contact.contact_number,
                          "timestamp": contact.timestamp_,
                          "current_time": contact.current_time_,
                          "type": contact.type_,
                          "current_date": contact.date}
                contact_data.append(record)
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(simplejson.dumps(contact_data))
class WordCountPipeline(base_handler.PipelineBase):
    """A pipeline to run the word count MapReduce over contacts.

    Args:
      date: the contact date to pass to the mapper as a parameter.
    """
    def run(self, date):
        output = yield mapreduce_pipeline.MapreducePipeline(
            "word_count",
            "main.word_count_map",
            "main.word_count_reduce",
            "mapreduce.input_readers.DatastoreInputReader",
            "mapreduce.output_writers.BlobstoreOutputWriter",
            mapper_params={
                # DatastoreInputReader requires the entity kind to read from
                "entity_kind": "Contacts.Contacts",
                "date": date,
            },
            reducer_params={
                "mime_type": "text/plain",
            },
            shards=16)
        yield StoreOutput("WordCount", output)
class StoreOutput(base_handler.PipelineBase):
    """A pipeline to store the result of the MapReduce job in the database.

    Args:
      mr_type: the type of mapreduce job run (e.g., WordCount, Index)
      output: the blobstore location where the output of the job is stored
    """
    def run(self, mr_type, output):
        logging.info(output)  # here I should append the grouped results as JSON
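The pipeline above references "main.word_count_map" and "main.word_count_reduce", but those functions aren't shown. As a hedged sketch (not the demo's actual code), grouping versions of them might look roughly like this: in the real library, DatastoreInputReader calls the mapper once per entity and the reducer receives every emitted value for a key. Reading the "date" mapper parameter via the mapreduce context is omitted here, and the `contact_name`/`date` attribute names plus the `FakeContact` stand-in class are assumptions for illustration.

```python
class FakeContact(object):
    # Stand-in for a Contacts entity, used only to exercise the sketch.
    def __init__(self, contact_name, date):
        self.contact_name = contact_name
        self.date = date

def word_count_map(entity):
    # Called once per entity; emit a composite (name, date) key.
    yield ('%s|%s' % (entity.contact_name, entity.date), '1')

def word_count_reduce(key, values):
    # Called once per distinct key; len(values) is the duplicate count.
    name, date = key.split('|')
    yield '%s|%s: %d\n' % (name, date, len(values))
```

The reducer's output lines would end up in the blobstore file written by BlobstoreOutputWriter, one line per distinct (name, date) group.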
I based my code on what @autumngard provided in this question, modified it to fit my purpose, and it worked.