Fastest way to do data type conversion using csv.DictReader

Posted 2020-07-09 08:41

Question:

I'm working with a CSV file in python, which will have ~100,000 rows when in use. Each row has a set of dimensions (as strings) and a single metric (float).

As csv.DictReader or csv.reader return values as string only, I'm currently iterating over all rows and converting the one numeric value to a float.

for i in csvDict:
    i[col] = float(i[col])

Can anyone suggest a better way to do this? I've been playing around with various combinations of map, izip, and itertools, and have searched extensively for examples of doing it more efficiently, but unfortunately haven't had much success.
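
A minimal sketch of the kind of streaming variant in question (the column name 'metric' is an assumption, not from the original post): convert the numeric field while iterating over csv.DictReader, so the rows never have to be materialized as a list first.

import csv

def typed_rows(fileobj, col='metric'):
    for row in csv.DictReader(fileobj):
        row[col] = float(row[col])  # convert the single numeric value in place
        yield row                   # hand the row on without storing it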

In case it helps: I'm doing this on appengine. I believe that what I'm doing may be resulting in me hitting this error: Exceeded soft process size limit with 267.789 MB after servicing 11 requests total - I only get it when the CSV is quite large.

Edit: My Goal I'm parsing this CSV so that I can use it as a data source for the Google Visualizations API. The final data set will be loaded into a gviz DataTable for querying. Types must be specified when the table is constructed. My problem could also be solved if anyone knew of a good gviz csv->datatable converter in Python!

Edit2: My Code

I believe that my issue has to do with the way I attempt to fix the CSV types in fixCsvTypes(). Also, data_table.LoadData() expects an iterable object.

import csv
import datetime
import StringIO

import gviz_api


class GvizFromCsv(object):
  """Convert CSV to Gviz ready objects."""

  def __init__(self, csvFile, dateTimeFormat=None):
    self.fileObj = StringIO.StringIO(csvFile)
    self.csvDict = list(csv.DictReader(self.fileObj))
    self.dateTimeFormat = dateTimeFormat
    self.headers = {}
    self.ParseHeaders()
    self.fixCsvTypes()

  def IsNumber(self, st):
    try:
      float(st)
      return True
    except ValueError:
      return False

  def IsDate(self, st):
    try:
      datetime.datetime.strptime(st, self.dateTimeFormat)
      return True
    except ValueError:
      return False

  def ParseHeaders(self):
    """Attempts to figure out header types for gviz, based on first row"""
    for k, v in self.csvDict[0].items():
      if self.IsNumber(v):
        self.headers[k] = 'number'
      elif self.dateTimeFormat and self.IsDate(v):
        self.headers[k] = 'date'
      else:
        self.headers[k] = 'string'

  def fixCsvTypes(self):
    """Only fixes numbers."""
    update_to_numbers = []
    for k,v in self.headers.items():
      if v == 'number':
        update_to_numbers.append(k)
    for i in self.csvDict:
      for col in update_to_numbers:
        i[col] = float(i[col])

  def CreateDataTable(self):
    """creates a gviz data table"""
    data_table = gviz_api.DataTable(self.headers)
    data_table.LoadData(self.csvDict)
    return data_table
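
For reference, a hypothetical usage of this class might look like the following; the file name and date format are assumptions:

# Hypothetical usage sketch of the class above.
with open('report.csv') as f:
  converter = GvizFromCsv(f.read(), dateTimeFormat='%Y-%m-%d')
data_table = converter.CreateDataTable()
print data_table.ToJSon()  # the gviz_api DataTable can then be serialized, e.g. to JSON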

Answer 1:

I had first processed the CSV file with a regex, but since the data in the file is arranged very strictly in each row, we can simply use the split() function:

import gviz_api

scheme = [('col1','string','SURNAME'),('col2','number','ONE'),('col3','number','TWO')]
data_table = gviz_api.DataTable(scheme)

#  --- lines in surnames.csv are : --- 
#  surname,percent,cumulative percent,rank\n
#  SMITH,1.006,1.006,1,\n
#  JOHNSON,0.810,1.816,2,\n
#  WILLIAMS,0.699,2.515,3,\n

with open('surnames.csv') as f:

    def transf(surname,x,y):
        return (surname,float(x),float(y))

    f.readline()
    # to skip the first line surname,percent,cumulative percent,rank\n

    data_table.LoadData( transf(*line.split(',')[0:3]) for line in f )
    # to populate the data table by iterating in the CSV file

Or without defining a function:

import gviz_api

scheme = [('col1','string','SURNAME'),('col2','number','ONE'),('col3','number','TWO')]
data_table = gviz_api.DataTable(scheme)

#  --- lines in surnames.csv are : --- 
#  surname,percent,cumulative percent,rank\n
#  SMITH,1.006,1.006,1,\n
#  JOHNSON,0.810,1.816,2,\n
#  WILLIAMS,0.699,2.515,3,\n

with open('surnames.csv') as f:

    f.readline()
    # to skip the first line surname,percent,cumulative percent,rank\n

    data_table.LoadData( [el if n==0 else float(el) for n,el in enumerate(line.split(',')[0:3])] for line in f )
    # to populate the data table by iterating in the CSV file

At first, I believed I was obliged to populate the data table one row at a time, because I was using a regex and needed to extract the match groups before converting the numeric strings to floats. With split(), everything can be done in a single instruction with LoadData().

Hence, your code can be shortened. By the way, I don't see why it needs to define a class; a function seems sufficient to me:

def GvizFromCsv(filename):
  """ creates a gviz data table from a CSV file """

  data_table = gviz_api.DataTable([('col1','string','SURNAME'),
                                   ('col2','number','ONE'    ),
                                   ('col3','number','TWO'    ) ])

  #  --- with such a table schema , lines in the file must be like that: ---  
  #  blah, number, number, ...anything else...\n 
  #  SMITH,1.006,1.006, ...anything else...\n 
  #  JOHNSON,0.810,1.816, ...anything else...\n 
  #  WILLIAMS,0.699,2.515, ...anything else...\n

  with open(filename) as f:
    data_table.LoadData( [el if n==0 else float(el) for n,el in enumerate(line.split(',')[0:3])]
                         for line in f )
  return data_table


Now you need to examine whether the way the CSV data is read from another API can be plugged into this code, so as to keep the iterating principle for populating the data table.



Answer 2:

First, you don't need any conversion if you only need to visualize this data: gviz can handle JSON (text-based, as you know) or CSV (which you already have, so no parsing is required!). You can put the file in question on any reasonable web server and allow it to be accessed by the fancy GET requests gviz issues, basically by ignoring the parameters.
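
A minimal sketch of that idea, assuming the App Engine webapp2 framework and an illustrative file name (neither is from the original answer):

import webapp2

class CsvHandler(webapp2.RequestHandler):
    def get(self):
        # Serve the CSV file as-is and ignore the gviz request parameters.
        self.response.headers['Content-Type'] = 'text/csv'
        with open('surnames.csv') as f:
            self.response.write(f.read())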

But let's assume you need processing. It looks like you not only read the CSV file but also try to store it entirely in RAM. This may be impractical: you will hit the RAM limit sooner and sooner as you add more processing. Process the data one line at a time (or a reasonable number of lines at a time, if you apply window filters, etc.), and put the processed rows into the data store, not into a list. Likewise, when serving data via a GET request, read and process a row, write it to the response, and don't put it into a list or anything of the sort.
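
A rough sketch of that row-at-a-time pattern (the handler, file name, and column name are illustrative assumptions):

import csv
import webapp2

class ReportHandler(webapp2.RequestHandler):
    def get(self):
        with open('report.csv') as f:
            for row in csv.DictReader(f):
                row['metric'] = float(row['metric'])  # convert one row...
                self.response.write('%s\n' % row)     # ...write it out, keep nothing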

I see no problem with the conversion technique itself, as long as you use i reasonably later in the code and don't keep all the i's around as you go.



Answer 3:

There are two distinct things: "data source" and "data table".

"data source" is the name of the formatted data that is delivered by the Google Visualization API server as a Visualization web service:

This page describes how you can implement a data source to feed data
to visualizations built on the Google Visualization API. 

http://code.google.com/intl/fr/apis/visualization/documentation/dev/implementing_data_source.html 

The name "data source" includes the notion of "wire protocol":

In response [to a request], the data source returns properly formatted data 
that the visualization can use to render the graphic on the page. 
This request-response protocol is known as the Google Visualization API wire protocol,

http://code.google.com/intl/fr/apis/visualization/documentation/dev/implementing_data_source_overview.html

To implement the "data source", there are two possibilities:

• Use one of the data source libraries listed in the Data Sources and Tools Gallery. 
All the data source libraries listed on that page implement the wire protocol.

• Write your own data source from scratch, 

http://code.google.com/intl/fr/apis/visualization/documentation/dev/implementing_data_source_overview.html

From the following:

• ... Data Sources and Tools Gallery: (....) You therefore need to write only
the code needed to make your data available to the library in the form of a data table.

• Write your own data source from scratch, as described in the
Writing your own Data Source

I understand that, from scratch, we would need to implement the wire protocol ourselves plus the creation of a "data table", whereas with a data source library we only have to create the "data table".


There are pages on the creation of a "data source":

http://code.google.com/intl/fr/apis/visualization/documentation/dev/implementing_data_source_overview.html

http://code.google.com/intl/fr/apis/visualization/documentation/dev/gviz_api_lib.html

In my opinion, the example at http://groups.google.com/group/google-visualization-api/browse_thread/thread/9d1d941e0f0b32ed is about the creation of a "data source", and the answer given there is dubious. But that's not very clear to me.


But these pages and this subject are not the ones of interest to you: if I understand correctly, you want to know how to prepare the data, known as the "data table", that will be served through the "data source", not how to construct the "data source" itself.

3. Prepare your data. You'll need to prepare the data to visualize;
this means either specifying the data yourself in code, 
or querying a remote site for data.

http://code.google.com/intl/fr/apis/visualization/documentation/using_overview.html#keycomponents

A visualization stores the data that it visualizes as a two-dimensional data table with
rows and columns.
Cells are referenced by (row, column) where row is a zero-based row number, and column
is either a zero-based column index or a unique ID that you can specify. 

http://code.google.com/intl/fr/apis/visualization/documentation/using_overview.html#preparedata

So, the preparation of the "data table" is the key point.

Here it is:

There are two ways to create/populate your visualization's data table:

• Query a data provider. A data provider is another site that returns
a populated DataTable in response to a request from your code. 
Some data providers also accept SQL-like query strings to sort or 
filter the data. See Data Queries for more information and an example
of a query.

• Create and populate your own DataTable by hand. You can populate your
DataTable in code on your page. The simplest way to do this is to create
a DataTable object without any data and populate it by calling addRows()
on it. You can also pass a JavaScript literal representation of the data
table into the DataTable constructor, but this is more complex and is
covered on the reference page.

http://code.google.com/intl/fr/apis/visualization/documentation/using_overview.html#preparedata

Further information is found here:

2. Describe your table schema
The table schema is specified by the table_description parameter
passed into the constructor. You cannot change it later. 
The schema describes all the columns in the table: the data type of
each column, the ID, and an optional label.

Each column is described by a tuple: (ID [,data_type [,label [,custom_properties]]]). 



The table schema is a collection of column descriptor tuples. 
Every list member, dictionary key or dictionary value must be either 
another collection or a descriptor tuple. You can use any combination 
of dictionaries or lists, but every key, value, or member must
eventually evaluate to a descriptor tuple. Here are some examples.

• List of columns: [('a', 'number'), ('b', 'string')]
• Dictionary of lists: {('a', 'number'): [('b', 'number'), ('c', 'string')]}
• Dictionary of dictionaries: {('a', 'number'): {'b': 'number', 'c': 'string'}}
• And so on, with any level of nesting.


3. Populate your data
To add data to the table, build a structure of data elements in the
exact same structure as the table schema. So, for example, if your
schema is a list, the data must be a list: 

• schema: [("color", "string"), ("shape", "string")]
• data: [["blue", "square"], ["red", "circle"]]
If the schema is a dictionary, the data must be a dictionary:

• schema: {("rowname", "string"): [("color", "string"), ("shape", "string")] }
• data: {"row1": ["blue", "square"], "row2": ["red", "circle"]}

http://code.google.com/intl/fr/apis/visualization/documentation/dev/gviz_api_lib.html#populatedata

Finally, I would say that for your problem you have to define a "table schema" and process your CSV file in order to obtain a structure of data elements in the exact same structure as the table schema.

The data type of each column is defined in the "table schema". If the "data table" must be populated with data of the right type (not strings, I mean), I will help you write the code to extract the data from the CSV; it's simple to do.
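
To make that concrete, a minimal sketch under assumed column names (not code from the original answer):

import csv
import gviz_api

# Table schema: a list of (id, type) descriptor tuples.
schema = [('surname', 'string'), ('percent', 'number')]
data_table = gviz_api.DataTable(schema)

with open('surnames.csv') as f:
    reader = csv.reader(f)
    next(reader)  # skip the header line
    # The data must mirror the schema: one list per row, with a float where
    # the schema says 'number'.
    data_table.LoadData([row[0], float(row[1])] for row in reader)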

For the moment, I hope all this is correct and will help.