Can I speed up YAML?

Published 2019-02-03 07:15

Question:

I made a little test case to compare YAML and JSON speed :

import json
import yaml
from datetime import datetime
from random import randint

NB_ROW=1024

print 'Is yaml using libyaml? ', yaml.__with_libyaml__ and 'yes' or 'no'

dummy_data = [ { 'dummy_key_A_%s' % i: i, 'dummy_key_B_%s' % i: i } for i in xrange(NB_ROW) ]


with open('perf_json_yaml.yaml','w') as fh:
    t1 = datetime.now()
    yaml.safe_dump(dummy_data, fh, encoding='utf-8', default_flow_style=False)
    t2 = datetime.now()
    dty = (t2 - t1).total_seconds()
    print 'Dumping %s rows into a yaml file : %s' % (NB_ROW, dty)

with open('perf_json_yaml.json','w') as fh:
    t1 = datetime.now()
    json.dump(dummy_data, fh)
    t2 = datetime.now()
    dtj = (t2 - t1).total_seconds()
    print 'Dumping %s rows into a json file : %s' % (NB_ROW, dtj)

print "json is %dx faster for dumping" % (dty/dtj)

with open('perf_json_yaml.yaml') as fh:
    t1 = datetime.now()
    data = yaml.safe_load(fh)
    t2 = datetime.now()
    dty = (t2 - t1).total_seconds()
    print 'Loading %s rows from a yaml file : %s' % (NB_ROW, dty)

with open('perf_json_yaml.json') as fh:
    t1 = datetime.now()
    data = json.load(fh)
    t2 = datetime.now()
    dtj = (t2 - t1).total_seconds()
    print 'Loading %s rows from a json file : %s' % (NB_ROW, dtj)

print "json is %dx faster for loading" % (dty/dtj)

And the result is :

Is yaml using libyaml?  yes
Dumping 1024 rows into a yaml file : 0.251139
Dumping 1024 rows into a json file : 0.007725
json is 32x faster for dumping
Loading 1024 rows from a yaml file : 0.401224
Loading 1024 rows from a json file : 0.001793
json is 223x faster for loading

I am using PyYAML 3.11 with the libyaml C library on Ubuntu 12.04. I know that JSON is much simpler than YAML, but with a 223x ratio between JSON and YAML I am wondering whether my configuration is correct or not.

Do you get the same speed ratio?
How can I speed up yaml.load()?

Answer 1:

You've probably noticed that Python's syntax for data structures is very similar to JSON's syntax.

What's happening is that Python's json library encodes Python's builtin datatypes directly into text chunks, replacing ' with " and deleting , here and there (to oversimplify a bit).

On the other hand, pyyaml has to construct a whole representation graph before serialising it into a string.

The same kind of work has to happen in reverse when loading.

The only way to speed up yaml.load() would be to write a new Loader, but I doubt it would be a huge leap in performance, unless you're willing to write your own single-purpose sort-of YAML parser, taking the following comment into consideration:

YAML builds a graph because it is a general-purpose serialisation format that is able to represent multiple references to the same object. If you know no object is repeated and only basic types appear, you can use a json serialiser, it will still be valid YAML.
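To illustrate that last point, here is a minimal sketch (the data used is made up for the example): for basic types, the output of json.dumps is itself valid YAML flow style, so you can serialise with the fast json library and still read the result back with a YAML loader:

```python
import json
import yaml

data = {'name': 'example', 'values': [1, 2, 3], 'flag': True}

# For plain dicts, lists, strings, numbers and booleans, JSON text
# is also parseable YAML, so the fast json dump can stand in for
# yaml.dump as long as no object is referenced more than once.
text = json.dumps(data)
roundtrip = yaml.safe_load(text)
assert roundtrip == data
```

This only holds for the basic scalar and container types; YAML-specific features such as anchors, tags, or non-string keys obviously have no JSON equivalent.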

-- UPDATE

What I said before remains true, but if you're running Linux there's a way to speed up YAML parsing. By default, PyYAML uses its pure-Python parser; you have to tell it explicitly that you want the libyaml C parser.

You can do it this way:

import yaml
from yaml import CLoader as Loader, CDumper as Dumper

yaml.dump(dummy_data, fh, encoding='utf-8', default_flow_style=False, Dumper=Dumper)
data = yaml.load(fh, Loader=Loader)

In order to do so, you need the libyaml C library and its headers (Debian/Ubuntu package libyaml-dev) installed before building PyYAML, for instance with apt-get:

$ apt-get install libyaml-dev

And PyYAML built with libyaml as well. But that's already the case, judging from your output.

I can't test it right now because I'm running OS X, where installing libyaml takes a bit more work, but if you follow the PyYAML documentation, it is pretty clear that performance will be much better.
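Since the C-accelerated classes only exist when PyYAML was built against libyaml, a common defensive pattern is to try importing them and fall back to the pure-Python classes (a sketch using the safe loader variant):

```python
import yaml

# CSafeLoader/CDumper are only present when PyYAML was compiled
# against libyaml, so fall back to the pure-Python classes otherwise
try:
    from yaml import CSafeLoader as SafeLoader, CDumper as Dumper
except ImportError:
    from yaml import SafeLoader, Dumper

# Parsing works identically with either loader; only the speed differs
data = yaml.load("a: 1\nb: [2, 3]", Loader=SafeLoader)
```

This way the same code runs everywhere and simply gets faster when libyaml is available.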



Answer 2:

For reference, I compared a couple of human-readable formats, and Python's yaml reader is indeed by far the slowest. (Note the log scaling in the plot below.) If you're looking for speed, you want Python's built-in JSON reader:

[plot: load time vs. number of rows for each reader, log-log scale]
Code to reproduce the plot:

import numpy
import perfplot

import json
import yaml
from yaml import Loader, CLoader
import pandas


def setup(n):
    # Write the same random data set out in all three formats
    data = numpy.random.rand(n, 3)

    with open('out.yml', 'w') as f:
        yaml.dump(data.tolist(), f)

    with open('out.json', 'w') as f:
        json.dump(data.tolist(), f, indent=4)

    with open('out.dat', 'w') as f:
        numpy.savetxt(f, data)


def yaml_python(arr):
    with open('out.yml', 'r') as f:
        out = yaml.load(f, Loader=Loader)
    return out


def yaml_c(arr):
    with open('out.yml', 'r') as f:
        out = yaml.load(f, Loader=CLoader)
    return out


def json_read(arr):
    with open('out.json', 'r') as f:
        out = json.load(f)
    return out


def loadtxt(arr):
    with open('out.dat', 'r') as f:
        out = numpy.loadtxt(f)
    return out


def pandas_read(arr):
    out = pandas.read_csv('out.dat', header=None, sep=' ')
    return out.values


perfplot.show(
    setup=setup,
    kernels=[
        yaml_python, yaml_c, json_read, loadtxt, pandas_read
        ],
    n_range=[2**k for k in range(18)],
    logx=True,
    logy=True,
    )


Answer 3:

Yes, I also noticed that JSON is way faster, so a reasonable approach is to convert YAML to JSON first. If you don't mind Ruby, you can get a big speedup and ditch the PyYAML install altogether:

import commands, json

def load_yaml_file(fn):
    # Delegate parsing to Ruby's stdlib YAML parser and read back JSON
    # (Python 2 only: the commands module was removed in Python 3)
    ruby = "puts YAML.load_file('%s').to_json" % fn
    j = commands.getstatusoutput('ruby -ryaml -rjson -e "%s"' % ruby)
    return json.loads(j[1])

Here is a comparison for 100K records:

load_yaml_file: 0.95 s
yaml.load: 7.53 s

And for 1M records:

load_yaml_file: 11.55 s
yaml.load: 77.08 s

If you insist on using yaml.load anyway, remember to install PyYAML inside a virtualenv to avoid conflicts with other software.
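The commands module used above exists only in Python 2. On Python 3 the same trick can be sketched with subprocess; passing the filename through Ruby's ARGV also avoids shell-quoting problems. This assumes a ruby interpreter is on the PATH:

```python
import json
import subprocess

def load_yaml_file(fn):
    # Let Ruby's stdlib parse the YAML file and re-emit it as JSON;
    # the filename goes through ARGV so no shell quoting is needed
    ruby = "puts YAML.load_file(ARGV[0]).to_json"
    out = subprocess.check_output(
        ["ruby", "-ryaml", "-rjson", "-e", ruby, fn])
    return json.loads(out)
```

Because the argument list is passed directly to subprocess without a shell, filenames containing spaces or quotes work unchanged.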



Tags: python json yaml