Pretty print JSON dumps

Posted 2019-01-24 04:17

Question:

I use this code to pretty-print a dict as JSON:

import json
d = {'a': 'blah', 'b': 'foo', 'c': [1,2,3]}
print json.dumps(d, indent=2, separators=(',', ': '))

Output:

{
  "a": "blah",
  "c": [
    1,
    2,
    3
  ],
  "b": "foo"
}

This is a little bit too much (newline for each list element!).

Which syntax should I use to have ...

{
  "a": "blah",
  "c": [1, 2, 3],
  "b": "foo"
}

instead?

Answer 1:

Write your own JSON serializer:

import numpy

INDENT = 3
SPACE = " "
NEWLINE = "\n"

def to_json(o, level=0):
    ret = ""
    if isinstance(o, dict):
        # Dicts expand: one "key": value pair per line, indented by level
        ret += "{" + NEWLINE
        comma = ""
        for k, v in o.iteritems():
            ret += comma
            comma = ",\n"
            ret += SPACE * INDENT * (level + 1)
            ret += '"' + str(k) + '":' + SPACE
            ret += to_json(v, level + 1)

        ret += NEWLINE + SPACE * INDENT * level + "}"
    elif isinstance(o, basestring):
        ret += '"' + o + '"'
    elif isinstance(o, list):
        # Lists stay on a single line
        ret += "[" + ",".join([to_json(e, level + 1) for e in o]) + "]"
    elif isinstance(o, bool):
        # bool must be checked before int (bool is a subclass of int)
        ret += "true" if o else "false"
    elif isinstance(o, int):
        ret += str(o)
    elif isinstance(o, float):
        ret += '%.7g' % o
    elif isinstance(o, numpy.ndarray) and numpy.issubdtype(o.dtype, numpy.integer):
        # Integer numpy arrays are flattened onto one line
        ret += "[" + ','.join(map(str, o.flatten().tolist())) + "]"
    elif isinstance(o, numpy.ndarray) and numpy.issubdtype(o.dtype, numpy.inexact):
        # Floating-point numpy arrays, formatted with 7 significant digits
        ret += "[" + ','.join(map(lambda x: '%.7g' % x, o.flatten().tolist())) + "]"
    else:
        raise TypeError("Unknown type '%s' for json serialization" % str(type(o)))
    return ret

inputJson = {'a': 'blah', 'b': 'foo', 'c': [1,2,3]}
print to_json(inputJson)

Output:

{
   "a": "blah",
   "c": [1,2,3],
   "b": "foo"
}
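
Note that iteritems(), basestring, and the print statement are Python 2 only. If you need this on Python 3, a minimal adaptation of the same serializer might look like the sketch below (the numpy branches are dropped for brevity; add them back if you need array support):

INDENT = 3
SPACE = " "
NEWLINE = "\n"

def to_json(o, level=0):
    # Python 3 sketch of the serializer above: dicts expand one key per line,
    # lists stay on a single line. The numpy branches are omitted here.
    if isinstance(o, dict):
        items = []
        for k, v in o.items():
            items.append(SPACE * INDENT * (level + 1)
                         + '"' + str(k) + '":' + SPACE
                         + to_json(v, level + 1))
        return "{" + NEWLINE + ",\n".join(items) + NEWLINE + SPACE * INDENT * level + "}"
    elif isinstance(o, str):
        return '"' + o + '"'
    elif isinstance(o, list):
        return "[" + ",".join(to_json(e, level + 1) for e in o) + "]"
    elif isinstance(o, bool):          # must come before the int check
        return "true" if o else "false"
    elif isinstance(o, int):
        return str(o)
    elif isinstance(o, float):
        return '%.7g' % o
    else:
        raise TypeError("Unknown type '%s' for json serialization" % str(type(o)))

print(to_json({'a': 'blah', 'b': 'foo', 'c': [1, 2, 3]}))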


Answer 2:

Another alternative is print json.dumps(d, indent=None, separators=(',\n', ': '))

The output will be:

{"a": "blah",
"c": [1,
2,
3],
"b": "foo"}

Note that although the official docs at https://docs.python.org/2.7/library/json.html#basic-usage say the default is separators=None, that in practice means "use the default of separators=(', ', ': ')". Note also that the comma separator doesn't distinguish between key/value pairs and list elements.
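
For comparison, a quick check of those defaults (a small sketch using the question's dict):

import json

d = {'a': 'blah', 'b': 'foo', 'c': [1, 2, 3]}
print(json.dumps(d))                           # default separators: (', ', ': ')
print(json.dumps(d, separators=(', ', ': ')))  # explicit form, identical output

Both lines print the whole dict on one line, e.g. {"a": "blah", "b": "foo", "c": [1, 2, 3]} (the key order may differ on Python 2).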



Answer 3:

I ended up using jsbeautifier (it is not in the standard library, so it usually needs installing first, e.g. pip install jsbeautifier):

import json
import jsbeautifier

opts = jsbeautifier.default_options()
opts.indent_size = 2
print(jsbeautifier.beautify(json.dumps(d), opts))

Output:

{
  "a": "blah",
  "c": [1, 2, 3],
  "b": "foo"
}


Answer 4:

Perhaps not quite as efficient, but consider this simpler approach (somewhat tested in Python 3; it would probably work in Python 2 as well):

def dictJSONdumps(obj, levels, indentlevels=0):
    import json
    if isinstance(obj, dict):
        res = []
        # Sort the keys for a stable order; recurse only `levels` deep,
        # so anything deeper is dumped on a single line.
        for ix in sorted(obj, key=lambda x: str(x)):
            temp = ' ' * indentlevels + json.dumps(ix, ensure_ascii=False) + ': '
            if levels:
                temp += dictJSONdumps(obj[ix], levels - 1, indentlevels + 1)
            else:
                temp += json.dumps(obj[ix], ensure_ascii=False)
            res.append(temp)
        return '{\n' + ',\n'.join(res) + '\n}'
    else:
        return json.dumps(obj, ensure_ascii=False)

This might give you some ideas, short of writing your own serializer completely. I used my own favorite indent technique, and hard-coded ensure_ascii, but you could add parameters and pass them along, or hard-code your own, etc.
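
For instance, with the question's dict and one level of expansion, a quick check (a sketch assuming the function above) would be:

d = {'a': 'blah', 'b': 'foo', 'c': [1, 2, 3]}

# Expand only the top level of the dict; nested values stay on one line.
print(dictJSONdumps(d, 1))

which should print something like:

{
"a": "blah",
"b": "foo",
"c": [1, 2, 3]
}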



Answer 5:

This has been bugging me for a while as well. I found a one-liner I'm almost happy with:

print json.dumps(eval(str(d).replace('[', '"[').replace(']', ']"').replace('(', '"(').replace(')', ')"')), indent=2).replace('"[', '[').replace(']"', ']').replace('"(', '(').replace(')"', ')')

That essentially converts all lists and tuples to strings, then uses json.dumps with indent to format the dict. Then you just need to remove the added quotes and you're done!

Note: I convert the dict to a string first so that every list/tuple gets converted, no matter how deeply it is nested in the dict.

PS. I hope the Python Police won't come after me for using eval... (use with care)
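
If the one-liner is too dense, the same trick can be spelled out step by step (a sketch; it only handles the simple flat-list case, and eval should only ever see data you trust):

import json

d = {'a': 'blah', 'b': 'foo', 'c': [1, 2, 3]}

# Quote the brackets so every list becomes a string and stays on one line.
as_text = str(d).replace('[', '"[').replace(']', ']"')
# eval turns the text back into a dict whose lists are now plain strings.
stringified = eval(as_text)
# Pretty-print; the stringified lists are not expanded across lines.
dumped = json.dumps(stringified, indent=2)
# Strip the artificial quotes around the lists again.
print(dumped.replace('"[', '[').replace(']"', ']'))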