Is there any support for this in jsonpickle?
E.g., I store an object, then modify its schema, then try to load it back.
The following change, for instance (adding an attribute),
import jsonpickle

class Stam(object):
    def __init__(self, a):
        self.a = a

    def __str__(self):
        return '%s with a=%s' % (self.__class__.__name__, str(self.a))

js = jsonpickle.encode(Stam(123))
print 'encoded:', js

class Stam(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __str__(self):
        return '%s with a=%s, b=%s' % (self.__class__.__name__, str(self.a), str(self.b))

s = jsonpickle.decode(js)
print 'decoded:', s
produces an error:
encoded: {"py/object": "__main__.Stam", "a": 123}
decoded: Traceback (most recent call last):
File "C:\gae\google\appengine\ext\admin\__init__.py", line 317, in post
exec(compiled_code, globals())
File "<string>", line 25, in <module>
File "<string>", line 22, in __str__
AttributeError: 'Stam' object has no attribute 'b'
There is no support for type evolution or type migrations within jsonpickle.
Your best course of action would be to load the JSON representation of your data (via json.loads) into a basic Python structure of lists, dicts, strings, and numbers. Traverse this Python representation, adding in empty/default b keys, then re-save the JSON via json.dumps.
You can then use jsonpickle to load the modified version of the data.
import json

temp = json.loads(js)
temp['b'] = None
js = json.dumps(temp)
s = jsonpickle.decode(js)
This obviously gets more complicated if your object model is more complex, but you can check the py/object key to see if you need to modify the object.
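As a rough sketch of that traversal (the migrate helper and the None default are my own illustrative choices, assuming the data was produced by jsonpickle.encode as above):

import json
import jsonpickle

def migrate(node):
    # Walk the raw JSON structure and patch any dict that jsonpickle
    # tagged as a Stam instance but that lacks the new 'b' key.
    if isinstance(node, dict):
        if node.get('py/object') == '__main__.Stam' and 'b' not in node:
            node['b'] = None  # default value for the new attribute
        for value in node.values():
            migrate(value)
    elif isinstance(node, list):
        for item in node:
            migrate(item)

temp = json.loads(js)
migrate(temp)
s = jsonpickle.decode(json.dumps(temp))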
Because of the versioning problem, jsonpickle alone is not sufficient for persisting objects. You also need to keep a version identifier in the JSON output so that you can retrofit (clean up) the data when you are reading an older version.
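For example, one way to carry such a version identifier (the envelope format, the save/load helpers, and the migration step are only a sketch, not anything jsonpickle provides) could look like this:

import json
import jsonpickle

SCHEMA_VERSION = 2

def save(obj):
    # Wrap the jsonpickle payload in an envelope that records the schema version.
    return json.dumps({'version': SCHEMA_VERSION,
                       'payload': jsonpickle.encode(obj)})

def load(text):
    envelope = json.loads(text)
    payload = json.loads(envelope['payload'])
    if envelope['version'] < 2:
        # Retrofit data written under the old schema before decoding it.
        payload.setdefault('b', None)
    return jsonpickle.decode(json.dumps(payload))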
With that said, there are some things you can do to make life easier. You
can use the default=dict parameter of json.dumps in conjunction with
__iter__ on your object. This lets you persist your object as a
dictionary. Then, when you read it back in, you can use the **dict operator and
keyword arguments to re-instantiate your object from the JSON dictionary.
This allows you to read in your persisted objects and supply
initialization for any new attributes. For example, if we start with a
class that has a val1 attribute and persist it, then expand the class to
have a val2 attribute and restore it from the persisted state:
import json

class Stam( object ) :
    val1 = None

    def __init__( self, val1=None ) :
        self.val1 = val1

    def __iter__( self ) :
        return {
            'val1': self.val1
        }.iteritems()

obj1 = Stam( val1='a' )
persisted = json.dumps( obj1, default=dict )

class Stam( object ) :
    val1 = None
    val2 = None

    def __init__( self, val1=None, val2='b' ) :
        self.val1 = val1
        self.val2 = val2

    def __iter__( self ) :
        return {
            'val1': self.val1,
            'val2': self.val2
        }.iteritems()

obj2 = json.loads( persisted, object_hook=lambda d: Stam(**d) )

assert obj2.val1 == 'a'
assert obj2.val2 == 'b'
Of course, we could also use jsonpickle and skip the __iter__ method and the
extra json arguments, because jsonpickle simply ignores attributes that are
missing from the JSON. Any new val2 would therefore get the static class-level
default, but the initialization code in the __init__ constructor would not
run. This would become:
import jsonpickle

class Stam( object ) :
    val1 = None

    def __init__( self, val1 ) :
        self.val1 = val1

obj1 = Stam( 'a' )
persisted = jsonpickle.encode( obj1 )

class Stam( object ) :
    val1 = None
    val2 = 'b'

    def __init__( self, val1, val2 ) :
        self.val1 = val1
        self.val2 = val2

obj2 = jsonpickle.decode( persisted )

assert obj2.val1 == 'a'
assert obj2.val2 == 'b'