I am trying to parse the JSON incrementally, i.e. based on a condition.
Below is my json message and I am currently using JavaScriptSerializer to deserialize the message.
// requires a reference to System.Web.Extensions and: using System.Web.Script.Serialization;
string json = @"{""id"":2,
""method"":""add"",
""params"":
{""object"":
{""name"":""test"",
""id"":""1""},
""position"":""1""}
}";
JavaScriptSerializer js = new JavaScriptSerializer();
Message m = js.Deserialize<Message>(json);
The Message class is shown below:
public class Message
{
public string id { get; set; }
public string method { get; set; }
public Params @params { get; set; }
public string position { get; set; }
}
public class Params
{
public string name { get; set; }
public string id { get; set; }
}
The above code parses the message with no problems, but it parses the entire JSON at once. I want it to proceed with parsing only if the "method" parameter's value is "add"; if it is not "add", I don't want it to parse the rest of the message. Is there a way to do incremental parsing based on a condition in C#? (Environment: VS 2008 with .NET 3.5)
You'd be wanting a SAX-type parser for JSON.
http://en.wikipedia.org/wiki/Simple_API_for_XML
http://www.saxproject.org/event.html
SAX raises an event as it parses each piece of the document.
Doing something like that in JSON would (should) be pretty simple, given how simple the JSON syntax is.
This question might be of help: Is there a streaming API for JSON?
And another link: https://www.p6r.com/articles/2008/05/22/a-sax-like-parser-for-json/
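To make the idea concrete, here's a minimal sketch of a SAX-like, event-per-token approach in C#. Rather than hand-rolling a tokenizer it leans on Json.NET's JsonTextReader purely as the token source; the Parse helper and its callback delegate are my own illustration, not part of any library.

using System;
using System.IO;
using Newtonsoft.Json;

static class SaxLikeJson
{
    // Fires the callback once per token, SAX-style. Returning false from the
    // callback stops parsing, so nothing beyond that point is ever read.
    public static void Parse(TextReader input, Func<JsonToken, object, bool> onToken)
    {
        using (var reader = new JsonTextReader(input))
        {
            while (reader.Read())
            {
                if (!onToken(reader.TokenType, reader.Value))
                    return;
            }
        }
    }
}

For example, you could dump tokens and stop as soon as the "method" property has been seen:

SaxLikeJson.Parse(new StringReader(json), (type, value) =>
{
    Console.WriteLine("{0}: {1}", type, value);
    return !(type == JsonToken.PropertyName && (value as string) == "method");
});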
I have to admit I'm not as familiar with the JavaScriptSerializer, but if you're open to using Json.NET, it has a JsonReader that acts much like a DataReader.
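For illustration only (this isn't from the original answer), the DataReader analogy means you advance the reader one token at a time and inspect TokenType and Value as you go, roughly like this:

// assumes: using System; using System.IO; using Newtonsoft.Json;
using (var reader = new JsonTextReader(new StringReader(json)))
{
    while (reader.Read())   // much like IDataReader.Read(): advance one token at a time
    {
        Console.WriteLine("{0}\t{1}", reader.TokenType, reader.Value);
    }
}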
Here are the generic and simple methods I use to parse, load and create very large JSON files. The code uses the now pretty much standard Json.NET library. Unfortunately the documentation isn't very clear on how to do this, but it's not very hard to figure out either.
The code below assumes the scenario where you have a large number of objects that you want to serialize as a JSON array, and vice versa. We want to support very large files whose size is limited only by your storage device (not memory). So when serializing, the method takes IEnumerable<T>, and while deserializing it returns the same. This way you can process the entire file without being limited by memory. I've used this code on file sizes of several GBs with reasonable performance.
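The methods themselves aren't shown above, so here is a hedged reconstruction of what they plausibly look like. The names SerializeSequence/DeserializeSequence are mine; the pattern (a JsonTextWriter/JsonTextReader streaming over an IEnumerable<T>) is standard Json.NET usage.

using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public static class JsonStreaming
{
    // Writes the sequence as a JSON array one item at a time, so only a single
    // item needs to be in memory at any point.
    public static void SerializeSequence<T>(IEnumerable<T> items, TextWriter output)
    {
        var serializer = new JsonSerializer();
        using (var writer = new JsonTextWriter(output))
        {
            writer.WriteStartArray();
            foreach (T item in items)
                serializer.Serialize(writer, item);
            writer.WriteEndArray();
        }
    }

    // Reads a JSON array lazily; items are yielded as they are parsed, so the
    // whole file is never loaded into memory.
    public static IEnumerable<T> DeserializeSequence<T>(TextReader input)
    {
        var serializer = new JsonSerializer();
        using (var reader = new JsonTextReader(input))
        {
            while (reader.Read())
            {
                if (reader.TokenType == JsonToken.StartObject)
                    yield return serializer.Deserialize<T>(reader);
            }
        }
    }
}

With this shape you can foreach over DeserializeSequence<T> and process a multi-gigabyte file while only ever holding one item in memory.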
I'm currently in hour 3 of an unknown timespan, watching 160GB of JSON get deserialized into class objects. My memory use has been hanging tight at ~350MB, and when I inspect memory objects it's all stuff the GC can take care of. Here's what I did:
The problem is the deserialization. That 160GB of data is way bigger than what my PC can handle at once.
I took a small snippet of the data (which is tough; even just opening a 160GB file is a problem) and generated a class structure via json2csharp.
I made a specific class for the big collection in the class structure auto-generated by the JSON tool, and subclassed System.Collections.ObjectModel.ObservableCollection instead of List. They both implement IEnumerable, which I think is all the Newtonsoft JSON deserializer cares about.
I went in and overrode InsertItem, like this:
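The override itself isn't shown in the answer; a plausible sketch is below. Record, StreamingRecordCollection and ProcessRecord are placeholder names of mine, and whether you call base.InsertItem depends on whether you actually want the items kept in memory.

using System.Collections.ObjectModel;

// "Record" stands in for the auto-generated item class.
public class StreamingRecordCollection : ObservableCollection<Record>
{
    protected override void InsertItem(int index, Record item)
    {
        // Called once per item as the deserializer adds it, so each record can be
        // handled (written to a database, aggregated, ...) the moment it arrives.
        ProcessRecord(item);

        // Skipping base.InsertItem(index, item) means nothing accumulates in memory;
        // call it if you do want the collection populated.
    }

    private void ProcessRecord(Record item)
    {
        // placeholder for whatever per-item processing is needed
    }
}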
Again, my problems were partially about JSON deserialization speed, but beyond that I couldn't fit ~160GB of JSON data into a collection. Even tightened up, it would be in the dozens-of-gigs area, way bigger than what .NET is going to be happy with.
Overriding InsertItem on ObservableCollection is the only hook I'm aware of that lets you handle each item as deserialization occurs; List<T>.Add() gives you no such hook. I know this solution isn't "elegant", but it's working as I type this.
If you take a look at Json.NET, it provides a non-caching, forward-only JSON parser that will suit your needs.
See the JsonReader and JsonTextReader classes in the documentation.
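As a sketch of how the forward-only reader could serve the original question (assuming Json.NET is referenced, and that "method" appears before "params" in the stream, as it does in the JSON above):

// assumes: using System.IO; using Newtonsoft.Json; using System.Web.Script.Serialization;
bool proceed = false;
using (var reader = new JsonTextReader(new StringReader(json)))
{
    while (reader.Read())
    {
        if (reader.TokenType == JsonToken.PropertyName && (string)reader.Value == "method")
        {
            reader.Read();                         // advance to the value of "method"
            proceed = (reader.Value as string) == "add";
            break;                                 // nothing past this token is parsed
        }
    }
}

if (proceed)
{
    // only now hand the full string to a serializer
    Message m = new JavaScriptSerializer().Deserialize<Message>(json);
}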
What's the reason for this approach? If your concern is performance, then it's likely "premature optimization", or in other words, worrying about a problem that might not exist.
I would strongly urge you not to worry about this detail. Build your application, and then, if it isn't fast enough, use profiling tools to locate the actual bottlenecks; they likely won't be where you expect.
Focusing on performance before knowing it's an issue almost always leads to lost time and excessive code.