I'm testing Firebase for a project that may have a reasonably large number of keys, potentially millions.
I've tested loading a few tens of thousands of records using Node, and the load performance appears good. However, the "Forge" web UI becomes unusably slow, rendering every single record when I expand my root node.
Is Firebase not designed for this volume of data, or am I doing something wrong?
That slowness is simply a limitation of the Forge UI, which is still fairly rudimentary.
The real-time functions in Firebase are not only suited to large data sets, they are designed for them. The fact that records stream in real time is perfect for this.
Performance is, as with any large data app, only as good as your implementation. So here are a few gotchas to keep in mind with large data sets.
DENORMALIZE, DENORMALIZE, DENORMALIZE
If a data set will be iterated, and its records can be counted in thousands, store it in its own path.
This is bad for iterating large data sets:
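For example, nesting every user's heavy lists directly under the user record forces any iteration over the parent path to pull all of it. (The paths and field names here are illustrative, not from any particular app.)

```
/users/uid
/users/uid/profile
/users/uid/chat_messages
/users/uid/groups_joined
```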
This is good for iterating large data sets:
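Splitting each heavy list into its own top-level path keeps an iteration over the lightweight records cheap. (Again, path names are illustrative.)

```
/user_profiles/uid
/user_chat_messages/uid
/user_groups_joined/uid
```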
Avoid 'value' on large data sets
Use `child_added`, since `value` must load the entire record set to the client.

Watch for hidden `value` operations on children

When you call `child_added`, you are essentially calling `value` on every child record. So if those children contain large lists, they are going to have to load all that data to return. Thus, the DENORMALIZE section above.