Working on single-page applications, I have to write a lot of boilerplate code to synchronise with the server-side data.
PouchDB offers an elegant solution to this problem, allowing the data to be accessed locally on the client side.
What I don't understand is whether Pouch is suitable as a database proxy in cases where the database is too big to fit in the browser's memory.
As far as I can tell, Pouch works by duplicating a whole remote database, and thus can only be used when the whole database fits in the browser's memory.
Example use case
Let's say I have a database with all Wikipedia articles and I want to manipulate part of them on the client side. Replication is not the way to go; what is needed is proxying. For example, when a query is issued locally on the client side, only the matching results should be transferred. It is not feasible to run the query against replicated data alone, because the whole database cannot be replicated locally.
You're right that PouchDB sync wouldn't really do what you want it to do. It's designed to sync entire databases, or predefined subsets of a database using server-side design docs.
If I were you, I would probably still use PouchDB, but I would handle the syncing manually. Something like this:
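Here is a minimal sketch of that manual "proxy" pattern: try the local database first, and on a miss fetch from the remote one and cache the result. The structure mirrors PouchDB's promise-based `get()`/`put()` API, but to keep the example self-contained the two tiny Map-backed stubs below stand in for real `new PouchDB('local')` / `new PouchDB('http://…/db')` instances; the names `localDB`, `remoteDB`, and `fetchAndCache` are my own, not PouchDB API.

```javascript
// Stand-in for a PouchDB instance: async get()/put() over a Map.
// With real PouchDB you would instead do `new PouchDB(nameOrUrl)`.
function makeStubDB(initialDocs = {}) {
  const docs = new Map(Object.entries(initialDocs));
  return {
    get: async (id) => {
      if (!docs.has(id)) {
        const err = new Error('missing');
        err.status = 404; // PouchDB reports a miss with status 404
        throw err;
      }
      return docs.get(id);
    },
    put: async (doc) => { docs.set(doc._id, doc); return { ok: true }; },
  };
}

const localDB = makeStubDB(); // empty in-browser cache
const remoteDB = makeStubDB({ wiki_1: { _id: 'wiki_1', title: 'PouchDB' } });

// Serve from the local cache when possible; on a 404, fetch the doc
// from the remote database, store it locally, and return it.
async function fetchAndCache(id) {
  try {
    return await localDB.get(id);
  } catch (err) {
    if (err.status !== 404) throw err;
    const doc = await remoteDB.get(id);
    await localDB.put(doc);
    return doc;
  }
}

fetchAndCache('wiki_1').then((doc) => console.log(doc.title));
```

One caveat when doing this against real PouchDB instances: a document fetched from the remote carries a `_rev`, so to store it locally without a conflict you would either strip the `_rev` or write it replication-style with `bulkDocs([doc], { new_edits: false })`.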
Using `get()` is a little simplistic here; in your Wikipedia case you would probably want to do `allDocs({startkey: query, endkey: query + '\uffff'})` to find all docs whose IDs start with the query. Or you could use a secondary index.

So although you wouldn't be getting the benefits of PouchDB's built-in sync, you are getting the benefit of being able to write the same code against the server as against the client, plus PouchDB's cross-browser support. So I don't think this is a bad way to go about it.
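To make the `'\uffff'` trick concrete, here is why that key range selects exactly the IDs beginning with a prefix (`prefixRange` is a hypothetical helper of mine; the options it builds are real `allDocs()` options):

```javascript
// Build allDocs() options that select every doc whose _id starts
// with `prefix`. '\uffff' sorts after every ordinary character, so
// the range [prefix, prefix + '\uffff'] covers exactly those IDs.
function prefixRange(prefix) {
  return { startkey: prefix, endkey: prefix + '\uffff', include_docs: true };
}

// Against a real database you would call, e.g.:
//   remoteDB.allDocs(prefixRange('Albert'))

// Simulating the range filter on some sorted article IDs:
const ids = ['Albert Camus', 'Albert Einstein', 'Beethoven'];
const { startkey, endkey } = prefixRange('Albert');
const matches = ids.filter((id) => id >= startkey && id <= endkey);
console.log(matches); // ['Albert Camus', 'Albert Einstein']
```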