I'm using Goliath (which is powered by EventMachine) and the Postgres gem pg. Currently I'm using the pg gem in a blocking way: conn.exec('SELECT * FROM products') (for example), and I'm wondering whether there is a better way to connect to a Postgres database?
The pg library provides full support for PostgreSQL's asynchronous API. I've added an example of how to use it to the samples/ directory. I'd recommend that you read the documentation on the PQconnectStart function and the Asynchronous Command Processing section of the PostgreSQL manual, and then compare that with the sample.
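The sample itself isn't reproduced here, but a rough sketch of the async calls involved might look like this (the dbname is a placeholder, and socket_io assumes a reasonably recent version of pg):

    require 'pg'

    # Start the connection without blocking (the Ruby counterpart of
    # PQconnectStart), then drive it to completion ourselves.
    conn = PG::Connection.connect_start(dbname: 'mydb')

    loop do
      case conn.connect_poll
      when PG::PGRES_POLLING_OK      then break
      when PG::PGRES_POLLING_READING then IO.select([conn.socket_io])
      when PG::PGRES_POLLING_WRITING then IO.select(nil, [conn.socket_io])
      when PG::PGRES_POLLING_FAILED  then raise conn.error_message
      end
    end

    # Send the query without blocking, then wait on the socket until the
    # full result set has arrived.
    conn.send_query('SELECT * FROM products')
    loop do
      IO.select([conn.socket_io])
      conn.consume_input
      break unless conn.is_busy
    end

    # Drain all results (get_result returns nil when the query is done).
    while (result = conn.get_result)
      result.each { |row| p row }
    end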
I haven't used EventMachine before, but if it lets you register a socket and callbacks for when it becomes readable/writable, I'd think it'd be fairly easy to integrate database calls into it.
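For what it's worth, EventMachine does offer exactly that via EM.watch. A sketch of wiring pg's async API into the reactor; the PGWatcher module, the dbname and the query are my own illustration, not part of either library:

    require 'eventmachine'
    require 'pg'

    # Watches the connection's socket and fires a callback once the full
    # result set has arrived.
    module PGWatcher
      def initialize(conn, callback)
        @conn, @callback = conn, callback
      end

      def notify_readable
        @conn.consume_input
        return if @conn.is_busy          # more data still on the wire

        results = []
        while (result = @conn.get_result)
          results << result
        end
        detach                           # stop watching the socket
        @callback.call(results)
      end
    end

    EM.run do
      conn = PG::Connection.open(dbname: 'mydb')   # placeholder dbname
      conn.send_query('SELECT * FROM products')

      watcher = EM.watch(conn.socket_io, PGWatcher, conn, lambda { |results|
        results.each { |r| r.each { |row| p row } }
        EM.stop
      })
      watcher.notify_readable = true
    end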
I've been meaning to use the ideas in Ilya Grigorik's article on using Fibers to clean up evented code to make the async API easier to use, but that's a ways off. I do have a ticket open to track it if you're interested/motivated to do it yourself.
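The core trick from that article is small enough to sketch here; async_query stands in for any callback-based helper (such as the EM.watch wiring above) and is not a real pg or EM method:

    require 'fiber'

    # Wrap a callback-style call so it reads like a blocking one.
    def query(sql)
      fiber = Fiber.current
      async_query(sql) { |results| fiber.resume(results) }
      Fiber.yield   # pause here; resumed with the results when the callback fires
    end

    # Must run inside a Fiber (Goliath already runs each request in one):
    # Fiber.new { rows = query('SELECT * FROM products') }.resume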
Yes, you can access Postgres in a non-blocking fashion from Goliath. I had the same need, and put together this proof of concept: https://github.com/levicook/goliath-postgres-spike
I'm not (anymore) very familiar with Pg, but I haven't heard that any popular database can do async connections. So you still need to maintain a connection to the database for the duration of the query, and therefore you still need to block somewhere down the stack.
Depending on your application, you might already be doing it the best possible way.
But when you are dealing with some kind of polling app (where the same client sends a multitude of requests in a short time) and it is more important to get the response out, even if it is empty, then you could write a Ruby Fiber or a full-blown thread or process that is long-lived and proxies queries to the DB and caches the results.

For example: a request comes in from client A. The Goliath app hands the query to the DB process with some unique ID and responds to the request with 'no data yet'. The DB process finishes the query and saves the results to a cache under that ID. When the next request comes in from the same client, Goliath sees that it already has query results waiting, removes the results from the cache and responds to the client. At the same time it schedules the next query with the DB process so that it will be ready sooner. If the next request comes in before the last one is finished, no new query is scheduled (not multiplying the queries).
This way your responses are fast and non-blocking, while still serving fresh data from the DB as soon as possible. Of course the responses could be a bit out of sync with the actual data, but again, depending on the application, this might not be a problem.
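A minimal sketch of that pattern in a Goliath endpoint might look like the following; CACHE, PENDING, the client_id parameter, and the use of EM.defer with a throwaway blocking connection are all illustrative simplifications of the long-lived DB process described above:

    require 'goliath'
    require 'pg'
    require 'json'

    CACHE   = {}   # client_id => finished rows
    PENDING = {}   # client_id => true while a query is in flight

    class Products < Goliath::API
      use Goliath::Rack::Params   # makes env.params available

      def response(env)
        id = env.params['client_id']
        if (rows = CACHE.delete(id))
          schedule_query(id)                      # warm the cache for the next poll
          [200, {}, rows.to_json]
        else
          schedule_query(id) unless PENDING[id]   # don't multiply the queries
          [202, {}, '{"status":"no data yet"}']
        end
      end

      # Run the blocking query on EM's thread pool; the second proc runs
      # back on the reactor thread with the first proc's return value.
      # (A real app would keep one long-lived connection or a pool instead
      # of connecting per query.)
      def schedule_query(id)
        PENDING[id] = true
        EM.defer(
          proc { PG.connect(dbname: 'mydb').exec('SELECT * FROM products').to_a },
          proc { |rows| CACHE[id] = rows; PENDING[id] = false }
        )
      end
    end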
The idea is to use an async adapter to the database (PostgreSQL) in conjunction with an evented web server (Goliath) to gain performance. Mike Perham wrote a PostgreSQL ActiveRecord adapter for Rails 2.3 last year. Maybe you can use that.
As another example, Ilya Grigorik released this demo of an async Rails stack. In this case the evented server is Thin and the database is MySQL. Install the demo and try the benchmark with and without the EM-aware driver. The difference is dramatic.