My top-level question is: how can I get Puma to stop failing? But that is really made up of lots of smaller questions, so I will number and bold each of them to try to make this question answerable.
I am hosting a Rails application on an EC2 instance that is a t2.nano. This is, admittedly, a very small box--but I don't expect my website to receive any traffic. I configured everything successfully with Nginx and Puma using Capistrano and Capistrano Puma. Everything was great, until one day I went to my website and saw the Nginx 504 message.
I opened the Nginx error log and saw that it could not connect to Puma:
connect() to unix:/home/deploy/myapp/shared/tmp/sockets/puma.sock failed (111: Connection refused) while connecting to upstream, client: xxx.xxx.xxx.xxx, server: localhost, request: "GET / HTTP/1.0", upstream: "http://unix:/home/deploy/myapp/shared/tmp/sockets/puma.sock:/500.html", host: "myapp.com"
Debugging this, I learned that Puma had stopped running, which is why Nginx could not connect to it. I think there are two problems here: the first is that Puma should not stop running (the server is tiny, but there is no traffic); the second is that when Puma does fail, it should restart gracefully. However, I am focusing on the first issue for now, because if Puma is constantly being restarted, it seems reasonable that the process sometimes gets killed in a harsh way.
To debug this, I opened htop. Sure enough, the machine was running without any memory to spare. This makes sense--I am running a database, a Rails app, a web server, and memcached on one tiny machine. It keeps running out of memory and killing Puma.
I looked into the Puma configuration I had set up with Capistrano. In config/deploy.rb I had these lines--
set :puma_threads, [0, 8]
set :puma_workers, 0
I read all about puma_workers and puma_threads. I also learned that Nginx has its own workers. Puma processes are very expensive. What makes Puma cool is that it is properly multi-threaded--so the independent processes are awesome. It sounds like each worker has its own set of threads--so if there are 4 workers with 8 threads, there will be 32 processes. But in my case, I want to use very little memory; 2 processes sound good to me. 1. Is my understanding of workers and threads correct?
I updated my config/deploy.rb file and deployed, with 0 puma_workers and min=0, max=2 threads.
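Concretely, the lines above in config/deploy.rb now look like this (just a sketch of my settings):

    set :puma_threads, [0, 2]   # min 0, max 2 threads
    set :puma_workers, 0        # no forked workers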
It appears the configuration for Nginx lives here: /etc/nginx/nginx.conf. And the configuration for Puma lives here: /home/deploy/myapp/shared/puma.rb. I would have expected my updates in config/deploy.rb to have had Capistrano edit the config files. No luck--my min/max threads were still set to 0,8. 2. Is it correct to try to update these values through config/deploy.rb when using Capistrano?
Also--I opened the nginx.conf and saw worker_processes 4;. 3. Was this set to 4 when I installed Nginx, or did Capistrano set this default?
I opened htop and sure enough I had lots of Puma processes. Therefore, I edited my config files manually and restarted Puma and Nginx.
I changed the number of Nginx workers from 4 to 1. Looking in htop, this worked. I now only had 1 Nginx worker. However, the Nginx workers were never very expensive (compared to the Puma threads). So I don't think this matters much.
However, there were still more than 2 Puma threads--there were 6. On a lark, I changed the minimum number of threads from 0 to 1--thinking 0 isn't a possible number so maybe it's setting a default. This increased the number of Puma processes to 9. I also tried changing the number of puma_workers to 1, for the same reason, and the number of processes increased. 4. What does it mean to have 0 threads and/or workers?
I then tried to kill one of the Puma processes manually (sudo kill xxxxx), and then all of the Puma processes died.
5. What do I have to do to have just 2 puma processes?
As you can see, my understanding of Puma is not great, and the lines between what Puma vs. Nginx vs. Capistrano touches are not clear. Any help is greatly appreciated. I haven't been able to find great resources regarding this issue.
A suspect for Puma hangs
The thing with Puma is that it's the only mainstream project that encourages the use of threading in MRI Ruby (well, Heroku encourages it, anyway).
This is why we sometimes see statements from people working on Puma about how users think Puma has various kinds of issues while the problem is really elsewhere--and the problem is elsewhere, yet it only surfaces with Puma :P
"We" have discovered and fixed in the past some very freaky and nasty Ruby GC issues on heavy duty use of threads in Ruby MRI with some freaky corner cases (remember http://blog.skylight.io/hunting-for-leaks-in-ruby/) and who is to say this is not the last of such freaky issues that people attribute to Puma?
Try disabling threading for a while, see how it goes, and let us know--maybe the rabbit lies there, again.
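If you want to try that, a minimal puma.rb sketch for effectively disabling threading might look like this (the socket path and environment are taken from the question; adjust them to your setup):

    # Run a single Puma process with a single thread in the pool.
    workers 0        # no forked workers (single mode)
    threads 1, 1     # exactly one thread, i.e. threading effectively disabled
    environment "production"
    bind "unix:///home/deploy/myapp/shared/tmp/sockets/puma.sock"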
Docs explaining threads vs clustered mode vs workers
puma.rb options: https://github.com/puma/puma/blob/master/examples/config.rb

Under Thread pool, the docs explain how to set up the number of worker threads. Remember, Puma is/was primarily a JRuby thing, and MRI support and forking were only added later as an afterthought; the ordering of the configuration entries in the docs (how to set up threading before how to set up forking) is a consequence of this. The docs state:
Meaning, Puma will always thread--it's what it does. If you tell it to use 0 or 1 threads, it will run 1 thread so that it can serve requests.
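In puma.rb terms, a single-mode setup with just a thread pool might look like this sketch (the min/max values are only illustrative):

    workers 0      # single mode: no forking
    threads 0, 2   # thread pool grows from 0 up to 2 threads as needed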
Additionally, if you set the number of workers (processes) to 1 or more, Puma will run in "clustered mode", which means it will fork, and each fork will thread.

I.e.

    -w 3 -t4:4

will result in 3 processes running 4 threads each, allowing you to serve 12 requests concurrently.

Puma's docs don't specify which and how many processes Puma uses for its internals, but an educated guess is that at the very minimum it needs to run all of the workers plus 1 master process to manage them, deliver data to them, start them, stop them, channel their logs, etc.
This is what I've learned--
To answer my original questions--