POST requests turn into GET requests

Posted 2019-03-04 01:21

I have a Rails 4.1 application with nginx and Unicorn. Some POST requests are turning into GET requests. I guess it's related to my nginx config. Here is the previous question about it.

Here is my nginx.conf file:

# you generally only need one nginx worker unless you're serving
# large amounts of static files which require blocking disk reads
worker_processes 3;

# # drop privileges, root is needed on most systems for binding to port 80
# # (or anything < 1024).  Capability-based security may be available for
# # your system and worth checking out so you won't need to be root to
# # start nginx to bind on 80
user nobody nogroup; # for systems with a "nogroup"
# user nobody nobody; # for systems with "nobody" as a group instead

# Feel free to change all paths to suite your needs here, of course
pid /run/nginx.pid;
# error_log /path/to/nginx.error.log;

events {
  worker_connections 1024; # increase if you have lots of clients
  accept_mutex off; # "on" if nginx worker_processes > 1
  # use epoll; # enable for Linux 2.6+
  # use kqueue; # enable for FreeBSD, OSX
}

http {
  # nginx will find this file in the config directory set at nginx build time
  include mime.types;

  # fallback in case we can't determine a type
  default_type application/octet-stream;

  # click tracking!
  # TODO: Fixme
  # access_log /path/to/nginx.access.log combined;

  # you generally want to serve static files with nginx since neither
  # Unicorn nor Rainbows! is optimized for it at the moment
  sendfile on;

  tcp_nopush on; # off may be better for *some* Comet/long-poll stuff
  tcp_nodelay off; # on may be better for some Comet/long-poll stuff
  keepalive_timeout 65;
  types_hash_max_size 2048;

  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  # we haven't checked to see if Rack::Deflate on the app server is
  # faster or not than doing compression via nginx.  It's easier
  # to configure it all in one place here for static files and also
  # to disable gzip for clients who don't get gzip/deflate right.
  # There are other gzip settings that may be needed to deal with
  # bad clients out there; see http://wiki.nginx.org/NginxHttpGzipModule
  gzip on;
  gzip_http_version 1.1;
  gzip_proxied any;
  gzip_min_length 500;
  gzip_disable "MSIE [1-6]\.";
  gzip_types text/plain text/html text/xml text/css
             text/comma-separated-values
             text/javascript application/x-javascript
             application/atom+xml;

  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
}

Here is my host file, located in

# this can be any application server, not just Unicorn/Rainbows!
upstream my-app {
  # fail_timeout=0 means we always retry an upstream even if it failed
  # to return a good HTTP response (in case the Unicorn master nukes a
  # single worker for timing out).

  # for UNIX domain socket setups:
  server unix:/tmp/my-app.unicorn.sock fail_timeout=0;

  # for TCP setups, point these to your backend servers
  # server 192.168.0.7:8080 fail_timeout=0;
  # server 192.168.0.8:8080 fail_timeout=0;
  # server 192.168.0.9:8080 fail_timeout=0;
}

server {
  # enable one of the following if you're on Linux or FreeBSD
  # listen 80 default deferred; # for Linux
  # listen 80 default accept_filter=httpready; # for FreeBSD

  # If you have IPv6, you'll likely want to have two separate listeners.
  # One on IPv4 only (the default), and another on IPv6 only instead
  # of a single dual-stack listener.  A dual-stack listener will make
  # for ugly IPv4 addresses in $remote_addr (e.g. "::ffff:10.0.0.1"
  # instead of just "10.0.0.1") and potentially trigger bugs in
  # some software.
  # listen [::]:80 ipv6only=on; # deferred or accept_filter recommended
  # listen 80;

  client_max_body_size 4G;
  server_name my-app.com;
  # Remove trailing slash
  rewrite ^/(.*)/$ /$1 permanent;

  # ~2 seconds is often enough for most folks to parse HTML/CSS and
  # retrieve needed images/icons/frames, connections are cheap in
  # nginx so increasing this is generally safe...
  keepalive_timeout 5;

  # path for static files
  root /opt/www/my-app/current/public;

  # Enable gzip
  location ~* \.(ico|css|pdf|js|jpg|gif|jpeg|png|swf)$ {
    gzip_static on;
    expires 24h;
    add_header Cache-Control public;
  }

  location ~ ^/assets/.*-(.*)\..* {
    gzip_static on; # to serve pre-gzipped version
    expires 1y;
    add_header ETag $1;
    add_header Cache-Control public;
  }


  # Prefer to serve static files directly from nginx to avoid unnecessary
  # data copies from the application server.
  #
  # try_files directive appeared in nginx 0.7.27 and has stabilized
  # over time.  Older versions of nginx (e.g. 0.6.x) required
  # "if (!-f $request_filename)" which was less efficient:
  # http://bogomips.org/unicorn.git/tree/examples/nginx.conf?id=v3.3.1#n127
  try_files $uri/index.html $uri @my-app;

  location @my-app {
    # an HTTP header important enough to have its own Wikipedia entry:
    #   http://en.wikipedia.org/wiki/X-Forwarded-For
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # enable this if you forward HTTPS traffic to unicorn,
    # this helps Rack set the proper URL scheme for doing redirects:
    # proxy_set_header X-Forwarded-Proto $scheme;

    # pass the Host: header from the client right along so redirects
    # can be set properly within the Rack application
    proxy_set_header Host $http_host;

    # we don't want nginx trying to do something clever with
    # redirects, we set the Host: header above already.
    proxy_redirect off;

    # set "proxy_buffering off" *only* for Rainbows! when doing
    # Comet/long-poll/streaming.  It's also safe to set if you're only
    # serving fast clients with Unicorn + nginx, but not slow
    # clients.  You normally want nginx to buffer responses to slow
    # clients, even with Rails 3.1 streaming because otherwise a slow
    # client can become a bottleneck of Unicorn.
    #
    # The Rack application may also set "X-Accel-Buffering (yes|no)"
    # in the response headers to disable/enable buffering on a
    # per-response basis.
    # proxy_buffering off;

    proxy_pass http://my-app.com;
  }

  # Rails error pages
  error_page 500 502 503 504 /500.html;
  location = /500.html {
    root /opt/www/my-app/current/public;
  }
}

server {
  server_name www.my-app.com
  return 301 $scheme://my-app.com$request_uri;

}

For security reasons I hid some info. Any ideas?

Tags: nginx unicorn
2 Answers
欢心 · 2019-03-04 02:06

You may also want to try a 307 redirect. I found that my POSTs were turning into GETs with the 301 redirect.

I.e. your server block becomes:

server {
    server_name www.my-app.com;
    return 307 $scheme://my-app.com$request_uri;
}
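
Note that 307 is a temporary redirect. If the www-to-apex move is meant to be permanent, as the original 301 suggests, 308 is the permanent status code that still preserves the request method and body. A minimal sketch, assuming your clients understand 308 (some very old browsers do not):

server {
    server_name www.my-app.com;
    # 308 Permanent Redirect keeps a POST as a POST, unlike 301
    return 308 $scheme://my-app.com$request_uri;
}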
Ridiculous、 · 2019-03-04 02:10
server {
 server_name www.my-app.com
 return 301 $scheme://my-app.com$request_uri;
}

This will unconditionally return a 301 for every request to www.my-app.com.

If there is no other return 301 in the config: check that the form action (or the ajax request) is NOT using the www.my-app.com domain, but just my-app.com, so the request is not redirected.
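
If fixing every form action will take a while, a rough nginx-side stopgap is also possible. This is only a sketch of one option, assuming you still want a permanent redirect for ordinary page requests: answer safe methods with 301 and everything else with 307, so the method and body survive the hop.

server {
  server_name www.my-app.com;

  # non-idempotent requests keep their method and body across the redirect
  if ($request_method !~ ^(GET|HEAD)$) {
    return 307 $scheme://my-app.com$request_uri;
  }

  # plain page requests still get the permanent redirect
  return 301 $scheme://my-app.com$request_uri;
}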

If there are other 301s in the config (removed for this post), the cause could be something different.
