I am seeing the following three things in my logs about access being denied. Two of them have a severity of CRITICAL. I don't really understand what they mean, and after googling around a bit I'm still unsure whether I should be concerned or do anything. I am running Django on Apache with mod_wsgi.
Here are the three:
ModSecurity: Access denied with code 400 (phase 2). Pattern match "^\\w+:/" at REQUEST_URI_RAW. [file "/usr/local/apache/conf/modsec-imh/01_base_rules.conf"] [line "23"] [id "960014"] [msg "Proxy access attempt"] [severity "CRITICAL"] [tag "PROTOCOL_VIOLATION/PROXY_ACCESS"] [hostname "www.MYSITE.com"] [uri "/"] [unique_id "VaM7bUYn@9YAACtkIA8AAABq"]
ModSecurity: Access denied with code 501 (phase 2). Pattern match "(?:\\b(?:(?:n(?:et(?:\\b\\W+?\\blocalgroup|\\.exe)|(?:map|c)\\.exe)|t(?:racer(?:oute|t)|elnet\\.exe|clsh8?|ftp)|(?:w(?:guest|sh)|rcmd|ftp)\\.exe|echo\\b\\W*?\\by+)\\b|c(?:md(?:(?:32)?\\.exe\\b|\\b\\W*?\\/c)|d(?:\\b\\W*?[\\\\/]|\\W*?\\.\\.)|hmod.{0,40}? ..." at REQUEST_HEADERS:User-Agent. [file "/usr/local/apache/conf/modsec-imh/01_base_rules.conf"] [line "100"] [id "959006"] [msg "System Command Injection"] [data "; mail"] [severity "CRITICAL"] [tag "WEB_ATTACK/COMMAND_INJECTION"] [hostname "www.MYSITE.com"] [uri "/robots.txt"] [unique_id "VaPVSUYn@9YAACtkNioAAABL"]
ModSecurity: Access denied with code 406 (phase 2). Pattern match "\\%(?![0-9a-fA-F]{2}|u[0-9a-fA-F]{4})" at REQUEST_HEADERS:X-Opt-Forward. [file "/usr/local/apache/conf/modsec-imh/01_base_rules.conf"] [line "17"] [id "950107"] [msg "URL Encoding Abuse Attack Attempt"] [severity "WARNING"] [hostname "www.MYSITE.com"] [uri "/static/images/MYIMAGE.png"] [unique_id "VYnqMEYn@9YAAGtcKysAAAAT"]
The bold parts (MYSITE, MYIMAGE) are where I edited out details about my site.
Any help is appreciated. Thanks!
Running a website inevitably means you're going to get requests like these. The web is open and it costs scammers/hackers/script kiddies nothing to write a script and try numerous websites or IPs in the hope they find a vulnerable one. It's like having an email address - soon enough you'll get spam to it. Mostly this spam is harmless and just a nuisance. Occasionally you get something that causes real harm.
ModSecurity is a tool that examines web requests sent to your server and blocks them based on certain rules. This is usually done by writing rules that compare some of the HTTP request fields to a regular expression. There are some free rule sets available online, and the OWASP Core Rule Set (CRS) is one of them. It is designed to catch common attacks, and any rule ID of the form 9XXXXX comes from that set.
ModSecurity is a really powerful tool with many advantages for protecting your website. However, it's not without its downsides. For a start, it makes you aware of requests like these - most of which are harmless and have probably been hitting your site for a while with no issue. It's easy to panic when you look at the ModSecurity log files for the first time and see entries like that. On the flip side, and even worse, it can also block "false positives" - legitimate traffic that should not be blocked - similar to the way a spam filter can sometimes put a real email in your spam folder. The CRS definitely needs tweaking for your particular site.
So with that background let's look at the three examples you gave:
The first rule (960014) is flagging a request that might be an attempt to use your webserver as a proxy. Scammers' own servers are often blocked, so they like to proxy requests via other servers so that traffic appears to come from your IP rather than theirs. The rule is triggered when a request is received with a word followed by :// in the URL. This works because a legitimate request should never contain that: www.example.com/page.html is a legitimate request, but www.example.com/page.html/http://www.example2.com is not. However, it can easily catch false positives from legitimate requests like www.example.com?referrer=http://www.google.com. Many search engines, ads, marketing links, etc. use that sort of format, and those would stop working because of this rule. Personally I don't find this rule that useful. By default Apache has its own protection against attempts to use your webserver as a proxy, so this rule doesn't gain you much but can cause you problems. I would turn it off. You can ask your web host how to do this (usually by adding a "SecRuleRemoveById 960014" line to your .htaccess file).
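If you're curious what that pattern actually does, here's a quick Python sketch using the regex quoted in your first log entry (just an illustration of the pattern, not how ModSecurity runs internally):

```python
import re

# The pattern from rule 960014, as quoted in the log entry above:
# a scheme-like prefix (word characters followed by ":/") at the
# start of the raw request URI.
proxy_pattern = re.compile(r"^\w+:/")

# An ordinary request target for your own site.
print(bool(proxy_pattern.search("/page.html")))                # False - not flagged

# A proxy-style request sends a full URL as the request target.
print(bool(proxy_pattern.search("http://www.example2.com/")))  # True - rule 960014 fires
```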
The second (959006) runs a hugely complicated regex against the User-Agent header looking for dodgy requests. Some of the CRS rules are very difficult to understand unless you have a degree in regular expressions! The user agent should identify your browser, and all decent browsers send a sensible one. Additionally, some known spam tools use a specific user agent that this rule can easily block. However, the user agent is easily changed to look like a normal web browser, so this rule really only picks up very simple bad requests. Then again, it also rarely flags false positives, which makes it a nice rule to keep. Here the offending part of the user agent was "; mail" (some rules like this are helpfully written to show the value that caused the issue in the log - in this case in the "data" field). A user agent of "; mail" definitely looks suspect. Now, you can put anything you want in the user agent and it shouldn't cause issues (ignoring attempts to manipulate the HTTP request to send other fields for now), so this rule doesn't really protect anything in itself. But if a requestor is sending something like that in that field, it's probably not a legitimate request anyway, and they could be trying other dodgy things in other parts of the request - hence why this rule exists. Given the user agent shown there, I think this rule is doing you good by blocking a bad request, so leave it alone and let it continue blocking.
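The real CRS regex is far too long to reproduce here, but conceptually it works something like this deliberately simplified stand-in (a made-up pattern for illustration only, not the actual rule):

```python
import re

# A much-simplified, made-up stand-in for rule 959006: look for a
# semicolon followed by a command-like word in the User-Agent header.
# (The real CRS pattern is far longer and more precise.)
command_injection = re.compile(r";\s*(?:mail|cmd|nmap|telnet|tftp)\b", re.IGNORECASE)

for user_agent in [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0",  # a normal browser
    "; mail",                                                   # the value from your log
]:
    verdict = "blocked" if command_injection.search(user_agent) else "allowed"
    print(f"{verdict}: {user_agent!r}")
```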
The last rule (950107) looks for bad URL encodings. A web address encodes some characters (like spaces), so a request like "http://www.example.com?name=Joe Bloggs" becomes "http://www.example.com?name=Joe%20Bloggs" so that servers can handle it. URL encodings have a standard, known format (basically a % followed by two hexadecimal digits, 0-9 or a-f), so a request like "http://www.example.com?name=Joe%ZZBloggs" is invalid. In this case the match was on the X-Opt-Forward header, which I'm guessing is a field normally used for the original IP address of a request handled by a proxy. I can't think of any reason that field should trigger this rule for legitimate traffic, so again I'd say this is another scammer trying their luck, and the request should be blocked.
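You can check that behaviour yourself with the regex quoted in your third log entry - a small Python sketch, again purely for illustration:

```python
import re
from urllib.parse import quote

# The pattern from rule 950107, as quoted in the log entry above:
# a "%" that is NOT followed by two hex digits (or a %uXXXX sequence),
# i.e. a malformed URL encoding.
bad_encoding = re.compile(r"%(?![0-9a-fA-F]{2}|u[0-9a-fA-F]{4})")

print(quote("Joe Bloggs"))                             # Joe%20Bloggs - how a space is encoded
print(bool(bad_encoding.search("name=Joe%20Bloggs")))  # False - well formed, rule stays quiet
print(bool(bad_encoding.search("name=Joe%ZZBloggs")))  # True  - malformed, rule 950107 fires
```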
That's a lot to take in, but I hope it helps. Let us know if you have any questions.
If there is no "Matched Data" part in your ModSecurity log entries, look in your Apache access logs to see what the request actually was.
Sometimes ModSecurity records in the log which part of the request matched and triggered the rule, like here:
[Matched Data: ) found within ARGS:q: Audi Hamburg (Kollaustr) Oder Willy Tiedke (22047)]