Why shouldn't data be modified on an HTTP GET?

Posted 2019-01-04 11:59

I know that using non-GET methods (POST, PUT, DELETE) to modify server data is The Right Way to do things. I can find multiple resources claiming that GET requests should not change resources on the server.

However, if a client were to come up to me today and say "I don't care what The Right Way to do things is, it's easier for us to use your API if we can just call URLs and get some XML back - we don't want to have to build HTTP requests and POST/PUT XML," what business-conducive reasons could I give to convince them otherwise?

Are there caching implications? Security issues? I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."

Edit:

Thanks for the answers so far regarding prefetching. I'm not as concerned with prefetching, since this is mostly internal network API use, not visitable HTML pages whose links a browser could prefetch.

Tags: http get
7 Answers
对你真心纯属浪费
#2 · 2019-01-04 12:05

Security for one. What happens if a web crawler comes across a delete link, or a user is tricked into clicking a hyperlink? A user should know what they're doing before they actually do it.

不美不萌又怎样
#3 · 2019-01-04 12:06

GETs can be forced on a user and result in Cross-Site Request Forgery (CSRF). For instance, suppose you have a logout function at http://example.com/logout.php that changes the server-side state of the user. A malicious person could place an image tag on any site with that URL as its source. Merely loading the page would cause the user to get logged out. Not a big deal in the example given, but if that were a command to transfer funds out of an account, it would be a big deal.
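The answer above can be sketched in code. This is a minimal, hypothetical model (the `sessions` dict and handler functions are made up for illustration, not from any real framework): because browsers attach session cookies to every request, including an `<img src="http://example.com/logout.php">` load, a GET that changes state can be fired by any page the user visits. Rejecting non-POST methods blocks the plain image/link vector, though a real application still needs CSRF tokens on top of that.

```python
# Hypothetical in-memory session store standing in for server-side state.
sessions = {"cookie123": {"user": "alice", "logged_in": True}}

def handle_logout_vulnerable(method, cookie):
    """State change on GET: any <img> tag pointing here can log the user out,
    because the browser sends the session cookie with the image request."""
    if cookie in sessions:
        sessions[cookie]["logged_in"] = False
    return 200

def handle_logout_safer(method, cookie):
    """Refuse GET so a plain image/link load cannot change state.
    (POST alone is not full CSRF protection; a per-session token is still needed.)"""
    if method != "POST":
        return 405  # Method Not Allowed
    if cookie in sessions:
        sessions[cookie]["logged_in"] = False
    return 200
```

A simulated image load is just a GET with the victim's cookie: the safer handler returns 405 and leaves the session untouched, while the vulnerable one silently logs the user out.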

老娘就宠你
#4 · 2019-01-04 12:13
  • Prefetch: Many web browsers prefetch - they load a page before you click the link, anticipating that you will click it later.
  • Bots: Several bots scan and index the internet, and they only issue GET requests. You don't want a GET request to delete something for this reason.
  • Caching: GET requests should be safe (no side effects) and idempotent. Idempotent means that issuing a request once or issuing it several times has the same effect. Because a correct GET changes nothing, its response can be cached and replayed freely, which is why GET is so tightly tied to HTTP caching.
  • The HTTP standard says so: The HTTP standard defines what each method is for. A slew of programs - browsers, proxies, caches, crawlers - are built against that standard and assume you use methods the way you are supposed to. If you don't follow it, their behavior toward your API is effectively undefined.
做自己的国王
#5 · 2019-01-04 12:20

I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."

...

I don't care what The Right Way to do things is, it's easier for us

Tell them to think of the worst API they've ever used. Can they not imagine how that was caused by a quick hack that got extended?

It will be easier (and cheaper) in 2 months if you start with something that makes sense semantically. We call it the "Right Way" because it makes things easier, not because we want to torture you.

姐就是有狂的资本
#6 · 2019-01-04 12:21

Security: CSRF is so much easier in GET requests.

Using POST won't protect you by itself, but GET makes exploitation - and mass exploitation via forums and other places that accept image tags - much easier.

Depending on what your server side does, GET can also help an attacker launch a DoS (Denial of Service). An attacker can spam thousands of websites with your expensive GET request embedded in an image tag, and every visitor of those sites will then carry out that expensive request against your web server, costing you a lot of CPU cycles.

I'm aware that some pages are heavy anyway and this is always a risk, but the risk is much bigger if every single GET request also writes 10 large records.
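One mitigation follows directly from correct semantics: because a proper GET is safe and idempotent, its response can be memoized, so an image-tag flood mostly hits a cache instead of your expensive code path. A state-changing "GET" could never be cached this way. A minimal sketch (the `expensive_report` function and call counter are hypothetical stand-ins for heavy server-side work):

```python
from functools import lru_cache

# Hypothetical counter so we can observe how often the heavy work actually runs.
CALLS = {"count": 0}

@lru_cache(maxsize=128)
def expensive_report(query):
    """Stand-in for an expensive, side-effect-free GET handler.
    Safe + idempotent means caching the result is always correct."""
    CALLS["count"] += 1  # represents heavy CPU / database work
    return f"report for {query}"
```

A thousand identical requests - the image-tag flood scenario above - then do the expensive work only once.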

Juvenile、少年°
#7 · 2019-01-04 12:29

How about Google finding a link to that page with all the GET parameters in the URL and revisiting it every now and then? That could lead to a disaster.

There's a funny article about this on The Daily WTF.
