I am designing a new service that would let 'customers' register and pay a per-use fee for particular searches they perform. The service would be exposed through RESTful and SOAP interfaces. Typically the web service would be integrated into the customer's website and then exposed to the 'public', so anyone could use the customer's website and take advantage of my web service's features (which the customer would pay for, while keeping full control to moderate requests so they don't get charged too much).
I want to design the service so that integration is as simple as possible. The web service API will change, so forcing customers to build an internal proxy that re-exposes the web service to the public is, in some cases, too much of a deterrent. So the issue as I see it is designing a web service that balances authentication, security and ease of integration.
Ideally, the service would:
- Not use OAuth
- Avoid forcing the customer to create an internal proxy which re-exposes the same web service API I have already.
- Be secure (token, username/password, whatever, plus SSL)
- Embed a JavaScript library in the customer's website - a client-side JavaScript library that makes the integration steps even easier.
- The JavaScript library would need to be secure enough that the public couldn't simply grab the credentials and repurpose them for themselves.
- Not be too hacky, if possible, so the web service doesn't have to be rebuilt when Firefox 87 comes out (to be released in as many minutes) and decides to fubar it.
It seems that some kind of 3-way authentication process is needed for this to work, i.e. one that authenticates a particular client (a member of the public), the customer's website and my web service.
Has anyone implemented something similar, and how did they tackle a situation like this?
I also understand there is a balance between what can be done and what would violate cross-domain security, so perhaps the entire web service might also be exposed through a GET-only interface that returns JSONP data.
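For what it's worth, a JSONP-style GET endpoint just wraps the JSON payload in a caller-named callback function. A minimal Node/Express sketch of what that could look like (the host, route and parameter names are placeholders of mine, not part of any existing design):

```js
const express = require('express');
const app = express();

// The customer's page would load this via a script tag, e.g.
//   <script src="https://api.yourservice.example/search?q=foo&callback=handleResults"></script>
app.get('/search', (req, res) => {
  const results = { items: [] }; // ...perform the billable search here
  const callback = req.query.callback || 'callback';
  // Wrap the JSON in the caller-supplied function name so the browser can run it cross-domain.
  res.type('application/javascript')
     .send(callback + '(' + JSON.stringify(results) + ');');
});

app.listen(3000);
```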
/** Addendum **/
I have since discovered a web service that does what I'm after. However, I am not confident I understand the implementation details entirely, so perhaps someone could also comment on my thinking.
The web service I discovered seems to host the JavaScript on the service side. The customer then integrates their website with the service by including the JavaScript in a script tag, supplying a key to do so, i.e.
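(The include looked roughly like the following; the host and parameter name here are my own placeholders, not the actual service's:)

```html
<script src="https://some-service.example/client-lib.js?token=CUSTOMER_TOKEN"></script>
```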
Somehow, if I add the script to my own website it doesn't work. So somewhere along the line the token must be registered to a particular customer domain, and 'client-lib.js' is actually a servlet or something similar that can somehow detect that the 'public' user coming in has actually originated from the 'customer' domain.
Is my thinking right? Is there some kind of HTTP header that can be used this way? Is that safe?
Cheers
The best way to go about it is something like this (assuming you want to use JavaScript hosted on your server and make the include part as simple as it can be):
* The user registers on your website and receives a token for their domain.
* The user includes a JS file pointing to your server. The include will look something like the first snippet after this list, or, if you use a .htaccess rewrite, a cleaner URL that looks like a plain .js file.
* In the PHP file (or whatever serves the script), check whether the token matches the requesting domain: if it does, echo out the JS lib; if not, return an error (see the server sketch after this list).
* In the JS you will need to build the AJAX calls to your service, plus the code to manipulate the HTML (create a widget holder, show some data, etc.).
* All of those calls should also carry the token, and again you can use the same logic on the server to check that the token matches the requesting domain.
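A rough sketch of those steps, with hypothetical names throughout (the domain, token and file names are placeholders, and the server side is shown in Node/Express rather than the PHP mentioned above). The include referenced in the list might look like:

```html
<!-- direct form -->
<script src="https://yourservice.example/client-lib.js.php?token=CUSTOMER_TOKEN"></script>
<!-- or, with a .htaccess rewrite, a cleaner URL -->
<script src="https://yourservice.example/client-lib.js?token=CUSTOMER_TOKEN"></script>
```

And the token/domain check that decides whether to echo the library (and that every subsequent call repeats) could be sketched as:

```js
const express = require('express');
const fs = require('fs');

// Hypothetical token registry: token -> the domain it was issued for.
const TOKENS = { CUSTOMER_TOKEN: 'customersite.example' };

// True when the token exists and the request appears to come from its registered domain.
function tokenMatchesRequest(req) {
  const token = req.query.token;
  const origin = req.get('Referer') || req.get('Origin') || '';
  return Boolean(token) && Boolean(TOKENS[token]) && origin.indexOf(TOKENS[token]) !== -1;
}

const app = express();

// Echo out the JS lib only when the token matches the requesting domain.
app.get('/client-lib.js', (req, res) => {
  if (!tokenMatchesRequest(req)) {
    return res.status(403).send('/* invalid or missing token */');
  }
  const lib = fs.readFileSync('client-lib.template.js', 'utf8')
    .replace('__TOKEN__', req.query.token); // bake the token in for the lib's AJAX calls
  res.type('application/javascript').send(lib);
});

// Every AJAX call made by the lib carries the token and is re-checked the same way.
app.get('/api/search', (req, res) => {
  if (!tokenMatchesRequest(req)) {
    return res.status(403).json({ error: 'invalid token' });
  }
  res.json({ results: [] }); // ...perform the billable search here
});

app.listen(3000);
```

As the EDIT below points out, the Referer/Origin headers come from the client's browser, so on their own they are not a strong guarantee.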
EDIT:
The Referer header is sent by the client's browser as part of the HTTP protocol, and is therefore indeed unreliable.
If you want to verify that a request is coming from your site, well, you can't; but you can verify that the user has visited your site and/or is authenticated. Cookies are sent with AJAX requests, so you can rely on that. But this means you need to use something like OAuth.
If you want to use this method, you should still check the referrer as well to help prevent CSRF (en.wikipedia.org/wiki/Cross-site_request_forgery).
Ideally you should use a unique token per session per user (per request if you're paranoid) to prevent CSRF attacks. Checking the referrer is just security through obscurity, not a real solution.
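As a rough illustration of the per-session token idea (the function and field names here are hypothetical, and assume you keep some server-side session object):

```js
const crypto = require('crypto');

// Issue a fresh CSRF token when a session is created.
function issueCsrfToken(session) {
  session.csrfToken = crypto.randomBytes(32).toString('hex');
  return session.csrfToken;
}

// On every state-changing request, compare the submitted token with the session's copy.
function checkCsrfToken(session, submittedToken) {
  return Boolean(session.csrfToken) && session.csrfToken === submittedToken;
}

// Usage: embed issueCsrfToken(session) in the page or JS you serve,
// and reject any request where checkCsrfToken(session, submittedToken) is false.
```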
First of all - let me provide you a link to another SO question which I answered yesterday - as it gives a pretty extensive answer to a similar question-set.
I am assuming that you are going to charge the owner of the site from which the search is made, and not care so much who the individual user is who makes the search. If that's incorrect, please clarify and I will update my answer accordingly.
Clearly, in any such case, the first and foremost thing you need to do is to make sure you know which client this is on each request. And - as you said, you also want to make sure you're protecting yourself from cross-site attacks and people stealing your users' keys.
What you might consider would be the following:
Now the way you do both your tracking and your authentication becomes fairly simple.
You mentioned providing a JS library that won't need updating every time Firefox updates. I suggest building that library on top of jQuery, or another similarly well-supported cross-browser JS foundation library, and letting that wrap your AJAX.
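For illustration, a minimal sketch of such a wrapper; the endpoint, the injected __CLIENT_KEY__ placeholder and the function names are assumptions of mine, not part of the original suggestion:

```js
// Hypothetical client library: every call goes through jQuery's AJAX helper
// and carries the client key that the server injected when serving this file.
var YourServiceClient = (function ($) {
  var clientKey = '__CLIENT_KEY__'; // replaced server-side before the file is sent

  function search(query, onSuccess, onError) {
    $.ajax({
      url: 'https://api.yourservice.example/search',
      type: 'GET',
      data: { q: query, key: clientKey },
      dataType: 'json',
      success: onSuccess,
      error: onError
    });
  }

  return { search: search };
})(jQuery);
```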
When the client site requests your script, however, have them provide you something like:
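(A hypothetical shape for that request; the host and parameter names are placeholders, not from the original answer:)

```html
<script src="https://api.yourservice.example/client-lib.js?key=CLIENT_PUBLIC_KEY"></script>
```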
On your side, when you receive this request, check that the key belongs to a registered client and that the requesting domain matches the domain registered for that key; if either check fails, respond with a 401: Unauthorized error.
Now, when you serve the JS file, you can do so in a way that injects the key into that file, so the library has access to their shared key. Include that key on each of your AJAX requests so that you can again identify which client the request is coming from. In a RESTful environment there shouldn't really be sessions, so you need this level of authentication on each request. I suggest including it as a cookie.
On your server-side, simply repeat the checks of their key on each subsequent request, and voilà, you've built yourself some fairly tight security without a lot of overhead.
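A minimal sketch of that per-request check, assuming the key travels in a cookie named client_key as suggested above (the store, route and names are placeholders, not the actual implementation):

```js
const express = require('express');
const cookieParser = require('cookie-parser');

const app = express();
app.use(cookieParser());

// Hypothetical lookup against your client registry.
function lookupClientByKey(key) {
  const clients = { CLIENT_PUBLIC_KEY: { id: 1, domain: 'customersite.example' } };
  return clients[key] || null;
}

// Middleware: every API request must carry a valid client key (cookie or query string).
function requireClientKey(req, res, next) {
  const key = req.cookies.client_key || req.query.key;
  const client = lookupClientByKey(key);
  if (!client) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  req.client = client; // available downstream for billing/tracking
  next();
}

app.get('/api/search', requireClientKey, (req, res) => {
  res.json({ results: [] }); // ...perform the billable search for req.client here
});

app.listen(3000);
```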
That said, if you expect a lot of traffic, you may want to come back to this and explore deeper security processes in the future, as rolling your own security scheme can leave unexpected holes. However, it is a good start and will get you off the ground.
Feel free to ask any questions if you need, and I will try to update my answer accordingly.