Better many small AJAX requests or one big one for global performance?

Posted 2019-02-07 23:28

Question:

I have a WordPress site with an AJAX search field that returns a list of posts, with just the title, the URL, the date, and the category.

I want to paginate the results so that at most 10 results are shown per page.

My doubt is: is it better to make a separate request each time the page is turned, or to make a single request that fetches all the posts and then handle the pagination in JavaScript (the response is JSON)?

Is it better to make more frequent small requests with light responses, or one big request?

I suppose that at the beginning of the site's life the first solution is the best one. I'm not sure about scalability as the site grows.

What do you think?

UPDATE: I received a couple of very good answers, but they address more the user-interface side of the problem.

However, I would like you to focus more on the performance point of view. My site is on a shared server, but we expect traffic to go up fast, since the site will receive international exposure. My fear is that WordPress will not be able to cope with the increased overhead that comes from the AJAX requests.

So, going back to the question: what would be better for the total server load, many small requests that load only the requested result page, or one big request with all the results?

Considering that, I suppose, not all users are going to check all of the result pages, I lean toward the first...

Answer 1:

The correct answer is: "It depends".

If you were dealing with known quantities (10 results per page, 10 pages of results), and you wanted all of them to be made available to the user as soon as possible, then I'd suggest downloading chunks (10 or 20 results) on a 500ms timer or something similar.

Then you can fill up the extra back-pages asynchronously, and update the "total-pages" controls accordingly.

From there, your user has immediate results, and has the ability to flip back and forth between all of your data in 2-ish seconds.
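As a sketch, that timed-chunk approach might look like the following in plain JavaScript. The /search endpoint, its parameters, and the #total-pages element are made-up placeholders for whatever your site actually exposes:

```js
// Minimal sketch: show the first page right away, then prefetch the
// remaining pages on a 500ms timer and grow the pager as they arrive.
function loadSearchResults(query, pageSize = 10, totalPages = 10) {
  const pages = []; // pages[i] holds the results for page i + 1

  function fetchPage(pageNum) {
    // Hypothetical endpoint; substitute your real search URL.
    const url = `/search?q=${encodeURIComponent(query)}&page=${pageNum}&size=${pageSize}`;
    return fetch(url)
      .then((res) => res.json())
      .then((results) => {
        pages[pageNum - 1] = results;
        // Update the "total-pages" control as back-pages fill in.
        document.querySelector('#total-pages').textContent = String(pages.length);
      });
  }

  // Immediate results for the user...
  fetchPage(1).then(() => {
    // ...then fill the back-pages asynchronously.
    let next = 2;
    const timer = setInterval(() => {
      if (next > totalPages) return clearInterval(timer);
      fetchPage(next++);
    }, 500);
  });
}
```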

If you had a site where you needed all of your data accessible right away, and you had 40 results that needed to be shown, then go with a big dump.

If you had an infinite-scroll site, then you'd want to grab a couple of page-lengths. For something like Twitter, I'd probably pre-calculate the average height of the container, versus the screen height. Then I'd download 3 or 4 screen-lengths worth of tweets. From there, when the user was scrolling into their 2nd or 3rd screen (or 3rd or 4th respectively), I'd download the next batch.

So my handler might be attached to an onscroll event. It first checks whether it's allowed to run (has it been at least 16ms since the last run? -- obviously, we're still scrolling), then checks how close the viewport is to the bottom, considering the screen height and the total height of the last batch (screen_bottom >= latest_batch.height * 0.75, or similar). The screen_bottom would be relative to the last_batch, so if the user scrolled back up, higher than the previous batch, screen_bottom would simply go negative.

...or normalize them, so that you're just dealing with percentages.
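Here is a rough sketch of that throttled scroll check, with loadNextBatch() and the batch-geometry bookkeeping as hypothetical stand-ins:

```js
// Throttled infinite-scroll trigger: run at most once per ~16ms, and fetch
// the next batch once the viewport is 75% of the way through the latest one.
let lastRun = 0;
let batchTop = 0;     // document offset of the latest batch, in px
let batchHeight = 0;  // total height of the latest batch, in px
let loading = false;

window.addEventListener('scroll', () => {
  const now = performance.now();
  if (now - lastRun < 16) return; // ran too recently; skip this event
  lastRun = now;

  // screen_bottom relative to the latest batch; negative if the user
  // has scrolled back up above it.
  const screenBottom = window.scrollY + window.innerHeight - batchTop;

  if (!loading && screenBottom >= batchHeight * 0.75) {
    loading = true;
    loadNextBatch().then((batch) => { // hypothetical loader
      batchTop = batch.top;           // geometry of the newly appended batch
      batchHeight = batch.height;
      loading = false;
    });
  }
});
```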

It's enough to make it feel like the data is always there for you. You don't want to have to wait for a huge block to load at the start, but you don't want to wait for tiny blocks to load, while you're trying to move around, either.

So figure out what the happy medium is, based on what you're doing, and how you expect the user to use your data.



Answer 2:

There are two factors that play a role in this decision: how users interact with your site (e.g. how many results they look at and how they query), and how big an "average search result" is. If you have thousands of posts and generic search terms, you will probably get very large result sets if you go the "big-one" road. If your users tend to browse through many pages, making a request on every page turn will result in a lot of requests.

There is no general answer; this depends a lot on your application and the search patterns of your users. In general, I would do the simplest thing that does the job, but also monitor user interaction (e.g. logging of queries and result sizes), site performance (for example via Google Analytics load times), and server load (for example via Munin). If you run into problems, you can still optimize your application from that point on, and by then you will have a much better understanding of your users and your application.



Answer 3:

Well, first of all: if your AJAX is creating the same posts query a normal page load creates, you could simulate a page load. That is, query a bunch of posts (like a page with a lot of posts), send ALL their data to your JS, and let it handle pagination.

Of course you can't send all your posts at once, so you have to limit how many pages will be available. Because of that, I really think it's better to just query a page's worth of posts at a time. And remember that normal WP behavior is to make one query that returns post IDs, then a separate query for the whole post for each post on the page.
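The client side of that page-at-a-time approach might look roughly like this. admin-ajax.php is WordPress's standard AJAX endpoint, but the my_search action and the JSON shape are made-up examples you would wire up yourself with the wp_ajax_* hooks:

```js
// Request one page's worth of results from WordPress's AJAX endpoint.
function fetchResultsPage(query, page) {
  const params = new URLSearchParams({
    action: 'my_search',  // your registered wp_ajax_my_search handler
    s: query,
    paged: String(page),  // WP's conventional page-number parameter
  });
  return fetch('/wp-admin/admin-ajax.php?' + params)
    .then((res) => res.json()); // e.g. [{ title, url, date, category }, ...]
}

// Usage: load page 3 when the user clicks that pager button.
fetchResultsPage('wordpress', 3).then((posts) => {
  console.log(posts.length, 'posts on this page');
});
```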

If you really want to optimize your site, install a cache plugin. It will cache all DB queries to disk and then use those files instead of running the same queries again.



Answer 4:

I am the creator of asjst, and I found that the download itself is much faster, but requesting the resources is slow because of the upload speed of devices.

[Screenshot omitted: "Big Upload is bad" -- a network trace showing many, many bytes uploading and just a few bytes downloading.]

Usually the download is faster than the upload.



Answer 5:

I ran into a similar situation. I don't have access to a server and have to save the files on SharePoint in .xlsb format from Excel (the smallest file sizes I can get). I use a custom binary ajaxTransport to bring them back as ArrayBuffers, then use threads.js to process the buffers on individual threads and get usable JSON data via SheetJS, and then merge them into a single array of JSON data.

From my testing, a single large file took 41 seconds to finish processing, 4 smaller files took 28 seconds, 8 even smaller files took about 20 seconds, and 16 even smaller files still took about 20 seconds...

So as the files get smaller, there are diminishing returns: the overhead of the increased number of AJAX requests eventually offsets the faster file-processing times. To be perfectly honest, I see no actual speed increase between using threads and not using them, possibly because the async nature of the AJAX calls lets processing occur between the time the calls start and finish, or possibly because I haven't done enough of them to see a big difference. But the threaded version keeps my page loader from freezing while the data loads, so I guess that is a plus.
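For reference, here is a simplified, single-threaded sketch of that fetch-and-merge pipeline using fetch and the SheetJS XLSX global instead of the custom ajaxTransport and threads.js worker pool; the file URLs are placeholders:

```js
// Fetch several workbook files in parallel, parse each with SheetJS,
// and merge the rows into a single array of JSON records.
// Assumes the SheetJS script is loaded, exposing the XLSX global.
const fileUrls = ['part1.xlsb', 'part2.xlsb', 'part3.xlsb']; // placeholders

Promise.all(
  fileUrls.map((url) =>
    fetch(url)
      .then((res) => res.arrayBuffer())
      .then((buf) => {
        const wb = XLSX.read(new Uint8Array(buf), { type: 'array' });
        const firstSheet = wb.Sheets[wb.SheetNames[0]];
        return XLSX.utils.sheet_to_json(firstSheet); // rows as objects
      })
  )
).then((chunks) => {
  const merged = [].concat(...chunks); // one array of JSON data
  console.log('merged rows:', merged.length);
});
```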