As far as I know this is not possible, according to the Solr wiki. Do you guys have any workaround?
If you need to get everything out, you can either set the number of rows ridiculously high (with the caveat that, well, it likely won't work because you'll run out of memory) or iterate through your results using "rows" and "start".
Pseudocode:
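A minimal sketch of that pagination loop in Python, assuming some `fetch_page` callable that performs the actual Solr request with the given `start` and `rows`; the page size and the fake in-memory "index" below are illustrative only:

```python
# Paginate through all results using "start" and "rows" instead of one huge
# request. fetch_page stands in for a real Solr query; it must accept
# start/rows and return a list of documents.

def fetch_all(fetch_page, rows=100):
    """Collect every document by repeatedly requesting the next page."""
    docs = []
    start = 0
    while True:
        page = fetch_page(start=start, rows=rows)
        docs.extend(page)
        if len(page) < rows:  # a short page means we've reached the end
            break
        start += rows
    return docs

# Fake backend with 250 documents, used here only to demonstrate the loop.
_index = [{"id": i} for i in range(250)]

def fake_fetch(start, rows):
    return _index[start:start + rows]

all_docs = fetch_all(fake_fetch, rows=100)
print(len(all_docs))  # 250
```

With a real index you would swap `fake_fetch` for a function that issues the HTTP query and parses the response.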
See http://wiki.apache.org/solr/CommonQueryParameters for use of "start"
Also remember that when you're grabbing gobs of documents, use the "fl" parameter to pull back only the fields you're actually going to use.
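For illustration, here is a sketch of building such a query string with "fl", "start", and "rows" set; the host, core name, and field names are hypothetical:

```python
# Build a Solr select URL that fetches one page and restricts the returned
# fields via "fl". Host, core, and fields are made up for the example.
from urllib.parse import urlencode

params = {
    "q": "*:*",
    "fl": "id,title",  # only pull back the fields you will actually use
    "start": 0,
    "rows": 100,
    "wt": "json",
}
url = "http://localhost:8983/solr/mycore/select?" + urlencode(params)
print(url)
```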
The only workaround is to set the rows value large enough to return all documents.
However, I wouldn't recommend this for anything larger than about 1000 documents. If the number of documents you are fetching is large enough, you will run into memory or timeout issues with the XML you have to generate and parse. For example, if there are 2-3 million documents in your index, do you really want all of that in a single response? It's paginated for a reason. You should probably leverage it.
Of secondary concern... Why are you doing this to begin with? What's the point of putting a bunch of data into a search index, if you are just going to pull it ALL out? You may be better off using your original data source at that point.