Solr does not search into integers?

Posted on 2019-06-25 12:59

I am currently working with Solr as the search engine for an e-commerce website that is under development. In my schema.xml I have these two fields:

   <field name="sku" type="string" indexed="true" stored="true" required="false" />
   <field name="collection" type="string" indexed="true" stored="true" required="false" />

(The full schema.xml is included further down.)

For reference:

  • skus look like: 959620, 929345, 912365, ...
  • collections look like: Alcott, Spigrim, Tantal, ...

They are indexed correctly. For example, when I search for:

http://localhost:8080/solr/myindex/select/?q=Alcott

I get all the products from the "Alcott" collection.

But when I search for:

http://localhost:8080/solr/myindex/select/?q=959620

I get nothing.

However, when I make the request more explicit:

http://localhost:8080/solr/myindex/select/?q=sku:969520

I do get the product linked to that SKU.

Is there any way to make "q=969520" work? Even better: could "q=96" return all the products whose SKU starts with "96"?

Thanks for your help!

schema.xml:

<?xml version="1.0" encoding="UTF-8" ?>

<schema name="example" version="1.2">


  <types>

    <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>

    <!-- boolean type: "true" or "false" -->
    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/>
    <!--Binary data type. The data should be sent/retrieved in/as Base64 encoded Strings -->
    <fieldtype name="binary" class="solr.BinaryField"/>


    <fieldType name="int" class="solr.TrieIntField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
    <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
    <fieldType name="long" class="solr.TrieLongField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
    <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>


    <fieldType name="tint" class="solr.TrieIntField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
    <fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
    <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
    <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>


    <fieldType name="date" class="solr.TrieDateField" omitNorms="true" precisionStep="0" positionIncrementGap="0"/>

    <!-- A Trie based date field for faster date range queries and date faceting. -->
    <fieldType name="tdate" class="solr.TrieDateField" omitNorms="true" precisionStep="6" positionIncrementGap="0"/>



    <fieldType name="pint" class="solr.IntField" omitNorms="true"/>
    <fieldType name="plong" class="solr.LongField" omitNorms="true"/>
    <fieldType name="pfloat" class="solr.FloatField" omitNorms="true"/>
    <fieldType name="pdouble" class="solr.DoubleField" omitNorms="true"/>
    <fieldType name="pdate" class="solr.DateField" sortMissingLast="true" omitNorms="true"/>



    <fieldType name="sint" class="solr.SortableIntField" sortMissingLast="true" omitNorms="true"/>
    <fieldType name="slong" class="solr.SortableLongField" sortMissingLast="true" omitNorms="true"/>
    <fieldType name="sfloat" class="solr.SortableFloatField" sortMissingLast="true" omitNorms="true"/>
    <fieldType name="sdouble" class="solr.SortableDoubleField" sortMissingLast="true" omitNorms="true"/>


    <fieldType name="random" class="solr.RandomSortField" indexed="true" />


    <!-- A text field that only splits on whitespace for exact matching of words -->
    <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      </analyzer>
    </fieldType>


    <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <!-- in this example, we will only use synonyms at query time
        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        -->
        <!-- Case insensitive stop word removal.
          add enablePositionIncrements=true in both the index and query
          analyzers to leave a 'gap' for more accurate phrase queries.
        -->
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="stopwords.txt"
                enablePositionIncrements="true"
                />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="stopwords.txt"
                enablePositionIncrements="true"
                />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/>
      </analyzer>
    </fieldType>

    <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">

      <analyzer type="query">
        <!-- normalisation des accents, cédilles, e dans l'o,... -->
        <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
        <!-- découpage selon les espaces -->
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <!-- suppression de la ponctuation -->
        <filter class="solr.PatternReplaceFilterFactory" pattern="^(\p{Punct}*)(.*?)(\p{Punct}*)$" replacement="$2"/>
        <!-- suppression des tokens vides et des mots démesurés -->
        <filter class="solr.LengthFilterFactory" min="1" max="100" />
        <!-- passage en minuscules -->
        <filter class="solr.LowerCaseFilterFactory"/>
        <!-- suppression des élisions (l', qu',...) -->
        <filter class="solr.ElisionFilterFactory" articles="elisionwords_fr.txt"/> 
        <!-- découpage des mots composés -->
        <filter class="solr.WordDelimiterFilterFactory" splitOnCaseChange="1" splitOnNumerics="1" stemEnglishPossessive="1" generateWordParts="1"
                                                        generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="1" preserveOriginal="1"/>
        <!-- suppression des mots insignifiants -->
        <filter class="solr.StopFilterFactory" ignoreCase="1" words="stopwords_fr.txt" enablePositionIncrements="true"/>
        <!-- gestion des synonymes -->
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms_fr.txt" ignoreCase="true" expand="true"/>
        <!-- partie de mot -->
        <filter class="solr.EdgeNGramFilterFactory" minGramSize="3" maxGramSize="6"/>
        <!-- lemmatisation (pluriels,...) -->
        <filter class="solr.SnowballPorterFilterFactory" language="French" protected="protwords_fr.txt"/>
        <!-- suppression des doublons éventuels -->
        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
      </analyzer>



      <analyzer type="index">
        <!-- normalisation des accents, cédilles, e dans l'o,... -->
        <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
        <!-- découpage selon les espaces -->
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <!-- suppression de la ponctuation -->
        <filter class="solr.PatternReplaceFilterFactory" pattern="^(\p{Punct}*)(.*?)(\p{Punct}*)$" replacement="$2"/>
        <!-- suppression des tokens vides et des mots démesurés -->
        <filter class="solr.LengthFilterFactory" min="1" max="100" />
        <!-- passage en minuscules -->
        <filter class="solr.LowerCaseFilterFactory"/>
        <!-- suppression des élisions (l', qu',...) -->
        <filter class="solr.ElisionFilterFactory" articles="elisionwords_fr.txt"/> 
        <!-- découpage des mots composés -->
        <filter class="solr.WordDelimiterFilterFactory" splitOnCaseChange="1" splitOnNumerics="1" stemEnglishPossessive="1" generateWordParts="1"
                                                        generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="1" preserveOriginal="1"/>
        <!-- suppression des mots insignifiants -->
        <filter class="solr.StopFilterFactory" ignoreCase="1" words="stopwords_fr.txt" enablePositionIncrements="true"/>
        <!-- gestion des synonymes -->
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms_fr.txt" ignoreCase="true" expand="true"/>
        <!-- partie de mot -->
        <filter class="solr.EdgeNGramFilterFactory" minGramSize="3" maxGramSize="6"/>
        <!-- lemmatisation (pluriels,...) -->
        <filter class="solr.SnowballPorterFilterFactory" language="French" protected="protwords_fr.txt"/>
        <!-- suppression des doublons éventuels -->
        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
      </analyzer>
    </fieldType>




    <!-- Less flexible matching, but less false matches.  Probably not ideal for product names,
         but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->
    <fieldType name="textTight" class="solr.TextField" positionIncrementGap="100" >
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/>
        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
             possible with WordDelimiterFilter in conjunction with stemming. -->
        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
      </analyzer>
    </fieldType>


    <!-- A general unstemmed text field - good if one does not know the language of the field -->
    <fieldType name="textgen" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="stopwords.txt"
                enablePositionIncrements="true"
                />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>


    <!-- A general unstemmed text field that indexes tokens normally and also
         reversed (via ReversedWildcardFilterFactory), to enable more efficient 
   leading wildcard queries. -->
    <fieldType name="text_rev" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
           maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="stopwords.txt"
                enablePositionIncrements="true"
                />
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>


    <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
      <analyzer>
        <!-- KeywordTokenizer does no actual tokenizing, so the entire
             input string is preserved as a single token
          -->
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <!-- The LowerCase TokenFilter does what you expect, which can be good
             when you want your sorting to be case insensitive
          -->
        <filter class="solr.LowerCaseFilterFactory" />
        <!-- The TrimFilter removes any leading or trailing whitespace -->
        <filter class="solr.TrimFilterFactory" />

        <filter class="solr.PatternReplaceFilterFactory"
                pattern="([^a-z])" replacement="" replace="all"
        />
      </analyzer>
    </fieldType>

    <fieldtype name="phonetic" stored="false" indexed="true" class="solr.TextField" >
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.DoubleMetaphoneFilterFactory" inject="false"/>
      </analyzer>
    </fieldtype>

    <fieldtype name="payloads" stored="false" indexed="true" class="solr.TextField" >
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <!--
        The DelimitedPayloadTokenFilter can put payloads on tokens... for example,
        a token of "foo|1.4"  would be indexed as "foo" with a payload of 1.4f
        Attributes of the DelimitedPayloadTokenFilterFactory : 
         "delimiter" - a one character delimiter. Default is | (pipe)
   "encoder" - how to encode the following value into a playload
      float -> org.apache.lucene.analysis.payloads.FloatEncoder,
      integer -> o.a.l.a.p.IntegerEncoder
      identity -> o.a.l.a.p.IdentityEncoder
            Fully Qualified class name implementing PayloadEncoder, Encoder must have a no arg constructor.
         -->
        <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="float"/>
      </analyzer>
    </fieldtype>

    <!-- lowercases the entire field value, keeping it as a single token.  -->
    <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory" />
      </analyzer>
    </fieldType>


    <!-- since fields of this type are by default not stored or indexed,
         any data added to them will be ignored outright.  --> 
    <fieldtype name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" /> 

 </types>


 <fields>
 <!-- Vu fields -->
   <field name="id" type="string" indexed="true" stored="true" required="true" /> 
   <field name="sku" type="string" indexed="true" stored="true" required="false" /> 
   <field name="collection" type="string" indexed="true" stored="true" required="false" /> 
   <field name="title" type="text_fr" required="false" />
   <field name="description" type="text_fr" required="false" />
   <field name="price" type="float" required="false" indexed="true" stored="false" />
   <field name="brand_id" type="text" required="false" />
   <field name="date_online" type="date" required="false" />
   <field name="product_type" type="text" required="false" />   
   <field name="selection_id" type="sint" required="false" multiValued="true" indexed="true" stored="false" />
   <field name="stock_delay" type="sint" required="false"  />
   <field name="stock" type="sint" required="false"  />
   <field name="price_type" type="sint" required="false"  />
   <field name="main_product_id" type="text" required="false"  />
   <field name="date_price" type="date" required="false" />
   <!-- attributes -->
   <dynamicField name="attr_*" type="sint" indexed="true" multiValued="true"/>

   <field name="attr_13" type="int" indexed="true" multiValued="false"/>
   <field name="attr_14" type="int" indexed="true" multiValued="false"/>
   <field name="attr_19" type="int" indexed="true" multiValued="false"/>

    <!-- Ce champ contiendra la copie de tous les autres, pour faciliter la recherche -->
   <field name="global" type="text_fr" required="false" multiValued="true" />


   <!-- Valid attributes for fields:
     name: mandatory - the name for the field
     type: mandatory - the name of a previously defined type from the 
       <types> section
     indexed: true if this field should be indexed (searchable or sortable)
     stored: true if this field should be retrievable
     compressed: [false] if this field should be stored using gzip compression
       (this will only apply if the field type is compressable; among
       the standard field types, only TextField and StrField are)
     multiValued: true if this field may contain multiple values per document
     omitNorms: (expert) set to true to omit the norms associated with
       this field (this disables length normalization and index-time
       boosting for the field, and saves some memory).  Only full-text
       fields or fields that need an index-time boost need norms.
     termVectors: [false] set to true to store the term vector for a
       given field.
       When using MoreLikeThis, fields used for similarity should be
       stored for best performance.
     termPositions: Store position information with the term vector.  
       This will increase storage costs.
     termOffsets: Store offset information with the term vector. This 
       will increase storage costs.
     default: a value that should be used if no value is specified
       when adding a document.
   -->
    <!--
   <field name="id" type="string" indexed="true" stored="true" required="true" /> 
   <field name="sku" type="textTight" indexed="true" stored="true" omitNorms="true"/>
   <field name="name" type="textgen" indexed="true" stored="true"/>
   <field name="alphaNameSort" type="alphaOnlySort" indexed="true" stored="false"/>
   <field name="manu" type="textgen" indexed="true" stored="true" omitNorms="true"/>
   <field name="cat" type="text_ws" indexed="true" stored="true" multiValued="true" omitNorms="true" />
   <field name="features" type="text" indexed="true" stored="true" multiValued="true"/>
   <field name="includes" type="text" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" />

   <field name="weight" type="float" indexed="true" stored="true"/>
   <field name="price"  type="float" indexed="true" stored="true"/>
   <field name="popularity" type="int" indexed="true" stored="true" />
   <field name="inStock" type="boolean" indexed="true" stored="true" />
    -->

   <!-- Common metadata fields, named specifically to match up with
     SolrCell metadata when parsing rich documents such as Word, PDF.
     Some fields are multiValued only because Tika currently may return
     multiple values for them.
   -->
   <!--
   <field name="title" type="text" indexed="true" stored="true" multiValued="true"/>
   <field name="subject" type="text" indexed="true" stored="true"/>
   <field name="description" type="text" indexed="true" stored="true"/>
   <field name="comments" type="text" indexed="true" stored="true"/>
   <field name="author" type="textgen" indexed="true" stored="true"/>
   <field name="keywords" type="textgen" indexed="true" stored="true"/>
   <field name="category" type="textgen" indexed="true" stored="true"/>
   <field name="content_type" type="string" indexed="true" stored="true" multiValued="true"/>
   <field name="last_modified" type="date" indexed="true" stored="true"/>
   <field name="links" type="string" indexed="true" stored="true" multiValued="true"/>
    -->

   <!-- catchall field, containing all other searchable text fields (implemented
        via copyField further on in this schema  -->
   <!-- <field name="text" type="text" indexed="true" stored="false" multiValued="true"/> -->

   <!-- catchall text field that indexes tokens both normally and in reverse for efficient
        leading wildcard queries. -->
   <!-- <field name="text_rev" type="text_rev" indexed="true" stored="false" multiValued="true"/> -->

   <!-- non-tokenized version of manufacturer to make it easier to sort or group
        results by manufacturer.  copied from "manu" via copyField -->
   <!-- <field name="manu_exact" type="string" indexed="true" stored="false"/> -->

   <!-- <field name="payloads" type="payloads" indexed="true" stored="true"/> -->

   <!-- Uncommenting the following will create a "timestamp" field using
        a default value of "NOW" to indicate when each document was indexed.
     -->
   <!--
   <field name="timestamp" type="date" indexed="true" stored="true" default="NOW" multiValued="false"/>
     -->


   <!-- Dynamic field definitions.  If a field name is not found, dynamicFields
        will be used if the name matches any of the patterns.
        RESTRICTION: the glob-like pattern in the name attribute must have
        a "*" only at the start or the end.
        EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
        Longer patterns will be matched first.  if equal size patterns
        both match, the first appearing in the schema will be used.  -->
   <!--
   <dynamicField name="*_i"  type="int"    indexed="true"  stored="true"/>
   <dynamicField name="*_s"  type="string"  indexed="true"  stored="true"/>
   <dynamicField name="*_l"  type="long"   indexed="true"  stored="true"/>
   <dynamicField name="*_t"  type="text"    indexed="true"  stored="true"/>
   <dynamicField name="*_b"  type="boolean" indexed="true"  stored="true"/>
   <dynamicField name="*_f"  type="float"  indexed="true"  stored="true"/>
   <dynamicField name="*_d"  type="double" indexed="true"  stored="true"/>
   <dynamicField name="*_dt" type="date"    indexed="true"  stored="true"/>
    -->

   <!-- some trie-coded dynamic fields for faster range queries -->
   <!--
   <dynamicField name="*_ti" type="tint"    indexed="true"  stored="true"/>
   <dynamicField name="*_tl" type="tlong"   indexed="true"  stored="true"/>
   <dynamicField name="*_tf" type="tfloat"  indexed="true"  stored="true"/>
   <dynamicField name="*_td" type="tdouble" indexed="true"  stored="true"/>
   <dynamicField name="*_tdt" type="tdate"  indexed="true"  stored="true"/>

   <dynamicField name="*_pi"  type="pint"    indexed="true"  stored="true"/>

   <dynamicField name="ignored_*" type="ignored" multiValued="true"/>
   <dynamicField name="attr_*" type="textgen" indexed="true" stored="true" multiValued="true"/>

   <dynamicField name="random_*" type="random" />
    -->
   <!-- uncomment the following to ignore any fields that don't already match an existing 
        field name or dynamic field, rather than reporting them as an error. 
        alternately, change the type="ignored" to some other type e.g. "text" if you want 
        unknown fields indexed and/or stored by default --> 
   <!--dynamicField name="*" type="ignored" multiValued="true" /-->

 </fields>

 <!-- Field to use to determine and enforce document uniqueness. 
      Unless this field is marked with required="false", it will be a required field
   -->
 <uniqueKey>id</uniqueKey>

 <!-- field for the QueryParser to use when an explicit fieldname is absent -->
 <defaultSearchField>global</defaultSearchField>

 <!-- SolrQueryParser configuration: defaultOperator="AND|OR" -->
 <solrQueryParser defaultOperator="OR"/>

  <!-- copyField commands copy one field to another at the time a document
        is added to the index.  It's used either to index the same field differently,
        or to add multiple fields to the same field for easier/faster searching.  -->

   <copyField source="title" dest="global"/>
   <copyField source="description" dest="global"/>


</schema>

Answer 1:

Yes. Add a directive like this to your schema.xml, after the field definitions:

<copyField source="sku" dest="text">

This assumes defaultSearchField is set to text.

To search for all SKUs starting with 96, you can search for 96*. But keep in mind that this will match any field whose value starts with 96 (not just SKUs); to restrict it to SKUs only, you will have to search for sku:96*.
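
For example, following the URL pattern used in the question (the host, port and core name are simply the ones shown above), the unrestricted and the SKU-restricted prefix searches would be:

http://localhost:8080/solr/myindex/select/?q=96*
http://localhost:8080/solr/myindex/select/?q=sku:96*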



Answer 2:

Based on the behaviour you describe, it sounds like you are trying to search across multiple fields out of the box with the basic SearchHandler query syntax. That does not work the way you are hoping.

There are several options available:

  1. Have the front end qualify the query so that fully qualified field names are sent (e.g. "fielda:foo OR fieldb:foo")
  2. Copy the content of the searchable fields into a single search field (via copyField) and search that default field
  3. Use Solr's DisMax syntax and specify multiple query fields (with the qf request parameter)

Since you have fields of different types, and you want wildcard matching and other things of that kind, I would suggest going the DisMax route and looking into setting up a query handler that better fits your needs; a sketch of such a handler is shown below.
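
As a rough illustration only (this would live in solrconfig.xml, not schema.xml, and the handler name, field boosts and mm value here are hypothetical), a dismax handler covering the sku, collection and catch-all global fields could look like:

<requestHandler name="dismax" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- use the DisMax query parser -->
    <str name="defType">dismax</str>
    <!-- fields to query, with a boost on sku and collection matches -->
    <str name="qf">sku^2.0 collection^1.5 global</str>
    <!-- require at least one clause to match -->
    <str name="mm">1</str>
  </lst>
</requestHandler>

A query such as q=959620&qt=dismax would then look in all three fields at once.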

For more information:

  • Default SearchHandler: http://wiki.apache.org/solr/SearchHandler
  • Solr DisMax: http://wiki.apache.org/solr/DisMaxQParserPlugin


Answer 3:

You need a copyField into whichever field you have set as the default search field.

Since your defaultSearchField is set to global, try:

<copyField source="sku" dest="global"/>

You will probably want to do the same for collection:

<copyField source="collection" dest="global"/>

To get partial matches (e.g. q=95) without any special operator, you need to adjust the n-gram filter. Your current setting, in both the index-time and the query-time analyzers, is:

<filter class="solr.EdgeNGramFilterFactory" minGramSize="3" maxGramSize="6"/>

This means partial matches are available from 3 to 6 characters, for example:

  • 959
  • 9596
  • 95962
  • 596
  • ...

If you want to allow matches from 2 characters onwards (e.g. 95), change minGramSize in that filter in both analyzers and you should be good to go:

<filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="6"/>

Finally, your global field should probably not be stored (which it currently is, by default), only indexed:

<field name="global" type="text_fr" indexed="true" stored="false" required="false" multiValued="true" />

Keep in mind that you need to restart Solr and re-index for the changes to take effect.


