solr scoring - fieldnorm

Posted 2019-02-24 09:24

Question:

I have the following records and the scores against it when I search for "iphone" -

Record1: FieldName - DisplayName : "Iphone" FieldName - Name : "Iphone"

11.654595 = (MATCH) sum of:
  11.654595 = (MATCH) max plus 0.01 times others of:
    7.718274 = (MATCH) weight(DisplayName:iphone^10.0 in 915195), product of:
      0.6654692 = queryWeight(DisplayName:iphone^10.0), product of:
        10.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      11.598244 = (MATCH) fieldWeight(DisplayName:iphone in 915195), product of:
        1.0 = tf(termFreq(DisplayName:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        1.0 = fieldNorm(field=DisplayName, doc=915195)
    11.577413 = (MATCH) weight(Name:iphone^15.0 in 915195), product of:
      0.99820393 = queryWeight(Name:iphone^15.0), product of:
        15.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      11.598244 = (MATCH) fieldWeight(Name:iphone in 915195), product of:
        1.0 = tf(termFreq(Name:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        1.0 = fieldNorm(field=Name, doc=915195)

Record2: FieldName - DisplayName : "The Iphone Book" FieldName - Name : "The Iphone Book"

7.284122 = (MATCH) sum of:
  7.284122 = (MATCH) max plus 0.01 times others of:
    4.823921 = (MATCH) weight(DisplayName:iphone^10.0 in 453681), product of:
      0.6654692 = queryWeight(DisplayName:iphone^10.0), product of:
        10.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      7.2489023 = (MATCH) fieldWeight(DisplayName:iphone in 453681), product of:
        1.0 = tf(termFreq(DisplayName:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.625 = fieldNorm(field=DisplayName, doc=453681)
    7.2358828 = (MATCH) weight(Name:iphone^15.0 in 453681), product of:
      0.99820393 = queryWeight(Name:iphone^15.0), product of:
        15.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      7.2489023 = (MATCH) fieldWeight(Name:iphone in 453681), product of:
        1.0 = tf(termFreq(Name:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.625 = fieldNorm(field=Name, doc=453681)

Record3: FieldName - DisplayName: "iPhone" FieldName - Name: "iPhone"

7.284122 = (MATCH) sum of:
  7.284122 = (MATCH) max plus 0.01 times others of:
    4.823921 = (MATCH) weight(DisplayName:iphone^10.0 in 5737775), product of:
      0.6654692 = queryWeight(DisplayName:iphone^10.0), product of:
        10.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      7.2489023 = (MATCH) fieldWeight(DisplayName:iphone in 5737775), product of:
        1.0 = tf(termFreq(DisplayName:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.625 = fieldNorm(field=DisplayName, doc=5737775)
    7.2358828 = (MATCH) weight(Name:iphone^15.0 in 5737775), product of:
      0.99820393 = queryWeight(Name:iphone^15.0), product of:
        15.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      7.2489023 = (MATCH) fieldWeight(Name:iphone in 5737775), product of:
        1.0 = tf(termFreq(Name:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.625 = fieldNorm(field=Name, doc=5737775)

Why do Record2 and Record3 have the same score, when Record2 has 3 words and Record3 has just one word? Record3 should have higher relevancy than Record2. Why is the fieldNorm of both Record2 and Record3 the same?

QueryParser: DisMax. FieldType: the stock text fieldtype, used as the default in the shipped configuration.

Adding DataFeed:

Record1: Iphone

{
        "ListPrice":1184.526,
        "ShipsTo":1,
        "OID":"190502",
        "EAN":"9780596804299",
        "ISBN":"0596804296",
        "Author":"Pogue, David",
        "product_type_fq":"Books",
        "ShipmentDurationDays":"21",
        "CurrencyValue":"24.9900",
        "ShipmentDurationText":"NORMALLY SHIPS IN 21 BUSINESS DAYS",
        "Availability":0,
        "COD":0,
        "PublicationDate":"2009-08-07 00:00:00.0",
        "Discount":"25",
        "SubCategory_fq":"Hardware",
        "Binding":"Paperback",
        "Category_fq":"Non Classifiable",
        "ShippingCharges":"0",
        "OIDType":8,
        "Pages":"397",
        "CallOrder":"0",
        "TrackInventory":"Ingram",
        "Author_fq":"Pogue, David",
        "DisplayName":"Iphone",
        "url":"/iphone-pogue-david/books/9780596804299.htm",
        "CurrencyType":"USD",
        "SubSubCategory":"Handheld Devices",
        "Mask":0,
        "Publisher":"Oreilly & Associates Inc",
        "Name":"Iphone",
        "Language":"English",
        "DisplayPriority":"999",
        "rowid":"books_9780596804299"
        }

Record2: The Iphone Book

{
        "ListPrice":1184.526,
        "ShipsTo":1,
        "OID":"94694",
        "EAN":"9780321534101",
        "ISBN":"0321534107",
        "Author":"Kelby, Scott/ White, Terry",
        "product_type_fq":"Books",
        "ShipmentDurationDays":"21",
        "CurrencyValue":"24.9900",
        "ShipmentDurationText":"NORMALLY SHIPS IN 21 BUSINESS DAYS",
        "Availability":1,
        "COD":0,
        "PublicationDate":"2007-08-13 00:00:00.0",
        "Discount":"25",
        "SubCategory_fq":"Handheld Devices",
        "Binding":"Paperback",
        "BAMcategory_src":"Computers",
        "Category_fq":"Computers",
        "ShippingCharges":"0",
        "OIDType":8,
        "Pages":"219",
        "CallOrder":"0",
        "TrackInventory":"Ingram",
        "Author_fq":"Kelby, Scott/ White, Terry",
        "DisplayName":"The Iphone Book",
        "url":"/iphone-book-kelby-scott-white-terry/books/9780321534101.htm",
        "CurrencyType":"USD",
        "SubSubCategory":" Handheld Devices",
        "BAMcategory_fq":"Computers",
        "Mask":0,
        "Publisher":"Pearson P T R",
        "Name":"The Iphone Book",
        "Language":"English",        
        "DisplayPriority":"999",
        "rowid":"books_9780321534101"
        }

Record 3: iPhone

{
        "ListPrice":278.46,
        "ShipsTo":1,
        "OID":"694715",
        "EAN":"9781411423527",
        "ISBN":"1411423526",
        "Author":"Quamut (COR)",
        "product_type_fq":"Books",
        "ShipmentDurationDays":"21",
        "CurrencyValue":"5.9500",
        "ShipmentDurationText":"NORMALLY SHIPS IN 21 BUSINESS DAYS",
        "Availability":0,
        "COD":0,
        "PublicationDate":"2010-08-03 00:00:00.0",
        "Discount":"25",
        "SubCategory_fq":"Hardware",
        "Binding":"Paperback",
        "Category_fq":"Non Classifiable",
        "ShippingCharges":"0",
        "OIDType":8,
        "CallOrder":"0",        
        "TrackInventory":"BNT",
        "Author_fq":"Quamut (COR)",
        "DisplayName":"iPhone",
        "url":"/iphone-quamut-cor/books/9781411423527.htm",
        "CurrencyType":"USD",
        "SubSubCategory":"Handheld Devices",
        "Mask":0,
        "Publisher":"Sterling Pub Co Inc",
        "Name":"iPhone",
        "Language":"English",
        "DisplayPriority":"999",
        "rowid":"books_9781411423527"
        }         

Answer 1:

fieldNorm takes the field length into account, i.e. the number of indexed terms.
The fieldtype used for DisplayName and Name is text, which includes the stopword and word-delimiter filters.

Record 1 - "Iphone"
Generates a single token: Iphone.

Record 2 - "The Iphone Book"
Generates 2 tokens: Iphone, Book.
"The" is removed by the stopword filter.

Record 3 - "iPhone"
Also generates 2 tokens: i, Phone.
Because iPhone contains a case change, the word delimiter filter with splitOnCaseChange splits it into the 2 tokens i and Phone, so it gets the same fieldNorm as Record 2.
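
To tie this back to the explain output: fieldWeight is tf × idf × fieldNorm, where fieldNorm here is lengthNorm = 1/sqrt(number of tokens), rounded by the single-byte norm encoding (see the next answer) to 1.0 for one token and 0.625 for two. A minimal arithmetic check, using the idf value from the explain output above (the class name is just for illustration):

public class FieldWeightCheck
{
    public static void main(String[] args)
    {
        float tf = 1.0f;          // "iphone" occurs once in each field
        float idf = 11.598244f;   // idf(docFreq=484, maxDocs=19431244) from the explain output

        // Record 1: one token -> stored fieldNorm = 1.0
        System.out.println(tf * idf * 1.0f);   // 11.598244, as in the explain output

        // Records 2 and 3: two tokens -> 1/sqrt(2) ~ 0.707, stored as 0.625
        System.out.println(tf * idf * 0.625f); // ~7.24890, matching the 7.2489023 above
    }
}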



Answer 2:

This is the answer to user1021590's follow-up question/answer on the "da vinci code" search example.

The reason all the documents get the same score is due to a subtle implementation detail of lengthNorm. The Lucene TFIDFSimilarity documentation states the following about norm(t, d):

the resulted norm value is encoded as a single byte before being stored. At search time, the norm byte value is read from the index directory and decoded back to a float norm value. This encoding/decoding, while reducing index size, comes with the price of precision loss - it is not guaranteed that decode(encode(x)) = x. For instance, decode(encode(0.89)) = 0.75.

If you dig into the code, you see that this float-to-byte encoding is implemented as follows:

public static byte floatToByte315(float f)
{
    // Reinterpret the float's IEEE 754 bit pattern as an int.
    int bits = Float.floatToRawIntBits(f);
    // Keep the sign, the exponent and only the top 3 mantissa bits.
    int smallfloat = bits >> (24 - 3);
    // Too-small (or non-positive) values collapse to byte 0 or 1.
    if (smallfloat <= ((63 - 15) << 3))
    {
        return (bits <= 0) ? (byte) 0 : (byte) 1;
    }
    // Values above the largest representable value saturate to -1 (0xFF).
    if (smallfloat >= ((63 - 15) << 3) + 0x100)
    {
        return -1;
    }
    // Re-bias the exponent so the result fits in a single byte.
    return (byte) (smallfloat - ((63 - 15) << 3));
}

and the decoding of that byte to float is done as:

public static float byte315ToFloat(byte b)
{
    // The zero byte decodes to exactly 0.
    if (b == 0)
        return 0.0f;
    // Shift the byte back into the exponent / high-mantissa position.
    int bits = (b & 0xff) << (24 - 3);
    // Undo the exponent re-bias applied during encoding.
    bits += (63 - 15) << 24;
    return Float.intBitsToFloat(bits);
}

lengthNorm is calculated as 1 / sqrt( number of terms in field ). This is then encoded for storage using floatToByte315. For a field with 3 terms, we get:

floatToByte315( 1/sqrt(3.0) ) = 120

and for a field with 4 terms, we get:

floatToByte315( 1/sqrt(4.0) ) = 120

so both of them get decoded to:

byte315ToFloat(120) = 0.5.
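
These two methods are from Lucene's org.apache.lucene.util.SmallFloat, so the round trip is easy to check directly; a minimal sketch, assuming a Lucene jar from that era (3.x/4.x) is on the classpath (the class name is just for illustration):

import org.apache.lucene.util.SmallFloat;

public class LengthNormRoundTrip
{
    public static void main(String[] args)
    {
        // 2 terms decodes to 0.625 (the fieldNorm in the original question),
        // while 3 and 4 terms both encode to byte 120 and decode to 0.5.
        for (int terms : new int[] { 2, 3, 4 })
        {
            float lengthNorm = (float) (1.0 / Math.sqrt(terms));
            byte b = SmallFloat.floatToByte315(lengthNorm);
            System.out.println(terms + " terms -> byte " + b
                    + " -> " + SmallFloat.byte315ToFloat(b));
        }
        // Expected output:
        // 2 terms -> byte 121 -> 0.625
        // 3 terms -> byte 120 -> 0.5
        // 4 terms -> byte 120 -> 0.5
    }
}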

The doc also states this:

The rationale supporting such lossy compression of norm values is that given the difficulty (and inaccuracy) of users to express their true information need by a query, only big differences matter.

UPDATE: As of Solr 4.10, this implementation and corresponding statements are part of DefaultSimilarity.



Tags: solr, scoring