I have a doc_type with a mapping similar to this very simplified one:
{
  "test": {
    "properties": {
      "name": {
        "type": "string"
      },
      "long_searchable_text": {
        "type": "string"
      },
      "clearances": {
        "type": "object"
      }
    }
  }
}
The clearances field should be an object containing a set of alphanumeric identifiers used for filtering. A typical document looks like this:
{
  "name": "Lord Macbeth",
  "long_searchable_text": "Life's but a walking shadow, a poor player, that...",
  "clearances": {
    "glamis": "aa2862jsgd",
    "cawdor": "3463463551"
  }
}
The problem is that sometimes the first value indexed into a new subfield of clearances is purely numeric, as with cawdor above. Elasticsearch then infers the subfield's type as long, but that is just an accident of insertion order: the same subfield may hold an alphanumeric value in another document. When a later document arrives with an alphanumeric value in that subfield, I get a parsing exception:
{"error":"MapperParsingException[failed to parse [clearances.cawdor]]; nested: NumberFormatException[For input string: \"af654hgss1\"]; ","status":400}%
I tried to solve this with a dynamic template defined like this:
{
  "test": {
    "properties": {
      "name": {
        "type": "string"
      },
      "long_searchable_text": {
        "type": "string"
      },
      "clearances": {
        "type": "object"
      }
    }
  },
  "dynamic_templates": [
    {
      "source_template": {
        "match": "clearances.*",
        "mapping": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  ]
}
But the problem persists: if the first indexed document has a clearances.some_subfield value that can be parsed as an integer, the subfield is still inferred as an integer, and all subsequent documents with alphanumeric values in that subfield fail to index.
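To confirm what is happening, I can inspect the dynamically generated mapping (again, test_index is a placeholder); the offending subfield shows up as long there:

# Retrieve the mapping for the type: after the first numeric document,
# clearances.cawdor appears with "type": "long"
curl -XGET 'localhost:9200/test_index/_mapping/test?pretty'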
I could list all the current subfields explicitly in the mapping, but there are many of them and I expect their number to grow in the future (each new subfield triggering a mapping update and a full reindex...).
Is there a way to make this work without resorting to a full reindex every time a new subfield is added?