Location of custom Kibana dashboards in ElasticSearch

Published 2019-01-16 19:53

Question:

I know for a fact that saved Kibana dashboards (i.e., the JSON definition of the dashboard) are stored in, or at least associated with, a particular ElasticSearch instance. If I save a dashboard while Kibana is attached to one server hosting ElasticSearch and then switch my ElasticSearch server to another address, I lose my saved dashboard. But if I switch back to the original server address, I get the saved dashboard back.

My question, then, is where exactly within the ElasticSearch installation directory the dashboards are saved. I would rather run a script to automatically load my pre-created Kibana dashboards than be forced to copy/paste JSON through the web console every time I start up a new ElasticSearch instance.

Thank you for the help.

UPDATE

According to this Google Groups post, the dashboards are saved into the kibana-int index with a _type of dashboard and an _id of whatever I named the dashboard. So, to save my dashboards into new ElasticSearch instances, do I just need to execute a PUT into this index through cURL? Is there a better way to do this?
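
For what it's worth, the kind of PUT I have in mind would look roughly like the sketch below (using the Python requests library; the host, the dashboard name, and dashboard_doc.json are placeholders, and the file is assumed to contain a dashboard document previously fetched from kibana-int):

# A rough sketch only; requests is assumed to be installed, and the host,
# dashboard name, and dashboard_doc.json below are placeholders.
import json
import requests

ES_HOST = "localhost"           # placeholder ElasticSearch host
DASHBOARD_ID = "my-dashboard"   # placeholder: the name I gave the dashboard

# dashboard_doc.json is assumed to hold the _source of a dashboard document
# previously fetched from the kibana-int index of another instance.
with open("dashboard_doc.json") as f:
    doc = json.load(f)

resp = requests.put(
    "http://%s:9200/kibana-int/dashboard/%s" % (ES_HOST, DASHBOARD_ID),
    data=json.dumps(doc),
)
print(resp.json())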

Answer 1:

Yes, the Kibana dashboards are saved in Elasticsearch under the kibana-int index (by default; you can override that in the config.js file). If you want to move your Kibana dashboards to another ES cluster, you have two options:

  1. Export the dashboards manually. Click Save -> Advanced -> Export Schema, save the file, and then in the new Kibana import it by clicking Load -> Advanced -> Choose File and selecting the file you exported. This is a pain, because you have to repeat the operation for every dashboard you want to migrate.
  2. Use a utility to migrate an ES index from one ES cluster to another. There are already some utilities that can perform this operation. Searching on SO, I found this answer that suggests using the Elasticsearch.pm library (Perl :S) to do this. There are probably more utilities like this, but I do not think that writing a script to migrate an index to another cluster is such a difficult task.

EDIT: For the second option, you can use the Python elasticsearch library and its reindex helper, if you feel more comfortable with Python: https://elasticsearch-py.readthedocs.org/en/latest/helpers.html#elasticsearch.helpers.reindex
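
For example, a minimal sketch along those lines (the host names are placeholders, the index name assumes the default kibana-int, and the elasticsearch Python package is assumed to be installed):

from elasticsearch import Elasticsearch
from elasticsearch.helpers import reindex

# Placeholder hosts; replace with your actual cluster addresses.
source_client = Elasticsearch(["old-es-host:9200"])
target_client = Elasticsearch(["new-es-host:9200"])

# Copy every document of the kibana-int index from the source cluster
# into an index of the same name on the target cluster.
reindex(
    source_client,
    source_index="kibana-int",
    target_index="kibana-int",
    target_client=target_client,
)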



Answer 2:

In fact, it is very easy. Copy two folders:

1) .\elasticsearch\data\nodes\0\indices\.kibana 
2) .\elasticsearch\data\nodes\0\indices\kibana-int

and paste them into the new Elasticsearch installation.
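
For example, a rough sketch of that copy in Python (the source and destination paths are placeholders to adjust to your own installations, and both Elasticsearch instances are assumed to be stopped while copying):

import os
import shutil

# Placeholder paths; adjust to the actual data directories of the old and
# new Elasticsearch installations.
OLD_INDICES = r".\elasticsearch-old\data\nodes\0\indices"
NEW_INDICES = r".\elasticsearch-new\data\nodes\0\indices"

for index_dir in (".kibana", "kibana-int"):
    src = os.path.join(OLD_INDICES, index_dir)
    dst = os.path.join(NEW_INDICES, index_dir)
    if os.path.isdir(src):
        # copytree refuses to overwrite an existing destination directory
        shutil.copytree(src, dst)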



Answer 3:

In version 1.0.0+ of ElasticSearch the snapshot and restore APIs have been made available:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-snapshots.html

http://chrissimpson.co.uk/elasticsearch-snapshot-restore-api.html

This enables you to quickly back up (snapshot) and restore any or all indices on a given cluster. So you might want to look at upgrading to that version, since it gives you a simple API call to take a snapshot of the "kibana-int" index and restore that index to any other cluster.
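
A minimal sketch of that flow (the repository name, the /mount/backups/kibana location, and the host names are placeholders; both clusters are assumed to reach the same shared filesystem, and the Python requests library is assumed to be installed):

import json
import requests

OLD_ES = "http://old-es-host:9200"   # placeholder source cluster
NEW_ES = "http://new-es-host:9200"   # placeholder target cluster
REPO = {"type": "fs", "settings": {"location": "/mount/backups/kibana"}}

# Register the same filesystem repository on both clusters.
requests.put(OLD_ES + "/_snapshot/kibana_backup", data=json.dumps(REPO))
requests.put(NEW_ES + "/_snapshot/kibana_backup", data=json.dumps(REPO))

# Snapshot only the kibana-int index on the source cluster.
requests.put(
    OLD_ES + "/_snapshot/kibana_backup/snapshot_1?wait_for_completion=true",
    data=json.dumps({"indices": "kibana-int"}),
)

# Restore that index on the target cluster.
requests.post(NEW_ES + "/_snapshot/kibana_backup/snapshot_1/_restore")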



Answer 4:

Here's a standalone Python script that can copy Kibana dashboards from one Elasticsearch host to another.

#!/usr/bin/env python

"""Migrate all the kibana dashboard from SOURCE_HOST to DEST_HOST.

This script may be run repeatedly, but any dashboard changes on
DEST_HOST will be overwritten if so.

"""

import urllib2, urllib, json


SOURCE_HOST = "your-old-es-host"
DEST_HOST = "your-new-es-host"


def http_post(url, data):
    request = urllib2.Request(url, data)
    return urllib2.urlopen(request).read()


def http_put(url, data):
    opener = urllib2.build_opener(urllib2.HTTPHandler)
    request = urllib2.Request(url, data)
    # urllib2 only issues GET/POST on its own, so override the verb to PUT
    request.get_method = lambda: 'PUT'
    return opener.open(request).read()


if __name__ == '__main__':
    old_dashboards_url = "http://%s:9200/kibana-int/_search" % SOURCE_HOST

    # All the dashboards (assuming we have less than 9999) from
    # kibana, ignoring those with _type: temp.
    old_dashboards_query = """{
        "size": 9999,
        "query": { "filtered": { "filter": { "type": { "value": "dashboard" } } } }
    }"""

    old_dashboards_results = json.loads(http_post(old_dashboards_url, old_dashboards_query))
    old_dashboards_raw = old_dashboards_results['hits']['hits']

    old_dashboards = {}
    for doc in old_dashboards_raw:
        old_dashboards[doc['_id']] = doc['_source']

    for id, dashboard in old_dashboards.iteritems():
        put_url = "http://%s:9200/kibana-int/dashboard/%s" % (DEST_HOST, urllib.quote(id))
        print http_put(put_url, json.dumps(dashboard))


Answer 5:

For Kibana 4, I found the default index value in the config/kibana.yml file, and it was ".kibana".

The following is the line from the Kibana configuration file:

kibana_index: ".kibana"

And here is the query that returned the required results:

curl -XGET 'http://localhost:9200/.kibana/_search?q=_type:dashboard&pretty=1'



Answer 6:

As others have said, you can find all of the objects that Kibana saves in the .kibana index within elasticsearch.

The most recent versions of Kibana 4 include an export and import feature that makes moving objects from one installation to another very easy. You can find this functionality by clicking the "settings" and then "objects" tabs.



Answer 7:

A standalone Ruby script that can copy a single dashboard, its visualizations, and their stored searches from one cluster to another is at https://github.com/jim-davis/kibana-helper-scripts. It's a bit too large to paste into this box.