ElasticSearch config/install: +Logstash +Kibana

I am brand new to Elasticsearch, trying to use it with Logstash and Kibana. I seem to have everything installed and running, but I have hit what I believe are configuration errors. My end goal is to run this locally and send in data via browser-based POST requests or, in rare cases, from static JSON files. I want to use my browser to send a few (10-15) POST requests and then see that data reflected in Kibana.
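For context, the kind of request I have in mind is a single hand-made JSON document, something like this (the index, type, and field names are placeholders I made up):

$ curl -XPOST 'http://localhost:9200/testdata/event' -d '{
  "user" : "w",
  "action" : "clicked_button",
  "timestamp" : "2015-06-15T19:55:08Z"
}'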

I am running this on Mac OS X 10.10.1

Elasticsearch appears to be installed and running:

$ curl -XGET 'http://localhost:9200/'
{
  "status" : 200,
  "name" : "Robert Kelly",
  "cluster_name" : "elasticsearch_w",
  "version" : {
    "number" : "1.5.2",
    "build_hash" : "62ff9868b4c8a0c45860bebb259e21980778ab1c",
    "build_timestamp" : "2015-04-27T09:21:06Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}

I tried to change settings in elasticsearch.yml, but they had no effect, even after a restart. For example, within my current .yml file I changed

> cluster.name: elasticsearch

to

> cluster.name: logstash

but this did not have any impact. I also don't understand why the curl request comes back with "cluster_name" : "elasticsearch_w".
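For reference, this is the entirety of the change I made (assuming config/elasticsearch.yml under the install directory I start Elasticsearch from is the file it actually reads):

# config/elasticsearch.yml
cluster.name: logstash

and after a restart I re-checked with:

$ curl 'http://localhost:9200/_cluster/health?pretty' | grep cluster_name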


$ curl localhost:9200/_nodes/process?pretty
{
  "cluster_name" : "elasticsearch_w",
  "nodes" : {
    "PBmaBG_0Tsiv4RgKV1t0jQ" : {
      "name" : "Robert Kelly",
      "transport_address" : "inet[/127.0.0.1:9300]",
      "host" : "mac",
      "ip" : "192.168.1.78",
      "version" : "1.5.2",
      "build" : "62ff986",
      "http_address" : "inet[/127.0.0.1:9200]",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 3621,
        "max_file_descriptors" : 10240,
        "mlockall" : false
      }
    }
  }
}

http://localhost:9200/_cluster/health?pretty
{
"cluster_name":"elasticsearch_w",
"status":"yellow",  // I think this is yellow because of local disk space, I have ~15 GBs to work with - can always clear more space
"timed_out":false,
"number_of_nodes":1,
"number_of_data_nodes":1,
"active_primary_shards":20,
"active_shards":20,
"relocating_shards":0,
"initializing_shards":0,
"unassigned_shards":20,
"number_of_pending_tasks":0
}

Logstash appears to be installed and running:

$ ./bin/logstash -e 'input { stdin { } } output { elasticsearch { host => localhost protocol => http } }'
Logstash startup completed
{
       "message" => "./bin/logstash -e 'input { stdin { } } output { elasticsearch { host => localhost protocol => http } }'",
      "@version" => "1",
    "@timestamp" => "2015-06-15T19:55:08.013Z",
          "type" => "human",
          "host" => "mac"
}
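I assume I could double-check that the event above actually made it into Elasticsearch with something like this (the logstash-* index name is a guess on my part):

$ curl 'http://localhost:9200/logstash-*/_search?pretty'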

Kibana appears to be installed and running:

$ bin/kibana
{
"name":"Kibana",
"hostname":"mac",
"pid":3995,
"level":30,
"msg":"Found kibana index",  // Saw another msg after this that reads ↓↓↓
"msg":"Listening on 0.0.0.0:5601",  // Not sure why this shows 0.0.0.0 instead of 127.0.0.1
"time":"2015-06-13T03:31:36.753Z",
"v":0
}

My gut tells me that I have problems with the config files, either for Elasticsearch (Pastebin link to my current elasticsearch.yml file) or for the logstash-simple.conf file (contents shown in the Logstash output above).

The default config files have a lot of comments, but most of what is there was either over my head or outside the scope of what I am trying to do. I am using the defaults for almost everything because, for now:

  • This "system" will NEVER be under load, only a few requests generated directly from my browser by hand
  • No Security is necessary here, though I would like to enable support for https requests
  • No Shards, No multi-unit Clusters, No multi-casting, No Plug-ins(for now)
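Here is my rough guess at the handful of elasticsearch.yml lines that would match those points (setting names are from the 1.x docs as I understand them, so please correct me if any of this is off):

# elasticsearch.yml - single local node, no replicas, no multicast discovery
index.number_of_shards: 1
index.number_of_replicas: 0
discovery.zen.ping.multicast.enabled: false
network.host: 127.0.0.1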

I am looking for guidance or additional troubleshooting tips. For example, the health check shows "yellow", but I cannot figure out what "yellow" means or how to resolve it, only that the statuses are green, yellow, and red. My guess is that my problem(s) stem from bad configuration, write permissions on the log directories, or other missteps during the install. I have Kibana running but get stuck on the "Settings" screen, where it wants me to configure an index pattern and then a time-field name. I know that I need to resolve the log issue first and get ES + LS to start creating these files before Kibana can use them. Thanks for any help here.

Yellow means you have unassigned replicas. Replicas can never be assigned to the same host as the primary, and given you only have one host in this cluster they will never be assigned! You can fix this with:

curl -XPUT localhost:9200/*/_settings -d '{ "index" : { "number_of_replicas" : 0 } }'
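If you want indices you create later to come up green on this single node too, you can make zero replicas the default with an index template, something along these lines (the template name and the catch-all * pattern are just examples):

curl -XPUT localhost:9200/_template/single_node_defaults -d '{
  "template" : "*",
  "settings" : { "number_of_replicas" : 0 }
}'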

KB listens to 0.0.0.0, which is any IP on the host, by default. That includes 127.0.0.1.
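If you'd rather have it listen on 127.0.0.1 only, kibana.yml has a host setting you can change, something like this (going from memory for Kibana 4.x, so check the exact keys in your config/kibana.yml):

# config/kibana.yml
host: "127.0.0.1"
port: 5601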

You'd have to post (or even better, link to a gist/pastebin/etc) your ES config to see why the cluster name is elasticsearch_w.

Can you elaborate on what you mean by being stuck on the Settings page? Have you set the index settings as expected?


Sorry, my ES config was in my original post that got deleted, I had to retype this whole thing. I had originally put the config file on PasteBin:
http://pastebin.com/6mGjMJBu

I am stuck on the "Settings" page because I cannot provide a valid index/log name pattern. I have not set the index settings as expected because: 1) I do not know what is expected, and 2) I don't know how to specify in the ES configuration where to put logs and how to name them.

How are you calling ES when you start it? Are you calling it from /Users/w/Desktop/data/elasticsearch-1.6.0/?

You should also change path.data to /Users/w/Desktop/data/elasticsearch-1.6.0/data/elasticsearch/. If you use /Users/w/Desktop/data/elasticsearch-1.6.0/data/elasticsearch/nodes/0/indices it'll just create another set of nodes/0/indices directories under that one, which is going to be confusing.
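In elasticsearch.yml that would be a single line along the lines of:

path.data: /Users/w/Desktop/data/elasticsearch-1.6.0/data/elasticsearch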

That aside, have you passed data through LS to ES?
What does the output from the _cat/indices call show?

Yes, I cd into that directory:

/Users/w/Desktop/data/elasticsearch-1.6.0

then run elasticsearch from the command prompt.

I will change the data path as suggested.

_cat/indices output:

green open tiq     5 0 0 0  575b  575b 
green open .kibana 5 0 1 0 2.9kb 2.9kb 
green open animal  5 0 0 0  575b  575b 
green open twitter 5 0 0 0  575b  575b

Right, so you have no logstash data in ES for KB to read. You need to push some in there otherwise KB cannot do its thing.

I couldn't agree more :smiley:

This is where I am hung up; I don't know where/how to specify that in my ES, LS, or Kibana config. I am missing the piece of the puzzle where I say:

  • ElasticSearch put the data in LOCATIONXXX
  • Logstash look at the data in LOCATIONXXX, transform it according to PARSING_INSTRUCTIONS_FOR_JSON, then put it in LOCATIONYYY
  • Kibana use the parsed data in indexes located at LOCATIONYYY to feed dashboards/reports/etc.

This is my rough understanding of how this all works. Reading through the installation and configuration instructions did not clear up the gray areas for me. I was hoping for some really basic instructions to encapsulate the steps listed above.

ES will handle data placement by itself, so don't worry about that part for now.

For LS, that is what the config does. In the one you posted:

input { stdin { } } output { elasticsearch { host => localhost protocol => http } }

It'll take anything from stdin and send it directly to ES via HTTP, which also handles creating the index. If you had that config in a file you would run LS with -f /path/to/file.conf.
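As a file that would look something like this; I've added an optional stdout output only so you can see events echoed in the terminal as they're processed:

# logstash-simple.conf
input {
  stdin { }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
  }
  stdout { codec => rubydebug }
}

Then start Logstash with:

./bin/logstash -f logstash-simple.conf

and every line you type into the terminal becomes an event indexed into ES.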

Then once the data is in there you point KB at the index; just use the default logstash-* index pattern and you should be good.
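To confirm there is something for KB to read, re-run the indices check after you've pushed an event or two through; you should see a date-stamped index matching that pattern (e.g. something like logstash-2015.06.15):

curl 'localhost:9200/_cat/indices?v'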