Elasticsearch network configuration

I'm new to the ELK stack and trying to get my hands dirty.

There are two machines: one with Logstash, where I want to ingest a CSV file, and another with Elasticsearch and Kibana installed, where the data should be stored.

I tried editing the elasticsearch.yml and logstash.yml files to assign the host addresses, but it is giving the error "can't assign requested address".

I couldn't find any resources related to this.

What configuration settings do I need to make?

To configure which Elasticsearch host Logstash sends to, change the hosts option of the elasticsearch output.

You need to create a configuration file that will look something like this:

input {
  # ingest your CSV file here, read: INPUT PLUGIN FILE
}
filter {
  # whatever you want to filter, etc., read: FILTER PLUGIN CSV
}
output {
  elasticsearch {
    hosts => ["YOUR_ELASTICSEARCH:9200"]
    index => "YOUR_INDEX_NAME"
    document_type => "default"
  }
}
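As a concrete sketch, a filled-in version for ingesting a CSV file might look like the following. The file path, column names, and index name are placeholder assumptions; the plugins used are the file input and csv filter referred to above:

```
input {
  file {
    path => "C:/data/mydata.csv"     # assumed path to your CSV file
    start_position => "beginning"    # read the file from the start
    sincedb_path => "NUL"            # on Windows; use "/dev/null" on Linux
  }
}
filter {
  csv {
    separator => ","
    columns => ["col1", "col2", "col3"]   # assumed column names
  }
}
output {
  elasticsearch {
    hosts => ["YOUR_ELASTICSEARCH:9200"]
    index => "my_csv_index"
    document_type => "default"
  }
}
```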

Place your config file in /etc/logstash/conf.d/ with the extension .conf.

If you want a dedicated configuration file location, you will have to edit the file /etc/logstash/pipelines.yml and add a custom path to the configuration file. For example, say you have a config file at /your/path/myconfig.conf. Make sure that Logstash can read from this location.

Then you need to add the following to /etc/logstash/pipelines.yml:

- pipeline.id: whatever_name_you_want_to_have
  path.config: "/your/path/myconfig.conf"
  pipeline.workers: 16 # by default this is the number of CPU cores
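Before restarting Logstash, you can check the configuration for syntax errors with the --config.test_and_exit flag (the paths here assume a default package install on Linux):

```
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
```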

Does that answer your question?

Thanks @pastechecker @magnusbaeck. It solved the problem.


Now it is giving a new error: "Unexpected Pool Error".

I tried all the solutions in threads related to "Unexpected Pool Error", but none of them worked.



I'm running Logstash on a Windows machine and Elasticsearch on Ubuntu. Are there any extra configurations needed?

What's the full error message?

Hey @magnusbaeck

This thread (Unexpected pool error) solved the "Unexpected Pool Error".

It is now showing the following message.

[2018-09-04T20:57:28,485][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::Elasticsearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://192.168.75.204:9200/][Manticore::SocketException] Connection refused: connect {:url=>http://192.168.75.204:9200/, :error_message=>"Elasticsearch Unreachable: [http://192.168.75.204:9200/][Manticore::SocketException] Connection refused: connect", :error_class=>"LogStash::Outputs::Elasticsearch::HttpClient::Pool::HostUnreachableError"}

[2018-09-04T20:57:28,516][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://192.168.75.204:9200/][Manticore::SocketException] Connection refused: connect", :class=>"LogStash::Outputs::Elasticsearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}

Well, is http://192.168.75.204:9200 actually accessible to the Logstash host?
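A common cause of "Connection refused" from a remote machine is that Elasticsearch is bound only to localhost. As a sketch (assuming a single-node setup), these elasticsearch.yml settings make it listen on other interfaces:

```
network.host: 0.0.0.0   # or the machine's LAN IP, e.g. 192.168.75.204
http.port: 9200
```

Note that binding to a non-loopback address makes Elasticsearch enforce its bootstrap checks, so it may refuse to start until those pass.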

Yes, it is. Echo request is working.
Windows firewall is disabled too.

Echo request is working.

You mean ping?

Can you verify the connectivity using TCP?
Use telnet, curl, wget, or tcptraceroute, and take a traffic sample with tcpdump ("tcpdump -nni any host 192.168.75.203 and port 9200 and tcp") to verify it.
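The TCP check above can also be scripted; here is a minimal cross-platform sketch that works from both the Windows and Ubuntu machines (the address and port are taken from the log messages above; adjust them to your setup):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The address from the error logs; adjust to your Elasticsearch host.
print(can_connect("192.168.75.204", 9200))
```

If this prints False while ping works, the host is reachable but nothing is accepting TCP connections on that port.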

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.