Is Logstash working?

Hi, I'm new to ELK and Docker (learning curve).

I'm struggling to get data into Elasticsearch. Using a .csv as a test.

My docker-compose file:

version: '2'

services:

  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk

  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "6000:6000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:

  elk:
    driver: bridge

The logs from logstash:

date stream content
2018-04-22 18:13:13 stdout [2018-04-22T18:13:13,459][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
2018-04-22 18:13:12 stdout [2018-04-22T18:13:12,906][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7f2631a run>"}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,705][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.1.231:9200"]}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,596][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,548][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,540][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,539][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,521][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.1.231:9200/"}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,504][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.1.231:9200/, :path=>"/"}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,502][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.1.231:9200/]}}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,431][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//192.168.1.231:9200"]}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,359][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,343][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
2018-04-22 18:13:11 stdout [2018-04-22T18:13:11,141][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.1.231:9200/"}
2018-04-22 18:13:10 stdout [2018-04-22T18:13:10,563][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.1.231:9200/, :path=>"/"}
2018-04-22 18:13:10 stdout [2018-04-22T18:13:10,535][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.1.231:9200/]}}
2018-04-22 18:13:09 stdout [2018-04-22T18:13:09,151][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
2018-04-22 18:12:54 stdout [2018-04-22T18:12:54,633][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
2018-04-22 18:12:53 stdout [2018-04-22T18:12:53,548][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.2"}
2018-04-22 18:12:51 stdout [2018-04-22T18:12:51,857][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
2018-04-22 18:12:50 stdout [2018-04-22T18:12:50,230][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
2018-04-22 18:12:50 stdout [2018-04-22T18:12:50,177][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
2018-04-22 18:12:49 stdout Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties

I presume that's working as expected?
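One quick way to sanity-check whether the pipeline is actually reading and emitting events is Logstash's monitoring API on port 9600 (its startup is visible in the logs above). A minimal sketch, run from inside the container since the compose file above doesn't publish port 9600 (and assuming curl is available in the image):

# per-pipeline event counters; "in"/"out" stuck at 0 means the input produces nothing
docker-compose exec logstash curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'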

192.168.1.231:9200/_cluster/health?pretty=true
{
"cluster_name" : "docker-cluster",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0

http://192.168.1.231:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size

(yep - blank)

Any help or guidance appreciated.

And what are the contents of ./logstash/pipeline?

logstash.conf

input {
  file {
    path => "/usr/share/logstash/Zur.csv"
    start_position => "beginning" # read from the beginning of the file
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => ","
    columns => ["snhID", "SnhAlertedAt", "LastChangedAt", "Severity", "Account", "ElementName", "ClassDisplayName", "ClassName", "EventName", "EventText", "Owner", "Ticket", "RootAction"]
  }
}

# Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => "192.168.1.231:9200"
    manage_template => false
    index => "csv_index"
  }

  stdout { codec => rubydebug }
}
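Since the output section also includes stdout { codec => rubydebug }, every event that makes it through the pipeline is printed to the container's stdout. A simple way to watch for that (assuming the compose service is named logstash, as above):

# each ingested CSV row should show up here as a rubydebug-formatted event
docker-compose logs -f logstash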

Is /usr/share/logstash/Zur.csv really available inside the Docker container?
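A quick way to check, assuming the service name from the compose file above:

# should list the file; "No such file or directory" means it isn't in the container
docker-compose exec logstash ls -l /usr/share/logstash/Zur.csv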

Hi Magnus. That was exactly my problem. It wasn't mounted, so the file wasn't inside the Docker container.

Copied the file into the container and it worked.
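For anyone finding this later: rather than copying the file in by hand (which won't survive a container rebuild), it can be bind-mounted in the compose file. A minimal sketch, assuming the CSV sits at ./logstash/Zur.csv on the host (that host-side path is an assumption):

logstash:
  volumes:
    # ... existing config/pipeline mounts ...
    - ./logstash/Zur.csv:/usr/share/logstash/Zur.csv:ro  # hypothetical host path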

Kicking myself.

Thanks Magnus
