Logstash not sending to Elasticsearch if it receives input from Filebeat

Hello, I am creating the Elastic Stack using the official Docker images. My problem is that Logstash will not send any data to Elasticsearch when I start it in a docker-compose file.

This is my docker-compose file:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    environment:
      - cluster.name=elastic-cluster
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - "discovery.type=single-node"
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elastic-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    expose:
      - "9200"
    networks:
      - elastic-net
  kibana:
    depends_on:      
      - "elasticsearch"
    image: docker.elastic.co/kibana/kibana:7.6.0
    container_name: kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    networks:
      - elastic-net
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.6.0
    container_name: filebeat
    user: root 
    volumes:
      - "./filebeat_log.yml:/usr/share/filebeat/filebeat.yml"
      - "./pl_agent_test.log:/var/log/myLogs/sample.log"
      - "/var/lib/docker/containers:/usr/share/filebeat/dockerlogs:ro"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - elastic-net
  logstash:
    depends_on:
      - "elasticsearch"
      - "kibana"
    image: docker.elastic.co/logstash/logstash:7.6.0
    container_name: logstash
    ports:
      - 5044:5044
    expose: 
      - "5044"
    volumes:
      - "./logstash.conf:/usr/share/logstash/logstash.conf"
      - "./logstash.yml:/usr/share/logstash/config/logstash.yml"
      - "~/pipeline/:/usr/share/logstash/pipeline/"
    networks:
      - elastic-net
    

volumes:
  elastic-data:

networks:
  elastic-net:
    driver: bridge

Looking at the logs, I can see that Logstash receives the data from Filebeat, but then it seems to just stop.
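To confirm that nothing was actually indexed, I can list the indices from the host machine (a quick sketch, assuming the 9200 port published in the compose file above) and look for the index named in my logstash.conf below:

# list all indices; a successful bulk request from Logstash should create
# "logstash_pl_agent_log_testing_from_filebeat"
curl -s 'http://localhost:9200/_cat/indices?v'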

This is my logstash.conf file:

input {
	beats {
		port => 5044
	}
}

output {
	
	stdout{ codec => rubydebug }
	
	elasticsearch { 
		hosts => ["elasticsearch:9200"]
		index => "logstash_pl_agent_log_testing_from_filebeat"
	}
	
}

This is the output when receiving data from Filebeat, for one of the events (they all look alike):

logstash         | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
logstash         | WARNING: An illegal reflective access operation has occurred
logstash         | WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.9.0.jar) to method sun.nio.ch.NativeThread.signal(long)
logstash         | WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
logstash         | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
logstash         | WARNING: All illegal access operations will be denied in a future release
logstash         | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash         | [2020-03-02T16:37:49,978][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
logstash         | [2020-03-02T16:37:50,018][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
logstash         | [2020-03-02T16:37:50,618][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.6.0"}
logstash         | [2020-03-02T16:37:50,665][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"0fcdfb15-27df-464e-aeb3-732385abeb52", :path=>"/usr/share/logstash/data/uuid"}
logstash         | [2020-03-02T16:37:52,701][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash         | [2020-03-02T16:37:53,168][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash         | [2020-03-02T16:37:53,283][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
logstash         | [2020-03-02T16:37:53,289][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash         | [2020-03-02T16:37:53,635][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
logstash         | [2020-03-02T16:37:53,637][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
logstash         | [2020-03-02T16:37:55,122][INFO ][org.reflections.Reflections] Reflections took 65 ms to scan 1 urls, producing 20 keys and 40 values 
logstash         | [2020-03-02T16:37:55,906][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
logstash         | [2020-03-02T16:37:55,911][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x59f5550d run>"}
logstash         | [2020-03-02T16:37:57,967][INFO ][logstash.inputs.beats    ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
logstash         | [2020-03-02T16:37:57,988][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
logstash         | [2020-03-02T16:37:58,108][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
logstash         | [2020-03-02T16:37:58,234][INFO ][org.logstash.beats.Server][main] Starting server on port: 5044
logstash         | [2020-03-02T16:37:59,529][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"ebdd88635541942b096027ed79be84efc3dd562a5f0e1b78fca83c7b5c9a1a7c", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_6257eed9-6378-40ad-8ea1-514176043c7b", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
logstash         | [2020-03-02T16:37:59,634][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash         | [2020-03-02T16:37:59,680][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash         | [2020-03-02T16:37:59,706][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] ES Output version determined {:es_version=>7}
logstash         | [2020-03-02T16:37:59,707][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
logstash         | [2020-03-02T16:37:59,797][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
logstash         | [2020-03-02T16:37:59,803][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x5d96429f run>"}
logstash         | [2020-03-02T16:37:59,889][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash         | [2020-03-02T16:37:59,904][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
logstash         | [2020-03-02T16:38:00,201][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
logstash         | /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
logstash         | {
logstash         |           "host" => {
logstash         |         "name" => "f0a5cbb124f3"
logstash         |     },
logstash         |        "message" => "2019-12-02 09:47:46,024 - root - INFO - There are no changes need to apply.",
logstash         |          "input" => {
logstash         |         "type" => "log"
logstash         |     },
logstash         |          "agent" => {
logstash         |              "version" => "7.6.0",
logstash         |         "ephemeral_id" => "77efeda1-9e66-4819-b6b0-34bcf53a77e8",
logstash         |                 "type" => "filebeat",
logstash         |             "hostname" => "f0a5cbb124f3",
logstash         |                   "id" => "878ece7c-fe1c-4af8-9451-b96ad982b8e8"
logstash         |     },
logstash         |            "log" => {
logstash         |           "file" => {
logstash         |             "path" => "/var/log/myLogs/sample.log"
logstash         |         },
logstash         |         "offset" => 152
logstash         |     }

I find it very confusing that it does not work, because when I choose not to create Logstash in the docker-compose file and instead use this command:
docker run -h logstash --name logstash --net=project_elastic-net --link Container_ID:elasticsearch -it --rm -v "$PWD":/config-dir docker.elastic.co/logstash/logstash:7.6.0 -f /config-dir/logstash.conf

then I can set the input to stdin{}, and Logstash will immediately send the information to Elasticsearch.

This is the conf file for using stdin{}:

input {
	stdin{}
}

output {
	
	stdout{ codec => rubydebug }
	
	elasticsearch { 
		hosts => ["elasticsearch:9200"]
		index => "logstash_pl_agent_log_testing_from_filebeat"
	}
}

This is the output from stdin{}:

{
      "@version" => "1",
    "@timestamp" => 2020-03-02T15:59:54.614Z,
       "message" => "Test_1",
          "host" => "logstash"
}

I can then also see this output in Kibana, with the correct index assigned to it.
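For anyone wanting to double-check outside Kibana, the index can also be queried directly (a sketch, again assuming the 9200 port published in the compose file):

# fetch one document from the test index to verify the stdin event arrived
curl -s 'http://localhost:9200/logstash_pl_agent_log_testing_from_filebeat/_search?pretty&size=1'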

After doing some more testing, I was able to find this error message:

[2020-03-02T20:07:15,196][WARN ][logstash.outputs.elasticsearch][main] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash_pl_agent_log_testing_from_filebeat", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x201a8ded>], :response=>{"index"=>{"_index"=>"logstash_pl_agent_log_testing_from_filebeat", "_type"=>"_doc", "_id"=>"Rw3cnHABX8XSthBzGerM", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text] in document with id 'Rw3cnHABX8XSthBzGerM'. Preview of field's value: '{name=86ce4726397f}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:331"}}}}}

I got one of these for each line of data that Filebeat is trying to send to Logstash.

If the line index => "logstash_pl_agent_log_testing_from_filebeat" is confusing, it is simply how I have named the index for these test runs in the logstash.conf file. See the previous post for the complete file.

With your Beat, you have [host] as an object that contains a field called name.

logstash         |           "host" => {
logstash         |         "name" => "f0a5cbb124f3"
logstash         |     },

I suspect there is something else feeding Elasticsearch with [host] as a string; in fact, your stdin{} test above did exactly that against the same index (its output shows "host" => "logstash", a plain string). A generator, file, or tcp input in logstash would also do that. Probably others too.
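One way to confirm this (a sketch, assuming Elasticsearch is reachable on localhost:9200 as published in your compose file) is to ask Elasticsearch how it mapped [host] in that index:

# show the mapping of the test index; if [host] comes back as "type": "text",
# the index was created by a document where host was a plain string
curl -s 'http://localhost:9200/logstash_pl_agent_log_testing_from_filebeat/_mapping?pretty'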

A field in elasticsearch cannot be an object on some documents and a string on others. That is exactly what the "Can't get text on" error message is trying to tell you.
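If the index only contains test data, the simplest fix (a sketch, not the only option) is to delete it, so that the next Filebeat event recreates it with [host] mapped as an object:

# delete the conflicting test index; Logstash will recreate it on the next event
curl -X DELETE 'http://localhost:9200/logstash_pl_agent_log_testing_from_filebeat'

Alternatively, you can make the shapes consistent on the Logstash side, for example by renaming the Beats host object before the elasticsearch output. The target name [beat_host] is just a hypothetical choice:

filter {
	mutate {
		# hypothetical field name: move the Beats host object aside so it no
		# longer collides with the string-typed [host] already in the index
		rename => { "[host]" => "[beat_host]" }
	}
}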
