HTM / Logstash Configuration / Pipeline Config

Hello everyone,

I would like to know if there is a thread somewhere about creating a pipeline (.conf file) with the input/filter/output to ingest HTM files, or whether using Filebeat I could get some view in Kibana... I am new to these tools, and my log files are HTM.

I appreciate all comments that you can share!!

THANKS!

You have HTML logs and want help with creating Filebeat and Logstash configurations to parse them? It's not very clear what you're asking for.

Hi Magnus, let me explain in more detail...

I am trying to get a dashboard (or view) in Kibana for a bunch of logs (HTM files). Those files describe job failures in a provisioning process. The action I want to perform is to load that data with Logstash or Filebeat (here is my doubt): should I use the modules, or is that not necessary and a pipeline alone (a Logstash file.conf with input/filter/output) is enough?

The problem I am seeing is that the HTM file contains this info, and I am not sure whether in the input section of my file.conf I should use the file option to point at where the HTM files are; in the filter section, whether to use grok, fingerprint, or some other option to filter the data of that HTM file; and I don't know which option I could use in the output section...

I understand that once I have a correct pipeline I can skip the modules and will get something in Kibana, but that is where I am lost... so could you please help me to know whether I am on the right track, or whether I have to convert the HTM files to another format (like JSON) before adding them to Logstash (or Filebeat) and using the ELK stack?

Adding an example. This is an extract of the full data (from the HTM file); in fact I would like to take some info from that file, like "domain", "service", "compute", etc.:

FILENAME: 30742430.htm

Job 30742430

|Domain|Z14JHV5895044878|
| --- | --- |
|Service|DgA0630US2Z14301222|
|Namespace|dbaas|
|Service Type|dbaas|
|Compute Site|US006_Z16|
|Username|c9qa-infra_ww@oracle.com|
|Operation|create-dbaas-service|
|Status|Failed|
|Sub Status||
|Create Time|2018-06-30T12:23:46.092+00:00|
|Start Time|2018-06-30T12:31:33.936+00:00|
|End Time|2018-06-30T12:39:22.174+00:00|
|Update Time|2018-06-30T12:39:22.184+00:00|
|Job Info||
|Request Parameters|{{{trial=false, enableListenerPort=false, description=Description For Test Service, subscriptionType=HOURLY, dbConsolePort=1158, disasterRecovery=false, listenerPort=1521, cloudStorageContainer=https://us2.storage.oraclecloud.com/v1/Storage-Z14JHV5895044878/dbbackup, serviceInstance=DgA0630US2Z14301222, server_base_uri=https://jaas.oraclecloud.com:443/paas/service/dbcs/, ibkupOnPremise=false, operationName=create-dbaas-service, backupDestination=BOTH, noRetry=false, goldenGate=false, createStorageContainerIfMissing=false, version=11.2.0.4, serviceVersion=11.2.0.4, serviceEntitlementId=14099, timezone=UTC, isRac=false, usableStorage=50, isBYOL=false, sid=ORCL, emExpressPort=5500, computeSiteName=US006_Z16, useHighPerformanceStorage=false, noRollback=false, sla=NONE, assignPublicIP=true, ibkup=no, useOAuthForStorage=false, edition=EE, tenant=Z14JHV5895044878, hdg=false, provisioningTimeout=180, cloudStorageUser=c9qa-infra_ww@oracle.com, level=PAAS, count=2, serviceType=dbaas, enableNotification=false, failoverDatabase=false, serviceName=DgA0630US2Z14301222, identity_domain_id=Z14JHV5895044878, charset=AL32UTF8, tags=[], vmPublicKeyText=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCxIdPC1Nh+DjSHLlLjum/sqZHjgG4R/Qftc7b8FBbSy3fVp7HNYetPimEPgcz5D6kBHCmaweQTj2VhhxrKEFkfRG43jdWg9ycnrCkvfkqIhBGj9j5UcBuWnPpwR9qN7KTsagrUNx4EqdGxbW7Yda38hIN5vREip8rnc/IQdGEg/8waD2oCkDb2xxSAmJ4uwGW+QPt7DGOPRH7+8PSeoeuTD2A2N2leA7pwIhWzHe/0jm4I85Gj98wD6dHfOOr1GbPiT4vPc/qJeMpelEddjsQ88buiIHZ0AOz9lQEskJ4gHAgohSz5g7x5HUz9x6Tc2fW2chFJgt4T6VYAv79wZA4f root@395b5e41f127, namespace=dbaas, shape=oc3, ncharset=AL16UTF16}}}|
|Supplemental Logs|none|
|Summary|Job <30742430> v41, action=handleFailure, Failed, namespace=dbaas, version=11.2.0.4, operation=create-dbaas-service, cleanupActionIndex=1, retryCount:0, jobRetryCount:1, jobRetryWaitTime:0, created: 2018-06-30T12:23:46.092+0000, started: 2018-06-30T12:31:33.936+0000, to retry: 2018-06-30T12:31:31.690+0000, failingStartTime: 2018-06-30T12:35:01.969+0000, domain:Z14JHV5895044878, instance:DgA0630US2Z14301222, wm:SM-MS-chr302ru25.usdc2.oracleclo, owner:c9qa-infra_ww@oracle.com FAILED CURRENT JOB 30742430: action: startServices FAILED CHILD JOB 30703665: action: awaitResourcesForVMs code: PSM-COMPUTE-ERROR-004, message: Unable to start the Compute resources... The orchestration /Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/db_1/vm-1/resources is in 'error' state since Sat Jun 30 2018 12:24:51:000 OPlan [boot]: No suitable pools found to place volume "/Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/db_1/vm-1/boot" OPlan [redo]: No suitable pools found to place volume "/Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/db_1/vm-1/redo" OPlan [fra]: No suitable pools found to place volume "/Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/db_1/vm-1/fra" OPlan [bits]: No suitable pools found to place volume "/Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/db_1/vm-1/bits" OPlan [data]: No suitable pools found to place volume "/Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/db_1/vm-1/data" FAILED CHILD JOB 30703716: action: awaitResourcesForVMs code: PSM-COMPUTE-ERROR-004, message: Unable to start the Compute resources... 
The orchestration /Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/1/db_1/vm-1/resources is in 'error' state since Sat Jun 30 2018 12:32:55:000 OPlan [boot]: No suitable pools found to place volume "/Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/1/db_1/vm-1/boot" OPlan [redo]: No suitable pools found to place volume "/Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/1/db_1/vm-1/redo" OPlan [fra]: No suitable pools found to place volume "/Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/1/db_1/vm-1/fra" OPlan [bits]: No suitable pools found to place volume "/Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/1/db_1/vm-1/bits" OPlan [data]: No suitable pools found to place volume "/Compute-Z14JHV5895044878/c9qa-infra_ww@oracle.com/dbaas/DgA0630US2Z14301222/1/db_1/vm-1/da|

# Logs

## setTags

v5, SM-MS-chr302ru25.usdc2.oracleclo, SM-MS-chr302ru25.usdc2.oracleclo-1530301723862, appsmser, chr302ru25.usdc2.oraclecloud.com, sm version=18.2.6-551; SHA=91a8de2e9fe43e93e240a0a861209429d19a3827; build.date=2018-06-22 02:13 2018-06-30T12:23:51.147+00:00 [INFO]:executing action: setTags 2018-06-30T12:23:51.147+00:00 [INFO]:Job Attributes: 2018-06-30T12:23:51.155+00:00 [INFO]:finished action: setTags 2018-06-30T12:23:51.156+00:00 [INFO]:Action returned status: SUCCESS

These kinds of multiline log files are a bit tricky to process, but I think you'll want to use a file input with a multiline codec that joins every line with the previous line, and then set the codec's auto_flush_interval option to a low number of seconds (e.g. 5) so that it doesn't wait forever before flushing the log contents into an event.
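A minimal sketch of such an input (the path is a placeholder to adapt to your own files):

```
input {
  file {
    # Hypothetical path; point this at your own HTM files.
    path => "/var/log/jobs/*.htm"
    start_position => "beginning"
    codec => multiline {
      # "^" matches every line, so each line is joined to the previous
      # one; the whole file becomes a single event, flushed after 5 s
      # of inactivity by auto_flush_interval.
      pattern => "^"
      what => "previous"
      auto_flush_interval => 5
    }
  }
}
```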

Then apply e.g. a grok or dissect filter to parse the message and extract the fields you want, and finish off with an elasticsearch output to send everything to Elasticsearch.
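For example, assuming the HTM keeps the `<th>label</th><td>value</td>` row structure shown above, a sketch (field name and hosts value are illustrative, not from your setup):

```
filter {
  # Illustrative: pull one field out of an HTML table row.
  grok {
    match => {
      "message" => "<th>Status</th><td>%{DATA:status}</td>"
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}
```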

Hei Magnus!

I am trying to execute this Logstash command: /usr/share/logstash/bin/logstash --path.settings=/etc/logstash/ --path.config=/etc/logstash/conf.d/html_files.conf --path.data=/usr/share/logstash/data to load the pipeline into Elasticsearch and get a view or dashboard in Kibana... but I am getting this error (in /var/log/logstash/logstash-plain.log):

[ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::File start_position=>"beginning", path=>["/home/opc/create-dbaas-service/30742430.htm"], codec=>LogStash::Coh_interval=5, negate=>true, enable_metric=>true, charset=>"UTF-8", multiline_tag=>"multiline", max_lines=>500, max_bytes=>10485760>, id=>c=>true, stat_interval=>1.0, discover_interval=>15, sincedb_write_interval=>15.0, delimiter=>"\n", close_older=>3600.0, mode=>"tail", filent=>140737488355327, file_sort_by=>"last_modified", file_sort_direction=>"asc">
Error: Permission denied - Permission denied
Exception: Errno::EACCES
Stack: org/jruby/RubyFile.java:1172:in `utime'
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/fileutils.rb:1164:in `block in touch'
org/jruby/RubyArray.java:1734:in `each'

and this is my pipeline file.conf:

# THIS FILE HAS THE INPUT + FILTER AND OUTPUT OF HTML FILES (FOR THE BUGS)

# THIS IS THE INPUT CONFIGURATION
input {
  file {
    path => "/home/opc/create-dbaas-service/30742430.htm"
    start_position => "beginning"
    ignore_older => 0
    codec => multiline {
      pattern => "Job"
      negate => true
      what => "next"
      auto_flush_interval => 5
    }
  }
}

# THIS IS THE FILTER CONFIGURATION
filter {
  grok {
    match => { "message" => "<h1>%{DATA:Job_word}%{SPACE}%{NUMBER:job_id}</h1><table><tr><th>%{DATA:Domain_word}</th><td>%{DATA:Domain_id}</td></tr><tr><th>%{DATA:Service_word}</th><td>%{DATA:Service_id}</td></tr><tr><th>%{DATA:Namespace_word}</th><td>%{DATA:Namespace_type}</td></tr><tr><th>%{DATA:ServiceType_word}</th><td>%{DATA:ServiceType_id}</td></tr><tr><th>%{DATA:ComputeSite_word}</th><td>%{DATA:ComputeSite_id}</td></tr><tr><th>%{DATA:Username_word}</th><td>%{DATA:Username_value}</td></tr><tr><th>%{DATA:Operation_word}</th><td>%{DATA:Operation_value}</td></tr><tr><th>%{DATA:Status_word}</th><td>%{DATA:Status_value}</td></tr><tr><th>%{DATA:SubStatus_word}</th><td>%{DATA:SubStatus_value}</td></tr><tr><th>%{DATA:CreateTime_word}</th><td>%{TIMESTAMP_ISO8601}</td></tr><tr><th>%{DATA:StartTime_word}</th><td>%{TIMESTAMP_ISO8601}</td></tr><tr><th>%{DATA:EndTime_word}</th><td>%{TIMESTAMP_ISO8601}</td></tr><tr><th>%{DATA:UpdateTime_word}</th><td>%{TIMESTAMP_ISO8601}</td></tr><tr><th>%{DATA:JobInfo_word}</th><td>%{DATA:JobInfo_text}</td></tr><tr><th>%{DATA:RequestParameters_word}</th><td>%{DATA:RequestParameters_log}</td></tr><tr><th>%{DATA:SupplementalLogs_word}</th><td>%{DATA:SupplementalLogs_value}</td></tr><tr><th>%{DATA:Summary_word}</th><td><pre>%{DATA:Summary_log}</pre></td></tr></table>"
      }
  }
}

# THIS IS THE OUTPUT CONFIGURATION
output {
    elasticsearch {}
    stdout {}
}

as you see... I defined a local directory (where my HTML logs are):

/home/opc/create-dbaas-service/

and I am testing the pipelines with a specific file:

30742430.htm

these are the permissions on that directory:

[root@elk640 logstash]# ll /home/opc/
total 84
drwxr-xr-x 2 root root 4096 Aug 28 20:45 create-dbaas-service

Could you please advise me about what I am doing wrong?

Thanks

The permissions of the /home and /home/opc directories (and the file itself, of course) also matter.

But I'm not sure that's the problem. Please post the full error message with the full stacktrace. Format everything as preformatted text so it doesn't get mangled.
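The mechanics can be sketched in a sandbox: a directory without execute (search) permission blocks access to files inside it, and granting `o+x` on each directory in the path plus `o+r` on the file fixes it. The temp dir below is a stand-in for /home/opc/create-dbaas-service; to test the real path, try reading the file as the user the Logstash service runs as (commonly `logstash`).

```shell
# Sandbox demo of the permission fix (stand-in paths, not your real ones).
base=$(mktemp -d)
mkdir -p "$base/opc/create-dbaas-service"
echo "<html>Job 1</html>" > "$base/opc/create-dbaas-service/job.htm"

# Lock down the parent dir, as a restrictive /home/opc might be:
chmod 700 "$base/opc"

# The fix: execute (search) permission on every directory in the path,
# read permission on the file itself.
chmod o+x "$base" "$base/opc" "$base/opc/create-dbaas-service"
chmod o+r "$base/opc/create-dbaas-service/job.htm"

ls -l "$base/opc/create-dbaas-service/job.htm"
rm -rf "$base"
```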

I changed the permissions and, after restarting Logstash and re-executing the command, this is the output (in another comment):

Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-08-29T20:13:57,682][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-29T20:13:58,543][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2018-08-29T20:14:01,691][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", hosts=>[http://localhost:9200], sniffing=>false, manage_template=>false, id=>"2196aa69258f6adaaf9506d8988cc76ab153e658434074dcf2e424e0aca0d381", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_2e710b43-a229-4892-843e-7a731f325f51", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-08-29T20:14:01,801][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2018-08-29T20:14:05,906][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-29T20:14:06,259][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-29T20:14:06,269][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-29T20:14:06,275][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2018-08-29T20:14:06,276][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2018-08-29T20:14:06,516][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-29T20:14:06,519][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2018-08-29T20:14:06,615][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-29T20:14:06,618][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-29T20:14:06,619][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-29T20:14:06,662][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-29T20:14:06,678][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1"]}
[2018-08-29T20:14:06,733][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-08-29T20:14:06,749][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-29T20:14:06,761][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-29T20:14:06,961][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-29T20:14:06,962][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-29T20:14:06,974][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-29T20:14:06,977][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>6}
[2018-08-29T20:14:06,977][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-29T20:14:07,278][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x51eb4767 run>"}
[2018-08-29T20:14:07,642][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_de4f11e64849498051d727e3231a2185", :path=>["/home/logs/create-dbaas-service/30742430.htm"]}
[2018-08-29T20:14:07,688][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x43a8b3b4 sleep>"}
[2018-08-29T20:14:07,759][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2018-08-29T20:14:07,798][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[2018-08-29T20:14:07,814][INFO ][logstash.inputs.metrics  ] Monitoring License OK
[2018-08-29T20:14:08,586][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9601}

Checking the Discover section in Kibana, I cannot see anything from Logstash:

(screenshot: the Kibana Discover page showing no incoming data)

Any idea?

Thanks!

new update... after restarting Kibana, Elasticsearch and Logstash, and re-executing the command to load the pipeline, I received a new error:

Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-08-29T21:47:18,542][INFO ][logstash.configmanagement.bootstrapcheck] Using Elasticsearch as config store {:pipeline_id=>["input", "filter", "output", "html_files"], :poll_interval=>"5000000000ns"}
[2018-08-29T21:47:18,584][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: You must set the password using the "xpack.management.elasticsearch.password" in logstash.yml>, :backtrace=>["/usr/share/logstash/x-pack/lib/config_management/elasticsearch_source.rb:39:in `initialize'", "/usr/share/logstash/x-pack/lib/config_management/hooks.rb:41:in `after_bootstrap_checks'", "org/logstash/execution/EventDispatcherExt.java:69:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:293:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-08-29T21:47:18,603][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[root@elk640 opc]# vi /etc/logstash/logstash.yml
[root@elk640 opc]# systemctl restart kibana.service
[root@elk640 opc]# systemctl restart elasticsearch.service
[root@elk640 opc]#
[root@elk640 opc]# systemctl restart logstash.service
[root@elk640 opc]# /usr/share/logstash/bin/logstash  --path.settings=/etc/logstash/ --path.config=/etc/logstash/conf.d/html_files.conf --path.data=/usr/share/logstash/data^C
[root@elk640 opc]# /usr/share/logstash/bin/logstash  --path.settings=/etc/logstash/ --path.data=/usr/share/logstash/data
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-08-29T22:03:49,330][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2018-08-29T22:03:51,527][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, input, filter, output at line 42, column 2 (byte 2251) after ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:157:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:22:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:309:in `block in converge_state'"]}
[2018-08-29T22:03:52,525][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", hosts=>[http://localhost:9200], sniffing=>false, manage_template=>false, id=>"2196aa69258f6adaaf9506d8988cc76ab153e658434074dcf2e424e0aca0d381", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_18c8bccc-0915-488e-a29f-48fdab34cdb3", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-08-29T22:03:52,601][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2018-08-29T22:03:53,423][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-29T22:03:53,438][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-29T22:03:53,679][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-29T22:03:53,795][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-29T22:03:53,799][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-29T22:03:53,893][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-08-29T22:03:54,204][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-29T22:03:54,204][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-29T22:03:54,215][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-29T22:03:54,220][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>6}
[2018-08-29T22:03:54,220][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-29T22:03:54,441][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x27791442 run>"}
[2018-08-29T22:03:54,547][INFO ][logstash.inputs.metrics  ] Monitoring License OK
[2018-08-29T22:03:54,839][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-08-29T22:03:58,428][ERROR][logstash.agent           ] Internal API server error {:status=>500, :request_method=>"GET", :path_info=>"/_node/stats", :query_string=>"", :http_version=>"HTTP/1.1", :http_accept=>nil, :error=>"Unexpected Internal Error", :class=>"LogStash::Instrument::MetricStore::MetricNotFound", :message=>"For path: events. Map keys: [:pipelines, :reloads]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:225:in `block in get_recursively'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:224:in `get_recursively'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:235:in `block in get_recursively'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:224:in `get_recursively'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:95:in `block in get'", "org/jruby/ext/thread/Mutex.java:148:in `synchronize'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:94:in `get'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:108:in `get_shallow'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:157:in `block in extract_metrics'", "org/jruby/RubyArray.java:1734:in `each'", "org/jruby/RubyEnumerable.java:936:in `inject'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:133:in `extract_metrics'", "/usr/share/logstash/logstash-core/lib/logstash/api/service.rb:29:in `extract_metrics'", "/usr/share/logstash/logstash-core/lib/logstash/api/commands/base.rb:22:in `extract_metrics'", "/usr/share/logstash/logstash-core/lib/logstash/api/commands/stats.rb:42:in `events'", "/usr/share/logstash/logstash-core/lib/logstash/api/modules/node_stats.rb:35:in `events_payload'", 
"/usr/share/logstash/logstash-core/lib/logstash/api/modules/node_stats.rb:21:in `block in GET /?:filter?'", "org/jruby/RubyMethod.java:111:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1611:in `block in compile!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:975:in `block in route!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:994:in `route_eval'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:975:in `block in route!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1015:in `block in process_route'", "org/jruby/RubyKernel.java:1114:in `catch'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1013:in `process_route'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:973:in `block in route!'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:972:in `route!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1085:in `block in dispatch!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1067:in `block in invoke'", "org/jruby/RubyKernel.java:1114:in `catch'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1067:in `invoke'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1082:in `dispatch!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:907:in `block in call!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1067:in `block in invoke'", "org/jruby/RubyKernel.java:1114:in `catch'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1067:in 
`invoke'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:907:in `call!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:895:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/xss_header.rb:18:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/path_traversal.rb:16:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/json_csrf.rb:18:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/base.rb:49:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/base.rb:49:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/frame_options.rb:31:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-1.6.6/lib/rack/nulllogger.rb:9:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-1.6.6/lib/rack/head.rb:13:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:182:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:2013:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-1.6.6/lib/rack/urlmap.rb:66:in `block in call'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-1.6.6/lib/rack/urlmap.rb:50:in `call'", "/usr/share/logstash/logstash-core/lib/logstash/api/rack_app.rb:57:in `call'", "/usr/share/logstash/logstash-core/lib/logstash/api/rack_app.rb:31:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-1.6.6/lib/rack/builder.rb:153:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:557:in `handle_request'", 
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:404:in `process_client'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:270:in `block in run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/thread_pool.rb:106:in `block in spawn_thread'"]}
[2018-08-29T22:03:58,445][ERROR][logstash.agent           ] API HTTP Request {:status=>500, :request_method=>"GET", :path_info=>"/_node/stats", :query_string=>"", :http_version=>"HTTP/1.1", :http_accept=>nil}
[2018-08-29T22:04:00,874][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x27791442 run>"}
[root@elk640 opc]# tail -f /var/log/logstash/logstash-plain.log
[2018-08-29T22:03:58,445][ERROR][logstash.agent           ] API HTTP Request {:status=>500, :request_method=>"GET", :path_info=>"/_node/stats", :query_string=>"", :http_version=>"HTTP/1.1", :http_accept=>nil}
[2018-08-29T22:04:00,874][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x27791442 run>"}
[2018-08-29T22:04:18,762][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/usr/share/logstash/data/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-08-29T22:04:18,782][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-08-29T22:04:45,992][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/usr/share/logstash/data/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[2018-08-29T22:04:46,014][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2018-08-29T22:05:11,294][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/usr/share/logstash/data/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:447:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:229:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:140:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:139:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
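The FATAL entries above state that the user running Logstash cannot write to its data directory (here /usr/share/logstash/data); a common cause is a mismatch between the user that first created that directory and the user now launching Logstash. One way around it, assuming the default package layout where /var/lib/logstash belongs to the logstash service user, is to point path.data there in logstash.yml; fixing the ownership of /usr/share/logstash/data for the launching user works as well:

```yaml
# /etc/logstash/logstash.yml (assumed default package install)
# Use the package data directory, which the `logstash` service
# user can write to, instead of /usr/share/logstash/data.
path.data: /var/lib/logstash
```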

You're listing logs from several invocations of Logstash, and it's not entirely clear which problem currently remains. To avoid confusion and wasted time, let's focus on one problem at a time.

Hi Magnus, this is the current problem: when I start Logstash and check its logs, I receive these error messages:

[2018-08-30T16:50:54,933][ERROR][logstash.agent           ] Internal API server error {:status=>500, :request_method=>"GET", :path_info=>"/_node/stats", :query_string=>"", :http_version=>"HTTP/1.1", :http_accept=>nil, :error=>"Unexpected Internal Error", :class=>"LogStash::Instrument::MetricStore::MetricNotFound", :message=>"For path: events. Map keys: [:pipelines, :reloads]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:225:in `block in get_recursively'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:224:in `get_recursively'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:235:in `block in get_recursively'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:224:in `get_recursively'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:95:in `block in get'", "org/jruby/ext/thread/Mutex.java:148:in `synchronize'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:94:in `get'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:108:in `get_shallow'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:157:in `block in extract_metrics'", "org/jruby/RubyArray.java:1734:in `each'", "org/jruby/RubyEnumerable.java:936:in `inject'", "/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:133:in `extract_metrics'", "/usr/share/logstash/logstash-core/lib/logstash/api/service.rb:29:in `extract_metrics'", "/usr/share/logstash/logstash-core/lib/logstash/api/commands/base.rb:22:in `extract_metrics'", "/usr/share/logstash/logstash-core/lib/logstash/api/commands/stats.rb:42:in `events'", "/usr/share/logstash/logstash-core/lib/logstash/api/modules/node_stats.rb:35:in `events_payload'", 
"/usr/share/logstash/logstash-core/lib/logstash/api/modules/node_stats.rb:21:in `block in GET /?:filter?'", "org/jruby/RubyMethod.java:111:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1611:in `block in compile!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:975:in `block in route!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:994:in `route_eval'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:975:in `block in route!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1015:in `block in process_route'", "org/jruby/RubyKernel.java:1114:in `catch'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1013:in `process_route'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:973:in `block in route!'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:972:in `route!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1085:in `block in dispatch!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1067:in `block in invoke'", "org/jruby/RubyKernel.java:1114:in `catch'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1067:in `invoke'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1082:in `dispatch!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:907:in `block in call!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1067:in `block in invoke'", "org/jruby/RubyKernel.java:1114:in `catch'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:1067:in 
`invoke'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:907:in `call!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:895:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/xss_header.rb:18:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/path_traversal.rb:16:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/json_csrf.rb:18:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/base.rb:49:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/base.rb:49:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-protection-1.5.5/lib/rack/protection/frame_options.rb:31:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-1.6.6/lib/rack/nulllogger.rb:9:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-1.6.6/lib/rack/head.rb:13:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:182:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sinatra-1.4.8/lib/sinatra/base.rb:2013:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-1.6.6/lib/rack/urlmap.rb:66:in `block in call'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-1.6.6/lib/rack/urlmap.rb:50:in `call'", "/usr/share/logstash/logstash-core/lib/logstash/api/rack_app.rb:57:in `call'", "/usr/share/logstash/logstash-core/lib/logstash/api/rack_app.rb:31:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-1.6.6/lib/rack/builder.rb:153:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:557:in `handle_request'", 
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:404:in `process_client'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:270:in `block in run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/thread_pool.rb:106:in `block in spawn_thread'"]}
[2018-08-30T16:50:54,959][ERROR][logstash.agent           ] API HTTP Request {:status=>500, :request_method=>"GET", :path_info=>"/_node/stats", :query_string=>"", :http_version=>"HTTP/1.1", :http_accept=>nil}

This is the first part of the error that I am getting in the log (/var/log/logstash/logstash-plain.log)...

This comment is about another error that I am getting in the same log view:

[2018-08-30T16:51:24,083][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, input, filter, output at line 42, column 2 (byte 2251) after ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:157:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:22:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:309:in `block in converge_state'"]}

Regarding this error, I think it is related to my file.conf, right?

This is the file.conf that I created:

  1 # THIS FILE HAS THE INPUT + FILTER AND OUTPUT OF HTML FILES (FOR THE BUGS)
  2
  3 # THIS IS THE INPUT CONFIGURATION
  4 input {
  5   file {
  6     path => "/home/logs/create-dbaas-service/30742430.htm"
  7     start_position => beginning
  8     ignore_older => 0
  9     codec => multiline {
 10       pattern => "Job"
 11       negate => true
 12       what => "next"
 13       auto_flush_interval => 5
 14     }
 15   }
 16 }
 17
 18 # THIS IS THE FILTER CONFIGURATION
 19 filter {
 20   grok {
  21     match => { "message" => "<h1>%{DATA:Job_word}%{SPACE}%{NUMBER:job_id}</h1><table><tr><th>%{DATA:Domain_word}</th><td>%{DATA:Domain_id}</td></tr><tr><th>%{DATA:Service_word}</th><td>%{DATA:Service_id}</td></tr><tr><th>%{DATA:Namespace_word</th><td>%{DATA:Namespace_type}</td></tr><tr><th>%{DATA:ServiceType_word}</th><td>%{DATA:ServiceType_id}</td></tr><tr><th>%{DATA:ComputeSite_word}</th><td>%{DATA:ComputeSite_id}</td></tr><tr><th>>%{DATA:Username_word}</th><td>%{DATA:Username_value}</td></tr><tr><th>%{DATA:Operation_word}</th><td>%{DATA:Operation_value}</td></tr><tr><th>%{DATA:Status_word}</th><td>%{DATA:Status_value}</td></tr><tr><th>%{DATA:SubStatus_word}</th><td>%{DATA:SubStatus_value</td></tr><tr><th>%{DATA:CreateTime_word}</th><td>%{TIMESTAMP_ISO8601}</td></tr><tr><th>%{DATA:StartTime_word}</th><td>%{TIMESTAMP_ISO8601}</td></tr><tr><th>%{DATA:EndTime_word}</th><td>%{TIMESTAMP_ISO8601}</td></tr><tr><th>%{DATA:UpdateTime_word}</th><td>%{TIMESTAMP_ISO8601}</td></tr><tr><th>%{DATA:JobInfo_word}</th><td>%{DATA:JobInfo_text}</td></tr><tr><th>%{DATA:RequestParameters_word}</th><td>%{DATA:RequestParameters_log}</tr><tr><th>%{DATA:SupplementalLogs_word}</th><td>%{DATA:SupplementalLogs_value}</td></tr><tr><th>%{DATA:Summary_word}</th><td><pre>%{DATA:Summary_log}</pre></td></tr></table>"
 22       }
 23   }
 24 }
 25
 26 # THIS IS THE OUTPUT CONFIGURATION
 27 output {
 28 }sticsearch {}
 29     stdout { codec => rubydebug }
 30     http {
 31         http_method => "put"
 32         url => "http://localhost"
 33         format => "message"
 34         message=> '{
 35             "Domain_id":"%{Domain_id}",
 36             "Service_id":"%{Service_id}",
 37             "Namespace_type":"%{Namespace_type}",
 38             "ServiceType_id":"%{ServiceType_id}",
 39             "ComputeSite_id":"%{ComputeSite_id}",
 40             "Username_value":"%{Username_value}",
 41             "Operation_value":"%{Operation_value}",
 42             "Status_value":"%{Status_value}",
 43             "Summary_log":"%{Summary_log}"
 44         }'
 45     }
 46 }

This is the first part of the error that I am getting in the log ( /var/log/logstash/logstash-plain.log )...

It looks like Logstash is having problems reporting its metrics to Elasticsearch. How's the health of your ES cluster?

This comment is about another error that I am getting in the same log view:

There's a syntax error in your configuration file:

output {
 }sticsearch {}
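
For comparison, a syntactically valid output section gives every plugin its own balanced braces; a minimal sketch using the outputs the posted config seems to intend (host and index name are placeholders):

```conf
output {
  # Send each parsed event to Elasticsearch (placeholder host and index)
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "dbaas-jobs-%{+YYYY.MM.dd}"
  }
  # Also echo events to the console while debugging
  stdout { codec => rubydebug }
}
```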

Hi Magnus,

I fixed the typos in my file.conf... I also had an extra curly brace at the end of the output section... so after fixing those errors and restarting Logstash, I can see this new output (part 1):

[2018-08-31T15:58:29,541][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2018-08-31T15:58:33,915][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", hosts=>[http://localhost:9200], sniffing=>false, manage_template=>false, id=>"2196aa69258f6adaaf9506d8988cc76ab153e658434074dcf2e424e0aca0d381", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0516e08c-882b-4e98-835e-fd0a3e67c1a0", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-08-31T15:58:34,085][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2018-08-31T15:58:35,272][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-31T15:58:35,285][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-31T15:58:36,136][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-31T15:58:40,262][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-31T15:58:40,485][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-31T15:58:40,852][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-08-31T15:58:41,099][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", sniffing=>true, manage_template=>false, id=>"070a7bd33a72b1b1bc135259aed4f2b5b292131a3c0f04e3aeb88a8b74a9d8eb", hosts=>[//localhost:9200], document_type=>"%{[@metadata][type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_1e853da0-c0b1-4bf6-8a61-4445dfb3d995", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-08-31T15:58:41,173][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-31T15:58:41,300][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-31T15:58:41,313][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2018-08-31T15:58:41,314][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2018-08-31T15:58:41,315][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-31T15:58:41,339][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-31T15:58:41,344][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2018-08-31T15:58:41,349][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>6}
[2018-08-31T15:58:41,350][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-31T15:58:41,354][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-31T15:58:41,355][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-31T15:58:41,366][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1"]}
[2018-08-31T15:58:41,373][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}

(part 2):

[2018-08-31T15:58:41,430][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-31T15:58:41,464][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}

[2018-08-31T15:58:41,465][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-31T15:58:41,485][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-31T15:58:41,492][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-31T15:58:41,492][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-31T15:58:41,550][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-08-31T15:58:41,816][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x33016685 sleep>"}
[2018-08-31T15:58:42,112][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_de4f11e64849498051d727e3231a2185", :path=>["/home/logs/create-dbaas-service/30742430.htm"]}
[2018-08-31T15:58:42,496][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-08-31T15:58:42,549][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x22802943 sleep>"}
[2018-08-31T15:58:42,637][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-08-31T15:58:42,736][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2018-08-31T15:58:42,773][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[2018-08-31T15:58:42,812][INFO ][logstash.inputs.metrics  ] Monitoring License OK
[2018-08-31T15:58:44,306][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-08-31T15:58:46,592][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[http://localhost:9200/], :added=>[http://127.0.0.1:9200/]}}
[2018-08-31T15:58:46,593][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2018-08-31T15:58:46,601][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}

It seems the log doesn't have errors (only some warnings), but the weird thing is that I cannot see any data in Kibana. Should I configure something else?
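
One common reason for an empty index when testing with the file input is that the sincedb already records the file as fully read, so restarting Logstash does not re-ingest it (the log above shows a generated sincedb_path). A sketch of an input section that forces a re-read on every start, for testing only, reusing the path from the posted config:

```conf
input {
  file {
    path => "/home/logs/create-dbaas-service/30742430.htm"
    start_position => "beginning"
    # Discard read-position tracking so the file is re-read on each restart
    sincedb_path => "/dev/null"
  }
}
```

If events are actually reaching Elasticsearch, they should show up under an index that matches the index pattern configured in Kibana.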