Logstash and pipelines

Hello, I am attempting my first pipeline from a CSV into Logstash. I am running Elasticsearch and Kibana on a single Alma Linux machine. I downloaded Elasticsearch, Logstash, and Kibana from the 8.17 download page and, after extracting, ran bin/<appname> to start each one. Elasticsearch is running. Kibana is running and is connected to Elasticsearch using the built-in elastic user. I made no changes to the Elasticsearch or Kibana yaml files. I then created a Logstash config for my CSV file and ran bin/logstash -f <config>. However, I am now getting errors in Logstash. My first error was the same as this discussion: https://discuss.elastic.co/t/attempted-to-resurrect-connection-to-dead-es-instance/300480

I then attempted to fix security by following Secure your connection to Elasticsearch | Logstash Reference [8.17] | Elastic, but stopped partway through that guide because Logstash fails with the following error:
[logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://logstash_internal:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}

I reviewed the following message: "Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"

My ask is two-fold: 1. I would appreciate any insight into how to get past my current error, and 2. I would like a pointer to a better understanding of the default security in Elasticsearch 8.

Thank you in advance

This error means that Logstash cannot connect to Elasticsearch.

Is your Elasticsearch using http or https? How is it configured? Please share both elasticsearch.yml and kibana.yml.
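In the meantime, a quick probe of port 9200 will show which protocol the node is answering on (a minimal sketch; the CA path and credentials are placeholders for your own):

# If this succeeds, Elasticsearch is serving HTTPS on 9200. --cacert points at
# the CA that signed the HTTP certificate; the security auto-configuration
# usually writes it as config/certs/http_ca.crt in the Elasticsearch directory.
curl --cacert /path/to/http_ca.crt -u elastic https://localhost:9200

# If the HTTPS request fails but plain HTTP answers, TLS is not enabled on the
# HTTP layer.
curl http://localhost:9200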

Thank you for your help.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 30-01-2025 02:29:37
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["localhost"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
#logging.appenders.default:
#  type: file
#  fileName: /var/logs/kibana.log
#  layout:
#    type: json

# Example with size based log rotation
#logging.appenders.default:
#  type: rolling-file
#  fileName: /var/logs/kibana.log
#  policy:
#    type: size-limit
#    size: 256mb
#  strategy:
#    type: numeric
#    max: 10
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# Enables debug logging on the browser (dev console)
#logging.browser.root:
#  level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000


# This section was automatically generated during setup.
elasticsearch.hosts: ['https://192.168.1.37:9200']
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3MzgyMDQ1OTU0OTM6T0VGWHRrOXNUM2FUVW00aTEyRGN5dw
elasticsearch.ssl.certificateAuthorities: [/home/trex/Downloads/kibana-8.17.1/data/ca_1738204596395.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://192.168.1.37:9200'], ca_trusted_fingerprint: 555d03a49472cc5a6af2bbb6402172512130b646ac17e6f1482b6dd46090a114}]

You have https configured, so in your Logstash configuration you need to use https, not http.

You will also need to provide the certificate that is being used, as explained here.
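For example, the elasticsearch output would look something like this sketch (the CA path is a placeholder; the auto-configured certificate is the http_ca.crt that Elasticsearch generates, copied to a location Logstash can read):

output {
  elasticsearch {
    # https, to match xpack.security.http.ssl.enabled: true in elasticsearch.yml
    hosts => ["https://localhost:9200"]
    # CA that signed Elasticsearch's HTTP certificate
    ssl_certificate_authorities => ["/path/to/http_ca.crt"]
  }
}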

I think I am still getting a communication error:

Sending Logstash logs to /home/trex/Downloads/logstash-8.17.1/logs which is now configured via log4j2.properties
[2025-02-10T22:36:26,081][INFO ][logstash.runner          ] Log4j configuration path used is: /home/trex/Downloads/logstash-8.17.1/config/log4j2.properties
[2025-02-10T22:36:26,084][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.17.1", "jruby.version"=>"jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 21.0.5+11-LTS on 21.0.5+11-LTS +indy +jit [x86_64-linux]"}
[2025-02-10T22:36:26,085][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2025-02-10T22:36:26,101][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2025-02-10T22:36:26,101][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2025-02-10T22:36:26,171][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2025-02-10T22:36:26,396][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2025-02-10T22:36:26,642][INFO ][org.reflections.Reflections] Reflections took 44 ms to scan 1 urls, producing 151 keys and 528 values
[2025-02-10T22:36:26,890][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2025-02-10T22:36:26,897][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://localhost:9200"]}
[2025-02-10T22:36:26,967][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://localhost:9200/]}}
[2025-02-10T22:36:27,070][WARN ][logstash.outputs.elasticsearch][main] Health check failed {:code=>401, :url=>https://localhost:9200/, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://localhost:9200/'"}
[2025-02-10T22:36:27,074][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch main endpoint returns 401 {:message=>"Got response code '401' contacting Elasticsearch at URL 'https://localhost:9200/'", :body=>"{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"missing authentication credentials for REST request [/]\",\"header\":{\"WWW-Authenticate\":[\"Basic realm=\\\"security\\\", charset=\\\"UTF-8\\\"\",\"Bearer realm=\\\"security\\\"\",\"ApiKey\"]}}],\"type\":\"security_exception\",\"reason\":\"missing authentication credentials for REST request [/]\",\"header\":{\"WWW-Authenticate\":[\"Basic realm=\\\"security\\\", charset=\\\"UTF-8\\\"\",\"Bearer realm=\\\"security\\\"\",\"ApiKey\"]}},\"status\":401}"}
[2025-02-10T22:36:27,075][ERROR][logstash.javapipeline    ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<LogStash::ConfigurationError: Could not read Elasticsearch. Please check the credentials>, :backtrace=>["/home/trex/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:275:in `block in healthcheck!'", "org/jruby/RubyHash.java:1615:in `each'", "/home/trex/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:267:in `healthcheck!'", "/home/trex/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:401:in `update_urls'", "/home/trex/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:109:in `update_initial_urls'", "/home/trex/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:103:in `start'", "/home/trex/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch/http_client.rb:373:in `build_pool'", "/home/trex/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch/http_client.rb:64:in `initialize'", "org/jruby/RubyClass.java:922:in `new'", "/home/trex/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:106:in `create_http_client'", "/home/trex/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:102:in `build'", "/home/trex/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:42:in `build_client'", "/home/trex/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch.rb:301:in `register'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:69:in `register'", "/home/trex/Downloads/logstash-8.17.1/logstash-core/lib/logstash/java_pipeline.rb:245:in `block in register_plugins'", "org/jruby/RubyArray.java:1981:in `each'", "/home/trex/Downloads/logstash-8.17.1/logstash-core/lib/logstash/java_pipeline.rb:244:in `register_plugins'", "/home/trex/Downloads/logstash-8.17.1/logstash-core/lib/logstash/java_pipeline.rb:622:in `maybe_setup_out_plugins'", "/home/trex/Downloads/logstash-8.17.1/logstash-core/lib/logstash/java_pipeline.rb:257:in `start_workers'", "/home/trex/Downloads/logstash-8.17.1/logstash-core/lib/logstash/java_pipeline.rb:198:in `run'", "/home/trex/Downloads/logstash-8.17.1/logstash-core/lib/logstash/java_pipeline.rb:150:in `block in start'"], "pipeline.sources"=>["/home/trex/Downloads/logstash-8.17.1/config/testPipe.conf"], :thread=>"#<Thread:0x782acb2c /home/trex/Downloads/logstash-8.17.1/logstash-core/lib/logstash/java_pipeline.rb:138 run>"}
[2025-02-10T22:36:27,076][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2025-02-10T22:36:27,080][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}

I copied the Elasticsearch cert to a Logstash-accessible location and referenced it in the config:

input {
  file {
    path => "/home/trex/Downloads/Power-12.2.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => ","
    columns => ["Day of Date Eastern", "Device Id", "Location", "Time Eastern", "Date Eastern", "Hour Eastern", "count", "Address", "Geohash", "Ip Address", "Timestamp UTC", "count_locations", "count_signals", "Horizontal Accuracy", "Latitude", "Longitude", "Timestamp"]
  }

  date {
    match => ["Timestamp UTC", "MM/dd/yyyy hh:mm:ss a"]
    target => "@timestamp"
    timezone => "UTC"
  }

  mutate {
    convert => {
      "Hour Eastern" => "integer"
      "count_locations" => "integer"
      "count_signals" => "integer"
      "Horizontal Accuracy" => "float"
      "Latitude" => "float"
      "Longitude" => "float"
      "Timestamp" => "float"
    }
  }

  geoip {
    source => "Ip Address"
    target => "geoip"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "substation_sabotage_%{+YYYY.MM.dd}"
    ssl_certificate_authorities => ["/home/trex/Downloads/logstash-8.17.1/config/certs/http_ca.crt"]
  }
  stdout { codec => rubydebug }
}

You are missing authentication; a user/password or an API key is required.

You have
xpack.security.enabled: true

enabled in elasticsearch.yml; therefore, authentication is required.
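You can test a set of credentials independently of Logstash (a sketch; substitute your own user and CA path):

# A JSON cluster banner means the credentials are accepted; a 401 with
# "unable to authenticate user" means the password is wrong.
curl --cacert /path/to/http_ca.crt -u logstash_internal https://localhost:9200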

Thank you for the direction. I attempted to add a username & password per the documentation (Logstash Authentication). I created a logstash_internal user via the instructions.
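(For reference, a sketch of what those documented steps amount to via the Security API; the CA path and the CHANGEME password are placeholders, and the index pattern must cover whatever indices the pipeline will write:)

# Create a role allowed to create and write the target indices
curl --cacert /path/to/http_ca.crt -u elastic \
  -X POST "https://localhost:9200/_security/role/logstash_writer" \
  -H 'Content-Type: application/json' -d'
{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [
    {
      "names": ["substation_sabotage_*"],
      "privileges": ["write", "create", "create_index", "manage"]
    }
  ]
}'

# Create the logstash_internal user with that role and a password
curl --cacert /path/to/http_ca.crt -u elastic \
  -X POST "https://localhost:9200/_security/user/logstash_internal" \
  -H 'Content-Type: application/json' -d'
{
  "password": "CHANGEME",
  "roles": ["logstash_writer"],
  "full_name": "Internal Logstash User"
}'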


I also created a password in the UI, but is that password sufficient for X-Pack security? I don't think so.

[2025-02-12T23:01:39,166][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"{\" at line 48, column 3 (byte 1337) after output {\n  elasticsearch {\n    hosts => [\"https://localhost:9200\"]\n    index => \"substation_sabotage_%{+YYYY.MM.dd}\"\n    ssl_certificate_authorities=>['/home/trex/Downloads/logstash-8.17.1/config/certs/http_ca.crt']\n    user=> logstash_internal\n    password => ------------\n  ", :backtrace=>["/home/trex/Downloads/logstash-8.17.1/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:294:in `initialize'", "org/logstash/execution/AbstractPipelineExt.java:227:in `initialize'", "/home/trex/Downloads/logstash-8.17.1/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "org/jruby/RubyClass.java:949:in `new'", "/home/trex/Downloads/logstash-8.17.1/logstash-core/lib/logstash/pipeline_action/create.rb:50:in `execute'", "/home/trex/Downloads/logstash-8.17.1/logstash-core/lib/logstash/agent.rb:420:in `block in converge_state'"]}

Also, the instructions for API key creation don't feel complete. How do I create a password for the logstash_internal user? Do I have to add the user/password entry to the input and filter sections of the config file? Do you have another reference for setting the password? Do you have another reference for API key creation? Thank you.

That error message indicates you have a syntax error in your conf file...

Share your conf file with the password anonymized.

Look carefully: you have a syntax error, and the message is even trying to tell you where... although the reported position is often not exact.

input {
  file {
    path => "/home/trex/Downloads/Power-12.2.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => ","
    columns => ["Day of Date Eastern", "Device Id", "Location", "Time Eastern", "Date Eastern", "Hour Eastern", "count", "Address", "Geohash", "Ip Address", "Timestamp UTC", "count_locations", "count_signals", "Horizontal Accuracy", "Latitude", "Longitude", "Timestamp"]
  }

  date {
    match => ["Timestamp UTC", "MM/dd/yyyy hh:mm:ss a"]
    target => "@timestamp"
    timezone => "UTC"
  }

  mutate {
    convert => {
      "Hour Eastern" => "integer"
      "count_locations" => "integer"
      "count_signals" => "integer"
      "Horizontal Accuracy" => "float"
      "Latitude" => "float"
      "Longitude" => "float"
      "Timestamp" => "float"
    }
  }

  geoip {
    source => "Ip Address"
    target => "geoip"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "substation_sabotage_%{+YYYY.MM.dd}"
    ssl_certificate_authorities=>['/home/trex/Downloads/logstash-8.17.1/config/certs/http_ca.crt']
    user=> logstash_internal
    password => ----------------
  }
  stdout { codec => rubydebug }
}

Open the configuration file in a file editor on your system and take a look at this place: line 48, column 3.

Suggestion: use double quotes around the username and password.
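Concretely, the output block from above would end up like this (password anonymized):

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "substation_sabotage_%{+YYYY.MM.dd}"
    ssl_certificate_authorities => ["/home/trex/Downloads/logstash-8.17.1/config/certs/http_ca.crt"]
    user => "logstash_internal"
    password => "xxxxxx"
  }
  stdout { codec => rubydebug }
}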

OK, I believe I corrected the syntax error, but now I am getting this error:

[2025-02-27T17:04:15,205][INFO ][logstash.runner          ] Log4j configuration path used is: /home/dwight-admin/Downloads/logstash-8.17.1/config/log4j2.properties
[2025-02-27T17:04:15,215][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.17.1", "jruby.version"=>"jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 21.0.5+11-LTS on 21.0.5+11-LTS +indy +jit [x86_64-linux]"}
[2025-02-27T17:04:15,217][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2025-02-27T17:04:15,247][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2025-02-27T17:04:15,247][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2025-02-27T17:04:15,386][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2025-02-27T17:04:15,726][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2025-02-27T17:04:16,113][INFO ][org.reflections.Reflections] Reflections took 96 ms to scan 1 urls, producing 151 keys and 528 values
[2025-02-27T17:04:16,402][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ArgumentError) URI is not valid - host is not specified", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:137)", "org.logstash.execution.AbstractPipelineExt.initialize(AbstractPipelineExt.java:240)", "org.logstash.execution.AbstractPipelineExt$INVOKER$i$initialize.call(AbstractPipelineExt$INVOKER$i$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:847)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1379)", "org.jruby.ir.instructions.InstanceSuperInstr.interpret(InstanceSuperInstr.java:139)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:363)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:128)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:115)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:446)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:92)", "org.jruby.RubyClass.newInstance(RubyClass.java:949)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:446)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:92)", "org.jruby.ir.instructions.CallBase.interpret(CallBase.java:548)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:363)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:88)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:238)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:225)", "org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:228)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:476)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:293)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:324)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.ir.interpreter.Interpreter.INTERPRET_BLOCK(Interpreter.java:118)", "org.jruby.runtime.MixedModeIRBlockBody.commonYieldPath(MixedModeIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:66)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.Block.call(Block.java:144)", "org.jruby.RubyProc.call(RubyProc.java:354)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:111)", "java.base/java.lang.Thread.run(Thread.java:1583)"], :cause=>{:exception=>Java::OrgJrubyExceptions::ArgumentError, :message=>"(ArgumentError) URI is not valid - host is not specified", :backtrace=>["RUBY.initialize(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/util/safe_uri.rb:44)", "org.jruby.RubyClass.new(org/jruby/RubyClass.java:922)", 
"RUBY.from(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/util/safe_uri.rb:55)", "home.dwight_minus_admin.Downloads.logstash_minus_8_dot_17_dot_1.logstash_minus_core.lib.logstash.config.mixin.validate_value(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/config/mixin.rb:555)", "org.jruby.RubyArray.map(org/jruby/RubyArray.java:2803)", "home.dwight_minus_admin.Downloads.logstash_minus_8_dot_17_dot_1.logstash_minus_core.lib.logstash.config.mixin.validate_value(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/config/mixin.rb:552)", "home.dwight_minus_admin.Downloads.logstash_minus_8_dot_17_dot_1.logstash_minus_core.lib.logstash.plugins.ecs_compatibility_support.validate_value(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/plugins/ecs_compatibility_support.rb:41)", "RUBY.validate_value(/home/dwight-admin/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch/data_stream_support.rb:242)", "home.dwight_minus_admin.Downloads.logstash_minus_8_dot_17_dot_1.logstash_minus_core.lib.logstash.config.mixin.process_parameter_value(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/config/mixin.rb:329)", "home.dwight_minus_admin.Downloads.logstash_minus_8_dot_17_dot_1.logstash_minus_core.lib.logstash.config.mixin.validate_check_parameter_values(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/config/mixin.rb:354)", "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1981)", "home.dwight_minus_admin.Downloads.logstash_minus_8_dot_17_dot_1.logstash_minus_core.lib.logstash.config.mixin.validate_check_parameter_values(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/config/mixin.rb:348)", "org.jruby.RubyHash.each(org/jruby/RubyHash.java:1615)", "RUBY.validate_check_parameter_values(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/config/mixin.rb:347)", "RUBY.validate(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/config/mixin.rb:264)", "RUBY.config_init(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/config/mixin.rb:110)", "RUBY.config_init(/home/dwight-admin/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch.rb:390)", "RUBY.initialize(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/outputs/base.rb:75)", "RUBY.initialize(/home/dwight-admin/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-mixin-ecs_compatibility_support-1.3.0-java/lib/logstash/plugin_mixins/ecs_compatibility_support/selector.rb:61)", "RUBY.initialize(/home/dwight-admin/Downloads/logstash-8.17.1/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.10-java/lib/logstash/outputs/elasticsearch.rb:276)", "org.logstash.plugins.factory.ContextualizerExt.initialize(org/logstash/plugins/factory/ContextualizerExt.java:97)", "org.jruby.RubyClass.new(org/jruby/RubyClass.java:949)", "org.logstash.plugins.factory.ContextualizerExt.initialize_plugin(org/logstash/plugins/factory/ContextualizerExt.java:80)", "org.logstash.plugins.factory.ContextualizerExt.initialize_plugin(org/logstash/plugins/factory/ContextualizerExt.java:53)", "org.jruby.RubyClass.new(org/jruby/RubyClass.java:949)", "org.logstash.config.ir.compiler.OutputDelegatorExt.initialize(org/logstash/config/ir/compiler/OutputDelegatorExt.java:79)", 
"org.logstash.config.ir.compiler.OutputDelegatorExt.initialize(org/logstash/config/ir/compiler/OutputDelegatorExt.java:56)", "org.logstash.plugins.factory.PluginFactoryExt.plugin(org/logstash/plugins/factory/PluginFactoryExt.java:241)", "org.logstash.execution.AbstractPipelineExt.initialize(org/logstash/execution/AbstractPipelineExt.java:240)", "RUBY.initialize(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/java_pipeline.rb:47)", "org.jruby.RubyClass.new(org/jruby/RubyClass.java:949)", "RUBY.execute(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/pipeline_action/create.rb:50)", "RUBY.converge_state(/home/dwight-admin/Downloads/logstash-8.17.1/logstash-core/lib/logstash/agent.rb:420)"]}}
[2025-02-27T17:04:16,413][INFO ][logstash.runner          ] Logstash shut down.
[2025-02-27T17:04:16,416][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit

Heya @Trent-alex
The only way we can help is if you post both the configuration and the error at the same time. Otherwise we're just guessing :slight_smile:

Share the error and the same configuration file that created it

Actually, I did some comparison with other posts and corrected the syntax to this:

# This is a test config file for the mobile data

input {
  file {
    path => "/home/dwight-admin/Downloads/test.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => "[-]"
    columns => [
      "uuid_aaid",
      "lat_long_timestamp_ip_device_app"
    ]
  }

  # Split the second column further
  csv {
    source => "lat_long_timestamp_ip_device_app"
    separator => "\N"
    columns => [
      "lat_long_timestamp_ip",
      "device_app"
    ]
  }

  # Split latitude, longitude, timestamp, and IP
  grok {
    match => { "lat_long_timestamp_ip" => "%{NUMBER:latitude}%{NUMBER:longitude}%{NUMBER:timestamp}%{IP:ip_address}" }
  }

  # Split device type and app ID
  grok {
    match => { "device_app" => "%{WORD:device_type}\\%{WORD:app_id}" }
  }

  mutate {
    convert => {
      "latitude" => "float"
      "longitude" => "float"
      "timestamp" => "integer"
    }
    remove_field => ["lat_long_timestamp_ip", "device_app"]
  }

  date {
    match => [ "timestamp", "UNIX" ]
    target => "@timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["https//localhost:9200"]
    index => "mobile_data"
    user => "logstash_internal"
    password => "-------------------"
  }
  stdout { codec => "rubydebug" }
}


However, now I am back to the ES failure:

[2025-02-27T17:22:06,157][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash_internal:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused"}

Just to recap: my Elastic stack is a 3-node local cluster. Please help. Thank you.

[2025-02-27T17:22:06,157][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash_internal:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::Elasticsearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused"}

This error means that Logstash cannot connect to your Elasticsearch cluster.

Is Logstash running on the same machine as Elasticsearch? You are using https://localhost:9200 as the Elasticsearch host, which makes Logstash try to connect to an Elasticsearch instance running on the same machine.

If Logstash is not on the same machine, you need to use the IP address of your Elasticsearch instance; if they are on the same machine, you need to check whether Elasticsearch is running correctly.
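Two quick checks from the machine where Logstash runs (a sketch; the address and CA path are placeholders):

# Is anything listening on port 9200 at all? "Connection refused" usually
# means nothing is.
ss -tlnp | grep 9200

# Does the node answer over TLS with the expected CA and credentials?
curl --cacert /path/to/http_ca.crt -u elastic https://localhost:9200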

OK, so I have tried two settings and get the same warning. Please note this is a 3-server cluster, with one node per server. Kibana, Logstash, and Elasticsearch are running on the same server. My current config is:

# This is a test config file for the mobile data

input {
  file {
    path => "/home/dwight-admin/Downloads/test.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => "[-]"
    columns => [
      "uuid_aaid",
      "lat_long_timestamp_ip_device_app"
    ]
  }

  # Split the second column further
  csv {
    source => "lat_long_timestamp_ip_device_app"
    separator => "\N"
    columns => [
      "lat_long_timestamp_ip",
      "device_app"
    ]
  }

  # Split latitude, longitude, timestamp, and IP
  grok {
    match => { "lat_long_timestamp_ip" => "%{NUMBER:latitude}%{NUMBER:longitude}%{NUMBER:timestamp}%{IP:ip_address}" }
  }

  # Split device type and app ID
  grok {
    match => { "device_app" => "%{WORD:device_type}\\%{WORD:app_id}" }
  }

  mutate {
    convert => {
      "latitude" => "float"
 "timestamp" => "integer"
    }
    remove_field => ["lat_long_timestamp_ip", "device_app"]
  }

  date {
    match => [ "timestamp", "UNIX" ]
    target => "@timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["https://192.168.10.60:9200"]
    index => "mobile_data"
    user => "logstash_internal"
    password => "--------------"
  }
  stdout { codec => "rubydebug" }
}

[2025-03-13T16:40:59,884][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash_internal:xxxxxx@192.168.10.60:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://192.168.10.60:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}

You need to configure the Logstash Elasticsearch output to trust the certificate used for the HTTP layer in Elasticsearch.

Basically, you need to add these settings:

    ssl => true
    cacert => "/path/to/http_ca.crt"
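Note that `ssl` and `cacert` are the older option names; recent 8.x releases of the output plugin also accept `ssl_enabled` and `ssl_certificate_authorities`. Putting it together with the settings already shown in this thread, the output would look something like this sketch (the CA path is a placeholder for wherever you copied http_ca.crt):

output {
  elasticsearch {
    hosts => ["https://192.168.10.60:9200"]
    index => "mobile_data"
    user => "logstash_internal"
    password => "xxxxxx"
    # Trust the CA that signed the certificate on Elasticsearch's HTTP layer
    ssl_certificate_authorities => ["/path/to/http_ca.crt"]
  }
  stdout { codec => "rubydebug" }
}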