New to ELK, can't get Logstash to work

Hello,

I have an ELK stack on an on-prem cluster of VMs. One VM is running the Norconex web crawler (Windows), which is working and creating logs, and has Filebeat installed.

The other is running Logstash (RHEL).

I installed the latest versions of both Elasticsearch and Logstash. I did the default Logstash install and created a .conf with some test grok filters in it. Logstash wrote to the logstash-pipeline.log once, then never again, and it never created an index on my Elasticsearch server, which is the same RHEL server.

I am not sure if I have missed something, but it no longer writes to the log, and it has never created an index. According to the console, though, the port is listening for Beats and the pipeline is running.

Any ideas? Here is the output from the console, followed by my conf file.

Using bundled JDK: /usr/share/logstash/jdk
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_int
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_f
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2024-03-18 14:45:30.922 [main] runner - NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.
[INFO ] 2024-03-18 14:45:30.929 [main] runner - Starting Logstash {"logstash.version"=>"8.12.0", "jruby.version"=>"jruby 9.4.5.0 (3.1.4) 2023-11-02 1abae2700f OpenJDK 64-Bit Server VM 17.0.9+9 on 17.0.9+9 +indy +jit [x86_64-linux]"}
[INFO ] 2024-03-18 14:45:30.931 [main] runner - JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[INFO ] 2024-03-18 14:45:30.933 [main] runner - Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[INFO ] 2024-03-18 14:45:30.933 [main] runner - Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[WARN ] 2024-03-18 14:45:31.053 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2024-03-18 14:45:31.596 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[INFO ] 2024-03-18 14:45:32.014 [Converge PipelineAction::Create<main>] Reflections - Reflections took 94 ms to scan 1 urls, producing 132 keys and 468 values
[INFO ] 2024-03-18 14:45:32.174 [Converge PipelineAction::Create<main>] json - ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[INFO ] 2024-03-18 14:45:32.426 [Converge PipelineAction::Create<main>] javapipeline - Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[INFO ] 2024-03-18 14:45:32.437 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://100.xxx.xxx.25:9200"]}
[INFO ] 2024-03-18 14:45:32.510 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://loguser:xxxxxx@100.xxx.xxx.25:9200/]}}
[WARN ] 2024-03-18 14:45:32.894 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"https://loguser:xxxxxx@100.xxx.xxx.25:9200/"}
[INFO ] 2024-03-18 14:45:32.895 [[main]-pipeline-manager] elasticsearch - Elasticsearch version determined (8.12.2) {:es_version=>8}
[WARN ] 2024-03-18 14:45:32.895 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[INFO ] 2024-03-18 14:45:32.903 [[main]-pipeline-manager] elasticsearch - Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"norconex-logs"}
[INFO ] 2024-03-18 14:45:32.903 [[main]-pipeline-manager] elasticsearch - Data streams auto configuration (`data_stream => auto` or unset) resolved to `false`
[INFO ] 2024-03-18 14:45:32.917 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.2-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:164] elasticsearch - Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[WARN ] 2024-03-18 14:45:33.129 [[main]-pipeline-manager] grok - ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[INFO ] 2024-03-18 14:45:33.206 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/norconex-pipeline.conf"], :thread=>"#<Thread:0x7455877a /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[INFO ] 2024-03-18 14:45:33.908 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.7}
[INFO ] 2024-03-18 14:45:33.914 [[main]-pipeline-manager] beats - Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2024-03-18 14:45:33.926 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2024-03-18 14:45:33.940 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2024-03-18 14:45:33.992 [[main]<beats] Server - Starting server on port: 5044
input {
    beats {
        port => 5044
        codec => "json"
    }
}

filter {
    grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:thread}\] %{LOGLEVEL:level} %{DATA:component}" }
    }
    date {
        match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
        target => "@timestamp"
    }
    useragent {
        source => "agent"
        target => "user_agent"
    }

    # Additional filters for specific errors
    if [response] == "404" {
        mutate {
            add_tag => [ "404_error" ]
        }
    }
}

output {
    elasticsearch {
        hosts => ["https://100.xxx.xxx.25:9200"]
        ssl_enabled => true # Use 'ssl_enabled' instead of 'ssl'
        ssl_certificate_authorities => ["/datadrive/ca/http_ca.crt"] # Use 'ssl_certificate_authorities' instead of 'cacert'
        user => "loguser"
        password => "testpassword"
        index => "norconex-logs"
    }
}

It looks like Logstash has started OK. It's not going to log anything else while it is processing events unless there is an error, and Elasticsearch will not create the index until Logstash writes to it, so I would conclude that nothing is sending data to your beats input.
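
One quick way to confirm that: your startup log shows the Logstash API listening on port 9600, and its node stats endpoint reports pipeline event counters. Assuming default settings, something like

curl -s http://localhost:9600/_node/stats/events

should return in/filtered/out counts. If "in" stays at 0 while you force crawls, nothing is reaching the beats input, so check the Filebeat output config on the Windows VM and any firewall between it and the RHEL box.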

I don't use Beats much, but I doubt that you want a json codec, although that will not prevent events flowing.

I'm forcing crawls and seeing logs generated on the Beats server. I read that Beats sends in JSON, but I could be wrong.

Logstash just sits there, and it is not generating a log either. I thought it would at least log that it started.

Hi @elastic_user2

Take the json codec out; it is not correct / not needed for the beats input.
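
With the codec removed, the input would just be this (as far as I understand, Filebeat frames events itself over the Beats protocol, so the input decodes them without any codec):

input {
    beats {
        port => 5044
    }
}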

And in the output section, add this under the elasticsearch block:

stdout {}

This will print the events Logstash is outputting to the console.
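
For reference, your whole output section would then look something like this (the rubydebug codec is optional; it just pretty-prints each event on the console):

output {
    elasticsearch {
        hosts => ["https://100.xxx.xxx.25:9200"]
        ssl_enabled => true
        ssl_certificate_authorities => ["/datadrive/ca/http_ca.crt"]
        user => "loguser"
        password => "testpassword"
        index => "norconex-logs"
    }
    stdout { codec => rubydebug }
}

If events show up on the console but not in Elasticsearch, the problem is on the output side; if nothing shows up at all, nothing is reaching the pipeline.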
