Custom log file is not imported by Logstash

Hi, I have a custom log file, and when I try to read it and send it to Elastic, nothing happens. Logstash does not produce any error or message.

Input file: /var/log/link.log
2025-03-21T07:43:22-03:00 Link2 ON
2025-03-21T07:43:22-03:00 Link2 OFF
2025-03-21T07:43:22-03:00 Link2 OFF
2025-03-21T07:43:22-03:00 Link2 OFF

/etc/logstash/conf.d/link.conf
input {
  file {
    path => "/var/log/link.log"
    mode => "read"
  }
}

filter {
  grok {
    ecs_compatibility => "disabled"
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:link} %{WORD:status}" }
  }
}

output {
  stdout {
  }
}

Logstash output after startup:

root@fedora:/etc/logstash/conf.d# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/link.conf
Using bundled JDK: /usr/share/logstash/jdk
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2025-03-21 08:27:51.159 [main] runner - Starting from version 9.0, running with superuser privileges is not permitted unless you explicitly set 'allow_superuser' to true, thereby acknowledging the possible security risks
[WARN ] 2025-03-21 08:27:51.163 [main] runner - NOTICE: Running Logstash as a superuser is strongly discouraged as it poses a security risk. Set 'allow_superuser' to false for better security.
[WARN ] 2025-03-21 08:27:51.167 [main] runner - 'pipeline.buffer.type' setting is not explicitly defined.Before moving to 9.x set it to 'heap' and tune heap size upward, or set it to 'direct' to maintain existing behavior.
[INFO ] 2025-03-21 08:27:51.168 [main] runner - Starting Logstash {"logstash.version"=>"8.17.3", "jruby.version"=>"jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 21.0.6+7-LTS on 21.0.6+7-LTS +indy +jit [x86_64-linux]"}
[INFO ] 2025-03-21 08:27:51.171 [main] runner - JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[INFO ] 2025-03-21 08:27:51.276 [main] StreamReadConstraintsUtil - Jackson default value override logstash.jackson.stream-read-constraints.max-string-length configured to 200000000
[INFO ] 2025-03-21 08:27:51.276 [main] StreamReadConstraintsUtil - Jackson default value override logstash.jackson.stream-read-constraints.max-number-length configured to 10000
[WARN ] 2025-03-21 08:27:51.354 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2025-03-21 08:27:51.585 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[INFO ] 2025-03-21 08:27:51.743 [Converge PipelineAction::Create] Reflections - Reflections took 63 ms to scan 1 urls, producing 152 keys and 530 values
[INFO ] 2025-03-21 08:27:51.909 [Converge PipelineAction::Create] javapipeline - Pipeline main is configured with pipeline.ecs_compatibility: v8 setting. All plugins in this pipeline will default to ecs_compatibility => v8 unless explicitly configured otherwise.
[INFO ] 2025-03-21 08:27:51.965 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>20, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2500, "pipeline.sources"=>["/etc/logstash/conf.d/link.conf"], :thread=>"#<Thread:0x3bc0f56e /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:138 run>"}
[INFO ] 2025-03-21 08:27:52.341 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.38}
[INFO ] 2025-03-21 08:27:52.350 [[main]-pipeline-manager] file - No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_b16820b5539015bbabbb862c8f13d558", :path=>["/var/log/link.log"]}
[INFO ] 2025-03-21 08:27:52.353 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2025-03-21 08:27:52.358 [[main]<file] observingread - START, creating Discoverer, Watch with file and sincedb collections
[INFO ] 2025-03-21 08:27:52.360 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

I don't know why I am not getting any output; the pattern was validated in the Grok Debugger and works fine.

Logstash keeps track of what it has already read, so perhaps the file has already been read, since you are testing?

input {
  file {
    path => "/var/log/link.log"
    mode => "read"
    sincedb_path => "/dev/null" 
  }
}

Besides what @stephenb mentioned, there are a couple of other issues.

Is the file /var/log/link.log still being written to? If it is receiving new lines, you cannot use mode => "read"; you need to run the input in tail mode, which is the default, so you can either remove the mode setting or set mode => "tail" explicitly. If you use tail mode, you will also need to set start_position => "beginning" so the existing lines are read, as sketched below.
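A minimal sketch of that input block, assuming the file keeps receiving new lines and that you still want to reprocess it from the start while testing (sincedb_path => "/dev/null" is for testing only, since it prevents the read position from being persisted):

input {
  file {
    path => "/var/log/link.log"
    mode => "tail"                  # default mode; follows the file as new lines are appended
    start_position => "beginning"   # only applies to files not already tracked in the sincedb
    sincedb_path => "/dev/null"     # testing only: do not remember what was already read
  }
}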

The other issue is that you are running Logstash as root, which is not recommended and will mess up file permissions if you later want to run it as a service with systemd.
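If you do switch to the systemd service later, you may need to fix the ownership of files that were created while running as root. A rough sketch, assuming a package install with the usual logstash user and the default data path shown in the sincedb log line above:

# Restore ownership of data created while Logstash ran as root (paths and user are
# assumptions based on a standard package install), then start it as a service.
chown -R logstash:logstash /usr/share/logstash/data
systemctl restart logstash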
