Logstash fails to parse Nginx logs

I'm running Logstash-oss 7.2.0 (Docker) and Filebeat 7.2.0 (RPM).
Filebeat collects Nginx access/error logs using the Nginx module and sends them to Logstash.
My Logstash pipeline was taken from the official documentation:

ilm_enabled => false
input {
  beats {
    port => 5044
    host => "192.168.36.60"
    type => "nginx"
  }
}
filter {
  if [type] == "nginx" {
    if [fileset][module] == "nginx" {
      if [fileset][name] == "access" {
        grok {
          match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
          remove_field => "message"
        }
        mutate {
          add_field => { "read_timestamp" => "%{@timestamp}" }
        }
        date {
          match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
          remove_field => "[nginx][access][time]"
        }
        useragent {
          source => "[nginx][access][agent]"
          target => "[nginx][access][user_agent]"
          remove_field => "[nginx][access][agent]"
        }
        geoip {
          source => "[nginx][access][remote_ip]"
          target => "[nginx][access][geoip]"
        }
      }
      else if [fileset][name] == "error" {
        grok {
          match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
          remove_field => "message"
        }
        mutate {
          rename => { "@timestamp" => "read_timestamp" }
        }
        date {
          match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
          remove_field => "[nginx][error][time]"
        }
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://odfe-node1:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "admin"
    password => "admin"
  }
}

When I start the Logstash container, I get an error:

    OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
    WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    Thread.exclusive is deprecated, use Thread::Mutex
    Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
    [2019-07-30T09:43:46,853][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
    [2019-07-30T09:43:46,864][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
    [2019-07-30T09:43:47,094][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.2.0"}
    [2019-07-30T09:43:47,111][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"da52531a-a4bd-4aa3-a131-cd028d1213cc", :path=>"/usr/share/logstash/data/uuid"}
    [2019-07-30T09:43:47,624][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, input, filter, output at line 1, column 1 (byte 1)", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:24:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
    [2019-07-30T09:43:47,807][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
    [2019-07-30T09:43:52,852][INFO ][logstash.runner          ] Logstash shut down.

Could someone help me resolve this issue? Thanks.

After I moved ilm_enabled => false into the output section (it is an option of the elasticsearch output plugin, so it cannot sit at the top level of the pipeline file), Logstash started working:

output {
  elasticsearch {
    hosts => ["http://odfe-node1:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "admin"
    password => "admin"
    ilm_enabled => false
  }
}

but then I got another error:

[2019-07-30T10:29:00,931][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2019-07-30T10:29:07,306][ERROR][logstash.javapipeline    ] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Beats host=>"192.168.36.60", id=>"c016806bced27f8bd02dab18cb63ad991e62649f41101ef3d654c3cd717e4634", type=>"nginx", port=>5044, enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_93f4d2a3-ae8e-4190-a662-27b93823032b", enable_metric=>true, charset=>"UTF-8">, ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>4>
  Error: Cannot assign requested address
  Exception: Java::JavaNet::BindException
  Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:461)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:453)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:227)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:130)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:558)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1358)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:501)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:486)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:1019)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:254)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:366)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:163)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:404)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:462)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:897)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:834)

Interestingly, after I commented out host => "192.168.36.60" in the input section, the errors went away:

input {
  beats {
    port => 5044
    #host => "192.168.36.60"
    type => "nginx"
  }
}

but the logs aren't parsed.

This is what my access logs look like:
domain.com 192.168.36.195 [30/Jul/2019:13:53:03 +0300] "GET /oracle/WebMvcModules/Account/Login?lastLoginName=ADMIN HTTP/1.1" 499 "192.168.0.1:443" 0 0.008 ms "https://domain.com/oracle/WebMvcModules/Host" "-" "-"
Please help, thanks.

The host option should contain a hostname or address on the machine where Logstash is running. Sometimes folks think it should be the address of the machine where Beats is running; it should not.
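For example, to accept connections on every interface of the Logstash machine you can either omit host or set it explicitly to 0.0.0.0 (a minimal sketch; 0.0.0.0 is also the plugin's default):

input {
  beats {
    port => 5044
    # Must be an address that exists on the Logstash host itself;
    # "0.0.0.0" (the default) binds to all local interfaces.
    host => "0.0.0.0"
  }
}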

Your parsing is conditional upon the contents of the [fileset] object. What does that look like in Kibana? (Copy and paste from the JSON tab.)

{
  "_index": "filebeat-7.2.0-2019.07.30",
  "_type": "_doc",
  "_id": "FicEQ2wB3Enz_fN8rVlv",
  "_version": 1,
  "_score": null,
  "_source": {
    "agent": {
      "hostname": "vm-nginx",
      "ephemeral_id": "51b3558b-374b-4571-8665-7395fd6a14e4",
      "id": "99f9f868-c1ea-4c67-9661-36dd997a677a",
      "version": "7.2.0",
      "type": "filebeat"
    },
    "input": {
      "type": "log"
    },
    "@version": "1",
    "log": {
      "offset": 312633,
      "file": {
        "path": "/var/log/nginx/uacld.ssl.access.log"
      }
    },
    "tags": [
      "beats_input_codec_plain_applied",
      "_grokparsefailure"
    ],
    "message": "domain.com 192.168.1.10 [30/Jul/2019:16:14:30 +0300] \"GET /oracle/WebMvcModules/Account/Login?lastLoginName=ADMIN HTTP/1.1\" 200 \"192.168.36.52:443\" 2049 0.008 ms \"https://domain.com/oracle/WebMvcModules/Host\" \"2.52\" \"-\" ",
    "@timestamp": "2019-07-30T13:14:33.037Z",
    "ecs": {
      "version": "1.0.0"
    },
    "host": {
      "architecture": "x86_64",
      "hostname": "vm-nginx",
      "id": "c84e318b27e7d08f810d7a995d39af63",
      "containerized": false,
      "os": {
        "codename": "xenial",
        "family": "debian",
        "kernel": "4.4.0-142-generic",
        "version": "16.04.6 LTS (Xenial Xerus)",
        "platform": "ubuntu",
        "name": "Ubuntu"
      },
      "name": "vm-nginx"
    }
  },
  "fields": {
    "@timestamp": [
      "2019-07-30T13:14:33.037Z"
    ]
  },
  "sort": [
    1564492473037
  ]
}

None of those conditionals are satisfied by that document, and I do not think the ingest pipeline is removing them. What does the filebeat configuration look like?
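As a side note, a quick way to see exactly which fields each event carries on the Logstash side is a temporary rubydebug output (a debugging sketch, not part of your pipeline):

output {
  # Temporary debug output: prints every event with all of its fields, so you
  # can confirm whether [fileset][module] and [fileset][name] are present.
  stdout { codec => rubydebug }
}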

I had the same suspicion and tried commenting those conditionals out, but it made no difference.
My filebeat config is simple:

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/nginx/*.log 
  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  exclude_files: ['.gz$']

#----------------------------- Logstash output --------------------------------

output.logstash:
  # The Logstash hosts
  hosts: ["tstkibana:5044"]

You do not appear to have the nginx module enabled in filebeat. That would explain why it is not adding the expected fields.
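For reference, an enabled module normally means a modules.d/nginx.yml along these lines (the paths below are illustrative); note that events collected by a plain log input on the same files, as in the config above, bypass the module and therefore carry none of its fields:

# /etc/filebeat/modules.d/nginx.yml -- enabled with: filebeat modules enable nginx
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/*access.log"]   # illustrative glob
  error:
    enabled: true
    var.paths: ["/var/log/nginx/*error.log"]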

It has been enabled from the beginning:

# filebeat modules list
Enabled:
nginx

@Badger, I think the problem is in the first two fields. In my case the access log begins with a domain name and an IP address separated by a space:
domain.com 192.168.36.195
while the grok pattern expects remote_ip and user_name separated by a dash (-):
%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]}
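
If that is the cause, one option is to replace the stock module pattern with a grok pattern written against this custom format. Below is a minimal sketch that matches the sample line above; the field names after response_code (upstream_addr, body_sent, request_time, extra1, extra2) are guesses and should be renamed to match the variables in the actual nginx log_format:

grok {
  # Sketch only: the field names after response_code are placeholders;
  # align them with your nginx log_format before relying on them.
  match => { "message" => ["%{IPORHOST:[nginx][access][domain]} %{IPORHOST:[nginx][access][remote_ip]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} \"%{DATA:[nginx][access][upstream_addr]}\" %{NUMBER:[nginx][access][body_sent][bytes]} %{NUMBER:[nginx][access][request_time]} ms \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][extra1]}\" \"%{DATA:[nginx][access][extra2]}\""] }
  remove_field => "message"
}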
