Logstash elasticsearch output causing pipeline create error

Hi all,

I have a weird error I can't nail down. Running Elasticsearch / Logstash 7.15.1.

I have a simple Logstash conf that listens on a port for Beats data; it should just push events to an index.

input {
    beats {
        port => 2598
        type => "wds-metricbeat-input"
    }
}

filter {
}

output {
    if [type] == "wds-metricbeat-input" {
        elasticsearch {
            hosts => "http://10.0.60.60:9200"
            user => logstash_system
            password => 6EnArfBZ6OZtL2ncpkHQ
            index => "ecs-metricbeat-%{+YYYY.MM.dd}"
        }
    }
    else {
        elasticsearch {
            hosts => "http://10.0.60.60:9200"
            user => logstash_system
            password => 6EnArfBZ6OZtL2ncpkHQ
            index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        }
    }
}

I get the following error:

Nov 01 15:53:19 prodlst001 logstash[6281]: [2021-11-01T15:53:19,565][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:metricbeat, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"{\" at line 17, column 13 (byte 316) after output {\n    if [type] == \"wds-metricbeat-input\" {\n        elasticsearch {\n            hosts => \"http://10.0.60.60:9200\"\n            user => logstash_system\n            password => 6EnArfBZ6OZtL2ncpkHQ\n            ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:187:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:391:in `block in converge_state'"]}
Nov 01 15:53:23 prodlst001 logstash[6281]: [2021-11-01T15:53:23,422][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
Nov 01 15:53:23 prodlst001 logstash[6281]: [2021-11-01T15:53:23,424][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}

No matter what I try, it has a problem with index => "ecs-metricbeat-%{+YYYY.MM.dd}".
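As a sanity check, the same parser error can be reproduced without starting the service by using the config-test flag (paths assume the standard deb layout):

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit

That compiles the pipelines referenced in pipelines.yml and exits, printing the same ConfigurationError if the syntax is bad.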

I tried converting it to a data stream; same issue.
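For reference, a data stream variant of that output would look roughly like the sketch below; recent 7.x versions of the elasticsearch output support the data_stream settings. The type/dataset/namespace values here are illustrative only, and credentials are omitted:

output {
    elasticsearch {
        hosts => "http://10.0.60.60:9200"
        data_stream => "true"
        data_stream_type => "metrics"
        data_stream_dataset => "wds"
        data_stream_namespace => "default"
    }
}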

I know it's not a connection issue, as I can curl with auth and get a response:

lstadmin@prodlst001:~$ sudo curl -u logstash_system:6EnArfBZ6OZtL2ncpkHQ http://10.0.60.60:9200
{
  "name" : "prodwds001",
  "cluster_name" : "the-cluster",
  "cluster_uuid" : "VxXE6B4BRpiDoc9raj81-w",
  "version" : {
    "number" : "7.15.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "83c34f456ae29d60e94d886e455e6a3409bba9ed",
    "build_date" : "2021-10-07T21:56:19.031608185Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

I tried outputting to a file, as basic as you can get: listen, no filter, output to file.

input {
    beats {
        port => 2598
        type => "wds-metricbeat-input"
    }
}

filter {
}


output {
    file {
        path => "/tmp/test.txt"
    }
}

As you can imagine, this works.

For the life of me I can't figure out why this will not work when pushing to the ES index.

The only weird thing about this system is that it has had Ubuntu CIS hardening run on it, so there have been joyous travels in getting permissions nailed down and getting Logstash to run.

Further to this, I found the following in the pipeline log:

[2021-11-01T16:10:53,001][DEBUG][io.netty.util.internal.PlatformDependent0] java.nio.Bits.unaligned: available, true
[2021-11-01T16:10:53,005][DEBUG][io.netty.util.internal.PlatformDependent0] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable: class io.netty.util.internal.PlatformDependent0$6 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @3695ea5f
[2021-11-01T16:10:53,008][DEBUG][io.netty.util.internal.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): unavailable
[2021-11-01T16:10:53,014][DEBUG][io.netty.util.internal.PlatformDependent] sun.misc.Unsafe: available
[2021-11-01T16:10:53,018][DEBUG][io.netty.util.internal.PlatformDependent] maxDirectMemory: 1038876672 bytes (maybe)
[2021-11-01T16:10:53,021][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.tmpdir: /var/log/logstash/tmp (java.io.tmpdir)
[2021-11-01T16:10:53,022][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
[2021-11-01T16:10:53,026][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.maxDirectMemory: -1 bytes
[2021-11-01T16:10:53,027][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
[2021-11-01T16:10:53,034][DEBUG][io.netty.util.internal.CleanerJava9] java.nio.ByteBuffer.cleaner(): available
[2021-11-01T16:10:53,035][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.noPreferDirect: false
[2021-11-01T16:10:53,051][DEBUG][io.netty.util.internal.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: available

Don't know if this is the smoking gun.

From what I've found, this is just netty noise; it doesn't mean it's not working. As my test with output to file showed, the route in through input and out through output works.

Full netty log in case you're interested:

[2021-11-01T16:10:52,601][INFO ][logstash.javapipeline    ] Pipeline Java execution initialization time {"seconds"=>1.27}
[2021-11-01T16:10:52,644][INFO ][logstash.inputs.beats    ] Starting input listener {:address=>"0.0.0.0:2598"}
[2021-11-01T16:10:52,677][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"metricbeat"}
[2021-11-01T16:10:52,693][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2021-11-01T16:10:52,837][DEBUG][io.netty.util.internal.logging.InternalLoggerFactory] Using SLF4J as the default logging framework
[2021-11-01T16:10:52,845][DEBUG][io.netty.channel.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 8
[2021-11-01T16:10:52,894][DEBUG][io.netty.util.internal.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
[2021-11-01T16:10:52,896][DEBUG][io.netty.util.internal.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
[2021-11-01T16:10:52,917][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.noKeySetOptimization: false
[2021-11-01T16:10:52,918][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
[2021-11-01T16:10:52,996][DEBUG][io.netty.util.internal.PlatformDependent0] -Dio.netty.noUnsafe: false
[2021-11-01T16:10:52,996][DEBUG][io.netty.util.internal.PlatformDependent0] Java version: 11
[2021-11-01T16:10:52,998][DEBUG][io.netty.util.internal.PlatformDependent0] sun.misc.Unsafe.theUnsafe: available
[2021-11-01T16:10:52,999][DEBUG][io.netty.util.internal.PlatformDependent0] sun.misc.Unsafe.copyMemory: available
[2021-11-01T16:10:53,000][DEBUG][io.netty.util.internal.PlatformDependent0] java.nio.Buffer.address: available
[2021-11-01T16:10:53,000][DEBUG][io.netty.util.internal.PlatformDependent0] direct buffer constructor: unavailable: Reflective setAccessible(true) disabled
[2021-11-01T16:10:53,001][DEBUG][io.netty.util.internal.PlatformDependent0] java.nio.Bits.unaligned: available, true
[2021-11-01T16:10:53,005][DEBUG][io.netty.util.internal.PlatformDependent0] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable: class io.netty.util.internal.PlatformDependent0$6 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @3695ea5f
[2021-11-01T16:10:53,008][DEBUG][io.netty.util.internal.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): unavailable
[2021-11-01T16:10:53,014][DEBUG][io.netty.util.internal.PlatformDependent] sun.misc.Unsafe: available
[2021-11-01T16:10:53,018][DEBUG][io.netty.util.internal.PlatformDependent] maxDirectMemory: 1038876672 bytes (maybe)
[2021-11-01T16:10:53,021][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.tmpdir: /var/log/logstash/tmp (java.io.tmpdir)
[2021-11-01T16:10:53,022][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
[2021-11-01T16:10:53,026][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.maxDirectMemory: -1 bytes
[2021-11-01T16:10:53,027][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
[2021-11-01T16:10:53,034][DEBUG][io.netty.util.internal.CleanerJava9] java.nio.ByteBuffer.cleaner(): available
[2021-11-01T16:10:53,035][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.noPreferDirect: false
[2021-11-01T16:10:53,051][DEBUG][io.netty.util.internal.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: available
[2021-11-01T16:10:53,060][INFO ][org.logstash.beats.Server] Starting server on port: 2598
[2021-11-01T16:10:53,115][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.processId: 7066 (auto-detected)
[2021-11-01T16:10:53,119][DEBUG][io.netty.util.NetUtil    ] -Djava.net.preferIPv4Stack: false
[2021-11-01T16:10:53,119][DEBUG][io.netty.util.NetUtil    ] -Djava.net.preferIPv6Addresses: false
[2021-11-01T16:10:53,123][DEBUG][io.netty.util.NetUtilInitializations] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
[2021-11-01T16:10:53,125][DEBUG][io.netty.util.NetUtil    ] /proc/sys/net/core/somaxconn: 4096
[2021-11-01T16:10:53,126][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.machineId: 00:50:56:ff:fe:b7:71:71 (auto-detected)
[2021-11-01T16:10:53,149][DEBUG][io.netty.util.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
[2021-11-01T16:10:53,149][DEBUG][io.netty.util.ResourceLeakDetector] -Dio.netty.leakDetection.targetRecords: 4
[2021-11-01T16:10:53,199][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 8
[2021-11-01T16:10:53,199][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 8
[2021-11-01T16:10:53,199][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
[2021-11-01T16:10:53,200][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
[2021-11-01T16:10:53,200][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
[2021-11-01T16:10:53,200][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
[2021-11-01T16:10:53,200][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
[2021-11-01T16:10:53,200][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
[2021-11-01T16:10:53,200][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
[2021-11-01T16:10:53,200][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
[2021-11-01T16:10:53,201][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
[2021-11-01T16:10:53,201][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
[2021-11-01T16:10:53,223][DEBUG][io.netty.buffer.ByteBufUtil] -Dio.netty.allocator.type: pooled
[2021-11-01T16:10:53,224][DEBUG][io.netty.buffer.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 0
[2021-11-01T16:10:53,224][DEBUG][io.netty.buffer.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384

Your log is complaining about a configuration error:

Nov 01 15:53:19 prodlst001 logstash[6281]: [2021-11-01T15:53:19,565][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:metricbeat, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], "#", "{" at line 17, column 13 (byte 316) after output {\n if [type] == "wds-metricbeat-input" {\n elasticsearch {\n hosts => "http://10.0.60.60:9200"\n user => logstash_system\n password => 6EnArfBZ6OZtL2ncpkHQ\n "

This normally means a typo or something missing. Can you double-check your output configuration around that part?

I've gone through it: checked syntax, grammar, and placement.
Chucked it into a formatter as well.
I have altered, pulled, and pushed this output config; I've done so many of these it's hard to imagine a mistake in there.

I'm getting a fresh box spun up without the CIS hardening, and I'll copy the configs across after installing Logstash and run it there.
I suspect that will work; I have a sneaking suspicion the hardening has done something deep in the core of what Logstash needs.

If you are still getting that configuration error, it means the pipeline is not even running; it is a blocking error. The only issue in the logs you have shared so far is this error.

Do you still get this kind of error when starting logstash?

Yes... however, if I change the output to file-based, it works fine.

So, digging through some notes: after the hardening we had to make a change to the jvm.options file, specifically:

set the I/O temp directory

-Djava.io.tmpdir=/var/log/logstash/tmp

I have a bunch of Java files in directories there, though none of them are executable.
logstash is the owner and perms are set to 0644.

This is probably unrelated; if Logstash can't write to the Java tmpdir, it won't even start. Your issue is that Logstash is starting but complaining about a configuration error, which prevents the pipeline from starting.

But looking at the output you shared, I do not see anything that would make Logstash complain about a configuration error.

Can you try to start it again to get more recent logs? Share the ERROR logs it shows; for the moment you can disable the DEBUG level, as it will produce too much noise.

Also, are you editing the file directly on the server? Is it a Windows or Linux machine? Or are you editing in an outside editor and pasting the configuration in?

So all servers are Linux, Ubuntu 20.04:
2x ES nodes
1x Kibana
1x Logstash

All have had CIS hardening run on them.

The most recent from logstash-plain.log:


[2021-11-02T12:19:50,707][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-11-02T12:19:54,995][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2021-11-02T12:19:54,999][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["/etc/logstash/conf.d/metricbeat/2-wds-metricbeat-filter.conf.old", "/etc/logstash/conf.d/metricbeat/3-wds-metricbeat-output.conf.old", "/etc/logstash/conf.d/metricbeat/98-fail-filter.conf.old", "/etc/logstash/conf.d/metricbeat/99-fail-output.conf.old", "/etc/logstash/conf.d/metricbeat/test.conf.old"]}
[2021-11-02T12:19:55,000][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/metricbeat/1-wds-metricbeat.conf"}
[2021-11-02T12:19:55,000][DEBUG][org.logstash.config.ir.PipelineConfig] -------- Logstash Config ---------
[2021-11-02T12:19:55,001][DEBUG][org.logstash.config.ir.PipelineConfig] Config from source, source: LogStash::Config::Source::MultiLocal, pipeline_id:: metricbeat
[2021-11-02T12:19:55,001][DEBUG][org.logstash.config.ir.PipelineConfig] Config string, protocol: file, id: /etc/logstash/conf.d/metricbeat/1-wds-metricbeat.conf
[2021-11-02T12:19:55,001][DEBUG][org.logstash.config.ir.PipelineConfig] 

input {
    beats {
        port => 2598
        type => "wds-metricbeat-input"
    }
}

filter {
}

output {
    if [type] == "wds-metricbeat-input" {
        elasticsearch {
          hosts => "http://10.0.60.60:9200"
          user => logstash_system
          password => 6EnArfBZ6OZtL2ncpkHQ
          index => "ecs-metricbeat-%{+YYYY.MM.dd}"
        }
    }
    else {
        elasticsearch {
          hosts => "http://10.0.60.60:9200"
          user => logstash_system
          password => 6EnArfBZ6OZtL2ncpkHQ
          index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        }
    }
}
[2021-11-02T12:19:55,001][DEBUG][org.logstash.config.ir.PipelineConfig] Merged config
[2021-11-02T12:19:55,001][DEBUG][org.logstash.config.ir.PipelineConfig] 

input {
    beats {
        port => 2598
        type => "wds-metricbeat-input"
    }
}

filter {
}

output {
    if [type] == "wds-metricbeat-input" {
        elasticsearch {
          hosts => "http://10.0.60.60:9200"
          user => logstash_system
          password => 6EnArfBZ6OZtL2ncpkHQ
          index => "ecs-metricbeat-%{+YYYY.MM.dd}"
        }
    }
    else {
        elasticsearch {
          hosts => "http://10.0.60.60:9200"
          user => logstash_system
          password => 6EnArfBZ6OZtL2ncpkHQ
          index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        }
    }
}
[2021-11-02T12:19:55,001][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2021-11-02T12:19:55,003][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:metricbeat}
[2021-11-02T12:19:55,014][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:metricbeat, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"{\" at line 17, column 11 (byte 308) after output {\n    if [type] == \"wds-metricbeat-input\" {\n        elasticsearch {\n          hosts => \"http://10.0.60.60:9200\"\n          user => logstash_system\n          password => 6EnArfBZ6OZtL2ncpkHQ\n          ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:187:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:391:in `block in converge_state'"]}
[2021-11-02T12:19:55,711][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-11-02T12:19:55,711][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-11-02T12:20:00,716][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-11-02T12:20:00,718][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-11-02T12:20:05,721][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-11-02T12:20:05,722][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-11-02T12:20:10,725][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-11-02T12:20:10,725][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}

And the pipeline_metricbeat.log, also from last night:

[2021-11-01T16:44:23,305][DEBUG][org.logstash.beats.ConnectionHandler] 170d2444: batches pending: true
[2021-11-01T16:44:23,309][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:2598, remote: 127.0.0.1:49358] Received a new payload
[2021-11-01T16:44:23,309][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:2598, remote: 127.0.0.1:49358] Sending a new message for the listener, sequence: 17
[2021-11-01T16:44:23,325][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:2598, remote: 127.0.0.1:49358] Sending a new message for the listener, sequence: 18
[2021-11-01T16:44:23,327][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:2598, remote: 127.0.0.1:49358] Sending a new message for the listener, sequence: 19
[2021-11-01T16:44:23,328][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:2598, remote: 127.0.0.1:49358] Sending a new message for the listener, sequence: 20
[2021-11-01T16:44:23,329][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:2598, remote: 127.0.0.1:49358] Sending a new message for the listener, sequence: 21
[2021-11-01T16:44:23,330][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:2598, remote: 127.0.0.1:49358] Sending a new message for the listener, sequence: 22
[2021-11-01T16:44:23,331][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:2598, remote: 127.0.0.1:49358] Sending a new message for the listener, sequence: 23
[2021-11-01T16:44:23,332][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:2598, remote: 127.0.0.1:49358] Sending a new message for the listener, sequence: 24
[2021-11-01T16:44:23,333][DEBUG][org.logstash.beats.BeatsHandler] 170d2444: batches pending: false
[2021-11-01T16:44:23,443][DEBUG][logstash.outputs.file    ] File, writing event to file. {:filename=>"/tmp/test.txt"}
[2021-11-01T16:44:23,444][DEBUG][logstash.outputs.file    ] File, writing event to file. {:filename=>"/tmp/test.txt"}
[2021-11-01T16:44:23,444][DEBUG][logstash.outputs.file    ] File, writing event to file. {:filename=>"/tmp/test.txt"}
[2021-11-01T16:44:23,444][DEBUG][logstash.outputs.file    ] File, writing event to file. {:filename=>"/tmp/test.txt"}
[2021-11-01T16:44:23,444][DEBUG][logstash.outputs.file    ] File, writing event to file. {:filename=>"/tmp/test.txt"}
[2021-11-01T16:44:23,444][DEBUG][logstash.outputs.file    ] File, writing event to file. {:filename=>"/tmp/test.txt"}
[2021-11-01T16:44:23,444][DEBUG][logstash.outputs.file    ] File, writing event to file. {:filename=>"/tmp/test.txt"}
[2021-11-01T16:44:23,446][DEBUG][logstash.outputs.file    ] File, writing event to file. {:filename=>"/tmp/test.txt"}
[2021-11-01T16:44:24,041][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2021-11-01T16:44:24,041][DEBUG][logstash.outputs.file    ] Flushing file {:path=>"/tmp/test.txt", :fd=>#<IOWriter:0x62c4c829 @active=true, @io=#<File:/tmp/test.txt>>}
[2021-11-01T16:44:32,047][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2021-11-01T16:44:32,047][DEBUG][logstash.outputs.file    ] Flushing file {:path=>"/tmp/test.txt", :fd=>#<IOWriter:0x62c4c829 @active=true, @io=#<File:/tmp/test.txt>>}
[2021-11-01T16:44:32,686][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2021-11-01T16:44:34,048][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2021-11-01T16:44:34,049][DEBUG][logstash.outputs.file    ] Flushing file {:path=>"/tmp/test.txt", :fd=>#<IOWriter:0x62c4c829 @active=true, @io=#<File:/tmp/test.txt>>}
[2021-11-01T16:44:36,050][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2021-11-01T16:44:36,050][DEBUG][logstash.outputs.file    ] Flushing file {:path=>"/tmp/test.txt", :fd=>#<IOWriter:0x62c4c829 @active=true, @io=#<File:/tmp/test.txt>>}
[2021-11-01T16:44:37,686][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2021-11-01T16:44:37,712][DEBUG][logstash.outputs.file    ] Starting stale files cleanup cycle {:files=>{"/tmp/test.txt"=>#<IOWriter:0x62c4c829 @active=true, @io=#<File:/tmp/test.txt>>}}
[2021-11-01T16:44:40,053][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2021-11-01T16:44:40,053][DEBUG][logstash.outputs.file    ] Flushing file {:path=>"/tmp/test.txt", :fd=>#<IOWriter:0x62c4c829 @active=false, @io=#<File:/tmp/test.txt>>}
[2021-11-01T16:44:42,054][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2021-11-01T16:44:42,054][DEBUG][logstash.outputs.file    ] Flushing file {:path=>"/tmp/test.txt", :fd=>#<IOWriter:0x62c4c829 @active=false, @io=#<File:/tmp/test.txt>>}
[2021-11-01T16:44:47,714][INFO ][logstash.outputs.file    ] Closing file /tmp/test.txt
[2021-11-01T16:44:57,686][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2021-11-01T16:44:58,061][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2021-11-01T16:45:00,061][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2021-11-01T16:45:01,449][DEBUG][io.netty.buffer.PoolThreadCache] Freed 3 thread-local buffer(s) from thread: nioEventLoopGroup-2-3
[2021-11-01T16:45:01,449][DEBUG][io.netty.buffer.PoolThreadCache] Freed 3 thread-local buffer(s) from thread: nioEventLoopGroup-2-2
[2021-11-01T16:45:02,062][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2021-11-01T16:45:02,686][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2021-11-01T16:45:02,733][DEBUG][logstash.outputs.file    ] Starting stale files cleanup cycle {:files=>{}}
[2021-11-01T16:45:02,734][DEBUG][logstash.outputs.file    ] 0 stale files found {:inactive_files=>{}}
[2021-11-01T16:45:04,063][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2021-11-01T16:45:05,474][DEBUG][io.netty.buffer.PoolThreadCache] Freed 20 thread-local buffer(s) from thread: defaultEventExecutorGroup-4-1
[2021-11-01T16:45:05,474][DEBUG][io.netty.buffer.PoolThreadCache] Freed 21 thread-local buffer(s) from thread: defaultEventExecutorGroup-4-2
[2021-11-01T16:45:06,066][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2021-11-01T16:45:07,494][DEBUG][logstash.inputs.beats    ] Closing {:plugin=>"LogStash::Inputs::Beats"}
[2021-11-01T16:45:07,500][DEBUG][logstash.pluginmetadata  ] Removing metadata for plugin 3b99335f787b18a5454d2c07e964f07005d724e6a381adbba4fa7a9b127b2430
[2021-11-01T16:45:07,505][DEBUG][logstash.javapipeline    ] Input plugins stopped! Will shutdown filter/output workers. {:pipeline_id=>"metricbeat", :thread=>"#<Thread:0x4ee1c9f3 run>"}
[2021-11-01T16:45:07,521][DEBUG][logstash.javapipeline    ] Shutdown waiting for worker thread {:pipeline_id=>"metricbeat", :thread=>"#<Thread:0x5e4d2951 run>"}
[2021-11-01T16:45:07,611][DEBUG][logstash.outputs.file    ] Closing {:plugin=>"LogStash::Outputs::File"}
[2021-11-01T16:45:08,071][DEBUG][logstash.outputs.file    ] Close: closing files
[2021-11-01T16:45:08,072][DEBUG][logstash.pluginmetadata  ] Removing metadata for plugin af32de41e63db12756a1d299870422f571f3368e645cc576585337a6345eb2a1
[2021-11-01T16:45:08,074][DEBUG][logstash.javapipeline    ] Pipeline has been shutdown {:pipeline_id=>"metricbeat", :thread=>"#<Thread:0x4ee1c9f3 run>"}
[2021-11-01T16:45:08,076][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"metricbeat"}

I've also had a chance to test the same config on a fresh box with no hardening and only Logstash installed, and it does the same: spits out the same error.

So I went through the logs on the ES box: nothing. No errors, nothing about connectivity issues, nothing about failed connections or auths.

Is a curl test conclusive enough to say Logstash should have no issues communicating with the ES server? Is tcp/9200 the only port that Logstash uses to communicate and upload its traffic on?
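A slightly stronger check than hitting the root endpoint is to POST a throwaway document with the same credentials, since that exercises the same write path the elasticsearch output uses (the index name here is made up):

curl -u logstash_system:6EnArfBZ6OZtL2ncpkHQ -H 'Content-Type: application/json' -XPOST 'http://10.0.60.60:9200/curl-write-test/_doc' -d '{"test": 1}'

And for what it's worth: with a plain http output and no monitoring or central management enabled, tcp/9200 is the only port this pipeline would talk to ES on; 9300 is node-to-node transport only.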

Just to check, this is the current X-Pack config in the cluster's elasticsearch.yml:

#action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
#xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.keystore.path: http.p12
#xpack.security.authc.api_key.enabled


As far as this goes, it's set to only encrypt cluster traffic, not client-based traffic. My understanding is that this covers the 2x ES nodes and the 1x Kibana node; nothing else at this point is using heightened security on traffic.

Is there maybe an issue with this somehow? I don't see any logs or errors that scream "no cert found" or "you should be using a cert to talk to me, connection closed", so I'm not sure this is an issue. Seeing as I have a role set up and am using a username and password to authenticate... the curl works! So theoretically, so should the output portion of the config.

On the -Djava.io.tmpdir=/var/log/logstash/tmp front:

I tested Logstash's ability to write to that dir: I changed perms, denied it access, and it screamed.

I then deleted the whole dir and restarted Logstash, and it recreated the whole dir with Java files inside. So I'm pretty certain it's got enough perms to do what it needs to.

Hi,

You cannot have if statements in your output plugin.

May I suggest using a filter instead, setting the target index in metadata?

input {
    beats {
        port => 2598
        type => "wds-metricbeat-input"
    }
}

filter {
    if [type] == "wds-metricbeat-input" {
        mutate{ add_field =>{"[@metadata][target_index]"=> "ecs-metricbeat-%{+YYYY.MM.dd}"}}
}

output {
        elasticsearch {
            hosts => "http://10.0.60.60:9200"
            user => logstash_system
            password => 6EnArfBZ6OZtL2ncpkHQ
            index => "%{[@metadata][target_index]}"
        }
}

Interesting... thank you, much neater.
Still get the same error, though, now with the IF statement not in the output section.

Please show me your full pipeline configuration and the debug log of logstash booting.

I think this is a simple configuration mistake.

You need double quotes around the password. I would use double quotes around the user as well.

It looks like you cannot have a bareword that starts with a digit. It is a spectacularly unhelpful error message!
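Something like this, with the same settings but the values quoted, should get past that parser error:

output {
    if [type] == "wds-metricbeat-input" {
        elasticsearch {
            hosts => "http://10.0.60.60:9200"
            user => "logstash_system"
            password => "6EnArfBZ6OZtL2ncpkHQ"
            index => "ecs-metricbeat-%{+YYYY.MM.dd}"
        }
    }
}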

This is not correct; you can have conditionals in the output block, and this is in the documentation.

You can have something like this:

output {
    if conditional {
        elasticsearch {}
    }
}

What you cannot have, and not just in the output plugin, is something like this:

output {
    elasticsearch {
        if conditional { }
    }
}

I think that @Badger found the issue; it could be a problem with the user and password values not having quotes, double or single, with the error message not being helpful.

Still getting the same kind of error:


Nov 03 09:46:06 prodlst001 logstash[82164]: [2021-11-03T09:46:06,603][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:metricbeat}
Nov 03 09:46:06 prodlst001 logstash[82164]: [2021-11-03T09:46:06,700][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:metricbeat, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"=>\" at line 14, column 23 (byte 268) after filter {\n    if [type] == \"wds-metricbeat-input\" {\n        mutate{ add_field =>{\"[@metadata][target_index]\"=> \"ecs-metricbeat-%{+YYYY.MM.dd}\"}}\n}\n\noutput {\n        elasticsearch ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:187:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:391:in `block in converge_state'"]}

logstash.yml

node.name: logstash-core
path.data: /var/lib/logstash
config.reload.automatic: true
config.reload.interval: 60s
config.debug: true
http.host: "10.0.60.61"
http.port: 9600
log.level: debug
path.logs: /var/log/logstash
pipeline.separate_logs: true

pipelines.yml

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

#- pipeline.id: main
#  path.config: "/etc/logstash/conf.d/*.conf"
- pipeline.id: metricbeat
  path.config: "/etc/logstash/conf.d/metricbeat/*.conf"
  pipeline.workers: 1
  pipeline.batch.size: 5000
  pipeline.batch.delay: 50
  queue.type: memory

logstash conf.d metricbeat.conf

input {
    beats {
        port => 2598
        type => "wds-metricbeat-input"
    }
}

filter {
    if [type] == "wds-metricbeat-input" {
        mutate{ add_field =>{"[@metadata][target_index]"=> "ecs-metricbeat-%{+YYYY.MM.dd}"}}
}

output {
        elasticsearch {
            hosts => "http://10.0.60.60:9200"
            user => "logstash_system"
            password => "6EnArfBZ6OZtL2ncpkHQ"
            index => "%{[@metadata][target_index]}"
        }
}

logstash startup.options file

# After changing anything here, you need to re-run $LS_HOME/bin/system-install
# as root to push the changes to the init script.
################################################################################

# Override Java location
#JAVACMD=/usr/bin/java

# Set a home directory
LS_HOME=/usr/share/logstash

# logstash settings directory, the path which contains logstash.yml
LS_SETTINGS_DIR=/etc/logstash

# Arguments to pass to logstash
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"

# Arguments to pass to java
LS_JAVA_OPTS=""

# pidfiles aren't used the same way for upstart and systemd; this is for sysv users.
LS_PIDFILE=/var/run/logstash.pid

# user and group id to be invoked as
LS_USER=logstash
LS_GROUP=logstash

# Enable GC logging by uncommenting the appropriate lines in the GC logging
# section in jvm.options
LS_GC_LOG_FILE=/var/log/logstash/gc.log

# Open file limit
LS_OPEN_FILES=16384

# Nice level
LS_NICE=19

# Change these to have the init script named and described differently
# This is useful when running multiple instances of Logstash on the same
# physical box or vm
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"

# If you need to run a command or script before launching Logstash, put it
# between the lines beginning with `read` and `EOM`, and uncomment those lines.
###
## read -r -d '' PRESTART << EOM
## EOM

logstash jvm.options file

## JVM configuration

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly

## Locale
# Set the locale language
#-Duser.language=en

# Set the locale country
#-Duser.country=US

# Set the locale variant, if any
#-Duser.variant=

## basic

# set the I/O temp directory
-Djava.io.tmpdir=/var/log/logstash/tmp

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
#-Djna.nosys=true

# Turn on JRuby invokedynamic
-Djruby.compile.invokedynamic=true
# Force Compilation
-Djruby.jit.threshold=0
# Make sure joni regexp interruptability is enabled
-Djruby.regexp.interruptible=true

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof

## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime

# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}

# Entropy source for randomness
-Djava.security.egd=file:/dev/urandom

# Copy the logging context from parent threads to children
-Dlog4j2.isThreadContextMapInheritable=true

full startup log

[2021-11-03T10:07:12,488][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2021-11-03T10:07:12,498][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.15.1", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.12+7 on 11.0.12+7 +indy +jit [linux-x86_64]"}
[2021-11-03T10:07:12,505][DEBUG][logstash.modules.scaffold] Found module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2021-11-03T10:07:12,506][DEBUG][logstash.plugins.registry] Adding plugin to the registry {:name=>"netflow", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x2e7011b5 @directory="/usr/share/logstash/modules/netflow/configuration", @module_name="netflow", @kibana_version_parts=["6", "0", "0"]>}
[2021-11-03T10:07:12,508][DEBUG][logstash.modules.scaffold] Found module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2021-11-03T10:07:12,508][DEBUG][logstash.plugins.registry] Adding plugin to the registry {:name=>"fb_apache", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x6af761ba @directory="/usr/share/logstash/modules/fb_apache/configuration", @module_name="fb_apache", @kibana_version_parts=["6", "0", "0"]>}
[2021-11-03T10:07:12,804][DEBUG][logstash.runner          ] -------- Logstash Settings (* means modified) ---------
[2021-11-03T10:07:12,804][DEBUG][logstash.runner          ] *node.name: "logstash-core" (default: "prodlst001")
[2021-11-03T10:07:12,805][DEBUG][logstash.runner          ] *path.data: "/var/lib/logstash" (default: "/usr/share/logstash/data")
[2021-11-03T10:07:12,805][DEBUG][logstash.runner          ] modules.cli: <Java::OrgLogstashUtil::ModulesSettingArray:1 []>
[2021-11-03T10:07:12,805][DEBUG][logstash.runner          ] modules: []
[2021-11-03T10:07:12,805][DEBUG][logstash.runner          ] modules_list: []
[2021-11-03T10:07:12,805][DEBUG][logstash.runner          ] modules_variable_list: []
[2021-11-03T10:07:12,805][DEBUG][logstash.runner          ] modules_setup: false
[2021-11-03T10:07:12,805][DEBUG][logstash.runner          ] config.test_and_exit: false
[2021-11-03T10:07:12,806][DEBUG][logstash.runner          ] *config.reload.automatic: true (default: false)
[2021-11-03T10:07:12,806][DEBUG][logstash.runner          ] *config.reload.interval: #<Java::OrgLogstashUtil::TimeValue:0x23d1aff3> (default: #<Java::OrgLogstashUtil::TimeValue:0x64117499>)
[2021-11-03T10:07:12,806][DEBUG][logstash.runner          ] config.support_escapes: false
[2021-11-03T10:07:12,806][DEBUG][logstash.runner          ] config.field_reference.parser: "STRICT"
[2021-11-03T10:07:12,806][DEBUG][logstash.runner          ] metric.collect: true
[2021-11-03T10:07:12,806][DEBUG][logstash.runner          ] pipeline.id: "main"
[2021-11-03T10:07:12,806][DEBUG][logstash.runner          ] pipeline.system: false
[2021-11-03T10:07:12,807][DEBUG][logstash.runner          ] pipeline.workers: 4
[2021-11-03T10:07:12,807][DEBUG][logstash.runner          ] pipeline.batch.size: 125
[2021-11-03T10:07:12,807][DEBUG][logstash.runner          ] pipeline.batch.delay: 50
[2021-11-03T10:07:12,807][DEBUG][logstash.runner          ] pipeline.unsafe_shutdown: false
[2021-11-03T10:07:12,807][DEBUG][logstash.runner          ] pipeline.java_execution: true
[2021-11-03T10:07:12,807][DEBUG][logstash.runner          ] pipeline.reloadable: true
[2021-11-03T10:07:12,807][DEBUG][logstash.runner          ] pipeline.plugin_classloaders: false
[2021-11-03T10:07:12,808][DEBUG][logstash.runner          ] *pipeline.separate_logs: true (default: false)
[2021-11-03T10:07:12,808][DEBUG][logstash.runner          ] pipeline.ordered: "auto"
[2021-11-03T10:07:12,808][DEBUG][logstash.runner          ] pipeline.ecs_compatibility: "disabled"
[2021-11-03T10:07:12,808][DEBUG][logstash.runner          ] path.plugins: []
[2021-11-03T10:07:12,808][DEBUG][logstash.runner          ] *config.debug: true (default: false)
[2021-11-03T10:07:12,808][DEBUG][logstash.runner          ] *log.level: "debug" (default: "info")
[2021-11-03T10:07:12,808][DEBUG][logstash.runner          ] version: false
[2021-11-03T10:07:12,809][DEBUG][logstash.runner          ] help: false
[2021-11-03T10:07:12,809][DEBUG][logstash.runner          ] enable-local-plugin-development: false
[2021-11-03T10:07:12,809][DEBUG][logstash.runner          ] log.format: "plain"
[2021-11-03T10:07:12,809][DEBUG][logstash.runner          ] http.enabled: true
[2021-11-03T10:07:12,809][DEBUG][logstash.runner          ] *http.host: "10.0.60.61" (default: "127.0.0.1")
[2021-11-03T10:07:12,809][DEBUG][logstash.runner          ] *http.port: 9600..9600 (default: 9600..9700)
[2021-11-03T10:07:12,809][DEBUG][logstash.runner          ] http.environment: "production"
[2021-11-03T10:07:12,810][DEBUG][logstash.runner          ] queue.type: "memory"
[2021-11-03T10:07:12,810][DEBUG][logstash.runner          ] queue.drain: false
[2021-11-03T10:07:12,810][DEBUG][logstash.runner          ] queue.page_capacity: 67108864
[2021-11-03T10:07:12,810][DEBUG][logstash.runner          ] queue.max_bytes: 1073741824
[2021-11-03T10:07:12,810][DEBUG][logstash.runner          ] queue.max_events: 0
[2021-11-03T10:07:12,810][DEBUG][logstash.runner          ] queue.checkpoint.acks: 1024
[2021-11-03T10:07:12,810][DEBUG][logstash.runner          ] queue.checkpoint.writes: 1024
[2021-11-03T10:07:12,810][DEBUG][logstash.runner          ] queue.checkpoint.interval: 1000
[2021-11-03T10:07:12,811][DEBUG][logstash.runner          ] queue.checkpoint.retry: false
[2021-11-03T10:07:12,811][DEBUG][logstash.runner          ] dead_letter_queue.enable: false
[2021-11-03T10:07:12,811][DEBUG][logstash.runner          ] dead_letter_queue.max_bytes: 1073741824
[2021-11-03T10:07:12,811][DEBUG][logstash.runner          ] dead_letter_queue.flush_interval: 5000
[2021-11-03T10:07:12,811][DEBUG][logstash.runner          ] slowlog.threshold.warn: #<Java::OrgLogstashUtil::TimeValue:0x3a369bbd>
[2021-11-03T10:07:12,811][DEBUG][logstash.runner          ] slowlog.threshold.info: #<Java::OrgLogstashUtil::TimeValue:0x5476f67a>
[2021-11-03T10:07:12,811][DEBUG][logstash.runner          ] slowlog.threshold.debug: #<Java::OrgLogstashUtil::TimeValue:0x1a366d0>
[2021-11-03T10:07:12,812][DEBUG][logstash.runner          ] slowlog.threshold.trace: #<Java::OrgLogstashUtil::TimeValue:0x28056451>
[2021-11-03T10:07:12,812][DEBUG][logstash.runner          ] keystore.classname: "org.logstash.secret.store.backend.JavaKeyStore"
[2021-11-03T10:07:12,812][DEBUG][logstash.runner          ] *keystore.file: "/etc/logstash/logstash.keystore" (default: "/usr/share/logstash/config/logstash.keystore")
[2021-11-03T10:07:12,812][DEBUG][logstash.runner          ] *path.queue: "/var/lib/logstash/queue" (default: "/usr/share/logstash/data/queue")
[2021-11-03T10:07:12,812][DEBUG][logstash.runner          ] *path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue" (default: "/usr/share/logstash/data/dead_letter_queue")
[2021-11-03T10:07:12,812][DEBUG][logstash.runner          ] *path.settings: "/etc/logstash" (default: "/usr/share/logstash/config")
[2021-11-03T10:07:12,812][DEBUG][logstash.runner          ] *path.logs: "/var/log/logstash" (default: "/usr/share/logstash/logs")
[2021-11-03T10:07:12,813][DEBUG][logstash.runner          ] xpack.monitoring.enabled: false
[2021-11-03T10:07:12,813][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]
[2021-11-03T10:07:12,813][DEBUG][logstash.runner          ] xpack.monitoring.collection.interval: #<Java::OrgLogstashUtil::TimeValue:0x5d0c6120>
[2021-11-03T10:07:12,813][DEBUG][logstash.runner          ] xpack.monitoring.collection.timeout_interval: #<Java::OrgLogstashUtil::TimeValue:0x6aa7c6c6>
[2021-11-03T10:07:12,813][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.username: "logstash_system"
[2021-11-03T10:07:12,813][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.ssl.verification_mode: "certificate"
[2021-11-03T10:07:12,814][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.sniffing: false
[2021-11-03T10:07:12,814][DEBUG][logstash.runner          ] xpack.monitoring.collection.pipeline.details.enabled: true
[2021-11-03T10:07:12,814][DEBUG][logstash.runner          ] xpack.monitoring.collection.config.enabled: true
[2021-11-03T10:07:12,814][DEBUG][logstash.runner          ] monitoring.enabled: false
[2021-11-03T10:07:12,814][DEBUG][logstash.runner          ] monitoring.elasticsearch.hosts: ["http://localhost:9200"]
[2021-11-03T10:07:12,814][DEBUG][logstash.runner          ] monitoring.collection.interval: #<Java::OrgLogstashUtil::TimeValue:0x1ace7331>
[2021-11-03T10:07:12,814][DEBUG][logstash.runner          ] monitoring.collection.timeout_interval: #<Java::OrgLogstashUtil::TimeValue:0xa06e763>
[2021-11-03T10:07:12,814][DEBUG][logstash.runner          ] monitoring.elasticsearch.username: "logstash_system"
[2021-11-03T10:07:12,815][DEBUG][logstash.runner          ] monitoring.elasticsearch.ssl.verification_mode: "certificate"
[2021-11-03T10:07:12,815][DEBUG][logstash.runner          ] monitoring.elasticsearch.sniffing: false
[2021-11-03T10:07:12,815][DEBUG][logstash.runner          ] monitoring.collection.pipeline.details.enabled: true
[2021-11-03T10:07:12,815][DEBUG][logstash.runner          ] monitoring.collection.config.enabled: true
[2021-11-03T10:07:12,815][DEBUG][logstash.runner          ] node.uuid: ""
[2021-11-03T10:07:12,815][DEBUG][logstash.runner          ] xpack.management.enabled: false
[2021-11-03T10:07:12,815][DEBUG][logstash.runner          ] xpack.management.logstash.poll_interval: #<Java::OrgLogstashUtil::TimeValue:0x42f623e1>
[2021-11-03T10:07:12,816][DEBUG][logstash.runner          ] xpack.management.pipeline.id: ["main"]
[2021-11-03T10:07:12,816][DEBUG][logstash.runner          ] xpack.management.elasticsearch.username: "logstash_system"
[2021-11-03T10:07:12,816][DEBUG][logstash.runner          ] xpack.management.elasticsearch.hosts: ["https://localhost:9200"]
[2021-11-03T10:07:12,816][DEBUG][logstash.runner          ] xpack.management.elasticsearch.ssl.verification_mode: "certificate"
[2021-11-03T10:07:12,816][DEBUG][logstash.runner          ] xpack.management.elasticsearch.sniffing: false
[2021-11-03T10:07:12,816][DEBUG][logstash.runner          ] --------------- Logstash Settings -------------------
[2021-11-03T10:07:12,856][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2021-11-03T10:07:12,928][DEBUG][logstash.agent           ] Setting up metric collection
[2021-11-03T10:07:13,065][DEBUG][logstash.instrument.periodicpoller.os] Starting {:polling_interval=>5, :polling_timeout=>120}
[2021-11-03T10:07:13,290][DEBUG][logstash.instrument.periodicpoller.jvm] Starting {:polling_interval=>5, :polling_timeout=>120}
[2021-11-03T10:07:13,402][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-11-03T10:07:13,410][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-11-03T10:07:13,432][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2021-11-03T10:07:13,443][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Starting {:polling_interval=>5, :polling_timeout=>120}

logstash startup pt2

[2021-11-03T10:07:13,883][DEBUG][logstash.agent           ] Starting agent
[2021-11-03T10:07:13,913][DEBUG][logstash.agent           ] Starting puma
[2021-11-03T10:07:13,924][DEBUG][logstash.agent           ] Trying to start WebServer {:port=>9600}
[2021-11-03T10:07:13,966][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2021-11-03T10:07:13,972][DEBUG][logstash.api.service     ] [api-service] start
[2021-11-03T10:07:14,107][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["/etc/logstash/conf.d/metricbeat/2-wds-metricbeat-filter.conf.old", "/etc/logstash/conf.d/metricbeat/3-wds-metricbeat-output.conf.old", "/etc/logstash/conf.d/metricbeat/98-fail-filter.conf.old", "/etc/logstash/conf.d/metricbeat/99-fail-output.conf.old", "/etc/logstash/conf.d/metricbeat/test.conf.old"]}
[2021-11-03T10:07:14,114][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/metricbeat/1-wds-metricbeat.conf"}
[2021-11-03T10:07:14,153][DEBUG][org.logstash.config.ir.PipelineConfig] -------- Logstash Config ---------
[2021-11-03T10:07:14,157][DEBUG][org.logstash.config.ir.PipelineConfig] Config from source, source: LogStash::Config::Source::MultiLocal, pipeline_id:: metricbeat
[2021-11-03T10:07:14,157][DEBUG][org.logstash.config.ir.PipelineConfig] Config string, protocol: file, id: /etc/logstash/conf.d/metricbeat/1-wds-metricbeat.conf
[2021-11-03T10:07:14,158][DEBUG][org.logstash.config.ir.PipelineConfig] 

input {
    beats {
        port => 2598
        type => "wds-metricbeat-input"
    }
}

filter {
    if [type] == "wds-metricbeat-input" {
        mutate{ add_field =>{"[@metadata][target_index]"=> "ecs-metricbeat-%{+YYYY.MM.dd}"}}
}

output {
        elasticsearch {
            hosts => "http://10.0.60.60:9200"
            user => "logstash_system"
            password => "6EnArfBZ6OZtL2ncpkHQ"
            index => "%{[@metadata][target_index]}"
        }
}
[2021-11-03T10:07:14,158][DEBUG][org.logstash.config.ir.PipelineConfig] Merged config
[2021-11-03T10:07:14,158][DEBUG][org.logstash.config.ir.PipelineConfig] 

input {
    beats {
        port => 2598
        type => "wds-metricbeat-input"
    }
}

filter {
    if [type] == "wds-metricbeat-input" {
        mutate{ add_field =>{"[@metadata][target_index]"=> "ecs-metricbeat-%{+YYYY.MM.dd}"}}
}

output {
        elasticsearch {
            hosts => "http://10.0.60.60:9200"
            user => "logstash_system"
            password => "6EnArfBZ6OZtL2ncpkHQ"
            index => "%{[@metadata][target_index]}"
        }
}
[2021-11-03T10:07:14,218][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2021-11-03T10:07:14,227][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2021-11-03T10:07:14,234][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:metricbeat}
[2021-11-03T10:07:14,992][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:metricbeat, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"=>\" at line 14, column 23 (byte 268) after filter {\n    if [type] == \"wds-metricbeat-input\" {\n        mutate{ add_field =>{\"[@metadata][target_index]\"=> \"ecs-metricbeat-%{+YYYY.MM.dd}\"}}\n}\n\noutput {\n        elasticsearch ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:187:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:391:in `block in converge_state'"]}
[2021-11-03T10:07:18,482][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-11-03T10:07:18,485][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
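That last error at line 14, column 23 is the parser reaching the bareword elasticsearch while it still thinks it is inside the filter block: the closing brace in the filter ends the if, but the filter block itself is never closed, so output { gets read as a filter plugin named "output", and elasticsearch inside it is taken as an attribute name that must be followed by =>. The filter needs one more closing brace:

filter {
    if [type] == "wds-metricbeat-input" {
        mutate { add_field => { "[@metadata][target_index]" => "ecs-metricbeat-%{+YYYY.MM.dd}" } }
    }
}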