Getting an error while running a .conf file

Below is the error I get when I run the config file with logstash -f logconfig.conf:

C:\Users\Priyesh.Chourasia\Desktop\ElkOld\logstash-6.5.2\bin>logstash -f logconfig.conf
Sending Logstash logs to C:/Users/Priyesh.Chourasia/Desktop/ElkOld/logstash-6.5.2/logs which is now configured via log4j2.properties
[2019-03-20T17:07:59,402][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"C:/Users/Priyesh.Chourasia/Desktop/ElkOld/logstash-6.5.2/data/queue"}
[2019-03-20T17:07:59,415][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"C:/Users/Priyesh.Chourasia/Desktop/ElkOld/logstash-6.5.2/data/dead_letter_queue"}
[2019-03-20T17:07:59,515][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-03-20T17:07:59,567][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.2"}
[2019-03-20T17:07:59,597][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"16a82f1b-11cd-4380-9928-77ff7321e06a", :path=>"C:/Users/Priyesh.Chourasia/Desktop/ElkOld/logstash-6.5.2/data/uuid"}
[2019-03-20T17:08:00,407][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::NoSuchMethodError", :message=>"org.apache.commons.codec.binary.Hex.encodeHexString([B)Ljava/lang/String;", :backtrace=>["org.logstash.execution.AbstractPipelineExt.initialize(AbstractPipelineExt.java:124)", "org.logstash.execution.AbstractPipelineExt$INVOKER$i$3$0$initialize.call(AbstractPipelineExt$INVOKER$i$3$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodThree.call(JavaMethod.java:1186)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:743)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:983)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:974)", "org.jruby.ir.targets.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:39)", "C_3a_.Users.Priyesh_dot_Chourasia.Desktop.ElkOld.logstash_minus_6_dot_5_dot_2.logstash_minu…
…
[2019-03-20T17:08:00,462][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::JavaLang::NoSuchMethodError` for `PipelineAction::Create`>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:103:in `create'", "org/logstash/execution/ConvergeResultExt.java:34:in `add'", "C:/Users/Priyesh.Chourasia/Desktop/ElkOld/logstash-6.5.2/logstash-core/lib/logstash/agent.rb:329:in `block in converge_state'"]}
[2019-03-20T17:08:00,550][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

Below is the jar in my Logstash folder C:\Users\Priyesh.Chourasia\Desktop\ElkOld\logstash-6.5.2\logstash-core\lib\jars:

commons-codec-1.11, which has the method below. I'm confused about which version is conflicting: I checked versions 1.4 through 1.12, and every jar has this same method.
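A NoSuchMethodError at runtime usually means the JVM loaded a different copy of the class than the jar you inspected, so the single commons-codec-1.11 jar sitting in lib/jars does not prove which copy actually won. One quick check is to scan the whole install directory (and anything else on the classpath, e.g. a global CLASSPATH variable) for duplicate commons-codec jars. A minimal sketch; the path in the comment is this machine's install directory, and the helper name is mine:

```python
import os
import re

def find_jars(root, artifact="commons-codec"):
    """Walk a directory tree and collect every jar whose filename
    matches <artifact>-<version>.jar, e.g. commons-codec-1.11.jar."""
    pattern = re.compile(re.escape(artifact) + r"-[\d.]+\.jar$")
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for fn in filenames:
            if pattern.search(fn):
                hits.append(os.path.join(dirpath, fn))
    return hits

# More than one hit (or a copy outside logstash-core/lib/jars) would
# explain the NoSuchMethodError: an older class shadows the 1.11 one.
# jars = find_jars(r"C:\Users\Priyesh.Chourasia\Desktop\ElkOld\logstash-6.5.2")
```

Running it over the Logstash root and over any directories named in the CLASSPATH environment variable should show whether a second, older commons-codec is in play.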

public static String encodeHexString(final byte[] data) {
    return new String(encodeHex(data));
}
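For reference, the contract of that one-liner (byte array in, lowercase hex string out) is easy to mirror outside the JVM; a sketch of the equivalent behaviour in Python:

```python
import binascii

def encode_hex_string(data: bytes) -> str:
    """Mirror of commons-codec's Hex.encodeHexString(byte[]):
    each input byte becomes two lowercase hex digits."""
    return binascii.hexlify(data).decode("ascii")
```

So the error is not about what the method does, only about whether a class with that exact `([B)Ljava/lang/String;` signature was present in the commons-codec copy the JVM resolved.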

Please suggest a solution for the above issue.

Below are the logstash.conf file contents:

input {
  file {
    type => "java"
    path => "C:\elk\spring-boot-elk.log"
    codec => multiline {
      pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
      negate => "true"
      what => "previous"
    }
  }
}

filter {

  if [message] =~ "\tat" {
    grok {
      match => ["message", "^(\tat)"]
      add_tag => ["stacktrace"]
    }
  }

  grok {
    match => [ "message",
               "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- \[(?<thread>[A-Za-z0-9-]+)\] [A-Za-z0-9.]*\.(?<class>[A-Za-z0-9#_]+)\s*:\s+(?<logmessage>.*)",
               "message",
               "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- .+? :\s+(?<logmessage>.*)"
             ]
  }

  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
}

output {

  stdout {
    codec => rubydebug
  }

  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
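The grok patterns in the filter are ordinary regexes with named captures, so they can be dry-run against a sample Spring Boot log line before handing them to Logstash. A minimal sketch in Python with the grok macros (%{YEAR}, %{LOGLEVEL}, …) expanded by hand; the sample line is illustrative, and the logger name is simplified to one \S+ token:

```python
import re

# Hand-expanded equivalent of the first grok pattern:
#   (?<timestamp>...) LEVEL PID --- [thread] logger : message
LINE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})\s+"
    r"(?P<level>[A-Z]+)\s+(?P<pid>\d+)\s+---\s+"
    r"\[(?P<thread>[A-Za-z0-9-]+)\]\s+\S+\s*:\s+(?P<logmessage>.*)"
)

sample = ("2019-03-20 17:08:00.407  ERROR 1234 --- [main] "
          "o.l.execution.AbstractPipelineExt : Failed to execute action")

m = LINE.match(sample)
# m.group("timestamp"), m.group("level"), m.group("logmessage"), etc.
# now hold the fields the grok filter would emit for this line.
```

The multiline codec in the input follows the same idea: any line that does not start with that timestamp (negate => "true") is appended to the previous event, which is what folds stack-trace lines into the log entry they belong to.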

Below are the pipelines.yml file contents:

# List of pipelines to be loaded by Logstash
#
# This document must be a list of dictionaries/hashes, where the keys/values are pipeline settings.
# Default values for omitted settings are read from the logstash.yml file.
# When declaring multiple pipelines, each MUST have its own pipeline.id.
#
# Example of two pipelines:
#
# - pipeline.id: test
#   pipeline.workers: 1
#   pipeline.batch.size: 1
#   config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
# - pipeline.id: another_test
#   queue.type: persisted
#   path.config: "/tmp/logstash/*.config"
#
# Available options:
#
#   # name of the pipeline
#   pipeline.id: mylogs
#
#   # The configuration string to be used by this pipeline
#   config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
#
#   # The path from where to read the configuration text
#   path.config: "C:\Users\Priyesh.Chourasia\Desktop\ElkOld\logstash-6.5.2\bin\logstash.conf"
#
#   # How many worker threads execute the Filters+Outputs stage of the pipeline
#   pipeline.workers: 1 (actually defaults to number of CPUs)
#
#   # How many events to retrieve from inputs before sending to filters+workers
#   pipeline.batch.size: 125
#
#   # How long to wait in milliseconds while polling for the next event
#   # before dispatching an undersized batch to filters+outputs
#   pipeline.batch.delay: 50
#
#   # How many workers should be used per output plugin instance
#   pipeline.output.workers: 1
#
#   # Internal queuing model, "memory" for legacy in-memory based queuing and
#   # "persisted" for disk-based acked queueing. Default is memory
#   queue.type: memory
#
#   # If using queue.type: persisted, the page data files size. The queue data consists of
#   # append-only data files separated into pages. Default is 64mb
#   queue.page_capacity: 64mb
#
#   # If using queue.type: persisted, the maximum number of unread events in the queue.
#   # Default is 0 (unlimited)
#   queue.max_events: 0
#
#   # If using queue.type: persisted, the total capacity of the queue in number of bytes.
#   # Default is 1024mb or 1gb
#   queue.max_bytes: 1024mb
#
#   # If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
#   # Default is 1024, 0 for unlimited
#   queue.checkpoint.acks: 1024
#
#   # If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
#   # Default is 1024, 0 for unlimited
#   queue.checkpoint.writes: 1024
#
#   # If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
#   # Default is 1000, 0 for no periodic checkpoint.
#   queue.checkpoint.interval: 1000
#
#   # Enable Dead Letter Queueing for this pipeline.
#   dead_letter_queue.enable: false
#
#   # If using dead_letter_queue.enable: true, the maximum size of dead letter queue for this pipeline. Entries
#   # will be dropped if they would increase the size of the dead letter queue beyond this setting.
#   # Default is 1024mb
#   dead_letter_queue.max_bytes: 1024mb
#
#   # If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
#   # Default is path.data/dead_letter_queue
#   path.dead_letter_queue:
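Note that the startup log above already warns "Ignoring the 'pipelines.yml' file because modules or command line options are specified", so with -f on the command line this file plays no part in the failing run. For completeness, if pipelines.yml were to drive the pipeline instead, an uncommented minimal entry might look like the following (a sketch; forward slashes sidestep backslash handling in quoted YAML strings on Windows):

```yaml
- pipeline.id: mylogs
  path.config: "C:/Users/Priyesh.Chourasia/Desktop/ElkOld/logstash-6.5.2/bin/logstash.conf"
  pipeline.workers: 1
```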
