WARNING: Could not find logstash.yml

Hello!

I have installed ELK.

Note: all lines of the different config files not mentioned below are commented out (#xyabc).

Elasticsearch conf in "/etc/elasticsearch/elasticsearch.yml" is :
node.name: elastic1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.162.188.102
http.port: 9200

Kibana conf in "/etc/kibana/kibana.yml" is :
server.port: 5601
server.host: "10.162.188.102"
server.name: "kibana"
elasticsearch.url: "http://10.162.188.102:9200"

Logstash conf in "/etc/logstash/logstash.yml" is :
node.name: logstash
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
config.test_and_exit: true
path.logs: /var/log/logstash
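One remark on this settings file: `config.test_and_exit: true` tells Logstash to only validate the pipeline configuration and then exit, so no events are processed while it is set. For a normal run it would typically be:

```yaml
# /etc/logstash/logstash.yml (sketch) -- validate configs but keep running
config.test_and_exit: false
```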

What do you think about that?

So now I'll take an example config file to monitor system information on the local server where the stack is installed. I created the file "/etc/logstash/conf.d/logstash-syslog.conf":

input {
  file {
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    type => "stdin-type"
    host => "10.162.188.102"
    port => "9300"
    node_name => "elastic1"
  }
  stdout { codec => rubydebug }
}
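As a rough illustration of what that grok pattern extracts, here is a plain-Python regex approximation (a sketch, not the real grok engine; the named groups mirror the grok field names, and the sample log line is invented):

```python
import re

# Approximate Python equivalents of SYSLOGTIMESTAMP, SYSLOGHOST, DATA,
# POSINT and GREEDYDATA, with named groups mirroring the grok field names.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[^\[\s:]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

line = "Jan 12 10:25:09 elastic1 sshd[1234]: Accepted publickey for root"
m = SYSLOG_RE.match(line)
print(m.group("syslog_program"), m.group("syslog_pid"))  # prints: sshd 1234
```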

To run my file, I go into "/usr/share/logstash/bin" and execute this command:
./logstash -f logstash-syslog.conf (first I stopped the logstash service).

It returns this message:

**WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults

Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties.
Using default config which logs to console

14:00:36.538 [LogStash::Runner] INFO logstash.agent - No config files found in path {:path=>"/usr/share/logstash/bin/logstash-syslog.conf"}

14:00:36.546 [LogStash::Runner] ERROR logstash.agent - failed to fetch pipeline configuration {:message=>"No config files found: logstash-syslog.conf. Can you make sure this path is a logstash config file?"}**

:cry:

It's because Logstash usually looks for logstash.yml in your $LS_HOME/config directory. You can either start Logstash with the flag --path.settings /etc/logstash or create a symlink. (see https://discuss.elastic.co/t/logstash-configuration-of-5-0-1/68883)

You passed logstash-syslog.conf to the -f parameter, but -f expects a full path.

Your complete command line for Logstash should be:
./logstash -f /etc/logstash/conf.d/logstash-syslog.conf --path.settings /etc/logstash


Thank you very much!

If I understand correctly, I must edit the startup.options file and modify /etc/logstash/startup.options from:

    LS_HOME=/usr/share/logstash
    LS_SETTINGS_DIR=/etc/logstash

to:

    LS_HOME=/etc/logstash
    LS_SETTINGS_DIR=/etc/logstash

to solve it?

PS: How can I execute this command without being in the /bin directory, i.e. run

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-syslog.conf --path.settings /etc/logstash

instead of:

./logstash -f /etc/logstash/conf.d/logstash-syslog.conf --path.settings /etc/logstash

Is that possible? Thanks for your help.
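For what it's worth, a quick sketch of both options (paths assume the default .deb/.rpm package layout):

```shell
# 1) Call the binary by its absolute path (note: no leading "." before /usr):
#    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-syslog.conf --path.settings /etc/logstash

# 2) Or add the bin directory to PATH for the current shell session:
export PATH="$PATH:/usr/share/logstash/bin"
echo "$PATH" | grep -o '/usr/share/logstash/bin' | head -n 1
```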

Hey

The command executed successfully; it returns the message Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties.

It's good!

I went to look at the logs in /var/log/logstash and I saw this:

Jan 6 23:58 logstash-plain-2017-01-06.log
Jan 7 23:58 logstash-plain-2017-01-07.log
Jan 8 23:58 logstash-plain-2017-01-08.log
Jan 9 23:58 logstash-plain-2017-01-09.log
Jan 10 23:59 logstash-plain-2017-01-10.log
Jan 11 13:59 logstash-plain-2017-01-11.log
Jan 12 10:25 logstash-plain.log

OK! I cat logstash-plain.log and I see this bad message:

[2017-01-12T10:25:09,344][FATAL][logstash.runner          ] The given configuration is invalid. Reason: The setting `type` in plugin `elasticsearch` is obsolete and is no longer available. You can achieve this same behavior with the new conditionals, like: `if [type] == "sometype" { elasticsearch { ... } }`.

Is it a problem with the Elasticsearch DB?

The type option is no longer supported in the filter and output sections; you can use conditionals for the same purpose (as the error message says).
You need to change:

output {
  elasticsearch {
    type => "stdin-type"
    host => "10.162.188.102"
    port => "9300"
    node_name => "elastic1"
  }
  stdout { codec => rubydebug }
}

to:

output {
  if [type] == "stdin-type" { # send to elasticsearch all events with type == "stdin-type" (note: the file input in this thread sets type => "syslog", so the value here must match)
    elasticsearch {
      host => "10.162.188.102"
      port => "9300"
      node_name => "elastic1"
    }
  }
  stdout { codec => rubydebug } # print every event to stdout, regardless of its type
}

Thank you very much!

This is good, but now I have:

[2017-01-12T11:44:16,037][ERROR][logstash.outputs.elasticsearch] Unknown setting 'host' for elasticsearch
[2017-01-12T11:44:16,040][ERROR][logstash.outputs.elasticsearch] Unknown setting 'port' for elasticsearch
[2017-01-12T11:44:16,040][ERROR][logstash.outputs.elasticsearch] Unknown setting 'node_name' for elasticsearch

I'll go look for the problem! If you know, please explain it to me.

See the current options of the elasticsearch output plugin here: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html

To sum up:

  • node_name is no longer necessary, since documents are sent through the REST API
  • host and port have been replaced by a hosts => ["ip:port"] option
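Putting the two bullets together, and assuming the REST port 9200 from this thread's Elasticsearch config (rather than the transport port 9300), the output would look something like:

```
output {
  elasticsearch {
    # hosts takes an array of "host:port" strings; 9200 is the HTTP/REST port
    hosts => ["10.162.188.102:9200"]
  }
}
```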

Ahhh thank you !

I sometimes get mixed up with older versions...

I don't understand the latest fatal error message; everything seems fine, but when I read logstash-plain.log I see this:

[2017-01-12T14:42:06,252][FATAL][logstash.runner          ] The given configuration is invalid. Reason: Expected one of #, ,, ] at line 25, column 19 (byte 654) after output {
    if [type] == "stdin-type" {
            elasticsearch {
            hosts => [10.162

Wtf? If I understand, it's missing a "#"??? But this is an IP address, so why is it telling me that?

10.162#188.102 :sob:

Can you show me the exact configuration you are using? By the way, you need to enclose the IP in quotes, i.e. hosts => ["10.162.188.102"]

OK, I will try that!

My first conf under "/etc/logstash/conf.d/logstash-syslog.conf" is :

input {
  file {
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  if [type] == "stdin-type" {
    elasticsearch {
      hosts => [10.162.188.102:9300]
    }
  }
  stdout { codec => rubydebug }
}

I will create all my future pipelines under this directory. It's good practice, don't you think?

One pipeline = one graph in Kibana, is that the way?

Ty

@Beuhlet_Reseau, if you look at this documentation you will see that you have to use the following syntax: hosts => ["host1:port", "host2:port"]
So in your case it would look like this: hosts => ["10.162.188.102:9300"]
(Don't forget the quotes)



It's up to you. I am using one input file, one output file, and a different filter file for each type. But it's a personal thing; work however suits you best.

Ok JvS you're the best !

I'm just getting started with the ELK stack.

I'm very interested in your technical organization.

What links your single input file, filters, and output when they are in separate files? :frowning:

One file per filter type is a very good idea; it's very adaptable and cleaner than mine. If you have technical documentation about that, I'll take it with pleasure!

Again thank you !

After many corrections, the configuration is OK!

[2017-01-13T11:04:21,711][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

My first conf is a basic copy from the web to monitor local system logs (CPU, memory...) ... normally :smile:

Where is the index name required by Kibana to configure an index pattern? I don't understand why I can't find it in my Kibana.

I tested this command from the getting started guide:

logstash -f /etc/logstash/conf.d/logstash-syslog.conf --config.reload.automatic

OK!

But I still get this message: unable to fetch mapping. do you have indices matching the pattern? I tried the getting started guide, but it's very abstract.

PS: I saw a strange thing! When I use this command:

curl '10.162.188.102:9200/_cat/indices'

the console returns this message:

yellow open .kibana Y2TFE8VQTcq4HILKNSO_7g 1 1 1 0 3.1kb 3.1kb

In the graphical interface, I see Status: Green.

Maybe it's nothing, but I'm mentioning it just in case :blush:

I'm looking into the output; I'll let you know if I succeed.

I'll go look at Elasticsearch's logs!

In var/log/el/elasticsearch_deprecation.log I see this! :slight_smile:

[2017-01-09T16:26:43,316][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [title]

[2017-01-09T16:26:43,316][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [description]

[2017-01-09T16:26:43,316][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [timelion_sheet]

[2017-01-09T16:26:43,316][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [timelion_interval]

[2017-01-09T16:26:43,316][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [timelion_other_interval]

[2017-01-09T16:26:43,317][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [searchSourceJSON]

[2017-01-09T16:26:43,819][WARN ][o.e.d.i.q.QueryParseContext] query malformed, empty clause found at [1:143]

Maybe it's nothing, but I don't think so :slight_smile:

Kibana's error logs:

main ERROR Unable to invoke factory method in class class org.apache.logging.log4j.core.appender.
RollingFileAppender for element RollingFile. java.lang.reflect.InvocationTargetException

main ERROR FileManager (/var/log/logstash/logstash-plain.log) java.io.FileNotFoundException: /var/log/logstash/logstash-plain.log (Permission denied) java.io.FileNotFoundException: /var/log/logstash/logstash-plain.log (Permission denied)

Caused by: java.lang.IllegalStateException: ManagerFactory [org.apache.logging.log4j.core.appender.rolling.RollingFileMan
ager$RollingFileManagerFactory@71259e4e] unable to create manager for [/var/log/logstash/logstash-plain.log] with data [org.apache.logging.log4j.core.appende
r.rolling.RollingFileManager$FactoryData@6bdd6cd7[pattern=/var/log/logstash/logstash-plain-%d{yyyy-MM-dd}.log, append=true, bufferedIO=true, bufferSize=8192,
policy=CompositeTriggeringPolicy(policies=[TimeBasedTriggeringPolicy(nextRolloverMillis=0, interval=1, modulate=true)]), strategy=DefaultRolloverStrategy(mi
n=1, max=7), advertiseURI=null, layout=org.apache.logging.log4j.core.layout.JsonLayout@1b5f061d]]

I modified my configuration to use three files.

Input.conf :

input {
  file {
    path => [ "/var/log/*.log", "/var/log/syslog" ]
    type => "syslog"
  }
}

Filter-syslog (10-syslog-filter.conf) :
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

And Output.conf :

output {
  elasticsearch {
    hosts => ["10.162.188.102:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

My index IS logstash-%{+YYYY.MM.dd}....
Even that way, Kibana does not find it! (Unable to fetch mapping...)
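One way to check whether the index actually exists before configuring the Kibana index pattern is to ask Elasticsearch directly (using the IP from this thread; the _cat/indices endpoint accepts an index pattern):

```shell
# List any logstash-* indices; an empty response means nothing was indexed yet
curl 'http://10.162.188.102:9200/_cat/indices/logstash-*?v'
```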

./logstash -f /etc/logstash/conf.d/02-syslog-input.conf --path.settings /etc/logstash --config.reload.automatic

ERROR: Failed to load settings file from "path.settings". Aborting... path.setting=/etc/logstash, exception=Psych::SyntaxError, message=>(): expected , but found BlockMappingStart while parsing a block mapping at line 155 column 2

What is it this time...

Also, when I force a pipeline creation I get the following error message:

error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200][Manticore::SocketException] Connection refused (Connection refused)"}

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.