Index not created; Logstash doesn't open and read the log file to ship logs on startup

I have been running into an issue since yesterday where Logstash doesn't open and read the log file on startup. Could you please help take a look at what the problem is here?

Here is the logstash.conf:

input {
  file {
    path => "C:\elkstack\elasticsearch-6.5.1\logs\njson.log"
    start_position => "beginning"
    sincedb_path => "null"
    codec => "json"
  }
}

output {
  elasticsearch {
    action => "index"
    hosts  => "localhost:9200"
    index  => "logstash-talent"
  }
}
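One aside on this config: sincedb_path => "null" points the file input at a file literally named null in Logstash's working directory, so read positions are still persisted there. If the intent is to discard sincedb state between runs, the null device on Windows is NUL; a minimal sketch of that assumption:

input {
  file {
    path => "C:\elkstack\elasticsearch-6.5.1\logs\njson.log"
    start_position => "beginning"
    sincedb_path => "NUL"    # Windows null device; on Linux/macOS use "/dev/null"
    codec => "json"
  }
}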

Switch to using the stdout output while debugging...

input {
  file {
    path => "C:\elkstack\elasticsearch-6.5.1\logs\njson.log"
    start_position => "beginning"
    sincedb_path => "null"
    codec => "json"
  }
}

output {
  stdout { codec => rubydebug }
}
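With that config, every event Logstash manages to read is printed to the console. From the Logstash installation directory on Windows (config file name assumed), run:

bin\logstash.bat -f logstash.conf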

Then turn on TRACE level logging in config/logstash.yml.

Then post the log lines that are printed at TRACE here.
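For reference, that is a one-line setting in Logstash 6.x:

# config/logstash.yml
log.level: trace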

When I paste the log and save, the page returns the message "Sorry, you can only mention 10 users in a post." and I am not able to post the log successfully.

Sorry, how can I paste the full log here?

Hi guyboertje,
I use this:
input {
  file {
    path => "C:\elkstack\elasticsearch-6.5.1\logs\app.log"
    start_position => "beginning"
    sincedb_path => "null"
    codec => "json"
  }
}

output {
  elasticsearch {
    action => "index"
    hosts  => "localhost:9200"
    index  => "logstash-scm"
  }

  stdout {
    codec => rubydebug
  }
}

And I attach a picture of the full log; the full log trace is too big to post directly here in debug mode. Please advise.

Thanks,
Cherie

Hi guyboertje,

I found this: "Found existing Elasticsearch template. Skipping". Does this matter?

[2019-01-18T15:19:07,292][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-01-18T15:19:07,477][DEBUG][logstash.outputs.elasticsearch] Found existing Elasticsearch template. Skipping template management {:name=>"logstash"}
[2019-01-18T15:19:08,395][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x27107877 run>"}

Ok. About posting logs here.
You have some options:

  • Put triple backticks (```) before and after the log text.
  • Use pastebin, github gists or similar and put in a link.

That said, I looked at the image, and it looks like you set the logging level to DEBUG, but I asked you to set it to TRACE...

Then turn on TRACE level logging in config/logstash.yml.
Then post the log lines that are printed at TRACE here.

I don't need all the log lines, only the ones at TRACE, and only the ones that match

grep 'TRACE]\[filewatch' ./logs/logstash-plain.log

or similar.

On Windows, findstr is the grep alternative for finding a specific string in files.
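For example, a literal search over the plain-text log (default location assumed):

findstr /L /C:"TRACE][filewatch" .\logs\logstash-plain.log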

Hi guyboertje,

Sorry, please check below. I observed "[TRACE][filewatch.discoverer ] discover_files {"count"=>0}", but I double-checked that the log file does exist at that path and that there is one event in the file, in JSON format.
[2019-01-19T08:22:00,158][DEBUG][logstash.outputs.elasticsearch] Found existing Elasticsearch template. Skipping template management {:name=>"logstash"}
[2019-01-19T08:22:00,898][TRACE][logstash.inputs.file ] Registering file input {:path=>["C:\elkstack\elasticsearch-6.5.1\logs\app.log"]}
[2019-01-19T08:22:01,023][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1f520e7c run>"}
[2019-01-19T08:22:01,082][TRACE][logstash.agent ] Converge results {:success=>true, :failed_actions=>[], :successful_actions=>["id: main, action_type: LogStash::PipelineAction::Create"]}
[2019-01-19T08:22:01,164][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-19T08:22:01,202][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-01-19T08:22:01,229][DEBUG][logstash.agent ] Starting puma
[2019-01-19T08:22:01,256][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
[2019-01-19T08:22:01,295][TRACE][filewatch.sincedbcollection] open: reading from null
[2019-01-19T08:22:01,308][TRACE][filewatch.sincedbcollection] open: count of keys read: 0
[2019-01-19T08:22:01,367][DEBUG][logstash.api.service ] [api-service] start
[2019-01-19T08:22:01,390][TRACE][filewatch.discoverer ] discover_files {"count"=>0}
[2019-01-19T08:22:01,735][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-19T08:22:03,997][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2019-01-19T08:22:04,491][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-01-19T08:22:04,493][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-01-19T08:22:06,097][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x1f520e7c sleep>"}
[2019-01-19T08:22:09,002][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2019-01-19T08:22:09,514][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-01-19T08:22:09,516][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-01-19T08:22:11,107][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x1f520e7c sleep>"}
[2019-01-19T08:22:14,007][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2019-01-19T08:22:14,534][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-01-19T08:22:14,536][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-01-19T08:22:15,537][TRACE][filewatch.discoverer ] discover_files {"count"=>0}
[2019-01-19T08:22:16,110][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x1f520e7c sleep>"}
[2019-01-19T08:22:19,011][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2019-01-19T08:22:19,551][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-01-19T08:22:19,552][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-01-19T08:22:21,111][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x1f520e7c sleep>"}
[2019-01-19T08:22:24,018][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2019-01-19T08:22:24,572][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-01-19T08:22:24,575][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-01-19T08:22:26,112][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x1f520e7c sleep>"}
[2019-01-19T08:22:28,462][WARN ][logstash.runner ] SIGINT received. Shutting down.
[2019-01-19T08:22:28,495][DEBUG][logstash.instrument.periodicpoller.os] Stopping
[2019-01-19T08:22:28,520][DEBUG][logstash.instrument.periodicpoller.jvm] Stopping
[2019-01-19T08:22:28,524][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Stopping
[2019-01-19T08:22:28,526][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Stopping
[2019-01-19T08:22:28,609][DEBUG][logstash.agent ] Shutting down all pipelines {:pipelines_count=>1}
[2019-01-19T08:22:28,623][DEBUG][logstash.agent ] Converging pipelines state {:actions_count=>1}
[2019-01-19T08:22:28,632][DEBUG][logstash.agent ] Executing action {:action=>LogStash::PipelineAction::Stop/pipeline_id:main}
[2019-01-19T08:22:28,655][DEBUG][logstash.pipeline ] Stopping inputs {:pipeline_id=>"main", :thread=>"#<Thread:0x1f520e7c sleep>"}
[2019-01-19T08:22:28,662][DEBUG][logstash.inputs.file ] Stopping {:plugin=>"LogStash::Inputs::File"}
[2019-01-19T08:22:28,726][INFO ][filewatch.observingtail ] QUIT - closing all files and shutting down.
[2019-01-19T08:22:28,732][DEBUG][logstash.pipeline ] Stopped inputs {:pipeline_id=>"main", :thread=>"#<Thread:0x1f520e7c sleep>"}
[2019-01-19T08:22:29,588][TRACE][filewatch.sincedbcollection] caller requested sincedb write (tail mode subscribe complete - shutting down)
[2019-01-19T08:22:29,599][TRACE][filewatch.sincedbcollection] sincedb_write: to: null

[2019-01-19T08:22:00,898][TRACE][logstash.inputs.file ] Registering file input {:path=>["C:\elkstack\elasticsearch-6.5.1\logs\app.log"]}
[2019-01-19T08:22:01,295][TRACE][filewatch.sincedbcollection] open: reading from null
[2019-01-19T08:22:01,308][TRACE][filewatch.sincedbcollection] open: count of keys read: 0
[2019-01-19T08:22:01,390][TRACE][filewatch.discoverer ] discover_files {"count"=>0}

This very much suggests that "C:\elkstack\elasticsearch-6.5.1\logs\app.log" does not exist.

  1. How come? I checked again and the file does exist, and the file path in the Logstash config is path => "C:\elkstack\elasticsearch-6.5.1\logs\app.log".

  2. Even when I change the file path to path => "C:\elkstack\elasticsearch-6.5.1\logs\*.log", it still says "[TRACE][filewatch.discoverer ] discover_files {"count"=>0}".

Thanks,
Cherie

Can you try replacing the backslashes with forward slashes?

path => "C:/elkstack/elasticsearch-6.5.1/logs/app.log"
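For what it's worth, the file input matches path as a glob pattern (where a backslash escapes the next character), which is why forward slashes are needed even on Windows; a wildcard takes the same form:

path => "C:/elkstack/elasticsearch-6.5.1/logs/*.log"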

It discovers the file now, but it still doesn't process the content in app.log, and the index logstash-talentaudit is not created either.

[2019-01-20T08:40:54,239][TRACE][filewatch.sincedbcollection] open: reading from null
[2019-01-20T08:40:54,256][TRACE][filewatch.sincedbcollection] open: count of keys read: 0
[2019-01-20T08:40:54,324][DEBUG][logstash.api.service ] [api-service] start
[2019-01-20T08:40:54,353][TRACE][filewatch.discoverer ] discover_files {"count"=>1}
[2019-01-20T08:40:54,568][TRACE][filewatch.discoverer ] discover_files handling: {"new discovery"=>true, "watched_file details"=>"<FileWatch::WatchedFile: @filename='app.log', @state='watched', @recent_states='[:watched]', @bytes_read='0', @bytes_unread='0', current_size='1501', last_stat_size='1501', file_open?='false', @initial=true, @sincedb_key='unknown 0 0'>"}
[2019-01-20T08:40:54,585][TRACE][filewatch.sincedbcollection] associate: finding {"inode"=>"unknown", "path"=>"C:/elkstack/elasticsearch-6.5.1/logs/app.log"}
[2019-01-20T08:40:54,598][TRACE][filewatch.sincedbcollection] associate: unmatched
[2019-01-20T08:40:54,745][TRACE][filewatch.tailmode.processor] Delayed Delete processing
[2019-01-20T08:40:54,765][TRACE][filewatch.tailmode.processor] Watched + Active restat processing
[2019-01-20T08:40:54,828][TRACE][filewatch.tailmode.processor] Rotation In Progress processing
[2019-01-20T08:40:54,845][TRACE][filewatch.tailmode.processor] Watched processing
[2019-01-20T08:40:54,884][TRACE][filewatch.tailmode.handlers.createinitial] handling: app.log
[2019-01-20T08:40:54,923][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-20T08:40:54,925][TRACE][filewatch.tailmode.handlers.createinitial] opening app.log
[2019-01-20T08:40:54,973][TRACE][filewatch.tailmode.handlers.createinitial] handle_specifically opened file handle: 100689, path: app.log
[2019-01-20T08:40:55,039][TRACE][filewatch.tailmode.handlers.createinitial] add_new_value_sincedb_collection {"position"=>0, "watched_file details"=>"<FileWatch::WatchedFile: @filename='app.log', @state='active', @recent_states='[:watched, :watched]', @bytes_read='0', @bytes_unread='1501', current_size='1501', last_stat_size='1501', file_open?='true', @initial=true, @sincedb_key='unknown 0 0'>"}
[2019-01-20T08:40:55,097][TRACE][filewatch.tailmode.processor] Active - file grew: app.log: new size is 1501, bytes read 0
[2019-01-20T08:40:55,109][TRACE][filewatch.tailmode.handlers.grow] handling: app.log
[2019-01-20T08:40:55,242][TRACE][filewatch.tailmode.handlers.grow] reading... {"iterations"=>1, "amount"=>1501, "filename"=>"app.log"}
[2019-01-20T08:40:55,264][DEBUG][filewatch.tailmode.handlers.grow] read_to_eof: get chunk
[2019-01-20T08:40:55,308][TRACE][filewatch.tailmode.handlers.grow] buffer_extract: a delimiter can't be found in current chunk, maybe there are no more delimiters or the delimiter is incorrect or the text before the delimiter, a 'line', is very large, if this message is logged often try increasing the file_chunk_size setting. {"delimiter"=>"\n", "read_position"=>0, "bytes_read_count"=>1501, "last_known_file_size"=>1501, "file_path"=>"C:/elkstack/elasticsearch-6.5.1/logs/app.log"}
[2019-01-20T08:40:55,328][DEBUG][filewatch.sincedbcollection] writing sincedb (delta since last write = 1547944855)
[2019-01-20T08:40:55,334][TRACE][filewatch.sincedbcollection] sincedb_write: to: null
[2019-01-20T08:40:56,433][TRACE][filewatch.tailmode.processor] Delayed Delete processing
[2019-01-20T08:40:56,435][TRACE][filewatch.tailmode.processor] Watched + Active restat processing
[2019-01-20T08:40:56,441][TRACE][filewatch.tailmode.processor] Rotation In Progress processing
[2019-01-20T08:40:56,443][TRACE][filewatch.tailmode.processor] Watched processing

Thanks,
Cherie

OK, so having inode be "unknown" is a bug (wildcards that match multiple files will not work), but that's not hurting you yet.

The second message is telling you that it has read the entire 1501 characters from the file and it has not found the end of a line yet. If you open the file in notepad does it appear to have multiple lines?

I opened it with Notepad, and the whole event (in JSON format) is split across two lines. What should I do?

Hi Badger,

I manually pressed Enter at the end of the file, and it creates the index and starts parsing the JSON event now. But I still have two questions:

  1. Why do forward slashes work in 6.5.1? In Logstash 6.2.4 I used backslashes and they did work.
  2. For the file input, do I have to press Enter to start a new line at the end of the log file every time?

Regards,
Cherie

Each line of the file needs a line terminator, yes.
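Concretely, for the file input with the json codec each event should be one complete JSON object on its own newline-terminated line (NDJSON); a sketch with made-up fields:

{"app": "talent", "level": "INFO", "message": "first event"}
{"app": "talent", "level": "WARN", "message": "second event"}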
