Kibana is not showing all the log files from the path; it shows only one file

Hi All,

We have newly installed Logstash and started exploring its features.

We have given a path in the Beats configuration that contains a number of log files (maybe 10), but when we checked in Kibana it only shows one or two of them, not all the files available in the path.

Please let me know what needs to be done to see all the files.

Thanks,
Chaitanya.

Hi All,

Any help please?

Thanks,
Chaitanya

Do you know whether all the files are actually there?

Yes, I do see them when logged in to the server. I'm not sure on what basis Filebeat is picking only 2 files.

I have attached the Filebeat configuration and the actual location of the files.

Please don't attach screenshots of text like that; it's very difficult to read. I'm also moving this topic to the Filebeat section.

Try starting the beat with debugging enabled: -v -d "*".

I waited 30 minutes after running the command, but it shows me nothing. Any suggestions?

```
PS C:\Program Files\filebeat>
PS C:\Program Files\filebeat> ./filebeat -v -d "" -c filebeat.yml
PS C:\Program Files\filebeat> .\filebeat.exe -v -d "
"
Loading config file error: Failed to read /etc/filebeat/filebeat.yml: open /etc/filebeat/filebeat.yml: The system cannot
find the path specified.. Exiting.
PS C:\Program Files\filebeat>
PS C:\Program Files\filebeat>
PS C:\Program Files\filebeat> .\filebeat.exe -v -d "" -c .\filebeat.yml
PS C:\Program Files\filebeat>
PS C:\Program Files\filebeat> .\filebeat.exe -v -d "
" -c filebeat.yml
```

Okay, I got it. These files are stored in the log folder. Please find it pasted below; it's not allowing me to attach the file.

Thanks for taking the time to look into this.

```
2016-01-28T01:44:45-08:00 DBG Disable stderr logging
2016-01-28T01:44:45-08:00 DBG Initializing output plugins
2016-01-28T01:44:45-08:00 INFO GeoIP disabled: No paths were set under output.geoip.paths
2016-01-28T01:44:45-08:00 DBG ES Ping(url=http://xxxxxxxxxxx:9200, timeout=1m30s)
2016-01-28T01:44:45-08:00 DBG Ping status code: 200
2016-01-28T01:44:45-08:00 INFO Activated elasticsearch as output plugin.
2016-01-28T01:44:46-08:00 INFO Activated logstash as output plugin.
2016-01-28T01:44:46-08:00 DBG create output worker: 0x0, 0x12bba3a0
2016-01-28T01:44:46-08:00 DBG create output worker: 0x0, 0x0
2016-01-28T01:44:46-08:00 DBG No output is defined to store the topology. The server fields might not be filled.
2016-01-28T01:44:46-08:00 INFO Publisher name: USSLCSITESCOPE1
2016-01-28T01:44:46-08:00 DBG create bulk processing worker (interval=1s, bulk size=50)
2016-01-28T01:44:46-08:00 DBG create bulk processing worker (interval=1s, bulk size=200)
2016-01-28T01:44:46-08:00 INFO Init Beat: filebeat; Version: 1.0.1
2016-01-28T01:44:46-08:00 INFO filebeat sucessfully setup. Start running.
2016-01-28T01:44:46-08:00 INFO Registry file set to: C:\ProgramData\filebeat\registry
2016-01-28T01:44:46-08:00 INFO Loading registrar data from C:\ProgramData\filebeat\registry
2016-01-28T01:44:46-08:00 DBG Set idleTimeoutDuration to 5s
2016-01-28T01:44:46-08:00 DBG File Configs: [D:\SiteScope\logs*.]
2016-01-28T01:44:46-08:00 DBG Set ignore_older duration to 24h0m0s
2016-01-28T01:44:46-08:00 DBG Set scan_frequency duration to 3s
2016-01-28T01:44:46-08:00 DBG Set backoff duration to 1s
2016-01-28T01:44:46-08:00 DBG Set max_backoff duration to 10s
2016-01-28T01:44:46-08:00 DBG Set partial_line_waiting duration to 5s
2016-01-28T01:44:46-08:00 DBG Waiting for 1 prospectors to initialise
2016-01-28T01:44:46-08:00 DBG Harvest path: D:\SiteScope\logs*.

2016-01-28T01:44:46-08:00 DBG scan path D:\SiteScope\logs*.*
2016-01-28T01:44:46-08:00 INFO Starting spooler: spool_size: 1; idle_timeout: 5s
2016-01-28T01:44:46-08:00 DBG Windows is interactive: true
2016-01-28T01:44:46-08:00 DBG Check file for harvesting: D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.HA.log
2016-01-28T01:44:46-08:00 DBG Start harvesting unknown file: D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.HA.log
2016-01-28T01:44:46-08:00 DBG Fetching old state of file to resume: D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.HA.log
2016-01-28T01:44:46-08:00 DBG Skipping file (older than ignore older of 24h0m0s, 5969h52m13.7436389s): D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.HA.log
2016-01-28T01:44:46-08:00 DBG Check file for harvesting: D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.log
2016-01-28T01:44:46-08:00 DBG Start harvesting unknown file: D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.log
2016-01-28T01:44:46-08:00 DBG Fetching old state of file to resume: D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.log
2016-01-28T01:44:46-08:00 DBG Skipping file (older than ignore older of 24h0m0s, 5969h52m13.6968398s): D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.log
2016-01-28T01:44:46-08:00 DBG Check file for harvesting: D:\SiteScope\logs\Operator.log
2016-01-28T01:44:46-08:00 DBG Start harvesting unknown file: D:\SiteScope\logs\Operator.log
2016-01-28T01:44:46-08:00 DBG Fetching old state of file to resume: D:\SiteScope\logs\Operator.log
2016-01-28T01:44:46-08:00 DBG Skipping file (older than ignore older of 24h0m0s, 213h21m52.4739675s): D:\SiteScope\logs\Operator.log
2016-01-28T01:44:46-08:00 DBG Check file for harvesting: D:\SiteScope\logs\RunMonitor.log
2016-01-28T01:44:46-08:00 DBG Start harvesting unknown file: D:\SiteScope\logs\RunMonitor.log
2016-01-28T01:44:46-08:00 DBG Fetching old state of file to resume: D:\SiteScope\logs\RunMonitor.log
2016-01-28T01:44:46-08:00 DBG Skipping file (older than ignore older of 24h0m0s, 80h28m36.1642209s): D:\SiteScope\logs\RunMonitor.log
2016-01-28T01:44:46-08:00 DBG Check file for harvesting: D:\SiteScope\logs\RunMonitor.log.1
2016-01-28T01:44:46-08:00 DBG Start harvesting unknown file: D:\SiteScope\logs\RunMonitor.log.1
2016-01-28T01:44:46-08:00 DBG Fetching old state of file to resume: D:\SiteScope\logs\RunMonitor.log.1
2016-01-28T01:44:46-08:00 DBG Skipping file (older than ignore older of 24h0m0s, 87h3m14.6714368s): D:\SiteScope\logs\RunMonitor.log.1
```

A lot of files are skipped because they were modified more than 24h ago (based on the ignore_older setting). Can you provide the following information:

  • Which files from the directory do you want to fetch?
  • Which ones show up as expected?
  • Which ones are missing?
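
For reference: the cutoff comes from the ignore_older option in the prospector section of filebeat.yml, and it can be raised if older files should still be picked up. A minimal sketch, assuming Filebeat 1.x syntax; the glob and the 72h value here are only examples:

```yaml
filebeat:
  prospectors:
    -
      paths:
        - D:\SiteScope\logs\*.log
      # Files whose modification time is older than this are skipped.
      # The default is 24h; raise it to pick up older files.
      ignore_older: 72h
```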

Let's take the example of the alert log. This file is written to with the current time every 10 minutes, but on the outside the file's modification date still shows the 24th. Below is the output.

```
2016-01-28T01:44:52-08:00 DBG Check file for harvesting: D:\SiteScope\logs\alert.log
2016-01-28T01:44:52-08:00 DBG Update existing file for harvesting: D:\SiteScope\logs\alert.log
2016-01-28T01:44:52-08:00 DBG Not harvesting, file didn't change: D:\SiteScope\logs\alert.log
2016-01-28T01:44:52-08:00 DBG Check file for harvesting: D:\SiteScope\logs\alert.log.old
2016-01-28T01:44:52-08:00 DBG Update existing file for harvesting: D:\SiteScope\logs\alert.log.old
```

Is there any way that we can fix this?

Thanks,
Chaitanya.

Hi Ruflin,

I have changed the path and specified the exact log file to pick up. I force-deleted the old docs and tried to get the beats index created again, but I don't see it in the indices.

When I ran the beat in debug mode after changing the log paths, it gives the error below.

Any help? Also, is there a way to change the ignore_older value from 24h to something greater?

```
2016-01-29T03:30:44-08:00 DBG Disable stderr logging
2016-01-29T03:30:44-08:00 DBG Initializing output plugins
2016-01-29T03:30:44-08:00 INFO GeoIP disabled: No paths were set under output.geoip.paths
2016-01-29T03:30:45-08:00 INFO Activated logstash as output plugin.
2016-01-29T03:30:45-08:00 DBG create output worker: 0x0, 0x0
2016-01-29T03:30:45-08:00 DBG No output is defined to store the topology. The server fields might not be filled.
2016-01-29T03:30:45-08:00 INFO Publisher name: USSLCSITESCOPE1
2016-01-29T03:30:45-08:00 DBG create bulk processing worker (interval=1s, bulk size=200)
2016-01-29T03:30:45-08:00 INFO Init Beat: filebeat; Version: 1.0.1
2016-01-29T03:30:45-08:00 INFO filebeat sucessfully setup. Start running.
2016-01-29T03:30:45-08:00 INFO Registry file set to: C:\ProgramData\filebeat\registry
2016-01-29T03:30:45-08:00 INFO Loading registrar data from C:\ProgramData\filebeat\registry
2016-01-29T03:30:45-08:00 DBG Set idleTimeoutDuration to 5s

2016-01-29T03:30:51-08:00 INFO send fail
2016-01-29T03:30:51-08:00 INFO backoff retry: 4s
2016-01-29T03:30:55-08:00 DBG Start next scan
2016-01-29T03:30:55-08:00 DBG scan path D:\SiteScope\logs\error.log
2016-01-29T03:30:55-08:00 DBG Check file for harvesting: D:\SiteScope\logs\error.log
2016-01-29T03:30:55-08:00 DBG Update existing file for harvesting: D:\SiteScope\logs\error.log
2016-01-29T03:30:55-08:00 DBG Not harvesting, file didn't change: D:\SiteScope\logs\error.log
2016-01-29T03:30:55-08:00 DBG scan path D:\SiteScope\logs\RunMonitor.log
2016-01-29T03:30:55-08:00 DBG Check file for harvesting: D:\SiteScope\logs\RunMonitor.log
2016-01-29T03:30:55-08:00 DBG Update existing file for harvesting: D:\SiteScope\logs\RunMonitor.log
2016-01-29T03:30:55-08:00 DBG Not harvesting, file didn't change: D:\SiteScope\logs\RunMonitor.log
2016-01-29T03:30:56-08:00 INFO Connecting error publishing events (retrying): dial tcp x.x.x.x.x:5044: connectex: No connection could be made because the target machine actively refused it.
```

Hi Ruflin,

ignore_older: Thanks for the pointer to the documentation, I will take a look at it.

**There seems to be an issue with your connection to Logstash.**
This issue started once I changed the log file path in the Filebeat configuration; if I give the same old path, the connection works. Can't we change the log file paths as we like? Is there anything I need to clear on the Logstash/Elasticsearch side?

About the modified date: what you are saying is that you add log lines to the file, but the system still shows the old time as the modification date?

Yes, inside the file the logs are being written with the current time, but when I look at the modified date of the file it shows some old date, and that is what Filebeat/Logstash looks at when ignoring the file.

Thanks,
Chaitanya.

You should be able to change the path in the filebeat config at any time. The only thing you need to do is restart filebeat. Can you share your config file (before and after)?
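
For example, switching from a glob to specific files is just an edit to the prospector paths followed by a restart. A sketch assuming Filebeat 1.x syntax, with the file names taken from your debug output:

```yaml
filebeat:
  prospectors:
    -
      input_type: log
      # List the specific files instead of a glob; takes effect after a restart.
      paths:
        - D:\SiteScope\logs\error.log
        - D:\SiteScope\logs\RunMonitor.log
```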

About the modified time: Are you working on a shared file system?

I have restarted Filebeat, but it is the same.

Old configuration

```
2016-01-28T01:44:45-08:00 DBG Disable stderr logging
2016-01-28T01:44:45-08:00 DBG Initializing output plugins
2016-01-28T01:44:45-08:00 INFO GeoIP disabled: No paths were set under output.geoip.paths
2016-01-28T01:44:45-08:00 DBG ES Ping(url=http://XX.XX.XX.XX:9200, timeout=1m30s)
2016-01-28T01:44:45-08:00 DBG Ping status code: 200
2016-01-28T01:44:45-08:00 INFO Activated elasticsearch as output plugin.
2016-01-28T01:44:46-08:00 INFO Activated logstash as output plugin.
2016-01-28T01:44:46-08:00 DBG create output worker: 0x0, 0x12bba3a0
2016-01-28T01:44:46-08:00 DBG create output worker: 0x0, 0x0
2016-01-28T01:44:46-08:00 DBG No output is defined to store the topology. The server fields might not be filled.
2016-01-28T01:44:46-08:00 INFO Publisher name: USSLCSITESCOPE
2016-01-28T01:44:46-08:00 DBG create bulk processing worker (interval=1s, bulk size=50)
2016-01-28T01:44:46-08:00 DBG create bulk processing worker (interval=1s, bulk size=200)
2016-01-28T01:44:46-08:00 INFO Init Beat: filebeat; Version: 1.0.1
2016-01-28T01:44:46-08:00 INFO filebeat sucessfully setup. Start running.
2016-01-28T01:44:46-08:00 INFO Registry file set to: C:\ProgramData\filebeat\registry
2016-01-28T01:44:46-08:00 INFO Loading registrar data from C:\ProgramData\filebeat\registry
2016-01-28T01:44:46-08:00 DBG Set idleTimeoutDuration to 5s
2016-01-28T01:44:46-08:00 DBG File Configs: [D:\SiteScope\logs*.]
2016-01-28T01:44:46-08:00 DBG Set ignore_older duration to 24h0m0s
2016-01-28T01:44:46-08:00 DBG Set scan_frequency duration to 3s
2016-01-28T01:44:46-08:00 DBG Set backoff duration to 1s
2016-01-28T01:44:46-08:00 DBG Set max_backoff duration to 10s
2016-01-28T01:44:46-08:00 DBG Set partial_line_waiting duration to 5s
2016-01-28T01:44:46-08:00 DBG Waiting for 1 prospectors to initialise
2016-01-28T01:44:46-08:00 DBG Harvest path: D:\SiteScope\logs*.

2016-01-28T01:44:46-08:00 DBG scan path D:\SiteScope\logs*.*
2016-01-28T01:44:46-08:00 INFO Starting spooler: spool_size: 1; idle_timeout: 5s
2016-01-28T01:44:46-08:00 DBG Windows is interactive: true
2016-01-28T01:44:46-08:00 DBG Check file for harvesting: D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.HA.log
2016-01-28T01:44:46-08:00 DBG Start harvesting unknown file: D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.HA.log
2016-01-28T01:44:46-08:00 DBG Fetching old state of file to resume: D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.HA.log
2016-01-28T01:44:46-08:00 DBG Skipping file (older than ignore older of 24h0m0s, 5969h52m13.7436389s): D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.HA.log
2016-01-28T01:44:46-08:00 DBG Check file for harvesting: D:\SiteScope\logs\HPSiteScopeOperationsManagerIntegration.log
2016-01-28T01:44:46-08:00 DBG Start harvesting unknown file:
```

New Configuration

```
2016-01-29T03:30:44-08:00 DBG Disable stderr logging
2016-01-29T03:30:44-08:00 DBG Initializing output plugins
2016-01-29T03:30:44-08:00 INFO GeoIP disabled: No paths were set under output.geoip.paths
2016-01-29T03:30:45-08:00 INFO Activated logstash as output plugin.
2016-01-29T03:30:45-08:00 DBG create output worker: 0x0, 0x0
2016-01-29T03:30:45-08:00 DBG No output is defined to store the topology. The server fields might not be filled.
2016-01-29T03:30:45-08:00 INFO Publisher name: USSLCSITESCOPE1
2016-01-29T03:30:45-08:00 DBG create bulk processing worker (interval=1s, bulk size=200)
2016-01-29T03:30:45-08:00 INFO Init Beat: filebeat; Version: 1.0.1
2016-01-29T03:30:45-08:00 INFO filebeat sucessfully setup. Start running.
2016-01-29T03:30:45-08:00 INFO Registry file set to: C:\ProgramData\filebeat\registry
2016-01-29T03:30:45-08:00 INFO Loading registrar data from C:\ProgramData\filebeat\registry
2016-01-29T03:30:45-08:00 DBG Set idleTimeoutDuration to 5s
2016-01-29T03:30:45-08:00 DBG File Configs: [D:\SiteScope\logs\error.log D:\SiteScope\logs\RunMonitor.log]
2016-01-29T03:30:45-08:00 DBG Set ignore_older duration to 24h0m0s
2016-01-29T03:30:45-08:00 DBG Set scan_frequency duration to 10s
2016-01-29T03:30:45-08:00 DBG Set backoff duration to 1s
2016-01-29T03:30:45-08:00 DBG Set max_backoff duration to 10s
2016-01-29T03:30:45-08:00 DBG Set partial_line_waiting duration to 5s
2016-01-29T03:30:45-08:00 DBG Waiting for 1 prospectors to initialise
2016-01-29T03:30:45-08:00 DBG Harvest path: D:\SiteScope\logs\error.log
2016-01-29T03:30:45-08:00 DBG Harvest path: D:\SiteScope\logs\RunMonitor.log
2016-01-29T03:30:45-08:00 DBG scan path D:\SiteScope\logs\error.log
2016-01-29T03:30:45-08:00 DBG Check file for harvesting: D:\SiteScope\logs\error.log
2016-01-29T03:30:45-08:00 DBG Start harvesting unknown file: D:\SiteScope\logs\error.log
2016-01-29T03:30:45-08:00 DBG Same file as before found. Fetch the state and persist it.
```

Thanks,
Chaitanya.

I was referring to your config files. Can you please share the before/after config files? And please put 3 backticks around your code to make it more readable.

I just noticed one thing: as soon as I enable the Elasticsearch output in filebeat.yml, the connection works and data is sent. Do we need to enable both outputs, Elasticsearch and Logstash, here?

As per the architecture it's Filebeat >> Logstash >> Elasticsearch.

```
      # Make sure not file is defined twice as this can lead to unexpected behaviour.
      paths:
        - D:\SiteScope\logs\error.log
        - D:\SiteScope\logs\RunMonitor.log

      # Configure the file encoding for reading files with international characters

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:

  ### Elasticsearch as output
  elasticsearch:
    # Array of hosts to connect to.
    # Scheme and port can be left out and will be set to the default (http and 9200)
    # In case you specify an additional path, the scheme is required: http://localhost:9200/path
    # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
    hosts: ["10.26.138.xxx:9200"]

  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["10.26.138.xxx:5044"]

    # Number of workers per Logstash host.
    #worker: 1
```

If you want to send through LS, you don't need to enable Elasticsearch here. I assume there is a problem with your connection from Filebeat to LS, or from LS to Elasticsearch.
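
Concretely, for the Filebeat >> Logstash >> Elasticsearch chain, the output section of filebeat.yml only needs the Logstash block. A sketch with the Elasticsearch output removed, keeping the host from your config:

```yaml
output:
  ### Logstash as output; Logstash then forwards the events to Elasticsearch itself.
  logstash:
    hosts: ["10.26.138.xxx:5044"]
```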

You only shared one config file, but you were referring to two different ones. Can you please share both?

I reformatted your post with my initial suggestion to put code into 3 backticks to make it readable.

Sorry, I don't have the old configuration file any more, only the old debug logs.

What are these 3 backticks to make it readable? I don't understand; I tried searching Google but couldn't find anything. Could you please explain what they are and how to use them?

Ok, then let's go with the new one. Please disable the Elasticsearch output and let's see what the issue is with Logstash. Can you share your Logstash config?

Backticks are ``` — if you put 3 of them before and after your code, it displays correctly.

Thank you.

I have disabled the Elasticsearch output in the Beat configuration. Here is the Logstash configuration.

Logstash.conf

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["10.26.138.xxx:9200"]
  }
}
```

config.json

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "10.26.138.xxx:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```
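
One thing to double-check: if both of these files are loaded by the same Logstash instance (for example, both sit in a config directory that Logstash reads in full), each defines a beats input on port 5044, and the second bind will fail. Keeping a single pipeline file is safer; a consolidated sketch based on your second config (whether you want sniffing and the metadata-based index is your call):

```
input {
  beats {
    # Single beats input; only one config should bind this port.
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["10.26.138.xxx:9200"]
    manage_template => false
    # Index events per beat and day, e.g. filebeat-2016.01.29.
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```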