How do I add the hostname to logs that do not normally contain it?

I am trying to send SharePoint logs to Logstash, but typical SharePoint logs do not contain the server name. Would I have to add it somewhere in the Beats config?

See the common Beats exported fields doc for Filebeat: https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-env.html

I got that, but how would you parse it with grok?

No need to parse it. The field is part of the event as presented to Logstash; you can access the hostname via [beat][hostname] in Logstash.
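For example, you could copy it into a top-level field with a mutate filter (a minimal sketch; the target field name host_name is just an example) ::

filter {
    mutate {
        add_field => { "host_name" => "%{[beat][hostname]}" }
    }
}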

So you do not need to add anything to logstash.conf in the input/output sections to be able to display the hostname in Kibana for each of those logs?

No, Filebeat always adds a beat.hostname field to every event it sends, and this will be visible in Kibana.
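For reference, this is roughly what the JSON Filebeat ships for each log line looks like (trimmed; the field values here are illustrative) ::

{
  "@timestamp": "2016-06-28T13:55:09.120Z",
  "beat": {
    "hostname": "SP-WFE-01",
    "name": "SP-WFE-01"
  },
  "input_type": "log",
  "message": "06/28/2016 13:55:09.12 \tw3wp.exe (0x1890)\t0x0D28\t...",
  "type": "ULS"
}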

I still don't see this. I am trying to send the following log to Logstash ::

Timestamp Process TID Area Category EventID Level Message Correlation
06/28/2016 13:55:09.12 w3wp.exe (0x1890) 0x0D28 SharePoint Foundation Client File Access 0000 Monitorable [CsiSandbox Stats] Stack Size: 1 Total Created: 7 Max At Once: Total 1 164325c1-3dbd-43d5-a83c-60e9af2498d9

These are the grok patterns I am using in logstash.conf ::

%{DATESTAMP:parsedtime} \t%{DATA:process} \(%{DATA:processcode}\)(\s*)\t%{DATA:tid}(\s*)\t%{DATA:area}(\s*)\t%{DATA:category}(\s*)\t%{WORD:eventID}(\s*)\t%{WORD:level}(\s*)\t%{DATA:eventmessage}\t%{UUID:CorrelationID}

%{DATESTAMP:parsedtime} \t%{DATA:process} \(%{DATA:processcode}\)(\s*)\t%{DATA:tid}(\s*)\t%{DATA:area}(\s*)\t%{DATA:category}(\s*)\t%{WORD:eventID}(\s*)\t%{WORD:level}(\s*)\t%{GREEDYDATA:eventmessage}

This is how it comes out in Kibana, and it doesn't show the hostname either ::

message: \u0006\t\x98-7\xCE\xF1\xB03\u001E\x9C-jd\xA7\x88\u0005\xDB\xF8M+\x9D+\xAD\x976\u000FZ\x93I\xEBu\xBA^\xF1\xA7\xF9LH\u001E\v\x99\x88$\u001Eq9\x94\xB2_ڜ"m\x96\u0005\xA0P\u0011\xB1

tags: _grokparsefailure

I checked my grok in the debugger and it parsed those fields properly. I notice that when I configure Filebeat to output to a file instead, the output is in JSON format. Does that JSON need to be parsed, rather than the raw lines of the actual log file it is collecting input from?

Can you please share the configurations you are using for Filebeat and Logstash?

Sure ::

Filebeat ::

############################# Filebeat ######################################
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    -
      paths:
        - F:\Logs\ULS\STSP*.log

      input_type: log

      document_type: ULS

  idle_timeout: 10s

  registry_file: "C:/ProgramData/filebeat/registry"

output:

  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["10.0.1.9:60115"]

    index: ad-app-sharepoint

shipper:

  name:

logging:

  files:
    # The directory where the log files will be written to.
    path: f:\filebeat\logs

    rotateeverybytes: 10485760 # = 10MB

    level: info

Logstash ::

input {
    tcp {
        port => 60115
        type => "ULS"
    }
}

filter {
    if [type] == "ULS" {
        grok {
            # try the pattern that ends in a correlation ID first,
            # then fall back to the variant without one
            match => {
                "message" => [
                    "%{DATESTAMP:parsedtime} \t%{DATA:process} \(%{DATA:processcode}\)(\s*)\t%{DATA:tid}(\s*)\t%{DATA:area}(\s*)\t%{DATA:category}(\s*)\t%{WORD:eventID}(\s*)\t%{WORD:level}(\s*)\t%{DATA:eventmessage}\t%{UUID:CorrelationID}",
                    "%{DATESTAMP:parsedtime} \t%{DATA:process} \(%{DATA:processcode}\)(\s*)\t%{DATA:tid}(\s*)\t%{DATA:area}(\s*)\t%{DATA:category}(\s*)\t%{WORD:eventID}(\s*)\t%{WORD:level}(\s*)\t%{GREEDYDATA:eventmessage}"
                ]
            }
        }
        date {
            match => ["parsedtime", "MM/dd/YYYY HH:mm:ss.SS"]
        }
    }
}

output {
 
  if [type] == "ULS" {
    elasticsearch {
        hosts => ["10.0.1.6", "10.0.1.4", "10.0.1.5"]
        index => "ad-app-sharepoint-%{+YYYY.MM.dd}"
        template => "/data/elk-conf/ad-app-sharepoint-template-index.yml"
        template_name => "ad-app-sharepoint"
        user => "shieldadmin"
        password => "shieldadminpassword"
    }
  }
}

Why have you configured the tcp input plugin? Use the beats input plugin, which correctly handles the protocol Filebeat uses to push events to Logstash. The cryptic 'message' you saw is most likely the framed Beats (lumberjack) protocol being read as raw bytes by the tcp input.
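A minimal sketch, keeping your existing port and type ::

input {
    beats {
        port => 60115
        type => "ULS"
    }
}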

Great tip. I have changed the input from tcp to beats. As a side note, it must have been tcp because we were using nxlog before, but now we are using Beats.

So now the logs appear in Kibana, but I am still seeing a _grokparsefailure. I no longer see the cryptic message, however.

It is lumping the whole log line into the 'message' field rather than separating it into different fields, which is what I thought Logstash would do.
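While troubleshooting, a quick way to see exactly what Logstash produces is to temporarily add a stdout output next to the elasticsearch one (debugging sketch only, not part of the final config) ::

output {
    stdout { codec => rubydebug }
}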

....and suddenly, it starts parsing the fields correctly....

What would cause a bunch of these? Kibana throws an error for the index when I access it...

2016-06-29T16:55:42Z INFO Read line error: file inactive
2016-06-29T16:55:42Z INFO Read line error: file inactive
2016-06-29T16:55:42Z INFO Read line error: file inactive
(the same line repeated dozens of times)
2016-06-29T17:00:16Z INFO Events sent: 2
2016-06-29T17:00:16Z INFO Registry file updated. 2152 states written.
2016-06-29T17:00:17Z INFO Read line error: file inactive

The "file inactive" message is generated when a file is closed after close_older, because the file has not changed for the close_older duration. The message can be ignored; it's normal behavior. The log message should probably be changed to "Closing inactive file <filename>". Since INFO messages in Filebeat are supposed to show progress, I'm not sure it makes sense to downgrade the message to DEBUG level.
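If you want the handles held open longer (or released sooner), close_older can be tuned per prospector. A sketch assuming Filebeat 1.x, where 1h is the default; the 2h value is just an example ::

filebeat:
  prospectors:
    -
      paths:
        - F:\Logs\ULS\STSP*.log
      # keep reading a file for up to 2 hours without changes before closing it
      close_older: 2h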
