Trouble with Logstash pipeline creating ES index

I am attempting to ingest this logfile:

    ---,36.25, 30.14, 0.01, 0.01, 26.36, 23.92, 23.68
    ---,36.25, 30.15, 0.01, 0.01, 26.36, 24.04, 23.68
    ---,36.26, 30.14, 0.01, 0.01, 26.36, 24.04, 23.68
    ---,36.25, 30.15, 0.01, 0.01, 26.36, 24.04, 23.68
    ---,36.25, 30.15, 0.01, 0.01, 26.36, 24.04, 23.55
    ---,36.26, 30.15, 0.01, 0.01, 26.36, 24.04, 23.68
    ---,36.25, 30.14, 0.01, 0.01, 26.24, 23.92, 23.55

And using this conf file:

    input {
      file {
        path => "c:\\users\\administrator\\desktop\\bsb1_data_output1.txt"
        start_position => "beginning"
      }
    }
    filter {
      grok {
        match => "%{GREEDYDATA}"
        }
      }
    }
    output {
      elasticsearch {
        hosts => "http://myIP:9200"
        index => "bsb1"
        document_type => "bsb1"
      }
    }
       stdout {
        codec => rubydebug
      }
    }

But I get this error:

    [FATAL][logstash.runner ] The given configuration is invalid. Reason: Expected one of [ \t\r\n], "#", "input", "filter", "output" at line 1, column 1 (byte 1)

Can someone please help me figure this out? 🙂

Here's a list of things to check:

Edit: And the grok match parameter should be a hash defining both the field to match against and the pattern. You only wrote down the pattern and a closing bracket, so maybe that's the problem.
Edit 2: And the stdout is outside of the output block and has another unmatched bracket. It looks like you copied together some snippets without checking your syntax?
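
Putting those fixes together, a syntactically valid version of your config would look something like this (same settings, just with the brackets balanced, the match written as a hash, and stdout moved inside the output block):

    input {
      file {
        # forward slashes are more reliable than backslashes in Windows paths
        path => "c:/users/administrator/desktop/bsb1_data_output1.txt"
        start_position => "beginning"
      }
    }
    filter {
      grok {
        # match is a hash: field to match against => pattern
        match => { "message" => "%{GREEDYDATA}" }
      }
    }
    output {
      elasticsearch {
        hosts => "http://myIP:9200"
        index => "bsb1"
        document_type => "bsb1"
      }
      stdout {
        codec => rubydebug
      }
    }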

Thanks, Jenni. Yep, I'm new to Logstash pipelines and trying to put this together as I read and learn.

I have updated my pipeline conf file, but Logstash starts with the default template and does not create the Elasticsearch index. Any help would be greatly appreciated.

    input {
      file {
        path => "C:/Users/Adminstrator/Desktop/bsb2_data_output.txt"
        start_position => "beginning"
        sincedb_path => "NUL"
      }
    }
    filter {
      grok {
        match => [ "message" , "%---,%{URIHOST},%{BASE16FLOAT},%{BASE16FLOAT},%{BASE16FLOAT},%{BASE16FLOAT},%{BASE16FLOAT},%{BASE16FLOAT}" ]
      }
    }
    output {
      elasticsearch {
        hosts => "http://myIP:9200"
        index => "logstash-bsb2"
      }
    }

I can see a few issues in your grok pattern (maybe the data you copied has extra spaces?).
Try testing your grok pattern against your data at https://grokdebug.herokuapp.com/
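
It can also help to print events to the console while testing, so you can see whether the file input is picking anything up at all. Adding a stdout output alongside elasticsearch does that:

    output {
      # temporary debugging output: prints each event to the console
      stdout {
        codec => rubydebug
      }
    }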

Please try the sample below and see if it works.
Data sample

    ---,36.25, 30.14, 0.01, 0.01, 26.36, 23.92, 23.68
    ---,36.24, 20.14, 2.01, 4.01, 36.36, 43.92, 53.68

Processor

    input {
      file {
        path => "/tmp/samplefile.txt"
        start_position => "beginning"
        # on Linux use /dev/null; NUL is the Windows equivalent
        sincedb_path => "/dev/null"
      }
    }
    filter {
      grok {
        # capture all seven columns that follow the "---," prefix,
        # allowing optional whitespace after each comma
        match => {
          "message" => "^\s*---,%{BASE16FLOAT:field1},\s*%{BASE16FLOAT:field2},\s*%{BASE16FLOAT:field3},\s*%{BASE16FLOAT:field4},\s*%{BASE16FLOAT:field5},\s*%{BASE16FLOAT:field6},\s*%{BASE16FLOAT:field7}"
        }
      }
    }
    output {
      elasticsearch {
        hosts => "http://localhost:9200"
        index => "logstash-bsb2"
        password => "changeme"
        user => "elastic"
      }
    }
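
Note that grok captures are strings by default. If you want Elasticsearch to map the values as numbers, you can append :float to each capture (e.g. %{BASE16FLOAT:field1:float}) or convert them with a mutate filter, roughly like this:

    filter {
      mutate {
        # convert the captured strings to floats so they are indexed as numbers
        convert => {
          "field1" => "float"
          "field2" => "float"
          "field3" => "float"
          "field4" => "float"
          "field5" => "float"
          "field6" => "float"
          "field7" => "float"
        }
      }
    }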
