Migration from old ELK to Elastic Cloud

Hello,
We are in the process of migrating an older ELK deployment to Elastic Cloud. We are already evaluating pricing, the subscription model, etc., and I am getting ahead of the process with some questions I have.

Right now we use Filebeat to collect logs from IIS servers and send them to Logstash. Each IIS server has several log folders, and in Filebeat we tag them using "fields" so that Logstash, depending on the tag, routes them to a certain index. In Elastic Cloud I think there is no Logstash; do you know how we could set this up so that in Elastic Cloud we still know which source each log corresponds to?

Thanks!

You can do that in Filebeat using conditionals in the elasticsearch output.

The documentation for the indices setting of the Elasticsearch output has the following example.

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  indices:
    - index: "warning-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        message: "WARN"
    - index: "error-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        message: "ERR"

Depending on what you do with Logstash, you could also keep your Logstash server and point its output at Elastic Cloud, since some things are not possible with just Filebeat and Elasticsearch and still need Logstash.
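
On the Elastic Cloud side, Filebeat can address a deployment directly with the cloud.id and cloud.auth settings instead of a hosts list (the elasticsearch output plugin in Logstash has matching cloud_id / cloud_auth options). A minimal sketch with placeholder credentials; note that overriding index names also means naming the index template, and on some versions ILM has to be disabled for custom names to take effect:

cloud.id: "my-deployment:SOME_BASE64_STRING"   # placeholder; copy the real value from the Elastic Cloud console
cloud.auth: "elastic:changeme"                 # placeholder; an API key can be used instead

# Needed when overriding the default index name:
setup.template.name: "iis"
setup.template.pattern: "iis-*"
#setup.ilm.enabled: false   # uncomment if ILM keeps overriding the index name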

Filebeat collects the log files from several IIS servers and, depending on the folder, assigns a "fields" value to each one so that Logstash derives the index name from it.
But from what you say, we could substitute this configuration, which is the one we use right now on some servers:


#=========================== Filebeat inputs =============================

filebeat.inputs:

#IIS 01
- type: filestream
  paths:
    - \\p......1\logfiles\Workbench\W3SVC1\*.log
    - \\p......2\logfiles\Workbench\W3SVC1\*.log
    - \\p......3\logfiles\Workbench\W3SVC1\*.log
    - \\p......4\logfiles\Workbench\W3SVC1\*.log
  fields:
    our_service_name: "iis-prod-01"
  #ignore_older: 8h
  exclude_lines: ["^#"]
  exclude_files: [".zip$"]
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after


#IIS 02
- type: filestream
  paths:
    - \\p......1\logfiles\ServicesRoot\W3SVC2\*.log
    - \\p......2\logfiles\ServicesRoot\W3SVC2\*.log
    - \\p......3\logfiles\ServicesRoot\W3SVC2\*.log
    - \\p......4\logfiles\ServicesRoot\W3SVC2\*.log
  fields:
    our_service_name: "iis-prod-02"
  #ignore_older: 8h
  exclude_lines: ["^#"]
  exclude_files: [".zip$"]
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

#----------------------------- Logstash output --------------------------------
output.logstash:
  enabled: true
  # Port 5044/tcp for PROD, 5045/tcp for preprod
  hosts: ["x.x.x.x:5044"]
 
#output.console:
#  pretty: true


And replace "field" with "indices", so we wouldn't need logstash.

#=========================== Filebeat inputs =============================

filebeat.inputs:

#IIS 01
- type: filestream
  paths:
    - \\p......1\logfiles\Workbench\W3SVC1\*.log
    - \\p......2\logfiles\Workbench\W3SVC1\*.log
    - \\p......3\logfiles\Workbench\W3SVC1\*.log
    - \\p......4\logfiles\Workbench\W3SVC1\*.log
  #ignore_older: 8h
  exclude_lines: ["^#"]
  exclude_files: [".zip$"]
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  indices:
    - index: "iis-prod-01-%{+yyyy.MM.dd}"



#IIS 02
- type: filestream
  paths:
    - \\p......1\logfiles\ServicesRoot\W3SVC2\*.log
    - \\p......2\logfiles\ServicesRoot\W3SVC2\*.log
    - \\p......3\logfiles\ServicesRoot\W3SVC2\*.log
    - \\p......4\logfiles\ServicesRoot\W3SVC2\*.log
  #ignore_older: 8h
  exclude_lines: ["^#"]
  exclude_files: [".zip$"]
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  indices:
    - index: "iis-prod-02-%{+yyyy.MM.dd}"

#----------------------------- Logstash output --------------------------------
output.logstash:
  enabled: true
  # Port 5044/tcp for PROD, 5045/tcp for preprod
  hosts: ["x.x.x.x:5044"]
 
#output.console:
#  pretty: true


That's not what was said, and it is not what is described in the documentation.

The indices setting is an option of the elasticsearch output, not an option for the inputs.

What the documentation describes is that you can conditionally change the index of the Elasticsearch output using the indices option.

The example clearly shows this:

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  indices:
    - index: "warning-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        message: "WARN"
    - index: "error-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        message: "ERR"

It uses the value of the message field to decide which index will be the destination.
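
Applied to the configuration you posted, that means keeping the fields: our_service_name entries on the inputs exactly as they are, removing output.logstash, and doing the routing in the output, along these lines (connection settings omitted):

output.elasticsearch:
  # cloud.id / cloud.auth (or hosts) for your deployment go at the top level
  indices:
    - index: "iis-prod-01-%{+yyyy.MM.dd}"
      when.equals:
        fields.our_service_name: "iis-prod-01"
    - index: "iis-prod-02-%{+yyyy.MM.dd}"
      when.equals:
        fields.our_service_name: "iis-prod-02"

And since your index names mirror the field values, the format-string form index: "%{[fields.our_service_name]}-%{+yyyy.MM.dd}" would collapse the list to a single line.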
