Creating multiple pipelines: ArcSight, Netflow, Filebeat, Packetbeat, etc.

Hi,

I am trying to install the ArcSight and Netflow modules alongside my currently running services (Filebeat, Packetbeat, Suricata, Wazuh), but when I install either module it fails to insert the indexes, dashboards, etc. into Elasticsearch and Kibana, and I get the error below.
Any help or guidance will be much appreciated.

The ELK stack version is 6.6.2.

[2019-04-02T20:28:17,170][ERROR][logstash.modules.kibanaclient] Error when executing Kibana client request {:error=>#<Manticore::UnknownException: Unrecognized SSL message, plaintext connection?>}
[2019-04-02T20:28:20,300][ERROR][logstash.modules.kibanaclient] Error when executing Kibana client request {:error=>#<Manticore::UnknownException: Unrecognized SSL message, plaintext connection?>}
[2019-04-02T20:28:20,731][ERROR][logstash.config.sourceloader] Could not fetch all the sources {:exception=>LogStash::ConfigLoadingError, :message=>"Failed to import module configurations to Elasticsearch and/or Kibana. Module: arcsight has Elasticsearch hosts: [\"localhost:9200\"] and Kibana hosts: [\"localhost:5601\"]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/config/modules_common.rb:108:in `block in pipeline_configs'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/config/modules_common.rb:54:in `pipeline_configs'", "/usr/share/logstash/logstash-core/lib/logstash/config/source/modules.rb:14:in `pipeline_configs'", "/usr/share/logstash/logstash-core/lib/logstash/config/source_loader.rb:61:in `block in fetch'", "org/jruby/RubyArray.java:2481:in `collect'", "/usr/share/logstash/logstash-core/lib/logstash/config/source_loader.rb:60:in `fetch'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:150:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:101:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:362:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
[2019-04-02T20:28:20,753][ERROR][logstash.agent           ] An exception happened when converging configuration {:exception=>RuntimeError, :message=>"Could not fetch the configuration, message: Failed to import module configurations to Elasticsearch and/or Kibana. Module: arcsight has Elasticsearch hosts: [\"localhost:9200\"] and Kibana hosts: [\"localhost:5601\"]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/agent.rb:157:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:101:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:362:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}

Those are SSL errors. Are you using SSL at all?
What inputs and outputs are you using?

The first two errors above are SSL errors, followed by the error about importing the module configuration.

Well, I am not using SSL, so I will add parameters to disable SSL. I want to run Netflow, ArcSight, and the other Beats in a single instance of Logstash. Please suggest a way to do this, along with a sample Logstash configuration file.
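For reference, something like this in logstash.yml is what I have in mind. This is a minimal sketch, assuming Elasticsearch and Kibana are plain HTTP on localhost; the var.* setting names follow the Logstash module documentation and may differ between versions:

# logstash.yml -- minimal sketch, plain-HTTP Elasticsearch and Kibana assumed
modules:
  - name: netflow
    var.input.udp.port: 2055                  # port the Netflow module listens on
    var.elasticsearch.hosts: "localhost:9200"
    var.elasticsearch.ssl.enabled: false
    var.kibana.host: "localhost:5601"
    var.kibana.ssl.enabled: false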

My configuration for multiple pipelines follows the pipeline-to-pipeline configuration:
https://www.elastic.co/guide/en/logstash/current/pipeline-to-pipeline.html

In my filebeat.yml I add a field to the log itself:

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /root/ForumHelp.log
    fields:
      forum: true    # custom field used by the Logstash routing output
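Note that Filebeat nests custom fields under a top-level fields key by default (unless you set fields_under_root: true), which is why the routing conditionals further down test [fields].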

My Logstash pipelines.yml calls a routing pipeline that I have set up:

- pipeline.id: route5044
  path.config: "/etc/logstash/conf.d/route5044/*.conf"
- pipeline.id: route5045
  path.config: "/etc/logstash/conf.d/route5045/*.conf"
- pipeline.id: dmesg
  path.config: "/etc/logstash/conf.d/dmesg/*.conf"
- pipeline.id: postgres
  path.config: "/etc/logstash/conf.d/postgres/*.conf"
- pipeline.id: forum_help
  path.config: "/etc/logstash/conf.d/ForumHelp/*.conf"
- pipeline.id: meraki
  path.config: "/etc/logstash/conf.d/meraki/*.conf"
- pipeline.id: catchall
  path.config: "/etc/logstash/conf.d/catchall/*.conf"

My input for my route5044 just opens up ports and determines types:

input {
  tcp {
    port => 5146
    codec => "json_lines"
    ssl_enable => true
    ssl_verify => false
    ssl_key => "/etc/logstash/ssl/logstash-proxy-pkcs8.key"
    ssl_cert => "/etc/logstash/ssl/logstash-proxy.crt"
    ssl_extra_chain_certs => "/etc/logstash/ssl/ca.me.com.crt"    
  }
}

I will be adding logic to this section as well, as I am doing some consolidation right now.

Then the output of my route5044 handles the routing to the correct pipeline.

output {
  if "postgres" in [fields] {
    pipeline {
      send_to => postgres_log
    }
  }
  else if "meraki" in [fields] {
    pipeline {
      send_to => meraki_syslog
    }
  }
  else if "forum" in [fields] {
    pipeline {
      send_to => forum_help
    }
  }
  else {
    pipeline {
      send_to => catch_all
    }
  }
}

Then finally the input for my forum_help pipeline looks like this (the address has to match the send_to value used in the routing output):

input {
  pipeline {
    address => forum_help
  }
}
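For completeness, the filter and output for a downstream pipeline like forum_help live in the same conf directory. As a minimal sketch (the filter body and index name here are placeholders, not my actual config):

filter {
  # parsing specific to this log type goes here
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "forum-help-%{+YYYY.MM.dd}"
  }
}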

Quick rundown:
On Beats, I create a virtual pipeline address by adding a field.
The route pipeline accepts the input.
The route pipeline's output checks the virtual pipeline address and routes the event to the correct pipeline.

Hi Ken,

Thanks for the kind and prompt response. Going through the configuration helped me get a good understanding of it.

It would be great if I could get sample conf files for ArcSight and Netflow too.

Thanks

I'm afraid I do not have a setup that I can test Netflow or ArcSight with.

The only thing you would have to do is assign a field to your Netflow or ArcSight events, and it should work.

If you can't add the field through those services, you can move them to a different port. For example, I am pulling syslog info from Meraki, but Meraki doesn't support adding custom fields, so I moved Meraki over to a different port. Here is my config for that:

input {
  beats {
    id => "proxy-5044-in"
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/ssl/ca.me.com.crt"]
    ssl_certificate => "/etc/logstash/ssl/logstash-proxy.crt"
    ssl_key => "/etc/logstash/ssl/logstash-proxy-pkcs8.key"
    ssl_verify_mode => "force_peer"
    add_field => {
      "_forwarder" => "fwd-5044"
      "origin_host" => "%{host}"
    }
  }
}

input {
  udp {
    id => "proxy-5045-in"
    port => 5045
    type => "syslog"
    add_field => {
      "[fields][meraki]" => true
      "_forwarder" => "fwd-5045"
    }
  }
}

This is my logstash-proxy config, but you should be able to adapt it for your regular pipeline.
I have all of my Meraki traffic connect to port 5045, and on that input I add the [fields][meraki] field, after which my regular pipeline logic takes over.

So moving services that you are unable to add a field to onto a different port will allow you to uniquely identify the logs and add your own fields.
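I can't test these, but as a sketch of the same idea for Netflow and ArcSight: the netflow and cef codecs ship with Logstash, so instead of the modules you could run them as plain inputs on dedicated ports and tag them the same way. The ports and field names below are only illustrations:

input {
  udp {
    id => "netflow-2055-in"
    port => 2055
    codec => netflow                             # decodes Netflow records
    add_field => { "[fields][netflow]" => true }
  }
  tcp {
    id => "arcsight-5000-in"
    port => 5000
    codec => cef                                 # decodes ArcSight CEF events
    add_field => { "[fields][arcsight]" => true }
  }
}

You would then add matching [fields][netflow] and [fields][arcsight] branches to the routing output shown earlier.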
