Configuration problem

I have Elastic Cloud activated. I want to set up my environment in the following way:

[beats, on prem] ----> [logstash, on prem] ----> [elastic cloud]

So basically: install Beats on the servers, send logs to Logstash on prem, and then ship them from Logstash out to Elastic Cloud.

Here is my configuration for the *.conf file:

input {
  beats {
    port => 5044
    type => "log"
    host => "0.0.0.0"
  }
}

output {
  elasticsearch {
    hosts => "https://xxxxdeploymxxx1.xxxx:9243"
    user => "elastic"
    password => "xxxxxxx"
    index => "%{[@metadata][beat]}-%{+yyyy.ww}"
    document_type => "%{[@metadata][type]}"
  }
}

And the configuration for logstash.yml:

xpack.management.enabled: true
xpack.management.elasticsearch.cloud_id: xxxxxdeployment:xxxxxxxx
xpack.management.elasticsearch.cloud_auth: elasxxxtic:xxxxxr

Finally, this is the error I get when I run Logstash:

[2021-05-28T11:28:10,662][WARN ][logstash.configmanagement.elasticsearchsource] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-05-28T11:28:10,697][DEBUG][logstash.configmanagement.elasticsearchsource] Reading configuration from Elasticsearch version 7.13.0
[2021-05-28T11:28:10,748][DEBUG][logstash.configmanagement.systemindicesfetcher] Could not find a remote configuration for specific `pipeline_id` {:pipeline_ids=>["main"]}
[2021-05-28T11:28:10,757][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2021-05-28T11:28:10,779][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}
[2021-05-28T11:28:15,066][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2021-05-28T11:28:15,182][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Young Generation"}
[2021-05-28T11:28:15,185][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Old Generation"}
[2021-05-28T11:28:15,854][DEBUG][logstash.configmanagement.elasticsearchsource] Reading configuration from Elasticsearch version 7.13.0
[2021-05-28T11:28:15,885][DEBUG][logstash.configmanagement.systemindicesfetcher] Could not find a remote configuration for specific `pipeline_id` {:pipeline_ids=>["main"]}
[2021-05-28T11:28:15,886][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2021-05-28T11:28:15,889][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}
[2021-05-28T11:28:20,078][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2021-05-28T11:28:20,221][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Young Generation"}
[2021-05-28T11:28:20,221][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Old Generation"}
[2021-05-28T11:28:20,828][DEBUG][logstash.configmanagement.elasticsearchsource] Reading configuration from Elasticsearch version 7.13.0
[2021-05-28T11:28:20,860][DEBUG][logstash.configmanagement.systemindicesfetcher] Could not find a remote configuration for specific `pipeline_id` {:pipeline_ids=>["main"]}
[2021-05-28T11:28:20,861][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2021-05-28T11:28:20,863][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}
[2021-05-28T11:28:25,093][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2021-05-28T11:28:25,248][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Young Generation"}
[2021-05-28T11:28:25,252][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Old Generation"}
[2021-05-28T11:28:25,823][DEBUG][logstash.configmanagement.elasticsearchsource] Reading configuration from Elasticsearch version 7.13.0
[2021-05-28T11:28:25,853][DEBUG][logstash.configmanagement.systemindicesfetcher] Could not find a remote configuration for specific `pipeline_id` {:pipeline_ids=>["main"]}
[2021-05-28T11:28:25,854][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2021-05-28T11:28:25,857][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}
[2021-05-28T11:28:30,101][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2021-05-28T11:28:30,268][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Young Generation"}
[2021-05-28T11:28:30,279][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Old Generation"}
[2021-05-28T11:28:30,842][DEBUG][logstash.configmanagement.elasticsearchsource] Reading configuration from Elasticsearch version 7.13.0
[2021-05-28T11:28:30,872][DEBUG][logstash.configmanagement.systemindicesfetcher] Could not find a remote configuration for specific `pipeline_id` {:pipeline_ids=>["main"]}

Can you please help?
PS: I tried uncommenting "pipeline.id: main" and setting path.config: C:/logstash/*.conf, but this did not solve the problem:

[2021-05-28T11:54:48,163][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2021-05-28T11:54:48,186][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}
[2021-05-28T11:54:52,509][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2021-05-28T11:54:52,627][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Young Generation"}
[2021-05-28T11:54:52,628][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Old Generation"}
[2021-05-28T11:54:53,251][DEBUG][logstash.configmanagement.elasticsearchsource] Reading configuration from Elasticsearch version 7.13.0
[2021-05-28T11:54:53,285][DEBUG][logstash.configmanagement.systemindicesfetcher] Could not find a remote configuration for specific `pipeline_id` {:pipeline_ids=>["main"]}
[2021-05-28T11:54:53,286][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2021-05-28T11:54:53,289][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}
[2021-05-28T11:54:57,522][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2021-05-28T11:54:57,664][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Young Generation"}
[2021-05-28T11:54:57,664][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Old Generation"}
[2021-05-28T11:54:58,235][DEBUG][logstash.configmanagement.elasticsearchsource] Reading configuration from Elasticsearch version 7.13.0
[2021-05-28T11:54:58,264][DEBUG][logstash.configmanagement.systemindicesfetcher] Could not find a remote configuration for specific `pipeline_id` {:pipeline_ids=>["main"]}
[2021-05-28T11:54:58,266][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2021-05-28T11:54:58,267][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}

It sounds like you are expecting Logstash to read its configuration from a file, but I think this message means it is configured to read the configuration from Elasticsearch:

[logstash.configmanagement.elasticsearchsource] Reading configuration from Elasticsearch version 7.13.0


Thank you for your post, Badger. How do I fix that? Do I have to configure pipelines in the pipeline management UI? Can this be configured in the logstash.yml file? Something else?

That enables centralized configuration management. If you do not want it enabled, then do not set it. Read the documentation I linked to.
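For reference, a hedged sketch of the two mutually exclusive modes in logstash.yml, reusing the masked placeholder values from above (if I read the docs right, xpack.management.pipeline.id controls which pipeline IDs are fetched, and it defaults to ["main"]):

# Option A: local pipelines -- leave every xpack.management.* setting unset
# and point path.config at the local file(s):
# path.config: C:/logstash/*.conf

# Option B: centralized pipeline management -- pipelines are fetched from
# Elasticsearch and local *.conf files are ignored, so each listed ID must
# exist in the Kibana pipeline management UI:
# xpack.management.enabled: true
# xpack.management.pipeline.id: ["main"]
# xpack.management.elasticsearch.cloud_id: xxxxxdeployment:xxxxxxxx
# xpack.management.elasticsearch.cloud_auth: elasxxxtic:xxxxxr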


OK, so I followed the documentation as best I could; in both cases I am still getting an error.

Case 1: using centralized management. I created the following Logstash pipeline using the Kibana UI:

input {
  beats {
    port => 5044
    type => "log"
    host => "0.0.0.0"
  }
}

output {
  elasticsearch {
    hosts => "https://xxxxxx:9243"
    user => "elastic"
    password => "xxxxxxr"
    index => "%{[@metadata][beat]}-%{+yyyy.ww}"
    #document_type => "%{[@metadata][type]}"
  }
}

I am getting the following error:

[2021-05-28T14:05:57,449][ERROR][logstash.javapipeline    ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<LogStash::Json::ParserError: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
 at [Source: (byte[])"<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"/><meta name="viewport" content="width=device-width"/><title>Elastic</title><style>

        @font-face {
          font-family: 'Inter UI';
          font-style: normal;
          font-weight: 100;
          src: url('/ui/fonts/inter_ui/Inter-UI-Thin-BETA.woff2') format('woff2'), url('/ui/fonts/inter_ui/Inter-UI-Thin-BETA.woff') format('woff');
        }

        @fon"[truncated 133696 bytes]; line: 1, column: 2]>, :backtrace=>["C:/logstash-7.12.1/logstash-core/lib/logstash/json.rb:32:in `jruby_load'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:470:in `get_es_version'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:273:in `block in healthcheck!'", "org/jruby/RubyHash.java:1415:in `each'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:265:in `healthcheck!'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:367:in `update_urls'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:83:in `update_initial_urls'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:77:in `start'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/outputs/elasticsearch/http_client.rb:338:in `build_pool'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/outputs/elasticsearch/http_client.rb:64:in `initialize'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:106:in `create_http_client'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:102:in `build'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:34:in `build_client'", "C:/logstash-7.12.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.8.6-java/lib/logstash/outputs/elasticsearch.rb:270:in `register'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:131:in `register'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:68:in `register'", "C:/logstash-7.12.1/logstash-core/lib/logstash/java_pipeline.rb:228:in `block in register_plugins'", "org/jruby/RubyArray.java:1809:in `each'", "C:/logstash-7.12.1/logstash-core/lib/logstash/java_pipeline.rb:227:in `register_plugins'", "C:/logstash-7.12.1/logstash-core/lib/logstash/java_pipeline.rb:585:in `maybe_setup_out_plugins'", "C:/logstash-7.12.1/logstash-core/lib/logstash/java_pipeline.rb:240:in `start_workers'", "C:/logstash-7.12.1/logstash-core/lib/logstash/java_pipeline.rb:185:in `run'", "C:/logstash-7.12.1/logstash-core/lib/logstash/java_pipeline.rb:137:in `block in start'"], "pipeline.sources"=>["central pipeline management"], :thread=>"#<Thread:0x373d74dd run>"}
[2021-05-28T14:05:57,456][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2021-05-28T14:05:57,465][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}

Case 2: management enabled set to true, with a path specified for a local *.conf file.

When the elasticsearch output connects to that, it is getting HTML back. It expects JSON, which is what Elasticsearch would return. Have you pointed the output at a Kibana instance?

"https://xxxxxx:9243"

I use the Cloud ID. Should I use the Kibana cluster ID instead?

Sorry, I have never used cloud instances of Elastic, so I am not able to answer.


Np. I understand. Thanks for trying. Can anyone else advise?
The problem is definitely somewhere in the conf file.

Perhaps take a look at this thread.

It refers to Packetbeat, but the concept / code is the same.

Assuming you just want to use Logstash as the pass-through, your logstash conf file should look like this:

################################################
# beats->logstash->es default config.
################################################
input {
  beats {
    port => 5044
  }
}

output {
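  # Events produced by a Beats module carry an ingest pipeline name in
  # @metadata; route those through that pipeline so the module's parsing
  # still happens in Elasticsearch.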
  if [@metadata][pipeline] {
    elasticsearch {
      cloud_auth => "elastic:password"
      cloud_id => "mycloud:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRj......"

      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}" 
    }
  } else {
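    # No ingest pipeline requested in @metadata: index the event directly.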
    elasticsearch {
      cloud_auth => "elastic:password"
      cloud_id => "mycloud:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRj......"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}

Worked like a charm, big thank you Stephen!


Hi Stephen and all, I bumped into another problem now... I am able to see data and create indices, but I am unable to see hosts or populate even the most basic charts/dashboards. Here are the errors I am getting:

  • failed to run search on hosts kpi hosts (400)
  • failed to run search on hosts kpi unique ips (400)
  • failed to run search on all hosts (400)
index": "winlogbeat-7.13.0",
          "node": "oArFASb5Qzit5aB33VdcsQ",
          "reason": {
            "type": "illegal_argument_exception",
            "reason": "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [host.name] in order to load field data by uninverting the inverted index. Note that this can use significant memory."

Can you please advise?

Looks like you did not run winlogbeat setup first, so the template was not loaded and thus the correct mapping was not applied. (It looks like you are using the default mapping.)
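You can confirm this from the Kibana Dev Tools console (a sketch; the index name is taken from the error above). With the winlogbeat template loaded, host.name is mapped as keyword; with the default dynamic mapping it comes back as text with only a .keyword sub-field:

GET winlogbeat-7.13.0/_mapping/field/host.name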

You need to follow the steps I showed in that other thread... not just add the logstash conf and run...

I always suggest setting up a beat direct to ES first... see that it is working... then put Logstash in the middle.

I always suggest setting up a beat direct to ES first... see that it is working... then put Logstash in the middle.
That is a great suggestion, thank you. I will circle back to the suggested articles.

I have been reading about this problem in a similar post:
https://discuss.elastic.co/t/failed-to-log-events-to-logstash/234305
I have learned that if I am using the Logstash output, I won't be able to manage/load indices automatically (with "setup -e").

Therefore, I studied how to set it up manually:
https://www.elastic.co/guide/en/beats/winlogbeat/current/winlogbeat-template.html#load-template-manually

but again, the suggested command

PS > Invoke-RestMethod -Method Put -ContentType "application/json" -InFile winlogbeat.template.json -Uri http://localhost:9200/_template/winlogbeat-7.13.0

only shows how to load it into Elasticsearch on localhost. The question is how to load it into Elastic Cloud, using Logstash or manually?
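(For reference, a hedged sketch of that same command pointed at a Cloud deployment instead of localhost; the HTTPS endpoint is the masked Elasticsearch URL from the output section above, and basic-auth credentials are supplied via -Credential. This is not needed if you follow the setup -e route below.)

PS > $cred = Get-Credential -UserName elastic -Message "Elastic Cloud password"
PS > Invoke-RestMethod -Method Put -ContentType "application/json" -InFile winlogbeat.template.json -Uri https://xxxxdeploymxxx1.xxxx:9243/_template/winlogbeat-7.13.0 -Credential $cred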

See, you are trying to make it difficult, loading manually etc.. etc.. etc.. :wink:

If you run

winlogbeat setup -e

when winlogbeat is pointed at Elastic Cloud, everything will get set up... everything: dashboards, templates, ILM, ingest pipelines... everything. Loading manually is not recommended by me.
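A minimal sketch of the relevant winlogbeat.yml lines for that one-time setup run, with placeholder credentials (switch the output back to Logstash afterwards):

# winlogbeat.yml -- one-time setup run, pointed directly at Elastic Cloud
cloud.id: "mycloud:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRj......"
cloud.auth: "elastic:password"

# output.logstash:            # re-enable after winlogbeat setup -e completes
#   hosts: ["localhost:5044"]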

Then put Logstash in the middle. Unfortunately, because you did not do this first, you probably need to clean up the old indices.

I can only make suggestions :wink: You can


Thanks Stephen, I really appreciate your help.

I have never thought about connecting Winlogbeat directly to Elastic Cloud and running "setup -e" first (that's lack of experience). I thought that switching between ES cloud and Logstash would break things.

Anyhow, before I break more stuff (this is why I am doing it on a lab cluster before production)...
If I understand correctly, I need to edit my winlogbeat .yml file and point it directly at ES cloud. In order to do so, I need to comment out the Logstash output and uncomment the Elastic output with the following entries:

      cloud.auth: "elastic:password"
      cloud.id: "mycloud:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRj......"

After this I just need to run "setup -e" and then set everything back to point to Logstash.

Now... the question is: how do I clean up old indices? Is there an API call that can help me with it?
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html
Found this one; I hope it will work...

I gave you step-by-step details here:

Just replace the word packetbeat with winlogbeat.

Solution:
So actually there is very little you need to do to make this all work. We will use Logstash as a passthrough and let the winlogbeat modules do their work; ECS formatting, templates, GeoIP, and index lifecycle management will all be taken care of.

Here is what I recommend; try to resist the urge to make this more complex than it needs to be.

  1. On a single host, perform Steps 1-5 on the Winlogbeat Quickstart page for Elasticsearch Service. This will set up Packetbeat and all the associated assets in Elasticsearch and Kibana.

Note: setup only needs to run once, whether you are setting up on 1 host or 1000 hosts; it just loads all the needed artifacts. And if you already did all this... and you still have the cluster, you don't even need to do it again.

  2. Now in the winlogbeat.yml, comment out cloud.id and cloud.auth: and configure the output section of winlogbeat to point to Logstash. Comment out the output.elasticsearch: section. Now Packetbeat is pointed at your on-prem Logstash.
    output.logstash:
      # The Logstash hosts
      hosts: ["localhost:5044"]
      ...
  3. Set up Logstash. Below is the logstash-beats-es.conf that will support all the beats functionality. Logstash simply acts as a passthrough; Packetbeat functionality will magically get passed through.
  4. Start Logstash, then start Packetbeat... take a look... data should start to flow exactly as it did when Packetbeat was pointed at Elastic Cloud directly.
  5. Deploy Packetbeat on the other hosts, configured to point at this Logstash.

Logstash config for the Beats pass-through:

################################################
# beats->logstash->es default config.
################################################
input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      cloud_auth => "elastic:password"
      cloud_id => "mycloud:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRj......"

      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}" 
    }
  } else {
    elasticsearch {
      cloud_auth => "elastic:password"
      cloud_id => "mycloud:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRj......"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}

To delete indices, just go into the Kibana Dev Tools console and delete them like this:

GET _cat/indices/winlogbeat-*?v

DELETE winlogbeat-7.12.0-000001

You are a genius, my friend. I promise to read your suggestions next time before asking more questions.
I connected Winlogbeat directly to Elastic and executed setup -e with no problems.

I am not getting any more errors. Hosts are showing up as they should. The only problem now is that logs are shown at the wrong time (just the hour is off).

Also, do I need to run setup -e for every single Winlogbeat computer/VM, or was it only necessary once, for the first Winlogbeat "agent"?

From the previous post... perhaps read more carefully.

Dates are stored in UTC inside Elasticsearch and are shown in the local time zone in Kibana; there are lots of posts on that.

If you have a time zone issue, open a separate thread.


Great. As always thank you.

Slightly different, more technical question, but in the same scope.
With the setup I have, (beat) --> (logstash) --> (elastic cloud), is Logstash expected to store any data locally? If yes, for how long? Since I am using it only to pass data from Beats to Elastic, what disk size should I assign to it?

Also, I tried setting up Metricbeat following your instructions and the instructions from the guide, and I was not able to. It looks like Metricbeat is able to successfully set up dashboards etc., and it sends data out to Elastic, but most charts in the dashboards are not working.


The Host overview seems to be working fine (except for the time).

Is the Metricbeat setup procedure different from Packetbeat and Winlogbeat?

https://www.elastic.co/guide/en/beats/metricbeat/7.13/metricbeat-installation-configuration.html