Saved "field" parameter is now invalid. Please select a new field. For syslog module in filebeat

(Puneet Patwari) #1

I am using the Dockerized Elastic Stack (v6.6.0) and everything was working fine until I ran into this new problem. After looking at the same error in various forums, I have not yet been able to solve it. I am running Filebeat, Logstash and Elasticsearch. I enabled the system module in Filebeat to index the logs into Elasticsearch and visualise them in Kibana. I followed the recommended steps of loading the pipeline.json into the Elasticsearch ingest node and setting up the dashboard templates in Kibana before I modified my filebeat.yml to output to Logstash. On running docker-compose, I can see the logs correctly indexed in the "Discover" tab of Kibana and in the "Logs" tab. However, on opening the [Filebeat System] dashboard I get the error

Saved "field" parameter is now invalid. Please select a new field
Please see the screenshot below:

I followed the advice of the experts in various threads and understood that some of the fields are not aggregatable, which is why the problem arises. Please see the image below.


I am not able to understand why certain fields are not aggregatable when there is data in those fields in some of the logs. I am also confused as to why system.syslog.hostname.keyword is aggregatable whereas system.syslog.hostname is not. Please help me solve this problem.

NOTE: Running the Elastic Stack without Docker (in standalone mode) with the same flow on my machine, I am able to visualise the Syslog dashboard properly, and the fields are shown as aggregatable for the index pattern filebeat-*.

(Josh Dover) #2

Raw text fields are not aggregatable but are full-text searchable. Keyword fields are aggregatable because they are intended for structured data such as email addresses, hostnames, zip codes, etc. A keyword field is indexed differently, allowing you to do aggregations like counts, but you cannot do full-text searches on it.
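For illustration, this is roughly what Elasticsearch's default dynamic mapping produces for a string field when no index template is in place (a sketch, not the exact mapping in your cluster; "doc" is the default document type in 6.x). The top-level field is text (searchable, not aggregatable) and the keyword sub-field is what you can aggregate on:

{
  "mappings": {
    "doc": {
      "properties": {
        "system": {
          "properties": {
            "syslog": {
              "properties": {
                "hostname": {
                  "type": "text",
                  "fields": {
                    "keyword": {
                      "type": "keyword",
                      "ignore_above": 256
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

This is why you see both system.syslog.hostname (text, not aggregatable) and system.syslog.hostname.keyword (aggregatable) in the index pattern.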

Filebeat by default creates all of these fields for you when it starts up, so you don't normally need to worry about that. However, if you set up Filebeat without the system module and then enabled the module later without running the filebeat setup command again, you may need to run setup again to reconfigure the Kibana dashboards.
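For reference, re-running just the dashboard setup looks something like the following (the Kibana host is an assumption; adjust it for your environment):

$ filebeat setup --dashboards -E setup.kibana.host="localhost:5601"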

If that doesn't work, I would also try forcing Kibana to refresh the index pattern by clicking the refresh button in the top right corner of the screen you have above for filebeat-*.

Let me know if that doesn't work!

(Puneet Patwari) #3

Hi @joshdover, Thanks for the reply.

In my setup I had enabled the system module in filebeat.yml. I also loaded the Kibana dashboards using the configuration setup.dashboards.enabled: true. I also tried a different approach: explicitly running the setup and --pipelines commands from the terminal after starting Filebeat, without the filebeat.module configuration in filebeat.yml. Either way, I am not able to get the dashboard working correctly. This problem only happens in the dockerised setup. Please check my GitHub repo if you want to reproduce it.

I have run the modules successfully in a standalone setup (not Docker) and got the beautiful dashboards seamlessly. The point to note is that in the standalone setup my index pattern filebeat-* does not have the field system.syslog.hostname.keyword (as happens in the Docker setup). Please see the screenshot, which is taken from my standalone Elastic Stack setup.


The field system.syslog.message has type text and hence is not aggregatable, which is expected behaviour. But the others are aggregatable, and hence we can see the dashboards correctly.

Refreshing the index pattern does not work at all.

Please let me know if you need any clarification. I would really appreciate your help.

(Josh Dover) #4

Thanks for providing more info!

It appears that Filebeat is sending data before filebeat setup has created the index template, which causes Elasticsearch to auto-generate mappings for your index.

This is probably happening because you are piping data through Logstash rather than connecting to Elasticsearch directly. This is fine, but you will need to load the index template manually in that case.

I believe the command you'll want to run is something like:

$ filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

You'll need to run this before actually starting filebeat, so in your setup you would need to override the default Docker command to run this setup command first, and then run the regular filebeat command to start shipping data to Logstash.
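In docker-compose, that could look something like the following sketch (the service names, image tag, and the elasticsearch hostname are assumptions based on a typical compose setup; adjust them to match your repo):

filebeat:
  image: docker.elastic.co/beats/filebeat:6.6.0
  depends_on:
    - elasticsearch
    - logstash
  # Run the one-time template setup against Elasticsearch,
  # then start Filebeat normally so it ships to Logstash.
  command: bash -c "filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=[\"elasticsearch:9200\"]' && filebeat -e"

Note that depends_on only controls start order, not readiness, so Elasticsearch must be up before the setup command runs.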

(Puneet Patwari) #5

Hi @joshdover, I had taken the approach of setting up the templates and module-specific ingest pipelines in docker-compose through a separate container called pipeline-setup (as can be seen in the repo). But now that you have stated the problem explicitly, I can see that the order in which the Docker containers start is not fixed, so the filebeat container might start even before pipeline-setup has finished executing, resulting in this anomaly. Could this be the problem?
I will try the solution as described by you to see if it works now. Thanks for the clarification.

(Josh Dover) #6

I believe the problem just stems from the index template never being set by Filebeat. The template is only set if your Filebeat output is Elasticsearch, not an intermediate like Logstash. When Logstash is in the middle, Filebeat doesn't know how to connect to Elasticsearch to set the index template, so you need to run the setup command above before Filebeat starts shipping data to Logstash.

My guess is this works in a non-dockerized environment because Filebeat by default connects to localhost:9200 for Elasticsearch. In Docker, this connection fails because Elasticsearch is on another host, whereas outside of Docker localhost:9200 works, so the index template gets set.

In summary: I don't think you need to change your boot order, just add the filebeat setup command with the correct configuration for reaching Elasticsearch, then start filebeat the normal way.
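Once setup has run, you can check from inside the Docker network that the template actually landed before any data is shipped (Filebeat names its template after its version; the elasticsearch host is an assumption):

$ curl -s 'http://elasticsearch:9200/_template/filebeat-6.6.0?pretty'

If that returns the template JSON rather than an empty object, new filebeat-* indices will pick up the correct mappings.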

(system) closed #7

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.