Getting a discover/dashboard error of: "Unexpected token < in JSON at position 0"

I'm not able to find much information about this online except for this post. However, their issue was caused by a proxy. I'm simply set up on a single node in my homelab off a SPAN port on my switch (and I have a management port as well). I'm running on ESXi 7.0.

I'll attach some screenshots. It looks like my indices aren't completing the search on the Discover or Dashboard tabs; however, I'm able to get Kibana to talk to Elasticsearch on those indices in Dev Tools.

Here you can see my index patterns:

And here are my indices:

Here you can see I can reach said indices from dev tools:

And here is the error I'm getting:

This is all being run in ESXi on a PowerEdge R420 in my homelab. This is a ROCK NSM setup.

I'm not sure where to go from here, troubleshooting-wise.

Hi @jonezy7173, welcome to the community.

Some questions to get started...

What does "See the full error" at the bottom right show?
Also, if you go to Inspect at the top right and look at the request/response, what do you see?
Are there any conflicts in that `ecs-*` index pattern?
Do the other index patterns work in Discover?

@stephenb "See Full Error" just shows the same error in the center of the screen.

Here is a screenshot of what the inspect shows:

This issue occurs with all the indices I have made.

What do you mean by "Are there conflicts in the `ecs-*` index pattern"?

What happens when you select a different index pattern in the Discover dropdown, for example `ecs-suricata-*`?

In Dev Tools, run:

`GET ecs-*/_search`

What do you get?

Then run this; what do you get?

`GET ecs-suricata-*/_search`

Did you set up the index patterns yourself? Did you select @timestamp as the timestamp field?

Do you have a proxy anywhere between Kibana and Elasticsearch?

That error is probably showing up in the Kibana logs... you should take a look.

Also, I see this... it seems to be a fairly common issue with the ROCK integrations, etc...

Perhaps you should consider using the OOTB integrations for Suricata, etc...

Ohh wow, that package has had no commits for several years; it is possible it is not compatible with the version of the Elastic Stack you are running.

Here are two screenshots from running those two Dev Tools queries:


Also, the issue is the same with other index patterns. I did set them up myself, and I did use @timestamp.

I'll dig into the Kibana logs. Where are those stored?

Also, I initially was going to use an OOTB integration, but I was also running into issues installing a Fleet Server for Fleet-managed agents. I may just end up setting up Logstash for Suricata.

My issue with Logstash originally is that the documentation on setting up input-filter-output pipelines is sparse. I may just have to copy over the input and filter files from the ROCK Logstash and use them in my own ELK setup.

In general, when we ask for output, we would prefer formatted text over screenshots. Screenshots are hard to read and cannot be searched, copied, or debugged.

My suspicion is that you are going to need Elastic Stack 7.9 or 7.10 or so for compatibility... that is just a guess from looking at a few of the issues.

Depends on how/where you installed Kibana... either look in /var/log/kibana or, if it's running as a service, use journalctl.

There is endless Logstash help here :slight_smile: But remember: text, not screenshots. And there are lots of samples if you search.

Also, you could just use the Filebeat Suricata module... no need for Logstash or Fleet in the lab.

You could probably install it, run setup, and be up and running in 10 minutes by following the Filebeat quick start, just using the Suricata module.
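
Roughly something like this (just a sketch; the version in the URL is a placeholder, grab the current one from the quick start page):

```sh
# Download and install Filebeat as a .deb (version is a placeholder):
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.6.2-amd64.deb
sudo dpkg -i filebeat-8.6.2-amd64.deb

# Enable the Suricata module, load the index template and dashboards, start:
sudo filebeat modules enable suricata
sudo filebeat setup -e
sudo service filebeat start
```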

Awesome, I'll dig into all of this tomorrow! Thank you!!


My plan now is to go back to my original idea from before I tried ROCK, except this time using Filebeat instead of agents.

Suricata in a Docker container

ELK in Docker containers

All running on an Ubuntu Server host.

Can I run Filebeat in a Docker container (inside the Suricata container)?

It looks like this is the documentation on how to run Filebeat as a standalone container?

The other option is to just run Suricata as normal and keep ELK in containers, but my original goal was to containerize the whole deployment.

In the same container? Sure, if you do a build with a Dockerfile... not my area of expertise...

But I think you could just use the Filebeat Docker container input... with autodiscover and the module... get set up, and we should be able to help...

Here is the doc for running on Docker...

You will need to run setup and have a proper filebeat.yml ... see here...

Example Config here
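
Something like this in filebeat.yml, just as a sketch (the image-name condition and container log path are assumptions; adjust for your setup):

```yaml
# Hypothetical autodiscover sketch: for any container whose image name
# contains "suricata", read its container logs with the Suricata module.
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: suricata
          config:
            - module: suricata
              eve:
                input:
                  type: container
                  paths:
                    - /var/lib/docker/containers/${data.docker.container.id}/*.log
```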


Good news: I got it working! I have one additional question, but first I'll provide my setup just in case anyone happens to stumble upon this. If you just want to see the question, skip to the end.

VM Max Map Count

First, you have to run `sysctl -w vm.max_map_count=262144`. If you don't do this, your containers will crash.
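
To make that survive a reboot (assuming a standard Ubuntu host), you can also persist it in sysctl.conf:

```sh
# One-off; resets at reboot:
sudo sysctl -w vm.max_map_count=262144

# Persist across reboots:
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```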

Elasticsearch & Kibana

I installed Elasticsearch and Kibana in Docker containers as shown here.

However, instead of running the containers with the `-it` flag, which attaches them to your terminal, you should run them with the `-d` flag, which runs them in the background (detached).
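
So the run commands end up looking roughly like this (a sketch based on that documentation; the network name, container names, and image tag are examples, so match them to your version):

```sh
# Start Elasticsearch and Kibana detached (-d) instead of interactively (-it).
docker network create elastic
docker run -d --name es01 --net elastic -p 9200:9200 \
  docker.elastic.co/elasticsearch/elasticsearch:8.6.2
docker run -d --name kib01 --net elastic -p 5601:5601 \
  docker.elastic.co/kibana/kibana:8.6.2
```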

Then, to get the enrollment token, you can run `docker logs es01`. If that doesn't work, you can run the following commands to get the enrollment token and the elastic password:

`docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana`

`docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic`

Suricata

Once Elasticsearch and Kibana are up and running, you can install Suricata. I did that as shown here. One note: you want to use the -v option as shown, because that will save the eve.json logs to your local box (in my case, my Ubuntu server). This allows your logs to persist through container failures, restarts, shutdowns, etc.
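
For reference, the run command looks roughly like this (the image, capture interface, and host path here are assumptions; use the values from the Suricata Docker documentation):

```sh
# Run Suricata detached on the host network, capturing on eth0; the -v mount
# writes eve.json to the host so logs survive container restarts.
docker run -d --name suricata --net host \
  --cap-add net_admin --cap-add net_raw --cap-add sys_nice \
  -v /var/log/suricata:/var/log/suricata \
  jasonish/suricata:latest -i eth0
```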

Filebeat (skip to the edit below if you want to install Filebeat as a service so it will run persistently)

Now for Filebeat. The documentation on installing Filebeat is here. You can run Filebeat in a container, but since the logs are saved to the local host, I decided to run Filebeat directly on the Ubuntu server for now.

When configuring Filebeat, make sure you edit filebeat.yml to give it a username and password for Elasticsearch, and make sure you specify the host URL as https and enable SSL.

To generate the certificate fingerprint needed for SSL, refer to this documentation, specifically: `openssl x509 -fingerprint -sha256 -in config/certs/http_ca.crt`
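
Putting that together, the relevant output section of filebeat.yml ends up looking roughly like this (host, password, and fingerprint are placeholders for your own values):

```yaml
# Sketch of the Elasticsearch output section in filebeat.yml.
output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "elastic"
  password: "<your-elastic-password>"
  ssl:
    enabled: true
    # SHA-256 fingerprint of http_ca.crt, from the openssl command above:
    ca_trusted_fingerprint: "<your-ca-fingerprint>"
```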

You then need to run `./filebeat setup -e`.

Once that is done, you can run `./filebeat modules enable suricata`.

You then need to cd into modules.d and edit suricata.yml to enable the module and give it the path to your eve.json (the path on your local host).
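
The edited modules.d/suricata.yml ends up looking roughly like this (the eve.json path is an assumption based on the -v mount above):

```yaml
# Enable the eve fileset and point it at eve.json on the host.
- module: suricata
  eve:
    enabled: true
    var.paths: ["/var/log/suricata/eve.json"]
```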

Once that's done you should be able to start it up and it should run!

Now to my question:

When I run Filebeat, it runs in the terminal (even if I try to run it in the background). This means that when my SSH session terminates, Filebeat stops. It's not an issue right now, as when I start it back up it is able to send all of the logs it missed from eve.json on to Elasticsearch. But I'm wondering: is there a way to run Filebeat as a service? Or at least have it run in the background and persist when I close my SSH session?

The only other workaround I can think of is to make a cron job to run it.
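
Another stopgap would be to detach it from the terminal with nohup (a sketch, assuming the standalone binary from the tarball), though as the edit below shows, installing it as a service is the proper fix:

```sh
# Keep Filebeat running after the SSH session closes; output goes to a file.
nohup ./filebeat -e > filebeat.out 2>&1 &
```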

Edit

Got it reinstalled as a .deb package. Just wanted to update this for anyone who finds this in the future.

My biggest frustration when troubleshooting the issues I was having was finding all these posts that were abandoned once the answer was found. I want to make sure my process is documented.

Filebeat as a service

The documentation on installing Filebeat is here. This time I installed it as a .deb package directly on the Ubuntu server so that it runs as a service.

The filebeat.yml should be in `/etc/filebeat`.

When configuring Filebeat, make sure you edit filebeat.yml to give it a username and password for Elasticsearch, and make sure you specify the host URL as https and enable SSL.

To generate the certificate fingerprint needed for SSL, refer to this documentation, specifically: `openssl x509 -fingerprint -sha256 -in config/certs/http_ca.crt`

You then need to cd into modules.d and edit suricata.yml to enable it (set `enabled: true`) and give it the path to your eve.json (the path on your local host).

You now have to run `filebeat setup -e` to set up the index template and some dashboards.

Once that's done, you should be able to start the service. On Ubuntu, the command is `sudo service filebeat start`.
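
Since Ubuntu uses systemd, you can also enable it to start at boot (standard systemd commands, not Filebeat-specific):

```sh
# Start now, enable at boot, and check that it is running:
sudo systemctl enable --now filebeat
sudo systemctl status filebeat
```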

If you `cat eve.json` and there are logs in there, you should now see those logs in Kibana.

Glad you got it working!

Yes, install it as a package (deb or rpm), and then it will run as a service.

