Filebeat index not created

Hi Guys,
I spent hours trying to get Filebeat to index a custom log file, but I am not having any luck seeing any of the data ingested.
I provided all of my setup details on Stack Overflow (the YAML formatting here gets mangled for some reason): elasticsearch - elastic filebeat index not created - Stack Overflow

Please let me know if you need me to repost those details here.

Any idea what I am missing?


Hi @dev9 Welcome to the community!

BTW We are not all guys :slight_smile:

First thing I would do is comment out all the setup stuff, just to reduce the variables:

# setup.ilm.enabled: false
# setup.ilm.rollover_alias: "filebeat"
# setup.ilm.pattern: "{now/d}-000001"
# setup.template.name: "myindex"
# setup.template.pattern: "myindex-*"

I suspect what may be happening is that the file has already been read once (there's a good chance it has). Filebeat will only read a file once, because it keeps track of what it has read.

So I would clean out the registry under the data directory and try again.

This is a common thing people get tripped up on.
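For a tar/zip install, clearing the registry is just removing the state directory while Filebeat is stopped. The path below is an assumption (it's relative to the Filebeat home directory); package installs keep their state under /var/lib/filebeat instead.

```shell
# Run from the Filebeat home directory, with Filebeat stopped.
# Removing the registry makes Filebeat forget its read offsets,
# so it will re-read the log files from the beginning.
rm -rf ./data/registry
```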

Also, you can start Filebeat and watch the logs; you should be able to see any error messages.

You can post the logs here.


Thanks, Stephen!
My apologies, my intention was not to sound sexist :slight_smile:. I usually use the term "guys" instead of "ladies and gentlemen" :slight_smile:
I reduced the configuration to the following and I am still having issues.

elastic yaml:

cluster.name: my-elastic-cluster
node.name: es-node-1
network.host: localhost
http.port: 9200
discovery.type: single-node
xpack.security.enabled: true

Kibana's yaml

server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.username: "kibana_system"

filebeat yaml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /Users/me/Downloads/logs/mylog.log
  json.message_key: severity
  json.keys_under_root: true
  #json.overwrite_keys: true
  #json.add_error_key: true
  #json.expand_keys: true

#setup.template.overwrite: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s

# ======================= Elasticsearch template setting =======================
#index.number_of_shards: 1

output.elasticsearch:
  hosts: ["localhost:9200"]
  template.name: filebeat
  template.path: filebeat.template.json

I also followed the instructions on Set up minimal security for Elasticsearch | Elasticsearch Guide [7.15] | Elastic and used the ./bin/elasticsearch-setup-passwords interactive option. Now whenever I try to execute filebeat setup, I get:

Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://localhost:9200: 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}]

The reason I changed the original configuration was this blog post: Structured logging with Filebeat | Elastic Blog. Unfortunately, I am unable to proceed due to the authentication error when I execute filebeat setup.

Thanks for the support.

Since you set up the passwords, you need to include the elastic username and password in that output.elasticsearch section.

(Technically you could have used the filebeat user but I'd start with the elastic user first)
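A minimal sketch of what that output.elasticsearch section might look like (the password value here is a placeholder, not a real credential):

```
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "<the-password-you-set-with-elasticsearch-setup-passwords>"
```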

Not sure why you're including all the template stuff.

The way I recommend doing this is getting all the connectivity working first, and then adding your templates etc. afterwards.

Just a recommendation.

BTW, if you're interested in how to create a secure stack on a single server, I wrote a step-by-step walkthrough.


Thanks again Stephen.
I added the username and password as per your recommendation and removed these two lines from the filebeat.yml:

  template.name: filebeat
  template.path: filebeat.template.json

I executed filebeat setup and it ran OK, then started Filebeat using nohup ./filebeat -e -c filebeat.yml &, followed by tail -f nohup.out. I see the events being output, but when I visit Kibana's Discover, I still don't see any data. No errors in Elasticsearch or Kibana, just no data.

Did you clean out the filebeat registry?

Can you show those log lines?

Most people that want help post logs... You don't need to start it in the background; just start it in the foreground, capture the first 50 lines or so of the logs, and post them here.

Also have you run

GET _cat/indices?v

In Kibana -> Dev Tools

I am sorry, I have no idea how to clean the registry. I see a folder called registry; should I delete it?

warning: usage of JAVA_HOME is deprecated, use ES_JAVA_HOME
warning: usage of JAVA_HOME is deprecated, use ES_JAVA_HOME
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2021-10-23T20:39:06,551][INFO ][o.e.n.Node               ] [es-node-1] version[7.15.1], pid[53007], build[default/tar/83c34f456ae29d60e94d886e455e6a3409bba9ed/2021-10-07T21:56:19.031608185Z], OS[Mac OS X/11.5/x86_64], JVM[Eclipse Foundation/OpenJDK 64-Bit Server VM/11.0.12/11.0.12+7]
[2021-10-23T20:39:06,559][INFO ][o.e.n.Node               ] [es-node-1] JVM home [/Library/Java/JavaVirtualMachines/temurin-11.jre/Contents/Home], using bundled JDK [false]
[2021-10-23T20:39:06,559][INFO ][o.e.n.Node               ] [es-node-1] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly,, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms1024m, -Xmx1024m, -XX:MaxDirectMemorySize=536870912, -Des.path.home=/Users/sammy/Documents/elk/elastic, -Des.path.conf=/Users/sammy/Documents/elk/elastic/config, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=true]

executing GET _cat/indices?v yields the following

health status index                             uuid                   pri rep docs.count docs.deleted store.size
green  open   .kibana_7.15.1_001                NJlrb97eT6aGuGZik2b7Eg   1   0       2308          505      3.1mb          3.1mb
green  open   .geoip_databases                  8JTvUiJFTXutDwV8KNpmhg   1   0         41           10     44.9mb         44.9mb
green  open   .security-7                       GMvs7im3RiiXcbwFdFG_gA   1   0         57            0    306.3kb        306.3kb
green  open   .apm-custom-link                  6TlRLLE-TG2tF0oWFd2dGQ   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.15.1-000001   vQ-s7-PrS3yy08159iNLRw   1   0         20            0     50.8kb         50.8kb
green  open   .apm-agent-configuration          mL904RRxSE6faXIfDvlq_g   1   0          0            0       208b           208b
yellow open   filebeat-7.15.1-2021.10.23-000001 B0_RNKWySr2x5nyC0LQPfw   1   1          0            0       208b           208b
green  open   .async-search                     9quJYk7LRmuZjZlJT6s7yg   1   0          0          168       46kb           46kb
green  open   .tasks                            KTTMVQLaRICD0ngyay9r-Q   1   0         22            0     30.8kb         30.8kb
green  open   .kibana_task_manager_7.15.1_001   Z7hWEqQvSgutJ1LcXYDtjw   1   0         16         4986        1mb            1mb

I really appreciate your patience and support.

The _cat output looks good; I see the Filebeat bootstrap index.

Actually just delete that whole filebeat data directory

Logs: that is only 6 lines of logs... not much I can do with that...

Clean out the data directory and start Filebeat again... see if the data shows up... if not, I need about 50+ lines of the logs from the start command...

I deleted the data directory and ran setup; still no data. The first 53 lines of logs were too much for this forum to take, so I added them to Google Drive here: logdata.log - Google Drive

See, all of these are mapping errors, and thus the docs are failing to index.
Not sure exactly what kind of logs these are, but I have a bit of a theory; let's first see if we can get this running.

2021-10-24T00:27:14.324-0400    WARN    [elasticsearch] elasticsearch/client.go:412 Cannot index event 


{"type":"mapper_parsing_exception","reason":"object mapping for [source] tried to parse field [source] as object, but found a concrete value"}, dropping event!
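To illustrate the kind of conflict behind that error: if the index mapping already defines source as an object, a document where source is a plain string will be rejected. A hypothetical Dev Tools reproduction (the index name and sub-field are made up for the example):

```
PUT demo-conflict
{
  "mappings": {
    "properties": {
      "source": {
        "properties": {
          "ip": { "type": "keyword" }
        }
      }
    }
  }
}

POST demo-conflict/_doc
{
  "source": "logger"
}
```

The second request fails with a mapper_parsing_exception like the one above, because the concrete value "logger" cannot be parsed into the object mapping.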

Clean up the registry...

Comment out

#json.message_key: severity 
#json.keys_under_root: true

In Dev Tools

DELETE filebeat*

Run setup again then start filebeat

I think your JSON is trying to overwrite fields that have already been defined as specific types, but your log is trying to write them as different types.

See if you can get it to ingest some and then I can perhaps help with the JSON part of it.

It looks like you're trying to tell it to parse severity as a JSON object, but it's just a simple field; that's causing a parsing exception.

See, the logs help :slight_smile:

BTW, that works well for sharing logs.

First of all, thank you very much for your patience and support. You are the definition of a professional, and I am grateful for all you do.

Now I see the logs, but I have one more question for you. Since these logs could contain error details, is it possible to query them using the JSON key values? For example, in this entry, the value of "environment" is "Env1". How do I query against these keys?

{"schemaVersion":1,"timestamp":"2021-10-12T13:30:33.648Z","severity":"info","details":"This is a test message, please ignore fe7a9860-26db-4474-a8f6-7a376e3251ce","clientIp":null,"serverName":"Server1","serverIp":"","serviceComponent":null,"serviceProcessId":1708,"serviceRealm":null,"identity":null,"identityDelegate":null,"correlationId":null,"correlationRole":"PARTICIPANT","requestId":null,"requestRole":null,"messageId":null,"messageRole":null,"category":"TEST","class":"logger.test","action":null,"target":null,"result":null,"dataClassification":"CONFIDENTIAL","resultReason":null,"duration":0,"source":"logger","serverPort":2001,"serviceName":"Service1","serviceVersion":"1.2.1","environment":"Env1"}

Once again, thank you for your help

You're welcome

Well, perhaps you should read the docs about mappings, field data types, and the query DSL.

In short you'll use a term query.

You should open a new thread on query once you get used to looking at your data.

You can see your data with this

GET filebeat*/_search

Your query would look something like this,

Since you're using the default mapping, each field will get both a keyword and a text sub-field; you will want to define your own mappings when you go to production.

GET /filebeat*/_search
{
  "query": {
    "term": {
      "environment.keyword": {
        "value": "Env1"
      }
    }
  }
}

Oh and when you're ready open a new thread on searching and mappings :slight_smile:

I will definitely read up on that topic, try a few things then open a new thread only if I need to.
Again, thank you very much for your support.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.