Elastic not creating fields

Hello,

I'm having a problem I can't figure out at all. I'm using a KV filter in Logstash to split firewall log lines into fields. Tests using a stdout output show the logs being parsed exactly as expected.
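
For reference, this is roughly the shape of the config I mean; the real files are in the previous thread, and the port and type here are just illustrative:

input {
  udp {
    port => 514
    type => "fortigate"
  }
}

filter {
  # comma-separated key=value pairs, e.g. srcip=1.2.3.4,dstip=5.6.7.8
  kv {
    field_split => ","
    value_split => "="
  }
}

output {
  stdout { codec => rubydebug }
}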

However, when I change the output to Elastic, nothing gets into Elastic. I've checked the Logstash logs and I can't spot anything wrong.

The strange thing is that the moment I turn off comma-separated log files, data is pushed into Elastic, only everything ends up in the message field, which is obviously not what I want.

To me it looks like Elastic might be refusing to create the fields made by the KV filter. Is there any way I can check what is happening on Elastic's end?

All config files and log examples are in the previous thread I created.

I'm giving a presentation at the end of the week, and I really hope I can get this to work to demonstrate that we could use Elastic to ingest and analyze all our logs.

Did you check the Elasticsearch logs?

I did, and the only thing I can tell from them is that when I use the config that works with stdout, Elasticsearch doesn't appear to be creating a mapping.

When I switch the KV values ("," instead of "=" and vice versa), Elastic creates a mapping and data shows up, albeit with the value becoming the field name and the field name becoming the value.
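
To be concrete, by switching the KV values I mean swapping the separators, roughly like this (illustrative, not my exact filter):

# Correct for comma-separated key=value logs: stdout parses fine, but nothing reaches Elastic
kv {
  field_split => ","
  value_split => "="
}

# Swapped: the parse is wrong, but an index and mapping do get created
kv {
  field_split => "="
  value_split => ","
}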

No matter what I do, I can't get Elastic/Logstash to process the data when the KV filter is set up to parse the logs correctly (again, the stdout output is correct).

No errors/warnings show up in the logstash or elastic logs.

If data is making it to Elasticsearch then it will create a mapping. There is no way for Elasticsearch to receive data and not do that.
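
You can verify that from the outside; something like this (adjust the host and index pattern to your setup) will show whether any index or mapping exists:

curl 'localhost:9200/_cat/indices?v'
curl 'localhost:9200/fortigate-*/_mapping?pretty'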

Looking at the other thread, this does not look valid;

hosts => localhost

Try;

hosts => [ "http://localhost:9200" ]

Thanks, but that is not the problem. As I said, the moment I change the KV filter values or turn off comma-separated values on the log, it works.

I tried your suggestion, but it does not work; Elastic is not generating any index. Again, the moment I turn off comma-separated values it works. However, if I change the KV filter to work with the non-comma-separated logs, data stops coming in again.

Long story short: data correctly split by the KV filter (confirmed with stdout) doesn't make it into Elastic.

As neither Logstash nor Elastic is reporting any errors, I have no clue what to do.

This is the Elasticsearch log. I deleted the index and had logs coming in for a couple of minutes in comma-separated style; no index was created. The moment I turned that off on the FortiGate, Elastic created an index.

[2017-09-21T08:54:20,372][INFO ][o.e.n.Node               ] [] initializing ...
[2017-09-21T08:54:20,812][INFO ][o.e.e.NodeEnvironment    ] [d7ik_di] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [28.6gb], net total_space [46gb], spins? [possibly], types [ext4]
[2017-09-21T08:54:20,812][INFO ][o.e.e.NodeEnvironment    ] [d7ik_di] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-09-21T08:54:27,615][INFO ][o.e.n.Node               ] node name [d7ik_di] derived from node ID [d7ik_di8QNu553qFasCxlA]; set [node.name] to override
[2017-09-21T08:54:27,622][INFO ][o.e.n.Node               ] version[5.4.1], pid[1184], build[2cfe0df/2017-05-29T16:05:51.443Z], OS[Linux/4.10.0-35-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]
[2017-09-21T08:54:27,623][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-09-21T08:54:31,561][INFO ][o.e.p.PluginsService     ] [d7ik_di] loaded module [aggs-matrix-stats]
[2017-09-21T08:54:31,561][INFO ][o.e.p.PluginsService     ] [d7ik_di] loaded module [ingest-common]
[2017-09-21T08:54:31,561][INFO ][o.e.p.PluginsService     ] [d7ik_di] loaded module [lang-expression]
[2017-09-21T08:54:31,561][INFO ][o.e.p.PluginsService     ] [d7ik_di] loaded module [lang-groovy]
[2017-09-21T08:54:31,561][INFO ][o.e.p.PluginsService     ] [d7ik_di] loaded module [lang-mustache]
[2017-09-21T08:54:31,561][INFO ][o.e.p.PluginsService     ] [d7ik_di] loaded module [lang-painless]
[2017-09-21T08:54:31,562][INFO ][o.e.p.PluginsService     ] [d7ik_di] loaded module [percolator]
[2017-09-21T08:54:31,562][INFO ][o.e.p.PluginsService     ] [d7ik_di] loaded module [reindex]
[2017-09-21T08:54:31,562][INFO ][o.e.p.PluginsService     ] [d7ik_di] loaded module [transport-netty3]
[2017-09-21T08:54:31,562][INFO ][o.e.p.PluginsService     ] [d7ik_di] loaded module [transport-netty4]
[2017-09-21T08:54:31,563][INFO ][o.e.p.PluginsService     ] [d7ik_di] no plugins loaded
[2017-09-21T08:54:36,397][INFO ][o.e.d.DiscoveryModule    ] [d7ik_di] using discovery type [zen]
[2017-09-21T08:54:39,496][INFO ][o.e.n.Node               ] initialized
[2017-09-21T08:54:39,496][INFO ][o.e.n.Node               ] [d7ik_di] starting ...
[2017-09-21T08:54:40,059][INFO ][o.e.t.TransportService   ] [d7ik_di] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2017-09-21T08:54:43,256][INFO ][o.e.c.s.ClusterService   ] [d7ik_di] new_master {d7ik_di}{d7ik_di8QNu553qFasCxlA}{0pvNb03dRpu-RVe2nLx5gg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-09-21T08:54:43,358][INFO ][o.e.h.n.Netty4HttpServerTransport] [d7ik_di] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2017-09-21T08:54:43,375][INFO ][o.e.n.Node               ] [d7ik_di] started
[2017-09-21T08:54:47,624][INFO ][o.e.g.GatewayService     ] [d7ik_di] recovered [112] indices into cluster_state
[2017-09-21T08:55:13,253][INFO ][o.e.c.r.a.AllocationService] [d7ik_di] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
[2017-09-21T08:59:10,116][INFO ][o.e.m.j.JvmGcMonitorService] [d7ik_di] [gc][270] overhead, spent [332ms] collecting in the last [1s]
[2017-09-21T08:59:12,150][INFO ][o.e.m.j.JvmGcMonitorService] [d7ik_di] [gc][272] overhead, spent [345ms] collecting in the last [1s]
[2017-09-21T09:08:04,282][INFO ][o.e.c.m.MetaDataDeleteIndexService] [d7ik_di] [fortigate-2017.09.20/j0gHerYCRhyqMjbybqQKHA] deleting index
[2017-09-21T09:15:00,002][INFO ][o.e.c.m.MetaDataCreateIndexService] [d7ik_di] [fortigate-2017.09.21] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2017-09-21T09:15:00,339][INFO ][o.e.c.m.MetaDataMappingService] [d7ik_di] [fortigate-2017.09.21/2Ti6ayHoSlW_0D_em1Ei0w] create_mapping [fortigate]

Can you try with the latest version of the stack - 5.6.1?

Actually I tried that yesterday evening but no luck unfortunately. Same problem.

Can you DM me with the following and I will try it;

  • Your Logstash config
  • Some sample data

Even if you gist/pastebin/etc it and link me, that's fine.

I've looked into this further and I still can't get it working.

I've confirmed:

  • stdout gives the correct results
  • My device is sending out data
  • My server is receiving data from the device
  • Logstash/Elastic are running and processing data (except when the KV filter is set up to parse correctly; see the sketch after this list)
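
One way to confirm the last two points at once is to run both outputs side by side, so the same event can be compared in stdout and in Elastic; a rough sketch (the index name is my guess, my real config is in the other thread):

output {
  # shows the parsed event on the console
  stdout { codec => rubydebug }
  # sends the same event to Elasticsearch
  elasticsearch {
    hosts => [ "http://localhost:9200" ]
    index => "fortigate-%{+YYYY.MM.dd}"
  }
}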

As mentioned above, once I switch the value pairs so they no longer get split up correctly, data starts appearing in Elastic.

So something is going wrong somewhere in Logstash/Elastic. I can't think of anything outside Logstash/Elastic that could be causing this issue.
