Can you stream all contents of the log file to ELK?

If I forward everything in the log to ELK without creating an index first, would I be able to create charts on every entry in the log? Is this recommended? Since there are many servers and apps, creating indexes could become cumbersome. What do you recommend for these types of cases?

Elasticsearch (ES) stores all data in indexes. Indexes in ES can roughly be compared to databases in the relational database world. Hence, what you're saying doesn't quite make sense.

Forwarding all logs to ES is a standard use case and nothing unusual.

What I was trying to say is that I have multiple log files across different applications. Should I go through my files first to see what I need to capture, and create indexes and fields based on that? Or should I start forwarding the logs to ES and have ES create the indexes?

Can you tell me the best way? I need to be able to capture all the data with limited effort and be able to chart all of it.

There's no point in pre-creating any indexes or fields—they will be created automatically based on the events that are sent from Logstash—but you may want to filter out some less interesting events in Logstash.
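For example, a conditional with the drop filter in the Logstash configuration can discard events before they ever reach ES. A minimal sketch, assuming some earlier filter has already extracted a loglevel field (the field name and the DEBUG condition are hypothetical, not from this thread):

```
filter {
  # Discard debug-level events so they are never indexed in ES.
  # Assumes an upstream filter has populated the "loglevel" field.
  if [loglevel] == "DEBUG" {
    drop { }
  }
}
```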

Hi Magnus,

The problem is that we don't have a Logstash shipper for our environment, since it is a Solaris environment. Therefore, streaming live data from the logs is not working for us. Do you know of any client for Solaris, or of any workaround?

Perhaps the thread "Logstash on Solaris - stat.st_gid unsupported or native support failed to load [SOLVED]" is helpful. You can probably also obtain a Go compiler that runs on Solaris, which would allow you to build Filebeat. NXLog could be yet another option.

Hi Magnus,

If I push unstructured data to ES with these clients, would I be able to create charts, tables, etc. on it? When data comes into ES, how does ES create fields and indices automatically? I'd rather push everything to ES and let ES create the indices and fields.

If I push unstructured data to ES with these clients, would I be able to create charts, tables, etc. on it?

To some extent, but to be really useful you need to write Logstash filters to process the raw data.
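For example, a grok filter can split a raw log line into named fields that Kibana can then chart. A minimal sketch assuming a hypothetical line format; the field names are illustrative, not from this thread:

```
filter {
  # Parse a line such as:
  #   2016-03-01 12:00:00 ERROR Disk quota exceeded
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} %{GREEDYDATA:msgtext}"
    }
  }
}
```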

When data comes into ES, how does ES create fields and indices automatically? I'd rather push everything to ES and let ES create the indices and fields.

Elasticsearch will dynamically create indices and map fields for the structured (JSON) documents it receives, but it won't extract any fields from unstructured text on its own. You need Logstash for that.

Basically, I'd need to know what I need to extract from the logs. What if the logs change? The solution needs to be dynamic.

You can write Logstash filters that support multiple log formats, but in the end there is no magic going on. The filters need to support the log file formats you'll encounter.
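One common approach is to route events to different parsers with conditionals. A sketch assuming the shipper tags each event with a type field (that field and its values are hypothetical); COMBINEDAPACHELOG and SYSLOGLINE are standard grok patterns that ship with Logstash:

```
filter {
  # Pick a parser based on an assumed "type" field set by the shipper.
  if [type] == "apache" {
    grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  } else if [type] == "syslog" {
    grok { match => { "message" => "%{SYSLOGLINE}" } }
  }
}
```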

Hi Magnus,

I need to set up a call with the Elastic (ELK) people to understand all of these different scenarios. Any ideas?

Use Elastic's contact form to get in touch with them?

Magnus, thank you for responding to my questions. I have one more question. Let's say we're sending data to ES based on some filters, and the entries in the app log change order. Is there a way, that you know of, to update our filters dynamically to account for changes in the logs? For example:
timestamp, cpu usage, memory usage, load averages

but this format in the log file changes to this:

timestamp, memory usage, load averages, cpu usage

Any thoughts?

If the changes are reasonably predictable you could supply multiple patterns to parse the input messages (so that Logstash tries one after another until it gets a match), but there's no built-in intelligence to dynamically guess log formats.
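With grok you can pass an array of patterns, and Logstash tries them in order until one matches. A sketch using hypothetical labeled fields; note the variants must be structurally distinguishable, since four bare numbers in a different order can't be told apart by pattern alone:

```
filter {
  grok {
    # Patterns are tried top to bottom; the first match wins.
    match => {
      "message" => [
        "%{TIMESTAMP_ISO8601:timestamp} cpu=%{NUMBER:cpu} mem=%{NUMBER:memory} load=%{NUMBER:load}",
        "%{TIMESTAMP_ISO8601:timestamp} mem=%{NUMBER:memory} load=%{NUMBER:load} cpu=%{NUMBER:cpu}"
      ]
    }
  }
}
```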

Would I be able to apply a script to the fields to order them? The actual content does not change, but it looks like the order of the fields changes. Before aggregating the data, would I be able to execute a script to order the fields and then send them to ES?

Yes, you could do that.
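For instance, Logstash's ruby filter runs arbitrary code per event, so you can normalize both orderings into the same field names before the event reaches ES. A minimal sketch: the detection rule (cpu usage ending in "%") and the field names are assumptions, and event.get/event.set is the Logstash 5+ event API:

```
filter {
  ruby {
    code => "
      # Split the raw CSV line into its four values.
      parts = event.get('message').split(',').map(&:strip)
      # Hypothetical rule: if the second column looks like a percentage,
      # assume the 'timestamp, cpu, memory, load' ordering.
      if parts[1].to_s.end_with?('%')
        ts, cpu, mem, load = parts
      else
        ts, mem, load, cpu = parts
      end
      event.set('timestamp', ts)
      event.set('cpu', cpu)
      event.set('memory', mem)
      event.set('load', load)
    "
  }
}
```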