In our ECE environment, logs are indexed into a logging-and-metrics cluster. How does this work behind the scenes? Is there a Filebeat that collects the logs from the Elasticsearch cluster and forwards them to the logging cluster?
We want to forward these logs from the logging cluster to an external log system. How do we forward logs directly from this cluster without introducing an intermediate layer (for example, an app that queries indices and forwards)? Can we have a local Filebeat read the logs from the path specified below and forward them to an outbound URL instead of Logstash?
I also came across an API to read log files: https://www.elastic.co/guide/en/cloud-enterprise/current/generate-es-cluster-logs.html
Here is sample data from the indices in the cluster:

```json
"hits": [
  {
    "_index": "cluster-logs-2018.10.08",
    "_type": "doc",
    "_id": <ID>,
    "_score": 1,
    "_source": {
      "@timestamp": "2018-10-08T00:00:11.183Z",
      "beat": {
        "hostname": <hostname>,
        "name": <name>,
        "version": "5.6.8-xexec"
      },
      "ece": {
        "component": "elasticsearch",
        "runner": <IP>,
        "zone": <ZONE>
      },
      "ece.cluster": <cluster_id>,
      "ece.instance": "instance-0000000002",
      "input_type": "log",
      "message": "[2018-10-08T00:00:06,295][WARN ][org.elasticsearch.deprecation.rest.RestController] Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header.",
      "offset": 198,
      "source": "/logs/allocator/containers/elasticsearch/<cluster_id>/instance-0000000002/logs/es.log",
      "type": "log"
    }
  },
```
Under the hood we use a Beats sidecar that is deployed on every ECE node (if you run docker ps on the host you will see a frc-beats-runners-beats-runner container), which picks up the log files written to the host and sends them to the logging-and-metrics cluster. If you wish to send these logs to another Elasticsearch cluster, one option is to configure a remote reindex job. If what you are after is sending these logs to another tool, you can run another shipper on the host that picks up the logs of specific clusters and ships them to any destination.
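To illustrate the remote reindex option, a request along these lines would be run against the destination cluster (the endpoint, credentials, and index name are placeholders; the destination cluster must also list the source host under reindex.remote.whitelist in its elasticsearch.yml):

```
POST _reindex
{
  "source": {
    "remote": {
      "host": "https://<logging-and-metrics-endpoint>:9243",
      "username": "<username>",
      "password": "<password>"
    },
    "index": "cluster-logs-2018.10.08"
  },
  "dest": {
    "index": "cluster-logs-2018.10.08"
  }
}
```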
For example, the Elasticsearch node logs are located in /mnt/data/elastic/{allocator_id}/services/allocator/containers/elasticsearch/{cluster_id}.
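As an illustration, a second shipper such as another Filebeat instance could be pointed at that directory. This is only a sketch assuming Filebeat 5.x syntax; the glob pattern and output host are placeholders, and note that Filebeat ships to Elasticsearch, Logstash, Kafka, or Redis outputs rather than to arbitrary HTTP URLs:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /mnt/data/elastic/*/services/allocator/containers/elasticsearch/*/*/logs/*.log
output.elasticsearch:
  hosts: ["https://external-log-system.example.com:9200"]
```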
@zanbel
Yes, I want to run another shipper that reads log files in the /mnt/data/elastic directory and sends them out to the log server over HTTP. In the ECE world, is it possible to install an app (external shipper) on the host, given that much of the setup happens under the hood? If so, how?
Also, is it possible to override the default behavior of writing to the logging-and-metrics cluster, and skip it entirely if we have access to the logs and are sending them to another external log system?
If you wish to install your own shipper, of course you can do so. I cannot provide any input on whether it will break something in ECE, how it might affect ECE behaviour, how many resources it will require, etc.
You can control the installation path using the --host-storage-path param; more info is available here.
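For example, a hypothetical install with a custom storage path (the path itself is just an illustration):

```
bash elastic-cloud-enterprise.sh install --host-storage-path /opt/ece/data
```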
It's a pretty straightforward log forwarder reading log files; we'll take care of the environment-specific details.
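For context, the core of such a forwarder is small. Here is a minimal sketch in Python (the endpoint is a placeholder; a real shipper would also persist the offset between runs, as Filebeat does with its registry file, and handle file rotation and retries):

```python
import json
from urllib import request


def read_new_lines(path, offset):
    """Read complete lines appended to `path` since byte `offset`.

    Returns (lines, new_offset). A trailing partial line is left
    for the next call, mimicking how Filebeat tracks file offsets.
    """
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read()
    end = data.rfind(b"\n")
    if end == -1:
        # No complete line yet; consume nothing.
        return [], offset
    lines = data[: end + 1].decode("utf-8").splitlines()
    return lines, offset + end + 1


def ship(lines, url):
    """POST a batch of log lines as JSON to the external log server."""
    body = json.dumps({"events": lines}).encode("utf-8")
    req = request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    return request.urlopen(req)
```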
I had another question in my previous post:

> Also, is it possible to override the default behavior of writing to the logging-and-metrics cluster, and skip it entirely if we have access to the logs and are sending them to another external log system?
Is this possible, and how? We do not want to duplicate log stores, but we also do not want any adverse effects on ECE.
Completely disabling writes to this cluster is not possible, as we rely on the logs indexed there for various reasons; in the future we might leverage them to provide an out-of-the-box alerting mechanism, for example.
You can, however, control the retention policy for the indices stored in this cluster using the following command:

```bash
elastic-cloud-enterprise.sh set-logging-and-metrics-policy --pattern cluster-logs-* --days 14
```

You can read more about these options here.
For the alerting mechanism, we already use our monitoring cluster, since we enable xpack.monitoring on our nodes. What extra does this provide compared to the metrics sent by X-Pack monitoring? The reason I ask is that sending logs to an external source may not be that common, but it is definitely something an implementation could require. So this should be decoupled: we should not have to run two processes that do the same thing (reading log files from the host mount directory), one sending logs to the logs cluster and the other to our external in-house log system.
There is more ECE-specific alerting we have in mind, on top of X-Pack monitoring, that can leverage the logs indexed in the logging-and-metrics cluster. I will pass on the request to allow disabling the sending of logs to the logging-and-metrics cluster, and we will evaluate whether this is something we would like to include in a future version.
I hope the information provided still offers a solution for your use case, and a reasonable workaround for reducing the size of the logging-and-metrics cluster by shortening the retention period.