Failing to capture oracle alert logs (kubernetes)

I have a setup where Oracle is deployed on Kubernetes as a StatefulSet. I need to monitor some logs other than the container logs, so from the documentation and examples online I was able to create a Filebeat config such as the one below:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: filebeat-config
      namespace: kube-system
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    data:
      filebeat.yml: |-
        filebeat.config:
          prospectors:
            # Mounted filebeat-prospectors configmap:
            path: ${path.config}/prospectors.d/*.yml
            # Reload prospectors configs as they change:
            reload.enabled: true
          modules:
            path: ${path.config}/modules.d/*.yml
            # Reload module configs as they change:
            reload.enabled: false

        filebeat.prospectors:
        - type: log
          paths:
            - /var/lib/docker/containers/*/*.log
          fields:
            type: docker
          fields_under_root: true
          encoding: utf-8
          ignore_older: 3h
          multiline:
            pattern: '^[[:space:]]|(at|\.{3})\b|^Caused by:'
            negate: true
            match: before
        - type: log
          paths:
            - /opt/oracle/oradata/diag/rdbms/*/ORCLCDB/trace/alert_ORCLCDB.log
            - /opt/oracle/oradata/diag/rdbms/*/ORCLCDB/trace/drcORCLCDB.log
            - /opt/oracle/oradata/diag/tnslsnr/*/listener/trace/listener.log
          document_type: oracle-trace
          tags: ["oracle", "log"]
          fields:
            type: oracle
          fields_under_root: true
          encoding: utf-8
          ignore_older: 3h

        processors:
          - add_cloud_metadata:
          - add_kubernetes_metadata:
              in_cluster: true

        output.logstash:
          hosts: ['logstash-service:5044']

but I am receiving only the container logs, and on the ELK stack I have deployed I am not able to filter the data coming from a particular namespace.

Any ideas / thoughts on this approach? I appreciate the help and support here.


Hi @Maurya_M and welcome :slight_smile:

What version of Filebeat are you using? Take into account that the prospectors option was deprecated in 6.3 and removed in 7.0; inputs should be used instead now.
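
For reference, on 6.3+ the same log prospector can be declared with the newer inputs syntax; a minimal sketch (using the alert log path from the config above) might look like:

```yaml
# Sketch only: filebeat.inputs replaces filebeat.prospectors from 6.3 onwards
filebeat.inputs:
- type: log
  paths:
    - /opt/oracle/oradata/diag/rdbms/*/ORCLCDB/trace/alert_ORCLCDB.log
  tags: ["oracle", "log"]
  encoding: utf-8
  ignore_older: 3h
```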

hi @jsoriano, thanks for replying back.

I am using Filebeat 6.3, as I am doing some quick logging from an example. What would be the recommended image versions for the entire Elastic Stack?

I would recommend you to use the latest versions of the Elastic Stack; lots of fixes related to Kubernetes support have been done since 6.3, and 7.0 was released recently. In any case, I was asking for the version just to confirm that you are on a version where the prospectors options still work.

I see you are configuring Filebeat to read the logs from /opt/oracle. Is this directory mounted in the Filebeat container? It should be, so that Filebeat can read the files.

I have added the volume mounts in the Filebeat container (see snippet below). Is this correct? Also, I have Filebeat as a DaemonSet; do I need to deploy it as a sidecar container along with the Oracle containers?

        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: oracledata
          mountPath: /opt/oracle/oradata
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: oracledata
        hostPath:
          path: /opt/oracle/oradata
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate

@jsoriano, I was finally able to deploy Filebeat as a sidecar container alongside my Oracle container, but I am not seeing any messages from the configured alert.log in Kibana!

Any ideas on how to debug, or checkpoints to verify whether my configuration / logs are reaching Elasticsearch and Kibana?

I have a Filebeat --> Logstash --> Elasticsearch --> Kibana setup.

@Maurya_M, some things you can check:

  • Double-check that files are accessible from the filebeat container
  • Check in filebeat logs that harvesters are being started for your log files (with messages like Harvester started for file...)
  • Enable debug to see what events are being sent by filebeat
  • Check logstash logs for any error
  • Try to configure filebeat to send events directly to Elasticsearch, to discard problems in logstash

By the way, is there any reason why you are using Logstash? For many use cases it is enough to send the data directly from Filebeat to Elasticsearch, and this way it is easier to take advantage of Filebeat modules.
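
To rule out Logstash as the problem, a hedged sketch of pointing Filebeat straight at Elasticsearch while raising log verbosity (the `elasticsearch-logging:9200` host is taken from the Logstash config shown later in this thread; adjust it to your service name):

```yaml
# Debug sketch: bypass Logstash and send events directly to Elasticsearch,
# with debug-level logging so harvester and publish activity is visible.
output.elasticsearch:
  hosts: ['elasticsearch-logging:9200']
logging.level: debug
```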

thanks @jsoriano for the suggestions,

  • The harvester has started for the files mentioned in "paths":
    Harvester started for file: /opt/oracle/oradata/diag/tnslsnr/oradb-0/listener/trace/listener.log
    2019-05-23T11:54:48.561Z INFO log/harvester.go:254 Harvester started for file: /opt/oracle/oradata/diag/rdbms/oracdb0/ORCLCDB/trace/alert_ORCLCDB.log

  • Does this suffice to show that the files are accessible from the Filebeat container?

  • I do have the debug option within my sidecar Filebeat deployment:
    args: [
    "-c", "/etc/filebeat.yml",
Should I change this to "filebeat -e -d "*""? Where do I see these verbose messages?
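
The `-e` and `-d` flags can be appended to the existing container args rather than replacing the command; with `-e`, Filebeat logs to stderr, so the verbose messages show up in `kubectl logs` for the sidecar container. A sketch (the `publish` selector limits the noise to event publishing; `"*"` would enable all debug selectors):

```yaml
# Sketch: add debug flags to the sidecar's args
args: [
  "-c", "/etc/filebeat.yml",
  "-e",
  "-d", "publish"
]
```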

On the Logstash side, the idea was to add fields, remove unwanted ones, and handle multiline, but I guess these can be done by Filebeat too. Somehow I got carried away into using the whole Elastic Stack for now; I may drop Logstash later. I have added no filter in Logstash, just the configuration below:

  logstash.yml: |
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    # all input will come from filebeat, no local logs
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => [ "elasticsearch-logging:9200" ]
        manage_template => false
        index => "%{[kubernetes][namespace]}"
      }
    }
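
One caveat with this output: if an event reaches Logstash without the [kubernetes][namespace] field set (for example, events from a sidecar Filebeat that were not enriched by add_kubernetes_metadata), the sprintf reference is not resolved and the documents land in an index literally named %{[kubernetes][namespace]}, which is easy to miss in Kibana. A conditional fallback sketch (the filebeat-unmatched index name is an arbitrary choice):

```
output {
  if [kubernetes][namespace] {
    elasticsearch {
      hosts => [ "elasticsearch-logging:9200" ]
      manage_template => false
      index => "%{[kubernetes][namespace]}"
    }
  } else {
    # events without kubernetes metadata go to a fixed index
    elasticsearch {
      hosts => [ "elasticsearch-logging:9200" ]
      index => "filebeat-unmatched"
    }
  }
}
```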

Btw, I did run a query from Dev Tools on my namespace index, but I don't see any of the alert log data, which I verified got created but is not getting pushed to ES.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.