How can I integrate WebLogic with Elasticsearch?

I want to integrate WebLogic with Elasticsearch and Logstash. I am new to this, so how can I do it?

What exactly do you mean by "integrate"?
What do you mean by integrating with Logstash?

I want to use Elasticsearch and Kibana in our production environment, and we are using WebLogic.
How can we use this?

You mean you want to use Logstash to read the WebLogic logs and index them in Elasticsearch?
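
That is, roughly a pipeline like the sketch below. This is a minimal illustration only; the log path and the Elasticsearch host are placeholders, not values from your setup:

input {
  file {
    # Read the WebLogic log files; adjust the path to your domain's log directory
    path => "/path/to/weblogic/logs/*.log"
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    # Placeholder host; Elasticsearch listens on 9200 by default
    hosts => ["localhost:9200"]
  }
}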

Yes, Magnus.

Now I am able to see the WebLogic logs in Kibana.
I have used Filebeat, but now the problem is that Filebeat is not able to send data to Logstash.

In filebeat.yml I added the following:

- input_type: log

# Paths that should be crawled and fetched. Glob based paths.
paths:
- /sindevuser6/sin/infra/pratikj/WLS_LOG/*.log

output.elasticsearch:
# Array of hosts to connect to.
hosts: ["localhost:1603"]
template.name: "filebeat"
template.path: "filebeat.template.json"
template.overwrite: false

output.logstash:
# The Logstash hosts
hosts: ["localhost:9600"]

and in the logstash.conf file I added the following:

input {
  beats {
    port => 9600
  }
}

output {
  elasticsearch {
    hosts => "localhost:1603"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Please let me know what I have done wrong. Also, in Kibana I can see multiple entries for a single exception.

What do the Filebeat logs say?

2016/12/09 11:43:04.839575 client.go:184: DBG Publish: {
  "@timestamp": "2016-12-09T11:42:59.796Z",
  "beat": {
    "hostname": "hostnme",
    "name": "xyz",
    "version": "5.0.2"
  },
  "input_type": "log",
  "message": " at com.clarify.cbo.Session.findString(Session.java:1437)",
  "offset": 505721,
  "source": "/sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161128_171612.log",
  "type": "log"
}
2016/12/09 11:43:04.851936 client.go:238: DBG PublishEvents: 50 events have been published to elasticsearch in 12.284825ms.
2016/12/09 11:43:04.852182 single.go:150: DBG send completed
2016/12/09 11:43:04.852194 output.go:109: DBG output worker: publish 50 events
2016/12/09 11:43:04.860722 client.go:238: DBG PublishEvents: 50 events have been published to elasticsearch in 8.507876ms.
2016/12/09 11:43:04.860953 single.go:150: DBG send completed
2016/12/09 11:43:04.860964 output.go:109: DBG output worker: publish 50 events
2016/12/09 11:43:04.870179 client.go:238: DBG PublishEvents: 50 events have been published to elasticsearch in 9.193153ms.
2016/12/09 11:43:04.870403 single.go:150: DBG send completed
2016/12/09 11:43:04.870418 output.go:109: DBG output worker: publish 50 events
2016/12/09 11:43:04.879506 client.go:238: DBG PublishEvents: 50 events have been published to elasticsearch in 9.071331ms.
2016/12/09 11:43:04.879638 single.go:150: DBG send completed
2016/12/09 11:43:04.879645 output.go:109: DBG output worker: publish 50 events
2016/12/09 11:43:04.892502 sync.go:78: DBG 487 events out of 487 events sent to logstash. Continue sending
2016/12/09 11:43:05.017559 output.go:109: DBG output worker: publish 37 events
2016/12/09 11:43:05.059671 client.go:238: DBG PublishEvents: 37 events have been published to elasticsearch in 42.089564ms.
2016/12/09 11:43:05.059786 single.go:150: DBG send completed
2016/12/09 11:43:05.059803 sync.go:68: DBG Events sent: 488
2016/12/09 11:43:05.059816 registrar.go:250: DBG Processing 488 events
2016/12/09 11:43:05.059948 registrar.go:236: DBG Registrar states cleaned up. Before: 7 , After: 7
2016/12/09 11:43:19.785111 prospector_log.go:288: DBG Harvester for file is still running: /sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161128_171612.log
2016/12/09 11:43:19.785117 prospector_log.go:79: DBG Prospector states cleaned up. Before: 7, After: 7
2016/12/09 11:43:19.794359 spooler.go:89: DBG Flushing spooler because of timeout. Events flushed: 0

You seem to have configured Filebeat to send to Elasticsearch, not Logstash. Make sure the indentation of the config file is correct. We can help you, but you must post your configuration file formatted as preformatted text (there's a toolbar button for it) so that the indentation isn't lost.
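
For reference, a minimal sketch of the outputs section with only the Logstash output left enabled (hosts and ports are copied from your post, so whether they suit your environment is an assumption):

#output.elasticsearch:
#  hosts: ["localhost:1603"]

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:9600"]

Filebeat 5.x ships events to every output that is enabled, so with both sections active you would see events going straight to Elasticsearch, as your debug log above shows.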

Hi Magnus,

I have also configured Filebeat with Logstash, but somehow Filebeat is not able to send data to Logstash.

In filebeat.yml I have configured:

paths:
- /sindevuser6/sin/infra/pratikj/WLS_LOG/*.log
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["localhost:1603"]
template.name: "filebeat"
template.path: "filebeat.template.json"
template.overwrite: false
output.logstash:
# The Logstash hosts
hosts: ["localhost:9600"]

and the contents of the logstash.conf file are:

input {
  beats {
    port => 9600
  }
}

output {
  elasticsearch {
    hosts => "localhost:1603"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
And if I don't want to use Logstash, how can I update Filebeat?

Thanks,
Pratik

Don't you have any indentation in your Filebeat configuration? We need to see exactly what your file looks like.

Below is the content of filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /sindevuser6/sin/infra/pratikj/WLS_LOG/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: [".gz$"]

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:1603"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:9600"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

Comment out the elasticsearch output so that you only keep your logstash output. What's in the Filebeat logs after you do that?
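
One more thing worth checking while you are at it: Logstash 5.x binds its monitoring API to port 9600 by default, so a beats input configured on the same port can fail to bind. The conventional beats port is 5044. A sketch of both sides using it (assuming nothing else on the host occupies 5044):

input {
  beats {
    port => 5044
  }
}

and, in filebeat.yml:

output.logstash:
  hosts: ["localhost:5044"]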

We don't want to use Logstash. The reason for not using Logstash is that Logstash only sends the BEA output to Elasticsearch; it doesn't send all the exceptions, i.e. application exceptions, whereas Filebeat sends everything.
So please help with correcting the configuration between Filebeat and Elasticsearch.

We don't want to use Logstash.

Then remove the logstash section from the Filebeat configuration.
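
That is, keep only something like this in the outputs section (the host and template settings are copied from your file):

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:1603"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false

#output.logstash:
#  hosts: ["localhost:9600"]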

and the reason for not using Logstash is that Logstash only sends the BEA output to Elasticsearch; it doesn't send all the exceptions, i.e. application exceptions, whereas Filebeat sends everything.

I'm positive that this is a configuration issue but without details it's impossible to comment further.

Now I have commented out the logstash section in the filebeat.yml file, but Filebeat is still indexing the same data into Elasticsearch again and again.

The Filebeat logs print the following:

  "beat": {
    "hostname": "indlin2193",
    "name": "indlin2193",
    "version": "5.0.2"
  },
  "input_type": "log",
  "message": "        at com.clarify.cbo.KJniDisp.invokeString(KJniDisp.java:933)",
  "offset": 505980,
  "source": "/sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161128_171612.log",
  "type": "log"
}
2016/12/12 10:42:29.394608 client.go:184: DBG  Publish: {
  "@timestamp": "2016-12-12T10:42:24.388Z",
  "beat": {
    "hostname": "indlin2193",
    "name": "indlin2193",
    "version": "5.0.2"
  },
  "input_type": "log",
  "message": "        at com.clarify.cbo.Session.findString(Session.java:1437)",
  "offset": 506045,
  "source": "/sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161128_171612.log",
  "type": "log"
}
2016/12/12 10:42:29.394638 output.go:109: DBG  output worker: publish 50 events
2016/12/12 10:42:29.424232 client.go:238: DBG  PublishEvents: 50 events have been  published to elasticsearch in 29.572247ms.
2016/12/12 10:42:29.424375 single.go:150: DBG  send completed
2016/12/12 10:42:29.424388 output.go:109: DBG  output worker: publish 50 events
2016/12/12 10:42:29.449085 client.go:238: DBG  PublishEvents: 50 events have been  published to elasticsearch in 24.678822ms.
2016/12/12 10:42:29.449221 single.go:150: DBG  send completed
2016/12/12 10:42:29.449239 output.go:109: DBG  output worker: publish 50 events
2016/12/12 10:42:29.462594 client.go:238: DBG  PublishEvents: 50 events have been  published to elasticsearch in 13.342227ms.
2016/12/12 10:42:29.462730 single.go:150: DBG  send completed
2016/12/12 10:42:29.462742 output.go:109: DBG  output worker: publish 50 events
2016/12/12 10:42:29.474632 client.go:238: DBG  PublishEvents: 50 events have been  published to elasticsearch in 11.835897ms.
2016/12/12 10:42:29.474883 single.go:150: DBG  send completed
2016/12/12 10:42:29.474925 output.go:109: DBG  output worker: publish 50 events
2016/12/12 10:42:29.492142 client.go:238: DBG  PublishEvents: 50 events have been  published to elasticsearch in 17.193734ms.
2016/12/12 10:42:29.492272 single.go:150: DBG  send completed
2016/12/12 10:42:29.492282 output.go:109: DBG  output worker: publish 50 events
2016/12/12 10:42:29.508023 client.go:238: DBG  PublishEvents: 50 events have been  published to elasticsearch in 15.728332ms.
2016/12/12 10:42:29.508147 single.go:150: DBG  send completed
2016/12/12 10:42:29.508157 output.go:109: DBG  output worker: publish 50 events
2016/12/12 10:42:29.539704 client.go:238: DBG  PublishEvents: 50 events have been  published to elasticsearch in 31.533588ms.

2016/12/12 10:42:29.539844 single.go:150: DBG send completed
2016/12/12 10:42:29.610979 sync.go:68: DBG Events sent: 500
2016/12/12 10:42:29.610995 registrar.go:250: DBG Processing 500 events
2016/12/12 10:42:29.611142 registrar.go:236: DBG Registrar states cleaned up. Before: 8 , After: 8
2016/12/12 10:42:29.611152 registrar.go:273: DBG Write registry file: /sindevuser6/sin/infra/pratikj/pratikk/filebeat-5.0.2-linux-x86_64/data/registry
2016/12/12 10:42:29.611257 registrar.go:295: DBG Registry file updated. 8 states written.

2016-12-07T17:49:14+05:30 INFO Total non-zero values:  filebeat.harvester.started=3 publish.events=19 registrar.states.update=19 registar.states.current=16 filebeat.harvester.closed=3 registrar.writes=2
2016-12-07T17:49:14+05:30 INFO Uptime: 5.421438315s
2016-12-07T17:49:14+05:30 INFO filebeat stopped.

The Filebeat registry file contains the following:

[{"source":"/sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161206_142716.log","offset":345065,"FileStateOS":{"inode":796538,"device":64826},"timestamp":"2016-12-12T16:12:25.77152046+05:30","ttl":-1},{"source":"/sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161207_121015.log","offset":2113255,"FileStateOS":{"inode":796540,"device":64826},"timestamp":"2016-12-12T16:17:34.378936356+05:30","ttl":-1000000000},{"source":"/sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161206_155034.log","offset":1350978,"FileStateOS":{"inode":796539,"device":64826},"timestamp":"2016-12-12T16:12:25.771521243+05:30","ttl":-1},{"source":"/sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161128_171612.log","offset":504741,"FileStateOS":{"inode":796619,"device":64826},"timestamp":"2016-12-12T16:12:25.771521559+05:30","ttl":-1},{"source":"/sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161128_171612.log","offset":505074,"FileStateOS":{"inode":796669,"device":64826},"timestamp":"2016-12-12T16:12:25.771522024+05:30","ttl":-1},{"source":"/sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161128_171612.log","offset":505397,"FileStateOS":{"inode":796760,"device":64826},"timestamp":"2016-12-12T16:12:25.771522252+05:30","ttl":-1},{"source":"/sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161128_171612.log","offset":505722,"FileStateOS":{"inode":796629,"device":64826},"timestamp":"2016-12-12T16:12:25.771522587+05:30","ttl":-1},{"source":"/sindevuser6/sin/infra/pratikj/WLS_LOG/weblogic.20161128_171612.log","offset":506045,"FileStateOS":{"inode":796497,"device":64826},"timestamp":"2016-12-12T16:17:34.37893601+05:30","ttl":-1000000000}]
