First, I am new to all of this and have next to no knowledge of how our systems were originally set up; I took this over when the person in charge of it left the company.
ELK 6.8.12, Filebeat 6.4 (yes, old; it will be upgraded after the move).
I am in the process of moving from an on-prem full ELK stack to Elastic hosted services. We are closing all of our on-prem sites down.
However, our Elastic Cloud deployment has no Kafka or Logstash in the stack, so I have to change Filebeat to point directly at Elasticsearch.
It does not seem to be working properly.
With our current full-stack setup the log flow looks like this, which I think is normal:
server -> kafka -> logstash -> elasticsearch
Each server runs Filebeat. There are 3 servers running only Kafka, 3 servers running only Logstash, and 6 Elasticsearch servers: 3 running Elasticsearch and Kibana (client nodes) and the remaining 3 running Elasticsearch only (data nodes).
Elasticsearch creates indices named "logstash-TOPIC-YEAR.NUM", and every server's logs get pushed into the single index for its topic (gk, iis, etc.).
The servers have Filebeat installed with a pretty simple config; this is an example from one of them.
####------ filebeat.yml
filebeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/logs/gk-app.log
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
  fields_under_root: true
  fields:
    type: WLP
    input_type: "log"
logging.level: error
output.kafka:
  hosts: ["kafka01:9092", "kafka02:9092", "kafka03:9092"]
  topic: gk-logstash
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
  version: 0.10.2.0
Kafka doesn't appear to have anything unusual going on with it...
####------ server.properties file
broker.id=1
advertised.listeners=PLAINTEXT://172.1.1.3:9092
listeners=PLAINTEXT://172.1.1.3:9092
num.network.threads=50
num.io.threads=30
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka_data1/logs
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.bytes=10073741824
log.retention.check.interval.ms=30000
zookeeper.connect=localhost:2181
delete.topic.enable=true
num.partitions=12
default.replication.factor=2
zookeeper.connection.timeout.ms=600000
offsets.retention.minutes=43200
####------ zookeeper.properties file
initLimit=5
syncLimit=2
maxClientCnxns=0
clientPort=2181
maxClientCnxns=0
server.1=172.1.1.3:2888:3888
server.2=172.1.1.4:2888:3888
server.3=172.1.1.5:2888:3888
dataDir=/opt/kafka_data1/gk_data
autopurge.snapRetainCount=5
autopurge.purgeInterval=12
I don't see any other configuration files or settings anywhere on these 3 kafka servers.
This is what is being used to start Logstash (one command per pipeline):
Logstash --path.settings /etc/logstash/ -r -f /opt/logstash-parsers/parsers/dl-logstash/ -l /opt/share/logs/dl-logstash/ -w 1
Logstash --path.settings /etc/logstash/ -r -f /opt/logstash-parsers/parsers/ti-logstash/ -l /opt/share/logs/ti-logstash/ -w 1
etc., etc. for each server type we are running Filebeat on.
Logstash servers have this in the /etc/logstash/ directory
00-kafka-input.conf
10-IHS-filter.conf
10-WAS-filter.conf
20-elasticsearch-output.conf
logstash.yml
and other config files for startup, java and log4j
This is one of the files; the others are similar with their own configs, but nothing stands out that points to anything other than ZOOKEEPER1/2/3.
####------ 00-kafka-input.conf
input {
  kafka {
    bootstrap_servers => "${ZOOKEEPER1},${ZOOKEEPER2},${ZOOKEEPER3}"
    topics => []
    group_id =>
    codec => 'json'
    session_timeout_ms => '30000'
    max_poll_records => '250'
    consumer_threads => 4
    decorate_events => true
  }
}
filter {
  mutate {
    copy => { '[@metadata][kafka]' => '[metadata][kafka]' }
  }
}
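For context on where the "logstash-TOPIC-YEAR.NUM" index names come from: I don't have the 20-elasticsearch-output.conf in front of me here, but based on the other settings I'd guess it looks roughly like the sketch below. The topic reference via `[@metadata][kafka][topic]` is what `decorate_events => true` provides, and the week-of-year date format is my assumption for "NUM"; the hosts/credentials are taken from logstash.yml above.

####------ 20-elasticsearch-output.conf (hypothetical sketch)
output {
  elasticsearch {
    hosts => ["https://172.1.1.3:9200", "https://172.1.1.4:9200", "https://172.1.1.5:9200"]
    # builds "logstash-TOPIC-YEAR.NUM"; NUM assumed to be the week number
    index => "logstash-%{[@metadata][kafka][topic]}-%{+YYYY.ww}"
    user => "logstash_user"
    password => "logstash_password"
    cacert => "/etc/logstash/ca.pem"
  }
}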
####------ logstash.yml
node.name: ${CONFIG}_${HOST}
pipeline.id: ${CONFIG}-${HOST}
path.data: /opt/logstash-parsers/${CONFIG}/
xpack.monitoring.elasticsearch.url: ["https://172.1.1.4:9200","https://172.1.1.3:9200","https://172.1.1.5:9200"]
xpack.monitoring.elasticsearch.username: logstash_user
xpack.monitoring.elasticsearch.password: logstash_password
xpack.monitoring.elasticsearch.ssl.ca: /etc/logstash/ca.pem
The Elasticsearch servers
#### /etc/elasticsearch
# elasticsearch.keystore
# elasticsearch.yml
# jvm.options
# log4j2.properties
# role_mapping.yml
# roles.yml
# users
# users_roles
####------
####------ elasticsearch.yml
####------
cluster.name: elk_cluster_01
node.name: elstc01
node.master: false
node.data: false
node.ingest: false
network.bind_host: 0
network.host: [_ens160_]
discovery.zen.ping.unicast.hosts: ["elstd01", "elstd02", "elstd03"]
bootstrap.memory_lock: true
bootstrap.system_call_filter: false
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.ssl.key: /etc/elasticsearch/certs/key_elstc01.pem
xpack.ssl.certificate: /etc/elasticsearch/certs/cert_elstc01.pem
xpack.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca.pem" ]
xpack.monitoring.collection.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/certs/key_elstc01.pem
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/cert_elstc01.pem
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca.pem" ]
xpack:
  security:
    authc:
      realms:
        native1:
          type: native
          order: 0
        ad1:
          type: active_directory
          order: 1
          domain_name: OURDOMAIN
          url: ldaps://dc01:636, ldaps://dc02:636
          ssl:
            certificate_authorities: [ "/etc/elasticsearch/certs/ldap_ca.crt", "/etc/elasticsearch/certs/ca.pem" ]
          unmapped_groups_as_roles: true
          load_balance.type: round_robin
          follow_referrals: false
reindex.ssl.certificate_authorities: ["/etc/elasticsearch/certs/oldcluster.crt"]
reindex.ssl.verification_mode: certificate
Now for the problem...
We are migrating to Elastic on cloud, hosted by Elastic.
There is no Kafka or Logstash, so we need to point Filebeat directly at Elasticsearch.
This is being done with the following changes made to the filebeat.yml file
filebeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/gk_logs/gatekeeper-app.log
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
  fields_under_root: true
  fields:
    type: WLP
    input_type: "log"
logging.level: error
This does work: it successfully hits the cloud instance, all of the `filebeat test` checks pass, and it even starts an index named filebeat-6.4.2-YEAR.MON.DAY.
But if I change it to use output.elasticsearch, it does not work.
No errors are generated that I can see, but no new indices are created, and I can't get it to use the correct index names from our on-prem configurations.
I've tried using the below, in various formats, to no avail.
The end goal is to have each server/topic generate the same indices as before the move to cloud.
####------ filebeat.yml
filebeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/gk_logs/gatekeeper-app.log
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
  fields_under_root: true
  fields:
    type: WLP
    input_type: "log"
logging.level: error
cloud.id: "CLOUDID"
cloud.auth: "elastic_user:elastic_pass"
#output.elasticsearch.index: "gs-logstash-%{[agent.version]}"
setup.template.enabled: true
setup.template.name: "bgcs-logstash-%{[agent.version]}"
setup.template.pattern: "bgcs-logstash-%{[agent.version]}"
output.elasticsearch:
  index: "bgcs-logstash-%{[agent.version]}"
  topic: gk-logstash
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
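From reading the 6.x docs, I believe a working version of the above would look like the sketch below, though I may be misunderstanding it. The things I suspect are wrong in my attempt: Beats 6.x exposes the version as %{[beat.version]} (the [agent.version] field only exists in 7.x, so my index name never resolves); the Kafka-only options (topic, partition.round_robin, required_acks, compression) aren't valid under output.elasticsearch; and setup.template.pattern normally ends in a wildcard. The "bgcs-logstash" name is just carried over from my attempt.

####------ filebeat.yml (sketch of what I think should work, 6.x)
cloud.id: "CLOUDID"
cloud.auth: "elastic_user:elastic_pass"

# template name/pattern must be set when overriding the index in 6.x
setup.template.enabled: true
setup.template.name: "bgcs-logstash"
setup.template.pattern: "bgcs-logstash-*"

output.elasticsearch:
  # [beat.version] is the 6.x field; [agent.version] is 7.x+
  index: "bgcs-logstash-%{[beat.version]}-%{+yyyy.MM.dd}"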
Any help would be greatly appreciated; I'm on a very short timeline, as this was supposed to be shut down last week.