No new daily indices after setting up TLS/SSL

I have a small ELK cluster: two Elasticsearch master nodes, two data nodes (one of the data nodes is also a voting-only master), one Kibana node, and two Logstash nodes. Each node is its own Hyper-V VM.
Everything worked nicely until I started configuring TLS/SSL communication, based on this tutorial:

https://www.elastic.co/de/blog/configuring-ssl-tls-and-https-to-secure-elasticsearch-kibana-beats-and-logstash

The only thing I did differently concerns the beats input plugin in the Logstash .conf file: I replaced it with the udp input plugin, because that worked before the TLS/SSL configuration and I do not use Beats. My firewall simply sends its logs to the IP address of my Logstash cluster (a floating IP) on port 514. Maybe this is where my mistake is.
I have been debugging every part of my ELK cluster, but I'm stuck now.
Everything seems to be fine, yet no new indices are created.
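To rule out the input side first: since the firewall only sends plain syslog over UDP, I can check whether those packets even reach the active Logstash node. A rough sketch of what I run for that (the floating IP .53 is from my Pacemaker setup, the test message is arbitrary):

'
# on the Logstash node that currently holds the floating IP:
sudo tcpdump -n -i any udp port 514     # watch for packets from the firewall
sudo ss -ulnp | grep 514                # confirm Logstash is bound to 514/udp

# from any other machine, send a hand-made test message to the floating IP
logger --udp --server x.x.x.53 --port 514 "tls-debug: test message"
'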

Here are my configurations:

1. First elastic server (lnxelastic01) .51

elasticsearch.yml
'
node.name: 192.168.2.51
node.master: true
node.data: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
network.publish_host: x.x.x.51
discovery.seed_hosts: ["x.x.x.51", "x.x.x.57"]
cluster.initial_master_nodes: ["x.x.x.51", "x.x.x.57"]
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: certs/lnxelastic01.key
xpack.security.http.ssl.certificate: certs/lnxelastic01.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.key: certs/lnxelastic01.key
xpack.security.transport.ssl.certificate: certs/lnxelastic01.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 93%
cluster.routing.allocation.disk.watermark.high: 95%
'
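To verify that the HTTPS layer on this node is fine, I query it with curl against the CA from the tutorial (the absolute path /etc/elasticsearch/certs/ca.crt is my assumption, since the config only uses the relative certs/ path):

'
curl --cacert /etc/elasticsearch/certs/ca.crt -u elastic \
  "https://x.x.x.51:9200/_cluster/health?pretty"
'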

2. Second elastic server (lnxelastic02) .57

elasticsearch.yml

      '
      cluster.name: es-cluster
      node.name: x.x.x.57
      node.master: true
      node.data: false
      path.data: /var/lib/elasticsearch
      path.logs: /var/log/elasticsearch
      bootstrap.memory_lock: true
      network.host: 0.0.0.0
      network.publish_host: x.x.x.57
      discovery.seed_hosts: ["x.x.x.51", "x.x.x.57"]
      cluster.initial_master_nodes: ["x.x.x.51", "x.x.x.57"]
      xpack.security.enabled: true
      xpack.security.http.ssl.enabled: true
      xpack.security.transport.ssl.enabled: true
      xpack.security.http.ssl.key: certs/lnxelastic02.key
      xpack.security.http.ssl.certificate: certs/lnxelastic02.crt
      xpack.security.http.ssl.certificate_authorities: certs/ca.crt
      xpack.security.transport.ssl.key: certs/lnxelastic02.key
      xpack.security.transport.ssl.certificate: certs/lnxelastic02.crt
      xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
      cluster.routing.allocation.disk.threshold_enabled: true
      cluster.routing.allocation.disk.watermark.low: 93%
      cluster.routing.allocation.disk.watermark.high: 95%
      '
  • tail -f /var/log/elasticsearch/es-cluster.log
    '
    [2020-10-21T00:00:07,843][INFO ][o.e.c.r.a.AllocationService] [.57] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-logstash-7-2020.10.21][0]]]).
    [2020-10-21T00:00:09,515][INFO ][o.e.c.m.MetadataCreateIndexService] [.57] [.monitoring-kibana-7-2020.10.21] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards [1]/[0]
    [2020-10-21T00:00:09,516][INFO ][o.e.c.r.a.AllocationService] [.57] updating number_of_replicas to [1] for indices [.monitoring-kibana-7-2020.10.21]
    [2020-10-21T00:00:09,888][INFO ][o.e.c.r.a.AllocationService] [.57] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-kibana-7-2020.10.21][0]]]).
    [2020-10-21T01:30:00,000][INFO ][o.e.x.m.MlDailyMaintenanceService] [.57] triggering scheduled [ML] maintenance tasks
    [2020-10-21T01:30:00,002][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [.57] Deleting expired data
    [2020-10-21T01:30:00,003][INFO ][o.e.x.s.SnapshotRetentionTask] [.57] starting SLM retention snapshot cleanup task
    [2020-10-21T01:30:00,005][INFO ][o.e.x.s.SnapshotRetentionTask] [.57] there are no repositories to fetch, SLM retention snapshot cleanup task complete
    [2020-10-21T01:30:00,022][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [.57] Completed deletion of expired ML data
    [2020-10-21T01:30:00,023][INFO ][o.e.x.m.MlDailyMaintenanceService] [.57] Successfully completed [ML] maintenance tasks
    '
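Since the master log only shows the internal monitoring indices being created, I also grep it for security-related errors, roughly like this:

'
grep -iE "unauthorized|security_exception|authentication" /var/log/elasticsearch/es-cluster.log | tail -n 50
'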
3. First elastic data node (lnxelasticdata) .54

elasticsearch.yml

'
cluster.name: es-cluster
node.name: x.x.x.54
node.master: false
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
network.publish_host: x.x.x.54
discovery.seed_hosts: ["x.x.x.51", "x.x.x.57"]
cluster.initial_master_nodes: ["x.x.x.51", "x.x.x.57"]
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: certs/lnxelasticdata.key
xpack.security.http.ssl.certificate: certs/lnxelasticdata.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.key: certs/lnxelasticdata.key
xpack.security.transport.ssl.certificate: certs/lnxelasticdata.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
'
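To be sure the data node actually joined the secured cluster with the expected roles, I check the node list (same assumed CA path and built-in elastic user as above):

'
curl --cacert /etc/elasticsearch/certs/ca.crt -u elastic \
  "https://x.x.x.51:9200/_cat/nodes?v&h=ip,name,node.role,master"
'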

  • tail -f /var/log/elasticsearch/es-cluster.log

'
[2020-10-20T12:33:45,251][INFO ][o.e.i.s.IndexShard ] [.54] [ilm-history-2-000001][0] primary-replica resync completed with 0 operations
[2020-10-20T12:33:45,257][INFO ][o.e.i.s.IndexShard ] [.54] [.monitoring-es-7-2020.10.17][0] primary-replica resync completed with 0 operations
[2020-10-20T12:33:45,263][INFO ][o.e.i.s.IndexShard ] [.54] [.apm-agent-configuration][0] primary-replica resync completed with 0 operations
[2020-10-20T12:33:45,268][INFO ][o.e.i.s.IndexShard ] [.54] [fortinet-2020.10.13][0] primary-replica resync completed with 0 operations
[2020-10-20T12:33:45,274][INFO ][o.e.i.s.IndexShard ] [.54] [.async-search][0] primary-replica resync completed with 0 operations
[2020-10-20T12:33:45,280][INFO ][o.e.i.s.IndexShard ] [.54] [.monitoring-kibana-7-2020.10.19][0] primary-replica resync completed with 0 operations
[2020-10-20T12:33:45,286][INFO ][o.e.i.s.IndexShard ] [.54] [.monitoring-es-7-2020.10.18][0] primary-replica resync completed with 0 operations
[2020-10-20T12:33:45,321][INFO ][o.e.i.s.IndexShard ] [.54] [fortinet-2020.10.07][0] primary-replica resync completed with 0 operations
[2020-10-20T12:33:45,341][INFO ][o.e.i.s.IndexShard ] [.54] [fortinet-2020.10.12][0] primary-replica resync completed with 0 operations
'

4. Second elastic data node (lnxelasticdata02) .58 (voting-only master)

elasticsearch.yml

'
cluster.name: es-cluster
node.name: 192.168.2.58
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
discovery.seed_hosts: ["192.168.2.51", "192.168.2.57"]
cluster.initial_master_nodes: ["192.168.2.51", "192.168.2.57"]
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: certs/lnxelasticdata02.key
xpack.security.http.ssl.certificate: certs/lnxelasticdata02.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.key: certs/lnxelasticdata02.key
xpack.security.transport.ssl.certificate: certs/lnxelasticdata02.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
#-------------- debug -------------------------------------------
#
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 93%
cluster.routing.allocation.disk.watermark.high: 95%
'
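Because I raised the disk watermarks only for debugging, I also look at how full the data nodes really are; if a node is above the high watermark, no new shards can be allocated to it:

'
curl --cacert /etc/elasticsearch/certs/ca.crt -u elastic \
  "https://x.x.x.51:9200/_cat/allocation?v"
'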

  • tail -f /var/log/elasticsearch/es-cluster.log

'
[2020-10-20T12:34:14,684][INFO ][o.e.c.s.ClusterSettings ] [.58] updating [xpack.monitoring.collection.enabled] from [false] to [true]
[2020-10-20T12:34:15,742][INFO ][o.e.x.s.a.TokenService ] [.58] refresh keys
[2020-10-20T12:34:16,542][INFO ][o.e.x.s.a.TokenService ] [.58] refreshed keys
[2020-10-20T12:34:16,602][INFO ][o.e.l.LicenseService ] [.58] license mode [basic] - valid
[2020-10-20T12:34:16,604][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [.58] Active license is now [BASIC]; Security is enabled
[2020-10-20T12:34:16,624][INFO ][o.e.h.AbstractHttpServerTransport] [.58] publish_address {.58:9200}, bound_addresses {[::]:9200}
[2020-10-20T12:34:16,625][INFO ][o.e.n.Node ] [192.168.2.58] started
'

5. Kibana server (lnxkibana) .52

kibana.yml

'
server.host: "0.0.0.0"
elasticsearch.hosts: ["https://x.x.x.51:9200", "https://x.x.x.57:9200"]
kibana.index: ".kibana"
logging.verbose: true
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/config/certs/lnxkibana.crt
server.ssl.key: /etc/kibana/config/certs/lnxkibana.key
elasticsearch.username: "kibana"
elasticsearch.password: "xxx"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/config/certs/ca.crt" ]
#--- new below for debug
#elasticsearch.ssl.verificationMode: certificate
elasticsearch.ssl.verificationMode: none
#--- for debug without certificate
'
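I only set elasticsearch.ssl.verificationMode: none for debugging. To see whether the Elasticsearch certificate actually contains the IPs/hostnames Kibana connects to (which is what full verification checks), I can inspect it from the Kibana host, for example:

'
openssl s_client -connect x.x.x.51:9200 -CAfile /etc/kibana/config/certs/ca.crt </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
'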

systemctl status kibana (because the log file is not being created?)

'
Oct 21 06:56:30 lnxkibana kibana[687]: {"type":"log","@timestamp":"2020-10-21T06:56:30Z","tags":["debug","plugins","monitoring","monitoring","kibana-monitoring"],"pid":687,"message":"Uploaded bulk stats payload to the local cluster"}
Oct 21 06:56:31 lnxkibana kibana[687]: {"type":"ops","@timestamp":"2020-10-21T06:56:31Z","tags":,"pid":687,"os":{"load":[0.3232421875,0.12841796875,0.0361328125],"mem":{"total":4098392064,"free":1224622080},"uptime":84083},"proc":{"up>
Oct 21 06:56:31 lnxkibana kibana[687]: {"type":"log","@timestamp":"2020-10-21T06:56:31Z","tags":["debug","metrics"],"pid":687,"message":"Refreshing metrics"}
Oct 21 06:56:31 lnxkibana kibana[687]: {"type":"log","@timestamp":"2020-10-21T06:56:31Z","tags":["debug","plugins","security","basic","basic"],"pid":687,"message":"Trying to authenticate user request to /internal/search/es."}
Oct 21 06:56:31 lnxkibana kibana[687]: {"type":"log","@timestamp":"2020-10-21T06:56:31Z","tags":["debug","plugins","security","basic","basic"],"pid":687,"message":"Trying to authenticate via state."}
Oct 21 06:56:31 lnxkibana kibana[687]: {"type":"log","@timestamp":"2020-10-21T06:56:31Z","tags":["debug","plugins","security","basic","basic"],"pid":687,"message":"Request has been authenticated via state."}
Oct 21 06:56:31 lnxkibana kibana[687]: {"type":"log","@timestamp":"2020-10-21T06:56:31Z","tags":["debug","plugins","upgradeAssistant","reindex_worker"],"pid":687,"message":"Polling for reindex operations"}
Oct 21 06:56:31 lnxkibana kibana[687]: {"type":"response","@timestamp":"2020-10-21T06:56:31Z","tags":,"pid":687,"method":"post","statusCode":200,"req":{"url":"/internal/search/es","method":"post","headers":{"host":"192.168.2.52:5601",>
Oct 21 06:56:32 lnxkibana kibana[687]: {"type":"log","@timestamp":"2020-10-21T06:56:32Z","tags":["debug","plugins","taskManager","taskManager"],"pid":687,"message":"Running task endpoint:user-artifact-packager "endpoint:user-artifact-p>
Oct 21 06:56:32 lnxkibana kibana[687]: {"type":"log","@timestamp":"2020-10-21T06:56:32Z","tags":["debug","plugins","securitySolution","endpoint:user-artifact-packager:1","0","0"],"pid":687,"message":"User manifest not available yet."}
'

6. First Logstash server (lnxlogstash01) .51

logstash.yml

'
path.data: /var/lib/logstash
pipeline.ordered: auto
path.logs: /var/log/logstash
xpack.monitoring.enabled: true
path.config: /etc/logstash/conf.d/*.conf
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: "xxx"
xpack.monitoring.elasticsearch.hosts: ["https://x.x.x:9200", "https://x.x.x:9200"]
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/etc/logstash/config/certs/ca.crt"
'
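To rule out that the monitoring connection itself is the problem, I check whether the logstash_system credentials still work over HTTPS (CA path as in the logstash.yml above, x.x.x.51 standing in for one of the Elasticsearch nodes):

'
curl --cacert /etc/logstash/config/certs/ca.crt -u logstash_system \
  "https://x.x.x.51:9200/_security/_authenticate?pretty"
'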

fortigate_nocomment.conf

'
input {
  udp {
    port => 514
    type => firewall
  }
}
filter {
  if [type] == "firewall" {
    mutate {
      add_tag => ["fortigate"]
    }
    grok {
      break_on_match => false
      match => [ "message", "%{SYSLOG5424PRI:syslog_index}%{GREEDYDATA:message}" ]
      overwrite => [ "message" ]
      tag_on_failure => [ "failure_grok_fortigate" ]
    }
    kv { }
    if [msg] {
      mutate {
        replace => [ "message", "%{msg}" ]
      }
    }
    mutate {
      convert => { "duration" => "integer" }
      convert => { "rcvdbyte" => "integer" }
      convert => { "rcvdpkt" => "integer" }
      convert => { "sentbyte" => "integer" }
      convert => { "sentpkt" => "integer" }
      convert => { "cpu" => "integer" }
      convert => { "disk" => "integer" }
      convert => { "disklograte" => "integer" }
      convert => { "fazlograte" => "integer" }
      convert => { "mem" => "integer" }
      convert => { "totalsession" => "integer" }
    }
    mutate {
      add_field => [ "fgtdatetime", "%{date} %{time}" ]
      add_field => [ "loglevel", "%{level}" ]
      replace => [ "fortigate_type", "%{type}" ]
      replace => [ "fortigate_subtype", "%{subtype}" ]
      remove_field => [ "msg", "message", "date", "time", "eventtime" ]
    }
    date {
      match => [ "fgtdatetime", "YYYY-MM-dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["https://x.x.x.51:9200", "https://x.x.x.57:9200", "https://x.x.x.58:9200"]
    cacert => '/etc/logstash/config/certs/ca.crt'
    user => 'logstash_writer'
    password => 'xxx'
    index => "fortinet-%{+YYYY.MM.dd}"
    manage_template => false
  }
}
'
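Since the missing daily indices appeared right after enabling security, the two things I test on the Logstash side are the pipeline syntax and whether the logstash_writer user is even allowed to create/write fortinet-* indices (the index name below is just an example for today's date; the paths are the package-install defaults):

'
# 1) syntax-check the pipeline without starting it
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  -f /etc/logstash/conf.d/fortigate_nocomment.conf --config.test_and_exit

# 2) try to index one document as logstash_writer; a 401/403 here would
#    explain why no new fortinet-* indices get created
curl --cacert /etc/logstash/config/certs/ca.crt -u logstash_writer \
  -H "Content-Type: application/json" \
  -X POST "https://x.x.x.51:9200/fortinet-2020.10.21/_doc?pretty" \
  -d '{"message":"manual tls debug test"}'
'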

If you need the Logstash log, tell me; it's long.

7. Second Logstash server (lnxlogstashfailover) .58 (it forms a Pacemaker cluster with the first one, using the floating IP .53)

logstash.yml

'
path.data: /var/lib/logstash
pipeline.ordered: auto
path.logs: /var/log/logstash
xpack.monitoring.enabled: true
path.config: /etc/logstash/conf.d/*.conf
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: 'xxx'
xpack.monitoring.elasticsearch.hosts: [ 'https://192.168.2.51:9200' ]
xpack.monitoring.elasticsearch.ssl.certificate_authority: /etc/logstash/config/certs/ca.crt
'
fortigate_nocomment.conf (identical to the pipeline on lnxlogstash01, shown above)
Some info from Kibana (Dev Tools):

GET _cat/indices

green open .kibana-event-log-7.9.2-000001    P2qJMZunQIaQmUceQHcgJA 1 1      6      0  64.5kb  32.2kb
green open .items-default-000001             _6hA1uWbSY2xv1kSoSAm7w 1 1      0      0    416b    208b
green open .apm-custom-link                  MTiTxjxeRQOK0LboADhsiA 1 1      0      0    416b    208b
green open .monitoring-es-7-2020.10.15       R6S8UOy7ReSgwFFd3EXa-Q 1 1 234068  14656 222.1mb   111mb
green open .monitoring-es-7-2020.10.14       dH5a9aNNQeW4wiT_GB3hMA 1 1 156041      0 144.7mb  72.3mb
green open .kibana_task_manager_1            LT8zUHDtTuaBMOVShVDUFg 1 1      6   4050   1.7mb 859.6kb
green open logs-index_pattern_placeholder    xxsZ2LCsTAm2pTt80B1Mgg 1 1      0      0    416b    208b
green open fortinet-2020.10.08               NCwQr3t_RDG00dRAXYQpoQ 1 1 903741      0 585.4mb 292.7mb
green open fortinet-2020.10.07               44NbmeXsTteUpefYyfnuvw 1 1 232113      9 182.8mb  91.4mb
green open .monitoring-kibana-7-2020.10.19   NDyDTrEZTuWvf0yMgT_vUg 1 1  11270      0   3.6mb   1.8mb
green open fortinet-2020.10.13               rpvMZFkFRXCp7GJYYySaBg 1 1 280377      0 218.2mb 109.1mb
green open .lists-default-000001             UuXLo-kYSQSnIM_vY2Iwdg 1 1      0      0    416b    208b
green open .apm-agent-configuration          CZ2t_51nTSiJ74ImbSnnWg 1 1      0      0    416b    208b
green open fortinet-2020.10.12               8e0ijLihQ7mIIvLGNCYfqw 1 1 482910      0 397.2mb 198.6mb
green open .monitoring-es-7-2020.10.19       TwfWxyRUTyqtEF2RAK2DuQ 1 1 293442 119340 302.1mb   151mb
green open .monitoring-es-7-2020.10.18       jWpHZhsuTri7ymftV0A3YA 1 1 241984  26894 231.7mb 115.8mb
green open .monitoring-logstash-7-2020.10.20 6Bqq_O-dQMyj7_i8zvPvUw 1 1  89383      0    12mb   5.9mb
green open .monitoring-es-7-2020.10.17       jToRam7xRpCTaKGfcQqK5A 1 1 233288  13568 222.5mb 111.2mb
green open .monitoring-logstash-7-2020.10.21 0kCfzKr7SqyrY5oYZlnjQw 1 1     42      0   3.1mb   1.5mb
green open .monitoring-es-7-2020.10.16       SBJ8HSuCSk2H5UlT5p3IxA 1 1 225424   1680 212.8mb 106.4mb
green open .kibana_1                         tUbNvbMcSYmF36h727GnLg 1 1    373    443   1.4mb 718.2kb
green open .security-7                       _RxTng3-QVCfWVigoj0dJQ 1 1     51      0 261.2kb 130.6kb
green open .monitoring-es-7-2020.10.21       UPG6RFMnSZqXEM3u8Jy4Sg 1 1 100965 149872 133.2mb  67.1mb
green open .monitoring-es-7-2020.10.20       M5d3vfK1QNal57gh3FYnWQ 1 1 310337  11024 341.3mb 160.5mb
green open .monitoring-kibana-7-2020.10.20   -6GeyiunT5OZ3Ty0mwRkRA 1 1  17244      0   5.6mb   2.8mb
green open .monitoring-kibana-7-2020.10.21   D7RnjKVPTQmHaystrjLG6w 1 1      6      0 647.7kb 323.7kb
green open metrics-index_pattern_placeholder dk1w4YO4R9uSTLlDF3LZeQ 1 1      0      0    416b    208b
green open .async-search                     VVR_vodmSAqhAoSiC8TRTg 1 1     50    907 915.1mb 372.5mb
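For the fortinet indices alone, sorted by name, so I can see exactly when ingest stopped (the newest one above is fortinet-2020.10.13), I use:

'
curl --cacert /etc/elasticsearch/certs/ca.crt -u elastic \
  "https://x.x.x.51:9200/_cat/indices/fortinet-*?v&s=index"
'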

GET /_cluster/health?

{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 4,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 29,
  "active_shards" : 58,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
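As a last end-to-end test I send a single message to the floating IP myself and watch the Logstash log at the same time (the message content and log path are just what I expect on a default install):

'
logger --udp --server x.x.x.53 --port 514 "<134>date=2020-10-21 time=12:00:00 devname=test msg=\"end-to-end tls debug\""
tail -f /var/log/logstash/logstash-plain.log
'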

I hope you can help me with this problem.
