Help with data collection and indexing

I have already finished installing and configuring Elasticsearch, Kibana, and Logstash.

I found some existing indices after browsing to localhost:9200/_cat/indices?v

Now I want to send logs from different devices via JDBC & Syslog to Elasticsearch and view them in Kibana.

How can I do so?

Have you seen Beats and Logstash?

I am going for an agentless approach at first, so Beats is not useful at this moment.

For Logstash, I am confused about which part of the docs covers collecting different logs.

You will usually want an input, filter and an output.

Start with a file input, and go from there. The docs walk you through some basic processing that should get you started.

May I have the URL? Which step should I start with?

https://www.elastic.co/guide/en/logstash/6.2/getting-started-with-logstash.html

I got a bit confused.

I tried to create a syslog input config file under the conf.d directory, as below:

input {
  udp {
    port => "514"
    type => "syslog"
  }
}

filter {
}

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}

What should I do next to make it effective and then view data from Kibana?

Is that not working? It looks ok.

I just created the above config file, and there are syslog messages being sent to my Elastic server.

Should I update or configure anything further so that Elasticsearch collects those syslog messages and builds an index for further processing in Kibana?

If it's working, then see what the analysis looks like and go from there.
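For reference, when the elasticsearch output has no explicit `index` option (as in the config above), Logstash 6.x writes to a daily index named `logstash-%{+YYYY.MM.dd}`, so in Kibana you would create an index pattern such as `logstash-*`. A small sketch (illustrative only) of what such a daily index name looks like:

```python
from datetime import date

# Default index name pattern used by the Logstash 6.x elasticsearch
# output when no explicit `index` option is set: logstash-YYYY.MM.dd
index_name = "logstash-" + date.today().strftime("%Y.%m.%d")
print(index_name)  # e.g. logstash-2018.05.10
```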

From Kibana, I can see system indices only.
I didn't find the one I am trying to create. Did I miss any steps?

What does the output from _cat/indices?v show?

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .monitoring-kibana-6-2018.05.07 7qS0tPxOQa-N6Q7OZj03EQ 1 0 7107 0 1.9mb 1.9mb
green open .kibana 8b9DbLcMTaaOtUYo-j59PQ 1 0 1 0 4kb 4kb
green open .monitoring-es-6-2018.05.08 mtDWvwHCRq-UgQ0F8xlfDw 1 0 198731 228 107.1mb 107.1mb
green open .monitoring-es-6-2018.05.06 6m1XXKo7QvCk3Inp4hzO-g 1 0 146896 78 65.9mb 65.9mb
green open .watcher-history-7-2018.05.05 YE0pokcCRECJr2N4iCa9hA 1 0 8628 0 11.7mb 11.7mb
green open .monitoring-es-6-2018.05.10 xkWc-cOxRouax2g61F6znQ 1 0 92061 333 51.8mb 51.8mb
green open .monitoring-alerts-6 N9R4IKxiQMiXzo5I4wURKQ 1 0 2 0 12kb 12kb
green open .watcher-history-7-2018.05.06 LiojSCnURVSuAUDPss8AyQ 1 0 8634 0 11.7mb 11.7mb
green open .monitoring-kibana-6-2018.05.09 OLOv908_RM61ZlZolioC7w 1 0 8638 0 2mb 2mb
green open .watcher-history-7-2018.05.04 xsRjEun5SgmV9JMHNrfiQw 1 0 7030 0 9.5mb 9.5mb
green open .monitoring-es-6-2018.05.04 BIha6WuGRGKKopPjAF1keQ 1 0 74383 63 32mb 32mb
green open .monitoring-kibana-6-2018.05.05 pK00cdlLQMm8_LGmyUP8Xg 1 0 8637 0 1.9mb 1.9mb
green open .watches kTDTEu2xRq-hSz45dfWd0A 1 0 0 0 268b 268b
green open .monitoring-kibana-6-2018.05.06 MnX0ObLWQ1-fVlR0wa3qPg 1 0 8638 0 1.9mb 1.9mb
green open .monitoring-kibana-6-2018.05.08 rl9-m2-5S3qiwD6Dcfj1ag 1 0 8637 0 2.1mb 2.1mb
yellow open test flziY85sTy-LFua8vnxx6Q 5 1 1 0 4.4kb 4.4kb
green open .triggered_watches dOPdzOQbRRqDyuxEHwpxdw 1 0 0 0 3.2mb 3.2mb
green open .monitoring-kibana-6-2018.05.10 TmdojM3UTk-sOF3mYF7l-A 1 0 3394 0 1mb 1mb
green open .monitoring-es-6-2018.05.07 GckqTampTU22W449XQaUvQ 1 0 178671 102 95.8mb 95.8mb
green open .monitoring-kibana-6-2018.05.04 z-8JMgWZRJ2FuvRWnAHiJw 1 0 5525 0 1.5mb 1.5mb
green open .monitoring-es-6-2018.05.09 JRklvsD2QXCeiOOAxP4faA 1 0 216013 252 114.5mb 114.5mb
green open .security-6 wdydXTI2QO-AQJBp9tWRrA 1 0 3 0 9.8kb 9.8kb
green open .monitoring-es-6-2018.05.05 eEeFACUgQNKS9s75yzoX3Q 1 0 120968 80 53.9mb 53.9mb
green open .watcher-history-7-2018.05.07 JkZbccpKRh2Azb4WES4urw 1 0 2832 0 3.9mb 3.9mb
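Everything in that listing except `test` is a `.`-prefixed system index, so nothing from the syslog pipeline has arrived yet. For what it's worth, a small sketch (using an abridged sample of the listing above, with truncated uuids) to pick user-created indices out of `_cat/indices?v` output:

```python
# Abridged sample of the _cat/indices?v output shown above
# (uuids truncated for brevity).
cat_output = """\
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana 8b9DbLcM 1 0 1 0 4kb 4kb
yellow open test flziY85s 5 1 1 0 4.4kb 4.4kb
"""

# Skip the header row, take the third column (index name), and keep
# only names that do not start with "." (i.e. non-system indices).
user_indices = [
    line.split()[2]
    for line in cat_output.splitlines()[1:]
    if not line.split()[2].startswith(".")
]
print(user_indices)  # ['test']
```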

I'd put a stdout section in the output to make sure that things are coming in and making it to the output.

Sorry, I didn't get it

Replace

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}

With

output {
  stdout { codec => json }
}

I checked that there are a lot of syslog messages being sent to the ELK server, which has all essential modules installed.

I configured it as described above, but found that port 514 is not listening. Is that the source of the problem?

How can I check whether the logs are successfully received by the input and then output to Elasticsearch?

Thanks

input {
  udp {
    port => "514"
    type => "syslog"
  }
}

filter {
}

output {
  stdout { codec => json }
}

If something is received on UDP / 514, then you will see it in the logstash stdout.
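To generate traffic for that test yourself, one option (an illustrative Python client, not something from this thread) is to send a single syslog-style datagram at the UDP input and watch for it on the Logstash stdout:

```python
import socket

# One RFC 3164-style test message. "<14>" is the priority value:
# facility 1 (user) * 8 + severity 6 (info) = 14.
msg = b"<14>May 10 12:00:00 testhost myapp: hello from a test client"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# UDP is connectionless, so sendto succeeds even if nothing is
# listening yet; it returns the number of bytes handed to the kernel.
sent = sock.sendto(msg, ("127.0.0.1", 514))
sock.close()
print(sent)
```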

How can I see the logstash stdout?

How do you launch Logstash? Does it print anything?
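Also worth checking whether anything is actually bound on UDP 514. Note that on Linux, ports below 1024 are privileged, so Logstash running as a non-root user typically cannot bind port 514 at all, which is a common reason it is not listening. A small Python sketch (the helper function is hypothetical) to probe a UDP port:

```python
import socket

def udp_port_free(port, host="127.0.0.1"):
    """Return True if we could bind the UDP port (nothing listening there)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind((host, port))
        return True
    except OSError:
        # Either something is already bound to the port, or (for ports
        # below 1024) this process lacks the privileges to bind it --
        # both look the same from here.
        return False
    finally:
        s.close()
```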