What is my "index pattern"?

It seems Logstash is not able to connect to Elasticsearch. Is Elasticsearch running? Can you reach port 9200 on the Elasticsearch host from the host where Logstash is running, e.g. using telnet?
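For example, a quick reachability check from the Logstash host might look like this (a sketch, not your exact setup — `ES_HOST` here is a placeholder for the real Elasticsearch address):

```shell
# Placeholder address -- substitute your real Elasticsearch host.
ES_HOST=127.0.0.1
ES_PORT=9200

# Bash can open a TCP connection via /dev/tcp, which works even
# on hosts where telnet is not installed.
if timeout 2 bash -c "</dev/tcp/$ES_HOST/$ES_PORT" 2>/dev/null; then
    echo "port $ES_PORT on $ES_HOST is reachable"
else
    echo "port $ES_PORT on $ES_HOST is NOT reachable"
fi

# Elasticsearch speaks HTTP, so curl gives a more informative check:
curl -sS --max-time 5 "http://$ES_HOST:$ES_PORT/" \
    || echo "no HTTP response from $ES_HOST:$ES_PORT"
```

If the TCP probe fails from the Logstash host but succeeds locally on the Elasticsearch host, the problem is the bind address or a firewall, not Elasticsearch itself.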

Elasticsearch is running:

# service elasticsearch status
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2017-07-04 03:50:33 EDT; 41min ago
     Docs: http://www.elastic.co
 Main PID: 1344 (java)
   CGroup: /system.slice/elasticsearch.service
           └─1344 /bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -XX:+Always...

Jul 04 03:50:33 localhost.localdomain systemd[1]: Starting Elasticsearch...
Jul 04 03:50:33 localhost.localdomain systemd[1]: Started Elasticsearch.

and

# netstat -naultp 
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      3990/nginx: master  
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1337/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2481/master         
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      923/node            
tcp        0      0 0.0.0.0:514             0.0.0.0:*               LISTEN      922/syslog-ng       
tcp        0      0 172.30.9.20:80          172.30.10.18:45044      TIME_WAIT   -                   
tcp        0      0 127.0.0.1:38176         127.0.0.1:9200          ESTABLISHED 923/node            
tcp        0    188 172.30.9.20:22          172.30.10.18:57694      ESTABLISHED 2787/sshd: root@pts 
tcp        0      0 127.0.0.1:38178         127.0.0.1:9200          ESTABLISHED 923/node            
tcp6       0      0 ::1:9200                :::*                    LISTEN      1344/java           
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      1344/java           
tcp6       0      0 ::1:9300                :::*                    LISTEN      1344/java           
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      1344/java           
tcp6       0      0 :::22                   :::*                    LISTEN      1337/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      2481/master         
tcp6       0      0 127.0.0.1:9200          127.0.0.1:38178         ESTABLISHED 1344/java           
tcp6       0      0 127.0.0.1:9200          127.0.0.1:38176         ESTABLISHED 1344/java           
udp        0      0 0.0.0.0:514             0.0.0.0:*                           922/syslog-ng    

That is odd!
Why is my "Time-field name" field disabled?

Have you tried connecting to it from the Logstash instance?

It looks like Elasticsearch is bound to 127.0.0.1:9200, which is not accessible from outside that machine. You will need to change your Elasticsearch config to bind to 0.0.0.0 (all interfaces) or the external IP address (172.30.9.20) of the Elasticsearch host.

How can I bind it to "0.0.0.0"? My Elasticsearch config is:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#script.disable_dynamic: true

I guess "network.host: 0.0.0.0"?
Any ideas?
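Yes, exactly. In elasticsearch.yml the change is one line; a minimal sketch, using the external address from your netstat output as the alternative:

```yaml
# Bind HTTP and transport to all interfaces:
network.host: 0.0.0.0
# ...or bind only to the host's external address:
#network.host: 172.30.9.20
```

One caveat worth knowing: on Elasticsearch 5.x, binding to anything other than localhost promotes the bootstrap checks from warnings to hard errors, so the node may refuse to start until those checks pass.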

I set "network.host: 0.0.0.0" but now Elasticsearch won't start:

# service elasticsearch status
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2017-07-17 08:58:10 EDT; 33s ago
     Docs: http://www.elastic.co
  Process: 3557 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
  Process: 3554 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
 Main PID: 3557 (code=exited, status=1/FAILURE)

Jul 17 08:58:10 localhost.localdomain elasticsearch[3557]: at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment(InternalSe...a:100)
Jul 17 08:58:10 localhost.localdomain elasticsearch[3557]: at org.elasticsearch.cli.EnvironmentAwareCommand.createEnv(EnvironmentAwareComma...va:72)
Jul 17 08:58:10 localhost.localdomain elasticsearch[3557]: at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67)
Jul 17 08:58:10 localhost.localdomain elasticsearch[3557]: at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122)
Jul 17 08:58:10 localhost.localdomain elasticsearch[3557]: at org.elasticsearch.cli.Command.main(Command.java:88)
Jul 17 08:58:10 localhost.localdomain elasticsearch[3557]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91)
Jul 17 08:58:10 localhost.localdomain elasticsearch[3557]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84)
Jul 17 08:58:10 localhost.localdomain systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Jul 17 08:58:10 localhost.localdomain systemd[1]: Unit elasticsearch.service entered failed state.
Jul 17 08:58:10 localhost.localdomain systemd[1]: elasticsearch.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

Any idea?

I use "Nginx" and configured it for Kibana as below:

# nano /etc/nginx/conf.d/kibana.conf

server {
    listen 80;

    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;        
    }
}
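For reference, the `htpasswd.users` file that `auth_basic_user_file` points at can be generated without the apache2-utils package, e.g. with openssl. The user name `kibanaadmin` and the password below are placeholders, not values from your setup:

```shell
# Append a user:hash line in the apr1 (htpasswd-compatible) format.
# Replace kibanaadmin and the password with your own values.
printf 'kibanaadmin:%s\n' "$(openssl passwd -apr1 'secret')" >> htpasswd.users

# Show the result; the hash should start with $apr1$.
cat htpasswd.users
```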

Can it cause any problem?

Is there anything in the logs when you start up Elasticsearch?

Which logs? If I change it back to "network.host: localhost", Logstash launches fine.

Look in the elasticsearch logs. The location should be available in your elasticsearch.yml file. It is very likely that the logs will contain warnings about failed bootstrap checks that you will need to fix in order to be able to connect to an IP other than localhost.
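For what it's worth, the two bootstrap checks that most commonly fail on a fresh install are the `vm.max_map_count` check and the open-file-descriptor check. Assuming a standard RPM/systemd install, the fixes are usually config fragments like these (the values are the commonly documented minimums; confirm against the actual messages in your log):

```
# /etc/sysctl.d/99-elasticsearch.conf -- load with: sysctl --system
vm.max_map_count = 262144

# /etc/security/limits.d/99-elasticsearch.conf
elasticsearch  -  nofile  65536
```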

The log shows:
https://pastebin.com/GtSc9RE2
Is that the log you need?
The log location in my config file is commented out, so I guess I must uncomment it!

If it is commented out it will use the default location, which will depend on how you installed Elasticsearch.

OK.
I set it to "path.logs: /var/log/elastic".
What is the next step?

Set "network.host: 0.0.0.0", start Elasticsearch, and check what errors and warnings appear in the logs.

Ah, it is OK now and there are no errors.
What next?

If there are no errors in the log and it starts up fine I assume it is working?

I'm on step one. I got the "No default index pattern. You must select or create one to continue." error, and "Time-field name" is empty.

Do you have any data in Elasticsearch? Go into Dev Tools and run GET _cat/indices to list the indices available in the cluster.
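If you prefer the command line over Dev Tools, the same request can be made with curl. This assumes you run it on the Elasticsearch host itself with the default port; adjust the address (and add credentials) if your setup differs:

```shell
# List all indices in the cluster; ?v adds a header row,
# and -sS keeps output quiet while still showing errors.
curl -sS 'http://localhost:9200/_cat/indices?v'
```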

Data? I collected Windows event logs and stored them via Logstash, as you can see. The result is:

yellow open .kibana oRgrhDK4TRW4-C-AafI_ag 1 1 1 0 3.1kb 3.1kb