Trying to set up the ELK stack

Hi, I'm new to this. So many guides are available, but every single one is special in some way. I installed Elasticsearch, Logstash and logstash-forwarder. When I start Logstash on the server with a simple config that outputs directly into ES, it works, and checking ES with

http://172.16.50.66:9200/_search?pretty

works too.
Starting Logstash with my conf file also works:

bin/logstash -f logstash1.conf

Kibana, or "something to view" the data, I decided to install later; I want to have the pipeline running without Kibana first, if that's possible.
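To give an idea of what I mean by a simple config, it is roughly of this shape (just a sketch here, not my exact file: a lumberjack input on port 5000 with my certificate and key, and a plain elasticsearch output):

input {
  lumberjack {
    port            => 5000
    ssl_certificate => "/etc/ssl/logstash.crt"
    ssl_key         => "/etc/ssl/logstash.key"
    type            => "logs"        # tag events coming from the forwarder
  }
}

output {
  elasticsearch {
    host     => "localhost"          # ES runs on the same machine
    protocol => "http"
  }
}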
I installed logstash-forwarder on the client. When I try starting the forwarder, this is what I get:
[root@localhost /]# /opt/logstash-forwarder/bin/logstash-forwarder -config /opt/logstash-forwarder/bin/forwarder.conf
2015/08/05 14:10:03.882581 --- options -------
2015/08/05 14:10:03.882981 config-arg: /opt/logstash-forwarder/bin/forwarder.conf
2015/08/05 14:10:03.883016 idle-timeout: 5s
2015/08/05 14:10:03.883028 spool-size: 1024
2015/08/05 14:10:03.883039 harvester-buff-size: 16384
2015/08/05 14:10:03.883049 --- flags ---------
2015/08/05 14:10:03.883059 tail (on-rotation): false
2015/08/05 14:10:03.883069 log-to-syslog: false
2015/08/05 14:10:03.883080 quiet: false
2015/08/05 14:10:03.883588
"network": {
"servers": [ "172.16.50.66:5000" ],

"ssl_certificate => "/etc/ssl/logstash.crt"
"ssl_key => "/etc/ssl/logstash.key"

"timeout": 15
},

"files": [
{
"paths": [
"/var/log/*.log",
"/var/log/messages"
],
"fields": { "type": "syslog" }
}, {
"paths": [ "/var/log/apache2/access.log" ],
"fields": { "type": "apache" }
}
]
}
2015/08/05 14:10:03.884474 Failed unmarshalling json: invalid character ':' after top-level value
2015/08/05 14:10:03.884492 Could not load config file /opt/logstash-forwarder/bin/forwarder.conf: invalid character ':' after top-level value

I think I pasted all the paths correctly. In logstash-forwarder.err I found the lines below; it looks like the client wants to connect but isn't allowed to? I opened all the necessary ports on both client and server.

logstash-forwarder.err:
2015/08/05 14:26:25.827708 Failure connecting to 172.16.50.66: dial tcp 172.16.50.66:5000: connection refused
2015/08/05 14:26:26.829432 Connecting to [172.16.50.66]:5000 (172.16.50.66)
2015/08/05 14:26:26.829731 Failure connecting to 172.16.50.66: dial tcp 172.16.50.66:5000: connection refused
2015/08/05 14:26:27.831514 Connecting to [172.16.50.66]:5000 (172.16.50.66)
2015/08/05 14:26:27.831954 Failure connecting to 172.16.50.66: dial tcp 172.16.50.66:5000: connection refused
2015/08/05 14:26:28.833471 Connecting to [172.16.50.66]:5000 (172.16.50.66)
2015/08/05 14:26:28.833777 Failure connecting to 172.16.50.66: dial tcp 172.16.50.66:5000: connection refused

on server:
[root@localhost logstash]# firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eth0
sources:
services: dhcpv6-client http ssh
ports: 9200/tcp 9300/tcp 5000/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:

on client:
firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eth0
sources:
services: dhcpv6-client http ssh
ports: 5000/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:

So the ports seem to be open, right?
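I guess one way to double-check on the server would be something like this (as far as I understand it, "connection refused" usually means nothing is listening on the port at all, rather than a firewall block):

ss -tlnp | grep 5000    # is anything actually listening on 5000?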

I have tried so many different settings, and it is still not working :frowning:
Can someone help :S ?
Sorry for my bad English :slight_smile:
glan

/opt/logstash-forwarder/bin/forwarder.conf isn't valid JSON. It looks like it's missing the outermost braces, i.e. it looks like

"network": {
  ...
}

when it should look like this:

{
  "network": {
    ...
  }
}

See the example in the readme file.
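For reference, a complete file of that shape would look something like this (paths copied from your output; the SSL part is a guess, the exact options depend on how you set up your certificates, so treat this as a sketch):

{
  "network": {
    "servers": [ "172.16.50.66:5000" ],
    "ssl ca": "/etc/ssl/logstash.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/*.log", "/var/log/messages" ],
      "fields": { "type": "syslog" }
    },
    {
      "paths": [ "/var/log/apache2/access.log" ],
      "fields": { "type": "apache" }
    }
  ]
}

Every key and value needs properly paired double quotes and a colon between them; lines like "ssl_certificate => "/etc/ssl/logstash.crt" mix Logstash's => syntax into JSON and will not parse.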

True, but I checked the forwarder.conf in /opt/logstash-forwarder/bin and I do have the outermost braces there. But yes, I see that the output the forwarder prints says something else.

Either way, make sure the contents of the configuration file pass JSON validation (e.g. with http://jsonlint.com/).
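If you prefer the command line, something like this should also point out syntax errors (assuming Python is installed, which it is by default on CentOS 7):

python -m json.tool /opt/logstash-forwarder/bin/forwarder.conf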

Thanks, the .conf files are all fine now, I think.

Since this morning I have been getting:

Failed to tls handshake with x.x.x.x x509: cannot validate certificate for x.x.x.x because it doesn't contain any IP SANs

I added the subjectAltName IP entry in openssl.cnf, but it still didn't work afterwards. Maybe I was in the wrong directory when I ran the command to create the key? Is that important?
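The way I understood the guides, the fix is roughly this (a sketch of what I tried, I may well have gotten a step wrong): add the server IP as a subject alternative name in openssl.cnf, e.g. under [ v3_ca ]:

subjectAltName = IP: x.x.x.x    # the Logstash server's IP

and then regenerate the certificate and key with something like:

openssl req -config <path to openssl.cnf> -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout logstash.key -out logstash.crt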

Then I searched a lot and found many instructions for generating a proper certificate and key, but nothing worked for me :frowning: I followed several of them step by step, but the scripts or tools some of them mention are all offline...

There are a lot of manuals, but none of them worked. I think my problem is that my paths differ from what the instructions assume; for example my openssl.cnf is not where 90 % of them say it is, I found it somewhere else. Maybe this path problem is what keeps me from generating a valid certificate.

Now my forwarder says:

Connecting to [x.x.x.x]:5000 (x.x.x.x)
2015/08/06 16:56:57.309096 Failed to tls handshake with x.x.x.x x509: certificate signed by unknown authority
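As far as I understand this one, it means the forwarder does not trust the certificate the Logstash server presents, i.e. the "ssl ca" file on the client has to be the very certificate (or the CA that signed it) that the lumberjack input uses on the server. Roughly, the two sides should point at matching files (paths here are placeholders):

on the client (forwarder.conf, "network" section):
    "ssl ca": "/path/to/logstash.crt"

on the server (lumberjack input in the Logstash config):
    ssl_certificate => "/path/to/logstash.crt"
    ssl_key         => "/path/to/logstash.key"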

There are a thousand ways to create the certificate, and none of them worked for me :frowning:
Does someone have a real step-by-step guide, with all the details, for creating the SSL certificate? I don't get it, and I have been trying to set up ELK for a week :frowning:
Please help.

I'm on CentOS 7, by the way.

OK, it works now :slight_smile: I had almost stopped believing it would. I tried so many configurations, but now I'm happy :slight_smile:

It worked for one day: the forwarder sent to Logstash, and SSL worked. Then, a day later, Elasticsearch stopped working... I can start Logstash and logstash-forwarder, and the forwarder sends, but Logstash cannot save into ES. I googled a lot again, about multicast vs. unicast discovery and various configuration settings, and tried a lot, but found nothing that fixes it :frowning:

This is the message I get when Logstash tries to save into ES:

Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master]; {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>1, :exception=>org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];, :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:215)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:67)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:153)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}
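If I understand the message right, Logstash can reach Elasticsearch, but the ES node itself has no elected master and has not recovered its state, so it refuses all writes. I guess checking the cluster directly on the ES server would show the same picture, something like:

curl 'http://localhost:9200/_cluster/health?pretty'    # should report the cluster status and whether a master exists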

On the day it worked, a colleague came around and saw that it wasn't working because some ES or LS processes I had not started myself were still running in the background. He opened some kind of task manager, looked up the PIDs and killed them, and then everything worked perfectly that day. I looked for that task manager afterwards, but CentOS has so many!? I couldn't find the exact one he used, and now he's on holiday :frowning:
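I guess what he used was just something like top, or plain ps and kill, roughly:

ps aux | grep -iE 'elasticsearch|logstash'   # find the leftover processes and their PIDs
kill <PID>                                   # then kill the stale one

but I'm not sure that is exactly what he did.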

Can someone help? :S

The service listing shows:
elasticsearch.service loaded failed failed Elasticsearch

sudo service elasticsearch status
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled)
Active: failed (Result: exit-code) since Di 2015-08-11 11:38:33 CEST; 14min ago
Docs: http://www.elastic.co
Process: 3402 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=$PID_DIR/elasticsearch.pid -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.config=$CONF_FILE -Des.default.path.conf=$CONF_DIR (code=exited, status=3)
Main PID: 3402 (code=exited, status=3)

Aug 11 11:38:33 T25.dymacon.de elasticsearch[3402]: at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:215)
Aug 11 11:38:33 T25.dymacon.de elasticsearch[3402]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Aug 11 11:38:33 T25.dymacon.de elasticsearch[3402]: Caused by: org.elasticsearch.ElasticsearchParseException: malformed, expected settings to start with 'object', instead was [VALUE_STRING]
Aug 11 11:38:33 T25.dymacon.de elasticsearch[3402]: at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:66)
Aug 11 11:38:33 T25.dymacon.de elasticsearch[3402]: at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:46)
Aug 11 11:38:33 T25.dymacon.de elasticsearch[3402]: at org.elasticsearch.common.settings.loader.YamlSettingsLoader.load(YamlSettingsLoader.java:46)
Aug 11 11:38:33 T25.dymacon.de elasticsearch[3402]: at org.elasticsearch.common.settings.ImmutableSettings$Builder.loadFromStream(ImmutableSettings.java:982)
Aug 11 11:38:33 T25.dymacon.de elasticsearch[3402]: ... 5 more
Aug 11 11:38:33 T25.dymacon.de systemd[1]: elasticsearch.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
Aug 11 11:38:33 T25.dymacon.de systemd[1]: Unit elasticsearch.service entered failed state.

ES isn't active; I can see the reason in the log, but how do I get it running again? :S
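From the "malformed, expected settings to start with 'object', instead was [VALUE_STRING]" line, I guess the problem is a syntax error in /etc/elasticsearch/elasticsearch.yml rather than the service itself, e.g. a value without a key or a missing space after the colon. Something like this, if I read it correctly:

cluster.name: mycluster       # valid YAML: key, colon, space, value
network.host:x.x.x.x          # no space after the colon: YAML reads this as a plain string -> this error

After fixing the file, I suppose it is just:

sudo systemctl start elasticsearch
sudo systemctl status elasticsearch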

When I try to start it manually:

sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -d -p /var/run/elasticsearch.pid --default.config=/etc/elasticsearch/elasticsearch.yml --default.path.home=/usr/share/elasticsearch --default.path.logs=/var/log/elasticsearch --default.path.data=/var/lib/elasticsearch --default.path.work=/tmp/elasticsearch --default.path.conf=/etc/elasticsearch
[root@T25 /]# {1.7.1}: pid Failed ...

  • FileNotFoundException[/var/run/elasticsearch.pid (Permission denied)]
    java.io.FileNotFoundException: /var/run/elasticsearch.pid (Permission denied)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
    at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:194)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)

How do I give it write permission for /var/run/? :S
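I guess one way, unless there is a better one, is to give the elasticsearch user its own directory under /var/run and point the pid file there, roughly:

sudo mkdir -p /var/run/elasticsearch
sudo chown elasticsearch:elasticsearch /var/run/elasticsearch

and then use -p /var/run/elasticsearch/elasticsearch.pid, or simply start it through systemd again (sudo systemctl start elasticsearch) once the config problem is fixed.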

I have now set up an identical server and client. ES is working and I can reach it via the URL.

But now, when I start Logstash, it gives me:
CircuitBreaker::rescuing exceptions {:name=>"Lumberjack input", :exception=>LogStash::SizedQueueTimeout::TimeoutError, :level=>:warn}
CircuitBreaker::rescuing exceptions {:name=>"Lumberjack input", :exception=>LogStash::SizedQueueTimeout::TimeoutError, :level=>:warn}
CircuitBreaker::rescuing exceptions {:name=>"Lumberjack input", :exception=>LogStash::SizedQueueTimeout::TimeoutError, :level=>:warn}
CircuitBreaker::rescuing exceptions {:name=>"Lumberjack input", :exception=>LogStash::SizedQueueTimeout::TimeoutError, :level=>:warn}
CircuitBreaker::rescuing exceptions {:name=>"Lumberjack input", :exception=>LogStash::SizedQueueTimeout::TimeoutError, :level=>:warn}
CircuitBreaker::Open {:name=>"Lumberjack input", :level=>:warn}
Exception in lumberjack input thread {:exception=>#<LogStash::CircuitBreaker::OpenBreaker: for Lumberjack input>, :level=>:error}
Lumberjack input: the pipeline is blocked, temporary refusing new connection. {:level=>:warn}
CircuitBreaker::Open {:name=>"Lumberjack input", :level=>:warn}
Exception in lumberjack input thread {:exception=>#<LogStash::CircuitBreaker::OpenBreaker: for Lumberjack input>, :level=>:error}
Lumberjack input: the pipeline is blocked, temporary refusing new connection. {:level=>:warn}
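From what I have read, these circuit-breaker warnings mean the lumberjack input itself is fine, but the pipeline behind it is blocked, usually because the elasticsearch output cannot deliver events, so Logstash starts refusing new forwarder connections. So I guess I have to look at my output section again, which is roughly of this shape (a sketch, Logstash 1.5 style):

output {
  elasticsearch {
    host     => "172.16.50.71"   # the new ES node
    protocol => "http"
  }
}

and check that curl http://172.16.50.71:9200 really answers from the Logstash machine.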

My forwarder can connect, but it gets some errors too:

opt/logstash-forwarder/bin/logstash-forwarder -config /opt/logstash-forwarder/bin/forwarder.conf
2015/08/11 14:43:47.231227 --- options -------
2015/08/11 14:43:47.231522 config-arg: /opt/logstash-forwarder/bin/forwarder.conf
2015/08/11 14:43:47.231564 idle-timeout: 5s
2015/08/11 14:43:47.231580 spool-size: 1024
2015/08/11 14:43:47.231591 harvester-buff-size: 16384
2015/08/11 14:43:47.231600 --- flags ---------
2015/08/11 14:43:47.231610 tail (on-rotation): false
2015/08/11 14:43:47.231624 log-to-syslog: false
2015/08/11 14:43:47.231635 quiet: false
2015/08/11 14:43:47.231797 {
"network": {
"servers": [ "172.16.50.71:5000" ],
"ssl ca": "/etc/pki/tls/certs/lumberjack.crt",
"timeout": 15
},
"files": [
{
"paths": [
"/var/log/.log"
],
"fields": { "type": "syslog" }
},
{
"paths": [
"/var/log/apache2/
.log"
],
"fields": { "type": "apache" }
}
]
}

2015/08/11 14:43:47.233388 Waiting for 2 prospectors to initialise
2015/08/11 14:43:47.233755 Launching harvester on new file: /var/log/yum.log
2015/08/11 14:43:47.234004 harvest: "/var/log/yum.log" (offset snapshot:0)
2015/08/11 14:43:47.234190 All prospectors initialised with 0 states to persist
2015/08/11 14:43:47.235037 Setting trusted CA from file: /etc/pki/tls/certs/lumberjack.crt
2015/08/11 14:43:47.236782 Connecting to [172.16.50.71]:5000 (172.16.50.71)
2015/08/11 14:43:47.485642 Connected to 172.16.50.71
2015/08/11 14:44:02.552919 Read error looking for ack: EOF
2015/08/11 14:44:02.553042 Setting trusted CA from file: /etc/pki/tls/certs/lumberjack.crt
2015/08/11 14:44:02.553579 Connecting to [172.16.50.71]:5000 (172.16.50.71)
2015/08/11 14:44:02.652794 Connected to 172.16.50.71
2015/08/11 14:44:02.662181 Read error looking for ack: EOF
2015/08/11 14:44:02.662231 Setting trusted CA from file: /etc/pki/tls/certs/lumberjack.crt
2015/08/11 14:44:02.662434 Connecting to [172.16.50.71]:5000 (172.16.50.71)

What can I do?
Please, someone help me :blush:

My elasticsearch.log

[2015-08-11 15:37:34,647][INFO ][node ] [Baron Blood] version[1.7.1], pid[3402], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-11 15:37:34,648][INFO ][node ] [Baron Blood] initializing ...
[2015-08-11 15:37:34,845][INFO ][plugins ] [Baron Blood] loaded [], sites []
[2015-08-11 15:37:34,902][INFO ][env ] [Baron Blood] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [16.1gb], net total_space [17.4gb], types [rootfs]
[2015-08-11 15:37:39,608][INFO ][node ] [Baron Blood] initialized
[2015-08-11 15:37:39,609][INFO ][node ] [Baron Blood] starting ...
[2015-08-11 15:37:40,096][INFO ][transport ] [Baron Blood] bound_address {inet[/172.16.50.71:9300]}, publish_address {inet[/172.16.50.71:9300]}
[2015-08-11 15:37:40,146][INFO ][discovery ] [Baron Blood] elasticsearch/0YAOOi60QIGz4C-wh2qSrg
[2015-08-11 15:37:43,241][INFO ][cluster.service ] [Baron Blood] new_master [Baron Blood][0YAOOi60QIGz4C-wh2qSrg][T27.dymacon.de][inet[/172.16.50.71:9300]], reason: zen-disco-join (elected_as_master)
[2015-08-11 15:37:43,387][INFO ][http ] [Baron Blood] bound_address {inet[/172.16.50.71:9200]}, publish_address {inet[/172.16.50.71:9200]}
[2015-08-11 15:37:43,387][INFO ][node ] [Baron Blood] started
[2015-08-11 15:37:43,390][INFO ][gateway ] [Baron Blood] recovered [1] indices into cluster_state
[2015-08-11 15:37:43,806][DEBUG][action.search.type ] [Baron Blood] All shards failed for phase: [query]
org.elasticsearch.indices.IndexMissingException: [logstash-2015.08.10] missing
at org.elasticsearch.indices.IndicesService.indexServiceSafe(IndicesService.java:288)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:559)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:544)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:306)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
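If I read this log right, ES itself starts fine: it binds to 172.16.50.71:9200/9300, elects itself master and recovers one index; the IndexMissingException just seems to be a search for logstash-2015.08.10, an index that does not exist yet. Something like this should show which logstash-* indices actually exist:

curl 'http://172.16.50.71:9200/_cat/indices?v'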