Hello!
Could you provide a link on how to connect Filebeat to Logstash? JBoss logs will be transmitted.
Thank you
Are you trying to load the logs into Elasticsearch? If so, you do not need Logstash.
Here are the same 2 links I gave you.
Plus, I would start with the Filebeat quickstart here to send directly to Elasticsearch. Get that working, and THEN send through Logstash if you need to.
Hi Stephen!
I'm trying to connect to Logstash.
Maybe you are right... let's start with that.
Hi!
I've installed Elasticsearch and Kibana on my VM, and again I'm not able to insert the token created in the Kibana web interface.
I'm getting this:
and it is just hanging in the web interface, reloading again and again.
What am I doing wrong if I follow exactly the steps provided in the docs?
Thank you
Vassiliy Vins
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-21T12:48:09.233-07:00","message":"Holding setup until preboot stage is completed.","log":{"level":"INFO","logger":"root"},"process":{"pid":4947},"trace":{"id":"bfa662041749ed0d08f3f1cc2670ee48"},"transaction":{"id":"50a201cdef28163d"}}
That's the list of commands, in the order I ran them:
java -version
yum install java-1.8.0-openjdk -y
#create repo for elasticsearch
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
sudo yum install --enablerepo=elasticsearch elasticsearch -y
systemctl daemon-reload
systemctl start elasticsearch
systemctl enable elasticsearch
systemctl status elasticsearch
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
# save the token - it is valid for 30 minutes only
####################### installing Kibana
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/kibana.repo
[kibana-8.x]
name=Kibana repository for 8.x package
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum install kibana -y
The only change in kibana.yml:
server.host: "0.0.0.0"
systemctl daemon-reload
systemctl start kibana
systemctl enable kibana
systemctl status kibana
Go to the browser and open localhost:5601.
It will ask for the token - paste it there.
Then run
/usr/share/kibana/bin/kibana-verification-code
It will create a code - input it in the browser. DONE.
The starting script is in /usr/share/kibana/bin.
Finally, I've figured it out - not all browsers with the default configuration are allowed to connect to Kibana. At first I was using Firefox; Firefox incognito didn't help either. Then I installed Chromium and I got the Kibana web interface immediately.
I believe it should be mentioned in the ELK docs... it takes a lot of time to find the problem.
Hello!
I've made a very simple configuration on my virtual machines:
one host with Elasticsearch + Kibana (192.168.0.55) and another
with Filebeat (192.168.0.48).
Applied the Elasticsearch enrollment token to Kibana - everything works.
elasticsearch.yml:
cluster.name: mycluster
node.name: elk
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["elk"]
http.host: 0.0.0.0
filebeat.yml:
filebeat.inputs:
- type: log
  id: my-nginx-log
  enabled: true
  paths:
    - /var/log/nginx/access*.log
  tags: ["back"]
  fields:
    env: test
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["192.168.0.55:9200"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
Question 1 - will Elasticsearch accept all data from Filebeat on port 9200 by default, or is some additional configuration required to allow traffic from Filebeat to Elasticsearch?
Question 2 - how can I check that Elasticsearch receives data from Filebeat?
Thank you
#1
That defines that Filebeat will send data via the Elasticsearch output, and yes, you should define the port.
#2 Do you have Kibana installed? If so, go to Kibana -> Discover to view the data.
Perhaps you should watch some of the getting started / quick start videos or the guides.
Example
I also noticed it looks like you are shipping nginx logs - we have a module that will parse them / automatically create dashboards etc.
See here .... nginx is even the example we show
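To answer question 2 from the command line as well, you could ask Elasticsearch directly whether any Filebeat indices exist and contain documents. A sketch, assuming the elastic password generated at install time (`YOUR_PASSWORD` is a placeholder) and `-k` to skip verification of the self-signed certificate:

```shell
# List any Filebeat indices / backing indices and their document counts
curl -k -u elastic:YOUR_PASSWORD "https://192.168.0.55:9200/_cat/indices/*filebeat*?v"

# Or count documents matching the filebeat-* index pattern directly
curl -k -u elastic:YOUR_PASSWORD "https://192.168.0.55:9200/filebeat-*/_count?pretty"
```

If the count is greater than zero, data is arriving and should be visible in Discover.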
Thank you for the detailed answer.
Let's return to my question. Do I need some extra configuration for Filebeat and Elasticsearch to exchange info? Or
output.elasticsearch:
  hosts: ["192.168.0.55:9200"]
is that enough to send data and see it later in Kibana?
I'm asking because the docs say:
output.elasticsearch:
  hosts: ["https://myEShost:9200"]
  username: "filebeat_internal"
  password: "YOUR_PASSWORD"
  ssl:
    enabled: true
    ca_trusted_fingerprint: "b9a10bbe64ee9826abeda6546fc988c8bf798b41957c33d05db736716513dc9c"
Is this configuration compulsory, or just given as an example?
Ahhh thanks for the clarification.
It depends on how you set up Elasticsearch...
There are many options.
Since it looks like you have enabled security in Elasticsearch (according to your elasticsearch.yml above), you need to use the second configuration.
output.elasticsearch:
  hosts: ["https://myEShost:9200"]
  username: "filebeat_internal"
  password: "YOUR_PASSWORD"
  ssl:
    enabled: true
    ca_trusted_fingerprint: "b9a10bbe64ee9826abeda6546fc988c8bf798b41957c33d05db736716513dc9c"
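In case it helps: the ca_trusted_fingerprint value is just the SHA-256 fingerprint of the Elasticsearch HTTP CA certificate, lower-cased with the colons removed. A sketch using a throwaway self-signed certificate for demonstration (on the real server you would point openssl at the actual CA cert, typically /etc/elasticsearch/certs/http_ca.crt):

```shell
# Generate a throwaway cert purely for demonstration (assumption: in practice
# you would use the real CA cert under /etc/elasticsearch/certs/)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null

# Take the SHA-256 fingerprint, drop the colons, lower-case it - the result
# is the 64-hex-character format ca_trusted_fingerprint expects
fp=$(openssl x509 -fingerprint -sha256 -noout -in /tmp/demo.crt \
  | cut -d= -f2 | tr -d ':' | tr 'A-F' 'a-f')
echo "$fp"
```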
You could use a less secure version
output.elasticsearch:
  hosts: ["https://myEShost:9200"]
  username: "filebeat_internal"
  password: "YOUR_PASSWORD"
  ssl:
    verification_mode: none
Or, if you have not set up the filebeat_internal user:
output.elasticsearch:
  hosts: ["https://myEShost:9200"]
  username: "elastic"
  password: "PASSWORD"
  ssl:
    verification_mode: none
The configuration options and descriptions are here
As soon as I insert the lines related to username, password and ssl:
filebeat.inputs:
- type: log
  id: my-nginx-log
  enabled: true
  paths:
    - /var/log/nginx/access*.log
  tags: ["back"]
  fields:
    env: test
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["192.168.0.55:9200"]
  username: "elastic"
  password: "4WQ=+RqYuArNBl-acew7"
  ssl:
    verification_mode: none
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
filebeat is getting this error in systemctl status filebeat:
[root@centos72 filebeat]# systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Sun 2022-11-27 09:06:58 MST; 4min 17s ago
Docs: https://www.elastic.co/beats/filebeat
Process: 8115 ExecStart=/usr/share/filebeat/bin/filebeat --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS (code=exited, status=1/FAILURE)
Main PID: 8115 (code=exited, status=1/FAILURE)
Nov 27 09:06:58 centos72 systemd[1]: filebeat.service: main process exited, code=exited, status=1/FAILURE
Nov 27 09:06:58 centos72 systemd[1]: Unit filebeat.service entered failed state.
Nov 27 09:06:58 centos72 systemd[1]: filebeat.service failed.
Nov 27 09:06:58 centos72 systemd[1]: filebeat.service holdoff time over, scheduling restart.
Nov 27 09:06:58 centos72 systemd[1]: Stopped Filebeat sends log files to Logstash or directly to Elasticsearch..
Nov 27 09:06:58 centos72 systemd[1]: start request repeated too quickly for filebeat.service
Nov 27 09:06:58 centos72 systemd[1]: Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch..
Nov 27 09:06:58 centos72 systemd[1]: Unit filebeat.service entered failed state.
Nov 27 09:06:58 centos72 systemd[1]: filebeat.service failed.
Needs to be... https
Also, careful with your indentation above... In YAML, details matter (although in this case it probably doesn't matter) - make sure hosts is indented properly under output.elasticsearch.
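For reference, a correctly indented output section would look something like this (a sketch using the host from the posts above; replace the password with your own):

```yaml
output.elasticsearch:
  # hosts, credentials and ssl are all indented two spaces under output.elasticsearch
  hosts: ["https://192.168.0.55:9200"]
  username: "elastic"
  password: "YOUR_PASSWORD"
  ssl:
    verification_mode: none
```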
Also, to see the actual logs, per the docs:
journalctl -u filebeat.service
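Filebeat also ships with built-in self-checks that are handy for exactly this kind of startup failure. A sketch, assuming Filebeat is installed with its default config path:

```shell
# Validate the syntax of the configuration file
filebeat test config -c /etc/filebeat/filebeat.yml

# Check that the configured output (Elasticsearch) is reachable and that
# the credentials / TLS settings work
filebeat test output -c /etc/filebeat/filebeat.yml
```

If `test config` fails, the error message usually points at the offending line in the YAML.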
Finally I figured out what caused the error message and didn't allow Filebeat to start:
it was these lines -
tags: ["back"]
fields:
env: test
Commented them out and now it works.
Now, next step: how can I see my logs in the Kibana interface?
Should I create a role and user in the Kibana interface first? Between Filebeat and Elasticsearch I'm using the username "elastic" and the password created by Elasticsearch during the installation process.
Hmm, those should be valid configurations; perhaps the syntax was wrong.
I would just use the elastic user to get started.
Please open new threads with new specific questions.
It should be the indentation ... I will check later, because I need that configuration.
Thank you for your help... I will open a new thread.
Tags and fields should work... Glad you got it working!
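For completeness, correctly indented under the input, those two settings would look like this (a sketch of the relevant part of filebeat.yml):

```yaml
filebeat.inputs:
- type: log
  id: my-nginx-log
  enabled: true
  paths:
    - /var/log/nginx/access*.log
  tags: ["back"]   # a list of tags attached to each event
  fields:          # custom fields land under "fields." in each event
    env: test
```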
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.