Can't find elasticsearch.yml

I installed ELK using Docker Compose. My Kibana works well, but Elasticsearch and Logstash don't work, and I can't find elasticsearch.yml and logstash.yml to change the configuration and see what the problem is.

Can you provide more of your configuration? With only this info, users can give just general guidance.

  • What does your docker-compose.yml look like?

This is the default location of Elasticsearch installed as a service on Linux (on a package install it is typically under /etc/elasticsearch/):

root@elastic:~# find / -name elasticsearch.yml
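When Elasticsearch runs in a container instead, the file lives inside the container's filesystem, so you have to search there. A quick sketch, assuming the container is named `elasticsearch` as in the compose file in this thread:

```shell
# Search for the config file inside the running container
docker exec elasticsearch find /usr/share/elasticsearch -name 'elasticsearch.yml'

# Or copy it out to the host to inspect or edit it
docker cp elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml .
```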

I've prepared a docker-compose.yml file using the same image versions that you're using in your ELK stack:

version: '3.3'
services:
  elasticsearch:
    image: elasticsearch:7.4.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
    restart: unless-stopped
    networks:
      - netElk
  kibana:
    image: kibana:7.4.0
    container_name: kibana
    ports:
      - 5601:5601
    restart: unless-stopped
    networks:
      - netElk
    depends_on:
      - elasticsearch
  logstash:
    image: logstash:7.4.0
    container_name: logstash
    ports:
      - 5044:5044
    restart: unless-stopped
    networks:
      - netElk

networks:
  netElk:
    driver: bridge

Running okay:

CONTAINER ID   IMAGE                 COMMAND                  CREATED              STATUS              PORTS                              NAMES
b4f5cf7f2c24   kibana:7.4.0          "/usr/local/bin/dumb…"   15 seconds ago       Up 14 seconds       0.0.0.0:5601->5601/tcp             kibana
1045f9f0f40c   elasticsearch:7.4.0   "/usr/local/bin/dock…"   16 seconds ago       Up 14 seconds       0.0.0.0:9200->9200/tcp, 9300/tcp   elasticsearch
6bbb5a1ce07d   logstash:7.4.0        "/usr/local/bin/dock…"   About a minute ago   Up About a minute   0.0.0.0:5044->5044/tcp, 9600/tcp   logstash

The default configuration used for Elasticsearch in these Docker images lives under /usr/share/elasticsearch/config/:

[root@1045f9f0f40c elasticsearch]# find / -name elasticsearch.yml

The Logstash Docker image is in the same situation (its config lives under /usr/share/logstash/config/).

And for Kibana it is the same (under /usr/share/kibana/config/).
And I can see Elasticsearch is running okay:

[2022-05-07 19:18:27] {afuscoar@afuscoar} (~)$ -> curl --insecure -X GET http://localhost:9200/_cat/health
1651943909 17:18:29 docker-cluster green 1 1 3 3 0 0 0 0 - 100.0%

So I think the problem is in the configuration files you're mounting as volumes.

  • Check that your config files are right.
  • Check your volume mappings.
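One way to verify the mappings is to ask Docker what is actually mounted into the container. A sketch, using the `logstash` container name from this thread (adjust to your own):

```shell
# List host path -> container path for every mount of the container
docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' logstash
```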

As you can see, my docker-compose stack is running OK:

CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                              NAMES
a4785c523425        logstash:7.4.0        "/usr/local/bin/do..."   4 seconds ago       Up 3 seconds        0.0.0.0:5044->5044/tcp, 9600/tcp   logstash
2ee119b2cf2b        kibana:7.4.0          "/usr/local/bin/du..."   4 seconds ago       Up 3 seconds        0.0.0.0:5601->5601/tcp             kibana
964e20d72349        elasticsearch:7.4.0   "/usr/local/bin/do..."   4 seconds ago       Up 3 seconds        9300/tcp, 0.0.0.0:9200->9200/tcp   elasticsearch

And the connection to Kibana is working well:

[root@tcta4ws00b00000 elk]# curl -I
HTTP/1.1 302 Found
location: /app/kibana
kbn-name: kibana
kbn-xpack-sig: 438c83a9ba6d6306923674be83f5f336
content-type: text/html; charset=utf-8
cache-control: no-cache
content-length: 0
Date: Sat, 07 May 2022 17:36:15 GMT
Connection: keep-alive

The connection to Elasticsearch is also working:

[root@tcta4ws00b00000 elk]#  curl -I
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 541

The Logstash container is running well:

[root@tcta4ws00b00000 config]# docker inspect a4785c523425
[
    {
        "Id": "a4785c5234251a08bee2e8f39510a4636ce95620bd873e43c3d3ea45bfaf7a60",
        "Created": "2022-05-07T17:33:03.924495121Z",
        "Path": "/usr/local/bin/docker-entrypoint",
        "Args": [],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 49301,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2022-05-07T17:33:04.528785571Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:c2c1ac6b995bef47f6adb7966b814be41cd06f1ba5b18c9fb586fd360fc94837",
        "ResolvConfPath": "/var/lib/docker/containers/a4785c5234251a08bee2e8f39510a4636ce95620bd873e43c3d3ea45bfaf7a60/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/a4785c5234251a08bee2e8f39510a4636ce95620bd873e43c3d3ea45bfaf7a60/hostname",
        "HostsPath": "/var/lib/docker/containers/a4785c5234251a08bee2e8f39510a4636ce95620bd873e43c3d3ea45bfaf7a60/hosts",
        "LogPath": "",
        "Name": "/logstash",

But the connection to Logstash is not working:

[root@tcta4ws00b00000 elk]#  curl -I
curl: (56) Recv failure: Connection reset by peer
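When a port accepts the TCP connection but then resets it, the first thing worth checking is the container's own log output. A quick sketch, using the `logstash` container name from this thread:

```shell
# Tail the container logs to see why Logstash refuses or drops connections
docker logs --tail 100 logstash
```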

I figured out that it doesn't create the file logstash.yml under the directory /usr/share/logstash/config/, even though I mentioned it in the volumes, as you can see:

  logstash:
    restart: unless-stopped
    container_name: logstash
    image: logstash:7.4.0
    volumes:
      - /opt/application/devops-elk/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - /opt/application/devops-elk/elk/logstash/pipeline:/usr/share/logstash/pipeline
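Note that with a bind mount like this, Docker does not create the host file for you: if logstash.yml does not exist on the host, Docker creates an empty directory at that path instead, and Logstash fails to start. A sketch of creating the file first, assuming the host path from the compose snippet above (the `http.host` line mirrors the default shipped in the official image):

```shell
# Host-side config directory used in the volume mapping (assumption: adjust to yours)
CONF_DIR=/opt/application/devops-elk/elk/logstash/config
mkdir -p "$CONF_DIR"
# Minimal logstash.yml: bind the Logstash HTTP API to all interfaces
echo 'http.host: "0.0.0.0"' > "$CONF_DIR/logstash.yml"
cat "$CONF_DIR/logstash.yml"
```

Creating the file before running `docker-compose up` ensures the mount is a file, not a directory.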

Do I have to create the file (logstash.yml) myself? If yes, can you tell me what configuration I should use in that file?

In the Docker images there are already configuration files set up:

  • Where the pipelines are stored:

bash-4.2$ cat config/pipelines.yml 
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"
  • And the simple default pipeline configuration:

bash-4.2$ cat pipeline/logstash.conf 
input {
  beats {
    port => 5044
  }
}

output {
  stdout {
    codec => rubydebug
  }
}
Of course, it just has one input (usually data is received from a Beats app such as Filebeat or Metricbeat), and the output only defines the codec used for the output data.

If you want to send the information to an Elasticsearch instance, you also need to add another plugin in the output section.

For example:

# I have the input:
input {
  # In this case instead of using beats, I'm processing the changes of the
  # /var/log/kern.log file located in the same machine
  file {
    path => "/var/log/kern.log"
    start_position => "beginning"
  }
}

# I use some filtering to parse the data of this file
# and have more representative data in the fields
# of the index
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
  }
}

# Now in the output
output {
  # I should add the elasticsearch instance where I want to send the output
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "kernlogs"
  }
  stdout { codec => rubydebug }
}

So in your case you should have an output pointing at the Elasticsearch instance. Here I have it locally, but since you added the containers to the same network and they resolve each other by name, you can use the container hostname instead.
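For instance, with the compose file above, where the Elasticsearch container_name is `elasticsearch` and both services share the netElk network, the output section could be sketched as (index name `kernlogs` taken from the example, adjust to yours):

```
output {
  elasticsearch {
    # The container name resolves on the shared Docker network
    hosts => ["elasticsearch:9200"]
    index => "kernlogs"
  }
}
```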
