Setting up Filebeat and ELK with Rails + Docker

Hi everybody! Kudos to the @elastic team! It's impressive what this piece of software can do! :smile:

I've been going through the docs and googling, to no avail. I've hit Stack Overflow and GitHub, studied the repos, and tried different image combinations... I'm stuck.

I'm currently on Rails 5.1.5 and Docker 17.12 CE, and I'm using Docker Compose file version 3. I just want to ship the logs from a small Rails app to Elasticsearch 6.2. I don't care how pretty they look; they just need to be there.

This is everything that I've done:

  • I've read the setup documentation for ELK + Beats (Filebeat).
  • I've tried sending the Rails logs to STDOUT, then using Docker's GELF logging driver to ship them to the Logstash URL.
  • I've tried loading the images from docker.elastic.co and passing my own .yml files.
  • I've tried the sebp/elk image, but got lost on how to get my logs into it through Filebeat or Logstash. I pulled a Filebeat image, but didn't know how to make it work.
  • I've checked the first two pages of Google results (yes, I went as far as the second page) on how to set this up with Rails. Unfortunately, all of them are outdated, and the configuration doesn't transfer well to this newer version.
  • Created a StackOverflow question.

I'm able to load Kibana, but Kibana can't read any of the logs (I don't know if it's even receiving them), so I can't create an index.

I've changed the code so many times that I don't have a list of all the different attempts.

Here's what's in application.rb (I used to have a version that sent the logs directly, but it never worked):

# Requires the lograge gem (plus logstash-event for the Logstash formatter);
# lograge also has to be switched on with config.lograge.enabled = true.
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.lograge.logger = ActiveSupport::Logger.new(STDOUT)

Here's my full docker-compose.yml (sorry for the verbosity and the comments):

# WARNING!! Indentation is important! Be careful how you indent.
# All paths that point to the actual disk (not the Docker image) 
# are relative to the location of the file! 
version: '3'
services:
       
  # elasticsearch:
  #   build:
  #     context: elk/elasticsearch/
  #   volumes:
  #     - ./elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
  #   ports:
  #     - "9200:9200"
  #     - "9300:9300"
  #   environment:
  #     ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  #   networks:
  #     - elk

  # logstash:
  #   build:
  #     context: elk/logstash/
  #   volumes:
  #     - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
  #     - ./elk/logstash/pipeline:/usr/share/logstash/pipeline:ro
  #   ports:
  #     - "5000:5000"
  #   environment:
  #     LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  #   networks:
  #     - elk
  #   depends_on:
  #     - elasticsearch

  # kibana:
  #   build:
  #     context: elk/kibana/
  #   volumes:
  #     - ./elk/kibana/config/:/usr/share/kibana/config:ro
  #   ports:
  #     - "5601:5601"
  #   networks:
  #     - elk
  #   depends_on:
  #     - elasticsearch
  
  elk:
    image: sebp/elk
    ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
    volumes:
    - elasticsearch:/usr/share/elasticsearch/data
  filebeat:
    image: docker.elastic.co/beats/filebeat:6.2.3
    depends_on: 
    - elk
    volumes:
    # - ./elk/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
    - /var/run/docker.sock:/tmp/docker.sock
    environment:
    - LOGSTASH_HOST=localhost
    - LOGSTASH_PORT=5044
  app:
    # These are configuration options that are applied at build time.
    # In this case, the Dockerfile will be read from the project root.
    build: .
    # Tells Docker not to start this service before the listed
    # ones have been started.
    depends_on:
      - db
      # - elk
      - filebeat
    # We create environment variables.
    environment:
      RAILS_ENV: development
      LOGSTASH_HOST: localhost
      # Same as the one in MYSQL above:
      # LOCAL
      SECRET_MYSQL_HOST: 'db'
      SECRET_MYSQL_DATABASE: 'dev'
      SECRET_MYSQL_USERNAME: 'ruby'
      SECRET_MYSQL_PASSWORD: 'userPassword'
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3001 -b '0.0.0.0'"
    # This will allow you to hook yourself to the debugger
    # How you use it:  
    # docker-compose up -d  //-d flag is important!
    # docker attach cprintservicehub_app_1 (If that doesn't work, try with the container's id).
    stdin_open: true
    tty: true
    # logging:
    #   driver: gelf
    #   options:
    #     gelf-address: 'udp://localhost:12201'
    links:
      - db
    volumes:
      - "./:/var/www/cprint"
    ports:
      - "3001:3001"
    expose:
      - "3001"
# Volumes are the recommended storage mechanism of Docker. 
volumes:
  db-data:
    driver: local
  elasticsearch:
    driver: local

networks:
    elk:
      driver: bridge
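
(One thing I'm second-guessing, as an aside: inside the Compose network each container's localhost is the container itself, so anything that has to reach Logstash inside the elk container probably needs the service name instead. A minimal sketch against the file above, with the rest of the app service unchanged:

  app:
    environment:
      # Guess on my part: address the elk container by its Compose service name,
      # since containers on the same Compose network resolve each other that way.
      LOGSTASH_HOST: elk

The same host value would then also apply wherever the Rails side reads LOGSTASH_HOST or configures its Logstash connection.)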

Do you have any ideas? Do I need to elaborate more?

Thank you! :slight_smile:

This is the other method I've tried for sending to Logstash:

# These use the lograge and logstash-logger gems.
config.log_level = :debug
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.logger = LogStashLogger.new(type: :tcp, host: 'localhost', port: 5050)

But I get an ADDR INVALID error; apparently it can't find the host.

Coming back with an update: I've managed to make some progress. I've found out that there is no problem with the image or with how I've set it up. I was able to send a test log through the "nc" command in Ubuntu for Windows 10, and Logstash grabbed it and delivered it successfully to Elasticsearch.

Kibana was finally able to process it. The problem now lies in the Ruby on Rails configuration, which I'll ask about on Stack Overflow.

@superjose thanks for your comments :slight_smile:

If you want to use Filebeat, take into account that it reads logs from disk, so you need to use the json-file logging driver in Docker; Filebeat can then be configured to read the logs from there and send them to Logstash or Elasticsearch.

Also, as you mentioned, your application has to be configured to write to stdout so that its logs are handled by the Docker daemon.

You can find in the documentation how to configure Filebeat outputs and how to configure prospectors to read from files; a rough example follows.
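
Something along these lines, as an untested sketch (it assumes you restore the commented-out filebeat.yml volume mount in your compose file and also mount /var/lib/docker/containers read-only into the filebeat container; the "elk" host and the 5044 port come from your compose file, so adjust as needed):

filebeat.prospectors:
  - type: log
    paths:
      # The json-file driver writes one <container-id>-json.log per container here.
      - /var/lib/docker/containers/*/*.log
    # Each line is a JSON object like {"log": "...", "stream": "...", "time": "..."};
    # lift its keys to the top level and use "log" as the message.
    json.keys_under_root: true
    json.message_key: log

output.logstash:
  # "elk" is the compose service name of the sebp/elk container, which listens for Beats on 5044.
  hosts: ["elk:5044"]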

For Docker-specific instructions, take a look at this blog post about enriching logs with Docker metadata, and at the documentation about Docker autodiscover.
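
For the autodiscover route, the idea is roughly this (again a sketch, not verified against your setup; the "rails" image match is just a placeholder for whatever your app image is called):

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        # Placeholder condition: only collect from containers whose image name contains "rails".
        - condition:
            contains:
              docker.container.image: rails
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"

output.logstash:
  hosts: ["elk:5044"]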

Can't believe I didn't publish this text. It had been sitting here as a draft for more than a week :frowning:

I finally solved it:

[Update: April 5th, 2018]
Anyways...
@jsoriano: Thank you very much for the clarification; it's going to prove useful. So far LogStashLogger is doing a good job! I'll take Filebeat into consideration in the future, in case we need it. It's a small application so far :slight_smile:
