How to send logs from a server to a VM on my local machine where ELK is running?

I have Filebeat running on a server; it collects logs and ships them to Logstash. But I want to try sending the logs to Logstash on my machine: not to localhost, but to a VM running on my Windows machine. So it will basically be a different machine. How can that be done?

The server that Filebeat is running on will need to have an IP-networking route to the port on a host that is running Logstash with a pipeline input configured to listen on the Beats protocol. Once this is true, Filebeat can be configured with a hostname-port pair (or an IP-port pair) and the logs will be received by the relevant Logstash pipeline.
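
For illustration, a minimal sketch of the two ends might look like this (the host name/IP and port are placeholders, to be replaced with whatever is reachable in your network):

# Logstash pipeline on the receiving host: listen for Beats connections
input {
  beats {
    port => 5044
  }
}

# filebeat.yml on the sending server: point at that host and port
output.logstash:
  hosts: ["<logstash-host-or-ip>:5044"]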

  • where is the VM that runs Logstash running (e.g., local machine, public cloud, server room etc.)?
  • does the VM have a public IP? if not, does the VM host have a public IP and can you control its network configuration?

Hi @yaauie,

How do I set up this IP-networking route and port?

How will this be done?

The VM is running on my local machine (Ubuntu).

I think the VM has a public IP that starts with 10.0.x.x. How do I check its public IP? I can control the network configuration of both my host and the VM.

This is my logstash.conf:

#listening on this port
input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][log_type] == "access" {
    grok {
      break_on_match => false
      match => {
        "message" => [
          "%{DATESTAMP:timestamp}%{SPACE}%{NONNEGINT:code}%{GREEDYDATA}%{LOGLEVEL}%{SPACE}%{NONNEGINT:anum}%{SPACE}%{GREEDYDATA:logmessage}",
          "(?<activityId>(?<=activity\s\()\d+)"
        ]
      }
    }
  } else if [fields][log_type] == "errors" {
    grok {
      break_on_match => false
      match => {
        "message" => [
          "%{DATESTAMP:timestamp}%{SPACE}%{NONNEGINT:code}%{GREEDYDATA}%{LOGLEVEL}%{SPACE}%{NONNEGINT:anum}%{SPACE}%{GREEDYDATA:logmessage}",
          "(?<statusCode>(?<=StatusCode=\")\d+)"
        ]
      }
    }
  } else if [fields][log_type] == "dispatch" {
    grok {
      break_on_match => false
      match => {
        "message" => [
          "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}\[%{DATA:threadId}]%{SPACE}%{LOGLEVEL:logLevel}%{SPACE}%{JAVACLASS:javaClass}%{SPACE}-%{SPACE}(\[%{NONNEGINT:incidentId}])?%{GREEDYDATA:message}",
          "(?<scheduledActionList>(?<=scheduledActionList\s\[)[\d,\s]+)"
        ]
      }
    }
    if "" in [scheduledActionList] {
      mutate {
        gsub => ["scheduledActionList", " ", ""]
        split => {"scheduledActionList" => ","}
      }
    }
  }
}



output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    ilm_enabled => false
    index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}

And this is the filebeat.yml on the server:


#=========================== Filebeat inputs =============================

filebeat.inputs:



- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
   #- C:/Users/Administrator/Downloads/filebeat-7.5.1-windows-x86_64/filebeat-7.5.1-windows-x86_64/access.2020-01-09.log
    - C:\Program Files (x86)\ESQ SST\DataEdgev1.2\ngta-distribution-web-3.2.0.0-bin\logs
    #- c:\programdata\elasticsearch\logs\*


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false




#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:


#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.xx.xx:5044"]

 

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

@yaauie, I have also set up port forwarding so Kibana can be seen at my IP on port 5601, with host IP 192.168.x.x, guest IP 10.0.x.x, and both ports set to 5601.

10.x.x.x IPs are Private IPs, which means that the host is not necessarily reachable by hosts outside the private network.
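
If you want to see which addresses the VM actually has, something like the following on the Ubuntu VM will list them (both commands are standard on Ubuntu):

# show each interface with its IPv4 address
ip -4 addr show
# or just print the addresses
hostname -I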

Does the host on which Filebeat is running have a route to the VM running on your local machine (or even just to your local machine)?

If not, you will likely need to rely on port forwarding here too (e.g., forwarding a port on the Filebeat server's loopback interface to your VM by means of an SSH tunnel).

@yaauie, the 10.x.x.x IP is my Ubuntu VM's, and 192.168.x.x is my host's.

Filebeat is running on a server with IP 192.168.x.x, and it doesn't have a route to the VM on my local machine yet. How can that be done?

Here, would the Filebeat server's IP be the host IP, and my VM's be the guest IP?

Is there a way I can send logs from Filebeat on the server to my local machine? I want to take this step first, and then I will try to send from my machine to the VM. If this doesn't work as a different machine, then I will know my Logstash conf has issues.

If you have credentials to log into the host on which Filebeat is running, you can SSH tunnel from your local machine (or the VM), and use that tunnel to bind a port on the remote machine back to your local machine. This is called Remote Forwarding and can be done using the -R flag:

ssh -R '5044:localhost:5044' username@filebeat_host

The above uses the port:host:hostport form of the argument for -R, where:

  • port: the port on the server that should listen for new connections
  • host: the host that should receive connections that were sent to the listening port on the server (localhost here is your local machine)
  • hostport: the port on the host that is receiving connections

So, if you have Logstash running on a VM with IP 10.0.0.7, I believe you could do something like the following on your VM host to forward inbound requests directly to it:

ssh -R '5044:10.0.0.7:5044' username@filebeat_host

By default, SSH will bind the forwarded port only to the loopback interface, which means that to send to the port that has been forwarded, you would configure Filebeat to send to 127.0.0.1:5044.
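
In other words, the Logstash output block of filebeat.yml on the server would point at the local end of the tunnel, something like:

output.logstash:
  # port 5044 on the server's loopback is forwarded through the SSH tunnel
  hosts: ["127.0.0.1:5044"]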


I do have credentials to log into the server on which Filebeat is installed. So in the command:

username would be username and filebeat_host would be the 192.168.x.x server IP, right?

So why 127.0.0.1:5044 in filebeat.yml? And this would be changed in the output block of filebeat.yml, right? And would this mean that when Filebeat on the server runs, it would collect logs and send them to Logstash on the VM?

After trying the command, I get logged into the server and this terminal comes up:
[screenshot of the SSH terminal session]

I'm not sure if this means that the remote forwarding has started?
And I have changed the Filebeat hosts to 127.0.0.1:5044. Should I run Filebeat and see if Logstash takes the logs in?

username would be username and filebeat_host would be the 192.168.x.x server IP, right?

yes

So why 127.0.0.1:5044 in filebeat.yml?

Because once you have opened a Remote Forwarding tunnel, port 5044 on the Filebeat server's loopback interface will be forwarded through the tunnel.

@yaauie, sorry, I realize we shouldn't send screenshots, but I wasn't sure how to explain this terminal otherwise!

After the remote forwarding, I tried starting Filebeat on the server but it isn't starting up. How do I know the tunnel is working?

I tried starting Filebeat on the server but it isn't starting up.

You will need to look at Filebeat's logs to figure out why it is not starting.


ssh isn't good at logging the remote forwarding; it just kind of happens silently. It looks like you can also pass -N (for No remote command), which will cause the ssh command to just hang there as long as the remote tunnel is open.

     -N      Do not execute a remote command.  This is useful for just
             forwarding ports.

And -v will make the connection log verbosely (in my case, it includes a message Remote connections from LOCALHOST:5055 forwarded to local address 10.0.0.7:5044 and a relevant success message for the below examples).


It's easy to get things a bit confused when all the port numbers are the same. In the below example, we will end up with Logstash listening for Beats on port 5044 on a VM that is accessible to our localhost via IP 10.0.0.7 (substitute your VM's IP as necessary).
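
On that VM, the Logstash pipeline input is the same Beats listener shown earlier in the thread:

input {
  beats {
    port => 5044
  }
}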

Then, on our local machine, we create the Remote Forwarding tunnel:

ssh -v -N -R '5055:10.0.0.7:5044' username@192.168.1.1

This would:

  • connect to 192.168.1.1 (e.g., the Filebeat server's IP)
  • cause the server to bind to port 5055 on its own loopback interface, forwarding inbound connections back through our tunnel
  • cause our local machine to forward connections it receives through the tunnel to 10.0.0.7's port 5044

Finally, we would configure Filebeat on the server to point to port 5055 on its own loopback, thereby using the tunnel we just created.
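
In filebeat.yml on the server, that would look something like this (5055 being the port bound by the -R tunnel in the example above):

output.logstash:
  # the server-side end of the SSH tunnel created above
  hosts: ["127.0.0.1:5055"]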

@yaauie Please let me know how to revert the SSH tunneling. My Logstash was listening before and now it has stopped. I want to reverse the forwarding! Please let me know.

When you close the SSH session that is doing the tunneling, the tunnel will also close.
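
For example, a tunnel left running in the foreground (with -N) stops when you press Ctrl-C or close that terminal; if you backgrounded it with -f, you can find and stop it with something like:

# kill the ssh process that owns the forwarded port (ports taken from the example above)
pkill -f '5055:10.0.0.7:5044'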

@yaauie I tried this step and it almost works, in the sense that the tunnel is there and collects logs, but I think they didn't get shipped, since no new indices were made.

In the meantime, I did figure out how to send logs from my machine to the VM. I just added a port forwarding rule for Logstash, and through that, logs were shipped from Filebeat on my machine to Logstash in my VM. And now the logs are shipped from the server to the VM as well.
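
For reference, if the VM runs under VirtualBox (an assumption; other hypervisors have equivalent settings), such a NAT port-forwarding rule for Logstash could be added with something like:

# hypothetical example; "ubuntu-vm" and the ports are placeholders
# the same rule can also be added via Settings > Network > Port Forwarding in the GUI
VBoxManage modifyvm "ubuntu-vm" --natpf1 "logstash,tcp,,5044,,5044"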

But I want to understand one thing: Filebeat is running on the server, and its output goes to the pipeline.conf on my Ubuntu machine and takes that pipeline's configuration, right?
I am concerned because the index name in my Logstash configuration file, called pipelines.conf, is in [fields][log_type]-date format, whereas the index being created right now is "logstash", which means ILM is in action. How do I fix it? Which Logstash configuration file needs to be changed?

ILM doesn't work alongside index patterns that reference field values, because the ILM configuration needs to know about specific index aliases at startup and cannot know all of the possible values that the pipeline will use to populate the index pattern. So if you specify index and rely on the value of fields (e.g., using %{...} format placeholders), you should set ilm_enabled => false.
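
In other words, an Elasticsearch output shaped like the one in the pipeline above is what you want, roughly:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # dynamic index names built from event fields require ILM to be disabled
    ilm_enabled => false
    index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
  }
}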

If you're still seeing new data flowing into logstash-* indices, this indicates that you have an Elasticsearch output plugin that is using the default value for index.


@yaauie, thanks for replying! I do have ILM set to false as you mentioned. Now my index name is stuck at "%{[fields][tag]}". I know what mistake I made, which is that in the Logstash output block, I should have written

index => "%{[fields][tags]}"

But now, when I try to delete it in Kibana's Dev Tools with

DELETE /%{[fields][tag]}

the index isn't deleting. I tried DELETE /%25{[fields][tag]} as well, and that isn't working either.
I do notice that when I select this DELETE command, the grey selection area, which is usually a single line, also takes in the commands above it.

Where does this exist?

My index moved from "logstash" to this new one after I made sure my pipeline.conf was the correct configuration being read.
Thanks!

Fixed this. Deleted it under Index Management.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.