Can I collect and parse TCP response messages from a server on a given port? If so, kindly help me with the config file and steps.

input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
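To exercise the `tcp` input above, you can push a syslog-formatted line at port 5000 and watch it appear in the `rubydebug` output. A minimal Python sketch, where the program name `myapp`, the PID, and the message are made-up values for illustration, and Logstash must actually be listening before you uncomment the send:

```python
import socket
import time

def syslog_line(program, pid, message):
    """Build a line matching the grok pattern above:
    SYSLOGTIMESTAMP hostname program[pid]: message"""
    ts = time.strftime("%b %d %H:%M:%S")  # e.g. "Jan 02 15:04:05"
    host = socket.gethostname()
    return "%s %s %s[%d]: %s" % (ts, host, program, pid, message)

def send_tcp(line, host="localhost", port=5000):
    # Logstash's tcp input splits events on newlines, so terminate the line.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall((line + "\n").encode("utf-8"))

line = syslog_line("myapp", 1234, "hello from the test client")
print(line)
# send_tcp(line)   # uncomment once Logstash is listening on port 5000
```

If the grok pattern matches, the event in the `rubydebug` output should carry `syslog_program`, `syslog_pid`, and `syslog_message` fields.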

To collect and parse TCP/UDP packets, install Packetbeat on the host where you want to capture the packets, then point it at Logstash.

Refer to this link for more details: https://www.elastic.co/products/beats/packetbeat

Thanks for your thoughts.

One more clarification:

The log messages I get from the server will be in binary format. Will Packetbeat parse the data into a human-readable format?

Are you trying to collect and parse network packets, or raw logs from the server?

Can you please elaborate on your server and the binary format?

Packetbeat is an application that acts as a traffic listener: it listens to actual network traffic (request and response, inbound and outbound) on any interface and converts it into plain-text messages.

I want to connect to a server (IP address) through a port, send a request to the server, and then collect and store the response, which is in word and byte format depending on the request, in an ELK stack, and visualize the logs in Kibana.
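Since the response is binary, it helps to see exactly which bytes come back before worrying about shipping them to the ELK stack. Here is a minimal Python sketch of such a request/response exchange; the local echo-style server is only a stand-in for the real server (whose IP, port, and protocol I don't know), and the `PING` payload and response bytes are invented:

```python
import socket
import threading

def serve_once(srv):
    """Stand-in for the real server: read one request, answer with raw bytes."""
    conn, _ = srv.accept()
    with conn:
        conn.recv(1024)                  # the request
        conn.sendall(b"\x01\x02OK\xff")  # a binary-ish response

# Stand-in server on an ephemeral local port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

def request(host, port, payload):
    """Send payload, then collect the whole response as raw bytes."""
    chunks = []
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(payload)
        s.shutdown(socket.SHUT_WR)       # signal we are done sending
        while True:
            data = s.recv(4096)
            if not data:                 # server closed: response complete
                break
            chunks.append(data)
    return b"".join(chunks)

resp = request("127.0.0.1", srv.getsockname()[1], b"PING")
print(resp.hex())  # inspect the raw bytes before deciding how to index them
```

Looking at the hex dump of a real exchange (as with tcpdump or Wireshark) tells you whether the payload is text in disguise or genuinely binary, which determines how it can be made human-readable downstream.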

As per your response, my understanding is that you want to capture requests and responses from the server. In that case, please go ahead and install Packetbeat and capture the packets to visualize them in Kibana.

Before that, it would be great to capture your request and response with tcpdump or Wireshark first, to see what your packets look like.

Many thanks for sharing your domain knowledge.

I need to send a request to an IP address through a port number, and the response needs to be collected and visualized using Kibana. Can you help with the Packetbeat config file setup?

PS C:\Elastic Search\Packetbeat\packetbeat-6.3.2-windows-x86_64> .\packetbeat.exe setup --dashboards
Exiting: error initializing publisher: missing required field accessing 'output.elasticsearch.hosts'
Kindly tell me where I have gone wrong.

Attached is a snippet of the yml file:
packetbeat.interfaces.device: 1

#================================== Flows =====================================

# Set enabled: false or comment out all options to disable flows reporting.
packetbeat.flows:
  # Set network flow timeout. Flow is killed if no packet is received before being
  # timed out.
  timeout: 30s

  # Configure reporting period. If set to -1, only killed flows will be reported
  period: 10s

#========================== Transaction protocols =============================

packetbeat.protocols:
- type: icmp
  # Enable ICMPv4 and ICMPv6 monitoring. Default: false
  enabled: true

- type: amqp
  # Configure the ports where to listen for AMQP traffic. You can disable
  # the AMQP protocol by commenting out the list of ports.
  ports: [5672]

- type: cassandra
  # Cassandra port for traffic monitoring.
  ports: [9042]

- type: dns
  # Configure the ports where to listen for DNS traffic. You can disable
  # the DNS protocol by commenting out the list of ports.
  ports: [53]

  # include_authorities controls whether or not the dns.authorities field
  # (authority resource records) is added to messages.
  include_authorities: true

  # include_additionals controls whether or not the dns.additionals field
  # (additional resource records) is added to messages.
  include_additionals: true

- type: http
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  ports: [80, 8080, 8000, 5000, 8002]

- type: memcache
  # Configure the ports where to listen for memcache traffic. You can disable
  # the Memcache protocol by commenting out the list of ports.
  ports: [11211]

- type: mysql
  # Configure the ports where to listen for MySQL traffic. You can disable
  # the MySQL protocol by commenting out the list of ports.
  ports: [3306]

- type: pgsql
  # Configure the ports where to listen for Pgsql traffic. You can disable
  # the Pgsql protocol by commenting out the list of ports.
  ports: [5432]

- type: redis
  # Configure the ports where to listen for Redis traffic. You can disable
  # the Redis protocol by commenting out the list of ports.
  ports: [6379]

- type: thrift
  # Configure the ports where to listen for Thrift-RPC traffic. You can disable
  # the Thrift-RPC protocol by commenting out the list of ports.
  ports: [9090]

- type: mongodb
  # Configure the ports where to listen for MongoDB traffic. You can disable
  # the MongoDB protocol by commenting out the list of ports.
  ports: [27017]

- type: nfs
  # Configure the ports where to listen for NFS traffic. You can disable
  # the NFS protocol by commenting out the list of ports.
  ports: [2049]

- type: tls
  # Configure the ports where to listen for TLS traffic. You can disable
  # the TLS protocol by commenting out the list of ports.
  ports: [443]

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.codec: best_compression
  index.number_of_shards: 1
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the -setup CLI flag or the setup command.
setup.dashboards.enabled: true
setup.dashboards.index: packetbeat-*
#setup.dashboards.beat: packetbeat

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using packetbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the output.elasticsearch.hosts and
# setup.kibana.host options.
# You can find the cloud.id in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the output.elasticsearch.username and
# output.elasticsearch.password settings. The format is <user>:<pass>.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  hosts: ["localhost:9200"]
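The "missing required field accessing 'output.elasticsearch.hosts'" error means Packetbeat cannot find a `hosts` value nested under `output.elasticsearch` in the yml it actually loaded. A common cause, assuming the file on disk resembles the snippet, is that the `hosts:` line is commented out or not indented beneath `output.elasticsearch:`, so YAML treats it as a separate top-level key. A minimal sketch of the relevant section, with Elasticsearch on `localhost:9200` as an assumption:

```yaml
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # hosts must be indented so it nests under output.elasticsearch;
  # a flush-left "hosts:" line is an unrelated top-level key.
  hosts: ["localhost:9200"]
```

It is also worth confirming that `packetbeat.exe` is reading the yml you edited (e.g. by passing `-c` with an explicit path), since the setup command loads the config before touching the dashboards.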

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.