How to collect information using a remote script and visualise it in a dedicated Kibana visualization?

Dear Users, I started using the ELK stack a few days ago, so I'm not an expert.
After a very basic ELK stack deploy, everything seems to be running as expected.
Now, I would like to:

  • execute a script on a remote server, collect the related output and put the result in a "quota usage per user" visualization (pie chart).

In particular, I need to execute a third-party command that provides the quota usage (per user) information. This is the output provided by the command:

Quota Summary for user01

Max Limit: 307200.00 GB
Current Usage: 148083.81 GB
Status: Quota Ok

This command should be executed twice a day, at 08:00 a.m. and 08:00 p.m.

Do you think this can be done using the ELK stack?
Sorry if it is a stupid question.

Thank you,
Mauro

You could use Logstash and the exec plugin for that.
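
For example, something along these lines as a starting point (a sketch; the command and index name are placeholders, and the exec input's schedule option takes cron syntax, which would cover a twice-a-day run):

input {
  exec {
    # Run the remote quota script at 08:00 and 20:00 every day
    command  => "ssh remote-server /path/to/quota_script.sh"
    schedule => "0 8,20 * * *"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "quota_usage"
  }
}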


Hi Warkolm,

thank you for your answer.
Since I'm a newbie, could you please point me to a guide for the procedure you mentioned, or some basic examples?
I would like to understand how Logstash manages the output of the script and how it translates into Elasticsearch records... I'm still studying this world... Sorry...

If you're new to Logstash then check out the getting started parts of the documentation - https://www.elastic.co/guide/en/logstash/current/index.html

And then the exec input section from there.


Hi Warkolm,

I read the documents you provided, thank you.
So, I installed the logstash-input-exec plugin and created a bash script that, when executed, produces the following output:

user=sysm02 limit=0 used=127.49
user=sysm01 limit=0 used=1315.69
user=sysm03 limit=0 used=17.42
user=sysm04 limit=0 used=20.53
user=sysm05 limit=0 used=16.50
user=sp1 limit=307200.00 used=151069.87

and I added an "exec" input to the existing Logstash config file (please take a look at the code below):

input {

  rabbitmq {
    host  => "localhost"
    queue => "audit_messages"
  }

  exec {
    command  => "ssh irs02 icheckquota"
    interval => 30
  }

}

filter {

  if "_jsonparsefailure" in [tags] {
    mutate {
      gsub => [
        "message", "[\\]", "",
        "message", ".*__BEGIN_JSON__", "",
        "message", "__END_JSON__", ""
      ]
    }
    mutate { remove_tag => [ "tags", "_jsonparsefailure" ] }
    json { source => "message" }
  }

  # Parse the JSON message
  json {
    source       => "message"
    remove_field => ["message"]
  }

  # Replace @timestamp with the timestamp stored in time_stamp
  date {
    match => [ "time_stamp", "UNIX_MS" ]
  }

  # Convert select fields to integer
  mutate {
    convert => {
      "int"       => "integer"
      "int__2"    => "integer"
      "int__3"    => "integer"
      "file_size" => "integer"
    }
  }

}

output {
  # Write the output to Elasticsearch under the irods_audit index.
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "irods_audit"
  }
}

I restarted Logstash, but it stopped working for both inputs and returned this error:

[2020-12-15T10:19:31,706][WARN ][logstash.filters.json ] Parsed JSON object/hash requires a target configuration option {:source=>"message", :raw=>""}

Could you please help me to fix this error?
Thank you in advance,
Mauro

Dear Warkolm,

please ignore my last message; I solved the problem using tags inside the Logstash config file.
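
In case it helps other readers, the fix was roughly along these lines (a sketch; the tag names are illustrative): tag each input, then make the JSON parsing conditional so the exec output never reaches the json filter.

input {
  rabbitmq {
    host  => "localhost"
    queue => "audit_messages"
    tags  => ["audit"]
  }
  exec {
    command  => "ssh irs02 icheckquota"
    interval => 30
    tags     => ["quota"]
  }
}

filter {
  # Only messages from the rabbitmq input are parsed as JSON
  if "audit" in [tags] {
    json {
      source       => "message"
      remove_field => ["message"]
    }
  }
}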
Now I have only one issue left to solve, the last one I hope.

I changed the output of the script to simplify the next operations.
Now, the output is:

sysm02 0 127.49
sysm01 0 1315.69
sysm03 0 17.42
sysm04 0 20.53
sysm05 0 16.50
sp1 307200.00 151069.87

How do I set up the dissect section properly to map everything correctly?

This is the current "dissect" filter for the exec pipeline:

dissect { mapping => { "message" => "%{+ts} %{+ts} %{irods_user} %{quota} %{used_quota}" } }

But the output is not the one I expected.

How do I create a separate entry for each user's statistics?

Thank you,
Mauro

I solved the issue using the "line" codec and modifying the dissect mapping.
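
For anyone landing here later, the relevant parts ended up roughly like this (a sketch; the schedule, tag name, and float conversions are assumptions, and the field names follow my earlier dissect attempt):

input {
  exec {
    command  => "ssh irs02 icheckquota"
    # Run twice a day, at 08:00 and 20:00
    schedule => "0 8,20 * * *"
    # The line codec turns each line of the script output into its own event,
    # so every user becomes a separate document
    codec => line
    tags  => ["quota"]
  }
}

filter {
  if "quota" in [tags] {
    # One field per column: user, quota limit, current usage
    dissect { mapping => { "message" => "%{irods_user} %{quota} %{used_quota}" } }
    # Convert the numeric fields so Kibana can aggregate them
    mutate {
      convert => {
        "quota"      => "float"
        "used_quota" => "float"
      }
    }
  }
}

With one document per user, a pie chart split by irods_user on used_quota gives the "quota usage per user" visualization.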

Thank you,
Mauro
