Is it possible to format Logstash output in a way that it shows in Uptime?

I was thinking that, by using mutate in my Logstash pipeline in a way that "mimics" the Heartbeat output, I could get the Uptime app to show the result of my pseudo-heartbeat (generated by Logstash rewriting the content of the output).

Once that index is set in the Uptime settings UI, where you can choose which index the monitors receive data from, it would carry the same fields as the Heartbeat-generated output (monitor.status, url, id, name and so on). This way, would I be able to see my "made up" monitor in Uptime?

The context that leads to this question is:

I have to monitor an API that returns an array of a lot of devices, each one with a unique ID, a name and a "connected" status. By changing the lines in mutate I could easily arrange it so that the unique ID stands in for the monitor ID, the name for the monitor name, the connected flag maps to an "up" or "down" status, and so on.

Of course, nothing stops me from using a Dashboard or Canvas, since I already have the info at hand, but I was wondering whether, by making the necessary changes, I could have it in Uptime, or if there is anything essential to Uptime that only responds to Heartbeat/Synthetics.

Not an everyday case, but I would be glad if anyone could help me set this straight.

Best regards!

You probably can do that.

You just need to make your output look like it is coming from Heartbeat and save it in a custom index that the Uptime app will read.

I'm not sure how this works in version 8.X, but in version 7 you could add more indices in the Uptime app.

So, you just need to do what you are already planning: create a custom index that uses the same field naming convention and the same mappings as the Uptime indices, and it should populate the Uptime app in Kibana.
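For example, on the Logstash side the output could write into an index whose name matches the pattern Uptime reads. This is only a minimal sketch, assuming a local cluster; the hosts and index name below are placeholders, not values from this thread:

    output {
        elasticsearch {
            # Placeholder cluster address, replace with your own.
            hosts => ["https://localhost:9200"]
            # Write into an index matching the heartbeat-* pattern that Uptime
            # queries by default, or add this index name under Uptime -> Settings.
            index => "heartbeat-mimic-7.17.3"
        }
    }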


I tried it out and then got stuck on this error...

[2022-09-21T17:09:14.585-03:00][ERROR][http] ResponseError: search_phase_execution_exception: [script_exception] Reason: runtime error; [script_exception] Reason: runtime error
    at KibanaTransport.request (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\node_modules\@elastic\transport\lib\Transport.js:455:27)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at KibanaTransport.request (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\src\core\server\elasticsearch\client\create_transport.js:63:16)
    at Client.SearchApi [as search] (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\node_modules\@elastic\elasticsearch\lib\api\api\search.js:60:12)
    at Object.search (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\lib.js:51:15)
    at statusCount (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\requests\get_snapshot_counts.js:43:7)
    at Object.getSnapshotCount (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\requests\get_snapshot_counts.js:28:17)
    at handler (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\rest_api\snapshot\get_snapshot_count.js:40:12)
    at handler (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\rest_api\uptime_route_wrapper.js:57:17)
    at Router.handle (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\src\core\server\http\router\router.js:163:30)
    at handler (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\src\core\server\http\router\router.js:124:50)
    at exports.Manager.execute (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\node_modules\@hapi\hapi\lib\toolkit.js:60:28)
    at Object.internals.handler (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\node_modules\@hapi\hapi\lib\handler.js:46:20)
    at exports.execute (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\node_modules\@hapi\hapi\lib\handler.js:31:20)
    at Request._lifecycle (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\node_modules\@hapi\hapi\lib\request.js:371:32)
[2022-09-21T17:09:14.613-03:00][ERROR][http] TypeError: Cannot read properties of undefined (reading 'id')
    at summaryPingsToSummary (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\requests\search\refine_potential_matches.js:92:32)
    at fullyMatchingIds (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\requests\search\refine_potential_matches.js:77:22)
    at refinePotentialMatches (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\requests\search\refine_potential_matches.js:31:16)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at MonitorSummaryIterator.fetchChunk [as chunkFetcher] (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\requests\search\fetch_chunk.js:34:20)
    at MonitorSummaryIterator.attemptBufferMore (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\requests\search\monitor_summary_iterator.js:147:21)
    at MonitorSummaryIterator.bufferNext (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\requests\search\monitor_summary_iterator.js:121:22)
    at MonitorSummaryIterator.next (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\requests\search\monitor_summary_iterator.js:55:5)
    at MonitorSummaryIterator.nextPage (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\requests\search\monitor_summary_iterator.js:71:23)
    at Object.getMonitorStates (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\lib\requests\get_monitor_states.js:44:16)
    at handler (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\rest_api\monitors\monitor_list.js:53:22)
    at handler (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\x-pack\plugins\uptime\server\rest_api\uptime_route_wrapper.js:57:17)
    at Router.handle (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\src\core\server\http\router\router.js:163:30)
    at handler (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\src\core\server\http\router\router.js:124:50)
    at exports.Manager.execute (C:\Users\samuel.soares\Desktop\elk\elk8.0\kibana-8.0.0\node_modules\@hapi\hapi\lib\toolkit.js:60:28)

Here is what the mimic of the monitor looks like:

    mutate {
        add_field => {"agent.ephemeral_id" => "xxxxx"}
        add_field => {"agent.hostname" => "xxxxx"}
        add_field => {"agent.id" => "xxxxxx"}
        add_field => {"agent.name" => "xxxxxx"}
        add_field => {"agent.type" => "heartbeat"}
        add_field => {"agent.version" => "7.17.3"}
        add_field => {"ecs.version" => "1.12.0"}
        add_field => {"event.dataset" => "http"}
        add_field => {"http.response.body.bytes" => 0}
        add_field => {"http.response.body.hash" => "-"}
        add_field => {"http.response.headers.Cache-Control" => "-"}
        add_field => {"http.response.headers.Content-Length" => 0}
        add_field => {"http.response.headers.Date" => "%{[headers][date]}"}
        add_field => {"http.response.headers.Kbn-License-Sig" => "-"}
        add_field => {"http.response.headers.Kbn-Name" => "xxxxxxxx"}
        add_field => {"http.response.headers.Location" => "/spaces/ender"}
        add_field => {"http.response.headers.Referrer-Policy" => "no-referrer-when-downgrade"}
        add_field => {"http.response.headers.X-Content-Type-Options" => "nosniff"}
        add_field => {"http.response.mime_type" => "text/plain; charset=utf-8"}
        add_field => {"monitor.id" => "%{[body][udid]}"}
        add_field => {"monitor.name" => "%{[body][deviceName]}"}
        add_field => {"monitor.status" => "%{[body][connected]}"}
        add_field => {"http.url" => "https://xxxxxx.com"}
        add_field => {"http.response.status_code" => "-"}
        add_field => {"monitor.type" => "http"}
        add_field => {"url.domain" => "localhost"}
        add_field => {"url.full" => "https://xxxxxx/rest/deviceContent"}
        add_field => {"url.scheme" => "http"}
        add_field => {"summary.up" => 0}
        add_field => {"summary.down" => 0}
    

        }
    
    if "true" in [monitor.status]{
        mutate {
            replace => {"monitor.status" => "up"}
            replace => {"http.response.status_code" => 200}
            replace => {"summary.up" => 1}
        }
    } else {
        mutate {
            replace => {"monitor.status" => "down"}
            replace => {"summary.down" => 1}
        }
    }

So far everything goes to the cluster and I can see everything just fine in Discover, but once I set the index in Uptime and try to view it, it sends back the error above.

The field names and mappings need to be exactly the same.

Your mutates are wrong: in Logstash, using agent.id in a mutate filter means that it will create a field with a literal dot in the name.

You will have { "agent.id": "value" } and not { "agent": { "id": "value" } }.

You need to use the format [field][nested], for example [agent][id], for every field in your mutate.
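As a minimal sketch of what the corrected filter could look like, keeping the field names from the config above (only a few of the fields are shown):

    mutate {
        # Nested bracket syntax creates { "agent": { "type": "heartbeat" } }
        # instead of a field literally named "agent.type".
        add_field => { "[agent][type]" => "heartbeat" }
        add_field => { "[monitor][id]" => "%{[body][udid]}" }
        add_field => { "[monitor][name]" => "%{[body][deviceName]}" }
        add_field => { "[monitor][status]" => "%{[body][connected]}" }
        add_field => { "[monitor][type]" => "http" }
        add_field => { "[summary][up]" => 0 }
        add_field => { "[summary][down]" => 0 }
    }

    # The conditional also needs the nested reference instead of [monitor.status].
    if [monitor][status] == "true" {
        mutate {
            replace => { "[monitor][status]" => "up" }
            replace => { "[summary][up]" => 1 }
        }
    } else {
        mutate {
            replace => { "[monitor][status]" => "down" }
            replace => { "[summary][down]" => 1 }
        }
    }

Note that for the Uptime queries to work, the mappings of the target index still need to match the ones Heartbeat uses, for example summary.up and summary.down as numbers rather than strings.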


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.