How to handle LOGS and METRICS for an ElectronJs app using Elastic APM

Dear All,

How can we handle collecting logs and metrics with APM for Node.js in an ElectronJS project?

Can you add a module to support the Electron framework?

@michaelsogos APM is generally intended for web services. Can you expand a little on what you're hoping to achieve? What kind of metrics?

Logs are not something that the APM agents collect in general. For log shipping, I would recommend you take a look at Filebeat: https://www.elastic.co/guide/en/beats/filebeat/current/index.html.

@axw Essentially we found the APM metrics very useful in a Node.js app (powered by Electron), and thanks to custom transactions and spans we reached a very good level of metric analysis.
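For context, the kind of custom transactions and spans we use look roughly like this. This is a minimal sketch: the helper takes the started agent object as a parameter (the agent itself would be started with `require('elastic-apm-node').start(...)`; the names `traceOperation` and `'custom'` are our own, not part of the agent API).

```javascript
// Sketch: timing an operation with a custom transaction and a child span.
// `apm` is the started elastic-apm-node agent (or anything with the same
// startTransaction/startSpan/end shape), injected so the helper is testable.
function traceOperation(apm, name, fn) {
  const transaction = apm.startTransaction(name, 'custom')
  const span = transaction ? transaction.startSpan(name + '-work', 'app') : null
  try {
    return fn()
  } finally {
    if (span) span.end()
    if (transaction) transaction.end()
  }
}

// Real usage (assumption: agent started at app boot):
//   const apm = require('elastic-apm-node').start({ serviceName: 'my-electron-app' })
//   traceOperation(apm, 'load-catalog', () => loadCatalog())
```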

The point is that you can think of our ElectronJS app as embedded IoT software (it is not, this is just an example), so we cannot install any software other than our own, because of hardware limits or company policies.

Fortunately the APM agent is a JS library fully integrated into our software, which is why adopting it is easy and compatible with our requirements.

What is missing:

  1. ElectronJS changes the way the front end and back end communicate: it is not over HTTP, but based on IPC. So it would be very nice to also measure the renderer process (CPU, RAM, etc.) and to trace IPC requests (as currently happens for HTTP)

  2. Of course, a system as amazing as Elastic, born to handle log events, should in our opinion simplify the way to persist things there (no matter whether they are metrics of any kind, logs, or something else), so that we can take full advantage of its power

  3. As you said, it is pretty clear that APM (and not only APM; Heartbeat, for example) is intended for web services, but why stop at such a narrow range of possibilities? Did you know that we measure all our IoT telemetry with Elastic (and not only us :slight_smile:), as well as our services (not web services), our containers, etc.? Why not an app as well?
    Our suggestion is to care about measuring pretty much anything, no matter the source (IoT, mobile apps, desktop apps, web endpoints, pods and containers, etc.)
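To illustrate point 1, tracing IPC requests like HTTP requests can already be approximated today with custom transactions. The sketch below wraps an `ipcMain.handle` handler so each IPC call becomes an APM transaction; `apm` is the started elastic-apm-node agent, and the channel/handler names are hypothetical examples.

```javascript
// Sketch: making each Electron IPC request an APM transaction, analogous
// to the agent's automatic HTTP instrumentation. The wrapper is plain JS
// and takes the started agent object as a parameter.
function instrumentIpcHandler(apm, channel, handler) {
  return async (event, ...args) => {
    const transaction = apm.startTransaction(`IPC ${channel}`, 'ipc')
    try {
      return await handler(event, ...args)
    } finally {
      if (transaction) transaction.end()
    }
  }
}

// In the Electron main process (assumption: Electron 7+ with ipcMain.handle):
//   const { ipcMain } = require('electron')
//   ipcMain.handle('get-user', instrumentIpcHandler(apm, 'get-user', getUser))
```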

As an overview, I can tell you that we are a software factory company for RETAIL CHAIN SERVICES.
To our customers we supply many different services:

  1. A cloud solution to handle back-office (Microservice Web App)
  2. Desktop and Mobile application for front-office (here we are on this discussion)
  3. IoT systems (from receipt printers to proximity marketing systems, etc.)
  4. Integration services (again a cloud landing point to exchange data in many different formats and with many different protocols, e.g. FTP/HTTP/TCP, CSV/JSON/stream)
  5. Business intelligence solutions (again a cloud system to access a web app for data visualization, or a DWH for data analysis)

And in the middle we have many different pieces of software to make everything above exist. :slight_smile:

Our final goal is to log everything with Elastic and Kibana (some of it is still in InfluxDB and Grafana, but we want to move).
Our first step is to integrate APM and logs into each single application (desktop, mobile, web), for now.


While I haven't tried it myself, the Node.js agent should work for Electron if you use custom transactions, which it also sounds like you have done successfully :slight_smile:

If I were to build this, I'd probably look into automating it by building a custom tracer for the Electron IPC protocol that understands its router. You can read more about applying custom patches here:

https://www.elastic.co/guide/en/apm/agent/nodejs/current/agent-api.html#apm-add-patch

Regarding logs, I assume you don't write them to disk first, but want to stream them directly to Elasticsearch instead?

@wa7son

Yes, for logging we are thinking of a setup where we can stream logs to ELK directly (that's why we think an agent should do that, in order to hide the "complexity"), or as an alternative we can use a logger library which supplies different transports (to file, to HTTP, etc.)

We are going with the second scenario, but it isn't clear what the DATA SCHEMA should be to make the Logs dashboard work. We know that Discover and Visualize are greatly customizable tools, but we are also very impressed by the APM, Uptime, and Logs specific dashboards, and we don't want to reinvent the wheel :slight_smile:
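For reference, the schema the Logs UI expects is the Elastic Common Schema (ECS). A minimal sketch of an ECS-shaped log document looks like this; the field names (`@timestamp`, `message`, `log.level`, `service.name`) are real ECS fields, while the service name itself is a placeholder.

```javascript
// Sketch: build a log document shaped after the Elastic Common Schema (ECS),
// so that indexed logs show up nicely in the Logs dashboard.
function toEcsLog(level, message, serviceName) {
  return {
    '@timestamp': new Date().toISOString(),
    message: message,
    log: { level: level },
    service: { name: serviceName }
  }
}

// Example document:
//   toEcsLog('info', 'app started', 'my-electron-app')
```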

We've been discussing whether the APM agents should take care of logging as well, but so far we have come to the conclusion that it's not worth duplicating the functionality.

The agent would become more complicated as a result and would require more system resources to run. And remember, it runs in-process, so this is normally not advisable.

Having a daemon in another process that reads the logs from disk is almost always a better idea. I know this isn't possible in an Electron app in the same way as it is on a web server, but these have been our thoughts so far.

We'll keep revisiting these ideas from time to time, but for now, it's unfortunately not something we can support directly in the agent.

Regarding the log data schema, I'm not sure either. I'll ask another colleague who knows more about this than I do to jump in.

Which version of the Elastic Stack are you using, btw? If you have already upgraded to 6.7 or newer, you have access to the new Logs & Infrastructure solution, which IMO is a much nicer way to view logs.

@wa7son

Great, then please get in touch.

In the meantime we found our own way.
With Winston or log4js we will send logs to a Logstash instance which listens on HTTP, then save them to Elasticsearch, and thanks to Discover we will read the logs. It works for us.
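A minimal Logstash pipeline matching that setup might look like the sketch below; the port, hosts, and index name are placeholders to adapt to your environment.

```
input {
  http {
    port => 8080
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "electron-logs-%{+YYYY.MM.dd}"
  }
}
```

On the application side, Winston's built-in HTTP transport (`winston.transports.Http`) can then be pointed at that port.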

Thanks for your time.

Using Logstash in between Elasticsearch and your app is probably not a bad idea, actually. This will allow you to scale the intake from all your running Electron apps more easily.

We are on 7.0; we keep ELK updated every 3 months.

This topic was automatically closed 20 days after the last reply. New replies are no longer allowed.