APM Flask - Treating all transactions as Errors

Kibana version: 8.9.1

Elasticsearch version: 8.9.1

APM Server version: 8.9.1

APM Agent language and version: Python (Flask) 6.18.0

We are trying to build a proof of concept using Elastic APM and are running into a situation where all of our logs are being treated as errors. Our aim is to have a centralised dashboard for viewing all of our logs and seeing a transaction flow between services.

Although Elastic suggests sending only WARNING and ERROR logs, we are just testing the solution and are sending all of our transactions/logs to Elastic APM via the agent.

Even a simple log message like the one below shows up as an error in our dashboard.

        logger.info(
            f"DBSelect, connection created, others - {others}"
        )

Screenshot of error:

Edit: there is also nothing in the Exception Stack Trace, only the Log Stack Trace.

Or is this tool designed to support only error logging, and is it therefore treating everything as an error?

I can also see the field error.log.level with the value info.

Any ideas? Thanks in advance :slight_smile:


This is a side effect of the fact that log sending in Flask was written long ago as a proof of concept, and it sends all logs as messages, which then show up in the UI as errors.

We recommend using Filebeat to ingest your logs once you have them formatted as documented here.
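To illustrate the shape Filebeat expects, here is a minimal, hand-rolled sketch of an ECS-style JSON line formatter (stdlib only, and only a small subset of fields; for real use, prefer Elastic's ecs-logging package, which implements the full spec):

    import json
    import logging


    class MinimalEcsFormatter(logging.Formatter):
        """Toy formatter emitting a small subset of ECS fields as JSON.

        Illustration only; the ecs-logging package covers the full field set.
        """

        def format(self, record):
            doc = {
                "@timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
                "log.level": record.levelname.lower(),
                "message": record.getMessage(),
                "log": {"logger": record.name},
                "ecs": {"version": "1.6.0"},
            }
            return json.dumps(doc)


    if __name__ == "__main__":
        # Point the handler at a file for Filebeat to tail; stderr shown here.
        handler = logging.StreamHandler()
        handler.setFormatter(MinimalEcsFormatter())
        logger = logging.getLogger("python_poc_api")
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
        logger.info("DBSelect, connection created")

Each log record then lands as one JSON document per line, which Filebeat can ship without any extra parsing.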

I've been considering removing the log sending in Flask, but it's a breaking change, so it must wait for a major version of the agent.


Hi,

Thank you for the response. Is this behaviour specific to Flask? Do other supported frameworks like FastAPI or Django exhibit different behaviour?

Are we able to get all the same information we get from the Flask agent inside of APM using Filebeat? Or will this be limited to traces and spans? Will there be any way to get stack traces, the infrastructure map, onward connections to DBs, etc.?

Is there a specific data stream we need to send the logs to via Filebeat/Logstash to get them in our APM dashboard?

Thanks in advance

The other frameworks do not support log sending. Only Flask does.

Are we able to get all the same information we get from the Flask agent inside of APM using Filebeat? Or will this be limited to traces and spans? Will there be any way to get stack traces, the infrastructure map, onward connections to DBs, etc.?

Most of those features are not related to log collection/correlation and will continue to work.

Is there a specific data stream we need to send the logs to via Filebeat/Logstash to get them in our APM dashboard?

No; as long as the data stream is included in the data view or Log Indices configuration under the Logs settings in Kibana, the logs will be correlated with your APM data.

Note that you must also have your logs formatted for ingestion. The easiest way is to use our log_ecs_reformatting=override configuration; however, if you have a complex logging configuration you may want to apply the formatter yourself.
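As a sketch, in a Flask app that setting can be supplied through the agent's config dict (the service name and server URL below are placeholders, and this option also needs the ecs-logging package installed):

    from flask import Flask
    from elasticapm.contrib.flask import ElasticAPM

    app = Flask(__name__)
    app.config["ELASTIC_APM"] = {
        "SERVICE_NAME": "python_poc_api",       # placeholder
        "SERVER_URL": "http://localhost:8200",  # placeholder
        # Replaces the formatter on existing log handlers with the ECS one:
        "LOG_ECS_REFORMATTING": "override",
    }
    apm = ElasticAPM(app)

With "override", the agent swaps the formatter on your existing handlers so the output is ECS JSON without you touching your logging setup.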


Thank you so much for your replies :slight_smile:

After following what you posted, we have managed to get transactions into APM using the ECS log formatter and Filebeat. However, we have now come full circle and can't seem to get our ERRORS reported as ERRORS.

Are there specific fields that APM requires in order to classify a transaction as an ERROR?

This is an example of one of our Error messages that appear in APM:

{
  "@timestamp": "2023-09-27T10:55:28.855Z",
  "log.level": "error",
  "message": "Unexpected 500 error happened - ",
  "ecs": {
    "version": "1.6.0"
  },
  "error": {
    "message": "",
    "stack_trace": "  File \"/var/app/venv/staging-LQM1lest/lib64/python3.11/site-packages/flask/app.py\", line 1484, in full_dispatch_request\n    rv = self.dispatch_request()\n         ^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/var/app/venv/staging-LQM1lest/lib64/python3.11/site-packages/flask/app.py\", line 1469, in dispatch_request\n    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/var/app/venv/staging-LQM1lest/lib64/python3.11/site-packages/flask/views.py\", line 109, in view\n    return current_app.ensure_sync(self.dispatch_request)(**kwargs)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/var/app/venv/staging-LQM1lest/lib64/python3.11/site-packages/flask/views.py\", line 190, in dispatch_request\n    return current_app.ensure_sync(meth)(**kwargs)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/var/app/current/api/api_error.py\", line 27, in get\n    raise ZeroDivisionError\n",
    "type": "ZeroDivisionError"
  },
  "log": {
    "logger": "python_poc_api",
    "origin": {
      "file": {
        "line": 371,
        "name": "__init__.py"
      },
      "function": "internal_exception"
    },
    "original": "Unexpected 500 error happened - "
  },
  "log_context": "LogContext(service_host='python-poc-api-10.0.0.1', service_id='python_poc_api', service_application='python_poc_api', user_guid='', service_function='', transaction_id='', log_prefix='', log_message='', log_level=20, timestamp='')",
  "process": {
    "name": "MainProcess",
    "pid": 3844,
    "thread": {
      "id": 139772232603200,
      "name": "ThreadPoolExecutor-0_0"
    }
  },
  "service": {
    "environment": "dev",
    "name": "python_poc_api"
  },
  "trace": {
    "id": "905c8e027d1dd87400e79125e2a899e5"
  },
  "transaction": {
    "id": "474af35450149f1a"
  }
}

Are we missing any required fields? error.culprit perhaps? Is this something the plugin/handler should handle for us?

Or should we go back to using the agent for shipping ERROR-and-above logs and continue to use Filebeat for INFO? I would be apprehensive about doing that, though, since you mentioned you may remove the Flask log sending.

Thanks in Advance,

Jason!

Errors are a special case: we won't surface error logs as errors, because errors are a specific type of document in the APM data stream.

The APM Agent automatically sends error documents when it catches an unhandled exception. However, it doesn't know about exceptions that your app successfully handles; this is why we provide the Client.capture_exception() function:

import elasticapm

try:
    do_some_work()  # placeholder for your own code
except Exception:
    # handle the exception as usual, then report it to APM
    elasticapm.get_client().capture_exception()

This will capture the stacktrace and other contextual information about the active exception and send it to the APM Server as an error document.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.