Carbon Black Cloud: Failed to Unmarshal JSON Message

I've recently added the Carbon Black Cloud integration to one of my agents using the "Collect Carbon Black Cloud logs via API using CEL [Beta]" option. However, all the documents that result from it contain no useful information, just an error.message field filled in with:

failed eval: ERROR: <input>:13:23: failed to unmarshal JSON message: invalid character '<' looking for beginning of value
 |     }).do_request().as(resp, bytes(resp.Body).decode_json().as(body, {
 | ......................^
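From what I can tell, a JSON decoder complaining about a leading '<' usually means the response body was HTML (e.g. an error or redirect page) rather than JSON. A minimal Python sketch of the same failure mode (the wording of Go's encoding/json error above differs, but the cause is identical):

```python
import json

# A hypothetical HTML error page returned where JSON was expected,
# e.g. after a bad URL, a redirect, or an auth failure.
html_body = "<html><body>404 Not Found</body></html>"

try:
    parsed = json.loads(html_body)
except json.JSONDecodeError as err:
    # The decoder gives up on the very first character: '<'
    error_pos = err.pos
    print("failed at position", error_pos, "character", html_body[error_pos])
```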

I've used Postman to test the API request found in the agent's pre-config.yaml with the supplied API ID, key, and Org key. When configured according to the Carbon Black Find Alert reference, the results that come back from Postman are valid.

Snippet from pre-config.yaml with the org key, API key, and API ID removed:

          resource.url: https://defense-prod05.conferdeploy.net/api/alerts/v7/orgs/<ORG_ID>/alerts/_search
          state:
            api_key: <API_KEY>/<API_ID>
            initial_interval: 24h
            want_more: false
          tags:
            - forwarded
            - carbon_black_cloud-alert

So as far as I can tell it's not an API key/permission issue, and there's nothing else in the Elastic Agent policy configuration for me to change that would relate to JSON parsing, if that's the issue. Any advice on how to solve this?

Hi @promenade8894, welcome to the community.

I assume you've looked at these docs in detail as well?

And you made sure the legacy HTTPJSON input is off?

Thanks for the welcome!

I've checked out those docs before, and I don't think I've missed anything. I can confirm that I'm only using the CEL beta, and I've never used the HTTPJSON or AWS S3/SQS options.

When I read that the HTTPJSON option was being removed in about a month, I figured there was no reason to bother setting it up.

Added integrations

I pinged internally, let's see if they come back with anything...

Can you turn on Preserve Original Event and see what the original event looks like?

Unfortunately that doesn't seem to have done anything.


The preserve_original_event tag was added to the tags field, but no event.original field is present within the new documents.

Here's the JSON of the document after turning on preserve original event, which doesn't seem to have anything sensitive.

{
  "_index": ".ds-logs-carbon_black_cloud.alert_v7-default-2024.06.13-000001",
  "_id": "kicGS5ABzfseZv7tiOgV",
  "_version": 1,
  "_score": 0,
  "_source": {
    "input": {
      "type": "cel"
    },
    "agent": {
      "name": "siemserver",
      "id": "4960fd4a-73b7-45f5-93c1-063599ca45f3",
      "ephemeral_id": "397caf15-8b24-4091-993e-efae08600322",
      "type": "filebeat",
      "version": "8.14.1"
    },
    "@timestamp": "2024-06-24T16:13:47.659Z",
    "ecs": {
      "version": "8.11.0"
    },
    "carbon_black_cloud": {
      "alert": {
        "category": "THREAT"
      }
    },
    "data_stream": {
      "namespace": "default",
      "type": "logs",
      "dataset": "carbon_black_cloud.alert_v7"
    },
    "elastic_agent": {
      "id": "4960fd4a-73b7-45f5-93c1-063599ca45f3",
      "version": "8.14.1",
      "snapshot": false
    },
    "error": {
      "message": "failed eval: ERROR: <input>:13:23: failed to unmarshal JSON message: invalid character '<' looking for beginning of value\n |     }).do_request().as(resp, bytes(resp.Body).decode_json().as(body, {\n | ......................^"
    },
    "event": {
      "agent_id_status": "verified",
      "ingested": "2024-06-24T16:13:57Z",
      "kind": "alert",
      "dataset": "carbon_black_cloud.alert_v7"
    },
    "tags": [
      "preserve_original_event",
      "forwarded",
      "carbon_black_cloud-alert"
    ]
  },
  "fields": {
    "carbon_black_cloud.alert.category": [
      "THREAT"
    ],
    "elastic_agent.version": [
      "8.14.1"
    ],
    "elastic_agent.id": [
      "4960fd4a-73b7-45f5-93c1-063599ca45f3"
    ],
    "data_stream.namespace": [
      "default"
    ],
    "input.type": [
      "cel"
    ],
    "data_stream.type": [
      "logs"
    ],
    "tags": [
      "preserve_original_event",
      "forwarded",
      "carbon_black_cloud-alert"
    ],
    "agent.type": [
      "filebeat"
    ],
    "event.ingested": [
      "2024-06-24T16:13:57.000Z"
    ],
    "@timestamp": [
      "2024-06-24T16:13:47.659Z"
    ],
    "agent.id": [
      "4960fd4a-73b7-45f5-93c1-063599ca45f3"
    ],
    "event.module": [
      "carbon_black_cloud"
    ],
    "agent.name.text": [
      "siemserver"
    ],
    "ecs.version": [
      "8.11.0"
    ],
    "error.message": [
      "failed eval: ERROR: <input>:13:23: failed to unmarshal JSON message: invalid character '<' looking for beginning of value\n |     }).do_request().as(resp, bytes(resp.Body).decode_json().as(body, {\n | ......................^"
    ],
    "data_stream.dataset": [
      "carbon_black_cloud.alert_v7"
    ],
    "agent.ephemeral_id": [
      "397caf15-8b24-4091-993e-efae08600322"
    ],
    "agent.name": [
      "siemserver"
    ],
    "agent.version": [
      "8.14.1"
    ],
    "elastic_agent.snapshot": [
      false
    ],
    "event.agent_id_status": [
      "verified"
    ],
    "event.kind": [
      "alert"
    ],
    "event.dataset": [
      "carbon_black_cloud.alert_v7"
    ]
  }
}

I can't imagine it's an issue with the Elastic Agent itself: I've got other integrations like Cisco IOS and O365 running on it that are ingesting and parsing fine.

I asked internally, let's see if we get something back... please be patient... if you have a support contract you can open a ticket...

@promenade8894

From Internal Engineering..

I just checked it with our live account and everything's working fine. The customer could be mixing up the API ID and secret key in the config, which could lead to an auth error and in turn cause this issue.
Working screenshot:

I don't think that's quite the issue. The secret key and ID are passed in the format
<key>/<id>
and while I can't post the real key and ID here, I can at least say that they resemble
WADSFDGSDFSDFDFSDFSDFEWS/DFSDFDSF
where the portion before the slash (the key) is far longer than the portion after it (the ID).
When tested in this format in Postman, I get results such as the snippet below:

            "type": "CB_ANALYTICS",
            "backend_timestamp": "2024-06-12T02:22:56.276Z",
            "user_update_timestamp": null,
            "backend_update_timestamp": "2024-06-12T02:22:57.305Z",
            "detection_timestamp": "2024-06-12T02:21:35.947Z",
            "first_event_timestamp": "2024-06-12T02:18:35.883Z",
            "last_event_timestamp": "2024-06-12T02:18:41.744Z",
            "severity": 7,
            "reason": "The application dell.techhub.exe injected code into another process (23771888-7c70-436c-a9db-a78daa1f914a) via hollowing. A Terminate Policy Action was applied.",
            "reason_code": "T_REPLACE",

This format is the same as in the pre-config.yaml.
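In other words, the X-Auth-Token value is just the two parts joined by a slash. A quick sketch using the scrambled placeholder values from above (not real credentials):

```python
# Placeholder values from the post above, not real credentials.
api_secret_key = "WADSFDGSDFSDFDFSDFSDFEWS"
api_id = "DFSDFDSF"

# Carbon Black Cloud expects the token as "<secret_key>/<api_id>".
x_auth_token = f"{api_secret_key}/{api_id}"
print(x_auth_token)  # WADSFDGSDFSDFDFSDFSDFEWS/DFSDFDSF
```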

Edit: I swapped the API key and ID just in case, and received a different error.message:

[
  failed eval: ERROR: <input>:10:19: no such key: results
   | }).do_request().as(resp, bytes(resp.Body).decode_json().as(body, {
   | ..................^,
  Processor json in pipeline logs-carbon_black_cloud.asset_vulnerability_summary-2.2.0 failed with message: field [original] not present as part of path [event.original]
]

@promenade8894 Hang in there, it looks like another user is seeing something similar, but it may take a day or so to look into / figure out with internal resources...


OK, we are still not seeing it on our end.

Can you run a couple of Postman requests to the API endpoint and post a sanitized response, or DM me with the responses... full responses, not snippets... The more you can give us, the more we can help. The engineer is gone for the day, but if you can provide them I can pass them on to him.

There is something weird here: you enabled Preserve Original Event, and the document you shared has the tag preserve_original_event, but it does not have the event.original field.

Do you have any custom ingest pipeline on this integration?

I've left everything as default to avoid unnecessary complications. The only thing I've done is add the Org ID, API ID, and API key.
AlertPipeline
AuditPipeline
SummaryPipeline

Alright, so there was an issue with the initial hostname. It seems I had accidentally put an extra / at the end, turning https://defense-prod05.conferdeploy.net into https://defense-prod05.conferdeploy.net/, which led to an extra / in the resource.url in pre-config.yaml.
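A rough sketch of how the stray slash propagates if the hostname and path are joined verbatim (hypothetical reconstruction, not the integration's actual code):

```python
hostname = "https://defense-prod05.conferdeploy.net/"  # accidental trailing slash
path = "/api/alerts/v7/orgs/<ORG_ID>/alerts/_search"

url = hostname + path
print(url)  # note the double slash after the hostname

# Stripping the trailing slash before joining avoids the duplicate.
fixed = hostname.rstrip("/") + path
print(fixed)
```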

However my issue isn't solved.

Instead of the unmarshal issue I now receive the following for the carbon_black_cloud.audit datastream:

failed eval: ERROR: <input>:6:19: no such key: notifications
 | }).do_request().as(resp, bytes(resp.Body).decode_json().as(body, {
 | ..................^

and for the data stream carbon_black_cloud.asset_vulnerability_summary

[
  failed eval: ERROR: <input>:10:19: no such key: results
   | }).do_request().as(resp, bytes(resp.Body).decode_json().as(body, {
   | ..................^,
  Processor json in pipeline logs-carbon_black_cloud.asset_vulnerability_summary-2.2.0 failed with message: field [original] not present as part of path [event.original]
]

I've dm'd @stephenb the responses if that's still needed.

@promenade8894

Yes, it's very odd that event.original is missing. That is the key field: it is the container for all the content. The second thing the ingest pipeline does is rename the message field to event.original and then try to parse event.original as JSON; if it is missing, everything fails.

Are you passing the agent data directly to Elasticsearch, or are you passing it through Logstash or something else?

Have you tried deleting and re-applying the integration? I got your DM and will take a look...

Also, what version of the Stack and Agent are you running (not just the integration)?

Ingest Pipeline


  {
    "set": {
      "field": "ecs.version",
      "value": "8.11.0"
    }
  },
  {
    "rename": {
      "field": "message",
      "target_field": "event.original",
      "ignore_missing": true,
      "if": "ctx.event?.original == null"
    }
  },
  {
    "json": {
      "field": "event.original",
      "target_field": "json",
      "ignore_failure": true
    }
  },
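To make the dependency concrete, here's a rough Python imitation of what those two processors do (illustrative only, not the actual ingest node implementation): if no message field arrives, event.original is never created, and the JSON parse has nothing to work on.

```python
import json

def rename_then_parse(doc):
    # "rename" processor: message -> event.original, if not already set
    if doc.get("event", {}).get("original") is None and "message" in doc:
        doc.setdefault("event", {})["original"] = doc.pop("message")
    # "json" processor: parse event.original into the json field
    original = doc.get("event", {}).get("original")
    if original is not None:
        try:
            doc["json"] = json.loads(original)
        except json.JSONDecodeError:
            pass  # mirrors ignore_failure: true
    return doc

# Normal case: message carries the raw API response.
ok = rename_then_parse({"message": '{"severity": 7}'})
print(ok["json"])

# The case in this thread: no message at all, so event.original
# never appears and downstream processors that need it fail.
bad = rename_then_parse({"error": {"message": "failed eval: ..."}})
print("event" in bad)  # False
```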

@promenade8894

If you are brave and want to figure this out quickly, simply replace the ingest pipeline with a blank pipeline (we can fix it and put it back later; you can delete and re-install the assets).

  • Set the blank pipeline
  • Run the integration
  • Share what the resulting document looks like in Elastic... it will be what the incoming doc looks like
  • Then we can figure out what is going on

This replaces the ingest pipeline and we should see what the event looks like when it comes in.

PUT _ingest/pipeline/logs-carbon_black_cloud.alert_v7-2.2.0
{
  "processors": [
    {
      "set": {
        "field": "foo",
        "value": "bar"
      }
    }
  ]
}

Hi @promenade8894, I took a look at the request you shared internally to @stephenb and in that request you have set

"time_range": {
        "range": "-2w"
    },

whereas in the integration, initial_interval is set to 24h. initial_interval is the time_range here. Maybe there were no events in the past 24h, which is why it was initially returning errors.

In the latest conversation I can see that the error is occurring for the asset vulnerability and audit data streams. These use different APIs, and they need some configuration on the Carbon Black console to get populated.

You can try the following cURL requests in Postman to check if you are getting any responses.

Asset vulnerability: (POST)

curl --location --globoff --request POST '{{hostname}}/vulnerability/assessment/api/v1/orgs/{{org_key}}/devices/vulnerabilities/summary/_search' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Token: xxxxxx' \
--data '{
    "start":0,
    "rows":1000
}'

Audit: (GET)

curl --location --globoff --request GET '{{hostname}}/integrationServices/v3/auditlogs' \
--header 'X-Auth-Token: ••••••' \
--data ''

For Audit Logs configuration, please check this doc:

I've double-checked and it seems we do not have Asset Vulnerability enabled within CBC. I've now disabled this part of the integration; sorry for the confusion.

Edit: The exact response was

{"messages":["Vulnerability Assessment is not enabled for org <ORG_KEY>"],"errorCode":"103"}

For the audit endpoint I receive:

{"success":true,"message":"Success","notifications":[{"eventId":"<REMOVED>","eventTime":1719321305804,"clientIp":"<REMOVED>","loginName":"<REMOVED>","orgName":"<REMOVED>","requestUrl":"","description":"Connector (App) <REMOVED> created session successfully","flagged":false,"verbose":false}]}

And to reduce the chance of insufficient API privileges getting in the way, I've enabled read access for everything. I'll be sure to trim those down once we're done here.

I've set the blank pipeline and DM'd you the output.

Both the Stack version and the Agent version are 8.14.1.