Hi, I am producing a valid JSON document with `Get-ADUser username | ConvertTo-Json | Out-File file.json`.
I am then sending that file to Elasticsearch using Filebeat. Elasticsearch is ingesting it one document per line, but because the JSON document spans multiple lines, that's obviously not what I need.
How do I get the entire JSON object into Elasticsearch as a single document, rather than a document per line?
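One common approach (a sketch, not the only option): emit one JSON object per line (NDJSON, e.g. via `ConvertTo-Json -Compress`) and tell Filebeat to decode each line as JSON. A minimal `filebeat.yml` fragment, assuming the log input's JSON options and an illustrative file path:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - C:\exports\adusers.json   # illustrative path
    # Decode each line as a JSON object and lift its keys
    # to the top level of the event:
    json.keys_under_root: true
    json.add_error_key: true
```

With this, each line becomes one event, so each user object becomes one Elasticsearch document. (Filebeat also has `multiline.*` settings for stitching pretty-printed JSON back together, but one-object-per-line is simpler and more robust.)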
As a follow-on question: if I run `Get-ADUser -Filter *`, output that to a JSON file, and send it in, does the same rule apply?
Rather than reading JSON from a file with Filebeat, I was able to POST the JSON object directly to my index, which does seem to work and automatically creates the itemised fields and templates for me.
Question: in a foreach loop over hundreds or thousands of users, doing a POST on each iteration, is that a supported or recommended approach, or should I persist with Filebeat?
It's preferable to send multiple documents in one HTTP request using the `_bulk` API, which is what Filebeat does under the hood.
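For reference, a `_bulk` request body alternates an action/metadata line with a source line, one JSON object per line, and must end with a trailing newline. A sketch in Kibana Dev Tools console format (index name and field values are illustrative):

```
POST /_bulk
{ "index": { "_index": "adusers" } }
{ "SamAccountName": "bob", "Enabled": true, "@timestamp": "2024-01-01 00:00:00Z" }
{ "index": { "_index": "adusers" } }
{ "SamAccountName": "alice", "Enabled": false, "@timestamp": "2024-01-01 00:00:00Z" }
```

One request carrying hundreds of documents is far cheaper than hundreds of single-document POSTs, which is why per-iteration POSTs in a loop are discouraged at that scale.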
I think what you want is to append the timestamp to the set of properties selected from the ADUser object, then serialize that to JSON? If so, you can use a calculated property for this. Something like:
# guess you want the same timestamp for all retrieved objects?
$timestamp = (Get-Date).ToUniversalTime().ToString("u")
Get-ADUser bob |
    Select-Object DistinguishedName, Enabled, GivenName,
        ObjectClass, ObjectGUID, SamAccountName, Surname,
        UserPrincipalName, @{ Name = '@timestamp'; Expression = { $timestamp } } |
    # -Compress puts each object on a single line (NDJSON)
    ForEach-Object { $_ | ConvertTo-Json -Compress } |
    # Out-File defaults to UTF-16 in Windows PowerShell;
    # force UTF-8 so Filebeat can read the file
    Out-File .\adusers.json -Encoding utf8
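Each line of `adusers.json` is then one self-contained JSON document, something like this (values purely illustrative):

```
{"DistinguishedName":"CN=bob,OU=Users,DC=example,DC=com","Enabled":true,"GivenName":"Bob","ObjectClass":"user","SamAccountName":"bob","@timestamp":"2024-01-01 00:00:00Z"}
```

That one-object-per-line shape is exactly what Filebeat's JSON decoding (or a `_bulk` source line) expects.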