katara
(Katara)
December 26, 2019, 12:23pm
1
Hi Team,
I'm working with a ServiceNow (SNOW) API on my ELK setup (7.2.0),
and below is my Logstash configuration file.
input {
  http_poller {
    urls => {
      snowinc => {
        url => "https://service-now.com/api/now/table/incident?sysparm_display_value=true&sysparm_limit=10&sysparm_exclude_reference_link=true"
        user => "logstash"
        password => "Pass123"
        headers => { Accept => "application/json" }
      }
    }
    request_timeout => 60
    metadata_target => "http_poller_metadata"
    schedule => { cron => "* * * * * UTC" }
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["10.116.15.27:9200","10.116.15.28:9200","10.116.15.29:9200"]
    index => "incidentsnow"
    action => "update"
    document_id => "%{number}"
    doc_as_upsert => true
  }
  stdout { codec => rubydebug }
}
I initially did not have the
action => "update"
document_id => "%{number}"
doc_as_upsert => true
part, and I received all the data, but with duplicates.
After I added a document ID, I'm only receiving one incident, whereas the link actually has 10.
How do I get all, and only unique, incidents?
Also, how does an update work? If a ticket gets updated, will the above config update the ticket in my index instead of duplicating it / adding a new row?
Here's the first result set from the link; 10 result sets come back, each with different incident details, like below:
<response>
<result>
<parent/>
<made_sla>true</made_sla>
<caused_by/>
<upon_reject>Cancel all future Tasks</upon_reject>
<sys_updated_on>2019-12-23 05:00:00</sys_updated_on>
<child_incidents>0</child_incidents>
<hold_reason/>
<approval_history/>
<number>INC001566</number>
<resolved_by>Ana Ken</resolved_by>
<sys_updated_by>system</sys_updated_by>
<opened_by/>
<user_input/>
<sys_created_on>2019-12-18 03:25:37</sys_created_on>
<sys_domain>global</sys_domain>
<state>Closed</state>
<sys_created_by>sa_Service</sys_created_by>
<calendar_stc>85,372</calendar_stc>
<closed_at>2019-12-23 05:00:00</closed_at>
<cmdb_ci/>
<impact>3 - Low</impact>
<active>false</active>
<business_service/>
<priority>5 - Planning</priority>
<sys_domain_path>/</sys_domain_path>
<business_impact/>
<time_worked/>
<expected_start/>
<opened_at>2019-12-18 03:25:37</opened_at>
<business_duration>8 Hours</business_duration>
<resolved_at>2019-12-19 03:08:29</resolved_at>
<reopened_time/>
<subcategory>Datacenter</subcategory>
<short_description>
[ Critical Zenoss] STR675GH is DOWN!
</short_description>
<assignment_group>L1.5</assignment_group>
<description>
Timestamp: 2019/12/18 11:25:32.000000 Device: STR675GH IP address: 10.116.45.116 Component: Severity: Critical (5) Event class: /Status/Ping Device class: /Server/Microsoft/Windows/strwindows Device/Element priority: Normal (3) Summary: STR675GH is DOWN! Message: STR675GH is DOWN!
</description>
<cause/>
<sys_class_name>Incident</sys_class_name>
<closed_by>Ana Ken</closed_by>
<parent_incident/>
<sys_id>037879a91b714c10fe92437cdc4bcb31</sys_id>
<contact_type>Monitoring</contact_type>
<incident_state>Closed</incident_state>
<urgency>3 - Low</urgency>
<problem_id/>
<reassignment_count>1</reassignment_count>
<assigned_to>Beth Bezo</assigned_to>
<severity>3 - Low</severity>
<sla_due>UNKNOWN</sla_due>
<escalation>Normal</escalation>
<upon_approval>Proceed to Next Task</upon_approval>
<location/>
<category>Network</category>
</result>
<result>
<parent/>
<made_sla>true</made_sla>
<caused_by/>
<upon_reject>Cancel all future Tasks</upon_reject>
<sys_updated_on>2019-12-13 00:43:58</sys_updated_on>
<child_incidents>0</child_incidents>
<hold_reason/>
<approval_history/>
<number>INC0010206</number>
<resolved_by>Abraham Lincoln</resolved_by>
<sys_updated_by>anak</sys_updated_by>
<opened_by>Angel Ken</opened_by>
<user_input/>
<sys_created_on>2019-12-11 01:53:10</sys_created_on>
<sys_domain>global</sys_domain>
<state>Canceled</state>
<sys_created_by>angke</sys_created_by>
<calendar_stc>81,153</calendar_stc>
<closed_at>2019-12-13 00:43:58</closed_at>
<cmdb_ci/>
<impact>3 - Low</impact>
<active>false</active>
<business_service/>
<priority>5 - Planning</priority>
<sys_domain_path>/</sys_domain_path>
<business_impact/>
<time_worked/>
<expected_start/>
<opened_at>2019-12-11 01:51:21</opened_at>
<business_duration>8 Hours</business_duration>
<resolved_at>2019-12-12 00:23:54</resolved_at>
<reopened_time/>
<subcategory>DNS</subcategory>
<short_description>test</short_description>
<assignment_group>Service Desk</assignment_group>
<description>test</description>
<cause/>
<sys_class_name>Incident</sys_class_name>
<closed_by>Ana Ken</closed_by>
<parent_incident/>
<sys_id>0ecf9d931ba54410fe92437cdc4bcb50</sys_id>
<contact_type>Phone</contact_type>
<incident_state>Canceled</incident_state>
<urgency>3 - Low</urgency>
<problem_id/>
<reassignment_count>0</reassignment_count>
<assigned_to>Beth Bezo</assigned_to>
<severity>3 - Low</severity>
<sla_due>UNKNOWN</sla_due>
<escalation>Normal</escalation>
<upon_approval>Proceed to Next Task</upon_approval>
<location/>
<category>directory_services</category>
</result>
</response>
Please do help me with these clarifications.
Thank you!
Hi, you can check previous issues with ServiceNow in the forum.
In this thread, there is a JSON response with a records field that contains an array of entries, so the json codec and the split filter are used to get the information of each entry.
Maybe you can work on something similar for your case, although your response seems to be XML instead of JSON.
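As a rough sketch of that approach (assuming the JSON form of your response, where the incidents arrive as an array in a result field, as in your sample), the filter would be something like:
filter {
  # Assumption: the json codec on the http_poller input has already parsed
  # the response, leaving an array of incident hashes in the "result" field.
  # split clones the event once per array element, so each incident becomes
  # its own event (and, downstream, its own document).
  split {
    field => "result"
  }
}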
On the other hand, your configuration of
action => "update"
document_id => "%{number}"
doc_as_upsert => true
will update the existing document if a new one is indexed with the same number:
new fields (not present in the existing document) will be added to the document, and existing fields will have their values updated with the latest ones.
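For example (hypothetical field values, just to illustrate the merge):
# First event indexed for a given number (document created, thanks to doc_as_upsert):
#   { "number": "INC001566", "state": "New" }
# Later event with the same number (document updated in place, not duplicated):
#   { "number": "INC001566", "state": "Closed", "resolved_by": "Ana Ken" }
# Resulting document in the index:
#   { "number": "INC001566", "state": "Closed", "resolved_by": "Ana Ken" }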
katara
(Katara)
December 30, 2019, 5:48am
5
Thank you @andres-perez for helping me out.
I am trying the split filter as mentioned:
filter {
  split {
    field => "[response][0][result]"
  }
}
based on my data. Is this right? Because I'm still not able to see all my data.
I also tried
field => "[response]"
and
field => "[result]"
Where am I going wrong?
Gauti
(Gautham)
December 30, 2019, 8:30am
6
Hi @katara
In which field are you getting your incident details?
How is the data aligned in your Discover tab?
Are you getting all the data under a single field?
If yes, then apply the split filter.
In my case I'm getting the data in the "result" field, so I'm splitting on it.
Thanks
Gauti
katara
(Katara)
December 30, 2019, 9:29am
7
Hi @Gauti,
My XML structure looks like this (this is from a browser, so I get XML output; it would be JSON if I curled it):
<response>
  <result>...</result>
  <result>...</result>
  <result>...</result>
  <result>...</result>
  <result>...</result>
  <result>...</result>
  <result>...</result>
  <result>...</result>
  <result>...</result>
  <result>...</result>
</response>
Under each result set, the incident data is stored:
<result>
  <made_sla>true</made_sla>
  <upon_reject>Cancel all future Tasks</upon_reject>
  <number>INC001566</number>
  <resolved_by>Ana Ken</resolved_by>
  <sys_updated_by>system</sys_updated_by>
</result>
On my Discover page, all my incident details fall into one result.
I want a split applied for each result, so that all incident records end up as separate rows in my ES.
Please help me configure that.
I tried everything I've mentioned earlier.
How should I apply split here?
Gauti
(Gautham)
December 30, 2019, 11:01am
8
Try this filter:
filter {
  split {
    field => "result"
  }
}
Thanks
Gauti
katara
(Katara)
December 30, 2019, 11:19am
9
@Gauti,
I tried the above and my data still isn't split.
Only the first row of my XML result has been added.
The data is just shown as XML; the output above is from Chrome.
When I curl it, it gives me JSON output. I'd like to avoid that confusion.
katara
(Katara)
December 30, 2019, 12:08pm
10
@Gauti, attaching a JSON response for reference:
{"result":[
{
"made_sla":"true",
"upon_reject":"Cancel all future Tasks",
"sys_updated_on":"2019-12-23 05:00:00",
"number":"INC0010275",
"category":"Network"} ,
{
"made_sla":"true",
"upon_reject":"Cancel all future Tasks",
"sys_updated_on":"2019-12-24 07:00:00",
"number":"INC0010567",
"category":"DB"}]}
I've attached 2 sets, and I need each set to occur as a separate row.
So far, with the filters tried above, I'm only getting one.
Also, the data is getting split into different columns in Kibana, except only the first row (only one incident ticket's details) is getting indexed in ES.
@andres-perez, I believe this is where I differ from the question in the link you've provided me.
katara
(Katara)
December 30, 2019, 5:22pm
11
Team, if anyone has anything I could try, it would be very useful to me! Apologies if this is taking a bit long.
Thanks in advance.
Katara
katara
(Katara)
December 31, 2019, 5:53am
12
Update:
I tried adding
json { source => "message" }
to the filters, yet no results.
I need to figure out a way to split each result set.
When I do
split { field => "result" }
all my data inside result gets split up into separate columns.
There is no common name across my result sets that I can use to split each one. How do I fix this? Only the first result set is loading into ES.
In the Elasticsearch output you use the "number" field as the document id, but it does not look like you have this at the document root level. Instead it is nested under "result", which causes the same document to be repeatedly updated. Try changing "number" to "[result][number]" and see if this makes a difference.
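In other words, something along these lines (other settings kept from your config):
elasticsearch {
  hosts => ["10.116.15.27:9200","10.116.15.28:9200","10.116.15.29:9200"]
  index => "incidentsnow"
  action => "update"
  # Reference the field at its actual (nested) location in the event:
  document_id => "%{[result][number]}"
  doc_as_upsert => true
}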
1 Like
katara
(Katara)
December 31, 2019, 6:38am
14
@Christian_Dahlqvist ,
Wow! This totally worked,
Thank you so much!!
Gauti
(Gautham)
December 31, 2019, 7:08am
15
@katara, great that things worked out.
Please post your working config here, so that it'll be helpful for others who face the same issue.
Thanks
Gauti
1 Like
katara
(Katara)
December 31, 2019, 9:31am
16
Hi guys,
FYI, this is my working configuration:
input {
  http_poller {
    urls => {
      snowinc => {
        url => "https://service-now.com"
        user => "your_user"
        password => "yourpassword"
        headers => { Accept => "application/json" }
      }
    }
    request_timeout => 60
    metadata_target => "http_poller_metadata"
    schedule => { cron => "* * * * * UTC" }
    codec => "json"
  }
}
filter {
  json { source => "result" }
  split { field => "result" }
}
output {
  elasticsearch {
    hosts => ["your_elastic_ip"]
    index => "incidentsnow"
    action => "update"
    document_id => "%{[result][number]}"
    doc_as_upsert => true
  }
  stdout { codec => rubydebug }
}
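For reference, after the split each event carries a single incident under the result field, which is why the document id is %{[result][number]}. The rubydebug output for one event then looks roughly like this (values taken from the sample JSON above, remaining fields elided):
# {
#     "result" => {
#         "number" => "INC0010275",
#         "made_sla" => "true",
#         "category" => "Network",
#         ...
#     },
#     "http_poller_metadata" => { ... },
#     "@timestamp" => ...
# }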
You'll be able to find the sample JSON input in the replies for your reference.
Thanks to everyone who helped me out here!
Hope this helps
Regards,
Katara
2 Likes
system
(system)
Closed
January 28, 2020, 9:31am
17
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.