Hello,
I am planning to use the ELK stack (Elasticsearch, Logstash, Kibana).
My current SQL table looks like this:
Time, Field_ID, Field_Name, Field_Sub_ID, Value, Extra_Value, Type
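For reference, a single row looks roughly like this (the values here are only placeholders I made up; the Value column holds the hex payload shown further below):

row = {
    "Time": "2019-06-12 10:15:30",     # placeholder timestamp
    "Field_ID": 456,                   # message id, as in the XML example below
    "Field_Name": "DrillDataMessage",  # placeholder name
    "Field_Sub_ID": 1,                 # placeholder
    "Value": "14 C9 80 00 04 ...",     # hex-encoded payload (truncated here)
    "Extra_Value": None,               # placeholder
    "Type": "Bore",                    # placeholder
}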
To view this data we provide an Excel add-in.
Users can apply basic filters in the add-in (date/time from & to, Field_ID, Field_Sub_ID).
The Value field contains hex-encoded data, for example:
14 C9 80 00 04 00 00 40 00 C4 04 61 39 E0 00 01 00 00 61 02 08 07 6C 00 00 40 00 14 00 00 00 36 C0 19 80 00 3B 62 00 00
86 00
The actual decoded data we see in Excel looks like this:
WellID = 1 DrillState = 0 MudAvailable = 0 LastLevel = 150 PowerOnContext = xyzz
Type = Bore Operation = Semi OperatorMonitoring = sd209 ... (a full message is very big, more than 30 fields)
The Excel add-in does this conversion using an XML file that describes which bits/bytes contain which data.
Example:
<?xml version="1.0"?>
<!-- Drill DataMessage (id=456) -->
<message class="Message">
  <record class="Record" id="WellID" offset="0">
    <record class="Record" id="Info" offset="0">
      <field class="int" id="WellID" size="1"/>
      <field class="bool" id="DrillState" size="1"/>
      <field class="bool" id="MudAvailable" size="1"/>
      <field class="int" id="LastLevel" size="1"/>
      <field class="int" id="PowerOnContext" size="1"/>
      <field class="int" id="Type" size="1"/>
      <field class="string" id="Operation" size="1"/>
      <field class="int" id="OperatorMonitoring" size="2"/>
    </record>
  </record>
</message>
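To make it concrete, here is a minimal Python sketch of the kind of conversion the add-in does, and that I would like to reproduce somewhere in the ELK pipeline. It just walks the <field> entries of the XML in order and consumes that many bytes from the hex string; the real add-in logic is richer (offsets, nested records, bit-level fields), so this naive version will not reproduce the exact values shown above, and the function and file names are only my placeholders:

import xml.etree.ElementTree as ET

def decode_message(xml_path, hex_value):
    """Decode a hex payload using the field layout from an XML schema file."""
    data = bytes.fromhex(hex_value.replace(" ", ""))
    root = ET.parse(xml_path).getroot()
    result = {}
    pos = 0
    # Walk every <field> in document order; each one consumes 'size' bytes.
    for field in root.iter("field"):
        size = int(field.get("size"))
        chunk = data[pos:pos + size]
        pos += size
        if field.get("class") in ("int", "bool"):
            result[field.get("id")] = int.from_bytes(chunk, "big")
        else:  # treat everything else as a raw string in this sketch
            result[field.get("id")] = chunk.decode("ascii", errors="replace")
    return result

# Example usage (schema file name is a placeholder):
# fields = decode_message("drill_message_456.xml", "14 C9 80 00 04 00 00 40 ...")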
We have different XML schema files, depending on the type of field.
If I use ELK, my queries are:
- Where should this conversion be configured: before inserting the data (on the Logstash side), or while fetching/viewing the data (on the Elasticsearch or Kibana side)?
- How can this be done? Can I write my own script, or can I use our existing C++ exe for this operation? (A rough sketch of what I have in mind is just below.)
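To show what I mean by the first option, this is roughly the pre-processing step I imagine in front of Logstash (purely a sketch built on decode_message() above; the schema file naming and the output file are placeholders, and in reality this could just as well be our C++ exe): decode each row's Value column and write flat JSON lines that Logstash could then pick up, e.g. with a file input and a json codec.

import json

def export_for_logstash(rows, out_path="decoded_messages.jsonl"):
    """Decode each SQL row's hex payload and write one flat JSON document per line."""
    # Relies on decode_message() from the sketch above.
    with open(out_path, "w") as out:
        for row in rows:
            doc = {
                "timestamp": row["Time"],
                "field_id": row["Field_ID"],
                "field_sub_id": row["Field_Sub_ID"],
                "type": row["Type"],
            }
            xml_path = f"schema_{row['Field_ID']}.xml"  # placeholder schema selection
            doc.update(decode_message(xml_path, row["Value"]))
            out.write(json.dumps(doc) + "\n")

Whether something like this belongs in a separate script, in our C++ exe, or inside Logstash itself is exactly the part I am unsure about.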
Can someone please help with this?
Thanks in advance
Regards,
ASH