Text getting trimmed in Kibana

Hi all, I've uploaded a CSV file of HTTP traffic to ELK. The file has quite large values in the URL field; these values can run up to 20 lines. Kibana is not showing the complete values, which is hampering my log analysis.
How can I make Kibana show the full values?

It's a little tough to know what's going on here; I'll need a bit more information from you:

If you query Elasticsearch, are the whole values of the fields being shown?
Is this in Discover where the fields are being truncated?
What is this field mapped as in your index pattern?
Are you using an ignore_above setting in your mapping? (See the mapping check sketched below.)
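For the last two questions, a quick way to check is to pull the index mapping in Dev Tools and look at the field's type and any ignore_above value (a sketch; replace http with whatever your index is called):

GET /http/_mapping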

Thanks for the reply, @devon.thomson. I can see in the Logstash command's stdout that the entire field values are being sent through Logstash, and in Kibana I have to expand column 58 (even then, about 90% of the content is not visible, probably to save display area).


I can see this column also contains a JSON-style mapping like src_content : MSQMx\u00 --trimmed-- (but strangely with no quotes). Upon expanding the column I can see that large text.
My config file is:

input {
        file {
                path => "/home/kriss/botsv1.stream-http.csv"
                start_position => "beginning"
                sincedb_path => "/dev/null"   # don't persist read position; re-read from the start on every run
        }
}
filter {
        mutate {
                # strip all double quotes from the raw line before CSV parsing
                gsub => ["message", "\"", " "]
        }
        csv {
                separator => ","
                columns => ["_serial","_time","source","sourcetype","host","index","splunk_server","_raw"]
                # json { source => "_raw" }   # I wanted to further break out the _raw column from the CSV
                #                             # (since it has JSON text inside it), but Logstash throws an error
        }
        #mutate { add_field => {"artifact" => "bots"} }
}
output {
        elasticsearch {
                hosts => "localhost"
                index => "http"
        }
        stdout {}
}
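(Side note for anyone reading: json is its own filter plugin, so it would need to sit alongside the csv block rather than inside it, which is probably why Logstash threw an error. A rough sketch of that layout follows; the parsed_raw target name is purely illustrative, and the gsub above strips every double quote from the line, which would also break the embedded JSON, so it would need to be removed or narrowed first.)

filter {
        csv {
                separator => ","
                columns => ["_serial","_time","source","sourcetype","host","index","splunk_server","_raw"]
        }
        # json is a separate filter plugin, placed after the csv block, not nested in it
        json {
                source => "_raw"         # the CSV column holding the embedded JSON string
                target => "parsed_raw"   # illustrative target; omit to merge parsed keys into the event root
        }
}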

I am attaching two files:
1- kibana.png shows the expanded fields in Kibana.
2- kibana-input.png shows a partial view of the file content being uploaded.

The source CSV file is: https://s3.amazonaws.com/botsdataset/botsv1/json-by-sourcetype/botsv1.stream-http.json.gz
I think there is no problem with Kibana receiving the data; the issue is only with displaying the full data. I read in a post that there is a setting in Kibana to enable large-text display, but I couldn't find it.
Thanks again for the kind help...

Also, even after selecting column 58 in the table, the data isn't fully displayed. The data is approximately 50 lines long, but I can see barely 5 lines of it.

I'm trying to narrow down whether this is a problem in Discover or a problem in Logstash / your ES mappings. Can you run a search on your http index and show me whether the raw JSON contains the full or a truncated version of the field content?

You can go into Dev Tools in Kibana and run the command GET /http/_search. I believe this is the right index to search, but if it doesn't work, check here for more information.
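For example, something like this returns a single document with just the long field, so it's easy to see whether the stored value is complete (assuming the field is the _raw column from your Logstash config; swap in whichever field is being cut off):

GET /http/_search
{
  "size": 1,
  "_source": ["_raw"]
}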

Hi devon. The Dev Tools query shows the entire data at full length. I tried to upload it to Pastebin, but none of the pastes carried the full content; the stream part gets trimmed (maybe because of certain characters in the HTTP stream). But it really is huge data.

Great, thank you for helping to narrow it down. @majagrubic, do you know anything about Discover cutting off long field contents?

@kriss332, which version of Kibana are you using?

Thank you for following up, devon. I am using Kibana 7.13.1, if I am looking at the right place in the Kibana dashboard.

Just to confirm I understand the issue correctly - the data is displayed correctly when you expand the row, but not in the row in the table itself?

Yes, majagrubic. For the huge data I have to find the correct column and then expand it to see the full data. But in the default Discover view of Kibana (even after changing it to the tabular view) the full data doesn't get displayed.
Also, I am facing another issue: the default Discover view in Kibana doesn't show all of the fields, although these fields are there in the field selection window (in the left pane). Kibana shows only the first few fields and their data (in alphabetical order), but I can find all of this data when I expand the row.


As you can see in the screenshot above, the last field shown is Addon but its value got trimmed, and there are many more fields left to be shown, as below:

What you are describing is expected behavior. We cannot show the entire data for each document, as that would cause performance concerns and it wouldn't be very useful in the end. That's why the expanded view is there. You can also make use of the "view single document" option, which might be more readable.

Got it, @majagrubic, but when I have a million logs from CSVs it is not time-efficient to expand each row and then go for "view single document". Is there any setting where I could change this?

You can disable truncation in the table via the truncate:maxHeight setting in Kibana Advanced Settings.
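For example, in Stack Management > Advanced Settings, setting truncate:maxHeight to 0 removes the height limit on table cells entirely. If you prefer to script it, the same setting can be changed through the endpoint the Advanced Settings UI itself uses (a sketch, assuming Kibana is on localhost:5601 with no authentication):

curl -X POST "http://localhost:5601/api/kibana/settings" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"changes": {"truncate:maxHeight": 0}}'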

Thanks @matw. I found that option and changed it, but before I could confirm it solved the issue, I ran into another problem.
I saved some filters and then deleted some index. I can't recollect all of it exactly, but I can somewhat correlate it now. Now all I can see is an error for the index pattern, and I can't see any data no matter what time range I select, although an Elasticsearch query on port 9200 shows that the data is there and Kibana also shows the same old indices present.
[screenshot: kibana-error]
Whatever I do (select any index, delete existing ones, upload a new CSV with a new index) doesn't solve this problem. It's been hours of googling, and I came across a solution: Kibana show error index pattern - #11 by LeeDr
But I cannot apply it in my Kibana. Can someone guide me on how to do it on v7.13.2?

Otherwise, can someone tell me how to purge all of these kinds of cached configurations from Kibana & Elasticsearch?

You will need to recreate the index pattern if you deleted it.

I would have done that as my first choice, @majagrubic, but after so many days I've even forgotten that index name. Because of this problem I had to decommission one server; I've kept it shut down until I find a workaround.
Can there really not be a purging method for such issues? Tomorrow I may have to decommission another ELK VM just because someone on my team saved a search query and I deleted the index without getting confirmation from everyone else (and as if they remember what query they had saved).
Please create an alternative solution for this.

Index pattern names are based on Elasticsearch index names. As long as your ES index is intact, you should be able to recreate an index pattern in a matter of seconds. Please read about it in our documentation and let us know if you have any more questions.
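For example, as long as the http index from your Logstash output still exists, a new index pattern can be created in Stack Management > Index Patterns > Create index pattern with a title such as http*. As a sketch, the same thing can be done via the index patterns API (available since 7.12; again assuming Kibana on localhost:5601 with no authentication):

curl -X POST "http://localhost:5601/api/index_patterns/index_pattern" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"index_pattern": {"title": "http*"}}'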

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.