Substring a record based on numerical index

Hi everyone,
my Logstash config file is as follows:

input {
	jdbc {
		jdbc_connection_string => "Connection String"
		jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
		jdbc_user => "User"
		jdbc_driver_library => "Lib Path"
		schedule => "*/5 * * * *"
		
		statement => "SELECT uid, date, idstation FROM tablr ORDER BY uid ASC"
		use_column_value => true
		tracking_column => "uid"
		tracking_column_type => "numeric"
		clean_run => true
		last_run_metadata_path => "data\.logstash_jdbc_last_run"
	}
}

filter { ... }

output { ... }

The idstation field is made up of 8 characters (e.g. 02030417). I want to split it into four sub-fields of two characters each: in this example the first is 02, the second 03, the third 04, and the fourth 17. After splitting, I want these four values indexed into Elasticsearch as four separate fields, because I need to search on each one individually.

I cannot find a way to do this. Please help me, thank you in advance!

You can use mutate+gsub to capture fixed-length substrings. See this.
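
A sketch of that approach for this idstation case (the target field names like station_part1 are illustrative, not from the original thread): first copy the value into four new fields with add_field, then use gsub with a capture group to keep only the wanted two characters in each. The two steps must be separate mutate filters, because within a single mutate, add_field runs after gsub.

```
filter {
	# Duplicate idstation into four hypothetical target fields
	mutate {
		add_field => {
			"station_part1" => "%{idstation}"
			"station_part2" => "%{idstation}"
			"station_part3" => "%{idstation}"
			"station_part4" => "%{idstation}"
		}
	}
	# Keep only the relevant two-character slice in each copy
	mutate {
		gsub => [
			"station_part1", "^(..)......$", "\1",
			"station_part2", "^..(..)....$", "\1",
			"station_part3", "^....(..)..$", "\1",
			"station_part4", "^......(..)$", "\1"
		]
	}
}
```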

Thank you Badger for your response. In my case, using a Ruby filter worked.
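
For reference, a minimal sketch of a Ruby filter doing this split (field names are again illustrative): it reads idstation from the event and sets four two-character slices as separate fields, guarding against missing or malformed values.

```
filter {
	ruby {
		code => '
			id = event.get("idstation")
			# Only split well-formed 8-character values
			if id.is_a?(String) && id.length == 8
				event.set("station_part1", id[0, 2])
				event.set("station_part2", id[2, 2])
				event.set("station_part3", id[4, 2])
				event.set("station_part4", id[6, 2])
			end
		'
	}
}
```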