The type parameter of an input just adds a field named "type" with the value "json" (in your case). See this for more info.
What you are actually looking for is the codec parameter, which you can set to "json" in your Logstash input. See this and find the codec list here.
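To illustrate the difference, here is a minimal (untested) file input sketch; the path is a placeholder:

```
input {
  file {
    path  => "/path/to/logs/*.json"  # placeholder path, adjust to yours
    type  => "json"                  # only tags each event with a "type" field
    codec => "json"                  # this is what actually parses the content as JSON
  }
}
```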
Answering your questions:
If the format is JSON, I think the .json extension is appropriate, but even if you save those files as .txt you can still read them as JSON using the codec parameter.
Your workflow should be quite simple here:
Input your files using the json codec
Optional: use grok in the filter section to perform additional parsing on specific fields.
I've also set the output to stdout so you can view the Logstash logs and verify everything is correct. Once you are happy, switch it back to the elasticsearch output.
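The workflow above could look like this (a sketch based on my reading of the docs, not something I've run; the path is a placeholder):

```
input {
  file {
    path  => "/path/to/logs/*.json"  # placeholder path
    codec => "json"                  # parse each line as JSON at input time
  }
}

filter {}  # nothing needed here; the codec already parsed the JSON

output {
  stdout { codec => rubydebug }  # print parsed events for debugging
}
```

Once the events look right on stdout, replace the output block with your elasticsearch output.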
First of all, consider that the type parameter you add in the input section is just a flag added to the event. It can be used for routing or filtering purposes, but it is not mandatory unless you need it.
In my view, you do not need anything in your filter section. As said before, in your case it is optional, only needed if you want to parse the JSON fields (extracted by the input codec) themselves. You might be misunderstanding the json filter. Check the doc page: you should use it when you have raw JSON inside one of your fields, but in your case the input codec is already splitting your input according to the JSON format. Try deleting everything in the filter section: filter {} should be enough.
I did not try this exact configuration myself, but based on my understanding of the docs, that's how I would configure it in your case.
Last but not least, since you are in dev mode and reading the same file many times, I recommend you check sincedb_path and sincedb_clean_after. The native behaviour of Logstash is to read each file only once, which (I suppose) does not match your development workflow. Moreover, I think this is the issue you are currently facing regarding the data not appearing in Elasticsearch/Kibana.
Those changes should push something to your Elasticsearch. If not, you might need to dig into the Logstash or Elasticsearch log files to find the exact error.
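For development, a common (untested here) trick is to disable the sincedb and always read from the start; again, the path is a placeholder:

```
input {
  file {
    path           => "/path/to/logs/*.json"  # placeholder path
    codec          => "json"
    sincedb_path   => "/dev/null"   # don't persist read positions (dev only; use "NUL" on Windows)
    start_position => "beginning"   # re-read files from the start on each run
  }
}
```

Don't ship this to production as-is, since it will re-ingest every file on every restart.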