Depending on the version of the stack you're on, you probably also need to include the content type header in that curl command for it to work. I think that became a requirement in 5.0, or maybe an early point release. Try this:
curl -H "Content-Type: application/json" -XPOST http://localhost:9200/etc/elasticsearch -d @abc.txt
I want the data to be fetched from a URL so that I have live data. Is there a way I can point this at the URL instead of a static file? The URL returns JSON data only.
I don't think you can do this with curl directly, but you can almost certainly do it by piping the output of one curl call into a second curl call that indexes the data. I'm not super skilled in the art of command piping, but it would look something like this rough example:
curl -H "Content-Type: application/json" -XGET <path_to_json_url> | curl -H "Content-Type: application/json" -XPUT <path_to_elasticsearch>/<index_name> -d @-
You could also write a script in pretty much any language that would do this for you, using one of the existing Elasticsearch client libraries, or just making REST calls manually. Once you're there, just put that on a cron job and you're pretty much all set. You can get fancier with it, and probably should for a real production use case, but that would be good enough to start out.
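To make the script-plus-cron idea concrete, here's a minimal Python sketch using only the standard library. The source URL, Elasticsearch host, and index name are all placeholders, and it assumes the URL returns a JSON array of documents; for real production use you'd probably reach for one of the official Elasticsearch client libraries instead.

```python
import json
import urllib.request

# Hypothetical values -- substitute your own.
JSON_URL = "http://example.com/data.json"
ES_URL = "http://localhost:9200"
INDEX = "my_index"

def to_bulk_payload(docs, index):
    """Convert a list of JSON documents into the newline-delimited
    body expected by the Elasticsearch _bulk API. (Versions before
    7.0 also expect a "_type" in each action line.)"""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    # The bulk API requires a trailing newline.
    return "\n".join(lines) + "\n"

def fetch_and_index():
    # Fetch the live JSON (assumed to be a list of documents).
    with urllib.request.urlopen(JSON_URL) as resp:
        docs = json.load(resp)
    req = urllib.request.Request(
        ES_URL + "/_bulk",
        data=to_bulk_payload(docs, INDEX).encode("utf-8"),
        headers={"Content-Type": "application/x-ndjson"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_and_index())
```

Drop a script like this into crontab (e.g. `*/5 * * * *` to re-index every five minutes) and you have a crude live feed.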
is there any way i can set a kind of threshold on basis of which i can change color etc
Once the raw document data is in Elasticsearch, you can perform aggregations on that data and do things like split on fields and roll up metrics into averages and the like. The visualization builder UI in Kibana is modeled after the Elasticsearch Query DSL, so it's probably good to have at least a basic understanding of how it functions (you don't really need to understand the syntax to use Kibana, just how it works).
Once you are indexing documents, you can use, for example, a range aggregation in your visualization to show data within various thresholds.
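For reference, a range aggregation request body looks roughly like this (the `price` field and the bucket boundaries are hypothetical); this is essentially what Kibana builds behind the scenes when you define range buckets in a visualization. You'd POST it to `<index_name>/_search`:

```json
{
  "size": 0,
  "aggs": {
    "price_thresholds": {
      "range": {
        "field": "price",
        "ranges": [
          { "to": 50 },
          { "from": 50, "to": 100 },
          { "from": 100 }
        ]
      }
    }
  }
}
```

Each bucket comes back with a document count, and in Kibana you can then style the resulting buckets (colors, thresholds, etc.) in the visualization's options.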