Permissions problem on ELK Linux installation

Hello,
I think I have a permissions problem.

  1. I installed ELK on Linux (Red Hat, from the .tar.gz packages), and it is impossible for me to delete an object in the Settings > Objects view.
    I reinstalled 3 times, with no luck.

For information, I have Elasticsearch 2.2.0 and Kibana 4.4.2.

Would you know why?
2. If I use Sense to delete an index, it refuses to delete it.
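For reference, the delete request I send in Sense looks like this (shown here with the index name from my cluster):

```
DELETE /index_stoz_baie_backendbusy_histo
```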

  3. Another problem: I created a visualization called "test", but when I look at its code, I see this in visState:

    {
      "title": "New Visualization",
      "type": "area",
      "params": {
        "shareYAxis": true,
        ...
    I don't understand why the title is "New Visualization"; I named it "test" at the creation step...

All these actions work fine under Windows; I have installed ELK on Windows and I don't have any similar problem.
Thanks.

When I try to delete an index in Settings > Indices, it refuses.

There is no message in the interface, but I see this log on the server.
My index is called "index_stoz_baie_backendbusy_histo", and the message says: [logstash-*] IndexNotFoundException[no such index]

[2016-03-13 18:24:44,694][DEBUG][cluster.service          ] [node-1] processing [shard-started ([index_stoz_baie_backendbusy_histo][1], node[tBssabVsTIGrjqaoc1KmcA], [P], v[3], s[INITIALIZING], a[id=AovbplDYShCXVD-Gws8iaw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-13T17:24:43.449Z]]), reason [after recovery from store],shard-started ([index_stoz_baie_backendbusy_histo][1], node[tBssabVsTIGrjqaoc1KmcA], [P], v[3], s[INITIALIZING], a[id=AovbplDYShCXVD-Gws8iaw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-13T17:24:43.449Z]]), reason [master {node-1}{tBssabVsTIGrjqaoc1KmcA}{<I_HIDE_THE_IP>}{<I_HIDE_THE_IP>:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started],shard-started ([.kibana][0], node[tBssabVsTIGrjqaoc1KmcA], [P], v[9], s[INITIALIZING], a[id=XPHqGDldTCeTl7JksL-cYw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-13T17:24:43.444Z]]), reason [after recovery from store]]: took 53ms done applying updated cluster_state (version: 6, uuid: AD2OyRHnQgWtN86B-OPSQw)
[2016-03-13 18:24:59,258][INFO ][rest.suppressed          ] /logstash-*/_mapping/field/* Params: {ignore_unavailable=false, allow_no_indices=false, index=logstash-*, include_defaults=true, fields=*, _=1457889899197}
[logstash-*] IndexNotFoundException[no such index]
        at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:659)
        at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:133)
        at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:77)
        at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:57)
        at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:40)
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
        at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:351)
        at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:52)

It's strange.

Ignore the index-not-found message; it is just Kibana looking for a default index.
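To rule out the Kibana UI, you can also try deleting the index directly against the Elasticsearch API, for example in Sense (replace the index name with yours):

```
DELETE /index_stoz_baie_backendbusy_histo
```

If that succeeds, the problem is on the Kibana side rather than in Elasticsearch.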

Why not use the RPMs instead of the tar.gz? It'll make things easier to manage.

With the RPM, I have the same error.
Normally, when I save an object with the same name as an existing one, I should get a confirmation popup asking me to force the save. That is not the case; I get no message, and it generates the following log:

[2016-03-15 18:22:52,931][INFO ][rest.suppressed          ] /.kibana/visualization/44 Params: {index=.kibana, op_type=create, id=44, type=visualization}
RemoteTransportException[[Leonus][x.x.x.x:9300][indices:data/write/index[p]]]; nested: DocumentAlreadyExistsException[[visualization][44]: document already exists];
Caused by: [.kibana][[.kibana][0]] DocumentAlreadyExistsException[[visualization][44]: document already exists]
        at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:432)
        at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:390)
        at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:362)
        at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:515)
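I note that the request in the log uses op_type=create, which makes Elasticsearch refuse to overwrite an existing document, hence the DocumentAlreadyExistsException. As a workaround, assuming the conflicting saved object really is the visualization with id 44 shown in the log, I can delete it manually in Sense before saving again:

```
DELETE /.kibana/visualization/44
```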