As I said, you can wrap the HTTP REST API and filter for GET requests, or
just for the _search endpoint, but that is only one part of the picture, and
on its own it is an incomplete solution.
More important is to isolate ES in a private network and to maintain a safe
and trusted environment, where every operation at the OS level is logged and
must be authorized, and where the network and the connected devices such as
gateways and routers are known to be secure. Only such an environment makes
it possible to ensure that writes can be prevented. Append-only file systems
should also be considered. In such an environment, it is possible to create
a copy of a master index that is known not to have been modified in the
meantime, and let others play with that copy.
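One way to hand out such a copy is the snapshot/restore API (available since
ES 1.0). A minimal sketch in Python, assuming the requests library is
installed; the repository location and the index names are placeholders:

    import requests  # third-party HTTP client, assumed installed

    ES = "http://localhost:9200"  # run against the trusted master cluster

    # Register a filesystem snapshot repository (the path must exist and
    # be accessible from every data node).
    requests.put(ES + "/_snapshot/backup_repo",
                 json={"type": "fs",
                       "settings": {"location": "/mount/backups"}})

    # Snapshot the unmodified master index.
    requests.put(ES + "/_snapshot/backup_repo/snap_1"
                      "?wait_for_completion=true",
                 json={"indices": "master-index"})

    # Restore it under a different name, so others can play with the
    # copy while the master index stays untouched.
    requests.post(ES + "/_snapshot/backup_repo/snap_1/_restore",
                  json={"indices": "master-index",
                        "rename_pattern": "master-index",
                        "rename_replacement": "sandbox-index"})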
Why is wrapping HTTP REST not enough? If you do not take special care of
ports 9300-9400, it is possible for attackers to craft malicious binary
packets and submit them to the ES transport protocol ports, if those are
exposed to outside network access.
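A quick way to verify from an untrusted network that the transport ports are
not reachable is a plain TCP probe; a minimal sketch, where
"es.example.com" is a placeholder for your cluster's public address:

    import socket

    # From outside the private network, every transport port should be
    # closed or filtered; an open port means ES is exposed.
    def port_open(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port in range(9300, 9400):
        if port_open("es.example.com", port):
            print("WARNING: transport port %d is reachable" % port)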
Whether Kibana (or other plugins), when exposed to network access from
outside, can be used for such operations (by accident or not), I don't know
for sure. I hope it is not possible; it looks as if Kibana only uses search
methods and cannot be used for attacks. But hope is not enough for security.
You have to prove by a security audit that everything is "secure", up to a
certain level of trust.
Jörg
On Wed, Jun 18, 2014 at 8:01 PM, Zennet Wheatcroft zwheatcroft@atypon.com
wrote:
If we want to use Kibana we will run into the same issue. I heard Shay say
that Kibana really was not developed for the use case of exposing it to
external customers, but he did not elaborate on that. What I was thinking of
doing is wrapping ES in a simple web app that forwards GET requests from
Kibana on to ES (keeping the same API) but blocks DELETE, PUT, and POST
requests, returning a 501 Not Implemented. Something like the sketch below.
Do you think that would work for maintaining functionality while disallowing
updates and deletes? Would it meet your requirements?
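A minimal sketch of that proxy, assuming Flask and the requests library are
installed; "http://es-internal:9200" is a placeholder for the private ES
endpoint, and this is not production-hardened:

    from flask import Flask, Response, request
    import requests

    app = Flask(__name__)
    ES = "http://es-internal:9200"
    # Some clients send search requests as POST, so POST is whitelisted
    # only for the read-only search endpoints.
    READ_ENDPOINTS = ("_search", "_msearch", "_count")
    METHODS = ["GET", "HEAD", "POST", "PUT", "DELETE"]

    @app.route("/", defaults={"path": ""}, methods=METHODS)
    @app.route("/<path:path>", methods=METHODS)
    def proxy(path):
        allowed = request.method in ("GET", "HEAD") or (
            request.method == "POST"
            and path.rsplit("/", 1)[-1] in READ_ENDPOINTS)
        if not allowed:
            return Response("Not Implemented\n", status=501)
        upstream = requests.request(request.method, ES + "/" + path,
                                    params=request.args,
                                    data=request.get_data())
        return Response(upstream.content, status=upstream.status_code,
                        content_type=upstream.headers.get("Content-Type"))

    if __name__ == "__main__":
        app.run(port=8080)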
Zennet
On Thursday, June 12, 2014 7:48:47 AM UTC-7, Harvii Dent wrote:
Hello,
I'm planning to use Elasticsearch with Logstash for log management and
search; however, one thing I'm unable to find an answer to is how to make
sure that the data cannot be modified once it reaches Elasticsearch.
"action.destructive_requires_name" prevents deleting all indices at
once, but they can still be deleted. Are there any options to prevent
deleting indices altogether?
And at the document level, is it possible to disable 'delete' AND 'update'
operations without setting the entire index as read-only (i.e.
'index.blocks.read_only')?
Lastly, does setting 'index.blocks.read_only' ensure that the index files
on disk are not changed (so that they can be monitored with a file integrity
monitoring solution)? Many regulatory and compliance bodies have
requirements for ensuring log integrity.
Thanks