Elasticsearch: increase total_in_bytes memory value

Hello

I'm running a single node on an Ubuntu machine. Currently my node does not have enough space.

"mem" : {
          "total_in_bytes" : 25217441792,
          "free_in_bytes" : 674197504,
          "used_in_bytes" : 24543244288,
          "free_percent" : 3,
          "used_percent" : 97
        }

On the other hand, when I run df -h, I see that there is still enough space on the Linux server:

Filesystem                         Size  Used Avail Use% Mounted on
udev                                12G   12G     0 100% /dev
tmpfs                              2,4G  1,5M  2,4G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  158G  110G   42G  73% / ***<=====***
tmpfs                               12G     0   12G   0% /dev/shm
tmpfs                              5,0M     0  5,0M   0% /run/lock
tmpfs                               12G     0   12G   0% /sys/fs/cgroup
/dev/sda2                          974M  304M  603M  34% /boot
/dev/mapper/ubuntu--vg-lv--opt     206G   70G  127G  36% /opt***<=======***
tmpfs                              2,4G     0  2,4G   0% /run/user/1000
/dev/loop0                          64M   64M     0 100% /snap/core20/1634
/dev/loop10                         56M   56M     0 100% /snap/core18/2620
/dev/loop8                          64M   64M     0 100% /snap/core20/1695
/dev/loop2                          56M   56M     0 100% /snap/core18/2632
/dev/loop7                          92M   92M     0 100% /snap/lxd/23991
/dev/loop3                          50M   50M     0 100% /snap/snapd/17883
/dev/loop4                          92M   92M     0 100% /snap/lxd/24061

Please, how can I increase the value of total_in_bytes?
Thanks.

Welcome!

mem.* refers to memory, not to the filesystem.

So I'm not sure I understand your question.

Hello @dadoonet, I just want to increase the node's storage.
When I check the disk space on the server, I see there is still space left, and I want my node to use it.

Elasticsearch will use all the disk space which is available in your data dir.

So if your data dir is within /opt, it will try to use the 70G available space if needed.

I don't understand why it's not working for me.
The data is stored in /opt, as shown below, and /opt has 127 GB free.

On the other hand, the node tells me that there is only 3% free:

"data" : [
          {
            "path" : "/opt/Elastic/elasticsearch-7.14.0/data/nodes/0",
            "mount" : "/opt (/dev/mapper/ubuntu--vg-lv--opt)",
            "type" : "ext4",
            "total_in_bytes" : 220810428416,
            "free_in_bytes" : 146589724672,
            "available_in_bytes" : 135298658304
          }
        ]

The node reports that almost 127 GB are available:

135298658304 

So what's the problem?

What is not working?

I want to run a _reindex, but the operation stops before it ends.
When I check the status of the node, it tells me there's just 3% free:

    "mem" : {
      "total_in_bytes" : 25217441792,
      "free_in_bytes" : 847118336,
      "used_in_bytes" : 24370323456,
      "free_percent" : 3,
      "used_percent" : 97
    }

Can you help me please ?

So how do you know there's a problem with disk space?

Yeah. Please share the Elasticsearch logs; they probably indicate something.
Also, which exact command are you running to reindex your data?

Please share anything that would help us to help you...

Because Elastic tells me that I only have 3% free space left. That's not normal, is it?

For the reindex, I used this command:

POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "idx_siv"
  },
  "dest": {
    "index": "siv3"
  }
}

And this is the log file:

[2022-12-06T01:16:33,501][WARN ][r.suppressed             ] [node-1] path: /idx_siv/_search, params: {pretty=true, index=idx_siv}
java.lang.IllegalStateException: Can't get text on a START_ARRAY at 4:17
	at org.elasticsearch.common.xcontent.json.JsonXContentParser.text(JsonXContentParser.java:74) ~[elasticsearch-x-content-7.14.0.jar:7.14.0]
	at org.elasticsearch.common.xcontent.json.JsonXContentParser.objectText(JsonXContentParser.java:96) ~[elasticsearch-x-content-7.14.0.jar:7.14.0]
	at org.elasticsearch.index.query.MatchQueryBuilder.fromXContent(MatchQueryBuilder.java:527) ~[elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.search.SearchModule.lambda$registerQuery$17(SearchModule.java:975) ~[elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.common.xcontent.NamedXContentRegistry.parseNamedObject(NamedXContentRegistry.java:128) ~[elasticsearch-x-content-7.14.0.jar:7.14.0]
	at org.elasticsearch.common.xcontent.support.AbstractXContentParser.namedObject(AbstractXContentParser.java:398) ~[elasticsearch-x-content-7.14.0.jar:7.14.0]
	at org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder(AbstractQueryBuilder.java:309) ~[elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder(AbstractQueryBuilder.java:286) ~[elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.search.builder.SearchSourceBuilder.parseXContent(SearchSourceBuilder.java:1204) ~[elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.rest.action.search.RestSearchAction.parseSearchRequest(RestSearchAction.java:139) ~[elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.rest.action.search.RestSearchAction.lambda$prepareRequest$1(RestSearchAction.java:114) ~[elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.rest.RestRequest.withContentOrSourceParamParserOrNull(RestRequest.java:471) ~[elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.rest.action.search.RestSearchAction.prepareRequest(RestSearchAction.java:113) ~[elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:83) ~[elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.xpack.security.rest.SecurityRestFilter.handleRequest(SecurityRestFilter.java:98) ~[?:?]
	at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:274) [elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:356) [elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:195) [elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:451) [elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:516) [elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:378) [elasticsearch-7.14.0.jar:7.14.0]
	at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:31) [transport-netty4-client-7.14.0.jar:7.14.0]
	at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:17) [transport-netty4-client-7.14.0.jar:7.14.0]
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:47) [transport-netty4-client-7.14.0.jar:7.14.0]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:831) [?:?]
[2022-12-06T01:20:56,025][WARN ][r.suppressed             ] [node-1] path: /idx_siv/_search, params: {pretty=true, index=idx_siv}
java.lang.IllegalStateException: Can't get text on a START_ARRAY at 4:17

Thank you again

Please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button. It will make your post more readable.

Where is Elastic telling you that? Could you show where?

About the logs: they seem to indicate that at least one of the documents in the index is not really readable (not correct JSON)... Not related to a disk space issue or anything else.

Could you run:

GET /idx_siv/_search?size=10000

To see if you can reproduce that?

What is the output of:

GET /_cat/indices/idx_siv?v

When I run this command:
GET /_nodes/stats
I get "free_percent" : 3 under "mem" ==> percentage of free memory:

"mem" : {
          "total_in_bytes" : 25217441792,
          "free_in_bytes" : 674197504,
          "used_in_bytes" : 24543244288,
          "free_percent" : 3,
          "used_percent" : 97
        }

Yes, when I run
GET /idx_siv/_search?size=10000
I get documents back.

What is the output of:

GET /_cat/indices/idx_siv?v
health status index   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   idx_siv idFu6PLLT3SI4YAjgNA_Ig   1   1  103562064      5574757     58.5gb         58.5gb

Thank you

I'm going to tell it one more time in case it's unclear:

RAM is not Disk Space.
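
The two values live in different sections of the node stats: mem under os is RAM; fs is the filesystem. You can pull just those two sections to compare them side by side (standard node-stats metric filtering):

GET /_nodes/stats/os,fs

For indexing headroom, the value that matters is fs.total.available_in_bytes, not os.mem.free_percent.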

Anyway, it sounds like one of your documents is incorrect. Not sure how this happened.

My recommendation though:

  • upgrade to the latest 7.17 version
  • if possible, split the reindex operation into multiple smaller ones using a query. You will have a better chance of finding which document is failing than if the whole process stops...
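
For example, assuming the documents have some field you can range over (the timestamp field below is hypothetical; use whatever field your mapping actually has), each slice would look like this:

POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "idx_siv",
    "query": {
      "range": {
        "timestamp": {
          "gte": "2022-01-01",
          "lt": "2022-02-01"
        }
      }
    }
  },
  "dest": {
    "index": "siv3"
  }
}

Run one request per range; the slice that fails tells you roughly where the bad document is.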

Hello,
Thank you for your feedback. I have already written a script that reindexes in blocks, but this index has more than 250 million documents, so it will probably take several days.

Please, is there a way to delete all invalid documents, or at least ignore them?

I tried with this, but it didn't work:

POST _reindex?wait_for_completion=false
{
  "conflicts": "proceed",
  "source": {
    "index": "idx1"
  },
  "dest": {
    "index": "idx3"
  }
}

Thanks

I don't even know how this could happen, to be honest...
It might be a bug that has been fixed in a more recent version. That's why I'd recommend upgrading...

Otherwise, what you could try is to run a scroll script that iterates over all your data and, for each hit, tries to parse the _source using jq or similar.

If it fails, print the document _id.

My 2 cents.
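
The scroll idea above can be sketched in Python with only the standard library. This is a sketch, not a definitive script: it assumes an unsecured node on localhost:9200, and it only reports which scroll page fails to parse; rerun with a smaller page_size to narrow down the offending document.

```python
import json
import urllib.request

ES = "http://localhost:9200"   # assumption: local node, security disabled
INDEX = "idx_siv"
HEADERS = {"Content-Type": "application/json"}

def parses(raw):
    """Return True if a raw response body is valid JSON."""
    try:
        json.loads(raw)
        return True
    except json.JSONDecodeError:
        return False

def fetch(url, body):
    """POST a JSON body and return the raw response text, unparsed."""
    req = urllib.request.Request(url, data=json.dumps(body).encode(),
                                 headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")

def scan_index(page_size=1000):
    """Scroll through the index page by page; report the first page whose
    raw body does not parse, then rerun with a smaller page_size."""
    raw = fetch(f"{ES}/{INDEX}/_search?scroll=2m",
                {"size": page_size, "sort": ["_doc"]})
    page = 0
    while True:
        if not parses(raw):
            print(f"page {page} (docs {page * page_size}..) did not parse")
            return
        resp = json.loads(raw)
        if not resp["hits"]["hits"]:
            print("all pages parsed cleanly")
            return
        page += 1
        raw = fetch(f"{ES}/_search/scroll",
                    {"scroll": "2m", "scroll_id": resp["_scroll_id"]})

# scan_index()  # uncomment to run against the live node
```

Sorting by _doc keeps the scroll cheap, and checking the raw body (rather than letting json.load raise mid-stream) lets the loop report the page number before stopping.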

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.