ML job not working when using a partition_field_name with Japanese characters on Windows 10 Home

Hi

I am using Elastic Machine Learning on Windows 10 Home.
An ML job does not work when I use a partition_field_name whose values contain Japanese characters.

However, when I create the same ML job (same input data and configuration) on Linux (using Docker),
the job runs without problems.
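
For reference, the documents in the source index look roughly like the following. The index name suffix, the FMP value, and the pageName value are only illustrative (the real pageName values are Japanese page names); the field names and the epoch_ms timestamp format come from the job config below.

    POST chrome-timeline-ml-2017.12.10/ml
    {
      "@timestamp": 1512831606575,
      "FMP": 1234.5,
      "additionals": {
        "pageName": "トップページ"
      }
    }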

Could you please look into this problem?
I have written up the details below.

  1. Environment

OS: Windows 10 Home
Elasticsearch Version: 5.6.3

  2. Job information
    2.1. Job messages

     2017-12-15 11:23:22 	8zTKjGP 	Job created
     2017-12-15 11:23:22 	8zTKjGP 	Loading model snapshot [N/A], job latest_record_timestamp [N/A]
     2017-12-15 11:23:22 	8zTKjGP 	Opening job on node [{8zTKjGP}{8zTKjGPzRJiCu-AHwkD84Q}{0B9mb2QnQee8mKzAo5hqmQ}{192.168.0.5}{192.168.0.5:9300}{ml.max_open_jobs=10, ml.enabled=true}]
     2017-12-15 11:23:23 	8zTKjGP 	Starting datafeed [datafeed-error_sample] on node [{8zTKjGP}{8zTKjGPzRJiCu-AHwkD84Q}{0B9mb2QnQee8mKzAo5hqmQ}{192.168.0.5}{192.168.0.5:9300}{ml.max_open_jobs=10, ml.enabled=true}]
     2017-12-15 11:23:23 	8zTKjGP 	Datafeed started (from: 2017-12-09T15:00:00.000Z to: 2017-12-10T15:00:00.000Z)
     2017-12-15 11:23:23 	8zTKjGP 	Datafeed is encountering errors submitting data for analysis: [error_sample] Unexpected death of autodetect:
     2017-12-15 11:23:23 	8zTKjGP 	Datafeed stopped

    2.2. ML job config

{
   "job_id":"error_sample",
   "job_type":"anomaly_detector",
   "job_version":"5.6.3",
   "create_time":1513304602513,
   "analysis_config":{
      "bucket_span":"4h",
      "detectors":[
         {
            "detector_description":"mean(FMP)",
            "function":"mean",
            "field_name":"FMP",
            "partition_field_name":"additionals.pageName.keyword",
            "detector_rules":[

            ],
            "detector_index":0
         }
      ],
      "influencers":[
         "additionals.pageName.keyword"
      ]
   },
   "data_description":{
      "time_field":"@timestamp",
      "time_format":"epoch_ms"
   },
   "model_snapshot_retention_days":1,
   "results_index_name":"shared",
   "data_counts":{
      "job_id":"error_sample",
      "processed_record_count":1000,
      "processed_field_count":1948,
      "input_bytes":107050,
      "input_field_count":1948,
      "invalid_date_count":0,
      "missing_field_count":52,
      "out_of_order_timestamp_count":0,
      "empty_bucket_count":0,
      "sparse_bucket_count":0,
      "bucket_count":2,
      "earliest_record_timestamp":1512831606575,
      "latest_record_timestamp":1512862089604,
      "last_data_time":1513304603417,
      "input_record_count":1000
   },
   "model_size_stats":{
      "job_id":"error_sample",
      "result_type":"model_size_stats",
      "model_bytes":2163998,
      "total_by_field_count":58,
      "total_over_field_count":0,
      "total_partition_field_count":57,
      "bucket_allocation_failures_count":0,
      "memory_status":"ok",
      "log_time":1513304603000,
      "timestamp":1512835200000
   },
   "datafeed_config":{
      "datafeed_id":"datafeed-error_sample",
      "job_id":"error_sample",
      "query_delay":"70917ms",
      "frequency":"600s",
      "indices":[
         "chrome-timeline-ml-*"
      ],
      "types":[
         "ml"
      ],
      "query":{
         "match_all":{
            "boost":1
         }
      },
      "scroll_size":1000,
      "chunking_config":{
         "mode":"auto"
      },
      "state":"stopped"
   },
   "state":"failed",
   "node":{
      "id":"8zTKjGPzRJiCu-AHwkD84Q",
      "name":"8zTKjGP",
      "ephemeral_id":"0B9mb2QnQee8mKzAo5hqmQ",
      "transport_address":"192.168.0.5:9300",
      "attributes":{
         "ml.max_open_jobs":"10",
         "ml.enabled":"true"
      }
   },
   "open_time":"31s"
}
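
If it helps to reproduce the problem, the job above should be recreatable with roughly this request (analysis_config and data_description copied from the config above; everything else left at defaults):

    PUT _xpack/ml/anomaly_detectors/error_sample
    {
      "analysis_config": {
        "bucket_span": "4h",
        "detectors": [
          {
            "detector_description": "mean(FMP)",
            "function": "mean",
            "field_name": "FMP",
            "partition_field_name": "additionals.pageName.keyword"
          }
        ],
        "influencers": [ "additionals.pageName.keyword" ]
      },
      "data_description": {
        "time_field": "@timestamp",
        "time_format": "epoch_ms"
      }
    }
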
  3. Elasticsearch log message

    [2017-12-15T11:23:23,261][INFO ][o.e.x.m.a.PutDatafeedAction$TransportAction] [8zTKjGP] Created datafeed [datafeed-error_sample]
    [2017-12-15T11:23:23,392][INFO ][o.e.x.m.d.DatafeedManager] Starting datafeed [datafeed-error_sample] for job [error_sample] in [2017-12-09T15:00:00.000Z, 2017-12-10T15:00:00.000Z)
    [2017-12-15T11:23:23,404][ERROR][o.e.x.m.j.p.a.NativeAutodetectProcess] [error_sample] autodetect process stopped unexpectedly
    [2017-12-15T11:23:23,425][ERROR][o.e.x.m.j.p.a.AutodetectCommunicator] [error_sample] Unexpected death of autodetect:
    [2017-12-15T11:23:23,426][ERROR][o.e.x.m.j.p.a.AutodetectCommunicator] [error_sample] Unexpected exception writing to process
    org.elasticsearch.ElasticsearchException: [error_sample] Unexpected death of autodetect:
    at org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.checkProcessIsAlive(AutodetectCommunicator.java:254) ~[?:?]
    at org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.access$300(AutodetectCommunicator.java:63) ~[?:?]
    at org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator$1.doRun(AutodetectCommunicator.java:300) ~[?:?]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.6.3.jar:5.6.3]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.6.3.jar:5.6.3]
    at org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:564) ~[?:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_151]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_151]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.3.jar:5.6.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
    [2017-12-15T11:23:23,430][INFO ][o.e.x.m.d.DatafeedManager] [no_realtime] attempt to stop datafeed [datafeed-error_sample] for job [error_sample]
    [2017-12-15T11:23:23,430][INFO ][o.e.x.m.d.DatafeedManager] [no_realtime] try lock [20s] to stop datafeed [datafeed-error_sample] for job [error_sample]...
    [2017-12-15T11:23:23,430][INFO ][o.e.x.m.d.DatafeedManager] [no_realtime] stopping datafeed [datafeed-error_sample] for job [error_sample], acquired [true]...
    [2017-12-15T11:23:23,430][INFO ][o.e.x.m.d.DatafeedManager] [no_realtime] datafeed [datafeed-error_sample] for job [error_sample] has been stopped
    [2017-12-15T11:23:23,482][INFO ][o.e.x.m.j.p.a.AutodetectProcessManager] [8zTKjGP] Successfully set job state to [failed] for job [error_sample]
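
In case it is useful, the failed state can also be checked after the error with the job stats API, for example:

    GET _xpack/ml/anomaly_detectors/error_sample/_stats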

  4. Input data
    I dumped the input data with elasticsearchdump, but I cannot upload a zip file of it; the forum rejects the upload with:

Sorry, the file you are trying to upload is not authorized (authorized extensions: jpg, jpeg, png, gif).

How can I provide the input data?
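
If pasting a small sample inline is acceptable, I can extract a few documents with a query like this (the _source field list is taken from the job config; the size is arbitrary):

    GET chrome-timeline-ml-*/_search
    {
      "size": 3,
      "_source": [ "@timestamp", "FMP", "additionals.pageName" ],
      "query": { "match_all": {} }
    }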

Thank you.
