My mapping with Python is not working

Hello community,
I was trying to upload a simple CSV file, but unfortunately my mapping is not working. All my fields are still "keyword". What am I missing?

import csv
from elasticsearch import Elasticsearch, helpers

def csv_reader(filename, indexname):
    # Connect to Elasticsearch; host and port are configurable, these are the defaults
    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
    # If the cluster is reachable, print "connected", otherwise "Not connected"
    if es.ping():
        print("connected")
    else:
        print("Not connected")

    with open(filename, 'r') as csvfile:
        reader = csv.DictReader(csvfile)
        Settings = {
            "settings": {
                "numer_of_shards": 1,
                "number_of_replicas": 0
            },
            "mappings": {
                "members": {
                    "dynamic": "strict",
                    "properties": {
                        "_id": {
                            "type": "long"
                        },
                        "name": {
                            "type": "text"
                        },
                        "gefundene_fehler": {
                            "type": "long"
                        },
                        "behobene_fehler": {
                            "type": "long"
                        },
                        "updated_time": {
                            "type": "date"
                        },
                        "time": {
                            "type": "date"
                        },
                        "timestamp": {
                            "type": "date"
                        }
                    }
                }
            }
        }
        if es.indices.exists(index=indexname):
            print("deleting existing index")
            es.indices.delete(index=indexname, ignore=[400,404])
            print("creating new index1")
            es.indices.create(index=indexname, ignore=400, body=Settings)
            helpers.bulk(es, reader, index=indexname)
        else:
            es.indices.create(index=indexname, ignore=400, body=Settings)
            helpers.bulk(es, reader, index=indexname)
            print("creating new index2")
    print("all lines loaded")

It sounds like you are using an old version of Elasticsearch. Is that right?

Hello,
I am using ES 7.10.2

Is it maybe possible that, when I use the csv.DictReader(outfile) command, my columns are read in as strings?
Thank you
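Yes: csv.DictReader returns every value as a str, no matter what the column contains. A minimal, standalone sketch (the sample data is hypothetical, modeled on the CSV columns used above), including one hedged way to convert numeric fields before handing rows to helpers.bulk:

```python
import csv
import io

# A tiny in-memory CSV standing in for the real file (hypothetical data)
sample = io.StringIO(
    "name,gefundene_fehler,behobene_fehler\n"
    "ProjectR,7,9\n"
)

reader = csv.DictReader(sample)
row = next(reader)

# Every value comes back as a str, regardless of what the column "looks like"
assert all(isinstance(v, str) for v in row.values())

# Convert numeric columns explicitly before handing rows to helpers.bulk
def convert(row):
    row = dict(row)
    for key in ("gefundene_fehler", "behobene_fehler"):
        row[key] = int(row[key])
    return row

doc = convert(row)
print(doc)  # {'name': 'ProjectR', 'gefundene_fehler': 7, 'behobene_fehler': 9}
```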

The correct setting name is "number_of_shards".

In version 7.10.2 this mapping type ("members") will not work; custom mapping types were removed in 7.x.

The _id field is reserved by Elasticsearch; change it to another name like "[something]_id".

Thank you for your response, but unfortunately it's not working. I just want to upload a CSV file to Elasticsearch. If I try it like this, I get the following errors in PyCharm:

  1. line 399, in bulk for ok, item in streaming_bulk(client, actions, *args, **kwargs)
  2. line 320, in streaming_bulk for data, (ok, info) in zip(
  3. line 249, in _process_bulk_chunk for item in gen:
  4. line 188, in _process_bulk_chunk_success raise BulkIndexError("%i document(s) failed to index." % len(errors), errors)
    Elasticsearch.helpers.errors.BulkIndexError: ('2 document(s) failed to index.', [{'index': {'_index': 'mdfadf', '_type': '_doc', '_id': 'UmY3N4ABaUVkc-44MXpt', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': "failed to parse field [time] of type [date] in document with id 'UmY3N4ABaUVkc-44MXpt'. Preview of field's value: '2022-01-28 13:03:29'", 'caused_by': {'type': 'illegal_argument_exception', 'reason': 'failed to parse date field [2022-01-28 13:03:29] with format [strict_date_optional_time||epoch_millis]', 'caused_by': {'type': 'date_time_parse_exception', 'reason': 'Failed to parse with all enclosed parsers'}}}, 'data': {'name': 'ProjectR', 'gefunde_fehler': '7', 'behobene_fehler': '9', 'time': '2022-01-28 13:03:29', 'updated_time': '2022-04-17 13:10:01.337150'}}}, {'index': {'_index': 'mdfadf', '_type': '_doc', '_id': 'U2Y3N4ABaUVkc-44MXpt', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': "failed to parse field [time] of type [date] in document with id 'U2Y3N4ABaUVkc-44MXpt'. Preview of field's value: '2022-02-16 17:00:26'", 'caused_by': {'type': 'illegal_argument_exception', 'reason': 'failed to parse date field [2022-02-16 17:00:26] with format [strict_date_optional_time||epoch_millis]', 'caused_by': {'type': 'date_time_parse_exception', 'reason': 'Failed to parse with all enclosed parsers'}}}, 'data': {'name': 'ProjectB', 'gefunde_fehler': '2', 'behobene_fehler': '8', 'time': '2022-02-16 17:00:26', 'updated_time': '2022-04-17 13:10:01.337150'}}}])

This is my mapping:

    Settings = {
        "settings": {
            "number_of_shards": 1,
            "number_of_replicas": 0
        },
        "mappings": {
            "properties": {
                "name": {
                    "type": "text"
                },
                "gefundene_fehler": {
                    "type": "long"
                },
                "behobene_fehler": {
                    "type": "long"
                },
                "updated_time": {
                    "type": "date"
                },
                "time": {
                    "type": "date"
                },
                "timestamp": {
                    "type": "date"
                }
            }
        }
    }

If I try to upload the CSV file in Kibana with the "Import CSV" function, then I get the following error:

I don't know how to solve this issue.
Thanks

You need to fix the date format for your date fields.

You need to apply explicit date formats to your fields.

For '2022-02-16 17:00:26', use yyyy-MM-dd HH:mm:ss.

For '2022-04-17 13:10:01.337150', use strict_date_optional_time_nanos.

Note that strict_date_optional_time_nanos requires a 'T' in the "updated_time" values, like this: 2022-04-17T13:10:01.337150. Alternatively, save the value without the fractional seconds; in that case you can use the same format as the time field.

  "updated_time": {
          "type": "date",
          "format": "strict_date_optional_time_nanos"
        },
        "time": {
          "type": "date",
          "format": "yyyy-MM-dd HH:mm:ss"
        },

See the Elasticsearch date format documentation for more info.
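The formats above line up with Python's own datetime handling; a quick standalone check (no Elasticsearch needed, timestamp values taken from the error output above):

```python
from datetime import datetime

# "yyyy-MM-dd HH:mm:ss" in Elasticsearch corresponds to this strptime pattern
ts = datetime.strptime("2022-02-16 17:00:26", "%Y-%m-%d %H:%M:%S")
assert ts.year == 2022 and ts.second == 26

# strict_date_optional_time_nanos expects a 'T' separator;
# datetime.isoformat() produces exactly that shape
iso = datetime(2022, 4, 17, 13, 10, 1, 337150).isoformat()
print(iso)  # 2022-04-17T13:10:01.337150
```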

Hello,
first of all I want to thank you both for your help. It finally worked. You guys are legends.

                   "updated_time": {
                        "type": "date",
                        "format": "yyyy-MM-dd HH:mm:ss"
                    },
                    "time": {
                        "type": "date",
                        "format": "yyyy-MM-dd HH:mm:ss"
                    }

This is my mapping, and for updated_time, instead of

datetime.now()

I used

datetime.now().strftime("%Y-%m-%d %H:%M:%S")

Thanks again for your help
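For what it's worth, the strftime pattern above produces exactly the yyyy-MM-dd HH:mm:ss shape the mapping expects (a fixed timestamp is used here so the output is deterministic):

```python
from datetime import datetime

# Fixed timestamp so the output is deterministic (hypothetical value)
now = datetime(2022, 4, 17, 13, 10, 1, 337150)

formatted = now.strftime("%Y-%m-%d %H:%M:%S")
print(formatted)  # 2022-04-17 13:10:01  (fractional seconds dropped)
assert "T" not in formatted
```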


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.