Word Delimiter Graph Token Filter + Synonym Graph Token Filter

I want to use a word delimiter graph token filter to avoid splitting terms that contain "+" and "-", e.g. so "3+c" stays as ["3+c"] instead of becoming ["3", "c"].
I also want to define a list of synonyms to be applied to this field after tokenization and the delimiter filter.

The documentation says:

If you need to build analyzers that include both multi-token filters and synonym filters, consider using the multiplexer filter, with the multi-token filters in one branch and the synonym filter in the other.

but I don't see how my delimiter can generate multiple tokens, since catenate_all, catenate_numbers, catenate_words and preserve_original are all false.
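To make the expected behaviour concrete, this is the kind of _analyze request I use to check the output (`my_index` is just a placeholder for my index name):

```
POST my_index/_analyze
{
    "analyzer": "my_analyzer",
    "text": "3+c"
}
```

With "+" and "-" mapped to ALPHA in the type_table, I expect this to return the single token "3+c".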

But when I tried to create the index without the multiplexer:

"my_analyzer": {
    "type": "custom",
    "tokenizer": "keyword",
    "filter": [
        "my_delimiter","my_synonyms"
    ]
}

it gives me this error:
Token filter [my_delimiter] cannot be used to parse synonyms
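For completeness, that analyzer was part of a full create-index request roughly like this (`my_index` is a placeholder; the two filters are defined exactly as in the multiplexer attempt below):

```
PUT my_index
{
    "settings": {
        "analysis": {
            "analyzer": {
                "my_analyzer": {
                    "type": "custom",
                    "tokenizer": "keyword",
                    "filter": [
                        "my_delimiter",
                        "my_synonyms"
                    ]
                }
            },
            "filter": {
                "my_delimiter": {
                    "type": "word_delimiter_graph",
                    "type_table": [
                        "+ => ALPHA",
                        "- => ALPHA"
                    ],
                    "split_on_case_change": false,
                    "split_on_numerics": false,
                    "stem_english_possessive": false
                },
                "my_synonyms": {
                    "type": "synonym_graph",
                    "synonyms_path": "analysis/synonym.txt"
                }
            }
        }
    }
}
```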

And when I tried it with the multiplexer:

{
    "settings": {
        "analysis": {
            "analyzer": {
                "my_analyzer": {
                    "type": "custom",
                    "tokenizer": "keyword",
                    "filter": [
                        "my_multiplexer"
                    ]
                }
            },
            "filter": {
                "my_multiplexer": {
                    "type": "multiplexer",
                    "filters": [
                        "my_delimiter",
                        "my_synonyms"
                    ]
                },
                "my_synonyms": {
                    "type": "synonym_graph",
                    "synonyms_path": "analysis/synonym.txt"
                },
                "my_delimiter": {
                    "type": "word_delimiter_graph",
                    "type_table": [
                        "+ => ALPHA",
                        "- => ALPHA"
                    ],
                    "split_on_case_change": false,
                    "split_on_numerics": false,
                    "stem_english_possessive": false
                }
            }
        }
    }
}

it still fails, but now the error is "Increment must be zero or greater: -1".
I also tried using the flatten_graph filter, without success.
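That flatten_graph attempt looked roughly like this (appended as the last filter of the analyzer; I'm not sure this placement is correct, which may be part of the problem):

```
"my_analyzer": {
    "type": "custom",
    "tokenizer": "keyword",
    "filter": [
        "my_multiplexer",
        "flatten_graph"
    ]
}
```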
What am I doing wrong? Is my use of the multiplexer filter correct?
Thanks in advance