Create a Custom Elasticsearch Template


This post will show you how to create a custom index template for Elasticsearch.

Why would you need to create a custom template? Let's say you are storing ASN data in your Elasticsearch index.
For example…

  • Google Inc. – 15 messages
  • Facebook Inc. – 25 messages
  • Linkedin Inc. – 33 messages

When you query the index on the ASN field, you are going to get 15 hits for Google, 25 hits for Facebook, 33 hits for Linkedin and 73 hits for Inc. This is because, by default, Elasticsearch creates the index mapping automatically and analyzes each string field, splitting it into tokens at spaces when indexing. So what if you want the whole field value returned as a single result, so that something like “Google Inc.” shows up as 15 hits? You have to create an Elasticsearch index template. Note: creating a template will not magically modify old indices; that data has already been indexed. The template only applies to indices created in Elasticsearch after you add the template.
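
If you want to see exactly what the analyzer does to one of these values, a quick sanity check (just a sketch, assuming Elasticsearch 1.x and its _analyze API with the default standard analyzer) is to run the string through it:

curl -XPOST 'http://localhost:9200/_analyze?analyzer=standard&pretty' -d 'Google Inc.'

The response comes back with two separate tokens, google and inc, which is exactly why “Inc” piles up its own hit count.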

First thing you need to do is figure out the naming scheme for your indices. Knowing the name pattern for new indices lets the template you are about to create apply only to those indices and not to other indices in Elasticsearch. I use Logstash to ship everything to Elasticsearch and the default index naming pattern is logstash-YYYY.MM.DD, so in my template I will use logstash* with the asterisk acting as a wildcard. If you’re not using Logstash and are unsure of the naming, go to /var/lib/elasticsearch and look in the indices folder to see the names of your current indices. Remember this for when we create the template.
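
If you would rather not dig around on disk, you can also ask Elasticsearch for the index names directly (a small sketch assuming the _cat API that ships with Elasticsearch 1.x):

curl 'http://localhost:9200/_cat/indices?v'

That prints one line per index, so you can confirm the naming pattern before you write the template.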

Next you want to find the type name inside a current index so the template only matches the types you want it to match. Open your browser, type http://localhost:9200/_all/_mapping?pretty=1 in the URL bar and hit enter.

For example: I see

"logstash-2014.09.30": {
        "cisco-fw": {
            "properties": {

My logstash config file tags anything coming in on a certain port as type cisco-fw, so the type in Elasticsearch is cisco-fw, but yours might be default or something else. Remember this for when we create the template.
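
If the _all/_mapping dump is too noisy, you can also pull the mapping for just one index (a sketch; swap in one of your own index names from the previous step):

curl -XGET 'http://localhost:9200/logstash-2014.09.30/_mapping?pretty'

The keys directly under the index name are the types, and that is the name that goes in the mappings section of the template.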

Next, open up notepad (or any text editor) so we can start creating the template.

Here is the template for your template (yo dawg):

curl -XPUT http://localhost:9200/_template/logstash_per_index -d '
{
    "template" : "logstash*",
    "mappings" : {
      "cisco-fw" : {
         "properties" : {
         }
      }
    }
}'

Here logstash_per_index is the name you want to give the template, logstash* is the index naming pattern, cisco-fw is the type, and properties is where the field definitions will go.

Now, under properties, you are going to set the type and options for each field by name. For my example I am doing ASN values, so I would write

"asn":{"type":"string", "index":"not_analyzed"}

The ASN field is a string (obviously) and index is set to not_analyzed. not_analyzed means the field is still searchable with a query, but it does not go through the analysis process and is not broken down into tokens. This is what lets “Google Inc.” show up as one result when querying Elasticsearch. Do this for all the fields in your index. For example, here is my completed template…

#!/bin/sh
curl -XPUT http://localhost:9200/_template/logstash_per_index -d '
{
    "template" : "logstash*",
    "mappings" : {
      "cisco-fw" : {
         "properties": {
            "@timestamp":{"type":"date","format":"dateOptionalTime"},
            "@version":{"type":"string", "index" : "not_analyzed"},
            "action":{"type":"string"},
            "bytes":{"type":"long"},
            "cisco_message":{"type":"string"},
            "ciscotag":{"type":"string", "index" : "not_analyzed"},
            "connection_count":{"type":"long"},
            "connection_count_max":{"type":"long"},
            "connection_id":{"type":"string"},
            "direction":{"type":"string"},
            "dst_interface":{"type":"string"},
            "dst_ip":{"type":"string"},
            "dst_mapped_ip":{"type":"ip"},
            "dst_mapped_port":{"type":"long"},
            "dst_port":{"type":"long"},
            "duration":{"type":"string"},
            "err_dst_interface":{"type":"string"},
            "err_dst_ip":{"type":"ip"},
            "err_icmp_code":{"type":"string"},
            "err_icmp_type":{"type":"string"},
            "err_protocol":{"type":"string"},
            "err_src_interface":{"type":"string"},
            "err_src_ip":{"type":"ip"},
            "geoip":{
               "properties":{
                  "area_code":{"type":"long"},
                  "asn":{"type":"string", "index":"not_analyzed"},
                  "city_name":{"type":"string", "index":"not_analyzed"},
                  "continent_code":{"type":"string"},
                  "country_code2":{"type":"string"},
                  "country_code3":{"type":"string"},
                  "country_name":{"type":"string", "index":"not_analyzed"},
                  "dma_code":{"type":"long"},
                  "ip":{"type":"ip"},
                  "latitude":{"type":"double"},
                  "location":{"type":"geo_point"},
                  "longitude":{"type":"double"},
                  "number":{"type":"string"},
                  "postal_code":{"type":"string"},
                  "real_region_name":{"type":"string", "index":"not_analyzed"},
                  "region_name":{"type":"string", "index":"not_analyzed"},
                  "timezone":{"type":"string"}
               }
            },
            "group":{"type":"string"},
            "hashcode1": {"type": "string"},
            "hashcode2": {"type": "string"},
            "host":{"type":"string"},
            "icmp_code":{"type":"string"},
            "icmp_code_xlated":{"type":"string"},
            "icmp_seq_num":{"type":"string"},
            "icmp_type":{"type":"string"},
            "interface":{"type":"string"},
            "is_local_natted":{"type":"string"},
            "is_remote_natted":{"type":"string"},
            "message":{"type":"string"},
            "orig_dst_ip":{"type":"ip"},
            "orig_dst_port":{"type":"long"},
            "orig_protocol":{"type":"string"},
            "orig_src_ip":{"type":"ip"},
            "orig_src_port":{"type":"long"},
            "policy_id":{"type":"string"},
            "protocol":{"type":"string"},
            "reason":{"type":"string"},
            "seq_num":{"type":"long"},
            "spi":{"type":"string"},
            "src_interface":{"type":"string"},
            "src_ip":{"type":"string"},
            "src_mapped_ip":{"type":"ip"},
            "src_mapped_port":{"type":"long"},
            "src_port":{"type":"long"},
            "src_xlated_interface":{"type":"string"},
            "src_xlated_ip":{"type":"ip"},
            "syslog_facility":{"type":"string"},
            "syslog_facility_code":{"type":"long"},
            "syslog_pri":{"type":"string"},
            "syslog_severity":{"type":"string"},
            "syslog_severity_code":{"type":"long"},
            "tags":{"type":"string"},
            "tcp_flags":{"type":"string"},
            "timestamp":{"type":"string"},
            "tunnel_type":{"type":"string"},
            "type":{"type":"string"},
            "user":{"type":"string"},
            "xlate_type":{"type":"string"}
      }
    }
  }
}'

Once you have set all the types you need, it is time to add the template to Elasticsearch. You can either save the file, make it a script and run it from a terminal window, or copy the text and paste it into a terminal window. You should see a {"ok":true,"acknowledged":true} response if everything was formatted properly.
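
To double check that Elasticsearch actually stored what you sent, you can read the template back (a quick sketch, using the template name from this post):

curl -XGET 'http://localhost:9200/_template/logstash_per_index?pretty'

If the response comes back empty, the PUT did not go through, which usually means a stray quote or comma somewhere in the JSON.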

And that is it. You will only see the fruits of your labor when a new index is created that matches the pattern set in your template file (i.e. logstash*). Because of my naming scheme with Logstash, new indices are only created at the start of the day (logstash-YYYY.MM.DD), so I had to wait until the next day to see if my template was working properly. If you are impatient, cannot wait to see whether it worked and don’t care about losing the data in your current index, then you can delete it from Elasticsearch by issuing the following curl command in a terminal window:

curl -XDELETE localhost:9200/index_name

where index_name is the name of your index (ex. logstash-2014.12.11).

Helpful tip: if data stops showing up in the index, it is very possible that you messed up one of the field types in your template file. I am writing this because I ran into this issue and could not figure out why there was no data in my index. To troubleshoot, go to /var/log/elasticsearch and open the log file for the date when data was not going into the Elasticsearch index properly (it should be a lot bigger than the other log files). In my log file, I was seeing this error multiple times:

org.elasticsearch.index.mapper.MapperParsingException: failed to parse [protocol]

What happened was that, thinking protocol meant a port number (ex. 25, 80, 443, etc.), I set the protocol field to type long. To my surprise, protocol was either TCP or UDP, so it should have been type string. Based on my template, Elasticsearch was expecting a long to index but was getting strings instead, so it rejected the documents. Rather than modifying the template file on the server, I decided to delete the template from Elasticsearch, make my change to the protocol field and then re-upload the template to Elasticsearch. To do that, I opened a terminal and typed

curl -XDELETE http://localhost:9200/_template/logstash_per_index

where logstash_per_index is the name of the template. That command deletes the template from Elasticsearch. Make your changes to the template in notepad and then add it back to Elasticsearch.

Since the template only applies to newly created indices, and your index had no data in it because of the incorrect template, you can go ahead and delete that index; the next one that gets created will pick up the modified template.

curl -XDELETE localhost:9200/index_name

where index_name is the name of your index (ex. logstash-2014.12.11).
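
Putting the whole fix together, the sequence looks something like this (a sketch using the names from this post; fix_template.sh is just a hypothetical name for wherever you saved the corrected curl command, and the index name is only an example):

#!/bin/sh
# remove the bad template from Elasticsearch
curl -XDELETE http://localhost:9200/_template/logstash_per_index
# re-upload the corrected template (protocol is now a string)
sh ./fix_template.sh
# throw away the empty index so the next one is created with the fixed template
curl -XDELETE http://localhost:9200/logstash-2014.12.11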

And that is it. Leave a comment down below if you found this information helpful or if you have any questions for me. Good luck!

26 thoughts on “Create a Custom Elasticsearch Template”

  1. We have a custom application-related log file.

    (1) How do we determine the various properties from a log file?
    (2) How does the property name map to the real entry in the log file?
    (3) How do we distinguish among them?

    1. (1) How do we determine the various properties from a log file?

      You just have to know, and that may come through trial and error. What I would do first is create an index, insert your custom data and see what type Elasticsearch assigns to it (see the quick sketch after the table below). Then you can view the mapping by issuing curl -XGET 'http://localhost:9200/index_name/_mapping/' to see what it did. Elasticsearch is not perfect; if it is not sure, it will usually set the type as a string. There are only a few types to choose from, you just have to look at your data and determine what the type should be. You can see all the types here: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-types.html Basically you have

      Type       Definition
      string     Text
      integer    32-bit integers
      long       64-bit integers
      float      IEEE float
      double     Double precision floats
      boolean    true or false
      date       UTC date/time
      geo_point  Latitude/longitude
      ip         Numeric IPv4 address
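
      One way to experiment (just a quick sketch, using a throwaway index name test_mapping that you can call anything) is to index a single sample document, look at what Elasticsearch guessed, then throw the index away:

      curl -XPUT 'http://localhost:9200/test_mapping/doc/1' -d '{"asn":"Google Inc.","bytes":512}'
      curl -XGET 'http://localhost:9200/test_mapping/_mapping?pretty'
      curl -XDELETE 'http://localhost:9200/test_mapping'

      Compare the guessed types against the table above and override the ones you disagree with in your own template.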

      (2) How does the property name map to the real entry in the log file?

      (3) How do we distinguish among them?

      Not sure what you mean here. Care to elaborate?

  2. This ELK stack is a fast moving target 🙂
    Indices are now here: /var/lib/elasticsearch/elasticsearch/nodes/0/indices.

    I’m using a test system and there is 1 node, but judging by the path structure indices are probably listed per node.

  3. Pingback: URL
  4. Can you please explain the steps to execute the template on Windows?
    1: I have installed cygwin on my machine
    2: Basic curl commands are working
    Question: If I’m not running Elasticsearch as a service and am just pushing the log with a file input, then
    how can I test this? Should we send the template before or after sending the log?

    1. Cygwin works for simple things but I find it doesn’t work well for things like multi-line curl posts. If I were doing this on Windows, the better option would be to install the Poster extension for Firefox https://addons.mozilla.org/en-US/firefox/addon/poster/ and post through that. http://i.imgur.com/yYFDcV8.png

      You do not have to run elasticsearch as a service. You can run it through the command prompt.
      1. Open the command prompt
      2. Change to the root of the elasticsearch dir (ex: cd C:\ELK\elasticsearch)
      3. Run the bat file. (bin\elasticsearch.bat)

      Then you can just post to 127.0.0.1:9200

      You want to post the template before sending the log file because once the file is indexed in elasticsearch then you’re done and the template has no outcome.

  5. Hi Jack. Thanks for sharing. Can I ask you a question? I also created a custom index template named "ddl_template", but it stopped working after some minutes. The template "ddl_template" still exists, but its content has been changed to the content of the default "logstash" template.

    The following is part of the content of "ddl_template":
    #!/bin/sh
    curl -XPUT 'http://localhost:9200/_template/ddl_template' -d '
    {
      "template": "ddl*",
      "order": 1,
      "settings": {
        "index.number_of_shards": 5,
        "number_of_replicas": 1,
        "index.refresh_interval": "5s"
      },
      "aliases": {
        "ddl.alias": { }
      },
      "mappings": {
        "ddl": {
          "_source": {"enabled" : true},
          "_all": {"enabled" : false},
          "properties": {
            "@timestamp": {
              "type": "date",
              "format": "dateOptionalTime",
              "doc_values": true
            }
          }
        }
      }
    }'

    If you have some points about my question, please send email to me. Thank you!

    1. Hello Jason.

      Because it is falling back to the default logstash template, I am wondering if the setting in logstash is the issue. "template": "ddl*" applies to any index that starts with ddl. Logstash by default sends to an index named logstash-YYYY.MM.DD. So, if you have this…

      elasticsearch {
        hosts => "10.0.0.123"
      }

      Then logstash will just go to logstash-YYYY.MM.DD in elasticsearch.

      You need to specify the index in your logstash config. So like this…
      output {
        elasticsearch {
          hosts => "10.0.0.123"
          index => "ddl-%{+YYYY.MM.dd}"
        }
      }

      and that should match the template. Let me know if this works.

  6. Hi Jack,

    Thank you for your answer. I found the reason. It is not a problem with the custom template, but with the settings in the logstash elasticsearch output. I use elasticsearch as the output of logstash and want to use my custom template ddl_template. In fact, I don’t need to do anything in the logstash elasticsearch output other than create ddl_template with the REST API on Elasticsearch. However, I had made some settings in the logstash elasticsearch output, such as the attributes "manage_template", "template_name" and "template_overwrite", which are actually used for templates pushed by logstash, and I had not dealt with them before.

    To sum up, I think there are two methods to use a custom index template. One is creating the template with the REST API on Elasticsearch (which I used), the other is pushing the template from logstash (which needs settings in the logstash elasticsearch output). Besides, the first way needs the "protocol" attribute to be set. I failed to test the second method, but I am sure it works.

    I have studied Elasticsearch for less than three months, so maybe there are some problems with my points; I am happy to discuss them with you. Thanks again, Jack. (ps: English is not my mother language, if there are any points you cannot read, please forgive me. ^_^ )

  7. First I would like to state that I am new to Elasticsearch, similar to the question above. I am attempting to create a new index within elasticsearch. I have specified within my logstash.conf to set the index to "mls-%{+YYYY.MM.dd}". After recycling my elasticsearch instance I expected my logs to be pushed to the mls-%{+YYYY.MM.dd} index within elasticsearch, but time after time the logs continually show up in the logstash-%{+YYYY.MM.dd} index.

    Besides the configuration file, is there another place to set the index? Does the version of elasticsearch matter? I am using the latest.

  8. I have created a template that only matches specific fields, leaving the others out. I imported the template without issue, and the index was created using the new template. However, my concern is that I can still see some of the fields I do not want in the index, even though I didn’t add them to the template. What is happening? Here is a link to my template.

    http://pastebin.com/DUEST8EH

    Thanks in advance

    1. If you do not want certain fields to show up, you have to remove them before they are sent to Elasticsearch, in the filter section of your logstash config, using a mutate filter. Ex:

      mutate {
        replace => [ "DestinationGeo.location", "" ]
      }

  9. Hello
    I’m new to Elasticsearch and I need to create an index name automatically. I don’t know if I’m doing something wrong in this template example. I made the following template:
    curl -XPUT http://localhost:9200/_template/prueba -d '
    {
      "template" : "test*",
      "mappings" : {
        "test" : {
          "properties": {
            "@timestamp":{"type":"date","format":"dateOptionalTime"},
            "@version":{"type":"string", "index" : "not_analyzed"},
            "action":{"type":"string"}
          }
        }
      }
    }'

    But when I create an index called test with curl -XPUT 'http://localhost:9200/test/', it is not created in the form test-YYYY-MM-DD. What am I doing wrong?
    Please, could somebody explain it to me, with an example, step by step?

    Thx in advance

    1. The dates come from logstash. What you are doing is just making an index called test. Can you show me the elasticsearch portion of your logstash config file? It should look like this…

      elasticsearch {
        hosts => "IP.OF.ELASTICSEARCH.SERVER"
        index => "test-%{+YYYY.MM.dd}"
      }

  10. Thank you for this. It helped me get through a complex mapping procedure and I really appreciate the time you put into this.

  11. Hi
    I am not getting {"ok":true,"acknowledged":true} when I run my config file through logstash.
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        template => "C:\Users\1006541\Desktop\RAMA\template2.txt"
        template_name => "logstash_indexPattern_line"
        action => "index"
        index => "data3"
      }
      stdout { codec => rubydebug }
    }
    I am not getting any sort of output.
    Is this the correct way to do it?
    Thanks in advance!!!

  12. hi
    I got {"acknowledged":true}. The error was in how we ran the request in Postman:
    correct: POST localhost:9200/_template/testindextemplate
    incorrect: POST localhost:9200/_template/testindextemplate -d

    1. The -d flag is used with curl in bash to specify the data you wish to send.

      It’s essentially the POST data you want to send; in Postman you would attach it as the request body instead 🙂

  13. Hi Jack ,

    I am facing a problem with an index template.

    I have created the template, but new indices are still not being created as per the index pattern I mentioned in the template.
