{"id":379,"date":"2015-06-16T14:09:49","date_gmt":"2015-06-16T18:09:49","guid":{"rendered":"https:\/\/jackhanington.com\/blog\/?p=379"},"modified":"2015-06-16T16:13:32","modified_gmt":"2015-06-16T20:13:32","slug":"send-cisco-asa-syslogs-to-elasticsearch-using-logstash","status":"publish","type":"post","link":"https:\/\/jackhanington.com\/blog\/2015\/06\/16\/send-cisco-asa-syslogs-to-elasticsearch-using-logstash\/","title":{"rendered":"Send Cisco ASA Syslogs to Elasticsearch Using Logstash"},"content":{"rendered":"<p>This guide is a continuation of this blog post <a href=\"\/\/jackhanington.com\/blog\/2015\/05\/06\/install-kibana-4-and-elasticsearch-on-ubuntu\/\">here<\/a>. The following assumes that you\u00a0already have an Elasticsearch instance set up and ready to go. \u00a0This post will walk you through installing and setting up logstash for sending Cisco ASA messages to an Elasticsearch index.<\/p>\n<p>As of\u00a0today (6\/16\/2015), version 1.5.1 is the latest stable release of Logstash so I will be using 1.5.1 in my setup guide.<\/p>\n<hr \/>\n<p><strong>OPTIONAL<\/strong><\/p>\n<p>Before we begin sending data to Elasticsearch, it is probably a good idea to set up a custom template in Elasticsearch. This will set it so that specific fields are set for the correct types of data.<\/p>\n<p>For example: By default, all data passed to a new index in elasticsearch is treated as a string type. You wouldn&#8217;t want this for something like the bytes field in case you wanted to add up all the bytes for a specific time window search. So you create a custom template so that the bytes field is stored as type long. This is totally optional but it is probably in your benifit to do this before you start collecting your ASA syslogs. 
You can read more about custom elasticsearch templates <a href=\"\/\/jackhanington.com\/blog\/2014\/12\/11\/create-a-custom-elasticsearch-template\/\">here<\/a> or you can just copy mine by doing this&#8230;<\/p>\n<p>Copy the text below, change the IP address at the top to the IP of your elasticsearch server and save it as template.sh\u00a0on your desktop.<\/p>\n<pre>#!\/bin\/sh\r\ncurl -XPUT http:\/\/10.0.0.112:9200\/_template\/logstash_per_index -d '\r\n{\r\n    \"template\" : \"logstash*\",\r\n    \"mappings\" : {\r\n      \"cisco-fw\" : {\r\n         \"properties\": {\r\n            \"@timestamp\":{\"type\":\"date\",\"format\":\"dateOptionalTime\"},\r\n            \"@version\":{\"type\":\"string\", \"index\" : \"not_analyzed\"},\r\n\t    \"action\":{\"type\":\"string\"},\r\n\t    \"bytes\":{\"type\":\"long\"},\r\n\t    \"cisco_message\":{\"type\":\"string\"},\r\n\t    \"ciscotag\":{\"type\":\"string\", \"index\" : \"not_analyzed\"},\r\n\t    \"connection_count\":{\"type\":\"long\"},\r\n            \"connection_count_max\":{\"type\":\"long\"},\r\n\t    \"connection_id\":{\"type\":\"string\"},\r\n            \"direction\":{\"type\":\"string\"},\r\n            \"dst_interface\":{\"type\":\"string\"},\r\n\t    \"dst_ip\":{\"type\":\"ip\"},\r\n            \"dst_mapped_ip\":{\"type\":\"ip\"},\r\n\t    \"dst_mapped_port\":{\"type\":\"long\"},\r\n            \"dst_port\":{\"type\":\"long\"},\r\n            \"duration\":{\"type\":\"string\"},\r\n\t    \"err_dst_interface\":{\"type\":\"string\"},\r\n\t    \"err_dst_ip\":{\"type\":\"ip\"},\r\n\t    \"err_icmp_code\":{\"type\":\"string\"},\r\n\t    \"err_icmp_type\":{\"type\":\"string\"},\r\n\t    \"err_protocol\":{\"type\":\"string\"},\r\n            \"err_src_interface\":{\"type\":\"string\"},\r\n            \"err_src_ip\":{\"type\":\"ip\"},\r\n            \"geoip\":{\r\n               \"properties\":{\r\n                  \"area_code\":{\"type\":\"long\"},\r\n                  \"asn\":{\"type\":\"string\", 
\"index\":\"not_analyzed\"},\r\n                  \"city_name\":{\"type\":\"string\", \"index\":\"not_analyzed\"},\r\n                  \"continent_code\":{\"type\":\"string\"},\r\n                  \"country_code2\":{\"type\":\"string\"},\r\n                  \"country_code3\":{\"type\":\"string\"},\r\n                  \"country_name\":{\"type\":\"string\", \"index\":\"not_analyzed\"},\r\n                  \"dma_code\":{\"type\":\"long\"},\r\n                  \"ip\":{\"type\":\"ip\"},\r\n                  \"latitude\":{\"type\":\"double\"},\r\n                  \"location\":{\"type\":\"geo_point\"},\r\n                  \"longitude\":{\"type\":\"double\"},\r\n                  \"number\":{\"type\":\"string\"},\r\n                  \"postal_code\":{\"type\":\"string\"},\r\n                  \"real_region_name\":{\"type\":\"string\", \"index\":\"not_analyzed\"},\r\n                  \"region_name\":{\"type\":\"string\", \"index\":\"not_analyzed\"},\r\n                  \"timezone\":{\"type\":\"string\"}\r\n               }\r\n            },\r\n            \"group\":{\"type\":\"string\"},\r\n \t    \"hashcode1\": {\"type\": \"string\"}, \r\n \t    \"hashcode2\": {\"type\": \"string\"}, \r\n            \"host\":{\"type\":\"string\"},\r\n            \"icmp_code\":{\"type\":\"string\"},\r\n            \"icmp_code_xlated\":{\"type\":\"string\"},\r\n            \"icmp_seq_num\":{\"type\":\"string\"},\r\n            \"icmp_type\":{\"type\":\"string\"},\r\n            \"interface\":{\"type\":\"string\"},\r\n            \"is_local_natted\":{\"type\":\"string\"},\r\n            \"is_remote_natted\":{\"type\":\"string\"},\r\n            \"message\":{\"type\":\"string\"},\r\n            \"orig_dst_ip\":{\"type\":\"ip\"},\r\n            \"orig_dst_port\":{\"type\":\"long\"},\r\n            \"orig_protocol\":{\"type\":\"string\"},\r\n            \"orig_src_ip\":{\"type\":\"ip\"},\r\n            \"orig_src_port\":{\"type\":\"long\"},\r\n            
\"policy_id\":{\"type\":\"string\"},\r\n            \"protocol\":{\"type\":\"string\"},\r\n            \"reason\":{\"type\":\"string\"},\r\n            \"seq_num\":{\"type\":\"long\"},\r\n            \"spi\":{\"type\":\"string\"},\r\n            \"src_interface\":{\"type\":\"string\"},\r\n            \"src_ip\":{\"type\":\"ip\"},\r\n            \"src_mapped_ip\":{\"type\":\"ip\"},\r\n            \"src_mapped_port\":{\"type\":\"long\"},\r\n            \"src_port\":{\"type\":\"long\"},\r\n            \"src_xlated_interface\":{\"type\":\"string\"},\r\n            \"src_xlated_ip\":{\"type\":\"ip\"},\r\n            \"syslog_facility\":{\"type\":\"string\"},\r\n            \"syslog_facility_code\":{\"type\":\"long\"},\r\n            \"syslog_pri\":{\"type\":\"string\"},\r\n            \"syslog_severity\":{\"type\":\"string\"},\r\n            \"syslog_severity_code\":{\"type\":\"long\"},\r\n            \"tags\":{\"type\":\"string\"},\r\n            \"tcp_flags\":{\"type\":\"string\"},\r\n            \"timestamp\":{\"type\":\"string\"},\r\n            \"tunnel_type\":{\"type\":\"string\"},\r\n            \"type\":{\"type\":\"string\"},\r\n            \"user\":{\"type\":\"string\"},\r\n            \"xlate_type\":{\"type\":\"string\"}\r\n      }\r\n    }\r\n  }\r\n}'\r\n<\/pre>\n<p>Save it and close. Now, open up a terminal window and change to the directory where the template.sh file is located. We need to set the script as executable so we can run it. 
Run the following commands in a terminal window&#8230;<\/p>\n<pre>cd Desktop\r\nchmod +x template.sh\r\n.\/template.sh\r\n<\/pre>\n<p>You should get back <strong>{&#8220;acknowledged&#8221;:true}<\/strong><\/p>\n<p><a href=\"\/\/jackhanington.com\/blog\/wp-content\/uploads\/2015\/05\/ack.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-344\" src=\"\/\/jackhanington.com\/blog\/wp-content\/uploads\/2015\/05\/ack.png\" alt=\"ack\" width=\"366\" height=\"91\" srcset=\"https:\/\/jackhanington.com\/blog\/wp-content\/uploads\/2015\/05\/ack.png 366w, https:\/\/jackhanington.com\/blog\/wp-content\/uploads\/2015\/05\/ack-300x75.png 300w\" sizes=\"auto, (max-width: 366px) 100vw, 366px\" \/><\/a><\/p>\n<p>Now back to the tutorial&#8230;<\/p>\n<hr \/>\n<p><strong>Download and Install Java<\/strong><\/p>\n<p>If you are doing this on a fresh install of Ubuntu (like me), the first thing you\u2019re going to need to do is install Java. Logstash requires at least Java 7 to function so let\u2019s set that up. If you already have Java on your machine, you can skip to the next section. 
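If you're not sure whether Java is already on the machine, a quick terminal check will tell you. This is a generic shell sketch (not specific to any Java vendor); note that `java -version` prints to stderr, which is why it is redirected:

```shell
# Print the first line of the Java version banner if Java is present,
# otherwise report that it is missing.
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1
else
  echo "Java not installed"
fi
```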
I will be using Java 8 in this example but you can run 7 or openjdk if you wish.<\/p>\n<p>Open a terminal window (ctrl+shift+t) and type\u2026<\/p>\n<pre>sudo apt-add-repository ppa:webupd8team\/java\r\nsudo apt-get update\r\nsudo apt-get install oracle-java8-installer\r\n<\/pre>\n<p>Once you have accepted the license agreement, Java is ready to go.<\/p>\n<p>&nbsp;<\/p>\n<hr \/>\n<p><strong>Download and Install Logstash<\/strong><\/p>\n<p>Open a terminal window (ctrl+shift+t) and run these commands&#8230;<\/p>\n<pre>wget -O - http:\/\/packages.elasticsearch.org\/GPG-KEY-elasticsearch | sudo apt-key add -\r\necho 'deb http:\/\/packages.elasticsearch.org\/logstash\/1.5\/debian stable main' | sudo tee \/etc\/apt\/sources.list.d\/logstash.list\r\nsudo apt-get update\r\nsudo apt-get install logstash\r\n<\/pre>\n<p>&nbsp;<\/p>\n<hr \/>\n<p><strong>Logstash Configuration<\/strong><\/p>\n<p>Logstash is now installed, so next we need to write a configuration file where we can do things like specify the listening port, patterns, the IP of the Elasticsearch server etc.<\/p>\n<p>A Logstash configuration file is made up of 3 parts: the input (network protocol, listening port, data type etc.), the filter (patterns, grok filters, syslog severity etc.) and the output (IP address of the elasticsearch server logstash is shipping the modified data to etc.).<\/p>\n<p>First and foremost, let&#8217;s create a blank logstash configuration file. Open up your terminal window and type&#8230;<\/p>\n<pre>sudo vi \/etc\/logstash\/conf.d\/logstash.conf<\/pre>\n<p>or if you don&#8217;t know how to use VI<\/p>\n<pre>sudo gedit \/etc\/logstash\/conf.d\/logstash.conf<\/pre>\n<p>&nbsp;<\/p>\n<p>I am going to post my logstash configuration file below and then I will explain each part of the file and what it is doing. 
So, copy my code, paste it into your logstash.conf file and follow along below making changes to your file as you see fit.<\/p>\n<p>logstash.conf<\/p>\n<pre>input {\r\n  \tudp { \r\n    \t\tport =&gt; 5544\r\n    \t\ttype =&gt; \"cisco-fw\"\r\n  \t}\r\n}\r\n\r\nfilter {\r\n\tgrok {\r\n    \t\tmatch =&gt; [\"message\", \"%{CISCO_TAGGED_SYSLOG} %{GREEDYDATA:cisco_message}\"]\r\n  \t}\r\n\r\n  \t# Extract fields from the each of the detailed message types\r\n  \t# The patterns provided below are included in core of LogStash 1.4.2.\r\n\tgrok {\r\n\t\tmatch =&gt; [\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW106001}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW106006_106007_106010}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW106014}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW106015}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW106021}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW106023}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW106100}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW110002}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW302010}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW302013_302014_302015_302016}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW302020_302021}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW305011}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW313001_313004_313008}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW313005}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW402117}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW402119}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW419001}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW419002}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW500004}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW602303_602304}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW710001_710002_710003_710005_710006}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW713172}\",\r\n  \t\t\t\"cisco_message\", \"%{CISCOFW733100}\"\r\n\t\t]\r\n\t}\r\n\r\n  \t# Parse the syslog severity and facility\r\n  \tsyslog_pri { }\r\n\r\n\tgeoip {\r\n      \t\tadd_tag =&gt; [ \"GeoIP\" 
]\r\n      \t\tdatabase =&gt; \"\/opt\/logstash\/databases\/GeoLiteCity.dat\"\r\n      \t\tsource =&gt; \"src_ip\"\r\n    \t}\r\n\r\n\tif [geoip][city_name]      == \"\" { mutate { remove_field =&gt; \"[geoip][city_name]\" } }\r\n    \tif [geoip][continent_code] == \"\" { mutate { remove_field =&gt; \"[geoip][continent_code]\" } }\r\n    \tif [geoip][country_code2]  == \"\" { mutate { remove_field =&gt; \"[geoip][country_code2]\" } }\r\n    \tif [geoip][country_code3]  == \"\" { mutate { remove_field =&gt; \"[geoip][country_code3]\" } }\r\n    \tif [geoip][country_name]   == \"\" { mutate { remove_field =&gt; \"[geoip][country_name]\" } }\r\n    \tif [geoip][latitude]       == \"\" { mutate { remove_field =&gt; \"[geoip][latitude]\" } }\r\n    \tif [geoip][longitude]      == \"\" { mutate { remove_field =&gt; \"[geoip][longitude]\" } }\r\n    \tif [geoip][postal_code]    == \"\" { mutate { remove_field =&gt; \"[geoip][postal_code]\" } }\r\n    \tif [geoip][region_name]    == \"\" { mutate { remove_field =&gt; \"[geoip][region_name]\" } }\r\n    \tif [geoip][time_zone]      == \"\" { mutate { remove_field =&gt; \"[geoip][time_zone]\" } }\r\n\r\n\t# Gets the source IP whois information from the GeoIPASNum.dat flat file database\r\n\tgeoip {\r\n      \t\tadd_tag =&gt; [ \"Whois\" ]\r\n      \t\tdatabase =&gt; \"\/opt\/logstash\/databases\/GeoIPASNum.dat\"\r\n      \t\tsource =&gt; \"src_ip\"\r\n    \t}\r\n\r\n \t# Parse the date\r\n  \tdate {\r\n    \t\tmatch =&gt; [\"timestamp\",\r\n      \t\t\t\"MMM dd HH:mm:ss\",\r\n      \t\t\t\"MMM  d HH:mm:ss\",\r\n      \t\t\t\"MMM dd yyyy HH:mm:ss\",\r\n      \t\t\t\"MMM  d yyyy HH:mm:ss\"\r\n    \t\t]\r\n  \t}\r\n}\r\n\r\noutput {\r\n\tstdout { \r\n\t\tcodec =&gt; json\r\n\t}\r\n\r\n\telasticsearch {\r\n    \t\thost =&gt; \"10.0.0.133\"\r\n    \t\tflush_size =&gt; 1\r\n\t}\r\n}\r\n<\/pre>\n<p>End of\u00a0logstash.conf<\/p>\n<p>Don&#8217;t freak out! 
I will walk you through my configuration file and explain what each section is doing. Like I said before, a logstash configuration file is made up of 3 parts: the input, the filter and the output. So let&#8217;s walk through it, shall we?<\/p>\n<p>Note: All of my examples will be using this as an example ASA syslog message.<\/p>\n<pre>&lt;182&gt;May 07 2015 13:26:42: %ASA-6-302014: Teardown TCP connection 48809467 for outside:124.35.68.19\/46505 to inside:10.10.10.32\/443 duration 0:00:00 bytes 300 TCP FINs<\/pre>\n<p>&nbsp;<\/p>\n<h1>The Input<\/h1>\n<pre>input {\r\n  udp { \r\n    port =&gt; 5544\r\n    type =&gt; \"cisco-fw\"\r\n  }\r\n}\r\n<\/pre>\n<p>This tells logstash the protocol (UDP) and\u00a0which port to listen on (5544). You can make it any port you want but you just need to set the same port\u00a0in your ASA firewall like so&#8230;<\/p>\n<p><a href=\"\/\/jackhanington.com\/blog\/wp-content\/uploads\/2015\/05\/asa.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-347\" src=\"\/\/jackhanington.com\/blog\/wp-content\/uploads\/2015\/05\/asa.png\" alt=\"asa\" width=\"299\" height=\"245\" \/><\/a><\/p>\n<p>or for you CLI people&#8230;<\/p>\n<pre>logging enable\r\nlogging timestamp\r\nlogging buffer-size 40960\r\nlogging trap informational\r\nlogging facility 22\r\nlogging host inside 10.0.0.133 17\/5544\r\n<\/pre>\n<p>You can learn more on how to set this up by checking out my ASA syslog tutorial <a href=\"\/\/jackhanington.com\/blog\/2014\/07\/30\/sysloging-cisco-asa-firewall\/\">here<\/a><\/p>\n<h1>The Filter<\/h1>\n<p>This is the filter section, and it is where most of the work will be done in logstash. This is where you can do things like use grok patterns to split the message data into fields, along with other neat little features for manipulating your data. My filter section has 5 different parts: grok, syslog_pri, geoip, mutate and date. 
Let&#8217;s look at them individually and show what exactly they do.<\/p>\n<p><strong>Grok<\/strong><\/p>\n<pre>filter {\r\n\tgrok {\r\n    \t\tmatch =&gt; [\"message\", \"%{CISCO_TAGGED_SYSLOG} %{GREEDYDATA:cisco_message}\"]\r\n  \t...\r\n}\r\n<\/pre>\n<p>This section is where you use patterns to match various elements of your log messages and extract them into fields. This specific pattern uses the CISCO_TAGGED_SYSLOG definition from the firewall patterns file built into logstash and splits the message into sections.<\/p>\n<p>The CISCO_TAGGED_SYSLOG pattern looks like this&#8230;<\/p>\n<pre>CISCO_TAGGED_SYSLOG ^&lt;%{POSINT:syslog_pri}&gt;%{CISCOTIMESTAMP:timestamp}( %{SYSLOGHOST:sysloghost})?: %%{CISCOTAG:ciscotag}:<\/pre>\n<p>and, based off our example ASA syslog <strong>May 07 2015 13:26:42: %ASA-6-302014: Teardown TCP connection 48809467 for outside:124.35.68.19\/46505 to inside:10.10.10.32\/443 duration 0:00:00 bytes 300 TCP FINs<\/strong>, it pulls the data out into these fields&#8230;<\/p>\n<p>syslog_pri = 182<br \/>\ntimestamp = May 07 2015 13:26:42<br \/>\nsysloghost =<br \/>\nciscotag = ASA-6-302014<\/p>\n<p>and the rest of the message of <strong>Teardown TCP connection 48809467 for outside:124.35.68.19\/46505 to inside:10.10.10.32\/443 duration 0:00:00 bytes 300 TCP FINs<\/strong> is just set as GREEDYDATA:cisco_message, which will be used in the next grok filter below.<\/p>\n<pre>grok {\r\n    match =&gt; [\r\n    \t\"cisco_message\", \"%{CISCOFW106001}\",\r\n    \t....\r\n    \t\"cisco_message\", \"%{CISCOFW733100}\"\r\n    ]\r\n}\r\n<\/pre>\n<p>Now, using the GREEDYDATA:cisco_message from the previous grok filter, we are going to use the same firewall patterns file built into logstash and match the message type based off the message. The filter goes through all the patterns until it finds a match and then splits the contents of the message into fields. 
So, based off our GREEDYDATA:cisco_message of <strong>Teardown TCP connection 48809467 for outside:124.35.68.19\/46505 to inside:10.10.10.32\/443 duration 0:00:00 bytes 300 TCP FINs<\/strong> from our example syslog, it will try to match it against the patterns file. Our specific example matches the <strong>CISCOFW302013_302014_302015_302016<\/strong> line in the patterns file, which reads&#8230;<\/p>\n<pre>CISCOFW302013_302014_302015_302016 %{CISCO_ACTION:action}(?: %{CISCO_DIRECTION:direction})? %{WORD:protocol} connection %{INT:connection_id} for %{DATA:src_interface}:%{IP:src_ip}\/%{INT:src_port}( \\(%{IP:src_mapped_ip}\/%{INT:src_mapped_port}\\))?(\\(%{DATA:src_fwuser}\\))? to %{DATA:dst_interface}:%{IP:dst_ip}\/%{INT:dst_port}( \\(%{IP:dst_mapped_ip}\/%{INT:dst_mapped_port}\\))?(\\(%{DATA:dst_fwuser}\\))?( duration %{TIME:duration} bytes %{INT:bytes})?(?: %{CISCO_REASON:reason})?( \\(%{DATA:user}\\))?\r\n<\/pre>\n<p>&#8230;and puts the data into these fields&#8230;<\/p>\n<blockquote><p>action = Teardown<br \/>\nprotocol = TCP<br \/>\nconnection_id = 48809467<br \/>\nsrc_interface = outside<br \/>\nsrc_ip = 124.35.68.19<br \/>\nsrc_port = 46505<br \/>\ndst_interface = inside<br \/>\ndst_ip = 10.10.10.32<br \/>\ndst_port = 443<br \/>\nduration = 0:00:00<br \/>\nbytes = 300<br \/>\nreason = TCP FINs<\/p><\/blockquote>\n<p>&nbsp;<\/p>\n<p><strong>Syslog_pri<\/strong><\/p>\n<pre>syslog_pri { }<\/pre>\n<p>This section takes the POSINT syslog_pri from the first grok filter and gets the facility and severity level of the syslog message. Our example had a syslog_pri number of 182 and logstash can determine that the message is an informational-level message from the local6 facility. 
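As a quick sanity check, the PRI encoding is simply facility &times; 8 + severity (per the syslog RFCs), so you can decode 182 by hand. This is a standalone shell sketch, not part of the logstash config:

```shell
# Decode a syslog PRI value: PRI = facility * 8 + severity.
PRI=182
echo "facility=$((PRI / 8)) severity=$((PRI % 8))"
# facility 22 is local6; severity 6 is informational
```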
I was able to get that by referencing this chart and finding which column and row 182 fell under.<\/p>\n<pre>             emergency   alert   critical   error   warning   notice   info   debug\r\n kernel              0       1          2       3         4        5      6       7\r\n user                8       9         10      11        12       13     14      15\r\n mail               16      17         18      19        20       21     22      23\r\n system             24      25         26      27        28       29     30      31\r\n security           32      33         34      35        36       37     38      39\r\n syslog             40      41         42      43        44       45     46      47\r\n lpd                48      49         50      51        52       53     54      55\r\n nntp               56      57         58      59        60       61     62      63\r\n uucp               64      65         66      67        68       69     70      71\r\n time               72      73         74      75        76       77     78      79\r\n security           80      81         82      83        84       85     86      87\r\n ftpd               88      89         90      91        92       93     94      95\r\n ntpd               96      97         98      99       100      101    102     103\r\n logaudit          104     105        106     107       108      109    110     111\r\n logalert          112     113        114     115       116      117    118     119\r\n clock             120     121        122     123       124      125    126     127\r\n local0            128     129        130     131       132      133    134     135\r\n local1            136     137        138     139       140      141    142     143\r\n local2            144     145        146     147       148      149    150     151\r\n local3            152     153        154     155       156      157    158     159\r\n local4            160     161        162     163       164     
 165    166     167\r\n local5            168     169        170     171       172      173    174     175\r\n local6            176     177        178     179       180      181    <em><strong>182<\/strong><\/em>     183\r\n local7            184     185        186     187       188      189    190     191\r\n<\/pre>\n<p>Pretty cool stuff.<\/p>\n<hr \/>\n<p><strong>OPTIONAL: GeoIP<\/strong><\/p>\n<pre>geoip {\r\n    \tadd_tag =&gt; [ \"GeoIP\" ]\r\n    \tdatabase =&gt; \"\/opt\/logstash\/databases\/GeoLiteCity.dat\"\r\n    \tsource =&gt; \"src_ip\"\r\n}\r\n<\/pre>\n<p>This part is completely optional and up to you if you want to set it up. It uses the source IP address in the syslog and gets location data based off a flat file database. This flat file database will be used by logstash to get the location of the IP addresses hitting the firewall so you can turn the source IP address of 124.35.68.19 into this\u2026<\/p>\n<pre class=\"ng-scope\"><span class=\"key strong\">\"geoip\":<\/span> {\r\n      <span class=\"key strong\">\"ip\":<\/span> <span class=\"\">\"124.35.68.19\"<\/span>,\r\n      <span class=\"key strong\">\"country_code2\":<\/span> <span class=\"\">\"US\"<\/span>,\r\n      <span class=\"key strong\">\"country_code3\":<\/span> <span class=\"\">\"USA\"<\/span>,\r\n      <span class=\"key strong\">\"country_name\":<\/span> <span class=\"\">\"United States\"<\/span>,\r\n      <span class=\"key strong\">\"continent_code\":<\/span> <span class=\"\">\"NA\"<\/span>,\r\n      <span class=\"key strong\">\"region_name\":<\/span> <span class=\"\">\"NJ\"<\/span>,\r\n      <span class=\"key strong\">\"city_name\":<\/span> <span class=\"\">\"Edison\"<\/span>,\r\n      <span class=\"key strong\">\"postal_code\":<\/span> <span class=\"\">\"08820\"<\/span>,\r\n      <span class=\"key strong\">\"latitude\":<\/span> <span class=\"number\">40.57669999999999<\/span>,\r\n      <span class=\"key strong\">\"longitude\":<\/span> <span 
class=\"number\">-74.3674<\/span>,\r\n      <span class=\"key strong\">\"dma_code\":<\/span> <span class=\"number\">501<\/span>,\r\n      <span class=\"key strong\">\"area_code\":<\/span> <span class=\"number\">732<\/span>,\r\n      <span class=\"key strong\">\"timezone\":<\/span> <span class=\"\">\"America\/New_York\"<\/span>,\r\n      <span class=\"key strong\">\"real_region_name\":<\/span> <span class=\"\">\"New Jersey\"<\/span>,\r\n      <span class=\"key strong\">\"location\":<\/span> [\r\n        <span class=\"number\">-74.3674<\/span>,\r\n        <span class=\"number\">40.57669999999999<\/span>\r\n      ]\r\n}<\/pre>\n<p>If this doesn&#8217;t interest you, remove the <strong>geoip<\/strong> section from the config file and skip to the next section. If this does interest you, follow the steps below.<\/p>\n<p>Open a terminal window and type&#8230;<\/p>\n<pre>cd ~\r\nwget http:\/\/geolite.maxmind.com\/download\/geoip\/database\/GeoLiteCity.dat.gz\r\nsudo mkdir \/opt\/logstash\/databases\r\ngunzip GeoLiteCity.dat.gz\r\nsudo mv ~\/GeoLiteCity.dat \/opt\/logstash\/databases\/\r\n<\/pre>\n<p>and that is it for the first GeoIP tag for location data.<\/p>\n<p>This is another optional GeoIP filter&#8230;<\/p>\n<pre>geoip {\r\n    \tadd_tag =&gt; [ \"Whois\" ]\r\n    \tdatabase =&gt; \"\/opt\/logstash\/databases\/GeoIPASNum.dat\"\r\n    \tsource =&gt; \"src_ip\"\r\n}\r\n<\/pre>\n<p>Just like the GeoIP location database, the GeoIP ASN database uses the source IP address,\u00a0but instead of returning location information, it returns the ASN information. So essentially it turns 124.35.68.19 into&#8230;<\/p>\n<pre class=\"ng-scope\"><span class=\"key strong\">\"number\":<\/span> <span class=\"\">\"AS6128\"<\/span>,\r\n<span class=\"key strong\">\"asn\":<\/span> <span class=\"\">\"Cablevision Systems Corp.\"<\/span><\/pre>\n<p>So you can see ISP names in your elasticsearch searches. Also very cool and highly recommended for your setup. 
If this does not interest you for your setup,\u00a0remove the <strong>geoip<\/strong> section from your configuration file and skip to the next section. If this does interest you, follow the steps below&#8230;<\/p>\n<p>Open a terminal window and type&#8230;<\/p>\n<pre>cd ~\r\nwget http:\/\/download.maxmind.com\/download\/geoip\/database\/asnum\/GeoIPASNum.dat.gz\r\ngunzip GeoIPASNum.dat.gz\r\nsudo mv ~\/GeoIPASNum.dat \/opt\/logstash\/databases\/\r\n<\/pre>\n<p>and that&#8217;s it with the GeoIP section.<br \/>\n<strong>Mutate<\/strong><\/p>\n<pre>if [geoip][city_name]      == \"\" { mutate { remove_field =&gt; \"[geoip][city_name]\" } }<\/pre>\n<p>This part is fairly straightforward. Basically, if the GeoLiteCity.dat entry for a particular IP address has a country but no city (or is missing any other geoip field), the empty field is removed before the event is inserted into elasticsearch. That is basically it with the mutate section of my config file.<\/p>\n<p><strong>Date<\/strong><\/p>\n<pre>date {\r\n    match =&gt; [\"timestamp\",\r\n    \t\"MMM dd HH:mm:ss\",\r\n    \t\"MMM  d HH:mm:ss\",\r\n    \t\"MMM dd yyyy HH:mm:ss\",\r\n    \t\"MMM  d yyyy HH:mm:ss\"\r\n    ]\r\n}\r\n<\/pre>\n<p>This part is also pretty straightforward. It takes the timestamp value from the first grok filter and sets it as the timestamp when putting it into elasticsearch. So basically, instead of the timestamp being set as the time when logstash received the message, the timestamp is set as when the event was triggered on the firewall, based off the firewall&#8217;s clock. Hopefully the firewall is configured to use NTP so that all of your devices&#8217; clocks are synchronized.<\/p>\n<h1>Output<\/h1>\n<pre>output {\r\n\tstdout { \r\n\t\tcodec =&gt; json\r\n\t}\r\n\r\n\telasticsearch {\r\n\t\thost =&gt; \"10.0.0.133\"\r\n\t\tflush_size =&gt; 1\r\n\t}\r\n}\r\n<\/pre>\n<p>This section controls how the final result is displayed and where it is sent. 
My output section has 2 parts: stdout and elasticsearch.<\/p>\n<p><strong>Stdout<\/strong><\/p>\n<pre>stdout { \r\n\tcodec =&gt; json\r\n}\r\n<\/pre>\n<p>Stdout is optional but I have it in there so I can see if everything is working properly through the terminal window like so<br \/>\n<a href=\"\/\/jackhanington.com\/blog\/wp-content\/uploads\/2014\/04\/Logstash2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-258\" src=\"\/\/jackhanington.com\/blog\/wp-content\/uploads\/2014\/04\/Logstash2.png\" alt=\"Logstash2\" width=\"662\" height=\"145\" srcset=\"https:\/\/jackhanington.com\/blog\/wp-content\/uploads\/2014\/04\/Logstash2.png 662w, https:\/\/jackhanington.com\/blog\/wp-content\/uploads\/2014\/04\/Logstash2-300x65.png 300w, https:\/\/jackhanington.com\/blog\/wp-content\/uploads\/2014\/04\/Logstash2-500x109.png 500w\" sizes=\"auto, (max-width: 662px) 100vw, 662px\" \/><\/a><\/p>\n<p>Obviously you do not need this part to run but I like to have it in there for debugging purposes.<\/p>\n<p><strong>Elasticsearch<\/strong><\/p>\n<pre>elasticsearch {\r\n    \thost =&gt; \"10.0.0.133\"\r\n    \tflush_size =&gt; 1\r\n}\r\n<\/pre>\n<p>Elasticsearch is where you specify the IP address of your elasticsearch server and that is pretty much it.<\/p>\n<hr \/>\n<p>Now that you have your logstash.conf file set up, you can run logstash for your ASA firewall. Save your config file and type this into a terminal window&#8230;<\/p>\n<pre>\/opt\/logstash\/bin\/logstash -f \/etc\/logstash\/conf.d\/logstash.conf<\/pre>\n<p>That&#8217;s it, you&#8217;re finished.<\/p>\n<p>Note: If you have not turned on syslog messages on your ASA firewall, read my other blog post <a href=\"\/\/jackhanington.com\/blog\/2014\/07\/30\/sysloging-cisco-asa-firewall\/\">here<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This guide is a continuation of this blog post here. 
The following assumes that you\u00a0already have an Elasticsearch instance set up and ready to go. \u00a0This post will walk you through installing and setting up logstash for sending Cisco ASA messages to an Elasticsearch index. As of\u00a0today (6\/16\/2015), version 1.5.1 is the latest stable release&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"footnotes":""},"categories":[20,56,3,55,82,40,54],"tags":[83,75,90,91,81,89,84,87,85,59,117,74,88,92,86,57],"class_list":["post-379","post","type-post","status-publish","format-standard","hentry","category-blog","category-elasticsearch","category-information-technology","category-kibana","category-logstash","category-networking","category-software","tag-asa","tag-cisco","tag-cli","tag-command","tag-config","tag-date","tag-firewall","tag-geoip","tag-grok","tag-java","tag-kibana","tag-logstash","tag-mutate","tag-syslog","tag-syslog_pri","tag-ubuntu"],"_links":{"self":[{"href":"https:\/\/jackhanington.com\/blog\/wp-json\/wp\/v2\/posts\/379","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jackhanington.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jackhanington.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jackhanington.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jackhanington.com\/blog\/wp-json\/wp\/v2\/comments?post=379"}],"version-history":[{"count":7,"href":"https:\/\/jackhanington.com\/blog\/wp-json\/wp\/v2\/posts\/379\/revisions"}],"predecessor-version":[{"id":394,"href":"https:\/\/jackhanington.com\/blog\/wp-json\/wp\/v2\/posts\/379\/revisions\/394"}],"wp:attachment":[{"href":"https:\/\/jackhanington.com\/blog\/wp-json\/wp\/v2\/media?p
arent=379"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jackhanington.com\/blog\/wp-json\/wp\/v2\/categories?post=379"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jackhanington.com\/blog\/wp-json\/wp\/v2\/tags?post=379"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}