Now, we need to open the Kibana configuration and change the Elasticsearch port from 9200 to 80. We will later set up a proxy server (which could be nginx, squid, etc.) to redirect all requests from port 80 to 9200. This is necessary because Kibana itself will be served on port 80. See the line below in ~/kibana/config.js.
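In Kibana 3 the relevant line typically looks like the following (the exact formatting may vary by Kibana version); change the port at the end of it from 9200 to 80:
elasticsearch: "http://"+window.location.hostname+":80",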
Since we are going to serve Kibana through nginx, we also need to decide on a particular location for the Kibana files; this is the root directory we will set in the nginx configuration below.
Now that we have set up Kibana, as discussed we need to install nginx, which is pretty easy.
Install Nginx
Use the below command to install nginx:
sudo apt-get install nginx
Next, download the sample Nginx configuration for Kibana to your home directory:
sudo wget https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf
In the above configuration, we need to specify the FQDN (Fully Qualified Domain Name) and Kibana's root directory:
server_name FQDN;
root /root/kibana3;
Once you are done with the changes, move the file to the nginx configuration location:
sudo cp nginx.conf /etc/nginx/sites-available/default
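On a stock Ubuntu install of nginx the default site is already enabled via a symlink in /etc/nginx/sites-enabled; if it is missing on your system, you can recreate it:
sudo ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default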
In order to improve the security of access to nginx, you can install apache2-utils and use htpasswd to generate a username and password:
sudo apt-get install apache2-utils
sudo htpasswd -c /etc/nginx/conf.d/kibana.htpasswd user
Now, restart the nginx service for the changes to take effect:
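sudo service nginx restart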
Kibana is now accessible via your FQDN or the public IP address of your Logstash Server, i.e. http://logstash_server_public_ip/. If you go there in a web browser, you should see a Kibana welcome page which will allow you to view dashboards, but there will be no logs to view because Logstash has not been set up yet. Let's do that now.
Install Logstash
The Logstash package is available from the same repository as Elasticsearch. Since the public GPG key is already set up, let's create the source list:
echo 'deb http://packages.elasticsearch.org/logstash/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list
Update your apt package database:
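sudo apt-get update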
Install Logstash using the below command:
sudo apt-get install logstash=1.4.2-1-2c0f5a1
Generate SSL certificates:
Since we are going to use Logstash Forwarder to ship logs from our servers to our Logstash Server, we need to create an SSL certificate and key pair. The certificate is used by the Logstash Forwarder to verify the identity of the Logstash Server. Create the directories that will store the certificate and private key with the following commands:
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private
If you have a DNS setup with your private networking, you should create an A record that contains the Logstash Server's private IP address; this domain name will be used in the next command (in place of logstash_server_fqdn) to generate the SSL certificate. Use the below command to generate it:
cd /etc/pki/tls; sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
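Optionally, you can verify the generated certificate's subject and validity dates with openssl (run from the same /etc/pki/tls directory):
openssl x509 -in certs/logstash-forwarder.crt -noout -subject -dates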
Now that the SSL certificate is generated, we will need to transfer it to each server from which we are going to ship logs to Logstash (we will do this during the Logstash Forwarder setup below).
Configure Logstash
Now, let's configure Logstash so that Kibana can get the logs from it.
Logstash configuration files are in JSON format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
Let's set up our "lumberjack" input (the protocol that Logstash Forwarder uses). Create a configuration file called 01-lumberjack-input.conf at /etc/logstash/conf.d/01-lumberjack-input.conf with the following contents:
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
This specifies a lumberjack input that will listen on TCP port 5000 and use the SSL certificate and private key that we created earlier.
Now let's create a configuration file called 10-syslog.conf at /etc/logstash/conf.d/10-syslog.conf, where we will add a filter for syslog messages:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
This filter looks for logs that are labeled as "syslog" type (by a Logstash Forwarder), and it will try to use grok to parse the incoming syslog logs to make them structured and queryable. Lastly, create an output configuration file at /etc/logstash/conf.d/30-lumberjack-output.conf:
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
This output basically configures Logstash to store the logs in Elasticsearch.
With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (e.g. unfiltered Nginx or Apache logs would appear as flat messages instead of categorizing messages by HTTP response codes, source IP addresses, served files, etc.).
Now restart Logstash to make the changes take effect:
sudo service logstash restart
Setup Logstash Forwarder Package:
Copy the SSL certificate that we generated previously to the server from which we will get the logs:
sudo scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp
Install Logstash-forwarder Package:
On the server that will ship logs, create the Logstash Forwarder source list:
echo 'deb http://packages.elasticsearch.org/logstashforwarder/debian stable main' | sudo tee /etc/apt/sources.list.d/logstashforwarder.list
It also uses the same GPG key as Elasticsearch, which can be installed with this command:
sudo wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
Then install the Logstash Forwarder package:
sudo apt-get install logstash-forwarder
Next, you will want to install the Logstash Forwarder init script, so it starts on bootup:
cd /etc/init.d/; sudo wget https://raw.githubusercontent.com/elasticsearch/logstash-forwarder/a73e1cb7e43c6de97050912b5bb35910c0f8d0da/logstash-forwarder.init -O logstash-forwarder
sudo chmod +x logstash-forwarder
sudo update-rc.d logstash-forwarder defaults
Now copy the SSL certificate into the appropriate location (/etc/pki/tls/certs):
sudo mkdir -p /etc/pki/tls/certs
sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
Configure Logstash Forwarder:
We need to specify the Logstash Server's details in the Logstash Forwarder configuration so that it knows where to send the logs.
Open the configuration file at /etc/logstash-forwarder and add the following lines, substituting your Logstash Server's private IP address for logstash_server_private_IP:
{
  "network": {
    "servers": [ "logstash_server_private_IP:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/syslog",
        "/var/log/auth.log"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}
This configures Logstash Forwarder to connect to your Logstash Server on port 5000, using the SSL certificate we copied earlier, and to watch /var/log/syslog and /var/log/auth.log, shipping them as type "syslog" (the type that our filter is looking for).
Now, restart Logstash Forwarder using the below command:
sudo service logstash-forwarder restart
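To verify that events are arriving, you can query Elasticsearch directly on the Logstash Server; this assumes Logstash is writing to its default logstash-* daily indices:
curl -XGET 'http://localhost:9200/logstash-*/_search?pretty'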
Now that we have set up the ELK stack, we just need to check the Kibana dashboard for more details.
Below are some useful screenshots which show the data we received from the server.
Conclusion:
The ELK stack is useful in many different ways; in this case, you will find almost all the logs in one place, whether application logs, webserver logs, system logs, etc. Using Kibana, you will have visual reports showing an analysis of the logs, which is very helpful for extracting the required information.
The ELK stack is also easy to maintain on large-scale systems and large-scale infrastructure.