Version: Early Access

Log analysis tool in Deploy

This topic describes how to configure Deploy to work with a log analysis tool.

Elastic Stack

For the Elastic Stack (Logstash, Elasticsearch, Kibana) to work with Deploy, logback in Deploy must communicate with Logstash. This is done by adding a configuration to logback.xml.

There are different ways to do this. One of the most convenient is a TCP socket appender, for example the LogstashTcpSocketAppender from the logstash-logback-encoder library. Add an appender like the following to your logback.xml (the appender name and the `localhost:5000` destination are examples; point the destination at your Logstash TCP input):

    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
      <destination>localhost:5000</destination>
      <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

And do not forget to reference it from a logger, for example:

    <root level="INFO">
      <appender-ref ref="LOGSTASH"/>
    </root>

Since Logstash is part of the Elastic Stack, Elasticsearch and Kibana are used for log storage, search, analysis, and visualization. Once Digital.ai Deploy is started, to display the logs in Kibana:

  1. First create an index pattern from the Kibana dashboard.
  2. Define the index pattern so it matches the index name (for example, `logstash-*`) and finalize it.
  3. After creating the index pattern, go to Discover; the Digital.ai Deploy logs will be displayed.
  4. You can add filters to narrow the logs down as needed. First click the add filter button on the left.
  5. Then define the filter parameter; for instance, let's filter the logs for a specific task ID.

Kibana will then show only the logs for that taskId.
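The filter in step 5 is simply a field match on the structured log events. As a rough sketch (the field names other than `message` and `level` are illustrative and depend on your encoder configuration, and the taskId values are made up), the selection Kibana performs is equivalent to:

```python
import json

# Hypothetical JSON log events, shaped like what a JSON logback encoder
# would ship to Logstash; the "taskId" field name is an assumption.
events = [
    {"@timestamp": "2024-01-31T10:00:00Z", "level": "INFO",
     "message": "task started", "taskId": "a1b2c3"},
    {"@timestamp": "2024-01-31T10:00:01Z", "level": "INFO",
     "message": "control task finished", "taskId": "d4e5f6"},
]

def filter_by_task(events, task_id):
    """Keep only events whose taskId matches, like the Kibana filter."""
    return [e for e in events if e.get("taskId") == task_id]

print(json.dumps(filter_by_task(events, "a1b2c3"), indent=2))
```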

Fluentd

Fluentd is an alternative to Logstash. It is an independent tool, so it can integrate with different visualization, search, and analysis tools; it can work with Elasticsearch and Kibana as well.

The communication between fluentd and logback goes through a library called Fluency. A basic Fluency configuration in logback.xml looks like the following (the appender class shown is the one provided by the logback-more-appenders library; the `debug` tag, `localhost` host, and port `24224` are examples matching the fluentd configuration below):

    <appender name="FLUENCY" class="ch.qos.logback.more.appenders.FluencyLogbackAppender">
      <tag>debug</tag>
      <remoteHost>localhost</remoteHost>
      <port>24224</port>
    </appender>

Once again, do not forget to reference it, for example:

    <root level="INFO">
      <appender-ref ref="FLUENCY"/>
    </root>

Fluentd has to be configured properly so that it knows where and how to communicate with the other tools. The name of the fluentd configuration file depends on how you installed it; you can find further information here: https://docs.fluentd.org/configuration/config-file

For instance, for fluentd running in Docker the file is /fluentd/etc/fluent.conf, and a sample configuration looks like the following:

    <source>
      @type forward
      port 24224
      bind 0.0.0.0
    </source>

    <match *.**>
      @type copy

      <store>
        @type elasticsearch
        host elasticsearch
        port 9200
        logstash_format true
        logstash_prefix fluentd
        logstash_dateformat %Y%m%d
        include_tag_key true
        type_name access_log
        tag_key @log_name
        flush_interval 1s
      </store>

      <store>
        @type stdout
      </store>
    </match>

Basically, this configuration tells fluentd which port to listen on (the forward input on 24224) and where to send the logs: to Elasticsearch on port 9200, with a copy to stdout.

Once the communication is set up, the Kibana configuration is exactly the same, except that the index pattern to match is the one produced by the `logstash_prefix` above, i.e. `fluentd-*`.
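With `logstash_format true` and `logstash_prefix fluentd`, the Elasticsearch output writes one index per day, named from the prefix and `logstash_dateformat`. A small sketch of the naming scheme (the function is illustrative, not actual fluentd code):

```python
from datetime import datetime, timezone

def fluentd_index_name(prefix="fluentd", dateformat="%Y%m%d", when=None):
    # Mirrors how the fluentd elasticsearch output builds daily index
    # names when logstash_format is enabled: <prefix>-<formatted date>.
    when = when or datetime.now(timezone.utc)
    return f"{prefix}-{when.strftime(dateformat)}"

print(fluentd_index_name(when=datetime(2024, 1, 31)))  # fluentd-20240131
```

The Kibana index pattern `fluentd-*` matches all of these daily indices at once.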

Logstash and Fluentd Comparison

Logstash is part of the Elastic Stack, which makes it easy to configure with Elasticsearch and Kibana, but it is limited to those tools. Fluentd, on the other hand, can work with many other tools on the market, but it is slightly more difficult to configure. For a more detailed comparison, read this article: https://logz.io/blog/fluentd-logstash/