Distributed James Server — Logging

We recommend closely monitoring ERROR and WARNING logs: such logs should be treated as abnormal.

If you encounter suspicious logs:

  • If you have any doubt about the log being caused by a bug in the James source code, please reach out to us via the bug tracker, the user mailing list or our Gitter channel (see our community page).

  • They can be due to insufficient performance of tier applications (e.g. Cassandra timeouts). In such cases we advise you to conduct a close review of performance at the tier level.

Leveraging filters in the Kibana Discover view can help filter out "already known", frequently occurring logs.
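For instance, assuming the structured fields severity and logger defined in the appender configuration below, a KQL filter in the Discover search bar could look like the following (the logger name is purely illustrative):

severity : "WARN" and not logger : "org.apache.james.example.NoisyComponent"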

When reporting ERROR or WARNING logs, consider attaching the full logs and any related data (e.g. the raw content of a mail triggering the issue) to the bug report in order to ease resolution.

Logging configuration

Distributed James uses Logback as its logging library and FluentBit for centralized logging.

Information about Logback configuration can be found in the Logback documentation.

Structured logging

Pushing logs to ElasticSearch

The Distributed Server leverages MDC in order to achieve structured logging and to add context to the logged information. We furthermore ship the Logback Elasticsearch Appender on the classpath to allow direct log indexation in ElasticSearch.
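As an illustration of how MDC enriches log entries, here is a minimal SLF4J sketch; the context keys and values are purely illustrative and not necessarily the ones James uses. With <includeMdc>true</includeMdc> in the appender below, such keys end up as dedicated fields in ElasticSearch.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcExample {
    private static final Logger LOGGER = LoggerFactory.getLogger(MdcExample.class);

    public static void main(String[] args) {
        // Values put into the MDC are attached to every log event emitted by the current thread.
        MDC.put("protocol", "SMTP");            // illustrative key/value
        MDC.put("user", "alice@example.com");   // illustrative key/value
        try {
            LOGGER.info("Processing incoming mail");
        } finally {
            MDC.clear(); // always clean up the thread-local context
        }
    }
}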

Here is a sample conf/logback.xml configuration file for Logback with the following prerequisites:

  • Logging both in an unstructured fashion on the console and in a structured fashion in ElasticSearch

  • Logging ElasticSearch Log appender logs in the console

Configuration for pushing logs directly to ElasticSearch:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="30 seconds">

        <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
                <resetJUL>true</resetJUL>
        </contextListener>

        <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
                <encoder>
                        <pattern>%d{yyyy.MM.dd HH:mm:ss.SSS} %highlight([%-5level]) %logger{15} - %msg%n%rEx</pattern>
                        <immediateFlush>false</immediateFlush>
                </encoder>
        </appender>

        <appender name="ELASTIC" class="com.linagora.logback.elasticsearch.ElasticsearchAppender">
            <url>http://elasticsearch:9200/_bulk</url>
            <index>logs-james-%date{yyyy.MM.dd}</index>
            <type>tester</type>
            <includeMdc>true</includeMdc>
            <excludedMdcKeys>host</excludedMdcKeys>
            <errorLoggerName>es-error-logger</errorLoggerName>
            <properties>
                <property>
                    <name>host</name>
                    <value>${HOSTNAME}</value>
                    <allowEmpty>false</allowEmpty>
                </property>
                <property>
                    <name>severity</name>
                    <value>%level</value>
                </property>
                <property>
                    <name>thread</name>
                    <value>%thread</value>
                </property>
                <property>
                    <name>stacktrace</name>
                    <value>%ex</value>
                </property>
                <property>
                    <name>logger</name>
                    <value>%logger</value>
                </property>
            </properties>
            <headers>
                <header>
                    <name>Content-Type</name>
                    <value>application/json</value>
                </header>
            </headers>
        </appender>

        <root level="WARN">
                <appender-ref ref="ELASTIC" />
        </root>

        <logger name="es-error-logger" level="DEBUG" additivity="false">
            <appender-ref ref="CONSOLE" />
        </logger>

        <logger name="org.apache.james" level="INFO" />

</configuration>
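Once James runs with this configuration, you can check that log entries actually reach ElasticSearch by querying the index directly, for example (assuming ElasticSearch is reachable at elasticsearch:9200, as configured in the appender above):

curl -XGET 'http://elasticsearch:9200/logs-james-*/_search?size=1&pretty'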

Using FluentBit as a log forwarder

Using Docker

The Distributed Server leverages MDC in order to achieve structured logging and to add context to the logged information. We furthermore write JSON logs to a file with a RollingFileAppender so that FluentBit can directly tail the log file. Here is a sample conf/logback.xml configuration file for Logback with the following prerequisite:

Logging in a structured JSON fashion, written to a file for centralized logging. A centralized logging third party such as FluentBit can then tail the log file, filter/process the entries and push them to ElasticSearch.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

        <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
                <resetJUL>true</resetJUL>
        </contextListener>

        <appender name="LOG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
                <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
                        <fileNamePattern>logs/james.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
                        <maxHistory>1</maxHistory>
                        <totalSizeCap>200MB</totalSizeCap>
                        <maxFileSize>100MB</maxFileSize>
                </rollingPolicy>

                <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
                    <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
                        <timestampFormat>yyyy-MM-dd'T'HH:mm:ss.SSSX</timestampFormat>
                        <timestampFormatTimezoneId>Etc/UTC</timestampFormatTimezoneId>

                        <!-- Importance for handling multiple lines log -->
                        <appendLineSeparator>true</appendLineSeparator>

                        <jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
                            <prettyPrint>false</prettyPrint>
                        </jsonFormatter>
                    </layout>
                </encoder>
        </appender>

        <root level="INFO">
            <appender-ref ref="LOG_FILE" />
        </root>

</configuration>

First you need to create a logs folder, then mount it into both the James container and the FluentBit container.
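For example, from the directory holding the docker-compose.yml below:

mkdir -p logs
# conf/logback.xml, fluentbit/fluent-bit.conf and fluentbit/parsers.conf
# referenced in the volume mounts below must exist as well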

docker-compose:

version: "3"

services:
  james:
    depends_on:
      - elasticsearch
      - cassandra
      - rabbitmq
      - s3
    entrypoint: bash -c "java -cp 'james-server.jar:extension-jars/*:james-server-memory-guice.lib/*' -Dworking.directory=/root/ -Dlogback.configurationFile=/root/conf/logback.xml org.apache.james.CassandraRabbitMQJamesServerMain"
    image: linagora/james-rabbitmq-project:branch-master
    container_name: james
    hostname: james.local
    volumes:
      - ./extension-jars:/root/extension-jars
      - ./conf/logback.xml:/root/conf/logback.xml
      - ./logs:/root/logs
    ports:
      - "80:80"
      - "25:25"
      - "110:110"
      - "143:143"
      - "465:465"
      - "587:587"
      - "993:993"
      - "8080:8000"

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node

  cassandra:
    image: cassandra:3.11.10
    ports:
      - "9042:9042"

  rabbitmq:
    image: rabbitmq:3.8.1-management
    ports:
      - "5672:5672"
      - "15672:15672"

  s3:
    image: zenko/cloudserver:8.2.6
    container_name: s3.docker.test
    environment:
      - SCALITY_ACCESS_KEY_ID=accessKey1
      - SCALITY_SECRET_ACCESS_KEY=secretKey1
      - S3BACKEND=mem
      - LOG_LEVEL=trace
      - REMOTE_MANAGEMENT_DISABLE=1

  fluent-bit:
    image: fluent/fluent-bit:1.5.7
    volumes:
      - ./fluentbit/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
      - ./fluentbit/parsers.conf:/fluent-bit/etc/parsers.conf
      - ./logs:/fluent-bit/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.2
    environment:
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

FluentBit configuration, with Host elasticsearch pointing to the elasticsearch service of the docker-compose file:

[SERVICE]
    Parsers_File    /fluent-bit/etc/parsers.conf

[INPUT]
    name                    tail
    path                    /fluent-bit/log/*.log
    Parser                  docker
    docker_mode             on
    buffer_chunk_size       1MB
    buffer_max_size         1MB
    mem_buf_limit           64MB
    Refresh_Interval        30

[OUTPUT]
    Name  stdout
    Match *


[OUTPUT]
    Name  es
    Match *
    Host elasticsearch
    Port 9200
    Index fluentbit
    Logstash_Format On
    Logstash_Prefix fluentbit-james
    Type docker

FluentBit Parser config:

[PARSER]
  Name         docker
  Format       json
  Time_Key     timestamp
  Time_Format  %Y-%m-%dT%H:%M:%S.%LZ
  Time_Keep    On
  Decode_Field_As   escaped_utf8    log    do_next
  Decode_Field_As   escaped         log    do_next
  Decode_Field_As   json            log
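With these files in place, one way to try the stack out (assuming a layout matching the volume mounts above) is:

docker-compose up -d
# James writes JSON logs into ./logs, FluentBit tails them and indexes them in ElasticSearch.
# They can then be browsed in Kibana at http://localhost:5601 using an index pattern matching fluentbit-james-*.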

Using Kubernetes

If running James in a Kubernetes environment, you can simply write the logs to the console in a JSON format using Jackson, allowing FluentBit to directly tail them.

Here is a sample conf/logback.xml configuration file for achieving this:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

        <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
                <resetJUL>true</resetJUL>
        </contextListener>

        <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
                <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
                    <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
                        <timestampFormat>yyyy-MM-dd'T'HH:mm:ss.SSSX</timestampFormat>
                        <timestampFormatTimezoneId>Etc/UTC</timestampFormatTimezoneId>

                        <!-- Importance for handling multiple lines log -->
                        <appendLineSeparator>true</appendLineSeparator>

                        <jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
                            <prettyPrint>false</prettyPrint>
                        </jsonFormatter>
                    </layout>
                </encoder>
        </appender>

        <root level="INFO">
                <appender-ref ref="CONSOLE" />
        </root>

</configuration>
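You can verify that the container emits one JSON document per line, as FluentBit expects, by looking at the pod output (the pod name is illustrative):

kubectl logs james-0 | head -n 1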

Regarding FluentBit on Kubernetes, you need to install it as a DaemonSet. Official templates exist with FluentBit outputting logs to ElasticSearch. For more information on how to install it on your cluster, you can refer to the FluentBit Kubernetes documentation.

As stated in the official documentation, FluentBit is configured out of the box to consume logs from containers running on the same node, so it should scrape your James logs without extra configuration.
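As an example, one common way to install such a DaemonSet is via the official FluentBit Helm chart; the ElasticSearch output then has to be enabled through the chart values, which depend on your cluster and are not shown here:

helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit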