Database appenders

Log4j Core provides multiple appenders to send log events directly to your database.

Common concerns

Column mapping

Since relational databases and some NoSQL databases split data into columns, Log4j Core provides a reusable ColumnMapping configuration element to specify the content of each column.

The Column Mapping element supports the following configuration properties:

Table 1. ColumnMapping configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the column.

Optional

columnType

Class<?>

String

It specifies the Java type that will be stored in the column.

If set to:

org.apache.logging.log4j.util.ReadOnlyStringMap
org.apache.logging.log4j.spi.ThreadContextMap

The column will be filled with the contents of the log event’s context map.

org.apache.logging.log4j.spi.ThreadContextStack

The column will be filled with the contents of the log event’s context stack.

java.util.Date

The column will be filled with the log event’s timestamp.

For any other value:

  1. The log event will be formatted using the nested Layout.

  2. The resulting String will be converted to the specified type using a TypeConverter. See the plugin reference for a list of available type converters.

type

Class<?>

String

Deprecated: since 2.21.0, use columnType instead.

literal

String

If set, this value will be added directly to the INSERT statement of the database-specific query language.

This value is added as-is, without any validation. Never use user-provided data to determine its value.

parameter

String

The database-specific parameter marker to use. If absent, the default parameter marker for the database language will be used.

This value is added as-is, without any validation. Never use user-provided data to determine its value.

pattern

String

This is a shortcut configuration attribute to set the nested Layout element to a PatternLayout instance with the specified pattern property.

source

String

name

It specifies which key of a MapMessage will be stored in the column. This attribute is only used if the configured layout formats log events as MapMessages. See Map Message handling for more details.

Table 2. ColumnMapping nested elements
Type Multiplicity Description

Layout

zero or one

Formats the value to store in the column.

See Layouts for more information.

An example column mapping might look like this:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
(1)
<ColumnMapping name="id" literal="currval('logging_seq')"/>
(2)
<ColumnMapping name="uuid"
               pattern="%uuid{TIME}"
               columnType="java.util.UUID"/>
<ColumnMapping name="message" pattern="%m"/>
(3)
<ColumnMapping name="timestamp" columnType="java.util.Date"/>
<ColumnMapping name="mdc"
               columnType="org.apache.logging.log4j.spi.ThreadContextMap"/>
<ColumnMapping name="ndc"
               columnType="org.apache.logging.log4j.spi.ThreadContextStack"/>
(4)
<ColumnMapping name="asJson">
  <JsonTemplateLayout/>
</ColumnMapping>
(5)
<ColumnMapping name="resource" source="resourceId"/>
Snippet from an example log4j2.json
"ColumnMapping": [
  (1)
  {
    "name": "id",
    "literal": "currval('logging_seq')"
  },
  (2)
  {
    "name": "uuid",
    "pattern": "%uuid{TIME}",
    "columnType": "java.util.UUID"
  },
  {
    "name": "message",
    "pattern": "%m"
  },
  (3)
  {
    "name": "timestamp",
    "columnType": "java.util.Date"
  },
  {
    "name": "mdc",
    "columnType": "org.apache.logging.log4j.spi.ThreadContextMap"
  },
  {
    "name": "ndc",
    "columnType": "org.apache.logging.log4j.spi.ThreadContextStack"
  },
  (4)
  {
    "name": "asJson",
    "JsonTemplateLayout": {}
  },
  (5)
  {
    "name": "resource",
    "source": "resourceId"
  }
]
Snippet from an example log4j2.yaml
ColumnMapping:
  (1)
  - name: "id"
    literal: "currval('logging_seq')"
  (2)
  - name: "uuid"
    pattern: "%uuid{TIME}"
    columnType: "java.util.UUID"
  - name: "message"
    pattern: "%m"
  (3)
  - name: "timestamp"
    columnType: "java.util.Date"
  - name: "mdc"
    columnType: "org.apache.logging.log4j.spi.ThreadContextMap"
  - name: "ndc"
    columnType: "org.apache.logging.log4j.spi.ThreadContextStack"
  (4)
  - name: "asJson"
    JsonTemplateLayout: {}
  (5)
  - name: "resource"
    source: "resourceId"
Snippet from an example log4j2.properties
(1)
appender.0.col[0].type = ColumnMapping
appender.0.col[0].name = id
appender.0.col[0].literal = currval('logging_seq')

(2)
appender.0.col[1].type = ColumnMapping
appender.0.col[1].name = uuid
appender.0.col[1].pattern = %uuid{TIME}
appender.0.col[1].columnType = java.util.UUID

appender.0.col[2].type = ColumnMapping
appender.0.col[2].name = message
appender.0.col[2].pattern = %m

(3)
appender.0.col[3].type = ColumnMapping
appender.0.col[3].name = timestamp
appender.0.col[3].columnType = java.util.Date

appender.0.col[4].type = ColumnMapping
appender.0.col[4].name = mdc
appender.0.col[4].columnType = org.apache.logging.log4j.spi.ThreadContextMap

appender.0.col[5].type = ColumnMapping
appender.0.col[5].name = ndc
appender.0.col[5].columnType = org.apache.logging.log4j.spi.ThreadContextStack

(4)
appender.0.col[6].type = ColumnMapping
appender.0.col[6].name = asJson
appender.0.col[6].layout.type = JsonTemplateLayout

(5)
appender.0.col[7].type = ColumnMapping
appender.0.col[7].name = resource
appender.0.col[7].source = resourceId
1 A database-specific expression is added literally to the INSERT statement.
2 A Pattern Layout with the specified pattern is used for these columns. The uuid column is additionally converted into a java.util.UUID before being sent to the JDBC driver.
3 Three special column types are replaced with the log event timestamp, context map, and context stack.
4 A JSON Template Layout is used to format this column.
5 If the global layout of the appender returns a MapMessage, the value for key resourceId will be put into the resource column.

Cassandra Appender

This appender is planned to be removed in the next major release! If you are using this library, please get in touch with the Log4j maintainers using the official support channels.

The Cassandra Appender writes its output to an Apache Cassandra database. The appender supports the following configuration properties:

Table 3. Cassandra Appender configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the Appender.

Optional

batched

boolean

false

Whether to use batch statements to write log messages to Cassandra.

batchType

BatchStatement.Type

LOGGED

The batch type to use when using batched writes.

bufferSize

int

0

The number of log messages to buffer or batch before writing. If 0, buffering is disabled.

clusterName

String

The name of the Cassandra cluster to connect to.

ignoreExceptions

boolean

true

If false, logging exceptions will be forwarded to the caller of the logging statement. Otherwise, they will be ignored.

keyspace

String

The name of the keyspace containing the table that log messages will be written to.

password

String

The password to use (along with the username) to connect to Cassandra.

table

String

The name of the table to write log messages to.

useClockForTimestampGenerator

boolean

false

Whether to use the configured org.apache.logging.log4j.core.util.Clock as a timestamp generator.

username

String

The username to use to connect to Cassandra. By default, no username or password is used.

useTls

boolean

false

Whether to use TLS/SSL to connect to Cassandra.

Table 4. Cassandra Appender nested elements
Type Multiplicity Description

Filter

zero or one

Allows filtering log events just before they are formatted and sent.

See also appender filtering stage.

ColumnMapping

one or more

A list of column mapping configurations.

SocketAddress

zero or more

A list of Cassandra node addresses to connect to. If absent, localhost:9042 will be used.

See Socket Addresses for the configuration syntax.

Additional runtime dependencies are required for using the Cassandra Appender:

  • Maven

  • Gradle

We assume you use log4j-bom for dependency management.

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-cassandra</artifactId>
  <scope>runtime</scope>
</dependency>

We assume you use log4j-bom for dependency management.

runtimeOnly 'org.apache.logging.log4j:log4j-cassandra'

Socket Addresses

The address of the Cassandra server is specified using the SocketAddress element, which supports the following configuration options:

Table 5. SocketAddress configuration attributes
Attribute Type Default value Description

host

InetAddress

localhost

The host to connect to.

port

int

0

The port to connect to.

Configuration examples

Here is an example Cassandra Appender configuration:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<Cassandra name="CASSANDRA"
           clusterName="test-cluster"
           keyspace="test"
           table="logs"
           bufferSize="10"
           batched="true"> (1)
  (2)
  <SocketAddress host="server1" port="9042"/>
  <SocketAddress host="server2" port="9042"/>
  (3)
  <ColumnMapping name="id"
                 pattern="%uuid{TIME}"
                 columnType="java.util.UUID"/>
  <ColumnMapping name="timestamp" columnType="java.util.Date"/>
  <ColumnMapping name="level" pattern="%level"/>
  <ColumnMapping name="marker" pattern="%marker"/>
  <ColumnMapping name="logger" pattern="%logger"/>
  <ColumnMapping name="message" pattern="%message"/>
  <ColumnMapping name="mdc"
                 columnType="org.apache.logging.log4j.spi.ThreadContextMap"/>
  <ColumnMapping name="ndc"
                 columnType="org.apache.logging.log4j.spi.ThreadContextStack"/>
</Cassandra>
Snippet from an example log4j2.json
"Cassandra": {
  "name": "CASSANDRA",
  "clusterName": "test-cluster",
  "keyspace": "test",
  "table": "logs",
  (1)
  "bufferSize": 10,
  "batched": true,
  (2)
  "SocketAddress": [
    {
      "host": "server1",
      "port": "9042"
    },
    {
      "host": "server2",
      "port": "9042"
    }
  ],
  (3)
  "ColumnMapping": [
    {
      "name": "id",
      "pattern": "%uuid{TIME}",
      "columnType": "java.util.UUID"
    },
    {
      "name": "timestamp",
      "columnType": "java.util.Date"
    },
    {
      "name": "level",
      "pattern": "%level"
    },
    {
      "name": "marker",
      "pattern": "%marker"
    },
    {
      "name": "logger",
      "pattern": "%logger"
    },
    {
      "name": "message",
      "pattern": "%m"
    },
    {
      "name": "mdc",
      "columnType": "org.apache.logging.log4j.spi.ThreadContextMap"
    },
    {
      "name": "ndc",
      "columnType": "org.apache.logging.log4j.spi.ThreadContextStack"
    }
  ]
}
Snippet from an example log4j2.yaml
Cassandra:
  name: "CASSANDRA"
  clusterName: "test-cluster"
  keyspace: "test"
  table: "logs"
  (1)
  bufferSize: 10
  batched: true
  (2)
  SocketAddress:
    - host: "server1"
      port: "9042"
    - host: "server2"
      port: "9042"
  (3)
  ColumnMapping:
    - name: "id"
      pattern: "%uuid{TIME}"
      columnType: "java.util.UUID"
    - name: "timestamp"
      columnType: "java.util.Date"
    - name: "level"
      pattern: "%level"
    - name: "marker"
      pattern: "%marker"
    - name: "logger"
      pattern: "%logger"
    - name: "message"
      pattern: "%message"
    - name: "mdc"
      columnType: "org.apache.logging.log4j.spi.ThreadContextMap"
    - name: "ndc"
      columnType: "org.apache.logging.log4j.spi.ThreadContextStack"
Snippet from an example log4j2.properties
appender.0.type = Cassandra
appender.0.name = CASSANDRA
appender.0.clusterName = test-cluster
appender.0.keyspace = test
appender.0.table = logs
(1)
appender.0.bufferSize = 10
appender.0.batched = true

(2)
appender.0.addr[0].type = SocketAddress
appender.0.addr[0].host = server1
appender.0.addr[0].port = 9042

appender.0.addr[1].type = SocketAddress
appender.0.addr[1].host = server2
appender.0.addr[1].port = 9042

(3)
appender.0.col[0].type = ColumnMapping
appender.0.col[0].name = id
appender.0.col[0].pattern = %uuid{TIME}
appender.0.col[0].columnType = java.util.UUID

appender.0.col[1].type = ColumnMapping
appender.0.col[1].name = timestamp
appender.0.col[1].columnType = java.util.Date

appender.0.col[2].type = ColumnMapping
appender.0.col[2].name = level
appender.0.col[2].pattern = %level

appender.0.col[3].type = ColumnMapping
appender.0.col[3].name = marker
appender.0.col[3].pattern = %marker

appender.0.col[4].type = ColumnMapping
appender.0.col[4].name = logger
appender.0.col[4].pattern = %logger

appender.0.col[5].type = ColumnMapping
appender.0.col[5].name = message
appender.0.col[5].pattern = %message

appender.0.col[6].type = ColumnMapping
appender.0.col[6].name = mdc
appender.0.col[6].columnType = org.apache.logging.log4j.spi.ThreadContextMap

appender.0.col[7].type = ColumnMapping
appender.0.col[7].name = ndc
appender.0.col[7].columnType = org.apache.logging.log4j.spi.ThreadContextStack
1 Enables buffering. Messages are sent in batches of 10.
2 Multiple server addresses can be used.
3 An example of column mapping. See Column mapping for more details.

The example above uses the following table schema:

CREATE TABLE logs
(
    id        timeuuid PRIMARY KEY,
    level     text,
    marker    text,
    logger    text,
    message   text,
    timestamp timestamp,
    mdc       map<text,text>,
    ndc       list<text>
);

JDBC Appender

The JDBCAppender writes log events to a relational database table using standard JDBC. It can be configured to get JDBC connections from different connection sources.

If batch statements are supported by the configured JDBC driver and bufferSize is configured to be a positive number, then log events will be batched.

The appender gets a new connection for each batch of log events. The connection source must be backed by a connection pool; otherwise, performance will suffer greatly.

Table 6. JDBC Appender configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the Appender.

tableName

String

The name of the table to use.

Optional

bufferSize

int

0

The number of log messages to batch before writing. If 0, batching is disabled.

ignoreExceptions

boolean

true

If false, logging exceptions will be forwarded to the caller of the logging statement. Otherwise, they will be ignored.

immediateFail

boolean

false

If true, the appender will fail immediately when JDBC resources are unavailable, instead of waiting to reconnect.

reconnectIntervalMillis

long

5000

If set to a value greater than 0, after an error, the JdbcDatabaseManager will attempt to reconnect to the database after waiting the specified number of milliseconds.

If reconnection fails, an exception will be thrown, which the application can catch if ignoreExceptions is set to false.

Table 7. JDBC Appender nested elements
Type Multiplicity Description

Filter

zero or one

Allows filtering log events just before they are formatted and sent.

See also appender filtering stage.

ColumnMapping

zero or more

A list of column mapping configurations.

Required, unless the deprecated ColumnConfig element is used.

ColumnConfig

zero or more

Deprecated: an older mechanism to define column mappings.

📖 Plugin reference for ColumnConfig

ConnectionSource

one

It specifies how to retrieve JDBC Connection objects.

See Connection Sources for more details.

Layout

zero or one

An optional Layout<? extends Message> implementation that formats a log event as log Message.

If supplied, MapMessages will be treated in a special way.

See Map Message handling for more details.

Connection Sources

When configuring the JDBC Appender, you must specify an implementation of ConnectionSource that the appender will use to get Connection objects.

The following connection sources are available out-of-the-box:

DataSource

This connection source uses JNDI to locate a JDBC DataSource.

As of Log4j 2.17.0, you need to enable the DataSource connection source explicitly by setting the log4j2.enableJndiJdbc configuration property to true.

Table 8. DataSource configuration attributes
Attribute Type Default value Description

jndiName

Name

It specifies the JNDI name of a JDBC DataSource.

Only the java: JNDI protocol is supported.

Required

ConnectionFactory

This connection source can use any factory method to obtain connections. The method must:

  • be public and static,

  • take no parameters,

  • return an object of type Connection or DataSource.

Table 9. ConnectionFactory configuration attributes
Attribute Type Default value Description

class

Class<?>

The fully qualified class name of the class containing the factory method.

Required

method

String

The name of the factory method.

Required
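A minimal factory class satisfying these requirements might look like the sketch below. The class name and JDBC URL are assumptions; a production implementation should hand out connections from a pool, since the appender requests a new connection for each batch.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical factory class, referenced from the configuration as:
//   <ConnectionFactory class="com.example.LoggingConnectionFactory"
//                      method="getConnection"/>
public final class LoggingConnectionFactory {

    // Assumed JDBC URL; replace with your database's connection string.
    private static final String URL = "jdbc:postgresql://localhost:5432/logging";

    private LoggingConnectionFactory() {}

    // The factory method: public, static, no parameters,
    // returning a Connection (returning a DataSource also works).
    public static Connection getConnection() throws SQLException {
        return DriverManager.getConnection(URL);
    }
}
```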

DriverManager

This connection source uses DriverManager to directly create connections using a JDBC Driver.

This connection source is useful during development, but we don’t recommend it in production. Unless the JDBC driver provides connection pooling, the performance of the appender will suffer.

See PoolingDriver for a variant of this connection source that uses a connection pool.

Table 10. DriverManager configuration attributes
Attribute Type Default value Description

connectionString

String

The driver-specific JDBC connection string.

Required

driverClassName

String

autodetected

The fully qualified class name of the JDBC driver to use.

JDBC 4.0 drivers can be automatically detected by DriverManager. See DriverManager for more details.

userName

String

The username to use to connect to the database.

password

String

The password to use to connect to the database.

Table 11. DriverManager nested elements
Type Multiplicity Description

Property

zero or more

A list of key/value pairs to pass to DriverManager.

If supplied, the userName and password attributes will be ignored.

PoolingDriver

The PoolingDriver uses Apache Commons DBCP 2 to configure a JDBC connection pool.

Table 12. PoolingDriver configuration attributes
Attribute Type Default value Description

connectionString

String

The driver-specific JDBC connection string.

Required

driverClassName

String

autodetected

The fully qualified class name of the JDBC driver to use.

JDBC 4.0 drivers can be automatically detected by DriverManager. See DriverManager for more details.

userName

String

The username to use to connect to the database.

password

String

The password to use to connect to the database.

poolName

String

example

The name of the connection pool to register with the DBCP 2 PoolingDriver.

Table 13. PoolingDriver nested elements
Type Multiplicity Description

Property

zero or more

A list of key/value pairs to pass to DriverManager.

If supplied, the userName and password attributes will be ignored.

PoolableConnectionFactory

zero or one

Allows finely tuning the configuration of the DBCP 2 connection pool. The available parameters are the same as those provided by DBCP 2. See DBCP 2 configuration for more details.

📖 Plugin reference for PoolableConnectionFactory

Additional runtime dependencies are required for using PoolingDriver:

  • Maven

  • Gradle

We assume you use log4j-bom for dependency management.

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-jdbc-dbcp2</artifactId>
  <scope>runtime</scope>
</dependency>

We assume you use log4j-bom for dependency management.

runtimeOnly 'org.apache.logging.log4j:log4j-jdbc-dbcp2'
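Using the attributes above, a JDBC Appender backed by PoolingDriver might be configured as in this sketch; the connection string, credentials, pool name, and column mapping are placeholders:

```xml
<JDBC name="JDBC" tableName="logs" bufferSize="10">
  <PoolingDriver connectionString="jdbc:postgresql://localhost:5432/logging"
                 userName="logger"
                 password="changeit"
                 poolName="loggingPool"/>
  <ColumnMapping name="message" pattern="%m"/>
</JDBC>
```

Since the appender gets a new connection for each batch of log events, the pool keeps that operation cheap.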

Map Message handling

If the optional nested element of type Layout<? extends Message> is provided, log events containing messages of type MapMessage will be treated specially. For each column mapping (except those containing literals), the source attribute will be used as the key of the MapMessage entry whose value will be stored in the column name.
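For example, assuming a ColumnMapping with source="resourceId", an application could log a MapMessage as in the sketch below; the key names and values are illustrative:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StringMapMessage;

public class MapMessageExample {

    private static final Logger LOGGER = LogManager.getLogger();

    public static void main(String[] args) {
        // Each key of the MapMessage can be selected by the "source"
        // attribute of a ColumnMapping; "resourceId" is illustrative.
        StringMapMessage message = new StringMapMessage()
                .with("resourceId", "resource-42")
                .with("message", "Something happened");
        LOGGER.info(message);
    }
}
```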

Configuration examples

Here is an example JDBC Appender configuration:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<JDBC name="JDBC"
      tableName="logs"
      bufferSize="10"> (1)
  (2)
  <DataSource jndiName="java:comp/env/jdbc/logging"/>
  (3)
  <ColumnMapping name="id"
                 pattern="%uuid{TIME}"
                 columnType="java.util.UUID"/>
  <ColumnMapping name="timestamp" columnType="java.util.Date"/>
  <ColumnMapping name="level" pattern="%level"/>
  <ColumnMapping name="marker" pattern="%marker"/>
  <ColumnMapping name="logger" pattern="%logger"/>
  <ColumnMapping name="message" pattern="%message"/>
  <ColumnMapping name="mdc"
                 columnType="org.apache.logging.log4j.spi.ThreadContextMap"/>
  <ColumnMapping name="ndc"
                 columnType="org.apache.logging.log4j.spi.ThreadContextStack"/>
</JDBC>
Snippet from an example log4j2.json
"JDBC": {
  "name": "JDBC",
  "tableName": "logs",
  (1)
  "bufferSize": 10,
  (2)
  "DataSource": {
    "jndiName": "java:comp/env/jdbc/logging"
  },
  (3)
  "ColumnMapping": [
    {
      "name": "id",
      "pattern": "%uuid{TIME}",
      "columnType": "java.util.UUID"
    },
    {
      "name": "timestamp",
      "columnType": "java.util.Date"
    },
    {
      "name": "level",
      "pattern": "%level"
    },
    {
      "name": "marker",
      "pattern": "%marker"
    },
    {
      "name": "logger",
      "pattern": "%logger"
    },
    {
      "name": "message",
      "pattern": "%m"
    },
    {
      "name": "mdc",
      "columnType": "org.apache.logging.log4j.spi.ThreadContextMap"
    },
    {
      "name": "ndc",
      "columnType": "org.apache.logging.log4j.spi.ThreadContextStack"
    }
  ]
}
Snippet from an example log4j2.yaml
JDBC:
  name: "JDBC"
  tableName: "logs"
  (1)
  bufferSize: 10
  (2)
  DataSource:
    jndiName: "java:comp/env/jdbc/logging"
  (3)
  ColumnMapping:
    - name: "id"
      pattern: "%uuid{TIME}"
      columnType: "java.util.UUID"
    - name: "timestamp"
      columnType: "java.util.Date"
    - name: "level"
      pattern: "%level"
    - name: "marker"
      pattern: "%marker"
    - name: "logger"
      pattern: "%logger"
    - name: "message"
      pattern: "%message"
    - name: "mdc"
      columnType: "org.apache.logging.log4j.spi.ThreadContextMap"
    - name: "ndc"
      columnType: "org.apache.logging.log4j.spi.ThreadContextStack"
Snippet from an example log4j2.properties
appender.0.type = JDBC
appender.0.name = JDBC
appender.0.tableName = logs
(1)
appender.0.bufferSize = 10

(2)
appender.0.ds.type = DataSource
appender.0.ds.jndiName = java:comp/env/jdbc/logging

(3)
appender.0.col[0].type = ColumnMapping
appender.0.col[0].name = id
appender.0.col[0].pattern = %uuid{TIME}
appender.0.col[0].columnType = java.util.UUID

appender.0.col[1].type = ColumnMapping
appender.0.col[1].name = timestamp
appender.0.col[1].columnType = java.util.Date

appender.0.col[2].type = ColumnMapping
appender.0.col[2].name = level
appender.0.col[2].pattern = %level

appender.0.col[3].type = ColumnMapping
appender.0.col[3].name = marker
appender.0.col[3].pattern = %marker

appender.0.col[4].type = ColumnMapping
appender.0.col[4].name = logger
appender.0.col[4].pattern = %logger

appender.0.col[5].type = ColumnMapping
appender.0.col[5].name = message
appender.0.col[5].pattern = %message

appender.0.col[6].type = ColumnMapping
appender.0.col[6].name = mdc
appender.0.col[6].columnType = org.apache.logging.log4j.spi.ThreadContextMap

appender.0.col[7].type = ColumnMapping
appender.0.col[7].name = ndc
appender.0.col[7].columnType = org.apache.logging.log4j.spi.ThreadContextStack
1 Enables buffering. Messages are sent in batches of 10.
2 A JNDI data source is used.
3 An example of column mapping. See Column mapping for more details.

The example above uses the following table schema:

CREATE TABLE logs
(
    id        BIGINT PRIMARY KEY,
    level     VARCHAR,
    marker    VARCHAR,
    logger    VARCHAR,
    message   VARCHAR,
    timestamp TIMESTAMP,
    mdc       VARCHAR,
    ndc       VARCHAR
);

JPA Appender

This appender is planned to be removed in the next major release! If you are using this library, please get in touch with the Log4j maintainers using the official support channels.

The JPA Appender writes log events to a relational database table using the Jakarta Persistence API 2.2. To use the appender, you need to provide a JPA entity for log events and a dedicated persistence unit (see Persistence configuration below), and add the required runtime dependencies.

Due to breaking changes in the underlying API, the JPA Appender cannot be used with Jakarta Persistence API 3.0 or later.

Persistence configuration

To store log events using JPA, you need to implement a JPA Entity that extends the AbstractLogEventWrapperEntity class. To help you with the implementation, Log4j provides a BasicLogEventEntity class that only lacks an identity field.

A simple AbstractLogEventWrapperEntity implementation might look like:

Snippet from a LogEventEntity.java
@Entity
@Table(name = "log")
public class LogEventEntity extends BasicLogEventEntity {
    private static final long serialVersionUID = 1L;
    private long id;
    (1)
    public LogEventEntity() {}
    (2)
    public LogEventEntity(final LogEvent wrapped) {
        super(wrapped);
    }
    (3)
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    public long getId() {
        return id;
    }
}
1 A public no-argument constructor is required by JPA.
2 A constructor accepting the LogEvent to wrap is required by the appender.
3 An identity field must be provided, since BasicLogEventEntity does not define one.

For performance reasons, we recommend creating a separate persistence unit for logging. This allows you to optimize the unit for logging purposes. The definition of the persistence unit should look like the example below:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                                 http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
             version="2.1">
  <persistence-unit name="logging" transaction-type="RESOURCE_LOCAL">
    (1)
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    (2)
    <non-jta-data-source>jdbc/logging</non-jta-data-source>
    (3)
    <class>
      org.apache.logging.log4j.core.appender.db.jpa.converter.ContextMapAttributeConverter
    </class>
    <class>
      org.apache.logging.log4j.core.appender.db.jpa.converter.ContextStackAttributeConverter
    </class>
    <class>
      org.apache.logging.log4j.core.appender.db.jpa.converter.InstantAttributeConverter
    </class>
    <class>
      org.apache.logging.log4j.core.appender.db.jpa.converter.LevelAttributeConverter
    </class>
    <class>
      org.apache.logging.log4j.core.appender.db.jpa.converter.MarkerAttributeConverter
    </class>
    <class>
      org.apache.logging.log4j.core.appender.db.jpa.converter.MessageAttributeConverter
    </class>
    <class>
      org.apache.logging.log4j.core.appender.db.jpa.converter.StackTraceElementAttributeConverter
    </class>
    <class>
      org.apache.logging.log4j.core.appender.db.jpa.converter.ThrowableAttributeConverter
    </class>
    (4)
    <class>
      com.example.logging.LogEventEntity
    </class>
    (5)
    <shared-cache-mode>NONE</shared-cache-mode>
  </persistence-unit>
</persistence>
1 Specify your JPA provider.
2 A non-JTA data source should be used for performance.
3 If your log event entity extends BasicLogEventEntity, you need to declare these converters.
4 Declare your log event entity.
5 Cache sharing should be set to NONE.

Appender configuration

The JPA appender supports these configuration options:

Table 14. JPA Appender configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the Appender.

tableName

String

The name of the table to use.

persistenceUnitName

String

The name of the persistence unit to use.

entityClassName

Class<?>

The fully qualified name of the entity class to use.

The type must extend AbstractLogEventWrapperEntity.

Optional

bufferSize

int

0

The number of log messages to batch before writing. If 0, batching is disabled.

ignoreExceptions

boolean

true

If false, logging exceptions will be forwarded to the caller of the logging statement. Otherwise, they will be ignored.

Table 15. JPA Appender nested elements
Type Multiplicity Description

Filter

zero or one

Allows filtering log events just before they are formatted and sent.

See also appender filtering stage.

Additional runtime dependencies are required for using the JPA Appender:

  • Maven

  • Gradle

We assume you use log4j-bom for dependency management.

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-jpa</artifactId>
  <scope>runtime</scope>
</dependency>

We assume you use log4j-bom for dependency management.

runtimeOnly 'org.apache.logging.log4j:log4j-jpa'

Configuration examples

Using the persistence unit from section Persistence configuration, the JPA appender can be easily configured as:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<JPA name="JPA"
     persistenceUnitName="logging"
     entityClassName="com.example.logging.LogEventEntity"/>
Snippet from an example log4j2.json
"JPA": {
  "name": "JPA",
  "persistenceUnitName": "logging",
  "entityClassName": "com.example.logging.LogEventEntity"
}
Snippet from an example log4j2.yaml
JPA:
  name: "JPA"
  persistenceUnitName: "logging"
  entityClassName: "com.example.logging.LogEventEntity"
Snippet from an example log4j2.properties
appender.0.type = JPA
appender.0.name = JPA
appender.0.persistenceUnitName = logging
appender.0.entityClassName = com.example.logging.LogEventEntity

NoSQL Appender

The NoSQL Appender writes log events to a document-oriented NoSQL database using an internal lightweight provider interface. It supports the following configuration options:

Table 16. NoSQL Appender configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the Appender.

Optional

bufferSize

int

0

The number of log messages to batch before writing to the database. If 0, batching is disabled.

ignoreExceptions

boolean

true

If false, logging exceptions will be forwarded to the caller of the logging statement. Otherwise, they will be ignored.

Table 17. NoSQL Appender nested elements
Type Multiplicity Description

Filter

zero or one

Allows filtering log events just before they are formatted and sent.

See also appender filtering stage.

KeyValuePair

Zero or more

Adds a simple key/value field to the NoSQL object.

The value attribute of the pair supports runtime property substitution using the current event as context.

Layout

zero or one

An optional Layout<? extends MapMessage> implementation that formats a log event as MapMessage.

See Formatting for more details.
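As a sketch of the KeyValuePair element described above, the following configuration adds two extra top-level fields to the NoSQL document; the keys and values are illustrative, and the appender still requires a nested provider element (omitted here):

```xml
<NoSql name="NOSQL">
  <!-- A NoSQL provider element must be configured here (omitted). -->
  <!-- A fixed top-level field: -->
  <KeyValuePair key="application" value="my-service"/>
  <!-- The value supports lookups evaluated against the current event: -->
  <KeyValuePair key="logLevel" value="$${event:Level}"/>
</NoSql>
```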

Formatting

This appender transforms log events into NoSQL documents in two ways:

  • If the optional Layout configuration element is provided, the MapMessage returned by the layout will be converted into a NoSQL document.

  • Otherwise, a default conversion will be applied. You can enhance the format with additional top-level key/value pairs using nested KeyValuePair configuration elements.

    Click to see an example of default log event formatting
    {
      "level": "WARN",
      "loggerName": "com.example.application.MyClass",
      "message": "Something happened that you might want to know about.",
      "source": {
        "className": "com.example.application.MyClass",
        "methodName": "exampleMethod",
        "fileName": "MyClass.java",
        "lineNumber": 81
      },
      "marker": {
        "name": "SomeMarker",
        "parent": {
          "name": "SomeParentMarker"
        }
      },
      "threadName": "Thread-1",
      "millis": 1368844166761,
      "date": "2013-05-18T02:29:26.761Z",
      "thrown": {
        "type": "java.sql.SQLException",
        "message": "Could not insert record. Connection lost.",
        "stackTrace": [
          {
            "className": "org.example.sql.driver.PreparedStatement$1",
            "methodName": "responder",
            "fileName": "PreparedStatement.java",
            "lineNumber": 1049
          },
          {
            "className": "org.example.sql.driver.PreparedStatement",
            "methodName": "executeUpdate",
            "fileName": "PreparedStatement.java",
            "lineNumber": 738
          },
          {
            "className": "com.example.application.MyClass",
            "methodName": "exampleMethod",
            "fileName": "MyClass.java",
            "lineNumber": 81
          },
          {
            "className": "com.example.application.MainClass",
            "methodName": "main",
            "fileName": "MainClass.java",
            "lineNumber": 52
          }
        ],
        "cause": {
          "type": "java.io.IOException",
          "message": "Connection lost.",
          "stackTrace": [
            {
              "className": "java.nio.channels.SocketChannel",
              "methodName": "write",
              "fileName": null,
              "lineNumber": -1
            },
            {
              "className": "org.example.sql.driver.PreparedStatement$1",
              "methodName": "responder",
              "fileName": "PreparedStatement.java",
              "lineNumber": 1032
            },
            {
              "className": "org.example.sql.driver.PreparedStatement",
              "methodName": "executeUpdate",
              "fileName": "PreparedStatement.java",
              "lineNumber": 738
            },
            {
              "className": "com.example.application.MyClass",
              "methodName": "exampleMethod",
              "fileName": "MyClass.java",
              "lineNumber": 81
            },
            {
              "className": "com.example.application.MainClass",
              "methodName": "main",
              "fileName": "MainClass.java",
              "lineNumber": 52
            }
          ]
        }
      },
      "contextMap": {
        "ID": "86c3a497-4e67-4eed-9d6a-2e5797324d7b",
        "username": "JohnDoe"
      },
      "contextStack": [
        "topItem",
        "anotherItem",
        "bottomItem"
      ]
    }

Providers

The NoSQL Appender only handles the conversion of log events into NoSQL documents, and it delegates database-specific tasks to a NoSQL provider. NoSQL providers are Log4j plugins that implement the NoSqlProvider interface. Log4j Core currently provides the following providers:

MongoDB Providers

Starting with version 2.11.0, Log4j supplies providers for the MongoDB NoSQL database engine, based on the MongoDB synchronous Java driver. The choice of the provider to use depends on:

  • the major version of the MongoDB Java driver your application uses: Log4j supports all major versions starting from version 2.

  • the type of driver API used: either the Legacy API or the Modern API. See MongoDB documentation for the difference between APIs.

The list of dependencies of your application provides a hint as to which driver API your application is using. If your application contains any one of these dependencies, it might use the Legacy API:

  • org.mongodb:mongo-java-driver

  • org.mongodb:mongodb-driver-legacy

If your application only uses org.mongodb:mongodb-driver-sync, it uses the Modern API.

The version of the MongoDB Java driver is not the same as the version of the MongoDB server. See MongoDB compatibility matrix for more information.

In order to use a Log4j MongoDB appender you need to add the following dependencies to your application:

Table 18. MongoDB providers compatibility table
Driver version Driver API Log4j artifact Notes

2.x

Legacy

log4j-mongodb2

Reached end-of-support.

Last released version: 2.12.4

3.x

Legacy

log4j-mongodb3

Reached end-of-support.

Last released version: 2.23.1

4.x

Modern

log4j-mongodb4

5.x or later

Modern

log4j-mongodb

If you are not sure which implementation to choose, log4j-mongodb is the recommended choice.

MongoDb Provider (current)

The MongoDb provider is based on the current version of the MongoDB Java driver. It supports the following configuration options:

Table 19. MongoDb Provider configuration attributes
Attribute Type Default value Description

Required

connection

ConnectionString

It specifies the connection URI used to reach the server.

See Connection URI documentation for its format.

Optional

capped

boolean

false

If true, a capped collection will be used.

collectionSize

long

512 MiB

It specifies the size of the capped collection in bytes.

Additional runtime dependencies are required to use the MongoDb provider:

  • Maven

  • Gradle

We assume you use log4j-bom for dependency management.

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-mongodb</artifactId>
  <scope>runtime</scope>
</dependency>

We assume you use log4j-bom for dependency management.

runtimeOnly 'org.apache.logging.log4j:log4j-mongodb'
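As an illustration of the attributes above, a MongoDb provider writing to a 1 GiB capped collection might be configured as follows (the connection string is a placeholder — adjust it to your environment):

```xml
<NoSql name="MONGO">
  <!-- capped and collectionSize enable a fixed-size collection of 2^30 bytes -->
  <MongoDb connection="mongodb://localhost:27017/logging.logs"
           capped="true"
           collectionSize="1073741824"/>
</NoSql>
```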

MongoDb4 Provider (deprecated)

The log4j-mongodb4 module is deprecated in favor of the current MongoDB provider. It supports the following configuration attributes:

Table 20. MongoDb4 provider configuration attributes
Attribute Type Default value Description

Required

connection

ConnectionString

It specifies the connection URI used to reach the server.

See Connection URI documentation for its format.

Optional

capped

boolean

false

If true, a capped collection will be used.

collectionSize

long

512 MiB

It specifies the size of the capped collection in bytes.

Additional runtime dependencies are required to use the MongoDb4 provider:

  • Maven

  • Gradle

We assume you use log4j-bom for dependency management.

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-mongodb4</artifactId>
  <scope>runtime</scope>
</dependency>

We assume you use log4j-bom for dependency management.

runtimeOnly 'org.apache.logging.log4j:log4j-mongodb4'

Apache CouchDB provider

This provider is planned to be removed in the next major release! If you are using this library, please get in touch with the Log4j maintainers using the official support channels.

The CouchDb Provider allows using the NoSQL Appender with an Apache CouchDB database. It supports the following configuration attributes:

Table 21. CouchDb provider configuration attributes
Attribute Type Default value Description

Standard configuration attributes

protocol

enumeration

http

It specifies the protocol to use to connect to the server. Can be one of:

  • http

  • https

server

String

localhost

The host name of the CouchDB server.

port

int

80 (http), 443 (https)

It specifies the TCP port to use.

databaseName

String

The name of the database to connect to.

username

String

The username for authentication.

password

String

The password for authentication.

Factory method configuration attributes

factoryClassName

Class<?>

The fully qualified name of a class that contains a factory method returning either a CouchDbClient or a CouchDbProperties object.

The class must be public.

factoryMethodName

String

The name of the factory method. The method must be public and static, take no parameters, and return either a CouchDbClient or a CouchDbProperties object.
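For example, the factory method attributes could point at a user-supplied class (the class and method names below are hypothetical — substitute your own):

```xml
<NoSql name="COUCH">
  <!-- com.example.CouchDbClientFactory.createClient() is assumed to be
       a public static no-argument method returning a CouchDbClient -->
  <CouchDb factoryClassName="com.example.CouchDbClientFactory"
           factoryMethodName="createClient"/>
</NoSql>
```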

Additional runtime dependencies are required to use the CouchDb provider:

  • Maven

  • Gradle

We assume you use log4j-bom for dependency management.

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-couchdb</artifactId>
  <scope>runtime</scope>
</dependency>

We assume you use log4j-bom for dependency management.

runtimeOnly 'org.apache.logging.log4j:log4j-couchdb'

Configuration examples

To connect the NoSQL Appender to a MongoDB database, you only need to provide a connection string:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<NoSql name="MONGO">
  <MongoDb connection="mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"/>
</NoSql>
Snippet from an example log4j2.json
"NoSql": {
  "name": "MONGO",
  "MongoDb": {
    "connection": "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
  }
}
Snippet from an example log4j2.yaml
NoSql:
  name: "MONGO"
  MongoDb:
    connection: "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
Snippet from an example log4j2.properties
appender.1.type = NoSql
appender.1.name = MONGO
appender.1.provider.type = MongoDB
appender.1.provider.connection = mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs

Make sure that the org.bson and com.mongodb loggers do not log to a MongoDB database at the DEBUG level, since that would cause recursive logging:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<Root level="INFO">
  <AppenderRef ref="MONGO"/>
</Root>
<Logger name="org.bson"
        level="WARN"
        additivity="false"> (1)
  <AppenderRef ref="FILE"/>
</Logger>
<Logger name="com.mongodb"
        level="WARN"
        additivity="false"> (1)
  <AppenderRef ref="FILE"/>
</Logger>
Snippet from an example log4j2.json
"Root": {
  "level": "INFO",
  "AppenderRef": {
    "ref": "MONGO"
  }
},
"Logger": [
  {
    "name": "org.bson",
    "level": "WARN",
    "additivity": false, (1)
    "AppenderRef": {
      "ref": "FILE"
    }
  },
  {
    "name": "com.mongodb",
    "level": "WARN",
    "additivity": false, (1)
    "AppenderRef": {
      "ref": "FILE"
    }
  }
]
Snippet from an example log4j2.yaml
Root:
  level: "INFO"
  AppenderRef:
    ref: "MONGO"
Logger:
  - name: "org.bson"
    level: "WARN"
    additivity: false (1)
    AppenderRef:
      ref: "FILE"
  - name: "com.mongodb"
    level: "WARN"
    additivity: false (1)
    AppenderRef:
      ref: "FILE"
Snippet from an example log4j2.properties
rootLogger.level = INFO
rootLogger.appenderRef.0.ref = MONGO

logger.0.name = org.bson
logger.0.level = WARN
logger.0.additivity = false (1)
logger.0.appenderRef.0.ref = FILE

logger.1.name = com.mongodb
logger.1.level = WARN
logger.1.additivity = false (1)
logger.1.appenderRef.0.ref = FILE
1 Remember to set the additivity configuration attribute to false.
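Putting the appender and the logger configuration together, a minimal complete log4j2.xml might look like the following sketch (the connection string and file path are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <NoSql name="MONGO">
      <MongoDb connection="mongodb://localhost:27017/logging.logs"/>
    </NoSql>
    <File name="FILE" fileName="logs/driver.log">
      <PatternLayout pattern="%d %p %c: %m%n"/>
    </File>
  </Appenders>
  <Loggers>
    <!-- Route driver-internal logging to a file to avoid recursive logging -->
    <Logger name="org.bson" level="WARN" additivity="false">
      <AppenderRef ref="FILE"/>
    </Logger>
    <Logger name="com.mongodb" level="WARN" additivity="false">
      <AppenderRef ref="FILE"/>
    </Logger>
    <Root level="INFO">
      <AppenderRef ref="MONGO"/>
    </Root>
  </Loggers>
</Configuration>
```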

A similar configuration for an Apache CouchDB database looks like:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<NoSql name="COUCH">
  <CouchDB protocol="https"
           username="${env:DB_USER}"
           password="${env:DB_PASS}"
           server="localhost"
           port="5984"
           databaseName="logging"/>
</NoSql>
Snippet from an example log4j2.json
"CouchDb": {
  "name": "COUCH",
  "CouchDB": {
    "protocol": "https",
    "username": "${env:DB_USER}",
    "password": "${env:DB_PASS"},
    "server": "localhost",
    "port": 5984,
    "databaseName": "logging"
  }
Snippet from an example log4j2.yaml
NoSql:
  name: "COUCH"
  CouchDB:
    protocol: "https"
    username: "${env:DB_USER}"
    password: "${env:DB_PASS}"
    server: "localhost"
    port: 5984
    databaseName: "logging"
Snippet from an example log4j2.properties
appender.0.type = NoSql
appender.0.name = COUCH
appender.0.provider.type = CouchDB
appender.0.provider.protocol = https
appender.0.provider.username = ${env:DB_USER}
appender.0.provider.password = ${env:DB_PASS}
appender.0.provider.server = localhost
appender.0.provider.port = 5984
appender.0.provider.databaseName = logging

You can define additional fields to the NoSQL document using KeyValuePair elements, for example:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<NoSql name="MONGO">
  <MongoDb connection="mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"/>
  <KeyValuePair key="startTime" value="${date:yyyy-MM-dd hh:mm:ss.SSS}"/> (1)
  <KeyValuePair key="currentTime" value="$${date:yyyy-MM-dd hh:mm:ss.SSS}"/> (2)
</NoSql>
Snippet from an example log4j2.json
"NoSql": {
  "name": "MONGO",
  "MongoDb": {
    "connection": "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
  },
  "KeyValuePair": [
    {
      "key": "startTime",
      "value": "${date:yyyy-MM-dd hh:mm:ss.SSS}" (1)
    },
    {
      "key": "currentTime",
      "value": "$${date:yyyy-MM-dd hh:mm:ss.SSS}" (2)
    }
  ]
}
Snippet from an example log4j2.yaml
NoSql:
  name: "MONGO"
  MongoDb:
    connection: "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
  KeyValuePair:
    - key: "startTime"
      value: "${date:yyyy-MM-dd hh:mm:ss.SSS}" (1)
    - key: "currentTime"
      value: "$${date:yyyy-MM-dd hh:mm:ss.SSS}" (2)
Snippet from an example log4j2.properties
appender.0.type = NoSql
appender.0.name = MONGO
appender.0.provider.type = MongoDB
appender.0.provider.connection = mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs

appender.0.kv[0].type = KeyValuePair
appender.0.kv[0].key = startTime
appender.0.kv[0].value = ${date:yyyy-MM-dd hh:mm:ss.SSS} (1)

appender.0.kv[1].type = KeyValuePair
appender.0.kv[1].key = currentTime
appender.0.kv[1].value = $${date:yyyy-MM-dd hh:mm:ss.SSS} (2)
1 This lookup is evaluated at configuration time and gives the time when Log4j was most recently reconfigured.
2 This lookup is evaluated at runtime and gives the current date. See runtime lookup evaluation for more details.