# Logging Configuration
This document provides a comprehensive guide to the logging strategy and configuration within the IT Ticketing Service. Effective logging is crucial for debugging, monitoring, and understanding application behavior in all environments. This guide covers log levels, formatting, application-specific logging practices, and production considerations.
A solid logging implementation is the foundation for effective troubleshooting. For more details on how to use logs for debugging, see the Troubleshooting Guide.
## Log Levels
Log levels are configured to control the verbosity of the application's output, ensuring that relevant information is captured without excessive noise. The configuration is managed in the application.properties file.
### Package-Level Logging Configuration
The primary mechanism for controlling log output is through package-level configuration. This allows for granular control over different parts of the application and its dependencies. The default log level for the application's own codebase is set to INFO.
```properties
# src/main/resources/application.properties
# Sets the default log level for all classes within the com.slalom.demo.ticketing package and sub-packages.
logging.level.com.slalom.demo.ticketing=INFO
```
This setting ensures that all informational, warning, and error messages from our application code are logged, while DEBUG and TRACE messages are suppressed by default. To enable more detailed logging for a specific feature during development, you can override this for a more specific package or class:
```properties
# Example: Enable DEBUG logging for the service layer only
# logging.level.com.slalom.demo.ticketing.service=DEBUG
```
### Root Logger Settings
Spring Boot configures a root logger with a default level of INFO. Any logger that is not explicitly configured inherits its level from the root logger. This provides a sensible baseline for all libraries and frameworks used in the project.
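For illustration, the inherited root level can be made explicit (or changed) in `application.properties`; the value shown below is simply the Spring Boot default:

```properties
# Explicitly set the root logger level (INFO is already the Spring Boot default).
# Any logger without its own logging.level.* entry inherits this level.
logging.level.root=INFO
```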
### Framework Logging
To reduce log noise from verbose frameworks, we explicitly set the log levels for key Spring and AMQP packages. This practice keeps the logs focused on application-specific events while still capturing important framework-level information.
```properties
# src/main/resources/application.properties
# Reduces verbosity from Spring's AMQP components, focusing on key lifecycle and message events.
logging.level.org.springframework.amqp=INFO
# Controls logging for Spring's web layer, including request mapping and dispatching.
logging.level.org.springframework.web=INFO
```
| Logger Name | Default Level | Purpose |
|---|---|---|
| `com.slalom.demo.ticketing` | INFO | Application-specific business logic. |
| `org.springframework.web` | INFO | HTTP request handling, controller mappings. |
| `org.springframework.amqp` | INFO | RabbitMQ interactions, message publishing/consumption. |
| `org.hibernate` | WARN (default) | JPA/Hibernate operations. Set to DEBUG to see generated SQL. |
For more details on general application settings, refer to the Application Properties documentation.
## Log Format
The format of log messages is critical for both human readability during development and machine parsability in production monitoring systems.
### Console Output Patterns
By default, Spring Boot (using Logback) provides a color-coded, human-readable console output. The default pattern includes:
- Timestamp (to millisecond precision)
- Log level (`ERROR`, `WARN`, `INFO`, `DEBUG`, `TRACE`)
- Process ID (PID)
- Thread name
- Logger name (typically the fully-qualified class name)
- The log message
An example of the default console output:

```
2023-10-27T10:30:00.123  INFO 12345 --- [http-nio-8080-exec-1] c.s.d.t.controller.TicketController : REST: Creating new ticket
```
While the default is sufficient for local development, it can be customized in application.properties if needed:
```properties
# Example: Customize the console log pattern
# logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
```
### File Output Configuration
The current configuration does not write logs to a file by default, which is standard for cloud-native applications that log to stdout/stderr. However, for certain deployment models or for debugging purposes, file logging can be enabled easily:
```properties
# To log to a specific file (relative to the application's working directory):
logging.file.name=logs/it-ticketing-service.log
# Alternatively, to write a file named spring.log into a specific directory:
# logging.file.path=/var/log/it-ticketing-service/
```
When file logging is enabled, Spring Boot applies a separate, non-color-coded pattern defined by logging.pattern.file.
### JSON Structured Logging
For production environments, it is highly recommended to switch to structured (JSON) logging. JSON logs are easily parsed, indexed, and queried by log aggregation systems like ELK Stack or Datadog.
This is not currently configured. To implement it, you would add the logstash-logback-encoder dependency and configure a logback-spring.xml file to use its LogstashEncoder. This standardizes log data, turning it into a machine-readable event stream.
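As a sketch of what that could look like (assuming the logstash-logback-encoder dependency is on the classpath; the appender name is illustrative, not part of the current codebase):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- src/main/resources/logback-spring.xml (hypothetical) -->
<configuration>
    <appender name="JSON_CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <!-- Emits each log event as a single JSON object on one line -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <root level="INFO">
        <appender-ref ref="JSON_CONSOLE"/>
    </root>
</configuration>
```

In practice this file is often combined with Spring profiles (for example, JSON output only under a `prod` profile), so local development keeps the human-readable pattern.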
## Application Logging
The application uses SLF4J as its logging facade, which is the standard for Spring Boot applications. This provides a consistent logging API that is decoupled from the underlying implementation (Logback).
### Using SLF4J with Lombok `@Slf4j`
To eliminate boilerplate code, the @Slf4j annotation from Lombok is used to automatically inject a logger instance into our classes. This is the standard pattern across the codebase.
```java
// src/main/java/com/slalom/demo/ticketing/controller/TicketController.java
@RestController
@RequestMapping("/api/tickets")
@RequiredArgsConstructor
@Slf4j // Injects a logger instance: private static final org.slf4j.Logger log = org.slf4j.LoggerFactory.getLogger(TicketController.class);
public class TicketController {

    private final TicketService ticketService;

    @PostMapping
    public ResponseEntity<TicketResponse> createTicket(@Valid @RequestBody TicketRequest request) {
        // Use the injected 'log' instance
        log.info("REST: Creating new ticket");
        TicketResponse response = ticketService.createTicket(request);
        return new ResponseEntity<>(response, HttpStatus.CREATED);
    }

    // ...
}
```
### Parameterized Logging
For performance and clarity, all logging statements must use parameterized messages rather than string concatenation. This avoids the cost of string manipulation if the log level is disabled.
Correct:
```java
// src/main/java/com/slalom/demo/ticketing/controller/TicketController.java
log.info("REST: Fetching tickets with filters - status: {}, requesterEmail: {}", status, requesterEmail);
```
Incorrect:
```java
// Avoid this pattern! It creates unnecessary strings even when INFO is disabled.
log.info("REST: Fetching tickets with filters - status: " + status + ", requesterEmail: " + requesterEmail);
```
### Log Correlation IDs
To trace a single request across different services or through asynchronous message queues, a unique correlation ID (also known as a Trace ID) is essential. While not explicitly implemented in the current code, the standard approach is to use SLF4J's Mapped Diagnostic Context (MDC).
A Spring HandlerInterceptor or a Servlet Filter can be used to extract a correlation ID from an incoming request header (e.g., X-Request-ID) or generate a new one, placing it into the MDC. The logging pattern can then be updated to include the MDC value in every log line, allowing for easy filtering in a log aggregator.
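As an illustrative sketch of that filter (not part of the current codebase; the header name, MDC key, and class name are all assumptions):

```java
// Hypothetical example - not present in the current codebase.
import jakarta.servlet.*;
import jakarta.servlet.http.HttpServletRequest;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

import java.io.IOException;
import java.util.UUID;

@Component
public class CorrelationIdFilter implements Filter {

    private static final String HEADER_NAME = "X-Request-ID"; // assumed header name
    private static final String MDC_KEY = "requestId";        // assumed MDC key

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Reuse the caller's ID if present, otherwise generate one
        String correlationId = ((HttpServletRequest) request).getHeader(HEADER_NAME);
        if (correlationId == null || correlationId.isBlank()) {
            correlationId = UUID.randomUUID().toString();
        }
        MDC.put(MDC_KEY, correlationId);
        try {
            chain.doFilter(request, response);
        } finally {
            // Always clean up so the ID does not leak across pooled request threads
            MDC.remove(MDC_KEY);
        }
    }
}
```

The console or file pattern could then reference the MDC value with `%X{requestId}` so the ID appears on every log line for that request.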
## Production Logging
Logging in a production environment has different requirements than in development, focusing on stability, performance, and integration with monitoring tools.
### Appropriate Log Levels for Production
The default level of INFO for the application package is a reasonable starting point for production. It provides visibility into the application's flow without being overly verbose.
- `INFO`: Use for significant lifecycle events (e.g., application startup, processing a request, publishing a message).
- `WARN`: Use for unexpected but recoverable events (e.g., failing over to a fallback, using a default value when configuration is missing). These are events that operators should be aware of.
- `ERROR`: Use for unrecoverable errors that require intervention (e.g., database connection failure, critical business logic exception). `ERROR` logs should ideally trigger alerts.
DEBUG and TRACE levels must not be enabled in production by default due to the significant performance overhead and log volume they generate. They can be enabled dynamically for short periods to troubleshoot a specific issue if the logging framework supports it.
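One way to enable a level dynamically is Spring Boot Actuator's loggers endpoint. This is an assumption about the deployment: it requires the actuator dependency and the `loggers` endpoint to be exposed, and the host/port below are illustrative:

```shell
# Temporarily raise the service package to DEBUG at runtime (no restart)
curl -X POST http://localhost:8080/actuator/loggers/com.slalom.demo.ticketing.service \
     -H "Content-Type: application/json" \
     -d '{"configuredLevel": "DEBUG"}'

# Revert by clearing the explicit level so the logger inherits again
curl -X POST http://localhost:8080/actuator/loggers/com.slalom.demo.ticketing.service \
     -H "Content-Type: application/json" \
     -d '{"configuredLevel": null}'
```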
### Log Aggregation and Monitoring
In a distributed, containerized environment, logs should be treated as an event stream. The application should log to stdout and stderr, and the container orchestration platform (e.g., Kubernetes) should be responsible for collecting these streams.
These logs should be forwarded to a centralized log aggregation system (e.g., ELK Stack, Splunk, Datadog, AWS CloudWatch Logs). This enables:
- Centralized searching and filtering across all services.
- Creation of dashboards to monitor error rates and application health.
- Configuration of alerts based on log patterns (e.g., a spike in `ERROR` messages).
For more information on how logging fits into the broader observability strategy, see the Monitoring documentation.
### Performance Considerations
Logging, especially when done incorrectly, can become a performance bottleneck.
- Asynchronous Logging: For high-throughput applications, configure an asynchronous appender in Logback. This moves the I/O-intensive work of writing logs to disk or the network to a separate background thread, minimizing the impact on request-processing threads.
- Avoid Logging in Loops: Be cautious when logging inside tight loops. If necessary, ensure the log level is `DEBUG` or `TRACE` so it can be disabled in production.
- Sensitive Data: Never log sensitive information like passwords, API keys, or personally identifiable information (PII). Be especially careful when logging entire request/response objects. Use DTOs and custom `toString()` methods to control what gets logged.
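For the asynchronous-logging point above, Logback's `AsyncAppender` can wrap an existing appender in a `logback-spring.xml`. This is a sketch under the assumption that such a file is introduced; the appender names and queue size are illustrative:

```xml
<!-- Hypothetical logback-spring.xml fragment -->
<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Buffers log events and writes them on a background thread -->
    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <queueSize>512</queueSize>
        <!-- 0 = never discard TRACE/DEBUG/INFO events when the queue fills -->
        <discardingThreshold>0</discardingThreshold>
        <appender-ref ref="CONSOLE"/>
    </appender>

    <root level="INFO">
        <appender-ref ref="ASYNC"/>
    </root>
</configuration>
```

Note the trade-off: a bounded queue protects request threads, but under sustained overload events may be discarded or the caller may block, depending on how the appender is tuned.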