In my previous post, I covered the basics of setting up logging for your projects. Today I will show you a few tips to help you get the most out of your loggers.
Use a config
A configurable logging setup allows for greater flexibility and scalability in your projects. By defining your logging configuration outside of your code, you can easily change logging levels, handlers, and formatters without modifying the source code, making your logging system more adaptable to different environments and needs.
Using dictConfig
Python’s `logging.config.dictConfig` function enables you to configure logging using a dictionary. This approach is highly versatile, allowing you to define complex logging configurations with nested structures.
```python
import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        },
        'detailed': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(module)s - %(lineno)d - %(message)s',
        },
        'simple': {
            'format': '%(levelname)s - %(message)s',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
            'level': 'INFO',
        },
        'file_info': {
            'class': 'logging.FileHandler',
            'filename': 'info.log',
            'formatter': 'standard',
            'level': 'INFO',
        },
        'file_debug': {
            'class': 'logging.FileHandler',
            'filename': 'debug.log',
            'formatter': 'detailed',
            'level': 'DEBUG',
        },
        'file_error': {
            'class': 'logging.FileHandler',
            'filename': 'error.log',
            'formatter': 'detailed',
            'level': 'ERROR',
        },
    },
    'loggers': {
        '': {  # root logger
            'handlers': ['console', 'file_info'],
            'level': 'INFO',
        },
        'app.module1': {
            'handlers': ['console', 'file_debug'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'app.module2': {
            'handlers': ['console', 'file_error'],
            'level': 'ERROR',
            'propagate': False,
        },
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger(__name__)
logger.error("example")
```
This allows us to have a single, well-defined configuration for our project logger and simplifies the process of importing it from other locations.
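To see how per-module loggers pick up this kind of configuration, here is a small self-contained sketch (using a trimmed version of the config above, with the file handlers removed) that checks the effective level each logger ends up with:

```python
import logging
import logging.config

# A trimmed version of the dictConfig example above, kept minimal so the
# snippet is self-contained.
LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'level': 'INFO'},
    },
    'loggers': {
        '': {'handlers': ['console'], 'level': 'INFO'},
        'app.module1': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
    },
}
logging.config.dictConfig(LOGGING_CONFIG)

# Explicitly configured loggers use their own level; any other logger
# falls back to the closest configured ancestor (here, the root logger).
print(logging.getLogger('app.module1').getEffectiveLevel())  # 10 == logging.DEBUG
print(logging.getLogger('app.other').getEffectiveLevel())    # 20 == logging.INFO
```

This fallback behavior is what lets you configure just a handful of loggers by name while every other module in the project inherits sensible defaults.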
Using JSON
For even greater flexibility, you can store your logging configuration in JSON. This externalizes the configuration, making it easy to change without altering the codebase. Python’s `json` module can be used to load these configurations, which are then passed to `dictConfig`.
```python
import json
import logging.config

with open('logging_config.json', 'r') as f:
    config = json.load(f)

logging.config.dictConfig(config)
logger = logging.getLogger(__name__)
```
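For reference, the `logging_config.json` file loaded above might look like the following (a minimal sketch mirroring a trimmed version of the earlier dictConfig example; the handler names and levels are illustrative):

```json
{
    "version": 1,
    "disable_existing_loggers": false,
    "formatters": {
        "simple": {"format": "%(levelname)s - %(message)s"}
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "simple",
            "level": "INFO"
        }
    },
    "loggers": {
        "": {"handlers": ["console"], "level": "INFO"}
    }
}
```

Note that JSON uses lowercase `false`, unlike the Python dict version.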
Other utilities for handling configuration may work, but may require some extra code.
Use one logger per module
Adopting a logger per module strategy enhances the granularity of your logs, making it easier to trace where a log entry originated from. This practice is beneficial for debugging and maintaining large codebases.
Rationale
Using a separate logger in each module (usually with the module’s `__name__` as the logger’s name) ties log messages to their source, improving readability and troubleshooting efficiency. It also allows more granular configuration, letting you set log levels, formatters, and handlers on a per-module basis.
```mermaid
flowchart LR
    A[Root Logger] -->|Uses| K[Formatter & Handler]
    A -->|Processed & Forwarded by| B[Logger: auth]
    A -->|Processed & Forwarded by| C[Logger: database]
    A -->|Processed & Forwarded by| D[Logger: api]
    B --> E[Logger: auth.login]
    B --> F[Logger: auth.signup]
    C --> G[Logger: database.query]
    C --> H[Logger: database.connection]
    D --> I[Logger: api.request]
    D --> J[Logger: api.response]
    classDef logger fill;
    classDef formatter fill;
    classDef handler fill;
    class A,B,C,D,E,F,G,H,I,J logger;
    class K handler;
```
Setting up child loggers
In Python, loggers follow a hierarchical structure, meaning you can create child loggers that inherit settings from their parent. This is useful for maintaining consistent logging behavior across related modules while allowing for some level of customization. Setting up a child logger is quite simple.
```python
import logging

# In your main module
main_logger = logging.getLogger('myapp')

# In a submodule
sub_logger = logging.getLogger('myapp.submodule')
```
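The inheritance above can be demonstrated end to end. In this sketch the child logger defines no handlers or level of its own, so its records propagate up and are emitted by the parent’s handler (the `StringIO` stream is just there to capture the output):

```python
import io
import logging

# Attach a single handler to the parent logger only.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(name)s - %(message)s"))

main_logger = logging.getLogger("myapp")
main_logger.addHandler(handler)
main_logger.setLevel(logging.INFO)

# The child has no handlers and no explicit level: it inherits the
# effective level from 'myapp' and its records bubble up to its handler.
sub_logger = logging.getLogger("myapp.submodule")
sub_logger.info("hello from the submodule")

print(stream.getvalue().strip())  # myapp.submodule - hello from the submodule
```

Note that the record keeps the child’s name even though the parent’s handler emits it, so you never lose track of where a message originated.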
Write informative logging messages
The quality of log messages significantly impacts their usefulness. Messages should be concise yet informative, providing enough context to understand the event without needing to delve into the code.
The importance of adding context
Contextual information turns a log message from a cryptic note into a clear narrative, explaining not just what happened but under what circumstances. This can include data such as module, function, line, thread, user IDs, transaction IDs, or relevant state information to better understand runtime issues in our application. Ensure that you sanitize data correctly when adding extra context to your logs, so as not to leak any secrets or confidential data. If you would like more information on this subject, check out my post about secure coding practices.
Techniques to add context
- Structured logging: Use key-value pairs to structure additional context, making logs easily searchable and more informative.
- Template messages: Define a consistent message template that includes placeholders for contextual data.
- Log levels: Appropriately use log levels (DEBUG, INFO, WARNING, ERROR, CRITICAL) to indicate the severity and nature of log messages.
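As a sketch of the first technique, here is one common way to do structured logging with only the standard library: a custom `Formatter` subclass that serializes each record as a JSON object (the payload keys are illustrative, not a standard):

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

# Capture output in a string buffer so the example is self-contained.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("structured_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("payment processed")
print(stream.getvalue().strip())
# {"level": "INFO", "logger": "structured_demo", "message": "payment processed"}
```

Because every line is valid JSON, these logs can be ingested directly by log aggregators and queried by key rather than grepped as free text.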
The `Formatter` comes with a variety of convenient built-in attributes that can be included in the format string to enhance context:
- `%(name)s`: Logger name
- `%(levelno)s`: Numeric logging level
- `%(levelname)s`: Text logging level
- `%(pathname)s`: Full pathname of the source file where the logging call was made
- `%(filename)s`: Filename portion of the pathname
- `%(module)s`: Module name (filename without the extension)
- `%(lineno)d`: Line number where the logging call was made
- `%(funcName)s`: Function or method name where the logging call was made
- `%(created)f`: Time when the `LogRecord` was created (value returned by `time.time()`)
- `%(asctime)s`: Human-readable time when the `LogRecord` was created
- `%(msecs)d`: Millisecond portion of the creation time
- `%(relativeCreated)d`: Time in milliseconds when the `LogRecord` was created, relative to the time the logging module was loaded
- `%(thread)d`: Thread ID
- `%(threadName)s`: Thread name
- `%(process)d`: Process ID
- `%(message)s`: The logged message
Additionally, you can format these attributes in a similar way to formatting values in f-strings.
For example:
```python
"%(levelname)-8s|%(filename)-20s|%(lineno)5d|%(asctime)-25s|%(funcName)-20s|%(process)7d|%(thread)16d|%(threadName)-25s|%(module)-20s|%(pathname)-50s|%(message)s"
```
The logging calls themselves also accept %-style placeholders: you can pass a mapping of values that is merged into the message, letting you include additional context while deferring the string formatting until the record is actually emitted.

```python
logger.info("User %(user)s has initiated a transaction", {'user': 'user1'})
```
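Beyond message-level placeholders, the `extra` keyword argument attaches custom attributes to the `LogRecord` itself, so they can appear in the handler’s format string alongside the built-in attributes. A minimal sketch (the `user` attribute is an example of our own choosing, not a built-in):

```python
import io
import logging

# Capture output in a buffer so the example is self-contained.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
# '%(user)s' refers to the custom attribute supplied via `extra` below.
handler.setFormatter(logging.Formatter("%(levelname)s - %(user)s - %(message)s"))

logger = logging.getLogger("extra_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("transaction initiated", extra={"user": "user1"})
print(stream.getvalue().strip())  # INFO - user1 - transaction initiated
```

One caveat: if a record arrives without the custom attribute, this formatter raises an error, so `extra` works best when every call site (or a logging adapter/filter) supplies it consistently.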
As you can see, you can get very specific about what is logged and how it is presented.
Final thoughts
Effective logging is an art. By experimenting with configurations, taking a modular approach to logging, and writing messages with clear context, you can turn your logs from mere output into a source of valuable insight. Keep in mind that the objective is not just to gather data but to capture the narrative behind the errors in your code, giving you and your team the tools to find and fix problems effectively.