:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been
found useful in the past.

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object. This is true not only within the same module, but also
across modules as long as it is in the same Python interpreter process. It is
true for references to the same object; additionally, application code can
define and configure a parent logger in one module and create (but not
configure) a child logger in a separate module, and all logger calls to the
child will pass up to the parent. Here is a main module::

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this::

    2005-03-23 23:47:11,663 - spam_application - INFO -
       creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
       creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
       created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
       calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
       doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
       done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
       finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
       calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
       received a call to "some_function"
    2005-03-23 23:47:11,673 - spam_application - INFO -
       done with auxiliary_module.some_function()

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects. The :meth:`~Logger.addHandler` method has no
minimum or maximum quota for the number of handlers you may add. Sometimes it
will be beneficial for an application to log all messages of all severities to a
text file while simultaneously logging errors or above to the console. To set
this up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example::

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warn message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application. Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and stay
dormant until you need them again. At that time, the only change that needs to
happen is to modify the severity level of the logger and/or handler to
``DEBUG``.

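For example, assuming the ``logger`` and ``ch`` objects from the snippet above,
reviving the dormant debug output on the console is a one-line change::

    ch.setLevel(logging.DEBUG)  # console handler now passes debug messages too
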
.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let's also assume that the file should contain timestamps, but the console
messages should not. Here's how you can achieve this::

    import logging

    # set up logging to file - see previous section for more details
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M',
                        filename='/temp/myapp.log',
                        filemode='w')
    # define a Handler which writes INFO messages or higher to the sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    # tell the handler to use this format
    console.setFormatter(formatter)
    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

    root        : INFO     Jackdaws love my big sphinx of quartz.
    myapp.area1 : INFO     How quickly daft jumping zebras vex.
    myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
    myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

    10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
    10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
    10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
    10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
    10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.

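For instance, a sketch of a third destination which captures only errors in a
separate file (the path is illustrative)::

    errors = logging.FileHandler('/temp/myapp-errors.log')
    errors.setLevel(logging.ERROR)
    errors.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(message)s'))
    logging.getLogger('').addHandler(errors)
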
Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

    import logging
    import logging.config
    import time

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warn message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

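For reference, a minimal ``logging.conf`` defining the ``simpleExample`` logger
used above might look like this (a sketch along the lines of the
configuration-file format used by :func:`~config.fileConfig`, not part of the
original example)::

    [loggers]
    keys=root,simpleExample

    [handlers]
    keys=consoleHandler

    [formatters]
    keys=simpleFormatter

    [logger_root]
    level=DEBUG
    handlers=consoleHandler

    [logger_simpleExample]
    level=DEBUG
    handlers=consoleHandler
    qualname=simpleExample
    propagate=0

    [handler_consoleHandler]
    class=StreamHandler
    level=DEBUG
    formatter=simpleFormatter
    args=(sys.stdout,)

    [formatter_simpleFormatter]
    format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
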
And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.send(struct.pack('>L', len(data_to_send)))
    s.send(data_to_send)
    s.close()
    print('complete')

Dealing with handlers that block
--------------------------------

.. currentmodule:: logging.handlers

Sometimes you have to get your logging handlers to do their work without
blocking the thread you're logging from. This is common in Web applications,
though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the
:class:`SMTPHandler`: sending emails can take a long time, for a
number of reasons outside the developer's control (for example, a poorly
performing mail or network infrastructure). But almost any network-based
handler can block: even a :class:`SocketHandler` operation may do a
DNS query under the hood which is too slow (and this query can be deep in the
socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a
:class:`QueueHandler` to those loggers which are accessed from
performance-critical threads. They simply write to their queue, which can be
sized to whatever capacity is needed, or created with no upper bound on its
size. The write to the queue will typically be accepted quickly, though you
will probably need to catch the :exc:`queue.Full` exception as a precaution
in your code. If you are a library developer who has performance-critical
threads in your code, be sure to document this (together with a suggestion to
attach only ``QueueHandlers`` to your loggers) for the benefit of other
developers who will use your code.

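As a precaution of that kind, here is a minimal sketch (an assumption of this
recipe, not library code) of a :class:`QueueHandler` subclass which drops
events rather than raising when a bounded queue fills up::

    import logging.handlers
    import queue

    class NonBlockingQueueHandler(logging.handlers.QueueHandler):
        def enqueue(self, record):
            try:
                self.queue.put_nowait(record)
            except queue.Full:
                pass  # losing a log event beats blocking the application
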
The second part of the solution is :class:`QueueListener`, which has been
designed as the counterpart to :class:`QueueHandler`. A
:class:`QueueListener` is very simple: it's passed a queue and some handlers,
and it fires up an internal thread which listens to its queue for LogRecords
sent from ``QueueHandlers`` (or any other source of ``LogRecords``, for that
matter). The ``LogRecords`` are removed from the queue and passed to the
handlers for processing.

The advantage of having a separate :class:`QueueListener` class is that you
can use the same instance to service multiple ``QueueHandlers``. This is more
resource-friendly than, say, having threaded versions of the existing handler
classes, which would eat up one thread per handler for no particular benefit.

An example of using these two classes follows (imports omitted)::

    que = queue.Queue(-1)  # no limit on size
    queue_handler = QueueHandler(que)
    handler = logging.StreamHandler()
    listener = QueueListener(que, handler)
    root = logging.getLogger()
    root.addHandler(queue_handler)
    formatter = logging.Formatter('%(threadName)s: %(message)s')
    handler.setFormatter(formatter)
    listener.start()
    # The log output will display the thread which generated
    # the event (the main thread) rather than the internal
    # thread which monitors the internal queue. This is what
    # you want to happen.
    root.warning('Look out!')
    listener.stop()

which, when run, will produce::

    MainThread: Look out!

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

    import logging, logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost',
                        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    # don't bother with a formatter, since a socket handler sends the event as
    # an unformatted pickle
    rootLogger.addHandler(socketHandler)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`socketserver`
module. Here is a basic working example::

    import pickle
    import logging
    import logging.handlers
    import socketserver
    import struct


    class LogRecordStreamHandler(socketserver.StreamRequestHandler):
        """Handler for a streaming logging request.

        This basically logs the record using whatever logging policy is
        configured locally.
        """

        def handle(self):
            """
            Handle multiple requests - each expected to be a 4-byte length,
            followed by the LogRecord in pickle format. Logs the record
            according to whatever policy is configured locally.
            """
            while True:
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk = chunk + self.connection.recv(slen - len(chunk))
                obj = self.unPickle(chunk)
                record = logging.makeLogRecord(obj)
                self.handleLogRecord(record)

        def unPickle(self, data):
            return pickle.loads(data)

        def handleLogRecord(self, record):
            # if a name is specified, we use the named logger rather than the one
            # implied by the record.
            if self.server.logname is not None:
                name = self.server.logname
            else:
                name = record.name
            logger = logging.getLogger(name)
            # N.B. EVERY record gets logged. This is because Logger.handle
            # is normally called AFTER logger-level filtering. If you want
            # to do filtering, do it at the client end to save wasting
            # cycles and network bandwidth!
            logger.handle(record)

    class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
        """
        Simple TCP socket-based logging receiver suitable for testing.
        """

        allow_reuse_address = True

        def __init__(self, host='localhost',
                     port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                     handler=LogRecordStreamHandler):
            socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
            self.abort = 0
            self.timeout = 1
            self.logname = None

        def serve_until_stopped(self):
            import select
            abort = 0
            while not abort:
                rd, wr, ex = select.select([self.socket.fileno()],
                                           [], [],
                                           self.timeout)
                if rd:
                    self.handle_request()
                abort = self.abort

    def main():
        logging.basicConfig(
            format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
        tcpserver = LogRecordSocketReceiver()
        print('About to start TCP server...')
        tcpserver.serve_until_stopped()

    if __name__ == '__main__':
        main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

    About to start TCP server...
       59 root            INFO     Jackdaws love my big sphinx of quartz.
       59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
       69 myapp.area1     INFO     How quickly daft jumping zebras vex.
       69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
       69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`~handlers.SocketHandler.makePickle` method and implementing your
alternative there, as well as adapting the above script to use your alternative
serialization.

Adding contextual information to your logging output
----------------------------------------------------

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. the remote client's username, or IP address). Although you
could use the *extra* parameter to achieve this, it's not always convenient to
pass the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea
because these instances are not garbage collected. While this is not a problem
in practice, when the number of :class:`Logger` instances is dependent on the
level of granularity you want to use in logging an application, it could
be hard to manage if the number of :class:`Logger` instances becomes
effectively unbounded.

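For completeness, here is a short sketch of the *extra* approach mentioned
above (the ``clientip`` and ``user`` keys are illustrative, and must match the
keys used in the format string)::

    import logging

    FORMAT = '%(asctime)-15s %(clientip)s %(user)-8s %(message)s'
    logging.basicConfig(format=FORMAT)
    d = {'clientip': '192.168.0.1', 'user': 'fbloggs'}
    logging.warning('Protocol problem: %s', 'connection reset', extra=d)
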
Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

    def debug(self, msg, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The :meth:`~LoggerAdapter.process` method of :class:`LoggerAdapter` is where the
contextual information is added to the logging output. It's passed the message
and keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you pass an 'extra' keyword
argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's __dict__, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override
:meth:`~LoggerAdapter.process` to do what you need. Here is a simple example::

    class CustomAdapter(logging.LoggerAdapter):
        """
        This example adapter expects the passed in dict-like object to have a
        'connid' key, whose value in brackets is prepended to the log message.
        """
        def process(self, msg, kwargs):
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

which you can use like this::

    logger = logging.getLogger(__name__)
    adapter = CustomAdapter(logger, {'connid': some_conn_id})

Then any events that you log to the adapter will have the value of
``some_conn_id`` prepended to the log messages.

Using objects other than dicts to pass contextual information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You don't need to pass an actual dict to a :class:`LoggerAdapter` - you could
pass an instance of a class which implements ``__getitem__`` and ``__iter__`` so
that it looks like a dict to logging. This would be useful if you want to
generate values dynamically (whereas the values in a dict would be constant).

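A minimal sketch of that idea, usable with the ``CustomAdapter`` above (the
``get_current_conn_id`` helper is hypothetical - substitute however you track
the live connection)::

    class ConnInfo:
        def __iter__(self):
            return iter(['connid'])

        def __getitem__(self, name):
            if name == 'connid':
                return get_current_conn_id()  # hypothetical helper
            raise KeyError(name)

    adapter = CustomAdapter(logging.getLogger(__name__), ConnInfo())
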
.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example, in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a threadlocal
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add information from the request - say, the remote IP address and the remote
user's username - to the ``LogRecord``, using the attribute names 'ip' and
'user' as in the ``LoggerAdapter`` example above. In that case, the same format
string can be used to get similar output to that shown above. Here's an example
script::

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """
        This is a filter which injects contextual information into the log.

        Rather than use actual contextual information, we just use random
        data in this demo.
        """

        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):
            record.ip = choice(ContextFilter.IPS)
            record.user = choice(ContextFilter.USERS)
            return True

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.getLogger('a.b.c')
        a2 = logging.getLogger('d.e.f')

        f = ContextFilter()
        a1.addFilter(f)
        a2.addFilter(f)
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

    2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
    2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters

.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`~handlers.SocketHandler`, and have a
separate process which implements a socket server which reads from the socket
and logs to file. (If you prefer, you can dedicate one thread in one of the
existing processes to perform this function.)
:ref:`This section <network-logging>` documents this approach in more detail and
includes a working socket receiver which can be used as a starting point for you
to adapt in your own applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`~multiprocessing.Lock` class from this module to serialize access to the
file from your processes. The existing :class:`FileHandler` and subclasses do
not make use of :mod:`multiprocessing` at present, though they may do so in the
future. Note that at present, the :mod:`multiprocessing` module does not provide
working lock functionality on all platforms (see
http://bugs.python.org/issue3770).

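A minimal sketch of that handler idea (an assumption of this recipe, not
library code; it requires the handler to be created before the child processes
are started, so that they all share the same lock, and is subject to the
platform caveats above)::

    import logging
    import multiprocessing

    class MultiProcessingFileHandler(logging.FileHandler):
        def createLock(self):
            # replace the default thread lock, so that Handler.handle()
            # serializes emit() across processes as well as threads
            self.lock = multiprocessing.RLock()
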
.. currentmodule:: logging.handlers

Alternatively, you can use a ``Queue`` and a :class:`QueueHandler` to send
all logging events to one of the processes in your multi-process application.
The following example script demonstrates how you can do this; in the example
a separate listener process listens for events sent by other processes and logs
them according to its own logging configuration. Although the example only
demonstrates one way of doing it (for example, you may want to use a listener
thread rather than a separate listener process -- the implementation would be
analogous) it does allow for completely different logging configurations for
the listener and the other processes in your application, and can be used as
the basis for code meeting your own specific requirements::

    # You'll need these imports in your own code
    import logging
    import logging.handlers
    import multiprocessing

    # Next two import lines for this demo only
    from random import choice, random
    import time

    # Because you'll want to define the logging configurations for listener and workers, the
    # listener and worker process functions take a configurer parameter which is a callable
    # for configuring logging for that process. These functions are also passed the queue,
    # which they use for communication.
    #
    # In practice, you can configure the listener however you want, but note that in this
    # simple example, the listener does not apply level or filter logic to received records.
    # In practice, you would probably want to do this logic in the worker processes, to avoid
    # sending events which would be filtered out between processes.
    #
    # The size of the rotated files is made small so you can see the results easily.
    def listener_configurer():
        root = logging.getLogger()
        h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10)
        f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
        h.setFormatter(f)
        root.addHandler(h)

    # This is the listener process top-level loop: wait for logging events
    # (LogRecords) on the queue and handle them, quit when you get a None for a
    # LogRecord.
    def listener_process(queue, configurer):
        configurer()
        while True:
            try:
                record = queue.get()
                if record is None:  # We send this as a sentinel to tell the listener to quit.
                    break
                logger = logging.getLogger(record.name)
                logger.handle(record)  # No level or filter logic applied - just do it!
            except Exception:
                import sys, traceback
                print('Whoops! Problem:', file=sys.stderr)
                traceback.print_exc(file=sys.stderr)

    # Arrays used for random selections in this demo

    LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
              logging.ERROR, logging.CRITICAL]

    LOGGERS = ['a.b.c', 'd.e.f']

    MESSAGES = [
        'Random message #1',
        'Random message #2',
        'Random message #3',
    ]

    # The worker configuration is done at the start of the worker process run.
    # Note that on Windows you can't rely on fork semantics, so each process
    # will run the logging configuration code when it starts.
    def worker_configurer(queue):
        h = logging.handlers.QueueHandler(queue)  # Just the one handler needed
        root = logging.getLogger()
        root.addHandler(h)
        root.setLevel(logging.DEBUG)  # send all messages, for demo; no other level or filter logic applied.

    # This is the worker process top-level loop, which just logs ten events with
    # random intervening delays before terminating.
    # The print messages are just so you know it's doing something!
    def worker_process(queue, configurer):
        configurer(queue)
        name = multiprocessing.current_process().name
        print('Worker started: %s' % name)
        for i in range(10):
            time.sleep(random())
            logger = logging.getLogger(choice(LOGGERS))
            level = choice(LEVELS)
            message = choice(MESSAGES)
            logger.log(level, message)
        print('Worker finished: %s' % name)

    # Here's where the demo gets orchestrated. Create the queue, create and start
    # the listener, create ten workers and start them, wait for them to finish,
    # then send a None to the queue to tell the listener to finish.
    def main():
        queue = multiprocessing.Queue(-1)
        listener = multiprocessing.Process(target=listener_process,
                                           args=(queue, listener_configurer))
        listener.start()
        workers = []
        for i in range(10):
            worker = multiprocessing.Process(target=worker_process,
                                             args=(queue, worker_configurer))
            workers.append(worker)
            worker.start()
        for w in workers:
            w.join()
        queue.put_nowait(None)
        listener.join()

    if __name__ == '__main__':
        main()

A variant of the above script keeps the logging in the main process, in a
separate thread::

    import logging
    import logging.config
    import logging.handlers
    from multiprocessing import Process, Queue
    import random
    import threading
    import time

    def logger_thread(q):
        while True:
            record = q.get()
            if record is None:
                break
            logger = logging.getLogger(record.name)
            logger.handle(record)

    def worker_process(q):
        qh = logging.handlers.QueueHandler(q)
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(qh)
        levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
                  logging.CRITICAL]
        loggers = ['foo', 'foo.bar', 'foo.bar.baz',
                   'spam', 'spam.ham', 'spam.ham.eggs']
        for i in range(100):
            lvl = random.choice(levels)
            logger = logging.getLogger(random.choice(loggers))
            logger.log(lvl, 'Message no. %d', i)

    if __name__ == '__main__':
        q = Queue()
        d = {
            'version': 1,
            'formatters': {
                'detailed': {
                    'class': 'logging.Formatter',
                    'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
                }
            },
            'handlers': {
                'console': {
                    'class': 'logging.StreamHandler',
                    'level': 'INFO',
                },
                'file': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'foofile': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-foo.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'errors': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-errors.log',
                    'mode': 'w',
                    'level': 'ERROR',
                    'formatter': 'detailed',
                },
            },
            'loggers': {
                'foo': {
                    'handlers': ['foofile']
                }
            },
            'root': {
                'level': 'DEBUG',
                'handlers': ['console', 'file', 'errors']
            },
        }
        workers = []
        for i in range(5):
            wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,))
            workers.append(wp)
            wp.start()
        logging.config.dictConfig(d)
        lp = threading.Thread(target=logger_thread, args=(q,))
        lp.start()
        # At this point, the main process could do some useful work of its own
        # Once it's done that, it can wait for the workers to terminate...
        for wp in workers:
            wp.join()
        # And now tell the logging thread to finish up, too
        q.put(None)
        lp.join()

This variant shows how you can apply configuration for particular loggers
- e.g. the ``foo`` logger has a special handler which stores all events in the
``foo`` subsystem in a file ``mplog-foo.log``. This will be used by the logging
machinery in the main process (even though the logging events are generated in
the worker processes) to direct the messages to the appropriate destinations.

Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`~handlers.RotatingFileHandler`::

    import glob
    import logging
    import logging.handlers

    LOG_FILENAME = 'logging_rotatingfile_example.out'

    # Set up a specific logger with our desired output level
    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)

    # Add the log message handler to the logger
    handler = logging.handlers.RotatingFileHandler(
                  LOG_FILENAME, maxBytes=20, backupCount=5)

    my_logger.addHandler(handler)

    # Log some messages
    for i in range(20):
        my_logger.debug('i = %d' % i)

    # See what files are created
    logfiles = glob.glob('%s*' % LOG_FILENAME)

    for filename in logfiles:
        print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

    logging_rotatingfile_example.out
    logging_rotatingfile_example.out.1
    logging_rotatingfile_example.out.2
    logging_rotatingfile_example.out.3
    logging_rotatingfile_example.out.4
    logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the ``.6`` file is erased.

Obviously this example sets the log length much too small as an extreme
example. You would want to set *maxBytes* to an appropriate value.

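For instance, a more realistic setting might rotate at about a megabyte (the
figure is just an example)::

    handler = logging.handlers.RotatingFileHandler(
        LOG_FILENAME, maxBytes=1024 * 1024, backupCount=5)
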
Use of alternative formatting styles
------------------------------------

When logging was added to the Python standard library, the only way of
formatting messages with variable content was to use the %-formatting
method. Since then, Python has gained two new formatting approaches:
:class:`string.Template` (added in Python 2.4) and :meth:`str.format`
(added in Python 2.6).

Logging (as of 3.2) provides improved support for these two additional
formatting styles. The :class:`Formatter` class has been enhanced to take an
additional, optional keyword parameter named ``style``. This defaults to
``'%'``, but other possible values are ``'{'`` and ``'$'``, which correspond
to the other two formatting styles. Backwards compatibility is maintained by
default (as you would expect), but by explicitly specifying a style parameter,
you get the ability to specify format strings which work with
:meth:`str.format` or :class:`string.Template`. Here's an example console
session to show the possibilities:

.. code-block:: pycon

    >>> import logging
    >>> root = logging.getLogger()
    >>> root.setLevel(logging.DEBUG)
    >>> handler = logging.StreamHandler()
    >>> bf = logging.Formatter('{asctime} {name} {levelname:8s} {message}',
    ...                        style='{')
    >>> handler.setFormatter(bf)
    >>> root.addHandler(handler)
    >>> logger = logging.getLogger('foo.bar')
    >>> logger.debug('This is a DEBUG message')
    2010-10-28 15:11:55,341 foo.bar DEBUG    This is a DEBUG message
    >>> logger.critical('This is a CRITICAL message')
    2010-10-28 15:12:11,526 foo.bar CRITICAL This is a CRITICAL message
    >>> df = logging.Formatter('$asctime $name ${levelname} $message',
    ...                        style='$')
    >>> handler.setFormatter(df)
    >>> logger.debug('This is a DEBUG message')
    2010-10-28 15:13:06,924 foo.bar DEBUG This is a DEBUG message
    >>> logger.critical('This is a CRITICAL message')
    2010-10-28 15:13:11,494 foo.bar CRITICAL This is a CRITICAL message

Note that the formatting of logging messages for final output to logs is
completely independent of how an individual logging message is constructed.
That can still use %-formatting, as shown here::

    >>> logger.error('This is an%s %s %s', 'other,', 'ERROR,', 'message')
    2010-10-28 15:19:29,833 foo.bar ERROR This is another, ERROR, message

Logging calls (``logger.debug()``, ``logger.info()`` etc.) only take
positional parameters for the actual logging message itself, with keyword
parameters used only for determining options for how to handle the actual
logging call (e.g. the ``exc_info`` keyword parameter to indicate that
traceback information should be logged, or the ``extra`` keyword parameter
to indicate additional contextual information to be added to the log). So
you cannot directly make logging calls using :meth:`str.format` or
:class:`string.Template` syntax, because internally the logging package
uses %-formatting to merge the format string and the variable arguments.
There would be no way of changing this while preserving backward compatibility,
since all logging calls which are out there in existing code will be using
%-format strings.

There is, however, a way that you can use {}- and $- formatting to construct
your individual log messages. Recall that for a message you can use an
arbitrary object as a message format string, and that the logging package will
call ``str()`` on that object to get the actual format string. Consider the
following two classes::

    class BraceMessage:
        def __init__(self, fmt, *args, **kwargs):
            self.fmt = fmt
            self.args = args
            self.kwargs = kwargs

        def __str__(self):
            return self.fmt.format(*self.args, **self.kwargs)

    class DollarMessage:
        def __init__(self, fmt, **kwargs):
            self.fmt = fmt
            self.kwargs = kwargs

        def __str__(self):
            from string import Template
            return Template(self.fmt).substitute(**self.kwargs)

Either of these can be used in place of a format string, to allow {}- or
$-formatting to be used to build the actual "message" part which appears in the
formatted log output in place of "%(message)s" or "{message}" or "$message".
It's a little unwieldy to use the class names whenever you want to log
something, but it's quite palatable if you use an alias such as ``__`` (double
underscore, not to be confused with ``_``, the single underscore used as a
synonym/alias for :func:`gettext.gettext` or its brethren).

The above classes are not included in Python, though they're easy enough to
copy and paste into your own code. They can be used as follows (assuming that
they're declared in a module called ``wherever``):

.. code-block:: pycon

    >>> from wherever import BraceMessage as __
    >>> print(__('Message with {0} {name}', 2, name='placeholders'))
    Message with 2 placeholders
    >>> class Point: pass
    ...
    >>> p = Point()
    >>> p.x = 0.5
    >>> p.y = 0.5
    >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})',
    ...          point=p))
    Message with coordinates: (0.50, 0.50)
    >>> from wherever import DollarMessage as __
    >>> print(__('Message with $num $what', num=2, what='placeholders'))
    Message with 2 placeholders

While the above examples use ``print()`` to show how the formatting works, you
would of course use ``logger.debug()`` or similar to actually log using this
approach.

One thing to note is that you pay no significant performance penalty with this
approach: the actual formatting happens not when you make the logging call, but
when (and if) the logged message is actually about to be output to a log by a
handler. So the only slightly unusual thing which might trip you up is that the
parentheses go around the format string and the arguments, not just the format
string. That's because the ``__`` notation is just syntax sugar for a
constructor call to one of the ``XXXMessage`` classes.

If you prefer, you can use a :class:`LoggerAdapter` to achieve a similar effect
to the above, as in the following example::

    import logging

    class Message(object):
        def __init__(self, fmt, args):
            self.fmt = fmt
            self.args = args

        def __str__(self):
            return self.fmt.format(*self.args)

    class StyleAdapter(logging.LoggerAdapter):
        def __init__(self, logger, extra=None):
            super(StyleAdapter, self).__init__(logger, extra or {})

        def log(self, level, msg, *args, **kwargs):
            if self.isEnabledFor(level):
                msg, kwargs = self.process(msg, kwargs)
                self.logger._log(level, Message(msg, args), (), **kwargs)

    logger = StyleAdapter(logging.getLogger(__name__))

    def main():
        logger.debug('Hello, {}', 'world!')

    if __name__ == '__main__':
        logging.basicConfig(level=logging.DEBUG)
        main()

The above script should log the message ``Hello, world!`` when run with
Python 3.2 or later.

.. currentmodule:: logging

.. _custom-logrecord:

Customizing ``LogRecord``
-------------------------

Every logging event is represented by a :class:`LogRecord` instance.
When an event is logged and not filtered out by a logger's level, a
:class:`LogRecord` is created, populated with information about the event and
then passed to the handlers for that logger (and its ancestors, up to and
including the logger where further propagation up the hierarchy is disabled).
Before Python 3.2, there were only two places where this creation was done:

* :meth:`Logger.makeRecord`, which is called in the normal process of
  logging an event. This invoked :class:`LogRecord` directly to create an
  instance.
* :func:`makeLogRecord`, which is called with a dictionary containing
  attributes to be added to the LogRecord. This is typically invoked when a
  suitable dictionary has been received over the network (e.g. in pickle form
  via a :class:`~handlers.SocketHandler`, or in JSON form via an
  :class:`~handlers.HTTPHandler`).

This has usually meant that if you need to do anything special with a
:class:`LogRecord`, you've had to do one of the following.

* Create your own :class:`Logger` subclass, which overrides
  :meth:`Logger.makeRecord`, and set it using :func:`~logging.setLoggerClass`
  before any loggers that you care about are instantiated.
* Add a :class:`Filter` to a logger or handler, which does the
  necessary special manipulation you need when its
  :meth:`~Filter.filter` method is called.

The first approach would be a little unwieldy in the scenario where (say)
several different libraries wanted to do different things. Each would attempt
to set its own :class:`Logger` subclass, and the one which did this last would
win.

The second approach works reasonably well for many cases, but does not allow
you to e.g. use a specialized subclass of :class:`LogRecord`. Library
developers can set a suitable filter on their loggers, but they would have to
remember to do this every time they introduced a new logger (which they would
do simply by adding new packages or modules and doing ::

    logger = logging.getLogger(__name__)

at module level). It's probably one too many things to think about. Developers
could also add the filter to a :class:`~logging.NullHandler` attached to their
top-level logger, but this would not be invoked if an application developer
attached a handler to a lower-level library logger - so output from that
handler would not reflect the intentions of the library developer.

In Python 3.2 and later, :class:`~logging.LogRecord` creation is done through a
factory, which you can specify. The factory is just a callable you can set with
:func:`~logging.setLogRecordFactory`, and interrogate with
:func:`~logging.getLogRecordFactory`. The factory is invoked with the same
signature as the :class:`~logging.LogRecord` constructor, as :class:`LogRecord`
is the default setting for the factory.

This approach allows a custom factory to control all aspects of LogRecord
creation. For example, you could return a subclass, or just add some additional
attributes to the record once created, using a pattern similar to this::

    old_factory = logging.getLogRecordFactory()

    def record_factory(*args, **kwargs):
        record = old_factory(*args, **kwargs)
        record.custom_attribute = 0xdecafbad
        return record

    logging.setLogRecordFactory(record_factory)

This pattern allows different libraries to chain factories together, and as
long as they don't overwrite each other's attributes or unintentionally
overwrite the attributes provided as standard, there should be no surprises.
However, it should be borne in mind that each link in the chain adds run-time
overhead to all logging operations, and the technique should only be used when
the use of a :class:`Filter` does not provide the desired result.

.. _zeromq-handlers:

Subclassing QueueHandler - a ZeroMQ example
-------------------------------------------

You can use a :class:`QueueHandler` subclass to send messages to other kinds
of queues, for example a ZeroMQ 'publish' socket. In the example below, the
socket is created separately and passed to the handler (as its 'queue')::

    import zmq   # using pyzmq, the Python binding for ZeroMQ
    import json  # for serializing records portably

    ctx = zmq.Context()
    sock = zmq.Socket(ctx, zmq.PUB)  # or zmq.PUSH, or other suitable value
    sock.bind('tcp://*:5556')        # or wherever

    class ZeroMQSocketHandler(QueueHandler):
        def enqueue(self, record):
            data = json.dumps(record.__dict__)
            self.queue.send(data)

    handler = ZeroMQSocketHandler(sock)

Of course there are other ways of organizing this, for example passing in the
data needed by the handler to create the socket::

    class ZeroMQSocketHandler(QueueHandler):
        def __init__(self, uri, socktype=zmq.PUB, ctx=None):
            self.ctx = ctx or zmq.Context()
            socket = zmq.Socket(self.ctx, socktype)
            socket.bind(uri)
            QueueHandler.__init__(self, socket)

        def enqueue(self, record):
            data = json.dumps(record.__dict__)
            self.queue.send(data)

        def close(self):
            self.queue.close()

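With this organization, usage might look like this (the URI is just an
example)::

    handler = ZeroMQSocketHandler('tcp://*:5556')
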
Subclassing QueueListener - a ZeroMQ example
--------------------------------------------

You can also subclass :class:`QueueListener` to get messages from other kinds
of queues, for example a ZeroMQ 'subscribe' socket. Here's an example::

    class ZeroMQSocketListener(QueueListener):
        def __init__(self, uri, *handlers, **kwargs):
            self.ctx = kwargs.get('ctx') or zmq.Context()
            socket = zmq.Socket(self.ctx, zmq.SUB)
            socket.setsockopt(zmq.SUBSCRIBE, '')  # subscribe to everything
            socket.connect(uri)
            QueueListener.__init__(self, socket, *handlers)

        def dequeue(self):
            msg = self.queue.recv()
            return logging.makeLogRecord(json.loads(msg))

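Usage might then look like this (again, the URI is illustrative)::

    listener = ZeroMQSocketListener('tcp://localhost:5556',
                                    logging.StreamHandler())
    listener.start()
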
.. seealso::

   Module :mod:`logging`
      API reference for the logging module.

   Module :mod:`logging.config`
      Configuration API for the logging module.

   Module :mod:`logging.handlers`
      Useful handlers included with the logging module.

   :ref:`A basic logging tutorial <logging-basic-tutorial>`

   :ref:`A more advanced logging tutorial <logging-advanced-tutorial>`

An example dictionary-based configuration
-----------------------------------------

Below is an example of a logging configuration dictionary - it's taken from
the `documentation on the Django project <https://docs.djangoproject.com/en/1.3/topics/logging/#configuring-logging>`_.
This dictionary is passed to :func:`~config.dictConfig` to put the configuration into effect::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'verbose': {
                'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
            },
            'simple': {
                'format': '%(levelname)s %(message)s'
            },
        },
        'filters': {
            'special': {
                '()': 'project.logging.SpecialFilter',
                'foo': 'bar',
            }
        },
        'handlers': {
            'null': {
                'level': 'DEBUG',
                'class': 'django.utils.log.NullHandler',
            },
            'console': {
                'level': 'DEBUG',
                'class': 'logging.StreamHandler',
                'formatter': 'simple'
            },
            'mail_admins': {
                'level': 'ERROR',
                'class': 'django.utils.log.AdminEmailHandler',
                'filters': ['special']
            }
        },
        'loggers': {
            'django': {
                'handlers': ['null'],
                'propagate': True,
                'level': 'INFO',
            },
            'django.request': {
                'handlers': ['mail_admins'],
                'level': 'ERROR',
                'propagate': False,
            },
            'myproject.custom': {
                'handlers': ['console', 'mail_admins'],
                'level': 'INFO',
                'filters': ['special']
            }
        }
    }

For more information about this configuration, you can see the `relevant
section <https://docs.djangoproject.com/en/1.3/topics/logging/#configuring-logging>`_
of the Django documentation.

.. _cookbook-rotator-namer:

Using a rotator and namer to customize log rotation processing
--------------------------------------------------------------

An example of how you can define a namer and rotator is given in the following
snippet, which shows zlib-based compression of the log file::

    def namer(name):
        return name + ".gz"

    def rotator(source, dest):
        with open(source, "rb") as sf:
            data = sf.read()
            compressed = zlib.compress(data, 9)
            with open(dest, "wb") as df:
                df.write(compressed)
        os.remove(source)

    rh = logging.handlers.RotatingFileHandler(...)
    rh.rotator = rotator
    rh.namer = namer

These are not "true" .gz files, as they are bare compressed data, with no
"container" such as you'd find in an actual gzip file. This snippet is just
for illustration purposes.

A more elaborate multiprocessing example
----------------------------------------

The following working example shows how logging can be used with multiprocessing
using configuration files. The configurations are fairly simple, but serve to
illustrate how more complex ones could be implemented in a real multiprocessing
scenario.

In the example, the main process spawns a listener process and some worker
processes. Each of the main process, the listener and the workers have three
separate configurations (the workers all share the same configuration). We can
see logging in the main process, how the workers log to a QueueHandler and how
the listener implements a QueueListener and a more complex logging
configuration, and arranges to dispatch events received via the queue to the
handlers specified in the configuration. Note that these configurations are
purely illustrative, but you should be able to adapt this example to your own
scenario.

Here's the script - the docstrings and the comments hopefully explain how it
works::

1364
import logging.handlers
1365
from multiprocessing import Process, Queue, Event, current_process
1372
A simple handler for logging events. It runs in the listener process and
1373
dispatches events to loggers based on the name in the received record,
1374
which then get dispatched, by the logging system, to the handlers
1375
configured for those loggers.
1377
def handle(self, record):
1378
logger = logging.getLogger(record.name)
1379
# The process name is transformed just to show that it's the listener
1380
# doing the logging to files and console
1381
record.processName = '%s (for %s)' % (current_process().name, record.processName)
1382
logger.handle(record)
1384
def listener_process(q, stop_event, config):
1386
This could be done in the main process, but is just done in a separate
1387
process for illustrative purposes.
1389
This initialises logging according to the specified configuration,
1390
starts the listener and waits for the main process to signal completion
1391
via the event. The listener is then stopped, and the process exits.
1393
logging.config.dictConfig(config)
1394
listener = logging.handlers.QueueListener(q, MyHandler())
1396
if os.name == 'posix':
1397
# On POSIX, the setup logger will have been configured in the
1398
# parent process, but should have been disabled following the
1400
# On Windows, since fork isn't used, the setup logger won't
1401
# exist in the child, so it would be created and the message
1402
# would appear - hence the "if posix" clause.
1403
logger = logging.getLogger('setup')
1404
logger.critical('Should not appear, because of disabled logger ...')
1408
def worker_process(config):
1410
A number of these are spawned for the purpose of illustration. In
1411
practice, they could be a heterogenous bunch of processes rather than
1412
ones which are identical to each other.
1414
This initialises logging according to the specified configuration,
1415
and logs a hundred messages with random levels to randomly selected
1418
A small sleep is added to allow other processes a chance to run. This
1419
is not strictly needed, but it mixes the output from the different
1420
processes a bit more than if it's left out.
1422
logging.config.dictConfig(config)
1423
levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
1425
loggers = ['foo', 'foo.bar', 'foo.bar.baz',
1426
'spam', 'spam.ham', 'spam.ham.eggs']
1427
if os.name == 'posix':
1428
# On POSIX, the setup logger will have been configured in the
1429
# parent process, but should have been disabled following the
1431
# On Windows, since fork isn't used, the setup logger won't
1432
# exist in the child, so it would be created and the message
1433
# would appear - hence the "if posix" clause.
1434
logger = logging.getLogger('setup')
1435
logger.critical('Should not appear, because of disabled logger ...')
1436
for i in range(100):
1437
lvl = random.choice(levels)
1438
logger = logging.getLogger(random.choice(loggers))
1439
logger.log(lvl, 'Message no. %d', i)
    def main():
        q = Queue()
        # The main process gets a simple configuration which prints to the console.
        config_initial = {
            'version': 1,
            'formatters': {
                'detailed': {
                    'class': 'logging.Formatter',
                    'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
                }
            },
            'handlers': {
                'console': {
                    'class': 'logging.StreamHandler',
                    'level': 'INFO',
                },
            },
            'root': {
                'level': 'DEBUG',
                'handlers': ['console']
            },
        }
        # The worker process configuration is just a QueueHandler attached to the
        # root logger, which allows all messages to be sent to the queue.
        # We disable existing loggers to disable the "setup" logger used in the
        # parent process. This is needed on POSIX because the logger will
        # be there in the child following a fork().
        config_worker = {
            'version': 1,
            'disable_existing_loggers': True,
            'handlers': {
                'queue': {
                    'class': 'logging.handlers.QueueHandler',
                    'queue': q,
                },
            },
            'root': {
                'level': 'DEBUG',
                'handlers': ['queue']
            },
        }
        # The listener process configuration shows that the full flexibility of
        # logging configuration is available to dispatch events to handlers however
        # you want.
        # We disable existing loggers to disable the "setup" logger used in the
        # parent process. This is needed on POSIX because the logger will
        # be there in the child following a fork().
        config_listener = {
            'version': 1,
            'disable_existing_loggers': True,
            'formatters': {
                'detailed': {
                    'class': 'logging.Formatter',
                    'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
                },
                'simple': {
                    'class': 'logging.Formatter',
                    'format': '%(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
                }
            },
            'handlers': {
                'console': {
                    'class': 'logging.StreamHandler',
                    'level': 'INFO',
                    'formatter': 'simple',
                },
                'file': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'foofile': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-foo.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'errors': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-errors.log',
                    'mode': 'w',
                    'level': 'ERROR',
                    'formatter': 'detailed',
                },
            },
            'loggers': {
                'foo': {
                    'handlers': ['foofile']
                }
            },
            'root': {
                'level': 'DEBUG',
                'handlers': ['console', 'file', 'errors']
            },
        }
        # Log some initial events, just to show that logging in the parent works
        # normally.
        logging.config.dictConfig(config_initial)
        logger = logging.getLogger('setup')
        logger.info('About to create workers ...')
        workers = []
        for i in range(5):
            wp = Process(target=worker_process, name='worker %d' % (i + 1),
                         args=(config_worker,))
            workers.append(wp)
            wp.start()
            logger.info('Started worker: %s', wp.name)
        logger.info('About to create listener ...')
        stop_event = Event()
        lp = Process(target=listener_process, name='listener',
                     args=(q, stop_event, config_listener))
        lp.start()
        logger.info('Started listener')
        # We now hang around for the workers to finish their work.
        for wp in workers:
            wp.join()
        # Workers all done, listening can now stop.
        # Logging in the parent still works normally.
        logger.info('Telling listener to stop ...')
        stop_event.set()
        lp.join()
        logger.info('All done.')

    if __name__ == '__main__':
        main()

Inserting a BOM into messages sent to a SysLogHandler
-----------------------------------------------------

`RFC 5424 <http://tools.ietf.org/html/rfc5424>`_ requires that a
Unicode message be sent to a syslog daemon as a set of bytes which have the
following structure: an optional pure-ASCII component, followed by a UTF-8 Byte
Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the `relevant
section of the specification <http://tools.ietf.org/html/rfc5424#section-6>`_.)

In Python 3.1, code was added to
:class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
unfortunately, it was implemented incorrectly, with the BOM appearing at the
beginning of the message and hence not allowing any pure-ASCII component to
appear before it.

As this behaviour is broken, the incorrect BOM insertion code is being removed
from Python 3.2.4 and later. However, it is not being replaced, and if you
want to produce RFC 5424-compliant messages which include a BOM, an optional
pure-ASCII sequence before it and arbitrary Unicode after it, encoded using
UTF-8, then you need to do the following:

#. Attach a :class:`~logging.Formatter` instance to your
   :class:`~logging.handlers.SysLogHandler` instance, with a format string
   such as::

      'ASCII section\ufeffUnicode section'

   The Unicode code point U+FEFF, when encoded using UTF-8, will be
   encoded as a UTF-8 BOM -- the byte-string ``b'\xef\xbb\xbf'``.

#. Replace the ASCII section with whatever placeholders you like, but make sure
   that the data that appears in there after substitution is always ASCII (that
   way, it will remain unchanged after UTF-8 encoding).

#. Replace the Unicode section with whatever placeholders you like; if the data
   which appears there after substitution contains characters outside the ASCII
   range, that's fine -- it will be encoded using UTF-8.

The formatted message *will* be encoded using UTF-8 by ``SysLogHandler``. If
you follow the above rules, you should be able to produce RFC 5424-compliant
messages. If you don't, logging may not complain, but your messages will not
be RFC 5424-compliant, and your syslog daemon may complain.

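Putting those rules together, a minimal sketch might look like the following.
It assumes Python 3 string syntax and a syslog daemon reachable at
``localhost:514`` -- adjust the address for your system (e.g. ``'/dev/log'``
on Linux)::

    import logging
    import logging.handlers

    handler = logging.handlers.SysLogHandler(address=('localhost', 514))
    # %(levelname)s always substitutes to pure ASCII, so it is safe to place
    # before the BOM; %(message)s may contain arbitrary Unicode after it.
    handler.setFormatter(logging.Formatter('%(levelname)s \ufeff%(message)s'))
    logger = logging.getLogger('rfc5424demo')  # illustrative logger name
    logger.addHandler(handler)
    logger.error('caf\u00e9 \u2603')  # the non-ASCII part lands after the BOM
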
Implementing structured logging
-------------------------------

Although most logging messages are intended for reading by humans, and thus not
readily machine-parseable, there might be circumstances where you want to output
messages in a structured format which *is* capable of being parsed by a program
(without needing complex regular expressions to parse the log message). This is
straightforward to achieve using the logging package. There are a number of
ways in which this could be achieved, but the following is a simple approach
which uses JSON to serialise the event in a machine-parseable manner::

    import json
    import logging

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

    _ = StructuredMessage   # optional, to improve readability

    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If the above script is run, it prints::

    message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}

Note that the order of items might be different according to the version of
Python used.

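If you need the keys to appear in a deterministic order (for example, to make
log output easy to compare in tests), one possible tweak -- not part of the
example above -- is to ask :func:`json.dumps` to sort them::

    def __str__(self):
        # sort_keys=True gives a stable key order across runs and versions
        return '%s >>> %s' % (self.message, json.dumps(self.kwargs,
                                                       sort_keys=True))
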
If you need more specialised processing, you can use a custom JSON encoder,
as in the following complete example::

    from __future__ import unicode_literals

    import json
    import logging

    # This next bit is to ensure the script runs unchanged on 2.x and 3.x
    try:
        unicode
    except NameError:
        unicode = str

    class Encoder(json.JSONEncoder):
        def default(self, o):
            if isinstance(o, set):
                return tuple(o)
            elif isinstance(o, unicode):
                return o.encode('unicode_escape').decode('ascii')
            return super(Encoder, self).default(o)

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            s = Encoder().encode(self.kwargs)
            return '%s >>> %s' % (self.message, s)

    _ = StructuredMessage   # optional, to improve readability

    def main():
        logging.basicConfig(level=logging.INFO, format='%(message)s')
        logging.info(_('message 1', set_value=set([1, 2, 3]), snowman='\u2603'))

    if __name__ == '__main__':
        main()

When the above script is run, it prints::

    message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

Note that the order of items might be different according to the version of
Python used.

.. currentmodule:: logging.config

Customizing handlers with :func:`dictConfig`
--------------------------------------------

There are times when you want to customize logging handlers in particular ways,
and if you use :func:`dictConfig` you may be able to do this without
subclassing. As an example, consider that you may want to set the ownership of a
log file. On POSIX, this is easily done using :func:`shutil.chown`, but the file
handlers in the stdlib don't offer built-in support. You can customize handler
creation using a plain function such as::

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

1714
You can then specify, in a logging configuration passed to :func:`dictConfig`,
1715
that a logging handler be created by calling this function::
    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

In this example I am setting the ownership using the ``pulse`` user and group,
just for the purposes of illustration. Putting it together into a working
script, ``chowntest.py``::

    import logging, logging.config, os, shutil

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

    logging.config.dictConfig(LOGGING)
    logger = logging.getLogger('mylogger')
    logger.debug('A debug message')

To run this, you will probably need to run as ``root``::

    $ sudo python3.3 chowntest.py
    $ cat chowntest.log
    2013-11-05 09:34:51,128 DEBUG mylogger A debug message
    $ ls -l chowntest.log
    -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log

Note that this example uses Python 3.3 because that's where :func:`shutil.chown`
makes an appearance. This approach should work with any Python version that
supports :func:`dictConfig` - namely, Python 2.7, 3.2 or later. With pre-3.3
versions, you would need to implement the actual ownership change using e.g.
:func:`os.chown`.

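For those older versions, a rough sketch of the ownership change (POSIX-only,
using the :mod:`pwd` and :mod:`grp` modules to map names to the numeric ids
which :func:`os.chown` expects; the function name is made up for this
illustration) might be::

    import grp
    import os
    import pwd

    def change_owner(filename, user, group):
        # Look up the numeric uid/gid for the symbolic names, then chown.
        uid = pwd.getpwnam(user).pw_uid
        gid = grp.getgrnam(group).gr_gid
        os.chown(filename, uid, gid)
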
In practice, the handler-creating function may be in a utility module somewhere
in your project. Instead of the line in the configuration::

    '()': owned_file_handler,

you could use e.g.::

    '()': 'ext://project.util.owned_file_handler',

where ``project.util`` can be replaced with the actual name of the package
where the function resides. In the above working script, using
``'ext://__main__.owned_file_handler'`` should work. Here, the actual callable
is resolved by :func:`dictConfig` from the ``ext://`` specification.

This example hopefully also points the way to how you could implement other
types of file change - e.g. setting specific POSIX permission bits - in the
same way, using :func:`os.chmod`.

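For instance, a handler factory along the same lines (the name is illustrative,
not from the stdlib) might set the permission bits like this::

    import logging
    import os

    def perms_file_handler(filename, mode='a', encoding=None, perms=None):
        # Create the file if needed, apply the requested permission bits,
        # then return an ordinary FileHandler for it.
        if perms is not None:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            os.chmod(filename, perms)
        return logging.FileHandler(filename, mode, encoding)

In the configuration, you would then pass e.g. ``'perms': 0o640`` to the
factory as a keyword argument, just as ``'owner'`` is passed above.
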
Of course, the approach could also be extended to types of handler other than a
:class:`~logging.FileHandler` - for example, one of the rotating file handlers,
or a different type of handler altogether.

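As a sketch of that idea, the same ownership logic could wrap a
:class:`~logging.handlers.RotatingFileHandler` (again, the factory name is
just for illustration)::

    import logging.handlers
    import os
    import shutil

    def owned_rotating_handler(filename, maxBytes=0, backupCount=0,
                               owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.handlers.RotatingFileHandler(
            filename, maxBytes=maxBytes, backupCount=backupCount)

One caveat with this particular combination: rollover creates new files, which
would not automatically get the ownership applied here, so a fuller solution
might also need to customize the rollover behaviour.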