********************************
Functional Programming HOWTO
********************************

:Author: A. M. Kuchling

In this document, we'll take a tour of Python's features suitable for
implementing programs in a functional style.  After an introduction to the
concepts of functional programming, we'll look at language features such as
:term:`iterator`\s and :term:`generator`\s and relevant library modules such as
:mod:`itertools` and :mod:`functools`.


Introduction
============

This section explains the basic concept of functional programming; if
you're just interested in learning about Python language features,
skip to the next section on :ref:`functional-howto-iterators`.
Programming languages support decomposing problems in several different ways:

* Most programming languages are **procedural**: programs are lists of
  instructions that tell the computer what to do with the program's input.  C,
  Pascal, and even Unix shells are procedural languages.

* In **declarative** languages, you write a specification that describes the
  problem to be solved, and the language implementation figures out how to
  perform the computation efficiently.  SQL is the declarative language you're
  most likely to be familiar with; a SQL query describes the data set you want
  to retrieve, and the SQL engine decides whether to scan tables or use indexes,
  which subclauses should be performed first, etc.

* **Object-oriented** programs manipulate collections of objects.  Objects have
  internal state and support methods that query or modify this internal state in
  some way.  Smalltalk and Java are object-oriented languages.  C++ and Python
  are languages that support object-oriented programming, but don't force the
  use of object-oriented features.

* **Functional** programming decomposes a problem into a set of functions.
  Ideally, functions only take inputs and produce outputs, and don't have any
  internal state that affects the output produced for a given input.  Well-known
  functional languages include the ML family (Standard ML, OCaml, and other
  variants) and Haskell.
The designers of some computer languages choose to emphasize one
particular approach to programming.  This often makes it difficult to
write programs that use a different approach.  Other languages are
multi-paradigm languages that support several different approaches.
Lisp, C++, and Python are multi-paradigm; you can write programs or
libraries that are largely procedural, object-oriented, or functional
in all of these languages.  In a large program, different sections
might be written using different approaches; the GUI might be
object-oriented while the processing logic is procedural or
functional, for example.

In a functional program, input flows through a set of functions.  Each function
operates on its input and produces some output.  Functional style discourages
functions with side effects that modify internal state or make other changes
that aren't visible in the function's return value.  Functions that have no side
effects at all are called **purely functional**.  Avoiding side effects means
not using data structures that get updated as a program runs; every function's
output must only depend on its input.
Some languages are very strict about purity and don't even have assignment
statements such as ``a=3`` or ``c = a + b``, but it's difficult to avoid all
side effects.  Printing to the screen or writing to a disk file are side
effects, for example.  In Python, a call to the :func:`print` or
:func:`time.sleep` function returns no useful value; it's called only for its
side effect of sending some text to the screen or pausing execution for a
second.

Python programs written in functional style usually won't go to the extreme of
avoiding all I/O or all assignments; instead, they'll provide a
functional-appearing interface but will use non-functional features internally.
For example, the implementation of a function will still use assignments to
local variables, but won't modify global variables or have other side effects.
Functional programming can be considered the opposite of object-oriented
programming.  Objects are little capsules containing some internal state along
with a collection of method calls that let you modify this state, and programs
consist of making the right set of state changes.  Functional programming wants
to avoid state changes as much as possible and works with data flowing between
functions.  In Python you might combine the two approaches by writing functions
that take and return instances representing objects in your application (e-mail
messages, transactions, etc.).
Functional design may seem like an odd constraint to work under.  Why should you
avoid objects and side effects?  There are theoretical and practical advantages
to the functional style:

* Formal provability.
* Modularity.
* Composability.
* Ease of debugging and testing.
Formal provability
------------------

A theoretical benefit is that it's easier to construct a mathematical proof that
a functional program is correct.

For a long time researchers have been interested in finding ways to
mathematically prove programs correct.  This is different from testing a program
on numerous inputs and concluding that its output is usually correct, or reading
a program's source code and concluding that the code looks right; the goal is
instead a rigorous proof that a program produces the right result for all
possible inputs.

The technique used to prove programs correct is to write down **invariants**,
properties of the input data and of the program's variables that are always
true.  For each line of code, you then show that if invariants X and Y are true
**before** the line is executed, the slightly different invariants X' and Y' are
true **after** the line is executed.  This continues until you reach the end of
the program, at which point the invariants should match the desired conditions
on the program's output.

Functional programming's avoidance of assignments arose because assignments are
difficult to handle with this technique; assignments can break invariants that
were true before the assignment without producing any new invariants that can be
propagated onward.

Unfortunately, proving programs correct is largely impractical and not relevant
to Python software.  Even trivial programs require proofs that are several pages
long; the proof of correctness for a moderately complicated program would be
enormous, and few or none of the programs you use daily (the Python interpreter,
your XML parser, your web browser) could be proven correct.  Even if you wrote
down or generated a proof, there would then be the question of verifying the
proof; maybe there's an error in it, and you wrongly believe you've proved the
program correct.
Modularity
----------

A more practical benefit of functional programming is that it forces you to
break apart your problem into small pieces.  Programs are more modular as a
result.  It's easier to specify and write a small function that does one thing
than a large function that performs a complicated transformation.  Small
functions are also easier to read and to check for errors.
Ease of debugging and testing
-----------------------------

Testing and debugging a functional-style program is easier.

Debugging is simplified because functions are generally small and clearly
specified.  When a program doesn't work, each function is an interface point
where you can check that the data are correct.  You can look at the intermediate
inputs and outputs to quickly isolate the function that's responsible for a bug.

Testing is easier because each function is a potential subject for a unit test.
Functions don't depend on system state that needs to be replicated before
running a test; instead you only have to synthesize the right input and then
check that the output matches expectations.
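As a minimal sketch of this idea (the function name and test values are hypothetical, not from a real test suite), a pure function can be exercised with plain assertions, no fixtures or setup required:

```python
def strip_lines(lines):
    """Return the input lines with surrounding whitespace removed."""
    return [line.strip() for line in lines]

# No system state to replicate: just feed inputs and check outputs.
assert strip_lines([' a \n', 'b\n']) == ['a', 'b']
assert strip_lines([]) == []
```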
Composability
-------------

As you work on a functional-style program, you'll write a number of functions
with varying inputs and outputs.  Some of these functions will be unavoidably
specialized to a particular application, but others will be useful in a wide
variety of programs.  For example, a function that takes a directory path and
returns all the XML files in the directory, or a function that takes a filename
and returns its contents, can be applied to many different situations.

Over time you'll form a personal library of utilities.  Often you'll assemble
new programs by arranging existing functions in a new configuration and writing
a few functions specialized for the current task.
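A sketch of what two such reusable utilities might look like (the names and the use of :mod:`pathlib` are illustrative assumptions, not functions from the standard library):

```python
from pathlib import Path

def files_with_suffix(directory, suffix):
    """Return the sorted paths in *directory* whose names end with *suffix*."""
    return sorted(p for p in Path(directory).iterdir() if p.suffix == suffix)

def read_contents(filename):
    """Return the entire text contents of *filename*."""
    return Path(filename).read_text()
```

Neither function knows anything about the program it's used in, so both can move straight into a personal utility library.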
.. _functional-howto-iterators:

Iterators
=========

I'll start by looking at a Python language feature that's an important
foundation for writing functional-style programs: iterators.

An iterator is an object representing a stream of data; this object returns the
data one element at a time.  A Python iterator must support a method called
:meth:`~iterator.__next__` that takes no arguments and always returns the next
element of the stream.  If there are no more elements in the stream,
:meth:`~iterator.__next__` must raise the :exc:`StopIteration` exception.
Iterators don't have to be finite, though; it's perfectly reasonable to write
an iterator that produces an infinite stream of data.
The built-in :func:`iter` function takes an arbitrary object and tries to return
an iterator that will return the object's contents or elements, raising
:exc:`TypeError` if the object doesn't support iteration.  Several of Python's
built-in data types support iteration, the most common being lists and
dictionaries.  An object is called :term:`iterable` if you can get an iterator
for it.

You can experiment with the iteration interface manually:

    >>> L = [1, 2, 3]
    >>> it = iter(L)
    >>> it  #doctest: +ELLIPSIS
    <...iterator object at ...>
    >>> it.__next__()  # same as next(it)
    1
    >>> next(it)
    2
    >>> next(it)
    3
    >>> next(it)
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    StopIteration
Python expects iterable objects in several different contexts, the most
important being the :keyword:`for` statement.  In the statement ``for X in Y``,
Y must be an iterator or some object for which :func:`iter` can create an
iterator.  These two statements are equivalent::

    for i in iter(obj):
        print(i)

    for i in obj:
        print(i)

Iterators can be materialized as lists or tuples by using the :func:`list` or
:func:`tuple` constructor functions:

    >>> L = [1, 2, 3]
    >>> iterator = iter(L)
    >>> t = tuple(iterator)
    >>> t
    (1, 2, 3)

Sequence unpacking also supports iterators: if you know an iterator will return
N elements, you can unpack them into an N-tuple:

    >>> L = [1, 2, 3]
    >>> iterator = iter(L)
    >>> a, b, c = iterator
    >>> a, b, c
    (1, 2, 3)
Built-in functions such as :func:`max` and :func:`min` can take a single
iterator argument and will return the largest or smallest element.  The ``"in"``
and ``"not in"`` operators also support iterators: ``X in iterator`` is true if
X is found in the stream returned by the iterator.  You'll run into obvious
problems if the iterator is infinite; :func:`max` and :func:`min` will never
return, and if the element X never appears in the stream, the ``"in"`` and
``"not in"`` operators won't return either.
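For instance (a quick sketch; note that each of these operations consumes the stream, so a fresh iterator is created for each call):

```python
it = iter([3, 1, 4, 1, 5])
print(max(it))    # 5 -- reads the whole stream to find the largest element

it = iter([3, 1, 4, 1, 5])
print(4 in it)    # True -- stops reading as soon as 4 is found
```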
Note that you can only go forward in an iterator; there's no way to get the
previous element, reset the iterator, or make a copy of it.  Iterator objects
can optionally provide these additional capabilities, but the iterator protocol
only specifies the :meth:`~iterator.__next__` method.  Functions may therefore
consume all of the iterator's output, and if you need to do something different
with the same stream, you'll have to create a new iterator.
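A small sketch of this forward-only behaviour:

```python
it = iter([10, 20, 30])
print(list(it))   # [10, 20, 30] -- consumes every element
print(list(it))   # []           -- the same iterator is now exhausted

# To traverse the data again, create a brand-new iterator.
it = iter([10, 20, 30])
print(next(it))   # 10
```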
Data Types That Support Iterators
---------------------------------

We've already seen how lists and tuples support iterators.  In fact, any Python
sequence type, such as strings, will automatically support creation of an
iterator.

Calling :func:`iter` on a dictionary returns an iterator that will loop over the
dictionary's keys:

    >>> m = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 'May': 5, 'Jun': 6,
    ...      'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}
    >>> for key in m:  #doctest: +SKIP
    ...     print(key, m[key])
    Jan 1
    Feb 2
    Mar 3
    Apr 4
    May 5
    Jun 6
    Jul 7
    Aug 8
    Sep 9
    Oct 10
    Nov 11
    Dec 12

Note that starting with Python 3.7, dictionary iteration order is guaranteed
to be the same as the insertion order.  In earlier versions the order was
essentially arbitrary, based on the hash ordering of the keys.
Applying :func:`iter` to a dictionary always loops over the keys, but
dictionaries have methods that return other iterators.  If you want to iterate
over values or key/value pairs, you can explicitly call the
:meth:`~dict.values` or :meth:`~dict.items` methods to get an appropriate
iterator.

The :func:`dict` constructor can accept an iterator that returns a finite stream
of ``(key, value)`` tuples:

    >>> L = [('Italy', 'Rome'), ('France', 'Paris'), ('US', 'Washington DC')]
    >>> dict(iter(L))  #doctest: +SKIP
    {'Italy': 'Rome', 'France': 'Paris', 'US': 'Washington DC'}
Files also support iteration by calling the :meth:`~io.TextIOBase.readline`
method until there are no more lines in the file.  This means you can read each
line of a file like this::

    for line in file:
        # do something for each line
        ...

Sets can take their contents from an iterable and let you iterate over the set's
elements::

    S = {2, 3, 5, 7, 11, 13}
    for i in S:
        print(i)
Generator expressions and list comprehensions
=============================================

Two common operations on an iterator's output are 1) performing some operation
for every element, 2) selecting a subset of elements that meet some condition.
For example, given a list of strings, you might want to strip off trailing
whitespace from each line or extract all the strings containing a given
substring.
List comprehensions and generator expressions (short form: "listcomps" and
"genexps") are a concise notation for such operations, borrowed from the
functional programming language Haskell (http://www.haskell.org/).  You can
strip all the whitespace from a stream of strings with the following code::

    line_list = [' line 1\n', 'line 2 \n', ...]

    # Generator expression -- returns iterator
    stripped_iter = (line.strip() for line in line_list)

    # List comprehension -- returns list
    stripped_list = [line.strip() for line in line_list]

You can select only certain elements by adding an ``"if"`` condition::

    stripped_list = [line.strip() for line in line_list
                     if line != ""]
With a list comprehension, you get back a Python list; ``stripped_list`` is a
list containing the resulting lines, not an iterator.  Generator expressions
return an iterator that computes the values as necessary, not needing to
materialize all the values at once.  This means that list comprehensions aren't
useful if you're working with iterators that return an infinite stream or a very
large amount of data.  Generator expressions are preferable in these situations.
Generator expressions are surrounded by parentheses ("()") and list
comprehensions are surrounded by square brackets ("[]").  Generator expressions
have the form::

    ( expression for expr in sequence1
                 if condition1
                 for expr2 in sequence2
                 if condition2
                 for expr3 in sequence3 ...
                 if condition3
                 for exprN in sequenceN
                 if conditionN )

Again, for a list comprehension only the outside brackets are different (square
brackets instead of parentheses).
The elements of the generated output will be the successive values of
``expression``.  The ``if`` clauses are all optional; if present, ``expression``
is only evaluated and added to the result when ``condition`` is true.

Generator expressions always have to be written inside parentheses, but the
parentheses signalling a function call also count.  If you want to create an
iterator that will be immediately passed to a function you can write::

    obj_total = sum(obj.count for obj in list_all_objects())
The ``for...in`` clauses contain the sequences to be iterated over.  The
sequences do not have to be the same length, because they are iterated over from
left to right, **not** in parallel.  For each element in ``sequence1``,
``sequence2`` is looped over from the beginning.  ``sequence3`` is then looped
over for each resulting pair of elements from ``sequence1`` and ``sequence2``.

To put it another way, a list comprehension or generator expression is
equivalent to the following Python code::

    for expr1 in sequence1:
        if not (condition1):
            continue   # Skip this element
        for expr2 in sequence2:
            if not (condition2):
                continue   # Skip this element
            ...
            for exprN in sequenceN:
                if not (conditionN):
                    continue   # Skip this element

                # Output the value of
                # the expression.
This means that when there are multiple ``for...in`` clauses but no ``if``
clauses, the length of the resulting output will be equal to the product of the
lengths of all the sequences.  If you have two lists of length 3, the output
list is 9 elements long:

    >>> seq1 = 'abc'
    >>> seq2 = (1, 2, 3)
    >>> [(x, y) for x in seq1 for y in seq2]  #doctest: +NORMALIZE_WHITESPACE
    [('a', 1), ('a', 2), ('a', 3),
     ('b', 1), ('b', 2), ('b', 3),
     ('c', 1), ('c', 2), ('c', 3)]

To avoid introducing an ambiguity into Python's grammar, if ``expression`` is
creating a tuple, it must be surrounded with parentheses.  The first list
comprehension below is a syntax error, while the second one is correct::

    # Syntax error
    [x, y for x in seq1 for y in seq2]
    # Correct
    [(x, y) for x in seq1 for y in seq2]
Generators
==========

Generators are a special class of functions that simplify the task of writing
iterators.  Regular functions compute a value and return it, but generators
return an iterator that returns a stream of values.

You're doubtless familiar with how regular function calls work in Python or C.
When you call a function, it gets a private namespace where its local variables
are created.  When the function reaches a ``return`` statement, the local
variables are destroyed and the value is returned to the caller.  A later call
to the same function creates a new private namespace and a fresh set of local
variables.  But, what if the local variables weren't thrown away on exiting a
function?  What if you could later resume the function where it left off?  This
is what generators provide; they can be thought of as resumable functions.
Here's the simplest example of a generator function:

    >>> def generate_ints(N):
    ...    for i in range(N):
    ...        yield i

Any function containing a :keyword:`yield` keyword is a generator function;
this is detected by Python's :term:`bytecode` compiler which compiles the
function specially as a result.
When you call a generator function, it doesn't return a single value; instead it
returns a generator object that supports the iterator protocol.  On executing
the ``yield`` expression, the generator outputs the value of ``i``, similar to a
``return`` statement.  The big difference between ``yield`` and a ``return``
statement is that on reaching a ``yield`` the generator's state of execution is
suspended and local variables are preserved.  On the next call to the
generator's :meth:`~generator.__next__` method, the function will resume
executing.
Here's a sample usage of the ``generate_ints()`` generator:

    >>> gen = generate_ints(3)
    >>> gen  #doctest: +ELLIPSIS
    <generator object generate_ints at ...>
    >>> next(gen)
    0
    >>> next(gen)
    1
    >>> next(gen)
    2
    >>> next(gen)
    Traceback (most recent call last):
      File "stdin", line 1, in ?
      File "stdin", line 2, in generate_ints
    StopIteration

You could equally write ``for i in generate_ints(5)``, or ``a, b, c =
generate_ints(3)``.
Inside a generator function, ``return value`` causes ``StopIteration(value)``
to be raised from the :meth:`~generator.__next__` method.  Once this happens, or
the bottom of the function is reached, the procession of values ends and the
generator cannot yield any further values.
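A short sketch of this behaviour (the generator and its return value are illustrative):

```python
def finish(value):
    yield 1
    return value          # raises StopIteration(value) in the caller

gen = finish('done')
print(next(gen))          # 1
try:
    next(gen)
except StopIteration as exc:
    print(exc.value)      # 'done' -- the returned value rides on the exception
```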
You could achieve the effect of generators manually by writing your own class
and storing all the local variables of the generator as instance variables.  For
example, returning a list of integers could be done by setting ``self.count`` to
0, and having the :meth:`~iterator.__next__` method increment ``self.count`` and
return it.  However, for a moderately complicated generator, writing a
corresponding class can be much messier.
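A sketch of such a hand-written class, equivalent to the ``generate_ints()`` generator shown earlier (the class name is an assumption for illustration):

```python
class GenerateInts:
    """Iterator over 0 .. N-1, written out by hand instead of with yield."""

    def __init__(self, N):
        self.count = 0
        self.N = N

    def __iter__(self):
        return self

    def __next__(self):
        if self.count >= self.N:
            raise StopIteration
        value = self.count
        self.count += 1
        return value
```

Even for this tiny stream, the class needs explicit bookkeeping in ``self.count`` that the generator gets for free from its suspended stack frame.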
The test suite included with Python's library,
:source:`Lib/test/test_generators.py`, contains
a number of more interesting examples.  Here's one generator that implements an
in-order traversal of a tree using generators recursively. ::

    # A recursive generator that generates Tree leaves in in-order.
    def inorder(t):
        if t:
            for x in inorder(t.left):
                yield x

            yield t.label

            for x in inorder(t.right):
                yield x

Two other examples in ``test_generators.py`` produce solutions for the N-Queens
problem (placing N queens on an NxN chess board so that no queen threatens
another) and the Knight's Tour (finding a route that takes a knight to every
square of an NxN chessboard without visiting any square twice).
Passing values into a generator
-------------------------------

In Python 2.4 and earlier, generators only produced output.  Once a generator's
code was invoked to create an iterator, there was no way to pass any new
information into the function when its execution was resumed.  You could hack
together this ability by making the generator look at a global variable or by
passing in some mutable object that callers then modify, but these approaches
are messy.

In Python 2.5 there's a simple way to pass values into a generator.
:keyword:`yield` became an expression, returning a value that can be assigned to
a variable or otherwise operated on::

    val = (yield i)
I recommend that you **always** put parentheses around a ``yield`` expression
when you're doing something with the returned value, as in the above example.
The parentheses aren't always necessary, but it's easier to always add them
instead of having to remember when they're needed.

(:pep:`342` explains the exact rules, which are that a ``yield``-expression must
always be parenthesized except when it occurs at the top-level expression on the
right-hand side of an assignment.  This means you can write ``val = yield i``
but have to use parentheses when there's an operation, as in ``val = (yield i)
+ 12``.)
Values are sent into a generator by calling its :meth:`send(value)
<generator.send>` method.  This method resumes the generator's code and the
``yield`` expression returns the specified value.  If the regular
:meth:`~generator.__next__` method is called, the ``yield`` returns ``None``.

Here's a simple counter that increments by 1 and allows changing the value of
the internal counter. ::

    def counter(maximum):
        i = 0
        while i < maximum:
            val = (yield i)
            # If value provided, change counter
            if val is not None:
                i = val
            else:
                i += 1
And here's an example of changing the counter:

    >>> it = counter(10)  #doctest: +SKIP
    >>> next(it)  #doctest: +SKIP
    0
    >>> next(it)  #doctest: +SKIP
    1
    >>> it.send(8)  #doctest: +SKIP
    8
    >>> next(it)  #doctest: +SKIP
    9
    >>> next(it)  #doctest: +SKIP
    Traceback (most recent call last):
      File "t.py", line 15, in ?
        next(it)
    StopIteration
Because ``yield`` will often be returning ``None``, you should always check for
this case.  Don't just use its value in expressions unless you're sure that the
:meth:`~generator.send` method will be the only method used to resume your
generator function.

In addition to :meth:`~generator.send`, there are two other methods on
generators:

* :meth:`throw(type, value=None, traceback=None) <generator.throw>` is used to
  raise an exception inside the generator; the exception is raised by the
  ``yield`` expression where the generator's execution is paused.

* :meth:`~generator.close` raises a :exc:`GeneratorExit` exception inside the
  generator to terminate the iteration.  On receiving this exception, the
  generator's code must either raise :exc:`GeneratorExit` or
  :exc:`StopIteration`; catching the exception and doing anything else is
  illegal and will trigger a :exc:`RuntimeError`.  :meth:`~generator.close`
  will also be called by Python's garbage collector when the generator is
  garbage-collected.
If you need to run cleanup code when a :exc:`GeneratorExit` occurs, I suggest
using a ``try: ... finally:`` suite instead of catching :exc:`GeneratorExit`.
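A minimal sketch of that pattern (the names are illustrative):

```python
def countdown(n, log):
    try:
        while n > 0:
            yield n
            n -= 1
    finally:
        # Runs on normal exhaustion *and* when close() raises GeneratorExit.
        log.append('cleaned up')

events = []
gen = countdown(5, events)
print(next(gen))   # 5
gen.close()        # triggers GeneratorExit inside the generator
print(events)      # ['cleaned up']
```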
The cumulative effect of these changes is to turn generators from one-way
producers of information into both producers and consumers.

Generators also become **coroutines**, a more generalized form of subroutines.
Subroutines are entered at one point and exited at another point (the top of the
function, and a ``return`` statement), but coroutines can be entered, exited,
and resumed at many different points (the ``yield`` statements).
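As a small illustration of this producer-and-consumer role (a hypothetical running-average coroutine, not from the original text):

```python
def averager():
    """Coroutine: send it numbers, it yields the running mean so far."""
    total = 0.0
    count = 0
    average = None
    while True:
        value = (yield average)
        total += value
        count += 1
        average = total / count

avg = averager()
next(avg)             # prime the coroutine; runs to the first yield
print(avg.send(10))   # 10.0
print(avg.send(20))   # 15.0
```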
Built-in functions
==================

Let's look in more detail at built-in functions often used with iterators.

Two of Python's built-in functions, :func:`map` and :func:`filter`, duplicate
the features of generator expressions:

:func:`map(f, iterA, iterB, ...) <map>` returns an iterator over the sequence
``f(iterA[0], iterB[0]), f(iterA[1], iterB[1]), f(iterA[2], iterB[2]), ...``.

    >>> def upper(s):
    ...     return s.upper()

    >>> list(map(upper, ['sentence', 'fragment']))
    ['SENTENCE', 'FRAGMENT']
    >>> [upper(s) for s in ['sentence', 'fragment']]
    ['SENTENCE', 'FRAGMENT']

You can of course achieve the same effect with a list comprehension.
:func:`filter(predicate, iter) <filter>` returns an iterator over all the
sequence elements that meet a certain condition, and is similarly duplicated by
list comprehensions.  A **predicate** is a function that returns the truth
value of some condition; for use with :func:`filter`, the predicate must take a
single value.

    >>> def is_even(x):
    ...     return (x % 2) == 0

    >>> list(filter(is_even, range(10)))
    [0, 2, 4, 6, 8]

This can also be written as a list comprehension:

    >>> list(x for x in range(10) if is_even(x))
    [0, 2, 4, 6, 8]
:func:`enumerate(iter) <enumerate>` counts off the elements in the iterable,
returning 2-tuples containing the count and each element. ::

    >>> for item in enumerate(['subject', 'verb', 'object']):
    ...     print(item)
    (0, 'subject')
    (1, 'verb')
    (2, 'object')

:func:`enumerate` is often used when looping through a list and recording the
indexes at which certain conditions are met::

    f = open('data.txt', 'r')
    for i, line in enumerate(f):
        if line.strip() == '':
            print('Blank line at line #%i' % i)
:func:`sorted(iterable, key=None, reverse=False) <sorted>` collects all the
elements of the iterable into a list, sorts the list, and returns the sorted
result.  The *key* and *reverse* arguments are passed through to the
constructed list's :meth:`~list.sort` method. ::

    >>> import random
    >>> # Generate 8 random numbers between [0, 10000)
    >>> rand_list = random.sample(range(10000), 8)
    >>> rand_list  #doctest: +SKIP
    [769, 7953, 9828, 6431, 8442, 9878, 6213, 2207]
    >>> sorted(rand_list)  #doctest: +SKIP
    [769, 2207, 6213, 6431, 7953, 8442, 9828, 9878]
    >>> sorted(rand_list, reverse=True)  #doctest: +SKIP
    [9878, 9828, 8442, 7953, 6431, 6213, 2207, 769]

(For a more detailed discussion of sorting, see the :ref:`sortinghowto`.)
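The *key* argument deserves a quick example of its own: it supplies a function computing the value to sort on (sorting by string length here is just an illustration):

```python
words = ['banana', 'kiwi', 'apple', 'fig']
print(sorted(words, key=len))
# ['fig', 'kiwi', 'apple', 'banana'] -- ordered by length, shortest first
```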
The :func:`any(iter) <any>` and :func:`all(iter) <all>` built-ins look at the
truth values of an iterable's contents.  :func:`any` returns ``True`` if any
element in the iterable is a true value, and :func:`all` returns ``True`` if
all of the elements are true values:

    >>> any([0, 1, 0])
    True
    >>> any([0, 0, 0])
    False
    >>> any([1, 1, 1])
    True
    >>> all([0, 1, 0])
    False
    >>> all([0, 0, 0])
    False
    >>> all([1, 1, 1])
    True
:func:`zip(iterA, iterB, ...) <zip>` takes one element from each iterable and
returns them in a tuple::

    zip(['a', 'b', 'c'], (1, 2, 3)) =>
      ('a', 1), ('b', 2), ('c', 3)

It doesn't construct an in-memory list and exhaust all the input iterators
before returning; instead tuples are constructed and returned only if they're
requested.  (The technical term for this behaviour is `lazy evaluation
<http://en.wikipedia.org/wiki/Lazy_evaluation>`__.)
This iterator is intended to be used with iterables that are all of the same
length.  If the iterables are of different lengths, the resulting stream will be
the same length as the shortest iterable. ::

    zip(['a', 'b'], (1, 2, 3)) =>
      ('a', 1), ('b', 2)

You should avoid doing this, though, because an element may be taken from the
longer iterators and discarded.  This means you can't go on to use the iterators
further because you risk skipping a discarded element.
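When you do need every element of unequal-length inputs, :func:`itertools.zip_longest` pads the shorter iterables with a fill value instead of discarding anything; a quick sketch:

```python
import itertools

print(list(zip(['a', 'b'], (1, 2, 3))))
# [('a', 1), ('b', 2)] -- the extra 3 is dropped

print(list(itertools.zip_longest(['a', 'b'], (1, 2, 3))))
# [('a', 1), ('b', 2), (None, 3)] -- padded with None by default
```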
The itertools module
====================

The :mod:`itertools` module contains a number of commonly used iterators as well
as functions for combining several iterators.  This section will introduce the
module's contents by showing small examples.

The module's functions fall into a few broad classes:

* Functions that create a new iterator based on an existing iterator.
* Functions for treating an iterator's elements as function arguments.
* Functions for selecting portions of an iterator's output.
* A function for grouping an iterator's output.

Creating new iterators
----------------------
:func:`itertools.count(n) <itertools.count>` returns an infinite stream of
integers, increasing by 1 each time.  You can optionally supply the starting
number, which defaults to 0::

    itertools.count() =>
      0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...
    itertools.count(10) =>
      10, 11, 12, 13, 14, 15, 16, 17, 18, 19, ...
:func:`itertools.cycle(iter) <itertools.cycle>` saves a copy of the contents of
a provided iterable and returns a new iterator that returns its elements from
first to last.  The new iterator will repeat these elements infinitely. ::

    itertools.cycle([1, 2, 3, 4, 5]) =>
      1, 2, 3, 4, 5, 1, 2, 3, 4, 5, ...

:func:`itertools.repeat(elem, [n]) <itertools.repeat>` returns the provided
element *n* times, or returns the element endlessly if *n* is not provided. ::

    itertools.repeat('abc') =>
      abc, abc, abc, abc, abc, abc, abc, abc, abc, abc, ...
    itertools.repeat('abc', 5) =>
      abc, abc, abc, abc, abc

:func:`itertools.chain(iterA, iterB, ...) <itertools.chain>` takes an arbitrary
number of iterables as input, and returns all the elements of the first
iterator, then all the elements of the second, and so on, until all of the
iterables have been exhausted. ::

    itertools.chain(['a', 'b', 'c'], (1, 2, 3)) =>
      a, b, c, 1, 2, 3
:func:`itertools.islice(iter, [start], stop, [step]) <itertools.islice>` returns
a stream that's a slice of the iterator.  With a single *stop* argument, it
will return the first *stop* elements.  If you supply a starting index, you'll
get *stop-start* elements, and if you supply a value for *step*, elements
will be skipped accordingly.  Unlike Python's string and list slicing, you can't
use negative values for *start*, *stop*, or *step*. ::

    itertools.islice(range(10), 8) =>
      0, 1, 2, 3, 4, 5, 6, 7
    itertools.islice(range(10), 2, 8) =>
      2, 3, 4, 5, 6, 7
    itertools.islice(range(10), 2, 8, 2) =>
      2, 4, 6
:func:`itertools.tee(iter, [n]) <itertools.tee>` replicates an iterator; it
returns *n* independent iterators that will all return the contents of the
source iterator.
If you don't supply a value for *n*, the default is 2.  Replicating iterators
requires saving some of the contents of the source iterator, so this can consume
significant memory if the iterator is large and one of the new iterators is
consumed more than the others. ::

    itertools.tee( itertools.count() ) =>
       iterA, iterB

    where iterA ->
       0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...

    and   iterB ->
       0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...
Calling functions on elements
-----------------------------

The :mod:`operator` module contains a set of functions corresponding to Python's
operators.  Some examples are :func:`operator.add(a, b) <operator.add>` (adds
two values), :func:`operator.ne(a, b) <operator.ne>` (same as ``a != b``), and
:func:`operator.attrgetter('id') <operator.attrgetter>`
(returns a callable that fetches the ``.id`` attribute).
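These callables are handy wherever a small function is expected, such as the *key* argument to :func:`sorted`; here's a quick sketch using :func:`operator.itemgetter`, a sibling of :func:`~operator.attrgetter` that fetches by index:

```python
import operator

pairs = [('b', 2), ('a', 3), ('c', 1)]
print(sorted(pairs, key=operator.itemgetter(1)))
# [('c', 1), ('b', 2), ('a', 3)] -- sorted by each tuple's second element
```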
:func:`itertools.starmap(func, iter) <itertools.starmap>` assumes that the
iterable will return a stream of tuples, and calls *func* using these tuples as
the arguments::

    itertools.starmap(os.path.join,
                      [('/bin', 'python'), ('/usr', 'bin', 'java'),
                       ('/usr', 'bin', 'perl'), ('/usr', 'bin', 'ruby')])
    =>
      /bin/python, /usr/bin/java, /usr/bin/perl, /usr/bin/ruby
Selecting elements
------------------

Another group of functions chooses a subset of an iterator's elements based on a
predicate.

:func:`itertools.filterfalse(predicate, iter) <itertools.filterfalse>` is the
opposite of :func:`filter`, returning all elements for which the predicate
returns false::

    itertools.filterfalse(is_even, itertools.count()) =>
      1, 3, 5, 7, 9, 11, 13, 15, ...
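A runnable version of this example; ``is_even`` is defined here since the text leaves it implicit, and :func:`~itertools.islice` bounds the otherwise infinite stream::

```python
import itertools

def is_even(x):
    return (x % 2) == 0

# filterfalse() keeps the elements where the predicate is false.
odds = list(itertools.islice(
    itertools.filterfalse(is_even, itertools.count()), 5))
```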
:func:`itertools.takewhile(predicate, iter) <itertools.takewhile>` returns
elements for as long as the predicate returns true. Once the predicate returns
false, the iterator will signal the end of its results. ::

    itertools.takewhile(less_than_10, itertools.count()) =>
      0, 1, 2, 3, 4, 5, 6, 7, 8, 9

    itertools.takewhile(is_even, itertools.count()) =>
      0
:func:`itertools.dropwhile(predicate, iter) <itertools.dropwhile>` discards
elements while the predicate returns true, and then returns the rest of the
iterable's results. ::

    itertools.dropwhile(less_than_10, itertools.count()) =>
      10, 11, 12, 13, 14, 15, 16, 17, 18, 19, ...

    itertools.dropwhile(is_even, itertools.count()) =>
      1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ...
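The ``takewhile``/``dropwhile`` pair can be sketched together; ``less_than_10`` is defined here since the text leaves it implicit::

```python
import itertools

def less_than_10(x):
    return x < 10

# takewhile() stops at the first failing element; dropwhile() starts there.
head = list(itertools.takewhile(less_than_10, itertools.count()))
rest = itertools.dropwhile(less_than_10, itertools.count())
after_drop = [next(rest) for _ in range(3)]
```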
:func:`itertools.compress(data, selectors) <itertools.compress>` takes two
iterators and returns only those elements of *data* for which the corresponding
element of *selectors* is true, stopping whenever either one is exhausted::

    itertools.compress([1, 2, 3, 4, 5], [True, True, False, False, True]) =>
      1, 2, 5
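The same call, runnable::

```python
import itertools

# Only data elements paired with a true selector survive.
kept = list(itertools.compress([1, 2, 3, 4, 5],
                               [True, True, False, False, True]))
```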
Combinatoric functions
----------------------

The :func:`itertools.combinations(iterable, r) <itertools.combinations>`
function returns an iterator giving all possible *r*-tuple combinations of the
elements contained in *iterable*. ::

    itertools.combinations([1, 2, 3, 4, 5], 2) =>
      (1, 2), (1, 3), (1, 4), (1, 5),
      (2, 3), (2, 4), (2, 5),
      (3, 4), (3, 5),
      (4, 5)

    itertools.combinations([1, 2, 3, 4, 5], 3) =>
      (1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (1, 4, 5),
      (2, 3, 4), (2, 3, 5), (2, 4, 5),
      (3, 4, 5)
The elements within each tuple remain in the same order as
*iterable* returned them. For example, the number 1 is always before
2, 3, 4, or 5 in the examples above. A similar function,
:func:`itertools.permutations(iterable, r=None) <itertools.permutations>`,
removes this constraint on the order, returning all possible
arrangements of length *r*::

    itertools.permutations([1, 2, 3, 4, 5], 2) =>
      (1, 2), (1, 3), (1, 4), (1, 5),
      (2, 1), (2, 3), (2, 4), (2, 5),
      (3, 1), (3, 2), (3, 4), (3, 5),
      (4, 1), (4, 2), (4, 3), (4, 5),
      (5, 1), (5, 2), (5, 3), (5, 4)

    itertools.permutations([1, 2, 3, 4, 5]) =>
      (1, 2, 3, 4, 5), (1, 2, 3, 5, 4), (1, 2, 4, 3, 5),
      ...
      (5, 4, 3, 2, 1)

If you don't supply a value for *r* the length of the iterable is used,
meaning that all the elements are permuted.
Note that these functions produce all of the possible combinations by
position and don't require that the contents of *iterable* are unique::

    itertools.permutations('aba', 3) =>
      ('a', 'b', 'a'), ('a', 'a', 'b'), ('b', 'a', 'a'),
      ('b', 'a', 'a'), ('a', 'a', 'b'), ('a', 'b', 'a')

The identical tuple ``('a', 'a', 'b')`` occurs twice, but the two 'a'
strings came from different positions.
The :func:`itertools.combinations_with_replacement(iterable, r) <itertools.combinations_with_replacement>`
function relaxes a different constraint: elements can be repeated
within a single tuple. Conceptually an element is selected for the
first position of each tuple and then is replaced before the second
element is selected. ::

    itertools.combinations_with_replacement([1, 2, 3, 4, 5], 2) =>
      (1, 1), (1, 2), (1, 3), (1, 4), (1, 5),
      (2, 2), (2, 3), (2, 4), (2, 5),
      (3, 3), (3, 4), (3, 5),
      (4, 4), (4, 5),
      (5, 5)
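The three combinatoric functions can be compared side by side on a small input::

```python
import itertools

# combinations(): order preserved, no repeats, no reorderings.
pairs = list(itertools.combinations([1, 2, 3], 2))

# permutations(): all orderings of each selection.
perms = list(itertools.permutations([1, 2, 3], 2))

# combinations_with_replacement(): an element may pair with itself.
with_repl = list(itertools.combinations_with_replacement([1, 2, 3], 2))
```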
Grouping elements
-----------------

The last function I'll discuss, :func:`itertools.groupby(iter, key_func=None)
<itertools.groupby>`, is the most complicated. ``key_func(elem)`` is a function
that can compute a key value for each element returned by the iterable. If you
don't supply a key function, the key is simply each element itself.

:func:`~itertools.groupby` collects all the consecutive elements from the
underlying iterable that have the same key value, and returns a stream of
2-tuples containing a key value and an iterator for the elements with that key. ::

    city_list = [('Decatur', 'AL'), ('Huntsville', 'AL'), ('Selma', 'AL'),
                 ('Anchorage', 'AK'), ('Nome', 'AK'),
                 ('Flagstaff', 'AZ'), ('Phoenix', 'AZ'), ('Tucson', 'AZ'),
                 ...
                ]

    def get_state(city_state):
        return city_state[1]

    itertools.groupby(city_list, get_state) =>
      ('AL', iterator-1),
      ('AK', iterator-2),
      ('AZ', iterator-3), ...

    where
    iterator-1 =>
      ('Decatur', 'AL'), ('Huntsville', 'AL'), ('Selma', 'AL')
    iterator-2 =>
      ('Anchorage', 'AK'), ('Nome', 'AK')
    iterator-3 =>
      ('Flagstaff', 'AZ'), ('Phoenix', 'AZ'), ('Tucson', 'AZ')

:func:`~itertools.groupby` assumes that the underlying iterable's contents will
already be sorted based on the key. Note that the returned iterators also use
the underlying iterable, so you have to consume the results of iterator-1 before
requesting iterator-2 and its corresponding key.
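A runnable version of the city example, trimmed to a few cities; materializing each group into a list before moving on avoids the shared-iterable pitfall described above::

```python
import itertools

city_list = [('Decatur', 'AL'), ('Huntsville', 'AL'),
             ('Anchorage', 'AK'),
             ('Flagstaff', 'AZ'), ('Phoenix', 'AZ')]

def get_state(city_state):
    return city_state[1]

# Consume each group's iterator before advancing to the next key,
# since all group iterators share the underlying iterable.
grouped = {state: list(cities)
           for state, cities in itertools.groupby(city_list, get_state)}
```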
The functools module
====================

The :mod:`functools` module (added in Python 2.5) contains some higher-order
functions.
A **higher-order function** takes one or more functions as input and returns a
new function. The most useful tool in this module is the
:func:`functools.partial` function.

For programs written in a functional style, you'll sometimes want to construct
variants of existing functions that have some of the parameters filled in.
Consider a Python function ``f(a, b, c)``; you may wish to create a new function
``g(b, c)`` that's equivalent to ``f(1, b, c)``; you're filling in a value for
one of ``f()``'s parameters. This is called "partial function application".

The constructor for :func:`~functools.partial` takes the arguments
``(function, arg1, arg2, ..., kwarg1=value1, kwarg2=value2)``. The resulting
object is callable, so you can just call it to invoke ``function`` with the
filled-in arguments.
Here's a small but realistic example::

    import functools

    def log(message, subsystem):
        """Write the contents of 'message' to the specified subsystem."""
        print('%s: %s' % (subsystem, message))

    server_log = functools.partial(log, subsystem='server')
    server_log('Unable to open socket')
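A variant of the example above that returns the formatted string instead of printing it, so the effect of :func:`~functools.partial` can be inspected::

```python
import functools

def log(message, subsystem):
    """Return 'message' formatted for the specified subsystem."""
    return '%s: %s' % (subsystem, message)

# partial() pre-fills the keyword argument; the result is a one-argument callable.
server_log = functools.partial(log, subsystem='server')
line = server_log('Unable to open socket')
```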
:func:`functools.reduce(func, iter, [initial_value]) <functools.reduce>`
cumulatively performs an operation on all the iterable's elements and,
therefore, can't be applied to infinite iterables. *func* must be a function
that takes two elements and returns a single value. :func:`functools.reduce`
takes the first two elements A and B returned by the iterator and calculates
``func(A, B)``. It then requests the third element, C, calculates
``func(func(A, B), C)``, combines this result with the fourth element returned,
and continues until the iterable is exhausted. If the iterable returns no
values at all, a :exc:`TypeError` exception is raised. If the initial value is
supplied, it's used as a starting point and ``func(initial_value, A)`` is the
first calculation. ::

    >>> import operator, functools
    >>> functools.reduce(operator.concat, ['A', 'BB', 'C'])
    'ABBC'
    >>> functools.reduce(operator.concat, [])
    Traceback (most recent call last):
      ...
    TypeError: reduce() of empty sequence with no initial value
    >>> functools.reduce(operator.mul, [1, 2, 3], 1)
    6
    >>> functools.reduce(operator.mul, [], 1)
    1
If you use :func:`operator.add` with :func:`functools.reduce`, you'll add up all the
elements of the iterable. This case is so common that there's a special
built-in called :func:`sum` to compute it:

    >>> import functools, operator
    >>> functools.reduce(operator.add, [1, 2, 3, 4], 0)
    10
    >>> sum([1, 2, 3, 4])
    10
    >>> sum([])
    0

For many uses of :func:`functools.reduce`, though, it can be clearer to just
write the obvious :keyword:`for` loop::

    import functools, operator
    # Instead of:
    product = functools.reduce(operator.mul, [1, 2, 3], 1)

    # You can write:
    product = 1
    for i in [1, 2, 3]:
        product *= i
A related function is :func:`itertools.accumulate(iterable, func=operator.add)
<itertools.accumulate>`.
It performs the same calculation, but instead of returning only the
final result, :func:`~itertools.accumulate` returns an iterator that also yields
each partial result::

    itertools.accumulate([1, 2, 3, 4, 5]) =>
      1, 3, 6, 10, 15

    itertools.accumulate([1, 2, 3, 4, 5], operator.mul) =>
      1, 2, 6, 24, 120
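Both forms are easy to check::

```python
import itertools
import operator

# accumulate() yields every partial result, not just the final one.
running_sums = list(itertools.accumulate([1, 2, 3, 4, 5]))
running_products = list(itertools.accumulate([1, 2, 3, 4, 5], operator.mul))
```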
The operator module
-------------------

The :mod:`operator` module was mentioned earlier. It contains a set of
functions corresponding to Python's operators. These functions are often useful
in functional-style code because they save you from writing trivial functions
that perform a single operation.

Some of the functions in this module are:

* Math operations: ``add()``, ``sub()``, ``mul()``, ``floordiv()``, ``abs()``, ...
* Logical operations: ``not_()``, ``truth()``.
* Bitwise operations: ``and_()``, ``or_()``, ``invert()``.
* Comparisons: ``eq()``, ``ne()``, ``lt()``, ``le()``, ``gt()``, and ``ge()``.
* Object identity: ``is_()``, ``is_not()``.

Consult the operator module's documentation for a complete list.
Small functions and the lambda expression
=========================================

When writing functional-style programs, you'll often need little functions that
act as predicates or that combine elements in some way.

If there's a Python built-in or a module function that's suitable, you don't
need to define a new function at all::

    stripped_lines = [line.strip() for line in lines]
    existing_files = filter(os.path.exists, file_list)

If the function you need doesn't exist, you need to write it. One way to write
small functions is to use the :keyword:`lambda` expression. ``lambda`` takes a
number of parameters and an expression combining these parameters, and creates
an anonymous function that returns the value of the expression::

    adder = lambda x, y: x + y

    print_assign = lambda name, value: name + '=' + str(value)
An alternative is to just use the ``def`` statement and define a function in the
usual way::

    def adder(x, y):
        return x + y

    def print_assign(name, value):
        return name + '=' + str(value)

Which alternative is preferable? That's a style question; my usual course is to
avoid using ``lambda``.
One reason for my preference is that ``lambda`` is quite limited in the
functions it can define. The result has to be computable as a single
expression, which means you can't have multiway ``if... elif... else``
comparisons or ``try... except`` statements. If you try to do too much in a
``lambda`` expression, you'll end up with an overly complicated expression that's
hard to read. Quick, what's the following code doing? ::

    total = functools.reduce(lambda a, b: (0, a[1] + b[1]), items)[1]

You can figure it out, but it takes time to disentangle the expression to figure
out what's going on. Using a short nested ``def`` statement makes things a
little bit better::

    def combine(a, b):
        return 0, a[1] + b[1]

    total = functools.reduce(combine, items)[1]
But it would be best of all if I had simply used a ``for`` loop::

    total = 0
    for a, b in items:
        total += b

Or the :func:`sum` built-in and a generator expression::

    total = sum(b for a, b in items)
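All three versions compute the same thing, which is easy to verify on some invented sample data::

```python
import functools

items = [('a', 1), ('b', 2), ('c', 3)]  # invented sample data

# The opaque reduce() version:
total_reduce = functools.reduce(lambda a, b: (0, a[1] + b[1]), items)[1]

# The plain for loop:
total_loop = 0
for a, b in items:
    total_loop += b

# The sum() + generator expression version:
total_sum = sum(b for a, b in items)
```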
Many uses of :func:`functools.reduce` are clearer when written as ``for`` loops.
Fredrik Lundh once suggested the following set of rules for refactoring uses of
``lambda``:

1. Write a lambda function.
2. Write a comment explaining what the heck that lambda does.
3. Study the comment for a while, and think of a name that captures the essence
   of the comment.
4. Convert the lambda to a def statement, using that name.
5. Remove the comment.

I really like these rules, but you're free to disagree
about whether this lambda-free style is better.
Revision History and Acknowledgements
=====================================

The author would like to thank the following people for offering suggestions,
corrections and assistance with various drafts of this article: Ian Bicking,
Nick Coghlan, Nick Efford, Raymond Hettinger, Jim Jewett, Mike Krell, Leandro
Lameiro, Jussi Salmela, Collin Winter, Blake Winton.

Version 0.1: posted June 30 2006.

Version 0.11: posted July 1 2006. Typo fixes.

Version 0.2: posted July 10 2006. Merged genexp and listcomp sections into one.

Version 0.21: Added more references suggested on the tutor mailing list.

Version 0.30: Adds a section on the ``functional`` module written by Collin
Winter; adds short section on the operator module; a few other edits.
References
==========

General
-------

**Structure and Interpretation of Computer Programs**, by Harold Abelson and
Gerald Jay Sussman with Julie Sussman. Full text at
http://mitpress.mit.edu/sicp/. In this classic textbook of computer science,
chapters 2 and 3 discuss the use of sequences and streams to organize the data
flow inside a program. The book uses Scheme for its examples, but many of the
design approaches described in these chapters are applicable to functional-style
Python code.

http://www.defmacro.org/ramblings/fp.html: A general introduction to functional
programming that uses Java examples and has a lengthy historical introduction.

http://en.wikipedia.org/wiki/Functional_programming: General Wikipedia entry
describing functional programming.

http://en.wikipedia.org/wiki/Coroutine: Entry for coroutines.

http://en.wikipedia.org/wiki/Currying: Entry for the concept of currying.

http://gnosis.cx/TPiP/: The first chapter of David Mertz's book
:title-reference:`Text Processing in Python` discusses functional programming
for text processing, in the section titled "Utilizing Higher-Order Functions in
Text Processing".

Mertz also wrote a 3-part series of articles on functional programming
for IBM's DeveloperWorks site; see
`part 1 <http://www.ibm.com/developerworks/linux/library/l-prog/index.html>`__,
`part 2 <http://www.ibm.com/developerworks/linux/library/l-prog2/index.html>`__, and
`part 3 <http://www.ibm.com/developerworks/linux/library/l-prog3/index.html>`__.
Python documentation
--------------------

Documentation for the :mod:`itertools` module.

Documentation for the :mod:`operator` module.

:pep:`289`: "Generator Expressions"

:pep:`342`: "Coroutines via Enhanced Generators" describes the new generator
features in Python 2.5.
.. comment

    Handy little function for printing part of an iterator -- used
    while writing this document.

    import itertools
    import sys

    def print_iter(it):
        slice = list(itertools.islice(it, 10))
        for elem in slice[:-1]:
            sys.stdout.write(str(elem))
            sys.stdout.write(', ')
        print(slice[-1])