:mod:`urllib.robotparser` --- Parser for robots.txt
====================================================

.. module:: urllib.robotparser
   :synopsis: Load a robots.txt file and answer questions about
              fetchability of other URLs.

.. sectionauthor:: Skip Montanaro <skip@pobox.com>

.. index::
   single: World Wide Web

This module provides a single class, :class:`RobotFileParser`, which answers
questions about whether or not a particular user agent can fetch a URL on the
Web site that published the :file:`robots.txt` file. For more details on the
structure of :file:`robots.txt` files, see http://www.robotstxt.org/orig.html.

.. class:: RobotFileParser()

   This class provides a set of methods to read, parse and answer questions
   about a single :file:`robots.txt` file.

   .. method:: set_url(url)

      Sets the URL referring to a :file:`robots.txt` file.

   .. method:: read()

      Reads the :file:`robots.txt` URL and feeds it to the parser.

   .. method:: parse(lines)

      Parses the lines argument.

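      For example, the parser can be fed the lines of a :file:`robots.txt`
      file directly, without fetching anything over the network.  This is a
      minimal sketch; the rules and the ``example.com`` URL are purely
      illustrative:

      >>> import urllib.robotparser
      >>> rp = urllib.robotparser.RobotFileParser()
      >>> rp.parse(["User-agent: *", "Disallow: /private/"])
      >>> rp.can_fetch("*", "http://example.com/private/page.html")
      False
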
   .. method:: can_fetch(useragent, url)

      Returns ``True`` if the *useragent* is allowed to fetch the *url*
      according to the rules contained in the parsed :file:`robots.txt`
      file.

   .. method:: mtime()

      Returns the time the ``robots.txt`` file was last fetched.  This is
      useful for long-running web spiders that need to check for new
      ``robots.txt`` files periodically.

   .. method:: modified()

      Sets the time the ``robots.txt`` file was last fetched to the current
      time.

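A long-running spider can use :meth:`mtime` to decide when its cached rules
are stale and should be re-read.  The following is a minimal sketch, not part
of the module; the one-hour threshold and the ``refresh_if_stale`` helper are
illustrative assumptions::

   import time
   import urllib.robotparser

   def refresh_if_stale(rp, max_age=3600):
       # Re-fetch and re-parse robots.txt when the cached copy is older
       # than max_age seconds; mtime() reports when it was last read.
       if time.time() - rp.mtime() > max_age:
           rp.read()
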
The following example demonstrates basic use of the :class:`RobotFileParser`
class.

   >>> import urllib.robotparser
   >>> rp = urllib.robotparser.RobotFileParser()
   >>> rp.set_url("http://www.musi-cal.com/robots.txt")
   >>> rp.read()
   >>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
   False
   >>> rp.can_fetch("*", "http://www.musi-cal.com/")
   True