urllib
This module provides a high-level interface for fetching data across the World-Wide Web. In particular, the urlopen() function is similar to the built-in function open(), but accepts URLs (Universal Resource Locators) instead of filenames. Some restrictions apply --- it can only open URLs for reading, and no seek operations are available.
It defines the following public functions:
urlopen(url)
Open a network object denoted by a URL for reading. If the connection cannot be made, or if the server returns an error code, the IOError exception is raised. If all went well, a file-like object is returned. This supports the following methods: read(), readline(), readlines(), fileno(), close() and info().
Except for the last one, these methods have the same interface as for
file objects --- see the section on File Objects earlier in this
manual. (It's not a built-in file object, however, so it can't be
used at those few places where a true built-in file object is
required.)
The info()
method returns an instance of the class
rfc822.Message
containing the headers received from the server,
if the protocol uses such headers (currently the only supported
protocol that uses this is HTTP). See the description of the
rfc822
module.
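As a minimal sketch (the URL and host name are made up for illustration, and the code follows the old-style urllib interface described here), the returned file-like object and its info() method might be used like this:

    import urllib

    # Open a network object for reading; IOError is raised on failure.
    f = urllib.urlopen('http://www.example.com/index.html')

    # The returned object behaves like a read-only file.
    data = f.read()

    # info() returns the server headers as an rfc822.Message instance
    # (only for protocols that carry headers, i.e. HTTP).
    headers = f.info()
    print headers.getheader('Content-Type')

    f.close()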
urlretrieve(url)
Copy a network object denoted by a URL to a local file, if necessary. Return a tuple (filename, headers) where filename is the local file name under which the object can be found, and headers is either None (for a local object) or whatever the info() method of the object returned by urlopen() returned (for a remote object, possibly cached). Exceptions are the same as for urlopen().
urlcleanup()
Clean up the cache that may have been built up by previous calls to urlretrieve().
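A brief sketch of urlretrieve() and urlcleanup() (again with a made-up URL):

    import urllib

    # Copy a remote object to a local file; filename names the local copy,
    # headers is None for a local object or the server headers otherwise.
    filename, headers = urllib.urlretrieve('http://www.example.com/logo.gif')
    print 'object stored as', filename

    # Discard whatever the cache may have accumulated from earlier calls.
    urllib.urlcleanup()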
quote(string[, addsafe])
Replace special characters in string using the %xx escape. Letters, digits, and the characters "_,.-" are never quoted. The optional addsafe parameter specifies additional characters that should not be quoted --- its default value is '/'.

Example: quote('/~connolly/') yields '/%7econnolly/'.
unquote(string)
Replace '%xx' escapes by their single-character equivalent.

Example: unquote('/%7Econnolly/') yields '/~connolly/'.
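For illustration, quote() and unquote() might be exercised like this (the extra positional argument corresponds to the addsafe parameter described above):

    import urllib

    print urllib.quote('/~connolly/')        # '/%7econnolly/'
    print urllib.unquote('/%7Econnolly/')    # '/~connolly/'

    # Passing an empty string as the additional-safe-characters argument
    # means '/' is no longer exempt, so it gets escaped as well.
    print urllib.quote('/~connolly/', '')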
Restrictions:

The caching feature of urlretrieve() has been disabled until I find the time to hack proper processing of Expiration time headers.
The urlopen()
and urlretrieve()
functions can cause
arbitrarily long delays while waiting for a network connection to be
set up. This means that it is difficult to build an interactive
web client using these functions without using threads.
The data returned by urlopen()
or urlretrieve()
is the
raw data returned by the server. This may be binary data (e.g. an
image), plain text or (for example) HTML. The HTTP protocol provides
type information in the reply header, which can be inspected by
looking at the Content-type
header. For the Gopher protocol,
type information is encoded in the URL; there is currently no easy way
to extract it. If the returned data is HTML, you can use the module
htmllib
to parse it.
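As a sketch of inspecting this type information (the URL is again made up), one might look at the Content-type header before deciding how to handle the data:

    import urllib

    f = urllib.urlopen('http://www.example.com/')
    ctype = f.info().getheader('Content-Type')

    # Decide how to treat the raw data based on the declared type
    # (a real server may append parameters such as a charset).
    if ctype == 'text/html':
        print 'HTML -- this could be fed to an htmllib parser'
    else:
        print 'data of type', ctype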
Although the urllib
module contains (undocumented) routines to
parse and unparse URL strings, the recommended interface for URL
manipulation is in module urlparse
.
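For completeness, a small sketch of the urlparse interface mentioned above (hypothetical URL):

    import urlparse

    # Split a URL into its components: scheme, network location, path,
    # parameters, query and fragment.
    print urlparse.urlparse('http://www.example.com/~connolly/index.html')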