~pythonregexp2.7/python/issue2636-01+09-01-01


Viewing changes to Lib/tokenize.py

  • Committer: Jeffrey C. "The TimeHorse" Jacobs
  • Date: 2008-09-22 00:02:12 UTC
  • mfrom: (39022.1.34 Regexp-2.7)
  • Revision ID: darklord@timehorse.com-20080922000212-7r0q4f4ugiq57jph
Merged in changes from the Atomic Grouping / Possessive Qualifiers branch.


@@ -146,7 +146,9 @@
 
 class StopTokenizing(Exception): pass
 
-def printtoken(type, token, (srow, scol), (erow, ecol), line): # for testing
+def printtoken(type, token, srow_scol, erow_ecol, line): # for testing
+    srow, scol = srow_scol
+    erow, ecol = erow_ecol
     print "%d,%d-%d,%d:\t%s\t%s" % \
         (srow, scol, erow, ecol, tok_name[type], repr(token))
 