Functions that return None to indicate special meaning are error prone because None and other values (e.g., zero, the empty string) all evaluate to False in conditional expressions.
Raise exceptions to indicate special situations instead of returning None. Expect the calling code to handle exceptions properly when they're documented.
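For example, a minimal sketch of this pattern (careful_divide and its caller are illustrative names, not from the original post):

def careful_divide(a, b):
    """Divide a by b.

    Raises:
        ValueError: When b is zero, so the result is undefined.
    """
    try:
        return a / b
    except ZeroDivisionError:
        raise ValueError('Invalid inputs')

# The caller can now tell a legitimate result of 0 apart from an error.
try:
    result = careful_divide(0, 5)
except ValueError:
    print('Invalid inputs')
else:
    print('Result is %.1f' % result)  # Result is 0.0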
# found never changes: the assignment inside helper creates a new local variable
def sort_priority(numbers, group):
    found = False
    def helper(x):
        if x in group:
            found = True
            return (0, x)
        return (1, x)
    numbers.sort(key=helper)
    return found

# workaround: use a mutable value, for example a single-item list
def sort_priority(numbers, group):
    found = [False]
    def helper(x):
        if x in group:
            found[0] = True
            return (0, x)
        return (1, x)
    numbers.sort(key=helper)
    return found
Closure functions can refer to variables from any of the scopes in which they were defined.
By default, closures can't affect enclosing scopes by assigning variables.
In Python 2, use a mutable value (like a single-item list) to work around the lack of the nonlocal statement.
Avoid using nonlocal statements for anything beyond simple functions.
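In Python 3 the nonlocal statement lets the closure assign the enclosing variable directly; a minimal sketch reusing the sort_priority example above (the sample numbers and group are illustrative):

def sort_priority(numbers, group):
    found = False
    def helper(x):
        nonlocal found  # opt in to assigning the variable from the enclosing scope
        if x in group:
            found = True
            return (0, x)
        return (1, x)
    numbers.sort(key=helper)
    return found

numbers = [8, 3, 1, 2, 5, 4, 7, 6]
group = {2, 3, 5, 7}
print(sort_priority(numbers, group))  # True
print(numbers)                        # [2, 3, 5, 7, 1, 4, 6, 8]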
Using an iterator can be clearer than the alternative of returning a list of accumulated results.
The iterator returned by a generator produces the set of values passed to yield expressions within the generator function's body.
Generators can produce a sequence of outputs for arbitrarily large inputs because their working memory doesn't include all inputs and outputs.
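A minimal sketch of the contrast (the index_words helpers are illustrative, not from the original post); both find the index of every word in a string, but only the first must hold all results in memory at once:

def index_words_list(text):
    # Accumulates every result before returning anything.
    result = []
    if text:
        result.append(0)
    for index, letter in enumerate(text):
        if letter == ' ':
            result.append(index + 1)
    return result

def index_words_iter(text):
    # Yields each result as soon as it is found.
    if text:
        yield 0
    for index, letter in enumerate(text):
        if letter == ' ':
            yield index + 1

print(list(index_words_iter('four score and seven years ago')))  # [0, 5, 11, 15, 21, 27]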
The iterator protocol is how Python for loops and related expressions traverse the contents of a container type. When Python sees a statement like for x in foo, it will actually call iter(foo). The iter built-in function calls the foo.__iter__ special method in turn. The __iter__ method must return an iterator object (which itself implements the __next__ special method). Then the for loop repeatedly calls the next built-in function on the iterator object until it's exhausted (signaled by a StopIteration exception).
Practically speaking, you can achieve all of this behavior for your classes by implementing the __iter__ method as a generator.
The protocol states that when an iterator is passed to the iter built-in function, iter will return the iterator itself. In contrast, when a container type is passed to iter, a new iterator object will be returned each time.
>>> class MyContainer(object):
...     def __iter__(self):
...         return (_ for _ in xrange(5))
...
>>> gen = MyContainer()  # a new iterator object will be returned each time
>>> [_ for _ in gen]
[0, 1, 2, 3, 4]
>>> [_ for _ in gen]
[0, 1, 2, 3, 4]
>>> [_ for _ in gen]
[0, 1, 2, 3, 4]
>>> iterator = (_ for _ in xrange(5))  # iter returns the iterator itself
>>> [_ for _ in iterator]
[0, 1, 2, 3, 4]
>>> [_ for _ in iterator]
[]
>>> [_ for _ in iterator]
[]
Thus, you can test an input value for this behavior and raise a TypeError to reject iterators. It will work for any type of container that follows the iterator protocol.
def normalize_defensive(numbers):
    if iter(numbers) is iter(numbers):  # An iterator -- bad!
        raise TypeError('Must supply a container')
    # sum will call ReadVisits.__iter__ to allocate a new iterator object
    total = sum(numbers)
    result = []
    # the for loop will also call __iter__ to allocate a second iterator object
    for value in numbers:
        percent = 100 * value / total
        result.append(percent)
    return result
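The comments above mention ReadVisits, which the post never defines; a minimal sketch of such a container (the class name, file path, and file format are assumptions for illustration) implements __iter__ as a generator so that every call produces a fresh iterator:

class ReadVisits(object):
    def __init__(self, data_path):
        self.data_path = data_path

    def __iter__(self):
        # Each call opens the file again and returns a new generator object.
        with open(self.data_path) as handle:
            for line in handle:
                yield int(line)

# Hypothetical usage: sum() and the for loop inside normalize_defensive
# each get their own iterator, so the data is not exhausted prematurely.
# visits = ReadVisits('my_numbers.txt')
# percentages = normalize_defensive(visits)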
>>> lst = [1, 2, 3]
>>> iter(lst) == iter(lst)
False
>>> gen = (_ for _ in xrange(4))
>>> iter(gen) == iter(gen)
True
>>> lst
[1, 2, 3]
>>> # join!
>>> ','.join(str(x) for x in lst)
'1,2,3'
>>> ','.join([str(x) for x in lst])
'1,2,3'
>>> ','.join((str(x) for x in lst))
'1,2,3'
Functions can accept a variable number of positional arguments by using *args in the def statement.
You can use the items from a sequence as the positional arguments for a function with the * operator.
Using the * operator with a generator may cause your program to run out of memory and crash.
Adding new positional parameters to functions that accept *args can introduce hard-to-find bugs.
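A minimal sketch of these points (the log function and the sample values are illustrative):

def log(message, *values):
    if not values:
        print(message)
    else:
        values_str = ', '.join(str(x) for x in values)
        print('%s: %s' % (message, values_str))

log('My numbers are', 1, 2)         # extra positional arguments land in values
favorites = [7, 33, 99]
log('Favorite colors', *favorites)  # the * operator unpacks the sequence

# Careful: unpacking a generator with * materializes it into a tuple first,
# which can exhaust memory for arbitrarily large inputs.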
Function arguments can be specified by position or by keyword.
Keywords make it clear what the purpose of each argument is when it would be confusing with only positional arguments.
Keyword arguments with default values make it easy to add new behaviors to a function, especially when the function has existing callers.
Optional keyword arguments should always be passed by keyword instead of by position.
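A minimal sketch of positional versus keyword calls (remainder is an illustrative example):

def remainder(number, divisor):
    return number % divisor

# All of these calls are equivalent; the keyword forms make the intent explicit.
assert remainder(20, 7) == 6
assert remainder(20, divisor=7) == 6
assert remainder(number=20, divisor=7) == 6
assert remainder(divisor=7, number=20) == 6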
Default arguments are only evaluated once: during function definition at module load time. This can cause odd behaviors for dynamic values (like {} or []).
Use None as the default value for keyword arguments that have a dynamic value. Document the actual default behavior in the function's docstring.
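A minimal sketch of the pitfall and the fix (the decode helper and its JSON use case are assumptions for illustration):

import json

# Risky: the default dict is built once, at definition time, so every call
# that omits the argument shares the very same dict object.
def decode_risky(data, default={}):
    try:
        return json.loads(data)
    except ValueError:
        return default

# Safer: use None as the default and document the real behavior.
def decode(data, default=None):
    """Load JSON data from a string.

    Args:
        data: JSON data to decode.
        default: Value to return if decoding fails. Defaults to an empty dict.
    """
    if default is None:
        default = {}
    try:
        return json.loads(data)
    except ValueError:
        return default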
def safe_division_d(number, divisor, **kwargs):
    ignore_overflow = kwargs.pop('ignore_overflow', False)
    ignore_zero_div = kwargs.pop('ignore_zero_division', False)
    if kwargs:
        raise TypeError('Unexpected **kwargs: %r' % kwargs)
    # ...

# Passing the flags positionally raises an exception:
safe_division_d(1, 0, False, True)
# >>> TypeError: safe_division_d() takes 2 positional arguments but 4 were given

# It works when the flag is passed by keyword:
safe_division_d(1, 0, ignore_zero_division=True)
Keyword arguments make the intention of a function call more clear.
Use keyword-only arguments to force callers to supply keyword arguments for potentially confusing functions, especially those that accept multiple Boolean flags.
Python 2 can emulate keyword-only arguments for functions by using **kwargs and manually raising TypeError exceptions.
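In Python 3, keyword-only arguments are declared directly with a bare * in the signature; a minimal sketch (the safe_division variant below is illustrative):

def safe_division(number, divisor, *,
                  ignore_overflow=False,
                  ignore_zero_division=False):
    try:
        return number / divisor
    except OverflowError:
        if ignore_overflow:
            return 0
        raise
    except ZeroDivisionError:
        if ignore_zero_division:
            return float('inf')
        raise

# safe_division(1, 0, True)  # TypeError: the flags cannot be passed positionally
result = safe_division(1, 0, ignore_zero_division=True)  # returns inf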
Original post: http://www.cnblogs.com/senjougahara/p/python.html