Uses and Abuses Of Python’s Excessive Dynamism

I have not written much about programming lately, but enough has built up in my personal goofiness repo that I thought I’d talk about it. I am a Python programmer by day, and occasionally I learn things about the language that click together and I realize how badly I can abuse the runtime, which I then do as aggressively as I can manage. Here are some of my favourite examples.

You can find all the code samples on my GitHub.

Aspect Oriented Programming via Late Binding Abuse And Metaclasses

Python metaclasses are a complicated beast that I will not explain in great detail here. This Stack Overflow answer is a good place to start on how they work in general.

Here I present a metaclass for making all class functions log their calls:

import types
from functools import wraps


def logging_func(f):
    @wraps(f)
    def _logs(self, *args, **kwargs):
        alist = ', '.join(str(a) for a in args)
        kwlist = ', '.join('%s=%s' % (k, v) for k, v in kwargs.items())
        # filter(None, ...) drops the empty piece so we don't print a
        # stray comma when there are no positional or keyword arguments
        print '%s(%s)' % (f.__name__, ', '.join(filter(None, [alist, kwlist])))
        return f(self, *args, **kwargs)
    return _logs


class LoggingMeta(type):
    def __new__(cls, name, bases, attrs):
        # At class-creation time the methods are still plain functions in
        # the attrs dict, so we can swap them out before type() runs
        for k, v in attrs.items():
            if type(v) in (types.MethodType, types.FunctionType, types.LambdaType):
                attrs[k] = logging_func(v)
        return super(LoggingMeta, cls).__new__(cls, name, bases, attrs)

logging_func is just a decorator that takes a function, and returns a new function that prints the arguments of the call before returning the result of the original function.

LoggingMeta is a metaclass (you can tell by how it inherits from type!) that modifies any class of which it is the metaclass by wrapping all functions with the above logging_func. If this were production code, you’d want to be more discerning about which methods to wrap (probably leave out __init__, etc), but if this were production code you wouldn’t want to be doing this at all, so there’s that.
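To see it in action before the late-binding trick below, here's a quick sketch using the metaclass directly (the class and method names are mine):

class Chatty(object):
    __metaclass__ = LoggingMeta

    def greet(self, name, excited=False):
        return 'hi %s' % name

Chatty().greet('world', excited=True)
# prints: greet(world, excited=True)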

So that covers the metaclasses part of the heading. Next up we abuse late binding:

class SecretLogging(object):
    __metaclass__ = LoggingMeta

# Rebind the name object in this module; every class statement below
# that says class Foo(object) now inherits SecretLogging's metaclass
object = SecretLogging

In Python, all the builtins are ordinary names that are simply bound by default. This means you can rebind builtin names such as object and str to any object of your choice. (This does not apply to keywords such as if and in.) Here we bind object to a new class: an empty new-style class with the metaclass defined above. Now all new-style classes defined henceforth (in any scope where this rebinding of object has occurred) will secretly log all their function calls.
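As a minimal aside, you can watch the shadowing happen with any builtin at module scope:

print len('abc')    # 3, the builtin
len = lambda x: 42  # a module-level binding shadows the builtin
print len('abc')    # 42
del len             # delete the shadow; the builtin shows through again
print len('abc')    # 3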

class Foo(object):

    def something(self, g, a=None, b=None, *args, **k):
        return {'a': a, 'b': b}

Now, unbeknownst to me, the naive definer of the Foo class, all my methods are secretly wrapped up to log their calls!
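Trying it out (argument values are just for illustration):

f = Foo()
f.something(1, a=2)
# prints: something(1, a=2)
# returns: {'a': 2, 'b': None}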

Declarative Function Currying

This one actually came up at work. I ended up deciding it was too arcane to go with, but it would have solved a real problem (though it probably would have added more problems than it solved). I was modifying some code to take a new configuration parameter. The tests all set up the configuration to the appropriate state for the given test using a context manager, so there were a lot of lines like:

with build_my_object(some_config):
    # the rest of the test

So I got to thinking: it would be nice to be able to say "I'd like to ensure all these tests still work the same when my new configuration parameter is set", without duplicating every function just to change one variable. I realized I could do this with Python magic! The tests ran under one of those frameworks that runs anything named test_* as a unit test, so I needed some scheme for creating new test functions on my class from what I'd think of as a test template. My idea was a decorator that would tell some magic infrastructure somewhere to take the decorated function as a template and replace it with two or more appropriately named versions of itself, each calling the build_my_object context manager with different parameters.
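Concretely, the duplication I wanted to avoid looked something like this (build_my_object is from the real code; the test names and config values are stand-ins):

def test_widget_default(self):
    with build_my_object(default_config):
        # assertions here
        pass

def test_widget_new_param(self):
    with build_my_object(config_with_new_param):
        # the exact same assertions, duplicated
        pass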

Here is how to implement something like this. I again encourage you not to do things this aggressively arcane in your production code.

First, let’s look into the contract we’d like to be able to use for these templates. I ended up with the following, and I think it’s pretty appropriate for what I’m trying to do:

@method_duper
class Foo(object):
    @dupe('x', 'y')
    def duped_this_one(self, arg):
        print arg

This means we want to take the function duped_this_one and replace it with two functions: one that calls it with the argument ‘x’ and one that calls it with the argument ‘y’. The class decorator method_duper is a necessary evil (unless we want to go into metaclasses like above, but I didn’t think that level of indirection was necessary for this trick). It’s not part of the contract I want, but it makes things work.
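In other words, once everything runs, the class should behave as if we had written it out by hand like this (method names follow the numbering scheme used below):

class Foo(object):
    def duped_this_one_0(self):
        print 'x'

    def duped_this_one_1(self):
        print 'y'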

So now we have two things we need to define to make this work: method_duper and dupe. Let’s look at dupe first:

def dupe(*dupe_args):
    def wrapper(f):
        f._dupe = True
        # Normalize each entry to a tuple of arguments, so a multi-character
        # string like 'foo' is later passed as one argument rather than
        # unpacked character by character
        f._dupe_args = tuple(
            a if isinstance(a, tuple) else (a,) for a in dupe_args
        )
        return f
    return wrapper

This is a pretty standard decorator in shape, but note that it does not actually return a modified function. This is using a decorator to implement a declarative pattern, not to change the behaviour of a function. All this does is set some attributes on the wrapped function so that our “magic infrastructure” can use them later to dupe our functions.
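With the normalization above, the attributes end up looking like this (toy function, names mine):

@dupe('x', 'y')
def f(self, arg):
    pass

print f._dupe       # True
print f._dupe_args  # (('x',), ('y',))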

So the real work must happen in method_duper, and indeed it does.

def method_duper(cls):
    # Factory so each new_f closes over its own f and args, not the
    # loop variables (see the broken version below)
    def maker(f, *args):
        def new_f(self):
            return f(self, *args)
        return new_f

    for n in dir(cls):
        f = getattr(cls, n)
        if not callable(f) or not getattr(f, '_dupe', False):
            continue

        for i, new_args in enumerate(f._dupe_args):
            mod = maker(f, *new_args)
            name = '%s_%d' % (n, i)
            mod.func_name = name  # so tracebacks show the new name
            setattr(cls, name, mod)
        # The template has served its purpose; remove it from the class
        delattr(cls, n)
    return cls

This is a fairly complicated function, but a lot of it is actually just doing some massaging for issues like scope control. Let’s look at a simplified (and wrong) version to get the gist of it, and then I will briefly explain why it doesn’t work so well:

def method_duper(cls):
    for n in dir(cls):
        f = getattr(cls, n)
        if not getattr(f, '_dupe', False):
            continue

        for i, new_args in enumerate(f._dupe_args):
            # BUG: f and new_args are free variables looked up when new_f
            # is *called*, so every copy sees the last loop values
            def new_f(self):
                return f(self, *new_args)
            name = '%s_%d' % (n, i)
            new_f.func_name = name
            setattr(cls, name, new_f)
        delattr(cls, n)
    return cls

This is a little easier to swallow. This is a class decorator, as its usage earlier on makes clear: it takes in a class, modifies it, and returns it. It is worth noting that, because of Python’s evaluation rules, the class body will have been fully executed before this decorator is called (you can think of a class statement as syntactic sugar for a call to type(), or to whatever metaclass is in play, with the executed class body as its attributes). Thus, the class object we receive already has its methods defined, and, in particular, our @dupe() decorators have also executed and set up their declarations.
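As a rough sketch of that desugaring (the lambda body is just a placeholder):

# class Foo(object): ... is roughly equivalent to:
Foo = type('Foo', (object,), {
    'something': lambda self: 'hello',
})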

So we go through each element in the class, and any that have the _dupe attribute set, we modify appropriately. What the inner loop here is doing is taking the function’s _dupe_args (which you’ll recall we set up with our call to @dupe()) and, for each argument list, creating a new function (named, obviously enough, new_f), naming that function in a unique way, and setting up an attribute of that name on the class.

We then go and delete the original method (it was only a template, after all!) and voila, we have our class with custom, declaratively defined duplicated methods!
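Putting it all together with the Foo class from the contract section:

f = Foo()
f.duped_this_one_0()                # prints x
f.duped_this_one_1()                # prints y
print hasattr(f, 'duped_this_one')  # False; the template method is gone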

If you go back to the earlier code, the major difference is that we used a factory function (maker) to define new_f rather than defining it inline in the loop. This gets around Python’s scoping rules, and it’s a problem with defining closures in loops that anyone who has ever worked in JavaScript will be all too familiar with.
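The pitfall in miniature (Python 2 at module scope, where a list comprehension leaks its loop variable):

# Every lambda closes over the same i, which the loop leaves at 2
fs = [lambda: i for i in range(3)]
print [g() for g in fs]  # [2, 2, 2]

# The default-argument trick freezes the current value; the maker
# factory above solves the same problem with an extra function scope
fs = [lambda i=i: i for i in range(3)]
print [g() for g in fs]  # [0, 1, 2]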

Defining Pre-Increment via Overly Broad Magic Methods

Why is __pos__ (the hook for unary plus) an overridable attribute on a class? The world will never know. But it means you can define pre-increment operators if you decide to!

class Increment(object):
    '''
    i = Increment()
    print i.val # 0
    ++i
    print i.val # 1
    '''
    def __init__(self, val=0):
        self.val = val
        self._inc = False

    def __pos__(self):
        # ++i evaluates as +(+i): the first __pos__ arms the flag,
        # the second one actually increments
        if self._inc:
            self.val += 1
        self._inc = not self._inc
        return self

    def __getattr__(self, name):
        # Delegate everything else to the wrapped value
        return getattr(self.val, name)

This just uses the __pos__ magic method to do gross things with internal state. Fun fact: since Python doesn’t actually have a ++ operator, any run of + symbols lexes as independent unary plus operators, so you can do nonsense like:

i = ++++++++++Increment()
print i.val

which will print 5.

That’s all, folks

Hopefully you learned some things not to do in your production code today. There’s more goofiness up on my GitHub: in particular, the lambda calculus in Python is pretty fun, but that’s much less fun to explain and mostly just an exercise in the flexibility of __call__.
