Monitoring Cron Jobs

Wed, 24 Feb 2010 22:27

(This article comes from one I helped edit and publish inside work, so I can't take any credit for the ideas expressed within, though I do vehemently and violently subscribe to the sentiment! Thanks to Alan Sundell for originally educating me.)

When you set MAILTO in a crontab fragment (or leave it unset), it's typically because you want to be notified if your job fails -- which works only if the job prints to stdout/stderr solely on exceptional conditions. However, not all jobs print only on exceptional conditions; many use stderr for logging, and email is just not a great solution to this problem, especially at scale.

Problems with alerting by email from cron.

Why is it a bad idea to rely on cron mail?

  • We all get so much mail from cron that few are in the habit of reading it anymore.
  • If your server is broken, the mail submission agent may be broken too.
  • You may handle your cron mail, but you may be on holiday when it arrives.
  • crond can crash.
  • You will get multiple emails for the same failure.
  • Your cron mail will get delivered to every one of your mailboxes, eating up storage.
  • You cannot suppress cron mail notifications.
  • Your cron mail has no concept of dependencies.
  • You will get notified of temporary failures when you only care about persistent ones.

If a cronjob running successfully is critical to operation, then it seems that what you really need is some kind of monitoring system that addresses all of these things, and can send alerts to some oncall rotation that determines who is responsible for handling alerts.

A potential solution

Here's an idea that might help with that.

  1. Direct the output of your job to some log file for debugging in the event of persistent failure. Note the truncation:

    */1 * * * * root cronjob > /var/log/cronjob.log 2>&1

    (If you decide to append to, rather than overwrite, the log on each execution, then make sure you logrotate that file.)

  2. At the end of cronjob, update a status file, like so:

    scriptname=$(basename $0)
    date +%s > /var/tmp/${scriptname}-last-success-timestamp

    Ensure that your job exits on error before reaching the last line!

  3. Collect the content of that file regularly with your monitoring system; scrape it with the nagios host agent, pump it into collectd, whatever you hip open source cats are using these days.

  4. Configure your monitoring system to send a notification when the timestamp has not been updated within some time period:

    if cronjob-last-success-timestamp < (time() - 30m)
      then alert

  5. Profit!
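Steps 3 and 4 can be sketched in a few lines of Python; here's a minimal freshness check (the file path and threshold are hypothetical defaults -- tune them to your own scrape intervals and SLAs):

```python
import time

STALE_AFTER = 30 * 60  # seconds; tune to your scrape interval and SLA


def cronjob_is_healthy(path="/var/tmp/cronjob-last-success-timestamp"):
    """Return True if the job succeeded recently, False if it's stale."""
    try:
        with open(path) as f:
            last_success = int(f.read().strip())
    except (IOError, OSError, ValueError):
        # a missing or unreadable status file counts as stale too
        return False
    return time.time() - last_success < STALE_AFTER
```

Note that a check like this is failsafe by construction: anything that stops the timestamp being written -- including crond itself crashing -- eventually raises the alert.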

Now you only generate an alert if the cron job hasn't succeeded in the last 30 minutes (a threshold you can adjust to match your monitoring scrape intervals and service SLAs), and with a sufficiently mature monitoring system you can now express dependencies, suppress the notification, and send it to an oncall rotation, and so on!

Most significantly, we have converted a system that always reported failure, into a system that is based around checking for success -- a failsafe.

I just read with excitement the announcement of Debian Enhancement Proposals, something that I too have been contemplating in recent months (but due to my ghastly lack of commitment to the Debian community doubted my ability to drive it).

I work in a company driven by engineering documents and designs; I like RFCs, and I like what Python has done with its PEPs. Debian's adoption of this useful tool can only improve the community and the distribution.


Mo rides, 20c

Mon, 19 Nov 2007 23:47


For the last 19 days I have been growing a filthy mo, again. Why? Because it was so much fun last time, raising money and awareness for the fight against male depression and prostate cancer.

  • Depression affects 1 in 6 men...Most don't seek help. Untreated depression is a leading risk factor for suicide.
  • Last year in Australia 18,700 men were diagnosed with prostate cancer and more than 2,900 died of prostate cancer - equivalent to the number of women who die from breast cancer annually.
  • Men are far less healthy than women. The average life expectancy of males is 5 years less than females.

To sponsor my Mo please go to http://www.movember.com/au/donate, enter my registration number which is 79829 and your credit card details. Or you can sponsor me by cheque made payable to the "Movember Foundation" clearly marking the donation as being for my Registration Number: 79829. Please mail cheques to: PO Box 292, Prahran VIC 3181. All donations over $2 are tax deductible.

The money raised by Movember is donated to the Prostate Cancer Foundation of Australia and beyondblue - the national depression initiative, which will use the funds to create awareness, fund research and increase support networks for those men who suffer from prostate cancer and male depression.

For those that have supported Movember in previous years you can be very proud of the impact it has had and can check out the detail at: Fundraising Outcomes.

Movember culminates at the end of the month at the Gala Parties. These glamorous and groomed events will see Tom Selleck and Borat look-a-likes battle it out for their chance to take home the prestigious Man of Movember title. If you would like to be part of this great night you'll need to purchase a Gala Party ticket.

Guys, invest in your future! Girls, invest in your future!

Thanks for your support!


nsscache open source launch

Mon, 05 Nov 2007 21:54

Today we open sourced the project I've been working on for the last 9 months, nsscache.

It's a glorified version of:

ldapsearch | awk > /etc/passwd

in that we in theory support more than just LDAP as a data source, and offer two types of database storage (nss_db using Berkeley DB, and plain text files).

If you're having issues with your nss_ldap setup, then try it out :)

Today's blog of the day comes from this post:

The easy solution to this is for the conference organizers to provide laptops that have multiple boot options for different distributions.

I think this is certainly interesting, and Mel8 might want to experiment with this, but I find the use of the word "easy" quite amusing here :)

This post caught my eye on the feed this morning:

your maternal ancestors might be complete morons, but you insult your audience by implying everyone else's are too!

linux.conf.au 2007 programme choices

Fri, 12 Jan 2007 01:39

As the clock ticks approach single digit figures, the organising team are ramping up. Everything's coming along smoothly in time for a kick-arse start on Monday.

One thing that makes me sad is that I'll likely not be able to watch a lot of the talks -- but if I do get a chance, I'd see these:

  • clustering tdb by Andrew Tridgell. When I first saw this proposal came in, I knew it was a good one. Someone should make an LDAP server that doesn't suck, and use tdb to solve the multimaster replication problem in the storage layer. Oh wait -- that's what he's doing already.

  • Puppet by Luke Kanies. Luke's been working on this awesome next-generation systems configuration management tool for a few years now. He approached me, back in the day, to be a beta-tester -- he and I were both hitting scaling problems with cfengine. I hope every sysadmin makes it to this talk!

There's a few others to note: Theodore Ts'o always has an interesting talk; this year he's giving two cool subjects a run. The tutorial on Heartbeat 2 by Alan Robertson is sure to be full of good loadbalancing fu.

There's lots of exciting things going on in the programme, so I hope to see you all next week!

we're #1!

Tue, 14 Nov 2006 19:31

Who's the best linux conference in the world?

Turns out we're only second hit for "linux conference", but we're still best!

linux.conf.au boned, fixed again

Tue, 14 Nov 2006 18:54

Yeah, I broke the registration process when I rolled out some bugfixes last night, but I've fixed them now! Continue in your merry scramble for tickets to the ROCKIN'EST FREE AND OPEN SORES GIG IN THE SOUTH.

(In unrelated news, can you think of a good reason why they shouldn't go to lca2007?)

lca2007 registrations now open!

Wed, 01 Nov 2006 20:34

Registrations to LCA 2007 are now open! Get in quick!


Wed, 01 Nov 2006 07:37

Sponsor me for movember! My rego number is 4098!

Last night the Seven met, as they do, around a dark table, deep below the city in a room built by the Templar Knights; surrounded, as normal, by ancient iconography of power and knowledge. Their goal: a draft programme for LCA 2007.

We ranked the streams in order of popularity; nothing too scientific, based purely on presenter name and subject matter: what would likely draw the biggest crowds?

Then with that ordering, we'd go through the streams, picking off the top of all lists, and putting them in the biggest room. Repeat for the next one in the next talk slot, offsetting them so that no two venues would carry the same stream at the same time.
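That drafting pass is essentially round-robin allocation; a toy sketch of it in Python (a hypothetical draft_programme helper, not whatever tooling we actually used):

```python
def draft_programme(streams, venues, slots):
    """streams: list of {'name': ..., 'talks': [...]} ranked by popularity.
    Each slot, each venue takes the top remaining talk of a stream,
    rotating the stream/venue pairing so no two venues carry the same
    stream in the same slot."""
    programme = []
    for slot in range(slots):
        row = {}
        for i, venue in enumerate(venues):
            stream = streams[(i + slot) % len(streams)]
            if stream['talks']:
                # pop the most popular remaining talk off this stream
                row[venue] = (stream['name'], stream['talks'].pop(0))
        programme.append(row)
    return programme
```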

halfway through the first programme draft

By the end of it, we had a nice patchwork quilt.

the finished draft

I don't think we're done with it, yet, but it's a good start!

new programming music

Wed, 20 Sep 2006 15:47

On Erik's recommendation I ordered Ministry of Sound.. no, just kidding. I got Greatest Fits ordered from Red Eye, my favouritest record store ever, and it arrived yesterday.


LCA 2007: no longer "fucking amazing"

Tue, 19 Sep 2006 21:41

To combat the risk of driving people away from conferences, linux.conf.au 2007 will no longer be fucking amazing.

Instead, it will merely OPEN YOUR SORES, FREE YOUR SOFT WEAR, and ROCK.

lca 2007 logo

The response to the CFP has been massive! Right now our reviewers are starting to read the torrent of submissions...

  • 212 proposals for presentations/seminars
  • 31 proposals for tutorials
  • 17 proposals for miniconfs

Holy shit!

Reading through some of the titles and abstracts, there's a lot of really awesome stuff that people have been working on; I'm really excited about what the programme is going to look like!

Unfortunately, we're going to have to reject a lot of these :( On the upside, it does mean that the stuff that does make it through is going to be so mindblowingly awesome that you can't afford not to be at linux.conf.au 2007!


woo, new command

Thu, 14 Sep 2006 10:51

Shoulder-surfing johnf the other night at the seven meeting, I saw him use

cd -

to return to a previous directory...

I'd been using pushd and popd whenever I remembered; I tried out cd - just now and bam! Productivity increased 300%! Now I won't be opening new terminals just to keep a shell in past directories...

wrapping CGI applications in WSGI

Tue, 12 Sep 2006 12:30

We've got a large "legacy" body of code that our staff use to track most of our business: a whole lot of Python CGI built on some custom HTML and DB frameworky code. It's pretty ugly, and having become a convert to the cult of Pylons, WSGI, and SQLAlchemy, I really want to replace it.

Of course, anyone knows that one of the Things You Should Never Do is rewrite from scratch. Even in the same language.

It would be much easier to integrate the old app into a new Pylons app, have them running side by side, and slowly deprecate the old one as new interfaces are written. (This is still not a perfect idea, as demonstrated by the 4-year-old TCL code that the current app was meant to replace, still running in production ;-) As bugs in the old code are found, we can either beat our heads against brick walls, or replace just that functionality with a sane data model, similar-looking templates, and shiny new controller smarts. No-one would be the wiser, except of course that the developers would for some reason no longer be constantly grumpy, and the webapp would run smoother and faster than before, and crash less often...

It occurred to me yesterday that the best way to get a legacy CGI app to run alongside Pylons is to convert it to a WSGI application, and just mash it in at the bottom of the application stack, where Pylons would normally go when it 404s.

Here's the result of some free time and caffeinated excitement this morning:

import imp
import sys
import StringIO

def application(environ, start_response):
    # trap the exit handler, we don't want scripts exiting randomly;
    # stash the return code in a mutable so the closure can set it,
    # in case we want to do something with it later
    retcode = [None]
    def exit_handler(rc=0):
        retcode[0] = rc

    sys.exit = exit_handler

    # trap the output buffer
    outbuf = StringIO.StringIO()
    sys.stdout = outbuf

    # catch stderr output in the parent's error stream
    sys.stderr = environ['wsgi.errors']

    # import the script
    script = environ['PATH_TRANSLATED']
    f = open(script, "rb")
    try:
        imp.load_module('__main__', f, script, ("py", "rb", imp.PY_SOURCE))
    finally:
        f.close()
        # undo the stdout monkeypatch so we don't swallow later output
        sys.stdout = sys.__stdout__

    # outbuf has a typical CGI response, headers separated by a double
    # newline, then content
    (header, content) = outbuf.getvalue().split('\n\n', 1)
    headers = [tuple(x.split(': ', 1)) for x in header.split('\n')]

    # return it wsgi style
    start_response('200 OK', headers)
    return [content]

Our CGI apps print on stdout, as you'd expect, so we need to trap that, here done with a StringIO object monkeypatched over sys.stdout. We also need to hack sys.exit out of the way, so that the CGIs don't quit before we've completed the WSGI protocol. (I think this might cause some bugs in the execution, though, because now it's not terminating execution of the module, but I haven't found an example yet to bother worrying about it.)

I import the script, rather than using os.system, because it feels right. I use imp.load_module rather than import because we don't know what the script is until runtime :)

The real trick comes from a tip I found here, whilst looking for how to run the imported module as __main__. Just imp.load_module and tell it that it's __main__! Simple!

(The hardest part about this whole exercise was fiddling with sys.path and the CWD to make sure the imported script ran with the environment the CGIs used to expect; this is all done in the CGI runner dispatch.cgi, which I won't copy here because it's pretty trivial and well documented in the WSGI spec.)
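The two tricks -- trapping stdout and splitting the CGI response -- can each be exercised in isolation. Here's a sketch (hypothetical helper names, with the save-and-restore made explicit):

```python
import sys
try:
    from StringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO  # Python 3


def capture_stdout(func):
    """Run func() with sys.stdout pointed at a buffer; return what it printed."""
    outbuf = StringIO()
    saved = sys.stdout
    sys.stdout = outbuf
    try:
        func()
    finally:
        sys.stdout = saved  # always undo the monkeypatch
    return outbuf.getvalue()


def parse_cgi_output(raw):
    """Split a CGI response into a WSGI-style header list and a body."""
    header, content = raw.split('\n\n', 1)
    headers = [tuple(x.split(': ', 1)) for x in header.split('\n')]
    return headers, content
```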

more than a feeling

Sun, 10 Sep 2006 14:43

I woke up this morning and the sun was gone / Turned on some music to start my day...

For a while, I've wanted to be woken up by anything other than my clock radio, so last night I peeked at banshee to see if it had a remote control... turns out it does!

Hacked up this script, shins:


banshee --enqueue /media/usbdisk/music0/Albums/The\ Shins/Chutes\ Too\ Narrow/01\ -\ Kissing\ the\ Lipless.ogg
banshee --play

and set it to run at 9am:

dawn% at 9am
warning: commands will be executed using /bin/sh
at> sh shins
at> <EOT>
job 6 at Sun Sep 10 09:00:00 2006

and this morning I was woken to the soft sounds of The Shins, just as planned. Great start to the day!

pylons gotchas

Sun, 03 Sep 2006 12:42

Benno was over, hacking on the LCA 2007 website with me yesterday, and we hit two gotchas, both I knew about but when I explained them to him they sounded silly.

c considered harmful.

c is a request-local global object that you can attach objects to, which is useful as a way of passing data from the controller to the template code. When you're calling a parameterised template you might not know at call time which arguments the template wants, but you can pass them all in on c. If you're using a pattern like a mixin CRUD class to generalise common data operations, the code that actually calls the template doesn't know what the object is, but the template it's calling does.

c has the magical property that it has overloaded __getattr__ to return an empty string if the attribute is not found. This is a mixed blessing; your templates can access an attribute that hasn't been attached and it'll mostly cope with it. (Problems happen when you try to access attributes of nonexistent attributes, and you get the confusing message 'str has no attribute X'.)

However, this means you hide bugs: you've forgotten to attach the object you want to c, yet your code runs fine; it's the users who find the problem after deployment, not you during development. Having a __getattr__ that throws exceptions means you find out about these problems a lot sooner.

I think both of these points show that c in general is a bad idea; you should make use of explicit args so that your template interface is clearly defined -- I haven't yet found a nice way of doing it that is as easy as or better than using c though.
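The behaviour (and the bug-hiding) is easy to demonstrate with a toy version of c (a hypothetical AttrSafe class, not Pylons' actual implementation):

```python
class AttrSafe(object):
    """Toy version of c: missing attributes come back as empty strings.
    __getattr__ is only consulted when normal lookup fails, so attributes
    you actually attached behave as usual."""
    def __getattr__(self, name):
        return ''

c = AttrSafe()
c.title = "Review queue"
print(c.title)  # the attribute you attached
print(c.titel)  # a typo'd name silently renders as nothing
```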

Myghty expressions that evaluate to False return empty strings.

We had a simple construct like so:

Count: <% len(c.review_collection) %>

which has the interesting property of evaluating to '' when c.review_collection is empty; len() returns 0, which is False.

This is pretty retarded; I suspect there's a shortcut along the lines of:

content = evaluate_fragment("len(c.review_collection)")
if content:
    output(content)

when these inline blocks are rendered; the if block clearly will fail to trigger when the inline block evaluates to 0, False, [], or {}. I can't think of a case where this is a good thing.

The workaround is to wrap the len() call in str(), so that the fragment doesn't evaluate to false.

<% str(len(c.review_collection)) %>
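The suspected shortcut is easy to model in plain Python (a hypothetical render_fragment, not Myghty's actual code), which makes both the failure mode and the str() workaround obvious:

```python
def render_fragment(value):
    """Model of the suspected inline-block renderer: falsy values vanish."""
    if value:
        return str(value)
    return ''

print(render_fragment(len([])))       # the 0 disappears entirely
print(render_fragment(str(len([]))))  # the workaround: '0' survives
```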

More gotchas as they come to hand.

LCA 2007 proposals so far

Thu, 31 Aug 2006 13:40

I'm having a browse of the submissions to LCA that we've got so far, and there's some cool stuff in there!

There's still a little over 2 weeks for you to get your proposals in, so don't hold back! I'm sure you all have something very exciting you want to talk about at the conference!

The more submissions we get, the rockin'er the conference will be!

Dad, I dug another hole...

Tue, 15 Aug 2006 13:53

I wrote another mock object, this time replacing urlopen from urllib2.

import urllib2
import StringIO
import unittest

class Dummy_urllib2(object):

    def install(cls):
        urllib2.urlopen = Dummy_urllib2.urlopen

    install = classmethod(install)

    def urlopen(cls, url, data=None):
        cls.url = url
        cls.data = data

        response = StringIO.StringIO("foo")

        def geturl():
            return url

        response.geturl = geturl

        def info():
            return {}

        response.info = info

        return response

    urlopen = classmethod(urlopen)

class TestDummy_urllib2(unittest.TestCase):
    def test_install(self):
        Dummy_urllib2.install()

        url = 'http://notfound.example.org'

        try:
            r = urllib2.urlopen(url)
        except urllib2.URLError, e:
            self.fail("URLError raised, Dummy_urllib2 not installed or failed: %s" % e)

        self.assertEqual(url, Dummy_urllib2.url)
        self.assertEqual(url, r.geturl())
        self.assertEqual(None, Dummy_urllib2.data)
        self.assertEqual("foo", r.read())

if __name__ == '__main__':
    unittest.main()
This time it comes with its own test suite. How meta!

linux.conf.au 2007 CFP updates

Mon, 14 Aug 2006 17:22

I'm exhausted; a weekend of hacking and deployment has left me a bit frazzled.

But it's OK! linux.conf.au 2007's website has had a facelift, thanks to Andy Fitzsimmons for the CSS tweaks, and our CFP submission process is a lot better.

If you haven't yet put in a proposal for a talk, a miniconf, a tutorial, then now's the time!
The CFP is still open!

a sudoku solver in Erlang

Mon, 31 Jul 2006 12:36

Matt will be pleased to hear that despite my claims that I'd have a productive weekend doing important things for LCA 2007 organisation, I spent the weekend programming.

Some bastard gave a great talk on Erlang at SLUG last Friday, and I really wanted to cut my teeth on it. So, hungover, I spent most of Saturday and a little bit of Sunday morning playing with the getting started guide and writing a Sudoku solver.

If you're curious, you can pull it down from my bzr repository:


boned :(

Sat, 29 Jul 2006 11:01

In an office full of sysadmins, you'd think one of them would have known what day it was yesterday :(

Pylons 0.9 is out!

Sat, 29 Jul 2006 10:52

Pylons 0.9 is out, with more rockin' features than ever :-)

Also, the new website is a big improvement over the last.

mock LDAP server object

Wed, 26 Jul 2006 19:04

Today's big achievement was an LDAP mock object, similar to the SMTP mock object found in paste.fixture. I was refactoring the sign in and sign out code of an in-house application that uses LDAP as an authentication store, and I needed to test that the logic of the controllers was correct. So, referring back to Paste's lovely fixture module, I came up with the following:

import ldap

class Dummy_ldap(object):

    def __init__(self, server):
        print "dummy ldap init"
        self.server = server

    def install(cls):
        ldap.initialize = cls

    install = classmethod(install)

    def simple_bind_s(self, dn, passwd):
        return True

    def search(self, base, scope, search_filter):
        self.base = base
        self.filter = search_filter
        self.results = [(ldap.RES_SEARCH_ENTRY, [('dn', "cn=test,%s" % self.base),
                                                 ('mail', ['test'])]),
                        (ldap.RES_SEARCH_RESULT, [])]
        self.counter = 0
        return 1

    def result(self, rid, number):
        r = self.results[self.counter]
        self.counter += 1
        return r

A bit hackish, yes, but I'm not trying to reimplement LDAP here, I just want to trap the calls I use. Obviously a little bit of work is needed to, say, disallow the bind, or throw an exception, but these are trivial extensions.

Just as in Dummy_smtplib, you call the classmethod install to set it up (i.e. monkeypatch ldap.initialize) and you get to trap how it behaves.
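The install pattern doesn't depend on anything LDAP-specific; stripped to its essence it looks like this (hypothetical names, using a stand-in module object so the example doesn't need python-ldap installed):

```python
class FakeModule(object):
    pass

ldap = FakeModule()  # stand-in for `import ldap`


class Dummy_conn(object):
    """Minimal fake connection: ldap.initialize() returns one of these."""
    def __init__(self, server):
        self.server = server

    @classmethod
    def install(cls):
        # monkeypatch: ldap.initialize now constructs the dummy
        ldap.initialize = cls


Dummy_conn.install()
conn = ldap.initialize("ldap://localhost")
```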

class TestAccountController(ControllerTest):

    def test_signin_signout(self):

        # ... do stuff


paste.fixture's Dummy_smtplib

Thu, 20 Jul 2006 21:21

I'm working on two webapps that need to send email to users, and in both, the user is expected to click on a URL to confirm their registration. A common enough idiom for website accounts.

As a disciple of the cult of test-driven development, I want to be able to make a test that generates that email, inspects the contents for the URL, visits that generated URL, and then checks that the registration is completed.

Michael K, who previously suggested other abuses of Python's dynamic nature, reminded me a few months ago that you could monkeypatch an imported library with your own, and it'd be preserved throughout the "address space" of the running Python program.

I'd not really played with it, and had been putting off writing a test for email because I didn't really understand what I wanted. The other night at DebSIG, I asked (complained?) again about it, and he said "Hey, paste.fixture already does it!" I'd said I knew, but it had no documentation, and no-one on the paste users list had responded to my request for examples. He gave me a curt "Read the fucking source" response (in a much nicer way of course) and I thought, he's right! Back in the day I used to read library source code in order to work out poorly documented APIs, why now do I rely on clear documentation so much? I should just dig in and write some example code to test it out, and JFDI.

Enough of the backstory.

Here's a quick guide to setting up an email sender test, using paste.fixture around your Pylons application.

Firstly, in your controller, you have something that sends email, like so:

import smtplib

from app.lib.base import BaseController, m

class FooController(BaseController):
    def index(self):
        """Do something and send email while you're at it"""
        s = smtplib.SMTP("localhost")
        # example addresses and message body; the important part is the
        # activation URL, which the test will go looking for
        message = ("Subject: activate your account\n\n"
                   "Hey, this is a message\n"
                   "http://localhost/foo/activate/12345\n")
        s.sendmail("noreply@example.org", ["user@example.org"], message)
        s.quit()
        m.write("email sent")

    def activate(self, id):
        m.write("awesome, activating %s" % id)

Pretty basic: if we visit /foo then we send an email, and if we visit /foo/activate/N we inform the visitor of the activation.

The test is pretty simple too:

import re
import unittest

from paste.fixture import Dummy_smtplib

class TestFooController(unittest.TestCase):
    def test_foo_activation(self):

You just call the classmethod install on Dummy_smtplib to set it up, which does some magic behind the scenes (really it just replaces smtplib.SMTP with itself)

        Dummy_smtplib.install()

then run through the process you want to test

        # get the start page
        res = self.app.get('/foo')

and now we check that the message was sent, and its contents

        self.failIfEqual(None, Dummy_smtplib.existing, "no message sent")

        match = re.match(r'^.*/foo/activate/([^ ]+)', Dummy_smtplib.existing.message)
        self.failIfEqual(None, match)

        # visit the URL
        res = self.app.get('/foo/activate/%s' % match.group(1))

and finally, test the result of the activation.

        res.mustcontain('awesome, activating')

You've also got to clean up, reset the dummy SMTP library for next time (you'll get an exception thrown if you don't, to remind you).

        Dummy_smtplib.existing.reset()

If you do any database stuff, then the times to check the status of the data model are just before the activation URL is visited, and again afterwards. I keep the model empty for each test, so I can pull out all the records and make sure that there's only one afterwards, and that it has the right attributes before and after.

Pretty easy stuff.
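The whole trick behind Dummy_smtplib fits in a few lines; a hypothetical RecordingSMTP (not paste.fixture's actual class) shows the shape of it:

```python
import smtplib


class RecordingSMTP(object):
    """Records the last message sent instead of talking to a mail server."""
    existing = None

    def __init__(self, host):
        self.host = host
        RecordingSMTP.existing = self

    def sendmail(self, from_addr, to_addrs, message):
        self.from_addr = from_addr
        self.to_addrs = to_addrs
        self.message = message

    def quit(self):
        pass

    @classmethod
    def install(cls):
        # monkeypatch: anyone who does smtplib.SMTP(...) gets us instead
        smtplib.SMTP = cls


RecordingSMTP.install()
s = smtplib.SMTP("localhost")
s.sendmail("a@example.org", ["b@example.org"], "hi there")
s.quit()
```

Because the import of smtplib is shared across the running program, the application code under test picks up the replacement without any changes.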

lca2007 CFP open!

Tue, 18 Jul 2006 19:49

I was on a jet from Austin to London when it happened, having spent the hours beforehand in a flat in Austin, nursing a hangover and putting the final touches on the website. So, though I missed the chance to announce it at the time, I'll take this opportunity to blog about it now!

The LCA 2007 CFP is open, so submit your proposal for a talk, miniconf, etc now! (Or in reality, start writing your proposal now, so that you can submit it before the CFP closes ;-)

We're looking forward to your submissions!

pylons, paste, and wsgi

Mon, 17 Jul 2006 14:20

I scribbled this down on our whiteboard last Friday, trying to explain how Pylons and Paste fit together. Previously jdub and Lindsay had asked me similar questions. Until Friday, I wasn't even sure myself.

pylons and paste stack diagram

The first thing to note is that Paste is not a framework or single library, it's a collection of components that by themselves don't do a lot, but with their powers combined form a set of useful and sometimes essential tools for building a web application in Python.

Paste implements an interface known as WSGI, aka the Web Server Gateway Interface. It's defined in PEP 333. Basically WSGI describes a Chain of Command design pattern; each piece of a WSGI application takes a request, and either acts on that request or passes it along the chain. The interface described by WSGI means you can plug WSGI apps (or, as Pylons calls them, /middleware/) together in any order you like.
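A minimal, runnable illustration of that chain (a hypothetical PathHandler middleware; any WSGI-compliant pieces compose the same way):

```python
def not_found(environ, start_response):
    """The bottom of the chain: nothing above handled the request."""
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return ['not found']


class PathHandler(object):
    """Middleware: answer requests for one path, pass everything else down."""
    def __init__(self, app, path, body):
        self.app = app
        self.path = path
        self.body = body

    def __call__(self, environ, start_response):
        if environ.get('PATH_INFO') == self.path:
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [self.body]
        # not ours; hand the request along the chain
        return self.app(environ, start_response)


# plug the pieces together in any order you like
stack = PathHandler(PathHandler(not_found, '/bye', 'goodbye'), '/hi', 'hello')
```

Because every layer speaks the same two-argument protocol, reordering the stack is just reordering the constructor calls.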

Why is this useful? Well, it means you can take an off-the-shelf authentication handler to cope with 403 and 401 responses and take care of logins. One would only need to say "this is how you authenticate someone" and "this is how you ask the user for their password." Other things are possible; Pylons ships with an ultra-sexy 500 handler that puts you in a DHTML debugger, complete with traceback and Python interpreter. (Of course such a tool is a giant security hole so it is easily turned off in production environments.)

So, that's Paste. There's a few special cases in there, though: PasteScript and PasteDeploy. They're special in that they tend to be at the bottom of the stack -- they're specifically for launching WSGI applications, configuration of the application (e.g. the authentication details alluded to above), and connecting to the application (e.g. direct HTTP, FastCGI, and other connectors). I suspect that my diagram above doesn't lend itself well to describing how PasteScript and PasteDeploy really work; it's still a bit of dark magic to me. I hope someone else will build on this article with their own that corrects my errors and clears up the grey areas.

In a Pylons app, you tend not to notice Paste, except when deploying (because you tend to run the command paster serve to launch a development environment). Pylons itself is mostly just glue. It's a thin veil of a framework over the top of some very powerful supporting libraries but presents them in a convenient and well defined way.

When you create a Pylons app, you get your paste middleware built for you, and then the entry point for your app is created as a WSGI application too. So it sits on top of the stack, taking in requests, and sending out responses. Your app can define its own middleware, too, so you have a lot of control over what happens between your app and the browser.

The main components of a Pylons app are:

  • A route mapper, by default Routes. The route mapper takes in URLs from the request passed into the app, and maps that URL to a controller object and method call. (If you've used RoR then you probably are familiar with this already.)

  • A templating engine, by default Myghty. The templating engine generates the view presented to the browser.

  • A data model. Pylons doesn't prefer any method of data model, it just makes available a model module within which you can define your own data model. I use SQLAlchemy as an ORM because it is very powerful and nicely suited to working with existing schemas. It maps between the data model presented to the application and the database schema itself.

Pylons lets you swap out any of these components with your own, if you desire. I find Routes and Myghty to be powerful and flexible and friendly enough that there's no reason to want anything else.

Your controller objects, like any MVC pattern, coordinate between the model and the view. An action performed on a controller retrieves some data from the model, possibly altering it, and renders that data using the template engine.

There are other parts, other libraries that you'll see in a Pylons app, that aren't represented here. WebHelpers is a library of convenience functions used in the template engine, for generating common HTML and JavaScript.
paste.fixture is a web app test framework that takes advantage of the common interface of WSGI to allow one to test their application without requiring a full web server and socket handling.
FormEncode handles form validation, useful from within a controller object. These are but to name a few.

Unfortunately there is a sore need for overviews like this one in the Paste and Pylons community; as stated earlier I didn't fully understand the relationships myself until I came up with this diagram. Hopefully then, dear reader, you have a better insight into how this collection of names fit together, and can avoid the steep learning curve :-)

TDD promotes good health

Thu, 01 Jun 2006 16:01

There's an important advantage to Test Driven Development that I don't think was covered on list or by Rob at his talk.

By having a test suite, you can code after a heavy liquid lunch and be sure that you're not decreasing the quality of existing code. It makes it easier to focus on a specific task and write code to solve that problem. Having something do all the verification work for you is a massive bonus, because obviously the side-effects of a pub lunch are that you are easily distracted and lack the willingness to focus on the task at hand. Test suites lower the barrier of entry to getting work done.

Who'da thought that best practices would also be best for drinking practice?

All content Copyright © 2002-2005 Jamie Wilkinson. Entries in this blog are licensed under the Creative Commons Attribution-Sharealike v2 License.