A couple of days ago we faced quite a nasty issue: a Django-based site with tens of thousands of hits per day started throwing NoReverseMatch errors with no discernible pattern. Basically, any URL on the site could turn up in the error emails. These emails were flowing in at a very worrisome rate, and user complaints started to arrive as well.
I'm sharing our findings so that anyone who runs into the same trouble can find this information on Google.
All the credit for discovering the roots of the problem goes to Vadim
Shender whom you might know for his OCaml posts if you read this blog.
If you are using Django, by now you are probably thinking "swallowed ImportError". If only it were that simple! Without going into too much detail, here's what we eventually found: django/core/urlresolvers.py is not thread-safe, and the server was using an Apache+mod_wsgi config with multiple threads per process. Once you know this is a thread-safety problem, it's trivial to find the relevant info, for instance: http://code.djangoproject.com/wiki/DjangoSpecifications/Core/Threading. The conclusion: Django is known to have thread-safety issues, so if you are using a multithreaded config, be prepared to see some magic under load and react to it. Here's what can be done:
- Switch to a prefork config. This is safest, but you probably won't
get as much performance.
- Patch your copy of Django. We shall post a patch for this
particular issue here.
- Increase the maximum number of requests per process. We haven't tried that yet, but it should help, because the problem is in initialization, which is only done once per new process (see the details below).
- Other suggestions are very welcome.
For the record: the site I mentioned uses Django SVN revision 7153. I just looked at trunk head and it seems to have the same problem, but this needs testing. We are going to submit a ticket and probably a patch, which we'll also attach to this post.
For the curious, here's a more detailed description. The code that causes the problem:
if not self._reverse_dict and hasattr(self.urlconf_module, 'urlpatterns'):
    for pattern in reversed(self.urlconf_module.urlpatterns):
        if isinstance(pattern, RegexURLResolver):
            for key, value in pattern.reverse_dict.iteritems():
                self._reverse_dict[key] = (pattern,) + value
        else:
            self._reverse_dict[pattern.callback] = (pattern,)
            self._reverse_dict[pattern.name] = (pattern,)
This is a method of the RegexURLResolver class, instances of which are shared globally. So if one thread is inside this method, filling the dict, while another thread requests it, the second thread will see a partially populated dict and has every chance of not finding the view it is looking for. Since the initialization of the dict only happens once per Python process, increasing the process lifetime (the number of requests it serves) should reduce the frequency of these failures.
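Just to illustrate the idea (the actual patch we submit may look different): the simplest fix along these lines is to build the mapping in a local dict and publish it with a single assignment, so concurrent readers never observe a half-filled dict:
def _get_reverse_dict(self):
    if not self._reverse_dict and hasattr(self.urlconf_module, 'urlpatterns'):
        # Build the mapping locally first...
        reverse_dict = {}
        for pattern in reversed(self.urlconf_module.urlpatterns):
            if isinstance(pattern, RegexURLResolver):
                for key, value in pattern.reverse_dict.iteritems():
                    reverse_dict[key] = (pattern,) + value
            else:
                reverse_dict[pattern.callback] = (pattern,)
                reverse_dict[pattern.name] = (pattern,)
        # ...and only then make it visible to other threads.
        self._reverse_dict = reverse_dict
    return self._reverse_dict
Two threads may still both do the work, but neither will ever see an incomplete dict.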
Posted by Alexander Tereshkin on 2008-10-30 14:29:25
In the course of my recent work I've been investigating the Computer Vision field, and more specifically, the object detection and tracking problem.
Object detection is a classic problem in computer vision, with
applications in many areas. In this post I'm going to share my
findings about the latest research advances in this area.
Anyone who is familiar with the field, even by hearsay, knows the great paper “Robust Real-Time Face Detection” by Paul Viola and Michael J. Jones.
Following their approach it is possible to detect faces at 15 frames per second on a conventional Intel Pentium III (as they did). Our task is to achieve the same results on a conventional embedded processor. Also, their learning algorithm may run for days or even weeks; our goal is to try to reduce this time. There are three main contributions in their paper: the integral image representation, a simple and efficient (for that time) classifier, and an approach for combining increasingly complex classifiers into a cascade.
The integral image allows for extremely fast feature evaluation. Viola and Jones used a set of Haar-like features. Using the integral image, any Haar-like feature can be computed at any scale or location in constant time. Haar features are similar to Haar wavelets. The feature set considers rectangular regions of the image and sums up the pixels in those regions; the results are used to classify images. Haar wavelets are a natural set of basis functions which encode differences in average intensities between different regions.
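As a quick illustration of why this is cheap (my own toy code, not from the paper), an integral image is just a 2-D cumulative sum, and the sum over any rectangle then takes four lookups:
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of all pixels above and to the left of (y, x), inclusive
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    # Sum of the w-by-h rectangle with top-left corner (x, y),
    # in constant time regardless of the rectangle size.
    a = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    d = ii[y + h - 1, x + w - 1]
    return d - b - c + a

# A two-rectangle Haar-like feature is then just a difference of two sums:
img = np.random.randint(0, 256, (24, 24))
ii = integral_image(img)
feature_value = rect_sum(ii, 4, 4, 8, 4) - rect_sum(ii, 4, 8, 8, 4)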
Face detection is a rare-event binary classification problem, in the sense that among millions of sub-windows, only a few contain faces. The classifier they introduce filters out over 50% of the image while preserving 99% of the faces; as we will see later, it is possible to go far beyond these limits. This classifier is built using the AdaBoost learning algorithm (http://en.wikipedia.org/wiki/AdaBoost) to select a small number of critical visual features from a very large set of potential features. The basic idea of AdaBoost is to train an ensemble of M weak classifiers combined with voting coefficients. This is done by training them one at a time, for m from 1 to M. At the m-th stage the training examples are weighted differently, using a distribution based on the performance of the previous stage: the weights of the wrongly classified examples are increased, so that subsequent weak classifiers focus more on them.
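For concreteness, here is roughly what that loop looks like (a plain discrete-AdaBoost sketch I wrote for illustration; train_weak_classifier is an assumed helper that fits one thresholded feature to the weighted examples):
import numpy as np

def adaboost(X, y, train_weak_classifier, M):
    # y is in {-1, +1}; train_weak_classifier(X, y, w) returns a function
    # h with h(X) in {-1, +1}, fitted to the weighted examples.
    n = len(y)
    w = np.full(n, 1.0 / n)                      # start with uniform weights
    ensemble = []
    for m in range(M):
        h = train_weak_classifier(X, y, w)
        pred = h(X)
        err = max(np.sum(w[pred != y]), 1e-12)   # weighted error of this stage
        alpha = 0.5 * np.log((1.0 - err) / err)  # its voting coefficient
        # Misclassified examples get larger weights, so the next weak
        # classifier concentrates on them.
        w = w * np.exp(-alpha * y * pred)
        w = w / w.sum()
        ensemble.append((alpha, h))
    # The strong classifier is a weighted vote of the weak ones.
    return lambda X: np.sign(sum(a * h(X) for a, h in ensemble))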
Each feature has a weak classifier associated with it, which classifies a sub-window by first evaluating the feature on the sub-window (using the integral image) and then thresholding the value with a properly chosen threshold. The weak classifiers are then combined into a “cascade” which allows useless regions to be discarded quickly. Those sub-windows which are not rejected by the initial classifier of the cascade are processed by a sequence of classifiers, each slightly more complex than the last.
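The cascade itself is almost trivial to express; here is a sketch (stages is assumed to be a list of (score, threshold) pairs, ordered from the cheapest classifier to the most complex one):
def cascade_classify(stages, window):
    # Reject a sub-window as soon as any stage scores it below its
    # threshold, so most of the image is discarded by the cheap early
    # stages and only promising windows reach the expensive ones.
    for score, threshold in stages:
        if score(window) < threshold:
            return False      # definitely not a face
    return True               # survived every stage: report a detection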
That's it for Viola and Jones' paper. Now let us examine some ideas for improving the performance of the learning algorithm and the detector. Minh-Tri Pham and Tat-Jen Cham brought real new insight into the aforementioned method with their paper “Fast training and selection of Haar features using statistics in boosting-based face detection”. Their method is extremely fast and comparable in accuracy. They point out that traditional techniques for training a weak classifier run in time proportional to the product of N (the number of examples) and T (the number of features), and they present a novel approach that trains a weak classifier in time that depends on d, the number of pixels of the probed image sub-window, rather than on T, by using only the statistics of the weighted input data. Did you see it?! Rather than trying to reduce either N or T, they break up the NT factor. I do not want to reprint that exciting paper here because it contains a lot of math, so just go and read it. By the way, this is currently the world's fastest method for training a face detector, so it is well worth the reader's attention. If you need help with the statistics, I strongly advise reading the first chapters of “Pattern Classification” by R. O. Duda, P. E. Hart, and D. G. Stork.
Let us look in another direction. The main limitation of the aforementioned detectors is the need to gather a representative training set. To overcome that limitation we can resort to the help of H. Grabner, P. M. Roth, and H. Bischof, who have published an “inspiring” piece of work: “Is Pedestrian Detection Really a Hard Task?”
The basic idea is to train a separate classifier for each image location, which only has to discriminate the object from the background at that specific location. Since the complexity is reduced, “we can use a simple update strategy that requires only a few positive samples and is stable by design”. Some limitations follow from this: the approach needs fixed cameras that always look at the same scene. It relies on on-line unsupervised learning methods, which tend to make wrong updates that reduce the performance of the detector: the detector might start to drift and end up in an unreliable state. There are variations of the on-line learning algorithm which suffer from the drifting problem. In a self-training framework, the current classifier evaluates an input sample and predicts a label which is then directly used to update the classifier itself. In a co-training framework, two classifiers are trained in parallel on different views of the data, and the confidently predicted labels of each are used to update the other classifier. In a third variant, the results obtained by the classifier are verified by an analyzer, and only if the obtained labels are confident are the samples used for updating the classifier.
Once again, the authors overcome these problems and show that neither the false positive rate increases nor the recall decreases when the system runs for a longer period of time. Their method does not rely on the classifier's own response when delivering updates. In the practical implementation, a fixed, highly overlapping grid (both in location and scale) is placed over the image, and each grid element corresponds to a classifier which is responsible only for its underlying image patch.
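A toy sketch of that layout (again my own illustration; a classifier object with predict/update methods is an assumed stand-in for the on-line boosted classifier used in the paper):
class GridDetector:
    # One classifier per highly overlapping (location, scale) cell; each
    # one only has to tell the object apart from the background of its
    # own patch.
    def __init__(self, cells, make_classifier):
        # cells: list of (x, y, w, h) patches covering the image.
        self.cells = [(rect, make_classifier()) for rect in cells]

    def detect(self, frame):
        hits = []
        for (x, y, w, h), clf in self.cells:
            if clf.predict(frame[y:y + h, x:x + w]):
                hits.append((x, y, w, h))
        return hits

    def update(self, frame):
        # Unsupervised per-cell update with the current patch; the paper's
        # point is that this simple strategy can be made stable by design.
        for (x, y, w, h), clf in self.cells:
            clf.update(frame[y:y + h, x:x + w])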
To be continued...
P. Viola, M. Jones. Robust Real-Time Face Detection. 2004
M.-T. Pham, T.-J. Cham. Fast training and selection of Haar features using statistics in boosting-based face detection. 2007
R. O. Duda, P. E. Hart, D. G. Stork. Pattern Classification.
H. Grabner, P. M. Roth, H. Bischof. Is Pedestrian Detection Really a Hard Task? 2007
List comprehension is a syntactic construct for creating lists easily, which allows writing more readable code than using the map and filter functions. It is analogous to set comprehension in mathematics.
In Haskell, list comprehensions combined with the lazy nature of lists make it easy to handle infinite lists, but OCaml doesn't provide such functionality.
Consider the following Haskell code:
[ 2 * x | x <- [0 .. 100], x ^ 2 > 3]
The corresponding OCaml code could look like this:
let rec interval a b =
  if a > b then [] else a :: interval (a + 1) b ;;
List.map
  (fun x -> 2 * x)
  (List.filter (fun x -> x * x > 3) (interval 0 100))
But even in this case, because of the strict nature of OCaml, we couldn't easily handle infinite lists:
[ 2 * x | x <- [0 .. ], x ^ 2 > 3]
Surely, it is possible to handle infinite lists by implementing an appropriate type on your own, e.g.:
type 'a node_t =
  | Nil
  | Cons of 'a * 'a t
and 'a t = 'a node_t lazy_t
and providing corresponding functions to work with it.
But the lack of syntactic sugar for lazy lists makes working with them somewhat tedious.
The code will be bulky even for the simple example above, not to mention something like this:
primes = sieve [2..]
sieve (p:xs) = p : sieve [x|x <- xs, x `mod` p > 0]
To simplify the usage of lazy lists in OCaml I've implemented a lazy list library and a syntax extension that allow the above examples to be written as follows:
[% 2 * x | x <- [% 0 .. ]; x * x > 3 ]
let primes =
let rec sieve = function
p %: xs -> p %: sieve [% x | x <- xs; x mod p > 0 ]
in sieve [% 2 ..]
With this syntax extension lazy lists may be written as follows:
[% ] (* an empty lazy list *)
[% 42 ] (* a lazy list containing single element 42 *)
[% 1; 2; 3; 4; 5] (* a lazy list containing numbers from 1 to 5 *)
[% 1 .. 5 ] (* the same lazy list *)
[% 2; 4 .. 10 ] (* a lazy list containing even numbers from 1 to 10 *)
[% 1; 3 .. 9; 10; 12 .. 20 ]
(* a lazy list containing odd numbers from 1 to 9 and even numbers from 10 to 20 *)
[% 1 .. ] (* a lazy list containing positive integers *)
[% 2; 4 .. ] (* a lazy list containing all even positive integers *)
Cons in the original syntax is written in the following way:
1 %: [% 2; 3 ] (* a lazy list containing 1, 2, 3 *)
1 %: 2 %: 3 %: [% ] (* the same lazy list *)
In revised syntax:
[% 1 :: [% 2; 3 ] ] (* a lazy list containing 1, 2, 3 again *)
[% 1 :: [% 2 :: [% 3 :: [% ] ] ] ] (* and again *)
[% 1; 2 :: [% 3 ] ] (* the same lazy list *)
[% 1; 2; 3 :: [% ] ] (* the same lazy list *)
A list comprehension consists of an expression, generators, guards and let-bindings.
[% x + 1 | x <- [% 1 .. 10 ] ] (* a lazy list containing integers from 2 to 11 *)
Multiple generators are separated by semicolons:
[% x, y | x <- [% 1 .. 4 ]; y <- [% 7; 8 ] ]
Boolean guards allow filtering:
[% x, y, z | x <- [% 1 .. 20 ]; y <- [% x .. 20 ];
z <- [% y .. 20 ]; x * x + y * y = z * z]
[% i, j | i <- [% 1 .. ]; j <- [% 1 .. i - 1 ]; gcd i j = 1 ]
An example of a let-binding:
[% i, j | i <- [% 1 .. ]; let k = i * i; j <- [% 1 .. k ] ]
Pattern matching allows arbitrary decomposition of a lazy list:
match [% (1, [2; 3]); (4, [5; 6]) ] with
(x, [y; 3]) %: tl -> x + y
| _ -> 0
In revised syntax:
match [% (1, [2; 3]); (4, [5; 6]) ] with
[ [% (x, [y; 3]) :: tl ] -> x + y
| _ -> 0 ] ;
The library can be downloaded here.
Any comments are very welcome.
In many situations in functional programming it's handy to be able to produce "specialized" functions by fixing the values of some of the arguments of a more generic function. This is usually referred to as partial application. Here's a simple
example in OCaml:
# (+) 1 ;;
- : int -> int = <fun>
# let f = (+) 1 ;;
val f : int -> int = <fun>
# f 2 ;;
- : int = 3
Partial application allows writing more elegant code in some cases:
# List.map ((+) 1) [0; 1; 2; 3; 4] ;;
- : int list = [1; 2; 3; 4; 5]
# let f c a b = c * (a + b) ;;
val f : int -> int -> int -> int = <fun>
# List.map2 (f 2) [0; 1; 2] [1; 2; 3] ;;
- : int list = [2; 6; 10]
These are equivalent to the more verbose:
List.map (fun x -> 1 + x) [0; 1; 2; 3; 4] ;;
List.map2 (fun x y -> f 2 x y) [0; 1; 2] [1; 2; 3]
But the support for partial application is somewhat limited - there's no syntax in OCaml for fixing an arbitrary subset of arguments, only some number of leftmost arguments (upd: I mean positional arguments here, not keyword arguments). This limitation prevents writing such elegant code in, for example, the following case: we want to subtract 5 from all elements of a list (i.e. we want to fix the second argument of (-)). The code has to look like this:
# List.map (fun x -> (-) x 5) [1; 2; 3; 4; 5] ;;
- : int list = [-4; -3; -2; -1; 0]
To overcome this limitation, the pa_papply syntax extension has been created:
open Camlp4
module Id : Sig.Id = struct
  let name = "Partial apply syntax extension"
  let version = "0.0.1"
end
module Make (Syn : Sig.Camlp4Syntax) = struct
  include Syn
  EXTEND Gram
    expr: LEVEL "apply" (* LEFTA *)
      [ [ e = expr; stubs = LIST1 "_" ->
            (* positions of the skipped ("_") arguments *)
            let il =
              let rec enum i = function [] -> [] | _ :: t -> i :: enum (i + 1) t
              in enum 0 stubs in
            (* a fresh variable name for every skipped argument *)
            let xl = List.map (fun i -> "__papply_x_" ^ string_of_int i) il in
            (* apply the original expression to the fresh variables *)
            let eapplyx =
              List.fold_left (fun accu xi -> <:expr< $accu$ $lid:xi$ >>) e xl in
            (* wrap it in one lambda per skipped argument ... *)
            let funx =
              List.fold_right
                (fun xi rest -> <:expr< fun $lid:xi$ -> $rest$ >>)
                xl <:expr< $eapplyx$ __papply_z >>
            (* ... plus a leading lambda for the argument following the stubs *)
            in <:expr< fun __papply_z -> $funx$ >> ] ]
    ;
  END
end
module M = Register.OCamlSyntaxExtension (Id) (Make)
This extension is compiled by running the following command:
# ocamlc -I +camlp4 -pp camlp4orf -c pa_papply.ml
And the command:
# ocamlc -pp "camlp4o -I . pa_papply.cmo" -c test.ml
compiles, for instance, a file test.ml written in our extended syntax.
pa_papply makes it possible to rewrite the previous example in the following way:
# List.map ((-) _ 5) [1; 2; 3; 4; 5] ;;
- : int list = [-4; -3; -2; -1; 0]
It's possible to skip any number of leftmost arguments before partial application:
# let f x y z = (x + y) * z ;;
val f : int -> int -> int -> int = <fun>
# f 1 2 3 ;;
- : int = 9
# f _ 2 1 3 ;;
- : int = 9
# f 1 _ 3 2 ;;
- : int = 9
# f _ _ 3 1 2 ;;
- : int = 9
Comments, suggestions and bug reports are welcome.
It's probably quite an uncommon requirement for a web site to protect its users from itself. Believe it or not, I'm now working on a project that has exactly this requirement.
More on why it's in place later; right now I just want to share one result that came out of our attempts to fulfill it: a complete RSA implementation in JavaScript. At its core is the well-known implementation by Dave Shapiro, but it lacked one important feature to be useful for us: key generation.
I guess at this point you are completely puzzled why on earth someone would need to generate RSA keys in js. But I'll leave this as an exercise for the reader.
And once you get into key generation you figure out how poorly suited JavaScript is for the task. First, it has no long (arbitrary-precision) arithmetic, so it has to be implemented on top of the basic built-in means. Luckily, this was a part of Dave's package.
But then you get to the second problem: it's dead slow, and worse than that, Firefox and IE stall completely for minutes before you can generate a 512-bit key (respect to the Opera team for not making their browser freeze like that). We have no idea yet how to make it reasonably fast, but at least the browser freeze can (and should) be dealt with. The code that handles it is in
"ugly_hacks/ensure_response.js". The basic idea behind it is using
continuation-passing style and releasing control from within inner
loops. What makes it an ugly hack is the way we are releasing
control (and also that it pollutes global namespace :) ). Since we
wanted to keep the "responsiveness routines" decoupled from the logic
as much as possible, we had to use the only "context-independent" way
of unwinding the stack we know: exceptions. Not only this is ugly,
it's also extremely slow, but this is the price we were going to
pay. As for capturing continuations, there seems to be no way to
abstract this in js, so had to change the main code to make them
explicit (e.g. close the loops over the counters).
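To make the trick concrete, here is a rough sketch of the same idea in Python (my own illustration, not the actual ensure_response.js code): the loop counters are explicit arguments so the current state can be captured in a continuation, a special exception unwinds the stack, and a top-level driver resumes the continuation (in the browser it would be rescheduled from a timer so the UI stays responsive):
class YieldControl(Exception):
    # Raised from deep inside a computation to unwind the whole stack;
    # carries a zero-argument continuation that resumes the work.
    def __init__(self, resume):
        self.resume = resume

def count_primes(limit, n=2, found=0, budget=10000):
    # Loop state lives in explicit arguments ("close the loops over the
    # counters"), so it can be packed into a continuation at any point.
    steps = 0
    while n < limit:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            found += 1
        n += 1
        steps += 1
        if steps >= budget:
            # Unwind via an exception; the driver decides when to resume.
            raise YieldControl(lambda n=n, found=found:
                               count_primes(limit, n, found, budget))
    return found

def drive(thunk):
    # Top-level driver: keep resuming until the computation completes.
    # In the browser, this is where control would be released to the UI
    # before calling the continuation again.
    while True:
        try:
            return thunk()
        except YieldControl as y:
            thunk = y.resume

print(drive(lambda: count_primes(50000)))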
Last but not least, randomization. If you need to generate really
secure keys, Math.random() isn't of much help. The package contains a
secure random number generator (core/randpool.js) and a routine to
seed it with mouse gestures and keystrokes (core/entropy.js).
The examples are also available here: