A windowed version of the Nearest Neighbour (WNN) classifier for images is described. While its construction is inspired by the architecture of Artificial Neural Networks, the underlying theoretical framework is based on approximation theory. We illustrate WNN on the MNIST and EMNIST datasets of images of handwritten digits. To calibrate the parameters of WNN, we first study it on MNIST. We then apply WNN with these parameters to EMNIST, obtaining an error rate of 0.76%, which significantly outperforms traditional classification methods such as Support Vector Machines. By expanding the training set, the error rate is reduced further to 0.42%.
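For readers unfamiliar with the baseline, the following is a minimal sketch of plain nearest-neighbour classification on toy vectors. It is purely illustrative and is not the paper's WNN method, which additionally introduces windowing informed by approximation theory; the function name and toy data are inventions for this example.

```python
def nearest_neighbour(train, labels, query):
    """Return the label of the training vector closest to `query`
    in squared Euclidean distance (plain 1-NN, no windowing)."""
    best_i = min(
        range(len(train)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], query)),
    )
    return labels[best_i]

# Toy 2-D data: two well-separated clusters with labels "a" and "b".
train = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
labels = ["a", "a", "b", "b"]

print(nearest_neighbour(train, labels, (0.2, 0.1)))  # -> a
print(nearest_neighbour(train, labels, (4.8, 5.1)))  # -> b
```

On image data such as MNIST, each training vector would be a flattened image and the query a test image; WNN refines this baseline rather than replacing it.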