This report introduces a robust contour descriptor for view-based object recognition. In recent years, great progress has been made in view-based object recognition, mainly due to the introduction of texture-based features such as SIFT and MSER. Although these are remarkably successful for textured objects, they have problems with man-made objects that have little or no texture. For such objects, either explicit geometrical models or contour- and shading-based features are also needed. We hope that the contour descriptor presented here can be combined with texture-based features to obtain object recognition systems that work in a wider range of situations. Each detected contour is described as a sequence of line and ellipse segments, both of which have well-defined geometrical transformations to other views. The feature detector is also quite fast, mainly because it first detects chains of contour points; these chains are then split into line segments, which are subsequently either kept, grouped into ellipses, or discarded. We demonstrate the robustness of the feature detector with a repeatability test under general homography transformations of a planar scene. The test shows that using ellipse segments instead of lines, where appropriate, improves repeatability. We also apply the features in a robotic setting where object appearances are learned by manipulating the objects.
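To make the chain-to-segment pipeline mentioned above concrete, the following is a minimal sketch of that processing order using generic OpenCV primitives (Canny, findContours, approxPolyDP, fitEllipse) as stand-ins. The thresholds, the per-chain ellipse fit, and the helper name are assumptions for illustration only; they do not reproduce the detector described in the report, which groups adjacent line segments into ellipses rather than fitting whole chains.

```python
import cv2


def detect_contour_segments(image, canny_lo=50, canny_hi=150,
                            approx_eps=2.0, min_chain_len=20):
    """Sketch of the described order: edge chains -> line segments -> ellipses.

    All parameter values are illustrative placeholders, not values from
    the report. Assumes OpenCV >= 4 (findContours returns two values).
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)

    # 1. Detect chains of connected contour points.
    chains, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

    lines, ellipses = [], []
    for chain in chains:
        if len(chain) < min_chain_len:
            continue  # discard short chains

        # 2. Split each chain into straight line segments.
        poly = cv2.approxPolyDP(chain, approx_eps, closed=False)
        segments = [(poly[i][0], poly[i + 1][0]) for i in range(len(poly) - 1)]
        lines.extend(segments)

        # 3. Keep line segments, or describe curved chains with an ellipse.
        #    (Simplification: here a whole chain is fitted when it has enough
        #    points; the report instead groups adjacent line segments.)
        if len(chain) >= 5:
            ellipses.append(cv2.fitEllipse(chain))

    return lines, ellipses
```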