Hierarchical policies are a key ingredient of intelligent behavior, expressing the different levels of abstraction involved in the solution of a problem. Learning hierarchical policies, however, remains a challenge, as no general learning principles have been identified for this purpose, despite the broad interest and vast literature in both model-free reinforcement learning and model-based planning. In this work, we introduce a principled method for learning hierarchical policies over classical planning domains from small instances, with no supervision. The method is based on learning to decompose problems into subproblems of lower complexity, as measured by their width. Problems and subproblems are captured by means of sketch rules, and the scheme for reducing the width of sketch rules is applied iteratively until the final sketch rules have zero width and encode a general policy. We evaluate the learning method on a number of classical planning domains, analyze the resulting hierarchical policies, and prove their properties. We also show that learning hierarchical policies by learning and refining sketches iteratively is often more efficient than learning flat general policies in one shot.
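To make the iterative scheme concrete, the following is a minimal conceptual sketch in Python of the refinement loop described above. The helpers learn_sketch and rule_width, and the subproblem accessor, are hypothetical stand-ins introduced here for illustration, not the paper's implementation; the sketch only shows the control flow: rules whose subproblems have nonzero width are decomposed by learning sketches of strictly lower width, until every remaining rule has width zero and the rule set as a whole encodes a general policy.

```python
# Hypothetical stand-ins for the paper's components; names are illustrative.
def learn_sketch(problem, instances, max_width):
    """Learn sketch rules whose induced subproblems have width <= max_width."""
    raise NotImplementedError

def rule_width(rule, instances):
    """Measure the width of the subproblems induced by a sketch rule."""
    raise NotImplementedError

def learn_hierarchical_policy(domain, instances, start_width=2):
    """Refine sketch rules iteratively until all rules have width zero.

    Width-zero rules can be followed greedily, so the final rule set
    encodes a general (hierarchical) policy for the domain.
    """
    hierarchy = []
    frontier = list(learn_sketch(domain, instances, max_width=start_width))
    while frontier:
        rule = frontier.pop()
        w = rule_width(rule, instances)
        if w == 0:
            # A width-zero rule needs no search: keep it as a policy rule.
            hierarchy.append(rule)
        else:
            # Decompose the subproblem the rule defines by learning a
            # sketch of strictly lower width, and refine its rules in turn.
            frontier.extend(
                learn_sketch(rule.subproblem(domain), instances,
                             max_width=w - 1))
    return hierarchy
```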