We propose a novel dictionary-based representation learning model for Bidirectional Texture Functions (BTFs), aiming at compact storage, real-time rendering performance, and high image quality. Our model is trained once, using a small training set, and then used to obtain a sparse tensor containing the model parameters. Our technique exploits redundancies in the data across all dimensions simultaneously, as opposed to existing methods that use only angular information and ignore correlations in the spatial domain. We show that our model admits efficient angular interpolation directly in the model space, rather than the BTF space, leading to a notably higher rendering speed than in previous work. Additionally, the high quality-storage cost tradeoff enabled by our method facilitates controlling the image quality, storage cost, and rendering speed using a single parameter: the number of coefficients. Previous methods rely on a fixed number of latent variables for training and testing, hence limiting the potential for achieving a favorable quality-storage cost tradeoff and scalability. Our experimental results demonstrate that our method outperforms existing methods both quantitatively and qualitatively, while also achieving a higher compression ratio and rendering speed.