Handwriting Transformers
2021 (English) In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), IEEE, 2021, pp. 1066-1074. Conference paper, Published paper (Refereed)
Abstract [en]
We propose a novel transformer-based styled handwritten text image generation approach, HWT, that strives to learn both style-content entanglement and global and local style patterns. The proposed HWT captures the long- and short-range relationships within the style examples through a self-attention mechanism, thereby encoding both global and local style patterns. Further, the proposed transformer-based HWT comprises an encoder-decoder attention that enables style-content entanglement by gathering the style features of each query character. To the best of our knowledge, we are the first to introduce a transformer-based network for styled handwritten text generation. Our proposed HWT generates realistic styled handwritten text images and outperforms the state of the art, as demonstrated through extensive qualitative, quantitative, and human-based evaluations. The proposed HWT can handle text of arbitrary length and any desired writing style in a few-shot setting. Further, our HWT generalizes well to the challenging scenario where both the words and the writing style are unseen during training, generating realistic styled handwritten text images. Code is available at: https://github.com/ankanbhunia/HandwritingTransformers
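The encoder-decoder attention described above can be illustrated with a minimal sketch: each character query attends over the encoded style features, gathering a style-weighted mixture per query. This is a hedged illustration in plain NumPy, not the authors' implementation; the shapes, variable names, and the use of raw random features are all illustrative assumptions.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each query row gathers a
    # softmax-weighted mixture of the value rows, weighted by
    # query-key similarity.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
# Hypothetical encoder output: style features from a few example
# images of one writer (16 feature vectors of dimension 64).
style_feats = rng.normal(size=(16, 64))
# Hypothetical decoder queries: one per character of the target text.
char_queries = rng.normal(size=(5, 64))

# Each of the 5 character queries now carries gathered style information.
out = attention(char_queries, style_feats, style_feats)
print(out.shape)  # (5, 64)
```

In the paper's framing, this gathering step is what entangles content (the query characters) with style (the encoded examples); the sketch only shows the attention mechanics, not the surrounding generator.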
Place, publisher, year, edition, pages
IEEE, 2021, pp. 1066-1074
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-187612
DOI: 10.1109/ICCV48922.2021.00112
ISI: 000797698901025
ISBN: 9781665428125 (electronic)
ISBN: 9781665428132 (print)
OAI: oai:DiVA.org:liu-187612
DiVA, id: diva2:1691094
Conference
18th IEEE/CVF International Conference on Computer Vision (ICCV), held online, October 11-17, 2021
2022-08-29