Learning-Based Methods for Visual Understanding in Engineering and Production Workflows
Linköping University, Department of Management and Engineering, Product Realisation. Linköping University, Faculty of Science & Engineering.
ORCID iD: 0000-0002-5950-4962
2026 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Artificial intelligence (AI) has advanced rapidly across engineering and industrial domains, yet its adoption in production environments is often constrained by the need for higher system reliability, limited availability of high-quality data, and the challenge of embedding tacit engineering knowledge into learning-based models. These limitations hinder the broader industrial push toward flexible, data-driven automation capable of handling increasing product variability and shorter production cycles.

This thesis investigates how learning-based methods can be designed and integrated as reliable perception components within production workflows. Through four complementary case studies, the work demonstrates how visual representations—ranging from Computer-Aided Design (CAD)-derived images and engineering drawings to point clouds and RGB-D images—can be leveraged to address concrete industrial challenges across multiple stages of the manufacturing pipeline.

The first case study predicts fixturing clamp configurations for welding operations in automotive manufacturing by learning geometric patterns from CAD-derived representations. The second applies optical character recognition to engineering drawings to accelerate quality-control and documentation workflows. The third examines scene reconstruction and 3D object detection from point clouds, using synthetic data generation to mitigate data scarcity. The fourth develops a fast, zero-shot pose estimation approach for robotic manipulation, enabling reliable object localization in dynamic industrial environments.

Taken together, these studies show how AI methods informed by structured engineering knowledge can increase process efficiency, reduce manual workload, and help resolve persistent automation bottlenecks in modern manufacturing.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2026, p. 62
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2506
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
URN: urn:nbn:se:liu:diva-221069
DOI: 10.3384/9789181184648
ISBN: 9789181184631 (print)
ISBN: 9789181184648 (electronic)
OAI: oai:DiVA.org:liu-221069
DiVA id: diva2:2036119
Public defence
2026-03-05, ACAS, A-building, Campus Valla, Linköping, 13:15 (English)
Funder
Vinnova, 2021-02481; Vinnova, 2020-02974; Vinnova, 2023-02694
Available from: 2026-02-06. Created: 2026-02-06. Last updated: 2026-02-06. Bibliographically approved.
List of papers
1. Application of optimized convolutional neural network to fixture layout in automotive parts
2023 (English). In: The International Journal of Advanced Manufacturing Technology, ISSN 0268-3768, E-ISSN 1433-3015, Vol. 126, p. 339-353. Article in journal (Refereed). Published.
Abstract [en]

Fixture layout is a complex task that significantly impacts manufacturing costs and requires the expertise of well-trained engineers. While most research approaches to automating the fixture layout process use optimization or rule-based frameworks, this paper presents a novel approach using supervised learning. The proposed framework replicates the 3-2-1 locating principle to lay out fixtures for sheet metal designs. This principle ensures the correct fixing of an object by restricting its degrees of freedom. One main novelty of the proposed framework is the use of topographic maps generated from sheet metal design data as input for a convolutional neural network (CNN). These maps are created by projecting the geometry onto a plane and converting the Z coordinate into gray-scale pixel values. The framework is also novel in its ability to reuse knowledge about fixturing to lay out new workpieces and in its integration with a CAD environment as an add-in. The results of the hyperparameter-tuned CNN for regression show high accuracy and fast convergence, demonstrating the usability of the model for industrial applications. The framework was first tested using automotive B-pillar designs and was found to have high accuracy (approximately 100%) in classifying these designs. The proposed framework offers a promising approach for automating the complex task of fixture layout in sheet metal design.
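The encoding described above, projecting part geometry onto a plane and mapping the Z coordinate to grayscale intensity, can be sketched roughly as follows. This is a minimal illustration of the idea, not the paper's implementation; the function name, grid resolution, and point-array input format are assumptions.

```python
import numpy as np

def topographic_map(points, resolution=64):
    """Render an (N, 3) array of surface points as a grayscale height map:
    X and Y become pixel coordinates, Z becomes pixel intensity (0-255)."""
    pts = np.asarray(points, dtype=float)
    # Map X/Y into integer pixel indices on a resolution x resolution grid.
    xy = pts[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    span_xy = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero
    idx = ((xy - lo) / span_xy * (resolution - 1)).astype(int)
    # Map Z into grayscale intensities.
    z = pts[:, 2]
    z_span = np.ptp(z)
    gray = (((z - z.min()) / z_span * 255).astype(np.uint8)
            if z_span > 0 else np.zeros(len(z), dtype=np.uint8))
    # Keep the highest intensity per pixel (depth-buffer-style projection).
    img = np.zeros((resolution, resolution), dtype=np.uint8)
    np.maximum.at(img, (idx[:, 1], idx[:, 0]), gray)
    return img
```

The resulting single-channel image can then be fed to a standard CNN, which is what makes the 2D representation of 3D sheet metal geometry attractive for transfer of fixturing knowledge to new workpieces.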

Place, publisher, year, edition, pages
SPRINGER LONDON LTD, 2023
Keywords
Design automation; Machine learning; Fixtures; CNN; Hyperparameter tuning; EfficientNet
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-192681 (URN); 10.1007/s00170-023-10995-0 (DOI); 000938262100003 (Web of Science)
Note

Funding agencies: Linköping University; Vinnova-FFI (Fordonsstrategisk forskning och innovation) [2020-02974]

Available from: 2023-03-29. Created: 2023-03-29. Last updated: 2026-02-06. Bibliographically approved.
2. Optical character recognition on engineering drawings to achieve automation in production quality control
2023 (English). In: Frontiers in Manufacturing Technology, E-ISSN 2813-0359, Vol. 3. Article in journal (Refereed). Published.
Abstract [en]

Introduction: Digitization is a crucial step towards achieving automation in production quality control for mechanical products. Engineering drawings are essential carriers of information for production, but their complexity poses a challenge for computer vision. To enable automated quality control, seamless data transfer between analog drawings and CAD/CAM software is necessary.

Methods: This paper focuses on autonomous text detection and recognition in engineering drawings. The methodology is divided into five stages. First, image processing techniques are used to classify and identify key elements in the drawing. The output is divided into three elements: information blocks and tables, feature control frames, and the rest of the image. For each element, an OCR pipeline is proposed. The last stage is output generation of the information in table format.

Results: The proposed tool, called eDOCr, achieved a precision and recall of 90% in detection, an F1-score of 94% in recognition, and a character error rate of 8%. The tool enables seamless integration between engineering drawings and quality control.

Discussion: Most OCR algorithms have limitations when applied to mechanical drawings due to their inherent complexity, including measurements, orientation, tolerances, and special symbols such as geometric dimensioning and tolerancing (GD&T). The eDOCr tool overcomes these limitations and provides a solution for automated quality control.

Conclusion: The eDOCr tool provides an effective solution for automated text detection and recognition in engineering drawings. The tool's success demonstrates that automated quality control for mechanical products can be achieved through digitization. The tool is shared with the research community through GitHub.
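The character error rate (CER) reported above is conventionally computed as the Levenshtein edit distance between the recognized text and the reference text, divided by the number of reference characters. A minimal, self-contained sketch of that metric (an illustration of the standard definition, not eDOCr's actual evaluation code):

```python
def character_error_rate(reference, hypothesis):
    """CER = Levenshtein edit distance / number of reference characters."""
    m, n = len(reference), len(hypothesis)
    if m == 0:
        return float(n > 0)
    prev = list(range(n + 1))  # distances against the empty reference prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            substitution = prev[j - 1] + (reference[i - 1] != hypothesis[j - 1])
            cur[j] = min(prev[j] + 1,     # deletion
                         cur[j - 1] + 1,  # insertion
                         substitution)
        prev = cur
    return prev[n] / m
```

Under this definition, a CER of 8% means roughly eight character-level edits per hundred reference characters, which is why the later eDOCr2 result of a CER below 1% represents a substantial improvement.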

Place, publisher, year, edition, pages
Frontiers Media S.A., 2023
Keywords
optical character recognition, image segmentation, object detection, engineering drawings, quality control, keras-ocr
National Category
Engineering and Technology; Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:liu:diva-195416 (URN); 10.3389/fmtec.2023.1154132 (DOI)
Funder
Vinnova, 2021-02481
Available from: 2023-06-20. Created: 2023-06-20. Last updated: 2026-02-06. Bibliographically approved.
3. Optimizing Text Recognition in Mechanical Drawings: A Comprehensive Approach
2025 (English). In: Machines, E-ISSN 2075-1702, Vol. 13, no 3, article id 254. Article in journal (Refereed). Published.
Abstract [en]

The digitalization of engineering drawings is a pivotal step toward automating and improving the efficiency of product design and manufacturing systems (PDMSs). This study presents eDOCr2, a framework that combines traditional OCR and image processing to extract structured information from mechanical drawings. It segments drawings into key elements (such as information blocks, dimensions, and feature control frames), achieving a text recall of 93.75% and a character error rate (CER) below 1% in a benchmark with drawings from different sources. To improve semantic understanding and reasoning, eDOCr2 integrates vision-language models (Qwen2-VL-7B and GPT-4o) after segmentation to verify, filter, or retrieve information. This integration enables PDMS applications such as automated design validation, quality control, or manufacturing assessment. The code is available on GitHub.

Place, publisher, year, edition, pages
MDPI, 2025
Keywords
mechanical drawings; optical character recognition; intelligent document processing; quality control; vision language models
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:liu:diva-212838 (URN); 10.3390/machines13030254 (DOI); 001452775200001 (Web of Science); 2-s2.0-105001120622 (Scopus ID)
Note

Funding agencies: Vinnova; DART project; [2021-02481]; [2024-01420]

Available from: 2025-04-07. Created: 2025-04-07. Last updated: 2026-02-06.
4. Towards digital representations for brownfield factories using synthetic data generation and 3D object detection
2024 (English). In: Proceedings of the Design Society: International Conference on Engineering Design / [ed] Gaetano Cascini, Cambridge University Press, 2024, Vol. 4, p. 2297-2306. Conference paper, Published paper (Refereed).
Abstract [en]

This study emphasizes the importance of automatic synthetic data generation in data-driven applications, especially in the development of a 3D computer vision system for engineering contexts such as brownfield factory projects, where no data is readily available. Key points: (1) A successful integration of a synthetic data generator with the S3DIS dataset, leading to a significant enhancement in object detection of previous classes and enabling recognition of new ones; (2) A proposal for a CAD-based configurator for efficient and customizable scene reconstruction from LiDAR scanner point clouds.

Place, publisher, year, edition, pages
Cambridge University Press, 2024
Keywords
artificial intelligence (AI), brown field, digital twin, point cloud, synthetic data generation
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:liu:diva-221068 (URN); 10.1017/pds.2024.232 (DOI); 2-s2.0-85194086010 (Scopus ID)
Conference
International Conference on Engineering Design, 2024
Available from: 2026-02-06. Created: 2026-02-06. Last updated: 2026-02-06. Bibliographically approved.

Open Access in DiVA

fulltext (38531 kB), 226 downloads
File information
File name: FULLTEXT01.pdf. File size: 38531 kB. Checksum (SHA-512):
d27bf15b3d269a492842d1638f5fecdb3514e8bf9ae3169c370b6a6e203c0cf16df359a1fd7f5ae686d4975ca376bf17fb2a25fe4d5632ef1d035a6d353bd4da
Type: fulltext. Mimetype: application/pdf.


Authority records

Villena Toro, Javier

