Learning-Based Methods for Visual Understanding in Engineering and Production Workflows
Linköpings universitet, Institutionen för ekonomisk och industriell utveckling, Produktrealisering. Linköpings universitet, Tekniska fakulteten. ORCID iD: 0000-0002-5950-4962
2026 (English) Doctoral thesis, with papers (Other academic)
Abstract [en]

Artificial intelligence (AI) has advanced rapidly across engineering and industrial domains, yet its adoption in production environments is often constrained by the need for higher system reliability, limited availability of high-quality data, and the challenge of embedding tacit engineering knowledge into learning-based models. These limitations hinder the broader industrial push toward flexible, data-driven automation capable of handling increasing product variability and shorter production cycles.

This thesis investigates how learning-based methods can be designed and integrated as reliable perception components within production workflows. Through four complementary case studies, the work demonstrates how visual representations—ranging from Computer-Aided Design (CAD)-derived images and engineering drawings to point clouds and RGB-D images—can be leveraged to address concrete industrial challenges across multiple stages of the manufacturing pipeline.

The first case study predicts fixturing clamp configurations for welding operations in automotive manufacturing by learning geometric patterns from CAD-derived representations. The second applies optical character recognition to engineering drawings to accelerate quality-control and documentation workflows. The third examines scene reconstruction and 3D object detection from point clouds, using synthetic data generation to mitigate data scarcity. The fourth develops a fast, zero-shot pose estimation approach for robotic manipulation, enabling reliable object localization in dynamic industrial environments.

Taken together, these studies show how AI methods informed by structured engineering knowledge can increase process efficiency, reduce manual workload, and help resolve persistent automation bottlenecks in modern manufacturing.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2026, p. 62
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2506
Identifiers
URN: urn:nbn:se:liu:diva-221069
DOI: 10.3384/9789181184648
ISBN: 9789181184631 (print)
ISBN: 9789181184648 (digital)
OAI: oai:DiVA.org:liu-221069
DiVA, id: diva2:2036119
Public defence
2026-03-05, ACAS, A-building, Campus Valla, Linköping, 13:15 (English)
Research funders
Vinnova, 2021-02481
Vinnova, 2020-02974
Vinnova, 2023-02694
Available from: 2026-02-06. Created: 2026-02-06. Last updated: 2026-02-06. Bibliographically approved.
List of papers
1. Application of optimized convolutional neural network to fixture layout in automotive parts
2023 (English) In: The International Journal of Advanced Manufacturing Technology, ISSN 0268-3768, E-ISSN 1433-3015, Vol. 126, pp. 339-353. Article in journal (Refereed) Published
Abstract [en]

Fixture layout is a complex task that significantly impacts manufacturing costs and requires the expertise of well-trained engineers. While most research approaches to automating the fixture layout process use optimization or rule-based frameworks, this paper presents a novel approach using supervised learning. The proposed framework replicates the 3-2-1 locating principle to lay out fixtures for sheet metal designs. This principle ensures the correct fixing of an object by restricting its degrees of freedom. One main novelty of the proposed framework is the use of topographic maps generated from sheet metal design data as input for a convolutional neural network (CNN). These maps are created by projecting the geometry onto a plane and converting the Z coordinate into gray-scale pixel values. The framework is also novel in its ability to reuse knowledge about fixturing to lay out new workpieces and in its integration with a CAD environment as an add-in. The results of the hyperparameter-tuned CNN for regression show high accuracy and fast convergence, demonstrating the usability of the model for industrial applications. The framework was first tested using automotive b-pillar designs and was found to have high accuracy (approximately 100%) in classifying these designs. The proposed framework offers a promising approach for automating the complex task of fixture layout in sheet metal design.
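The topographic-map encoding described above can be sketched in a few lines. This is a minimal illustration under assumed conventions (a top-down XY projection onto a fixed-size grid, keeping the highest point per pixel); it is not the paper's actual implementation:

```python
import numpy as np

def topographic_map(points, resolution=64):
    """Rasterize 3D surface points into a grayscale height map.

    Each point's (x, y) selects a pixel on a resolution x resolution
    grid; its z value, normalized to 0-255, becomes the pixel intensity.
    Where several points share a pixel, the highest one wins.
    """
    pts = np.asarray(points, dtype=float)
    xy, z = pts[:, :2], pts[:, 2]
    # Map x and y into pixel indices (guard against zero extent).
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)
    idx = ((xy - mins) / span * (resolution - 1)).astype(int)
    # Map z into the 0-255 grayscale range.
    z_span = z.max() - z.min()
    gray = (z - z.min()) / (z_span if z_span else 1.0) * 255
    img = np.zeros((resolution, resolution), dtype=np.uint8)
    for (ix, iy), g in zip(idx, gray):
        img[iy, ix] = max(img[iy, ix], int(g))
    return img
```

An image produced this way can be fed to a standard CNN for classification or regression, which is what makes the encoding attractive: it turns a CAD geometry problem into an ordinary image-learning problem.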

Place, publisher, year, edition, pages
Springer London Ltd, 2023
Keywords
Design automation; Machine learning; Fixtures; CNN; Hyperparameter tuning; EfficientNet
Identifiers
urn:nbn:se:liu:diva-192681 (URN)
10.1007/s00170-023-10995-0 (DOI)
000938262100003 ()
Note

Funding agencies: Linköping University; Vinnova-FFI (Fordonsstrategisk forskning och innovation) [2020-02974]

Available from: 2023-03-29. Created: 2023-03-29. Last updated: 2026-02-06. Bibliographically approved.
2. Optical character recognition on engineering drawings to achieve automation in production quality control
2023 (English) In: Frontiers in Manufacturing Technology, E-ISSN 2813-0359, Vol. 3. Article in journal (Refereed) Published
Abstract [en]

Introduction: Digitization is a crucial step towards achieving automation in production quality control for mechanical products. Engineering drawings are essential carriers of information for production, but their complexity poses a challenge for computer vision. To enable automated quality control, seamless data transfer between analog drawings and CAD/CAM software is necessary.

Methods: This paper focuses on autonomous text detection and recognition in engineering drawings. The methodology is divided into five stages. First, image processing techniques are used to classify and identify key elements in the drawing. The output is divided into three elements: information blocks and tables, feature control frames, and the rest of the image. For each element, an OCR pipeline is proposed. The last stage is output generation of the information in table format.

Results: The proposed tool, called eDOCr, achieved a precision and recall of 90% in detection, an F1-score of 94% in recognition, and a character error rate of 8%. The tool enables seamless integration between engineering drawings and quality control.

Discussion: Most OCR algorithms have limitations when applied to mechanical drawings due to their inherent complexity, including measurements, orientation, tolerances, and special symbols such as geometric dimensioning and tolerancing (GD&T). The eDOCr tool overcomes these limitations and provides a solution for automated quality control.

Conclusion: The eDOCr tool provides an effective solution for automated text detection and recognition in engineering drawings. The tool's success demonstrates that automated quality control for mechanical products can be achieved through digitization. The tool is shared with the research community through GitHub.
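The staged routing the methodology describes — classify regions, run a dedicated recognition pipeline per element type, emit the results as table rows — can be sketched as below. The region types, the `ocr` stub, and the output fields are illustrative placeholders, not eDOCr's actual API:

```python
# Sketch of a staged drawing-OCR pipeline: segment regions by type,
# route each region to its own recognizer, and tabulate the results.
from dataclasses import dataclass

@dataclass
class Region:
    kind: str  # "info_block", "fcf" (feature control frame), or "other"
    text: str  # stands in here for the cropped pixel region

def ocr(region: Region) -> str:
    """Stub recognizer; a real pipeline would run OCR on the cropped image."""
    return region.text

# One recognition route per element type, mirroring the per-element
# pipelines the methodology proposes after segmentation.
ROUTES = {
    "info_block": lambda r: {"element": "info_block", "text": ocr(r)},
    "fcf":        lambda r: {"element": "fcf", "text": ocr(r)},
    "other":      lambda r: {"element": "other", "text": ocr(r)},
}

def run_pipeline(regions):
    """Stage 1 (classification) is assumed done; route and tabulate."""
    return [ROUTES[region.kind](region) for region in regions]
```

Splitting the drawing before recognition is the key design choice: a title block, a dimension string, and a GD&T frame have very different layouts, so each route can use preprocessing tuned to its element type.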

Place, publisher, year, edition, pages
Frontiers Media S.A., 2023
Keywords
optical character recognition, image segmentation, object detection, engineering drawings, quality control, keras-ocr
Identifiers
urn:nbn:se:liu:diva-195416 (URN)
10.3389/fmtec.2023.1154132 (DOI)
Research funder
Vinnova, 2021-02481
Available from: 2023-06-20. Created: 2023-06-20. Last updated: 2026-02-06. Bibliographically approved.
3. Optimizing Text Recognition in Mechanical Drawings: A Comprehensive Approach
2025 (English) In: Machines, E-ISSN 2075-1702, Vol. 13, no. 3, article id 254. Article in journal (Refereed) Published
Abstract [en]

The digitalization of engineering drawings is a pivotal step toward automating and improving the efficiency of product design and manufacturing systems (PDMSs). This study presents eDOCr2, a framework that combines traditional OCR and image processing to extract structured information from mechanical drawings. It segments drawings into key elements, such as information blocks, dimensions, and feature control frames, achieving a text recall of 93.75% and a character error rate (CER) below 1% in a benchmark with drawings from different sources. To improve semantic understanding and reasoning, eDOCr2 integrates vision-language models (Qwen2-VL-7B and GPT-4o) after segmentation to verify, filter, or retrieve information. This integration enables PDMS applications such as automated design validation, quality control, or manufacturing assessment. The code is available on GitHub.

Place, publisher, year, edition, pages
MDPI, 2025
Keywords
mechanical drawings; optical character recognition; intelligent document processing; quality control; vision language models
Identifiers
urn:nbn:se:liu:diva-212838 (URN)
10.3390/machines13030254 (DOI)
001452775200001 ()
2-s2.0-105001120622 (Scopus ID)
Note

Funding agencies: Vinnova; DART project; [2021-02481]; [2024-01420]

Available from: 2025-04-07. Created: 2025-04-07. Last updated: 2026-02-06.
4. Towards digital representations for brownfield factories using synthetic data generation and 3D object detection
2024 (English) In: Proceedings of the Design Society: International Conference on Engineering Design / [ed] Gaetano Cascini, Cambridge University Press, 2024, Vol. 4, pp. 2297-2306. Conference paper (Refereed) Published
Abstract [en]

This study emphasizes the importance of automatic synthetic data generation in data-driven applications, especially in the development of a 3D computer vision system for engineering contexts such as brownfield factory projects, where no data is readily available. Key points: (1) A successful integration of a synthetic data generator with the S3DIS dataset, leading to a significant enhancement in object detection of previous classes and enabling recognition of new ones; (2) A proposal for a CAD-based configurator for efficient and customizable scene reconstruction from LiDAR scanner point clouds.
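As a toy illustration of the synthetic-generation idea above, labeled point clouds can be sampled from simple CAD-like primitives and mixed into a training set. The box primitive and the object specification format below are assumptions made for the sketch, not the paper's actual generator:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_box(center, size, n=500):
    """Sample n points from the surface of an axis-aligned box,
    standing in for a CAD-placed object in a synthetic scene."""
    center, size = np.asarray(center, float), np.asarray(size, float)
    pts = rng.uniform(-0.5, 0.5, (n, 3))  # random points in a unit cube
    face = rng.integers(0, 6, n)          # pick a face for each point
    axis, side = face // 2, face % 2      # face -> fixed axis and sign
    pts[np.arange(n), axis] = side - 0.5  # snap that axis onto the face
    return center + pts * size

def synthetic_scene(objects):
    """Build a labeled point cloud from (label, center, size) specs,
    ready to mix into S3DIS-style training data."""
    clouds, labels = [], []
    for label, center, size in objects:
        cloud = sample_box(center, size)
        clouds.append(cloud)
        labels += [label] * len(cloud)
    return np.vstack(clouds), np.array(labels)
```

Because object placement and labels come from the generator itself, every synthetic point is perfectly annotated for free — which is exactly what makes this attractive for brownfield projects where no labeled scans exist.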

Place, publisher, year, edition, pages
Cambridge University Press, 2024
Keywords
artificial intelligence (AI), brown field, digital twin, point cloud, synthetic data generation
Identifiers
urn:nbn:se:liu:diva-221068 (URN)
10.1017/pds.2024.232 (DOI)
2-s2.0-85194086010 (Scopus ID)
Conference
International Conference on Engineering Design, 2024
Available from: 2026-02-06. Created: 2026-02-06. Last updated: 2026-02-06. Bibliographically approved.

Open Access in DiVA

fulltext (38531 kB), 258 downloads
File information
File: FULLTEXT01.pdf. File size: 38531 kB. Checksum: SHA-512
d27bf15b3d269a492842d1638f5fecdb3514e8bf9ae3169c370b6a6e203c0cf16df359a1fd7f5ae686d4975ca376bf17fb2a25fe4d5632ef1d035a6d353bd4da
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text

Person

Villena Toro, Javier
