Balanced Product of Calibrated Experts for Long-Tailed Recognition
Sanchez Aimar, Emanuel. Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0001-9874-737X
Jonnarth, Arvid. Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Husqvarna Group, Sweden. ORCID iD: 0000-0002-3434-2522
Felsberg, Michael. Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. University of KwaZulu-Natal, South Africa. ORCID iD: 0000-0002-6096-3648
Kuhlmann, Marco. Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-2492-9872
2023 (English). In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2023, pp. 19967-19977. Conference paper, Published paper (Refereed)
Abstract [en]

Many real-world recognition problems are characterized by long-tailed label distributions. These distributions make representation learning highly challenging due to limited generalization over the tail classes. If the test distribution differs from the training distribution, e.g. uniform versus long-tailed, the problem of the distribution shift needs to be addressed. A recent line of work proposes learning multiple diverse experts to tackle this issue. Ensemble diversity is encouraged by various techniques, e.g. by specializing different experts in the head and the tail classes. In this work, we take an analytical approach and extend the notion of logit adjustment to ensembles to form a Balanced Product of Experts (BalPoE). BalPoE combines a family of experts with different test-time target distributions, generalizing several previous approaches. We show how to properly define these distributions and combine the experts in order to achieve unbiased predictions, by proving that the ensemble is Fisher-consistent for minimizing the balanced error. Our theoretical analysis shows that our balanced ensemble requires calibrated experts, which we achieve in practice using mixup. We conduct extensive experiments and our method obtains new state-of-the-art results on three long-tailed datasets: CIFAR-100-LT, ImageNet-LT, and iNaturalist-2018. Our code is available at https://github.com/emasa/BalPoE-CalibratedLT.
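The bias-cancellation idea behind the balanced ensemble can be illustrated with a small numerical sketch. Everything below is illustrative, not the authors' implementation: the class prior, the tau values, and the assumption that each expert's logits equal the balanced logits plus a known prior bias are made up for the demonstration, mirroring the logit-adjustment reasoning the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes = 10

# Illustrative long-tailed training prior (exponential decay).
prior = np.exp(-0.5 * np.arange(num_classes))
prior /= prior.sum()
log_prior = np.log(prior)

# Pretend each expert k outputs the true balanced logits plus a prior
# bias (1 - tau_k) * log pi(y), as in logit adjustment: tau=0 is a
# conventional expert, tau=1 a balanced one, tau=2 a tail expert.
balanced_logits = rng.normal(size=num_classes)  # hypothetical log p_bal(y|x)
taus = np.array([0.0, 1.0, 2.0])
expert_logits = [balanced_logits + (1.0 - tau) * log_prior for tau in taus]

# Product-of-experts fusion = averaging logits. Because the tau values
# average to one, the per-expert prior biases cancel exactly.
ensemble = np.mean(expert_logits, axis=0)
assert np.allclose(ensemble, balanced_logits)  # unbiased prediction
```

With any tau values averaging to one the same cancellation holds; the sketch only shows the arithmetic, while the paper proves the corresponding Fisher-consistency result for trained, calibrated experts.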

Place, publisher, year, edition, pages
IEEE Computer Society, 2023, pp. 19967-19977
Series
IEEE Conference on Computer Vision and Pattern Recognition, ISSN 1063-6919, E-ISSN 2575-7075
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-199347
DOI: 10.1109/CVPR52729.2023.01912
ISI: 001062531304028
ISBN: 9798350301298 (electronic)
ISBN: 9798350301304 (print)
OAI: oai:DiVA.org:liu-199347
DiVA, id: diva2:1815355
Conference
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, Canada, June 17-24, 2023
Note

Funding Agencies|Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; Swedish Research Council [2022-06725]; Knut and Alice Wallenberg Foundation at the National Supercomputer Centre

Available from: 2023-11-28 Created: 2023-11-28 Last updated: 2025-11-18
In thesis
1. Learning Robot Vision under Insufficient Data
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Machine learning is used today in a wide variety of applications, especially within computer vision, robotics, and autonomous systems. Example use cases include detecting people or other objects using cameras in autonomous vehicles, or navigating robots through collision-free paths to solve different tasks. The flexibility of machine learning is attractive as it can be applied to a wide variety of challenging tasks, without detailed prior knowledge of the problem domain. However, training machine learning models requires vast amounts of data, which leads to a significant manual effort, both for collecting the data and for annotating it. 

In this thesis, we study and develop methods for training machine learning models under insufficient data within computer vision, robotics, and autonomous systems, for the purpose of reducing the manual effort. In summary, we study (1) weakly-supervised learning for reducing the annotation cost, (2) methods for reducing model bias under highly imbalanced training data, (3) methods for obtaining trustworthy uncertainty estimates, and (4) the use of simulated and semi-virtual environments for reducing the amount of real-world data in reinforcement learning.

In the first part of this thesis, we investigate how weakly-supervised learning can be used within image segmentation. In contrast to fully supervised learning, weakly-supervised learning uses a weaker form of annotation, which reduces the annotation effort. Typically, in image segmentation, each object needs to be precisely annotated in every image at the pixel level. Creating this type of annotation is both time-consuming and costly. In weakly-supervised segmentation, however, the only information required is which objects are depicted in the images. This significantly reduces the annotation time. In Papers A and B, we propose two loss functions for improving the predicted object segmentations, especially their contours, in weakly-supervised segmentation.
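To make the supervision gap concrete, the following sketch shows a common baseline formulation of weakly-supervised segmentation: per-pixel class scores are reduced to image-level predictions by global max pooling and trained against image tags only. The loss functions proposed in Papers A and B differ; this is just the generic multiple-instance idea, with made-up toy data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy per-pixel class scores from a hypothetical segmentation head:
# 2 images, 3 classes, 8x8 pixels.
pixel_logits = rng.normal(size=(2, 3, 8, 8))

# Image-level labels only: which classes appear in each image.
image_labels = np.array([[1., 0., 1.],
                         [0., 1., 0.]])

# Global max pooling: a class counts as present in the image if at
# least one pixel predicts it strongly.
image_logits = pixel_logits.reshape(2, 3, -1).max(axis=2)

# Binary cross-entropy against the image tags; minimizing this loss
# supervises the dense pixel map using only image-level annotation.
probs = 1.0 / (1.0 + np.exp(-image_logits))
loss = -np.mean(image_labels * np.log(probs)
                + (1 - image_labels) * np.log(1 - probs))
```

Because only the maximum-scoring pixel receives a gradient per class, such baselines localize objects coarsely, which is one reason contour-aware losses are worth studying.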

In the next part of the thesis, we tackle class imbalance in image classification. During data collection, some classes naturally occur more frequently than others, which leads to an imbalance in the amount of data between the different classes. Models trained on such datasets may become biased towards the more common classes. Overcoming this effect by collecting more data of the rare classes may take a very long time. Instead, we develop an ensemble method for image classification in Paper C, which is unbiased despite being trained on highly imbalanced data. 

When using machine learning models within autonomous systems, a desirable property for them is to predict trustworthy uncertainty estimates. This is especially important when the training data is limited, as the probability of encountering previously unseen cases is large. In short, a model making a prediction with a certain confidence should be correct with the corresponding probability. This is not the case in general, as machine learning models are notorious for predicting overconfident uncertainty estimates. We apply methods for improving the uncertainty estimates for classification in Paper C and for regression in Paper D.
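The "confidence should match accuracy" property above is what the standard expected calibration error (ECE) metric measures. The sketch below is a minimal, generic ECE implementation on synthetic data, not the evaluation code of the papers; the bin count and the toy model are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |accuracy - confidence| gap, weighted by bin occupancy."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# A perfectly calibrated toy model: a prediction made with confidence c
# is correct with probability exactly c, so the ECE should be near zero.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=100_000)
correct = (rng.uniform(size=conf.size) < conf).astype(float)
ece = expected_calibration_error(conf, correct)
```

An overconfident model would instead show bins where mean confidence clearly exceeds mean accuracy, driving the ECE up.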

In the final part of this thesis, we utilize reinforcement learning for teaching a robot to perform coverage path planning, e.g. for lawn mowing or search-and-rescue. In reinforcement learning, the robot interacts with an environment and receives rewards based on how well it solves the task. Its actions are initially random and improve over time as it explores the environment and gathers data. It typically takes a long time for this learning process to converge. This is problematic in real-world environments, where the robot needs to operate for the full duration of training, which may require human supervision. At the same time, a large variety in the training data is important for generalization, which is difficult to achieve in real-world environments. Instead, we utilize a simulated environment in Paper E for accelerating the training process, where we procedurally generate random environments. To simplify the transfer from simulation to reality, we fine-tune the model on the real robot in a semi-virtual indoor environment in Paper F.

Abstract [sv]

Machine learning is widely used today in many areas, particularly in computer vision, robotics, and autonomous systems. It can, for example, be used to detect people and other objects with cameras in autonomous cars, or to steer robots along collision-free paths to solve various tasks. The flexibility of machine learning is attractive, since it can be applied to solve difficult problems without detailed knowledge of the problem domain in question. However, training machine learning models requires large amounts of data, which entails a substantial manual workload, both for collecting the data and for annotating it.

In this thesis, we investigate and develop methods for training machine learning models with limited access to data within computer vision, robotics, and autonomous systems, with the aim of reducing the manual workload. In summary, we investigate (1) weakly-supervised learning for reducing annotation time, (2) methods that are unbiased under highly imbalanced data, (3) methods for obtaining reliable uncertainty estimates, and (4) simulated and semi-virtual environments for reducing the amount of real-world data in reinforcement learning.

In the first part of the thesis, we investigate how weakly-supervised learning can be used for image segmentation. In contrast to fully supervised learning, a weaker form of annotation is used, which reduces the manual annotation burden. Image segmentation normally requires a precise annotation of every individual object in every image at the pixel level. Creating this type of annotation is both time-consuming and costly. With weakly-supervised learning, only knowledge of which types of objects appear in each image is required, which considerably reduces the annotation time. In Papers A and B, we design two loss functions adapted to better segment the objects of interest, in particular their contours.

In the next part, we address an undesirable effect that can arise during data collection. Some classes naturally occur more often than others, which leads to an imbalance in the amount of data between different classes. A model trained on such a dataset can become biased towards the classes that occur more often. If some classes are rare, it can take a very long time to collect enough data to overcome this effect. To counteract the effect in image classification, we develop an ensemble method in Paper C that is unbiased despite being trained on highly imbalanced data.

For machine learning models to be useful in autonomous systems, it is advantageous if they can estimate their uncertainty in a reliable way. This is particularly important with limited training data, since the probability increases that situations arise that the model has not seen during training. In short, a model that makes a prediction with a certain confidence should be correct with the corresponding probability. This is generally not the case for machine learning models, which instead tend to be overconfident. We apply methods for improving the uncertainty estimates for classification in Paper C and for regression in Paper D.

In the final part of the thesis, we investigate how reinforcement learning can be applied to teach a robot coverage path planning, for example for lawn mowing or for finding missing persons. During reinforcement learning, the robot interacts with the intended environment and receives rewards based on how well it performs the task. Its actions are initially random and then improve over time. In many cases this takes a very long time, which is problematic in real-world environments, since the robot must be kept in operation during the entire training process. At the same time, varied training environments are important for generalization to new environments, which is difficult to achieve in the real world. Instead, we use a simulated environment in Paper E to accelerate the training process, where we exploit randomly generated environments. To then simplify the transition from simulation to reality, we fine-tune the model in a semi-virtual indoor environment in Paper F.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2024. p. 57
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2397
National Category
Robotics and automation
Identifiers
urn:nbn:se:liu:diva-207606 (URN)
10.3384/9789180757218 (DOI)
9789180757201 (ISBN)
9789180757218 (ISBN)
Public defence
2024-10-11, Ada Lovelace, B-building, Campus Valla, Linköping, 10:15 (English)
Available from: 2024-09-13 Created: 2024-09-13 Last updated: 2025-08-21. Bibliographically approved
2. Robust Visual Learning across Class Imbalance and Distributional Shift
2025 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Computer vision aims to equip machines with perceptual understanding—detecting, recognizing, localizing, and relating visual entities to existing sources of knowledge. Machine learning provides the mechanism: models learn representations and decision rules from data and are expected to generalize beyond the training distribution. These systems already support biodiversity monitoring, autonomous driving, and geospatial mapping. In practice, however, textbook assumptions break down: the concept space is vast, data is sparse and imbalanced, many categories are rare, and high-quality annotations are costly. In addition, deployment conditions shift over time—class frequencies and visual domains evolve—biasing models toward frequent scenarios and eroding reliability.

In this work, we develop methods for training reliable visual recognition models under more realistic conditions: class imbalance, limited labeled data, and distribution shift. Our contributions span three themes: (1) debiasing strategies for imbalanced classification that remain reliable under changes in class priors; (2) semi-supervised learning techniques tailored to imbalanced data to reduce annotation cost while preserving minority-class performance; and (3) a unified multimodal retrieval approach for remote sensing (RS) that narrows the domain gap.

In Paper A, we study long-tailed image recognition, where skewed training data biases classifiers toward frequent classes. During deployment, changes in class priors can further amplify this bias. We propose an ensemble of skill-diverse experts, each trained under a distinct target prior, and aggregate their predictions to balance head and tail performance. We theoretically show that the ensemble’s prior bias equals the mean expert bias and that choosing complementary target priors cancels it, yielding an unbiased predictor that minimizes balanced error. With calibrated experts—achieved in practice via Mixup—the ensemble attains state-of-the-art accuracy and remains reliable under label shift.

In Paper B, we investigate long-tailed recognition in the semi-supervised setting, where a small, imbalanced labeled set is paired with a large unlabeled pool. Semi-supervised learning leverages unlabeled data to reduce annotation costs, typically through pseudo-labeling, but the unlabeled class distribution is often unknown and skewed. Naïve pseudo-labeling propagates the labeled bias, reinforcing head classes and overlooking rare ones. We propose a flexible distribution-alignment framework that estimates the unlabeled class mix online and reweights pseudo-labels accordingly, guiding the model first toward the unlabeled distribution to stabilize training and then toward a balanced classifier for fair inference. The proposed approach leverages unlabeled data more effectively, improving accuracy, calibration, and robustness to unknown unlabeled priors.
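One common form of the distribution-alignment step described above can be sketched as follows. The estimated unlabeled prior and the model probabilities are made-up numbers, and the paper's online estimation and scheduling between the unlabeled and balanced targets are not shown; this only illustrates the reweighting of pseudo-labels by a target-to-estimated prior ratio.

```python
import numpy as np

num_classes = 4

# Running estimate of the unlabeled class distribution, e.g. from a
# moving average over recent pseudo-labels (values are illustrative).
estimated_unlabeled_prior = np.array([0.55, 0.25, 0.15, 0.05])
target_prior = np.full(num_classes, 1.0 / num_classes)  # balanced inference

# Model probabilities for one unlabeled sample; the head class (0)
# dominates, as expected from a model trained on skewed labels.
p = np.array([0.60, 0.25, 0.10, 0.05])

# Distribution alignment: rescale by target/estimated prior and
# renormalize, so over-represented head classes are down-weighted
# and rare classes boosted before the pseudo-label is taken.
aligned = p * (target_prior / estimated_unlabeled_prior)
aligned /= aligned.sum()
pseudo_label = int(aligned.argmax())
```

Here the tail class (index 3) gains probability mass at the head class's expense, which is precisely how naive pseudo-labeling's head bias is counteracted.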

In Paper C, we move beyond recognition to unified multimodal retrieval for remote sensing—a domain with scarce image–text annotations and a challenging shift from natural images. Prior solutions are fragmented: RS dual encoders lack interleaved input support; universal embedders miss spatial metadata and degrade under domain shift; and RS generative assistants reason over regions but lack scalable retrieval. To overcome these limitations, we introduce VLM2GeoVec, a single-encoder, instruction-following embedder that aligns images, text, regions, and geocoordinates in a shared space. For comprehensive evaluation, we also propose RSMEB, a unified retrieval benchmark that spans conventional tasks (e.g., classification, cross-modal retrieval) and novel interleaved tasks (e.g., visual grounding, spatial localization, semantic geo-localization). In RSMEB, VLM2GeoVec narrows the domain gap relative to universal embedders and matches specialized baselines in conventional tasks in zero-shot settings. It further enables interleaved spatially-aware search, delivering several-fold gains in metadata-aware RS applications.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2025. p. 67
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2487
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:liu:diva-219564 (URN)
10.3384/9789181183085 (DOI)
9789181183078 (ISBN)
9789181183085 (ISBN)
Public defence
2025-12-17, Zero, Zenit Building, Campus Valla, Linköping, 09:15 (English)
Note

Funding agency: The Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation

Available from: 2025-11-18 Created: 2025-11-18 Last updated: 2025-11-18. Bibliographically approved

Open Access in DiVA

No full text in DiVA

By author/editor
Sanchez Aimar, Emanuel; Jonnarth, Arvid; Felsberg, Michael; Kuhlmann, Marco

By organisation
Computer Vision; Faculty of Science & Engineering; Artificial Intelligence and Integrated Computer Systems
