A Graph-Based Perspective on Neural Networks
Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Faculty of Science and Engineering.
2026 (English) Licentiate thesis, compilation (Other academic)
Abstract [en]

The empirical success of deep learning in a wide range of applications over the last decade has been remarkable. Neural networks can now achieve human-like or superhuman performance at tasks such as image recognition and segmentation, speech recognition, and natural language generation.

Despite decades of research dedicated to understanding how such models learn, there are still many unresolved questions. For instance, neural networks are often severely overparameterized, sometimes with many more parameters than training samples, which according to intuition from classical theory should lead to high sensitivity to noise and poor performance when encountering new data. Yet with enough parameters or training, one can overcome this issue, even without explicit regularization. Understanding implicit biases in training and the induced behavior of neural networks is an important puzzle piece towards understanding how these models learn so efficiently.

This thesis emphasizes the ‘network’ part of neural networks, and uses tools from graph theory to view this class of models from a new perspective that adds to our understanding of their inner workings.

The first paper treats deep linear neural networks, which are neural networks where the nonlinear activations have been removed. The gradient flow equations describing the network’s learning process form an analytically tractable dynamical system, and although it is a simplified model, a deep linear network shares several interesting features with its nonlinear counterpart, such as a non-convex loss function and nonlinear dynamics induced by the overparameterization. The network is considered as a directed acyclic graph and the learning dynamics are described in terms of its adjacency matrix. This reformulation simplifies the gradient flow equations and provides insight into the system properties. For instance, it allows us to highlight an equivalence relation among adjacency matrices, and to investigate stable and unstable manifolds at the critical points of the system without needing to compute the Hessian of the loss function.
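The gradient flow of such a deep linear network can be sketched in a few lines. The layer count, width, initialization, step size, and target map below are illustrative assumptions rather than the thesis' setup, and the flow is discretized by plain gradient descent:

```python
# Minimal sketch (assumed setup, not the thesis' experiments): gradient
# descent as an Euler discretization of gradient flow for a deep linear
# network W_L ... W_1 fitting a fixed target linear map.
import numpy as np

rng = np.random.default_rng(0)

L, d = 3, 4                                   # number of layers, layer width
# Initialize each layer near the identity to avoid the saddle at the origin.
Ws = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(L)]
target = rng.standard_normal((d, d))          # end-to-end map to fit

def end_to_end(Ws):
    """Product W_L ... W_1 implemented by the linear network."""
    P = np.eye(d)
    for W in Ws:
        P = W @ P
    return P

def loss(Ws):
    return 0.5 * np.linalg.norm(end_to_end(Ws) - target) ** 2

eta = 0.05                                    # step size of the discretization
for step in range(2000):
    E = end_to_end(Ws) - target               # end-to-end residual
    grads = []
    for j in range(L):
        # dL/dW_j = (W_L ... W_{j+1})^T E (W_{j-1} ... W_1)^T
        left = np.eye(d)
        for W in Ws[j + 1:]:
            left = W @ left
        right = np.eye(d)
        for W in Ws[:j]:
            right = W @ right
        grads.append(left.T @ E @ right.T)
    for W, g in zip(Ws, grads):
        W -= eta * g

print(f"final loss: {loss(Ws):.2e}")
```

Even in this toy version, the loss is non-convex in the layer weights and the dynamics are nonlinear because the layers multiply, which is the feature the paper's adjacency-matrix reformulation is built to handle.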

The second paper uses the concept of frustration from statistical physics in the context of deep neural networks, and relates frustration to monotonicity of the network when viewed as a function. It is shown that state-of-the-art convolutional neural networks trained on image classification tasks are less frustrated, and thus closer to monotone functions, than what is expected from null models. This suggests an implicit bias in the kind of function that they learn.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2026, p. 36
Series
Linköping Studies in Science and Technology. Licentiate Thesis, ISSN 0280-7971 ; 2028
National subject category
Computer Sciences; Control Engineering
Identifiers
URN: urn:nbn:se:liu:diva-221215
DOI: 10.3384/9789181184822
ISBN: 9789181184815 (print)
ISBN: 9789181184822 (digital)
OAI: oai:DiVA.org:liu-221215
DiVA id: diva2:2038316
Presentation
2026-03-13, Ada Lovelace, B-huset, Campus Valla, Linköping, 10:15
Available from: 2026-02-13 Created: 2026-02-13 Last updated: 2026-03-17 Bibliographically reviewed
List of papers
1. Computing frustration and near-monotonicity in deep neural networks
(English) Manuscript (preprint) (Other academic)
Abstract [en]

For the signed graph associated to a deep neural network, one can compute the frustration level, i.e., test how close or distant the graph is to structural balance. For all the pretrained deep convolutional neural networks we consider, we find that the frustration is always less than expected from null models. From a statistical physics point of view, and in particular in reference to an Ising spin glass model, the reduced frustration indicates that the amount of disorder encoded in the network is less than in the null models. From a functional point of view, low frustration (i.e., proximity to structural balance) means that the function representing the network behaves near-monotonically, i.e., more similarly to a monotone function than in the null models. Evidence of near-monotonic behavior along the partial order determined by frustration is observed for all networks we consider. This confirms that the class of deep convolutional neural networks tends to have a more ordered behavior than expected from null models, and suggests a novel form of implicit regularization.
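The frustration level described above can be illustrated on a tiny signed graph. The brute-force minimization below is only a sketch of the definition, not the paper's algorithm (which must scale to deep convolutional networks); the example graphs are hypothetical:

```python
# Minimal sketch of the frustration of a signed graph: the minimum number
# of "frustrated" edges over all assignments of a spin s_i in {+1, -1} to
# each node, where an edge (i, j) with sign a_ij is frustrated when
# a_ij * s_i * s_j < 0. Frustration 0 means the graph is structurally
# balanced. This O(2^n) search is illustrative, not the paper's method.
from itertools import product

def frustration(n, signed_edges):
    """Exhaustive minimization over all 2^n spin assignments."""
    best = len(signed_edges)
    for spins in product([1, -1], repeat=n):
        frustrated = sum(1 for i, j, sign in signed_edges
                         if sign * spins[i] * spins[j] < 0)
        best = min(best, frustrated)
    return best

# Balanced triangle (two negative edges, one positive): frustration 0.
print(frustration(3, [(0, 1, -1), (1, 2, -1), (0, 2, 1)]))  # 0
# Unbalanced triangle (exactly one negative edge in a positive cycle):
# no spin assignment satisfies every edge, so frustration is 1.
print(frustration(3, [(0, 1, 1), (1, 2, 1), (0, 2, -1)]))   # 1
```

In the Ising spin glass picture, each frustrated edge is an interaction that cannot be satisfied simultaneously with the others, so lower frustration means less disorder encoded in the network's weights.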

Keywords
Disordered Systems and Neural Networks, Machine Learning
National subject category
Control Engineering; Artificial Intelligence
Identifiers
URN: urn:nbn:se:liu:diva-221214
DOI: 10.48550/arXiv.2510.05286
Available from: 2026-02-13 Created: 2026-02-13 Last updated: 2026-02-13

Open Access in DiVA

fulltext (1964 kB), 87 downloads
File information
File name: FULLTEXT01.pdf
File size: 1964 kB
Checksum (SHA-512): ffe7c8fdda4e307d05fa23d2eff3f3aaec8216c255a28bf3313d0b3811205eca2890b3101ab54092e7a74265f32848a9b38ef3ec801a878a0f4782b2aab10610
Type: fulltext
MIME type: application/pdf

Other links

Publisher's full text

Person

Wendin, Joel

The number of downloads is the sum of downloads for all full texts. It may include earlier versions that are no longer available.
