A Graph-Based Perspective on Neural Networks
Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Faculty of Science & Engineering.
2026 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

The empirical success of deep learning in a wide range of applications over the last decade has been remarkable. Neural networks can now achieve human-like or superhuman performance at tasks such as image recognition and segmentation, speech recognition, and natural language generation.

Despite decades of research dedicated to understanding how such models learn, there are still many unresolved questions. For instance, neural networks are often severely overparameterized, sometimes with many more parameters than training samples, which, according to intuition from classical theory, should lead to high sensitivity to noise and poor performance on new data. Yet with enough parameters or training, this issue can be overcome, even without explicit regularization. Understanding the implicit biases of training and the induced behavior of neural networks is an important puzzle piece in understanding how these models learn so efficiently.

This thesis emphasizes the ‘network’ part of neural networks, and uses tools from graph theory to view this class of models from a new perspective that adds to our understanding of their inner workings.

The first paper treats deep linear neural networks, that is, neural networks in which the nonlinear activations have been removed. The gradient flow equations describing the network's learning process form an analytically tractable dynamical system, and although it is a simplified model, a deep linear network shares several interesting features with its nonlinear counterpart, such as a non-convex loss function and nonlinear dynamics induced by the overparameterization. The network is considered as a directed acyclic graph and the learning dynamics are described in terms of its adjacency matrix. This reformulation simplifies the gradient flow equations and provides insight into the system's properties. For instance, it allows us to highlight an equivalence relation among adjacency matrices, and to investigate the stable and unstable manifolds at the critical points of the system without needing to compute the Hessian of the loss function.
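
To make the setting concrete, the following is the standard formulation of these gradient flow equations from the deep linear network literature; it is a sketch in common notation, and the paper's own adjacency-matrix notation may differ. With layer weight matrices W_1, ..., W_N and a target map Phi, the end-to-end map, loss, and flow read:

% Standard gradient flow for a deep linear network (sketch; not
% necessarily the thesis's own notation).
\[
  W = W_N W_{N-1} \cdots W_1, \qquad
  L(W_1, \dots, W_N) = \tfrac{1}{2}\,\lVert W - \Phi \rVert_F^2,
\]
% Flow on each factor: a coupled, nonlinear (polynomial) system in the
% W_i, even though the network itself computes a linear map.
\[
  \dot{W}_i = -\frac{\partial L}{\partial W_i}
            = -\,(W_N \cdots W_{i+1})^{\top}\,(W - \Phi)\,(W_{i-1} \cdots W_1)^{\top},
  \qquad i = 1, \dots, N.
\]

This coupling between factors is the overparameterization-induced nonlinearity mentioned above. One natural encoding of the adjacency-matrix view is to place the W_i as the nonzero blocks of the adjacency matrix A of the layered directed acyclic graph, in which case the end-to-end map W appears as a corner block of A^N.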

The second paper applies the concept of frustration from statistical physics to deep neural networks, relating frustration to the monotonicity of the network when viewed as a function. It is shown that state-of-the-art convolutional neural networks trained on image classification tasks are less frustrated, and thus closer to monotone functions, than expected from null models. This suggests an implicit bias in the kind of functions they learn.
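
As a minimal illustration of the frustration concept (code written for this summary, not taken from the thesis; the helper name frustration_index and the brute-force strategy are illustrative choices), the sketch below computes the frustration index of a tiny signed graph. A signed graph is structurally balanced exactly when some switching s in {-1, +1}^n turns all edges positive; the frustration index is the smallest number of negative edges left over all 2^n switchings, so brute force is only feasible for toy graphs.

import itertools
import numpy as np

def frustration_index(J):
    # Exact frustration index of a small signed graph.
    # J: symmetric matrix with entries in {-1, 0, +1}.
    # Tries every switching s -> diag(s) J diag(s) and returns the
    # minimum number of negative edges; exponential in n, so this is
    # an illustration, not a method for real networks.
    n = J.shape[0]
    best = np.inf
    for s in itertools.product([-1, 1], repeat=n):
        switched = np.outer(s, s) * J
        neg = np.sum(switched[np.triu_indices(n, k=1)] < 0)
        best = min(best, neg)
    return int(best)

# Balanced triangle (the two negative edges can be switched away): 0.
balanced = np.array([[0, 1, -1], [1, 0, -1], [-1, -1, 0]])
# Unbalanced triangle (odd number of negative edges on the cycle): 1.
unbalanced = np.array([[0, 1, 1], [1, 0, -1], [1, -1, 0]])
print(frustration_index(balanced), frustration_index(unbalanced))  # 0 1

Frustration 0 corresponds to structural balance; low frustration for the signed graph of a trained network is what the paper links to near-monotone input-output behavior.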

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2026, p. 36
Series
Linköping Studies in Science and Technology. Licentiate Thesis, ISSN 0280-7971 ; 2028
National Category
Computer Sciences; Control Engineering
Identifiers
URN: urn:nbn:se:liu:diva-221215
DOI: 10.3384/9789181184822
ISBN: 9789181184815 (print)
ISBN: 9789181184822 (electronic)
OAI: oai:DiVA.org:liu-221215
DiVA, id: diva2:2038316
Presentation
2026-03-13, Ada Lovelace, B-huset, Campus Valla, Linköping, 10:15
Available from: 2026-02-13 Created: 2026-02-13 Last updated: 2026-02-13 Bibliographically approved
List of papers
1. Computing frustration and near-monotonicity in deep neural networks
(English) Manuscript (preprint) (Other academic)
Abstract [en]

For the signed graph associated to a deep neural network, one can compute the frustration level, i.e., test how close or distant the graph is to structural balance. For all the pretrained deep convolutional neural networks we consider, we find that the frustration is always less than expected from null models. From a statistical physics point of view, and in particular in reference to an Ising spin glass model, the reduced frustration indicates that the amount of disorder encoded in the network is less than in the null models. From a functional point of view, low frustration (i.e., proximity to structural balance) means that the function representing the network behaves near-monotonically, i.e., more similarly to a monotone function than in the null models. Evidence of near-monotonic behavior along the partial order determined by frustration is observed for all networks we consider. This confirms that the class of deep convolutional neural networks tends to have a more ordered behavior than expected from null models, and suggests a novel form of implicit regularization.
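
To make the null-model comparison above concrete, here is a small sketch under explicit assumptions: the record does not specify the paper's null models, so sign-shuffling while keeping the graph topology and weight magnitudes is used here as one common choice, and the frustration_index helper from the earlier sketch is reused.

import numpy as np

def shuffled_sign_null(J, rng):
    # Null model (illustrative choice, possibly not the paper's):
    # keep topology and magnitudes, randomly permute the signs
    # among the existing edges.
    J = J.astype(float).copy()
    i, j = np.triu_indices_from(J, k=1)
    edge = J[i, j] != 0
    signs = np.sign(J[i, j][edge])
    rng.shuffle(signs)
    new = np.abs(J[i, j])
    new[edge] *= signs
    J[i, j] = new
    J[j, i] = new
    return J

rng = np.random.default_rng(0)
J = np.array([[0, 1, 1], [1, 0, -1], [1, -1, 0]])  # toy signed graph
observed = frustration_index(np.sign(J).astype(int))
nulls = [frustration_index(np.sign(shuffled_sign_null(J, rng)).astype(int))
         for _ in range(200)]
print(observed, np.mean(nulls))
# "Reduced frustration" in the paper's sense would show up as the
# observed value lying below the null distribution.

For the actual convolutional networks, the exact brute-force helper would of course have to be replaced by a scalable bound or heuristic; this sketch only fixes the shape of the comparison.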

Keywords
Disordered Systems and Neural Networks, Machine Learning
National Category
Control Engineering; Artificial Intelligence
Identifiers
URN: urn:nbn:se:liu:diva-221214
DOI: 10.48550/arXiv.2510.05286
Available from: 2026-02-13 Created: 2026-02-13 Last updated: 2026-02-13

Open Access in DiVA

fulltext (1964 kB), 47 downloads
File information
File name: FULLTEXT01.pdf
File size: 1964 kB
Checksum: SHA-512
ffe7c8fdda4e307d05fa23d2eff3f3aaec8216c255a28bf3313d0b3811205eca2890b3101ab54092e7a74265f32848a9b38ef3ec801a878a0f4782b2aab10610
Type: fulltext
Mimetype: application/pdf

Authority records

Wendin, Joel
