Compiler Testing by Random Source Code Generation
Linköping University, Department of Computer and Information Science, Software and Systems.
2023 (English). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
Alternative title: Kompilatortestning genom slumpmässig källkodsgenerering (Swedish)
Abstract [en]

Most software today is written in high-level programming languages. Compilers translate programs written in these languages into machine code that can execute on actual hardware. Ensuring that these compilers function correctly is therefore paramount. Manually written test suites verify that a compiler behaves correctly for some inputs, but can never hope to cover every possible use case. Thus, it is of interest to investigate how other testing techniques can be applied.

This project aimed to implement a random test program generator for Configura Magic (CM), a proprietary programming language used at Configura. Our tool is inspired by the widely successful C program generator Csmith. It is implemented by randomly generating an abstract syntax tree (AST) and unparsing it to produce correct code.
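The generate-then-unparse approach described above can be illustrated with a small sketch. This is a hypothetical, simplified example, not the thesis's actual implementation: the node classes, probabilities, and grammar (plain arithmetic expressions) are all illustrative assumptions. A real generator like the one described would cover the full language grammar and track scopes, types, and declared names.

```python
import random

# Illustrative AST node types (not from the thesis's codebase).
class Num:
    def __init__(self, value):
        self.value = value

class BinOp:
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

def gen_expr(depth, rng):
    """Randomly generate an expression AST.

    Bounding the recursion depth guarantees termination; the 0.3
    probability of stopping early keeps generated trees varied in size.
    """
    if depth == 0 or rng.random() < 0.3:
        return Num(rng.randint(0, 9))
    op = rng.choice(["+", "-", "*"])
    return BinOp(op, gen_expr(depth - 1, rng), gen_expr(depth - 1, rng))

def unparse(node):
    """Turn the AST back into source text.

    Fully parenthesizing every binary operation means the emitted text
    is always syntactically valid, regardless of operator precedence.
    """
    if isinstance(node, Num):
        return str(node.value)
    return f"({unparse(node.left)} {node.op} {unparse(node.right)})"

rng = random.Random(42)
program = unparse(gen_expr(3, rng))
print(program)  # prints one randomly generated, syntactically valid expression
```

Because the program is built as a tree first, the generator can enforce well-formedness by construction; the unparser only has to serialize a tree that is already correct, which is what makes every generated test input acceptable to the compiler's parser.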

Our tool found about three bugs in the CM compiler, the Configura Virtual Machine (CVM), during its development. CVM was instrumented to collect code coverage data after compiling programs. Compiling the CVM test suite (CTS) and Configura's main product CET (Configura Extension Technology) covers about 23% and 19% of the compiler respectively, while compiling programs generated by our tool covers only about 6%. On the other hand, our generated programs uniquely cover about 0.2% of the compiler that is covered by neither CTS nor CET.

A backend for our tool that generates C code was also implemented, in order to compare it against Csmith. The results show that on average (100 program generations repeated 30 times, for a total of 3000 programs), our tool achieves about 45% coverage of the small C compiler TinyCC, while Csmith achieves about 50%. Although our tool was only mildly successful in finding bugs, the comparison with Csmith shows its potential to become even more effective.

Place, publisher, year, edition, pages
2023, p. 39
Keywords [en]
Compiler testing, random testing, fuzz testing, random program generation, automated testing
National Category
Computer Sciences; Software Engineering
Identifiers
URN: urn:nbn:se:liu:diva-195872
ISRN: LIU-IDA/LITH-EX-A--23/071--SE
OAI: oai:DiVA.org:liu-195872
DiVA, id: diva2:1776161
External cooperation
Configura
Subject / course
Computer Engineering
Available from: 2023-06-28. Created: 2023-06-27. Last updated: 2023-06-28. Bibliographically approved.

Open Access in DiVA

fulltext (396 kB), 448 downloads
File information
File name: FULLTEXT01.pdf
File size: 396 kB
Checksum (SHA-512): 30638afa65bd8ae3888ace301727422dac7d445f25e5fb55ff14afa3ab69abbce404edad0234505bfd1917bcf30147bf5aa30540be10ab858f3d4150e160ec2c
Type: fulltext
Mimetype: application/pdf

By organisation
Software and Systems
Computer Sciences; Software Engineering

Search outside of DiVA

Google
Google Scholar
Total: 448 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are now no longer available.

Total: 441 hits