Towards automated assessment of team performance by mimicking expert observers' ratings
Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering. Swedish Def Res Agcy, Sweden.
2019 (English). In: Cognition, Technology & Work, ISSN 1435-5558, E-ISSN 1435-5566, Vol. 21, no 2, p. 253-274. Article in journal (Refereed). Published.
Abstract [en]

Automation is the holy grail of performance assessment: cheap, reliable automated systems that produce consistent feedback on performance. Many such systems have been proposed that accurately measure the state of a product or the outcome of a process. Procedural faults can be detected and even mitigated without the need for human interference. In the production industry and in professional sports, this is a natural part of business. However, in macrocognitive team performance studies, human appraisal is still king. This study investigates the reliability of human observers as assessors of performance among virtual teams, and what they base their assessments on when they can monitor only one team member at a time. The results show that expert observers put considerable emphasis on task outcomes and on communication, and are generally reliable raters of team performance, but there are several aspects that they cannot rate reliably under these circumstances, e.g., team workload, stress, and collaborative problem-solving. Through simple algorithms, this study shows that by capturing task scores and various quantitative communication metrics, team performance ratings can be estimated that closely match how expert observers assess team performance in a virtual team setting. The implication of the study is that numeric team performance estimations can be acquired by automated systems with reasonable accuracy and reliability compared to observer ratings.
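To make the abstract's claim concrete, the sketch below shows one way a "simple algorithm" could combine a task score with quantitative communication metrics into a numeric rating. The paper's actual features, weights, and rating scale are not given in this record; every name, coefficient, and normalization below is an illustrative assumption, not the authors' method.

```python
def estimate_team_rating(task_score, messages_per_minute, reply_latency_s,
                         w_task=0.6, w_comm=0.3, w_latency=0.1):
    """Hypothetical sketch: blend a normalized task score with simple
    communication metrics into a single rating on an assumed 1-7 scale.
    Weights and metric choices are placeholders, not from the paper."""
    # Normalize the assumed raw inputs to [0, 1].
    comm = min(messages_per_minute / 10.0, 1.0)       # cap at 10 msg/min
    latency = max(0.0, 1.0 - reply_latency_s / 60.0)  # faster replies score higher
    # Weighted linear combination, then map [0, 1] onto a 1-7 scale.
    combined = w_task * task_score + w_comm * comm + w_latency * latency
    return round(1 + 6 * combined, 1)

print(estimate_team_rating(task_score=0.8, messages_per_minute=6,
                           reply_latency_s=15))  # prints 5.4
```

A linear combination is used here only because it is the simplest estimator consistent with the abstract's wording; any model fitted to observer ratings (e.g., regression on logged task and communication data) would fill the same role.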

Place, publisher, year, edition, pages
Springer London Ltd, 2019. Vol. 21, no 2, p. 253-274
Keywords [en]
Team performance; Performance assessment; Automation
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
URN: urn:nbn:se:liu:diva-157538
DOI: 10.1007/s10111-018-0499-6
ISI: 000467039000006
OAI: oai:DiVA.org:liu-157538
DiVA, id: diva2:1328675
Note

Funding Agencies: Department of the Navy Grant by Office of Naval Research Global [N62909-11-1-7019]

Available from: 2019-06-22 Created: 2019-06-22 Last updated: 2019-06-22

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Search in DiVA

By author/editor
Granåsen, Dennis
By organisation
Department of Computer and Information Science, Faculty of Science & Engineering
In the same journal
Cognition, Technology & Work
