| Abstract: |
Human-machine collaboration requires unambiguous communication to limit misunderstandings. Although semantic interoperability removes ambiguity in machine-to-machine communication, it is insufficient when humans are involved. Humans process and understand information differently based on past experience and the current context, which exceeds the scope of semantic interoperability. Cognitive interoperability aims to achieve an aligned understanding, share intentions, and enable joint decision-making between agents. However, the human's cognitive state is hard to detect and model, which is a major obstacle to cognitive interoperability.
We propose a cognitive Human Digital Twin (cHDT) that emulates a human's cognitive processes by exploiting cognitive architectures. Specifically, ACT-R, a mature cognitive architecture developed from decades of experimental results in cognitive science and neuroscience, is examined as a candidate model. We discuss how the state of an ACT-R model, and thus the cHDT, may contribute to cognitive interoperability.
With a simplified use case, we illustrate how a cHDT hosting a personalised ACT-R model could track and continuously share the human's internal cognitive states. This enables external systems, such as robots, to adapt to human perspectives and avoid resource conflicts in human-robot collaboration. Finally, we discuss the applicability of ACT-R as an emulation model and the components of a cHDT, and outline a two-phase implementation scenario to validate the proposed solution. |