2025 (English) Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]
The Beluga Challenge, recently organized by the Tuples consortium, offered a track on explainable planning (XAIP), to the best of our knowledge the first XAIP competition to date. Within the setting of the Beluga logistics domain, participants were given a planning task and a plan, and were asked to answer a query explaining to a human expert certain choices made in the plan. The queries ask about particular state atoms that were achieved and their alternatives (“why achieve this atom A instead of that atom B?”), about action reordering (“can I do A before B instead?”), or about the consequences of object removal (“what happens if we forbid the use of object X?”). In this work, we propose counterfactual reasoning to produce explanations that answer these queries. We design task reformulations, i.e., modifications that alter the input planning task, such that the solutions to the modified task explain the choices made in the initial plan. Our framework generalizes the queries posed in the Beluga Challenge. To obtain textual explanations, we employ a large language model (LLM), which allows our system to be used without planning-specific knowledge. We empirically show that solving the modified task is similarly hard to finding a plan for the original task, demonstrating that our approach is efficient enough for practical use.
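The abstract's idea of a task reformulation can be illustrated on the simplest query type, “what happens if we forbid object X?”: modify the task so that no action involving X is available, then check whether the goal is still reachable. The sketch below is a minimal toy in a STRIPS-like encoding; all names (the crate/rack task, `forbid_object`, `solvable`) are hypothetical illustrations, not the paper's actual Beluga formalism or reformulation operators.

```python
from collections import deque

# A STRIPS-style action: (name, preconditions, add effects, delete effects),
# with atoms represented as strings and sets of atoms as frozensets.

def forbid_object(actions, obj):
    """Counterfactual reformulation for 'forbid object X': drop every
    action whose name mentions the forbidden object (toy encoding)."""
    return [a for a in actions if obj not in a[0]]

def solvable(init, goal, actions, limit=10000):
    """Blind breadth-first search: is any plan reaching the goal?"""
    seen = {init}
    queue = deque([init])
    while queue and limit > 0:
        limit -= 1
        state = queue.popleft()
        if goal <= state:
            return True
        for name, pre, add, dele in actions:
            if pre <= state:
                nxt = (state - dele) | add
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# Toy task: load crate c1 onto truck t, either directly from rack r1
# or via an intermediate rack r2.
A = [
    ("load-c1-r1-t",  frozenset({"at-c1-r1"}), frozenset({"in-c1-t"}),  frozenset({"at-c1-r1"})),
    ("move-c1-r1-r2", frozenset({"at-c1-r1"}), frozenset({"at-c1-r2"}), frozenset({"at-c1-r1"})),
    ("load-c1-r2-t",  frozenset({"at-c1-r2"}), frozenset({"in-c1-t"}),  frozenset({"at-c1-r2"})),
]
init, goal = frozenset({"at-c1-r1"}), frozenset({"in-c1-t"})

print(solvable(init, goal, A))                        # True: original task
print(solvable(init, goal, forbid_object(A, "r2")))   # True: direct load remains
print(solvable(init, goal, forbid_object(A, "r1")))   # False: crate unreachable
```

Solving the reformulated task answers the query: if it is still solvable, the alternative plan itself is the explanation; if not, the infeasibility explains why the original choice was necessary.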
National subject category
Artificial intelligence
Identifiers
urn:nbn:se:liu:diva-220250 (URN)
Conference
International Conference on Planning and Scheduling (ICAPS) 2025 Workshop on Human-Aware and Explainable Planning (HAXP)
2026-01-05 / 2026-01-05 / 2026-01-07