Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/47988
Full metadata record
dc.contributor.author: BOLLEN, Jeroen
dc.contributor.author: VAN DEN BUSSCHE, Jan
dc.contributor.author: VANSUMMEREN, Stijn
dc.contributor.author: VIRTEMA, Jonni
dc.date.accessioned: 2026-01-06T14:33:43Z
dc.date.available: 2026-01-06T14:33:43Z
dc.date.issued: 2025
dc.date.submitted: 2026-01-06T14:07:01Z
dc.identifier.citation: Proceedings of the 22nd International Conference on Principles of Knowledge Representation and Reasoning, IJCAI Organization, p. 175-184
dc.identifier.isbn: 978-1-956792-08-9
dc.identifier.issn: 2334-1033
dc.identifier.uri: http://hdl.handle.net/1942/47988
dc.description.abstract: Graph Neural Networks (GNNs) are a class of machine-learning models that operate on graph-structured data. Their expressive power is intimately related to logics that are invariant under graded bisimilarity. Current proposals for recurrent GNNs either assume that the graph size is given to the model, or suffer from a lack of termination guarantees. In this paper, we propose a halting mechanism for recurrent GNNs. We prove that our halting model can express all node classifiers definable in graded modal mu-calculus, even for the standard GNN variant that is oblivious to the graph size. To prove our main result, we develop a new approximate semantics for graded mu-calculus, which we believe to be of independent interest. We leverage this new semantics into a new model-checking algorithm, called the counting algorithm, which is oblivious to the graph size. In a final step we show that the counting algorithm can be implemented on a halting recurrent GNN.
dc.description.sponsorship: Principles of Knowledge Representation and Reasoning, Incorporated (KR, Inc.). This work was supported by the Bijzonder Onderzoeksfonds (BOF) of Hasselt University, Grant No. BOF20ZAP02; by the Research Foundation Flanders (FWO) under research project Grant No. G019222N; and by the Flanders AI (FAIR) research program.
dc.language.iso: en
dc.publisher: IJCAI Organization
dc.rights: 2025 International Joint Conferences on Artificial Intelligence Organization
dc.subject.other: Recurrent Graph Neural Networks
dc.subject.other: Graded Bisimulation
dc.subject.other: Modal Mu Calculus
dc.title: Halting Recurrent GNNs and the Graded mu-Calculus
dc.type: Proceedings Paper
local.bibliographicCitation.conferencedate: 2025, November 11-17
local.bibliographicCitation.conferencename: 22nd International Conference on Principles of Knowledge Representation and Reasoning
local.bibliographicCitation.conferenceplace: Melbourne, Australia
dc.identifier.epage: 184
dc.identifier.spage: 175
local.bibliographicCitation.jcat: C1
local.type.refereed: Refereed
local.type.specified: Proceedings Paper
dc.identifier.doi: 10.24963/kr.2025/18
local.provider.type: CrossRef
local.bibliographicCitation.btitle: Proceedings of the 22nd International Conference on Principles of Knowledge Representation and Reasoning
local.uhasselt.international: yes
item.fulltext: With Fulltext
item.contributor: BOLLEN, Jeroen
item.contributor: VAN DEN BUSSCHE, Jan
item.contributor: VANSUMMEREN, Stijn
item.contributor: VIRTEMA, Jonni
item.accessRights: Restricted Access
item.fullcitation: BOLLEN, Jeroen; VAN DEN BUSSCHE, Jan; VANSUMMEREN, Stijn & VIRTEMA, Jonni (2025) Halting Recurrent GNNs and the Graded mu-Calculus. In: Proceedings of the 22nd International Conference on Principles of Knowledge Representation and Reasoning, IJCAI Organization, p. 175-184.
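The abstract ties the expressive power of (recurrent) GNNs to logics invariant under graded bisimilarity. On finite graphs, graded bisimilarity corresponds to the partition computed by counting colour refinement (1-WL), which admits a natural halting test: stop as soon as a refinement round no longer splits any colour class. The sketch below is purely illustrative of that idea and is not the paper's counting algorithm or GNN construction; the names `color_refinement` and `partition` are hypothetical.

```python
# Illustrative sketch (not the paper's construction): counting colour
# refinement with an explicit halting test. Nodes with equal final colours
# cannot be separated by graded-bisimulation-invariant node classifiers.

def partition(colors):
    """Group nodes into classes by colour (hypothetical helper)."""
    groups = {}
    for v, c in colors.items():
        groups.setdefault(c, set()).add(v)
    return {frozenset(g) for g in groups.values()}

def color_refinement(adj, init_colors):
    """Refine node colours until the partition stabilises, then halt.

    adj: dict mapping each node to a list of its neighbours.
    init_colors: dict mapping each node to an initial (hashable) colour.
    Returns the stable colouring and the number of rounds performed.
    """
    colors = dict(init_colors)
    rounds = 0
    while True:
        # New signature per node: own colour plus the multiset (sorted
        # tuple) of neighbour colours -- the "graded" information.
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # Canonically relabel signatures to compact integer colours.
        table = {}
        new_colors = {v: table.setdefault(sigs[v], len(table)) for v in adj}
        rounds += 1
        # Halting test: no colour class was split, so a fixed point is reached.
        if partition(new_colors) == partition(colors):
            return new_colors, rounds
        colors = new_colors

# Example: a path 0 - 1 - 2. The two endpoints are graded-bisimilar to
# each other but not to the middle node, which has two neighbours.
adj = {0: [1], 1: [0, 2], 2: [1]}
final, rounds = color_refinement(adj, {v: 0 for v in adj})
```

On a finite graph the partition can be properly refined at most |V|-1 times, so this loop always halts without knowing the graph size in advance; the paper's contribution is obtaining such halting behaviour inside a size-oblivious recurrent GNN itself.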
Appears in Collections: Research publications
Files in This Item:
File: kr2025-0018-bollen-et-al.pdf (Restricted Access)
Description: Published version
Size: 217.25 kB
Format: Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.