Please use this identifier to cite or link to this item:
http://hdl.handle.net/1942/48113

| Title: | Logical Aspects of Neural Networks: Query Languages, Expressiveness, and Equivalence |
|---|---|
| Authors: | STEEGMANS, Juno |
| Advisors: | Van den Bussche, Jan; Vansummeren, Stijn |
| Issue Date: | 2026 |
| Document URI: | http://hdl.handle.net/1942/48113 |
| Category: | T1 |
| Type: | Theses and Dissertations |

Abstract:

Database theory has a tradition of determining the expressive power both of data models and of query languages over these models. In this thesis we explore the expressive power of query languages over neural networks. To this end we define two languages, both inspired by first-order logic. First-order logic over the reals naturally yields a language that treats the network as a black box, allowing only the input-output function defined by the network to be queried. On the other hand, we define a white-box language by viewing the network as a weighted graph and extending first-order logic with summation and weight terms. In general, these two approaches are incomparable in expressive power. However, under natural circumstances the white-box language subsumes the black-box language: namely, for linear constraints, when querying feedforward neural networks with a fixed number of hidden layers and piecewise linear activation functions.

In an attempt to repeat this result without fixing the number of hidden layers, we add recursion to our white-box language, which we show makes it more expressive than the original white-box language. We show that there are two natural semantics for this language and that they are equally expressive. However, the language is still not expressive enough to repeat our previous proof in the same manner. We therefore add recursion to the white-box language in a different way, repeatedly expanding an input structure so that new symbols and new domain values can be created. We then define a variant of this language that is guaranteed to terminate in finite time, which is sufficient to repeat our proof without the restriction to a fixed number of hidden layers.

We also explore the expressive power of graph neural networks, specifically message-passing graph neural networks. In particular, we investigate their power to transform the numerical features stored in the nodes of a graph. We focus on global expressive power, i.e., expressiveness uniformly over all input graphs, and on graphs of bounded degree with features from a bounded domain. To this end we introduce the notion of a global feature map transformer, together with a basic language for such transformers called MPLang, which we use as a yardstick for expressiveness. Every message-passing neural network can be expressed in MPLang, and we show to what extent the converse inclusion holds. We consider exact versus approximate expressiveness, the use of arbitrary activation functions, and the case where only the ReLU activation function is allowed.

Finally, we seek to exploit previously known results on the expressive power of graph neural networks to improve their training. Training is known to pose challenges for memory-constrained devices such as GPUs, so we look at exact compression as a method of reducing the memory requirements of learning on large graphs. More precisely, we propose a compression method, inspired by the limits of the expressive power of graph neural networks, that transforms the original learning problem into a compressed learning problem which we prove to be equivalent.
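To make the black-box versus white-box distinction in the abstract concrete, here is a minimal sketch, not taken from the thesis: a small ReLU feedforward network represented as a weighted graph (the kind of structure a white-box language with summation and weight terms can inspect), together with the input-output function it computes (all that a black-box language can query). The node names, weights, and the `evaluate` helper are illustrative choices of mine.

```python
from typing import Dict, List, Tuple

# White-box view: the network as a weighted graph with layers, weighted edges, biases.
nodes: List[List[str]] = [["x1", "x2"], ["h1", "h2"], ["y1"]]
weights: Dict[Tuple[str, str], float] = {
    ("x1", "h1"): 1.0, ("x2", "h1"): -1.0,
    ("x1", "h2"): -1.0, ("x2", "h2"): 1.0,
    ("h1", "y1"): 1.0, ("h2", "y1"): 1.0,
}
bias: Dict[str, float] = {"h1": 0.0, "h2": 0.0, "y1": 0.0}

def relu(v: float) -> float:
    return max(0.0, v)

def evaluate(inputs: Dict[str, float]) -> Dict[str, float]:
    """Black-box view: only the input-output function defined by the weighted graph."""
    value = dict(inputs)
    for layer in nodes[1:]:
        for node in layer:
            value[node] = relu(bias[node] + sum(
                weights[(src, node)] * value[src]
                for src in list(value) if (src, node) in weights
            ))
    return {out: value[out] for out in nodes[-1]}

# This particular graph computes |x1 - x2| with ReLU units; a black-box query can
# only constrain such input-output values, whereas a white-box query may also
# quantify over nodes and sum over edge weights.
print(evaluate({"x1": 3.0, "x2": 5.0}))  # {'y1': 2.0}
```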
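The notion of a global feature map transformer can likewise be illustrated with a small self-contained sketch. The single aggregation-and-ReLU round and the coefficients `a`, `b`, `c` below are my own stand-ins for learned parameters, not MPLang syntax; the point is only that one message-passing round maps a node-feature assignment to a new one, uniformly over all (bounded-degree) input graphs.

```python
from typing import Dict, List

def mp_round(adj: Dict[int, List[int]], feat: Dict[int, float],
             a: float, b: float, c: float) -> Dict[int, float]:
    """One message-passing round with ReLU: x'_v = relu(a*x_v + b*sum_{u~v} x_u + c)."""
    return {
        v: max(0.0, a * feat[v] + b * sum(feat[u] for u in adj[v]) + c)
        for v in adj
    }

# A path graph 0 - 1 - 2 with initial features; the same code works unchanged on
# any input graph, which is the "global" (graph-uniform) aspect of the transformer.
adj = {0: [1], 1: [0, 2], 2: [1]}
feat = {0: 1.0, 1: 0.0, 2: 2.0}
print(mp_round(adj, feat, a=1.0, b=0.5, c=0.0))  # {0: 1.0, 1: 1.5, 2: 2.0}
```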
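Finally, the exact-compression idea rests on a known expressiveness limit: message-passing graph neural networks cannot distinguish nodes that colour refinement (1-WL) cannot distinguish. The sketch below is an illustration under that assumption, not the thesis' construction: it computes the stable colouring, and nodes sharing a colour could then be merged into a single representative in a compressed learning problem.

```python
from typing import Dict, List

def colour_refinement(adj: Dict[int, List[int]],
                      colour: Dict[int, int]) -> Dict[int, int]:
    """Refine node colours by the multiset of neighbour colours until stable."""
    while True:
        signature = {
            v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
            for v in adj
        }
        # Re-index the signatures as fresh integer colours.
        fresh = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colour = {v: fresh[signature[v]] for v in adj}
        # Each step only splits classes, so an unchanged class count means stability.
        if len(set(new_colour.values())) == len(set(colour.values())):
            return new_colour
        colour = new_colour

# A star with two leaves: the leaves are indistinguishable, so any message-passing
# GNN assigns them identical features and they can share one compressed node.
adj = {0: [1, 2], 1: [0], 2: [0]}
print(colour_refinement(adj, {v: 0 for v in adj}))  # {0: 1, 1: 0, 2: 0}
```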
| Appears in Collections: | Research publications |
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| PhD Thesis Juno Steegmans.pdf (embargoed until 2031-01-10) | Published version | 1.52 MB | Adobe PDF |