Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/48113
Full metadata record
DC Field: Value
dc.contributor.advisor: Van den Bussche, Jan
dc.contributor.advisor: Vansummeren, Stijn
dc.contributor.author: STEEGMANS, Juno
dc.date.accessioned: 2026-01-14T10:38:02Z
dc.date.available: 2026-01-14T10:38:02Z
dc.date.issued: 2026
dc.date.submitted: 2026-01-13T13:57:33Z
dc.identifier.uri: http://hdl.handle.net/1942/48113
dc.description.abstract: Database theory has a tradition of determining the expressive power of both models and query languages over these models. In this thesis we explore the expressive power of query languages on neural networks. To this end we define two languages, both inspired by first-order logic. First-order logic over the reals naturally yields a language that treats the network as a black box, allowing only the input-output function defined by the network to be queried. On the other hand, we define a white-box language by viewing the network as a weighted graph and extending first-order logic with summation and weight terms. In general, these two approaches are incomparable in expressive power. However, under natural circumstances the white-box language subsumes the black-box language: namely, for linear constraints, when querying feedforward neural networks with a fixed number of hidden layers and piecewise linear activation functions. In an attempt to repeat this result without knowing the number of hidden layers, we add recursion to our white-box language, which we show makes it more expressive than the original white-box language. We show that there are two natural semantics for this language and that they are equally expressive. However, this is still not expressive enough to repeat our previous proof in the same manner. To this end we add recursion to the white-box language in a different manner, where we extend and repeatedly expand an input structure, allowing new symbols to be created, together with repeatedly creating new domain values. We then define a variant of this language that is guaranteed to terminate within a finite amount of time, which is sufficient to repeat our proof without the additional restriction on the network. Next, we explore the expressive power of graph neural networks, specifically message-passing graph neural networks. In particular, we investigate their power to transform the numerical features stored in the nodes of the graph, focusing on global expressive power, i.e., uniformly over all input graphs, and on graphs of bounded degree with features from a bounded domain. To this end we introduce the notion of a global feature map transformer, and a basic language for it called MPLang, which we use as a yardstick for expressiveness. Every message-passing neural network can be expressed in MPLang, and we show to what extent the converse inclusion holds. We consider exact versus approximate expressiveness, the use of arbitrary activation functions, and the case where only the ReLU activation function is allowed. Finally, we seek to exploit some of the previously known results on the expressive power of graph neural networks to improve their training. This training is known to pose challenges for memory-constrained devices, such as GPUs. We therefore look at exact compression as a method of reducing the memory requirements of learning on large graphs. More precisely, we propose a compression method, inspired by the limits of the expressive power of graph neural networks, that transforms the original learning problem into a compressed learning problem that we prove is equivalent.
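As concrete context for the black-box versus white-box distinction in the abstract above, here is a minimal illustrative sketch, not taken from the thesis; the toy weights and the helper names relu and forward are hypothetical. A black-box query may only probe the input-output function of a feedforward ReLU network, while a white-box query sees the network as a weighted graph and may, for example, sum over its weights.

```python
# Illustrative sketch only: a toy feedforward network with one hidden
# layer of ReLU units, given explicitly by its weights and biases.
W1 = [[1.0, -2.0], [0.5, 3.0]]   # hidden-layer weight matrix
b1 = [0.0, 1.0]                  # hidden-layer biases
W2 = [[2.0, -1.0]]               # output-layer weight matrix
b2 = [0.5]                       # output-layer bias

def relu(x):
    return max(0.0, x)

def forward(x):
    # Black-box view: only this input-output function is observable.
    hidden = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

# A black-box query can only evaluate forward(), e.g. "is the output
# at input (1, 1) positive?"
print(forward([1.0, 1.0])[0] > 0)

# A white-box query treats the network as a weighted graph and may use
# summation terms over its weights, e.g. the total weight of all edges,
# which cannot be determined from input-output behavior alone.
print(sum(w for row in W1 + W2 for w in row))
```

The last query also hints at why the two languages are in general incomparable: the summation depends on the network's representation as a weighted graph, not merely on the function it computes.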
dc.language.iso: en
dc.title: Logical Aspects of Neural Networks: Query Languages, Expressiveness, and Equivalence
dc.type: Theses and Dissertations
local.format.pages: 160
local.bibliographicCitation.jcat: T1
local.type.refereed: Non-Refereed
local.type.specified: Phd thesis
local.provider.type: Pdf
local.uhasselt.international: no
item.accessRights: Embargoed Access
item.embargoEndDate: 2031-01-10
item.fulltext: With Fulltext
item.contributor: STEEGMANS, Juno
item.fullcitation: STEEGMANS, Juno (2026) Logical Aspects of Neural Networks: Query Languages, Expressiveness, and Equivalence.
Appears in Collections: Research publications
Files in This Item:
File: PhD Thesis Juno Steegmans.pdf (embargoed until 2031-01-10)
Description: Published version
Size: 1.52 MB
Format: Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.