Please use this identifier to cite or link to this item:
http://hdl.handle.net/1942/46169
Title: Lab-scale Machine Learning: Tales of the good, the bad and the average
Authors: Vanpoucke, Danny E.P.
Issue Date: 2025
Source: MateriNex, Vestar, Antwerpen, 2025, May 27
Abstract: Machine Learning and Artificial Intelligence are presented as the fix-all for present-day problems, and in research, too, they are experiencing a golden age. However, before a Machine Learning model can be created, an enormous quantity of training data needs to be generated. This stands in stark contrast to typical academic and industrial lab-scale data sets resulting from research projects, which give rise to small or even extremely small data sets (< 50 samples). This makes many of us wonder: "Is it possible to train an ML model with 30 samples instead of 30,000,000?" Using some simple regression models, I'll show that these can succeed in creating a suitable model in the (very) small data regime. Real-life use cases involving adhesive coatings, soluble inks, and spray coating are discussed. I'll present a strategy for consistently obtaining the best model and highlight caveats and ways to deal with them. [1] "A machine learning approach for the design of hyperbranched polymeric dispersing agents based on aliphatic polyesters for radiation curable inks",
Document URI: http://hdl.handle.net/1942/46169
Category: C2
Type: Conference Material
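The abstract states that simple regression models can succeed in the (very) small data regime, without specifying which models or validation strategy the talk uses. As a minimal, self-contained sketch (an illustrative assumption, not the author's workflow), the snippet below fits a ridge regression to a hypothetical 30-sample data set and judges it with leave-one-out cross-validation, a common choice when every sample is too valuable to lock away in a large hold-out set.

```python
# Minimal sketch: a simple regression model on a (very) small data set (~30 samples),
# scored with leave-one-out cross-validation. The model (Ridge), the descriptors,
# and the synthetic data are illustrative assumptions, not the talk's actual setup.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(seed=0)

# Hypothetical lab-scale data set: 30 samples, 3 measured descriptors.
X = rng.uniform(size=(30, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=30)

model = Ridge(alpha=1.0)

# Leave-one-out CV: each of the 30 samples serves once as the test point.
scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
print(f"LOO mean absolute error: {-scores.mean():.3f}")
```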
Appears in Collections: Research publications
Files in This Item:
File | Description | Size | Format
---|---|---|---
2025_MateriNex_DEPVanpoucke_Lab-scale Machine Learning.pdf | Conference material | 130.32 kB | Adobe PDF