Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/7988
Full metadata record
DC Field | Value | Language
dc.contributor.author | HOLLANDERS, Goele | -
dc.contributor.author | BEX, Geert Jan | -
dc.contributor.author | GYSSENS, Marc | -
dc.contributor.author | WESTRA, Ronald | -
dc.contributor.author | TUYLS, Karl | -
dc.date.accessioned | 2008-03-13T21:18:05Z | -
dc.date.available | 2008-03-13T21:18:05Z | -
dc.date.issued | 2007 | -
dc.identifier.citation | VAN SOMEREN, Maarten & KATRENKO, Sophia & ADRIAANS, Pieter (Ed.) Proceedings of the 18th Annual Belgian-Dutch Benelearn Conference. p. 30-36. | -
dc.identifier.uri | http://hdl.handle.net/1942/7988 | -
dc.description.abstract | This paper is concerned with the process of learning a sparse interaction network, for example, a gene-protein interaction network. The advantage of the process we propose is that there will always be a student S that fits the teacher T very well, even with a relatively small data set and a high number of unknown components, i.e., when the number of measurements M is significantly smaller than the system size N. To measure the efficiency of this learning process, we use the generalization error, ε_gen, which represents the probability that the student is a good fit to the teacher. From our experiments it follows that the quality of the fit depends on several factors. First, the ratio α = M/N of the number of measurements to the system size has a strong impact: surprisingly, we find that a sudden identification transition occurs at a value α ≈ α_gen, which corresponds to ε_gen = 1/2. From this sample size onwards, the student will be a good fit to the teacher. Interestingly, the generalization threshold α_gen will always be significantly smaller than 1. Second, the quality of the fit depends on the sparsity of the network: if the number of non-zero components increases, i.e., as sparsity disappears, the efficiency of the process will gradually increase. Finally, there is an impact of the noise level: the learning process is robust to noise up to a certain threshold. Beyond this threshold, the impact of the noise suddenly and dramatically increases, and the student will no longer be a good fit to the teacher. | -
dc.language.iso | en | -
dc.subject.other | machine learning, sparse systems, network reconstruction | -
dc.title | Learning Sparse Networks From Poor Data | -
dc.type | Proceedings Paper | -
local.bibliographicCitation.authors | VAN SOMEREN, Maarten | -
local.bibliographicCitation.authors | KATRENKO, Sophia | -
local.bibliographicCitation.authors | ADRIAANS, Pieter | -
local.bibliographicCitation.conferencedate | May 14-15, 2007 | -
local.bibliographicCitation.conferencename | The Annual Belgian-Dutch Benelearn Conference | -
dc.bibliographicCitation.conferencenr | 18 | -
local.bibliographicCitation.conferenceplace | Amsterdam, the Netherlands | -
dc.identifier.epage | 36 | -
dc.identifier.spage | 30 | -
local.bibliographicCitation.jcat | C2 | -
local.type.specified | Proceedings Paper | -
dc.bibliographicCitation.oldjcat | C2 | -
local.bibliographicCitation.btitle | Proceedings of the 18th Annual Belgian-Dutch Benelearn Conference | -
item.contributor | HOLLANDERS, Goele | -
item.contributor | BEX, Geert Jan | -
item.contributor | GYSSENS, Marc | -
item.contributor | WESTRA, Ronald | -
item.contributor | TUYLS, Karl | -
item.fullcitation | HOLLANDERS, Goele; BEX, Geert Jan; GYSSENS, Marc; WESTRA, Ronald & TUYLS, Karl (2007) Learning Sparse Networks From Poor Data. In: VAN SOMEREN, Maarten & KATRENKO, Sophia & ADRIAANS, Pieter (Ed.) Proceedings of the 18th Annual Belgian-Dutch Benelearn Conference. p. 30-36. | -
item.accessRights | Open Access | -
item.fulltext | With Fulltext | -
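The abstract above describes a teacher-student experiment: a sparse "teacher" interaction network must be recovered by a "student" from M measurements of an N-component system. As a rough illustration of that setup, the sketch below samples one sparse teacher row, generates noisy linear measurements, and fits a student with L1-regularized (Lasso) regression. The linear teacher model, the Lasso estimator, and all parameter values are assumptions made for illustration; the paper does not specify this implementation.

```python
# Hypothetical sketch of the teacher-student experiment from the abstract.
# The estimator and model are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import Lasso

def run_trial(N=100, alpha_ratio=0.3, k=5, noise=0.01, tol=0.1, seed=0):
    """Return True if the student recovers one teacher row 'well'.

    N           -- system size (number of network components)
    alpha_ratio -- alpha = M / N, measurements relative to system size
    k           -- number of non-zero interactions (row sparsity)
    noise       -- std of additive measurement noise
    tol         -- relative L2 error below which the fit counts as 'good'
    """
    rng = np.random.default_rng(seed)
    M = int(alpha_ratio * N)

    # Teacher: one sparse row of the interaction network.
    teacher = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    teacher[support] = rng.normal(size=k)

    # M noisy linear measurements of the system.
    X = rng.normal(size=(M, N))
    y = X @ teacher + noise * rng.normal(size=M)

    # Student: sparse fit via Lasso (regularization weight chosen ad hoc;
    # note this 'alpha' is sklearn's penalty, unrelated to alpha = M/N).
    student = Lasso(alpha=0.01, max_iter=10_000).fit(X, y).coef_

    rel_err = np.linalg.norm(student - teacher) / np.linalg.norm(teacher)
    return rel_err < tol

def good_fit_probability(alpha_ratio, trials=50, **kw):
    """Estimate P(student is a good fit) over repeated random trials."""
    hits = sum(run_trial(alpha_ratio=alpha_ratio, seed=s, **kw)
               for s in range(trials))
    return hits / trials

if __name__ == "__main__":
    # Sweeping alpha = M/N should show a sharp jump in the good-fit
    # probability at a threshold well below 1, qualitatively matching
    # the identification transition the abstract reports.
    for a in (0.1, 0.2, 0.3, 0.4, 0.5):
        print(f"alpha = {a:.1f}: P(good fit) ~ {good_fit_probability(a):.2f}")
```

In such a simulation, the estimated good-fit probability typically rises abruptly over a narrow range of α, which is the qualitative behavior the abstract summarizes with the threshold α_gen; the exact transition point here depends on the assumed sparsity, noise level, and estimator.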
Appears in Collections: Research publications
Files in This Item:
File | Description | Size | Format
benelearn2007.pdf | Published version | 123.45 kB | Adobe PDF