www.sciencemag.org/cgi/content/full/313/5786/504/DC1

Supporting Online Material for
Reducing the Dimensionality of Data with Neural Networks

G. E. Hinton* and R. R. Salakhutdinov

*To whom correspondence should be addressed. E-mail: hinton@cs.toronto.edu

Published 28 July 2006, Science 313, 504 (2006)
DOI: 10.1126/science.1127647

This PDF file includes:
Materials and Methods
Figs. S1 to S5
Matlab Code

Supporting Online Material

Details of the pretraining: To speed up the pretraining of each RBM, we subdivided all datasets into mini-batches, each containing 100 data vectors, and updated the weights after each mini-batch.
For datasets whose size is not divisible by the mini-batch size, the remaining data vectors were included in the last mini-batch. For all datasets, each hidden layer was pretrained for 50 passes through the entire training set. The weights were updated after each mini-batch using the averages in Eq. 1 of the paper with a learning rate of 0.1. In addition, 0.9 times the previous update was added to each weight and 0.0002 times the value of the weight was subtracted to penalize large weights. Weights were initialized with small random values sampled from a normal distribution with zero mean and standard deviation of 0.1. The Matlab code we used is available at http://www.cs.toronto.edu/hinton/MatlabForSciencePaper.html
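To make the update rule concrete, the following is a minimal sketch of one such mini-batch weight update for an RBM with binary logistic units, using a single step of contrastive divergence and the learning rate, momentum, and weight-decay values quoted above. The function and variable names (rbm_update, vishid, vishidinc, and so on) are illustrative rather than taken from the released Matlab code, and the bias updates are omitted for brevity.

function [vishid, vishidinc] = rbm_update(data, vishid, vishidinc)
% One mini-batch weight update during pretraining (sketch, not the released code).
% data:      numcases x numvis  mini-batch of visible vectors (e.g. 100 rows)
% vishid:    numvis x numhid    weight matrix, initialised as 0.1*randn(numvis, numhid)
% vishidinc: numvis x numhid    previous update, initialised to zeros(numvis, numhid)
epsilon    = 0.1;      % learning rate
momentum   = 0.9;      % fraction of the previous update that is re-applied
weightcost = 0.0002;   % weight-decay penalty on large weights
numcases   = size(data, 1);

% Positive phase: data-driven hidden probabilities and correlations.
poshidprobs = 1 ./ (1 + exp(-data * vishid));
posprods    = data' * poshidprobs;

% Negative phase: sample hidden states, reconstruct the visibles, and
% measure the correlations again under the reconstruction.
hidstates   = double(poshidprobs > rand(size(poshidprobs)));
negdata     = 1 ./ (1 + exp(-hidstates * vishid'));
neghidprobs = 1 ./ (1 + exp(-negdata * vishid));
negprods    = negdata' * neghidprobs;

% Momentum times the previous update, plus the learning-rate-scaled
% gradient estimate, minus the weight-decay term.
vishidinc = momentum * vishidinc + ...
            epsilon * ((posprods - negprods) / numcases - weightcost * vishid);
vishid    = vishid + vishidinc;
end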

Details of the fine-tuning: For the fine-tuning, we used the method of conjugate gradients on larger mini-batches containing 1000 data vectors. We used Carl Rasmussen's “minimize” code (1). Three line searches were performed for each mini-batch in each epoch. To determine an adequate number of epochs and to check for overfitting, we fine-tuned each autoencoder on a fraction of the training data and tested its performance on the remainder. We then repeated the fine-tuning on the entire training set. For the synthetic curves and hand-written digits, we used 200 epochs of fine-tuning; for the faces we used 20 epochs and for the documents we used 50 epochs. Slight overfitting was observed for the faces, but there was no overfitting for the other datasets. Overfitting means that towards the end of training, the reconstructions were still improving on the training set but were getting worse on the validation set. We experimented with various values of the learning rate, momentum, and weight-decay.
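As an illustration of how such a fine-tuning loop can be driven, the sketch below assumes Rasmussen's minimize.m is on the Matlab path and that a helper function, here called cg_autoencoder_grad, returns the reconstruction error and its gradient for a single flattened weight vector; pack_weights and unpack_weights are likewise hypothetical helpers for flattening and restoring the layer-by-layer weights. This is only a sketch of the procedure described above, not the released code.

% Fine-tuning sketch: conjugate gradients on mini-batches of 1000 vectors,
% with three line searches per mini-batch.  All names other than minimize
% (traindata, net, layer_dims, the helper functions) are hypothetical.
max_iter   = 3;                      % line searches per mini-batch
batchsize  = 1000;                   % fine-tuning mini-batch size
numepochs  = 200;                    % e.g. 200 for the curves and digits
numbatches = floor(size(traindata, 1) / batchsize);

for epoch = 1:numepochs
  for batch = 1:numbatches
    data = traindata((batch-1)*batchsize+1 : batch*batchsize, :);
    VV   = pack_weights(net);        % flatten all layer weights into one vector
    VV   = minimize(VV, 'cg_autoencoder_grad', max_iter, layer_dims, data);
    net  = unpack_weights(VV, layer_dims);
  end
end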
