机器学习题库 (Machine Learning Question Bank)

一、Maximum Likelihood

1. ML estimation of an exponential model (10 points)

A Gaussian distribution is often used to model data on the real line, but it is sometimes inappropriate when the data are often close to zero yet constrained to be nonnegative. In such cases one can fit an exponential distribution, whose probability density function is given by

$p(x \mid b) = \frac{1}{b} e^{-x/b}, \quad x \ge 0.$

Given $N$ observations $x_i$ drawn from such a distribution:

(a) Write down the likelihood as a function of the scale parameter $b$.
(b) Write down the derivative of the log likelihood.
(c) Give a simple expression for the ML estimate of $b$.

2. Repeat with a Poisson distribution.

3. (problem statement lost in extraction)

二、Bayes

Suppose that on a multiple-choice exam the probability that an examinee knows the correct answer is $p$, and the probability that he guesses is $1-p$. Assume that an examinee who knows the answer answers correctly with probability 1, while one who guesses picks the correct answer with probability $1/m$, where $m$ is the number of choices. Given that the examinee answered a question correctly, find the probability that he knew the answer.

1. Conjugate priors

The readings for this week include a discussion of conjugate priors. Given a likelihood $p(x \mid \theta)$ for a class of models with parameters $\theta$, a conjugate prior is a distribution $p(\theta \mid \gamma)$ with hyperparameters $\gamma$ such that the posterior distribution belongs to the same family as the prior.

(a) Suppose that the likelihood is given by the exponential distribution with rate parameter $\lambda$, $p(x \mid \lambda) = \lambda e^{-\lambda x}$. Show that the gamma distribution
$\mathrm{Gamma}(\lambda \mid \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \lambda^{\alpha-1} e^{-\beta\lambda}$
is a conjugate prior for the exponential. Derive the parameter update given $N$ observations, and the prediction distribution.

(b) Show that the beta distribution is a conjugate prior for the geometric distribution, $p(x \mid \theta) = (1-\theta)^{x-1}\theta$, which describes the number of times a coin is tossed until the first heads appears when the probability of heads on each toss is $\theta$. Derive the parameter update rule and the prediction distribution.

(c) Suppose $p(\theta \mid \gamma)$ is a conjugate prior for the likelihood; show that the mixture prior $\sum_{m=1}^{M} w_m\, p(\theta \mid \gamma_m)$ is also conjugate for the same likelihood, assuming the mixture weights $w_m$ sum to 1.

(d) Repeat part (c) for the case where the prior is a single distribution and the likelihood is a mixture, and the prior is conjugate for each mixture component of the likelihood. Some priors can be conjugate for several different likelihoods; for example, the beta is conjugate for the Bernoulli and the geometric distributions, and the gamma is conjugate for the exponential and for the gamma with fixed $\alpha$.

(e) (Extra credit, 20 points) Explore the case where the likelihood is a mixture with fixed components and unknown weights, i.e., the weights are the parameters to be learned.

三、True or False

(1) Given $n$ data points, if half are used for training and the other half for testing, the gap between the training error and the test error decreases as $n$ increases.

(2) Maximum likelihood estimation is unbiased and has the smallest variance among all unbiased estimators, so the maximum likelihood estimator has the smallest risk.

(3) For regression functions A and B, if A is simpler than B, then … (the statement is truncated in the source).
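For the exponential ML problem above, setting the derivative of the log-likelihood $\ell(b) = -N\ln b - \sum_i x_i / b$ to zero gives $\hat{b} = \frac{1}{N}\sum_i x_i$, the sample mean. A minimal numerical sketch of this result (the true scale 2.0 and the sample size are illustrative choices, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
b_true = 2.0                                   # illustrative true scale
x = rng.exponential(scale=b_true, size=100_000)

# Log-likelihood of the exponential with scale b: -N*log(b) - sum(x)/b.
def loglik(b):
    return -len(x) * np.log(b) - x.sum() / b

# Setting d/db [-N log b - S/b] = -N/b + S/b**2 = 0 gives b_hat = S/N.
b_hat = x.mean()

# The closed-form estimate should beat nearby values of b.
assert loglik(b_hat) > loglik(b_hat * 1.01)
assert loglik(b_hat) > loglik(b_hat * 0.99)
```

For the Poisson variant (problem 2), the same derivation gives a rate estimate $\hat{\lambda}$ that is also the sample mean.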
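The Bayes problem above is a direct application of Bayes' rule: $P(\text{know} \mid \text{correct}) = \frac{p}{p + (1-p)/m}$. A sketch with illustrative values $p = 0.6$ and $m = 4$ (these numbers are assumptions, not given in the problem):

```python
import numpy as np

# Bayes' rule: P(know | correct) = p*1 / (p*1 + (1-p)*(1/m)).
p, m = 0.6, 4                     # illustrative values, not from the problem
posterior = p / (p + (1 - p) / m)

# Monte Carlo check of the same quantity.
rng = np.random.default_rng(1)
n = 200_000
knows = rng.random(n) < p                 # examinee knows the answer
# Knowers are always correct; guessers are correct with probability 1/m.
correct = knows | (rng.random(n) < 1 / m)
mc = knows[correct].mean()                # fraction of correct answers by knowers
assert abs(posterior - mc) < 0.01
```

With these numbers the posterior is $0.6 / 0.7 = 6/7 \approx 0.857$: a correct answer is strong but not conclusive evidence that the examinee knew it.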
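For part (a) of the conjugate-priors problem, multiplying the gamma kernel $\lambda^{\alpha-1}e^{-\beta\lambda}$ by the exponential likelihood $\lambda^{N}e^{-\lambda\sum_i x_i}$ gives the kernel of $\mathrm{Gamma}(\alpha+N,\ \beta+\sum_i x_i)$, so the update is $\alpha \to \alpha+N$, $\beta \to \beta+\sum_i x_i$. A grid check, with hyperparameters and data that are purely illustrative:

```python
import numpy as np

alpha, beta = 2.0, 1.5               # illustrative prior hyperparameters
x = np.array([0.3, 1.2, 0.7, 2.1])   # illustrative observations
N, S = len(x), x.sum()

lam = np.linspace(0.1, 5.0, 50)
log_prior = (alpha - 1) * np.log(lam) - beta * lam        # gamma kernel
log_lik   = N * np.log(lam) - lam * S                     # exponential likelihood
log_post  = log_prior + log_lik
# Kernel of the claimed conjugate update Gamma(alpha + N, beta + S):
log_conj  = (alpha + N - 1) * np.log(lam) - (beta + S) * lam

# Identical up to an additive constant => same distribution after normalizing.
diff = log_post - log_conj
assert np.allclose(diff, diff[0])
```

Integrating the exponential likelihood against this posterior gives the prediction distribution $p(x) = \alpha'\beta'^{\alpha'} / (\beta'+x)^{\alpha'+1}$ (a Lomax density) with $\alpha' = \alpha+N$, $\beta' = \beta+\sum_i x_i$.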
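Part (b) works the same way: $N$ geometric observations contribute $\theta^{N}(1-\theta)^{\sum_i x_i - N}$, so a $\mathrm{Beta}(a, b)$ prior updates to $\mathrm{Beta}(a+N,\ b+\sum_i x_i-N)$. A sketch under the same illustrative-numbers caveat:

```python
import numpy as np

a, b = 2.0, 3.0                  # illustrative Beta prior hyperparameters
x = np.array([1, 4, 2, 7, 1])    # illustrative toss counts until first head
N, S = len(x), int(x.sum())

theta = np.linspace(0.01, 0.99, 99)
log_prior = (a - 1) * np.log(theta) + (b - 1) * np.log(1 - theta)   # Beta kernel
log_lik   = N * np.log(theta) + (S - N) * np.log(1 - theta)         # geometric
log_post  = log_prior + log_lik
# Kernel of the claimed conjugate update Beta(a + N, b + S - N):
log_conj  = (a + N - 1) * np.log(theta) + (b + S - N - 1) * np.log(1 - theta)

diff = log_post - log_conj
assert np.allclose(diff, diff[0])
```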
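Statement (1) of the true/false section can be probed empirically. A rough simulation (the 1-D least-squares model, noise level, and sample sizes are arbitrary choices) in which the average train/test error gap shrinks as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(2)

def gap(n):
    # Generate n points from y = 3x + noise, train on half, test on half.
    X = rng.normal(size=n)
    y = 3 * X + rng.normal(size=n)
    h = n // 2
    w = (X[:h] @ y[:h]) / (X[:h] @ X[:h])      # least squares, no intercept
    train = np.mean((y[:h] - w * X[:h]) ** 2)
    test  = np.mean((y[h:] - w * X[h:]) ** 2)
    return abs(train - test)

small = np.mean([gap(20) for _ in range(300)])
large = np.mean([gap(2000) for _ in range(300)])
assert large < small    # gap shrinks with more data in this setup
```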