Search - EM-ML

Search list

[Embeded-SCM Develop] PFM2s

Description: ML Estimation of 2 PFM signals using EM and AM
Platform: | Size: 1599 | Author: 老邢 | Hits:

[Speech/Voice recognition/combine] HMMmodel

Description: This code implements in C++ a basic left-right hidden Markov model and the corresponding Baum-Welch (ML) training algorithm. It is meant as an example of the HMM algorithms described by L. Rabiner (1) and others. Serious students are directed to the sources listed below for a theoretical description of the algorithm. K. F. Lee (2) offers an especially good tutorial on how to build a speech recognition system using hidden Markov models.
Platform: | Size: 15360 | Author: aaaaaaa | Hits:
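As a sketch of what Baum-Welch training does, here is a minimal NumPy implementation of the forward-backward recursions and one EM re-estimation step for a generic discrete-output HMM. This is an illustration only, not the package's C++ code; a left-right model would additionally constrain the transition matrix to be upper-triangular.

```python
import numpy as np

def forward_backward(obs, pi, A, B):
    """Forward-backward pass for a discrete-output HMM.
    obs: observation indices; pi: initial state probabilities;
    A: transition matrix; B: emission matrix B[state, symbol]."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return alpha, beta, alpha[-1].sum()   # likelihood P(obs | model)

def baum_welch_step(obs, pi, A, B):
    """One EM (Baum-Welch) re-estimation of pi, A, B."""
    alpha, beta, lik = forward_backward(obs, pi, A, B)
    T, N = len(obs), len(pi)
    gamma = alpha * beta / lik            # P(state at time t | obs)
    xi = np.zeros((N, N))                 # expected transition counts
    for t in range(T - 1):
        xi += np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A / lik
    new_pi = gamma[0]
    new_A = xi / xi.sum(axis=1, keepdims=True)
    obs_arr = np.asarray(obs)
    new_B = np.vstack([gamma[obs_arr == k].sum(axis=0)
                       for k in range(B.shape[1])]).T
    new_B /= gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B, lik
```

Iterating `baum_welch_step` until the returned likelihood stops improving gives the ML estimate; EM guarantees the likelihood never decreases from one iteration to the next.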

[Embeded-SCM Develop] PFM2s

Description: ML Estimation of 2 PFM signals using EM and AM
Platform: | Size: 1024 | Author: 老邢 | Hits:

[AI-NN-PR] em

Description: The EM algorithm is a method for estimating optimal parameters, also known as the expectation-maximization algorithm.
Platform: | Size: 5120 | Author: 凌风 | Hits:
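As a concrete illustration of the EM idea this package implements, here is a minimal NumPy sketch of EM for a two-component 1-D Gaussian mixture (the two-component 1-D model is an assumption for illustration; the package's own model may differ):

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture (minimal sketch)."""
    # crude initialisation: means at the data extremes
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
                 / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var
```

Each iteration alternates a soft assignment of points to components (E-step) with weighted ML updates of the parameters (M-step), which is exactly the "estimate optimal parameters" loop the description refers to.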

[Documents] ML

Description:
Platform: | Size: 6144 | Author: roger | Hits:

[AI-NN-PR] PQRbayes

Description: Starting from the ML-EM reconstruction algorithm, this work analyzes some key points of the Bayesian model. To address the limitations of traditional methods for solving the MAP problem, a Bayesian neural-network reconstruction algorithm for positron emission imaging is proposed. To preserve edge information, binary edge-preserving variables are introduced, and the model is solved with a conjugate neural network. Simulated reconstruction results show that this algorithm yields better images than ML-EM.
Platform: | Size: 285696 | Author: 李浩 | Hits:

[matlab] learn_kalman

Description: LEARN_KALMAN Find the ML parameters of a stochastic Linear Dynamical System using EM.
Platform: | Size: 2048 | Author: 李松寒 | Hits:

[Industry research] em_covariances

Description: Using SAS/IML: This code uses the EM algorithm to estimate the maximum likelihood (ML) covariance matrix and mean vector in the presence of missing data. This implementation of the EM algorithm, or any similar ML approach, assumes that the data are missing completely at random (MCAR) or missing at random (MAR; see Little & Rubin, 1987).
Platform: | Size: 9216 | Author: jpsartre | Hits:
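The missing-data EM scheme described above can be sketched in NumPy rather than SAS/IML; this is a minimal illustration under the same MCAR/MAR assumption, not the package's SAS code. The E-step fills each row's missing entries with their conditional mean given the observed entries (and accumulates the conditional covariance); the M-step recomputes the ML mean and covariance from the completed statistics.

```python
import numpy as np

def em_mvn_missing(X, n_iter=100):
    """EM estimate of the ML mean vector and covariance matrix of a
    multivariate normal when X (n x p) contains NaNs (MCAR/MAR assumed)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    miss = np.isnan(X)
    # initialise from the complete entries of each column
    mu = np.nanmean(X, axis=0)
    sigma = np.diag(np.nanvar(X, axis=0) + 1e-6)
    for _ in range(n_iter):
        Xf = X.copy()
        C = np.zeros((p, p))   # accumulated conditional covariance of imputations
        for i in range(n):
            m, o = miss[i], ~miss[i]
            if not m.any():
                continue
            # E-step: conditional mean/cov of missing given observed entries
            Soo_inv = np.linalg.inv(sigma[np.ix_(o, o)])
            Smo = sigma[np.ix_(m, o)]
            Xf[i, m] = mu[m] + Smo @ Soo_inv @ (X[i, o] - mu[o])
            C[np.ix_(m, m)] += sigma[np.ix_(m, m)] - Smo @ Soo_inv @ Smo.T
        # M-step: ML updates from the completed sufficient statistics
        mu = Xf.mean(axis=0)
        d = Xf - mu
        sigma = (d.T @ d + C) / n
    return mu, sigma
```

Note the `C` term: simply imputing conditional means and computing a sample covariance would understate the variance; adding the conditional covariance of the imputed entries is what makes this the ML estimate.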

[matlab] learn_kalman

Description: Finds the ML parameters of a stochastic Linear Dynamical System using EM.
Platform: | Size: 2048 | Author: ilker | Hits:

[OpenGL program] Acceleration__ML_EM__Tomosynthesis

Description: An accelerated ML-EM algorithm for CT; a 3-D reconstruction algorithm.
Platform: | Size: 92160 | Author: mingwe | Hits:
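For reference, the basic (unaccelerated) ML-EM update that such reconstruction codes iterate is x ← x · Aᵀ(y / Ax) / Aᵀ1, where A is the system matrix mapping an image x to projection data y. Below is a minimal NumPy sketch assuming a dense non-negative A; the package itself targets tomosynthesis geometry with OpenGL/GPU acceleration, which this sketch does not attempt.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Basic ML-EM reconstruction: multiplicative update that maximizes
    the Poisson likelihood of projections y under forward model A @ x."""
    x = np.ones(A.shape[1])             # uniform non-negative initial image
    sens = A.T @ np.ones(A.shape[0])    # sensitivity image (back-projected ones)
    for _ in range(n_iter):
        proj = A @ x                                  # forward projection
        ratio = y / np.maximum(proj, 1e-12)           # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # back-project and scale
    return x
```

The multiplicative form keeps the image non-negative automatically, which is why ML-EM needs no explicit positivity constraint.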

[Software Engineering] HMM

Description: dhmm_em.m — function [LL, prior, transmat, obsmat, nrIterations] = dhmm_em(data, prior, transmat, obsmat, varargin). LEARN_DHMM finds the ML/MAP parameters of an HMM with discrete outputs using EM: [ll_trace, prior, transmat, obsmat, iterNr] = learn_dhmm(data, prior0, transmat0, obsmat0, ...). Notation: Q(t) = hidden state, Y(t) = observation.
Platform: | Size: 8192 | Author: 龙蛋 | Hits:

[Other] maxwell

Description: Max Welling's notes. The original pages are in PS format; I converted them all to PDF for easier reading. They cover machine-learning algorithms, mostly relatively fundamental ones, and lay out the essence of many algorithms very clearly, so they suit beginners, though some newer algorithms may not be covered. The topics are listed below; download as needed. Statistical Estimation [ps]: Bayesian estimation, maximum a posteriori (MAP) estimation, maximum likelihood (ML) estimation, bias/variance tradeoff and minimum description length (MDL). Expectation Maximization (EM) Algorithm [ps]: detailed derivation plus some examples. Supervised Learning (Function Approximation) [ps]: mixture of experts (MoE), cluster-weighted modeling (CWM). Clustering [ps]: mixture of Gaussians (MoG), vector quantization (VQ) with k-means. Linear Models [ps]: factor analysis (FA), probabilistic principal component analysis (PPCA), principal component analysis (PCA). Independent Component Analysis (ICA) [ps]: noiseless ICA, noisy ICA, variational ICA. Mixture of Factor Analysers (MoFA) [ps]: derivation of the learning algorithm. Hidden Markov Models (HMM) [ps]: Viterbi decoding algorithm, Baum-Welch learning algorithm. Kalman Filters (KF) [ps]: Kalman filter algorithm (very detailed derivation).
Platform: | Size: 1211392 | Author: 陈希 | Hits:

[Windows Develop] ImageClassification-master

Description: In this project, our goal was to build a system that recognizes and classifies the object present in an image of size 231x231. We were given a set of training images, each with one of four labels: 1 for airplanes, 2 for cars, 3 for horses, 4 otherwise. We were provided with two sets of features: one is the Histogram of Oriented Gradients (HOG), with dimension 5408; the other is OverFeat ImageNet CNN features, with dimension 37,000. For the test images we were given only the features of each image, without labels, and the results were judged by a grader. Our goal was to provide binary and multi-class predictions, with the Balanced Error Rate (BER) as the performance evaluator. To solve the problem, we first reduced the problem's dimensionality with PCA, dealt with imbalanced datasets through up-sampling or down-sampling, and removed outliers through unsupervised learning such as k-means and the EM algorithm. We then applied ML methods such as binary and multinomial logistic regression, binary and multinomial SVM, and neural networks. The multinomial SVM proved to give the best results, and we finally scored 92 out of 100.
Platform: | Size: 322560 | Author: 杨雪 | Hits:
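The PCA dimensionality-reduction step mentioned in the write-up can be sketched with a plain SVD; this is a minimal NumPy illustration of the technique, not the project's actual code.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components.
    Returns the k-dimensional scores and the component directions."""
    Xc = X - X.mean(axis=0)                      # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]                 # scores, components
```

For features like the 5408-dimensional HOG vectors described above, fitting the components on the training set and applying the same projection (same mean and `Vt[:k]`) to the test features keeps the two sets consistent.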

[Other] ML

Description: GMM (Gaussian mixture model) clustering via the EM algorithm, PCA principal component analysis, and extraction of principal components from face images.
Platform: | Size: 3072 | Author: linann | Hits:

CodeBus www.codebus.net