## 2017-02-23 Expectation Maximization in a Gaussian Mixture Model

airxiechao

The Expectation–Maximization (EM) method starts from an initial guess and iteratively moves toward a better estimate of the model parameters.
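Concretely, for a two-component Gaussian mixture each iteration alternates two steps (a sketch in standard EM notation; $\gamma_i$ is the posterior responsibility of component 1 for point $x_i$, and $w_1, w_2$ are the mixing weights):

$$\text{E-step:}\quad \gamma_i = \frac{w_1\,\mathcal{N}(x_i \mid \mu_1, \sigma_1^2)}{w_1\,\mathcal{N}(x_i \mid \mu_1, \sigma_1^2) + w_2\,\mathcal{N}(x_i \mid \mu_2, \sigma_2^2)}$$

$$\text{M-step:}\quad \mu_1 = \frac{\sum_i \gamma_i x_i}{\sum_i \gamma_i},\qquad \sigma_1^2 = \frac{\sum_i \gamma_i (x_i - \mu_1)^2}{\sum_i \gamma_i},\qquad w_1 = \frac{1}{n}\sum_i \gamma_i$$

(and symmetrically for component 2 with $1 - \gamma_i$). The code below uses a simplified hard-assignment variant: each point is assigned wholly to the more likely component, which amounts to rounding $\gamma_i$ to 0 or 1.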

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
%matplotlib inline

# data: sample two Gaussian components
n1 = 3000
n2 = 7000

mu1 = 0
sigma1 = 3

mu2 = 5
sigma2 = 2

x1 = np.random.normal(mu1, sigma1, n1)
x2 = np.random.normal(mu2, sigma2, n2)
x = np.hstack((x1, x2))

_ = plt.hist(x1, bins=100)
_ = plt.hist(x2, bins=100)
```

```python
_ = plt.hist(x, bins=100)
```

```python
# initial guesses: random means drawn from the data range, unit std
m1 = np.random.rand() * (max(x) - min(x)) + min(x)
s1 = 1

m2 = np.random.rand() * (max(x) - min(x)) + min(x)
s2 = 1

print('norm1:', m1, s1, 'norm2:', m2, s2)
# norm1: -6.5435226366 1 norm2: 1.15394623023 1

for n in range(10):
    # E-step: assign each point to the component that gives it higher density
    labels = []
    for i in x:
        p1 = norm.pdf(i, m1, s1)
        p2 = norm.pdf(i, m2, s2)
        if p1 >= p2:
            labels.append(1)
        else:
            labels.append(2)

    # M-step: re-estimate each component from the points assigned to it
    x1 = x[np.where(np.array(labels) == 1)]
    x2 = x[np.where(np.array(labels) == 2)]

    m1 = np.mean(x1)
    s1 = np.std(x1)

    m2 = np.mean(x2)
    s2 = np.std(x2)

    print('norm1:', m1, s1, 'norm2:', m2, s2)

# norm1: -4.30546103228 1.39432196416 norm2: 3.94814447813 2.72635206138
# norm1: -3.23042249158 1.59444089123 norm2: 4.2506641256 2.43631282532
# norm1: -2.3644098131 1.80418282915 norm2: 4.5406085984 2.18797184
# norm1: -1.72486583864 1.97586471581 norm2: 4.77283938361 2.01103456234
# norm1: -1.26655182276 2.10274758435 norm2: 4.94558082249 1.89444297284
# norm1: -0.897747751161 2.20129953065 norm2: 5.09070102913 1.80598039713
# norm1: -0.625943300484 2.2688575111 norm2: 5.20389880713 1.74161614629
# norm1: -0.409140001353 2.31965216958 norm2: 5.29868670437 1.69055746097
# norm1: -0.258522415967 2.35347111298 norm2: 5.36712063346 1.65520505151
# norm1: -0.152758194611 2.37698853371 norm2: 5.4160829944 1.63093861018

_ = plt.hist(x1, bins=100)
_ = plt.hist(x2, bins=100)
```
