I am trying to implement the method Weka's EM clusterer uses to estimate the number of clusters, more precisely the following description:
The cross validation performed to determine the number of clusters is done in the following steps:
1. the number of clusters is set to 1
2. the training set is split randomly into 10 folds.
3. EM is performed 10 times using the 10 folds the usual CV way.
4. the loglikelihood is averaged over all 10 results.
5. if loglikelihood has increased the number of clusters is increased by 1 and the program continues at step 2.
My current implementation is as follows:
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

def estimate_n_clusters(X):
    """Find the best number of clusters through maximization of the log-likelihood from EM."""
    last_log_likelihood = None
    kf = KFold(n_splits=10, shuffle=True)
    components = range(1, 50)
    for n_components in components:
        gm = GaussianMixture(n_components=n_components)
        log_likelihood_list = []
        # 10-fold CV: fit on the training folds, score the held-out fold
        for train, test in kf.split(X):
            gm.fit(X[train, :])
            if not gm.converged_:
                raise Warning("GM not converged")
            log_likelihood = np.log(-gm.score_samples(X[test, :]))
            log_likelihood_list += log_likelihood.tolist()
        avg_log_likelihood = np.average(log_likelihood_list)
        # stop as soon as the averaged log-likelihood no longer increases
        if last_log_likelihood is None:
            last_log_likelihood = avg_log_likelihood
        elif avg_log_likelihood + 10E-6 <= last_log_likelihood:
            return n_components
        last_log_likelihood = avg_log_likelihood
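
For reference, this is roughly how I call it while testing (the make_blobs data here is only illustrative; my real data is different):

from sklearn.datasets import make_blobs

# toy data with 3 well-separated clusters, purely for illustration
X, _ = make_blobs(n_samples=500, centers=3, random_state=0)
n_clusters = estimate_n_clusters(X)
print(n_clusters)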
I am getting a similar number of clusters from both Weka and my function. However, using the number of clusters n_clusters estimated by the function,

gm = GaussianMixture(n_components=n_clusters).fit(X)
print(np.log(-gm.score(X)))

results in NaN, since -gm.score(X) is negative (about -2500), while Weka reports Log likelihood: 347.16447.
My guess is that the likelihood mentioned in step 4 of the Weka description is not the same quantity that scikit-learn's score_samples() returns.
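
To see the raw numbers side by side, I also printed the fitted model's score without the extra np.log (per the scikit-learn docs, score() is the average per-sample log-likelihood); I am not sure which of these, if any, corresponds to Weka's figure:

gm = GaussianMixture(n_components=n_clusters).fit(X)
print(gm.score(X))           # average log-likelihood per sample
print(gm.score(X) * len(X))  # summed over all samples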
Can anyone tell me where I am going wrong?
Thanks