I am trying to build a recommendation system using non-negative matrix factorization (NMF). Using scikit-learn's NMF as the model, I fit my data, resulting in a certain loss (i.e., reconstruction error). Then I generate recommendations for new data using the inverse_transform method.

Now I do the same using another model I built in TensorFlow. The reconstruction error after training is close to that obtained with sklearn's approach earlier. However, neither the latent factors nor the final recommendations are similar between the two.

One difference between the two approaches that I am aware of: in sklearn, I am using the Coordinate Descent solver, whereas in TensorFlow, I am using the AdamOptimizer, which is based on gradient descent. Everything else seems to be the same:

1. Loss function used is the Frobenius Norm
2. No regularization in both cases
3. Tested on the same data using same number of latent dimensions

Relevant code that I am using:

1. scikit-learn approach:

```
from sklearn.decomposition import NMF

model = NMF(alpha=0.0, init='random', l1_ratio=0.0, max_iter=200,
            n_components=2, random_state=0, shuffle=False, solver='cd',
            tol=0.0001, verbose=0)
model.fit(data)
result = model.inverse_transform(model.transform(data))
```
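To compare the latent factors of the two models directly, you can pull W and H out of the fitted sklearn model: `fit_transform` returns W and `components_` holds H. A self-contained sketch on a small toy ratings matrix (the data values below are made up purely for illustration):

```
import numpy as np
from sklearn.decomposition import NMF

# Toy ratings matrix (rows: users, columns: items) -- illustrative values only.
data = np.array([[5.0, 3.0, 0.0, 1.0],
                 [4.0, 0.0, 0.0, 1.0],
                 [1.0, 1.0, 0.0, 5.0],
                 [1.0, 0.0, 0.0, 4.0]])

model = NMF(n_components=2, init='random', solver='cd',
            random_state=0, max_iter=200)
W = model.fit_transform(data)   # user latent factors, shape (4, 2)
H = model.components_           # item latent factors, shape (2, 4)

reconstruction = W @ H          # approximates the original matrix
print(W.shape, H.shape)
print(np.round(reconstruction, 2))
```

Comparing W and H (up to column permutation and rescaling, since NMF factorizations are not unique) is usually more informative than comparing final recommendations.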

2. TensorFlow approach:

```
import tensorflow as tf

# TF 1.x graph mode; data.shape[0]/[1] must be used, not the shape tuple itself
x = tf.placeholder(tf.float32, shape=data.shape)
w = tf.get_variable('w', initializer=tf.abs(tf.random_normal((data.shape[0], 2))),
                    constraint=lambda p: tf.maximum(0., p))
h = tf.get_variable('h', initializer=tf.abs(tf.random_normal((2, data.shape[1]))),
                    constraint=lambda p: tf.maximum(0., p))
loss = tf.sqrt(tf.reduce_sum(tf.squared_difference(x, tf.matmul(w, h))))
```

My question is: if the recommendations generated by these two approaches do not match, how can I determine which ones are right? Based on my use case, sklearn's NMF is giving me good results, but the TensorFlow implementation is not. How can I achieve the same results with my custom implementation?

Sep 7, 2018 in Python 314 views

## 1 answer to this question.

The choice of the optimizer has a big impact on the quality of the training. Some very simple models (I'm thinking of GloVe, for example) work with some optimizers and not at all with others. To answer your questions:

1. How can I determine which are the right ones?

The evaluation is as important as the design of your model, and it is just as hard: you can run both models on several available datasets and use some metrics to score them. You could also use A/B testing on a real application to estimate the relevance of your recommendations.

2. How can I achieve the same using my custom implementation?

First, try to find a coordinate descent optimizer for TensorFlow and make sure every step you implement is exactly the same as in scikit-learn. Then, if you can't reproduce the same results, try different solutions (why not try a simple gradient descent optimizer first?) and take advantage of the great modularity that TensorFlow offers!
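To illustrate that last suggestion, here is plain gradient descent with a projection onto the nonnegative orthant, written in NumPy rather than TensorFlow so it stays self-contained. It computes the same thing as the TensorFlow graph above (minimizing the Frobenius reconstruction error under a nonnegativity constraint); the matrix values, learning rate, and iteration count are arbitrary choices:

```
import numpy as np

def nmf_pgd(X, k=2, lr=0.01, n_iter=2000, seed=0):
    """Projected gradient descent for ||X - W H||_F^2 with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = np.abs(rng.standard_normal((X.shape[0], k)))
    H = np.abs(rng.standard_normal((k, X.shape[1])))
    for _ in range(n_iter):
        R = W @ H - X                      # residual
        gW, gH = R @ H.T, W.T @ R          # gradients of the squared loss
        W = np.maximum(0.0, W - lr * gW)   # gradient step, then projection
        H = np.maximum(0.0, H - lr * gH)   # onto the nonnegative orthant
    return W, H

X = np.array([[5.0, 3.0, 1.0],
              [4.0, 1.0, 1.0],
              [1.0, 1.0, 5.0]])
W, H = nmf_pgd(X)
print(np.round(W @ H, 2))
```

If this simple solver already reproduces sklearn's factors reasonably well on your data while Adam does not, the discrepancy likely comes from the optimizer dynamics rather than from your model definition.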

