We know that a feed-forward neural network with 0 hidden layers (i.e. just an input layer and an output layer) and a sigmoid activation function on the output should be equivalent to logistic regression.
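For reference, here is a minimal sketch of the equivalence I have in mind (the toy dataset and variable names are just for illustration): a network with no hidden layers and a sigmoid output computes sigmoid(X @ W + b), which is exactly the logistic regression model.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy binary classification data, purely illustrative
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

log_reg = LogisticRegression().fit(X, y)

# "Forward pass" of a 0-hidden-layer network using the fitted weights
logits = X @ log_reg.coef_.ravel() + log_reg.intercept_
sigmoid_probs = 1.0 / (1.0 + np.exp(-logits))

# Matches LogisticRegression's predicted probability of the positive class
print(np.allclose(sigmoid_probs, log_reg.predict_proba(X)[:, 1]))  # True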
I wish to verify this empirically, but I need to fit a model with 0 hidden layers using sklearn's MLPClassifier class specifically.
My attempt:
from sklearn.neural_network import MLPClassifier

my_nn = MLPClassifier(hidden_layer_sizes=(0), alpha=0, max_iter=10000)
but this results in an error message:
hidden_layer_sizes must be > 0, got [0, 0].
Is there any way to achieve this using MLPClassifier specifically?