## AI & Deep Learning with TensorFlow


The era of Deep Learning and Machine Learning is at its peak, and it is projected to create **2.3 million** jobs by 2020. New frameworks come up every month, but TensorFlow and Theano have been around for a while and have gained a good amount of popularity as well. So in this Theano vs TensorFlow article, I'll compare the two frameworks on the metrics listed below.

Theano can be defined as a library for **scientific computing**. It was developed at the Université de Montréal and has been available since 2007.

It allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It can run on both CPU and GPU.
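To get a feel for this, here is a minimal illustrative sketch (not part of the comparison script later in this article) of defining a symbolic expression in Theano and compiling it into a callable function:

```python
import theano
import theano.tensor as T

# Declare two symbolic scalars and build an expression from them.
a = T.dscalar('a')
b = T.dscalar('b')
expr = a**2 + 2*a*b + b**2          # (a + b)^2, written out

# Compile the expression into a callable function (runs on CPU or GPU).
f = theano.function(inputs=[a, b], outputs=expr)

print(f(1.0, 2.0))                  # prints 9.0
```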

**TensorFlow** is an open-source software library by Google Brain for dataflow programming across a range of tasks.

It is a symbolic math library that is used for machine learning applications like neural networks.
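For comparison, here is the same toy expression as a minimal sketch in TensorFlow's graph-mode (1.x) API, which is also the API used in the comparison script later in this article:

```python
import tensorflow as tf             # TensorFlow 1.x graph-mode API

# Build a small symbolic (dataflow) graph.
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
expr = a**2 + 2*a*b + b**2          # same (a + b)^2 expression

# Execute the graph inside a session.
with tf.Session() as sess:
    print(sess.run(expr, feed_dict={a: 1.0, b: 2.0}))   # prints 9.0
```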

We will compare Theano and TensorFlow on the following metrics:

- Popularity
- Execution Speed
- Technology Benefits
- Compatibility
- Community Support
- Code Readability

**Popularity:**

| Theano | TensorFlow |
| --- | --- |
| Being an older framework, Theano is no longer very popular among data scientists and researchers, although it was widely used at one time. | TensorFlow is hands down the most famous deep learning framework and is used in a great deal of research. |

**Execution Speed:**

**Technology Benefits:**

| Theano | TensorFlow |
| --- | --- |
| It supports a wide range of operations and computes the gradient for you when determining the backpropagation error. You have full control over the optimizers, since you have to hand-code them. | TensorFlow's range of operations still has to come up to par with Theano's, but it gives you access to lots of good optimizers out of the box, which makes coding easier. |

**Compatibility:**

**Community Support:**

| Theano | TensorFlow |
| --- | --- |
| Theano has the bigger community, since it came along well before TensorFlow. | TensorFlow's online community support is growing rapidly with its popularity, but its documentation is comparatively thinner. |

**Code Readability:**

Let us compare Theano and TensorFlow based on their code. I'll use a basic example script in which we generate some phony data and fit a line of best fit to it, so the model can predict future data points.

**Theano Code:**

```python
import theano
import theano.tensor as T
import numpy

# Again, make 100 points in numpy
x_data = numpy.float32(numpy.random.rand(2, 100))
y_data = numpy.dot([0.100, 0.200], x_data) + 0.3

# Initialise the Theano model
X = T.matrix()
Y = T.vector()
b = theano.shared(numpy.random.uniform(-1, 1), name="b")
W = theano.shared(numpy.random.uniform(-1.0, 1.0, (1, 2)), name="W")
y = W.dot(X) + b

# Compute the gradients WRT the mean-squared-error for each parameter
cost = T.mean(T.sqr(y - Y))
gradientW = T.grad(cost=cost, wrt=W)
gradientB = T.grad(cost=cost, wrt=b)

updates = [[W, W - gradientW * 0.5], [b, b - gradientB * 0.5]]

train = theano.function(inputs=[X, Y], outputs=cost, updates=updates,
                        allow_input_downcast=True)

for i in range(0, 201):
    train(x_data, y_data)

print(W.get_value(), b.get_value())
```

**Equivalent TensorFlow Code:**

```python
import tensorflow as tf   # TensorFlow 1.x graph-mode API
import numpy as np

# Make 100 phony data points in NumPy.
x_data = np.float32(np.random.rand(2, 100))  # Random input
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# Construct a linear model.
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b

# Minimize the squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# For initializing the variables.
init = tf.initialize_all_variables()

# Launch the graph
sess = tf.Session()
sess.run(init)

# Fit the plane.
for step in range(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))

# Learns best fit is W: [[0.100 0.200]], b: [0.300]
```

**Length-wise**, the two scripts are almost **similar**; there is not much difference. Both start with two identically generated NumPy arrays describing the input and the target output. The differences show up once we look at the model initialization.

**Model Initialization:**

```python
# TensorFlow
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b

# Theano
X = T.matrix()
Y = T.vector()
b = theano.shared(numpy.random.uniform(-1, 1), name="b")
W = theano.shared(numpy.random.uniform(-1.0, 1.0, (1, 2)), name="W")
y = W.dot(X) + b
```

As you can see, TensorFlow doesn't require any special treatment of the X and Y variables. Theano, on the other hand, requires extra effort to make sure the variables are **symbolic inputs** to the function. The definitions of b and W are self-explanatory and also read more nicely.

**The Learning: Optimization**

```python
# TensorFlow
loss = tf.reduce_mean(tf.square(y - y_data))         # (1)
optimizer = tf.train.GradientDescentOptimizer(0.5)   # (2)
train = optimizer.minimize(loss)                      # (3)

# Theano
cost = T.mean(T.sqr(y - Y))                           # (1)
gradientW = T.grad(cost=cost, wrt=W)                  # (2)
gradientB = T.grad(cost=cost, wrt=b)                  # (2)
updates = [[W, W - gradientW * 0.5], [b, b - gradientB * 0.5]]  # (2)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates,
                        allow_input_downcast=True)    # (3)
```

For (1), the **MSE** loss is written almost identically in Theano and TensorFlow.
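For reference, the mean squared error that both `cost` and `loss` compute is:

$$
\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\bigl(\hat{y}_i - y_i\bigr)^2
$$

where $\hat{y} = W \cdot x + b$ is the model's prediction and $y$ is the target from `y_data`.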

For (2), defining the **optimizer** is as easy and simple as it gets in TensorFlow, while Theano gives you a great deal of control over the optimizer, although it is quite lengthy and increases the verification effort.
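Concretely, the hand-coded Theano `updates` list implements plain gradient descent with a learning rate of 0.5, which is the same rule that `tf.train.GradientDescentOptimizer(0.5)` applies for you:

$$
W \leftarrow W - \eta\,\frac{\partial\,\text{cost}}{\partial W},\qquad
b \leftarrow b - \eta\,\frac{\partial\,\text{cost}}{\partial b},\qquad
\eta = 0.5
$$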

For (3), the **training function** code is almost similar in both.

**Training Body:**

```python
# TensorFlow
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for step in range(0, 201):
    sess.run(train)

# Theano
for i in range(0, 201):
    train(x_data, y_data)
print(W.get_value(), b.get_value())
```

The training code is almost identical, but encapsulating the graph execution in a Session object is **conceptually cleaner** than Theano's approach.

On a concluding note, it can be said that both APIs have a *similar interface*, but TensorFlow is comparatively **easier** to use as it provides a lot of monitoring and debugging tools. Theano takes the lead in **usability and speed**, but TensorFlow is better suited for deployment. **Documentation** for Theano is more extensive than for TensorFlow, and TensorFlow, being newer, doesn't yet offer many resources to begin with. Open-source deep-learning libraries such as Keras, Lasagne, and Blocks have been **built on top of** Theano.

*I hope this comparison was enough for you to decide which framework to opt for. Check out the AI & Deep Learning with TensorFlow course by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. This certification training is curated by industry professionals as per industry requirements and demands. You will master concepts such as the SoftMax function, autoencoder neural networks, and Restricted Boltzmann Machines (RBM), and work with libraries like Keras & TFLearn.*

Got a question for us? Please mention it in the comments section of “Theano vs TensorFlow” and we will get back to you.
