Running Deep Learning code MUCH faster on Windows

Most people doing Deep Learning in Python use TensorFlow, or the easier Keras API on top of TensorFlow. If you're doing this, you can usually speed up your code by a factor of 10 or more by running it on a supported GPU. But getting this all set up is non-trivial if you are used to doing installs on Windows (Linux programmers probably consider this procedure typical of what they have to do to get software working). A good description of this complex procedure is found here. Because it involves Visual Studio, if you don't already have the right version of Visual Studio installed, expect that alone to take an hour or more (in my experience, THE slowest-installing Microsoft product I've ever seen).
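Once everything is installed, it's worth confirming that TensorFlow can actually see the GPU before expecting any speedup. Here's a minimal sketch, assuming TensorFlow 2.x (where `tf.config.list_physical_devices` is the standard device-listing call); the `visible_gpus` helper name is my own:

```python
def visible_gpus():
    """Return the list of GPUs TensorFlow can see, or None if TF isn't installed."""
    try:
        import tensorflow as tf
    except ImportError:
        # TensorFlow isn't installed (or isn't importable in this environment).
        return None
    # An empty list here means TF is running CPU-only -- the slow path.
    return tf.config.list_physical_devices("GPU")


if __name__ == "__main__":
    gpus = visible_gpus()
    if gpus is None:
        print("TensorFlow is not installed")
    elif gpus:
        print("GPUs visible to TensorFlow:", gpus)
    else:
        print("No GPU visible -- TensorFlow will run on the CPU")
```

If this prints no GPU even though you have a supported card, the CUDA/cuDNN driver setup described below is usually the culprit.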

Oh wow. I am still downloading and installing product after product, just so I can have some sort of driver for the Python TensorFlow package. Read through that entire page before deciding whether you want to go through with this.

OMG. One of the last steps is to compile everything, which takes FOUR TO FIVE HOURS if you turn off your virus protection (a few hours more if you don't).

I have never been a C/C++ programmer. Now I know another reason why. I did notice that one TensorFlow DLL used for this was 340 MB. I remember when an entire operating system was that size. (And long before I was born, there were computers that had 8 KB of memory to do everything in.)

