
Dabbling In Deep Learning


Author(s)

Adam Tomkins

Posted on 22 November 2018

Estimated read time: 9 min

Photo by Jeremy Bishop on Unsplash

By Adam Tomkins, Software Sustainability Institute Fellow.

Like most programmers, I’ve heard all about the latest advances in Artificial Intelligence and Deep Learning, but, being stuck deep in a software project of my own, deep learning has remained thoroughly on the outskirts of my professional life. However, I’ve been ruminating on some problems that could serve as a nice on-ramp into the deep learning development community, and I’ve given it a fair few shots over the years. Unlike any other technology I’ve dabbled in, there always seems to be something in the way of what should be a nice, smooth tutorial.

Hardware dependencies, driver versions, operating systems, toolkits. Something has inevitably broken every time I wanted to have a quick dip into deep learning, and an hour of prototyping quickly turns into three hours of driver rollbacks and version management. I run enough brittle software as it is (I’m looking at you, MPI and HTC), and I know better than to start changing graphics drivers when I’ve got a finely tuned system running just fine.

And so, just like that, deep learning gets put on the back burner again. Until another Two Minute Papers episode comes out, and my mind starts mulling over the infinite possibilities of AI all over again.

When the Sheffield RSE Group sent out invitations for an "Introduction to Deep Learning" course, as part of the NVIDIA Deep Learning Institute, I knew this was my chance to give it a whirl, hopefully without tearing my hair out over installation nightmares. It took a while for a date to come up when I was free, but the second one did, I booked a place in an instant and blocked out the whole day.

And boy am I glad I did. I’m writing this just after completing the course and I’m still on the programmer’s high of entering a new field, all abuzz with new possibilities. And so, it is with that in mind that I must say: keep an eye on your friendly neighbourhood RSE team; their support and training can bring new life to your old software practices.

I’m not here to tell you how great deep learning is, or how it is revolutionising every field it gets its hands on. I’m here to tell you how great the deep learning training is, and how it could breathe new life into whatever you’ve got your hands on right now. We’re all busy people, so I’ll break it down simply and try to justify why a training event like this is worth writing off a day for.

About the course

The course is an NVIDIA Ambassador Event called “Fundamentals of Deep Learning for Computer Vision”, organised by Research Software Engineering at Sheffield University. It is open to any student or academic and runs several times a year. You can find out about future events like it through the Sheffield RSE mailing list, or your local RSE group.

The course runs all day, from 9am until 4-5pm, with breaks and a good lunch break, so you can keep tabs on those important work emails if you absolutely must. It is laid back and led at a good pace, with stretch goals to keep the more experienced busy and support to keep the more hesitant crowd up to speed.

What do you need to know ahead of time

The course is designed for beginners, but it takes a few things for granted. First, make sure you understand the basics of what machine learning is and how deep learning works. You don’t need to know any maths, only the concepts behind it. There are plenty of good YouTube videos you can watch; some of my personal favourites are from Two Minute Papers, with the wonderfully enthusiastic Károly Zsolnai-Fehér.

Secondly, the course is based on Python, a great and easy-to-use programming language, with Jupyter Notebooks to guide your progress. All this means is that some Python will be useful to know, but not absolutely essential: there was at least one person on the course with no Python knowledge, and they seemed to finish without issue.

The whole course is very well designed to keep you on track, and is written and taught in such a way as not to leave you feeling lost in the technological weeds. That said, this is all you need to know ahead of time; everything else is taken care of.

How does it work

And I do mean everything. You will be using NVIDIA’s own DIGITS platform, free of charge as part of the Deep Learning Institute. This means you completely cut out the configuration nightmare of toolkits, driver versions and hardware compatibility. It all just works, and you will be running deep learning examples within half an hour of getting started.

All of your work is done in the browser, so all you need is a computer with Chrome (we had university computers). You will be given access to everything else you need on the day. Better still, you keep that access after the course to refine your ideas and try out new programs. You’ll build some neat things during the day, so it is worth keeping them around to jumpstart future projects.

What you will learn

The course is all about building, training and deploying networks, without getting stuck in the minutiae of network design and data processing. The official course description states its aims as:

  • Implement common deep learning workflows, such as image classification and object detection.

  • Experiment with data, training parameters, network structure, and other strategies to increase performance and capability.

  • Deploy your neural networks to start solving real-world problems.

I think it is worth highlighting their approach to real-world solution design. Through a handful of tasks, you will learn the basics of deep learning: how to train a network and evaluate it, and how to deploy a deep network in a real application.

The focus on deploying a deep network is crucial. The course walks you step by step through the pipeline of network development and shows you how to use a network in your own code. It encourages creative problem solving, creating incrementally smarter networks for image classification and object recognition. A key thing they emphasise, which is absolutely missing from the majority of deep learning tutorials, is the dirty secret of deep learning: you don't have to build a network to use it.
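
To make that concrete, here is a minimal sketch of what "using a network without building it" can look like in your own code. It uses PyTorch and torchvision's pretrained AlexNet rather than the DIGITS platform from the course, and "dog.jpg" is a placeholder image path, so treat it as an illustration of the idea rather than the course material itself.

    # Reuse a classifier somebody else has already trained and run it on one image.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.alexnet(pretrained=True)  # weights trained on ImageNet
    model.eval()

    # Standard ImageNet preprocessing expected by the pretrained weights.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    image = Image.open("dog.jpg")           # placeholder input image
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension

    with torch.no_grad():
        logits = model(batch)
    print(logits.argmax(dim=1).item())      # index into the 1000 ImageNet classes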

If you want to learn the minutiae of handcrafting a network from scratch, this isn't the course for you. Here they approach it differently, in a saner, results-oriented way: first, think of the problem you want to solve. Then, they (and now I) suggest you follow these steps:

  1. If someone has already solved your problem, use that network.

  2. If someone has solved something close to what you want, retrain that network.

  3. If someone has almost solved your task, modify their network.

  4. Finally, if there is nothing else out there, you can consider building your own.

We covered the first three points in the course, building on famous networks, AlexNet for image classification and DetectNet with the COCO dataset for object recognition, editing network structure where necessary. The class really builds your confidence in finding existing networks and modifying them to your needs. Save the pure network design for pure researchers.
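
As a hedged illustration of steps 2 and 3, here is roughly what "retrain an existing network" looks like in code: take a network trained on ImageNet and swap out its final layer for your own, much smaller problem (say, "dog" versus "not dog"). Again this uses PyTorch/torchvision as a stand-in for the DIGITS workflow taught on the course.

    # Fine-tune a pretrained AlexNet for a new two-class problem.
    import torch.nn as nn
    from torchvision import models

    model = models.alexnet(pretrained=True)

    # Freeze the convolutional feature extractor; only the head will be retrained.
    for param in model.features.parameters():
        param.requires_grad = False

    # Replace the last fully connected layer: 1000 ImageNet classes -> 2 classes.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

    # A normal training loop over your own labelled images then updates only
    # the unfrozen parameters, e.g. with
    # torch.optim.Adam(p for p in model.parameters() if p.requires_grad)

The design point is the one the course makes: almost all of the network stays exactly as someone else built it, and you only touch the part that maps its features onto your problem.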

What you will build

I almost don’t want to ruin the surprise, but it’s nice to have an idea of what you will have built by the time you're done.

The course focuses on computer vision, and you will use and train an object recognition network to support an ‘intelligent dog door’, letting your favourite canine companion come and go as they please while keeping the neighbours’ cats outside.


You will then expand the basic classification to look at localisation, building a pipeline to go from asking “is there a dog?” to “where is the dog?” This takes you from the AlexNet classification network, through modifying its structure to achieve more complicated goals, to finally using DetectNet in conjunction with COCO, Microsoft's dataset for object detection, for more fine-grained object recognition. I want to emphasise that while you’re not building networks from the ground up, the focus on adapting well-proven networks really is invaluable in showing what can be done without an extensive, research-focused approach to network crafting.
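
For the “where is the dog?” step, the same reuse-first idea applies: below is a minimal sketch of running an off-the-shelf detector pretrained on COCO. torchvision’s Faster R-CNN stands in here for the DetectNet model used in the course, and "dog.jpg" is again a placeholder.

    # Run a COCO-pretrained detector and keep only confident detections.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    detector = models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    detector.eval()

    image = transforms.ToTensor()(Image.open("dog.jpg"))
    with torch.no_grad():
        result = detector([image])[0]   # dict with boxes, labels and scores

    for box, label, score in zip(result["boxes"], result["labels"], result["scores"]):
        if score > 0.8:                 # keep only confident hits
            print(label.item(), box.tolist())  # COCO class id and [x1, y1, x2, y2]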

What you will get

At the end there is an optional assessment, which is well worth doing: you build a new system yourself, covering training, validation and deployment, and if you have the confidence to adapt a network to your needs you will walk away with an NVIDIA Deep Learning Institute certification to hang on your wall or put on your LinkedIn. Keep an eye on the RSE mailing lists; these events are well worth the time and a lot of fun.

If this sounds like an interesting way to spend a day, the Sheffield RSE group will continue to host these sessions, with the next one on Tuesday 18th December. You can find more information and register here.

Want to discuss this post with us? Send us an email or contact us on Twitter @SoftwareSaved. 
