Session: Docker and Python – Making them play nicely and securely for machine learning and data science
Docker has become a standard tool for developers around the world to deploy applications in a reproducible and robust manner. Docker and Docker Compose have reduced the time needed to set up new software and implement complex technology stacks for our applications.
Now, six years after the initial release of Docker, we can say with confidence that containers and container orchestration have become staples of modern technology stacks.
There are thousands of tutorials and getting-started guides for those wanting to adopt Docker for application deployment. However, if you are a data scientist, a researcher or someone working on scientific computing, the story is quite different: there are very few tutorials and documents (compared to app/web development) focused on Docker best practices for data science and scientific computing. If you are working on DS, ML or scientific computing, this talk is for you. We’ll cover best practices for building Docker containers for data-intensive applications, from optimising your image builds and securing your containers to designing efficient deployment workflows. We will discuss the most common problems faced when using Docker for data-intensive applications and how you can overcome most of them. Finally, I’ll share practical tips to help you improve your Docker workflows and practices.
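As a small taste of the kind of patterns the talk covers, here is a minimal sketch of a multi-stage Dockerfile for a Python data science workload; the base image tag, requirements.txt and train.py names are illustrative assumptions, not a prescribed setup:

# Illustrative sketch only: multi-stage build for a Python data science image.
# Build stage: install dependencies into an isolated prefix to keep the final image small.
FROM python:3.10-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage: copy only the installed packages and the code, and run as a non-root user.
FROM python:3.10-slim
COPY --from=builder /install /usr/local
WORKDIR /app
COPY train.py .
RUN useradd --create-home appuser
USER appuser
CMD ["python", "train.py"]

This sketch illustrates two of the practices discussed in the session: smaller images through multi-stage builds, and more secure containers by dropping root privileges at runtime.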
Attendees will leave the talk feeling confident about adopting Docker across a range of DS, ML and research projects.