Let me share a story that I’ve heard too many times.
“So I was developing a machine learning model with my team, and within a few weeks of extensive experimentation we got promising results…
…unfortunately, we couldn’t tell exactly what performed best because we didn’t track feature versions, didn’t record the parameters, and used different environments to run our models…
…after a few weeks, we weren’t even sure what we had actually tried, so we needed to rerun pretty much everything.”
In this article, I will show you how you can keep track of your machine learning experiments and organize your model development efforts so that stories like that will never happen to you.
You will learn about:
- what experiment management is,
- how to track machine learning experiments: code, hyperparameters, data versions, environment, and metrics,
- how to organize your model development process.
What is experiment management?
Experiment management in the context of machine learning is a process of tracking experiment metadata like:
- code versions
- data versions
- hyperparameters
- environment
- metrics
organizing them in a meaningful way and making them available to access and collaborate on within your organization.
In the next sections, you will see exactly what that means with examples and implementations.
Tracking ML experiments
What I mean by tracking is collecting all the metainformation about your machine learning experiments that is needed to:
- share your results and insights with the team (and you in the future),
- reproduce results of the machine learning experiments,
- keep your results, which take a long time to generate, safe.
Let’s go through all the pieces of an experiment that I believe should be recorded, one by one.
Code version control for data science
Okay, in 2019 I think pretty much everyone working with code knows about version control. Failing to keep track of your code is a big, but obvious and easy-to-fix, problem.
Should we just proceed to the next section? Not so fast.
Problem 1: Jupyter notebook version control
A large part of data science development is happening in Jupyter notebooks which are more than just code. Fortunately, there are tools that help with notebook versioning and diffing. Some tools that I know:
- nbconvert (.ipynb -> .py conversion)
- nbdime (diffing)
- jupytext (conversion+versioning)
- neptune-notebooks (versioning+diffing+sharing)
Once you have your notebook versioned, I would suggest going the extra mile and making sure that it runs top to bottom. For that you can use jupytext or nbconvert:
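For example, assuming your notebook is called train.ipynb (filenames are illustrative; check each tool’s current docs for the exact flags):

```shell
# Execute the notebook top to bottom with nbconvert and save the executed copy
jupyter nbconvert --to notebook --execute train.ipynb --output train_run.ipynb

# jupytext can convert and execute the notebook as well
jupytext --to notebook --execute train.ipynb
```

If either command fails, your notebook does not run top to bottom and should be fixed before you trust its results.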
Problem 2: Experiments on dirty commits
Data science people tend not to follow the best practices of software development. You can always find someone (myself included) who will ask:
“But how about tracking code in-between commits? What if someone runs an experiment without committing the code?”
One option is to explicitly forbid running code on dirty commits. Another option is to give users an additional safety net and snapshot code whenever they run an experiment. Each one has its pros and cons and it is up to you to decide.
Tracking hyperparameters
Every machine learning model or pipeline needs hyperparameters. Those could be the learning rate, the number of trees, or a missing value imputation method. Failing to keep track of hyperparameters can result in weeks of wasted time looking for them or retraining models.
The good thing is, keeping track of hyperparameters can be really simple. Let’s start with the way people tend to define them and then we’ll proceed to hyperparameter tracking:
Config files
Typically, a .yaml file contains all the information that your script needs to run. For example:
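A hypothetical config.yaml might look like this (all names and values are illustrative):

```yaml
data:
  train_path: data/train.csv
  valid_path: data/valid.csv
model:
  n_estimators: 100
  learning_rate: 0.01
  imputation: median
```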
Command line + argparse
You simply pass your parameters to your script as arguments:
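A minimal argparse setup might look like this (the parameter names and defaults are illustrative):

```python
import argparse

def parse_params(argv=None):
    # Define the hyperparameters your training script accepts
    parser = argparse.ArgumentParser(description='Train a model')
    parser.add_argument('--lr', type=float, default=0.01,
                        help='learning rate')
    parser.add_argument('--n-estimators', type=int, default=100,
                        help='number of trees')
    parser.add_argument('--imputation', default='median',
                        help='missing value imputation method')
    return parser.parse_args(argv)

params = parse_params([])  # empty argv -> all defaults
print(vars(params))        # log every parameter, not just the ones you changed
```

You would then run it as, say, python train.py --lr 0.001 --n-estimators 500.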
Parameters dictionary in main.py
You put all of your parameters in a dictionary inside your script:
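The dictionary variant is as simple as it gets (names and values are again illustrative):

```python
# All hyperparameters live in one dictionary at the top of main.py
PARAMS = {
    'lr': 0.01,
    'n_estimators': 100,
    'imputation': 'median',
}

def train(params):
    # ... build and fit the model using params['lr'] etc. ...
    # remember to log the whole dictionary with your experiment metadata
    print('training with:', params)

train(PARAMS)
```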
Magic numbers all over the place
Whenever you need to pass a parameter you simply pass a value of that parameter.
We all do that sometimes but it is not a great idea especially if someone will need to take over your work.
Ok, so I do like .yaml configs and passing arguments from the command line (options 1 and 2), but anything other than magic numbers is fine. What is important is that you log those parameters for every experiment.
If you decide to pass all parameters as the script arguments make sure to log them somewhere. It is easy to forget, so using an experiment management tool that does this automatically can save you here.
There is nothing so painful as having a perfect script on a perfect data version producing perfect metrics, only to discover that you don’t remember which hyperparameters were passed as arguments.
A bonus of having your hyperparameters abstracted away entirely (options 1 and 2) is that you implicitly turn your training and evaluation scripts into an objective function that you can optimize automatically:
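A sketch of that idea, where a toy quadratic stands in for your real training pipeline (the function names are illustrative):

```python
def train_evaluate(params):
    """Train a model with the given hyperparameters and return a validation score.
    Here a toy quadratic stands in for the real training pipeline."""
    lr = params['lr']
    # ... fit model, predict on validation set ...
    return -(lr - 0.01) ** 2  # pretend validation score, best at lr=0.01

def objective(params):
    # Most optimizers minimize, so flip the sign of the score
    return -train_evaluate(params)

# Any black-box optimizer can now search the space, e.g. a naive grid:
best = max(({'lr': lr} for lr in [0.001, 0.01, 0.1]), key=train_evaluate)
```

Libraries such as scikit-optimize or hyperopt expect exactly this interface: a function from a parameter set to a score.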
That means you can use readily available libraries and run hyperparameter optimization algorithms with virtually no additional work! If you are interested in the subject please check out my blog post series about hyperparameter optimization libraries in Python.
Data versioning
In real-life projects, data changes over time. Some typical situations include:
- new images are added,
- labels are improved,
- mislabeled/wrong data is removed,
- new data tables are discovered,
- new features are engineered and processed,
- validation and testing datasets change to reflect the production environment.
Whenever your data changes, the output of your analysis, report, or experiment will likely change even though the code and environment did not. That is why, to make sure you are comparing apples to apples, you need to keep track of your data versions.
Having almost everything versioned and getting different results can be extremely frustrating, and can mean a lot of time (and money) in wasted effort. The sad part is that you can do little about it afterward. So again, keep your experiment data versioned.
For the vast majority of use cases whenever new data comes in you can save it in a new location and log this location and a hash of the data. Even if the data is very large, for example when dealing with images, you can create a smaller metadata file with image paths and labels and track changes of that file.
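A minimal way to do this, assuming your dataset lives in a single file, is to log the path together with a content hash (the path and the logging call are hypothetical):

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Compute the MD5 hash of a file without loading it all into memory."""
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            md5.update(chunk)
    return md5.hexdigest()

# Log both the location and the hash with the rest of your experiment metadata:
data_path = 'data/train_v3.csv'  # hypothetical path
# experiment.log('data_version', data_path + ':' + file_md5(data_path))
```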
A wise man once told me:
“Storage is cheap, training a model for 2 weeks on an 8-GPU node is not.”
And if you think about it, logging this information doesn’t have to be rocket science.
Whichever option you decide is best for your project please version your data.
I know that 10x data scientists can read a data hash and know exactly what it is, but you may also want to log something a bit more readable for us mere mortals. For example, I wrote a simple function that lets you log a snapshot of your image directory to Neptune:
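The Neptune-specific call depends on the client API, so here is the tool-agnostic core of such a function: it builds a human-readable snapshot of the directory that you can upload as an experiment artifact with whatever tracker you use:

```python
import hashlib
from pathlib import Path

def snapshot_image_dir(data_dir):
    """Build a human-readable snapshot of an image directory:
    one (relative_path, md5) pair per file, sorted for stable diffs."""
    rows = []
    for path in sorted(Path(data_dir).rglob('*')):
        if path.is_file():
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            rows.append((str(path.relative_to(data_dir)), digest))
    return rows

# The resulting list can be written to a CSV and logged as an artifact,
# so anyone can see at a glance which files a given experiment used.
```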
Tracking machine learning metrics
I have never found myself in a situation where I thought that I had logged too many metrics for my experiment. Have you?
In a real-world project, the metrics you care about can change due to new discoveries or changing specifications so logging more metrics can actually save you some time and trouble in the future.
Either way, my suggestion is:
“Log metrics, log them all”
Typically, metrics are as simple as a single number, but I like to think of them a bit more broadly. To understand whether your model has improved, you may want to take a look at a chart, a confusion matrix, or a distribution of predictions. Those, in my view, are still metrics because they help you measure the performance of your experiment.
Tracking metrics on both training and validation datasets can help you assess the risk of the model not performing well in production. The smaller the gap, the lower the risk. A great resource is this Kaggle Days talk by Jean-François Puget.
Moreover, if you are working with data collected at different timestamps, you can assess model performance decay and suggest a proper model retraining schedule. Simply track metrics at different timeframes of your validation data and see how the performance drops.
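A sketch of that idea, assuming each validation example carries a timestamp (the data and period labels are made up):

```python
from collections import defaultdict

def accuracy_by_period(records):
    """Group (period, y_true, y_pred) records by period and
    compute accuracy per time window."""
    hits, totals = defaultdict(int), defaultdict(int)
    for period, y_true, y_pred in records:
        totals[period] += 1
        hits[period] += int(y_true == y_pred)
    return {p: hits[p] / totals[p] for p in totals}

# Toy validation data from two months -- watch the metric drop over time
records = [
    ('2019-01', 1, 1), ('2019-01', 0, 0), ('2019-01', 1, 1), ('2019-01', 0, 1),
    ('2019-02', 1, 0), ('2019-02', 0, 0), ('2019-02', 1, 0), ('2019-02', 0, 1),
]
```

Plotting the resulting per-period metric tells you how quickly the model goes stale and hence how often to retrain.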
Versioning data science environment
The majority of problems with environment versioning can be summarized by the infamous quote:
“I don’t understand, it worked on my machine.”
One approach that helps solve this issue can be called “environment as code”: the environment is created by executing instructions (bash/yaml/docker) step by step. By embracing this approach, you switch from versioning the environment itself to versioning the environment set-up code, which we already know how to do.
There are a few options that I know are used in practice (by no means is this a full list of approaches).
Docker images
This is the preferred option, and there are a lot of resources on the subject. One that I particularly like is the “Learn Enough Docker to be useful” series by Jeff Hale.
In a nutshell, you define a Dockerfile with setup instructions, build an image from it, and run your scripts inside the resulting container.
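A minimal sketch of those three steps, assuming a Python project (the Dockerfile contents, image name, and script are hypothetical):

```shell
# Dockerfile -- a minimal Python environment:
#   FROM python:3.7
#   COPY requirements.txt .
#   RUN pip install -r requirements.txt
#   COPY . /app
#   WORKDIR /app

# Build an image from the Dockerfile in the current directory
docker build -t my-ml-env .

# Run a training script inside the container
docker run -it --rm my-ml-env python train.py
```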
The example I showed was used to run a Neptune-enabled JupyterLab server on AWS. Check it out if you are interested.
Conda environments
It’s a simpler option, and in many cases it is enough to manage your environments with no problems. It doesn’t give you as many options or guarantees as Docker does, but it can be enough for your use case.
The environment can be defined as a .yaml configuration file just like this one:
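For example, a hypothetical environment.yaml (the name and pinned versions are illustrative):

```yaml
name: ml-project
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.7
  - scikit-learn=0.21.3
  - pandas=0.25.0
  - pip:
    - neptune-client
```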
You can create the conda environment by running:
conda env create -f environment.yaml
What is pretty cool is that you can always dump the state of your environment to such config by running:
conda env export > environment.yaml
Simple and gets the job done.
Makefiles
You can always define all your bash instructions explicitly in a Makefile and set up the environment with a single make command.
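A hypothetical Makefile with a single environment target might look like this (note that make requires recipe lines to be indented with tabs):

```makefile
# Environment setup as explicit bash steps
environment:
	python3 -m venv venv
	. venv/bin/activate && pip install -r requirements.txt
```

You then set it up by running make environment.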
It is often difficult to read those files, and you give up a ton of additional features of conda and/or Docker, but it doesn’t get much simpler than this.
Now, that you have your environment defined as code, make sure to log the environment file for every experiment.
Again, if you are using an experiment manager, it can snapshot those files automatically whenever you create a new experiment, even if you forget to git commit, and keep them safely stored in the app.
How to organize your model development process?
As much as I think tracking experiments and ensuring the reproducibility of your work is important, it is just one part of the puzzle. Once you have tracked hundreds of experiment runs, you will quickly face new problems:
- how to search through and visualize all of those experiments,
- how to organize them into something that you and your colleagues can digest,
- how to make this data shareable and accessible inside your team/organization?
This is where experiment management tools really come in handy. They let you:
- filter/sort/tag/group experiments,
- visualize/compare experiment runs,
- share (app and programmatic query API) experiment results and metadata.
For example, by sending a link I can share a comparison of machine learning experiments with all the additional information available.
With that, you and all the people on your team know exactly what is happening when it comes to model development. It makes it easy to track the progress, discuss problems, and discover new improvement ideas.
Working in creative iterations
Tools like that are a big help and a huge improvement over spreadsheets and notes. However, what I believe can take your machine learning projects to the next level is a focused experimentation methodology that I call creative iterations.
I’d like to start with some pseudocode and explain it later:
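A rough sketch of that process (the function and variable names mirror the terms used in the text, and the details are deliberately loose):

```
time, budget, business_goal = business_specification()

creative_idea = initial_research(business_goal)

while time_left and budget_left and not business_goal_attained:
    solution = develop(creative_idea)
    metrics = evaluate(solution, validation_data)
    if metrics > best_metrics:
        best_metrics, best_solution = metrics, solution
    creative_ideas = explore_results(solution, metrics)
    creative_idea = prioritize(creative_ideas, expected_improvement, budget)
    update(time_left, budget_left)
```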
In every project, there is a phase where the business_specification is created; it usually entails a timeframe, a budget, and a goal for the machine learning project. When I say goal, I mean a set of KPIs, business metrics, or, if you are super lucky, machine learning metrics. At this stage it is very important to manage business expectations, but that’s a story for another day. If you are interested in those things, I suggest you take a look at some articles by Cassie Kozyrkov, for instance, this one.
Assuming that you and your team know what the business goal is, you can do initial_research and cook up a baseline approach, your first creative_idea. Then you develop it into a solution, which you need to evaluate to get your first set of metrics. Those, as mentioned before, don’t have to be simple numbers (and often are not) but can be charts, reports, or user-study results. Now you should study your solution and metrics, and explore_results.
It may be here where your project will end because:
- your first solution is good enough to satisfy business needs,
- you can reasonably expect that there is no way to reach business goals within the previously assumed time and budget,
- you discover that there is a low-hanging fruit problem somewhere close and your team should focus their efforts there.
If none of the above apply, you list all the underperforming parts of your solution and figure out which ones could be improved and which creative_ideas can get you there. Once you have that list, you need to prioritize those ideas based on expected goal improvements and budget. If you are wondering how you can estimate those improvements, the answer is simple: results exploration.
You have probably noticed that results exploration comes up a lot. That’s because it is so important that it deserves its own section.
Model results exploration
This is an extremely important part of the process. You need to understand thoroughly where the current approach fails, how far, time- and budget-wise, you are from your goal, and what the risks are of using your approach in production. In reality, this part is far from easy, but mastering it is extremely valuable because:
- it leads to business problem understanding,
- it leads to focusing on the problems that matter and saves a lot of time and effort for the team and organization,
- it leads to discovering new business insights and project ideas.
Some good resources I found on the subject are:
- “Understanding and diagnosing your machine-learning models” PyData talk by Gaël Varoquaux
- “Creating correct and capable classifiers” PyData talk by Ian Ozsvald
- “Using the ‘What-If Tool’ to investigate Machine Learning models” article by Parul Pandey
Diving deeply into results exploration is a story for another day and another blog post, but the key takeaway is that investing your time in understanding your current solution can be extremely beneficial for your business.
In this article, I explained:
- what experiment management is,
- how tracking your experiments (code, hyperparameters, data versions, environment, and metrics) makes your work reproducible,
- how organizing your model development process improves your workflow.
For me, adding experiment management tools to my “standard” software development best practices was an aha moment that made my machine learning projects more likely to succeed. I think if you give it a go, you will feel the same.