“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, and we will have multiplied the intelligence — the human biological machine intelligence of our civilization — a billion-fold.”
– Ray Kurzweil, American inventor and futurist.
As we know, data is the new power, and companies around the globe are trying to leverage this power in their businesses. Whether that business is:
Training machine learning or deep learning models can take a really long time.
If you are like me, you like to know what is happening during that time:
Neptune lets you do all that, and in this post I will show you how to make it happen. Step by step.
Check out this example run monitoring experiment to see what this can look like. …
Choosing the correct hyperparameters for machine learning or deep learning models is one of the best ways to squeeze the last bit of performance out of your models. In this article, I will show you some of the best methods for hyperparameter tuning available today (in 2020).
First, let’s understand the differences between a hyperparameter and a parameter in machine learning.
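To make the distinction concrete, here is a minimal, self-contained sketch (toy data and names are illustrative, not from any particular library): the learning rate and number of epochs are hyperparameters we choose before training, while the weight `w` is a parameter learned from the data.

```python
# Toy gradient descent on y = 2x, to illustrate the distinction:
# learning_rate and epochs are HYPERparameters (chosen by us),
# the weight w is a PARAMETER (learned from the data).

def train(xs, ys, learning_rate=0.1, epochs=100):
    w = 0.0  # parameter: starts arbitrary, updated by training
    for _ in range(epochs):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * grad
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # true relation: y = 2x
w = train(xs, ys)
print(round(w, 3))  # → 2.0
```

Changing `learning_rate` or `epochs` changes *how* training runs; it is `w` that training itself produces — that asymmetry is the whole difference.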
Image retrieval is the task of finding images related to a given query. Content-based image retrieval refers to finding images with attributes that are not stored in the image metadata, but are present in the visual content itself.
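In practice, content-based retrieval usually means ranking gallery images by the similarity of their embedding vectors to the query's embedding. Below is a hedged sketch with hypothetical hand-made vectors; in a real system the embeddings would come from a trained network (for example, a Siamese or triplet-loss model).

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query, gallery, top_k=2):
    """Return ids of the top_k gallery images most similar to the query."""
    ranked = sorted(gallery, key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [img_id for img_id, _ in ranked[:top_k]]

# Hypothetical 3-d embeddings standing in for real network outputs.
gallery = [("cat_1", [0.9, 0.1, 0.0]),
           ("dog_1", [0.1, 0.9, 0.2]),
           ("cat_2", [0.8, 0.2, 0.1])]
print(retrieve([1.0, 0.0, 0.0], gallery))  # → ['cat_1', 'cat_2']
```

The losses discussed below (contrastive, triplet) are precisely what shapes this embedding space so that visually similar images end up close together.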
In this post we:
Ranking Loss, Contrastive Loss, Siamese Nets, Triplet Nets, Triplet Loss, Image…
Imagine if you could get all the tips and tricks you need to tackle a binary classification problem on Kaggle or anywhere else. I have gone over 10 Kaggle competitions including:
– and pulled out that information for you. …
Last week I had the pleasure of participating in the International Conference on Learning Representations (ICLR), an event dedicated to research on all aspects of deep learning. Initially, the conference was supposed to take place in Addis Ababa, Ethiopia; however, due to the novel coronavirus pandemic, it went virtual. I’m sure it was a challenge for the organisers to move the event online, but I think the result was more than satisfactory, as you can read here!
Over 1300 speakers and 5600 attendees proved that the virtual format was more accessible for the public, while the conference remained interactive and engaging. From the many interesting presentations, I chose 16 that I found influential and thought-provoking. Here are the best deep learning papers from ICLR. …
Training machine learning/deep learning models can take a really long time, and understanding what is happening as your model is training is absolutely crucial.
Typically you can monitor:
Depending on the library or framework, this can be easier or more difficult, but pretty much always it is doable.
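Under the hood, most frameworks expose monitoring through some variant of the callback pattern: the training loop calls user-supplied hooks with the current metrics after every step or epoch. Here is a minimal, framework-free sketch of that idea (all names are illustrative, not a real library's API):

```python
# A toy training loop with a monitoring hook, mimicking the callback
# pattern used by most ML libraries. The "loss" update is a stand-in
# for a real optimization step.

def train(epochs, callbacks=()):
    loss = 1.0
    history = []
    for epoch in range(epochs):
        loss *= 0.8  # pretend the model improves each epoch
        metrics = {"epoch": epoch, "loss": round(loss, 4)}
        history.append(metrics)
        for cb in callbacks:      # monitoring hook: called every epoch
            cb(metrics)
    return history

def print_logger(metrics):
    print(f"epoch {metrics['epoch']}: loss={metrics['loss']}")

history = train(epochs=3, callbacks=[print_logger])
```

A real logger callback would send the same `metrics` dict to a dashboard or experiment tracker instead of printing it; the hook point is identical.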
Most libraries allow you to monitor your model training in one of the following ways:
Some time ago I had the chance to interview Arash Azhand, a great artificial intelligence researcher and Chief AI Scientist at Lindera.
We talked about:
Here is the full video of our conversation:
You can watch the video to see the raw content.
Here, on the other hand, is a summary of our conversation. What follows is not a 1:1 transcript but rather a cleaned-up, structured, and rephrased version of it. …
The International Conference on Learning Representations (ICLR) took place last week, and I had the pleasure of participating in it. ICLR is an event dedicated to research on all aspects of representation learning, commonly known as deep learning. Due to the coronavirus pandemic, the conference couldn’t take place in Addis Ababa as planned, and went virtual instead. This didn’t spoil the great atmosphere of the event — quite the opposite: it was engaging and interactive, and attracted an even bigger audience than last year. …
Machine learning algorithms are tunable via multiple settings called hyperparameters. Recent deep learning models are tunable by tens of hyperparameters, which, together with data augmentation and training-procedure parameters, create quite a complex search space. In the reinforcement learning domain, you should also count environment parameters.
Data scientists need to keep this hyperparameter space under control in order to make progress.
Here, we will show you recent practices, tips & tricks, and tools to track hyperparameters efficiently and with minimal overhead. …
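One low-overhead practice that comes up repeatedly is keeping all hyperparameters in a single dict and persisting it next to each run's artifacts, so any experiment can be reproduced later. A minimal sketch, with illustrative file names and values:

```python
# Persist a run's hyperparameters as JSON alongside its artifacts.
# All names and values here are hypothetical examples.
import json
import os
import tempfile

hyperparams = {
    "learning_rate": 3e-4,
    "batch_size": 64,
    "dropout": 0.1,
    "augmentation": {"horizontal_flip": True},
}

run_dir = tempfile.mkdtemp(prefix="run_")
config_path = os.path.join(run_dir, "hyperparams.json")
with open(config_path, "w") as f:
    json.dump(hyperparams, f, indent=2)

# Later (or on another machine) the exact configuration can be reloaded:
with open(config_path) as f:
    restored = json.load(f)
print(restored == hyperparams)  # → True
```

Experiment trackers automate exactly this step — logging the dict once per run — but even the plain-file version above buys you reproducibility for almost no effort.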