Workshop - Winder.AI Blog

Industrial insight and articles from Winder.AI, focusing on the topic Workshop


Principal Component Analysis

Sun Jan 28, 2018, in Machine Learning, Workshop

Sometimes data has redundant dimensions. For example, when predicting a person's weight from their height, you would expect their eye colour to provide no predictive power. In that simple case you can just remove the feature from the data. With more complex data, however, the predictive power is usually spread across combinations of features.
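
As a minimal sketch (not the workshop's own code), here is how you might perform PCA with scikit-learn; the features and data are made up for illustration:

```python
# A sketch of PCA: project correlated features onto a smaller set of
# orthogonal components. The feature names below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
height = rng.normal(170, 10, size=200)           # cm
weight = 0.9 * height + rng.normal(0, 5, 200)    # strongly correlated with height
eye_colour = rng.integers(0, 3, 200)             # uninformative feature
X = np.column_stack([height, weight, eye_colour])

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print(pca.explained_variance_ratio_)  # most variance sits in the first component
```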

Distance Measures with Large Datasets

Mon Jan 1, 2018, in Machine Learning, Workshop

Today I had an interesting question from a client who was using a distance metric for similarity matching: given one vector v and a list of vectors X, how do I calculate the Euclidean distance between v and each vector in X as efficiently as possible, in order to find the top matching vectors?
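
One reasonable answer, sketched here with NumPy (an assumption, not necessarily the post's final solution), is to vectorise the computation and use argpartition to pick the top k without a full sort:

```python
# Vectorised Euclidean distances from one query vector to many rows.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 64))  # the list of vectors
v = rng.normal(size=64)             # the query vector

# Broadcasting computes all 100,000 distances in a single pass, no Python loop.
distances = np.linalg.norm(X - v, axis=1)

# argpartition finds the k smallest distances without sorting the whole array.
k = 5
top_k = np.argpartition(distances, k)[:k]
top_k = top_k[np.argsort(distances[top_k])]  # order the top k by distance
print(top_k, distances[top_k])
```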

Detrending Seasonal Data

Thu Dec 21, 2017, in Machine Learning, Workshop

statsmodels is a comprehensive library for time series analysis, and it has a really neat set of functions for detrending data. If you see that your features have any time-dependent trends, give this a try. It essentially fits the multiplicative model: $y(t) = \text{Level} \times \text{Trend} \times \text{Seasonality} \times \text{Noise}$
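
Here is a minimal sketch using statsmodels' seasonal_decompose with the multiplicative model above; the synthetic monthly series is an assumption for illustration:

```python
# Decompose a series into trend, seasonality, and residual, then detrend it.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Build a synthetic monthly series matching the multiplicative model (assumed data).
t = pd.date_range("2015-01-01", periods=48, freq="MS")
trend = np.linspace(10, 20, 48)
seasonality = 1 + 0.3 * np.sin(2 * np.pi * np.arange(48) / 12)
noise = np.random.default_rng(1).normal(1, 0.02, 48)
y = pd.Series(trend * seasonality * noise, index=t)

result = seasonal_decompose(y, model="multiplicative")
detrended = y / result.trend  # divide out the trend component
print(detrended.dropna().head())
```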

Evidence, Probabilities and Naive Bayes

Thu Dec 21, 2017, in Machine Learning, Workshop

Bayes' rule is one of the most useful results in statistics: it allows us to estimate probabilities that would otherwise be impossible to calculate. In this worksheet we look at Bayes' rule at a basic level, then try a naive Bayes classifier. For more intuition about Bayes' rule, make sure you check out the training.
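
As a hedged sketch, here is a naive Bayes classifier built with scikit-learn's GaussianNB; the iris dataset is our stand-in, not necessarily the worksheet's:

```python
# Naive Bayes: P(class | features) ∝ P(features | class) * P(class),
# with the "naive" assumption that features are independent given the class.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GaussianNB().fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on held-out data
```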

Hierarchical Clustering - Agglomerative

Thu Dec 21, 2017, in Machine Learning, Workshop

Clustering is an unsupervised task; in other words, we don't have any labels or targets. This is common when you receive questions like “what can we do with this data?” or “can you tell me the characteristics of this data?”. There are quite a few different ways of performing clustering, and one of them is to form clusters hierarchically.
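
A minimal sketch of agglomerative clustering with scikit-learn, using synthetic blob data as a stand-in:

```python
# Agglomerative clustering: start with every point in its own cluster and
# repeatedly merge the closest pair until the requested number remain.
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=7)

labels = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(X)
print(labels[:10])  # cluster assignment for the first ten observations
```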

Qualitative Model Evaluation - Visualising Performance

Thu Dec 21, 2017, in Machine Learning, Workshop

Being able to evaluate models numerically is important for optimisation tasks. However, a visual evaluation provides two main benefits: it is easier to spot mistakes, and it is easier to explain the results to other people. It is all too easy to miss a gross error when looking at summary statistics alone, so always visualise your data and results!
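
One possible visual check (an illustration, not necessarily the workshop's plot) is a confusion matrix, which makes gross errors obvious at a glance:

```python
# Plot a confusion matrix for a fitted classifier; the dataset and model
# below are stand-ins chosen for this sketch.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# Off-diagonal cells show exactly which classes are being confused.
ConfusionMatrixDisplay.from_estimator(model, X_test, y_test)
plt.show()
```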

Quantitative Model Evaluation

Thu Dec 21, 2017, in Machine Learning, Workshop

We need to be able to compare models for a range of tasks. The most common use case is deciding whether a change to your model improves performance. Typically we also want to visualise this, and we will in another workshop, but first we need to establish some quantitative measures of performance.
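
As a sketch of some common quantitative measures, computed with scikit-learn on a stand-in dataset and model:

```python
# Three common scalar measures for a binary classifier:
# accuracy, F1 score, and ROC AUC.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]  # probability of the positive class
print("accuracy:", accuracy_score(y_test, y_pred))
print("f1:      ", f1_score(y_test, y_pred))
print("ROC AUC: ", roc_auc_score(y_test, y_prob))
```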

Testing Model Robustness with Jitter

Thu Dec 21, 2017, in Machine Learning, Workshop

One simple way to test whether your models are robust to change is to add some noise to the test data. By altering the magnitude of that noise, we can infer how well the model will perform on new data with different sources of noise. In this example we're going to add random, normally-distributed noise, but it doesn't have to be normally distributed!
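
A minimal sketch of the jitter test, assuming a scikit-learn model; the dataset and noise magnitudes are illustrative choices:

```python
# Add Gaussian noise of increasing magnitude to the test set and watch
# how quickly the score degrades.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = KNeighborsClassifier().fit(X_train, y_train)

rng = np.random.default_rng(0)
for sigma in [0.0, 0.1, 0.5, 1.0]:
    jittered = X_test + rng.normal(0, sigma, X_test.shape)  # jittered copy
    print(f"sigma={sigma}: score={model.score(jittered, y_test):.3f}")
```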

K-NN For Classification

Wed Dec 20, 2017, in Machine Learning, Workshop

In a previous workshop we investigated how the nearest neighbour algorithm uses the concept of distance as a similarity measure. We can also use this notion of similarity for classification: a new observation is classified in the same way as its neighbours. This is accomplished by finding the most similar observations and setting the predicted class to some combination of the classes of the k nearest neighbours.
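
A hedged sketch of k-NN classification with scikit-learn; k=5, the scaling step, and the dataset are our illustrative choices:

```python
# k-NN classification: predict the majority class among the k nearest
# training observations.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale first: distance-based methods are sensitive to feature magnitudes.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```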

Nearest Neighbour Algorithms

Wed Dec 20, 2017, in Machine Learning, Workshop

Nearest neighbour algorithms are a class of algorithms that use some measure of similarity. They rely on the premise that observations which are close to each other (when comparing all of the features) are similar to each other. Under this assumption we can do some interesting things, such as making recommendations or finding similar items. More crucially, they provide insight into the character of the data.
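
A minimal sketch of nearest neighbour lookup with scikit-learn, for example as the basis of a recommender; the toy item vectors are an assumption:

```python
# Find the items most similar to a query item by Euclidean distance.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
items = rng.normal(size=(1000, 16))  # e.g. item feature vectors

nn = NearestNeighbors(n_neighbors=5).fit(items)
distances, indices = nn.kneighbors(items[:1])  # neighbours of item 0
# Note: the first match is item 0 itself, at distance zero.
print(indices[0], distances[0])
```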