pi.exchange - Cameron Welland
Can machine learning predict modern players' positions? Which player is the most positionless? Does having positionless players help a team win games? Keep reading, because I’m going to answer all these questions and more.
statsbomb.com - Jaymes Monte
As a coach of my 11-year-old son’s 9-a-side team, nothing frustrates me more (except for kids taking a pair of scissors to their socks – but they wouldn’t let me write a blog piece about that) than players not getting their head up, taking too long on the ball and failing to pick out a pass to a teammate in open space.
nih.gov - Gabriel Anzer and Pascal Bauer
Due to the low-scoring nature of football (soccer), shots are often used as a proxy to evaluate team and player performances. However, not all shots are created equal, and their quality differs significantly depending on the situation. The aim of this study is to objectively quantify the quality of any given shot by introducing a so-called expected goals (xG) model. This model is validated statistically and with professional match analysts. The best-performing model uses an extreme gradient boosting algorithm and is based on hand-crafted features from synchronized positional and event data of 105,627 shots in the German Bundesliga. With a ranked probability score (RPS) of 0.197, it is more accurate than any previously published expected goals model. This approach allows us to assess team and player performances far more accurately than is possible with traditional metrics by focusing on process rather than results.
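To make the idea concrete, here is a minimal sketch of an xG-style classifier. It uses synthetic shot data and sklearn's gradient boosting as a stand-in for the paper's XGBoost model; the features (distance and angle), the label-generating function, and all numbers are illustrative assumptions, not the authors' actual data or feature set.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic shots: distance to goal (m) and shot angle (rad) are two classic
# hand-crafted features; the real model adds many more (defender positions,
# body part, game state, ...).
n = 5000
distance = rng.uniform(2.0, 35.0, n)
angle = rng.uniform(0.1, 1.4, n)

# Hypothetical ground truth: closer shots with wider angles score more often.
p_goal = 1.0 / (1.0 + np.exp(0.25 * distance - 2.0 * angle))
goal = rng.random(n) < p_goal

model = GradientBoostingClassifier(n_estimators=100, max_depth=3)
model.fit(np.column_stack([distance, angle]), goal)

# xG of a hypothetical shot from roughly the penalty spot: ~11 m, wide angle.
xg = model.predict_proba([[11.0, 0.8]])[0, 1]
print(f"estimated xG: {xg:.2f}")
```

The model's predicted probability for each shot is its xG; summing xG over a team's shots gives the process-focused performance measure the abstract describes.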
medium.com - Databento
This is a simple example that demonstrates how to build high-frequency trading signals in Python, using order book and market depth data from Databento together with machine learning models from sklearn.
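A stripped-down sketch of that workflow, using a synthetic top-of-book snapshot instead of real Databento market-depth data: compute an order-book imbalance feature and fit an sklearn classifier to predict the direction of the next price move. The data-generating process and all parameters here are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic best-bid/best-ask sizes; in practice these would come from a
# market-depth (order book) feed.
n = 10000
bid_sz = rng.integers(1, 500, n).astype(float)
ask_sz = rng.integers(1, 500, n).astype(float)

# Classic microstructure feature: order-book imbalance in [-1, 1].
imbalance = (bid_sz - ask_sz) / (bid_sz + ask_sz)

# Hypothetical label: whether the next mid-price move is up, correlated
# with imbalance so there is a learnable signal.
up = rng.random(n) < 1.0 / (1.0 + np.exp(-3.0 * imbalance))

clf = LogisticRegression().fit(imbalance.reshape(-1, 1), up)

# Query the fitted signal under strong bid-side pressure.
signal = clf.predict_proba([[0.6]])[0, 1]
print(f"P(next tick up | imbalance=0.6) = {signal:.2f}")
```

The same pattern scales up by adding deeper book levels and trade-flow features as columns, and by swapping the logistic regression for any other sklearn estimator.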
github.com
Unverified black-box models are the path to failure. Opaqueness leads to distrust. Distrust leads to ignorance. Ignorance leads to rejection. The DALEX package X-rays any model, helping you explore and explain its behaviour and understand how complex models work. The main function, explain(), creates a wrapper around a predictive model. Wrapped models can then be explored and compared with a collection of local and global explainers, implementing recent developments from the area of Interpretable Machine Learning / eXplainable Artificial Intelligence.
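DALEX is an R package (with a Python port, dalex). The core idea it describes, wrapping a fitted model and probing it with a model-agnostic global explainer, can be sketched with nothing but sklearn; the permutation importance below is the same technique DALEX exposes for variable importance, and this synthetic dataset is purely illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic classification task with only 2 truly informative features.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic global explainer: shuffle one feature at a time and
# measure how much the model's accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Because the explainer only needs predictions, the random forest could be swapped for any opaque model, which is exactly the point of a wrapper like explain().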
drwhy.ai - Przemyslaw Biecek and Tomasz Burzykowski
Taking this goal into account, in this book we show how to determine which explanatory variables affect a model’s prediction for a single observation; in particular, we present the theory and examples of methods that can be used to explain a prediction, such as break-down plots, ceteris-paribus profiles, local-model approximations, and Shapley values. We also cover techniques for examining predictive models as a whole, reviewing methods that explain model performance globally, such as partial-dependence plots and variable-importance plots; charts that present the key information at a glance; tools and methods for model comparison; and code snippets for R and Python that show how to use the described methods.
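One of the local methods named above, the ceteris-paribus profile, is simple enough to sketch directly: for a single observation, vary one feature over a grid while holding every other feature fixed, and record the model's prediction at each grid point. The model and data below are synthetic stand-ins, not the book's examples.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic regression task and a fitted black-box model.
X, y = make_regression(n_samples=500, n_features=3, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Ceteris-paribus profile for observation 0 with respect to feature 0:
# sweep feature 0 across its observed range, all else held constant.
obs = X[0].copy()
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 11)
profile = []
for v in grid:
    x = obs.copy()
    x[0] = v
    profile.append(model.predict(x.reshape(1, -1))[0])

for v, p in zip(grid, profile):
    print(f"x0={v:+.2f} -> prediction {p:+.1f}")
```

Averaging such profiles over many observations yields the partial-dependence plot, which is how the local and global views in the book connect.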