arxiv.org - Ulrik Brandes, Gordana Marmulla, Ivana Smokovic
In the run-up to any major sports tournament, winning probabilities of participants are publicized for engagement and betting purposes. These are generally based on simulating the tournament tens of thousands of times by sampling from single-match outcome models. We show that, by virtue of the tournament schedule, exact computation of winning probabilities can be substantially faster than their approximation through simulation. This notably applies to the 2022 and 2023 FIFA World Cup Finals, and is independent of the model used for individual match outcomes.
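The paper's observation can be illustrated for a single-elimination bracket: given only a matrix of single-match win probabilities, the probability of each team winning the whole tournament can be computed exactly with dynamic programming over rounds, with no sampling at all. This is a minimal sketch (not the authors' code) assuming teams are seeded in bracket order and `w[i][j]` is the probability that team `i` beats team `j`:

```python
import numpy as np

def knockout_win_probs(w):
    """Exact win probabilities for a single-elimination bracket.

    w[i][j] = probability that team i beats team j in one match
    (so w[i][j] + w[j][i] == 1). Teams are ordered in bracket order:
    in each round, block 2k plays block 2k+1.
    """
    n = len(w)
    p = np.ones(n)          # p[i] = prob. that team i reached the current round
    block = 1               # teams per half of the current pairing
    while block < n:
        new_p = np.zeros(n)
        for i in range(n):
            b = i // block          # block containing team i
            opp = b ^ 1             # sibling block in the bracket tree
            opponents = range(opp * block, (opp + 1) * block)
            # i advances if it survived so far AND beats whoever emerges opposite
            new_p[i] = p[i] * sum(p[j] * w[i][j] for j in opponents)
        p = new_p
        block *= 2
    return p
```

Each round costs O(n^2) in the worst case, so the whole bracket is resolved exactly in less work than a handful of simulated tournaments.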
thecollector.com - Luke Dunne
What reason do we have for thinking that patterns observed in our past experience will hold into the future? Simply put, when we see something happen over and over again (the sun rising, for instance), what is it about our seeing it over and over that gives us a reason to think that these things will continue to happen?
youtube.com - Peter Webb
Discover the core principles of successful Betfair trading in this deep-dive tutorial. Whether you're new to trading or a seasoned professional, this video reveals some essential techniques, tips and strategies to help you navigate Betfair markets and maximize your profit.
statsbomb.com - Matt Edwards
Special Teams Coordinator (STC) is one of the hardest jobs in college football, as special teams are treated as anything but. There are 11 full-time coaches per staff allowable by NCAA rules. Most teams do not choose to dedicate one of those 11 to special teams full-time. Rather STC’s often have special teams added to their responsibilities as a position coach.
arxiv.org - Vélez Jiménez, Román Alberto, Lecuanda Ontiveros, José Manuel, Edgar Possani
This paper presents a novel approach for optimizing betting strategies in sports gambling by integrating Von Neumann-Morgenstern Expected Utility Theory, deep learning techniques, and advanced formulations of the Kelly Criterion. By combining neural network models with portfolio optimization, our method achieved remarkable profits of 135.8% relative to the initial wealth during the latter half of the 20/21 season of the English Premier League. We explore complete and restricted strategies, evaluating their performance, risk management, and diversification. A deep neural network model is developed to forecast match outcomes, addressing challenges such as limited variables. Our research provides valuable insights and practical applications in the field of sports betting and predictive modeling.
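The paper builds on advanced formulations of the Kelly Criterion; for orientation, the classical single-bet version is simple to state and compute. This sketch uses the textbook formula, not the paper's portfolio formulation:

```python
def kelly_fraction(p, odds):
    """Classical Kelly stake as a fraction of current wealth for one binary bet.

    p    : estimated probability of winning
    odds : net decimal odds b (profit per unit staked on a win)

    f* = (b*p - (1 - p)) / b. A negative value means the bet has
    negative expected value, so the recommended stake is zero.
    """
    b = odds
    f = (b * p - (1 - p)) / b
    return max(f, 0.0)
```

For example, a 60% win probability at even odds (b = 1) gives a stake of 20% of wealth; practitioners often bet a fixed fraction of this ("fractional Kelly") to reduce variance from model error.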
arxiv.org - Muhammad Sohaib Ayub, Naimat Ullah, Sarwan Ali, Imdad Ullah Khan, Mian Muhammad Awais, Muhammad Asad Khan, Safiullah Faizullah
Cricket is the second most popular sport after soccer in terms of viewership. However, the assessment of individual player performance, a fundamental task in team sports, is currently based primarily on aggregate statistics such as average runs scored and wickets taken. We propose the Context-Aware Metric of player Performance, CAMP, to quantify individual players' contributions toward a cricket match outcome. CAMP employs data mining methods and enables effective data-driven decision-making for selection and drafting, coaching and training, team line-ups, and strategy development. CAMP incorporates the exact context of performance, such as opponents' strengths and specific circumstances of games, such as pressure situations. We empirically evaluate CAMP on data from limited-overs cricket matches played between 2001 and 2019. In every match, a committee of experts declares one player the best player, called the Man of the Match (MoM). The top two players rated by CAMP match the MoM in 83% of the 961 games; thus, the CAMP rating of the best player closely matches that of the domain experts. By this measure, CAMP significantly outperforms the current best-known measure of player contribution, which is based on the Duckworth-Lewis-Stern (DLS) method.
arxiv.org - Zhonghan Zhao, Wenhao Chai, Shengyu Hao, Wenhao Hu, Guanhong Wang, Shidong Cao, Mingli Song, Jenq-Neng Hwang, Gaoang Wang
Deep learning has the potential to revolutionize sports performance, with applications ranging from perception and comprehension to decision. This paper presents a comprehensive survey of deep learning in sports performance, focusing on three main aspects: algorithms, datasets and virtual environments, and challenges. Firstly, we discuss the hierarchical structure of deep learning algorithms in sports performance which includes perception, comprehension and decision while comparing their strengths and weaknesses. Secondly, we list widely used existing datasets in sports and highlight their characteristics and limitations. Finally, we summarize current challenges and point out future trends of deep learning in sports. Our survey provides valuable reference material for researchers interested in deep learning in sports applications.
arxiv.org - Sarosij Bose, Saikat Sarkar, Amlan Chakrabarti
Classifying player actions from soccer videos is a challenging problem, which has become increasingly important in sports analytics over the years. Most state-of-the-art methods employ highly complex offline networks, which makes it difficult to deploy such models in resource-constrained scenarios. In this paper, we propose a novel end-to-end knowledge-distillation-based transfer learning network, pre-trained on the Kinetics400 dataset, and then perform extensive analysis of the learned framework by introducing a unique loss parameterization. We also introduce a new dataset named SoccerDB1, containing 448 videos across 4 diverse classes of players playing soccer. Furthermore, we introduce a unique loss parameter that helps us linearly weigh the extent to which the predictions of each network are utilized. Finally, we perform a thorough performance study using various changed hyperparameters. We also benchmark the first classification results on the new SoccerDB1 dataset, obtaining 67.20% validation accuracy. Apart from significantly outperforming prior art, our model also generalizes to new datasets easily. The dataset has been made publicly available at: https://bit.ly/soccerdb1
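The "loss parameter that linearly weighs each network's predictions" is characteristic of knowledge-distillation objectives, which blend a hard-label term with a soft teacher-matching term. The abstract does not give the exact formulation, so this NumPy sketch shows the standard distillation loss shape with a hypothetical weight `alpha`, not the paper's own loss:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, label, alpha, T=1.0):
    """alpha in [0, 1] linearly weighs the two supervision signals:
    alpha = 1 trains on the ground-truth label only,
    alpha = 0 trains on matching the teacher's soft predictions only."""
    s = softmax(student_logits, T)
    t = softmax(teacher_logits, T)
    ce = -np.log(s[label])                             # hard-label cross-entropy
    kl = float(np.sum(t * (np.log(t) - np.log(s))))    # KL(teacher || student)
    return alpha * ce + (1.0 - alpha) * kl
```

When the student already matches the teacher, the KL term vanishes and only the hard-label term (scaled by `alpha`) remains.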
twitter.com - Kirk Borne
Download free 208-page PDF >> Successful #AlgorithmicTrading — quantitative strategies for profitable trading results.
arxiv.org - Masanori Hirano, Kentaro Minami, Kentaro Imajo
Deep hedging is a deep-learning-based framework for derivative hedging in incomplete markets. Its advantage lies in its ability to handle various realistic market conditions, such as market frictions, which are challenging to address within the traditional mathematical finance framework. Since deep hedging relies on market simulation, the underlying asset price process model is crucial. However, the existing literature on deep hedging often relies on traditional mathematical finance models, e.g., Brownian motion and stochastic volatility models, and discovering effective underlying asset models for deep hedging has remained a challenge. In this study, we propose a new framework called adversarial deep hedging, inspired by adversarial learning. In this framework, a hedger and a generator, which respectively model the hedging strategy and the underlying asset process, are trained in an adversarial manner. The proposed method makes it possible to learn a robust hedger without explicitly modeling the underlying asset process. Through numerical experiments, we demonstrate that it achieves performance competitive with models that assume explicit underlying asset processes across various real market data.
github.io - Aki Vehtari
Here are some answers by Aki Vehtari to frequently asked questions about cross-validation and loo package.
nvidia.com - Eryk Lewinson
Many data science projects contain some information about the passage of time. And this is not restricted to time series forecasting problems. For example, you can often find such features in traditional regression or classification tasks. This article investigates how to create meaningful features using date-related information. We present three approaches, but we need some preparation first.
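One common approach to date-derived features (and likely among those the article covers, though that is an assumption here) is cyclical encoding: mapping a periodic feature like month, weekday, or hour onto the unit circle so that the ends of the cycle are adjacent rather than maximally distant:

```python
import numpy as np

def cyclical_encode(value, period):
    """Encode a cyclic feature as (sin, cos) of its position on the unit circle.

    With period=12 for months, December (12) and January (1) end up close
    together, which a raw integer encoding (|12 - 1| = 11) gets wrong.
    """
    angle = 2 * np.pi * value / period
    return np.sin(angle), np.cos(angle)
```

A quick check of the design: in this encoding, the distance between December and January equals the distance between January and February, matching the calendar's actual geometry.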
towardsai.net - Anshumaan Tiwari
In the field of artificial intelligence, Convolutional Neural Networks (CNNs) have emerged as a revolutionary technology, reshaping the fields of computer vision and image recognition. With their ability to automatically learn and identify patterns in images, CNNs have unlocked new possibilities in numerous applications, from self-driving cars to medical diagnostics. In this article, we will delve into the workings of CNN architecture and explore its prowess using the popular CIFAR-10 dataset as our testing ground.
arxiv.org - Paul-Christian BĂĽrkner, Jonah Gabry, Aki Vehtari
One of the common goals of time series analysis is to use the observed series to inform predictions for future observations. In the absence of any actual new data to predict, cross-validation can be used to estimate a model's future predictive accuracy, for instance, for the purpose of model comparison or selection. Exact cross-validation for Bayesian models is often computationally expensive, but approximate cross-validation methods have been developed, most notably methods for leave-one-out cross-validation (LOO-CV). If the actual prediction task is to predict the future given the past, LOO-CV provides an overly optimistic estimate because the information from future observations is available to influence predictions of the past. To properly account for the time series structure, we can use leave-future-out cross-validation (LFO-CV). Like exact LOO-CV, exact LFO-CV requires refitting the model many times to different subsets of the data. Using Pareto smoothed importance sampling, we propose a method for approximating exact LFO-CV that drastically reduces the computational costs while also providing informative diagnostics about the quality of the approximation.
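The expensive baseline the paper approximates is easy to picture: exact LFO-CV walks forward through the series, refitting on the past and scoring the next observation at each step. This toy sketch uses a trivial Gaussian "model" (recomputing a mean and standard deviation stands in for a full Bayesian refit) purely to show the structure the PSIS approximation avoids repeating:

```python
import numpy as np

def gaussian_logpdf(y, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (y - mu) ** 2 / (2 * sigma**2)

def exact_lfo_elpd(y, min_obs=10):
    """Exact 1-step-ahead leave-future-out CV for a toy Gaussian model.

    For each time t, 'refit' on y[:t] only (no future data leaks in,
    unlike LOO-CV) and score the held-out y[t]. Returns the summed
    log predictive density. A real Bayesian model would require a full
    MCMC refit inside this loop, which is what makes exact LFO-CV costly.
    """
    scores = []
    for t in range(min_obs, len(y)):
        past = y[:t]
        mu, sigma = past.mean(), past.std(ddof=1)
        scores.append(gaussian_logpdf(y[t], mu, sigma))
    return float(np.sum(scores))
```

The paper's contribution is replacing most of those per-step refits with Pareto smoothed importance sampling, refitting only when diagnostics indicate the approximation has degraded.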
keras.io - Keras Team
We're excited to share with you a new library called Keras Core, a preview version of the future of Keras. In Fall 2023, this library will become Keras 3.0. Keras Core is a full rewrite of the Keras codebase that rebases it on top of a modular backend architecture. It makes it possible to run Keras workflows on top of arbitrary frameworks — starting with TensorFlow, JAX, and PyTorch. Keras Core is also a drop-in replacement for tf.keras, with near-full backwards compatibility with tf.keras code when using the TensorFlow backend. In the vast majority of cases you can just start importing it via import keras_core as keras in place of from tensorflow import keras and your existing code will run with no issue — and generally with slightly improved performance, thanks to XLA compilation.
twitter.com - Selçuk Korkmaz
Explaining Markov Chain Monte Carlo (MCMC) to a Layperson
stanford.edu
Natural language processing (NLP) is a crucial part of artificial intelligence (AI), modeling how people share information. In recent years, deep learning approaches have obtained very high performance on many NLP tasks. In this course, students gain a thorough introduction to cutting-edge neural networks for NLP.
learnopencv.com - Jaiyam Sharma
This month, for July 2023, we’ve selected five papers that stand out for their innovative approaches, practical applications, and potential to influence the field. These papers tackle a range of topics, from improving machine learning algorithms to exploring new uses for AI in various industries.
youtube.com - Timur Doumler
It is often said that C++ is a great language for low latency systems, such as finance, audio processing, and video games. But what exactly do we mean by "low latency"? How is that different from "high performance"? And what makes C++ a great language for that? This talk is an attempt at answering these questions. We will look at low latency use cases across these different industries, establish their commonalities and differences, and discuss typical challenges in low latency systems and the C++ techniques to overcome them.
datacolada.org - Uri Simonsohn
Bayes factors provide the results of a horse-race. They tell us how much more consistent the data are with an alternative hypothesis than with the null hypothesis.
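The horse-race framing has a concrete arithmetic form: a Bayes factor is the ratio of the marginal likelihoods of the data under the two competing hypotheses. This illustrative example (not from the post) pits a point null for a coin's bias against a uniform prior, where both marginal likelihoods are available in closed form:

```python
from math import comb

def bayes_factor_binomial(k, n):
    """BF_10 for observing k heads in n flips.

    H1: theta ~ Uniform(0, 1), whose marginal likelihood integrates to
        C(n,k) * B(k+1, n-k+1) = 1 / (n + 1) for any k.
    H0: theta = 0.5 exactly, giving C(n,k) * 0.5**n.
    BF > 1 means the data are more consistent with H1 than with H0.
    """
    m1 = 1.0 / (n + 1)
    m0 = comb(n, k) * 0.5 ** n
    return m1 / m0
```

With 50 heads in 100 flips the ratio favors the null; with 90 heads in 100 flips it overwhelmingly favors the alternative, which is exactly the "how much more consistent" comparison the post describes.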
columbia.edu - Aki Vehtari
The question is when leave-one-out cross-validation or leave-one-group-out cross-validation is valid for model comparison. The short answer is that we need to think about what is the joint data generating mechanism, what is exchangeable, and what is the prediction task. LOO can be valid or invalid, for example, for time-series and phylogenetic modelling depending on the prediction task. Everything said about LOO applies also to AIC, DIC, WAIC, etc.