ergodicityeconomics.com - Ole Peters
In 2011 I gave a 15-minute talk to a lay audience in London. The topic I had chosen was ergodicity breaking, and the challenge was clear: how do you get this across? I invented a coin-toss gamble, which has since become a go-to illustration of ergodicity breaking and a very intuitive way of explaining how ergodicity economics differs from other approaches to economics, and how its concepts may apply to problems unrelated to economics.
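The blurb doesn't spell the gamble out; in a widely cited version, a coin toss multiplies your wealth by 1.5 on heads and by 0.6 on tails. A minimal Python sketch (the specific multipliers and round count here are illustrative assumptions) of why the ensemble average grows while almost every individual trajectory decays:

```python
import random

def simulate(wealth=1.0, rounds=10_000, seed=0):
    # One trajectory: heads multiplies wealth by 1.5, tails by 0.6
    rng = random.Random(seed)
    for _ in range(rounds):
        wealth *= 1.5 if rng.random() < 0.5 else 0.6
    return wealth

# Ensemble (expected-value) growth factor per round: 0.5*1.5 + 0.5*0.6 = 1.05 > 1
ensemble_growth = 0.5 * 1.5 + 0.5 * 0.6

# Time-average growth factor per round: sqrt(1.5 * 0.6) = sqrt(0.9) ~ 0.949 < 1
time_growth = (1.5 * 0.6) ** 0.5

print(ensemble_growth, time_growth, simulate())
```

The expectation grows 5% per round, yet the time-average growth rate is below 1, so a single player repeating the gamble sees their wealth shrink toward zero: the gamble is non-ergodic.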
blogspot.com - Mark Taylor
I've revisited the idea to see whether the intuition that "good finishers", when they miss, don't miss by much, is valid or not. The idea is fairly basic: a shot that hits the post is inches away from being a high-quality chance by post-shot xG, whereas one that flies high and wide is going to need a fair bit of resighting to trouble the keeper.
quantinsti.com - Chainika Thakar
In the world of finance and investment management, effectively managing portfolio risk is essential for achieving optimal returns. Two key concepts that help quantify and analyse risk are the covariance matrix and portfolio variance: the covariance matrix captures the relationships between assets, while portfolio variance measures and helps manage the risk of the portfolio as a whole.
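The relationship between the two is compact: with weight vector w and covariance matrix Σ, portfolio variance is w'Σw. A short NumPy sketch (the return series and weights here are made-up illustrative numbers, not from the article):

```python
import numpy as np

# Hypothetical daily returns for three assets (rows = days, columns = assets)
returns = np.array([
    [ 0.01, 0.02, -0.01],
    [ 0.00, 0.01,  0.00],
    [-0.01, 0.00,  0.02],
    [ 0.02, 0.01, -0.01],
])

# Sample covariance matrix of asset returns (3x3, symmetric)
cov = np.cov(returns, rowvar=False)

# Portfolio weights (sum to 1)
w = np.array([0.5, 0.3, 0.2])

# Portfolio variance w' Sigma w; volatility is its square root
port_var = float(w @ cov @ w)
port_vol = port_var ** 0.5
print(port_var, port_vol)
```

Diversification shows up directly in this formula: negative off-diagonal covariances pull w'Σw below the weighted sum of individual variances.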
priceactionlab.com - Michael Harris
The Sharpe ratio is probably the most important metric for evaluating trading strategy performance. This article is based on a recent X (Twitter) thread by this author. I noticed in the replies that some questioned the use of the Sharpe ratio and proposed the Sortino ratio instead. It is not a case of one against the other; that is a false dichotomy.
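The two ratios share a numerator and differ only in the denominator: Sharpe divides mean excess return by the standard deviation of all returns, Sortino by the downside deviation only. A minimal sketch (the return series and zero risk-free rate are illustrative assumptions):

```python
import numpy as np

def sharpe(returns, rf=0.0):
    # Mean excess return over the std. deviation of all excess returns
    ex = np.asarray(returns) - rf
    return ex.mean() / ex.std(ddof=1)

def sortino(returns, rf=0.0):
    # Same numerator, but the denominator penalizes only returns below target
    ex = np.asarray(returns) - rf
    downside = np.minimum(ex, 0.0)
    return ex.mean() / np.sqrt((downside ** 2).mean())

r = [0.02, -0.01, 0.03, 0.00, -0.02, 0.04]
print(sharpe(r), sortino(r))
```

Because the Sortino denominator ignores upside volatility, it will generally exceed the Sharpe ratio for the same series; they answer related but distinct questions rather than competing.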
ssrn.com - David Rapach, Guofu Zhou
We survey the literature on stock return forecasting, highlighting the challenges faced by forecasters as well as strategies for improving return forecasts. We focus on U.S. equity premium forecastability and illustrate key issues via an empirical application based on updated data. Some studies argue that, despite extensive in-sample evidence of equity premium predictability, popular predictors from the literature fail to outperform the simple historical average benchmark forecast in out-of-sample tests. Recent studies, however, provide improved forecasting strategies that deliver statistically and economically significant out-of-sample gains relative to the historical average benchmark. These strategies—including economically motivated model restrictions, forecast combination, diffusion indices, and regime shifts—improve forecasting performance by addressing the substantial model uncertainty and parameter instability surrounding the data-generating process for stock returns. In addition to the U.S. equity premium, we succinctly survey out-of-sample evidence supporting U.S. cross-sectional and international stock return forecastability. The significant evidence of stock return forecastability worldwide has important implications for the development of both asset pricing models and investment management strategies.
github.io
AlpacaEval is an LLM-based automatic evaluation that is fast, cheap, and reliable. It is based on the AlpacaFarm evaluation set, which tests the ability of models to follow general user instructions. Model responses are then compared to reference Davinci003 responses by the provided GPT-4-, Claude-, or ChatGPT-based auto-annotators, which produces the win rates presented above. AlpacaEval displays a high agreement rate with ground-truth human annotations, and leaderboard rankings on AlpacaEval are highly correlated with leaderboard rankings based on human annotators. Please see our documentation for more details on our analysis.
willthompson.name - Will Thompson
When people say “Large Language Models”, they are typically referring to a type of deep learning architecture called a Transformer. Transformers are models that work with sequence data (e.g., text, images, time series) and are part of a larger family of models called Sequence Models. Many Sequence Models can also be thought of as Language Models: models that learn a probability distribution over the next word/pixel/value in a sequence.
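The "probability distribution of the next word" idea can be made concrete without any neural network at all. A toy bigram sketch (the corpus and function names are illustrative assumptions, not from the article) that estimates P(next word | current word) from counts:

```python
from collections import Counter, defaultdict

# Tiny toy corpus; a real language model trains on billions of tokens
corpus = "the cat sat on the mat the cat ate".split()

# Count how often each word follows each other word
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def next_word_dist(word):
    # Normalize counts into a probability distribution over next words
    c = counts[word]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(next_word_dist("the"))  # 'cat' with prob 2/3, 'mat' with prob 1/3
```

A Transformer plays the same role as `next_word_dist`, but conditions on the whole preceding sequence rather than a single word, with the distribution produced by a learned network instead of raw counts.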
icml.cc
This year six papers were chosen as recipients of the Outstanding Paper Award.
quantpedia.com - Lukas Zelieska
At the end of 2018, researchers at Google AI Language made a significant breakthrough in the Deep Learning community: they open-sourced a new technique for Natural Language Processing (NLP) called BERT (Bidirectional Encoder Representations of Transformers). BERT's performance is very impressive, and it is probably going to be around for a long time. Therefore, it is useful to go through the basics of this remarkable member of the Deep Learning algorithm family.
github.io - Jean Nyandwi
A deep dive into the Transformer, the neural network architecture introduced in the famous 2017 paper “Attention Is All You Need”: its applications, impact, challenges, and future directions.
twimlai.com
Today we’re joined by David Rosenberg, head of the machine learning strategy team in the Office of the CTO at Bloomberg. In our conversation with David, we discuss the creation of BloombergGPT, a custom-built LLM focused on financial applications. We explore the model’s architecture, validation process, benchmarks, and its distinction from other language models. David also discusses the evaluation process, performance comparisons, progress, and the future directions of the model. Finally, we discuss the ethical considerations that come with building these types of models, and how the team has approached dealing with these issues.
spreaker.com
Blair Hull was known as the first "BP", or "Big Player". Ken Uston wrote a book called "The Big Player" after he replaced Blair in that role. Taking what he learned from counting cards, Blair moved to a much bigger casino with higher stakes: the options exchange.