janestreet.com
This talk looks at the question of how to design an exchange through the lens of JX, a crossing engine we built at Jane Street over the last two years. Performance plays an interesting role in this design, in that, although the end-to-end latency of the system is not important in and of itself, the ability of individual components of JX to handle message rates in the 500k/sec range with latencies in the single-digit microseconds helped us build a replicated system that is both simple and robust.
sarem-seitz.com - Sarem Seitz
While point forecasts are very popular, they come with some pitfalls to be aware of.
huggingface.co
This course will teach you about Deep Reinforcement Learning from beginner to expert. It’s completely free and open-source!
sebastianraschka.com - Sebastian Raschka
To mark the start of the new year, this month's issue will feature a review of the top ten papers I've read in 2022.
sebastianraschka.com - Sebastian Raschka
Recently, I shared the top 10 papers that I read in 2022. As a follow-up, I am compiling a list of my favorite 10 open-source releases that I discovered, used, or contributed to in 2022.
ssrn.com
This paper compares various machine learning models to predict the cross-section of emerging market stock returns. We document that allowing for non-linearities and interactions leads to economically and statistically superior out-of-sample returns compared to traditional linear models. Although we find that both linear and machine learning models show higher predictability for stocks associated with higher limits to arbitrage, we also show that this effect is less pronounced for non-linear models. Furthermore, significant net returns can be achieved when accounting for transaction costs, short-selling constraints, and limiting our investment universe to big stocks only.
louisbouchard.ai
A curated list of the latest breakthroughs in AI by release date with a clear video explanation, link to a more in-depth article, and code.
eranraviv.com
The kernel trick is helpful for expanding any data from its original dimension to a higher dimension. Fine, who cares? Well, in life we often benefit from dimension expansion. A higher dimension makes for a richer data representation, which statistical models can exploit to create better predictions.
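A minimal sketch of the idea, using the standard polynomial kernel k(x, y) = (x·y + 1)²: the kernel returns the inner product in an expanded feature space without ever constructing that space explicitly (the feature map `phi` below is written out only for illustration, for the 2-D case).

```python
import numpy as np

def poly_kernel(x, y):
    # Degree-2 polynomial kernel: works directly in the original dimension.
    return (np.dot(x, y) + 1.0) ** 2

def phi(x):
    # Explicit degree-2 feature map for 2-D input: maps R^2 -> R^6.
    # Shown only to verify the kernel; in practice it is never built.
    x1, x2 = x
    return np.array([x1**2, x2**2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1,
                     np.sqrt(2) * x2,
                     1.0])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

# The kernel evaluated in 2-D equals the dot product in the expanded 6-D space.
assert np.isclose(poly_kernel(x, y), np.dot(phi(x), phi(y)))
```

The payoff is computational: a model can work with the richer 6-dimensional (or, for higher-degree kernels, much larger) representation while only ever evaluating inner products in the original space.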
siboehm.com
In this post, I’ll iteratively optimize an implementation of matrix multiplication written in CUDA. My goal is not to build a cuBLAS replacement, but to deeply understand the most important performance characteristics of the GPUs that are used for modern deep learning. This includes coalescing global memory accesses, shared memory caching and occupancy optimizations, among others.