Lately, I’ve been digging into machine learning (ML) compilers and how they differ from traditional software compilers.
In this post, we explore various classification metrics: accuracy, precision, recall, F1-score, and AUC-ROC.
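As a quick preview, here is a minimal sketch of how these metrics are computed in practice; the labels and scores are made up for illustration, and scikit-learn is just one convenient way to get the numbers:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Hypothetical ground-truth labels and predicted probabilities
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_prob = [0.1, 0.4, 0.8, 0.6, 0.9, 0.3, 0.2, 0.7]
# Hard predictions at a 0.5 decision threshold
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_prob))  # AUC uses scores, not hard labels
```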
Working with machine learning often means juggling multiple Python versions, CUDA drivers, TensorFlow/PyTorch builds, and environment conflicts. Let’s look at how Docker can help ease the version pain. Say goodbye to “works on my machine” and Python dependency hell.
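As a taste of where this is headed, here is a minimal Dockerfile sketch; the base image tag, requirements.txt, and train.py are placeholders, not a fixed recommendation:

```dockerfile
# Pin the Python version via the base image (tag is an example choice)
FROM python:3.11-slim

WORKDIR /app

# Pin library versions in requirements.txt so the build is reproducible
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
# Hypothetical entry point for a training script
CMD ["python", "train.py"]
```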
Machine learning is at the center of the AI hype. But what does “learning” actually mean? We look behind the jargon and compare it to another popular field: computational mechanics.
Python’s @dataclass is a powerful tool for creating data containers with minimal boilerplate code. However, it introduces a subtle pitfall when working with mutable defaults like lists or NumPy arrays. This article explores this common issue, its root cause, and how to fix it effectively.
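As a minimal sketch of the problem and the usual fix (the Experiment class and its fields are made up for illustration):

```python
from dataclasses import dataclass, field

# A plain mutable default is rejected by @dataclass at class-definition time:
#
#   @dataclass
#   class Experiment:
#       tags: list = []   # ValueError: mutable default <class 'list'> ... use default_factory
#
# The usual fix is a default_factory, which builds a fresh object per instance.

@dataclass
class Experiment:
    name: str = "baseline"
    tags: list = field(default_factory=list)

a = Experiment()
b = Experiment()
a.tags.append("gpu")
print(a.tags)  # ['gpu']
print(b.tags)  # []  -- each instance gets its own list
```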
Precision, recall, and the confusion matrix help evaluate machine learning models. Understand their tradeoffs, especially on imbalanced datasets, and optimize your classifier for better results.
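To see why accuracy alone can mislead, here is a small sketch with a made-up, heavily imbalanced dataset and a classifier that always predicts the majority class:

```python
# Hypothetical imbalanced data: 95 negatives, 5 positives,
# and a lazy classifier that predicts the majority class for everything.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

# Confusion-matrix counts
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy  = (tp + tn) / len(y_true)             # 0.95 -- looks great
precision = tp / (tp + fp) if tp + fp else 0.0  # 0.0  -- never predicts the rare class
recall    = tp / (tp + fn) if tp + fn else 0.0  # 0.0  -- misses every positive

print(f"confusion matrix: TP={tp} FP={fp} FN={fn} TN={tn}")
print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f}")
```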
When you read and write about software, there is always a balance to be struck in how technical a book should be. I have read Effective Modern C++, which I found a little too technical.