Machine Learning Guide


  • 46 minutes 30 seconds
    MLA 022 Code AI Tools

    Try a walking desk while studying ML or working on your projects! https://ocdevel.com/walk

    Show notes: https://ocdevel.com/mlg/mla-22

    Tools discussed:

    1. Windsurf: https://codeium.com/windsurf
    2. Copilot: https://github.com/features/copilot
    3. Cursor: https://www.cursor.com/
    4. Cline: https://github.com/cline/cline
    5. Roo Code: https://github.com/RooVetGit/Roo-Code
    6. Aider: https://aider.chat/

    Other:

    1. Leaderboards: https://aider.chat/docs/leaderboards/
    2. Video of speed-demon: https://www.youtube.com/watch?v=QlUt06XLbJE&feature=youtu.be
    3. Reddit: https://www.reddit.com/r/chatgptcoding/

These tools boost programming productivity by acting as a pair-programming partner. The episode groups them into three categories:

    • Hands-Off Tools: These include solutions that work on fixed monthly fees and require minimal user intervention. GitHub Copilot started with simple tab completions and now offers an agent mode similar to Cursor, which stands out for its advanced codebase indexing and intelligent file searching. Windsurf is noted for its simplicity—accepting prompts and performing automated edits—but some users report performance throttling after prolonged use.

    • Hands-On Tools: Aider is presented as a command-line utility that demands configuration and user involvement. It allows developers to specify files and settings, and it efficiently manages token usage by sending prompts in diff format. Aider also implements an “architect versus edit” approach: a reasoning model (such as DeepSeek R1) first outlines a sequence of changes, then an editor model (like Claude 3.5 Sonnet) produces precise code edits. This dual-model strategy enhances accuracy and reduces token costs, especially for complex tasks.

    • Intermediate Power Tools: Open-source tools such as Cline and its more advanced fork, RooCode, require users to supply their own API keys and pay per token. These tools offer robust, agentic features, including codebase indexing, file editing, and even browser automation. RooCode stands out with its ability to autonomously expand functionality through integrations (for example, managing cloud resources or querying issue trackers), making it particularly attractive for tinkerers and power users.
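    The "architect versus edit" split described above can be sketched as a two-stage pipeline. This is an illustrative Python sketch, not Aider's actual implementation; `call_reasoning_model` and `call_editor_model` are hypothetical stand-ins for API calls to models like DeepSeek R1 and Claude 3.5 Sonnet.

```python
# Illustrative sketch of the "architect vs. edit" two-model pattern.
# The two call_* functions below are hypothetical stand-ins for real
# model API calls; only the control flow mirrors the idea in the text.

def call_reasoning_model(task: str, files: dict) -> str:
    # Stage 1: a reasoning model produces a plain-language plan of changes.
    return f"Plan: apply '{task}' across {', '.join(files)}"

def call_editor_model(plan: str, files: dict) -> dict:
    # Stage 2: an editor model turns the plan into concrete per-file diffs,
    # keeping token usage low by emitting only changed hunks.
    return {name: f"--- {name}\n+++ {name}\n@@ edits per: {plan} @@"
            for name in files}

def architect_then_edit(task: str, files: dict) -> dict:
    plan = call_reasoning_model(task, files)   # expensive thinking, once
    return call_editor_model(plan, files)      # cheap, precise edits

diffs = architect_then_edit("rename fetch_data to load_data", {"app.py": "..."})
```

    The point of the split is that the costly reasoning model runs once per task, while the cheaper editor model handles the token-heavy diff generation.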

    A decision framework is suggested: for those new to AI coding assistants or with limited budgets, starting with Cursor (or cautiously exploring Copilot’s new features) is recommended. For developers who want to customize their workflow and dive deep into the tooling, RooCode or Cline offer greater control—always paired with Aider for precise and token-efficient code edits.

The episode also reviews model performance using a frequently updated coding benchmark leaderboard. The current top-performing combination uses DeepSeek R1 as the architect and Claude 3.5 Sonnet as the editor, with alternatives such as OpenAI's o1 and o3-mini available. Tools like OpenRouter are mentioned as a way to consolidate API key management and reduce token costs.

    9 February 2025, 8:34 pm
  • 42 minutes 14 seconds
    MLG 033 Transformers

    Show notes: https://ocdevel.com/mlg/33

    3Blue1Brown videos: https://3blue1brown.com/

    • Background & Motivation:

      • RNN Limitations: Sequential processing prevents full parallelization—even with attention tweaks—making them inefficient on modern hardware.
      • Breakthrough: “Attention Is All You Need” replaced recurrence with self-attention, unlocking massive parallelism and scalability.
    • Core Architecture:

      • Layer Stack: Consists of alternating self-attention and feed-forward (MLP) layers, each wrapped in residual connections and layer normalization.
      • Positional Encodings: Since self-attention is permutation invariant, add sinusoidal or learned positional embeddings to inject sequence order.
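    The sinusoidal scheme can be written in a few lines of plain Python. This is a minimal sketch for intuition; real implementations use a tensor library and vectorized operations.

```python
import math

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> list[list[float]]:
    # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)        # even dims: sine
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)  # odd dims: cosine
    return pe

pe = sinusoidal_positional_encoding(seq_len=4, d_model=8)
```

    Each position gets a unique pattern of sines and cosines at different frequencies, which is simply added to the token embeddings so attention can distinguish positions.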
    • Self-Attention Mechanism:

      • Q, K, V Explained:
        • Query (Q): The representation of the token seeking contextual info.
        • Key (K): The representation of tokens being compared against.
        • Value (V): The information to be aggregated based on the attention scores.
      • Multi-Head Attention: Splits Q, K, V into multiple “heads” to capture diverse relationships and nuances across different subspaces.
      • Dot-Product & Scaling: Computes similarity between Q and K (scaled to avoid large gradients), then applies softmax to weigh V accordingly.
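    The Q/K/V mechanics above fit in a short pure-Python sketch (single head, no learned projections; a real implementation would use matrix libraries and per-head weight matrices):

```python
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    # Q, K: seq_len x d_k token vectors; V: seq_len x d_v.
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query against every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)     # attention distribution over tokens
        # Weighted sum of value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 0.0], [0.0, 1.0]]
out = scaled_dot_product_attention(Q, K, V)
```

    Multi-head attention simply runs several of these in parallel on learned projections of Q, K, and V, then concatenates the results.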
    • Masking:

      • Causal Masking: In autoregressive models, prevents a token from “seeing” future tokens, ensuring proper generation.
      • Padding Masks: Ignore padded (non-informative) parts of sequences to maintain meaningful attention distributions.
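    Both kinds of mask work the same way mechanically: disallowed positions are set to negative infinity before the softmax, so they receive exactly zero attention weight. A minimal sketch:

```python
import math

def causal_mask(seq_len):
    # mask[i][j] is True when token i may attend to token j (j <= i).
    return [[j <= i for j in range(seq_len)] for i in range(seq_len)]

def masked_softmax(scores, allowed):
    # Masked positions get -inf, so exp() gives them zero weight.
    masked = [s if ok else float("-inf") for s, ok in zip(scores, allowed)]
    m = max(masked)
    exps = [math.exp(x - m) for x in masked]
    total = sum(exps)
    return [e / total for e in exps]

mask = causal_mask(3)
# Token 1 may see tokens 0 and 1, but not the "future" token 2:
weights = masked_softmax([0.5, 1.2, -0.3], mask[1])
```

    A padding mask is built the same way, except `allowed` marks real tokens rather than past positions.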
    • Feed-Forward Networks (MLPs):

      • Transformation & Storage: Post-attention MLPs apply non-linear transformations; many argue they’re where the “facts” or learned knowledge really get stored.
      • Depth & Expressivity: Their layered nature deepens the model’s capacity to represent complex patterns.
    • Residual Connections & Normalization:

      • Residual Links: Crucial for gradient flow in deep architectures, preventing vanishing/exploding gradients.
      • Layer Normalization: Stabilizes training by normalizing across features, enhancing convergence.
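    The residual-plus-normalization wrapper around each sublayer can be sketched in plain Python. This uses the pre-norm arrangement common in modern Transformers (the original paper used post-norm), and the MLP sublayer here is a toy stand-in with its weights elided.

```python
import math

def layer_norm(x, eps=1e-5):
    # Normalize a single token vector across its features.
    mean = sum(x) / len(x)
    var = sum((xi - mean) ** 2 for xi in x) / len(x)
    return [(xi - mean) / math.sqrt(var + eps) for xi in x]

def residual_block(x, sublayer):
    # Pre-norm residual: x + Sublayer(LayerNorm(x)).
    # The identity path lets gradients flow unimpeded through deep stacks.
    normed = layer_norm(x)
    return [a + b for a, b in zip(x, sublayer(normed))]

# Toy stand-in for the MLP sublayer: "linear" + ReLU with weights elided.
mlp = lambda v: [max(0.0, 2 * vi) for vi in v]
out = residual_block([1.0, -2.0, 3.0], mlp)
```

    A full Transformer layer applies this wrapper twice: once around self-attention, then around the feed-forward network.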
    • Scalability & Efficiency Considerations:

      • Parallelization Advantage: Entire architecture is designed to exploit modern parallel hardware, a huge win over RNNs.
      • Complexity Trade-offs: Self-attention’s quadratic complexity with sequence length remains a challenge; spurred innovations like sparse or linearized attention.
    • Training Paradigms & Emergent Properties:

      • Pretraining & Fine-Tuning: Massive self-supervised pretraining on diverse data, followed by task-specific fine-tuning, is the norm.
      • Emergent Behavior: With scale comes abilities like in-context learning and few-shot adaptation, aspects that are still being unpacked.
    • Interpretability & Knowledge Distribution:

      • Distributed Representation: “Facts” aren’t stored in a single layer but are embedded throughout both attention heads and MLP layers.
      • Debate on Attention: While some see attention weights as interpretable, a growing view is that real “knowledge” is diffused across the network’s parameters.
    9 February 2025, 1:33 am
  • 26 minutes
    MLA 021 Databricks

    Discussing Databricks with Ming Chang from Raybeam (part of DEPT®)

    22 June 2022, 1:50 am
  • 1 hour 8 minutes
    MLA 020 Kubeflow

Conversation with Dirk-Jan about Kubeflow (vs. cloud-native solutions like SageMaker)

    Dirk-Jan Verdoorn - Data Scientist at Dept Agency

    Kubeflow. (From the website:) The Machine Learning Toolkit for Kubernetes. The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Our goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures. Anywhere you are running Kubernetes, you should be able to run Kubeflow.

    TensorFlow Extended (TFX). If using TensorFlow with Kubeflow, combine with TFX for maximum power. (From the website:) TensorFlow Extended (TFX) is an end-to-end platform for deploying production ML pipelines. When you're ready to move your models from research to production, use TFX to create and manage a production pipeline.

    Alternatives:

    29 January 2022, 12:48 am
  • 1 hour 14 minutes
    MLA 019 DevOps

    Chatting with co-workers about the role of DevOps in a machine learning engineer's life

    Expert coworkers at Dept

DevOps tools

    Pictures (funny and serious)

    13 January 2022, 10:42 pm
  • 6 minutes 22 seconds
    MLA 018 Descript

    (Optional episode) just showcasing a cool application using machine learning

Dept uses Descript for some of their podcasting. I'm using it like a maniac; I think they're surprised at how into it I am. Check out the transcript & see how it performed.

    • Descript
• The Ship It Podcast: How to ship software, from the front lines. We talk with software developers about their craft, developer tools, developer productivity and what makes software development awesome. Hosted by your friends at Rocket Insights. AKA shipit.io
    • Brandbeats Podcast by BASIC: An agency podcast with views on design, technology, art, and culture. Explore the new microsite at www.brandbeats.basicagency.com
    7 November 2021, 1:26 am
  • 1 hour 4 minutes
    MLA 017 AWS Local Development

    Show notes: ocdevel.com/mlg/mla-17

    Developing on AWS first (SageMaker or other)

    Consider developing against AWS as your local development environment, rather than only your cloud deployment environment. Solutions:

1. Stick to AWS cloud IDEs (Lambda, SageMaker Studio, Cloud9)
    2. Connect to deployed infrastructure via Client VPN
    3. LocalStack

    Infrastructure as Code

    6 November 2021, 5:39 am
  • 59 minutes 57 seconds
    MLA 016 SageMaker 2

    Part 2 of deploying your ML models to the cloud with SageMaker (MLOps)

    MLOps is deploying your ML models to the cloud. See MadeWithML for an overview of tooling (also generally a great ML educational run-down.)

    5 November 2021, 5:20 am
  • 47 minutes 1 second
    MLA 015 SageMaker 1

Part 1 of deploying your ML models to the cloud with SageMaker (MLOps)

    MLOps is deploying your ML models to the cloud. See MadeWithML for an overview of tooling (also generally a great ML educational run-down.)

I forgot to mention JumpStart; I'll cover it next time.

    4 November 2021, 6:21 am
  • 52 minutes 5 seconds
    MLA 014 Machine Learning Server

    Server-side ML. Training & hosting for inference, with a goal towards serverless. AWS SageMaker, Batch, Lambda, EFS, Cortex.dev

    18 January 2021, 1:31 am
  • 47 minutes 8 seconds
    MLA 013 Customer Facing Tech Stack

    Client, server, database, etc.

    3 January 2021, 1:20 am