A weekly talk show
Jamie's Links:
https://github.com/github/spec-kit
https://owasp.org/
https://bsky.app/profile/gaprogman.com
https://dotnetcore.show/
https://gaprogman.github.io/OwaspHeaders.Core/
Mike on LinkedIn
Coder Radio on Discord
Mike's Oryx Review
Alice
Alice Jumpstart Offer
RubyLLM
Carmine
Chat With Work
Carmine on X
Mike on LinkedIn
Coder Radio on Discord
Alice
Mike's 2026 Predictions Post
ThousandEyes
Murtaza on LinkedIn
Internet Outages Map
ThousandEyes Job Openings
Mike on LinkedIn
Mike's Blog
Show on Discord
Assorted Dreamcast references:
Dreamcast overview https://sega.fandom.com/wiki/Dreamcast
History of Dreamcast development https://segaretro.org/History_of_the_Sega_Dreamcast/Development
The Rise and Fall of the Dreamcast: A Legend Gone Too Soon (Simon Jenner) https://sabukaru.online/articles/he-rise-and-fall-of-the-dreamcast-a-legend-gone-too-soon
The Legacy of the Sega Dreamcast | 20 Years Later https://medium.com/@Amerinofu/the-legacy-of-the-sega-dreamcast-20-years-later-d6f3d2f7351c
Socials & Plugs
The R Podcast https://r-podcast.org/
R Weekly Highlights https://serve.podhome.fm/r-weekly-highlights
Shiny Developer Series https://shinydevseries.com/
Eric on Bluesky https://bsky.app/profile/rpodcast.bsky.social
Eric on Mastodon https://podcastindex.social/@rpodcast
Eric on LinkedIn https://www.linkedin.com/in/eric-nantz-6621617/
Links
James on LinkedIn
Mike on LinkedIn
Mike's Blog
Show on Discord
Trust and Stability: RHEL provides the mission-critical foundation needed for workloads where security and reliability cannot be compromised.
Predictive vs. Generative: Acknowledging the hype of GenAI while maintaining support for traditional machine learning algorithms.
Determinism: The challenge of bringing consistency and security to emerging AI technologies in production environments.
Developer Simplicity: RamaLama helps developers run local LLMs easily without being "locked in" to specific engines; it supports Podman, Docker, and various inference engines like Llama.cpp and Whisper.cpp (a query sketch follows this list).
Production Path: The tool is designed to "fade away" after helping package the model and stack into a container that can be deployed directly to Kubernetes.
Behind the Firewall: Addressing the needs of industries (like aircraft maintenance) that require AI to stay strictly on-premises.
Red Hat AI: A commercial product offering tools for model customization, including pre-training, fine-tuning, and RAG (Retrieval-Augmented Generation); a toy RAG sketch appears after this list.
Inference Engines: James highlights the difference between Llama.cpp (for smaller/edge hardware) and vLLM, which has become the enterprise standard for multi-GPU data center inferencing (see the vLLM sketch below).
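To make the "Developer Simplicity" and "Production Path" items concrete, here is a minimal sketch of querying a locally served model. It assumes the model is exposed through an OpenAI-compatible chat endpoint on localhost; the URL, port, and model name are illustrative assumptions, not details taken from RamaLama itself.

```python
# Minimal sketch: send a chat request to a locally served model over an
# OpenAI-compatible endpoint. The URL, port, and model name are assumptions
# for illustration; adjust them to match how the model is actually served.
import json
import urllib.request

ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed local endpoint

payload = {
    "model": "local-model",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Summarize what a Dockerfile does in one sentence."}
    ],
    "temperature": 0.2,
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    body = json.load(response)

# OpenAI-style responses put the generated text under choices[0].message.content.
print(body["choices"][0]["message"]["content"])
```

In principle, because the endpoint shape stays the same whether the model runs on a laptop or inside a container on Kubernetes, application code like this would not need to change when the stack is packaged for production.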
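RAG in the "Red Hat AI" item refers to the general retrieve-then-generate pattern rather than any specific product API. The toy sketch below illustrates that pattern with a deliberately simple word-overlap retriever standing in for a real embedding model and vector store.

```python
# Toy Retrieval-Augmented Generation loop: pick the most relevant snippets
# for a question, then fold them into the prompt sent to the model. A real
# system would use an embedding model and a vector store instead of the
# word-overlap scoring used here.

DOCS = [
    "RHEL provides a supported, stable base operating system for servers.",
    "Podman runs OCI containers without requiring a root daemon.",
    "vLLM serves large language models across multiple GPUs in a data center.",
]

def score(question: str, doc: str) -> int:
    """Count the lowercase words shared between the question and a document."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(DOCS, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Combine retrieved context and the question into a single prompt."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    # The resulting prompt would then be sent to whatever model is being served.
    print(build_prompt("How does Podman run containers?"))
```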
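For the Llama.cpp vs. vLLM comparison, this short sketch shows vLLM's offline generation API and the tensor-parallel setting used to spread a model across several GPUs. It assumes vLLM is installed on capable GPU hardware; the model name and GPU count are placeholders, not recommendations.

```python
# Sketch of vLLM's offline inference API (requires `pip install vllm` and
# GPU hardware). The model name and tensor_parallel_size are placeholders.
from vllm import LLM, SamplingParams

# tensor_parallel_size splits the model's weights across multiple GPUs,
# the multi-GPU data-center scenario contrasted with Llama.cpp on the edge.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2", tensor_parallel_size=2)

params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(["Explain what an inference engine does."], params)

for output in outputs:
    print(output.outputs[0].text)
```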