Agentic AI programming is what happens when coding assistants stop acting like autocomplete and start collaborating on real work. In this episode, we cut through the hype and incentives to define “agentic,” then get hands-on with how tools like Cursor, Claude Code, and LangChain actually behave inside an established codebase. Our guest, Matt Makai, now VP of Developer Relations at DigitalOcean and creator of Full Stack Python and Plushcap, shares hard-won tactics. We unpack what breaks, from brittle “generate a bunch of tests” requests to agents amplifying technical debt and uneven design patterns. We also discuss a sane git workflow for AI-sized diffs. You’ll hear practical Claude tips, why developers write more bugs when typing less, and where open source agents are headed. Hint: The destination is humans as editors of systems, not just typists of code.
Episode sponsors
Posit
Talk Python Courses
Links from the show
Matt Makai: linkedin.com
Plushcap Developer Content Analytics: plushcap.com
DigitalOcean Gradient AI Platform: digitalocean.com
DigitalOcean YouTube Channel: youtube.com
Why Generative AI Coding Tools and Agents Do Not Work for Me: blog.miguelgrinberg.com
AI Changes Everything: lucumr.pocoo.org
Claude Code - 47 Pro Tips in 9 Minutes: youtube.com
Cursor AI Code Editor: cursor.com
JetBrains Junie: jetbrains.com
Claude Code by Anthropic: anthropic.com
Full Stack Python: fullstackpython.com
Watch this episode on YouTube: youtube.com
Episode #517 deep-dive: talkpython.fm/517
Episode transcripts: talkpython.fm
Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong
--- Stay in touch with us ---
Subscribe to Talk Python on YouTube: youtube.com
Talk Python on Bluesky: @talkpython.fm at bsky.app
Talk Python on Mastodon: talkpython
Michael on Bluesky: @mkennedy.codes at bsky.app
Michael on Mastodon: mkennedy