Large Language Models (LLMs) continue to amaze us with their capabilities. However, putting LLMs to work in production AI applications requires integrating private data. Join us for a conversation with Jerry Liu of LlamaIndex, who shares valuable insights into data ingestion, indexing, and querying tailored to LLM applications. Along the way, we explore different query patterns and venture beyond vector databases.
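As a rough illustration of the ingest → index → query flow discussed in the episode, here is a minimal sketch using the open-source llama_index package. The folder path, question string, and import locations are assumptions for illustration (import paths have shifted across LlamaIndex releases), not details taken from the conversation.

```python
# Minimal LlamaIndex ingest -> index -> query sketch.
# Assumes an OpenAI API key is set (the library's default LLM/embedding backend)
# and a llama_index release circa 2023; newer versions import from llama_index.core.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Ingest: load private documents from a local folder (path is illustrative).
documents = SimpleDirectoryReader("./data").load_data()

# Index: chunk and embed the documents into an in-memory vector index.
index = VectorStoreIndex.from_documents(documents)

# Query: retrieve relevant chunks and synthesize an answer with an LLM.
query_engine = index.as_query_engine()
response = query_engine.query("What do these documents say about the topic?")
print(response)
```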
Join the discussion
Changelog++ members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com
• Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.
• Typesense – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Jerry Liu – GitHub, X
• Chris Benson – Website, GitHub, LinkedIn, X
• Daniel Whitenack – Website, GitHub, X

Show Notes:
• LlamaIndex Docs
• LlamaHub
• LlamaIndex Blog

Something missing or broken? PRs welcome!