Today’s guest is the author of a popular Medium blog where he has recently been dissecting generative AI for technologists. I read his introduction to the transformer architecture and immediately realized our audience needs to meet him. A bit like great recent guest Ken Wenger, Pradeep makes complicated technology accessible.
By day, Pradeep Menon is a CTO in Microsoft's digital natives division in APAC. He has had one of the best ground-floor views of generative AI since Microsoft first invested in OpenAI in 2019, and again in March of this year.
Pradeep was previously in similar roles at Alibaba and IBM. He speaks frequently on topics related to emerging tech, data, and AI to global audiences and is a published author.
Listen and learn...
1. What surprises Pradeep most about the capabilities of LLMs
2. What most people don't understand about how LLMs like GPT are trained
3. The difference between prompting and fine-tuning
4. Why ChatGPT performs so well as a coding co-pilot
5. How RLHF works
6. How Bing uses grounding to mitigate the impact of LLM hallucinations
7. How Pradeep uses ChatGPT to improve his own productivity
8. How we should regulate AI
9. What new careers AI is creating
References in this episode...
• Ken Wenger on AI and the Future of Work
• Pradeep's book Data Lakehouse in Action
• D-ID speaking avatars