It’s easy to generate content with a Large Language Model (LLM), but the output often suffers from hallucinations (fabricated content), outdated information (not grounded in the latest data), and reliance on public data alone (no access to your private data). The output format can also be unpredictable, the text may include harmful content or personally identifiable information (PII), and working with large context windows can get expensive. All of this makes LLMs less than ideal for real-world applications.
In this talk, we’ll start with a quick overview of the latest advancements in LLMs. We’ll then explore various techniques for overcoming common LLM challenges: grounding and Retrieval-Augmented Generation (RAG) to enrich prompts with relevant data; function calling to give LLMs access to fresher information; batching and context caching to keep costs under control; frameworks for evaluating and safety testing your LLMs; and more!
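To give a flavor of the grounding/RAG idea mentioned above, here is a minimal sketch (not code from the talk itself): a toy keyword-overlap retriever and a prompt-building helper, both hypothetical names, showing how retrieved snippets can be folded into the prompt before the model is called.

```python
# Minimal RAG sketch: retrieve relevant snippets, then ground the prompt with them.
# `call_llm` is a hypothetical stand-in for whatever LLM client you actually use.

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Augment the prompt so the model answers from the supplied context only."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(question, documents))
    return ("Answer using only the context below. "
            "If the answer is not in the context, say so.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

docs = ["Our refund window is 30 days.", "Support hours are 9am-5pm CET."]
prompt = build_grounded_prompt("How long do customers have to request a refund?", docs)
# answer = call_llm(prompt)  # hypothetical call; swap in your provider's client
print(prompt)
```

In a real system the toy retriever would be replaced by a vector search over your own documents, but the prompt-augmentation step looks essentially the same.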
By the end of this session, you’ll have a solid understanding of how LLMs can fail and what you can do to address these issues.
