Friday, January 26, 2024 – 11:00AM to 2:00PM
In person in CRTP 2 in Hock Plaza and virtual
Presented by: Sage Arbor, PhD, Sr. Informaticist, Technology & Data Solutions, Duke Clinical Research Institute
Please join Sage Arbor, PhD, for a dynamic and collaborative learning experience where participants will delve deep into the world of large language models. Attendees will engage in hands-on activities including prompt engineering techniques, investigating how AIs can improve other AIs, assessing the quality of model outputs, and making insightful comparisons between different language models.
Embark on a journey to understand the nuances of quality metrics, distinguishing between those that are easily measurable and those that pose challenges. Explore the multifaceted aspects of evaluating AI-generated content, with an emphasis on the importance of human evaluation in refining model performance.
One of the highlights of the workshop is the discussion and exploration of strategies to enhance AI models. We will explore AIs given the persona of a quality assurance inspector, cycling repeatedly to critique and improve another AI's output. Attendees will have the opportunity to gain practical insights into optimizing and customizing models to better suit specific tasks and domains, and to appreciate the potential for tailoring AI to their unique needs.
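For a flavor of the inspector-persona cycle described above, the loop below is a minimal sketch in Python. All function names are hypothetical, and the model calls are stubbed with placeholder strings; a real implementation would replace `generate_draft`, `inspect`, and `revise` with calls to an actual LLM API.

```python
def generate_draft(prompt):
    # Stub for a base-model call; a real version would query an LLM API.
    return f"Draft answer to: {prompt}"

def inspect(draft):
    # Stub QA-inspector persona: return a critique string, or None if the
    # draft passes. Here we simply flag text that still looks like a draft.
    if draft.startswith("Draft"):
        return "Rewrite as a polished answer, not a draft."
    return None

def revise(draft, critique):
    # Stub revision call; a real version would send the draft plus the
    # inspector's critique back to the model for rewriting.
    return draft.replace("Draft answer to", "Polished answer to")

def qa_cycle(prompt, max_rounds=3):
    # Generator/inspector loop: cycle until the inspector approves the
    # output or the round limit is reached.
    output = generate_draft(prompt)
    for _ in range(max_rounds):
        critique = inspect(output)
        if critique is None:
            break
        output = revise(output, critique)
    return output
```

The key design point the workshop explores is this division of roles: one model (or persona) produces content while another, prompted as a critical inspector, decides whether it is good enough and, if not, what to fix.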
In this intellectually stimulating workshop, participants will not only gain a deeper understanding of large language models but also acquire the skills and knowledge necessary to harness the full potential of AI in their respective fields.