Subscribers have been asking me to write more about AI and how it works, and so I shall. This post, and others going into the mechanics of AI, is for paid subscribers only.

TL;DR

Retrieval Augmented Generation (RAG) is a way to make LLMs like GPT-4 more accurate and personalized to your specific data.
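The core RAG loop can be sketched in a few lines: retrieve the documents most relevant to a question, then stuff them into the prompt as grounding context before calling the model. This is a minimal illustration, not a production implementation — real systems use vector embeddings and a vector database for retrieval, while this sketch uses a toy keyword-overlap scorer, and the actual LLM call is left out.

```python
import re

def words(text: str) -> set[str]:
    """Tokenize to lowercase words, skipping very short ones (toy stopword filter)."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared words between query and document."""
    return len(words(query) & words(doc))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved documents into the prompt as grounding context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical private data the base model has never seen.
docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Shipping takes 5 to 7 business days within the US.",
]

prompt = build_prompt("What is the refund policy?", docs)
print(prompt)  # This prompt would then be sent to the LLM.
```

Swapping the keyword scorer for embedding similarity is the only conceptual change needed to get to the real thing; the retrieve-then-prompt shape stays the same.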
Alongside old-school fine-tuning, RAG is becoming the standard way to get better, more personalized results out of state-of-the-art LLMs.

Back to the future: training models

The funny thing about RAG is that the basic concept has been around for about as long as machine learning has. Long-time readers will recall that back in the day, I studied data science in undergrad. "Old school" machine learning, before everyone was calling it AI, was entirely predicated on training a new model for every problem.

...