The rise of GenAI
AI is not new, but the availability of high- performing compute resource and internet- scale data sets, coupled with technological advancements in the way machines learn, have created the perfect environment for AI to thrive.
Among the many recent innovations, Generative AI has captured our imagination for its ability to communicate with us in almost human-like fashion. A subset of machine learning, characterised by its ability to process complex inputs and respond with original content, the possibilities for this exciting new technology are hard to overstate. Generative AI has the potential to transform the way we work and the experiences we offer to our customers, but extracting these benefits whilst ensuring quality and safety needs careful thought.
The Large Language Models (LLM’s) that power Generative AI are the Swiss army knives of the AI world. Trained on massive data sets, they possess impressive knowledge and an ability to answer questions on a wide range of topics. However, for many applications it is their intrinsic ability to understand language, and their ability to understand and work with data, that is of higher importance. We can talk to them, and they understand. They can reason over the data we provide, extracting relevant information and using it to answer questions, solve problems, create documents, images, presentations and more.
These generalized abilities however present a challenge – how can we narrow their focus to a specific industry, a specific product or a specific document? How can we keep their responses grounded in our content rather than the terabytes of data they absorbed as part of their training?
Continued pre-training and fine-tuning enable models to develop specialisms, but typically require large volumes of training data and specialist knowledge to get good results. Retrieval Augmented Generation (RAG) takes a different approach by providing relevant extracts from source material as additional background context to the question. Prompt Engineering – crafting questions (or “prompts”) in such a way to elicit the best possible response, including verbosity and tone of voice – has arisen as specialty subject with whole books devoted to it. But alongside this has also risen the risk of Prompt Injection – deliberately manipulating prompts to try and guide a model to reveal sensitive information or respond in a manner that could cause brand embarrassment.
But despite the risks, the promise of AI is too great to ignore. Easy access to off- the-shelf models and an ever-growing library of AI enabled tools and services is enabling and driving rapid innovation.
And with technological guardrails in place, strong governance and oversight, and with humans in the loop at key points in the process, it is possible for us to begin leveraging the power of AI now, whilst continuing to protect our data, our users and our reputations from harm.