Aug. 15, 2023 01:33 PM

Why the right support is still crucial for the success of generative AI

As we stand at the dawn of a new era in artificial intelligence, the debut of OpenAI's ChatGPT has elicited reactions ranging from jubilation and awe to skepticism and apprehension, reflecting the diversity of perspectives on this technological breakthrough. Amid the buzz, it's vital to strike a balance between the optimism of tech evangelists — who herald ChatGPT as a game-changer in productivity and customer engagement — and the cautionary voices warning of potential regulatory and societal implications.

This groundbreaking technology undeniably holds immense potential. Yet, to harness its benefits while mitigating risks, we must cut through the noise and foster a clear, comprehensive understanding of ChatGPT and its capabilities. As we navigate these uncharted waters, let us embark on this journey with an open mind, ready to explore, learn and adapt to this remarkable innovation in the realm of artificial intelligence.

As a technology executive who has spent three decades at the intersection of science research, engineering logistics and customer communications management (CCM), I’ll admit to being in the camp of those who are enthusiastic about the AI advancements exhibited by ChatGPT. Generative AI has many benefits to offer the CCM space, including streamlined processes, better quality communications and increased personalization, to name a few. However, what we might miss amid the noise is that, while generative AI may appear to be thinking for itself, its capabilities ultimately reflect human understanding of how it works and the skill with which we prompt it.

One of the most widely discussed aspects of ChatGPT is its ability to be convincingly lifelike in its responses to user-generated prompts and questions. This linguistic command is based on language prediction, a hallmark of generative AI technologies classified as large language models (LLMs). Like other generative AI models, LLMs are “trained” by processing vast quantities of data unsupervised, deducing and learning the rules of grammar, syntax and composition that govern natural language. In internalizing this data, LLMs note words that typically appear together, honing their ability to predict which word should come next in a sequence. In addition to helping the technology generate sentences without obvious logical errors, this predictive capability means that LLMs can be trained over time to use the specific lexicon of an organization or industry, enabling them to employ field-specific language at various levels of complexity and readability.
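
To make that idea concrete, here is a toy sketch of next-word prediction: it simply counts which words follow which in a tiny sample corpus and predicts the most frequent continuation. Production LLMs learn these statistics across billions of parameters rather than raw counts, but the underlying principle of predicting the next token from context is the same. The corpus and function names below are illustrative only.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which words follow which
# in a small sample corpus, then predict the most likely continuation.
# Real LLMs learn these statistics over billions of parameters, not counts,
# but the underlying idea -- predict the next token from context -- is the same.

corpus = (
    "the policy covers water damage . "
    "the policy covers fire damage . "
    "the claim covers water damage ."
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("policy"))  # -> "covers"
print(predict_next("covers"))  # -> "water" (seen twice, vs. "fire" once)
```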

The widely cited shortcomings of ChatGPT, whose GPT acronym stands for generative pre-trained transformer, are perhaps the best evidence of why this training is so important. Users have reported that their ChatGPT queries return factual inaccuracies, strange tangents and evidence of bias, all stemming from the terabytes of human-generated data on which it was trained. In other words, the success of a generative AI model depends on human input and our understanding of how to train, prompt and hone it.

In today's rapidly evolving digital landscape, LLMs such as ChatGPT, Google's Bard and Microsoft's Bing are being harnessed to revolutionize content creation, from drafting persuasive advertising copy to crafting precise responses to customer service queries. However, using this technology effectively requires an understanding of "prompt engineering," akin to carefully wording a wish told to a genie to steer the outcome toward the desired result.
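
As a rough illustration, consider the difference between a vague prompt and an engineered one. The prompts and the send_to_llm helper below are hypothetical placeholders rather than any particular vendor's API; the point is how much steering the second prompt provides.

```python
# A minimal sketch of prompt engineering for customer communications.
# The prompts and send_to_llm() are hypothetical; substitute your own
# LLM client (a vendor SDK or an internal gateway) where indicated.

vague_prompt = "Write a reply to a customer about a late delivery."

engineered_prompt = """You are a customer service writer for a logistics company.
Write a reply to a customer whose delivery is three days late.

Constraints:
- Apologetic but confident tone
- 8th-grade reading level
- Under 120 words
- End with the next step the customer should expect
"""

def send_to_llm(prompt: str) -> str:
    """Placeholder for a call to your organization's approved LLM endpoint."""
    raise NotImplementedError("Wire this to your LLM client of choice.")

# The engineered prompt constrains tone, reading level, length and structure,
# which is what steers the model toward an on-brand, usable first draft.
```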

While LLMs are progressively becoming a vital tool in CCM, their use does raise concerns about data security, particularly around private personal information (PPI). Therefore, it is essential to choose packaged solutions that not only optimally leverage the capabilities of LLMs, but also ensure that PPI does not inadvertently leave the safety of an organization's firewall.
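
One way to picture that safeguard is masking obvious personal data before a prompt ever leaves your environment. The sketch below is illustrative only, using simple patterns for email addresses and phone numbers; packaged CCM solutions apply far more robust detection, such as named-entity recognition and field-level tagging, behind the firewall.

```python
import re

# Illustrative safeguard: mask obvious personal data before a prompt leaves
# the organization's environment. These patterns are examples only.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def mask_ppi(text: str) -> str:
    """Replace detected personal data with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a renewal notice for jane.doe@example.com, phone 555-123-4567."
print(mask_ppi(prompt))
# -> "Draft a renewal notice for [EMAIL], phone [PHONE]."
```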

The ultimate goal is to develop these AI models to function autonomously. However, the current reality is that they still require significant human feedback. The silver lining, though, is that you don't need to be an AI expert to reap the benefits of these technologies. The right vendor can provide a customized solution, training and fine-tuning the LLM specific to your industry, organization and data protection protocols. Moreover, these vendors can help enhance your LLM by feeding it only the most accurate and relevant data, ensuring the generated responses are not only accurate, but also free from bias and compliance issues. This way, your organization can fully harness the power of AI while maintaining data integrity and security.

In other words, generative AI can offer a great starting point for the creation of your customer communications and save significant time and energy on the content authoring process. However, we’re still a long way off from generative AI models that can function without input from human users, which means finding knowledgeable support remains a necessary and important first step. This will help you take full advantage of these new and improved capabilities by determining the data, prompts and feedback that will train your model most effectively, allowing you to be sure that your AI-generated starting point accurately reflects your organization’s standards around critically important parameters, such as desired reading levels and sentiment.

Atif Khan has over 20 years of experience building successful software development, data science, and AI engineering teams that have delivered demonstrable results. As the Vice President of AI and Data Science at Messagepoint, Khan has established a comprehensive AI research and engineering practice and delivered two AI platforms (MARCIE and Semantex) that have brought a fresh perspective to the CCM industry. Through collaboration with the leadership team, he has defined the vision and objectives for these platforms, accelerated their market launch and forged academic partnerships to achieve long-term product research goals.
