

With summer winding down, the metaphorical heat has not subsided; if anything, it has intensified around generative AI, which dominates most of our tech discussions today. HGS has been at the forefront of this conversation with its investments in Speech AI through its Gartner®-recognized accelerator, HGS Agent X.
HGS Agent X is a cloud-based console that gives enterprises a single-pane-of-glass view of customer information and history, which in turn promotes knowledge sharing and collaboration and redefines the customer experience (CX) journey to be faster, more engaging, and more supportive.
Thanks to the efforts of its data science team, HGS Agent X has recently been enhanced with Large Language Model (LLM) support for improving CX and automation in the BPO/contact-center industry across multiple feature offerings.
We have now completed our preliminary studies into deploying a customizable, trainable LLM of choice, constraining its answers to YES, NO, or NA against a standard QA & QC questionnaire and categorizing the results into three classical evaluation buckets. We have automated the scoring of these forms against agent performance, eliminating human involvement, freeing up team leads' time, removing bias, and improving transparency. Additionally, 100% of calls are now processed 10% sooner and within 24 hours of a call being recorded.
A Deeper Look into Speech AI
HGS ran a sample study of 15 questions, broken down into 40 sub-questions, in an English-language call center supporting 24×7 operations for a North American client.
On a sample base of 200 call recordings, HGS constrained the LLM of choice to answer each of these questions with a “yes” or a “no” and asked it to explain why it categorized a spoken yes as a yes and a spoken no as a no.
In parallel, and to avoid bias, HGS had a QA & QC team manually listen to the same set of calls, respond to the questionnaire with a yes or no, and provide their reasons, just as the LLM did.
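To make the mechanics concrete, here is a minimal sketch of how an LLM's replies can be constrained to YES/NO/NA and collected per questionnaire item. The prompt wording, the `interactions` of the helper functions, and the stubbed `fake_llm` endpoint are all illustrative assumptions, not HGS's actual implementation.

```python
# Hypothetical sketch: forcing an LLM's QA answers into YES / NO / NA.
# The model call is stubbed out; in practice it would hit a real LLM endpoint.

PROMPT_TEMPLATE = (
    "You are auditing a recorded call transcript.\n"
    "Question: {question}\n"
    "Answer with exactly one of: YES, NO, NA, then give a one-sentence reason.\n"
    "Transcript:\n{transcript}"
)

ALLOWED = {"YES", "NO", "NA"}

def normalize_answer(raw: str) -> str:
    """Force the model's free-text reply into YES/NO/NA, defaulting to NA."""
    tokens = raw.strip().split()
    first = tokens[0].strip(".,:").upper() if tokens else ""
    return first if first in ALLOWED else "NA"

def score_call(transcript: str, questions: list[str], ask_llm) -> list[dict]:
    """Run every questionnaire item through the LLM and normalize the results."""
    results = []
    for q in questions:
        reply = ask_llm(PROMPT_TEMPLATE.format(question=q, transcript=transcript))
        answer, _, reason = reply.partition("\n")
        results.append({"question": q,
                        "answer": normalize_answer(answer),
                        "reason": reason.strip()})
    return results

def fake_llm(prompt: str) -> str:
    """Stub standing in for the real model endpoint."""
    return "YES\nThe agent greeted the caller by name."

print(score_call("Agent: Hello, Ms. Smith ...",
                 ["Did the agent greet the caller?"], fake_llm))
```

Forcing the first token into a fixed label set is what makes downstream scoring fully automatic: anything the model returns outside the allowed set degrades safely to NA instead of breaking the scorecard.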
Sample Output Results
Manual inspection of the output by the data science team yielded a baseline accuracy of ~75-77%. A head-to-head comparison with the outputs populated by the QA & QC team, using a combination of Levenshtein distance/word matching and cosine similarity, revealed a 77.15% match across the 1,418 responses and a ~40-50% match on the reasoning. For an untrained model, these results were impressive.
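The two comparison metrics mentioned above can be sketched in a few lines: Levenshtein distance for matching the yes/no answers and cosine similarity over word counts for comparing the free-text reasoning. This is a generic illustration of the techniques named in the post, not HGS's actual scoring pipeline.

```python
# Sketch of the head-to-head comparison: Levenshtein distance for the
# yes/no answers, cosine similarity over word counts for the reasoning.
from collections import Counter
import math

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer_match_rate(llm_answers: list[str], human_answers: list[str]) -> float:
    """Fraction of LLM answers that exactly match the human answers."""
    hits = sum(levenshtein(a.lower(), b.lower()) == 0
               for a, b in zip(llm_answers, human_answers))
    return hits / len(llm_answers)
```

Exact matching works for the short yes/no labels, while cosine similarity tolerates rewording in the reasoning, which is why the reasoning match rate is reported as a looser range.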
With HGS Agent X doing a lot of the heavy lifting, we've put a no-code/low-code practice in place. LLMs handle internal coding with a high degree of proficiency, and we've enabled a window on the HGS Agent X screen where users, governed by their areas of interest, can type queries in simple business English and get the answer back. It doesn't get much simpler than that.
While it may seem simple, behind the scenes the query is converted into SQL, fired against all relevant databases, knowledge banks, and policy documents, and the results populate a designated tab of the HGS Agent X output screen or Power BI dashboard. The user still has the option to accept or reject the response, which helps fine-tune the response delivery mechanism.
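The flow above, a business-English question translated to SQL plus an accept/reject feedback loop, can be sketched as follows. The table name, columns, prompt wording, and stubbed `fake_llm` translator are all hypothetical placeholders, assumed here only for illustration.

```python
# Hypothetical sketch of the business-English-to-SQL flow behind the query
# window. The translation step is stubbed; in production an LLM would do it.

SQL_PROMPT = (
    "Translate this business question into a SQL query over the "
    "'interactions' table (columns: agent_id, csat, handled_at):\n{question}"
)

def translate_to_sql(question: str, ask_llm) -> str:
    """Ask the LLM to turn a plain-English question into a SQL query."""
    return ask_llm(SQL_PROMPT.format(question=question)).strip()

def record_feedback(question: str, sql: str, accepted: bool,
                    feedback_log: list) -> None:
    """Store the user's accept/reject signal for later fine-tuning."""
    feedback_log.append({"question": question, "sql": sql, "accepted": accepted})

def fake_llm(prompt: str) -> str:
    """Stub standing in for the real text-to-SQL model."""
    return "SELECT agent_id, AVG(csat) FROM interactions GROUP BY agent_id;"

feedback_log = []
question = "What is the average CSAT per agent?"
sql = translate_to_sql(question, fake_llm)
record_feedback(question, sql, accepted=True, feedback_log=feedback_log)
print(sql)
```

The feedback log is the key design choice: every accept/reject decision becomes labeled training data, so the translation layer can improve without users ever writing SQL themselves.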
HGS Agent X is making the jobs of agents, managers, and their team leads easier, and we can't wait to see what LLMs help us achieve next.
Sharath Tadepalli – Principal Data Scientist