How HGS Agent X is Revolutionizing CX with Generative AI

With summer winding down, the metaphorical heat has not subsided; if anything, it has intensified around generative AI, which dominates most of our tech discussions today. HGS has been at the forefront of this conversation with its investments in Speech AI in its Gartner®-recognized accelerator, HGS Agent X.

HGS Agent X is a cloud-based console that gives enterprises a single-pane-of-glass view of customer information and history, which in turn promotes knowledge sharing and collaboration and redefines the customer experience (CX) journey to be faster, more engaging, and more supportive.

Thanks to the efforts of its data science team, HGS Agent X has recently been enhanced with Large Language Model (LLM) support to improve CX and automation in the BPO/contact-center industry across multiple feature offerings:

  • Automated QA & QC – Until a few years ago, QA & QC was manual, requiring a dedicated team of floor leads and many staff hours to randomly sample 5-10% of calls per account for monitoring, scoring, and KPI assessments. Compounding this, team leads had to manually listen to archived calls and subjectively rate their agents’ performance, with scores colored by individual interpretation.

We have now completed our preliminary studies into deploying a customizable, trainable LLM of choice, forcing an answer of YES, NO, or NA against a standard QA & QC questionnaire and categorizing the results into three classical evaluation buckets. We’ve automated the scoring of these forms against agent performance, eliminating human involvement, freeing up team leads’ time, removing bias, and improving transparency. Additionally, 100% of calls are now processed, roughly 10% faster than before and in less than 24 hours after a call is recorded.
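The constrained-answer flow described above can be sketched in a few lines. This is a minimal illustration, not HGS’s actual pipeline: `ask_llm` is a stand-in for whatever model client is used, and the questions, fallback rule, and scoring are invented for the example.

```python
# Sketch: force each QA & QC answer into YES/NO/NA and score the form.
# `ask_llm` is a placeholder for the real model client (assumption).

QA_QUESTIONS = [
    "Did the agent understand the problem and offer a solution?",
    "Did the agent introduce themselves and show empathy/respect?",
    "Did the agent end the call on a good note?",
]
ALLOWED = {"YES", "NO", "NA"}

def score_call(transcript, ask_llm):
    """Constrain each answer to YES/NO/NA and compute a simple % score."""
    answers = []
    for q in QA_QUESTIONS:
        prompt = (f"Answer strictly YES, NO, or NA.\n"
                  f"Transcript:\n{transcript}\nQuestion: {q}")
        raw = ask_llm(prompt).strip().upper()
        answers.append(raw if raw in ALLOWED else "NA")  # invalid output falls back to NA
    scored = [a for a in answers if a != "NA"]           # NA answers don't count
    pct = 100 * scored.count("YES") / len(scored) if scored else None
    return answers, pct

# Usage with a stubbed model in place of a real LLM:
stub = lambda prompt: "YES" if "solution" in prompt else "NO"
answers, pct = score_call("…call transcript…", stub)
```

Falling back to NA on any answer outside the allowed set is one way to keep malformed model output from corrupting the aggregate score.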

  • Prompt Engineering – The obvious question may be, “If I can customize, nudge, guide, and tune my LLM, can I create one tailor-made reference model per geographic region that interprets contextual cultural lexicons and accents?” What about one that understands acronyms and abbreviations specific to a client? What’s next – lingo specific to an industry vertical? The experts at HGS can respond with a simple “yes.” HGS has developed its own custom child LLMs stemming from one parent reference LLM. More exciting information to come!
  • Querying Articles – We identify areas where agents have underperformed in any of the three classical buckets: understanding the problem and offering a solution, introducing oneself with empathy and respectfulness, and ending the call on a good note. Or it could be a mixture of some or all of these three categories, broken down into multiple sub-questions. Mentoring and coaching are now more focused and data-driven, while the dashboards give agents, team leads, and management the ability to look up areas of improvement in near real time in the form of FAQ sheets and policy documents.

A Deeper Look into Speech AI

HGS did a sample study of 15 questions, broken down into 40 sub-questions, in an English-language call center supporting 24×7 operations for a North American client.

On a sample base of 200 call recordings, HGS forced the LLM of choice to answer each of these questions with a “yes” or a “no” and asked it to provide reasons why it categorized a spoken yes as a yes and a spoken no as a no.

In parallel, and to avoid bias, HGS had a QA & QC team manually listen to the same set of calls, respond to the questionnaire with a yes or no, and provide their reasoning, akin to the LLM.

Sample Output Results

Manual inspection of the output by the data science team yielded a baseline accuracy of ~75-77%. A head-to-head comparison with the outputs populated by the QA & QC team, using a combination of Levenshtein distance/word matching and cosine similarity, revealed a 77.15% match for the 1,418 responses and a ~40-50% match on the reasoning. For an untrained model, these results were impressive.
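The comparison described above can be illustrated with a small sketch: Levenshtein distance fuzzily matches the short yes/no answers, and a bag-of-words cosine similarity compares the free-text reasoning. The response pairs below are invented for the example; they are not the study’s actual outputs.

```python
from collections import Counter
from math import sqrt

def levenshtein(a, b):
    """Classic edit distance, used here to fuzzily match short answers."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cosine(a, b):
    """Bag-of-words cosine similarity between two reasoning strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative (answer, reasoning) pairs from the LLM and the human team.
llm = [("yes", "agent offered a refund"), ("no", "no greeting heard")]
human = [("yes", "agent proposed a refund"), ("yes", "greeting was brief")]

# Answers match if within edit distance 1; reasoning is scored by cosine.
answer_match = sum(levenshtein(l[0], h[0]) <= 1 for l, h in zip(llm, human)) / len(llm)
reason_sim = sum(cosine(l[1], h[1]) for l, h in zip(llm, human)) / len(llm)
```

In a production comparison the bag-of-words cosine would typically be replaced by an embedding-based similarity, but the shape of the computation is the same.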

  • Elastic search query – Gone are the days of coders proficient in Structured Query Language (SQL) taking in requests from CTOs, COOs, and CIOs via business analysts to extract intelligence from the contact center’s various databases. Instead, HGS Agent X pulls the query itself across:
    1. Call recording database
    2. Speech AI attributes database
    3. Call tracking database
    4. CRM
    5. Miscellaneous databases such as HR, performance, and coaching

With HGS Agent X doing much of the heavy lifting, we’ve put in place a no-code/low-code practice. LLMs support code generation with a high degree of proficiency, and we’ve enabled a window on the HGS Agent X screen where users, governed by their areas of interest, can type queries in simple business English and get the answer back. It doesn’t get much simpler than that.

While it may seem simple, behind the scenes the question is converted into a SQL query that is fired against all relevant databases, knowledge banks, and policy documents; the results are fetched and populate a designated tab of the HGS Agent X output screen or Power BI dashboard. The user still has the option to accept or reject the response, which helps fine-tune the response delivery mechanism.
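The shape of that flow can be sketched as follows. This is a toy illustration, not the HGS Agent X implementation: a template lookup stands in for the LLM’s text-to-SQL step, an in-memory SQLite database stands in for the real data stores, and the table, column names, and feedback log are all invented for the example.

```python
import sqlite3

# Stand-in for the LLM's text-to-SQL step (assumption: real system uses an LLM).
TEMPLATES = {
    "average handle time by agent":
        "SELECT agent, AVG(handle_secs) FROM call_tracking "
        "GROUP BY agent ORDER BY agent",
}

def answer(question, conn, feedback_log):
    """Translate a business-English question to SQL, run it, log for feedback."""
    sql = TEMPLATES.get(question.lower())
    if sql is None:
        return None
    rows = conn.execute(sql).fetchall()
    # The accept/reject decision is recorded later to fine-tune delivery.
    feedback_log.append({"question": question, "sql": sql, "accepted": None})
    return rows

# In-memory stand-in for the call tracking database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE call_tracking (agent TEXT, handle_secs REAL)")
conn.executemany("INSERT INTO call_tracking VALUES (?, ?)",
                 [("ana", 300), ("ana", 420), ("ben", 250)])

log = []
rows = answer("Average handle time by agent", conn, log)
log[-1]["accepted"] = True  # user accepts the response
```

Recording each question, the generated SQL, and the user’s accept/reject verdict is what gives the system a labeled signal for improving the translation step over time.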

HGS Agent X is making the jobs of agents, managers, and team leads easier, and we can’t wait to see what LLMs help us achieve next.

Sharath Tadepalli – Principal Data Scientist
