Sentiment and Market Analysis for Investment Performance Commentary

Since 2018, we have been listening to, and working with, firms that want to use specialist labour more efficiently and are considering process automation.

Different firms consider different use cases. Among the most common is investment performance commentary for fund factsheets and institutional client reporting. The process involves many touchpoints and is often sequential: one data anomaly, human mistake, outsourced component, or missed communication step frequently forces a replay of some or all of the process. Equally, an ad-hoc or urgent request triggers a similar sequential workflow. Some tools offer batching capabilities; some offer strong workflow capabilities.

Asset managers across the industry share the experience of grappling with operational challenges in this area. The truth is that no two client profiles are exactly the same, which adds another layer of complexity to the process.

Generally, the process comprises seven stages, which are referenced in the summary recommendations below.

Terms to understand:

Natural Language Processing (NLP), Natural Language Understanding (NLU), and Natural Language Generation (NLG) are closely related fields within artificial intelligence, each with distinct roles:

  1. Natural Language Processing (NLP):
    • Definition: NLP is a branch of AI focusing on the interaction between computers and humans through natural language. It encompasses a wide range of tasks, including language translation, sentiment analysis, and text summarisation.
    • Applications: Examples include chatbots, virtual assistants like Siri and Alexa, and language translation services.
  2. Natural Language Understanding (NLU):
    • Definition: NLU is a subset of NLP concerned with a machine’s comprehension of human language. It focuses on understanding the meaning and context of text or speech input.
    • Applications: NLU is used in tasks like sentiment analysis, intent recognition, and information extraction.
  3. Natural Language Generation (NLG):
    • Definition: NLG is another NLP subset that generates human-like text from structured data. It focuses on creating coherent and contextually appropriate responses.
    • Applications: NLG is used in automated report generation, chatbots, and content creation.

NLP is the overarching field that includes both NLU and NLG. NLU helps machines understand human language, while NLG enables them to generate human-like text.
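To make the NLU/NLG distinction concrete, here is a deliberately toy sketch: a lexicon-based tone scorer standing in for NLU, and a template filler standing in for NLG. The word lists, fund name, and figures are invented for illustration; real systems use trained models rather than hand-written rules.

```python
# Toy NLU: score tone by counting positive vs negative lexicon words.
POSITIVE = {"outperformed", "gain", "growth", "strong", "recovery"}
NEGATIVE = {"underperformed", "loss", "decline", "weak", "volatility"}

def sentiment_score(text: str) -> int:
    """Positive-word count minus negative-word count (a crude NLU proxy)."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Toy NLG: turn structured performance data into a sentence.
def generate_commentary(fund: str, return_pct: float, benchmark_pct: float) -> str:
    """Render structured figures as prose (a crude NLG proxy)."""
    verb = "outperformed" if return_pct > benchmark_pct else "underperformed"
    return (f"{fund} returned {return_pct:.1f}% over the period and "
            f"{verb} its benchmark ({benchmark_pct:.1f}%).")

commentary = generate_commentary("The Global Equity Fund", 4.2, 3.1)
print(commentary)
print(sentiment_score(commentary))
```

The two halves mirror a common reporting pipeline: generate commentary from the data, then score its tone before it reaches a reviewer.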

Machine Learning (ML) and Large Language Models (LLMs) are critical concepts in artificial intelligence:

Machine Learning (ML)

  • Definition: ML is a subset of artificial intelligence that involves training algorithms to learn from and make predictions or decisions based on data.
  • How It Works: ML models are trained on large datasets, learning patterns and relationships within the data. Once trained, these models can make predictions or decisions without being explicitly programmed to perform the task.
  • Applications: ML is used in various fields, including image recognition, speech recognition, recommendation systems, and autonomous vehicles.

Large Language Models (LLMs)

  • Definition: LLMs are a type of machine learning model designed to understand and generate human language. They are trained on vast amounts of text data to learn the statistical relationships between words and phrases.
  • How They Work: LLMs use neural network architectures, particularly transformers, to process and generate text. They can predict the next word in a sentence, generate coherent paragraphs, translate languages, and more.
  • Applications: LLMs are used in chatbots, virtual assistants, content creation, language translation, and summarisation.

Machine learning provides the foundation for creating intelligent systems, while large language models apply these principles to tasks involving human language.
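The core ML idea of learning patterns from data rather than hand-coding rules can be shown with a minimal sketch: a nearest-centroid classifier that "trains" by averaging labelled examples and "predicts" by distance. The data points and labels below are invented for illustration.

```python
from math import dist

def fit_centroids(samples):
    """Average the feature vectors per label (the 'training' step)."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (the 'inference' step)."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Toy data: (volatility, average return) -> asset class
training = [
    ([0.02, 0.01], "bond"), ([0.03, 0.01], "bond"),
    ([0.15, 0.07], "equity"), ([0.18, 0.09], "equity"),
]
centroids = fit_centroids(training)
print(predict(centroids, [0.16, 0.08]))  # classify a new, unseen point
```

Nothing in `predict` was explicitly programmed with the bond/equity rule; the boundary falls out of the training data, which is the defining property of ML.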

In the context of Large Language Models (LLMs), RAG and XAG refer to specific techniques and frameworks used to enhance the capabilities and applications of these models:

RAG (Retrieval-Augmented Generation)

  • Definition: RAG is a framework that combines retrieval-based and generation-based approaches to improve the performance of language models.
  • How It Works: It retrieves relevant documents or pieces of information from a large corpus and uses this retrieved information to generate more accurate and contextually appropriate responses.
  • Applications: RAG is particularly useful in scenarios where the model needs to provide detailed and factual answers, such as in question-answering systems, customer support, and knowledge-based applications.
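A minimal RAG sketch, under stated assumptions: retrieval here is plain keyword overlap over an invented three-document corpus, whereas production systems use vector embeddings, and the assembled prompt would then be sent to an LLM rather than printed.

```python
CORPUS = [
    "The fund returned 4.2% in Q3, ahead of its benchmark.",
    "Fees were reduced to 0.45% in January.",
    "The fund's largest holding is in the technology sector.",
]

def retrieve(query: str, corpus, k: int = 2):
    """Rank documents by how many words they share with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, corpus) -> str:
    """Augment the question with retrieved context before generation."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What did the fund return in Q3?", CORPUS))
```

The key design point is that the model answers from retrieved documents rather than from its parametric memory alone, which is what makes responses more factual and auditable.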

XAG (Explainable AI Generation)

  • Definition: XAG focuses on making the outputs of AI models, including LLMs, more interpretable and understandable to humans.
  • How It Works: It involves techniques that explain the decisions or outputs generated by the model, helping users understand the reasoning behind the model’s responses.
  • Applications: XAG is crucial in fields where transparency and trust are essential, such as healthcare, finance, and legal applications.

These frameworks help enhance the usability and reliability of LLMs, making them more practical for a wide range of applications.
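The explainability idea can be sketched without any model at all: a toy tone scorer that returns not just a score but the evidence behind it, so a reviewer can audit why a piece of commentary was rated the way it was. The lexicon is invented for illustration; real explainability techniques work over learned models.

```python
# Invented word weights standing in for a learned model.
TONE = {"strong": 1, "growth": 1, "recovery": 1,
        "weak": -1, "decline": -1, "volatility": -1}

def explain_tone(text: str):
    """Score the text and attribute the score to the words that caused it."""
    hits = [(w, TONE[w]) for w in text.lower().split() if w in TONE]
    return {"score": sum(v for _, v in hits), "evidence": hits}

result = explain_tone("Strong growth offset periodic volatility")
print(result)
```

Returning `evidence` alongside `score` is the essence of the transparency requirement in regulated fields: the output can be challenged word by word rather than taken on trust.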

Prompt Engineering

Prompt engineering involves designing and refining the inputs (prompts) given to large language models (LLMs) to elicit the desired responses. This process is crucial for optimising the performance and reliability of AI models.

  • Purpose: To guide the model towards generating accurate, relevant, and contextually appropriate outputs.
  • Techniques: Includes crafting clear and specific prompts, using examples, and iteratively testing and refining prompts to improve model responses.
  • Applications: Used in chatbots, virtual assistants, content generation, and more to ensure the AI behaves as intended.
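The techniques above (a clear instruction, a worked example, a constrained output) can be combined into a reusable template. The wording, field names, and figures below are invented for the sketch; the assembled string would be sent to whichever LLM the firm uses.

```python
TEMPLATE = """You are an investment writer. Summarise the figures in one
factual sentence. Do not speculate about future performance.

Example:
Figures: fund +2.1%, benchmark +1.4%, period Q1
Summary: The fund returned 2.1% in Q1, 0.7 points ahead of its benchmark.

Figures: {figures}
Summary:"""

def build_prompt(figures: str) -> str:
    """Fill the template so every request carries the same guidance."""
    return TEMPLATE.format(figures=figures)

print(build_prompt("fund +4.2%, benchmark +3.1%, period Q3"))
```

Keeping the instruction and the few-shot example in one template means refinements ("iteratively testing and refining") happen in a single place rather than in every call site.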

Guardrails

Guardrails are mechanisms designed to ensure the safe and ethical use of AI models. They help prevent the model from generating harmful, biased, or inappropriate content.

  • Purpose: To mitigate risks such as biased outputs, privacy breaches, and security vulnerabilities.
  • Types:
    • Content Filters: Automatically detect and block harmful or inappropriate content.
    • System Metaprompts: Instructions embedded within the system to guide the model’s behaviour and provide additional safeguards.
    • External Guardrails: Tools and services like Amazon Bedrock Guardrails that offer comprehensive safety and privacy protections.
  • Applications: Essential in fields like healthcare, finance, and customer service, where trust and reliability are paramount.

By combining prompt engineering with robust guardrails, developers can create AI applications that are both powerful and responsible.
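A minimal content-filter guardrail, applied after generation, might look like the sketch below. The blocked phrases and the reject-rather-than-redact policy are invented for illustration; managed services such as Amazon Bedrock Guardrails provide far more comprehensive versions.

```python
import re

# Invented patterns a compliance team might flag in fund commentary.
BLOCKED = [r"guaranteed returns?", r"no risk", r"cannot lose"]

def apply_guardrail(draft: str):
    """Return (passed, text); failing drafts are flagged, not published."""
    for pattern in BLOCKED:
        if re.search(pattern, draft, flags=re.IGNORECASE):
            return False, f"Draft rejected: matched '{pattern}'"
    return True, draft

ok, text = apply_guardrail("The fund targets steady income.")
print(ok, text)
ok, text = apply_guardrail("This product offers guaranteed returns.")
print(ok, text)
```

Sitting between the model and publication, a check like this is what turns a capable generator into one that is safe to put in front of clients.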

Summary recommendations

Stages 1, 2, and 4 can be achieved relatively easily, with significant business benefits to capture and opportunity costs to avoid. The translation of structured data into words is becoming a mature capability.

Stage 3 (and step 6½) applies to almost every situation and requires a moderate amount of effort.

Many firms stop there, reap the benefits above, and jump straight to Stage 7. It’s a familiar path that many have trodden, and some of the more prominent vendors now have a nascent product offering for it.

Since the introduction of the world’s most quoted LLM, the market has asked, ‘What else can this technology do?’

Some are now entering Stage 5 and Stage 6. This requires the appetite to increase experimentation and embrace innovation. The newer acronyms above need to become part of the everyday vocabulary.

We at AI infin8 recommend finding a trusted partner who can showcase and demonstrate a path that fits your approach and appetite. If you’d like to discuss your business problems and how AI can help solve them, please get in touch.