
Enhancing Interactions with LLMs and ReAct Prompting Using the LangChain Library

Large Language Models (LLMs) like GPT-4, combined with the innovative technique of ReAct prompting, have the potential to enhance AI interactions, helping us build more sophisticated AI systems that understand the nuances of human language and improve the results we get from them.

In this post we delve into the inner workings of LLMs and ReAct prompting and demonstrate their practical implementation using the LangChain library in Python, with the aim of shaping more refined, precise, and contextually aware AI systems.


Before diving into the implementation of Large Language Models and ReAct Prompting using the LangChain library, you should ensure that your environment meets the following requirements:

  • Python: We used Python 3.9.7 for the examples in this post. LangChain requires Python 3.8.1 or later; we recommend using the latest stable version of Python for the best compatibility.
  • LangChain Library: The LangChain library is pivotal to our examples. Ensure you have the latest version installed in your environment. You can install it using pip:

      pip install langchain
  • Text Editor: Any text editor can be used to write the Python scripts. We used Visual Studio Code for our examples due to its extensive Python support, but other editors like Sublime Text, Atom, or even a simple text editor like Notepad++ would suffice.
  • Operating System: Our examples were developed and tested on Ubuntu 20.04 LTS. However, the code should run seamlessly on other operating systems as well, including macOS (version 10.14 and above) and Windows (Windows 10 Professional or later).
  • Internet Connection: A stable internet connection is required, since the agent calls hosted services such as the OpenAI API and an external search API at runtime.

Before proceeding with the demo, make sure you meet these requirements. This will ensure a smooth and hassle-free experience as you explore the exciting realm of Large Language Models and ReAct Prompting.

LLMs 101

Artificial intelligence continues to transform our approach to problem-solving, and one area where its impact is most profound is natural language processing. The creation of large language models (LLMs), such as OpenAI's GPT-3 and GPT-4, represents a significant milestone in this field. These models generate text that is coherent, contextually appropriate, and often difficult to distinguish from human-written text.

This blog post focuses on the use of LLMs together with a novel technique called ReAct Prompting, exploring how the two can enhance AI interactions. Furthermore, we'll demonstrate how to implement these techniques using the LangChain library.

Did you know that modern large language models (LLMs) such as GPT-4 are reported to have been trained on datasets containing trillions of words? Now, imagine steering these models with an innovative technique called ReAct Prompting, which interleaves the model's reasoning with actions such as external lookups. The result is a more interactive and precise model that can better understand and emulate human-like conversation.

LLMs and ReAct Prompting can come together to create more refined, precise, and contextually aware AI systems. It's a valuable resource if you've ever wondered, "How can we make AI understand and generate text more like a human?" or "How can we improve the results of our interactions with AI models?"

Borrowing a quote from our case study on the application of ReAct Prompting: "The fine-tuning of AI models using human feedback has led to a significant improvement in context-aware responses. It's as if our AI has developed a sense of conversation!" This insight serves as the starting point for our exploration.

We will focus on the understanding and practical application of LLMs and ReAct Prompting using the LangChain library in Python. Whether you're a student, a researcher, an AI enthusiast, or a professional, this knowledge will equip you to build more sophisticated AI models that understand the diversity in human language. So, are you ready to revolutionize the way we interact with AI?

ReAct Prompting

Shunyu Yao and a group of researchers introduced a framework named ReAct in the 2022 paper "ReAct: Synergizing Reasoning and Acting in Language Models," in which LLMs are used to generate both reasoning traces and task-specific actions in an interleaved manner.

Generating reasoning traces lets the model induce, track, and update action plans, and even handle exceptions. The action step lets the model interface with and gather information from external sources such as knowledge bases or environments.

The ReAct framework can allow LLMs to interact with external tools to retrieve additional information that leads to more reliable and factual responses.
Results show that ReAct can outperform several state-of-the-art baselines on language and decision-making tasks, and that it improves the human interpretability and trustworthiness of LLMs. Overall, the authors found that the best approach combines ReAct with chain-of-thought (CoT) prompting, allowing the model to use both its internal knowledge and external information obtained during reasoning.


ReAct is a method that combines "acting" and "reasoning" to help Large Language Models (LLMs) learn new tasks and make decisions. This approach addresses issues that LLMs sometimes face, like producing incorrect facts and compounding errors. Extending "chain-of-thought" prompting, ReAct guides LLMs to verbalize their reasoning and decide on the next steps for a task. This lets the model adapt its plan based on new information, including data from external sources like Wikipedia. In this way, ReAct can search for information to support its reasoning, while the reasoning itself guides the model toward what to look for next.

ReAct prompting uses a process involving steps of thought, action, and observation to guide Large Language Models (LLMs) in learning new tasks or making decisions. Here's a simple example: suppose we ask an LLM about the elevation range of a particular area. The model first decides it needs to search for information (Thought 1), performs a search (Action 1), then observes the results (Observation 1). It processes those results, decides the next step, and continues this cycle until it can answer the original question. Depending on the task, the structure of these prompts can vary: some tasks require many thought-action-observation steps, while others need more actions and fewer thoughts.
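As a rough illustration of this cycle, here is a minimal, self-contained sketch in plain Python. The tiny knowledge base and the scripted "plan" are hypothetical stand-ins for a real model's generations, loosely following the paper's elevation-range example:

```python
# Hypothetical stand-in for an external knowledge source.
KNOWLEDGE_BASE = {
    "Colorado orogeny": "The Colorado orogeny extends into the High Plains.",
    "High Plains": "The High Plains rise in elevation from around 1,800 to 7,000 ft.",
}

def search(query: str) -> str:
    """Stand-in for an external search tool (e.g. a Wikipedia lookup)."""
    return KNOWLEDGE_BASE.get(query, "No results found.")

def react_loop(question: str, plan: list) -> str:
    """Run scripted thought/action/observation steps until a final answer."""
    transcript = [f"Question: {question}"]
    for step, (thought, action) in enumerate(plan, start=1):
        transcript.append(f"Thought {step}: {thought}")
        transcript.append(f"Action {step}: {action}")
        if action.startswith("Search["):
            # Act: query the external source, then record the observation.
            query = action[len("Search["):-1]
            transcript.append(f"Observation {step}: {search(query)}")
        else:
            # Finish[...] carries the final answer.
            print("\n".join(transcript))
            return action[len("Finish["):-1]
    return "No answer reached."

answer = react_loop(
    "What is the elevation range for the area the Colorado orogeny extends into?",
    [
        ("I need to find the area the Colorado orogeny extends into.",
         "Search[Colorado orogeny]"),
        ("It extends into the High Plains; I need their elevation range.",
         "Search[High Plains]"),
        ("The High Plains rise from about 1,800 to 7,000 ft.",
         "Finish[1,800 to 7,000 ft]"),
    ],
)
```

In a real ReAct setup the thoughts and actions are generated by the LLM at each step rather than scripted in advance; the point here is only the interleaving of reasoning, tool use, and observation.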


Below is a high-level example of how the ReAct prompting approach works in practice. We will use OpenAI for the LLM and LangChain, which already has built-in functionality that leverages the ReAct framework to build agents that perform tasks by combining the power of LLMs with different tools.
First, let's install and import the necessary libraries:

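A minimal sketch of the setup, assuming the 2023-era `langchain` (0.0.x) API with the `openai` and `google-search-results` packages installed, and `OPENAI_API_KEY` and `SERPAPI_API_KEY` set in the environment:

```python
# Install the dependencies first (the SerpAPI wrapper used later is one
# option; any LangChain-supported search tool would work):
#   pip install langchain openai google-search-results

import os

from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent

# API keys are expected in the environment:
#   OPENAI_API_KEY  - for the OpenAI LLM
#   SERPAPI_API_KEY - for the search tool
assert "OPENAI_API_KEY" in os.environ and "SERPAPI_API_KEY" in os.environ
```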

Now we can configure the LLM, the tools we will use, and the agent that lets us leverage the ReAct framework together with them. Notice that we use a search API to look up external information and the LLM itself as a math tool.

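A sketch of the configuration under the same assumptions; `"serpapi"` and `"llm-math"` are LangChain's built-in tool identifiers for web search and LLM-backed arithmetic:

```python
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent

# Temperature 0 keeps the completions deterministic, which works best
# for tool-using agents.
llm = OpenAI(temperature=0)

# "serpapi" searches the web; "llm-math" lets the LLM do arithmetic.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# The zero-shot ReAct agent picks a tool from the tool descriptions
# alone, with no few-shot examples; verbose=True prints the
# thought/action/observation trace as it runs.
agent = initialize_agent(
    tools, llm, agent="zero-shot-react-description", verbose=True
)
```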

Once that's configured, we can run the agent with the desired query/prompt. Notice that, unlike the setup described in the paper, we are not expected to provide few-shot examples.

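Continuing with the `agent` configured above, running a query is a single call. The question below is an illustrative example, not the exact prompt from the original run; any question that mixes a web lookup with a calculation forces the agent to interleave both tools:

```python
# Illustrative query: the agent must first search for a fact, then hand
# the number it found to the math tool.
result = agent.run(
    "Who is the current CEO of OpenAI, and what is his age raised "
    "to the 0.23 power?"
)
print(result)
```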

With verbose output enabled, the chain execution prints each intermediate step: the agent's Thought, the Action it chooses (a search or a calculator call), the Action Input, and the resulting Observation, repeated until the agent emits a Final Answer.

The output we get after using this method is the agent's final answer to the original query, distilled from the observations it gathered along the way.


Key Takeaways:

  • Large Language Models (LLMs) like GPT-4 combined with the innovative technique of ReAct prompting can enhance AI interactions, leading to more refined, precise, and contextually aware AI systems.
  • Python's LangChain library provides practical means to implement LLMs and ReAct prompting, provided you meet the set system requirements, including the latest stable Python version, the LangChain library, a suitable text editor, a compatible operating system, and stable internet.
  • ReAct prompting is an innovative framework allowing LLMs to interleave generating reasoning traces and task-specific actions, helping to generate reliable and factual responses, increase human interpretability and trustworthiness of LLMs, and outperform several state-of-the-art baselines on language and decision-making tasks.
  • ReAct prompting works by guiding LLMs to verbally express their reasoning and decide on the next steps for a task, thus addressing issues that LLMs sometimes face, such as producing incorrect facts and compounding errors. This process involves steps of thought, action, and observation.
  • An example demonstrates how ReAct prompting can be practically implemented using the LangChain library, showing how it uses chain of thought to answer queries by performing searches, observing the results, deciding on the next steps, and executing these until the query is answered.

About Encora

Fast-growing tech companies partner with Encora to outsource product development and drive growth. Contact us to learn more about our software engineering capabilities.


Author Bio

Oliver Becerra Gonzalez is a data scientist at Encora with expertise in MLOps, data science, and data engineering. Skilled in technologies like SQL, Python, Kubernetes, and more, he holds degrees in Nanotechnology and a Ph.D. in Theoretical Physics, specializing in Biospintronics. Oliver's favorite subjects include mathematics, physics, and programming, reflecting his insatiable curiosity and love for learning.


Accelerate Your Path
to Market Leadership 

Encora logo

+1 (480) 991 3635

Innovation Acceleration