From Pirates to Nobleman: Simulating Multi-Agent Conversations using OpenAI’s ChatGPT and Python

Many people use ChatGPT for its text-generation capability and have included it in their day-to-day workflows. However, few people know that you can also use it to create multi-agent conversations between fictional and non-fictional characters. Ever wondered if AI could take you on a time-traveling journey to eavesdrop on a chat between an 18th-century pirate and a nobleman? Say no more! In this blog post, we’re diving deep into the captivating universe of ChatGPT. We use a Python script and OpenAI’s ChatGPT to simulate a multi-agent conversation about life’s big questions. Plus, we’ll give you the lowdown on how to bring your own characters to life to make the convo even more riveting.

So, buckle up and get ready for a time-warping chat that’ll not only tickle your curiosity but also get you thinking about the meaning of life itself!

Also: ChatGPT Prompt Engineering: A Practical Guide for Businesses

Multi-Agent Conversations between Two Instances of ChatGPT – How Does It Work?

GPT, the second part of the name ChatGPT, stands for “Generative Pre-trained Transformer”, a fancy name for one of the most groundbreaking innovations of the last ten years. One cool thing about ChatGPT is that it can have a conversation with itself. How? By using the answer from one instance as the next question for another instance.

Imagine this: You let ChatGPT-1 talk, take its answer, and give it to ChatGPT-2 as a new question. You keep doing this, and before you know it, you’ve got an ongoing chat between the two!

This unique feature can be super handy for many things. For instance, you could build more lifelike chatbots or create authentic and varied dialogues for characters in a story.

But there’s a catch. Sometimes, they can get stuck in loops or spit out gibberish. That’s why it’s crucial to keep an eye on these chats to make sure they make sense and stay on topic.
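In code, this ping-pong mechanism looks roughly like the minimal sketch below. It is a simplified, hypothetical version of the script developed step by step later in this post; it assumes openai.api_key has already been set and uses a throwaway helper function ask_chatgpt.

import openai  # assumes openai.api_key has been set, as shown later in this post

# A minimal sketch of the ping-pong mechanism between two "instances" of ChatGPT
def ask_chatgpt(persona, message):
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": persona},
                  {"role": "user", "content": message}],
        max_tokens=100)
    return completion.choices[0].message.content

personas = ["You are an 18th-century pirate.", "You are an 18th-century French nobleman."]
message = "Good day, Sir. Wonderful day, isn't it?"
for turn in range(6):
    # the answer of one instance becomes the next question for the other
    message = ask_chatgpt(personas[turn % 2], message)
    print(message)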

The illustration shows the first three steps in a conversation between ChatGPT and itself. In the sample, the prompt includes information that makes ChatGPT alternate between the roles of a pirate and an aristocrat.

Also: Eliminating Friction: How LLMs such as OpenAI’s ChatGPT Streamline Digital Experiences and Reduce the Need for Search

How to Manage a Conversation Context with ChatGPT

When using ChatGPT models, there are three different roles available: system, user, and assistant. Understanding these roles is critical when designing a conversation because it allows us to manage the context and information provided to the model.

  • The system role is used for messages that set up the conversation and tell the model how it should behave. Including a system message is not required, but doing so helps establish the conversation’s context and guides the model’s subsequent responses.
  • The user role indicates that a message comes from an end-user or an application. These are the prompts that trigger a response from the model, so it is essential to craft them carefully and give the model the context and information it needs to produce a useful answer.
  • The assistant role indicates that a message comes from the assistant, i.e., the model itself. By sending previous assistant responses again in subsequent requests, we can maintain continuity and guide the model’s next responses.

The code below shows how to use these roles when requesting the ChatGPT model.

prompt = [{"role": "system", "content": general_instructions}, 
          {"role": "user", "content": task},
         {"role": "assistant": previous_responses}] 

#print('Generating response from OpenAI...') 
completion = openai.ChatCompletion.create(
  model=model_engine, 
  messages=prompt, 
  temperature=0.5, 
  max_tokens=100)

Understanding the different roles available when using ChatGPT models is critical for designing effective conversations. By carefully crafting messages with the appropriate role, we can give the model the context and information it needs to produce useful and relevant responses.
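To make the role of conversation history more concrete, here is a minimal, hypothetical sketch (not part of the pirate script below) that sends an earlier assistant reply back together with the next user message so the model can keep the context; it assumes openai.api_key has already been set.

import openai  # assumes openai.api_key is already set

# Hypothetical two-turn example: the assistant message carries the earlier reply
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "My name is James."},
    {"role": "assistant", "content": "Pleased to meet you, James."},
    {"role": "user", "content": "What is my name?"}]  # answerable only thanks to the history above

completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(completion.choices[0].message.content)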

Simulating a Multi-Agent Conversation between a Pirate and a Nobleman

Now that you have a broad understanding of multi-agent conversations, we will focus on the practical part. In the following, we will use ChatGPT to simulate a discussion between a pirate and a nobleman on the sense of life. The characters have distinct personalities and goals, which are reflected in their responses and in their distinctive conversational styles.

The full code is available in the GitHub repository.

OpenAI ChatGPT has various use cases. Among others, it can be used to simulate multi-agent conversations between fictional and non-fictional characters.

Some Words on the OpenAI API Key and Inference Costs

To run the code below, you will need an OpenAI API key, which you can obtain from OpenAI or from Azure OpenAI. Yes, you will have to sign up.

What about inference costs? We will use the ChatGPT Turbo model (gpt-3.5-turbo) and call its completion endpoint several times per conversation, which leads to some costs. The cost of using GPT models depends on the model type and the number of processed tokens. In the use case discussed in this article, each code execution involves 20 API calls to the model. Since this model is highly cost-effective and we will not generate an excessive amount of text, executing the code will only cost a few cents.
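As a rough back-of-the-envelope check (assuming a gpt-3.5-turbo price of around $0.002 per 1,000 tokens at the time of writing; prices change, so check OpenAI’s current pricing page), the cost of one full run can be estimated like this:

# Rough cost estimate for one run of the script (all numbers are assumptions)
calls = 20                   # one completion per conversation round
tokens_per_call = 300        # rough guess: prompt plus ~100 completion tokens
price_per_1k_tokens = 0.002  # assumed gpt-3.5-turbo price at the time of writing
estimated_cost = calls * tokens_per_call / 1000 * price_per_1k_tokens
print(f"Estimated cost: ${estimated_cost:.3f}")  # roughly one cent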

Step #1 Setting up Imports and OpenAI Key

Let’s begin by importing a few libraries: openai, datetime, and the Azure identity and Key Vault packages. We will also set up the OpenAI API key. You can, for example, store and retrieve the key from an environment variable or from an Azure Key Vault. Storing the key directly in the code is not advised, as you might accidentally commit your code to a public repository and expose the key.

The code below will also create a folder to store the conversations. Each conversation will be stored as an HTML file.

import openai
import datetime as dt
from azure.identity import AzureCliCredential
from azure.keyvault.secrets import SecretClient
import time
import os

# timestamp used to name the conversation file
timestamp = dt.datetime.now().strftime("%Y%m%d_%H%M%S")

# read the OpenAI API key from an environment variable ...
API_KEY = os.environ.get("OPENAI_API_KEY")
if API_KEY is None:
    # ... or fall back to an Azure Key Vault (requires a prior "az login")
    print('trying to get API key from azure keyvault')
    keyvault_name = 'your-keyvault-name'
    client = SecretClient(f"https://{keyvault_name}.vault.azure.net/", AzureCliCredential())
    API_KEY = client.get_secret('openai-api-key').value
openai.api_key = API_KEY

# create a folder to store the conversations if it does not exist
path = 'ChatGPT_conversations'
if not os.path.exists(path):
    os.makedirs(path)

Step #2 Functions for Prompts and OpenAI Completion

We begin by defining three essential functions used to create and maintain a conversation using the OpenAI ChatGPT completion endpoint.

  • The first function, initialize_conversation, sets up the conversation by building an initial instruction and prompt from a given topic and character description. With the characters defined below, the opening prompt is “Good day, Sir. Wonderful day, isn’t it?”.
  • The second function, respond_prompt, builds the instructions and a new prompt from the previous response so the conversation can continue. This function is called repeatedly as the conversation progresses.
  • Finally, the openai_request function sends the prompt created by the previous two functions to the OpenAI chat completion endpoint and returns the generated response to the calling function.

By using these functions, we can establish a fully functional and engaging conversation system that utilizes the power of OpenAI’s ChatGPT model.

# This function creates a prompt that initializes the conversation
def initialize_conversation(topic='', character=''):
    instructions = f' You have a conversation on {topic}. You can bring up any topic that comes to your mind.'
    instructions = character['description'] + instructions
    task = 'Good day, Sir.'
    if topic != '':
        task = task + " Wonderful day, isn't it?"
    return instructions, task

# This function creates a prompt that responds to the previous response
def respond_prompt(response, topic='', character=''):
    instructions = f'You have a conversation with someone on {topic}. \
    Reply to questions and bring up any topic that comes to your mind. \
    Do not say more than 2 sentences at a time.'
    instructions = character['description'] + instructions
    task = f'{response}'
    return instructions, task

# OpenAI request using the ChatGPT turbo model
def openai_request(instructions, task, model_engine='gpt-3.5-turbo'):
    prompt = [{"role": "system", "content": instructions},
              {"role": "user", "content": task}]

    # print('Generating response from OpenAI...')
    completion = openai.ChatCompletion.create(
        model=model_engine,
        messages=prompt,
        temperature=1.0,  # a higher temperature leads to more creative responses
        max_tokens=100)

    response = completion.choices[0].message.content
    return response
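As a quick, hypothetical smoke test of these helpers (the actual character dictionaries are only defined in Step #3, so a throwaway one is used here), you could run:

# Hypothetical one-off call to verify the helpers and the API key work
demo_character = {"name": "James", "description": "You are a polite 18th-century nobleman. "}
instructions, task = initialize_conversation('The sense of life', demo_character)
print(openai_request(instructions, task))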

Step #3 Defining Characters

After defining the functions, the next step is to create the conversation by specifying the topic and the characters that will participate in the debate. In the following example, the topic is “the sense of life,” and the two characters are James, an Aristocrat, and Blackbeard, a Pirate. These characters are described with unique personalities and behaviors that will be utilized in the conversation simulation.

With this framework, it is simple to modify the behaviors, voice, and tone of the characters by adjusting their descriptions. This flexibility allows for the creation of intriguing conversations on any topic between different characters. The possibilities are endless; one could simulate a debate between Darth Vader and Harry Potter, or even describe the behavior and background of the characters in more detail.

In short, this framework provides a flexible and adaptable platform for generating engaging conversations, allowing for the exploration of various perspectives and topics.

# initialize conversation on the following topic
topic = 'The sense of life'
conversation_rounds = 20

# description of character 1
color_1 = 'darkblue'
character_1 = {
    "name": 'James (Aristocrat)',
    "description": 'You are a French nobleman from the 18th century. \
    Your knowledge and worldview correspond to those of a common aristocrat from that time. \
    You speak in a distinguished manner. \
    You respond in one or two sentences. \
    You are afraid of pirates but also curious to meet one.'}

# description of character 2
color_2 = 'brown'
character_2 = {
    "name": 'Blackbeard (Pirate)',
    "description": 'You are a devious pirate from the 18th century who tends to swear. \
    Your knowledge and worldview correspond to those of a common pirate from that time. \
    You respond in one or two sentences. \
    You are looking for a valuable treasure and trying to find where it is hidden. \
    You try to steer the conversation back to the treasure no matter what.'}
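To illustrate how easily a different persona can be swapped in, here is a hypothetical third character in the same format. It is not used in the rest of the script, but it could replace character_1 or character_2:

# hypothetical alternative character, following the same structure as above
character_alt = {
    "name": 'Marie (Scientist)',
    "description": 'You are a curious scientist from the early 20th century. \
    You explain your reasoning in simple terms. \
    You respond in one or two sentences. \
    You relate every topic back to a scientific observation.'}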

Step #4 Start the Conversation

Now that we have our characters defined, it’s time to start the conversation. The next section of the code invokes the conversation between the two characters by initializing it with a greeting from one of the characters. In the code below, James starts the conversation with a prompt: “Good day, Sir. Wonderful day, isn’t it?”.

The conversation then proceeds with each character taking turns responding to the previous prompt. The response from each character is generated by calling the openai_request() function, which sends a prompt to OpenAI’s GPT-3.5 model and returns a response generated by the model.

The conversation continues for a specified number of rounds, which can be set by changing the value of the conversation_rounds variable. Each round consists of a response from one of the characters, with the two characters taking turns.

conversation = ''
for i in range(conversation_rounds):
    # initialize conversation
    if i == 0:
        print('Initializing conversation...')
        text_color = color_1
        name = character_1['name']
        instructions, task = initialize_conversation(topic, character_1)
        response = openai_request(instructions, task)
        print(f'{name}: {task}')
        conversation = f'<p style="color: {text_color};"><b>{name}</b>: {task}</p> \n'
    # alternate between character_1 and character_2
    else:
        if i % 2 == 0:
            text_color = color_1
            name = character_1['name']
            instructions, task = respond_prompt(response, topic, character_1)
        else:
            text_color = color_2
            name = character_2['name']
            instructions, task = respond_prompt(response, topic, character_2)

        # OpenAI request
        response = openai_request(instructions, task)

        # wait some seconds between requests
        time.sleep(15)

        # add response to conversation after linebreak
        print(f'{name}: {response}')
        conversation += ' ' + f'<p style="color: {text_color};"><b>{name}</b>: {response}</p> \n'

    # store (and update) the conversation in an HTML file named with the timestamp
    filename = f'{path}/GPTconversation_{timestamp}.html'
    with open(filename, 'w') as f:
        f.write(conversation)

As you can see, the conversation is quite entertaining and even touches upon some aspects of an interesting philosophical question. But above all, it is an amusing example of an exotic use case for ChatGPT.

By modifying the character descriptions and prompts, you can create interesting conversations on any topic between various characters. For example, you can create a conversation between Albert Einstein and Marie Curie on the topic of physics, or between Abraham Lincoln and Martin Luther King Jr. on the topic of civil rights. The possibilities are endless!

Summary

Arrr sailor, you have reached the end of this post! So let’s do a quick recap. This article presented a Python script that simulates a multi-agent ChatGPT conversation between a pirate and a nobleman. The script uses the ChatGPT language model to generate responses for the characters based on a given topic. We provided instructions for running the script and for customizing the personalities and behaviors of the characters. In addition, we discussed how the script works: a response generated by OpenAI is used as the prompt for another ChatGPT instance. In this way, ChatGPT can be used to simulate a conversation on any topic.

Overall, I hope this article was able to demonstrate the potential of using generative AI and natural language processing to create engaging and entertaining dialogues between virtual characters.

If you have any questions or want to share your experiences with the script, please let me and everyone else know in the comments.

Sources and Further Reading

https://en.wikipedia.org/wiki/Blackbeard
OpenAI.com/models
Azure OpenAI Service
ChatGPT helped to revise this article, but the thoughts are human-made.
Images generated with Midjourney.com

Author

  • Florian Follonier (Zurich)

    Hi, I am Florian, a Zurich-based Cloud Solution Architect for AI and Data. Since completing my Ph.D. in 2017, I have been working on the design and implementation of ML use cases in the Swiss financial sector. I started this blog in 2020 with the goal of sharing my experiences and creating a place where you can find key concepts of machine learning and materials to help you kick-start your own Python projects.
