Why Use OpenAI in Your Python Bot?
Integrating a large language model (LLM) like GPT into your bot transforms it from a rigid command processor into a flexible, natural-language assistant. Users can ask questions in plain English, get context-aware answers, and have conversations that feel genuinely intelligent.
In this guide, we'll build a Python chatbot using the OpenAI API that maintains conversation history, uses a custom system prompt, and handles errors gracefully.
Prerequisites
- An OpenAI account with an API key (available at platform.openai.com)
- Python 3.9+
- The `openai` package: `pip install openai`
- The `python-dotenv` package: `pip install python-dotenv`
Step 1: Secure Your API Key
Create a .env file in your project root:
```
OPENAI_API_KEY=sk-your-key-here
```
Then load it in your script:
```python
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
```
Step 2: Build the Chat Loop with Memory
The OpenAI Chat API is stateless — it doesn't remember previous messages. To give your bot memory, you maintain a list of messages and send the full history with each request:
```python
conversation_history = [
    {
        "role": "system",
        "content": "You are PyBot, a helpful assistant specializing in Python programming, automation, and bots. Be concise and practical."
    }
]

def chat(user_message):
    conversation_history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=conversation_history,
        max_tokens=500,
        temperature=0.7
    )
    assistant_message = response.choices[0].message.content
    conversation_history.append({"role": "assistant", "content": assistant_message})
    return assistant_message

print("PyBot ready! Type 'quit' to exit.\n")
while True:
    user_input = input("You: ")
    if user_input.lower() == "quit":
        break
    reply = chat(user_input)
    print(f"PyBot: {reply}\n")
```
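The API call aside, the memory mechanism is plain list manipulation. Here is a minimal sketch with no network access, where `fake_completion` is a hypothetical stand-in for `client.chat.completions.create`; it shows how the history grows by two messages per turn, so each request carries the full prior conversation:

```python
# Sketch: how the history list gives the bot "memory".
# fake_completion is a hypothetical stand-in for the real API call.
history = [{"role": "system", "content": "You are PyBot."}]

def fake_completion(messages):
    # A real call would send `messages` to the model; here we just
    # report how much context the model would see.
    return f"(I can see {len(messages)} messages of context)"

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = fake_completion(history)
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("What is a list?"))  # model sees 2 messages
print(chat("And a tuple?"))     # model sees 4 messages: the full prior turn is included
```

Because the second call includes the first question and answer, the model can resolve "a tuple" in context, which is exactly what the real API does with your `conversation_history`.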
Understanding the Message Roles
| Role | Purpose |
|---|---|
| `system` | Sets the bot's personality, rules, and context |
| `user` | The human's input |
| `assistant` | The bot's previous responses (for memory) |
Step 3: Managing Token Limits
Conversation history grows indefinitely, which eventually hits token limits and increases costs. Implement a simple trimming strategy:
```python
MAX_HISTORY = 20  # keep the last 20 exchanges (40 messages)

def trim_history():
    global conversation_history
    system_msg = conversation_history[0]
    # Slice past the system message so it is never duplicated
    recent = conversation_history[1:][-(MAX_HISTORY * 2):]
    conversation_history = [system_msg] + recent
```
Call `trim_history()` at the start of each `chat()` call.
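To check that this actually bounds the history, here is a self-contained sketch that mirrors the trimming code above and simulates 100 exchanges (the message contents are illustrative):

```python
# Sketch: verifying that trimming keeps the history bounded.
MAX_HISTORY = 20  # keep the last 20 exchanges (40 messages)

conversation_history = [{"role": "system", "content": "You are PyBot."}]

def trim_history():
    global conversation_history
    system_msg = conversation_history[0]
    recent = conversation_history[1:][-(MAX_HISTORY * 2):]
    conversation_history = [system_msg] + recent

for turn in range(100):  # simulate 100 chat turns
    trim_history()  # called at the start of each turn, as in chat()
    conversation_history.append({"role": "user", "content": f"question {turn}"})
    conversation_history.append({"role": "assistant", "content": f"answer {turn}"})

# Bounded: 1 system message + 40 trimmed messages + 2 from the current turn
print(len(conversation_history))  # → 43
```

Without trimming, the list would hold 201 messages after 100 turns; with it, each request stays well under the model's context window regardless of how long the conversation runs.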
Step 4: Handling Errors Gracefully
```python
from openai import RateLimitError, APIConnectionError

def chat(user_message):
    conversation_history.append({"role": "user", "content": user_message})
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=conversation_history,
            max_tokens=500,
            temperature=0.7
        )
    except RateLimitError:
        return "I'm receiving too many requests right now. Please try again in a moment."
    except APIConnectionError:
        return "I'm having trouble connecting to my brain. Check your internet connection."
    except Exception as e:
        return f"Something went wrong: {e}"
    assistant_message = response.choices[0].message.content
    conversation_history.append({"role": "assistant", "content": assistant_message})
    return assistant_message
```
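Beyond returning a friendly message, transient failures like rate limits are often worth retrying with exponential backoff. Here is a generic, hedged sketch; `flaky_call`, the attempt counts, and the delays are all illustrative, not part of the OpenAI SDK:

```python
import time

def with_retries(fn, max_attempts=3, base_delay=0.01):
    """Call fn(), retrying on any exception with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))  # delay doubles each retry

# Hypothetical flaky call: fails twice, then succeeds.
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_retries(flaky_call))  # retries twice, then prints "ok"
```

In a real bot you would wrap only the `client.chat.completions.create(...)` call, catch the specific `RateLimitError` rather than bare `Exception`, and use a `base_delay` of a second or more.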
Next Steps
- Stream responses for a more natural, real-time feel using `stream=True`
- Add function calling to let the AI trigger real actions in your app
- Embed into Discord or Telegram to deploy your AI bot to a real audience
- Use RAG (Retrieval-Augmented Generation) with tools like LangChain to give the bot knowledge of your own documents
With these fundamentals in place, you have a fully functional AI chatbot that you can customize, deploy, and extend in almost any direction.