OpenAI API simulator on localhost
Preface
OpenAI opens new horizons in software development. It is now possible to write programs in a declarative style by simply describing the desired outcome. This approach allows developers to focus on what they want to achieve, rather than detailing the step-by-step instructions to get there. As a result, the development process becomes more intuitive and efficient, fostering innovation and reducing the complexity of coding.
Prerequisites
Key
```
export OPENAI_API_MOCK_KEY='sk-the-mock-key-for-openai-simulation-on-localhost'
echo $OPENAI_API_MOCK_KEY
sk-the-mock-key-for-openai-simulation-on-localhost
```
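Inside the application or its tests, the same key can be read back from the environment. A minimal sketch, assuming only that the variable was exported as shown above:

```
import os

# Read the mock key exported above and fail early if it is missing.
mock_key = os.environ.get("OPENAI_API_MOCK_KEY")
if not mock_key:
    raise RuntimeError("OPENAI_API_MOCK_KEY is not set in this shell")

print(mock_key)  # sk-the-mock-key-for-openai-simulation-on-localhost
```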
Virtual Environment
```
pip list
Package           Version
----------------- --------
annotated-types   0.7.0
anyio             4.4.0
certifi           2024.7.4
click             8.1.7
dnspython         2.6.1
email_validator   2.2.0
fastapi           0.111.1
fastapi-cli       0.0.4
greenlet          3.0.3
h11               0.14.0
httpcore          1.0.5
httptools         0.6.1
httpx             0.27.0
idna              3.7
iniconfig         2.0.0
Jinja2            3.1.4
markdown-it-py    3.0.0
MarkupSafe        2.1.5
mdurl             0.1.2
packaging         24.1
pip               23.0.1
pluggy            1.5.0
pydantic          2.8.2
pydantic_core     2.20.1
pydantic-settings 2.3.4
Pygments          2.18.0
pytest            8.3.2
python-dotenv     1.0.1
python-multipart  0.0.9
PyYAML            6.0.1
rich              13.7.1
setuptools        66.1.1
shellingham       1.5.4
sniffio           1.3.1
SQLAlchemy        2.0.31
starlette         0.37.2
typer             0.12.3
typing_extensions 4.12.2
uvicorn           0.30.1
uvloop            0.19.0
watchfiles        0.22.0
websockets        12.0
wheel             0.38.4
```
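If you prefer, the environment can also be checked from Python. This is only an illustrative sketch using the standard library's `importlib.metadata`; the package selection below is the subset the simulator and its tests depend on:

```
from importlib.metadata import PackageNotFoundError, version

# Packages the simulator and its tests depend on.
for package in ("fastapi", "uvicorn", "pydantic", "httpx", "pytest"):
    try:
        print(f"{package} {version(package)}")
    except PackageNotFoundError:
        print(f"{package} is missing from the virtual environment")
```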
Description
For the sake of simplicity in OpenAI application development, I want to build a FastAPI application that simulates responses from the OpenAI server. It should allow me to test and debug my application on localhost with minimal interruptions.
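For example, once the simulator is running locally, an application under development can be pointed at it instead of the real API. The sketch below assumes the official `openai` Python client (1.x, not part of the environment listed above) and a simulator listening at `http://localhost:8000/v1`; the port and path prefix are assumptions for illustration, not something the simulator provides yet:

```
import os

from openai import OpenAI  # assumes the `openai` package is installed separately

# Point the client at the local simulator instead of api.openai.com;
# the base URL is an assumed address for a locally running instance.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key=os.environ["OPENAI_API_MOCK_KEY"],
)
# From here on, every request made through `client` goes to localhost.
```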
Create Assistant
API Simulacrum
I want to build a FastAPI application for simulating responses from the OpenAI server. To begin with, create the endpoint `create_assistant`, which, for the request:
```
curl "https://api.openai.com/v1/assistants" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_MOCK_KEY" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{
    "instructions": "You are a personal math tutor.",
    "name": "Math Tutor",
    "description": "The description of the assistant",
    "tools": [{"type": "code_interpreter"}],
    "model": "gpt-4-turbo",
    "temperature": 0.8,
    "top_p": 1.0
  }'
```
returns the response:
```
{
  "id": "asst_abc123",
  "object": "assistant",
  "created_at": 1698984975,
  "name": "Math Tutor",
  "description": "The description of the assistant.",
  "model": "gpt-4-turbo",
  "instructions": "You are a personal math tutor.",
  "tools": [
    { "type": "code_interpreter" }
  ],
  "metadata": {},
  "top_p": 1.0,
  "temperature": 0.8,
  "response_format": "auto"
}
```
Where `id`, `object`, `created_at`, `metadata`, and `response_format` are mock values, but `name`, `description`, `model`, `instructions`, `tools`, `top_p`, and `temperature` are values taken from the request data.
In addition, write the unit test for testing `create_assistant`, with `unittest.main(exit=False)` at the bottom.
FastAPI Application for Simulating OpenAI Responses
To create a FastAPI application that simulates responses from an OpenAI server, we will implement the `create_assistant` endpoint. This endpoint will accept a POST request with the specified JSON body and return a mocked response.
Here’s the complete implementation, including the FastAPI application and the corresponding unit tests.
```
import unittest
import time

from fastapi import FastAPI
from fastapi import HTTPException
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()


# Define the request model
class AssistantRequest(BaseModel):
    instructions: str
    name: str
    description: str
    tools: list
    model: str
    temperature: float
    top_p: float


# Define the response model (not strictly necessary, but for clarity)
class AssistantResponse(BaseModel):
    id: str
    object: str
    created_at: int
    name: str
    description: str
    model: str
    instructions: str
    tools: list
    metadata: dict
    top_p: float
    temperature: float
    response_format: str


@app.post("/create_assistant", response_model=AssistantResponse)
def create_assistant(data: AssistantRequest):
    mock_response = AssistantResponse(
        id="asst_abc123",
        object="assistant",
        created_at=int(time.time()),  # using current time for created_at
        name=data.name,
        description=data.description,
        model=data.model,
        instructions=data.instructions,
        tools=data.tools,
        metadata={},
        top_p=data.top_p,
        temperature=data.temperature,
        response_format="auto"
    )
    return mock_response


client = TestClient(app)


class TestCreateAssistant(unittest.TestCase):
    def test_create_assistant(self):
        request_data = {
            "instructions": "You are a personal math tutor.",
            "name": "Math Tutor",
            "description": "The description of the assistant.",
            "tools": [{"type": "code_interpreter"}],
            "model": "gpt-4-turbo",
            "temperature": 0.8,
            "top_p": 1.0
        }
        response = client.post("/create_assistant", json=request_data)
        expected_response = {
            "id": "asst_abc123",
            "object": "assistant",
            "created_at": response.json()["created_at"],  # Expecting a timestamp
            "name": "Math Tutor",
            "description": "The description of the assistant.",
            "model": "gpt-4-turbo",
            "instructions": "You are a personal math tutor.",
            "tools": [
                {"type": "code_interpreter"}
            ],
            "metadata": {},
            "top_p": 1.0,
            "temperature": 0.8,
            "response_format": "auto"
        }
        self.assertEqual(response.status_code, 200)
        self.assertDictEqual(response.json(), expected_response)


unittest.main(exit=False)
```
Running the script executes the test:

```
.
----------------------------------------------------------------------
Ran 1 test in 0.012s

OK
```
Explanation of the FastAPI Application
- AssistantRequest Model: This `BaseModel` defines the expected request parameters for assistant creation.
- AssistantResponse Model: This `BaseModel` defines the structure of the mocked response.
- POST Endpoint: The `/create_assistant` endpoint takes an `AssistantRequest` object as input and returns an `AssistantResponse` object. It generates a mock response with preset values for certain fields while using the values from the request for others.
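Beyond the `TestClient`-based test, the endpoint can also be exercised against a running instance of the simulator using `httpx`, which is already in the environment listed above. The module name, host, and port below are assumptions for illustration:

```
import httpx

payload = {
    "instructions": "You are a personal math tutor.",
    "name": "Math Tutor",
    "description": "The description of the assistant.",
    "tools": [{"type": "code_interpreter"}],
    "model": "gpt-4-turbo",
    "temperature": 0.8,
    "top_p": 1.0,
}

# Assumes the simulator was started locally, e.g. with `uvicorn main:app --port 8000`,
# where `main.py` holds the FastAPI application shown above.
response = httpx.post("http://localhost:8000/create_assistant", json=payload)
print(response.status_code)   # 200
print(response.json()["id"])  # asst_abc123
```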