OpenAI Responses API¶
Harness Guides uses OpenAI's Responses API as the default OpenAI reference surface.
Official docs: OpenAI's API reference for the Responses API.
Why this is the default here¶
The Responses API is the right fit for harness documentation because it brings model output, tool use, streaming, long-running work, and conversation continuity into a single surface. That maps far more cleanly onto agent harness design than teaching everything through legacy Chat Completions patterns.
Concepts used across this site¶
- `responses.create` for the main loop entrypoint
- `input` for user and system-side content
- `instructions` for high-level runtime guidance
- `previous_response_id` for response continuity
- `tools` for built-in tools, remote MCP, and custom function calls
- `tool_choice` when the harness wants tighter control
- `parallel_tool_calls` when the runtime allows safe concurrency
- `stream` for incremental model output and tool events
- `background` for long-running work that should outlive one synchronous wait
- `include` when the runtime needs richer returned data
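The `stream` option above delivers incremental events rather than one final response. Below is a minimal sketch of collecting text deltas from such a stream, assuming the documented `response.output_text.delta` event type; the `accumulate_text` helper is illustrative, not part of the SDK:

```python
def accumulate_text(events) -> str:
    """Join text deltas from a Responses API event stream into one string.

    Each event carries a `type`; text fragments arrive on
    `response.output_text.delta` events via their `delta` field.
    Other event types (tool calls, completion markers) are skipped here.
    """
    chunks = []
    for event in events:
        if event.type == "response.output_text.delta":
            chunks.append(event.delta)
    return "".join(chunks)

# Usage sketch (requires an API key):
# client = OpenAI()
# stream = client.responses.create(model="gpt-5", input="...", stream=True)
# print(accumulate_text(stream))
```

A harness would typically branch on more event types than this (tool events, errors), but the shape of the loop is the same.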
Python example¶
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    instructions="You are a coding harness. Use tools when needed.",
    input="Summarize the repository layout and call tools if necessary.",
    tools=[
        {
            "type": "function",
            "name": "read_file",
            "description": "Read a file from disk",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                },
                "required": ["path"],
            },
        }
    ],
    parallel_tool_calls=True,
)
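If the model answers the request above with a `read_file` function call, the harness runs it locally and returns the result as a `function_call_output` item on the next turn. A minimal sketch, assuming function-call items are handled as plain dicts (e.g. via the SDK's `model_dump()`); `run_read_file` and `tool_outputs` are hypothetical helper names:

```python
import json


def run_read_file(arguments: str) -> str:
    # Hypothetical local implementation of the read_file tool declared above.
    # The model sends arguments as a JSON string matching the tool's schema.
    path = json.loads(arguments)["path"]
    with open(path) as f:
        return f.read()


def tool_outputs(output_items):
    # Map function_call items (as plain dicts) to function_call_output
    # items suitable for the next request's input.
    results = []
    for item in output_items:
        if item.get("type") == "function_call" and item.get("name") == "read_file":
            results.append({
                "type": "function_call_output",
                "call_id": item["call_id"],
                "output": run_read_file(item["arguments"]),
            })
    return results


# Usage sketch (requires an API key):
# followup = client.responses.create(
#     model="gpt-5",
#     previous_response_id=response.id,
#     input=tool_outputs([item.model_dump() for item in response.output]),
# )
```

Echoing the `call_id` is what lets the model pair each output with the call it made; `previous_response_id` carries the rest of the conversation state so the harness does not resend history.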
How to use this page¶
- Read it before the query loop and tool chapters.
- Use it as the vocabulary guide for every OpenAI-specific example.
- Keep architecture decisions provider-portable, even when the examples use Responses API.