Mock REST APIs and LLM streaming endpoints. Instantly.

WireMock requires Java, Maven, and configuration files. mockapi.dog runs in your browser. No dependencies. No installation. No setup.

  • Mock REST APIs and stream tokens over SSE in the same formats as the GPT-4 and Claude APIs
  • Full control over HTTP methods, status codes, response bodies, and latency
  • No SDK, no JVM, no Docker - just a URL

Built for developers integrating REST and LLM APIs into production applications.

The Problem

You need a mock endpoint. Maybe for a frontend feature, maybe for an integration test, maybe for a demo. You look at WireMock.

First, add the Maven dependency. Or pull the Docker image. Write the stub mapping in JSON or Java. Configure the JUnit integration. Start the server. Hope the port isn't in use.

For a simple mock endpoint, you just created a build dependency, wrote configuration, and started a local server process. And WireMock doesn't support LLM streaming at all - no SSE, no token-by-token delivery.

Scenario

A developer building a streaming chat UI needs a mock endpoint that returns SSE events in OpenAI format. She also needs two REST endpoints for user and product data. WireMock requires her to set up a Java project or Docker container, write JSON stub mappings, and it still can't mock the streaming endpoint.

The Solution

mockapi.dog runs in your browser. Open the page. Define your response. Click save. Your endpoint is live and accessible from anywhere.

For REST, define any JSON with any HTTP method and status code. Add delays to simulate slow servers. Add error rates to test resilience. Add conditional errors triggered by request headers.

For LLM streaming, choose OpenAI, Anthropic, or generic SSE format. Tokens stream over Server-Sent Events exactly like the production API. No server to run. No SDK to configure. No account to create.

Feature Breakdown

OpenAI-compatible streaming

Your mock endpoint sends chunked SSE data in the exact OpenAI chat completion format. Drop it into any OpenAI SDK integration. Test stream parsing, token rendering, and completion handling.
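As a sketch of what a client has to handle, OpenAI-style streams deliver `data:` lines, each carrying a JSON chunk whose `choices[0].delta.content` holds a text fragment, terminated by a `data: [DONE]` sentinel. The sample payload below is illustrative, not a verbatim capture:

```python
import json

def parse_openai_sse(raw: str) -> str:
    """Collect the streamed content fragments from an
    OpenAI-style chat-completion SSE body."""
    text = []
    for line in raw.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        text.append(delta.get("content", ""))
    return "".join(text)

# Two chunks followed by the [DONE] sentinel, as a mock stream might send them.
sample = (
    'data: {"choices":[{"delta":{"content":"Hel"}}]}\n\n'
    'data: {"choices":[{"delta":{"content":"lo"}}]}\n\n'
    'data: [DONE]\n\n'
)
print(parse_openai_sse(sample))  # -> Hello
```

A mock endpoint that speaks this format lets you unit-test exactly this kind of parsing and rendering code without touching the real API.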

Anthropic-compatible streaming

Mock Claude's streaming format with proper event types and delta content blocks. Test your Anthropic SDK integration without spending API credits.
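Anthropic's streaming format uses named event types; the text itself arrives in `content_block_delta` events whose `delta.text` carries each fragment. A minimal extractor over the JSON payloads, with an illustrative sample stream:

```python
import json

def collect_anthropic_text(events: list[str]) -> str:
    """Extract text fragments from Anthropic-style
    content_block_delta event payloads."""
    parts = []
    for payload in events:
        event = json.loads(payload)
        if event.get("type") == "content_block_delta":
            delta = event.get("delta", {})
            if delta.get("type") == "text_delta":
                parts.append(delta.get("text", ""))
    return "".join(parts)

# Illustrative event sequence: start, two text deltas, stop.
sample_events = [
    '{"type":"message_start","message":{"role":"assistant"}}',
    '{"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Hi "}}',
    '{"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"there"}}',
    '{"type":"message_stop"}',
]
print(collect_anthropic_text(sample_events))  # -> Hi there
```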

Configurable latency injection

Add millisecond-precision delays to any endpoint. Simulate slow networks, overloaded servers, or the natural token-by-token pace of a large language model.
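Latency injection is mostly useful for exercising client timeout logic. As an illustration, with a local stdlib server standing in for a delayed mock endpoint, a client timeout shorter than the injected delay should surface as an error rather than a hang:

```python
import http.server
import socket
import threading
import time
import urllib.error
import urllib.request

class SlowHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.5)  # injected delay: respond after half a second
        try:
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(b'{"ok": true}')
        except OSError:
            pass  # client may already have given up
    def log_message(self, *args):
        pass  # silence request logging

server = http.server.HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

# A client timeout shorter than the injected delay should fail fast.
timed_out = False
try:
    urllib.request.urlopen(url, timeout=0.1)
except (socket.timeout, urllib.error.URLError):
    timed_out = True
server.shutdown()
print(timed_out)  # -> True
```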

Header-conditional error responses

Return an error status code only when a specific header and value are present. Test authentication failures, feature flags, and routing logic without multiple endpoints.
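The rule is simple: the error status fires only when the request carries the matching header and value; otherwise the normal response goes out. Sketched as a predicate (all names here are hypothetical, for illustration only):

```python
def select_status(headers: dict, rule: dict) -> int:
    """Return the error status only when the configured trigger
    header is present with the configured value."""
    if headers.get(rule["header"]) == rule["value"]:
        return rule["error_status"]
    return rule["ok_status"]

# Hypothetical rule: force a 401 when X-Force-Error: auth is sent.
rule = {"header": "X-Force-Error", "value": "auth",
        "ok_status": 200, "error_status": 401}
print(select_status({"X-Force-Error": "auth"}, rule))  # -> 401
print(select_status({}, rule))                         # -> 200
```

One endpoint, two behaviors, switched from the client side by a header, which is what makes it useful for auth-failure and feature-flag tests.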

Zero infrastructure

No Java runtime. No Maven or Gradle. No Docker. No stub mapping files. No local server process to manage. Everything is hosted and browser-based.

Custom HTTP responses with full control

Any method. Any status code. Any JSON body. CORS headers included. Your mock endpoint behaves exactly the way you configure it.

mockapi.dog vs WireMock

Feature | mockapi.dog | WireMock
Setup time | Seconds | Minutes (OSS) / Seconds (Cloud)
Requires installation | No | Yes (Java or Docker)
Configuration files | None | JSON/Java stubs
LLM streaming (SSE) | Yes | No
Delay simulation | Yes | Yes
Error simulation | Random + conditional | Yes
Request verification | No | Yes
Record & playback | No | Yes
Programmatic API | No | Yes (REST + SDKs)
Hosted endpoints | Yes, free | Cloud: free + paid
Signup required | No | OSS: No / Cloud: Yes
Cost | Free, no limits | OSS: free / Cloud: free + paid

Honest tradeoffs

mockapi.dog does not support request verification, traffic recording, programmatic stub creation, or deep testing framework integration. If your workflow requires asserting that outbound requests were made with specific parameters, or replaying recorded production traffic, WireMock is the right tool. mockapi.dog is for developers who need hosted mock endpoints with zero infrastructure.

Use Cases

1. Developing an AI chat interface

Build the streaming text renderer for a ChatGPT-like interface. Tokens arrive over SSE so you can test typewriter effects, markdown rendering mid-stream, and stop-generation buttons - without API costs.

2. Testing OpenAI SDK error handling

What happens when the stream drops mid-response? When the API returns a 429? Set up a mock with error simulation and test every failure path your SDK integration needs to handle.
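One failure path worth covering is retry-with-backoff on a 429. A minimal sketch, with a stubbed request function standing in for calls to the mock endpoint:

```python
import time

def request_with_retry(do_request, max_attempts=3, base_delay=0.01):
    """Retry a request on HTTP 429, doubling the wait each attempt."""
    for attempt in range(max_attempts):
        status, body = do_request()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    return status, body  # give up after max_attempts

# Stub: rate-limited twice, then succeeds.
responses = iter([(429, ""), (429, ""), (200, "ok")])
status, body = request_with_retry(lambda: next(responses))
print(status, body)  # -> 200 ok
```

A mock configured with an error rate lets you run exactly this logic against a real HTTP boundary instead of a stub.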

3. Mocking multiple LLM providers

Your app supports both OpenAI and Anthropic. Create separate mock endpoints in each format. Test your provider-switching logic without API keys for either service.

4. Simulating slow token generation

Some models respond faster than others. Add delay to your LLM streaming mock to simulate a slow model. Verify that your UI loading states and timeout logic work at different speeds.
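The property to verify here is that your client notices when the gap between tokens exceeds its budget. A small inter-token watchdog, sketched against a fake slow stream:

```python
import time

def consume_with_watchdog(token_iter, max_gap_s: float):
    """Yield tokens, raising TimeoutError if the wait between
    consecutive tokens exceeds max_gap_s."""
    last = time.monotonic()
    for token in token_iter:
        now = time.monotonic()
        if now - last > max_gap_s:
            raise TimeoutError(f"token gap {now - last:.2f}s exceeded budget")
        last = now
        yield token

def slow_stream():
    for token in ["a", "b", "c"]:
        time.sleep(0.2)  # simulated slow model
        yield token

try:
    list(consume_with_watchdog(slow_stream(), max_gap_s=0.05))
    timed_out = False
except TimeoutError:
    timed_out = True
print(timed_out)  # -> True
```

Point the same consumer at a delayed streaming mock and you can tune the budget against realistic network timing instead of an in-process sleep.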

Developer Experience

Your first mock endpoint takes seconds. Open the page. Choose REST or LLM streaming. Define the response. Click save.

No server to run locally. No environment variables. No .env file. No Docker. No package installation. No port conflicts.

The endpoint is hosted and immediately accessible. Point your fetch call, your OpenAI SDK client, or your test suite at the URL. It works.

Open the browser. Create the endpoint. Use the URL. That is it.

Pricing

Free. No limits. No signup.

No per-request charges. No token counting. No monthly caps. No feature gating.

This is a solo-developer tool built for the developer community. Mocking an API endpoint should cost exactly what it costs to use: nothing.

Ready to start?

Stop setting up Java projects to return JSON. Stop managing Docker containers for mock servers. Create a mock endpoint on mockapi.dog. It takes ten seconds.