Build your frontend. Mock your backend.

Mockbin creates fixed-response endpoints. mockapi.dog adds delay simulation, error injection, conditional failures, and LLM streaming - all free, no signup.

  • Get realistic API responses with configurable delays and error rates
  • Test loading states, error handling, and streaming with real HTTP calls
  • Works instantly from localhost - CORS handled for you

Built by a developer, for developers who ship UI before the API is ready.

The Problem

You're building a frontend. The backend isn't ready. You need mock endpoints that return JSON.

Mockbin solves the simplest version of this problem: you define a status code and a response body, and you get a URL. That works for basic cases.

But real development needs more. You need to test what happens when the API is slow. What happens when it fails. What happens when your AI feature streams tokens. Mockbin doesn't simulate delays, doesn't inject errors, and doesn't support streaming.

Scenario

A React developer building a product page needs to test three states: loading (slow API), success (200 with data), and error (500 failure). She also needs an LLM streaming endpoint for an AI-powered search feature. Mockbin can only serve a fixed JSON response - no delays, no errors, no streaming.

The Solution

mockapi.dog gives you everything Mockbin does, plus the features real frontend development requires.

Define any JSON response with any status code. Add millisecond-precision delays to test loading states. Set random error rates to test failure handling. Configure conditional errors based on request headers.

For AI features, create LLM streaming endpoints in OpenAI, Anthropic, or generic SSE format. Tokens arrive over Server-Sent Events. Your chat component streams text on screen.

Everything is free. Everything works without a signup. Everything deploys instantly.

Feature Breakdown

CORS enabled by default

Every mock endpoint includes proper CORS headers. Your app on localhost:3000 or localhost:5173 can fetch from your mock URL without proxy configuration or middleware.

Custom JSON responses

Define exactly the JSON your components expect. Match the shape of your TypeScript interfaces. Return arrays, nested objects, pagination metadata - whatever your UI consumes.
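For example, a mock body can mirror a TypeScript interface directly. The shapes below are an invented product-page example, not a required format:

```typescript
// Hypothetical shapes for a product list endpoint.
interface Product {
  id: number;
  name: string;
  price: number;
}

interface ProductPage {
  items: Product[];
  total: number;
  nextCursor: string | null;
}

// Paste a body like this into the mock editor; the endpoint then
// serves exactly what your components expect.
const mockBody: ProductPage = {
  items: [{ id: 1, name: "Dog Bed", price: 39.99 }],
  total: 1,
  nextCursor: null,
};
```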

Loading state testing with delays

Add 2000ms of delay to your mock endpoint. Watch your skeleton screens, spinners, and shimmer effects render correctly. Verify that loading states actually appear in real usage.
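One pattern a delayed mock makes easy to verify: only render the skeleton once a request has been pending past a threshold, so fast responses don't flash it. A minimal sketch (the 200ms threshold is an arbitrary assumption):

```typescript
// Decide whether the skeleton should render, given how long the
// request has been pending. Short requests skip it to avoid a flash.
function shouldShowSkeleton(pendingMs: number, thresholdMs = 200): boolean {
  return pendingMs >= thresholdMs;
}
```

With 2000ms of delay configured on the mock, the skeleton should reliably appear; drop the delay and it should not.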

Error state testing

Set a mock endpoint to return 500. Verify your error boundary catches it. Set 401 and test redirect-to-login. Set 429 and test retry logic. Every HTTP error, simulated on demand.
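A 429 mock is a convenient way to exercise retry logic like the sketch below. The retryable-status choice and backoff base are assumptions, not a prescription:

```typescript
// Statuses commonly treated as retryable: rate limiting and 5xx.
function isRetryable(status: number): boolean {
  return status === 429 || (status >= 500 && status < 600);
}

// Exponential backoff delay for a 0-based attempt number.
function backoffMs(attempt: number, baseMs = 250): number {
  return baseMs * 2 ** attempt;
}
```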

LLM streaming for AI features

Building a chat component? A text summarizer? Mock the streaming response. Tokens arrive over SSE. Test your streaming text renderer and stop button against a real streaming endpoint.
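Consuming the stream comes down to parsing `data:` lines off the SSE connection. A minimal parser for the OpenAI-style chunk format (the exact payload shape your mock emits is an assumption; check your endpoint's output):

```typescript
// Extract the token from one OpenAI-style SSE line, or null if the
// line carries no text (comments, [DONE] sentinel, malformed JSON).
function parseSseData(line: string): string | null {
  if (!line.startsWith("data:")) return null;
  const payload = line.slice(5).trim();
  if (payload === "[DONE]") return null;
  try {
    const chunk = JSON.parse(payload);
    return chunk.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null;
  }
}
```

Feed each line from the response stream through it and append every non-null result to the rendered text.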

Multiple HTTP methods

Create a GET for list views. A POST that returns 201 for form submissions. A DELETE that returns 204. Mock your entire API contract, not just GET requests.
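For the POST case, the fetch options are the only moving part; a small helper keeps them consistent (names here are illustrative):

```typescript
// Build fetch options for a JSON POST to a mock (or real) endpoint.
function jsonPost(body: unknown): {
  method: string;
  headers: Record<string, string>;
  body: string;
} {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  };
}

// Usage against a mock configured to answer 201:
//   const res = await fetch(mockUrl, jsonPost({ name: "Dog Bed" }));
//   if (res.status === 201) { /* created */ }
```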

mockapi.dog vs Mockbin

Feature                 | mockapi.dog                   | Mockbin
Setup time              | Seconds                       | Seconds
Signup required         | No                            | No
Custom JSON responses   | Yes                           | Yes
Custom status codes     | Yes                           | Yes
All HTTP methods        | Yes                           | Limited
Delay simulation        | Yes                           | No
Error rate simulation   | Yes                           | No
Conditional errors      | Yes (header-based)            | No
LLM streaming (SSE)     | Yes                           | No
OpenAPI import          | No                            | Yes
Cost                    | Free                          | Free
Focus                   | Mocking with testing features | Basic fixed-response mocking

Honest tradeoffs

Mockbin supports OpenAPI spec import to auto-generate mock endpoints from your API definition, which mockapi.dog does not. If your workflow is spec-first and you want auto-generated mocks from a Swagger file, Mockbin handles that well. mockapi.dog is for developers who know exactly what response they need and want delay, error, and streaming capabilities alongside it.

Use Cases

1. Building a dashboard before the API exists

You have Figma designs and TypeScript types. You don't have a backend. Create mock endpoints that return the exact data shapes your components consume. Build every page, every state, every interaction.

2. Testing loading and skeleton states

Your designer wants to see the loading state. Add a 3-second delay to your mock endpoint. The loading skeleton renders on every page refresh. Screenshot it, iterate, ship.

3. Prototyping an AI-powered feature

Your product manager wants to see the AI chat feature in the next sprint review. Create an LLM streaming mock. The chat component streams tokens on screen. The demo looks real.

4. Validating form submission flows

Your form POSTs data and expects a 201 response with the created object. Create a POST mock that returns 201 with the response body your success handler needs. Test the full create-and-redirect flow.

Developer Experience

From browser tab to working mock endpoint: 5 seconds. Same speed as Mockbin, but with delay, error, and streaming capabilities included.

There is no project to create. No dependency to add. No environment variable to set. Open mockapi.dog, fill in the response, click save, copy the URL.

Paste it into your fetch call. When the real API is ready, change the URL. Everything else stays the same.
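One way to make that swap a config change instead of a code change: resolve every path against a single base URL. The mock URL below is a placeholder, not a real endpoint:

```typescript
// Join a request path onto the configured API base, normalizing slashes.
function apiUrl(base: string, path: string): string {
  return base.replace(/\/+$/, "") + "/" + path.replace(/^\/+/, "");
}

// During development the base points at the mock; later it points at
// the real API, and every call site stays untouched.
const base = "https://mockapi.dog/b/abc123"; // placeholder mock URL
const productsUrl = apiUrl(base, "/products");
```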

Pricing

Free. No limits. No signup.

Both mockapi.dog and Mockbin are free. The difference is what you get for free.

Mockbin gives you fixed responses. mockapi.dog gives you fixed responses plus delays, error simulation, conditional errors, all HTTP methods, and LLM streaming. Same price: zero.

Ready to start?

Your backend isn't ready. Your deadline is. Open mockapi.dog and create the endpoints your frontend needs. Delays, errors, streaming - all included.