ordertime.
Pricing: flat fee, no surprises

Three tiers, priced so the model is obvious.

Free is the open-source SDK, self-hosted. Basic is the v1 hosted product: manual-first commerce with no automated processing. Managed is for merchants who outgrow the async model, and is coming soon. None of the tiers charge transaction fees.

Tier I
$0 /mo

Free

open source · self-host

For developers and self-hosters who want full control. The SDK, no service.

  • The full SDK on npm under @storefront/*
  • All adapters available
  • Run on your own infrastructure
  • Build-time AI (BYO provider key)
  • Community support (GitHub)
  • × No managed dashboard or order ingest
  • × No build orchestration service
Read the docs
v1 product
Tier II
$49 /mo

Basic

manual-first · no automated processing

The hosted async commerce v1. You approve every order. We never touch the money.

  • Hosted orders + dashboard
  • Build orchestration (webhooks, cron, manual, API)
  • Pseudonymous order ingest from your processor
  • Manual approval workflow with auth-hold capture
  • 5,000 SKUs · all adapters
  • Build-time AI (BYO key plus a small managed quota)
  • Email support
  • × No automated capture or fulfillment triggers
Get started
Tier III
Coming

Managed

priced accordingly

The full-service tier for merchants whose model needs real-time behavior. Everything Basic has, plus the things async excludes.

  • Automated capture and fulfillment triggers
  • Subscriptions and recurring billing
  • Multi-warehouse and multi-region inventory
  • B2B pricing tiers and quotes
  • Tax and shipping calculation
  • Real-time inventory sync
  • White-glove onboarding and priority support
Get in touch
§ basic vs. managed

Where the line is between Basic and Managed

Basic is async-by-design: manual approval, manual capture, and manual fulfillment triggers. The features that need a real-time premise (automated capture, subscriptions, real-time inventory, B2B pricing tiers) live in Managed, which is coming later. We list those out up front so nobody wastes time.
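To make "manual approval with auth-hold capture" concrete, here is a minimal sketch of the order lifecycle it implies. The names and types are illustrative, not the SDK's actual API; with Stripe, for example, the authorize step corresponds to a PaymentIntent created with `capture_method: "manual"`, and capture happens only after you approve the order.

```typescript
// Hypothetical sketch of Basic's auth-hold flow. At the processor,
// "authorize" places a hold without moving money; the merchant's
// manual approval triggers capture, and a rejection releases the hold.
type OrderState = "authorized" | "captured" | "released";

interface Order {
  id: string;
  amountCents: number;
  state: OrderState;
}

function authorize(id: string, amountCents: number): Order {
  // Funds are held at the processor; nothing is captured yet.
  return { id, amountCents, state: "authorized" };
}

function approve(order: Order): Order {
  if (order.state !== "authorized") throw new Error("not capturable");
  return { ...order, state: "captured" }; // merchant approved: capture the hold
}

function reject(order: Order): Order {
  if (order.state !== "authorized") throw new Error("not releasable");
  return { ...order, state: "released" }; // hold released, customer never charged
}
```

The point of the state machine: there is no path from "authorized" to "captured" that doesn't pass through an explicit merchant action, which is what "no automated processing" means in practice.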

Included in Basic
  • Catalog source adapters (Shopify, Squidex, Sanity, Contentful, JSON, others)
  • SSG adapters (Astro, Next.js, SvelteKit, 11ty, Hugo)
  • Processor adapters (Stripe, Paddle, Mollie, specialty)
  • Build-time AI (translation, SEO, embeddings, alt text)
  • Pseudonymous orders with analytics dashboard
  • Build orchestration (webhooks, cron, manual, API)
  • Localization (12+ locales out of the box)
  • Manual order approval workflow
  • Outbound webhooks for fulfillment and accounting
In Managed (coming soon)
  • Automated capture and fulfillment triggers
  • Subscriptions and recurring billing
  • Customer accounts on storefronts
  • Multi-warehouse and multi-region inventory
  • Real-time inventory sync
  • B2B pricing tiers and quotes
  • Tax and shipping calculation

These need the real-time premise that Basic doesn't carry.

§ FAQ

Frequently asked

What if I have more than 5,000 SKUs?


Basic caps at 5,000. Beyond that you'll want Managed (coming soon), where build pipelines do incremental rebuilds and we configure the system to your scale. Or self-host with Free if you can run the build infrastructure yourself.

Do you take a cut of sales?


No, never. We don't touch the money. Your processor (Stripe, Paddle, etc.) handles payments and pays you directly. We charge a flat platform fee and that's it.

What about AI costs?


Build-time AI runs through your own provider key (OpenRouter recommended). Basic includes a small managed quota for convenience. AI output is cached against a hash of the source content, so unchanged products never regenerate, and a typical daily rebuild does almost no AI work.
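The caching behavior described above can be sketched in a few lines. This is an illustrative model, not the SDK's internals: the cache key is a hash of the product's source content, so a rebuild only pays for AI calls on products whose source actually changed.

```typescript
import { createHash } from "node:crypto";

// Hypothetical content-hash cache for build-time AI output.
// Unchanged source content hits the cache and skips the AI call entirely.
const cache = new Map<string, string>();

function contentHash(source: string): string {
  return createHash("sha256").update(source).digest("hex");
}

async function describeProduct(
  source: string,
  generate: (s: string) => Promise<string>,
): Promise<string> {
  const key = contentHash(source);
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // unchanged content: zero AI work
  const out = await generate(source); // changed or new content: one AI call
  cache.set(key, out);
  return out;
}
```

Keying on the content hash rather than the product ID is what makes the daily-rebuild cost near zero: the cache stays valid across builds for as long as the source is byte-identical.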

Can I leave?


Yes. Your catalog lives in your data store. Your customers and money live at your processor. Your code lives in your repo. We hold pseudonymous order records you can export at any time. Nothing locks you in.

High-risk vertical (cannabis, adult, kratom)?


We don't care. We're not the processor. Bring whatever specialty processor works with your vertical; we provide the catalog and orders layer. Mainstream platforms tend to refuse high-risk merchants entirely; we're agnostic.

Can I run the AI provider locally?


Yes. Ollama and LM Studio work well for build-time AI when your CI runs on your own hardware. Build-time is exactly the use case local LLMs are good at: long-running, latency-tolerant, off-the-clock. Cost is zero.
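As a sketch of what "run the AI provider locally" looks like in config terms: Ollama exposes an OpenAI-compatible API at `http://localhost:11434/v1`, so any OpenAI-style client can be pointed at it. The field names, env var, and model name below are illustrative, not the SDK's actual configuration schema.

```typescript
// Hypothetical build-time AI provider config aimed at a local Ollama server.
// Ollama ignores the API key, but OpenAI-style clients require one to be set.
interface ProviderConfig {
  baseURL: string;
  apiKey: string;
  model: string;
}

function localProvider(model = "llama3.1"): ProviderConfig {
  return {
    // OLLAMA_URL is an illustrative env var for overriding the default host.
    baseURL: process.env.OLLAMA_URL ?? "http://localhost:11434/v1",
    apiKey: "ollama", // placeholder; no real key is needed locally
    model,
  };
}
```

The same shape works for LM Studio, which also serves an OpenAI-compatible endpoint; only the base URL changes.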