FAQ

Frequently asked questions

The questions we hear most often. Don't see yours? Email us.

What is Onpilot?

Onpilot is an embeddable AI agent platform. You build an agent in the dashboard, connect it to your knowledge sources and integrations, and drop it into your app as a chat widget or invoke it through our API.

How do I integrate Onpilot with my app?

The fastest path is our embed script — one line of HTML on your site and the agent is live. For richer control, install @onpilot/react and render the component anywhere. For server-side access, call the REST API directly.
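
As a purely illustrative sketch, the one-line embed might look something like the following. The CDN URL, filename, and attribute names here are assumptions for illustration, not the real embed code:

```html
<!-- Hypothetical embed snippet: the URL and attribute names are invented for illustration -->
<script src="https://cdn.onpilot.example/widget.js" data-agent-id="YOUR_AGENT_ID" async></script>
```

The @onpilot/react route gives you the same agent as a component you can render and style anywhere in your own tree, while the REST API suits server-side or non-browser use.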

What does it cost?

We bill monthly in USD, with usage measured in model tokens. You can see current plans on the pricing page. Pre-launch, we're also happy to design a custom plan for teams with unusual usage patterns — just reach out.

Is Onpilot secure?

Each customer's data is isolated. Credentials are encrypted, traffic runs over HTTPS, and access follows least privilege. Full details are on the Security page.

Can I run Onpilot inside my own environment?

Yes. On the Enterprise plan, you can run Onpilot in your own cloud or data center. Reach out from the Enterprise page and we'll scope it with you.

How does billing work?

You're billed monthly for your plan, plus any usage above your included token allotment. You can buy token packs in advance to smooth out spikes, and download invoices from the dashboard.
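
To make the arithmetic concrete, here is a minimal sketch of how a monthly bill composes from the plan fee plus overage. The function name, the per-1K-token overage rate, and all figures are invented for the example, not Onpilot's real pricing:

```typescript
// Hypothetical billing arithmetic: flat plan fee plus overage on tokens
// beyond the included allotment. All names and rates are illustrative.
function monthlyBillCents(
  planFeeCents: number,         // flat monthly plan price, in cents
  includedTokens: number,       // tokens bundled with the plan
  usedTokens: number,           // tokens actually consumed this cycle
  overageCentsPer1kTokens: number
): number {
  // Only usage above the allotment is charged, rounded up per 1K tokens.
  const overageTokens = Math.max(0, usedTokens - includedTokens);
  return planFeeCents + Math.ceil(overageTokens / 1000) * overageCentsPer1kTokens;
}
```

For instance, a $49 plan with 1M tokens included and 1.2M tokens used at 1 cent per 1K overage tokens would come to $49 + $2 = $51; prepaid token packs simply raise the included allotment before overage kicks in.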

Can I bring my own LLM?

Yes. Onpilot routes through a model gateway, so you can point agents at OpenAI, Anthropic, Google, or any provider we support — and plug in your own API keys if you'd prefer to be billed directly by the model provider.
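
As a sketch of what bring-your-own-key configuration could look like, the shape below (the `provider`, `model`, and `apiKey` fields, and the helper) is an assumption for illustration, not Onpilot's actual configuration schema:

```typescript
// Hypothetical model-gateway config: field names are invented for illustration.
type ModelConfig = {
  provider: "openai" | "anthropic" | "google";
  model: string;   // provider-specific model identifier
  apiKey?: string; // set this to be billed directly by the model provider
};

function usesOwnKey(cfg: ModelConfig): boolean {
  // When an apiKey is supplied, requests route with your credentials,
  // so the model provider bills you directly instead of Onpilot.
  return typeof cfg.apiKey === "string" && cfg.apiKey.length > 0;
}
```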

Which channels are supported?

Today: web chat widget, Slack, WhatsApp, Telegram, and direct API. We add more based on customer demand — if you need something specific, let us know.

How do I train an agent on my knowledge?

You can upload documents, point at a sitemap, or connect a source like Notion or a website crawler. Onpilot handles indexing and retrieval; you see citations in every answer so you can verify where information came from.
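
To illustrate what a cited answer might look like when you verify it programmatically, here is a minimal sketch; the reply shape and field names are assumptions, not the real response format:

```typescript
// Hypothetical shape of an agent reply with citations, for illustration only.
type Citation = { sourceTitle: string; url?: string };
type AgentReply = { answer: string; citations: Citation[] };

function citedSources(reply: AgentReply): string[] {
  // Collect the titles of every source the agent drew on,
  // so a reviewer can check where the information came from.
  return reply.citations.map((c) => c.sourceTitle);
}
```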

What if an agent gives a wrong answer?

Every conversation is logged with the sources the agent used. You can review transcripts in the dashboard, mark replies for follow-up, and tune the knowledge base. For sensitive workflows, you can also require human-in-the-loop approval before responses are sent.

Still have questions? Contact us.