Enterprise

Transcription for your entire org.

Everything in the open-source edition, plus multi-user auth, team management, admin dashboards, API access, and enterprise-grade infrastructure. Self-hosted. Air-gapped. Yours.

Why Enterprise

Built for organizations that
take data seriously.

100%

On-Premise

No data leaves your infrastructure. Ever.

5 min

Deploy Time

One script. One docker compose command.

Scalable

PostgreSQL + cloud storage for any workload.

0

Vendor Lock-in

Open-source core. Standard Docker. Your data.

Platform

Everything your team needs.

The open-source transcription engine, hardened for production with enterprise auth, storage, and infrastructure.

Multi-User & Teams

Create teams, invite members, and manage role-based access. Each user has their own recordings, transcripts, and documents — with team-level sharing for collaboration.

JWT Authentication

Secure session management with access and refresh tokens. Register, log in, and manage credentials through a clean REST API. The first user becomes an admin automatically.
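A client might drive this flow as sketched below. The `/api/auth/login` path, the JSON field names, and the bearer scheme are illustrative assumptions for this sketch, not the documented API.

```python
import json
import urllib.request

API_BASE = "http://localhost/api"  # your deployment's base URL (assumption)

def login_request(email: str, password: str) -> urllib.request.Request:
    """Build the POST request that exchanges credentials for access/refresh tokens."""
    body = json.dumps({"email": email, "password": password}).encode()
    return urllib.request.Request(
        f"{API_BASE}/auth/login",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def bearer_headers(access_token: str) -> dict:
    """Attach the short-lived access token to subsequent API calls."""
    return {"Authorization": f"Bearer {access_token}"}

# Typical flow (network call elided; response shape is an assumption):
#   resp = urllib.request.urlopen(login_request("admin@example.com", "secret"))
#   tokens = json.load(resp)  # e.g. {"access_token": ..., "refresh_token": ...}
```

When the access token expires, the client would repeat the same pattern against a refresh endpoint using the refresh token instead of credentials.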

Scoped API Keys

Generate API keys with granular read/write scopes for CI pipelines, integrations, and automated workflows. Revoke any key instantly.

Webhooks

Subscribe to real-time events — transcription complete, recording created, document processed, and more. HMAC-SHA256 signed payloads for verification.
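Verifying an HMAC-SHA256 signature follows a standard recipe: recompute the digest over the raw request body with the shared secret and compare in constant time. The secret format and the hex encoding below are assumptions for this sketch; check your webhook settings for the actual header name and encoding.

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature_hex: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw payload and compare in constant time."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# A payload signed with the shared secret verifies; a tampered one does not.
secret = "whsec_demo"  # illustrative secret
body = b'{"event": "transcription.complete", "recording_id": 42}'
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_webhook(body, sig, secret))         # True
print(verify_webhook(b"tampered", sig, secret))  # False
```

Always verify against the raw bytes of the request body — re-serializing parsed JSON can change whitespace and break the signature.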

Audit Logging

Every mutating API request is logged with user, action, path, and timestamp. Full audit trail for compliance reviews and security investigations.

LLM Integration

Connect OpenAI, Anthropic, or any compatible provider for AI-powered summaries, action items, key points extraction, and conversational chat with transcripts.

PostgreSQL

Production-grade database with full ACID compliance, concurrent access, and pgvector for semantic search. Replaces SQLite for multi-user workloads.

Cloud Storage

Store recordings and documents on S3, Azure Blob, or local Docker volumes. Configure your preferred storage backend with a single environment variable.
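Backend selection might look like the following `.env` fragment. The variable names here are illustrative assumptions, not the shipped configuration schema — consult the deployment docs for the exact keys.

```shell
# Hypothetical .env fragment — variable names are illustrative.
# Local Docker volume (default):
#   STORAGE_BACKEND=local
# S3-compatible object storage:
STORAGE_BACKEND=s3
S3_BUCKET=verbatim-recordings
S3_REGION=eu-central-1
# Credentials come from the usual AWS environment variables or an instance role.
```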

Whisper Transcription

Same best-in-class Whisper models as the desktop app. GPU-accelerated with CUDA support. Process audio locally — nothing leaves your servers.

OCR & Document Processing

Extract text from PDFs, images, and spreadsheets with Tesseract OCR. Everything becomes searchable alongside your audio transcripts.

REST API

Full API for programmatic access to recordings, transcripts, documents, teams, and admin functions. Build custom integrations and automations.

Docker Deployment

Three-container stack with nginx, backend, and PostgreSQL. Deploy anywhere Docker runs — on-premise, cloud VMs, or Kubernetes.

Admin Dashboard

Full control from a
single dashboard.

Dedicated admin pages give you visibility into every aspect of your deployment — users, services, storage, templates, and compliance reporting.

User Management

View all users, manage roles, deactivate accounts, and reset credentials.

Service Management

Monitor and configure backend services, transcription engines, and LLM providers.

Database Management

View storage usage, run maintenance tasks, and manage database backups.

Template Management

Create and manage recording templates with custom metadata fields for different use cases.

Audit Reports

Generate compliance reports, export audit logs, and track usage across teams.

Account Settings

User profile management, password changes, and personal notification preferences.

Roadmap

What's coming next.

Enterprise features driven by real customer needs. Here's what's on the horizon.

In Development

Meeting Bots

Automated agents that join Microsoft Teams, Google Meet, and Zoom calls to record and transcribe meetings in real time.

Planned

Mobile Access

Secure mobile companion app for accessing your self-hosted Verbatim server on the go — with end-to-end encryption.

Exploring

Voice Chat Assistant

AI-powered voice assistant that lets you query your transcripts and documents using natural language conversation.

Planned

External WhisperX

Offload transcription to a dedicated WhisperX service for higher throughput and GPU resource optimization.

In Development

Custom LLM Servers

Connect self-hosted LLM inference servers (vLLM, Ollama, text-generation-inference) for fully air-gapped AI features.

Planned

SSO / SAML

Single sign-on integration with your identity provider — Okta, Azure AD, Google Workspace, and SAML 2.0.

Security & Compliance

Security isn't a feature.
It's the architecture.

Verbatim Enterprise is designed from the ground up to keep your data where it belongs — on your servers, under your control.

  • All processing happens on your infrastructure — no external API calls for transcription or OCR
  • JWT-based authentication with configurable token expiry and refresh rotation
  • License validation is double-gated: pull-time verification and runtime middleware
  • Audit logging captures every mutating API request for compliance
  • Webhook payloads are HMAC-SHA256 signed for verification
  • PostgreSQL with full ACID compliance and encrypted connections
  • Docker deployment supports air-gapped environments with no internet dependency
  • API keys with granular scopes — revoke any key instantly

Deployment

From zero to production in minutes.

Authenticate with your license. Pull the image. Start the stack.

terminal
# 1. Authenticate and configure
$ ./scripts/setup.sh --license <your-jwt>
✓ License validated · GHCR authenticated · .env created

# 2. Start the stack
$ docker compose -f docker-compose.prod.yml up -d
✓ postgres · backend · nginx — all healthy

# 3. Verify the deployment
$ curl http://localhost/api/info
{"mode":"enterprise","version":"1.3.0"}

Update

docker compose pull && docker compose up -d

Rollback

Set VERBATIM_VERSION=1.2.0, pull, restart

Scale

Add replicas, external PostgreSQL, S3 storage

Ready to deploy?

Get started with the enterprise edition today. Self-hosted, private, and fully under your control.