A product by ezinsights.ai team

EzCoworker

Your AI Agents. Your Infrastructure. Your Rules.

A powerful self-hosted AI agent platform powered by Claude Code CLI — free for internal business use. Connect any LLM, run agents in isolated Docker containers, and process real files on your own servers. Then unlock the full Enterprise suite: Command Center, KPI Dashboards, IDE & Chat integration, Knowledge Graph, Database Semantic Layer, Code Migration & Review Agents — all built on the ezinsights.ai Data Intelligence & SDLC Framework.

Get Started Free View on GitHub
ezcoworker — bash
# Clone & enter the repo
$ git clone https://github.com/EzinsightsInc/EzCoworker
$ cd ezcoworker-community

# Build the agent container image (~5 min)
$ docker build -f agent.Dockerfile -t claude-agent-image-community .

# Download the skills from the skills folder and extract them
# into the host skills directory for your OS:
#   Windows: HOST_SKILLS_PATH=C:/claude_data_community/skills
#   macOS:   HOST_SKILLS_PATH=/opt/claude_data_community/skills
# Repeat the same steps for plugins and for the MCP server setup.

# Configure API keys, then launch
$ cp .env.example .env && vi .env
$ docker-compose up -d --build

 frontend     http://localhost:3600
 backend      http://localhost:5600
 litellm      http://localhost:4000  (OpenAI/Gemini/Groq router)
 postgres     ready
 22 models discovered  (anthropic · openai · gemini · groq · ollama)
$ 
6+ LLM Providers
100% Self-Hosted
MCP Tool Support
Docker-Isolated Agents
IBU Licensed
See It In Action

Demo Videos & Walkthroughs

Watch EzCoworker handle real-world tasks — from data analysis to document generation to live code execution.

Getting Started

5-Minute Quick Start

Install EzCoworker, connect Ollama, and run your first agent end-to-end.

⏱ ~5 min
Use Case

CSV Analysis → PDF Report

Upload a CSV, ask for analysis, watch the agent generate charts and a formatted PDF output.

⏱ ~2 min
MCP Integration

Connecting MCP Servers

Add a custom MCP server as an agent tool and watch it get called live during a session.

⏱ ~2 min
Skills System

Building Custom Skills

Create a SKILL.md, assign it to a user, and see how it shapes agent behavior in real-time.

⏱ ~2 min
Administration

Admin Panel Walkthrough

Manage users, configure models, assign skills, and monitor storage from the admin UI.

⏱ ~6 min
Configuration

Multi-Provider Model Setup

Configure Ollama local models side by side with Anthropic and OpenAI cloud models.

⏱ ~0 min
Platform Capabilities

A complete open-source foundation — with a world-class Enterprise tier

Community Edition gives you a complete, production-ready self-hosted AI agent platform — free for internal business use. The Enterprise tier layers on a full Data Intelligence & SDLC automation suite that redefines what an AI platform can do for your organisation.

🤖

Any LLM — Local or Cloud

Ollama for free local inference, or API keys for Anthropic Claude, OpenAI GPT, Google Gemini, Groq, and OpenRouter. Models auto-discovered and synced from your environment on every startup. Switch models per conversation. 22 models supported out of the box.

🐳

Isolated Agent Containers

Every agent run spawns a dedicated Docker container with a per-user workspace volume. The container has full filesystem access, runs real code, and is destroyed after an idle timeout. Zero cross-user contamination — no shared memory, no shared state.
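How such a per-user container might be spawned can be sketched as follows — a hedged illustration, not the platform's actual orchestration code. The image and container names follow the docs (claude-agent-image-community, claude-agent-user-&lt;id&gt;); the exact `docker run` flags and the `/workspace` mount point are assumptions:

```shell
# Illustrative sketch only — not the backend's real code.
USER_ID=1
WORKSPACE="$PWD/users/$USER_ID/workspace"
mkdir -p "$WORKSPACE/input" "$WORKSPACE/output"

# Print the command the backend would run to spawn the isolated container
# (echoed here so the sketch is runnable without Docker):
echo docker run -d \
  --name "claude-agent-user-$USER_ID" \
  -v "$WORKSPACE:/workspace" \
  claude-agent-image-community
```

The key point is the bind mount: each container sees only its own user's workspace directory, which is what makes cross-user isolation structural rather than policy-based.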

🔧

MCP Tool Integration

Connect any Model Context Protocol server as live agent tooling. SSE and stdio transports both supported. Per-user MCP configurations. Bundle tools with instruction prompts as reusable plugins — the agent gets the right tools injected automatically per conversation.
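A per-user MCP configuration could look like the sketch below. The `mcpServers` shape follows the standard Claude Code MCP config format; the server names, the local command, and the SSE URL are all hypothetical placeholders:

```shell
# Hypothetical mcp.json — one stdio server and one SSE server.
# Names, paths, and the URL below are placeholders.
cat > mcp.json <<'EOF'
{
  "mcpServers": {
    "local-tools": {
      "command": "node",
      "args": ["./tools-server.js"]
    },
    "internal-api": {
      "type": "sse",
      "url": "http://localhost:8080/sse"
    }
  }
}
EOF
```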

📁

File Workspace

Upload files before or during a conversation. The agent reads from the input folder, processes data, and writes all outputs to a separate output folder. Download results instantly. Supports CSV, PDF, Excel, DOCX, PPTX, ZIP, images, code files and any file type the agent runtime can handle.

🎯

Skills System

Markdown instruction files injected into the agent's system context at runtime. Each skill teaches the agent a specific domain or workflow — data analysis, PDF generation, web scraping, security audits, code review. Users enable the skills they need; unused skills are never injected.
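A skill in this system is just a Markdown file. A minimal sketch — SKILL.md is the documented file name, but the folder name and the instructions below are illustrative:

```shell
# Hypothetical skill that teaches the agent a PDF-report workflow,
# using the tools the runtime ships (pandas, WeasyPrint, reportlab).
mkdir -p skills/pdf-report
cat > skills/pdf-report/SKILL.md <<'EOF'
# PDF Report Generation

When the user asks for a report:
1. Analyse the uploaded data in the input folder with pandas.
2. Render charts and write a formatted PDF to the output folder
   using WeasyPrint or reportlab.
3. Reply with a short summary and link the generated file.
EOF
```

Because unused skills are never injected, adding many narrow skills like this one costs nothing until a user actually enables them.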

💬

Conversation Memory

Configurable 10-turn conversation history per user, persisted in PostgreSQL. The agent maintains context across multiple messages in the same session. Context survives page refreshes and re-connections. Each user's history is fully isolated from other users.

👥

Multi-User Platform

User registration and JWT authentication out of the box. Each user gets their own isolated workspace, skill assignments, and conversation history. An admin monitoring dashboard shows usage metrics, token consumption, storage, response times, and failure rates across all users and models.

🐍

Rich Agent Runtime

Claude Code CLI + Python 3 + pandas + openpyxl + LibreOffice + WeasyPrint + reportlab + pdfplumber + python-pptx + Node.js baked into every agent container. Agents don't just write code — they execute it, process real files, and deliver actual outputs like PDFs, charts, and spreadsheets.

📦

One-Command Deploy

docker-compose up starts the frontend (nginx), backend (Node.js), PostgreSQL, and LiteLLM proxy together. Works on any Linux server, Mac, or Windows host with Docker Desktop. No Kubernetes, no cloud dependencies. Full production setup in under 10 minutes.

🔀

LiteLLM Translation Proxy

Claude Code CLI speaks Anthropic format natively. For OpenAI, Gemini, Groq, and Azure, a built-in LiteLLM gateway handles format translation. A custom stripping proxy removes any Anthropic-specific parameters before they reach providers that would reject them — no errors, no manual configuration per provider.

👁️

Vision & Reasoning Models

Vision capability auto-detected per model — all Claude 4.x, GPT-4o family, and Gemini models support image input. Reasoning models (o3, o4-mini, gemini-2.5-pro) identified and classified for complex tasks. The auto-router uses these flags to select the right model for the task at hand.

🧠

Intent-Based Model Router

Auto mode analyses each message and routes to the most cost-effective model tier. Simple scripts go to capable local Ollama models. Data analysis and coding go to mid-tier cloud. Complex reasoning and agentic tasks go to flagship models. Skill context influences routing but never overrides intent for simple tasks.

📊

Usage Monitoring

Per-user and per-model token consumption, response times, storage breakdown, failed responses, and conversation counts — all tracked in PostgreSQL. Monitoring views with sortable, searchable tables. Activity trends from the actual DB creation date. Failed responses surfaced in red with error type classification.

🔗

Conversation Sharing

Generate a public share link for any conversation. Recipients see the full message history and can download agent-generated output files without creating an account. Input/uploaded files are never exposed via share links — only agent outputs are publicly accessible.

Multi-Agent Orchestration Enterprise

Spawn multiple specialised agents that work in parallel on different aspects of the same task. A planner agent decomposes complex goals into sub-tasks, delegates to worker agents, and synthesises results. Dramatically reduces time-to-completion for large, multi-step projects.

🗺️

Intent-Based Execution Planner Enterprise

An LLM-powered execution planner analyses each request, builds a structured task graph, and determines optimal agent composition, model selection, and execution order before a single agent fires. Results in significantly higher task success rates on complex, ambiguous requests.

🏛️

CEO Command Center Enterprise

A purpose-built executive intelligence hub that gives C-suite leaders a single conversational interface to every corner of the business. Ask natural-language questions across finance, ops, HR, and sales simultaneously. The Command Center dispatches specialised data agents, synthesises results in real-time, and delivers board-ready narrative reports with charts, variance analysis, and recommended actions — all without touching a BI tool.

ezinsights.ai Data Intelligence
📊

KPI Intelligence Dashboard Enterprise

Live KPI dashboards powered by AI agents, not static SQL. Define your business metrics once in plain language; the dashboard agent continuously queries connected data sources, detects anomalies, surfaces trend breakdowns, and pushes proactive alerts when targets drift. Supports department-level drill-down, cohort comparisons, and predictive trend lines. Exportable as PDF, PPTX, or live-shareable links for leadership reviews.

ezinsights.ai Data Intelligence
💬

Channel Integrations Enterprise

Deploy EzCoworker agents natively inside Zoho Cliq, Slack, WhatsApp, Microsoft Teams, and Discord. Users chat with specialised AI agents in the tools they already live in — no context switching, no separate UI. Webhook-based integration delivers rich formatted results, file attachments, and download links back to the originating channel. Team-level agent pools, channel-scoped skill assignments, and per-channel audit logs included.

Chat Platforms
🖥️

IDE Integration Enterprise

First-class IDE plug-in for VS Code and JetBrains IDEs that surfaces EzCoworker's full agent suite directly in the editor sidebar. Developers trigger code review, security scans, migration suggestions, and documentation generation without leaving their workspace. The IDE extension routes requests through the same enterprise agent infrastructure — all governed by your org's model policies, skill assignments, and audit trail. Not a copilot autocomplete — a full agentic pipeline accessible from the editor.

Developer Tooling
🕸️

Knowledge Graph Server Enterprise

A persistent, enterprise-wide Knowledge Graph that agents use to understand entity relationships, business context, and domain ontologies across your entire organisation. Ingest documentation, code repositories, data dictionaries, org charts, and internal wikis. The graph enables agents to answer questions like "which services depend on this API?" or "what teams own this data domain?" — grounding every response in your real business context rather than hallucinated assumptions. Backed by a graph database with automatic relationship inference and a built-in MCP server interface.

ezinsights.ai Knowledge Graph
🗄️

Database Semantic Layer Enterprise

A natural-language-to-SQL semantic layer that sits on top of any database — PostgreSQL, MySQL, Snowflake, BigQuery, Redshift, or Databricks. Business users describe what they want in plain English; the semantic layer agent translates intent into optimised, governance-compliant SQL, executes the query, and returns results as structured data, charts, or narrative summaries. Includes schema discovery, column-level data classification, query history, and a masking layer for PII fields — making every database safely queryable by any enterprise user without writing a single line of SQL.

ezinsights.ai Data Ontology
🤖

Smart Data Intelligence Agents Enterprise

A suite of specialised data agents from the ezinsights.ai Data Intelligence framework: the Profiler Agent automatically scans and classifies datasets; the Lineage Agent traces data from source to report; the Quality Agent monitors freshness, completeness, and consistency rules continuously; the Enrichment Agent joins internal data with external market or reference datasets; and the Insight Agent proactively surfaces non-obvious patterns and correlations without being asked. Together they form an always-on intelligence layer over your entire data estate.

ezinsights.ai Persona Agents
🔄

Code Migration Agents Enterprise

Multi-step agentic workflows for large-scale codebase migrations — Python 2 → 3, Java 8 → 21, Angular → React, monolith → microservices, on-prem SQL → cloud-native, and more. The Migration Agent analyses the full dependency graph, generates a phased migration plan, executes transformations file by file, runs the test suite after each change, resolves failures autonomously, and produces a detailed migration report. Far beyond find-and-replace — it understands idiom, library equivalence, and runtime semantics. Dramatically reduces migration timelines from months to days.

SDLC Automation
🔍

Design Generation Agents Enterprise

Autonomous design agents that translate product requirements, wireframes, and brand guidelines into production-ready UI components, design system tokens, and full page layouts. The Design Generation Agent ingests a plain-language brief or an existing Figma/Sketch artefact, applies your organisation's design system constraints, and outputs pixel-accurate HTML/CSS, React components, or Tailwind markup ready to hand directly to engineering. Covers accessibility compliance (WCAG 2.2), responsive breakpoints, dark/light theming, and component documentation. Dramatically compresses the design-to-code handoff from days to minutes — without losing design intent or brand consistency.

SDLC Automation
🔍

Code Generation Agents Enterprise

Full-stack code generation agents that go far beyond autocomplete. Describe a feature, a service, or an entire module in natural language — the Code Generation Agent analyses your existing codebase architecture, matches your team's patterns and naming conventions, generates implementation files, writes inline documentation, and opens a pull request. Supports REST and GraphQL API scaffolding, database schema generation with migration scripts, event-driven microservice boilerplate, and CLI tooling. The agent runs the generated code in an isolated Docker container, validates it compiles and passes basic smoke tests, and only submits the PR once it has verified its own output. Integrates with Jira and Linear to auto-link generated code to the originating ticket.

SDLC Automation
🔍

Test Automation Agents Enterprise

End-to-end test generation and execution agents that analyse your codebase, identify coverage gaps, and autonomously author unit, integration, and end-to-end test suites. The Test Automation Agent reads existing source code and documentation to infer intent, generates tests in your framework of choice — Jest, Pytest, JUnit, Playwright, Cypress, or Selenium — and runs them in an isolated container to confirm they pass before committing. On every pull request it identifies net-new code paths with no test coverage and automatically proposes the missing tests inline. For regression suites, it detects flaky tests, diagnoses root causes, and proposes fixes. Coverage reports, failure summaries, and trend analytics are surfaced in the engineering dashboard — giving QA leads full visibility without manual test management overhead.

SDLC Automation
🔍

Code Review Agents Enterprise

Automated, multi-dimensional code review that goes well beyond linting. The Code Review Agent covers security vulnerability scanning (OWASP Top 10, SSRF, injection), performance bottleneck identification, architecture pattern conformance, test coverage gap analysis, documentation completeness, and pull request summarisation for human reviewers. Integrates with GitHub, GitLab, and Bitbucket via webhooks — reviews are posted as inline PR comments the moment a pull request opens. Engineering leads configure review policies per repo; the agent enforces them consistently across every commit, every team, every time zone.

SDLC Automation
How It Works

Architecture Overview

EzCoworker orchestrates Docker containers running Claude Code CLI, routing to your chosen LLM and managing isolated user workspaces automatically.

// Request Flow

flowchart TD
  A([User Message]) --> B[Backend API\nNode.js :5600]
  B --> C{Auth + Skills\n+ Plugins}
  C --> D[Select Model\nfrom DB]
  D --> E[Build Context\n+ History]
  E --> F[Agent Container\nClaude Code CLI]
  F --> G{LLM Provider}
  G --> H[Ollama Local]
  G --> I[Anthropic / OpenAI\nGoogle / Groq]
  F --> J[Output Files\nworkspace/output/]
  J --> K([Response + Links])
  style A fill:#00d4bc,color:#07090f,stroke:none
  style K fill:#00d4bc,color:#07090f,stroke:none
  style F fill:#131d34,color:#d8e8f0,stroke:#00d4bc,stroke-width:1px
  style B fill:#0e1526,color:#d8e8f0,stroke:#00d4bc,stroke-width:1px
  style H fill:#0b0f1c,color:#556070,stroke:#243040,stroke-width:1px
  style I fill:#0b0f1c,color:#556070,stroke:#243040,stroke-width:1px

// Container Architecture

graph TB
  subgraph HOST["🖥️ Docker Host"]
    FE[nginx Frontend\n:3600]
    BE[Node.js Backend\n:5600]
    DB[(PostgreSQL)]
    FE <-->|API| BE
    BE <--> DB
  end
  subgraph AGENTS["🐳 Per-User Agent Containers"]
    A1[claude-agent-user-1\nClaude Code + Python]
    A2[claude-agent-user-2\nClaude Code + Python]
  end
  subgraph FS["📁 Host Volumes"]
    W[users/id/workspace/]
    SK[skills/]
    PL[plugins/]
  end
  BE -->|docker exec| AGENTS
  AGENTS <-->|bind mounts| FS
  style HOST fill:#0b0f1c,stroke:#00d4bc,color:#d8e8f0
  style AGENTS fill:#0e1526,stroke:#4d8eff,color:#d8e8f0
  style FS fill:#0b0f1c,stroke:#ffb940,color:#d8e8f0

// Model Provider Routing (with LiteLLM)

flowchart TD
  CC[Claude Code CLI] --> R{Provider\nDetection}
  R -->|anthropic| AN[api.anthropic.com\ndirect]
  R -->|ollama| OL[localhost:11434\ndirect]
  R -->|openai / gemini\ngroq / azure| SP[Stripping Proxy\n:4001]
  SP --> LM[LiteLLM\n:4000]
  LM -->|openai| OA[api.openai.com]
  LM -->|google| GG[generativelanguage\n.googleapis.com]
  LM -->|groq| GR[api.groq.com]
  style CC fill:#00d4bc,color:#07090f,stroke:none
  style R fill:#131d34,color:#d8e8f0,stroke:#00d4bc
  style SP fill:#131d34,color:#ffb940,stroke:#ffb940,stroke-width:1px
  style LM fill:#131d34,color:#4d8eff,stroke:#4d8eff,stroke-width:1px

// Plugin + Skill Injection

flowchart TD
  U[User Enables Skill] --> SP[SKILL.md → System Prompt]
  P[Plugin Folder] --> PM[pluginManager.js]
  PM --> PI[PLUGIN.md → Instructions]
  PM --> MC[mcp.json → MCP Config]
  PI --> CTX
  SP --> CTX[Agent System Context]
  MC --> CLI[--mcp-config\nClaude Code CLI]
  CLI --> CTX
  CTX --> AGT([Agent Execution])
  style U fill:#4d8eff,color:#ffffff,stroke:none
  style P fill:#ffb940,color:#07090f,stroke:none
  style AGT fill:#00d4bc,color:#07090f,stroke:none
  style CTX fill:#131d34,color:#d8e8f0,stroke:#00d4bc
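The injection flow above implies a simple on-disk layout for plugins: PLUGIN.md carries the instruction text and mcp.json the tool config, both picked up by pluginManager.js. A hypothetical plugin folder — the folder name and file contents are illustrative, not taken from the repo:

```shell
# Hypothetical plugin bundling instructions + an MCP tool config.
mkdir -p plugins/web-research
cat > plugins/web-research/PLUGIN.md <<'EOF'
Fetch pages with the fetch tool before summarising them,
and cite the URL of every source you used.
EOF
cat > plugins/web-research/mcp.json <<'EOF'
{
  "mcpServers": {
    "fetch": { "command": "node", "args": ["./fetch-server.js"] }
  }
}
EOF
```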
Get Running in Minutes

Quick Start Guide

From zero to running AI agents on your own machine in under 10 minutes.

01 📦

Clone the Repo

Clone EzCoworker Community Edition from GitHub. The repo includes the backend, frontend, agent Dockerfile, and all configuration templates.

git clone https://github.com/EzinsightsInc/EzCoworker
02 🐳

Build Agent Image

Build the Claude Code agent image with Python, LibreOffice, and all document tools baked in. Takes ~5 min first time.

docker build -f agent.Dockerfile -t claude-agent-image-community .
03 ⚙️

Configure .env

Copy the example env and add API keys. Set GEMINI_API_KEY for Google, OPENAI_API_KEY for GPT models. LiteLLM handles the translation automatically.

cp .env.example .env
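A hedged sketch of the resulting .env — GEMINI_API_KEY, OPENAI_API_KEY, and ADMIN_SETUP_KEY are named elsewhere in this guide; treat every other variable as an assumption and check .env.example for the authoritative list:

```shell
# Start from the example file if present, then append keys.
[ -f .env ] || cp .env.example .env 2>/dev/null || touch .env
cat >> .env <<'EOF'
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
ADMIN_SETUP_KEY=change-me
EOF
```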
04 🚀

Launch Platform

Start all services — frontend, backend, PostgreSQL, and LiteLLM proxy — with one command.

docker-compose up -d --build
05 👤

Create Admin

POST to the setup endpoint with your ADMIN_SETUP_KEY from .env to create the first admin account. Then register regular users via the web interface at localhost:3600.

POST /api/admin/setup
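The bootstrap call might look like the sketch below. The endpoint (POST /api/admin/setup on the :5600 backend) and ADMIN_SETUP_KEY come from this guide; the JSON field names are assumptions — check the backend README for the exact payload:

```shell
# Build the payload, then POST it to the setup endpoint.
PAYLOAD='{"setupKey":"'"${ADMIN_SETUP_KEY:-change-me}"'","username":"admin","password":"a-strong-password"}'
printf '%s\n' "$PAYLOAD" > setup.json

# Harmless if the stack is not running yet:
curl -sf -X POST http://localhost:5600/api/admin/setup \
  -H 'Content-Type: application/json' \
  -d @setup.json || echo 'backend not reachable yet'
```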
06 🤖

Start Chatting

Open the platform, register a user, pick from auto-discovered models, and run your first agent session. Upload a file, ask the agent to process it, and download the output.

http://localhost:3600
Model Providers

Works with any LLM you already use

No lock-in. Add as many providers as you want. Models are auto-discovered from API keys and your Ollama installation at startup.

Ollama (Local)
Anthropic — claude-sonnet-4-6
Anthropic — claude-opus-4-6
Anthropic — claude-haiku-4-5
OpenAI — gpt-4.1 / gpt-4.1-mini
OpenAI — gpt-4o / gpt-4o-mini
OpenAI — o3 · o4-mini (reasoning)
Google — gemini-2.5-pro / flash
Google — gemini-3.1-pro-preview
Groq — llama-3.3-70b-versatile
OpenRouter
gpt-oss:20b (local)
qwen3-coder (local)
DeepSeek-R1 (local)
The Definitive Choice

How EzCoworker stacks up

Two comparisons: first against the leading commercial AI coworker products, then against open-source alternatives. EzCoworker wins on every dimension that matters for enterprise deployment.

// vs Commercial AI Cowork Products · Claude Cowork (Jan 2026) & Microsoft Copilot Cowork (Mar 2026)
Capability Claude Cowork
Anthropic · SaaS · Paid · Jan 2026
Microsoft Copilot Cowork
Microsoft · M365 SaaS · Mar 2026
EzCoworker Community
Free · Self-Hosted · IBU
EzCoworker Enterprise
Contact for pricing
Self-hosted / on-premise ✗ SaaS only ✗ M365 cloud only ✓ 100% ✓ 100%
Your data stays on your servers ✗ Anthropic cloud ✗ Microsoft cloud
Any LLM provider (Ollama, OpenAI, Gemini, Groq…) ✗ Claude only ~ Claude + OpenAI only ✓ 22 models, 6+ providers
Free local LLM inference (Ollama)
Works outside Microsoft 365 / Office apps ✓ Mac & Windows desktop ✗ M365 apps only ✓ Any workflow
Multi-user platform with admin panel ✗ Per-seat SaaS ✗ Per-seat M365 licence ✓ Unlimited users
Isolated Docker agent containers per user
MCP tool integration + custom skills ~ Limited ~ Copilot Studio connectors only ✓ Full MCP + skills
File workspace — process & download outputs ✓ Local files ~ M365/SharePoint only ✓ Any file type
Zero per-seat / per-query SaaS fees ✗ Per-seat billing ✗ M365 E3/E5/E7 required ✓ Free ✓ Flat enterprise
CEO Command Center + KPI Dashboards
IDE integration (VS Code, JetBrains) ✓ All repos + orgs
Chat integrations (Slack, Teams, Cliq, Discord) ~ Teams only ✓ All platforms
Knowledge Graph Server ~ Work IQ (M365 graph only) ✓ Any data source
Database Semantic Layer (NL→SQL any warehouse)
Smart Data Intelligence Agents (5 specialist)
Code Migration Agents (full codebase, autonomous) ✓ Fully autonomous
Code Review Agents (PR webhook, OWASP, perf) ✓ + security + arch
Multi-agent parallel orchestration ~ Sequential M365 tasks only
Open source — inspect, extend, own the code ✗ Closed source ✗ Closed source ✓ Core open
// vs Open-Source Alternatives
Other open-source coworkers
(e.g. Open WebUI, LibreChat, Big-AGI)
  • Basic chat UI, typically single-agent
  • No isolated Docker agent containers
  • No file execution runtime (LibreOffice, Python, etc.)
  • No skills / MCP plugin system
  • No intent-based model routing
  • No SDLC automation agents
  • No Knowledge Graph or Semantic Layer
  • No enterprise upgrade path
EzCoworker Community
Free · Self-hosted · IBU License
  • Full multi-user platform + admin dashboard
  • Isolated Docker agent containers per user
  • Rich runtime: Python, LibreOffice, PDFs, charts
  • MCP tool system + custom skills
  • 22 models, 6+ providers, intent-based routing
  • File workspace — upload, execute, download
  • One-command Docker Compose deploy
  • Clear upgrade path to full Enterprise suite
EzCoworker Enterprise
Unique in market — no alternative has this
  • CEO Command Center & live KPI Dashboards
  • IDE integration (VS Code, JetBrains)
  • Chat integrations (Slack, Teams, Cliq, Discord)
  • Knowledge Graph Server + DB Semantic Layer
  • Smart Data Intelligence Agents (5 specialist)
  • Code Migration Agents — months of work in days
  • Code Review Agents — PR webhooks, OWASP scans
  • Multi-agent parallel orchestration + planner
🏆

Community Edition is the open-source foundation — free, self-hosted, and genuinely production-ready. Enterprise Edition is the only platform that extends that foundation all the way to full Data Intelligence and SDLC automation — capabilities no SaaS coworker or open-source alternative offers — built on the ezinsights.ai Data Intelligence & SDLC Framework.

Community vs Enterprise

Choose your edition

Community Edition is free for internal business use — a complete self-hosted AI agent platform with multi-user support, MCP tools, file workspaces, and 22+ models. Enterprise adds the full ezinsights.ai Data Intelligence & SDLC suite — Command Center, KPI Dashboards, IDE & Chat integrations, Knowledge Graph, Semantic Layer, and specialist AI agents — contact sales@ezinsights.ai for pricing and onboarding.

Capability Community (Free · Internal Business Use) Enterprise
Single agent execution
All LLM providers (Ollama, Anthropic, OpenAI, Google Gemini, Groq, OpenRouter)
LiteLLM proxy — translates Anthropic ↔ OpenAI format for all cloud providers
Latest 2026 models — claude-sonnet-4-6, gpt-4.1, gemini-3.1-pro, o3, o4-mini
Vision model flag (image input) + reasoning model flag (chain-of-thought)
MCP server support & plugin system
File workspace (upload / process / download) — handles any filename format
Shared chat links — visitors download output files without an account
Custom skills system
Multi-user platform + admin panel
Admin dashboard — failed response tracking (per user, model, day)
Admin tables — sortable & searchable across all views
Activity trends from DB creation date — no phantom empty rows
Conversation history (10 turns)
Docker Compose deployment
Multi-agent parallel execution
Intent-based LLM execution planner
Smart model tier routing
CEO Command Center — conversational executive intelligence hub
KPI Intelligence Dashboard — live AI-driven metrics with anomaly alerts
Channel integrations (Zoho Cliq, Slack, WhatsApp, Teams, Discord)
IDE integration — VS Code & JetBrains sidebar agent
Knowledge Graph Server — org-wide entity & relationship intelligence
Database Semantic Layer — natural-language-to-SQL over any warehouse
Smart Data Intelligence Agents (Profiler, Lineage, Quality, Enrichment, Insight)
Code Migration Agents — automated large-scale codebase migrations
Code Review Agents — security, perf, architecture, PR comments via webhook
Advanced admin intelligence dashboard — cost attribution & anomaly detection
SSO/SAML + SOC 2 / ISO 27001 / GDPR audit logging
Priority support & SLA

// Internal Business Use License

EzCoworker Community Edition is free to deploy and use within your organisation for internal business purposes. You may not resell, sublicense, or offer it as a hosted service to third parties. Modifications for internal use are permitted; redistributed forks require written approval from ezinsights.ai. For commercial redistribution, SaaS deployments, or OEM embedding, contact sales@ezinsights.ai.

Internal Business Use License · by ezinsights.ai

Ready to run AI agents
on your own infrastructure?

Deploy Community Edition free, or reach out to unlock the full Enterprise Data Intelligence & SDLC suite — the most powerful AI agent platform available anywhere.

Contact for Enterprise →