Technology Deep Dive


For the investor-friendly overview of Brunelly's competitive moat and IP, see Technology & Competitive Moat.


3. Maitento: The AI Operating System (Key IP)

Maitento is Brunelly's most defensible asset. It is a separate codebase, a separate intellectual property asset, and a genuinely novel piece of technology. Understanding what Maitento is - and what it is not - is critical to understanding Brunelly's defensibility.

Why Maitento Matters to Investors

The AI developer tools market is flooded with products that call an LLM API, wrap the response in a user interface, and call it innovation. Those products have no defensible technology - they can be replicated in weeks by any competent engineering team. Maitento is fundamentally different, and understanding that difference is critical to understanding Brunelly's competitive moat.

The AI industry is undergoing a generational shift. The first wave of AI tools - Copilot, Cursor, ChatGPT wrappers - consists of single-agent systems: one AI model responding to one prompt. The industry is now moving toward multi-agent orchestration, in which multiple specialised AI agents collaborate on complex tasks, share context, and coordinate their work. This is the difference between asking one person a question and having a team of specialists work together on a project.

Maitento was purpose-built for this multi-agent future. It is not adapting to the shift - it was designed for it from the ground up. The core IP assets that make this possible are:

  • Cogniscript: A proprietary programming language designed specifically for AI agent orchestration. This is not a scripting layer on top of Python - it is a full language with its own compiler, bytecode format, and virtual machine. It gives Brunelly precise, repeatable control over how AI agents think, collaborate, and make decisions. No other company in this market has built a purpose-designed language for AI orchestration.

  • The Loom: A sophisticated memory management system designed to give AI agents persistent, context-aware memory across multi-step workflows. The Loom is currently in active development, with core components operational and the complete system targeted for full launch in May 2026. When fully realised, The Loom enables each step of a workflow - codebase analysis, code generation, code review - to have access to everything the previous steps learned. This persistent memory capability is what enables full-lifecycle coverage; without it, each AI interaction starts from zero.

  • Multi-agent coordination: Four distinct patterns for how AI agents work together - from simple single-agent tasks to complex multi-specialist workflows with human approval gates. This is the orchestration layer that transforms individual AI capabilities into structured, enterprise-grade workflows.

This dual-asset structure - the Brunelly application and the Maitento platform beneath it - means investors are not just backing a product; they are backing a platform. And the platform's defensibility does not depend on any single AI model provider. Maitento abstracts away the underlying LLMs (Claude, GPT, Gemini, local models), so it benefits from model improvements rather than being threatened by them. Better models running through Maitento's orchestration produce dramatically better results than the same models running standalone.

What Maitento Is Not

Maitento is not a wrapper around ChatGPT, Claude, or any other large language model. It is not a prompt engineering framework. It is not a "thin layer" that pipes user input to an LLM and returns the response. Companies that do this - and there are many - have no defensible technology. Their product can be replicated in weeks.

What Maitento Is

Maitento is an AI operating system - a runtime environment for building, executing, and managing complex AI workflows. The analogy to a traditional operating system is deliberate and precise:

| Traditional OS Concept | Maitento Equivalent | Purpose |
|---|---|---|
| CPU | Cogniscript VM | Executes AI workflow logic |
| Process | App / Interaction / Capsule | Isolated execution units with their own state |
| Memory | The Loom | Persistent, structured AI memory |
| File System | Virtual File System | Tenant-isolated file operations |
| Device Driver | AI Model Interface | Connects to Claude, GPT, Gemini, local models |
| System Calls | Syscall Architecture | Controlled, validated access to external systems |
| Scheduler | Process Scheduler | Distributes work across available resources |
| IPC (Inter-Process Communication) | RabbitMQ | Message passing between AI processes |

This is not marketing language. The architecture genuinely implements OS-level abstractions at the AI orchestration layer.

Cogniscript: A Proprietary Programming Language

At the heart of Maitento is Cogniscript, a purpose-built programming language with its own complete compilation pipeline:

Source Code (.cogniscript) -> Tokenizer -> Parser (AST) -> Compiler -> Assembler -> Bytecode -> Virtual Machine

Cogniscript was designed specifically for AI orchestration. Key properties:

  • Deterministic execution: Unlike running AI logic in a general-purpose language where runtime behaviour can vary, Cogniscript bytecode executes predictably and reproducibly.
  • Observable debugging: Every instruction, stack frame, and variable is visible during execution. When an AI workflow misbehaves, engineers can inspect exactly what happened and why.
  • Pause and resume: AI workflows can be suspended mid-execution and resumed later - essential for human-in-the-loop patterns where an AI process needs to wait for human approval before continuing.
  • Safe sandboxing: The virtual machine validates all operations. AI workflows cannot access resources they are not explicitly permitted to use.
  • Version control: Because Cogniscript compiles to bytecode, changes to AI workflows produce meaningful diffs that can be reviewed, tested, and rolled back.
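To make the pause-and-resume property concrete, here is a minimal, hypothetical sketch of a pausable stack-based VM. The instruction names (`PUSH`, `ADD`, `AWAIT_HUMAN`) and the `VMState` shape are invented for illustration and are not Cogniscript's actual bytecode or runtime.

```python
from dataclasses import dataclass, field

@dataclass
class VMState:
    pc: int = 0                           # program counter
    stack: list = field(default_factory=list)
    paused: bool = False

def run(program, state):
    """Execute bytecode until completion or an AWAIT_HUMAN instruction."""
    state.paused = False
    while state.pc < len(program):
        op, *args = program[state.pc]
        state.pc += 1
        if op == "PUSH":
            state.stack.append(args[0])
        elif op == "ADD":
            b, a = state.stack.pop(), state.stack.pop()
            state.stack.append(a + b)
        elif op == "AWAIT_HUMAN":
            state.paused = True           # suspend; state can be persisted
            return state
    return state

program = [("PUSH", 2), ("PUSH", 3), ("ADD",),
           ("AWAIT_HUMAN",), ("PUSH", 10), ("ADD",)]
s = run(program, VMState())               # pauses after computing 2 + 3
assert s.paused and s.stack == [5]
s = run(program, s)                       # human approved: resume from saved state
assert not s.paused and s.stack == [15]
```

Because the entire execution state is an explicit, serialisable value rather than a host-language call stack, a workflow can wait for human approval for hours or days and then continue exactly where it left off.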

Building a programming language with a complete compilation pipeline, virtual machine, and runtime is a rare engineering feat. It requires deep expertise in compiler design, virtual machine architecture, and language theory - disciplines that are uncommon even among senior software engineers.

The Loom: AI Memory System

Most AI tools treat each interaction as stateless - the AI has no memory of previous conversations, decisions, or context. The Loom is Maitento's answer to this fundamental limitation. The Loom is currently in active development, with core memory components operational and the complete system - including advanced conflict braiding and cross-agent memory sharing - targeted for full launch in May 2026.

The Loom implements four distinct types of memory:

  • Episodic Memory: Records of specific events and interactions (what happened, when, in what context)
  • Semantic Memory: Factual knowledge and understanding (what the AI "knows" about a project, codebase, or domain)
  • Relational Memory: Connections between concepts, people, and decisions (how things relate to each other)
  • Procedural Memory: Learned processes and workflows (how to do things based on past experience)

Each memory has properties that go well beyond simple storage:

  • Salience-based retrieval: Memories are recalled based on relevance to the current context, not just recency. The system weighs semantic similarity, keyword matching, recency, and contextual fit.
  • Ownership semantics: The system distinguishes between knowledge the AI was told, knowledge it inferred, and knowledge it discovered - reflecting how the information should be weighted in decision-making.
  • Decay rates: Memories can be permanent, slow-decaying, normal, or fast-decaying, allowing the system to naturally prioritise recent and important information.
  • Conflict resolution: When multiple AI agents work on the same problem, their memories are "braided" together, resolving conflicts and preserving the full picture.

This memory system is what will enable Brunelly to maintain deep context across an entire project lifecycle - from initial backlog generation through code review - in a way that stateless AI tools fundamentally cannot. The foundational memory architecture is in place today, with the full four-type system reaching completion in May 2026.

Multi-Agent Orchestration

Maitento supports four distinct patterns for coordinating AI agents:

| Pattern | How It Works | Use Case |
|---|---|---|
| OneShot | Single agent, single prompt, single response | Simple completions, quick analysis |
| RoundRobin | Multiple agents take turns, with voting and proposals | Consensus-building, multi-perspective code review |
| Managed | Multi-turn conversation with human injection points | Interactive refinement sessions, chat interfaces |
| Routed | Dynamic agent selection based on analysis of the task | Expert routing - send architecture questions to the architecture specialist, security questions to the security specialist |

All patterns support human-in-the-loop interaction, where the AI process pauses and waits for human input before continuing. This is essential for enterprise trust - AI assists and proposes, humans approve and direct.
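As a rough illustration of the Routed pattern, the sketch below dispatches a task to a specialist based on a simple keyword heuristic. The agent names, routing table, and heuristic are placeholders - real routing would analyse the task with a model rather than match keywords.

```python
# Placeholder specialists; in practice each would be an AI agent with its own role.
SPECIALISTS = {
    "architecture": lambda task: f"[architecture agent] {task}",
    "security":     lambda task: f"[security agent] {task}",
    "general":      lambda task: f"[general agent] {task}",
}

# Illustrative routing table: topic keywords -> specialist.
ROUTES = {
    "architecture": ["architecture", "design", "scaling"],
    "security":     ["vulnerability", "auth", "encryption"],
}

def route(task: str) -> str:
    """Send the task to the first specialist whose keywords match, else general."""
    lowered = task.lower()
    for agent, keywords in ROUTES.items():
        if any(k in lowered for k in keywords):
            return SPECIALISTS[agent](task)
    return SPECIALISTS["general"](task)

assert route("Review our auth token handling").startswith("[security agent]")
assert route("Is this design scalable?").startswith("[architecture agent]")
assert route("Summarise the release notes").startswith("[general agent]")
```

Whatever the selection mechanism, the structural idea is the same: the orchestration layer, not the caller, decides which specialist handles the work.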

Multi-Phase Code Generation

Brunelly's code generation is not simple text completion. It is a structured, multi-phase process:

  1. Analysis Phase: AI examines the existing codebase, understands the architecture, patterns, and conventions. Commits its analysis to the repository.
  2. Implementation Phase: AI modifies real code in a real git repository, following the patterns and conventions it identified. Commits its changes.
  3. Verification Phase: AI validates the changes (build, test, review). Creates a pull request with a complete diff for human review.

This produces production-ready code changes, not code snippets that a developer has to manually integrate.
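The three-phase flow can be sketched as a simple pipeline in which each phase reads the shared context, adds its result, and ends with a commit. The phase bodies below are stubs standing in for real AI calls and git operations; the names and context keys are invented for the sketch.

```python
def analyse(ctx):
    # Stub for the Analysis Phase: identify patterns and conventions.
    ctx["conventions"] = "snake_case, repository pattern"
    return "analysis"

def implement(ctx):
    # Stub for the Implementation Phase: modify code following conventions.
    ctx["diff"] = "+ def fetch_user(...)"
    return "implementation"

def verify(ctx):
    # Stub for the Verification Phase: validate and open a PR.
    ctx["pr"] = {"diff": ctx["diff"], "status": "ready-for-review"}
    return "verification"

def run_pipeline(ctx):
    commits = []
    for phase in (analyse, implement, verify):
        commits.append(f"commit: {phase(ctx)}")  # each phase ends in a commit
    return commits

ctx = {}
commits = run_pipeline(ctx)
assert commits == ["commit: analysis", "commit: implementation", "commit: verification"]
assert ctx["pr"]["status"] == "ready-for-review"
```

Committing after each phase is the key design point: later phases build on recorded, reviewable artefacts rather than on transient model output.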

LLM Provider Independence

Maitento connects to AI models through a standardised interface. It currently supports:

  • Anthropic Claude (all versions)
  • OpenAI GPT (all versions)
  • Google Gemini
  • Local models via Ollama (for air-gapped and cost-sensitive deployments)

This means Brunelly is not locked to any single AI vendor. As new models emerge or existing models improve, Maitento can incorporate them without architectural changes. This also enables a unique capability: using different models for different tasks based on their strengths (e.g., one model for code generation, another for natural language refinement).
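A standardised model interface of this kind can be sketched with a structural protocol: every provider adapter exposes the same method, so orchestration code never touches a vendor SDK directly. The class and method names here are assumptions for illustration, not Maitento's actual interface.

```python
from typing import Protocol

class ModelProvider(Protocol):
    """Any provider adapter must expose the same completion method."""
    def complete(self, prompt: str) -> str: ...

class ClaudeAdapter:
    def complete(self, prompt: str) -> str:
        return f"claude:{prompt}"         # would call Anthropic's API here

class LocalOllamaAdapter:
    def complete(self, prompt: str) -> str:
        return f"local:{prompt}"          # would call a local Ollama endpoint

def generate(provider: ModelProvider, prompt: str) -> str:
    # Orchestration code is provider-agnostic: same call path for any model.
    return provider.complete(prompt)

assert generate(ClaudeAdapter(), "hi") == "claude:hi"
assert generate(LocalOllamaAdapter(), "hi") == "local:hi"
```

Swapping vendors - or mixing them per task - then reduces to passing a different adapter, which is what makes per-task model selection architecturally cheap.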

Why Maitento Is Hard to Replicate

| Barrier | Detail |
|---|---|
| Technical expertise | Requires a rare combination of OS design, compiler engineering, virtual machine architecture, AI agents, and distributed systems knowledge. Very few engineers have this breadth. |
| Architectural coherence | The components reinforce each other. The VM needs the memory system; the memory system needs the orchestration patterns; the orchestration patterns need the syscall architecture. Copying one piece yields little without the others. |
| Time | An experienced team would need 12-24 months to build a comparable system. For a competitor starting today, Brunelly would have 2-3 years of head start by the time they catch up. |
| Integrated domain knowledge | Deep integration with git workflows, code analysis, project management patterns, and enterprise SDLC processes. This is not generic AI infrastructure - it is purpose-built for software development. |

Independent assessment scores Maitento at 8/10 for novelty and 8.5/10 for defensibility. It represents genuine innovation in AI orchestration, not incremental improvement over existing frameworks.

Compared to existing AI orchestration frameworks:

| Framework | What It Does | What It Lacks Relative to Maitento |
|---|---|---|
| LangChain | Python AI framework | No process isolation, no VM, no memory system |
| CrewAI | Multi-agent framework | No bytecode execution, no code generation integration |
| AutoGen (Microsoft) | Agent framework | No virtual file system, limited orchestration patterns |
| Temporal | Workflow orchestration | Not AI-native, no model integration |

4. SDLC Coverage

Brunelly is the only product on the market that covers the entire software development lifecycle with AI assistance at every stage. Every competitor either started with code and is trying to expand to planning, or started with project management and is trying to bolt on AI. Brunelly was built AI-native from the ground up across the full lifecycle.

Current Coverage

Ideation and Planning

  • Backlog Generation: Users describe a project vision and Brunelly's AI generates a structured backlog - features, user stories, tasks, acceptance criteria, and technical notes. What traditionally takes a product team weeks of workshops happens in minutes.
  • AI Refinement Chat: Interactive sessions where users refine work items with AI assistance. The AI can rewrite, expand, clarify, or challenge requirements - acting as an always-available product analyst.
  • Work Item Improvement: AI enhances individual work items with better descriptions, clearer acceptance criteria, and detailed technical specifications.
  • Tech Lead Chat: Project-scoped AI advisor that understands the project's technology stack, architecture, and codebase. Answers architectural questions with full project context.

Estimation and Sprint Planning

  • AI-Assisted Estimation: Real-time voting sessions (story points or t-shirt sizing) with AI recommendations based on codebase complexity and historical patterns.
  • Sprint Planning: Visual Kanban boards with customisable columns, drag-and-drop prioritisation, and AI-informed dependency detection.

Design and Architecture

  • Architecture Expert Chat: AI discussions backed by automated codebase analysis documents. The AI understands the actual architecture, not just what documentation claims.
  • Codebase Analysis: Automated generation of architecture documentation from the codebase itself - dependency maps, pattern identification, and architectural overviews.
  • UI Generator: AI-generated wireframes and mockups from work item descriptions, with style guide support and iteration capability.

Implementation

  • Code Generation: Multi-phase, context-aware code generation that produces complete pull requests. The AI analyses the codebase, understands patterns and conventions, generates code that follows them, and creates a ready-to-review PR.
  • Automatic Coding Orchestration: Batch code generation across all ready stories with human-in-the-loop approval at each step. Pause, resume, accept, reject, or request rework on any generated PR.
  • Pull Request Management: Full PR lifecycle within the platform - diffs, inline comments, merge strategies (merge, squash, rebase), conflict resolution.

Review and Quality

  • AI Code Review: Automated review of code for quality, bugs, and best practices. Generates findings with inline comments before human review begins.
  • Code Quality Analysis: Continuous scanning for quality issues, performance problems, and refactoring opportunities with an actionable stats dashboard.
  • Security Review: AI vulnerability scanning with severity levels and remediation recommendations, following OWASP-style identification patterns.
  • Bug Hunter: Proactive scanning of the codebase for potential bugs before they reach production.
  • Pattern Risk Analysis: Identification of architectural anti-patterns and risks with prioritised recommendations.

Testing

  • AI Test Agent: A full-capability AI testing agent that can function as a virtual QA team member. The test agent can autonomously run test suites, execute exploratory testing against running applications, take screenshots for visual verification, suggest new test cases based on code changes and requirements, and generate comprehensive test reports with findings, evidence, and recommendations. This goes well beyond test plan generation - it emulates the judgment and thoroughness of a skilled human tester, including the ability to discover edge cases and usability issues that scripted tests miss.
  • Test Plan Generation: AI generates structured test plans from work item specifications, covering functional, integration, and edge case scenarios with full traceability back to requirements.
  • Test Management: Full test lifecycle - creation, suites, runs, scheduling, pass/fail tracking, coverage mapping to work items, CSV/PDF export.
  • Screenshot-Based Verification: The test agent can take screenshots during test execution, providing visual evidence of test results and UI state - essential for design verification and regression detection.

Coming in 2026

The remaining stages of the SDLC are under active development:

  • CI/CD Pipeline Integration: Generated PRs automatically trigger build, test, and deployment pipelines via GitHub Actions and Azure Pipelines.
  • AI Design System: A declarative, schema-based design system that bridges design and code generation - designs become code automatically (targeted Month 3 - see Product Roadmap below).
  • Production Monitoring Loop: Integration with monitoring tools (Datadog, New Relic, PagerDuty, Sentry) to detect production exceptions, analyse root causes using codebase context, raise bug work items, and optionally generate fix PRs automatically.

The Full Loop Vision (End of 2026)

Ideation -> Planning -> Design -> Coding -> Review -> Testing -> Deployment
    ^                                                                |
    |                                                                v
    +--- Auto-Fix <--- Bug Analysis <--- Monitoring <--- Production -+

No competitor even aspires to closing this loop. By end of 2026, Brunelly aims to deliver a complete, AI-assisted cycle from initial specification through production monitoring and automated bug resolution.

Competitive Positioning

Point solutions like Cursor and GitHub Copilot improve code writing productivity by 30-50%. But code writing represents only about 25% of the total software development lifecycle. That translates to a 7.5-12.5% overall productivity improvement.

Brunelly targets 30-50% improvement across 100% of the SDLC - delivering 3-4x the value of code-only tools.

| SDLC Phase | % of Developer Time | Brunelly | Cursor | Copilot | Devin |
|---|---|---|---|---|---|
| Requirements and Planning | 15-20% | Covered | No | No | No |
| Estimation | 5-10% | Covered | No | No | No |
| Sprint Planning and Refinement | 10-15% | Covered | No | No | No |
| Code Writing | 20-30% | Covered | Covered | Covered | Covered |
| Code Review | 10-15% | Covered | No | Partial | No |
| Testing | 15-20% | Covered | No | No | No |
| Documentation | 5-10% | Covered | No | No | No |

5. Infrastructure and Deployment Flexibility

Hybrid Cloud Architecture

Brunelly runs on a hybrid cloud architecture that combines on-premises infrastructure with public cloud services. This is not a cost-cutting measure - it is a deliberate architectural decision that proves a critical enterprise capability: the ability to deploy wherever the customer needs.

On-Premises Infrastructure:

  • 5 HP enterprise servers
  • 200 CPU cores total
  • 1.28 TB RAM
  • Terabytes of enterprise SAS storage
  • Fully redundant: dual routers, redundant storage paths, redundant network paths, redundant power
  • Kubernetes clusters for container orchestration

Cloud Infrastructure:

  • Microsoft Azure for database hosting and client file storage
  • Azure Front Door for global ingress and DDoS protection
  • Designed for multi-region expansion

Total monthly infrastructure cost: approximately $600 ($300 data centre + $300 Azure).

Why This Matters

For investors: This demonstrates extraordinary capital efficiency. Comparable startups spend $10,000-$50,000 per month on cloud infrastructure. Brunelly's hybrid approach delivers enterprise-grade infrastructure at a fraction of the cost, extending runway significantly.

For enterprise customers: The fact that Brunelly already runs its own infrastructure - Kubernetes clusters, redundant networking, enterprise storage - proves it can deploy on customer infrastructure. This is not a theoretical capability. It is proven in production.

For regulated industries: Banking, defence, healthcare, and government customers often cannot use public cloud for sensitive workloads. Brunelly's on-premises capability means it can serve markets that cloud-only competitors cannot enter.

Infrastructure Credibility

This infrastructure is not a hobbyist setup. The CTO, Guy Powell, ran an enterprise storage product-quality lab at HP, managing enterprise-grade hardware and infrastructure at scale. He has also previously run an ISP, with full responsibility for network operations, uptime, and customer-facing reliability. The Brunelly infrastructure reflects professional data centre experience applied to a startup context - delivering enterprise-grade reliability at a fraction of the cost of public cloud. The on-premises deployment capability is therefore not aspirational: it is backed by direct, hands-on experience operating production data centre environments.

Deployment Options

| Deployment Model | Description | Target Customer |
|---|---|---|
| SaaS (Multi-Tenant) | Shared platform, tenant-isolated data | Most customers, fastest onboarding |
| Dedicated Cloud | Single-tenant deployment in customer's preferred cloud region | Customers with data residency requirements |
| On-Premises | Full deployment within customer's own data centre | Regulated industries, government, defence |
| Hybrid | Some components on-prem, some in cloud | Customers transitioning to cloud or with mixed requirements |

Regional Expansion

Current data centre operations are in the UK. Planned expansion includes:

  • UAE data centre: Supporting Middle East operations and customers requiring data residency in the region
  • Additional regions: Driven by customer demand and regulatory requirements

6. Enterprise Readiness

Brunelly was built by a team with deep enterprise software delivery experience - including roles at Microsoft, Symantec, Capgemini, Interoute (telco), and ADP (enterprise payroll). Enterprise requirements were not afterthoughts; they influenced architectural decisions from the start.

Multi-Tenant Architecture

Every entity in the system is scoped to a tenant. Data isolation is enforced at the data layer - not just the application layer. One customer's data can never leak into another customer's view, even in a shared deployment.
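One common way to enforce tenant scoping at the data layer is to route every read through a tenant-scoped accessor, so a cross-tenant query cannot even be expressed. The sketch below is a hypothetical, in-memory illustration of that pattern - the entity shapes and class names are invented, not Brunelly's schema.

```python
# In-memory stand-in for a tenant-partitioned data store.
RECORDS = [
    {"tenant_id": "acme",   "id": 1, "title": "Login story"},
    {"tenant_id": "globex", "id": 2, "title": "Billing story"},
]

class TenantScopedRepo:
    """All queries are bound to one tenant at construction time."""
    def __init__(self, tenant_id: str):
        self.tenant_id = tenant_id

    def find_all(self):
        # The tenant filter is applied unconditionally at the data layer,
        # not left to each caller at the application layer.
        return [r for r in RECORDS if r["tenant_id"] == self.tenant_id]

acme_items = TenantScopedRepo("acme").find_all()
assert [r["id"] for r in acme_items] == [1]   # globex data can never appear
```

The design point is that isolation is structural: callers receive a repository already bound to their tenant, so forgetting a `WHERE tenant_id = ...` clause is not a possible failure mode.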

Role-Based Access Control

The platform supports role-based permissions with tenant-level administration. Users are assigned roles that determine what actions they can perform and what data they can see. The RBAC system is designed for extension to granular, project-level permissions as enterprise requirements demand.

Audit Trails and Traceability

All entities track creation and modification metadata. The architecture supports comprehensive audit logging - who performed what action, when, and in what context. This is a foundation for the immutable audit trails required by SOC 2, ISO 27001, and financial regulators.

API-First Design

Every capability in Brunelly is accessible through a REST API. The user interface is a consumer of the same API that external systems would use. This means:

  • Integration with existing enterprise tools (Jira, Azure DevOps, Slack, CI/CD pipelines) is architecturally straightforward
  • Customers can build custom integrations and workflows
  • Automated testing and validation of all platform capabilities is built into the development process

Billing and Usage Tracking

A credit-based billing system with Stripe integration provides transparent, per-feature cost tracking. Enterprise customers can monitor exactly what AI capabilities their teams are using and at what cost - essential for budget management and internal chargebacks.
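Per-feature credit metering can be sketched as a ledger that deducts a feature-specific cost on every AI call and records usage per feature. The feature names and costs below are invented for illustration; Brunelly's actual pricing and metering logic are not described in this document.

```python
# Illustrative per-feature credit costs (invented values).
FEATURE_COSTS = {"backlog_generation": 5, "code_generation": 20, "code_review": 8}

class CreditLedger:
    def __init__(self, balance: int):
        self.balance = balance
        self.usage: dict[str, int] = {}   # per-feature spend, for chargebacks

    def charge(self, feature: str) -> bool:
        """Deduct the feature's cost; refuse the call if credits are insufficient."""
        cost = FEATURE_COSTS[feature]
        if cost > self.balance:
            return False
        self.balance -= cost
        self.usage[feature] = self.usage.get(feature, 0) + cost
        return True

ledger = CreditLedger(balance=30)
assert ledger.charge("code_generation")        # 20 credits spent
assert not ledger.charge("code_generation")    # only 10 credits left
assert ledger.usage == {"code_generation": 20} and ledger.balance == 10
```

Recording spend per feature, rather than only a total, is what makes internal chargebacks and per-capability budget reporting possible.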

DORA Compliance Pathway

The Digital Operational Resilience Act (DORA) applies to financial institutions in the EU and their technology providers. Brunelly's architecture - with its clear separation of concerns, audit capabilities, and on-premises deployment option - positions it well for DORA compliance. The Deutsche Bank / Publicis Sapient engagement validates that regulated financial institutions see Brunelly as viable for their environment.

Compliance Roadmap

Compliance is treated as sales enablement, not overhead. Each investment is mapped to the percentage of enterprise deals it unblocks.

| Phase | Timeline | Key Milestones | Estimated Cost | Enterprise Deal Impact |
|---|---|---|---|---|
| Foundation | Months 1-6 | SOC 2 Type I, SAML/OIDC SSO, comprehensive audit logging, penetration testing, privacy policy and DPA, zero-retention LLM policy, public trust center | ~$45-75K + engineering | SOC 2 Type I unblocks 60%+ of enterprise conversations; SAML 2.0 + OIDC unblocks 75-80% of enterprise deals; privacy policy and DPA unblocks 90%+ of enterprise deals |
| Enterprise-Ready | Months 6-12 | SOC 2 Type II, SCIM provisioning, MFA enforcement, SIEM integration, content exclusion controls, configurable data retention, incident response plan, IP indemnification | ~$30-60K + engineering | SOC 2 Type II is the gold standard that closes enterprise deals; SCIM required for 500+ seat organisations; IP indemnification achieves parity with GitHub Copilot |
| Advanced Enterprise | Months 12-24 | ISO 27001, ISO/IEC 42001 (AI Management), customer-managed encryption keys, VPC / private endpoint deployment, BYOM support, regional data residency | Varies by scope | ISO 27001 opens EU/UK enterprise market; ISO/IEC 42001 is a differentiator (only Augment Code has this); BYOK and VPC required by large regulated enterprises |
| Government and Regulated | 24+ months | FedRAMP pathway, HIPAA compliance, EU AI Act assessment, air-gapped deployment | Regulatory-dependent | Opens government, defence, and healthcare verticals |

Estimated total compliance investment in Year 1: $75,000-$150,000. This covers the Foundation and Enterprise-Ready phases and is the cost of enterprise market entry - it directly unlocks enterprise deal flow rather than representing overhead.

Enterprise Feature Readiness

Transparency with investors: this table shows the honest gap between current product state and full enterprise readiness. The team has mapped every requirement, prioritised by revenue impact, and built timelines based on the founder's 20+ years of enterprise software experience. P0 items are the immediate post-funding priority. These timelines align directly with the Product Roadmap in Section 7 below - enterprise feature delivery is integrated into the product development plan, not a separate workstream.

| Requirement | Current State | Enterprise Expectation | Priority | Timeline | Roadmap Cross-Reference |
|---|---|---|---|---|---|
| Free Model + SaaS Tiers | Not yet launched | Freemium + subscription tiers for self-serve adoption | P0 | Month 1 | Product Roadmap Month 1 |
| Single Sign-On (SSO) | Not yet built | SAML 2.0 / OIDC required | P0 | Month 2 | Product Roadmap Month 2 |
| Audit Logging | Basic | Comprehensive audit trail with export | P0 | Months 1-3 | Product Roadmap Months 1-3 |
| Jira Integration | Not yet built | Bi-directional sync | P0 | Month 3 | Product Roadmap Month 3 |
| Microsoft Integrations | Not yet built | Azure DevOps, Teams | P0 | Month 3 | Product Roadmap Month 3 |
| Full Design UI/UX Feature | Basic UI generation | Complete AI design system | P1 | Month 3 | Product Roadmap Month 3 |
| SCIM Provisioning | Not yet built | Automated user lifecycle management | P1 | Months 3-5 | Product Roadmap Months 4-6 |
| SOC 2 Type I | Not started | Certification required for procurement | P1 | Months 4-6 | Product Roadmap Months 4-6 |
| Granular RBAC | Basic roles | Project/team-level permissions | P1 | Months 3-4 | Product Roadmap Months 4-6 |
| API Access | Limited | Full REST API with rate limiting | P1 | Months 4-6 | Product Roadmap Months 4-6 |
| Production Deployment | Development environment | Full production deployment | P2 | Month 6 | Product Roadmap Month 6 |
| On-Prem Deployment | Proven capability (own infra) | Packaged deployment for customer infrastructure | P3 | Months 6-10 | Product Roadmap Months 4-6 |
| Multi-Region | UK + Azure | UAE, EU data centres | P3 | Months 6-12 | Product Roadmap Month 12 |

6.5 Data Handling & Privacy

Data Flow & Storage

Customer source code and project data enters the platform via Azure Front Door, which provides global ingress and DDoS protection. Data is stored in Azure (MongoDB for database, Azure Blob Storage for files) with tenant-level isolation enforced at every layer. For regulated customers, Brunelly's on-premises deployment option ensures that no data ever leaves customer infrastructure - source code, project data, and AI interactions all remain within the customer's own network boundary.

LLM Interaction Policy

Brunelly enforces a zero-retention policy for all LLM interactions. Customer code is never used to train AI models. Prompts and responses are not stored by LLM providers. Critically, Brunelly's own Maitento AI OS acts as an abstraction layer that controls all data sent to external models - no raw customer data is passed directly to third-party APIs. This architecture enables content filtering, data minimisation, and full auditability of what information leaves the platform.

Data Residency

The current primary region is UK/Azure. Planned expansion includes a UAE data centre region for Middle East customers and an EU data residency option for European enterprise clients. On-premises deployment provides full data sovereignty for regulated industries such as banking, defence, and government - the customer controls exactly where data resides and how it is processed.

GDPR & Regulatory Compliance

A Data Processing Agreement (DPA) template is available for enterprise clients. The platform follows GDPR-compliant data handling practices, including support for right-to-deletion requests and configurable data retention policies per tenant. These capabilities are foundational for DORA compliance in financial services and align with the requirements identified in the Deutsche Bank / Publicis Sapient engagement.

Encryption & Secrets Management

All data is encrypted at rest (AES-256) and in transit (TLS 1.2+). Customer credentials (Git tokens, API keys, integration secrets) are managed securely and never stored in plaintext. Maitento's credential architecture pulls secrets on demand rather than passing them as parameters, ensuring credentials can be rotated without disrupting running processes.
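The pull-on-demand credential pattern can be sketched as follows: running processes hold only a secret *reference*, and the live value is resolved from a vault at the moment of use, so rotating the secret takes effect immediately without restarting anything. The vault structure and function names are invented for the example.

```python
# Stand-in for a secrets vault; real deployments would use a managed secret store.
VAULT = {"git-token": "ghp_old"}

def resolve(ref: str) -> str:
    """Fetch the current secret value fresh on every use."""
    return VAULT[ref]

def push_to_git(token_ref: str) -> str:
    # The process holds only the reference "git-token", never the raw value.
    return f"pushed with {resolve(token_ref)}"

assert push_to_git("git-token") == "pushed with ghp_old"
VAULT["git-token"] = "ghp_new"        # rotate the secret in the vault
assert push_to_git("git-token") == "pushed with ghp_new"   # no restart needed
```

Contrast this with passing secrets as parameters at process start: there, every rotation forces a redeploy, and long-lived AI workflows would break mid-run.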


7. Product Roadmap (2026)

Priority Framework

All roadmap items are prioritised against enterprise sales requirements validated through the Publicis Sapient / Deutsche Bank engagement and broader enterprise pipeline feedback. Enterprise feature readiness (Section 6) is integrated into this roadmap - each milestone delivers both product capability and enterprise requirements in parallel.

| Priority | Definition | Timeline | Examples |
|---|---|---|---|
| P0 | Enterprise deal-killers - without these, contracts don't close | Months 1-3 | SSO (SAML 2.0/OIDC), audit logging, Jira integration |
| P1 | Enterprise accelerators - significantly improve win rate | Months 3-6 | SCIM provisioning, SOC 2 Type I, granular RBAC, API access |
| P2 | Competitive differentiators - widen the moat | Months 4-8 | Full CI/CD pipeline, production monitoring, auto-fix loop |
| P3 | Market expansion - open new segments | Months 6-12 | On-prem deployment package, multi-region, advanced analytics |

Month-by-Month Product Roadmap

| Timeframe | Milestone | Details |
|---|---|---|
| Month 1 | Free model + SaaS subscription tiers | Launch freemium tier using Maitento's Forge capability for zero-cost local inference. Introduce paid SaaS subscription tiers to drive bottom-up adoption and self-serve revenue. |
| Month 2 | SSO enterprise integration | SAML 2.0 and OIDC single sign-on - the single most common deal-killer in enterprise sales. |
| Month 3 | Jira, Microsoft integrations + full design UI/UX | Jira and Azure DevOps bidirectional sync, Microsoft Teams integration. Launch the full AI design system - a declarative, schema-based design tool that feeds directly into code generation. |
| Months 4-6 | Remaining enterprise integrations + deploy to production | SCIM provisioning, SOC 2 Type I certification, granular RBAC, full REST API with rate limiting, CI/CD pipeline integration (GitHub Actions, Azure Pipelines). Remaining enterprise integrations as required by pipeline feedback. |
| Month 5 (May 2026) | The Loom full launch | Complete deployment of the full Loom memory system - all four memory types (Episodic, Semantic, Relational, Procedural) with salience-based retrieval, conflict braiding, and cross-agent memory sharing fully operational. |
| Month 6 | Production deployment complete | Full production environment operational. All P0 and P1 enterprise features delivered. Platform ready for enterprise customer onboarding at scale. |
| Months 6-12 | Production monitoring + full AI cross-cutting | Production monitoring integration (Datadog, New Relic, PagerDuty, Sentry). Automated bug analysis and work item creation. Auto-fix pipeline. Regional infrastructure expansion including UAE data centre. Enhanced code generation with broader language and framework support. |
| Month 12 | Full AI cross-cutting across all SDLC stages | Production monitoring operational. AI assistance active across every stage of the software development lifecycle - from ideation through production monitoring and automated bug resolution. SOC 2 Type II certification. |

Beyond 2026

  • Maitento platform SDK for third-party developers - opening the AI OS to external builders
  • Additional AI model support as the model landscape evolves
  • Industry-specific templates and workflows for banking, healthcare, defence, and government
  • Air-gapped deployment for the most security-sensitive environments

Immediate technical priority post-funding: P0 items (free model launch, SSO, audit logging, Jira integration) are enterprise deal-blockers. The Publicis Sapient / Deutsche Bank engagement has validated that these are the first requirements in every enterprise evaluation. They take precedence over all other roadmap items and are targeted for completion within the first three months.


8. IP Protection and Defensibility

What Makes This Hard to Replicate

The Cogniscript language and VM require expertise in compiler design, virtual machine architecture, and language theory - disciplines that are rare even among senior engineers. Building a programming language is not a weekend project. Building one that is specifically designed for AI orchestration, with pause/resume semantics, safe sandboxing, and observable debugging, requires a unique combination of skills.

The Loom memory system represents a novel approach to AI memory that goes far beyond vector databases. Its salience-based retrieval, ownership semantics, decay rates, and conflict braiding have no direct equivalent in the market. Replicating this requires not just engineering effort but the same philosophical insight into how AI agents should remember, forget, and resolve conflicting information.

The architectural coherence of Maitento means copying one component yields little value. The VM depends on the syscall architecture. The memory system depends on the process model. The orchestration patterns depend on all of the above. A competitor would need to replicate the entire system to compete, not just one piece.

Estimated replication time: 12-24 months for an experienced, well-funded team. This gives Brunelly a 2-3 year head start that widens as the platform accumulates more capabilities, integrations, and domain knowledge.

Trade Secret Protection

Maitento's core IP - the Cogniscript language specification, compiler implementation, VM architecture, Loom memory system, and orchestration patterns - is protected as trade secrets. The codebase is not open-source. Access is tightly controlled. This approach is common for genuinely novel software infrastructure (e.g., Google's search algorithms, Bloomberg's terminal architecture) and provides strong protection without the public disclosure requirements of patents.

IP Ownership

Maitento and Brunelly IP is currently held by Pina Vida Ltd, the consultancy founded by Guy Powell. This IP is transferring to the new Brunelly entity as part of the company's incorporation. By the time investment closes, Brunelly will own 100% of all IP - both the application and the AI operating system. There will be no licensing arrangements, no shared ownership, and no encumbrances.


This document is confidential and intended for prospective investors only.