
What If Users Could Build Their Own Features — Inside Your App?

Every SaaS company hits the same wall. Not a technology wall. A UI wall.

You shipped the API. It can filter, sort, export, bulk-update, and run reports. The backend does everything your customers are asking for. But they can't access any of it — because nobody has built the screen yet.

Custom dashboard? Backlog. Different data view? Six months. Urgent export in a specific format? Pull an engineer off their sprint for twenty minutes to write a one-off script. Combine two features in a new way? "We'll consider it for the roadmap."

I lived this. After my last company hit product-market fit, we were buried. Fortune 100 customers offering real money for features only 10% of users needed. We 10x'd development throughput with AI coding tools. Doubled engineering headcount. Users just wanted more. Their expectations scaled faster than our team.

The bottleneck was never capability. It was always UI.

The core idea

What if users could describe the interface they need — and the app just built it?

"Show me a table of all orders from last week, with a button to mark each as shipped."

The system assembles it. Right there. Using your existing API. No ticket filed, no sprint planned, no deploy.

This is what n.codes is building: an open-source framework that lets end-users generate features inside your app with natural language prompts. Think Lovable or Replit — but embedded in your product, wired to your backend, respecting your permissions, and matching your design system.

What exists today: the capability map

Before an LLM can build UI on top of your app, it needs to understand what your app can do. That's the problem we solved first.

The n.codes CLI analyzes your codebase — routes, API schemas, frontend components — and produces a capability map: a structured registry of every entity, action, query, and component in your system.

```bash
npx n.codes init
npx n.codes sync
```

`init` walks you through configuration — which LLM provider to use (OpenAI or Anthropic) and where to write output. `sync` reads your codebase and generates the map.

Here's what it produces for a real app (cal.com):

```yaml
version: 1
generatedAt: "2026-02-01T11:51:58.295Z"
projectName: "cal.com"

actions:
  deleteBooking:
    endpoint: DELETE /api/v1/bookings/:id
    description: "Removes booking by ID with confirmation response"
  patchBooking:
    endpoint: PATCH /api/v1/bookings/:id
    description: "Updates booking fields via request body"
  postBookings:
    endpoint: POST /api/v1/bookings
    description: "Creates new booking from payload"

queries:
  getBookings:
    endpoint: GET /api/v1/bookings
    description: "Retrieves bookings for display or reporting"
  getBooking:
    endpoint: GET /api/v1/bookings/:id
    description: "Retrieves single booking without side effects"

components:
  booker:
    path: "apps/web/modules/bookings/components/Booker.tsx"
  enterprisePage:
    path: "apps/web/components/EnterprisePage.tsx"
```
The CLI auto-detects framework patterns — Next.js Pages and App Router, Express-style route directories, controller-based architectures — and classifies every endpoint. GET and HEAD become queries. POST, PUT, PATCH, DELETE become actions. It maps your component library alongside the API surface.
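The method-based classification rule is simple enough to sketch. The names below (`RouteEntry`, `classifyRoute`, `buildMap`) are illustrative, not n.codes internals — just a minimal model of the rule described above:

```typescript
type Kind = "query" | "action";

interface RouteEntry {
  method: string;   // e.g. "GET", "POST"
  endpoint: string; // e.g. "/api/v1/bookings/:id"
}

// GET and HEAD are safe reads -> queries; mutating verbs -> actions.
function classifyRoute(route: RouteEntry): Kind {
  return ["GET", "HEAD"].includes(route.method.toUpperCase())
    ? "query"
    : "action";
}

// Group a scanned route table into the capability-map shape.
function buildMap(routes: RouteEntry[]) {
  const map = { queries: [] as RouteEntry[], actions: [] as RouteEntry[] };
  for (const r of routes) {
    map[classifyRoute(r) === "query" ? "queries" : "actions"].push(r);
  }
  return map;
}
```

The useful property is that the split is mechanical: no LLM judgment is needed to decide whether an endpoint reads or mutates, only to describe what it does.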

For cal.com, that's ~150 API endpoints and 500+ components, indexed and described, ready for an LLM to reason about.

The `sync` command is also incremental. It hashes route files and their imports, so re-running it only re-analyzes what changed. Cached results skip the LLM entirely.
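A hash-and-skip cache of this kind can be sketched in a few lines. The cache shape and function names here are assumptions for illustration, not n.codes' actual on-disk format:

```typescript
import { createHash } from "node:crypto";

type Cache = Map<string, { hash: string; result: string }>;

function fileHash(contents: string): string {
  return createHash("sha256").update(contents).digest("hex");
}

// Re-run the expensive analyzer only when a route file's content changed.
// `analyze` stands in for the LLM call.
function analyzeIfChanged(
  path: string,
  contents: string,
  cache: Cache,
  analyze: (src: string) => string,
): { result: string; fromCache: boolean } {
  const hash = fileHash(contents);
  const hit = cache.get(path);
  if (hit && hit.hash === hash) {
    return { result: hit.result, fromCache: true }; // unchanged: skip the LLM
  }
  const result = analyze(contents);
  cache.set(path, { hash, result });
  return { result, fromCache: false };
}
```

On a codebase the size of cal.com, this is the difference between paying for ~150 endpoint analyses on every run and paying only for the handful of routes that actually changed.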

```bash
npx n.codes sync --force     # re-analyze everything
npx n.codes sync --sample 10 # analyze only 10 routes (useful for testing)
npx n.codes validate         # check your capability map for issues
```

Why the capability map matters

Most attempts at "AI-generated UI" fail in the same way: the LLM doesn't know what the app can actually do. It hallucinates endpoints. It builds forms for APIs that don't exist. It ignores permissions.

The capability map is the constraint layer. It tells the LLM: here are the entities (Booking, User, Team). Here are the actions you can take (create, update, delete). Here are the queries you can run (list, filter, search). Here are the components you can use. Stay inside these boundaries.

This is what makes the approach different from generic code generation. The LLM isn't writing arbitrary code. It's assembling known components and wiring them to documented APIs.
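Concretely, a constraint layer means generated output can be checked against the map before anything renders. This is a hedged sketch — the spec shape (`UiSpec`) is hypothetical, but the principle is the one described above: anything referencing an unknown component, query, or action is rejected rather than rendered.

```typescript
interface CapabilityMap {
  actions: Set<string>;
  queries: Set<string>;
  components: Set<string>;
}

// A hypothetical minimal shape for an LLM-generated UI spec.
interface UiSpec {
  component: string; // which allowlisted component to render
  query?: string;    // which known query feeds it
  action?: string;   // which known action its buttons invoke
}

// Return a list of violations; an empty list means the spec stays
// inside the capability map's boundaries.
function validateSpec(spec: UiSpec, map: CapabilityMap): string[] {
  const errors: string[] = [];
  if (!map.components.has(spec.component))
    errors.push(`unknown component: ${spec.component}`);
  if (spec.query && !map.queries.has(spec.query))
    errors.push(`unknown query: ${spec.query}`);
  if (spec.action && !map.actions.has(spec.action))
    errors.push(`unknown action: ${spec.action}`);
  return errors;
}
```

A hallucinated endpoint fails this check at generation time, instead of failing at runtime in front of the user.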

What's coming: prompt-to-UI generation

The capability map is the foundation. What we're building on top of it:

Sandboxed UI runtime. Users describe what they want. The system generates an interface using pre-built components (tables, forms, filters, charts) and renders it in an isolated sandbox (iframe or WASM). No arbitrary code execution. Allowlisted components and actions only.

Permission-aware generation. The system respects your RBAC. If a user can't access the billing API, they can't generate a billing dashboard. Every generated UI inherits the user's permission scope.
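One way to enforce this — an assumption about the eventual design, not a description of shipped code — is to filter the capability map itself before the LLM ever sees it, so a user without a scope cannot even be offered capabilities behind it:

```typescript
// Hypothetical shape: each capability declares the RBAC scope it requires.
interface Capability {
  name: string;          // e.g. "getInvoices"
  requiredScope: string; // e.g. "billing:read"
}

// The map handed to the LLM is pre-filtered to the user's grants.
function scopeMap(all: Capability[], userScopes: Set<string>): Capability[] {
  return all.filter((c) => userScopes.has(c.requiredScope));
}
```

Filtering the map upstream is stricter than checking permissions at call time: the generator cannot propose a billing dashboard to a user who lacks `billing:read`, because for that user the billing capabilities simply do not exist.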

Audit trail. Every generated interface and every action taken through it gets logged. You know exactly what was built, by whom, and what it did.

Ownership model. Generated UIs can be private (just for the user who made them), shared with a team, or promoted to "official" by an admin. A feature that starts as one user's experiment can become part of the product.

MCP integration. Your capability map becomes an MCP server. Any MCP-compatible client — Claude, Cursor, custom agents — can discover your app's entities, actions, and queries without custom integration work. Instead of building bespoke tool definitions for every AI client, you expose the capability map once and any agent can use it. Your app becomes a tool the entire AI ecosystem can talk to.
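In spirit, the translation is mechanical: each capability-map entry already has a name and a description, which is most of what a tool definition needs. A rough, SDK-agnostic sketch of that mapping, using the YAML shapes shown earlier (the output shape here is simplified, not the MCP wire format):

```typescript
// Matches the capability-map entries shown earlier in this post.
interface CapabilityEntry {
  endpoint: string;    // e.g. "DELETE /api/v1/bookings/:id"
  description: string;
}

// Turn map entries into generic tool definitions of the kind an
// MCP-style client could list and invoke.
function toToolDefs(
  actions: Record<string, CapabilityEntry>,
): { name: string; description: string }[] {
  return Object.entries(actions).map(([name, entry]) => ({
    name,
    description: `${entry.description} (${entry.endpoint})`,
  }));
}
```

The real integration would also need input schemas and a transport, but the point stands: because the capability map is already structured, exposing it to agents is a projection, not a rewrite.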

What this is for — and what it isn't

n.codes targets the 80-90% of software that's CRUD operations, analytics dashboards, and workflows. Tables, forms, filters, buttons, charts — different arrangements of the same building blocks.

It is not for pixel-perfect marketing pages. It is not for bypassing business logic. It won't replace your design team.

But for the internal tool that needs twelve different data views? For the enterprise customer who needs a workflow you'll never prioritize? For the backend team that shipped an API and needs any UI on top of it? That's where this fits.

Real use cases

SaaS companies drowning in feature requests. Stop building one-off dashboards. Give users a way to self-serve for the long tail of requests your team will never get to.

Platform companies with API ecosystems. Let your ecosystem build UIs on your API without hiring frontend developers. The capability map documents what's possible; users explore within those bounds.

Internal tools teams. Backend engineers ship APIs. The UI generates itself. No more waiting three sprints for a frontend developer to wrap a CRUD endpoint in a form.

Rapid prototyping. Before committing engineering time to a feature, let users generate a working prototype from the API that already exists. Validate demand before you build.

Getting started

n.codes is open source and in early development. The CLI and capability map generation work today. UI generation is what's next.

```bash
# Install the CLI
npm install -g ncodes-cli

# Initialize in your project
npx n.codes init

# Generate your capability map
npx n.codes sync

# Validate the output
npx n.codes validate
```

Try it on your own codebase and see what the capability map reveals. If you've been sitting on a powerful API that users can't fully access because of a UI bottleneck — this is what we're solving.

I'd love to hear from you. Does this match a problem you've experienced? What would break this idea? Open an issue or start a discussion on GitHub.