MCP Integration Reference
Multi-connector architecture

Recommended split

For maintainability and blast-radius control, prefer multiple downstream MCP servers rather than one monolith:

Role         | Examples                                              | Notes
UI / widgets | MCP App server (tools + resources/read for HTML UI)   | Frequent UI iteration; keep host-facing tools narrow
Core backend | Your domain API MCP                                   | Business logic, persistence
Integrations | Zapier, n8n, Make MCP bridges                         | Isolate third-party credentials and quotas

Why isolate frontend from backend

Splitting the MCP App (widget + host-facing tools and resources) from backend MCP servers pays off across the whole lifecycle:

  • Development — UI and protocol surfaces can move quickly without redeploying data layers; backend teams can ship API and tool changes on their own cadence.
  • Testing — Point the upstream mapping at a stub or dummy downstream connector for the “backend” tools while you exercise the widget end-to-end; swap in a real MCP server when you need integration tests.
  • Operations — Smaller deploy units mean clearer ownership, smaller blast radius when something fails, and the ability to roll back or scale one side without touching the other.
  • Security and credentials — Long-lived secrets and integration tokens stay on servers that never serve HTML to arbitrary embed contexts; the Apps-facing server stays thin.

This split matches the common Chat Vault–style layout: one deployable for the MCP App + widget, another for core tools (and optionally more connectors for automations).
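The stub-then-swap testing flow above can be sketched in a few lines. This is an illustrative sketch, not a real Agentsyx Creator API: all function and tool names are hypothetical, and the "mapping" is reduced to a plain dictionary.

```python
# Hypothetical sketch: a stub downstream implementation for a "backend" tool,
# so the widget can be exercised end-to-end before the real MCP server exists.

def stub_list_orders(user_id: str) -> list[dict]:
    # Deterministic canned data keeps widget tests repeatable.
    return [{"id": "order-1", "user": user_id, "status": "shipped"}]

def real_list_orders(user_id: str) -> list[dict]:
    raise NotImplementedError("would call the production backend MCP")

# The upstream mapping decides which implementation fulfills the published
# tool name; swap the value to move from stub to real without touching the UI.
TOOL_IMPLS = {"list_orders": stub_list_orders}

def call_tool(name: str, **args):
    return TOOL_IMPLS[name](**args)
```

Because only the mapping changes, the widget and the host-facing tool schema stay identical across stub and production runs.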

Upstream mapping: pick the implementation per tool

In Agentsyx Creator, the upstream connector is what the host sees (tool names, schemas, and—for MCP Apps—resources). Downstream connectors are the actual MCP servers you register (your backend, Zapier, n8n, Make, etc.).

Upstream tool mapping lets you attach each published tool to one downstream implementation, chosen from any connector you have wired up. That means:

  • The MCP App server can register placeholder or minimal tools (or tools that only exist to drive the UI) while you iterate on the widget.
  • In production, you remap those same upstream tool names to the real implementation on another connector—for example your core API MCP, or a Zapier MCP tool that wraps a third-party SaaS.
  • You can mix sources: some tools served from your own server, others from Zapier, Make, or n8n, without changing how the host discovers a single upstream surface.

The model and host still see one consistent tool list; only your Creator configuration decides which downstream server fulfills each call. For how connectors fit together conceptually, see Understanding Connectors.
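The routing described above can be sketched as a lookup table from upstream tool names to downstream connectors. This is a hypothetical illustration — the connector names, tool names, and dispatch shape are invented, not Creator's actual implementation.

```python
# Sketch: one upstream tool surface, several downstream connectors.
# Each downstream connector is modeled as a callable for illustration.

DOWNSTREAM = {
    "core-api": lambda tool, args: {"served_by": "core-api", "tool": tool, **args},
    "zapier":   lambda tool, args: {"served_by": "zapier", "tool": tool, **args},
}

# Upstream tool name -> (downstream connector, downstream tool name).
# The host only ever sees the upstream names on the left.
UPSTREAM_MAPPING = {
    "create_ticket": ("core-api", "tickets.create"),
    "notify_slack":  ("zapier", "slack_send_message"),
}

def handle_tool_call(upstream_name: str, args: dict):
    connector, downstream_tool = UPSTREAM_MAPPING[upstream_name]
    return DOWNSTREAM[connector](downstream_tool, args)
```

Remapping a tool to a different implementation is an edit to `UPSTREAM_MAPPING`; the upstream tool list the host discovers does not change.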

Blending integration-platform MCP servers

Pointing separate downstream connectors at Zapier, Make, and n8n MCP offerings (each as its own server) gives you access to the large catalogs of pre-built app integrations those platforms maintain, often far faster than hand-rolling every OAuth flow and API client yourself.

Benefits in this architecture:

  • Credentials stay in the platform — Zapier/Make/n8n store tokens and refresh logic; your MCP tool surface stays a thin contract.
  • Isolation — A quota spike or misconfigured automation on one integration connector does not have to share a process or deploy with your core backend MCP.
  • Composability — Map only the automations you need as named tools on the upstream connector; leave the rest unmapped.

Operational details and caveats (quotas, focused workflows, minimal arguments) are summarized in Integration platforms.
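The composability point — map only what you need — amounts to an allow-list over the platform's catalog. A minimal sketch, assuming hypothetical automation names:

```python
# Sketch: expose only an allow-listed subset of an integration platform's
# automation catalog as upstream tools, leaving the rest unmapped.

PLATFORM_CATALOG = {
    "slack_send_message": lambda args: "sent",
    "gmail_send_email":   lambda args: "sent",
    "sheets_append_row":  lambda args: "appended",
}

ALLOWED = {"slack_send_message"}

def exposed_tools(catalog: dict) -> dict:
    # Least privilege: only mapped automations become callable tools.
    return {name: fn for name, fn in catalog.items() if name in ALLOWED}
```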

Practices

  • Least privilege — Integration connectors should expose only the automations needed for mapped tools.
  • Failure isolation — A failed automation or misconfigured integration connector should not take down core read-only tools.
  • Naming — Use clear connector and tool names in Creator so mappings stay understandable when you swap implementations.
  • Idempotency — Automation backends may retry; design tools so repeated calls are safe or deduplicated.
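The idempotency practice can be sketched as a result cache keyed by a caller-supplied idempotency key (e.g. derived from the automation run ID). All names here are illustrative assumptions, not a prescribed implementation.

```python
# Sketch of an idempotency guard: a retried call with the same key replays
# the stored result instead of re-running the side effect.

_results: dict[str, object] = {}

def idempotent(fn):
    def wrapper(key: str, *args, **kwargs):
        if key not in _results:
            _results[key] = fn(*args, **kwargs)
        return _results[key]
    return wrapper

calls = {"n": 0}

@idempotent
def create_invoice(amount: int) -> dict:
    calls["n"] += 1  # the side effect we want at most once per key
    return {"invoice_no": calls["n"], "amount": amount}
```

In production the key store would need to be durable and shared across instances; an in-process dict is only enough to show the contract.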

See also Integration platforms.