Guarded AI Assistance for React Native Apps

Add in-app AI assistance that understands your React Native UI, answers user questions, guides workflows, performs approved actions, and hands off to a human when needed.

MobileAI is the SDK and cloud dashboard behind it. The SDK runs inside your app; MobileAI Cloud handles projects, analytics, hosted proxy configuration, and support escalation.

Dashboard: Open MobileAI Cloud to create a project, then copy your publishable key from Setup & API Keys.

npm install @mobileai/react-native

🤖 Answers, Guides, and Resolves Inside Your App

AI assistant helping a user resolve an issue inside the app

[npm version, license, and platform badges]

⭐ If this helped you, star this repo — it helps others find it!

💡 The Problem With Every Support Tool Today

Intercom, Zendesk, and every chat widget all do the same thing: send the user instructions in a chat bubble.

"To cancel your order, go to Orders, tap the order, then tap Cancel."

That's not support. That's documentation delivery with a chat UI.

This SDK takes a different approach. Instead of only telling users where to go, it provides delegated assistance inside the app: answering when data is enough, guiding when the user should act, and performing approved actions when the app allows it.

🧠 How It Works — The App Defines the Assistance Layer

Every other support tool needs you to build API connectors: endpoints, webhooks, action definitions in their dashboard. Months of backend work before the AI can do anything useful.

This SDK reads your app's live UI natively — every button, label, input, and screen — in real time. There's nothing heavy to integrate before the assistant understands the app. Your app already knows how to cancel orders, update addresses, apply promo codes, and validate forms. MobileAI adds a guarded assistance layer around those existing workflows.

No OCR. No image pipelines. No selectors. No required annotations. No required backend connectors.

For flows where backend or database state is the better source of truth, expose structured read-only data with useData or custom app actions with useAction. The assistant can mix UI guidance, structured data, and approved actions in the same conversation.

Why This Matters in the Support Context

The most important insight: assistance must be expected, permissioned, and visible. In a support conversation, the user has already asked for help — but the app should still define what the assistant can see, what it can do, and which steps require approval:

Context | Safer assistant behavior
Unprompted (out of nowhere) | Do nothing without user intent
In a support chat — user asked for help | Explain, ask permission, then assist
User is frustrated and types "how do I..." | Guide, use data when possible, and act only through approved paths

🎟️ The 5-Level Support Ladder

The SDK handles every tier of support automatically — from a simple FAQ answer to live human chat:

┌──────────────────────────────────────────────────────┐
│  Level 1: Knowledge Answer                           │
│  Answers from knowledge base — instant, zero UI      │
│  "What's your return policy?" → answered directly    │
├──────────────────────────────────────────────────────┤
│  Level 2: Show & Guide                               │
│  Assistant navigates to exact screen, user acts last │
│  "Settings → Notifications. It's right here. ☘️"     │
├──────────────────────────────────────────────────────┤
│  Level 3: Do & Confirm (guarded assistance)          │
│  Assistant fills forms, user confirms final action   │
│  "I've typed your new address. Tap Save to confirm." │
├──────────────────────────────────────────────────────┤
│  Level 4: Approved Resolution                        │
│  Assistant completes allowed flow after approval     │
│  "Done! Order #4521 cancelled. Refund in 3-5 days."  │
├──────────────────────────────────────────────────────┤
│  Level 5: Human Escalation                           │
│  Live agent via WebSocket — seamless handoff         │
│  Billing dispute, legal issues, edge cases           │
└──────────────────────────────────────────────────────┘

Example conversations

Level 1 — Knowledge Answer

User: "What's your return policy?"
AI:   "You can return items within 30 days. Full refund in 5-7 business days."
      [from knowledge base — no UI needed]

Level 2 — Show & Guide

User: "How do I turn on price alerts?"
AI:   "That's in Settings → Notifications. Want me to take you there?"
User: "Yes"
AI:   [navigates to Settings → scrolls to Notifications section]
AI:   "Here it is. The Price Alerts toggle is right here. ☘️"
      [user taps the toggle themselves]

Level 3 — Do & Confirm

User: "I need to change my shipping address"
AI:   "Sure! I'll navigate to your address settings."
      [navigates to Profile → Address]
AI:   "What's the new address?"
User: "15 Omar St, Nasr City, Cairo"
AI:   [types the address into the fields]
AI:   "I've filled in '15 Omar St, Nasr City, Cairo'. Tap Save to confirm."
      [user taps Save themselves]

Level 4 — Approved Resolution

User: "Cancel my last order"
AI:   "Order #4521 — Margherita pizza, $12.99. Want me to cancel it?"
User: "Yes"
AI:   [navigates to Orders → finds #4521 → taps Cancel → taps Confirm]
AI:   "Done! Order #4521 is cancelled. Refund arrives in 3-5 days."

Level 5 — Human Escalation

User: "I was charged twice for my order"
AI:   "I can see your order history, but billing disputes need a human agent."
      [triggers escalate → live agent chat via WebSocket]

βš™οΈ Why Not Intercom or Zendesk?

Capability | Intercom Fin | Zendesk AI | This SDK
Answer questions | ✅ | ✅ | ✅ Knowledge base
Navigate user to right screen | ❌ | ❌ | ✅ App-aware navigation
Fill forms for the user | ❌ | ❌ | ✅ Types directly into fields
Execute in-app actions | Via API connectors (must build) | Via API connectors | ✅ Approved app actions and guarded UI assistance
Voice support | ❌ | ❌ | ✅ Gemini Live
Human escalation | ✅ | ✅ | ✅ WebSocket live chat
Mobile-native | ❌ WebView overlay | ❌ WebView | ✅ React Native component
Setup time | Days–weeks (build connectors) | Days–weeks | Minutes (<AIAgent> wrapper)
Price per resolution | $0.99 + subscription | $1.50–2.00 | You decide

The moat

No competitor can do Levels 2–4. Intercom and Zendesk answer questions (Level 1) and escalate to humans (Level 5). The middle — app-aware navigation, form assistance, and full in-app resolution — is uniquely possible because this SDK reads the React Native Fiber tree. That can't be added with a plugin or API connector.

✨ What's Inside

Support Your Users

🦹 AI Support Agent — Resolves at Every Level

The AI answers questions, guides users to the right screen, helps fill forms, or completes approved task flows — with voice support and human escalation built in. All inside the existing app UI, with app-defined guardrails.

  • Zero-config — wrap your app with <AIAgent>, done. No annotations, no selectors, no API connectors
  • 5-level resolution — knowledge answer → guided navigation → guarded assistance → approved resolution → human escalation
  • User approval — the assistant pauses before app actions and irreversible steps. Users stay in control
  • Human escalation — live chat via WebSocket, CSAT survey, ticket dashboard — all built in
  • Knowledge base — policies, FAQs, product data queried on demand — no token waste

🎤 Real-time Voice Support — Users Speak, AI Assists

Full bidirectional voice AI powered by the Gemini Live API. Users speak their support request; the assistant responds with voice and can guide, navigate, fill forms, or resolve approved issues.

  • Sub-second latency — real-time audio via WebSockets, not turn-based
  • Full resolution — the same guided and approved actions as text mode — all by voice
  • Screen-aware — auto-detects screen changes and updates context instantly

💡 Speech-to-text in text mode: Install expo-speech-recognition for a mic button in the chat bar — letting users dictate instead of typing. Separate from voice mode.

🍎 Siri & Spotlight — Trigger Actions Hands-Free (iOS 16+)

Every useAction you register automatically becomes a Siri shortcut and Spotlight action. One config plugin added at build time — no Swift required — and users can say:

"Hey Siri, track my order in MyApp" "Hey Siri, checkout in MyApp" "Hey Siri, cancel my last order in MyApp"

Setup — Expo Config Plugin
// app.json
{
  "expo": {
    "plugins": [
      ["@mobileai/react-native/withAppIntents", {
        "scanDirectory": "src",
        "appScheme": "myapp"
      }]
    ]
  }
}

After npx expo prebuild, every registered useAction is available in Siri and Spotlight automatically.

Or generate manually:

# Scan useAction calls → intent-manifest.json
npx @mobileai/react-native generate-intents src

# Generate Swift AppIntents code
npx @mobileai/react-native generate-swift intent-manifest.json myapp

⚠️ iOS 16+ only. Android equivalent (Google Assistant App Actions) is on the roadmap.

Supercharge Your Dev Workflow

🔌 MCP Bridge — Test Your App in English, Not Code

Your app becomes MCP-compatible with one prop. Connect any AI — Antigravity, Claude Desktop, CI/CD pipelines — to remotely read and control the running app. Find bugs without writing a single test.

MCP-only mode — just want testing? No chat popup needed:

<AIAgent
  showChatBar={false}
  mcpServerUrl="ws://localhost:3101"
  analyticsKey="mobileai_pub_xxx"
  navRef={navRef}
>
  <App />
</AIAgent>

🧪 AI-Powered Testing via MCP

The most powerful use case: test your app without writing test code. Connect your AI (Antigravity, Claude Desktop, or any MCP client) to the emulator and describe what to check — in English. No selectors to maintain, no flaky tests, self-healing by design.

Skip the test framework. Just ask:

Ad-hoc — ask your AI anything about the running app:

"Is the Laptop Stand price consistent between the home screen and the product detail page?"

YAML Test Plans — commit reusable checks to your repo:

# tests/smoke.yaml
checks:
  - id: price-sync
    check: "Read the Laptop Stand price on home, tap it, compare with detail page"
  - id: profile-email
    check: "Go to Profile tab. Is the email displayed under the user's name?"

Then tell your AI: "Read tests/smoke.yaml and run each check on the emulator"

Real Results — 5 bugs found automatically:

# | What was checked | Bug found | AI steps
1 | Price consistency (list → detail) | Laptop Stand: $45.99 vs $49.99 | 2
2 | Profile completeness | Email missing — only name shown | 2
3 | Settings navigation | Help Center missing from Support section | 2
4 | Description vs specifications | "breathable mesh" vs "Leather Upper" | 3
5 | Cross-screen price sync | Yoga Mat: $39.99 vs $34.99 | 4

📦 Installation

Install the public React Native SDK:

npm install @mobileai/react-native

Requires React Native >=0.83.0 <0.84.0 and works with Expo managed workflow through a development build or prebuild. The base package includes native modules for screenshot capture and the elevated overlay, so Expo Go is not supported after installing it.

npx expo prebuild
npx expo run:ios
npx expo run:android

For React Native CLI apps, run the normal native install/build step after installing the package, such as cd ios && pod install.
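For example, a typical React Native CLI sequence looks like this (adjust to your project):

npm install @mobileai/react-native
cd ios && pod install && cd ..
npx react-native run-ios      # or: npx react-native run-android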

AI Installation Skill

This package includes an AI-readable installation skill for coding agents and IDE assistants. It gives an AI assistant the exact workflow for adding @mobileai/react-native to Expo or React Native apps, including screen mapping, <AIAgent> wiring, optional voice dependencies, and common native-install fixes.

Skill location:

skills/install-mobileai-react-native/SKILL.md

After installing the npm package, the same skill is available at:

node_modules/@mobileai/react-native/skills/install-mobileai-react-native/SKILL.md

Point your AI coding assistant at that folder and ask:

Use the install-mobileai-react-native skill to install @mobileai/react-native in this app.

Screenshot Capture

📸 Screenshots — for image/video content understanding

react-native-view-shot is a required native dependency for screenshot capture and is included with @mobileai/react-native, so you do not need to add it separately. Rebuild the native app after install so it can be autolinked.

🎙️ Speech-to-Text in Text Mode — dictate messages instead of typing
npx expo install expo-speech-recognition

Automatically detected. No extra config needed — a mic icon appears in the text chat bar, letting users speak their message instead of typing. This is separate from voice mode.

🎤 Voice Mode — real-time bidirectional voice agent
npm install react-native-audio-api

Expo Managed — add to app.json:

{
  "expo": {
    "android": { "permissions": ["RECORD_AUDIO", "MODIFY_AUDIO_SETTINGS"] },
    "ios": { "infoPlist": { "NSMicrophoneUsageDescription": "Required for voice chat with AI assistant" } }
  }
}

Then rebuild: npx expo prebuild && npx expo run:android (or run:ios)

Expo Bare / React Native CLI — add RECORD_AUDIO + MODIFY_AUDIO_SETTINGS to AndroidManifest.xml and NSMicrophoneUsageDescription to Info.plist, then rebuild.

Hardware echo cancellation (AEC) is automatically enabled — no extra setup.

🔐 Optional Consent Persistence — remember AI consent across app restarts

By default, AI consent is kept only for the current app session. If your app sets consent={{ persist: true }} and wants the consent decision remembered after restart, install AsyncStorage:

npm install @react-native-async-storage/async-storage

This dependency is only used for consent persistence. Tickets, discovery tooltip state, telemetry, device identity, and conversation history do not require AsyncStorage.
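A minimal sketch of the opt-in (the consent object may accept more options than shown here):

<AIAgent
  analyticsKey="mobileai_pub_xxx"
  consent={{ persist: true }}   // remembered across restarts once AsyncStorage is installed
  navRef={navRef}
>
  <App />
</AIAgent>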

🚀 Quick Start

1. Generate the Screen Map

Add one line to your metro.config.js — the AI gets a map of every screen in your app, auto-generated on each dev start:

// metro.config.js
require('@mobileai/react-native/generate-map').autoGenerate(__dirname);

Or generate it manually anytime:

npx @mobileai/react-native generate-map

Without this, the AI can only see the currently mounted screen — it has no idea what other screens exist or how to reach them. Example: "Write a review for the Laptop Stand" — the AI sees the Home screen but doesn't know a WriteReview screen exists 3 levels deep. With a map, it sees every screen in your app and knows exactly how to get there: Home → Products → Detail → Reviews → WriteReview.

2. Wrap Your App

If you use a MobileAI publishable key, the SDK now defaults to the hosted MobileAI text and voice proxies automatically. You only need to pass proxyUrl and voiceProxyUrl when you want to override them with your own backend.

React Navigation

import { AIAgent } from '@mobileai/react-native';
import { NavigationContainer, useNavigationContainerRef } from '@react-navigation/native';
import screenMap from './ai-screen-map.json'; // auto-generated by step 1

export default function App() {
  const navRef = useNavigationContainerRef();

  return (
    <AIAgent
      // Your MobileAI Dashboard ID
      // This now auto-configures the hosted MobileAI text + voice proxies too.
      analyticsKey="mobileai_pub_xxxxxxxx"

      navRef={navRef}
      screenMap={screenMap} // optional but recommended
    >
      <NavigationContainer ref={navRef}>
        {/* Your existing screens β€” zero changes needed */}
      </NavigationContainer>
    </AIAgent>
  );
}

Expo Router

In your root layout (app/_layout.tsx):

import { AIAgent } from '@mobileai/react-native';
import { Slot, useNavigationContainerRef } from 'expo-router';
import screenMap from './ai-screen-map.json'; // auto-generated by step 1

export default function RootLayout() {
  const navRef = useNavigationContainerRef();

  return (
    <AIAgent
      // Hosted MobileAI proxies are inferred automatically from analyticsKey
      analyticsKey="mobileai_pub_xxxxxxxx"
      navRef={navRef}
      screenMap={screenMap}
    >
      <Slot />
    </AIAgent>
  );
}

Choose Your Provider

The examples above use Gemini (default). To use OpenAI for text mode, add the provider prop. Voice mode is not supported with OpenAI.

<AIAgent
  provider="openai"
  apiKey="YOUR_OPENAI_API_KEY"
  // model="gpt-4.1-mini"  ← default, or use any OpenAI model
  navRef={navRef}
>
  {/* Same app, different brain */}
</AIAgent>

A floating chat bar appears automatically. Ask the AI to answer questions, guide workflows, or perform approved app actions depending on the mode you choose.

Hosted MobileAI Defaults

For the standard MobileAI Cloud setup, this is enough:

<AIAgent analyticsKey="mobileai_pub_xxxxxxxx" navRef={navRef} />

Only pass explicit proxy props when:

  • you want to use your own backend proxy
  • you want a dedicated voice proxy
  • you are self-hosting the MobileAI backend

For a custom production backend, pass your text proxy URL directly:

<AIAgent
  proxyUrl="https://myapp.example.com/api/mobileai/text"
  proxyHeaders={{ Authorization: `Bearer ${userToken}` }}
  navRef={navRef}
/>

If Voice Mode uses a separate WebSocket backend, add voiceProxyUrl:

<AIAgent
  proxyUrl="https://myapp.example.com/api/mobileai/text"
  voiceProxyUrl="wss://voice.myapp.example.com/mobileai/voice"
  navRef={navRef}
/>

voiceProxyUrl falls back to proxyUrl when it is not set.

Companion Mode — Screen-Aware Guidance Without App Control

Set interactionMode="companion" when you want the assistant to look at the current screen and guide the user, but never control the app.

<AIAgent interactionMode="companion" analyticsKey="mobileai_pub_xxx" navRef={navRef}>
  <App />
</AIAgent>

Companion mode:

  • Reads the current screen structure and screen map
  • Answers questions about what the user sees
  • Explains confusing UI states, visible options, disabled controls, and what matters next
  • Helps with support triage, comparisons, recommendations, and form validation flows
  • Gives step-by-step guidance using visible labels when steps are actually useful
  • Can use query_knowledge, query_data, and other non-UI custom tools
  • Cannot tap, type, scroll, navigate, submit, or use UI-control tools

Use this when buyers or users want a safer assistant that helps them understand and decide, not just a bot that clicks for them.

Knowledge-Only Mode — AI Assistant Without App Actions

Set enableUIControl={false} for a lightweight FAQ / support assistant. Single LLM call, ~70% fewer tokens:

<AIAgent enableUIControl={false} knowledgeBase={KNOWLEDGE} />

Feature | Full Agent (default) | Knowledge-Only
UI analysis | ✅ Full structure read | ❌ Skipped
Tokens per request | ~500-2000 | ~200
Agent loop | Up to 25 steps | Single call
Tools available | 7 | 2 (done, query_knowledge)

πŸ›‘οΈ Guardrails β€” Delegated Assistance, Not User Impersonation

MobileAI is not designed as unrestricted UI automation. It is delegated assistance under app-defined limits: the assistant can read context, use structured data, guide users, and perform approved actions only through the SDK runtime.

By default, the assistant must get approval before app actions such as navigating, tapping, typing, scrolling, or selecting controls. Sensitive flows should also use code-level confirmation gates before the final commit.

// Default setup:
<AIAgent analyticsKey="mobileai_pub_xxx" navRef={navRef}>
  <App />
</AIAgent>

What the assistant can help with after approval:

  • Navigating between screens and tabs
  • Scrolling to find content
  • Typing into form fields
  • Selecting options and filters
  • Adding items to cart

What should require explicit confirmation:

  • Placing an order / completing a purchase
  • Submitting a form that sends data to a server
  • Deleting anything (account, item, message)
  • Confirming a payment or transaction
  • Saving account/profile changes

Full Autonomy Is Explicit Opt-In

<AIAgent interactionMode="autopilot" />

Use autopilot only for trusted, low-risk workflows where confirmations are intentionally unwanted. Avoid it for payments, deletion, consent, security settings, regulated data, or account-level changes.

Mark Sensitive Controls as Critical

For sensitive controls, add aiConfirm={true}. This adds a code-level confirmation gate before the assistant can interact with the element:

// These elements require confirmation before the assistant can touch them
<Pressable aiConfirm onPress={deleteAccount}>
  <Text>Delete Account</Text>
</Pressable>

<Pressable aiConfirm onPress={placeOrder}>
  <Text>Place Order</Text>
</Pressable>

<TextInput aiConfirm placeholder="Credit card number" />

aiConfirm works on any interactive element: Pressable, TextInput, Slider, Picker, Switch, DatePicker.

💡 Dev tip: In __DEV__ mode, the SDK logs a reminder to add aiConfirm to critical elements after each copilot task.

Safety Model

Layer | Mechanism | Developer effort
Approval gate | User approves before app actions | Built in
aiConfirm prop | Code blocks sensitive elements until confirmed | Add prop to critical controls
aiIgnore prop | Hide controls from the assistant entirely | Add prop to excluded controls
Stale-target protection | Re-checks targets before tapping after screen changes | Built in
Content masking | Sanitize screen content before it reaches the LLM | Configure transformScreenContent
Traceability | Records actions and conversation context | Built in / dashboard-backed
Structured data | Use useData when direct state lookup is better than UI navigation | Optional

💬 Human Support Mode

Transform the AI assistant into a production-grade support system. It can answer from knowledge, query structured data, guide users through the app, perform approved actions, and escalate to a live human agent when it can't help.

import { buildSupportPrompt, createEscalateTool } from '@mobileai/react-native';

<AIAgent
  analyticsKey="mobileai_pub_xxx" // required for MobileAI escalation
  instructions={{
    system: buildSupportPrompt({
      enabled: true,
      greeting: {
        message: "Hi! 👋 How can I help you today?",
        agentName: "Support",
      },
      quickReplies: [
        { label: "Track my order", icon: "📦" },
        { label: "Cancel order", icon: "❌" },
        { label: "Talk to a human", icon: "👀" },
      ],
      escalation: { provider: 'mobileai' },
      csat: { enabled: true },
    }),
  }}
  customTools={{ escalate: createEscalateTool({ provider: 'mobileai' }) }}
  userContext={{
    userId: user.id,
    name: user.name,
    email: user.email,
    plan: 'pro',
  }}
>
  <App />
</AIAgent>

What Happens on Escalation

  • AI creates a ticket in the MobileAI Dashboard inbox
  • User receives a real-time live chat thread (WebSocket)
  • Support agent replies — user sees messages instantly
  • Ticket is closed when resolved — a CSAT survey appears

Escalation Providers

Provider | What happens
'mobileai' | Ticket → MobileAI Dashboard inbox + WebSocket live chat
'custom' | Calls your onEscalate callback — wire to Intercom, Zendesk, etc.

// Custom provider — bring your own live chat:
createEscalateTool({
  provider: 'custom',
  onEscalate: (context) => {
    Intercom.presentNewConversation();
    // context includes: userId, message, screenName, chatHistory
  },
})

User Context

Pass user identity to the escalation ticket for agent visibility in the dashboard:

<AIAgent
  userContext={{
    userId: 'usr_123',
    name: 'Ahmed Hassan',
    email: 'ahmed@example.com',
    plan: 'pro',
    custom: { region: 'cairo', language: 'ar' },
  }}
  pushToken={expoPushToken}      // for offline support reply notifications
  pushTokenType="expo"            // 'fcm' | 'expo' | 'apns'
/>

πŸ—ΊοΈ Screen Mapping β€” Navigation Intelligence

By default, the AI navigates by reading what's on screen and tapping visible elements. Screen mapping gives the AI a complete map of every screen and how they connect β€” via static analysis of your source code (AST). No API key needed, runs in ~2 seconds.

Setup (one line)

Add to your metro.config.js β€” the screen map auto-generates every time Metro starts:

// metro.config.js
require('@mobileai/react-native/generate-map').autoGenerate(__dirname);

// ... rest of your Metro config

Then pass the generated map to <AIAgent>:

import screenMap from './ai-screen-map.json';

<AIAgent screenMap={screenMap} navRef={navRef}>
  <App />
</AIAgent>

That's it. Works with both Expo Router and React Navigation β€” auto-detected.

What It Gives the AI

Without Screen Map | With Screen Map
AI sees only the current screen | AI knows every screen in your app
Must explore to find features | Plans the full navigation path upfront
Deep screens may be unreachable | Knows each screen's navigatesTo links
No knowledge of dynamic routes | Understands item/[id], category/[id] patterns

Disable Without Removing

<AIAgent screenMap={screenMap} useScreenMap={false} />

Advanced: Watch mode, CLI options, and npm scripts

Manual generation:

npx @mobileai/react-native generate-map

Watch mode — auto-regenerates on file changes:

npx @mobileai/react-native generate-map --watch

npm scripts — auto-run before start/build:

{
  "scripts": {
    "generate-map": "npx @mobileai/react-native generate-map",
    "prestart": "npm run generate-map",
    "prebuild": "npm run generate-map"
  }
}

Flag | Description
--watch, -w | Watch for file changes and auto-regenerate
--dir=./path | Custom project directory

💡 The generated ai-screen-map.json is committed to your repo — no runtime cost.

🧠 Knowledge Base

Give the AI domain knowledge it can query on demand — policies, FAQs, product details. Uses a query_knowledge tool to fetch only relevant entries (no token waste).

Static Array

import type { KnowledgeEntry } from '@mobileai/react-native';

const KNOWLEDGE: KnowledgeEntry[] = [
  {
    id: 'shipping',
    title: 'Shipping Policy',
    content: 'Free shipping on orders over $75. Standard: 5-7 days. Express: 2-3 days.',
    tags: ['shipping', 'delivery'],
  },
  {
    id: 'returns',
    title: 'Return Policy',
    content: '30-day returns on all items. Refunds in 5-7 business days.',
    tags: ['return', 'refund'],
    screens: ['product/[id]', 'order-history'], // only surface on these screens
  },
];

<AIAgent knowledgeBase={KNOWLEDGE} />

Dynamic Retrieval

<AIAgent
  knowledgeBase={{
    retrieve: async (query: string, screenName?: string) => {
      const results = await fetch(`/api/knowledge?q=${query}&screen=${screenName}`);
      return results.json();
    },
  }}
/>

🔌 MCP Bridge Setup — Connect AI Editors to Your App

Architecture

┌──────────────────┐                    ┌──────────────────┐    WebSocket       ┌──────────────────┐
│  Antigravity     │  Streamable HTTP   │                  │                    │                  │
│  Claude Desktop  │ ◄────────────────► │ @mobileai/       │ ◄────────────────► │  Your React      │
│  or any MCP      │    (port 3100)     │  mcp-server      │    (port 3101)     │  Native App      │
│  compatible AI   │  + Legacy SSE      │                  │                    │                  │
└──────────────────┘                    └──────────────────┘                    └──────────────────┘

Setup in 3 Steps

1. Start the MCP bridge — no install needed:

npx @mobileai/mcp-server

2. Connect your React Native app:

<AIAgent
  apiKey="YOUR_API_KEY"
  mcpServerUrl="ws://localhost:3101"
/>

3. Connect your AI:

Google Antigravity

Add to ~/.gemini/antigravity/mcp_config.json:

{
  "mcpServers": {
    "mobile-app": {
      "command": "npx",
      "args": ["@mobileai/mcp-server"]
    }
  }
}

Click Refresh in MCP Store. You'll see mobile-app with 2 tools: execute_task and get_app_status.

Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "mobile-app": {
      "url": "http://localhost:3100/mcp/sse"
    }
  }
}

Other MCP Clients
  • Streamable HTTP: http://localhost:3100/mcp
  • Legacy SSE: http://localhost:3100/mcp/sse

MCP Tools

Tool | Description
execute_task(command) | Send a natural language command to the app
get_app_status() | Check if the React Native app is connected

Environment Variables

Variable | Default | Description
MCP_PORT | 3100 | HTTP port for MCP clients
WS_PORT | 3101 | WebSocket port for the React Native app
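For example, to run the bridge on non-default ports (the app's mcpServerUrl must then point at the matching WebSocket port, e.g. ws://localhost:4101):

MCP_PORT=4100 WS_PORT=4101 npx @mobileai/mcp-server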

🔌 API Reference

<AIAgent> Props

Core

  • apiKey (string): API key for your provider (prototyping only — use proxyUrl in production).
  • provider ('gemini' | 'openai', default 'gemini'): LLM provider for text mode.
  • proxyUrl (string, default: hosted MobileAI text proxy when analyticsKey is set): Backend proxy URL (production). Routes all LLM traffic through your server.
  • proxyHeaders (Record<string, string>): Auth headers for proxy (e.g., Authorization: Bearer ${token}).
  • voiceProxyUrl (string, default: hosted MobileAI voice proxy when analyticsKey is set; otherwise falls back to proxyUrl): Dedicated proxy for Voice Mode WebSockets.
  • voiceProxyHeaders (Record<string, string>): Auth headers for voice proxy.
  • model (string, default: provider default): Model name (e.g. gemini-2.5-flash, gpt-4.1-mini).
  • navRef (NavigationContainerRef): Navigation ref for auto-navigation.
  • children (ReactNode): Your app — zero changes needed inside.

Behavior

  • interactionMode ('companion' | 'copilot' | 'autopilot', default 'copilot'): Companion: screen-aware guidance with non-UI tools allowed and UI-control tools blocked. Copilot (default): AI pauses before app actions and irreversible steps. Autopilot: no-confirmation mode for trusted low-risk workflows only.
  • showDiscoveryTooltip (boolean, default true): Show one-time animated tooltip on FAB explaining AI capabilities. Dismissed after 6s or first tap.
  • maxSteps (number, default 25): Max agent steps per task.
  • maxTokenBudget (number): Max total tokens before auto-stopping the agent loop.
  • maxCostUSD (number): Max estimated cost (USD) before auto-stopping.
  • stepDelay (number): Delay between agent steps in ms.
  • enableUIControl (boolean, default true): When false, AI becomes knowledge-only (faster, fewer tokens, no screen-aware loop). Prefer interactionMode="companion" when you want screen-aware guidance without app control.
  • enableVoice (boolean, default false): Show voice mode tab.
  • showChatBar (boolean, default true): Show the floating chat bar.
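Taken together, the budget props above can cap a runaway agent loop; a minimal sketch with illustrative values:

<AIAgent
  analyticsKey="mobileai_pub_xxx"
  navRef={navRef}
  maxSteps={15}            // stop the agent loop after 15 steps
  maxTokenBudget={20000}   // or once total tokens exceed ~20k
  maxCostUSD={0.05}        // or once estimated cost exceeds $0.05
  stepDelay={250}          // wait 250 ms between steps
>
  <App />
</AIAgent>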

Navigation

  • screenMap (ScreenMap): Pre-generated screen map from the generate-map CLI.
  • useScreenMap (boolean, default true): Set false to disable the screen map without removing the prop.
  • router ({ push, replace, back }): Expo Router instance (from useRouter()).
  • pathname (string): Current pathname (from usePathname() — Expo Router).
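For Expo Router apps, router and pathname are typically wired like this (a sketch based on the descriptions above):

import { AIAgent } from '@mobileai/react-native';
import { Slot, useRouter, usePathname, useNavigationContainerRef } from 'expo-router';
import screenMap from './ai-screen-map.json';

export default function RootLayout() {
  const navRef = useNavigationContainerRef();
  const router = useRouter();        // Expo Router instance
  const pathname = usePathname();    // current route for screen-aware context

  return (
    <AIAgent
      analyticsKey="mobileai_pub_xxxxxxxx"
      navRef={navRef}
      screenMap={screenMap}
      router={router}
      pathname={pathname}
    >
      <Slot />
    </AIAgent>
  );
}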

AI

  • instructions ({ system?, getScreenInstructions? }): Custom system prompt + per-screen instructions.
  • customTools (Record<string, ToolDefinition | null>): Add custom tools or remove built-in ones (set to null).
  • knowledgeBase (KnowledgeEntry[] | { retrieve }): Domain knowledge the AI can query via query_knowledge.
  • knowledgeMaxTokens (number, default 2000): Max tokens for knowledge results.
  • transformScreenContent ((content: string) => string): Transform/mask screen content before the LLM sees it.
  • blocks (Array<BlockDefinition | React.ComponentType<any>>): Register built-in/custom rich blocks for chat and screen injection.
  • richUITheme (Partial<RichUITheme>): Global rich UI theme overrides.
  • richUISurfaceThemes ({ chat?: Partial<RichUITheme>, zone?: Partial<RichUITheme>, support?: Partial<RichUITheme> }): Optional per-surface theme overrides.
  • blockActionHandlers (Record<string, (payload: Record<string, unknown>) => void>): Register block action handlers for button/toggle/chip interactions.

Security

  • interactiveBlacklist (React.RefObject<any>[]): Refs of elements the AI must NOT interact with.
  • interactiveWhitelist (React.RefObject<any>[]): If set, AI can ONLY interact with these elements.
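A sketch of ref-based gating (the screen content and handlers here are placeholders):

import { useRef } from 'react';
import { Pressable, Text } from 'react-native';
import { AIAgent } from '@mobileai/react-native';

export default function App({ navRef }) {
  const adminButtonRef = useRef(null);

  return (
    <AIAgent
      analyticsKey="mobileai_pub_xxx"
      navRef={navRef}
      interactiveBlacklist={[adminButtonRef]}   // the AI never touches this element
    >
      {/* ...rest of the app... */}
      <Pressable ref={adminButtonRef} onPress={() => {/* open admin panel */}}>
        <Text>Admin Panel</Text>
      </Pressable>
    </AIAgent>
  );
}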

Support

  • userContext ({ userId?, name?, email?, plan?, custom? }): Logged-in user identity — attached to escalation tickets.
  • pushToken (string): Push token for offline support reply notifications.
  • pushTokenType ('fcm' | 'expo' | 'apns'): Type of the push token.

Proactive Help

  • proactiveHelp (ProactiveHelpConfig): Detects user hesitation and shows a contextual help nudge.

<AIAgent
  proactiveHelp={{
    enabled: true,
    pulseAfterMinutes: 2,        // subtle FAB pulse to catch attention
    badgeAfterMinutes: 4,        // badge: "Need help with this screen?"
    badgeText: "Need help?",
    dismissForSession: true,     // once dismissed, won't show again this session
    generateSuggestion: (screen) => {
      if (screen === 'Checkout') return 'Having trouble with checkout?';
      return undefined;
    },
  }}
/>

Analytics

  • analyticsKey (string): Publishable key (mobileai_pub_xxx) — enables auto-analytics and, by default, the hosted MobileAI text/voice proxies.
  • analyticsProxyUrl (string): Enterprise: route events through your backend.
  • analyticsProxyHeaders (Record<string, string>): Auth headers for analytics proxy.

MCP

  • mcpServerUrl (string): WebSocket URL for the MCP bridge (e.g. ws://localhost:3101).

Lifecycle & Callbacks

  • onResult ((result) => void): Called when agent finishes a task.
  • onBeforeTask (() => void): Called before task execution starts.
  • onAfterTask ((result) => void): Called after task completes.
  • onBeforeStep ((stepCount) => void): Called before each agent step.
  • onAfterStep ((history) => void): Called after each step (with full step history).
  • onTokenUsage ((usage) => void): Token usage data per step.
  • onAskUser ((question) => Promise<string>): Custom handler for ask_user — agent blocks until resolved.
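A sketch wiring a few of these callbacks (the exact shape of the result and usage objects may differ):

<AIAgent
  analyticsKey="mobileai_pub_xxx"
  navRef={navRef}
  onBeforeTask={() => console.log('AI task starting')}
  onTokenUsage={(usage) => console.log('tokens this step:', usage)}
  onResult={(result) => console.log('task finished:', result)}
  onAskUser={async (question) => {
    // Resolve with the user's answer; the agent blocks until this promise settles.
    return await showQuestionModal(question);   // showQuestionModal is your own UI helper (hypothetical)
  }}
>
  <App />
</AIAgent>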

Theming

  • accentColor (string): Quick accent color for FAB, send button, active states.
  • theme (ChatBarTheme): Full chat bar theme override.
  • debug (boolean, default false): Enable SDK debug logging.

🎨 Customization

// Quick — one color:
<AIAgent accentColor="#6C5CE7" />

// Full theme:
<AIAgent
  accentColor="#6C5CE7"
  theme={{
    backgroundColor: 'rgba(44, 30, 104, 0.95)',
    inputBackgroundColor: 'rgba(255, 255, 255, 0.12)',
    textColor: '#ffffff',
    successColor: 'rgba(40, 167, 69, 0.3)',
    errorColor: 'rgba(220, 53, 69, 0.3)',
  }}
/>

useAction — Custom AI-Callable Business Logic

Register isolated, headless logic for the AI to call (e.g., API requests, checkouts). The handler is kept automatically fresh internally, so you never get stuck with a stale closure. The optional deps array re-registers the action so the AI sees an updated description.

import { useAction } from '@mobileai/react-native';

function CartScreen() {
  const { cart, clearCart, getTotal } = useCart();

  // Passing [cart.length] ensures the AI receives the live item count in its context!
  useAction(
    'checkout',
    `Place the order and checkout (${cart.length} items for $${getTotal()})`,
    {},
    async () => {
      if (cart.length === 0) return { success: false, message: 'Cart is empty' };

      // Human-in-the-loop: AI pauses until user taps Confirm
      return new Promise((resolve) => {
        Alert.alert('Confirm Order', `Place order for $${getTotal()}?`, [
          { text: 'Cancel', onPress: () => resolve({ success: false, message: 'User denied.' }) },
          { text: 'Confirm', onPress: () => { clearCart(); resolve({ success: true, message: `Order placed!` }); } },
        ]);
      });
    },
    [cart.length, getTotal]
  );
}

useAI — Headless / Custom Chat UI

import { useAI } from '@mobileai/react-native';

function CustomChat() {
  const { send, isLoading, status, messages } = useAI();

  const summary = (msg) =>
    msg.content
      .map((node) => (node.type === 'text' ? node.content : `[${node.blockType}]`))
      .join('\n');

  return (
    <View style={{ flex: 1 }}>
      <FlatList data={messages} renderItem={({ item }) => <Text>{summary(item)}</Text>} />
      {isLoading && <Text>{status}</Text>}
      <TextInput onSubmitEditing={(e) => send(e.nativeEvent.text)} placeholder="Ask the AI..." />
    </View>
  );
}

Chat history persists across navigation. Override settings per-screen:

const { send } = useAI({
  enableUIControl: false,
  onResult: (result) => router.push('/(tabs)/chat'),
});

Rich UI in Chat and on Screen

The SDK can answer in three surfaces:

  • plain text for short conversational or one-line answers
  • chat UI for structured content that should live in the transcript
  • screen UI for temporary contextual UI placed inside an AIZone

As the app developer, your job is to define the UI surfaces the AI is allowed to use. You do that by registering block components on AIAgent, optionally adding themed screen zones with AIZone, and rendering rich message content correctly if you build your own chat UI.

Chat UI: Reusable UI Inside the Transcript

Use chat UI when the answer should remain part of the conversation and be easy to scan again later.

Typical use cases:

  • product, dish, offer, or plan recommendations
  • comparing 2-3 options with tradeoffs
  • structured support answers such as fees, ETA, status, refunds, or policy details
  • next-step guidance after an issue or decision point
  • lightweight in-chat confirmations or preference capture

Built-in chat block interfaces:

  • ProductCard for a concrete entity such as a dish, product, offer, listing, or plan
  • ComparisonCard for side-by-side choices and tradeoffs
  • FactCard for support, FAQ, status, and policy information
  • ActionCard for guided next steps and recommended actions
  • FormCard for lightweight choices, confirmations, and inline inputs

Screen UI: Temporary UI Anchored to a Screen Region

Use screen UI only when placement matters more than transcript history.

Typical use cases:

  • checkout clarification near price or payment UI
  • decision support attached to the current item
  • inline form help or recovery guidance near the blocked step
  • contextual warnings or explanations that should appear next to the relevant area

For most apps, chat UI should be the default rich surface. Add screen UI only where in-place help is clearly more useful than a chat response.

Setup: Register Rich Blocks on AIAgent

Register the blocks you want the AI to use across chat and screen surfaces:

import {
  AIAgent,
  ProductCard,
  ComparisonCard,
  FactCard,
  ActionCard,
  FormCard,
} from '@mobileai/react-native';
import {
  NavigationContainer,
  useNavigationContainerRef,
} from '@react-navigation/native';

export default function App() {
  const navRef = useNavigationContainerRef();

  return (
    <AIAgent
      analyticsKey="mobileai_pub_xxxxxxxx"
      navRef={navRef}
      blocks={[ProductCard, ComparisonCard, FactCard, ActionCard, FormCard]}
      richUITheme={{
        colors: {
          blockSurface: '#141327',
          primaryText: '#161616',
          inverseText: '#ffffff',
          accent: '#ff6a6a',
        },
      }}
      blockActionHandlers={{
        choose_option: (payload) => {
          console.log('User chose option', payload);
        },
        submit_preferences: (payload) => {
          console.log('Submitted preferences', payload);
        },
      }}
    >
      <NavigationContainer ref={navRef}>
        <AppNavigator />
      </NavigationContainer>
    </AIAgent>
  );
}

What this enables:

  • chat cards inside the built-in MobileAI chat UI
  • screen injection into any matching AIZone
  • theming for built-in block components
  • button, chip, toggle, and form actions through blockActionHandlers

Setup: Add AIZone Only Where On-Screen Placement Makes Sense

If you want the AI to place UI inside the current screen, wrap the relevant area in an AIZone and whitelist the allowed blocks:

import { AIZone, FactCard, ProductCard } from '@mobileai/react-native';

function DishDetailScreen() {
  return (
    <AIZone
      id="dish-detail-summary"
      allowInjectBlock
      interventionEligible
      proactiveIntervention={false}
      blocks={[FactCard, ProductCard]}
    >
      <DishDetailContent />
    </AIZone>
  );
}

Use AIZone for narrow, contextual placement. It is not required for chat cards.

Setup: Custom Chat UI

If you build your own chat surface with useAI(), render message content with RichContentRenderer so text and blocks appear correctly:

import { FlatList } from 'react-native';
import { RichContentRenderer, useAI } from '@mobileai/react-native';

function CustomChatScreen() {
  const { messages } = useAI();

  return (
    <FlatList
      data={messages}
      keyExtractor={(item, index) => `${item.timestamp}-${index}`}
      renderItem={({ item }) => (
        <RichContentRenderer
          content={item.content}
          surface="chat"
          isUser={item.role === 'user'}
        />
      )}
    />
  );
}

This is the only extra setup needed for custom chat rendering. The built-in MobileAI chat bar already renders rich content automatically.

Setup: Custom Block Interfaces

You can define your own block interfaces when the built-ins are not specific enough for your app.

Use custom blocks for domain-specific UI such as:

  • restaurant or marketplace cards
  • order status summaries
  • loyalty or rewards UI
  • checkout warnings
  • onboarding or account setup helpers

Custom blocks are plain React components registered as BlockDefinition objects:

import type { BlockDefinition } from '@mobileai/react-native';
import { Text, View } from 'react-native';

function RewardCard(props: {
  title: string;
  points: number;
  description?: string;
}) {
  return (
    <View>
      <Text>{props.title}</Text>
      <Text>{props.points} points</Text>
      {props.description ? <Text>{props.description}</Text> : null}
    </View>
  );
}

const RewardCardBlock: BlockDefinition = {
  name: 'RewardCard',
  component: RewardCard,
  allowedPlacements: ['chat', 'zone'],
  interventionEligible: true,
  interventionType: 'decision_support',
  propSchema: {
    title: { type: 'string', required: true },
    points: { type: 'number', required: true },
    description: { type: 'string' },
  },
};

<AIAgent blocks={[ProductCard, FactCard, RewardCardBlock]}>
  <App />
</AIAgent>

The AI only sees the block name and prop schema, not your component source. Keep block props explicit and serializable.

📊 Zero-Config Analytics — Auto-Capture Every Tap

Just add analyticsKey — every button tap, screen navigation, and session is tracked automatically. It also enables the hosted MobileAI text and voice proxies by default unless you override them. Zero code changes to your app components.

<AIAgent
  analyticsKey="mobileai_pub_abc123"   // ← enables full auto-capture
  navRef={navRef}
>
  <App />
</AIAgent>

What's captured automatically:

Event | Data | How
user_interaction | Button label, screen, coordinates, actor: 'user' | Root touch interceptor
screen_view | Screen name, previous screen | Navigation ref listener
session_start | Device, OS, SDK version | On mount
session_end | Duration, event count | On background
agent_request | User query | On AI task start
agent_step | Tool name, args, result | On each AI action
agent_complete | Success, steps, cost | On AI task end

AI vs User Action Differentiation

When the AI agent taps a button on behalf of the user, those taps are not counted as user_interaction events — they're already captured as agent_step events with full context.

This means your funnels and retention charts always show real human behaviour, while the AI's actions are separately attributed for ROI analysis. No other analytics SDK can offer this because they don't own the app root.

Event | Who | Dashboard use
user_interaction { actor: 'user' } | Human only | Funnels, retention, journeys
agent_step { tool: 'tap' } | AI only | Agent ROI, resolution rate

Custom business events — track what matters to you:

import { MobileAI } from '@mobileai/react-native';

MobileAI.track('purchase_complete', { order_id: 'ord_1', total: 29.99 });
MobileAI.identify('user_123', { plan: 'pro' });

Enterprise: use analyticsProxyUrl to route events through your own backend — zero keys in the app bundle.
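A sketch of that enterprise setup (the endpoint URL and header value are placeholders):

<AIAgent
  analyticsProxyUrl="https://analytics.mycompany.example.com/mobileai/events"
  analyticsProxyHeaders={{ Authorization: `Bearer ${sessionToken}` }}
  navRef={navRef}
>
  <App />
</AIAgent>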

🔒 Security & Production

Backend Proxy — Keep API Keys Secure

Use this only when you want to override the default hosted MobileAI proxy behavior or route traffic through your own backend.

<AIAgent
  proxyUrl="https://myapp.vercel.app/api/gemini"
  proxyHeaders={{ Authorization: `Bearer ${userToken}` }}
  voiceProxyUrl="https://voice-server.render.com"  // only if text proxy is serverless
  navRef={navRef}
>

voiceProxyUrl falls back to proxyUrl if not set. Only needed when your text API is on a serverless platform that can't hold WebSocket connections.

Next.js Text Proxy Example
import { NextResponse } from 'next/server';

export async function POST(req: Request) {
  const body = await req.json();
  const response = await fetch('https://generativelanguage.googleapis.com/...', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'x-goog-api-key': process.env.GEMINI_API_KEY! },
    body: JSON.stringify(body),
  });
  return NextResponse.json(await response.json());
}

Express WebSocket Proxy (Voice Mode)
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();
const geminiProxy = createProxyMiddleware({
  target: 'https://generativelanguage.googleapis.com',
  changeOrigin: true,
  ws: true,
  pathRewrite: (path) => `${path}${path.includes('?') ? '&' : '?'}key=${process.env.GEMINI_API_KEY}`,
});

app.use('/v1beta/models', geminiProxy);
const server = app.listen(3000);
server.on('upgrade', geminiProxy.upgrade);

Element Gating — Hide Elements from AI

// AI will never see or interact with this element:
<Pressable aiIgnore={true}><Text>Admin Panel</Text></Pressable>

// In copilot mode, AI must confirm before touching this element:
<Pressable aiConfirm={true} onPress={deleteAccount}>
  <Text>Delete Account</Text>
</Pressable>

Content Masking — Sanitize Before LLM Sees It

<AIAgent transformScreenContent={(c) => c.replace(/\b\d{13,16}\b/g, '****-****-****-****')} />
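A slightly fuller sketch that also masks email addresses (the patterns are illustrative, not exhaustive):

const maskSensitiveContent = (content: string) =>
  content
    .replace(/\b\d{13,16}\b/g, '****-****-****-****')       // card-like numbers
    .replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '***@***');        // email addresses

<AIAgent transformScreenContent={maskSensitiveContent} navRef={navRef} />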

Screen-Specific Instructions

<AIAgent instructions={{
  system: 'You are a food delivery assistant.',
  getScreenInstructions: (screen) => screen === 'Cart' ? 'Confirm total before checkout.' : undefined,
}} />

Lifecycle Hooks

Hook | When
onBeforeTask | Before task execution starts
onBeforeStep | Before each agent step
onAfterStep | After each step (with full history)
onAfterTask | After task completes (success or failure)

🧩 AIZone — Contextual AI Regions

AIZone marks specific sections of your UI so the AI can operate within them with special capabilities: simplify cluttered areas, render rich blocks, or highlight elements.

import { AIZone, FactCard, ProductCard } from '@mobileai/react-native';

// Allow AI to simplify this zone if it's too cluttered
<AIZone id="product-details" allowSimplify>
  <View>
    <Text aiPriority="high">Price: $29.99</Text>
    <Text aiPriority="low">SKU: ABC-123</Text>
    <Text aiPriority="low">Weight: 500g</Text>
  </View>
</AIZone>

// Allow AI to inject contextual cards from a safe template whitelist
<AIZone
  id="checkout-summary"
  allowInjectBlock
  allowHighlight
  blocks={[FactCard, ProductCard]}
  proactiveIntervention={false}
>
  <CheckoutSummary />
</AIZone>

// Deprecated migration path (still supported):
<AIZone id="legacy" allowInjectCard templates={[InfoCard, ReviewSummary]}>
  <LegacyPanel />
</AIZone>

aiPriority Attribute

Tag any element with aiPriority to control AI visibility:

Value | Effect
"high" | Always rendered — surfaced first in AI context
"low" | Hidden when AI calls simplify_zone() on the enclosing AIZone

AIZone Props

  • id (string): Unique zone identifier the AI uses to target operations.
  • allowSimplify (boolean): AI can call simplify_zone(id) to hide aiPriority="low" elements.
  • allowHighlight (boolean): AI can visually highlight elements inside this zone.
  • allowInjectHint (boolean): AI can inject a contextual text hint into this zone.
  • allowInjectBlock (boolean): AI can render registered rich blocks into this zone.
  • allowInjectCard (boolean): Deprecated alias for allowInjectBlock.
  • blocks (BlockDefinition[] | React.ComponentType<any>[]): Whitelist of blocks the zone may render; required when block rendering is enabled.
  • templates (React.ComponentType<any>[]): Deprecated alias for blocks.
  • interventionEligible (boolean): Enables strict screen-intervention mode checks for this zone.
  • proactiveIntervention (boolean): Enables optional proactive render_block calls in this zone.

When using block rendering, always pass a whitelist via blocks (or templates for legacy apps). The AI can only instantiate registered blocks and props; it never generates raw JSX.

Built-in block names: FactCard, ProductCard, ActionCard, ComparisonCard, FormCard.

Compatibility wrappers remain for migration:

  • InfoCard (maps to FactCard)
  • ReviewSummary (maps to ProductCard)

πŸ› οΈ Built-in Tools

Tool | What it does
tap(index) | Tap any interactive element — buttons, switches, checkboxes, custom components
long_press(index) | Long-press an element to trigger context menus
type(index, text) | Type into a text input
scroll(direction, amount?) | Scroll content — auto-detects edge, rejects PagerView
slider(index, value) | Drag a slider to a specific value
picker(index, value) | Select a value from a dropdown/picker
date_picker(index, date) | Set a date on a date picker
navigate(screen) | Navigate to any screen
wait(seconds) | Wait for loading states before acting
capture_screenshot(reason) | Capture the SDK root component as an image (requires react-native-view-shot)
render_block(zoneId, blockType, props) | Render a registered block into an AIZone as a contextual intervention
inject_card(zoneId, blockType, props) | Deprecated alias of render_block for migration
done(reply, previewText?, success?) | Return mixed chat content (text and block nodes) with a preview string
done(text, success) | Deprecated compatibility form for text-only responses
ask_user(question) | Ask the user for clarification
query_knowledge(question) | Search the knowledge base
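Built-ins can be removed, and new tools added, through the customTools prop documented above; a sketch that disables long_press and registers the escalation tool:

import { AIAgent, createEscalateTool } from '@mobileai/react-native';

<AIAgent
  navRef={navRef}
  customTools={{
    long_press: null,                                        // remove a built-in tool
    escalate: createEscalateTool({ provider: 'mobileai' }),  // add the documented escalation tool
  }}
>
  <App />
</AIAgent>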

🧪 Test in Feedyum

  • Restart the app stack with fresh bundle cache.
    • cd /Users/mohamedsalah/mobileai-suite-copy/react-native-ai-agent && npm run build
    • cd /Users/mohamedsalah/mobileai-suite-copy/feedyum-fullstack/feedyum && npx expo start -c
  • Open a dish detail screen from Feedyum.
  • Open AI chat and send: Can you quickly show me a summary of this dish in context.
  • Expected result:
    • short assistant text confirming placement
    • ProductCard/FactCard rendered in the dish-detail-summary zone
    • card includes dismiss affordance
  • In logs, confirm a render_block call or inject_card alias call and zone injection.

If you only get text, check these fast:

  • app is attached to latest Metro session
  • AIZone on this screen has allowInjectBlock, blocks, and interventionEligible={true}
  • query is an intervention-worthy request in context, not a pure factual question

📋 Requirements

  • React Native 0.72+
  • Expo SDK 49+ (or bare React Native)
  • Gemini API key — Get one free, or
  • OpenAI API key β€” Get one

Gemini is the default provider and powers all modes (text + voice). OpenAI is available as a text mode alternative via provider="openai". Voice mode uses gemini-2.5-flash-native-audio-preview (Gemini only).

📄 License

MIT © Mohamed Salah

👋 Let's connect — LinkedIn
