MCP for mobile apps.

Give your Android app agentic powers.

Apps describe what they can do. AI Assistant (Hark) discovers and invokes those capabilities. On-device. Open protocol.

Hark means "to listen." It's also short for my name, Harkirat.

How it works

  1. Describe the app
     The app ships oacp.json and OACP.md so an assistant can understand the actions, parameters, and vocabulary.
  2. Discover capabilities
     An OACP-compatible assistant scans the device for .oacp ContentProviders and reads each manifest at runtime.
  3. Match the request
     The assistant uses its on-device pipeline to match the user's request to the best capability and extract any parameters.
  4. Invoke the app
     The assistant dispatches an Android intent. Background tasks use broadcasts and UI flows use activities.
  5. Return the result
     If the action is async, the app sends a structured result back and the assistant can speak or display it.
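
To make step 1 concrete, here is a minimal sketch of what an app's oacp.json might contain. The field names shown (name, actions, type, parameters) are illustrative assumptions, not the normative schema:

```json
{
  "name": "Weather App",
  "description": "Current conditions and forecasts",
  "actions": [
    {
      "id": "get_current_weather",
      "type": "broadcast",
      "description": "Fetch current conditions for a location",
      "parameters": [
        { "name": "location", "type": "string", "required": false }
      ]
    }
  ]
}
```

OACP.md would complement this with prose vocabulary the assistant can use for matching.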
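
For step 2, discovery hinges on the ContentProvider the app exposes. A hedged sketch of how an app might declare one in AndroidManifest.xml — the exact attribute values are assumptions; the point is the .oacp authority suffix that assistants scan for:

```xml
<!-- Hypothetical provider declaration: an OACP assistant finds this
     provider by its ".oacp" authority suffix and reads the manifest
     through it at runtime. -->
<provider
    android:name=".OacpManifestProvider"
    android:authorities="com.example.weather.oacp"
    android:exported="true" />
```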
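
Step 3 can be pictured with a toy matcher: score each discovered capability against the user's utterance by keyword overlap and pick the best. This is a deliberately simplified sketch — Hark's real pipeline uses an on-device model, and every name here is invented for illustration:

```java
import java.util.*;

// Toy capability matcher: scores each capability's description by how
// many words it shares with the utterance, returns the best (or null
// when nothing overlaps at all).
class CapabilityMatcher {
    record Capability(String id, String description) {}

    static Capability match(String utterance, List<Capability> caps) {
        Set<String> words =
            new HashSet<>(Arrays.asList(utterance.toLowerCase().split("\\W+")));
        Capability best = null;
        int bestScore = 0;
        for (Capability cap : caps) {
            int score = 0;
            for (String w : cap.description().toLowerCase().split("\\W+")) {
                if (words.contains(w)) score++;
            }
            if (score > bestScore) {
                bestScore = score;
                best = cap;
            }
        }
        return best;
    }
}
```

A real matcher would also extract parameters ("front camera", "2 seconds") from the utterance, which keyword overlap alone cannot do.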

Background action (round trip)

 "Hey Hark, what's the weather?"
        │
        ▼
 ┌─────────────┐  broadcast   ┌─────────────────┐
 │    Hark     │─────────────►│  Weather App    │
 │  on-device  │   (OACP)     │                 │
 │     AI      │◄─────────────│  fetches data   │
 └─────────────┘ ACTION_RESULT└─────────────────┘
        │        
        ▼
 "Currently 22°, partly cloudy"
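
The round trip above maps onto standard Android plumbing: the assistant sends a broadcast, the app does its work and broadcasts a structured result back. A hedged AndroidManifest.xml sketch for the app side — package and action names are invented for illustration:

```xml
<!-- Hypothetical receiver for a background OACP action. The assistant
     dispatches this broadcast, then listens for the app's
     ACTION_RESULT reply to speak or display it. -->
<receiver
    android:name=".WeatherActionReceiver"
    android:exported="true">
    <intent-filter>
        <action android:name="com.example.weather.oacp.GET_CURRENT_WEATHER" />
    </intent-filter>
</receiver>
```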

Foreground action (one way)

 "Hey Hark, take a picture with
  front camera in 2 seconds"
        │
        ▼
 ┌─────────────┐   (OACP)     ┌─────────────────┐
 │    Hark     │─────────────►│  Camera App     │
 │  on-device  │  activity    │                 │
 │     AI      │              │  opens camera,  │
 └─────────────┘              │  2s countdown   │
                              └─────────────────┘
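
The foreground path is simpler: the assistant starts an activity and no result comes back. A hypothetical manifest entry for the camera app, again with invented names:

```xml
<!-- Hypothetical activity entry for a foreground OACP action; the
     assistant launches it directly and the flow ends in the app's UI. -->
<activity
    android:name=".CaptureActivity"
    android:exported="true">
    <intent-filter>
        <action android:name="com.example.camera.oacp.TAKE_PICTURE" />
    </intent-filter>
</activity>
```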

Ecosystem

8 apps and counting. Breezy Weather, Binary Eye, Wikipedia, and more. Each one ships an oacp.json and works with any OACP assistant out of the box.

See all apps

What's next

Wake-word detection has shipped. Background listening, self-hosted inference, and a disambiguation UI are next. Everything is open source.

See the full roadmap