May 14, 2026

Capacitor Showcase – LocalLLM

Joseph Pender

With Capacitor, anything you can build for the web, you can build for mobile. Beyond brochure and CRUD apps, it’s possible to build advanced applications that rely heavily on native hardware functionality, rich media, and local-first data. Starting this year, the Capacitor team is embarking on a process to explore the real-world ergonomics of developing Capacitor apps across some of the more difficult and technically challenging app genres, such as rich media, background processing, and AI. We will document the common pain points we find along the way, come up with solutions for those issues (which may be bug fixes, Capacitor features, or even new plugins), and share the process here with the community.

In our first installment of Capacitor Showcase, we are introducing Oakline Bank.

Oakline Bank is a fictional digital-first regional bank built as a Capacitor demo application, showcasing how modern mobile banking experiences can be crafted with a single cross-platform codebase targeting both iOS and Android. The app demonstrates real-world patterns you’d find in production fintech apps: account dashboards, transaction histories, and, appropriately timed for the current AI craze, an AI-powered in-app assistant called OakBot.

The (fictional) team realized that when a user asks OakBot “What’s my account balance?” or “Summarize my spending this month,” answering that question requires sending real financial data to an AI model. With a cloud-based model (OpenAI, Anthropic, Google, etc.), that data leaves the device and travels to a third-party server, which would be a significant concern for any financial institution.

In 2026, there is now a way to solve this problem on modern mobile devices – on-device AI.

Introducing Capacitor LocalLLM

Capacitor LocalLLM is a native Capacitor plugin that brings the power of on-device AI directly to your iOS and Android apps. By using both Apple Intelligence and Android’s on-device AI frameworks, it gives developers a simple, unified TypeScript API to send prompts with conversation session support, all while respecting user privacy and working completely offline. Whether you’re building a smart assistant, a creative tool, or an offline AI-powered bank chatbot, LocalLLM makes it as straightforward as:

const response = await LocalLLM.prompt({
  sessionId: chatSessionId,
  instructions: instructions,
  prompt: userMessageText,
  options: {
    temperature: 0.7,
    maximumOutputTokens: 256,
  },
});

With that, you get a cross-platform prompt interface without needing to worry about the intricacies of local AI models on iOS and Android, or having to research and provide your own custom models.

Grounding a Local LLM with Private Data: How OakBot Works

On-device language models are trained on vast general knowledge, but they have no awareness of anything specific to the user running them. To be useful as a banking assistant, OakBot needs to answer questions like:

  • “How much did I spend on dining this month?”
  • “What’s my current balance?”                                                                      
  • “Have I paid my Apple Card bill recently?”

None of that exists inside the model’s weights. The only way to make it available is to inject it into the context window at inference time: essentially telling the model what it needs to know before it answers. Here is how:

  1. Fetch the relevant data from the local source (in this case, the app’s in-memory transaction state loaded from the fake API)
  2. Serialize it into natural language that the model can reason over
  3. Inject it into the session context before the user’s first message
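The three steps above can be sketched roughly like this. Note that the `Transaction` shape and the `fetchTransactions`/`serializeForModel` helpers are illustrative assumptions rather than the demo app’s actual code, and the `LocalLLM` object here is a local stand-in for the plugin; only the `warmup()` call mirrors the plugin API OakBot uses:

```typescript
// Minimal stand-in for the plugin surface used in this sketch; in the real
// app, LocalLLM is imported from the Capacitor LocalLLM plugin instead.
const LocalLLM = {
  async warmup(_opts: { sessionId: string; promptPrefix: string }): Promise<void> {
    // Backed by a native call in the real plugin.
  },
};

// A hypothetical Transaction shape for illustration.
interface Transaction {
  date: string;
  merchant: string;
  category: string;
  amount: number; // negative values are debits
}

// Step 1: fetch the relevant data from the local source.
function fetchTransactions(): Transaction[] {
  // In Oakline Bank this comes from in-memory state loaded from the fake API.
  return [
    { date: "2026-03-10", merchant: "Whole Foods Market", category: "Groceries", amount: -87.43 },
  ];
}

// Step 2: serialize it into natural language the model can reason over.
function serializeForModel(transactions: Transaction[]): string {
  return transactions
    .map((t) => `- ${t.date}: ${t.merchant} (${t.category}) - ${t.amount < 0 ? "-" : ""}$${Math.abs(t.amount).toFixed(2)}`)
    .join("\n");
}

// Step 3: inject it into the session context before the user's first message.
async function groundSession(sessionId: string, systemPrompt: string): Promise<void> {
  await LocalLLM.warmup({
    sessionId,
    promptPrefix: `${systemPrompt}\n\n${serializeForModel(fetchTransactions())}`,
  });
}
```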

In OakBot specifically, this happens in formatTransactionHistory(), which transforms structured Transaction objects into a plain-text block:

Spending by Category (all time):
  - Housing: $7,400.00
  - Groceries: $1,823.45
Recent Transactions (last 60):                                                                      
  - ...
  - 2026-03-10: Whole Foods Market (Groceries) - -$87.43   
  - ...
Total transactions on record: 160
Current balance: $3,241.18
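A minimal sketch of a serializer that produces a block like the one above might look as follows. The `Transaction` shape and the function body are assumptions for illustration; the demo app’s actual `formatTransactionHistory()` is more complete:

```typescript
// A hypothetical Transaction shape; the demo app's actual type may differ.
interface Transaction {
  date: string;     // ISO date, e.g. "2026-03-10"
  merchant: string;
  category: string;
  amount: number;   // negative values are debits
}

// Format a number as a US-style currency figure, e.g. 7400 -> "7,400.00".
const usd = (n: number): string =>
  n.toLocaleString("en-US", { minimumFractionDigits: 2, maximumFractionDigits: 2 });

function formatTransactionHistory(transactions: Transaction[], balance: number): string {
  // Aggregate spending (debits only) per category.
  const byCategory = new Map<string, number>();
  for (const t of transactions) {
    if (t.amount < 0) {
      byCategory.set(t.category, (byCategory.get(t.category) ?? 0) + Math.abs(t.amount));
    }
  }

  const lines: string[] = ["Spending by Category (all time):"];
  for (const [category, total] of byCategory) {
    lines.push(`  - ${category}: $${usd(total)}`);
  }

  // Keep only the most recent 60 transactions on a tight context budget.
  const recent = transactions.slice(-60);
  lines.push(`Recent Transactions (last ${recent.length}):`);
  for (const t of recent) {
    const sign = t.amount < 0 ? "-" : "";
    lines.push(`  - ${t.date}: ${t.merchant} (${t.category}) - ${sign}$${usd(Math.abs(t.amount))}`);
  }

  lines.push(`Total transactions on record: ${transactions.length}`);
  lines.push(`Current balance: $${usd(balance)}`);
  return lines.join("\n");
}
```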

That block is then passed to LocalLLM.warmup() as the promptPrefix — a special parameter that pre-loads the context into the session before any conversation begins. This is the critical handshake between the app’s local data and the inference engine, and because it goes through the plugin’s abstraction layer, it works the same way in code regardless of whether the user is on iOS or Android.

await LocalLLM.warmup({
  sessionId: chatSessionId,
  promptPrefix: systemPromptWithTransactionContext,
});

Why not JSON over Plain Text?

Token efficiency actually favors natural language here. Each JSON object repeats field names for every entry. Compare:

{"date":"2026-03-10","merchant":"Whole Foods Market","category":"Groceries","amount":-87.43}

vs:

- 2026-03-10: Whole Foods Market (Groceries) - -$87.43

The natural language line is noticeably more compact. Multiply that across 60 transactions and the difference is meaningful on a tight context budget.
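You can check the size difference directly; character counts are only a rough proxy for token counts, but the direction of the comparison holds:

```typescript
// The same transaction rendered both ways.
const asJson = JSON.stringify({
  date: "2026-03-10",
  merchant: "Whole Foods Market",
  category: "Groceries",
  amount: -87.43,
});
const asText = "- 2026-03-10: Whole Foods Market (Groceries) - -$87.43";

// The JSON form repeats every field name; the plain-text line does not.
console.log(asJson.length, asText.length);
```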

Small on-device models reason better over natural language. The models running via LocalLLM (Foundation Models, Gemini Nano) are far smaller than cloud models. Larger models like GPT-4 handle JSON structure reliably; smaller on-device models can stumble on it. Natural prose plays to their strengths.


To experiment with the Oakline Bank demo app yourself, check out the app source here and run it on a modern device that supports on-device AI models.

LocalLLM will initially launch as a Capacitor Labs plugin, since the first-party on-device AI story is still in flux, especially on Android. In the meantime, feel free to submit feedback, bug reports, and suggestions for future directions we can take the plugin.

Community Mentions

Capacitor LocalLLM isn’t the only solution in the Capacitor ecosystem for local AI usage. Check out some related efforts from the Capacitor community below.

More Coming Soon

Stay tuned for more installments of the Capacitor Showcase series, as we continue to explore what can be built with Capacitor.

