Open Sourced Dec 15, 2025

Google A2UI Project Deep Dive

Agent-to-User Interface: A UI generation protocol standard designed for the AI Agent era.
Not just code generation, but establishing a new "communication language".

Core Workflow: From Intent to Interface

AI Agent (Cloud): generates structured JSON blueprints instead of raw HTML code.

--- JSON Stream --->

Client App (Local): parses the JSON, maps it to components, and safely renders native UI.

💡 Key Point: The client only renders "pre-approved" components (like Card, Button), eliminating the risk of AI generating malicious scripts or breaking UI styles.
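The whitelist idea above can be sketched in a few lines. Note this is an illustrative sketch only: the field names (`component`, `props`, `children`) and the tiny catalog are hypothetical, not the official A2UI schema.

```typescript
// Hypothetical sketch of the A2UI whitelist idea; the schema and
// component set here are illustrative, not the official spec.

type Blueprint = {
  component: string;                       // must match a whitelisted name
  props?: Record<string, string>;
  children?: Blueprint[];
};

// The client's "pre-approved" catalog: only these names can render.
const catalog: Record<
  string,
  (props: Record<string, string>, children: string[]) => string
> = {
  Card:   (p, c) => `<card title="${p.title ?? ""}">${c.join("")}</card>`,
  Button: (p)    => `<button>${p.label ?? ""}</button>`,
  Text:   (p)    => `<text>${p.value ?? ""}</text>`,
};

function render(node: Blueprint): string {
  const factory = catalog[node.component];
  if (!factory) {
    // Unknown (possibly malicious) components are skipped, never executed.
    return "";
  }
  const children = (node.children ?? []).map(render);
  return factory(node.props ?? {}, children);
}

// The agent sends a structured blueprint instead of raw HTML:
const blueprint: Blueprint = {
  component: "Card",
  props: { title: "Reservation" },
  children: [
    { component: "Text", props: { value: "Italian, Friday 7pm, 2 guests" } },
    { component: "Button", props: { label: "Confirm" } },
    { component: "script", props: { src: "evil.js" } }, // dropped: not in catalog
  ],
};

console.log(render(blueprint));
```

Because the model can only name components, never define them, an injected `script` entry is simply skipped by the lookup.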

Scenario Simulation: Restaurant Reservation (Local)

Example prompt: "Book an Italian restaurant for two this Friday night." In the demo, clicking "Replay Demo" starts streaming the A2UI JSON that assembles the reservation UI.

Online Experience: A2UI Composer


Demo Source: ag-ui.com

Safety

AI cannot generate executable code directly; it can only "order" from the app's whitelisted component catalog. This fundamentally prevents XSS attacks and UI breakage caused by prompt injection.
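The whitelist covers component names; property values supplied by the model must additionally be treated as data, not markup. A minimal sketch (the `escapeHtml` helper is a standard technique, not part of A2UI itself):

```typescript
// Sketch: even whitelisted components should treat model-supplied
// property values as inert data. escapeHtml is a generic helper here,
// not an A2UI API.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// A prompt-injected payload arrives as a plain string property...
const injected = '<script>alert("pwned")</script>';

// ...and is rendered as escaped text, never executed.
const rendered = `<text>${escapeHtml(injected)}</text>`;
console.log(rendered);
// -> <text>&lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;</text>
```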

Cross-Platform

The same JSON blueprint can be sent to Web (React/Angular), iOS (SwiftUI), or Android and cross-platform clients (Flutter). The UI's look and feel is determined entirely by the client's native styles.

Streaming Updates

Supports operations such as addComponents and updateData. An agent can show the frame first and fill in data as it arrives, or generate the next step dynamically from user input, all without refreshing the page.
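A client applying such a stream might look like the sketch below. Only the operation names (`addComponents`, `updateData`) come from the description above; the message shapes, `id` field, and `surface` map are assumptions for illustration.

```typescript
// Hypothetical message shapes; only the operation names come from the
// A2UI description, the rest is an illustrative assumption.
type Component = { id: string; component: string; props: Record<string, string> };

type StreamMsg =
  | { op: "addComponents"; components: Component[] }
  | { op: "updateData"; id: string; props: Record<string, string> };

// Client-side surface state, updated incrementally as messages arrive.
const surface = new Map<string, Component>();

function apply(msg: StreamMsg): void {
  switch (msg.op) {
    case "addComponents":
      for (const c of msg.components) surface.set(c.id, c);
      break;
    case "updateData": {
      const c = surface.get(msg.id);
      if (c) c.props = { ...c.props, ...msg.props }; // merge data into the frame
      break;
    }
  }
}

// Show the frame first, then fill in data when it becomes available:
apply({ op: "addComponents", components: [
  { id: "r1", component: "Card", props: { title: "Searching…" } },
]});
apply({ op: "updateData", id: "r1", props: { title: "Trattoria Roma, Fri 7pm" } });

console.log(surface.get("r1")?.props.title); // -> Trattoria Roma, Fri 7pm
```

The discriminated-union message type is what lets each incremental update be validated before it touches the UI.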

Standardization

Google defined a set of generic base components (Card, Form, Chart, Map, etc.) and their property specs. It is something like "HTML for Agents", which makes model training and fine-tuning easier.

Clarification: What is it NOT?

Not "AI writing HTML/CSS"

Unlike v0.dev or Copilot, which help you write frontend code at development time, A2UI is a runtime protocol: the agent generates interface structures dynamically during its conversation with the user.

Does not include a UI rendering engine

A2UI is just a spec. The renderer must be implemented by developers in React, Flutter, etc. (or adopted from Google's reference implementations).

“Users often assume AI can ‘create’ brand-new UI elements out of thin air. In reality, the AI is more like playing with LEGO: it can only build castles from the blocks (components) you provide.”

Future Vision: Model as App

A2UI marks the evolution from Chatbot to Micro-App Generator. Future apps might have only one entry point. When you want to book tickets, AI assembles a booking UI on the spot; when you want to analyze data, AI assembles a dashboard. This "UI on Demand" (Generative UI) will revolutionize software development and usage.