Version 1.0 · Unreal Engine 5.5+

Your AI Assistant
Inside Unreal Editor

Chat with an AI that reads, creates, and modifies Blueprints, materials, actors, levels, lighting, and more. It sees your viewport, plans multi-step tasks, and executes them autonomously — all from a dockable chat panel.

23 Editor Tools · ~177 Sub-actions · 5 LLM Providers · 100% C++ Native

How It Works

Not a code generator. An autonomous agent that operates directly inside your editor.

1

You Describe

Type what you want in plain English. "Create a door Blueprint that opens on overlap" — no menus, no manual steps.

2

AI Plans

The AI discovers your project state, creates a step-by-step plan, and shows you what it will do — with full visibility into each task.

3

Editor Executes

Each task is executed sequentially using built-in editor tools. Blueprints are wired, materials are authored, actors are placed — all undoable.

Feature Highlights

Everything you need to accelerate your Unreal Engine workflow.

Blueprint Authoring

Create Blueprints from any parent class. Add variables, components, functions, and wire graph nodes — including flow control, math, casts, and custom events. Batch mode for efficiency.

Material Creation

Author materials with 30+ expression types. Create material instances, material functions, landscape layers, decals, and post-process materials. Connect nodes and compile in one step.

Actor & Level Control

Spawn, transform, duplicate, delete actors. Find by name, class, or tag. Manage levels, lighting, components, and any UObject property via reflection — including nested paths and arrays.

Procedural Geometry

Create 9 primitive types. Perform boolean operations (union, subtract, intersect). Subdivide, simplify, smooth, mirror meshes, and convert to static mesh assets.

Visual Awareness

The AI sees your viewport via screenshot capture and reasons about what's on screen. It knows your selection, current level, camera position, and pinned assets automatically.

Editor Control

Control PIE (play/stop/pause), undo/redo, save assets, execute console commands, and manipulate the viewport camera — all through natural language.

Environment & Volumes

Create 10 volume types; spawn sky atmosphere, fog, and volumetric clouds. Configure lighting with point, spot, directional, and rect lights. Build lighting and set exposure.

Python Scripting

Execute arbitrary Python scripts inside the editor via the unreal module. This extends the AI's capabilities beyond the built-in tool set.
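As a sketch of what such a script can do (a hypothetical example: it uses the editor's built-in unreal Python module, so it only runs inside Unreal Editor with PythonScriptPlugin enabled):

```python
import unreal

# Hypothetical sketch: spawn a point light at Z=200 and set its
# intensity, the kind of one-off operation a generated script handles.
# Requires the editor's embedded Python (PythonScriptPlugin).
actors = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)
light = actors.spawn_actor_from_class(
    unreal.PointLight, unreal.Vector(0.0, 0.0, 200.0)
)
light.point_light_component.set_intensity(5000.0)
unreal.log(f"Spawned {light.get_actor_label()}")
```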

Polished UI

Dockable chat panel with streaming responses, task progress visualization, tool call transparency, model selector, conversation history, cancel button, and one-click undo.

23 Tools, ~177 Sub-actions

A comprehensive toolkit that covers every major editor workflow.

30 Blueprint Operations · 18 + 12 sub-actions
27 Material Operations · 30+ expression types
47 Actor & Level Ops · 27 + 9 + 6 + 5 sub-actions
25 Procedural Geometry · 9 primitives, booleans, editing
19 Environment & Volumes · 10 volume types, sky, fog
6 Spline Operations · Points, locations, tangents
16 Editor Control · PIE, undo/redo, console, camera
7+ Asset & Advanced · Assets, Python, viewport, state

Bring Your Own Model

Connect to the provider that works best for you — cloud or fully local. Switch anytime from Project Settings.

🟢 OpenAI · GPT-4o, o1, o3, GPT-4.1 · Recommended
🟠 Anthropic · Claude Sonnet 4, Opus 4 · Excellent
🔵 OpenRouter · 200+ models, one API key · Flexible
GitHub Copilot · Your existing subscription · Convenient
🏠 Local / Offline · Ollama, LM Studio · Private

Tip: For the best experience, use a model with strong tool-calling support and vision capability (e.g., GPT-4o, Claude Sonnet 4).

What Can You Say?

Just describe what you want. Here are some example prompts.

What You Say → What Happens
"Create a Blueprint actor with a rotating mesh" → Plans and executes: create BP, add mesh, wire rotation logic
"Create a glowing emissive material" → Creates material, adds emissive expression, connects to output
"Create a door that opens on overlap" → Multi-step: creates BP, adds mesh, collision, wires overlap event to movement
"Describe what you see in the viewport" → Captures a screenshot and uses vision to describe the scene
"Spawn a point light at 0,0,200 with intensity 5000" → Spawns and configures the light in the current level
"Build a staircase with 12 steps" → Creates procedural stairs geometry in the level
"Set all point lights to intensity 3000" → Finds all point lights by class and sets their properties
"Add sky atmosphere, fog, and volumetric clouds" → Spawns all three environment actors
"Create a landscape material with 3 layers" → Creates material with landscape layer expressions and blending

Quick Start

Up and running in under 2 minutes.

1

Install the Plugin

Install from the Fab Marketplace. The plugin appears at Plugins/UnrealAICopilot/ and is enabled by default.

2

Set Your API Key

API keys are read from environment variables — never written to project files.

C:\> setx OPENAI_API_KEY "sk-..."

Then restart the editor for the variable to take effect.
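On macOS or Linux, the equivalent is a shell export (the profile filenames below are typical defaults, shown for illustration):

```shell
# Persist the key by adding this line to your shell profile
# (e.g. ~/.bashrc or ~/.zshrc), then restart the editor.
export OPENAI_API_KEY="sk-..."
```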

3

Open the Chat Panel

Menu: Tools → AI Copilot  ·  Toolbar: Click the AI Copilot button

4

Start Chatting

Type a message and press Enter. The AI responds in real-time with streaming text.

Fully Configurable

Fine-tune every aspect from Project Settings → Plugins → AI Copilot.

Provider

Choose between OpenAI, Anthropic, OpenRouter, GitHub Copilot, or Local. Set model ID and endpoint per provider.

Generation

Control temperature (0.0–2.0), max tokens (256–32K), iteration limits, and tool result character caps.
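These settings correspond to the standard sampling parameters of chat-completion APIs. A hypothetical OpenAI-style request body (parameter names follow the common API convention, not necessarily the plugin's internal names) illustrates the ranges:

```python
# Hypothetical chat-completion request body illustrating the
# generation settings above; names follow the common OpenAI-style
# convention and are not taken from the plugin itself.
request_body = {
    "model": "gpt-4o",
    "temperature": 0.7,   # sampling randomness, valid range 0.0-2.0
    "max_tokens": 4096,   # response length cap, valid range 256-32K
    "messages": [
        {"role": "user", "content": "Create a glowing emissive material"},
    ],
}

# Sanity-check the values against the documented ranges.
assert 0.0 <= request_body["temperature"] <= 2.0
assert 256 <= request_body["max_tokens"] <= 32768
```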

Task Limits

Set discovery iterations, mutations per task, round-trip caps, and consecutive failure bail-out thresholds.

API Key Security

Keys are read from environment variables at runtime — never written to config files. No accidental source control commits.

Behavior

Toggle auto editor state injection, auto viewport screenshots, and streaming response mode.

Conversation History

Save, load, and manage past conversations. Stored locally in your Saved/ directory — never uploaded.

Frequently Asked Questions

Does it send my project data to external servers?
The plugin sends conversation messages and tool results to the LLM provider you configure. If you use a local model (Ollama / LM Studio), everything stays on your machine. The plugin never sends data to any server other than the one you explicitly configure.
Are my API keys stored safely?
API keys are read from environment variables at runtime and are never written to project config files. They cannot be accidentally committed to source control.
Which model should I use?
For the best experience, use a model with strong tool-calling support: GPT-4o (OpenAI), Claude Sonnet 4 (Anthropic), or Kimi K2.5 (via OpenRouter). Vision features require a vision-capable model.
Can I use this completely offline?
Yes. Set up Ollama or LM Studio with a local model, configure the Local provider, and everything runs on your machine with no internet required.
Does it work with C++ projects?
The AI can read your C++ source files and reason about them. It modifies your project through the editor's asset pipeline (Blueprints, materials, actors, levels) — it does not generate or modify C++ source files directly.
Can I undo changes the AI makes?
Yes. AI operations go through standard editor commands and support Ctrl+Z undo. The chat panel also shows exactly which tools were called so you know what changed.
What happens if a task fails?
The AI automatically retries with a different approach. After 5 consecutive failures (configurable), it skips to the next task and reports the issue. You can also click Cancel at any time.

Requirements

🎮 Unreal Engine · Version 5.5 or later
🔑 API Key · For at least one LLM provider (or a running local model)
💻 Platform · Windows 64-bit (macOS & Linux also supported)

Plugin Dependencies

EditorScriptingUtilities · Required
GeometryScripting · Required
PythonScriptPlugin · Optional
Niagara / NiagaraEditor · Optional
ControlRig · Optional

Ready to Transform Your Workflow?

Stop juggling menus. Start describing what you want.