Myaiheye — Own Your Local AI
Local-first • You own it

AI without limits.
On your hardware.

Myaiheye is a local-first AI system designed for chat, coding, reasoning, automation, and optional vision analysis — with full control over models and tools.

  • Chat & Code helper
  • Automation tools
  • Optional image / vision analyzer
  • Safe Mode by default, Root Mode opt-in
Request early access (limited openings):
Email [email protected] with subject Myaiheye.

Who it’s for

Power users who want a local operator: developers, builders, IT / homelab folks, small teams, and creators who want an assistant that runs on their own machine.

Typical uses

• Coding + debugging with tools
• Local automation workflows
• UI / screenshot understanding (Vision tier)
• A private assistant that doesn’t depend on cloud services

Included models

The base distribution is built around a three-model set:

Role                          Model              Size
Primary Chat / Code / Reason  qwen2.5-coder:32b  ~19 GB
Vision add-on                 llava:34b          ~20 GB
Lite fallback                 llama3.1:8b        ~5 GB

Chat / Code / Reason are behavior modes, not separate models.
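If you manage models through Ollama (the runtime Myaiheye talks to), the three tags in the table can be pre-pulled before first launch. A minimal sketch; the `MODELS` variable and `pull_models` helper are illustrative, not part of the product:

```shell
#!/bin/sh
# Pre-pull the base three-model set (tags taken from the table above).
MODELS="qwen2.5-coder:32b llava:34b llama3.1:8b"
pull_models() {
  for m in $MODELS; do
    ollama pull "$m" || return 1   # stop on the first failed download
  done
}
pull_models || echo "pull failed; is ollama installed and running?"
```

Run this once per install; subsequent pulls are incremental, so re-running it after a model update only fetches changed layers.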

System requirements

Myaiheye runs entirely on customer hardware. Choose the tier that matches the experience you want.

Tier 1 — Lite (fallback-only)

Best for basic assistant use, quick Q&A, and low cost.
CPU: 4–8 modern cores
RAM: 16 GB min (32 GB recommended)
GPU: optional (8–12 GB VRAM helps)
Storage: 40 GB minimum (OS + model + logs)
Experience: decent on CPU, better on GPU

Tier 2 — Pro (recommended)

Best for “operator” workflows: tools, coding, automation.
CPU: 8–16 modern cores
RAM: 32 GB min (64 GB recommended)
GPU: 24 GB VRAM recommended (ideal for 32B-class quantized)
Storage: 120 GB min (250 GB recommended)
Experience: fast, stable, tool-friendly

Tier 3 — Pro + Vision

For screenshot/UI/photo understanding.
Same as Tier 2, plus:
GPU: 24 GB VRAM strongly recommended
Storage: add +25–50 GB headroom for vision cache/workflows
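As a rough self-check against the tiers above, total RAM is the easiest discriminator. A sketch with thresholds mirroring the tables (this is not an official installer check, and it ignores GPU VRAM, which `nvidia-smi` would report separately):

```shell
#!/bin/sh
# Report installed RAM and suggest a tier (thresholds from the tables above).
ram_gb() { awk '/MemTotal/ {printf "%d", $2 / 1048576}' /proc/meminfo; }
suggest_tier() {
  if   [ "$1" -ge 32 ]; then echo "Tier 2/3 (Pro)"
  elif [ "$1" -ge 16 ]; then echo "Tier 1 (Lite)"
  else echo "below minimum (16 GB RAM)"
  fi
}
echo "RAM: $(ram_gb) GB -> $(suggest_tier "$(ram_gb)")"
```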

Universal requirements

OS: Ubuntu 24.04 LTS (see the supported distro list)
Ollama endpoint: 127.0.0.1:11434
Myaiheye UI: configurable port (default 8080)
Security baseline: tool allowlist (or “confirm before sudo”), logs + audit trail

Startup & health validation

ollama list
curl -s http://127.0.0.1:11434/api/tags | head
pm2 status
nvidia-smi   # if GPU tier
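The probes above can be wrapped into a single pass/fail report. A sketch; the `check` helper is illustrative, not something the product ships:

```shell
#!/bin/sh
# Run each health probe and report ok/FAIL ("check" is an illustrative helper).
check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then echo "ok   $desc"; else echo "FAIL $desc"; return 1; fi
}
fail=0
check "ollama CLI"  ollama list                               || fail=1
check "Ollama API"  curl -fsS http://127.0.0.1:11434/api/tags || fail=1
check "pm2"         pm2 status                                || fail=1
check "GPU"         nvidia-smi                                || fail=1   # optional on CPU-only tiers
if [ "$fail" -eq 0 ]; then echo "all checks passed"; else echo "some checks failed"; fi
```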

Performance expectations

First response after boot: 5–10 seconds (model warmup).
Typical responses: near-instant for chat/code tasks.
Heavy reasoning or vision tasks: longer but predictable.
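Warmup can be measured directly against the local Ollama API (default endpoint and the Lite model tag from above; `warmup_probe` is an illustrative name):

```shell
#!/bin/sh
# Time the first generate call after boot; this forces the model into memory.
warmup_probe() {
  curl -fsS "${1:-http://127.0.0.1:11434}/api/generate" \
    -d '{"model": "llama3.1:8b", "prompt": "hi", "stream": false}' >/dev/null
}
t0=$(date +%s)
warmup_probe || echo "endpoint not reachable"
echo "first response: $(( $(date +%s) - t0 ))s"
```

A second run immediately afterwards should return in well under a second, since the model stays resident.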

HORUS — Local-First AI Operator

A fully local AI system designed for real execution, verification, and user-controlled behavior. No cloud dependency.

View HORUS →

Safe Mode (default)

Myaiheye ships as a product: stable routing, tight tool permissions, restricted filesystem roots, and no risky system access by default.

Safe Mode defaults

• Tool execution via allowlist
• No sudo by default
• Restricted filesystem roots (project/data folders)
• Model routing locked to the supported set
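The allowlist principle amounts to a thin gate in front of tool execution: anything not explicitly listed is refused. A minimal sketch; the tool names and function names are assumptions, not the shipped configuration:

```shell
#!/bin/sh
# Only commands on the allowlist may run; everything else is blocked.
ALLOWED_TOOLS="ls cat grep git python3"
tool_allowed() {
  for t in $ALLOWED_TOOLS; do
    [ "$t" = "$1" ] && return 0
  done
  return 1
}
run_tool() {
  if tool_allowed "$1"; then "$@"; else echo "blocked: $1" >&2; return 1; fi
}
```

Note that `sudo` never appears on the list, which is how "no sudo by default" falls out of the same mechanism.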

Root Mode (opt-in)

For power users. Root Mode unlocks full customization: models, tools, routing rules, filesystem access, and (optionally) privileged commands — only when explicitly enabled.

Root Mode principles

• Human must enable it (never the AI)
• Enable locally (localhost/terminal), not from remote web sessions
• Audit log every elevated action
• One-click rollback to Safe Mode
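The audit-log principle maps to a simple pattern: record who ran what, and when, before the elevated action executes. A sketch with illustrative names (`AUDIT_LOG` and `audited_run` are not the product's real interface):

```shell
#!/bin/sh
# Append who/when/what to the audit log, then run the elevated action.
AUDIT_LOG="${AUDIT_LOG:-/tmp/myaiheye-audit.log}"
audited_run() {
  printf '%s uid=%s cmd=%s\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$(id -u)" "$*" >>"$AUDIT_LOG"
  "$@"
}
```

Logging before execution (not after) matters: a crashed or interrupted action still leaves a trace.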

Get started

Email [email protected] with subject Myaiheye and include:
• Your tier (Lite / Pro / Pro+Vision)
• Your hardware (CPU/RAM/GPU)
• What you want Myaiheye to do (chat, coding, automation, vision)

Notes

Myaiheye is designed for local operation and customer-owned hardware. For remote access, use secure networking practices and restrict access appropriately.

Large 70B+ reasoning models are not included by default; optional model packs may be offered later.