The palyrad daemon is the central orchestrator of the Palyra ecosystem. It manages agent lifecycles, handles transport over multiple protocols (HTTP, gRPC, QUIC), enforces security policies, and persists system state in a secure journal.

System Overview

The daemon is built as a high-performance asynchronous service in Rust, utilizing tokio for its runtime and axum for its web surface. It acts as the “brain” that connects user interfaces (Console, CLI, Discord) to execution environments (Sandboxes, Browser) and AI models.

Initialization Sequence

The daemon follows a strict bootstrap sequence defined in crates/palyra-daemon/src/lib.rs. It loads configuration, initializes the SQLite-backed JournalStore, starts the GatewayRuntimeState, and spawns background loops for scheduling and maintenance.
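The ordered bootstrap can be sketched as a chain of fallible steps, where each step must succeed before the next begins. This is an illustrative reduction, not the actual code: in the real crate these steps are async, and the function names below are assumptions paraphrasing lib.rs.

```rust
fn load_config() -> Result<(), String> { Ok(()) }   // parse RootFileConfig (TOML)
fn init_journal() -> Result<(), String> { Ok(()) }  // open the SQLite-backed JournalStore
fn start_gateway() -> Result<(), String> { Ok(()) } // build GatewayRuntimeState
fn spawn_loops() -> Result<(), String> { Ok(()) }   // scheduling + maintenance loops

/// Each step must complete before the next starts; a failure in any
/// step aborts startup with that step's error.
fn bootstrap() -> Result<(), String> {
    load_config()?;
    init_journal()?;
    start_gateway()?;
    spawn_loops()?;
    Ok(())
}

fn main() {
    assert!(bootstrap().is_ok());
    println!("daemon bootstrap completed");
}
```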

Code-to-Entity Mapping

The following diagram maps high-level system components to their primary implementation structures and files within the palyra-daemon crate.

[Diagram: Daemon Component Map]

Sources: crates/palyra-daemon/src/lib.rs#51-84, crates/palyra-daemon/src/gateway.rs#73-82, crates/palyra-daemon/src/app/state.rs#29-59

Subsystems

The daemon is partitioned into several specialized subsystems. Detailed documentation for each can be found in the linked child pages.

Gateway and Orchestration Engine

The GatewayRuntimeState is the central hub for all operations. It manages the RunStateMachine, which transitions agent tasks through states like Accepted, Running, and Succeeded. It handles the “tape”—an append-only log of events for every run.
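The state machine and tape described above can be sketched in a few lines. This is a simplified stand-in, not the real RunStateMachine (which lives in gateway.rs and has more states); the transition rules and the `Failed` state here are assumptions.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum RunState { Accepted, Running, Succeeded, Failed }

struct Run {
    state: RunState,
    tape: Vec<String>, // append-only event log (the "tape")
}

impl Run {
    fn new() -> Self {
        Run { state: RunState::Accepted, tape: vec!["accepted".into()] }
    }

    /// Only legal transitions are allowed; every transition is
    /// recorded on the tape, which is never rewritten.
    fn transition(&mut self, next: RunState) -> Result<(), String> {
        use RunState::*;
        let legal = matches!(
            (self.state, next),
            (Accepted, Running) | (Running, Succeeded) | (Running, Failed)
        );
        if !legal {
            return Err(format!("illegal transition {:?} -> {:?}", self.state, next));
        }
        self.tape.push(format!("{:?} -> {:?}", self.state, next));
        self.state = next;
        Ok(())
    }
}

fn main() {
    let mut run = Run::new();
    run.transition(RunState::Running).unwrap();
    run.transition(RunState::Succeeded).unwrap();
    assert!(run.transition(RunState::Running).is_err()); // terminal state
    assert_eq!(run.tape.len(), 3);
}
```

Modeling transitions as a closed enum means an illegal move is a recoverable error rather than silent state corruption, and the tape gives a complete replayable history of each run.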

HTTP and gRPC Transport Layer

Palyra exposes multiple interfaces: an Axum-based HTTP server for the Admin UI and OpenAI-compatible clients, and a Tonic-based gRPC server for high-performance internal communication and the CLI.

Configuration and Model Provider

This subsystem manages the RootFileConfig (TOML) and abstracts AI interactions through the ModelProvider trait, allowing the system to switch between OpenAI, local models, or deterministic providers while handling secrets via the Vault.
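The value of the trait abstraction is that callers depend only on the interface, so backends are interchangeable. A minimal sketch, assuming a much simpler trait than the real one (the actual ModelProvider is async and returns richer types):

```rust
// Hypothetical reduction of the ModelProvider abstraction.
trait ModelProvider {
    fn complete(&self, prompt: &str) -> String;
}

/// A deterministic provider, useful for tests and offline runs:
/// it simply echoes the prompt.
struct DeterministicProvider;

impl ModelProvider for DeterministicProvider {
    fn complete(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

/// Orchestration code depends only on the trait object, never on a
/// concrete backend, so swapping OpenAI for a local model is a
/// construction-time decision.
fn run_prompt(provider: &dyn ModelProvider, prompt: &str) -> String {
    provider.complete(prompt)
}

fn main() {
    let out = run_prompt(&DeterministicProvider, "ping");
    assert_eq!(out, "echo: ping");
}
```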

Journal Store and Persistence

A SQLite-backed append-only store (JournalStore) that tracks every message, tool execution, and state change. It includes full-text search (FTS) for memory recall and hash-chain integrity for security.
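Hash-chain integrity means each record's hash covers both its payload and the previous record's hash, so editing any historical entry invalidates everything after it. A self-contained sketch of the idea, using std's `DefaultHasher` as a stand-in for the real cryptographic hash and an in-memory `Vec` in place of SQLite:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Record {
    payload: String,
    chain_hash: u64, // hash of (previous chain_hash, payload)
}

fn link(prev: u64, payload: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    payload.hash(&mut h);
    h.finish()
}

fn append(journal: &mut Vec<Record>, payload: &str) {
    let prev = journal.last().map_or(0, |r| r.chain_hash);
    let hash = link(prev, payload);
    journal.push(Record { payload: payload.to_string(), chain_hash: hash });
}

/// Recompute the chain from the genesis record; any tampered
/// payload breaks every subsequent link.
fn verify(journal: &[Record]) -> bool {
    let mut prev = 0;
    for r in journal {
        if link(prev, &r.payload) != r.chain_hash {
            return false;
        }
        prev = r.chain_hash;
    }
    true
}

fn main() {
    let mut journal = Vec::new();
    append(&mut journal, "message: hello");
    append(&mut journal, "tool: fetch_url");
    assert!(verify(&journal));
    journal[0].payload = "message: tampered".into();
    assert!(!verify(&journal)); // tampering detected
}
```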

Scheduler, Routines, and Background Tasks

The spawn_scheduler_loop handles cron-based jobs and recurring routines. It manages task concurrency policies (e.g., Forbid, Replace, QueueOne) and runs background maintenance tasks such as memory embedding backfills.
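The policy names (Forbid, Replace, QueueOne) come from the source; the decision logic below is an assumption about their intent, sketched as a pure function over the job's current state when a trigger fires:

```rust
#[derive(Debug, PartialEq)]
enum ConcurrencyPolicy { Forbid, Replace, QueueOne }

#[derive(Debug, PartialEq)]
enum Decision { Start, Skip, CancelRunningThenStart, Enqueue }

/// Decide what happens when a scheduled job fires while an instance
/// may already be running and another may already be queued.
fn on_trigger(policy: &ConcurrencyPolicy, running: bool, queued: bool) -> Decision {
    match (policy, running, queued) {
        // Nothing running: always start, regardless of policy.
        (_, false, _) => Decision::Start,
        // Forbid: never overlap; drop the new trigger.
        (ConcurrencyPolicy::Forbid, true, _) => Decision::Skip,
        // Replace: cancel the in-flight instance, then start fresh.
        (ConcurrencyPolicy::Replace, true, _) => Decision::CancelRunningThenStart,
        // QueueOne: hold at most one pending instance behind the
        // running one; further triggers are dropped.
        (ConcurrencyPolicy::QueueOne, true, false) => Decision::Enqueue,
        (ConcurrencyPolicy::QueueOne, true, true) => Decision::Skip,
    }
}

fn main() {
    assert_eq!(on_trigger(&ConcurrencyPolicy::Forbid, true, false), Decision::Skip);
    assert_eq!(on_trigger(&ConcurrencyPolicy::Replace, true, false), Decision::CancelRunningThenStart);
    assert_eq!(on_trigger(&ConcurrencyPolicy::QueueOne, true, false), Decision::Enqueue);
    assert_eq!(on_trigger(&ConcurrencyPolicy::QueueOne, true, true), Decision::Skip);
}
```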

Node Runtime and mTLS Pairing

Handles the identity of external nodes (like browserd). It uses a pairing code flow to issue mTLS certificates, ensuring all node-to-node communication is encrypted and authenticated.
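A pairing-code flow generally means the daemon mints a short-lived, single-use code out of band; a node that presents it in time is issued credentials (an mTLS client certificate in the real daemon). The sketch below captures only the redemption checks; the struct, the 5-minute TTL, and the literal code are illustrative assumptions.

```rust
use std::time::{Duration, Instant};

struct PairingCode {
    code: String,
    issued_at: Instant,
    used: bool,
}

// Illustrative TTL; the real daemon's expiry window may differ.
const PAIRING_TTL: Duration = Duration::from_secs(300);

impl PairingCode {
    fn new(code: &str) -> Self {
        PairingCode { code: code.to_string(), issued_at: Instant::now(), used: false }
    }

    /// One-time redemption: the code must match, be unexpired, and
    /// not have been redeemed before.
    fn redeem(&mut self, presented: &str, now: Instant) -> Result<&'static str, &'static str> {
        if self.used {
            return Err("code already used");
        }
        if now.duration_since(self.issued_at) > PAIRING_TTL {
            return Err("code expired");
        }
        if presented != self.code {
            return Err("code mismatch");
        }
        self.used = true;
        // The real flow signs the node's certificate request here,
        // after which all traffic is mutually authenticated TLS.
        Ok("issue mTLS client certificate")
    }
}

fn main() {
    let mut pairing = PairingCode::new("483-921");
    let now = Instant::now();
    assert!(pairing.redeem("000-000", now).is_err()); // wrong code
    assert!(pairing.redeem("483-921", now).is_ok());
    assert!(pairing.redeem("483-921", now).is_err()); // single use
}
```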

Usage Governance and Access Control

Enforces rate limits, token budgets, and permission checks via the AccessRegistry. It integrates the Cedar policy engine to decide if a specific principal can execute a specific tool.
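The layering matters: a request must clear the budget check and the policy decision before a tool runs. In this sketch a hard-coded allow-list stands in for the Cedar authorization query that the AccessRegistry would actually perform; all principal and tool names are hypothetical.

```rust
struct Budget {
    tokens_used: u64,
    token_limit: u64,
}

fn within_budget(b: &Budget, cost: u64) -> bool {
    b.tokens_used + cost <= b.token_limit
}

/// Stand-in for a Cedar authorization query:
/// "may `principal` execute `tool`?"
fn policy_allows(principal: &str, tool: &str) -> bool {
    matches!(
        (principal, tool),
        ("cli-user", "http_fetch") | ("cli-user", "search")
    )
}

/// Both gates must pass; the first failure short-circuits.
fn authorize(principal: &str, tool: &str, budget: &Budget, cost: u64) -> Result<(), &'static str> {
    if !within_budget(budget, cost) {
        return Err("token budget exceeded");
    }
    if !policy_allows(principal, tool) {
        return Err("denied by policy");
    }
    Ok(())
}

fn main() {
    let budget = Budget { tokens_used: 900, token_limit: 1000 };
    assert!(authorize("cli-user", "http_fetch", &budget, 50).is_ok());
    assert_eq!(authorize("cli-user", "shell_exec", &budget, 50), Err("denied by policy"));
    assert_eq!(authorize("cli-user", "search", &budget, 200), Err("token budget exceeded"));
}
```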

Data Flow: Request Lifecycle

This diagram illustrates how a request (e.g., a message from the CLI) flows through the daemon's internal entities.

[Diagram: Message Processing Flow]

Sources: crates/palyra-daemon/src/transport/grpc/auth.rs#163-165, crates/palyra-daemon/src/gateway.rs#188-210, crates/palyra-daemon/src/orchestrator.rs#77-80

Resource Constraints and Constants

The daemon enforces strict limits to ensure stability and security:
| Constant | Value | Purpose |
| --- | --- | --- |
| MAX_JOURNAL_RECENT_EVENTS | 100 | Limit for recent event snapshots |
| JOURNAL_WRITE_LATENCY_BUDGET_MS | 25 ms | Target latency for DB writes |
| MAX_MEMORY_ITEM_BYTES | 16 KB | Maximum size of a single memory entry |
| MAX_HTTP_FETCH_BODY_BYTES | 512 KB | Egress tool response limit |
Sources: crates/palyra-daemon/src/gateway.rs#89-130
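The size limits above are simple guards applied at write and egress boundaries. The constant values below match the table; the enforcement functions are illustrative, and whether oversized fetch bodies are truncated or rejected outright is an assumption.

```rust
const MAX_MEMORY_ITEM_BYTES: usize = 16 * 1024;
const MAX_HTTP_FETCH_BODY_BYTES: usize = 512 * 1024;

/// Reject a memory entry that exceeds the per-item cap before it
/// ever reaches the journal.
fn check_memory_item(item: &[u8]) -> Result<(), String> {
    if item.len() > MAX_MEMORY_ITEM_BYTES {
        return Err(format!(
            "memory item of {} bytes exceeds the {} byte limit",
            item.len(),
            MAX_MEMORY_ITEM_BYTES
        ));
    }
    Ok(())
}

/// Cap an egress tool response at the fetch-body limit.
/// (Assumption: oversized bodies are truncated, not rejected.)
fn cap_fetch_body(mut body: Vec<u8>) -> Vec<u8> {
    body.truncate(MAX_HTTP_FETCH_BODY_BYTES);
    body
}

fn main() {
    assert!(check_memory_item(&vec![0u8; 16 * 1024]).is_ok());
    assert!(check_memory_item(&vec![0u8; 16 * 1024 + 1]).is_err());
    assert_eq!(cap_fetch_body(vec![0u8; 600 * 1024]).len(), MAX_HTTP_FETCH_BODY_BYTES);
}
```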

Child Pages