Preface

The landscape of proxy applications has become crowded and fragmented, so—out of personal interest—I decided to consolidate the ones I use into a single stack.

  • Chimera_Client takes inspiration from the open-source clash-rs project, with the long-term goal of matching Mihomo’s feature set.
  • Chimera follows the clash-nyanpasu project; the key difference is that my top priority is first-class support for the chimera_client engine, which can largely be considered the clash-rs core.
  • The Chimera_Server project can be viewed as a Rust implementation of xray-core, with compatibility with the original xray-core remaining the end goal. Its implementation is based on the open-source shoes project.

Introduction

This documentation set introduces the proxy ecosystem maintained in this repository, focusing on three core projects: Chimera_Client, Chimera, and Chimera_Server. Each module targets a different layer of the overall stack—client core, client GUI, and server core—but they share a common goal: delivering reliable, high-performance connectivity under diverse network conditions. The following chapters explain how these applications work together, what problems each component solves, and how teams can deploy and extend them.

In addition, the documentation for each project is divided into two major parts: a configuration guide for general users, intended for quick onboarding and day-to-day usage, and an advanced reference for developers, covering implementation details and extension capabilities to support further development.

System Topology

The reference deployment pairs clash-rs clients with one or more Chimera frontends, all built on the shared primitives exposed by chimera_core. Clients typically run on user devices or edge nodes, where they terminate local applications and translate outbound traffic into proxy-aware streams. These streams traverse secure tunnels toward Chimera, which performs authentication, routing, and protocol termination before forwarding packets to upstream services or the public internet.

Because the stack centers on chimera_core, upgrades to cipher suites, multiplexing strategies, or configuration schemas become instantly available to both sides, minimizing version skew. Observability is likewise unified: telemetry emitted at each layer shares identifiers so that request flows remain traceable end to end.

Chimera_Client

Currently Supported Protocols and Transports

  • Trojan + WS
  • Hysteria2
  • REALITY + TCP
  • SOCKS5
  • HTTP

Planned Support

  • XHTTP
  • gRPC
  • VMess
  • WireGuard
  • SSH

Role and Objectives

chimera_client is the Clash-compatible client runtime in the Chimera ecosystem. Its design goal is practical compatibility with existing Clash/Mihomo profiles, while using Rust’s type safety and async ecosystem to build a maintainable codebase.

For operators, this means:

  • Preserve familiar configuration and policy mental models.
  • Improve implementation clarity through explicit schema and module boundaries.
  • Enable incremental parity: start from stable basics (e.g., SOCKS inbound + rules), then close feature gaps against clash-rs and Mihomo.

Relationship to Clash-rs and Mihomo

chimera_client documentation treats clash-rs and Mihomo as the two most important references:

  • clash-rs: Rust-native reference for parser/runtime behavior and config semantics.
  • Mihomo: de-facto production reference for broad ecosystem compatibility and advanced operational features.

In this chapter, each module page clearly marks:

  1. what works in chimera_client now,
  2. what is Clash/Mihomo-compatible target behavior,
  3. and what migration precautions to apply today.

Architecture Overview

Internally, the client is organized into four layers:

  1. Configuration layer
    • Parses Clash-style YAML into typed Rust structures.
    • Handles defaults, validation, and hot-reload boundaries.
  2. Inbound/controller layer
    • Owns local listeners (SOCKS/HTTP/mixed/TUN as parity evolves).
    • Exposes management APIs for status, switching, and diagnostics.
  3. Policy and DNS layer
    • Evaluates rule chains with first-match semantics.
    • Provides DNS strategy primitives (system resolver today; Clash-style DNS target).
  4. Outbound transport layer
    • Executes protocol handshakes and stream forwarding.
    • Encapsulates protocol-specific knobs while sharing common TLS/socket utilities.

This split mirrors common Clash-family architecture and reduces coupling between parser, runtime, and protocol engines.

Module Guide

Each functional area is documented independently:

  • Ports and listeners: key mapping and current inbound support. See Ports and Listeners.
  • DNS module: fake-IP vs real-IP models, resolver policy, and current implementation status. See DNS Module.
  • TUN module: route-all/split-route semantics and Linux policy-routing notes. See Tun Module.
  • Rules module: rule taxonomy, ordering strategy, and provider-based policy composition. See Rule Types and Their Effects.

Compatibility Snapshot (English Docs, Current)

Area | chimera_client (current) | clash-rs / Mihomo reference
---- | ------------------------ | ---------------------------
Inbound listeners | socks_port available; others partial/in progress | Full Clash-family listener matrix
DNS | Primarily system resolver path; Clash-style block documented as target | Mature fake-IP/real-IP/split resolver workflows
TUN | Documented target model; not fully active on mainline | Mature cross-platform implementations
Rules | Core Clash rule language documented and aligned | Full rule/provider ecosystem

Use this table as a reading index: module pages go deeper with examples and caveats.

Deployment Patterns

Current recommended pattern for production-like use:

  • Start with SOCKS-based local proxying.
  • Keep DNS conservative unless you are validating an in-progress DNS branch.
  • Use explicit rule ordering and small provider sets first, then scale.
  • Add TUN only when parity branch and environment prerequisites are verified.

For CI/testing, keep one minimal profile and one Clash/Mihomo-parity profile to detect parser/runtime divergence early.

Performance and Operational Focus

The long-term performance strategy is aligned with Clash-family workloads:

  • predictable low-overhead rule matching,
  • bounded memory behavior in long-lived sessions,
  • and high observability for policy debugging.

When introducing parity features (DNS/TUN/listeners), prioritize deterministic behavior and debuggability over implicit “magic” defaults.

Reference Repositories

Ports and Listeners

Overview

Ports define how local applications, dashboards, and DNS resolvers enter chimera_client. This page cross-references Clash-rs and Mihomo semantics, then states the current chimera_client status explicitly.

Key Mapping

Clash / Mihomo key | chimera_client key | Purpose
------------------ | ------------------ | -------
port / http-port | port | HTTP CONNECT/plain proxy listener
socks-port | socks_port | SOCKS5 listener
mixed-port | mixed_port | Shared HTTP+SOCKS listener
redir-port | redir_port | Linux TCP transparent REDIRECT
tproxy-port | tproxy_port | Linux TPROXY (TCP/UDP)
external-controller | external_controller | REST controller endpoint
dns.listen | dns.listen | Local DNS socket

Behavior by Listener Type

HTTP proxy (port / http-port)

  • Typical browser/system-proxy entrypoint.
  • Expects HTTP CONNECT and HTTP proxy semantics.

SOCKS5 (socks-port)

  • Most widely compatible app-level inbound.
  • Supports direct app integration without kernel routing changes.
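Because SOCKS5 needs no kernel-level routing, any SOCKS-capable tool can talk to this inbound directly. A quick check with curl (the 7891 port follows the example profile later in this chapter; adjust to your configuration):

```shell
# Route one request through the local SOCKS5 inbound.
# --socks5-hostname makes the proxy resolve the domain, which keeps
# domain-based rules effective even without local DNS integration.
curl --socks5-hostname 127.0.0.1:7891 https://example.com/
```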

Mixed (mixed-port)

  • Single port for both HTTP and SOCKS protocols.
  • Useful where clients only allow one proxy endpoint setting.

Redir (redir-port)

  • Linux TCP transparent capture via iptables REDIRECT.
  • Does not capture UDP by itself.
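A hedged iptables sketch for the REDIRECT path (Linux, run as root; the chain name CHIMERA_REDIR and port 7893 are illustrative and must match your redir_port):

```shell
# Create a dedicated chain so the rules are easy to audit and flush.
iptables -t nat -N CHIMERA_REDIR
# Bypass loopback and private ranges first to avoid proxy loops.
iptables -t nat -A CHIMERA_REDIR -d 127.0.0.0/8 -j RETURN
iptables -t nat -A CHIMERA_REDIR -d 192.168.0.0/16 -j RETURN
# Redirect remaining TCP traffic to the local redir listener.
iptables -t nat -A CHIMERA_REDIR -p tcp -j REDIRECT --to-ports 7893
iptables -t nat -A PREROUTING -p tcp -j CHIMERA_REDIR
```

Remember that this captures TCP only; UDP (including plain DNS) needs TPROXY or TUN.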

TProxy (tproxy-port)

  • Transparent TCP+UDP capture path on Linux.
  • Requires policy routing (fwmark + route table) and firewall integration.
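A minimal policy-routing sketch for the TPROXY path (Linux, run as root; mark 0x1, table 100, and port 7894 are illustrative values):

```shell
# Deliver marked packets to the local stack via a dedicated table.
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
# Steer foreign TCP and UDP flows to the TPROXY listener.
iptables -t mangle -A PREROUTING -p tcp -j TPROXY \
  --on-ip 127.0.0.1 --on-port 7894 --tproxy-mark 0x1
iptables -t mangle -A PREROUTING -p udp -j TPROXY \
  --on-ip 127.0.0.1 --on-port 7894 --tproxy-mark 0x1
```

In practice you would also exclude loopback and private destinations before the TPROXY rules, as with the redir setup.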

External controller (external-controller)

  • Management API for dashboards and automation.
  • Prefer loopback binding unless remote access is required.
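The controller follows the Clash REST API shape, so standard tooling works against it. A hedged example (the 127.0.0.1:9090 address, the secret, and the group/node names are illustrative and must match your profile):

```shell
# Read the current runtime configuration.
curl -H "Authorization: Bearer mysecret" http://127.0.0.1:9090/configs
# Point a selector group (here named "Proxy") at a member node.
curl -X PUT -H "Authorization: Bearer mysecret" \
  -d '{"name":"ProxyA"}' http://127.0.0.1:9090/proxies/Proxy
```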

DNS listen (dns.listen)

  • Local resolver socket used by fake-IP/split-DNS workflows.
  • Usually paired with TUN or transparent proxy mode.

Compatibility Status Matrix

Feature | chimera_client now | clash-rs / Mihomo
------- | ------------------ | -----------------
SOCKS5 inbound | Available | Available
SOCKS UDP associate | Limited/disabled in current notes | Available
HTTP inbound | Reserved/planned | Available
Mixed inbound | Reserved/planned | Available
Redir inbound | Reserved/planned | Available
TProxy inbound | Reserved/planned | Available
External controller | Under active development | Mature
Local DNS listener | Under active development | Mature

Practical Guidance

  • Prefer socks_port as the stable ingress.
  • Keep management and DNS bindings on 127.0.0.1 during development.
  • Treat non-SOCKS inbounds as compatibility keys unless your branch explicitly enables them.

Configuration Examples

Minimal chimera_client profile (current-safe)

bind_address: "127.0.0.1"
allow_lan: false
socks_port: 7891
dns:
  enable: false
  ipv6: false

Clash / Mihomo reference layout

port: 7890
socks-port: 7891
mixed-port: 7892
redir-port: 7893
tproxy-port: 7894
external-controller: 127.0.0.1:9090
dns:
  listen: 127.0.0.1:1053

Migration Notes

When importing a profile from Mihomo or clash-rs:

  1. keep original keys for readability,
  2. map to chimera_client accepted keys where necessary,
  3. disable inbounds not yet active,
  4. verify with live connection tests before enabling LAN exposure.
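A hedged before/after sketch of steps 2–3 (key names follow the mapping table above; the values are examples only):

```yaml
# Mihomo source keys and their chimera_client mapping:
#   socks-port: 7891  ->  socks_port: 7891
#   mixed-port: 7892  ->  mixed_port (reserved; keep disabled for now)
bind_address: "127.0.0.1"
allow_lan: false          # keep LAN exposure off until verified (step 4)
socks_port: 7891          # mapped from socks-port
# mixed_port: 7892        # not yet active; left disabled per step 3
```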

DNS Module

Scope and Goals

The DNS module decides how domains are resolved before policy routing. In Clash-family clients, DNS behavior strongly affects rule hit accuracy, latency, and anti-pollution resilience.

This page uses clash-rs and Mihomo as reference behavior, while marking current chimera_client maturity.

Why DNS Design Matters

  • Domain rules require stable mapping between query results and connection flow.
  • Fake-IP mode can preserve domain intent even when apps later connect by IP.
  • Resolver choice impacts censorship resistance, startup reliability, and privacy leakage.

Configuration Areas

  • Upstreams: UDP / DoH / DoT endpoints and ordering.
  • Mode: fake-IP vs real-IP.
  • Policy routing for DNS: nameserver-policy and fallback strategy.
  • Cache strategy: capacity, TTL bounds, prefetch behavior.
  • Safety controls: fake-IP filters, hosts overrides, ECS handling.
  • Bootstrap: plain DNS for resolving encrypted DNS endpoints.

Mode Comparison

Mode | Advantages | Trade-offs | Typical use
---- | ---------- | ---------- | -----------
fake-ip | Better domain-rule retention after connect | Needs careful filter list | TUN / transparent proxy deployments
redir-host / real-IP style | Simpler app compatibility | Domain intent can be lost after IP connect | App-level proxy with conservative DNS goals

Resolver Selection Flow (Reference)

  1. Check hosts override and cache.
  2. Choose resolver by policy (domain/set-based) or default list.
  3. Query primary resolver(s).
  4. Run fallback path when validation/latency criteria fail.
  5. Cache and return answer.
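The flow above can be sketched in a few lines. This is an illustrative model only, not the chimera_client API; all names are invented, and resolver I/O is injected as callbacks:

```python
def resolve(domain, hosts, cache, policy, primaries, fallbacks,
            query, is_valid):
    """Mirror the five reference steps; `query` and `is_valid` are
    injected so the sketch stays transport-agnostic."""
    if domain in hosts:                        # 1. hosts override
        return hosts[domain]
    if domain in cache:                        # 1. cache hit
        return cache[domain]
    resolvers = policy.get(domain, primaries)  # 2. policy or default list
    answer = query(resolvers, domain)          # 3. primary query
    if not is_valid(answer):                   # 4. fallback path
        answer = query(fallbacks, domain)
    cache[domain] = answer                     # 5. cache and return
    return answer
```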

Compatibility Status

Capability | chimera_client now | clash-rs / Mihomo reference
---------- | ------------------ | ---------------------------
System resolver passthrough | Primary path | Also supported
Clash-style local DNS server | In progress | Mature
Fake-IP workflow | Target state | Mature
Nameserver policy / fallback filter | Target state | Mature

Configuration References

chimera_client (current conservative profile)

dns:
  enable: false
  ipv6: false

Clash/Mihomo-aligned target schema

dns:
  enable: true
  listen: 127.0.0.1:1053
  ipv6: false
  enhanced-mode: fake-ip
  fake-ip-range: 198.18.0.1/16
  fake-ip-filter:
    - "*.lan"
    - "*.local"
  default-nameserver:
    - 1.1.1.1
    - 8.8.8.8
  nameserver:
    - https://dns.alidns.com/dns-query
    - tls://1.1.1.1:853
  fallback:
    - 9.9.9.9
  fallback-filter:
    geoip: true
    geoip-code: CN

Practical Guidance

  • Start from real-IP/system resolver behavior for stability.
  • Enable fake-IP only after validating domain-rule workflows end-to-end.
  • Keep fake-IP exclusions narrow and auditable.
  • Track fallback hit ratio; sudden growth often means upstream degradation or blocking.

Troubleshooting Checklist

  • Verify listener reachability: dig @127.0.0.1 -p 1053 example.com.
  • Ensure OS/TUN resolver path really points to the client listener.
  • Inspect logs for timeout, TLS, or response-validation failures.
  • Test with one plain UDP resolver to isolate encrypted-DNS transport issues.

Alignment References

  • clash-rs: DNS schema and runtime behavior under clash-lib/src/config and DNS runtime modules.
  • Mihomo: production reference for enhanced-mode, nameserver-policy, and fallback-filter semantics.

Tun Module

Scope and Goals

TUN mode captures Layer-3 traffic from the host and sends it into the proxy pipeline without requiring each application to be proxy-aware. Compared with HTTP/SOCKS listeners, TUN is the closest to “system-wide proxy” behavior and is usually combined with policy routing and DNS control.

This page follows clash-rs semantics and config keys so future chimera_client parity is straightforward.

Current Project Status

  • chimera_client (current mainline) does not yet expose a tun block in its config parser.
  • The TUN section below is the clash-rs-aligned target shape and operational guidance, not a statement that all fields are already active in chimera_client.
  • Today, use SOCKS/listener-based workflows in chimera_client unless you are validating an in-progress TUN branch.

Configuration Schema (Clash-rs Aligned)

tun:
  enable: true
  device-id: "dev://utun1989"
  route-all: true
  gateway: "198.18.0.1/24"
  gateway-v6: "fd00:fac::1/64"
  mtu: 1500
  so-mark: 3389
  route-table: 2468
  dns-hijack: true
  # dns-hijack:
  #   - 1.1.1.1:53
  #   - 8.8.8.8:53
  # routes:
  #   - 1.1.1.1/32
  #   - 2001:4860:4860::8888/128

Key Fields and Semantics

Key | Type | Default | Notes
--- | ---- | ------- | -----
enable | bool | false | Enables TUN runtime.
device-id | string | utun1989 | Accepts dev://<name>, fd://<n>, or a plain name (treated as a device name).
gateway | CIDR string | 198.18.0.1/24 | IPv4 address/prefix assigned to the TUN interface.
gateway-v6 | CIDR string | unset | Optional IPv6 address/prefix for dual-stack TUN.
route-all | bool | false | Route all host traffic through TUN.
routes | list<CIDR> | empty | Used when route-all: false to route only selected prefixes.
mtu | u16 | platform default | Runtime uses 1500 by default (65535 on Windows) if unset.
so-mark | u32 | unset | Linux fwmark for loop prevention / policy-routing integration.
route-table | u32 | 2468 | Linux policy-routing table used by the TUN route-all path.
dns-hijack | bool or list | false | Enables DNS interception in the TUN path.

Per-Option Behavior (Clash-rs Source of Truth)

This section expands every tun field from TunConfig in clash-rs and explains practical impact.

enable

  • Turns the whole TUN pipeline on/off.
  • false means all other tun fields are ignored at runtime.

device-id

  • Interface identifier and creation mode.
  • Accepted forms in clash-rs parser:
    • dev://<name>: create/use a named TUN device.
    • plain <name>: treated as dev://<name>.
    • fd://<n>: adopt an already-open file descriptor (advanced embedding/systemd style).
  • Alias keys accepted by parser: device-url, device.

gateway

  • IPv4 CIDR assigned to TUN NIC, e.g. 198.18.0.1/24.
  • This defines both local TUN IP and prefix used for routing decisions.

gateway-v6

  • Optional IPv6 CIDR for TUN NIC.
  • If omitted, IPv6 handling in TUN path is effectively disabled.

route-all

  • true: full-tunnel, install default-path style routes/rules.
  • false: split-tunnel, only prefixes in routes are sent to TUN.
  • If route-all: true, routes list becomes operationally irrelevant.

routes

  • CIDR list used for split-tunnel mode (route-all: false).
  • Typical use: route specific public resolvers, target regions, or service networks only.

mtu

  • TUN interface MTU override.
  • Leave unset for runtime/platform defaults; set explicitly when encountering fragmentation/PMTU issues.

so-mark

  • Linux-only fwmark attached to outbound packets.
  • Used with ip rule/iptables/nftables to avoid proxy loops and integrate custom policy routing.

route-table

  • Linux-only policy route table index used by clash-rs TUN route installation.
  • Default is 2468; change when your system already uses that table number.

dns-hijack

  • false: do not redirect DNS in TUN path.
  • true: hijack UDP/53 queries to Clash DNS service.
  • list: clash-rs currently treats list mode as enabling hijack behavior (same effect as true).

Mihomo Gap Notes (Based on Public Tun Docs)

Compared with the clash-rs schema above, the following items are not documented as first-class tun keys in mihomo (same-name or same-shape):

  1. device-id with fd://<n> file-descriptor form.
    • Mihomo docs expose device but do not document fd-based takeover syntax.
  2. gateway / gateway-v6 explicit interface-address assignment fields.
    • Mihomo tun docs focus on route/rule controls and do not expose clash-rs-style gateway CIDR keys.
  3. route-all + routes exact pair.
    • Mihomo uses auto-route, route-address, route-exclude-address style controls instead of clash-rs key shape.
  4. route-table exact naming.
    • Mihomo exposes iproute2-table-index / iproute2-rule-index; functionally close but not the same key contract.

Note: so-mark in clash-rs and routing-mark in mihomo are conceptually similar (Linux packet mark), so this is a naming/compatibility difference, not a missing capability.

Device-ID Formats

  • dev://utun1989 or utun1989: create/use named TUN device.
  • dev://tun0: common Linux style.
  • fd://3: use an existing file descriptor, useful when another component owns TUN creation.

On macOS, the device name must use the utun prefix.

Routing Behavior

route-all: true

  • Linux: uses policy routing rules and a dedicated route table (route-table).
  • macOS/Windows: installs broad default-route entries through TUN.
  • DNS hijack integration is tied to this path on Linux (policy rule for destination port 53).

route-all: false

  • Only CIDRs in routes are routed through TUN.
  • This is safer for partial rollout and avoids taking over all host traffic.

If both are configured, route-all takes precedence operationally.

DNS Interaction

  • dns-hijack controls DNS interception in TUN flow, but it does not replace a working DNS module config.
  • For predictable domain-based routing, pair TUN with DNS settings (dns.enable, resolver list, fake-IP strategy where needed).
  • In practice, dns-hijack: true is commonly paired with fake-IP mode in Clash-style deployments.

Linux Notes (Policy Routing)

  • Ensure iproute2 (ip command) is available.
  • Run with sufficient privileges (CAP_NET_ADMIN or root-equivalent).
  • Prefer setting so-mark and keep it consistent with your external policy rules to avoid proxy loops.

Quick checks:

ip rule
ip route show table 2468
ip -6 route show table 2468

Example Profiles

Full-tunnel profile

tun:
  enable: true
  device-id: "dev://utun1989"
  route-all: true
  gateway: "198.18.0.1/24"
  dns-hijack: true

Split-route profile

tun:
  enable: true
  device-id: "dev://tun0"
  route-all: false
  gateway: "198.18.0.1/24"
  routes:
    - 1.1.1.1/32
    - 8.8.8.8/32
  dns-hijack: false

FD-based profile

tun:
  enable: true
  device-id: "fd://3"
  route-all: true
  gateway: "198.18.0.1/24"

Troubleshooting Checklist

  • Verify process privileges first; TUN creation and route changes fail silently in many restricted environments.
  • Confirm the interface exists (ip addr, ifconfig, or platform equivalent).
  • Validate route/rule installation after startup.
  • If DNS appears broken, verify the DNS listener is reachable and the system resolver path actually passes through TUN.
  • If traffic loops or stalls, check so-mark/policy-rule alignment and existing host firewall rules.

References and Alignment Notes

  • Clash-rs config schema: clash-lib/src/config/def.rs (TunConfig, defaults, dns-hijack shape).
  • Clash-rs config conversion: clash-lib/src/config/internal/convert/tun.rs.
  • Clash-rs tun runtime and device parsing: clash-lib/src/proxy/tun/inbound.rs.
  • Clash-rs route behavior: clash-lib/src/proxy/tun/routes/{linux,macos,windows}.rs.
  • Clash-rs sample profile: clash-bin/tests/data/config/tun.yaml.
  • Chimera_Client current parser snapshot: clash-lib/src/config/def.rs (no tun block yet on mainline).
  • Mihomo tun documentation (for key-shape comparison): https://wiki.metacubex.one/config/inbound/tun/.

Rule Types and Their Effects

Overview

In chimera_client, rules decide which outbound group handles a flow. Evaluation is top-to-bottom, first match wins, consistent with Clash-rs and Mihomo behavior.

Rule Evaluation Model

Input signals commonly include:

  • domain indicators (SNI / Host),
  • resolved destination IP,
  • destination/source ports,
  • process identity (when platform supports it),
  • GeoIP/GeoSite datasets,
  • and external rule-provider sets.

Typical actions route traffic to policy groups such as DIRECT, REJECT, Proxy, or Auto.
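A first-match evaluator can be sketched as follows. The tuple shapes and the handful of supported rule kinds are illustrative only, not the actual chimera_client engine:

```python
def match(rule, flow):
    """Check one rule against a flow dict ('host'/'port' keys assumed)."""
    kind, arg = rule[0], rule[1]
    if kind == "DOMAIN":
        return flow.get("host") == arg
    if kind == "DOMAIN-SUFFIX":
        host = flow.get("host", "")
        return host == arg or host.endswith("." + arg)
    if kind == "DST-PORT":
        return flow.get("port") == int(arg)
    if kind == "MATCH":
        return True
    return False

def route(rules, flow):
    for rule in rules:        # evaluated top-to-bottom
        if match(rule, flow):
            return rule[-1]   # first match wins
    return "DIRECT"           # defensive default; MATCH normally ends the list

rules = [
    ("DOMAIN", "api.github.com", "Proxy"),
    ("DOMAIN-SUFFIX", "google.com", "Proxy"),
    ("DST-PORT", "443", "Proxy"),
    ("MATCH", None, "DIRECT"),
]
```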

Common Domain Rules

DOMAIN

Exact hostname match.

rules:
  - DOMAIN,api.github.com,Proxy

DOMAIN-SUFFIX

Suffix match (includes subdomains).

rules:
  - DOMAIN-SUFFIX,google.com,Proxy

DOMAIN-KEYWORD

Substring-based domain match. Use carefully to avoid overmatching.

rules:
  - DOMAIN-KEYWORD,openai,Proxy

IP and Network Rules

IP-CIDR

IPv4 destination prefix match.

rules:
  - IP-CIDR,1.1.1.0/24,DIRECT

IP-CIDR6

IPv6 destination prefix match.

rules:
  - IP-CIDR6,2606:4700::/32,DIRECT

SRC-IP-CIDR

Source subnet match (useful on routers/gateways).

rules:
  - SRC-IP-CIDR,192.168.50.0/24,GameProxy

GEOIP

Country/region IP database match.

rules:
  - GEOIP,CN,DIRECT

GEOSITE

Domain category/list match.

rules:
  - GEOSITE,geolocation-!cn,Proxy

Port and Process Rules

DST-PORT

Destination-port-based routing.

rules:
  - DST-PORT,443,Proxy

SRC-PORT

Source-port-based routing.

rules:
  - SRC-PORT,60000-60100,DIRECT

PROCESS-NAME

Executable name match.

rules:
  - PROCESS-NAME,Telegram.exe,Proxy

PROCESS-PATH

Full executable path match.

rules:
  - PROCESS-PATH,/Applications/Discord.app/Contents/MacOS/Discord,Proxy

Provider and Logical Rules

RULE-SET

References remote/local provider-managed rule collections.

rule-providers:
  streaming:
    type: http
    behavior: domain
    url: https://example.com/streaming.yaml
    interval: 86400
    path: ./ruleset/streaming.yaml

rules:
  - RULE-SET,streaming,Proxy

MATCH

Final catch-all fallback.

rules:
  - MATCH,DIRECT

Recommended Rule Ordering

  1. Security blocks and guaranteed bypass (REJECT, private/local DIRECT).
  2. Precise business rules (DOMAIN, PROCESS-PATH, IP-CIDR).
  3. Provider/category rules (RULE-SET, GEOSITE).
  4. Broad heuristics (DOMAIN-KEYWORD, GEOIP).
  5. Final MATCH fallback.

Compatibility Notes (clash-rs + Mihomo)

  • First-match-wins semantics are aligned.
  • Rule syntax is largely portable, but behavior still depends on DNS mode and inbound type.
  • Process-level rules are platform-sensitive; validate on each target OS.
  • GeoIP/GeoSite freshness directly impacts correctness.

Minimal Mixed Example

rules:
  - DOMAIN,internal.example.com,DIRECT
  - DOMAIN-SUFFIX,corp.example.com,DIRECT
  - PROCESS-NAME,Telegram.exe,Proxy
  - GEOSITE,category-ads-all,REJECT
  - GEOIP,CN,DIRECT
  - RULE-SET,streaming,Proxy
  - MATCH,Auto

Operational Tips

  • Keep intent explicit; avoid broad early rules.
  • Version-control rule providers and refresh intervals.
  • Enable connection decision logging when debugging mismatches.
  • Validate DNS strategy and rule strategy together, especially with fake-IP/TUN.

Chimera GUI

Design Goals

Chimera serves as a high-performance ingress layer responsible for terminating client sessions, enforcing policies, and forwarding traffic to the target destination. A single ingress port can simultaneously expose multiple proxy protocols.

The core design priorities of Chimera are:

  • minimizing handshake latency,
  • providing fine-grained access control,
  • ensuring cross-platform compatibility,
  • enabling horizontal scalability,
  • and offering built-in observability.

Currently Supported Platforms

First Tier

  • 🖥️ Windows
  • 🐧 Ubuntu
  • 🍎 macOS

Second Tier

  • ❄️ NixOS

Currently Supported Protocols

Please refer to Chimera_Client and clash-rs.


Explanation of Chimera’s Runtime Configuration Generation Mechanism

1. Key Takeaways

The configuration consumed by the core (e.g., chimera_client / mihomo) at startup and during hot-reload is not read directly from profiles.yaml. What the core actually uses is a runtime file:

  • clash-config.yaml (runtime configuration, located under app_config_dir)

This file is first composed in memory by the backend, then written to disk, and finally passed to the core either via startup arguments or Clash API hot-reload.

2. Configuration Inputs (Raw Materials)

The runtime configuration is built from four categories of inputs:

  1. Application settings: chimera-config.yaml

    • Struct: IVerge
    • Load entry: backend/tauri/src/config/chimera/mod.rs → IVerge::new()
    • Purpose: controls field filtering, port strategy, TUN/system proxy behavior, etc.
  2. Clash Guard override template: clash-guard-overrides.yaml

    • Struct: IClashTemp
    • Load entry: backend/tauri/src/config/clash/mod.rs → IClashTemp::new()
    • Purpose: forcibly overrides critical fields (e.g., mode, mixed-port, external-controller, secret, etc.)
  3. Profile metadata: profiles.yaml

    • Struct: Profiles
    • Load entry: backend/tauri/src/config/profile/profiles.rs → Profiles::new()
    • Purpose: records the currently active profile (current) and the profile list (items)
  4. Concrete profile content files: app_config_dir/profiles/*.yaml

    • Load entry: Profiles::current_mappings()
    • Purpose: provides actual configuration content such as proxies, rules, DNS, TUN, etc.

3. Startup Initialization Phase

3.1 Creating Base Files (If Missing)

backend/tauri/src/utils/init/mod.rs → init_config() ensures the following files exist:

  • clash-guard-overrides.yaml (generated by default via IClashTemp::template())
  • chimera-config.yaml (generated by default via IVerge::template())
  • profiles.yaml (an empty/default configuration)

3.2 Loading Global Configuration Objects

backend/tauri/src/config/core.rs → Config::global() initializes:

  • Profiles::new()
  • IVerge::new()
  • IClashTemp::new()
  • IRuntime::new()

IRuntime is the in-memory runtime configuration container; its config field is an Option<Mapping>.

4. Main Runtime Configuration Composition Flow

Main entry: Config::generate() (backend/tauri/src/config/core.rs)

flowchart TD
  A["Start Config::generate"] --> B["Call enhance::enhance"]
  B --> C["Read YAML(s) of current profile"]
  C --> D["merge_profiles: merge configs"]
  D --> E["(Optional) whitelist-based field filtering"]
  E --> F["Override key fields (HANDLE_FIELDS)"]
  F --> G["Write into in-memory runtime config"]
  G --> H["Write out clash config YAML"]
  H --> I["Load at startup or hot-reload via PUT /configs"]

4.1 What enhance::enhance() Does

Location: backend/tauri/src/enhance/mod.rs

Core steps:

  1. Load Clash Guard configuration

    • let clash_config = Config::clash().latest().0.clone()
  2. Read current feature toggles/settings

    • such as enable_clash_fields, from IVerge
  3. Load the content of the currently active profile(s)

    • via Profiles::current_mappings()
    • this method iterates over current, reads profiles/<file>.yaml one by one, and converts them into Mapping
  4. (Reserved) Execute profile chain scripts

    • calls process_chain(...)
    • current implementation is a placeholder (no-op), returning the original config
  5. Merge multiple profile configurations

    • calls merge_profiles(...)

    • current strategy:

      • first config: full extend
      • subsequent configs: only append proxies to the existing proxies
  6. Whitelist field filtering (optional, controlled by a toggle)

    • use_whitelist_fields_filter(...)
    • when enable_clash_fields = true, only retains keys in valid + default fields
  7. Force-override Guard fields

    • writes back fields listed in HANDLE_FIELDS from IClashTemp into the final config
    • ensures critical control fields are centrally managed by the client
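The merge strategy in step 5 can be sketched like this; plain dicts stand in for the internal Mapping type, and the function mirrors but does not reproduce the real merge_profiles(...):

```python
def merge_profiles(profiles):
    """First profile is taken wholesale; later ones only append proxies."""
    if not profiles:
        return {}
    merged = dict(profiles[0])                 # first config: full extend
    proxies = list(merged.get("proxies", []))
    for extra in profiles[1:]:                 # later configs: proxies only
        proxies.extend(extra.get("proxies", []))
    if proxies:
        merged["proxies"] = proxies
    return merged
```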

4.2 Scope of HANDLE_FIELDS Overrides

Defined in backend/tauri/src/enhance/field.rs:

  • mode
  • port
  • socks-port
  • mixed-port
  • allow-lan
  • log-level
  • ipv6
  • secret
  • external-controller

This means that even if these fields exist in a profile, the final values will be overwritten by the corresponding values from clash-guard-overrides.yaml.
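A hedged illustration with invented values:

```yaml
# In the imported profile:
#   log-level: debug
#   mixed-port: 7999
# In clash-guard-overrides.yaml:
#   log-level: info
#   mixed-port: 7890
# Resulting clash-config.yaml -- guard values win for HANDLE_FIELDS:
log-level: info
mixed-port: 7890
```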

5. Writing to Disk and File Locations

Entry: Config::generate_file(ConfigType::Run) (backend/tauri/src/config/core.rs)

  • Output in Run mode: app_config_dir()/clash-config.yaml
  • Output in Check mode: temp_dir()/clash-config-check.yaml

If generation fails, Config::init_config() provides a fallback: it writes IClashTemp directly as the runtime configuration.

6. How the Core Obtains This Configuration

6.1 Loaded at Startup

CoreManager::run_core() → Instance::try_new() (backend/tauri/src/core/clash/core.rs):

  1. Calls Config::generate_file(ConfigType::Run) to get the path
  2. Passes the path to the core process via CoreInstanceBuilder.config_path(config_path)

In other words: the core reads clash-config.yaml directly at startup.

6.2 Hot-Reload During Runtime

CoreManager::update_config() flow:

  1. Config::generate().await? recomposes the in-memory config
  2. check_config().await? validates syntax/usability using the check file
  3. generate_file(Run) rewrites clash-config.yaml
  4. Calls PUT /configs with body { "path": "<absolute path>" } to instruct the core to reload

Relevant code:

  • backend/tauri/src/core/clash/core.rs → update_config()
  • backend/tauri/src/core/clash/api.rs → put_configs(...)
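Step 4 can be reproduced manually when debugging a reload; a hedged example following the Clash API shape (controller address, secret, and the absolute path are illustrative and must match your setup):

```shell
# Ask the running core to reload the freshly written runtime config.
curl -X PUT \
  -H "Authorization: Bearer mysecret" \
  -H "Content-Type: application/json" \
  -d '{"path": "/home/user/.config/chimera/clash-config.yaml"}' \
  http://127.0.0.1:9090/configs
```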

7. User Actions That Trigger a Rebuild

7.1 Switching/Modifying Profile Selection

Frontend commands.patchProfilesConfig → backend patch_profiles_config(...):

  1. Apply draft: Config::profiles().draft().apply(...)
  2. Trigger CoreManager::update_config()
  3. On success: Config::profiles().apply() + save_file()
  4. On failure: discard() (rollback)

7.2 Changing Settings (Certain Fields)

Frontend commands.patchVergeConfig → backend feat::patch_verge(...):

  • Writes an IVerge draft first

  • Some fields (e.g., enable_tun_mode) may trigger:

    • Config::generate() + run_core() (restart scenario)
    • or update_core_config() (hot-update scenario)

7.3 Importing the First Profile

After import_profile(...) succeeds, if there is no active profile yet, it automatically constructs ProfilesBuilder.current = [new_uid] and reuses patch_profiles_config(...) to trigger an update.

8. Key Details and Common Misunderstandings

  1. profiles.yaml is not the final configuration used by the core

    • it only stores profile metadata and the current pointer
  2. Profile content files are not passed to the core verbatim

    • they become the runtime config only after merging, filtering, and guard overrides
  3. external-controller may have its port changed before startup

    • prepare_external_controller_port() checks port availability according to policy and switches ports if necessary
  4. verge_mixed_port is primarily used for system proxy logic

    • it is not directly written to the runtime YAML’s mixed-port
    • system proxy uses verge_mixed_port first, otherwise falls back to Config::clash().get_mixed_port()
  5. get_runtime_yaml() returns IRuntime.config from memory

    • it is usually consistent with the recently written clash-config.yaml
    • but fundamentally it comes from memory, not from re-reading disk each time

9. Current Implementation Limitations (As of the Codebase)

  1. Chain script execution is currently a placeholder

    • process_chain(...) does not actually rewrite the config yet
  2. Global chain processing code is still commented out

    • only the scoped chain framework exists for now
  3. patch_clash_config IPC is still todo!()

    • the frontend will fail if it uses that IPC path
  4. Directly editing a profile file does not automatically trigger a hot-reload

    • save_profile_file(...) only writes the file; it does not call update_config()

10. Troubleshooting Checklist (Practical Order)

If you suspect “the core is using the wrong configuration,” check in this order:

  1. Confirm the active profile is correct

    • verify the current field in profiles.yaml
  2. Confirm the profile source content matches expectations

    • check app_config_dir/profiles/*.yaml
  3. Check guard override items

    • verify whether HANDLE_FIELDS in clash-guard-overrides.yaml overrides the values you intended
  4. Inspect the final runtime configuration

    • check clash-config.yaml
    • or call get_runtime_yaml() to view the in-memory version
  5. Confirm a hot-reload actually occurred

    • verify patch_profiles_config / patch_verge_config / restart_sidecar was executed
    • check logs to see whether PUT /configs succeeded
  6. If you see port-related issues

    • check whether external-controller was rewritten by the port strategy

Service Mode Configuration

Scope and Intent

In Chimera GUI, service mode runs the proxy core as a background system service while the GUI acts as the control surface. This separation is important when you need stable long-running behavior, elevated networking privileges, or startup-before-login workflows.

Foreground Mode vs Service Mode

| Mode | Runtime shape | Typical use | Main limitation |
| --- | --- | --- | --- |
| Foreground mode | GUI process owns the core directly | Development and quick profile checks | Core stops when the GUI exits or the user logs out |
| Service mode | System service owns the core; GUI controls it via local IPC | Daily use, TUN/transparent routing, always-on setups | Requires service install and permission management |

Why Enable Service Mode

  • Keep traffic forwarding alive even if the GUI is closed.
  • Start proxy service automatically at boot/login with predictable lifecycle.
  • Support privileged paths (for example TUN, policy routing, transparent capture) more reliably.
  • Reduce behavior drift across user sessions on shared machines.

Configuration Workflow in Chimera GUI

  1. Prepare and validate your active profile in normal mode first.
  2. Open Chimera GUI settings and enable service mode.
  3. Install/register the service when prompted by the GUI.
  4. Choose startup policy:
    • Manual: start only when needed.
    • Automatic: start at system boot (recommended for always-on use).
  5. Apply settings and trigger a service restart from the GUI.
  6. Confirm the GUI can reconnect to the local control endpoint after restart.

Option labels may vary slightly by platform/build, but the intent is usually the same:

| GUI option (common naming) | Meaning | Suggested default |
| --- | --- | --- |
| Enable Service Mode | Switch core runtime ownership to the system service | On for long-term daily usage |
| Install/Repair Service | Register or repair service metadata | Run after first enable and after upgrades |
| Start Service at Boot | Auto-start the service during system startup | On for TUN or gateway-style setups |
| Keep Running After GUI Exit | Leave the service active when the GUI closes | On |
| Require Elevation on Apply | Prompt for admin/root rights when applying privileged changes | On |
| Auto Recover on Crash | Restart the service process after abnormal exit | On |

Platform Notes

Windows

  • Service mode is usually backed by Windows Service Control Manager.
  • Use an elevated shell for first-time install/repair if GUI prompts fail.
  • Verify state with:
Get-Service *chimera*

Linux

  • Service mode is typically managed by systemd (chimera.service or similar unit name).
  • Prefer explicit restart after profile changes that affect TUN/routing behavior.
  • Verify state with:
systemctl status chimera.service
journalctl -u chimera.service -n 100 --no-pager

macOS

  • Service mode is usually implemented through launchd (system daemon style).
  • Ensure GUI and service binaries come from the same build channel/version.
  • Verify state with (assuming the daemon label contains “chimera”):
sudo launchctl list | grep -i chimera

Rollout Strategy

  1. Start with a SOCKS/listener-only profile and confirm baseline connectivity.
  2. Enable service mode and verify reconnect behavior after GUI restart.
  3. Enable advanced options (TUN, DNS hijack, transparent capture) incrementally.
  4. Reboot once and verify auto-start, rule hit behavior, and DNS resolution stability.

Troubleshooting Checklist

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| Service cannot start | Missing admin/root privileges | Reinstall/repair the service with elevation |
| GUI shows “disconnected from core” | Control endpoint mismatch or service crash loop | Reapply service settings and inspect service logs |
| TUN features do not take effect | Service running but privileged route setup failed | Check system logs and permission/capability grants |
| Profile changes seem ignored | GUI saved the config but the service did not reload | Trigger an explicit service restart from the GUI |
| Traffic stops after logout | Foreground mode still active | Recheck that service mode is enabled and installed |

Operational Boundary

Service mode changes process lifecycle and permission model, not proxy policy semantics. Your rules, DNS strategy, and outbound definitions are still determined by the active Chimera profile.

chimera_server Library

Purpose and Scope

chimera_server is the shared Rust crate that provides protocol primitives, configuration schemas, crypto suites, and common utilities for both client and server projects. By centralizing these capabilities, the ecosystem avoids duplicated logic, ensures protocol compliance, and keeps security fixes consistent across binaries.

Key Modules

  • Configuration model: strongly typed structures plus serde-based serialization for Clash manifests, Chimera manifests, and shared policy fragments.
  • Crypto and handshake utilities: AEAD ciphers, key derivation, certificate pinning helpers, TLS fingerprint templates, and QUIC transport parameters.
  • Transport abstractions: traits for stream/session lifecycles, multiplexing interfaces, buffer management, and async runtime adapters.
  • Event bus: lightweight publish/subscribe mechanism so higher layers can tap into connection lifecycle events, metrics, and alerts.

API Surface and Extensibility

The crate exposes a stable Rust API along with optional C FFI bindings for other languages. Extension points allow third parties to register custom cipher suites, add routing annotations, or hook into telemetry emission. Versioning follows semver with clear migration guides whenever breaking changes occur, ensuring that clash-rs and Chimera can track upgrades smoothly.

Testing and Quality

chimera_server maintains exhaustive unit tests for parsers, crypto primitives, and transport behaviors. Integration suites spin up in-memory client/server pairs to validate interoperability before changes land. Benchmarks measure handshake latency, throughput, and memory footprint across representative hardware, providing baselines for regression detection.

Protocol

Overview

| Protocol | Default Transport | Authentication | Strengths | Typical Constraints |
| --- | --- | --- | --- | --- |
| SOCKS5 | TCP control + optional UDP | Optional username/password | Works with almost any TCP app, UDP associate mode | Clear-text by default, needs TLS/obfs elsewhere |
| HTTP(S) CONNECT | TCP over HTTP/1.1 or HTTP/2 | Basic auth, bearer token, mutual TLS | Blends with web traffic, easy to deploy on gateways | Only proxies TCP, relies on intermediary keeping long-lived tunnels |
| Trojan | TLS over TCP | Pre-shared password validated inside TLS | Hard to fingerprint, benefits from CDN/SNI | Each password maps to a port/user, needs valid TLS certificate |
| Hysteria 2 | QUIC (UDP) with TLS 1.3 | Password or OIDC-like token | High throughput, UDP native, congestion tuning | Requires open UDP ports, MTU tuning important |
| TUIC | QUIC (UDP) with TLS 1.3 | UUID or token-based auth | 0-RTT friendly, multiplexed streams, low handshake overhead | Needs UDP reachability, QUIC fingerprinting varies by implementation |
| VLESS | TLS/XTLS over TCP or MKCP | UUID-based identity | Flexible multiplexing, optional XTLS auto-split | No encryption without TLS/XTLS layer, ecosystem-specific tooling |
| xHTTP Transport | HTTP-style stream over TLS/Reality | Usually UUID/token from upper protocol (e.g., VLESS) | Better web-traffic camouflage, friendly to reverse proxies/CDNs | Header/path mismatch breaks handshake; extra overhead versus raw TCP |
| Reality (TLS camouflage) | TLS 1.3-like handshake | Public key + short ID (plus upstream auth) | Certificate-less TLS mimicry, resistant to passive probing | Depends on client fingerprint matching, tied to Xray tooling |

Detailed breakdowns now live in dedicated files; each follows the same structure (highlights, flow, configuration snippet, strengths, and limitations) to make comparisons straightforward.

Deep Dives

  • SOCKS5 – General-purpose TCP/UDP proxy with flexible method negotiation.
  • HTTP CONNECT Proxy – HTTPS-friendly tunnels that ride over standard web ports.
  • Trojan – TLS-camouflaged password proxy ideal for CDN fronting.
  • Hysteria 2 – QUIC-based transport tuned for high-loss or high-latency links.
  • TUIC – QUIC-based proxy with multiplexing and aggressive latency tuning.
  • VLESS – UUID-auth protocol with configurable transports such as TLS, XTLS, or Reality.
  • xHTTP Transport – HTTP-like transport profile for Xray ecosystems, often paired with VLESS.
  • Reality – TLS camouflage layer used by Xray transports without certificates.

SOCKS5

Official RFC

The SOCKS version 5 protocol is specified primarily in RFC 1928.

Key related RFCs:

  • RFC 1929 – Username/Password Authentication for SOCKS V5
  • RFC 1961 – GSS-API Authentication Method for Version 5 SOCKS
  • RFC 3089 – A SOCKS-based IPv6/IPv4 Gateway Mechanism

Highlights

  • Layer-4 proxy that forwards arbitrary TCP streams and supports UDP via ASSOCIATE command.
  • Method negotiation lets the server advertise NO AUTH, USERPASS, or custom authentication.
  • Widely supported by browsers, curl, SSH, and VPN clients.

Flow

  1. Client opens a TCP socket to the proxy.
  2. Client sends a list of supported authentication methods; server responds with the chosen method.
  3. Optional username/password exchange takes place.
  4. Client issues CONNECT, BIND, or UDP ASSOCIATE with destination info.
  5. Server replies with success/failure code and starts relaying traffic.
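The byte layouts exchanged in steps 2 and 4 can be sketched in Python. This is an illustrative encoder for the RFC 1928 wire format, not a full client:

```python
# Illustrative construction of SOCKS5 handshake bytes (RFC 1928).
def method_selection(methods=(0x00,)) -> bytes:
    # VER=5, NMETHODS, METHODS (0x00 = NO AUTH, 0x02 = USERNAME/PASSWORD)
    return bytes([0x05, len(methods), *methods])

def connect_request(host: str, port: int) -> bytes:
    # VER=5, CMD=CONNECT(0x01), RSV=0x00, ATYP=DOMAINNAME(0x03),
    # then length-prefixed hostname and a big-endian port.
    h = host.encode()
    return bytes([0x05, 0x01, 0x00, 0x03, len(h)]) + h + port.to_bytes(2, "big")
```

For example, `connect_request("example.com", 443)` yields the five-byte header, the 11-byte hostname, and port bytes `0x01 0xBB`.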

Configuration Snippet
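An illustrative Clash-style outbound for a SOCKS5 server; the server name, port, and credentials are placeholders, and field names follow common Clash conventions that may differ slightly in chimera_client:

```yaml
proxies:
  - name: socks5-example
    type: socks5
    server: proxy.example.com
    port: 1080
    username: user        # optional, only when USERPASS is negotiated
    password: pass        # optional
    udp: true             # enable UDP ASSOCIATE
```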

Strengths

  • Works with legacy tooling without extra plugins.
  • UDP associate makes DNS-over-UDP possible.
  • Minimal framing overhead keeps latency low.

Limitations

  • No built-in encryption; must rely on TLS-over-SOCKS or upstream obfuscation.
  • UDP associate requires the client to keep listening on a local port, which some firewalls block.
  • Authentication is static unless wrapped in a management layer.

References

  • https://www.rfc-editor.org/rfc/rfc1928
  • https://www.rfc-editor.org/rfc/rfc1929
  • https://www.rfc-editor.org/rfc/rfc1961
  • https://www.rfc-editor.org/rfc/rfc3089


HTTP

Highlights

  • Presents itself as a normal HTTP(S) server and upgrades individual requests into tunnels via the CONNECT verb.
  • Easy to front with Nginx, Apache, or cloud load balancers.
  • Supports HTTP/2 multiplexing when both sides understand it.

Flow

  1. Client opens a TCP (or TLS) connection to the proxy endpoint.
  2. Client optionally performs HTTP auth (Basic, Digest, Bearer, or mutual TLS).
  3. Client sends CONNECT target.example.com:443 HTTP/1.1 (or an HTTP/2 :method CONNECT).
  4. Proxy validates policy, then responds 200 Connection Established.
  5. Subsequent bytes are relayed transparently until one side closes the tunnel.
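Steps 2–3 above amount to a small request header. An illustrative builder for the CONNECT request bytes (header set kept deliberately minimal):

```python
import base64

# Build a minimal HTTP/1.1 CONNECT request with optional Basic auth.
def connect_request(host: str, port: int, user: str = None, password: str = None) -> bytes:
    lines = [f"CONNECT {host}:{port} HTTP/1.1", f"Host: {host}:{port}"]
    if user is not None:
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        lines.append(f"Proxy-Authorization: Basic {token}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode()
```

On success the proxy answers with a `200 Connection Established` status line, after which the socket carries raw tunneled bytes.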

Configuration Snippet
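An illustrative Clash-style outbound for an HTTPS CONNECT proxy; the host, port, and credentials are placeholders, and field names follow common Clash conventions:

```yaml
proxies:
  - name: http-connect-example
    type: http
    server: proxy.example.com
    port: 443
    tls: true             # CONNECT inside an outer TLS session
    username: user        # optional Basic auth
    password: pass
```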

Strengths

  • Blends with standard HTTPS traffic; hard to distinguish from regular web browsing.
  • Works well behind corporate firewalls that only permit ports 80/443.
  • HTTP/2 variants allow many tunnels over one TCP session, reducing handshake cost.

Limitations

  • TCP-only; cannot forward UDP flows without extra encapsulation.
  • Proxies must maintain state per tunnel, which impacts scaling under many short-lived connections.
  • Additional HTTP headers may leak metadata if not sanitized.

Trojan

Highlights

  • Starts with a real TLS handshake; all subsequent bytes are TLS application data.
  • Auth is a pre-shared password hashed with SHA-224 and hex encoded.
  • Request framing reuses SOCKS5-style address fields for CONNECT and UDP ASSOCIATE.
  • Invalid or unknown traffic can be forwarded to a fallback endpoint to look like normal HTTPS.

Flow

  1. Client completes a standard TLS handshake with the server (SNI/ALPN as configured).
  2. Client sends hex(SHA224(password)) + CRLF + Trojan Request + CRLF (+ optional payload).
  3. Server validates the password and request, then connects to the destination.
  4. For TCP, data is relayed bidirectionally; for UDP, packets are framed and tunneled over the TLS stream.

Wire Format

  • The precise framing and field definitions live in Wire Format.
  • The first TLS record may include payload after the request to reduce packet count.

Traffic Handling

Strengths

  • Uses standard TLS stacks and certificates; inherits mature TLS security and ALPN support.
  • Hard to fingerprint when served from a legitimate HTTPS endpoint.
  • Minimal protocol overhead once the handshake completes.

Limitations

  • Shared-password model means revocation is coarse unless per-user passwords are used.
  • Requires valid TLS certificates and operational renewal.
  • Fallback behavior must be configured to keep probes indistinguishable from real HTTPS.

References

  • https://trojan-gfw.github.io/trojan/protocol

Trojan Wire Format

TLS Handshake

  • The client performs a normal TLS handshake first.
  • If the handshake fails, the server closes the connection like a regular HTTPS server.
  • Some implementations also return an nginx-like response to plain HTTP probes.

Initial Request

After TLS is established, the first application data packet is:

+-----------------------+---------+----------------+---------+----------+
| hex(SHA224(password)) |  CRLF   | Trojan Request |  CRLF   | Payload  |
+-----------------------+---------+----------------+---------+----------+
|          56           | 0x0D0A  |    Variable    | 0x0D0A  | Variable |
+-----------------------+---------+----------------+---------+----------+

Trojan Request

Trojan Request uses a SOCKS5-like format:

+-----+------+----------+----------+
| CMD | ATYP | DST.ADDR | DST.PORT |
+-----+------+----------+----------+
|  1  |  1   | Variable |    2     |
+-----+------+----------+----------+
  • CMD values: 0x01 CONNECT, 0x03 UDP ASSOCIATE.
  • ATYP values: 0x01 IPv4, 0x03 DOMAINNAME, 0x04 IPv6.
  • DST.ADDR is the destination address, DST.PORT is network byte order.
  • SOCKS5 field details: https://tools.ietf.org/html/rfc1928
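The initial packet and request layout above can be encoded in a few lines. An illustrative Python encoder (CONNECT to a domain-name target):

```python
import hashlib

# Encode the initial Trojan packet: hex(SHA224(password)) CRLF request CRLF payload.
def trojan_initial(password: str, host: str, port: int, payload: bytes = b"") -> bytes:
    pwd = hashlib.sha224(password.encode()).hexdigest().encode()  # 56 hex bytes
    # CMD=CONNECT(0x01), ATYP=DOMAINNAME(0x03), length-prefixed DST.ADDR,
    # DST.PORT in network byte order.
    request = bytes([0x01, 0x03, len(host)]) + host.encode() + port.to_bytes(2, "big")
    return pwd + b"\r\n" + request + b"\r\n" + payload
```

Bundling the first payload bytes into the same TLS record, as the notes below describe, only requires passing them as `payload` here.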

UDP Associate Framing

When CMD is UDP ASSOCIATE, each UDP datagram is framed in the TLS stream as:

+------+----------+----------+--------+---------+----------+
| ATYP | DST.ADDR | DST.PORT | Length |  CRLF   | Payload  |
+------+----------+----------+--------+---------+----------+
|  1   | Variable |    2     |   2    | 0x0D0A  | Variable |
+------+----------+----------+--------+---------+----------+
  • Length is the payload size in network byte order.
  • Payload is the raw UDP datagram.

Notes

  • The first TLS record can include payload immediately after the request, reducing packet count and length patterns.
  • Clients often expose a local SOCKS5 proxy and translate local SOCKS5 requests into Trojan requests.

Trojan Traffic Handling

Other Protocols (Fallback)

  • Trojan listens on a TLS socket like a normal HTTPS service.
  • After TLS completes, the server inspects the first application data packet.
  • If the packet is not a valid Trojan request (wrong structure or password), the server treats it as “other protocols” and forwards the decrypted TLS stream to a preset endpoint (default 127.0.0.1:80).
  • The preset endpoint then controls the response, keeping the behavior indistinguishable from a real HTTPS site.

Active Detection

  • Probes without the correct structure or password are handed to the fallback endpoint.
  • As a result, active scanners see ordinary HTTPS or HTTP behavior rather than a bespoke proxy banner.

Passive Detection

  • With a valid certificate, traffic is protected by TLS and resembles ordinary HTTPS.
  • For HTTP destinations, there is only one RTT after the TLS handshake; non-HTTP traffic often looks like HTTPS keepalive or WebSocket.
  • This similarity can help bypass ISP QoS that targets obvious proxy signatures.

References

  • https://github.com/trojan-gfw/trojan/issues/14

References

  • https://v2.hysteria.network/zh/docs/developers/Protocol/

Hysteria 2 Protocol Specification

Hysteria is a TCP & UDP proxy based on QUIC, designed for speed, security and censorship resistance. This document describes the protocol used by Hysteria starting with version 2.0.0, sometimes internally referred to as the “v4” protocol. From here on, we will call it “the protocol” or “the Hysteria protocol”.

Requirements Language

The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119: https://tools.ietf.org/html/rfc2119

Underlying Protocol & Wire Format

The Hysteria protocol MUST be implemented on top of the standard QUIC transport protocol (RFC 9000) with the Unreliable Datagram Extension (RFC 9221).

All multibyte numbers use Big Endian format.

All variable-length integers (“varints”) are encoded/decoded as defined in QUIC (RFC 9000).

Authentication & HTTP/3 masquerading

One of the key features of the Hysteria protocol is that to a third party without proper authentication credentials (whether it’s a middleman or an active prober), a Hysteria proxy server behaves just like a standard HTTP/3 web server. Additionally, the encrypted traffic between the client and the server appears indistinguishable from normal HTTP/3 traffic.

Therefore, a Hysteria server MUST implement an HTTP/3 server (RFC 9114) and handle HTTP requests as any standard web server would. To prevent active probers from detecting common response patterns in Hysteria servers, implementations SHOULD advise users to either host actual content or set it up as a reverse proxy for other sites.

An actual Hysteria client, upon connection, MUST send the following HTTP/3 request to the server:

:method: POST
:path: /auth
:host: hysteria
Hysteria-Auth: [string]
Hysteria-CC-RX: [uint]
Hysteria-Padding: [string]

Hysteria-Auth: Authentication credentials.

Hysteria-CC-RX: Client’s maximum receive rate in bytes per second. A value of 0 indicates unknown.

Hysteria-Padding: A random padding string of variable length.

The Hysteria server MUST identify this special request, and, instead of attempting to serve content or forwarding it to an upstream site, it MUST authenticate the client using the provided information. If authentication is successful, the server MUST send the following response (HTTP status code 233):

:status: 233 HyOK
Hysteria-UDP: [true/false]
Hysteria-CC-RX: [uint/"auto"]
Hysteria-Padding: [string]

Hysteria-UDP: Whether the server supports UDP relay.

Hysteria-CC-RX: Server’s maximum receive rate in bytes per second. A value of 0 indicates unlimited; “auto” indicates the server refuses to provide a value and asks the client to use congestion control to determine the rate on its own.

Hysteria-Padding: A random padding string of variable length.

See the Congestion Control section for more information on how to use the Hysteria-CC-RX values.

Hysteria-Padding is optional and is only intended to obfuscate the request/response pattern. It SHOULD be ignored by both sides.

If authentication fails, the server MUST either act like a standard web server that does not understand the request, or in the case of being a reverse proxy, forward the request to the upstream site and return the response to the client.

The client MUST check the status code to determine if the authentication was successful. If the status code is anything other than 233, the client MUST consider authentication to have failed and disconnect from the server.

After (and only after) a client passes authentication, the server MUST consider this QUIC connection to be a Hysteria proxy connection. It MUST then start processing proxy requests from the client as described in the next section.

Proxy Requests

TCP

For each TCP connection, the client MUST create a new QUIC bidirectional stream and send the following TCPRequest message:

[varint] 0x401 (TCPRequest ID)
[varint] Address length
[bytes] Address string (host:port)
[varint] Padding length
[bytes] Random padding

The server MUST respond with a TCPResponse message:

[uint8] Status (0x00 = OK, 0x01 = Error)
[varint] Message length
[bytes] Message string
[varint] Padding length
[bytes] Random padding

If the status is OK, the server MUST then begin forwarding data between the client and the specified TCP address until either side closes the connection. If the status is Error, the server MUST close the QUIC stream.
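The TCPRequest framing above can be sketched in Python using QUIC's standard variable-length integer encoding (RFC 9000, Section 16):

```python
# QUIC varint: the top two bits of the first byte select a 1/2/4/8-byte encoding.
def quic_varint(n: int) -> bytes:
    for bits, prefix in ((6, 0), (14, 1), (30, 2), (62, 3)):
        if n < 1 << bits:
            return (n | (prefix << bits)).to_bytes(1 << prefix, "big")
    raise ValueError("varint out of range")

def tcp_request(address: str, padding: bytes = b"") -> bytes:
    a = address.encode()
    return (quic_varint(0x401)             # TCPRequest ID
            + quic_varint(len(a)) + a      # address string ("host:port")
            + quic_varint(len(padding)) + padding)
```

Note that 0x401 does not fit in the 1-byte varint form, so it is encoded on the wire as the two bytes `0x44 0x01`.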

UDP

UDP packets MUST be encapsulated in the following UDPMessage format and sent over QUIC’s unreliable datagram (for both client-to-server and server-to-client):

[uint32] Session ID
[uint16] Packet ID
[uint8] Fragment ID
[uint8] Fragment count
[varint] Address length
[bytes] Address string (host:port)
[bytes] Payload

The client MUST use a unique Session ID for each UDP session. The server SHOULD assign a unique UDP port to each Session ID, unless it has another mechanism to differentiate packets from different sessions (e.g., symmetric NAT, varying outbound IP addresses, etc.).

The protocol does not provide an explicit way to close a UDP session. While a client can retain and reuse a Session ID indefinitely, the server SHOULD release and reassign the port associated with the Session ID after a period of inactivity or some other criteria. If the client sends a UDP packet to a Session ID that is no longer recognized by the server, the server MUST treat it as a new session and assign a new port.

If a server does not support UDP relay, it SHOULD silently discard all UDP messages received from the client.

Fragmentation

Due to the limit imposed by QUIC’s unreliable datagram channel, any UDP packet that exceeds QUIC’s maximum datagram size MUST either be fragmented or discarded.

For fragmented packets, each fragment MUST carry the same unique Packet ID. The Fragment ID, starting from 0, indicates the index out of the total Fragment Count. Both the server and client MUST wait for all fragments of a fragmented packet to arrive before processing them. If one or more fragments of a packet are lost, the entire packet MUST be discarded.

For packets that are not fragmented, the Fragment Count MUST be set to 1. In this case, the values of Packet ID and Fragment ID are irrelevant.
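The UDPMessage layout plus the fragmentation rules above can be combined into one illustrative encoder (varint helper included so the sketch is self-contained):

```python
# QUIC varint helper (RFC 9000, Section 16).
def quic_varint(n: int) -> bytes:
    for bits, prefix in ((6, 0), (14, 1), (30, 2), (62, 3)):
        if n < 1 << bits:
            return (n | (prefix << bits)).to_bytes(1 << prefix, "big")
    raise ValueError("varint out of range")

# Split a UDP payload into UDPMessage datagrams; every fragment carries the
# same Packet ID, Fragment ID counts up from 0, Fragment count is the total.
def udp_messages(session_id: int, packet_id: int, address: str,
                 payload: bytes, max_fragment: int) -> list:
    a = address.encode()
    frags = [payload[i:i + max_fragment]
             for i in range(0, len(payload), max_fragment)] or [b""]
    return [
        session_id.to_bytes(4, "big")      # uint32 Session ID
        + packet_id.to_bytes(2, "big")     # uint16 Packet ID
        + bytes([frag_id, len(frags)])     # uint8 Fragment ID / Fragment count
        + quic_varint(len(a)) + a
        + frag
        for frag_id, frag in enumerate(frags)
    ]
```

In practice `max_fragment` would be derived from the negotiated QUIC datagram size minus the header overhead.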

Congestion Control

A unique feature of Hysteria is the ability to set the tx/rx (upload/download) rate on the client side. During authentication, the client sends its rx rate to the server via the Hysteria-CC-RX header. The server can use this to determine its transmission rate to the client, and vice versa by returning its rx rate to the client through the same header.

Three special cases are:

  • If the client sends 0, it doesn’t know its own rx rate. The server MUST use a congestion control algorithm (e.g., BBR, Cubic) to adjust its transmission rate.
  • If the server responds with 0, it has no bandwidth limit. The client MAY transmit at any rate it wants.
  • If the server responds with “auto”, it chooses not to specify a rate. The client MUST use a congestion control algorithm to adjust its transmission rate.

“Salamander” Obfuscation

The Hysteria protocol supports an optional obfuscation layer codenamed “Salamander”.

“Salamander” encapsulates all QUIC packets in the following format:

[8 bytes] Salt
[bytes] Payload

For each QUIC packet, the obfuscator MUST calculate the BLAKE2b-256 hash of a randomly generated 8-byte salt appended to a user-provided pre-shared key.

hash = BLAKE2b-256(key + salt)

The hash is then used to obfuscate the payload using the following algorithm:

for i in range(0, len(payload)):
    payload[i] ^= hash[i % 32]

The deobfuscator MUST use the same algorithms to calculate the salted hash and deobfuscate the payload. Any invalid packet MUST be discarded.
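The whole layer fits in a few lines. An illustrative Python implementation of the algorithm above:

```python
import hashlib
import os

# "Salamander" obfuscation: per-packet salted BLAKE2b-256 keystream, XORed
# over the QUIC payload and prefixed with the 8-byte salt.
def obfuscate(key: bytes, packet: bytes) -> bytes:
    salt = os.urandom(8)
    digest = hashlib.blake2b(key + salt, digest_size=32).digest()
    return salt + bytes(b ^ digest[i % 32] for i, b in enumerate(packet))

def deobfuscate(key: bytes, data: bytes) -> bytes:
    salt, body = data[:8], data[8:]
    digest = hashlib.blake2b(key + salt, digest_size=32).digest()
    return bytes(b ^ digest[i % 32] for i, b in enumerate(body))
```

Because the salt is random per packet, the same QUIC payload never produces the same obfuscated bytes twice.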

TUIC

Highlights

  • QUIC-based proxy protocol that uses TLS 1.3 for encryption and stream multiplexing.
  • Supports 0-RTT resumption and UDP relay over QUIC datagrams.
  • Designed for aggressive latency tuning with modern congestion control.

Flow

  1. Client opens a QUIC connection to the server and completes the TLS 1.3 handshake.
  2. Client authenticates with a UUID/token configured on the server.
  3. Client opens bidirectional QUIC streams for TCP requests and uses datagrams for UDP relay.
  4. Server validates auth, then forwards traffic to upstream destinations.

Configuration Snippet
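An illustrative Clash-Meta-style TUIC outbound; the server, UUID, and password are placeholders, and field names follow Clash.Meta/mihomo conventions that may differ slightly in chimera_client:

```yaml
proxies:
  - name: tuic-example
    type: tuic
    server: tuic.example.com
    port: 443
    uuid: 11111111-2222-3333-4444-555555555555
    password: change-me
    alpn: [h3]
    congestion-controller: bbr   # cubic / new-reno / bbr
    udp-relay-mode: native       # or quic
```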

Strengths

  • Low handshake overhead with 0-RTT and multiplexed streams.
  • Handles UDP natively without extra encapsulation layers.
  • Good performance on lossy or high-latency mobile networks.

Limitations

  • Requires UDP reachability and QUIC-friendly network paths.
  • QUIC fingerprints vary by implementation and can be throttled or blocked.
  • MTU and packet pacing tuning are often required for best results.

VLESS

Highlights

  • Lightweight stateless protocol from Project V that uses UUIDs for client identification.
  • Typically paired with TLS, XTLS, or Reality transport layers for encryption and camouflage.
  • Supports multiplexing, fallback routes, and advanced routing rules within the Xray core ecosystem.

Flow

  1. Client connects to the server transport (TLS, XTLS, Reality, gRPC, or MKCP).
  2. Client sends a VLESS header carrying the UUID, command (TCP/UDP), and target address.
  3. Server validates the UUID, then opens a stream or datagram tunnel to the destination.
  4. Optional flow modes (for example XTLS Vision) accelerate traffic by avoiding redundant re-encryption of inner TLS streams.

Configuration Snippet
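An illustrative Clash-Meta-style VLESS outbound over TLS; the server, UUID, and flow value are placeholders following mihomo conventions:

```yaml
proxies:
  - name: vless-example
    type: vless
    server: edge.example.com
    port: 443
    uuid: 11111111-2222-3333-4444-555555555555
    tls: true
    servername: edge.example.com
    network: tcp
    flow: xtls-rprx-vision   # omit for plain TLS without XTLS
```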

Strengths

  • UUID-based auth scales well for many users and integrates with automated issuers.
  • Compatible with multiple transports, giving flexibility between TCP, gRPC, WS, or QUIC layers.
  • XTLS/Reality options reduce TLS overhead and mimic legitimate HTTPS fingerprints.

Limitations

  • Requires the Xray-core ecosystem; not natively supported by mainstream OS tools.
  • Misconfiguration of flow parameters can break compatibility with older clients.
  • Security relies heavily on the chosen transport; bare VLESS without TLS offers no encryption.

xHTTP Transport

Overview

xHTTP is an Xray transport that tunnels proxy traffic through regular HTTP request/response patterns, making it look closer to normal web application traffic. It is commonly used with VLESS + TLS/Reality to improve camouflage and traverse restrictive network environments.

When to Use

  • You need traffic to blend into common HTTPS API patterns.
  • Your network environment is sensitive to long-lived WebSocket or gRPC signatures.
  • You want to combine VLESS identity/auth with HTTP-style uplink/downlink behavior.

Core Configuration Fields

| Field | Side | Meaning |
| --- | --- | --- |
| network: xhttp | client/server | Enables the xHTTP transport. |
| path | client/server | HTTP request path used by the transport; must match on both sides. |
| host | client | Optional Host header override (for fronting/reverse proxy cases). |
| mode | client/server | Transport mode, commonly auto (default) or platform-specific variants. |
| extra.headers | client | Extra HTTP headers to mimic app/API traffic. |
| xmux | client/server | Multiplex tuning such as concurrency limits and connection reuse. |
| tls / reality | client/server | Encryption/camouflage layer; strongly recommended in production. |

Minimal Example (Client, Clash-Meta style)

proxies:
  - name: vless-xhttp
    type: vless
    server: edge.example.com
    port: 443
    uuid: 11111111-2222-3333-4444-555555555555
    tls: true
    servername: cdn.example.com
    network: xhttp
    xhttp-opts:
      path: /api/v1/sync
      host:
        - cdn.example.com
      mode: auto
      headers:
        User-Agent:
          - okhttp/4.12.0

Minimal Example (Server, Xray style)

{
  "inbounds": [
    {
      "port": 443,
      "protocol": "vless",
      "settings": {
        "clients": [
          { "id": "11111111-2222-3333-4444-555555555555" }
        ],
        "decryption": "none"
      },
      "streamSettings": {
        "network": "xhttp",
        "security": "tls",
        "tlsSettings": {
          "serverName": "cdn.example.com",
          "certificates": [
            {
              "certificateFile": "/etc/ssl/fullchain.pem",
              "keyFile": "/etc/ssl/privkey.pem"
            }
          ]
        },
        "xhttpSettings": {
          "path": "/api/v1/sync",
          "mode": "auto"
        }
      }
    }
  ]
}

Deployment Notes

  • Keep path and mode fully aligned between client and server; otherwise handshakes fail.
  • Prefer realistic but stable headers; frequently changing fingerprints can hurt reliability.
  • If deploying behind Nginx/Caddy/CDN, ensure request buffering and timeout limits fit long-lived proxy streams.
  • Start with conservative xmux values, then tune concurrency after observing latency and upstream limits.

Troubleshooting Checklist

  • EOF immediately after connect: verify UUID, TLS server name, and path consistency.
  • Frequent reconnects: check reverse proxy idle timeout and HTTP/2 upstream settings.
  • Good handshake but poor throughput: reduce header bloat, tune xmux, and verify CDN region affinity.

Reality

Highlights

  • TLS camouflage layer from the Xray ecosystem that imitates a TLS 1.3 handshake without issuing a certificate.
  • Uses a server public key and short ID to bind the handshake to a real-looking TLS fingerprint.
  • Commonly paired with VLESS or Trojan to provide authentication and routing on top of the transport.

Flow

  1. Client selects a cover domain and configures the server public key + short ID.
  2. Client initiates a TLS 1.3-like handshake (uTLS fingerprint) with SNI set to the cover domain.
  3. Server validates the short ID and key exchange to accept the session.
  4. On success, the connection upgrades to the chosen proxy protocol (for example VLESS).

Configuration Snippet
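An illustrative Clash-Meta-style VLESS + Reality outbound; the server, UUID, cover domain, public key, and short ID are placeholders following mihomo conventions:

```yaml
proxies:
  - name: vless-reality-example
    type: vless
    server: edge.example.com
    port: 443
    uuid: 11111111-2222-3333-4444-555555555555
    tls: true
    network: tcp
    servername: www.cover-domain.com   # SNI of the cover domain
    client-fingerprint: chrome         # uTLS fingerprint to present
    reality-opts:
      public-key: SERVER_PUBLIC_KEY_BASE64
      short-id: "0123abcd"
```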

Strengths

  • Avoids certificate issuance and rotation while keeping TLS-like handshake behavior.
  • Harder to fingerprint via passive inspection when the TLS client fingerprint matches common browsers.
  • Integrates with XTLS flow control for reduced overhead.

Limitations

  • Requires compatible client fingerprints; mismatches can break connectivity.
  • Mostly confined to the Xray tooling ecosystem.
  • Effectiveness depends on the chosen cover domain and correct configuration.