Preface
The landscape of proxy applications has become crowded and fragmented, so—out of personal interest—I decided to consolidate the ones I use into a single stack.
Chimera_Client takes inspiration from the open-source clash-rs project, with the long-term goal of matching Mihomo's feature set. Chimera follows the clash-nyanpasu project, but the key difference is that my top priority is first-class support for the `chimera_client` engine itself, which can largely be considered the clash-rs core. Chimera_Server can be viewed as a Rust implementation of xray-core, with compatibility with the original xray-core remaining the end goal; its implementation is based on the open-source project shoes.
Project Links
- clash-rs: https://github.com/Watfaq/clash-rs
- Mihomo: https://github.com/MetaCubeX/mihomo
- clash-nyanpasu: https://github.com/libnyanpasu/clash-nyanpasu
- xray-core: https://github.com/XTLS/Xray-core
- shoes: https://github.com/cfal/shoes
Introduction
This documentation set introduces the proxy ecosystem maintained in this repository, focusing on three core projects: Chimera_Client, Chimera, and Chimera_Server. Each module targets a different layer of the overall stack—client core, client GUI, and server core—but they share a common goal: delivering reliable, high-performance connectivity under diverse network conditions. The following chapters explain how these applications work together, what problems each component solves, and how teams can deploy and extend them.
In addition, the documentation for each project is divided into two major parts: a configuration guide for general users, intended for quick onboarding and day-to-day usage, and an advanced reference for developers, covering implementation details and extension capabilities to support further development.
System Topology
The reference deployment pairs clash-rs clients with one or more Chimera frontends, all built on the shared primitives exposed by chimera_core. Clients typically run on user devices or edge nodes, where they terminate local applications and translate outbound traffic into proxy-aware streams. These streams traverse secure tunnels toward Chimera, which performs authentication, routing, and protocol termination before forwarding packets to upstream services or the public internet.
Because the stack centers on chimera_core, upgrades to cipher suites, multiplexing strategies, or configuration schemas become instantly available to both sides, minimizing version skew. Observability is likewise unified: telemetry emitted at each layer shares identifiers so that request flows remain traceable end to end.
Chimera_Client
Currently Supported Protocols and Transports
- Trojan + WS
- Hysteria2
- REALITY + TCP
- SOCKS5
- HTTP
Planned Support
- xhttp
- grpc
- vmess
- wireguard
- ssh
Role and Objectives
chimera_client is the Clash-compatible client runtime in the Chimera ecosystem.
Its design goal is practical compatibility with existing Clash/Mihomo profiles, while using Rust’s type safety and async ecosystem to build a maintainable codebase.
For operators, this means:
- Preserve familiar configuration and policy mental models.
- Improve implementation clarity through explicit schema and module boundaries.
- Enable incremental parity: start from stable basics (e.g., SOCKS inbound + rules), then close feature gaps against clash-rs and Mihomo.
Relationship to Clash-rs and Mihomo
chimera_client documentation treats clash-rs and Mihomo as the two most important references:
- clash-rs: Rust-native reference for parser/runtime behavior and config semantics.
- Mihomo: de-facto production reference for broad ecosystem compatibility and advanced operational features.
In this chapter, each module page clearly marks:
- what works in `chimera_client` now,
- what the Clash/Mihomo-compatible target behavior is,
- and what migration precautions to apply today.
Architecture Overview
Internally, the client is organized into four layers:
- Configuration layer
- Parses Clash-style YAML into typed Rust structures.
- Handles defaults, validation, and hot-reload boundaries.
- Inbound/controller layer
- Owns local listeners (SOCKS/HTTP/mixed/TUN as parity evolves).
- Exposes management APIs for status, switching, and diagnostics.
- Policy and DNS layer
- Evaluates rule chains with first-match semantics.
- Provides DNS strategy primitives (system resolver today; Clash-style DNS target).
- Outbound transport layer
- Executes protocol handshakes and stream forwarding.
- Encapsulates protocol-specific knobs while sharing common TLS/socket utilities.
This split mirrors common Clash-family architecture and reduces coupling between parser, runtime, and protocol engines.
Module Guide
Each functional area is documented independently:
- Ports and listeners: key mapping and current inbound support. See Ports and Listeners.
- DNS module: fake-IP vs real-IP models, resolver policy, and current implementation status. See DNS Module.
- TUN module: route-all/split-route semantics and Linux policy-routing notes. See Tun Module.
- Rules module: rule taxonomy, ordering strategy, and provider-based policy composition. See Rule Types and Their Effects.
Compatibility Snapshot (English Docs, Current)
| Area | chimera_client (current) | clash-rs / Mihomo reference |
|---|---|---|
| Inbound listeners | socks_port available; others partial/in progress | Full Clash-family listener matrix |
| DNS | Primarily system resolver path; Clash-style block documented as target | Mature fake-IP/real-IP/split resolver workflows |
| TUN | Documented target model; not fully active on mainline | Mature cross-platform implementations |
| Rules | Core Clash rule language documented and aligned | Full rule/provider ecosystem |
Use this table as a reading index: module pages go deeper with examples and caveats.
Deployment Patterns
Current recommended pattern for production-like use:
- Start with SOCKS-based local proxying.
- Keep DNS conservative unless you are validating an in-progress DNS branch.
- Use explicit rule ordering and small provider sets first, then scale.
- Add TUN only when parity branch and environment prerequisites are verified.
For CI/testing, keep one minimal profile and one Clash/Mihomo-parity profile to detect parser/runtime divergence early.
Performance and Operational Focus
The long-term performance strategy is aligned with Clash-family workloads:
- predictable low-overhead rule matching,
- bounded memory behavior in long-lived sessions,
- and high observability for policy debugging.
When introducing parity features (DNS/TUN/listeners), prioritize deterministic behavior and debuggability over implicit “magic” defaults.
Reference Repositories
- chimera_client: https://github.com/MFSGA/Chimera_Client
- clash-rs: https://github.com/Watfaq/clash-rs
- mihomo: https://github.com/MetaCubeX/mihomo
Ports and Listeners
Overview
Ports define how local applications, dashboards, and DNS resolvers enter chimera_client.
This page cross-references Clash-rs and Mihomo semantics, then states the current chimera_client status explicitly.
Key Mapping
| Clash / Mihomo key | chimera_client key | Purpose |
|---|---|---|
| `port` / `http-port` | `port` | HTTP CONNECT / plain proxy listener |
| `socks-port` | `socks_port` | SOCKS5 listener |
| `mixed-port` | `mixed_port` | Shared HTTP+SOCKS listener |
| `redir-port` | `redir_port` | Linux TCP transparent REDIRECT |
| `tproxy-port` | `tproxy_port` | Linux TPROXY (TCP/UDP) |
| `external-controller` | `external_controller` | REST controller endpoint |
| `dns.listen` | `dns.listen` | Local DNS socket |
Behavior by Listener Type
HTTP proxy (port / http-port)
- Typical browser/system-proxy entrypoint.
- Expects HTTP CONNECT and HTTP proxy semantics.
SOCKS5 (socks-port)
- Most widely compatible app-level inbound.
- Supports direct app integration without kernel routing changes.
Mixed (mixed-port)
- Single port for both HTTP and SOCKS protocols.
- Useful where clients only allow one proxy endpoint setting.
Redir (redir-port)
- Linux TCP transparent capture via iptables REDIRECT.
- Does not capture UDP by itself.
TProxy (tproxy-port)
- Transparent TCP+UDP capture path on Linux.
- Requires policy routing (`fwmark` + route table) and firewall integration.
External controller (external-controller)
- Management API for dashboards and automation.
- Prefer loopback binding unless remote access is required.
DNS listen (dns.listen)
- Local resolver socket used by fake-IP/split-DNS workflows.
- Usually paired with TUN or transparent proxy mode.
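As a quick sanity check for a SOCKS5 inbound like the one above, the initial method-negotiation round trip from RFC 1928 can be sketched in Python. This is an illustrative helper, not part of chimera_client; host and port are assumptions matching the examples in this page:

```python
import socket

def socks5_method_negotiation(host: str = "127.0.0.1", port: int = 7891) -> bool:
    """Perform only the SOCKS5 method-negotiation step (RFC 1928) and
    report whether the listener accepted 'no authentication'."""
    with socket.create_connection((host, port), timeout=3) as s:
        # VER=5, NMETHODS=1, METHODS=[0x00 no-auth]
        s.sendall(b"\x05\x01\x00")
        reply = s.recv(2)
    # Expected reply: VER=5, METHOD=0x00 (no-auth accepted)
    return reply == b"\x05\x00"
```

If this returns `False` (or times out), the listener is either down, bound elsewhere, or requires authentication.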
Compatibility Status Matrix
| Feature | chimera_client now | clash-rs / Mihomo |
|---|---|---|
| SOCKS5 inbound | Available | Available |
| SOCKS UDP associate | Limited / disabled in current notes | Available |
| HTTP inbound | Reserved/planned | Available |
| Mixed inbound | Reserved/planned | Available |
| Redir inbound | Reserved/planned | Available |
| TProxy inbound | Reserved/planned | Available |
| External controller | Under active development | Mature |
| Local DNS listener | Under active development | Mature |
Recommended Usage Today
- Prefer `socks_port` as the stable ingress.
- Keep management and DNS bindings on `127.0.0.1` during development.
- Treat non-SOCKS inbounds as compatibility keys unless your branch explicitly enables them.
Configuration Examples
Minimal chimera_client profile (current-safe)
bind_address: "127.0.0.1"
allow_lan: false
socks_port: 7891
dns:
  enable: false
  ipv6: false
Clash / Mihomo reference layout
port: 7890
socks-port: 7891
mixed-port: 7892
redir-port: 7893
tproxy-port: 7894
external-controller: 127.0.0.1:9090
dns:
  listen: 127.0.0.1:1053
Migration Notes
When importing a profile from Mihomo or clash-rs:
- keep original keys for readability,
- map to `chimera_client`-accepted keys where necessary,
- disable inbounds not yet active,
- verify with live connection tests before enabling LAN exposure.
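The key mapping step can be sketched as a small translation table; `translate_keys` and the dict shape are illustrative helpers based on the Key Mapping table above, not a chimera_client API:

```python
# Hypothetical sketch: translate renamed Clash/Mihomo keys to the
# chimera_client spellings; unknown keys pass through untouched.
CLASH_TO_CHIMERA = {
    "http-port": "port",
    "socks-port": "socks_port",
    "mixed-port": "mixed_port",
    "redir-port": "redir_port",
    "tproxy-port": "tproxy_port",
    "external-controller": "external_controller",
}

def translate_keys(profile: dict) -> dict:
    return {CLASH_TO_CHIMERA.get(k, k): v for k, v in profile.items()}
```

Nested sections such as `dns` keep their own key names and are not rewritten by this sketch.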
DNS Module
Scope and Goals
The DNS module decides how domains are resolved before policy routing. In Clash-family clients, DNS behavior strongly affects rule hit accuracy, latency, and anti-pollution resilience.
This page uses clash-rs and Mihomo as reference behavior, while marking current chimera_client maturity.
Why DNS Design Matters
- Domain rules require stable mapping between query results and connection flow.
- Fake-IP mode can preserve domain intent even when apps later connect by IP.
- Resolver choice impacts censorship resistance, startup reliability, and privacy leakage.
Configuration Areas
- Upstreams: UDP / DoH / DoT endpoints and ordering.
- Mode: fake-IP vs real-IP.
- Policy routing for DNS: nameserver-policy and fallback strategy.
- Cache strategy: capacity, TTL bounds, prefetch behavior.
- Safety controls: fake-IP filters, hosts overrides, ECS handling.
- Bootstrap: plain DNS for resolving encrypted DNS endpoints.
Mode Comparison
| Mode | Advantages | Trade-offs | Typical use |
|---|---|---|---|
| `fake-ip` | Better domain-rule retention after connect | Needs a careful filter list | TUN / transparent proxy deployments |
| `redir-host` / real-IP style | Simpler app compatibility | Domain intent can be lost after IP connect | App-level proxy with conservative DNS goals |
Resolver Selection Flow (Reference)
- Check hosts override and cache.
- Choose resolver by policy (domain/set-based) or default list.
- Query primary resolver(s).
- Run fallback path when validation/latency criteria fail.
- Cache and return answer.
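The reference flow above can be sketched as a single function; the resolver callables, policy shape, and `validate` hook are illustrative assumptions, not chimera_client APIs:

```python
# Hypothetical sketch of the resolver-selection flow described above.
def resolve(domain, hosts, cache, policy, default_resolvers, fallback_resolvers, validate):
    # 1. Check hosts override and cache.
    if domain in hosts:
        return hosts[domain]
    if domain in cache:
        return cache[domain]
    # 2. Choose resolvers by policy (longest matching suffix) or the default list.
    resolvers = default_resolvers
    for suffix, special in sorted(policy.items(), key=lambda kv: -len(kv[0])):
        if domain == suffix or domain.endswith("." + suffix):
            resolvers = special
            break
    # 3./4. Query primaries; fall through to the fallback path when
    # validation (latency/poisoning checks) rejects an answer.
    for query in list(resolvers) + list(fallback_resolvers):
        answer = query(domain)
        if answer is not None and validate(answer):
            cache[domain] = answer  # 5. Cache and return the answer.
            return answer
    return None
```

In a real client the `validate` step is where fallback-filter criteria (GeoIP checks, bogus-answer detection) would plug in.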
Compatibility Status
| Capability | chimera_client now | clash-rs / Mihomo reference |
|---|---|---|
| System resolver passthrough | Primary path | Also supported |
| Clash-style local DNS server | In progress | Mature |
| Fake-IP workflow | Target state | Mature |
| Nameserver policy / fallback filter | Target state | Mature |
Configuration References
chimera_client (current conservative profile)
dns:
  enable: false
  ipv6: false
Clash/Mihomo-aligned target schema
dns:
  enable: true
  listen: 127.0.0.1:1053
  ipv6: false
  enhanced-mode: fake-ip
  fake-ip-range: 198.18.0.1/16
  fake-ip-filter:
    - "*.lan"
    - "*.local"
  default-nameserver:
    - 1.1.1.1
    - 8.8.8.8
  nameserver:
    - https://dns.alidns.com/dns-query
    - tls://1.1.1.1:853
  fallback:
    - 9.9.9.9
  fallback-filter:
    geoip: true
    geoip-code: CN
Practical Guidance
- Start from real-IP/system resolver behavior for stability.
- Enable fake-IP only after validating domain-rule workflows end-to-end.
- Keep fake-IP exclusions narrow and auditable.
- Track fallback hit ratio; sudden growth often means upstream degradation or blocking.
Troubleshooting Checklist
- Verify listener reachability: `dig @127.0.0.1 -p 1053 example.com`.
- Ensure the OS/TUN resolver path really points to the client listener.
- Inspect logs for timeout, TLS, or response-validation failures.
- Test with one plain UDP resolver to isolate encrypted-DNS transport issues.
Alignment References
- clash-rs: DNS schema and runtime behavior under `clash-lib/src/config` and the DNS runtime modules.
- Mihomo: production reference for `enhanced-mode`, `nameserver-policy`, and `fallback-filter` semantics.
Tun Module
Scope and Goals
TUN mode captures Layer-3 traffic from the host and sends it into the proxy pipeline without requiring each application to be proxy-aware. Compared with HTTP/SOCKS listeners, TUN is the closest to “system-wide proxy” behavior and is usually combined with policy routing and DNS control.
This page follows clash-rs semantics and config keys so future chimera_client parity is straightforward.
Current Project Status
- `chimera_client` (current mainline) does not yet expose a `tun` block in its config parser.
- The TUN section below is the clash-rs-aligned target shape and operational guidance, not a statement that all fields are already active in `chimera_client`.
- Today, use SOCKS/listener-based workflows in `chimera_client` unless you are validating an in-progress TUN branch.
Configuration Schema (Clash-rs Aligned)
tun:
  enable: true
  device-id: "dev://utun1989"
  route-all: true
  gateway: "198.18.0.1/24"
  gateway-v6: "fd00:fac::1/64"
  mtu: 1500
  so-mark: 3389
  route-table: 2468
  dns-hijack: true
  # dns-hijack:
  #   - 1.1.1.1:53
  #   - 8.8.8.8:53
  # routes:
  #   - 1.1.1.1/32
  #   - 2001:4860:4860::8888/128
Key Fields and Semantics
| Key | Type | Default | Notes |
|---|---|---|---|
| `enable` | bool | `false` | Enables the TUN runtime. |
| `device-id` | string | `utun1989` | Accepts `dev://<name>`, `fd://<n>`, or a plain name (treated as a device name). |
| `gateway` | CIDR string | `198.18.0.1/24` | IPv4 address/prefix assigned to the TUN interface. |
| `gateway-v6` | CIDR string | unset | Optional IPv6 address/prefix for dual-stack TUN. |
| `route-all` | bool | `false` | Route all host traffic through TUN. |
| `routes` | list<CIDR> | empty | Used when `route-all: false` to route only selected prefixes. |
| `mtu` | u16 | platform default | Runtime uses 1500 by default (65535 on Windows) if unset. |
| `so-mark` | u32 | unset | Linux fwmark for loop prevention / policy-routing integration. |
| `route-table` | u32 | `2468` | Linux policy-routing table used by the TUN route-all path. |
| `dns-hijack` | bool or list | `false` | Enables DNS interception in the TUN path. |
Per-Option Behavior (Clash-rs Source of Truth)
This section expands every tun field from TunConfig in clash-rs and explains practical impact.
enable
- Turns the whole TUN pipeline on/off.
- `false` means all other `tun` fields are ignored at runtime.
device-id
- Interface identifier and creation mode.
- Accepted forms in the clash-rs parser:
  - `dev://<name>`: create/use a named TUN device.
  - plain `<name>`: treated as `dev://<name>`.
  - `fd://<n>`: adopt an already-open file descriptor (advanced embedding / systemd style).
- Alias keys accepted by the parser: `device-url`, `device`.
gateway
- IPv4 CIDR assigned to the TUN NIC, e.g. `198.18.0.1/24`.
- This defines both the local TUN IP and the prefix used for routing decisions.
gateway-v6
- Optional IPv6 CIDR for TUN NIC.
- If omitted, IPv6 handling in TUN path is effectively disabled.
route-all
- `true`: full tunnel; installs default-path-style routes/rules.
- `false`: split tunnel; only prefixes in `routes` are sent to TUN.
- If `route-all: true`, the `routes` list becomes operationally irrelevant.
routes
- CIDR list used for split-tunnel mode (`route-all: false`).
- Typical use: route only specific public resolvers, target regions, or service networks.
mtu
- TUN interface MTU override.
- Leave unset for runtime/platform defaults; set explicitly when encountering fragmentation/PMTU issues.
so-mark
- Linux-only fwmark attached to outbound packets.
- Used with `ip rule` / iptables / nftables to avoid proxy loops and integrate custom policy routing.
route-table
- Linux-only policy route table index used by clash-rs TUN route installation.
- Default is `2468`; change it when your system already uses that table number.
dns-hijack
- `false`: do not redirect DNS in the TUN path.
- `true`: hijack UDP/53 queries to the Clash DNS service.
- list: clash-rs currently treats list mode as enabling hijack behavior (same effect as `true`).
Mihomo Gap Notes (Based on Public Tun Docs)
Compared with the clash-rs schema above, the following items are not documented as first-class tun keys in mihomo (same-name or same-shape):
- `device-id` with the `fd://<n>` file-descriptor form.
  - Mihomo docs expose `device` but do not document fd-based takeover syntax.
- `gateway` / `gateway-v6` explicit interface-address assignment fields.
  - Mihomo tun docs focus on route/rule controls and do not expose clash-rs-style gateway CIDR keys.
- The exact `route-all` + `routes` pair.
  - Mihomo uses `auto-route`, `route-address`, and `route-exclude-address` style controls instead of the clash-rs key shape.
- The exact `route-table` naming.
  - Mihomo exposes `iproute2-table-index` / `iproute2-rule-index`; functionally close but not the same key contract.

Note: `so-mark` in clash-rs and `routing-mark` in mihomo are conceptually similar (Linux packet mark), so this is a naming/compatibility difference, not a missing capability.
Device-ID Formats
- `dev://utun1989` or `utun1989`: create/use a named TUN device.
- `dev://tun0`: common Linux style.
- `fd://3`: use an existing file descriptor, useful when another component owns TUN creation.
On macOS, the device name must use the utun prefix.
Routing Behavior
route-all: true
- Linux: uses policy-routing rules and a dedicated route table (`route-table`).
- macOS/Windows: installs broad default-route entries through TUN.
- DNS hijack integration is tied to this path on Linux (a policy rule for destination port 53).
route-all: false
- Only CIDRs in `routes` are routed through TUN.
- This is safer for partial rollout and avoids taking over all host traffic.
If both are configured, route-all takes precedence operationally.
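The route decision described above can be sketched in a few lines; `goes_through_tun` is an illustrative helper, not the clash-rs routing code:

```python
import ipaddress

def goes_through_tun(dst_ip: str, route_all: bool, routes: list[str]) -> bool:
    """Illustrative sketch of the decision above: route-all wins outright;
    otherwise the destination must fall inside a split-tunnel prefix."""
    if route_all:
        return True
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in routes)
```

This also shows why `routes` is irrelevant under `route-all: true`: the list is never consulted.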
DNS Interaction
- `dns-hijack` controls DNS interception in the TUN flow, but it does not replace a working DNS module config.
- For predictable domain-based routing, pair TUN with DNS settings (`dns.enable`, a resolver list, and a fake-IP strategy where needed).
- In practice, `dns-hijack: true` is commonly paired with fake-IP mode in Clash-style deployments.
Linux Notes (Policy Routing)
- Ensure `iproute2` (the `ip` command) is available.
- Run with sufficient privileges (CAP_NET_ADMIN or root-equivalent).
- Prefer setting `so-mark` and keep it consistent with your external policy rules to avoid proxy loops.
Quick checks:
ip rule
ip route show table 2468
ip -6 route show table 2468
Example Profiles
Full-tunnel profile
tun:
  enable: true
  device-id: "dev://utun1989"
  route-all: true
  gateway: "198.18.0.1/24"
  dns-hijack: true
Split-route profile
tun:
  enable: true
  device-id: "dev://tun0"
  route-all: false
  gateway: "198.18.0.1/24"
  routes:
    - 1.1.1.1/32
    - 8.8.8.8/32
  dns-hijack: false
FD-based profile
tun:
  enable: true
  device-id: "fd://3"
  route-all: true
  gateway: "198.18.0.1/24"
Troubleshooting Checklist
- Verify process privileges first; TUN creation and route changes fail silently in many restricted environments.
- Confirm the interface exists (`ip addr`, `ifconfig`, or the platform equivalent).
- Validate route/rule installation after startup.
- If DNS appears broken, verify the DNS listener is reachable and the system resolver path actually passes through TUN.
- If traffic loops or stalls, check `so-mark` / policy-rule alignment and existing host firewall rules.
References and Alignment Notes
- Clash-rs config schema: `clash-lib/src/config/def.rs` (`TunConfig`, defaults, `dns-hijack` shape).
- Clash-rs config conversion: `clash-lib/src/config/internal/convert/tun.rs`.
- Clash-rs tun runtime and device parsing: `clash-lib/src/proxy/tun/inbound.rs`.
- Clash-rs route behavior: `clash-lib/src/proxy/tun/routes/{linux,macos,windows}.rs`.
- Clash-rs sample profile: `clash-bin/tests/data/config/tun.yaml`.
- Chimera_Client current parser snapshot: `clash-lib/src/config/def.rs` (no `tun` block yet on mainline).
- Mihomo tun documentation (for key-shape comparison): https://wiki.metacubex.one/config/inbound/tun/.
Rule Types and Their Effects
Overview
In chimera_client, rules decide which outbound group handles a flow.
Evaluation is top-to-bottom, first match wins, consistent with Clash-rs and Mihomo behavior.
Rule Evaluation Model
Input signals commonly include:
- domain indicators (SNI / Host),
- resolved destination IP,
- destination/source ports,
- process identity (when platform supports it),
- GeoIP/GeoSite datasets,
- and external rule-provider sets.
Typical actions route traffic to policy groups such as DIRECT, REJECT, Proxy, or Auto.
Common Domain Rules
DOMAIN
Exact hostname match.
rules:
- DOMAIN,api.github.com,Proxy
DOMAIN-SUFFIX
Suffix match (includes subdomains).
rules:
- DOMAIN-SUFFIX,google.com,Proxy
DOMAIN-KEYWORD
Substring-based domain match. Use carefully to avoid overmatching.
rules:
- DOMAIN-KEYWORD,openai,Proxy
IP and Network Rules
IP-CIDR
IPv4 destination prefix match.
rules:
- IP-CIDR,1.1.1.0/24,DIRECT
IP-CIDR6
IPv6 destination prefix match.
rules:
- IP-CIDR6,2606:4700::/32,DIRECT
SRC-IP-CIDR
Source subnet match (useful on routers/gateways).
rules:
- SRC-IP-CIDR,192.168.50.0/24,GameProxy
GEOIP
Country/region IP database match.
rules:
- GEOIP,CN,DIRECT
GEOSITE
Domain category/list match.
rules:
- GEOSITE,geolocation-!cn,Proxy
Port and Process Rules
DST-PORT
Destination-port-based routing.
rules:
- DST-PORT,443,Proxy
SRC-PORT
Source-port-based routing.
rules:
- SRC-PORT,60000-60100,DIRECT
PROCESS-NAME
Executable name match.
rules:
- PROCESS-NAME,Telegram.exe,Proxy
PROCESS-PATH
Full executable path match.
rules:
- PROCESS-PATH,/Applications/Discord.app/Contents/MacOS/Discord,Proxy
Provider and Logical Rules
RULE-SET
References remote/local provider-managed rule collections.
rule-providers:
  streaming:
    type: http
    behavior: domain
    url: https://example.com/streaming.yaml
    interval: 86400
    path: ./ruleset/streaming.yaml
rules:
- RULE-SET,streaming,Proxy
MATCH
Final catch-all fallback.
rules:
- MATCH,DIRECT
Recommended Ordering
1. Security blocks and guaranteed bypass (`REJECT`, private/local `DIRECT`).
2. Precise business rules (`DOMAIN`, `PROCESS-PATH`, `IP-CIDR`).
3. Provider/category rules (`RULE-SET`, `GEOSITE`).
4. Broad heuristics (`DOMAIN-KEYWORD`, `GEOIP`).
5. Final `MATCH` fallback.
Compatibility Notes (clash-rs + Mihomo)
- First-match-wins semantics are aligned.
- Rule syntax is largely portable, but behavior still depends on DNS mode and inbound type.
- Process-level rules are platform-sensitive; validate on each target OS.
- GeoIP/GeoSite freshness directly impacts correctness.
Minimal Mixed Example
rules:
- DOMAIN,internal.example.com,DIRECT
- DOMAIN-SUFFIX,corp.example.com,DIRECT
- PROCESS-NAME,Telegram.exe,Proxy
- GEOSITE,category-ads-all,REJECT
- GEOIP,CN,DIRECT
- RULE-SET,streaming,Proxy
- MATCH,Auto
Operational Tips
- Keep intent explicit; avoid broad early rules.
- Version-control rule providers and refresh intervals.
- Enable connection decision logging when debugging mismatches.
- Validate DNS strategy and rule strategy together, especially with fake-IP/TUN.
Chimera GUI
Design Goals
Chimera serves as a high-performance ingress layer responsible for terminating client sessions, enforcing policies, and forwarding traffic to the target destination. A single ingress port can simultaneously expose multiple proxy protocols.
The core design priorities of Chimera are:
- minimizing handshake latency,
- providing fine-grained access control,
- ensuring cross-platform compatibility,
- enabling horizontal scalability,
- and offering built-in observability.
Currently Supported Platforms
First Tier
- 🖥️ Windows
- 🐧 Ubuntu
- 🍎 macOS
Second Tier
- ❄️ NixOS
Currently Supported Protocols
Please refer to Chimera_Client and clash-rs.
Explanation of Chimera’s Runtime Configuration Generation Mechanism
1. Key Takeaways
The configuration consumed by the core (e.g., chimera_client / mihomo) at startup and during hot-reload is not read directly from profiles.yaml.
What the core actually uses is a runtime file:
`clash-config.yaml` (runtime configuration, located under `app_config_dir`)
This file is first composed in memory by the backend, then written to disk, and finally passed to the core either via startup arguments or Clash API hot-reload.
2. Configuration Inputs (Raw Materials)
The runtime configuration is built from four categories of inputs:
- Application settings: `chimera-config.yaml`
  - Struct: `IVerge`
  - Load entry: `backend/tauri/src/config/chimera/mod.rs` → `IVerge::new()`
  - Purpose: controls field filtering, port strategy, TUN/system-proxy behavior, etc.
- Clash Guard override template: `clash-guard-overrides.yaml`
  - Struct: `IClashTemp`
  - Load entry: `backend/tauri/src/config/clash/mod.rs` → `IClashTemp::new()`
  - Purpose: forcibly overrides critical fields (e.g., `mode`, `mixed-port`, `external-controller`, `secret`, etc.)
- Profile metadata: `profiles.yaml`
  - Struct: `Profiles`
  - Load entry: `backend/tauri/src/config/profile/profiles.rs` → `Profiles::new()`
  - Purpose: records the currently active profile (`current`) and the profile list (`items`)
- Concrete profile content files: `app_config_dir/profiles/*.yaml`
  - Load entry: `Profiles::current_mappings()`
  - Purpose: provides the actual configuration content such as proxies, rules, DNS, TUN, etc.
3. Startup Initialization Phase
3.1 Creating Base Files (If Missing)
backend/tauri/src/utils/init/mod.rs → init_config() ensures the following files exist:
- `clash-guard-overrides.yaml` (generated by default via `IClashTemp::template()`)
- `chimera-config.yaml` (generated by default via `IVerge::template()`)
- `profiles.yaml` (an empty/default configuration)
3.2 Loading Global Configuration Objects
backend/tauri/src/config/core.rs → Config::global() initializes:
- `Profiles::new()`
- `IVerge::new()`
- `IClashTemp::new()`
- `IRuntime::new()`
`IRuntime` is the in-memory runtime configuration container; its `config` field is an `Option<Mapping>`.
4. Main Runtime Configuration Composition Flow
Main entry: Config::generate() (backend/tauri/src/config/core.rs)
flowchart TD
  A["Start Config::generate"] --> B["Call enhance::enhance"]
  B --> C["Read YAML(s) of current profile"]
  C --> D["merge_profiles: merge configs"]
  D --> E["(Optional) whitelist-based field filtering"]
  E --> F["Override key fields (HANDLE_FIELDS)"]
  F --> G["Write into in-memory runtime config"]
  G --> H["Write out clash config YAML"]
  H --> I["Load at startup or hot-reload via PUT /configs"]
4.1 What enhance::enhance() Does
Location: backend/tauri/src/enhance/mod.rs
Core steps:
1. Load the Clash Guard configuration
   - `let clash_config = Config::clash().latest().0.clone()`
2. Read current feature toggles/settings
   - such as `enable_clash_fields`, from `IVerge`
3. Load the content of the currently active profile(s)
   - via `Profiles::current_mappings()`
   - this method iterates over `current`, reads `profiles/<file>.yaml` one by one, and converts them into `Mapping`
4. (Reserved) Execute profile chain scripts
   - calls `process_chain(...)`
   - the current implementation is a placeholder (no-op) that returns the original config
5. Merge multiple profile configurations
   - calls `merge_profiles(...)`
   - current strategy:
     - first config: full `extend`
     - subsequent configs: only append `proxies` to the existing `proxies`
6. Whitelist field filtering (optional, controlled by a toggle)
   - `use_whitelist_fields_filter(...)`
   - when `enable_clash_fields = true`, only keys in the valid + default field sets are retained
7. Force-override Guard fields
   - writes back the fields listed in `HANDLE_FIELDS` from `IClashTemp` into the final config
   - ensures critical control fields are centrally managed by the client
4.2 Scope of HANDLE_FIELDS Overrides
Defined in backend/tauri/src/enhance/field.rs:
`mode`, `port`, `socks-port`, `mixed-port`, `allow-lan`, `log-level`, `ipv6`, `secret`, `external-controller`
This means that even if these fields exist in a profile, the final values will be overwritten by the corresponding values from clash-guard-overrides.yaml.
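The merge-then-override behavior can be illustrated with a small Python sketch. The real implementation is Rust (`backend/tauri/src/enhance/`); dict-based mappings and these simplified semantics are assumptions for illustration only:

```python
# Illustrative sketch of merge_profiles(...) plus the HANDLE_FIELDS override.
HANDLE_FIELDS = [
    "mode", "port", "socks-port", "mixed-port", "allow-lan",
    "log-level", "ipv6", "secret", "external-controller",
]

def merge_profiles(profiles: list[dict]) -> dict:
    # First config: full extend; later configs only append `proxies`.
    merged = dict(profiles[0]) if profiles else {}
    for extra in profiles[1:]:
        merged["proxies"] = merged.get("proxies", []) + extra.get("proxies", [])
    return merged

def apply_guard(merged: dict, guard: dict) -> dict:
    # Guard values win for every key listed in HANDLE_FIELDS.
    out = dict(merged)
    for key in HANDLE_FIELDS:
        if key in guard:
            out[key] = guard[key]
    return out
```

The key observation: a profile's own `mixed-port` or `secret` survives the merge step but is then replaced by the guard value, which is why editing those fields inside a profile has no effect.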
5. Writing to Disk and File Locations
Entry: Config::generate_file(ConfigType::Run) (backend/tauri/src/config/core.rs)
- Output in `Run` mode: `app_config_dir()/clash-config.yaml`
- Output in `Check` mode: `temp_dir()/clash-config-check.yaml`
If generation fails, Config::init_config() provides a fallback: it writes IClashTemp directly as the runtime configuration.
6. How the Core Obtains This Configuration
6.1 Loaded at Startup
CoreManager::run_core() → Instance::try_new() (backend/tauri/src/core/clash/core.rs):
- Calls `Config::generate_file(ConfigType::Run)` to get the path
- Passes the path to the core process via `CoreInstanceBuilder.config_path(config_path)`
In other words: the core reads clash-config.yaml directly at startup.
6.2 Hot-Reload During Runtime
CoreManager::update_config() flow:
1. `Config::generate().await?` recomposes the in-memory config
2. `check_config().await?` validates syntax/usability using the check file
3. `generate_file(Run)` rewrites `clash-config.yaml`
4. Calls `PUT /configs` with body `{ "path": "<absolute path>" }` to instruct the core to reload
Relevant code:
- `backend/tauri/src/core/clash/core.rs` → `update_config()`
- `backend/tauri/src/core/clash/api.rs` → `put_configs(...)`
7. User Actions That Trigger a Rebuild
7.1 Switching/Modifying Profile Selection
Frontend commands.patchProfilesConfig → backend patch_profiles_config(...):
- Apply draft: `Config::profiles().draft().apply(...)`
- Trigger `CoreManager::update_config()`
- On success: `Config::profiles().apply()` + `save_file()`
- On failure: `discard()` (rollback)
7.2 Changing Settings (Certain Fields)
Frontend commands.patchVergeConfig → backend feat::patch_verge(...):
- Writes an `IVerge` draft first
Some fields (e.g.,
enable_tun_mode) may trigger:Config::generate()+run_core()(restart scenario)- or
update_core_config()(hot-update scenario)
7.3 Importing the First Profile
After import_profile(...) succeeds, if there is no active profile yet, it automatically constructs ProfilesBuilder.current = [new_uid] and reuses patch_profiles_config(...) to trigger an update.
8. Key Details and Common Misunderstandings
-
`profiles.yaml` is not the final configuration used by the core
- it only stores profile metadata and the `current` pointer
- Profile content files are not passed to the core verbatim
  - they become the runtime config only after merging, filtering, and guard overrides
- `external-controller` may have its port changed before startup
  - `prepare_external_controller_port()` checks port availability according to policy and switches ports if necessary
- `verge_mixed_port` is primarily used for system-proxy logic
  - it is not directly written to the runtime YAML's `mixed-port`
  - system proxy uses `verge_mixed_port` first, otherwise falls back to `Config::clash().get_mixed_port()`
- `get_runtime_yaml()` returns `IRuntime.config` from memory
  - it is usually consistent with the recently written `clash-config.yaml`
  - but fundamentally it comes from memory, not from re-reading disk each time
9. Current Implementation Limitations (As of the Codebase)
- Chain script execution is currently a placeholder
  - `process_chain(...)` does not actually rewrite the config yet
- Global chain processing code is still commented out
  - only the scoped chain framework exists for now
- `patch_clash_config` IPC is still `todo!()`
  - the frontend will fail if it uses that IPC path
Directly editing a profile file does not automatically trigger a hot-reload
save_profile_file(...)only writes the file; it does not callupdate_config()
10. Troubleshooting Checklist (Practical Order)
If you suspect “the core is using the wrong configuration,” check in this order:
1. Confirm the active profile is correct
   - verify `profiles.yaml` → `current`
2. Confirm the profile source content matches expectations
   - check `app_config_dir/profiles/*.yaml`
3. Check guard override items
   - verify whether `HANDLE_FIELDS` in `clash-guard-overrides.yaml` overrides the values you intended
4. Inspect the final runtime configuration
   - check `clash-config.yaml`
   - or call `get_runtime_yaml()` to view the in-memory version
5. Confirm a hot-reload actually occurred
   - verify that `patch_profiles_config` / `patch_verge_config` / `restart_sidecar` was executed
   - check logs to see whether `PUT /configs` succeeded
6. If you see port-related issues
   - check whether `external-controller` was rewritten by the port strategy
Service Mode Configuration
Scope and Intent
In Chimera GUI, service mode runs the proxy core as a background system service while the GUI acts as the control surface. This separation is important when you need stable long-running behavior, elevated networking privileges, or startup-before-login workflows.
Foreground Mode vs Service Mode
| Mode | Runtime shape | Typical use | Main limitation |
|---|---|---|---|
| Foreground mode | GUI process owns the core directly | Development and quick profile checks | Core stops when GUI exits or user logs out |
| Service mode | System service owns the core; GUI controls it via local IPC | Daily use, TUN/transparent routing, always-on setups | Requires service install and permission management |
Why Enable Service Mode
- Keep traffic forwarding alive even if the GUI is closed.
- Start proxy service automatically at boot/login with predictable lifecycle.
- Support privileged paths (for example TUN, policy routing, transparent capture) more reliably.
- Reduce behavior drift across user sessions on shared machines.
Configuration Workflow in Chimera GUI
- Prepare and validate your active profile in normal mode first.
- Open Chimera GUI settings and enable service mode.
- Install/register the service when prompted by the GUI.
- Choose startup policy:
- Manual: start only when needed.
- Automatic: start at system boot (recommended for always-on use).
- Apply settings and trigger a service restart from the GUI.
- Confirm the GUI can reconnect to the local control endpoint after restart.
Key Options and Recommended Defaults
Option labels may vary slightly by platform/build, but the intent is usually the same:
| GUI option (common naming) | Meaning | Suggested default |
|---|---|---|
| Enable Service Mode | Switch core runtime ownership to the system service | On for long-term daily usage |
| Install/Repair Service | Register or repair service metadata | Run after first enable and after upgrades |
| Start Service at Boot | Auto-start the service during system startup | On for TUN or gateway-style setups |
| Keep Running After GUI Exit | Leave the service active when the GUI closes | On |
| Require Elevation on Apply | Prompt for admin/root rights when applying privileged changes | On |
| Auto Recover on Crash | Restart the service process after abnormal exit | On |
Platform Notes
Windows
- Service mode is usually backed by Windows Service Control Manager.
- Use an elevated shell for first-time install/repair if GUI prompts fail.
- Verify state with:

```shell
Get-Service *chimera*
```
Linux
- Service mode is typically managed by `systemd` (`chimera.service` or a similar unit name).
- Prefer an explicit restart after profile changes that affect TUN/routing behavior.
- Verify state with:

```shell
systemctl status chimera.service
journalctl -u chimera.service -n 100 --no-pager
```
macOS
- Service mode is usually implemented through `launchd` (system daemon style).
- Ensure the GUI and service binaries come from the same build channel/version.
Rollout Strategy
- Start with SOCKS/listener-only profile and confirm baseline connectivity.
- Enable service mode and verify reconnect behavior after GUI restart.
- Enable advanced options (TUN, DNS hijack, transparent capture) incrementally.
- Reboot once and verify auto-start, rule hit behavior, and DNS resolution stability.
Troubleshooting Checklist
| Symptom | Likely cause | Fix |
|---|---|---|
| Service cannot start | Missing admin/root privileges | Reinstall/repair service with elevation |
| GUI shows “disconnected from core” | Control endpoint mismatch or service crash loop | Reapply service settings and inspect service logs |
| TUN features do not take effect | Service running but privileged route setup failed | Check system logs and permission/capability grants |
| Profile changes seem ignored | GUI saved config but service did not reload | Trigger explicit service restart from GUI |
| Traffic stops after logout | Foreground mode still active | Recheck that service mode is enabled and installed |
Operational Boundary
Service mode changes process lifecycle and permission model, not proxy policy semantics. Your rules, DNS strategy, and outbound definitions are still determined by the active Chimera profile.
chimera_server Library
Purpose and Scope
chimera_server is the shared Rust crate that provides protocol primitives, configuration schemas, crypto suites, and common utilities for both client and server projects. By centralizing these capabilities, the ecosystem avoids duplicated logic, ensures protocol compliance, and keeps security fixes consistent across binaries.
Key Modules
- Configuration model: strongly typed structures plus serde-based serialization for Clash manifests, Chimera manifests, and shared policy fragments.
- Crypto and handshake utilities: AEAD ciphers, key derivation, certificate pinning helpers, TLS fingerprint templates, and QUIC transport parameters.
- Transport abstractions: traits for stream/session lifecycles, multiplexing interfaces, buffer management, and async runtime adapters.
- Event bus: lightweight publish/subscribe mechanism so higher layers can tap into connection lifecycle events, metrics, and alerts.
API Surface and Extensibility
The crate exposes a stable Rust API along with optional C FFI bindings for other languages. Extension points allow third parties to register custom cipher suites, add routing annotations, or hook into telemetry emission. Versioning follows semver with clear migration guides whenever breaking changes occur, ensuring that clash-rs and Chimera can track upgrades smoothly.
Testing and Quality
chimera_server maintains exhaustive unit tests for parsers, crypto primitives, and transport behaviors. Integration suites spin up in-memory client/server pairs to validate interoperability before changes land. Benchmarks measure handshake latency, throughput, and memory footprint across representative hardware, providing baselines for regression detection.
Protocol
Overview
| Protocol | Default Transport | Authentication | Strengths | Typical Constraints |
|---|---|---|---|---|
| SOCKS5 | TCP control + optional UDP | Optional username/password | Works with almost any TCP app, UDP associate mode | Clear-text by default, needs TLS/obfs elsewhere |
| HTTP(S) CONNECT | TCP over HTTP/1.1 or HTTP/2 | Basic auth, bearer token, mutual TLS | Blends with web traffic, easy to deploy on gateways | Only proxies TCP, relies on intermediary keeping long-lived tunnels |
| Trojan | TLS over TCP | Pre-shared password validated inside TLS | Hard to fingerprint, benefits from CDN/SNI | Each password maps to a port/user, needs valid TLS certificate |
| Hysteria 2 | QUIC (UDP) with TLS 1.3 | Password or OIDC-like token | High throughput, UDP native, congestion tuning | Requires open UDP ports, MTU tuning important |
| TUIC | QUIC (UDP) with TLS 1.3 | UUID or token-based auth | 0-RTT friendly, multiplexed streams, low handshake overhead | Needs UDP reachability, QUIC fingerprinting varies by implementation |
| VLESS | TLS/XTLS over TCP or MKCP | UUID-based identity | Flexible multiplexing, optional XTLS auto-split | No encryption without TLS/XTLS layer, ecosystem-specific tooling |
| xHTTP Transport | HTTP-style stream over TLS/Reality | Usually UUID/token from upper protocol (e.g., VLESS) | Better web-traffic camouflage, friendly to reverse proxies/CDNs | Header/path mismatch breaks handshake; extra overhead versus raw TCP |
| Reality (TLS camouflage) | TLS 1.3-like handshake | Public key + short ID (plus upstream auth) | Certificate-less TLS mimicry, resistant to passive probing | Depends on client fingerprint matching, tied to Xray tooling |
Detailed breakdowns now live in dedicated files; each follows the same structure (highlights, flow, configuration snippet, strengths, and limitations) to make comparisons straightforward.
Deep Dives
- SOCKS5 – General-purpose TCP/UDP proxy with flexible method negotiation.
- HTTP CONNECT Proxy – HTTPS-friendly tunnels that ride over standard web ports.
- Trojan – TLS-camouflaged password proxy ideal for CDN fronting.
- Hysteria 2 – QUIC-based transport tuned for high-loss or high-latency links.
- TUIC – QUIC-based proxy with multiplexing and aggressive latency tuning.
- VLESS – UUID-auth protocol with configurable transports such as TLS, XTLS, or Reality.
- xHTTP Transport – HTTP-like transport profile for Xray ecosystems, often paired with VLESS.
- Reality – TLS camouflage layer used by Xray transports without certificates.
SOCKS5
Official RFC
The SOCKS version 5 protocol is specified primarily in RFC 1928.
Key related RFCs:
- RFC 1928 — SOCKS Protocol Version 5 (core protocol, addressing, UDP ASSOCIATE, authentication negotiation)
- RFC 1929 — Username/Password Authentication for SOCKS V5 (optional authentication method)
- RFC 1961 — GSS-API Authentication Method for SOCKS V5 (optional authentication)
- RFC 3089 — SOCKS-based IPv6/IPv4 Gateway (interoperability for IPv6 scenarios)
Highlights
- Layer-4 proxy that forwards arbitrary TCP streams and supports UDP via ASSOCIATE command.
- Method negotiation lets the server advertise `NO AUTH`, `USERPASS`, or custom authentication methods.
- Widely supported by browsers, curl, SSH, and VPN clients.
Flow
- Client opens a TCP socket to the proxy.
- Client sends a list of supported authentication methods; server responds with the chosen method.
- Optional username/password exchange takes place.
- Client issues `CONNECT`, `BIND`, or `UDP ASSOCIATE` with destination info.
- Server replies with a success/failure code and starts relaying traffic.
Configuration Snippet
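A Clash-style outbound entry for a SOCKS5 server might look like the sketch below; the hostname, port, and credentials are placeholders, and the field names follow common Clash conventions rather than any one client's authoritative schema.

```yaml
proxies:
  - name: socks5-example        # hypothetical entry name
    type: socks5
    server: proxy.example.com   # placeholder host
    port: 1080
    username: user              # optional; omit when the server offers NO AUTH
    password: pass
    udp: true                   # enable UDP ASSOCIATE support
```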
Strengths
- Works with legacy tooling without extra plugins.
- UDP associate makes DNS-over-UDP possible.
- Minimal framing overhead keeps latency low.
Limitations
- No built-in encryption; must rely on TLS-over-SOCKS or upstream obfuscation.
- UDP associate requires the client to keep listening on a local port, which some firewalls block.
- Authentication is static unless wrapped in a management layer.
References
- https://www.rfc-editor.org/rfc/rfc1928
- https://www.rfc-editor.org/rfc/rfc1929
- https://www.rfc-editor.org/rfc/rfc1961
- https://www.rfc-editor.org/rfc/rfc3089
HTTP
Highlights
- Presents itself as a normal HTTP(S) server and upgrades individual requests into tunnels via the `CONNECT` verb.
- Easy to front with Nginx, Apache, or cloud load balancers.
- Supports HTTP/2 multiplexing when both sides understand it.
Flow
- Client opens a TCP (or TLS) connection to the proxy endpoint.
- Client optionally performs HTTP auth (Basic, Digest, Bearer, or mutual TLS).
- Client sends `CONNECT target.example.com:443 HTTP/1.1` (or an HTTP/2 `:method: CONNECT` request).
- Proxy validates policy, then responds `200 Connection Established`.
- Subsequent bytes are relayed transparently until one side closes the tunnel.
Configuration Snippet
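A Clash-style outbound entry for an HTTP CONNECT proxy might look like the sketch below; the hostname, port, and credentials are placeholders.

```yaml
proxies:
  - name: http-connect-example   # hypothetical entry name
    type: http
    server: gateway.example.com  # placeholder host
    port: 443
    tls: true                    # send CONNECT inside a TLS session
    username: user               # optional basic auth
    password: pass
```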
Strengths
- Blends with standard HTTPS traffic; hard to distinguish from regular web browsing.
- Works well behind corporate firewalls that only permit ports 80/443.
- HTTP/2 variants allow many tunnels over one TCP session, reducing handshake cost.
Limitations
- TCP-only; cannot forward UDP flows without extra encapsulation.
- Proxies must maintain state per tunnel, which impacts scaling under many short-lived connections.
- Additional HTTP headers may leak metadata if not sanitized.
Trojan
Highlights
- Starts with a real TLS handshake; all subsequent bytes are TLS application data.
- Auth is a pre-shared password hashed with SHA-224 and hex encoded.
- Request framing reuses SOCKS5-style address fields for CONNECT and UDP ASSOCIATE.
- Invalid or unknown traffic can be forwarded to a fallback endpoint to look like normal HTTPS.
Flow
- Client completes a standard TLS handshake with the server (SNI/ALPN as configured).
- Client sends `hex(SHA224(password))` + CRLF + Trojan Request + CRLF (+ optional payload).
- Server validates the password and request, then connects to the destination.
- For TCP, data is relayed bidirectionally; for UDP, packets are framed and tunneled over the TLS stream.
Wire Format
- The precise framing and field definitions live in Wire Format.
- The first TLS record may include payload after the request to reduce packet count.
Traffic Handling
- Fallback behavior and anti-detection notes are in Traffic Handling.
Strengths
- Uses standard TLS stacks and certificates; inherits mature TLS security and ALPN support.
- Hard to fingerprint when served from a legitimate HTTPS endpoint.
- Minimal protocol overhead once the handshake completes.
Limitations
- Shared-password model means revocation is coarse unless per-user passwords are used.
- Requires valid TLS certificates and operational renewal.
- Fallback behavior must be configured to keep probes indistinguishable from real HTTPS.
References
- https://trojan-gfw.github.io/trojan/protocol
Trojan Wire Format
TLS Handshake
- The client performs a normal TLS handshake first.
- If the handshake fails, the server closes the connection like a regular HTTPS server.
- Some implementations also return an nginx-like response to plain HTTP probes.
Initial Request
After TLS is established, the first application data packet is:
```text
+-----------------------+---------+----------------+---------+----------+
| hex(SHA224(password)) |  CRLF   | Trojan Request |  CRLF   | Payload  |
+-----------------------+---------+----------------+---------+----------+
|          56           | 0x0D0A  |    Variable    | 0x0D0A  | Variable |
+-----------------------+---------+----------------+---------+----------+
```
Trojan Request
Trojan Request uses a SOCKS5-like format:
```text
+-----+------+----------+----------+
| CMD | ATYP | DST.ADDR | DST.PORT |
+-----+------+----------+----------+
|  1  |  1   | Variable |    2     |
+-----+------+----------+----------+
```
- CMD values: 0x01 CONNECT, 0x03 UDP ASSOCIATE.
- ATYP values: 0x01 IPv4, 0x03 DOMAINNAME, 0x04 IPv6.
- DST.ADDR is the destination address, DST.PORT is network byte order.
- SOCKS5 field details: https://tools.ietf.org/html/rfc1928
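Combining the two tables, the first application-data packet can be assembled in a few lines. The sketch below is illustrative rather than a reference implementation; the password and destination are placeholders.

```python
import hashlib
import struct

CRLF = b"\r\n"

def trojan_request(password: str, host: str, port: int, payload: bytes = b"") -> bytes:
    """Build the first Trojan packet: auth + CRLF + request + CRLF + payload."""
    auth = hashlib.sha224(password.encode()).hexdigest().encode()  # 56 hex bytes
    request = (
        b"\x01"                                         # CMD = CONNECT
        + b"\x03" + bytes([len(host)]) + host.encode()  # ATYP = DOMAINNAME
        + struct.pack(">H", port)                       # DST.PORT, network byte order
    )
    return auth + CRLF + request + CRLF + payload
```

For `UDP ASSOCIATE` the CMD byte would be `0x03` instead, and each datagram would then be framed as described in the next section.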
UDP Associate Framing
When CMD is UDP ASSOCIATE, each UDP datagram is framed in the TLS stream as:
```text
+------+----------+----------+--------+---------+----------+
| ATYP | DST.ADDR | DST.PORT | Length |  CRLF   | Payload  |
+------+----------+----------+--------+---------+----------+
|  1   | Variable |    2     |   2    | 0x0D0A  | Variable |
+------+----------+----------+--------+---------+----------+
```
- Length is the payload size in network byte order.
- Payload is the raw UDP datagram.
Notes
- The first TLS record can include payload immediately after the request, reducing packet count and length patterns.
- Clients often expose a local SOCKS5 proxy and translate local SOCKS5 requests into Trojan requests.
Trojan Traffic Handling
Other Protocols (Fallback)
- Trojan listens on a TLS socket like a normal HTTPS service.
- After TLS completes, the server inspects the first application data packet.
- If the packet is not a valid Trojan request (wrong structure or password), the server treats it as “other protocols” and forwards the decrypted TLS stream to a preset endpoint (default `127.0.0.1:80`).
- The preset endpoint then controls the response, keeping the behavior indistinguishable from a real HTTPS site.
Active Detection
- Probes without the correct structure or password are handed to the fallback endpoint.
- As a result, active scanners see ordinary HTTPS or HTTP behavior rather than a bespoke proxy banner.
Passive Detection
- With a valid certificate, traffic is protected by TLS and resembles ordinary HTTPS.
- For HTTP destinations, there is only one RTT after the TLS handshake; non-HTTP traffic often looks like HTTPS keepalive or WebSocket.
- This similarity can help bypass ISP QoS that targets obvious proxy signatures.
References
- https://github.com/trojan-gfw/trojan/issues/14
Hysteria 2 Protocol Specification
References
- https://v2.hysteria.network/zh/docs/developers/Protocol/
Hysteria is a TCP & UDP proxy based on QUIC, designed for speed, security and censorship resistance. This document describes the protocol used by Hysteria starting with version 2.0.0, sometimes internally referred to as the “v4” protocol. From here on, we will call it “the protocol” or “the Hysteria protocol”.
Requirements Language
The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119: https://tools.ietf.org/html/rfc2119
Underlying Protocol & Wire Format
The Hysteria protocol MUST be implemented on top of the standard QUIC transport protocol (RFC 9000) with the Unreliable Datagram Extension (RFC 9221).
All multibyte numbers use Big Endian format.
All variable-length integers (“varints”) are encoded/decoded as defined in QUIC (RFC 9000).
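As a concrete reference, QUIC varints store the byte length in the two most significant bits of the first byte, selecting a 1-, 2-, 4-, or 8-byte encoding. The following is a minimal encoder/decoder sketch based on RFC 9000, Section 16 (not taken from any Chimera codebase).

```python
def encode_varint(value: int) -> bytes:
    """Encode an integer as a QUIC variable-length integer (RFC 9000, Section 16)."""
    if value < 0x40:
        return value.to_bytes(1, "big")
    if value < 0x4000:
        return (value | 0x4000).to_bytes(2, "big")
    if value < 0x40000000:
        return (value | 0x80000000).to_bytes(4, "big")
    if value < 0x4000000000000000:
        return (value | 0xC000000000000000).to_bytes(8, "big")
    raise ValueError("value too large for a QUIC varint")

def decode_varint(data: bytes) -> tuple:
    """Decode a QUIC varint; return (value, bytes_consumed)."""
    length = 1 << (data[0] >> 6)          # top two bits select 1/2/4/8 bytes
    mask = (1 << (8 * length - 2)) - 1    # strip the two length bits
    return int.from_bytes(data[:length], "big") & mask, length
```

For example, the `0x401` TCPRequest ID used later in this document encodes to the two bytes `0x44 0x01`.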
Authentication & HTTP/3 masquerading
One of the key features of the Hysteria protocol is that to a third party without proper authentication credentials (whether it’s a middleman or an active prober), a Hysteria proxy server behaves just like a standard HTTP/3 web server. Additionally, the encrypted traffic between the client and the server appears indistinguishable from normal HTTP/3 traffic.
Therefore, a Hysteria server MUST implement an HTTP/3 server (RFC 9114) and handle HTTP requests as any standard web server would. To prevent active probers from detecting common response patterns in Hysteria servers, implementations SHOULD advise users to either host actual content or set it up as a reverse proxy for other sites.
An actual Hysteria client, upon connection, MUST send the following HTTP/3 request to the server:
```text
:method: POST
:path: /auth
:host: hysteria
Hysteria-Auth: [string]
Hysteria-CC-RX: [uint]
Hysteria-Padding: [string]
```

- `Hysteria-Auth`: Authentication credentials.
- `Hysteria-CC-RX`: Client’s maximum receive rate in bytes per second. A value of 0 indicates unknown.
- `Hysteria-Padding`: A random padding string of variable length.
The Hysteria server MUST identify this special request, and, instead of attempting to serve content or forwarding it to an upstream site, it MUST authenticate the client using the provided information. If authentication is successful, the server MUST send the following response (HTTP status code 233):
```text
:status: 233 HyOK
Hysteria-UDP: [true/false]
Hysteria-CC-RX: [uint/"auto"]
Hysteria-Padding: [string]
```

- `Hysteria-UDP`: Whether the server supports UDP relay.
- `Hysteria-CC-RX`: Server’s maximum receive rate in bytes per second. A value of 0 indicates unlimited; “auto” indicates the server refuses to provide a value and asks the client to use congestion control to determine the rate on its own.
- `Hysteria-Padding`: A random padding string of variable length.
See the Congestion Control section for more information on how to use the Hysteria-CC-RX values.
Hysteria-Padding is optional and is only intended to obfuscate the request/response pattern. It SHOULD be ignored by both sides.
If authentication fails, the server MUST either act like a standard web server that does not understand the request, or in the case of being a reverse proxy, forward the request to the upstream site and return the response to the client.
The client MUST check the status code to determine if the authentication was successful. If the status code is anything other than 233, the client MUST consider authentication to have failed and disconnect from the server.
After (and only after) a client passes authentication, the server MUST consider this QUIC connection to be a Hysteria proxy connection. It MUST then start processing proxy requests from the client as described in the next section.
Proxy Requests
TCP
For each TCP connection, the client MUST create a new QUIC bidirectional stream and send the following TCPRequest message:
```text
[varint] 0x401 (TCPRequest ID)
[varint] Address length
[bytes] Address string (host:port)
[varint] Padding length
[bytes] Random padding
```
The server MUST respond with a TCPResponse message:
```text
[uint8] Status (0x00 = OK, 0x01 = Error)
[varint] Message length
[bytes] Message string
[varint] Padding length
[bytes] Random padding
```
If the status is OK, the server MUST then begin forwarding data between the client and the specified TCP address until either side closes the connection. If the status is Error, the server MUST close the QUIC stream.
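As an illustration, the client-side TCPRequest framing above can be sketched as follows. This assumes addresses and padding stay below 2**14 bytes so a reduced varint encoder suffices, and the padding policy (random bytes of a fixed length) is an arbitrary choice, not mandated by the spec.

```python
import os

def _varint(value: int) -> bytes:
    """Reduced QUIC varint encoder for values below 2**14."""
    if value < 0x40:
        return value.to_bytes(1, "big")
    return (value | 0x4000).to_bytes(2, "big")

def tcp_request(address: str, pad_len: int = 8) -> bytes:
    """Frame a Hysteria 2 TCPRequest for a 'host:port' address string."""
    addr = address.encode()
    return (
        _varint(0x401)           # TCPRequest ID
        + _varint(len(addr))     # address length
        + addr                   # address string
        + _varint(pad_len)       # padding length
        + os.urandom(pad_len)    # random padding
    )
```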
UDP
UDP packets MUST be encapsulated in the following UDPMessage format and sent over QUIC’s unreliable datagram (for both client-to-server and server-to-client):
```text
[uint32] Session ID
[uint16] Packet ID
[uint8] Fragment ID
[uint8] Fragment count
[varint] Address length
[bytes] Address string (host:port)
[bytes] Payload
```
The client MUST use a unique Session ID for each UDP session. The server SHOULD assign a unique UDP port to each Session ID, unless it has another mechanism to differentiate packets from different sessions (e.g., symmetric NAT, varying outbound IP addresses, etc.).
The protocol does not provide an explicit way to close a UDP session. While a client can retain and reuse a Session ID indefinitely, the server SHOULD release and reassign the port associated with the Session ID after a period of inactivity or some other criteria. If the client sends a UDP packet to a Session ID that is no longer recognized by the server, the server MUST treat it as a new session and assign a new port.
If a server does not support UDP relay, it SHOULD silently discard all UDP messages received from the client.
Fragmentation
Due to the limit imposed by QUIC’s unreliable datagram channel, any UDP packet that exceeds QUIC’s maximum datagram size MUST either be fragmented or discarded.
For fragmented packets, each fragment MUST carry the same unique Packet ID. The Fragment ID, starting from 0, indicates the index out of the total Fragment Count. Both the server and client MUST wait for all fragments of a fragmented packet to arrive before processing them. If one or more fragments of a packet are lost, the entire packet MUST be discarded.
For packets that are not fragmented, the Fragment Count MUST be set to 1. In this case, the values of Packet ID and Fragment ID are irrelevant.
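These rules can be sketched as a splitter/reassembler pair. The illustration below tracks only fragment IDs and payload bytes; the Session ID, Packet ID, and address fields of the full UDPMessage are omitted for clarity.

```python
from typing import List, Optional, Tuple

def fragment(payload: bytes, max_size: int) -> List[Tuple[int, int, bytes]]:
    """Split a payload into (fragment_id, fragment_count, chunk) triples."""
    chunks = [payload[i:i + max_size] for i in range(0, len(payload), max_size)] or [b""]
    return [(fid, len(chunks), chunk) for fid, chunk in enumerate(chunks)]

def reassemble(frags: List[Tuple[int, int, bytes]]) -> Optional[bytes]:
    """Rebuild the original payload, or return None if any fragment is missing."""
    if not frags:
        return None
    count = frags[0][1]
    by_id = {fid: chunk for fid, _, chunk in frags}
    if len(by_id) != count:
        return None  # a lost fragment discards the whole packet
    return b"".join(by_id[fid] for fid in range(count))
```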
Congestion Control
A unique feature of Hysteria is the ability to set the tx/rx (upload/download) rate on the client side. During authentication, the client sends its rx rate to the server via the Hysteria-CC-RX header. The server can use this to determine its transmission rate to the client, and vice versa by returning its rx rate to the client through the same header.
Three special cases are:
- If the client sends 0, it doesn’t know its own rx rate. The server MUST use a congestion control algorithm (e.g., BBR, Cubic) to adjust its transmission rate.
- If the server responds with 0, it has no bandwidth limit. The client MAY transmit at any rate it wants.
- If the server responds with “auto”, it chooses not to specify a rate. The client MUST use a congestion control algorithm to adjust its transmission rate.
“Salamander” Obfuscation
The Hysteria protocol supports an optional obfuscation layer codenamed “Salamander”.
“Salamander” encapsulates all QUIC packets in the following format:
```text
[8 bytes] Salt
[bytes] Payload
```
For each QUIC packet, the obfuscator MUST calculate the BLAKE2b-256 hash of a randomly generated 8-byte salt appended to a user-provided pre-shared key.
```text
hash = BLAKE2b-256(key + salt)
```
The hash is then used to obfuscate the payload using the following algorithm:
```python
for i in range(0, len(payload)):
    payload[i] ^= hash[i % 32]
```
The deobfuscator MUST use the same algorithms to calculate the salted hash and deobfuscate the payload. Any invalid packet MUST be discarded.
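The algorithm above maps directly onto Python’s `hashlib.blake2b` with a 32-byte digest. The sketch below applies the same XOR pass for both obfuscation and deobfuscation; the key and packet contents are arbitrary examples, not protocol constants.

```python
import hashlib
import os

def salamander(key: bytes, packet: bytes) -> bytes:
    """XOR-transform the payload of a salted packet; the operation is self-inverse."""
    salt, payload = packet[:8], bytearray(packet[8:])
    digest = hashlib.blake2b(key + salt, digest_size=32).digest()  # BLAKE2b-256(key + salt)
    for i in range(len(payload)):
        payload[i] ^= digest[i % 32]
    return salt + bytes(payload)

def obfuscate(key: bytes, quic_packet: bytes) -> bytes:
    """Prepend a fresh random 8-byte salt and obfuscate the QUIC packet."""
    return salamander(key, os.urandom(8) + quic_packet)
```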
TUIC
Highlights
- QUIC-based proxy protocol that uses TLS 1.3 for encryption and stream multiplexing.
- Supports 0-RTT resumption and UDP relay over QUIC datagrams.
- Designed for aggressive latency tuning with modern congestion control.
Flow
- Client opens a QUIC connection to the server and completes the TLS 1.3 handshake.
- Client authenticates with a UUID/token configured on the server.
- Client opens bidirectional QUIC streams for TCP requests and uses datagrams for UDP relay.
- Server validates auth, then forwards traffic to upstream destinations.
Configuration Snippet
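A Clash.Meta-style TUIC entry might look like the sketch below; all values are placeholders, and exact field names can vary by client version, so treat it as an orientation aid rather than an authoritative schema.

```yaml
proxies:
  - name: tuic-example              # hypothetical entry name
    type: tuic
    server: proxy.example.com       # placeholder host; must be UDP-reachable
    port: 8443
    uuid: 00000000-0000-0000-0000-000000000000
    password: changeme
    alpn: [h3]
    udp-relay-mode: native          # relay UDP over QUIC datagrams
    congestion-controller: bbr
```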
Strengths
- Low handshake overhead with 0-RTT and multiplexed streams.
- Handles UDP natively without extra encapsulation layers.
- Good performance on lossy or high-latency mobile networks.
Limitations
- Requires UDP reachability and QUIC-friendly network paths.
- QUIC fingerprints vary by implementation and can be throttled or blocked.
- MTU and packet pacing tuning are often required for best results.
VLESS
Highlights
- Lightweight stateless protocol from Project V that uses UUIDs for client identification.
- Typically paired with TLS, XTLS, or Reality transport layers for encryption and camouflage.
- Supports multiplexing, fallback routes, and advanced routing rules within the Xray core ecosystem.
Flow
- Client connects to the server transport (TLS, XTLS, Reality, gRPC, or MKCP).
- Client sends a VLESS header carrying the UUID, command (TCP/UDP), and target address.
- Server validates the UUID, then opens a stream or datagram tunnel to the destination.
- Optional features such as XTLS flow control or auto-split can accelerate traffic.
Configuration Snippet
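A Clash.Meta-style VLESS entry over TLS might look like the sketch below; the host, UUID, and flow value are placeholders chosen for illustration.

```yaml
proxies:
  - name: vless-example             # hypothetical entry name
    type: vless
    server: edge.example.com        # placeholder host
    port: 443
    uuid: 00000000-0000-0000-0000-000000000000
    tls: true
    servername: edge.example.com
    network: tcp
    flow: xtls-rprx-vision          # optional XTLS flow control
    client-fingerprint: chrome      # uTLS fingerprint
```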
Strengths
- UUID-based auth scales well for many users and integrates with automated issuers.
- Compatible with multiple transports, giving flexibility between TCP, gRPC, WS, or QUIC layers.
- XTLS/Reality options reduce TLS overhead and mimic legitimate HTTPS fingerprints.
Limitations
- Requires the Xray-core ecosystem; not natively supported by mainstream OS tools.
- Misconfiguration of flow parameters can break compatibility with older clients.
- Security relies heavily on the chosen transport; bare VLESS without TLS offers no encryption.
xHTTP Transport
Overview
xHTTP is an Xray transport that tunnels proxy traffic through regular HTTP request/response patterns, making it look closer to normal web application traffic. It is commonly used with VLESS + TLS/Reality to improve camouflage and traverse restrictive network environments.
When to Use
- You need traffic to blend into common HTTPS API patterns.
- Your network environment is sensitive to long-lived WebSocket or gRPC signatures.
- You want to combine VLESS identity/auth with HTTP-style uplink/downlink behavior.
Core Configuration Fields
| Field | Side | Meaning |
|---|---|---|
| `network: xhttp` | client/server | Enables xHTTP transport. |
| `path` | client/server | HTTP request path used by the transport; must match on both sides. |
| `host` | client | Optional Host header override (for fronting/reverse-proxy cases). |
| `mode` | client/server | Transport mode, commonly `auto` (default) or platform-specific variants. |
| `extra.headers` | client | Extra HTTP headers to mimic app/API traffic. |
| `xmux` | client/server | Multiplex tuning such as concurrency limits and connection reuse. |
| `tls` / `reality` | client/server | Encryption/camouflage layer; strongly recommended in production. |
Minimal Example (Client, Clash-Meta style)
```yaml
proxies:
  - name: vless-xhttp
    type: vless
    server: edge.example.com
    port: 443
    uuid: 11111111-2222-3333-4444-555555555555
    tls: true
    servername: cdn.example.com
    network: xhttp
    xhttp-opts:
      path: /api/v1/sync
      host:
        - cdn.example.com
      mode: auto
      headers:
        User-Agent:
          - okhttp/4.12.0
```
Minimal Example (Server, Xray style)
```json
{
  "inbounds": [
    {
      "port": 443,
      "protocol": "vless",
      "settings": {
        "clients": [
          { "id": "11111111-2222-3333-4444-555555555555" }
        ],
        "decryption": "none"
      },
      "streamSettings": {
        "network": "xhttp",
        "security": "tls",
        "tlsSettings": {
          "serverName": "cdn.example.com",
          "certificates": [
            {
              "certificateFile": "/etc/ssl/fullchain.pem",
              "keyFile": "/etc/ssl/privkey.pem"
            }
          ]
        },
        "xhttpSettings": {
          "path": "/api/v1/sync",
          "mode": "auto"
        }
      }
    }
  ]
}
```
Deployment Notes
- Keep `path` and `mode` fully aligned between client and server, otherwise handshakes fail.
- Prefer realistic but stable headers; frequently changing fingerprints can hurt reliability.
- If deploying behind Nginx/Caddy/CDN, ensure request buffering and timeout limits fit long-lived proxy streams.
- Start with conservative `xmux` values, then tune concurrency after observing latency and upstream limits.
Troubleshooting Checklist
- `EOF` immediately after connect: verify UUID, TLS server name, and `path` consistency.
- Frequent reconnects: check reverse proxy idle timeout and HTTP/2 upstream settings.
- Good handshake but poor throughput: reduce header bloat, tune `xmux`, and verify CDN region affinity.
Reality
Highlights
- TLS camouflage layer from the Xray ecosystem that imitates a TLS 1.3 handshake without issuing a certificate.
- Uses a server public key and short ID to bind the handshake to a real-looking TLS fingerprint.
- Commonly paired with VLESS or Trojan to provide authentication and routing on top of the transport.
Flow
- Client selects a cover domain and configures the server public key + short ID.
- Client initiates a TLS 1.3-like handshake (uTLS fingerprint) with SNI set to the cover domain.
- Server validates the short ID and key exchange to accept the session.
- On success, the connection upgrades to the chosen proxy protocol (for example VLESS).
Configuration Snippet
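A Clash.Meta-style VLESS-over-Reality entry might look like the sketch below; the host, cover domain, key, and short ID are placeholders, and the `reality-opts` field names follow common Clash.Meta conventions.

```yaml
proxies:
  - name: vless-reality-example        # hypothetical entry name
    type: vless
    server: proxy.example.com          # placeholder host
    port: 443
    uuid: 00000000-0000-0000-0000-000000000000
    tls: true
    servername: www.cover-site.example # SNI set to the cover domain
    network: tcp
    client-fingerprint: chrome         # must imitate a common browser fingerprint
    reality-opts:
      public-key: SERVER_PUBLIC_KEY    # placeholder: server's public key
      short-id: SHORT_ID               # placeholder short ID
```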
Strengths
- Avoids certificate issuance and rotation while keeping TLS-like handshake behavior.
- Harder to fingerprint via passive inspection when the TLS client fingerprint matches common browsers.
- Integrates with XTLS flow control for reduced overhead.
Limitations
- Requires compatible client fingerprints; mismatches can break connectivity.
- Mostly confined to the Xray tooling ecosystem.
- Effectiveness depends on the chosen cover domain and correct configuration.