Behavioral scenarios use a canonical line-based script format. Raw frame
goldens remain separate and are used only for internal/wire and malformed
frame cases.
Current state:
- testing/testhost uses the production codec in both directions; transcript message names map to real IBKR integer message IDs
- checked-in transcripts cover both live-grounded scenarios and synthetic fault-injection cases for disconnects, partial frames, lifecycle edges, and other protocol failures
- live-grounded behavior is captured from IB Gateway server_version 200 and frozen into replay artifacts
- raw capture logs record per-leg connect/disconnect events plus TCP chunks; normalized replay artifacts reconstruct framed payloads from those chunks
The script format is designed to be:
- human-diffable
- ordered by runtime sequence
- machine-validated by repo tooling
- expressive enough for delays, disconnects, partial frames, and bindings
Each non-empty, non-comment line is one step:
client <message> <json-object>
server <message> <json-object>
sleep <duration>
disconnect
split <direction> <sizes> <message> <json-object>
raw <direction> <base64>
The JSON object is part of the line DSL. It provides typed values without turning the scenario into a machine-first document format.
String values that start with $ are symbolic bindings.
- In client expectation steps they bind on first match.
- In later client steps they match the previously bound value.
- In server steps they resolve to the bound value.
Example:

client hello {"min_version":1,"max_version":1,"client_id":7}
server hello_ack {"server_version":1,"connection_time":"2026-04-05T12:00:00Z"}
server managed_accounts {"accounts":["DU12345"]}
server next_valid_id {"order_id":1001}
client req_contract_details {"req_id":"$req1","contract":{"symbol":"AAPL","sec_type":"STK","exchange":"SMART","currency":"USD"}}
server contract_details {"req_id":"$req1","contract":{"symbol":"AAPL","sec_type":"STK","exchange":"SMART","currency":"USD"},"market_name":"NMS","min_tick":"0.01","time_zone_id":"US/Eastern"}
server contract_details_end {"req_id":"$req1"}
testing/testhost currently uses the production codec in both directions, but
it should be treated as replay tooling rather than as a place to define IBKR
protocol semantics.
- Client traffic is decoded and matched against the script.
- Server traffic is encoded from the script and written through the same wire framing as production code.
- Partial writes, malformed frames, delays, and disconnects are driven by the script rather than by ad hoc per-test logic.
The live capture tooling separates raw evidence from replay semantics:
- raw events.jsonl records connection lifecycle plus byte chunks as observed on the socket
- normalized frames.jsonl records connect/disconnect markers plus framed payloads reconstructed offline
- TCP chunk boundaries are not replay semantics and must never be treated as message boundaries
The current paper Gateway target is 127.0.0.1:4002. Capture through the
recorder proxy so raw evidence and normalized replay artifacts stay linked:
go build -o /tmp/ibkr-recorder ./cmd/ibkr-recorder
go build -o /tmp/ibkr-capture ./cmd/ibkr-capture
go build -o /tmp/ibkr-normalize ./cmd/ibkr-normalize
IBKR_UPSTREAM=127.0.0.1:4002 ./scripts/record-scenarios.sh quote_stream_multi_asset historical_ticks_aapl_timezone_window
./scripts/verify-captures.sh captures/<capture-dir>

Complex trading scenarios whose names start with api_ are still recorded
through the same proxy, but the capture driver uses the public ibkr.Client
facade instead of hand-written wire calls. The raw events.jsonl remains the
protocol evidence; driver.log beside the capture records the human-readable
public-API order lifecycle, and driver_events.jsonl records structured
scenario/order/execution/commission checkpoints keyed by scenario run ID and
order ref.
Useful scenario batches:
IBKR_CAPTURE_BATCH=trading-basic ./scripts/record-scenarios.sh
IBKR_CAPTURE_BATCH=trading-advanced ./scripts/record-scenarios.sh
IBKR_CAPTURE_BATCH=trading-campaigns ./scripts/record-scenarios.sh
IBKR_CAPTURE_BATCH=trading-all ./scripts/record-scenarios.sh

For active-order reconnect captures, allow the recorder to accept multiple connection legs:
IBKR_RECORDER_MAX_LEGS=3 IBKR_CAPTURE_BATCH=trading-campaigns ./scripts/record-scenarios.sh

cmd/ibkr-normalize can also emit a raw transcript skeleton for curation:
./ibkr-normalize -dir captures/<capture-dir> -transcript-out /tmp/<scenario>.txt

Raw capture directories remain local evidence because they may contain
account-specific details. When promoting behavior into CI, check in a curated
transcript under testdata/transcripts plus a public test that asserts the
behavior at the library API boundary. Record the raw capture directory name,
server version, scenario, and events.jsonl hash in the PR or accompanying
notes so the replay can be traced back to live evidence without committing raw
account data. Default replay tests should stay curated; exhaustive replay runs
use the replay-all catalog batch or an explicit test flag/env in the caller.
Next steps:
- use live-coverage-matrix.md as the target matrix for exhaustive live capture coverage and promotion status
- use ibkr-api-inventory.md as the official/repo inventory that keeps the matrix from drifting away from IBKR's API surface
- grow scenario coverage for reconnect, pacing, and version-gated branches
- grow scenario coverage for order-management edge cases and more complex order shapes
- prefer complex live scenarios over one-request smoke captures when adding new coverage, especially for order, execution, account, PnL, historical window, and multi-subscription behavior
- broaden live capture coverage beyond server_version 200
- use the recorder and normalization tooling to derive new scenarios from contributor-owned Gateway or TWS sessions