Change two lines.
Save more tokens.
One OpenAI-compatible interface for your app. Change the base URL, use a Trex key, bind your own provider key in the dashboard, and let Trex adapt OpenAI, Anthropic, or OpenRouter upstreams behind the scenes. TrexAPI from ClawDiary is built around the TokenZip Protocol.
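The two-line change can be sketched with a plain OpenAI-compatible HTTP request; the base URL, key prefix, and model name below are illustrative assumptions, not confirmed TrexAPI values:

```python
import json

# Before: requests go straight to the provider.
PROVIDER_BASE = "https://api.openai.com/v1"

# After: the only two lines that change are the base URL and the key.
TREX_BASE = "https://api.trexapi.example/v1"  # hypothetical Trex base URL
TREX_KEY = "trex_sk_..."                      # your Trex key, not the provider key

def build_chat_request(base_url: str, api_key: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion request."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

before = build_chat_request(PROVIDER_BASE, "sk-provider...", "hello")
after = build_chat_request(TREX_BASE, TREX_KEY, "hello")

# Only the URL and the credential differ; the request body is unchanged.
assert before["body"] == after["body"]
assert after["url"].startswith(TREX_BASE)
```

Because the payload shape stays OpenAI-compatible, existing client code keeps working once the base URL and key are swapped.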
Start with a workflow that has already run end to end
This section shows a real staging workflow: one long release packet settles into a TrexID, then gets reused across on-call, rollback, and enterprise follow-up flows. It includes reconciled token savings and measured TrexID latency from a live run.
If TrexAPI does not save you more in token cost than your Pro Edge fee, the next month renews for free
This commitment currently applies only to the standard Pro Edge monthly plan at $40/month. If you route your own OpenAI, Claude, or equivalent provider traffic through TrexAPI and the monthly savings do not exceed the subscription fee, the next month is extended at no charge. This is a renewal benefit, not a cash refund.
Reconciled against real provider usage
TrexAPI does not guess how much you probably saved. The platform reconciles original input tokens, actual upstream billed tokens, and saved cost against provider usage fields plus official pricing.
We only deserve to charge when the gain is clearly positive
For the standard Pro Edge monthly plan at $40/month, if routing your own OpenAI, Claude, or equivalent model traffic through TrexAPI does not save more than the subscription fee for that month, we extend the next month at no charge instead of refunding cash.
Applies only to Pro Edge ($40/mo)
This promise applies only to the standard Pro Edge monthly plan at $40/month and is intended for production use cases with a clear ROI requirement.
TrexAPI is the first end-to-end platform built around the TokenZip Protocol
From ClawDiary, TrexAPI combines account access, API key provisioning, payload lifecycle operations, TrexID pointerization, and production access into a single platform. The goal is not to leave elite context-compression capability inside a few flagship products, but to turn it into shared infrastructure that any team can adopt quickly.
Why choose TrexAPI
If you are already paying OpenAI, Claude, or another model provider, TrexAPI is not just another wrapper. Its core job is to use Semantic Edge Dynamic Optimization to turn long context into a reusable, exact-protected, and measurable Semantic Edge memory layer.
Semantic Edge Dynamic Optimization, not generic compression
TrexAPI does not mechanically trim long prompts. It first identifies the facts, constraints, entities, and conclusions the task actually depends on, then compresses the low-signal remainder.
Exact preservation for high-risk spans
IDs, amounts, clauses, code, commands, schema fragments, and other high-risk spans are routed through exact-preservation instead of being loosely rewritten with ordinary narrative text.
TrexID makes context reusable
Processed context is packaged as a TokenZip payload and stored behind a controlled TrexID. Downstream systems pass the TrexID and expand only when needed instead of re-sending the full context every time.
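The reuse pattern above can be sketched with an in-memory store standing in for the TrexAPI service; `settle` and `expand` are illustrative names, not the real client API:

```python
import hashlib

# Stand-in for the TrexAPI payload store.
_STORE: dict[str, str] = {}

def settle(context: str) -> str:
    """Store a long context once and return a short TrexID reference."""
    trex_id = "trex_" + hashlib.sha256(context.encode()).hexdigest()[:12]
    _STORE[trex_id] = context
    return trex_id

def expand(trex_id: str) -> str:
    """Expand a TrexID back into its payload only when actually needed."""
    return _STORE[trex_id]

release_packet = "..."  # imagine a multi-thousand-token release packet here
tid = settle(release_packet)

# Downstream calls ship the short reference, not the full context.
assert len(tid) < 20
assert expand(tid) == release_packet
```

The point of the pattern: every downstream hop pays for a fixed-size reference instead of the full context, and expansion happens only at the hop that actually needs the text.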
Savings are becoming measurable
The platform now reconciles original input tokens, actual upstream billed tokens, and TrexAPI-attributed savings against provider usage and official pricing tables.
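The reconciliation itself is simple arithmetic over fields the provider already returns in its usage object; the token counts and the per-million price below are placeholders, not official rates:

```python
def reconcile(original_input_tokens: int,
              billed_input_tokens: int,
              price_per_million: float) -> dict:
    """Compute attributed savings from provider usage fields plus a price table."""
    saved_tokens = original_input_tokens - billed_input_tokens
    return {
        "saved_tokens": saved_tokens,
        "saved_usd": round(saved_tokens * price_per_million / 1_000_000, 4),
    }

# Example: a 120k-token context billed as 18k tokens at a $2.50/M input rate.
report = reconcile(120_000, 18_000, 2.50)
assert report == {"saved_tokens": 102_000, "saved_usd": 0.255}
```

Because both token figures come from the provider's own usage reporting, the saved amount is grounded in what was actually billed rather than estimated from text length.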
Not another prompt utility, but a reconciled infrastructure layer built around Semantic Edge Dynamic Optimization
Most alternatives either relay requests directly to the model provider or stop at “compressing text a bit.” TrexAPI combines context handling, TrexID reuse, usage reconciliation, and a production control surface in one path.
| Dimension | Direct provider calls | Generic compression layer | TrexAPI |
|---|---|---|---|
| How context is handled | Sends the full prompt upstream as-is. | Usually relies on truncation, summaries, or superficial compression. | Runs Semantic Edge Dynamic Optimization first, then protects critical spans and compacts low-signal language. |
| Exact protection for high-risk data | You either keep everything or split it manually. | Can easily rewrite identifiers, clauses, or fields along with normal prose. | Identifiers, amounts, clauses, code, and structured fields are routed through an exact-preservation path. |
| Reusable across systems | The same full context is resent on every call. | Compressed output is typically a one-off piece of text. | Optimized output is packaged as a TokenZip payload and passed by TrexID reference across systems. |
| Cost reconciliation | You only see the provider’s raw bill. | Rarely reconciles cleanly against real provider usage. | Reconciles original tokens, billed tokens, and savings against upstream usage plus official pricing. |
| Production control surface | You still need to build your own accounts, access, keys, and governance layer. | Usually covers a narrow optimization layer, not a full production surface. | Accounts, API keys, payload lifecycle, and subscription policy live in the same platform. |
The key question is not whether you want a new interface. It is whether your context is valuable enough to optimize, cache, reuse, and reconcile.
Where TrexAPI fits best
- You already pay OpenAI, Claude, or another model provider and want those calls to become more token-efficient and reusable.
- Your workflows involve long context, multi-agent orchestration, cross-service chains, or context that gets reused more than once.
- You want a drop-in OpenAI-compatible base URL, a Trex key for clients, and a dashboard flow for binding your own provider key.
- You care whether the savings are real, and want usage, cost, cache reuse, and subscription guarantees reconciled instead of hand-waved.
Where direct calls may still be better
- Your traffic is mostly short one-off prompts with no real context reuse or cache value.
- You only want the thinnest possible pass-through and do not need TrexID, Semantic Edge Dynamic Optimization, billing reconciliation, or access controls.
- You do not want to bind your own provider key in a control plane or introduce any additional operational surface.
- You currently value the absolute smallest integration delta more than long-term token savings, reuse, and governance.
TrexAPI is designed around a lower-exposure default path for credentials, context, and references.
For most teams, the real privacy and security question is whether raw context, upstream API keys, and access permissions flow through fewer, clearer, governable paths.
Your upstream API key does not need to travel on every request
You can now bind your own OpenAI, Anthropic, or OpenRouter key in the dashboard. Trex reads it automatically on proxy calls so the upstream credential does not need to be resent on every request.
TrexID reduces repeated raw-context exposure
TrexAPI is designed to stop resending the full raw prompt whenever possible. Context settles into a TrexID so systems pass references and controlled payloads instead of repeatedly moving the original long text around.
High-risk spans stay on an exact path with access controls
Identifiers, clauses, amounts, code, and structured fields are routed through an exact-preservation path; payloads also carry TTL, allowed-receiver, and sender-agent metadata for controlled access.
Accounts, sessions, and keys live in one control plane
TrexAPI keeps account login, sessions, OAuth, API key create/revoke flows, and subscription-aware access in one place instead of scattering governance across multiple systems.
This is not plain compression. It is Semantic Edge Dynamic Optimization.
TrexAPI does not simply cut text down. It first identifies the context that actually matters, preserves high-risk spans exactly, semantically compacts low-signal narrative, and then packages the result as a TokenZip payload.
Semantic salience extraction
The system first isolates the parts of a long context that carry task value, such as facts, constraints, states, decisions, entity relationships, and objectives, rather than trimming every sentence equally.
Exact-preservation and structural protection
Code, field names, identifiers, amounts, dates, clause references, and structured fragments receive exact-preservation treatment; only ordinary narrative language is allowed into the semantic compaction layer.
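The protect-then-compact order can be sketched as a mask/restore pass; the regex patterns and the toy compactor are illustrative, not the production pipeline:

```python
import re

# High-risk spans (invoice IDs, clause refs, dollar amounts) are masked before
# compaction and restored verbatim afterwards; the patterns are illustrative.
PROTECT = re.compile(r"(INV-\d+|Clause \d+(?:\.\d+)*|\$[\d,]+(?:\.\d+)?)")

def compact_with_protection(text: str, compact) -> str:
    spans: list[str] = []
    def mask(m: re.Match) -> str:
        spans.append(m.group(0))
        return f"\x00{len(spans) - 1}\x00"
    masked = PROTECT.sub(mask, text)
    compacted = compact(masked)           # only ordinary prose is rewritten
    for i, span in enumerate(spans):      # restore protected spans exactly
        compacted = compacted.replace(f"\x00{i}\x00", span)
    return compacted

# A toy compactor that only touches narrative phrasing.
naive_compact = lambda t: t.replace("only after finance has verified", "after verifying")
out = compact_with_protection(
    "Release the refund only after finance has verified Invoice INV-20481 "
    "and settlement Clause 7.2.", naive_compact)
assert "INV-20481" in out and "Clause 7.2" in out
```

Because the compactor never sees the protected spans, it cannot paraphrase an identifier or a clause reference even by accident.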
Low-signal narrative compaction
Verbose modifiers, repeated framing, and low-density phrasing are compacted into shorter but semantically stable wording, turning rhetorical setup into explicit conclusions.
Trex payload packaging and TrexID pointerization
The result is packaged into quantized vectors, fallback text, and access metadata. Once stored in TrexAPI, it becomes a TrexID that downstream systems can reference directly.
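The packaging step can be sketched as follows; the field names, int8 quantization scheme, and content-derived ID are assumptions for illustration, not the real wire format:

```python
import hashlib

def quantize(vec: list[float]) -> list[int]:
    """Crude int8 quantization of an embedding vector (illustrative only)."""
    return [max(-127, min(127, round(v * 127))) for v in vec]

def package_payload(vectors: list[list[float]], fallback_text: str,
                    meta: dict) -> dict:
    """Assemble a TokenZip-style payload: quantized vectors, fallback text,
    and access metadata, addressed by a content-derived TrexID."""
    payload = {
        "vectors": [quantize(v) for v in vectors],
        "fallback_text": fallback_text,
        "meta": meta,
    }
    digest = hashlib.sha256(repr(payload).encode()).hexdigest()[:16]
    return {"trex_id": f"trex_{digest}", **payload}

p = package_payload([[0.5, -1.0]], "Verify INV-20481 before refund.",
                    {"ttl_seconds": 3600})
assert p["vectors"] == [[64, -127]]
assert p["trex_id"].startswith("trex_")
```

The fallback text matters in practice: a receiver that cannot consume the vector form can still expand the payload into plain, exact-protected text.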
TrexAPI preserves what matters and compresses what does not.
These examples are not random shortening. They show how TrexAPI preserves conclusions, constraints, identifiers, and decision-bearing structure while compressing narrative overhead.
Original: The weather today is exceptionally radiant, with sunlight spilling across nearly the entire afternoon.
Optimized: Today is bright with abundant sunlight.

Original: Release the refund only after finance has verified Invoice INV-20481 and settlement Clause 7.2.
Optimized: Verify Invoice INV-20481 and Clause 7.2 before refund release.

Original: Because the board insists on preserving liquidity headroom through Q3, the 2026 rollout should prioritize revenue-adjacent markets and defer nonessential regional expansion.
Optimized: Under the Q3 liquidity constraint, prioritize revenue-adjacent markets in the 2026 rollout and defer nonessential regions.
From optimized context to reusable TrexID memory
TrexAPI is not trying to output a shorter sentence and stop there. It turns processed context into a reusable semantic object: quantized, signed, stored, and expanded on demand when another system needs it.
Start free, then scale into production usage
From a small free tier to founder pricing and scale plans, production access is managed directly from the dashboard, and the free tier already includes one production key. Lifetime-plan scope and eligibility are documented on the pricing page.
Founding Lifetime
Includes lifetime access to the current Founders self-serve tier for teams that prefer a one-time Founders purchase.
- Limited to the first 100 buyers
- Unlimited free retrieval of existing TrexID cache
- Excludes future enterprise, private deployment, and SLA
Pro Lifetime
Includes lifetime access to the current Pro self-serve tier for teams that want to deploy production traffic through a one-time purchase.
- Limited to the first 50 buyers
- Includes the current Pro self-serve tier, excluding future enterprise terms
- Excludes savings-share enterprise, private deployment, and SLA
Developer
- Up to 2,500 requests / month
- Dashboard access
- 1 production API key included
- Community support
Founders
- Lifetime price while active
- Production API key provisioning
- First 500 users only
- No Pro Edge renewal promise
Pro Edge
- Up to 500,000 requests / month
- Production API access
- Priority email support
- Includes the Pro Edge renewal promise
Scale Edge
- Up to 2M requests / month
- Higher-throughput support lane
- Production rollout planning
- No Pro Edge renewal promise
Enterprise
- Base platform fee
- Optional private deployment or SLA fee
- 5% to 10% of verified net savings
- Custom capacity, contract, and rollout planning
Only the standard Pro Edge monthly plan at $40/month currently includes our token savings promise: if the savings from routing your own OpenAI, Claude, or equivalent provider traffic through TrexAPI do not exceed the subscription fee for the month, we extend the next month at no charge instead of refunding cash. As long as the subscription stays active, retrieving existing cache by TrexID also remains free and unlimited.