Built for the TokenZip Protocol

Change two lines.
Save more tokens.

One OpenAI-compatible interface for your app. Change the base URL, use a Trex key, bind your own provider key in the dashboard, and let Trex adapt OpenAI, Anthropic, or OpenRouter upstreams behind the scenes. TrexAPI from ClawDiary is built around the TokenZip Protocol.

Not a raw prompt relay, but a Semantic Edge memory layer shaped by Semantic Edge Dynamic Optimization so context becomes reusable, protected, and measurable through a simple path: compatible URL, Trex key, and a dashboard-bound upstream API key.
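The "change two lines" claim can be sketched as plain request assembly: swap the base URL and the API key, keep the OpenAI-style request shape. The URL, key format, and model name below are illustrative placeholders, not documented TrexAPI endpoints.

```python
# Sketch of the two-line integration: point an OpenAI-style
# chat-completions request at a Trex base URL and authenticate
# with a Trex key instead of the upstream provider key.

def build_chat_request(base_url: str, trex_key: str, messages: list) -> dict:
    """Assemble the URL, headers, and body for an OpenAI-compatible call."""
    return {
        "url": base_url.rstrip("/") + "/chat/completions",
        "headers": {
            "Authorization": f"Bearer {trex_key}",  # Trex key, not the provider key
            "Content-Type": "application/json",
        },
        "body": {"model": "gpt-4o-mini", "messages": messages},
    }

req = build_chat_request(
    "https://trex.example/v1",   # line 1 you change: the base URL
    "trex_live_xxxxxxxx",        # line 2 you change: the API key
    [{"role": "user", "content": "hello"}],
)
```

Everything else in the client stays as it was; the upstream provider key is bound once in the dashboard instead of traveling with each request.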
By ClawDiary · March release
Semantic Edge Dynamic Optimization engine: Extract the semantic material a task actually depends on instead of relying on truncation or superficial shortening.
TrexID semantic memory: Package processed context as a TokenZip payload, store it behind a TrexID, and move it between systems by reference.
Exact-preserving: Structured fields, identifiers, and code fragments are routed through an exact path first.
Savings metered: Savings are now being reconciled against upstream usage and official pricing.
“Turn context into a TrexID-backed semantic memory layer instead of paying to resend the full prompt upstream every time.”
Real case

Start with a workflow that has already run end to end

This section shows a real staging workflow: one long release packet settles into a TrexID, then gets reused across on-call, rollback, and enterprise follow-up flows. It includes reconciled token savings and measured TrexID latency from a live run.

6,184 → 1,973: input token change in a real reconciled event
4,211: input tokens saved on that event
0.63 s: measured hot-path TrexID resolve latency
This case shows a validated TrexID workflow that can be integrated, saves tokens, and keeps being reused.
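The headline figures above reconcile as simple arithmetic over the event's token counts:

```python
# Reconciling the reported event: original input tokens vs. tokens
# actually billed upstream after the context settled into a TrexID.
original_input_tokens = 6184
billed_input_tokens = 1973

saved_tokens = original_input_tokens - billed_input_tokens
savings_ratio = saved_tokens / original_input_tokens  # fraction of input avoided
```

On this event the reduction works out to roughly two thirds of the original input.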
Renewal promise

If TrexAPI does not save more token cost than your Pro Edge fee, we renew the next month for free

This commitment currently applies only to the standard Pro Edge monthly plan at $40/month. If you route your own OpenAI, Claude, or equivalent provider traffic through TrexAPI and the monthly savings do not exceed the subscription fee, the next month is extended at no charge. This is a renewal benefit, not a cash refund.

Eligibility standard
monthly token-cost savings through TrexAPI <= monthly subscription fee
Outcome: TrexAPI renews one additional month for free
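The eligibility standard above is a single comparison. A minimal sketch of the published rule (this mirrors the stated condition, not TrexAPI's internal billing code):

```python
# Pro Edge renewal rule as stated: if monthly token-cost savings do
# not exceed the monthly subscription fee, the next month renews free.
PRO_EDGE_FEE_USD = 40.00

def renews_next_month_free(monthly_savings_usd: float) -> bool:
    """True when the savings fail to beat the fee, triggering a free renewal."""
    return monthly_savings_usd <= PRO_EDGE_FEE_USD
```

Note the boundary: savings exactly equal to the fee still qualify, since the promise requires savings to *exceed* the fee.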

Reconciled against real provider usage

TrexAPI does not guess how much you probably saved. The platform reconciles original input tokens, actual upstream billed tokens, and saved cost against provider usage fields plus official pricing.

We only deserve to charge when the gain is clearly positive

For the standard Pro Edge monthly plan at $40/month, if routing your own OpenAI, Claude, or equivalent model traffic through TrexAPI does not save more than the subscription fee for that month, we extend the next month at no charge instead of refunding cash.

Applies only to Pro Edge ($40/mo)

This promise applies only to the standard Pro Edge monthly plan at $40/month and is intended for production use cases with a clear ROI requirement.

In other words, we do not want teams to subscribe to TrexAPI for a vague hope of saving a little. Our position is that this Semantic Edge and Semantic Edge Dynamic Optimization layer should stay in your production path only when it keeps producing a clear positive return.
Now Shipping

TrexAPI is the first end-to-end platform built around the TokenZip Protocol

From ClawDiary, TrexAPI combines account access, API key provisioning, payload lifecycle operations, TrexID pointerization, and production access into a single platform. The goal is not to leave elite context-compression capability inside a few flagship products, but to turn it into shared infrastructure that any team can adopt quickly.

Account auth, OAuth, and session handling
Dashboard-based production API key management
Signed payload push, fetch, HEAD, and revoke
Subscription-aware access and account controls
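The signed-payload operations above imply some signing scheme over payload bytes. TrexAPI's actual scheme is not documented here, so the sketch below only illustrates the shape of the idea with a hypothetical HMAC-SHA256 envelope:

```python
# Hypothetical sketch of a signed payload push/verify pair. The field
# names, secret handling, and signature scheme are illustrative only.
import hashlib
import hmac
import json

def sign_payload(payload: dict, secret: bytes) -> dict:
    """Serialize deterministically, then sign the bytes with HMAC-SHA256."""
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "signature": signature}

def verify_payload(envelope: dict, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = hmac.new(secret, envelope["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])

env = sign_payload({"trex_id": "trex_abc123", "ttl": 3600}, b"demo-secret")
```

A fetch or HEAD on the stored payload would verify the envelope the same way before trusting its contents.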
Why TrexAPI

Why choose TrexAPI

If you are already paying OpenAI, Claude, or another model provider, TrexAPI is not just another wrapper. Its core job is to use Semantic Edge Dynamic Optimization to turn long context into a reusable, exact-protected, and measurable Semantic Edge memory layer.

Semantic Edge Dynamic Optimization, not generic compression

TrexAPI does not mechanically trim long prompts. It first identifies the facts, constraints, entities, and conclusions the task actually depends on, then compresses the low-signal remainder.

Exact preservation for high-risk spans

IDs, amounts, clauses, code, commands, schema fragments, and other high-risk spans are routed through exact-preservation instead of being loosely rewritten with ordinary narrative text.

TrexID makes context reusable

Processed context is packaged as a TokenZip payload and stored behind a controlled TrexID. Downstream systems pass the TrexID and expand only when needed instead of re-sending the full context every time.

Savings are becoming measurable

The platform has begun reconciling original input tokens, actual upstream billed tokens, and TrexAPI-attributed savings against provider usage and official pricing tables.

Solution comparison

Not another prompt utility, but a reconciled infrastructure layer built around Semantic Edge Dynamic Optimization

Most alternatives either relay requests directly to the model provider or stop at “compressing text a bit.” TrexAPI combines context handling, TrexID reuse, usage reconciliation, and a production control surface in one path.

Dimension: How context is handled
  • Direct provider calls: Sends the full prompt upstream as-is.
  • Generic compression layer: Usually relies on truncation, summaries, or superficial compression.
  • TrexAPI: Runs Semantic Edge Dynamic Optimization first, then protects critical spans and compacts low-signal language.

Dimension: Exact protection for high-risk data
  • Direct provider calls: Either keep everything or split it manually.
  • Generic compression layer: Can easily rewrite identifiers, clauses, or fields along with normal prose.
  • TrexAPI: Identifiers, amounts, clauses, code, and structured fields are routed through an exact-preservation path.

Dimension: Reusable across systems
  • Direct provider calls: The same full context is resent on every call.
  • Generic compression layer: Compressed output is typically a one-off piece of text.
  • TrexAPI: Optimized output is packaged as a TokenZip payload and passed by TrexID reference across systems.

Dimension: Cost reconciliation
  • Direct provider calls: You only see the provider’s raw bill.
  • Generic compression layer: Rarely reconciles cleanly against real provider usage.
  • TrexAPI: Reconciles original tokens, billed tokens, and savings against upstream usage plus official pricing.

Dimension: Production control surface
  • Direct provider calls: You still need to build your own accounts, access, keys, and governance layer.
  • Generic compression layer: Usually covers a narrow optimization layer, not a full production surface.
  • TrexAPI: Accounts, API keys, payload lifecycle, and subscription policy live in the same platform.
Use-case fit

The key question is not whether you want a new interface. It is whether your context is valuable enough to optimize, cache, reuse, and reconcile.

Where TrexAPI fits best

  • You already pay OpenAI, Claude, or another model provider and want those calls to become more token-efficient and reusable.
  • Your workflows involve long context, multi-agent orchestration, cross-service chains, or context that gets reused more than once.
  • You want a drop-in OpenAI-compatible base URL, a Trex key for clients, and a dashboard flow for binding your own provider key.
  • You care whether the savings are real, and want usage, cost, cache reuse, and subscription guarantees reconciled instead of hand-waved.

Where direct calls may still be better

  • Your traffic is mostly short one-off prompts with no real context reuse or cache value.
  • You only want the thinnest possible pass-through and do not need TrexID, Semantic Edge Dynamic Optimization, billing reconciliation, or access controls.
  • You do not want to bind your own provider key in a control plane or introduce any additional operational surface.
  • You currently value the absolute smallest integration delta more than long-term token savings, reuse, and governance.
Privacy & security

TrexAPI is designed around a lower-exposure default path for credentials, context, and references.

For most teams, the real privacy and security question is whether raw context, upstream API keys, and access permissions are consolidated into fewer, clearer, governable paths.

Your upstream API key does not need to travel on every request

You can now bind your own OpenAI, Anthropic, or OpenRouter key in the dashboard. Trex reads it automatically on proxy calls so the upstream credential does not need to be resent on every request.

TrexID reduces repeated raw-context exposure

TrexAPI is designed to stop resending the full raw prompt whenever possible. Context settles into a TrexID so systems pass references and controlled payloads instead of repeatedly moving the original long text around.

High-risk spans stay on an exact path with access controls

Identifiers, clauses, amounts, code, and structured fields are routed through an exact-preservation path; payloads also carry TTL, allowed-receiver, and sender-agent metadata for controlled access.
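The access metadata described above (TTL, allowed receivers, sender agent) amounts to a gate checked before a payload is expanded. A minimal sketch, with illustrative field names that are not TrexAPI's documented schema:

```python
# Hypothetical enforcement of payload access metadata: a receiver may
# expand a payload only while its TTL holds and only if it is listed.
def can_expand(meta: dict, receiver: str, now: float) -> bool:
    """Gate expansion on TTL expiry and the allowed-receiver list."""
    if now > meta["created_at"] + meta["ttl_seconds"]:
        return False  # payload expired
    return receiver in meta["allowed_receivers"]

meta = {
    "created_at": 1_000.0,
    "ttl_seconds": 3600,
    "allowed_receivers": {"billing-agent", "oncall-agent"},
    "sender_agent": "release-bot",
}
```

Passing `now` explicitly keeps the check deterministic and easy to audit.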

Accounts, sessions, and keys live in one control plane

TrexAPI keeps account login, sessions, OAuth, API key create/revoke flows, and subscription-aware access in one place instead of scattering governance across multiple systems.

A common integration path is: bind your provider key once in the dashboard, call the compatible proxy with a Trex proxy key, settle high-value context into a TrexID, and reuse it by reference. That reduces repeated credential exposure and cuts down how often raw context has to move between systems. Trex handles the routing, optimization, and managed-proxy layer; upstream provider billing, availability, and policies remain governed by that provider.
Core mechanism

This is not plain compression. It is Semantic Edge Dynamic Optimization.

TrexAPI does not simply cut text down. It first identifies the context that actually matters, preserves high-risk spans exactly, semantically compacts low-signal narrative, and then packages the result as a TokenZip payload.

01

Semantic salience extraction

The system first isolates the parts of a long context that carry task value, such as facts, constraints, states, decisions, entity relationships, and objectives, rather than trimming every sentence equally.

02

Exact-preservation and structural protection

Code, field names, identifiers, amounts, dates, clause references, and structured fragments receive exact-preservation treatment; only ordinary narrative language is allowed into the semantic compaction layer.

03

Low-signal narrative compaction

Verbose modifiers, repeated framing, and low-density phrasing are compacted into shorter but semantically stable wording, turning rhetorical setup into explicit conclusions.

04

Trex payload packaging and TrexID pointerization

The result is packaged into quantized vectors, fallback text, and access metadata. Once stored in TrexAPI, it becomes a TrexID that downstream systems can reference directly.
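Step 02 above can be illustrated with a toy version of exact-span protection: identifier-like spans are pulled out and masked before any compaction touches the text. The patterns here are illustrative, not the production ruleset.

```python
# Toy sketch of exact-preservation: protect identifier-like spans
# (invoice codes, clause references) before semantic compaction runs.
import re

EXACT_PATTERNS = re.compile(r"\b(?:[A-Z]{2,}-\d+|Clause \d+(?:\.\d+)?)\b")

def protect_exact_spans(text: str):
    """Return the protected spans and the text with those spans masked."""
    spans = EXACT_PATTERNS.findall(text)
    masked = EXACT_PATTERNS.sub("[EXACT]", text)
    return spans, masked

spans, masked = protect_exact_spans(
    "Release the refund only after finance has verified Invoice INV-20481 "
    "and settlement Clause 7.2."
)
```

Only the masked remainder would ever be handed to the compaction layer; the protected spans are reinserted verbatim afterward.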

Compression examples

TrexAPI preserves what matters and compresses what does not.

These examples are not random shortening. They show how TrexAPI preserves conclusions, constraints, identifiers, and decision-bearing structure while compressing narrative overhead.

Narrative compaction
Before

The weather today is exceptionally radiant, with sunlight spilling across nearly the entire afternoon.

After

Today is bright with abundant sunlight.

Keeps the operative weather conclusion and removes decorative phrasing.
Exact-preservation example
Before

Release the refund only after finance has verified Invoice INV-20481 and settlement Clause 7.2.

After

Verify Invoice INV-20481 and Clause 7.2 before refund release.

Identifiers and clause references are treated as high-risk spans and preserved exactly.
High-density business semantics
Before

Because the board insists on preserving liquidity headroom through Q3, the 2026 rollout should prioritize revenue-adjacent markets and defer nonessential regional expansion.

After

Under the Q3 liquidity constraint, prioritize revenue-adjacent markets in the 2026 rollout and defer nonessential regions.

Preserves the constraint, timing, and decision while compressing the surrounding narrative.
TrexID flow

From optimized context to reusable TrexID memory

TrexAPI is not trying to output a shorter sentence and stop there. It turns processed context into a reusable semantic object: quantized, signed, stored, and expanded on demand when another system needs it.

Extract task-bearing context: Identify the semantic material that actually influences model decisions.
Protect exact high-risk spans: Route IDs, clauses, code, and structured fields through an exact path.
Package a quantized semantic payload: Organize the optimized result into a TokenZip payload.
Transmit it as a TrexID: Pass the TrexID between systems and expand only on demand.
That is why TrexAPI fits multi-agent, cross-service, and long-context workloads: you stop paying to re-send the same prompt body repeatedly and instead settle the context into a TrexID that can be expanded by reference where needed.

Start free, then scale into production usage

From a small free tier to founder pricing and scale plans, production access is managed directly from the dashboard, and the free tier already includes one production key. Lifetime-plan scope and eligibility are documented on the pricing page.

Limited lifetime · 100 left
Limited lifetime release · launch window · 100 / 100 left
This lifetime offer closes at sell-out

Founding Lifetime

$249/one-time

Includes lifetime access to the current Founders self-serve tier for teams that prefer a one-time Founders purchase.

  • Limited to the first 100 buyers
  • Unlimited free retrieval of existing TrexID cache
  • Excludes future enterprise, private deployment, and SLA
View lifetime offer
Limited lifetime · 50 left
Limited lifetime release · launch window · 50 / 50 left
This lifetime offer closes at sell-out

Pro Lifetime

$599/one-time

Includes lifetime access to the current Pro self-serve tier for teams that want to deploy production traffic through a one-time purchase.

  • Limited to the first 50 buyers
  • Includes the current Pro self-serve tier, excluding future enterprise terms
  • Excludes savings-share enterprise, private deployment, and SLA
View lifetime offer

Developer

$0/month
  • Up to 2,500 requests / month
  • Dashboard access
  • 1 production API key included
  • Community support
Sign Up
First 500
Early supporter window · closes at sell-out · 500 / 500 left

Founders

$9.90/month
  • Lifetime price while active
  • Production API key provisioning
  • First 500 users only
  • No Pro Edge renewal promise
Claim Founders

Pro Edge

$40/month
  • Up to 500,000 requests / month
  • Production API access
  • Priority email support
  • Includes the Pro Edge renewal promise
View Pro Edge

Scale Edge

$120/month
  • Up to 2M requests / month
  • Higher-throughput support lane
  • Production rollout planning
  • No Pro Edge renewal promise
View Scale Edge

Enterprise

Base fee + savings share
  • Base platform fee
  • Optional private deployment or SLA fee
  • 5% to 10% of verified net savings
  • Custom capacity, contract, and rollout planning
Contact Sales

Only the standard Pro Edge monthly plan at $40/month currently includes our token savings promise: if the savings from routing your own OpenAI, Claude, or equivalent provider traffic through TrexAPI do not exceed the subscription fee for the month, we extend the next month at no charge instead of refunding cash. As long as the subscription stays active, retrieving existing cache by TrexID also remains free and unlimited.