
Postback Debugging for CPA: Finding Why Tracker, Network, and Advertiser Numbers Don’t Match

In CPA marketing, “postback debugging” is the process of proving, with logs and request data, where a conversion got lost, duplicated, delayed, or attributed differently across your tracker, the network, and the advertiser. In 2026, most disputes still come down to the same few technical roots: missing or altered click IDs, broken redirects, strict parameter validation, deduplication rules, and timing differences caused by queues and time zones. The aim is not to “make the numbers look nicer”, but to isolate the exact hop that failed and collect evidence strong enough for a network or advertiser to act on.

Start With a Single Conversion and Rebuild the Full Path

The fastest way to debug is to stop looking at aggregates and pick one conversion that is clearly “wrong” (for example: visible in the network but absent in the tracker, or shown in the tracker but rejected by the advertiser). Gather its identifiers from every side: your click ID, the network’s click reference, the advertiser’s transaction/order ID, and any event ID your tracker generated. If one party cannot provide an ID, that is already a signal: either the identifier was never passed, it was overwritten, or the event was generated from a different flow than you expect.

Next, rebuild the timeline. Use UTC as your common language and write down the click time, redirect time(s), landing time (if available), and conversion time reported by each system. A “missing conversion” often becomes “late conversion” once you line up timestamps: queued server-to-server callbacks, retry backoff, and batch imports can shift reporting by minutes or hours. If your tracker uses local time while a partner exports in UTC, the mismatch can also appear as a “day boundary” problem rather than a true loss.
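
As a sketch of this step, the snippet below normalises three hypothetical timestamps to UTC and prints them in order; the systems, offsets, and times are all illustrative:

```python
from datetime import datetime, timezone, timedelta

# Timestamps as each system reported them, with the timezone attached explicitly.
events = {
    "click (tracker, UTC)":
        datetime(2026, 1, 15, 23, 40, tzinfo=timezone.utc),
    "conversion (advertiser, UTC+3)":
        datetime(2026, 1, 16, 2, 55, tzinfo=timezone(timedelta(hours=3))),
    "postback received (tracker, UTC)":
        datetime(2026, 1, 16, 0, 10, tzinfo=timezone.utc),
}

# Convert everything to UTC before comparing; apparent "day boundary"
# losses usually disappear at this step.
for name, ts in sorted(events.items(), key=lambda kv: kv[1]):
    print(f"{ts.astimezone(timezone.utc).isoformat()}  {name}")
```

Note how the advertiser's "January 16" conversion lands on January 15 once converted to UTC, which is the day-boundary effect described above.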

Finally, verify the path in order: traffic source → tracker click endpoint → any intermediate redirects → offer URL → advertiser landing → advertiser conversion → network postback → tracker conversion endpoint. Your goal is to confirm that the same unique identifier (usually the click ID) survives every hop without being truncated, re-encoded incorrectly, or replaced. When you find the first hop where the ID disappears or changes, you’ve found the section that needs fixing.
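
A minimal way to walk the chain by hand is sketched below; it assumes the third-party `requests` library, and the tracker URL and parameter name are hypothetical:

```python
from urllib.parse import urlparse, parse_qs, urljoin
import requests  # third-party; any HTTP client works

CLICK_ID = "abc123"
url = f"https://tracker.example.com/click?clickid={CLICK_ID}"

while url:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    params = parse_qs(urlparse(url).query)
    id_present = any(CLICK_ID in vals for vals in params.values())
    print(f"{resp.status_code}  id_present={id_present}  {url}")
    loc = resp.headers.get("Location")        # next hop, if any
    url = urljoin(url, loc) if loc else None  # handle relative redirects
```

The first hop that prints `id_present=False` is where the identifier was dropped, truncated, or rewritten.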

What to Capture in Logs So You Can Prove the Root Cause

For each hop, capture the raw request and the raw response. At minimum, keep: full URL (including query string), HTTP method, status code, response headers that affect caching or redirects, IP address, user agent (for browser hops), and the exact timestamp with timezone. For server-to-server callbacks, add the request body (if any), signature fields (HMAC/token), and the partner’s source IP or ASN if you validate origins. It is difficult to win a dispute with “we think it fired”; it is easy to win with a request line, a 200/4xx/5xx status, and a matching ID.
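
As an illustration, a per-hop log record carrying these fields might be a single JSON line like the following (every value is a placeholder):

```python
import json
from datetime import datetime, timezone

record = {
    "ts_utc": datetime.now(timezone.utc).isoformat(),
    "hop": "network_postback",
    "method": "GET",
    "url": "https://tracker.example.com/postback?clickid=abc123&payout=1.50",
    "status": 200,
    "source_ip": "203.0.113.10",
    "user_agent": None,          # S2S callbacks rarely carry a browser UA
    "signature_valid": True,     # HMAC/token check result, if you validate
    "response_headers": {"Cache-Control": "no-store"},
}
print(json.dumps(record))  # one line per hop keeps later searching trivial
```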

Store the “normalised” version of key identifiers alongside the raw version. Normalisation means: URL-decoded once, trimmed, case-handled consistently, and validated against allowed character sets and length. A common real-world issue is that one side sends an ID with characters that survive in a browser but fail a strict backend validator, or an ID gets decoded twice, so “%2B” becomes “+” and then a space. If your system rejects or rewrites parameters, log both the original and the stored value.
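
A minimal normalisation sketch, with an assumed allowed character set and length limit, plus the double-decoding trap in action:

```python
from urllib.parse import unquote
import re

# Assumed policy: alphanumerics, underscore and hyphen, max 64 chars.
ALLOWED = re.compile(r"[A-Za-z0-9_-]{1,64}")

def normalise_click_id(raw: str):
    """Return (original, normalised) so both can be logged; None if invalid."""
    decoded = unquote(raw).strip()  # decode exactly once, then trim
    return raw, decoded if ALLOWED.fullmatch(decoded) else None

# The double-decoding trap: "%252B" -> "%2B" -> "+", and "+" then becomes
# a space under form decoding, so the stored ID no longer matches the click.
print(unquote("abc%252Bdef"))           # abc%2Bdef  (one decode: correct)
print(unquote(unquote("abc%252Bdef")))  # abc+def    (two decodes: wrong)
```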

Also log decision points, not just requests. That includes: deduplication decisions (why an event was considered a duplicate), attribution selection (which click won and why), fraud or quality filters triggered, and any mapping logic that converts partner statuses into your internal statuses. In many “tracker vs network” disputes, the postback arrived fine, but your tracker intentionally ignored it due to a rule that is correct for one partner and wrong for another.
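
A sketch of what a decision log line could look like; the field names and rule identifiers are illustrative:

```python
import json

def log_decision(event_id: str, decision: str, reason: str, rule: str) -> None:
    # One structured line per decision makes "why was this dropped?" answerable.
    print(json.dumps({"event_id": event_id, "decision": decision,
                      "reason": reason, "rule": rule}))

log_decision("evt_001", "deduplicated",
             "same transaction_id seen within the 7-day window",
             "dedupe.txn_id.7d")
log_decision("evt_002", "rejected",
             "postback arrived after click retention period",
             "attribution.window.30d")
```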

Validate the Postback Payload and Matching Logic End-to-End

Once you can trace the conversion path, focus on the exact payload that should match a click to a conversion. In practice, this usually means one primary key (click ID) plus supporting fields: payout, currency, event type, goal ID, and transaction/order ID. Confirm that the click ID the advertiser or network uses is exactly the one your tracker generated. If the network uses its own reference, confirm the mapping from your click ID to their reference is stored reliably at click time and is retrievable later.
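
One way to sketch that click-time mapping, using an in-memory SQLite table as a stand-in for whatever store your tracker actually uses:

```python
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for the tracker's real store
db.execute("""CREATE TABLE click_map (
    click_id TEXT PRIMARY KEY,
    network_ref TEXT NOT NULL,
    created_utc TEXT NOT NULL
)""")

def record_click(click_id: str, network_ref: str, ts_utc: str) -> None:
    # Written at click time, so the mapping exists before any dispute does.
    db.execute("INSERT OR IGNORE INTO click_map VALUES (?, ?, ?)",
               (click_id, network_ref, ts_utc))

record_click("abc123", "net-98765", "2026-01-15T23:40:00Z")
print(db.execute("SELECT * FROM click_map WHERE click_id = ?",
                 ("abc123",)).fetchone())
```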

Then validate your matching logic under real constraints. Does your tracker require the click to exist, be within an attribution window, and be from an allowed geo/device? If the conversion comes in after the click’s retention period, your tracker may drop it even though the network still counts it. If you rotate sub-IDs or pass multiple IDs (for example, both a source click ID and a network click reference), confirm which one is the true key on each side and ensure you are not accidentally overwriting the one that matters.
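
The checks above could be sketched like this, with an assumed seven-day window and illustrative geo rules:

```python
from datetime import datetime, timedelta, timezone

ATTRIBUTION_WINDOW = timedelta(days=7)  # assumed; align it with the partner

def match_conversion(click, conv_ts: datetime, geo: str) -> str:
    if click is None:
        return "reject: click_id unknown or expired"
    if conv_ts - click["ts"] > ATTRIBUTION_WINDOW:
        return "reject: outside attribution window"
    if geo not in click["allowed_geos"]:
        return "reject: geo not allowed"
    return "match"

click = {"ts": datetime(2026, 1, 1, tzinfo=timezone.utc),
         "allowed_geos": {"US", "CA"}}
conv_ts = datetime(2026, 1, 10, tzinfo=timezone.utc)
print(match_conversion(click, conv_ts, "US"))
# -> reject: outside attribution window (the network may still count it)
```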

Finally, check for “silent failures” in transport. A postback might be sent but never recorded if the partner receives non-2xx responses and does not retry, or if your endpoint replies 200 but fails internally (queue overflow, validation error after response, database write failure). In 2026, many systems use asynchronous processing, so you must distinguish between “request accepted” and “event persisted”. Your logs should show whether a callback was merely received, or also processed into a final conversion record.
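
A toy sketch of that distinction, with an in-memory queue standing in for a real broker; the separate “received” and “persisted” log stages are the point:

```python
import json
import queue

incoming = queue.Queue()

def postback_endpoint(params: dict) -> int:
    incoming.put(params)
    print(json.dumps({"stage": "received", **params}))
    return 200  # "accepted" only: nothing is persisted yet

def worker(store: dict) -> None:
    while not incoming.empty():
        ev = incoming.get()
        try:
            store[ev["clickid"]] = ev  # the actual write
            print(json.dumps({"stage": "persisted", **ev}))
        except KeyError as exc:
            print(json.dumps({"stage": "failed", "error": repr(exc), **ev}))

store = {}
postback_endpoint({"clickid": "abc123", "payout": "1.50"})
worker(store)
```

If your logs only ever show the first stage, you cannot tell a processed conversion from one that died in the queue.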

Common Technical Mismatch Patterns You Can Test Quickly

Parameter drift is still the most frequent cause: wrong parameter name, missing required parameter, or sending the right value under the wrong key. If a partner uses “clickid” but your endpoint expects “cid”, you may record the request but fail to match it. Similarly, “goal” vs “event”, numeric goal IDs vs string goal names, and currency formatting differences can make an event look valid while breaking your reconciliation downstream. The fix is usually boring: publish a single authoritative spec per partner and enforce it at integration time.
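
A sketch of enforcing a per-partner spec at the edge; the partner names and key mappings are hypothetical:

```python
# Each partner's incoming keys map to one canonical internal schema.
PARTNER_SPECS = {
    "network_a": {"clickid": "click_id", "goal": "goal_id", "sum": "payout"},
    "network_b": {"cid": "click_id", "event": "goal_id", "amount": "payout"},
}

def canonicalise(partner: str, params: dict) -> dict:
    spec = PARTNER_SPECS[partner]
    unknown = set(params) - set(spec)
    if unknown:
        # Fail loudly at integration time instead of silently mismatching.
        raise ValueError(f"unexpected keys from {partner}: {unknown}")
    return {spec[k]: v for k, v in params.items()}

print(canonicalise("network_b", {"cid": "abc123", "event": "7", "amount": "1.50"}))
# -> {'click_id': 'abc123', 'goal_id': '7', 'payout': '1.50'}
```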

Duplicates and double-fires come next. They happen when a user refreshes a thank-you page, when an advertiser fires both browser pixel and server-to-server for the same action, or when a partner retries aggressively without an idempotency key. If your tracker deduplicates by transaction ID but the advertiser doesn’t send one, you might count two events where the network counts one (or the reverse). Make deduplication explicit: define which field is the idempotency key, what happens when it’s absent, and how long you keep the dedupe window.
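
A sketch of explicit deduplication with a stated fallback; the fallback policy here (click ID plus goal ID) is just one possible choice:

```python
import hashlib

seen = set()  # in production, entries expire with the dedupe window

def dedupe_key(ev: dict) -> str:
    if ev.get("transaction_id"):
        return f"txn:{ev['transaction_id']}"
    # Fallback when no transaction ID is sent: a policy decision you must
    # make explicitly, not an accident of implementation.
    raw = f"{ev['click_id']}|{ev['goal_id']}"
    return "fb:" + hashlib.sha256(raw.encode()).hexdigest()

def accept(ev: dict) -> bool:
    key = dedupe_key(ev)
    if key in seen:
        return False  # duplicate: log the decision, do not count it
    seen.add(key)
    return True

print(accept({"click_id": "abc123", "goal_id": "7", "transaction_id": "T1"}))  # True
print(accept({"click_id": "abc123", "goal_id": "7", "transaction_id": "T1"}))  # False
```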

Attribution-rule differences can look like “missing postbacks” even when everything is firing. If the network credits last click but your tracker uses first click (or different lookback windows by channel), you will see legitimate discrepancies. This is especially visible with cross-device and privacy-restricted traffic, where deterministic IDs are limited and systems fall back to probabilistic or modelled attribution. The practical approach is to align rules per offer: define the winning click rule, window, and reattribution conditions, then verify that all systems are configured to the same policy.
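
The sketch below shows how two internally consistent policies credit different clicks from the same data, which is exactly how a legitimate discrepancy arises:

```python
from datetime import datetime, timezone

clicks = [
    {"id": "c1", "ts": datetime(2026, 1, 1, tzinfo=timezone.utc)},
    {"id": "c2", "ts": datetime(2026, 1, 3, tzinfo=timezone.utc)},
]

def winning_click(clicks: list, policy: str) -> dict:
    ordered = sorted(clicks, key=lambda c: c["ts"])
    return ordered[0] if policy == "first_click" else ordered[-1]

print(winning_click(clicks, "first_click")["id"])  # c1: what your tracker credits
print(winning_click(clicks, "last_click")["id"])   # c2: what the network credits
```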

[Figure: S2S callback flow]

Build a Repeatable Debug Pack for the Network or Advertiser

When you escalate, speed matters. Create a “debug pack” that includes: a single example click and conversion, all IDs from every side, the UTC timeline, the raw postback/callback request, your endpoint response, and the processing outcome in your tracker. Include the offer ID, goal name/ID, and the exact payout/currency you expected. This avoids long back-and-forth where each side asks for a different identifier and weeks get lost in translation.
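
A possible skeleton for such a pack, with every value a placeholder for the real identifiers you collected:

```python
import json

debug_pack = {
    "offer_id": "OFF-42",
    "goal": {"id": "7", "name": "deposit"},
    "ids": {"tracker_click_id": "abc123", "network_ref": "net-98765",
            "advertiser_txn_id": "T1"},
    "timeline_utc": {"click": "2026-01-15T23:40:00Z",
                     "conversion": "2026-01-15T23:55:00Z",
                     "postback_received": "2026-01-16T00:10:00Z"},
    "postback": {"raw_request": "GET /postback?clickid=abc123&payout=1.50",
                 "our_response": 200,
                 "processing_outcome": "rejected: outside attribution window"},
    "expected": {"payout": "1.50", "currency": "USD"},
}
print(json.dumps(debug_pack, indent=2))
```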

Make your evidence easy to verify. If you can, include a hashed version of sensitive values (for example, SHA-256 of an email) rather than the raw personal data, and indicate exactly how it was hashed (normalisation steps, salt usage). In 2026, privacy obligations are not optional; partners will often refuse to share raw user-level data, but they will compare hashes if you give them a consistent method. This allows you to prove identity matches without exchanging personal details.
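
A sketch of the consistent method you would share; the normalisation steps are spelled out in code, and no salt is used (if you add one, agree on it out of band):

```python
import hashlib

def hash_email(email: str) -> str:
    # Normalisation must be identical on both sides: trim, then lowercase.
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

print(hash_email("  User@Example.com "))
print(hash_email("user@example.com"))  # identical digest: the method matches
```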

Lastly, propose a fix and a validation step. If the issue is a non-2xx response, show the status code and recommend retry behaviour or whitelisting. If it’s a parameter name mismatch, attach the corrected format. If it’s attribution policy, ask the partner to confirm the exact rule in writing and mirror it in your tracker. Then agree on a short test window where you both monitor a known set of test clicks and conversions to confirm the discrepancy is resolved.

How to Prevent the Same Discrepancy From Returning

Add automated monitoring that compares counts and revenue across systems with sensible thresholds and time delays. You don’t want alerts every time a queue is slow, but you do want to know when the gap exceeds normal variance for a specific offer or traffic source. Break monitoring down by offer, goal, and major source so you can isolate problems quickly instead of staring at a blended dashboard that hides the issue.
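
A sketch of a per-offer gap check; the 5% tolerance is an assumption to be tuned against your historical variance, and counts should come from periods old enough for queues and retries to settle:

```python
TOLERANCE = 0.05  # alert only beyond normal variance; tune per offer

def check_gap(offer: str, tracker_count: int, network_count: int) -> None:
    base = max(tracker_count, network_count, 1)
    gap = abs(tracker_count - network_count) / base
    if gap > TOLERANCE:
        print(f"ALERT {offer}: tracker={tracker_count} "
              f"network={network_count} gap={gap:.1%}")

check_gap("OFF-42", tracker_count=95, network_count=112)   # fires (~15%)
check_gap("OFF-43", tracker_count=100, network_count=103)  # quiet (~3%)
```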

Introduce contract-like specs for integrations: parameter keys, encoding rules, signature method, retry policy, and idempotency behaviour. Store the spec where both technical and account teams can find it, and require sign-off when something changes. Most “sudden” discrepancies are not mysterious; someone changed a redirect, rotated a token, updated a template, or enabled a new fraud rule without verifying the end-to-end flow.

Finally, keep a small library of test cases that reflect your real traffic: delayed conversions, duplicate callbacks, missing optional fields, and different device types. Re-run them after tracker updates, network changes, or offer migrations. Postback debugging is easiest when you treat it as engineering hygiene rather than an emergency response to angry messages about missing payouts.
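
One way to sketch such a library is to keep the cases as plain data and replay them through the pipeline; the scenario fields below are illustrative:

```python
TEST_CASES = [
    {"name": "delayed_conversion", "click_age_days": 9, "expect": "reject"},
    {"name": "duplicate_callback", "repeat": 2, "expect": "count_once"},
    {"name": "missing_txn_id", "transaction_id": None, "expect": "fallback_key"},
    {"name": "mobile_device", "device": "ios", "expect": "match"},
]

def replay(case: dict) -> None:
    # Replace this print with a call into your real postback pipeline and
    # an assertion on the processing outcome.
    print(f"replaying {case['name']!r}, expecting {case['expect']!r}")

for case in TEST_CASES:
    replay(case)
```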