How Stellar Tracks Conversions

Last Updated: 2024-06-04

Understanding how Stellar tracks and attributes conversions is essential for interpreting your A/B test results accurately. This guide explains our conversion tracking methodology and how we attribute user actions to your experiments.

1. Types of Conversions

Stellar offers two primary methods of counting conversions to provide flexibility in how you analyze your experiment results:

  • Total Conversions: Counts every conversion event, even when the same visitor converts multiple times. This metric is valuable when measuring overall engagement or revenue-generating actions.
  • Unique Conversions: Counts at most one conversion per visitor, regardless of how many times they convert. This metric helps you understand how many distinct users completed the desired action and is typically more reliable for measuring true experiment impact.

Note: While both metrics provide valuable insights, unique conversions often provide a more accurate picture of experiment performance. Total conversions can be skewed by individual users triggering multiple events (such as refreshing a page repeatedly or making several small purchases), which may not reflect the true effectiveness of your variants.
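The difference between the two counting methods can be illustrated with a short sketch. This is a hedged illustration only, not Stellar's internal implementation; the `(visitor_id, event_name)` event shape is an assumption made for the example:

```python
def count_conversions(events):
    """Count total and unique conversions from a list of
    (visitor_id, event_name) conversion events."""
    total = len(events)                                   # every event counts
    unique = len({visitor_id for visitor_id, _ in events})  # distinct visitors
    return total, unique

# One visitor ("v1") converts three times; another ("v2") converts once.
events = [("v1", "purchase"), ("v1", "purchase"),
          ("v1", "purchase"), ("v2", "purchase")]

total, unique = count_conversions(events)
# total == 4, unique == 2: the repeat purchaser inflates the total
# count but is still only one unique conversion.
```

Note how a single enthusiastic (or page-refreshing) visitor can double the total count without changing the unique count at all.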

2. Visitor Identification

We track users across sessions using a persistent visitor ID that's specific to each device and browser combination:

  • Each visitor is assigned a unique identifier when they first interact with your site.
  • A user who returns multiple times across a given period is still counted as one unique visitor, provided they use the same device and browser.
  • This persistent tracking enables accurate attribution of conversions to experiment variants, even across multiple sessions.
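Conceptually, the identification step behaves like the following sketch, where a Python dict stands in for first-party browser storage (a cookie or localStorage). The storage key name is hypothetical, chosen for illustration:

```python
import uuid

def get_or_create_visitor_id(storage: dict) -> str:
    """Return the persistent visitor ID from browser storage,
    creating one on the visitor's first interaction.
    `storage` stands in for a first-party cookie or localStorage;
    the key name "stellar_visitor_id" is a placeholder."""
    if "stellar_visitor_id" not in storage:
        storage["stellar_visitor_id"] = str(uuid.uuid4())
    return storage["stellar_visitor_id"]

browser = {}                                  # a fresh device/browser combination
first = get_or_create_visitor_id(browser)     # ID assigned on first interaction
later = get_or_create_visitor_id(browser)     # same visitor on a return visit
assert first == later                         # counted as one unique visitor

other_browser = {}                            # same person, different browser
assert get_or_create_visitor_id(other_browser) != first
```

The last assertion shows the flip side of device-and-browser scoping: the same person on a second browser is treated as a new visitor.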

3. Conversion Attribution

For a conversion to be attributed to an experiment, the following conditions must be met:

  • Experiment Exposure: The experiment must have been mounted (shown) to the user before their conversion can be counted.
  • Cross-Session Attribution: If a user has viewed an experiment variant, any subsequent conversions will be attributed to that variant, even if they occur in future sessions.
  • No Attribution Without Exposure: Users who have never been exposed to an experiment cannot generate conversions for that experiment.

Attribution Example

If you're tracking page visits to /thank-you as a conversion event:

  • A user who visits your site and sees an experiment variant, then returns three days later and completes a purchase (reaching the thank-you page), will have that conversion attributed to the experiment variant they saw.
  • A user who visits your site but never encounters the experiment (perhaps they never visited the page where the experiment runs) will not generate a conversion for the experiment, even if they complete a purchase.
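The attribution rules above can be sketched as a small decision function. The data shapes (an exposure map of visitor ID to variant and timestamp) are assumptions for illustration, not Stellar's actual schema:

```python
from datetime import datetime

def attribute_conversion(exposures, visitor_id, converted_at):
    """Return the variant a conversion is attributed to, or None.

    `exposures` maps visitor_id -> (variant, exposed_at). A conversion
    counts only if the visitor saw the experiment before converting,
    and it is attributed to the variant they saw -- even when the
    conversion happens in a later session."""
    if visitor_id not in exposures:
        return None                    # no attribution without exposure
    variant, exposed_at = exposures[visitor_id]
    if converted_at < exposed_at:
        return None                    # conversion predates exposure
    return variant

exposures = {"v1": ("variant-b", datetime(2024, 6, 1))}

# v1 saw variant-b, then reached /thank-you three days later: attributed.
assert attribute_conversion(exposures, "v1", datetime(2024, 6, 4)) == "variant-b"
# v2 purchased but was never exposed: no conversion for the experiment.
assert attribute_conversion(exposures, "v2", datetime(2024, 6, 4)) is None
```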

4. Experiment Status and Conversion Tracking

The status of your experiment affects how conversions are tracked:

  • Active Experiments: Continue to accumulate conversions from users who have been exposed to the experiment, following the standard attribution principle that a user's behavior remains influenced by the variant they experienced.
  • Paused Experiments: Do not accumulate new conversions while in the paused state. When an experiment is paused, no new users are exposed to variants, and conversion tracking is temporarily suspended.
  • Resumed Experiments: Begin tracking conversions again once reactivated, continuing to attribute conversions from previously exposed users and new users.
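The status rules reduce to a simple gate, sketched below. The status strings are placeholders for illustration; a resumed experiment is simply active again:

```python
def should_record_conversion(status: str, visitor_was_exposed: bool) -> bool:
    """Decide whether a conversion event is recorded (simplified sketch).

    Conversions accumulate only while the experiment is active, and only
    for visitors who were previously exposed to a variant. Pausing
    suspends tracking; resuming returns the status to "active"."""
    return status == "active" and visitor_was_exposed

assert should_record_conversion("active", visitor_was_exposed=True)
assert not should_record_conversion("active", visitor_was_exposed=False)
assert not should_record_conversion("paused", visitor_was_exposed=True)
```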

5. Best Practices for Conversion Tracking

To ensure accurate and meaningful results from your A/B tests:

  • Define Clear Conversion Goals: Establish specific, measurable actions that align with your business objectives.
  • Consider Conversion Windows: Determine an appropriate timeframe for tracking conversions after exposure to your experiment.
  • Analyze Both Metrics: Review both total and unique conversions to gain comprehensive insights into user behavior, with special attention to unique conversions to avoid data skew from repeat actions.
  • Monitor Traffic Distribution: Ensure your experiment receives sufficient traffic for statistically significant results.
  • Watch for Anomalies: Be alert to sudden spikes in total conversions that aren't matched by unique conversions, as this may indicate a small number of users performing repeated actions rather than genuine experiment success.
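A simple way to operationalize the anomaly check in the last bullet is to watch the ratio of total to unique conversions. The 3x threshold below is an arbitrary illustration, not a Stellar default; pick a value appropriate for your conversion event:

```python
def looks_like_repeat_action_anomaly(total: int, unique: int,
                                     max_ratio: float = 3.0) -> bool:
    """Flag results where total conversions far outpace unique ones,
    suggesting a few visitors firing many repeat events rather than
    genuine lift. The 3.0 threshold is an illustrative assumption."""
    if unique == 0:
        return total > 0        # conversions with no unique visitors is suspect
    return total / unique > max_ratio

# 120 events from 100 visitors: a plausible level of repeat behavior.
assert not looks_like_repeat_action_anomaly(total=120, unique=100)
# 900 events from 100 visitors: almost certainly repeated actions.
assert looks_like_repeat_action_anomaly(total=900, unique=100)
```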