
Stop Running Experiments on the Wrong Visitors

How behavioral triggers let you test on visitors who are actually ready to convert

David S.
Founder, Segmently
April 22, 2026 · 9 min read

Most A/B tests fail before they start because they expose every visitor to every experiment, regardless of intent. Behavioral triggers fix that. Here is how to use scroll depth, time on page, click triggers, and referral source to run sharper experiments and get cleaner data.

Here is a scenario that plays out on thousands of websites every day. A team runs a headline A/B test. Traffic is split 50/50. After two weeks they have a "winner" with a 2% lift in conversions. They ship it. Nothing happens to revenue.

The test was not wrong. The targeting was. The experiment was running on every visitor who hit the page: people who bounced in three seconds, people who arrived from a spam referrer, people who had already seen the page a dozen times. Mixing high-intent and low-intent visitors in the same test pool dilutes the signal until it is almost meaningless.

Behavioral triggers are how you fix this. Instead of firing an experiment the moment someone lands on a page, you wait until they have demonstrated some level of intent. You test the visitors who are actually worth testing.

What we shipped in v1.58.0

Segmently now ships two new targeting capabilities in the experiment wizard: behavioral triggers and visitor type targeting. Both are configured in the Audience tab (no code required), and both work with every existing experiment type, including visual editor experiments.

Behavioral triggers let you delay experiment activation until a visitor meets a specific behavioral condition. Visitor type targeting lets you restrict an experiment to new visitors only, returning visitors only, or everyone.

Let's walk through each one.

Behavioral trigger 1: Scroll depth

Scroll depth fires your experiment when a visitor has scrolled a specified percentage of the page. You set a threshold between 0% and 100% in the wizard.
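Conceptually, a scroll-depth check compares how far the viewport has moved against the page's total scrollable height. Here is a minimal sketch of that calculation (illustrative only, not Segmently's actual snippet code):

```javascript
// Compute how far down the page the visitor has scrolled, as a percentage.
// scrollTop: pixels scrolled so far; viewportHeight: visible window height;
// pageHeight: full document height.
function scrollDepthPercent(scrollTop, viewportHeight, pageHeight) {
  const scrollable = pageHeight - viewportHeight;
  if (scrollable <= 0) return 100; // page fits in the viewport: fully scrolled
  return Math.min(100, Math.round((scrollTop / scrollable) * 100));
}

// Activate the experiment once the configured threshold is crossed.
function shouldActivate(scrollTop, viewportHeight, pageHeight, thresholdPercent) {
  return scrollDepthPercent(scrollTop, viewportHeight, pageHeight) >= thresholdPercent;
}
```

In a browser, those inputs come from `window.scrollY`, `window.innerHeight`, and `document.documentElement.scrollHeight`, sampled inside a (usually throttled) scroll listener.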

The classic use case is long-form content. If you have a blog post, a long landing page, or a product page with heavy detail, a visitor who has scrolled 60% is categorically different from one who just landed. They read. They are interested. An experiment served to them carries real signal.

Scroll depth is also powerful for testing mid-page CTAs and section variations. If you want to test a CTA that lives 40% down a page, there is no point including visitors who never reached it in your conversion rate calculation. The trigger removes them automatically.

Rule of thumb: set scroll depth to match the position of the element you are testing. Testing a CTA at the 50% mark? Trigger at 45%. You want visitors who reached the element, not just everyone who visited the URL.

Behavioral trigger 2: Time on page

Time on page fires your experiment after a visitor has been on the page for a set number of seconds. Values of 5 to 30 seconds are the most common starting points.

This trigger is primarily about filtering bounces. A visitor who spends three seconds on your page and leaves is not a conversion opportunity. Including them in your A/B test data adds noise without adding value. Even a modest 5-second threshold cuts a significant portion of non-engaged traffic from your experiment.

Time on page also works well for intent signals. SaaS pricing pages, for example, see a meaningful split between visitors who scan and leave versus visitors who spend 20 or more seconds comparing plans. An experiment that only activates for the latter group is testing actual buyers, not browsers.

  • 5 seconds: Filters immediate bounces, good default for most experiments
  • 15 seconds: Targets engaged readers, works well for content-heavy pages
  • 30 seconds: Targets high-intent visitors, ideal for pricing and checkout pages
  • 60+ seconds: Targets deeply engaged visitors, useful for onboarding flows
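The logic behind a time-on-page gate is simple: capture a timestamp when the snippet initializes, then check elapsed time against the threshold. A simplified sketch of the idea (not Segmently's actual implementation):

```javascript
// Gate an experiment behind a minimum time on page.
// startedAt is captured when the snippet initializes (e.g. Date.now()).
function makeTimeOnPageGate(startedAt, thresholdSeconds) {
  return function hasMetThreshold(now) {
    return (now - startedAt) / 1000 >= thresholdSeconds;
  };
}
```

In practice you would create the gate at page load, `const gate = makeTimeOnPageGate(Date.now(), 5);`, and either poll it on user events or schedule the activation with a timer.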

Behavioral trigger 3: Element click

Element click fires your experiment when a visitor clicks a specific element, identified by a CSS selector. You enter the selector in the wizard, and the experiment activates either on the current page or on the next page view after the click, depending on your configuration.

This trigger unlocks a category of experiment that was previously difficult: interaction-gated testing. You can now show one version of a confirmation modal to visitors who clicked "Add to cart" and a different version to those who clicked "Save for later." The act of clicking becomes the qualifier.

Other practical applications include: testing variations of a free trial CTA only to visitors who already clicked on pricing, testing checkout page copy only to visitors who clicked "Start checkout," or testing onboarding steps only to visitors who clicked a specific feature in the app.

The CSS selector field accepts any valid selector: class names (.btn-primary), IDs (#checkout-btn), element plus attribute combinations (button[data-action="upgrade"]), and compound selectors. If you are unsure of the selector, open your browser DevTools, right-click the element, and copy the CSS selector.
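Event delegation, the mechanism behind this trigger, means attaching one listener high in the document and walking up from the clicked node to see whether it, or any ancestor, matches the selector. In the browser that walk is essentially `event.target.closest(selector) !== null`. A toy model of the ancestor walk, using plain objects instead of DOM nodes:

```javascript
// Toy model of event delegation: starting from the clicked node, walk up
// through its ancestors and report whether any of them matches a predicate.
// Real code would use event.target.closest(selector) on actual DOM nodes.
function delegatedMatch(node, matches) {
  for (let current = node; current; current = current.parent) {
    if (matches(current)) return true;
  }
  return false;
}

// Hypothetical node chain: body > section > button.add-to-cart
const body = { cls: "", parent: null };
const section = { cls: "hero", parent: body };
const button = { cls: "add-to-cart", parent: section };
```

A click on `button` matches the predicate for `.add-to-cart`; a click elsewhere in the section does not, so the experiment stays dormant.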

Behavioral trigger 4: Referral source

Referral source lets you restrict an experiment to visitors arriving from a specific traffic channel. The available categories are: organic search, paid advertising, social media, direct traffic, email campaigns, and other.

This solves a targeting problem that is especially common for teams running paid acquisition campaigns. Paid traffic and organic traffic have different intent profiles, different price sensitivities, and different conversion patterns. An experiment that tests the same copy on both groups can return a result that applies to neither.
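Channel classification generally combines `document.referrer` with campaign parameters on the landing URL. The heuristic below is an illustration of the idea only; Segmently's actual categorization rules are richer than this:

```javascript
// Illustrative heuristic for bucketing a visit into a traffic channel.
// referrer: document.referrer (may be empty); landingUrl: the current page URL.
function classifyChannel(referrer, landingUrl) {
  const params = new URL(landingUrl).searchParams;
  const medium = (params.get("utm_medium") || "").toLowerCase();
  if (["cpc", "ppc", "paid"].includes(medium) || params.has("gclid")) return "paid advertising";
  if (medium === "email") return "email campaigns";
  if (!referrer) return "direct traffic";
  const host = new URL(referrer).hostname;
  if (/google\.|bing\.|duckduckgo\./.test(host)) return "organic search";
  if (/facebook\.|linkedin\.|instagram\./.test(host)) return "social media";
  return "other";
}
```

The ordering matters: paid and email markers on the landing URL are checked before the referrer, because an ad click from a search engine should count as paid, not organic.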

With referral source targeting, you can run a landing page headline test exclusively for paid visitors, a trust-signal test exclusively for organic visitors, and a re-engagement test exclusively for visitors arriving from your email list. Three separate experiments, each with clean data.

One of the most reliable use cases: run your core pricing page experiment on organic visitors only. Organic visitors are comparison shopping. Paid visitors already clicked an ad that made a specific promise. They are in different decision states. Mixing them is how you get inconclusive results.

Visitor type targeting: new vs. returning

Separate from behavioral triggers, the Audience tab now includes a Visitor Type selector with three options: all visitors, new visitors only, and returning visitors only.

Visitor type is determined by a first-party cookie. If the cookie is not present, the visitor is classified as new. If it is present, they are classified as returning. No additional setup is required.
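The cookie check itself is a small piece of logic: look for the cookie in `document.cookie`, and classify based on its presence. A sketch, with a hypothetical cookie name (the real snippet's name may differ):

```javascript
// Classify a visitor as "new" or "returning" based on a first-party cookie.
// COOKIE_NAME is hypothetical, for illustration only.
const COOKIE_NAME = "sgm_returning";

function classifyVisitor(cookieHeader) {
  const isReturning = cookieHeader
    .split(";")
    .some(pair => pair.trim().split("=")[0] === COOKIE_NAME);
  return isReturning ? "returning" : "new";
}
```

In the browser, `cookieHeader` is `document.cookie`. After classifying a first-time visitor, the snippet sets the cookie, so every subsequent visit reads as returning.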

New visitor targeting is useful for first-impression experiments. Onboarding copy, hero headlines, and initial value propositions are decisions that matter most at first contact. A visitor who has been to your site ten times has already formed an opinion; including them in your headline test muddles the signal you actually care about.

Returning visitor targeting is useful for re-engagement experiments. If you want to test a "welcome back" banner, a special offer for repeat visitors, or a streamlined checkout experience for customers who have bought before, you need a clean returning-only segment to measure the effect.

Combining triggers

All of these targeting options can be used together. You can run an experiment that targets returning visitors who arrived from paid search, have been on the page for at least 10 seconds, and have scrolled 40% of the way down. Each condition is additive.
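"Additive" here means a logical AND: the experiment activates only when every configured condition holds. The combination can be sketched as a list of predicates evaluated against the visitor (condition names below are illustrative, not Segmently internals):

```javascript
// Additive targeting: the experiment activates only when every configured
// condition is satisfied (logical AND across all predicates).
function audienceMatches(conditions, visitor) {
  return conditions.every(condition => condition(visitor));
}

// Conditions mirroring the scenario above: returning visitors from paid
// search, at least 10 seconds on page, scrolled at least 40%.
const conditions = [
  v => v.visitorType === "returning",
  v => v.channel === "paid advertising",
  v => v.secondsOnPage >= 10,
  v => v.scrollPercent >= 40,
];
```

Because the conditions combine with AND, each one you add can only shrink the eligible audience, which is exactly the trade-off discussed next.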

That level of targeting specificity is not always necessary or desirable. Narrow your audience too much and you extend the time needed to reach statistical significance. The goal is not maximum precision; it is removing the visitors who genuinely cannot give you a useful signal without removing so many visitors that you cannot run the test at all.

A useful rule: start with one trigger that directly relates to the element or behavior you are testing. Add a second trigger only if you have a clear reason why excluding that additional group improves your data.

No code. Seriously.

Every trigger and targeting option described in this post is configured entirely in the Segmently experiment wizard. There is no JavaScript to write, no custom events to instrument, no SDK calls to make. The snippet handles detection automatically once the experiment goes live.

Scroll depth is measured by the snippet via a scroll event listener. Time on page is tracked from the moment the snippet initializes. Element clicks are observed through event delegation on the CSS selector you specify. Referral source is parsed from document.referrer at page load. Visitor type is read from the first-party cookie the snippet sets on first visit.

You configure the rules. Segmently handles the measurement.

Where to find it

Open any experiment in draft or active state, navigate to the Audience tab, and scroll to the Behavioral Triggers section. The Visitor Type selector is directly above it. Both are available on the Professional plan and above.

If you have experiments running right now with high traffic but low statistical confidence, adding a time-on-page trigger of 5 to 10 seconds is often the fastest single change you can make to improve data quality. Try it.

Tags

targeting · behavioral triggers · a/b testing · conversion optimization · new features

Ready to start experimenting?

Segmently gives you enterprise-grade A/B testing at a fraction of the cost. Free to start. No credit card required.