Creative Testing Guide Post-Andromeda

What's up, Marketers! This is Aazar.

This newsletter is about leveling up your paid growth marketing skills by analyzing the best brands' paid strategy, tactics, positioning, and value props.

This newsletter is divided into:

  • Sharing what I've learned (this issue)

  • Sometimes sharing other performance marketers’ lessons with you

  • Analyzing & comparing the best ads on the internet


Big News for Senior Creative Strategists

One of my current clients who spends 7 figures on Meta is looking for a senior creative strategist.

This is what they’re looking for:

A fast-moving creative strategist with 5+ years of experience who can manage multiple creators, uphold high quality, and make sharp data-backed decisions. Someone with strong taste, high integrity, and an obsession with UGC, experimentation, and platform-native storytelling. Skilled in scripts/briefs, excellent at async communication, and naturally AI-native. Driven, curious, ambitious, and ready to grow with the team.

Salary: $6K/month + bonus

Drop your LinkedIn profile by showing interest below, and I’ll connect you.

I'm interested in this creative strategist role

Drop your LinkedIn profile below


Many of you loved my recent post on testing Meta’s Andromeda update and what actually happened, so this issue is an extension of that newsletter.

Most advertisers think Andromeda changed creative testing.

Here's what actually happened:

Andromeda exposed who was doing creative testing right and who was relying on hacks.

I booked a call with my Meta AE to understand what's really shifting. She told me something that reframes everything:

"Around 50-60% of winning the auction now comes from your creative. Not your targeting. Not your structure. Your creative."

Think about that for a second.

You're no longer testing ads. You're testing who the ad is for.

Because Meta now shows your creative to people based on their scrolling behavior, their format preferences, the messages they engage with, and even the emotional tone they respond to.

If your ads look the same, Meta thinks your audience is the same. Performance stalls.

That's why creative diversity isn't a nice-to-have anymore. It's the unlock.

In this edition, I'm breaking down what Meta AEs are actually recommending right now, why diversity of creative matters more than volume, and how to rebuild your testing system for 2025 without throwing away what still works.

Let's get into it.

Meta's New Algorithm (What Has Actually Changed)

I've already shared a complete newsletter on what has actually changed, but here's a quick snapshot in case you're new here.

Here's the simplest way to understand it:

Meta now uses your creative to decide who to show ads to.

Here's what that means in practice:

  • If your creatives look or sound similar, Meta keeps hitting the same user pool

  • Fatigue arrives faster because the system "classifies" similar creatives as the same concept

  • Creative diversification isn't optional anymore; it expands your reach

  • Distinct angles open up new pockets of buyers

  • Tiny creative tweaks don't count as new signals

  • You don't need audience hacks. You need creative variety

Simple rule: The algorithm is smart enough to target. You just need to give it variety.

Why Creative Diversification Matters More Now

My Meta AE put it perfectly:

"If your creatives all look similar, the system will keep showing them to the same type of users."

And that's the biggest hidden mistake brands make.

If you don't diversify, here's what happens:

  • You fatigue the same audience segment

  • You lose the opportunity to reach new pockets of converters

  • You get fewer learnings and fewer winners

  • Your scaling slows down

But diversification doesn't mean random experimentation.

It means:

  • Different concepts

  • Different emotional angles

  • Different visual styles

  • Different hooks

Because each variation unlocks a new path in the algorithm.

What Diversification Actually Means (Not What People Think)

Most accounts think of one angle and try to diversify like this:

  • Same layout

  • Same background

  • Same format

  • Same structure

  • Different text

Meta sees those as the same ad.

The Meta AE I spoke to confirmed this. Even if you think they're different, the system doesn't.

So diversification should look more like this:

  • One emotional angle ad

  • One price/value ad

  • One testimonial-style ad

  • One product demo ad

  • One contrast ad

  • One visual metaphor ad

  • One benefit-first static

  • One fast-cut video

  • One calm, long-form video

That's real variation.

And the more distinct each ad looks, the more the system can "route" them to totally different people inside your broad targeting.

A Simple Testing Structure That Works

I've simplified creative testing to just two campaigns:

Campaign 1: Scaling Campaign (broad, stable, consistent winners)

Campaign 2: Creative Testing Campaign (messy, wild, experimental)

Scaling doesn't change. Testing changes every week.

But here's the big shift:

Inside testing, I now test concepts, not single ads.

Instead of 1 ad with tiny variations, I run 3 concepts × 4–5 variations each.

So a single ad set easily has 12–15 creatives.

This does three things:

  1. Meta instantly sees which angles people respond to

  2. I never rely on one asset to save the week

  3. Winners surface faster and scale better

And this aligns 100% with what my AE told me.

Thanks to our partners who support this newsletter.

Tools worth checking out:

Atria: You're only as good an advertiser as your swipe file. Atria helps you save good ads and analyze them in depth. But the best part? Their AI helps me create concepts and scripts within seconds. Check it out for free. Most importantly, they have built-in ad analytics to create more winning ads.

They now have everything I need: a swipe file and discovery ads, an AI creative strategist, analytics for collaboration and client reporting, asset management to maintain a single library for my video editing team, and competitor tracking to monitor their every move.

Check my YT video review for a full breakdown

How I Actually Run the Creative Testing Campaign

Earlier, my focus was:

One concept, multiple hooks, pick a winner, move it to scale. Then move on and experiment with different concepts.

That still works, but I've added one big layer:

Concept testing first. Hook testing second. Variations third.

Here's my current approach.

Step 1: Decide the creative concepts

For each week, I plan 2–3 new concepts like:

  • New avatar

  • New problem/benefit

  • New emotional angle

  • New demonstration or proof style

Example:

  • Concept 1: "Busy parents with no time"

  • Concept 2: "Price-sensitive buyers who hate wasting money"

  • Concept 3: "Skeptical users who tried competitors and were disappointed"

Each of these is a different world for the algorithm.

Step 2: Build variations inside each concept

Inside each concept, I aim for:

  • 3–5 variations per concept

  • Different visual hooks

  • Different opening lines

  • Different format (UGC, talking head, static, animation)

This gives me:

  • Enough diversity to keep Meta happy

  • Enough structure to still understand what worked

Step 3: Match Meta AE's advice

I make sure:

  • Each ad set has at least 5 creatives

  • Creatives are not just "the same thing in different clothing"

  • Different formats are mixed (static + video + UGC etc.)

The test campaign is now a batch of concepts, not a bunch of tiny tweaks.
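
If it helps to see that batch as something concrete, here's a rough sketch in Python of what a weekly test build could look like. The concept names, formats, and counts are placeholders I made up for illustration, not a template from Meta:

```python
# Purely illustrative: planning a weekly testing batch of concepts x variations.
# Concept names, formats, and counts are made-up placeholders.
concepts = {
    "busy-parents-no-time": ["UGC", "talking head", "benefit-first static"],
    "price-sensitive-buyers": ["fast-cut video", "static", "UGC"],
    "skeptical-switchers": ["testimonial", "product demo", "static", "long-form video"],
}

batch = [
    f"{concept} / {fmt} / v{i + 1}"
    for concept, formats in concepts.items()
    for i, fmt in enumerate(formats)
]

# Sanity checks against the rules above: at least 5 creatives per ad set, mixed formats.
assert len(batch) >= 5, "Ad set should carry at least 5 creatives"
assert len({fmt for formats in concepts.values() for fmt in formats}) > 1, "Mix your formats"

for ad_name in batch:
    print(ad_name)
```

That gives you 10 distinct creatives across three concepts, which is exactly the kind of batch the testing campaign is meant to hold.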

The Creative Fatigue Problem (Especially for Big Ad Spends)

Most people misunderstand fatigue.

Your testing campaign rarely fatigues.

Your scaling campaign does.

Because scaling campaigns hold your winners, and Meta keeps pushing them hard.

So the anti-fatigue strategy is simple: test fast enough to replace winners before they die.

What that looks like:

  • 2–3 winner iterations every week

  • 4–5 new concepts every week

  • A constant pipeline of variety

This is how you stay ahead of fatigue, not react to it.

Budget Allocation

Some rules haven't changed since the old days.

This one is still correct:

20–40% of your entire ad budget should go into creative testing.

The logic is simple:

  • Your winners come from testing

  • Your scale comes from the winners

  • Your stability comes from having a steady stream of replacements
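
Quick worked example of the split (the $1,000/day figure is made up, purely to show the math):

```python
# Illustrative arithmetic only: splitting a hypothetical $1,000/day budget
# with the 20-40% testing rule. The budget figure is an assumption.
daily_budget = 1_000
test_share = 0.30  # anywhere between 0.20 and 0.40 fits the rule

testing_budget = daily_budget * test_share      # $300/day to the testing campaign
scaling_budget = daily_budget - testing_budget  # $700/day to the scaling campaign

print(f"Testing: ${testing_budget:.0f}/day | Scaling: ${scaling_budget:.0f}/day")
```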

Some performance marketers still insist on "50 conversions to exit the learning phase," but that's just outdated.

What matters is:

  • Can the creative get enough intent signals?

  • Is the CPA within acceptable range?

  • Can the ad survive a budget raise?

That's your true "learning phase."

The Metrics That Actually Matter for Creative Testing

  • Unique CTR

  • Unique CPC

  • CPM

  • Hold rate

  • Hook rate

  • Landing page view rate

  • Cost per landing page view

  • Cost per acquisition

  • Checkout initiated

  • Result rate

  • Conversion rate

  • Checkout/Purchase rate

These tell you the "health" of the creative even before conversions happen.

Here's how to read them:

  • If CTR is awful, the angle is wrong

  • If the hook rate is low, the first 3 seconds didn't work

  • If the hold rate is low, storytelling didn't land

  • If LPVR is low, curiosity wasn't strong enough

  • If CPC is high, the ad isn't competitive

  • If CPM is high, the angle is attracting expensive audiences (sometimes it's okay to attract an expensive audience)
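
If you like turning that read into a checklist, here's a rough sketch of how I'd script it. The benchmark numbers are placeholder assumptions; your own account averages are the real reference point:

```python
# Rough sketch of the diagnostic read above. Thresholds compare each metric
# to the account average; all example numbers are placeholder assumptions.
def diagnose_creative(metrics: dict, account_avg: dict) -> list[str]:
    issues = []
    if metrics["unique_ctr"] < account_avg["unique_ctr"]:
        issues.append("CTR is weak: the angle is likely wrong")
    if metrics["hook_rate"] < account_avg["hook_rate"]:
        issues.append("Hook rate is low: the first 3 seconds didn't work")
    if metrics["hold_rate"] < account_avg["hold_rate"]:
        issues.append("Hold rate is low: the storytelling didn't land")
    if metrics["lpvr"] < account_avg["lpvr"]:
        issues.append("LPVR is low: curiosity wasn't strong enough")
    if metrics["cpc"] > account_avg["cpc"]:
        issues.append("CPC is high: the ad isn't competitive")
    if metrics["cpm"] > account_avg["cpm"]:
        issues.append("CPM is high: the angle attracts an expensive audience (sometimes fine)")
    return issues or ["Healthy creative so far"]

print(diagnose_creative(
    {"unique_ctr": 0.8, "hook_rate": 0.22, "hold_rate": 0.12, "lpvr": 0.70, "cpc": 1.4, "cpm": 18},
    {"unique_ctr": 1.0, "hook_rate": 0.30, "hold_rate": 0.10, "lpvr": 0.75, "cpc": 1.2, "cpm": 20},
))
```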

When to Kill an Ad vs. When to Keep It Running

The simplest rule:

If CPA is 2–3× your average, kill it.

But also:

  • If an ad consistently spends more but doesn't convert, kill it

  • If an ad gets no spend, don't kill it. Let testing continue

  • If an ad wins cheap conversions, scale that ad set, then move it to the main campaign later

Good ads must survive a budget increase.

Bad ads reveal themselves early.
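
Here's the same logic as a tiny sketch. The 2.5× multiplier and the example numbers are assumptions standing in for "2–3× your average CPA" and "consistently spends without converting":

```python
# Minimal sketch of the kill/keep/scale rules above. The 2.5x multiplier and
# the spend cutoff are placeholder assumptions, not fixed rules from Meta.
def decide(spend: float, conversions: int, avg_cpa: float) -> str:
    if spend == 0:
        return "keep testing (no spend yet, let delivery continue)"
    if conversions == 0 and spend > 2 * avg_cpa:
        return "kill (spends but doesn't convert)"
    cpa = spend / conversions if conversions else float("inf")
    if cpa >= 2.5 * avg_cpa:
        return "kill (CPA is 2-3x the account average)"
    if cpa < avg_cpa:
        return "scale the ad set, then graduate it to the main campaign"
    return "keep running and watch it"

print(decide(spend=180, conversions=6, avg_cpa=40))  # cheap conversions -> scale
print(decide(spend=250, conversions=2, avg_cpa=40))  # $125 CPA vs $40 avg -> kill
```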

Simple Rules for Choosing Winners

This is the part where I've simplified things for myself.

Here are the rules I follow:

  • If Meta doesn't want to spend on it, it's not a winner

  • If CPA is good and stable for a few days, it's a winner

  • If performance dies every time I scale it, it's not a true winner yet

  • Concept beats "tiny variations" every single time

  • Always have at least one new concept in testing every week

And one more personal rule:

"Don't fall in love with your ads. Fall in love with what the data is telling you."

Just because you like a creative doesn't mean your market does.

The Weekly Creative Testing System I'm Using Now

To keep things practical, here's the simple rhythm I recommend.

Pipeline A: Iterations of Winners

Every week:

  • Take last week's or last month's winners

  • Build 2–3 new versions with:

    • New hooks

    • New intros

    • New edits or structure

    • New formats

  • Keep them inside the testing campaign first

  • Move the best ones to the main prospecting campaign

Goal: Extend the life of existing winners.

Pipeline B: New Creative Concepts

Every week:

  • Launch 2–3 brand new concepts

  • Change avatar, angle, story, offer framing

  • Mix formats: UGC, static, talking head, product demo, testimonial

Goal: Find new types of winners the algorithm can scale.
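
Put together, the weekly rhythm looks roughly like this. A purely illustrative sketch: the counts follow the ranges above, everything else is a placeholder:

```python
# Illustrative weekly plan combining both pipelines. Counts and labels are
# placeholders that mirror the ranges described in this issue.
weekly_plan = {
    "pipeline_a_iterations": {
        "source": "last week's or last month's winners",
        "count": 3,  # 2-3 new versions of existing winners
        "changes": ["new hooks", "new intros", "new edits or structure", "new formats"],
        "destination": "testing campaign first, then the main prospecting campaign",
    },
    "pipeline_b_new_concepts": {
        "count": 3,  # 2-3 brand new concepts
        "levers": ["avatar", "angle", "story", "offer framing"],
        "formats": ["UGC", "static", "talking head", "product demo", "testimonial"],
        "destination": "testing campaign",
    },
}

for pipeline, plan in weekly_plan.items():
    print(f"{pipeline}: {plan['count']} assets this week")
```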

That's how you should think about creative testing under Andromeda.

Key Takeaways (TLDR Version)

  • Andromeda rewards advertisers who test different ideas, not tiny variations of the same ad

  • Your creative now acts like your targeting. The message, tone, and visual world decide who Meta shows your ad to

  • Winners show themselves early. If an ad can't pick up delivery or intent signals in the first few days, it's not a winner

  • Iterations keep the account stable, but new concepts unlock new pockets of buyers. You need both every week

  • The algorithm reads patterns, not hopes. If a creative isn't resonating, no amount of budget will save it

  • Good ads must survive a budget increase. Testing winners and scaling winners are not always the same

  • Every creative dies. Your job is to replace them before they fade, not after performance drops

  • A weekly creative engine beats any structural hack. More angles = more learning = more scale

  • The new system is simple: 2–3 winner iterations + 4–5 new concepts, tested together in a broad environment


Happy Growing with Paid Social,

Aazar Shad

This newsletter is free; I write it to follow my curiosity. But I’d love it if you could leave some feedback so I know whether I’m helping you.

What did you think of this newsletter? I appreciate your feedback!


Three ways I can help you, whenever you are ready:

  • Work with me to get you growth from paid marketing. Book a call here. I’m open to more clients now.

  • Level up your paid marketing by joining my community, where we share the latest tactics and get nuanced paid marketing questions answered here (we are now 90+, but there is a waitlist to join).

  • If you’re looking to level up your creative ad strategy, check out our bootcamp recordings and resources on demand for only $197. Prices are going up by 30% soon. Simply pay here, and I’ll send you a follow-up email immediately.
