Google App Campaigns (GAC) now account for over 40% of Android app install ad spend globally, according to AppsFlyer's 2025 Performance Index.
This guide covers the optimization levers that actually move the needle: asset-level reporting, conversion event selection, budget and bid strategy, geo targeting, and Google's Asset Excellence framework. Every tactic here is grounded in published benchmarks from Google, AppsFlyer, Adjust, and Liftoff.
Prerequisites: You need an active Google Ads account with a published app on Google Play or the App Store, Google Analytics for Firebase (GA4) linked to your app, at least one conversion event firing reliably, and a minimum daily budget of $50 to generate enough signal for machine learning optimization.
Page Contents
- Step 1: How does Google App Campaign machine learning actually work, and why does it matter for optimization?
- Step 2: How do you choose the right conversion event for Google App Campaigns?
- Step 3: How do you read and use asset-level reporting in Google App Campaigns?
- Step 4: How do you build a high-performing asset portfolio for Google App Campaigns?
- Step 5: How do you set and adjust budgets without disrupting Google App Campaign learning?
- Step 6: How do you optimize bids for Google App Campaigns in 2026?
- Step 7: How do you optimize geo targeting in Google App Campaigns?
- Step 8: What is Google's Asset Excellence framework and how do you use it?
- Step 9: How do you structure multiple Google App Campaigns for one app?
- Step 10: How do you diagnose and fix a sudden CPA spike in Google App Campaigns?
- Step 11: How do you use audience signals and exclusions in Google App Campaigns?
- Step 12: How do you measure true GAC performance beyond Google's self-reported data?
- Common Mistakes to Avoid
- Frequently Asked Questions
- Related Reading
Step 1: How does Google App Campaign machine learning actually work, and why does it matter for optimization?
Google App Campaigns rely on a single automated system that combines your creative assets, bid signals, and conversion data to find users across Search, Play Store, YouTube, Display, and Discover. Understanding this architecture is non-negotiable for optimization because every lever you pull feeds the same algorithm.
Unlike manual campaign types, GAC gives you no keyword-level, placement-level, or audience-level control. Per Google's official documentation, the system uses your conversion event as its primary learning signal and your assets as the raw material for ad assembly.
This means optimization is fundamentally about three inputs: the quality of your conversion signal, the diversity and performance of your creative assets, and your budget and bid settings. Get any one of these wrong and the algorithm optimizes toward the wrong outcome.
According to the Adjust State of App Growth Report, apps that optimize all three inputs simultaneously see 30-50% lower CPA than those optimizing in isolation.
Key insight: GAC optimization means feeding better inputs to the algorithm, not overriding it.
- No manual keyword or placement control exists
- Conversion events are the algorithm's primary signal
- Asset quality directly determines ad combinations
- Budget stability affects machine learning convergence
- All five placements share one optimization engine
| GAC Placement | Approximate Traffic Share (Android) | Primary Asset Types Used |
|---|---|---|
| Google Play Store | 30-40% | Text, images |
| YouTube | 20-30% | Video, text overlays |
| Google Display Network | 15-25% | Images, HTML5, text |
| Google Search | 10-15% | Text headlines, descriptions |
| Google Discover | 5-10% | Images, text |
Pro tip: The algorithm typically needs 100 conversion events per week per campaign to exit the learning phase, per Google's own guidance. If you're under this threshold, consider moving to a higher-funnel event temporarily.
Step 2: How do you choose the right conversion event for Google App Campaigns?
Select the conversion event that is closest to revenue while still generating at least 10 conversions per day (ideally 100+ per week) within your campaign. This is the single most impactful optimization decision you'll make.
Google offers three campaign sub-types: App Installs (tCPI), App Engagement (tCPA on in-app events), and App Pre-registration. Most growth teams should run tCPA campaigns optimizing for a post-install event because pure install optimization often attracts low-intent users.
According to AppsFlyer's State of App Marketing report, apps using in-app event optimization see 25-35% higher Day 7 retention compared to install-optimized campaigns. The tradeoff is higher CPI, but the downstream LTV more than compensates.
For subscription apps, the ideal event is often "start free trial" rather than "purchase." Purchase events frequently don't hit the volume threshold, which starves the algorithm of data.
A fitness app optimizing for trial starts at $12 tCPA will typically outperform one optimizing for subscriptions at $45 tCPA simply because the algorithm gets 3-4x more conversion signals.
Key insight: Pick the event closest to revenue that still fires 10+ times daily per campaign.
- Install optimization attracts low-intent users
- tCPA campaigns outperform tCPI on LTV metrics
- Trial start events beat purchase events for volume
- Minimum 10 daily conversions keeps learning stable
- Reassess event selection quarterly as scale changes
| Conversion Event | Typical Volume Threshold | Best For | Risk Level |
|---|---|---|---|
| App Install | Very high | Top-of-funnel scale | Low quality users |
| Registration / Sign-up | High | Freemium and social apps | Moderate quality |
| Free Trial Start | Medium | Subscription apps | Good balance |
| Purchase / Subscribe | Low | High-LTV apps with big budgets | Algorithm starvation |
| In-app Action (e.g., Level 5) | Medium-high | Gaming apps | Proxy accuracy risk |
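The selection rule above can be sketched as a small helper. This is a minimal illustration, not anything Google provides: the funnel ordering and the event names and volumes are hypothetical examples you would replace with your own MMP or Firebase data.

```python
# Sketch of the Step 2 selection rule: among your tracked events, pick the
# one deepest in the funnel that still fires ~10+ times per day.
# Event names, funnel order, and volumes are hypothetical examples.

FUNNEL_DEPTH = ["install", "registration", "trial_start", "purchase"]  # shallow -> deep

def pick_conversion_event(daily_volumes, min_daily=10):
    """Return the deepest-funnel event whose average daily volume clears
    the minimum threshold, falling back to installs if none qualify."""
    for event in reversed(FUNNEL_DEPTH):  # check deepest (closest to revenue) first
        if daily_volumes.get(event, 0) >= min_daily:
            return event
    return FUNNEL_DEPTH[0]  # nothing clears the bar: optimize for installs

volumes = {"install": 400, "registration": 120, "trial_start": 28, "purchase": 6}
print(pick_conversion_event(volumes))  # purchase is starved at 6/day, trial_start qualifies
```

Here purchase fires only 6 times a day, so the helper steps back up the funnel to trial starts, mirroring the "algorithm starvation" risk flagged in the table above.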
How do you migrate from install optimization to in-app event optimization without resetting learning?
Create a new campaign rather than changing the conversion event on an existing one. Changing the event mid-flight forces the algorithm to re-learn from scratch, often causing a 2-3 week performance dip, per Google's best practices.
Run both campaigns in parallel for 2-3 weeks. Gradually shift budget from the install campaign to the event-optimized campaign as it exits the learning phase. Monitor CPA stability, not just volume, during this transition.
Pro tip: If your target event has fewer than 10 daily conversions per campaign, use Firebase's "predicted" events like predicted first-time spenders. Google's predictive audiences can bridge the volume gap while maintaining downstream quality.
Step 3: How do you read and use asset-level reporting in Google App Campaigns?
Google rates every asset as "Low," "Good," or "Best" based on relative performance within its asset group. Navigate to your campaign, click "Asset Report," and sort by performance rating. Assets rated "Low" for more than 7 days with meaningful impressions should be replaced.
The rating system is comparative, not absolute. A "Best" rated headline in a poorly performing campaign might still be underperforming relative to industry benchmarks. Always cross-reference asset ratings with campaign-level CPA and conversion volume.
According to Google's asset reporting documentation, the system needs approximately 2,000 impressions per asset before generating a reliable rating. Assets with fewer impressions showing "Pending" status should be given more time before making replacement decisions.
One critical nuance: Google's system heavily front-loads impressions to new assets during a testing phase. A new video might get 60-70% of impressions in its first 48 hours, then stabilize. Don't panic if a new asset's metrics look volatile in the first few days.
Key insight: Replace "Low" rated assets after 7 days and 2,000+ impressions, never sooner.
- Ratings are relative within each asset group
- Wait for 2,000 impressions before acting on ratings
- New assets get front-loaded impressions initially
- Cross-reference ratings with campaign-level CPA
- "Pending" status means insufficient data, not poor performance
What metrics should you track beyond Google's asset ratings?
Export asset-level data weekly into a spreadsheet. Track impressions, conversion rate, and estimated CPA per asset. Google doesn't surface CPA per asset directly, but you can approximate it by comparing periods when specific assets were active versus inactive.
Pay special attention to video completion rates on YouTube placements. According to industry data from creative asset benchmarking for Google App Campaigns, videos with >25% completion rates typically correlate with lower CPA.
Pro tip: Set a weekly calendar reminder to review asset reports every Monday. Consistent iteration, replacing 1-2 assets per week, compounds into significant performance gains over 90 days.
Step 4: How do you build a high-performing asset portfolio for Google App Campaigns?
Upload the maximum number of assets allowed: 4 text headlines, 5 descriptions, 20 images, and 20 videos. Filling all slots gives the algorithm maximum combinatorial flexibility. Campaigns using all asset slots see 12-15% more conversions on average, according to Google's internal case studies shared at Google Marketing Live 2024.
Diversity matters more than volume. Each asset should test a distinct creative angle: social proof, feature demo, urgency, emotional benefit, or price anchoring. If all five descriptions say roughly the same thing, you're wasting slots.
For video, upload in three aspect ratios: landscape (16:9), portrait (9:16), and square (1:1). Portrait video is essential for YouTube Shorts inventory, which now represents a growing share of GAC video impressions. Missing an aspect ratio means the system either skips that inventory or auto-crops, which almost always hurts performance.
Key insight: Fill every asset slot with creatively distinct angles across all aspect ratios.
- Max assets: 4 headlines, 5 descriptions, 20 images, 20 videos
- Each asset should test a unique value proposition
- Upload video in 16:9, 9:16, and 1:1 ratios
- Portrait video unlocks YouTube Shorts inventory
- Diverse assets outperform volume-only approaches
| Asset Type | Max Allowed | Min Recommended | Key Consideration |
|---|---|---|---|
| Text Headlines | 4 | 4 | 30 char limit, include CTA variation |
| Descriptions | 5 | 5 | 90 char limit, distinct value props |
| Images | 20 | 8-10 | 1200×628, 1200×1200, 320×480 sizes |
| Videos | 20 | 6 (2 per ratio) | 16:9, 9:16, 1:1 aspect ratios |
| HTML5 | 20 | 0-3 | Only if you have interactive content |
How often should you refresh creative assets?
Replace 20-30% of your asset portfolio every 2-3 weeks. Creative fatigue on GAC is real but slower than on social platforms like Meta, where fatigue hits in 7-10 days. GAC's broader inventory mix extends asset lifespan.
The AppsFlyer Performance Index shows that top-performing apps on Google refresh creatives 2-3x per month. Never replace all assets simultaneously. That resets the algorithm's learning across all combinations and typically causes a 15-25% CPA spike for 5-7 days.
What creative formats work best on Google App Campaigns in 2026?
Video dominates performance. According to Google's own data, campaigns with video assets generate 20% more installs at similar CPA versus campaigns relying solely on images and text. The algorithm prioritizes video for YouTube and Discover placements.
For gaming apps, fail ads and gameplay footage consistently outperform polished cinematic trailers. For non-gaming categories, product demo videos under 15 seconds with a clear CTA in the first 3 seconds perform strongest.
Pro tip: Run a creative audit using Google's Asset Excellence framework (covered in Step 8) before building new assets. Knowing what's missing prevents wasted production effort.
Step 5: How do you set and adjust budgets without disrupting Google App Campaign learning?
Set your initial daily budget at 50x your target CPA. If your tCPA is $10, start at $500/day. This gives the algorithm enough room to find and convert approximately 50 users per day, well above the minimum conversion threshold.
This recommendation comes directly from Google's App campaign best practices.
Budget changes must be incremental. Increase or decrease by no more than 20% every 3-5 days. Larger swings trigger re-learning, which manifests as a CPA spike lasting 5-14 days. In practice, a $500/day campaign should move to $600/day, stabilize for 4-5 days, then move to $720/day.
Underspending is more dangerous than overspending on GAC. If your daily budget consistently hits its cap before midnight, the algorithm can't explore enough of the auction landscape. You'll see inflated CPAs because the system concentrates spend on a smaller, more competitive subset of impressions.
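The $500 → $600 → $720 ramp described above generalizes to a simple schedule: cap every change at 20% and stop at the target. A minimal sketch, assuming you apply one step per 3-5 day stabilization window:

```python
# Sketch of the Step 5 budget ramp: never move more than 20% at once,
# with each step applied after a 3-5 day stabilization window.

def budget_ramp(current, target, max_step=0.20):
    """Return the sequence of daily budgets from current to target,
    capping each change at max_step (default 20%) per adjustment."""
    schedule = [round(current, 2)]
    while abs(target - current) / current > 1e-9:
        if target > current:
            current = min(current * (1 + max_step), target)
        else:
            current = max(current * (1 - max_step), target)
        schedule.append(round(current, 2))
    return schedule

print(budget_ramp(500, 1000))  # [500, 600.0, 720.0, 864.0, 1000]
```

Doubling a $500/day budget safely takes four steps, roughly two to three weeks at the recommended cadence, which is why pre-scaling ahead of seasonal peaks matters.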
Need help scaling your mobile app growth? Talk to RocketShip HQ about how we apply these strategies for apps spending $50K+/month on UA.
Key insight: Set daily budget at 50x your target CPA and adjust in 20% increments only.
- Initial budget = 50x target CPA per day
- Never change budget by more than 20% at once
- Wait 3-5 days between adjustments
- Budget caps before midnight signal underspending
- Underspending inflates CPA through limited auction access
How do you handle budget during seasonal peaks?
Pre-scale budgets 2-3 weeks before known seasonal peaks (Q4 holidays, back-to-school, New Year). Ramping budget during the peak itself means competing at inflated CPMs while still in the learning phase.
According to the Adjust State of App Growth report, Q4 CPIs on Android increase by 15-30% across categories. Start budget increases in October for December peaks, using the same 20% increment rule but accelerating to every 2-3 days.
Pro tip: If CPA spikes after a budget change, resist the urge to cut budget immediately. Give the algorithm 5-7 days to re-stabilize. Cutting budget in panic compounds the disruption.
Step 6: How do you optimize bids for Google App Campaigns in 2026?
Start your tCPA bid at 10-20% above what you're actually willing to pay. This gives the algorithm headroom to explore and learn. Once the campaign stabilizes after 2-3 weeks, incrementally reduce the bid by 5-10% every 5-7 days toward your true target.
Bidding too low from the start is the most common mistake. The algorithm interprets an aggressive tCPA as a hard constraint and responds by severely limiting delivery. You'll see low impressions, low spend, and paradoxically higher effective CPA because the system can only win the cheapest, lowest-quality auctions.
For tROAS campaigns (available when optimizing toward purchase revenue), the logic inverts. Start your tROAS target 10-20% lower than your actual target, then ratchet it upward as the campaign learns. A finance app targeting 300% tROAS should begin at 250% tROAS and increase in 10% increments.
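The tCPA walk-down described above can be sketched the same way: open 10-20% above your true target, then reduce 5-10% per 5-7 day window. The 15% headroom and 7.5% step below are illustrative midpoints of those ranges, not Google-prescribed values.

```python
# Sketch of the Step 6 bid walk: open 10-20% above your true tCPA,
# then step down 5-10% per 5-7 day window until you reach the target.
# headroom=0.15 and step=0.075 are illustrative midpoints of those ranges.

def tcpa_walk(true_target, headroom=0.15, step=0.075):
    """Return the sequence of tCPA bids from the padded opening bid
    down to the true target."""
    bid = true_target * (1 + headroom)
    bids = [round(bid, 2)]
    while bid > true_target:
        bid = max(bid * (1 - step), true_target)  # never undershoot the target
        bids.append(round(bid, 2))
    return bids

print(tcpa_walk(10.00))  # [11.5, 10.64, 10.0] over roughly 2-3 weeks
```

For tROAS the direction simply inverts: start below the true target and ratchet upward in similar increments.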
Key insight: Start bids 10-20% above your true target, then reduce gradually after learning stabilizes.
- Aggressive tCPA bids starve the algorithm of delivery
- Reduce bids by 5-10% every 5-7 days max
- tROAS targets should start low and increase
- Never change bid and budget simultaneously
- Learning phase lasts 2-3 weeks for new campaigns
| App Category | Typical Android tCPA (US) | Recommended Starting tCPA | Source |
|---|---|---|---|
| Casual Gaming | $1.50 – $3.00 | $3.00 – $3.60 | Liftoff 2024 Report |
| Health & Fitness (trial) | $10.00 – $18.00 | $18.00 – $21.60 | Liftoff 2024 Report |
| Finance / Fintech | $15.00 – $35.00 | $35.00 – $42.00 | Liftoff 2024 Report |
| E-commerce (purchase) | $8.00 – $20.00 | $20.00 – $24.00 | Liftoff 2024 Report |
| Social / Dating | $5.00 – $12.00 | $12.00 – $14.40 | Liftoff 2024 Report |
Pro tip: According to Google App Campaign targeting mechanics, the algorithm weights recent conversion data more heavily. After a tCPA reduction, your first 48 hours of data are the most volatile. Judge performance on a rolling 7-day window, not daily snapshots.
Step 7: How do you optimize geo targeting in Google App Campaigns?
Segment your campaigns by geo tiers from day one. Run separate campaigns for Tier 1 (US, UK, CA, AU, DE), Tier 2 (Western Europe, Japan, South Korea), and Tier 3 (Southeast Asia, Latin America, India).
Mixing geos in one campaign causes the algorithm to dump spend into the cheapest markets, starving your highest-LTV geos.
Per AppsFlyer's eCommerce marketing data, US Android CPIs average $2.50-$4.00 for shopping apps, while India CPIs run $0.20-$0.50. A mixed-geo campaign will naturally over-index on India installs because they're cheaper, destroying your blended ROAS.
Within Tier 1, consider isolating the US into its own campaign if it represents more than 50% of your revenue. The US market has unique auction dynamics, competitive density, and user behavior patterns that warrant dedicated budget allocation.
For fintech apps with compliance requirements, geo segmentation also serves a regulatory function. Different markets require different disclosures, and running separate campaigns per region makes it easier to attach region-specific creative assets.
Key insight: Never mix geo tiers in one campaign. The algorithm will overspend in cheapest markets.
- Separate campaigns for Tier 1, Tier 2, Tier 3 geos
- Isolate US if it's over 50% of revenue
- Mixed geos cause spend to shift to cheapest markets
- Compliance needs reinforce geo segmentation
- Adjust tCPA per campaign to reflect regional LTV
| Geo Tier | Example Markets | Android CPI Range (2025) | Typical Day 30 Retention |
|---|---|---|---|
| Tier 1 | US, UK, CA, AU, DE | $2.00 – $5.00 | 8-15% |
| Tier 2 | FR, JP, KR, IT, ES | $1.00 – $3.00 | 6-12% |
| Tier 3 | IN, BR, ID, MX, PH | $0.15 – $1.00 | 3-8% |
How do you identify underperforming geos within a tier?
Pull the geographic report in Google Ads weekly. Sort by cost-per-conversion and compare to your LTV data from your MMP (AppsFlyer, Adjust, or Kochava). Any country where CPA exceeds 80% of its average user LTV should be excluded or moved to a lower-bid campaign.
Watch for countries that generate high install volume but low post-install engagement. According to Adjust's global benchmarks, Day 1 retention below 20% from any country in a Tier 1 campaign is a red flag worth investigating for fraud or low-quality inventory.
Pro tip: Create a geo performance dashboard updating weekly. Track CPI, CPA, Day 7 retention, and ROAS by country. Countries that drift below 1.0x ROAS at Day 30 should be excluded from that campaign tier.
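The exclusion rule above (CPA exceeding 80% of average user LTV) is easy to automate against a weekly MMP export. A minimal sketch; the per-country figures are hypothetical and the data structure is our own, not an MMP API response:

```python
# Sketch of the Step 7 exclusion rule: flag any country whose CPA exceeds
# 80% of its average user LTV, using MMP-side numbers as source of truth.
# The per-country figures below are hypothetical.

def flag_geos(stats, cpa_ltv_ratio=0.80):
    """Return a sorted list of countries whose CPA breaches the LTV guardrail."""
    return sorted(country for country, s in stats.items()
                  if s["cpa"] > cpa_ltv_ratio * s["ltv"])

stats = {
    "US": {"cpa": 12.0, "ltv": 40.0},   # 12 < 0.8 * 40, keep
    "CA": {"cpa": 9.0,  "ltv": 10.0},   # 9 > 8, exclude or move down-tier
    "AU": {"cpa": 11.0, "ltv": 13.0},   # 11 > 10.4, exclude or move down-tier
}
print(flag_geos(stats))  # ['AU', 'CA']
```

Flagged countries go onto the exclusion list for that campaign tier, or into a lower-bid campaign if volume still matters.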
Step 8: What is Google's Asset Excellence framework and how do you use it?
Asset Excellence is Google's scoring methodology that rates your overall campaign asset portfolio on a scale from "Below Average" to "Excellent." It assesses asset diversity, quality ratings, and format coverage. Campaigns scoring "Excellent" see up to 15% lower CPA, per Google's published guidance.
The framework evaluates four dimensions: asset type coverage (do you have text, images, and videos?), aspect ratio coverage (are all three video ratios present?), individual asset performance ratings, and refresh frequency. Missing any dimension caps your score.
To check your score, navigate to your App Campaign, click the "Asset Report" tab, and look for the Asset Excellence indicator at the top. Google introduced this more prominently in late 2024 and continues to weight it more heavily in auction dynamics through 2025-2026.
RocketShip HQ uses this framework as a baseline diagnostic for every campaign audit. The most common gap we see: missing portrait (9:16) video, which alone can drop a campaign from "Good" to "Below Average" and restrict access to YouTube Shorts inventory.
Key insight: Campaigns rated "Excellent" in Asset Excellence see up to 15% lower CPA.
- Scores range from Below Average to Excellent
- Evaluates type coverage, ratios, quality, and freshness
- Missing portrait video is the most common gap
- Check score in the Asset Report tab
- Score influences auction competitiveness directly
| Asset Excellence Score | Typical CPA Impact | Common Gaps to Fix |
|---|---|---|
| Excellent | Baseline (lowest CPA) | None, maintain refresh cadence |
| Good | +5-8% CPA vs Excellent | Usually missing 1 video ratio or low asset count |
| Below Average | +10-20% CPA vs Excellent | Missing video entirely or all assets rated Low |
| Not Rated | Highly variable | Insufficient assets or too new |
How do you go from "Good" to "Excellent" in Asset Excellence?
Audit your asset portfolio against this checklist: all 4 headline slots filled, all 5 description slots filled, at least 8 images across landscape, portrait, and square, and at least 6 videos (2 per aspect ratio). Then verify at least 70% of assets are rated "Good" or "Best."
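That checklist can be run as a quick self-audit against numbers pulled manually from the Asset Report. The thresholds mirror this step; the input structure is our own convention, not a Google Ads API object:

```python
# Self-audit against the Step 8 'Excellent' checklist. Counts come from a
# manual Asset Report export; this dict shape is our own, not a Google API.

def audit_portfolio(p):
    """Return the list of gaps keeping a portfolio below 'Excellent'."""
    gaps = []
    if p["headlines"] < 4:
        gaps.append("fill all 4 headline slots")
    if p["descriptions"] < 5:
        gaps.append("fill all 5 description slots")
    if p["images"] < 8:
        gaps.append("upload at least 8 images")
    for ratio in ("16:9", "9:16", "1:1"):
        if p["videos_by_ratio"].get(ratio, 0) < 2:
            gaps.append(f"need 2+ videos in {ratio}")
    if p["rated_good_or_best"] / max(p["rated_total"], 1) < 0.70:
        gaps.append("lift Good/Best share above 70%")
    return gaps

portfolio = {"headlines": 4, "descriptions": 5, "images": 10,
             "videos_by_ratio": {"16:9": 3, "1:1": 2},  # portrait missing
             "rated_good_or_best": 8, "rated_total": 10}
print(audit_portfolio(portfolio))  # ['need 2+ videos in 9:16']
```

Note the example portfolio fails on exactly the gap called out earlier: missing 9:16 portrait video.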
Replace every "Low" rated asset with a new creative that tests a different angle. Don't iterate on a failed concept. If a feature-demo video rated "Low," try a testimonial or emotional hook instead.
According to cross-channel UA analysis, creative angles that work on Meta or TikTok often translate well to GAC video placements.
Pro tip: Screenshot your Asset Excellence score weekly. Track it alongside CPA trends. You'll often see a 3-5 day lag between asset improvements and CPA changes, as the algorithm needs time to re-learn optimal combinations.
Step 9: How do you structure multiple Google App Campaigns for one app?
Run 2-4 campaigns per app maximum, segmented by objective and geo. More than 4 campaigns for a single app in one market fragments your budget and conversion signal, which degrades algorithmic performance.
The recommended structure: one tCPA campaign for your primary conversion event in your top geo, one tCPA campaign for secondary geos, and optionally one tROAS campaign if you have sufficient purchase data. Keep a separate install-optimized campaign only if you need pure volume for brand awareness.
Avoid the temptation to create campaigns per creative theme. Unlike Meta, where campaign-level creative segmentation can help, GAC's internal asset testing mechanism handles creative variation within a single campaign. Splitting creatives across campaigns just dilutes data per campaign.
For apps running in many markets, the social app advertising playbook offers additional campaign architecture patterns that work well for multi-geo launches.
Key insight: Run 2-4 campaigns max per app. More campaigns fragment your conversion signal.
- Segment by objective and geo, not by creative
- tCPA campaign for primary event in top market
- Separate campaign for secondary geos
- Optional tROAS campaign if purchase data is sufficient
- Avoid campaign-per-creative-theme structures
Pro tip: If you run both an install campaign and a tCPA event campaign for the same geo, they'll compete in the same auctions. Expect 10-15% CPI inflation on both. Only run parallel objectives if the incremental volume justifies the cannibalization cost.
Step 10: How do you diagnose and fix a sudden CPA spike in Google App Campaigns?
Check these five causes in order: (1) recent budget or bid change, (2) asset exhaustion or creative fatigue, (3) conversion tracking breakage, (4) seasonal or competitive shifts, (5) algorithm re-learning.
The most common cause, responsible for roughly 60% of CPA spikes in our experience, is a budget or bid change that triggered re-learning.
Conversion tracking breakage is the silent killer. A Firebase SDK update, an app release that breaks event firing, or an MMP configuration change can silently stop conversions from reporting to Google Ads. The algorithm interprets zero conversions as "this audience doesn't convert" and starts exploring wildly.
Check your Firebase DebugView and Google Ads conversion reporting daily.
If the spike coincides with no changes on your end, check Google's Auction Insights (where available) and industry CPM indices. Per AppsFlyer's marketing data, Android CPMs can spike 20-40% during major shopping events, which directly impacts your effective CPA.
Key insight: 60% of CPA spikes trace back to a budget, bid, or conversion tracking change.
- Check budget/bid changes first; they're the usual culprit
- Verify conversion events fire correctly in Firebase DebugView
- Creative fatigue builds gradually over 3-4 weeks
- Seasonal CPM spikes inflate CPA externally
- Give algorithm 5-7 days before making reactive changes
What's the step-by-step triage process?
Day 1-2: Document the spike. Compare today's CPA to a 14-day rolling average. If CPA is less than 30% above average, it may be normal variance. Wait.
Day 3-5: If the spike persists, check the change history in Google Ads for any budget, bid, asset, or geo changes in the past 14 days. Verify Firebase conversion events are firing with correct parameters.
Day 5-7: If no internal cause is found, review competitive landscape. Check if a major competitor launched a campaign or if a seasonal event is inflating CPMs. Adjust tCPA bid upward by 10% temporarily to maintain delivery while investigating.
Day 7+: If CPA hasn't returned to baseline, this may be a structural issue. Consider launching a fresh campaign with current best assets and the refined conversion event. Sometimes a reset is faster than repair.
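The Day 1-2 variance check above can be sketched directly: compare the latest day's CPA to a trailing 14-day average and only escalate past a 30% deviation. A minimal illustration over a list of daily CPA values:

```python
# Sketch of the Day 1-2 triage check from Step 10: compare today's CPA to
# a trailing 14-day average and escalate only beyond a 30% deviation.

def spike_status(daily_cpas, threshold=0.30, window=14):
    """Classify the most recent day's CPA against the trailing average."""
    history, today = daily_cpas[:-1], daily_cpas[-1]
    recent = history[-window:]
    baseline = sum(recent) / len(recent)
    deviation = (today - baseline) / baseline
    status = "investigate" if deviation > threshold else "normal variance"
    return status, round(deviation, 3)

cpas = [10.0] * 14 + [14.5]  # flat $10 baseline, then a 45% jump
print(spike_status(cpas))  # ('investigate', 0.45)
```

Anything under the threshold is treated as noise; anything over it kicks off the Day 3-5 change-history review.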
Pro tip: Keep a campaign change log in a shared spreadsheet. Note every budget, bid, asset, or geo change with date and rationale. When spikes occur, this log cuts triage time from hours to minutes.
Step 11: How do you use audience signals and exclusions in Google App Campaigns?
While GAC doesn't offer manual audience targeting, you can provide audience signals (formerly "audience hints") and exclusion lists. Upload a first-party customer list of existing users to exclude from install campaigns.
This prevents wasting spend on users who already have your app, which according to common industry patterns can account for 5-15% of wasted spend on Android campaigns.
For re-engagement campaigns, upload lapsed user lists segmented by last-active date. Users who lapsed 7-30 days ago respond at roughly 3-5x the rate of users lapsed 90+ days, per industry benchmarks from Adjust's re-engagement data.
Audience signals tell the algorithm "users like these are valuable" without restricting targeting. Upload your highest-LTV user list as a signal, and the algorithm will use it as a starting point for finding similar users. This doesn't guarantee lookalike targeting, but it biases the algorithm's exploration in a productive direction.
Key insight: Exclude existing users from install campaigns to save 5-15% of wasted spend.
- Upload customer exclusion lists for install campaigns
- Provide high-LTV user lists as audience signals
- Segment lapsed users by recency for re-engagement
- Audience signals guide the algorithm without restricting it
- Refresh exclusion lists monthly for accuracy
Pro tip: For music and audio streaming apps, creating separate exclusion lists for free-tier and paid-tier users prevents install campaigns from targeting existing free users who should be receiving upgrade campaigns instead.
Step 12: How do you measure true GAC performance beyond Google's self-reported data?
Never rely solely on Google Ads reporting for performance measurement. Use an independent MMP (AppsFlyer, Adjust, Branch, or Kochava) as your source of truth. Google's self-attributed install counts are consistently 15-30% higher than MMP-attributed installs due to view-through attribution differences and overlapping claims.
Configure your MMP with a click-through attribution window of 7 days and a view-through window of 1 day (or disable view-through entirely). This gives you a more conservative, realistic picture of GAC's incremental contribution.
According to industry analysis from MobileDevMemo, view-through attribution on display networks inflates reported ROAS by 20-50% compared to click-through only measurement.
For a holistic view, run incrementality tests quarterly. Pause GAC in a test geo for 2-3 weeks while holding all other channels constant. Compare organic install velocity in the test geo versus control geos.
Most apps find GAC's true incremental contribution is 60-80% of what the MMP reports, which is still strong but important for accurate budgeting.
Key insight: Google self-reports 15-30% more installs than independent MMPs attribute.
- Use an MMP as source of truth, not Google Ads
- Set 7-day click, 1-day view attribution windows
- Run quarterly incrementality tests in isolated geos
- View-through attribution inflates ROAS by 20-50%
- True incrementality is typically 60-80% of reported
| Measurement Method | Accuracy Level | Effort Level | Recommended Frequency |
|---|---|---|---|
| Google Ads Self-Reported | Low (inflated) | None | Daily monitoring only |
| MMP Last-Touch Attribution | Medium | Low | Ongoing, primary KPI source |
| MMP Multi-Touch Attribution | Medium-High | Medium | Monthly review |
| Geo-Based Incrementality Test | High | High | Quarterly |
| Media Mix Modeling (MMM) | High (directional) | Very High | Bi-annually |
Pro tip: Build a weekly reconciliation report comparing Google Ads reported conversions to MMP-attributed conversions. Track the "attribution gap" percentage over time. If the gap suddenly widens beyond 30%, investigate possible SDK or tracking issues immediately.
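The reconciliation check above reduces to one calculation: the gap between Google-reported and MMP-attributed conversions as a share of the MMP number, with an alert past 30%. A minimal sketch with hypothetical weekly totals:

```python
# Sketch of the Step 12 weekly reconciliation: compare Google-reported
# conversions to MMP-attributed conversions and flag gaps beyond 30%.

def attribution_gap(google_conversions, mmp_conversions, alert_at=0.30):
    """Return the gap as a fraction of MMP conversions, plus an alert flag."""
    gap = (google_conversions - mmp_conversions) / mmp_conversions
    return round(gap, 3), gap > alert_at

gap, alert = attribution_gap(google_conversions=1320, mmp_conversions=1000)
print(gap, alert)  # 0.32 True -> investigate SDK or tracking changes
```

A stable 15-30% gap is expected given view-through attribution differences; it's a sudden widening of the trend line, not the gap itself, that signals a tracking problem.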
Common Mistakes to Avoid
- Mistake 1: Changing budget by more than 20% at once, triggering a 1-2 week CPA spike
- Mistake 2: Mixing Tier 1 and Tier 3 geos in one campaign, causing 70%+ spend shift to cheap markets
- Mistake 3: Replacing all assets simultaneously, resetting the algorithm's learned combinations entirely
- Mistake 4: Optimizing for purchases when daily conversion volume is under 10, starving the algorithm
- Mistake 5: Ignoring portrait (9:16) video, losing access to YouTube Shorts inventory
- Mistake 6: Trusting Google self-reported data without MMP cross-validation, inflating ROAS by 20-50%
- Mistake 7: Launching more than 4 campaigns per app per market, fragmenting conversion signal
Optimizing Google App Campaigns comes down to disciplined management of three inputs: conversion events, creative assets, and budget/bid settings. Start with your conversion event selection this week, audit your Asset Excellence score, build a geo-segmented campaign structure, then iterate weekly on asset performance.
Within 30 days of consistent optimization, most apps see measurable CPA improvements of 15-30%.
Frequently Asked Questions
How long does Google App Campaign learning phase take?
The learning phase typically lasts 2-3 weeks or until the campaign accumulates roughly 100 conversion events, per Google's official documentation. During this period, expect CPA to be 20-40% above your target. Don't make changes until learning completes.
Can you run Google App Campaigns for iOS apps?
Yes, but iOS campaigns are limited to Search and Display placements (no YouTube). Post-ATT, iOS signal loss significantly impacts optimization. According to AppsFlyer data, iOS GAC CPIs run 40-60% higher than Android due to reduced signal availability and SKAdNetwork limitations.
Should you use tCPI or tCPA bidding for Google App Campaigns?
Use tCPA in almost all cases. tCPI optimizes only for installs regardless of post-install quality. tCPA campaigns targeting a meaningful in-app event produce users with 25-35% higher retention, per AppsFlyer benchmarks. Only use tCPI if you need raw volume for a launch and have no post-install event data yet.
What's the minimum budget needed for a Google App Campaign?
Google recommends a daily budget of at least 50x your tCPA. For a $5 tCPA, that's $250/day. Campaigns spending under this threshold frequently fail to exit the learning phase, resulting in inconsistent delivery and inflated costs.
Do HTML5 assets improve Google App Campaign performance?
Rarely for most app categories. HTML5 assets require significant production effort and only serve on Display network placements. Focus on video and images first. The exception is gaming apps, where playable HTML5 ads can improve conversion rates by 10-20% according to Google's creative benchmarks.
How does Google App Campaign performance compare to Meta App Install Ads?
According to the cross-channel comparison of top UA channels, Google delivers lower CPIs on Android (typically 15-25% cheaper than Meta) but Meta often delivers higher post-install engagement. Most scaled apps run both, allocating 40-60% to Google and 30-40% to Meta.
Can you exclude specific placements like Display from Google App Campaigns?
No. Google does not allow placement-level exclusions in App Campaigns. The algorithm decides placement mix automatically. You can influence it indirectly through asset types: campaigns with only text and images won't serve on YouTube, and campaigns with strong video assets will tilt toward YouTube and Discover.
How do you handle Google App Campaigns for a brand new app with no historical data?
Start with an install-optimized campaign at $2-3 tCPI for the first 2-4 weeks to accumulate user data. Once you have 100+ daily installs and measurable in-app events, launch a parallel tCPA campaign optimizing for the highest-volume meaningful event. Transition budget over 2-3 weeks.
Looking to scale your mobile app growth with performance creative that delivers results? Talk to RocketShip HQ to learn how our frameworks can work for your app.
Not ready yet? Get strategies and tips from the leading edge of mobile growth in a generative AI world: subscribe to our newsletter.