Google App Campaigns (GAC) hide more than they show. Understanding asset performance ratings, network-level reports, and conversion paths separates profitable scaling from blind spending. According to AppsFlyer's State of App Marketing report, Google remains a top-3 media source globally, yet most advertisers misread the limited data Google provides.
This guide shows you exactly how to extract actionable signal from GAC reporting.
Prerequisites: You need an active Google App Campaign with at least 14 days of data and 100+ conversions, plus access to Google Ads reporting and a mobile measurement partner (AppsFlyer, Adjust, or Singular); Google Analytics for Firebase helps but is optional. Familiarity with basic campaign structures covered in our Google App Campaign targeting guide is helpful.
Page Contents
- Step 1: Why is Google App Campaign reporting so limited compared to other platforms?
- Step 2: How do Google's asset performance ratings (Low, Good, Best) actually work?
- Step 3: How do you read network-level reports in Google App Campaigns?
- Step 4: How do you analyze conversion paths in Google App Campaigns?
- Step 5: How do you identify which creative combinations actually win?
- Step 6: What metrics should you prioritize in Google App Campaign reports?
- Step 7: How do you diagnose why a Google App Campaign suddenly underperforms?
- Step 8: How do you benchmark Google App Campaign performance against industry standards?
- Step 9: How should you structure your Google App Campaign reports for weekly review?
- Step 10: How do you use asset reports to inform your creative production pipeline?
- Step 11: How do you handle discrepancies between Google Ads data and MMP data?
- Step 12: How do you build a testing roadmap using GAC report insights?
- Common Mistakes to Avoid
- Frequently Asked Questions
- Related Reading
Step 1: Why is Google App Campaign reporting so limited compared to other platforms?
Google App Campaigns use machine learning to distribute your ads across Search, YouTube, Display, AdMob, and Google Play, but they deliberately restrict granular reporting to protect their optimization algorithms. You cannot see individual keyword bids, placement-level performance, or exact creative combinations the way you can on Meta or TikTok.
According to Google's official documentation, GAC reports at the asset level, campaign level, and network level, but never at the ad group-by-placement level. This design choice means the algorithm decides allocation across networks, and you can only influence it through creative inputs, bid targets, and budget.
Per the Adjust State of App Growth report, Google's share of app install ad spend grew 12% year-over-year in 2025. Advertisers are spending more on a platform that gives them less control. Understanding what reports are available, and what they actually mean, is essential to avoid flying blind.
Key insight: GAC reports at asset and network level only, never at placement or keyword level.
- No keyword-level reporting in GAC
- No placement-level CPI visibility
- Asset ratings replace granular creative metrics
- Network reports show Search, YouTube, Display splits
- Campaign-level metrics are your primary control lever
| Reporting Dimension | Available in GAC | Available in Meta |
|---|---|---|
| Individual keyword performance | No | N/A |
| Placement-level CPI | No | Yes (breakdowns) |
| Asset-level impressions/clicks | Yes | Yes |
| Creative combination performance | No | Yes (dynamic creative) |
| Network-level split | Yes (4 networks) | Yes (FB, IG, AN) |
| Audience segment reporting | Limited | Yes |
Pro tip: Export raw data via the Google Ads API rather than relying on the UI. Snapshotting the AssetPerformanceLabel field daily through the API lets you build a rating history over time, which the UI does not show.
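A minimal Python sketch of that export is below. It assumes the official google-ads client is installed and configured; the GAQL resource and field names (ad_group_ad_asset_view, performance_label) and the customer ID are placeholders to verify against the current API reference.

```python
# Sketch: snapshot GAC asset performance labels daily via the Google Ads API.
# Assumes the google-ads Python client is installed and configured (google-ads.yaml).
# Resource/field names below are assumptions to check against current API docs.
import csv
from datetime import date

from google.ads.googleads.client import GoogleAdsClient

CUSTOMER_ID = "1234567890"  # placeholder: your Google Ads customer ID, no dashes

QUERY = """
    SELECT
      campaign.name,
      asset.id,
      asset.name,
      ad_group_ad_asset_view.field_type,
      ad_group_ad_asset_view.performance_label
    FROM ad_group_ad_asset_view
    WHERE campaign.advertising_channel_type = 'MULTI_CHANNEL'
"""

def snapshot_asset_labels(out_path: str = "asset_label_history.csv") -> None:
    client = GoogleAdsClient.load_from_storage()  # reads google-ads.yaml credentials
    ga_service = client.get_service("GoogleAdsService")
    rows = []
    for batch in ga_service.search_stream(customer_id=CUSTOMER_ID, query=QUERY):
        for row in batch.results:
            rows.append([
                date.today().isoformat(),
                row.campaign.name,
                row.asset.id,
                row.asset.name,
                row.ad_group_ad_asset_view.field_type.name,
                row.ad_group_ad_asset_view.performance_label.name,
            ])
    # Append today's snapshot so rating changes can be diffed over time
    with open(out_path, "a", newline="") as f:
        csv.writer(f).writerows(rows)

if __name__ == "__main__":
    snapshot_asset_labels()
```

Run it on a daily schedule and you have the historical view the UI lacks.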
Step 2: How do Google's asset performance ratings (Low, Good, Best) actually work?
Google rates each uploaded asset as Learning, Low, Good, or Best based on its relative contribution to conversions within that asset type. A "Best" rating on a video does not mean it has the lowest CPI overall; it means that video outperforms the other videos in the same campaign.
The rating system is relative, not absolute.
If you upload four mediocre videos, the least mediocre one still gets "Best." According to Google's asset reporting documentation, an asset needs sufficient impressions and conversions before moving from "Learning" to a rated status, typically requiring at least 5,000 impressions over several days.
Here is where most advertisers make a critical mistake. They treat "Low" assets as failures and immediately replace them. But a "Low" rated text headline might still be driving installs at an acceptable CPA. The rating tells you relative performance within a type, not whether the asset is profitable.
As discussed in our analysis of AI creative testing pitfalls, replacing assets too quickly prevents the algorithm from stabilizing. Google recommends waiting at least 2-3 weeks before acting on ratings.
Key insight: Asset ratings are relative within each type, not absolute performance indicators.
- "Best" means best among that asset type only
- "Learning" requires ~5,000 impressions to resolve
- "Low" does not automatically mean unprofitable
- Ratings shift as new assets are added
- Wait 2-3 weeks before replacing Low assets
| Rating | What It Means | Recommended Action |
|---|---|---|
| Learning | Insufficient data to rate | Wait, do not replace |
| Low | Underperforms other assets of same type | Evaluate after 14+ days, then consider replacing |
| Good | Performs comparably to other assets | Keep, iterate on variations |
| Best | Top performer among same asset type | Scale, create similar variants |
How many assets should you have per type for ratings to be meaningful?
Google allows up to 20 images, 20 videos, 10 text headlines, and 5 descriptions per ad group. But maxing out every slot is counterproductive.
Per analysis on the perils of asset stuffing, uploading too many assets without thematic separation dilutes the algorithm's ability to identify winning combinations. A better approach is 4-6 assets per type with clear thematic differentiation.
For campaigns spending under $500/day, even 4-6 assets per type may be too many. Budget constrains how many assets can exit "Learning" status. Rule of thumb: you need roughly $100/day per asset type for ratings to stabilize within two weeks.
Pro tip: Create a spreadsheet tracking asset ratings weekly. Google does not show historical rating changes, so manual tracking is the only way to identify when a "Best" asset starts declining, which signals creative fatigue.
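If you keep those weekly exports as CSVs, a short pandas sketch like this one (column names are assumptions matching whatever snapshot format you maintain) can flag assets whose rating dropped since the prior week:

```python
# Sketch: flag assets whose performance rating dropped between two weekly snapshots.
# Assumes each snapshot CSV has columns: asset_id, asset_name, asset_type, rating.
import pandas as pd

RANK = {"LEARNING": 0, "LOW": 1, "GOOD": 2, "BEST": 3}

def rating_downgrades(prev_csv: str, curr_csv: str) -> pd.DataFrame:
    prev = pd.read_csv(prev_csv)
    curr = pd.read_csv(curr_csv)
    merged = curr.merge(prev, on="asset_id", suffixes=("_now", "_prev"))
    merged["rank_now"] = merged["rating_now"].str.upper().map(RANK)
    merged["rank_prev"] = merged["rating_prev"].str.upper().map(RANK)
    drops = merged[merged["rank_now"] < merged["rank_prev"]]
    return drops[["asset_id", "asset_name_now", "asset_type_now",
                  "rating_prev", "rating_now"]]

if __name__ == "__main__":
    # Example: compare last week's snapshot against this week's
    print(rating_downgrades("assets_week_05.csv", "assets_week_06.csv"))
```

A "Best" asset appearing in this output is your early fatigue signal.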
Step 3: How do you read network-level reports in Google App Campaigns?
Navigate to Campaigns > Segments > Network (with search partners) to see performance split across Google Search, YouTube, Google Display Network, and Google Play / third-party app stores. This is the most underused report in GAC.
According to our creative assets guide for Google App Campaigns, each network favors different asset types. Search relies heavily on text headlines. YouTube depends on video. Display uses images and HTML5. Understanding which network drives your conversions shapes which assets to prioritize.
Industry patterns show clear differences. Per data from Liftoff's 2025 Mobile Ad Creative Index, YouTube placements typically produce 15-25% higher retention rates but 30-50% higher CPIs than Display network placements. Search often delivers the lowest CPI but at limited scale, since intent-based queries for app categories are finite.
Key insight: YouTube drives higher retention but higher CPI; Display offers scale at lower cost.
- Search: lowest CPI, limited scale, text-dependent
- YouTube: highest retention, highest CPI, video-dependent
- Display: largest scale, lowest quality, image-dependent
- Google Play: high intent, small volume
| Network | Typical CPI Range (Non-Gaming) | Primary Asset Type | Relative Quality |
|---|---|---|---|
| Google Search | $0.80 – $2.50 | Text headlines | Highest intent |
| YouTube | $2.00 – $5.00 | Video (landscape + portrait) | High retention |
| Display Network | $0.40 – $1.80 | Images + HTML5 | Variable, often lower |
| Google Play | $1.00 – $3.00 | Store listing assets | High intent, low volume |
What should you do when one network dominates spend?
If Display Network consumes 70%+ of your budget, it usually means your video assets are underperforming or missing. Google shifts spend to networks where it can most easily generate conversions, and Display has the largest inventory.
To rebalance toward YouTube (which often delivers better downstream metrics), upload strong portrait and landscape videos in 15-second and 30-second lengths. According to Google's best practices, including both orientations can increase YouTube eligible inventory by 60%.
You cannot directly control network allocation. But you can influence it by strengthening assets for the network you want to scale. Adding high-quality video typically shifts 10-20% of budget from Display to YouTube within the first week.
How do you cross-reference network reports with MMP data?
Google's network-level report shows installs and cost. Your MMP (AppsFlyer, Adjust, Singular) shows post-install events. The gap between them reveals quality differences by network.
Export campaign-level data from both sources, then compare Day 7 retention and revenue per install. Common pattern: Display installs show 20-40% lower D7 retention versus YouTube installs, per AppsFlyer's Performance Index data.
If your campaign's CPA target is based on downstream revenue, Display-heavy campaigns may look efficient on CPI but underperform on ROAS.
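As an illustration, a join like the one below surfaces the CPI-versus-quality gap per campaign; the file and column names are assumptions for your own exports.

```python
# Sketch: compare Google-reported cost/installs with MMP-reported D7 retention and revenue.
# Assumes two campaign-level exports sharing a 'campaign' column:
#   google_export.csv: campaign, cost, installs
#   mmp_export.csv:    campaign, installs, d7_retained, revenue_d7
import pandas as pd

google = pd.read_csv("google_export.csv")
mmp = pd.read_csv("mmp_export.csv")

df = google.merge(mmp, on="campaign", suffixes=("_google", "_mmp"))
df["cpi_google"] = df["cost"] / df["installs_google"]
df["d7_retention"] = df["d7_retained"] / df["installs_mmp"]
df["revenue_per_install_d7"] = df["revenue_d7"] / df["installs_mmp"]
df["d7_roas"] = df["revenue_d7"] / df["cost"]

# Campaigns that look cheap on CPI but weak on D7 ROAS will surface at the bottom
print(df[["campaign", "cpi_google", "d7_retention",
          "revenue_per_install_d7", "d7_roas"]].sort_values("d7_roas", ascending=False))
```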
Pro tip: Segment network reports by day of week. YouTube spend often spikes on weekends (when mobile video consumption increases by 18-22% per Google's internal data), which can explain CPI fluctuations in your campaign-level reporting.
Step 4: How do you analyze conversion paths in Google App Campaigns?
Google's conversion path reporting shows the sequence of interactions (impressions and clicks) before a user installs. Access it through Tools > Attribution > Conversion Paths. This reveals whether users convert after a single YouTube view, or after seeing a Display ad three times first.
Most app installs are not single-touch. According to Google's 2025 attribution data, the average app install involves 2.3 touchpoints across networks before conversion. A user might see a Display banner, then a YouTube pre-roll, then search for your app and install from Google Play.
This matters for asset strategy. If your conversion paths show that YouTube is predominantly an "assisting" network (appears early in the path) rather than a "closing" network, your YouTube creative should focus on awareness and brand recall rather than direct-response CTAs.
The story-driven ad approach used by Lily's Garden exemplifies this: emotional resonance in early-funnel placements drives downstream conversions even when the assisted install is attributed elsewhere.
Key insight: Average app install involves 2.3 touchpoints; YouTube often assists rather than closes.
- Access via Tools > Attribution > Conversion Paths
- Multi-touch paths average 2.3 interactions
- YouTube and Display often assist; Search and Google Play often close
- Path length varies by app category
- Use path data to inform creative messaging per network
How do you use path length to optimize budget allocation?
If your average path length exceeds 3 touchpoints, your campaign likely needs a higher daily budget to generate sufficient frequency. Google's algorithm cannot create multi-touch paths if budget constrains daily reach.
For subscription apps with longer consideration cycles, path lengths of 3-5 touchpoints are normal, per AppsFlyer's eCommerce app marketing data. Ensure your daily budget is at least 10x your target CPA to give the algorithm room to build these multi-touch journeys.
Pro tip: Compare conversion path reports before and after major creative refreshes. A shift from 3-touch to 2-touch paths after adding strong video assets indicates your new creatives are more persuasive and converting users faster, saving budget on redundant impressions.
Step 5: How do you identify which creative combinations actually win?
This is the hardest question in GAC reporting because Google does not show which specific combination of headline + description + image/video drove each install. You have to infer it through a structured testing methodology.
The approach that works: isolate variables across ad groups. Instead of loading one ad group with 20 images, create separate ad groups with 3-4 images each, grouped by creative theme. Keep text assets identical across ad groups. This way, performance differences between ad groups are attributable to the image/video theme.
RocketShip HQ's approach to this mirrors the psychology-based creative methodology discussed by Solsten. For Solitaire Klondike, switching the headline concept from "train your brain" to "hardest solitaire game" (based on player psychological profiling) improved IPM from 0.97 to 2.4.
In GAC, you would test this by running two ad groups with identical images but different headline sets.
Key insight: Isolate variables across ad groups since Google won't show winning combinations directly.
- Google never shows exact asset combinations
- Use ad group isolation to test themes
- Keep text constant when testing visuals
- Keep visuals constant when testing copy
- 3-4 assets per type per ad group is optimal
| Test Structure | Ad Group A | Ad Group B | What You Learn |
|---|---|---|---|
| Visual theme test | 4 gameplay images | 4 lifestyle images | Which visual theme wins |
| Copy angle test | Same images, benefit headlines | Same images, urgency headlines | Which copy angle wins |
| Video length test | 15-sec videos only | 30-sec videos only | Optimal video duration |
| Emotional tone test | Same format, humor tone | Same format, aspirational tone | Which emotion converts |
How much budget does each test ad group need?
Each ad group needs enough budget to generate statistically significant results. According to Google's recommendation, plan for at least 50 conversions per ad group before comparing performance. At a $3.00 CPI, that is $150 minimum per ad group.
In practice, you often need 100+ conversions per ad group for stable CPA numbers, especially for subscription apps where install-to-trial rates introduce another variable. Plan $300-$500 per ad group for a reliable test.
As detailed in our guide to paid UA channels, test budgets in GAC need to be higher than Meta because Google's learning period is longer, typically 7-14 days versus Meta's 3-5 days.
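A quick helper makes that budget math explicit per test ad group; the conversion thresholds mirror the numbers above and the CPI estimate is your own input.

```python
# Sketch: minimum test budget per ad group, using the conversion thresholds discussed above.
def test_budget(expected_cpi: float, target_conversions: int = 100) -> float:
    """Budget needed for one test ad group to reach the target conversion count."""
    return expected_cpi * target_conversions

# Example: at a $3.00 expected CPI, 50 conversions needs ~$150 and 100 needs ~$300
for conversions in (50, 100):
    print(f"{conversions} conversions at $3.00 CPI -> ${test_budget(3.00, conversions):,.0f}")
```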
Pro tip: Label ad groups with clear naming conventions like AG_Gameplay_BenefitCopy_Q1. Six months from now, you will not remember what "Ad Group 7" was testing. Naming discipline is the foundation of institutional creative knowledge.
Step 6: What metrics should you prioritize in Google App Campaign reports?
Focus on three tiers of metrics, each answering a different question. Tier 1 (daily monitoring): CPI, conversions, cost. Tier 2 (weekly analysis): conversion rate, network split, asset ratings. Tier 3 (monthly strategic review): ROAS, retention cohorts from your MMP, and LTV payback period.
A common trap is optimizing purely for CPI. According to AppsFlyer's Performance Index, Google ranks highly on install volume but varies significantly on retention quality depending on network mix.
A campaign with a $1.20 CPI that is 85% Display traffic may underperform a $2.80 CPI campaign that is 60% YouTube on a Day 30 ROAS basis.
The metric that matters most depends on your optimization event. For tCPA campaigns optimizing to in-app purchases, track revenue per install (RPI) at Day 7 and Day 30. For tROAS campaigns, track actual return versus target weekly.
Key insight: CPI alone is misleading; network mix determines downstream value per install.
- Daily: CPI, conversions, spend pacing
- Weekly: conversion rate, network split, asset ratings
- Monthly: ROAS, D7/D30 retention, LTV payback
- Compare Google-reported CPI vs MMP-reported CPI
- Track revenue per install, not just cost per install
Pro tip: Set up automated rules in Google Ads to alert you if daily CPI exceeds 150% of your target. GAC can spike during algorithm re-learning phases, and catching spikes within 24 hours (versus weekly review) can save 5-10% of monthly budget per industry benchmarks from Singular's ROI Index.
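Google Ads automated rules handle this natively. If you also want a check outside the UI, a sketch like this one (the report columns and CPI targets are assumptions) flags the same condition from a daily campaign export:

```python
# Sketch: flag campaigns whose CPI yesterday exceeded 150% of target, from a daily export.
# Assumes a CSV with columns: campaign, cost, installs, plus a dict of CPI targets you maintain.
import pandas as pd

TARGET_CPI = {"GAC_US_Android_tCPA": 2.50, "GAC_US_iOS_tCPA": 4.00}  # placeholders
ALERT_MULTIPLIER = 1.5

daily = pd.read_csv("gac_daily_export.csv")
daily["cpi"] = daily["cost"] / daily["installs"]

for _, row in daily.iterrows():
    target = TARGET_CPI.get(row["campaign"])
    if target and row["cpi"] > target * ALERT_MULTIPLIER:
        print(f"ALERT: {row['campaign']} CPI ${row['cpi']:.2f} "
              f"is above {ALERT_MULTIPLIER:.0%} of target ${target:.2f}")
```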
Step 7: How do you diagnose why a Google App Campaign suddenly underperforms?
Sudden CPI spikes or volume drops in GAC usually trace to one of five causes: creative fatigue, budget changes, conversion event problems, seasonal competition shifts, or algorithm re-learning.
Start with the asset performance report. If assets that were rated "Best" are now "Good" or "Low," creative fatigue is likely. According to industry data from Liftoff, mobile ad creative fatigue sets in after approximately 2-4 weeks of heavy spend. Refresh your weakest-performing asset type first.
Next, check for budget-related disruptions. Increasing budget by more than 20% in a single day forces the algorithm to re-learn, per Google's own guidelines. The recommended maximum daily budget increase is 15-20%.
Also verify your conversion postbacks. A broken MMP SDK or delayed postback can cause Google's algorithm to make bad optimization decisions. In-app event data latency beyond 24 hours degrades campaign performance significantly.
Key insight: Check creative fatigue, budget changes, and conversion postbacks before changing strategy.
- Creative fatigue: assets drop from Best to Low
- Budget spikes: >20% daily increase triggers re-learning
- Broken postbacks: check MMP integration daily
- Seasonal competition: Q4 CPMs rise 20-40%
- Algorithm re-learning: takes 3-7 days to stabilize
| Symptom | Likely Cause | Diagnostic Step | Fix / Timeline |
|---|---|---|---|
| CPI spikes 30%+ | Creative fatigue or budget change | Check asset ratings + budget history | 3-7 days after fix |
| Volume drops 50%+ | Budget too low for CPA target | Compare budget to target CPA ratio | Increase budget or raise CPA target |
| Conversion rate drops | Broken SDK or postback delay | Verify MMP real-time dashboard | Immediate SDK fix |
| Network mix shifts to Display | Video assets underperforming | Review video asset ratings | Upload new videos, wait 7 days |
| All assets stuck in Learning | Insufficient budget or too many assets | Reduce asset count or raise budget | 7-14 days |
Pro tip: Keep a campaign change log. Every budget change, asset swap, and CPA target adjustment should be timestamped. When performance shifts, you can trace it to a specific change rather than guessing. Correlation with change dates is your most powerful diagnostic tool.
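The change log needs no special tooling; even an append-only CSV maintained by a helper like this (the fields are a suggestion, not a standard) is enough to correlate performance shifts with specific changes:

```python
# Sketch: append-only campaign change log for later correlation with performance shifts.
import csv
from datetime import datetime, timezone

LOG_PATH = "gac_change_log.csv"

def log_change(campaign: str, change_type: str, old_value: str,
               new_value: str, note: str = "") -> None:
    """change_type examples: budget, cpa_target, asset_swap."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(timespec="seconds"),
            campaign, change_type, old_value, new_value, note,
        ])

# Example usage:
log_change("GAC_US_Android_tCPA", "budget", "500/day", "575/day",
           "15% increase per guidelines")
```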
Step 8: How do you benchmark Google App Campaign performance against industry standards?
Benchmarking GAC performance requires context: your app category, target geography, optimization event, and campaign maturity all affect what "good" looks like.
According to Adjust's 2025 State of App Growth data, median CPIs on Google vary significantly by category. Gaming apps see lower CPIs but higher volume, while fintech and health apps face higher CPIs with better downstream conversion rates. The table below summarizes typical ranges.
Compare your metrics to these benchmarks weekly, but remember that averages mask enormous variance. A meditation app in the US will have fundamentally different benchmarks than a casual game in Southeast Asia. Use these as directional guides, not absolute targets.
Key insight: Benchmark by category and geo, not against overall averages that mask variance.
- Gaming CPIs are lowest but quality varies most
- Fintech CPIs are highest but LTV justifies them
- US CPIs run 2-3x higher than Southeast Asia
- Compare D7 retention alongside CPI for true benchmarking
- Track quarter-over-quarter trends, not just snapshots
| App Category | Median Google CPI (US) | Typical D7 Retention | Benchmark Source |
|---|---|---|---|
| Casual Gaming | $1.20 – $2.50 | 25-35% | Liftoff 2025 Report |
| Midcore / Strategy Gaming | $2.50 – $5.00 | 15-25% | Liftoff 2025 Report |
| Subscription / Lifestyle | $2.00 – $4.50 | 12-20% | RevenueCat State of Subs 2025 |
| Fintech | $5.00 – $12.00 | 18-28% | Adjust 2025 Report |
| Social / Dating | $2.50 – $6.00 | 10-18% | AppsFlyer Performance Index 2025 |
| eCommerce / Shopping | $1.50 – $3.50 | 15-22% | AppsFlyer eCommerce Report 2025 |
Pro tip: For subscription apps, benchmark install-to-trial rate alongside CPI. According to data from music and audio streaming app campaigns, a healthy install-to-free-trial rate on GAC is 15-25%. Below 10% signals a mismatch between ad creative promise and app onboarding experience.
Step 9: How should you structure your Google App Campaign reports for weekly review?
Build a reporting dashboard that answers three questions in order: (1) Is spend pacing to plan? (2) Is efficiency holding? (3) Are assets healthy? This three-layer framework prevents getting lost in vanity metrics.
Layer 1 covers budget and delivery: daily spend, total conversions, CPI/CPA trend. Layer 2 covers efficiency: CPI by network, conversion rate trend, cost per key in-app event. Layer 3 covers creative health: asset ratings distribution, days since last creative refresh, percentage of assets in "Learning."
For teams managing multiple campaigns, automate Layer 1 and 2 using Google Ads scripts or Looker Studio. Reserve human analysis time for Layer 3, where judgment about creative themes and testing roadmaps matters most.
Social networking app campaigns particularly benefit from weekly creative reviews because retention sensitivity to creative-audience fit is highest in that category.
Key insight: Automate spend and efficiency reporting; focus human time on creative health analysis.
- Layer 1: Budget pacing and delivery (automate)
- Layer 2: Efficiency by network (automate)
- Layer 3: Asset health and testing roadmap (manual)
- Use Looker Studio for automated dashboards
- Weekly review cadence is optimal for GAC
What should a weekly report template include?
Include these sections: Campaign snapshot (spend, CPI, conversions vs target), Network breakdown (Search/YouTube/Display/Play split with CPI per network), Asset scorecard (count of Best/Good/Low/Learning per type), MMP cross-reference (D1 and D7 retention for the week's installs), and Action items.
Keep the report to one page. Decision-makers need signal, not data. Highlight the 1-2 metrics that changed most versus the prior week and the 1 action you are taking in response.
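For the asset scorecard section of that template, a small rollup like this (it assumes a weekly asset export with asset_type and rating columns) produces the Best/Good/Low/Learning counts per type:

```python
# Sketch: asset scorecard rollup, counting ratings per asset type from a weekly export.
# Assumes a CSV with columns: asset_type (video/image/headline/description) and rating.
import pandas as pd

assets = pd.read_csv("asset_report_week_06.csv")

scorecard = pd.crosstab(assets["asset_type"], assets["rating"].str.upper())
scorecard = scorecard.reindex(columns=["BEST", "GOOD", "LOW", "LEARNING"], fill_value=0)
print(scorecard)
```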
Pro tip: Set up a Google Ads script to email you when any asset's rating changes. Rating shifts often precede CPI changes by 2-3 days, giving you an early warning window to prepare replacement assets before performance degrades.
Step 10: How do you use asset reports to inform your creative production pipeline?
Asset reports should directly feed your creative brief. The gap between what most teams do (produce whatever feels right) and what works (produce based on performance data) is where budget gets wasted.
Start by categorizing your "Best" rated assets by theme, format, and message. If your top-performing images all feature in-app screenshots with UI elements visible, that is a signal to produce more screenshot-style creatives rather than lifestyle photography.
According to research on player psychology and ad creatives, understanding why certain creative themes resonate (not just which ones do) prevents the local maxima problem where you iterate on surface variations while missing breakthrough concepts.
For mobile gaming apps using fail ad formats, asset reports might show that intentionally "bad" gameplay footage outperforms polished trailers. This data should reshape your production priorities entirely.
Plan creative production in 2-week sprints aligned with your GAC reporting cadence. Each sprint should produce replacements for "Low" rated assets and variations of "Best" rated ones. Budget roughly 30% of creative output for experimental new concepts and 70% for proven theme iterations.
Key insight: Let asset performance data drive creative briefs, not intuition or competitor copying.
- Categorize Best assets by theme, format, message
- Produce 70% iterations, 30% experiments
- Align creative sprints with 2-week report cadence
- Replace Low assets before they drag campaign CPI
- Analyze why winners work, not just what they show
Pro tip: Track creative "velocity": how many new assets you upload per month versus how many exit Learning status. If more than 40% of new assets remain stuck in Learning, you are either uploading too many at once or your test budget is insufficient. Reduce asset count per ad group before adding more.
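Creative velocity is easy to compute from the same snapshots; this sketch (column names assumed) reports how many assets uploaded in the last 30 days are still stuck in Learning:

```python
# Sketch: creative velocity check — share of recently uploaded assets still stuck in Learning.
# Assumes a CSV with columns: asset_id, upload_date (YYYY-MM-DD), rating.
from datetime import date, timedelta

import pandas as pd

assets = pd.read_csv("asset_report_week_06.csv", parse_dates=["upload_date"])
cutoff = pd.Timestamp(date.today() - timedelta(days=30))

recent = assets[assets["upload_date"] >= cutoff]
stuck = (recent["rating"].str.upper() == "LEARNING").mean()

print(f"{len(recent)} assets uploaded in last 30 days; {stuck:.0%} still in Learning")
if stuck > 0.40:
    print("Over 40% stuck: reduce asset count per ad group or raise test budget.")
```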
Step 11: How do you handle discrepancies between Google Ads data and MMP data?
Discrepancies between Google's reported installs and your MMP's attributed installs are normal and expected. Google counts installs using its own attribution methodology, while your MMP uses last-touch or multi-touch attribution across all media sources.
According to AppsFlyer's State of App Marketing report, discrepancies of 10-30% between self-reported network data and MMP data are typical. Google tends to over-report because it counts view-through conversions (a user saw an ad, then installed later) that your MMP might attribute to a different source.
The key principle: use Google's data for campaign optimization decisions (asset ratings, network allocation, bidding). Use MMP data for budget allocation decisions across channels and for LTV analysis. Never mix the two in the same report without clearly labeling the source.
For campaigns subject to regulatory requirements, such as fintech app compliance standards, MMP data with proper consent frameworks should be your source of record for performance claims.
Key insight: Discrepancies of 10-30% between Google and MMP data are normal; use each for different decisions.
- Google over-reports due to view-through attribution
- Use Google data for in-platform optimization
- Use MMP data for cross-channel budget allocation
- Never mix sources in one report without labels
- View-through window settings drive most discrepancies
How do you reduce the discrepancy gap?
Align attribution windows. Google defaults to a 30-day click / 1-day view window. If your MMP uses 7-day click / 1-day view, the mismatch inflates Google's numbers. Set both to the same window where possible.
Also verify timezone alignment. Google Ads defaults to your account timezone. Your MMP might default to UTC. A timezone mismatch creates daily discrepancies that cancel out weekly but confuse day-level analysis.
Finally, check for SDK integration issues. According to Adjust's technical documentation, 5-8% of install events can be lost due to SDK initialization timing on older Android devices. Regular SDK health checks reduce this noise.
Pro tip: Create a "discrepancy ratio" metric: MMP installs / Google reported installs. Track this weekly. A stable ratio (e.g., consistently 0.78) is fine. A ratio that suddenly shifts from 0.78 to 0.55 indicates a tracking problem that needs immediate investigation.
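A weekly discrepancy-ratio tracker fits in a few lines of pandas; the input columns and the alert threshold below are assumptions to adapt:

```python
# Sketch: weekly discrepancy ratio (MMP installs / Google-reported installs) with a shift alert.
# Assumes a CSV with columns: week (e.g. 2025-W06), google_installs, mmp_installs.
import pandas as pd

weekly = pd.read_csv("install_counts_by_week.csv").sort_values("week")
weekly["ratio"] = weekly["mmp_installs"] / weekly["google_installs"]
weekly["ratio_change"] = weekly["ratio"].diff()

print(weekly[["week", "ratio", "ratio_change"]].round(3))

latest = weekly.iloc[-1]
if abs(latest["ratio_change"]) > 0.10:  # a 0.78 -> 0.55 move would trip this
    print(f"Investigate: ratio moved {latest['ratio_change']:+.2f} week-over-week")
```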
Step 12: How do you build a testing roadmap using GAC report insights?
A testing roadmap translates report insights into structured experiments. Without one, creative testing becomes reactive and chaotic, and test budget gets wasted on poorly chosen experiments.
RocketShip HQ uses a prioritization framework: score each potential test on Expected Impact (based on asset report data), Ease of Execution (creative production effort), and Learning Value (does this test teach you something applicable across channels). Tests scoring highest across all three dimensions run first.
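A lightweight version of that scoring exercise can live in code; the test ideas and equal weighting below are illustrative, not a fixed methodology.

```python
# Sketch: rank candidate tests by Expected Impact, Ease of Execution, and Learning Value.
# Scores are 1-5; equal weights are an illustrative choice.
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    impact: int    # expected impact, informed by asset report data
    ease: int      # ease of execution (higher = less production effort)
    learning: int  # learning value applicable across channels

    @property
    def score(self) -> float:
        return (self.impact + self.ease + self.learning) / 3

ideas = [
    TestIdea("Replace fatigued hero video", impact=5, ease=3, learning=3),
    TestIdea("Headline angle test (benefit vs urgency)", impact=4, ease=5, learning=4),
    TestIdea("HTML5 playable vs static image", impact=3, ease=2, learning=5),
]

for idea in sorted(ideas, key=lambda i: i.score, reverse=True):
    print(f"{idea.score:.2f}  {idea.name}")
```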
From asset reports, identify the largest gap. If your "Best" video has been running for 6+ weeks and is starting to fatigue, video testing is the priority. If all your text headlines are rated "Good" (none "Best"), headline testing offers the biggest unlock.
Structure your roadmap in 4-week cycles. Week 1-2: launch new test ad groups. Week 3: first read on asset ratings and directional CPI. Week 4: conclusive read, document learnings, plan next cycle.
Per analysis of AI creative testing costs, producing more creatives with AI tools does not reduce testing costs because each test variant still requires proportional ad spend to evaluate.
Key insight: Score tests on impact, ease, and learning value; run the highest-scoring first.
- 4-week testing cycles align with GAC learning periods
- Prioritize the asset type with the biggest gap
- More AI-generated creatives still require proportional test budget
- Document every test result for institutional knowledge
- Apply winning themes cross-channel to Meta and TikTok
How do you document test results for future reference?
Create a creative testing database with these fields: test hypothesis, ad group name, start date, end date, asset count, conversions generated, CPI result, asset ratings achieved, and key learning.
This database becomes your most valuable UA asset over time. After 6 months of disciplined documentation, you will have a pattern library showing which creative themes, formats, and messages consistently perform in GAC. This eliminates redundant testing and accelerates new campaign launches.
Include screenshots of the actual assets tested. Six months later, "gameplay video with dramatic music" means nothing without the visual reference.
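One low-friction way to keep that database queryable is a local SQLite table; the schema below simply mirrors the fields listed above, plus a path to the asset screenshot.

```python
# Sketch: creative testing database as a local SQLite table mirroring the fields above.
import sqlite3

conn = sqlite3.connect("creative_tests.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS creative_tests (
    test_hypothesis   TEXT,
    ad_group_name     TEXT,
    start_date        TEXT,
    end_date          TEXT,
    asset_count       INTEGER,
    conversions       INTEGER,
    cpi_result        REAL,
    asset_ratings     TEXT,   -- e.g. "2 Best / 1 Good / 1 Low"
    key_learning      TEXT,
    screenshot_path   TEXT    -- visual reference for the assets tested
)
""")
conn.commit()
conn.close()
```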
Pro tip: Allocate 15-20% of your total GAC budget to testing ad groups. The remaining 80-85% runs proven winners at scale. This ratio, per common industry practice, balances learning velocity with performance stability.
Common Mistakes to Avoid
- Mistake 1: Replacing "Low" assets immediately instead of waiting 14+ days for stable ratings
- Mistake 2: Ignoring network-level reports, leading to 80%+ Display spend without realizing it
- Mistake 3: Asset stuffing: uploading 20 images into one ad group, preventing any from exiting Learning
- Mistake 4: Increasing daily budget by 50%+ and triggering full algorithm re-learning for 7 days
- Mistake 5: Using Google-reported installs for cross-channel budget allocation instead of MMP data
- Mistake 6: Never tracking asset rating changes over time, missing early fatigue signals
- Mistake 7: Testing copy and visuals simultaneously in one ad group, making results unattributable
Reading Google App Campaign reports effectively requires looking beyond surface metrics. Focus on asset ratings as relative signals, network reports as quality indicators, and conversion paths as creative strategy guides.
Start this week: export your network-level report, cross-reference with MMP retention data, and build your first creative testing roadmap using the 4-week cycle framework outlined above.
Frequently Asked Questions
Can you see which keywords trigger your Google App Campaign ads?
No. GAC does not provide keyword-level reporting. You can see a limited "search terms" report showing some queries that triggered impressions, but it covers only a fraction of actual search volume. According to Google's documentation, this report may show fewer than 10% of actual search queries that triggered your ads.
How long should you let a new Google App Campaign run before evaluating performance?
Allow at least 14 days and 100 conversions before making optimization decisions. Per Google's guidelines, the algorithm's learning period requires this minimum. Campaigns targeting downstream events like purchases may need 3-4 weeks for stable performance.
Does Google App Campaign performance differ between Android and iOS?
Yes, significantly. iOS campaigns face SKAdNetwork attribution limitations, which restrict Google's optimization signal. iOS CPIs average 30-60% higher than Android in the same app categories per Adjust's 2025 data. Reporting granularity is also lower on iOS.
Should you run separate campaigns for tCPI and tCPA optimization?
Yes. tCPI campaigns optimize for install volume and work best for top-of-funnel growth. tCPA campaigns optimize for in-app events (purchases, subscriptions) and deliver higher quality users at higher CPIs. Running both simultaneously lets you balance volume and quality. Keep budgets separate to avoid one cannibalizing the other.
How do you tell if creative fatigue is causing performance decline versus seasonal competition?
Check asset ratings first. If "Best" assets are downgrading to "Good" or "Low," that is creative fatigue. If asset ratings remain stable but CPI is rising, look at auction-level signals: increased competition during Q4 or app launches in your category drive CPM inflation. Cross-reference with CPM trends in your Google Ads auction insights report.
What video specifications work best for YouTube placements in GAC?
Upload both portrait (9:16) and landscape (16:9) in 15-second and 30-second lengths. According to Google's asset requirements, including both orientations increases eligible YouTube inventory by 60%. Keep file sizes under 150MB and ensure the core message appears in the first 3 seconds.
Can you exclude specific placements or apps from Google App Campaigns?
You can exclude specific apps and app categories via account-level placement exclusions, but not at the campaign level within GAC. Navigate to Tools > Placement Exclusions. Industry practice suggests excluding children's content apps and low-quality game apps, which according to common UA benchmarks can consume 5-15% of Display budget with near-zero retention.
How do HTML5 assets perform compared to static images in Google Display placements?
HTML5 interactive assets typically achieve 10-20% higher click-through rates than static images on Display, per Google's creative benchmarks. However, they require more production effort and are harder to iterate quickly. For most teams spending under $50K/month on GAC, static images with strong visual hooks deliver better ROI on production time.
Looking to scale your mobile app growth with performance creative that delivers results? Talk to RocketShip HQ to learn how our frameworks can work for your app.
Not ready yet? Get strategies and tips from the leading edge of mobile growth in a generative AI world: subscribe to our newsletter.