App Keyword Optimization

Updated on April 30, 2026

How to do ASO keyword research and win in app stores (at enterprise scale)?

ASO keyword research - also called app store keyword research or mobile app keyword research - is how mobile apps get found in search on the App Store and Google Play. It's the work of identifying which terms real users type, prioritizing them by relevance, search volume, and ranking feasibility, and then placing the winners in the indexed metadata fields each store exposes.

Ivan Žgela

For a team running this across 40 apps in 8 markets, the recurring nature of the work hides most of the effort. It’s a monthly cadence that demands real hours, yet very often app marketing teams have one person with a spreadsheet open on one screen and two different tools open on the other.

This guide is written for those teams. It gives you a scoring formula you can run against any candidate list (RVD). It also gives you a multi-market rollout cadence for teams with more markets than headcount (Market Tier Matrix). And it gives you a clean line between what keyword research actually moves, and what it can’t do alone.

Even if you manage a single app in one market, the guide is still for you. The process stays the same; only the volume of work grows with the number of apps you manage.

What is ASO keyword research?

ASO keyword research is the first step in app store optimization. The job is to figure out which search terms inside the App Store and Google Play map to what your app does, which of those terms people actually use, and which ones you can plausibly rank for given who you’re up against.

The output is a prioritized list of keywords that goes into the indexed metadata fields each store exposes. On the App Store that’s the app name, the subtitle, and the 100-character keyword field. On Google Play that’s the app title, the short description, and the long description.

Those fields are where the ranking fight happens. A candidate keyword that never lands in indexed metadata doesn’t exist as far as either store’s search algorithm is concerned.

The way keyword research differs from its web equivalent is the store layer. Google ranks pages across the whole internet. The App Store and Google Play rank one listing per app against competitors inside the same category, inside a closed index they control.

The research approach is the same. The output fields and the algorithm behavior aren’t.

Why the enterprise version of keyword research looks different

Indie developers researching keywords for one app rarely run into the problems this guide addresses. The work changes when you run strategic app keyword research across a portfolio of apps.

Scale changes the math

40 apps across 8 markets is 320 market listings to maintain, not one. A keyword call made on day one of a quarter needs to integrate across all of them without breaking market-specific context. That’s why the Market Tier Matrix section exists later in this guide.

Ownership is distributed across three teams

Many enterprise apps don’t have a dedicated ASO lead. UA runs paid, brand owns visuals, mobile dev controls the release calendar, and ASO falls between the three.

A head of growth at a regulated crypto exchange we talked with put it plainly: “We don’t have anyone in-house specifically for ASO.” That’s the org-design shape most readers are standing in.

Metadata alone buys limited uplift

Metadata changes move rankings by one or two positions, sometimes zero. A common scenario is that you change descriptions in the UK, Italy, and Spain storefronts. A couple of keywords may move one position and stay there. That’s normal.

Metadata alone doesn’t move category rank. Download velocity does. Keyword research is what makes sure the velocity you build lands against terms real users search.

Keyword research at scale has to respect all three aspects. The rest of this guide is built to help you understand and implement the process strategically.

Where do your potential app keywords come from?

Before you can prioritize a keyword list, you need to have one. Most ASO tools make that harder than it should be. They ask for a seed list first, then show you variants of what you already gave them.

Good discovery starts from sources that don’t need a prompt. Four matter for enterprise teams.

App Store and Google Play autofill

Type the first three letters of what your app does into the search bar inside each store. The autofill suggestions are what real users typed recently. It’s the rawest signal available, and it’s free.

Record the top 10 suggestions for your top 5 category seeds per market. That’s 50 potential keywords before you’ve touched a tool.

Competitor metadata

Your top three to five direct competitors are already doing the research. Their app title, subtitle, and long description name the keywords they’ve decided matter.

Pulling those terms into your keyword list gives you 100 to 300 potential keywords per app, depending on how keyword-dense their metadata is.

Tools like App Radar reverse-engineer this automatically across multiple competitors at once, which matters when you’re managing 40 apps and don’t have time to do it by hand.

Apple Ads search term reports

These show the actual queries users typed that triggered your paid ads. Real intent data, direct from Apple.

If you don’t have any keywords ready, that is fine – you can reverse-engineer your competitors’ keywords and where they appear.

Pair search term reports with competitor metadata extraction and you have both sides covered.

One caveat worth knowing: competitor bidding intelligence (which keywords other apps bid on in Apple Ads) isn’t the same as organic keyword discovery. Bidding data skews toward high-commercial-intent terms because apps only bid on what converts. Useful input, but not a substitute for a dedicated organic keyword tool.

AI-surfaced organic keywords

App Radar and comparable tools use AI to suggest keywords based on your app description, category, and current rankings. These fill the gap between the three human-sourced methods above and long-tail terms nobody is manually looking for yet.

With these four sources, you could get 200 to 500 potential keywords per app. The next step is deciding which ones earn a spot in your metadata.

How do you prioritize a keyword list? The RVD Score

At 200 to 500 potential keywords per app, you can’t target them all. Apple’s keyword field is 100 characters. Google Play’s short description is 80. The title is 30 characters on both stores. There’s physical room for maybe 20 to 40 well-chosen keywords per app.

That’s the constraint. Prioritization is how you choose and place those keywords.

The RVD Score is a simple formula for ranking your list:

RVD Score = (Relevance × Volume) / Difficulty

Score each keyword on a 1 to 10 scale across all three inputs. Multiply relevance by volume. Divide by difficulty. Higher score, higher priority.

The point isn’t mathematical purity. It’s having one number per keyword so you can sort 500 of them and take the top 40.
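At a few hundred candidates per app, the sort is easier in code than by hand. A minimal Python sketch of the formula and the sort; the keywords and 1-10 scores are illustrative, not data from any tool:

```python
def rvd_score(relevance: float, volume: float, difficulty: float) -> float:
    """RVD = (Relevance x Volume) / Difficulty, each input scored 1-10."""
    return (relevance * volume) / difficulty

# Illustrative candidates: (keyword, R, V, D) -- scores are assumptions.
candidates = [
    ("spanish grammar drills", 9, 3, 10),
    ("learn spanish", 10, 10, 2),
    ("daily spanish lessons", 10, 6, 7),
]

# One number per keyword: sort descending and take the top of the list.
ranked = sorted(candidates, key=lambda c: rvd_score(c[1], c[2], c[3]), reverse=True)
top = [kw for kw, *_ in ranked]  # highest-priority keywords first
```

In practice the input list comes from a tool export or spreadsheet; the only part that matters is that every keyword gets exactly one comparable number.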

Relevance: how well the keyword matches your app

A keyword can have huge volume, but if it doesn’t describe what your app actually does, users who tap through won’t install. They’ll bounce. Bounces hurt your conversion rate and burn paid budget if you’re also bidding on that term in Apple Ads.

Score 1-3 for loose or tangential fits. 4-6 for adjacent use cases. 7-10 for terms that describe your app’s core function.

Volume: how many people actually search the keyword

You don’t get exact search counts. Apple publishes a Search Popularity score on a 0 to 100 scale inside Apple Ads. Google Play keywords are not exposed directly, but ASO tools like App Radar estimate popularity from ranking and indexing patterns.

Score 1-3 for popularity under 20. 4-6 for 20 to 50. 7-10 for anything above 50.

Difficulty: how crowded the keyword already is

Difficulty combines two inputs: how many apps target the keyword in their metadata, and how well-ranked those apps already are. A keyword targeted by 50 indie apps is much easier than one targeted by 5 apps with millions of downloads.

Most tools output difficulty on a 0 to 100 scale. Invert it for RVD: 0-20 = score of 10, 21-40 = score of 7, 41-60 = score of 4, 61+ = score of 1-2.

A worked example

Four potential keywords for a hypothetical ed-tech app targeting US Spanish learners:

Keyword | R | V | D | RVD | Bucket
learn spanish | 10 | 10 | 2 | 50 | Stretch target
language app | 8 | 9 | 2 | 36 | Stretch target
daily spanish lessons | 10 | 6 | 7 | 8.6 | Core target
spanish grammar drills | 9 | 3 | 10 | 2.7 | Long-tail win

Three tiers emerge naturally from the scores:

  • Stretch targets (RVD 20+). Huge upside, brutal competition. Include one or two in metadata. Organic wins take 6+ months and require download velocity the keyword alone won’t generate.
  • Core targets (RVD 5-20). Aim for 15-25 of these per app. Top 10 ranking within 60-90 days is realistic with disciplined metadata.
  • Long-tail wins (RVD under 5 but high relevance). Low volume individually but a bundle of 10-15 long-tail terms adds up to strong organic install volume.

The trap is high-popularity terms with low relevance. “Translate app” might score well on V and D if you’re an ed-tech app, but users who tap through expecting translation will bounce when they hit your lesson grid. Relevance is the only non-negotiable of the three. High V and low D don’t compensate.

Why scoring matters more at enterprise scale

At portfolio scale, you can’t hand-negotiate every keyword. A scoring formula compresses the decision. Once the list is scored, optimization becomes “include everything above X until you run out of metadata slots.”

One honest ceiling: RVD helps you pick the right fights, but metadata alone doesn’t win them. Real ranking depends on download velocity you build through creative, conversion, and UA.

How do you find the keywords your competitors own and you don't?

Your direct competitors have already done keyword research you can learn from. The keywords they target, especially the ones they rank well for, are a map of what works in your category.

The cleanest approach sorts every competitor keyword into three buckets: overlap, gap, and capture. One pass through your competitor set means one filtered list at the end.

Look for overlap with keywords you and your competitors both rank for

Pull your current rankings for any keyword where you rank above, say, position 30 on the App Store or inside the top 100 on Google Play. Do the same for each of your top 3-5 competitors. Anywhere both you and a competitor show up in the results is overlap.

Overlap is useful for two reasons:

  • It tells you which keywords are contested, which helps you spot where ranking is stable versus volatile.
  • It’s a sanity check on your metadata: if you’re not even ranking on terms your closest competitors own, something in your metadata placement is off.

Find the gap keywords only your competitors rank for

A keyword where a competitor ranks in the top 10 and you’re nowhere means one of three things:

  • a keyword they’ve built metadata equity for that you haven’t tried
  • a keyword they’re getting download velocity on that you can’t match yet
  • a keyword that isn’t relevant to your app and should be filtered out before you waste time

A common question we hear from enterprise buyers during tool evaluations: “Do we know which keywords bring our competitors their traffic?”

Gap analysis is the answer. Tools like App Radar run this automatically across your competitor set, flagging every term where a competitor ranks in the top 10 and you’re outside the top 100.

Capture the keywords you can realistically take

Not every gap keyword is worth chasing. Filter the gap list through three tests.

First, relevance. Apply the R score from the RVD section. If a keyword scores below 6, drop it. High volume and low difficulty won’t save you.

Second, difficulty delta. Check the competitor’s authority. If the competitor ranking for the term has 10x your downloads, that’s not a gap you can close with keyword research alone.

Third, metadata fit. Does the keyword fit naturally into an existing metadata slot, or would adding it require pushing out something already working?

What’s left is the capture list. That’s where you focus in the next quarter.
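With ranking exports in hand, the bucket sort is a few lines of code. A sketch, assuming each export maps keyword → current position and a missing key means the app doesn’t rank; the relevance filter is the first capture test, while the difficulty-delta and metadata-fit tests stay manual:

```python
def bucket_keywords(yours: dict, theirs: dict, top: int = 10) -> tuple:
    """Split a competitor's keywords into overlap and gap buckets.

    Overlap: both apps rank for the keyword.
    Gap: the competitor ranks within `top` and you don't rank at all.
    """
    overlap = {kw: (yours[kw], pos) for kw, pos in theirs.items() if kw in yours}
    gap = {kw: pos for kw, pos in theirs.items()
           if kw not in yours and pos <= top}
    return overlap, gap

def capture(gap: dict, relevance: dict, min_r: int = 6) -> list:
    """First capture test: keep gap keywords with an R score of min_r or more."""
    return [kw for kw in gap if relevance.get(kw, 0) >= min_r]
```

One pass per competitor, and the output feeds straight into the same RVD vocabulary used earlier.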

A short worked example

A fitness app comparing against two competitors in the US market might produce a table like this:

Keyword | Your rank | Competitor A | Competitor B | Bucket
home workout | 12 | 8 | 14 | Overlap
hiit timer | 89 | 3 | – | Gap – capture candidate
yoga for seniors | – | 5 | – | Gap – check relevance
7 minute workout | – | 11 | 2 | Gap – check difficulty
running tracker | 38 | 22 | – | Overlap

“Yoga for seniors” drops on the relevance test – a general fitness app shouldn’t chase a senior-specific vertical.

“7 minute workout” drops on the difficulty test – it’s a branded term dominated by an incumbent with ten million downloads. “Hiit timer” survives all three. That’s the keyword you target next quarter.

Pulling every competitor keyword is data. Sorting them into three buckets and running the capture filter is insight. The filter puts competitor intelligence into the same priority vocabulary as RVD, so you run one keyword-review workflow, not two.

How do you scale keyword research across 8+ markets? The Market Tier Matrix

Enterprise ASO leads face a practical problem that small-team playbooks ignore.

What if you have 50+ countries you advertise your app in? Is it still possible to see worldwide app keyword search volume for each keyword, not just per country?

The honest answer: no single view exists today. Apple Search Popularity scores are per-country. App Radar and similar ASO tools estimate per-market. A worldwide-aggregate keyword popularity number isn’t sitting on the shelf.

So enterprise keyword research doesn’t solve for the missing aggregate view. It works around it. The workaround is the Market Tier Matrix.

The matrix: revenue on one axis, maturity on the other

Sort every market your app runs in against two axes.

The first is revenue contribution. High means your top 3 markets by revenue. Medium covers markets 4 through 8. Low is anything below your top 8.

The second is ASO maturity. High means you already rank well for core category terms in that market. Medium means you rank on brand and some generics but gaps exist. Low means you’re indexed but not ranking, or you haven’t launched localized metadata yet.

That’s a 3×3 matrix. Most enterprise portfolios cluster in four cells:

  • High revenue, high maturity. Defend what you’ve built. Refresh quarterly.
  • High revenue, low maturity. The biggest opportunity. Deep work here first.
  • Low revenue, low maturity. Light touch. Translate core metadata, set a baseline, monitor.
  • Low revenue, high maturity. Maintain. No new research unless ranking drops.

The middle cells fall between these patterns. You don’t need a rule for every cell. You need a clear rule for the extremes, and local judgment for the rest.
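The corner-cell rules reduce to a lookup. A sketch, with the playbook strings paraphrased from the four bullets and the middle-cell fallback made explicit as a judgment call:

```python
# Playbook strings paraphrased from the four corner-cell rules.
PLAYBOOK = {
    ("high", "high"): "defend: refresh quarterly",
    ("high", "low"):  "invest: deep work here first",
    ("low", "low"):   "light touch: translate core metadata, set baseline, monitor",
    ("low", "high"):  "maintain: no new research unless ranking drops",
}

def market_plan(revenue: str, maturity: str) -> str:
    """Look up the playbook for a (revenue, maturity) cell of the 3x3 matrix."""
    return PLAYBOOK.get((revenue, maturity),
                        "judgment call: interpolate from the corner cells")
```

Encoding the rule this way matters less for the code than for the conversation: it forces the team to agree on the extremes once, instead of re-litigating every market every quarter.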

The 1-2-3 Rollout: three waves, not twenty

Once markets are tiered, keyword research rolls out in three waves across a quarter.

  • Wave 1: deep work in Tier 1 markets. Full candidate discovery (the four sources from earlier), full RVD scoring, full overlap-gap-capture analysis against top 3-5 competitors. Usually 2-3 markets. That can take 2-3 weeks of work per market.
  • Wave 2: replicate into Tier 2 markets. Take the Wave 1 keyword list as a starting point. Translate into Tier 2 language and cultural context. Re-score on local V and D (relevance usually carries across markets). Reuse the competitor analysis structure but swap the competitor set for local leaders. Usually this takes 1-2 weeks per market.
  • Wave 3: refresh Tier 3 markets. Machine translation with native-speaker review for the core RVD top-10 keywords. No full competitor analysis. Monitor rankings for 60 days before committing more budget. This could take half a day to a day per market.

The math: a Tier 1 market gets roughly 12-15 days of effort over the quarter. A Tier 3 market gets half a day. Two to three Tier 1 markets plus six to eight Tier 2 markets plus the rest as Tier 3 fills one ASO manager’s quarter without overflow.

Why the matrix beats an equal-effort rollout

Equal-effort app localization is where solo ASO managers drown. One person running ten to forty apps can’t also do twenty-market keyword research at equal depth every quarter. The Market Tier Matrix makes prioritization explicit and concentrates effort where revenue justifies it.

App Radar supports this pattern with multi-market metadata management and localized keyword suggestions. But tier logic matters more than the tool. A disciplined manager with Excel and Google Translate beats an undisciplined one with the best platform in the category.

Where do you place app keywords? iOS vs Google Play mechanics

Keyword research produces a list. Placement is where that list turns into ranking signal. The two stores treat keyword placement differently, and the details matter.

App Store

Three fields carry most of the keyword weight in the App Store:

  • App name (30 characters). Highest weight. Core brand plus your most important generic keyword.
  • Subtitle (30 characters). Second-highest weight. The next-tier keyword cluster.
  • Keyword field (100 characters, comma-separated, invisible to users). Where the rest of your priority list lives. No spaces after commas.
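Packing the 100-character field is easy to get wrong by hand (stray spaces, overruns). A sketch that keeps RVD priority order and stops at the limit; the greedy stop-at-first-overflow rule is an assumption, not a store requirement:

```python
def build_keyword_field(keywords: list, limit: int = 100) -> str:
    """Pack keywords into Apple's keyword field: comma-separated, no spaces,
    at most `limit` characters. Input is assumed pre-sorted by RVD priority."""
    field = ""
    for kw in keywords:
        candidate = kw if not field else f"{field},{kw}"
        if len(candidate) > limit:
            break  # strict priority order: stop rather than skip ahead
        field = candidate
    return field
```

For example, `build_keyword_field(["spanish", "grammar", "drills"], limit=15)` keeps the first two keywords and drops the third, because adding it would exceed the limit.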

In-app purchase display names and descriptions index lightly. Promotional text and the main app description do not index for keywords.

Writing dense keyword paragraphs in the description wastes effort. The description converts readers, but it doesn’t rank.

A recent change matters here. Since September-October 2025, custom product pages rank organically for the keywords they’re assigned to, not just for paid ad traffic. That means a custom product page variant built around a specific keyword cluster can serve users who land on your page from that organic search.

The keyword research implication: for high-value stretch or core keywords, a dedicated custom product page is worth considering rather than forcing one default page to serve every search intent.

Google Play

Three fields matter the most for ASO metadata in Google Play:

  • App title (30 characters). Same weight logic as Apple.
  • Short description (80 characters). Indexed. Above-the-fold, visible to users.
  • Long description (4,000 characters). Indexed. Keyword density matters – 2 to 3 percent for your top 5 keywords is the safe range. Above 4 percent and the algorithm reads it as stuffing, and ranking drops.

Unlike Apple, Google Play re-weights a keyword based on how often it appears. So adding repetition within the 2-3 percent range helps. The same word three times across the long description generally outranks it appearing once.
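Density is worth checking programmatically before shipping a 4,000-character description. A single-word sketch; multi-word phrases would need phrase matching, which this deliberately skips, and the verdict thresholds are the 2-3 percent safe range and 4 percent stuffing line from above:

```python
import re

def keyword_density(description: str, keyword: str) -> float:
    """Percentage of words in the description equal to the (single-word) keyword."""
    words = re.findall(r"[\w']+", description.lower())
    hits = sum(1 for w in words if w == keyword.lower())
    return 100 * hits / len(words) if words else 0.0

def density_verdict(pct: float) -> str:
    """Apply the 2-3% safe range and the 4% stuffing threshold."""
    if pct > 4:
        return "risk: reads as stuffing"
    if 2 <= pct <= 3:
        return "safe range"
    return "room to repeat"
```

Run it for each of your top 5 keywords against the draft long description before submission.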

One rule spans both stores: don’t repeat the same keyword in both title and subtitle on Apple, or title and short description on Google Play – a better tactic is to go after other important keywords from your list.

How do you measure whether keyword research actually worked?

You can measure what ranked. You can’t measure, with precision, which keyword drove which install. That gap matters.

What you track monthly

The core report has four lines:

  • how many of your top 40 keywords rank in top 10, top 30, and top 100
  • your search visibility score (the composite health metric most ASO tools publish)
  • your category rank
  • your indexed keyword count
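The first line of that report is a banding count over your rank export. A sketch, assuming a dict of keyword → current position with unranked keywords absent; the bands are cumulative, so a top-10 keyword also counts in top 30 and top 100:

```python
def rank_distribution(ranks: dict) -> dict:
    """Count tracked keywords per ranking band (bands are cumulative)."""
    return {
        "top 10":  sum(1 for p in ranks.values() if p <= 10),
        "top 30":  sum(1 for p in ranks.values() if p <= 30),
        "top 100": sum(1 for p in ranks.values() if p <= 100),
    }
```

Store the output per month and the quarter-over-quarter comparison the next paragraph recommends is a diff of two small dicts.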

If you are a team lead, the most important question is: “How is this keyword performing month over month – volume, impressions, ranking?”

Track quarter over quarter, not day to day. Daily ranking movement is a distraction, because quarterly is where keyword research effort shows up.

The installs-per-keyword attribution gap

The most common question across enterprise ASO evaluation calls is some version of: “If we connect our app, could we see installs per keyword?” The honest answer is no, not cleanly.

Apple does not publish install attribution by keyword for organic search. Google Play Console does publish an installs-per-keyword report, but only for Google Play and only for the installs Google is confident it can attribute.

For teams running Apple Ads, there’s a tighter read worth flagging. An Apple Ads management platform like SplitMetrics Acquire, connected to your MMP, surfaces per-keyword installs, conversion rate, and revenue for paid traffic – paired with each keyword’s current organic rank, Share of Voice, and Impression Share inside the same view.

None of those last three signals are available natively outside an Apple Ads management tool.

Combined, this is the closest read most enterprise teams get to per-keyword install attribution: paid metrics as the leading indicator, organic rank movement as the lagging confirmation.

Moving forward with ASO keyword research

ASO keyword research isn’t a project with a finish line. It’s a quarterly cadence run against a moving target, with a tool stack that won’t fully close the paid-organic data gap, against competitors who are also doing the work.

The teams that win are the ones running the work reliably – disciplined discovery, a scoring formula, a competitor pass, and a market tier prioritization they actually stick to.

A single-app team could run this playbook with a spreadsheet and an App Radar account. A scale-up running ten to forty apps across eight or more markets needs the full multi-market operating model, and that’s where you need tools, processes and people.

Bonus - what app marketers really ask about ASO keyword research

Where do we start with keyword research for a new app?

The answer starts with the four discovery sources: App Store and Google Play autofill, your top 3 to 5 competitors’ metadata, Apple Ads search term reports, and AI-surfaced suggestions from an organic ASO tool. Once you have a candidate list, you score it with RVD, which stands for relevance times volume divided by difficulty. You then prioritize the top 40 keywords for metadata placement. That is a single quarter of work for a first app store keyword research pass, and it is the canonical starting point for any team coming to keyword research for ASO without a prior baseline.

Why do metadata changes move my rankings by only one position?

That outcome is normal at portfolio scale. Metadata lifts alone are small in competitive markets, and one or two positions per keyword is a typical result. Real rank movement needs metadata plus download velocity, which means paid UA, featuring, reviews, and creative wins. The ASO keywords that do not get velocity will stall in the rankings even with clean metadata.

Is keyword research different on the App Store versus Google Play?

Yes, the mechanics differ. The App Store indexes the app name, subtitle, and keyword field only, and the description does not count toward keyword ranking. Google Play indexes the title, short description, and long description, and repetition inside a 2 to 3 percent density range helps. The underlying research process is the same across both stores, but the placement rules differ. Some practitioners use the term app store keyword analysis specifically for the Apple-side evaluation, and app store keyword search is another variant used for the Google Play workflow.

What are the best app store keyword research tools?

App Radar and AppFollow are the main enterprise options on the market today. The app store keyword research tool most teams start with is App Radar, thanks to its multi-market support and competitor keyword reverse-engineering. Whichever product you pick from the ASO keyword research tools category, consistency inside one tool’s volume estimates matters more than the tool choice itself. Custom keyword workflows usually require API export, which is worth checking before you commit to a vendor.


Ivan Žgela
SEO and Lead Content Manager
Ivan leads organic growth strategy across the SplitMetrics and App Radar brands. With 10+ years in SEO, content marketing, and the mobile app industry, he specializes in turning technical app growth topics into search-driven content that reaches mobile marketers, UA managers, and growth teams.