ASO ranking factors in 2026: how the App Store and Google Play actually rank apps
ASO ranking factors are the inputs Apple and Google use to decide which apps appear, and in what order, when someone searches the App Store or Google Play. There are 16 of them in 2026. Some you write yourself. Some you earn.

The list hasn’t changed much. The weights have. Apple and Google both rebalanced between 2024 and 2026, away from pre-install signals (keywords, ratings, raw downloads) and toward post-install signals (retention, conversion rate, semantic relevance). Most older ranking-factor guides still describe the old weights.
Around 65% of app downloads come through search. In March 2026 Apple compressed the organic window further by adding paid ad placements between organic results. Every factor matters more than a year ago.
Apple and Google don’t publish full ranking weights, so this article combines what they do publish with SplitMetrics and App Radar’s experience of working across thousands of apps.
What ASO ranking factors actually matter in 2026?
There are 16 ranking factors in total. You write 11 of them into your store listing yourself. The algorithm earns the remaining 5 from how users behave around your app.
The on-metadata fields you write differ between Apple and Google Play in specifics. The off-metadata signals you earn are mostly shared between the two stores.
On-metadata fields are what you write into App Store Connect or Google Play Console:
| # | Factor | App Store spec | Google Play spec | Weight |
|---|---|---|---|---|
| 1 | App Name / App Title | 30 chars, indexed | 30 chars, indexed | Highest |
| 2 | Subtitle / Short Description | 30 chars, indexed | 80 chars, indexed, shown above the fold | High |
| 3 | Keyword Field | 100 chars, hidden, comma-separated | Not used (Long Description plays this role) | High (Apple only) |
| 4 | Long Description | 4,000 chars, indexed at lower weight | 4,000 chars, indexed | Medium |
| 5 | Promotional Text | 170 chars, indexed, editable without app review | Not present | Medium (Apple only) |
| 6 | Screenshot text overlays | OCR-indexed since June 2025 | Not indexed | Medium (Apple only) |
| 7 | In-App Events / Promo Content | Event name + short description indexed; long description not | Timed Promo Content cards | Medium |
| 8 | Custom Product Pages / SLEs | Up to 70 versions, rank organically for assigned keywords since July 2025 | Store Listing Experiments (native A/B testing) | Medium |
| 9 | Categories + tags | Primary + secondary category | Main category + up to 5 custom tags | Low-Medium |
| 10 | In-App Purchase names | 64 chars per IAP, indexed | Not indexed | Low (Apple only) |
| 11 | Data Safety / Privacy Labels | iOS Privacy Labels (light signal) | Required since 2022, completeness affects discoverability | Medium (Google Play only) |
App Title carries the highest single weight in both stores.
The Apple keyword field is unique. It allows 100 hidden characters and has no Google Play equivalent.
Google Play’s Short Description punches above its visual size – industry research says 84.2% of successful Google Play ranking improvements correlated with adding the target keyword to the Short Description, even when the Title was unchanged.
Off-metadata signals are what the algorithm measures from how the world reacts to your app:
| # | Signal | What the algorithm measures | Weight |
|---|---|---|---|
| 12 | Download velocity | Installs per day relative to category baseline | Highest |
| 13 | Conversion rate | Tap-through rate and install rate from impressions | High |
| 14 | Ratings and reviews | Average rating, review count, recency, sentiment – velocity often beats absolute rating | High |
| 15 | Retention and quality | Day 1, 7, 30 retention; crash-free sessions; ANR rate. Healthy benchmarks: D1 above 35%, D7 above 15% | High |
| 16 | Update cadence | Frequency of meaningful updates | Medium |
Off-metadata signals are where 2026 ranking is most clearly different from 2020. Conversion rate, retention, and review velocity now carry as much weight as the metadata fields, sometimes more. Data shows that apps using custom product pages on Apple generated 6.56 billion impressions with conversion rates climbing from 42% to 56% among adopters, yet only 31% of apps use custom product pages as of Q1 2025. The lever exists. Most apps have not pulled it.
A small app with above-category retention and a 4.6-star rating can outrank a bigger app with stronger metadata and weaker post-install signals. The next two sections take each algorithm separately. They cover what the App Store actually weighs first, what Google Play does differently, and where the practical differences live.
How does the App Store algorithm rank apps?
The App Store algorithm weighs two named relevance signals, both of which Apple has now confirmed publicly.
The first signal is behavioral relevance. This covers what users actually do when they see an app in search results: taps on the listing, downloads, and post-install behavior including retention and engagement.
The second signal is textual relevance, which measures how well your app’s metadata semantically matches what the user searched for. Apple named both signals explicitly in its March 2026 research paper Scaling Search Relevance: Augmenting App Store Ranking with LLM-Generated Judgments. That paper described an A/B test in which ranking augmented with LLM-generated relevance labels delivered 0.24% more downloads.
At App Store scale that lift translates into dozens of millions of incremental installs per year. Apple fine-tuned a 3-billion-parameter LLM on existing human relevance judgments to generate the new labels. The implication is that semantic understanding is now a first-class part of how Apple ranks apps. Keyword matching alone is no longer enough.
For day-to-day ranking work, that frame translates into a clear hierarchy.
- App Title carries the strongest single weight, and a target keyword in the Title outranks the same keyword anywhere else on the listing.
- Subtitle and the hidden Keyword Field come next.
- Long Description, Promotional Text, In-App Purchase names, and the OCR-indexed text on your screenshots all contribute lower-weight signal.
Apple began OCR-indexing screenshot captions in June 2025. In-App Events index their event name and short description but not their long description. Custom product pages have ranked organically for their assigned keywords since July 2025. They are now a genuine ranking surface, not only a tool for conversion-rate testing.
On the behavioral side, the strongest input is install velocity relative to your category baseline. Conversion rate from impressions to installs comes next. App reviews matter, but review velocity often beats absolute rating: an app with 4.2 stars and 100 fresh reviews per week typically outranks an app with 4.5 stars and 5 reviews per week.
Retention curves and crash-free session rates feed in too. Apps that hit Day 1 retention above 35% and Day 7 retention above 15% sit in the healthy band that Apple’s algorithm rewards.
The next section covers the Google Play algorithm, which weighs many of the same inputs but in different proportions and with different surfaces.
How does the Google Play algorithm rank apps?
The Google Play algorithm reads more inputs than the App Store algorithm does. It indexes the Title, the Short Description, the Long Description, your category and tags, your promo content cards, and the data you submit to the data safety section. It also reads heavily from Android Vitals – crash rate, ANR rate, battery efficiency, startup time – and from how users behave after they install.
Google’s recent rebalance is the most-discussed story in the ASO industry. Eric Seufert at Mobile Dev Memo described it as a move “from ranking by install volume to ranking by retention and engagement.” Apps that perform well on Android Vitals see measurable ranking improvements even when their metadata stays unchanged. Apps with rough Vitals lose visibility regardless of how well their listing is written.
The on-metadata hierarchy looks different from Apple’s.
- App Title carries the strongest single weight. The Title has been capped at 30 characters since 2021, down from 50 – a change that older guides still get wrong.
- Short Description (80 characters) sits just below it and punches above its visual size.
Industry analysis of Google Play ranking changes found that 84.2% of successful improvements correlated with adding the target keyword to the Short Description, even when the Title was unchanged. Long Description follows, with the first lines carrying more weight than the rest. Categories and the up-to-five custom tags help the algorithm assign your app to the right neighborhoods, even though they do not directly carry keyword weight.
Two surfaces deserve their own mention.
Google Play Console’s native A/B testing tool has evolved from a pure conversion-rate-optimization tool into a ranking lever. The winning variant feeds back into install velocity and conversion data, both of which feed into ranking.
The data safety section, mandatory since 2022, affects discoverability beyond pure trust signaling. Apps with incomplete or rejected data safety entries see ranking suppression in some categories.
On the behavioral side, the same hierarchy as the App Store applies, but with one practical difference – Google Play weights long-term engagement more heavily. Apps that hold Day 7 and Day 30 retention above category baseline often outrank apps with stronger initial install velocity but weaker retention curves.
The next two sections take the on-metadata fields one store at a time, with the specific levers that move each.
Which on-metadata fields move App Store ranking?
App Store on-metadata is the layer Apple gives you the most control over. Each field Apple indexes for search carries a different weight. Below is what each field does, what the rules are, and the lever that moves it.
App Name is the strongest keyword surface
App Name has 30 characters and carries the highest single keyword weight in the App Store. A target keyword in the App Name typically ranks higher than the same keyword in any other field.
The field is one of the few you cannot change without resubmitting your app for review, which makes it slow to test. Treat it as your most expensive piece of metadata real estate. Most apps use 5 to 12 characters for their brand name, then spend the rest on a single high-value keyword phrase. “Calm: Meditation and Sleep” follows that pattern.
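That character budget is worth checking before you commit a Title to app review. A small Python sketch of the check – the helper name and the brand-plus-keyword pattern are illustrative, not an Apple API:

```python
def app_name_budget(brand: str, keyword_phrase: str, limit: int = 30) -> dict:
    """Check whether a brand plus one keyword phrase fits the App Name limit."""
    candidate = f"{brand}: {keyword_phrase}"  # the "Calm: Meditation and Sleep" pattern
    return {
        "candidate": candidate,
        "length": len(candidate),
        "fits": len(candidate) <= limit,
        "chars_left": limit - len(candidate),
    }

budget = app_name_budget("Calm", "Meditation and Sleep")
# "Calm: Meditation and Sleep" is 26 characters, leaving 4 spare
```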
App Subtitle is the second-strongest field
App Subtitle has another 30 characters and is indexed for keywords just like the App Name. It also appears below your app name in search results, which means it does double duty as a ranking field and a click-through field.
A common mistake is to repeat keywords from the App Name in the Subtitle. Apple does not credit duplicate keywords across fields. Use the Subtitle for keyword variations that complement the Name rather than echo it.
Keyword Field contains 100 hidden characters
The Keyword Field is unique to the App Store. You get 100 characters, hidden from users, comma-separated, no spaces. The field weighs below App Name and Subtitle, but is still strong enough that most apps can find ranking gains by curating it.
Best practice is to fill it with keyword variations that do not appear in your visible metadata, separated by single commas, with no spaces and no filler words (“a”, “the”, “and”). Apple’s algorithm builds keyword combinations across all the fields, so individual words combine with each other and with words from your visible metadata.
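Those curation rules lend themselves to a quick script. A minimal sketch, assuming a candidate keyword list and the visible App Name plus Subtitle as inputs (function name and filler list are illustrative):

```python
import re

FILLER = {"a", "an", "the", "and"}  # words to leave out of the hidden field

def build_keyword_field(candidates, visible_metadata, limit=100):
    """Pack keyword candidates into the hidden 100-char field: lowercase,
    comma-separated, no spaces, skipping filler words and words that
    already appear in the visible App Name and Subtitle."""
    seen = set(re.findall(r"[a-z0-9']+", visible_metadata.lower()))
    field = ""
    for word in candidates:
        w = word.strip().lower()
        if not w or w in FILLER or w in seen:
            continue
        trial = w if not field else f"{field},{w}"
        if len(trial) > limit:
            break
        field = trial
        seen.add(w)
    return field

packed = build_keyword_field(
    ["relax", "breathe", "the", "meditation", "focus"],
    visible_metadata="Calm: Meditation and Sleep",
)
# keeps "relax,breathe,focus" – "the" is filler, "meditation" is already visible
```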
In-App Purchase names are a small but indexed surface
Each In-App Purchase has a 64-character display name, and Apple indexes those names for search. An app with 10 IAPs effectively gets 640 extra characters of indexable text.
The weight is low, but the field is worth using when your IAP names align with secondary keywords. Subscription plans, content packs, and feature unlocks are the typical candidates.
In-app events help with both ranking and discovery
Apple introduced in-app events with iOS 15 in 2021, and they are now first-class ranking surfaces. The event name (30 characters) and short description (50 characters) are indexed for keywords. The long description (120 characters) is not.
Each event also generates a discovery card that surfaces in search results and the Today tab. Apps that publish 2 to 4 events per month earn meaningfully more organic impressions than apps that do not use the surface at all.
Custom product pages are a ranking surface, not just a CRO tool
Custom product pages were introduced in 2021 as a paid-UA tool. Ad managers could send different audiences to different versions of the listing. Since July 2025, custom product pages also rank organically for the keywords you assign to each version. Apple doubled the per-app limit to 70 in October 2025.
This means a single app can effectively rank for many more keyword combinations than its primary listing can hold. Custom product pages are now both a creative testing tool and a ranking lever.
Apple’s Spotlight Search helps keep installed apps relevant
Spotlight Search lets users find installed apps and content from inside their device. Enabling Spotlight indexing for your app’s content makes it easier for users to re-engage with your app from outside the App Store. Higher engagement feeds back into ranking through the behavioral signals Apple weighs.
Spotlight is a low-priority lever compared to the fields above, but worth enabling.
The next section covers the same depth for Google Play.
Which on-metadata fields move Google Play ranking?
Google Play indexes more text than Apple does, but weights it differently. The fields below are listed in roughly descending ranking weight.
App Title has been capped at 30 characters since 2021
App Title is 30 characters and carries the strongest single keyword weight on Google Play. The change from 50 to 30 characters happened in 2021, which makes older guides immediately recognizable as out of date.
Treat the Title as your most expensive metadata and reserve it for one high-volume target plus your brand.
Short Description’s 80 characters punch above their visual size
Short Description has 80 characters and sits above the fold on the Play Store listing. Google indexes it heavily and treats it as a near-equal partner to the Title for ranking purposes.
For many apps, the Short Description is where the next ranking gain hides. A keyword that does not fit naturally in the 30-character Title often fits in the 80-character Short Description, and Google’s algorithm rewards that placement strongly.
Long Description is front-loaded with 4,000 characters
Long Description has 4,000 characters and is fully indexed. The first 250 to 300 characters carry the most weight, since Google deprioritizes keywords that appear deep in the description.
Current best practice is 2 to 3 natural mentions of your primary keyword, not the older “repeat 3 to 5 times” advice. Google now penalizes keyword stuffing.
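A lint pass over a draft listing can catch the limits and placement rules above before you submit. The sketch below encodes the targets described in this article; the function and field names are illustrative, not a Google Play API:

```python
import re

LIMITS = {"title": 30, "short_description": 80, "long_description": 4000}

def lint_play_listing(title, short_desc, long_desc, keyword):
    """Flag length overruns, a missing keyword in the Short Description,
    and keyword counts outside the 2-3 mention, front-loaded target."""
    issues = []
    fields = {"title": title, "short_description": short_desc,
              "long_description": long_desc}
    for name, text in fields.items():
        if len(text) > LIMITS[name]:
            issues.append(f"{name} exceeds {LIMITS[name]} characters")
    kw = keyword.lower()
    if kw not in short_desc.lower():
        issues.append("target keyword missing from short description")
    mentions = len(re.findall(re.escape(kw), long_desc.lower()))
    if mentions < 2:
        issues.append("keyword appears fewer than 2 times in long description")
    elif mentions > 3:
        issues.append("possible keyword stuffing (more than 3 mentions)")
    if kw not in long_desc[:300].lower():
        issues.append("keyword missing from the first 300 characters")
    return issues

report = lint_play_listing(
    title="Sleepwell: Sleep Tracker",
    short_desc="Track sleep and wake rested with the sleep tracker built for light sleepers.",
    long_desc=("Sleepwell is a sleep tracker that records your nights automatically. "
               "Use the sleep tracker report each morning to spot patterns."),
    keyword="sleep tracker",
)
# an empty report means the draft passes every check
```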
Categories, custom tags, and promo content
Google Play uses your primary category, secondary category, and up to five custom tags to assign your app to the right neighborhoods. None of these directly carry keyword weight, but they affect which collections, top charts, and editorial placements your app is eligible for.
Google’s promo content surface generates timed cards on your listing. The text on those cards is indexed and contributes to ranking.
Store listing experiments are a ranking lever, not just CRO
Store listing experiments are Google Play Console’s native A/B testing tool. You run a variant test on your icon, screenshots, or short description for 30 to 60 days, and the winning variant gets promoted to your default listing.
The winning variant typically lifts conversion rate, which feeds back into ranking through the behavioral signals Google weighs. Store listing experiments are now both a CRO tool and a ranking lever.
The data safety section matters – completeness affects discoverability
The data safety section has been mandatory since 2022. Apps with incomplete or rejected data safety entries see ranking suppression in some categories, and their listings carry visible warnings that depress conversion.
Treat data safety completion the same way you treat your privacy policy – mandatory, audited, and worth getting right the first time.
The next section covers the off-metadata signals both stores share.
What off-metadata signals drive ranking in both stores?
Off-metadata signals are the post-install layer of ASO. They are shared between Apple and Google Play in concept, even when the underlying surfaces differ.
Download velocity beats total downloads
Both algorithms care more about installs per day than installs to date. An app gaining 500 fresh installs a day will rank above an older app with more total downloads but slower momentum.
Velocity is recalculated frequently, which makes it a faster lever than most metadata changes.
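Momentum, not totals, is what both stores measure. A back-of-the-envelope way to track it – the seven-day window and the category baseline here are illustrative assumptions, not published store parameters:

```python
def velocity_ratio(daily_installs, category_baseline):
    """Average installs/day over the trailing 7 days, relative to the
    category's typical installs/day. Above 1.0 means positive momentum."""
    window = daily_installs[-7:]
    return (sum(window) / len(window)) / category_baseline

# An app averaging 500 installs/day in a ~350/day category runs at ~1.43x baseline
ratio = velocity_ratio([480, 510, 495, 520, 505, 490, 500], category_baseline=350)
```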
Conversion rate is now a ranking signal
Conversion rate is the share of users who tap on your search-result entry and then install. Both stores treat it as a ranking input on top of being a CRO metric.
The strongest levers are creative assets and on-store testing. On Apple, custom product pages are the testing surface. On Google Play, store listing experiments serve the same role. Apps that test creative at a regular cadence typically run several percentage points ahead of baseline conversion, which compounds into higher ranking over time.
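The funnel arithmetic is simple but worth making explicit, since the stores observe each step separately. A minimal sketch with made-up numbers:

```python
def search_funnel(impressions, taps, installs):
    """Break search conversion into the two steps the stores observe:
    impression -> tap (result appeal) and tap -> install (page appeal)."""
    return {
        "tap_through_rate": taps / impressions,
        "install_rate": installs / taps,
        "conversion_rate": installs / impressions,
    }

rates = search_funnel(impressions=20_000, taps=1_600, installs=480)
# 8% tap-through, 30% tap-to-install, 2.4% overall search conversion
```

Splitting the rate this way shows which asset to test: a weak tap-through rate points at the icon and title shown in results, while a weak install rate points at the product page itself.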
Review velocity often beats absolute rating
Both stores read your average rating, review count, recency, and sentiment. Velocity often beats the absolute number.
An app with a 4.2 rating and 100 fresh reviews per week typically outranks an app with 4.5 stars and 5 reviews per week. Asking for reviews at the right moments matters more than chasing a perfect rating.
Retention benchmarks that matter: Day 1 above 35%, Day 7 above 15%
Retention curves are now part of how both algorithms decide which apps to promote. Apple weighs Day 1 and Day 7 retention. Google Play weighs Day 1, 7, and 30, with stronger weight on the longer end.
App quality signals feed in alongside retention. Crash-free session rate, ANR rate on Android, and startup time all matter. A buggy app with strong metadata loses to a stable app with average metadata.
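Those benchmarks reduce to a simple cohort check. A sketch, assuming you already count active users per cohort day (the names and thresholds here just restate the bands above):

```python
BENCHMARKS = {"d1": 0.35, "d7": 0.15}  # healthy bands cited in this article

def retention_health(cohort_size, active_day1, active_day7):
    """Compare a cohort's Day 1 / Day 7 retention against healthy bands."""
    d1 = active_day1 / cohort_size
    d7 = active_day7 / cohort_size
    return {"d1": d1, "d7": d7,
            "healthy": d1 >= BENCHMARKS["d1"] and d7 >= BENCHMARKS["d7"]}

cohort = retention_health(cohort_size=1_000, active_day1=380, active_day7=160)
# 38% Day 1 and 16% Day 7 both clear the healthy bands
```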
Update cadence should be roughly monthly
Both stores reward apps that release meaningful updates regularly. The algorithm reads update cadence as a quality signal, since an actively-maintained app is more likely to keep retaining users.
The right cadence is roughly every 4 to 6 weeks for the App Store and every 2 to 4 weeks for Google Play. Bumped version numbers without substantive changes do not earn the signal.
The next section answers the questions readers ask most about ASO ranking factors.
What else should you know about ASO ranking factors?
Here are a few more things that app owners typically ask us.
How does Apple's App Store search algorithm work?
Apple’s algorithm weighs two named relevance signals. Behavioral relevance covers what users do with search results, including taps, downloads, and post-install retention. Textual relevance measures how well your metadata semantically matches the user’s query. Apple confirmed both signals in its March 2026 research paper Scaling Search Relevance, which described an A/B test using LLM-generated relevance labels to augment ranking.
Are Google Play's ranking factors the same as Apple's?
The signals overlap but the weights and surfaces differ. Both stores rank apps using on-metadata fields plus off-metadata behavioral signals. Google Play indexes more fields, including the full 4,000-character long description, and weighs Android Vitals (crash rate, ANR rate, startup time). Apple weighs the unique 100-character Keyword Field and the OCR-indexed text on screenshots, which Google Play does not index.
How often do App Store and Google Play rankings change?
Rankings update continuously. Both stores recalculate as new install, conversion, retention, and review data comes in. Top-100 charts visibly refresh several times per day. Only 40% of top-100 App Store apps stay in the top 100 for a full year, and fewer than 25% do on Google Play. Apps that optimize for the right factors hold position longer and recover from drops faster.
Do paid ads affect organic ranking?
Paid ads do not directly boost organic ranking. Apple and Google publicly state that paid placements are separate from organic ranking algorithms. Indirectly, the install velocity that paid campaigns generate feeds into the same behavioral signals organic ranking weighs. Apple Ads and Google App Campaigns can therefore lift organic ranking when they drive enough velocity to push install-per-day metrics above category baseline.
What changed in ASO ranking in 2026?
Three changes matter most. First, retention and conversion rate are now first-class ranking signals in both stores, alongside the older keyword and download metrics. Second, Apple began OCR-indexing screenshot caption text in June 2025, and custom product pages began ranking organically in July 2025. Third, Apple’s March 2026 paid ad expansion compressed the organic visibility window, making every individual ranking factor more decisive.
Where can I find Apple's official documentation on ranking factors?
Apple does not publish full ranking weights, but the Search Ads documentation, the App Store Connect Help section on Search Results, and Apple’s annual WWDC sessions on App Store search cover the inputs Apple weighs. Apple’s March 2026 research paper Scaling Search Relevance: Augmenting App Store Ranking with LLM-Generated Judgments is the most explicit public Apple description of how the algorithm decides.
How long does it take to see ranking changes after updating metadata?
Most metadata changes show ranking impact within 4 to 8 weeks. Apple typically reindexes new metadata within hours of an app release. Google Play indexes faster but takes longer to stabilize ranking shifts. Conversion-rate changes from creative tests show up faster, usually within 2 to 4 weeks. App Title and App Name changes carry more risk and take longer to recover from if they go wrong.
So what does all this mean for your app?
ASO ranking is not a solve-once problem. The 16 factors above move continuously, both algorithms recalculate as new behavioral data comes in, and the post-install signals that now drive most ranking outcomes need ongoing creative testing, retention work, and a steady update cadence.
Smaller and mid-sized apps can follow the playbook above and build strong organic ranking with discipline alone. For apps running paid UA at scale, working across multiple markets, or competing against well-resourced category leaders, ranking work needs a system – one with coordinated organic and paid campaigns, structured creative testing, and competitor monitoring that does not stop.
