APR methodology: measuring brand evaluation inside AI answers

A practical explanation of AI Perception Ranking (APR) — what it measures, how the four components are calculated, and how teams can improve each dimension.

Core concept

APR measures how AI answers evaluate your brand, not classic SEO rank

In classic search, teams mostly measured rankings on a results page. In AI search, users read generated answers and make decisions from the brands, comparisons, caveats, and citations inside that answer. APR is built for that new surface.

Kinsho AI connects APR to daily prompt monitoring, competitor benchmarking, source analysis, and action planning — making it an operational metric rather than a passive score.

Four measurement dimensions

The four dimensions APR measures

Kinsho reads AI answers along four dimensions: answer presence, context and position, supporting sources, and cross-model consistency.

Answer presence

Check whether brand names, aliases, and product names appear inside AI answers by model. Japanese triple-script variants are unified automatically.

Context and position

Track whether the brand is recommended, compared, cautioned against, or omitted — and at what position in the answer.

Supporting sources

Review whether answers are grounded in owned pages, editorial sources, reviews, or third-party references. Source authority directly affects APR.

Cross-model consistency

Verify that ChatGPT, Gemini, and Perplexity evaluate your brand consistently. High variance between models signals unresolved risk.
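One way to quantify "high variance between models" is to collapse per-model scores into a single consistency value via their standard deviation. The linear penalty below is an illustrative choice, not Kinsho's published formula.

```python
from statistics import pstdev

def consistency_score(model_scores: dict[str, float]) -> float:
    """Collapse per-model brand scores (0-100) into a 0-100 consistency
    value: 100 means identical across models, lower means more variance.
    The penalty of 2 points per point of standard deviation is a
    hypothetical scaling, not Kinsho's actual formula."""
    spread = pstdev(model_scores.values())
    return max(0.0, 100.0 - 2.0 * spread)

scores = {"ChatGPT": 80.0, "Gemini": 78.0, "Perplexity": 82.0}
```

Identical scores across models yield 100; the spread in `scores` above pulls the value a few points below that.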

Formula

How the APR score is calculated

The four components are combined as a weighted average, scaled to 0–100. Weights are adjustable by industry and business objective.

APR = (M × 0.30) + (R × 0.30) + (Q × 0.25) + (C × 0.15)

M (Mention Rate, weight 30%): percentage of representative prompts where your brand appears in the AI answer.
R (Recommendation Rank, weight 30%): weighted score for how early your brand appears when mentioned.
Q (Citation Quality, weight 25%): authority of the sources the AI cites; official sites and tier-1 press score highest.
C (Cross-Model Consistency, weight 15%): uniformity of perception across ChatGPT, Gemini, and Perplexity.
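The formula translates directly into code. One assumption here: each component value arrives pre-scaled to 0–100, since the document only states that the final score is on a 0–100 scale.

```python
# Default weights from the APR formula: M 0.30, R 0.30, Q 0.25, C 0.15.
DEFAULT_WEIGHTS = {"M": 0.30, "R": 0.30, "Q": 0.25, "C": 0.15}

def apr_score(components: dict[str, float],
              weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Weighted average of the four APR components.
    Components are assumed pre-scaled to 0-100, so a weight set that
    sums to 1.0 keeps the result on the 0-100 scale."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1.0")
    return sum(components[k] * w for k, w in weights.items())
```

With the defaults, `apr_score({"M": 70, "R": 60, "Q": 80, "C": 90})` gives 72.5; passing a custom `weights` dict models industry-specific profiles.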

Weight customization

The weights above are research-based defaults. For consumer goods, raise the M weight; for B2B SaaS, raise the Q weight. Custom weight profiles are available on the Growth plan and above.

How to improve

Improvement approach by component

Identify the weakest of the four components in the Kinsho AI dashboard and start there for the fastest APR gains.

Raise Mention Rate

Earn placement in authoritative third-party media: industry publications, press releases, and specialized editorial sources. Coverage volume and diversity are the main levers.

Raise Recommendation Rank

Create content that answers comparison-query prompts clearly: "X vs Y", "why choose X", "best tool for [use case]". Direct positioning in answer-friendly formats helps most.

Raise Citation Quality

Build out official structured data, authoritative Wikipedia entries, industry-body profiles, and high-credibility press coverage. Flag and address any low-authority citations (forums, outdated pages).
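One concrete form of "official structured data" is schema.org Organization markup published as JSON-LD. The sketch below builds such a record; every field value is a placeholder, and treating this markup as a Citation Quality lever is an inference from the text, not a documented guarantee.

```python
import json

# Hypothetical schema.org Organization record; every value is a placeholder.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "alternateName": ["エグザンプル・ブランド"],  # Japanese alias, illustrative
    "url": "https://example.com",
    "sameAs": [
        # Authoritative third-party profiles for the same entity
        "https://en.wikipedia.org/wiki/Example",
    ],
}

# Serialized for embedding in a <script type="application/ld+json"> tag.
jsonld = json.dumps(org, ensure_ascii=False, indent=2)
```

The `sameAs` links are what tie scattered profiles back to one entity, which is the same unification goal the Cross-Model Consistency section describes.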

Raise Cross-Model Consistency

Provide consistent brand facts across sources that each AI model prioritizes. Publishing the same entity information in both English and Japanese helps unify perception across models.

Contact us

Take control of how AI represents your brand.

We reply within 4 business hours, Monday to Friday, 9:00–18:00 UTC.

Or book a 15-minute introductory call: Book a call