Autosorting 'Quality' Indicator trained on 11k+ works. Very generous with small fics; rewards engagement over popularity (bookmarks and collections relative to kudos, instead of hits) with a 0-100 score spread. Sort & position toggles included.
Improves on the classic (kudos,hits) metric. This score combines metric pairs correlated with some of my favorite fics (check the graph at the very bottom for the correlation): (bookmarks,kudos), (collections,kudos).
Autosort: Navbar toggle ⇊|⇅ (sorted|default).
Indicator position: Navbar toggle ⇱|⇲ (top|bottom). ⇱ dodges the year, is easier to parse visually, and stays visible on works collapsed by KH or KHX.
Dimming: Applied to low-confidence scores; dimmed fics are sorted at the end.
Toggles take effect without reloading. Colors switch automatically for dark mode.
Notes on editing the code:
By default, all thresholds are tuned as low as possible to stay generous with small fics while still filtering out noise from the data.
Raising them is fine but lowering is not recommended as it may un-dim inaccurate scores.
You can observe the intermediate scores by setting const DEBUG = true;.
If editing the code, you may want to disable auto-updates to preserve your changes.
Scoring:
- The max of all metrics is taken to surface the best works across all metrics. This biases the results towards more high scores.
- Alpha blending is used to take the score closer to the average of ALL metrics, softening scores that are missing some metrics.
- If at least one metric is in the range [min,∞[, the score is softened with an alpha of 0.99 (this removes 1 point from a score of 100 unless its other metrics are also high, keeping 100s rare).
- Otherwise, uncertain metrics in the range [bmin,min] use a stronger factor of 0.75, producing even fewer 100s.
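A minimal sketch of the max + alpha blend described above (illustrative names and structure, not the script's actual code):

    // Hypothetical helper: metricScores are the per-metric 0-100 scores.
    function blendScore(metricScores, anyMetricAboveMin) {
      const max = Math.max(...metricScores);
      const avg = metricScores.reduce((a, b) => a + b, 0) / metricScores.length;
      // Confident works barely move toward the average; dimmed works move more.
      const alpha = anyMetricAboveMin ? 0.99 : 0.75;
      return alpha * max + (1 - alpha) * avg;
    }
    // Example with scores [100, 60] (avg 80):
    // confident: 0.99*100 + 0.01*80 = 99.8  -> a lone 100 loses a point
    // dimmed:    0.75*100 + 0.25*80 = 95    -> softened further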
Recommendations:
- If you find that scoring fics with as few as 3 kudos is too generous and want to require ≥15 kudos, set THRESHOLDS { 'kudos': { dim_below: 15 }}.
Raising 'kudos': { min: 15 } would have a similar effect, since the range [bmin,min] is dimmed; but it also applies the stronger alpha to soften those scores.
Also, bmin can be raised to make the dimmed scores more exclusive, as long as you keep bmin ≤ min.
- If you find the (collections,kudos) metric inaccurate and want to disable it for a pure (bookmarks,kudos) score, set THRESHOLDS { 'collections': { bmin: Infinity, min: Infinity }} (both edits are sketched below).
I do not recommend adjusting other values.
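As an illustration, the two edits recommended above might look like this inside the script's THRESHOLDS object (other metrics and fields omitted; field names follow the examples in this description, so check the actual object in the script before editing):

    const THRESHOLDS = {
      // Require >=15 kudos before a work gets a non-dimmed score:
      'kudos':       { dim_below: 15 },
      // Disable (collections,kudos) entirely for a pure (bookmarks,kudos) score:
      'collections': { bmin: Infinity, min: Infinity },
    };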
v2.34: - Disabled the (comments,kudos) metric; it had the worst correlation (the interaction depends on the type of work, the word count, and whether the author answers comments). Oneshots also followed completely different interaction curves that I never modeled separately and just filtered out. The model is now (bookmarks,kudos) + (collections,kudos) above 4 collections (the linear part of the model, determined empirically).
v2.26: - Improved scoring for extreme values (distributions are centered at 0 in case someone lowers bmin; extreme highs are denoised and future-proofed using parallel lines past the cutoff).
v2.23: - Added separate ALPHA for dimmed works, and updated defaults.
v2.22:
- Enhanced DEBUG indicators.
- I realised that (comments,kudos) has two different distributions if chapters==1 or chapters>=2, probably because people interact with oneshots differently. Adding a min threshold to disable (comments,kudos) if chapters<2 fixed its distribution. I have not tried to model this metric separately for chapters==1.
- I rewrote the algorithm to use stronger minimums, with a dimmed fallback to the old ones.
v2.21:
- Added a tunable z_scale to produce fewer high scores from a given metric, should you prefer one metric over another.
- Option to dim/fail works if fewer than N metrics pass; this should make alpha<1 more interesting for those who use it. Requiring multiple metrics gives more values to mix, making the Average more meaningful (the Average did nothing when only 1 metric passed; now it rewards works whose N metrics are all high).
v2.20:
- Looking closely at the graphs, kudos should have been trained with min=8; fixing this cleaned up the distributions.
- The default is back to Max; the rounding off only affected large works. Small works get more 100s because only 1 of the 3 metrics is activated.
Tested: tCDF produces fewer 100s and spreads the middle by 1 point on both sides, but is 6x more expensive; tCDF_norm does the opposite; sticking with CDF.
v2.19:
- Updated the formula and fixed the defaults.
Now using a Normal regression per-metric, and α*Max+(1-α)*Average for the final score. 
Below: Max, Average, and Final blended score, with α=0.95 and α=0.68.

The graphs demonstrate how lowering α mixes in more of the Average and shifts the scores left (including the red dots -- my favorite fics). Max is therefore the better metric, but I set the default to α=0.95 to round it down a bit and produce fewer 100 scores.
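For intuition, a quick worked example with hypothetical per-metric scores shows how lowering α pulls the final score toward the Average:

    const scores = [100, 60, 40];               // one strong metric, two weak ones
    const max = Math.max(...scores);            // 100
    const avg = scores.reduce((a, b) => a + b, 0) / scores.length; // ~66.7
    console.log(0.95 * max + 0.05 * avg);       // ~98.3 -> stays close to Max
    console.log(0.68 * max + 0.32 * avg);       // ~89.3 -> shifted noticeably left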
v2.13:
- Option in the code to take the AVERAGE of all metrics instead of their MAX (the scale becomes a blending weight instead of a cap in that case). Gives a nicer spread while still blending.
v2.12:
- The (comments/kudos) metric is back among (bookmarks/kudos) and (collections/kudos).
- Uniformization is disabled by default at the top of the code: it was illogical given the use of max() to surface the best metric. This means you will see more high scores.
- Can disable a metric, scale it down, or raise its min contribution floor in the code. 

v2.3: Switched to GAM score {Bk,Col/Ku} instead of Polynomial score {Bk,Col,Com/Ku}. (can install v2.7 to compare both with the Q/P toggle)
v2.0:
• 2nd-degree polynomial quantile regressions (P10, P50, and P90) of the 3 selected metric pairs:
For each pair, an individual score is computed by (sketched in code at the end of this entry):
◦ Deriving a skewed normal distribution from the polynomial contours.
◦ Computing a z-score from the work's deviation from the P50 center-line.
◦ Converting it into a 0-100 inverted percentile rank via the normal CDF (NCDF).
• Final score == Max of all 3 reliable scores, gated by minimum stat counts.
The score is mapped to a perfect 0-100 percentile rank via ECDF normalization for uniform scoring.
• Log() engagement metrics (bookmarks/collections/comments/kudos) were selected due to their high correlation with my top favorite fics. By comparison, the classic (x/hits) metrics feel completely random:
Update: I also tested (kudos,hits/chapters) and it is strongly bimodal (1 chapter, 2+ chapters) and just as random as (kudos,hits). In fact I suspect that all metric pairs are bimodal and that 1-chapter works should ideally be modeled separately.
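For reference, here is a rough sketch of the per-pair scoring step described under v2.0. Every function name, coefficient, and the score orientation below is a placeholder meant to illustrate the idea, not the script's actual model:

    // Per-pair score from P10/P50/P90 polynomial quantile fits (illustrative only).
    function pairScore(logKudos, logBookmarks, fit) {
      // fit.p10/p50/p90 are 2nd-degree polynomial coefficients giving the
      // expected log(bookmarks) contour at a given log(kudos).
      const p10 = polyEval(fit.p10, logKudos);
      const p50 = polyEval(fit.p50, logKudos);
      const p90 = polyEval(fit.p90, logKudos);
      // Skewed-normal approximation: a different sigma above vs. below the median.
      const sigma = logBookmarks >= p50 ? (p90 - p50) / 1.2816   // z at P90 ≈ 1.2816
                                        : (p50 - p10) / 1.2816;  // z at P10 ≈ -1.2816
      const z = (logBookmarks - p50) / sigma;
      // Normal CDF -> 0-100 score; here higher means further above the median contour.
      return 100 * normalCDF(z);
    }

    function polyEval(c, x) { return c[0] + c[1] * x + c[2] * x * x; } // c = [c0, c1, c2]

    function normalCDF(z) { return 0.5 * (1 + erf(z / Math.SQRT2)); }

    function erf(x) { // Abramowitz & Stegun 7.1.26 approximation (error ~1.5e-7)
      const sign = Math.sign(x), t = 1 / (1 + 0.3275911 * Math.abs(x));
      const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
                    - 0.284496736) * t + 0.254829592) * t;
      return sign * (1 - poly * Math.exp(-x * x));
    }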