Original #theDress chroma-axis remapping (CIELUV) – dual orientation hypotheses
This tool collapses your image’s object colours onto the single blue↔yellow CIELUV hue axis that made #theDress ambiguous. The two variants differ only in axis orientation (Hypothesis A vs. B), echoing the two rival illumination assumptions. Use the intensity slider to interpolate between the original and the fully mapped, Dress-like version.
1. Convert all pixels to CIELUV (a perceptual colour space).
2. Find your image’s dominant chroma line (PCA on u*,v*) and collapse its chroma to that line (“one hue”).
3. Align the sign of the lightness–chroma relationship with the Dress (via the correlation between L* and v*).
4. Ignore the v* amplitude (a quirk of the original algorithm) and use only centred u* to drive position along the published Dress axis.
5. Impose Dress per-channel mean & standard deviation (L*, u*, v*).
6. Generate a flipped orientation (Hypothesis B) by negating the axis to represent the rival interpretation.
7. Blend the mapped and original images in Luv according to the intensity setting, then convert back to sRGB with clipping.
Input: sRGB image (entire image treated as object), fixed Dress stats.
Constants (Dress):
DressPC = [0.245, 0.970] (normalized)
DressMean = [45.2, -8.7, 15.3]
DressStd = [12.8, 18.4, 22.1]
DressCorr(L,v) ≈ -0.58
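For concreteness, these constants could be held as NumPy arrays. A minimal sketch; the variable names are illustrative and the values are copied from the list above:

```python
import numpy as np

# Published Dress statistics, copied from the constants listed above.
# The variable names are illustrative, not taken from any published code.
DRESS_PC      = np.array([0.245, 0.970])      # unit chroma axis in the (u*, v*) plane
DRESS_MEAN    = np.array([45.2, -8.7, 15.3])  # per-channel means (L*, u*, v*)
DRESS_STD     = np.array([12.8, 18.4, 22.1])  # per-channel standard deviations (L*, u*, v*)
DRESS_CORR_LV = -0.58                         # correlation between L* and v*
```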
1. sRGB→linear→XYZ→CIELUV (D65 white point).
2. Compute PCA on (u,v): leading unit eigenvector p_user of the chroma covariance, and the means μ_u, μ_v.
3. One-hue collapse: (u,v) → μ + t·p_user, where μ = (μ_u, μ_v) and t = ((u,v) − μ)·p_user (steps 1–3 are sketched after this list).
4. Correlation alignment: r_user = corr(L, v_collapse);
if r_user · DressCorr < 0, flip the chroma: (u,v) ← −(u,v) (sketched together with steps 5–6 after this list).
5. Amplitude (quirk): a = u - mean(u).
6. Map to Dress axis: (u_map, v_map) = a * DressPC.
7. Channel-wise z-score to Dress stats:
For k in {L,u,v}: out_k = DressStd_k * ( (chan_k - mean(chan_k))/std(chan_k) ) + DressMean_k
8. Hypothesis A = mapped; Hypothesis B = mapped with the axis flipped before standardization (equivalent to inverting the sign of the amplitude a); steps 7–8 are sketched after this list.
9. Intensity blend in Luv: L_mix = (1−α)·L_orig + α·L_mapped (likewise for u and v), where α is the slider intensity (written α rather than t to avoid clashing with the projection parameter in step 3).
10. Luv→XYZ→linear RGB→gamma-encoded sRGB; clip to [0,1]; the out-of-gamut (OOG) percentage is estimated before clipping (steps 9–10 are sketched after this list).
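To make the steps concrete, the sketches below give one possible NumPy/scikit-image implementation. The helper names (`collapse_to_one_hue`, `map_to_dress_axis`, `standardize_to_dress`, `blend_and_encode`) are hypothetical, and the Dress constants are the arrays defined above. For steps 1–3, `skimage.color.rgb2luv` performs the sRGB→linear→XYZ→CIELUV chain with a D65 white point:

```python
import numpy as np
from skimage import color

def collapse_to_one_hue(rgb):
    """Steps 1-3: convert to CIELUV, find the dominant chroma line by PCA on
    (u*, v*), and project every pixel's chroma onto that line."""
    luv = color.rgb2luv(rgb)                     # sRGB in [0, 1] -> (L*, u*, v*), D65
    u, v = luv[..., 1], luv[..., 2]

    uv = np.stack([u.ravel(), v.ravel()], axis=1)
    mu = uv.mean(axis=0)                         # (mu_u, mu_v)
    centred = uv - mu

    # Step 2: leading eigenvector of the 2x2 chroma covariance (PCA).
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    p_user = eigvecs[:, np.argmax(eigvals)]      # unit vector along the dominant hue line

    # Step 3: one-hue collapse, (u, v) -> mu + t * p_user with t = ((u, v) - mu) . p_user
    t = centred @ p_user
    collapsed = mu + t[:, None] * p_user[None, :]

    luv_collapsed = luv.copy()
    luv_collapsed[..., 1] = collapsed[:, 0].reshape(u.shape)
    luv_collapsed[..., 2] = collapsed[:, 1].reshape(v.shape)
    return luv_collapsed
```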
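Steps 4–6 sign-align the collapsed chroma with the Dress and re-express it along the published axis. A sketch under the same assumptions, with `np.corrcoef` supplying the Pearson correlation of step 4:

```python
def map_to_dress_axis(luv_collapsed):
    """Steps 4-6: align the lightness/chroma sign with the Dress, then drive
    position along DRESS_PC from centred u* only (the documented quirk)."""
    L = luv_collapsed[..., 0]
    u = luv_collapsed[..., 1]
    v = luv_collapsed[..., 2]

    # Step 4: flip the chroma if this image's L*-v* correlation disagrees in
    # sign with the Dress's.
    r_user = np.corrcoef(L.ravel(), v.ravel())[0, 1]
    if r_user * DRESS_CORR_LV < 0:
        u, v = -u, -v

    # Step 5 (quirk): amplitude comes from centred u* only; v* is ignored.
    a = u - u.mean()

    # Step 6: (u_map, v_map) = a * DressPC.
    return np.stack([L, a * DRESS_PC[0], a * DRESS_PC[1]], axis=-1)
```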
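Steps 7–8 impose the Dress channel statistics and generate the two orientation hypotheses. In this sketch the `eps` guard against a zero standard deviation is an addition not mentioned in the algorithm above:

```python
def standardize_to_dress(luv_mapped, flip_axis=False, eps=1e-8):
    """Step 7: channel-wise z-score to the Dress mean and std.
    Step 8: Hypothesis B flips the chroma axis before standardization,
    which amounts to inverting the sign of the amplitude a."""
    luv = luv_mapped.copy()
    if flip_axis:                                 # Hypothesis B: rival orientation
        luv[..., 1:] = -luv[..., 1:]

    out = np.empty_like(luv)
    for k in range(3):                            # k = 0, 1, 2 -> L*, u*, v*
        chan = luv[..., k]
        z = (chan - chan.mean()) / (chan.std() + eps)
        out[..., k] = DRESS_STD[k] * z + DRESS_MEAN[k]
    return out
```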
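Finally, steps 9–10 blend in Luv and convert back to sRGB. The blend weight is written `alpha` to match step 9's α, `skimage.color.luv2xyz` handles the first leg of the conversion, and the XYZ→linear-sRGB matrix is the standard D65 one (IEC 61966-2-1); the out-of-gamut estimate is taken on linear RGB before the clip:

```python
# Standard XYZ -> linear sRGB matrix (D65, IEC 61966-2-1).
XYZ_TO_RGB = np.array([[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])

def blend_and_encode(luv_orig, luv_mapped, alpha):
    """Step 9: linear blend in Luv (alpha = slider intensity in [0, 1]).
    Step 10: Luv -> XYZ -> linear RGB -> gamma sRGB, clipped to [0, 1],
    with the out-of-gamut percentage estimated before clipping."""
    luv_mix = (1.0 - alpha) * luv_orig + alpha * luv_mapped

    xyz = color.luv2xyz(luv_mix)
    rgb_lin = xyz @ XYZ_TO_RGB.T                  # linear RGB; may leave [0, 1]
    oog_pct = 100.0 * np.mean((rgb_lin < 0.0) | (rgb_lin > 1.0))

    rgb_lin = np.clip(rgb_lin, 0.0, 1.0)
    srgb = np.where(rgb_lin <= 0.0031308,         # sRGB transfer function
                    12.92 * rgb_lin,
                    1.055 * rgb_lin ** (1.0 / 2.4) - 0.055)
    return srgb, oog_pct
```

Chained together (with `rgb_in` a float sRGB image in [0, 1] and `alpha` the slider value), the helpers would reproduce the pipeline:

```python
luv_collapsed = collapse_to_one_hue(rgb_in)
luv_axis      = map_to_dress_axis(luv_collapsed)
hyp_A = standardize_to_dress(luv_axis)                  # Hypothesis A
hyp_B = standardize_to_dress(luv_axis, flip_axis=True)  # Hypothesis B
out_A, oog_A = blend_and_encode(color.rgb2luv(rgb_in), hyp_A, alpha)
```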