% FARE: Provably Fair Representation Learning with Practical Certificates
% Jill-Jênn Vie
% MILLE CILS 2023

---
aspectratio: 169
institute: \includegraphics[height=1cm]{figures/inria.png} \includegraphics[height=1cm]{figures/soda.png}
header-includes:
---
Learning fair representations such that \alert{any} classifier using these representations cannot discriminate, even if it tries to.
We need \alert{practical} certificates
(bounds that are explicitly computable on real datasets)
Demographic parity distance: $\Delta(g) = \left| \E_{p(z \mid s = 0)} g(z) - \E_{p(z \mid s = 1)} g(z) \right|$
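As a quick illustration of this quantity, here is a minimal sketch of the empirical DP distance for a classifier $g$ acting on representations $z$, split by sensitive attribute $s \in \{0, 1\}$ (function names and the toy data are mine, not from the paper):

```python
# Hypothetical sketch: empirical demographic parity distance Delta(g),
# estimated from samples (z_i, s_i) with sensitive attribute s in {0, 1}.

def dp_distance(g, z, s):
    """|E[g(z) | s=0] - E[g(z) | s=1]| estimated from samples."""
    out0 = [g(zi) for zi, si in zip(z, s) if si == 0]
    out1 = [g(zi) for zi, si in zip(z, s) if si == 1]
    return abs(sum(out0) / len(out0) - sum(out1) / len(out1))

# toy example: g thresholds a scalar representation
g = lambda zi: 1 if zi > 0.5 else 0
z = [0.1, 0.9, 0.7, 0.2, 0.8, 0.3]
s = [0,   0,   0,   1,   1,   1]
print(dp_distance(g, z, s))  # → 0.3333333333333333
```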
\pause
\begin{definition} Given a small $\varepsilon > 0$, a finite dataset $D$ of size $n$, and an encoder $f : x \mapsto z$ producing representations, a \alert{practical DP distance certificate} is a value $T^*(n, D) \in \R$ such that \(\sup_{g \in \mathcal{G}} \Delta(g) \leq T^*(n, D)\) holds with probability at least $1 - \varepsilon$. \end{definition}
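FARE's actual certificate construction is more refined than what fits on a slide, but under the finite-support assumption below, $\sup_g \Delta(g)$ for binary classifiers equals the total variation distance between the two group-conditional distributions over the $k$ representation values, which suggests a simple empirical bound. A rough sketch (the Hoeffding-plus-union-bound slack is my own illustration, \alert{not} the paper's bound):

```python
import math
from collections import Counter

def dp_certificate(z, s, k, eps=0.05):
    """Illustrative high-probability upper bound T* on sup_g Delta(g)
    when representations take one of k values 0..k-1.
    NOT FARE's exact certificate: empirical TV distance between the
    group-conditional distributions, plus a Hoeffding deviation term
    union-bounded over the 2k (cell, group) pairs."""
    z0 = [zi for zi, si in zip(z, s) if si == 0]
    z1 = [zi for zi, si in zip(z, s) if si == 1]
    c0, c1 = Counter(z0), Counter(z1)
    n0, n1 = len(z0), len(z1)
    # empirical total variation distance between p(z|s=0) and p(z|s=1)
    tv_hat = 0.5 * sum(abs(c0[j] / n0 - c1[j] / n1) for j in range(k))
    # per-cell deviation bound holding jointly with probability 1 - eps
    slack = math.sqrt(math.log(4 * k / eps) / (2 * n0)) \
          + math.sqrt(math.log(4 * k / eps) / (2 * n1))
    return tv_hat + 0.5 * k * slack
```

The point is only that every quantity here is computable from the finite dataset, which is what makes the certificate \alert{practical}.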
Representations should have finite support
i.e. $f : \R^d \to \{z_1, \ldots, z_k\}$ maps to one of $k$ possible values (whaaat?)
\pause
But this actually includes decision trees (all inputs reaching the same leaf share the same encoding)
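To make the decision-tree remark concrete, here is a hypothetical toy encoder: a depth-1 tree (a stump) mapping each input to the index of the leaf it falls in, so the representation has finite support with $k = 2$. In practice, scikit-learn's `DecisionTreeClassifier.apply` returns such leaf indices.

```python
# Hypothetical illustration: a decision stump as a finite-support
# encoder f. Each input is mapped to the id of the leaf it reaches,
# so f takes at most k = number-of-leaves distinct values.

def tree_encoder(x, threshold=0.5):
    """Encode a scalar input by its leaf index in a depth-1 tree."""
    return 0 if x < threshold else 1

reps = [tree_encoder(x) for x in [0.1, 0.6, 0.4, 0.9]]
print(reps)  # → [0, 1, 0, 1]: every input collapses to one of 2 codes
```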
\centering
Nikola Jovanović, Mislav Balunovic, Dimitar Iliev Dimitrov, Martin Vechev. \alert{FARE: Provably Fair Representation Learning with Practical Certificates.} Proceedings of the 40th International Conference on Machine Learning, PMLR 202:15401-15420, 2023. \url{https://arxiv.org/abs/2210.07213}
Thanks! jill-jenn.vie@inria.fr