A Fuzzy Commitment Approach to Privacy Preserving Behavioral Targeting

Citation

Rane, S.; Uzun, E. A Fuzzy Commitment Approach to Privacy Preserving Behavioral Targeting. Workshop on Security and Privacy Aspects of Mobile Environments (SPME 2014), Maui, Hawaii, September 2014.

Abstract

This paper describes a privacy-preserving framework for delivering coupons to users who approximately satisfy a predefined behavioral profile. The framework is designed to be non-interactive, i.e., vendor-side communication occurs only when the vendor pushes coupons out to the users it regards as potential customers. User privacy is protected by performing all targeting operations on the end-user's device. The protocol is based on a fuzzy commitment primitive that is realized using error correcting codes. The central idea is that a user is able to extract the coupon if her behavioral profile approximately matches the vendor's target profile. Unless the coupon is redeemed, the vendor discovers no information about the user's behavioral profile. The error correction coding framework enforces a natural tradeoff between the privacy of the vendor and the specificity of targeting. In other words, if the vendor wants to target a broad class of potential customers, it must reveal more information about its targeting strategy to ineligible users. Conversely, if the vendor wants to reveal less information about its targeting strategy to ineligible users, then it must target a more focused class of potential customers.
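The paper specifies its own codes and protocol details; as a rough illustration of the underlying fuzzy commitment idea only, the sketch below uses a toy binary repetition code in Python. The parameters, function names, and choice of code here are illustrative assumptions, not the authors' construction: a random key is masked with the vendor's target profile via an error-correcting codeword, and a user recovers the key (and hence could unlock a coupon encrypted under it) only if her own profile differs from the target in few enough bits for the code to correct.

```python
import hashlib
import secrets

# Toy parameters (assumptions for illustration; a real deployment would use
# stronger codes such as BCH or Reed-Solomon and a proper coupon-encryption step).
REPEAT = 5       # each key bit is repeated 5 times -> corrects up to 2 flips per block
MSG_BITS = 16    # length of the random key hidden inside the commitment

def encode(bits):
    """Repetition-encode a list of 0/1 key bits into a codeword."""
    return [b for b in bits for _ in range(REPEAT)]

def decode(bits):
    """Majority-vote decode a (possibly noisy) repetition codeword."""
    out = []
    for i in range(0, len(bits), REPEAT):
        block = bits[i:i + REPEAT]
        out.append(1 if sum(block) > REPEAT // 2 else 0)
    return out

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def commit(target_profile):
    """Vendor side: hide a random key under the target behavioral profile."""
    key = [secrets.randbelow(2) for _ in range(MSG_BITS)]
    codeword = encode(key)
    offset = xor(codeword, target_profile)             # codeword masked by the profile
    key_hash = hashlib.sha256(bytes(key)).hexdigest()  # lets the user check recovery
    return key_hash, offset, key                       # key stays with the vendor, e.g. to encrypt the coupon

def open_commitment(user_profile, key_hash, offset):
    """User side: succeeds only if the profile is close to the vendor's target."""
    noisy_codeword = xor(offset, user_profile)   # = codeword + (target XOR user) error pattern
    candidate = decode(noisy_codeword)
    if hashlib.sha256(bytes(candidate)).hexdigest() == key_hash:
        return candidate    # key recovered -> coupon can be unlocked
    return None             # profile too far from target -> nothing learned about the key

# Usage sketch: a near-match recovers the key, a distant profile does not.
n = MSG_BITS * REPEAT                      # profile length in bits (80 here)
target = [secrets.randbelow(2) for _ in range(n)]
key_hash, offset, key = commit(target)

near = target.copy()
near[0] ^= 1                               # differs in one behavioral bit
assert open_commitment(near, key_hash, offset) == key

far = [b ^ 1 for b in target]              # opposite profile everywhere
assert open_commitment(far, key_hash, offset) is None
```

The correction capability of the code plays the role described in the abstract: a stronger code lets more distant profiles open the commitment (broader targeting, more leakage to ineligible users), while a weaker code restricts success to profiles very close to the target.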

