- Article name
- A regularized inverse‑problem framework for assessing indirect leakage risk in privacy‑preserving data enclaves
- Authors
- Galmanov P. A., galmanov.pa@phystech.edu, Moscow Institute of Physics and Technology (National Research University), Dolgoprudny, Moscow Region, Russia; SC "OKB SAPR", Moscow, Russia; JSC "Sberbank-Technology", Moscow, Russia
- Keywords
- indirect information leakage / data enclaves / regularization / ill‑posed inverse problems / RKHS
- Year
- 2025, Issue 4, Pages 25–30
- Code EDN
- JVDVCT
- Code DOI
- 10.52190/2073-2600_2025_4_25
- Abstract
- This article examines the conditions under which indirect information leakage may arise in AI systems. We propose a practical testing procedure to determine whether the accumulated number of queries suffices to predict a hidden sensitive attribute from the outputs better than chance. To this end, we estimate the empirical noise level from repeated queries, fit a stable (regularized) learner to the query results, and then test whether its predictions improve on an independent sample and are not attributable to chance (using cross-validation and a permutation test). The outcome is summarized by a single number, the leakage threshold, defined as the minimal number of queries after which leakage becomes statistically detectable. We show analytically and confirm experimentally that the leakage threshold increases monotonically with the noise level. In addition, a simple ablation identifies the channels that contribute most to leakage. The proposed procedure supports well-founded control measures: limiting the number of queries, increasing the noise level, controlling adaptivity, and tuning individual channels. Administrators thus gain a tool that links the access parameters of an AI system to a measurable level of risk.
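The testing procedure outlined in the abstract lends itself to a compact illustration. The Python sketch below follows only the abstract's logic: a regularized learner scored by cross-validation, a permutation test for significance, and the leakage threshold taken as the smallest query count at which the permutation p-value drops below a chosen significance level. The data generator `simulate_queries`, the noise model, the logistic-regression estimator, and all parameter values are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the leakage-threshold test; simulate_queries and all
# parameters are hypothetical stand-ins, not the paper's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(0)

def simulate_queries(n_queries, n_subjects=200, noise=1.0):
    """Toy enclave: each subject has a hidden binary attribute s, and every
    query output carries a weak, noisy signal about it."""
    s = rng.integers(0, 2, size=n_subjects)               # hidden sensitive bit
    signal = 0.3 * (2 * s - 1)                            # weak per-query signal
    X = signal[:, None] + noise * rng.standard_normal((n_subjects, n_queries))
    return X, s

alpha = 0.05
threshold = None
for n_queries in (1, 2, 4, 8, 16, 32, 64, 128):
    X, s = simulate_queries(n_queries, noise=1.0)
    clf = LogisticRegression(C=0.1)                       # strong L2 regularization
    # Cross-validated accuracy plus a permutation test: does the learner beat
    # chance on held-out data, and is the score not attributable to luck?
    score, _, p_value = permutation_test_score(
        clf, X, s, cv=5, n_permutations=200, random_state=0)
    print(f"n_queries={n_queries:4d}  CV accuracy={score:.3f}  p={p_value:.3f}")
    if threshold is None and p_value < alpha:
        threshold = n_queries                             # leakage threshold

print("estimated leakage threshold:", threshold)
```

In this toy model, raising the `noise` argument should push the detected threshold upward, mirroring the monotonicity result stated in the abstract.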