Paper Title
Constant matters: Fine-grained Complexity of Differentially Private Continual Observation
Paper Authors
Paper Abstract
We study fine-grained error bounds for differentially private algorithms for counting under continual observation. Our main insight is that the matrix mechanism, when instantiated with lower-triangular matrices, can be used in the continual observation model. More specifically, we give an explicit factorization of the counting matrix $M_\mathsf{count}$ and bound the resulting error explicitly. We also give a fine-grained analysis, specifying the exact constant in the upper bound. Our analysis is based on upper and lower bounds on the {\em completely bounded norm} (cb-norm) of $M_\mathsf{count}$. Along the way, for a large range of the dimension of $M_\mathsf{count}$, we improve the best-known bound on its cb-norm, due to Mathias (SIAM Journal on Matrix Analysis and Applications, 1993), which had stood for 28 years. Furthermore, we are the first to give concrete error bounds for various problems under continual observation, such as binary counting, maintaining a histogram, releasing an approximately cut-preserving synthetic graph, many graph-based statistics, and substring and episode counting. Finally, we note that our result can be used to obtain a fine-grained error bound for non-interactive local learning, as well as the first lower bounds on the additive error of $(ε,δ)$-differentially private counting under continual observation. Subsequent to this work, Henzinger et al. (SODA 2023) showed that our factorization also achieves a fine-grained mean-squared error.
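The factorization-based mechanism described in the abstract can be illustrated with a small sketch. It builds a lower-triangular Toeplitz matrix $L$ whose entries are the Taylor coefficients of $(1-x)^{-1/2}$, so that $L \cdot L$ equals the all-ones lower-triangular counting matrix $M_\mathsf{count}$, and then applies the matrix mechanism by releasing $L(Lx + z)$ for Gaussian noise $z$. This is a minimal, hypothetical reconstruction for intuition only; the paper's actual factorization, constants, and noise calibration should be taken from the paper itself, and the function names here are invented for the example.

```python
import numpy as np
from math import comb


def sqrt_factor(n):
    """Lower-triangular Toeplitz matrix L with L @ L == M_count.

    The entries f(k) = C(2k, k) / 4^k are the Taylor coefficients of
    (1 - x)^(-1/2); squaring that power series gives (1 - x)^(-1),
    whose coefficients are all 1 -- i.e., the counting matrix.
    (Illustrative reconstruction, not the paper's exact statement.)
    """
    f = [comb(2 * k, k) / 4.0 ** k for k in range(n)]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            L[i, j] = f[i - j]
    return L


def private_prefix_sums(x, sigma, rng):
    """Matrix mechanism sketch: release L @ (L @ x + z), z ~ N(0, sigma^2).

    sigma would be calibrated from the column norms of the right factor
    to meet a target (eps, delta); here it is just a free parameter.
    """
    L = sqrt_factor(len(x))
    z = rng.normal(0.0, sigma, size=len(x))
    return L @ (L @ x + z)


n = 8
L = sqrt_factor(n)
M_count = np.tril(np.ones((n, n)))  # exact prefix-sum (counting) matrix
```

With `sigma = 0` the mechanism reproduces the exact prefix sums, which is a convenient sanity check that the factorization is correct.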