The ''Propose-Test-Release'' (PTR) framework is a classic recipe for designing differentially private (DP) algorithms that are data-adaptive, i.e., algorithms that add less noise when the input dataset is nice. We extend PTR to a more general setting by privately testing data-dependent privacy losses rather than local sensitivity, which makes it applicable beyond standard noise-adding mechanisms, e.g., to queries with unbounded or undefined sensitivity. We demonstrate the versatility of generalized PTR using private linear regression as a case study. Additionally, we apply our algorithm to solve an open problem from ''Private Aggregation of Teacher Ensembles (PATE)'' -- privately releasing the entire model with a delicate data-dependent analysis.
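As an illustration, here is a minimal Python sketch of the propose-test-release pattern generalized to privacy losses: a data-dependent privacy-loss estimate is checked against a proposed budget with a noisy threshold test before the mechanism's output is released. The function names, the unit-sensitivity assumption on the privacy-loss estimate, and the specific noise calibration are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def generalized_ptr(data, mechanism, privacy_loss_fn, proposed_budget,
                    eps_test, delta_test, rng=None):
    """Minimal sketch of generalized Propose-Test-Release.

    Assumptions (not from the paper's code): `privacy_loss_fn(data)`
    returns a data-dependent privacy-loss estimate whose sensitivity
    w.r.t. neighboring datasets is at most 1, and `mechanism(data)`
    is the randomized query we want to release.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Privately test whether the data-dependent privacy loss stays
    # within the proposed budget (Laplace noise + conservative margin).
    noisy_loss = privacy_loss_fn(data) + rng.laplace(scale=1.0 / eps_test)
    margin = np.log(1.0 / (2.0 * delta_test)) / eps_test
    if noisy_loss + margin <= proposed_budget:
        return mechanism(data)   # test passed: release the output
    return None                  # test failed: refuse to release
```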
Motivated by personalized healthcare and other applications involving sensitive data, we study online exploration in reinforcement learning with differential privacy (DP) constraints. Existing work on this problem established that no-regret learning is possible under joint differential privacy (JDP) and local differential privacy (LDP) but did not provide an algorithm with optimal regret. We close this gap for the JDP case by designing an $\epsilon$-JDP algorithm with a regret of $\widetilde{O}(\sqrt{SAH^2T}+S^2AH^3/\epsilon)$, which matches the information-theoretic lower bound of non-private learning for all choices of $\epsilon> S^{1.5}A^{0.5} H^2/\sqrt{T}$. In the above, $S$ and $A$ denote the numbers of states and actions, $H$ denotes the planning horizon, and $T$ is the number of steps. To the best of our knowledge, this is the first private RL algorithm that achieves \emph{privacy for free} asymptotically as $T\rightarrow \infty$. Our techniques -- which could be of independent interest -- include privately releasing Bernstein-type exploration bonuses and an improved method for releasing visitation statistics. The same techniques also imply a slightly improved regret bound for the LDP case.
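To make the bonus construction concrete, the sketch below privatizes per-(state, action) visitation counts with Laplace noise and forms a Bernstein-style exploration bonus from the noisy statistics; the array names, the noise calibration, and the constants in the bonus are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def private_bernstein_bonus(counts, value_var, eps, H, rng=None):
    """Sketch: release visitation counts under the Laplace mechanism and
    build a Bernstein-type bonus from the privatized statistics.

    Assumptions: `counts[s, a]` are visitation counts, `value_var[s, a]`
    are empirical variances of the next-state value estimates, and the
    per-entry sensitivity of the counts is 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy_counts = counts + rng.laplace(scale=1.0 / eps, size=counts.shape)
    n = np.maximum(noisy_counts, 1.0)   # keep the bonus well defined
    # Bernstein-style bonus: variance term plus a lower-order 1/n term.
    return np.sqrt(2.0 * value_var / n) + 7.0 * H / (3.0 * n)
```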
Behavior-constrained policy optimization has been demonstrated to be a successful paradigm for tackling offline reinforcement learning. By exploiting historical transitions, a policy is trained to maximize a learned value function while constrained by the behavior policy to avoid a significant distributional shift. In this paper, we propose closed-form policy improvement operators. We make a novel observation that the behavior constraint naturally motivates the use of a first-order Taylor approximation, leading to a linear approximation of the policy objective. Additionally, as practical datasets are usually collected by heterogeneous policies, we model the behavior policy as a Gaussian mixture and overcome the induced optimization difficulties by leveraging a lower bound of the LogSumExp and Jensen's inequality, giving rise to a closed-form policy improvement operator. We instantiate offline RL algorithms with our novel policy improvement operators and empirically demonstrate their effectiveness over state-of-the-art algorithms on the standard D4RL benchmark.
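For intuition, the following sketch shows how a first-order Taylor approximation of the policy objective around the behavior action yields a closed-form update: maximizing the linearized objective within a bounded neighborhood of the behavior mean moves the action along the Q-gradient. The names and the norm-ball trust region are illustrative assumptions and not the paper's exact operator.

```python
import torch

def first_order_improvement(mu_behavior, grad_q, trust_radius=0.1):
    """Sketch of a closed-form, first-order policy improvement step.

    Assumptions: `mu_behavior` is the (Gaussian) behavior policy's mean
    action at a state, `grad_q` is the gradient of the learned Q-function
    w.r.t. the action evaluated at `mu_behavior`, and the behavior
    constraint is modeled as a norm ball of radius `trust_radius`.

    Linearizing Q around mu_behavior, maximizing <grad_q, a - mu_behavior>
    over ||a - mu_behavior|| <= trust_radius is solved in closed form by
    stepping along the normalized gradient direction.
    """
    direction = grad_q / (grad_q.norm() + 1e-8)
    return mu_behavior + trust_radius * direction
```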
We study the problem of online dynamic pricing with two types of fairness constraints: ''procedural fairness'', which requires the proposed prices to be equal in expectation across different groups, and ''substantive fairness'', which requires the accepted prices to be equal in expectation across different groups. A policy that is simultaneously procedurally and substantively fair is called ''doubly fair''. We show that a doubly fair policy must be random in order to earn higher revenue than the best trivial policy that assigns the same price to all groups. In the two-group setting, we propose an online learning algorithm for the 2-group pricing problem that achieves $\tilde{O}(\sqrt{T})$ regret, zero procedural unfairness, and $\tilde{O}(\sqrt{T})$ substantive unfairness over $T$ rounds of learning. We also prove two lower bounds showing that these results on regret and unfairness are both information-theoretically optimal up to iterated logarithmic factors. To the best of our knowledge, this is the first dynamic pricing algorithm that learns to price while satisfying both fairness constraints at the same time.
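As a toy illustration of why randomization helps, the sketch below gives a two-group pricing rule whose expected proposed price is identical across groups (procedural fairness in expectation) even though the realized prices differ, which leaves room to adapt to group-specific demand. It is an illustrative example, not the paper's algorithm, and the spread parameter is an assumption.

```python
import numpy as np

def doubly_fair_proposal(group, base_price, spread, rng=None):
    """Sketch: a randomized two-group pricing rule with equal expected
    proposed prices. Group 0 always receives `base_price`; group 1
    receives base_price - spread or base_price + spread with equal
    probability, so E[price | group 0] = E[price | group 1]."""
    rng = np.random.default_rng() if rng is None else rng
    if group == 0:
        return base_price
    return base_price + spread * rng.choice([-1.0, 1.0])
```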
We consider the problem of control with a sequence of quadratic losses, i.e., LQR control. We provide an efficient online algorithm that achieves an optimal dynamic (policy) regret of $\tilde{O}(\max\{n^{1/3}\mathcal{TV}(M_{1:n})^{2/3}, 1\})$, where $\mathcal{TV}(M_{1:n})$ is the total variation of any oracle sequence of policies parameterized by $M_1, \dots, M_n$, chosen in hindsight to cater to the unknown nonstationarity. This rate improves over the best known rate of $\tilde{O}(\sqrt{n(\mathcal{TV}(M_{1:n})+1)})$ and is information-theoretically optimal for LQR. The main technical components include a reduction from LQR to online linear regression with delayed feedback due to Foster and Simchowitz (2020), as well as a new proper learning algorithm with an optimal $\tilde{O}(n^{1/3})$ dynamic regret on a family of ''minibatched'' quadratic losses, which could be of independent interest.
Per-example gradient clipping is a key algorithmic step that enables practical differentially private (DP) training of deep learning models. However, the choice of the clipping norm $R$ is crucial for achieving high accuracy under DP. We propose an easy-to-use replacement, called automatic clipping (Autoclipping), that eliminates the need to tune $R$ for any DP optimizer, including DP-SGD, DP-Adam, DP-LAMB, and others. The automatic variants are as private and computationally efficient as existing DP optimizers, but require no DP-specific hyperparameters, thus making DP training as amenable as standard non-private training. We give a rigorous convergence analysis of automatic DP-SGD in the non-convex setting, showing that it enjoys an asymptotic convergence rate matching that of standard SGD. We also demonstrate on a variety of language and vision tasks that automatic clipping outperforms or matches the state of the art and can be easily employed with minimal changes to existing codebases.
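A minimal PyTorch-style sketch of the idea: instead of clipping each per-sample gradient to a tuned norm $R$, every per-sample gradient is rescaled by roughly the inverse of its own norm, so each sample's contribution is bounded without any $R$ to tune. The stability constant and the noise calibration below are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def auto_clip(per_sample_grads, gamma=0.01):
    """Sketch of automatic clipping: normalize each per-sample gradient
    so its norm is (at most) 1, removing the clipping threshold R as a
    hyperparameter. `gamma` is a small stability constant (assumed)."""
    norms = per_sample_grads.flatten(1).norm(dim=1)           # (B,)
    scale = 1.0 / (norms + gamma)                             # (B,)
    shape = (-1,) + (1,) * (per_sample_grads.dim() - 1)
    return per_sample_grads * scale.view(shape)

def dp_sgd_update(per_sample_grads, noise_multiplier, gamma=0.01):
    """One privatized gradient estimate: sum the normalized per-sample
    gradients (sensitivity roughly 1) and add Gaussian noise."""
    clipped = auto_clip(per_sample_grads, gamma)
    summed = clipped.sum(dim=0)
    noisy = summed + noise_multiplier * torch.randn_like(summed)
    return noisy / per_sample_grads.shape[0]
```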
Quantized neural networks have attracted much attention because they reduce space and computational complexity during inference. Moreover, there has been folklore that quantization acts as an implicit regularizer and can therefore improve the generalizability of neural networks, yet no existing work has formalized this interesting folklore. In this paper, we treat the binary weights in a neural network as random variables produced by stochastic rounding and study the propagation of distributions across the layers of the network. We propose a quasi neural network, a neural network with continuous parameters and a smooth activation function, to approximate this distribution propagation. We derive the neural tangent kernel (NTK) for the quasi neural network and show that the eigenvalues of the NTK decay at an approximately exponential rate, comparable to that of a Gaussian kernel with a random scale. This in turn indicates that the reproducing kernel Hilbert space (RKHS) of a binary-weight neural network covers a strict subset of the functions covered by a neural network with real-valued weights. We use experiments to verify that the proposed quasi neural network approximates a binary-weight neural network well. Moreover, the binary-weight neural network has a smaller generalization gap than its real-valued counterpart, which is analogous to the difference between a Gaussian kernel and a Laplace kernel.
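To make the modeling assumption concrete, the following sketch treats a binary weight as the stochastic rounding of a real-valued weight in $[-1, 1]$, which is unbiased in expectation; it is an illustrative toy, not the paper's code.

```python
import torch

def stochastic_binarize(w):
    """Sketch: round a real-valued weight w in [-1, 1] to +1 with
    probability (1 + w) / 2 and to -1 otherwise, so the binary weight
    is an unbiased random variable: E[binary weight] = w."""
    w = w.clamp(-1.0, 1.0)
    prob_plus = (1.0 + w) / 2.0
    return torch.where(torch.rand_like(w) < prob_plus,
                       torch.ones_like(w), -torch.ones_like(w))
```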
In goal-oriented reinforcement learning, an agent needs to reach a goal state while minimizing the cost, a setting that has received significant attention in real-world applications. Its theoretical formulation, the stochastic shortest path (SSP) problem, has been intensively studied in the online setting. However, it remains understudied when such online interaction is prohibited and only historical data is available. In this paper, we consider the offline stochastic shortest path problem when the state and action spaces are finite. We design simple value-iteration-based algorithms for the offline policy evaluation (OPE) and offline policy learning tasks. Notably, our analysis of these simple algorithms yields strong instance-dependent bounds, which can imply near-optimal worst-case guarantees. We hope our study can help illuminate the fundamental statistical limits of the offline SSP problem and motivate further studies beyond the current scope.
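For concreteness, here is a minimal sketch of tabular value iteration on an empirical SSP model estimated from offline data; the array names and the omission of the pessimism-style adjustments used in the paper's analysis are simplifying assumptions.

```python
import numpy as np

def ssp_value_iteration(P_hat, c_hat, goal, n_iter=1000, tol=1e-8):
    """Sketch: value iteration on an empirical SSP instance.

    Assumptions: `P_hat[s, a, s']` and `c_hat[s, a]` are the empirical
    transition kernel and costs estimated from offline data
    (hypothetical names); `goal` is the absorbing, zero-cost state.
    """
    S, A, _ = P_hat.shape
    V = np.zeros(S)
    for _ in range(n_iter):
        Q = c_hat + P_hat @ V            # Bellman backup, shape (S, A)
        V_new = Q.min(axis=1)
        V_new[goal] = 0.0                # reaching the goal costs nothing
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = (c_hat + P_hat @ V).argmin(axis=1)
    return V, policy
```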
The offline reinforcement learning (RL) problem is often motivated by the need to learn data-driven decision policies in financial, legal, and healthcare applications. However, the learned policy could retain sensitive information about individuals in the training data (e.g., treatment and outcome of patients), and is thus susceptible to various privacy risks. We design offline RL algorithms with differential privacy guarantees that provably prevent such risks. These algorithms also enjoy strong instance-dependent learning bounds under both the tabular and the linear Markov decision process (MDP) settings. Our theory and simulations suggest that the privacy guarantee comes at (almost) no drop in utility compared to the non-private counterpart for a medium-size dataset.
Large language models have been shown to memorize private information, such as social security numbers, contained in the training data. Given the sheer scale of the training corpus, it is challenging to screen and filter such private data, either manually or automatically. In this paper, we propose Confidentially Redacted Training (CRT), a method for training language generation models while protecting confidential segments. We borrow ideas from differential privacy (which solves a related but distinct problem) and show that our method is able to prevent unintended memorization by randomizing parts of the training process. Moreover, we show that redaction with an approximately correct screening policy amplifies the confidentiality guarantee. We implement the method for both LSTM and GPT language models. Our experimental results show that models trained with CRT attain almost the same perplexity while providing strong confidentiality.
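A toy sketch of the redaction idea: tokens flagged as confidential by a (possibly approximate) screening policy are randomly replaced before training, so the model never trains directly on the confidential segments. The placeholder token, the flagging interface, and the replacement probability are illustrative assumptions, not the paper's exact procedure.

```python
import random

REDACT_TOKEN = "<REDACTED>"   # hypothetical placeholder token

def redact_sequence(tokens, confidential_mask, redact_prob=1.0):
    """Sketch: replace tokens flagged as confidential with a placeholder
    (with probability `redact_prob`), randomizing the part of training
    that could otherwise memorize the confidential segments."""
    return [
        REDACT_TOKEN if flagged and random.random() < redact_prob else tok
        for tok, flagged in zip(tokens, confidential_mask)
    ]

# Example: the screening policy marks the SSN token as confidential.
tokens = ["my", "ssn", "is", "123-45-6789"]
mask = [False, False, False, True]
print(redact_sequence(tokens, mask))   # ['my', 'ssn', 'is', '<REDACTED>']
```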