
Browsing by Author "Guzman Paredes, Cristobal Andres"

Now showing 1 - 15 of 15
  • A sequential Stackelberg game for dynamic inspection problems
    (2022) Guzman Paredes, Cristobal Andres; Javiera Riffo; Claudio Telha; Mathieu Van Vyve
  • Best-case lower bounds in online learning
    (2021) Guzman Paredes, Cristobal Andres; Nishant Mehta; Ali Mortazavi
    Much of the work in online learning focuses on the study of sublinear upper bounds on the regret. In this work, we initiate the study of best-case lower bounds in online convex optimization, wherein we bound the largest improvement an algorithm can obtain relative to the single best action in hindsight. This problem is motivated by the goal of better understanding the adaptivity of a learning algorithm. Another motivation comes from fairness: it is known that best-case lower bounds are instrumental in obtaining algorithms for decision-theoretic online learning (DTOL) that satisfy a notion of group fairness. Our main contribution is a general method for proving best-case lower bounds for Follow the Regularized Leader (FTRL) algorithms with time-varying regularizers, which we use to show that best-case lower bounds are of the same order as existing upper regret bounds: this covers situations with a fixed learning rate, decreasing learning rates, timeless methods, and adaptive gradient methods. In stark contrast, we show that the linearized version of FTRL can attain negative linear regret. Finally, in DTOL with two experts and binary losses, we fully characterize the best-case sequences, which provides a finer understanding of the best-case lower bounds.
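The FTRL scheme the abstract refers to can be illustrated with a minimal sketch. The function names below (`ftrl_quadratic`, `linear_regret`) are hypothetical, and the example assumes the simplest setting: an unconstrained quadratic regularizer and linear losses, in which case the FTRL play has a closed form.

```python
import numpy as np

def ftrl_quadratic(grads, eta):
    """FTRL with a fixed quadratic regularizer R(w) = ||w||^2 / (2 * eta).

    At round t the learner plays
        w_t = argmin_w <g_{1:t-1}, w> + R(w) = -eta * g_{1:t-1},
    i.e. it only uses gradients of losses from earlier rounds.
    Returns the sequence of plays.
    """
    g_sum = np.zeros_like(grads[0], dtype=float)
    plays = []
    for g in grads:
        plays.append(-eta * g_sum)  # decide before observing g
        g_sum = g_sum + g           # then incorporate the new gradient
    return plays

def linear_regret(grads, plays, u):
    """Regret of the plays against a fixed comparator u,
    for linear losses f_t(w) = <g_t, w>."""
    return sum(float(np.dot(g, w - u)) for g, w in zip(grads, plays))
```

A best-case lower bound, in the abstract's sense, asks how *negative* `linear_regret` can get over any gradient sequence; the paper shows that for FTRL with time-varying regularizers it cannot beat the comparator by more than the order of the upper regret bound.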
  • Between Stochastic and Adversarial Online Convex Optimization: Improved Regret Bounds via Smoothness
    (2022) Sarah Sachs; Hedi Hadiji; Tim van Erven; Guzman Paredes, Cristobal Andres
  • Corrigendum in “Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization via Information Theory”
    (2024) Gábor Braun; Guzman Paredes, Cristobal Andres; Sebastian Pokutta
  • Differentially Private Generalized Linear Models Revisited
    (2022) Raman Arora; Raef Bassily; Guzman Paredes, Cristobal Andres; Michael Menart; Enayat Ullah
  • Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings
    (2021) Raef Bassily; Guzman Paredes, Cristobal Andres; Michael Menart
    We study differentially private stochastic optimization in convex and non-convex settings. For the convex case, we focus on the family of non-smooth generalized linear losses (GLLs). Our algorithm for the $\ell_2$ setting achieves optimal excess population risk in near-linear time, while the best known differentially private algorithms for general convex losses run in super-linear time. Our algorithm for the $\ell_1$ setting has nearly-optimal excess population risk $\tilde{O}\big(\sqrt{\frac{\log{d}}{n\varepsilon}}\big)$, and circumvents the dimension dependent lower bound of \cite{Asi:2021} for general non-smooth convex losses. In the differentially private non-convex setting, we provide several new algorithms for approximating stationary points of the population risk. For the $\ell_1$-case with smooth losses and polyhedral constraint, we provide the first nearly dimension independent rate, $\tilde O\big(\frac{\log^{2/3}{d}}{{(n\varepsilon)^{1/3}}}\big)$ in linear time. For the constrained $\ell_2$-case with smooth losses, we obtain a linear-time algorithm with rate $\tilde O\big(\frac{1}{n^{1/3}}+\frac{d^{1/5}}{(n\varepsilon)^{2/5}}\big)$. Finally, for the $\ell_2$-case we provide the first method for {\em non-smooth weakly convex} stochastic optimization with rate $\tilde O\big(\frac{1}{n^{1/4}}+\frac{d^{1/6}}{(n\varepsilon)^{1/3}}\big)$ which matches the best existing non-private algorithm when $d= O(\sqrt{n})$. We also extend all our results above for the non-convex $\ell_2$ setting to the $\ell_p$ setting, where $1 < p \leq 2$, with only polylogarithmic (in the dimension) overhead in the rates.
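The standard mechanism behind results of this kind is noisy SGD: clip each per-example gradient and add Gaussian noise before the update. The sketch below (the name `dp_sgd` and the toy scalar objective are illustrative, not the paper's algorithm, and the privacy accounting is omitted) shows only that recipe.

```python
import numpy as np

def dp_sgd(data, lr=0.1, clip=1.0, noise_mult=1.0, epochs=30, seed=0):
    """Noisy-SGD sketch on the scalar objective
    f(w) = (1/n) * sum_i (w - x_i)^2 / 2.

    Each per-example gradient is clipped to magnitude <= clip and
    perturbed with Gaussian noise of scale noise_mult * clip, the
    usual template for differentially private SGD.
    """
    rng = np.random.default_rng(seed)
    w = 0.0
    for _ in range(epochs):
        for x in rng.permutation(np.asarray(data, dtype=float)):
            g = w - x                              # per-example gradient
            g = g / max(1.0, abs(g) / clip)        # clip: |g| <= clip
            g += noise_mult * clip * rng.normal()  # Gaussian perturbation
            w -= lr * g
    return w
```

With `noise_mult=0` the iterate settles near the data mean; the noise multiplier trades accuracy for privacy, and the excess-risk rates quoted in the abstract quantify the best achievable trade-off.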
  • Fast, Deterministic and Sparse Dimensionality Reduction
    (2018) Daniel Dadush; Guzman Paredes, Cristobal Andres; Neil Olver
  • Faster Rates of Convergence to Stationary Points in Differentially Private Optimization
    (2023) Raman Arora; Raef Bassily; Guzman Paredes, Cristobal Andres; Tomás González; Michael Menart; Enayat Ullah
  • Lower bounds for parallel and randomized convex optimization
    (2020) Diakonikolas, Jelena; Guzman Paredes, Cristobal Andres
  • Network Pricing: How to Induce Optimal Flows Under Strategic Link Operators
    (2021) Guzman Paredes, Cristobal Andres
    Regulating Network Pricing Games by Using Price Caps
  • Non-Euclidean Differentially Private Stochastic Convex Optimization
    (2021) Raef Bassily; Guzman Paredes, Cristobal Andres; Anupama Nandi
  • Optimal algorithms for differentially private stochastic monotone variational inequalities and saddle-point problems
    (2023) Digvijay Boob; Guzman Paredes, Cristobal Andres
    In this work, we conduct the first systematic study of stochastic variational inequality (SVI) and stochastic saddle point (SSP) problems under the constraint of differential privacy (DP). We propose two algorithms: Noisy Stochastic Extragradient (NSEG) and Noisy Inexact Stochastic Proximal Point (NISPP). We show that a stochastic approximation variant of these algorithms attains risk bounds vanishing as a function of the dataset size, with respect to the strong gap function, and that a sampling-with-replacement variant achieves optimal risk bounds with respect to a weak gap function. We also show lower bounds of the same order on the weak gap function; hence, our algorithms are optimal. Key to our analysis is the investigation of algorithmic stability bounds for both algorithms, which are new even in the nonprivate case. The running time of the sampling-with-replacement algorithms, with respect to the dataset size $n$, is $n^2$ for NSEG and $\widetilde{O}(n^{3/2})$ for NISPP.
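The extragradient template underlying NSEG can be sketched in a few lines. This is a generic illustration under assumed names (`noisy_extragradient`, `F_bilinear`), not the paper's private algorithm: it shows the two half-step updates with optional Gaussian gradient noise, without the DP calibration.

```python
import numpy as np

def noisy_extragradient(F, z0, eta=0.2, steps=300, noise=0.0, seed=0):
    """Extragradient iteration for a monotone operator F, with optional
    Gaussian gradient noise in the spirit of NSEG:

        z_{t+1/2} = z_t - eta * (F(z_t)       + xi_t)
        z_{t+1}   = z_t - eta * (F(z_{t+1/2}) + xi'_t)
    """
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        z_half = z - eta * (F(z) + noise * rng.normal(size=z.shape))
        z = z - eta * (F(z_half) + noise * rng.normal(size=z.shape))
    return z

# Bilinear saddle point f(x, y) = x * y: the operator F(x, y) = (y, -x)
# is monotone with unique equilibrium at the origin. Extragradient
# converges here, whereas plain simultaneous gradient steps spiral out.
F_bilinear = lambda z: np.array([z[1], -z[0]])
```

The extra evaluation at the half-step is what stabilizes the iteration on monotone problems where a single simultaneous gradient step diverges.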
  • Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
    (2020) Raef Bassily; Vitaly Feldman; Guzman Paredes, Cristobal Andres; Kunal Talwar
    Uniform stability is a notion of algorithmic stability that bounds the worst-case change in the model output by the algorithm when a single data point in the dataset is replaced. An influential work of Hardt et al. [2016] provides strong upper bounds on the uniform stability of the stochastic gradient descent (SGD) algorithm on sufficiently smooth convex losses. These results led to important progress in understanding the generalization properties of SGD and several applications to differentially private convex optimization for smooth losses. Our work is the first to address uniform stability of SGD on nonsmooth convex losses. Specifically, we provide sharp upper and lower bounds for several forms of SGD and full-batch GD on arbitrary Lipschitz nonsmooth convex losses. Our lower bounds show that, in the nonsmooth case, (S)GD can be inherently less stable than in the smooth case. On the other hand, our upper bounds show that (S)GD is sufficiently stable for deriving new and useful bounds on generalization error. Most notably, we obtain the first dimension-independent generalization bounds for multi-pass SGD in the nonsmooth case. In addition, our bounds allow us to derive a new algorithm for differentially private nonsmooth stochastic convex optimization with optimal excess population risk. Our algorithm is simpler and more efficient than the best known algorithm for the nonsmooth case, due to Feldman et al. [2020].
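The stability notion in this abstract can be probed empirically: run SGD with an identical seeded sample order on two datasets differing in one point and measure how far the outputs drift. The sketch below (the name `sgd_nonsmooth` and the toy absolute-value objective are illustrative assumptions) uses a nonsmooth 1-Lipschitz loss, the regime the paper studies.

```python
import numpy as np

def sgd_nonsmooth(data, lr=0.05, epochs=3, seed=0):
    """SGD on the nonsmooth 1-Lipschitz convex objective
    f(w) = (1/n) * sum_i |w - x_i|, using the sign subgradient and a
    fixed (seeded) sample order so two runs are directly comparable."""
    rng = np.random.default_rng(seed)
    w = 0.0
    n = len(data)
    for _ in range(epochs):
        for i in rng.permutation(n):
            w -= lr * np.sign(w - data[i])  # subgradient step
    return w

# Uniform-stability probe: replace one data point and measure how much
# the output moves. For an L-Lipschitz loss, step size lr, and T steps,
# the trivial bound is |w - w'| <= 2 * L * lr * T; the paper's point is
# that for nonsmooth losses the true gap can be much larger than in the
# smooth case, where gradient steps are nonexpansive.
base     = [0.0, 1.0, 2.0, 3.0]
neighbor = [0.0, 1.0, 2.0, -3.0]   # one point replaced
gap = abs(sgd_nonsmooth(base) - sgd_nonsmooth(neighbor))
```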
  • Stochastic Halpern Iteration with Variance Reduction for Stochastic Monotone Inclusions
    (2022) Xufeng Cai; Chaobing Song; Guzman Paredes, Cristobal Andres; Jelena Diakonikolas
  • The complexity of nonconvex-strongly-concave minimax optimization
    (2021) Siqi Zhang; Junchi Yang; Guzman Paredes, Cristobal Andres; Negar Kiyavash; Niao He

Libraries - Pontificia Universidad Católica de Chile - Central offices: Av. Vicuña Mackenna 4860, Santiago de Chile.
