Summer Workshop on Learning-Based Algorithms

We present a theoretical analysis of the proposed algorithms and prove that, under natural assumptions, they have lower space complexity than prior algorithms. We also evaluate our algorithms on two real-world datasets and demonstrate their performance gains empirically.

Data-driven algorithm design for combinatorial problems is an important aspect of modern data science. Rather than using off-the-shelf algorithms that have only worst-case performance guarantees, practitioners typically optimize over large families of parametrized algorithms, tuning the parameters on past problem instances in order to perform well on future instances from the same domain.

In this talk, I will discuss general techniques for providing formal guarantees for data-driven algorithm selection via online learning. We provide upper and lower bounds on regret for algorithm selection from infinite parametrized families of algorithms in online settings, where problems arrive sequentially and the parameters must be chosen online.
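To make the online selection setting concrete, here is a minimal sketch, assuming the parameter family is discretized to a finite grid and losses are scaled to [0, 1]; `run_algorithm` is a hypothetical wrapper returning the loss of the parametrized algorithm on one instance. The talk's actual techniques handle infinite families directly; this only illustrates the online-learning framing.

```python
import math
import random

def exp_weights_selection(param_grid, run_algorithm, instances, eta=0.1):
    """Online algorithm selection via exponential weights over a finite grid.

    run_algorithm(theta, instance) -> loss in [0, 1]  (hypothetical wrapper)
    Yields the parameter setting chosen for each arriving instance.
    """
    weights = [1.0] * len(param_grid)
    for instance in instances:
        # sample a parameter setting proportionally to its current weight
        idx = random.choices(range(len(param_grid)), weights=weights)[0]
        yield param_grid[idx]
        # full-information update: each parameter's loss is computable in hindsight
        losses = [run_algorithm(theta, instance) for theta in param_grid]
        for k, loss in enumerate(losses):
            weights[k] *= math.exp(-eta * loss)
```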

The major technical challenge is that for many combinatorial problems, including subset selection, partitioning, and clustering, small differences between two algorithms' parameters can trigger a cascade of changes in their behavior and dramatically change their performance.

This leads to interesting challenges that require new techniques and that, in turn, push the boundaries of learning theory.

Classic algorithm design focuses on optimizing worst-case complexity, resulting in algorithms that perform well on any possible input. In many applications, however, the typical input is not just far from the worst case but has predictable characteristics. We first focus on the caching problem and give results that formally incorporate machine-learned predictions into algorithm design.

We then discuss the modeling choices and challenges faced when extending this work to weighted caching.

I'll give an overview of recent work on augmenting traditional online algorithms with additional information, such as machine-learned predictions. This talk demonstrates the different design choices available to the algorithm designer with respect to the type and structure of the predictions used. Naturally, the flavor of the theoretical results obtained depends greatly on the prediction model.

Using the well-studied ski-rental problem as a running example, I'll cover three distinct prediction models and design online algorithms in these regimes with very different competitive ratios.

Recent research has focused on incorporating the advice of ML models to improve the performance of online algorithms without compromising the intrinsic robustness offered by traditional competitive analysis.
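Before turning to the multi-expert setting below, here is a minimal sketch of a deterministic strategy in the single-prediction ski-rental model: buy early when the prediction says buying pays off, and hedge otherwise. The trade-off parameter `lam` and the exact buy days are illustrative assumptions in the style of known consistency-robustness analyses, not necessarily the exact algorithms covered in the talk.

```python
import math

def ski_rental_with_prediction(buy_cost, predicted_days, actual_days, lam=0.5):
    """Deterministic ski rental with a prediction of the number of ski days.

    Renting costs 1 per day.  If the prediction says skiing lasts long
    enough to justify buying, buy early (day ceil(lam * buy_cost));
    otherwise delay the purchase (day ceil(buy_cost / lam)).  Smaller lam
    trusts the prediction more: better cost when it is accurate, but a
    weaker worst-case guarantee when it is wrong.
    """
    if predicted_days >= buy_cost:
        buy_day = math.ceil(lam * buy_cost)
    else:
        buy_day = math.ceil(buy_cost / lam)
    if actual_days >= buy_day:
        return (buy_day - 1) + buy_cost   # rented until buying
    return actual_days                    # rented every day, never bought
```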

This talk will focus on the case of multiple experts in this setting. In particular, we show how the advice provided by multiple experts can be incorporated into an online algorithm for the classical ski rental problem.

The goal is to match the performance of the best expert without knowing her identity, a task similar in flavor to the traditional problem of learning with expert advice. In contrast to online learning, however, this is not an iterative process, and the algorithm has no opportunity to learn the identity of the best expert over time.

We make no assumptions about the quality of the advice provided by the experts, and we obtain tight performance bounds in this setting.

The use of machine learning and predictive models has produced a revolution in science and engineering.

Online optimization problems are a natural setting in which predictions can be used to manage uncertainty and improve performance. This paper studies how predictive techniques can be used to break through worst-case barriers in online scheduling. The makespan minimization problem on unrelated machines and its special case, restricted assignment, are classic problems in online scheduling theory. In this paper we construct non-trivial predictions for these problems and design algorithms that use these predictions to compute solutions online.
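As a hedged illustration of how such predictions might be consumed (their exact form is described next: one quantity per machine), consider a proportional-assignment rule for restricted assignment, where each arriving job may only run on a subset of machines. This is a simplification for intuition, not the paper's actual algorithm.

```python
import random

def assign_job(allowed_machines, predicted_weights):
    """Route an arriving job to one of its allowed machines with
    probability proportional to the machine's predicted weight.
    predicted_weights[i] is the learned per-machine prediction."""
    weights = [predicted_weights[i] for i in allowed_machines]
    return random.choices(allowed_machines, weights=weights)[0]

# usage: a job that may only run on machines 0, 2, or 3
predicted_weights = {0: 1.5, 1: 0.2, 2: 0.9, 3: 2.4}
machine = assign_job([0, 2, 3], predicted_weights)
```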

Our predictions are compact, having dimension linear in the number of machines. The performance guarantees of our algorithms depend on the accuracy of the predictions, and moderately accurate predictions allow our techniques to beat the worst-case lower bounds.

We consider the semi-online model, which generalizes the classical online computational model. The semi-online model postulates that the unknown future has a predictable part and an adversarial part; these parts can be arbitrarily interleaved.

An algorithm in this model operates as in the standard online model, i.e., it makes irrevocable decisions without knowledge of the future input. We study bipartite matching in the semi-online model. Our main contributions are competitive algorithms for this problem and a near-matching hardness bound. The competitive ratio of the algorithms nicely interpolates between the truly offline setting (i.e., no adversarial part) and the truly online setting (i.e., no predictable part).

In the secretary problem, a set of items is revealed to us in random order, and we want to maximize the probability of picking the best among them.

In the stochastic multi-armed bandit problem, we perform pulls from a set of arms, each with a fixed but unknown payoff probability, and want to minimize regret. Both problems are well studied in the sequential decision-making literature. We consider the semi-adversarial setting, in which an adversary is allowed to introduce a limited amount of corruption. We show that classical algorithms fail badly in the presence of corruptions, and we then give algorithms that are more robust.
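For reference, the classical (corruption-free) optimal secretary rule mentioned above observes roughly the first n/e items without committing and then accepts the first item beating everything seen so far; it succeeds with probability about 1/e. It is exactly this kind of sample-then-threshold rule that a limited number of adversarial corruptions can break. A minimal sketch:

```python
import math

def secretary(values):
    """Classical 1/e rule: skip the first ~n/e items, then take the
    first item that beats the best of the skipped prefix."""
    n = len(values)
    k = max(1, int(n / math.e))
    threshold = max(values[:k])
    for v in values[k:]:
        if v > threshold:
            return v
    return values[-1]  # forced to take the last item
```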

The traditional approach in algorithm design assumes that there is an underlying objective known to the algorithm designer, and the focus is on efficiently optimizing that objective. In many cases, however, the objectives we aim to optimize are not known but rather learned from data. So what are the guarantees of the algorithms we develop and teach when the input is learned from data? In this talk we address this question and discuss challenges at the intersection of machine learning and algorithms.

We will present some stark impossibility results and argue for new algorithmic paradigms.

Sampling-based planning algorithms such as RRT and its variants are powerful tools for path planning problems in high-dimensional continuous state and action spaces.

While these algorithms perform systematic exploration of the state space, they do not fully exploit past planning experiences from similar environments. In this paper, we design a meta path planning algorithm, called Neural Exploration-Exploitation Trees (NEXT), which can use prior experience to drastically reduce the sample requirement for solving new path planning problems.

More specifically, NEXT contains a novel neural architecture that learns, from experience, the dependency between task structures and promising path search directions. This learned prior is then integrated with a UCB-type algorithm to achieve an online balance between exploration and exploitation when solving a new problem.
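A minimal sketch of the exploration-exploitation balance just described, assuming a hypothetical learned prior `prior_score(node)` (standing in for NEXT's neural module) combined with a UCB-style visit bonus when choosing which tree node to expand next:

```python
import math

def select_node_to_expand(nodes, visit_counts, prior_score, c=1.0):
    """UCB-style node selection: learned prior plus exploration bonus.

    prior_score(node): hypothetical learned estimate of how promising
                       expanding `node` is (NEXT learns this neurally).
    visit_counts:      how many times each node has been expanded so far.
    """
    total = sum(visit_counts.values()) + 1
    def ucb(node):
        n = visit_counts.get(node, 0) + 1
        return prior_score(node) + c * math.sqrt(math.log(total) / n)
    return max(nodes, key=ucb)
```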

Empirically, we show that NEXT completes planning tasks with very small search trees and significantly outperforms previous state-of-the-art methods on several benchmark problems.

Deep learning has been applied successfully to many basic human tasks, such as object recognition and speech recognition, and increasingly to the more complex task of language understanding. In addition, deep learning has been extremely successful in planning tasks in constrained environments, e.g., game playing.

We ask: can deep learning be applied to design algorithms? We formulate this question precisely, and we show how to apply deep reinforcement learning to design online algorithms for a number of optimization problems, such as the AdWords problem (choosing which advertisers to include in a keyword auction), the online knapsack problem, and optimal stopping problems.
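For context, the traditional optimal algorithm for AdWords (in the small-bids regime) is the MSVV rule: give each query to the advertiser maximizing bid × (1 − e^(spent fraction − 1)), which achieves a 1 − 1/e competitive ratio. A minimal sketch:

```python
import math

def msvv_assign(bids, spent, budget):
    """MSVV rule for AdWords: scale each bid by 1 - e^(f - 1), where f is
    the fraction of the advertiser's budget already spent, and give the
    query to the advertiser with the highest scaled bid."""
    def scaled_bid(i):
        f = spent[i] / budget[i]
        return bids[i] * (1.0 - math.exp(f - 1.0))
    feasible = [i for i in bids if spent[i] + bids[i] <= budget[i]]
    if not feasible:
        return None  # drop the query
    winner = max(feasible, key=scaled_bid)
    spent[winner] += bids[winner]
    return winner
```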

Our results indicate that the models learn behaviours consistent with the traditional optimal algorithms for these problems.

The proposed method is based on balanced graph partitioning followed by supervised learning. When instantiated with KaHIP (Sanders and Schulz, SEA 2013) and neural networks, respectively, the new algorithm consistently outperforms quantization-based and tree-based partitioning methods.

The algorithm uses a training set of input matrices to optimize its performance. Our experiments show that, for multiple types of data sets, a learned sketch matrix can substantially reduce the approximation loss compared to a random matrix S, sometimes by an order of magnitude. We also study mixed matrices, where only some of the rows are trained and the remaining ones are random, and show that such matrices still offer improved performance while retaining worst-case guarantees.
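A minimal numpy sketch of the evaluation loop implied above: given any sketch matrix S (random, learned, or mixed), project A onto the row space of SA and measure the rank-k approximation error; a learned S is then trained to make this error small over a distribution of input matrices. This is an illustrative setup, not the paper's exact pipeline.

```python
import numpy as np

def sketch_lowrank_error(A, S, k):
    """Rank-k approximation error of A when restricted to the row space
    of the sketch S @ A (Frobenius norm)."""
    _, _, Vt = np.linalg.svd(S @ A, full_matrices=False)
    P = A @ Vt.T @ Vt                    # project rows of A onto rowspace(SA)
    U, s, Wt = np.linalg.svd(P, full_matrices=False)
    Ak = (U[:, :k] * s[:k]) @ Wt[:k]     # best rank-k approximation in that subspace
    return np.linalg.norm(A - Ak, "fro")

# usage: baseline error of a random Gaussian sketch
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100))
S_random = rng.standard_normal((20, 200)) / np.sqrt(20)
print(sketch_lowrank_error(A, S_random, k=10))
```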

Joint work with Rishi Gupta.
