DIMACS Workshop on E+M=C2
Eric Allender and Mike Saks are 60

January 26-27, 2017
DIMACS Center, CoRE Building, Rutgers University


Program

Thursday, January 26, 2017
9:15-10:00   Breakfast and registration
10:00-10:15  Opening: DIMACS Director’s Welcome, Rebecca Wright
             Organizers’ Welcome, Michal Koucky and Martin Strauss
10:15-11:00  Avi Wigderson: Brascamp-Lieb inequalities, operator scaling and optimization   Video
11:15-12:00  Harry Buhrman: Catalytic space   Video
12:00-2:00   Lunch
2:00-2:45    Meena Mahajan: Enumerator polynomials: Completeness and Intermediate Complexity   Video
2:45-3:15    Break
3:15-3:45    Clifford Smyth: Restricted Stirling and Lah numbers and their inverses
4:00-4:30    Yi Li: Estimating the Schatten Norm of Matrices   Video
4:45-5:15    Mary Wootters: Repairing Reed-Solomon Codes   Video
5:20-        Conference dinner

Friday, January 27, 2017
9:15-10:00   Breakfast and registration
10:00-10:45  Neil Immerman: Algebra, Logic and Complexity   Video
11:15-12:00  Nati Linial: Hypertrees   Video
12:00-2:00   Lunch
2:00-2:45    Toni Pitassi: Strongly Exponential Lower Bounds for Monotone Computation
2:45-3:15    Break
3:15-3:45    Mohan Paturi: Satisfiability algorithms and circuit lower bounds   Video
4:00-4:45    Lance Fortnow: Connecting Randomness and Complexity   Video
6:00-9:30    Outing: Stress Factory Comedy Club

Saturday, January 28, 2017
10:30am-3:30pm  Social program - Brunch at Claire and Eric's house

Abstracts

Harry Buhrman: Catalytic space

We define the notion of a catalytic-space computation. This is a computation that has a small amount of clean space available and is equipped with additional auxiliary space, with the caveat that the additional space is initially in an arbitrary, possibly incompressible, state and must be returned to this state when the computation is finished. We show that the extra space can be used in a nontrivial way, to compute uniform \(TC^1\)-circuits with just a logarithmic amount of clean space. The extra space thus works analogously to a catalyst in a chemical reaction. \(TC^1\)-circuits can compute for example the determinant of a matrix, which is not known to be computable in logspace. We will introduce the model and show some basic results as well as some recent results regarding non-deterministic and randomized variants of the model.
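
To fix parameters, here is a rough summary of the standard setup, using notation that may differ from the talk's: a catalytic machine with clean work space \(s(n)\) also has a catalytic tape of length roughly \(2^{s(n)}\), whose initial contents must be restored at the end. The result mentioned above can then be written as

\[
\mathrm{TC}^1 \subseteq \mathrm{CSPACE}(O(\log n)),
\]

i.e., uniform \(TC^1\) computations fit into logarithmic clean space together with a polynomial-size catalytic tape.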

Lance Fortnow: Connecting Randomness and Complexity

Throughout his career, Eric Allender has found amazing connections between randomness and computational complexity, often through the rich theory of Kolmogorov complexity, an algorithmic measure of information. This talk will explore many of these relationships and how they have shaped our view of the power of randomness.
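
For readers who have not seen it, Kolmogorov complexity can be recalled in one line (a standard textbook definition, not specific to this talk):

\[
K(x) = \min\{\, |p| : U(p) = x \,\},
\]

the length of a shortest program \(p\) that makes a fixed universal machine \(U\) output \(x\).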

Neil Immerman: Algebra, Logic and Complexity

I first met Eric Allender in May 1986 at a BART station in Berkeley on our way to STOC. Eric has continued to make brilliant and original contributions to our understanding of complexity throughout his career so far. In this talk I will discuss some current work related to Eric's interests on the constraint satisfaction problem, the complexity of the determinant, and other problems in algebra, logic and complexity.

Yi Li: Estimating the Schatten Norm of Matrices

The Schatten \(p\)-norm of a matrix \(A\) is defined as \(\|A\|_p = (\sum_i \sigma_i(A)^p)^{1/p}\), where the \(\sigma_i(A)\) are the singular values of \(A\). As \(p\to 0\), the quantity \(\|A\|_p^p\) tends to the rank of \(A\), and as \(p\to\infty\), the \(p\)-norm tends to the operator norm of \(A\). In this talk, I shall review recent progress on the problem of estimating Schatten norms in the data streaming model, including sketching upper bounds, sketching lower bounds and bit lower bounds.
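
As a small illustration of the quantity being estimated (a direct SVD-based computation, not one of the streaming algorithms discussed in the talk; the function name is ours):

    import numpy as np

    def schatten_norm(A, p):
        """Schatten p-norm of A: the l_p norm of A's singular values."""
        sigma = np.linalg.svd(A, compute_uv=False)   # singular values of A
        return float(np.sum(sigma ** p) ** (1.0 / p))

    A = np.random.randn(5, 3)
    print(schatten_norm(A, 2), np.linalg.norm(A, 'fro'))   # p = 2 is the Frobenius norm
    print(schatten_norm(A, 50), np.linalg.norm(A, 2))      # large p approaches the operator norm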

Nati Linial: Hypertrees

This is part of our ongoing effort to investigate what we call high-dimensional combinatorics. Hypertrees are the high-dimensional counterparts of trees, first introduced by Kalai in 1983. Mathematically speaking, we still know very little about them. However, they are quite easy to test on the computer, and simulations indicate many fascinating phenomena in this domain.

The definition of a hypertree is inspired by the traditional, one-dimensional notion of a tree. Trees can be defined in several different ways, and here we follow the linear-algebraic route: let \(A\) be the \(n \times {n\choose 2}\) incidence matrix of the complete graph of order \(n\). Any set of columns of \(A\) can be viewed as the edge set of a graph. The rank of \(A\) is \(n-1\), and it is easy to see that a column basis of \(A\) is synonymous with a tree. The definition of a \(d\)-dimensional hypertree now suggests itself: we start with the signed inclusion matrix (aka boundary operator) of dimensions \({n\choose d} \times {n\choose d+1}\), whose rank is \({n-1\choose d}\). A column basis is a \(d\)-dimensional hypertree. Many questions suggest themselves and are still wide open. For example: in dimension \(1\) we have Cayley's formula for the number of labeled trees. What can be said in dimension \(d\)? Furthermore, as I will explain in my talk, there are several structural properties that are trivial for trees but still very mysterious for hypertrees.
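
A tiny sanity check of the one-dimensional statement above (our own illustration, using the oriented incidence matrix over the rationals): the \(n \times {n\choose 2}\) matrix indeed has rank \(n-1\), and the \(n-1\) edges of a spanning tree give a column basis.

    import itertools
    import numpy as np

    n = 6
    edges = list(itertools.combinations(range(n), 2))   # edge set of K_n
    A = np.zeros((n, len(edges)))
    for j, (u, v) in enumerate(edges):                   # oriented incidence matrix
        A[u, j], A[v, j] = 1.0, -1.0

    print(np.linalg.matrix_rank(A))            # n - 1 = 5
    star = [edges.index((0, v)) for v in range(1, n)]    # edges of a spanning star at vertex 0
    print(np.linalg.matrix_rank(A[:, star]))   # also 5: these columns form a basis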

Based on ongoing joint work with my PhD student Yuval Peled.

Meena Mahajan: Enumerator polynomials: Completeness and Intermediate Complexity

\(VNP\), \(VP\), and \(VBP\) are central complexity classes in algebraic complexity theory, and the notions of reduction and completeness are key to understanding the relationships among them. In this talk I will describe

  1. polynomial families based on graph homomorphisms and complete for each of these classes,
  2. polynomial families based on basic combinatorial \(NP\)-complete problems, and unless \(PH\) collapses, provably "intermediate" in nature,
  3. a lower bound showing that to express the clique polynomial as a monotone projection of the permanent polynomial, exponential "blow-up" is required.
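
For context on item 3 above, recall the permanent polynomial, the classical \(VNP\)-complete family (the exact clique polynomial used is the one defined in the paper):

\[
\mathrm{per}_n = \sum_{\sigma \in S_n} \prod_{i=1}^{n} x_{i,\sigma(i)}.
\]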

This is joint work with Nitin Saurabh.

Mohan Paturi: Satisfiability algorithms and circuit lower bounds

This talk provides highlights of my joint work with Mike Saks.

Toni Pitassi: Strongly Exponential Lower Bounds for Monotone Computation

We prove truly exponential size lower bounds of \(2^{\Omega(n)}\) for computing an explicit monotone Boolean function in NP in the following models of computation: monotone formulas, monotone switching networks, monotone span programs and monotone comparator circuits, where \(n\) is the number of variables of the underlying function. Our lower bounds improve on the best previous explicit lower bounds in each of these models and are the best possible up to the constant in the exponent. Moreover, we give one unified proof that is short and fairly elementary.

This is joint work with Robert Robere.

Clifford Smyth: Restricted Stirling and Lah numbers and their inverses

Let \(s(n,k)\), \(S(n,k)\), and \(L(n,k)\) be the Stirling numbers of the first kind, Stirling numbers of the second kind, and the Lah numbers, respectively.  It is well known that the inverse of the infinite lower triangular matrix \([s(n,k)]_{n,k \geq 1}\) is \([(-1)^{n-k} S(n,k)]_{n,k \geq 1}\), the inverse of \([S(n,k)]_{n,k}\) is \([(-1)^{n-k} s(n,k)]_{n,k}\), and the inverse of \([L(n,k)]_{n,k}\) is \([(-1)^{n-k} L(n,k)]_{n,k}\).
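
These classical inverse relations are easy to check numerically for small matrices. Below is a self-contained sketch (our own illustration, using the standard recurrences), with \(s(n,k)\) taken to be the unsigned Stirling numbers of the first kind, matching the cycle-counting definition in the next paragraph.

    from functools import lru_cache
    import numpy as np

    @lru_cache(maxsize=None)
    def s1(n, k):            # unsigned Stirling numbers of the first kind
        if n == 0 and k == 0: return 1
        if n == 0 or k == 0: return 0
        return s1(n - 1, k - 1) + (n - 1) * s1(n - 1, k)

    @lru_cache(maxsize=None)
    def S2(n, k):            # Stirling numbers of the second kind
        if n == 0 and k == 0: return 1
        if n == 0 or k == 0: return 0
        return S2(n - 1, k - 1) + k * S2(n - 1, k)

    N = 6
    M1 = np.array([[s1(n, k) for k in range(1, N + 1)] for n in range(1, N + 1)])
    M2 = np.array([[(-1) ** (n - k) * S2(n, k) for k in range(1, N + 1)]
                   for n in range(1, N + 1)])
    print((M1 @ M2 == np.eye(N, dtype=int)).all())   # True: the matrices are mutual inverses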

More generally, we consider restricted versions of these numbers, \(s(n,k,R)\), \(S(n,k,R)\), and \(L(n,k,R)\), for arbitrary subsets \(R\) of natural numbers. These are defined to be the number of ways of partitioning a set of size \(n\) into \(k\) non-empty cycles (for Stirling numbers of the first kind), subsets (for Stirling numbers of the second kind) or ordered lists (for Lah numbers) with the size of each cycle, subset, or list in \(R\).  (To recover \(S(n,k)\), \(s(n,k)\), and \(L(n,k)\), take \(R\) to be the set of natural numbers.)

If \(R\) contains \(1\) then the matrices \([S(n,k,R)]_{n,k}\), \([s(n,k,R)]_{n,k}\) and \([L(n,k,R)]_{n,k}\) are all invertible.  We obtain combinatorial formulas for the entries of these inverse matrices, expressing each such entry as the difference between the sizes of two explicitly defined sets of forests. (Note that some of these entries must indeed be negative.)

For \(S(n,k,R)\) and \(L(n,k,R)\) and for certain \(R\) we can do better: each \((n,k)\) entry of the inverse will have a predictable sign, \(\sigma(n,k)\), and its magnitude will be the size of a single explicitly defined set of forests.  Among these special \(R\)'s are those which contain \(1\) and \(2\) and which have the property that for all odd \(n\) in \(R\) with \(n \geq 3\) we have \(n-1\) and \(n+1\) in \(R\).  (For these \(R\), the sign of the \((n,k)\) entry is \((-1)^{n-k}\), as when \(R\) is the set of the natural numbers.)

Our proofs depend in part on two combinatorial interpretations of the coefficients of the compositional inverse of a power series.

This is joint work with David Galvin of the University of Notre Dame and John Engbers of Marquette University.

Avi Wigderson: Brascamp-Lieb inequalities, operator scaling and optimization

I will explain how recent developments connecting analysis and algebra through algorithms for operator scaling reveal their potential power as a new tool for non-convex optimization.
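
Operator scaling is the non-commutative generalization of classical matrix (Sinkhorn) scaling. Purely as a point of reference, here is the matrix version, which alternately normalizes rows and columns of a positive matrix toward a doubly stochastic one; this sketch is ours and is not the operator-scaling algorithm of the talk.

    import numpy as np

    def sinkhorn(A, iters=200):
        """Alternately rescale rows and columns of a positive matrix A.
        For strictly positive A this converges to a doubly stochastic matrix."""
        A = np.array(A, dtype=float)
        for _ in range(iters):
            A /= A.sum(axis=1, keepdims=True)   # make every row sum to 1
            A /= A.sum(axis=0, keepdims=True)   # make every column sum to 1
        return A

    B = sinkhorn(np.random.rand(4, 4) + 0.1)
    print(B.sum(axis=0), B.sum(axis=1))         # both close to the all-ones vector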

Based on joint works with Ankit Garg, Leonid Gurvits and Rafael Oliveira.

Mary Wootters: Repairing Reed-Solomon Codes

Reed-Solomon (RS) codes are often used in distributed storage. However, it has recently been observed that the traditional recovery algorithms for RS codes are substantially sub-optimal in this setting, and the quickly developing field of regenerating codes has provided much better solutions.

In this work, we show that, in fact, RS codes are much better for distributed storage than you might think! Our main result is that, in some parameter regimes, RS codes themselves are optimal regenerating codes, among MDS codes with linear repair schemes. Moreover, we give a characterization of MDS codes with good linear repair schemes which holds in any parameter regime, and which can be used to give non-trivial repair schemes for RS codes in other settings.

Along the way, we'll develop a surprising fact about low-degree polynomials. A fundamental fact about polynomial interpolation is that \(k\) evaluations determine a polynomial of degree \(k-1\). This is necessary in a strong sense: if we have only \(k-1\) evaluations, we learn nothing at all about the evaluation at a \(k\)-th point. However, our repair scheme for RS codes shows that if we are allowed to consider partial evaluations (in the sense that we only get a few bits of information about each evaluation), then in fact we can do better (in the sense that we can use fewer bits in total to determine an unknown evaluation).
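
To make the interpolation fact above concrete, here is standard Lagrange interpolation over a prime field (this is not the repair scheme itself, and the field and polynomial below are our own choices):

    # A polynomial of degree < k over GF(p) is determined by k evaluations;
    # Lagrange interpolation recovers its value at any further point.
    p = 101                                    # a small prime field GF(p)

    def interpolate_at(points, x):
        """points: k pairs (x_i, f(x_i)) with distinct x_i; returns f(x) mod p."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, -1, p)) % p
        return total

    def f(x):                                  # a degree-2 example, so k = 3 evaluations suffice
        return (3 * x * x + 5 * x + 7) % p

    pts = [(x, f(x)) for x in (1, 2, 3)]
    print(interpolate_at(pts, 10), f(10))      # the two values agree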

This is joint work with Venkat Guruswami.


Outing: Stress Factory Comedy Club

A trip to the show The NY Kings Comedy Tour at the Stress Factory in New Brunswick. If you would like to take part, please buy a ticket directly online. Seating is on a first-come, first-served basis, so on the day of the event we will try to reserve a block of tables together. (The all-you-can-eat buffet is OK but nothing special.)

Social program

Brunch at Claire and Eric's house, organized by Eric's and Mike's families, starting at about 10:30am. If you want to attend, RSVP to Mike Saks ([email protected]) for the address and let him know of any dietary restrictions. Ride sharing will be organized during the workshop.