The polymath blog

September 22, 2015

The Erdős discrepancy problem has been solved by Terence Tao

Filed under: polymath5 — Gil Kalai @ 12:41 pm

Polymath5 was devoted to the Erdős discrepancy problem. It ran in 2010, with a few additional posts in 2012, but did not reach a solution. The problem has now been solved by Terry Tao, using some observations from the polymath project combined with important recent developments in analytic number theory. See this blog post from Tao’s blog and this concluding blog post from Gowers’s blog.
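For readers meeting the problem for the first time, here is a small illustrative sketch (my own toy code, unrelated to Tao's proof) of the quantity involved: the discrepancy of a finite ±1 sequence over homogeneous arithmetic progressions d, 2d, …, nd.

```python
def discrepancy(f):
    """Discrepancy of a finite +-1 sequence: the maximum of
    |f(d) + f(2d) + ... + f(nd)| over all step sizes d and lengths n,
    where f[i-1] stores f(i)."""
    N = len(f)
    best = 0
    for d in range(1, N + 1):
        s = 0
        for m in range(d, N + 1, d):  # multiples d, 2d, 3d, ... up to N
            s += f[m - 1]
            best = max(best, abs(s))
    return best

# The alternating sequence +1, -1, +1, ... has bounded partial sums along
# d = 1, but along d = 2 every term is -1, so its discrepancy is N/2.
alt = [(-1) ** (i + 1) for i in range(1, 1001)]
print(discrepancy(alt))  # → 500
```

The alternating sequence illustrates the basic tension: controlling one progression forces growth along another. Tao's theorem says this cannot be avoided, as every infinite ±1 sequence has infinite discrepancy.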

January 20, 2014

Two polymath (of a sort) proposed projects

Filed under: discussion,polymath proposals — Gil Kalai @ 5:20 pm

This post is meant to propose and discuss a polymath project and a sort of polymath project.

I. A polymath proposal: Convex hulls of real algebraic varieties.

One of the interesting questions regarding the polymath endeavor was:

Can polymath be used to develop a theory/new area?

My idea is to have a project devoted to developing a theory of “convex hulls of real algebraic varieties”. The case where the varieties are simply finite sets of points is a well-developed area of mathematics – the theory of convex polytopes – but the general case has not been studied much. I suppose that for such a project the first discussions will be devoted to raising questions and research directions. (And to mentioning some work already done.)
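To give a flavor of the objects involved, here is a minimal illustrative sketch (the sampling choices are my own, not part of the proposal): convex hulls of finite samples from two plane real algebraic varieties. On the circle x² + y² = 1 every sample point is extreme, while on a non-convex cubic curve many sample points fall inside the hull.

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Points on the circle x^2 + y^2 = 1: every sampled point is extreme.
n = 12
circle = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
          for k in range(n)]
assert len(convex_hull(circle)) == n

# Points on the cubic curve y = x^3 - x: the curve is not convex,
# so many sample points are interior to the hull.
cubic = [(x / 10, (x / 10) ** 3 - x / 10) for x in range(-15, 16)]
assert len(convex_hull(cubic)) < len(cubic)
```

Already in the plane the hull boundary of a variety mixes pieces of the variety itself with straight bitangent segments; part of the proposed theory would be to understand this structure in higher dimensions.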

In general (but perhaps more so for an open-ended project), I would also like to see polymath projects on a longer time scale than the existing ones, though perhaps less intensive, which people can join or spin off from at will at various times.

II. A polymath-of-a-sort proposal: Statements about the Riemann Hypothesis

The Riemann hypothesis is arguably the most famous open question in mathematics. My view is that it is premature to try to attack the RH by a polymath project (but I am not an expert and, in any case, a project of this kind is better conducted with some specific program in mind). I propose something different. In a sort of polymath spirit, the project I propose invites participants, especially professional mathematicians who have thought about the RH over the years, to share their thoughts about it.

Ideally each comment will be

1) One or a few paragraphs long

2) Well thought out, focused and rather polished

A few comments by the same contributor are also welcome.

To be clear, the thread I propose is not going to be a research thread, nor a place for discussion beyond some clarifying questions. Rather, it is going to be a platform for interested mathematicians to make statements and express polished thoughts about RH. (Also, if adopted, maybe we will need a special name for such a thing.)


This thread does not launch either of the two suggested projects; rather, it is a place to discuss these proposals further. For the second project, it would be better still if the person running it were an expert in the area, and certainly not ignorant of it. For the first project, maybe there are better ideas for areas or theories appropriate for polymathing.

November 4, 2013

Polymath9: P=NP? (The Discretized Borel Determinacy Approach)

Filed under: polymath proposals — Gil Kalai @ 2:07 pm


Tim Gowers proposed and launched a new polymath project aimed at a certain approach he has for proving that NP \ne P.

September 20, 2013

Polymath8 – A Success!

Filed under: news — Gil Kalai @ 5:58 pm

The main objectives of the polymath8 project, initiated by Terry Tao back in June, were “to understand the recent breakthrough paper of Yitang Zhang establishing an infinite number of prime gaps bounded by a fixed constant {H}, and then to lower that value of {H} as much as possible.”

Polymath8 was a remarkable success! Within two months, the best value of H, which was 70,000,000 in Zhang’s proof, was reduced to 5,414. Moreover, the polymath setting looked advantageous for this project compared to traditional ways of doing mathematics. (I have written a post with some more details and thoughts about it, viewed from a distance.)

August 9, 2013

Polymath7 research thread 5: the hot spots conjecture

Filed under: hot spots,research — Terence Tao @ 7:22 pm

This post is the new research thread for the Polymath7 project to solve the hot spots conjecture for acute-angled triangles, superseding the previous thread; this project had experienced a period of low activity for many months, but has recently picked up again, due both to renewed discussion of the numerical approach to the problem, and also some theoretical advances due to Miyamoto and Siudeja.

On the numerical side, we have decided to focus first on the problem of obtaining validated upper and lower bounds for the second Neumann eigenvalue {\mu_2} of a triangle {\Omega=ABC}. Good upper bounds are relatively easy to obtain, simply by computing the Rayleigh quotient of numerically obtained approximate eigenfunctions, but lower bounds are trickier. This paper of Liu and Oishi has some promising approaches.
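For reference, the variational characterisation underlying these upper bounds is the standard one (nothing here is specific to this project):

```latex
% Rayleigh quotient characterisation of the second Neumann eigenvalue
% \mu_2 of \Omega, minimising over mean-zero trial functions u:
\mu_2(\Omega) \;=\; \min_{\substack{u \in H^1(\Omega)\\ \int_\Omega u = 0}}
  \frac{\int_\Omega |\nabla u|^2}{\int_\Omega u^2},
\qquad\text{hence}\qquad
\mu_2(\Omega) \;\le\; \frac{\int_\Omega |\nabla u|^2}{\int_\Omega u^2}
\quad\text{for any fixed mean-zero } u.
```

In particular, plugging in any numerically computed approximate eigenfunction (projected to have mean zero) yields a rigorous upper bound, which is why the upper-bound side of the problem is comparatively easy.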

After we get good bounds on the eigenvalue, the next step is to get good control on the eigenfunction; some approaches are summarised in this note of Lior Silberman, mainly based on gluing together exact solutions to the eigenfunction equation in various sectors or disks. Some recent papers of Kwasnicki-Kulczycki, Melenk-Babuska, and Driscoll employ similar methods and may be worth studying further. However, in view of the theoretical advances, the precise control on the eigenfunction that we need may be different from what we had previously been contemplating.

These two papers of Miyamoto introduced a promising new method to theoretically control the behaviour of the second Neumann eigenfunction {u_2}, by taking linear combinations of that eigenfunction with other, more explicit, solutions to the eigenfunction equation {\Delta u = - \mu_2 u}, restricting that combination to nodal domains, and then computing the Dirichlet energy on each domain. Among other things, these methods can be used to exclude critical points occurring anywhere in the interior or on the edges of the triangle except for those points that are close to one of the vertices; and in this recent preprint of Siudeja, two further partial results on the hot spots conjecture are obtained by a variant of the method:

  • The hot spots conjecture is established unconditionally for any acute-angled triangle which has one angle less than or equal to {\pi/6} (actually a slightly larger region than this is obtained). In particular, the case of very narrow triangles has been resolved (the dark green region in the figure below).
  • The hot spots conjecture is also established for any acute-angled triangle with the property that the second eigenfunction {u_2} has no critical points on two of the three edges (excluding vertices).

So if we can develop more techniques to rule out critical points occurring on edges (i.e. to keep eigenfunctions monotone on the edges on which they change sign), we may be able to establish the hot spots conjecture for a further range of triangles. In particular, some hybrid of the Miyamoto method and the numerical techniques we are beginning to discuss may be a promising approach to fully resolve the conjecture. (For instance, the Miyamoto method relies on upper bounds on {\mu_2}, and these can be obtained numerically.)

The arguments of Miyamoto also allow one to rule out critical points occurring at most of the interior points of a given triangle; it is only the points that are very close to one of the three vertices which we cannot yet rule out by Miyamoto’s methods. (But perhaps they can be ruled out by the numerical methods we are also developing, thus giving a hybrid solution to the conjecture.)

Below the fold I’ll describe some of the theoretical tools used in the above arguments.


June 4, 2013

Polymath proposal: bounded gaps between primes

Filed under: planning,polymath proposals — Terence Tao @ 4:31 am

Two weeks ago, Yitang Zhang announced his result establishing that bounded gaps between primes occur infinitely often, with the explicit upper bound of 70,000,000 given for this gap.  Since then there has been a flurry of activity in reducing this bound, with the current record being 4,802,222 (but likely to improve at least by a little bit in the near future).

It seems that this naturally suggests a Polymath project with two interrelated goals:

  1. Further improving the numerical upper bound on gaps between primes; and
  2. Understanding and clarifying Zhang’s argument (and other related literature, e.g. the work of Bombieri, Fouvry, Friedlander, and Iwaniec on variants of the Elliott-Halberstam conjecture).

Part 1 of this project splits off into somewhat independent sub-projects:

  1. Finding narrow prime admissible tuples of a given cardinality (or, dually, finding large prime admissible tuples in a given interval).  This part of the project would be relatively elementary in nature, relying on combinatorics, elementary number theory, computer search, and perhaps some clever algorithm design.  (Scott Morrison has already been hosting a de facto project of this form at this page, and is happy to continue doing so).
  2. Solving a calculus of variations problem associated with the Goldston-Yildirim-Pintz argument (discussed at this blog post, or in this older survey of Soundararajan) [in particular, this could lead to an improvement of a certain key parameter k_0, currently at 341,640, even without any improvement in the parameter \varpi mentioned in part 3. below.]
  3. Delving through the “hard” part of Zhang’s paper in order to improve the value of a certain key parameter \varpi (which Zhang sets at 1/1168, but is likely to be enlargeable).
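To make sub-project 1 concrete, here is a minimal sketch (my own illustration, not project code) of the admissibility test: a k-tuple of integers is admissible if, for every prime p, its entries avoid at least one residue class mod p. Only primes p ≤ k need checking, since a k-tuple cannot cover all p residue classes when p > k.

```python
def is_admissible(H):
    """Check whether the integer tuple H is admissible: for every prime
    p <= len(H), the residues {h mod p} must miss at least one class
    mod p.  Primes p > len(H) can never be fully covered by len(H)
    residues, so they need not be checked."""
    k = len(H)
    sieve = [True] * (k + 1)  # sieve of Eratosthenes up to k
    for p in range(2, k + 1):
        if sieve[p]:
            for q in range(p * p, k + 1, p):
                sieve[q] = False
            if len({h % p for h in H}) == p:  # all residues mod p covered
                return False
    return True

# (0, 2, 6) is admissible (the pattern behind prime triples p, p+2, p+6),
# while (0, 2, 4) is not: mod 3 it covers all residues 0, 2, 1.
print(is_admissible((0, 2, 6)), is_admissible((0, 2, 4)))  # → True False
```

Sub-project 1.1 then amounts to searching for admissible tuples of a given cardinality k_0 whose diameter max(H) - min(H) is as small as possible; that diameter is (an upper bound for) the resulting gap H.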

Part 2 of this project could be run as an online reading seminar, similar to the online reading seminar of the Furstenberg-Katznelson paper that was part of the Polymath1 project.  It would likely focus on the second half of Zhang’s paper and would fit well with part 1.3.  I could run this on my blog, and this existing blog post of mine could be used for part 1.2.

As with other polymath projects, it is conceivable that enough results are obtained to justify publishing one or more articles (which, traditionally, we would publish under the D.H.J. Polymath pseudonym).  But it is perhaps premature to discuss this possibility at this early stage of the process.

Anyway, I would be interested to gauge the level of interest and likely participation in these projects, together with any suggestions for improving the proposal or other feedback.

March 2, 2013

Polymath proposal (Tim Gowers): Randomized Parallel Sorting Algorithm

Filed under: polymath proposals — Gil Kalai @ 4:41 pm


From Holroyd’s sorting networks picture gallery

A celebrated theorem of Ajtai, Komlós and Szemerédi describes a sorting network for $n$ numbers of depth $O(\log n)$: that is, $O(\log n)$ rounds, in each of which $n/2$ comparisons run in parallel. Tim Gowers proposes to collectively find a randomized parallel sorting algorithm with the same properties.
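For readers unfamiliar with the comparator-network model, here is a minimal sketch (my own toy illustration, not the AKS construction) of the simplest such network, odd-even transposition sort. It has the right shape, with roughly $n/2$ disjoint compare-exchange operations per round, but depth $n$ rather than the $O(\log n)$ the proposal is after.

```python
def odd_even_transposition_sort(a):
    """Sort via a depth-n comparator network: round r compare-exchanges
    the disjoint adjacent pairs at even or odd offsets, so each round
    performs about n/2 independent comparisons in parallel.  (The AKS
    network achieves depth O(log n); this toy network has depth n.)"""
    a = list(a)
    n = len(a)
    for r in range(n):
        for i in range(r % 2, n - 1, 2):  # one round of disjoint comparators
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 3, 8, 1, 9, 2]))  # → [1, 2, 3, 5, 8, 9]
```

The point of the proposal is the gap between this trivial depth-$n$ network and the depth-$O(\log n)$ AKS network, whose constants are enormous; a simple randomized construction achieving logarithmic depth would be of great interest.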

February 14, 2013

Next Polymath Project(s): What, When, Where?

Filed under: polymath proposals — Gil Kalai @ 3:26 pm


Let us have a little discussion about it.

We may also discuss both general and specific open mathematical research projects with a different flavor or different rules.

Proposals for polymath projects appeared on this blog,  in this post on Gowers’s blog, and in several other places.

September 10, 2012

Polymath7 research threads 4: the Hot Spots Conjecture

Filed under: hot spots,research — Terence Tao @ 7:28 pm

It’s time for another rollover of the Polymath7 “Hot Spots” project, as the previous research thread has again become full.

Activity has now focused on a numerical strategy to solve the hot spots conjecture for all acute-angled triangles ABC.  In broad terms, the strategy (also outlined in this document) is as follows.   (I’ll focus here on the problem of estimating the eigenfunction; one also needs to simultaneously obtain control on the eigenvalue, but this seems to be a somewhat more tractable problem.)

  1. First, observe that as the conjecture is scale invariant, the only relevant parameters for the triangle ABC are the angles \alpha,\beta,\gamma, which of course lie between 0 and \pi/2 and add up to \pi.  We can also order \alpha \leq \beta \leq \gamma, giving a parameter space which is a triangle between the values (\alpha,\beta,\gamma) = (0,\pi/2,\pi/2), (\pi/4,\pi/4,\pi/2), (\pi/3,\pi/3,\pi/3).
  2. The triangles that are too close to the degenerate isosceles triangle {}(0,\pi/2,\pi/2) or the equilateral triangle {}(\pi/3,\pi/3,\pi/3) need to be handled by analytic arguments.  (Preliminary versions of these arguments can be found here and in Section 6 of these notes respectively, but the constants need to be made explicit (and as strong as possible)).
  3. For the remaining parameter space, we will use a sufficiently fine discrete mesh of angles (\alpha,\beta,\gamma); the optimal spacing of this mesh is yet to be determined.
  4. For each triplet of angles in this mesh,  we partition the triangle ABC (possibly after rescaling it to a reference triangle \hat \Omega, such as the unit right-angled triangle) into smaller subtriangles, and approximate the second eigenfunction w (or its rescaled counterpart \hat w) by the eigenfunction w_h (or \hat w_h) for a finite element restriction of the eigenvalue problem, in which the function is continuous and piecewise polynomial of low degree (probably linear or quadratic) in each subtriangle; see Section 2.2 of these notes.    With respect to a suitable basis, w_h can be represented by a finite vector u_h.
  5. Using numerical linear algebra methods (such as Lanczos iteration) with interval arithmetic, obtain an approximation \tilde u to u_h, with rigorous bounds on the error between the two.  This gives an approximation to w_h or \hat w_h with rigorous error bounds (initially of L^2 type, but presumably upgradable).
  6. After (somehow) obtaining a rigorous error bound between w and w_h (or \hat w and \hat w_h), conclude that w stays far from its extremum when one is sufficiently far away from the vertices A,B,C of the triangle.
  7. Using L^\infty stability theory of eigenfunctions (see Section 5 of these notes), conclude that w stays far from its extremum even when (\alpha,\beta,\gamma) is not at a mesh point.  Thus, the hot spots conjecture is not violated away from the vertices.  (This argument should also handle the vertex that is neither the maximum nor minimum value for the eigenfunction, leaving only the neighbourhoods of the two extremising vertices to deal with.)
  8. Finally, use an analytic argument (perhaps based on these calculations) to show that the hot spots conjecture is also not violated near an extremising vertex.
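As a small illustrative aid for steps 1 and 3 (the spacing here is my own placeholder, since the optimal mesh spacing is yet to be determined), the discrete mesh of angle triples can be enumerated as follows:

```python
import math

def angle_mesh(h):
    """Grid of acute-triangle angle triples (alpha, beta, gamma) with
    alpha <= beta <= gamma, alpha + beta + gamma = pi, and all angles
    below pi/2, on a grid of spacing h (step 3 of the strategy; the
    right value of h is still to be chosen)."""
    mesh = []
    steps = int(math.pi / h) + 1
    for i in range(1, steps):
        for j in range(i, steps):  # j >= i enforces alpha <= beta
            alpha, beta = i * h, j * h
            gamma = math.pi - alpha - beta
            if beta <= gamma < math.pi / 2:  # ordered and acute
                mesh.append((alpha, beta, gamma))
    return mesh

mesh = angle_mesh(math.pi / 12)
assert all(abs(sum(t) - math.pi) < 1e-9 for t in mesh)
assert all(t[0] <= t[1] <= t[2] < math.pi / 2 for t in mesh)
```

Each triple in such a mesh would then feed into steps 4-7, with the analytic arguments of step 2 covering the corners of the parameter space that the mesh excludes.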

This all looks like it should work in principle, but it is a substantial amount of effort; there is probably still some scope to try to simplify the scheme before we really push for implementing it.

July 13, 2012

Minipolymath4 project, second research thread

Filed under: polymath proposals — Terence Tao @ 7:49 pm

It’s been almost 24 hours since the mini-polymath4 project was opened in order to collaboratively solve Q3 from the 2012 IMO.  In that time, the first part of the question was solved, but the second part remains open.  In other words, it remains to show that for any sufficiently large k and any N, there is some n \geq (1.99)^k such that the second player B in the liar’s guessing game cannot force a victory no matter how many questions he asks.

As the previous research thread is getting quite lengthy (and is mostly full of attacks on the first part of the problem, which is now solved), I am rolling over to a fresh thread (as is the standard practice with the polymath projects).  Now would be a good time to summarise some of the observations from the previous thread which are still relevant here.  For instance, here are some relevant statements made in previous comments:

  1. Without loss of generality we may take N=n+1; if B can’t win this case, then he certainly can’t win for any larger value of N (since A could simply restrict his number x to a number up to n+1), and if B can win in this case (i.e. he can eliminate one of the n+1 possibilities) he can also perform elimination for any larger value of N by partitioning the possible answers into n+1 disjoint sets and running the N=n+1 strategy, and then one can iterate one’s way down to N=n+1.
  2. In order to show that B cannot force a win, one needs to show that for any possible sequence of questions B asks in the N=n+1 case, it is possible to construct a set of responses by A under which none of the n+1 possibilities for x are eliminated; that is, each x is consistent with at least one answer in every block of k+1 consecutive answers that A gives.
  3. It may be useful to look at small examples, e.g. can one show that B cannot win when k=1, n=1? Or when k=2, n=3?

It seems that some of the readers of this blog have already obtained a solution to this problem from other sources, or from working separately on the problem, so I would ask that they refrain from giving spoilers for this question until at least one solution has been arrived at collaboratively.

Also, participants are encouraged to edit the wiki as appropriate with new developments and ideas, and participate in the discussion thread for any meta-discussion about the polymath project.

