The polymath blog

November 8, 2009

Proposal (Tim Gowers): The Origin of Life

Filed under: polymath proposals — Gil Kalai @ 12:54 pm

[Image: Early Earth]

A presentation of one possible near-future polymath project, on the mathematics of the origin of life, can be found on Gowers’s blog.

October 27, 2009

(Research thread V) Deterministic way to find primes

Filed under: finding primes,research — Terence Tao @ 10:25 pm

It’s probably time to refresh the previous thread for the “finding primes” project, and to summarise the current state of affairs.

The current goal is to find a deterministic way to locate a prime in an interval [z,2z] in time that breaks the “square root barrier” of \sqrt{z} (or more precisely, z^{1/2+o(1)}).  Currently, we have two ways to reach that barrier:

  1. Assuming the Riemann hypothesis, the largest prime gap in [z,2z] is of size z^{1/2+o(1)}.  So one can simply test consecutive numbers for primality until one gets a hit (using, say, the AKS algorithm, any number of size z can be tested for primality in time z^{o(1)}).
  2. The second method is due to Odlyzko, and does not require the Riemann hypothesis.  There is a contour integration formula that allows one to write the prime counting function \pi(z) up to error z^{1+o(1)}/T in terms of an integral involving the Riemann zeta function over an interval of length O(T), for any 1 \leq T \leq z.  The latter integral can be computed to the required accuracy in time about z^{o(1)} T.  With this and a binary search it is not difficult to locate an interval of width z^{1+o(1)}/T that is guaranteed to contain a prime in time z^{o(1)} T.  Optimising by choosing T = z^{1/2} and using a sieve (or by testing the elements for primality one by one), one can then locate that prime in time z^{1/2+o(1)}.
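For readers who want to experiment, method 1 above is easy to sketch in code. This is purely illustrative and the function names are mine: `is_prime` uses a deterministic Miller–Rabin variant with a fixed witness set (known to be correct for all n below roughly 3.3 \cdot 10^{24}) as a practical stand-in for AKS.

```python
def is_prime(n):
    # Deterministic Miller-Rabin with a fixed witness set, known to be
    # correct for all n < 3.3 * 10**24; a practical stand-in for AKS.
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True


def first_prime_after(z):
    # Scan consecutive integers starting at z; under RH the largest
    # prime gap in [z, 2z] is z^(1/2 + o(1)), bounding the scan length.
    n = z
    while not is_prime(n):
        n += 1
    return n
```

Under RH, the scan in `first_prime_after` terminates within z^{1/2+o(1)} steps, which is exactly the barrier under discussion.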

Currently we have one promising approach to break the square root barrier, based on the polynomial method, but while individual components of this approach fall underneath the square root barrier, we have not yet been able to get the whole thing below the square root (or even to match it).  I will sketch the approach (as far as I understand it) below; right now we need some shortcuts (e.g. FFT, fast matrix multiplication, that sort of thing) that can cut the run time further.


October 16, 2009

Nature article on Polymath

Filed under: news — Terence Tao @ 4:25 pm

Timothy Gowers and Michael Nielsen have written an article “Massively collaborative mathematics”, focusing primarily on the first Polymath project, for the October issue of Nature.

August 28, 2009

(Research Thread IV) Deterministic way to find primes

Filed under: finding primes,research — Terence Tao @ 1:43 am

This post will be somewhat abridged due to my traveling schedule.

The previous research thread for the “finding primes” project is now getting quite full, so I am opening up a fresh thread to continue the project.

Currently we are up against the “square root barrier”: the fastest time we know of to find a k-digit prime is about \sqrt{10^k} (up to \exp(o(k)) factors), even in the presence of a factoring oracle (though, thanks to a method of Odlyzko, we no longer need the Riemann hypothesis).  We also have a “generic prime” razor that has eliminated (or severely limited) a number of potential approaches.

One promising approach, though, proceeds by transforming the “finding primes” problem into a “counting primes” problem.  If we can compute the prime counting function \pi(x) in substantially less than \sqrt{x} time, then we have beaten the square root barrier.

Currently we have a way to compute the parity (least significant bit) of \pi(x) in time x^{1/2+o(1)}, and there is hope to improve this (especially given the progress on the toy problem of counting square-free numbers less than x).  There are some variants that also look promising, for instance to work in polynomial extensions of finite fields (in the spirit of the AKS algorithm) and to look at residues of \pi(x) in other moduli, e.g. \pi(x) mod 3, though currently we can’t break the x^{2/3+o(1)} barrier for that particular problem.
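The counting-to-finding reduction can be illustrated with a toy implementation (the names are mine): a naive sieve stands in for a hypothetical fast prime-counting method, and binary search on \pi localises a prime in [z,2z] with O(\log z) counting queries.  Only the reduction, not the running time, is faithful here.

```python
def prime_count(x):
    # Naive sieve-based pi(x): a stand-in for a hypothetical fast
    # prime-counting method; the reduction below only needs pi queries.
    if x < 2:
        return 0
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    i = 2
    while i * i <= x:
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
        i += 1
    return sum(sieve)


def locate_prime(z):
    # Binary search for the smallest prime in [z, 2z] using only
    # pi queries; Bertrand's postulate guarantees one exists.
    lo, hi = z, 2 * z
    while lo < hi:
        mid = (lo + hi) // 2
        if prime_count(mid) - prime_count(lo - 1) >= 1:
            hi = mid       # some prime lies in [lo, mid]
        else:
            lo = mid + 1   # [lo, mid] is prime-free
    return lo
```

If `prime_count` ran in time substantially below \sqrt{x}, this search would break the square root barrier.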

August 13, 2009

(Research Thread III) Deterministic way to find primes

Filed under: finding primes,research — Terence Tao @ 5:10 pm

This is a continuation of Research Thread II of the “Finding primes” polymath project, which is now full.  It seems that we are facing particular difficulty breaching the square root barrier; in particular, the following problems remain open:

  1. Can we deterministically find a prime of size at least n in o(\sqrt{n}) time (assuming hypotheses such as RH)?  Assume one has access to a factoring oracle.
  2. Can we deterministically find a prime of size at least n in O(n^{1/2+o(1)}) time unconditionally (in particular, without RH)? Assume one has access to a factoring oracle.

We are still in the process of weighing several competing strategies to solve these and related problems.  Some of these have been effectively eliminated, but we have a number of still viable strategies, which I will attempt to list below.  (The list may be incomplete, and of course totally new strategies may emerge also.  Please feel free to elaborate on or extend this list in the comments.)

Strategy A: Find a short interval [x,x+y] such that \pi(x+y) - \pi(x) > 0, where \pi(x) is the number of primes less than x, by using information about the zeroes \rho of the Riemann zeta function.

Comment: it may help to assume a Siegel zero (or, at the other extreme, to assume RH).

Strategy B: Assume that an interval [n,n+a] consists entirely of u-smooth numbers (i.e. no prime factors greater than u) and somehow arrive at a contradiction.  (To break the square root barrier, we need a = o(\sqrt{u}), and to stop the factoring oracle from being ridiculously overpowered, n should be subexponential size in u.)

Comment: in this scenario, we will have n/p close to an integer for many primes between \sqrt{u} and u, and n/p far from an integer for all primes larger than u.

Strategy C: Solve the following toy problem: given n and u, what is the distance from n to the closest integer which contains a factor comparable to u (e.g. in [u,2u])?  [Ideally, we want a prime factor here, but even the problem of getting an integer factor is not fully understood yet.]  Beating \sqrt{u} here is analogous to breaking the square root barrier in the primes problem.

Comments:

  1. The trivial bound is u/2 – just move to the nearest multiple of u to n.  This bound can be attained for really large n, e.g. n = (2u)! + u/2.  But it seems we can do better for small n.
  2. For u \leq n \leq 2u, one trivially does not have to move at all.
  3. For 2u \leq n \leq u^2, one has an upper bound of O(n/u), by noting that having a factor comparable to u is equivalent to having a factor comparable to n/u.
  4. For n \sim u^2, one has an upper bound of O(\sqrt{u}), by taking x^2 to be the first square larger than n, y^2 to be the closest square to x^2-n, and noting that (x-y)(x+y) has a factor comparable to u and is within O(\sqrt{u}) of n.  (This paper improves this bound to O(u^{0.4}) conditional on a strong exponential sum estimate.)
  5. For n = poly(u), it may be possible to take a dynamical systems approach, writing n in base u and incrementing or decrementing u, hoping for some equidistribution.  Some sort of “smart” modification of u may also be effective.
  6. There is a large paper by Ford devoted to this sort of question.
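For experimenting with Strategy C at small sizes (e.g. checking the O(\sqrt{u}) behaviour near n \sim u^2 in comment 4 empirically), a brute-force sketch suffices; the function name is mine.

```python
def dist_to_nearby_factor(n, u):
    # Smallest |m - n| over integers m with a divisor in [u, 2u].
    # Pure brute force: only intended for small-scale experiments.
    d = 0
    while True:
        for m in (n - d, n + d):
            if m >= u and any(m % f == 0 for f in range(u, 2 * u + 1)):
                return d
        d += 1
```

For u \leq n \leq 2u it returns 0 (comment 2), since n divides itself; moving n toward u^2 lets one test the predictions of comment 4 at small scales.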

Strategy D. Find special sequences of integers that are known to have special types of prime factors, or are known to have unusually high densities of primes.

Comment. There are only a handful of explicitly computable sparse sequences that are known unconditionally to capture infinitely many primes.

Strategy E. Find efficient deterministic algorithms for finding various types of “pseudoprimes” – numbers which obey some of the properties of being prime, e.g. 2^{n-1}=1 \hbox{ mod } n.  (For this discussion, we will consider primes as a special case of pseudoprimes.)

Comment. For the specific problem of solving 2^{n-1}=1 \hbox{ mod } n there is an elementary observation that if n obeys this property, then 2^n-1 does also, which solves this particular problem; but it does not indicate how to, for instance, satisfy 2^{n-1}=1 \hbox{ mod } n and 3^{n-1}=1 \hbox{ mod } n simultaneously.
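The observation in the comment is easy to verify numerically (helper name is mine): if 2^{n-1} \equiv 1 mod n then n divides 2^n - 2 = m - 1 for m = 2^n - 1, and since 2^n \equiv 1 mod m, it follows that 2^{m-1} \equiv 1 mod m.

```python
def fermat_base2(n):
    # Base-2 Fermat test: primes and base-2 pseudoprimes (e.g. 341) pass.
    return n > 1 and pow(2, n - 1, n) == 1


# If n passes, so does m = 2^n - 1: n divides 2^n - 2 = m - 1, and
# 2^n = 1 (mod m), hence 2^(m-1) = (2^n)^((m-1)/n) = 1 (mod m).
for n in range(2, 400):
    if fermat_base2(n):
        assert fermat_base2(2 ** n - 1)
```

Iterating the map n \mapsto 2^n - 1 thus produces arbitrarily large base-2 pseudoprimes deterministically, which is exactly why this single condition is considered solved.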

As always, oversight of this research thread is conducted at the discussion thread, and any references and detailed computations should be placed at the wiki.

August 9, 2009

(Research thread II) Deterministic way to find primes

Filed under: finding primes,research — Terence Tao @ 3:57 am

This thread marks the formal launch of “Finding primes” as the massively collaborative research project Polymath4; it supersedes the proposal thread for this project (which has now become rather lengthy) as the official “research” thread. (Simultaneously with this research thread, we also have the discussion thread to oversee the research thread and to provide a forum for casual participants, and also the wiki page to store all the settled knowledge and accumulated insights gained from the project to date.) See also this list of general polymath rules.

The basic problem we are studying here can be stated in a number of equivalent forms:

Problem 1. (Finding primes) Find a deterministic algorithm which, when given an integer k, is guaranteed to locate a prime of at least k digits in length in as quick a time as possible (ideally, in time polynomial in k, i.e. after O(k^{O(1)}) steps).

Problem 2. (Finding primes, alternate version) Find a deterministic algorithm which, after running for k steps, is guaranteed to locate as large a prime as possible (ideally, with a polynomial number of digits, i.e. at least k^c digits for some c>0.)

To make the problem easier, we will assume the existence of a primality oracle, which can test whether any given number is prime in O(1) time, as well as a factoring oracle, which will provide all the factors of a given number in O(1) time. (Note that the latter supersedes the former.) The primality oracle can be provided essentially for free, due to polynomial-time deterministic primality algorithms such as the AKS primality test; the factoring oracle is somewhat more expensive (there are deterministic factoring algorithms, such as the quadratic sieve, which are suspected to be subexponential in running time, but no polynomial-time algorithm is known), but seems to simplify the problem substantially.

The problem comes in at least three forms: a strong form, a weak form, and a very weak form.

  1. Strong form: Deterministically find a prime of at least k digits in poly(k) time.
  2. Weak form: Deterministically find a prime of at least k digits in \exp(o(k)) time, or equivalently find a prime larger than k^C in time O(k) for any fixed constant C.
  3. Very weak form: Deterministically find a prime of at least k digits in significantly less than (10^k)^{1/2} time, or equivalently find a prime significantly larger than k^2 in time O(k).

The problem in all of these forms remains open, even assuming a factoring oracle and strong number-theoretic hypotheses such as GRH. One of the main difficulties is that we are seeking a deterministic guarantee that the algorithm works in all cases, which is very different from a heuristic argument that the algorithm “should” work in “most” cases. (Note that there are already several efficient probabilistic or heuristic prime generation algorithms in the literature, e.g. this one, which already suffice for all practical purposes; the question here is purely theoretical.) In other words, rather than working in some sort of “average-case” environment where probabilistic heuristics are expected to be valid, one should instead imagine a “Murphy’s law” or “worst-case” scenario in which the primes are situated in a “maximally unfriendly” manner. The trick is to ensure that the algorithm remains efficient and successful even in the worst-case scenario.

Below the fold, we will give some partial results, and some promising avenues of attack to explore. Anyone is welcome to comment on these strategies, and to propose new ones. (If you want to participate in a more “casual” manner, you can ask questions on the discussion thread for this project.)

Also, if anything from the previous thread that you feel is relevant has been missed in the text below, please feel free to recall it in the comments to this thread.


August 3, 2009

Polymath on other blogs

Filed under: planning — Terence Tao @ 12:33 pm

There has been some discussion of the polymath enterprise on other blogs, so I thought it would be good to collect these links on the main polymath wiki page.  If you find another link about polymath on the net, please feel free to add it to the wiki also (or at least to mention it in the comments here).

It should also be mentioned that besides the proposed polymath projects on this blog, Gil Kalai is in the process of setting up a polymath project on the polynomial Hirsch conjecture, tentatively scheduled to be launched later this month.  See the following preparatory posts:

  1. The polynomial Hirsch conjecture, a proposal for Polymath 3 (July 17)
  2. The polynomial Hirsch conjecture, a proposal for Polymath 3 cont. (July 28)
  3. The polynomial Hirsch conjecture – how to improve the upper bounds (July 30)
  4. The polynomial Hirsch conjecture: discussion thread (August 9)
  5. The polynomial Hirsch conjecture: discussion thread continued (September 6)
  6. Plans for polymath3 (December 8). The plan is to launch polymath3 on the polynomial Hirsch conjecture on April 15, 2010.

An extremely rudimentary wiki page for the proposed project, polymath3, has now been created.

New: Tim Gowers devotes a post to several proposals for a polymath project in November.

July 28, 2009

Deterministic way to find primes: discussion thread

Filed under: discussion,finding primes — Terence Tao @ 3:09 pm

The proposal “deterministic way to find primes” is not officially a polymath project yet, but it is beginning to acquire the features of one, as we have already had quite a few interesting ideas.  So perhaps it is time to open up the discussion thread a little earlier than anticipated.  There are a number of purposes to such a discussion thread, including but not restricted to:

  1. To summarise the progress made so far, in a manner accessible to “casual” participants of the project.
  2. To have “meta-discussions” about the direction of the project, and what can be done to make it run more smoothly. (Thus one can view this thread as a sort of “oversight panel” for the research thread.)
  3. To ask questions about the tools and ideas used in the project (e.g. to clarify some point in analytic number theory or computational complexity of relevance to the project).  Don’t be shy; “dumb” questions can in fact be very valuable in regaining some perspective.
  4. (Given that this is still a proposal) To evaluate the suitability of this proposal for an actual polymath, and decide what preparations might be useful before actually launching it.

To start the ball rolling, let me collect some of the observations accumulated as of July 28:

  1. A number of potentially relevant conjectures in complexity theory and number theory have been identified: P=NP, P=BPP, P=promise-BPP, existence of PRG, existence of one-way functions, whether DTIME(2^n) has subexponential circuits, GRH, the Hardy-Littlewood prime tuples conjecture, the ABC conjecture, Cramer’s conjecture, discrete log in P, factoring in P.
    1. The problem is solved if one has P=NP, existence of PRG, or Cramer’s conjecture, so we may assume that these statements all fail.  The problem is probably also solved assuming P=promise-BPP, which is a bit stronger than P=BPP, but weaker than existence of PRG; we currently do not have a solution just assuming P=BPP, due to a difficulty getting enough of a gap in the success probabilities.
      1. Existence of PRG is assured if DTIME(2^n) does not have subexponential circuits (Impagliazzo-Wigderson), or if one has one-way functions (is there a precise statement to this effect?)
    2. Discrete log being hard (or easy) may end up being a useless hypothesis, since one needs to find large primes before discrete logarithms even make sense.
  2. If the problem has a negative answer, it implies (roughly speaking) that all large constructible numbers are composite.  Assuming factoring is in P, it implies the stronger fact that all large constructible numbers are smooth.  This seems unlikely (especially if one assumes ABC).
  3. Besides adding various conjectures in complexity theory or number theory, we have found some other ways to make the problem easier:
    1. The trivial deterministic algorithm for finding k-bit primes takes exponentially long in k in the worst case.  Our goal is polynomial in k.  What about a partial result, such as exp(o(k))?
      1. An essentially equivalent variant: in time polynomial in k, we can find a prime with at least log k digits.  Our goal is k.  Can we find a prime with slightly more than log k digits?
    2. The trivial probabilistic algorithm takes O(k^2) random bits; looks like we can cut this down to O(k).  Our goal is O(log k) (as one can iterate through these bits in polynomial time).  Can we do o(k)?
    3. Rather than find primes, what about finding almost primes?  Note that if factoring is in P, the two problems are basically equivalent.  There may also be other number theoretically interesting sets of numbers one could try here instead of primes.
  4. At the scale log n, primes are assumed to resemble a Poisson process of intensity 1/log n (this can be formalised using a suitably uniform version of the prime tuples conjecture).  Cramer’s conjecture can be viewed as one extreme case of this principle.  Is there some way to use this conjectured Poisson structure without requiring the full strength of Cramer’s conjecture?  (I believe there is also some work of Granville and Soundararajan tweaking Cramer’s prediction slightly, though only by a multiplicative constant if I recall correctly.)

See also the wiki page for this project.

July 27, 2009

Selecting another polymath project

Filed under: planning — Terence Tao @ 5:55 pm

In a few months (the tentative target date is October), we plan to launch another polymath project (though there may also be additional projects before this date); however, at this stage, we have not yet settled on what that project would be, or even how exactly we are to select it.  The purpose of this post, then, is to begin a sort of pre-pre-selection process, in which we discuss how to organise the search for a new project, what criteria we would use to identify particularly promising projects, and how to run the ensuing discussion or voting to decide exactly which project to begin.  (We think it best to only launch one project at a time, for reasons to be discussed below.)

There are already a small number of polymath projects being proposed, and I expect this number to grow in the near future.  Anyone with a problem which is potentially receptive to the polymath approach, and who is willing to invest significant amounts of time and effort to administrate and advance the effort, is welcome to make their own proposal, either in their own forum, or by contacting one of us.  (If you do make a proposal on your own wordpress blog, put it in the category “polymath proposals” so that it will be picked up by the above link.)    There is already some preliminary debate and discussion at the pages of each of these proposals, though one should avoid any major sustained efforts at solving the problem yet, until the participants for the project are fully assembled and prepared, and the formal polymath threads are ready to launch.

[One lesson we got from the minipolymath feedback was that one would like a long period of lead time before a polymath project is formally launched, to get people prepared by reading up and allocating time in advance.    So it makes sense to have the outlines of a project revealed well in advance, though perhaps the precise details of the project (e.g. a compilation of the proposer's own thoughts on the problem) can wait until the launch date.]

On the other hand, we do not want to launch multiple projects at once.  So far, the response to each new launched project has been overwhelming, but this may not always be the case in the future, and in particular simultaneous projects may have to compete with each other for attention, and perhaps most crucially, for the time and efforts of the core participants of the project.  Such a conflict would be particularly acute for projects that are in the same field, or in related fields.  (In particular, we would like to diversify the polymath enterprise beyond combinatorics, which is where most of the existing projects lie.)

So we need some way to identify the most promising projects to work on.  What criteria would we look for in a polymath project that would indicate a high likelihood of full or partial success, or at least a valuable learning experience to aid the organisation of future projects of this type?  Some key factors come to mind:

  1. The amount of expected participation. The more people who are interested in participating, both at a casual level and at a more active full time level, the better the chances that the project will be a success.  We may end up polling readers of this blog for their expected participation level (no participation, observation only, casual participation, active participation, organiser/moderator) for each proposed project, to get some idea as to the interest level.
  2. The feasibility of the project. I would imagine that a polymath to solve the Riemann Hypothesis would be a spectacular and frustrating fiasco; we should focus on problems where it looks like some progress can be made.  Ideally, there should be several potentially promising avenues of inquiry identified in advance; simply dumping the problem onto the participants with no suggestions whatsoever (as was done with the minipolymath project) seems to be a suboptimal way to proceed.
  3. The flexibility of the project. This is related to point #2; it may be that the problem as stated is beyond the ability of the polymath effort, but perhaps some interesting variant of the problem is more feasible.  A problem which allows for a number of variations would be more suitable for a polymath effort, especially since polymath projects seem particularly capable of pursuing multiple directions of attack at once.
  4. The available time and energy of the administrator. Another thing we learned from the minipolymath project was that these projects need one or more active leaders who are willing to take the initiative and push the project in the directions it needs to go (e.g. by encouraging more efforts at exposition when the flood of ideas becomes too chaotic).  The proposer of a project would be one obvious choice for such a leader, but there seems to be no reason why a project could not have multiple such leaders (and any given participant could choose to seize the initiative and make a major push to advance the project unilaterally).
  5. The barriers to entry. Some projects may require a substantial amount of technical preparation before participation; this is perhaps one reason why existing projects have been focused on “elementary” fields of mathematics, such as combinatorics.  Nevertheless, it should be possible (perhaps with some tweaking of the format) to adapt these projects to more “sophisticated” mathematical fields.  For instance, one could imagine a polymath project which is not aimed at solving a particular problem per se, but is instead trying to understand a difficult mathematical topic (e.g. quantum field theory, to pick a subject at random) as thoroughly as possible.  Given the right leadership, and sufficient interest, this very different type of polymath project could well be a great success.
  6. Lack of conflict with existing research. It has been pointed out that one should be careful not to let a polymath project steamroll over the existing research plans of some mathematician (e.g. a grad student’s thesis).  This is one reason why we are planning an extended process to select projects, so that such clashes can be identified as early as possible, presumably removing that particular project from contention.  (There is also the danger that even a proposal for a polymath project may deter other mathematicians from pursuing that problem by more traditional means; this is another point worth discussing here.)

Over to the other readers of this blog: what else should we be looking for in a polymath project?  How quickly should we proceed with the selection process?  Should we decide by popular vote, or by some fixed criteria?

Proposal: Boshernitzan’s problem

Filed under: polymath proposals — Terence Tao @ 2:32 am

Another proposal for a polymath project is the following question of Michael Boshernitzan:

Question. Let x_1, x_2, x_3, \ldots \in {\Bbb Z}^d be a (simple) path in a lattice {\Bbb Z}^d which has bounded step sizes, i.e. 0 < |x_{i+1}-x_i| < C for some C and all i.  Is it necessarily the case that this path contains arbitrarily long arithmetic progressions, i.e. for each k there exist a, r \in {\Bbb Z}^d with r non-zero such that a, a+r, \ldots, a+(k-1)r \in \{x_1,x_2,x_3,\ldots\}?

The d=1 case follows from Szemerédi’s theorem, as the path has positive density.  Even for d=2 and k=3 the problem is open.  The question looks intriguingly like the multidimensional Szemerédi theorem, but it is not obvious how to deduce it from that theorem.  It is also tempting to try to invoke the Furstenberg correspondence principle to create an ergodic theory counterpart to this question, but to my knowledge this has not been done.  There are also some faint resemblances to the angel problem that has recently been solved.

In honour of mini-polymath1, one could phrase this problem as a grasshopper trying to jump forever (with bounded jumps) in a square lattice without creating an arithmetic progression (of length three, say).

It is also worth trying to find a counterexample if C, d, k are large enough.  Note that the continuous analogue of the problem is false: a convex curve in the plane, such as the parabola \{ (x,x^2): x \in {\Bbb R}\}, contains no arithmetic progressions, but is rectifiable.  However it is not obvious how to discretise this example.
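For small-scale counterexample hunting (with C, d, k small), detecting a three-term progression in a finite piece of a path is straightforward via the midpoint characterisation a + c = 2b; the function name below is mine.

```python
def has_3ap(points):
    # Does a finite set of lattice points contain a nontrivial 3-term
    # AP a, a+r, a+2r (r != 0)?  Uses the midpoint criterion c = 2b - a.
    pts = set(points)
    for a in pts:
        for b in pts:
            if a != b:
                c = tuple(2 * bi - ai for ai, bi in zip(a, b))
                if c in pts:
                    return True
    return False
```

In the grasshopper phrasing, one could call this after every jump of a candidate path to see how long the grasshopper survives before a progression of length three appears.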

In short, there seem to be a variety of tempting avenues to try to attack this problem; it may well be that many of them fail, but the reason for that failure should be instructive.

Andrew Mullhaupt informs me that this question would have applications to short pulse rejected Boolean delay equations.

