January 9, 2010
The vote for the next polymath project is taking place on Gowers’s blog!
December 31, 2009
Proposal (Tim Gowers): The Erdős Discrepancy Problem
For a description of the Erdős discrepancy problem and a large discussion, see this post on Gowers’s blog.
The decision on the next polymath project on Gowers’s blog will be between three proposals: the polynomial DHJ problem, Littlewood’s problem, and the Erdős discrepancy problem. To help make the decision, four polls are in place!
November 20, 2009
Proposals (Tim Gowers): Polynomial DHJ, and Littlewood’s problem
Tim Gowers described two additional proposed polymath projects: one about the first unknown cases of the polynomial density Hales–Jewett problem, and another about Littlewood’s conjecture.
I will state one problem from each of these posts:
1) (Related to polynomial DHJ) Suppose you have a family F of graphs on n labelled vertices, such that there are no two graphs G and H in the family with G a subgraph of H and the edges of H that are not in G forming a clique. (A complete graph on 2 or more vertices.) Can we conclude that |F| = o(2^{n(n-1)/2})? (In other words, can we conclude that F contains only a diminishing fraction of all graphs?)
Define the “distance” between two points in the unit cube as the product of the absolute values of the differences in the three coordinates. (See Tim’s remark below.)
2) (Related to Littlewood) Is it possible to find n points in the unit cube so that the “distance” between any two of them is at least c/n, for some constant c > 0?
A negative answer to Littlewood’s problem would imply a positive answer to Problem 2 (with some constant). So the pessimistic thought would be that the answer to Problem 2 is yes, without any bearing on Littlewood’s problem.
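The product “distance” of Problem 2 is easy to experiment with numerically. Here is a minimal sketch; the helper names and the sample point set are ours, purely for illustration:

```python
from itertools import combinations

def prod_dist(p, q):
    # The "distance" of Problem 2: product of the absolute values
    # of the three coordinate differences.
    return abs(p[0] - q[0]) * abs(p[1] - q[1]) * abs(p[2] - q[2])

def min_pairwise(points):
    # Worst (smallest) pairwise "distance" over a finite point set.
    return min(prod_dist(p, q) for p, q in combinations(points, 2))

# Any two points sharing a coordinate are at "distance" zero, so a
# good configuration must separate every pair in all three coordinates.
pts = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (0.5, 0.25, 0.75)]
print(min_pairwise(pts))
```

Note that the products shrink rapidly as points accumulate, which is what makes a bound like c/n hard to achieve for every pair at once.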
November 8, 2009
Proposal (Tim Gowers): The Origin of Life
A presentation of one possible near-future polymath project, on the mathematics of the origin of life, can be found on Gowers’s blog.
October 27, 2009
(Research thread V) Deterministic way to find primes
It’s probably time to refresh the previous thread for the “finding primes” project, and to summarise the current state of affairs.
The current goal is to find a deterministic way to locate a prime in an interval [x, 2x] in time that breaks the “square root barrier” of sqrt(x) (or more precisely, x^{1/2+o(1)}). Currently, we have two ways to reach that barrier:
- Assuming the Riemann hypothesis, the largest prime gap in [x, 2x] is of size O(sqrt(x) log x). So one can simply test consecutive numbers for primality until one gets a hit (using, say, the AKS algorithm, any number of size z can be tested for primality in time polynomial in log z).
- The second method is due to Odlyzko, and does not require the Riemann hypothesis. There is a contour integration formula that allows one to write the prime counting function pi(x) up to error O(x/T) in terms of an integral involving the Riemann zeta function over an interval of length about T, for any 1 <= T <= x. The latter integral can be computed to the required accuracy in time about T^{1+o(1)}. With this and a binary search it is not difficult to locate an interval of width O(x/T) that is guaranteed to contain a prime in time T^{1+o(1)}. Optimising by choosing T comparable to sqrt(x) and using a sieve (or by testing the elements for primality one by one), one can then locate that prime in time x^{1/2+o(1)}.
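A toy version of the first method: scan upward from x, testing each integer with a deterministic primality test. Here a deterministic Miller–Rabin test stands in for AKS; the fixed base set below is known to be correct for all n below roughly 3.3 * 10^24.

```python
def is_prime(n):
    # Deterministic Miller-Rabin: this fixed base set is provably
    # correct for all n < 3.3 * 10**24 (a stand-in for AKS here).
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(x):
    # Scan upward from x; under RH the prime gap is O(sqrt(x) log x),
    # so at most that many primality tests are needed.
    n = x + 1
    while not is_prime(n):
        n += 1
    return n
```

The point of the RH assumption is exactly to bound the number of iterations of this loop in the worst case.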
Currently we have one promising approach to break the square root barrier, based on the polynomial method, but while individual components of this approach fall underneath the square root barrier, we have not yet been able to get the whole thing below (or even matching) the square root. I will sketch the approach (as far as I understand it) below; right now we need some shortcuts (e.g. FFT, fast matrix multiplication, that sort of thing) that can cut the run time further.
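The binary-search step of Odlyzko’s method can be sketched as follows. A brute-force sieve plays the role of the pi(x) oracle here (in the real method that oracle is the contour-integral evaluation, with each query costing about T^{1+o(1)}); the function names are ours:

```python
def prime_count(x):
    # Brute-force stand-in for a fast evaluation of pi(x).
    if x < 2:
        return 0
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return sum(sieve)

def locate_prime(lo, hi):
    # Invariant: pi(hi) > pi(lo), i.e. (lo, hi] contains a prime.
    # Halving the interval homes in on the smallest such prime.
    assert prime_count(hi) > prime_count(lo)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if prime_count(mid) > prime_count(lo):
            hi = mid
        else:
            lo = mid
    return hi
```

The binary search only ever asks for prime counts, so any speed-up of the counting oracle translates directly into a speed-up of prime location.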
October 16, 2009
Nature article on Polymath
Timothy Gowers and Michael Nielsen have written an article “Massively collaborative mathematics”, focusing primarily on the first Polymath project, for the October issue of Nature.
August 28, 2009
(Research Thread IV) Deterministic way to find primes
This post will be somewhat abridged due to my traveling schedule.
The previous research thread for the “finding primes” project is now getting quite full, so I am opening up a fresh thread to continue the project.
Currently we are up against the “square root barrier”: the fastest time we know of to find a k-digit prime is about 10^{k/2} (up to lower-order factors), even in the presence of a factoring oracle (though, thanks to a method of Odlyzko, we no longer need the Riemann hypothesis). We also have a “generic prime” razor that has eliminated (or severely limited) a number of potential approaches.
One promising approach, though, proceeds by transforming the “finding primes” problem into a “counting primes” problem. If we can compute the prime counting function pi(x) in substantially less than sqrt(x) time, then we have beaten the square root barrier.
Currently we have a way to compute the parity (least significant bit) of pi(x) in time about x^{1/2+o(1)}, and there is hope to improve this (especially given the progress on the toy problem of counting square-frees less than x). There are some variants that also look promising, for instance to work in polynomial extensions of finite fields (in the spirit of the AKS algorithm) and to look at residues of pi(x) in other moduli, e.g. mod 3, though currently we can’t break the barrier for that particular problem.
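Why the parity bit is enough: if pi(a) and pi(b) have different parities, then (a, b] contains an odd (hence nonzero) number of primes, and a binary search using only the parity bit isolates one. A sketch with a brute-force parity oracle standing in for the fast computation (the function names are ours):

```python
def pi_parity(x):
    # Parity (least significant bit) of pi(x), by trial division;
    # a stand-in for the fast parity computation discussed above.
    count = 0
    for n in range(2, x + 1):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count % 2

def find_prime_by_parity(lo, hi):
    # Requires pi(lo) and pi(hi) to differ in parity, so (lo, hi]
    # holds an odd number of primes; binary search pins one down.
    assert pi_parity(lo) != pi_parity(hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if pi_parity(mid) != pi_parity(lo):
            hi = mid
        else:
            lo = mid
    return hi
```

This is why computing just one bit of pi(x) below the square root barrier would already break the barrier for finding primes.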
August 13, 2009
(Research Thread III) Deterministic way to find primes
This is a continuation of Research Thread II of the “Finding primes” polymath project, which is now full. It seems that we are facing particular difficulty breaching the square root barrier; in particular, the following problems remain open:
- Can we deterministically find a prime of size at least n in time o(sqrt(n)) (assuming hypotheses such as RH)? Assume one has access to a factoring oracle.
- Can we deterministically find a prime of size at least n in time o(sqrt(n)) unconditionally (in particular, without RH)? Assume one has access to a factoring oracle.
We are still in the process of weighing several competing strategies to solve these and related problems. Some of these have been effectively eliminated, but we have a number of still viable strategies, which I will attempt to list below. (The list may be incomplete, and of course totally new strategies may emerge also. Please feel free to elaborate or extend the above list in the comments.)
Strategy A: Find a short interval [x, x+y] such that pi(x+y) - pi(x) > 0, where pi(x) is the number of primes less than x, by using information about the zeroes of the Riemann zeta function.
Comment: it may help to assume a Siegel zero (or, at the other extreme, to assume RH).
Strategy B: Assume that an interval [n, n+a] consists entirely of u-smooth numbers (i.e. no prime factors greater than u) and somehow arrive at a contradiction. (To break the square root barrier, we need a = o(sqrt(u)), and to stop the factoring oracle from being ridiculously overpowered, n should be of subexponential size in u.)
Comment: in this scenario, we will have n/p close to an integer for many primes p between a and u, and n/p far from an integer for all primes p larger than u.
Strategy C: Solve the following toy problem: given n and u, what is the distance from n to the closest integer which contains a factor comparable to u (e.g. in [u, 2u])? [Ideally, we want a prime factor here, but even the problem of getting an integer factor is not fully understood yet.] Beating sqrt(u) here is analogous to breaking the square root barrier in the primes problem.
Comments:
- The trivial bound is u/2 – just move to the nearest multiple of u to n. This bound can be attained for really large n. But it seems we can do better for small n.
- For n in [u, 2u], one trivially does not have to move at all.
- For n <= u^2, one has an upper bound of about n/(2u), by noting that having a factor comparable to u is equivalent to having a factor comparable to n/u.
- For n comparable to u^2, one has an upper bound of O(n^{1/4}), by taking a^2 to be the first square larger than n, b^2 to be the closest square to a^2 - n, and noting that a^2 - b^2 = (a-b)(a+b) has a factor comparable to u and is within O(n^{1/4}) of n. (This paper improves this bound further, conditional on a strong exponential sum estimate.)
- For n = poly(u), it may be possible to take a dynamical systems approach, writing n in base u and incrementing or decrementing u, hoping for some equidistribution. Some sort of “smart” modification of u may also be effective.
- There is a large paper by Ford devoted to this sort of question.
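The toy problem is easy to probe by brute force for small n and u (the helper names here are ours, purely for experimentation; this is nowhere near the time bounds under discussion):

```python
def has_factor_in(n, lo, hi):
    # Does n have a divisor d with lo <= d <= hi?
    return any(n % d == 0 for d in range(lo, hi + 1))

def toy_distance(n, u):
    # Distance from n to the closest integer with a factor in [u, 2u],
    # found by widening the search radius k one step at a time.
    k = 0
    while True:
        for m in (n - k, n + k):
            if m > 1 and has_factor_in(m, u, 2 * u):
                return k
        k += 1
```

Tabulating toy_distance over ranges of n for fixed u is a quick way to test conjectured upper bounds like the ones listed above.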
Strategy D: Find special sequences of integers that are known to have special types of prime factors, or are known to have unusually high densities of primes.
Comment. There are only a handful of explicitly computable sparse sequences that are known unconditionally to capture infinitely many primes.
Strategy E: Find efficient deterministic algorithms for finding various types of “pseudoprimes” – numbers which obey some of the properties of being prime, e.g. 2^n = 2 mod n. (For this discussion, we will consider primes as a special case of pseudoprimes.)
Comment. For the specific problem of solving 2^n = 2 mod n, there is an elementary observation that if n obeys this property, then 2^n - 1 does also, which solves this particular problem; but this does not indicate how to, for instance, have 2^n = 2 mod n and 3^n = 3 mod n obeyed simultaneously.
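The observation is easy to check numerically: starting from any n with 2^n = 2 mod n (prime or not), the map n -> 2^n - 1 preserves the property. A small sketch (the function name is ours):

```python
def fermat_base2(n):
    # Does n satisfy the base-2 Fermat congruence 2^n = 2 (mod n)?
    return pow(2, n, n) == 2 % n

# 341 = 11 * 31 is the smallest composite with this property, and
# the observation says 2^341 - 1 must then have it as well.
n = 341
assert fermat_base2(n) and not all(n % d for d in range(2, n))
assert fermat_base2(2 ** n - 1)
```

Iterating the map produces an explicit, extremely fast-growing sequence of base-2 pseudoprimes (or primes), which is what makes this particular pseudoprime problem easy.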
As always, oversight of this research thread is conducted at the discussion thread, and any references and detailed computations should be placed at the wiki.
August 9, 2009
(Research thread II) Deterministic way to find primes
This thread marks the formal launch of “Finding primes” as the massively collaborative research project Polymath4, and it supersedes the proposal thread, which has now become rather lengthy, as the official “research” thread for the project. (Simultaneously with this research thread, we also have the discussion thread to oversee the research thread and to provide a forum for casual participants, and also the wiki page to store all the settled knowledge and accumulated insights gained from the project to date.) See also this list of general polymath rules.
The basic problem we are studying here can be stated in a number of equivalent forms:
Problem 1. (Finding primes) Find a deterministic algorithm which, when given an integer k, is guaranteed to locate a prime of at least k digits in length in as quick a time as possible (ideally, in time polynomial in k, i.e. after poly(k) steps).
Problem 2. (Finding primes, alternate version) Find a deterministic algorithm which, after running for k steps, is guaranteed to locate as large a prime as possible (ideally, with a polynomial number of digits, i.e. at least k^c digits for some fixed c > 0).
To make the problem easier, we will assume the existence of a primality oracle, which can test whether any given number is prime in O(1) time, as well as a factoring oracle, which will provide all the factors of a given number in O(1) time. (Note that the latter supersedes the former.) The primality oracle can be provided essentially for free, due to polynomial-time deterministic primality algorithms such as the AKS primality test; the factoring oracle is somewhat more expensive (there are deterministic factoring algorithms, such as the quadratic sieve, which are suspected to be subexponential in running time, but no polynomial-time algorithm is known), but seems to simplify the problem substantially.
The problem comes in at least three forms: a strong form, a weak form, and a very weak form.
- Strong form: Deterministically find a prime of at least k digits in poly(k) time.
- Weak form: Deterministically find a prime of at least k digits in 10^{o(k)} time, or equivalently find a prime larger than k^C in time O(k) for any fixed constant C.
- Very weak form: Deterministically find a prime of at least k digits in significantly less than 10^{k/2} time, or equivalently find a prime significantly larger than k^2 in time O(k).
The problem in all of these forms remains open, even assuming a factoring oracle and strong number-theoretic hypotheses such as GRH. One of the main difficulties is that we are seeking a deterministic guarantee that the algorithm works in all cases, which is very different from a heuristic argument that the algorithm “should” work in “most” cases. (Note that there are already several efficient probabilistic or heuristic prime generation algorithms in the literature, e.g. this one, which already suffice for all practical purposes; the question here is purely theoretical.) In other words, rather than working in some sort of “average-case” environment where probabilistic heuristics are expected to be valid, one should instead imagine a “Murphy’s law” or “worst-case” scenario in which the primes are situated in a “maximally unfriendly” manner. The trick is to ensure that the algorithm remains efficient and successful even in the worst-case scenario.
Below the fold, we will give some partial results, and some promising avenues of attack to explore. Anyone is welcome to comment on these strategies, and to propose new ones. (If you want to participate in a more “casual” manner, you can ask questions on the discussion thread for this project.)
Also, if anything from the previous thread that you feel is relevant has been missed in the text below, please feel free to recall it in the comments to this thread.
August 3, 2009
Polymath on other blogs
There has been some discussion of the polymath enterprise on other blogs, so I thought it would be good to collect these links on the main polymath wiki page. If you find another link about polymath on the net, please feel free to add it to the wiki also (or at least to mention it in the comments here).
It should also be mentioned that besides the proposed polymath projects on this blog, Gil Kalai is in the process of setting up a polymath project on the polynomial Hirsch conjecture, tentatively scheduled to be launched later this month. See the following preparatory posts:
- The polynomial Hirsch conjecture, a proposal for Polymath 3 (July 17)
- The polynomial Hirsch conjecture, a proposal for Polymath 3 cont. (July 28)
- The polynomial Hirsch conjecture – how to improve the upper bounds (July 30)
- The polynomial Hirsch conjecture : discussion thread (August 9)
- The polynomial Hirsch conjecture: discussion thread continued (September 6)
- Plans for polymath3 (December 8). The plan is to launch polymath3 on the polynomial Hirsch conjecture on April 15, 2010.
An extremely rudimentary wiki page for the proposed project has now been created.
New: Tim Gowers devotes a post to several proposals for a polymath project in November.