Given a number k, how does one find a prime larger than k in time bounded by a polynomial in log k?

The following defines two deterministic prime-generating algorithms which generate some curious composites, essentially a small (tiny?) subset of Hardy’s round numbers.

(Note that the focus is on generating the prime gaps, and not the primes themselves.)

TRIM NUMBERS

A number is Trim if, and only if, all its divisors are less than the Trim difference , where:

(i) , and ;

(ii) , and ;

(iii) is the smallest even integer that does not occur in the n’th sequence:

;

(iv) is selected so that:

, where is the i'th prime.

It follows that is a prime unless all its prime divisors are less than .

Question: What is the time taken to generate ?

COMPACT NUMBERS

A number is Compact if, and only if, all its divisors, except at most one, are less than the Compact difference , where:

(i) , and ;

(ii) , and ;

(iii) is the smallest even integer that does not occur in the n’th sequence:

;

(iv) is selected so that, for all :

;

(v) is selected so that:

;

(vi) if , then:

.

It follows that is either a prime or a prime square, unless all but at most one of its prime divisors are less than .

What is the time taken to generate ?

The distribution of the Compact numbers suggests that the prime difference may be .

The algorithms can be seen in more detail at:

Well, there are artificial counterexamples… , for instance, is computable in time poly(n) but has an error term which is exponential in n.

But yes, in general there seems to be a strong correlation between a quantity being easy to compute on one hand, and being easy to estimate on the other (Raphy Coifman is fond of pushing this particular philosophy).

Related to the comment above, I’d be interested to know if anyone is aware of any result of the following form: Give a non-trivial -bound for computing some non-trivial number-theoretic quantity.

(1)

(2)

(3)

(4)

In the following table, the first column indicates (ignoring log terms) how fast our best algorithm computes the above quantities, the second column indicates the best known error term on these quantities, and the third column indicates the conjectured error term on each of these quantities.

(1)

(2)

(3)

(4)

In some cases (such as (1)) we can compute a quantity such that the order of the computation time is smaller than that of the best known error term. However, in none of these cases can we compute a quantity where the exponent on the computation time is better than the conjectured error term. This leads to the following rather vague question. Is anyone aware of any number theoretic quantity whose computation time is smaller than the conjectured error term?

Dear Mark, one way to proceed is to first find a pair of consecutive residues b, b+1 mod W^2 that are not divisible by any square less than W^2 (this is possible by the Chinese remainder theorem). The density of square-frees in b mod W and in b+1 mod W are both close to 1, so there will be a lot of consecutive square-frees in this pair of progressions.
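A small sketch of this Chinese remainder construction (with W = 10 as an illustrative bound; the helper functions and the particular residue choices are my own, not from the comment): for each prime p < W, choose a residue mod p^2 avoiding 0 and p^2 − 1, then glue the choices together with the CRT, so that neither b nor b + 1 is divisible by the square of any prime below W.

```python
# For each prime p < W, pick a residue r_p mod p^2 with r_p not in {0, p^2 - 1};
# the CRT solution b then has b != 0 and b + 1 != 0 (mod p^2) for every such p,
# i.e. neither b nor b + 1 is divisible by the square of any prime below W.

def small_primes(limit):
    """Primes below `limit`, by a simple sieve of Eratosthenes."""
    sieve = [True] * max(limit, 2)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p in range(limit) if sieve[p]]

def crt(residues, moduli):
    """Combine x ≡ r_i (mod m_i) for pairwise coprime m_i; return (x, prod m_i)."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        t = (r - x) * pow(M, -1, m) % m  # solve x + M*t ≡ r (mod m)
        x += M * t
        M *= m
    return x % M, M

W = 10
moduli = [p * p for p in small_primes(W)]  # [4, 9, 25, 49]
residues = [2, 4, 7, 10]                   # any choices avoiding 0 and p^2 - 1
b, M = crt(residues, moduli)

# Neither b nor b + 1 is divisible by p^2 for any prime p < W.
assert all(b % m != 0 and (b + 1) % m != 0 for m in moduli)
```

Any residue classes avoiding 0 and p^2 − 1 work here; the ones above are arbitrary.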

I typed the paper into Google, and found that Odlyzko has a link on his website at:

I looked up the section where it mentions the algorithm, and apparently there is an analytic algorithm to compute π(x) in time x^{1/2+o(1)} using the zeta function (but not the explicit formula)! So it seems we were trying to solve a problem that has already been solved. Still, it would be good to have other solutions.

According to this paper:

Odlyzko, Andrew M., “Analytic computations in number theory”, in Mathematics of Computation 1943–1993: a half-century of computational mathematics (Vancouver, BC, 1993), 451–463,

http://www.ams.org/mathscinet-getitem?mr=1314883

there is an elementary algorithm to compute π(x) in x^{2/3+o(1)} time, though I don’t know whether this is a deterministic algorithm (I don’t have access to the paper). If so, that would solve this problem by binary search.
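On the binary-search point: given any prime-counting routine π, Bertrand’s postulate guarantees a prime in (k, 2k], and a binary search with π locates the least one in O(log k) calls. A minimal sketch (the sieve-based π below is only a slow stand-in for the fast algorithms discussed here):

```python
def naive_pi(x):
    # Stand-in prime-counting function; the algorithms discussed above would
    # replace this with something running in time x^{1/2+o(1)} or x^{2/3+o(1)}.
    if x < 2:
        return 0
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)

def next_prime(k, pi=naive_pi):
    """Least prime > k, via binary search on pi over (k, 2k] (Bertrand)."""
    base = pi(k)
    lo, hi = k + 1, 2 * k
    while lo < hi:
        mid = (lo + hi) // 2
        if pi(mid) > base:   # a prime > k lies in (k, mid]
            hi = mid
        else:
            lo = mid + 1
    return lo

print(next_prime(10))  # → 11
```

With a π(x) algorithm running in time x^{2/3+o(1)}, this finds a prime above k deterministically in roughly k^{2/3+o(1)} total time — still far from polynomial in log k.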
