For the time being this is an almost empty post, the main purpose of which is to provide a space for mathematical comments connected with the project of assessing whether it is possible to use the recent ideas of Sanders and of Bateman and Katz to break the $1/\log N$ barrier in Roth’s theorem. (In a few hours’ time I plan to write a brief explanation of what one of the main difficulties seems to be.)
Added later. Tom Sanders made the following remarks as a comment. It seems to me to make more sense to have them as a post, since they are a good starting point for a discussion. So I have taken the liberty of upgrading the comment. Thus, the remainder of this post is written by Tom.
This will hopefully be an informal post on one aspect of what we might need to do to translate the Bateman-Katz work into the $\mathbb{Z}/N\mathbb{Z}$ setting.
One of the first steps in the Bateman-Katz argument is to note that if $A \subseteq G := \mathbb{F}_3^n$ is a cap-set (meaning it is free of three-term progressions) of density $\alpha := |A|/|G|$ then we can assume that there are no large Fourier coefficients, meaning

$$\sup_{0 \ne \gamma} |\widehat{1_A}(\gamma)| = O(\alpha^{2-\epsilon})$$

for some small fixed $\epsilon > 0$.

They use this to develop structural information about the large spectrum, $\Delta := \{\gamma \ne 0 : |\widehat{1_A}(\gamma)| \ge c\alpha^2\}$, which consequently has size between $\Omega(\alpha^{-3+3\epsilon})$ and $O(\alpha^{-3})$. This structural information is then carefully analysed in the `beef’ of the paper.

To make the assumption that there are no large Fourier coefficients they proceed via the usual Meshulam argument: if there is a large coefficient then we get a density increment of the form $\alpha \mapsto \alpha(1+\Omega(\alpha^{1-\epsilon}))$ on a subspace of co-dimension $1$, and this can be iterated until they have all been removed. In the $\mathbb{Z}/N\mathbb{Z}$ setting this has to be `Bourgainised’.
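To spell out the arithmetic behind these exponents, here is a back-of-envelope sketch in my notation, with $\widehat{1_A}(\gamma) = \mathbb{E}_{x \in G} 1_A(x)\overline{\gamma(x)}$ so that $\sum_{\gamma}|\widehat{1_A}(\gamma)|^2 = \alpha$, and with constants only indicative:

```latex
% Upper bound on the spectrum via Parseval:
|\Delta| \cdot (c\alpha^2)^2 \le \sum_{\gamma}|\widehat{1_A}(\gamma)|^2 = \alpha
  \quad\Longrightarrow\quad |\Delta| = O(\alpha^{-3}).
% Lower bound via the three-term progression count: since A is cap-free,
\alpha^3 \lesssim \sum_{\gamma \ne 0} |\widehat{1_A}(\gamma)|^3
  \le c\alpha^2 \cdot \alpha
     + \sup_{\gamma \ne 0}|\widehat{1_A}(\gamma)| \sum_{\gamma \in \Delta}|\widehat{1_A}(\gamma)|^2,
% so if no coefficient exceeds \alpha^{2-\epsilon} then
\sum_{\gamma \in \Delta}|\widehat{1_A}(\gamma)|^2 \gtrsim \alpha^{1+\epsilon}
  \quad\Longrightarrow\quad
  |\Delta| \gtrsim \alpha^{1+\epsilon}/\alpha^{4-2\epsilon} = \alpha^{-3+3\epsilon}.
% Iteration count for the Meshulam step: each increment multiplies the density
% by (1 + c\alpha^{1-\epsilon}), and the density cannot exceed 1, so
(1 + c\alpha^{1-\epsilon})^{k}\,\alpha \le 1
  \quad\Longrightarrow\quad k = O(\alpha^{-(1-\epsilon)}\log\alpha^{-1}).
```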
We proceed relative to Bohr sets rather than subspaces. The problem with Bohr sets is that they do not function exactly as subspaces and, in particular, they do not have a nice additive closure property. A good model to think of is the unit cube $Q = [0,1]^d$ in $\mathbb{R}^d$. The sumset $Q+Q$ is not roughly equal to $Q$; it is about $2^d$ times as large as $Q$. However, if we take some small dilate of $Q$, say the cube $Q' = [0,1/d]^d$ of side length $1/d$, then we do have that $\mathrm{vol}(Q+Q') \approx \mathrm{vol}(Q)$ since $(1+1/d)^d \le e$. This provides a sort of approximate additive closure, and the fact that it can be usefully extended to Bohr sets and used for Roth’s theorem was noticed by Bourgain in his paper `On triples in arithmetic progression’.
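For a general dilation parameter $\lambda$ the cube computation reads:

```latex
% Volume of the sumset of the unit cube with its dilate by \lambda:
\mathrm{vol}\big([0,1]^d + [0,\lambda]^d\big) = (1+\lambda)^d\,\mathrm{vol}\big([0,1]^d\big),
% which stays bounded when \lambda = 1/d, but grows exponentially in d for fixed \lambda:
\Big(1+\tfrac{1}{d}\Big)^{d} \le e,
\qquad (1+\lambda)^d \longrightarrow \infty \quad (d \to \infty,\ \lambda \text{ fixed}).
```

So $\lambda \sim 1/d$ is essentially the largest dilate for which the approximate closure property holds, which is why the narrowing steps below cost a factor of about $d$ in width each time.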
In our situation, if $B$ is a Bohr set of dimension $d$ and $A \subseteq B$ has relative density $\alpha$, then we shall try to remove all characters $\gamma$ such that $|\widehat{1_A\,d\beta}(\gamma)| = \Omega(\alpha^{2-\epsilon})$, where $\beta$ is the uniform measure on $B$. Given such a character we produce a new Bohr set $B'$ defined to be the intersection of $B$ (dilated by a factor of roughly $1/d$) and the $1/2$-approximate level set of $\gamma$ (meaning the set of $x$ such that $|1-\gamma(x)| \le 1/2$) with

$$\dim B' \le d+1 \quad \text{and} \quad |B'| \ge d^{-O(d)}|B|,$$

and $A$ has density $\alpha(1+\Omega(\alpha^{1-\epsilon}))$ on a translate of $B'$. After running this for at most $k = O(\alpha^{-(1-\epsilon)}\log\alpha^{-1})$ iterations we end up with a Bohr set $B^{(k)}$, with no large coefficients remaining, such that

$$|B^{(k)}| \ge (d+k)^{-O(k(d+k))}|B|.$$
However, the only lower bound we have on the size of a Bohr set of dimension $d$ and width $\delta$ in a general Abelian group of order $N$ is $\delta^d N$, which means we have to take the number of iterations $k = O(\sqrt{\log N})$ (give or take logarithmic factors) or else our Bohr sets will become too small. Of course, in $\mathbb{F}_3^n$ the width plays (essentially) no role in determining the size of the Bohr set and we have $|B| \ge 3^{-d}|G|$, and we can take as many iterations as desired for the Bateman-Katz analysis.
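The bookkeeping that forces this, sketched with indicative constants (I assume each step multiplies the width by about $1/d$ and adds one to the dimension):

```latex
% Size of a Bohr set B(\Gamma,\delta) of dimension d = |\Gamma| and width \delta
% in a group of order N:
|B(\Gamma,\delta)| \ge \delta^{d} N.
% After k narrowing steps the width is d^{-O(k)} and the dimension is at most d+k, so
|B^{(k)}| \ge d^{-O(k(d+k))}\, N,
% which is non-trivial only while k(d+k)\log d = O(\log N).
```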
Having seen this weakness of `Bourgainisation’ one naturally wants to look for arguments which somehow involve iterating a smaller number of times: if we had been able to take out many Fourier coefficients together each time we passed to a new Bohr set we would not have had to iterate, and therefore narrow, the Bohr set so many times. In fact Heath-Brown and Szemerédi provided such arguments, in their papers both titled `Integer sets containing no arithmetic progressions’.
The key idea of the Heath-Brown-Szemerédi approach in the Bohr set context is to intersect the dilate of the Bohr set with the approximate level sets of all the characters in the large spectrum $\Delta := \{\gamma \ne 0 : |\widehat{1_A}(\gamma)| \ge c\alpha^2\}$ at once. This set has size at most $O(\alpha^{-3})$ by Parseval’s theorem and so we get a Bohr set $B'$ with

$$\dim B' \le d + O(\alpha^{-3}).$$
However, in this case we end up with a much bigger density increment. Indeed, $\gamma(x)$ is close to $1$ for all $x \in B'$ and all $\gamma$ in the large spectrum $\Delta$, from which we essentially get that

$$\sup_x\,(1_A * \mu_{B'})(x) \ge \alpha + \frac{1}{\alpha}\sum_{\gamma \in \Delta}|\widehat{1_A}(\gamma)|^2 \ge \alpha(1+\Omega(1)).$$
This translates to a density increment of $\alpha \mapsto \alpha(1+\Omega(1))$, and such an increment can only be iterated $O(\log\alpha^{-1})$ times, that is to say not very many times. Unfortunately even when combined with Chang’s theorem this does not give an improvement over Bourgain’s original argument, and it wasn’t until 2008 that Bourgain produced a new argument improving our understanding in `Roth’s theorem on progressions revisited’.
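The increment here is the usual second-moment calculation; in the model setting where the Bohr set is replaced by a subspace $V$ whose annihilator $V^{\perp}$ contains the large spectrum $\Delta$, it reads:

```latex
% Average of 1_A over cosets of V, bounded below by Cauchy-Schwarz and Parseval:
\sup_x (1_A * \mu_V)(x)
  \ge \frac{\mathbb{E}_x (1_A * \mu_V)(x)^2}{\mathbb{E}_x (1_A * \mu_V)(x)}
  = \frac{1}{\alpha}\sum_{\gamma \in V^{\perp}} |\widehat{1_A}(\gamma)|^2
  \ge \alpha + \frac{1}{\alpha}\sum_{\gamma \in \Delta} |\widehat{1_A}(\gamma)|^2.
% A constant-factor increment can only be iterated O(log(1/alpha)) times
% before the density would exceed 1:
(1+c)^{k}\,\alpha \le 1 \quad\Longrightarrow\quad k = O(\log\alpha^{-1}).
```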
In this sequel a more careful analysis of the large spectrum is produced, and this benefits from knowing whether or not the large spectrum contains most of the Fourier mass of $1_A$. The point here is that we are given by the usual Fourier arguments that the large spectrum supports a large Fourier mass, meaning $\sum_{\gamma \in \Delta}|\widehat{1_A}(\gamma)|^2 = \Omega(\alpha^2)$. Now, the total mass $\sum_{\gamma \ne 0}|\widehat{1_A}(\gamma)|^2 \approx \alpha$ is somewhat bigger than this, so it is a stronger statement to say that $\Delta$ contains most of the Fourier mass. If it does then our plan might be to run one of the known Roth arguments; if it doesn’t then the spectrum away from $\Delta$ is large and we can hope to run the Bateman-Katz argument.
Hopefully I’ll talk more about Bourgain’s method which gives $\alpha \gg (\log N)^{-2/3+o(1)}$ (and a slight refinement which gives $\alpha \gg (\log N)^{-3/4+o(1)}$) because these, along with Bourgain’s original approach, can all make use of the fact that the large spectrum is large, rather than simply the statement that $A$ has no non-trivial three-APs (which is stronger). One of the problems we face is that naively the proof giving $(\log N)^{-3/4+o(1)}$ cannot make use of the fact that the spectrum is large unless this can be converted into a meaningful physical space statement.
I did wonder if there was some slight hope that the $\epsilon$ in the Bateman-Katz result would be sufficiently large that it could be combined with the argument giving $(\log N)^{-3/4+o(1)}$ to give an improvement. This seems unlikely, as I am told $\epsilon$ is rather small.