The polymath blog

March 6, 2017

Rota’s Basis Conjecture: Polymath 12

Filed under: polymath proposals — tchow8 @ 11:18 pm

There has been enough interest that I think we can formally declare Rota’s Basis Conjecture to be Polymath 12. I am told that it is standard Polymath practice to start a new blog post whenever the number of comments reaches about 100, and we have reached that point, so that is one reason I am writing a second post at this time. I am also told that sometimes, separate “discussion” and “research” threads are created; I’m not seeing an immediate need for such a separation yet, and so I am not going to state a rule that comments of one type belong under the original post whereas comments of some other type belong under this new post. I will just say that if you are in doubt, I recommend posting new comments under this post rather than the old one, but if common sense clearly says that your comment belongs under the old post then you should use common sense.

The other reason to create a new post is to take stock of where we are and perhaps suggest some ways to go forward. Let me emphasize that the list below is not comprehensive, but is meant only to summarize the comments so far and to throw in a few ideas of my own. Assuming this project continues to gather steam, the plan is to populate the associated Polymath Wiki page with a more comprehensive list of references and statements of partial results. If you have an idea that does not seem to fit into any of the categories below, please consider that to be an invitation to leave a comment about your idea, not an indication that it is not of interest!

Matroids with No Small Circuits

I want to start with an idea that I mentioned in my MathOverflow post but not in my previous Polymath Blog post. I think it is very promising, and I don’t think many people have looked at it. Geelen and Humphries proved that Rota’s Basis Conjecture is true for paving matroids. In the case of vector spaces, what this means is that they proved the conjecture in the case where every (n – 1)-element subset of the given set of n^2 vectors is linearly independent. It is natural to ask if n – 1 can be reduced to n – 2. I have not digested the Geelen–Humphries paper so I do not know how easy or hard this might be, but it certainly could not hurt to have more people study this paper and make an attempt to extend its results. If an oracle were to tell me that Rota’s Basis Conjecture has a 10-page proof and were to ask me what I thought the method was, then at this point in time I would guess that the proof proceeds by induction on the size of the smallest circuit. Even if I am totally wrong, I think we will definitely learn something by understanding exactly why this approach cannot be extended.

Independent Partial Transversals

Let me now review the progress on the three ideas I mentioned in my first blog post. In Idea 1, I asked if the n^2 vectors could be partitioned into at most 2n – 2 independent partial transversals. A nice proof that the answer is yes was given by domotorp. Eli Berger then made a comment that suggested that the topological methods of Aharoni and Berger could push this bound lower, but there was either an error in his suggestion or we misunderstood it. It would be good to get this point clarified. I should also mention that Aharoni mentioned to me offline that he unfortunately could not participate actively in Polymath but that he did have an answer to my question about their topological methods, which is that the topological concepts they were using were intrinsically not strong enough to bring the bound down to n + 1, let alone n. It might nevertheless be valuable to understand exactly how far we can go by thinking about independent partial transversals. Ron Aharoni and Jonathan Farley both had interesting ideas along these lines; rather than reproduce them here, let me just say that you can find Aharoni’s comment (under the previous blog post) by searching for “Vizing” and Farley’s comment by searching for “Mirsky.”

Local Obstructions

Idea 2 was to look for additional obstructions to natural strengthenings of Rota’s Basis Conjecture, by computationally searching for counterexamples that arise if the number of columns is smaller than the number of rows. Luke Pebody started such a search but reported a bug. I still believe that this computational search is worth doing, because I suspect that any proof that Rota’s Basis Conjecture holds for all matroids is going to have to come to grips with these counterexamples.

Note that if we are interested just in vector spaces, we could do some Gröbner basis calculations. I am not sure that this would be any less computationally intensive than an exhaustive search over all small matroids, but it might reveal additional structure that is peculiar to the vector space case.
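
To give a concrete sense of the kind of computation I have in mind, here is a toy sympy sketch (the encoding is just one naive possibility of my own, with a Rabinowitsch variable t used to impose a non-vanishing condition): prescribed linear dependencies become vanishing determinants, and a Gröbner basis computation decides whether the prescribed pattern is realizable.

```python
# Toy encoding of dependency conditions as polynomials (sympy).  We ask whether
# two vectors in the plane can each be dependent with a fixed vector while
# remaining independent of each other; a Groebner basis equal to [1] certifies
# that the configuration is unrealizable.
from sympy import symbols, Matrix, groebner

a, b, c, d, t = symbols('a b c d t')
v1 = Matrix([1, 0])      # a fixed vector
v2 = Matrix([a, b])      # unknown vectors
v3 = Matrix([c, d])

dep1 = Matrix.hstack(v1, v2).det()    # required to vanish: v2 dependent with v1
dep2 = Matrix.hstack(v1, v3).det()    # required to vanish: v3 dependent with v1
indep = Matrix.hstack(v2, v3).det()   # required NOT to vanish (Rabinowitsch trick)

G = groebner([dep1, dep2, t * indep - 1], a, b, c, d, t, order='lex')
print(G.exprs)   # [1] means the prescribed pattern has no realization
```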

Algebraic Geometry

There has been minimal progress in this (admittedly vague) direction. I will quote Ellenberg’s initial thoughts: “If you were going to degenerate, what you would need to do is say: is there any version of this question that makes sense when the basic object is, instead of a basis of an n-dimensional vector space V, a 0-dimensional subscheme of V of degree n which is not contained in any hyperplane? For instance, in 2-space you could have something which was totally supported at the point (0,1) but which was “fat” in the horizontal direction of degree 2. This is the scheme S such that what it means for a curve C to contain S is that C passes through (0,1) and has a horizontal tangent there.”

Let me also mention that Jan Draisma sent me email recently with the following remarks: “A possible idea would be to consider a counterexample as lying in some suitable equivariant Hilbert scheme in which being a counterexample is a closed condition, then degenerate to a counterexample stable under a Borel subgroup of GL_n, and come to a contradiction. ‘Equivariant’ should reflect the action of GL_n × (S_n ⋉ S_n^n). However, I have not managed to make this work myself, even in low dimensions. In fact, having a good algebro-geometric argument for the n = 3 case, rather than a case-by-case analysis, would already be very nice!”

Alon–Tarsi Conjecture

Now let me move on to other ideas suggested in the comments. There were several thoughts about the Alon–Tarsi Conjecture that the Alon–Tarsi constant L_n^{even} – L_n^{odd} (the number of even Latin squares of order n minus the number of odd ones) ≠ 0 when n is even. Rebecca Stones gave a formula that, as Gil Kalai observed, equated the Alon–Tarsi constant with the top Fourier–Walsh coefficient of the function det^n; i.e., up to sign, the Alon–Tarsi constant is

Σ_A (–1)^{σ(A)} det(A)^n,

where the sum is over all n × n zero-one matrices A and σ(A) is the number of zero entries in A. This formula suggests various possibilities. For example, one could try to prove that L_n^{even} – L_n^{odd} ≢ 0 (mod p) where p = 2n + 1 is prime, because in this case det(A)^n must be 0, 1, or –1 modulo p (its square is det(A)^{p – 1}, which is 0 or 1 by Fermat’s little theorem). This would already be a new result for n = 26, and the case n = 6 is small enough to compute explicitly and look for inspiration. Luke Pebody posted the results of some computations in this case.
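
For concreteness, here is the sort of quick brute-force script one could use to play with the formula (a throwaway sketch of my own; it enumerates all 2^{n^2} zero-one matrices, so only very small n are feasible):

```python
# Brute-force evaluation of the sum over all n-by-n zero-one matrices A of
# (-1)^{sigma(A)} det(A)^n, where sigma(A) is the number of zero entries of A.
from itertools import permutations, product
from math import prod

def sign(p):
    # parity of a permutation via its inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def alon_tarsi_sum(n):
    # precompute signed permutations once, then expand det(A) by permutations
    perms = [(sign(p), p) for p in permutations(range(n))]
    total = 0
    for bits in product((0, 1), repeat=n * n):
        A = [bits[i * n:(i + 1) * n] for i in range(n)]
        det = sum(s * prod(A[i][p[i]] for i in range(n)) for s, p in perms)
        total += (-1) ** bits.count(0) * det ** n
    return total

for n in (2, 3, 4):
    print(n, alon_tarsi_sum(n))  # zero for odd n; conjecturally nonzero for all even n
```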

Another possibility, suggested by Gil Kalai, is to consider a Gaussian analogue. Instead of random zero-one matrices, consider random Gaussian matrices and try to understand the Hermite expansion of detn, in particular showing that the coefficient corresponding to all ones is nonzero. This might be easier and might give some insight.

Note also that in the comments to my MathOverflow post, Abdelmalek Abdesselam proposed an analogue of the Alon–Tarsi conjecture for odd n. I do not think that many people have looked at this.

Generalizations and Special Cases

Some generalizations and special cases of the conjecture were mentioned in the comments. Proving the conjecture for graphic matroids or binary matroids would be an enormous advance. There is a generalization due to Jeff Kahn, in which we have n^2 bases B_{ij} and we have to pick v_{ij} ∈ B_{ij} to form an n × n grid whose rows and columns are all bases. Another generalization was prompted by a remark by David Eppstein: Suppose we are given n bases B_1, …, B_n of a vector space of dimension m ≤ n, and suppose we are given an n × n zero-one matrix with exactly m 1’s in every row and column. Can we replace each 1 in the matrix with a vector in such a way that the m vectors in row i are the elements of B_i and such that the m vectors in every column form a basis?

Juan Sebastian Lozano suggested the following reformulation: Does there exist a group G such that V is a representation of G and there exist g_i ∈ G such that g_i B_i = B_{i+1}, and for every vector b ∈ B_1,

span{g_0 b, …, g_{n–1} b} = V,

where g_i = g_i g_{i–1} ⋯ g_1 and g_0 is the identity?

Other Ideas

Fedor Petrov mentioned a theorem by him and Roman Karasev that looks potentially relevant (or at least the method of proof might be useful). Let p be an odd prime, and let V be the F_p-vector space of dimension k. Denote V* = V \ {0} and put m = |V*|/2 = (p^k – 1)/2. Suppose we are given m linear bases of the vector space V

(v_{11}, …, v_{1k}), (v_{21}, …, v_{2k}), …, (v_{m1}, …, v_{mk}).

Then there exist pairwise distinct x_1, …, x_m, y_1, …, y_m ∈ V* and a map g : [m] → [k] such that for every i ∈ {1, …, m} we have y_i – x_i = v_{i,g(i)}.

Gil Kalai notes that the Alon–Tarsi conjecture is related to the coloring polynomial of a graph and asks if we can learn anything by considering more general polynomials such as

Π {(x_i – λ_e x_j) : i < j, {i,j} = e ∈ E(G)},

where the λ_e are weights associated to the edges e.

43 Comments »

  1. Thank you for this wonderful summary!

    Here are some very preliminary thoughts which might be of interest. I was thinking that a perhaps instructive structure might be the exterior algebra on V. There are three things that are attractive about this:

    1) If we consider Rota’s basis conjecture to be a statement on the existence of certain non-zero elements in the n^{th} power of the graded algebra, then prima facie, it looks like the idea of a paving matroid is exactly a vector space upon which you can induct on the dimension of V when proving the existence of these non-zero elements. I don’t think that this will give new information about the matroid problem — in fact it seems to me that the obstruction to doing this more generally is exactly the combinatorial/independence problem which the more general matroid approach aims to solve. However, if there is some structure specific to the vector space case, as opposed to the matroid, it seems the exterior algebra would be the natural place to capture it.

    2) I do not fully understand the discussion of the Alon–Tarsi Conjecture above, but it seems that if we are talking about the signs of permutations (i.e. looking for odd vs. even Latin squares, where the elements in our squares are basis elements), then one might be able to recover the formula given by Rebecca Stones by considering the action of S_n on the highest exterior power in the exterior algebra. This might lend itself to some generalization (it seems to me that this is roughly the context of the mathoverflow question you link on the Alon–Tarsi Conjecture for odd n, not with the exterior algebra, but with the tensor algebra, but that was just at a glance).

    3) Speaking of my reformulation on the last post: one can probably make a representation theoretic reformulation on the exterior algebra (e.g. maybe using the ‘induced representation’, meaning the representation passed into the exterior algebra) in which one might get explicit conditions on the determinants of the things in the image of the homomorphism into End(V).

    Anyway, my general point is simply that the exterior algebra might be a good place to think about the intersection of some of the approaches, which might help in solving the problem, even if I doubt it is anywhere near enough on its own.

    Comment by Juan Sebastian Lozano — March 7, 2017 @ 12:24 am | Reply

  2. Small comment on the algebraic geometry, still with very little content: Draisma’s comment is a much better way of saying what I was trying to say. Or rather, you might say: the content of my question is to ask what it would *mean* for a degenerate (e.g. Borel-fixed) object to be a “counterexample” to Rota (since it wouldn’t literally be subject to Rota’s conjecture in its original form.) I also want to endorse Draisma’s bringing up of the symmetry group GL_n × (S_n ⋉ S_n^n), which acts on every vector space in this whole story and their symmetric powers too. What do we know about the ring of invariants?

    Comment by JSE — March 7, 2017 @ 5:04 am | Reply

    • I briefly discussed Draisma’s idea with Patrick Brosnan. One somewhat annoying fact is that Rota’s Basis Conjecture (or any variation or generalization that I have seen) hypothesizes that certain sets are bases, and being a basis is an open condition rather than a closed condition. I’m not sure how serious an obstacle this is?

      Comment by tchow8 — March 28, 2017 @ 3:00 pm | Reply

  3. Have people seen this paper that just came out? It should be checked.
    Carlos Gamas, ON THE CONJECTURE OF ALON-TARSI, Gulf Journal of Mathematics Vol 4, Issue 4 (2016) 108-116

    Abstract. The Alon-Tarsi Conjecture states that for even n, the number of even latin squares of order n differs from the number of odd latin squares. In this note we prove that this conjecture is true if and only if there exists a permutation ζ ∈ S_{n^2} and a spherical function, φ, such that φ(ζ)\ne 0.

    Comment: it looks like there are a couple of misprints in this paper. Hopefully they are not serious. (Both the spherical function and the permutation are defined at the beginning of the paper. So the actual statement in the abstract could be misleading?)

    Comment by David Glynn — March 7, 2017 @ 9:33 am | Reply

  4. It seems that the Gaussian analog is actually equivalent to the Alon-Tarsi conjecture.

    It can be formulated as follows:

    Let n be even and let X=(x_{ij}) be a random n \times n matrix. Then the expectation

    \mathbb E (det (X)^n \prod_{1 \le i,j \le n} x_{ij}) is non zero.

    (Perhaps it is positive.)

    We can take X to be a random Boolean matrix with \pm 1 entries or a random Gaussian matrix with entries distributed N(0,1).
    There is maybe some hope that because we know a lot about random Gaussian matrices (and correlation inequalities for Gaussian measures) there will be some other way/formula showing that the expectation in question for the Gaussian case is positive.
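
    As a tiny sanity check (a throwaway computation, not needed for the argument), the \pm 1 case can be evaluated exactly by brute force when n = 2:

    ```python
    # Exact value of E[det(X)^n * prod x_{ij}] for X with i.i.d. +/-1 entries
    # and n = 2, by enumerating all 16 sign matrices.
    from itertools import product
    from fractions import Fraction

    n = 2
    total = Fraction(0)
    for a, b, c, d in product((-1, 1), repeat=4):
        det = a * d - b * c
        total += det ** n * a * b * c * d
    print(total / 2 ** (n * n))   # nonzero (and negative) for n = 2
    ```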

    Comment by Gil Kalai — March 7, 2017 @ 2:00 pm | Reply

    • Sorry to be dense, but why is the Gaussian analog equivalent to the Alon-Tarsi conjecture?

      Comment by tchow8 — March 10, 2017 @ 12:09 am | Reply

      • Suppose the elements of X are independent, symmetric about 0 and each have non-0 finite variance. We do not assume identical distributions.

        Express \mathrm{det}(X)^n\Pi_{i,j}x_{i,j} as a polynomial in the x_{i,j}.

        It is a homogeneous polynomial of degree 2n^2 in n^2 variables, in which each monomial contains each variable at least once.

        The expected value of any monomial is 0 if it has a variable raised to an odd power (since each variable is symmetric around 0 and they are all independent). Any monomial that is not of such a form must include every variable at least twice. By the degree condition, it must include each variable exactly twice.

        Therefore the expectation of \mathrm{det}(X)^n\Pi_{i,j}x_{i,j} is equal to c_n times the product of the variances of the cells, where c_n is independent of the distribution of the cells, and is equal to the \Pi_{i,j}x_{i,j} coefficient in \mathrm{det}(X)^n.

        Finally: \mathrm{det}(X)^n can be written as the sum, over n-tuples of permutations \sigma_1, \ldots, \sigma_n, of the product of the signs of the sigmas times \prod_{i,j} x_{i, \sigma_j(i)}, and this monomial equals \Pi_{i, j}x_{i, j} precisely when the sigmas form the rows of a Latin square.

        Therefore Alon-Tarsi is equivalent to the expectation of \mathrm{det}(X)^n\Pi_{i,j}x_{i,j} being non-0.
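
        For very small n this coefficient can also be extracted symbolically; a quick sympy sketch (function name is ad hoc):

        ```python
        # Coefficient of prod_{i,j} x_{ij} in det(X)^n, extracted symbolically.
        from sympy import symbols, Matrix, Poly, Mul

        def top_coefficient(n):
            xs = symbols(f'x:{n * n}')                 # one symbol per entry of X
            X = Matrix(n, n, list(xs))
            return Poly(X.det() ** n, *xs).coeff_monomial(Mul(*xs))

        for n in (2, 3):
            print(n, top_coefficient(n))   # nonzero for n = 2, zero for n = 3
        ```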

        Comment by Luke Pebody — March 28, 2017 @ 6:38 pm | Reply

  5. Tim asked me to write something about online versions of Rota’s Basis
    conjecture. Here, the n bases arrive one at a time, and after you receive
    the i-th basis, you have to decide immediately in what order to write it
    as i-th row of the array you are creating. Of course, the goal is that
    the columns of the eventual array are again independent. Is
    there an algorithm that does this for i up to and including n? There
    are at least two versions of this question:

    (1) The input consists of vectors in a vector space. In this case,
    for _even_ n, the online version still follows from Alon-Tarsi (in
    char 0), and for _odd_ n any online algorithm can be fooled in the
    last steps (if the field contains sufficiently many roots of unity);
    see https://arxiv.org/pdf/1312.5953.pdf

    (2) The input consists of elements in an abstract matroid. In other
    words, when the new basis arrives, you get told which subsets of
    the elements so far are independent, but not more. In this case, an online algorithm
    can be fooled regardless for any n≥3 (even with linearly realisable matroids); see page 9 of
    https://arxiv.org/pdf/1312.5953.pdf

    This leads to a question that I think might prove tractable: determine
    the maximal number f(n)<n (n≥3) such that there exists an online
    algorithm in scenario (2) for arranging f(n) bases, but not for arranging
    f(n)+1 bases. E.g., does f(n) grow as a square root of n?

    Comment by Jan Draisma — March 7, 2017 @ 3:57 pm | Reply

    • No, I’m pretty sure f(n) is linear, at least n/4+c and at most n/2+c. Indeed you can do k+1 bases in 4k dimensions (at least for linearly realisable matroids), but you can’t do them in 2k-1 dimensions.

      For the lower bound, you simply use an extension of the argument from the linked paper above: There exist vectors v_{i,j}\in\mathbb{R}^{2k-1} for 1\le i\le k and 1\le j\le 2k-1 such that they are in general position (i.e. any 2k-1 of them are linearly independent), but the vectors v_j=\sum_{i=1}^kv_{i,j} for 1\le j\le 2k-1 are the same (non-zero) vector. (Proof: fix a prime p that is greater than \frac{((2k-1)k)!}{(2k-1)!((2k-1)(k-1))!}, and choose the sum v_1=v_2=\ldots=v_{2k-1} and the vectors v_{i,j} with i […] if A, B, C and D are independent sets in a matroid with C contained in A\cup B and with |C|+|D|>|A|+|B|, then at least one of A and B can be extended by some element of D.

      This is true for linearly realisable matroids because the space generated by the union of A and B is of dimension at least |C|, so the intersection of the spaces generated by A and by B is of dimension at most |A|+|B|-|C|, so is less than |D|, so there is some element of D that is not in the intersection of those spaces, so there is some element of D that is either not in the space generated by A or the space generated by B.

      So our strategy in 4k dimensions is: when the jth basis arrives (for 1\le j\le k+1), make it so the first k+1-j sets of 4 columns each form an independent set of size 4j, and then use Hall’s Basis Theorem to make the other columns all be independent.

      It is clear by the basis exchange axiom that we can make the first k+1-j sets of 4 columns each form an independent set of size 4j (we start with an independent set of size 4j-4 and have 4j independent vertices to choose from).

      For the remainder, we have 4(j-1) columns (of j-1 vectors each) of which the first pair of pairs of columns both make a matroid of dimension at least 2(j-1), the next pair of pairs of columns both make a matroid of dimension at least 2(j-2), and so on until the final pair of pairs of columns both make a matroid of dimension at least 2.

      Create a bipartite graph with partition S_1 of the 4(j-1) columns and S_2 of the 4(j-1) vectors we need to add in from the latest basis, where an edge exists if the column and the vector make an independent set (of size j).

      Each vertex in S_1 makes a matroid of dimension j-1, so there are at least 3(j-1) of the vertices in S_2 adjacent. In particular if T is a subset of S_1 is of size at most 3(j-1), the total number of neighbours of T is at least |T|. If, on the other hand, T is of size 4(j-1)-p where p < (j-1), it includes both columns from one of the first p+1 pairs of columns, so it includes two pairs of columns which together make a matroid of dimension at most 2j-2-p. So these two columns (let’s call them A and B) are of size j-1, and their union contains a set C of size 2j-1-p, so if D is any set of size more than p, either A or B is extendible by an element of D. In particular, there are at most p - 1 elements of S_2 which are not adjacent to either A or B, so there are at least 4(j-1)-p=|T| neighbours of T.

      So the conditions of Hall’s Marriage Theorem apply, and we can add the remaining vectors of the basis to the matrix and make each column independent.

      My instincts are that this proof can be improved to actually show that you can do 2k, solving the problem completely. I did this for k=2, but couldn’t quite do so for k=3.

      Comment by Luke Pebody — March 25, 2017 @ 5:38 pm | Reply

      • If someone can edit this to make the latex things work, that would be very welcome. It seems to have caused a huge gap including the end of the proof of the upper bound and the start of the proof of the lower bound. Is there a ‘preview my post’ feature?

        Comment by Luke Pebody — March 25, 2017 @ 5:43 pm | Reply

        • I tried to fix your LaTeX; hopefully this is what you wanted. I’m new to WordPress myself so I am not aware of a “preview” feature; I agree that that would be nice.

          Comment by tchow8 — March 26, 2017 @ 9:16 pm | Reply

          • Thanks Tim, that looks a lot better.

            It has occurred to me the Lemma I needed (if A, B, C and D are independent sets in a matroid with C contained in A\cup B and with |C|+|D|>|A|+|B| then either A or B can be extended by some element of D) is true by the submodularity of the rank function applied to A\cup D and B\cup D.

            The rank of the intersection of A\cup D and B\cup D is at least |D| clearly, and the rank of their union is at least |C|. It follows that the sum of their ranks is more than |A|+|B|, and hence either the rank of A\cup D is more than |A| (and hence A is not a maximal independent set in A\cup D) or the rank of B\cup D is more than |B| (and hence B is not a maximal independent set in B\cup D).

            Comment by Luke Pebody — March 26, 2017 @ 9:57 pm | Reply

      • Since this turned out pretty unreadable, I threw together some latex of the two proofs at https://www.overleaf.com/8773999gccdbdmfdgkm

        Comment by Luke Pebody — March 26, 2017 @ 4:36 pm | Reply

        • Pretty sure the same proof works to show you can do k+1 bases in 3k dimensions. What I’m using the groups of 4 for is that if you remove 0 or 1 columns, there’s still a pair remaining that works together. The same would be true with groups of 3.

          Comment by Luke Pebody — March 26, 2017 @ 5:12 pm | Reply

    • So I think it might be possible to extend this method to fully solve the question for online matroids.

      I feel that the answer is that you can do k + 1 bases in rank 2k, but can’t necessarily do them in rank 2k-1. So far it is proved you can in rank 3k and can’t in rank 2k-1.

      In order to place the (k+1)’th basis, it is sufficient to achieve that some 2 columns so far together have rank 2k, that if you cover up any column there are 2 columns remaining that together have rank at least 2k-1, that if you cover up any 2 columns there are 2 columns remaining that together have rank at least 2k-2 and so on (up to if you cover up any k-2 columns there are 2 columns remaining that together have rank at least k+2).

      We stop at k-2, because it is trivial that if you cover up any k-1 columns there are 2 columns remaining that together have rank at least k+1: otherwise in the remaining k x (k+1) grid all the columns are maximal independent sets and all the rows are larger independent sets.

      For k=2, you want to achieve that some 2 columns have rank 4, which is trivial to achieve in rank 4 by extending independent sets.

      For k=3, you want to achieve both that some 2 columns have rank 6 (trivial in rank 6 by extending independent sets) and also that if you cover up any column, some 2 uncovered columns have rank at least 5. Wondering if maybe someone can see how to achieve this in rank 6? (Or prove it can’t always be done)

      In particular, for any k, we would be done if we could achieve that after the k’th basis, we can find k disjoint pairs of columns such that one pair combined has rank 2k, one pair has rank 2k-1, one pair has rank 2k-2, …, one pair has rank k+1. Maybe you can even arrange it so you can find k disjoint pairs of columns such that each pair combined has rank 2k?

      Comment by Luke Pebody — March 28, 2017 @ 6:14 am | Reply

      • I haven’t gotten around to studying your argument yet, but I just wanted to say that fully solving the question for online matroids would be an excellent partial result and I hope that others will jump in to help.

        Comment by tchow8 — March 28, 2017 @ 3:08 pm | Reply

      • So far I could only digest the lower bound. I think it might be useful to summarize it as follows. In an n-dimensional space, we pick a line, and n subspaces of dimension n/2 that all contain it, but are otherwise generic. For each subspace, take n/2 generating vectors. Then any n of these n x n/2 vectors are independent. So if we get them in a matrix as n/2 rows of length n, we can do nothing but randomly put them in columns. If we are unlucky, we might just group them exactly such that each column generates one of the n/2-dimensional subspaces. But then if the next row we get contains a vector from the line they have in common, we are #$&!ed.

        Comment by domotorp — March 29, 2017 @ 11:25 am | Reply

  6. This goes back to an earlier question of Jordan’s: in what way could this conjecture be made more general? I propose the following: take any n by n array of nonzero vectors such that for any 1\leq k \leq n-1, no matter how we pick k vectors from distinct rows, on each of the remaining rows we can find a vector not in their span. This condition is easily checked when each row is a basis, or in the example where the vectors in the first column are linearly independent and each row consists of scalar multiples of its first vector.
    One motivation for this is: we draw an n-partite graph on the rows and draw an edge between vertices in different parts if they are independent. Rota would say that each vertex is connected to at least n-1 vertices in any other part, and then we want to find n disjoint paths through the parts of the graph. Jordan’s example is even richer in edges: every vertex has degree n.
    The condition stated above seems very reminiscent of the marriage lemma condition.

    Comment by VladM — March 7, 2017 @ 7:19 pm | Reply

    • Once again, this fails on Colin McDiarmid’s example. Denote the edges of K_4 by 12, 13, 14, 23, 24, 34 and create duplicates 14', 24', 34' of three of the edges. Then the sets \{14, 14', 23\}, \{24, 24', 13\}, and \{34, 34', 12\} satisfy your condition but the conclusion of Rota’s Basis Conjecture fails.

      I think it is going to be difficult to formulate a satisfactory generalization of this type without coming to grips with the “local obstructions” that I mentioned above.

      Comment by tchow8 — March 7, 2017 @ 7:30 pm | Reply

  7. On Matroids with No Small Circuits.

    A paving matroid is one in which the sets of size less than the rank are
    all independent. With Peter Humphries I proved that:

    Theorem.
    Let B_1,\ldots ,B_n be disjoint sets of size n \ge 3, and let
    M_1 \ldots ,M_n be rank n paving matroids on \cup_i B_i
    such that B_i is a basis of M_i for each i \in \{1,\ldots,n\}.
    Then there exist n disjoint transversals A_1,\ldots,A_n of
    B_1,\ldots ,B_n such that A_i is a basis of M_i for each
    i \in \{1,\ldots ,n\}.

    This strengthening of Rota’s Basis Conjecture fails for matroids
    in general and for n=2 for paving matroids. After an unpleasant
    case analysis establishing the result for the n=3 case, the result
    is proved by an easy inductive argument.

    Tim is proposing extending the results in our paper, for any fixed
    k>1, to rank-n matroids in which the (n-k)-element sets are
    all independent. It could well be true that the above theorem itself
    would hold for such matroids so long as n is large relative
    to k. To prove such a result one would likely need to do a lot
    of work to establish it for some value of n, but then the induction
    should be easy.

    I don’t see this leading to a full proof of Rota’s Basis Conjecture,
    but it is a concrete approach toward interesting partial results.

    On a related note, it is conjectured that the proportion of n-element
    matroids that are paving tends to one as n goes to infinity. Given the
    exciting new results of Jorn van der Pol and Rudi Pendavingh on the
    structure of almost all matroids one might be able to prove that
    Rota’s Basis Conjecture (for matroids) is almost always true.

    Comment by Jim Geelen — March 7, 2017 @ 8:56 pm | Reply

    • I have been reading the Geelen-Humphries paper and I would like to explain here the idea behind their induction step, which is the part of their argument that I think is most valuable if we are thinking of extending the results.

      The starting point is the observation that in Rota’s Basis Conjecture, it is easy to arrange for the first column of the grid to be a basis. It is then natural to imagine covering up the first row and column and applying some kind of induction hypothesis to the remaining (n-1)\times (n-1) grid. However, several issues arise if we try to do this naively. First of all, the rows of the (n-1)\times (n-1) grid aren’t bases of an (n-1)-dimensional vector space; they’re still sitting inside an n-dimensional vector space. Secondly, and more seriously, even if we were to get the columns of the (n-1)\times (n-1) grid to all be linearly independent, there’s no guarantee that when we return to the original grid and add back the vector v_{1j} from the first row, the jth column will still be linearly independent.

      The first idea that Geelen and Humphries have to try to address these problems is contraction. For those not familiar with matroid theory, contraction may be thought of as orthogonal projection: If M is a set of nonzero vectors in a real vector space and v\in M, then the contraction of M by v (written M/v) is obtained by orthogonally projecting all the vectors of M onto the hyperplane perpendicular to v, and deleting v itself (which has been projected to zero, of course). This has the effect of reducing the rank by one, and also has the nice property that if we take any basis of the contraction and add back the original vector v, we get a basis for the original vector space. In Rota’s Basis Conjecture, the idea is that for each column j, we contract by v_{1j}. This (seemingly) addresses both issues in the previous paragraph.
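
      (For concreteness, here is a throwaway numpy illustration of contraction as orthogonal projection; the particular vectors are just an example of my own.)

      ```python
      # Contract a set of vectors in R^3 by v: project the others onto the
      # hyperplane orthogonal to v, then check that a basis of the contraction
      # together with v is a basis of the original space.
      import numpy as np

      M = [np.array([1., 0., 0.]), np.array([1., 1., 0.]),
           np.array([0., 1., 1.]), np.array([1., 1., 1.])]
      v = M[0]

      P = np.eye(3) - np.outer(v, v) / (v @ v)     # orthogonal projector onto v-perp
      contraction = [P @ w for w in M[1:]]         # the contraction M/v

      B = contraction[:2]                          # a basis of M/v in this example
      print(np.linalg.matrix_rank(np.column_stack(B)))        # 2 = rank of M/v
      print(np.linalg.matrix_rank(np.column_stack(B + [v])))  # 3: adding v back gives a basis
      ```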

      However, in the process of seemingly solving these issues, we’ve introduced new issues. The vectors in the first row are all distinct (in fact, linearly independent) so contracting by v_{1j} can give a totally different result from contracting by v_{1j'} if j\ne j'. This is why Geelen and Humphries end up proving something about n different matroid structures on the same ground set. Another issue is, if we contract by v_{1j}, how do we know that we’ll get a basis (for the contraction) in the jth row? We somehow need to ensure that if we combine v_{1j} with the vectors v_{j2}, v_{j3}, \ldots, v_{jn}, the result is a basis of the original space.

      This latter condition is something that Geelen and Humphries state explicitly as a lemma, and prove using the assumption that we are dealing with paving matroids. Namely, they prove that if there are no linearly dependent sets of size less than n (actually, as mentioned in the previous paragraph, they prove this for n different paving matroid structures on the ground set, but I’ll ignore this for simplicity), then it is possible to find v_{j1} \in B_j for all j\in \{2, 3, \ldots, n\} so that \{v_{11}, v_{21}, v_{31}, \ldots, v_{n1}\} is a basis and also \{v_{1j}, v_{j2}, v_{j3}, \ldots, v_{jn}\} is a basis for all j\in \{2, 3, \ldots, n\}.

      The interesting thing about this lemma is that it is a type of basis exchange property. We’re starting with n bases and then we’re simultaneously swapping out n-1 elements from the first basis with one element from each of the other bases in a way that leaves everything a basis at the end. As indicated above, this particular basis exchange property does not hold in full generality, but relies on the assumption that there are no small linearly dependent sets. Geelen and Humphries prove this lemma by induction. Here I want to focus not on the induction step of this argument, but on the argument that lets them prove the base case, because it illustrates something that I think is of general interest for Rota’s Basis Conjecture.

      The key claim is the following: Suppose we have a matroid with no dependent sets of size less than k, where k \ge 3. Let I_1 and I_2 be two disjoint independent sets of size k, and let S_1 be any subset of I_1 of size 2 and let S_2 be any subset of I_2 of size 3. Then there exists x \in S_1 and y\in S_2 such that if we “swap” x and y, then the resulting sets I_1 \backslash x \cup y and I_2 \backslash y \cup x are independent.

      How do we prove this claim? First we note the following: Denote the two elements of S_1 by x and x', and let y be arbitrary; then we claim that either I_1 \backslash x \cup y or I_1 \backslash x' \cup y is independent. For if not, they are both linearly dependent, and since there are no dependent sets of size less than k, they are both circuits (minimal dependent sets). They are distinct and both contain y, so by the circuit elimination axiom of matroid theory, their union, minus y, must contain a circuit. But their union minus y is a subset of I_1 itself, which is independent; contradiction. So, either I_1 \backslash x \cup y or I_1 \backslash x' \cup y is independent. Now apply this argument to each element y \in S_2 in turn; by the pigeonhole principle, there must be some y_1, y_2\in S_2 such that both I_1 \backslash x \cup y_1 and I_1 \backslash x \cup y_2 are independent (or else this claim holds for x', but without loss of generality we may assume that it is x). Then, applying the argument at the beginning of this paragraph with I_2 in place of I_1 and y_1, y_2 in place of x, x', we can infer that for one of y_1 or y_2—and without loss of generality we may assume it is y_1—we must have I_2 \backslash y_1 \cup x is independent as well.

      The reason I think the argument in the previous paragraph is interesting is that it can be interpreted as saying that the absence of small circuits implies a weak form of base-orderability. Base-orderability is, I think, a crucial concept for the conjecture; it is known that strongly base-orderable matroids not only satisfy Rota’s Basis Conjecture but also have no local obstructions, so they satisfy various strengthenings of the conjecture. I may say something more about this later, but this comment is already long enough!

      Comment by tchow8 — March 9, 2017 @ 5:16 pm | Reply

      • The argument seems correct, except that it seems that half of the time you wrote independent instead of dependent; so in the claim, we need to suppose that there is no dependent set of size less than k. Also, let me remark that it might be better to suppose from the beginning that I_1 and I_2 are disjoint (otherwise the statement is not necessarily true).

        Comment by domotorp — March 9, 2017 @ 9:06 pm | Reply

        • I fixed the dependent/independent typos; thanks!

          What’s a counterexample when I_1 and I_2 are not disjoint?

          Comment by tchow8 — March 9, 2017 @ 10:54 pm | Reply

          • If I_1=I_2 and S_1 \cap S_2=\emptyset.

            Comment by domotorp — March 10, 2017 @ 9:11 am | Reply

            • Ah, I see. Technically, I_1 \backslash x \cup y and I_2 \backslash y \cup x are independent in this case as well, but they now have smaller cardinality than the original independent sets, and the term “swap” is misleading, so I agree that it is better to specify that I_1 and I_2 are disjoint.

              Comment by tchow8 — March 20, 2017 @ 4:59 pm | Reply

    • I think that the prospects for generalizing Geelen–Humphries are good. Here are some thoughts.

      Definition. Let k be a non-negative integer. Say that a matroid of rank n is k-paving if it has no circuits of size less than n-k+1.

      Thus a 0-paving matroid is a uniform matroid and a 1-paving matroid is a paving matroid in the usual sense. I believe that it is straightforward to prove the following Proposition (though it would be good for someone to verify):

      Proposition. For any fixed k, the class of k-paving matroids is minor-closed; i.e., if you delete or contract an element of a k-paving matroid, the result is a k-paving matroid.

      My first thought is that it is natural to try to generalize the Geelen–Humphries result to k-paving matroids.

      A second observation is that, as I alluded to in an earlier comment, a key lemma in their paper is the following:

      Key Lemma. Let B_1, \ldots, B_n be disjoint sets of size n \ge 3, and let M_1, \ldots, M_n be rank-n paving matroids on \bigcup_i B_i such that B_i is a basis of M_i for each i\in \{1, \ldots, n\}. Then there is an ordering of the elements of B_1 as a_1, \ldots, a_n and a transversal \{b_2, \ldots, b_n\} of \{B_2, \ldots, B_n\} such that the set (B_1 \backslash \{a_2, \ldots, a_n\}) \cup \{b_2, \ldots, b_n\} is a basis of M_1, and for all j \in \{2, \ldots, n\}, (B_j \backslash b_j) \cup a_j is a basis of M_j.

      One thing that I realized recently is that if all the matroids M_i are identical, with M_1 = \cdots = M_n = M say, then the Key Lemma holds for any matroid M, not just paving matroids. This follows from the following result, which I believe was first proved in “Comments on bases in dependence structures” by Richard Brualdi, Bull. Austral. Math. Soc. 1 (1969), 161–167.

      Strong Basis Exchange. If B_1 and B_2 are bases of a matroid, then for any e \in B_1 there exists f\in B_2 such that both (B_1 \backslash e) \cup f and (B_2 \backslash f) \cup e are bases.

      To prove the Key Lemma when M_1 = \cdots = M_n = M, use Strong Basis Exchange to swap each a_j in turn with some b_j \in B_j, as j runs from 2 to n.
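
      (Strong Basis Exchange is easy to play with numerically; here is a throwaway numpy check of my own that finds the promised element f by brute force on a random example.)

      ```python
      # Brute-force illustration of Strong Basis Exchange for vector matroids:
      # given bases B1, B2 of R^3 and e in B1, find f in B2 so that both
      # (B1 \ e) + f and (B2 \ f) + e are again bases (i.e. have full rank).
      import numpy as np

      def is_basis(vectors):
          return np.linalg.matrix_rank(np.column_stack(vectors)) == len(vectors)

      rng = np.random.default_rng(0)
      B1 = [rng.standard_normal(3) for _ in range(3)]
      B2 = [rng.standard_normal(3) for _ in range(3)]
      assert is_basis(B1) and is_basis(B2)

      e = B1[0]
      for f in B2:
          if is_basis([f] + B1[1:]) and is_basis([e] + [w for w in B2 if w is not f]):
              print("found an exchange partner f for e")
              break
      ```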

      I think that the Key Lemma is false in general if the M_i are not all identical and are not all paving matroids (though I don’t have a counterexample offhand; maybe someone else can construct one). Certainly Brualdi’s proof breaks down. However, I suggest the following plan: Tinker with the statement of the Key Lemma, as well as the statement of the Theorem in Geelen’s comment above, to get the induction step of Geelen–Humphries to work for a more general class of matroids than paving matroids (perhaps k-paving matroids, or perhaps an even more general class of matroids). This seems eminently doable to me. Of course, even if this plan succeeds, it doesn’t prove much by itself in the absence of a proof of the base case. But I think it would be an important step forwards.

      Comment by tchow8 — March 20, 2017 @ 6:16 pm | Reply

      • Here are some further thoughts about the “tinkering” I mentioned above.

        Suppose that we modify the Theorem in Geelen’s comment by replacing “paving matroid” with “2-paving matroid.” I don’t know if this modified assertion is true, but suppose we try to prove it using similar methods. If I haven’t made a mistake, the class of 2-paving matroids is minor-closed, so the same strategy, of using contraction to reduce to a smaller instance, has a fighting chance of working. In fact, I think that if the Key Lemma holds for 2-paving matroids then the inductive step (though not necessarily the base case) of the proof of the Theorem should go through.

        However, some trouble arises when we try to prove the Key Lemma by mimicking the argument of Geelen and Humphries. I think you can only conclude that if you pick any three elements of a basis then you can swap one of them out with an element of a disjoint basis of another 2-paving matroid (provided that the rank is at least seven), and this lets you get (B_1 \backslash \{a_2, \ldots, a_{n-1}\}) \cup \{b_2, \ldots, b_{n-1}\} to be a basis as well as (B_j\backslash b_j) \cup a_j to be a basis for j \in \{2, \ldots, n-1\}, but then you get stuck at the last step j = n.

        Now maybe the Key Lemma is still true, and one just needs a different argument. On the other hand, here’s another possible diagnosis of what’s going wrong: We know that the Key Lemma is true for all matroids when all the matroid structures are equal, and what’s causing us trouble is that we’re trying to prove it for a whole sequence of distinct matroid structures M_1, \ldots, M_n. It seems difficult to avoid considering different matroid structures on the same ground set if we stick to the strategy of contracting a different vector for each column. Nevertheless, from the point of view of Rota’s Basis Conjecture, we don’t necessarily need to prove the Key Lemma for arbitrary M_i. For example, the M_i that arise in the (purported) inductive proof of Rota’s Basis Conjecture are all minors of the same initial matroid. So maybe what we need to do is to narrow our sights a little and only try to prove the Key Lemma for matroids M_i that enjoy some degree of similarity, that is preserved when we perform certain sequences of deletions and contractions from the same starting matroid. By the same token, the Theorem itself might need to be similarly narrowed. These narrowings might make the Key Lemma (and the Theorem) easier to prove, yet still general enough to imply Rota’s Basis Conjecture.

        Comment by tchow8 — March 21, 2017 @ 8:44 pm | Reply

        • Question. Let M be a base-orderable matroid with ground set E, and let E' \subseteq E. Let M_1 and M_2 be matroid structures on E', each of which is obtained by a sequence of deletions and contractions of the elements of M \backslash E' (but not necessarily the same sequence of deletions and contractions; so although M_1 and M_2 are both minors of M and have the same ground set E', they are not necessarily isomorphic). Suppose further that M_1 and M_2 have the same rank. Is it necessarily the case that if B_1 is a basis of M_1 and B_2 is a basis of M_2, then there exists a bijection f : B_1 \to B_2 such that for all e\in B_1, (B_1 \backslash e) \cup f(e) is a basis of M_1 and (B_2 \backslash f(e)) \cup e is a basis of M_2?

          It is plausible to me that the answer is yes. For example, note that an independent set of a minor is independent in the original matroid, so both B_1 and B_2 are necessarily independent in M. It seems plausible that we could extend both of them to bases in M, use the base-orderability of M to get a bijection, and then restrict the bijection. Or something like that.

          If the answer is yes, then I think that the Geelen–Humphries induction argument goes through for base-orderable matroids. In the Theorem of Geelen’s comment above, we replace “paving matroid” by “base-orderable matroid,” and we also require that all n matroid structures arise from a sequence of deletions and contractions from the same starting matroid. We modify the Key Lemma similarly. The Key Lemma then follows from a positive answer to the Question, because we can swap each a_j in turn with some b_j \in B_j as j runs from 2 to n. Then we can apply the induction hypothesis to the contractions M_i/a_i restricted to the set (B_2 \cup \cdots \cup B_n) \backslash \{b_2, \ldots, b_n\}.

          Comment by tchow8 — March 29, 2017 @ 1:23 am | Reply

          • Unfortunately, the answer to the Question is no. Let M be the matroid associated with the graph that is a disjoint union of two triangles T_1 and T_2. Choose e_1 \in T_1 and e_2 \in T_2, and let M_1 = M / e_1 \backslash e_2 and let M_2 = M / e_2 \backslash e_1. Then all these matroids are base-orderable, and M_1 and M_2 are even abstractly isomorphic (but not equal), but it is easy to check that the desired bijection f does not necessarily exist.

            Comment by tchow8 — March 29, 2017 @ 5:51 pm | Reply

  8. Let me say a bit more about base-orderability. A matroid is strongly base-orderable if, for any two bases B_1 and B_2, there exists a bijection f:B_1\to B_2 such that for any subset S\subseteq B_1, the set B_1 \backslash S \cup f(S) is a basis. It is not hard to show that this axiom implies that B_2 \backslash f(S) \cup S is also a basis. Strong base-orderability is a very strong condition. We have the following result:

    Proposition. Let M be a strongly base-orderable matroid of rank n \ge 2 with 2n elements, and suppose that there exists a partition of M into 2 disjoint bases. Given any n disjoint independent sets I_1, \ldots, I_n with 0 \le |I_i| \le 2, there exists an n\times 2 grid such that the elements of I_i appear in row i and such that both columns are bases of M.

    Sketch of proof: Let B_1 and B_2 be disjoint bases of M. Denote the elements of B_1 by 1, 2, \ldots, n. Let f be the bijection guaranteed by strong base-orderability, and write 1' for f(1), 2' for f(2), etc. Construct any n\times 2 grid such that the elements of I_i appear in row i. Then a Hall’s marriage theorem argument along the lines of Proof 2 of Theorem 1 in this paper shows that we can permute the elements within each row so that both columns contain the numbers 1 through n if we erase the primes. Strong base-orderability implies that the two columns must be bases.

    Theorem 4 of this paper then implies that the generalization of the above Proposition with 2 replaced by any larger number is also true, and so this yields a proof of Wild’s theorem that Rota’s Basis Conjecture holds for strongly base-orderable matroids.

    As already discussed in earlier comments, there are local obstructions (such as K_4 and J) that prevent the above Proposition from being true in general. The point is that these local obstructions vanish in the presence of strong base-orderability. Thus if we wish to search for local obstructions, we should look at non-base-orderable matroids. Another way to think about it is that if we want to generalize Wild’s theorem, we might want to look at weaker base-orderability hypotheses and see how far we can get with them. A very optimistic, but not inconceivable, route to proving Rota’s Basis Conjecture would be to formulate a generalization to n\times k grids that has some kind of weak base-orderability hypothesis H with two properties: (1) when k < n, H blocks annoying counterexamples like K_4 and J, and (2) when k = n, H is automatically satisfied by all matroids.

    Administrative note: I will be unavailable to participate in Polymath for the next week or so. Obviously, this should not prevent others from continuing to work.

    Comment by tchow8 — March 10, 2017 @ 12:00 am | Reply

    • Here is another thought about base-orderability. A matroid is said to be base-orderable (note that I have dropped the adverb “strongly”) if for any two bases B_1 and B_2, there exists a bijection f : B_1 \to B_2 such that for every x \in B_1, the sets (B_1 \backslash x) \cup f(x) and (B_2 \backslash f(x)) \cup x are both bases. The following theorem is proved in J. de Sousa and D. J. A. Welsh, “A characterisation of binary transversal structures,” J. Math. Anal. Appl. 40 (1972), 55–59.

      Theorem. A binary matroid (i.e., a matroid representable by vectors in a vector space over the field with two elements) is base-orderable if and only if it contains no K_4 minor.

      As we have seen, base-orderability (or at least strong base-orderability) makes life easier for the would-be prover of Rota’s Basis Conjecture. So one could imagine a strategy for proving Rota’s Basis Conjecture for binary matroids that proceeds along the following lines: When base-orderability seems to be useful, split into two cases. In Case A, go ahead and assume base-orderability. In Case B, exploit the above theorem to conclude that there must be a K_4 minor lying around somewhere, and do something special to take care of it.

      Comment by tchow8 — March 21, 2017 @ 9:01 pm | Reply

    • I’m having trouble figuring out how to prove that if you delete an element e from a base-orderable matroid M then the resulting matroid M \backslash e is still base-orderable. Of course if the rank r(M \backslash e) equals the rank r(M) then this is obvious, but if r(M \backslash e) = r(M) - 1 then the obvious line of argument is to extend the given bases B_1 and B_2 of M\backslash e to bases B_1' and B_2' of M, then restrict the bijection f : B_1' \to B_2' to B_1. The thing I’m worried about is that maybe e \in B_2 and f sends some element of B_1 to e, in which case f does not restrict to a bijection between B_1 and B_2.

      I’m confident the result is true because I see it stated everywhere. Welsh’s book even gives a proof but it’s essentially the above argument and so it looks to me as though it has a gap. Oxley’s book states it as an exercise and cites the literature, but when I looked up the papers the terminology was sufficiently different that I couldn’t figure out how to extract what I wanted. Can someone else help?

      Comment by tchow8 — March 23, 2017 @ 1:58 am | Reply

      • O.K., I see what I was missing now:

        Fact. Let M be base-orderable. Let B_1 and B_2 be bases of M, and let f : B_1 \to B_2 be a bijection whose existence is guaranteed by the base-orderability axiom. Then f restricted to B_1 \cap B_2 must be the identity map.

        The proof is obvious as soon as you think about it: If e \in B_1 \cap B_2 and f is not the identity map, then (B_2 \backslash f(e)) \cup e contains “two copies of e” and hence cannot be a basis.

        So now returning to the “obvious line of argument” in my comment above, if r(M\backslash e) = r(M)-1 then that must mean that every basis of M contains e. In particular, B_1' and B_2' must both contain e, and by the above Fact, f is the identity on e. So f does indeed restrict to a bijection between B_1 and B_2, and we’re done.

        I am now hopeful about proving Rota’s Basis Conjecture for base-orderable matroids, which would generalize Wild’s result that the conjecture holds for strongly base-orderable matroids.

        Comment by tchow8 — March 28, 2017 @ 2:52 pm | Reply

  9. Some remarks on the Alon-Tarsi conjecture as a correlation inequality (which Tim asked about). (I asked a few random-matrix friends, and this comment also reflects a discussion with Ofer Zeitouni and remarks by Krzysztof Oleszkiewicz and Ron Peled.)
    Start with any symmetric (around zero) real probability distribution D, consider an n \times n random matrix X=(x_{ij}) with entries i.i.d. from D, and consider the expectation

    {\bf E}=\mathbb E (det (X)^n (\prod_{1 \le i,j \le n} x_{ij})).

    When you evaluate this expectation in terms of monomials in the x_{ij}, the contribution of any monomial in which some variable appears to degree one is zero (because D is symmetric around zero). The contribution of the monomial \prod_{i,j} x_{ij}^2 is precisely a signed summation over Latin squares: it comes from products of n disjoint generalized diagonals in the expansion of det(X)^n, and the sign is the product of the signs of the n permutations representing these generalized diagonals.

    This expression does not depend on the distribution; it is zero when n is odd, and the common belief that for even values of n there are more even than odd Latin squares gives (I think) that

    {\bf E} is negative when n \equiv 2 \pmod 4 and {\bf E} is positive when n \equiv 0 \pmod 4. (This should be double-checked.)

    Three comments:

    1) I am less optimistic now that this probabilistic interpretation is useful. For Gaussian matrices there are nice expressions for the joint eigenvalue distribution, but it does not seem to mix well with the product of entries. Of course the key would be to give another expression for this correlation, or a wider context, but I don’t see one.

    2) It would be nice to examine the case for complex distributions and some familiar families of random matrices like random Hermitian matrices.

    3) This observation is essentially Theorem 1.9(e) of Kumar and Landsberg’s paper CONNECTIONS BETWEEN CONJECTURES OF ALON-TARSI, HADAMARD-HOWE, AND INTEGRALS OVER THE SPECIAL UNITARY GROUP.

    Comment by Gil Kalai — March 10, 2017 @ 8:07 am | Reply

    • There are very relevant papers by Jeannette Janssen. One classic paper, The Dinitz problem solved for rectangles, contains a proof, based on the Alon–Tarsi theorem, of a rectangular case of the Dinitz conjecture. It would be very interesting if the polynomial used by Janssen could be used to prove a “rectangular” (thus a bit weaker) case of the Rota conjecture. (Note that a similar, slightly weaker form of the conclusion of the Rota conjecture is conjectured by Ron Aharoni in much greater generality.) The other paper is On even and odd Latin squares. It contains the relation between the various definitions of signs of matrices (mentioned in my previous comment). Another remark is that the conjecture that there are more even Latin squares than odd ones may well be wishful thinking based on a few very small cases. I wonder if somebody can offer a heuristic explanation why this should be true.

      Comment by Gil Kalai — March 14, 2017 @ 8:07 am | Reply

      • As a starting point for a heuristic argument that there are more even Latin squares than odd ones, we might note that the number of even derangements of n minus the number of odd derangements of n is (-1)^{n-1}(n-1). Admittedly, I’m not quite sure what to do with this observation…

        Comment by tchow8 — March 28, 2017 @ 3:11 am | Reply

        • This means that the number of row-even partial 2\times n Latin squares minus the number of row-odd partial 2\times n Latin squares is
          (-1)^{n-1}(n-1)n!.

          Clearly for 3\times n this difference will be 0, so maybe we should instead consider partial m\times n matrices where the first row is 1, 2, \ldots, n.

          Then for m=1 the difference is 1, for m=2, it is (-1)^{n-1}(n-1),
          and for m=3, it is 0, 0, 2, 0, 72, -320, 3600, -32256, …, which is at http://oeis.org/A098276. For m=4, the sequence starts 0,0,0,24,-576,12960.
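
          These signed counts are easy to reproduce by brute force for small n; a quick script (fixing the first row to the identity, as above):

          ```python
          # Sum of the product of row signs over all m x n arrays whose first row
          # is 1, 2, ..., n and whose columns have pairwise distinct entries.
          from itertools import permutations

          def sign(p):
              inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
              return -1 if inv % 2 else 1

          def signed_count(m, n):
              total = 0
              def extend(rows, s):
                  nonlocal total
                  if len(rows) == m:
                      total += s
                      return
                  for p in permutations(range(n)):
                      if all(p[i] != r[i] for r in rows for i in range(n)):
                          extend(rows + [p], s * sign(p))
              extend([tuple(range(n))], 1)
              return total

          print([signed_count(3, n) for n in range(1, 6)])   # should match the m = 3 values above
          ```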

          Comment by Luke Pebody — March 28, 2017 @ 4:59 am | Reply

    • It may be worth mentioning that Levent Alpoge has used the Kumar–Landsberg result to estimate the size of the Alon–Tarsi constant. Unfortunately, from the point of view of Rota’s Basis Conjecture, he provides only an upper bound rather than a lower bound, but I think it is still an interesting result.

      Comment by tchow8 — March 20, 2017 @ 6:29 pm | Reply

  10. […] the occasion of Polymath 12 devoted to the Rota basis conjecture let me remind you about the Alon-Tarsi conjecture and test […]

    Pingback by Test Your Intuition about the Alon-Tarsi Conjecture | Combinatorics and more — March 15, 2017 @ 10:29 am | Reply

  11. Let me provide another possible inductive approach:
    Let S be a sequence split into some subsequences S_1, \ldots, S_r (called color classes).
    We say that a subsequence of S is rainbow if it contains at most one element from each S_i.

    Now the lemma: (from https://arxiv.org/abs/1702.08170)
    Let M be a matroid of rank r and S be a sequence of k\cdot r elements from M,
    split into r subsequences, each of length at most k.
    Then any largest independent rainbow subsequence of S is a basis of M
    if and only if (*) there does not exist an integer s<r and set of s+1 color classes, such that the union of these color classes has rank s.

    The idea would be to remove the largest independent rainbow set, leaving (k-1)r elements, and use recursion. This of course does not always work; in many cases the remaining elements will not satisfy (*), but it seems to me that in this case one can try to use some backtracking (going several steps back if necessary) and the exchange axiom to force (*).

    Comment by Pavel Paták — March 23, 2017 @ 9:13 am | Reply

    • This is an interesting idea. My guess is that a more promising approach than backtracking is to try to modify/generalize the lemma to make the induction/recursion go through without backtracking. Once you start backtracking, it becomes difficult to keep track of all the different cases that could arise.

      More specifically, can you give examples of how (*) can fail after a basis is removed? Then we could think about what conditions are needed to ensure that this scenario can’t happen. That will require modifying the induction hypothesis and maybe force more conditions, but hopefully the modification process will eventually converge.

      Comment by tchow8 — March 28, 2017 @ 1:27 am | Reply

  12. Is Rota’s basis conjecture implied by the same statement restricted to vector spaces over a fixed field (say \mathbb{R}, or even \mathbb{Z}_2)?

    Comment by dominiczypen — March 28, 2017 @ 6:17 am | Reply

    • This is not known.

      Comment by tchow8 — March 28, 2017 @ 2:41 pm | Reply

