The polymath blog

June 12, 2012

Polymath7 research thread 1: The Hot Spots Conjecture

Filed under: hot spots,research — Terence Tao @ 8:58 pm

The previous research thread for the Polymath7 project “the Hot Spots Conjecture” is now quite full, so I am now rolling it over to a fresh thread both to summarise the progress thus far, and to make it a bit easier to catch up on the latest developments.

The objective of this project is to prove that, for an acute-angled triangle ABC:

  1. The second eigenvalue of the Neumann Laplacian is simple (unless ABC is equilateral); and
  2. For any second eigenfunction of the Neumann Laplacian, the extremal values of this eigenfunction are only attained on the boundary of the triangle.  (Indeed, numerics suggest that the extrema are only attained at the corners of a side of maximum length.)

To describe the progress so far, it is convenient to draw the following “map” of the parameter space.  Observe that the conjecture is invariant with respect to dilation and rigid motion of the triangle, so the only relevant parameters are the three angles \alpha,\beta,\gamma of the triangle.  We can thus represent any such triangle as a point (\alpha/\pi,\beta/\pi,\gamma/\pi) in the region \{ (x,y,z): x+y+z=1, x,y,z > 0 \}.  The parameter space is then the following two-dimensional triangle:

Thus, for instance

  1. A,N,P represent the degenerate obtuse triangles (with two angles zero, and one angle of 180 degrees);
  2. B,F,O represent the degenerate acute isosceles triangles (with two angles 90 degrees, and one angle zero);
  3. C,E,G,I,L,M represent the various permutations of the 30-60-90 right-angled triangle;
  4. D,J,K represent the isosceles right-angled triangles (i.e. the 45-45-90 triangles);
  5. H represents the equilateral triangle (i.e. the 60-60-60 triangle);
  6. The acute triangles form the interior of the region BFO, with the edges of that region being the right-angled triangles, and the exterior being the obtuse triangles;
  7. The isosceles triangles form the three line segments NF, BP, AO.  Sub-equilateral isosceles triangles (with apex angle smaller than 60 degrees) comprise the open line segments BH,FH,OH, while super-equilateral isosceles triangles (with apex angle larger than 60 degrees) comprise the complementary line segments AH, NH, PH.

Of course, one could quotient out by permutations and only work with one sixth of this diagram, such as ABH (or even BDH, if one restricted to the acute case), but I like seeing the symmetry as it makes for a nicer looking figure.
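To make the parameterisation above concrete, here is a small illustrative sketch (in Python; the function name and classification labels are my own, not part of the post) that maps a triangle's angles to its point in the parameter simplex and classifies it in the terminology above:

```python
import math

def parameter_point(alpha, beta, gamma, tol=1e-9):
    """Map angles (radians, summing to pi) to the simplex point and classify the triangle."""
    assert abs(alpha + beta + gamma - math.pi) < tol, "angles must sum to pi"
    point = (alpha / math.pi, beta / math.pi, gamma / math.pi)
    a0, a1, a2 = sorted((alpha, beta, gamma))

    if a2 > math.pi / 2 + tol:
        kind = "obtuse"
    elif a2 > math.pi / 2 - tol:
        kind = "right-angled"
    else:
        kind = "acute"

    if abs(a0 - a2) < tol:
        kind += ", equilateral (point H)"
    elif abs(a0 - a1) < tol or abs(a1 - a2) < tol:
        # the apex angle is the one not equal to the other two
        apex = a0 if abs(a1 - a2) < tol else a2
        kind += ", sub-equilateral isosceles" if apex < math.pi / 3 else ", super-equilateral isosceles"

    return point, kind

# e.g. a 50-50-80 triangle is acute and super-equilateral isosceles:
deg = math.pi / 180
print(parameter_point(50 * deg, 50 * deg, 80 * deg))
```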

Here’s what we know so far with regards to the hot spots conjecture:

  1. For obtuse or right-angled triangles (the blue shaded region in the figure), the monotonicity results of Banuelos and Burdzy show that the second claim of the hot spots conjecture is true for at least one second eigenfunction.
  2. For any isosceles non-equilateral triangle, the eigenvalue bounds of Laugesen and Siudeja show that the second eigenvalue is simple (i.e. the first part of the hot spots conjecture), with the second eigenfunction being symmetric around the axis of symmetry for sub-equilateral triangles and anti-symmetric for super-equilateral triangles.
  3. As a consequence of the above two facts and a reflection argument found in the previous research thread, this gives the second part of the hot spots conjecture for sub-equilateral triangles (the green line segments in the figure). In this case, the extrema only occur at the vertices.
  4. For equilateral triangles (H in the figure), the eigenvalues and eigenfunctions can be computed exactly; the second eigenvalue has multiplicity two, and all eigenfunctions have extrema only at the vertices.
  5. For sufficiently thin acute triangles (the purple regions in the figure), the eigenfunctions are almost parallel to the sector eigenfunction given by the zeroth Bessel function; this in particular implies that the second eigenvalue is simple (since otherwise there would be a second eigenfunction orthogonal to the sector eigenfunction).  Also, a more complicated argument found in the previous research thread shows in this case that the extrema can only occur either at the pointiest vertex, or on the opposing side.

So, as the figure shows, there has been some progress on the problem, but there are still several regions of parameter space left to eliminate.  It may be possible to use perturbation arguments to extend the validity of the hot spots conjecture some quantitative distance beyond the known regions, and then use numerical verification to finish off the remainder.  (It appears that numerics work well for acute triangles once one has moved away from the degenerate cases B,F,O.)

The figure also suggests some possible places to focus attention on, such as:

  1. Super-equilateral acute isosceles triangles (the line segments DH, JH, KH).  Here, we know the second eigenvalue is simple (and the second eigenfunction is anti-symmetric).
  2. Nearly equilateral triangles (the region near H).  The perturbation theory for the equilateral triangle could be non-trivial due to the repeated eigenvalue here.
  3. Nearly isosceles right-angled triangles (the regions near D,G,K).  Again, the eigenfunction theory for isosceles right-angled triangles is very explicit, but this time the eigenvalue is simple and perturbation theory should be relatively straightforward.
  4. Nearly 30-60-90 triangles (the regions near C,E,G,I,L,M).  Again, we have an explicit eigenfunction with a simple eigenvalue in the 30-60-90 case, and the analysis should not be too difficult.

There are a number of stretching techniques (such as in the Laugesen-Siudeja paper) which are good for controlling how eigenvalues deform with respect to perturbations, and this may allow us to rigorously establish the first part of the hot spots conjecture, at least, for larger portions of the parameter space.

As for numerical verification of the second part of the conjecture, it appears that we have good finite element methods that seem to give accurate results in practice, but it remains to find a way to generate rigorous guarantees of accuracy and stability with respect to perturbations.  It may be best to focus on the super-equilateral acute isosceles case first, as there is now only one degree of freedom in the parameter space (the apex angle, which can vary between 60 and 90 degrees) and also a known anti-symmetry in the eigenfunction, both of which should cut down on the numerical work required.

I may have missed some other points in the above summary; please feel free to add your own summaries or other discussion below.

94 Comments »

  1. […] has been some progress in the polymath 7 project. See the new thread here. […]

    Pingback by New thread for Polymath 7 « Euclidean Ramsey Theory — June 12, 2012 @ 10:00 pm | Reply

  2. Here is a simple eigenvalue comparison theorem: if 0 = \lambda_1(D) \leq \lambda_2(D) \leq \ldots denotes the Neumann eigenvalues of a domain D (counting multiplicity), and T: D \to TD is a linear transformation, then

    \|T\|_{op}^{-2} \lambda_k(D) \leq \lambda_k(TD) \leq \|T^{-1}\|_{op}^2 \lambda_k(D)

    for each k. This is because of the Courant-Fischer minimax characterisation of \lambda_k(D) as the supremum of the infimum of the Rayleigh-Ritz quotient \int_D |\nabla u|^2/ \int_D |u|^2 over all codimension k subspaces of L^2(D), and because any candidate u \in L^2(D) for the Rayleigh-Ritz quotient on D can be transformed into a candidate u \circ T^{-1} \in L^2(TD) for the Rayleigh-Ritz quotient on TD, and vice versa. (This is not the most sophisticated comparison theorem available – for instance, the Laugesen-Siudeja paper has a more delicate analysis involving comparison of one triangle against two reference triangles, instead of just one – but it is one of the easiest to state and prove.)

    One corollary of this theorem is that if one has a spectral gap \lambda_2(D) < \lambda_3(D) for some triangle D, then this spectral gap persists for all nearby triangles TD, as long as T has condition number less than (\lambda_3(D)/\lambda_2(D))^{1/2}. This should allow us to start rigorously verifying the simplicity of the eigenvalue for at least some of the regions of the above figure, and in particular in the vicinity of the points C,D,E,G,I,J,K,L,M where the eigenvalues are explicit. With numerics, we should be able to cover other areas as well, except in the vicinity of the equilateral triangle H where of course we have a repeated eigenvalue, but perhaps some perturbative analysis near that triangle can establish simplicity there too.
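    As a small illustration of this corollary (a sketch of my own; the helper functions and the example target triangle are not from the thread), one can compute the condition number of the linear map taking one triangle to another and compare it against the threshold \sqrt{\lambda_3(D)/\lambda_2(D)}:

```python
import numpy as np

def linear_map_between(tri_from, tri_to):
    """Linear part T of the affine map sending tri_from onto tri_to (vertices in corresponding order)."""
    A, B = np.array(tri_from, float), np.array(tri_to, float)
    E_from = np.column_stack([A[1] - A[0], A[2] - A[0]])
    E_to = np.column_stack([B[1] - B[0], B[2] - B[0]])
    return E_to @ np.linalg.inv(E_from)

def gap_persists(tri_from, tri_to, lam2, lam3):
    """Does the comparison theorem guarantee lambda_2 < lambda_3 on the image triangle?"""
    T = linear_map_between(tri_from, tri_to)
    cond = np.linalg.norm(T, 2) * np.linalg.norm(np.linalg.inv(T), 2)
    return cond, cond < np.sqrt(lam3 / lam2)

# Reference: the 45-45-90 triangle (0,0),(1,0),(1,1), whose second and third
# Neumann eigenvalues are pi^2 and 2*pi^2, so the threshold is sqrt(2).
D = [(0, 0), (1, 0), (1, 1)]
target = [(0, 0), (1, 0), (1, 0.8)]   # a nearby (non-isosceles) right-angled triangle
print(gap_persists(D, target, np.pi**2, 2 * np.pi**2))
```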

    Comment by Terence Tao — June 12, 2012 @ 10:50 pm | Reply

    • Stability of Neumann eigenvalues was studied by Banuelos and Pang (Electron. J. Diff. Eqns., Vol. 2008(2008), No. 145, pp. 1-13) and Pang (http://dx.doi.org/10.1016/j.jmaa.2008.04.026). They prove that multiplicity 1 is stable under small perturbations, while multiplicity 2 is not. Hence the linear transformation above can be replaced by almost any small perturbation.

      Comment by Bartlomiej Siudeja — June 12, 2012 @ 11:41 pm | Reply

    • And a small last name correction: Siujeda should really be Siudeja. Here and in the main summary. [Oops! Sorry about that. Corrected, -T.]

      Comment by Bartlomiej Siudeja — June 12, 2012 @ 11:46 pm | Reply

    • Joe and I have a working high-order finite element code (to give increased order of approximation as we increase the resolution) . We’re working on a mapped domain (as described in a different thread), and are starting to explore the parameter space you suggested.

      So far, no surprises, though we haven’t reached the perturbed equilateral triangle. We hope to post some results and graphics soon. Visualizing the results is taking some thought: for each point in parameter space, we want to record: whether the conjecture holds for the approximation; the approximate eigenvalue(s); the spectral gap; and some measure of the quality of the approximation.

      Comment by Nilima Nigam — June 13, 2012 @ 4:25 am | Reply

  3. Just a note: The rigorous numerical approach from [FHM1967] was used extensively to study eigenvalues of triangles by Pedro Antunes and Pedro Freitas. They studied various aspects of the Dirichlet spectrum using an improvement of [FHM1967] due to Payne and Moler (http://www.jstor.org/stable/2949550). This method also works extremely well with Bessel functions, even for triangles far from the degenerate cases.

    Comment by Bartlomiej Siudeja — June 12, 2012 @ 10:55 pm | Reply

    • The Fox, Henrici and Moler paper is beautiful, and was updated by Betcke and Trefethen in SIAM Review in 2005. Barnett has a more recent paper discussing the method of particular solutions, based on Bessel functions, applied to the Neumann problem. This is harder, and the numerics are more challenging:

      Comment by Nilima Nigam — June 13, 2012 @ 4:30 am | Reply

  4. Continuing the ideas from Comments 13, 14, and 18 of the previous thread:

    Consider a super-equilateral isosceles triangle (I will call it a 50-50-80 triangle to make things clear). As discussed in Comment 14 and 18, since we know the second eigenfunction is anti-symmetric we can instead consider the 40-50-90 right triangle with mixed Dirichlet-Neumann.

    Two comments/ideas:

    -It should also be possible to “unfold” the 40-50-90 triangle into a 40-40-100 triangle with mixed Dirichlet-Neumann and, intuitively at least, it should be the case that the first non-trivial eigenfunction there is the eigenfunction we are looking for (though while I think that “folding in” is always legal, appealing to the Rayleigh-Ritz formalism, in general “folding out” might introduce new first-non-trivial eigenfunctions). I am not sure if this really buys us anything though…

    -Having reduced the problem to the Dirichlet-Neumann boundary case, maybe it is possible to implement the method of particular solutions as suggested by Nilima in Comment 13 (links provided there). The method of particular solutions, at least as presented in those papers, considered a Dirichlet boundary condition that an eigenfunction was chosen to try and match. For the mixed problem, we now have a Dirichlet boundary (the fact that the other two boundaries are Neumann shouldn’t matter as those are taken care of for free when choosing an eigenfunction consisting of “Fourier-Bessel” functions anchored at the opposite angle).

    Comment by letmeitellyou — June 12, 2012 @ 11:48 pm | Reply

    • On the first non-trivial eigenfunction for a triangle with mixed boundary conditions (two sides Neumann, and one side Dirichlet):

      Intuitively, the following statement must be true for all such triangles: The maximum of the first non-trivial eigenfunction occurs at the corner opposite to the Dirichlet side.

      Perhaps this is on the books somewhere? A probabilistic interpretation is as follows: The solution to the heat equation on the mixed-boundary triangle with initial condition u_0 \equiv 1 can be expressed probabilistically as

      u(x,t)=P_x(\tau>t)

      where \tau is the first time that X_t, a Brownian motion starting from x and reflected on the Neumann sides, hits the Dirichlet side. Intuitively, to keep your Brownian motion alive the longest you would start it at the opposite corner.

      Of course this is all intuition and not a formal proof…
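      For what it is worth, here is a crude Monte Carlo sketch of this intuition (entirely my own set-up, not from the thread): a reflected random-walk approximation to Brownian motion in a right triangle whose legs lie on the coordinate axes (Neumann) and whose hypotenuse is the Dirichlet side, so that the vertex opposite the Dirichlet side is the origin. One should see the largest estimated survival probability for the start at the origin; the reflection step is only a first-order discretisation, so this is a heuristic check rather than a rigorous computation.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 0.8            # triangle with vertices (0,0), (a,0), (0,b)

def survival_probability(x0, t=0.3, dt=1e-4, n_paths=2000):
    """Estimate P_x0(tau > t) for Brownian motion reflected on the legs and killed on the hypotenuse."""
    pos = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(int(t / dt)):
        step = np.sqrt(dt) * rng.standard_normal((n_paths, 2))
        pos[alive] += step[alive]
        np.abs(pos, out=pos)                              # Neumann legs on the axes: reflect
        alive &= pos[:, 0] / a + pos[:, 1] / b <= 1.0     # Dirichlet hypotenuse: kill
    return alive.mean()

for start in [(0.0, 0.0), (0.5, 0.1), (0.1, 0.5), (0.45, 0.35)]:
    print(start, survival_probability(start))
```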

      Comment by letmeitellyou — June 13, 2012 @ 12:28 am | Reply

        • Probabilistic intuition is extremely convincing. In fact, to make it even more appealing, think about a “regular” polygon that can be built by gluing matching Neumann sides of many copies of the triangle. We get a “regular” polygon with Dirichlet boundary conditions. By rotational symmetry, the maximum survival time must happen at the center. Of course not every triangle gives a nice polygon (the angles never add up to 2pi), and the ones we need never give one. We would need a multiple cover to make a polygon for arbitrary rational angles, but the intuition is kind of lost this way.

        Comment by Bartlomiej Siudeja — June 13, 2012 @ 12:53 am | Reply

        • Yah I was thinking about this as well… you would get sort of a spiral staircase no? But I think there might be some issue with defining the Brownian motion on this spiral staircase as it might flip out near the origin (i.e. it will have some crazy winding number). Although, with probability 1, the Brownian motion won’t actually hit the origin so maybe it isn’t a big deal.

          On page 472 of the paper [BT2005] Timo Betcke, Lloyd N. Trefethen, Reviving the Method of Particular Solutions, they mention how the eigenfunction for the wedge cannot be extended analytically unless an integer multiple of the angle is 2\pi.

          Comment by Chris Evans — June 13, 2012 @ 3:01 am | Reply

      • Actually maybe a proof can be furnished using a synchronous coupling!

        Consider a triangle with 1 side Dirichlet and 2 sides Neumann. Orient it so that it lies in the right half plane x\geq 0 and has its Dirichlet side along the y-axis (so that the point with the largest x-coordinate in the triangle is the opposite corner, where we claim the hot spot is).

        Now consider two points x and y in the plane (I will abuse notation and call the points x=(x_1,x_2) and y = (y_1,y_2)), and consider a synchronously-coupled reflected Brownian motion (X_t,Y_t) started from these two points (synchronously coupled means that they are driven by the same Brownian motion, but they might of course reflect at different times).

        If y lies to the right of x, it ought to be the case that Y_t always lies to the right of X_t. Consequently X_t is more likely to hit the Dirichlet boundary than Y_t.

        It would therefore follow that the starting point which takes the longest to hit the boundary is the point furthest to the right, i.e. the opposite corner, as predicted.

        Notes:

        -The issues with coupled Brownian motions dancing around each other should be avoided here: in the acute triangle with all three sides Neumann this was an issue, but here there is only one corner to play around/bounce off of.

        -This is really stating the following monotonicity theorem: If u_0\equiv 1 then u(x,t) is monotonically increasing from left to right for all t>0. There might be a more direct analytic proof.

        -Seeing as this was a very simple argument it is likely to be already known (or I could be wrong about the coupling preserving the orientation).

        Comment by letmeitellyou — June 13, 2012 @ 2:53 am | Reply

        • Unfortunately, I think the synchronous coupling can flip the orientation of X_t and Y_t. Suppose for instance that X_t and Y_t are oriented vertically, and Y_t hits one of the Neumann sides oriented diagonally. Then Y_t can bounce in such a way that it ends up to the left of X_t.

          But perhaps some variant of this coupling trick should work…

          Comment by Terence Tao — June 13, 2012 @ 4:13 am | Reply

            Ah, good point! The points x and y would have to start such that the angle between them is smaller than the angle of the opposite side… this is actually a condition in the Banuelos-Burdzy paper as well (the “left-right” formalism is just a simpler way to discuss it). But I don’t think this will be an obstacle.

            I will work on writing this up more clearly

            Edit: While talking in terms of all these angles is messy, the succinct explanation is:

            As long as the points x and y are such that the line segment connecting them is nearly horizontal (and it’s a wide range that is allowed based on the angles… basically anything from the angle you get if you ram them against the bottom line to the angle you get when you ram them against the top line), then what I wrote should hold. And that is sufficient to prove the lemma.

            Comment by Chris Evans — June 13, 2012 @ 4:28 am | Reply

            • Ok, here is a writeup which explains things more precisely

              http://www.math.missouri.edu/~evanslc/Polymath/MixedTriangle

              In there I only give an argument for the case that the angle opposite the Dirichlet side is acute… but I think the obtuse case should be true as well. It all boils down to whether the following probabilistic statement is true:

              Consider the infinite wedge \{(r,\theta)\vert 0\leq\theta\leq\gamma < \pi\}. Let (X_t,Y_t) be a synchronously coupled Brownian motion starting from points x and y such that (thought of as elements of the complex plane), 0\leq \arg(y-x) \leq \gamma. Then 0\leq \arg(Y_t-X_t) \leq \gamma for all t>0.

              Comment by Chris Evans — June 13, 2012 @ 5:48 am | Reply

              • I think this does indeed work for acute angles, so this should settle the super-equilateral isosceles case, but I’ll try to recheck the details tomorrow. I think I can also recast the coupling arguments as a PDE argument based on the maximum principle – this doesn’t add anything as far as the results are concerned, but may be a useful alternate way of thinking about these sorts of arguments. (I come from a PDE background rather than a probability one, so I am perhaps biased in this regard.)

                This type of argument may also settle the non-isosceles case in regimes in which we can show that the nodal line is reasonably flat, though I don’t know how one would actually try to show that…

                Comment by Terence Tao — June 13, 2012 @ 6:44 am | Reply

              • OK, I wrote up both a sketch of the Brownian motion argument and the maximum principle argument on the wiki at

                http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture#Isosceles_triangles

                So I think we can now move super-equilateral isosceles triangles (the lines HD, HJ, HK in the above diagram) into the “done” column, thus finishing off all the isosceles cases. (Actually the argument also works for the lowest anti-symmetric mode of the sub-equilateral triangles as well, though this is not directly relevant for the hot spots conjecture.) So now we have to start braving the cases in which there is no axis of symmetry to help us…

                Comment by Terence Tao — June 13, 2012 @ 4:52 pm | Reply

                  • I’m a bit confused about the PDE proof of Corollary 4. In the case where x lies on the interior of DB, it is correct that \nabla u is parallel to DB. However, we do not know its direction. If it has the same direction as the vector DB then we are OK. But if its direction is BD then it does not lie in the sector S.

                  Comment by Hung Tran — June 13, 2012 @ 8:08 pm | Reply

                  • By hypothesis, at this point \nabla u lies on the boundary of the region S_{\varepsilon(t+1)} (in particular, it is not in S). The only point on this boundary that is parallel to DB is the point which is a distance \varepsilon(t+1) from the origin in the BD direction. (I should draw a picture to illustrate this but I was too lazy to do so for the wiki.)

                    Comment by Terence Tao — June 13, 2012 @ 8:18 pm | Reply

                    • Thanks for your clarification. I got that part.

                      I’m still confused though. In the proof, you basically performed the reflection arguments to consider the cases when x lies on the interiors of DB,\ AB. By doing so, x turns out to be an interior point of the domain and then it is pretty straightforward to deduce the result from the classical maximum principle.

                      My concern is about the reflection arguments. Do you need something like \dfrac{\partial^2 u}{\partial n^2}(x)=0 in order to do so?

                      Comment by Hung Tran — June 14, 2012 @ 5:15 am

                    • No, to reflect around a flat edge one only needs the Neumann condition \partial u / \partial n = 0. The second normal derivative \partial^2 u / \partial n^2 will reflect in an even fashion (rather than an odd fashion) around the edge, and so does not need to vanish; it only needs to be continuous in order to obtain a C^2 reflection. Once one has a C^2 reflection, one solves the eigenfunction equation in the classical sense in the unfolded domain, and elliptic regularity in that domain upgrades the regularity to C^\infty (at least as long as one stays away from the corners).

                      Comment by Terence Tao — June 14, 2012 @ 2:58 pm

                    • Oh, I meant at the specific point x. Your argument should be OK for eigenfunctions. But here we are dealing with the heat equation, right?

                      In general, I think it would be really interesting to consider the heat equation u_t - \Delta u =0 in {\rm ABC} \times (0,\infty) with the initial data u_0 chosen in such a way that it is increasing along some specific direction. Let’s say (u_0)_\xi \ge 0 for some unit vector \xi. If we can use the maximum principle to show that u_\xi \ge 0 by essentially killing the boundary cases then we are done.

                      Comment by Hung Tran — June 14, 2012 @ 4:13 pm

                    • Ah, fair enough, but even when reflecting a solution to the heat equation rather than an eigenfunction, one still gets a classical (C^2 in space, C^1 in time) solution to the heat equation on reflection as long as the Neumann boundary condition is satisfied (and providing that the original solution was already C^2 up to the boundary, which I believe can be established rigorously in the acute triangle case), and then by applying parabolic regularity instead of elliptic regularity one can ensure that this is a smooth solution. (Alternatively, one can unfold the triangle around the edge of interest at time zero, solve the heat equation with Neumann data on the unfolded kite region, and then use the uniqueness theory of the heat equation to argue that this solution is necessarily symmetric around the edge of unfolding, and that the restriction to the original triangle is the original solution to the heat equation.)

                      Comment by Terence Tao — June 14, 2012 @ 4:26 pm

                    • Oh, thank you. Probably now I see my source of confusion. Probably one needs \dfrac{\partial u_0}{\partial n}=0 on {\rm AB, \ BC, \ CA} in order to get higher regularity when reflecting. I was confused about this part.

                      So why don’t we proceed by considering the heat equation with Neumann boundary condition in {\rm ABC} \times (0,\infty) with given initial data u_0 satisfying something like \dfrac{\partial u_0}{\partial n}=0 on {\rm AB, \ BC, \ CA} and (u_0)_\xi \ge 0 for some unit direction \xi. If we then let v=u_\xi then v also solves the heat equation. We want to show that v \ge 0 or so by using the maximum principle. As we know, \max_{{\rm ABC} \times [0,T]} v = \max \{ \max_{\rm{ABC}} (u_0)_\xi, \max_{\rm{AB,BC,CA} \times (0,T)} v\}. And since one can omit the boundary cases by performing the reflection method, it should be OK.

                      Comment by Hung Tran — June 14, 2012 @ 6:24 pm

                    • I have done some computations to support my argument above. The point now is to build a function u_0: {\rm ABC} \to \mathbb{R} so that \dfrac{\partial u_0}{\partial n}=0 on the edges and (u_0)_\xi \ge 0 for some unit vector \xi. Then u inherits this monotonicity property of u_0, namely u_\xi \ge 0 in {\rm ABC} \times (0,\infty).

                      Here is the first computation in the case where {\rm ABC} is an acute isosceles triangle as in Corollary 4. Let’s assume A=(0,1),\ B=(-a,0), \ C=(a,0) for some 0<a \le 1. Then we can build u_0, which is antisymmetric around {\rm OA}, as u_0(x,y)=\sin(\frac{\pi}{2a}x) \cos (\frac{\pi}{2}y)^{1/a^2}. It turns out that (u_0)_x,\ \nabla u_0 \cdot (\frac{1}{a},1) \ge 0 for x \ge 0. This is exactly the needed function for Corollary 4.

                      I will try to build such a u_0 for a general acute triangle to see if the shape of {\rm ABC} has anything to do with the direction \xi. It may then help us to see where the min and max of the second eigenfunction are located.

                      Comment by Hung Tran — June 15, 2012 @ 4:33 am

                • Great! Actually, half of my graduate thesis was on reflected Brownian motion and the other half was on maximum principles for systems… so it is cool to see that they are related.

                  And on a more practical note, rigorously arguing the geometric properties of coupled Brownian motion can be a bit of a mess (involving Ito’s formula) so if it can be avoided by appealing to the maximum principle, so much the better.

                  Comment by Chris Evans — June 13, 2012 @ 9:51 pm | Reply

              • After a night’s rest, I think the statement I made above about “the infinite wedge preserving the angle” only holds true in the acute case. For the obtuse case, it isn’t too hard to see how the angle won’t always be preserved.

                It still seems that the extremum of the first eigenfunction for the mixed triangle should be at the vertex opposite the Dirichlet side… but at this point I suppose we only need to know the acute case.

                Edit: Actually I think the obtuse case might follow from the following paper by Mihai Pascu which uses an exotic “scaling coupling” to prove Hot-Spots results for C^{1,\alpha} convex domains which are symmetric about one axis.

                http://www.ams.org/journals/tran/2002-354-11/S0002-9947-02-03020-9/home.html

                Reflecting the triangle across its Dirichlet side would give such a domain provided that we could “smooth out the corners” without affecting the eigenfunction too much.

                Comment by Chris Evans — June 13, 2012 @ 9:48 pm | Reply

  5. Chris, I am not sure this is pertinent to your argument. But the regularity of the eigenfunctions for the mixed Dirichlet-Neumann case must degenerate, as the angle between the Dirichlet and Neumann sectors becomes near pi. To see this, think about a sector of a circle with Dirichlet data on one ray and the curvilinear arc, and Neumann on the remaining ray. The solution (by separation of variables) is again in terms of Bessel functions, but this time with fractional order. As long as the angle of the sector is less than pi, a reflection about the Neumann side would give you an eigenfunction problem with Dirichlet data, and you pick out the one with the right symmetry.

    However, as the interior angle approaches pi, after reflection the doubled sector gets closer to the circle with a slit. The resulting eigenfunction is not smooth.

    This argument suggests that if, after reflections, you have a mixed boundary eigenproblem where the Dirichlet-Neumann segments are meeting at nearly flat angles, then there may be issues.

    Comment by Nilima Nigam — June 13, 2012 @ 3:08 pm | Reply

    • Well, for our application the Dirichlet-Neumann region of interest is a folded super-equilateral triangle, so one of the angles between Dirichlet and Neumann is a right angle (thus becomes not an angle at all when unfolded) and the other is between 30 and 45 degrees, so the regularity looks pretty good (C^\infty at the right angle, C^{2,\varepsilon} at the less-than-45-degree angle, and C^{3,\varepsilon} at the remaining angle between the two Neumann edges, which is less than 60 degrees). (From the Bessel function expansion in a Neumann triangle we know that eigenfunctions have basically \pi/\alpha degrees of regularity at an angle of size \alpha, and are C^\infty when \pi/\alpha is an integer. I think the same should also be true for solutions to the heat equation with reasonable initial data, though I didn’t check this properly.)

      But, yes, things are probably more delicate once the Dirichlet-Neumann angles get obtuse. In the case when the Dirichlet boundary comes from a nodal line from a Neumann eigenfunction, the Dirichlet boundary should hit the Neumann boundary at right angles (unless it is in a corner or is somehow degenerate), so this should not be a major difficulty.

      Comment by Terence Tao — June 13, 2012 @ 3:51 pm | Reply

    • Hmm… it seems that we have shown that for a triangle with mixed boundary conditions (one side Dirichlet, two sides Neumann), the extremum of the first eigenfunction lies at the vertex opposite the Dirichlet side, provided that angle is acute.

      Such a triangle could be such that the angle between the Dirichlet side and one of the Neumann sides is arbitrarily close to \pi… but things should still be ok (provided what I wrote in the previous paragraph is true).

      In your example, you have two sides which are Dirichlet and only one side which is Neumann… maybe that is what makes the difference?

      Comment by Chris Evans — June 13, 2012 @ 9:56 pm | Reply

      • Chris, I tried the case where there were two Neumann sides and one Dirichlet. Same problem – but my argument is for a mixed problem where the junction angle is nearing pi. As Terry points out, this concern may not arise for the argument you are trying.

        Comment by Nilima Nigam — June 14, 2012 @ 3:55 am | Reply

  6. We’re exploring the parameter space corresponding to the region BDO in the triangle above. We’re taking a set of discrete points in this parameter set, and verifying the conjecture as well as computing the spectral gap for the corresponding domain. To debug, we’re taking a coarse spacing of pi/10 in each direction, but we will refine this. We’re using piecewise quadratic polynomials in an H^1 conforming finite element method, with Arnoldi iterations with shift to get the smaller eigenvalues.

    I have a quick question- is there some target spacing you’d like? This will influence some memory management issues.
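    For anyone who wants to experiment, here is a minimal self-contained sketch of this kind of computation (my own illustration, not Nilima and Joe's code): piecewise linear rather than quadratic elements, a structured mesh of a single triangle, and the generalized eigenproblem K u = \lambda M u solved by shift-invert Arnoldi/Lanczos via scipy. On the 45-45-90 triangle it should return values close to 0, \pi^2 \approx 9.87 and 2\pi^2 \approx 19.74; rigorous accuracy guarantees are of course exactly the issue discussed elsewhere in this thread.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import eigsh

def structured_mesh(A, B, C, n):
    """Uniform refinement of triangle ABC into n^2 small triangles."""
    A, B, C = map(np.asarray, (A, B, C))
    index, verts = {}, []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            index[(i, j)] = len(verts)
            verts.append(A + (B - A) * (i / n) + (C - A) * (j / n))
    tris = []
    for i in range(n):
        for j in range(n - i):
            tris.append([index[(i, j)], index[(i + 1, j)], index[(i, j + 1)]])
            if j < n - i - 1:
                tris.append([index[(i + 1, j)], index[(i + 1, j + 1)], index[(i, j + 1)]])
    return np.array(verts), np.array(tris)

def neumann_eigenvalues(A, B, C, n=60, k=4):
    verts, tris = structured_mesh(A, B, C, n)
    N = len(verts)
    K, M = lil_matrix((N, N)), lil_matrix((N, N))
    for t in tris:
        p = verts[t]
        J = np.column_stack([p[1] - p[0], p[2] - p[0]])      # Jacobian of the affine map
        area = abs(np.linalg.det(J)) / 2
        G = np.linalg.inv(J).T @ np.array([[-1.0, 1.0, 0.0], [-1.0, 0.0, 1.0]])
        Ke = area * (G.T @ G)                                # element stiffness matrix
        Me = area / 12.0 * (np.ones((3, 3)) + np.eye(3))     # element mass matrix
        for r in range(3):
            for c in range(3):
                K[t[r], t[c]] += Ke[r, c]
                M[t[r], t[c]] += Me[r, c]
    # shift-invert about a small negative shift, since the Neumann stiffness
    # matrix is singular (the constant function has eigenvalue zero)
    vals = eigsh(csr_matrix(K), k=k, M=csr_matrix(M), sigma=-0.01,
                 which='LM', return_eigenvectors=False)
    return np.sort(vals)

print(neumann_eigenvalues((0, 0), (1, 0), (1, 1)))   # ~ [0, pi^2, 2*pi^2, 4*pi^2]
```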

    Comment by Nilima Nigam — June 13, 2012 @ 10:01 pm | Reply

    • Hmm, good question. As a test case for a back-of-the-envelope calculation, let’s look at the range of stability for the isosceles right-angled (i.e. 45-45-90) triangle (point D in the diagram), say with vertices (0,0), (1,0), (1,1) for concreteness. This is half of the unit square and so the Neumann eigenvalues can in fact be read off quite easily by Fourier series. The second eigenvalue is \pi^2, with eigenfunction \cos \pi x + \cos \pi y, and then there is a third eigenvalue at 2\pi^2 with eigenfunction \cos \pi(x+y) + \cos \pi(x-y). So, by Comment 2, the second eigenvalue remains simple for all linear images TD of this triangle with condition number less than \sqrt{2}. To convert the 45-45-90 triangle into another right-angled triangle with angles (\pi/2-\alpha, \alpha, \pi/2) for some 0 < \alpha < \pi/2 requires a transformation of condition number \cot \alpha, which lets one obtain simplicity of eigenvalues for such triangles whenever \alpha > 0.615, or about 35 degrees – enough to get about two thirds of the way from point D on the diagram to point C. This extremely back-of-the-envelope calculation suggests that increments of about 10 degrees (or about \pi/20) at a time might be enough to get a good resolution. But things may get worse as one approaches the equilateral triangle (point H) or the degenerate triangles (points B, F, O).
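      Just to double-check the arithmetic in the last step (a trivial numerical aside of my own):

```python
import math

threshold = math.sqrt(2.0)                # sqrt(lambda_3 / lambda_2) for the 45-45-90 triangle
alpha_min = math.atan(1.0 / threshold)    # cot(alpha) < sqrt(2)  <=>  alpha > arctan(1/sqrt(2))
print(alpha_min, math.degrees(alpha_min)) # ~0.6155 radians, i.e. about 35.26 degrees
```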

      By permutation symmetry it should be enough to explore the triangle BDH instead of BDO. The Laugesen-Siudeja paper at http://arxiv.org/abs/0907.1552 has some figures on eigenvalues in the isosceles case (Fig 2 and Fig 3) that could be used for comparison.

      Comment by Terence Tao — June 13, 2012 @ 10:37 pm | Reply

      • thanks, this is helpful. I’ll set this running with pi/50, to be on the safe side. This will take a few hours to run.

        certainly the numerics suggest that the manner in which I approach the point for the equilateral triangle impacts the spectral gap. however, the resolution is not sharp enough to make this formal.

        Comment by Nilima Nigam — June 13, 2012 @ 11:17 pm | Reply

        • A detail which will not affect any analytical attack, but which should be noted for anyone else doing numerics on this.

          As we search through parameter space, we look at what happens with a triangle with given angles – but we should probably fix one side, so we can compare eigenvalues. This is important since what we also want to examine is the spectral gap.

          Joe and I’ve fixed one side of the acute triangle to have length 1. As we range through parameter space, the other sides, and the area of the triangles, change. We are recording this information.

          May I recommend that if anyone else is doing numerics on this problem, they also make available the area of the triangles used (or at least one side) for each choice of angles? This way, we’ll be able to compare eigenvalues on triangles with the same angles.

          Comment by Nilima Nigam — June 14, 2012 @ 4:07 am | Reply

  7. I think I can show that the second eigenvalue is simple. It involves a few not-overly-complicated comparisons between a given triangle and a few known cases (through linear mappings). There seems to be a way to do all of this using one very complicated comparison (with 4-5 reference triangles) and an extremely ugly upper bound for acute triangles (many pages to write it down), but that is probably not worth pursuing. I will try to write something tonight, at least one simple case. It appears that even around the equilateral triangle everything should be OK.

    Comment by Bartlomiej Siudeja — June 13, 2012 @ 10:18 pm | Reply

    • Here is a very rough write-up of just one case, containing the equilateral, right isosceles, and some non-isosceles cases. I am sure this case can be optimized to cover a larger area. Another 3-4 cases and all triangles should be covered. I will try to optimize the approach before I post all the cases. Near the end of the argument there is an ugly inequality involving the triangle parametrization. It should reduce to a polynomial inequality, so in the worst case we can evaluate a few (or a bit more) points and find rough gradient estimates.

      Click to access simple.pdf

      Comment by Bartlomiej Siudeja — June 14, 2012 @ 2:25 am | Reply

      • I was playing with reference triangles a bit more, and it seems that one case with 3 reference triangles (near equilateral) and another with just 2 (near degenerate cases) should be enough to cover all acute triangles. Details to follow.

        Comment by Bartlomiej Siudeja — June 14, 2012 @ 3:06 pm | Reply

        • Great news! In addition to resolving one part of the hot spots conjecture, I think having a rigorous lower bound on the spectral gap \lambda_3-\lambda_2 will also be useful for perturbation arguments if we are to try to verify things by rigorous numerics.

          Comment by Terence Tao — June 15, 2012 @ 1:26 am | Reply

          • This thread is getting somewhat large!

            I’d posted some of this information below, but this may be useful. A plot of the spectral gap for the approximated eigenvalues, \lambda_3-\lambda_2 multiplied by the area of the triangle \Omega as we range through parameter space is here:

            Comment by Nilima Nigam — June 15, 2012 @ 1:48 am | Reply

            • The simplest proof that the eigenvalue is simple will have almost no gap bound. However, if one wants to get something for a specific triangle, one can use very complicated comparisons and upper bounds without much trouble. In particular, the upper bound can include 3 or more known eigenfunctions. Except that even with just 2 eigenfunctions there is no way to write down the result of the Rayleigh quotient for the test function on a general triangle without using many pages. This is obviously not a problem for a specific triangle. The Mathematica package I mentioned in 12 was written specifically for those really ugly test functions.

            Comment by Bartlomiej Siudeja — June 15, 2012 @ 2:26 am | Reply

  8. In comment thread 4, Terry suggested looking at the nodal line for more arbitrary triangles, which would then divide the triangle into two mixed domains.

    Running computer simulations (but only for the graphs G_n, as I am not set up to do more accurate numerical approximation), it seems that the nodal line is always near the sharpest corner. Perhaps it is even close to an arc? So then that mixed-boundary sub-domain might be handled by arguments similar to those in comment thread 4. But I am not sure what we would do on the other sub-domain as it would have a strange geometry…

    A related question: Rather than divide into sub-domains by the nodal line, is it possible to divide with respect to another level curve, say u = 2? This would lead to the mixed boundary condition with Neumann boundary on some sides and “u=2” on some sides… but presumably the behavior of the heat flow on that region is the same as the mixed-Dirichlet-Neumann boundary heat flow after you subtract off the constant function 2.

    Comment by Chris Evans — June 13, 2012 @ 10:30 pm | Reply

    • It may be easier to show that the extremum occurs at the sharpest corner than it is to figure out what happens to the other extremum (this was certainly my experience with the thin triangle case). See for instance Corollary 1(ii) of the Atar-Burdzy paper http://webee.technion.ac.il/people/atar/lip.pdf which establishes the extremising nature of the pointy corner for a class of domains that includes for instance parallelograms.

      Once one considers level sets of eigenfunctions at heights other than 0, I think a lot less is known. For instance, the Courant nodal theorem tells us that the nodal line \{u=0\} of a second eigenfunction is a smooth curve that bisects the domain into two regions, but this is probably false once one works with other level sets (though, numerically, it seems to be valid for acute triangles).

      Comment by Terence Tao — June 13, 2012 @ 10:45 pm | Reply

    • There is a paper of Burdzy at http://arxiv.org/pdf/math/0203017.pdf devoted to the study of the nodal line in regions such as triangles, with the main tool being mirror couplings; I haven’t digested it, but it does seem quite relevant to this strategy.

      Comment by Terence Tao — June 14, 2012 @ 4:51 pm | Reply

  9. I’ve been looking at the stability of eigenvalues/eigenfunctions with respect to perturbations, and it seems that the first Hadamard variation formula is the way to go.

    A little bit of setup. Following the notation on the wiki, we perturb off of a “reference” triangle \hat \Omega to a nearby triangle B \hat \Omega, where B is a linear transformation close to the identity. The second eigenfunction on B\hat \Omega can be pulled back to a mean zero function on \hat \Omega which minimizes the modified Rayleigh quotient

    \int_{\hat \Omega} \nabla^T u M \nabla u / \int_{\hat \Omega} u^2

    amongst mean zero functions, where M = B^{-1} (B^{-1})^T is a symmetric perturbation of the identity matrix; this function then obeys the modified eigenvalue equation

    -\nabla \cdot M \nabla u = \lambda_2 u

    with boundary condition n \cdot M \nabla u = 0.

    Now view B = B(t) as deforming smoothly in time with B(0)=I, then M also deforms smoothly in time with M(0)=I. As long as the second eigenvalue of the reference triangle is simple, I believe one can show that \lambda and u will also vary smoothly in time (after normalizing u to have norm one). One can then solve for the derivatives \dot \lambda_2(0), \dot u(0) at time zero by differentiating the eigenvalue equation and the boundary condition. What one gets is the first variation formulae

    \dot \lambda_2(0) = \int_{\hat \Omega} \nabla^T u(0) \dot M(0) \nabla u(0)

    and

    (-\Delta - \lambda_2(0)) \dot u(0) = \pi( \nabla \cdot \dot M(0) \nabla u(0) )

    subject to the inhomogeneous Neumann boundary condition

    n \cdot \nabla \dot u(0) = - n \cdot \dot M(0) \nabla u(0)

    where \pi is the projection to the orthogonal complement of u(0) (and to 1) and \dot u(0) is also constrained to this orthogonal complement.

    I think that by using C^2 bounds on the reference eigenfunction u(0), one should then be able to obtain C^2 bounds on the derivative \dot u(0), though there is of course a deterioration if the spectral gap \lambda_3(0)-\lambda_2(0) goes to zero. But this stability in C^2 norm should be enough to show, for instance, that if one has a reference triangle in which the second eigenvalue is simple and the second eigenfunction only has extrema at the vertices, then any sufficiently close perturbation of this triangle will also have this property. (Note from the Bessel function expansion that if an extremum occurs at an acute vertex, then the Hessian is definite at that vertex, and so for any small C^2 perturbation of that eigenfunction, the vertex will still be a local extremum.) Thus, for instance, we should now be able to get the hot spots conjecture in some open neighborhood of the open intervals BD and DH (and similarly for permutations). Furthermore it should be possible to quantify the size of this neighborhood in terms of the spectral gap.

    This argument doesn’t quite work for perturbations of the equilateral triangle H due to the repeated eigenvalue, but I think some modification of it will.

    EDIT: I think the equilateral case is going to be OK too. The variation formulae will control the portion of \dot u(0) in the complement of the second eigenspace nicely, and so one can write the second eigenfunction of a perturbed equilateral triangle (after changing coordinates back to the reference triangle) as the sum of something coming from the second eigenspace of the original equilateral triangle, plus something small in C^2 norm. I think there is enough “concavity” in the second eigenfunctions of the original equilateral triangle that one can then ensure that for any sufficiently small perturbation of that triangle, the second eigenfunction only has extrema at the vertices. Will try to write up details on the wiki later.
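    As a cheap numerical sanity check of the first variation formula (my own aside, not needed for the argument): on the reference 45-45-90 triangle \{0 \le y \le x \le 1\} with second eigenfunction u = \cos \pi x + \cos \pi y, take the pure dilation B(t) = (1+t)I, so that \dot M(0) = -2I and the formula predicts \dot \lambda_2(0) = -2\lambda_2 = -2\pi^2, in agreement with the exact scaling \lambda_2((1+t)\hat\Omega) = \lambda_2(\hat\Omega)/(1+t)^2:

```python
import numpy as np
from scipy.integrate import dblquad

# second eigenfunction on the reference triangle {0 <= y <= x <= 1} and |grad u|^2
u = lambda y, x: np.cos(np.pi * x) + np.cos(np.pi * y)
grad2 = lambda y, x: np.pi**2 * (np.sin(np.pi * x)**2 + np.sin(np.pi * y)**2)

num, _ = dblquad(grad2, 0, 1, lambda x: 0, lambda x: x)                    # int |grad u|^2
den, _ = dblquad(lambda y, x: u(y, x)**2, 0, 1, lambda x: 0, lambda x: x)  # int u^2
lam2 = num / den
print(lam2, np.pi**2)             # Rayleigh quotient recovers lambda_2 = pi^2
print(-2 * lam2, -2 * np.pi**2)   # first variation under dilation, as predicted
```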

    Comment by Terence Tao — June 14, 2012 @ 4:21 pm | Reply

    • Using raw numerics (the finer-resolution calculation is not yet done), here is what I observe:

      one can perturb from the equilateral triangle in a symmetric way, i.e. by changing one angle by -\epsilon and the others by \epsilon/2. Or one can perturb each angle differently. The spectral gap changes rather differently, depending on how one perturbs.

      I should revisit these calculations by scaling by the Jacobian of the mapping B of the domain in each case (following the Courant spectral gap result).

      Comment by Nilima Nigam — June 14, 2012 @ 6:03 pm | Reply

      • Here are some graphics, to explore the parameter region (BDH) above. To enable visualization, I’m plotting data as functions of (\alpha,\beta). I’m taking a rectangular grid oriented with the sides BD and DH, with 25 steps in each direction. So there are (25)^2 grid points.

        Each parameter (\alpha,\beta,\gamma) yields a triangle \Omega. I’m fixing one side to be of unit length. For details, please see the wiki.

        For each triangle, the second and third Neumann eigenvalues (the first and second non-zero Neumann eigenvalues) are computed. I also kept track of where max|u| occurs, where u is the second eigenfunction. This is because numerically I can get either u or -u.

        A plot of the 2nd Neumann eigenvalue as we range through parameter space is here: http://www.math.sfu.ca/~nigam/polymath-figures/Lambda2.jpg

        A plot of the 3rd Neumann eigenvalue as we range through parameter space is here: http://www.math.sfu.ca/~nigam/polymath-figures/Lambda3.jpg

        A plot of the spectral gap, \lambda_3-\lambda_2 multiplied by the area of the triangle \Omega as we range through parameter space is here:

        One sees that the eigenvalues vary smoothly in parameter space, and that the spectral gap is largest for acute triangles without particular symmetries.

        For each triangle, I also kept track of the physical location of max|u|. If it went to the corner (0,0), I allocated a value of 1; if it went to (1,0) I allocated a value of 2, and if it went to the third corner, I allocated 3. If the maximum was not reported to be at a corner, I put a value of 0.

        The plots show the result. Note that we obtain some values of 0 inside parameter space. Please DON’T interpret this to mean the conjecture fails. Rather, this is a signal that the eigenfunction is likely flattening out near a corner, and that the numerical values at points near the corner are very close.

        I’m running these calculations with finer tolerances now, but it will take some hours.

        Comment by Nilima Nigam — June 14, 2012 @ 8:49 pm | Reply

    • Hi,
      I think there may be something to do using analytic perturbation theory.

      The first remark is that, using a linear diffeomorphism
      we can pullback the Dirichlet energy form (\int_T \left | \nabla u \right |^2 dxdy) on a moving triangle T to a quadratic form
      on a fixed triangle T_0 that can be written \int_{T_0} {}^t\nabla u  A \nabla u dxdy for some symmetric matrix A so that
      studying the Neumann Laplacian on T amounts to studying the latter quadratic form restricted to H^1(T_0) with respect to
      the standard Euclidean scalar product. If we now let A depend analytically on a real parameter t then we get a real-analytic
      family in the sense of Kato-Rellich so that the eigenvalues (and eigenvectors) are organized into real-analytic branches.

      Let (E(t), u(t)) be such an analytic eigenbranch; we define the following function f by f(t)= \frac{ \| u(t)\|_{ \infty,\partial T_0 } }{ \| u(t)\|_{\infty,T_0} }
      (observe that now everything is defined on T_0) and suppose we can prove that this function is also analytic (that is, for any choice of analytic
      perturbation and any corresponding eigenbranch). Then I think we can prove the following statement : “For any triangle T there is a Neumann eigenfunction
      whose maximum is on the boundary”. The proof would be as follows. Start from your triangle T and move one of its vertices along the corresponding altitude.
      This defines an analytic perturbation and for any t small enough the obtained triangle is obtuse. For t very small the second eigenbranch is simple and
      satisfies the hotspot conjecture, so that if we follow this particular branch, the corresponding f is identically 1 for t small enough and since it is analytic
      it is always 1. The claimed eigenfunction is the one that corresponds to this eigenbranch (because of crossings, it need not be the second one).

      If we want to prove the real hotspot conjecture we can try to argue in the opposite direction: start from the second eigenvalue and follow the same perturbation.
      We now have to prove the following things:
      1- For t small the branch becomes simple so that it corresponds to the N-th eigenvalue,
      2- For any N and any t small enough the N-th eigenfunction has its maximum on the boundary.

      Of course this line of reasoning relies heavily on the analyticity of f which I haven’t been able to establish yet (observe that t \mapsto u(t) is analytic
      with values in H^1 which is not good enough for C^0 bounds). Recently I have been thinking that maybe we could instead try to prove
      that f_r is analytic, where the subscript r means that we have removed a ball of that radius near each vertex. It should be easier to prove that
      this one is analytic (but then we need to prove something on the maximum of u_2 for any obtuse triangle when we remove a ball near each vertex).

      I finish by pointing at two references on multiplicities in the spectrum of triangles.
      First some advertisement
      – Hillairet-Judge Simplicity and asymptotic separation of variables, CMP, 2011, 302(2) (Erratum, CMP, 2012, 311 (3))
      – Berry-Wilkinson Diabolical points in the spectra of triangles, Proc. Roy. Soc. London, 1984, 392(1802), pp.15-43

      Comment by Luc Hillairet — June 15, 2012 @ 11:57 am | Reply

      • [I was editing this comment and I accidentally transferred ownership of it to myself, which is why my icon appears here. Sorry, please ignore the icon; this is Nilima’s post. – T.]

        An analytic perturbation argument from known cases would certainly be great! I thought about a similar argument for the thin triangle case (http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture, under ‘thin not-quite-sectors’). But I was thinking about perturbing from a sector to the triangle, and you’re thinking about perturbing from one triangle to another.

        Let’s see if I follow your argument. Following the notation in (http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture, under ‘reformulation on a reference domain’), one can replace the reference triangle by any other. One then shows analyticity of the eigenvalues with respect to perturbations in the mapping B, and shows the domain of analyticity is large enough to cover all acute triangles. Is this correct?

        Comment by Nilima Nigam — June 15, 2012 @ 2:59 pm | Reply

      • I think it may be difficult to show analyticity of a sup norm; note that even the sup of two analytic functions \max(f(t),g(t)) is not analytic when the two functions cross (e.g. |t| = \max( t, -t)). The enemy here is that as one varies t, a new local extremum gets created somewhere in the interior of the triangle, and eventually grows to the point where it overtakes the established extremum on the vertices, creating a non-analytic singularity in the L^infty norm.

        However, I think one does have analyticity as long as the extrema are unique (up to symmetry, in the isosceles case) and non-degenerate (i.e. their Hessian is definite), and the eigenvalue is simple. This is for instance the case for the non-equilateral acute isosceles and right-angled triangles, where we know that the eigenvalues are simple and the extrema only occur at the vertices of the longest side, and a Bessel expansion at a (necessarily acute) extremal vertex shows that any extremum is non-degenerate (it looks like a non-zero scalar multiple of the 0th Bessel function J_0(\sqrt{\lambda} r), plus lower order terms which are o(r^2) as r \to 0). Certainly in this setting, the work of Banuelos and Pang ( http://eudml.org/doc/130789;jsessionid=080D9E5423278BA5ACFC818847CA97FE ) applies, and small perturbations of the triangle give small perturbations of the eigenfunction in L^infty norm at least. This (together with uniform C^2 bounds for eigenfunctions in a compact family of acute triangles, which is sketched on the wiki, and is needed to handle the regions near the vertices) is already enough to give the hot spots conjecture for sufficiently small perturbations of a right-angled or non-equilateral acute isosceles triangle.

        The Banuelos-Pang results require the eigenvalue to be simple, so the perturbation theory of the equilateral triangle (in which the second eigenvalue has multiplicity 2) is not directly covered. However, it seems very likely that for any sufficiently small perturbation of the equilateral triangle, a second eigenfunction of the perturbed triangle should be close in L^infty norm to _some_ second eigenfunction of the original triangle (but this approximating eigenfunction could vary quite discontinuously with respect to the perturbation). Assuming this, this shows the hot spots conjecture for perturbations of the equilateral triangle as well, because _every_ second eigenfunction of the equilateral triangle can be shown to have extrema only at the vertices, and to be uniformly bounded away from the extremum once one has a fixed distance away from the vertices (this comes from the strict concavity of the image of the complex second eigenfunction of the equilateral triangle, discussed on the wiki).

        The perturbation argument also shows that in order for the hot spots conjecture to fail, there must exist a “threshold” counterexample of an acute triangle in which one of the vertex extrema is matched by a critical point either on the edge or interior of the triangle, though it is not clear to me how to use this information.

        Comment by Terence Tao — June 15, 2012 @ 3:53 pm | Reply

        • Thanks! Actually what I had in mind was trying to prove that t\mapsto u(t) is analytic with values in C^0(T_0), but then I
          imprudently jumped to thinking that this would imply the analyticity of the sup norm. So I am not sure there is anything to save from the analyticity
          approach I was suggesting.

          Except maybe the following fact: I think that the set of triangles such that \lambda_2 is simple is open and dense
          (and also full measure for a natural class of Lebesgue measure).
          We have proved that for any mixed Dirichlet-Neumann boundary condition … except Neumann everywhere! I have a sketch of a proof
          for the latter case but I never carried out the details (so there may be some bugs in the argument).

          Last thing concerning analyticity of the eigenvalues and eigenfunctions: this holds only for one-parameter analytic families of triangles.
          I don’t think the eigenvalues can be arranged to be analytic on the full parameter space (because there are crossings).

          Comment by Luc Hillairet — June 15, 2012 @ 5:01 pm | Reply

  10. I would like to propose a further probabilistic intuition, based on
    comment 15 of thread 1, and another
    possibility for attacking the problem. It is based on relating free
    Brownian motion with reflecting Brownian motion.

    If B is a one dimensional Brownian motion, and we define the
    floor function \lfloor \rfloor and the zig-zag function f(x) = \| x - 2 \lfloor (x+1) / 2 \rfloor \|,
    then R=f(B) is a reflecting Brownian motion on [0,1] (as can be
    rigorously proved using stochastic calculus and local time for example) and its density
    is the fundamental solution of the heat equation with Neumann boundary
    conditions. To write an expression of the transition density p_t^R of R in
    terms of the transition density p_t of B , write
    y_1\sim y_2 if
    f(y_1)=f(y_2) and note that

    (1) p_t^R(x,y)=\sum_{\tilde y\sim y} p_t(x,\tilde y)

    if x,y\in (0,1) but

    p_t^R(x,1)=2\sum_{\tilde y\sim 1}p_t(x,\tilde y)

    This explains why the boundary points 0 and 1 accumulate (or trap) heat at
    twice the rate as interior points, and I believe that from here one
    can conceptually prove hotspots in the very simple case of the interval.

    For two dimensional reflecting Brownian motion, one needs a similar reflection
    function. To construct it: think first of an equilateral triangle
    constructed as a kaleidoscope with 3 sides of equal length. Each point
    inside the triangle gives rise to a lattice of points in the plane
    which will be identified via the equivalence relation \sim . We then write the fundamental solution to the heat equation with
    Neumann boundary condition on the triangle via formula (1) for
    points in the interior of the triangle. However, points at the sides
    of the triangle accumulate heat at twice the rate while corner points
    trap it at 6 times the rate (because the triangle is equilateral).

    In general one would hope that a corner of angle \alpha gets heated
    \lfloor 2\pi/\alpha\rfloor times faster
    than interior points.

    I think that stochastic calculus is not yet mature enough to prove
    that reflecting Brownian motion in the triangle can be constructed by
    applying the reflection \sim to free Brownian motion (lacking a multidimensional Tanaka formula). However,
    one can check whether formula (1) does give the fundamental solution to the
    heat equation with Neumann boundary conditions.
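
    To experiment with this in the equilateral case, here is a rough Python sketch of the folding map (certainly not a proof, and only for the equilateral tiling): keep reflecting a point of the plane across any side of the triangle that it lies on the wrong side of, until it lands inside. Applying this to the endpoint of a free planar Brownian path gives a candidate sample of the reflected motion at that time.

    import numpy as np

    # equilateral triangle with vertices V0, V1, V2
    V0 = np.array([0.0, 0.0])
    V1 = np.array([1.0, 0.0])
    V2 = np.array([0.5, np.sqrt(3) / 2])

    def reflect(p, q1, q2):
        # reflect the point p across the line through q1 and q2
        d = (q2 - q1) / np.linalg.norm(q2 - q1)
        v = p - q1
        return q1 + 2 * np.dot(v, d) * d - v

    def fold(p, max_passes=1000):
        # fold a point of the plane into the triangle by repeatedly reflecting it
        # across the supporting line of any side it lies on the wrong side of
        sides = [(V0, V1, V2), (V1, V2, V0), (V2, V0, V1)]  # (endpoint, endpoint, opposite vertex)
        for _ in range(max_passes):
            moved = False
            for q1, q2, opp in sides:
                n = np.array([-(q2 - q1)[1], (q2 - q1)[0]])  # normal to the side q1-q2
                if np.dot(n, opp - q1) * np.dot(n, p - q1) < 0:  # p and opp on opposite sides
                    p = reflect(p, q1, q2)
                    moved = True
            if not moved:
                return p
        return p

    # example: fold the endpoint of a free Brownian path started at the centroid
    np.random.seed(0)
    centroid = (V0 + V1 + V2) / 3
    print(fold(centroid + np.sqrt(0.5) * np.random.randn(2)))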

    Comment by Gerónimo — June 14, 2012 @ 4:23 pm | Reply

    • Hmm, I’m not so sure about the factor of 2 in the formula for p_t^R(x,1), as this would imply that the heat kernel is discontinuous at the boundary, which I’m pretty sure is not the case. Note that the epsilon-neighbourhood of a boundary point in one dimension is only half as large as the epsilon-neighbourhood of an interior point, and so I think this factor of 1/2 cancels out the factor of 2 that one is getting from the folding coming from the zigzag function. So the heating at the endpoints is coming more from the convexity properties of the heat kernel than from folding multiplicity.

      Still, this does in principle give an explicit formula for the heat kernel on the triangle as some sort of infinitely folded up version of the heat kernel on something like the plane (but one may have to work instead with something more like the universal cover of a plane punctured at many points if the angles do not divide evenly into pi). One problem in the general case is that the folding map becomes dependent on the order of the edges the free Brownian motion hits, and so cannot be represented by a single map f unless one works in some complicated universal cover.
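
      To make the continuity point concrete in one dimension: the unfolded image formula

      p_t^R(x,y) = \sum_{k} ( p_t(x, y+2k) + p_t(x, -y+2k) )   (sum over all integers k)

      is smooth in y up to the boundary, and its y-derivative vanishes at y=0 and y=1 because the images pair off there, so I believe this is the Neumann heat kernel on [0,1]. As y \to 1 the images y+2k and -y+2(k+1) merge in pairs onto the odd integers, which is exactly the cancellation between the factor of 2 and the halved neighbourhood just described.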

      Comment by Terence Tao — June 14, 2012 @ 4:38 pm | Reply

      • I agree, the formula for p_t^R(x,1) shouldn’t have the factor two and the
        intuition there is incorrect. However, it does suggest a new one: since
        the heat kernel p_t decays rapidly, endpoints with nearby
        reflections will accumulate more density (the notion of nearby depends on the
        amount of time elapsed) and corners of angle \alpha are points
        where there are (mainly) 2\pi/\alpha nearby reflections.

        Also, maybe one does not need to leave the plane to construct the
        reflecting Brownian motion, since two-dimensional free Brownian motion
        does not visit the corners of the triangle (by polarity of countable
        sets); one only needs to keep changing the reflection edge as soon
        as a new one is reached. The
        transition density does indeed seem more complicated, but perhaps (1)
        might provide sensible approximations.

        Comment by Gerónimo — June 14, 2012 @ 5:44 pm | Reply

        • It is true that once one shows that the Neumann heat kernel is increasing toward the boundary, the hot-spots conjecture is true. But this approach is much harder than just proving the hot-spots conjecture. Until very recently there was the Laugesen-Morpurgo conjecture, stating that the Neumann heat kernel for a ball is increasing toward the boundary. This was settled by Pascu and Gageonea (http://www.sciencedirect.com/science/article/pii/S0022123610003526) in 2011 using mirror couplings.

          The reflection argument seems very appealing, but even for an interval I have not seen a proof that the Neumann heat kernel is increasing using the explicit series of Gaussian terms coming from reflections. The above paper also settles the interval case. One can also use the Dirichlet heat kernel to prove this (http://pages.uoregon.edu/siudeja/neumann.pdf, slides 6 and 7).
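
          (Purely as a numerical illustration, not a proof, here is a short Python/numpy sketch of that series; for, say, t = 0.1 the truncated diagonal does come out increasing from the midpoint toward the endpoint.)

          import numpy as np

          def p_free(x, y, t):
              # free one-dimensional Gaussian transition density
              return np.exp(-(y - x) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

          def neumann_diag(x, t, K=30):
              # diagonal p_t^R(x,x) of the Neumann heat kernel on [0,1], via the
              # (truncated) series of Gaussian terms coming from reflections
              ks = np.arange(-K, K + 1)
              return (p_free(x, x + 2 * ks, t) + p_free(x, -x + 2 * ks, t)).sum()

          t = 0.1
          for x in np.linspace(0.5, 1.0, 11):
              print(round(x, 2), neumann_diag(x, t))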

          For triangles, reflections are not enough to cover the plane. You may have to also flip the reflected triangle along the line perpendicular to the reflection side in order to ensure that you can cover the plane. This, however, means that you lose continuity on the boundary.

          Comment by Bartlomiej Siudeja — June 14, 2012 @ 6:07 pm | Reply

          • A small correction: “the diagonal of the Neumann heat kernel should be increasing toward the boundary”, so p_t^R(x,x) should be increasing as x goes to the boundary.

            Comment by Bartlomiej Siudeja — June 14, 2012 @ 8:32 pm | Reply

    • It does seem like such a procedure would be hard (perhaps hopelessly so) to implement for triangles that don’t tile the plane nicely (which are most triangles) for the reasons given in the other replies. But if such an argument were to work it would first need to be worked out for the case of an equilateral triangle. I’d be interested in seeing such an argument but I am not sure how it would go…

      Suppose the initial heat is a point mass at one corner, and draw out a full tiling of the plane. Then the unreflected heat flow would have a nice Gaussian distribution, and the reflected heat flow could be recovered by folding in all the triangles… but how would you show that the hottest point upon folding is at the corner you started the heat flow at? You have an infinite sum and it is not the case that each triangle in this sum has its maximum at that corner…

      Comment by Chris Evans — June 14, 2012 @ 8:26 pm | Reply

  11. A couple of people asked for some pictures of nodal lines.

    Here are some on triangles which aren’t isosceles or right or equilateral, and whose angles aren’t within pi/50 of those special cases, either:


    Here are the nodal lines corresponding to the 2nd and 3rd Neumann eigenfunction on a nearly equilateral triangle. Note the multiplicity of the 2nd eigenvalue is 1, but the spectral gap \lambda_3-\lambda_2 is small. I found these interesting. (Image: http://people.math.sfu.ca/~nigam/polymath-figures/nearly-equilateral-1.jpg)


    Comment by Nilima Nigam — June 14, 2012 @ 5:16 pm | Reply

    • Is the nearly equilateral triangle isosceles? If it is, the nearly antisymmetric case should not look the way it does. Every eigenfunction on an isosceles triangle must be either symmetric or antisymmetric; otherwise the corresponding eigenvalue is not simple. It is not impossible that the third one is not simple, but for a nearly equilateral triangle that is extremely unlikely. Here the antisymmetric case is the second eigenvalue, so it must be antisymmetric. Even if this triangle is not isosceles, the change in the shape of the nodal line is really huge.

      Comment by Bartlomiej Siudeja — June 15, 2012 @ 3:51 am | Reply

      • No, the nearly equilateral triangle is not isosceles.

        Comment by Nilima Nigam — June 15, 2012 @ 3:54 am | Reply

        • Also, do you have bounds on how the nodal lines should change as we perturb away from the equilateral triangle in an asymmetric fashion? This would be interesting to compare with.

          Comment by Nilima Nigam — June 15, 2012 @ 4:01 am | Reply

          • No, I do not think I have anything for nodal lines. One of the papers by Antunes and Freitas may have something, but they mostly concentrate on the way eigenvalues change. Nothing for nodal lines. It is quite surprising, and good for us, that the change is so big.

            Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:08 am | Reply

  12. In case someone wants to see eigenfunctions of all known triangles and a square (right isosceles triangle), I have written a Mathematica package http://pages.uoregon.edu/siudeja/TrigInt.m. See ?Equilateral and ?Square for usage. A good way to see nodal domains is to use RegionPlot with eigenfunction>0. The package can also be used to facilitate linear deformations of triangles. In particular, Transplant moves a function from one triangle to another (put {x,y} as the function to see the linear transformation itself). There is a T[a,b] notation for the triangle with vertices (0,0), (1,0) and (a,b). The function Rayleigh evaluates the Rayleigh quotient of a given function on a given triangle (with one side on the x-axis). There are also other helper functions for handling triangles. Everything is symbolic, so parameters can be used. Put this in Mathematica to import the package:
    AppendTo[$Path, ToFileName[{$HomeDirectory, "subfolder", "subfolder"}]];
    << TrigInt`
    The first line may be needed for the Mathematica kernel to see the file. After that
    Equilateral[Neumann,Antisymmetric][0,1] gives the first antisymmetric eigenfunction
    Equilateral[Eigenvalue][0,1] gives the second eigenvalue

    Comment by Anonymous — June 14, 2012 @ 8:15 pm | Reply

    • There is also a function TrigInt which is much faster than regular Int for complicated trigonometric functions. Limits for the integral can be obtained using Limits[triangle]. For integration it might be a good idea to use extended triangle notation T[a,b,condition] where condition is something like b>0.

      Comment by Bartlomiej Siudeja — June 14, 2012 @ 8:20 pm | Reply

      • I’m not a Mathematica user, so my question may be naive. Are the eigenfunctions being computed symbolically by Mathematica?
        If not, could you provide some details on what you’re using to compute the eigenfunctions/values?
        It would be great if you could post this information to the Wiki.

        Comment by Nilima Nigam — June 15, 2012 @ 4:04 am | Reply

        • They are computed using a general formula. The nicest write-up is probably in the series of papers by McCartin. All the eigenfunctions look almost the same: a sum of three terms, each a product of two cosines/sines. The only difference is the integer coefficients inside the trig functions. The same formula works for Dirichlet, just with slightly different numbers.

          Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:12 am | Reply

        • Here is the code from the package (with small changes for readability). First, some convenient definitions.
          h=1;
          r=h/(2Sqrt[3]);
          u=r-y;
          v=Sqrt[3]/2(x-h/2)+(y-r)/2;
          w=Sqrt[3]/2(h/2-x)+(y-r)/2;

          Then a function that covers all the cases; #1 and #2 are just integers, and f and g are trig functions:
          EqFun[f_,g_]:=f[Pi (-#1-#2)(u+2r)/(3r)]g[Pi (#1-#2)(v-w)/(9r)]+
          f[Pi #1 (u+2r)/(3r)]g[Pi (2#2+#1)(v-w)/(9r)]+
          f[Pi #2 (u+2r)/(3r)]g[Pi (-2#1-#2)(v-w)/(9r)];

          All the cases:
          Equilateral[Neumann,Symmetric]=EqFun[Cos,Cos]&;
          Equilateral[Neumann,Antisymmetric]=EqFun[Cos,Sin]&;
          Equilateral[Dirichlet,Symmetric]=EqFun[Sin,Cos]&;
          Equilateral[Dirichlet,Antisymmetric]=EqFun[Sin,Sin]&;

          The eigenvalue formula is the same regardless of the case. For Neumann you need 0<=#1<=#2; for Dirichlet, 0<#1<=#2; and the antisymmetric case cannot have #1=#2.
          Equilateral[Eigenvalue]=Evaluate[4/27(Pi/r)^2(#1^2+#1 #2+#2^2)]&;

          Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:22 am | Reply

          • I’m sorry, I’m really not familiar with this package. Am I correct, reading the script above, that you are computing an *analytic* expression for the eigenvalue? That is, if I give the three angles of an arbitrary triangle (a, b, pi-a-b), your script renders the Neumann eigenvalue and eigenfunction in closed form?

            Or is this code for the cases where the closed form expressions for the eigenvalues are known (equilateral, right-angled, etc)? This is also very nice to have, for verification of other methods of calculation.

            When we map one triangle to another, the eigenvalue problem changes (see the Wiki, or previous discussions here). It is great if you have a code which can analytically compute the eigenvalues of the mapped operator on a specific triangle, or equivalently, eigenvalues on a generic triangle.

            Comment by Nilima Nigam — June 15, 2012 @ 4:36 am | Reply

            • This package is not fancy at all. It has formulas for the equilateral, right isosceles and half-equilateral triangles; these are known explicitly. For other triangles it just helps evaluate the Rayleigh quotient of something like f composed with T (linear), which just gives upper bounds for eigenvalues. It might also help speed up calculations for the Hadamard variation, since you do not need to work out the linear transformation from one triangle to another, and it can evaluate the Rayleigh quotient on the transformed triangle. It was handy for proving bounds for eigenvalues, and for seeing nodal domains in the known cases.

              I wish I had an analytic formula for the eigenvalues of an arbitrary triangle.
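
              For anyone without Mathematica, here is a rough Python/numpy sketch of the same transplantation idea (not part of the package, only a crude Monte Carlo estimate, and the target triangle T[0.45, 0.8] at the end is just an arbitrary example): take the antisymmetric second Neumann eigenfunction of the equilateral triangle, written out from the EqFun formula above with #1=0 and #2=1, push it onto T[a,b] by the linear map fixing (0,0) and (1,0), and evaluate the Rayleigh quotient. Since the map is linear, the transplanted function still has integral zero, so the quotient is an upper bound for the second Neumann eigenvalue of T[a,b].

              import numpy as np

              r = 1 / (2 * np.sqrt(3))  # as in the package, for the triangle (0,0), (1,0), (1/2, sqrt(3)/2)

              def eig_anti(x, y):
                  # Equilateral[Neumann, Antisymmetric][0, 1], written out from EqFun[Cos, Sin]
                  u, vw = r - y, np.sqrt(3) * (x - 0.5)  # vw = v - w
                  A = np.pi * (u + 2 * r) / (3 * r)
                  B = np.pi * vw / (9 * r)
                  return np.sin(2 * B) - 2 * np.cos(A) * np.sin(B)

              def rayleigh_bound(a, b, n=200000, h=1e-5):
                  # Monte Carlo Rayleigh quotient of the eigenfunction transplanted to T[a,b]
                  # by the linear map L fixing (0,0), (1,0) and sending (1/2, sqrt(3)/2) to (a,b);
                  # the gradient of f o L^{-1} is L^{-T} grad f, and the constant Jacobian cancels,
                  # so only integrals over the reference equilateral triangle are needed
                  L = np.array([[1.0, (2 * a - 1) / np.sqrt(3)],
                                [0.0, 2 * b / np.sqrt(3)]])
                  LinvT = np.linalg.inv(L).T
                  s, t = np.random.rand(2, n)  # uniform samples in the reference triangle
                  flip = s + t > 1
                  s[flip], t[flip] = 1 - s[flip], 1 - t[flip]
                  x, y = s + 0.5 * t, (np.sqrt(3) / 2) * t
                  f = eig_anti(x, y)
                  gx = (eig_anti(x + h, y) - eig_anti(x - h, y)) / (2 * h)  # finite-difference gradient
                  gy = (eig_anti(x, y + h) - eig_anti(x, y - h)) / (2 * h)
                  g = LinvT @ np.vstack([gx, gy])  # gradient of the transplanted function
                  return (g ** 2).sum(axis=0).mean() / (f ** 2).mean()

              print(rayleigh_bound(0.5, np.sqrt(3) / 2), 16 * np.pi ** 2 / 9)  # sanity check: should roughly agree
              print(rayleigh_bound(0.45, 0.8))  # an upper bound for the second Neumann eigenvalue of T[0.45, 0.8]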

              Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:42 am | Reply

              • OK, thanks for the clarification! I was thrown by your initial comment about it accepting all parameters. Now I understand that you’re able to get good bounds, rather than the exact eigenvalues.

                Comment by Nilima Nigam — June 15, 2012 @ 5:23 am | Reply

            • The fact that there are quite a few known cases means that you can make a linear combination of known eigenfunctions (each transplanted to a given triangle) and evaluate the Rayleigh quotient. PDEToolbox is not a benchmark for FEM, but I have seen cases where 16GB of memory was not enough to bring the numerical result below the upper bound obtained from a test function built from 5 known eigenfunctions.

              Comment by Bartlomiej Siudeja — June 15, 2012 @ 5:12 am | Reply

              • PDEtoolbox is great for generating a quick result, but not for careful numerics, and it doesn’t do high order. Yes, you could wait a long while to get good results if you relied solely on PDEToolBox. Joe Coyle (whose FEM solver we’re using) has implemented high-accuracy conforming approximants, and we’re keeping tight control on residual errors. Details of our approximation strategy are on the Wiki. I’m also thinking of implementing a completely non-variational strategy, so we have two sets of results to compare.

                Comment by Nilima Nigam — June 15, 2012 @ 5:29 am | Reply

                • I used to use PDEToolbox for visualizations, but I no longer have a license for it. Besides, it does not have 3D, and eigenvalues in 3D behave much worse than in 2D. I have written a wrapper for the eigensolver from the FEniCS project (http://fenicsproject.org/). It is most likely not good for rigorous numerics, and I am not even a beginner in FEM. However, it works perfectly for plotting. In particular, one can see that the nodal line moves away from the vertices very quickly. The nearly equilateral case Nilima posted must indeed be extremely close to equilateral. While Nilima crunches the data, anyone who wants to see more pictures is welcome to use my script. It is a rough implementation with not-so-good documentation, but it can handle many domains with any boundary conditions (also mixed). There is a readme file. Download link: http://pages.uoregon.edu/siudeja/fenics.zip. I have tested this only on a Mac, so I am not sure it will work on Windows or Linux, though it should.

                  To get a triangle one can use
                  python eig.py tr a b -N -s number -m -c3 -e3
                  Here tr is the domain specification, a and b give the third vertex, -N gives Neumann, -s number sets the number of triangles, -m shows the mesh, -c3 gives contours instead of surface plots (3 contours are good for the nodal line), and -e3 computes 3 eigenvalues.
                  There are many options; python eig.py -h lists all of them with minimalistic explanations.

                  Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:57 pm | Reply

  13. Some random thoughts about the nodal line:

    1) I believe the Nodal Line Theorem guarantees that the nodal line is a curve with endpoints on the boundary which divides the triangle into two sub-regions. It might be possible to prove that in fact the two endpoints of the nodal line lie on different sides of the triangle. (The alternative case, that the nodal line looks like a handle sticking out from one of the edges, feels wrong… in fact maybe it is the case that for no domain ever is it the case that the two endpoints of the nodal line lie on the same straight line segment of its boundary).

    2) If 1) were true, then it would follow that the nodal line does in fact straddle one of the corners. Moreover, we know a priori that the nodal line is orthogonal to the boundary (so at least locally near the boundary it starts to “bow out”). The nodal line ought not to be too serpentine… that would cause the second eigenfunction to have a large H^1-norm while allowing the L^2-norm to stay small… which would violate the Rayleigh-Ritz formulation of the 2nd eigenfunction (recalled just after this list).

    3) Since the nodal line is “bowed out” at the boundary, and has incentive not to be serpentine, it seems like it shouldn’t “bow in”. If we could show that the slope/angle of the nodal-line stays within a certain range then the arguments used for the mixed Dirichlet-Neumann triangle could be applied to show that the extremum of the eigenfunction in this sub-region in fact lies at the corner the nodal line is straddling.
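
    (For reference, the variational characterization appealed to in 2) is

    \lambda_2 = \min \{ \int_T |\nabla u|^2 / \int_T u^2 : u \in H^1(T), \int_T u = 0, u \neq 0 \},

    with the minimum attained exactly by the second Neumann eigenfunctions; a very wiggly nodal line would drive the numerator up without a matching gain in the denominator.)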

    Of course this is all hand-wavy and means nothing without precise quantitative estimates :-/

    In particular though, does anyone know if the statement “for no domain ever is it the case that the two endpoints of the nodal line lie on the same straight line segment of its boundary” is true? I can’t think of any domain for which that would be the case…

    Comment by Chris Evans — June 14, 2012 @ 8:53 pm | Reply

    • I think your last statement is true. Suppose a nodal line for the Neumann Laplacian in a polygonal domain has both its endpoints on the same line segment. Consider the domain Q enclosed by the nodal line and the piece of the line segment between the nodal line’s endpoints. This region is a subset of the original domain.

      Now, on Q, the eigenfunction u has the following properties: it satisfies \Delta u + \Lambda u=0 in Q, has zero Dirichlet data on the curvy part of the boundary of Q, and satisfies zero Neumann data on the straight line part of its boundary. Now reflect Q across the straight line segment, and you get a Dirichlet problem for \Delta u + \Lambda u=0 in the doubled domain.

      I now claim \Lambda cannot be an eigenvalue of the Dirichlet problem on this doubled domain. \Lambda is the first eigenvalue of the mixed Dirichlet-Neumann problem on Q. This is easy: there are no other nodal lines in Q. Hence \Lambda is smaller than the first eigenvalue of the Dirichlet problem on Q (fewer constraints). Doubling the domain just increases the value of the Dirichlet eigenvalue. So \Lambda cannot be an eigenvalue on the doubled domain.

      Finally, we have the Helmholtz problem \Delta u + \Lambda u=0 on the doubled domain, with zero boundary data. We’ve just shown \Lambda is not an eigenvalue, so the problem is uniquely solvable, and hence u=0 in the doubled domain.

      This would be a contradiction.

      Comment by Nilima Nigam — June 14, 2012 @ 9:35 pm | Reply

      • I think there is something wrong with this argument. When you double the domain, the Dirichlet eigenvalue must go down. In fact \Lambda is exactly equal to the first Dirichlet eigenvalue on doubled Q (which has the Dirichlet condition all around). Doubled Q has a line of symmetry, hence by simplicity of the first Dirichlet eigenvalue the eigenfunction must be symmetric. Hence it must satisfy the Neumann condition on the straight part of the boundary of Q.

        Comment by Bartlomiej Siudeja — June 14, 2012 @ 10:00 pm | Reply

      • Once we double the original domain and get a doubled Q with the Dirichlet condition all around, we can claim that this domain has larger eigenvalues than the original domain doubled with Dirichlet all around. Assuming the doubled domain is convex, we can use the Payne-Levine-Weinberger inequality \mu_3\le\lambda_1 (Neumann is below Dirichlet). Without convexity we just have \mu_2 <\lambda_1. Our original eigenfunction gives an eigenvalue on the doubled domain, but unfortunately it might not be the second. If it were, we would be done. Under the convexity assumption it should be easier, but I am not sure yet how to finish the proof.

        Comment by Bartlomiej Siudeja — June 14, 2012 @ 10:38 pm | Reply

      • I like the idea of taking advantage of the fact that the boundary is flat to reflect across it, but for the reasons Siudeja mentions I don’t quite follow the argument.

        Maybe it is possible to make an argument by reflecting the entire domain (not just the Q in your notation) across the straight line segment. The reflected eigenfunction would then have a nodal line which is a closed loop…

        Thus we would have an eigenfunction which has only *one* nodal line and it is a loop floating in the middle… does the Nodal Line Theorem preclude this?

        Comment by Chris Evans — June 14, 2012 @ 11:27 pm | Reply

        • The unit disk contains a Neumann eigenfunction J_0(j'_{0,1} r) (where j'_{0,1} is the first positive zero of J_0') whose nodal line is a circle – but its eigenvalue is not the second one. It is, however, the second eigenvalue amongst the radial functions, which already suggests one has to somehow “break the symmetry” (whatever that means) in order to rule out loops…

          Comment by Terence Tao — June 15, 2012 @ 12:28 am | Reply

    • I think that if one can prove that the second eigenfunction of an acute scalene triangle never vanishes at a vertex (i.e. the nodal line cannot cross a vertex), then a continuity argument (starting from a very thin acute triangle, for instance) shows that for any acute scalene triangle, the nodal line crosses each of the edges adjacent to the pointiest vertex exactly once. I don’t know how to prevent vanishing at a vertex though. (Note that for an equilateral or super-equilateral isosceles triangle, the nodal line does go through the vertex, though as shown in the image http://people.math.sfu.ca/~nigam/polymath-figures/nearly-equilateral-1.jpg from comment 11, the nodal line quickly moves off of the vertex once one perturbs off of the isosceles case.)

      I was looking at the argument that shows the nodal line is not a closed loop, hoping to get some mileage out of a reflection argument, but unfortunately it relies on an isoperimetric inequality and does not seem to be helpful here. (The argument is as follows: if the nodal line is a closed loop, enclosing a subdomain D of the original triangle T, then by zeroing out everything outside of the loop we see that the second Neumann eigenvalue of T is at least as large as the first Dirichlet eigenvalue of D, which is in turn larger than the first Dirichlet eigenvalue of T. But there are isoperimetric inequalities asserting that among all domains of a given area the first Dirichlet eigenvalue is minimised, and the second Neumann eigenvalue is maximised, by a disk, implying in particular that the second Neumann eigenvalue of T is less than or equal to the first Dirichlet eigenvalue of T, giving the desired contradiction.)
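
      (In symbols, writing \mu for Neumann and \lambda for Dirichlet eigenvalues, and letting B be a disk with the same area as T, the chain in the parenthetical argument is

      \mu_2(T) \geq \lambda_1(D) > \lambda_1(T) \geq \lambda_1(B) > \mu_2(B) \geq \mu_2(T),

      using, in order, the zeroing-out step, strict domain monotonicity of the first Dirichlet eigenvalue, Faber-Krahn, the explicit disk eigenvalues, and Szegő-Weinberger.)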

      Comment by Terence Tao — June 14, 2012 @ 10:55 pm | Reply

      • This is exactly what I was trying to do above. I think that the isoperimetric inequality is not needed. The second Neumann eigenvalue is just equal to the first Dirichlet eigenvalue in the loop (the Laplacian is local), which is larger than the first Dirichlet eigenvalue on the whole domain, which is in turn larger than the second Neumann eigenvalue on the whole domain (Polya and others). For convex domains even the third Neumann eigenvalue is below the first Dirichlet. But even this is not enough for our case.

        Comment by Bartlomiej Siudeja — June 14, 2012 @ 11:04 pm | Reply

      • I have done a few numerical plots for super-equilateral triangles sheared by a very small amount. It seems that the speed at which the nodal line moves away from the vertex under shearing grows as the isosceles triangle approaches the equilateral one. For the triangle with vertices (0,0), (1,0) and (1/2+epsilon, sqrt(3)/(2+epsilon)), the nodal line looks almost the same regardless of epsilon. I tried epsilon=0.1, 0.01, 0.0001. The nodal line touches the side about 1/3 of the way from the vertex.

        Comment by Bartlomiej Siudeja — June 15, 2012 @ 6:31 pm | Reply

    • I think reflection may actually work, unless I am missing something. Let T be the original acute triangle, Q the quadrilateral obtained by reflecting T across one of its sides, and S the reflection line. We assume that the nodal line of the second Neumann eigenfunction of T has both its endpoints on S. Reflecting the nodal domain cut off by the nodal line across S gives an interior Dirichlet domain D. This is smaller than Q, so by domain monotonicity it has a strictly larger first Dirichlet eigenvalue than Q with Dirichlet boundary conditions. Due to the convexity of Q, the third Neumann eigenvalue of Q is not larger than the first Dirichlet eigenvalue of Q (http://www.jstor.org/stable/2375044).

      We will be done if we can show that the second Neumann eigenfunction of T gives the second or third Neumann eigenfunction of Q. Due to the line of symmetry in Q, every eigenfunction must be symmetric or antisymmetric. If not, we could reflect it, then add the original and the reflection to get something symmetric; we could also subtract to get something antisymmetric. Hence a non-symmetric eigenfunction of Q implies a double eigenvalue. One of those must be symmetric, so it restricts to a Neumann eigenfunction of T, and we are done.

      So suppose that the second Neumann eigenfunction on Q is antisymmetric. If the third one is also antisymmetric, it must have an additional nodal line, hence by antisymmetry must have at least 4 nodal domains. But this is not possible. Hence either the second or the third eigenfunction on Q must be symmetric, hence it must satisfy the Neumann condition on S. Therefore it must be the second eigenfunction on T. Contradiction.
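
      In symbols (\mu for Neumann, \lambda for Dirichlet): the restriction of that symmetric second or third eigenfunction of Q is a nonconstant Neumann eigenfunction of T, while the second eigenfunction of T, restricted to the nodal domain cut off by S and doubled across S, is a first Dirichlet eigenfunction of D, so the chain is

      \mu_2(T) \leq \mu_3(Q) \leq \lambda_1(Q) < \lambda_1(D) = \mu_2(T),

      using the convexity of Q for the middle inequality and domain monotonicity (D sits strictly inside Q) for the strict one.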

      Comment by Bartlomiej Siudeja — June 15, 2012 @ 12:44 am | Reply

      • Nice! (Though I’m not clear about the line “Non symmetric eigenfunction of Q implies double eigenvalue”, it seems that this is neither true nor needed for the argument. Also, I replaced your jstor link with the stable link.)

        Comment by Terence Tao — June 15, 2012 @ 1:20 am | Reply

          • No symmetry for the eigenfunction means that we can reflect the eigenfunction to get a new (different) one. Now take the sum to get something symmetric (Neumann on S), and subtract to get something antisymmetric (Dirichlet on S). Neither one will be 0, and they must be orthogonal. So the eigenvalue must be double or higher. This just means that an eigenspace on a symmetric domain can always be decomposed into symmetric and antisymmetric parts.

          Comment by Bartlomiej Siudeja — June 15, 2012 @ 1:29 am | Reply

          • Oh, I see what you mean now. (I had confused “non symmetric” with “anti-symmetric”.) I put a quick writeup of the argument on the wiki.

            Comment by Terence Tao — June 15, 2012 @ 1:48 am | Reply

              • The reference I included was to a paper of Friedlander, where he cites a much older paper by Levine and Weinberger in which the inequality is proved. There is also a nice paper by Frank and Laptev that gives a good account of who proved what (http://www2.imperial.ac.uk/~alaptev/Papers/FrLap2.pdf).

              Comment by Bartlomiej Siudeja — June 15, 2012 @ 2:16 am | Reply

  14. Concerning the method of attack I suggested in the previous comment, it seems that 1) is proven (as the nodal line connects two edges, it does indeed straddle some vertex).

    It occurs to me that 2) and 3) can be more succinctly phrased as the conjecture that the mixed-boundary domain cut off by the nodal line at this corner is *convex*.

    I think showing that would be enough… because the nodal line intersects the boundary orthogonally, knowing this region is convex should control the slope of the nodal line well enough that the earlier arguments would place the extremum at the corner the nodal line is straddling.

    Comment by Chris Evans — June 15, 2012 @ 7:27 am | Reply

  15. […] proposed by Chris Evans, and that has already expanded beyond the proposal post into its first research discussion post. (To prevent clutter and to maintain a certain level or organization, the discussion gets cut up […]

    Pingback by Three number theory bits: One elementary, the 3-Goldbach, and the ABC conjecture « mixedmath — June 15, 2012 @ 1:58 pm | Reply

  16. […] previous research thread for the Polymath7 “Hot Spots Conjecture” project has once again become quite full, so […]

    Pingback by Polymath7 research threads 2: the Hot Spots Conjecture « The polymath blog — June 15, 2012 @ 9:49 pm | Reply

    • As you can see, I’ve rolled over the thread again as this thread is also approaching 100 comments and getting a little hard to follow. The pace is a bit hectic, but I guess this is a good thing, as it is an indication that we are making progress and understanding the problem better…

      Comment by Terence Tao — June 15, 2012 @ 9:51 pm | Reply

  17. […] been quite an active discussion in the last week or so, with almost 200 comments across two threads (and a third thread freshly opened up just now).  While the problem is still not completely […]

    Pingback by Updates on the two polymath projects « What’s new — June 15, 2012 @ 10:22 pm | Reply

  18. […] time to roll over the research thread for the Polymath7 “Hot Spots” conjecture, as the previous research thread has again become […]

    Pingback by Polymath7 research threads 3: the Hot Spots Conjecture « The polymath blog — June 24, 2012 @ 7:22 pm | Reply

