Tags: #projects/my-talks #projects/reading-groups
See Floer Reading Group Fall 2020
Summary/Outline
Outline
What we’re trying to prove:
- 8.1.5: \((d{\mathcal{F}})_u\) is a Fredholm operator of index \(\mu(x) - \mu(y)\).
What we have so far:
- Define \begin{align*} L: W^{1, p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2 n}\right) & \longrightarrow L^{p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2 n}\right) \\ Y & \longmapsto \frac{\partial Y}{\partial s}+J_{0} \frac{\partial Y}{\partial t}+S(s, t) Y \end{align*} where \begin{align*} S: {\mathbb{R}}\times S^1 &\to \operatorname{Mat}(2n; {\mathbb{R}}) \\ S(s, t) &\overset{s\to\pm\infty}\to S^\pm(t) .\end{align*}
Outline
- Took \(R^\pm: I \to {\mathsf{Sp}}(2n; {\mathbb{R}})\): symplectic paths associated to \(S^\pm\)
- These paths defined \(\mu(x), \mu(y)\)
- Section 8.7: \begin{align*} R^\pm \in {\mathcal{S}}\coloneqq\left\{{R(t) {~\mathrel{\Big\vert}~}R(0) = \operatorname{id}, ~ \operatorname{det}(R(1) - \operatorname{id})\neq 0}\right\} \implies L \text{ is Fredholm} .\end{align*}
- WTS 8.8.1: \begin{align*} \operatorname{Ind}(L)\stackrel{\text{Thm?}}{=} \mu(R^-(t)) - \mu(R^+(t)) = \mu(x) - \mu(y) .\end{align*}
From Yesterday
- Han proved 8.8.2 and 8.8.4.
  - So we know \(\operatorname{Ind}(L) = \operatorname{Ind}(L_1)\).
- Today: 8.8.5 and 8.8.3:
  - Computing \(\operatorname{Ind}(L_1)\) by computing kernels.
8.8.5: \(\dim \ker F, F^*\)
Recall
\begin{align*} L: W^{1, p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2 n}\right) & \longrightarrow L^{p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2 n}\right) \\ Y & \longmapsto \frac{\partial Y}{\partial s}+J_{0} \frac{\partial Y}{\partial t}+S(s, t) Y \\ \\ L_{1}: W^{1, p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2 n}\right) & \longrightarrow L^{p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2 n}\right) \\ Y & \longmapsto \frac{\partial Y}{\partial s}+J_{0} \frac{\partial Y}{\partial t}+S(s) Y \\ \\ L_{1}^{\star}: W^{1, q}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2 n}\right) & \longrightarrow L^{q}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2 n}\right) \\ Z & \longmapsto-\frac{\partial Z}{\partial s}+J_{0} \frac{\partial Z}{\partial t}+S(s)^t Z \end{align*}
Here \(p\) and \(q\) are conjugate exponents: \({1\over p} + {1\over q} = 1\).
Reductions
\begin{align*} L_1^* &= -{\frac{\partial }{\partial s}\,} + J_0 {\frac{\partial }{\partial t}\,} + S(s)^t .\end{align*}
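As a sanity check, here is the formal integration-by-parts computation identifying \(L_1^*\) as the formal adjoint of \(L_1\) (a sketch for smooth, compactly supported \(Y, Z\), with the pairing \(\left\langle Y, Z\right\rangle \coloneqq \int_{{\mathbb{R}}\times S^1} Y\cdot Z \,ds\,dt\) and using \(J_0^t = -J_0\)):
\begin{align*} \left\langle L_1 Y, Z\right\rangle = \int \qty( \frac{\partial Y}{\partial s} + J_0 \frac{\partial Y}{\partial t} + S(s) Y )\cdot Z = \int Y \cdot \qty( -\frac{\partial Z}{\partial s} + J_0 \frac{\partial Z}{\partial t} + S(s)^t Z ) = \left\langle Y, L_1^* Z\right\rangle ,\end{align*}
where the boundary terms vanish by compact support in \(s\) and periodicity in \(t\).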
- Since \(\operatorname{coker}L_1 \cong \ker L_1^*\), it suffices to compute \(\ker L_1^*\).
- We have
\begin{align*} J_0^1 \coloneqq \left[\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right] \implies J_0 = \begin{bmatrix} J_0^1 & & & \\ & J_0^1 & & \\ & & \ddots & \\ & & & J_0^1 \end{bmatrix} \in \bigoplus_{i=1}^n \operatorname{Mat}(2; {\mathbb{R}}) .\end{align*}
- This allows us to reduce to the \(n=1\) case.
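To spell out the reduction (a sketch, assuming \(S(s)\) is diagonal with \(2\times 2\) scalar blocks as in the setup below, and writing \(F_i\) for the restriction of \(L_1\) to the \(i\)th block; this notation is ad hoc): \(J_0\) and \(S(s)\) preserve each \({\mathbb{R}}^2\) factor, so
\begin{align*} W^{1,p}\left({\mathbb{R}}\times S^1; {\mathbb{R}}^{2n}\right) \cong \bigoplus_{i=1}^n W^{1,p}\left({\mathbb{R}}\times S^1; {\mathbb{R}}^{2}\right), \qquad L_1 \cong \bigoplus_{i=1}^n F_i \implies \ker L_1 \cong \bigoplus_{i=1}^n \ker F_i, \quad \ker L_1^* \cong \bigoplus_{i=1}^n \ker F_i^* .\end{align*}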
Setup
\(L_1\) used a path of diagonal matrices constant near \(\infty\): \begin{align*} S(s) \coloneqq\left(\begin{array}{cc} a_{1}(s) & 0 \\ 0 & a_{2}(s) \end{array}\right), \quad \text { with } a_{i}(s)\coloneqq\left\{\begin{array}{ll} a_{i}^{-} & \text {if } s \leq-s_{0} \\ a_{i}^{+} & \text {if } s \geq s_{0} \end{array}\right. .\end{align*}
\begin{center} \includegraphics[width = \textwidth]{figures/image_2020-05-27-20-10-07.png} \end{center}
Statement of Later Lemma (8.8.5)
Let \(p>2\) and define \begin{align*} F: W^{1, p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2}\right) &\longrightarrow L^{p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2}\right) \\ Y &\mapsto \frac{\partial Y}{\partial s}+J_{0} \frac{\partial Y}{\partial t}+S(s) Y .\end{align*}
Note: \(F\) is \(L_1\) for \(n=1\): \begin{align*} L_{1}: W^{1, p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2 n}\right) & \longrightarrow L^{p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2 n}\right) \\ Y & \longmapsto \frac{\partial Y}{\partial s}+J_{0} \frac{\partial Y}{\partial t}+S(s) Y .\end{align*}
Statement of Lemma
\begin{align*} F: W^{1, p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2}\right) &\longrightarrow L^{p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2}\right) \\ Y &\mapsto \frac{\partial Y}{\partial s}+J_{0} \frac{\partial Y}{\partial t}+S(s) Y .\end{align*}
Suppose \(a_i^\pm \not \in 2\pi {\mathbb{Z}}\).
- Suppose \(a_1(s) = a_2(s)\) and set \(a^\pm \coloneqq a_1^\pm = a_2^\pm\). Then
\begin{align*} \operatorname{dim} \operatorname{Ker} F &= 2 \cdot {\sharp}\left\{\ell \in \mathbb{Z} {~\mathrel{\Big\vert}~} 2\pi \ell \in (a^-, a^+) \subset {\mathbb{R}}\right\} \\ \operatorname{dim} \operatorname{Ker} F^{*} &= 2 \cdot {\sharp}\left\{\ell \in \mathbb{Z} {~\mathrel{\Big\vert}~} 2\pi\ell \in (a^+, a^-) \subset{\mathbb{R}} \right\} .\end{align*}
- Suppose \(\sup_{s\in {\mathbb{R}}} {\left\lVert {S(s)} \right\rVert} < 1\), then
\begin{align*} \operatorname{dim} \operatorname{Ker} F &= {\sharp}\left\{i \in\{1,2\} {~\mathrel{\Big\vert}~}~a_{i}^{-}<0 \text { and } a_{i}^{+}>0\right\}\\ \operatorname{dim} \operatorname{Ker} F^{*} &={\sharp}\left\{i \in\{1,2\} {~\mathrel{\Big\vert}~}~ a_{i}^{+}<0 \text { and } a_{i}^{-}>0\right\} .\end{align*}
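For concreteness, a small worked instance of the counting (the values here are chosen purely for illustration): take \(a^- = -\pi\) and \(a^+ = 3\pi\) in Assertion 1. Then
\begin{align*} \left\{\ell \in \mathbb{Z} {~\mathrel{\Big\vert}~} 2\pi\ell \in (-\pi, 3\pi)\right\} = \left\{0, 1\right\} \implies \operatorname{dim}\operatorname{Ker} F = 4, \qquad \left\{\ell \in \mathbb{Z} {~\mathrel{\Big\vert}~} 2\pi\ell \in (3\pi, -\pi)\right\} = \emptyset \implies \operatorname{dim}\operatorname{Ker} F^* = 0 .\end{align*}
Similarly, in Assertion 2 with (say) \(a_1^- = -\tfrac12,\ a_1^+ = \tfrac12\), \(a_2^- = a_2^+ = \tfrac12\), and an interpolation kept small enough that \(\sup_s \|S(s)\| < 1\), only the first entry changes sign from negative to positive, so \(\operatorname{dim}\operatorname{Ker} F = 1\) and \(\operatorname{dim}\operatorname{Ker} F^* = 0\).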
Statement of Lemma
In words:
- If \(S(s)\) is a scalar matrix, set \(a^\pm = a_1^\pm = a_2^\pm\) to the limiting scalars, count the integer multiples of \(2\pi\) between \(a^-\) and \(a^+\), and double the count.
- Otherwise, if \(S\) is uniformly bounded by 1, count the number of diagonal entries whose limits flip sign as \(s\) goes from \(-\infty\) to \(\infty\): negative to positive for \(\ker F\), positive to negative for \(\ker F^*\).
\begin{center} \includegraphics[width = \textwidth]{figures/image_2020-05-27-20-10-07.png} \end{center}
Proof of Assertion 1
- Suppose \(a_1(s) = a_2(s)\) and set \(a^\pm \coloneqq a_1^\pm = a_2^\pm\). Then
\begin{align*} \operatorname{dim} \operatorname{Ker} F &= 2 \cdot {\sharp}\left\{\ell \in \mathbb{Z} {~\mathrel{\Big\vert}~} 2\pi \ell \in (a^-, a^+) \subset {\mathbb{R}}\right\} \\ \operatorname{dim} \operatorname{Ker} F^{*} &= 2 \cdot {\sharp}\left\{\ell \in \mathbb{Z} {~\mathrel{\Big\vert}~} 2\pi\ell \in (a^+, a^-) \subset{\mathbb{R}} \right\} .\end{align*}
Step 1: Transform to Cauchy-Riemann Equations
- Write \(a(s) \coloneqq a_1(s) = a_2(s)\).
- Start with equation on \({\mathbb{R}}^2\), \begin{align*}\mathbf{Y}(s, t) = \left[ Y_1(s, t), Y_2(s, t) \right].\end{align*}
- Replace with equation on \({\mathbb{C}}\): \begin{align*}\mathbf{Y}(s, t) = Y_1(s, t) + i Y_2(s, t).\end{align*}
Assertion 1, Step 1: Reduce to CR
- Expand definition of the PDE \begin{align*} F(\mathbf{Y}) = 0 \leadsto \mkern 1.5mu\overline{\mkern-1.5mu{\partial}\mkern-1.5mu}\mkern 1.5mu\mathbf{Y} + S \mathbf{Y} = 0 \\ \\ \frac{\partial}{\partial s} \mathbf{Y} +\left(\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right) \frac{\partial}{\partial t} \mathbf{Y} +\left(\begin{array}{cc} a(s) & 0 \\ 0 & a(s) \end{array}\right) \mathbf{Y} =0 .\end{align*}
- Change of variables: want to reduce to \(\mkern 1.5mu\overline{\mkern-1.5mu{\partial}\mkern-1.5mu}\mkern 1.5mu\tilde Y = 0\)
- Choose \(B \in \operatorname{GL}(1, {\mathbb{C}})\) such that \(\mkern 1.5mu\overline{\mkern-1.5mu{\partial}\mkern-1.5mu}\mkern 1.5muB + SB = 0\)
- Set \(Y = B\tilde Y\); by the Leibniz rule, this reduces the previous equation to \begin{align*} \mkern 1.5mu\overline{\mkern-1.5mu{\partial}\mkern-1.5mu}\mkern 1.5mu\tilde Y = 0 \end{align*} (see the check after this list).
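A quick check of this reduction (using the Leibniz rule and the fact that the scalar matrices \(B\) and \(S\) commute with everything in sight):
\begin{align*} \mkern 1.5mu\overline{\mkern-1.5mu{\partial}\mkern-1.5mu}\mkern 1.5mu\qty(B\tilde Y) + S B \tilde Y = \qty(\mkern 1.5mu\overline{\mkern-1.5mu{\partial}\mkern-1.5mu}\mkern 1.5muB + SB)\tilde Y + B\, \mkern 1.5mu\overline{\mkern-1.5mu{\partial}\mkern-1.5mu}\mkern 1.5mu\tilde Y = B\, \mkern 1.5mu\overline{\mkern-1.5mu{\partial}\mkern-1.5mu}\mkern 1.5mu\tilde Y ,\end{align*}
so since \(B\) is invertible, \(F(Y) = 0 \iff \mkern 1.5mu\overline{\mkern-1.5mu{\partial}\mkern-1.5mu}\mkern 1.5mu\tilde Y = 0\).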
Assertion 1, Step 1: Reduce to CR
Can choose (and then solve) \begin{align*} B = \begin{bmatrix} b(s) & 0 \\ 0 & b(s) \end{bmatrix} {\quad \operatorname{where} \quad} {\frac{\partial b}{\partial s}\,} = -a(s)b(s) \\ \\ \implies b(s) = \exp{\int_0^s -a(\sigma) ~d\sigma} \coloneqq\exp{-A(s)} .\end{align*}
Remarks:
- For some constants \(C_i\) (computed explicitly after these remarks), we have
\begin{align*} A(s) = \begin{cases} C_1 + a^- s, & s \leq -s_0 \\ C_2 + a^+ s, & s \geq s_0 \\ \end{cases} .\end{align*}
- The new \(\tilde Y\) satisfies CR, is continuous and \(L^1_{\text{loc}}\), so elliptic regularity \(\implies C^\infty\).
- The real/imaginary parts of \(\tilde Y\) are \(C^\infty\) and harmonic.
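The constants can be computed directly (shown for \(s \geq s_0\); the case \(s \leq -s_0\) is identical):
\begin{align*} A(s) = \int_0^{s_0} a(\sigma)\,d\sigma + \int_{s_0}^{s} a^+ \,d\sigma = \underbrace{\int_0^{s_0} a(\sigma)\,d\sigma - a^+ s_0}_{C_2} + a^+ s .\end{align*}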
Assertion 1, Step 2: Solve CR
- Identify \(z = s+it \in {\mathbb{R}}\times S^1\) with \(u = e^{2\pi z} \in {\mathbb{C}}\setminus\left\{{0}\right\}\)
- Apply Laurent's theorem to \(\tilde Y(u)\) on \({\mathbb{C}}\setminus\left\{{0}\right\}\) to obtain an expansion of \(\tilde Y\) in \(z\).
- Deduce that the solutions of the system are given by \begin{align*} \tilde Y (u) =\sum_{\ell \in \mathbf{Z}} c_{\ell} u^\ell \implies \tilde{Y}(s+i t) =\sum_{\ell \in \mathbf{Z}} c_{\ell} e^{(s+i t) 2 \pi \ell} ,\end{align*} where \(\left\{{c_\ell}\right\}_{\ell\in{\mathbb{Z}}} \subset {\mathbb{C}}\) and the series converges for all \(s, t\).
Assertion 1, Step 2: Solve CR
Use \(e^{s+it} = e^s\qty{\cos(t) + i\sin (t)}\) to write in real coordinates: \begin{align*} \tilde{Y}(s, t)=\sum_{\ell \in \mathbb{Z}} e^{2 \pi s \ell} \begin{bmatrix} \cos(2\pi\ell t) & -\sin(2\pi \ell t) \\ \sin(2\pi\ell t) & \cos(2\pi \ell t) \end{bmatrix} \begin{bmatrix} \alpha_\ell \\ \beta_\ell \end{bmatrix} .\end{align*}
Use \begin{align*} Y = B\tilde Y = \begin{bmatrix} e^{-A(s)} & 0 \\ 0 & e^{-A(s)} \end{bmatrix} \tilde Y \end{align*}
to write \begin{align*} Y(s, t)=\sum_{\ell \in \mathbb{Z}} e^{2 \pi s \ell} \begin{bmatrix} e^{-A(s)} & 0 \\ 0 & e^{-A(s)} \end{bmatrix} \begin{bmatrix} \cos(2\pi\ell t) & -\sin(2\pi \ell t) \\ \sin(2\pi\ell t) & \cos(2\pi \ell t) \end{bmatrix} \begin{bmatrix} \alpha_\ell \\ \beta_\ell \end{bmatrix} .\end{align*}
For \(s\leq -s_0\) this yields, for some constants \(K, K'\): \begin{align*} Y(s, t) = \sum_{\ell\in {\mathbb{Z}}} e^{\qty{2\pi\ell - a^-}s} \begin{bmatrix} e^K \qty{\alpha_\ell \cos(2\pi\ell t) - \beta_\ell \sin(2\pi\ell t) } \\ e^{K'} \qty{ \alpha_\ell \sin(2\pi\ell t) + \beta_\ell \cos(2\pi \ell t)} \end{bmatrix} .\end{align*}
Condition on \(L^p\) Solutions
For \(s\leq -s_0\) we had \begin{align*} Y(s, t) = \sum_{\ell\in {\mathbb{Z}}} e^{\qty{2\pi\ell - a^-}s} \begin{bmatrix} e^K \qty{\alpha_\ell \cos(2\pi\ell t) - \beta_\ell \sin(2\pi\ell t) } \\ e^{K'} \qty{ \alpha_\ell \sin(2\pi\ell t) + \beta_\ell \cos(2\pi \ell t)} \end{bmatrix} \end{align*}
and similarly for \(s\geq s_0\), for some constants \(C, C'\) we have: \begin{align*} Y(s, t) = \sum_{\ell\in {\mathbb{Z}}} e^{\qty{2\pi\ell - a^+}s} \begin{bmatrix} e^C \qty{\alpha_\ell \cos(2\pi\ell t) - \beta_\ell \sin(2\pi\ell t) } \\ e^{C'} \qty{ \alpha_\ell \sin(2\pi\ell t) + \beta_\ell \cos(2\pi \ell t)} \end{bmatrix} .\end{align*}
Then \begin{align*} Y\in L^p \iff \text{each exponential term with a nonzero coefficient decays as } s \to \pm\infty .\end{align*}
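Concretely, for a single term whose coefficients are not both zero (a quick check of the decay conditions):
\begin{align*} s \leq -s_0: \quad e^{\qty{2\pi\ell - a^-}s} \overset{s\to-\infty}\longrightarrow 0 \iff 2\pi\ell > a^- , \qquad s \geq s_0: \quad e^{\qty{2\pi\ell - a^+}s} \overset{s\to+\infty}\longrightarrow 0 \iff 2\pi\ell < a^+ ,\end{align*}
so a nonzero term survives in \(L^p\) exactly when \(2\pi\ell \in (a^-, a^+)\).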
Condition on \(L^p\) Solutions: Small Tails
For \(s \leq -s_0\): \begin{align*} Y(s, t) = \sum_{\ell\in {\mathbb{Z}}} e^{\qty{2\pi\ell - a^-}s} \begin{bmatrix} e^K \qty{\alpha_\ell \cos(2\pi\ell t) - \beta_\ell \sin(2\pi\ell t) } \\ e^{K'} \qty{ \alpha_\ell \sin(2\pi\ell t) + \beta_\ell \cos(2\pi \ell t)} \end{bmatrix} \end{align*}
- \(\ell \neq 0\): Need \(\alpha_\ell = \beta_\ell = 0\), or \(2\pi\ell > a^-\) (decay as \(s\to-\infty\)) and \(2\pi\ell < a^+\) (decay as \(s\to+\infty\), from the \(s \geq s_0\) expansion)
- \(\ell = 0\): Need both
  - \(\alpha_0 = 0\) or \(a^- < 0 < a^+\), and
  - \(\beta_0 = 0\) or \(a^- < 0 < a^+\).
Counting Solutions
\begin{align*} \begin{cases} \alpha_\ell = \beta_\ell = 0 \text{ or } 2\pi\ell \in (a^-, a^+) & \ell\neq 0 \\ \qty{\alpha_0 = 0 {\operatorname{ or }}0 \in (a^-, a^+)} {\operatorname{ and }}\qty{\beta_0 = 0 {\operatorname{ or }}0\in (a^-, a^+)} & \ell = 0 \end{cases} .\end{align*}
- Only finitely many \(\ell\) satisfy \(2\pi\ell \in (a^-, a^+)\), so only finitely many coefficients can be nonzero.
- These conditions are moreover sufficient for \(Y(s, t) \in W^{1, p}\).
Compute the dimension of the space of solutions: \begin{align*} \operatorname{dim} \operatorname{Ker} F &=2 \cdot {\sharp}\left\{{\ell \in \mathbb{Z}^{*} {~\mathrel{\Big\vert}~} 2\pi\ell \in (a^-, a^+) }\right\} + 2\cdot \mathbf{1}\left[0 \in (a^-, a^+)\right] \\ &=2 \cdot {\sharp}\left\{\ell \in \mathbb{Z} {~\mathrel{\Big\vert}~}2\pi\ell \in (a^-, a^+) \right\} .\end{align*}
Note: here \({\mathbb{Z}}^*\) denotes \({\mathbb{Z}}\setminus\left\{{0}\right\}\), so that the \(\ell = 0\) contribution is counted by the indicator term.
Counting Solutions
Use this to deduce \(\dim \ker F^*\):
- \(Y\in \ker F^* \iff Z(s, t) \coloneqq Y(-s, t)\) is in the kernel of the following operator (checked after this list): \begin{align*} \tilde F: W^{1, q}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2}\right) &\longrightarrow L^{q}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2}\right) \\ Z &\mapsto \frac{\partial Z}{\partial s}+J_{0} \frac{\partial Z}{\partial t}+S({\color{red}-s}) Z .\end{align*}
- Obtain \(\ker F^* \cong \ker \tilde F\).
- Formula for \(\dim \ker \tilde F\) is almost identical to the previous formula, just swapping \(a^-\) and \(a^+\), since \(a(-s)\) has limit \(a^+\) at \(-\infty\) and \(a^-\) at \(+\infty\).
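The promised check (using the chain rule, and that \(S(s)\) is diagonal so \(S(-s)^t = S(-s)\)): with \(Z(s,t) = Y(-s,t)\),
\begin{align*} \qty(\tilde F Z)(s, t) = -\frac{\partial Y}{\partial s}(-s, t) + J_0 \frac{\partial Y}{\partial t}(-s, t) + S(-s) Y(-s, t) = \qty(F^* Y)(-s, t) ,\end{align*}
so \(Z \in \ker \tilde F \iff Y \in \ker F^*\).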
Assertion 2
Assertion 2: Suppose \(\sup_{s\in {\mathbb{R}}} {\left\lVert {S(s)} \right\rVert} < 1\), then \begin{align*} \operatorname{dim} \operatorname{Ker} F &= {\sharp}\left\{i \in\{1,2\} {~\mathrel{\Big\vert}~}~a_{i}^{-}<0 < a_{i}^{+}\right\}\\ \operatorname{dim} \operatorname{Ker} F^{*} &={\sharp}\left\{i \in\{1,2\} {~\mathrel{\Big\vert}~}~ a_{i}^{+}<0 < a_{i}^{-} \right\} .\end{align*}
We use the following:
- Lemma 8.8.7: \begin{align*} \sup_{s\in {\mathbb{R}}} {\left\lVert { S(s) } \right\rVert} < 1 \implies \text{the elements in }\ker F,~ \ker F^* \text{ are independent of }t .\end{align*}
- Proof: in subsection 10.4.a.
Proof of Assertion 2
\begin{align*} F: W^{1, p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2}\right) &\longrightarrow L^{p}\left(\mathbb{R} \times S^{1} ; \mathbb{R}^{2}\right) \\ Y &\mapsto \frac{\partial Y}{\partial s}+J_{0} \frac{\partial Y}{\partial t}+S(s) Y .\end{align*}
- By Lemma 8.8.7, elements of \(\ker F\) are independent of \(t\), so \(F\mathbf{Y} = 0\) reduces to an ODE in \(s\): \begin{align*} \mathbf{Y} \in \ker F \implies {\frac{\partial }{\partial s}\,}\mathbf{Y} = \mathbf{a}(s)\mathbf{Y} \coloneqq \begin{bmatrix} -a_1(s) & 0 \\ 0 & -a_2(s) \end{bmatrix} \mathbf{Y} .\end{align*}
- Therefore we can solve to obtain \begin{align*} \mathbf{Y}(s) = \mathbf{c}_0 \exp{-\mathbf{A}(s)}{\quad \operatorname{where} \quad} \mathbf{A}(s) = \int_0^s -\mathbf{a}(\sigma) ~d\sigma .\end{align*}
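A quick check that this solves the ODE (valid because \(\mathbf{a}(s)\) is diagonal, so \(\mathbf{A}(s)\) and \(\mathbf{a}(s)\) commute):
\begin{align*} {\frac{\partial }{\partial s}\,}\qty( e^{-\mathbf{A}(s)}\mathbf{c}_0 ) = -\mathbf{A}'(s)\, e^{-\mathbf{A}(s)}\mathbf{c}_0 = \mathbf{a}(s)\, e^{-\mathbf{A}(s)}\mathbf{c}_0 {\quad \operatorname{where} \quad} \mathbf{A}'(s) = -\mathbf{a}(s) .\end{align*}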
Proof of Assertion 2
- Explicitly in components: \begin{align*} \begin{cases} {\frac{\partial Y_1}{\partial s}\,} &= -a_1(s) Y_1 \\ {\frac{\partial Y_2}{\partial s}\,} &= -a_2(s) Y_2 \\ \end{cases} \quad \implies \quad Y_i(s) = c_i e^{-A_i(s)}, \quad A_i(s) = \int_0^s a_i(\sigma) ~d\sigma .\end{align*}
- As before, for some constants \(C_{j, i}\), \begin{align*} A_i(s) = \begin{cases} C_{1, i} + a_i^-\cdot s & s \leq -s_0 \\ C_{2, i} + a_i^+\cdot s & s \geq s_0 \\ \end{cases} .\end{align*}
- Thus, for \(c_i \neq 0\), \begin{align*} Y_i \in W^{1, p} \iff 0 \in (a_i^-, a_i^+) ,\end{align*}
establishing
\begin{align*} \dim \ker F = {\sharp}\left\{i \in\{1,2\} {~\mathrel{\Big\vert}~}0 \in (a_i^-, a_i^+) \right\} .\end{align*}
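To see the criterion concretely for \(c_i \neq 0\) (using the piecewise-affine form of \(A_i\) above):
\begin{align*} Y_i(s) = c_i e^{-A_i(s)} = \begin{cases} c_i\, e^{-C_{1,i}}\, e^{-a_i^- s} & s \leq -s_0 \\ c_i\, e^{-C_{2,i}}\, e^{-a_i^+ s} & s \geq s_0 \end{cases} ,\end{align*}
which decays at both ends exactly when \(a_i^- < 0 < a_i^+\). By Lemma 8.8.7 the elements of \(\ker F^*\) are also independent of \(t\), so \(F^*Z = 0\) becomes \({\frac{\partial }{\partial s}\,}Z = S(s)Z\), and the same argument gives the count with \(a_i^-\) and \(a_i^+\) swapped.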
8.8.3: \(\operatorname{Ind}(L_1) = k^- - k^+\)
Statement and Outline
Statement: let \(k^\pm \coloneqq \mu(R^\pm)\); then \(\operatorname{Ind}(L_1) = k^- - k^+\).
- Consider four cases, depending on the parities of \(k^\pm - n\):
  - \(k^- \equiv k^+ \equiv n \operatorname{mod}2\)
  - \(k^- \equiv n,\, k^+ \equiv n-1 \operatorname{mod}2\)
  - \(k^- \equiv n-1,\, k^+ \equiv n \operatorname{mod}2\)
  - \(k^- \equiv k^+ \equiv n-1 \operatorname{mod}2\)
- Show all four lead to \(\operatorname{Ind}(L_1) = k^- - k^+\).
\begin{center} \includegraphics[width = 0.3\textwidth]{figures/image_2020-05-27-22-54-44.png} \end{center}
Case 1: \(k^+ \equiv k^- \equiv n \operatorname{mod}2\)
\begin{align*} S_{k^-} & = \begin{bmatrix} -\pi & & & & & & & \\ & -\pi & & & & & & \\ & & \ddots & & & & & \\ & & & & -\pi & & & \\ & & & & & -\pi & & \\ & & & & & & (n-1-k^-)\pi & \\ & & & & & & & (n-1-k^-)\pi \\ \end{bmatrix} \\ S_{k^+} & = \begin{bmatrix} -\pi & & & & & & & \\ & -\pi & & & & & & \\ & & \ddots & & & & & \\ & & & & -\pi & & & \\ & & & & & -\pi & & \\ & & & & & & (n-1-{\color{blue}k^+})\pi & \\ & & & & & & & (n-1-{\color{blue}k^+})\pi \\ \end{bmatrix} .\end{align*}
Case 1: \(k^- \equiv k^+ \equiv n \operatorname{mod}2\)
- In each \(2\times 2\) block, take \(a_1(s) = a_2(s)\), so \(a^\pm \coloneqq a_1^\pm = a_2^\pm\)
- Apply Assertion 1 of the lemma blockwise to obtain
\begin{align*} \dim \ker L_1 &= 2\cdot {\sharp}\left\{{\ell \in {\mathbb{Z}}{~\mathrel{\Big\vert}~}2\ell \in (n-1-k^-, n-1-k^+)}\right\} \\ &= \begin{cases} k^- - k^+ & k^- > k^+ \\ 0 & \text{else} \end{cases} \\ \\ \dim \ker L_1^* &= 2\cdot {\sharp}\left\{{ \ell \in {\mathbb{Z}}{~\mathrel{\Big\vert}~}2\ell \in (k^- - n + 1, k^+ - n + 1)}\right\} \\ &= \begin{cases} k^+ - k^- & k^+ > k^- \\ 0 & \text{otherwise} \end{cases} \\ \\ \implies \operatorname{Ind}(L_1) &= \dim \ker L_1 - \dim \ker L_1^* = k^- - k^+ \text{ in either case} .\end{align*}
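A concrete instance (the values \(n = 3\), \(k^- = 3\), \(k^+ = 1\) are chosen purely for illustration): the first two blocks have \(a^- = a^+ = -\pi\) and contribute nothing, while the last block has \(a^- = (n-1-k^-)\pi = -\pi\) and \(a^+ = (n-1-k^+)\pi = \pi\), so
\begin{align*} \dim \ker L_1 = 2\cdot {\sharp}\left\{{\ell \in {\mathbb{Z}}{~\mathrel{\Big\vert}~}2\pi\ell \in (-\pi, \pi)}\right\} = 2, \qquad \dim \ker L_1^* = 0 \implies \operatorname{Ind}(L_1) = 2 = k^- - k^+ .\end{align*}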
Case 2: \(k^+ \not\equiv k^- \equiv n \operatorname{mod}2\)
\begin{align*} S_{k^-} & = \begin{bmatrix} -\pi & & & & & & & \\ & -\pi & & & & & & \\ & & \ddots & & & & & \\ & & & & -{\color{red}{\varepsilon}}\pi & & & \\ & & & & & -{\color{red}{\varepsilon}}\pi & & \\ & & & & & & (n-1-{\color{red}k^-})\pi & \\ & & & & & & & (n-1-{\color{red}k^-})\pi \\ \end{bmatrix} \\ S_{k^+} & = \begin{bmatrix} -\pi & & & & & & & \\ & -\pi & & & & & & \\ & & \ddots & & & & & \\ & & & & {\color{red}{\varepsilon}} & & & \\ & & & & & -{\color{red}{\varepsilon}} & & \\ & & & & & & (n-{\color{red}2}-k^+)\pi & \\ & & & & & & & (n-{\color{red}2}-k^+)\pi \\ \end{bmatrix} .\end{align*}
Case 2: \(k^+ \not\equiv k^- \equiv n \operatorname{mod}2\)
- Take \(a_1(s) = a_2(s)\) in every block except the \((n-1)\)st, where we can assume \(\sup_{s\in {\mathbb{R}}} {\left\lVert {S(s)} \right\rVert} < 1\).
- Assertion 1 applies to the scalar blocks and Assertion 2 to the \((n-1)\)st block, and we get
\begin{align*} \dim \ker L_1 &= 2\cdot {\sharp}\left\{{\ell \in {\mathbb{Z}}{~\mathrel{\Big\vert}~}2\ell \in (n-1-k^-, n-2-k^+)}\right\} + 1 \\ &= \begin{cases} \qty{k^- - k^+ - 1} + 1 & k^- > k^+ \\ 1 & \text{otherwise} \end{cases} \\ \\ \dim \ker L_1^* &= 2\cdot {\sharp}\left\{{\ell \in {\mathbb{Z}}{~\mathrel{\Big\vert}~}2\ell \in (k^- - n + 1, k^+ - n + 2)}\right\} \\ &= \begin{cases} k^+ - k^- + 1, & k^+ > k^- \\ 0 & \text{otherwise} \end{cases} \\ \implies \operatorname{Ind}(L_1) &= \dim \ker L_1 - \dim \ker L_1^* = k^- - k^+ \text{ in either case} .\end{align*}
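Again a concrete instance (the values \(n = 2\), \(k^- = 2\), \(k^+ = 1\) are chosen purely for illustration): the \((n-1)\)st block flips \(-{\varepsilon}\pi \to {\varepsilon}\) in its first entry only, and the last block has \(a^- = a^+ = -\pi\), so
\begin{align*} \dim \ker L_1 = 0 + 1 = 1, \qquad \dim \ker L_1^* = 0 \implies \operatorname{Ind}(L_1) = 1 = k^- - k^+ .\end{align*}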
The other 2 cases involve different matrices \(S_{k^\pm}\), but proceed similarly.