HEJ, HU ISSN 1418-7108
Manuscript no.: ANM-980724-A

  
Analysis of a special full two-grid operator

For the special case where the smoothing parameter is chosen as $\omega = 1$, the growth of $\mu$ can be proved.

Theorem 2   Consider model problem 1 and the full two-grid operator ${M}_I$. Suppose that bilinear interpolation and $\omega$-Jacobi smoothing with $\omega = 1$ are used. Then the spectral radius $\, \mu\,=\,\varrho \,(S_C^{-1} T_C)$ satisfies $\, \mu \, \ge \, O(h^{-1}) $ as $h \to 0$, i.e. it grows at least like $h^{-1}$.

Proof: For convenience we assume that $n \ge 8$ is a multiple of 4.

Let a parenthesized upper index of a matrix or vector denote the corresponding entry. With this notation we have

\begin{eqnarray*}
\delta_i & = & \max_{[l]} \varrho \left( D_{[l]}^{-1}
H_{[l]} \right) \\
& \ge & \frac{1}{ 4 \sqrt{s_1^2 + s_1} } \cdot H_{[1]}^{(1,1)}
\; \ge \; \frac{1}{ 4 \sqrt{s_1^2 + s_1} } \cdot
\sum_{k=n/4}^{n/2-1} G_{[k,1]}^{(1,1)}
\qquad .
\end{eqnarray*}



since $\, H_{[1]} \,$ is symmetric. Additionally $ \Lambda_{[k,1]} $, and thus $ G_{[k,1]} $ (and $ H_{[1]} $), are positive semidefinite, and therefore $\, G_{[k,1]}^{(1,1)} \ge 0 $.
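This non-negativity is easy to illustrate numerically: for any real matrix $B$, the product $B^T B$ is positive semidefinite, so all of its diagonal entries are non-negative. A small sketch with a generic matrix, not the paper's $G_{[k,1]}$:

```python
import numpy as np

# generic example: B^T B is positive semidefinite for any real B
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
G = B.T @ B

# every diagonal entry -- in particular the (1,1) entry -- is non-negative
first_entry = G[0, 0]
assert np.all(np.diag(G) >= 0)
```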

We restrict $k$ to the range $ n/4 \le k < n/2$ and derive a lower bound for $\, G_{[k,1]}^{(1,1)} \,$ for such $k$. For simplicity we omit the index $[k,1]$ of the matrices $G_{[k,1]} , \Lambda_{[k,1]} , \Theta_{[k,1]} $, $C_{[k,1]}$ and $\Gamma_{[k,1]}$. From (15) we obtain

\begin{eqnarray*}
G_{[k,1]} \, = \, G & = &
\Gamma^T \cdot C^T \, \Theta \, C^T \Theta \cdot \Lambda \cdot
\Theta \, C \, \Theta \, C \cdot \Gamma \\
\mbox{and} \qquad
G^{(1,1)} & = & e_1^T \cdot G \cdot e_1
\end{eqnarray*}



with $ e_1 := (1,0)^T \,$ denoting the first unit vector. Using trigonometric identities we conclude

\begin{eqnarray*}
\Gamma \cdot e_1 & = &
\left( - \frac{\displaystyle \mu_k (...
...a_{k',1}} \, , \, 0 \, , \,0
\right)^T \\
& =: & \eta \cdot d
\end{eqnarray*}



with the new notation

\begin{displaymath}
\eta^2 := 2h \cdot \sin^2 (k \pi h) = 8h \cdot s_k c_k
\qq...
...ystyle \lambda_{k',1}} \, , \, 0 \, , \,0
\right)^T
\qquad .
\end{displaymath}
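The identity $2h \cdot \sin^2 (k \pi h) = 8h \cdot s_k c_k$ follows from the double-angle formula, assuming $s_k = \sin^2 (k \pi h / 2)$ and $c_k = \cos^2 (k \pi h / 2)$ as suggested by the definition of $s_1$ used later; a numeric sanity sketch:

```python
import math

# check 2h*sin^2(k*pi*h) == 8h*s_k*c_k on the restricted range n/4 <= k < n/2
n = 32
h = 1.0 / n
for k in range(n // 4, n // 2):
    s_k = math.sin(k * math.pi * h / 2) ** 2   # assumed meaning of s_k
    c_k = math.cos(k * math.pi * h / 2) ** 2   # assumed meaning of c_k
    eta_sq = 2 * h * math.sin(k * math.pi * h) ** 2
    assert abs(eta_sq - 8 * h * s_k * c_k) < 1e-12
identity_holds = True
```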

This leads to

\begin{eqnarray*}
G^{(1,1)} & = & \eta^2 \, \cdot \,
d^T \, C^T \cdot \Theta \, C^T \Theta \cdot \Lambda \cdot
\Theta \, C \, \Theta \, C \cdot d \\
& = & \eta^2 \, \cdot \, b^T \cdot A^T \cdot \Lambda \cdot A
\cdot b
\end{eqnarray*}




\begin{displaymath}
\mbox{with} \qquad
A := \Theta \, C \, \Theta \, \in \mbox{$\mathbb{R}$}^{4 \times 4}
\qquad \mbox{and} \qquad b := C \, d \, \in \mbox{$\mathbb{R}$}^4
\qquad .
\end{displaymath}

All entries of the diagonal matrix $\Lambda$ are positive. This implies

\begin{eqnarray*}
G^{(1,1)} & = & \eta^2 \, \cdot \,
\sum_{j=1}^4 \Lambda^{(j,j)} \cdot
\left( \left( A \, b \right)^{(j)} \right)^2 \\
& \ge & \eta^2 \, \cdot \, \Lambda^{(1,1)} \cdot
\left( \left( A \, b \right)^{(1)} \right)^2
\qquad .
\end{eqnarray*}
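The estimate above only uses that dropping non-negative terms of a sum cannot increase it. A sketch with generic placeholder data; the values standing in for $\Lambda$ and $A \, b$ here are random, not the paper's:

```python
import numpy as np

# placeholder data: positive diagonal of Lambda and a stand-in for A @ b
rng = np.random.default_rng(1)
lam = rng.uniform(0.1, 2.0, size=4)
v = rng.standard_normal(4)

# a sum of non-negative terms dominates its first term
full_sum = float(np.sum(lam * v ** 2))
first_term = float(lam[0] * v[0] ** 2)
assert full_sum >= first_term
```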



The first entry of $A \cdot b $ can be written as

\begin{displaymath}
\left( A \, b \right)^{(1)} \, = \, (1,0,0,0) \cdot A \cdot b
\, = \, e_1^T \, A \cdot b
\end{displaymath}

with $\, e_1 := (1,0,0,0)^T \,$ now denoting the first unit vector in $\mathbb{R}^4$. We next investigate the vectors $ e_1^T \, A $ and $b$.


Let us start with $ e_1^T \, A = e_1^T \cdot \Theta \, C \, \Theta
=: \left( a_1 \, , \, a_2 \, , \, a_3 \, , \, a_4 \right) $. We perform one $\omega$-Jacobi smoothing step with $\omega = 1$. The corresponding matrix $ \Theta $ becomes

\begin{displaymath}
\Theta \, = \, I - \frac{\omega}{4} \Lambda
\, = \, \mbox{diag} \left\{ \, c_k - s_1 \, , \, s_k - s_1
\, , \, c_k - c_1 \, , \, s_k - c_1 \,
\right\} \qquad .
\end{displaymath}
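These diagonal entries can be checked numerically under the assumption that $\Lambda$ carries the standard five-point eigenvalues $\lambda_{k,l} = 4 (s_k + s_l)$ on the harmonic quadruple $(k,1)$, $(n-k,1)$, $(k,n-1)$, $(n-k,n-1)$, with $s_{n-k} = c_k$ and $s_{n-1} = c_1$; this eigenvalue formula is an assumption taken from the usual Fourier two-grid analysis, not restated in this section:

```python
import math

n, k = 32, 9
h = 1.0 / n
s = lambda j: math.sin(j * math.pi * h / 2) ** 2
c = lambda j: math.cos(j * math.pi * h / 2) ** 2

# assumed eigenvalues lambda_{k,l} = 4*(s_k + s_l) on the quadruple;
# note s_{n-k} = c_k and s_{n-1} = c_1
lam = [4 * (s(k) + s(1)), 4 * (c(k) + s(1)),
       4 * (s(k) + c(1)), 4 * (c(k) + c(1))]
theta = [1 - l / 4 for l in lam]          # omega = 1
expected = [c(k) - s(1), s(k) - s(1), c(k) - c(1), s(k) - c(1)]
assert all(abs(t - e) < 1e-12 for t, e in zip(theta, expected))
```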

In (9) the coarse grid correction matrix $\, C = I - R \cdot c \, c^T \Lambda \,$ has been derived. A cumbersome but straightforward calculation results in

\begin{displaymath}
\renewcommand{\arraystretch}{2} e_1^T \cdot \Theta \, C \,...
... s_1 \cdot (c_k + c_1) \cdot (c_1 - s_k)
\end{array} \right)^T
\end{displaymath}

Using $ 0 < s_1 \le s_k < 1/2 < c_k \le c_1 < 1$ we obtain the inequalities

\begin{eqnarray*}
c_k^2 c_1^2 \, \frac{ s_k + s_1 }{ s_k c_k + s_1 c_1}
& \le ...
...eft( \frac{\pi}{4} - \pi h \right)
\ge \frac{1}{4} s_k \qquad .
\end{eqnarray*}
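The chain $ 0 < s_1 \le s_k < 1/2 < c_k \le c_1 < 1$ itself can be verified numerically over the restricted range $n/4 \le k < n/2$, again assuming $s_k = \sin^2 (k \pi h / 2)$ and $c_k = \cos^2 (k \pi h / 2)$:

```python
import math

# verify 0 < s_1 <= s_k < 1/2 < c_k <= c_1 < 1 for n/4 <= k < n/2
for n in (8, 16, 64):
    h = 1.0 / n
    s1 = math.sin(math.pi * h / 2) ** 2
    c1 = math.cos(math.pi * h / 2) ** 2
    for k in range(n // 4, n // 2):
        s_k = math.sin(k * math.pi * h / 2) ** 2
        c_k = math.cos(k * math.pi * h / 2) ** 2
        assert 0 < s1 <= s_k < 0.5 < c_k <= c1 < 1
chain_holds = True
```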



Analogously, the following bounds for the remaining entries $ a_2, \ldots, a_4 $ are derived:

\begin{displaymath}
\renewcommand{\arraystretch}{1.5} \begin{array}{rcccl}
\f...
...e & a_3 & \le & 0 \\
0 & \le & a_4 & \le & 2 s_1
\end{array}\end{displaymath}


The vector $ b = C \, d =:
\left( b_1 \, , \, b_2 \, , \, b_3 \, , \, b_4 \right)^T $ is dealt with in a similar manner. From the expansion of $C$ and $d$ we conclude

\begin{displaymath}
\renewcommand{\arraystretch}{2} b \, = \, \frac{1}{4} \cdo...
...aystyle s_k c_k + s_1 c_1 }
\cdot s_k s_1
\end{array} \right)
\end{displaymath}

Similarly we obtain

\begin{displaymath}
\renewcommand{\arraystretch}{1.5} \begin{array}{rcccl}
\f...
...aystyle 1}{\displaystyle 64} & \le & b_4 & \le & 0
\end{array}\end{displaymath}

These estimates for $a_j $ and $b_j$ result in

\begin{eqnarray*}
\left( A \, b \right)^{(1)} & = & e_1^T \, A \cdot b
\, = \,...
... \cdot s_k^4
\qquad \qquad \mbox{if } k \ge n/4 \, , \, n \ge 8
\end{eqnarray*}



Now the matrix entry $ H_{[1]}^{(1,1)} $ can be bounded as follows:

\begin{eqnarray*}
H_{[1]}^{(1,1)} \, = \, H^{(1,1)}
& = & \sum_{[k]} G^{(1,1)}...
...\pi}{2} x )
\; > \; 0.0001
\qquad \mbox{for } n \ge 8 \qquad .
\end{eqnarray*}



Since $ s_1 = \sin^2 (\pi h /2 ) = O(h^2) $ we conclude

\begin{eqnarray*}
4 \sqrt{s_1^2 + s_1} & = & O ( h ) \\
\mbox{and } \qquad
\frac{1}{ 4 \sqrt{s_1^2 + s_1} } \cdot
H_{[1]}^{(1,1)}
& = & O ( h^{-1} ) \qquad .
\end{eqnarray*}
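Since $s_1 \approx (\pi h / 2)^2$ for small $h$, the factor $1 / (4 \sqrt{s_1^2 + s_1})$ should roughly double whenever $h$ is halved; a numeric sketch of this $O(h^{-1})$ growth:

```python
import math

def prefactor(h):
    s1 = math.sin(math.pi * h / 2) ** 2
    return 1.0 / (4.0 * math.sqrt(s1 ** 2 + s1))

# halving h should roughly double the prefactor
ratios = [prefactor(1.0 / (2 * n)) / prefactor(1.0 / n)
          for n in (8, 16, 32, 64)]
assert all(abs(r - 2.0) < 0.05 for r in ratios)
```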



Thus the desired spectral radius $\mu = 2 \delta_i $ grows (at least) like $O(h^{-1})$ as $h \to 0$.
