# Računanje i analiza matrične funkcije predznaka (Computation and Analysis of the Matrix Sign Function)

Stopić, Petra (2016) Računanje i analiza matrične funkcije predznaka. Diploma thesis, Faculty of Science, Department of Mathematics.

Language: Croatian

The matrix sign function is defined in two ways; both definitions require the values of the scalar sign function on the spectrum of $A$. The first definition uses the Jordan canonical form, the second polynomial interpolation. Everything takes place in the complex plane, so for the sign function it matters whether the eigenvalues lie in the right or the left half-plane. The function $sign(A)$ has several useful properties: it is an involution, it is diagonalizable, and it commutes with $A$.

The Schur algorithm is based on the Schur decomposition: the problem reduces to computing $sign(T)$, where $T$ is the triangular factor of the decomposition. The complexity of the algorithm is $O(n^3)$; in total, $\frac{86}{3}n^3$ flops are needed to obtain the solution.

The next method is Newton's, which uses Newton's iterations. In that chapter, a theorem on the quadratic convergence of the Newton sign iterations is proven. The theorem also shows that the iterations converge more slowly when the spectral radius is much greater than 1, and likewise when eigenvalues of $A$ lie very close to the imaginary axis. The method requires $2in^3$ flops, where $i$ is the number of iterations, so its complexity is also $O(n^3)$.

An effective way to speed up the initial convergence is to scale the iterations, which leads to the scaled Newton iterations. They are very similar to the original ones: the only difference is that each iterate is multiplied by a positive real scalar, for which three choices exist: determinantal, spectral, and norm scaling. A theorem shows that the scaled Newton iterations converge to $sign(A)$; convergence is finite after $d+p-1$ steps, where $d$ is the number of distinct eigenvalues and $p$ is determined by the dimension of the largest Jordan block. The algorithm needs $2in^3$ flops for $i$ iterations.
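As an illustration of the scaled Newton iteration described above, here is a minimal NumPy sketch using the determinantal scaling $\mu_k = |\det X_k|^{-1/n}$; the function name, tolerance, and iteration cap are my own choices, not taken from the thesis:

```python
import numpy as np

def sign_newton(A, tol=1e-12, max_iter=100):
    """Scaled Newton iteration X_{k+1} = (mu*X_k + (mu*X_k)^{-1}) / 2
    with determinantal scaling mu = |det(X_k)|^(-1/n) (a sketch)."""
    n = A.shape[0]
    X = A.astype(float).copy()
    for _ in range(max_iter):
        # positive real scaling factor (determinantal choice)
        mu = abs(np.linalg.det(X)) ** (-1.0 / n)
        X_new = 0.5 * (mu * X + np.linalg.inv(mu * X))
        done = np.linalg.norm(X_new - X, 1) <= tol * np.linalg.norm(X_new, 1)
        X = X_new
        if done:
            break
    return X

# Triangular example with eigenvalues 3 and -2:
# sign(A) has diagonal +1, -1 and commutes with A.
A = np.array([[3.0, 1.0],
              [0.0, -2.0]])
S = sign_newton(A)
```

On this example the iterate converges to $S = \begin{pmatrix} 1 & 0.4 \\ 0 & -1 \end{pmatrix}$, which is an involution ($S^2 = I$) and commutes with $A$, illustrating the properties listed above.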
There is one more family of iterations, derived from the formula $sign(A) := A(A^2)^{-\frac{1}{2}}$: the Padé iterations. They are built on rational polynomial functions. The convergence theorem for the Padé iterations covers two cases, global and local convergence; in both cases the order of convergence is $l+m+1$, where $l$ and $m$ are the degrees of the polynomials used in the Padé approximation.

Numerical stability of the iterations is explained by a theorem stating that iterations which converge superlinearly to $sign(A)$ are stable. Moreover, the Fréchet derivative of the iteration function at $S$ is idempotent and equals the Fréchet derivative of the $sign$ function; both are given by $\frac{1}{2}(H-SHS)$, where $H$ is a small perturbation. From these results I conclude that, for a bounded condition number of the matrix $S := sign(A)$, the iteration is stable; this also holds when $H$ and $S$ commute. In connection with the iterations, stopping criteria are also analyzed, and a theorem giving bounds on the residual and the relative error is proven. Sensitivity and conditioning are explained through the relative condition number of the $sign$ function, and that part yields bounds for this condition number.
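A standard member of this Padé family is the Newton–Schulz iteration $X_{k+1} = \frac{1}{2}X_k(3I - X_k^2)$ (the case $l=1$, $m=0$, so order $l+m+1=2$). It is inverse-free but only locally convergent, roughly when $\|I - A^2\| < 1$. A minimal sketch, with the function name and tolerance being my own assumptions:

```python
import numpy as np

def sign_newton_schulz(A, tol=1e-12, max_iter=100):
    """Newton-Schulz iteration X_{k+1} = X_k (3I - X_k^2) / 2,
    the [1/0] Pade sign iteration; locally convergent (sketch)."""
    n = A.shape[0]
    I = np.eye(n)
    X = A.astype(float).copy()
    for _ in range(max_iter):
        X_new = 0.5 * X @ (3.0 * I - X @ X)
        if np.linalg.norm(X_new - X, 1) <= tol:
            return X_new
        X = X_new
    return X

# Eigenvalues +/-0.8 satisfy the local convergence condition,
# so the iterates converge to diag(1, -1).
A = np.diag([0.8, -0.8])
S = sign_newton_schulz(A)
```

Being multiplication-only, this variant trades the matrix inverse of Newton's method for the restricted convergence region, which is why it is typically used only once the iterates are already close to $sign(A)$.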