Eigenvalues for our matrices can be calculated using an external library called ARPACK and its parallel extension, PARPACK. PARPACK is a widely used open-source software package that is capable of returning a few eigenvalues of a given matrix. The algorithm it uses is described by the authors as a synthesis of the Arnoldi/Lanczos process with the Implicitly Shifted QR technique. This library is included with the Jetstream distribution and is compiled automatically by the make_jetstream script. Note that this software has a LAPACK dependency: the compiled LAPACK library must exist in your bin directory for ARPACK to compile properly.
Compile
The first thing you will need to do is compile ARPACK and PARPACK, which I have included in the repository. This only ever has to be done once. You can do this simply by typing "./make_jetstream" in the jetstream directory. However, if you don't want to compile all of jetstream, you can just go into jetstream/ARPACK and type "make all", then go back up to the jetstream level and type "make clean" followed by "make".
Use
To calculate the eigenvalues, the first thing to do is set opt_method='get_eigvals'. You can then choose parameters to set up your problem. The input parameters go in a new namelist, EIGV, which needs to be added at the end of the input file. See the Eigval input parameters main page for more information on the input parameters.
Here is an example of what part of your input file might look like:
&EIGV
eigval%nev = 10 !-- Number of eigenvalues to try to get. Increasing this increases the cost proportionally.
eigval%ncv = 22 !-- Size of the Arnoldi subspace, should be at least 2*eigval%nev+2
eigval%maxitr = 300
eigval%which = 'SM' ! 'SM', 'LM', 'SR', 'MS', all explained below
eigval%tol = 1.d-12
eigval%ilu_flev = 5
eigval%print_lev = 1 !-- Higher print levels print more to screen; most of the high-level printing is not very useful, though.
!-- The following two flags can be used to get the eigenvalues without the row and column scaling.
!-- This is supported by all eigenvalue calculations but needs to use the standard row and column scaling, not the special normalization method that is currently an option in the codes.
eigval%no_rscl = .true.
eigval%no_cscl = .true.
!-- Linear solver stuff, used with 'SM', 'SR', and 'MS'
eigval%s%matvec = 'jacbn' ! Can be any matvec, including 'approx', 'jacbn', 'frcht' and 'cmplx'. This defines the matrix implicitly.
eigval%s%prec = 'schur'
eigval%s%krylv_size = 60
eigval%s%krylv_max = 12345
eigval%s%gcrot_k = 1
eigval%s%rel_lin_tol = 1.d-14 ! Should be at least as small as eigval%tol
!-- Only used for 'MS'
eigval%sigma_r = 0.d0
!-- Only used for 'SR'
cayley%mode = 1
cayley%maxits = 10
cayley%r = 5
cayley%nnewev_min = 3
cayley%rho = 1.2
cayley%alpha1_max = 1.d100
cayley%tol = 1.d-10
cayley%rstrt = .false.
&END
By default, the eigenvalue solver will just initialize your matrices using freestream conditions, which is probably not very useful. You will probably want to do a flow solve first and then run the case as a restart:
OPT_METHOD = 'get_eigvals'
DIABLO% RESTART = .true., DIABLO% USE_FRSTRM = .false.
The output will mostly be in the screen file, but info from the linear solver is written to the results.solv file, and a results.gdat file will also be output containing the final eigenvalues (the same info is also in the screen file).
Theory
If you want to go through the theory in detail, the best place to start is the ARPACK User's Manual. In this section I will just outline the theory with enough information to make the input parameters a bit easier to understand.
The basic algorithm is based on the power method. If we define a sequence by

v_{k+1} = \frac{A v_k}{\| A v_k \|},

then the sequence converges to the eigenvector corresponding to the largest-in-magnitude eigenvalue of A. If the intermediate values of this sequence are retained, a Krylov subspace is produced:

\mathcal{K}_k(A, v_1) = \mathrm{Span}\{ v_1, A v_1, A^2 v_1, \dots, A^{k-1} v_1 \}.
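To make the idea concrete, here is a minimal numpy sketch of the power iteration (illustrative only, not Jetstream's implementation):

import numpy as np

def power_iteration(A, v1, num_iters=200):
    # Repeatedly apply A and renormalize; converges to the eigenvector
    # of the largest-magnitude eigenvalue when it is well separated.
    v = v1 / np.linalg.norm(v1)
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    # The Rayleigh quotient gives the corresponding eigenvalue estimate.
    return v @ (A @ v), v

rng = np.random.default_rng(0)
A = np.diag([5.0, 2.0, 1.0]) + 0.1 * rng.standard_normal((3, 3))
lam, v = power_iteration(A, np.ones(3))
print(lam)   # close to the eigenvalue of largest magnitude (about 5)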
The approximate eigenpairs are constructed from this subspace by imposing a Galerkin condition: a vector

x \in \mathcal{K}_k(A, v_1)

is called a Ritz vector with corresponding Ritz value \theta if the Galerkin condition

\langle w, A x - x \theta \rangle = 0 \quad \text{for all } w \in \mathcal{K}_k(A, v_1)

is satisfied. The Ritz values (and their corresponding Ritz vectors) are obtained as the eigenvalues and eigenvectors of H, and they approximate eigenpairs of A, where H is the upper Hessenberg matrix produced by the k-step Arnoldi factorization:

A V_k = V_k H_k + f_k e_k^T,

in which the columns of V_k form an orthonormal basis of \mathcal{K}_k(A, v_1).
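For illustration, a bare-bones Arnoldi factorization can be written in a few lines of numpy. ARPACK's implicitly restarted version is far more sophisticated, but the relationship between H, the Ritz values, and A is the same:

import numpy as np

def arnoldi(A, v1, k):
    # Build A V_k = V_k H_k + f_k e_k^T with orthonormal columns in V.
    n = len(v1)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]   # assumes no breakdown (norm > 0)
    return V[:, :k], H[:k, :k]

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
V, H = arnoldi(A, rng.standard_normal(50), k=20)
ritz = np.linalg.eigvals(H)             # Ritz values: eigenvalues of H
print(sorted(ritz, key=abs)[-3:])       # extreme eigenvalues of A converge first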
If instead of the largest eigenvalues we want eigenvalues near some complex value \sigma, we may instead solve the eigenvalue problem of the shift-and-invert spectral transformation:

(A - \sigma I)^{-1} x = \nu x, \quad \text{where} \quad \nu = \frac{1}{\lambda - \sigma}.
The eigenvalues of A can then be calculated directly:

\lambda_j = \sigma + 1 / \nu_j.
The case of sigma=0 is of special interest, since the Ritz values of the shift-and-invert system then correspond to the smallest eigenvalues of A. The ARPACK User's Manual suggests that shift-and-invert methods should be used whenever possible, especially when the eigenvalues are clustered. The tolerance used by the linear solver must be at least as stringent as the tolerance used in the eigenvalue calculation, so this is obviously quite expensive for our purposes. Since the matrix does not change throughout the iteration, it is efficient to perform an ILU factorization with as high a fill level as possible for preconditioning.
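A quick dense numpy check of the transformation (illustrative only): the eigenvalues of (A - sigma*I)^{-1} are exactly 1/(lambda - sigma), so the eigenvalues of A can be recovered from them:

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
sigma = 0.5

lam = np.sort_complex(np.linalg.eigvals(A))
nu = np.linalg.eigvals(np.linalg.inv(A - sigma * np.eye(6)))

# Each nu equals 1/(lambda - sigma); recover lambda and compare.
print(np.sort_complex(sigma + 1.0 / nu))
print(lam)   # the two lists agree to rounding error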
Largest Magnitude (LM) Eigenvalues
This is the easiest and cheapest calculation. Since no linear systems need to be solved, the solver input parameters are irrelevant. To calculate the largest magnitude eigenvalues, set eigval%which = 'LM'.
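For reference, scipy's eigs routine wraps the same ARPACK code, so the 'LM' mode can be illustrated with it. The mapping of k and ncv to eigval%nev and eigval%ncv below is an analogy, not Jetstream's interface:

import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import eigs

A = sparse_random(300, 300, density=0.02, random_state=0) + 3.0 * identity(300)

# which='LM' needs only matrix-vector products A @ v, so no
# linear solves are involved (cf. eigval%which = 'LM').
vals, vecs = eigs(A, k=6, which='LM', ncv=22)
print(vals)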
Calculating the Smallest Magnitude (SM) Eigenvalues
The Arnoldi/Lanczos process, like the power method underlying it, converges to the eigenvectors corresponding to the largest-magnitude eigenvalues. ARPACK also has a built-in mode for requesting the smallest eigenvalues, but since it does not have access to the matrix internally it cannot use a shift-and-invert method, and I don't know what it actually does; in any case, I have not been able to get that mode to converge.
The way the smallest eigenvalues are calculated in Jetstream is with a shift-and-invert spectral transformation with sigma=0. In other words, we apply ARPACK to A^{-1} instead of A to get the largest eigenvalues of A^{-1}, and then take reciprocals to get the smallest eigenvalues of A. The catch, of course, is that the matrix-vector products A^{-1}v are much more expensive to calculate than Av, since they require solving a linear system of equations. This system is solved using GCROT, and it is important to set the tolerance %s%rel_lin_tol to be at least as small as the eigenvalue tolerance %tol.
Since the largest eigenvalues of A^{-1} tend to be less clustered than the largest eigenvalues of A, the 'SM' method tends to converge in fewer outer iterations than the 'LM' case, though the iterations are much more expensive. Keep in mind also that each Lanczos vector requires a linear system to be solved at each outer iteration, greatly increasing the CPU time. For example, if eigval%ncv=22 and it takes 5 outer iterations to converge, that is a total of 22*5=110 GCROT solves.
To calculate the smallest magnitude eigenvalues, set eigval%which = 'SM'. This is the default.
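Here is a sketch of the same recipe using scipy's ARPACK wrapper, with gcrotmk (scipy's GCROT(m,k) implementation, assuming a recent scipy with the rtol argument) standing in for Jetstream's GCROT, and spilu standing in for the ILU preconditioner. This illustrates the structure of the 'SM' calculation, not Jetstream's actual code:

import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import LinearOperator, eigs, gcrotmk, spilu

A = (sparse_random(400, 400, density=0.02, random_state=3)
     + 5.0 * identity(400)).tocsc()

# The matrix never changes, so factor an ILU preconditioner once,
# with as high a fill level as affordable (cf. eigval%ilu_flev).
ilu = spilu(A, fill_factor=10.0)
M = LinearOperator(A.shape, matvec=ilu.solve)

def apply_Ainv(v):
    # One linear solve per Arnoldi vector; the inner tolerance must be
    # at least as small as the eigenvalue tolerance (cf. %s%rel_lin_tol).
    x, info = gcrotmk(A, v, M=M, rtol=1e-14)
    assert info == 0, "linear solve did not converge"
    return x

Ainv = LinearOperator(A.shape, matvec=apply_Ainv, dtype=A.dtype)

# Largest-magnitude eigenvalues nu of A^{-1} ...
nu, vecs = eigs(Ainv, k=6, which='LM', ncv=22, tol=1e-12)
print(1.0 / nu)   # ... are reciprocals of the smallest eigenvalues of A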
Manual Shift Invert (MS)
It is also possible to calculate the eigenvalues closest to a point sigma by calculating the eigenvalues of (A-sigma*I)^{-1}. If an eigenvalue of this system is represented by theta, then lambda=sigma+1/theta is an eigenvalue of A. Since the largest-magnitude eigenvalues of (A-sigma*I)^{-1} correspond to the eigenvalues of A nearest sigma, the Arnoldi/Lanczos iterations will converge first to eigenvalues near sigma. One application of this method is calculating the eigenvalues of near-singular matrices, where A itself may be too poorly conditioned to invert directly; in this case, use a small negative value for sigma.
To use manual shift, set eigval%which = 'MS'.
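A minimal scipy sketch of the same idea (scipy's ARPACK wrapper performs the shift-invert and the lambda = sigma + 1/theta recovery internally; the shift value here mirrors the eigval%sigma_r suggestion above):

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# A nearly singular test matrix: one diagonal entry is 1e-8.
A = diags(np.concatenate(([1e-8], np.linspace(1.0, 10.0, 99)))).tocsc()

# Shift slightly into the negative reals (cf. eigval%sigma_r = -1e-3);
# A - sigma*I is then much better conditioned than A itself.
vals, _ = eigs(A, k=4, sigma=-1e-3)
print(np.sort(vals.real))   # eigenvalues of A nearest sigma, including 1e-8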
Eigenvalues with Smallest Real Part (SR)
This is by far the most complicated and least reliable eigenvalue calculation, but it is important for stability analysis. The most reliable way to do this would be to use some combination of a Cayley transform and an imaginary shift to search along the imaginary axis for eigenvalues in the vicinity. However, performing an imaginary shift would result in a complex matrix, and we do not have the tools to solve the resulting linear system. Instead, we attempt to solve the problem using a Cayley transform approach directly.
The Cayley transform of A is defined:
C(A) = (A-a1*I)^{-1}*(A-a2*I), a1 < a2.
If lambda is an eigenvalue of A, then theta = (lambda-a2)/(lambda-a1) is an eigenvalue of C(A); equivalently, lambda = (a1*theta-a2)/(theta-1) is an eigenvalue of A.
Consider the interval [a1,a2]. It can be easily shown that the set of eigenvalues of A with real part less than the midpoint (a1+a2)/2 corresponds to the set of eigenvalues of C(A) with modulus greater than 1, and the set of eigenvalues of A with real part greater than (a1+a2)/2 corresponds to the set of eigenvalues of C(A) with modulus less than 1. So, in principle, it seems that if we have some estimate of the eigenvalue with smallest real part, obtained for example with the 'SM' method, then we can set (a1 + a2)/2 slightly larger than this value and the Cayley transform method should converge to any eigenvalues with smaller real part.
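This midpoint/modulus correspondence is easy to verify numerically; the following numpy sketch (illustrative only) checks it on a random matrix:

import numpy as np

a1, a2 = -2.0, 4.0                       # Cayley interval, a1 < a2
mid = 0.5 * (a1 + a2)

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 40))
lam = np.linalg.eigvals(A)

theta = (lam - a2) / (lam - a1)          # eigenvalues of C(A)

# Eigenvalues of A left of the midpoint land outside the unit circle
# in Cayley transformed space; those right of it land inside.
assert np.all((lam.real < mid) == (np.abs(theta) > 1.0))
print(np.abs(theta).min(), np.abs(theta).max())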
In practice, however, it is more complicated than this. The problem is that if the radius (a2-a1)/2 is too small, then any eigenvalues of A with real part less than (a1+a2)/2 will have modulus greater than 1 but close to 1 in Cayley transformed space. This is particularly true of eigenvalues with large imaginary part, which are also the eigenvalues least likely to show up in an 'SM' calculation. The largest eigenvalues of A will have modulus less than 1 in Cayley transformed space, but also near 1. The problem is that there are many such eigenvalues, which can result in heavy clustering of eigenvalues near the perimeter of the unit circle in Cayley transformed space, making it difficult for the Lanczos iterations to converge to the eigenvalues we want. Here are some things that will improve the success rate of converging to the eigenvalues with smallest real part:
1. Eigenvalues will have greater modulus in Cayley transformed space if the real part is close to a1.
2. Increasing the radius (a2-a1)/2 will increase the modulus in Cayley transformed space of eigenvalues with large imaginary part.
The first point above has limited effectiveness because eigenvalues with large imaginary part can still have modulus very near 1, even in the limiting case where the real part of the eigenvalue of A is equal to a1. To ensure convergence, it is important to use a sufficiently large Cayley radius; the disadvantage is that this requires more computational effort. Though we know the eigenvalues of A to the left of (a1+a2)/2 have modulus greater than 1 in Cayley transformed space, there is no direct correlation between the size of the real part of the eigenvalues of A and the modulus of the eigenvalues of C(A). In other words, the only way to be sure that the eigenvalue with smallest real part has been found is to find all eigenvalues of C(A) with modulus greater than 1, and the number of eigenvalues with this property increases as the Cayley radius increases. The Cayley transform method also outputs the eigenvalues in Cayley transformed space, so be sure to check that at least one eigenvalue with modulus less than 1 has been converged to. Of course, this is still not a guarantee that the eigenvalue with smallest real part has been found, since, as mentioned earlier, eigenvalues can still be missed if they have large imaginary part or if their real part is too far from a1.
The best way to get the eigenvalues with smallest real part is to first get at least one eigenvalue using the eigval%which = 'SM' option. Then cut and paste the converged eigenvalues from the results.his file into a file called 'eigval.erstrt'. Set cayley%rstrt=.true. and eigval%which='SR' to start the Cayley transform iterations. There is some logic built in to automatically set up the Cayley radius and perform Cayley transforms in an iterative loop. If you do not initialize with 'SM' and do not use the restarting option, the code will just initialize with 'SM' anyway; I recommend doing this manually because it is a long process and you may want to adjust some parameters based on the initial eigenvalue calculation.
Checking the results
The eigenvalues are output in the screen and results.his file. Three columns are output: the first two are the real and imaginary components of each eigenvalue, and the last is the final residual ||Av-lambda*v||/||v||. It is very important to check this last column. Even though the codes are used to calculate eigenvalues, the algorithm actually converges to eigenvectors, which happen to correspond to the largest-magnitude eigenvalues. When the eigenvectors are highly non-normal (which is typical of all of our matrices), the eigenvalue calculation is highly sensitive to any source of numerical error, because nearly parallel eigenvectors can have very different eigenvalues. As a result, unless the eigenvectors are solved to machine precision, the eigenvalues extracted in post-processing may be very inaccurate. The eigenvalues are extracted by a PARPACK routine that computes v' = A*v for each eigenvector and then probably uses some least-squares fit to approximate the eigenvalue by minimizing ||lambda*v-v'||. Thus the accuracy of the eigenvalues is related to the accuracy of the eigenvectors, but they clearly do not have the same accuracy.
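This residual is straightforward to recompute yourself as a sanity check; here is a minimal numpy sketch (illustrative, for eigenpairs obtained from any source):

import numpy as np

def eig_residuals(A, vals, vecs):
    # Residual ||A v - lambda v|| / ||v|| for each reported eigenpair;
    # eigenvectors are assumed stored as columns of vecs.
    res = np.empty(len(vals))
    for j, (lam, v) in enumerate(zip(vals, vecs.T)):
        res[j] = np.linalg.norm(A @ v - lam * v) / np.linalg.norm(v)
    return res

# Example usage with any (vals, vecs) pair, e.g. from an eigs call:
# print(eig_residuals(A, vals, vecs))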
If the eigenvalues are not being produced accurately and you are using a tight linear solver tolerance such as 1E-15, check whether the final matvec check at the end of each linear solve in the results.solv file is consistent with this tolerance. If it is not, then the linear system is probably too ill-conditioned to give an accurate solution, which is typical of eigenvalue problems, since ironically we are often interested in the eigenvalues of ill-conditioned systems. One fix when using the 'SM' method is to switch to the manual shift-invert method 'MS' with a near-zero negative real shift, for example eigval%sigma_r=-1E-3. This results in a linear system that is easier to solve, but it stiffens the eigenvalue problem because it has a clustering effect on the eigenvalues of the shift-inverted matrix, resulting in more matvecs for the eigenvalue solver. However, it is sometimes necessary, and I have had some success using this method to get eigenvalues for stiff systems.
If the same problem persists with the 'SR' method, try reducing a1 by setting cayley%alpha1_max. (Note that a1 can be negative.) This will of course increase the Cayley radius, and you may need to increase eigval%nev to compensate.
Convergence Issues
Sometimes the solver will seem to take forever to converge. This is mainly an issue for the 'SM' and 'SR' cases, since these are far more expensive than the 'LM' case. Convergence of the Arnoldi/Lanczos process is correlated more than anything else with the number of requested eigenvalues (eigval%nev) and the size of the Arnoldi subspace (eigval%ncv). Changing either of these values by 1 can have a significant impact on convergence, depending on the existence of complex-conjugate pairs or how the eigenvalues are clustered. Of course, there is no way to know these things beforehand, so it is basically like throwing darts at a cat in the dark (a black cat, no less): you can try changing things randomly and hope for better results.