I have a 150x150 sparse matrix (~500 nonzero entries out of 22,500) representing a system of first-order, linear differential equations. I'm attempting to find the eigenvalues and eigenvectors of this matrix to construct a function that serves as the analytical solution to the system, so that I can just give it a time and it will give me values for each variable. I've used this method in the past for similar 40x40 matrices, and it's much faster (tens, in some cases hundreds of times) than _ivp(). It also makes post-model analysis much easier, since I can find maximum values and maximum rates of change using (), or evaluate my function at infinity to see where things settle if left long enough.

This time around, however, () doesn't seem to like my matrix and is giving me complex values, which I know are wrong because I'm modeling a physical system that can't have complex rates of growth or decay (or sinusoidal solutions), much less complex values for its variables. I believe this is a stiffness or floating-point rounding problem where the underlying LAPACK algorithm is unable to handle either the very small values (the smallest is ~3e-14, and most nonzero values are of similar scale) or the disparity between some values (the largest is ~4000, but values greater than 1 only show up a handful of times). My matrix is real, but unfortunately not symmetric, so () is not viable either.

I have seen suggestions for similar problems to use sympy to solve for the eigenvalues, but when it hadn't solved my matrix after 5 hours I figured it wasn't viable for my large system. I've also seen suggestions to use numpy.real_if_close() to remove the imaginary portions of the complex values, but I'm not sure this is a good solution either: several eigenvalues from () are 0, which is a sign of error to me, and almost all the real portions are of the same (exceedingly small) scale as the imaginary portions, which makes me question their validity as well.

I'm at the point where I may just run _ivp() for an arbitrarily long time (a few thousand hours), which will probably take a long time to compute, and then use _fit() to approximate the analytical solutions I want, since I have a good idea of their forms. This isn't ideal, as it makes my program much slower, and I'm not even sure it will work given the stiffness and rounding problems I've encountered with (). I suspect Radau or BDF would be able to navigate the stiffness, but not the rounding.

Anybody have any ideas? Any other algorithms for finding eigenvalues that could handle this? Can () work with numpy.float128 instead of numpy.float64, or would even that extra precision not help? Are there any libraries, methods, algorithms, or solutions for working with this many very small numbers? I'm open to changing languages if needed, and I'm happy to provide additional details upon request.

To summarize: I am looking for an algorithm to solve a large, solvable, linear IVP that can handle very small floating-point values. Solving for the eigenvalues and eigenvectors is impossible with (), as the returned values are complex and should not be; it does not support numpy.float128 either, and the matrix is not symmetric, so () won't work. Sympy could do it given an infinite amount of time, but after running it for 5 hours I gave up. _ivp() works with implicit methods (I have tried Radau and BDF), but the output is wildly wrong.
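For context, the eigendecomposition-based analytical solution described above can be sketched as follows. This is a minimal toy sketch, assuming scipy.linalg.eig as the eigensolver (the question elides the exact function); the matrix A and initial condition x0 are made-up stand-ins for the real 150x150 system:

```python
import numpy as np
from scipy.linalg import eig, expm

# Toy stand-ins for the real 150x150 system (assumed names, not from the question)
A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, -3.0]])
x0 = np.array([1.0, 2.0, 3.0])

# x'(t) = A x(t), x(0) = x0  =>  x(t) = V diag(exp(lam_i * t)) V^{-1} x0
lam, V = eig(A)             # eigenvalues and right eigenvectors (complex dtype)
c = np.linalg.solve(V, x0)  # coordinates of x0 in the eigenbasis

def x(t):
    # For a real, physically well-posed system the imaginary parts should
    # cancel; keep only the real part of the reconstructed state.
    return (V @ (c * np.exp(lam * t))).real

# Sanity checks against the matrix exponential
print(np.allclose(x(0.0), x0))                   # True
print(np.allclose(x(0.5), expm(0.5 * A) @ x0))   # True
```

Once x(t) is available in this closed form, evaluating at any time, at large t, or differentiating term by term is cheap, which is the speed and analysis advantage over the implicit time-steppers described above.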
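On the disparity between ~3e-14 and ~4000: one cheap experiment before reaching for extra precision is a diagonal balancing pass, which rescales rows and columns via a similarity transform and therefore preserves eigenvalues exactly in exact arithmetic. A hedged sketch using scipy.linalg.matrix_balance on a small made-up matrix with a similar dynamic range (not the actual system):

```python
import numpy as np
from scipy.linalg import eig, matrix_balance

# Made-up 3x3 with a dynamic range like the question's (~3e-14 up to ~4e3)
A = np.array([[4.0e3,   3.0e-14, 0.0],
              [0.0,    -5.0e-14, 1.0e-13],
              [1.0e-14, 0.0,    -2.0]])

# B = inv(T) @ A @ T with T diagonal (powers of 2), so B is a similarity
# transform of A and has identical eigenvalues
B, T = matrix_balance(A)

lam_A = np.sort_complex(eig(A)[0])
lam_B = np.sort_complex(eig(B)[0])
```

One caveat: LAPACK's general eigensolver (geev) already performs a balancing step internally, so if () is geev-based this alone may not change the result much; it is still worth a quick test, and matrix_balance makes the scaling explicit so you can inspect how extreme it is.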
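On the numpy.float128 question: LAPACK-backed eigensolvers run in double precision regardless of the input dtype, so extended precision has to come from a different library. One option (my suggestion, not something from the question) is mpmath, whose mp.eig implements an arbitrary-precision eigensolver for general, nonsymmetric real or complex matrices; at 150x150 it will be slow, but far faster than sympy's fully symbolic route. A sketch on a made-up 3x3 spanning the question's value range:

```python
from mpmath import mp

mp.dps = 50  # work with 50 significant digits

# Made-up 3x3 nonsymmetric matrix spanning the question's value range;
# string literals avoid any double-precision rounding on input
A = mp.matrix([['4.0e3',   '3.0e-14', '0.0'],
               ['0.0',    '-5.0e-14', '1.0e-13'],
               ['1.0e-14', '0.0',    '-2.0']])

E, ER = mp.eig(A)   # eigenvalues (list) and right eigenvectors (matrix)

# The sum of the eigenvalues must equal the trace, here to ~50 digits
trace = A[0, 0] + A[1, 1] + A[2, 2]
```

With enough working digits, genuinely real eigenvalues should come back with imaginary parts at roundoff level for the chosen precision, which gives a principled way to decide whether the complex parts from () are numerical noise or real.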