AN EFFICIENT METHOD FOR NONLINEAR DYNAMIC ANALYSIS OF 3D SPACE STRUCTURES
Prof. Dr. Hashamdar
SCRUTINIZATION OF TECHNIQUES FOR OPTIMIZING FUNCTIONS OF SEVERAL VARIABLES
where V is a descent direction vector,
S is the steplength, and
K is the iteration number.
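For reference, these quantities enter the general descent iteration, which in its conventional form is assumed here to read

X_{K+1} = X_K + S_K V_K

where X_K denotes the current approximation to the minimum point.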
The computational effort involved in evaluating the steplength exactly is high, and it may sometimes prove more advantageous to calculate S only at selected points, or to set bounds on its value, rather than to determine it precisely. The calculation of the steplength depends on the method of minimization, and it represents a compromise between the number of iterations, the computational effort involved in each iteration, and the obtainable accuracy. The term steplength implies that the descent vector V is normalized, although this is not explicitly required in the resulting algorithms.
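As an illustration of an inexact steplength rule of this kind, the short Python sketch below implements an Armijo (backtracking) line search; the routine name, the quadratic test function, and the parameter values are illustrative assumptions and are not taken from the formulation above.

import numpy as np

def backtracking_steplength(f, grad, x, v, s0=1.0, rho=0.5, c=1e-4):
    """Inexact steplength S along the descent direction v (Armijo backtracking).

    Rather than minimizing f(x + s*v) exactly, the steplength is only required
    to give a sufficient decrease, which keeps the cost per iteration low at
    the expense of some accuracy.
    """
    s = s0
    fx = f(x)
    slope = grad(x) @ v              # directional derivative; negative for a descent direction
    while f(x + s * v) > fx + c * s * slope:
        s *= rho                     # shrink the steplength until sufficient decrease holds
    return s

# Illustrative use on an assumed quadratic test function F(X) = 0.5*X'AX - b'X
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x = np.zeros(2)
v = -grad(x)                         # steepest-descent direction used as the descent vector V
s = backtracking_steplength(f, grad, x, v)
print(s, f(x + s * v))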
Descent methods can be classified according to the way in which the descent direction V is found. The descent direction can be calculated from values of the function alone, from values of the function together with its first partial derivatives, or with the additional information gained from the second partial derivatives of the function. In general, methods using the second partial derivatives require fewer iterations than those relying on the first derivatives alone, but they clearly involve more computation per iteration. Minimization methods can be classified further according to whether the information gained in previous iterations is used to calculate the next descent direction. A brief outline of the three major classes of methods appears below.
Direct search methods (C0 methods) rely only on evaluation of F(X) at a sequence of points X1, X2, … in order to reach the minimum point X. These methods are normally used when the function f is not differentiable, is subject to random error, or has discontinuous derivatives.
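A minimal sketch of one such C0 scheme, a simple compass (coordinate) search that uses only values of F(X), is given below; the particular step-halving rule, tolerances, and non-differentiable test function are illustrative assumptions.

import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Direct (C0) search: probe +/- each coordinate direction using only
    function values, accept any improving point, otherwise halve the step."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:                  # accept any improving probe
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5                      # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

# Illustrative use on a non-smooth function, where derivative-based methods are unsuitable
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
print(compass_search(f, [0.0, 0.0]))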
First order methods (C1 methods) make use of the first partial derivatives of the function f in calculating the descent vector. The existence and continuity of the first partial derivatives of f, the gradient g, are essential for this class of methods. Examples of such methods are the method of steepest descent, the method of conjugate gradients, and the method of Fletcher-Reeves.
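The Fletcher-Reeves method named above can be sketched in Python as follows; the backtracking steplength rule, the safeguarding restart, and the quadratic test problem are illustrative assumptions rather than details taken from the text.

import numpy as np

def fletcher_reeves(f, grad, x0, tol=1e-8, max_iter=200):
    """C1 method: conjugate gradients with the Fletcher-Reeves update
    beta = (g_new . g_new) / (g . g), using a backtracking steplength."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    v = -g                                   # first descent direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ v >= 0:                       # safeguard: restart if v is not a descent direction
            v = -g
        s, fx, slope = 1.0, f(x), g @ v      # simple backtracking (Armijo) steplength along v
        while f(x + s * v) > fx + 1e-4 * s * slope and s > 1e-12:
            s *= 0.5
        x = x + s * v
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        v = -g_new + beta * v                # next conjugate descent direction
        g = g_new
    return x

# Illustrative quadratic test problem (assumed, not from the text)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(fletcher_reeves(f, grad, np.zeros(2)))   # converges to the minimizer, the solution of A x = b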
Second order methods (C2 methods) require the second partial derivatives as well as the first derivatives of f. C2 methods are suitable for minimizing functions which can be differentiated twice and whose first and second derivatives are continuous. Since the second partial derivatives of a function of several variables form a matrix, these methods require considerable computer storage. The best-known example of this type of method is the Newton-Raphson method.
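A minimal Newton-Raphson sketch is given below: at each iterate the gradient and the full matrix of second partial derivatives are formed and a linear system is solved, which is where the extra storage and computation per iteration arise. The test function, starting point, and tolerance are illustrative assumptions.

import numpy as np

def newton_raphson(grad, hess, x0, tol=1e-10, max_iter=50):
    """C2 method: each iteration solves H(x) dx = -g(x), so both the gradient
    and the full n-by-n matrix of second partial derivatives must be formed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        dx = np.linalg.solve(hess(x), -g)    # Newton step from the second-order model
        x = x + dx
    return x

# Illustrative smooth test function (assumed): f(x, y) = (x - 1)^2 + 10*(y - x^2)^2
grad = lambda x: np.array([2.0 * (x[0] - 1.0) - 40.0 * x[0] * (x[1] - x[0] ** 2),
                           20.0 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array([[2.0 - 40.0 * (x[1] - 3.0 * x[0] ** 2), -40.0 * x[0]],
                           [-40.0 * x[0], 20.0]])
print(newton_raphson(grad, hess, np.array([0.5, 0.5])))   # approaches the minimum at (1, 1)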
THE OPTIMIZATION METHOD AND ITS MATHEMATICAL DEFINITION IN THE DYNAMIC EQUATION OF MOTION