Description: Principles of the Jacobi, Gauss-Seidel, and SOR iteration methods. (A minimal SOR sketch follows this entry.) Platform: |
Size: 2048 |
Author:张名 |
Hits:
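The entry above names the methods without showing them. As a point of reference, a minimal MATLAB sketch of the SOR iteration for A*x = b is given below; it is illustrative only and is not the uploaded code (the test matrix, tolerance, and relaxation factor are made up). Setting omega = 1 reduces the update to Gauss-Seidel.

% Minimal SOR sketch for A*x = b (illustrative; not the uploaded code).
A = [4 -1 0; -1 4 -1; 0 -1 4];    % example diagonally dominant matrix
b = [2; 4; 10];
omega = 1.25;                     % relaxation factor, 0 < omega < 2 (omega = 1 is Gauss-Seidel)
x = zeros(size(b));
tol = 1e-8;  maxit = 100;
for k = 1:maxit
    xold = x;
    for i = 1:length(b)
        sigma = A(i,:)*x - A(i,i)*x(i);                 % uses entries already updated this sweep
        x(i) = (1 - omega)*x(i) + omega*(b(i) - sigma)/A(i,i);
    end
    if norm(x - xold, inf) < tol, break; end
end
disp(x)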
Description: Iterative methods are another class of methods for solving systems of linear algebraic equations, and they are especially well suited to large sparse systems. The basic idea is to design an iteration scheme for the problem in advance, producing a sequence of approximate solutions; when that sequence converges to the exact solution, an iterate meeting the required accuracy is taken as the approximate solution. Iterative methods have the advantages that the original coefficient matrix never changes, the algorithm is simple, programming is straightforward, and relatively little storage is required. (A minimal Jacobi sketch follows this entry.) Platform: |
Size: 1024 |
Author:江理彬 |
Hits:
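To make the basic idea above concrete, here is a minimal MATLAB sketch of the Jacobi iteration; the example system is illustrative and not taken from the uploaded code. Note that the coefficient matrix A is never modified and each sweep needs only the previous iterate.

% Minimal Jacobi sketch for A*x = b (illustrative example system).
A = [10 -1 2; -1 11 -1; 2 -1 10];
b = [6; 25; -11];
D = diag(diag(A));                 % diagonal part, fixed throughout
R = A - D;                         % off-diagonal part
x = zeros(size(b));
tol = 1e-8;  maxit = 200;
for k = 1:maxit
    xnew = D \ (b - R*x);          % every component uses only the old iterate
    if norm(xnew - x, inf) < tol, x = xnew; break; end
    x = xnew;
end
disp(x)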
Description: A Jacobi iteration computed in parallel, written to illustrate symmetric (paired) message passing in MPI. After repeated iterations the result is a 16*16 square matrix whose elements are all 8. Platform: |
Size: 1024 |
Author:邓超 |
Hits:
Description: Uses the Jacobi method and the Gauss-Seidel iterative method to solve a given linear system. The required tolerance is 0.00001 and the maximum number of iterations is N = 25. The two methods are compared by number of iterations and convergence behaviour. (A comparison sketch follows this entry.)
Platform: |
Size: 78848 |
Author:吕鹏 |
Hits:
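The system from the assignment is not reproduced in this entry, so the sketch below runs both methods on an illustrative diagonally dominant system with the stated tolerance 0.00001 and at most N = 25 iterations, counting iterations until convergence; it is not the uploaded code.

% Illustrative comparison of Jacobi and Gauss-Seidel (example system, not the assignment's).
A = [4 -1 0 0; -1 4 -1 0; 0 -1 4 -1; 0 0 -1 4];
b = [1; 2; 0; 1];
tol = 1e-5;  N = 25;  n = length(b);
D = diag(diag(A));  R = A - D;
xj = zeros(n,1);  xg = zeros(n,1);
for kj = 1:N                                   % Jacobi
    xnew = D \ (b - R*xj);
    if norm(xnew - xj, inf) < tol, xj = xnew; break; end
    xj = xnew;
end
for kg = 1:N                                   % Gauss-Seidel
    xold = xg;
    for i = 1:n
        xg(i) = (b(i) - A(i,:)*xg + A(i,i)*xg(i)) / A(i,i);
    end
    if norm(xg - xold, inf) < tol, break; end
end
fprintf('Jacobi: %d iterations, Gauss-Seidel: %d iterations\n', kj, kg)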
Description: The MDP toolbox provides functions for solving discrete-time Markov Decision Processes: finite horizon, value iteration, policy iteration, and linear programming algorithms with some variants.
The functions (m-functions) were developed with MATLAB v6.0 (one of the functions requires the MathWorks Optimization Toolbox) by the decision team of the Biometry and Artificial Intelligence Unit of INRA Toulouse (France).
Version 2.0 (February 2005) handles sparse matrices and contains an example. (A small value-iteration sketch follows this entry.) Platform: |
Size: 2437120 |
Author:劉德華 |
Hits:
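The toolbox API itself is not quoted in this entry, so the sketch below is a self-contained MATLAB illustration of what value iteration computes for a tiny two-state, two-action MDP; it does not call the MDP toolbox, whose actual function names and arguments should be taken from its own documentation.

% Self-contained value-iteration sketch for a 2-state, 2-action MDP (illustrative only).
P = zeros(2,2,2);                        % P(s, s_next, a): transition probabilities
P(:,:,1) = [0.8 0.2; 0.1 0.9];
P(:,:,2) = [0.5 0.5; 0.6 0.4];
R = [1 0; -1 2];                         % R(s, a): expected immediate reward
gamma = 0.9;  tol = 1e-6;
V = zeros(2,1);
while true
    Q = zeros(2,2);
    for a = 1:2
        Q(:,a) = R(:,a) + gamma * P(:,:,a) * V;    % Bellman backup for each action
    end
    Vnew = max(Q, [], 2);
    if max(abs(Vnew - V)) < tol, break; end
    V = Vnew;
end
[Vmax, policy] = max(Q, [], 2);          % greedy policy from the final backup
disp(Vmax.'); disp(policy.')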
Description: A fixed-point iteration method for solving systems of nonlinear equations, implemented in MATLAB. (A one-dimensional fixed-point sketch follows this entry.) Platform: |
Size: 1024 |
Author:yang |
Hits:
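The uploaded program targets systems of nonlinear equations; as a simpler point of reference, here is a minimal MATLAB sketch of fixed-point iteration for a single equation rewritten as x = g(x). The choice of g and the starting point are illustrative and not taken from the uploaded code.

% Fixed-point iteration sketch for x = g(x) (illustrative g and starting point).
g = @(x) cos(x);            % rewriting f(x) = x - cos(x) = 0 as x = cos(x)
x = 1.0;                    % initial guess
tol = 1e-10;  maxit = 200;
for k = 1:maxit
    xnew = g(x);
    if abs(xnew - x) < tol, x = xnew; break; end
    x = xnew;
end
fprintf('fixed point approx. %.10f after %d iterations\n', x, k)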
Description: A worked example of the Newton iteration method; it can be used as a template for writing other numerical algorithms. (A minimal Newton sketch follows this entry.) Platform: |
Size: 1024 |
Author:李黎 |
Hits:
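A minimal MATLAB sketch of the Newton iteration for a single equation f(x) = 0; the example function, derivative, and starting point are illustrative and not taken from the uploaded code.

% Newton iteration sketch for f(x) = 0 (illustrative f, derivative, and starting point).
f  = @(x) x.^3 - 2*x - 5;        % example equation
fp = @(x) 3*x.^2 - 2;            % its derivative
x = 2;  tol = 1e-12;  maxit = 50;
for k = 1:maxit
    dx = f(x) / fp(x);
    x = x - dx;                  % Newton update: x_new = x - f(x)/f'(x)
    if abs(dx) < tol, break; end
end
fprintf('root approx. %.12f after %d iterations\n', x, k)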
Description: MATLAB help text for the marq function (a hedged usage sketch follows this entry):
% Train a two layer neural network with the Levenberg-Marquardt
% method.
%
% If desired, it is possible to use regularization by
% weight decay. Also pruned (i.e. not fully connected) networks can
% be trained.
%
% Given a set of corresponding input-output pairs and an initial
% network,
% [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms)
% trains the network with the Levenberg-Marquardt method.
%
% The activation functions can be either linear or tanh. The
% network architecture is defined by the matrix NetDef which
% has two rows. The first row specifies the hidden layer and the
% second row specifies the output layer. Platform: |
Size: 3072 |
Author:张镇 |
Hits:
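The call signature below is the one quoted in the help text above; everything else (the 'H'/'L' encoding of NetDef, the weight-matrix shapes, the layout of trparms, and the training data) is an assumption for illustration and should be checked against the toolbox documentation before use.

% Hypothetical usage sketch for marq; assumed conventions are marked in the comments.
NetDef = ['HHHHH'                  % assumption: 'H' = tanh hidden unit
          'L----'];                % assumption: 'L' = linear output unit, '-' = unused
PHI = rand(2, 100);                % made-up training inputs: 2 signals, 100 samples
Y   = sum(PHI.^2, 1);              % made-up training targets: 1 output, 100 samples
W1  = 0.1*randn(5, 3);             % assumed shape: hidden units x (inputs + 1 bias)
W2  = 0.1*randn(1, 6);             % assumed shape: outputs x (hidden units + 1 bias)
trparms = [200 0 1e-3 0];          % assumption: [max_iterations stop_criterion lambda weight_decay]
[W1, W2, critvec, iteration, lambda] = marq(NetDef, W1, W2, PHI, Y, trparms);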
Description: Fixed-point iteration. The function fixed_point(p0, N) approximates the solution of an equation f(x) = 0 rewritten in the form x = g(x), where g is a sub-function the user has to enter. The call fixed_point(p0, N) returns the root of f(x), i.e. the fixed point of g(x), if the procedure succeeds, or the sequence of iterates if something goes wrong. p0 is the initial approximation and N the maximum number of iterations. If, after N iterations, the condition |x(k)- x(k-1)| < tol is not satisfied, all iterated values are displayed, together with a message asking the user either to change p0 in case of divergence or to enter another g(x) that does not lead to complex-number arithmetic. Typical failure modes are divergence of the iterates and/or the appearance of complex numbers, for example with functions involving sqrt(x) when an iterate becomes negative. (A hedged usage sketch follows this entry.) Platform: |
Size: 1024 |
Author:王怀东 |
Hits:
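A hedged usage sketch for the fixed_point(p0, N) routine described above. The g(x) shown in the comments is an illustrative placeholder, since the entry states that the user must supply g as a sub-function; how it is entered inside the file may differ.

% Hypothetical usage of fixed_point(p0, N); g(x) is illustrative.
% Inside fixed_point.m the user-supplied sub-function might read, e.g.:
%   function y = g(x)
%   y = cos(x);              % rewriting x - cos(x) = 0 as x = g(x)
p0 = 0.5;                    % initial approximation
N  = 25;                     % maximum number of iterations
root = fixed_point(p0, N);   % fixed point of g, or the iterates if the procedure fails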