CodeBus
www.codebus.net
Search - iteration - List
[
AI-NN-PR
]
Simulated annealing implementation of the traveling salesman problem
DL : 0
Uses the experimental parameters determined by Kang Lishan et al. For an n-city traveling salesman problem the parameters are: initial temperature t0 = 280; a fixed number of iterations L = 100n at each temperature; temperature decay coefficient alpha = 0.92. The algorithm stops when the solutions obtained at two consecutive temperatures differ only slightly.
Date
: 2026-01-01
Size
: 2kb
User
:
谢继晖
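The parameters quoted above are enough to sketch the whole algorithm. Below is a minimal Python sketch (not the uploaded source; all function names are mine) using 2-opt neighbour moves, t0 = 280, L = 100n trials per temperature, decay alpha = 0.92, and the stop rule of near-identical solutions at consecutive temperatures:

```python
import math, random

def tour_length(tour, dist):
    """Total length of the closed tour over the distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def sa_tsp(dist, t0=280.0, alpha=0.92, seed=0):
    """Simulated annealing for TSP with the parameters quoted above."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur = tour_length(tour, dist)
    best_tour, best = tour[:], cur
    t = t0
    while t > 1e-3:
        prev = cur
        for _ in range(100 * n):                     # L = 100n moves per temperature
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
            delta = tour_length(cand, dist) - cur
            if delta < 0 or rng.random() < math.exp(-delta / t):   # Metropolis rule
                tour, cur = cand, cur + delta
                if cur < best:
                    best_tour, best = tour[:], cur
        if abs(prev - cur) < 1e-9:    # solutions at consecutive temperatures barely changed
            break
        t *= alpha
    return best_tour, best
```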
[
AI-NN-PR
]
srcV0624
DL : 1
This code implements the policy iteration algorithm from reinforcement learning. Please decompress it with WinZip.
Date
: 2026-01-01
Size
: 18kb
User
:
柳春
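The policy iteration loop this entry refers to can be sketched in a few lines of Python (a generic textbook version, not the uploaded code; the (prob, next_state) transition format is my assumption):

```python
def policy_iteration(P, R, gamma=0.9, tol=1e-8):
    """Policy iteration: alternate iterative policy evaluation with
    greedy policy improvement until the policy is stable.
    P[s][a] is a list of (prob, next_state); R[s][a] is the expected reward."""
    n_states, n_actions = len(P), len(P[0])
    policy = [0] * n_states
    while True:
        # policy evaluation (iterative, in place)
        V = [0.0] * n_states
        while True:
            delta = 0.0
            for s in range(n_states):
                a = policy[s]
                v = R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < tol:
                break
        # policy improvement (greedy with respect to V)
        stable = True
        for s in range(n_states):
            q = [R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                 for a in range(n_actions)]
            best = q.index(max(q))
            if best != policy[s]:
                policy[s] = best
                stable = False
        if stable:
            return policy, V
```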
[
AI-NN-PR
]
dossier
DL : 0
For the incomplete methods, we kept the representation of the queens by a table and the conflict test that determines whether two queens attack each other, which is much faster for this kind of problem than a matrix representation. Heuristic: descent. Tests: 100 queens in under 1 second and 67 iterations; 500 queens in 1 second and 257 iterations; 1000 queens in 11 seconds and 492 iterations. Heuristic: simulated annealing. Tests: 100 queens in under 1 second and 47 iterations; 500 queens in 5 seconds and 243 iterations; 1000 queens in 13 seconds and 497 iterations. Heuristic: based on simulated annealing. Tests: 100 queens in under 1 second and 60 iterations; 500 queens in 1 second and 224 iterations; 1000 queens in 5 seconds and 459 iterations; 10,000 queens in 20 minutes 30 seconds and 4885 iterations.
Date
: 2026-01-01
Size
: 51kb
User
:
ZHU
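The table representation the description mentions, one queen per column plus a fast pairwise conflict test, is the heart of all three heuristics. A minimal Python sketch of the descent heuristic in min-conflicts style (not the uploaded source; function names and limits are mine):

```python
import random

def conflicts(rows, col, row):
    """Number of queens attacking a queen at (row, col);
    rows[c] is the row of the queen in column c (the 'table')."""
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts_queens(n, max_iters=3000, seed=0):
    """Descent: repeatedly move some conflicted queen to the row in its
    column with the fewest conflicts, until no queen is attacked."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]
    for it in range(max_iters):
        bad = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not bad:
            return rows, it                      # solved
        col = rng.choice(bad)
        counts = [conflicts(rows, col, r) for r in range(n)]
        best = min(counts)
        rows[col] = rng.choice([r for r, k in enumerate(counts) if k == best])
    return None, max_iters                       # gave up (restart with a new seed)
```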
[
AI-NN-PR
]
immunity
DL : 0
Source code for an artificial immune algorithm. The procedure: 1. Set the parameters. 2. Randomly generate the initial population: pop = initpop(popsize, chromlength). 3. Encode the fault types, one type per row: code(1,:) normal; code(2,:) 50%; code(3,:) 150%. The actual measured fault data are encoded here as Unnoralcode, 188%. 4. Iterate M times: 1) compute the objective value as the Euclidean distance: [objvalue] = calobjvalue(pop,i); 2) compute each individual's fitness: fitvalue = calfitvalue(objvalue); 3) selection: newpop = selection(pop,fitvalue), then crossover: newpop = crossover(newpop,pc,k), then mutation: newpop = mutation(newpop,pm), recomputing objvalue = calobjvalue(newpop,i) after each step. 5. Find the individual with the highest fitness and its fitness value. 6. Test the stopping criterion for the iteration.
Date
: 2026-01-01
Size
: 9kb
User
:
江泉
[
AI-NN-PR
]
PSO-C
DL : 0
A particle swarm optimization algorithm for the C language. Initial conditions must be supplied, such as the maximum velocity, the number of iterations, and the minimum error used as the termination condition.
Date
: 2026-01-01
Size
: 57kb
User
:
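A minimal Python sketch of such a PSO (the uploaded code is C; the inertia and acceleration parameters w, c1, c2 and their defaults are my assumptions), showing where the maximum velocity, the iteration count, and the minimum-error stopping condition enter:

```python
import random

def pso(f, dim, bounds, n_particles=30, max_iter=300, max_v=0.5,
        min_err=1e-8, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [lo, hi]^dim with a plain global-best PSO."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                         # personal bests
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                  # global best
    for _ in range(max_iter):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                V[i][d] = max(-max_v, min(max_v, V[i][d]))   # clamp to max velocity
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
        if gbest < min_err:                       # minimum-error termination
            break
    return G, gbest
```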
[
AI-NN-PR
]
bipso
DL : 0
Reinitializes the particle swarm around its current centroid. Each particle then moves from its new position toward Pi and Pg, carrying the motion inertia (wvi) acquired in the previous search, so the swarm reaches new positions as it moves and gains more chances of finding a better solution. As the iterations continue, the mutated swarm again converges toward a single point; once it has converged to a certain degree, the next mutation is applied, and this repeats until the iterations end.
Date
: 2026-01-01
Size
: 75kb
User
:
wanglg
[
AI-NN-PR
]
dpso_ccpzgf
DL : 0
Source code for a two-dimensional binary discrete particle swarm optimizer that solves the agent coalition problem; press the C key to start the iterations.
Date
: 2026-01-01
Size
: 226kb
User
:
[
AI-NN-PR
]
ANewC4.5alg
DL : 0
A classic data-mining classification algorithm, evolved from the ID3 algorithm. It is mainly used to handle continuous attribute values. The basic procedure: 1. Sort the data set by attribute value. 2. Partition the data set dynamically with different thresholds. 3. Iterate, splitting on each threshold. 4. Obtain all candidate thresholds together with their information gain and gain ratio.
Date
: 2026-01-01
Size
: 145kb
User
:
kpeng
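Step 4 above, computing the gain and gain ratio for every candidate threshold of a continuous attribute, can be sketched as follows (a generic C4.5-style routine in Python, not the uploaded source):

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_threshold(values, labels):
    """Sort by attribute value, try the midpoint between every pair of
    distinct neighbours, and return (threshold, gain, gain_ratio) of the
    split with the highest information gain."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    base = entropy(labels)
    best = (None, 0.0, 0.0)
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                      # no threshold between equal values
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        gain = base - (len(left) / n * entropy(left)
                       + len(right) / n * entropy(right))
        split_info = entropy([0] * len(left) + [1] * len(right))
        ratio = gain / split_info if split_info > 0 else 0.0
        if gain > best[1]:
            best = (t, gain, ratio)
    return best
```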
[
AI-NN-PR
]
firbynna
DL : 0
A program I wrote that designs type-1 FIR filters with a neural network. After reading it, you will understand in depth how to design filters with a BP network and the LMS algorithm. Simply change the H values in the program to generate low-pass, high-pass, band-pass, and band-stop filters. Running the program yields the filter coefficients, the amplitude-frequency curve, and the attenuation curve. The filter characteristics can be adjusted by changing the iteration step size and the error limit.
Date
: 2026-01-01
Size
: 1kb
User
:
黄翔东
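The LMS half of that design idea, iteratively adjusting FIR coefficients with a step size, can be sketched in Python on a system-identification toy problem (my own generic demo, not the uploaded filter-design program; the reference filter h_true and the white input are my assumptions):

```python
import random

def lms_identify(h_true, n_taps, mu=0.05, n_samples=5000, seed=0):
    """Adapt FIR coefficients so the filter output matches a desired
    signal (here the output of a known filter h_true). The iteration
    step size mu trades convergence speed against final error."""
    rng = random.Random(seed)
    w = [0.0] * n_taps                    # adaptive coefficients
    x = [0.0] * n_taps                    # delay line, newest sample first
    for _ in range(n_samples):
        x = [rng.uniform(-1, 1)] + x[:-1]
        d = sum(hk * xk for hk, xk in zip(h_true, x))   # desired output
        y = sum(wk * xk for wk, xk in zip(w, x))        # filter output
        e = d - y                                       # error signal
        w = [wk + mu * e * xk for wk, xk in zip(w, x)]  # LMS update
    return w
```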
[
AI-NN-PR
]
clustering
DL : 0
A clustering algorithm that iterates after K-means processing; the paper was published in PAK
Date
: 2026-01-01
Size
: 2.34mb
User
:
杜亮
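The K-means iteration underlying this entry can be sketched as (a generic Python version, not the uploaded code):

```python
import random

def kmeans(points, k, max_iter=100, seed=0):
    """Plain K-means: assign each point to the nearest centre, recompute
    each centre as the mean of its cluster, repeat until assignments
    stop changing."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    assign = [-1] * len(points)
    for _ in range(max_iter):
        new_assign = [min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
                      for p in points]
        if new_assign == assign:
            break                          # converged
        assign = new_assign
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:                    # leave an empty cluster's centre alone
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return centers, assign
```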
[
AI-NN-PR
]
psot
DL : 1
Particle swarm optimization toolbox. The toolbox encapsulates the core of the PSO algorithm and exposes the algorithm's tunable parameters to the user. You only need to define the function to be optimized (computing its minimum or maximum) and set the ranges of the function's variables, the maximum change allowed per iteration step (called the maximum velocity, Max_V), and so on; the toolbox then optimizes on its own.
Date
: 2026-01-01
Size
: 801kb
User
:
张鹤峰
[
AI-NN-PR
]
PSOt
DL : 0
PSOt, a particle swarm optimization toolbox for PSO. The toolbox encapsulates the core of the PSO algorithm and exposes the algorithm's tunable parameters to the user. You only need to define the function to be optimized (computing its minimum or maximum) and set the ranges of the function's variables, the maximum change allowed per iteration step (called the maximum velocity, Max_V), and so on; the toolbox then optimizes on its own.
Date
: 2026-01-01
Size
: 743kb
User
:
dahai
[
AI-NN-PR
]
tenlei
DL : 0
MATLAB source: fuzzy c-means clustering (fcm) with iterative feature weighting:

function [U,center,result,w,obj_fcn] = fenlei(data)
[data_n,in_n] = size(data);
m = 2;              % exponent for U
max_iter = 100;     % max. number of iterations
min_impro = 1e-5;   % min. improvement
c = 3;
[center, U, obj_fcn] = fcm(data, c);
for i = 1:max_iter
    if F(U) > 0.98
        break
    else
        w_new = eye(in_n,in_n);
        center1 = sum(center)/c;
        a = center1(1)./center1;
        deta = center - center1(ones(c,1),:);
        w = sqrt(sum(deta.^2)).*a;
        for j = 1:in_n
            w_new(j,j) = w(j);
        end
        data1 = data*w_new;
        [center, U, obj_fcn] = fcm(data1, c);
        center = center./w(ones(c,1),:);
        obj_fcn = obj_fcn/sum(w.^2);
    end
end
display(i)
result = zeros(1,data_n);
U_ = max(U);
for i = 1:data_n
    for j = 1:c
        if U(j,i) == U_(i)
            result(i) = j;
            continue
        end
    end
end
Date
: 2026-01-01
Size
: 3kb
User
:
download99
[
AI-NN-PR
]
marq
DL : 0
Trains a two-layer neural network with the Levenberg-Marquardt method. If desired, regularization by weight decay can be used, and pruned (i.e., not fully connected) networks can be trained. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iteration,lambda] = marq(NetDef,W1,W2,PHI,Y,trparms) trains the network with the Levenberg-Marquardt method. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second row specifies the output layer.
Date
: 2026-01-01
Size
: 3kb
User
:
张镇
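The damping idea behind Levenberg-Marquardt, blending Gauss-Newton with gradient descent via a factor that grows on failure and shrinks on success, can be shown on a scalar toy problem (a generic Python sketch, not the marq routine; the model y = exp(a*x) is my own choice):

```python
import math

def lm_fit(xs, ys, a0=0.0, lam=1e-3, max_iter=100, tol=1e-12):
    """Fit y = exp(a*x) by least squares with a scalar Levenberg-
    Marquardt iteration: step = J'r / (J'J + lam)."""
    a = a0

    def sse(a):
        return sum((y - math.exp(a * x)) ** 2 for x, y in zip(xs, ys))

    err = sse(a)
    for _ in range(max_iter):
        # residuals r_i = y_i - f(x_i); Jacobian J_i = df/da = x_i * exp(a*x_i)
        JtJ = sum((x * math.exp(a * x)) ** 2 for x in xs)
        Jtr = sum(x * math.exp(a * x) * (y - math.exp(a * x))
                  for x, y in zip(xs, ys))
        step = Jtr / (JtJ + lam)
        new_err = sse(a + step)
        if new_err < err:                 # accept: move toward Gauss-Newton
            a, err, lam = a + step, new_err, lam * 0.5
            if abs(step) < tol:
                break
        else:                             # reject: increase the damping
            lam *= 10.0
    return a
```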
[
AI-NN-PR
]
Fractal
DL : 0
Fractal and graphic design: images generated from Julia sets, the Mandelbrot set, Newton iteration, and three-dimensional chaotic attractors.
Date
: 2026-01-01
Size
: 1kb
User
:
廖洪运
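The two iterations behind such images can each be sketched in a few lines of Python (generic versions of the escape-time and Newton-basin rules, not the uploaded program):

```python
def mandelbrot_iters(c, max_iter=100):
    """Escape-time iteration for the Mandelbrot set: iterate z -> z*z + c
    from z = 0 and count steps until |z| exceeds 2; reaching max_iter
    means the point is treated as inside the set. The count is the
    pixel colour."""
    z = 0j
    for k in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return k
    return max_iter

def newton_basin(z, max_iter=50, tol=1e-10):
    """Newton iteration z -> z - (z^3 - 1)/(3 z^2) for z^3 = 1; which of
    the three roots is reached colours the basin in a Newton fractal."""
    for _ in range(max_iter):
        if abs(z) < tol:
            break                          # derivative ~ 0; give up
        z = z - (z ** 3 - 1) / (3 * z ** 2)
    return z
```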
[
AI-NN-PR
]
GuoA
DL : 0
The Guo Tao algorithm (GuoA) is an evolutionary algorithm that combines subspace search (multi-parent recombination) with population hill-climbing. It randomly generates new individuals within the subspace spanned by a few individuals, reflecting the non-convexity of random search. Moreover, because GuoA uses a single-individual elimination strategy, removing only the worst-adapted individual from the population in each evolutionary iteration, the selection pressure is low: the diversity of the population is preserved, while well-adapted individuals can survive indefinitely. Practice has shown that GuoA is robust, needs no parameter changes across different optimization problems, is highly efficient, and may find several optimal solutions at once.
Date
: 2026-01-01
Size
: 3kb
User
:
zhao
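The two mechanisms the description singles out, multi-parent recombination inside a spanned subspace and elimination of only the worst individual, can be sketched as one evolution step (a loose Python sketch under my own parameter choices, not Guo Tao's code):

```python
import random

def guo_tao_step(pop, f, n_parents=3, lo=-5.0, hi=5.0, rng=random):
    """One step: pick a few parents, form a random affine combination
    (coefficients sum to 1, so the child lies in the subspace they
    span, with some extrapolation), then replace only the worst
    individual if the child improves on it."""
    while True:
        coeffs = [rng.uniform(-0.5, 1.5) for _ in range(n_parents)]
        s = sum(coeffs)
        if abs(s) > 1e-6:
            break
    coeffs = [c / s for c in coeffs]       # affine: coefficients sum to 1
    parents = rng.sample(pop, n_parents)
    child = [max(lo, min(hi, sum(c * p[d] for c, p in zip(coeffs, parents))))
             for d in range(len(pop[0]))]
    worst = max(range(len(pop)), key=lambda i: f(pop[i]))
    if f(child) < f(pop[worst]):           # eliminate only the worst individual
        pop[worst] = child
    return pop
```

Because only the worst individual is ever replaced, both the best and the worst objective values in the population are monotone non-increasing, which is the low-pressure property the description emphasizes.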
[
AI-NN-PR
]
Artificial_fish
DL : 0
Records, for each artificial fish over 20 iterations: its position, the corresponding value of Y, its temporary position, the temporary value of Y, and the bulletin board, along with the maximum step length of an artificial fish's movement; the program finds the optimal function value.
Date
: 2026-01-01
Size
: 2kb
User
:
陈超
[
AI-NN-PR
]
Jacobi
DL : 0
A C++ implementation of the Jacobi iteration algorithm, for use in research and design.
Date
: 2026-01-01
Size
: 314kb
User
:
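The Jacobi iteration itself is short enough to sketch (the uploaded code is C++; this generic Python version is mine):

```python
def jacobi(A, b, max_iter=200, tol=1e-10):
    """Jacobi iteration for Ax = b: each sweep computes every component
    from the previous iterate, x_i = (b_i - sum_{j != i} a_ij x_j) / a_ii.
    Converges, e.g., when A is strictly diagonally dominant."""
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:
            return x_new                   # converged
        x = x_new
    return x
```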
[
AI-NN-PR
]
pi.py
DL : 0
Reinforcement Learning policy iteration algorithm
Date
: 2026-01-01
Size
: 2kb
User
:
helen_ray
[
AI-NN-PR
]
MDP_vi.py
DL : 0
Reinforcement learning value iteration algorithm
Date
: 2026-01-01
Size
: 2kb
User
:
helen_ray
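The value iteration loop such a script typically contains can be sketched as (a generic textbook version in Python, not the uploaded MDP_vi.py; the (prob, next_state) transition format is my assumption):

```python
def value_iteration(P, R, gamma=0.9, tol=1e-10):
    """Apply the Bellman optimality backup until the values stop
    changing, then read off the greedy policy.
    P[s][a] is a list of (prob, next_state); R[s][a] is the expected reward."""
    n_states, n_actions = len(P), len(P[0])
    V = [0.0] * n_states
    while True:
        V_new = [max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                     for a in range(n_actions))
                 for s in range(n_states)]
        if max(abs(a - b) for a, b in zip(V_new, V)) < tol:
            break
        V = V_new
    policy = [max(range(n_actions),
                  key=lambda a: R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a]))
              for s in range(n_states)]
    return V, policy
```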