CodeBus
Search - h.3 - List
[AI-NN-PR] BPNet
DL: 0
Artificial neural network back-propagation (BP) algorithm: 1. dynamically changes the learning rate; 2. adds a momentum term; 3. uses the Matcom 4.5 matrix library for convenient matrix operations (free download, header file matlib.h), although you can also write your own matrix class; 4. the computation can be paused; 5. the network can be saved to and restored from a file.
Date: 2025-12-16 | Size: 1.56mb | User: 吴昆
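The dynamic learning rate and momentum term listed above are standard refinements of plain gradient descent. A minimal NumPy sketch of one such weight update follows (a generic illustration, not the uploaded Matcom/C++ code; all names and data are invented):

import numpy as np

def bp_update(W, grad, velocity, lr, momentum=0.9):
    """One weight update with a momentum term: v <- momentum*v - lr*grad, then W <- W + v."""
    velocity = momentum * velocity - lr * grad
    return W + velocity, velocity

# Toy usage: fit a single linear layer to random data (illustration only).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 4)), rng.normal(size=(32, 1))
W, v, lr = np.zeros((4, 1)), np.zeros((4, 1)), 0.1
for epoch in range(100):
    grad = X.T @ (X @ W - y) / len(X)   # mean-squared-error gradient (up to a factor of 2)
    W, v = bp_update(W, grad, v, lr)
    lr *= 0.99                          # simplest form of a dynamically changing learning rate

The momentum term blends each new gradient with the previous step's direction, which damps oscillation, while shrinking lr every epoch is the simplest way to change the learning rate dynamically.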
[AI-NN-PR] BPNet
DL: 0
Artificial neural network BP algorithm with a dynamically changing learning rate, a momentum term, matrix operations via the Matcom 4.5 library (free download, header file matlib.h) or a matrix class of your own, pausable computation, and save/restore of the network to a file.
Date: 2025-12-16 | Size: 2.1mb | User: 谢文辉
[AI-NN-PR] h
DL: 0
Objective: deepen students' understanding of graph-search techniques, give them an initial command of basic graph-search programming, and let them apply graph search to some practical problems. Requirements: (1) The general-purpose state-graph search program from Chapter 3 may be used, in which case only the rule-set program needs to be written; alternatively, a separate program may be written in PROLOG or another language. (2) While running, the program should display its results on screen.
Date: 2025-12-16 | Size: 12kb | User: nimamama
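A state-graph search of the kind this exercise asks for reduces to a generic search loop plus a problem-specific rule set (successor function). The sketch below uses breadth-first search on an invented toy rule set; it is a generic illustration, not the Chapter 3 program mentioned above:

from collections import deque

def state_graph_search(start, goal, successors):
    """Breadth-first search over a state graph; returns a path of states or None.
    successors(state) yields the states reachable by applying one rule."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:          # walk back through the parents
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:             # skip states that were already reached
                parent[nxt] = state
                frontier.append(nxt)
    return None

# Toy rule set: states are integers, the two rules are "+1" and "*2".
print(state_graph_search(1, 10, lambda s: (s + 1, s * 2)))   # e.g. [1, 2, 4, 5, 10]

Only the successor function (the rule set) changes from problem to problem; the search loop itself stays the same.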
[AI-NN-PR] tony
DL: 0
Tower of Hanoi problem (河內塔問題). C source:
#include <stdio.h>
#include <stdlib.h>
int fun_a(int);
void fun_b(int, int, int, int);
int main(void)
{
    int n, option;
    printf("題目二:河內塔問題\n");                      /* Problem 2: Tower of Hanoi */
    printf("請輸入要搬移的圓盤數目\n");                  /* enter the number of disks to move */
    scanf("%d", &n);
    printf("最少搬移的次數為%d次\n", fun_a(n));          /* minimum number of moves */
    printf("是否顯示移動過程? 是請輸入1,否則輸入0\n");   /* show the moves? 1 = yes, 0 = no */
    scanf("%d", &option);
    if (option == 1)
        fun_b(n, 1, 2, 3);
    system("pause");
    return 0;
}
int fun_a(int n)                                  /* minimum number of moves: 2^n - 1 */
{
    int sum1 = 2, sum2 = 0, i;
    for (i = n; i > 1; i--)
        sum1 = sum1 * 2;
    sum2 = sum1 - 1;
    return sum2;
}
void fun_b(int n, int left, int mid, int right)   /* print the move sequence recursively */
{
    if (n == 1)
        printf("把第%d個盤子從第%d座塔移動到第%d座塔\n", n, left, right);   /* move disk n from peg left to peg right */
    else {
        fun_b(n - 1, left, right, mid);
        printf("把第%d個盤子從第%d座塔移動到第%d座塔\n", n, left, right);
        fun_b(n - 1, mid, left, right);
    }
}
Date: 2025-12-16 | Size: 6kb | User: 林小世
[AI-NN-PR] Hopfield
DL: 0
Implements a Hopfield network for the retrieval of stored 2-D patterns. Input parameters: the user is asked for the number of fundamental memories p (2 ≤ p ≤ 5) and for the width w (3 ≤ w ≤ 5) and height h (3 ≤ h ≤ 6) of each 2-D pattern. Storage phase: each fundamental memory is stored in a text file with a .txt extension. Retrieval phase: the user is asked for the name of the text file containing the noisy pattern.
Date: 2025-12-16 | Size: 4kb | User: Owen Yin
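A standard Hopfield network of the kind described above stores patterns with a Hebbian outer-product rule and retrieves them by repeated sign-threshold updates. The NumPy sketch below illustrates this on flattened ±1 patterns; it is a generic illustration under an assumed encoding, not the uploaded program:

import numpy as np

def store(patterns):
    """Hebbian storage: sum of outer products of the ±1 patterns, with a zero diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def retrieve(W, probe, steps=200, seed=0):
    """Asynchronous retrieval: repeatedly set one random unit to the sign of its net input."""
    rng = np.random.default_rng(seed)
    s = probe.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Toy usage: store one 3x3 pattern (flattened to ±1), then recover it from a noisy copy.
p = np.array([1, -1, 1, -1, 1, -1, 1, -1, 1])
W = store(p[None, :])
noisy = p.copy()
noisy[0] = -noisy[0]                          # flip one "pixel"
print(np.array_equal(retrieve(W, noisy), p))  # True: the stored memory is recovered

In the upload's terms, storage would presumably be run once per fundamental memory read from its .txt file, and retrieval applied to the noisy pattern named by the user.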
[AI-NN-PR] bpshenjing
DL: 0
Artificial neural network BP algorithm: dynamically changing learning rate, momentum term, matrix operations via the Matcom 4.5 library (free download, header file matlib.h; you may also write your own matrix class), pausable computation, and saving/restoring the network as a file.
Date: 2025-12-16 | Size: 1.86mb | User: rongya
[AI-NN-PR] LDPC
DL: 0
A program for the design of an LDPC channel-coding module. Open the source and first run gengrate_h.m, setting the code length to 756 bits, the column weight to 3, and the row weight to 9. In the MATLAB Workspace, select the ten variables H, A, B, C, D, E, Hget, Fget, g and Tget and save them as encode_in.mat. Then run main_encode.m to perform the encoding; when the main program finishes, the result file "encode-out.mat" is generated automatically in the current directory and serves as the input signal for the next spread-spectrum modulation simulation. Finally, inspect the waveforms of the Workspace variables s (data before encoding) and xyuan (data after encoding). The comparison shows a chip length of 504 bits before encoding and 756 bits after encoding, so coding efficiency = (length before encoding)/(length after encoding) = 504/756 = 2/3.
Date: 2025-12-16 | Size: 15kb | User: 吴健
[AI-NN-PR] 99456518BpNet_src
DL: 0
Header comment of the BP neural-network source (BpNet.h): artificial neural network BP algorithm with 1. a dynamically changing learning rate, 2. a momentum term, 3. matrix operations via the Matcom 4.5 library (free download, header file matlib.h) or a hand-written matrix class, 4. pausable computation, and 5. save/restore of the network to a file. Author: 张纯禹, School of Materials Science and Engineering, Tongji University; e-mail: chunyu_79@hotmail.com; QQ: 53806186. Improvements and discussion of other practical algorithms are welcome. #include "BpNet.h". You are welcome to communicate with me.
Date: 2025-12-16 | Size: 5kb | User: 武六
[AI-NN-PR] bashuma-
DL: 0
Artificial-intelligence eight-puzzle implementation. The eight-puzzle: on a 3x3 board the digits 1 to 8 are placed with one square left blank (initial state as shown in Figure 1); using the four operations of moving the blank left, right, up and down, the board must be taken from the initial state to the goal state. Declarations:
#include <stdio.h>
#include <stdlib.h>
typedef struct s_node                       /* search-tree node */
{
    struct s_node *father;                  /* parent node */
    struct s_node *prev;
    struct s_node *next;                    /* next node in the list */
    int step;                               /* number of steps taken */
    int diff;
    int weight;
    int m[9];                               /* the nine board cells */
} node;
node *open_out(node *head);                 /* remove a node from the OPEN list */
int open_insert(node *head, node *item);    /* insert a node into the OPEN list */
int close_insert(node *head, node *item);   /* insert a node into the CLOSED list */
int inv_pair(int m[]);                      /* inversion-pair count for m */
void swap(int *, int *);
int operate(int m[], int op);               /* apply one blank-move operation */
int diff(int m[], int n[]);                 /* number of cells that differ */
Date: 2025-12-16 | Size: 275kb | User: 林莹莹
[AI-NN-PR] machine-learning_PCA
DL: 0
Environment: WinPython 32-bit 2.7.5.3. Example usage:
p = PCA()
print u"Mean-centred data set:", p.dataset("H:\\PCA_test.txt")
print u"Covariance matrix:", p.COV()
print u"Eigenvectors:", p.eig_vector()[1]
tt = p.pc(dim=1)
print "tt:", tt
print u"Data set in the new dimensions:", tt[1]
Class docstring:
"""
Principal component analysis (PCA): covariance matrix, eigenvalues, eigenvectors.
Parameters:
    path: path to the data set
    dim: number of dimensions to keep after reduction
Attributes:
    means_data: original data minus the column means
    m: number of rows in the data set
    n: number of columns in the data set
    cov_matrix: covariance matrix
    eig_vectors: eigenvectors of the covariance matrix
    eig_value: eigenvalues of the covariance matrix
    cum_P: sorted eigenvalues with cumulative percentages
Methods:
    PCA.dataset(): load the data set, return: means_data, array(m, n)
    PCA.cov_matrix: compute the covariance matrix, return: cov_matrix, array(n, n)
    PCA.eig_vector: compute eigenvalues and eigenvectors, return: (eig_value, eig_vectors), (array(1 x n), array(n, n))
    PCA.pc: compute the reduced data set, return: data_rescaled, array(m, dim), default: dim=2
"""
Date: 2025-12-16 | Size: 2kb | User: hhkk
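The mean-centering, covariance, eigendecomposition and projection steps documented in the class above can also be written directly in a few lines of NumPy. This is a generic sketch, not the uploaded class:

import numpy as np

def pca(X, dim=2):
    """Project the rows of X onto its top-dim principal components."""
    Xc = X - X.mean(axis=0)                    # mean-centre each column
    cov = np.cov(Xc, rowvar=False)             # covariance matrix, shape (n, n)
    eig_val, eig_vec = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = eig_vec[:, np.argsort(eig_val)[::-1][:dim]]   # eigenvectors of the largest eigenvalues
    return Xc @ top                            # reduced data, shape (m, dim)

X = np.random.default_rng(0).normal(size=(100, 5))
print(pca(X, dim=1).shape)                     # (100, 1)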
[AI-NN-PR] DeepLearnToolbox_CNN_lzbV3.0
DL: 0
CNN main program. References: [1] Notes on Convolutional Neural Networks, Jake Bouvrie, 2006. [2] Gradient-Based Learning Applied to Document Recognition, Yann LeCun, 1998. [3] https://github.com/rasmusbergpalm/DeepLearnToolbox. Author: 陆振波 (Lu Zhenbo); e-mail: luzhenbo2@qq.com; personal blog: http://blog.sina.com.cn/luzhenbo2; PhD in naval architecture and ocean engineering (underwater acoustics), Naval University of Engineering. Areas of expertise: signal processing, image processing, artificial intelligence, pattern recognition, support vector machines, deep learning, machine learning, machine vision, swarm intelligence, and mixed Matlab/VC++ programming. Skills: strategic planning, team management, C, C++, Matlab, OpenCV, DSP, parallel computing, image processing, intelligent vision, convolutional neural networks, face detection, pedestrian detection, license-plate recognition, feature extraction, smart glasses, driver assistance, ADAS, AdaBoost, LBP, HOG, MeanShift, object detection, object recognition, object tracking, and big data.
Date: 2025-12-16 | Size: 959kb | User: 陆振波
[AI-NN-PR] libsvm-3.1
DL: 0
LIBSVM is an integrated software package for support vector classification (C-SVC, nu-SVC), regression (epsilon-SVR, nu-SVR), and distribution estimation (one-class SVM). It supports multi-class classification. Since version 2.8 it implements an SMO-type algorithm proposed in: R.-E. Fan, P.-H. Chen, and C.-J. Lin, "Working set selection using second order information for training SVM", Journal of Machine Learning Research 6, 1889-1918, 2005. A pseudo code can also be found there (see how to cite LIBSVM). Our goal is to help users in other fields easily use SVM as a tool. LIBSVM provides a simple interface through which users can easily link it with their own programs. Main features of LIBSVM include
Date: 2025-12-16 | Size: 1.26mb | User: carl2380
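As an illustration of the "simple interface" mentioned in the description, LIBSVM's bundled Python wrapper (svmutil) trains and evaluates a C-SVC model in a few lines. The import path and the toy data below are assumptions for this sketch, not part of the upload:

# Assumes the python/ directory of a LIBSVM release is on sys.path so that svmutil imports.
from svmutil import svm_train, svm_predict

# Two-class toy problem: sparse features as {index: value} dicts, labels +/-1 (invented data).
x = [{1: 0.0, 2: 0.0}, {1: 1.0, 2: 1.0}, {1: 0.1, 2: 0.2}, {1: 0.9, 2: 1.1}]
y = [-1, 1, -1, 1]

model = svm_train(y, x, '-s 0 -t 2 -c 1')            # C-SVC (-s 0) with an RBF kernel (-t 2)
labels, accuracy, values = svm_predict(y, x, model)  # predict back on the training data
print(labels)

The option string selects the C-SVC formulation with an RBF kernel and C = 1, one of the classifier types listed in the description.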
[AI-NN-PR] libsvm-3.22
DL: 1
libsvm-3.22.rar. A newer release of the LIBSVM package described in the previous entry: integrated software for support vector classification (C-SVC, nu-SVC), regression (epsilon-SVR, nu-SVR), and distribution estimation (one-class SVM), with multi-class support. Since version 2.8 it implements the SMO-type algorithm of R.-E. Fan, P.-H. Chen, and C.-J. Lin, "Working set selection using second order information for training SVM", Journal of Machine Learning Research 6, 1889-1918, 2005, where a pseudo code can also be found (see how to cite LIBSVM). The goal is to help users in other fields easily use SVM as a tool; LIBSVM provides a simple interface that users can link with their own programs.
Date: 2025-12-16 | Size: 820kb | User: carl2380