CodeBus
www.codebus.net
Search - k matrix - List
[AI-NN-PR] 朴素贝叶斯 (Naive Bayes)
DL : 0
Naive Bayes combiner. Call sequence: CM = Confusion_matrix(train_predicts, train_targets); [combining_predicts, errorrate] = combining_NB(DP, test_targets, CM). DP is a three-dimensional array in which entry (i, j, k) belongs to the DP matrix of the k-th sample; the targets take the values 0, 1, 2.
Date: 2025-12-29 | Size: 2kb | User:
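The call sequence above refers to the package's own MATLAB functions. As a rough illustration of the underlying idea (combining several classifiers' predictions in a naive-Bayes fashion, with per-class likelihoods estimated from each classifier's confusion matrix), here is a minimal Python sketch; the function names and the toy data are assumptions, not taken from the package.

import numpy as np

def confusion_matrix(predicts, targets, n_classes):
    # CM[i, j]: how often true class i was predicted as class j.
    cm = np.zeros((n_classes, n_classes))
    for p, t in zip(predicts, targets):
        cm[t, p] += 1
    return cm

def combine_nb(predicts_per_clf, cms, n_classes):
    # Treat each classifier's prediction as conditionally independent evidence,
    # with P(pred | class) estimated from its confusion matrix (Laplace-smoothed).
    n_samples = predicts_per_clf.shape[1]
    combined = np.zeros(n_samples, dtype=int)
    for s in range(n_samples):
        log_post = np.zeros(n_classes)
        for clf, cm in enumerate(cms):
            pred = predicts_per_clf[clf, s]
            lik = (cm[:, pred] + 1) / (cm.sum(axis=1) + n_classes)
            log_post += np.log(lik)
        combined[s] = int(np.argmax(log_post))
    return combined

# Toy example: two classifiers, three classes (targets 0, 1, 2 as in the listing).
train_targets = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2])
clf_preds = np.array([[0, 1, 2, 0, 1, 1, 0, 1, 2],
                      [0, 1, 2, 0, 2, 2, 0, 1, 2]])
cms = [confusion_matrix(p, train_targets, 3) for p in clf_preds]
print(combine_nb(clf_preds, cms, 3))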
[AI-NN-PR] gmeans
DL : 0
gmeans: clustering with first variation and splitting. Gmeans is a text clustering algorithm that uses three similarity functions: cosine, Euclidean, and KL divergence. The text data are stored as sparse matrices.
Date: 2025-12-29 | Size: 70kb | User: 修宇
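As a rough sketch of the three similarity measures named above applied to a sparse term-document matrix (this uses scipy/numpy and is not the package's code; the toy matrix is an assumption):

import numpy as np
from scipy.sparse import csr_matrix

# Toy term-document matrix in sparse form (rows: documents, columns: terms).
X = csr_matrix(np.array([[3., 0., 1.],
                         [2., 1., 0.],
                         [0., 4., 2.]]))

def cosine_sim(a, b):
    a, b = a.toarray().ravel(), b.toarray().ravel()
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

def euclidean_dist(a, b):
    d = (a - b).toarray().ravel()
    return np.sqrt(d.dot(d))

def kl_div(a, b, eps=1e-12):
    # Symmetrised KL divergence between the two documents' term distributions.
    p = a.toarray().ravel(); p = (p + eps) / (p + eps).sum()
    q = b.toarray().ravel(); q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

print(cosine_sim(X[0], X[1]), euclidean_dist(X[0], X[1]), kl_div(X[0], X[1]))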
[AI-NN-PR] MCRGSA
DL : 0
MCRGSA: a genetic simulated annealing algorithm for the multicast routing problem. Parameters: M, number of genetic-algorithm generations; N, population size (an even number); Pm, mutation probability tuning parameter; K, number of state jumps at each temperature; t0, initial temperature; alpha, cooling coefficient; beta, concentration balancing coefficient; ROUTES, candidate path set; Num, number of candidate paths to each node; Cost, cost adjacency matrix; Source, source node label; End, vector of destination node labels; MBR, encoding of the best path found in each generation.
Date: 2025-12-29 | Size: 1kb | User: 程爱华
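The parameters t0, alpha and K above describe a standard simulated-annealing schedule. A minimal generic Python sketch of that acceptance-and-cooling loop is below; the cost function and neighbour move are placeholders, not the package's multicast-routing encoding or its genetic operators.

import math, random

def simulated_annealing(cost, neighbour, x0, t0=100.0, alpha=0.95, K=50, t_min=1e-3):
    # At each temperature, attempt K state jumps and accept worse states
    # with Metropolis probability exp(-delta / t); cool geometrically by alpha.
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    while t > t_min:
        for _ in range(K):
            y = neighbour(x)
            fy = cost(y)
            if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha
    return best, fbest

# Placeholder problem: minimise a 1-D quadratic.
print(simulated_annealing(lambda x: (x - 3) ** 2,
                          lambda x: x + random.uniform(-1, 1),
                          x0=0.0))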
[AI-NN-PR] KModies
DL : 0
k center points (k-medoids). Write and debug a program that converts a user-entered regular expression into a deterministic finite automaton represented as a state diagram and in matrix form: 1. convert the regular expression to an NFA; 2. determinize the NFA into a DFA. '#' serves as the terminator of the regular expression, compound regular expressions are handled, and the start state is numbered 0.
Date: 2025-12-29 | Size: 1kb | User: 刘自咏
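The determinization step described above is the classic subset construction. A minimal Python sketch follows, assuming the NFA is given as a nested dict with '' marking epsilon moves; it is an illustration, not the package's code, and the regex-to-NFA step is omitted.

from collections import deque

def eps_closure(nfa, states):
    # All states reachable from `states` via epsilon ('') transitions.
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in nfa.get(s, {}).get('', set()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return frozenset(closure)

def nfa_to_dfa(nfa, start, alphabet):
    # Subset construction: each DFA state is an epsilon-closed set of NFA states.
    d_start = eps_closure(nfa, {start})
    dfa, queue = {}, deque([d_start])
    while queue:
        cur = queue.popleft()
        if cur in dfa:
            continue
        dfa[cur] = {}
        for a in alphabet:
            move = set()
            for s in cur:
                move |= nfa.get(s, {}).get(a, set())
            nxt = eps_closure(nfa, move)
            dfa[cur][a] = nxt
            if nxt not in dfa:
                queue.append(nxt)
    return d_start, dfa

# NFA for (a|b)*ab over states 0..3; a DFA state accepts if it contains state 3.
nfa = {0: {'': {1}}, 1: {'a': {1, 2}, 'b': {1}}, 2: {'b': {3}}}
start, dfa = nfa_to_dfa(nfa, 0, 'ab')
for state, trans in dfa.items():
    print(sorted(state), {a: sorted(t) for a, t in trans.items()})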
[AI-NN-PR] selfAffinity
DL : 0
AP clusters data on the basis of a similarity matrix over the data points; for very large datasets it is a fast and effective clustering method beyond the reach of traditional algorithms. This paper proposes a semi-supervised clustering method based on the affinity propagation (AP) algorithm. AP takes as input measures of similarity between pairs of data points and is an efficient, fast clustering algorithm for large datasets compared with existing methods such as K-center clustering, but for datasets with complex cluster structures it cannot produce good results. Clustering performance can be improved by using a priori known labeled data or pairwise constraints to adjust the similarity matrix. Experimental results show that this method reaches its goal on complex datasets and outperforms the comparison methods when a large number of pairwise constraints are available.
Date: 2025-12-29 | Size: 367kb | User: lilan
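The package implements its own semi-supervised variant; as a rough sketch of the general idea (run AP on a precomputed similarity matrix and, in the spirit of the description, encode pairwise constraints by adjusting entries of that matrix), here is a Python example using scikit-learn, which is an assumed dependency, not the package's code.

import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

# Similarity = negative squared Euclidean distance (the usual AP choice).
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

# A must-link constraint between samples i and j could be encoded by raising
# S[i, j]; a cannot-link constraint by lowering it.
S[0, 1] = S[1, 0] = S.max()

labels = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(S)
print(labels)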
[AI-NN-PR] nearestneighbour
DL : 0
Compute nearest neighbours (by Euclidean distance) to a set of points of interest from a set of candidate points. The points of interest can be specified either as a matrix of points (as columns) or as indices into the matrix of candidate points. Points can be of any (reasonable) dimension. nearestneighbour can search for the k nearest neighbours, for neighbours within some distance, or both. If only one neighbour is required for each point of interest, nearestneighbour tests whether it would be faster to construct the Delaunay triangulation (delaunayn) and use dsearchn to look up the neighbours, and if so, automatically computes the neighbours this way, so the fastest lookup method is always used.
Date: 2025-12-29 | Size: 30kb | User: nadir
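The listing is a MATLAB routine; as a rough Python counterpart (not the package's code), scipy's cKDTree supports both k-nearest and fixed-radius queries, as assumed below.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
candidates = rng.random((1000, 3))        # candidate points, one per row
points_of_interest = rng.random((5, 3))

tree = cKDTree(candidates)

# k nearest neighbours (here k = 3) by Euclidean distance.
dist, idx = tree.query(points_of_interest, k=3)

# Neighbours within a fixed radius instead of a fixed count.
within = tree.query_ball_point(points_of_interest, r=0.1)

print(idx)
print([len(w) for w in within])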
[AI-NN-PR] lsyc
DL : 0
Iterative algorithm for the channel capacity C. Function description: [CC, Paa] = ChannelCap(P, k) computes the channel capacity. Variables: P, the input forward transition probability matrix; k, the iteration precision; CC, the optimal channel capacity; Paa, the optimal input probability matrix; Pa, the initial input probability matrix; Pba, the forward transition probability matrix; Pb, the output probability matrix; Pab, the backward transition probability matrix; C, the initial channel capacity; r, the number of input symbols; s, the number of output symbols.
Date: 2025-12-29 | Size: 6kb | User: lijing
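The standard iterative method for computing the capacity of a discrete memoryless channel is the Blahut-Arimoto algorithm; a minimal numpy sketch following the listed interface [CC, Paa] = ChannelCap(P, k) is below. This is an illustration under that assumption, not the package's MATLAB code.

import numpy as np

def channel_cap(P, k=1e-8, max_iter=10000):
    # Blahut-Arimoto iteration. P[i, j] = p(y=j | x=i) is the forward transition
    # matrix, k is the convergence tolerance; returns (capacity in bits, optimal
    # input distribution).
    r, s = P.shape
    Pa = np.full(r, 1.0 / r)               # initial input distribution
    for _ in range(max_iter):
        Pb = Pa @ P                         # output distribution
        # Backward channel p(x | y), guarding against zero-probability outputs.
        Pab = (Pa[:, None] * P) / np.where(Pb > 0, Pb, 1.0)[None, :]
        with np.errstate(divide="ignore", invalid="ignore"):
            logPab = np.where(Pab > 0, np.log(Pab), 0.0)
        # Update: Pa_new(x) proportional to exp( sum_y p(y|x) log p(x|y) ).
        q = np.exp(np.sum(P * logPab, axis=1))
        Pa_new = q / q.sum()
        if np.max(np.abs(Pa_new - Pa)) < k:
            Pa = Pa_new
            break
        Pa = Pa_new
    Pb = Pa @ P
    ratio = np.where((P > 0) & (Pb[None, :] > 0), P / Pb[None, :], 1.0)
    CC = np.sum(Pa[:, None] * P * np.log2(ratio))
    return CC, Pa

# Binary symmetric channel with crossover 0.1: capacity = 1 - H(0.1), about 0.531 bits.
P = np.array([[0.9, 0.1], [0.1, 0.9]])
print(channel_cap(P))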
[AI-NN-PR] xxs
DL : 0
Iterative algorithm for the channel capacity C. Function description: [CC, Paa] = ChannelCap(P, k) computes the channel capacity. Variables: P, the input forward transition probability matrix; k, the iteration precision; CC, the optimal channel capacity; Paa, the optimal input probability matrix; Pa, the initial input probability matrix; Pba, the forward transition probability matrix; Pb, the output probability matrix; Pab, the backward transition probability matrix; C, the initial channel capacity; r, the number of input symbols; s, the number of output symbols.
Date: 2025-12-29 | Size: 3kb | User: lijing
[AI-NN-PR] k-means
DL : 0
A community detection method based on the K-means clustering algorithm. A node correlation measure for the network is first defined and a node correlation matrix is built; on this basis, a K-means-based community detection method for complex networks is given. New cluster centers are selected by the minimum-correlation principle and nodes are assigned to clusters by the maximum-correlation principle until all nodes have been partitioned; finally, modularity is used to determine the ideal number of communities.
Date: 2025-12-29 | Size: 113kb | User: maverick
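The package's correlation measure and center-selection rules are its own; as a rough illustration of the general pipeline (cluster a node-similarity matrix, then score the partition with modularity), here is a sketch using networkx and scikit-learn. The adjacency rows are used as a crude stand-in for the node-correlation matrix, which is an assumption.

import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

# Toy graph with two obvious communities.
G = nx.barbell_graph(5, 0)
A = nx.to_numpy_array(G)

# Cluster each node's adjacency row as its feature vector.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(A)

communities = [set(int(i) for i in np.flatnonzero(labels == c))
               for c in np.unique(labels)]
print(communities)
print(nx.algorithms.community.modularity(G, communities))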
[AI-NN-PR] metric-learning_survey_v2
DL : 0
A survey of metric learning, covering a range of background material such as SVMs, kernels, and SDP. This paper surveys the field of distance metric learning from a principled perspective and includes a broad selection of recent work. In particular, distance metric learning is reviewed under different learning conditions: supervised versus unsupervised learning, learning in a global versus a local sense, and distance matrices based on linear versus nonlinear kernels. In addition, the paper discusses a number of techniques central to distance metric learning, including convex programming, positive semidefinite programming, kernel learning, dimensionality reduction, K nearest neighbors, large-margin classification, and graph-based approaches.
Date: 2025-12-29 | Size: 315kb | User: 刘建飞
[AI-NN-PR] GLCM_Features1
DL : 0
K-means clustering with the co-occurrence matrix of an image.
Date: 2025-12-29 | Size: 5kb | User: seriari
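The package is a MATLAB routine; as a rough Python sketch of the same idea (compute gray-level co-occurrence matrix texture features per patch and cluster them with K-means), the example below assumes scikit-image >= 0.19 (graycomatrix/graycoprops) and scikit-learn, and a synthetic image.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic 8-bit image: smooth left half, noisy right half.
img = np.hstack([np.full((64, 64), 128, np.uint8),
                 rng.integers(0, 256, (64, 64), dtype=np.uint8)])

feats, patch = [], 16
for r in range(0, img.shape[0], patch):
    for c in range(0, img.shape[1], patch):
        glcm = graycomatrix(img[r:r+patch, c:c+patch], distances=[1],
                            angles=[0], levels=256, symmetric=True, normed=True)
        feats.append([graycoprops(glcm, p)[0, 0]
                      for p in ("contrast", "homogeneity", "energy")])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(feats))
print(labels.reshape(img.shape[0] // patch, img.shape[1] // patch))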
[AI-NN-PR] KRT_from_P
DL : 0
This program decomposes a camera's intrinsic and extrinsic parameters (K, R, T) from its projection matrix, and works very well. It requires the OpenCV header files to be included.
Date: 2025-12-29 | Size: 2kb | User: 王维
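The package uses OpenCV in C/C++. The standard decomposition of a 3x4 projection matrix P = K[R | t] is an RQ factorization of its left 3x3 block; a numpy/scipy sketch of that approach (not the package's code) is below.

import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    # Split P = K [R | t] into intrinsics K, rotation R and translation t
    # via RQ decomposition of the left 3x3 block.
    M = P[:, :3]
    K, R = rq(M)
    # Force a positive diagonal on K (RQ is unique only up to signs).
    S = np.diag(np.sign(np.diag(K)))
    K, R = K @ S, S @ R
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], R, t

# Round-trip check with a synthetic camera.
K0 = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R0, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
t0 = np.array([0.1, -0.2, 2.0])
P = K0 @ np.hstack([R0, t0[:, None]])
K, R, t = decompose_projection(P)
print(np.allclose(K, K0), np.allclose(R, R0), np.allclose(t, t0))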
[AI-NN-PR] kmeans1
DL : 0
K-means algorithm; the steps are as follows. Step 1: compute the distance matrix D using Eq. (2), with entries dist[i, j]. Step 2: scan the distance matrix D for the maximum and minimum distances and compute limit using Eq. (3). Step 3: scan D for the two data points a and b with the smallest distance, add them to a new set {a, b}, remove a and b from U, and update D. Step 4: using Eq. (4), find the data sample t in U closest to the current set; if the distance is less than limit, add t to the set, remove t from U, update D, and repeat this step; otherwise stop. Step 5: if i < k, set i = i + 1 and repeat Steps 3 and 4 until k sets have been built. Step 6: take the arithmetic mean of the data in each set as that set's data center and compute its coordinates, completing the selection of the k data centers.
Date: 2025-12-29 | Size: 125kb | User: ming
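A Python sketch of the center-selection idea described above (grow k seed groups from the closest pair in the distance matrix, stop a group once the nearest remaining point exceeds a limit, then average each group). The limit rule here is a simplified assumption, since Eqs. (2)-(4) are not included in the listing, and none of this is the package's code.

import numpy as np
from scipy.spatial.distance import cdist

def select_centers(X, k, limit_frac=0.3):
    # `limit_frac` is an assumed stand-in for the limit formula of Eq. (3).
    D = cdist(X, X)
    offdiag = D[~np.eye(len(X), dtype=bool)]
    limit = offdiag.min() + limit_frac * (offdiag.max() - offdiag.min())
    U = set(range(len(X)))
    centers = []
    for _ in range(k):
        remaining = sorted(U)
        if len(remaining) < 2:
            break
        sub = D[np.ix_(remaining, remaining)]
        np.fill_diagonal(sub, np.inf)
        i, j = np.unravel_index(np.argmin(sub), sub.shape)
        group = {remaining[i], remaining[j]}          # closest remaining pair
        U -= group
        while U:
            cand = min(U, key=lambda p: min(D[p, g] for g in group))
            if min(D[cand, g] for g in group) >= limit:
                break
            group.add(cand)
            U.remove(cand)
        centers.append(X[list(group)].mean(axis=0))   # group mean = data center
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(2, 0.2, (20, 2))])
print(select_centers(X, k=2))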
[AI-NN-PR] k-nearest-neighbors-with-incremental-distance-upd
DL : 0
k-nearest-neighbors classification with incremental update of the distance matrix.
Date: 2025-12-29 | Size: 1kb | User: samira
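A minimal numpy sketch of the idea named above: maintain the pairwise distance matrix incrementally as points arrive (append one new row/column instead of recomputing everything), then classify by k-NN majority vote. The class name and toy data are assumptions, not the package's code.

import numpy as np
from collections import Counter

class IncrementalKNN:
    # Adding a point appends one row/column to the stored distance matrix.
    def __init__(self, k=3):
        self.k, self.X, self.y = k, None, []
        self.D = np.zeros((0, 0))

    def add(self, x, label):
        x = np.asarray(x, float)
        if self.X is None:
            self.X = x[None, :]
            self.D = np.zeros((1, 1))
        else:
            d = np.linalg.norm(self.X - x, axis=1)      # new distances only
            n = self.D.shape[0]
            D = np.zeros((n + 1, n + 1))
            D[:n, :n], D[n, :n], D[:n, n] = self.D, d, d
            self.D = D
            self.X = np.vstack([self.X, x])
        self.y.append(label)

    def predict(self, x):
        d = np.linalg.norm(self.X - np.asarray(x, float), axis=1)
        nearest = np.argsort(d)[:self.k]
        return Counter(self.y[i] for i in nearest).most_common(1)[0][0]

clf = IncrementalKNN(k=3)
for p, lab in [([0, 0], "a"), ([0, 1], "a"), ([1, 0], "a"),
               ([5, 5], "b"), ([5, 6], "b"), ([6, 5], "b")]:
    clf.add(p, lab)
print(clf.predict([0.2, 0.3]), clf.predict([5.4, 5.2]))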
[AI-NN-PR] DeepLearningDropout-master
DL : 2
Dropout combined with deep learning algorithms, with detailed usage instructions and datasets. Layer types: C, convolutional layer (matrix map); MP, max-pooling layer (matrix map); F, fully connected layer (vector map); O, output layer. Convolutional layer: scale (patch size), number of output maps (outputMap), shared weights k, bias b. Max-pooling layer: scale (patch size), max-coordinate matrix k (1 if max, 0 if not). Fully connected layer (dimension and number of feature maps stay the same): weight matrix w, bias b. Output layer (dimension equal to that of the output label): weight matrix w, bias b. Common parameters: result a, delta d.
Date: 2025-12-29 | Size: 35.9mb | User: 咕_噜
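The package defines its own C/MP/F/O layer types. As a rough modern equivalent of the described stack (convolution, max-pooling, fully connected, output, with dropout), here is a PyTorch sketch; PyTorch and the layer sizes are assumptions, not the package's framework or architecture.

import torch
import torch.nn as nn

# Layer stack mirroring the C -> MP -> F -> O structure described above,
# with dropout applied before the fully connected layer.
net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5),   # C: convolutional layer (weights k, bias b)
    nn.ReLU(),
    nn.MaxPool2d(2),                  # MP: max-pooling layer (patch size 2)
    nn.Flatten(),
    nn.Dropout(p=0.5),                # dropout on the flattened features
    nn.Linear(8 * 12 * 12, 64),       # F: fully connected layer (weights w, bias b)
    nn.ReLU(),
    nn.Linear(64, 10),                # O: output layer, 10 classes
)

x = torch.randn(4, 1, 28, 28)         # a batch of 4 single-channel 28x28 images
print(net(x).shape)                   # -> torch.Size([4, 10])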