codes comparison.txt
784-500-500-2000-10
Hinton's architecture (apply HJB, LF, LM) on the MNIST database (application-based work)
Not only the insertion but also the measurement mechanism is important.
Keep note of the NN structure and the code's data structures.
peculiar normalisation technique
random state
nnsetup
activation functions will be the same
The nn.W data structure is important.
Adding one constant unit for each layer
parameter modification
use only one set of weights per layer (the bias is absorbed through the constant unit)
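A minimal Octave-style sketch of the layout just described; the field name nn.W and the layer sizes follow the notes, while the function name and the initialisation scale are illustrative assumptions.

function nn = nnsetup_sketch(sizes)            % e.g. sizes = [784 500 500 2000 10]
  nn.size = sizes;
  nn.n    = numel(sizes);                      % number of layers
  for i = 2 : nn.n
    % one weight matrix per layer; the extra input column multiplies the
    % constant (bias) unit appended to the previous layer's activations
    nn.W{i-1} = 0.1 * randn(sizes(i), sizes(i-1) + 1);
  end
end

Storing the weights this way means the bias needs no separate data structure or update rule.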
nntrain
for each epoch: time of completion, minibatch error, total batch error.
{
opts
loss
numbatches is used over here
nnff // finds the current state
fills nn.a using the nn.W's; the same activation function is used in every layer.
'each row is a separate training sample and columns are the units'
nn.e = error in the last layer, per unit of each sample.
nn.L = total error for the batch.
finds the layer activations (neuron outputs), errors and losses
nnbp // finds the update
nnapplygrads // incorporates the updates
}
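A sketch of the training control flow listed in the block above, assuming nnff, nnbp and nnapplygrads exist with the roles described there; the function name and the exact opts fields are illustrative assumptions.

function nn = nntrain_sketch(nn, x, y, opts)
  m = size(x, 1);                              % each row is a separate training sample
  numbatches = floor(m / opts.batchsize);      % assumed opts field
  for epoch = 1 : opts.numepochs               % assumed opts field
    tic;
    kk = randperm(m);                          % reshuffle the samples every epoch
    L = zeros(numbatches, 1);
    for b = 1 : numbatches
      idx  = kk((b - 1) * opts.batchsize + 1 : b * opts.batchsize);
      nn   = nnff(nn, x(idx, :), y(idx, :));   % current state: nn.a, nn.e, nn.L
      nn   = nnbp(nn);                         % the update
      nn   = nnapplygrads(nn);                 % incorporate the update
      L(b) = nn.L;                             % minibatch error
    end
    % per epoch: time of completion and total batch error
    fprintf('epoch %d: %.2f s, total batch error %.4f\n', epoch, toc, sum(L));
  end
end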
nntest
one more nnff on the test data and taking the maximum at the output.
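A sketch of that test step, assuming the same nnff; the prediction is the index of the maximum output unit, and the helper name is an illustrative assumption.

function er = nntest_sketch(nn, x, y)
  nn = nnff(nn, x, y);                         % one more forward pass on the test data
  [~, pred]   = max(nn.a{end}, [], 2);         % maximum over the output units (columns)
  [~, labels] = max(y, [], 2);                 % y assumed one-hot, one row per sample
  er = mean(pred ~= labels);                   % misclassification rate
end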
Substitution
learning_rate (or eta) is a parameter
Use the system of Jacobians finally
* The only thing which changes with multiple layers is the Jacobian calculation. Supported by eq. 28, 29.
* Suitably modify the NN data structure: only the input layer will have a bias, whereas the other layers will not have any bias (see the sketch below).
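A small sketch of that data-structure change, contrasted with nnsetup_sketch above: only the first weight matrix carries the extra bias column. The function name is an illustrative assumption.

function nn = dnnsetup_sketch(sizes)           % e.g. sizes = [784 500 500 2000 10]
  nn.size = sizes;
  nn.n    = numel(sizes);
  for i = 2 : nn.n
    in = sizes(i - 1) + (i == 2);              % +1 bias column only after the input layer
    nn.W{i-1} = 0.1 * randn(sizes(i), in);
  end
end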
// learning multi-layer backprop
Kept the layer neurons in the form of functions
why is the constant not used?
The subcodes are independent of the number of layers; only the superficial (top-level) code has to be
changed to accommodate more layers.
dnn_findJ is analogous to nnff plus finding the Jacobian
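A self-contained sketch of the idea behind dnn_findJ as just described: one forward pass that also returns the Jacobian of the output layer with respect to every weight, for a single input column x. The name dnn_findJ comes from the notes; the body below is an illustrative reconstruction (sigmoid units in every layer, bias only at the input layer), not the project's actual code.

function [a, J] = dnn_findJ_sketch(W, x)
  sigm = @(z) 1 ./ (1 + exp(-z));
  L = numel(W) + 1;                            % number of layers
  a = cell(L, 1);
  a{1} = [x; 1];                               % bias unit only at the input layer
  for l = 1 : L - 1                            % forward pass, same activation in every layer
    a{l+1} = sigm(W{l} * a{l});
  end
  % S{l} = d a{L} / d z{l+1}, built backwards from the output layer
  S = cell(L - 1, 1);
  S{L-1} = diag(a{L} .* (1 - a{L}));           % sigmoid derivative at the output
  for l = L - 1 : -1 : 2
    S{l-1} = S{l} * W{l} * diag(a{l} .* (1 - a{l}));
  end
  % one Jacobian block per weight matrix; columns follow the W{l}(:) ordering
  J = [];
  for l = 1 : L - 1
    J = [J, kron(a{l}.', S{l})];               % d a{L} / d W{l}(p,q) = S{l}(:,p) * a{l}(q)
  end
end

Because the sensitivity recursion simply runs over however many weight matrices are in W, this subcode does not change when layers are added; only the calling code that builds W and loops over samples has to be adjusted, which matches the note above. With W as built by dnnsetup_sketch and a 784x1 input, J has one row per output unit and one column per weight.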