New training option #63
base: master
Conversation
Added a new training option that enables users to specify if the weights of the neural network have to be (re)initialized or not.
@antoniodeluca , if this does what I think it does, you're my hero! Say I've been training a net for 2 days, and it's 84 iterations in by this point (turns out understanding human attraction is rather complex, even for a computer). I'd love to be able to train that same net even more at some point in the future, to take advantage of the two days worth of work that has already gone into training it. If your PR does that, I'd be super happy! I'd been debating how complex this would be to implement myself; I'm glad I bothered checking out the PRs here to find you've already figured it out, and see that the tests for this are longer than the source code! |
@ClimbsRocks , yes it does exactly what you need. Hope it will help you. |
It already is! I created GridSearch for this library, and then at the end, I wanted to train the best classifier for significantly longer. Allowing the classifier to warm start at the point it had previously trained to has been awesome! It's part of a larger project, but I hope to spin gridSearch out into its own repo pretty soon to make harthur's awesome brainjs even more accessible to developers. I love her philosophy of abstracting away complexity, which pretty directly contrasts with scikit-learn's approach. I'm actually trying to build out my project to take that philosophy of avoiding unnecessary complexity to an even higher level. Thanks again for this critical component of it! |
Commit messages:
per https://github.com/harthur/brain/pull/63/files: This allows a brain that has already been trained to continue to train on top of what it has already learned. Previously, every call to .train() would start fresh with a totally new neural net, even if you had previously trained one for several days.
tests warm start ability, again borrowed from: harthur#63
updates package.json for warmBrain
explains difference between warmBrain and harthur/brain
removes capital letter in name
fixes formatting
bumps version number
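A minimal sketch of the warm-start flow these commits describe; the option name follows the initialization flag used in harthur#63, the warmBrain fork may expose it differently, and the data here is just illustrative:
var brain = require('brain'); // or the warmBrain fork mentioned above
var net = new brain.NeuralNetwork();
var data = [{input: [0, 0], output: [0]}, {input: [1, 1], output: [1]}];
net.train(data); // the initial (possibly very long) training run
// later: continue from the learned weights instead of starting from scratch
net.train(data, { initialization: false });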
Thanks a lot, I'll need this too, and I prefer this to using streams or anything else. It's going to be useful in my AI development. You made my day. |
For me this pull request does not work. Here's a simple example: net.train([{input: [1, 1], output: [0]}]); //trains 1,1 => 0
net.runInput([1, 1]); //results 0.0702838678761908 => correct
net.train([{input: [1, 0], output: [1]}]); //trains 1,0 => 1
net.runInput([1, 1]); //results 0.850795812504698 => incorrect
net.runInput([1, 0]); //results 0.929557447931592 => correct I've simply commented out the initialize call. To me it seems that the many iterations spent learning the new 1,0 => 1 simply overwrite the learned 1,1 => 0 as a result of the learning itself, not because of resetting the weights. |
@marcj, your code is not working because you are not passing the "initialization" parameter. First training session: net.train(data, { ... }); Second training session: net.train(data, { initialization: false, ... }); |
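For readers following along, a minimal sketch of the two sessions being described; only the initialization flag comes from this PR, iterations and errorThresh are ordinary train() options with illustrative values, and data/moreData stand for your training sets:
// first training session: weights get initialized as usual
net.train(data, { iterations: 20000, errorThresh: 0.005 });
// second training session: keep the already-learned weights
net.train(moreData, { initialization: false, iterations: 20000 });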
Well, as I said:
What I mean by that is that I deleted the line that initializes the weights. |
Is there a reason why you only use one iteration for the second training? I guess with so few iterations the network has no chance to learn enough to get the errors low. Also, with one iteration there's no need to pass any errorThreshold, because the network has no chance to react to a high error rate within a single iteration. |
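Concretely, the suggestion amounts to something like this; the values are illustrative, and errorThresh is the option name harthur/brain uses for the error threshold mentioned above:
// give the continued training a real iteration budget; a single pass cannot move the weights much
net.train([{input: [1, 0], output: [1]}], {
  initialization: false,
  iterations: 20000,  // illustrative; a default-sized budget rather than a single pass
  errorThresh: 0.005  // only has an effect when there are enough iterations to react to it
});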
@marcj, I do not think that the initialize function should be removed. |
@marcj, the second training is done with one iteration just as an example. |
Hi, I'm trying to use your patch by loading a neural network from a file first, however it fails with an error at brain/lib/neuralnetwork.js:180. The way I am doing it is to load the file with fromJSON and then run train() with the 'initialization' parameter set to false, but it doesn't seem to work. Apparently some default values are set in the init method that is disabled. What can I do? Thanks |
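For context, a sketch of the flow being attempted here, assuming the patch's 'initialization' flag; whether fromJSON restores everything the skipped init step would otherwise set up is exactly the question raised in this comment (the file name and data are illustrative):
var fs = require('fs');
var brain = require('brain');
var net = new brain.NeuralNetwork();
// restore previously saved weights from disk
net.fromJSON(JSON.parse(fs.readFileSync('net.json', 'utf8')));
// then try to continue training without re-initializing
net.train([{input: [1, 0], output: [1]}], { initialization: false });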
@antoniodeluca, well, I did pass the 'initialization' parameter. Can you confirm this behavior with this code? net.train([{input: [1, 1], output: [0]}]); //trains 1,1 => 0
net.runInput([1, 1]); //results 0.0702838678761908 => correct
net.train([{input: [1, 0], output: [1]}], {initialization: false}); //trains 1,0 => 1
net.runInput([1, 1]); //results 0.850795812504698 => incorrect
net.runInput([1, 0]); //results 0.929557447931592 => correct |
Would you consider closing this here, and reopening at: https://github.com/harthur-org/brain.js/pulls? If so, we could add you as a collaborator. |
Yes, sure. I'll do it today or tomorrow. |
Why
_.isUndefined(options.initialization) ? true : options.initialization;
Maybe this can be written as
_.isUndefined(options.initialization) || options.initialization;
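Both forms default the flag to true when it is absent and otherwise pass the given value through; a small sketch using underscore's _.isUndefined, as in the patch:
var _ = require('underscore');
var options = { /* initialization possibly set by the caller */ };
// form used in the patch
var a = _.isUndefined(options.initialization) ? true : options.initialization;
// suggested shorthand: gives the same result for boolean values of options.initialization
var b = _.isUndefined(options.initialization) || options.initialization;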
Some time back I added a keepNetworkIntact option. |
Usage example: const net = new brain.NeuralNetwork();
net.train([]);
//sometime later
net.train([], { keepNetworkIntact: true }); |
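This pairs naturally with toJSON/fromJSON for resuming training across processes; a rough sketch, with the file name and training data purely illustrative:
const fs = require('fs');
fs.writeFileSync('net.json', JSON.stringify(net.toJSON())); // persist the trained state
// ...in a later run
const net2 = new brain.NeuralNetwork();
net2.fromJSON(JSON.parse(fs.readFileSync('net.json', 'utf8')));
net2.train([{input: [1, 1], output: [1]}], { keepNetworkIntact: true }); // continue from restored weights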
I installed via 'npm install brain.js' and it does not have the keepNetworkIntact argument. What? |
We need to bump the version. In the meantime, can you install via: |
@robertleeplummerjr thank you, that works, and the 'keepNetworkIntact' arg works too! |
Added a new training option that enables users to specify whether the weights of the neural network have to be (re)initialized or not. This is very useful when a user wants to call the neural network training method more than once, or for any other case where one doesn't want the weights to change.
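A compact sketch of the option as described above (the data values are illustrative):
var net = new brain.NeuralNetwork();
net.train([{input: [0, 1], output: [1]}]); // weights initialized as usual
net.train([{input: [1, 0], output: [1]}], { initialization: false }); // reuse the already-trained weights
net.train([{input: [1, 1], output: [0]}], { initialization: true });  // explicit fresh start (the default)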