Hello,
I'm looking for practical resources and code in Julia for a restricted Boltzmann machine with L2 regularization. Thanks for your help.
Boltzmann.jl supports both L1 and L2 regularization (although it's not documented yet):
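For example (a sketch from memory, so the exact constructor name and the `n_epochs` keyword may differ between versions; check the package README):

```julia
using Boltzmann

X = rand(20, 1000)          # 20 features x 1000 observations, one observation per column
rbm = BernoulliRBM(20, 10)  # 20 visible units, 10 hidden units
fit(rbm, X;
    n_epochs=10,
    weight_decay_kind=:l2,  # :l1 is supported as well
    weight_decay_rate=0.01)
```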
Note that observations should be in columns, which is in line with many other machine learning packages but may differ from statistical packages, which often put observations in rows.

On Monday, July 18, 2016 at 6:22:19 PM UTC+3, Ahmed Mazari wrote:
Thank you, it helps me.

On Tue, Jul 19, 2016 at 8:42 AM, Andrei Zh <[hidden email]> wrote:
Here are my weight updates between the VISIBLE and HIDDEN units. This is the code for the standard weight update:

```julia
# h : hidden, v : visible
gemm!('N', 'T', lr, h_neg, v_neg, 0.0, rbm.dW)
# I think the regularization has to go here, between the two calls
gemm!('N', 'T', lr, h_pos, v_pos, -1.0, rbm.dW)
```

Now I want to modify these two calls to add L2 regularization. How can I do that efficiently? Any ideas? Thanks for the help, I'm new to these concepts.

On Tue, Jul 19, 2016 at 8:42 AM, Andrei Zh <[hidden email]> wrote:
Seems like you are looking at a terribly outdated version of Boltzmann.jl; try updating to the latest master.
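Something like this should work (a sketch assuming the Julia 0.4/0.5-era package manager, where `Pkg.checkout` switches a package to its master branch):

```julia
Pkg.add("Boltzmann")       # install the registered version first, if needed
Pkg.checkout("Boltzmann")  # then switch to the latest master
```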
L2 regularization is essentially an additional term in the loss function that you try to minimize. You can add this term to the loss function itself, or add the gradient of this term to the gradient of the loss function. Boltzmann.jl uses the second approach, splitting gradient calculation into 2 parts:

1. Calculate the original gradient (`gradient_classic` function).
2. Apply "updaters" such as learning rate, momentum, weight decay, etc. (`grad_apply_*` functions).

Regularization (both L1 and L2) is implemented in `grad_apply_weight_decay!` and boils down to the expression `dW -= decay_rate * rbm.W`, where `decay_rate` is the L2 hyperparameter, `rbm.W` is the current set of parameters (minus biases) and `dW` is the currently calculated weight gradient. So to use L2 regularization you only need to add the parameters `weight_decay_kind=:l2` and `weight_decay_rate=<your rate>` to the `fit` function (see my first post for an example).
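For completeness, here is a minimal, self-contained sketch of the whole update in plain Julia. The names `h_pos`, `v_neg`, `lr` and so on follow your snippet, the matrices and dimensions are made up, and whether the decay term is additionally scaled by the learning rate varies between implementations:

```julia
using LinearAlgebra: axpy!
using LinearAlgebra.BLAS: gemm!

n_vis, n_hid, n_obs = 6, 4, 10
lr, decay_rate = 0.1, 0.01

W  = randn(n_hid, n_vis)                     # stands in for rbm.W
dW = zeros(n_hid, n_vis)                     # stands in for rbm.dW
v_pos, v_neg = rand(n_vis, n_obs), rand(n_vis, n_obs)
h_pos, h_neg = rand(n_hid, n_obs), rand(n_hid, n_obs)

# Standard CD gradient: dW = lr * (h_pos * v_pos' - h_neg * v_neg')
gemm!('N', 'T', lr, h_neg, v_neg, 0.0, dW)   # dW = lr * h_neg * v_neg'
gemm!('N', 'T', lr, h_pos, v_pos, -1.0, dW)  # dW = lr * h_pos * v_pos' - dW

# L2 weight decay: dW -= decay_rate * W
axpy!(-decay_rate, W, dW)

W .+= dW                                     # gradient ascent step on the log-likelihood
```

On Wednesday, July 20, 2016 at 5:26:15 PM UTC+3, Ahmed Mazari wrote: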