Restricted Boltzmann machine with L2 regularization

Restricted Boltzmann machine with L2 regularization

Ahmed Mazari
Hello,

I'm looking for practical resources and code in Julia for a restricted Boltzmann machine with L2 regularization.

Thanks for your help.

Re: Restricted Boltzmann machine with L2 regularization

Andrei Zh
Boltzmann.jl supports both L1 and L2 regularization (although it's not documented yet):

# install if needed
Pkg.add("Boltzmann")

using Boltzmann

# create dataset: 100 features (visible units) x 2000 observations
X = randn(100, 2000)
X = (X + abs(minimum(X))) / (maximum(X) - minimum(X))  # shift and rescale into [0, 1]
rbm = BernoulliRBM(100, 50)  # 100 visible units, 50 hidden units

# fit with L2 regularization (weight decay)
fit(rbm, X; weight_decay_kind=:l2, weight_decay_rate=0.9)

Note that observations should be in columns, which is in line with many other machine learning packages but may differ from statistical packages, which often put observations in rows.
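
If your data instead has observations in rows (say a 2000 x 100 matrix), a quick sketch of the fix is simply to transpose it before fitting (`X_rows` below is just an illustrative name):

X_rows = rand(2000, 100)   # 2000 observations x 100 features, values already in [0, 1]
fit(rbm, X_rows'; weight_decay_kind=:l2, weight_decay_rate=0.9)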


Re: Restricted Boltzmann machine with L2 regularization

Ahmed Mazari
Thank you, it helps me.

Re: Restricted Boltzmann machine with L2 regularization

Ahmed Mazari
In reply to this post by Andrei Zh
Here are my weight updates between the VISIBLE and HIDDEN units:

# h : hidden, v : visible
    gemm!('N', 'T', lr, h_neg, v_neg, 0.0, rbm.dW)   # dW = lr * h_neg * v_neg'  (negative phase)
    gemm!('N', 'T', lr, h_pos, v_pos, -1.0, rbm.dW)  # dW = lr * h_pos * v_pos' - dW  (positive phase)

This is the code for the standard weight update.

Now I want to modify these two calls to add L2 regularization. How can I do that efficiently? Any ideas?
I think the change needs to go between the two calls:

gemm!('N', 'T', lr, h_neg, v_neg, 0.0, rbm.dW)
# I think the regularization goes here
gemm!('N', 'T', lr, h_pos, v_pos, -1.0, rbm.dW)

Thanks for the help, I'm new to these concepts.

Re: Restricted Boltzmann machine with L2 regularization

Andrei Zh
It seems like you are looking at a badly outdated version of Boltzmann.jl; try updating to the latest master.

L2 regularization is essentially an additional term in the loss function that you try to minimize. You can either add this term to the loss function itself or add the gradient of this term to the gradient of the loss function. Boltzmann.jl uses the second approach, splitting the gradient calculation into two parts:

1. Calculate the original gradient (the `gradient_classic` function).
2. Apply "updaters" such as learning rate, momentum, weight decay, etc. (the `grad_apply_*` functions).
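
To make the second approach concrete, here is a tiny hand-written illustration of the L2 term (not the package's code; `decay_rate` plays the role of the L2 coefficient):

l2_penalty(W, decay_rate) = decay_rate / 2 * sum(W .^ 2)  # term added to the loss
l2_grad(W, decay_rate) = decay_rate * W                   # its gradient w.r.t. the weights

With this penalty, each update subtracts `decay_rate * W` from the weight gradient, which is exactly what happens below.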

Regularization (both L1 and L2) is implemented in `grad_apply_weight_decay!`; for L2 it boils down to the expression:

axpy!(-decay_rate, rbm.W, dW)

where `decay_rate` is the L2 hyperparameter, `rbm.W` is the current set of parameters (excluding biases), and `dW` is the weight gradient computed so far.
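
To connect this back to your `gemm!` snippet: a rough sketch of the combined update, written out by hand (the wrapper function and its signature are made up for illustration; only the `gemm!`/`axpy!` calls mirror the actual code, and the current master may order the learning-rate and decay steps differently):

import Base.LinAlg.BLAS: gemm!, axpy!   # same BLAS helpers the package uses (Julia 0.4/0.5 era)

function cd_weight_grad!(rbm, lr, v_pos, h_pos, v_neg, h_neg, decay_rate)
    gemm!('N', 'T', lr, h_neg, v_neg, 0.0, rbm.dW)   # dW = lr * h_neg * v_neg'
    gemm!('N', 'T', lr, h_pos, v_pos, -1.0, rbm.dW)  # dW = lr * (h_pos * v_pos' - h_neg * v_neg')
    axpy!(-decay_rate, rbm.W, rbm.dW)                # L2 weight decay: dW -= decay_rate * W
    return rbm.dW
end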

So to use L2 regularization you only need to pass `weight_decay_kind=:l2` and `weight_decay_rate=<your rate>` to the `fit` function (see my first post for an example).

