Is the master algorithm on the roadmap?

Is the master algorithm on the roadmap?

Kevin Liu-2
I am wondering how Julia fits in with the unified tribes.

http://mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ

https://www.youtube.com/watch?v=B8J4uefCQMc

Re: Is the master algorithm on the roadmap?

Isaiah Norton
This is not a forum for wildly off-topic, speculative discussion.

Take this to Reddit, Hacker News, etc.



Re: Is the master algorithm on the roadmap?

Kevin Liu-2
There could be parts missing, as Domingos mentions, but induction, backpropagation, genetic programming, probabilistic inference, and SVMs working together: what's speculative about improved versions of these?

Julia was made for AI. Isn't it time for a consolidated view on how to reach it? 



Re: Is the master algorithm on the roadmap?

Kevin Liu-2
If Andrew Ng (who cited Gates) and Gates (who cited Domingos, who did not lecture at Google with a TensorFlow question at the end) were unsuccessful penny traders, if Julia were a language for web design, and if the tribes in the video didn't actually solve problems, then perhaps this would be a wildly off-topic, speculative discussion. But these statements couldn't be further from the truth. In fact, if I had known about this video some months ago, I would have better understood how to solve a problem I was working on.

For the founders of Julia: I understand your tribe is mainly CS. This master algorithm, as you are aware, would require collaboration with other tribes. Just stating the obvious.



Re: Is the master algorithm on the roadmap?

Kevin Liu-2
Correction: founders, is your tribe mainly one of Symbolists?



Re: Is the master algorithm on the roadmap?

Kevin Liu-2


[Attachment: Tribe Problem Solution.png (546K)]

Re: Is the master algorithm on the roadmap?

Tom Breloff
In reply to this post by Kevin Liu-2
Kevin: computers that program themselves are much closer to reality than most would believe, but julia-users isn't really the best place for this speculation. If you're actually interested in writing code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we might tackle code generation using a neural framework I'm working on.



Re: Is the master algorithm on the roadmap?

Kevin Liu-2
I plan to write Julia for the rest of my life... given it remains suitable. I am still reading all of Colah's material on nets. I ran Mocha.jl a couple of weeks ago and was very happy to see it work. Thanks for jumping in and telling me about OnlineAI.jl; I will look into it once I am ready. From a quick look, perhaps I could help and learn by writing very clear documentation for it. I would really like to see Julia a leap ahead of other languages, and I plan to contribute heavily, but at the moment I am still getting introduced to CS, programming, and nets at a basic level.



Re: Is the master algorithm on the roadmap?

Kevin Liu-2
https://github.com/tbreloff/OnlineAI.jl/issues/5



Re: Is the master algorithm on the roadmap?

Kevin Liu-2
Hello Community,

I'm in the last pages of Pedro Domingos' book, The Master Algorithm, one of two books Bill Gates recommended for learning about AI.

From the book, I understand that every learner has to represent, evaluate, and optimize, and many types of learners do this. What Domingos does is generalize these three parts: (1) a Markov logic network to represent, (2) the posterior probability to evaluate, and (3) genetic search with gradient descent to optimize. The posterior can be replaced with another accuracy measure when that is easier, just as genetic search can be replaced by hill climbing. Where there are fifteen popular options for representing, evaluating, and optimizing, Domingos generalizes them into three. The idea is to have one unified learner for any application.

There is already an implementation, Alchemy: https://alchemy.cs.washington.edu/. My question: is anybody in the community interested in porting it to Julia?

Thanks, Kevin
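To make the represent/evaluate/optimize decomposition concrete, here is a minimal sketch in Julia. It is not Domingos' algorithm: the names are illustrative, a one-weight linear model stands in for the representation, squared error stands in for the posterior, and plain gradient descent stands in for genetic search plus gradient descent.

```julia
# Representation: the hypothesis class, here parameterised by a single weight.
predict(w, x) = w * x

# Evaluation: a scoring function; squared error stands in for the
# (negative log) posterior mentioned in the book.
score(w, xs, ys) = sum((predict(w, x) - y)^2 for (x, y) in zip(xs, ys))

# Optimization: plain gradient descent on the score.
function optimize(xs, ys; w = 0.0, lr = 0.01, steps = 1000)
    for _ in 1:steps
        grad = sum(2 * (predict(w, x) - y) * x for (x, y) in zip(xs, ys))
        w -= lr * grad
    end
    return w
end

xs = [1.0, 2.0, 3.0]
ys = 2 .* xs                  # data generated by w = 2
w = optimize(xs, ys)
println(round(w, digits = 2))  # recovers a weight close to 2.0
```

Swapping any one of the three pieces (e.g. hill climbing for the optimizer) leaves the other two untouched, which is the point of the generalization.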



Re: Is the master algorithm on the roadmap?

Kevin Liu-2
Alchemy is like an inductive Turing machine: it can be programmed to learn broadly or narrowly.

The logic formulas it uses for representation can be inconsistent, incomplete, or even incorrect; the learning and probabilistic reasoning will correct them. The key point is that Alchemy doesn't have to learn from scratch, and it aims to perform well across many classes of problems, not just a few, in the spirit of challenging Wolpert and Macready's no-free-lunch theorem.
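The weighted-formula idea behind this can be sketched in a few lines of Julia. This is a toy illustration, not Alchemy's actual API: each formula carries a weight, and a possible world is scored by the exponential of the total weight of the formulas it satisfies, so a violated formula makes a world less probable rather than impossible.

```julia
# Weighted ground formulas over a world `w` (a Dict of atom => truth value).
formulas = [
    # friends tend to have the same smoking habits
    (1.5, w -> !w["Friends(A,B)"] || (w["Smokes(A)"] == w["Smokes(B)"])),
    # most people don't smoke
    (0.8, w -> !w["Smokes(A)"]),
]

# Unnormalised score of a world: exp of the summed weights of satisfied formulas.
score(w) = exp(sum((wt for (wt, f) in formulas if f(w)); init = 0.0))

w1 = Dict("Smokes(A)" => true, "Smokes(B)" => true,  "Friends(A,B)" => true)
w2 = Dict("Smokes(A)" => true, "Smokes(B)" => false, "Friends(A,B)" => true)

println(score(w1) > score(w2))  # true: w1 satisfies the friendship formula, w2 violates it
```

This is why inconsistent or incorrect formulas are tolerable: a bad formula lowers the score of worlds it misjudges instead of ruling them out, and weight learning can later drive its weight toward zero.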

On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:
Hello Community, 

I'm in the last pages of Pedro Domingos' book, the Master Algo, one of two recommended by Bill Gates to learn about AI. 

From the book, I understand all learners have to represent, evaluate, and optimize. There are many types of learners that do this. What Domingos does is generalize these three parts, (1) using Markov Logic Network to represent, (2) posterior probability to evaluate, and (3) genetic search with gradient descent to optimize. The posterior can be replaced for another accuracy measure when it is easier, as genetic search replaced by hill climbing. Where there are 15 popular options for representing, evaluating, and optimizing, Domingos generalized them into three options. The idea is to have one unified learner for any application. 

There is code already done in R <a href="https://alchemy.cs.washington.edu/" target="_blank" rel="nofollow" onmousedown="this.href=&#39;https://www.google.com/url?q\x3dhttps%3A%2F%2Falchemy.cs.washington.edu%2F\x26sa\x3dD\x26sntz\x3d1\x26usg\x3dAFQjCNE4fL9aRXcvBo4x6ipiLIqF5C9F3A&#39;;return true;" onclick="this.href=&#39;https://www.google.com/url?q\x3dhttps%3A%2F%2Falchemy.cs.washington.edu%2F\x26sa\x3dD\x26sntz\x3d1\x26usg\x3dAFQjCNE4fL9aRXcvBo4x6ipiLIqF5C9F3A&#39;;return true;">https://alchemy.cs.washington.edu/. My question: anybody in the community vested in coding it into Julia?

Thanks. Kevin

On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote:
<a href="https://github.com/tbreloff/OnlineAI.jl/issues/5" rel="nofollow" target="_blank" onmousedown="this.href=&#39;https://www.google.com/url?q\x3dhttps%3A%2F%2Fgithub.com%2Ftbreloff%2FOnlineAI.jl%2Fissues%2F5\x26sa\x3dD\x26sntz\x3d1\x26usg\x3dAFQjCNFQGuYsD57TloxPmjHSWrAhRfNoPw&#39;;return true;" onclick="this.href=&#39;https://www.google.com/url?q\x3dhttps%3A%2F%2Fgithub.com%2Ftbreloff%2FOnlineAI.jl%2Fissues%2F5\x26sa\x3dD\x26sntz\x3d1\x26usg\x3dAFQjCNFQGuYsD57TloxPmjHSWrAhRfNoPw&#39;;return true;">https://github.com/tbreloff/OnlineAI.jl/issues/5

On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
I plan to write Julia for the rest of me life... given it remains suitable. I am still reading all of Colah's material on nets. I ran Mocha.jl a couple weeks ago and was very happy to see it work. Thanks for jumping in and telling me about OnlineAI.jl, I will look into it once I am ready. From a quick look, perhaps I could help and learn by building a very clear documentation of it. Would really like to see Julia a leap ahead of other languages, and plan to contribute heavily to it, but at the moment am still getting introduced to CS, programming, and nets at the basic level. 

On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
Kevin: computers that program themselves is a concept which is much closer to reality than most would believe, but julia-users isn't really the best place for this speculation. If you're actually interested in writing code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we might tackle code generation using a neural framework I'm working on. 

On Friday, June 3, 2016, Kevin Liu <[hidden email]> wrote:
If Andrew Ng (who cited Gates) and Gates (who cited Domingos, whose Google lecture ended with a TensorFlow question) were unsuccessful penny traders, if Julia were a language for web design, and if the tribes in the video didn't actually solve problems, then perhaps this would be a wildly off-topic, speculative discussion. But these statements couldn't be farther from the truth. In fact, if I had known about this video some months ago, I would've understood better how to solve a problem I was working on.  

For the founders of Julia: I understand your tribe is mainly CS. This master algorithm, as you are aware, would require collaboration with other tribes. Just citing the obvious. 



Re: Is the master algorithm on the roadmap?

Kevin Liu-2
The Markov logic network represents a probability distribution over the states of a complex system (e.g., a cell) composed of entities, where logic formulas encode the dependencies between them. 
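To make that distribution concrete: an MLN assigns each possible world x a probability P(x) = exp(sum_i w_i * n_i(x)) / Z, where n_i(x) counts the true groundings of formula i and Z normalizes. Here is a minimal brute-force sketch in Python (the two-person domain, predicates, and weights are invented for illustration; a Julia version would be the natural goal for this thread):

```python
import itertools
import math

# Tiny made-up domain: two people, three predicates.
people = ["Anna", "Bob"]
atoms = ([("Smokes", p) for p in people]
         + [("Cancer", p) for p in people]
         + [("Friends", p, q) for p in people for q in people if p != q])

def n_smoking_causes_cancer(world):
    # number of true groundings of: Smokes(x) => Cancer(x)
    return sum(1 for p in people
               if (not world[("Smokes", p)]) or world[("Cancer", p)])

def n_friends_smoke_alike(world):
    # number of true groundings of: Friends(x,y) => (Smokes(x) <=> Smokes(y))
    return sum(1 for p in people for q in people if p != q
               if (not world[("Friends", p, q)])
               or world[("Smokes", p)] == world[("Smokes", q)])

# Each formula carries a weight; larger weight = stronger constraint.
formulas = [(1.5, n_smoking_causes_cancer),
            (1.1, n_friends_smoke_alike)]

def weight(world):
    # unnormalized probability: exp(sum_i w_i * n_i(world))
    return math.exp(sum(w * n(world) for w, n in formulas))

# Enumerate all 2^6 truth assignments to the ground atoms.
worlds = [dict(zip(atoms, values))
          for values in itertools.product([False, True], repeat=len(atoms))]
Z = sum(weight(w) for w in worlds)  # partition function, by brute force

def probability(world):
    return weight(world) / Z
```

A world that violates a grounding (say, a smoker without cancer) loses a factor of exp(w) relative to one that satisfies it; real MLN engines like Alchemy avoid this exponential enumeration with approximate inference.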

On Wednesday, August 3, 2016 at 4:19:09 PM UTC-3, Kevin Liu wrote:
Alchemy is like an inductive Turing machine, to be programmed to learn broadly or narrowly.

The logic formulas encoding its rules can be inconsistent, incomplete, or even incorrect; the learning and probabilistic reasoning will correct them. The key point is that Alchemy doesn't have to learn from scratch, proving Wolpert and Macready's no free lunch theorem wrong by performing well on a variety of classes of problems, not just some.

On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:
Hello Community, 

I'm in the last pages of Pedro Domingos' book, The Master Algorithm, one of two recommended by Bill Gates to learn about AI. 

From the book, I understand all learners have to represent, evaluate, and optimize. There are many types of learners that do this. What Domingos does is generalize these three parts, (1) using a Markov logic network to represent, (2) the posterior probability to evaluate, and (3) genetic search with gradient descent to optimize. The posterior can be replaced by another accuracy measure when that is easier, just as genetic search can be replaced by hill climbing. Where there are 15 popular options for representing, evaluating, and optimizing, Domingos generalized them into three. The idea is to have one unified learner for any application. 

There is code already done in R https://alchemy.cs.washington.edu/. My question: anybody in the community vested in coding it into Julia?

Thanks. Kevin



Re: Is the master algorithm on the roadmap?

Kevin Liu-2
Another important cool thing worth noting: Domingos added the possibility of attaching weights to rules (see attachment). Each line is equivalent to a desired conclusion. 
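For context, Alchemy's rule files pair each first-order formula with a weight on the same line. The fragment below is my recollection of the syntax from the Alchemy manual, with invented predicates, so treat it as illustrative rather than exact:

```
// predicate declarations (argument types in parentheses)
Smokes(person)
Cancer(person)
Friends(person, person)

// weighted formulas: the larger the weight, the stronger the constraint
1.5  Smokes(x) => Cancer(x)
1.1  Friends(x, y) => (Smokes(x) <=> Smokes(y))

// a formula ending in a period is a hard constraint (infinite weight)
Friends(x, y) => Friends(y, x).
```

A weight near zero makes a formula nearly irrelevant, while a hard rule must hold in every world with nonzero probability.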



Attachment: Screen Shot 2016-08-03 at 16.30.41.png (55K)

Re: Is the master algorithm on the roadmap?

Christof Stocker
In reply to this post by Kevin Liu-2
Hello Kevin,

Enthusiasm is a good thing and you should hold on to it. But to save yourself some headache or disappointment down the road, I advise a level head. Nothing is as bluntly, obviously solved as it may seem at first glance after listening to brilliant people explain things. A physics professor of mine once told me that one of the (he thinks) most damaging factors to his past students' progress was overstated results and conclusions by other researchers (such as premature announcements from CERN). I am no mathematician, but as far as I can judge, the no free lunch theorem is of a purely mathematical nature and not something induced empirically. Such results are not easily gotten rid of. If someone (especially an expert) states that such a theorem will prove wrong, I would be inclined to believe that he is not speaking literally, but is instead just trying to make a point about a more or less practical implication.
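For reference, the Wolpert and Macready result is usually stated roughly as follows (their notation, quoted from memory, so check the original paper): for any two optimization algorithms a_1 and a_2, summed over all possible objective functions f,

```latex
\sum_{f} P\!\left(d_m^y \mid f, m, a_1\right) \;=\; \sum_{f} P\!\left(d_m^y \mid f, m, a_2\right)
```

where d_m^y is the sequence of m cost values the algorithm has observed. Averaged over all problems, every algorithm performs identically; claims of beating it therefore have to mean something narrower, such as doing well on the restricted class of problems one actually cares about.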



Re: Is the master algorithm on the roadmap?

Kevin Liu-2
In reply to this post by Kevin Liu-2
Alchemy is also less expensive and less opaque than Watson's meta-learning: 'I believe you have prostate cancer because the decision tree, the genetic algorithm, and Naïve Bayes say so, although the multilayer perceptron and the SVM disagree.'
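The quoted Watson-style verdict is essentially a weighted ensemble vote. A minimal sketch of that idea (classifier names, labels, and weights are invented for illustration):

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Return the label whose supporting classifiers carry the most total weight.

    predictions: {classifier_name: label}
    weights: {classifier_name: float}; missing names default to weight 1.0
    """
    totals = defaultdict(float)
    for name, label in predictions.items():
        totals[label] += weights.get(name, 1.0)
    return max(totals, key=totals.get)

# Five hypothetical base learners casting verdicts on one patient.
verdicts = {
    "decision_tree": "cancer",
    "genetic_algorithm": "cancer",
    "naive_bayes": "cancer",
    "mlp": "healthy",
    "svm": "healthy",
}

equal = {name: 1.0 for name in verdicts}  # give every model equal say
diagnosis = weighted_vote(verdicts, equal)  # "cancer" wins 3.0 to 2.0
```

The opacity Domingos criticizes is visible here: the ensemble's answer depends on arbitrary weights over disagreeing models, whereas an MLN keeps one unified model whose formulas can be inspected.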



Re: Is the master algorithm on the roadmap?

Kevin Liu-2
In reply to this post by Christof Stocker
Thanks for the advice, Christof. I am only interested in people who want to code it into Julia from the R version Domingos points to. The algo has been successfully applied in many areas, even though many other areas remain. 

On Wed, Aug 3, 2016 at 4:45 PM, Christof Stocker <[hidden email]> wrote:
Hello Kevin,

Enthusiasm is a good thing and you should hold on to that. But to save yourself some headache or disappointment down the road I advice a level head. Nothing is really as bluntly obviously solved as it may seems at first glance after listening to brilliant people explain things. A physics professor of mine once told me that one of the (he thinks) most malicious factors to his past students progress where overstated results/conclusions by other researches (such as premature announcements from CERN). I am no mathematician, but as far as I can judge is the no free lunch theorem of pure mathematical nature and not something induced empirically. These kind of results are not that easily to get rid of. If someone (especially an expert) states such a theorem will prove wrong I would be inclined to believe that he is not talking about literally, but instead is just trying to make a point about a more or less practical implication.


Am Mittwoch, 3. August 2016 21:27:05 UTC+2 schrieb Kevin Liu:
The Markov logic network represents a probability distribution over the states of a complex system (i.e. a cell), comprised of entities, where logic formulas encode the dependencies between them. 

On Wednesday, August 3, 2016 at 4:19:09 PM UTC-3, Kevin Liu wrote:
Alchemy is like an inductive Turing machine, to be programmed to learn broadly or restrictedly.

The logic formulas from rules through which it represents can be inconsistent, incomplete, or even incorrect-- the learning and probabilistic reasoning will correct them. The key point is that Alchemy doesn't have to learn from scratch, proving Wolpert and Macready's no free lunch theorem wrong by performing well on a variety of classes of problems, not just some.

On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:
Hello Community, 

I'm in the last pages of Pedro Domingos' book, The Master Algorithm, one of two books recommended by Bill Gates for learning about AI. 

From the book, I understand all learners have to represent, evaluate, and optimize. Many types of learners do this. What Domingos does is generalize these three parts: (1) a Markov logic network to represent, (2) posterior probability to evaluate, and (3) genetic search with gradient descent to optimize. The posterior can be replaced with another accuracy measure when that is easier, just as genetic search can be replaced by hill climbing. Where there are some 15 popular options for representing, evaluating, and optimizing, Domingos generalizes them into three. The idea is to have one unified learner for any application. 
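That three-part decomposition can be sketched generically. The Julia below is hypothetical code, with simple coordinate hill climbing standing in for the optimizer (a substitution the paragraph itself allows) and a toy surrogate standing in for the posterior:

```julia
# Generic learner = representation + evaluation + optimization.
# Hypothetical sketch: the three parts are swappable arguments, with greedy
# coordinate hill climbing standing in for genetic search / gradient descent.

# Optimization: coordinate hill climbing over a weight vector.
function hill_climb(evaluate, weights::Vector{Float64}; step=0.1, iters=1000)
    best, best_score = copy(weights), evaluate(weights)
    for _ in 1:iters
        improved = false
        for i in eachindex(best), d in (step, -step)
            cand = copy(best)
            cand[i] += d
            s = evaluate(cand)
            if s > best_score
                best, best_score, improved = cand, s, true
            end
        end
        improved || break          # local optimum: no single step helps
    end
    return best, best_score
end

# Representation: a weight vector. Evaluation: any score to maximize
# (a posterior in the book's setup; here a toy surrogate peaked at (1, -2)).
target = [1.0, -2.0]
posterior_surrogate(w) = -sum((w .- target) .^ 2)

w, s = hill_climb(posterior_surrogate, [0.0, 0.0])
# w converges to approximately [1.0, -2.0]
```

Swapping `posterior_surrogate` for a real posterior, and `hill_climb` for genetic search with gradient descent, recovers the structure the book describes.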

There is code already available at https://alchemy.cs.washington.edu/. My question: is anybody in the community interested in coding it into Julia?

Thanks. Kevin

On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote:
https://github.com/tbreloff/OnlineAI.jl/issues/5

On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
I plan to write Julia for the rest of my life... given it remains suitable. I am still reading all of Colah's material on nets. I ran Mocha.jl a couple of weeks ago and was very happy to see it work. Thanks for jumping in and telling me about OnlineAI.jl; I will look into it once I am ready. From a quick look, perhaps I could help and learn by building very clear documentation for it. I would really like to see Julia a leap ahead of other languages, and I plan to contribute heavily, but at the moment I am still getting introduced to CS, programming, and nets at the basic level. 

On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
Kevin: computers that program themselves is a concept which is much closer to reality than most would believe, but julia-users isn't really the best place for this speculation. If you're actually interested in writing code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we might tackle code generation using a neural framework I'm working on. 

On Friday, June 3, 2016, Kevin Liu <[hidden email]> wrote:
If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not lecture at Google with a TensorFlow question in the end), were unsuccessful penny traders, Julia was a language for web design, and the tribes in the video didn't actually solve problems, perhaps this would be a wildly off-topic, speculative discussion. But these statements couldn't be farther from the truth. In fact, if I had known about this video some months ago I would've understood better on how to solve a problem I was working on.  

For the founders of Julia: I understand your tribe is mainly CS. This master algorithm, as you are aware, would require collaboration with other tribes. Just stating the obvious. 



Re: Is the master algorithm on the roadmap?

Christof Stocker
This sounds like it could be a great contribution. I shall keep a curious eye on your progress




Re: Is the master algorithm on the roadmap?

Kevin Liu-2
Markov logic networks are being used in the continuous development of drugs to cure cancer at MIT's CanceRX and on DARPA's largest AI project to date, the Personalized Assistant that Learns (PAL), progenitor of Siri. One of Alchemy's largest applications to date was to learn a semantic network (a knowledge graph, as Google calls it) from the web. 



Picture doesn't appear to be open source, even though its paper is available. 

I'm in the process of comparing the Picture paper and the Alchemy code, and would like to see an open-source PILP system in Julia that combines the best of both. 




Re: Is the master algorithm on the roadmap?

Kevin Liu-2
"So, I think in the next 20 years, if we can get rid of all of the traditional approaches to artificial intelligence, like neural nets and genetic algorithms and rule-based systems, and just turn our sights a little bit higher to say, can we make a system that can use all those things for the right kind of problem? Some problems are good for neural nets; we know that others, neural nets are hopeless on them. Genetic algorithms are great for certain things; I suspect I know what they're bad at, and I won't tell you. (Laughter)" - Marvin Minsky, co-founder of MIT's AI Lab, 2003

"Those programmers tried to find the single best way to represent knowledge. Only Logic protects us from paradox." - Minsky (see the attached slide from his lecture)

On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:
Markov logic networks are being used in the continuous development of drugs to cure cancer at MIT's CanceRX (http://cancerx.mit.edu/) and on DARPA's largest AI project to date, the Personalized Assistant that Learns (PAL, https://pal.sri.com/), progenitor of Siri. One of Alchemy's largest applications to date was to learn a semantic network (a knowledge graph, as Google calls it) from the web. 

<a href="http://people.csail.mit.edu/kersting/ecmlpkdd05_pilp/pilp_ida2005_tut.pdf" target="_blank" rel="nofollow" onmousedown="this.href=&#39;http://www.google.com/url?q\x3dhttp%3A%2F%2Fpeople.csail.mit.edu%2Fkersting%2Fecmlpkdd05_pilp%2Fpilp_ida2005_tut.pdf\x26sa\x3dD\x26sntz\x3d1\x26usg\x3dAFQjCNH2or61H9KMehUsaxRwUvyegujQaA&#39;;return true;" onclick="this.href=&#39;http://www.google.com/url?q\x3dhttp%3A%2F%2Fpeople.csail.mit.edu%2Fkersting%2Fecmlpkdd05_pilp%2Fpilp_ida2005_tut.pdf\x26sa\x3dD\x26sntz\x3d1\x26usg\x3dAFQjCNH2or61H9KMehUsaxRwUvyegujQaA&#39;;return true;">Some on Probabilistic Inductive Logic Programming / Probabilistic Logic Programming / Statistical Relational Learning from CSAIL (my understanding is Alchemy does PILP from entailment, proofs, and interpretation)

<a href="http://probcomp.csail.mit.edu/index.html" target="_blank" rel="nofollow" onmousedown="this.href=&#39;http://www.google.com/url?q\x3dhttp%3A%2F%2Fprobcomp.csail.mit.edu%2Findex.html\x26sa\x3dD\x26sntz\x3d1\x26usg\x3dAFQjCNGwvpGNDDyL4eSUp1D4m_7zBHEFww&#39;;return true;" onclick="this.href=&#39;http://www.google.com/url?q\x3dhttp%3A%2F%2Fprobcomp.csail.mit.edu%2Findex.html\x26sa\x3dD\x26sntz\x3d1\x26usg\x3dAFQjCNGwvpGNDDyL4eSUp1D4m_7zBHEFww&#39;;return true;">The MIT Probabilistic Computing Project (where there is Picture, an extension of Julia, for computer vision; Watch the video from Vikash)

<a href="http://www.inference.vc/deep-learning-is-easy/" target="_blank" rel="nofollow" onmousedown="this.href=&#39;http://www.google.com/url?q\x3dhttp%3A%2F%2Fwww.inference.vc%2Fdeep-learning-is-easy%2F\x26sa\x3dD\x26sntz\x3d1\x26usg\x3dAFQjCNHu347FfJdRZmZJvg6-rK3aztTBcQ&#39;;return true;" onclick="this.href=&#39;http://www.google.com/url?q\x3dhttp%3A%2F%2Fwww.inference.vc%2Fdeep-learning-is-easy%2F\x26sa\x3dD\x26sntz\x3d1\x26usg\x3dAFQjCNHu347FfJdRZmZJvg6-rK3aztTBcQ&#39;;return true;">Probabilistic programming could do for Bayesian ML what Theano has done for neural networks. - Ferenc Huszár

Picture doesn't appear to be open-source, even though its Paper is available. 

I'm in the process of comparing the Picture Paper and Alchemy code and would like to have an open-source PILP from Julia that combines the best of both. 
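In the spirit of the Huszár quote, the core move of probabilistic programming, writing a generative program and then conditioning it on observed data, can be sketched in a few lines of plain Julia (a toy rejection sampler under made-up names, not the Picture or Alchemy machinery):

```julia
# Illustrative probabilistic program: infer a coin's bias from observed flips
# by rejection sampling. Real systems (Picture, etc.) automate this step with
# general-purpose inference engines; this is only the underlying idea.
using Random
Random.seed!(42)

# Generative model: draw a bias uniformly at random, then flip the coin n times.
flips(bias, n) = [rand() < bias for _ in 1:n]

# Condition on data by keeping only the runs that reproduce the observed
# number of heads (crude but correct rejection sampling).
function posterior_bias(observed_heads, n; samples=100_000)
    kept = Float64[]
    for _ in 1:samples
        bias = rand()
        sum(flips(bias, n)) == observed_heads && push!(kept, bias)
    end
    return kept
end

post = posterior_bias(8, 10)                 # observed 8 heads in 10 flips
mean_bias = sum(post) / length(post)
# With a uniform prior, the true posterior mean is (8 + 1) / (10 + 2) = 0.75,
# and the estimate above lands close to it.
```

The design point is that the model is an ordinary program; only the conditioning step changes between inference engines.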




Representing Knowledge - Logic - Minsky.png (1M) Download Attachment