
awarenessadventurer

Here is a good example of a thread started on VOAT that went nowhere at the time, but it is becoming more relevant in light of new information: https://voat.co/v/DeepLearning/478739 "Neural Networks, Types, and Functional Programming" (colah.github.io), submitted 1.5 years ago by 3even.

I SMELL AI (Artificial Intelligence) in use and development. VOAT is a library. Where is VOAT search? Asleep at the wheel (or crashed and dead).

https://colah.github.io/posts/2015-09-NN-Types-FP/

excerpt: "An Ad-Hoc Field Deep learning, despite its remarkable successes, is a young field. While models called artificial neural networks have been studied for decades, much of that work seems only tenuously connected to modern results.

It’s often the case that young fields start in a very ad-hoc manner. Later, the mature field is understood very differently than it was understood by its early practitioners. For example, in taxonomy, people have grouped plants and animals for thousands of years, but the way we understood what we were doing changed a lot in light of evolution and molecular biology. In chemistry, we have explored chemical reactions for a long time, but what we understood ourselves to do changed a lot with the discovery of irreducible elements, and again later with models of the atom. Those are grandiose examples, but the history of science and mathematics has seen this pattern again and again, on many different scales.

It seems quite likely that deep learning is in this ad-hoc state.

At the moment, deep learning is held together by an extremely successful tool. This tool doesn’t seem fundamental; it’s something we’ve stumbled on, with seemingly arbitrary details that change regularly. As a field, we don’t yet have some unifying insight or shared understanding. In fact, the field has several competing narratives!

I think it is very likely that, reflecting back in 30 years, we will see deep learning very differently.

Deep Learning 30 Years in the Future

If we think we'll probably see deep learning very differently in 30 years, that suggests an interesting question: how are we going to see it? Of course, no one can actually know how we'll come to understand the field. But it is interesting to speculate.

At present, three narratives are competing to be the way we understand deep learning. There's the neuroscience narrative, drawing analogies to biology. There's the representations narrative, centered on transformations of data and the manifold hypothesis. Finally, there's a probabilistic narrative, which interprets neural networks as finding latent variables. These narratives aren't mutually exclusive, but they do present very different ways of thinking about deep learning.

This essay extends the representations narrative to a new answer: deep learning studies a connection between optimization and functional programming. In this view, the representations narrative in deep learning corresponds to type theory in functional programming. It sees deep learning as the junction of two fields we already know to be incredibly rich. What we find seems so beautiful to me, feels so natural, that the mathematician in me could believe it to be something fundamental about reality.

This is an extremely speculative idea. I am not arguing that it is true. I wish to argue only that it is plausible, that one could imagine deep learning evolving in this direction. To be clear: I am primarily making an aesthetic argument, rather than an argument of fact. I wish to show that this is a natural and elegant idea, encompassing what we presently call deep learning.

Optimization & Function Composition

The distinctive property of deep learning is that it studies deep neural networks – neural networks with many layers. Over the course of multiple layers, these models progressively bend data, warping it into a form where it is easy to solve the given task."

continues in article.
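The "layers as composed functions" view the excerpt describes is easy to make concrete. Below is a minimal Haskell sketch, not taken from the article; the names (dense, relu, network) and the toy weights are purely illustrative assumptions. Each layer is an ordinary function from one representation to another, and the deep network is just their composition.

```haskell
-- A layer is just a function from one representation to another;
-- a deep network is the composition of its layers.
type Vector = [Double]

-- A toy dense layer: weight matrix, bias vector, pointwise nonlinearity.
dense :: [[Double]] -> Vector -> (Double -> Double) -> Vector -> Vector
dense weights bias activation x =
  zipWith (\row b -> activation (b + sum (zipWith (*) row x))) weights bias

relu :: Double -> Double
relu = max 0

-- "Depth" is ordinary function composition: each layer warps the
-- representation produced by the previous one.
network :: Vector -> Vector
network = layer2 . layer1
  where
    layer1 = dense [[0.5, -0.2], [0.1, 0.8]] [0.0, 0.1] relu  -- toy weights
    layer2 = dense [[1.0, -1.0]] [0.0] id

main :: IO ()
main = print (network [1.0, 2.0])  -- prints a value close to [-1.7]
```

In this reading, the "type" of each layer is the space of representations it maps between, which is where the correspondence to type theory in the essay comes from.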

zitterbewegung • 2 years ago

This is really interesting. I think that if you also apply machine learning to attempt to generate programs, you may be able to do some tedious refactoring / programming as well. I haven't seen machine learning explicitly constructed as a type, though. Both Coq and ACL2 have had an external machine learning program that takes programs as input and suggests new programs, and Coq has had a machine learning model integrated into it. See http://arxiv.org/abs/1410.5467 (Coq) and http://arxiv.org/abs/1404.3034 (ACL2).

Neil Shepperd • 2 years ago

This is intriguing, as it is almost precisely the approach I am taking in my Haskell neural networks library: https://github.com/nshepper.... The language I use is that of category theory, though. Neural networks form the category of differentiable functions between various spaces.

There seem to be a few options in how to represent these things though. First, one can represent the parameters explicitly or implicitly, and I'm so far not sure which is preferable. And I have a few different RNN combinators describing different styles of accumulating map (scan), where again I'm not sure if a particular one is really "best". The idea does seem to be sound though, and a reasonable way of talking about neural net constructions.
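The "accumulating map (scan)" framing of an RNN combinator can be sketched in a few lines of Haskell. The names below (Cell, runRNN, sumCell) are illustrative assumptions, not taken from Shepperd's library; the point is only that running a recurrent cell over a sequence is an ordinary accumulating map that threads a hidden state through the inputs.

```haskell
import Data.List (mapAccumL)

-- A recurrent cell: previous hidden state and an input go in,
-- next hidden state and an output come out.
type Cell h x y = h -> x -> (h, y)

-- Running a cell over a sequence is an accumulating map (a scan):
-- the hidden state is threaded through, producing one output per input.
runRNN :: Cell h x y -> h -> [x] -> (h, [y])
runRNN = mapAccumL

-- A toy cell: the hidden state is a running sum, the output echoes it.
sumCell :: Cell Double Double Double
sumCell h x = let h' = h + x in (h', h')

main :: IO ()
main = print (runRNN sumCell 0 [1, 2, 3, 4])
-- prints (10.0,[1.0,3.0,6.0,10.0])
```

Whether parameters are carried explicitly (as an extra argument to the cell) or implicitly (closed over in the function) is exactly the representation choice the comment mentions; the sketch above closes over them implicitly.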