An epistemic approach to neural networks in the field of artificial intelligence
Saturday 29 October // 10:45 // Auditorium
In order to understand something about neural networks and their use and role in the field of artificial intelligence, it is worth taking a step back from the current hypes, be they Convolutional Neural Networks, Generative Adversarial Networks or Large Language Models, and looking at the history of the field.
In a course called Applied AI, which aims to provide a hands-on experience, we take the Perceptron model, as proposed by Frank Rosenblatt, as a starting point. Historically, the Perceptron model is interesting for several reasons. First, it was heavily promoted by Rosenblatt with promises that sound eerily familiar in today's discourse. Second, the rebuttal of Rosenblatt's approach by Minsky and Papert led to the so-called AI winter; this story illustrates how science is always a political battle over funding and recognition. Third, from a technological perspective, the Perceptron is conceptually very close to today's neural networks: the basic idea of using an error signal between the actual output and the expected output to adapt the weights between network nodes persists in today's networks. One major breakthrough in the history of neural networks was the invention of the back-propagation algorithm, which made it possible to pass this error signal back through multiple layers.
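As a rough illustration of that error-driven weight update (a sketch for this abstract, not the actual course material), the classic Perceptron learning rule can be written in a few lines of Python/NumPy; the step activation, learning rate and number of epochs are assumptions chosen for the example:

    import numpy as np

    # Minimal Perceptron: weights are adapted by the error between
    # the actual output and the expected output.
    def train_perceptron(inputs, targets, lr=0.1, epochs=20):
        weights = np.zeros(inputs.shape[1])
        bias = 0.0
        for _ in range(epochs):
            for x, target in zip(inputs, targets):
                output = 1 if np.dot(weights, x) + bias > 0 else 0  # step activation
                error = target - output          # error signal
                weights += lr * error * x        # weight update
                bias += lr * error
        return weights, bias

    # Logical AND is linearly separable, so the Perceptron can learn it.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])
    w, b = train_perceptron(X, y)
    print([1 if np.dot(w, x) + b > 0 else 0 for x in X])  # -> [0, 0, 0, 1]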
During the course, students start by building the Perceptron model with the neural network framework TensorFlow. We retrace how the Perceptron is able to learn logical functions such as AND and OR, how it fails to learn XOR, how, enhanced by an additional layer and back-propagation, it can learn XOR, and finally how the one-layer model achieves an astonishing recognition rate of about 80 % on the MNIST dataset.
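To give an impression of the XOR exercise, a minimal sketch in TensorFlow/Keras might look as follows; the hidden-layer size, activations, optimizer and number of epochs are assumptions for illustration and are not taken from the course description:

    import numpy as np
    import tensorflow as tf

    # XOR truth table: inputs and expected outputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
    y = np.array([[0], [1], [1], [0]], dtype=np.float32)

    # A single-layer Perceptron cannot separate XOR; one hidden layer
    # trained with back-propagation can.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),
        tf.keras.layers.Dense(4, activation="tanh"),     # hidden layer (size assumed)
        tf.keras.layers.Dense(1, activation="sigmoid"),  # output layer
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
                  loss="binary_crossentropy")
    model.fit(X, y, epochs=500, verbose=0)

    print(model.predict(X, verbose=0).round())  # expected: [[0.], [1.], [1.], [0.]]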