In this blog post I show how to use logistic regression to classify images. Logistic regression can be regarded as the simplest form of a feed-forward neural network (in the sense of a "perceptron"). It's therefore a good starting point for understanding the inner workings of neural networks.
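To make the "simplest neural network" idea concrete, here is a minimal sketch of logistic regression trained by gradient descent. It uses a tiny synthetic dataset as a stand-in for flattened image pixels; the sizes, learning rate, and iteration count are illustrative choices, not the post's actual setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, d = 200, 16                       # 200 samples, 16 "pixels" each (synthetic)
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)   # linearly separable toy labels

w = np.zeros(d)                      # one weight per input "pixel"
b = 0.0
lr = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)           # forward pass: predicted probability
    grad_w = X.T @ (p - y) / n       # cross-entropy gradient w.r.t. weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w                 # gradient descent step
    b -= lr * grad_b

acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1.0))
print(f"training accuracy: {acc:.2f}")
```

Viewed as a network, this is a single neuron: a weighted sum of the inputs followed by a sigmoid activation.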
Regression represents one of the cornerstones of machine learning. Comprehending its logic and math provides a solid foundation for learning more advanced machine learning techniques such as neural networks.
I finally found some time to enhance my neural network to support deep learning. The network now supports a variable number of layers and is capable of running convolutional layers. The architecture is generic, lightweight (very small memory footprint) and super fast. :-)
I’ve extended my simple 1-layer neural network to include a hidden layer and use the backpropagation algorithm for updating connection weights. The size of the network (number of neurons per layer) is dynamic. Its accuracy in classifying the handwritten digits in the MNIST database improved from 85% to >91%.
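A compact way to see what the hidden layer and backpropagation buy you is XOR, a problem no single-layer network can solve. The sketch below trains a 2-layer sigmoid network with plain batch gradient descent; the hidden-layer size, learning rate, and iteration count are illustrative assumptions, not the blog's MNIST configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 1.0

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error back through each layer
    d_out = (out - y) * out * (1 - out)      # error * sigmoid derivative
    d_h = (d_out @ W2.T) * h * (1 - h)       # error pushed to hidden layer
    # gradient descent updates for both layers
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # learned outputs for the four XOR inputs
```

The hidden layer lets the network bend the decision boundary; backpropagation is just the chain rule applied layer by layer to assign each weight its share of the output error.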
In this post I’ll explore how to use a very simple 1-layer neural network to recognize the handwritten digits in the MNIST database.
When people talk about artificial intelligence and machine learning, they most often refer to (artificial) neural networks (ANN or NN). Let’s explore some machine learning basics, without excessive math, purely from a programmer’s perspective.
Finally! I launched my new blog about my hacking adventures in the world of artificial intelligence and machine learning.