Let’s face it: you probably think of us as Python and Django lovers and Angular freaks. Even if you are right, there is more to us than meets the eye. I, for one, have done my own, now not-so-private, research on convolutional neural networks.

What I wanted to do was test the image scaling capabilities of convolutional neural networks, especially on the problem of scaling up low-resolution images. Typical algorithms such as bicubic or Lanczos interpolation handle this reasonably well, but they usually fail on large scale-ups of around 500%, where the individual pixels become visible to the naked eye. So I wanted to see whether convolutional neural networks could do it in a more intelligent and efficient way.

Those of you who are interested can check out the full repository for this neural network on our GitHub.

Technologies and Architecture

For my research I decided to use Keras, a modular neural network library written in Python. Keras’s strong points are modularity and minimalism: each module is short, simple and understandable at first glance. It is also really easy to create new modules (classes and functions), which makes it a good tool for advanced research. Keras is designed to work on top of Theano, a Python library for defining, optimizing and evaluating mathematical expressions. Running Theano on Nvidia’s CUDA framework, using the GPU to distinctly accelerate evaluation, was also very helpful.

For my research, I used a convolutional neural network with a simple structure. The convolution layer has 150 filters of size 9 × 9 and takes 200 × 200 images as input. It is followed by a ReLU activation layer and then the output layer. The network is optimized with Adam using its default parameters.
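A minimal sketch of that architecture in the Keras Sequential API (modern layer names are shown; the original code ran on the older Keras/Theano stack, and the RGB output layer and MSE loss are my assumptions, since the article does not spell them out):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv2D, Activation

# One 150-filter 9x9 convolution over 200x200 inputs, a ReLU
# activation, and an output convolution reconstructing the image.
# Three RGB channels in and out are assumed here.
model = Sequential([
    Input(shape=(200, 200, 3)),
    Conv2D(150, (9, 9), padding='same'),
    Activation('relu'),
    Conv2D(3, (9, 9), padding='same'),  # output layer (assumed shape)
])

# Adam on default parameters, as in the article; MSE loss is my assumption.
model.compile(optimizer='adam', loss='mse')
```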

Network Training

The network was trained on a couple of thousand images from Image-net.org. To train for twofold upscaling, each input image was reduced to half its size and then scaled back up twofold; the original image served as the training target.
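The degrade-then-restore scheme can be sketched in NumPy. `make_training_pair` is a hypothetical helper (the article’s actual preprocessing code is not shown); it uses 2 × 2 average pooling for the reduction and nearest-neighbour repetition for the enlargement:

```python
import numpy as np

def make_training_pair(img):
    """Build one (input, target) pair for 2x upscaling training.

    img: 2-D grayscale array with even height and width.
    Returns the twofold-reduced-then-enlarged image as the network
    input and the untouched original as the training target.
    """
    h, w = img.shape
    small = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))   # 2x reduce
    degraded = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)  # 2x enlarge
    return degraded, img
```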

To work around the 2 GB memory limit of the graphics card, higher-resolution images were divided into 200 × 200 pieces. A single training epoch contained close to 5,000 images. When the training process stopped showing improvement on the given training set, a new data set was automatically downloaded.
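The tiling step can be sketched like this (`split_into_tiles` is a hypothetical helper; the real pipeline may have overlapped or padded edge tiles, which this sketch does not handle):

```python
import numpy as np

def split_into_tiles(img, tile=200):
    """Cut an image into non-overlapping tile x tile pieces.

    Edge regions smaller than a full tile are simply dropped in
    this sketch, keeping every piece at a fixed size that fits in
    limited GPU memory.
    """
    h, w = img.shape[:2]
    return [img[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]
```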

JPEG artifacts, if not removed before the image is scaled up, can and will drastically degrade the result. To deal with them, I built a second convolutional neural network with a similar architecture, trained on pairs of heavily JPEG-compressed images and their original counterparts.
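Generating such training pairs can be sketched with Pillow (assumed available here; `quality=10` is an illustrative value, not the compression level used in the article):

```python
import io
import numpy as np
from PIL import Image

def make_jpeg_pair(img_array, quality=10):
    """Return an (artifact-ridden, clean) pair for artifact-removal training.

    img_array: uint8 RGB array. The image is round-tripped through a
    low-quality JPEG encode to introduce compression artifacts.
    """
    buf = io.BytesIO()
    Image.fromarray(img_array).save(buf, format='JPEG', quality=quality)
    compressed = np.asarray(Image.open(io.BytesIO(buf.getvalue())))
    return compressed, img_array
```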

As you can see below, the effects are at least acceptable.

Original image:
[image]
Image scaled up with GIMP and the Lanczos filter:
[image]
Image scaled up with the Lanczos filter and corrected with the neural network:
[image]

Standard neural network use case

A closer look at the applications of neural networks shows that there are fields in which they do a very good job. Financial and economic forecasting, for one, is heavily influenced by machine learning; predicting stock market price changes is one of the best examples. In such a fast-moving, day-to-day business, it is extremely important to have a fast and accurate forecasting tool, at both the macro- and microeconomic level. A similar, yet slightly different, application of these algorithms is sales forecasting and customer research, which can improve overall customer service and give a business accurate data about customer-product relations.

Thanks to modern hardware, machine learning and neural networks, although still developing, have become very effective and will become even more useful in the near future. For these reasons, it is only natural for TEONITE to use such algorithms for fast and accurate economic predictions and for efficient analysis of huge databases.