
A London start-up, ‘Deep Render’, which is applying machine learning to image compression, has raised £1.6 million in seed funding. Founded in mid-2017 by Arsalan Zafar and Chri Besenbruch, Deep Render aims to ease the strain that growing data consumption places on internet connections, which choke during peak periods, a problem made worse by the lockdowns in force across many countries.

The start-up is taking an entirely new approach to image compression, as image and video data currently account for around 80% of total internet traffic. “Our Biological Compression technology rebuilds media compression from scratch by using the advances of the machine learning revolution and by mimicking the neural processes of the human eye,” explains Deep Render co-founder and CEO Chri Besenbruch.

“Our secret sauce is in the way the data is compressed and sent across the network. The traditional technology relies on various modules, each connected but which don’t actually ‘talk’ to each other. An image is optimized for module one before moving to module two, and it’s then optimized for module two and so on. This not only causes delays, but it can also cause losses in data which can ultimately reduce the quality and accuracy of the resulting image. Plus, if one stage of optimization doesn’t work, the other modules don’t know about it so can’t correct any mistakes”.

To solve this issue, the CEO explained that Deep Render’s image compression technology replaces the individual components with a single element that spans the entire pipeline. This means the steps of the compression logic are interconnected and trained “end-to-end”.
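Deep Render has not published the details of its model, but the general idea of end-to-end learned compression can be sketched as a single autoencoder whose stages are trained jointly against the final reconstruction, rather than as a chain of separately optimized modules. The PyTorch snippet below is a minimal illustrative sketch only; the layer sizes, the straight-through rounding trick and the plain MSE loss are assumptions made for the example, not Deep Render’s actual architecture.

```python
# Minimal sketch of end-to-end learned image compression (illustrative only,
# not Deep Render's architecture): one encoder-quantizer-decoder model is
# trained jointly, so every stage adapts to the final reconstruction loss.
import torch
import torch.nn as nn

class EndToEndCodec(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):
        latent = self.encoder(x)
        # Straight-through rounding: the latent is quantized (as it would be
        # before entropy coding), but gradients flow through unchanged.
        quantized = latent + (torch.round(latent) - latent).detach()
        return self.decoder(quantized)

model = EndToEndCodec()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
batch = torch.rand(4, 3, 64, 64)            # stand-in for a batch of training images
loss = nn.functional.mse_loss(model(batch), batch)  # a real codec would add a rate term
loss.backward()                             # gradients reach every stage at once
optimizer.step()
```

Because the whole pipeline shares one loss, a weakness in any stage shows up in the gradient and is compensated by the others, which is exactly the “talking to each other” the traditional modular pipeline lacks.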

“What’s more, Deep Render trains its machine learning platform with the end goal in mind,” adds Besenbruch. “This has the benefit of both boosting the efficiency and accuracy of the linear functions and extending the software’s capability to model and perform non-linear functions. Think of it as a line and a curve. An image, by its nature, has a lot of curvature from changes in tone, light, brightness and color. Expanding the compression software’s ability to consider each of these curves means it’s also able to tell which images are more visually pleasing. As humans, we do this intuitively. We know when color is a little off, or the landscape doesn’t look quite right. We don’t even realize we do this most of the time, but it plays a major role in how we assess images and videos”.
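The “line and curve” analogy can be made concrete with a toy experiment: a purely linear model cannot follow a curved mapping such as a tone curve, while a small non-linear network can. The snippet below is a generic illustration of that distinction; the gamma-style curve and the network sizes are arbitrary choices for the example, not anything Deep Render has disclosed.

```python
# Toy illustration of linear vs non-linear modelling: fit a curved, gamma-like
# tone mapping with a purely linear model and with a small non-linear MLP.
import torch
import torch.nn as nn

x = torch.linspace(0, 1, 256).unsqueeze(1)   # input intensities in [0, 1]
y = x ** 0.45                                # a curved tone mapping (gamma-like)

linear = nn.Linear(1, 1)                                            # a straight line
mlp = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # can bend

for model in (linear, mlp):
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    for _ in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print(type(model).__name__, "final fitting error:", loss.item())
```

The non-linear model follows the curve far more closely, which is the point of the quote: codecs built on non-linear functions can track the tonal and color curves that matter for perceived quality.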

To test the concept, Deep Render recently carried out a large-scale Amazon MTurk study with 5,000 participants. The study pitted the company’s image compression algorithm against BPG (a market standard for image compression, based on the intra-frame coding of the video compression standard H.265). When asked to compare perceptual quality on the CLIC-Vision dataset, around 95% of participants rated Deep Render’s images as more visually pleasing, even though those images were half the file size. “Our technological breakthrough represents the foundation for a new class of compression methods,” claims the Deep Render co-founder.
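Deep Render has not released the raw study data, so the headline numbers can only be illustrated with hypothetical arithmetic: a bits-per-pixel figure derived from file size, and a simple preference tally. In the sketch below, the image resolution, file sizes and vote counts are invented; only the roughly 95% split and the two-to-one size difference come from the article.

```python
# Hypothetical illustration of the reported comparison; the concrete file sizes,
# resolution and vote counts are invented for the example.

def bits_per_pixel(file_size_bytes: int, width: int, height: int) -> float:
    """Compression rate expressed as bits spent per image pixel."""
    return file_size_bytes * 8 / (width * height)

width, height = 768, 512                                  # an arbitrary test resolution
bpg_bpp = bits_per_pixel(60_000, width, height)           # BPG-encoded image
deep_render_bpp = bits_per_pixel(30_000, width, height)   # half the file size

votes_for_deep_render, votes_for_bpg = 4_750, 250         # a 95% / 5% split of 5,000 votes
preference = votes_for_deep_render / (votes_for_deep_render + votes_for_bpg)

print(f"BPG: {bpg_bpp:.2f} bpp, Deep Render: {deep_render_bpp:.2f} bpp")
print(f"Preference for Deep Render: {preference:.0%}")
```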

CEO Besenbruch names Magic Pony as a past competitor; Twitter bought Magic Pony for $150 million a year after it was founded. “Magic Pony was also looking at deep learning for solving the challenges of image and video compression,” he explains. “However, Magic Pony looked at improving the traditional compression pipeline via post- and pre-processing steps using AI, and thus was ultimately still limited by its restrictions. Deep Render does not want to ‘improve’ the traditional compression pipeline; we are out to destroy it and rebuild it from its ashes”.

Besenbruch said Deep Render’s current competitors are WaveOne in Silicon Valley and TuCodec in Shanghai.