The Facebook AI Research (FAIR) team yesterday announced a neural network capable of “translating music across musical instruments, genres, and styles.” The research paper was first spotted by tech-news site TheNextWeb.
“In this work we are able, for the first time as far as we know, to produce high fidelity musical translation between instruments, styles, and genres,” explain the researchers. “For example, we convert the audio of a Mozart symphony performed by an orchestra to an audio in the style of a pianist playing Beethoven…”
“Our results present abilities that are, as far as we know, unheard of. Asked to convert one musical instrument to another, our network is on par or slightly worse than professional musicians. Many times, people find it hard to tell which is the original audio file and which is the output of the conversion that mimics a completely different instrument.”
FAIR’s approach is built around an autoencoder, which lets the network process audio from inputs it has never encountered before. Rather than trying to match pitch or memorize notes, its unsupervised learning method works from a high-level semantic interpretation of the audio.
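To make the idea concrete, the pattern behind this kind of music translation can be sketched as a single shared encoder paired with one decoder per target domain: every input is squeezed into a domain-agnostic latent code, and the choice of decoder determines which instrument “renders” it. The snippet below is a toy illustration of that structure only, using random linear maps in NumPy; FAIR’s actual model is a far larger WaveNet-style network, and the names (`encode`, `decode`, the `"piano"`/`"strings"` domains) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "shared encoder": one linear map used for every input domain,
# compressing an audio frame into a small latent code.
# Toy "per-domain decoders": one linear map per target instrument.
# (Illustrative only -- the real model uses WaveNet-style networks.)
FRAME, LATENT = 16, 4
W_enc = rng.normal(size=(LATENT, FRAME)) * 0.1
decoders = {name: rng.normal(size=(FRAME, LATENT)) * 0.1
            for name in ("piano", "strings")}

def encode(frame):
    # Domain-agnostic latent code: the encoder never knows the instrument.
    return W_enc @ frame

def decode(code, domain):
    # The chosen decoder renders the same code in its own "instrument".
    return decoders[domain] @ code

audio_frame = rng.normal(size=FRAME)   # stand-in for one audio frame
code = encode(audio_frame)
piano_out = decode(code, "piano")      # same latent, piano decoder
strings_out = decode(code, "strings")  # same latent, strings decoder

print(code.shape, piano_out.shape)     # (4,) (16,)
```

The key design point this sketch captures is that translation falls out of the architecture: nothing about the source instrument survives into the latent code, so swapping decoders is all it takes to change the output domain.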
Facebook’s Universal Music Translation Network is another example of how far the field of AI has come. Five years ago, we wouldn’t have dreamed of translating Chopin into something new and wonderful.
Not so long ago, Facebook opened two new AI research labs in Pittsburgh and Seattle. The labs will include professors hired from Carnegie Mellon University and the University of Washington. The opening of the labs has also prompted fears that Facebook is poaching the instructors needed to train the next generation of AI researchers.
Facebook has quite a few reasons to invest in AI. Many of its latest initiatives, such as photo and video sorting, rely on machine learning; one such feature filters out photos of an ex. The flagship social network is also experimenting with AI that can read text in order to filter out hate speech.