Maybe we’ll all be using synth sounds built by robots soon?
Google is using machine learning techniques to synthesize never-before-heard hybrid instruments, Wired reports.
NSynth is a project from Google Magenta, a team of AI researchers at the tech giant tasked with finding out whether AI can be used to create art and music.
“Unlike a traditional synthesizer which generates audio from hand-designed components like oscillators and wavetables, NSynth uses deep neural networks to generate sounds at the level of individual samples,” the team explains in a blog post.
The program is trained on samples of real instruments such as organ, glockenspiel and flute; the AI analyzes the mathematical characteristics of each instrument's notes and combines them to create entirely new sounds.
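Conceptually, this kind of blending works by encoding each instrument's notes into a numerical representation and mixing those representations rather than the raw audio. The sketch below illustrates the interpolation idea only; the embedding size, values, and `blend_embeddings` helper are hypothetical stand-ins, not Magenta's actual model or API (NSynth's real embeddings come from a trained neural-network autoencoder):

```python
import numpy as np

def blend_embeddings(emb_a, emb_b, mix=0.5):
    """Linearly interpolate between two latent embeddings.

    mix=0.0 returns emb_a unchanged, mix=1.0 returns emb_b,
    and values in between produce a hybrid of the two.
    """
    return (1.0 - mix) * emb_a + mix * emb_b

# Hypothetical 16-dimensional embeddings standing in for
# encoded bass and flute notes.
bass = np.full(16, 0.2)
flute = np.full(16, 0.8)

# An even 50/50 blend of the two instruments.
hybrid = blend_embeddings(bass, flute, mix=0.5)
```

In the real system, the blended embedding would then be decoded back into audio sample by sample, which is what distinguishes this approach from simply cross-fading two recordings.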
As audio samples from Wired demonstrate, NSynth has already made some convincing hybrids of bass and flute, organ and flute, and bass and organ.
Synthesized hybrid instruments are nothing new: software synths, such as Output's recent Analog Strings, already let you make your own blends. Magenta's project, however, appears to be the first time that artificial intelligence has tried to figure the process out for itself.
If you want to get up close with NSynth, Google will publicly demonstrate the technology later this week at Moogfest, Moog’s annual celebration of music and technology in Durham, North Carolina.
Read next: We Are The Robots: Is the future of music artificial?