Lo-Fi Player explorations


Earlier today, I was introduced to Lo-Fi Player, a browser-based virtual room, powered by machine learning models from magenta.js, in which you can generate instrumental lo-fi hip-hop tracks.

For a user, the idea is fairly simple. A room loads with multiple items in it: a guitar, a keyboard, a bass guitar, a TV, a radio, a view outside, etc. You tinker with each of these to change either the way the room looks or the music that's being played. You can make the melody 'denser' or 'chiller' (or other adjectives), add bass, subtract drums, whatever you wish (provided it's been modelled).

Making the melody 'denser' or 'chiller' is achieved with recurrent neural networks. I don't believe I'm qualified to go into any detail beyond 'some machine learning algorithm is used', but this is a good place to start gaining some intelligence on how it's done. I plan on gaining that intelligence. Updates will follow.
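In the meantime, here's a minimal sketch of how magenta.js exposes an RNN melody model in the browser. I'm not claiming this is how Lo-Fi Player is wired up internally; the seed melody is a made-up example, and my guess that the `temperature` parameter relates to the 'denser'/'chiller' knobs is exactly that, a guess.

```typescript
import * as mm from '@magenta/music';

// Load a pre-trained melody RNN from Magenta's publicly hosted checkpoints.
// (Lo-Fi Player may well use a different model internally.)
const melodyRnn = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn'
);

async function continueMelody() {
  await melodyRnn.initialize();

  // A two-note seed melody (a hypothetical example, not from Lo-Fi Player).
  const seed: mm.INoteSequence = {
    notes: [
      { pitch: 60, startTime: 0.0, endTime: 0.5 },
      { pitch: 62, startTime: 0.5, endTime: 1.0 },
    ],
    totalTime: 1.0,
  };

  // MusicRNN expects a quantized sequence; here, 4 steps per quarter note.
  const quantized = mm.sequences.quantizeNoteSequence(seed, 4);

  // Ask the RNN for 32 more steps. Higher temperature means more surprising
  // output; plausibly one of the levers behind the mood adjectives.
  const continuation = await melodyRnn.continueSequence(quantized, 32, 1.1);

  // Play the generated continuation in the browser.
  const player = new mm.Player();
  player.start(continuation);
}

continueMelody();
```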

Until then, you can chill in the first room I’ve created on Lo-Fi Player.
