Meet Luke Dawson, a computer experiment that creates music. He’s a collaboration between human and machine, which sounds cooler than it actually is.
Luke enjoys all types of music, but he especially likes composing upbeat songs with twangy electric guitars and thumping drums. (He also has a thing for modular synthesizers.)
In his free time, he browses the internet and collects data on cat videos, language, world domination, and music.
I don’t know how frequently music will come out. Generating music is definitely easy; the part that takes the most time is improving Dawson so that he makes good music consistently (check the future updates section for more info).
How it Works
Alright, so this is the part where I stop promoting Luke and just start nerding everyone out with my obscure hobbies.
Luke Dawson isn’t just Luke Dawson. He’s a system of multiple whatchamacallits and thingymabobs. Very thorough explanation, I know.
While I had fun coding lots of different things and making different media for him, I plan on stealing some code from smart people on GitHub for some later updates to Luke Dawson.
Magenta is a big project started by the Google Brain team. After giving it training data, you can get it to generate music, art, and city maps (literally anything you want).
People have created many models to generate different data using Magenta, which means that if you want something done, you can probably find a model for it.
I decided to get Magenta and try it out. Unfortunately, I have no experience with Python and I ended up giving up after wasting an entire morning trying to install a bunch of different libraries.
So I resorted to good ol’ MIDI generators, which generate music with handmade algorithms. This is a huge step down from deep learning, but it’s much easier for me since I can generate tons upon tons of MIDI files with the click of a button.
Unfortunately, I have to cycle through about fifty files before I find a viable one to use.
The algorithm generates a couple of channels, each with its own instrument. The structure is largely classical and loosely based on sonata-allegro form, which here just means that the music jumps up and down in fifths while the theme grows in a random manner until it ends on the tonic of the key it started in.
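For the curious, here’s roughly what that kind of handmade algorithm looks like. This is my own illustrative sketch in Python, not Luke’s actual generator (the names and parameters are made up): a melody that random-walks up and down in fifths and gets forced back to the tonic at the end.

```python
import random

def generate_melody(tonic=60, length=16, seed=None):
    """Random walk in fifths (7 semitones), resolving to the tonic at the end.

    `tonic` is a MIDI note number (60 = middle C). Purely illustrative.
    """
    rng = random.Random(seed)
    notes = [tonic]
    for _ in range(length - 2):
        step = rng.choice([-7, 7])  # jump down or up a fifth
        nxt = notes[-1] + step
        # reflect the jump if it would leave a playable MIDI range
        if not 36 <= nxt <= 96:
            nxt = notes[-1] - step
        notes.append(nxt)
    notes.append(tonic)  # end on the tonic it started on
    return notes

melody = generate_melody(seed=42)
```

Every note sits a whole number of fifths away from the tonic, and the last note always matches the first, which is the “ends at the tonic” behavior described above.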
Now, the music doesn’t finish perfectly. It ends in the same key it started in, but it just cuts off abruptly, so I run it through Audacity and add a fade-out because I’m lazy.
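The fade-out itself is simple enough to sketch without Audacity. Treating the audio as a list of float samples, a linear fade just scales the tail down to silence (again, an illustrative stand-in, not the actual workflow):

```python
def fade_out(samples, fade_len):
    """Scale the last `fade_len` samples linearly down to zero."""
    out = list(samples)
    n = min(fade_len, len(out))
    for i in range(n):
        # gain runs from just under 1.0 down to 0.0 across the fade region
        gain = (n - 1 - i) / n if n > 1 else 0.0
        out[len(out) - n + i] *= gain
    return out
```

Audacity’s Fade Out effect does essentially this over the selected region; real audio tools often use a curved (logarithmic) fade instead, which sounds more natural to the ear.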
The music is divided up like so:
-A lead instrument (I usually have two accompanying each other)
-A drum set
-A background instrument (usually a synth)
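If you’re wondering how that layout maps onto an actual MIDI file: General MIDI reserves channel 10 for percussion, and melodic instruments are selected by program number (1-indexed in the GM spec). The specific instrument choices below are my guesses, not Luke’s real setup.

```python
# One possible channel layout for the setup described above.
# Program numbers follow the General MIDI instrument list (1-indexed);
# channel 10 has no program because GM defines it as the drum channel.
CHANNELS = {
    1:  {"role": "lead 1", "program": 30},      # Overdriven Guitar
    2:  {"role": "lead 2", "program": 28},      # Electric Guitar (clean)
    3:  {"role": "background", "program": 82},  # Lead 2 (sawtooth) synth
    10: {"role": "drums", "program": None},     # GM percussion channel
}

def is_drum_channel(channel):
    """In General MIDI, channel 10 is always percussion."""
    return channel == 10
```

This is also why the default SoundFont matters so much: the program number only names the instrument slot, and whatever sample set is loaded into that slot determines how it actually sounds.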
The files that get spat out use default SoundFont instruments, which means the music sounds pretty crappy.
Everything is run through FL Studio or Reaper, and the instruments are replaced with nicer-sounding ones.
After that’s all done, we (should) have a nice short mp3 file of computer-generated music.
Now, this entire project is a computer-and-human collaboration. The Instagram API prevents me from fully automating everything, and thanks to my lack of knowledge I still have to do the tedious work of figuring out how to make music, even though the computer does 99% of it. I have no idea how to mix music, side-chain, or do any of those cool things “real” music composers do.
But I’ll just keep working with Dawson and we’ll see where this leads.
If you want to learn more, check out this page. There won’t be any updates until I make a big change.
Alright, so I haven’t given up on Magenta yet, but the only programming language I know is C++. Even then, my knowledge is limited (I need wikis and online articles for the syntax of most commands).
The biggest update of all time would be getting Luke Dawson to run on Magenta, because I’d generate hundreds of midi files using my current method and use them as training data. (I want to run a bunch of Pokémon game music through Magenta and see what happens.)
Incoming Updates (listed in a somewhat chronological top-down manner):
-Vocals (simple pre-made words and mixing).
-Vocals (lyrics generation run through Vocaloid 4 + a vocal glitcher and stuff).
-Using Magenta for deep learning and better generation.
-Ability to copy bands like The Beatles and generate similar music. (This has been done before, but it was done by a large team of smart people and the code is proprietary, so I can’t steal it.)
-Other genres of music (with Magenta).
-Lyrics in other languages (I’ll do a poll on Instagram for a language when/if I get to that point. It’ll probably run on Google Translate.)
I’d like to reiterate that I’m doing this entire thing by myself, and if anybody has any suggestions (or any technical help), I’d greatly appreciate it.
Luke Dawson gets frequent updates, but if I posted every update on Instagram it’d get spammy really quick.
And until I can get everything sorted out, he’s gonna stay on my hard drive for the time being.
Current Features:
-Generates MIDI files.
-That’s it. I still have to do everything (ugh.)