Some Example Applications
One program that does not quite require this is Variations, developed by Bruce L. Jacob. This program uses genetic algorithms that are evolved to the operator's liking; it then composes and listens to each piece to decide whether or not it is good. Jacob describes how this genetic "ear" works in his paper Composing With Genetic Algorithms:
"...The [ear] module is a collection of chromosomes, each of which acts as a data filter that identifies harmonic combinations as “good” or “bad.” Before composition begins, the chromosomes are evolved to reflect the musical tastes of the human operator. First, a set of randomly-generated ear chromosomes are auditioned on how well they filter material. The evaluation mechanism in this process, as in virtually all other genetic music studies, is a human judge. Musical examples are created and passed through the ear chromosomes, and the human operator assigns weights to chromosomes according to how well they agree with his or her inclinations. Chromosomes with high marks are more likely to reproduce and have their alleles present in the next generation. Successive generations therefore exhibit the best traits of previous generations. Once there is a satisfactory set of filters, the [music creating] process...begins..." [B. Jacob] An example of a song created by Variations can be downloaded here; additional samples are also available for download at the site:
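The evolution Jacob describes can be sketched as a standard genetic algorithm in which the human operator's scores act as selection weights. This is only an illustrative sketch, not Jacob's actual code: the chromosome encoding (one bit per pitch class), the population size, and the stub scoring function that stands in for the human judge are all assumptions.

```python
import random

CHROM_LEN = 12   # assumed encoding: one bit per pitch class, 1 = "good" harmony
POP_SIZE = 8

def random_chromosome():
    return [random.randint(0, 1) for _ in range(CHROM_LEN)]

def crossover(a, b):
    """Single-point crossover between two parent chromosomes."""
    point = random.randrange(1, CHROM_LEN)
    return a[:point] + b[point:]

def mutate(chrom, rate=0.05):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in chrom]

def next_generation(population, weights):
    """Roulette-wheel selection: highly weighted filters reproduce more often."""
    new_pop = []
    for _ in range(POP_SIZE):
        a, b = random.choices(population, weights=weights, k=2)
        new_pop.append(mutate(crossover(a, b)))
    return new_pop

population = [random_chromosome() for _ in range(POP_SIZE)]
for generation in range(5):
    # In Variations these weights come from the human operator auditioning
    # each filter; here a stub score stands in for that judgment.
    weights = [1 + sum(c) for c in population]
    population = next_generation(population, weights)
```

The key point is that the fitness function is not computed at all: the weights passed to selection come from a person, which is why Jacob calls the human judge the evaluation mechanism.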
A program such as Variations creates music autonomously, but what about dynamically? The best example of dynamic music creation is GenJam - the Genetic Jammer. This program, by Al Biles, listens to him play trumpet through a device called a pitch-tracker, which converts the notes into MIDI. The program then improvises using its own set of notes, and Biles tells GenJam whether the sequence it just played was good or bad. On the next pass, the program discards the phrases that sounded bad and evolves new material from the good-sounding ones. GenJam uses a genetic algorithm-based engine to develop its notes and riffs.
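The good/bad feedback loop described above can be illustrated with a toy replace-and-vary step. This is not GenJam's actual algorithm: the C-major MIDI scale, the phrase length, the single-note mutation, and the stub feedback that stands in for Biles's live judgments are all assumptions for the sketch.

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # assumed: MIDI notes of C major

def random_phrase(length=8):
    return [random.choice(SCALE) for _ in range(length)]

def vary(phrase):
    """Replace one note - a crude stand-in for GenJam's musically
    meaningful mutation operators."""
    new = phrase[:]
    new[random.randrange(len(new))] = random.choice(SCALE)
    return new

phrases = [random_phrase() for _ in range(6)]

# Feedback normally comes from the human pressing "good"/"bad" while
# listening; here a stub randomly labels each phrase.
feedback = [random.choice([True, False]) for _ in phrases]

# Discard the bad phrases and breed replacements from the good ones.
good = [p for p, ok in zip(phrases, feedback) if ok] or [random_phrase()]
phrases = good + [vary(random.choice(good)) for _ in range(6 - len(good))]
```

Each pass keeps the population the same size but increasingly dominated by variations on material the listener approved of, which is the "get rid of the bad, evolve from the good" behaviour described above.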
Transcription is in many ways even harder than composition, due to the nature of the human ear and brain. A "trained" human can distinguish instruments, pick out melodies from bass lines, and hear the difference between the drums and a guitar. For a computer, where everything is represented in 0s and 1s, life is a little harder! For a computer to listen to a 4-member rock band playing, and to create four written pieces of music corresponding perfectly to the guitar, bass, drums and singer, would be an incredible task. To show just how hard this task would be, we'll look at one instrument, and how hard it would be to transcribe that instrument alone!
This section will focus on the difficulties of getting the computer to listen to an instrument and pick out the notes. The most problematic instrument will no doubt be the electric guitar, given the problem of filtering out the effects and the incredible speed at which the instrument can be played. People who don't play the guitar will probably not find this section too useful (or interesting!).
Notice how the waveform repeats perfectly - this indicates that only one note is being played. Despite this perfectly repeating waveform, there is still a small problem. An A-note is mathematically 440Hz, which is a perfect sine wave shape (as shown). So why does an A-note from a guitar look different? Because of a phenomenon called overtones. When you play an A-note on a guitar, not only do you get 440Hz, you also get 220, 880, and others at different octaves. 440Hz is the dominant waveform, but the other tones are noticeable when you look at the waveform. This is a small problem, though: by merely finding the dominant tone, you have your note! Electronic tuners, MIDI guitars, and harmonizers are perfect examples. Yet the minute things get a bit more complicated, so does the waveform, and consequently the computer has a much harder time figuring out the sequence of notes.
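Finding the dominant tone can be sketched with a Fourier transform: take the spectrum of the signal and read off the frequency with the most energy. The "guitar" below is synthetic - a 440Hz fundamental with weaker, illustrative overtone amplitudes - but the detection step is the same one a simple tuner performs.

```python
import numpy as np

SAMPLE_RATE = 8000           # samples per second
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # one second of audio

# Synthetic A-note: 440 Hz fundamental plus weaker overtones.
# The overtone amplitudes are illustrative, not measured from a guitar.
signal = (1.00 * np.sin(2 * np.pi * 440 * t)
          + 0.50 * np.sin(2 * np.pi * 880 * t)
          + 0.25 * np.sin(2 * np.pi * 1320 * t))

# Magnitude spectrum: the tallest peak is the dominant tone.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)
dominant = freqs[np.argmax(spectrum)]
print(round(dominant))   # 440
```

The overtones show up as smaller peaks at 880Hz and 1320Hz, but the peak at 440Hz wins, so the note is identified correctly - exactly the "find the dominant tone" shortcut described above, and exactly what stops working once several notes sound at once.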
Effects and Speed