Weaving Code into Music
The world is full of patterns. Everywhere you look and listen, you can find little formations, equations, and recurring themes. Whether in art, in nature, in structures, or in sound, anyone who takes the time to observe will discover the most beautiful symmetries within them all. This article is about the fusion of two prevalent and vast fields that have these very symmetries as their foundations, music and programming, and how they work hand in hand to achieve the most curious of results.
[Photo: Image-Line's FL Studio software]
In today's technology-driven world, a computer producing music should not be perceived as an impossible feat. In fact, nearly all musicians today rely on computer software in one way or another: sample libraries, digital effects pedals, MIDI instruments, USB microphones, and of course digital audio workstations (DAWs) such as FL Studio and Pro Tools. A particularly famous example of musical digitization is autotune, used by artists such as T-Pain. His voice is processed so that it ends up sounding almost like a robot's, if robots were to have voices. This "robot" voice can jump from one pitch to another with no transition in between, very unnatural singing for any human! All of these, however, are merely tools and techniques that aid or complement sounds and music that ultimately come from a human being.
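That pitch-snapping effect can be sketched in a few lines. The following is a minimal illustration of the core idea only, quantizing a frequency to the nearest equal-temperament semitone; real autotune engines are vastly more sophisticated, and the `snap_to_semitone` function and 440 Hz reference here are my own illustrative choices, not anything from a specific product.

```python
import math

A4_HZ = 440.0  # reference pitch for A4 (a common, but not universal, choice)

def snap_to_semitone(freq_hz: float) -> float:
    """Snap a frequency to the nearest equal-temperament semitone."""
    # Convert Hz to a continuous MIDI note number (A4 = note 69)...
    midi = 69 + 12 * math.log2(freq_hz / A4_HZ)
    # ...round to the nearest whole note, then convert back to Hz.
    return A4_HZ * 2 ** ((round(midi) - 69) / 12)

# A slightly flat A4 (435 Hz) gets pulled up to exactly 440 Hz,
# with no glide in between: the source of the "robot" sound.
print(round(snap_to_semitone(435.0), 1))  # → 440.0
```

Applied to every short slice of a vocal recording, this rounding step is what removes the smooth transitions between notes.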
An angle less tackled than enhancing human-made music is having the music come from computers themselves. Instead of humans drawing on music theory and their cultural and stylistic influences to compose, would it not be possible for machines to do the same? After all, music has its own sets of rules, ranging from strict to loose. Johann Sebastian Bach is a shining example: one only needs to listen to his many fugues to hear how formal and structured they can be, yet they still reflect the genius of his mind in intricately entrancing melodies, harmonies, and counterpoint.
The field of programming, on the other hand, is all about logic, calculation, and step-by-step processes toward a certain goal. It is no surprise, then, that applications of programming are all around us, from our mobile phones and tablets to our computers, the point-of-sale systems in our shops, and much more. It would be a very interesting exploration, then, to see what programming can come up with when the same musical rules composers abide by are used in its calculations, attempting to make something more often born of the human mind.
This kind of research has actually existed since 1956, when the first computer-generated composition, the "Illiac Suite," was created, all the way up to more recent times: in 2012, the autonomous computer composer Iamus released a free, full-length album of contemporary classical music. The techniques used in composing such pieces generally deal either with style imitation, where a computer composes a piece of music based on a large sample set of human-made ones, or with computational creativity, where an attempt is made to model the way the human mind processes creativity, producing creative and original styles that do not necessarily conform to any rules.
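To make style imitation concrete, here is a toy sketch of one classic approach: learn note-to-note transition probabilities from a small "corpus" of melodies, then random-walk those transitions to generate a new melody in a similar style. This is only an illustration of the general idea with made-up functions and a tiny hand-written corpus; it is not the actual method behind the Illiac Suite or Iamus.

```python
import random
from collections import defaultdict

def train_markov(melodies):
    """Record which notes follow which in the training corpus."""
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)  # duplicates preserve transition frequency
    return table

def generate(table, start, length, seed=0):
    """Random-walk the transition table to produce a new melody."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = table.get(melody[-1])
        if not options:  # dead end: no known continuation
            break
        melody.append(rng.choice(options))
    return melody

# A tiny hand-written "corpus" of C-major melodies.
corpus = [["C", "D", "E", "G", "E", "D", "C"],
          ["E", "G", "A", "G", "E", "C"]]
table = train_markov(corpus)
print(generate(table, "C", 8))
```

Every step in the output is a transition that actually occurs somewhere in the corpus, which is why the result tends to sound stylistically "like" its training set; larger corpora and longer contexts give correspondingly more convincing imitations.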
Something quite pleasant to know is that this kind of research is not rare or exclusive to certain countries and cultures; it exists within the Ateneo de Manila University itself. The Department of Information Systems and Computer Science houses a Computational Sound and Music Laboratory, where undergraduates and graduates alike contribute theses that expand the field. A few examples from the batch of 2016 include studies on composition through affective computing derived from images, on predicting future musical trends from current ones, and on auditory conditioning to improve human sound localization. I myself am part of a team, with Carlo Nikolas Montenegro and Christine Joy Saavedra, that under the guidance of Dr. Andrei D. Coronel has devised a system combining multiple techniques and objectives to compose jazz chord progressions for given melodies.
The field of algorithmic music composition is continually expanding to this day, getting closer and closer to producing results that would make people think twice about whether pieces of music are exclusively "human" creations. This is not to say that algorithmic music composition aims to replace the creative input of man; it only explores it in ways never done before, opening our eyes to newer understandings of the infinitely rich human mind.
Photos from:
http://www.culturecatch.com/music/bach-born-330-years-ago
http://www.medienkunstnetz.de/works/illiac-suite/