Evolution of Musical Genres
I wanted to share some of my thoughts about the evolution of music over the course of history, both as someone who enjoys listening to music and as an occasional composer and musical software developer.
One theme I want to explore is how different musical genres focus on different attributes or “dimensions” of music. Often a new genre will emphasize certain aspects of music that have been neglected by previous genres, while simplifying or ignoring other aspects.
At a fundamental level, music can be thought of as an audio information stream with different aspects, each of which can either be simple or complex. These aspects or “dimensions” can be categorized as follows:
Melody — the ordered sequence of notes that forms a musical line.
Harmony — combinations of different pitches sounded simultaneously.
Rhythm — musical timing in general; more specifically, complex quasi-periodic sequences of musical events or “beats”.
Phrasing and Articulation — dynamic changes in loudness or intensity of individual notes.
Tone — the timbre, or actual sound, of the instrument being played.
Lyric — the words that accompany the music.
Each of these dimensions can encode information. In modern software parlance, you can think of each of these aspects as encoding a bit stream.
I want to be careful about my use of the word “bit” here. I’m not actually talking about bits in a computer, nor am I talking about the actual waveforms and frequencies of the music. Rather, I am trying to talk about the way music is perceived by the human mind. And for that, we need to talk about information from the standpoint of abstract information theory.
In this context, the word “bit” represents the smallest possible discrete unit of information: one bit is the amount of information in the answer to a single yes-or-no question whose two outcomes are equally likely. It’s a measure of how much information is present in a signal.
I’m not going to get into the details of psycho-acoustics or signal processing. All we need to understand at this point is that a complex melody or rhythm contains more information than a simple one, that is, it has a higher “bit rate”.
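To make the notion of a melodic “bit rate” concrete, here is a minimal Python sketch. The `entropy_bits` helper and the example melodies are my own illustration, not a standard tool; it estimates bits per note from the distribution of pitches alone:

```python
from collections import Counter
from math import log2

def entropy_bits(notes):
    """Zeroth-order estimate of bits per note: Shannon entropy of
    the pitch distribution. It ignores note order, so it is only a
    rough proxy for melodic complexity."""
    counts = Counter(notes)
    total = len(notes)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A repetitive two-note figure vs. a melody that roams the scale.
simple_melody  = ["C", "C", "G", "G", "C", "C", "G", "G"]
complex_melody = ["C", "E", "G", "B", "D", "F#", "A", "C#"]

print(entropy_bits(simple_melody))   # 1.0 bit per note
print(entropy_bits(complex_melody))  # 3.0 bits per note
```

The busier melody carries three times as many bits per note, which matches the intuition that it is harder to predict.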
From an information-theoretical perspective, unpredictable events have more information than predictable ones. If I tell you that the sun will rise tomorrow, I haven’t told you much — my statement contains essentially no information. On the other hand, if I tell you the sun won’t rise tomorrow, that’s a big deal.
However, unpredictable isn’t the same as random. If I flip a coin 100 times, what you would expect to see is a jumbled series of heads and tails. If, on the other hand, I told you I got 100 heads in a row, you would be very surprised. In this case the non-random result is more surprising than the random one, even though, strictly speaking, any specific sequence of 100 fair flips is equally improbable; what makes 100 heads stand out is that it matches a simple, recognizable pattern, while a typical jumble does not. It all depends on what your expectations are.
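To put rough numbers on this (the probabilities below are invented for illustration), surprise can be measured as the negative log of an event’s probability:

```python
from math import log2

def surprisal_bits(p):
    """Information content of an event with probability p, in bits."""
    return -log2(p)

# "The sun will rise tomorrow" is nearly certain: almost zero bits.
print(surprisal_bits(0.999999))    # ~1.4e-06 bits

# Under a fair-coin model, any *specific* run of 100 flips has
# probability 2**-100, so all-heads and a typical jumble both
# score 100 bits. The difference lies in the categories: there is
# exactly one all-heads sequence, but a vast number of jumbles.
print(surprisal_bits(0.5 ** 100))  # 100.0 bits
```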
I should note here that the technical term for a completely random signal is noise. In human terms, “noise” is a signal that cannot be interpreted by the listener.
In music, predictability is defined both by the mathematics of musical theory as well as by human culture and experience. Certain combinations of tones are pleasing to us, and appear often in musical works, making them highly predictable. However, when music is too predictable, it becomes stale and uninspiring. In information terms, it has a low bit rate.
There is also an upper limit on bit rate — a limit to how much complexity the human ear can comprehend. If the signal is too complex, audiences can no longer interpret the music and it starts to sound like noise. It becomes “inaccessible” and can only be appreciated by a small number of expert listeners.
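One crude way to quantify “too predictable” (a sketch of my own, far simpler than real models of musical expectation): fit a first-order Markov model to a melody and ask how surprising each note is given the note before it:

```python
from collections import Counter, defaultdict
from math import log2

def markov_bits_per_note(notes):
    """Average surprisal per note under a first-order Markov model
    fitted to the melody itself. Fully predictable sequences score
    zero; fitting and scoring on the same data keeps the toy simple
    but flatters the estimate."""
    transitions = defaultdict(Counter)
    for prev, cur in zip(notes, notes[1:]):
        transitions[prev][cur] += 1
    bits = 0.0
    for prev, cur in zip(notes, notes[1:]):
        counts = transitions[prev]
        bits += -log2(counts[cur] / sum(counts.values()))
    return bits / (len(notes) - 1)

# A stale four-note loop: once you've heard one cycle, the rest
# of the piece carries no new information at all.
stale = ["C", "D", "E", "F"] * 8
print(markov_bits_per_note(stale))  # 0.0 bits per note
```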
The key point that I want to get across in this essay is that this perceptual upper limit doesn’t apply to each of the musical dimensions individually, but rather it applies to all of them together. That is, you can have complex melodies and harmonies combined with simple rhythms; or you can have complex rhythms and phrasing combined with simple melodies. What you can’t have is complex everything all at once.
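The tradeoff can be caricatured as a budget. The numbers below are pure invention (nobody has measured a perceptual cap of ten bits per second, and the genre profiles are mine), but the shape of the constraint is the point:

```python
# Hypothetical ceiling on total perceivable complexity, in
# bits per second. The value is invented for illustration.
PERCEPTUAL_CAP = 10.0

def is_accessible(bits_per_dimension):
    """A piece stays accessible only if its combined complexity,
    summed across all dimensions, fits under the cap."""
    return sum(bits_per_dimension.values()) <= PERCEPTUAL_CAP

# Invented profiles: complex melody with simple rhythm fits, as
# does complex rhythm and tone with simple melody; maxing out
# every dimension at once does not.
baroque = {"melody": 6, "harmony": 2, "rhythm": 1, "tone": 0}
rock    = {"melody": 1, "harmony": 1, "rhythm": 4, "tone": 3}
maximal = {"melody": 6, "harmony": 5, "rhythm": 5, "tone": 4}

print(is_accessible(baroque))  # True
print(is_accessible(rock))     # True
print(is_accessible(maximal))  # False: over budget, reads as noise
```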
Because of these limits on complexity, the history of music can be seen as a history of tradeoffs, with composers in one era making a different set of tradeoffs than composers in another. When you make one factor more complex, you have to simplify other factors in compensation. Thus, each era or period focuses on some musical attributes while neglecting or ignoring others.
In addition, music evolves within each period. There is an upward slope of complexity within a single genre. This happens not only because audiences become familiar with the conventions of the genre, but also because musical artists are in competition, and seek to outdo (or be inspired by) one another, aiming for new heights of musical prowess. There is also an opposite tendency for music to become increasingly formulaic and standardized — more predictable — as a genre matures. This is especially true in music intended for a mass audience, where commercial concerns and the desire for wide popularity place a lower cap on the degree of complexity of a work than there would be for a more esoteric piece aimed at a specialized audience.
Let’s talk about the history of some musical genres and their tradeoffs:
In the Baroque period, composers like Bach primarily explored melodic and contrapuntal complexity, while later classical and Romantic music explored complexities of harmony and phrasing.
Jazz was a departure from classical music that simplified some aspects (large-scale form and orchestration) and complicated others (syncopated rhythm, extended harmony, improvisation), breaking the boundaries of what was considered “predictable” on many fronts.
Les Paul’s pioneering work on the solid-body electric guitar, and the subsequent development of guitar distortion, opened up a new avenue of expression: tonal manipulation.
The infusion of African rhythms into blues, gospel, and early rock paved the way for the exploration of more complex rhythms. But with rock ’n’ roll’s complex overdriven tones and beats came a corresponding simplification of melody and harmony.
Electronic styles like trance and techno took tonal manipulation further still: the melody and beat are almost comically simple, but the real performance is in the artist’s manipulation of the sound-generating parameters.
And rap music eschews melody almost entirely, focusing primarily on the lyrical (and rhythmic) complexity of the work.
Each genre has a lifecycle: it starts by exploring some fresh new dimension of sound, encoding information within that dimension, while departing from the previous style by simplifying the other dimensions. Over time, complexity increases; but eventually the genre becomes so complex that the mass audience can no longer easily decode it, or it gets into a rut and starts to become repetitive. Eventually, the genre collapses under its own weight. This is the point at which the audience is ready to accept a radical new departure, and a new period begins.
The ability to interpret or “parse” the late-stage works of a genre requires experience, and in some cases you had to have grown up with a genre to be able to listen to it. Two examples from my own experience: (1) I can’t make any sense of Mahler, a late-stage classical composer known for extremely complex harmonies. (2) The alt-rock band “The Cure” is known for its heavy guitar distortion, which to me just sounds like white noise; people who grew up listening to that stuff can make sense of it.
What about the future? That’s a good question. On the one hand, it doesn’t seem like there are a lot of audio dimensions left for exploration, at least that we know of. On the other hand, artists often surprise us, and there may be some composer already out there, writing ethereal songs in (say) 17/8 time, who might be the forerunner of the next wave.