1.1 Abstract

 

     Fourier’s Theorem is ‘a mathematical theorem stating that a periodic function f(x) which is reasonably continuous may be expressed as the sum of a series of sine or cosine terms (called the Fourier series), each of which has specific amplitude and phase coefficients known as Fourier coefficients.’ [1]  To test the idea that any sound can be built out of sine waves, the spectral content and decay behavior of an instrument were analyzed and then recreated.  The waveforms of each open string on a guitar were recorded, and after the first eleven harmonics on each string were identified and measured, each partial was replicated with signal generators in Pro Tools.  It was concluded that the synthesized strings were very comparable to their acoustic counterparts in both their frequency spectrum and their sound, and that a more thorough replication of an acoustic instrument could yield a very playable and realistic MIDI instrument.

 

 

1.2 Introduction

 

     The study began as an attempt to replicate several instruments using Fourier’s Theorem, to see how difficult it would be to create realistic-sounding virtual instruments out of sine waves.  This initial goal proved too ambitious, so instead of covering multiple instruments (Kalimba, Glockenspiel, Mandolin, and Voice), the research was narrowed to the acoustic guitar alone.  A previous study by students at Columbia University, New York proved helpful during the research stage for understanding other methods of synthesis and for experimenting with constructing a realistic attack. [2]  The initial thought was to use white and pink noise bursts to recreate the attack timbre, but it soon became clear that measuring the content of the attack was more complicated than anticipated: the action of a pick striking a string carries too many spectral cues to be adequately recreated with signal-generated noise.  In light of this, the actual attack from each guitar string was separated from the rest of the waveform and placed at the beginning of the synthesized partials.  The next most defining characteristic of an instrument is the decay of the individual overtones of the waveform, so data was also taken (and taken into account during the synthesis stage) on how each partial lost energy over time.  According to a study on decay by B. Roberts, ‘For most physical systems the amplitude of an oscillation decays exponentially.  Thus the sound from an impulsively excited instrument (plucked string, drum, etc.) will also exhibit exponential decay.’ [3]  This confirmed the suspicion that the decay would be linear on a dB scale, and it was translated as such in the synthesis (see the sketch below).
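     As a quick sanity check on that translation, the step from exponential amplitude decay to a linear dB ramp can be sketched in a few lines of Python. This is not code from the study; the initial amplitude and time constant are hypothetical placeholders:

    import numpy as np

    # Exponential amplitude decay, A(t) = A0 * exp(-t / tau), is a straight
    # line on a dB scale: 20*log10(A(t)/A0) = -(20 / (tau * ln 10)) * t.
    A0, tau = 1.0, 0.8            # hypothetical initial amplitude and time constant
    t = np.linspace(0.0, 2.0, 5)  # evenly spaced times over the 2 s window
    amplitude = A0 * np.exp(-t / tau)
    level_db = 20.0 * np.log10(amplitude / A0)

    # The level drops by the same number of dB per step: a constant slope.
    print(np.diff(level_db))      # every difference is identical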

 

 

1.3 Methodology

 

     To collect the data, Pro Tools 10, a Focusrite interface, and an MXL V67 condenser microphone were set up in the same room and used to record each of the six individual strings of an acoustic guitar.  Once all strings were recorded into Pro Tools at a reasonable level, each string was bounced to its own WAV file at a resolution of 24 bits and a sample rate of 48 kHz, then brought into Ableton Live 8 and Studio One for analysis of its frequency spectrum.  Using Ableton’s and Studio One’s spectral analyzers, the musical pitch and amplitude of the first eleven partials were identified and recorded.  The pitch measurement was only taken once, because the pitch of the harmonics stayed constant over time, but the amplitude measurement was taken twice: once at t = 0 seconds, to record the maximum amplitude of each partial, and once more at t = 2 seconds, to see how each partial decayed over time.  The concert-pitch data was converted into Hertz, and the frequency and initial amplitude of each partial were entered into Pro Tools’ Signal Generator to recreate the initial behavior of each string.  For ten of the eleven partials, the Signal Generator was set to produce a sine wave; for the partial that was the highest octave multiple of the string’s fundamental pitch, a triangle wave was used instead, to fill out the upper frequencies with appropriate multiples of the fundamental without taking more data.  The volume of each sine wave was adjusted with fader automation to decrease linearly until it reached the amplitude measured at t = 2 seconds, roughly simulating the decay behavior of each partial.  The volume was kept constant after t = 2 seconds, however, so that the final instrument could hold out notes as long as a key was pressed.  After the harmonic content and decay of the strings were adequately synthesized, the attack of the acoustic guitar was separated from the rest of the waveform and spliced in at the beginning of each synthesized string.
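     The actual work was done with Pro Tools’ Signal Generator and fader automation, but the procedure can be sketched in Python for readers who want to reproduce it in code. Every frequency and level below is an illustrative placeholder, not a measured value from the study:

    import numpy as np

    SR = 48000          # sample rate used in the study
    DECAY_TIME = 2.0    # seconds over which each partial fades to its t = 2 s level

    def triangle(freq, t):
        """Naive (non-bandlimited) triangle wave, used for the highest-octave partial."""
        return 2.0 * np.abs(2.0 * ((t * freq) % 1.0) - 1.0) - 1.0

    def synthesize_string(partials, duration=4.0):
        """partials: list of (freq_hz, level_db_at_t0, level_db_at_t2, is_triangle)."""
        t = np.arange(int(SR * duration)) / SR
        out = np.zeros_like(t)
        for freq, db0, db2, is_tri in partials:
            wave = triangle(freq, t) if is_tri else np.sin(2.0 * np.pi * freq * t)
            # Linear dB ramp from the t = 0 level down to the t = 2 s level,
            # held constant afterwards so a note can sustain while a key is down.
            db = np.where(t < DECAY_TIME, db0 + (db2 - db0) * t / DECAY_TIME, db2)
            out += wave * 10.0 ** (db / 20.0)
        return out / np.max(np.abs(out))   # normalize the summed partials

    # Hypothetical B string: fundamental plus a few partials (values made up).
    b_string = synthesize_string([
        (246.9, -12.0, -18.0, False),   # fundamental
        (493.9, -10.0, -20.0, False),   # 1st harmonic (octave)
        (740.8, -20.0, -34.0, False),   # 2nd harmonic
        (987.8, -24.0, -40.0, True),    # highest octave multiple -> triangle wave
    ])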

 

     When it came time for the strings to be turned into a MIDI instrument, the strings and their spliced attacks were exported to Propellerhead’s Reason.  The NN-XT Advanced Sampler was used to load and map the samples across the keyboard, with each string set to loop through a portion of the post-two-second, constant-volume part of the waveform for as long as a note was held.  An envelope-controlled filter was added to each string individually so that even after the two-second mark, the higher harmonics of the strings would still decay in a natural-sounding way.  The release behavior of the strings was handled by a single global parameter, to make adjusting the release to fit a musical project as easy as possible.
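     The looping itself is handled internally by the NN-XT, but the underlying idea of sustaining a held note by cycling a slice of the constant-volume region can be sketched as follows; the loop points here are hypothetical, not the ones used in the sampler patch:

    import numpy as np

    SR = 48000

    def sustain_note(sample, loop_start_s, loop_end_s, hold_s):
        """Play a sample up to the loop end, then repeat the loop region until
        hold_s seconds have elapsed, mimicking how a sampler sustains a held key
        past the end of the recorded decay."""
        start, end = int(loop_start_s * SR), int(loop_end_s * SR)
        out = list(sample[:end])          # attack + decay up to the loop point
        loop = list(sample[start:end])    # slice of the constant-volume region
        while len(out) < int(hold_s * SR):
            out.extend(loop)
        return np.array(out[: int(hold_s * SR)])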

     Finally, the data was compiled to look for a relation between frequency and decay.  The two amplitude measurements for each partial were subtracted from each other, yielding the number of decibels each partial decayed over the course of two seconds.  This set of values was then plotted against the frequency in Hertz of each partial, in the hope that a pattern would emerge.
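     The compilation step amounts to one subtraction per partial. A minimal sketch, with made-up numbers standing in for the measured levels:

    # For each partial: (frequency_hz, level_db_at_t0, level_db_at_t2).
    # The numbers below are illustrative, not the measured data.
    partials = [
        (246.9, -12.0, -18.0),
        (493.9, -10.0, -20.0),
        (740.8, -20.0, -34.0),
    ]

    # Decay over two seconds, in dB, for each partial.
    decay_db = [(f, db0 - db2) for f, db0, db2 in partials]
    for freq, decay in decay_db:
        print(f"{freq:7.1f} Hz decayed {decay:4.1f} dB in 2 s")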

 

 

1.4 Results

 

     It was found that instruments can successfully be synthesized by closely analyzing the spectral and harmonic content that makes up the individual strings of a guitar.  After data collection, the frequency spectra of the synthesized strings were found to closely resemble those of the real strings (see Figures 1 and 2).  Since synthesizing room noise was not a goal, everything below the fundamental was left untouched; a MIDI instrument should not come with a room sound already attached, because it will be placed in whatever space best fits the production.  The use of a triangle wave rather than a sine wave on the last harmonic worked well to fill out the upper frequencies with multiples of the fundamental, which gave a more ‘full’-sounding sample in the end.

Figure 1: B-String Acoustic Guitar Frequency Spectrum Graph

Figure 2: B-String Synthesized Acoustic Guitar Frequency Spectrum Graph

 

 

     Because the actual attack of the instrument was used in the virtual instrument, very realistic-sounding tones could be generated.  This decision was based on earlier attempts to replicate the attack with pink and white noise, which had little success.  Compared to various other synthesized guitars, this instrument, although lacking polish, sounded as good as or better than most.

Figure 3: Frequency vs. Decay

 

 

     As far as the relationship between frequency and decay is concerned, the graph shows that there is indeed some proportionality between how much energy a tone takes to produce (with lower frequencies requiring more energy) and how long the tone will sustain.  The deviations from the trendline in the lower frequencies are suspected to be due to the varying importance of the harmonics: the higher harmonics of the low E string live in the same frequency range as the fundamental partials of the high E string, and the distance of the partials from their fundamental tone leads to varying decay times.  As frequency increases, however, and all of the partials are reasonably distanced from their fundamental tone, the fit to the trendline becomes more and more apparent.  The trendline in question is a second-degree polynomial, but no value derived from it can be directly translated into the relationship between frequency and decay.  More partials and amplitude measurements would no doubt make the nature of that relationship clearer.
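     For reference, a second-degree polynomial trendline of the kind described can be fitted with NumPy’s least-squares polynomial fit; the arrays below are placeholders standing in for the measured frequency/decay pairs, not the study’s data:

    import numpy as np

    # Placeholder data: frequency of each partial (Hz) and its 2 s decay (dB).
    freq = np.array([82.4, 246.9, 493.9, 987.8, 1975.5])
    decay = np.array([14.0, 10.0, 12.0, 18.0, 26.0])

    # Least-squares fit of a second-degree polynomial, as in Figure 3's trendline.
    coeffs = np.polyfit(freq, decay, deg=2)
    trend = np.poly1d(coeffs)
    print(trend(440.0))   # predicted decay at an arbitrary frequency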

It should be noted that the two middle strings, D and G, had much longer decays than any of their neighbors.  This is probably attributable to the resonant body of the guitar, which is designed to resonate most strongly in the middle of the guitar’s pitch range; the two middle strings excited the guitar body more, and therefore had a longer sustain.  It should also be noted that the first harmonic (an octave above the fundamental pitch) of some of the strings was actually louder than the fundamental itself.  Whether this is a property of the strings or of the guitar body is unclear.

 

1.5 Conclusion

     To conclude, it was found to be very possible to create a MIDI instrument using additive synthesis.  The method of measuring each individual harmonic and its decay time proved tedious, but yielded great-sounding results when compared to similar MIDI instruments.  Furthermore, an even more realistic synthesis would have been possible if more harmonics had been measured and accounted for.

     Although recreating the decay of the acoustic guitar strings was straightforward, recreating the attack of the strings was much more difficult.  Recreating the attack with white noise seemed like a possible solution, but it lacked the natural sound, timbre, and color of the pick striking the string, because the string’s vibration at the transient is much more complex than its decay.

     With the information gathered from this experiment, it would be possible to recreate the other instruments that were originally part of this study (Kalimba, Glockenspiel, Mandolin, and Voice) using the same technique of analysis and additive synthesis.  For that matter, it would be possible to recreate any such instrument according to Fourier’s Theorem, so long as one has the tenacity to map out a plethora of harmonics.

 

 

1.6 References

 

     [1] B. Truax, “Handbook for Acoustic Ecology,” World Soundscape Project, 1999, s.v. “Fourier’s Theorem.”

 

     [2] S. Sanders and R. Weiss, “Synthesizing a Guitar Using Physical Modeling Techniques,” [Online] Available: http://www.ee.columbia.edu/~ronw/dsp/

 

     [3] B. L. Roberts, “Exponential Decay,” [Online] Available: http://physics.bu.edu/py231/exponential_decay.pdf

 

     [4] T. Tolonen and H. Järveläinen, “Perceptual Study of Decay Parameters in Plucked String Synthesis,” [Online] Available: http://lib.tkk.fi/Diss/2000/isbn9512251965/article10.pdf

 

     [5] K. Bradley, “Synthesis of an Acoustic Guitar with a Digital String Model and Linear Prediction.” [Online] Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.129.381&rep=rep1&type=pdf

 

     [6] R. G. Laughlin, B. D. Truax, and B. V. Funt, “Synthesis of Acoustic Timbres using Principal Component Analysis,” Computer Music Association, 1990, pp. 95-99.

 
