On June 2nd I will present a paper titled Determining an Empirical Emission Model for the Auralization of Jet Aircraft at the Euronoise conference in Maastricht, The Netherlands. My presentation will be in the Auralisation of urban sound session. The conference is just a couple of weeks away, and I am still gathering and analysing results. Nevertheless, I thought it would be nice to give some insight into what I'm working on and what I will present at Euronoise.
## Emission model for auralizations
Currently I'm developing an emission model for jet aircraft that can be used for auralizations. Existing emission models for noise prediction generally predict sound pressure levels in 1/3-octave bands. That is sufficient for noise prediction, but auralization requires a finer spectral resolution: one needs to be able to model individual tones and, in certain cases, also modulations.
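To illustrate the difference: an amplitude-modulated tone and a band of noise can have exactly the same 1/3-octave band level, yet sound completely different. A small self-contained example (not part of my actual tooling):

import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
t = np.arange(fs) / fs

# A 1 kHz tone with a slow amplitude modulation.
tone = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)

# Noise limited to the 1 kHz 1/3-octave band (roughly 891-1122 Hz).
sos = butter(4, [891, 1122], btype='bandpass', fs=fs, output='sos')
noise = sosfilt(sos, np.random.randn(len(t)))

# Scale the noise to the same energy as the tone: identical 1/3-octave
# band level, completely different sound.
noise *= np.sqrt(np.mean(tone**2) / np.mean(noise**2))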
At Empa we have in the past conducted measurements for the sonAIR project, resulting in a large dataset that includes audio recordings at multiple sites, cockpit data and flight track data. Colleagues of mine are now using this dataset to develop a next-generation emission model for noise prediction, and I'm using it to develop an emission model for auralizations.
## Analysis of an event
Let's now have a look at one specific event and how I analyse such an event. I've included the Python code I use for the analysis, so you get a better idea of how I'm working. To give a sense of scale: there is over 500 GB of audio recordings and several hundred MB of other data. The audio is stored in a single HDF5 file using h5py, and all other data in a SQLite database. All data is handled using the amazing Blaze and pandas modules.
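To give an impression of what reading the audio looks like, here is a minimal h5py sketch; the file name, group layout and attribute name are hypothetical and not the actual structure of our file:

import h5py

with h5py.File('recordings.hdf5', 'r') as f:
    # One dataset per event/receiver combination (hypothetical layout).
    dataset = f['10_004_A320/A']
    samples = dataset[:]        # instantaneous pressure samples
    fs = dataset.attrs['fs']    # sample frequency in Hz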
from sonair.processing import *
%matplotlib inline
We begin by loading the data belonging to a combination of event (i.e. aircraft passage) and receiver, between a certain start and stop time. `start` and `stop` are given in seconds relative to an event reference time. This reference time is the time at which the aircraft is closest to any of the receivers, and it turned out to be quite convenient to use.
event = '10_004_A320'
receiver = 'A'
start = -5.
stop = +5.
analysis = EventAnalysis(event, receiver, start, stop)
We now have a nice object that gives easy access to all the data. For example, we can request the atmospheric pressure (in mbar) during that event
analysis.event.pressure
977.39999999999998
or the coordinates of the receiver (Swiss grid)
analysis.receiver.x, analysis.receiver.y, analysis.receiver.z
(682692.67099999997, 257054.26800000001, 422.048)
Obviously, we can also listen to the recording
from IPython.display import Audio
Audio(data=analysis.recording_as_signal, rate=analysis.recording_as_signal.fs)
or look at the measured instantaneous pressure as a pandas Series
analysis.recording.head()
2013-09-02 07:25:41.050000       0.618678
2013-09-02 07:25:41.050022676    0.577433
2013-09-02 07:25:41.050045352    0.587744
2013-09-02 07:25:41.050068028    0.659923
2013-09-02 07:25:41.050090704    0.763036
Name: signal, dtype: float64
The above values show the instantaneous sound pressure as a function of time. Perhaps more interesting is to consider the sound pressure level as a function of time
_ = analysis.recording_as_signal.plot_levels()
or the spectrogram, which shows the sound pressure not just as a function of time, but also as a function of frequency.
_ = analysis.recording_as_signal.spectrogram(ylim=(0.0, 8000.0))
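The plot_levels and spectrogram methods are part of my own tooling. For readers without it, here is a rough sketch of the underlying computations on a raw pressure array; the block length and parameter names are my choices for illustration, not necessarily what the sonair code does.

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

def levels(pressure, fs, block=0.125, p_ref=2e-5):
    # Equivalent sound pressure level per block of `block` seconds.
    n = int(block * fs)
    blocks = pressure[:len(pressure) // n * n].reshape(-1, n)
    return 10 * np.log10(np.mean(blocks**2, axis=1) / p_ref**2)

def plot_spectrogram(pressure, fs):
    # Power spectrogram in decibels.
    f, t, Sxx = spectrogram(pressure, fs)
    plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))
    plt.ylim(0, 8000)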
The spectrogram clearly shows the Doppler shift. Also visible, though perhaps not so obvious with the current axis ranges, is that tones are clearly visible and audible during the first couple of seconds, whereas towards the end they're almost gone. This is due to the directivity of the aircraft.
The emission model needs to predict the emission as a function of several parameters, like the velocity of the aircraft and/or thrust. What arrives at the receiver, however, also depends on the directivity, i.e., the emission as a function of the angles between source and receiver.
## Sound propagation
As the sound waves travel from the source to the receiver, several physical phenomena occur. First of all, the aircraft moves with a certain speed relative to the atmosphere, which causes a Doppler shift. Then, in open spaces sound spreads out, in our case mostly spherically. Furthermore, due to relaxation processes in the atmosphere the sound waves are attenuated. The receiver could move as well, resulting in an additional Doppler shift. Luckily, we were using fixed microphones, so this is a problem we don't have to deal with.
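To illustrate the first of these effects: for a moving source and a fixed receiver, the received frequency follows from the radial velocity of the source towards the receiver. A small computation with made-up numbers:

import numpy as np

c = 343.0  # speed of sound in m/s (illustrative value)
velocity = np.array([120.0, 0.0, 10.0])        # aircraft velocity in m/s
towards_receiver = np.array([0.6, 0.8, 0.0])   # unit vector source -> receiver
v_radial = np.dot(velocity, towards_receiver)

# f_received = f_emitted / (1 - v_radial / c)
f_emitted = 1000.0
f_received = f_emitted / (1.0 - v_radial / c)  # about 1266 Hz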
An unwanted effect that would be very difficult to correct for, however, is the contribution of reflections, and unfortunately we had ground reflections. To make matters worse, atmospheric turbulence adds a bit of randomness to the propagation. Joy!
## Calculating back to the source
In order to determine the actual emission of the aircraft we now have to undo these effects. This, however, is not that easy. I won't go into detail regarding the assumptions I made, nor the errors they introduce, but will instead briefly explain what I did.
Using meteorological information I could estimate the speed of sound and the attenuation in the atmosphere. Together with positional information of the aircraft and the receiver, I could undo the time lag and thus the Doppler shift (basically by doing the inverse of what I posted earlier on the Doppler shift), as well as the attenuation due to air absorption. These operations are all applied directly to the audio recording. Let's listen to the result!
Audio(data=analysis.reverted_as_signal, rate=analysis.reverted_as_signal.fs)
This sounds quite different from the original recording! The Doppler shift is gone, as you can also see in the spectrogram
_ = analysis.reverted_as_signal.spectrogram(ylim=(0.0, 8000.0))
and the level doesn't change that much over time anymore either.
_ = analysis.reverted_as_signal.plot_levels()
Unfortunately there was a click audible at the start. Fixing that is on my to-do list.
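To give an idea of what the de-Dopplerization amounts to, here is a strongly simplified sketch. The names are mine and the actual implementation differs; for one, it also compensates air absorption, and here the distance is, as a simplification, taken at reception rather than at emission time.

import numpy as np

def dedopplerize(recording, fs, distance, c=343.0):
    # `distance` is the source-receiver distance in metres at each sample.
    t_reception = np.arange(len(recording)) / fs
    # A sample received at time t was emitted at roughly t - d / c.
    t_emission = t_reception - distance / c
    # Resample the recording onto a uniform emission-time grid,
    # which removes the Doppler shift.
    uniform = np.arange(t_emission[0], t_emission[-1], 1.0 / fs)
    return np.interp(uniform, t_emission, recording)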
## Investigating the emission spectrum
Now that we've calculated back to the source, it is time to investigate the emission. The power spectral density shows many peaks, and there seems to be a pattern in their recurrence. It looks as if there is a constant spacing between the tones.
fig = analysis.reverted_as_signal.plot_power_spectrum(xscale='linear', xlim=(500, 3500))
Maybe we can check whether this is the case and determine this spacing? Luckily, we can. Using cepstral analysis, or more specifically the complex cepstrum, we can determine the spacing between these recurrences. I won't show in detail how I use the complex cepstrum, but will instead go straight to the result. The spacing corresponds to the frequency at which the engine's shaft rotates. This is the $N_1$ frequency, and it is determined to be
analysis.N1
73.13432836293677
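As an aside, the idea behind the cepstral analysis is that the cepstrum of a harmonic comb shows a peak at the quefrency corresponding to the tone spacing. A simplified sketch using the real cepstrum (I actually use the complex cepstrum, and this is not the actual sonair code):

import numpy as np

def tone_spacing(signal, fs, max_spacing=200.0):
    # Real cepstrum: inverse transform of the log magnitude spectrum.
    spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
    cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
    # The strongest peak above the minimum quefrency corresponds
    # to the spacing of the harmonic comb.
    lo = int(fs / max_spacing)
    peak = lo + np.argmax(cepstrum[lo:len(cepstrum) // 2])
    return fs / peak  # spacing in Hz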
The tones that you see and hear are multiples of this $N_1$ frequency and are called buzz-saw tones. However, one very strong peak is visible at
blades = 36
analysis.N1 * blades
2632.8358210657238
This is the blade passing frequency, which is obtained by multiplying $N_1$ by the number of fan blades. Multiples of the blade passing frequency are also present and audible. The figure below shows the power spectral density again, along with black and red lines representing buzz-saw tones and multiples of the blade passing frequency, respectively.
fig = analysis.reverted_as_signal.plot_power_spectrum(xscale='linear', xlim=(1500, 6000))
for harmonic in analysis.N1_harmonics:
    fig.axes[0].axvline(harmonic, color='black')
for harmonic in np.arange(1, 3) * analysis.N1 * blades:
    fig.axes[0].axvline(harmonic, color='red')
## Extracting features
The next step is to estimate the power of the tones and of the noise for all event and receiver combinations, and to use those estimates, along with estimates of velocity and thrust, to develop an emission model. As mentioned before, this means taking into account the directivity, and thus the orientation from source to receiver (or vice versa). The figure below shows the components of the unit vector pointing from receiver to source.
_ = analysis.orientation_receiver_to_source.ix[::10000].plot()
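Computing such an orientation vector is straightforward. A minimal sketch with the receiver coordinates shown earlier and a made-up aircraft position:

import numpy as np

receiver = np.array([682692.671, 257054.268, 422.048])
aircraft = np.array([683500.0, 256000.0, 1200.0])   # made-up position

difference = aircraft - receiver
orientation = difference / np.linalg.norm(difference)  # unit vector receiver -> source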
Right now I'm writing the code needed to automatically extract the features. Hopefully, in a couple of days I can begin analysing the extracted features. I'm looking forward to that!
## Then what?
After I've extracted the features and developed the initial model, I plan on doing some preliminary listening tests in our lab at Empa this summer. Depending on the results of those tests, further listening tests will follow in autumn at SP's listening room, which features a 3rd order Ambisonics system. And after that? Well, then I think it's time to write my thesis!