
Acoustics/Print version



Acoustics is the science that studies sound, in particular its production, transmission, and effects. Sound can often be something pleasant, music being an example; in that case a main application is room acoustics, since the purpose of room-acoustical design and optimisation is to make a room sound as good as possible. But noise can also be unpleasant and make people uncomfortable, and noise reduction is a major challenge, in particular in the transportation industry, as people demand ever quieter products. Furthermore, ultrasound has applications in detection, such as sonar systems and non-destructive material testing. The articles in this wikibook describe the fundamentals of acoustics and some of the major applications.

Table of contents

Fundamentals

  1. Fundamentals of Acoustics
  2. Fundamentals of Room Acoustics
  3. Fundamentals of Psychoacoustics
  4. Sound Speed
  5. Filter Design and Implementation
  6. Flow-induced oscillations of a Helmholtz resonator
  7. Active Control

Applications

Applications in Room Acoustics

  1. Anechoic and reverberation rooms
  2. Basic Room Acoustic Treatments

Applications in Psychoacoustics

  1. Human Vocal Fold
  2. Threshold of Hearing/Pain


Musical Acoustics Applications

  1. Microphone Technique
  2. Microphone Design and Operation
  3. Acoustic Loudspeaker
  4. Sealed Box Subwoofer Design

Miscellaneous Applications

  1. Bass-Reflex Enclosure Design
  2. Polymer-Film Acoustic Filters
  3. Noise in Hydraulic Systems
  4. Noise from Cooling Fans
  5. Piezoelectric Transducers


Fundamentals of Acoustics

Introduction

Sound is an oscillation of pressure transmitted through a gas, liquid, or solid in the form of a traveling wave, and can be generated by any localized pressure variation in a medium. An easy way to understand how sound propagates is to consider that space can be divided into thin layers. The vibration (the successive compression and relaxation) of these layers, at a certain velocity, enables the sound to propagate, hence producing a wave. The speed of sound depends on the compressibility and density of the medium.

In this chapter, we will only consider the propagation of sound waves in an area without any acoustic source, in a homogeneous fluid.

Equation of waves

Sound waves consist of the propagation of a scalar quantity, the acoustic over-pressure. The propagation of sound waves in a stationary medium (e.g. still air or water) is governed by the following wave equation:

\frac{\partial^2 p}{\partial t^2} - c_0^2 \, \Delta p = 0

This equation is obtained using the conservation equations (mass, momentum and energy) and the thermodynamic equations of state of an ideal gas (or of an ideally compressible solid or liquid), supposing that the pressure variations are small, and neglecting viscosity and thermal conduction, which would give other terms, accounting for sound attenuation.

In this wave equation, c_0 is the propagation velocity of the sound wave (which has nothing to do with the vibration velocity of the air layers). This propagation velocity has the following expression:

c_0 = \frac{1}{\sqrt{\rho_0 \, \chi_S}}

where ρ_0 is the density and χ_S is the (adiabatic) compressibility coefficient of the propagation medium.

Helmholtz equation

Since the velocity field of acoustic waves is irrotational, we can define an acoustic potential φ by:

\vec{v} = \vec{\nabla} \phi

Using the propagation equation of the previous paragraph, it is easy to show that the potential satisfies the same wave equation:

\frac{\partial^2 \phi}{\partial t^2} - c_0^2 \, \Delta \phi = 0

Applying the Fourier transform, we get the widely used Helmholtz equation:

\Delta \hat{\phi} + k^2 \hat{\phi} = 0

where k = ω/c_0 is the wave number associated with the angular frequency ω. Using this equation is often the easiest way to solve acoustical problems.

Acoustic intensity and decibel

The acoustic intensity represents the acoustic energy flux associated with the wave propagation:

\vec{I} = p \, \vec{v}

We can then define the average intensity ⟨I⟩, obtained by time-averaging over one period.

However, the acoustic intensity does not give a good idea of the perceived sound level, since the sensitivity of our ears is logarithmic. Therefore we define decibel levels, either from the acoustic over-pressure or from the average acoustic intensity:

L_p = 20 \log_{10}\!\left(\frac{p_{rms}}{p_{ref}}\right) \; ; \quad L_I = 10 \log_{10}\!\left(\frac{\langle I \rangle}{I_{ref}}\right)

where p_ref = 2 × 10⁻⁵ Pa is the reference pressure in air (other reference values are used in other media), and I_ref = 10⁻¹² W/m².
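As a quick numerical illustration (a minimal sketch, not from the original text; the helper names are ours), the two decibel definitions can be evaluated directly using the standard reference values for air:

    import math

    P_REF = 2e-5      # reference RMS pressure in air, Pa
    I_REF = 1e-12     # reference intensity, W/m^2

    def spl_from_pressure(p_rms):
        """Sound pressure level L_p = 20 log10(p_rms / p_ref), in dB."""
        return 20 * math.log10(p_rms / P_REF)

    def il_from_intensity(i_avg):
        """Sound intensity level L_I = 10 log10(<I> / I_ref), in dB."""
        return 10 * math.log10(i_avg / I_REF)

    print(spl_from_pressure(1.0))   # ~94 dB for a 1 Pa RMS pressure
    print(il_from_intensity(1e-4))  # 80 dB for 10^-4 W/m^2 (cf. the psychoacoustics chapter)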

Solving the wave equation

Plane waves

If we study the propagation of a sound wave far from the acoustic source, it can be considered as a plane 1-D wave. If the direction of propagation is along the x axis, the solution is:

p(x,t) = f\!\left(t - \frac{x}{c_0}\right) + g\!\left(t + \frac{x}{c_0}\right)

where f and g can be any function. f describes the wave motion toward increasing x, whereas g describes the motion toward decreasing x.

The momentum equation provides a relation between p and v, which leads to the expression of the specific impedance, defined for a progressive plane wave as:

Z = \frac{p}{v} = \rho_0 c_0

And still in the case of a plane wave, we get the following expression for the average acoustic intensity:

\langle I \rangle = \frac{p_{rms}^2}{\rho_0 c_0}
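As a hedged numerical sketch (the air properties are the 20 °C values tabulated in the Sound Speed chapter), the specific impedance of air and the average intensity of a plane wave can be computed from the RMS pressure:

    import math

    rho0 = 1.204   # density of air at 20 C, kg/m^3
    c0 = 343.4     # speed of sound at 20 C, m/s

    z0 = rho0 * c0                      # specific impedance of air, ~413 N*s/m^3

    def plane_wave_intensity(p_rms):
        """Average intensity <I> = p_rms^2 / (rho0 * c0) for a progressive plane wave."""
        return p_rms**2 / z0

    print(round(z0, 1))                       # ~413.5
    print(plane_wave_intensity(1.0))          # ~2.4e-3 W/m^2 for a 1 Pa RMS plane wave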

Spherical waves

More generally, waves propagate in every direction and are spherical waves. In this case, the solution for the acoustic potential is:

\phi(r,t) = \frac{1}{r} f\!\left(t - \frac{r}{c_0}\right)

The fact that the potential is inversely proportional to the distance from the source is simply a consequence of the conservation of energy. For spherical waves, we can also easily calculate the specific impedance as well as the acoustic intensity.

Boundary conditions

Concerning the boundary conditions which are used for solving the wave equation, we can distinguish two situations. If the medium is not absorptive, the boundary conditions are established using the usual equations for mechanics. But in the situation of an absorptive material, it is simpler to use the concept of acoustic impedance.

Non-absorptive material

In that case, we get explicit boundary conditions on stresses and velocities at the interface. These conditions depend on whether the media are solids, inviscid fluids or viscous fluids.

Absorptive material

Here, we use the acoustic impedance as the boundary condition. This impedance, which is often obtained from experimental measurements, depends on the material, on the fluid and on the frequency of the sound wave.

Fundamentals of Room Acoustics

Introduction

Three theories are used to understand room acoustics:

  1. The modal theory
  2. The geometric theory
  3. The theory of Sabine

The modal theory

This theory starts from the homogeneous Helmholtz equation. Considering the simple geometry of a parallelepiped (L1, L2, L3), the solution of this problem is sought with separated variables:

p(x,y,z) = X(x)\,Y(y)\,Z(z)

Hence each function X, Y and Z has the form:

X(x) = A \cos(k_x x) + B \sin(k_x x)

With the boundary condition \frac{\partial p}{\partial x} = 0 for x = 0 and x = L_1 (rigid walls, and similarly in the other directions), the expression of the pressure is:

p(x,y,z) = P_0 \cos(k_x x)\cos(k_y y)\cos(k_z z) \quad \text{with} \quad k_x = \frac{n_1 \pi}{L_1},\; k_y = \frac{n_2 \pi}{L_2},\; k_z = \frac{n_3 \pi}{L_3}

where n_1, n_2, n_3 are whole numbers.

This is a three-dimensional standing wave. Acoustic modes appear, each with its modal frequency and its modal shape. For a non-homogeneous problem, i.e. a problem with an acoustic source somewhere in the room, the final pressure at any point is the sum of the contributions of all the modes described above.

The modal density is the number of modal frequencies contained in a band of 1 Hz. It depends on the frequency f, the volume of the room V and the speed of sound c:

\frac{dN}{df} \approx \frac{4 \pi V f^2}{c^3}

The modal density grows with the square of the frequency, so it increases rapidly with frequency. Above a certain frequency, the modes can no longer be distinguished and the modal theory is no longer relevant.
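To make the modal picture concrete, here is a small sketch (the room dimensions are arbitrary assumptions) that lists the modal frequencies f = (c/2)·sqrt((n1/L1)² + (n2/L2)² + (n3/L3)²) of a rigid-walled rectangular room and counts how many fall below a given frequency, illustrating how quickly the modal density grows:

    import math

    c = 343.0                    # speed of sound, m/s
    L1, L2, L3 = 5.0, 4.0, 3.0   # example room dimensions in metres (assumed)

    def modal_frequency(n1, n2, n3):
        """Modal frequency of a rigid-walled rectangular room."""
        return (c / 2.0) * math.sqrt((n1 / L1)**2 + (n2 / L2)**2 + (n3 / L3)**2)

    def modes_below(f_max, n_max=60):
        """Count modes (excluding n1=n2=n3=0) whose modal frequency is below f_max."""
        count = 0
        for n1 in range(n_max):
            for n2 in range(n_max):
                for n3 in range(n_max):
                    if (n1, n2, n3) != (0, 0, 0) and modal_frequency(n1, n2, n3) < f_max:
                        count += 1
        return count

    print(round(modal_frequency(1, 0, 0), 1))   # lowest axial mode, ~34.3 Hz
    print(modes_below(100), modes_below(200))   # the cumulative mode count grows roughly as f^3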

The geometric theory

For rooms of large volume or with a complex geometry, geometrical acoustics can be applied. The waves are modelled as rays carrying acoustic energy. This energy decreases at each reflection of the rays on the walls of the room, because of the absorption of the walls.

The drawback of this theory is that it requires a great deal of computation, which is why the theory of Sabine is often chosen instead: it is much simpler.

The theory of Sabine

Description of the theory

This theory uses the hypothesis of a diffuse field: the acoustic field is homogeneous and isotropic. To obtain such a field, the room has to be sufficiently reverberant and the frequencies have to be high enough to avoid the effects of predominant modes.

The variation of the acoustic energy E in the room can be written as:

\frac{dE}{dt} = W_s - W_a

where W_s and W_a are respectively the power generated by the acoustic source and the power absorbed by the walls.

The absorbed power is related to the energy density e = E/V in the room:

W_a = \frac{a\,c\,e}{4}

where a is the equivalent absorption area, defined as the sum of the products of the absorption coefficient and the area of each material in the room:

a = \sum_i \alpha_i S_i

The final equation is therefore:

V \frac{de}{dt} = W_s - \frac{a\,c\,e}{4}

and the stationary energy level is:

e_\infty = \frac{4 W_s}{a\,c}

Reverberation time

With this theory established, the reverberation time can be defined: it is the time needed for the energy level to decrease by 60 dB. It depends on the volume of the room V and on the equivalent absorption area a:

T_{60} = \frac{0.16\,V}{a} \quad \text{(Sabine formula)}

This reverberation time is the fundamental parameter of room acoustics and depends, through the equivalent absorption area and the absorption coefficients, on frequency. It is used for several measurements (a short numerical sketch follows the list):

  • Measurement of an absorption coefficient of a material
  • Measurement of the power of a source
  • Measurement of the transmission of a wall
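As a minimal sketch of how the Sabine formula T60 ≈ 0.16 V / a is used in practice (the room volume and the absorption coefficients below are invented purely for illustration):

    # Sabine reverberation time: T60 ~ 0.16 * V / a, with a = sum(alpha_i * S_i)
    V = 200.0   # room volume, m^3 (assumed)

    # (absorption coefficient, surface area in m^2) of each material -- illustrative values
    surfaces = [
        (0.02, 100.0),   # painted concrete walls
        (0.30, 50.0),    # carpeted floor
        (0.60, 10.0),    # acoustic ceiling tiles
    ]

    a = sum(alpha * S for alpha, S in surfaces)   # equivalent absorption area, m^2
    T60 = 0.16 * V / a
    print(f"a = {a:.1f} m^2, T60 = {T60:.2f} s")  # a = 23.0 m^2, T60 = 1.39 s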

Fundamentals of Psychoacoustics

According to the famous principle enunciated by Gustav Theodor Fechner, perceived sensation does not follow a linear law, but a logarithmic one. The perception of the intensity of light, or the sensation of weight, follows this law as well. This observation justifies the use of logarithmic scales in the field of acoustics. An 80 dB (10⁻⁴ W/m²) sound seems to be twice as loud as a 70 dB (10⁻⁵ W/m²) sound, although there is a factor of 10 between the two acoustic powers. This is quite a naïve law, but it led to a new way of thinking about acoustics: trying to describe auditory sensations. That is the aim of psychoacoustics. As the neurophysiological mechanisms of human hearing have not been successfully modeled, the only way of dealing with psychoacoustics is to find metrics that best describe the different aspects of sound.

Perception of sound

The study of sound perception is limited by the complexity of the human ear mechanisms. The figure below represents the domain of perception and the thresholds of pain and hearing. The pain threshold is roughly frequency-independent (around 120 dB across the audible bandwidth). By contrast, the hearing threshold, like all equal-loudness curves, is frequency-dependent. In the centre lie the typical frequency and loudness ranges of human voice and music.

Phons and sones

Phons

Two sounds of equal intensity do not have the same apparent loudness, because of the frequency sensitivity of the human ear. An 80 dB tone at 100 Hz does not sound as loud as an 80 dB tone at 3 kHz. A new unit, the phon, is used to describe the loudness of a pure tone: X phons means "as loud as X dB at 1000 Hz". Another tool is used as well: the equal-loudness curves, also known as Fletcher curves.

Sones

Another scale in current use is the sone, based on a rule of thumb for loudness: a sound must be increased in intensity by a factor of 10 to be perceived as twice as loud. On the decibel (or phon) scale, this corresponds to a 10 dB (or 10 phon) increase. The purpose of the sone scale is to translate these scales into a linear one:

S = 2^{(P - 40)/10}

where S is the sone level and P the phon level. The conversion table is as follows (see also the sketch after the table):

Phons Sones
100 64
90 32
80 16
70 8
60 4
50 2
40 1
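A minimal sketch of the phon-to-sone conversion implied by the table (S = 2^((P − 40)/10); the helper name is ours):

    def phons_to_sones(phons):
        """Loudness in sones from loudness level in phons (valid above ~40 phons)."""
        return 2 ** ((phons - 40) / 10)

    for p in (40, 50, 60, 70, 80, 90, 100):
        print(p, phons_to_sones(p))   # reproduces the table: 1, 2, 4, 8, 16, 32, 64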

Metrics

We will now present five psychoacoustic parameters that provide a way to predict the subjective human sensation.

dB A

Measuring noise perception with the sone or phon scale is not easy. A widely used method is a weighting of the sound pressure level according to its frequency distribution: for each frequency of the spectrum, a level correction is applied. Different weightings (dB A, dB B, dB C) exist in order to approximate the response of the human ear at different sound intensities, but the most commonly used is the dB A filter. Its curve is designed to match the equal-loudness curve for 40 phons, and as a consequence it is a good approximation of the phon scale.

Example: for a 40 dB pure tone at 200 Hz, the correction is about −10 dB, so this sound is about 30 dB A.
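A sketch of the A-weighting correction, using the standard IEC 61672 analytic form (an addition of ours, not taken from this text); it reproduces the correction quoted in the example, roughly −11 dB at 200 Hz and 0 dB at 1 kHz:

    import math

    def a_weighting(f):
        """A-weighting correction in dB at frequency f in Hz (IEC 61672 analytic form)."""
        ra = (12194.0**2 * f**4) / (
            (f**2 + 20.6**2)
            * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
            * (f**2 + 12194.0**2)
        )
        return 20 * math.log10(ra) + 2.00

    print(round(a_weighting(1000), 1))  # ~0.0 dB at 1 kHz
    print(round(a_weighting(200), 1))   # ~-10.8 dB, consistent with the ~-10 dB correction above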

Loudness

Loudness measures the perceived strength of a sound. It can be expressed in sones and is a dominant metric in psychoacoustics.

Tonality

As the human ear is very sensitive to pure tones, this metric is an important one. It measures the number of pure tones in the noise spectrum. A broadband sound, for example, has a very low tonality.

Roughness

It describes the human perception of temporal variations of sounds. This metric is measured in asper.

Sharpness

Sharpness is linked to the spectral characteristics of the sound. A high-frequency signal has a high value of sharpness. This metric is measured in acum.

Masking effect

A sinusoidal sound can be masked by white noise in a narrow band around it. White noise is a random signal with a flat power spectral density; in other words, its power spectral density has equal power in any band of a given bandwidth, at any centre frequency. If the intensity of the white noise is high enough, the sinusoidal sound will not be heard. For example, in a noisy environment (in the street, in a workshop), a great effort has to be made in order to understand someone talking.

Sound Speed

The speed of sound c (from the Latin celeritas, "velocity") varies depending on the medium through which the sound waves pass. It is commonly quoted among the physical properties of substances (e.g. the speed of sound in sodium is listed among its other properties). In conventional use and in the scientific literature, sound velocity and sound speed both refer to c; neither should be confused with the sound particle velocity v, which is the velocity of the individual particles of the medium.

More commonly the term refers to the speed of sound in air. The speed varies with atmospheric conditions; the most important factor is the temperature. Humidity has very little effect on the speed of sound, and the static sound pressure (air pressure) has none. Sound travels more slowly at increased altitude (elevation if you are on solid earth), primarily as a result of the change in temperature. An approximate speed (in metres per second) can be calculated from:

c \approx 331.3 + 0.606\,\theta \;\; \text{m/s}

where θ (theta) is the temperature in degrees Celsius.

Details

A more accurate expression for the speed of sound is

c = \sqrt{\kappa R T}

where

  • R is the specific gas constant for air (287.05 J/(kg·K)). It is derived by dividing the universal gas constant (8.314 J/(mol·K)) by the molar mass of air (about 0.029 kg/mol), as is common practice in aerodynamics.
  • κ (kappa) is the adiabatic index (1.402 for air), sometimes noted as γ (gamma).
  • T is the absolute temperature in kelvins.

In the standard atmosphere:

T0 is 273.15 K (= 0 °C = 32 °F), giving a value of 331.5 m/s (= 1087.6 ft/s = 1193 km/h = 741.5 mph = 643.9 knots).
T20 is 293.15 K (= 20 °C = 68 °F), giving a value of 343.4 m/s (= 1126.6 ft/s = 1236 km/h = 768.2 mph = 667.1 knots).
T25 is 298.15 K (= 25 °C = 77 °F), giving a value of 346.3 m/s (= 1136.2 ft/s = 1246 km/h = 774.7 mph = 672.7 knots).
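A small sketch comparing the linear approximation with the ideal-gas formula c = sqrt(κ R T); the computed values agree with those listed above to within about 0.1 m/s:

    import math

    R_AIR = 287.05   # specific gas constant of air, J/(kg*K)
    KAPPA = 1.402    # adiabatic index of air

    def c_approx(theta_c):
        """Approximate speed of sound in air, m/s, for temperature theta_c in deg C."""
        return 331.3 + 0.606 * theta_c

    def c_ideal_gas(theta_c):
        """Speed of sound from c = sqrt(kappa * R * T), with T in kelvins."""
        return math.sqrt(KAPPA * R_AIR * (theta_c + 273.15))

    for theta in (0, 20, 25):
        print(theta, round(c_approx(theta), 1), round(c_ideal_gas(theta), 1))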

In fact, assuming an ideal gas, the speed of sound c depends on temperature only, not on the pressure, and air is almost an ideal gas. The temperature of the air varies with altitude, giving the following variations in the speed of sound using the standard atmosphere (actual conditions may vary). Any qualification of the speed of sound as being "at sea level" is therefore somewhat misleading: the speed of sound varies with altitude only because of the changing temperature.

Altitude                                        Temperature       m/s   km/h   mph   knots
Sea level                                       15 °C (59 °F)     340   1225   761   661
11,000 m–20,000 m (cruising altitude of
commercial jets, and first supersonic flight)   −57 °C (−70 °F)   295   1062   660   573
29,000 m (flight of X-43A)                      −48 °C (−53 °F)   301   1083   673   585

In a non-dispersive medium, sound speed is independent of frequency, so the speed of energy transport and the speed of sound propagation are the same. For the audio frequency range, air is a non-dispersive medium. Note, however, that air contains CO2, which is a dispersive medium and introduces dispersion to air at ultrasonic frequencies (above roughly 28 kHz).
In a dispersive medium, sound speed is a function of frequency, and the spatial and temporal distribution of a propagating disturbance continually changes. Each frequency component propagates at its own phase speed, while the energy of the disturbance propagates at the group velocity. Water is an example of a dispersive medium.

In general, the speed of sound c is given by

c = \sqrt{\frac{C}{\rho}}

where

C is a coefficient of stiffness
ρ (rho) is the density

Thus the speed of sound increases with the stiffness of the material, and decreases with the density.

In a fluid the only non-zero stiffness is to volumetric deformation (a fluid does not sustain shear forces).

Hence the speed of sound in a fluid is given by

c = \sqrt{\frac{K}{\rho}}

where

K is the adiabatic bulk modulus

For a gas, K is approximately given by

K = \kappa\,p

where

κ is the adiabatic index, sometimes called γ.
p is the pressure.

Thus, for a gas the speed of sound can be calculated using:

c = \sqrt{\frac{\kappa\,p}{\rho}}

which, using the ideal gas law, is identical to:

c = \sqrt{\kappa R T}

(Newton famously considered the speed of sound before most of the development of thermodynamics and so incorrectly used isothermal calculations instead of adiabatic. His result was missing the factor of κ but was otherwise correct.)

In a solid, there is a non-zero stiffness both for volumetric and shear deformations. Hence, in a solid it is possible to generate sound waves with different velocities dependent on the deformation mode.

In a solid rod (with thickness much smaller than the wavelength) the speed of sound is given by:

c = \sqrt{\frac{E}{\rho}}

where

E is Young's modulus
ρ (rho) is the density

Thus in steel the speed of sound is approximately 5100 m/s.
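A minimal check of c = sqrt(E/ρ) for a thin steel rod; the material constants are typical textbook values, assumed here for illustration:

    import math

    E_STEEL = 2.0e11     # Young's modulus of steel, Pa (typical value)
    RHO_STEEL = 7850.0   # density of steel, kg/m^3 (typical value)

    c_rod = math.sqrt(E_STEEL / RHO_STEEL)   # longitudinal speed in a thin rod
    print(round(c_rod))  # ~5048 m/s, consistent with the ~5100 m/s quoted above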

In a solid with lateral dimensions much larger than the wavelength, the sound velocity is higher. It is found by replacing Young's modulus with the plane wave modulus, which can be expressed in terms of Young's modulus E and Poisson's ratio ν as:

M = E\,\frac{1 - \nu}{(1 + \nu)(1 - 2\nu)}

For air, see density of air.

The speed of sound in water is of interest to those mapping the ocean floor. In saltwater, sound travels at about 1500 m/s and in freshwater 1435 m/s. These speeds vary due to pressure, depth, temperature, salinity and other factors.

For general equations of state, if classical mechanics is used, the speed of sound is given by

c^2 = \left(\frac{\partial p}{\partial \rho}\right)_{\!s}

where the differentiation is taken with respect to adiabatic (constant-entropy) change.

If relativistic effects are important, the speed of sound is given by:

c_s^2 = c^2 \left(\frac{\partial p}{\partial e}\right)_{\!s}

(note that e is the relativistic internal energy density).

This formula differs from the classical case in that the mass density ρ has been replaced by the energy density e/c².

Speed of sound in air

Impact of temperature
θ in °C c in m/s ρ in kg/m³ Z in N·s/m³
−10 325.4 1.341 436.5
−5 328.5 1.316 432.4
0 331.5 1.293 428.3
+5 334.5 1.269 424.5
+10 337.5 1.247 420.7
+15 340.5 1.225 417.0
+20 343.4 1.204 413.5
+25 346.3 1.184 410.0
+30 349.2 1.164 406.6

Mach number is the ratio of the object's speed to the speed of sound in air (medium).

Sound in solids

In solids, the velocity of sound depends on the stiffness and the density of the material rather than on its temperature. Solid materials, such as steel, conduct sound much faster than air.

Experimental methods

In air, a range of different methods exist for measuring the speed of sound.

Single-shot timing methods

The simplest concept is the measurement made using two microphones and a fast recording device such as a digital storage scope. This method uses the following idea.

If a sound source and two microphones are arranged in a straight line, with the sound source at one end, then the following can be measured:

  1. The distance between the microphones (x)
  2. The time delay between the signal reaching the different microphones (t)

Then v = x/t
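A toy calculation for the two-microphone method; the distance and delay are made-up example numbers:

    # Single-shot timing: speed = microphone spacing / time delay between the two signals
    x = 1.50        # distance between the microphones, m (assumed)
    t = 0.00437     # measured time delay, s (assumed)

    v = x / t
    print(round(v, 1))  # ~343.2 m/s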

An older method is to create a sound at one end of a field with an object that can be seen to move when it creates the sound. When the observer sees the sound-creating device act they start a stopwatch and when the observer hears the sound they stop their stopwatch. Again using v = x/t you can calculate the speed of sound. A separation of at least 200 m between the two experimental parties is required for good results with this method.

Other methods

In these methods the time measurement has been replaced by a measurement of the inverse of time (frequency).

Kundt's tube is an example of an experiment which can be used to measure the speed of sound in a small volume; it has the advantage of being able to measure the speed of sound in any gas. This method uses a powder to make the nodes and antinodes visible to the human eye. It is an example of a compact experimental setup.

A tuning fork can be held near the mouth of a long pipe which is dipping into a barrel of water. In this system the pipe can be brought to resonance if the length of the air column in the pipe is equal to (2n+1)λ/4, where n is an integer. As the antinodal point for the pipe at the open end is slightly outside the mouth of the pipe, it is best to find two or more points of resonance and then measure half a wavelength between these.

Here it is the case that v = fλ
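A sketch of the resonance-tube calculation: two successive resonance lengths are half a wavelength apart, so v = f·λ = 2·f·(L2 − L1). The numbers below are invented for illustration:

    f = 440.0        # tuning fork frequency, Hz
    L1 = 0.190       # first resonance length of the air column, m (assumed)
    L2 = 0.580       # second resonance length, m (assumed)

    wavelength = 2.0 * (L2 - L1)   # half a wavelength between successive resonances
    v = f * wavelength
    print(round(v, 1))  # ~343.2 m/s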

Filter Design and Implementation

Introduction

Acoustic filters, or mufflers, are used in a number of applications requiring the suppression or attenuation of sound. Although the idea might not be familiar to many people, acoustic mufflers make everyday life much more pleasant. Many common appliances, such as refrigerators and air conditioners, use acoustic mufflers to keep their operating noise to a minimum. Acoustic mufflers are mostly applied to machine components or areas with a large amount of radiated sound, such as high-pressure exhaust pipes, gas turbines, and rotary pumps.

Although there are a number of applications for acoustic mufflers, there are really only two main types which are used. These are absorptive and reactive mufflers. Absorptive mufflers incorporate sound absorbing materials to attenuate the radiated energy in gas flow. Reactive mufflers use a series of complex passages to maximize sound attenuation while meeting set specifications, such as pressure drop, volume flow, etc. Many of the more complex mufflers today incorporate both methods to optimize sound attenuation and provide realistic specifications.

In order to fully understand how acoustic filters attenuate radiated sound, it is first necessary to briefly cover some basic background topics. For more information on wave theory and other material necessary to study acoustic filters please refer to the references below.

Basic wave theory

Although not fundamentally difficult to understand, there are a number of alternate techniques used to analyze wave motion which could seem overwhelming to a novice at first. Therefore, only 1-D wave motion will be analyzed to keep most of the mathematics as simple as possible. This analysis is valid, with not much error, for the majority of pipes and enclosures encountered in practice.

Plane-wave pressure distribution in pipes

The most important equation used is the wave equation in 1-D form.[1][2][3]

Therefore, it is reasonable to suggest, if plane waves are propagating, that the pressure distribution in a pipe is given by:

\mathbf{P}(x,t) = \mathbf{P_i}\,e^{j(\omega t - kx)} + \mathbf{P_r}\,e^{j(\omega t + kx)}

where Pi and Pr are the incident and reflected wave amplitudes respectively. Also note that bold notation is used to indicate the possibility of complex terms. The first term represents a wave travelling in the +x direction and the second term a wave travelling in the -x direction.

Since acoustic filters or mufflers typically attenuate the radiated sound power as much as possible, it is logical to assume that if we can find a way to maximize the ratio between reflected and incident wave amplitudes, then we will effectively attenuate the radiated noise at certain frequencies. This ratio is called the reflection coefficient and is given by:

\mathbf{R} = \frac{\mathbf{P_r}}{\mathbf{P_i}}

It is important to point out that wave reflection only occurs when the impedance of a pipe changes. It is possible to match the end impedance of a pipe with the characteristic impedance of a pipe to get no wave reflection. For more information see [1] or [2].

Although the reflection coefficient isn't very useful in its current form, since we want a relation describing sound power, a more useful form can be derived by recognizing that the power reflection coefficient is simply the squared magnitude of the reflection coefficient [1]:

R_\pi = |\mathbf{R}|^2

As one would expect, the power reflection coefficient must be less than or equal to one. Therefore, it is useful to define the power transmission coefficient as:

T_\pi = 1 - R_\pi

which is the amount of power transmitted. This relation comes directly from conservation of energy. When talking about the performance of mufflers, typically the power transmission coefficient is specified.

Basic filter design

For simple filters, a long wavelength approximation can be made to make the analysis of the system easier. When this assumption is valid (e.g. low frequencies) the components of the system behave as lumped acoustical elements. Equations relating the various properties are easily derived under these circumstances.

The following derivations assume long wavelength. Practical applications for most conditions are given later.

Low-pass filter

Tpi for Low-Pass Filter

These are devices that attenuate the radiated sound power at higher frequencies. This means the power transmission coefficient is approximately 1 across the pass band at low frequencies (see figure to right).

This is equivalent to an expansion in a pipe, with the volume of gas located in the expansion having an acoustic compliance (see figure to right). Continuity of acoustic impedance (see Java Applet at: Acoustic Impedance Visualization) at the junction, see [1], gives a power transmission coefficient of:

where k is the wavenumber (see Wave Properties), L and S₁ are the length and cross-sectional area of the expansion respectively, and S is the area of the pipe.

The cut-off frequency is given by:

High-pass filter

Tpi for High-Pass Filter

These are devices that attenuate the radiated sound power at lower frequencies. As before, this means the power transmission coefficient is approximately 1 across the pass band at high frequencies (see figure to right).

This is equivalent to a short side branch (see figure to right) with a radius and length much smaller than the wavelength (lumped element assumption). This side branch acts like an acoustic mass and applies a different acoustic impedance to the system than the low-pass filter. Again using continuity of acoustic impedance at the junction yields a power transmission coefficient of the form [1]:

where a and L are the area and effective length of the small tube, and S is the area of the pipe.

The cut-off frequency is given by:

Band-stop filter

Tpi for Band-Stop Filter

These are devices that attenuate the radiated sound power over a certain frequency range (see figure to right). Like before, the power transmission coefficient is approximately 1 in the band pass region.

Since the band-stop filter is essentially a cross between a low-pass and a high-pass filter, one might expect to create one by using a combination of both techniques. This is true in that the combination of a lumped acoustic mass and compliance gives a band-stop filter. This can be realized as a Helmholtz resonator (see figure to right). Again, since the impedance of the Helmholtz resonator can be easily determined, continuity of acoustic impedance at the junction gives the power transmission coefficient as [1]:

where S_b is the cross-sectional area of the neck, L is the effective length of the neck, V is the volume of the Helmholtz resonator, and S is the area of the pipe. It is interesting to note that the power transmission coefficient is zero when the frequency is the resonance frequency of the Helmholtz resonator. This can be explained by the fact that at resonance the volume velocity in the neck is large, with a phase such that all the incident wave is reflected back to the source [1].

The frequency of zero power transmission (total reflection) is given by:

f_0 = \frac{c}{2\pi}\sqrt{\frac{S_b}{L\,V}}

This frequency value has powerful implications. If a system has the majority of its noise at one frequency component, the system can be "tuned" using the above equation, with a Helmholtz resonator, to attenuate essentially all of the transmitted power at that frequency (see examples below).

Helmholtz Resonator as a Muffler, f = 60 Hz
Helmholtz Resonator as a Muffler, f = fc
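As a hedged sketch of the tuning idea (the neck geometry below is invented for illustration), one can size the cavity volume so that the resonance f0 = (c/2π)·sqrt(S_b/(L·V)) lands on a given tone, e.g. the 60 Hz case of the example figure:

    import math

    c = 343.0          # speed of sound, m/s

    def helmholtz_frequency(S_b, L_eff, V):
        """Resonance frequency f0 = (c / (2*pi)) * sqrt(S_b / (L_eff * V))."""
        return (c / (2 * math.pi)) * math.sqrt(S_b / (L_eff * V))

    def volume_for_frequency(S_b, L_eff, f0):
        """Cavity volume that places the resonance (total reflection) at f0."""
        return S_b * (c / (2 * math.pi * f0))**2 / L_eff

    S_b = 1.0e-3       # neck cross-sectional area, m^2 (assumed)
    L_eff = 0.05       # effective neck length, m (assumed)

    V = volume_for_frequency(S_b, L_eff, 60.0)     # target the 60 Hz tone
    print(V, helmholtz_frequency(S_b, L_eff, V))   # ~1.7e-2 m^3 (about 17 litres), resonating at 60 Hz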

Design

If the long wavelength assumption is valid, typically a combination of methods described above are used to design a filter. A specific design procedure is outlined for a helmholtz resonator, and other basic filters follow a similar procedure (see 1).

Two main metrics need to be identified when designing a helmholtz resonator [3]:

  1. The desired resonance frequency.
  2. The desired transmission loss, based on the required TL level. This constant is found from a TL graph (see HR pp. 6).

This results in two equations with two unknowns, which can be solved for the unknown dimensions of the Helmholtz resonator. It is important to note that flow velocities degrade the amount of transmission loss at resonance and tend to shift the resonance frequency upwards [3].

In many situations, the long wavelength approximation is not valid and alternative methods must be examined. These are much more mathematically rigorous and require a complete understanding of the acoustics involved. Although the mathematics are not shown here, commonly used filters are given in the section that follows.

Actual filter design

As explained previously, there are two main types of filters used in practice: absorptive and reactive. The benefits and drawbacks of each will be briefly explained, along with their typical applications (see Absorptive Mufflers).

Absorptive

These are mufflers which incorporate sound absorbing materials to transform acoustic energy into heat. Unlike reactive mufflers which use destructive interference to minimize radiated sound power, absorptive mufflers are typically straight through pipes lined with multiple layers of absorptive materials to reduce radiated sound power. The most important property of absorptive mufflers is the attenuation constant. Higher attenuation constants lead to more energy dissipation and lower radiated sound power.

Advantages of Absorptive Mufflers [3]:
(1) - High amount of absorption at higher frequencies.

(2) - Good for applications involving broadband (constant across the spectrum) and narrowband noise.

(3) - Reduced amount of back pressure compared to reactive mufflers.

Disadvantages of Absorptive Mufflers [3]:
(1) - Poor performance at low frequencies.

(2) - Material can degrade under certain circumstances (high heat, etc.).

Examples

Absorptive Muffler

There are a number of applications for absorptive mufflers. The most well known application is in race cars, where engine performance is desired. Absorptive mufflers don't create a large amount of back pressure (as in reactive mufflers) to attenuate the sound, which leads to higher muffler performance. It should be noted however, that the radiated sound is much higher. Other applications include plenum chambers (large chambers lined with absorptive materials, see picture below), lined ducts, and ventilation systems.

Reactive

Reactive mufflers use a number of complex passages (or lumped elements) to reduce the amount of acoustic energy transmitted. This is accomplished by a change in impedance at the intersections, which gives rise to reflected waves (and effectively reduces the amount of transmitted acoustic energy). Since the amount of energy transmitted is minimized, the amount of energy reflected back to the source is quite high, which can actually degrade the performance of engines and other sources. In contrast to absorptive mufflers, which dissipate the acoustic energy, reactive mufflers keep the energy contained within the system. See Reactive Mufflers for more information.

Advantages of Reactive Mufflers [3]:
(1) - High performance at low frequencies.

(2) - Typically give high insertion loss, IL, for stationary tones.

(3) - Useful in harsh conditions.

Disadvantages of Reactive Mufflers [3]:
(1) - Poor performance at high frequencies.

(2) - Not desirable characteristics for broadband noise.

Examples

Reflective Muffler

Reactive mufflers are the most widely used mufflers in combustion engines [1]. They are very efficient in low-frequency applications (especially since simple lumped-element analysis can be applied). Other application areas include: harsh environments (high temperature/velocity engines, turbines, etc.), specific frequency attenuation (using a Helmholtz-like device, a specific frequency can be tuned to give total attenuation of the radiated sound power), and applications requiring low radiated sound power (car mufflers, air conditioners, etc.).

Performance

There are three main metrics used to describe the performance of mufflers: noise reduction, insertion loss, and transmission loss. Typically, when designing a muffler, one or two of these metrics are given as desired values.

Noise Reduction (NR)

Noise reduction is defined as the difference between the sound pressure levels on the source side and the receiver side. It is essentially the amount by which the sound level is reduced between the location of the source and the termination of the muffler system (it doesn't have to be the termination, but that is the most common location) [3]:

NR = SPL_{source} - SPL_{receiver}

Although NR is easy to measure, the pressure typically varies on the source side due to standing waves [3].

Insertion Loss (IL)

Insertion loss is defined as the difference in sound pressure level at the receiver with and without the sound attenuating barrier in place. In a car muffler, this can be seen as the difference in radiated sound power between a straight pipe and the same pipe with an expansion chamber inserted. Since the expansion chamber attenuates some of the radiated sound power, the pressure at the receiver with the barrier in place will be lower; therefore, a higher insertion loss is desirable [3]:

IL = L_{p,\,without} - L_{p,\,with}

where L_{p, without} and L_{p, with} are the pressure levels at the receiver without and with a muffler system respectively. The main problem with measuring IL is that the barrier or sound attenuating system needs to be removed without changing the source [3].

Transmission Loss (TL)

Transmission loss is defined as the difference between the sound power level of the wave incident on the muffler system and that of the transmitted sound power. For further information see Transmission Loss [3]:

TL = 10 \log_{10}\!\left(\frac{W_i}{W_t}\right)

where W_t and W_i are the transmitted and incident wave powers respectively. From this expression, it is clear that the problem with measuring TL is decomposing the sound field into incident and transmitted waves, which can be difficult to do analytically for complex systems.

Examples

(1) - For a plenum chamber (see figure below):

in dB

where ᾱ is the average absorption coefficient.

Plenum Chamber
Transmission Loss vs. Theta

(2) - For an expansion (see figure below):

where

Expansion in Infinite Pipe
NR, IL, & TL for Expansion

(3) - For a helmholtz resonator (see figure below):

in dB

Helmholtz Resonator
TL for Helmholtz Resonator


  1. Muffler/silencer applications and descriptions of performance criteria Exhaust Silencers
  2. Engineering Acoustics, Purdue University - ME 513.
  3. Sound Propagation Animations
  4. Exhaust Muffler Design
  5. Project Proposal & Outline

References

  1. Kinsler, Lawrence E.; Frey, Austin R.; Coppens, Alan B.; Sanders, James V. (2000). Fundamentals of Acoustics. John Wiley & Sons. ISBN 978-0471847892.
  2. Pierce, Allan D. (1989). Acoustics: An Introduction to Its Physical Principles and Applications. Acoustical Society of America. ISBN 978-0883186121.
  3. ME 413 Noise Control, Dr. Mongeau, Purdue University.
  4. Weisstein, Eric W. "Wave Equation--1-Dimensional". MathWorld.

Flow-induced oscillations of a Helmholtz resonator


Active Control

Introduction

The principle of active noise control is to create destructive interference using a secondary source of noise. In theory, any noise can thus be made to disappear. But as we will see in the following sections, only low-frequency noise can be reduced in practical applications, since the number of secondary sources required increases very quickly with frequency. Moreover, predictable noises are much easier to control than unpredictable ones. The reduction can reach up to 20 dB in the best cases. But since good reduction can only be achieved at low frequencies, our perception of the resulting sound is not necessarily as good as the theoretical reduction suggests. This is due to psychoacoustic considerations, which will be discussed later on.

Fundamentals of active control of noise

Control of a monopole by another monopole

Even for the free-space propagation of an acoustic wave created by a point source, it is difficult to reduce noise over a large area using active noise control, as we will see in this section.

In the case of an acoustic wave created by a monopolar source, the Helmholtz equation becomes:

where q is the flow of the noise sources.

The solution of this equation at any point M is:

where the subscript p refers to the primary source.

Let us introduce a secondary source in order to perform active control of noise. The acoustic pressure at that same M point is now:

It is now obvious that if we choose the secondary source so that its contribution exactly cancels that of the primary source at M, there is no more noise at point M. This is the simplest example of active noise control. But it is also obvious that if the pressure is zero at M, there is no reason why it should also be zero at any other point N. This solution only allows noise to be reduced in one very small area.

However, it is possible to reduce noise in a larger area far from the source, as we will see in this section. In fact the expression for acoustic pressure far from the primary source can be approximated by:

Control of a monopole by another monopole

As shown in the previous section, we can adjust the secondary source in order to get no noise at M. In that case, the acoustic pressure at any other point N of the space remains low if the primary and secondary sources are close enough. More precisely, it is possible to have a pressure close to zero in the whole space if the point M is equally distant from the two sources and if the distance D between the primary and secondary sources is small compared with the wavelength. As we will see later on, active noise control is more effective with more than one secondary source controlling the primary source, but it is of course much more expensive.

A commonly admitted estimation of the number of secondary sources which are necessary to reduce noise in an R radius sphere, at a frequency f is:

This means that if you want no noise in a one meter diameter sphere at a frequency below 340 Hz, you will need 30 secondary sources. This is the reason why active control of noise works better at low frequencies.

Active control for waves propagation in ducts and enclosures

This section requires the reader to know the basics of modal propagation theory, which will not be explained in this article.

Ducts

For an infinite, straight duct with a constant cross-section, the pressure in regions without sources can be written as an infinite sum of propagation modes:

where the φ_n are the eigenfunctions of the Helmholtz equation and the a_n are the amplitudes of the modes.

The eigenfunctions can be obtained either analytically, for some specific duct shapes, or numerically. By placing pressure sensors in the duct and using the previous equation, we get a relation between the pressure matrix P (pressures at the various frequencies) and the matrix A of the modal amplitudes. Furthermore, for linear sources, there is a relation, through a matrix K, between the A matrix and the matrix U of the signals sent to the secondary sources, and hence between P and U.

Our purpose is to obtain A = 0, i.e. to choose U so that the secondary sources cancel the modal amplitudes. This is possible whenever the rank of the K matrix is greater than the number of propagation modes in the duct.

Thus, it is theoretically possible to have no noise in the duct over a very large area, not too close to the primary sources, if there are more secondary sources than propagation modes in the duct. Therefore, active noise control is clearly more appropriate at low frequencies: the lower the frequency, the fewer propagation modes there are in the duct. Experiments show that it is in fact possible to reduce the noise by more than 60 dB.

Enclosures

The principle is rather similar to the one described above, except that the resonance phenomenon has a major influence on the acoustic pressure in the cavity. In fact, every mode that is not resonant in the considered frequency range can be neglected. In a cavity or enclosure, the number of these modes rises very quickly as frequency rises, so once again low frequencies are more appropriate. Above a critical frequency, the acoustic field can be considered diffuse. In that case, active noise control is still possible, but it is theoretically much more complicated to set up.

Active control and psychoacoustics

As we have seen, it is possible to reduce noise with a finite number of secondary sources. Unfortunately, our perception of sound does not depend only on the acoustic pressure (or the decibel level). In fact, it sometimes happens that even though the number of decibels has been reduced, the perceived result is not really better than without active control.

Active control systems

Since the noise that has to be reduced can never be predicted exactly, a system for active noise control requires a self-adaptive algorithm. We have to consider two different ways of setting up the system, depending on whether or not it is possible to detect the noise from the primary source before it reaches the secondary sources. If this is possible, a feedforward technique is used (for an aircraft engine, for example); if not, a feedback technique is preferred.

Feedforward

In the case of feedforward control, two sensors and one secondary source are required. The sensors measure the sound pressure at the primary source (detection sensor) and at the place where we want the noise to be reduced (control sensor). Furthermore, we should have an idea of what the noise from the primary source will become as it reaches the control sensor. Thus we approximately know what correction should be made before the sound wave reaches the control sensor (hence "forward"); the control sensor only corrects the residual error. The feedforward technique makes it possible to reduce one specific noise (an aircraft engine, for example) without reducing every other sound (conversations, ...). The main issue with this technique is that the location of the primary source has to be known, and we have to be sure that its sound will be detected beforehand. Portable systems based on feedforward are therefore impractical, since they would require sensors all around the head.

Feedforward System

Feedback

In this case, we do not know exactly where the sound comes from, so there is only one sensor. The sensor and the secondary source are very close to each other, and the correction is done in real time: as soon as the sensor picks up the signal, it is processed by a filter which sends the corrected signal to the secondary source. The main issue with feedback is that every sound is reduced, and it is even theoretically impossible to hold a normal conversation.

Feedback System

Applications

Noise cancelling headphone

Passive headphones become ineffective when the frequency gets too low. As we have just seen, active noise-cancelling headphones require the feedback technique, since the primary sources can be located anywhere around the head. This active noise control is not really efficient at high frequencies, since it is limited by the Larsen (feedback) effect. Noise can be reduced by up to 30 dB in the frequency range between 30 Hz and 500 Hz.

Active control for cars

Noise reduction inside cars can have a significant impact on the comfort of the driver. There are three major sources of noise in a car: the motor, the contact of tires on the road, and the aerodynamic noise created by the air flow around the car. In this section, active control for each of those sources will be briefly discussed.

Motor noise

This noise is rather predictable, since it is a consequence of the rotation of the pistons in the engine. Its frequency is not exactly the engine's rotational speed, though. However, the frequency of this noise lies between 20 Hz and 200 Hz, which means that active control is theoretically possible. The following pictures show the result of active control at both low and high engine speeds.

Low regime

Even though these results show a significant reduction of the acoustic pressure, the perception inside the car is not really better with this active control system, mainly for the psychoacoustic reasons mentioned above. Moreover, such a system is rather expensive and is therefore not used in production cars.

Tires noise

This noise is created by the contact between the tires and the road. It is a broadband noise which is rather unpredictable, since the mechanisms are very complex; for example, different types of road surface can have a significant impact on the resulting noise. Furthermore, there is a cavity around the tires, which generates a resonance phenomenon whose first frequency is usually around 200 Hz. Considering the multiple causes of this noise and its unpredictability, even the low frequencies become hard to reduce. And since this noise is broadband, reducing the low frequencies is not enough to reduce the overall noise. In fact, an active control system would mainly be useful in the case of an unfortunate amplification of a specific mode.

Aerodynamic noise

This noise is a consequence of the interaction between the air flow around the car and protruding parts such as the side mirrors. Once again, it is an unpredictable broadband noise, which makes it difficult to reduce with an active control system. However, this solution can become interesting if an annoying, predictable resonance appears.

Active control for aeronautics

The noise of aircraft propellers is highly predictable, since its frequency is almost exactly the rotational frequency multiplied by the number of blades, usually a few hundred hertz. Hence, an active control system using the feedforward technique provides very satisfying noise reductions. The main issues are the cost and the weight of such a system. The fan noise of aircraft engines can be reduced in the same manner.

Further reading


Anechoic and reverberation rooms

Introduction

Acoustic experiments often require measurements to be made in rooms with special characteristics. Two types of rooms can be distinguished: anechoic rooms and reverberation rooms.

Anechoic room

The principle of this room is to simulate a free field. In free space, acoustic waves propagate from the source out to infinity. In an ordinary room, the reflections of the sound on the walls produce waves which propagate in the opposite direction and come back towards the source. In anechoic rooms, the walls are highly absorbent in order to eliminate these reflections, so the sound seems to die away rapidly. The materials used on the walls are rockwool, glasswool or foams, which absorb sound over relatively wide frequency bands. Cavities are cut into the material so that the long wavelengths corresponding to bass frequencies are absorbed too. Ideally, the sound pressure level of a point source decreases by about 6 dB per doubling of distance.

Anechoic rooms are used in the following experiments:

Intensimetry: measurement of the acoustic power of a source.

Study of the source directivity.

Reverberation room

The walls of a reverberation room mostly consist of concrete and are covered with reflective paint; alternative designs use sandwich panels with metal surfaces. The sound reflects off the walls many times before dying away, giving an impression similar to sound in a cathedral. Ideally, the only absorption is that of the air itself. Because of all these reflections, many plane waves with different directions of propagation interfere at each point of the room. Considering all of these waves individually is very complicated, so the acoustic field is simplified by the diffuse field hypothesis: the field is homogeneous and isotropic, and the pressure level is then uniform in the room. This assumption becomes more accurate with increasing frequency, which results in a lower limiting frequency for each reverberation room, above which the density of standing waves is sufficient.

Several conditions are required for this approximation: the absorption coefficient of the walls must be very low (α < 0.2), and the room must have geometrical irregularities (non-parallel walls, diffusing objects) to avoid pressure nodes of the resonance modes.

With this hypothesis, the theory of Sabine can be applied. It deals with the reverberation time, which is the time required for the sound level to decrease by 60 dB. T depends on the volume of the room V and on the absorption coefficients αi and areas Si of the different materials in the room:

T_{60} = \frac{0.16\,V}{\sum_i \alpha_i S_i}

Reverberation rooms are used in the following experiments:

measurement of the ability of a material to absorb a sound

measurement of the ability of a partition to transmit a sound

Intensimetry

measurement of sound power


Basic Room Acoustic Treatments

Introduction

Many people use one or two rooms in their living space as "theatrical" rooms, where home theater or music listening takes place. It is a common misconception that adding speakers to the room will enhance the quality of the room acoustics. There are other simple things that can be done to improve the room's acoustics and produce sound that is closer to "theater" sound. This page takes you through some simple background knowledge on acoustics and then explains some solutions that will help improve the sound quality in a room.

Room sound combinations

The sound you hear in a room is a combination of direct sound and indirect sound. Direct sound will come directly from your speakers while the other sound you hear is reflected off of various objects in the room.

The direct sound travels straight from the TV to the listener, as shown by the heavy black arrow in the figure. All of the other sound is reflected off surfaces before it reaches the listener.

Good and bad reflected sound

Have you ever listened to speakers outdoors? You might have noticed that the sound is thin and dull. This is because there are almost no reflections outdoors: reflected sound makes sound fuller and louder than it would be in open space, adding fullness and spaciousness. The bad side of reflected sound appears when the reflections amplify some notes while cancelling out others, making the sound distorted; reflections can also affect tonal quality and create an echo-like effect. There are three ways a surface can treat sound: pure reflection, absorption, and diffusion. Each of them is important in creating a "theater"-type acoustic room.

Reflected sound

Reflected sound waves, good and bad, affect the sound you hear, where it comes from, and the quality of the sound when it gets to you. The bad news when it comes to reflected sound is standing waves.

These waves are created when sound is reflected back and forth between any two parallel surfaces in your room, ceiling and floor or wall to wall.

Standing waves can distort sound at about 300 Hz and below. This range includes the lower mid frequencies and the bass. Standing waves tend to collect near the walls and in the corners of a room; these collected standing waves are called room resonance modes.

Finding your room resonance modes

First, specify the room dimensions (length, width, and height). Then follow the example sketched below:
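A sketch of the calculation (the room dimensions are arbitrary assumptions): the axial resonance modes between two parallel surfaces a distance L apart fall at f = n·c/(2L).

    c = 343.0                               # speed of sound, m/s
    length, width, height = 6.1, 4.3, 2.4   # example room dimensions in metres (assumed)

    for name, L in (("length", length), ("width", width), ("height", height)):
        modes = [round(n * c / (2 * L), 1) for n in range(1, 4)]
        print(f"axial modes along {name}: {modes} Hz")
    # e.g. along the 6.1 m dimension: ~28.1, 56.2, 84.3 Hz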

Working with room resonance modes to increase sound quality

There are some room dimensions that produce the largest amount of standing waves.
  1. Cube
  2. Room with 2 out of the three dimensions equal
  3. Rooms with dimensions that are multiples of each other
Move chairs or sofas away from the walls or corners to reduce standing wave effects

Absorbed

The sound that humans hear is actually a form of acoustic energy. Different materials absorb different amounts of this energy at different frequencies. When considering room acoustics, there should be a good mix of high-frequency-absorbing and low-frequency-absorbing materials. A table with information on how different common household materials absorb sound can be found here.

Diffused sound

Using devices that diffuse sound is a fairly new way of increasing the acoustic performance of a room. Diffusion is a means of creating sound that appears "live": diffusers break up echo-like reflections without absorbing too much sound.

Some ways of determining where diffusive items should be placed were found on this website.

  1. If you have carpet or drapes already in your room, use diffusion to control side wall reflections.
  2. A bookcase filled with odd-sized books makes an effective diffuser.
  3. Use absorptive material on room surfaces between your listening position and your front speakers, and treat the back wall with diffusive material to re-distribute the reflections.

How to find overall trouble spots in a room

Every surface in a room does not have to be treated in order to have good room acoustics. Here is a simple method of finding trouble spots in a room.

  1. Grab a friend to hold a mirror along the wall near a certain speaker at speaker height.
  2. The listener sits in a spot of normal viewing.
  3. The friend then moves slowly toward the listening position (stay along the wall).
  4. Mark each spot on the wall where the listener can see any of the room speakers in the mirror.
  5. Congratulations! These are the trouble spots in the room that need an absorptive material in place. Don't forget that diffusive material can also be placed in those positions.


Human Vocal Fold

Physiology of vocal fold

The human vocal fold is a set of lip-like tissues located inside the larynx, and is the source of sound for humans and many animals.

The larynx is located at the top of the trachea. It is mainly composed of cartilages and muscles, and the largest cartilage, the thyroid cartilage, is well known as the "Adam's apple".

The organ has two main functions: to act as the last protector of the airway, and to act as a sound source for the voice. This page focuses on the latter function.


Voice production

Although the science behind sound production by the vocal folds is complex, it can be thought of as similar to a brass player's lips, or a whistle made from a blade of grass. Basically, the vocal folds (or the lips, or a pair of grass blades) form a constriction in the airflow, and as the air is forced through the narrow opening, the vocal folds oscillate. This causes a periodic change in the air pressure, which is perceived as sound.

Vocal Folds Video

When airflow is introduced to the vocal folds, it forces open the two folds, which are nearly closed initially. Due to the stiffness of the folds, they then try to close the opening again, after which the airflow forces them open once more, and so on. This creates an oscillation of the vocal folds, which in turn, as stated above, creates sound. However, this is a damped oscillation, meaning it will eventually reach an equilibrium position and stop. So how are we able to sustain sound?

As will be shown later, the answer appears to lie in the changing shape of the vocal folds. In the opening and the closing stages of the oscillation, the vocal folds have different shapes. This affects the pressure in the opening, and creates the extra pressure needed to push the vocal folds open and sustain the oscillation. This is explained in more detail in the "Model" section.

This flow-induced oscillation, as with many fluid mechanics problems, is not an easy problem to model. Numerous attempts to model the oscillation of vocal folds have been made, ranging from a single mass-spring-damper system to finite element models. In this page I would like to use my single-mass model to explain the basic physics behind the oscillation of a vocal fold.

Information on vocal fold models: National Center for Voice and Speech

Model

Figure 1: Schematics

The simplest way of simulating the motion of the vocal folds is to use a single mass-spring-damper system, as shown above. The mass represents one vocal fold, and the second vocal fold is assumed to be symmetric about the axis of symmetry. Position 3 represents a location immediately past the exit (the end of the mass), and position 2 represents the glottis (the region between the two vocal folds).

The pressure force

The major driving force behind the oscillation of the vocal folds is the pressure in the glottis. Bernoulli's equation from fluid mechanics states that:

$P + \tfrac{1}{2}\rho U^2 + \rho g z = \text{constant}$ -----EQN 1

Neglecting potential difference and applying EQN 1 to positions 2 and 3 of Figure 1,

$P_2 + \tfrac{1}{2}\rho U_2^2 = P_3 + \tfrac{1}{2}\rho U_3^2$ -----EQN 2

Note that the pressure and the velocity at position 3 do not change; this makes the right-hand side of EQN 2 constant. EQN 2 then shows that in order to have an oscillating pressure at position 2, we must have an oscillating velocity at position 2. The flow velocity inside the glottis can be studied through orifice-flow theory.

The constriction of airflow at the vocal folds is much like an orifice flow with one major difference: with vocal folds, the orifice profile is continuously changing. The orifice profile for the vocal folds can open or close, as well as change the shape of the opening. In Figure 1, the profile is converging, but in another stage of oscillation it takes a diverging shape.

The orifice flow is described by Blevins as:

$U_2 = C \sqrt{2\,\Delta P/\rho}$, where $\Delta P$ is the pressure drop across the glottis -----EQN 3

Where the constant C is the orifice coefficient, governed by the shape and the opening size of the orifice. This number is determined experimentally, and it changes throughout the different stages of oscillation.

Solving equations 2 and 3, the pressure force throughout the glottal region can be determined.
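
As a concrete illustration of how EQN 2 and EQN 3 combine, the following minimal Python sketch computes a glottal velocity from an orifice-type relation and the corresponding glottal pressure from Bernoulli's equation. All numerical values (air density, driving pressure, downstream conditions, orifice coefficient) are illustrative assumptions, not measured vocal-fold data.

  import math

  rho = 1.2        # air density, kg/m^3 (assumed)
  P3  = 0.0        # gauge pressure just past the exit (position 3), Pa (assumed constant)
  U3  = 1.0        # flow velocity at position 3, m/s (assumed constant)
  C   = 0.8        # orifice coefficient (assumed; it varies with the glottal shape)

  def glottal_velocity(p_drive, p_downstream):
      """Orifice-type relation (EQN 3 form): U2 = C * sqrt(2 * dP / rho)."""
      dp = max(p_drive - p_downstream, 0.0)
      return C * math.sqrt(2.0 * dp / rho)

  def glottal_pressure(u2):
      """Bernoulli between positions 2 and 3 (EQN 2), potential terms neglected."""
      return P3 + 0.5 * rho * (U3**2 - u2**2)

  u2 = glottal_velocity(800.0, P3)   # 800 Pa of driving (subglottal) pressure, assumed
  print(f"U2 = {u2:.1f} m/s, P2 = {glottal_pressure(u2):.1f} Pa")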

The Collision Force

As the video of the vocal folds shows, the vocal folds can completely close during oscillation. When this happens, the Bernoulli equation fails. Instead, the collision force becomes the dominant force. For this analysis, the Hertz collision model was applied.

$F_{\text{collision}} = k_H\,\delta^{3/2}$ -----EQN 4

where $k_H$ is the Hertz contact stiffness (set by the geometry and material properties of the folds) and $\delta$ is the penetration distance of the vocal fold past the line of symmetry.

Simulation of the model

The pressure and the collision forces were inserted into the equation of motion, and the result was simulated.

Figure 2: Area Opening and Volumetric Flow Rate

Figure 2 shows that an oscillating volumetric flow rate was achieved by passing a constant airflow through the vocal folds. When simulating the oscillation, it was found that the collision force limits the amplitude of the oscillation rather than driving it, which tells us that the pressure force is what allows the sustained oscillation to occur.
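
The sketch below gives a flavor of such a simulation. It integrates a single mass-spring-damper equation of motion with a crude, shape-dependent pressure force and a Hertz-type collision force; the parameter values and the simple on/off pressure law are illustrative assumptions, not the values used to produce Figure 2.

  m, k, c   = 1e-4, 40.0, 0.02   # mass (kg), stiffness (N/m), damping (N*s/m), all assumed
  k_H       = 1e4                # Hertz collision stiffness (assumed)
  x_rest    = 1e-3               # rest half-gap, m; x < 0 means the folds have collided
  P_sub, A  = 600.0, 1e-5        # driving (subglottal) pressure (Pa) and driven area (m^2), assumed

  def total_force(x, v):
      # crude stand-in for the shape-dependent pressure force: it pushes the fold
      # open only while the gap is open and the fold is moving outward
      f_pressure  = P_sub * A if (x > 0 and v >= 0) else 0.0
      # Hertz-type collision force: proportional to penetration^(3/2)
      f_collision = k_H * max(-x, 0.0) ** 1.5
      return f_pressure + f_collision - c * v - k * (x - x_rest)

  x, v, dt = x_rest, 0.0, 1e-6
  for _ in range(200_000):                    # 0.2 s of simulated time, explicit Euler
      a = total_force(x, v) / m
      v += a * dt
      x += v * dt
  print(f"displacement after 0.2 s: {x * 1e3:.3f} mm")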

The acoustic output

This model showed that the changing profile of glottal opening causes an oscillating volumetric flow rate through the vocal folds. This will in turn cause an oscillating pressure past the vocal folds. This method of producing sound is unusual, because in most other means of sound production, air is compressed periodically by a solid such as a speaker cone.

Past the vocal folds, the produced sound enters the vocal tract, which is basically the oral cavity together with the nasal cavity. These cavities act as acoustic filters, modifying the character of the sound, and these characteristics define the unique voice each person produces.

References

  1. Kinsler, L. E., et al. Fundamentals of Acoustics. John Wiley & Sons, 2000.
  2. Pierce, A. D. Acoustics: An Introduction to its Physical Principles and Applications. Acoustical Society of America, 1989.
  3. Blevins, R. D. Applied Fluid Dynamics Handbook. Van Nostrand Reinhold Co., 1984, pp. 81–82.
  4. Titze, I. R. Principles of Voice Production. Prentice-Hall, Englewood Cliffs, NJ, 1994.
  5. Lucero, J. C., and Koenig, L. L. (2005). "Simulations of temporal patterns of oral airflow in men and women using a two-mass model of the vocal folds under dynamic control." Journal of the Acoustical Society of America 117, 1362–1372.
  6. Titze, I. R. (1988). "The physics of small-amplitude oscillation of the vocal folds." Journal of the Acoustical Society of America 83, 1536–1552.

Threshold of Hearing/Pain

Fig. 1: The Fletcher-Munson equal-loudness contours. The lowest of the curves is the ATH.

The threshold of hearing is conventionally taken to be a sound pressure of 20 µPa (micropascals) = 2 × 10⁻⁵ pascal (Pa), which corresponds to 0 dB SPL. This low threshold of amplitude (strength or sound pressure level) is frequency dependent; see the frequency curves in Fig. 1 and Fig. 2 below.

The absolute threshold of hearing (ATH) is the minimum amplitude (level or strength) of a pure tone that the average ear with normal hearing can hear in a noiseless environment.

The threshold of pain is the SPL beyond which sound becomes unbearable for a human listener. This threshold varies only slightly with frequency. Prolonged exposure to sound pressure levels in excess of the threshold of pain can cause physical damage, potentially leading to hearing impairment.

Different values for the threshold of pain:

Threshold of pain

  SPL            Sound pressure
  120 dB SPL     20 Pa
  130 dB SPL     63 Pa
  134 dB SPL     100 Pa
  137.5 dB SPL   150 Pa
  140 dB SPL     200 Pa
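
The rows of this table follow directly from the definition of sound pressure level relative to the 20 µPa threshold of hearing; the short sketch below reproduces them.

  import math

  P_REF = 20e-6  # reference pressure, Pa

  def pa_to_dbspl(p):
      return 20.0 * math.log10(p / P_REF)

  def dbspl_to_pa(level_db):
      return P_REF * 10.0 ** (level_db / 20.0)

  for p in (20, 63, 100, 150, 200):
      print(f"{p:>4} Pa  ->  {pa_to_dbspl(p):5.1f} dB SPL")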

The threshold of hearing is frequency dependent, and typically shows a minimum (indicating the ear's maximum sensitivity) at frequencies between 1 kHz and 5 kHz. A typical ATH curve is pictured in Fig. 1. The absolute threshold of hearing represents the lowest curve amongst the set of equal-loudness contours, with the highest curve representing the threshold of pain.

In psychoacoustic audio compression, the ATH is used, often in combination with masking curves, to calculate which spectral components are inaudible and may thus be ignored in the coding process; any part of an audio spectrum which has an amplitude (level or strength) below the ATH may be removed from an audio signal without any audible change to the signal.
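
For such calculations, a widely used analytic approximation of the ATH curve (attributed to Terhardt and common in MPEG-style psychoacoustic models) is often substituted for measured data. The sketch below evaluates that approximation; it only approximates the lowest curve of Fig. 1 and is not the exact Fletcher-Munson data.

  import math

  def ath_db(f_hz):
      """Approximate absolute threshold of hearing in dB SPL at frequency f_hz."""
      f = f_hz / 1000.0
      return (3.64 * f ** -0.8
              - 6.5 * math.exp(-0.6 * (f - 3.3) ** 2)
              + 1e-3 * f ** 4)

  for f in (100, 500, 1000, 3000, 10000):
      print(f"{f:>6} Hz : {ath_db(f):6.1f} dB SPL")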

The ATH curve rises with age as the human ear becomes less sensitive to sound, with the greatest changes occurring at frequencies higher than 2 kHz. Curves for subjects of various age groups are illustrated in Fig. 2. The data are from the United States Occupational Health and Environmental Control standard, Number 1910.95 App F.



Musical Acoustics Applications

Microphone Technique

General technique

  1. Use a microphone whose frequency response suits the frequency range of the voice or instrument being recorded.
  2. Vary microphone positions and distances until you achieve the monitored sound that you desire.
  3. In the case of poor room acoustics, place the microphone very close to the loudest part of the instrument being recorded, or isolate the instrument.
  4. Personal taste is the most important component of microphone technique. Whatever sounds right to you is right.

Types of microphones

Dynamic microphones

These are the most common general-purpose microphones. They do not require power to operate. If you have a microphone that is used for live performance, it is probably a dynamic mic.

They have the advantage that they can withstand very high sound pressure levels (high volume) without damage or distortion, and tend to provide a richer, more intense sound than other types. Traditionally, these mics did not provide as good a response on the highest frequencies (particularly above 10 kHz), but some recent models have come out that attempt to overcome this limitation.

In the studio, dynamic mics are often used for high sound pressure level instruments such as drums, guitar amps and brass instruments. Models that are often used in recording include the Shure SM57 and the Sennheiser MD421.

Condenser microphones

These microphones are often the most expensive microphones a studio owns. They require power to operate, either from a battery or phantom power, provided using the mic cable from an external mixer or pre-amp. These mics have a built-in pre-amplifier that uses the power. Some vintage microphones have a tube amplifier, and are referred to as tube condensers.

While they cannot withstand the very high sound pressure levels that dynamic mics can, they provide a flatter frequency response and often the best response at the highest frequencies. Though not as good at conveying intensity, they are much better at providing a balanced, accurate sound.

Condenser mics come with a variety of sizes of transducers. They are usually grouped into smaller format condensers, which often are long cylinders about the size of a nickel coin in diameter, and larger format condensers, the transducers of which are often about an inch in diameter or slightly larger.

In the studio, condenser mics are often used for instruments with a wide frequency range, such as an acoustic piano, acoustic guitar, voice, violin, cymbals, or an entire band or chorus. Close miking with condensers is usually avoided on louder instruments. Models that are often used in recording include the Shure SM81 (small format), AKG C414 (large format) and Neumann U87 (large format).

Ribbon microphones

Ribbon microphones are often used as an alternative to condenser microphones. Some modern ribbon microphones require no power, while others do. The first ribbon microphones, developed at RCA in the 1930s, required no power but were quite fragile and could be destroyed simply by blowing air through them. Modern ribbon mics are much more resilient, and can be used with the same level of caution as condenser mics.

Ribbon microphones provide a warmer sound than a condenser mic, with a less brittle top end. Some vocalists (including Paul McCartney) prefer them to condenser mics. In the studio they are used on vocals, violins, and even drums. Popular models for recording include the Royer R121 and the AEA R84.

Working distance

Close miking

Miking at a distance of 1 inch to about 1 foot from the sound source is considered close miking. This technique generally provides a tight, present sound quality and does an effective job of isolating the signal and excluding other sounds in the acoustic environment.

Bleed

Bleeding occurs when the signal is not properly isolated and the microphone picks up another nearby instrument. This can make the mixdown process difficult if there are multiple voices on one track. Use the following methods to prevent leakage:

  • Place the microphones closer to the instruments.
  • Move the instruments farther apart.
  • Put some sort of acoustic barrier between the instruments.
  • Use directional microphones.

A B miking

The A-B miking distance rule (the 3:1 ratio) is a general rule of thumb for close miking. To prevent phase anomalies and bleed, the microphones should be placed at least three times as far apart as the distance between each instrument and its microphone.

A B Miking
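
As a trivial numerical example of the 3:1 rule (with an assumed working distance):

  def minimum_mic_spacing(source_to_mic_distance):
      """3:1 rule: mic-to-mic spacing should be >= 3 x the source-to-mic distance."""
      return 3.0 * source_to_mic_distance

  d = 0.15   # each mic is 15 cm from its instrument (assumed)
  print(f"Place the two microphones at least {minimum_mic_spacing(d):.2f} m apart.")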

Distant miking

Distant miking refers to the placement of microphones at a distance of 3 feet or more from the sound source. This technique allows the full range and balance of the instrument to develop and it captures the room sound. This tends to add a live, open feeling to the recorded sound, but careful consideration needs to be given to the acoustic environment.

Accent miking

Accent miking is a technique used for solo passages when miking an ensemble. A soloist needs to stand out from the ensemble, but placing a microphone too close will make the soloist sound unnaturally present compared to the distant miking used for the rest of the ensemble. Therefore, the microphone should be placed just close enough to the soloist that the signal can be mixed effectively without sounding detached from the ensemble.

Ambient miking

Ambient miking is placing the microphones at such a distance that the room sound is more prominent than the direct signal. This technique is used to capture audience sound or the natural reverberation of a room or concert hall.

Stereo and surround technique

Stereo

Stereo miking is simply using two microphones to obtain a stereo left-right image of the sound. A simple method is the use of a spaced pair: two identical microphones placed several feet apart, using the differences in time and amplitude to create the image. Great care should be taken with this method, as phase anomalies can occur due to the signal delay. This risk can be reduced by using the X/Y method, where the two microphones are placed with their grilles as close together as possible without touching, with an angle of 90 to 135 degrees between the mics. This technique uses only amplitude, not time, to create the image, so phase discrepancies are unlikely.

Spaced Pair X/Y Method

Surround

To take advantage of 5.1 sound or some other surround setup, microphones may be placed to capture the surround sound of a room. This technique essentially stems from stereo technique with the addition of more microphones. Because every acoustic environment is different, it is difficult to define a general rule for surround miking, so placement becomes dependent on experimentation. Careful attention must be paid to the distance between microphones and potential phase anomalies.

Placement for varying instruments

Amplifiers

When miking an amplified speaker, such as for electric guitars, the mic should be placed 2 to 12 inches from the speaker. Exact placement becomes more critical at a distance of less than 4 inches. A brighter sound is achieved when the mic faces directly into the center of the speaker cone and a more mellow sound is produced when placed slightly off-center. Placing off-center also reduces amplifier noise.

A bigger sound can often be achieved by using two mics. The first mic should be a dynamic mic, placed as described in the previous paragraph. Add to this a condenser mic placed at least 3 times further back (remember the 3:1 rule), which will pick up the blended sound of all the speakers as well as some room ambience. Run the mics into separate channels and combine them to your taste.

Brass instruments

Brass instruments produce high sound-pressure levels, and their mid and mid-high frequencies are highly directional. Therefore, for brass instruments such as trumpets, trombones, and tubas, microphones should face slightly off of the bell's center, at a distance of one foot or more, to prevent overloading from wind blasts.

Guitars

Technique for acoustic guitars is dependent on the desired sound. Placing a microphone close to the sound hole will achieve the highest output possible, but the sound may be bottom-heavy because of how the sound hole resonates at low frequencies. Placing the mic slightly off-center at 6 to 12 inches from the hole will provide a more balanced pickup. Placing the mic closer to the bridge with the same working distance will ensure that the full range of the instrument is captured.

A technique that some engineers use places a large-format condenser mic 12-18 inches away from the 12th fret of the guitar, and a small-format condenser very close to the strings nearby. Combining the two signals can produce a rich tone.

Pianos

Ideally, microphones would be placed 4 to 6 feet from the piano to allow the full range of the instrument to develop before it is captured. This isn't always possible due to room noise, so the next best option is to place the microphone just inside the open lid. This applies to both grand and upright pianos.

Percussion

One overhead microphone can be used for a drum set, although two are preferable. If possible, each component of the drum set should be miked individually at a distance of 1 to 2 inches as if they were their own instrument. This also applies to other drums such as congas and bongos. For large, tuned instruments such as xylophones, multiple mics can be used as long as they are spaced according to the 3:1 rule. Typically, dynamic mics are used for individual drum miking, while small-format condensers are used for the overheads.

Voice

Standard technique is to put the microphone directly in front of the vocalist's mouth, although placing slightly off-center can alleviate harsh consonant sounds (such as "p") and prevent overloading due to excessive dynamic range.

Woodwinds

A general rule for woodwinds is to place the microphone around the middle of the instrument at a distance of 6 inches to 2 feet. The microphone should be tilted slightly towards the bell or sound hole, but not directly in front of it.

Sound Propagation

It is important to understand how sound propagates due to the nature of the acoustic environment so that microphone technique can be adjusted accordingly. There are four basic ways that this occurs:

Reflection

Sound waves are reflected by surfaces if the object is at least as large as the wavelength of the sound. Reflection is the cause of echo (a simple delay), reverberation (many reflections cause the sound to continue after the source has stopped), and standing waves (the distance between two parallel walls is such that the original and reflected waves reinforce one another in phase).

Absorption

Sound waves are absorbed by materials rather than reflected. This can have both positive and negative effects depending on whether you desire to reduce reverberation or retain a live sound.

Diffraction

Objects that lie between sound sources and microphones must be considered because of diffraction. Sound is blocked by obstacles that are larger than its wavelength, so higher frequencies are blocked more easily than lower frequencies.

Refraction

Sound waves bend as they pass through media of varying density. Wind or temperature changes can make sound appear to come from a different direction than expected.

Microphone Design and Operation

Introduction

Microphones are devices which convert pressure fluctuations into electrical signals. There are two main methods of accomplishing this that are used in the mainstream entertainment industry: dynamic microphones and condenser microphones. Piezoelectric crystals can also be used as microphones, but they are not common in the entertainment industry; for further information, see the Piezoelectric Transducers chapter of this book.

Dynamic microphones

This type of microphone converts pressure fluctuations into electrical current. These microphones work by means of the principle known as Faraday’s Law. The principle states that when an electrical conductor is moved through a magnetic field, an electrical current is induced within the conductor. The magnetic field within the microphone is created using permanent magnets and the conductor is produced in two common arrangements.

Figure 1: Sectional View of Moving-Coil Dynamic Microphone

The first conductor arrangement is made of a coil of wire. The wire is typically copper and is attached to a circular membrane or piston usually made from lightweight plastic or occasionally aluminum. The impinging pressure fluctuation on the piston causes it to move in the magnetic field and thus creates the desired electrical current. Figure 1 provides a sectional view of a moving-coil microphone.

Figure 2: Dynamic Ribbon Microphone

The second conductor arrangement is a ribbon of metallic foil suspended between magnets. The metallic ribbon is what moves in response to a pressure fluctuation, and in the same manner an electrical current is produced. Figure 2 provides a sectional view of a ribbon microphone. In both configurations, dynamic microphones follow the same principles as other acoustical transducers; further information can be found in the Acoustic Loudspeaker and Piezoelectric Transducers chapters.

Condenser microphones

This type of microphone converts pressure fluctuations into electrical potentials by means of a capacitor whose capacitance changes, which is why condenser microphones are also known as capacitor microphones. An electrical capacitor is created when two charged electrical conductors are placed at a finite distance from each other. The basic relation that describes capacitors is:

$Q = CV$

where Q is the electrical charge of the capacitor’s conductors, C is the capacitance, and V is the electric potential between the capacitor’s conductors. If the electrical charge of the conductors is held at a constant value, then the voltage between the conductors will be inversely proportional to the capacitance. Also, the capacitance is inversely proportional to the distance between the conductors. Condenser microphones utilize these two concepts.
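
A minimal sketch of these relations for an idealized parallel-plate capsule is given below; the diaphragm area, rest spacing, and polarizing voltage are illustrative assumptions.

  EPS0 = 8.854e-12              # permittivity of free space, F/m

  def capacitance(area_m2, gap_m):
      """Parallel-plate approximation: C = eps0 * A / d."""
      return EPS0 * area_m2 / gap_m

  area     = 3.1e-4             # diaphragm area for a ~20 mm diameter capsule, m^2 (assumed)
  gap_rest = 25e-6              # rest diaphragm-to-back-plate spacing, m (assumed)
  Q = capacitance(area, gap_rest) * 48.0   # fixed charge set by an assumed 48 V polarizing potential

  for gap in (24e-6, 25e-6, 26e-6):        # diaphragm displaced by +/- 1 um
      V = Q / capacitance(area, gap)       # V = Q / C at constant charge
      print(f"gap = {gap * 1e6:4.0f} um  ->  V = {V:6.2f} V")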

Figure 3: Sectional View of Condenser Microphone

The capacitor in a condenser microphone is made of two parts: the diaphragm and the back plate. Figure 3 shows a section view of a condenser microphone. The diaphragm is what moves due to impinging pressure fluctuations and the back plate is held in a stationary position. When the diaphragm moves closer to the back plate, the capacitance increases and therefore a change in electric potential is produced. The diaphragm is typically made of metallic coated Mylar. The assembly that houses both the back plate and the diaphragm is commonly referred to as a capsule.

To keep the diaphragm and back plate at a constant charge, an electric potential must be presented to the capsule. There are various ways of doing this. The first is simply to use a battery to supply the needed DC potential to the capsule; a simplified schematic of this technique is displayed in Figure 4. The resistor across the leads of the capsule is very large, in the range of 10 megaohms, to keep the charge on the capsule nearly constant.

Figure 4: Internal Battery Powered Condenser Microphone

Another technique for providing a constant charge on the capacitor is to supply a DC electric potential through the microphone cable that carries the microphone's output signal. Standard microphone cable, known as XLR cable, is terminated by three-pin connectors. Pin 1 connects to the shield around the cable, and the microphone signal is transmitted between pins 2 and 3. Figure 5 displays the layout of a dynamic microphone attached to a mixing console via an XLR cable.

Figure 5: Dynamic Microphone Connection to Mixing Console via XLR Cable

Phantom Supply/Powering (Audio Engineering Society, DIN 45596): The first and most popular method of providing a DC potential through a microphone cable is to supply +48 V to both of the microphone output leads, pins 2 and 3, and use the shield of the cable, pin 1, as the ground to the circuit. Because pins 2 and 3 see the same potential, any fluctuation of the microphone powering potential will not affect the microphone signal seen by the attached audio equipment. This configuration can be seen in figure 6. The +48 V will be stepped down at the microphone using a transformer and provide the potential to the back plate and diaphragm in a similar fashion as the battery solution. In fact, 9, 12, 24, 48 or 52 V can be supplied, but 48 V is the most frequent.

Figure 6: Condenser Microphone Powering Techniques

The second method of running the potential through the cable is to supply 12 V between pins 2 and 3. This method is referred to as T-powering (also known as Tonaderspeisung, AB powering; DIN 45595). The main problem with T-powering is that potential fluctuation in the powering of the capsule will be transmitted into an audio signal because the audio equipment analyzing the microphone signal will not see a difference between a potential change across pins 2 and 3 due to a pressure fluctuation and one due to the power source electric potential fluctuation.

Finally, the diaphragm and back plate can be manufactured from a material that maintains a fixed charge; these microphones are termed electret microphones. In early electret designs, the charge on the material tended to become unstable over time. Recent advances in materials and manufacturing have allowed this problem to be eliminated in present designs.

Conclusion

Two branches of microphones exist in the entertainment industry. Dynamic microphones are found in the moving-coil and ribbon configurations. The movement of the conductor in dynamic microphones induces an electric current which is then transformed into the reproduction of sound. Condenser microphones utilize the properties of capacitors. Creating the charge on the capsule of condenser microphones can be accomplished by battery, phantom powering, T-powering, and by using fixed charge materials in manufacturing.


Acoustic Loudspeaker

The purpose of an acoustic transducer is to convert electrical energy into acoustic energy. Many variations of acoustic transducers exist, but the most common is the moving-coil permanent-magnet transducer, which is the type used in the classic loudspeaker.

The classic electrodynamic loudspeaker driver can be divided into three key components:

  1. The Magnet Motor Drive System
  2. The Loudspeaker Cone System
  3. The Loudspeaker Suspension
Figure 1 Cut-away of a moving coil-permanent magnet loudspeaker

The Magnet Motor Drive System

The main purpose of the Magnet Motor Drive System is to establish a symmetrical magnetic field in which the voice coil will operate. The Magnet Motor Drive System is composed of a front focusing plate, a permanent magnet, a back plate, and a pole piece; the assembled drive system is illustrated in Figure 2. In most cases, the back plate and the pole piece are built as one piece called the yoke. The yoke and the front focusing plate are normally made of a very soft cast iron, which is used in magnetic structures because it is easily magnetized (it has a high magnetic permeability). Notice in Figure 2 that an air gap is intentionally left between the front focusing plate and the yoke; the magnetic field is coupled across this air gap. The magnetic field strength (B) of the air gap is typically optimized for uniformity across the gap. [1]

Figure 2 Permanent Magnet Structure

When a coil of wire carrying a current is placed inside the permanent magnetic field, a force is produced; here $B$ is the magnetic field strength, $l$ is the length of wire in the coil, and $i$ is the current flowing through the coil. The electromagnetic force on an element of the wire is given by the expression of Laplace:

$d\vec{F} = i\,d\vec{l} \times \vec{B}$

$d\vec{l}$ and $\vec{B}$ are orthogonal, so the total force is obtained by integrating along the length of the wire ($R_e$ is the radius of a turn, $n$ is the number of turns, and the force is directed along the axis of the coil):

$F = B\,i\,l = 2\pi\,n\,R_e\,B\,i$

This force is directly proportional to the current flowing through the coil.
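
A quick numerical sketch of this relation, with assumed values for the flux density, coil geometry, and current:

  import math

  B      = 1.0      # flux density in the air gap, T (assumed)
  R_turn = 0.0125   # radius of one turn of the voice coil, m (assumed)
  n      = 60       # number of turns (assumed)
  i      = 1.5      # coil current, A (assumed)

  l = 2 * math.pi * R_turn * n    # total length of wire in the field
  F = B * i * l                   # F = B * i * l
  print(f"wire length = {l:.2f} m, force = {F:.2f} N")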

Figure 3 Voice Coil Mounted in Permanent Magnetic Structure

The coil is excited with the AC signal intended for sound reproduction. As the changing magnetic field of the coil interacts with the permanent magnetic field, the coil moves back and forth, reproducing the input signal. The coil of a loudspeaker is known as the voice coil.

The loudspeaker cone system

On a typical loudspeaker, the cone serves the purpose of creating a larger radiating area, allowing more air to be moved when excited by the voice coil. The cone acts as a piston that is excited by the voice coil; it then displaces air, creating a sound wave. In an ideal world, the cone would be infinitely rigid and have zero mass, but in reality neither is true. Cone materials vary from carbon fiber, paper, and bamboo to just about any other material that can be shaped into a stiff conical form.

The loudspeaker cone is a very critical part of the loudspeaker. Since the cone is not infinitely rigid, different resonance modes form at different frequencies, which in turn alters and colors the reproduction of the sound waves. The shape of the cone directly influences the directivity and frequency response of the loudspeaker. When the cone is attached to the voice coil, a large gap above the voice coil is left exposed. This could be a problem if foreign particles make their way into the air gap between the voice coil and the permanent magnet structure. The solution is to place what is known as a dust cap on the cone to cover the air gap. A figure of the cone and dust cap is shown below.

Figure 6 Cone and Dust Cap attached to Voice Coil

The speed of the cone can be expressed by the equation of a mass-spring system with a damping coefficient $\xi$:

The current and the speed can also be related by the following equation (in terms of the voltage, the electrical resistance and the inductance):

By using a harmonic solution, the expression for the speed is:

The electrical impedance can then be determined as the ratio of the voltage to the current:

The electrical impedance of the loudspeaker as a function of frequency is plotted in Figure 7.

Figure 7 Electrical impedance

A phenomenon of electrical resonance is observable around 100 Hz. In addition, the inductance of the coil makes the impedance increase above about 400 Hz. The frequency range in which this loudspeaker is used is therefore roughly 100 – 4000 Hz.
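
The sketch below evaluates a generic textbook lumped-parameter expression for the electrical input impedance of a moving-coil driver, Ze(ω) = Re + jωLe + (Bl)²/Zm(ω); it reproduces the qualitative behavior described above (a resonance peak near 100 Hz and a rise at high frequency due to the coil inductance), but the parameter values are assumptions and this is not necessarily the exact model behind Figure 7.

  import math

  R_e, L_e = 6.0, 0.5e-3              # voice-coil resistance (ohm) and inductance (H), assumed
  Bl = 7.0                            # force factor, T*m (assumed)
  M_m, C_m, R_m = 0.02, 1.2e-4, 1.5   # moving mass (kg), compliance (m/N), mechanical damping (N*s/m), assumed

  def electrical_impedance(f_hz):
      """Ze(w) = R_e + j*w*L_e + (Bl)^2 / Zm(w), with Zm(w) = R_m + j*(w*M_m - 1/(w*C_m))."""
      w = 2 * math.pi * f_hz
      Z_m = complex(R_m, w * M_m - 1.0 / (w * C_m))
      return complex(R_e, w * L_e) + (Bl ** 2) / Z_m

  for f in (20, 50, 100, 200, 400, 1000, 4000):
      print(f"{f:>5} Hz : |Ze| = {abs(electrical_impedance(f)):6.1f} ohm")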

The loudspeaker suspension

Most moving-coil loudspeakers have a two-piece suspension system, also known as a flexure system. The combination of the two flexures allows the voice coil to maintain linear travel as it is energized and provides a restoring force for the voice coil system. The two-piece system consists of a large flexible membrane surrounding the outside edge of the cone, called the surround, and an additional flexure connected directly to the voice coil, called the spider. The surround has another purpose: it seals the loudspeaker when mounted in an enclosure. Commonly, the surround is made of a variety of different materials, such as folded paper, cloth, rubber, and foam. The spider consists of different woven cloth or synthetic materials that are compressed to form a flexible membrane. The following two figures illustrate where the suspension components are physically located on the loudspeaker and how they function as the loudspeaker operates.

Figure 8 Loudspeaker Suspension System
Figure 9 Moving Loudspeaker

Modeling the loudspeaker as a lumped system

Before implementing a loudspeaker into a specific application, a series of parameters characterizing the loudspeaker must be extracted. The equivalent circuit of the loudspeaker is key when developing enclosures. The circuit models all aspects of the loudspeaker through an equivalent electrical, mechanical, and acoustical circuit. Figure 9 shows how the three equivalent circuits are connected. The electrical circuit is comprised of the DC resistance of the voice coil, , the imaginary part of the voice coil inductance, , and the real part of the voice coil inductance, . The mechanical system has electrical components that model different physical parameters of the loudspeaker. In the mechanical circuit, , is the electrical capacitance due to the moving mass, , is the electrical inductance due to the compliance of the moving mass, and , is the electrical resistance due to the suspension system. In the acoustical equivalent circuit, models the air mass and models the radiation impedance[2]. This equivalent circuit allows insight into what parameters change the characteristics of the loudspeaker. Figure 10 shows the electrical input impedance as a function of frequency developed using the equivalent circuit of the loudspeaker.

Figure 9 Loudspeaker Analogous Circuit
Figure 10 Electrical Input Impedance

The acoustical enclosure

Function of the enclosure

The loudspeaker emits two waves: a front wave and a back wave. After reflecting off a wall, the back wave can combine with the front wave and produce interference. As a result, the sound pressure level in the room is not uniform: at certain positions the interaction is constructive and the sound pressure level is higher, while at other positions the interaction is destructive and the sound pressure level is lower.

Figure 11 Loudspeaker without baffle producing destructive interferences

The solution is to put a baffle around the loudspeaker in order to prevent the back wave from interfering with the front wave. The sound pressure level is then more uniform throughout the room and the quality of the loudspeaker is higher.

Figure 12 Loudspeakers with infinite baffle and enclosure

Loudspeaker-external fluid interaction

The external fluid exerts a pressure on the membrane of the loudspeaker cone. This additional force can be evaluated as an added mass and an added damping in the equation of vibration of the membrane.

When the fluid is air, this added mass and added damping are negligible. For example, at a frequency of 1000 Hz, the added mass is 3 g.

Loudspeaker-internal fluid interaction

The volume of air in the enclosure constitutes an additional stiffness, called the acoustic load. At low frequencies, this additional stiffness can be four times the stiffness of the loudspeaker cone. The internal air stiffness is very high because of the boundary conditions inside the enclosure: the walls impose a condition of zero airspeed that makes the stiffness increase.

Figure 13 Stiffness of the loudspeaker cone and stiffness of the internal air

The stiffness of the internal air (in red) is four times higher than the stiffness of the loudspeaker cone (in blue). That is why the design of the enclosure is important for improving the quality of the sound and avoiding a decrease of the sound pressure level in the room at some frequencies.

References

  1. The Loudspeaker Design Cookbook 5th Edition; Dickason, Vance., Audio Amateur Press, 1997.
  2. Beranek, L. L. Acoustics. 2nd ed. Acoustical Society of America, Woodbridge, NY. 1993.

Sealed Box Subwoofer Design

A sealed or closed box baffle is the most basic, but often the cleanest sounding, sub-woofer box design. The sub-woofer box, in its most simple form, serves to isolate the back of the speaker from the front, much like the theoretical infinite baffle. The sealed box provides simple construction and controlled response for most sub-woofer applications. The slow low-end roll-off provides a clean transition into the extreme low-frequency range. Unlike ported boxes, the cone excursion is reduced below the resonant frequency of the box and driver, due to the added stiffness provided by the sealed box baffle.

Closed baffle boxes are typically constructed of a very rigid material, such as MDF (medium-density fiberboard) or plywood, 0.75 to 1 inch thick. Depending on the size of the box and the material used, internal bracing may be necessary to maintain a rigid box. A rigid design is important in order to prevent unwanted box resonances.

As with any acoustics application, the box must be matched to the loudspeaker driver for maximum performance. The following will outline the procedure to tune the box or maximize the output of the sub-woofer box and driver combination.

Closed baffle circuit

The sealed box enclosure for sub-woofers can be modelled as a lumped element system if the dimensions of the box are significantly shorter than the shortest wavelength reproduced by the sub-woofer. Most sub-woofer applications are crossed over around 80 to 100 Hz. A 100 Hz wave in air has a wavelength of about 11 feet. Sub-woofers typically have all dimensions much shorter than this wavelength, thus the lumped element system analysis is accurate. Using this analysis, the following circuit represents a sub-woofer enclosure system.

where all of the following parameters are in the mechanical mobility analog

- voltage supply
- electrical resistance
- driver mass
- driver compliance
- resistance
- front cone radiation resistance into the air
- front cone radiation reactance into the air
- rear cone radiation resistance into the box
- rear cone radiation reactance into the box

Driver parameters

In order to tune a sealed box to a driver, the driver parameters must be known. Some of the parameters are provided by the manufacturer, some are found experimentally, and some are found from general tables. For ease of calculations, all parameters will be represented in the SI units meter/kilogram/second. The parameters that must be known to determine the size of the box are as follows:

- driver free-air resonance
- mechanical compliance of the driver
- effective area of the driver

Resonance of the driver

The resonance of the driver is usually either provided by the manufacturer or must be found experimentally. It is a good idea to measure the resonance frequency even if it is provided by the manufacturer to account for inconsistent manufacturing processes.

The following diagram shows the setup for finding the resonance frequency:


The voltage is held constant and the frequency of the variable-frequency source is varied until a maximum is reached; the frequency at which this maximum occurs is the resonance frequency of the driver.

Mechanical compliance

By definition, compliance is the inverse of stiffness, or what is commonly referred to as the spring constant. The compliance of a driver can be found by measuring the displacement of the cone when known masses are placed on the cone while the driver is facing up. The compliance is then the displacement of the cone in meters divided by the added weight in newtons.

Effective area of the driver

The physical diameter of the driver does not directly give the effective area of the driver. The effective diameter can be found using the following diagram:

From this diameter, the area is found from the basic area of a circle equation.

Acoustic compliance

From the known mechanical compliance of the cone, the acoustic compliance can be found from the following equation:

where $\rho$ is the air density and $c$ the speed of sound at a given temperature and pressure.

From the driver acoustic compliance, the box acoustic compliance is found. This is where the final application of the sub-woofer is considered. The acoustic compliance of the box determines the percentage shift upwards of the resonant frequency. If a large shift is desired for high-SPL applications, then a large ratio of driver-to-box acoustic compliance is required. If a flat response is desired for high-fidelity applications, then a lower ratio of driver-to-box acoustic compliance is required. Specifically, the ratios can be found in the following figure, using line (b) as a reference.

- driver-to-box acoustic compliance ratio

Sealed box design

Volume of box

The volume of the sealed box can now be found from the box acoustic compliance. The following equation is used to calculate the box volume:

$V_B = C_{AB}\,\gamma\,P_0$
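
The following sketch strings these steps together using the standard Thiele-Small relations (C_AS = C_MS·Sd², V_B = C_AB·γ·P0, f_c = f_s·√(1 + α)); the driver values and the chosen compliance ratio α are illustrative assumptions, not data for a particular driver.

  import math

  gamma, P0 = 1.4, 101325.0   # ratio of specific heats and ambient pressure (Pa)
  f_s   = 28.0                # driver free-air resonance, Hz (assumed)
  C_MS  = 4.0e-4              # driver mechanical compliance, m/N (assumed)
  S_d   = 0.053               # effective cone area, m^2 (~10 inch driver, assumed)
  alpha = 3.0                 # chosen driver-to-box acoustic compliance ratio

  C_AS = C_MS * S_d ** 2      # driver acoustic compliance, m^5/N
  C_AB = C_AS / alpha         # required box acoustic compliance
  V_B  = C_AB * gamma * P0    # box volume, m^3
  f_c  = f_s * math.sqrt(1.0 + alpha)   # resonance of the driver mounted in the sealed box

  print(f"box volume           = {V_B * 1000:.1f} litres")
  print(f"box-loaded resonance = {f_c:.1f} Hz")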

Box dimensions

From the calculated box volume, the dimensions of the box can then be designed. There is no set formula for finding the dimensions of the box, but there are general guidelines to follow. If the driver were mounted in the center of a square face, the waves generated by the cone would reach the edges of the box at the same time, and when combined would create a strong diffracted wave in the listening space. To best prevent this, the driver should either be mounted off-center on a square face, or the face should be rectangular.

The face of the box which the driver is set in should not be a square.

Miscellaneous Applications

Bass-Reflex Enclosure Design

Bass-reflex enclosures improve the low-frequency response of loudspeaker systems. Bass-reflex enclosures are also called "vented-box" or "ported-cabinet" designs. A bass-reflex enclosure includes a vent or port between the cabinet and the ambient environment. This type of design, as one may observe by looking at contemporary loudspeaker products, is still widely used today. Although the construction of bass-reflex enclosures is fairly simple, their design is not simple and requires proper tuning. This reference focuses on the technical details of bass-reflex design; general loudspeaker information can be found in the Acoustic Loudspeaker chapter.

Effects of the Port on the Enclosure Response

Before discussing the bass-reflex enclosure, it is important to be familiar with the simpler sealed enclosure system performance. As the name suggests, the sealed enclosure system attaches the loudspeaker to a sealed enclosure (except for a small air leak included to equalize the ambient pressure inside). Ideally, the enclosure would act as an acoustical compliance element, as the air inside the enclosure is compressed and rarefied. Often, however, an acoustic material is added inside the box to reduce standing waves, dissipate heat, and for other reasons. This adds a resistive element to the acoustical lumped-element model. A non-ideal model of the effect of the enclosure actually adds an acoustical mass element to complete a series lumped-element circuit given in Figure 1. For more on sealed enclosure design, see the Sealed Box Subwoofer Design page.

Figure 1. Sealed enclosure acoustic circuit.

In the case of a bass-reflex enclosure, a port is added to the construction. Typically, the port is cylindrical and is flanged on the end pointing outside the enclosure. In a bass-reflex enclosure, the amount of acoustic material used is usually much less than in the sealed enclosure case, often none at all. This allows air to flow freely through the port. Instead, the larger losses come from the air leakage in the enclosure. With this setup, a lumped-element acoustical circuit has the form shown in the diagram below.

Figure 2. Bass-reflex enclosure acoustic circuit.

In this figure, represents the radiation impedance of the outside environment on the loudspeaker diaphragm. The loading on the rear of the diaphragm has changed when compared to the sealed enclosure case. If one visualizes the movement of air within the enclosure, some of the air is compressed and rarified by the compliance of the enclosure, some leaks out of the enclosure, and some flows out of the port. This explains the parallel combination of , , and . A truly realistic model would incorporate a radiation impedance of the port in series with , but for now it is ignored. Finally, , the acoustical mass of the enclosure, is included as discussed in the sealed enclosure case. The formulas which calculate the enclosure parameters are listed in Appendix B.

It is important to note the parallel combination of and . This forms a Helmholtz resonator (see the chapter on Helmholtz resonators for more information). Physically, the port functions as the “neck” of the resonator and the enclosure functions as the “cavity.” In this case, the resonator is driven from the piston directly on the cavity instead of the typical Helmholtz case where it is driven at the “neck.” However, the same resonant behavior still occurs at the enclosure resonance frequency, . At this frequency, the impedance seen by the loudspeaker diaphragm is large (see Figure 3 below). Thus, the load on the loudspeaker reduces the velocity flowing through its mechanical parameters, causing an anti-resonance condition where the displacement of the diaphragm is a minimum. Instead, the majority of the volume velocity is emitted by the port itself rather than by the loudspeaker. When this impedance is reflected to the electrical circuit, it is proportional to , so the impedance seen by the voice coil exhibits a minimum. Figure 3 shows a plot of the impedance seen at the terminals of the loudspeaker. In this example, was found to be about 40 Hz, which corresponds to the null in the voice-coil impedance.

Figure 3. Impedances seen by the loudspeaker diaphragm and voice coil.
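
For a rough feel of the numbers involved, the sketch below estimates the Helmholtz tuning frequency of a port/box combination from the standard resonator formula f_B = (c/2π)·√(S_p/(V_B·L_eff)); all dimensions, and the simple end correction used for L_eff, are assumptions.

  import math

  c    = 344.0      # speed of sound in air, m/s
  V_B  = 0.030      # box volume, m^3 (30 litres, assumed)
  r_p  = 0.03       # port radius, m (assumed)
  L_p  = 0.10       # physical port length, m (assumed)

  S_p   = math.pi * r_p ** 2           # port cross-sectional area
  L_eff = L_p + 1.7 * r_p              # rough end correction folded into the length (assumption)
  f_B   = (c / (2 * math.pi)) * math.sqrt(S_p / (V_B * L_eff))
  print(f"estimated port tuning frequency f_B = {f_B:.1f} Hz")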

Quantitative Analysis of Port on Enclosure

The performance of the loudspeaker is first measured by its velocity response, which can be found directly from the equivalent circuit of the system. As the goal of most loudspeaker designs is to improve the bass response (leaving high-frequency production to a tweeter), low frequency approximations will be made as much as possible to simplify the analysis. First, the inductance of the voice coil, , can be ignored as long as . In a typical loudspeaker, is of the order of 1 mH, while is typically 8, thus an upper frequency limit is approximately 1 kHz for this approximation, which is certainly high enough for the frequency range of interest.

Another approximation involves the radiation impedance, . It can be shown [1] that this value is given by the following equation (in acoustical ohms):

Where and are types of Bessel functions. For small values of ka,

and

Hence, the low-frequency impedance on the loudspeaker is represented with an acoustic mass [1]. For a simple analysis, , , , and (the transducer parameters, or Thiele-Small parameters) are converted to their acoustical equivalents. All conversions for all parameters are given in Appendix A. Then, the series masses, , , and , are lumped together to create . This new circuit is shown below.

Figure 4. Low-Frequency Equivalent Acoustic Circuit

Unlike sealed enclosure analysis, there are multiple sources of volume velocity that radiate to the outside environment. Hence, the diaphragm volume velocity, , is not analyzed but rather . This essentially draws a “bubble” around the enclosure and treats the system as a source with volume velocity . This “lumped” approach will only be valid for low frequencies, but previous approximations have already limited the analysis to such frequencies anyway. It can be seen from the circuit that the volume velocity flowing into the enclosure, , compresses the air inside the enclosure. Thus, the circuit model of Figure 3 is valid and the relationship relating input voltage, to may be computed.

In order to make the equations easier to understand, several parameters are combined to form other parameter names. First, and , the enclosure and loudspeaker resonance frequencies, respectively, are:

Based on the nature of the derivation, it is convenient to define the parameters and h, the Helmholtz tuning ratio:

A parameter known as the compliance ratio or volume ratio, , is given by:

Other parameters are combined to form what are known as quality factors:

This notation allows for a simpler expression for the resulting transfer function [1]:

where

Development of Low-Frequency Pressure Response

It can be shown [2] that for , a loudspeaker behaves as a spherical source. Here, a represents the radius of the loudspeaker. For a 15” diameter loudspeaker in air, this low frequency limit is about 150 Hz. For smaller loudspeakers, this limit increases. This limit dominates the limit which ignores , and is consistent with the limit that models by .

Within this limit, the loudspeaker emits a volume velocity , as determined in the previous section. For a simple spherical source with volume velocity , the far-field pressure is given by [1]:

It is possible to simply let for this analysis without loss of generality because distance is only a function of the surroundings, not the loudspeaker. Also, because the transfer function magnitude is of primary interest, the exponential term, which has a unity magnitude, is omitted. Hence, the pressure response of the system is given by [1]:

Where . In the following sections, design methods will focus on rather than , which is given by:

This also implicitly ignores the constants in front of since they simply scale the response and do not affect the shape of the frequency response curve.

Alignments

A popular way to determine the ideal parameters has been through the use of alignments. The concept of alignments is based upon filter theory. Filter development is a method of selecting the poles (and possibly zeros) of a transfer function to meet a particular design criterion. The criteria are the desired properties of a magnitude-squared transfer function, which in this case is . From any of the design criteria, the poles (and possibly zeros) of are found, which can then be used to calculate the numerator and denominator. This is the “optimal” transfer function, which has coefficients that are matched to the parameters of to compute the appropriate values that will yield a design that meets the criteria.

There are many different types of filter designs, each which have trade-offs associated with them. However, this design is limited because of the structure of . In particular, it has the structure of a fourth-order high-pass filter with all zeros at s = 0. Therefore, only those filter design methods which produce a low-pass filter with only poles will be acceptable methods to use. From the traditional set of algorithms, only Butterworth and Chebyshev low-pass filters have only poles. In addition, another type of filter called a quasi-Butterworth filter can also be used, which has similar properties to a Butterworth filter. These three algorithms are fairly simple, thus they are the most popular. When these low-pass filters are converted to high-pass filters, the transformation produces in the numerator.

More details regarding filter theory and these relationships can be found in numerous resources, including [5].

Butterworth Alignment

The Butterworth algorithm is designed to have a maximally flat pass band. Since the slope of a function corresponds to its derivatives, a flat function will have derivatives equal to zero. Since as flat of a pass band as possible is optimal, the ideal function will have as many derivatives equal to zero as possible at s = 0. Of course, if all derivatives were equal to zero, then the function would be a constant, which performs no filtering.

Often, it is better to examine what is called the loss function. Loss is the reciprocal of gain, thus

The loss function can be used to achieve the desired properties, then the desired gain function is recovered from the loss function.

Now, applying the desired Butterworth property of maximal pass-band flatness, the loss function is simply a polynomial with derivatives equal to zero at s = 0. At the same time, the original polynomial must be of degree eight (yielding a fourth-order function). However, derivatives one through seven can be equal to zero if [3]

With the high-pass transformation ,

It is convenient to define , since or -3 dB. This definition allows the matching of coefficients for the transfer function describing the loudspeaker response when . From this matching, the following design equations are obtained [1]:
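
To visualize the target B4 response shape, the sketch below evaluates the magnitude of a generic fourth-order analog Butterworth high-pass filter using SciPy, normalized to a -3 dB frequency of 1 rad/s; it illustrates the response shape only and is not the loudspeaker design equations of [1].

  import numpy as np
  from scipy import signal

  # 4th-order analog Butterworth high-pass prototype, -3 dB at w = 1 rad/s
  b, a = signal.butter(4, 1.0, btype='highpass', analog=True)
  w = np.logspace(-1, 1, 9)                  # 0.1 ... 10 rad/s
  w, h = signal.freqs(b, a, worN=w)

  for wi, hi in zip(w, h):
      print(f"w = {wi:6.3f} rad/s : |G| = {20 * np.log10(abs(hi)):7.2f} dB")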

Quasi-Butterworth Alignment

The quasi-Butterworth alignments do not have as well-defined of an algorithm when compared to the Butterworth alignment. The name “quasi-Butterworth” comes from the fact that the transfer functions for these responses appear similar to the Butterworth ones, with (in general) the addition of terms in the denominator. This will be illustrated below. While there are many types of quasi-Butterworth alignments, the simplest and most popular is the 3rd order alignment (QB3). The comparison of the QB3 magnitude-squared response against the 4th order Butterworth is shown below.

Notice that the case is the Butterworth alignment. This QB alignment is called 3rd order because, as B increases, the slope approaches 3 dec/dec instead of the 4 dec/dec of the 4th-order Butterworth. This phenomenon can be seen in Figure 5.

Figure 5: 3rd-Order Quasi-Butterworth Response for

Equating the system response with , the equations guiding the design can be found [1]:

Chebyshev Alignment

The Chebyshev algorithm is an alternative to the Butterworth algorithm. For the Chebyshev response, the maximally-flat passband restriction is abandoned. Now, a ripple, or fluctuation is allowed in the pass band. This allows a steeper transition or roll-off to occur. In this type of application, the low-frequency response of the loudspeaker can be extended beyond what can be achieved by Butterworth-type filters. An example plot of a Chebyshev high-pass response with 0.5 dB of ripple against a Butterworth high-pass response for the same is shown below.

Figure 6: Chebyshev vs. Butterworth High-Pass Response.

The Chebyshev response is defined by [4]:


is called the Chebyshev polynomial and is defined by [4]:

Fortunately, Chebyshev polynomials satisfy a simple recursion formula [4]:

For more information on Chebyshev polynomials, see the Wolfram Mathworld: Chebyshev Polynomials page.
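
A minimal sketch of the standard recursion, T0(x) = 1, T1(x) = x, Tn(x) = 2x·T(n-1)(x) − T(n-2)(x), evaluated numerically:

  def chebyshev_T(n, x):
      """Evaluate the Chebyshev polynomial of the first kind, Tn(x), by recursion."""
      if n == 0:
          return 1.0
      if n == 1:
          return x
      t_prev, t_curr = 1.0, x
      for _ in range(2, n + 1):
          t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
      return t_curr

  # check against the closed form T4(x) = 8x^4 - 8x^2 + 1
  x = 0.3
  print(chebyshev_T(4, x), 8 * x**4 - 8 * x**2 + 1)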

When applying the high-pass transformation to the 4th order form of , the desired response has the form [1]:

The parameter determines the ripple. In particular, the magnitude of the ripple is dB and can be chosen by the designer, similar to B in the quasi-Butterworth case. Using the recursion formula for ,


Applying this equation to [1],

Thus, the design equations become [1]:

Choosing the Correct Alignment

With all the equations that have already been presented, the question naturally arises, “Which one should I choose?” Notice that the coefficients , , and are not simply related to the parameters of the system response. Certain combinations of parameters may indeed invalidate one or more of the alignments because they cannot realize the necessary coefficients. With this in mind, general guidelines have been developed to guide the selection of the appropriate alignment. This is very useful if one is designing an enclosure to suit a particular transducer that cannot be changed.

The general guideline for the Butterworth alignment focuses on and . Since the three coefficients , , and are a function of , , h, and , fixing one of these parameters yields three equations that uniquely determine the other three. In the case where a particular transducer is already given, is essentially fixed. If the desired parameters of the enclosure are already known, then is a better starting point.

In the case that the rigid requirements of the Butterworth alignment cannot be satisfied, the quasi-Butterworth alignment is often applied when is not large enough. The addition of another parameter, B, allows more flexibility in the design.

For values that are too large for the Butterworth alignment, the Chebyshev alignment is typically chosen. However, the steep transition of the Chebyshev alignment may also be utilized to attempt to extend the bass response of the loudspeaker in the case where the transducer properties can be changed.

In addition to these three popular alignments, research continues in the area of developing new algorithms that can manipulate the low-frequency response of the bass-reflex enclosure. For example, a 5th order quasi-Butterworth alignment has been developed [6]. Another example [7] applies root-locus techniques to achieve results. In the modern age of high-powered computing, other researchers have focused their efforts in creating computerized optimization algorithms that can be modified to achieve a flatter response with sharp roll-off or introduce quasi-ripples which provide a boost in sub-bass frequencies [8].

References

[1] Leach, W. Marshall, Jr. Introduction to Electroacoustics and Audio Amplifier Design. 2nd ed. Kendall/Hunt, Dubuque, IA. 2001.

[2] Beranek, L. L. Acoustics. 2nd ed. Acoustical Society of America, Woodbridge, NY. 1993.

[3] DeCarlo, Raymond A. “The Butterworth Approximation.” Notes from ECE 445. Purdue University. 2004.

[4] DeCarlo, Raymond A. “The Chebyshev Approximation.” Notes from ECE 445. Purdue University. 2004.

[5] VanValkenburg, M. E. Analog Filter Design. Holt, Rinehart and Winston, Inc. Chicago, IL. 1982.

[6] Kreutz, Joseph and Panzer, Joerg. "Derivation of the Quasi-Butterworth 5 Alignments." Journal of the Audio Engineering Society. Vol. 42, No. 5, May 1994.

[7] Rutt, Thomas E. "Root-Locus Technique for Vented-Box Loudspeaker Design." Journal of the Audio Engineering Society. Vol. 33, No. 9, September 1985.

[8] Simeonov, Lubomir B. and Shopova-Simeonova, Elena. "Passive-Radiator Loudspeaker System Design Software Including Optimization Algorithm." Journal of the Audio Engineering Society. Vol. 47, No. 4, April 1999.

Appendix A: Equivalent Circuit Parameters

Name Electrical Equivalent Mechanical Equivalent Acoustical Equivalent
Voice-Coil Resistance
Driver (Speaker) Mass See
Driver (Speaker) Suspension Compliance
Driver (Speaker) Suspension Resistance
Enclosure Compliance
Enclosure Air-Leak Losses
Acoustic Mass of Port
Enclosure Mass Load See See
Low-Frequency Radiation Mass Load See See
Combination Mass Load

Appendix B: Enclosure Parameter Formulas

Figure 7: Important dimensions of bass-reflex enclosure.

Based on these dimensions [1],

(inside enclosure volume)
(inside area of the side the speaker is mounted on)
specific heat of air at constant volume
specific heat of filling at constant volume ()
mean density of air (about 1.3 kg/m³)
density of filling
ratio of specific heats for air (1.4)
speed of sound in air (about 344 m/s)
= effective density of enclosure. If there is little or no filling (an acceptable assumption in a bass-reflex system but not for sealed enclosures),

Polymer-Film Acoustic Filters

Introduction

Acoustic filters are used in many devices such as mufflers, noise control materials (absorptive and reactive), and loudspeaker systems, to name a few. Although the waves in simple (single-medium) acoustic filters usually travel in gases such as air and carbon monoxide (in the case of automobile mufflers) or in materials such as fiberglass, polyvinylidene fluoride (PVDF) film, or polyethylene (Saran Wrap), there are also filters that couple two or three distinct media together to achieve a desired acoustic response. General information about basic acoustic filter design can be found at the wikibook page Acoustic Filter Design & Implementation. The focus of this article is on acoustic filters that use multilayer air/polymer-film coupled media as the acoustic medium through which sound waves propagate; it concludes with an example of how these filters can be used to detect and extract audio-frequency information from high-frequency "carrier" waves that carry an audio signal. However, before getting into this specific type of acoustic filter, we need to briefly discuss how sound waves interact with the medium (or media) in which they travel and how these factors can play a role when designing acoustic filters.

Changes in Media Properties Due to Sound Wave Characteristics

As with any system being designed, the filter response characteristics of an acoustic filter are tailored based on the frequency spectrum of the input signal and the desired output. The input signal may be infrasonic (frequencies below human hearing), sonic (frequencies within the human hearing range), or ultrasonic (frequencies above the human hearing range). In addition to the frequency content of the input signal, the density, and, thus, the characteristic impedance of the medium (or media) being used in the acoustic filter must also be taken into account. In general, the characteristic impedance for a particular medium is expressed as

z₀ = ρ₀c

where

      ρ₀ = (equilibrium) density of the medium
      c = speed of sound in the medium

The characteristic impedance is important because this value simultaneously gives an idea of how fast or slow particles will travel as well as how much mass is "weighting down" the particles in the medium (per unit area or volume) when they are excited by a sound source. The speed at which sound travels in the medium needs to be taken into consideration because this factor can ultimately affect the time response of the filter (i.e. the output of the filter may not radiate or attenuate sound quickly or slowly enough if not designed properly). The intensity of a sound wave (for a plane travelling wave of pressure amplitude p̂) is expressed as

I = p̂²/(2ρ₀c).

The intensity I is interpreted as the (time-averaged) rate of energy transmission of a sound wave through a unit area normal to the direction of propagation, and this parameter is also an important factor in acoustic filter design because the characteristic properties of the given medium can change relative to the intensity of the sound wave travelling through it. In other words, the particles (atoms or molecules) that make up the medium will respond differently when the intensity of the sound wave is very high or very low relative to the size of the control area (i.e. the dimensions of the filter, in this case). Other properties such as the elasticity and mean propagation velocity (of a sound wave) can change in the acoustic medium as well, but focusing on frequency, impedance, and/or intensity in the design process usually takes care of these other parameters because most of them will inevitably be dependent on the aforementioned properties of the medium.
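The two quantities above are straightforward to evaluate numerically. The sketch below uses air at room temperature and an assumed 1 Pa pressure amplitude, values chosen only for illustration.

    import math

    def characteristic_impedance(rho0, c):
        """Characteristic (specific acoustic) impedance z0 = rho0 * c, in rayl (Pa*s/m)."""
        return rho0 * c

    def plane_wave_intensity(p_amplitude, rho0, c):
        """Time-averaged intensity of a plane travelling wave, I = p^2 / (2 rho0 c), in W/m^2."""
        return p_amplitude ** 2 / (2.0 * rho0 * c)

    # Example values for air at room temperature; the 1 Pa amplitude is an assumption.
    rho0_air, c_air = 1.21, 343.0
    z0 = characteristic_impedance(rho0_air, c_air)
    I = plane_wave_intensity(1.0, rho0_air, c_air)
    print(f"z0 (air) = {z0:.0f} rayl, I = {I:.2e} W/m^2")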

Why Coupled Acoustic Media in Acoustic Filters?

Media coupling is employed in acoustic transducers to either increase or decrease the impedance of the transducer, and thus control the intensity and speed of the signal acting on the transducer while converting the incident wave, or initial excitation sound wave, from one form of energy to another (e.g. converting acoustic energy to electrical energy). Specifically, the impedance of the transducer is augmented by inserting a solid structure (not necessarily rigid) between the transducer and the initial propagation medium (e.g. air). The reflective properties of the inserted medium are exploited to either increase or decrease the intensity and propagation speed of the incident sound wave. It is the ability to alter, and to some extent control, the impedance of a propagation medium by (periodically) inserting solid structures such as thin, flexible films in the original medium (air), and the ability to concomitantly alter the frequency response of the original medium, that makes the use of multilayer media in acoustic filters attractive. The reflection factor R and transmission factor T between two media, expressed as

R = (Z_in − z₁)/(Z_in + z₁)

and

T = 1 + R = 2Z_in/(Z_in + z₁),

are the tangible values that tell how much of the incident wave is reflected from and transmitted through the junction where the media meet. Here z₁ is the characteristic impedance of the initial propagation medium, and Z_in is the (total) input impedance seen by the incident sound wave upon just entering an air-solid acoustic media layer. In the case of multiple air columns as shown in Fig. 2, Z_in is the aggregate impedance of each air-column layer seen by the incident wave at the input. Below in Fig. 1, a simple illustration explains what happens when an incident sound wave propagating in medium (1) comes in contact with medium (2) at the junction of the two media (x = 0), where the sound waves are represented by vectors.

As mentioned above, an example of three such successive air-solid acoustic media layers is shown in Fig. 2, and the electroacoustic equivalent circuit for Fig. 2 is shown in Fig. 3, where m = (density of solid material) × (thickness of solid material) is the unit-area (or volume) mass, z is the characteristic acoustic impedance of the medium, and k is the wavenumber. Note that in the case of a multilayer, coupled acoustic medium in an acoustic filter, the impedance of each air-solid section is calculated by using the following general-purpose impedance ratio equation (also referred to as a transfer matrix):

Z_in = z·(Z_L + j·z·tan(kL)) / (z + j·Z_L·tan(kL))

where Z_L is the (known) impedance at the edge of the solid of an air-solid layer (on the right), Z_in is the (unknown) impedance at the edge of the air column of the layer, and L is the thickness of the air column.
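To make the layer-by-layer calculation concrete, the sketch below chains the impedance translation above through three identical air columns separated by thin films treated as lumped series masses m = (density)·(thickness). All dimensions, material values and the lumped-mass treatment of the films are illustrative assumptions and are not taken from the referenced paper.

    import cmath, math

    def translate_impedance(Z_load, z_layer, k, L):
        """Input impedance seen through a fluid layer of characteristic impedance
        z_layer, thickness L and wavenumber k, terminated by Z_load."""
        t = 1j * cmath.tan(k * L)
        return z_layer * (Z_load + z_layer * t) / (z_layer + Z_load * t)

    def reflection_transmission(Z_in, z1):
        """Pressure reflection and transmission factors at the input junction."""
        R = (Z_in - z1) / (Z_in + z1)
        return R, 1 + R

    # Illustrative example: three 2 mm air columns separated by thin films,
    # each film modelled as a series mass m = rho_film * thickness (assumed values).
    rho0, c = 1.21, 343.0           # air
    z_air = rho0 * c
    f = 40e3                        # assumed ultrasonic frequency, Hz
    w = 2 * math.pi * f
    k = w / c
    m = 1780.0 * 28e-6              # assumed PVDF-like film: density * thickness, kg/m^2

    Z = z_air                       # assume the stack radiates into plain air
    for _ in range(3):
        Z = Z + 1j * w * m                          # thin film as a series mass impedance
        Z = translate_impedance(Z, z_air, k, 0.002) # then translate through the air column

    R, T = reflection_transmission(Z, z_air)
    print(f"|R| = {abs(R):.3f}, |T| = {abs(T):.3f}")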

Effects of High-Intensity, Ultrasonic Waves in Acoustic Media in Audio Frequency Spectrum

When an ultrasonic wave is used as a carrier to transmit audio frequencies, three audio effects are associated with extrapolating the audio frequency information from the carrier wave: (a) beating effects, (b) parametric array effects, and (c) radiation pressure.

Beating occurs when two ultrasonic waves with distinct frequencies f₁ and f₂ propagate in the same direction, resulting in amplitude variations which consequently make the audio signal information go in and out of phase, or "beat", at a frequency of |f₂ − f₁|.
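As a trivial numeric check of this relation, with two assumed carrier frequencies:

    # Two ultrasonic carriers (assumed example frequencies) beating against each other:
    f1, f2 = 40_000.0, 41_000.0          # Hz
    beat_frequency = abs(f2 - f1)        # audible amplitude-variation ("beat") rate
    print(f"Beat frequency = {beat_frequency:.0f} Hz")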

Parametric array [1] effects occur when the intensity of an ultrasonic wave in a particular medium is so high that the large displacements of particles (atoms) per wave cycle change the properties of that medium, influencing parameters like elasticity, density, propagation velocity, etc. in a non-linear fashion. The result of parametric array effects on modulated, high-intensity, ultrasonic waves in a particular medium (or coupled media) is the generation and propagation of audio frequency waves (not necessarily present in the original audio information) that are generated in a manner similar to the nonlinear process of amplitude demodulation commonly inherent in diode circuits (when diodes are forward biased).

Another audio effect that arises from high-intensity ultrasonic beams of sound is a static (DC) pressure called radiation pressure. Radiation pressure is similar to parametric array effects in that amplitude variations in the signal give rise to audible frequencies via amplitude demodulation. However, unlike parametric array effects, radiation pressure fluctuations that generate audible signals from amplitude demodulation can occur due to any low-frequency modulation, not just pressure fluctuations occurring at the modulation frequency or the beating frequency |f₂ − f₁|.

An Application of Coupled Media in Acoustic Filters

Figs. 1 - 3 are all from a research paper entitled "New Type of Acoustic Filter Using Periodic Polymer Layers for Measuring Audio Signal Components Excited by Amplitude-Modulated High-Intensity Ultrasonic Waves," submitted to the Audio Engineering Society (AES) by Minoru Todo, Primary Innovator at Measurement Specialties, Inc., and published in the October 2005 edition of the AES Journal. Figs. 4 and 5 below, also from this paper, are illustrations of the test setups referred to in the paper. Specifically, Fig. 4 is a test setup used to measure the transmission (of an incident ultrasonic sound wave) through the acoustic filter described by Figs. 1 and 2. Fig. 5 is a block diagram of the test setup used for measuring radiation pressure, one of the audio effects mentioned in the previous section. It turns out that, of all the audio effects mentioned in the previous section that are caused by high-intensity ultrasonic waves propagating in a medium, sound waves produced by radiation pressure are the hardest to detect when microphones and preamplifiers are used in the detection/receiver system. Although nonlinear noise artifacts occur due to overloading of the preamplifier present in the detection/receiver system, the bulk of the nonlinear noise comes from the inherent nonlinear properties of microphones. This is because all microphones, even specialized measurement microphones designed for audio-spectrum measurements with sensitivity well beyond the threshold of hearing, have nonlinear artifacts whose magnitude increases at ultrasonic frequencies. These nonlinearities essentially mask the radiation pressure, because their magnitude is orders of magnitude greater than that of the radiation pressure. The acoustic (low-pass) filter referred to in the paper was designed to filter out the "detrimental" ultrasonic wave that was inducing high nonlinear noise artifacts in the measurement microphones. The high-intensity ultrasonic wave was producing radiation pressure (which is audible) within the initial acoustic medium (i.e. air). By filtering out the ultrasonic wave, the measurement microphone would detect only the audible radiation pressure that the ultrasonic wave was producing in air. Acoustic filters like these could possibly be used to detect/receive any high-intensity ultrasonic signal that may carry audio information that needs to be extracted with an acceptable level of fidelity.

References

[1] Todo, Minoru. "New Type of Acoustic Filter Using Periodic Polymer Layers for Measuring Audio Signal Components Excited by Amplitude-Modulated High-Intensity Ultrasonic Waves." Journal of the Audio Engineering Society. Vol. 53, pp. 930-941, October 2005.

[2] Kinsler, L. E., Frey, A. R., Coppens, A. B., and Sanders, J. V. Fundamentals of Acoustics. 4th ed. John Wiley & Sons, 2000.

[3] ME 513 Course Notes, Dr. Luc Mongeau, Purdue University.

[4] http://www.ieee-uffc.org/archive/uffc/trans/Toc/abs/02/t0270972.htm

Created by Valdez L. Gant

Noise in Hydraulic Systems


Hydraulic systems are the most preferred source of power transmission in most industrial and mobile equipment due to their power density, compactness, flexibility, fast response and efficiency. The field of hydraulics and pneumatics is also known as 'Fluid Power Technology'. Fluid power systems have a wide range of applications, including industrial machinery, off-road vehicles, automotive systems, and aircraft. However, one of the main problems with hydraulic systems is the noise they generate. The health and safety issues relating to noise have been recognized for many years, and legislation is now placing clear demands on manufacturers to reduce noise levels [1]. Hence, noise reduction in hydraulic systems demands a lot of attention from industrial as well as academic researchers. Reducing it requires a good understanding of how noise is generated and propagated in a hydraulic system.

Sound in fluids

The speed of sound in a fluid can be determined using the following relation:

c = √(K/ρ)

where K is the fluid bulk modulus, ρ is the fluid density, and c is the velocity of sound.

Typical values of bulk modulus range from 2×10⁹ to 2.5×10⁹ N/m². For a particular oil with a density of 889 kg/m³, taking K = 2×10⁹ N/m² gives a speed of sound of

c = √(2×10⁹ / 889) ≈ 1500 m/s
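The same estimate can be scripted; the bulk modulus below is simply the lower end of the typical range quoted above.

    import math

    def speed_of_sound(bulk_modulus, density):
        """Speed of sound in a fluid, c = sqrt(K / rho)."""
        return math.sqrt(bulk_modulus / density)

    K = 2.0e9      # fluid bulk modulus, N/m^2 (lower end of the typical range above)
    rho = 889.0    # oil density, kg/m^3 (value quoted above)
    print(f"c = {speed_of_sound(K, rho):.0f} m/s")   # roughly 1500 m/s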

Source of Noise

The main source of noise in hydraulic systems is the pump which supplies the flow. Most of the pumps used are positive displacement pumps. Of the positive displacement pumps, the axial piston swash-plate type is most often preferred due to its reliability and efficiency.

The noise generation in an axial piston pump can be classified under two categories: (i) fluidborne noise and (ii) structureborne noise.

Fluidborne Noise (FBN)

Among the positive displacement pumps, the highest levels of FBN are generated by axial piston pumps and the lowest levels by screw pumps; in between these lie the external gear pump and the vane pump [1]. The discussion on this page is mainly focused on axial piston swash-plate type pumps. An axial piston pump has a fixed number of displacement chambers arranged in a circular pattern, separated from each other by an angular pitch equal to 2π/n, where n is the number of displacement chambers. As each chamber discharges a specific volume of fluid, the discharge at the pump outlet is the sum of the discharges from the individual chambers. The discontinuity in flow between adjacent chambers results in a kinematic flow ripple. The amplitude of the kinematic ripple can be theoretically determined given the size of the pump and the number of displacement chambers. The kinematic ripple is a theoretical value; the actual flow ripple at the pump outlet is much larger because the kinematic ripple is combined with a compressibility component which is due to the fluid compressibility. These ripples (also referred to as flow pulsations) generated at the pump are transmitted through the pipe or flexible hose connected to the pump and travel to all parts of the hydraulic circuit.

The pump is considered an ideal flow source. The pressure in the system is determined by the resistance to the flow, otherwise known as the system load. The flow pulsations result in pressure pulsations, which are superimposed on the mean system pressure. Both the flow and pressure pulsations easily travel to all parts of the circuit, affect the performance of components such as control valves and actuators, and make these components vibrate, sometimes even resonate. This vibration of system components adds to the noise generated by the flow pulsations. The transmission of FBN in the circuit is discussed under Transmission below.

A typical axial piston pump with 9 pistons running at 1000 rpm can produce a sound pressure level of more than 70 dB.
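The fundamental of the flow ripple (the piston-passing frequency) and its harmonics follow directly from the number of displacement chambers and the shaft speed, as the short sketch below shows for the 9-piston, 1000 rpm example.

    def piston_passing_frequencies(n_pistons, rpm, n_harmonics=4):
        """Fundamental piston-passing frequency and its first few harmonics, in Hz."""
        fundamental = n_pistons * rpm / 60.0
        return [fundamental * h for h in range(1, n_harmonics + 1)]

    # 9-piston axial piston pump at 1000 rpm (example from the text)
    print(piston_passing_frequencies(9, 1000))   # [150.0, 300.0, 450.0, 600.0]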

Structureborne Noise (SBN)

In swash-plate type pumps, the main sources of structureborne noise are the fluctuating forces and moments on the swash plate. These fluctuating forces arise as a result of the varying pressure inside the displacement chamber. As the displacing elements move from the suction stroke to the discharge stroke, the pressure varies accordingly from a few bar to a few hundred bar. These pressure changes are reflected on the displacement elements (in this case, pistons) as forces, and these forces are exerted on the swash plate, causing it to vibrate. This vibration of the swash plate is the main cause of structureborne noise. There are other components in the system which also vibrate and lead to structureborne noise, but the swash plate is the major contributor.

Fig. 1 shows an exploded view of an axial piston pump. The flow pulsations and the oscillating forces on the swash plate, which cause FBN and SBN respectively, are also shown for one revolution of the pump.

Transmission

FBN

The transmission of FBN is a complex phenomenon. Over the past few decades, a considerable amount of research has gone into the mathematical modeling of pressure and flow transients in the circuit. This involves the solution of wave equations, with the piping treated as a distributed parameter system known as a transmission line [1] & [3].

Let us consider a simple pump-pipe-loading valve circuit, as shown in Fig. 2. The pressure and flow ripple at any location in the pipe can be described by the relations:

P(x) = A·e^(−jkx) + B·e^(jkx)   .........(1)

Q(x) = (1/Z₀)·(A·e^(−jkx) − B·e^(jkx))   .....(2)

where A and B are frequency-dependent complex coefficients which are directly proportional to the pump (source) flow ripple, but are also functions of the source impedance Z_s, the characteristic impedance of the pipe Z₀ and the termination impedance Z_t. These impedances, which usually vary as the system operating pressure and flow rate change, can be determined experimentally.

For complex systems with several components, the pressure and flow ripples are estimated using the transformation (transfer) matrix approach. For this, the system components can be treated as lumped impedances (e.g. a throttle valve or accumulator) or distributed impedances (e.g. a flexible hose or silencer). Various software packages are available today to predict the pressure pulsations.
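A minimal sketch of the transfer-matrix idea for a single lossless pipe terminated by a loading valve is given below. The pipe dimensions, impedances, and source ripple are illustrative assumptions, and practical models add frequency-dependent friction losses that this sketch omits.

    import cmath, math

    def pipe_transfer_matrix(k, length, Z0):
        """Lossless transmission-line (transfer-matrix) model of a rigid pipe:
        relates pressure/flow ripple at the inlet to that at the outlet."""
        kl = k * length
        return [[cmath.cos(kl), 1j * Z0 * cmath.sin(kl)],
                [1j * cmath.sin(kl) / Z0, cmath.cos(kl)]]

    # Illustrative values (assumed, not from the text)
    c = 1500.0                     # speed of sound in oil, m/s (from the estimate above)
    rho = 889.0                    # oil density, kg/m^3
    area = math.pi * 0.01 ** 2     # 20 mm bore pipe, m^2
    Z0 = rho * c / area            # characteristic impedance of the pipe
    Zt = 5.0 * Z0                  # termination (loading valve) impedance, assumed
    f = 150.0                      # pump ripple fundamental, Hz (9 pistons at 1000 rpm)
    k = 2 * math.pi * f / c
    Q_end = 1e-5                   # assumed flow ripple amplitude at the valve, m^3/s

    # (P, Q) at the termination, then propagate back to the pump outlet.
    P_end = Zt * Q_end             # termination condition: P = Zt * Q at the valve
    T = pipe_transfer_matrix(k, 2.0, Z0)   # 2 m of pipe (assumed)
    P_pump = T[0][0] * P_end + T[0][1] * Q_end
    Q_pump = T[1][0] * P_end + T[1][1] * Q_end
    print(f"|P| at pump outlet = {abs(P_pump):.3e} Pa, |Q| = {abs(Q_pump):.3e} m^3/s")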

SBN

The transmission of SBN follows the classic source-path-noise model. The vibrations of the swash plate, the main cause of SBN, are transferred to the pump casing, which encloses the entire rotating group of the pump, including the displacement chambers (also known as the cylinder block), the pistons, and the swash plate. The pump case, apart from vibrating itself, transfers the vibration down to the mount on which the pump is mounted. The mount then passes the vibrations down to the main mounting structure or the vehicle. Thus the SBN is transferred from the swash plate to the main structure or vehicle via the pump casing and mount.

Some of the machine structures, along the path of transmission, are good at transmitting this vibrational energy and they even resonate and reinforce it. By converting only a fraction of 1% of the pump structureborne noise into sound, a member in the transmission path could radiate more ABN than the pump itself [4].

Airborne noise (ABN)

Both FBN and SBN impart high fatigue loads on the system components and make them vibrate. All of these vibrations are radiated as airborne noise and can be heard by a human operator. Also, the flow and pressure pulsations make system components such as control valves resonate. This vibration of the particular component again radiates airborne noise.

Noise reduction

The reduction of the noise radiated from the hydraulic system can be approached in two ways.

(i) Reduction at Source - the reduction of noise at the pump. A large amount of open literature is available on reduction techniques, with some techniques focusing on reducing FBN at the source and others focusing on SBN. Reduction of FBN and SBN at the source has a large influence on the ABN that is radiated. Even though a lot of progress has been made in reducing FBN and SBN separately, the problem of noise in hydraulic systems is not fully solved and much remains to be done. The reason is that FBN and SBN are interrelated, in the sense that trying to reduce the FBN at the pump tends to affect the SBN characteristics. Currently, one of the main research directions in pump noise reduction is a systematic approach to understanding the coupling between FBN and SBN and targeting them simultaneously instead of treating them as two separate sources. Such a unified approach demands not only well-trained researchers but also a sophisticated computer-based mathematical model of the pump which can accurately output the results necessary for optimization of the pump design. The amplitude of fluid pulsations can also be reduced, at the source, with the use of a hydraulic attenuator [5].

(ii) Reduction at Component level - focuses on the reduction of noise from individual components such as hoses, control valves, pump mounts and fixtures. This can be accomplished by a suitable design modification of the component so that it radiates the least amount of noise. Optimization using computer-based models can be one of the ways.

Hydraulic System noise

Fig.3 Domain of hydraulic system noise generation and transmission (Figure recreated from [1])

References

1. Edge, Kevin. "Designing Quieter Hydraulic Systems - Some Recent Developments and Contributions." Fluid Power: Fourth JHPS International Symposium, 1999.

2. Kinsler, L. E., Frey, A. R., Coppens, A. B., and Sanders, J. V. Fundamentals of Acoustics. Fourth Edition. John Wiley & Sons Inc.

3. Harrison, A. M. "Reduction of Axial Piston Pump Pressure Ripple." PhD thesis, University of Bath. 1997.

4. Skaistis, Stan. Noise Control of Hydraulic Machinery. Marcel Dekker, Inc., 1988.

5. Akers, A., Gassman, M., and Smith, R. Hydraulic Power System Analysis. Taylor & Francis, New York, 2006. ISBN 0-8247-9956-9.

Noise from Cooling Fans

Proposal

As electric/electronic devices get smaller and more functional, the noise of the cooling device becomes more important. This page explains the origins of noise generation in the small axial cooling fans used in electronic goods like desktop/laptop computers. The sources of fan noise include aerodynamic noise as well as the operating sound of the fan itself. This page focuses on the aerodynamic noise generation mechanisms.

Introduction

If one opens a desktop computer, one may find three (or more) fans. For example, a fan is typically found on the heat sink of the CPU, in the back panel of the power supply unit, on the case ventilation hole, on the graphics card, and even on the motherboard chipset if it is a recent one. The computer noise which annoys many people is mostly due to cooling fans, provided the hard drive(s) are fairly quiet. When Intel Pentium processors were first introduced, there was no need for a fan on the CPU; however, contemporary CPUs cannot function even for several seconds without a cooling fan. As CPU densities increase, the heat transfer required for nominal operation demands increased airflow, which causes more and more noise. The type of fan commonly used in desktop computers is the axial fan; centrifugal blowers are used in laptop computers. Several fan types are shown here (pdf format). Different fan types have different noise generation characteristics and performance. The axial flow fan is mainly considered on this page.

Noise Generation Mechanisms

The figure below shows a typical noise spectrum of a 120 mm diameter electronic device cooling fan. One microphone was used at a point 1 m from the upstream side of the fan. The fan has 7 blades and 4 struts for motor mounting and operates at 13 V. A certain amount of load is applied. The blue plot is the background noise of the anechoic chamber, and the green one is the sound spectrum when the fan is running.

(*BPF = Blade Passing Frequency)
Each noise element shown in this figure is caused by one or more of the following generation mechanisms.

Blade Thickness Noise - Monopole (But very weak)

Blade thickness noise is generated by the volume displacement of fluid. A fan blade has finite thickness and volume. As the rotor rotates, the volume of each blade displaces fluid, which fluctuates the near-field pressure, and noise is generated. This noise is tonal at the running frequency and generally very weak for cooling fans, because their RPM is relatively low. Therefore, the thickness of the fan blades hardly affects electronic cooling fan noise.
(This kind of noise can become severe for high speed turbomachines like helicopter rotors.)

Tonal Noise by Aerodynamic Forces - Dipole

Uniform Inlet Flow (Negligible)

The sound generation due to uniform and steady aerodynamic forces has characteristics very similar to blade thickness noise. It is very weak for low-speed fans, and depends on the fan RPM. Since some steady blade forces are necessary for a fan to do its duty even in an ideal condition, this kind of noise is impossible to avoid. It is known that this noise can be reduced by increasing the number of blades.

Non-uniform Inlet Flow

Non-uniform (but still steady) inlet flow causes non-uniform aerodynamic forces on the blades as their angular positions change. This generates noise at the blade passing frequency and its harmonics. It is one of the major noise sources of electronic cooling fans.
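The blade passing frequency itself is simply the number of blades times the shaft speed; a short sketch follows (the rotational speed is an assumed example, since it is not stated above).

    def blade_passing_frequency(n_blades, rpm):
        """Blade passing frequency (BPF) in Hz."""
        return n_blades * rpm / 60.0

    # 7-blade cooling fan (as in the spectrum above) at an assumed 2400 rpm
    print(blade_passing_frequency(7, 2400))   # 280.0 Hz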

Rotor-Casing interaction

If the fan blades are very close to a structure which is not symmetric, unsteady interaction forces on the blades are generated. The fan then experiences a running condition similar to lying in a non-uniform flow field. See Acoustics/Rotor Stator Interactions for details.

Impulsive Noise (Negligible)

This noise is caused by the interaction between a blade and the blade-tip vortex of the preceding blade, and is not severe for cooling fans.

Noise from Stall

Rotating Stall

Click here to read the definition and an aerodynamic description of stall.

The noise due to stall is a complex phenomenon that occurs at low flow rates. If, for some reason, the flow is locally disturbed, it can cause stall on one of the blades. As a result, the upstream passage of this blade is partially blocked and the mean flow is diverted away from this passage. This increases the angle of attack on the closest blade on the upstream side of the originally stalled blade, so the flow stalls there as well. On the other side of the first blade, the flow is un-stalled because of the reduction of the flow angle.

As this process repeats, the stall cell travels around the blades at about 30~50% of the running frequency, in the direction opposite to the blade rotation. This series of phenomena causes unsteady blade forces, and consequently generates noise and vibration.

Non-uniform Rotor Geometry

Asymmetry of the rotor causes noise at the rotating frequency and its harmonics (not the blade passing frequency, obviously), even when the inlet flow is uniform and steady.

Unsteady Flow Field

Unsteady flow causes random forces on the blades. It spreads the discrete spectrum noise and makes it continuous. In the case of low-frequency variation, the spread continuous spectral noise is concentrated around the rotating frequency, and narrowband noise is generated. Stochastic velocity fluctuations of the inlet flow generate a broadband noise spectrum. The generation of random noise components is covered in the following sections.

Random Noise by Unsteady Aerodynamic Forces

Turbulent Boundary Layer

Even in steady and uniform inlet flow, there exist random force fluctuations on the blades, arising from the turbulent blade boundary layer. Some noise is generated for this reason, but the dominant noise is produced by the boundary layer passing the blade trailing edge. The blade trailing edges scatter the non-propagating near-field pressure into a propagating sound field.

Incident Turbulence

Velocity fluctuations of the intake flow with a stochastic time history generate random forces on the blades, and hence a broadband noise spectrum.

Vortex Shedding

For some reason, a vortex can separate from a blade. The circulating flow around the blade then changes, causing non-uniform forces on the blades, and noise. A classical example of this phenomenon is the 'Karman vortex street'. The vortex shedding mechanism can occur in the laminar boundary layer of a low-speed fan and also in the turbulent boundary layer of a high-speed fan.

Flow Separation

Flow separation causes the stall explained above. This phenomenon can cause random noise, which spreads all the discrete spectrum noise and turns it into broadband noise.

Tip Vortex

Since cooling fans are ducted axial flow machines, the annular gap between the blade tips and the casing is an important parameter for noise generation. While the fan rotates, there is another flow through the annular gap due to the pressure difference between the upstream and downstream sides of the fan. Because of this flow, a tip vortex is generated in the gap, and the broadband noise increases as the annular gap gets bigger.

Installation Effects

Once a fan is installed, even if the fan is well designed acoustically, unexpected noise problems can come up. These are called installation effects, and two types are applicable to cooling fans.

Effect of Inlet Flow Conditions

A structure that affects the inlet flow of a fan causes installation effects. For example, Hoppe & Neise [3] showed that adding or removing a bellmouth nozzle at the inlet flange of a 500 mm fan can change the noise power by 50 dB (although this applies to a much larger and noisier fan).

Acoustic Loading Effect

This effect appears in duct system applications. Some high-performance graphics cards use a duct system for direct exhaust.
The sound power generated by a fan is not only a function of its impeller speed and operating condition, but also depends on the acoustic impedances of the duct systems connected to its inlet and outlet. Therefore, the fan and duct system should be matched not only for aerodynamic reasons but also because of acoustic considerations.

Closing Comment

Noise reduction of cooling fans has some restrictions:
1. Active noise control is not economically effective. 80 mm cooling fans cost only 5~10 US dollars, so active control is only applicable to high-end electronic products.
2. Restricting certain aerodynamic phenomena for noise reduction can cause a serious reduction in fan performance. Increasing the RPM of the fan is of course a much more dominant factor for noise.
Different aspects of fan noise are introduced at some of the linked sites below, such as active RPM control or noise comparison of the various bearings used in fans. If blade passing noise is dominant, a muffler would be beneficial.

Some practical issues of PC noise are presented at the following sites.
Cooling Fan Noise Comparison - Sleeve Bearing vs. Ball Bearing (pdf format)
Brief explanation of fan noise origins and noise reduction suggestions
Effect of sweep angle comparison
Comparisons of noise from various 80mm fans
Noise reduction of a specific desktop case
Noise reduction of another specific desktop case
Informal study for noise from CPU cooling fan
Informal study for noise from PC case fans
Active fan speed optimizers for minimum noise from desktop computers

References

[1] Neise, W., and Michel, U., "Aerodynamic Noise of Turbomachines"
[2] Anderson, J., "Fundamentals of Aerodynamics", 3rd edition, 2001, McGraw-Hill
[3] Hoppe, G., and Neise, W., "Vergleich verschiedener Geräuschmessverfahren für Ventilatoren" [Comparison of various noise measurement methods for fans]. Forschungsbericht FLT 3/1/31/87, Forschungsvereinigung für Luft- und Trocknungstechnik e.V., Frankfurt/Main, Germany

Piezoelectric Transducers

Introduction

Piezoelectricity, derived from the Greek "piezein" (to press), means pressure electricity. Certain crystalline substances generate electric charges under mechanical stress and, conversely, experience a mechanical strain in the presence of an electric field. The piezoelectric effect describes a situation where the transducing material senses input mechanical vibrations and produces a charge at the frequency of the vibration. Conversely, an AC voltage causes the piezoelectric material to vibrate at the same frequency as the applied signal.

Quartz is the best known single-crystal material with piezoelectric properties. Strong piezoelectric effects can be induced in materials with an ABO3 (perovskite) crystalline structure, where 'A' denotes a large divalent metal ion such as lead and 'B' denotes a smaller tetravalent ion such as titanium or zirconium.

For any crystal to exhibit the piezoelectric effect, its structure must have no center of symmetry. Either a tensile or a compressive stress applied to the crystal alters the separation between the positive and negative charge sites in the cell, causing a net polarization at the surface of the crystal. The polarization varies directly with the applied stress and is direction dependent, so that compressive and tensile stresses result in electric fields, and hence voltages, of opposite polarity.

Vibrations & Displacements

Piezoelectric ceramics have non-centrosymmetric unit cells below the Curie temperature and centrosymmetric unit cells above the Curie temperature. Non-centrosymmetric structures provide a net electric dipole moment. The dipoles are randomly oriented until a strong DC electric field is applied causing permanent polarization and thus piezoelectric properties.

When a polarized ceramic is subjected to stress, the crystal lattice distorts, changing the total dipole moment of the ceramic. The change in dipole moment due to an applied stress causes a net electric field which varies linearly with stress.

Dynamic Performance

The dynamic performance of a piezoelectric material relates to how it behaves under alternating stresses near the mechanical resonance. The parallel combination of C2 with L1, C1, and R1 in the equivalent circuit below controls the transducer's reactance, which is a function of frequency.

Equivalent Electric Circuit

Frequency Response

The graph below shows the impedance of a piezoelectric transducer as a function of frequency. The minimum value at fn corresponds to the resonance, while the maximum value at fm corresponds to the anti-resonance.
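A rough way to visualize this resonance/anti-resonance behaviour is to evaluate the input impedance of the equivalent circuit described above, i.e. a motional R1-L1-C1 branch in parallel with the static capacitance C2. The component values in the sketch below are assumptions chosen only for illustration.

    import cmath, math

    def equivalent_circuit_impedance(f, R1, L1, C1, C2):
        """Input impedance of a series R1-L1-C1 (motional) branch in parallel
        with the static capacitance C2."""
        w = 2 * math.pi * f
        Z_motional = R1 + 1j * w * L1 + 1 / (1j * w * C1)
        Z_static = 1 / (1j * w * C2)
        return Z_motional * Z_static / (Z_motional + Z_static)

    # Assumed example values for a ceramic transducer
    R1, L1, C1, C2 = 50.0, 20e-3, 1.2e-9, 10e-9
    f_series = 1 / (2 * math.pi * math.sqrt(L1 * C1))                      # resonance (minimum |Z|)
    f_parallel = 1 / (2 * math.pi * math.sqrt(L1 * C1 * C2 / (C1 + C2)))   # anti-resonance (maximum |Z|)
    for f in (0.9 * f_series, f_series, f_parallel, 1.1 * f_parallel):
        Z = equivalent_circuit_impedance(f, R1, L1, C1, C2)
        print(f"{f / 1e3:8.2f} kHz  |Z| = {abs(Z):10.1f} ohm")

Sweeping the frequency in this way reproduces the characteristic dip at fn and peak at fm shown in the graph.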

Resonant Devices

Non-resonant devices may be modeled by a capacitor representing the capacitance of the piezoelectric, with an impedance modeling the mechanically vibrating system as a shunt in the circuit. In the non-resonant case this impedance may itself be modeled as a capacitor, which allows the circuit to reduce to a single capacitor replacing the parallel combination.

For resonant devices the impedance becomes a resistance or static capacitance at resonance. This is an undesirable effect. In mechanically driven systems this effect acts as a load on the transducer and decreases the electrical output. In electrically driven systems this effect shunts the driver, requiring a larger input current. The adverse effect of the static capacitance experienced at resonant operation may be counteracted by using a shunt or series inductor that resonates with the static capacitance at the operating frequency.
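A short sketch of the compensation just mentioned: choosing an inductor that resonates with the static capacitance at the operating frequency. The capacitance and frequency values below are assumed examples.

    import math

    def tuning_inductance(C_static, f_operating):
        """Inductance that resonates with the static capacitance at the
        operating frequency: L = 1 / ((2*pi*f)^2 * C)."""
        return 1.0 / ((2 * math.pi * f_operating) ** 2 * C_static)

    C0 = 10e-9       # assumed static capacitance, F
    f_op = 32.5e3    # assumed operating frequency, Hz
    print(f"L = {tuning_inductance(C0, f_op) * 1e3:.2f} mH")   # about 2.4 mH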

Applications

Mechanical Measurement

Because of the dielectric leakage current of piezoelectrics, they are poorly suited for applications where force or pressure changes slowly. They are, however, very well suited for the highly dynamic measurements needed in blast gauges and accelerometers.

Ultrasonic

High-intensity ultrasound applications utilize half-wavelength transducers with resonant frequencies between 18 kHz and 45 kHz. Large blocks of transducer material are needed to generate high intensities, which makes manufacturing difficult and economically impractical. Also, since half-wavelength transducers have the highest stress amplitude in the center, the end sections act as inert masses. The end sections are therefore often replaced with metal plates possessing a much higher mechanical quality factor, giving the composite transducer a higher mechanical quality factor than a single-piece transducer.

The overall electro-acoustic efficiency is:

             Qm0 = unloaded mechanical quality factor
             QE  = electric quality factor
             QL  = quality factor due to the acoustic load alone

The second term on the right hand side is the dielectric loss and the third term is the mechanical loss.

Efficiency is maximized when:

then:

The maximum ultrasonic efficiency is described by:

Applications of ultrasonic transducers include:

 Welding of plastics
 Atomization of liquids
 Ultrasonic drilling
 Ultrasonic cleaning
 Ultrasonic foils in the paper machine wet end for more uniform fibre distribution
 Ultrasound
 Non-destructive testing
 etc.

More Information and Source of Information

MorganElectroCeramics


Ultra Technology