So, where do I put this thing?

So you’ve got yourself one of those FFT measurement systems and a shiny new microphone and you want to know where to put the mic!

This is one of the questions I hear most often about measurement systems, and the stock answer, "it depends," isn't very helpful. Let's see if we can put people on the right track toward obtaining useful, meaningful data. (How you interpret the data we'll leave for another time.)

The "it depends" answer really hinges on what you are trying to measure. Are we measuring the performance of the loudspeaker(s), or are we measuring how our loudspeaker(s) perform in a given space? It can be difficult to separate the two. Speaker designers/manufacturers are usually interested in the former and want the room to "go away," whereas end-users have to make the system "fit" or "play nice" in the performance space.

Let's assume for now you know how to set up your measurement system and get "wiggly lines" to appear; now we need to decide where to put the microphone(s). We must accept that every position is unique, all are important, and the adjustments we make for one position will affect some or all of the other locations. We will never have the time to measure them all, and even if we could, what could we do about it? We have to prioritize, practicing "audio triage" to achieve minimum variance throughout the coverage zone while using our time efficiently.

Perfect Position?

Measurement microphone placement plays a critical part in the alignment process. While it is important to maintain good mic-placement technique, do not give too much thought to finding the "perfect" position. Our goal is to gather global information so the system can be aligned for maximum consistency. Each microphone provides a local view of the system's response, with the global characteristics more or less hidden. The key to mic placement is to use positions that provide a global representation with a minimum of unique local conditions.

 

Primary Position

The primary position for any system or subsystem is on-axis (±10%), about halfway through the coverage zone: not so far back that room reflections intrude, and not so close that you run into cabinet-reflection issues. This position represents the "average" seat in the speaker system's coverage area. It is the reference point for speaker positioning and takes first priority for delay, level and EQ settings.

 

Horizontal

The coverage pattern can be verified by comparing side coverage areas to this position. When you have reached the –6 dB point you are at the edge of the pattern. If the pattern edges do not occur as intended, it is time to reposition the speaker.

 

Vertical

This position should be average in level for the seating area. As you move closer, expect to see a rise in level; as you move farther away, expect a loss. This is a complex function of axial attenuation and propagation loss, which will hopefully combine to create a minimal difference in level over depth. (See Fig. 1, and the short sketch below.)

 

Fig. 1
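As a quick, hedged sketch of the propagation-loss half of that function (inverse-square law only; the axial-attenuation term depends on the particular speaker), with distances that are purely illustrative:

```python
# A minimal sketch of inverse-square propagation loss over seating depth,
# assuming a free-field point source; the distances are illustrative.
import math

def loss_db(r_near_m, r_far_m):
    """Level drop moving from the near distance to the far distance."""
    return 20 * math.log10(r_far_m / r_near_m)

print(loss_db(10, 20))  # doubling distance costs ~6.0 dB
print(loss_db(10, 30))  # front-to-back of a 10-30 m zone: ~9.5 dB
```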

 

Guidelines for Measurement Mic Placement:

  • Avoid positions with obvious unique local conditions, e.g., a pillar a few feet away.
  • Avoid placing mics in aisles. They tend to have strong reflections from the open floor in front of them and are not representative of the area at large.
  • Always point the mic toward the speaker source. Even “omnidirectional” microphones become directional above 5 kHz at angles of 90° or greater.
  • Beware of exact center points in rooms since they will have unique reflection patterns.
  • If at all possible, decline the offer to run your mic through the house patch bay to save running cable. It almost never works.
  • Do not place the mic at sitting head height. This position will have a strong local reflection off the next row, which is not representative of the response with audience members seated. Standing head height usually works better. When working in extreme proximity, however, such as with frontfills, the sitting height may work better because the standing height will be out of the vertical pattern. (See Fig. 2)

Fig. 2

 

Level Setting

Once the speaker position is determined, the primary mic position serves as the relative level marker for this subsystem with respect to the others. If all subsystems are set to create this same relative level, then maximum consistency will be achieved throughout the hall. This is complicated, however, by the interaction of the subsystems, which tends to cause some summation.

 

Delay Setting

The primary position offers a good representation for delay alignment. For most situations the best result will be achieved when the primary mic position is used as the synchronous point of alignment.

 

EQ Setting

The primary position provides a first look. However, it is highly recommended that additional positions be examined and factored into the final decision on the EQ setting.

 

The Mix Position

You might notice that no mention has been made of placing a mic at the mix position. This is because the mix position is, after all, just another seat. It is best if the mix position is a primary location. However, if it is not, it will not help to pretend otherwise. In fact, aligning a system to a poor position will create a worse effect for audience and mixer alike. There are many reasons why a mix position may not be suitable for alignment (or mixing for that matter), such as being off-axis, at the back wall, under the balcony, or all of the above. While it is true that the mixer’s reference point is critical, it is futile and destructive to align the system for a bad mix position.

 

Secondary Mic Positions

No matter how well placed the primary mic is, it is still only a single point in the room. I cannot say enough about the benefits of analyzing additional mic positions, or about the potential dangers of basing your entire alignment on a single position. Every position has unique local response characteristics in addition to more global ones. Extreme peaks and dips can be found at one position and disappear a few seats later. Moving the mic or using multiple mics is a form of insurance against making decisions that will not create global solutions. I can attest to many instances where a problem appeared to be solved at one position, only for it to be revealed later that the "solution" had merely repositioned the problem a few seats away.

 

Secondary mic positions provide second opinions on the data. The global aspects of the system become readily recognizable when multiple placements are compared. These are the major tendencies of the system, the ones that will be the keys to getting the system under control. Secondary positions are found within the coverage area of the speaker but away from the primary position.

 

Secondary Mics

• Can be placed over a wide range within the speaker’s intended coverage area.

• Provide additional local information to help ascertain the global parameters of the speaker system response.

 

The unfortunate side effect of taking secondary measurements is the appearance of blatantly contradictory data. The simple technique of overlaying the inverted EQ curve on the room-plus-speaker response becomes complicated in the face of these discrepancies. Typically, the equalizer settings are appropriate in some frequency ranges and contradictory in others, indicating that the corrections for one measurement position hinder another.

 

In most cases the new positions will show normal variations in frequency response due to the interaction of the speakers and the room; they may also turn up unexpected results indicating coverage gaps or excess overlap. The secondary data can be compared to the primary. Major trends will emerge where the responses match; decisions must be made about those areas where they differ.

 

Averaging

At first glance it might appear that taking two different samples and averaging them would produce a suitable "average" response. Unfortunately, it doesn't work that way (unless you use "coherence averaging"). A 20 dB peak and a 20 dB dip of one-tenth-octave bandwidth average out to 0 dB, but we hear the 20 dB peak as a massive coloration, while the 20 dB dip is only marginally perceived. If you leave the 20 dB peak in the system you will soon be looking for employment.
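As a minimal numeric sketch of that trap (the peak/dip values are the ones above; Python is used purely for illustration):

```python
# Why naive averaging misleads, using the one-tenth-octave example above.
import math

peak_db, dip_db = +20.0, -20.0   # narrow peak and narrow dip, two positions

# dB average: the audible 20 dB peak vanishes entirely from the "average."
print((peak_db + dip_db) / 2)                               # -> 0.0 dB

# Averaging in power terms is no safer; it misstates both positions.
p = (10 ** (peak_db / 10) + 10 ** (dip_db / 10)) / 2
print(10 * math.log10(p))                                   # -> ~ +17 dB

# A coherence-averaging scheme would instead discard the low-coherence
# (likely reflection-dominated) dip and keep only the trustworthy data.
```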

 

In some cases a similar peak in the response may appear in both positions but differ in amplitude or bandwidth. In such cases, an average between them is applicable.

 

Tertiary Microphone Positions

Tertiary positions are a third class of placements used to verify various aspects of the speaker, such as proper wiring, gain structure and position. The data from these positions is not typically used to make level, delay, or equalization decisions.

 

Tertiary Mic Position Sample Applications

• Coverage angle verification: The mic is placed at the expected axial edge.

• Seam analysis: The mic is placed at the transition zone between systems to verify the coverage has no gaps.

• Sound leakage onto the stage: The mic is placed on stage after the system is aligned to observe the nature of the leakage.

• Analysis of a particular seat your client is very concerned about, such as for a critic.

 

Common Misconceptions about Multiple Microphones

When multiple measurement microphones are mentioned, a few misconceptions tend to arise. The most common is the idea that we should sum the microphones electrically to produce an average response. This has no validity whatsoever because of the comb filtering that results from summing signals with different propagation delays. Such an idea could only work in a world without phase. Another misconception is that we should multiplex the mics, switching from one to another in rapid succession. This is a vestige of real-time analysis and, again, totally neglects phase.

 

A third misconception is the idea of “spatial averaging,” where the mic is moved around while measurements are in progress. This may be useful for noise analysis but not for the alignment of a speaker system.

 

What does work with multiple microphones is "coherence averaging," where only the good (high-coherence) data is summed and poor or bad data is rejected.

 

A final point regarding multiple mic positions is that they are not a random sampling, as might be performed to check the chlorine content of a swimming pool. Each mic position is carefully chosen to give data about a particular speaker system, so decisions can be made about that speaker's position, level, delay and equalization. A random sampling may provide interesting data, but this is an alignment process, not a survey.

 

Ground Plane Measurements

A final tip: placing a microphone on a stand introduces reflections from a boundary – in this case the floor. This will introduce a comb filter as the late-arriving reflection sums with the direct sound at the microphone. To get a clearer look at the response, you can place the mic on the floor. (See Fig. 3)

Fig. 3
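For the curious, here is a small, hedged sketch of why the stand is a problem, treating the floor bounce as an image source. The geometry numbers are my own illustrative assumptions, not from the article:

```python
# A minimal sketch of the floor-bounce comb filter behind the ground-plane
# tip: notch frequencies from the path-length difference between the
# direct sound and the floor reflection. Geometry values are assumptions.
import math

def floor_notches(src_h_m, mic_h_m, dist_m, c=344.0, n=4):
    """First n comb-filter notch frequencies for a mic on a stand."""
    direct = math.hypot(dist_m, src_h_m - mic_h_m)
    bounce = math.hypot(dist_m, src_h_m + mic_h_m)   # image-source path
    dt = (bounce - direct) / c                       # arrival-time gap
    return [(2 * k + 1) / (2 * dt) for k in range(n)]

# Speaker 4 m up, mic 1.5 m up on a stand, 12 m out:
print([f"{f:.0f} Hz" for f in floor_notches(4.0, 1.5, 12.0)])
# Lowering the mic to the floor drives dt toward zero, pushing the first
# notch above the band of interest - the ground-plane trick.
```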

Another solution is to place a surface on the seats. (See Fig. 4)

 

Fig. 4

 

 


End-Fire vs. Cardioid vs. Arc-Delay

A look at the pros and cons of each system

Now that we have a firm grasp of each of these building blocks for directional arrays, let’s look at the issues they have.

Cardioid

Imagine a cardioid setup configured as in Fig. 1, with the rear source reversed in polarity and delayed by an amount equal to the propagation time over the center-to-center distance D.

Fig. 1: Cardioid Configuration

 

Let's place observers both in front of and behind the array and see what's happening at those locations. We'll look at the front position first. The first arrival at this observer will be from the front source, as it is closest and has no delay. We'll represent it as a sine wave, but any periodic signal will do. (See Fig. 2)

Fig. 2: First Source Arrival

At the forward observer point the wave front of the second source now arrives with reverse polarity. (See Fig. 3)

Fig. 3: Second Source Reverse Polarity

This second wave front arrives late by ¼ wavelength (at the design frequency), corresponding to the separation distance D. (See Fig. 4)

Fig. 4: Shift Due to Source Separation.

And finally when we add the electronic delay T, it shifts another ¼ wavelength back. (See Fig. 5)

Fig. 5: Wave Front Shift Due to T

So it has now been shifted back a ½ wavelength (D+T) and inverted, which gives us +6 dB of summation except for the first half cycle. (See Fig. 6)

Fig. 6: Summation

If we separate the sources by 1.0 m (D) and add a delay of 2.907 ms (T), the total offset equals 5.814 ms (D+T). This means that for the first 5.814 ms the signal is at 0 dB (effectively, only the first source is contributing here), and then the second source's summation after this offset (D+T) causes the signal to rise to +6 dB.

Does this affect the transient envelope of the signal? I plan on trying this setup and measuring the impulse response to see what is happening to it. (I suspect that at 80 Hz and below it won't make an audible difference.)
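In that spirit, here is a minimal time-domain sketch of the forward summation, assuming ideal omni point sources in free field with c = 344 m/s; the 86 Hz test tone (where D = λ/4) and all names are my own illustrative assumptions:

```python
# Forward-observer cardioid summation: direct sound, then an inverted
# copy late by the acoustic gap (D) plus the electronic delay (T).
import numpy as np

c = 344.0          # speed of sound, m/s (assumed)
D = 1.0            # source separation, m
T = D / c          # electronic delay on the rear source (= 2.907 ms)
fs = 96000         # sample rate, Hz
f = 86.0           # test frequency where D = lambda/4

t = np.arange(0, 0.1, 1 / fs)
front = np.sin(2 * np.pi * f * t)                 # first arrival
offset = int(round((D / c + T) * fs))             # D+T in samples
rear = np.zeros_like(front)
rear[offset:] = -np.sin(2 * np.pi * f * t[:-offset])  # inverted, late copy

total = front + rear
# After D+T (half a period at 86 Hz) the inverted, half-cycle-shifted wave
# reinforces the direct sound: ~2x amplitude (+6 dB) vs. the front alone.
print("peak before D+T:", np.max(np.abs(total[:offset])))     # ~1.0 (0 dB)
print("peak after  D+T:", np.max(np.abs(total[2 * offset:]))) # ~2.0 (+6 dB)
```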

Let us now have a quick look at the rear observation point in Fig. 7. We see the first arrival from the rear source; the wave is reversed in polarity and travelling in the direction of the arrow.

Fig. 7: Rear Source Wave Front (reverse polarity)

Now the second wave front arrives from the front source, but it is delayed by the distance D and is in normal polarity. (See Fig. 8)

Fig. 8: Front Source Delayed by Separation (D)

The first wave (rear source) is now delayed by time (T) (see Fig. 9), which produces cancellation (see Fig. 10).

Fig. 9: Rear Source Delayed by Time Delay (T)

Fig. 10: Cancellation

Cardioid arrays have essentially perfect cancellation behind (within reason) but the possibility of transient smear in front. They are also very sensitive to boundary interference. See Fig. 11a, 11b, and 11c: here a reflective surface is within two meters of the array, and you can see how the directivity has been altered.

Fig. 11: Boundary Interference (a) 31Hz (b) 63Hz (c) 125Hz

End-Fire

The end-fire array works in reverse of the cardioid array. In the forward direction the delay on the front source (T) matches the propagation time over the source separation (D), so the arrivals from the two sources are in time (in phase) and we get summation (+6 dB). In the rearward direction the first half wavelength is not cancelled, hence there is some leakage to the rear, but the array still achieves good rejection. The result is better transient response at the expense of rear leakage. The sketch below puts rough numbers to the front/rear behavior of both two-element configurations.
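A hedged frequency-domain sketch, assuming ideal omni sources in free field with c = 344 m/s; the function and layout are my own, and a real multi-element array behaves differently at the band extremes:

```python
# Two-element cardioid vs. end-fire, on-axis levels front and rear.
import numpy as np

c, D = 344.0, 1.0
T = D / c  # electronic delay equal to the propagation time over D (2.9 ms)

def level_db(f, delays, pols, paths):
    """Sum two ideal omni sources at one observer, in dB re one source."""
    w = 2 * np.pi * f
    total = sum(p * np.exp(-1j * w * (d + L / c))
                for p, d, L in zip(pols, delays, paths))
    return 20 * np.log10(abs(total) + 1e-12)

for f in (31.0, 63.0, 86.0, 125.0):
    # Cardioid: rear source inverted and delayed by T.
    card_f = level_db(f, [0, T], [1, -1], [0, D])  # front: rear path +D
    card_r = level_db(f, [0, T], [1, -1], [D, 0])  # rear: arrivals coincide
    # End-fire: front source delayed by T, both in polarity.
    end_f = level_db(f, [T, 0], [1, 1], [0, D])    # in time: +6 dB everywhere
    end_r = level_db(f, [T, 0], [1, 1], [D, 0])    # deep null only near 86 Hz
    print(f"{f:5.1f} Hz | cardioid {card_f:+6.1f}/{card_r:+7.1f} dB"
          f" | end-fire {end_f:+6.1f}/{end_r:+7.1f} dB (front/rear)")
# The huge negative cardioid rear numbers are the "perfect cancellation"
# described above (limited only by the tiny epsilon in the log).
```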

Again, these arrays are sensitive to boundaries (see Fig. 12a, 12b, and 12c). Once more there is a reflective surface within 2 m of the array, and the directivity has been altered.

Fig. 12: Boundary Interference (a) 31Hz (b) 63Hz (c) 125Hz

Arc-Delay Processing: Extreme Near-Field Issue
If an observer is closer to an arc-delay array than half its effective radius, he enters a region of cancellation.

If we take path length a as our first arrival and then take the rest of the path-length differences, b−a, c−a, d−a, and e−a, we can calculate these path-length deltas in terms of time (see Diagram 1 and Table 1).

Table 1: Path-Length Deltas in ms

We can now plot the magnitude and phase of these delay interactions (see Figures 13-16).

We can then perform an acoustic summation of these time offsets and plot the response at this point (see Fig. 17).

Fig. 17: Summed Response on 1/3 Octave Centers

As we can see, there is a general broadband dip centered on 100 Hz; this has both good and bad attributes. Those people in the front row who are looking for "slam" or impact aren't going to feel it; the impact only becomes apparent beyond the half-radius point from the array. The good part is that you don't get extreme levels close to the system, and the same happens behind, so the "front-vocal line" is spared from excessive levels.

Conclusion:
Knowing the pros and cons of these arrays allows us to pick the right tool for the job, and also to combine these types to our own devious ends. 🙂

Next: more complex arrays.


Broadside Arc-Delay Processing

Part 3 of “building-blocks” is the broadside Arc-Delay array

Here in part three we're going to look at another fundamental building block, the broadside array, and the processing that can be applied to it.

First we should look at the reasons for deploying this kind of array by examining the traditional L-R split array, as shown in Fig. 1, and the problems associated with it: the "power alley" in the center, and interference resulting in an uneven coverage pattern. (Hard to believe this was the industry standard until recently.)

Fig. 1: Standard L-R Split-Array Configuration

Anyone who has ever put one of these up shouldn't be surprised by the SPL maps presented in Fig. 2a, 2b, & 2c.

Fig. 2: Split L-R SPL Maps (a) 31Hz (b) 63Hz (c) 125Hz

But what may come as a surprise is that, if we consider the 3D aspect, we are pushing energy upwards towards the ceiling and backwards towards the stage (see Fig. 3) – thanks to Dave Rat for pointing this out. 🙂

Fig. 3: 3-D Radiation Pattern (Dave Rat)

One suggestion is to angle the arrays out approximately 30-45 degrees to spread the energy away from the middle and maybe pick up some extra side coverage. (See Fig. 4)

Fig. 4: "Splayed" L-R Split Array Configuration

 

Observe what happens in Fig. 5a, 5b, & 5c, especially on the stage, with this layout – yikes!

Fig. 5: Splayed Split L-R Array (a) 31Hz (b) 63Hz (c) 125Hz

So it should be obvious by now that when two sources are separated by multiple wavelengths, you will get interference from their summation (equal signals, equal levels, equal delays, etc.).

To eliminate this, what happens if we position a mono center cluster, in this case in front of the stage, as in Fig. 6?

Broadside Array

Here in Fig. 6 we see the typical setup for this type of array – it makes a great stage extension.


Fig. 6: Mono Broadside Array

In this example, we are going to place ten enclosures on their ends (portrait mode) in a row in front of the stage; each enclosure's dimensions are 112 cm (44 in) x 60 cm (24 in) x 102 cm (40 in). These dimensions are important for calculating the center-to-center spacing of the array (more on this later).

Here in Fig. 7, we see the layout, (stage omitted for clarity).

Fig. 7: Broadside Array Configuration

Here in Fig. 8a, 8b, & 8c, we can see the progressive narrowing of the horizontal beamwidth as we rise in frequency, which is typical behavior for this layout. If the venue were long and narrow, this directivity would be of some use, but what if we need to broaden the beamwidth to increase coverage?

Fig. 8: Broadside Array (no processing) (a) 31Hz (b) 63Hz (c) 125Hz

In an attempt to broaden coverage we are going to physically shape the array. With the help of some sturdy stagehands, we curve the array as shown in Fig. 9 and then look at the results in Fig. 10a, 10b, & 10c.

Fig. 9: Physically Curved Array

As we can see, this has the unfortunate effect of focusing the energy onto the stage; and in the upper frequencies it has softened the middle of the coverage. (This is a fine example of Mister Murphy's first law of "unintended consequences.") 🙁

Fig. 10: Physically Curved Array (a) 31Hz (b) 63Hz (c) 125Hz

Let's restore the array to its earlier condition, as in Fig. 7 (flat wall), and apply incremental delays starting from the center and heading outwards, as in Diagram 1.

Diagram 1: Delay Tap Layout

The cabinets are wired in pairs from the center out, and each pair is connected to a different delay tap, T-0 to T-5. We're going to use an Excel spreadsheet to calculate the delay times that place these cabinets on a virtual parabola producing 90 degrees of beamwidth. Given the physical spacing and dimensions of the enclosures, the values in Table 1 were calculated and applied.

Table 1
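Table 1's actual values did not survive extraction, so here is a hedged sketch of the same idea. Note the article fits a parabola in Excel; this sketch uses a circular arc instead, which produces a similar delay taper and has a closed form. The function name, c = 344 m/s, and the 0.60 m center spacing (the portrait-mode cabinet width from the text) are my assumptions:

```python
# Delays that electronically "bend" a straight broadside row onto a
# convex virtual arc giving the desired coverage angle (center tap = 0).
import math

def arc_delays_ms(n_cabs, spacing_m, coverage_deg, c=344.0):
    """Per-cabinet delay (ms) emulating a convex virtual arc."""
    half_angle = math.radians(coverage_deg) / 2.0
    half_width = (n_cabs - 1) * spacing_m / 2.0
    R = half_width / math.sin(half_angle)        # virtual arc radius
    delays = []
    for i in range(n_cabs):
        x = i * spacing_m - half_width           # lateral offset from center
        setback = R - math.sqrt(R * R - x * x)   # sagitta: how far "back"
        delays.append(1000.0 * setback / c)      # the cabinet virtually sits
    return delays

for deg in (90, 120):  # the two coverage targets used in this article
    taps = arc_delays_ms(10, 0.60, deg)
    print(deg, "deg:", [round(t, 2) for t in taps])
```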

Now let’s look at these SPL maps. See Fig. 11a, 11b, & 11c.

 

Fig. 11: Arc-Delay Processed (90 degrees) (a) 31Hz (b) 63Hz (c) 125Hz

We can see two things: 1) the coverage has broadened out in front, and 2) it has broadened behind the array as well.

Using electronic delay has bent the array in both directions.

As another example, we'll apply 120 degrees of coverage using the values in Table 2; the polars are shown in Fig. 12a, 12b, & 12c.


Table 2

Fig. 12: Arc-Delay Processed (120 degrees) (a) 31Hz (b) 63 Hz (c) 125 Hz

One caution: if the array gets too long it can suffer from self-interference and fall apart. The same can happen if the elements are spaced too far apart.

Now that we have three basic building blocks, we can start to combine them for more complex arrays and directivity control.

Next up: end-fire vs. cardioid.


End-Fire Arrays

How to build end-fire arrays, a variation on gradient arrays

In our last article we covered the basics of gradient arrays; now we'll build on that knowledge to look at end-fire arrays.

Here, in Fig. 1, we see two sources whose acoustic centers are separated by a distance D, with a delay applied to the forward cabinet; this delay is equal to the propagation time of sound over the distance D. The arrow shows the direction of propagation. The idea is that as the wave front from the rear source arrives at the front source, there is addition. In the opposite direction, the delayed signal causes cancellation.

Fig. 1: Source Configuration

If we set the distance D equal to, say, 1.0 meter (40 in) grill-to-grill, then the delay time will equal 2.9 ms, and we will obtain the directivity patterns shown in Fig. 2a, 2b, and 2c.

Fig. 2: End-Fire Polars (a) 31Hz, (b) 86Hz, (c) 125Hz

The one-meter spacing clearly shows the null point at 86 Hz, where the spacing equals a quarter wavelength (344 m/s ÷ 1.0 m = 344 Hz; 344 Hz ÷ 4 = 86 Hz). Changing this spacing/delay time allows us to control the directivity of this array.
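As a minimal sketch (assuming c = 344 m/s; the function names are mine), both numbers above fall straight out of the spacing:

```python
# Per-gap delay for an end-fire pair, and the quarter-wave null frequency.
c = 344.0  # speed of sound, m/s (assumed)

def endfire_delay_ms(spacing_m, c=c):
    return 1000.0 * spacing_m / c      # delay = propagation time over D

def rear_null_hz(spacing_m, c=c):
    return c / (4.0 * spacing_m)       # D = lambda/4  ->  f = c / (4 D)

print(endfire_delay_ms(1.0))  # -> ~2.9 ms
print(rear_null_hz(1.0))      # -> 86 Hz
```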

Adding more elements to the array gives us greater control. The next example is a standard end-fire array using four elements (see Fig. 3).

 

Fig. 3: Standard End-Fire Configuration (Mauricio Ramirez)

This configuration is usually set up either as single elements, as in Fig. 4a, or as vertical stacks, as in Fig. 4b.

Fig. 4: Array Setup

In the first example we space them at 1 meter (40 in) grill-to-grill, and the delays are 0 ms, 2.9 ms, 5.8 ms and 8.7 ms from the back (0) to the front (8.7); the responses are shown in Fig. 5a, 5b, and 5c.

We'll call this the standard end-fire array; notice it averages about 90 degrees of spread but throws long distances.

Fig 5: Standard End-Fire Spaced 1m at (a) 31Hz, (b) 63Hz, and (c) 125Hz

The next configuration varies the spacing between elements using "borrowed" antenna theory, with logarithmic spacings of 0.85 m, 1.0 m and 1.4 m and delay times of 0 ms, 2.4 ms, 5.3 ms and 9.4 ms (see Fig. 6, and the sketch after it). We'll call this the log-spaced end-fire array.

 

Fig 6: Log-Spaced End-Fire Configuration (Mitchell Hart)
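Here is the promised sketch: a hedged calculation (assuming c = 344 m/s; names are mine) showing how both the standard and log-spaced delay taps follow from the gap lists, each cabinet being delayed by the propagation time from the rearmost cabinet to it:

```python
# Cumulative end-fire delay taps from the cabinet gap lists in the text.
c = 344.0  # speed of sound, m/s (assumed)

def endfire_taps_ms(gaps_m, c=c):
    """Delay taps, rearmost cabinet first (0 ms)."""
    taps, dist = [0.0], 0.0
    for g in gaps_m:
        dist += g
        taps.append(round(1000.0 * dist / c, 1))
    return taps

print(endfire_taps_ms([1.0, 1.0, 1.0]))   # -> [0.0, 2.9, 5.8, 8.7]
print(endfire_taps_ms([0.85, 1.0, 1.4]))  # -> [0.0, 2.5, 5.4, 9.4]
# (The text quotes 2.4/5.3 ms; those are the same values truncated
# rather than rounded.)
```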

Changing to this spacing and these delay times broadens the front coverage pattern (see Fig. 7a, 7b, and 7c). Notice that the coverage opens up to about 110 degrees, but the wave front has flattened and we've lost a little "throw."

 

Fig 7: Log-Spaced End-Fire Polars (a) 31Hz, (b) 63Hz, (c) 125Hz

The final variation is shown in Fig. 8. Here we have kept the same spacing but varied the delays, using logarithmic spacing of the delay intervals; the new delay times are 0 ms, 1.8 ms, 4.0 ms, and 7.1 ms. We'll call this the log-log-spaced end-fire array.*


Fig 8: Log-Log End-Fire Array Configuration. (Mitchell Hart)

Figs. 9a, 9b, and 9c show the polars, and we can see a broadening of the pattern again at the expense of throw distance; we're out to about 125 degrees now.

Fig 9: Log-Log-Spaced End-Fire Polars (a) 31Hz, (b) 63Hz, (c) 125Hz

Next time we'll discuss broadside arc-delayed arrays.

Assuming omnidirectional sources and anechoic conditions – YMMV – always verify coverage with measurements to avoid Mister Murphy's first law of "unintended consequences."

*If you make them – you get to name them!

 


Subwoofer Directivity

Polar pattern control of multiple sources [3]

Using directional techniques on subwoofers can have beneficial advantages due to room coupling, etc., but a full understanding of how they work, and of the shortcomings of each system, is needed. There is no "magic bullet," but when used correctly these techniques can go a long way toward taming the "last frontier" in audio.

Gradient Arrays

Subwoofer directivity can be achieved using the principles presented by Olson [1] concerning gradient loudspeakers. Gradient loudspeakers utilize the techniques of microphone polar pattern control, but applied in reverse. These methods (with the exception of the zero-order variety) call for two or more spaced sources within the array to achieve the desired directionality. A number of configurations are presented by Olson which provide a useful tool set for low-frequency pattern control. (You can skip over the math stuff; it's just there for the math geeks!)

Zero Order

The zero-order gradient source is the building block for all higher-order gradient sources. It consists of a single source which radiates energy equally in all directions: omnidirectional* (Fig. 1).

Fig. 1: Zero-order gradient source (a) configuration and (b) polar pattern

*NOTE: dimensionless point-source behavior is assumed. In reality, sources should be modeled as CDPS (complex directional point sources), and boundary effects should be allowed for.

First Order

First-order gradient sources combine two zero-order sources, one of reverse polarity (Fig. 2). This configuration's polar pattern is highly dependent on the physical separation of the two sources (Eq. 1).


Fig. 2: First order gradient source configuration (dipole)

Eq. 1.
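Eq. 1 itself did not survive extraction. As a hedged placeholder, the standard textbook result for a pair of opposite-polarity point sources separated by a distance $d$, with wavenumber $k = 2\pi f/c$ and $\theta$ measured from the line joining the sources, is:

$$ \lvert p(\theta)\rvert \;\propto\; \left\lvert \sin\!\left(\frac{k d \cos\theta}{2}\right) \right\rvert $$

which reproduces the dipole at $d = \lambda/4$ and the four-lobed pattern at $d = \lambda$ described below.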

Spacing at a quarter wavelength of the target frequency results in a dipole pattern, but spacing at a full wavelength gives a four-lobed pattern (Fig. 3a, b).


Fig 3: First-order gradient source (dipole) polar pattern with drive-unit spacing at (a) ¼ wavelength and (b) full wavelength

Cardioid

The dipole first-order gradient source can be adjusted to give a cardioid pattern. This involves adding an electronic delay to the second source which directly corresponds to the driver spacing (Fig. 4). Again, the polar pattern is highly sensitive to source separation (Eq. 2): quarter-wavelength spacing gives a cardioid pattern, but full-wavelength spacing gives a dipole pattern rotated 90 degrees off-axis (Fig. 5).

Fig 4: First-order gradient source configuration (cardioid)

Eq 2.
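Eq. 2 was likewise lost in extraction; the corresponding standard result, with the inverted source delayed by $\tau = d/c$ so that $\omega\tau = kd$, is, as a hedged placeholder:

$$ \lvert p(\theta)\rvert \;\propto\; \left\lvert \sin\!\left(\frac{k d\,(1+\cos\theta)}{2}\right) \right\rvert $$

giving the cardioid at $d = \lambda/4$ and the rotated pattern at $d = \lambda$, consistent with Fig. 5.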

Fig 5: First-order gradient source (cardioid) polar pattern with drive-unit spacing at (a) ¼ wavelength and (b) full wavelength.

Given that the wavelengths covered by a typical subwoofer system (20 Hz - 100 Hz) range from roughly 17 m (56 ft) to 3.4 m (11 ft), a ratio of approximately 5:1, we can see that achieving pattern control over the whole pass-band using this approach is impossible. A compromise is to pick a control frequency somewhere in the middle, say 45 Hz, and use quarter-wavelength spacing (approx. 1.91 m / 6.27 ft) and a delay time of 5.55 ms, accepting that the pattern will "fall apart" at the extremes of the pass band.
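A minimal sketch of that compromise arithmetic (assuming c = 344 m/s; the function name is mine):

```python
# Quarter-wave spacing and matching delay for a chosen control frequency.
c = 344.0  # speed of sound, m/s (assumed)

def cardioid_geometry(f_control_hz, c=c):
    spacing = c / f_control_hz / 4.0     # d = lambda / 4
    delay_ms = 1000.0 * spacing / c      # tau = d / c
    return spacing, delay_ms

d, t = cardioid_geometry(45.0)
print(f"spacing {d:.2f} m, delay {t:.2f} ms")
# -> 1.91 m, 5.56 ms (the text rounds the delay to 5.55 ms)
```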

Second Order

Second-order gradient sources are formed by taking two first-order (dipole) sources and placing them together with a physical separation, with an electronic delay on the second first-order source directly corresponding to the separation distance (Fig. 6). The polar pattern is described by Eq. 3 and, as with the other gradient configurations, is largely dependent on source spacing (Fig. 7).

Fig 6: Second-order gradient source configuration.

Eq. 3

Fig 7: Second-order gradient source polar pattern with drive-unit spacing at (a) ¼ wavelength and (b) full wavelength.

Higher-order gradient sources are realized by combining zero-, first- and second-order configurations in a similar manner. As source order increases, the polar pattern becomes increasingly focused. It must be noted, however, that as order increases, source efficiency decreases due to destructive interference between drive units. Modifying a subwoofer's polar pattern not only causes a decrease in efficiency, but can also drastically affect optimal source placement within an acoustic space.

Smaller Spaces

Cardioid sources have been suggested to be relatively position-independent in terms of room-mode coupling in smaller spaces (Ferekidis & Kempe [2]), and they can be fine-tuned to best suit the acoustical space by adjusting their orientation. Cardioid sources have also been shown to be largely unaffected by substantial changes in room absorption, and they are not greatly influenced by room asymmetry thanks to their highly directional radiation pattern. Below the first room mode, however, cardioid sources show no clear advantage over omnidirectional sources and can in fact be a poor choice if system efficiency is important. With this in mind, consider a subwoofer array with a hybrid, frequency-dependent polar pattern: below the lowest room mode the subwoofer operates as an omnidirectional source, while above it the subwoofer operates as a cardioid source. This approach ensures system efficiency at very low frequencies while simultaneously reducing sensitivity to source position.

Hybrid Array

There are two ways of achieving this. One approach (Fig. 8a) applies an all-pass filter (APF) to one drive unit to modify its phase, achieving the desired polar pattern. The alternative approach* (Fig. 8b) places a high-pass filter (HPF) before one drive unit so that only one of the drivers radiates energy below a defined frequency. The first method is preferable, since it has the advantage of two drivers radiating below the lowest mode, resulting in better pressurization of the room.

Fig 8: Hybrid loudspeaker configurations.

*A variation of this second hybrid approach is to utilize drivers of different sizes/pass-bands to assemble the array.

Next time we'll look at end-fire arrays and the pros and cons of end-fire versus cardioid.

REFERENCES:
[1] Olson, H.F. "Gradient Loudspeakers." JAES, Vol. 21, No. 2, pp. 86-93, March 1973.
[2] Ferekidis, L. and U. Kempe. "Beneficial Coupling of Cardioid Sources to Small Rooms." 116th AES Convention, paper 6110, May 2004.
[3] Taken from: Adam J. Hill, Doctoral Thesis, University of Essex, Jan 2012, pp. 65-68.


Imaging

In the last article we talked about setting the timing for delay or front fill systems. In this article we’ll talk about imaging.

I'm sure we've all heard of the "precedence effect" or "Haas effect," perhaps one of the most misunderstood principles in audio. It goes something like this: "I timed the arrival between the mains and delays - and then added 10 ms for the precedence/Haas effect."

If our goal is high speech intelligibility, this "extra" delay is exactly the wrong thing to do. However, if we are making artistic judgments with regard to music or realism, then there might be a valid case for additional delay, but not just an arbitrary 10 ms.

Quick History Lesson…

The "precedence effect" was described and named in 1949 by Wallach et al. They showed that when two identical sounds are presented in close succession, they will be heard as a single fused sound. In their experiments, fusion occurred when the lag between the two sounds was in the range of 1 to 5 ms for clicks, and up to 40 ms for more complex sounds such as speech or piano music. When the lag was longer, the second sound was heard as an echo.

Additionally, Wallach et al. demonstrated that when successive sounds coming from sources at different locations were heard as fused, the apparent location of the perceived sound was dominated by the location of the sound that reached the ears first (i.e., the first-arriving wave front). The second-arriving sound had only a very small (albeit measurable) effect on the perceived location of the fused sound. They designated this phenomenon the "precedence effect," and noted that it explains why sound localization is possible in the typical situation where sounds reverberate from walls, furniture and the like, providing multiple, successive stimuli. They also noted that the "precedence effect" is an important factor in the perception of stereophonic sound.

Wallach et al. did not systematically vary the intensities of the two sounds, although they cited research by Langmuir et al. which suggested that if the second-arriving sound is at least 15 dB louder than the first, the "precedence effect" breaks down.

The “Haas effect” derives from a 1951 paper by Helmut Haas.

In 1951, Haas examined how the perception of speech is affected by the presence of a single, coherent sound reflection. To create anechoic conditions, the experiment was carried out on the rooftop of a freestanding building. Another test was carried out in a room with a reverberation time of 1.6 s. The test signal (recorded speech) was emitted from two similar loudspeakers located 45° to the left and to the right, at a distance of 3 m from the listener.

Haas found that humans localize sound sources in the direction of the first-arriving sound despite the presence of a single reflection from a different direction; a single auditory event is perceived. A reflection arriving later than 1 ms after the direct sound increases the perceived level and spaciousness (more precisely, the perceived width of the sound source). A single reflection arriving within 5 to 30 ms can be up to 10 dB louder than the direct sound without being perceived as a secondary auditory event (echo); this time span varies with the reflection level. If the direct sound comes from the direction the listener is facing, the reflection's direction has no significant effect on the results. A reflection with attenuated higher frequencies expands the time span over which echo suppression is active. Increased room reverberation time also expands the time span of echo suppression.

The "precedence effect" appears if the subsequent wave fronts arrive between 2 ms and about 50 ms later than the first wave front. This range is signal-dependent. For speech, the "precedence effect" disappears for delays above 50 ms, but for music it can persist for delays of around 100 ms.

Now, the important thing to take from this is that these results were obtained in the horizontal plane, where our hearing has an average resolution of about one degree. In the vertical domain our hearing has a resolution of about four to five degrees. (This is one of the reasons vertical line sources are successful.)

If we place a signal into two sources separated horizontally, we get a "phantom" or ghost center image. This rapidly collapses to a mono source as we shift position or change levels or delay times. The same can happen between vertically separated sources, but the effect is much coarser.

Time offsets between sources can be used to affect imaging, but at the expense of ripple (comb filtering); trading smoothness for imaging is the tradeoff of Art vs. Science.

With wireless tablet control of these sub-systems it's relatively easy to change these delay offsets and hear what's happening in real time. We can "pull" or "push" the image around, but remember the tradeoffs.


Fig. 1

Here in Fig. 1, you can see how relative level and delay offsets can be used to shift the image around for music. Just try to stay out of the yellow area. "One man's comb filter is another man's stereo enhancement."

In the next series of articles, I'm going to touch on "Subs - The Final Frontier."


Setting Audio System Delays

Many times we use multiple systems to meet our coverage needs, but getting these systems to "play well together" can sometimes be problematic.

A typical strategy when aligning systems is to work from loudest to quietest. For now, let's assume the speaker manufacturer has done his homework at the component level, so it's these sub-systems we are integrating: mains, front fills, delays, out-fills, subwoofers, etc.

Although we say to work from loudest to quietest, remember there is no negative delay (yet), so we always have to delay back to the furthest source.

The concept is to get the sound from these sources to arrive at some point at the same time, but where is this point?

As we say in the measurement game, you have to pick a point to make (place) your stand. Typically, halfway through the intended coverage zone is the primary measurement point, but when integrating sub-systems we really need to look at the "overlap" zones where these sub-systems interact.

If we have two sources of the same signal with a time offset between them, the comb filter will be at its worst when the amplitudes of the signals are equal (see Fig. 1). Typically we will see "peaks" of +6 dB and cancellations or "nulls" of up to -60 dB, and the first null can be over an octave wide. (Yikes!)


Fig. 1 comb filter

Reducing the amplitude of one of these sources reduces the depth of the comb filtering, so we can use this effect to our advantage. (See Fig. 2, and the sketch below.)


Fig. 2 ripple
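Here is the promised sketch of those ripple numbers, assuming nothing more than two free-field copies of the same signal (the function name is mine); it shows how the level offset between the copies sets the worst-case peak and null:

```python
# Worst-case comb-filter peak and null vs. level offset between two
# copies of the same signal (pure free-field math, names are mine).
import math

def ripple_db(level_offset_db):
    """Max peak and deepest null (dB re the louder source)."""
    a = 10 ** (-abs(level_offset_db) / 20.0)   # linear gain of quieter copy
    peak = 20 * math.log10(1 + a)              # arrivals in phase
    null = 20 * math.log10(max(1 - a, 1e-6))   # arrivals anti-phase
    return peak, null

for off in (0, 3, 6, 10, 20):
    p, n = ripple_db(off)
    print(f"{off:2d} dB offset: peak {p:+5.1f} dB, null {n:7.1f} dB")
# At 0 dB offset the null is bottomless in theory; the -60 dB quoted in
# the text is a practical measurement floor. Every dB of level offset
# shallows the null dramatically.
```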

For example, let's say we have a main system (probably flown) and a fill enclosure (ground stacked). The fill enclosure's function in this case is to act as a nearfield system covering the region close to, or even underneath, the flown array. This nearfield coverage zone is, let's say, just the first 20 feet or so; further back than that, we are in the main coverage of the flown system.

As we approach the system we walk out of the main flown coverage and into the nearfield coverage, and somewhere in the overlap region between them there will be a point where the two arrays are at equal amplitude. This is the point where any time offset between the sources will produce the maximum comb filtering. If we set the timing at this point, not only do we fix it here, but the relative amplitude offsets at varying distances from the sources will also reduce the ripple elsewhere.

The take-away is that it's important to set the level offsets between the systems first, and that if you later change the level of either system you will have to re-time, because the equal-amplitude position will have moved.
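To make that moving crossover point concrete, here is a hedged sketch that walks a listener line and finds the equal-amplitude point under a bare inverse-square model; the geometry, reference levels, and function names are all illustrative assumptions:

```python
# Find where a flown main and a ground fill match in level along a walk.
import math

def level_at(x, src_xz, ref_db):
    """Level at listener position x; source height folded into distance."""
    r = math.hypot(x - src_xz[0], src_xz[1])
    return ref_db - 20 * math.log10(max(r, 0.1))

def equal_level_x(main, main_db, fill, fill_db, walk_m, step=0.05):
    """x along the walk where the two systems are closest in level."""
    xs = [i * step for i in range(int(walk_m / step) + 1)]
    return min(xs, key=lambda x: abs(level_at(x, main, main_db)
                                     - level_at(x, fill, fill_db)))

# Flown main 10 m up over x=0 (louder), ground fill at x=2 m, 1.5 m high:
print(equal_level_x((0.0, 10.0), 110.0, (2.0, 1.5), 100.0, 30.0))  # ~5.2 m
# Drop the fill 3 dB and the crossover point moves closer to the fill,
# which is exactly why a level change forces a re-time:
print(equal_level_x((0.0, 10.0), 110.0, (2.0, 1.5), 97.0, 30.0))   # ~3.9 m
```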

Next time we'll look at creating "imaging" from these systems, and at that bloke Haas and the trouble he's caused.

If you have any questions or suggestions for topics, contact me at Ferrit@osacorp.com

 

About Martyn "Ferrit" Rowe

Industry veteran and OSA's Director of Engineering Services Martyn "Ferrit" Rowe brings nearly three decades of real-world experience in live event technical services. Ferrit most recently came from Martin-Audio, where he was the technical training manager for MLA, and uses his vast knowledge and expertise of the multi-cellular technology to support client projects as well as to support and train engineers and technicians.

Ferrit began his career running cables on Thin Lizzy's "Live and Dangerous" tour, then took on the roles of running monitors, front of house, and system technician for some of the most popular acts in music, such as Judas Priest, Ozzy Osbourne, Black Sabbath, The Police, KISS, The Who, Elton John, Poison, Bon Jovi and Van Halen.

Ferrit's training career began in 2000, when the first of the V-DOSC line arrays emerged and he became an instructor in line array theory. He continued on that path with various systems over the years before joining the Martin-Audio MLA division, and then brought his knowledge and expertise to OSA International, Inc.