 Research
 Open Access
An active noise control algorithm with gain and power constraints on the adaptive filter
EURASIP Journal on Advances in Signal Processing volume 2013, Article number: 17 (2013)
Abstract
This article develops a new adaptive filter algorithm intended for use in active noise control systems where it is required to place gain or power constraints on the filter output to prevent overdriving the transducer, or to maintain a specified system power budget. When the frequency-domain version of the least-mean-square algorithm is used for the adaptive filter, this limiting can be done directly in the frequency domain, allowing the adaptive filter response to be reduced in frequency regions of constraint violation, with minimal effect at other frequencies. We present the development of a new adaptive filter algorithm that uses a penalty function formulation to place multiple constraints on the filter directly in the frequency domain. The new algorithm outperforms existing ones in terms of convergence rate and frequency-selective limiting.
1. Introduction
Active noise control (ANC) systems can be used to remove interference by generating an anti-noise output that destructively cancels the interference [1]. In some applications, it is required to limit the maximum output level to prevent overdriving the transducer, or to maintain a specified system power budget. In a frequency-domain implementation of the least-mean-square (LMS) algorithm, the limiting constraints can be placed directly in the frequency domain, allowing the adaptive filter response to be reduced in the frequency regions of constraint violation, with minimal effect at other frequencies [2]. Constraints can be placed on either the filter gain or the filter output power, as appropriate for the application.
A general block diagram of an ANC system is illustrated in Figure 1, with H(z) representing the primary path or plant (e.g., an acoustic duct), W(z) representing the adaptive filter, and S(z) representing the secondary path (which may include the D/A converter, power output amplifier, and the transducer). The adaptive filter W(z) typically uses the filtered-X LMS algorithm, where the input to the LMS algorithm is first filtered by an estimate of the secondary path [3]. The adaptive filter must simultaneously identify H(z) and equalize S(z), with the additional constraint of limiting the maximum level delivered to S(z).
Applications of gain-constrained adaptive systems include systems that use a microphone for feedback, where the acoustic path to the microphone introduces notches or peaks in the microphone frequency response (which may not be present at other locations). Adding gain constraints to the adaptive filter prevents distortion at those frequencies by limiting the peak magnitude of the filter coefficients [2]. Applications of power-constrained adaptive systems include requirements to limit the maximum power delivered to S(z) to a predetermined constraint value to prevent overdriving the transducer, output amplifier saturation, or other nonlinear behavior [4]. The primary difference between these implementations is that the gain-constrained algorithm does not take the input power into account when determining the constraint violation.
Previous implementations of gain and output power limiting include output rescaling, the leaky LMS, and a class of algorithms termed constrained steepest descent (CSD), presented in [2]. We develop a new class of gain-constrained and power-constrained algorithms termed constrained minimal disturbance (CMD). The new CMD algorithms provide faster convergence than previous algorithms and the ability to handle multiple constraints.
This article is organized as follows. Section 2 presents a review of prior work. Section 3 presents the CMD algorithm development. Section 4 presents a convergence analysis. Section 5 presents simulations with comparisons to other algorithms. Section 6 provides some concluding remarks.
2. Review of prior work
For comparison purposes, the following notation is used:

N: Adaptive filter size and block size
n: Sample number in the time domain
m: Block number in the time or frequency domain
W: Weight in the frequency domain
X: Input in the frequency domain
E: Error in the frequency domain
D: Plant output in the frequency domain
Y: Filter output in the frequency domain
C: Gain or power constraint
S: Secondary path
Lowercase w, x, e, d, and y are the time-domain representations of their respective frequency-domain counterparts. Vectors will be denoted in boldface, and the subscript k is used to denote an individual component of a vector. The superscript * is used to denote complex conjugate, and the superscript T denotes vector transpose. The parameter μ is used as a convergence step-size coefficient, and the parameter γ is used as a leakage coefficient.
The first two methods of power limiting were described in detail in [5] and are briefly restated here. The first, “output clipping,” simply limits the output power to a maximum value. This is what would normally happen in a real system (e.g., the output amplifier would saturate). With the filter output y at iteration n denoted by y(n) and the output constraint by C, the output clipping algorithm is given by
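The clipping equation is omitted from this version of the text; a standard form consistent with the surrounding description (the output is limited whenever its power exceeds the constraint C) is

$$
y_c(n) \;=\; \begin{cases} y(n), & |y(n)|^2 \le C,\\[2pt] \sqrt{C}\,\operatorname{sgn}\bigl(y(n)\bigr), & |y(n)|^2 > C. \end{cases}
$$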
A potential problem in using output clipping for adaptive filtering applications is that the weight updates for w(n) continue to occur while the filter output remains clipped, causing potential stability problems, since the filter weight update is decoupled from the filter output. To prevent this, the “output rescaling” algorithm can be used, which is given by
Here, in addition to the output being clipped, the adaptive filter weights are also rescaled; filter adaptation continues from the appropriate weight value corresponding to the actual output.
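The rescaling equation is likewise omitted here; a plausible form, consistent with the description (both the output and the filter weights are scaled back to the constraint boundary when it is exceeded), is

$$
\text{if } |y(n)|^2 > C:\qquad y_c(n) = \sqrt{C}\,\frac{y(n)}{|y(n)|},\qquad \mathbf{w}(n) \leftarrow \sqrt{\frac{C}{|y(n)|^2}}\;\mathbf{w}(n).
$$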
The next algorithm to be considered for gain or power limiting is the leaky LMS [6], which is given by
The leaky LMS reduces the filter gain each iteration, with the leakage coefficient γ controlling the rate of reduction. The coefficient γ is determined experimentally according to the application, but gain reduction occurs at all frequencies, resulting in a larger steady-state convergence error. When the leakage is zero, this algorithm reduces to the standard LMS [7].
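The leaky LMS update, omitted from this extraction, has the standard time-domain form

$$
\mathbf{w}(n+1) = (1-\mu\gamma)\,\mathbf{w}(n) + \mu\,e(n)\,\mathbf{x}(n),
$$

which reduces to the standard LMS when γ = 0.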
The algorithms described thus far are processed directly in the time domain. However, with large filter lengths the required convolutions become computationally expensive, and alternative methods can be more efficient. If the processing is done in block form and a fast Fourier transform (FFT) is used, the required convolutions become multiplications. This also allows additional constraints to be added to limit the filter response directly in the frequency domain. For example, in [8], an ANC system using a loudspeaker with poor low-frequency response was stabilized in the FFT domain by zeroing out the low-frequency components, preventing adaptation at those frequencies. However, using block processing results in a one-block delay, which may be undesirable in some real-time applications. A delayless structure [9], with filtering in the time domain and signal processing in the frequency domain, can be used to mitigate this delay. A block diagram of the delayless frequency-domain LMS (FDLMS) is shown in Figure 2. In delayless ANC applications with a secondary path S(z), the adaptive filter input vector x(m) is first filtered by an estimate of the secondary path. The adaptive filter weight, input, and error vectors are defined as
A block size of N is used for both the filter and each new set of data to maximize computational efficiency, with m representing the block iteration. To avoid circular convolution effects, each FFT uses blocks of size 2N [10]. The frequency-domain input and error vectors (size 2N) are defined as
where 0 is the Npoint zero vector.
The delayless FDLMS weight update equation at iteration m without a gain or power constraint is given by [9]
where the + subscript denotes the causal part of the IFFT (corresponding to the gradient constraint in [10]), and μ is the convergence coefficient. Adding a leakage factor to (6) results in a frequencydomain version of the leaky LMS which can be used to limit the adaptive filter output [2], and is given by
where γ is the leakage factor.
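Update equations (6) and (7) are not reproduced in this version; forms consistent with the delayless FDLMS of [9, 10] are

$$
\mathbf{W}(m+1) = \mathbf{W}(m) + \mu\,\mathrm{FFT}\Bigl\{\bigl[\mathrm{IFFT}\bigl(\mathbf{X}^*(m)\circ\mathbf{E}(m)\bigr)\bigr]_+\Bigr\}
$$

for the unconstrained case, and, with leakage,

$$
\mathbf{W}(m+1) = (1-\mu\gamma)\,\mathbf{W}(m) + \mu\,\mathrm{FFT}\Bigl\{\bigl[\mathrm{IFFT}\bigl(\mathbf{X}^*(m)\circ\mathbf{E}(m)\bigr)\bigr]_+\Bigr\},
$$

where ∘ denotes element-wise multiplication.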
The next two weight update equations were developed in [2], which processes the constraints in the frequency domain using an algorithm based on the method of steepest descent. The delayless form of the gain-constrained version is given by
where the z subscript sets the result in the brackets to 0 if the value in the brackets is less than 0 (the constraint is satisfied), or to the value of the difference (the constraint is violated). The constraint is applied individually to each frequency bin. Here, α controls the “tightness” of the penalty: a larger α places a stiffer penalty on constraint violation at the expense of a larger steady-state convergence error.
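Equation (8) is omitted from this version; one plausible per-bin reconstruction, consistent with the verbal description (a steepest-descent update with a penalty proportional to the gain-constraint violation; see [2] for the exact form), is

$$
W_k(m+1) = W_k(m) + \mu\Bigl(X_k^*(m)E_k(m) - \alpha\bigl[\,|W_k(m)|^2 - C^2\,\bigr]_z\,W_k(m)\Bigr).
$$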
The delayless form of the power-constrained algorithm is given by
The output power P(m) is determined by the squared Euclidean norm of the filter output, which is required to be limited to a constraint value C, or equivalently
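Written out, the requirement stated here is

$$
P(m) = \lVert \mathbf{y}(m)\rVert^2 \le C.
$$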
Note that in (10) there is only one constraint. When used for comparison purposes, we will denote (8) and (9) as constrained steepest descent (CSD) algorithms.
3. New algorithm development
The new CMD algorithm will be developed using the principle of minimal disturbance, which states that the weight vector should be changed in a minimal manner from one iteration to the next [11]. A constraint is added for filter convergence, and a constraint is also added for either the filter gain (the coefficient magnitude in each frequency bin) or the filter output power, depending on which we intend to limit. The method of Lagrange multipliers [11, 12] is then used to solve this constrained optimization problem [13].
3.1. Gain-constrained algorithm
At each block update m, the new algorithm will minimize the squared Euclidean norm of the frequencydomain weight change in each individual frequency bin k, where the weight change is given by
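Equation (11) is not reproduced in this extraction; from the description, the per-bin weight change is

$$
\Delta W_k(m) = W_k(m+1) - W_k(m),
$$

whose squared magnitude is minimized in each bin.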
subject to the condition of a posteriori filter convergence in the frequency domain
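Equation (12) is omitted here; a posteriori convergence means that the updated weight drives the bin error to zero, i.e., a condition of the form

$$
D_k(m) - S_k\,W_k(m+1)\,X_k(m) = 0,
$$

which is consistent with the substitution D_k(m) = E_k(m) + S_k W_k(m) X_k(m) used later in the derivation.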
In gain-constrained applications, the algorithm additionally adds a penalty based on the amount of magnitude violation above a maximum constraint value, requiring
or equivalently
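Equations (13) and (14) are omitted here; the magnitude constraint described is

$$
|W_k(m+1)| \le C_k, \qquad\text{or equivalently}\qquad |W_k(m+1)|^2 \le C_k^2.
$$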
The three requirements given by (11), (12), and (14) are combined into a single cost function, written as
where the Lagrange multiplier λ controls the convergence requirement of (12), and the Lagrange multiplier α_{max,k} with subscript k controls the individual frequency bin magnitude constraint; the parameter α_{max,k} controls the “tightness” of the penalty term, with a larger value placing more emphasis on meeting the constraint at the expense of increased convergence error [2]. The cost function (15) is differentiated with respect to each of the three variables and set to 0. For each frequency bin k
Rearranging (16) gives
We now propose the following interpretation of the gain-constraint term. In steady state (after convergence), we would expect successive weight values to be approximately the same for a small convergence step size. Therefore, as long as the constraint of (14) was satisfied in the previous iteration, the penalty is set to 0. However, if the magnitude of the filter weight exceeds the constraint value, then the penalty is scaled in proportion to the constraint violation (similar to the method in [14], which initiates the penalty at 90% of the constraint). We define
where the z subscript term will force α_k to zero if the constraint of (14) is satisfied.
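Definition (20) is omitted in this version; a form consistent with the description (a penalty proportional to the violation, and zero when the constraint is met) is

$$
\alpha_k(m) = \alpha_{\max,k}\,\bigl[\,|W_k(m)|^2 - C_k^2\,\bigr]_z.
$$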
Substituting (20) into (19) at frequency bin k, conjugating both sides, and rearranging into a recursion results in
Substituting (21) into (12) gives
Rearranging (22) yields
The first term in brackets is the error at frequency bin k, E_k(m). Solving for λ results in
Rearranging (21) into a recursion, using (24), and introducing a convergence step size parameter μ to control the rate of adaptation yields
Noting that D_k(m) in (25) can be written as D_k(m) = E_k(m) + S_k W_k(m)X_k(m) results in
For small μ, (26) can be approximated as
Using the definitions
and
the weight update given by (27) can be written for each frequency bin as
Taking the IFFT of both sides and casting into a delayless structure results in the new CMD algorithm given by
where
is a diagonal matrix of variable leakage factors as determined by (28).
The ║X(m)║^2 term provides an estimate of the input power P_{x,k}(m) in frequency bin k,
which can be determined recursively by [15]
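The recursion referenced here is the standard exponential power average of [15]:

$$
P_{x,k}(m) = \beta\,P_{x,k}(m-1) + (1-\beta)\,|X_k(m)|^2.
$$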
where β is a smoothing constant slightly less than 1. (Note: in equations such as (31), which use an estimated power value in the denominator, low power in a particular frequency bin may result in division by a very small number, potentially causing numerical instability. To guard against this, a small positive regularization parameter is added to the denominator to ensure numerical stability [11].)
The following observations can be made of the CMD algorithm given by (31), which is shown in Figure 3:

1. If the constraint of (14) is violated, the CMD algorithm reduces the magnitude of the adaptive filter frequency response in proportion to the level of constraint violation.

2. The CMD algorithm normalizes the weight update in a manner similar to the normalized LMS with leakage. The amount of leakage depends on the level of constraint violation.

3. The CMD algorithm scales the weight update by the inverse of the secondary path frequency response, resulting in faster convergence in regions corresponding to valleys (low magnitude response) in the secondary path.
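As a concrete illustration of these three properties, the following minimal Python sketch applies one CMD-style update to a single block of frequency-domain data. It is a simplification, not the authors' delayless implementation: the leakage form 1/(1 + α_k(m)), the per-bin power normalization, and all names are assumptions based on the description above.

```python
def cmd_gain_update(W, X, E, S, C, alpha_max, mu, eps=1e-8):
    """One simplified CMD-style weight update (per frequency bin).

    Sketch of the ideas in Section 3.1: variable leakage driven by the
    gain-constraint violation, a power-normalized step, and scaling by
    the inverse of the secondary path response.
    """
    W_new = []
    for k in range(len(W)):
        Px = abs(X[k]) ** 2 + eps                     # per-bin input power (regularized)
        viol = max(abs(W[k]) ** 2 - C ** 2, 0.0)      # [.]_z: zero when the constraint is met
        leak = 1.0 / (1.0 + alpha_max * viol)         # variable leakage factor (assumed form)
        grad = X[k].conjugate() * E[k] / (S[k] * Px)  # normalized, secondary-path-scaled term
        W_new.append(leak * (W[k] + mu * grad))
    return W_new
```

In a bin that violates the gain constraint, the leakage factor shrinks the weight even when the error is zero, while satisfied bins follow an ordinary normalized update.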
3.2. Power-constrained algorithm
In applications where the filter output power is to be limited, the gain coefficient constraint is replaced by an output power constraint. If total control effort is to be limited [2, 6], a single output power constraint can be expressed as
where
The new power-constrained cost function then becomes
Following the development of the gainconstrained algorithm, this cost function is differentiated with respect to each of the three variables and set to 0. The resulting equations are
Rearranging (38) yields
Using the same procedure previously described after (19), the term in (20) is replaced by
Following a development similar to the gain-constrained case results in the CMD algorithm given by (31), using a new diagonal matrix of leakage factors (32).
Better frequency performance can be achieved by estimating the power in each frequency bin, making the algorithm more selective in attenuating those frequencies in violation of the constraint. The output power in each frequency bin is determined by
Using C _{ k } as the power constraint, it is required that
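Equations (43) and (44) are omitted here; with the bin output Y_k(m) = W_k(m) X_k(m), the per-bin output power and its constraint read

$$
P_{y,k}(m) = |Y_k(m)|^2 = |W_k(m)|^2\,|X_k(m)|^2 \le C_k.
$$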
The resulting cost function is given by
Following the development of the gainconstrained algorithm, this cost function is differentiated with respect to each of the three variables and set to 0. The resulting equations are
Rearranging (46) yields
Using the same procedure previously described after (19), the term in (20) is replaced by
Following a development similar to the gain-constrained case, and using a new diagonal matrix of leakage factors (32), results in the CMD algorithm, repeated below.
where
4. Convergence analysis
We assume that all signals are white, zero-mean, Gaussian wide-sense stationary, and employ the independence assumption [7] under a steady-state condition, where the constraint violation is constant and the transform-domain weights are mutually uncorrelated (which occurs as the filter size N grows large [16]). We will also use a normalized input power of unity in (34), which then allows the analysis to apply to both gain-constrained and power-constrained cases. Uncorrelated white measurement noise with a variance of σ_n^2 will be denoted by η_k.
4.1. Mean value
The weight update equation (30) can be written as
or equivalently
where W_{k,opt} denotes the optimal Wiener solution [given as S_k^{-1} D_k(m)]. Taking expectations of both sides, using the assumptions, and noting that the input power is normalized per (29) results in
By induction, this recursion can be written as
Convergence requirements on μ are given below. When these conditions are satisfied the result is
which converges in the limit to the steady-state solution W_{k,ss}.
4.2. Convergence in the mean
The deviation from the steadystate solution in bin k is defined by a weight error [17] given by
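The definition (equation omitted in this version) follows the usual convention [17]:

$$
V_k(m) = W_k(m) - W_{k,\mathrm{ss}},
$$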
allowing the CMD algorithm to be expressed as
Taking expectations of both sides results in
By induction, this recursion can be written as
For this to converge, the exponential term must decay,
resulting in
with the upper bound on μ occurring for maximum constraint violation, given by
4.3. Convergence in the mean square
Both sides of (60) are first postmultiplied by their respective conjugate transposes, rearranged, and after taking expectations the result is
Rearranging and employing the assumptions [18] gives
As the weight error variance update depends on the mean coefficient error vector V_k(m), a state-space model can be defined as
and the update defined as the real component of
with
and
where
For stability, it is required that the eigenvalues of the state transition matrix A have magnitude less than 1 [19], requiring matrix entry A_{11} in (70) to be bounded in magnitude below 1, resulting in
or
with the upper bound on μ occurring for maximum constraint violation, given by
5. Simulations
In the simulations, the experimental data from [3] is used for the plant, modeled by a 512-term all-zero filter centered at N/2. The output rescaling algorithm (2) is applied in the frequency domain to determine the steady-state adaptive filter final coefficients. We demonstrate the improved convergence performance of the CMD algorithm compared to the CSD algorithm and the leaky LMS in both gain-constrained and power-constrained applications. The constraint terms C, α_{max,k}, and C_k are held constant in the simulations, but could be shaped over frequency for specific applications. External uncorrelated Gaussian white noise with a variance of 0.01 is added for the convergence comparisons, and an average of 100 runs is plotted. In the simulations, we assume prior knowledge of the secondary path transfer function; methods for online and offline secondary path identification are presented in, e.g., [20, 21].
5.1. Gainconstrained algorithm
Using a unity gain secondary path, a 3-dB coefficient gain constraint is imposed, and Figure 4 shows the plant frequency response and the response of the new CMD algorithm, illustrating the clipping effect of the algorithm.
Using the experimental data from [3] for the secondary path, the algorithms should converge to the filter in Figure 5, which shows the CMD algorithm response, the plant frequency response, and the secondary path frequency response. The adaptive filter in this case will need to simultaneously identify H(z) and equalize S(z), while still maintaining the gain constraints. The convergence comparison for the three algorithms for the system in Figure 5 for a white noise input is displayed in Figure 6. The CMD algorithm has the fastest convergence performance. The CSD algorithm began converging in a similar manner, but was not able to fully achieve the relatively high 20-dB gain required at the lowest frequencies in Figure 5. However, other simulations without deep secondary path nulls showed that the two algorithms converge to similar final weight values, with the CMD having a faster convergence rate. The leaky LMS attenuates all frequencies (not just those in violation of the constraint) and has the poorest convergence performance. (The leaky LMS appears smoother than the other two algorithms, but this is due to the logarithmic scale of the y-axis in the plots.)
Figure 7 compares the convergence of the three algorithms for a colored noise input, created by filtering the input with a first-order AR(1) low-pass process with denominator coefficients [1, −0.95]. The CSD algorithm requires a significant reduction of μ in (8) to maintain stability, resulting in a slow response. However, the increased energy in the lower frequency regions due to the low-pass input process improved the misadjustment for this case. The leaky LMS attenuates all frequencies (not just those in violation of the constraint) and has the poorest convergence performance and highest excess misadjustment.
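The colored-noise input described here can be reproduced with a short sketch (a hypothetical helper; the recursion implements the AR(1) low-pass 1/(1 − 0.95 z⁻¹) corresponding to the stated denominator coefficients [1, −0.95]):

```python
import random

def ar1_colored_noise(n, a=0.95, seed=0):
    """Colored noise: white Gaussian noise passed through the AR(1)
    low-pass filter 1 / (1 - a z^-1), i.e. x[n] = w[n] + a * x[n-1]."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0) + a * x
        out.append(x)
    return out
```

With a = 0.95 the lag-1 autocorrelation of the output is close to 0.95, concentrating the input energy at low frequencies as in the simulation.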
5.2. Powerconstrained algorithm
The frequency response and convergence of the three algorithms are compared in Figures 8 and 9, respectively, using a single output power constraint of 25% of the unconstrained value (−6 dB). The CMD algorithm has the fastest convergence performance and maintains a 6-dB power reduction over frequency. The CSD displays similar performance, but again was not able to fully achieve the relatively high 20-dB gain required at the lowest frequencies. The leaky LMS has the poorest convergence performance, primarily due to its inability to track the lowest frequencies. Both the CMD and CSD algorithms allow the power constraint to be set explicitly, while the leaky LMS requires a trial-and-error approach to determine the parameters.
The CMD algorithm frequency response for the individual bin-constrained case using the constraint of (44) is shown in Figure 10 for a 3-dB power limit with a wideband white noise input. Comparing this to Figure 8 illustrates how the new CMD algorithm reduces the output at the frequencies of power-constraint violation, while minimizing the effect at other frequencies.
6. Conclusion
A new algorithm, the CMD LMS, was presented for gain-constrained and power-constrained adaptive filter applications. Analysis results were developed for the stability bounds in the mean and mean-square sense. The CMD algorithm was compared to the algorithm developed in [2] and to the leaky LMS for filtered-X ANC applications. The new CMD algorithm provides faster convergence and improved frequency response performance, especially in colored noise environments. Additionally, the new CMD algorithm can handle multiple constraints in both gain-constrained and power-constrained applications.
References
 1.
Nelson PA, Elliott SJ: Active Control of Sound. Academic Press, London; 1992.
 2.
Rafaely B, Elliott S: A computationally efficient frequency-domain LMS algorithm with constraints on the adaptive filter. IEEE Trans. Signal Process 2000, 48(6):1649–1655. 10.1109/78.845922
 3.
Kuo SM, Morgan DR: Active Noise Control Systems: Algorithms and DSP Implementations. Wiley, New York; 1996.
 4.
Taringoo F, Poshtan J, Kahaei MH: Analysis of effort constraint algorithm in active noise control systems. EURASIP J. Appl. Signal Process 2006, 2006:1–9.
 5.
Qiu X, Hansen CH: A study of time-domain FXLMS algorithms with control output constraint. J. Acoust. Soc. Am 2001, 109(6):2815–2823.
 6.
Darlington P: Performance surfaces of minimum effort estimators and controllers. IEEE Trans. Signal Process 1995, 43(2):536–539. 10.1109/78.348136
 7.
Widrow B, Stearns SD: Adaptive Signal Processing. Prentice-Hall, Upper Saddle River, NJ; 1985.
 8.
Nowak MP, Van Veen BD: A constrained transform-domain adaptive IIR filter structure for active noise control. IEEE Trans. Speech Audio Process 1997, 5(5):334–347.
 9.
Morgan DR, Thi JC: A delayless subband adaptive filter architecture. IEEE Trans. Signal Process 1995, 43(8):1819–1830. 10.1109/78.403341
 10.
Shynk JJ: Frequency-domain and multirate adaptive filtering. IEEE Signal Process. Mag 1992, 9(1):14–37.
 11.
Haykin S: Adaptive Filter Theory. Prentice-Hall, Upper Saddle River, NJ; 2002.
 12.
Fletcher R: Practical Methods of Optimization. Wiley, New York; 1987.
 13.
Kozacky WJ, Ogunfunmi T: Convergence analysis of a frequency-domain adaptive filter with constraints on the output weights. In Proceedings of the Asilomar Conference on Signals, Systems, and Computers. Pacific Grove, USA; 2009:1350–1355.
 14.
Elliott SJ, Baek KH: Effort constraints in adaptive feedforward control. IEEE Signal Process. Lett 1996, 3(1):7–9.
 15.
Sommen PCW, Van Gerwen PJ, Kotmans HJ, Janssen AJEM: Convergence analysis of a frequency-domain adaptive filter with exponential power averaging and generalized window function. IEEE Trans. Circuits Syst 1987, 34(7):788–798. 10.1109/TCS.1987.1086205
 16.
Farhang-Boroujeny B, Chan KS: Analysis of the frequency-domain block LMS algorithm. IEEE Trans. Signal Process 2000, 48(8):2332–2342. 10.1109/78.852014
 17.
Mayyas K, Aboulnasr T: Leaky LMS algorithm: MSE analysis for Gaussian data. IEEE Trans. Signal Process 1997, 45(4):927–934. 10.1109/78.564181
 18.
Douglas SC: Performance comparison of two implementations of the leaky LMS adaptive filter. IEEE Trans. Signal Process 1997, 45(8):2125–2129. 10.1109/78.611231
 19.
Mayyas K, Aboulnasr T: Leaky LMS: a detailed analysis. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS). Seattle, USA; 1995, 2:1255–1258.
 20.
Kuo SM, Vijayan D: A secondary path modeling technique for active noise control systems. IEEE Trans. Speech Audio Process 1997, 5(4):374–377. 10.1109/89.593319
 21.
Akhtar MT, Abe M, Kawamata M: On active noise control systems with online acoustic feedback path modeling. IEEE Trans. Audio Speech Lang. Process 2007, 15(2):593–600.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contribution
WJK and TO derived the equations, carried out and reviewed the simulations, and drafted the manuscript. Both authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Kozacky, W.J., Ogunfunmi, T. An active noise control algorithm with gain and power constraints on the adaptive filter. EURASIP J. Adv. Signal Process. 2013, 17 (2013). https://doi.org/10.1186/16876180201317
Keywords
 Adaptive filtering
 Adaptive signal processing
 Discrete Fourier transforms
 FXLMS
 Optimization