 Research
 Open Access
Super-resolution for simultaneous realization of resolution enhancement and motion blur removal based on adaptive prior settings
EURASIP Journal on Advances in Signal Processing volume 2013, Article number: 30 (2013)
Abstract
A super-resolution method that simultaneously realizes resolution enhancement and motion blur removal based on adaptive prior settings is presented in this article. In order to obtain high-resolution (HR) video sequences from motion-blurred low-resolution video sequences, both resolution enhancement and motion blur removal have to be performed. However, if one is performed after the other, errors in the first process may degrade the performance of the subsequent process. Therefore, in the proposed method, a new problem, which simultaneously performs the resolution enhancement and the motion blur removal, is derived. Specifically, a maximum a posteriori estimation problem which estimates the original HR frames together with the motion blur kernels is introduced into our method. Furthermore, in order to obtain the posterior probability based on Bayes’ rule, a prior probability of the original HR frame, whose distribution can adaptively be set for each area, is newly defined. By adaptively setting the distribution of the prior probability, preservation of sharpness in edge regions and suppression of ringing artifacts in smooth regions are realized. Consequently, based on these novel approaches, the proposed method can perform successful reconstruction of the HR frames. Experimental results show impressive improvements of the proposed method over previously reported methods.
1 Introduction
High-resolution (HR) video sequences are necessary for various fundamental applications, and acquisition with an HR image sensor makes quality improvement straightforward. However, it is often difficult to capture video sequences of sufficiently high quality with current image sensors. Furthermore, video sequences often include motion blur in many situations, e.g., when there is not enough light to avoid a long shutter speed. Image processing methodologies for increasing the visual quality are therefore necessary to bridge the gap between the demands of applications and physical constraints. Many researchers have proposed super-resolution (SR) methods for increasing the resolution levels of low-resolution (LR) video sequences [1–30]. Most SR methods are broadly categorized into two approaches: the learning-based (example-based) approach and the reconstruction-based approach. The learning-based approach estimates the HR frame from only its LR frame, but several other HR frames are utilized to learn a prior on the original HR frame [1–9]. On the other hand, the reconstruction-based approach estimates the HR frame from multiple LR frames, and many methods based on this approach have been proposed [10–30]. In this article, we focus on the reconstruction-based approach and discuss its details.
Reconstruction-based SR was first proposed by Tsai and Huang [10]. They used a frequency domain approach, and their formulation was extended by Kim et al. [11, 12]. In general, the frequency domain approaches have the strengths of theoretical simplicity and high computational efficiency. However, in these frequency domain approaches [10–12], the observation model of LR frames is restricted to translational motion only [17]. Moreover, due to the lack of data correlation in the frequency domain, it is difficult to effectively use spatial domain knowledge. Therefore, spatial domain approaches have often been developed to overcome the weaknesses of the frequency domain approaches [13–24, 27–30].
In general, since the estimation of HR frames is an ill-posed problem, prior information is introduced to determine the solution of the SR problem. This information is represented as a prior probability or a regularization term, which is adopted to stabilize the inversion of the ill-posed problem. Typically, intensity gradients are used for the regularization, and L1-norm or L2-norm regularization approaches are often used [13, 14, 18]. Total variation (TV) [31] is utilized as the most common regularization. This means that the conventional methods assume that the TV obtained from the original HR frames follows a predefined distribution. Since L2-norm regularization penalizes high-frequency components severely, the solution tends to become over-smoothed. On the other hand, although L1-norm regularization keeps sharpness better than L2-norm regularization, it tends to increase artifacts in smooth regions.
In addition to the above problems, these conventional SR methods only try to recover HR frames from their LR frames. However, motion blurs are also caused in the image acquisition process, and their removal must be performed together with the resolution enhancement. Therefore, many methods for removing motion blurs have been proposed [32–36]. In order to realize the resolution enhancement and the motion blur removal, the conventional methods tend to perform these two procedures separately. Then, since errors in the first procedure may degrade the performance of the subsequent procedure, some artifacts such as blurring and ringing artifacts are enhanced in the final output.
As shown in the above discussions, the conventional methods have the following problems: (i) simultaneous resolution enhancement and motion blur removal cannot be realized successfully, and (ii) regularization, i.e., prior information, cannot be provided adaptively for target video sequences.
This article presents an SR method for realizing the simultaneous resolution enhancement and motion blur removal based on adaptive prior settings. The main contributions in the proposed method are twofold.

(i)
Simultaneous estimation of the HR frame and the motion blur kernels: In order to estimate the original HR frame from its motion-blurred LR frames, a posterior probability of the original HR frame and the motion blur kernels is newly defined. Then, by using maximum a posteriori (MAP) estimation, the proposed method performs the simultaneous resolution enhancement and motion blur removal. This enables suppression of the performance degradation due to separate processing (problem (i)). Note that for realizing the successful estimation of the HR frame in this approach, the following approach becomes necessary.

(ii)
A new prior probability for the HR frame: The proposed method derives a new prior probability distribution of the HR frame, whose shape can adaptively be set to the one suitable for each area. By estimating the optimal shape adaptively, over-smoothing in edge regions and artifacts in smooth regions can be suppressed. Furthermore, the proposed method introduces a new weight factor concerning edge and blur directions into the derivation of the prior probability to reduce the over-smoothing, which occurs in the blur direction, and the ringing artifacts. Then problem (ii) can be alleviated by this approach.
Then, by combining the above two approaches, accurate reconstruction of the HR video sequences can be expected.
The remainder of this article is organized as follows. Section 2 shows the observation model of LR video sequences which is utilized in the proposed method. The resolution enhancement method of motionblurred LR video sequences is presented in Section 3. In Section 4, the effectiveness of our method is verified by some results of experiments. Concluding remarks are shown in Section 5.
2 Observation model of motion-blurred LR video sequences
In this section, we present the observation model utilized in the proposed method. Let the j-th frame of a motion-blurred LR video sequence be denoted in a vector form by ${\mathbf{y}}^{\left(j\right)}={\left[{y}_{1}^{\left(j\right)},{y}_{2}^{\left(j\right)},\cdots \phantom{\rule{0.3em}{0ex}},{y}_{{N}_{1}{N}_{2}}^{\left(j\right)}\right]}^{T}\left(\in {\mathbf{R}}^{{N}_{L}}\right)$, where N_1×N_2 is the size of the LR frame, and N_L = N_1 N_2. In this article, ^{T} denotes a vector/matrix transpose operator. The i-th frame of the HR video sequence is denoted in a vector form by ${\mathbf{x}}^{\left(i\right)}={\left[{x}_{1}^{\left(i\right)},{x}_{2}^{\left(i\right)},\cdots \phantom{\rule{0.3em}{0ex}},{x}_{{q}_{1}{N}_{1}{q}_{2}{N}_{2}}^{\left(i\right)}\right]}^{T}\left(\in {\mathbf{R}}^{{N}_{H}}\right)$, where q_1 N_1 × q_2 N_2 is the size of the HR frame, N_H = q_1 N_1 q_2 N_2, and q_1 ≥ 1, q_2 ≥ 1. Note that j ∈ {i−M, i−M+1, …, i, …, i+M−1, i+M}, i.e., the i-th HR frame is reconstructed from the 2M+1 motion-blurred LR frames by our method in the following section.
The observation model of the j-th LR frame is defined by the following equation:

$$\mathbf{y}^{(j)}=\mathbf{W}^{(j)}\mathbf{x}^{(i)}+\mathbf{v}^{(j)}, \quad (1)$$

where

$$\mathbf{W}^{(j)}=\mathbf{D}\mathbf{B}\mathbf{H}^{(j)}\mathbf{F}^{(i,j)}. \quad (2)$$
In the above equations, ${\mathbf{F}}^{(i,j)}\phantom{\rule{1em}{0ex}}\left(\in {\mathbf{R}}^{{N}_{H}\times {N}_{H}}\right)$ is a motion operator between the i-th HR frame x^{(i)} and the original HR frame corresponding to the j-th motion-blurred LR frame y^{(j)}, ${\mathbf{H}}^{\left(j\right)}\phantom{\rule{1em}{0ex}}\left(\in {\mathbf{R}}^{{N}_{H}\times {N}_{H}}\right)$ is a blurring operator due to the motion blur in the j-th frame, B $\left(\in {\mathbf{R}}^{{N}_{H}\times {N}_{H}}\right)$ is a low-pass filter, D $\left(\in {\mathbf{R}}^{{N}_{L}\times {N}_{H}}\right)$ is a down-sampling operator, and ${\mathbf{v}}^{\left(j\right)}\phantom{\rule{1em}{0ex}}\left(\in {\mathbf{R}}^{{N}_{L}}\right)$ is an additive white noise vector in the j-th LR frame. In this article, we assume that D and B are known and that together they implement bicubic resampling. Furthermore, F^{(i,j)} is calculated by using the simple block matching method whose function “cvCalcOpticalFlowBM” is provided in the OpenCV library [37].
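For intuition, the degradation chain can be simulated numerically. The sketch below is illustrative only: the frame content, motion, kernel, scale factor, and noise level are assumptions, and `gaussian_filter` merely stands in for the bicubic low-pass filter B.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter, shift

def observe_lr_frame(x_hr, motion=(0.0, 0.0), blur_kernel=None,
                     scale=2, noise_sigma=1.0, rng=None):
    """Simulate one motion-blurred LR frame: y = D B H F x + v."""
    rng = rng if rng is not None else np.random.default_rng(0)
    f = shift(x_hr, motion, order=3, mode='nearest')   # F^(i,j): inter-frame motion
    if blur_kernel is not None:
        f = convolve(f, blur_kernel, mode='nearest')   # H^(j): motion blur
    b = gaussian_filter(f, sigma=0.8 * scale)          # B: low-pass (stand-in for bicubic)
    d = b[::scale, ::scale]                            # D: down-sampling by `scale`
    return d + rng.normal(0.0, noise_sigma, d.shape)   # v^(j): additive white noise

x = np.full((64, 64), 128.0)
x[16:48, 16:48] = 200.0                       # toy HR frame with a bright square
k = np.zeros((5, 5)); k[2, :] = 1.0 / 5.0     # horizontal motion-blur kernel
y = observe_lr_frame(x, motion=(1.5, -0.5), blur_kernel=k, scale=2)
print(y.shape)  # (32, 32)
```

Each operator is applied in the same order as in the model, so swapping in a true bicubic low-pass or a different kernel only changes the corresponding line.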
3 SR algorithm based on adaptive prior settings
This section presents the SR algorithm for simultaneously realizing the resolution enhancement and the motion blur removal based on the adaptive prior settings. In order to simultaneously estimate the HR frame and the motion blur kernels, the proposed method defines their posterior probability. Specifically, this posterior probability can be obtained by using Bayes’ rule as follows:

$$\Pr\left(\mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}\mid \mathbf{y}\right)=\frac{\Pr\left(\mathbf{y}\mid \mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}\right)\Pr\left(\mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}\right)}{\Pr\left(\mathbf{y}\right)}. \quad (3)$$
In the above equation, β^{(i)} is a parameter vector for a prior probability of the HR frame x^{(i)}, where its details such as its role and dimension are explained in Section 3.1. Furthermore,

$$\mathbf{y}={\left[{\mathbf{y}}^{(i-M)T},{\mathbf{y}}^{(i-M+1)T},\dots,{\mathbf{y}}^{(i+M)T}\right]}^{T},\qquad \mathbf{k}={\left[{\mathbf{k}}^{(i-M)T},{\mathbf{k}}^{(i-M+1)T},\dots,{\mathbf{k}}^{(i+M)T}\right]}^{T}. \quad (4)$$

As described above, 2M+1 is the number of the motion-blurred LR frames used for estimating the HR frame x^{(i)}. In Equation (4), the motion blur kernel of the j-th (j=i−M,…,i+M) frame, which corresponds to the blurring operator H^{(j)}, is denoted in a vector form by ${\mathbf{k}}^{\left(j\right)}={\left[{k}_{1}^{\left(j\right)},{k}_{2}^{\left(j\right)},\dots ,{k}_{{L}_{1}{L}_{2}}^{\left(j\right)}\right]}^{T}\left(\in {\mathbf{R}}^{{L}_{1}{L}_{2}}\right)$, where L_1×L_2 is the size of the motion blur kernel. The blurring operator H^{(j)}, which is a Toeplitz matrix, satisfies the following equation:

$$\mathbf{H}^{(j)}\mathbf{x}^{(j)}=\mathrm{vec}\left[\mathbf{K}^{(j)}\otimes \mathbf{X}^{(j)}\right], \quad (5)$$
where ${\mathbf{x}}^{\left(j\right)}\phantom{\rule{1em}{0ex}}\left(\in {\mathbf{R}}^{{N}_{H}}\right)$ is a vector form of the original HR frame (the j-th HR frame) of the j-th motion-blurred LR frame y^{(j)}. Furthermore, ${\mathbf{K}}^{\left(j\right)}\phantom{\rule{1em}{0ex}}\left(\in {\mathbf{R}}^{{L}_{1}\times {L}_{2}}\right)$ and ${\mathbf{X}}^{\left(j\right)}\phantom{\rule{1em}{0ex}}\left(\in {\mathbf{R}}^{{q}_{1}{N}_{1}\times {q}_{2}{N}_{2}}\right)$ are, respectively, the matrix forms of the j-th motion blur kernel k^{(j)} and the j-th HR frame x^{(j)}, and ⊗ is a convolution operator. In addition, vec[·] is a vectorization operator.
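The equivalence between the Toeplitz matrix form and the 2-D convolution can be checked numerically: since convolution is linear, the matrix H can be built column by column from basis images. Frame and kernel sizes below are toy assumptions, and the `symm` boundary handling is an arbitrary choice.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
X = rng.random((6, 6))                  # toy HR frame X^(j)
K = rng.random((3, 3)); K /= K.sum()    # toy motion-blur kernel K^(j)

def blur(img):
    """2-D convolution with K, same output size."""
    return convolve2d(img, K, mode='same', boundary='symm')

# Build the Toeplitz-structured H column by column from basis images.
n = X.size
H = np.column_stack([blur(np.eye(1, n, s).reshape(X.shape)).ravel() for s in range(n)])

lhs = H @ X.ravel()                     # H^(j) x^(j)
rhs = blur(X).ravel()                   # vec[K^(j) convolved with X^(j)]
print(np.allclose(lhs, rhs))            # True
```

This construction is for verification only; in practice H is never formed explicitly, and the convolution is applied directly.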
Since we generally assume the denominator Pr(y) in Equation (3) is constant, the following equation is obtained:

$$\Pr\left(\mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}\mid \mathbf{y}\right)\propto \Pr\left(\mathbf{y}\mid \mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}\right)\Pr\left(\mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}\right). \quad (7)$$
In the above equation, since the motion blur kernels k are independent of the HR frame x^{(i)} and the parameters β^{(i)}, the prior probability Pr(x^{(i)},β^{(i)},k) is rewritten as

$$\Pr\left(\mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}\right)=\Pr\left(\mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)}\right)\Pr\left(\mathbf{k}\right). \quad (8)$$
Then we calculate the HR frame x^{(i)} and the motion blur kernels k from the obtained posterior probability based on the MAP estimation as follows:

$$\left(\widehat{\mathbf{x}}^{(i)},\widehat{\boldsymbol{\beta}}^{(i)},\widehat{\mathbf{k}}\right)=\underset{\mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}}{\arg\max}\;\Pr\left(\mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}\mid \mathbf{y}\right). \quad (9)$$
From Equations (7) and (8), the above equation can be rewritten as follows:

$$\left(\widehat{\mathbf{x}}^{(i)},\widehat{\boldsymbol{\beta}}^{(i)},\widehat{\mathbf{k}}\right)=\underset{\mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}}{\arg\max}\;\Pr\left(\mathbf{y}\mid \mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}\right)\Pr\left(\mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)}\right)\Pr\left(\mathbf{k}\right). \quad (10)$$
In the above equation, we can utilize the observation model shown in the previous section for the likelihood Pr(y|x^{(i)},β^{(i)},k), where its details are shown in Section 3.2. Furthermore, the proposed method defines a new prior probability distribution of the HR frame Pr(x^{(i)},β^{(i)}), whose shape can adaptively be set for each area by determining the parameters β^{(i)}, for accurately reconstructing the target HR frame. In addition, a new weight factor is determined by using motion blur and edge directions and introduced into this prior probability to suppress the over-smoothing in edge regions and the noise and ringing artifacts in smooth regions. Then successful estimation of the HR frame and the motion blur kernels from the obtained posterior probability based on the MAP estimation can be expected.
As described above, we adopt a probability model, and in particular one that simultaneously represents the original HR frame x^{(i)}, the parameters β^{(i)}, and the motion blur kernels k. Below, we briefly explain why a probability model is adopted and why a model that simultaneously represents x^{(i)}, β^{(i)}, and k is used.
Reason why we adopt the probability model Many frameworks that do not adopt probability models have been proposed. In general, these methods tend to assume that the distribution of the estimation target is represented by only one simple distribution, such as the Gaussian distribution. On the other hand, in methods which adopt probability models, it becomes feasible to adaptively estimate the distribution from the statistical characteristics of the estimation target. Then, as shown in the proposed method, a probability model whose distribution matches the estimation target can directly be used for its reconstruction. Therefore, due to its high degree of freedom, the proposed method uses the probability model.
Reason why the probability model simultaneously representing x^{(i)}, β^{(i)}, and k is used Since the proposed method tries to simultaneously perform the SR and the motion blur removal, we must estimate both the motion blur kernels k and the original HR frame x^{(i)} from only the motion-blurred LR frames y. Furthermore, it is difficult to represent the original HR frame x^{(i)} by a simple fixed distribution, and we have to model it with a distribution whose shape can adaptively be determined for each area based on its parameters β^{(i)}. Therefore, the original HR frame x^{(i)} depends on the parameters β^{(i)}, and the motion-blurred LR frames y are generated from the original HR frame x^{(i)} and the motion blur kernels k. In order to estimate these three unknowns x^{(i)}, β^{(i)}, and k from only the motion-blurred LR frames y without suffering from their contradictions, the proposed method adopts the probability model which enables their simultaneous representation.
This section is organized as follows. Section 3.1 shows the prior probability distribution used in the proposed method. The algorithm for the reconstruction of the HR frame is presented in Section 3.2.
3.1 Definition of prior probability distributions
This section explains the prior probability distributions of the HR frame, its parameters, and the motion blur kernels utilized in our method. As shown in Equation (8), the prior probability Pr(x ^{(i)},β ^{(i)},k) is divided into Pr(x ^{(i)},β ^{(i)}) and Pr (k). Thus, in this section, we explain the details of Pr(x ^{(i)},β ^{(i)}) and Pr (k).
3.1.1 Prior probability of HR frame and its parameters
First, the prior probability of the HR frame and its parameters is defined as follows:

$$\Pr\left(\mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)}\right)={\Pr}_{e}\left(\mathbf{x}^{(i)}\mid \boldsymbol{\beta}^{(i)}\right){\Pr}_{s}\left(\mathbf{x}^{(i)}\mid \boldsymbol{\beta}^{(i)}\right)\Pr\left(\boldsymbol{\beta}^{(i)}\right), \quad (11)$$
where we assume the prior probability Pr(β^{(i)}) ∝ const. in the above equation. The conditional probability Pr_{e}(x^{(i)}|β^{(i)}) is defined in such a way that sharp edges in the HR frame are kept. Therefore, we adaptively set its distribution based on intensity gradients for each area in the HR frame. Furthermore, the conditional probability Pr_{s}(x^{(i)}|β^{(i)}) is adopted to suppress noise and ringing artifacts in smooth regions of the HR frame. By using the intensity gradients of the motion-blurred LR frame, we suppress the increase of the intensity gradients in smooth regions of the HR frame. The details of Pr_{e}(x^{(i)}|β^{(i)}) and Pr_{s}(x^{(i)}|β^{(i)}) are shown below.
The details of Pr_{e}(x^{(i)}|β^{(i)})
The conditional probability Pr_{e}(x^{(i)}|β^{(i)}) in Equation (11) is defined by the Generalized Gaussian Distribution (GGD) [38] as follows:
where
and ${\mathcal{N}}_{s}$ is the set of pixels neighboring s, with s $\notin {\mathcal{N}}_{s}$ and $s\in {\mathcal{N}}_{t}\iff t\in {\mathcal{N}}_{s}$. In the above equation, ${x}_{s}^{\left(i\right)}$ is the s-th element of x^{(i)}. In Equation (12), μ, α, and ${\beta}_{s,t}^{\left(i\right)}$ are the mean, scale, and shape parameters of the GGD, respectively. In addition, Γ(·) is the Gamma function, which is defined as

$$\Gamma(z)=\int_{0}^{\infty}{t}^{z-1}{e}^{-t}\,dt. \quad (14)$$
Note that β^{(i)} in Equation (12) contains all of the ${\beta}_{s,t}^{\left(i\right)}$. Note also that each pixel s has $|{\mathcal{N}}_{s}|$ parameters ${\beta}_{s,t}^{\left(i\right)}$, where $|{\mathcal{N}}_{s}|$ is the number of pixels in the neighborhood. Thus, the dimension of β^{(i)} becomes ${N}_{H}|{\mathcal{N}}_{s}|$. In the proposed method, the shape of the prior probability distribution Pr(x^{(i)},β^{(i)}) changes at each area in the HR frame by introducing the shape parameter ${\beta}_{s,t}^{\left(i\right)}$ ($1\le {\beta}_{s,t}^{\left(i\right)}\le 2$) into the definition of the conditional probability Pr_{e}(x^{(i)}|β^{(i)}). If ${\beta}_{s,t}^{\left(i\right)}=1$ or ${\beta}_{s,t}^{\left(i\right)}=2$, the distribution of Pr_{e}(x^{(i)}|β^{(i)}) equals the Laplace distribution or the Gaussian distribution, respectively. It should be noted that the HR frame generally contains both edge regions and smooth regions. If the prior probability of the HR frame is defined by a single distribution, this implies that edge and smooth regions have the same properties. However, these regions actually have different properties from each other. Thus, the proposed method estimates the parameter of the GGD, which determines its shape, at each area in the HR frame. In the HR frame, the edge regions should have large intensity gradients. In such regions, the automatically estimated distribution becomes close to the Laplace distribution, so the penalty on the intensity gradient becomes weaker than that imposed by the Gaussian distribution, where the details of this estimation are shown in Section 3.2. Consequently, by keeping the large intensity gradients, the edge regions can preserve their sharpness.
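The effect of the shape parameter can be seen by evaluating the negative log of a one-dimensional GGD at a few gradient magnitudes. The values of α, μ, and the gradients below are illustrative assumptions; the point is that β=2 (Gaussian) penalizes a large gradient far more heavily than β=1 (Laplace), which is why edges survive when ${\beta}_{s,t}^{(i)}$ is close to 1.

```python
import math

def ggd_neg_log(d, alpha=1.0, beta=1.0, mu=0.0):
    """Negative log-density of a 1-D generalized Gaussian, viewed as a gradient penalty."""
    norm = beta / (2.0 * alpha * math.gamma(1.0 / beta))  # GGD normalizing constant
    return (abs(d - mu) / alpha) ** beta - math.log(norm)

for d in (0.1, 1.0, 5.0):  # small, medium, and large intensity gradients
    print(f"|grad|={d}: beta=1 -> {ggd_neg_log(d, beta=1.0):.3f}, "
          f"beta=2 -> {ggd_neg_log(d, beta=2.0):.3f}")
```

For β=1 and α=1 the penalty grows linearly in the gradient magnitude, while for β=2 it grows quadratically, so a sharp edge (large gradient) is far cheaper under the Laplace-like shape.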
The details of Pr_{s}(x^{(i)}|β^{(i)})
Next, the conditional probability Pr_{s}(x^{(i)}|β^{(i)}) in Equation (11) is defined as follows:
where ${\stackrel{~}{y}}_{s}^{\left(i\right)}$ is the s-th element of ${\stackrel{~}{\mathbf{y}}}^{\left(i\right)}={\left[{\stackrel{~}{y}}_{1}^{\left(i\right)},{\stackrel{~}{y}}_{2}^{\left(i\right)},\dots ,{\stackrel{~}{y}}_{{N}_{H}}^{\left(i\right)}\right]}^{T}$, and ${\stackrel{~}{\mathbf{y}}}^{\left(i\right)}\phantom{\rule{1em}{0ex}}\left(\in {\mathbf{R}}^{{N}_{H}}\right)$ is an enlarged result of y^{(i)} obtained by cubic interpolation. Note that in this article, we utilize cubic interpolation to obtain ${\stackrel{~}{\mathbf{y}}}^{\left(i\right)}$ for its simplicity. Several other approaches for obtaining ${\stackrel{~}{\mathbf{y}}}^{\left(i\right)}$ can be adopted, and better estimation results can then also be expected. Furthermore, (σ_1)^2 is the variance of the Gaussian distribution, and
In Equation (15), we use the LR frame to constrain the intensity gradients of the HR frame for suppressing noise and ringing artifacts. Equation (15) is motivated by the fact that the motion blur can generally be considered as a smoothing filter process. In a locally smooth region of the LR frame, the corresponding region in the HR frame should also be smooth. In Equation (17), since the estimated value of ${\beta}_{s,t}^{\left(i\right)}$ becomes larger in smooth regions, m_s also becomes larger in such regions. In a region having a large value of m_s, the intensity gradient is strongly constrained by the LR frame. Since the LR frame does not have any ringing and noise artifacts, an increase of the intensity gradient is prevented in the estimated HR frame, and those artifacts can be suppressed.
3.1.2 Prior probability of motion blur kernels
As the prior probability of the motion blur kernels, we define the following distribution:

$$\Pr\left(\mathbf{k}\right)=\prod_{j=i-M}^{i+M}\Pr\left(\mathbf{k}^{(j)}\right), \quad (18)$$

where

$$\Pr\left(\mathbf{k}^{(j)}\right)\propto \prod_{l=1}^{{L}_{1}{L}_{2}}\exp\left(-{\eta}^{(j)}{k}_{l}^{(j)}\right), \quad (19)$$

and η^{(j)} is a rate parameter. It is commonly observed that since a motion blur kernel traces the path of the camera, it tends to be sparse, with most values close to zero. This prior probability for the motion blur kernel is used in [19], and we adopt the same prior probability in this article.
3.1.3 Discussion of effectiveness of new prior probability
As shown in the above explanations, the proposed method tries to perform the resolution enhancement and the motion blur removal while keeping the sharpness in edge regions and suppressing noise and ringing artifacts in smooth regions. In general, in order to derive the posterior probability of the HR frame based on Bayes’ rule as shown in Equation (3), the likelihood and the prior probability should be defined. Note that the likelihood is derived from the observation model, and its distribution tends to be common between different methods, where the details of the likelihood are shown in the following section. Therefore, the proposed method focuses on the prior probability and introduces the following novel points to solve the conventional problems.

Adaptive setting of the distribution shape of the prior probability in Equation (12)
The proposed method adaptively determines the parameters ${\beta}_{s,t}^{\left(i\right)}$, which set the distribution shape of the prior probability, in such a way that the reconstructed HR frame keeps sharpness in edge regions and smoothness in smooth regions.

Suppression of noises and ringing artifacts in Equation (15)
The proposed method monitors the parameters ${\beta}_{s,t}^{\left(i\right)}$, which represent the distribution shape, in Equation (17) and derives the new prior probability to suppress the occurrence of noise and ringing artifacts in smooth regions.
The proposed method divides Pr(x^{(i)},β^{(i)}) into Pr_{e}(x^{(i)}|β^{(i)}) and Pr_{s}(x^{(i)}|β^{(i)}) in order to deal with edge and smooth areas separately during the SR process. It should be noted that Pr_{e}(x^{(i)}|β^{(i)}) is defined by the GGD, and its distribution lies between the Laplace distribution and the Gaussian distribution. On the other hand, Pr_{s}(x^{(i)}|β^{(i)}) is defined as the Gaussian distribution, i.e., Pr_{e}(x^{(i)}|β^{(i)}) has a higher degree of freedom. This is because Pr_{e}(x^{(i)}|β^{(i)}) and Pr_{s}(x^{(i)}|β^{(i)}) have different roles in the proposed method. Specifically, Pr_{e}(x^{(i)}|β^{(i)}) is adopted for correctly representing the prior on intensity gradients of the original HR frame. Therefore, the proposed method uses the GGD for providing its distribution correctly. Unfortunately, since even the GGD cannot perfectly represent the prior, some artifacts may occur as a result, and Pr_{s}(x^{(i)}|β^{(i)}) becomes necessary to remove such artifacts by smoothing the corresponding regions. In the proposed method, we aim to perform a simple smoothing, and thus Pr_{s}(x^{(i)}|β^{(i)}) based on the Gaussian distribution using the L2-norm is utilized. Note that since the smoothing should not be performed in edge regions, the proposed method monitors ${\beta}_{s,t}^{\left(i\right)}$ to avoid over-smoothing in those regions.
It is also possible to apply some post-processing techniques, such as smoothing filters, for the removal of those artifacts. In this case, we should introduce functions such as those shown in Equations (15)–(17) into the design of the filters. Nevertheless, since artifacts (i.e., errors) caused in smooth regions affect the estimation of the whole target HR frame during the optimization and also cause estimation errors, we use Pr_{s}(x^{(i)}|β^{(i)}) simultaneously with Pr_{e}(x^{(i)}|β^{(i)}). Then it is expected that the errors in the smooth regions can be suppressed in the reconstruction process, and the propagation of those errors to the other areas tends to be avoided.
Then, from the above novel points, our method addresses the inability of the conventional methods to perform adaptive reconstruction.
3.2 Algorithm for reconstructing HR frame
In this section, we present the algorithm for reconstructing the HR frame. The proposed method simultaneously estimates the optimal results of the HR frame x^{(i)}, its parameters β^{(i)}, and the motion blur kernels k by using the MAP estimation scheme, and thus they can be obtained as in Equation (10). In Equation (10), the conditional probability, i.e., the likelihood Pr(y|x^{(i)},β^{(i)},k), is obtained as

$$\Pr\left(\mathbf{y}\mid \mathbf{x}^{(i)},\boldsymbol{\beta}^{(i)},\mathbf{k}\right)=\prod_{j=i-M}^{i+M}\Pr\left(\mathbf{y}^{(j)}\mid \mathbf{x}^{(i)},\mathbf{k}^{(j)}\right), \quad (20)$$
where we assume y^{(j)} is independent of β^{(i)}. Using the model in Equation (1) and assuming that the noise is zero-mean white Gaussian noise of variance ${\left({\sigma}_{2}^{\left(j\right)}\right)}^{2}$, the likelihood of the LR frame y^{(j)} can be written as

$$\Pr\left(\mathbf{y}^{(j)}\mid \mathbf{x}^{(i)},\mathbf{k}^{(j)}\right)\propto \exp\left(-\frac{{\left\|\mathbf{y}^{(j)}-\mathbf{D}\mathbf{B}\mathbf{H}^{(j)}\mathbf{F}^{(i,j)}\mathbf{x}^{(i)}\right\|}^{2}}{2{\left({\sigma}_{2}^{\left(j\right)}\right)}^{2}}\right). \quad (21)$$
By substituting Equations (11), (12), (15), (18), and (21) into Equation (10), the cost function to be minimized is obtained as follows:
where ${\lambda}_{1}=\frac{1}{2{\left({\sigma}_{1}\right)}^{2}}$ and ${\lambda}_{2}^{\left(j\right)}=\frac{1}{2{\left({\sigma}_{2}^{\left(j\right)}\right)}^{2}}$. In the above equation, ${w}_{s,t}^{\left(i\right)}$, which is a new weight factor considering motion blur direction, is introduced into the third term and defined as follows:
where ${\Delta}_{y}^{(s,t)}$ and ${\Delta}_{x}^{(s,t)}$ are the distances from the s-th pixel to the t-th pixel along the y- and x-coordinates, respectively. The matrix K^{(i)} corresponds to the matrix shown in Equation (6), and K^{(i)}(u,v) (u=1,2,…,L_1; v=1,2,…,L_2) is the (u,v)-th element of K^{(i)}. If the direction between the s-th pixel and the t-th pixel becomes parallel to the main direction of the motion blur, the weight factor becomes small. If only resolution enhancement is performed, the regularization term depends only on the characteristics of the HR frame, since the blur is commonly constant in all directions. However, due to both the resolution reduction and the motion blur, the blur is not constant in all directions. This weight factor is utilized for avoiding the over-smoothing caused by the regularization term.
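The weight formula itself is not reproduced in this excerpt, but the idea of a direction-dependent weight can be sketched as follows. The function name `direction_weight`, the cosine-based form, and the ε floor are illustrative stand-ins for the paper's definition; the dominant blur direction is taken here from the kernel's second moments.

```python
import numpy as np

def direction_weight(dx, dy, blur_kernel, eps=0.1):
    """Hypothetical weight like w_{s,t}: small when offset (dx, dy) is parallel to the blur."""
    ys, xs = np.nonzero(blur_kernel > 0)
    w = blur_kernel[ys, xs]
    xs = xs - np.average(xs, weights=w)          # center the kernel support
    ys = ys - np.average(ys, weights=w)
    cov = np.cov(np.vstack([xs, ys]), aweights=w)
    vals, vecs = np.linalg.eigh(cov)
    bx, by = vecs[:, np.argmax(vals)]            # dominant blur direction in (x, y)
    pair = np.array([dx, dy], dtype=float)
    cos = abs(pair @ np.array([bx, by])) / (np.linalg.norm(pair) + 1e-12)
    return eps + (1.0 - eps) * (1.0 - cos)       # in [eps, 1]; eps when parallel

k_demo = np.zeros((5, 5)); k_demo[2, :] = 0.2    # purely horizontal blur
w_along = direction_weight(1, 0, k_demo)         # neighbor pair along the blur
w_across = direction_weight(0, 1, k_demo)        # neighbor pair across the blur
print(w_along < w_across)  # True
```

With a horizontal kernel, the horizontal neighbor pair gets the floor weight while the vertical pair keeps full weight, so the regularizer smooths less along the blur direction, matching the behavior described above.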
Finally, we explain the optimization procedures of Equation (22). Since the cost function shown in Equation (22) consists of the three large sets of unknowns (the HR frame x ^{(i)}, the parameters β ^{(i)}, and the motion blur kernels k), the use of direct search techniques is intractable. Therefore, the following cyclic coordinate descent optimization procedures are adopted to estimate the unknowns. Specifically, we iteratively perform the following three procedures.
Step 1: Update of the HR frame x ^{(i)}
The parameters β^{(i)} and the motion blur kernels k are fixed, and the HR frame is estimated by performing the following iterations:

$$\mathbf{x}_{r+1}^{(i)}=\mathbf{x}_{r}^{(i)}-{h}_{1}\left.\frac{\partial {E}_{1}\left(\mathbf{x}^{(i)}\right)}{\partial \mathbf{x}^{(i)}}\right|_{\mathbf{x}^{(i)}=\mathbf{x}_{r}^{(i)}},$$

where r and h_1, respectively, represent the iteration number and the step size, and the cost function E_1(x^{(i)}) is defined as
Step 2: Update of the parameters β ^{(i)}
The HR frame x^{(i)} and the motion blur kernels k are fixed, and the parameters β^{(i)} are estimated by performing the following iterations:

$$\boldsymbol{\beta}_{r+1}^{(i)}=\boldsymbol{\beta}_{r}^{(i)}-{h}_{2}\left.\frac{\partial {E}_{2}\left(\boldsymbol{\beta}^{(i)}\right)}{\partial \boldsymbol{\beta}^{(i)}}\right|_{\boldsymbol{\beta}^{(i)}=\boldsymbol{\beta}_{r}^{(i)}},$$
where h _{2} is the step size, and the cost function E _{2}(β ^{(i)}) is defined as
Step 3: Update of the motion blur kernels k
The HR frame x^{(i)} and the parameters β^{(i)} are fixed, and the motion blur kernel k^{(j)} (j=i−M,…,i+M) of the j-th frame is estimated by performing the following iterations:

$$\mathbf{k}_{r+1}^{(j)}=\mathbf{k}_{r}^{(j)}-{h}_{3}\left.\frac{\partial {E}_{3}\left(\mathbf{k}^{(j)}\right)}{\partial \mathbf{k}^{(j)}}\right|_{\mathbf{k}^{(j)}=\mathbf{k}_{r}^{(j)}},$$
where h_3 is the step size, and the cost function E_3(k^{(j)}) is defined as
Note that ${\mathbf{A}}_{\mathbf{k}}^{(i,j)}$ is defined in the following equations:
where ${\stackrel{~}{\mathbf{X}}}^{\left(j\right)}\phantom{\rule{1em}{0ex}}\left(\in {\mathbf{R}}^{{N}_{H}\times {L}_{1}{L}_{2}}\right)$ satisfies
Then we can simultaneously estimate the HR frame ${\widehat{\mathbf{x}}}^{\left(i\right)}$, the parameters ${\widehat{\mathit{\beta}}}^{\left(i\right)}$, and the motion blur kernels $\widehat{\mathbf{k}}$. This optimization method is based on the steepest descent algorithm; thus, the convergence of the iterative process may not be guaranteed.
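Steps 1–3 above amount to cyclic block-coordinate descent with plain gradient steps. The skeleton below mirrors only that structure; the gradient callables are placeholders (the quadratic toy gradient in the check is not the paper's E1–E3), and the default step sizes and iteration counts simply echo the experimental settings reported in Section 4.

```python
import numpy as np

def cyclic_descent(x, beta, kernels, grads, steps=(2e-2, 1e-5, 1e-11),
                   outer_iters=10, inner_iters=(300, 50, 10)):
    """Cyclic coordinate descent: Step 1 on x, Step 2 on beta, Step 3 on each kernel."""
    h1, h2, h3 = steps
    grad_x, grad_beta, grad_k = grads      # callables returning each partial gradient
    for _ in range(outer_iters):
        for _ in range(inner_iters[0]):    # Step 1: update HR frame x^(i)
            x = x - h1 * grad_x(x, beta, kernels)
        for _ in range(inner_iters[1]):    # Step 2: update shape parameters beta^(i)
            beta = beta - h2 * grad_beta(x, beta, kernels)
        for _ in range(inner_iters[2]):    # Step 3: update each blur kernel k^(j)
            kernels = [k - h3 * grad_k(x, beta, k) for k in kernels]
    return x, beta, kernels

# Toy check with a quadratic surrogate for E1 (gradient x - 1): x converges to 1.
x_hat, _, _ = cyclic_descent(np.zeros(3), np.zeros(3), [np.zeros(2)],
                             (lambda x, b, k: x - 1.0,
                              lambda x, b, k: b,
                              lambda x, b, k: k))
```

As noted above, plain gradient steps carry no convergence guarantee; in practice the step sizes must be kept small enough that each block update decreases the cost.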
In the proposed method, we newly define the posterior probability for simultaneous estimation of the HR frame and the motion blur kernels. Furthermore, the proposed method introduces the new prior probability, and by estimating the optimal parameter determining its distribution in each area, the sharpness in edge regions is preserved. In smooth regions, noise and ringing artifacts are reduced by using the information of the motion-blurred LR frames. Therefore, the proposed method performs the reconstruction more adaptively than the conventional methods, and accurate restoration and resolution enhancement by our method can be expected.
4 Experimental results
The performance of the proposed method is verified in this section. We used the video sequences shown in Table 1. According to Equation (1), the motion-blurred LR video sequences shown in Table 2 were generated using the motion blur kernels (PSFs) shown in Figure 1. Then we applied the proposed method to the LR video sequences and generated resolution-enhanced video sequences at the original resolution. When applying the proposed method to the test sequences, we simply set α=1, μ=0, ${\lambda}_{1}^{\left(j\right)}=\frac{1}{|i-j|}$, λ_2=1.0×10^{−3}, h_1=2.0×10^{−2}, h_2=1.0×10^{−5}, h_3=1.0×10^{−11}, and η=5.0×10^{−7}. It should be noted that α, μ, and ${\lambda}_{1}^{\left(j\right)}$ have been set to reasonable values. Furthermore, since h_1, h_2, and h_3 only determine the step sizes in the cyclic coordinate descent optimization procedures, they do not affect the performance of the proposed method as long as they are set to sufficiently small values. The parameters which do affect the performance of the proposed method are λ_2 and η, and we set these parameters from some preliminary experiments. In addition to the setting of the above parameters, we also list the experimental conditions below.

Number of frames used to reconstruct each HR frame: 5 (i.e., M=2)

Number of iterations for the whole optimization: 10

Note that in each iteration, we also performed the following iterations for x ^{(i)}, β ^{(i)} and k ^{(j)}:

Number of iterations for optimizing x ^{(i)}: 300

Number of iterations for optimizing β ^{(i)}: 50

Number of iterations for optimizing k ^{(j)}: 10


Block size used in the block matching algorithm: 7×7 pixels

Neighborhood ${\mathcal{N}}_{s}$: Eight neighboring pixels of pixel s

Initial conditions of x ^{(i)}, β ^{(i)} and k:
For comparison, we performed reconstruction by using the following conventional methods [13, 18, 39]:

1.
Comparative methods 1 and 2
For comparison with the proposed method, we used the conventional methods [13, 18]. These are resolution enhancement methods only, which use the L2-norm and L1-norm regularization terms, respectively. In order to compare the proposed method with these conventional methods, the degradation model including the motion blur is used, and the motion blur kernels are estimated by the method of Fergus et al. [34]. The proposed method adaptively determines the prior distribution, i.e., the regularization term is adaptively determined for the target video sequences. Thus, these comparative methods are suitable for comparison with the proposed method.

2.
Comparative method 3
The conventional method [39], implemented using the software provided by the authors, performs resolution enhancement only, using a frequency-domain approach. To compare its performance with that of the proposed method, we remove the motion blur by the method of Fergus et al. [34] after applying the resolution enhancement. This method is used as a benchmark.
To keep the experimental conditions consistent across methods, we performed the registration (motion estimation) using the simple block matching procedure described in Section 2 for the proposed method and Comparative methods 1 and 2. It should be noted that Comparative method 3, based on [39], takes a different approach, and thus we used its own motion estimation method for this comparison. Recently, many successful registration methods have been proposed, and they can drastically improve the performance of SR. However, since the main focus of this article is the reconstruction algorithm, we adopted these simple procedures.
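The simple block-matching registration referred to above can be sketched as follows. This is an illustrative exhaustive-search implementation, not the one used in the experiments (which relies on OpenCV [37]); the SAD matching cost and the search range are assumptions.

```python
import numpy as np

def block_matching(ref, tgt, block=7, search=4):
    """Exhaustive-search block matching between two grayscale frames.

    For each non-overlapping block of `ref`, search a (2*search+1)^2
    window in `tgt` and keep the displacement minimizing the sum of
    absolute differences (SAD). Returns {(by, bx): (dy, dx)}.
    """
    H, W = ref.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            patch = ref[by:by + block, bx:bx + block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if y0 < 0 or x0 < 0 or y0 + block > H or x0 + block > W:
                        continue  # candidate block leaves the frame
                    sad = np.abs(patch - tgt[y0:y0 + block,
                                             x0:x0 + block]).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors
```

For a frame shifted by an integer displacement, the recovered vectors equal that displacement wherever the shifted block stays inside the frame.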
The estimated HR results for “Mobile & Calendar” are shown in Figure 2. For better subjective evaluation, enlarged portions are shown in Figure 3. It can be seen that the proposed method achieves improvements over the conventional methods. Specifically, the proposed method preserves sharpness more successfully than the conventional methods do. Furthermore, the estimated kernels are shown in Figure 4. In this result, the proposed method successfully estimates the kernel while preserving its sparseness. Further experimental results are shown in Figures 5, 6, and 7. Compared to the results obtained by the conventional methods, various kinds of motion blur are accurately removed and successful resolution enhancement is realized by the proposed method. These experiments therefore verify the high performance of the proposed method.
Table 3 shows the mean PSNR values obtained by the proposed method and the comparative methods. The results show that the proposed method outperforms the conventional methods. Furthermore, Figure 8 shows the detailed PSNR results obtained from the estimated HR video sequences.
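For reference, the PSNR measure used in Table 3 and Figure 8 can be computed as below; the standard definition with a peak intensity of 255 is assumed, since the article does not restate it here.

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """PSNR in dB between the original HR frame and an estimate:
    10 * log10(peak^2 / MSE). Identical frames give infinity."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

The mean PSNR of a sequence is then simply the average of the per-frame values.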
From the obtained results, we can see that the proposed method enables successful reconstruction of HR video sequences from motion-blurred LR video sequences. As shown in the previous section, the proposed method adopts the following two novel approaches:

(i)
Simultaneous resolution enhancement and motion blur removal
The proposed method uses the posterior probability to simultaneously estimate the HR frame and the motion blur kernels from the target motion-blurred LR frames. This solves the problem of conventional methods that perform these two reconstructions separately, namely that errors caused in the first reconstruction degrade the performance of the subsequent one.

(ii)
Adaptive setting of prior probability on HR frame
In the proposed method, the prior probability is adaptively set for the target video sequence. Specifically, we calculate the parameters that determine the distribution shape of the prior probability on intensity gradients so as to keep the sharpness in edge regions. Furthermore, the prior probability is also determined such that noise and ringing artifacts are suppressed in smooth regions.
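As a rough illustration of approach (ii), a gradient prior whose shape parameter β can vary per pixel is sketched below. The exact prior is defined in the preceding section and is not reproduced here; this generalized-Gaussian-style penalty is only an assumed stand-in, showing why β near 1 (Laplacian-like, weaker penalty on large gradients) favors sharp edges while β near 2 (Gaussian-like) more strongly smooths noise and ringing.

```python
import numpy as np

def gradient_prior_energy(x, beta, alpha=1.0):
    """Negative log of a generalized-Gaussian prior on the horizontal
    and vertical intensity gradients: sum_s (|grad_s| / alpha)^beta_s.
    `beta` may be a scalar or a per-pixel map of shape parameters."""
    dx = np.diff(x, axis=1, append=x[:, -1:])  # horizontal gradient
    dy = np.diff(x, axis=0, append=x[-1:, :])  # vertical gradient
    b = np.broadcast_to(beta, x.shape)
    return float(np.sum((np.abs(dx) / alpha) ** b
                        + (np.abs(dy) / alpha) ** b))
```

For a step edge of height 2, the β=1 energy is half the β=2 energy, so a MAP estimator with β=1 sacrifices less data fidelity to flatten the edge, which matches the behavior reported for edge regions.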
First, to confirm the effectiveness of (i), Figure 9 shows an example result obtained by separately performing the resolution enhancement and the motion blur removal. Specifically, after resolution enhancement based on the proposed SR method, motion blur removal by the method of Fergus et al. [34] is performed. The results obtained by these separate procedures tend to suffer from degradations compared to those obtained by the proposed method. Thus, this experiment confirms the effectiveness of (i).
Estimating unknown data from observed data is generally an ill-posed problem, so some prior information on the estimation target must be provided. Since it is quite difficult to specify the prior perfectly, estimation errors inevitably arise from prior mismatch. In methods that perform the SR and the motion blur removal separately, this problem occurs in each process, and the estimation performance for the original HR frame is degraded. Furthermore, the result of the first process contains errors due to this problem, and the second process estimates the original HR frame by regarding that erroneous result as its observation. However, the model in the second process generally does not account for the errors caused in the first process, so compensating for them becomes difficult.
Therefore, if one process is performed after the other, errors in the first process may degrade the performance of the subsequent one. Thus, a probability model that simultaneously estimates all unknowns is introduced into the proposed method.
Next, to confirm the effectiveness of (ii), Figure 10a,b shows two results obtained by fixing the parameters that determine the distribution shape of the prior probability at ${\beta}_{s,t}^{\left(i\right)}=1$ and ${\beta}_{s,t}^{\left(i\right)}=2$, respectively. Furthermore, Figure 10f shows a map of the estimated parameters ${\beta}_{s,t}^{\left(i\right)}$. From these results, it can be seen that the proposed method adaptively sets the parameters and thereby enables successful reconstruction of the HR video sequences. Consequently, the proposed method effectively solves the conventional problems and realizes accurate reconstruction of HR video sequences from motion-blurred LR video sequences.
The above results and discussion confirm the effectiveness of the novelties in the proposed method. We further compare the performance of the proposed method with that of recent works, for which we selected the methods of [26, 36]. Since these methods realize SR only, we performed the motion blur removal using [34]. Note that in [26], the authors used L_{1}-norm or L_{2}-norm based regularization terms; thus, in this experiment, we used the results obtained with both regularization terms. Also, for implementing the method in [26], we used the region segmentation algorithm [40] instead of the one reported by the authors. Results obtained by applying these recent methods to “Mobile & Calendar” and “Susie” are shown in Figures 11 and 12. A quantitative comparison between the proposed method and these methods is shown in Figure 13. Furthermore, Table 4 shows the mean PSNR values obtained by these methods. From the obtained results, it can be seen that the proposed method enables more successful reconstruction than these conventional methods.
In addition to the above results, we also show results obtained by applying the proposed method to some real video sequences. Specifically, we used the video sequences from [41] and performed resolution enhancement using the proposed method. Note that since these sequences do not include motion blur, we focus on how successfully resolution enhancement can be achieved. Figures 14, 15, and 16 show the results of the proposed method. From these results, we can see that the proposed method achieves successful resolution enhancement in several areas of the obtained HR frames. On the other hand, the results in some areas are not satisfactory. Thus, we discuss the limitations of the proposed method below.
Finally, we discuss the limitations of the proposed method and its future outlook. In the proposed method, we calculate the motion vectors for estimating F^{(i,j)} using the simple block matching method [37]. It is well known that SR results strongly depend on the estimation accuracy of F^{(i,j)}. In this article, we focus only on the performance of the reconstruction algorithm, but adopting more accurate motion estimation algorithms is necessary for further improving SR performance.
Next, enhancing only the spatial resolution while removing motion blur can introduce temporal aliasing effects. Therefore, video-to-video SR becomes necessary to reduce this problem. Many space-time-based methods have been proposed, such as [36, 41, 42], and we also plan to extend our method to a video-to-video version.
Furthermore, in this article, we considered only motion blur caused by the ego (camera) motion. However, in real applications, motion blur is caused by two different factors: the ego (global) motion and the objects’ (local) motions. For successful video-to-video SR, both global and local motion blurs have to be considered.
These topics are future work in our study.
5 Conclusion
A resolution enhancement method for motion-blurred LR video sequences based on the SR technique has been presented in this article. In the proposed method, we introduce the following two approaches:

(i) simultaneous estimation of the HR frame and the motion blur kernels, and (ii) a new prior probability for correctly representing the HR frame. We then simultaneously estimate the HR frame and the motion blur kernels based on the new prior probability. Consequently, successful reconstruction of HR video sequences can be realized while preserving sharpness and suppressing artifacts.
Note that although the proposed method performed accurate SR in the experiments, some artifacts occur between edge regions and smooth regions. This is because it is difficult to accurately estimate the parameters of the prior distribution of the original HR frame from only the motion-blurred LR frames. We will have to tackle this problem in future work. Furthermore, we simply set the parameters used in the proposed method to the values that yielded the highest performance; ideally, however, they should be determined adaptively from the target video sequences.
In addition, we also have to realize a video-to-video SR approach to reduce temporal aliasing effects. We will study this point in a subsequent report.
References
 1.
Freeman WT, Pasztor EC, Carmichael OT: Learning low-level vision. Int. J. Comput. Vis 2000, 40: 25-47. 10.1023/A:1026501619075
 2.
Hertzmann A, Jacobs CE, Oliver N, Curless B, Salesin DH: Image analogies. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’01 2001, 327-340.
 3.
Freeman WT, Jones TR, Pasztor EC: Example-based super-resolution. IEEE Comput. Graph. Appl 2002, 22: 56-65.
 4.
Sun J, Zheng NN, Tao H, Shum HY: Image hallucination with primal sketch priors, vol. 2. Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2003, 729-736.
 5.
Wang Q, Tang X, Shum H: Patch based blind image super resolution, vol. 1. Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) 2005, 709-716.
 6.
Stephenson TA, Chen T: Adaptive Markov random fields for example-based super-resolution of faces. EURASIP J. Appl. Signal Process 2006, 2006: 225-225.
 7.
Jiji CV, Chaudhuri S: Single-frame image super-resolution through contourlet learning. EURASIP J. Appl. Signal Process 2006, 2006: 235-235.
 8.
Jiji CV, Chaudhuri S, Chatterjee P: Single frame image super-resolution: should we process locally or globally? Multidimen. Syst. Signal Process 2007, 18: 123-152. 10.1007/s11045-007-0024-1
 9.
Li X, Lam KM, Qiu G, Shen L, Wang S: An efficient example-based approach for image super-resolution. International Conference on Neural Networks and Signal Processing 2008, 575-580.
 10.
Tsai R, Huang T: Multiframe image restoration and registration. Adv. Comput. Vis. Image Process 1984, 1: 317-339.
 11.
Kim S, Bose N, Valenzuela H: Recursive reconstruction of high resolution image from noisy undersampled multiframes. IEEE Trans. Acoust. Speech Signal Process 1990, 38(6): 1013-1027. 10.1109/29.56062
 12.
Kim S, Su WY: Recursive high-resolution reconstruction of blurred multiframe images. IEEE Trans. Image Process 1993, 2(4): 534-539. 10.1109/83.242363
 13.
Schultz R, Stevenson R: Extraction of high-resolution frames from video sequences. IEEE Trans. Image Process 1996, 5: 996-1011. 10.1109/83.503915
 14.
Hardie R, Barnard K, Armstrong E: Joint MAP registration and high-resolution image estimation using a sequence of undersampled images. IEEE Trans. Image Process 1997, 6(12): 1621-1633. 10.1109/83.650116
 15.
Baker S, Kanade T: Limits on super-resolution and how to break them. IEEE Trans. Pattern Anal. Mach. Intell 2002, 24(9): 1167-1183. 10.1109/TPAMI.2002.1033210
 16.
Farsiu S, Robinson D, Elad M, Milanfar P: Robust shift-and-add approach to super-resolution. Appl. Digi. Image Process. XXVI 2003, 5203: 121-130. 10.1117/12.507194
 17.
Park SC, Park MK, Kang MG: Super-resolution image reconstruction: a technical overview. IEEE Signal Process. Mag 2003, 20(3): 21-36. 10.1109/MSP.2003.1203207
 18.
Farsiu S, Robinson M, Elad M, Milanfar P: Fast and robust multiframe super resolution. IEEE Trans. Image Process 2004, 13(10): 1327-1344. 10.1109/TIP.2004.834669
 19.
Hu H, Kondi L: A regularization framework for joint blur estimation and super-resolution of video sequences. ICIP 2005, 3: 329-332.
 20.
van Ouwerkerk J: Image super-resolution survey. Image Vis. Comput 2006, 24(10): 1039-1052. 10.1016/j.imavis.2006.02.026
 21.
Shen H, Zhang L, Huang B, Li P: A MAP approach for joint motion estimation, segmentation, and super resolution. IEEE Trans. Image Process 2007, 16(2): 479-490.
 22.
Takeda H, Farsiu S, Milanfar P: Kernel regression for image processing and reconstruction. IEEE Trans. Image Process 2007, 16(2): 349-366.
 23.
Yuan-Ran L, Dao-Qing D: Color super-resolution reconstruction and demosaicing using elastic net and tight frame. IEEE Trans. Circuits Syst. I: Regular Papers 2008, 55(11): 3500-3512.
 24.
Omer O, Tanaka T: Joint blur identification and high-resolution image estimation based on weighted mixed-norm with outlier rejection. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2008, 1305-1308.
 25.
Omer O, Tanaka T: Extraction of high-resolution frame from low-resolution video sequence using region-based motion estimation. IEICE Trans. Fund 2010, E93-A(4): 742-751. 10.1587/transfun.E93.A.742
 26.
Omer O, Tanaka T: Region-based weighted-norm with adaptive regularization for resolution enhancement. Digi. Signal Process 2011, 21(4): 508-516. 10.1016/j.dsp.2011.02.005
 27.
Protter M, Elad M, Takeda H, Milanfar P: Generalizing the nonlocal-means to super-resolution reconstruction. IEEE Trans. Image Process 2009, 18: 36-51.
 28.
Baboulaz L, Dragotti P: Exact feature extraction using finite rate of innovation principles with an application to image super-resolution. IEEE Trans. Image Process 2009, 18(2): 281-298.
 29.
Takeda H, Milanfar P, Protter M, Elad M: Super-resolution without explicit subpixel motion estimation. IEEE Trans. Image Process 2009, 18(9): 1958-1975.
 30.
Lee IH, Bose N, Lin CW: Locally adaptive regularized super-resolution on video with arbitrary motion. 17th IEEE International Conference on Image Processing (ICIP) 2010, 897-900.
 31.
Rudin L, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60: 259-268. 10.1016/0167-2789(92)90242-F
 32.
Richardson WH: Bayesian-based iterative method of image restoration. J. Opt. Soc. Am 1972, 62: 55-59. 10.1364/JOSA.62.000055
 33.
Lucy L: An iterative technique for the rectification of observed distributions. Astron. J 1974, 79(6): 745-754.
 34.
Fergus R, Singh B, Hertzmann A, Roweis S, Freeman W: Removing camera shake from a single photograph. ACM Trans. Graph. (SIGGRAPH) 2006, 25(3): 787-794. 10.1145/1141911.1141956
 35.
Yuan L, Sun J, Quan L, Shum H: Progressive inter-scale and intra-scale non-blind image deconvolution. ACM Trans. Graph. (SIGGRAPH) 2008, 27(3): 1-10.
 36.
Takeda H, Milanfar P: Removing motion blur with space-time processing. IEEE Trans. Image Process 2011, 20(10): 2990-3000.
 37.
Bradski G: The OpenCV library. Dr. Dobb’s Journal of Software Tools 2000.
 38.
Do M, Vetterli M: Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance. IEEE Trans. Image Process 2002, 11: 146-158. 10.1109/83.982822
 39.
Vandewalle P, Süsstrunk S, Vetterli M: A frequency domain approach to registration of aliased images with application to super-resolution. EURASIP J. Appl. Signal Process 2006, 2006: 1-14.
 40.
Meyer F: Topographic distance and watershed lines. Signal Process 1994, 38: 113-125. 10.1016/0165-1684(94)90060-4
 41.
Faktor A, Irani M: Space-time super-resolution from a single video. Proceedings of CVPR 2011, 3353-3360.
 42.
Shechtman E, Caspi Y, Irani M: Space-time super-resolution. IEEE Trans. Pattern Anal. Mach. Intell 2005, 27(4): 531-545.
Acknowledgements
This research was partly supported by a GrantinAid for Scientific Research (B) 21300030, from the Japan Society for the Promotion of Science (JSPS).
Author information
Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Ogawa, T., Izumi, D., Yoshizaki, A. et al. Superresolution for simultaneous realization of resolution enhancement and motion blur removal based on adaptive prior settings. EURASIP J. Adv. Signal Process. 2013, 30 (2013). https://doi.org/10.1186/16876180201330
Received:
Accepted:
Published:
Keywords
 Video Sequence
 Prior Probability
 Edge Region
 Intensity Gradient
 Smooth Region