Hindawi Publishing Corporation
The Scientific World Journal
Volume 2013, Article ID 192795, 6 pages
http://dx.doi.org/10.1155/2013/192795
Research Article

Deterministic Sensing Matrices in Compressive Sensing: A Survey

Thu L. N. Nguyen and Yoan Shin
School of Electronic Engineering, Soongsil University, Seoul 156-743, Republic of Korea
Correspondence should be addressed to Yoan Shin; [email protected]
Received 5 August 2013; Accepted 30 September 2013
Academic Editors: Z. Cai, Y. Qi, and Y. Wu
Copyright © 2013 T. L. N. Nguyen and Y. Shin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. A highly desirable feature of these matrices is structure, which allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce a basic problem in compressive sensing and some disadvantages of random sensing matrices. Some recent results on the construction of deterministic sensing matrices are discussed.
1. Introduction

Consider a scenario in which $x \in \mathbb{R}^n$ is a vector we wish to recover. Let $y \in \mathbb{R}^m$ ($m \ll n$) be a vector of linear measurements of $x$, given by
\[ y = Ax, \tag{1} \]
where $A$ is the measurement matrix, or sensing matrix. Because this system is underdetermined, recovering the vector $x$ from the measurement vector $y$ is an ill-posed problem. However, two papers by Donoho [1] and Candès et al. [2] provided a breakthrough by exploiting sparsity in recovery problems. The authors showed that a sparse signal can be reconstructed from very few measurements by solving the $\ell_0$-minimization problem
\[ \min_{z \in \mathbb{R}^n} \|z\|_0 \quad \text{subject to } Az = y, \tag{$P_0$} \]
the $\ell_1$-minimization problem
\[ \min_{z \in \mathbb{R}^n} \|z\|_1 \quad \text{subject to } Az = y, \tag{$P_1$} \]
or a strategy between $(P_0)$ and $(P_1)$,
\[ \min_{z \in \mathbb{R}^n} \|z\|_q \quad \text{subject to } Az = y. \tag{$P_q$} \]
The sufficient conditions under which the solution of $(P_0)$ coincides with that of $(P_1)$ depend on either the mutual coherence or the Restricted Isometry Property (RIP). These conditions are closely related to each other and play an important role in the construction of sensing matrices. Let $A = [a_1 \, a_2 \, \cdots \, a_n]$ be the $m \times n$ sensing matrix under investigation. Its mutual coherence is defined as
\[ \mu(A) = \max_{i \neq j} \frac{|\langle a_i, a_j \rangle|}{\|a_i\|_2 \|a_j\|_2}, \quad i, j = 1, \ldots, n. \tag{2} \]

Lemma 1 (see [3]). For an $m \times n$ sensing matrix $A$, the Welch bound is given by
\[ \sqrt{\frac{n - m}{m(n - 1)}} \le \mu(A) \le 1. \tag{3} \]

The existence and uniqueness of the solution can be guaranteed as soon as the measurement matrix $A$ satisfies the RIP of order $k$; that is,
\[ (1 - \delta_k) \|z\|_2^2 \le \|Az\|_2^2 \le (1 + \delta_k) \|z\|_2^2 \quad \text{for all } \|z\|_0 \le k. \tag{4} \]
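Unlike the RIP, the mutual coherence in (2) is cheap to evaluate: one normalized Gram matrix suffices, and the result can be checked against the Welch bound (3). A minimal sketch (the test matrix here is arbitrary, chosen only for illustration):

```python
import numpy as np

def mutual_coherence(A):
    """mu(A) from (2): largest normalized inner product between distinct columns."""
    A = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
    G = np.abs(A.conj().T @ A)                         # Gram matrix magnitudes
    np.fill_diagonal(G, 0.0)                           # drop the trivial i = j terms
    return float(G.max())

def welch_bound(m, n):
    """Lower bound sqrt((n - m) / (m (n - 1))) from (3) for an m x n matrix."""
    return np.sqrt((n - m) / (m * (n - 1)))

A = np.random.default_rng(0).standard_normal((8, 32))
mu = mutual_coherence(A)
assert welch_bound(8, 32) <= mu <= 1.0   # (3) holds for any 8 x 32 matrix
```

The Welch bound is met with equality only by equiangular tight frames, which is one reason structured deterministic constructions are attractive.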
The smallest such value $\delta_k$ is called the Restricted Isometry Constant (RIC). The strict condition $\delta_{2k} < 1$ also guarantees an exact solution via $\ell_0$-minimization. However, the problem $(P_0)$ is NP-hard; it cannot be solved in practice. For $0 < q < 1$, there is likewise no tractable numerical scheme for computing solutions with minimal $\ell_q$-norm. The problem $(P_1)$, in contrast, is a convex optimization problem, and in fact it can be formulated as a linear program, so recovery via $\ell_1$-minimization is efficient. Hence, most researchers are interested in recovery via $\ell_1$-minimization. There are two common approaches to solving these problems. First, we can recover $x$ via $\ell_1$-minimization by solving the problem $(P_1)$, or its noise-aware variant
\[ \min_{z \in \mathbb{R}^n} \|z\|_1 \quad \text{subject to } \|Az - y\|_2 \le \epsilon. \tag{$P_{1,\epsilon}$} \]
The second approach uses greedy algorithms for $\ell_0$-minimization, such as Matching Pursuit (MP), Orthogonal Matching Pursuit (OMP), or their modifications [4-8]. In either case, to ensure unique and stable reconstruction, the sensing matrix $A$ must satisfy certain criteria; one of the best known is the RIP of order $2k$. Most attention has been paid to random sensing matrices with independent and identically distributed (i.i.d.) entries, such as Gaussian, Bernoulli, and random Fourier ensembles, to name a few. Their applications include medical image processing [9], geophysical data analysis [10], communications [11, 12], and various other signal processing problems. Even though random sensing matrices ensure reconstruction with high probability, they also have many drawbacks: excessive complexity in reconstruction, significant storage requirements, and the lack of an efficient algorithm for verifying that a given sensing matrix satisfies the RIP with a small RIC. Exploiting the specific structure of deterministic sensing matrices is one way to overcome these problems of random sensing matrices.
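The greedy route can be illustrated with a minimal OMP sketch; this is an illustrative implementation under simplifying assumptions, not the optimized variants of [4-8], and the Gaussian matrix and sparse signal below are hypothetical:

```python
import numpy as np

def omp(A, y, k):
    """Minimal Orthogonal Matching Pursuit: greedily add the column most
    correlated with the residual, then re-fit by least squares on the
    support chosen so far."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Hypothetical i.i.d. Gaussian sensing matrix (Section 2) and a 3-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, 3)   # k-sparse estimate of x_true
```

With orthonormal columns (e.g., $A = I$) OMP recovers any sparse vector exactly; for random $A$ the success probability depends on the coherence and sparsity discussed above.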
Recently, several deterministic sensing matrices have been proposed. We can classify them into two categories: first, matrices based on coherence [13-15]; second, matrices based on the RIP or weaker forms of the RIP [16-20]. In this paper, we introduce some highlighted results: the deterministic construction of sensing matrices via algebraic curves over finite fields in terms of coherence; and chirp sensing matrices, second-order Reed-Muller codes, binary Bose-Chaudhuri-Hocquenghem (BCH) codes, and sensing matrices with the statistical RIP in terms of the RIP. The rest of this paper is organized as follows. Section 2 introduces some random sensing matrices and their practical disadvantages. Section 3 presents some highlighted results on deterministic constructions. Section 4 concludes the paper.
2. Random Sensing Matrices and Their Drawbacks

Recall that $x \in \mathbb{R}^n$ is the vector we want to recover. Because the number of measurements is much smaller than its dimension ($m \ll n$), we cannot find a linear reconstruction map that is invertible; that is, a unique solution does not exist for all $x$ in $\mathbb{R}^n$. However, we may assume that the signal $x$ belongs to the subset $\Sigma_k \subset \mathbb{R}^n$ of all $k$-sparse vectors,
\[ \Sigma_k = \{x \in \mathbb{R}^n : x_i = 0, \ i \notin T\} \tag{5} \]
for some index set $T \subset \{1, \ldots, n\}$ with $|T| \le k$. The best $k$-term approximation error is then given by
\[ \sigma_k(x)_p = \inf\{\|x - z\|_p : \|z\|_0 \le k\}, \tag{6} \]
where $\|\cdot\|_p$ can be any norm on $\mathbb{R}^n$. As noted above, randomly generated sensing matrices have proven powerful in compressive sensing. For sparsity levels bounded as
\[ k \le \frac{cm}{\log(n/k)}, \tag{7} \]
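The infimum in (6) is attained, for any $\ell_p$ norm, simply by keeping the $k$ largest-magnitude entries of $x$ and zeroing the rest; a small sketch with an arbitrary example vector:

```python
import numpy as np

def best_k_term_error(x, k, p=2):
    """sigma_k(x)_p from (6): distance from x to the nearest k-sparse vector,
    attained by keeping the k largest-magnitude entries of x."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]   # indices of the k largest entries
    z[keep] = x[keep]
    return np.linalg.norm(x - z, p)

x = np.array([5.0, -0.1, 3.0, 0.2, 0.0])
err = best_k_term_error(x, 2)   # keeps 5.0 and 3.0, leaves [-0.1, 0.2, 0.0]
```

Signals with rapidly decaying coefficients ("compressible" signals) have small $\sigma_k(x)_p$ even when not exactly sparse, which is the regime addressed by Theorems 3 and 4 below.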
where $c$ is a positive constant, the i.i.d. Gaussian matrix achieves the $k$-RIP, which guarantees recovery of sparse signals with high probability [21, 22]. The condition in (7) is also known to hold for the symmetric Bernoulli distribution, and it changes to $k \le cm/(\log n)^6$ for Fourier measurements [23]. For noiseless recovery, the result can be stated as follows.

Theorem 2 (see [24]). If $x \in \mathbb{R}^n$ is a $k$-sparse vector and the sensing matrix $A$ satisfies
\[ \delta_k(A) + \delta_{2k}(A) + \delta_{3k}(A) < 1, \tag{8} \]
then $x$ is the unique minimizer of $(P_1)$.

In practice, the original signals may be corrupted by noise, so the recovered signals are not exact; rather, they are only approximately sparse. Hence, some modified criteria were proposed as follows.

Theorem 3 (see [25]). Suppose that $x \in \Sigma_k$ and the noise $e = Ax - y$ satisfies $\|e\|_2 \le \epsilon$. If the sensing matrix $A$ satisfies
\[ \delta_{3k}(A) + 3\delta_{4k}(A) < 2, \tag{9} \]
then the output $x^\star$ of the reconstruction algorithm applied to $x$ via $(P_{1,\epsilon})$ obeys
\[ \|x^\star - x\|_2 \le C\epsilon, \tag{10} \]
where the constant $C$ depends on the sparsity $k$.

A newer result on the RIC was proposed by Candès as follows.

Theorem 4 (see [25]). Given $x \in \Sigma_k$ and an upper bound on the noise $\|e\|_2 = \|Ax - y\|_2 \le \epsilon$, if the sensing matrix $A$ satisfies
\[ \delta_{2k} < \sqrt{2} - 1, \tag{11} \]
then any solution $x^\star$ of $(P_{1,\epsilon})$ obeys
\[ \|x^\star - x\|_2 \le C_1 \epsilon + C_2 \frac{\sigma_k(x)_1}{\sqrt{k}}, \tag{12} \]
where $C_1$ and $C_2$ are positive constants depending on the sparsity $k$. Several further inequalities in terms of the RIC have been discovered, such as $\delta_{2k} < 2/(3 + \sqrt{2})$ in [26], $\delta_{2k} < 3/(4 + \sqrt{6})$ in [27], and $\delta_{2k} < 2/(2 + \sqrt{5})$ in [28], to name a few. In summary, random sensing matrices yield stable and unique solutions: they are easy to construct and ensure reconstruction with high probability. However, they also have serious drawbacks. First, storing a random matrix requires substantial memory. Second, there is no efficient algorithm for verifying that a given matrix satisfies the RIP condition. These inefficiencies become acute when the dimension of the signal grows large, and we then need to construct a measurement matrix satisfying the RIP with a small $\delta_k$, as in Theorem 4.
3. Deterministic Sensing Matrices

3.1. Chirp Sensing Matrices. A discrete chirp of length $m$ has the form
\[ A_{r,b}(\ell) = \frac{1}{\sqrt{m}} \exp\left\{\frac{2\pi i}{m} b\ell + \frac{2\pi i}{m} r\ell^2\right\}, \quad \ell, r, b \in \mathbb{Z}_m, \tag{13} \]
where $r$ is the chirp rate and $b$ is the base frequency. The full chirp sensing matrix $A$ of size $m \times m^2$ can be written as
\[ A_{\mathrm{chirp}} = [U_{r_1}\ U_{r_2}\ \cdots\ U_{r_m}]. \tag{14} \]
Each matrix $U_{r_t}$ ($t = 1, \ldots, m$) is an $m \times m$ matrix whose columns are the chirp signals with a fixed chirp rate $r_t$ and base frequency $b$ varying from $0$ to $m - 1$. For instance, given $m = 2$ and $\ell, r, b \in \{0, 1\}$, we obtain
\[ U_{r_1} = \begin{bmatrix} 1 & 1 \\ 1 & e^{i\pi} \end{bmatrix}, \qquad U_{r_2} = \begin{bmatrix} 1 & 1 \\ e^{i\pi} & e^{i\pi + i\pi} \end{bmatrix}. \tag{15} \]
Hence, we get the $2 \times 4$ deterministic sensing matrix
\[ A_{\mathrm{chirp}} = [U_{r_1}\ U_{r_2}] = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & e^{i\pi} & e^{i\pi} & e^{i\pi + i\pi} \end{bmatrix}. \tag{16} \]
Note that when $r = r_1 = 0$, the matrix $U_{r_1}$ corresponding to chirp rate $r_1$ becomes the Discrete Fourier Transform (DFT) matrix. Most chirp sensing matrices admit a fast reconstruction algorithm which reduces the complexity to $O(km \log m)$.

3.2. Second-Order Reed-Muller Sensing Matrices. The second-order Reed-Muller functions are given as follows:
\[ \phi_{P,b}(a) = \frac{(-1)^{w(b)}}{\sqrt{2^p}}\, i^{(2b + Pa)^T a}, \tag{17} \]
where $P$ is a $p \times p$ binary symmetric matrix, $b$ is a $p \times 1$ binary vector in $\mathbb{Z}_2^p$, and $w(b)$ is the weight of $b$, that is, the number of bit-1 entries. In practice, the matrices $P$ are set as all-zero matrices or matrices with zero diagonals. There are only $2^{p(p-1)/2}$ matrices $P$ satisfying this condition, namely $\{P_1, \ldots, P_{2^{p(p-1)/2}}\}$, and the functions $\phi_{P,b}(a)$ are then real valued. The set
\[ F_P = \{\phi_{P,b} \mid b \in \mathbb{Z}_2^p\} \tag{18} \]
forms a basis of $\mathbb{R}^{2^p}$. The inner products between elements of different bases behave as follows. For any two vectors $\phi_{P,b}$ and $\phi_{P',b'}$,
\[ \langle \phi_{P,b}, \phi_{P',b'} \rangle = \begin{cases} \dfrac{1}{\sqrt{2^e}} & \text{for } 2^e \text{ of the pairs}, \\[4pt] 0 & \text{for the remaining } 2^p - 2^e \text{ pairs}, \end{cases} \tag{19} \]
where $e = \operatorname{rank}(P \oplus P')$. The deterministic sensing matrix has the form
\[ A_{\mathrm{RM}} = [U_{P_1}\ U_{P_2}\ \cdots\ U_{P_{2^{p(p-1)/2}}}]_{2^p \times 2^{p(p+1)/2}}, \tag{20} \]
where $U_{P_i}$ is the unitary matrix corresponding to $F_{P_i}$ ($i = 1, \ldots, 2^{p(p-1)/2}$). Note that if we set $m = 2^p$ and $n = 2^{p(p+1)/2}$, we get an $m \times n$ sensing matrix $A_{\mathrm{RM}}$. For instance, let $p = 2$; then
\[ \mathbb{Z}_2^2 = \left\{ \begin{bmatrix}0\\0\end{bmatrix}, \begin{bmatrix}0\\1\end{bmatrix}, \begin{bmatrix}1\\0\end{bmatrix}, \begin{bmatrix}1\\1\end{bmatrix} \right\}. \tag{21} \]
There are only $2^{2(2-1)/2} = 2$ binary symmetric matrices $P$ of size $2 \times 2$ satisfying the condition. These are
\[ P_1 = \begin{bmatrix}0&0\\0&0\end{bmatrix}, \qquad P_2 = \begin{bmatrix}0&1\\1&0\end{bmatrix}. \tag{22} \]
Hence, we get the deterministic sensing matrix (with the normalization $1/\sqrt{2^p}$ omitted)
\[ A_{\mathrm{RM}} = [U_{P_1}\ U_{P_2}] = \begin{bmatrix}
1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1
\end{bmatrix}_{2^2 \times 2^3}. \tag{23} \]
Reconstruction algorithms using the second-order Reed-Muller sensing matrices can outperform standard compressive sensing with random matrices via $\ell_1$-minimization, especially when the original signal is not sparse and noise is present. Moreover, nesting of the Delsarte-Goethals sets of Reed-Muller codes remains feasible when the dimension of the original signal is large [17, 29].

3.3. Binary BCH Matrices. Let $N$ be a divisor of $2^s - 1$ for some integer $s \ge 3$, let $\gamma \in \mathrm{GF}(2^s)$ be a primitive $N$th root of unity, and assume that $s$ is the smallest integer for which $N$ divides $2^s - 1$. If we set $\alpha = \gamma^{(2^s - 1)/N}$, then $\alpha$ has order $N$. Define the matrix
\[ H = \begin{bmatrix}
1 & \alpha & \alpha^2 & \cdots & \alpha^{N-1} \\
1 & \alpha^3 & \alpha^6 & \cdots & \alpha^{3(N-1)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \alpha^{2t-1} & \alpha^{2(2t-1)} & \cdots & \alpha^{(N-1)(2t-1)}
\end{bmatrix}_{t \times N}. \tag{24} \]
The BCH code $\mathcal{C}$ is defined by
\[ \mathcal{C} = \{c \in \mathbb{F}_2^N \mid H c^T = 0\}. \tag{25} \]
In other words, if we denote by $\mathcal{N}$ the null space of the above matrix over $\mathrm{GF}(2^s)$, then the BCH code is $\mathcal{C} = \mathcal{N} \cap \mathbb{F}_2^N$. An example of a binary matrix formed from a BCH code is given as follows. Let $N = 15$, $s = 4$, and $t = 1$, and let $\gamma$ be a primitive element of $\mathrm{GF}(2^4) = \mathrm{GF}(16)$ satisfying $\gamma^4 + \gamma + 1 = 0$. Then $\alpha = \gamma^{(2^4 - 1)/15} = \gamma$. The BCH code is the set of $15$-tuples that lie in the null space of the matrix
\[ H = [1\ \alpha\ \alpha^2\ \cdots\ \alpha^{14}]. \tag{26} \]
Since
\[ m(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1) = x^8 + x^7 + x^6 + x^4 + 1 \tag{27} \]
satisfies $m(\alpha) = m(\alpha^3) = 0$, the vector $[1\ 0\ 0\ 0\ 1\ 0\ 1\ 1\ 1\ 0\ 0\ 0\ 0\ 0\ 0]^T$ is a codeword of the BCH code. Expanding each power $\alpha^j$ in the basis $\{\gamma^3, \gamma^2, \gamma, 1\}$, the binary matrix is obtained as follows:
\[ A_{\mathrm{bin}} = \begin{bmatrix}
0&0&0&1&0&0&1&1&0&1&0&1&1&1&1\\
0&0&1&0&0&1&1&0&1&0&1&1&1&1&0\\
0&1&0&0&1&1&0&1&0&1&1&1&1&0&0\\
1&0&0&0&1&0&0&1&1&0&1&0&1&1&1
\end{bmatrix}_{4 \times 15}. \tag{28} \]
Since the BCH code is cyclic, we can describe it in terms of a generator polynomial, the smallest-degree polynomial having zeros $\alpha, \alpha^3, \ldots, \alpha^{2t-1}$. The advantages of these matrices are their deterministic construction, simplicity of the sampling process, and reduced computational complexity compared with DeVore's binary sensing matrices. However, the matrices generated from BCH codes do not yet achieve the RIP constraint.

3.4. Sensing Matrices with the Statistical Restricted Isometry Property. In [18], the authors proposed weaker, statistical versions of the RIP (StRIP), defined as follows.

Definition 5 (StRIP). An $m \times n$ matrix $A$ is called StRIP of order $k$ with constant $\delta$ and probability $1 - \epsilon$ if
\[ \Pr\left( \left| \|Ax\|_2^2 - \|x\|_2^2 \right| \le \delta \|x\|_2^2 \right) \ge 1 - \epsilon \tag{29} \]
with respect to a uniform distribution of the vector $x$ among all $k$-sparse vectors in $\mathbb{R}^n$ of the same norm.

Definition 6 (UStRIP). An $m \times n$ matrix $A$ is called UStRIP of order $k$ with constant $\delta$ and probability $1 - \epsilon$ if it is StRIP and
\[ \{z \in \mathbb{R}^n : Az = Ax\} = \{x\} \tag{30} \]
with probability exceeding $1 - \epsilon$ with respect to a uniform distribution of the vector $x$ among all $k$-sparse vectors in $\mathbb{R}^n$ of the same norm.

These constructions allow recovery methods whose expected complexity is sublinear in the signal dimension and quadratic in the number of measurements, compared with the superlinear complexity of the BP or MP algorithms. The criteria are simple; when satisfied by a deterministic sensing matrix, they guarantee successful recovery for all but an exponentially small fraction of $k$-sparse signals. The authors also showed that sensing matrices satisfying these properties can be constructed from chirps, second-order Reed-Muller codes, and BCH codes [16-18].

3.5. Deterministic Construction of Sensing Matrices via Algebraic Curves over Finite Fields. In [14], DeVore used polynomials over a finite field $\mathbb{F}_q$ to construct binary sensing matrices of size $q^2 \times q^{r+1}$, where $q$ is a prime number. Let $\mathcal{P}_r$ be the set of all polynomials of degree at most $r$ over $\mathbb{F}_q$,
\[ \mathcal{P}_r = \{f \in \mathbb{F}_q[x] : \deg f \le r\}. \tag{31} \]
There are $n = q^{r+1}$ such polynomials. Any polynomial $f \in \mathcal{P}_r$ with coefficients $(a_0, a_1, \ldots, a_r)$ can be described as a mapping
\[ f : \mathbb{F}_q \to \mathbb{F}_q, \quad x \mapsto f(x) = a_0 + a_1 x + \cdots + a_r x^r. \tag{32} \]
The graph $\{(x, f(x))\}$ of $f$ is a subset of $\mathbb{F}_q \times \mathbb{F}_q$. We order the elements of $\mathbb{F}_q \times \mathbb{F}_q$ as $(0, 0), (0, 1), \ldots, (q-1, q-1)$; for a given $\mathbb{F}_q$, the set $\mathbb{F}_q \times \mathbb{F}_q$ has $m = q^2$ ordered pairs. For each $f \in \mathcal{P}_r$, we denote by $v_f$ the vector indexed on $\mathbb{F}_q \times \mathbb{F}_q$ defined by
\[ v_f = [v_{0,0}, \ldots, v_{0,q-1}, v_{1,0}, \ldots, v_{1,q-1}, \ldots, v_{q-1,0}, \ldots, v_{q-1,q-1}]^T, \tag{33} \]
where
\[ v_{i,j} = \begin{cases} 1, & \text{if } f(i) = j, \\ 0, & \text{otherwise}, \end{cases} \tag{34} \]
for $i, j = 0, 1, \ldots, q - 1$.

Theorem 7. Let $A_0$ be the $m \times n$ matrix with columns $v_f$, $f \in \mathcal{P}_r$, with these columns ordered lexicographically with respect to the coefficients of the polynomials. Then the matrix $A = (1/\sqrt{q}) A_0$ satisfies the RIP of order $k < q/r + 1$ with RIC value $\delta_k = (k - 1) r / q$.
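DeVore's construction is simple to realize directly. A sketch for the toy case $q = 3$, $r = 1$ (the parameters are arbitrary); each column is the indicator of a polynomial's graph, so it has exactly $q$ ones, and two distinct degree-$\le r$ polynomials agree in at most $r$ points, which yields the coherence $r/q$ behind Theorem 7:

```python
import numpy as np
from itertools import product

def devore_matrix(q, r):
    """DeVore's q^2 x q^(r+1) binary sensing matrix: one column per polynomial
    of degree <= r over F_q (q prime), the column being the indicator vector
    (33)-(34) of the polynomial's graph {(x, f(x))} in F_q x F_q."""
    cols = []
    for coeffs in product(range(q), repeat=r + 1):  # (a_0, ..., a_r), lexicographic
        v = np.zeros((q, q), dtype=int)
        for x in range(q):
            fx = sum(a * x**k for k, a in enumerate(coeffs)) % q
            v[x, fx] = 1                            # row index (x, f(x))
        cols.append(v.ravel())
    return np.column_stack(cols)

A0 = devore_matrix(3, 1)                 # 9 x 9: all degree-<=1 polynomials over F_3
assert A0.shape == (9, 9)
assert (A0.sum(axis=0) == 3).all()       # each column has exactly q ones
```

After the scaling $A = (1/\sqrt{q}) A_0$ of Theorem 7, the off-diagonal Gram entries are at most $r/q$.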
There are several deterministic constructions of sensing matrices via algebraic curves over finite fields, known as algebraic geometry codes [30-33]. Goppa's codes are among the best-known results and contain many linear codes with good parameters. Hence, these kinds of sensing matrices are good candidates for reconstruction in compressive sensing.

3.6. Binary Sensing Matrices Generated by Unbalanced Expander Graphs. In [20], a large class of deterministic sensing matrices based on unbalanced expander graphs, that is, on combinatorial structures, was proposed. Denoting $[n] = \{1, \ldots, n\}$, these bipartite graphs are formalized through the following definitions.

Definition 8. A bipartite graph with $n$ left vertices, $m$ right vertices, and left degree $d$ is specified by a function $\Gamma : [n] \times [d] \to [m]$, where $\Gamma(x, y)$ denotes the $y$th neighbor of $x$. For a set $S \subseteq [n]$, we write $\Gamma(S)$ to denote its set of neighbors $\{\Gamma(x, y) : x \in S, y \in [d]\}$.

Definition 9. A bipartite graph $\Gamma : [n] \times [d] \to [m]$ is a $(K, A)$-expander if for every set $S \subseteq [n]$ of size $|S| \le K$, we have $|\Gamma(S)| \ge A \cdot |S|$.

The authors constructed a large class of binary, sparse matrices satisfying a different form of the RIP, called RIP-$p$:
\[ (1 - \delta_k) \|z\|_p^p \le \|Az\|_p^p \le (1 + \delta_k) \|z\|_p^p \quad \text{for all } \|z\|_0 \le k. \tag{35} \]
If the sensing matrix $A$ is the adjacency matrix of a high-quality unbalanced expander, then the RIP-$p$ holds for $1 \le p \le 1 + O(1)/\log n$.

Theorem 10 (see [19]). Consider any $m \times n$ matrix $A_0$ that is the adjacency matrix of a $(k, \epsilon)$ unbalanced expander $G = (A, B, E)$, $|A| = n$, $|B| = m$, with left degree $d$, such that $1/\epsilon$ and $d$ are smaller than $n$. Then the scaled matrix $A = (1/d^{1/p}) A_0$ satisfies the RIP-$p$ of order $k$ with RIC value $\delta_k = C\epsilon$ for some positive constant $C > 1$.

This approach uses sparse matrices, interpreted as adjacency matrices of expander graphs, to recover an approximation of the original signal. The new RIP-$p$ property suffices to guarantee exact recovery algorithms.
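The expansion property of Definition 9 can be checked by brute force on toy graphs (the check is exponential in $K$, so it serves only for illustration; the graph below is hypothetical):

```python
from itertools import combinations

def is_expander(gamma, n, d, K, A):
    """Brute-force check of (K, A)-expansion per Definition 9: every left set S
    with |S| <= K must have at least A*|S| right neighbors."""
    nbrs = {x: {gamma(x, y) for y in range(d)} for x in range(n)}
    for size in range(1, K + 1):
        for S in combinations(range(n), size):
            if len(set().union(*(nbrs[x] for x in S))) < A * size:
                return False
    return True

# Toy left-regular bipartite graph on n = m = 6 vertices with left degree d = 2:
# the y-th neighbor of x is (x + y) mod 6, i.e. Gamma({x}) = {x, (x + 1) mod 6}.
gamma = lambda x, y: (x + y) % 6
ok = is_expander(gamma, 6, 2, 2, 1.5)      # every |S| <= 2 has >= 1.5|S| neighbors
bad = is_expander(gamma, 6, 2, 2, 2.0)     # fails: S = {0, 1} has only 3 < 4
```

Stacking the indicator vectors of the neighbor sets as rows yields exactly the sparse binary adjacency matrix $A_0$ of Theorem 10.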
4. Concluding Remarks

In this paper, various deterministic sensing matrices have been investigated and presented in terms of coherence and the RIP. The advantages of these matrices, in addition to their deterministic constructions, are the simplicity of the sampling and recovery processes as well as small storage requirements. Further improvements in both reconstruction efficiency and accuracy may be possible using these deterministic matrices in compressive sensing, particularly when some a priori information on the locations of nonzero components is available.
Conflict of Interests The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments This research was partly supported by Mid-career Researcher Program through NRF Grant funded by MEST, Korea (No. 2013-030059), and by the MSIP, Korea, in the ICT R&D Program 2013 (KCA-2012-12-911-01-107).
References

[1] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[2] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, 2006.
[3] L. R. Welch, "Lower bounds on the maximum cross correlation of signals," IEEE Transactions on Information Theory, vol. 20, no. 3, pp. 397-399, 1974.
[4] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230-2249, 2009.
[5] D. Needell and J. A. Tropp, "CoSaMP: iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301-321, 2009.
[6] D. Needell and R. Vershynin, "Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit," Foundations of Computational Mathematics, vol. 9, no. 3, pp. 317-334, 2009.
[7] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, 2007.
[8] S. K. Ambat, S. Chatterjee, and K. V. S. Hari, "Adaptive selection of search space in look ahead orthogonal matching pursuit," in Proceedings of the National Conference on Communications (NCC '12), pp. 1-5, Kharagpur, India, February 2012.
[9] A. Basarab, H. Liebgott, O. Bernard, D. Friboulet, and D. Kouame, "Medical ultrasound image reconstruction using distributed compressive sampling," in Proceedings of the 10th IEEE International Symposium on Biomedical Imaging (ISBI '13), pp. 628-631, San Francisco, Calif, USA, April 2013.
[10] X. Li, A. Y. Aravkin, T. van Leeuwen, and F. J. Herrmann, "Fast randomized full-waveform inversion with compressive sensing," Geophysics, vol. 77, no. 3, pp. A13-A17, 2012.
[11] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058-1076, 2010.
[12] S. Hwang, J. Park, D. Kim, and J. Yang, "SNR efficient transmission for compressive sensing based wireless sensor networks," in Proceedings of the 6th Joint IFIP Wireless and Mobile Networking Conference (WMNC '13), pp. 1-6, Dubai, UAE, April 2013.
[13] A. Amini, V. Montazerhodjat, and F. Marvasti, "Matrices with small coherence using p-ary block codes," IEEE Transactions on Signal Processing, vol. 60, no. 1, pp. 172-181, 2012.
[14] R. A. DeVore, "Deterministic constructions of compressed sensing matrices," Journal of Complexity, vol. 23, no. 4-6, pp. 918-925, 2007.
[15] J. Bourgain, S. Dilworth, K. Ford, S. Konyagin, and D. Kutzarova, "Explicit constructions of RIP matrices and related problems," Duke Mathematical Journal, vol. 159, no. 1, pp. 145-185, 2011.
[16] L. Applebaum, S. D. Howard, S. Searle, and R. Calderbank, "Chirp sensing codes: deterministic compressed sensing measurements for fast recovery," Applied and Computational Harmonic Analysis, vol. 26, no. 2, pp. 283-290, 2009.
[17] S. D. Howard, A. R. Calderbank, and S. J. Searle, "A fast reconstruction algorithm for deterministic compressive sensing using second order Reed-Muller codes," in Proceedings of the 42nd Annual Conference on Information Sciences and Systems (CISS '08), pp. 11-15, Princeton, NJ, USA, March 2008.
[18] R. Calderbank, S. Howard, and S. Jafarpour, "Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 358-374, 2010.
[19] R. Berinde, A. C. Gilbert, P. Indyk, H. Karloff, and M. J. Strauss, "Combining geometry and combinatorics: a unified approach to sparse signal recovery," in Proceedings of the 46th Annual Allerton Conference on Communication, Control, and Computing, pp. 798-805, Urbana-Champaign, Ill, USA, September 2008.
[20] P. Indyk, "Explicit constructions for compressed sensing of sparse signals," in Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 30-33, San Francisco, Calif, USA, January 2008.
[21] K. R. Davidson and S. J. Szarek, "Local operator theory, random matrices and Banach spaces," in Handbook of the Geometry of Banach Spaces, W. B. Johnson and J. Lindenstrauss, Eds., vol. 1, pp. 317-366, North-Holland, Amsterdam, The Netherlands, 2001.
[22] S. J. Szarek, "Condition numbers of random matrices," Journal of Complexity, vol. 7, no. 2, pp. 131-149, 1991.
[23] E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.
[24] E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203-4215, 2005.
[25] E. J. Candès, J. K. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207-1223, 2006.
[26] S. Foucart and M. J. Lai, "Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 395-407, 2009.
[27] S. Foucart, "A note on guaranteed sparse recovery via ℓ1-minimization," Applied and Computational Harmonic Analysis, vol. 29, no. 1, pp. 97-103, 2010.
[28] T. T. Cai, L. Wang, and G. Xu, "New bounds for restricted isometry constants," IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4388-4394, 2010.
[29] P. Delsarte and J. M. Goethals, "Alternating bilinear forms over GF(q)," Journal of Combinatorial Theory, Series A, vol. 19, no. 1, pp. 26-50, 1975.
[30] V. D. Goppa, "Codes on algebraic curves," Doklady Akademii Nauk SSSR, vol. 259, no. 6, pp. 1289-1290, 1981.
[31] I. Blake, C. Heegard, T. Høholdt, and V. Wei, "Algebraic-geometry codes," IEEE Transactions on Information Theory, vol. 44, no. 6, pp. 2596-2618, 1998.
[32] M. A. Tsfasman and S. G. Vladut, Algebraic-Geometric Codes, vol. 58 of Mathematics and Its Applications, Springer, Dordrecht, The Netherlands, 1991.
[33] C. P. Xing, H. Niederreiter, and K. Y. Lam, "Constructions of algebraic-geometry codes," IEEE Transactions on Information Theory, vol. 45, no. 4, pp. 1186-1193, 1999.