Article citations

Candes, E.J. and Tao, T. (2006) Near-Optimal Signal Recovery from Random Projections: Universal Encoding Strategies? IEEE Transactions on Information Theory, 52, 5406-5425.
http://dx.doi.org/10.1109/TIT.2006.885507

has been cited by the following article:

  • TITLE: Compressive Sensing Algorithms for Signal Processing Applications: A Survey

    AUTHORS: Mohammed M. Abo-Zahhad, Aziza I. Hussein, Abdelfatah M. Mohamed

    KEYWORDS: Compressive Sensing, Shannon Sampling Theory, Sensing Matrices, Sparsity, Coherence

    JOURNAL NAME: International Journal of Communications, Network and System Sciences, Vol.8 No.6, June 9, 2015

    ABSTRACT: In digital signal processing (DSP), Nyquist-rate sampling completely describes a signal by exploiting its bandlimitedness. Compressed Sensing (CS), also known as compressive sampling, is a DSP technique for efficiently acquiring and reconstructing a signal from a reduced number of measurements by exploiting its compressibility. The measurements are not point samples but more general linear functions of the signal. CS can capture and represent sparse signals at a rate significantly lower than that ordinarily required by Shannon's sampling theorem. It is interesting to note that most signals in reality are sparse, especially when they are represented in some domain (such as the wavelet domain) where many coefficients are close to or equal to zero. A signal is called K-sparse if it can be exactly represented by a basis and a set of coefficients in which only K coefficients are nonzero. A signal is called approximately K-sparse if it can be represented up to a certain accuracy using K non-zero coefficients. As an example, a K-sparse signal is the class of signals that are the sum of K sinusoids chosen from the N harmonics of the observed time interval; taking the DFT of any such signal would yield only K non-zero values. An example of approximately sparse signals is when the coefficients, sorted by magnitude, decrease following a power law. In this case the sparse approximation constructed by choosing the K largest coefficients is guaranteed to have an approximation error that decreases with the same power law as the coefficients. The main limitation of CS-based systems is that they employ iterative algorithms to recover the signal; these algorithms are slow, and hardware solutions have become crucial for higher performance and speed. This technique requires fewer data samples than traditionally needed when capturing a signal with relatively high bandwidth but a low information rate. As a main feature of CS, efficient algorithms such as ℓ1-minimization can be used for recovery. This paper gives a survey of both theoretical and numerical aspects of the compressive sensing technique and its applications. The theory of CS has many potential applications in signal processing, wireless communication, cognitive radio and medical imaging.
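
The measurement-and-recovery pipeline described in the abstract can be illustrated with a small numerical sketch. The example below is not taken from the surveyed paper; it assumes a random Gaussian sensing matrix A, a synthetic K-sparse signal x, measurements y = A x, and recovery by iterative soft-thresholding (ISTA), one common iterative proxy for ℓ1-minimization. The dimensions N, M, K and the regularization weight lam are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch only: compressive sensing of a K-sparse signal and recovery
# via ISTA (iterative soft-thresholding), an l1-minimization proxy.
rng = np.random.default_rng(0)
N, M, K = 256, 80, 8                 # signal length, measurements, sparsity (assumed)

# K-sparse ground truth: K nonzero coefficients at random positions
x_true = np.zeros(N)
support = rng.choice(N, K, replace=False)
x_true[support] = rng.standard_normal(K)

# Random Gaussian sensing matrix and compressive measurements y = A x
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x_true

# ISTA iteration: x <- soft_threshold(x + t * A^T (y - A x), t * lam)
lam = 0.01                           # l1 regularization weight (assumed)
t = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the spectral norm of A
x = np.zeros(N)
for _ in range(2000):
    residual = y - A @ x
    z = x + t * (A.T @ residual)
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft threshold

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

With M well below N, the ℓ1-penalized iteration typically recovers the support of the sparse signal; the slow convergence of such iterative solvers is exactly the limitation the abstract points to when it argues for hardware acceleration.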