Symmetry, Integrability and Geometry: Methods and Applications (SIGMA)


SIGMA 2 (2006), 039, 21 pages      math.OC/0604220      https://doi.org/10.3842/SIGMA.2006.039

Combined Reduced-Rank Transform

Anatoli Torokhti and Phil Howlett
School of Mathematics and Statistics, University of South Australia, Australia

Received November 25, 2005, in final form March 22, 2006; Published online April 07, 2006

Abstract
We propose and justify a new approach to constructing optimal nonlinear transforms of random vectors. We show that the proposed transform improves such characteristics of rank-reduced transforms as compression ratio and accuracy of decompression, and reduces the required computational work. The proposed transform Tp is presented as a sum of p terms, each of which is interpreted as a particular rank-reduced transform. Moreover, each term in Tp is represented as a combination of three operations Fk, Qk and φk with k = 1,...,p. The central idea is to determine Fk separately, for each k = 1,...,p, from an associated rank-constrained minimization problem similar to that used in the Karhunen-Loève transform. The operations Qk and φk are auxiliary to finding Fk. The contribution of each term in Tp improves the performance of the entire transform. A corresponding unconstrained nonlinear optimal transform is also considered. Such a transform is important in its own right because it can be treated as an optimal filter without signal compression. A rigorous analysis of the errors associated with the proposed transforms is given.
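The construction of Tp and of the operations Fk, Qk and φk is given in the full paper. As a point of reference only, the classical Karhunen-Loève (principal component) transform, whose rank-constrained minimization each Fk generalizes, can be sketched as below. This is a minimal illustration under simplifying assumptions (centred sample data, plain eigendecomposition of the sample covariance); the function name klt_reduced_rank and the synthetic data are placeholders and are not part of the paper.

import numpy as np

def klt_reduced_rank(X, r):
    """Classical Karhunen-Loeve (PCA-style) rank-r transform.

    X : (n, m) array, m realizations of an n-dimensional random vector.
    r : target rank (compressed dimension), r <= n.

    Returns the compression matrix B (r x n), the decompression
    matrix A (n x r), and the rank-r reconstruction of the samples.
    """
    mu = X.mean(axis=1, keepdims=True)      # sample mean
    Xc = X - mu                             # centred realizations
    C = Xc @ Xc.T / X.shape[1]              # sample covariance estimate
    w, V = np.linalg.eigh(C)                # eigendecomposition (ascending order)
    U = V[:, ::-1][:, :r]                   # r leading eigenvectors
    B = U.T                                 # compression:   R^n -> R^r
    A = U                                   # decompression: R^r -> R^n
    X_hat = A @ (B @ Xc) + mu               # rank-r reconstruction
    return B, A, X_hat

# Usage: compress 20-dimensional vectors to rank 5 and report the relative error.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 1000))
B, A, X_hat = klt_reduced_rank(X, r=5)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))

In contrast to this single-term sketch, the transform Tp of the paper accumulates p such rank-constrained terms, each determined from its own minimization problem, so that every additional term improves the overall accuracy.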

Key words: best approximation; Fourier series in Hilbert space; matrix computation.

pdf (428 kb)   ps (235 kb)   tex (24 kb)

References

  1. Hotelling H., Analysis of a complex of statistical variables into Principal Components, J. Educ. Psychol., 1933, V.24, 417-441, 498-520.
  2. Karhunen K., Über Lineare Methoden in der Wahrscheinlichkeitsrechnung, Ann. Acad. Sci. Fennicae, Ser. A, 1947, V.137.
  3. Loève M., Fonctions aléatoires de second ordre, in P. Lévy, Processus Stochastiques et Mouvement Brownien, Paris, Hermann, 1948.
  4. Jolliffe I.T., Principal component analysis, New York, Springer Verlag, 1986 (2 ed., 2002).
  5. Scharf L.L., The SVD and reduced rank signal processing, Signal Processing, 1991, V.25, 113-133.
  6. Yamashita Y., Ogawa H., Relative Karhunen-Loève transform, IEEE Trans. on Signal Processing, 1996, V.44, 371-378.
  7. Hua Y., Liu W.Q., Generalized Karhunen-Loève transform, IEEE Signal Processing Letters, 1998, V.5, 141-143.
  8. Vapnik V., Statistical Learning Theory, Wiley, 1998.
  9. Ocaña F.A., Aguilera A.M., Valderrama M.J., Functional principal components analysis by choice of norm, J. Multivariate Anal., 1999, V.71, 262-276.
  10. Tipping M.E., Bishop C.M., Probabilistic principal component analysis, J. of the Royal Statistical Society, Ser. A, 1999, V.61, 611-619.
  11. Tipping M.E., Bishop C.M., Mixtures of probabilistic principal component analysers, Neural Computation, 1999, V.11, 443-482.
  12. Schölkopf B., Smola A.J., Müller K.-R., Kernel principal component analysis, in Advances in Kernel Methods. Support Vector Learning, Editors B. Schölkopf, C.J.C. Burges and A.J. Smola, Cambridge, MIT Press, 1999, 327-352.
  13. Tenenbaum J.B., de Silva V., Langford J.C., A global geometric framework for nonlinear dimensionality reduction, Science, 2000, V.290, Issue 5500, 2319-2323.
  14. Roweis S.T., Saul L.K., Nonlinear dimensionality reduction by locally linear embedding, Science, 2000, V.290, Issue 5500, 2323-2326.
  15. Cristianini N., Shawe-Taylor J., An introduction to support vector machines and other kernel-based learning methods, Cambridge, Cambridge University Press, 2000.
  16. Yamada I., Sekiguchi T., Sakaniwa K., Reduced rank Volterra filter for robust identification of nonlinear systems, in Proc. 2nd Int. Workshop on Multidimensional (ND) Systems - NDS2000, Poland, Czocha Castle, 2000, 171-175.
  17. Hua Y., Nikpour M., Stoica P., Optimal reduced-rank estimation and filtering, IEEE Trans. on Signal Processing, 2001, V.49, 457-469.
  18. Kneip A., Utikal K.J., Inference for density families using functional principal component analysis, Journal of the American Statistical Association, 2001, V.96, 519-542.
  19. Honig M.L., Xiao W., Performance of reduced-rank linear interference suppression, IEEE Trans. on Information Theory, 2001, V.47, 1928-1946.
  20. Chen W., Mitra U., Schniter P., On the equivalence of three reduced rank linear estimators with applications to DS-CDMA, IEEE Trans. on Information Theory, 2002, V.48, 2609-2614.
  21. Honig M.L., Goldstein J.S., Adaptive reduced-rank interference suppression based on the multistage Wiener filter, IEEE Trans. on Communications, 2002, V.50, 986-994.
  22. Stock J.H., Watson M.W., Forecasting using principal components from a large number of predictors, Journal of the American Statistical Association, 2002, V.97, 1167-1179.
  23. Fukunaga K., Introduction to statistical pattern recognition, Boston, Academic Press, 1990.
  24. Kraut S., Anderson R.H., Krolik J.L., A generalized Karhunen-Loève basis for efficient estimation of tropospheric refractivity using radar clutter, IEEE Trans. on Signal Processing, 2004, V.52, 48-60.
  25. Torokhti A., Howlett P., An optimal filter of the second order, IEEE Trans. on Signal Processing, 2001, V.49, 1044-1048.
  26. Torokhti A., Howlett P., Optimal fixed rank transform of the second degree, IEEE Trans. on Circuits and Systems. Part II, Analog & Digital Signal Processing, 2001, V.48, 309-315.
  27. Torokhti A., Howlett P., Pearce C., New perspectives on optimal transforms of random vectors, Optimization: Theory and Applications, to appear.
  28. Torokhti A., Howlett P., Constructing fixed rank optimal estimators with method of recurrent best approximations, J. Multivariate Analysis, 2002, V.86, 293-309.
  29. Torokhti A., Howlett P., Best operator approximation in modelling of nonlinear Systems, IEEE Trans. on Circuits and Systems. Part I, Fundamental Theory and Applications, 2002, V.49, 1792-1798.
  30. Torokhti A., Howlett P., Method of recurrent best estimators of second degree for optimal filtering of random signals, Signal Processing, 2003, V.83, 1013-1024.
  31. Torokhti A., Howlett P., Best causal mathematical models for a nonlinear system, IEEE Trans. on Circuits and Systems. Part I, Fundamental Theory and Applications, to appear.
  32. Sontag E.D., Polynomial response maps, Lecture Notes in Control and Information Sciences, 1979, V.13.
  33. Chen S., Billings S.A., Representation of non-linear systems: NARMAX model, Int. J. Control, 1989, V.49, 1013-1032.
  34. Howlett P.G., Torokhti A.P., Pearce C.E.M., A philosophy for the modelling of realistic non-linear systems, Proc. of Amer. Math. Soc., 2003, V.132, 353-363.
  35. Cotlar M., Cignoli R., An introduction to functional analysis, Amsterdam - London, North-Holland Publishing Company, 1974, 114-116.
  36. Perlovsky L.I., Marzetta T.L., Estimating a covariance matrix from incomplete realizations of a random vector, IEEE Trans. on Signal Processing, 1992, V.40, 2097-2100.
  37. Kauermann G., Carroll R.J., A note on the efficiency of Sandwich covariance matrix estimation, Journal of the American Statistical Association, 2001, V.96, 1387-1396.
  38. Schneider M.K., Willsky A.S., A Krylov subspace method for covariance approximation and simulation of random processes and fields, Int. J. Multidim. Syst. & Signal Processing, 2003, V.14, 295-318.
  39. Kubokawa T., Srivastava M.S., Estimating the covariance matrix: a new approach, J. Multivariate Analysis, 2003, V.86, 28-47.
  40. Ledoit O., Wolf M., A well-conditioned estimator for large-dimensional covariance matrices, J. Multivariate Analysis, 2004, V.88, 365-411.
  41. Leung P.L., Ng F.Y., Improved estimation of a covariance matrix in an elliptically contoured matrix distribution, J. Multivariate Analysis, 2004, V.88, 131-137.
  42. Higham N.J., Stable iterations for the matrix square root, Numerical Algorithms, 1997, V.15, 227-241.
  43. Golub G.H., van Loan C.F., Matrix computations, Baltimore, Johns Hopkins University Press, 1996.
  44. Kowalski M.A., Sikorski K.A., Stenger F., Selected topics in approximation and computations, New York - Oxford, Oxford University Press, 1995.
  45. Ben-Israel A., Greville T.N.E., Generalized inverses: theory and applications, New York, John Wiley & Sons, 1974.
  46. Mathews V.J., Sicuranza G.L., Polynomial signal processing, J. Wiley & Sons, 2001.
  47. Goldstein J.S., Reed I., Scharf L.L., A multistage representation of the Wiener filter based on orthogonal projections, IEEE Trans. on Information Theory, 1998, V.44, 2943-2959.

