Matrix compression methods

Federal University of Santa Catarina, Curitibanos, Santa Catarina, Brazil
Applied Mathematics School, Getulio Vargas Foundation, Rio de Janeiro, Rio de Janeiro, Brazil
DOI
10.7287/peerj.preprints.849v1
Subject Areas
Algorithms and Analysis of Algorithms, Databases, Optimization Theory and Computation, Scientific Computing and Simulation, Programming Languages
Keywords
memory optimization, compression, bitstring
Copyright
© 2015 Paixão et al.
Licence
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ PrePrints) and either DOI or URL of the article must be cited.
Cite this article
Paixão CA, Coelho FC. 2015. Matrix compression methods. PeerJ PrePrints 3:e849v1

Abstract

The dominant cost of computing with large matrices on any modern computer is memory latency and bandwidth. The average latency of a modern RAM read is roughly 150 times a processor clock cycle (Alted, 2010); throughput fares somewhat better, but is still about 25 times slower than the rate at which the CPU can consume data. Bitstring compression allows larger matrices to be moved entirely into the processor's cache, which offers far better latency and bandwidth (the average latency of L1 cache is 3 to 4 clock cycles). This yields substantial performance gains and makes it possible to simulate much larger models efficiently. In this work, we propose a methodology for compressing matrices in such a way that they retain their mathematical properties, while also achieving considerable compression of the data. This allows much larger linear problems to be computed within the same memory constraints than is possible with the traditional representation of matrices.
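The preprint does not include code at this stage, but the packing idea behind bitstring compression can be sketched. The following minimal Python/NumPy illustration is not the authors' method; it assumes the simplest case of a binary (0/1) matrix, where each byte of the packed representation stores eight matrix entries and the compression is exactly lossless:

import numpy as np

# Hypothetical sketch: pack a binary (0/1) matrix into a bitstring.
# Each packed byte holds 8 entries, an 8x reduction versus uint8 storage.
rng = np.random.default_rng(42)
A = rng.integers(0, 2, size=(1024, 1024), dtype=np.uint8)  # dense 0/1 matrix

packed = np.packbits(A, axis=1)  # shape (1024, 128): one bit per entry
restored = np.unpackbits(packed, axis=1, count=A.shape[1])

assert np.array_equal(A, restored)  # the packing is lossless
print(A.nbytes, "bytes ->", packed.nbytes, "bytes")  # 1048576 -> 131072

Because the packed array is an eighth of the size, a matrix that previously overflowed the cache may now fit in it entirely, which is the source of the latency and bandwidth gains described above.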

Author Comment

This is a submission to PeerJ for review.