As the gap between CPU performance and memory access time continues to widen, reducing memory access costs is increasingly important for efficiently solving very large sparse systems of linear equations. This talk will identify and describe some existing memory-efficient algorithms. Because effective data reuse is crucial to reducing memory access costs, we are particularly interested in exploring alternatives based on blocking the sparse matrix operations. We are focusing on variants of GMRES, with the initial goal of contributing to the understanding of the numerical properties of some of these algorithms. Our eventual goal is to develop a robust, memory-efficient sparse linear solver suitable for inclusion in Argonne National Laboratory's PETSc or a similar toolkit for scientific computing.
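To make the blocking idea concrete, here is a minimal sketch of a sparse matrix-vector product in block compressed sparse row (BCSR) storage, one common way of blocking sparse matrix operations for data reuse: each small dense block lets entries of the input and output vectors be reused from registers or cache across several multiplies. The function name, arguments, and storage layout below are illustrative assumptions for this sketch, not PETSc's API or the specific algorithms discussed in the talk.

```python
# Illustrative block-CSR (BCSR) sparse matrix-vector product.
# row_ptr and col_idx index *block* rows and columns; `blocks` holds
# each nonzero b-by-b dense block in row-major order.
def bcsr_matvec(b, row_ptr, col_idx, blocks, x):
    """Return y = A @ x for A stored in b-by-b block CSR format."""
    nbrows = len(row_ptr) - 1
    y = [0.0] * (nbrows * b)
    for ib in range(nbrows):                  # loop over block rows
        for k in range(row_ptr[ib], row_ptr[ib + 1]):
            jb = col_idx[k]                   # block column of this block
            blk = blocks[k]
            for i in range(b):                # small dense multiply:
                for j in range(b):            # x[jb*b + j] is reused b times
                    y[ib * b + i] += blk[i * b + j] * x[jb * b + j]
    return y

# Example: a 4x4 matrix with two nonzero 2x2 diagonal blocks,
#   [[1, 2], [3, 4]] in block position (0, 0) and
#   [[5, 6], [7, 8]] in block position (1, 1).
y = bcsr_matvec(
    b=2,
    row_ptr=[0, 1, 2],
    col_idx=[0, 1],
    blocks=[[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]],
    x=[1.0, 1.0, 1.0, 1.0],
)
print(y)  # [3.0, 7.0, 11.0, 15.0]
```

Compared with scalar CSR, the blocked layout also stores one column index per block rather than per entry, trading a little fill-in for fewer index loads and better locality.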