Please join us Wednesday, September 28, 2011, from 1-2pm in the NCAR Mesa Lab Main Seminar Room.
While consumer electronics has enthusiastically embraced lossy compression, signal-processing applications such as medical imaging, wireless communications, and sensor processing have been slow to adopt it. Supercomputing has similarly not employed compression, but is increasingly limited by a variety of I/O issues: DRAM access speeds (the "memory wall"), rotating and flash storage rates, and InfiniBand networking bottlenecks for data distribution and exchange.

Signal processing applications typically process integer data from analog-to-digital converters (ADCs), while supercomputing mostly operates on just two datatypes: 32-bit and 64-bit floating-point values. In this talk, we demonstrate that lossy real-time compression of integer ADC samples at compression ratios between 3:1 and 4:1 has no effect on the final results of several signal processing applications, including CT scanner images viewed by radiologists. We also describe results from seismic supercomputing algorithms where lossy floating-point compression between 2:1 and 6:1 does not change the quality of the resulting 3D seismic images.

Finally, we propose that the I/O problems of Petascale and Exascale computing are larger than their compute problems, and that compression can solve, or at least significantly reduce, these I/O problems. We would like NCAR researchers who face HPC and multi-core I/O bottlenecks to try lossy floating-point compression, at speeds exceeding 300 Mfloats/sec on a single core, to validate that their results remain acceptable while being achieved more quickly because compression reduced those bottlenecks.
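The abstract does not describe the speaker's compression algorithm, but the core idea of lossy floating-point compression can be sketched with a common, simple technique: zeroing low-order mantissa bits of each 32-bit float (a small, bounded loss of precision) so that a standard lossless coder can then shrink the byte stream much further. The function name and bit counts below are illustrative assumptions, not the method presented in the talk.

```python
# Illustrative sketch only (not the speaker's algorithm): make float32 data
# more compressible by zeroing low-order mantissa bits, then entropy-code it.
import math
import struct
import zlib

def truncate_mantissa(values, keep_bits=12):
    """Keep only the top `keep_bits` of each float32's 23-bit mantissa.

    Zeroing the remaining low bits bounds the relative error at roughly
    2**-(keep_bits) while making the byte stream far easier to compress.
    """
    mask = 0xFFFFFFFF & ~((1 << (23 - keep_bits)) - 1)
    out = []
    for v in values:
        bits = struct.unpack("<I", struct.pack("<f", v))[0]
        out.append(struct.unpack("<f", struct.pack("<I", bits & mask))[0])
    return out

# Smooth synthetic signal, standing in for typical scientific field data.
signal = [math.sin(i / 50.0) for i in range(10000)]
lossy = truncate_mantissa(signal, keep_bits=12)

raw = struct.pack("<%df" % len(signal), *signal)
packed = struct.pack("<%df" % len(lossy), *lossy)

# Compare compression ratios: lossless zlib alone vs. zlib after truncation.
ratio_lossless = len(raw) / len(zlib.compress(raw))
ratio_lossy = len(raw) / len(zlib.compress(packed))
print("lossless %.2f:1, lossy %.2f:1" % (ratio_lossless, ratio_lossy))
```

Running this shows the truncated stream compressing noticeably better than the original, while every sample stays within a small, predictable error bound; the trade-off between `keep_bits` and compression ratio is the knob a real codec would tune.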
Supercomputing Has I/O Problems That Compression Can Solve