In a small-device world, bigger may still be better

by Angela Herring, April 19, 2012

In the early days, a standard computer could be as large as a single-story house. Over the last several decades, many development efforts have focused on shrinking computers for use in the home and, eventually, anywhere in the world: on the train, in a cave, you name it. That is, if you want to use standard computer-based tools, like the Internet or iTunes.

Today's tiny devices can crunch lots of data pretty quickly, but what if "lots of data" means tens or hundreds of terabytes or more, amounts that would take a typical PC days or weeks to process? For that we need supercomputers, which are still big and expensive.

Peter Desnoyers, assistant professor in the College of Computer and Information Science, recently received a CAREER award from the National Science Foundation to explore solid-state drives, which are data-storage devices that use flash memory, as new computational tools. If successful, these devices could revolutionize the industry by making large-scale computation possible for the masses.

Flash was originally designed to replace hard drives as a faster data-storage method. "It is somewhat faster for large files than hard drives," Desnoyers said. But more important, it is "far more nimble, able to switch from one small file to another at electronic speeds while a hard drive must wait for mechanical parts to move."

The only problem is that flash came too late. Over the last several decades, computer scientists have optimized software to run on hard drives; anything that would run better on flash has yet to be designed. "We've stopped trying to do anything that involves complex data structures outside of the computer's memory," Desnoyers said. "We've stopped trying to do the things that flash is best at."

Before computer scientists can start designing new uses for flash, they must first understand how it behaves.
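Desnoyers' point about nimbleness can be made concrete with a rough back-of-the-envelope model. The latencies and transfer rate below are illustrative assumptions, not figures from the article: a hard drive pays a mechanical seek penalty on every random access, while flash answers at electronic speeds.

```python
# Toy model (assumed numbers): time to read many small files when each
# access costs a mechanical seek (hard drive) vs. an electronic lookup (flash).
SEEK_MS = 10.0       # assumed average HDD seek + rotational delay
FLASH_MS = 0.1       # assumed flash random-read latency
TRANSFER_MBPS = 100  # assumed sequential transfer rate for both devices

def read_time_ms(num_files, file_kb, access_ms):
    """Total time: one access penalty per file plus raw transfer time."""
    transfer_ms = num_files * file_kb / 1024 / TRANSFER_MBPS * 1000
    return num_files * access_ms + transfer_ms

hdd = read_time_ms(10_000, 4, SEEK_MS)
ssd = read_time_ms(10_000, 4, FLASH_MS)
print(f"HDD: {hdd / 1000:.1f} s, flash: {ssd / 1000:.1f} s")
```

Under these assumptions the hard drive spends almost all of its time waiting for the heads to move, which is why workloads built from many small files favor flash so heavily.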
In particular, Desnoyers' team is looking at fragmentation, in which creating and deleting files over time leaves a storage system randomly arranged. Hard drives, Desnoyers explained, slow down but continue to work as they become fragmented. Flash, by contrast, must constantly defragment in order to work at all: like a sliding-tile puzzle, it must continually shuffle blocks of data into unoccupied areas in order to clear more space. This process slows the drive and eventually shortens its lifetime.

"We're trying to understand it so we can design better algorithms to deal with it," Desnoyers said.

In addition to making personal computers more powerful, solid-state storage devices could also extend the power of supercomputers beyond their current capacity. Desnoyers' team is working with Oak Ridge National Laboratory to explore ways of making that possible.

Still, Desnoyers isn't convinced that flash is the future of computing. "Disk is getting bigger and cheaper faster than flash is," he said. "For flash to become really widespread, we need to develop new approaches to make it worth the price — it has to enable us to do things with computers that we couldn't do before."
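The shuffling Desnoyers describes can be sketched with a toy simulation. This is an illustrative model under assumed parameters, not the team's actual research code: flash can only erase storage in large blocks, so before a block can be reclaimed, any still-live pages in it must be copied elsewhere, and the fuller the drive, the more copying each erase requires.

```python
import random

# Toy model (assumed geometry): flash erases whole blocks, so reclaiming
# space means first copying any still-live pages out of the victim block.
# Those extra copies are the overhead that slows the drive and wears it out.
PAGES_PER_BLOCK = 64  # assumed pages per erase block

def gc_copies(live_fraction, blocks=1000, seed=0):
    """Average number of pages copied per block erased, at a given fill level."""
    rng = random.Random(seed)
    copied = 0
    for _ in range(blocks):
        # Each page in the victim block is live with probability live_fraction.
        copied += sum(rng.random() < live_fraction
                      for _ in range(PAGES_PER_BLOCK))
    return copied / blocks

for frac in (0.25, 0.5, 0.9):
    print(f"{frac:.0%} full: ~{gc_copies(frac):.0f} pages copied per erase")
```

The trend the model shows is the point: a nearly full drive copies far more data per reclaimed block than a half-empty one, which is one reason solid-state drives degrade as they fill up and fragment.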