SMR drives may be incorporated into the storage stack as drive-managed devices, through host-resident block translation layers, or via SMR-specific log-structured file systems (LFSs), at an engineering cost ranging from modest (drive-managed) to very large (LFS). The first generation of drive-managed SMR devices has shown significant performance deficiencies compared to conventional drives, but little is yet known about how well SMR can perform with better translation algorithms or tuned file systems.
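To make the translation-layer option concrete, the sketch below illustrates the core mechanism shared by drive-managed firmware and host-resident translation layers: random logical writes are redirected into strictly sequential appends within fixed-size zones, with a logical-to-physical map and a greedy cleaning pass that relocates valid blocks when free zones run out. This is a generic toy model under assumed parameters, not the proposal's (or NSTL's) actual design; the class and method names (`ToySTL`, `_clean`, etc.) are invented for illustration.

```python
class ToySTL:
    """Toy shingled translation layer: zones accept only sequential appends."""

    def __init__(self, num_zones=4, zone_blocks=8):
        self.zone_blocks = zone_blocks
        self.num_zones = num_zones
        self.map = {}                                 # LBA -> (zone, offset)
        self.zones = [[] for _ in range(num_zones)]   # per-zone append log; None = stale slot
        self.free = list(range(num_zones))            # zones available for appends
        self.open = self.free.pop(0)                  # zone currently accepting appends

    def write(self, lba):
        """Redirect a logical (over)write to a sequential append in the open zone."""
        old = self.map.get(lba)
        if old is not None:
            z, off = old
            self.zones[z][off] = None                 # invalidate the prior copy in place
        self._ensure_space()
        self.zones[self.open].append(lba)             # the LBA stands in for block data
        self.map[lba] = (self.open, len(self.zones[self.open]) - 1)

    def read(self, lba):
        z, off = self.map[lba]
        return self.zones[z][off]

    def _ensure_space(self):
        if len(self.zones[self.open]) < self.zone_blocks:
            return
        live = []
        if not self.free:
            live = self._clean()                      # reclaim a zone, remember its valid data
        self.open = self.free.pop(0)
        for lba in live:                              # copy valid blocks forward (cleaning I/O)
            self.write(lba)

    def _clean(self):
        # Greedy victim selection: the non-open zone with the fewest valid blocks,
        # so the least data must be relocated.
        candidates = [z for z in range(self.num_zones) if z != self.open]
        victim = min(candidates,
                     key=lambda z: sum(b is not None for b in self.zones[z]))
        live = [b for b in self.zones[victim] if b is not None]
        for lba in live:
            del self.map[lba]                         # remapped when relocated
        self.zones[victim] = []                       # "reset the write pointer" on the zone
        self.free.append(victim)
        return live
```

Even this toy exposes the design questions the proposal targets: cleaning cost depends directly on victim selection and data placement, which is why a translation layer with reprogrammable cleaning and placement policies is a useful research vehicle.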
This work proposes a combination of high-level (trace analysis and simulation) and low-level (in-kernel implementation and benchmarking) investigations of both translation layers and file systems, to determine how fast SMR can be on realistic workloads and at what cost: that is, whether good SMR performance requires a change of file system, or can be achieved via translation layers in the host or the device.
The investigation builds on the SMR and flash translation layer research performed by the PI at Northeastern over the past eight years; it uses novel software artifacts developed in the PI's lab (NSTL, an in-kernel translation layer with reprogrammable cleaning and placement algorithms) and leverages partnerships with key Linux file system developers.