2016 Poster Sessions: Automatic Generation of Efficient Accelerator Designs for Reconfigurable Hardware

Student Name: David Koeplinger
Advisor: Oyekunle Olukotun
Research Areas: Computer Systems
Abstract:
Acceleration in the form of customized datapaths offers large performance and energy improvements over general-purpose processors. FPGAs in particular are gaining popularity for use as application-specific accelerators. Unfortunately, existing design tools for targeting FPGAs have inadequate support for high-level programming, resource estimation, and quick, automatic design space exploration. We describe a design framework that addresses these challenges, including a new representation for hardware using parameterized templates that captures locality and data parallelism at multiple levels of nesting. This representation is designed to be automatically generated from high-level parallel patterns like map and reduce. We describe the framework's hybrid area estimation technique, which uses template-level models and design-level artificial neural networks to account for effects from low-level place-and-route tools, including routing, register and block RAM duplication, and LUT packing. We show that our estimation capabilities can be used to rapidly explore a large space of designs across tile sizes, parallelization factors, and optional coarse-grained pipelining, all at multiple loop levels. We show that estimates average 5% error for logic resources and 6.8% error for runtimes, and are 36 to 854 times faster than a commercial high-level synthesis tool.
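The exploration loop described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the framework's actual models: it enumerates tile sizes, parallelization factors, and an optional pipelining flag, scores each design point with toy area and runtime estimators (stand-ins for the template-level models and neural networks), prunes points that exceed an assumed logic budget, and keeps the fastest feasible design. All constants and model formulas here are invented for illustration.

```python
# Illustrative sketch of estimate-driven design space exploration.
# The cost models and the FPGA_LUTS budget are toy assumptions, not
# the paper's actual template-level or neural-network estimators.
from itertools import product

FPGA_LUTS = 200_000  # assumed logic-resource budget (hypothetical)

def estimate_area(tile, par, pipelined):
    # Toy area model: logic grows with parallel lanes and tile buffering,
    # plus a fixed overhead for pipeline registers when pipelining is on.
    return 500 * par + 2 * tile + (3_000 if pipelined else 0)

def estimate_runtime(n, tile, par, pipelined):
    # Toy runtime model: cycles per tile shrink with parallelism, and
    # coarse-grained pipelining overlaps successive tiles.
    tiles = -(-n // tile)                        # ceiling division
    cycles_per_tile = -(-tile // par)            # ceiling division
    return tiles * cycles_per_tile * (0.6 if pipelined else 1.0)

def explore(n):
    """Return (runtime, tile, par, pipelined, area) of the fastest
    design that fits within the logic budget."""
    best = None
    for tile, par, pipe in product([64, 128, 256, 512],
                                   [1, 2, 4, 8, 16],
                                   [False, True]):
        area = estimate_area(tile, par, pipe)
        if area > FPGA_LUTS:
            continue                             # prune infeasible designs
        runtime = estimate_runtime(n, tile, par, pipe)
        if best is None or runtime < best[0]:
            best = (runtime, tile, par, pipe, area)
    return best
```

Because each point is scored by fast analytical estimates rather than full synthesis, a space of hundreds or thousands of such configurations can be swept in seconds, which is the advantage the abstract quantifies against a commercial high-level synthesis tool.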

Bio:
David Koeplinger is a third-year PhD student advised by Professor Kunle Olukotun. His research interests are in domain-specific languages and the optimization and code generation of high-level programs for heterogeneous hardware targets, with a current focus on reconfigurable architectures. David received his B.S. in Electrical Engineering in 2013 from the University of Delaware and his M.S. in Electrical Engineering in 2015 from Stanford.