Jian Fang, Jianyu Chen, et al.
FCCM 2019
The newly proposed posit number format uses a significantly different approach to represent floating point numbers. This paper introduces a framework for posit arithmetic in reconfigurable logic that maintains full precision in intermediate results. We present the design and implementation of a Level-1 BLAS arithmetic accelerator on posit vectors leveraging Apache Arrow. For a vector dot product with an input vector length of 10^6 elements, a hardware speedup of approximately 10^4 is achieved compared to posit software emulation. For 32-bit numbers, the decimal accuracy of the posit dot product results improves by one decimal of accuracy on average compared to a software implementation, and by two extra decimals compared to the IEEE 754 format. We also present a posit-based implementation of pair-HMM. In this case, the hardware speedup vs. a posit-based software implementation ranges from 10^5 to 10^6. With appropriate initial scaling constants, accuracy improves on an implementation based on IEEE 754.
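The abstract reports results in "decimals of accuracy". A minimal sketch of how this metric is conventionally computed in the posit literature (the function name and exact formula are assumptions here, following Gustafson's decimal-accuracy definition, not taken from the paper itself):

```python
import math

def decimal_accuracy(computed: float, exact: float) -> float:
    """Decimal digits of accuracy of `computed` relative to `exact`.

    Defined in the posit literature as -log10(|log10(computed / exact)|):
    the larger the value, the more correct decimal digits the result has.
    An exact match yields infinity. This is an illustrative sketch, not
    code from the paper.
    """
    if computed == exact:
        return math.inf
    return -math.log10(abs(math.log10(computed / exact)))
```

For example, a result off by a full decade (computed = 10 * exact) scores 0 decimals of accuracy, while results closer to the exact value score higher; a "one decimal" improvement, as reported above, means this score rises by roughly 1 on average.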
Tanveer Ahmad, Nauman Ahmed, et al.
BMC Genomics
Jianyu Chen, Zaid Al-Ars, et al.
CoNGA 2018