Zhiqiang (Walkie) Que

Huxley Building, Imperial College London, London, UK, SW7 2BZ
Email: z.que [ at ] imperial.ac.uk

I am a research assistant pursuing a Ph.D. degree under the supervision of Prof. Wayne Luk in the Department of Computing at Imperial College London. I received my B.S. in Microelectronics and M.S. in Computer Science from Shanghai Jiao Tong University (SJTU) in 2008 and 2011, respectively. From 2011 to 2015, I worked on microarchitecture design and verification of ARM-compliant CPUs at Marvell Semiconductor.
I have served as a peer reviewer for many conferences and journals, including FCCM, FPL, FPT, IEEE TCAS, JPDC, and IEICE. Our research has received best paper awards at SmartCloud'18 and CSE'10, as well as best paper nominations at FCCM'20, ASAP'19, FPT'19, and FPT'18.
My research interests include computer architecture, embedded systems, high-performance computing and computer-aided design (CAD) tools for hardware design optimization.

Some News

  • October 2021 - To appear at FPT'21: Optimizing Bayesian Recurrent Neural Networks on an FPGA-based Accelerator (co-first author). This paper is about accelerating Bayesian LSTMs via a co-design framework.
    [PDF]

  • September 2021 - Gave a talk to the FastML group on initiation interval (II) balancing for multi-layer LSTM acceleration on FPGAs.
    [Slides]

  • June 2021 - ASAP'21: Accelerating Recurrent Neural Networks for Gravitational Wave Experiments. This paper presents novel reconfigurable architectures with balanced IIs that reduce the latency of a multi-layer LSTM-based autoencoder used for detecting gravitational waves; see the sketch after this item.
    [PDF] [Github]
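
    As a rough illustration of the II-balancing idea (my sketch in plain C++, not code from the paper; the cost model below, II ≈ multiply count / allocated multipliers, is an assumption): allocating a fixed multiplier budget in proportion to each layer's work makes the per-layer IIs roughly equal, so no layer stalls its neighbours.

      #include <cstdio>
      #include <vector>

      struct Layer { int in_dim, hidden; };

      // LSTM multiply count per timestep: 4 gates, each (in_dim + hidden) * hidden MACs.
      long ops(const Layer& l) { return 4L * (l.in_dim + l.hidden) * l.hidden; }

      int main() {
          std::vector<Layer> net = {{64, 32}, {32, 16}, {16, 8}};  // assumed sizes
          long total = 0;
          for (const Layer& l : net) total += ops(l);

          const int budget = 256;  // total multipliers available (assumed)
          for (size_t i = 0; i < net.size(); ++i) {
              // Give each layer multipliers in proportion to its work, so that
              // ops/multipliers -- the per-layer II -- comes out roughly equal.
              int m = (int)((double)budget * ops(net[i]) / total);
              if (m < 1) m = 1;
              std::printf("layer %zu: ops=%ld mults=%d II~=%ld\n",
                          i, ops(net[i]), m, ops(net[i]) / m);
          }
      }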

  • May 2021 - Journal of Systems Architecture (JSA): In-Circuit Tuning of Deep Learning Designs. An extension of our ICCAD'19 paper on in-circuit tuning.
    [PDF]

  • March 2021 - FCCM'21: Instrumentation for Live On-Chip Debug of Machine Learning Training on FPGAs. Co-author.
    [PDF]

  • November 2020 - FPT'20: A Reconfigurable Multithreaded Accelerator for Recurrent Neural Networks, the 2020 International Conference on Field-Programmable Technology. Acceptance rate: 24.7%.
    [VIDEO] [PDF]

  • October 2020 - ICCD'20 short paper: Optimizing FPGA-based CNN Accelerator using Differentiable Neural Architecture Search, the 38th IEEE International Conference on Computer Design. Co-author.

  • July 2020 - Journal of Signal Processing Systems (JSPS) paper: Mapping Large LSTMs to FPGAs with Weight Reuse. An extension of our ASAP'19 paper on reusing LSTM weights with a blocking & batching strategy.
    [Link] [PDF]

  • May 2020 - FCCM'20 paper: Optimizing Reconfigurable Recurrent Neural Networks. Conventional matrix-vector multiplication (MVM) designs for RNNs operate row-wise, which stalls the system due to data dependencies. To eliminate these dependencies, this paper proposes column-wise MVM for RNNs; a sketch follows this item.
    [Link] [PDF]
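
    A minimal sketch of the two MVM orderings (illustrative C++, not the paper's implementation; sizes and names are assumed). Both loops compute the same result; the difference is that the column order consumes each x[j] as soon as it exists, so in an RNN the elements of h_{t-1} can be used one by one instead of waiting for the whole vector.

      #include <cstdio>

      const int R = 4, C = 4;

      // Row-wise: each y[i] is a full dot product, so nothing can start until
      // every x[j] (including the entire previous hidden state) is available.
      void mvm_row(const float W[R][C], const float x[C], float y[R]) {
          for (int i = 0; i < R; ++i) {
              float acc = 0.0f;
              for (int j = 0; j < C; ++j) acc += W[i][j] * x[j];
              y[i] = acc;
          }
      }

      // Column-wise: column j is processed as soon as x[j] is produced, and the
      // per-row updates are independent, removing the stall described above.
      void mvm_col(const float W[R][C], const float x[C], float y[R]) {
          for (int i = 0; i < R; ++i) y[i] = 0.0f;
          for (int j = 0; j < C; ++j)           // outer loop over columns
              for (int i = 0; i < R; ++i)       // independent accumulations
                  y[i] += W[i][j] * x[j];
      }

      int main() {
          float W[R][C] = {{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}};
          float x[C] = {1, 0.5f, -1, 2}, y1[R], y2[R];
          mvm_row(W, x, y1);
          mvm_col(W, x, y2);
          for (int i = 0; i < R; ++i) std::printf("%g %g\n", y1[i], y2[i]);
      }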

  • May 2020 - Best paper nomination at FCCM'20: High-Throughput Convolutional Neural Network on an FPGA by Customized JPEG Compression. This paper proposes a customized JPEG+CNN design to address the data-transfer bandwidth problem for cloud-based FPGAs; a back-of-envelope illustration follows this item.
    [Link] [PDF]
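
    A back-of-envelope version of the bandwidth argument (all numbers below are my assumptions, not figures from the paper): sending JPEG-compressed frames and decoding them on the FPGA multiplies the frames deliverable per second by roughly the compression ratio.

      #include <cstdio>

      int main() {
          // All values are assumptions for illustration only.
          const double link_bytes_per_s = 8.0e9;            // usable host-FPGA bandwidth
          const double raw_frame_bytes  = 224.0 * 224 * 3;  // one raw RGB frame
          const double jpeg_ratio       = 10.0;             // assumed compression ratio

          double fps_raw  = link_bytes_per_s / raw_frame_bytes;
          double fps_jpeg = fps_raw * jpeg_ratio;           // JPEG decoded on-chip
          std::printf("raw input: %.0f frames/s; JPEG input: %.0f frames/s\n",
                      fps_raw, fps_jpeg);
      }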

  • December 2019 - FPT'19 paper: Real-time Anomaly Detection for Flight Testing using AutoEncoder and LSTM. This work proposes a novel timestep (TS) buffer that avoids redundant LSTM gate calculations and reduces system latency; see the sketch after this item.
    [Link] [PDF]
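
    A minimal sketch of a timestep (TS) buffer (my reading, not the paper's code: I assume the cached quantity is the input-dependent part of the gate pre-activations, Wx * x_t, which does not involve the hidden state and is therefore identical across overlapping sliding windows):

      #include <cstdio>
      #include <map>
      #include <vector>

      using Vec = std::vector<float>;

      // Stand-in for the input part of the four gate pre-activations, Wx * x_t.
      Vec input_gate_preact(const Vec& x) {
          Vec z(x.size());
          for (size_t i = 0; i < x.size(); ++i) z[i] = 2.0f * x[i];  // dummy weights
          return z;
      }

      int main() {
          std::map<int, Vec> ts_buffer;  // timestep -> cached Wx * x_t
          std::vector<Vec> stream = {{1, 2}, {3, 4}, {5, 6}, {7, 8}};

          const int win = 3;  // sliding-window length (assumed)
          for (int start = 0; start + win <= (int)stream.size(); ++start) {
              for (int t = start; t < start + win; ++t) {
                  bool hit = ts_buffer.count(t) > 0;
                  if (!hit) ts_buffer.emplace(t, input_gate_preact(stream[t]));
                  // ...combine ts_buffer[t] with the recurrent part Wh * h_{t-1}...
                  std::printf("window %d, timestep %d: %s\n",
                              start, t, hit ? "buffer hit" : "computed");
              }
          }
      }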

  • November 2019 - ICCAD'19 paper: Towards In-Circuit Tuning of Deep Learning Designs.
    [Link] [PDF]

  • July 2019 - ASAP'19 paper: Efficient Weight Reuse for Large LSTMs. This paper proposes a blocking & batching strategy to reuse LSTM weights; a sketch follows this item.
    [Link] [PDF]
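
    A minimal sketch of the blocking & batching idea (illustrative C++ with assumed sizes, not the paper's implementation): each weight block is fetched once and applied to every sample in the batch before the next block is loaded, so off-chip weight traffic is amortized over the whole batch.

      #include <cstdio>
      #include <vector>

      int main() {
          const int ROWS = 8, COLS = 8, BLK = 4, BATCH = 3;   // assumed sizes
          std::vector<float> W(ROWS * COLS, 0.5f);            // "off-chip" weights
          std::vector<float> X(BATCH * COLS, 1.0f);           // batch of inputs
          std::vector<float> Y(BATCH * ROWS, 0.0f);

          int loads = 0;
          for (int rb = 0; rb < ROWS; rb += BLK)
              for (int cb = 0; cb < COLS; cb += BLK) {
                  ++loads;                          // one block fetch from memory
                  for (int b = 0; b < BATCH; ++b)   // reuse the block across the batch
                      for (int i = rb; i < rb + BLK; ++i)
                          for (int j = cb; j < cb + BLK; ++j)
                              Y[b * ROWS + i] += W[i * COLS + j] * X[b * COLS + j];
              }
          // Each block is fetched once instead of once per sample.
          std::printf("block loads: %d (vs %d without batching)\n",
                      loads, loads * BATCH);
      }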