Approved
Implementation of a Deep Learning Inference Accelerator on the FPGA
Shenbagaraman Ramakrishnan (2017)
Start
2018-11-01
Presentation
2019-04-01
Location:
Completed:
2020-03-26
Thesis report:
Abstract
Today, the main challenge for high-end technology devices is to understand the world as humans do and react appropriately. By understanding their environment, such devices can enable a perceptive and interactive ecosystem. Perceptive devices should be able to parse a scene and understand the relationships between the objects in it. They can thus apprehend the world around them and interact verbally with their users, just as human beings do. With massive amounts of computational power, these machines can recognize objects, translate speech, and perform a myriad of tasks in real time. Artificial intelligence is thus finally getting smart. Recently, convolutional neural networks have emerged as the state of the art for classification and detection algorithms. However, these algorithms are computation- and memory-intensive, making them difficult to deploy in wearable devices, embedded systems such as smartphones, or IoT devices. Local processing on such embedded devices is increasingly preferred due to privacy and security concerns as well as latency requirements, so their energy consumption must be reduced drastically. As a result, designing an energy-efficient DNN processor is critical to realizing AI applications on embedded devices.
Supervisor: Liang Liu (EIT)
Examiner: Erik Larsson (EIT)