FYI

----- Forwarded message -----
From: "las" <las@inesc-id.pt>
To: deec-docentes@deec.ist.utl.pt
Cc: "Gabriel Falcão" <gff@deec.uc.pt>, "Nuno Paulino" <nunop@uninova.pt>
Sent: Monday, 15 November 2021, 15:04:06
Subject: 19/11, 2 p.m., EA3 [DL-IST/DLS-INESC-ID/DLP-IEEE]: Keshab K. Parhi on "Accelerator Architectures for Deep Neural Networks: Inference and Training"

Dear colleagues,

On Friday, 19 November, at 2 p.m., a presentation by Prof. Keshab K. Parhi will take place in amphitheatre EA3, Torre Norte.

If you cannot be present at the IST Alameda campus, you can attend the talk remotely. If you register, you will receive the link to the Zoom room the day before:

https://forms.gle/BouGfJhQJiPbzHei9

Title: *Accelerator Architectures for Deep Neural Networks: Inference and Training*

Prof. Keshab K. Parhi: Distinguished McKnight University Professor and Edgar F. Johnson Professor in the Department of Electrical and Computer Engineering at the University of Minnesota (http://people.ece.umn.edu/~parhi), Fellow of the IEEE, the ACM, the AAAS, and the National Academy of Inventors.

The abstract of the presentation and Prof. Keshab K. Parhi's bio follow below.

Best regards,
Leonel Sousa

--------------------------------------------------------------------
Abstract: Machine learning and data analytics continue to expand the fourth industrial revolution and affect many aspects of our lives. This talk will explore hardware accelerator architectures for deep neural networks (DNNs). I will present a brief review of the history of neural networks.
I will talk about our recent work on Perm-DNN, based on permuted-diagonal interconnections in deep convolutional neural networks, and how structured sparsity can reduce the energy consumption associated with memory access in these systems (MICRO-2018). I will then talk about reducing latency and memory access in accelerator architectures for training DNNs by gradient interleaving using systolic arrays (ISCAS-2020). Finally, I will present our recent work on LayerPipe, an approach for training deep neural networks that enables simultaneous intra-layer and inter-layer pipelining (ICCAD-2021). This approach can increase processor utilization efficiency and speed up training without increasing communication costs.

Bio: Keshab K. Parhi received the B.Tech. degree from the Indian Institute of Technology (IIT), Kharagpur, in 1982, the M.S.E.E. degree from the University of Pennsylvania, Philadelphia, in 1984, and the Ph.D. degree from the University of California, Berkeley, in 1988. He has been with the University of Minnesota, Minneapolis, since 1988, where he is currently Distinguished McKnight University Professor and Edgar F. Johnson Professor of Electronic Communication in the Department of Electrical and Computer Engineering. He has published over 650 papers, is the inventor of 32 patents, has authored the textbook VLSI Digital Signal Processing Systems (Wiley, 1999), and has coedited the reference book Digital Signal Processing for Multimedia Systems (Marcel Dekker, 1999). His current research addresses VLSI architecture design of machine learning systems, hardware security, data-driven neuroscience, and molecular/DNA computing. Dr. Parhi is the recipient of numerous awards, including the 2017 Mac Van Valkenburg Award and the 2012 Charles A. Desoer Technical Achievement Award from the IEEE Circuits and Systems Society, the 2004 F. E. Terman Award from the American Society for Engineering Education, and the 2003 IEEE Kiyo Tomiyasu Technical Field Award. He served as Editor-in-Chief of the IEEE Transactions on Circuits and Systems, Part I, during 2004 and 2005. He is a Fellow of the IEEE, ACM, AAAS, and the National Academy of Inventors.