<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=utf-8"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
        {font-family:Helvetica;
        panose-1:2 11 6 4 2 2 2 2 2 4;}
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
        {font-family:"Segoe UI";
        panose-1:2 11 5 2 4 2 4 2 2 3;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0cm;
        margin-bottom:.0001pt;
        font-size:12.0pt;
        font-family:"Times New Roman",serif;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:#0563C1;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:#954F72;
        text-decoration:underline;}
p.msonormal0, li.msonormal0, div.msonormal0
        {mso-style-name:msonormal;
        mso-margin-top-alt:auto;
        margin-right:0cm;
        mso-margin-bottom-alt:auto;
        margin-left:0cm;
        font-size:12.0pt;
        font-family:"Times New Roman",serif;}
span.EstiloCorreioEletrnico18
        {mso-style-type:personal;
        font-family:"Calibri",sans-serif;
        color:#1F497D;}
span.EstiloCorreioEletrnico19
        {mso-style-type:personal-compose;
        font-family:"Calibri",sans-serif;
        color:windowtext;}
.MsoChpDefault
        {mso-style-type:export-only;
        font-size:10.0pt;}
@page WordSection1
        {size:612.0pt 792.0pt;
        margin:70.85pt 3.0cm 70.85pt 3.0cm;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]--></head><body lang=EN link="#0563C1" vlink="#954F72"><div class=WordSection1><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D;mso-fareast-language:EN-US'>KIND REMINDER</span><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D;mso-fareast-language:EN-US'><o:p></o:p></span></p><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D;mso-fareast-language:EN-US'><o:p> </o:p></span></p><div><div style='border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0cm 0cm 0cm'><p class=MsoNormal><b><span style='font-size:11.0pt;font-family:"Calibri",sans-serif'>From:</span></b><span style='font-size:11.0pt;font-family:"Calibri",sans-serif'> allusers-bounces@lists.isr.uc.pt [mailto:allusers-bounces@lists.isr.uc.pt] <b>On behalf of </b>Jorge Batista<br><b>Sent:</b> 2 April 2024 16:43<br><b>To:</b> docentes@deec.uc.pt; allusers@isr.uc.pt; alunos@deec.uc.pt<br><b>Subject:</b> [AllUsers-ISR] Talk by Dr. João Filipe Henriques - ISR Amphitheatre, 4 April, 16:00<o:p></o:p></span></p></div></div><p class=MsoNormal><o:p> </o:p></p><div><p class=MsoNormal><o:p> </o:p></p></div><div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>Good afternoon, colleagues,<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>Good afternoon to the DEEC student community,<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'> <o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>I would like to invite you to a talk by Prof. João Filipe Henriques, a former MSc and PhD student of DEEC, who is visiting Coimbra. 
<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>Dr. João Henriques is a Research Fellow of the Royal Academy of Engineering and a researcher at the Visual Geometry Group at the University of Oxford.<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>Talk topic: "<b>Learning Location-Consistent Visual Features</b>".<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'> <o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>Date: <b>4 April</b> (Thursday)<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>Time: starts at <b>16:00</b><o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>Duration: approximately 60 min<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>Venue: ISR Amphitheatre<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>More details: see below.<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'> <o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>Best regards<o:p></o:p></span></p></div><div><p 
class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'>Jorge Batista<o:p></o:p></span></p></div></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-size:10.5pt;font-family:"Helvetica",sans-serif;color:black'>Jorge Manuel M.C. Pereira Batista<br>Associate Professor w/ Habilitation<br>ISR Senior Researcher<br>DEEC/FCTUC<br>University of Coimbra<br>Coimbra, PORTUGAL</span><span style='font-family:"Calibri",sans-serif;color:black'><o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-size:9.0pt;font-family:"Helvetica",sans-serif;color:black'>_________________________________________________________________________________________________</span><span style='font-size:11.0pt;font-family:"Calibri",sans-serif;color:black'><o:p></o:p></span></p></div><div><p class=MsoNormal><o:p> </o:p></p></div><div><p class=MsoNormal><o:p> </o:p></p></div><div><p class=MsoNormal><o:p> </o:p></p></div><div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'>Dr. João Henriques is a Research Fellow of the Royal Academy of Engineering, working at the Visual Geometry Group (VGG) at the University of Oxford. His research focuses on computer vision and deep learning, with the goal of making machines more perceptive, intelligent and capable of helping people. He created the KCF and SiameseFC visual object trackers, which won the highly competitive VOT Challenge twice, and are widely deployed in consumer hardware, from Facebook apps to commercial drones. 
His research spans many topics: robot mapping and navigation, including reinforcement learning and 3D geometry; multi-agent cooperation and "friendly" AI; and various forms of learning, including self-supervised, causal, and meta-learning, as well as optimisation theory.<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'>In this talk I will discuss recent work on learning location-consistent visual features, and, time permitting, will also briefly discuss recent work on robotics.<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'>"LoCo: Memory-Efficient Learning of Location-Consistent Features"<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'>Image feature extractors are rendered substantially more useful if different views of the same 3D location yield similar features. 
A feature extractor that achieves this goal even under significant viewpoint changes must recognise not just the semantic categories present in a scene, but also understand how different objects relate to each other in three dimensions.<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'>We present a method for memory-efficient learning of location-consistent features that reformulates and approximates the smooth average precision objective. This novel loss function enables improvements in memory efficiency by a factor of 2000, mitigating a key bottleneck of previous methods and allowing much larger models to be trained with the same computational resources.<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'>"Rapid Motor Adaptation for Robotic Manipulator Arms"<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'>Developing generalizable manipulation skills is a core challenge in embodied AI. This includes generalization across diverse task configurations, encompassing variations in object shape, density, friction coefficient, and external disturbances such as forces applied to the robot. 
<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'>Rapid Motor Adaptation (RMA) offers a promising solution to this challenge.<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'>It posits that essential hidden variables influencing an agent's task performance, such as object mass and shape, can be effectively inferred from the agent's action and proprioceptive history. <o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'>Drawing inspiration from RMA in locomotion and in-hand rotation, we use depth perception to develop agents tailored for rapid motor adaptation in a variety of manipulation tasks.<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-family:"Segoe UI",sans-serif;color:black'>We evaluated our agents on four challenging tasks from the ManiSkill2 benchmark, namely pick-and-place operations with hundreds of objects from the YCB and EGAD datasets, peg insertion with precise position and orientation, and operating a variety of faucets and handles, with customized environment variations.<o:p></o:p></span></p></div></div><div><p class=MsoNormal><o:p> </o:p></p></div></div></body></html>