Seminars of the Máster en Ciencia y Tecnología Informática

Generative AI for Virtual Embodied Conversational Agents (Giulio Jacucci)

Title: Generative AI for Virtual Embodied Conversational Agents

Speaker: Giulio Jacucci, University of Helsinki

Sessions:

  • March 5, 10:00-14:00
  • March 6, 10:00-13:00
  • March 7, 10:00-13:00

Place: Classroom 2.0.C03

Organizer: Andrea Bellucci

Language: English

Seminar overview:

The workshop is designed to provide a comprehensive understanding of Embodied Conversational Agents (ECAs) and their integration into Virtual Reality (VR).

  • Day 1 introduces participants to the fundamentals of VR and ECAs, covering hardware, development tools, and a practical "Hello World" exercise to familiarize them with basic setup and development.
  • Day 2 dives into the theoretical foundations of ECAs, including key concepts such as embodiment, social presence, user experience (UX), ethics, and essential technologies like natural language processing (NLP) and computer vision. This day also includes a hands-on breakdown of ECA components.
  • Day 3 focuses on building a basic ECA, emphasizing scene understanding, decision-making, and multimodal interactions (a minimal component sketch is given after this list). The teaching methods combine lectures to introduce theoretical concepts with hands-on workshops to develop practical skills.
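
The sketch below is a minimal, purely illustrative Python loop showing how such ECA components could fit together (perception, language generation, and embodied output). The three helper functions are hypothetical placeholders standing in for speech recognition, a generative language model, and text-to-speech with avatar animation; they are not part of any specific toolkit covered in the workshop.

    # Minimal illustrative ECA turn: perceive -> decide -> embody.
    # All three component functions are hypothetical stand-ins, not a real API.

    def transcribe(audio: bytes) -> str:
        # Stand-in for a speech-recognition component.
        return "hello, agent"

    def generate_reply(history: list) -> str:
        # Stand-in for a generative language model producing the agent's reply.
        return "Hello! How can I help you in this scene?"

    def speak(text: str) -> None:
        # Stand-in for text-to-speech plus avatar lip-sync and gesture in VR.
        print(f"[avatar says] {text}")

    def dialogue_turn(audio: bytes, history: list) -> list:
        user_text = transcribe(audio)                    # perception
        history.append({"role": "user", "content": user_text})
        reply = generate_reply(history)                  # decision-making / NLP
        history.append({"role": "assistant", "content": reply})
        speak(reply)                                     # embodiment
        return history

    history = dialogue_turn(b"", [])   # one conversational turn with dummy audio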

Short bio: 

Giulio Jacucci, PhD, is Professor at the Department of Computer Science of the University of Helsinki. He was Professor at the Department of Design of Aalto University in 2009-2010. His research has investigated mobile group media, public multi-touch displays, augmented reality, and multimodal interaction, with contributions including intent modelling and visualization for search and recommendation, end-user programming for ubiquitous computing, and sensor-based behavior change. He has chaired ACM IUI and serves on its steering committee. His current research focuses on affective interaction in XR and context-aware conversational systems. He is a co-founder of several research spin-offs and co-author of two patents in the areas of information seeking and modular screens.

Learning Technologies for Robot Autonomy through Imitation and Reinforcement (Roberto Martin-Martin)

Title: Learning Technologies for Robot Autonomy through Imitation and Reinforcement

Speaker: Roberto Martin-Martin, University of Texas at Austin

Sessions: 

  • March 17, 11:00-14:00
  • March 18, 10:00-14:00
  • March 19, 11:00-14:00

Place: Classroom 2.0.C03

Organizer: Fernando Fernández

Language: English

Seminar overview:

In this seminar, we will cover some of the fundamentals of machine learning applied to robotics including techniques such as Imitation Learning (supervised and self-supervised), Reinforcement Learning and the use of Foundation Models. We will have a brief overview of the basics of those techniques and a deep dive into their application in real-world robotic tasks. The seminar will also include some fundamentals of computer vision and their application to robotics and robot learning.
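
As a concrete illustration of the simplest of these techniques, the sketch below shows imitation learning in its basic supervised form (behavioral cloning): a policy is fit by regression to demonstrated state-action pairs. The data and the linear policy are synthetic and for illustration only; they are not taken from the seminar material.

    import numpy as np

    # Behavioral cloning: fit a policy to expert (state, action) pairs by
    # supervised regression. States, actions, and the "expert" are synthetic.
    rng = np.random.default_rng(0)

    true_policy = np.array([[0.5, -1.0],
                            [2.0,  0.3],
                            [-0.7, 1.2]])               # 3 state dims -> 2 action dims
    states = rng.normal(size=(500, 3))                  # 500 demonstrated states
    actions = states @ true_policy + 0.01 * rng.normal(size=(500, 2))  # demonstrated actions

    # Least-squares fit of a linear policy: argmin_W ||states @ W - actions||^2
    learned_policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

    # The learned policy can then be queried on states the robot observes at run time.
    new_state = rng.normal(size=(1, 3))
    print("predicted action:", new_state @ learned_policy)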

Short bio:

Roberto Martin-Martin is Assistant Professor of Computer Science at the University of Texas at Austin. His research connects robotics, computer vision, and machine learning. He studies and develops novel AI algorithms that enable robots to perform tasks in uncontrolled human environments such as homes and offices. To that end, he creates novel decision-making solutions based on reinforcement learning, imitation learning, planning, and control, and explores topics in robot perception such as pose estimation and tracking, video prediction, and parsing.

Martin-Martin received his Ph.D. from the Berlin Institute of Technology (TUB) prior to a postdoctoral position at the Stanford Vision and Learning Lab under the supervision of Fei-Fei Li and Silvio Savarese. His work has been recognized with the RSS Best Systems Paper Award, selection as an RSS Pioneer, victory in the Amazon Picking Challenge, and Best Paper nominations at ICRA and IROS. He is chair of the IEEE Technical Committee on Mobile Manipulation.

Introduction to SysML v2: foundations and applications (Ed Seidewitz)

Title: Introduction to SysML v2: foundations and applications

Speaker: Ed Seidewitz

Affiliation: Model Driven Solutions

Sessions:

  • May 5, 9:00-13:00
  • May 6, 9:00-13:00
  • May 7, 9:00-11:00

Place: Classroom 2.0.C03

Organizer: Ana Granados

Language: English

Seminar overview:

This seminar provides an introduction to the Systems Modeling Language (SysML), version 2. It covers the foundational concepts of the language, including its basic syntax and its innovative approach to extendable, library-based semantics. Participants will also learn how to apply SysML v2 for key systems modeling tasks and how the new language resolves practical limitations in SysML v1.

Short bio:

Ed Seidewitz is Chief Technology Officer at Model Driven Solutions, Inc., a long-time provider of systems modeling and semantic technology services. Mr. Seidewitz has an extensive background in software and systems technologies and leading expertise in modeling languages and methodologies. He has 40 years of professional experience with the modeling, architecture, and development of systems spanning diverse domains, including aerospace, finance, acquisition, and health care. He has been active with the Object Management Group (OMG) for 25 years, including involvement in Unified Modeling Language (UML) and Systems Modeling Language (SysML) standardization. He was the primary author of the Foundational UML (fUML) and Action Language for Foundational UML (Alf) specifications and maintains open-source reference implementations of both. He was co-leader of the SysML v2 Submission Team and is currently co-chair of the OMG Systems Modeling Community and of the Kernel Modeling Language (KerML) and SysML v2 Finalization Task Forces.