Faraz Torabi
faraztrb at cs dot utexas dot edu

I am interested in reinforcement learning and imitation learning, particularly in how to use existing resources to train agents. I am currently focused on learning to imitate skills from raw video observations. I am also a part of the UT Austin Villa RoboCup 3D simulation team.

I am a fourth year PhD student at the University of Texas at Austin. Currently, I am a member of the Learning Agents Research Group (LARG) led by Peter Stone in the Computer Science Department.

Email  /  GitHub  /  LinkedIn

News

  • 01/27/2019: Heading to AAAI 2019 to present my research at the Doctoral Consortium.
  • 06/21/2018: The UT Austin Villa 3D simulation team won the RoboCup 2018 competition held in Montreal, Canada.
  • 04/16/2018: Our paper, Behavioral Cloning from Observation, was accepted to IJCAI 2018.
  • 12/15/2017: I received my Master's degree in Computational Science, Engineering, and Mathematics.

Pre-prints
Generative Adversarial Imitation from Observation
Faraz Torabi, Garrett Warnell, Peter Stone
arXiv / bibtex

We propose an imitation from observation (IfO) algorithm that enables an agent to imitate another agent from state-only demonstrations by bringing the distribution of the imitator's state transitions close to that of the demonstrator, using a network architecture inspired by GANs. We conduct experiments in two settings: (1) demonstrations consisting of low-level, manually-defined state features, and (2) demonstrations consisting of raw visual data. We demonstrate that our approach outperforms other IfO methods.
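As a rough illustration of the GAN-style idea in the abstract above, the sketch below uses a toy logistic-regression discriminator in place of the paper's neural networks; all function names, hyperparameters, and data here are invented for illustration, not the paper's implementation. The discriminator is trained to tell the demonstrator's state transitions (s, s') apart from the imitator's, and the imitator is then rewarded for transitions the discriminator mistakes for the demonstrator's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_discriminator(demo_transitions, imitator_transitions,
                        lr=0.5, steps=200):
    """Logistic regression: label demonstrator transitions 1, imitator 0."""
    X = np.vstack([demo_transitions, imitator_transitions])
    y = np.concatenate([np.ones(len(demo_transitions)),
                        np.zeros(len(imitator_transitions))])
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)       # discriminator's P(demonstrator)
        grad = p - y                 # gradient of logistic loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def imitation_reward(transition, w, b):
    """Reward the imitator for transitions that look demonstrator-like."""
    return float(sigmoid(transition @ w + b))

# Toy data: each transition is a (s, s') feature vector.
demo = np.array([[0.0, 1.0], [1.0, 2.0]])       # demonstrator moves "up"
imitator = np.array([[0.0, -1.0], [1.0, 0.0]])  # imitator moves "down"
w, b = train_discriminator(demo, imitator)

# A demonstrator-like transition should earn a higher reward:
r_good = imitation_reward(np.array([2.0, 3.0]), w, b)
r_bad = imitation_reward(np.array([2.0, 1.0]), w, b)
```

In the full method, this reward signal would drive a reinforcement-learning update of the imitator's policy, and discriminator and policy would be trained in alternation.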

Publications
Behavioral Cloning from Observation
Faraz Torabi, Garrett Warnell, Peter Stone
International Joint Conference on Artificial Intelligence (IJCAI), 2018
arXiv / bibtex

We propose a two-phase, autonomous imitation from observation technique called behavioral cloning from observation (BCO). We allow the agent to acquire experience in a self-supervised fashion. This experience is used to learn a model, which is then used to learn a particular task by observing an expert perform that task, without knowledge of the specific actions taken.
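The two phases described above can be sketched in a hypothetical tabular toy setting (the paper uses neural networks; the function names and toy environment here are invented for illustration): first learn an inverse dynamics model from self-supervised experience, then use it to infer the expert's actions from its state-only demonstration and clone them.

```python
def learn_inverse_model(transitions):
    """Phase 1: map each observed (state, next_state) pair to the action
    that caused it, from the agent's own self-supervised experience."""
    model = {}
    for s, a, s_next in transitions:
        model[(s, s_next)] = a
    return model

def behavioral_cloning_from_observation(demo_states, inverse_model):
    """Phase 2: infer actions for a state-only demonstration, then clone
    them into a state -> action policy."""
    policy = {}
    for s, s_next in zip(demo_states, demo_states[1:]):
        a = inverse_model.get((s, s_next))
        if a is not None:
            policy[s] = a
    return policy

# Self-supervised experience in a toy 1-D chain environment:
experience = [(0, "right", 1), (1, "right", 2), (2, "left", 1)]
inverse_model = learn_inverse_model(experience)

# The expert demonstration contains states only, no actions:
demo = [0, 1, 2]
policy = behavioral_cloning_from_observation(demo, inverse_model)
# policy now maps each demonstrated state to the inferred expert action
```

In the continuous-state settings the paper targets, both the inverse model and the cloned policy would be function approximators rather than lookup tables.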

Workshops, Symposia, and Extended Abstracts
Adversarial Imitation Learning from State-only Demonstrations
Faraz Torabi, Garrett Warnell, Peter Stone
Autonomous Agents and Multi-Agent Systems (AAMAS) Extended Abstract, 2019
bibtex

Imitation Learning from Observation
Faraz Torabi
Association for the Advancement of Artificial Intelligence (AAAI) Doctoral Consortium, 2019
bibtex


Website template credits.