Songyang Han

Applied Scientist | Ph.D. | Artificial General Intelligence

Amazon Science

Biography

I am an applied scientist at Amazon AWS AI Labs, working on Amazon Q Developer (large language model-based code generation). I received my Ph.D. in computer science and engineering from the University of Connecticut, where I worked on artificial intelligence advised by Prof. Fei Miao. Earlier, I worked on game-theoretic energy management approaches in the Dynamic Systems Control Lab at the University of Michigan-Shanghai Jiao Tong University Joint Institute. Before joining Amazon, I worked at Sony AI America on a superhuman racing AI agent that mastered the highly realistic game Gran Turismo.

My current research interests include large language models, artificial general intelligence, reinforcement learning, generative AI, and computer vision.

News

  • [2024/12] Our project, Amazon Q Developer code review, was announced by AWS CEO Matt Garman at AWS re:Invent 2024. website
  • [2024/9] I am excited to join Amazon AWS AI Labs as an applied scientist.
  • [2024/8] Our team’s paper “A Super-human Vision-based Reinforcement Learning Agent for Autonomous Racing in Gran Turismo” received the Outstanding Paper Award at the 1st annual Reinforcement Learning Conference (RLC) 2024.
  • [2024/1] Our paper “Real-time Human Presence Estimation For Indoor Robots” is accepted by the 2024 IEEE International Conference on Robotics and Automation (ICRA).
  • [2024/1] Our paper “Collaborative Multi-Object Tracking with Conformal Uncertainty Propagation” is accepted by IEEE Robotics and Automation Letters (RA-L). Available on arxiv, website.
  • [2024/1] Our paper “What is the Solution for State-Adversarial Multi-Agent Reinforcement Learning?” is accepted by Transactions on Machine Learning Research (TMLR). Available on arxiv, website.
  • [2023/10] Our paper “A Multi-Agent Reinforcement Learning Approach For Safe and Efficient Behavior Planning Of Connected Autonomous Vehicles” is accepted by IEEE Transactions on Intelligent Transportation Systems. Available on arxiv, website.
  • [2023/8] I am excited to join Sony AI as a research scientist.
  • [2023/6] Our paper “Towards Safe Autonomy in Hybrid Traffic: Detecting Unpredictable Abnormal Behaviors of Human Drivers via Information Sharing” is accepted by ACM Transactions on Cyber-Physical Systems (TCPS).
  • [2023/5] Our paper “Robust Multi-Agent Reinforcement Learning with State Uncertainty” is accepted by Transactions on Machine Learning Research (TMLR).
  • [2023/5] I received the CSE Predoctoral Fellowship.
  • [2023/5] I gave an invited seminar, “Safe, Stable, and Robust Multi-Agent Reinforcement Learning for Connected Autonomous Vehicles,” at MIT.
  • [2023/2] Our paper “Shared Information-Based Safe And Efficient Behavior Planning For Connected Autonomous Vehicles” received the Best Paper Award at the DCAA workshop at AAAI 2023 in Washington, DC. Available on arxiv.
  • [2023/1] Our paper “Uncertainty Quantification of Collaborative Detection for Self-Driving” is accepted by the 2023 IEEE International Conference on Robotics and Automation (ICRA), available on arxiv, website.
  • [2023/1] Our paper “Spatial-Temporal-Aware Safe Multi-Agent Reinforcement Learning of Connected Autonomous Vehicles in Challenging Scenarios” is accepted by the 2023 IEEE International Conference on Robotics and Automation (ICRA), available on arxiv.
  • [2022/8] I received the General Electric (GE) Fellowship for Excellence, a program established to recognize the excellence of current graduate students and to facilitate their completion of the Ph.D. program.
  • [2022/7] Our paper “Towards Safe Autonomy in Hybrid Traffic: The Power of Information Sharing in Detecting Abnormal Human Drivers Behaviors” is presented at the AI4TS workshop at the 31st International Joint Conference on Artificial Intelligence (IJCAI 2022).
  • [2022/7] Our paper “DeResolver: A Decentralized Negotiation and Conflict Resolution Framework for Smart City Services” is accepted by ACM Transactions on Cyber-Physical Systems (available online).
  • [2022/5] Our paper “Stable and Efficient Shapley Value-Based Reward Reallocation for Multi-Agent Reinforcement Learning of Autonomous Vehicles” is presented at the 2022 IEEE International Conference on Robotics and Automation (ICRA), available online.
Interests
  • Large Language Models
  • Artificial General Intelligence
  • Reinforcement learning
  • Generative AI
  • Autonomous driving
  • Computer vision
Education
  • PhD in Computer Science and Engineering, 2023

    University of Connecticut

  • MS in Electrical and Computer Engineering, 2018

    University of Michigan-Shanghai Jiao Tong University Joint Institute

  • BEng in Automation, 2015

    Nanjing University

Experience

Research scientist, Sony AI
Aug 2023 – Sep 2024 New York City, NY, USA
  • Worked in the Reinforcement Learning group led by Peter Stone and Peter Wurman.
  • Contributed to a superhuman racing AI agent that mastered the highly realistic game Gran Turismo, racing against and elevating the gaming experience of GT drivers.

Applied scientist intern
May 2023 – Aug 2023 Sunnyvale, CA, USA
  • Mentored by Apaar Sadhwani
  • Developed machine learning solutions for efficiently handling time-series data with sparse observations.

Research intern
May 2020 – Dec 2020 Sunnyvale, CA, USA
  • Mentored by Shiyu Song
  • Surveyed existing reinforcement learning methods and state-of-the-art deep learning methods used in autonomous driving.
  • Built a prototype platform to train and test RL algorithms for autonomous vehicles on the Apollo platform and Amazon AWS.

Research assistant, University of Connecticut
Aug 2018 – May 2023 Storrs, CT, USA
  • Designed a safe and scalable multi-agent reinforcement learning framework for the behavior planning and control of connected autonomous vehicles to improve traffic efficiency and safety.
  • Proposed a stable and efficient reward reallocation algorithm to motivate cooperation in multi-agent reinforcement learning, assuming all agents are self-interested.
  • Studied the fundamental properties of robust multi-agent reinforcement learning under adversarial state perturbations and proposed a new objective and algorithm for learning robust policies.

Research assistant, University of Michigan-Shanghai Jiao Tong University Joint Institute
Sep 2015 – Mar 2018 Shanghai, China
  • Proposed a flexible energy management approach that handles uncertainties in weather and sizing for an isolated microgrid so that performance is not dramatically affected by changing weather conditions.
  • Designed and fabricated highly efficient bidirectional DC/DC converters to validate the energy management approaches in a scaled-down system.

Research

Publications

(2024). Collaborative Multi-Object Tracking with Conformal Uncertainty Propagation. In IEEE RA-L.

(2024). What is the Solution for State-Adversarial Multi-Agent Reinforcement Learning?. In TMLR.

(2023). A Multi-Agent Reinforcement Learning Approach For Safe and Efficient Behavior Planning Of Connected Autonomous Vehicles. In IEEE T-ITS.

(2023). Uncertainty Quantification of Collaborative Detection for Self-Driving. In ICRA 2023.

Contact