Explicable Task Planning
We are currently using it in our work on making robots exhibit explicable, comprehensible behavior in human-robot teaming scenarios [link to paper].
Cloudy with a Chance of Synergy
In our project for the Microsoft Imagine Cup, we set out to solve one of the major challenges faced by the manufacturing industry today [link]. The last few years have witnessed unprecedented investment in automating the workshop floor, with rapidly growing numbers of robots deployed every year to work alongside humans. However, the state of the art in human-robot interaction does not support such large-scale joint operations, so the capability of the robots, and with it the promise of human-robot collaboration, remains quite limited. At the heart of this issue is a robot's inability to communicate effectively with the humans in the loop.

We aim to resolve this impedance mismatch between humans and artificial agents, which has so far limited effective collaboration on the factory floor. To this end, we propose to build the next-generation safety helmet, which combines EEG feedback and augmented reality to enable smooth and safe human-robot interaction in a shared workspace. Our technology consists of two key components: (1) the Consciousness Cloud, which gives the robots real-time shared access to the mental state of all the humans in the workspace; and (2) the Augmented Workspace, which lets the robots communicate effectively with their human co-workers in virtual space using a shared vocabulary of holograms. Together, these two components will usher in the next generation of manufacturing workspaces by improving safety and collaboration between humans and automated components.
Plug & Play Adaptive Multiagent Systems
This video demonstrates a multi-agent planning system based on the Robot Operating System (ROS) that allows multiple agents to collaborate to complete a goal. A centralized planner computes an optimal plan given the agents at its disposal. When a new agent connects, the system determines its location and includes it in the plan if its inclusion reduces the plan's cost. If an agent unexpectedly disconnects, the system recovers by generating a new plan that excludes the lost agent.
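The connect/disconnect handling above can be sketched as follows. This is an illustrative toy, not the actual ROS nodes: the class, the 1-D positions, and the nearest-agent cost model are all assumptions made for the example.

```python
# Hypothetical sketch of the centralized replanning loop; names and the
# toy cost model are illustrative, not the system's real interfaces.

def plan_cost(agents, goals):
    """Toy cost model: each goal is served by the nearest agent
    (1-D positions); cost is the total distance traveled."""
    return sum(min(abs(pos - g) for pos in agents.values()) for g in goals)

class CentralPlanner:
    def __init__(self, goals):
        self.goals = goals
        self.agents = {}          # agent name -> position
        self.cost = float("inf")  # cost of the current plan

    def replan(self):
        self.cost = plan_cost(self.agents, self.goals) if self.agents else float("inf")

    def connect(self, name, position):
        # Include a newly connected agent only if it lowers the plan cost.
        candidate = dict(self.agents, **{name: position})
        if plan_cost(candidate, self.goals) < self.cost:
            self.agents = candidate
            self.replan()

    def disconnect(self, name):
        # Recover from an unexpected loss by replanning without the agent.
        self.agents.pop(name, None)
        self.replan()
```

For instance, with goals at positions 2 and 9, connecting an agent at position 10 alongside one at position 0 lowers the total cost, so it is folded into the plan; dropping either agent simply triggers a replan over whoever remains.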
Your wish is my command
Sprinkles can take simple requests such as "Deliver this to Room 566" and autonomously plan a path to the destination. It is currently deployed on the 5th floor of the CS department.
Sprinkles interacts with humans by utilizing facial recognition software, voice prompts and speech recognition, as well as touch input from an onboard web application. It uses a Microsoft Kinect sensor and a built-in ultrasonic sensor for mapping and obstacle avoidance.
To satisfy user commands, Sprinkles uses an automated planner to generate a course of action and executes it.
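The plan-then-execute loop might look like the sketch below. The one-corridor map, the room names, and the action vocabulary are assumptions for illustration, not Sprinkles' actual domain model.

```python
# Illustrative sketch of goal -> plan -> execute; the corridor map and
# action names are hypothetical, not the robot's real planning domain.

CORRIDOR = ["dock", "Room 560", "Room 562", "Room 564", "Room 566"]

def make_plan(start, goal):
    """Generate a course of action: move steps along the corridor,
    then a final delivery action at the goal."""
    i, j = CORRIDOR.index(start), CORRIDOR.index(goal)
    step = 1 if j > i else -1
    return [("move", CORRIDOR[k], CORRIDOR[k + step])
            for k in range(i, j, step)] + [("deliver", goal)]

def execute(plan):
    for action in plan:
        print(action)  # stand-in for dispatching to the motion controller
```

For example, `make_plan("dock", "Room 562")` yields two move actions followed by a deliver action, which `execute` would then dispatch step by step.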
Plan Context Based Low Level Action Selection
With Newman, a Baxter robot, we aim to develop a way of mapping high-level tasks to low-level trajectories. We start with the assumption that any given grounded action can map to multiple DMPs (Dynamic Movement Primitives), and our approach maps the action to a single DMP based on the action's context within the plan. In the video on the left, Newman performs a lateral pickup on the cup, since it later needs to perform a pour action. In the video on the right, it performs a vertical pickup on the cup, since it only needs to place the cup at a different position.
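The context-based selection described above can be sketched as a lookup keyed on the action that follows in the plan. The lookup-table scheme and all names here are illustrative assumptions, not our actual implementation.

```python
# Hypothetical sketch of plan-context-based DMP selection: a grounded
# action has several candidate DMPs, and the choice depends on the next
# action in the plan. Names and the table scheme are illustrative.

# Candidate DMPs for the pickup action, keyed by the follow-up action.
PICKUP_DMPS = {
    "pour":  "lateral_pickup",   # grasp from the side so the cup can tip
    "place": "vertical_pickup",  # grasp from above for a simple set-down
}

def select_dmp(plan, index, dmp_table=PICKUP_DMPS, default="vertical_pickup"):
    """Choose a DMP for plan[index] using the next action as context."""
    next_action = plan[index + 1] if index + 1 < len(plan) else None
    return dmp_table.get(next_action, default)
```

So a pickup followed by a pour resolves to the lateral grasp, while a pickup followed by a place resolves to the vertical one, mirroring the two videos.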
Teach Me How To Work
We also have a fleet of NAO robots that we have used in various planning projects. One such project was featured in the System Demonstrations and Exhibits Track at ICAPS 2014.
Pictures from the Yochan Exhibition Desk at AAAI 2016
Yochan participated in the Robotics Exhibition at AAAI 2016, where we showcased Kramer's block-stacking capabilities. The audience also had fun controlling the robot through the Leap Motion sensor. Our participation received media coverage on Channel One and City of Phoenix News.