Overview
- Built at Binghamton University, the prototype explains routes before a trip and narrates surroundings during travel using large language models.
- In a study with seven legally blind participants inside a large office space, most preferred the mix of pre-trip planning and real-time scene descriptions.
- The robot asked for a destination, offered route options with estimated times, and then guided users while calling out corridors and obstacles.
- In additional simulations, GPT-4 controlled the robot across 77 navigation scenarios, completing all of them successfully.
- The team plans further user studies, greater autonomy, and longer indoor and outdoor routes, a step toward expanding mobility assistance where trained guide dogs are scarce and costly.
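The interaction flow described above (destination prompt, route options with estimated times, then landmark-by-landmark narration during travel) could be sketched roughly as follows. Every name, route, waypoint, and timing here is a hypothetical illustration, not the prototype's actual code; in the real system the planning and narration steps are LLM-driven rather than stubbed.

```python
# Minimal sketch of the guide-robot interaction loop, under assumed data.
from dataclasses import dataclass, field


@dataclass
class Route:
    name: str
    eta_minutes: int
    waypoints: list = field(default_factory=list)  # landmarks to call out


def plan_routes(destination: str) -> list:
    """Pre-trip planning: return candidate routes with estimated times.

    Stubbed here; the prototype reportedly generates this with an LLM.
    """
    return [
        Route("main corridor", 4, ["elevator lobby", "long corridor", destination]),
        Route("side hallway", 6, ["stairwell", "side hallway", destination]),
    ]


def narrate(route: Route) -> list:
    """Real-time narration: describe each landmark as it is passed."""
    return [f"Approaching {w}" for w in route.waypoints]


routes = plan_routes("Room 214")          # hypothetical destination
chosen = min(routes, key=lambda r: r.eta_minutes)  # e.g. pick the fastest
print(f"Selected {chosen.name}, about {chosen.eta_minutes} minutes")
for line in narrate(chosen):
    print(line)
```

The two-phase split mirrors the study design: `plan_routes` stands in for the pre-trip explanation participants heard before walking, and `narrate` for the real-time scene descriptions during travel.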