Shimi Will Now Sing to You in an Adorable Robot Voice

[Image: Shimi robot]

Human-robot interaction is easy to do badly, and very difficult to do well. One approach that has worked well for robots from R2-D2 to Kuri is to sidestep the problem of language entirely: rather than use real words to communicate with humans, you can do pretty well (on an emotional level, at least) with a variety of bleeps and bloops. But as anyone who's watched Star Wars knows, R2-D2 has a lot going on with the noises it makes, and those noises were carefully designed to be both expressive and responsive.

Most actual robots don't have the luxury of a professional sound team (or as much post-production editing as they need), so the question becomes how to teach a robot to make the right noises at the right times. At Georgia Tech's Center for Music Technology (GTCMT), Gil Weinberg and his students have plenty of experience with robots that make all sorts of noise, and they've used a new deep learning-based technique to teach their musical robot Shimi a basic understanding of human emotions, and how to communicate back to humans in just the right way, using music.
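The article doesn't spell out the architecture GTCMT used, but the general shape of the idea (classify the emotion in a person's speech, then answer with a short musical gesture instead of words) can be sketched in a few lines. The following is a minimal illustration only, not Shimi's actual implementation: the network size, the feature dimensions, and the emotion-to-motif table are all made-up assumptions.

```python
# Hypothetical sketch (not the GTCMT/Shimi code): map a speech-feature vector
# to a discrete emotion label, then answer with a short musical motif.
import torch
import torch.nn as nn

EMOTIONS = ["happy", "sad", "angry", "calm"]

# Illustrative emotion-to-motif table (MIDI note numbers a robot could render
# as short synthesized melodic gestures). Values are invented for this sketch.
MOTIFS = {
    "happy": [60, 64, 67, 72],   # rising major arpeggio
    "sad":   [60, 63, 67, 63],   # minor contour that falls back down
    "angry": [60, 60, 58, 58],   # low, repeated, insistent notes
    "calm":  [60, 62, 64, 62],   # narrow, gentle stepwise motion
}

class EmotionClassifier(nn.Module):
    """Tiny network: fixed-length speech features (e.g., averaged MFCCs) -> emotion logits."""
    def __init__(self, n_features: int = 40, n_emotions: int = len(EMOTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_emotions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def respond_to_speech(features: torch.Tensor, model: EmotionClassifier) -> list:
    """Classify the speaker's emotion and return a matching musical motif."""
    with torch.no_grad():
        logits = model(features)
        emotion = EMOTIONS[int(logits.argmax(dim=-1))]
    return MOTIFS[emotion]

if __name__ == "__main__":
    model = EmotionClassifier()          # untrained here; real use needs labeled emotional speech
    fake_features = torch.randn(40)      # stand-in for real features from a microphone
    print(respond_to_speech(fake_features, model))
```

In a trained system, the classifier would be fit on recordings of emotional speech, and the canned motif table would give way to something more generative; the sketch is just meant to make the "emotion in, music out" loop concrete.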
