This is what computer speech sounded like in 1963

Status
Not open for further replies.

Bateluer

Lifer
Jun 23, 2001
27,730
8
0
http://www.theverge.com/2012/12/26/...puter-synthesized-speech-sounded-like-in-1963

Taking a look into the past, the 365 Days Project has a recording of several tests done by Bell Telephone Laboratories back in 1963. While the technology we use today is undoubtedly improved, it's intriguing to hear how some of the fundamental problems with computer-generated speech still haven't been resolved almost 50 years later. You can listen to the full audio here, and don't forget to hear the rendition of "Daisy," a test that is said to have inspired HAL's dying singsong in 2001: A Space Odyssey.

Interesting listen. There's surprisingly little improvement in some ways, even after nearly 50 years.
 

Crono

Lifer
Aug 8, 2001
23,720
1,503
136
Human speech is very complex. The slow progress says more about the difficulty of smoothly approximating human speech than it does about the state of software and computer technology, which have advanced a lot in other areas.

I'm sure more could be done, and will be done, by learning from speech pathology and from how the brain acquires language early in development. In some ways it's a good thing there aren't perfect speech synthesizers yet: all kinds of fraud (beyond prank calls) could be committed once the tech is perfected. But it would be great for people who can't speak anymore (like Stephen Hawking) to be able to generate their own voice perfectly.
 

Crono

Lifer
Aug 8, 2001
23,720
1,503
136
Are you kidding? You can have a conversation with a cell phone now.

That's more a development in AI than in voice synthesis itself.
Think of it like an instrument: you can generate the sound of a drum fairly accurately with music software (though maybe not completely convincingly), but a high-quality recording of an actual drum almost always sounds better.
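
To make the analogy concrete, here's a toy Python sketch of a synthesized drum hit: a pitch-swept sine for the body plus a short noise burst for the click, written out as a WAV file. The frequencies and decay rates are illustrative guesses, not a tuned instrument:

Code:
import wave
import numpy as np

SAMPLE_RATE = 44100
DURATION = 0.4  # seconds
t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

# Pitch sweeps from about 200 Hz down toward 50 Hz for the "thump".
freq = 150 * np.exp(-8 * t) + 50
phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE
body = np.sin(phase) * np.exp(-12 * t)

# A short noise burst supplies the "click" of the beater.
click = np.random.uniform(-1, 1, t.size) * np.exp(-80 * t) * 0.3

signal = body + click
signal /= np.max(np.abs(signal))  # normalize to [-1, 1]

with wave.open("kick.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())

It sounds passable as a kick drum, but nothing like a miked kit, which is exactly the point of the analogy.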

Synthesizing the human voice, though, is even harder. Even the best software doesn't sound completely natural or fluid; it's akin to the uncanny valley for humanoid robots. We're wired to detect subtle cues in speech, such as intonation, that are hard to translate into a computer program. There probably just isn't enough research yet, and future models of speech will have to be far more sophisticated to get the sound right. Once a computer can "sell" sarcasm, you know we're on the right track. :D
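
For a taste of why the voice is harder, here's a rough Python sketch of formant-style synthesis, one of the classic approaches from the era of the Bell Labs recording: a buzzy pulse train with a falling pitch contour (a crude stand-in for intonation) is filtered through three resonators set to textbook formant values for the vowel /a/. Everything here is an approximation for illustration:

Code:
import wave
import numpy as np

SAMPLE_RATE = 16000
n = SAMPLE_RATE  # one second of audio

# Glottal source: an impulse train whose rate follows a falling
# pitch contour, a very crude model of sentence-final intonation.
f0 = np.linspace(140, 90, n)
phase = np.cumsum(f0) / SAMPLE_RATE
source = np.diff(np.floor(phase), prepend=0.0)

def resonator(x, freq, bandwidth, fs):
    """Two-pole IIR resonator approximating one vocal-tract formant."""
    r = np.exp(-np.pi * bandwidth / fs)
    a1, a2 = 2 * r * np.cos(2 * np.pi * freq / fs), -r * r
    y, y1, y2 = np.zeros_like(x), 0.0, 0.0
    for i in range(len(x)):
        y[i] = x[i] + a1 * y1 + a2 * y2
        y1, y2 = y[i], y1
    return y

# Cascade three formants with textbook center frequencies and
# bandwidths (in Hz) for the vowel /a/.
signal = source
for freq, bw in [(700, 130), (1220, 70), (2600, 160)]:
    signal = resonator(signal, freq, bw, SAMPLE_RATE)

signal /= np.max(np.abs(signal))
with wave.open("vowel.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())

Even this much machinery only buys one static, robotic vowel; stringing vowels and consonants into fluid, natural-sounding sentences is where the real difficulty starts.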
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86

It would be great for other people, maybe, but not Stephen Hawking. It would be great if they could reverse-engineer his voice, but the result would need to be identifiable as approximately the same as what the old hardware produces.

http://www.newscientist.com/article/dn21323-the-man-who-saves-stephen-hawkings-voice.html

Crono

Lifer
Aug 8, 2001
23,720
1,503
136
I get being attached to that voice, since it's identified with him (even in popular media like The Simpsons), but if I were in that situation I might want the voice I had before ALS. Hard to say with certainty, of course, since I'm not in his position. But he's had that robotic voice for so long that I understand why he might want to keep it. If my voice suddenly became different, I might feel like part of my identity had changed.
 

Bateluer

Lifer
Jun 23, 2001
27,730
8
0
Are you kidding? You can have a conversation with a cell phone now.

The piece is about voice synthesis, not the AI and data connections behind services like Google Now, Google Voice Actions, Google Navigation, and, to a lesser extent, Siri.

Up until ICS (Ice Cream Sandwich), the voice in Google Nav, for example, still sounded very computerized, very much like the efforts from the 1960s. The data it returned and its ability to recognize what the user was trying to accomplish, though, are obviously vastly improved.
 