Each time you use your voice to dictate a message on a Samsung Galaxy phone or activate a Google Home device, you're using tools Chanwoo Kim helped develop. The former executive vice president of Samsung Research's Global AI Centers specializes in end-to-end speech recognition, end-to-end text-to-speech tools, and language modeling.
"The most rewarding part of my career is helping to develop technologies that my family and friends use and enjoy," Kim says.
He recently left Samsung to continue his work in the field at Korea University, in Seoul, leading the school's speech and language processing laboratory. A professor of artificial intelligence, he says he is passionate about teaching the next generation of tech leaders.
"I'm excited to have my own lab at the university and to guide students in research," he says.
Bringing Google Home to market
When Amazon announced in 2014 that it was developing smart speakers with AI assistive technology, a device now known as the Echo, Google decided to develop its own version. Kim saw a role for his expertise in the endeavor: he has a Ph.D. in language and information technology from Carnegie Mellon, where he specialized in robust speech recognition. Friends of his who were working on such projects at Google in Mountain View, Calif., encouraged him to apply for a software engineering job there. He left Microsoft in Seattle, where he had worked for three years as a software development engineer and speech scientist.
After joining Google's acoustic modeling team in 2013, he worked to ensure that the company's AI assistive technology, used in Google Home products, could perform in the presence of background noise.
Chanwoo Kim
Employer
Korea University in Seoul
Title
Director of the speech and language processing lab and professor of artificial intelligence
Member grade
Member
Alma maters
Seoul National University; Carnegie Mellon
He led an effort to improve Google Home's speech-recognition algorithms, including through acoustic modeling, which allows a device to interpret the relationship between speech and phonemes (the phonetic units of languages).
"When people used the speech-recognition function on their mobile phones, they were standing only about 1 meter away from the device at most," he says. "For the speaker, my team and I had to make sure it understood the user when they were talking from farther away."
Kim proposed using large-scale data augmentation that simulates far-field speech to enhance the device's speech-recognition capabilities. Data augmentation takes the training data already collected and artificially generates additional training data from it to improve recognition accuracy.
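The kind of far-field simulation described here is commonly done by convolving close-talking speech with a room impulse response and mixing in noise at a chosen signal-to-noise ratio. The sketch below illustrates that general idea with placeholder signals; it is not the actual Google or Samsung pipeline, and the filter shape and SNR are illustrative assumptions.

```python
import numpy as np

def simulate_far_field(clean, rir, noise, snr_db):
    """Simulate far-field speech: convolve clean audio with a room
    impulse response (RIR), then add noise at a target SNR in dB."""
    reverberant = np.convolve(clean, rir)[: len(clean)]
    speech_power = np.mean(reverberant ** 2)
    noise_power = np.mean(noise[: len(reverberant)] ** 2)
    # Scale the noise so the mixture reaches the requested SNR
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return reverberant + scale * noise[: len(reverberant)]

# Toy example: random signals stand in for real recordings
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)       # 1 s of "speech" at 16 kHz
rir = np.exp(-np.linspace(0, 8, 2048))   # crude exponentially decaying RIR
noise = rng.standard_normal(16000)
augmented = simulate_far_field(clean, rir, noise, snr_db=10)
print(augmented.shape)  # (16000,)
```

Each clean utterance can be reused many times with different simulated rooms and noise levels, which is what makes the augmentation "large scale."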
His contributions enabled the company to launch its first Google Home product, a smart speaker, in 2016.
"That was a really rewarding experience," he says.
That same year, Kim moved up to senior software engineer and continued improving Google Home's algorithms for large-scale data augmentation. He also further developed technologies to reduce the time and computing power used by the neural network and to improve multi-microphone beamforming for far-field speech recognition.
Kim, who grew up in South Korea, missed his family, and in 2018 he moved back, joining Samsung as vice president of its AI Center in Seoul.
When he joined Samsung, he aimed to develop end-to-end speech-recognition and text-to-speech engines for the company's products, focusing on on-device processing. To help reach his goals, he founded a speech processing lab and led a team of researchers developing neural networks to replace the conventional speech-recognition systems then used by Samsung's AI devices.
"The most rewarding part of my work is helping to develop technologies that my family and friends use and enjoy."
Those systems included an acoustic model, a language model, a pronunciation model, a weighted finite state transducer, and an inverse text normalizer. The language model looks at the relationships among the words being spoken by the user, while the pronunciation model acts as a dictionary. The inverse text normalizer, most often used by text-to-speech tools on phones, converts spoken forms into written expressions.
Because the components were cumbersome, it was not possible to develop an accurate, on-device speech-recognition system using conventional technology, Kim says. An end-to-end neural network could complete all those tasks and "greatly simplify speech-recognition systems," he says.
Chanwoo Kim [top row, seventh from the right] with some of the members of his speech processing lab at Samsung Research.Chanwoo Kim
He and his team used a streaming attention-based approach to develop their model. An input sequence, the spoken words, is encoded, then decoded into a target sequence with the help of a context vector, a numeric representation of the input generated by a pretrained deep learning model, an approach first developed for machine translation.
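At the heart of that approach, the decoder forms its context vector as an attention-weighted average of the encoded input frames. Below is a minimal dot-product-attention sketch of that single step, with random arrays standing in for real encoder and decoder states; it is an illustration of the general mechanism, not Samsung's streaming model.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(encoder_states, decoder_state):
    """Compute a context vector via dot-product attention.

    encoder_states: (T, d) array of encoded input frames
    decoder_state:  (d,) current decoder hidden state
    """
    scores = encoder_states @ decoder_state   # (T,) alignment scores
    weights = softmax(scores)                 # attention distribution over frames
    return weights @ encoder_states           # (d,) weighted average = context

rng = np.random.default_rng(0)
enc = rng.standard_normal((50, 8))  # 50 encoded frames, 8-dim states
dec = rng.standard_normal(8)
ctx = attention_context(enc, dec)
print(ctx.shape)  # (8,)
```

In a streaming recognizer, the attention window is restricted to frames seen so far, so the model can emit words before the utterance ends.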
The model was commercialized in 2019 and is now part of Samsung's Galaxy phones. That same year, a cloud version of the system was commercialized; it is used by the phone's virtual assistant, Bixby.
Kim's team continued to improve speech-recognition and text-to-speech systems in other products, and each year it commercialized a new engine.
Those include power-normalized cepstral coefficients, features that improve the accuracy of speech recognition in environments with disturbances such as additive noise, changes in the signal, multiple speakers, and reverberation. The method suppresses the effects of background noise by using statistics to estimate its characteristics. It is now used in a variety of Samsung products including air conditioners, mobile phones, and robotic vacuum cleaners.
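Two ingredients of power-normalized cepstral coefficients (PNCC) can be shown in a few lines: subtracting an estimated noise level from the power spectrum, and compressing the result with a power-law nonlinearity (an exponent of 1/15) rather than the logarithm used by conventional features. The sketch below is a deliberately simplified illustration of those two ideas, not the full PNCC algorithm, and the noise estimate here is a fixed placeholder.

```python
import numpy as np

def simplified_power_law_features(power_spectrum, noise_floor):
    """Much-simplified sketch of two PNCC ideas: noise subtraction
    with a small positive floor, then power-law compression with
    exponent 1/15 in place of the conventional logarithm."""
    # Keep at least 1% of the original power so values stay positive
    denoised = np.maximum(power_spectrum - noise_floor, 0.01 * power_spectrum)
    return denoised ** (1.0 / 15.0)

# Toy frame: 40 spectral channels with a constant noise level added
rng = np.random.default_rng(1)
frame = rng.random(40) + 0.2  # noisy power spectrum, values in [0.2, 1.2)
features = simplified_power_law_features(frame, noise_floor=0.2)
print(features.shape)  # (40,)
```

The power law mimics human loudness perception at low intensities, which is part of why these features degrade more gracefully in noise than log-based ones.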
Samsung promoted Kim in 2021 to executive vice president overseeing its six Global AI Centers, located in Cambridge, England; Montreal; New York; Seoul; Silicon Valley; and Toronto.
In that role he oversaw research on incorporating artificial intelligence and machine learning into Samsung products. He is the youngest person to have become an executive vice president at the company.
He also led the development of Samsung's generative large language models, which evolved into Samsung Gauss. The suite of generative AI models can generate code, images, and text.
In March he left the company to join Korea University as a professor of artificial intelligence, which he calls a dream come true.
"When I first started my doctoral work, my dream was to pursue a career in academia," Kim says. "But after earning my Ph.D., I found myself drawn to the impact my research could have on real products, so I decided to go into industry."
He says he was excited to join Korea University because "it has a strong presence in artificial intelligence" and is one of the top universities in the country.
Kim says his research will focus on generative speech models, multimodal processing, and integrating generative speech with language models.
Chasing his dream at Carnegie Mellon
Kim's father was an electrical engineer, and from a young age Kim wanted to follow in his footsteps, he says. He attended a science-focused high school in Seoul to get a head start on engineering topics and programming. He earned bachelor's and master's degrees in electrical engineering from Seoul National University in 1998 and 2001, respectively.
Kim had long hoped to earn a doctoral degree from a U.S. university because he felt it would give him more opportunities.
And that's exactly what he did. He left for Pittsburgh in 2005 to pursue a Ph.D. in language and information technology at Carnegie Mellon.
"I decided to major in speech recognition because I was interested in raising the standard of quality," he says. "I also liked that the field is multifaceted; I could work on hardware or software and easily shift focus from real-time signal processing to image signal processing or another area of the field."
Kim did his doctoral work under the guidance of IEEE Life Fellow Richard Stern, who is probably best known for his theoretical work on how the human brain compares the sound arriving at each ear to judge where the sound is coming from.
"At the time, I wanted to improve the accuracy of automatic speech recognition systems in noisy environments or when there were multiple speakers," he says. He developed several signal processing algorithms that used mathematical representations built from knowledge of how humans process auditory information.
Kim earned his Ph.D. in 2010 and joined Microsoft in Seattle as a software development engineer and speech scientist. He worked at Microsoft for three years before joining Google.
Access to trustworthy information
Kim joined IEEE as a doctoral student so he could present his research papers at IEEE conferences. In 2016 a paper he wrote with Stern was published in IEEE/ACM Transactions on Audio, Speech, and Language Processing. It won them the IEEE Signal Processing Society's 2019 Best Paper Award. Kim says he felt honored to receive the "prestigious award."
Kim maintains his IEEE membership in part because, he says, IEEE is a trustworthy source of information and gives him access to the latest technical knowledge.
Another benefit of membership is IEEE's global network, Kim says.
"By being a member, I have the opportunity to meet other engineers in my field," he says.
He is a regular attendee of the annual IEEE International Conference on Acoustics, Speech, and Signal Processing. This year he is the technical program committee's vice chair for the meeting, which is scheduled for next month in Seoul.