Friday, November 29, 2019

Samsung Research Centers Around the World Take First Place in Prestigious AI Challenges

Samsung Electronics’ Global Research & Development (R&D) Centers play a key part in developing artificial intelligence (AI) capabilities for real-world use. In a testament to the work these advanced R&D branches of Samsung undertake, Samsung R&D Institute Poland and Samsung Research America’s AI Center have each recently won a prestigious global challenge.

Samsung R&D Institute Poland at IWSLT 2019

2019 marks the third year in a row that Samsung R&D Institute Poland, in partnership with the U.K.’s University of Edinburgh (UEDIN), has received accolades at the International Workshop on Spoken Language Translation (IWSLT), which, along with the Workshop on Machine Translation (WMT), is one of the two top global workshops on automatic language translation. This year, Samsung R&D Institute Poland won first place in two categories: text-to-text translation from English to Czech, and an end-to-end system translating English speech into German text.

For the text-to-text translation category, researchers worked to develop a model that translates the transcript of a spoken English-language TED Talk into Czech. Building the winning model required the Samsung team to assemble large, filtered corpora to train on and to generate as much synthetic data as possible. The work done by the Samsung R&D Institute Poland team, together with additional modeling help from UEDIN, was judged the best in the challenge by human evaluators, meaning the translations produced by Samsung R&D Institute Poland’s system scored highest in both fluency and adequacy.
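The article does not detail the team’s data pipeline, but a common first step when assembling a large, filtered parallel corpus is to discard sentence pairs whose lengths make them implausible translations of each other. A minimal sketch of such a filter (the function name and thresholds are illustrative, not the team’s actual method):

```python
def filter_parallel_corpus(pairs, max_len=100, max_ratio=2.0):
    """Keep (source, target) sentence pairs that look like plausible translations.

    Drops empty, overlong, or length-mismatched pairs -- a typical cleaning
    step before training a translation model on web-crawled parallel data.
    """
    kept = []
    for src, tgt in pairs:
        n_src, n_tgt = len(src.split()), len(tgt.split())
        if n_src == 0 or n_tgt == 0:            # drop pairs with an empty side
            continue
        if n_src > max_len or n_tgt > max_len:  # drop overlong sentences
            continue
        if max(n_src, n_tgt) / min(n_src, n_tgt) > max_ratio:  # length mismatch
            continue
        kept.append((src, tgt))
    return kept

pairs = [
    ("Hello world", "Ahoj světe"),   # plausible pair, kept
    ("", "Prázdný zdroj"),           # empty source, dropped
    ("A very short line", "slovo " * 60),  # extreme length ratio, dropped
]
print(filter_parallel_corpus(pairs))  # -> [('Hello world', 'Ahoj světe')]
```

Synthetic data (for example, machine-translated monolingual text) would then be appended to the filtered corpus before training.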

Samsung R&D Institute Poland’s participation in its second winning category this year, the end-to-end translation system from English to German, was a first for the team. The task was to produce a German-language transcription of an English-language TED Talk audio recording. It required the development of a single model that takes an audio file as input and directly produces a translated transcription, and it was made more difficult by the scarcity of the provided audio data compared with a typical speech recognition task. Samsung R&D Institute Poland proposed several innovative methods for end-to-end speech translation that mitigated this data paucity, and its final system achieved a state-of-the-art result that won the team first place in the challenge.

Samsung Research America at ICCV 2019

This October, researchers from Samsung Research America’s AI Center took first place in the International Conference on Computer Vision (ICCV) challenge Linguistic Meets Image and Video Retrieval (Fashion IQ). ICCV, a premier international computer vision conference, was held in Seoul, Korea this year.

The challenge Samsung Research America AI Center took part in, sponsored by IBM Research, aims to develop conversational shopping assistants that are more natural and applicable to the real world. The task given to Samsung Research America AI Center‘s team, the ‘Superraptors’, belonged to the domain of image retrieval: an input query was specified as a candidate image together with two natural-language expressions describing how the search target differs visually from it. The goal of the challenge was to gather opinions and experience from researchers in the emerging space of visual content retrieval with a natural-language interface.
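The query format described above can be sketched as a small data structure. The following is a hypothetical illustration only: the class and function names are mine, and plain keyword overlap stands in for the learned multimodal models used in the actual challenge.

```python
from dataclasses import dataclass

@dataclass
class RetrievalQuery:
    candidate_image: str  # id of the reference garment image
    feedback: tuple       # two natural-language captions describing differences

def keyword_score(feedback_words, item_description):
    """Toy relevance score: fraction of feedback words found in the item text."""
    words = set(item_description.lower().split())
    hits = sum(1 for w in feedback_words if w in words)
    return hits / max(len(feedback_words), 1)

def rank_catalog(query, catalog):
    """Rank catalog items (id -> text description) against the feedback captions.

    In the real task a learned model scores target images directly against the
    candidate image and captions; keyword overlap is a stand-in for that model.
    """
    feedback_words = [w.lower() for cap in query.feedback for w in cap.split()]
    scored = [(keyword_score(feedback_words, desc), item_id)
              for item_id, desc in catalog.items()]
    return [item_id for _, item_id in sorted(scored, reverse=True)]

query = RetrievalQuery("dress_001", ("is longer", "has blue stripes"))
catalog = {"dress_002": "long blue striped dress",
           "dress_003": "short red dress"}
print(rank_catalog(query, catalog))  # dress_002 ranks first
```

The key point the sketch captures is that the query is inherently multimodal: a reference image plus natural-language feedback, jointly specifying the target.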

Samsung Research America AI Center’s submission to the challenge, “Multimodal Ensemble of Diverse Models for Image Retrieval Using Natural Language Feedback”, combined the given data’s different modalities using multiple deep learning models. The team’s win marks the first time a Samsung Research team has won a multimodal (language and vision) challenge; previously, Samsung AI Center Moscow, Samsung R&D Institute Poland and Samsung R&D Institute China-Beijing had received awards in single-modality challenges.

