LG AI Research has announced EXAONE 3.0, South Korea's first open-source AI model, marking the country's entry into a global AI field dominated by American technology giants and emerging players in China and the Middle East. Notably, the model currently supports only two languages: English and Korean.

EXAONE 3.0 is an open-source model reportedly based on a decoder-only Transformer architecture, with 7.8B parameters trained on 8T tokens.
“Among the EXAONE 3.0 language model lineup built for various purposes, the 7.8B instruction-tuned model is being open-sourced in advance so that it can be used for research,” said an LG press release. The company hopes the release will help AI researchers both at home and abroad conduct more meaningful research and move the AI ecosystem one step forward.

According to the company’s own tests, the model’s English ability reaches the “world’s top level,” reportedly ranking first in average score on real-world use cases and surpassing models such as Llama 3.0.
In mathematics and coding, EXAONE 3.0 also reportedly ranks first in average score and shows strong reasoning ability.

In Korean, EXAONE 3.0 likewise ranked first in average scores for both real-world use cases and individual benchmarks. The model is claimed to cut inference time by 56%, memory usage by 35%, and operating costs by 72% compared to its predecessor.
The latest model has reportedly been trained on 60 million items of specialized data covering patents, code, mathematics, and chemistry. The company plans to expand the training data to 100 million items across various fields by the end of the year.
To reduce the power consumption of running the AI model, LG AI Research focused on optimization and model-lightweighting technologies, shrinking the model’s size by 97% compared to EXAONE 1.0 while improving performance.