Houou: A Large Language Model Trained with the RIKEN Ichikara-Instruction Data Set

Posted by Atsushi Kojima, Researcher, Money Forward Lab.

This article is intended for engineers and students who are interested in or involved in natural language processing. It introduces the research and development of a large language model (LLM) that Money Forward's research institute, Money Forward Lab, has been working on.

Supervised Fine-Tuning (SFT) is a training method for improving the instruction-following ability of an LLM. In SFT, instruction data consisting of prompt–completion pairs is prepared, and a pre-trained model is fine-tuned on it.
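As a rough illustration (not the actual houou training code), the sketch below shows how a single prompt–completion pair might be converted into an SFT training example, with the loss masked on the prompt tokens so that only the completion is learned. The base model name is a placeholder.

```python
# Minimal SFT data-preparation sketch; model name and prompt format are placeholders.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder base model

def build_example(prompt: str, completion: str, max_len: int = 1024):
    """Concatenate prompt and completion into one sequence and mask the
    prompt tokens so the loss is computed only on the completion."""
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    completion_ids = tokenizer(completion + tokenizer.eos_token,
                               add_special_tokens=False)["input_ids"]
    input_ids = (prompt_ids + completion_ids)[:max_len]
    # -100 is ignored by the cross-entropy loss in Hugging Face causal LMs.
    labels = ([-100] * len(prompt_ids) + completion_ids)[:max_len]
    return {"input_ids": input_ids, "labels": labels}

example = build_example(
    prompt="Instruction: Briefly explain what instruction tuning is.\nResponse: ",
    completion="Instruction tuning fine-tunes a pre-trained model on prompt-completion pairs.",
)
```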

Preparing this kind of instruction data requires writing both the prompt and the completion by hand, so the annotation cost is very high unless the output of an existing model such as GPT-4 is used instead. As a result, most Japanese instruction data has been produced by translating instruction data originally created in English into Japanese.

To solve this problem, Money Forward Lab started joint research with the RIKEN Center for Advanced Intelligence Project, Language Information Access Technology Team (henceforth RIKEN) in September 2023 to create Japanese instruction data from scratch.

As a result of this joint research, we have released as open source an LLM trained with Ichikara-Instruction, a set of instruction data created by RIKEN and provided to the companies participating in the joint research. We used the newest version of the data set, which contains 4,802 examples.

The model was named houou (鳳凰) and is published on Hugging Face under the LLAMA 2 license. https://huggingface.co/moneyforward/houou-instruction-7b-v2
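The model can be loaded like any Hugging Face causal LM. The sketch below is a minimal usage example, assuming the standard transformers interface; the instruction prompt shown is illustrative and may differ from the format described on the model card.

```python
# Minimal sketch of loading the released model from Hugging Face.
# Assumes the standard causal-LM interface; the prompt below is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moneyforward/houou-instruction-7b-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Instruction: What is the highest mountain in Japan?\nResponse: "  # illustrative format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```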

Win Rate of Houou on the Rakuda Benchmark

We evaluated the performance of houou on the Rakuda Benchmark, which consists of forty free-form questions about Japan. For efficiency, the experiment used automatic evaluation by gpt-4.

In the experiment, houou outperformed SFT models trained on Japanese translations of dolly and OASST, respectively. Furthermore, in a comparison against gpt-3.5-turbo-1106, houou produced the better output for 67.5% of the questions.
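As a rough illustration of this kind of pairwise, GPT-4-judged comparison (the judging prompt and scoring scheme below are illustrative, not the ones used in our evaluation), the following sketch asks gpt-4 to pick the better of two answers; the win rate is then the fraction of questions where the judge prefers the model under test.

```python
# Sketch of pairwise automatic evaluation with GPT-4 as the judge.
# The prompt wording and A/B scoring are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Ask gpt-4 which of two answers better addresses the question."""
    prompt = (
        f"Question:\n{question}\n\n"
        f"Answer A:\n{answer_a}\n\n"
        f"Answer B:\n{answer_b}\n\n"
        "Which answer is better? Reply with exactly 'A' or 'B'."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Win rate = (# questions where the judge returns the model's label) / (# questions).
```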

Evaluations on other data sets and details of the manual evaluation will be reported at the 30th Annual Meeting of the Association for Natural Language Processing (NLP2024), to be held in Kobe in March 2024.

Money Forward’s Presentation at NLP2024

Money Forward will present three papers on natural language processing at the 30th Annual Meeting of the Association for Natural Language Processing, including the results of houou.

  • Large language model houou (鳳凰): Training and evaluation using the RIKEN ichikara-instruction data set. Atsushi Kojima, Ikuo Kitagishi
  • Comparative analysis of manual evaluation and automatic evaluation by GPT-4 of LLM outputs.
    Satoshi Sekine, Atsushi Kojima, Kugatsu Sadamitsu, Ikuo Kitagishi
    (coauthored with RIKEN)
  • Considerations on automatic generation of reply emails in customer support based on retrieval-augmented generation. Atsushi Kojima

Money Forward is very happy to be a platinum sponsor and the title sponsor of the conference. We look forward to seeing you at the conference and at our company booth.

Money Forward Lab is currently recruiting new members.

Mid-career recruitment

FY2025 New graduate recruitment
