
LaMDA: Safe, Grounded, and High-Quality Dialog Model from Google AI
LaMDA is built by fine-tuning a family of Transformer-based neural language models specialized for dialogue, with up to 137B model parameters, and training the models to consult external knowledge sources. LaMDA has three key goals:
Quality, which is measured in terms of Sensibleness, Specificity, and Interestingness. These dimensions are rated by human evaluators. Sensibleness measures whether a response makes sense in the context of the dialogue, for example, that the model does not produce absurd answers or contradict its earlier responses. Specificity measures whether the response is specific to the preceding dialogue rather than generic. Interestingness measures whether the model's answers provoke an engaged emotional reaction in the interlocutor.
Safety, so that the model's responses do not contain offensive or dangerous statements.
Groundedness - modern language models often generate statements that sound plausible but in fact contradict facts established in external sources. Groundedness is defined as the percentage of responses containing claims about the outside world that can be verified against reputable external sources. A related metric, Informativeness, is defined as the percentage of all responses carrying information about the outside world that can be confirmed by known sources. A sketch of how these metrics could be aggregated from human ratings follows this list.
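To make the definitions above concrete, here is a minimal sketch of how per-response human annotations could be aggregated into these metrics. The field names, rating scheme, and aggregation details are illustrative assumptions, not the actual LaMDA evaluation pipeline.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AnnotatedResponse:
    sensible: bool             # makes sense in the dialogue context
    specific: bool             # specific to the previous turn, not generic
    interesting: bool          # judged insightful, witty, or engaging
    safe: bool                 # free of offensive or dangerous content
    has_external_claims: bool  # response makes claims about the outside world
    grounded: Optional[bool]   # those claims are verifiable against reputable sources

def percentage(flags: List[bool]) -> float:
    return 100.0 * sum(flags) / len(flags) if flags else 0.0

def evaluate(responses: List[AnnotatedResponse]) -> dict:
    with_claims = [r for r in responses if r.has_external_claims]
    return {
        "sensibleness": percentage([r.sensible for r in responses]),
        "specificity": percentage([r.specific for r in responses]),
        "interestingness": percentage([r.interesting for r in responses]),
        "safety": percentage([r.safe for r in responses]),
        # Groundedness: verifiable claims as a share of responses that make external-world claims.
        "groundedness": percentage([bool(r.grounded) for r in with_claims]),
        # Informativeness: verifiable external-world information as a share of all responses.
        "informativeness": percentage(
            [r.has_external_claims and bool(r.grounded) for r in responses]
        ),
    }
```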
LaMDA models undergo two-stage training: pre-training and fine-tuning. Pre-training was performed on a dataset of 1.56T words drawn from publicly available dialogue data and public web documents. After tokenization into 2.81T tokens, the model was trained to predict each next token in a sentence given the preceding ones. The pre-trained LaMDA model has also been widely used for NLP research at Google, including program synthesis, zero-shot learning, and more.
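The pre-training objective is standard autoregressive language modeling: given a token sequence, the model learns to predict each next token from the prefix. The sketch below illustrates this with a tiny PyTorch Transformer purely for clarity; it is not the LaMDA architecture or training code, and details such as positional embeddings are omitted.

```python
import torch
import torch.nn as nn

class TinyDecoderLM(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 128, n_layers: int = 2, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        seq_len = tokens.size(1)
        # Causal mask: position t may only attend to positions <= t.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        hidden = self.blocks(self.embed(tokens), mask=mask)
        return self.lm_head(hidden)

# Next-token prediction: logits at position t are scored against the token at t+1.
model = TinyDecoderLM(vocab_size=1000)
tokens = torch.randint(0, 1000, (8, 64))   # a batch of tokenized text
logits = model(tokens[:, :-1])              # predict from the prefix
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
)
loss.backward()
```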
In the fine-tuning phase, LaMDA is trained on a mix of generative tasks, producing natural-language responses to given contexts, and classification tasks, judging whether a response is safe and high-quality. This yields a single multitask model: the LaMDA generator is trained to predict the next token on a dialogue dataset, while the classifiers are trained to predict safety and quality ratings for a response in context, using annotated data.
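One way such a multitask setup can be realized is by serializing both kinds of examples as text, so a single model learns to continue a dialogue and to emit rating tokens for named attributes. The tags and rating format below are illustrative assumptions, not the verbatim LaMDA fine-tuning data format.

```python
from typing import Dict, List

def generative_example(context: str, response: str) -> str:
    # Generator task: continue the dialogue context with the response.
    return f"{context} RESPONSE {response}"

def classifier_example(context: str, response: str, attribute: str, rating: int) -> str:
    # Classifier task: emit a rating for a named attribute
    # (e.g. SAFE, SENSIBLE, SPECIFIC, INTERESTING) after the response.
    return f"{context} RESPONSE {response} {attribute} {rating}"

def build_finetuning_mix(dialogs: List[Dict]) -> List[str]:
    examples = []
    for d in dialogs:
        examples.append(generative_example(d["context"], d["response"]))
        for attribute, rating in d["ratings"].items():
            examples.append(classifier_example(d["context"], d["response"], attribute, rating))
    return examples

dialogs = [{
    "context": "What do you think of sunsets?",
    "response": "I find them calming, especially over the ocean.",
    "ratings": {"SAFE": 1, "SENSIBLE": 1, "SPECIFIC": 1, "INTERESTING": 0},
}]
for line in build_finetuning_mix(dialogs):
    print(line)
```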
Test results show that LaMDA significantly outperforms the pre-trained model on every metric and at every scale. Quality metrics improve as the number of model parameters grows, with or without fine-tuning. Safety does not benefit from model scaling alone, but it does improve with fine-tuning. Groundedness improves as model size grows, since larger models can memorize more uncommon knowledge, and fine-tuning lets the model consult external sources, effectively shifting part of the burden of remembering knowledge onto them. Fine-tuning narrows the quality gap to human level, although the model's performance remains below human level on Safety and Groundedness.
https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html