LLM in Practice by Accumulation Point

LLM blog posts and term-definitions: a practical collection


Architecture

The architecture of a large language model (LLM) is a deep neural network built on the transformer, which uses attention mechanisms to weigh how strongly each input token influences each output token. Pre-trained on a vast corpus of text and then fine-tuned for specific tasks or applications, such a model can generate coherent, contextually relevant text from the input it receives.
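To make the attention idea concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer layer. This is an illustrative toy in NumPy, not production LLM code: the token embeddings and projection matrices are random stand-ins for what a real model would learn during pre-training.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Scores measure query-key similarity, scaled by sqrt(d_k) to keep
    # magnitudes stable; softmax turns each row into attention weights.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    # Each output is a weighted average of the value vectors
    return weights @ V, weights

# Toy example: a "sentence" of 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
# In a real transformer, queries, keys, and values come from
# learned linear projections of the token embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)             # (4, 8): one output vector per token
print(weights.sum(axis=-1))  # each row sums to 1
```

The attention-weight matrix is what lets every output position draw on every input position at once; stacking many such layers (plus feed-forward blocks) and training the projections on huge text corpora is what turns this mechanism into an LLM.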

Last updated 25th March 2023.