From 8c1f1b280f7d1c589ae8c05999bf3e142af28055 Mon Sep 17 00:00:00 2001 From: madroid Date: Sun, 6 Aug 2023 14:23:57 +0800 Subject: [PATCH 01/79] Update README.md: format notable forks --- README.md | 47 +++++++++++++++++++++++++++++------------------ 1 file changed, 29 insertions(+), 18 deletions(-) diff --git a/README.md b/README.md index 1b46f29..34e4aed 100644 --- a/README.md +++ b/README.md @@ -208,25 +208,36 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg ## notable forks -- [llama2.rs](https://github.com/gaxler/llama2.rs) by @gaxler: a Rust port of this project -- [go-llama2](https://github.com/tmc/go-llama2) by @tmc: a Go port of this project -- [llama2.go](https://github.com/nikolaydubina/llama2.go) by @nikolaydubina: a Go port of this project -- [llama2.go](https://github.com/haormj/llama2.go) by @haormj: a Go port of this project -- [llama2.go](https://github.com/saracen/llama2.go) by @saracen: a Go port of this project -- [llama2.c-android](https://github.com/Manuel030/llama2.c-android): by @Manuel030: adds Android binaries of this project -- [llama2.c-android-wrapper](https://github.com/celikin/llama2.c-android-wrapper): by @celikin: added JNI wrapper, PoC -- [llama2.cpp](https://github.com/leloykun/llama2.cpp) by @leloykun: a C++ port of this project -- [llama2.js](https://github.com/epicure/llama2.js) by @epicure: a JavaScript port of this project -- [llama2.zig](https://github.com/cgbur/llama2.zig) by @cgbur: A Zig port of this project -- [llama2.zig](https://github.com/vodkaslime/llama2.zig) by @vodkaslime: a Zig port of this project -- [llama2.jl](https://github.com/juvi21/llama2.jl) by @juvi21: a Julia port of this project +- Rust + - [llama2.rs](https://github.com/gaxler/llama2.rs) by @gaxler: a Rust port of this project + - [llama2.rs](https://github.com/leo-du/llama2.rs) by @leo-du: A Rust port of this project +- Go + - [go-llama2](https://github.com/tmc/go-llama2) by @tmc: a Go port of this project + - [llama2.go](https://github.com/nikolaydubina/llama2.go) by @nikolaydubina: a Go port of this project + - [llama2.go](https://github.com/haormj/llama2.go) by @haormj: a Go port of this project + - [llama2.go](https://github.com/saracen/llama2.go) by @saracen: a Go port of this project +- Android + - [llama2.c-android](https://github.com/Manuel030/llama2.c-android): by @Manuel030: adds Android binaries of this project + - [llama2.c-android-wrapper](https://github.com/celikin/llama2.c-android-wrapper): by @celikin: added JNI wrapper, PoC +- C++ + - [llama2.cpp](https://github.com/leloykun/llama2.cpp) by @leloykun: a C++ port of this project +- JavaScript + - [llama2.js](https://github.com/epicure/llama2.js) by @epicure: a JavaScript port of this project + - [llama2.c-emscripten](https://github.com/gohai/llama2.c-emscripten) by @gohai: Emscripten (JavaScript) port, based on @ggerganov's initial prototype +- Zig + - [llama2.zig](https://github.com/cgbur/llama2.zig) by @cgbur: A Zig port of this project + - [llama2.zig](https://github.com/vodkaslime/llama2.zig) by @vodkaslime: a Zig port of this project + - [llama2.zig](https://github.com/clebert/llama2.zig) by @clebert: a Zig port of this project +- Julia + - [llama2.jl](https://github.com/juvi21/llama2.jl) by @juvi21: a Julia port of this project +- Scala + - [llama2.scala](https://github.com/jrudolph/llama2.scala) by @jrudolph: a Scala port of this project +- Java + - [llama2.java](https://github.com/mukel/llama2.java) by @mukel: a Java port of this project +- Kotlin + - 
[llama2.kt](https://github.com/madroidmaq/llama2.kt) by @madroidmaq: a Kotlin port of this project - [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @trholding: Standalone, Bootable & Portable Binary Llama 2 -- [llama2.rs](https://github.com/leo-du/llama2.rs) by @leo-du: A Rust port of this project -- [llama2.scala](https://github.com/jrudolph/llama2.scala) by @jrudolph: a Scala port of this project -- [llama2.c-emscripten](https://github.com/gohai/llama2.c-emscripten) by @gohai: Emscripten (JavaScript) port, based on @ggerganov's initial prototype -- [llama2.java](https://github.com/mukel/llama2.java) by @mukel: a Java port of this project -- [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @madroidmaq: a Kotlin port of this project -- [llama2.zig](https://github.com/clebert/llama2.zig) by @clebert: a Zig port of this project + ## unsorted todos From fcb4cdef8b38e6bfa5f620a8630ede7d324529eb Mon Sep 17 00:00:00 2001 From: Daniel Grittner Date: Sun, 6 Aug 2023 10:44:48 +0200 Subject: [PATCH 02/79] add a Rust port --- README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/README.md b/README.md index c9926fc..ad0103c 100644 --- a/README.md +++ b/README.md @@ -227,6 +227,7 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - [llama2.java](https://github.com/mukel/llama2.java) by @mukel: a Java port of this project - [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @madroidmaq: a Kotlin port of this project - [llama2.zig](https://github.com/clebert/llama2.zig) by @clebert: a Zig port of this project +- [llama2-rs](https://github.com/danielgrittner/llama2-rs) by @danielgrittner: a Rust port of this project ## unsorted todos From baefaaaf769dc7c383676ce3f75afb81950f051a Mon Sep 17 00:00:00 2001 From: madroid Date: Sun, 6 Aug 2023 17:42:31 +0800 Subject: [PATCH 03/79] Update README.md: add notable forks author's link --- README.md | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/README.md b/README.md index 34e4aed..4054869 100644 --- a/README.md +++ b/README.md @@ -209,34 +209,34 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg ## notable forks - Rust - - [llama2.rs](https://github.com/gaxler/llama2.rs) by @gaxler: a Rust port of this project - - [llama2.rs](https://github.com/leo-du/llama2.rs) by @leo-du: A Rust port of this project + - [llama2.rs](https://github.com/gaxler/llama2.rs) by @[gaxler](https://github.com/gaxler): a Rust port of this project + - [llama2.rs](https://github.com/leo-du/llama2.rs) by @[leo-du](https://github.com/leo-du): A Rust port of this project - Go - - [go-llama2](https://github.com/tmc/go-llama2) by @tmc: a Go port of this project - - [llama2.go](https://github.com/nikolaydubina/llama2.go) by @nikolaydubina: a Go port of this project - - [llama2.go](https://github.com/haormj/llama2.go) by @haormj: a Go port of this project - - [llama2.go](https://github.com/saracen/llama2.go) by @saracen: a Go port of this project + - [go-llama2](https://github.com/tmc/go-llama2) by @[tmc](https://github.com/tmc): a Go port of this project + - [llama2.go](https://github.com/nikolaydubina/llama2.go) by @[nikolaydubina](https://github.com/nikolaydubina): a Go port of this project + - [llama2.go](https://github.com/haormj/llama2.go) by @[haormj](https://github.com/haormj): a Go port of this project + - [llama2.go](https://github.com/saracen/llama2.go) by @[saracen](https://github.com/saracen): a Go port of this project 
- Android - - [llama2.c-android](https://github.com/Manuel030/llama2.c-android): by @Manuel030: adds Android binaries of this project - - [llama2.c-android-wrapper](https://github.com/celikin/llama2.c-android-wrapper): by @celikin: added JNI wrapper, PoC + - [llama2.c-android](https://github.com/Manuel030/llama2.c-android): by @[Manuel030](https://github.com/Manuel030): adds Android binaries of this project + - [llama2.c-android-wrapper](https://github.com/celikin/llama2.c-android-wrapper): by @[celikin](https://github.com/celikin): added JNI wrapper, PoC - C++ - - [llama2.cpp](https://github.com/leloykun/llama2.cpp) by @leloykun: a C++ port of this project + - [llama2.cpp](https://github.com/leloykun/llama2.cpp) by @[leloykun](https://github.com/leloykun): a C++ port of this project - JavaScript - - [llama2.js](https://github.com/epicure/llama2.js) by @epicure: a JavaScript port of this project - - [llama2.c-emscripten](https://github.com/gohai/llama2.c-emscripten) by @gohai: Emscripten (JavaScript) port, based on @ggerganov's initial prototype + - [llama2.js](https://github.com/epicure/llama2.js) by @[epicure](https://github.com/epicure): a JavaScript port of this project + - [llama2.c-emscripten](https://github.com/gohai/llama2.c-emscripten) by @[gohai](https://github.com/gohai): Emscripten (JavaScript) port, based on @ggerganov's initial prototype - Zig - - [llama2.zig](https://github.com/cgbur/llama2.zig) by @cgbur: A Zig port of this project - - [llama2.zig](https://github.com/vodkaslime/llama2.zig) by @vodkaslime: a Zig port of this project - - [llama2.zig](https://github.com/clebert/llama2.zig) by @clebert: a Zig port of this project + - [llama2.zig](https://github.com/cgbur/llama2.zig) by @[cgbur](https://github.com/cgbur): A Zig port of this project + - [llama2.zig](https://github.com/vodkaslime/llama2.zig) by @[vodkaslime](https://github.com/vodkaslime): a Zig port of this project + - [llama2.zig](https://github.com/clebert/llama2.zig) by @[clebert](https://github.com/clebert): a Zig port of this project - Julia - - [llama2.jl](https://github.com/juvi21/llama2.jl) by @juvi21: a Julia port of this project + - [llama2.jl](https://github.com/juvi21/llama2.jl) by @[juvi21](https://github.com/juvi21): a Julia port of this project - Scala - - [llama2.scala](https://github.com/jrudolph/llama2.scala) by @jrudolph: a Scala port of this project + - [llama2.scala](https://github.com/jrudolph/llama2.scala) by @[jrudolph](https://github.com/jrudolph): a Scala port of this project - Java - - [llama2.java](https://github.com/mukel/llama2.java) by @mukel: a Java port of this project + - [llama2.java](https://github.com/mukel/llama2.java) by @[mukel](https://github.com/mukel): a Java port of this project - Kotlin - - [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @madroidmaq: a Kotlin port of this project -- [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @trholding: Standalone, Bootable & Portable Binary Llama 2 + - [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @[madroidmaq](https://github.com/madroidmaq): a Kotlin port of this project +- [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 ## unsorted todos From 9cfb7efb8541c37612dcc5bbf6aa25a118897ca9 Mon Sep 17 00:00:00 2001 From: rdentato Date: Sun, 6 Aug 2023 09:53:02 +0000 Subject: [PATCH 04/79] Changed all the printf() for error/info messages so that they print on stderr. 
--- run.c | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/run.c b/run.c index b8bd0b6..24c98fe 100644 --- a/run.c +++ b/run.c @@ -99,7 +99,7 @@ void malloc_run_state(RunState* s, Config* p) { if (!s->x || !s->xb || !s->xb2 || !s->hb || !s->hb2 || !s->q || !s->k || !s->v || !s->att || !s->logits || !s->key_cache || !s->value_cache || !s->probindex) { - printf("malloc failed!\n"); + fprintf(stderr,"malloc failed!\n"); exit(EXIT_FAILURE); } } @@ -362,7 +362,7 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u for (char *c = text; *c != '\0'; c++) { sprintf(str_buffer, "%c", *c); int id = str_lookup(str_buffer, vocab, vocab_size); - if (id == -1) { printf("not good\n"); exit(EXIT_FAILURE); } + if (id == -1) { fprintf(stderr,"not good\n"); exit(EXIT_FAILURE); } tokens[*n_tokens] = id; (*n_tokens)++; } @@ -500,14 +500,14 @@ int sample_topp(float* probabilities, int n, float topp, ProbIndex* probindex) { // int main void error_usage() { - printf("Usage: run [options]\n"); - printf("Example: run model.bin -n 256 -i \"Once upon a time\"\n"); - printf("Options:\n"); - printf(" -t temperature, default 1.0\n"); - printf(" -p p value in top-p (nucleus) sampling. default 0.9, 0 = off\n"); - printf(" -s random seed, default time(NULL)\n"); - printf(" -n number of steps to run for, default 256. 0 = max_seq_len\n"); - printf(" -i input prompt\n"); + fprintf(stderr,"Usage: run [options]\n"); + fprintf(stderr,"Example: run model.bin -n 256 -i \"Once upon a time\"\n"); + fprintf(stderr,"Options:\n"); + fprintf(stderr," -t temperature, default 1.0\n"); + fprintf(stderr," -p p value in top-p (nucleus) sampling. default 0.9, 0 = off\n"); + fprintf(stderr," -s random seed, default time(NULL)\n"); + fprintf(stderr," -n number of steps to run for, default 256. 0 = max_seq_len\n"); + fprintf(stderr," -i input prompt\n"); exit(EXIT_FAILURE); } @@ -536,7 +536,7 @@ int main(int argc, char *argv[]) { else if (argv[i][1] == 'i') { prompt = argv[i + 1]; } else { error_usage(); } } - if(rng_seed == 0) { printf("Cannot use seed=0 because of the rng alg used\n"); return 1; } + if(rng_seed == 0) { fprintf(stderr,"Cannot use seed=0 because of the rng alg used\n"); return 1; } // read in the model.bin file Config config; @@ -546,7 +546,7 @@ int main(int argc, char *argv[]) { ssize_t file_size; // size of the checkpoint file in bytes { FILE *file = fopen(checkpoint, "rb"); - if (!file) { printf("Couldn't open file %s\n", checkpoint); return 1; } + if (!file) { fprintf(stderr,"Couldn't open file %s\n", checkpoint); return 1; } // read in the config header if (fread(&config, sizeof(Config), 1, file) != 1) { return 1; } // negative vocab size is hacky way of signaling unshared weights. bit yikes. 
@@ -558,9 +558,9 @@ int main(int argc, char *argv[]) { fclose(file); // memory map the Transformer weights into the data pointer fd = open(checkpoint, O_RDONLY); // open in read only mode - if (fd == -1) { printf("open failed!\n"); return 1; } + if (fd == -1) { fprintf(stderr,"open failed!\n"); return 1; } data = mmap(NULL, file_size, PROT_READ, MAP_PRIVATE, fd, 0); - if (data == MAP_FAILED) { printf("mmap failed!\n"); return 1; } + if (data == MAP_FAILED) { fprintf(stderr,"mmap failed!\n"); return 1; } float* weights_ptr = data + sizeof(Config)/sizeof(float); checkpoint_init_weights(&weights, &config, weights_ptr, shared_weights); } @@ -573,14 +573,14 @@ int main(int argc, char *argv[]) { unsigned int max_token_length; { FILE *file = fopen("tokenizer.bin", "rb"); - if (!file) { printf("couldn't load tokenizer.bin\n"); return 1; } - if (fread(&max_token_length, sizeof(int), 1, file) != 1) { printf("failed read\n"); return 1; } + if (!file) { fprintf(stderr,"couldn't load tokenizer.bin\n"); return 1; } + if (fread(&max_token_length, sizeof(int), 1, file) != 1) { fprintf(stderr,"failed read\n"); return 1; } int len; for (int i = 0; i < config.vocab_size; i++) { - if (fread(vocab_scores + i, sizeof(float), 1, file) != 1) { printf("failed read\n"); return 1;} - if (fread(&len, sizeof(int), 1, file) != 1) { printf("failed read\n"); return 1; } + if (fread(vocab_scores + i, sizeof(float), 1, file) != 1) { fprintf(stderr,"failed read\n"); return 1;} + if (fread(&len, sizeof(int), 1, file) != 1) { fprintf(stderr,"failed read\n"); return 1; } vocab[i] = (char *)malloc(len + 1); - if (fread(vocab[i], len, 1, file) != 1) { printf("failed read\n"); return 1; } + if (fread(vocab[i], len, 1, file) != 1) { fprintf(stderr,"failed read\n"); return 1; } vocab[i][len] = '\0'; // add the string terminating token } fclose(file); @@ -647,7 +647,7 @@ int main(int argc, char *argv[]) { // report achieved tok/s long end = time_in_ms(); - printf("\nachieved tok/s: %f\n", (steps-1) / (double)(end-start)*1000); + fprintf(stderr,"\nachieved tok/s: %f\n", (steps-1) / (double)(end-start)*1000); // memory and file handles cleanup free_run_state(&state); From 4e8a3e8d5d86a4fce6d5c086a2602338016394ba Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 6 Aug 2023 15:51:58 +0000 Subject: [PATCH 05/79] fix style issue space with stderr printing --- run.c | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/run.c b/run.c index 24c98fe..8202534 100644 --- a/run.c +++ b/run.c @@ -99,7 +99,7 @@ void malloc_run_state(RunState* s, Config* p) { if (!s->x || !s->xb || !s->xb2 || !s->hb || !s->hb2 || !s->q || !s->k || !s->v || !s->att || !s->logits || !s->key_cache || !s->value_cache || !s->probindex) { - fprintf(stderr,"malloc failed!\n"); + fprintf(stderr, "malloc failed!\n"); exit(EXIT_FAILURE); } } @@ -362,7 +362,7 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u for (char *c = text; *c != '\0'; c++) { sprintf(str_buffer, "%c", *c); int id = str_lookup(str_buffer, vocab, vocab_size); - if (id == -1) { fprintf(stderr,"not good\n"); exit(EXIT_FAILURE); } + if (id == -1) { fprintf(stderr, "not good\n"); exit(EXIT_FAILURE); } tokens[*n_tokens] = id; (*n_tokens)++; } @@ -500,14 +500,14 @@ int sample_topp(float* probabilities, int n, float topp, ProbIndex* probindex) { // int main void error_usage() { - fprintf(stderr,"Usage: run [options]\n"); - fprintf(stderr,"Example: run model.bin -n 256 -i \"Once upon a time\"\n"); - 
fprintf(stderr,"Options:\n"); - fprintf(stderr," -t temperature, default 1.0\n"); - fprintf(stderr," -p p value in top-p (nucleus) sampling. default 0.9, 0 = off\n"); - fprintf(stderr," -s random seed, default time(NULL)\n"); - fprintf(stderr," -n number of steps to run for, default 256. 0 = max_seq_len\n"); - fprintf(stderr," -i input prompt\n"); + fprintf(stderr, "Usage: run [options]\n"); + fprintf(stderr, "Example: run model.bin -n 256 -i \"Once upon a time\"\n"); + fprintf(stderr, "Options:\n"); + fprintf(stderr, " -t temperature, default 1.0\n"); + fprintf(stderr, " -p p value in top-p (nucleus) sampling. default 0.9, 0 = off\n"); + fprintf(stderr, " -s random seed, default time(NULL)\n"); + fprintf(stderr, " -n number of steps to run for, default 256. 0 = max_seq_len\n"); + fprintf(stderr, " -i input prompt\n"); exit(EXIT_FAILURE); } @@ -536,7 +536,7 @@ int main(int argc, char *argv[]) { else if (argv[i][1] == 'i') { prompt = argv[i + 1]; } else { error_usage(); } } - if(rng_seed == 0) { fprintf(stderr,"Cannot use seed=0 because of the rng alg used\n"); return 1; } + if(rng_seed == 0) { fprintf(stderr, "Cannot use seed=0 because of the rng alg used\n"); return 1; } // read in the model.bin file Config config; @@ -546,7 +546,7 @@ int main(int argc, char *argv[]) { ssize_t file_size; // size of the checkpoint file in bytes { FILE *file = fopen(checkpoint, "rb"); - if (!file) { fprintf(stderr,"Couldn't open file %s\n", checkpoint); return 1; } + if (!file) { fprintf(stderr, "Couldn't open file %s\n", checkpoint); return 1; } // read in the config header if (fread(&config, sizeof(Config), 1, file) != 1) { return 1; } // negative vocab size is hacky way of signaling unshared weights. bit yikes. @@ -558,9 +558,9 @@ int main(int argc, char *argv[]) { fclose(file); // memory map the Transformer weights into the data pointer fd = open(checkpoint, O_RDONLY); // open in read only mode - if (fd == -1) { fprintf(stderr,"open failed!\n"); return 1; } + if (fd == -1) { fprintf(stderr, "open failed!\n"); return 1; } data = mmap(NULL, file_size, PROT_READ, MAP_PRIVATE, fd, 0); - if (data == MAP_FAILED) { fprintf(stderr,"mmap failed!\n"); return 1; } + if (data == MAP_FAILED) { fprintf(stderr, "mmap failed!\n"); return 1; } float* weights_ptr = data + sizeof(Config)/sizeof(float); checkpoint_init_weights(&weights, &config, weights_ptr, shared_weights); } @@ -573,14 +573,14 @@ int main(int argc, char *argv[]) { unsigned int max_token_length; { FILE *file = fopen("tokenizer.bin", "rb"); - if (!file) { fprintf(stderr,"couldn't load tokenizer.bin\n"); return 1; } - if (fread(&max_token_length, sizeof(int), 1, file) != 1) { fprintf(stderr,"failed read\n"); return 1; } + if (!file) { fprintf(stderr, "couldn't load tokenizer.bin\n"); return 1; } + if (fread(&max_token_length, sizeof(int), 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1; } int len; for (int i = 0; i < config.vocab_size; i++) { - if (fread(vocab_scores + i, sizeof(float), 1, file) != 1) { fprintf(stderr,"failed read\n"); return 1;} - if (fread(&len, sizeof(int), 1, file) != 1) { fprintf(stderr,"failed read\n"); return 1; } + if (fread(vocab_scores + i, sizeof(float), 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1;} + if (fread(&len, sizeof(int), 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1; } vocab[i] = (char *)malloc(len + 1); - if (fread(vocab[i], len, 1, file) != 1) { fprintf(stderr,"failed read\n"); return 1; } + if (fread(vocab[i], len, 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1; } 
vocab[i][len] = '\0'; // add the string terminating token } fclose(file); @@ -647,7 +647,7 @@ int main(int argc, char *argv[]) { // report achieved tok/s long end = time_in_ms(); - fprintf(stderr,"\nachieved tok/s: %f\n", (steps-1) / (double)(end-start)*1000); + fprintf(stderr, "\nachieved tok/s: %f\n", (steps-1) / (double)(end-start)*1000); // memory and file handles cleanup free_run_state(&state); From 79791f39b49703f14fb015b558a2d8d6e692eb49 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 6 Aug 2023 16:33:23 +0000 Subject: [PATCH 06/79] let's start respecting the BOS token. Don't print it explicitly, and terminate sequence if it appears. This makes sense especially after the recent addition of prompting. Also be careful with timings and making sure they come out right if we exit early in this data-dependent manner --- run.c | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-) diff --git a/run.c b/run.c index 8202534..02cc877 100644 --- a/run.c +++ b/run.c @@ -603,12 +603,12 @@ int main(int argc, char *argv[]) { int next; // will store the next token in the sequence int token = 1; // init with token 1 (=BOS), as done in Llama-2 sentencepiece tokenizer int pos = 0; // position in the sequence - printf("\n"); // explicit print the initial BOS token for stylistic symmetry reasons while (pos < steps) { // forward the transformer to get logits for the next token transformer(token, pos, &config, &state, &weights); + // advance the state state machine if(pos < num_prompt_tokens) { // if we are still processing the input prompt, force the next prompt token next = prompt_tokens[pos]; @@ -632,22 +632,27 @@ int main(int argc, char *argv[]) { } } } + pos++; - // following BOS token (1), sentencepiece decoder strips any leading whitespace (see PR #89) + // data-dependent terminating condition: the BOS (1) token delimits sequences + if (next == 1) { break; } + + // following BOS (1) token, sentencepiece decoder strips any leading whitespace (see PR #89) char *token_str = (token == 1 && vocab[next][0] == ' ') ? 
vocab[next]+1 : vocab[next]; printf("%s", token_str); fflush(stdout); - - // advance forward token = next; - pos++; - // init our timer here because the first iteration is slow due to memmap + + // init the timer here because the first iteration can be slower if (start == 0) { start = time_in_ms(); } } + printf("\n"); - // report achieved tok/s - long end = time_in_ms(); - fprintf(stderr, "\nachieved tok/s: %f\n", (steps-1) / (double)(end-start)*1000); + // report achieved tok/s (pos-1 because the timer starts after first iteration) + if (pos > 1) { + long end = time_in_ms(); + fprintf(stderr, "achieved tok/s: %f\n", (pos-1) / (double)(end-start)*1000); + } // memory and file handles cleanup free_run_state(&state); From 7178facb751a7b33083938bc3b915af535d278b2 Mon Sep 17 00:00:00 2001 From: Aydyn Tairov Date: Sun, 6 Aug 2023 18:45:47 +0100 Subject: [PATCH 07/79] Rebase changes to master --- README.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index ccd77c5..63bb04e 100644 --- a/README.md +++ b/README.md @@ -234,11 +234,12 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - [llama2.scala](https://github.com/jrudolph/llama2.scala) by @[jrudolph](https://github.com/jrudolph): a Scala port of this project - Java - [llama2.java](https://github.com/mukel/llama2.java) by @[mukel](https://github.com/mukel): a Java port of this project +- Python + - [llama2.py](https://github.com/tairov/llama2.py) by @tairov: a simple one file pure Python port of this project with zero dependencies - Kotlin - [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @[madroidmaq](https://github.com/madroidmaq): a Kotlin port of this project - [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 - ## unsorted todos - should calculate freq_cis online in the script run.c instead of loading them From 6734eaeff54da0394ffab7788a0b48d7365e5746 Mon Sep 17 00:00:00 2001 From: Aydyn Tairov Date: Sun, 6 Aug 2023 18:47:05 +0100 Subject: [PATCH 08/79] Rebase chanes to master --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 63bb04e..e860a23 100644 --- a/README.md +++ b/README.md @@ -234,10 +234,10 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - [llama2.scala](https://github.com/jrudolph/llama2.scala) by @[jrudolph](https://github.com/jrudolph): a Scala port of this project - Java - [llama2.java](https://github.com/mukel/llama2.java) by @[mukel](https://github.com/mukel): a Java port of this project -- Python - - [llama2.py](https://github.com/tairov/llama2.py) by @tairov: a simple one file pure Python port of this project with zero dependencies - Kotlin - [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @[madroidmaq](https://github.com/madroidmaq): a Kotlin port of this project +- Python + - [llama2.py](https://github.com/tairov/llama2.py) by @tairov: a simple one file pure Python port of this project with zero dependencies - [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 ## unsorted todos From 2297d158e3b1d31a9d382e7611eabd965c3e1b68 Mon Sep 17 00:00:00 2001 From: Aydyn Tairov Date: Sun, 6 Aug 2023 21:47:05 +0100 Subject: [PATCH 09/79] Fix link to a github profile --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 
deletion(-) diff --git a/README.md b/README.md index e860a23..72e426c 100644 --- a/README.md +++ b/README.md @@ -237,7 +237,7 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - Kotlin - [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @[madroidmaq](https://github.com/madroidmaq): a Kotlin port of this project - Python - - [llama2.py](https://github.com/tairov/llama2.py) by @tairov: a simple one file pure Python port of this project with zero dependencies + - [llama2.py](https://github.com/tairov/llama2.py) by @[tairov](https://github.com/tairov): a simple one file pure Python port of this project with zero dependencies - [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 ## unsorted todos From 999b1bf7760bb235695c9e3601898faf3dbe92b5 Mon Sep 17 00:00:00 2001 From: rdentato Date: Sun, 6 Aug 2023 21:07:09 +0000 Subject: [PATCH 10/79] Added conditinal include of the OpenMP header. --- run.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/run.c b/run.c index 02cc877..4ff9e5d 100644 --- a/run.c +++ b/run.c @@ -20,6 +20,11 @@ $ ./run #include #include #endif + +#ifdef _OPENMP +#include +#endif + // ---------------------------------------------------------------------------- // Transformer and RunState structs, and related memory management From 98b515e44d23687258c08ec19e1e2458b57aa5ae Mon Sep 17 00:00:00 2001 From: Nicolas Pinto Date: Sun, 6 Aug 2023 14:48:47 -0700 Subject: [PATCH 11/79] FIX: model.generate() This patch fixes a simple bug in `generate()` due to model's `forward()` only returning logits and not losses since `f2e34e6b0ac55accd6ba930a04c6f683f5158b29`. --- model.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/model.py b/model.py index 66304e7..f7edbb6 100644 --- a/model.py +++ b/model.py @@ -317,7 +317,7 @@ class Transformer(nn.Module): # if the sequence context is growing too long we must crop it at block_size idx_cond = idx if idx.size(1) <= self.params.max_seq_len else idx[:, -self.params.max_seq_len:] # forward the model to get the logits for the index in the sequence - logits, _ = self(idx_cond) + logits = self(idx_cond) logits = logits[:, -1, :] # crop to just the final time step if temperature == 0.0: # "sample" the single most likely index From e49c16caa5ffdbf2428174adb3d27af7d5c3e3a2 Mon Sep 17 00:00:00 2001 From: rdentato Date: Mon, 7 Aug 2023 06:51:57 +0000 Subject: [PATCH 12/79] Changed how rng_seed is handled. Now 0 is treated as time(NULL). --- run.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/run.c b/run.c index 4ff9e5d..5cb1b96 100644 --- a/run.c +++ b/run.c @@ -522,7 +522,7 @@ int main(int argc, char *argv[]) { char *checkpoint = NULL; // e.g. out/model.bin float temperature = 1.0f; // 0.0 = greedy deterministic. 1.0 = original. 
don't set higher float topp = 0.9f; // top-p in nucleus sampling - rng_seed = (unsigned int)time(NULL); // seed rng with time by default + rng_seed = 0; // seed rng with time by default int steps = 256; // number of steps to run for char *prompt = NULL; // prompt string @@ -541,7 +541,7 @@ int main(int argc, char *argv[]) { else if (argv[i][1] == 'i') { prompt = argv[i + 1]; } else { error_usage(); } } - if(rng_seed == 0) { fprintf(stderr, "Cannot use seed=0 because of the rng alg used\n"); return 1; } + if(rng_seed == 0) { rng_seed = (unsigned int)time(NULL);} // read in the model.bin file Config config; From ff6a2f0a7a257fcfd0f82759b542d0f09af924f6 Mon Sep 17 00:00:00 2001 From: rdentato Date: Mon, 7 Aug 2023 07:28:03 +0000 Subject: [PATCH 13/79] Reset the #include --- run.c | 5 ----- 1 file changed, 5 deletions(-) diff --git a/run.c b/run.c index 5cb1b96..9f4a1b2 100644 --- a/run.c +++ b/run.c @@ -20,11 +20,6 @@ $ ./run #include #include #endif - -#ifdef _OPENMP -#include -#endif - // ---------------------------------------------------------------------------- // Transformer and RunState structs, and related memory management From c02865df300f3bd9e567ce061000dc23bf785a17 Mon Sep 17 00:00:00 2001 From: atamyrat Date: Fri, 4 Aug 2023 04:18:20 +0300 Subject: [PATCH 14/79] prompt tokenizer improvements: utf8 support, add_dummy_prefix and byte_fallback options to match sentencepiece --- run.c | 46 ++++++++++++++++++++++++++++++++++++++-------- tokenizer.bin | Bin 432717 -> 433869 bytes tokenizer.py | 2 -- 3 files changed, 38 insertions(+), 10 deletions(-) diff --git a/run.c b/run.c index 02cc877..f69c21a 100644 --- a/run.c +++ b/run.c @@ -356,15 +356,34 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u // a temporary buffer to merge two consecutive tokens char* str_buffer = malloc((max_token_length*2+1) * sizeof(char)); // *2 for concat, +1 for null terminator + size_t str_len = 0; + + // add_dummy_prefix is true by default + tokens[0] = str_lookup(" ", vocab, vocab_size); + *n_tokens = 1; // the number of tokens // first encode every individual byte in the input string - *n_tokens = 0; // the number of tokens for (char *c = text; *c != '\0'; c++) { - sprintf(str_buffer, "%c", *c); + // reset buffer if the current byte is ASCII or leading byte + if ((*c & 0xC0) != 0x80) + str_len = 0; + + str_buffer[str_len++] = *c; // append byte to the buffer + str_buffer[str_len] = '\0'; + + if ((*(c+1) & 0xC0) == 0x80) // skip if in middle of multi-byte utf8 encoding + continue; + int id = str_lookup(str_buffer, vocab, vocab_size); - if (id == -1) { fprintf(stderr, "not good\n"); exit(EXIT_FAILURE); } - tokens[*n_tokens] = id; - (*n_tokens)++; + + if (id != -1) { + tokens[(*n_tokens)++] = id; + } else { + // byte_fallback encoding + for (int i=0; i' into '\x01' + static char byte_piece[4]; + if (sscanf(token_str, "<0x%02X>", (int*)(&byte_piece)) == 1) { + byte_piece[1] = '\0'; + token_str = byte_piece; + } + return token_str; +} + // ---------------------------------------------------------------------------- // utilities: time / rng @@ -637,9 +669,7 @@ int main(int argc, char *argv[]) { // data-dependent terminating condition: the BOS (1) token delimits sequences if (next == 1) { break; } - // following BOS (1) token, sentencepiece decoder strips any leading whitespace (see PR #89) - char *token_str = (token == 1 && vocab[next][0] == ' ') ? 
vocab[next]+1 : vocab[next]; - printf("%s", token_str); + printf("%s", token_to_str(vocab, next, token)); fflush(stdout); token = next; diff --git a/tokenizer.bin b/tokenizer.bin index e0a8a7bc47fda2ecaab1e2cc2255d140925ed704..e6c1b23ec0c18ebfd7162ef838a77d559b7e3316 100644 GIT binary patch delta 3646 zcmY+_JC55h6op~uHGp?8m{pi>5(o_Nxh8ikQlyPlwZ=WA;@0b2^I-P!AZkNk-Je;;9kR*~q(nto$A~_`gNc~wK@G>GLq>NOMDpH%1 z08nhe(LeM8LII(GP(Uak6cCE1@BtJ6iXGV3dI$xC0zv_yfKWmxH{lAD0LlZ{hh9P` zA(RkG2qlCPLb(eUpaf7(z&>6=C?S*(DhL&X3PN=VN4)}2&A@(B5Gn{2gbG3hp@L9N z!ch;P0M4WWimL#SurO#`R_)Ez<%p@vXHs3Fu4Y6vxe20$bF{Er$44TJ_l z1EGP?KxhCo0GfDR?w{?}GY}dG4TJ_l1EGb`VuThzJAh07^U}2tS_mzK7D5Z5h0tPz z7C^fINB>q2p@q;w=pb|uItU#`=m2ypuy6GaLI&;jTkz-|9I2t9-z zLJy&b&_n1kLJy#S0+;!Z8_GlIA@mS>2t9-zLXQ!80K*0x{i8lW7$6J~1_%R$0m6V0 z1^~kjT=!pqFhCd}3=jqg1B7uCuD}RjJb=soix5T#BZLvc2w{XU?!x8!MgZdk9Q!Xq z7$J-hMhFvx3Bq&;M|}b?&A@(35GDu{gbBg~VS+GC!ch-k0aLzrjb zEd!VV%pJlEVTLe6m?6v%W(YHY8NhPE{woj`2n&P-!UAD|umD&9ECKtkKv*Cw5Ecjv zgayKi5mo@}08aa_LRcZJ5LO5)gcZVy5mo@}0zCF#g|I?cA*>KK2pfbABWwV+6}a!e z24RD+LD(Q{5H<)KM%Vys58&%_`!@(%yZ*YLf8Wnv#`!wVw{gCY^W)$B{PO4hKiNUJ AVE_OC delta 2485 zcmYM!Rdd^56h%=7%Pv#OOfEAsGcz+YgVLsqDKn?cZOV|Dp6C3hN^N{jOfy961DvQLEgNA#5$4I^ZMY>)$TK{-$!Q~(u0B~Teui4la~Nk&ycHBcSY z05w4^P#e?%bwNE)ALM}sX@vV5GHL`GgC?LUXa<^t7N8|)1zLkPpe<-uh9J?NQ3sF$ z9YH718FT?%K{wDH^Z-3UFVGtVeNbP}5A+8Ez(6nv3g5d0;+R02YEpU@=$%mV#x8 zXn(YRIinR|C0GSkgEe3+SO?aF4PYbK1U7>$A%b}QRz};vcCZ8N1iQd)um|h~`@nv1 z02~B|BE;{XID+Vv^EW@5#X4*qu?noC)-mh2b;3Gnow80_XX5)GuRH7EIqST2!78*a zT9>TL))nijb4eFzn#pY}%&KUu}rXX}gg)%s?Aw|-bZtzXt}>(Br6N*F5Na)fe( za)fe(a)fe(a)fe(a)ctnAfd3rbn|kAa)fe(a)fe(a)fe(a)fe(a)fe(iX#l7`sE1a z2;~Un2;~Un2;~Un2=%~!$q~vC$`Pt4e*eao;0Wai'): - t = chr(int(t[3:5], 16)) # e.g. make '<0x01>' into '\x01' t = t.replace('▁', ' ') # sentencepiece uses this character as whitespace b = t.encode('utf-8') # bytes of this token, utf-8 encoded From 57ca3c0401e7b02b6d2ed50004c9837b07022af2 Mon Sep 17 00:00:00 2001 From: madroid Date: Tue, 8 Aug 2023 01:28:07 +0800 Subject: [PATCH 15/79] Add run.ipynb for easier feel the magic --- run.ipynb | 121 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 run.ipynb diff --git a/run.ipynb b/run.ipynb new file mode 100644 index 0000000..f02a4cf --- /dev/null +++ b/run.ipynb @@ -0,0 +1,121 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "HLdoj4cz-xal" + }, + "source": [ + "# Run.c\n", + "\n", + "More details can be found in the [README.md](README.md) ." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "nX78K1Fi-38d" + }, + "source": [ + "## Clone Project" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Une3Ozlnu1B7" + }, + "outputs": [], + "source": [ + "!git clone https://github.com/karpathy/llama2.c.git\n", + "%cd llama2.c" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "1MB5LGla-8Ln" + }, + "source": [ + "## Build" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "V1EGferJv_7o" + }, + "outputs": [], + "source": [ + "!make run" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "MuMpJio8_AKi" + }, + "source": [ + "## Run" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "MRy23xxavJNO" + }, + "outputs": [], + "source": [ + "# run stories15M\n", + "# !wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin\n", + "# !./run stories110M.bin\n", + "\n", + "# run stories42M\n", + "# !wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin\n", + "# !./run stories42M.bin\n", + "\n", + "# run stories110M\n", + "!wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin\n", + "!./run stories110M.bin" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Wi48eJKI_FKO" + }, + "source": [ + "## Run with args" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "YFOv6U74vSSZ" + }, + "outputs": [], + "source": [ + "!./run stories110M.bin -t 0.8 -n 256 -i \"One day, Lily met a Shoggoth\"" + ] + } + ], + "metadata": { + "colab": { + "private_outputs": true, + "provenance": [] + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + }, + "language_info": { + "name": "python" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} From 27c5fc76b130cfd88bb41399b707a7651496be5d Mon Sep 17 00:00:00 2001 From: madroid Date: Tue, 8 Aug 2023 01:50:19 +0800 Subject: [PATCH 16/79] Add Google Colab button --- README.md | 2 ++ run.ipynb | 2 ++ 2 files changed, 4 insertions(+) diff --git a/README.md b/README.md index 241c822..86c5848 100644 --- a/README.md +++ b/README.md @@ -10,6 +10,8 @@ Please note that this started recently as just a fun weekend project: I took my ## feel the magic +[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/karpathy/llama2.c/blob/master/run.ipynb) + First, navigate to the folder when you keep your projects and clone this repository to this folder: ```bash diff --git a/run.ipynb b/run.ipynb index f02a4cf..26b4b77 100644 --- a/run.ipynb +++ b/run.ipynb @@ -8,6 +8,8 @@ "source": [ "# Run.c\n", "\n", + "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/karpathy/llama2.c/blob/master/run.ipynb)\n", + "\n", "More details can be found in the [README.md](README.md) ." ] }, From 9713609023fe330c7679a955150a63f3c0c4cde7 Mon Sep 17 00:00:00 2001 From: madroid Date: Tue, 8 Aug 2023 19:10:45 +0800 Subject: [PATCH 17/79] Add Colab GUI: select model/temperature/prompt/etc --- run.ipynb | 94 +++++++++++++++++++++++-------------------------------- 1 file changed, 40 insertions(+), 54 deletions(-) diff --git a/run.ipynb b/run.ipynb index 26b4b77..95c5b0f 100644 --- a/run.ipynb +++ b/run.ipynb @@ -13,15 +13,6 @@ "More details can be found in the [README.md](README.md) ." 
] }, - { - "cell_type": "markdown", - "metadata": { - "id": "nX78K1Fi-38d" - }, - "source": [ - "## Clone Project" - ] - }, { "cell_type": "code", "execution_count": null, @@ -30,78 +21,73 @@ }, "outputs": [], "source": [ + "#@title Clone Project\n", + "\n", "!git clone https://github.com/karpathy/llama2.c.git\n", "%cd llama2.c" ] }, - { - "cell_type": "markdown", - "metadata": { - "id": "1MB5LGla-8Ln" - }, - "source": [ - "## Build" - ] - }, { "cell_type": "code", "execution_count": null, - "metadata": { - "id": "V1EGferJv_7o" - }, + "metadata": {}, "outputs": [], "source": [ + "#@title Build\n", + "\n", "!make run" ] }, { - "cell_type": "markdown", + "cell_type": "code", + "execution_count": null, "metadata": { - "id": "MuMpJio8_AKi" + "id": "thm0ZBrtSgoC" }, + "outputs": [], "source": [ - "## Run" + "#@title Pick Your Model\n", + "\n", + "#@markdown Choose model\n", + "model = \"stories15M\" #@param [\"stories15M\", \"stories42M\", \"stories110M\"]\n", + "\n", + "download_url = \"\"\n", + "\n", + "if(model == \"stories15M\"):\n", + " download_url = \"https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin\"\n", + "if(model == \"stories42M\"):\n", + " download_url = \"https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin\"\n", + "if(model == \"stories110M\"):\n", + " download_url = \"https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin\"\n", + "\n", + "print(f\"download_url: {download_url}\")\n", + "\n", + "!wget $download_url\n", + "\n", + "model_file = model + \".bin\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { - "id": "MRy23xxavJNO" + "id": "OgAc3KjuT-NM" }, "outputs": [], "source": [ - "# run stories15M\n", - "# !wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin\n", - "# !./run stories110M.bin\n", + "#@title Generate Stories\n", "\n", - "# run stories42M\n", - "# !wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin\n", - "# !./run stories42M.bin\n", + "# Generate args\n", + "max_token = 256 #@param {type:\"slider\", min:32, max:1024, step:32}\n", + "temperature = 0.8 #@param {type:\"slider\", min:0.0, max:1, step:0.05}\n", + "top_p = 0.9 #@param {type:\"slider\", min:0.0, max:1.0, step:0.05}\n", + "prompt = \"One day, Lily met a Shoggoth\" #@param {type:\"string\"}\n", "\n", - "# run stories110M\n", - "!wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin\n", - "!./run stories110M.bin" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Wi48eJKI_FKO" - }, - "source": [ - "## Run with args" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "YFOv6U74vSSZ" - }, - "outputs": [], - "source": [ - "!./run stories110M.bin -t 0.8 -n 256 -i \"One day, Lily met a Shoggoth\"" + "print(f\"model: {model_file}, max_token: {max_token}, temperature: {temperature}, top_p: {top_p}, prompt: {prompt}\")\n", + "print(f\"----------------------------\\n\")\n", + "\n", + "cmd = f'./run {model_file} -t {temperature} -p {top_p} -n {max_token} -i \"{prompt}\"'\n", + "!{cmd}" ] } ], From 96873b02746f106eba9bb48bd91bb8ff89ef1025 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Wed, 9 Aug 2023 02:08:33 +0000 Subject: [PATCH 18/79] refine todos section make more concrete and sort --- README.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index ccd77c5..323bbcf 100644 --- a/README.md +++ b/README.md @@ -241,14 +241,15 @@ If your candidate PRs have elements of 
these it doesn't mean they won't get merg ## unsorted todos -- should calculate freq_cis online in the script run.c instead of loading them -- support Llama 2 7B Chat models and tune run.c to Chat UI/UX -- speed up 7B Llama 2 models sufficiently to work at interactive rates on Apple Silicon MacBooks -- investigate precisions other than just fp32: fp16, and quantization -- investigate running on other backends, especially GPUs - add multiquery support into run.c +- add custom bpe training code and the ability to train a smaller vocabulary (32K is to much) +- should calculate freq_cis online in the script run.c instead of loading them +- int4/8 quantization +- export the model in a more sensible output format with a proper header, etc. +- train a tiny Llama test model (committed to repo) and use it as reference in unit tests +- support Llama 2 7B Chat models and tune run.c to Chat UI/UX +- llama2.cu investigate and merge - (LoRA) finetuning and export of Llama 2 models -- make more better tests to decrease yolo ## License From 256e7f885bac3f5f98cb21287ec064c02c3987fc Mon Sep 17 00:00:00 2001 From: Rahul TR Date: Wed, 9 Aug 2023 17:59:47 +0530 Subject: [PATCH 19/79] Added C# port information in readme --- README.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index b62a8d8..126c901 100644 --- a/README.md +++ b/README.md @@ -238,7 +238,9 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - Kotlin - [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @[madroidmaq](https://github.com/madroidmaq): a Kotlin port of this project - Python - - [llama2.py](https://github.com/tairov/llama2.py) by @[tairov](https://github.com/tairov): a simple one file pure Python port of this project with zero dependencies + - [llama2.py](https://github.com/tairov/llama2.py) by @[tairov](https://github.com/tairov): a simple one file pure Python port of this project with zero dependencies +- C# + - [llama2.cs](https://github.com/trrahul/llama2.cs) by @[trrahul](https://github.com/trrahul): a C# port of this project - [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 ## unsorted todos From 3f69c6cdc43a82a65e4bdc0270fc4ecd9dca7cf9 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Thu, 10 Aug 2023 05:06:49 +0000 Subject: [PATCH 20/79] change the default to use runfast, which imo works just fine --- run.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/run.ipynb b/run.ipynb index 95c5b0f..cd69b79 100644 --- a/run.ipynb +++ b/run.ipynb @@ -35,7 +35,7 @@ "source": [ "#@title Build\n", "\n", - "!make run" + "!make runfast" ] }, { From d45a36cdd2ef86b93094cd6020dd0296e8ad5667 Mon Sep 17 00:00:00 2001 From: Krishnaraj Bhat Date: Thu, 10 Aug 2023 10:59:39 +0530 Subject: [PATCH 21/79] Update readme for openmp on mac --- README.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index b62a8d8..c83b11d 100644 --- a/README.md +++ b/README.md @@ -160,7 +160,7 @@ If compiling with gcc, try experimenting with `-funroll-all-loops`, see PR [#183 ### OpenMP Big improvements can also be achieved by compiling with OpenMP, which "activates" the `#pragma omp parallel for` inside the matmul and attention, allowing the work in the loops to be split up over multiple processors. -You'll need to install the OpenMP library and the clang compiler first (e.g. 
`apt install clang libomp-dev` on ubuntu). I was not able to get improvements from OpenMP on my MacBook, though. Then you can compile with `make runomp`, which does: +You'll need to install the OpenMP library and the clang compiler first (e.g. `apt install clang libomp-dev` on ubuntu). Then you can compile with `make runomp`, which does: ```bash clang -Ofast -fopenmp -march=native run.c -lm -o run @@ -180,6 +180,8 @@ On **Windows**, use `build_msvc.bat` in a Visual Studio Command Prompt to build On **Centos 7**, **Amazon Linux 2018** use `rungnu` Makefile target: `make rungnu` or `make runompgnu` to use openmp. +On **Mac**, use clang from brew for openmp build. Install clang as `brew install llvm` and use the installed clang binary to compile with openmp: `make runomp CC=/opt/homebrew/opt/llvm/bin/clang` + ## ack I trained the llama2.c storyteller models on a 4X A100 40GB box graciously provided by the excellent [Lambda labs](https://lambdalabs.com/service/gpu-cloud), thank you. From c42641205ffe17871af3464f35f51b201e58ebeb Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Thu, 10 Aug 2023 15:23:05 +0000 Subject: [PATCH 22/79] turn off topp sampling by default because it is a bit too slow to be the default. it is likely that turning it on, e.g. -p 0.9 is midlly higher quality and safer samples, but this comes at a cost of too much performance in double digit percent sometimes, for it to be on by default i think... --- README.md | 4 +++- run.c | 6 +++--- 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 0721c19..be6d8d9 100644 --- a/README.md +++ b/README.md @@ -56,7 +56,9 @@ You can also prompt the model with a prefix or a number of additional command li > One day, Lily met a Shoggoth. He was very shy, but was also very generous. Lily said “Hello Shoggy! Can I be your friend?” Shoggy was happy to have a friend and said “Yes, let’s explore the universe together!” So they set off on a journey to explore the universe. As they travelled, Shoggy was happy to explain to Lily about all the wonderful things in the universe. At the end of the day, Lily and Shoggy had gathered lots of wonderful things from the universe, and they both felt very proud. They promised to explore the universe as one big pair and to never stop being generous to each other. -There is also an even better 110M param model available, see [models](#models). Quick note on sampling, the recommendation for good results is to use `-t 1.0 -p 0.9`, i.e. top-p sampling at 0.9 with temperature 1.0 (this is the default). To control the diversity of samples use either the temperature (i.e. vary `-t` between 0 and 1 and keep top-p off with `-p 0`) or the top-p value (i.e. vary `-p` between 0 and 1 and keep `-t 1`), but not both. Nice explainers on LLM sampling strategies include [this](https://peterchng.com/blog/2023/05/02/token-selection-strategies-top-k-top-p-and-temperature/), [this](https://docs.cohere.com/docs/controlling-generation-with-top-k-top-p) or [this](https://huggingface.co/blog/how-to-generate). +There is also an even better 110M param model available, see [models](#models). + +Quick note on sampling, the recommendation for ~best results is to sample with `-t 1.0 -p 0.9`, i.e. temperature 1.0 (default) but also top-p sampling at 0.9 (not default!). The top-p sampling is turned off by default because it can run quite a bit slower. More generally, to control the diversity of samples use either the temperature (i.e. 
vary `-t` between 0 and 1 and keep top-p off with `-p 0`) or the top-p value (i.e. vary `-p` between 0 and 1 and keep `-t 1`), but not both. Nice explainers on LLM sampling strategies include [this](https://peterchng.com/blog/2023/05/02/token-selection-strategies-top-k-top-p-and-temperature/), [this](https://docs.cohere.com/docs/controlling-generation-with-top-k-top-p) or [this](https://huggingface.co/blog/how-to-generate). ## Meta's Llama 2 models diff --git a/run.c b/run.c index 9f4a1b2..afe695f 100644 --- a/run.c +++ b/run.c @@ -504,7 +504,7 @@ void error_usage() { fprintf(stderr, "Example: run model.bin -n 256 -i \"Once upon a time\"\n"); fprintf(stderr, "Options:\n"); fprintf(stderr, " -t temperature, default 1.0\n"); - fprintf(stderr, " -p p value in top-p (nucleus) sampling. default 0.9, 0 = off\n"); + fprintf(stderr, " -p p value in top-p (nucleus) sampling. default 1.0 (=off)\n"); fprintf(stderr, " -s random seed, default time(NULL)\n"); fprintf(stderr, " -n number of steps to run for, default 256. 0 = max_seq_len\n"); fprintf(stderr, " -i input prompt\n"); @@ -516,7 +516,7 @@ int main(int argc, char *argv[]) { // default inits char *checkpoint = NULL; // e.g. out/model.bin float temperature = 1.0f; // 0.0 = greedy deterministic. 1.0 = original. don't set higher - float topp = 0.9f; // top-p in nucleus sampling + float topp = 1.0f; // top-p in nucleus sampling. 1.0 = off. 0.9 works well, but slower rng_seed = 0; // seed rng with time by default int steps = 256; // number of steps to run for char *prompt = NULL; // prompt string @@ -623,7 +623,7 @@ int main(int argc, char *argv[]) { // apply softmax to the logits to get the probabilities for next token softmax(state.logits, config.vocab_size); // we sample from this distribution to get the next token - if (topp <= 0) { + if (topp <= 0 || topp >= 1) { // simply sample from the predicted probability distribution next = sample(state.logits, config.vocab_size); } else { From 4c6f0af9ff3671b0b8053c6a3a512a06bad5c676 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Fri, 11 Aug 2023 03:58:22 +0000 Subject: [PATCH 23/79] add the ability to train a custom sentencepiece tokenizer with a given vocab_size, and pretok with it. some more changes still needed to merge this branch, in train.py and ofc run.c. did this in a sadly bit ugly, but fully backwards compatible way. basically when we use custom tokenizer we create a whole new directory structure for that --- tinystories.py | 115 ++++++++++++++++++++++++++++++++++++++------ tokenizer.py | 13 ++--- train_vocab.sh | 126 +++++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 233 insertions(+), 21 deletions(-) create mode 100755 train_vocab.sh diff --git a/tinystories.py b/tinystories.py index 419e0d5..d41f8fc 100644 --- a/tinystories.py +++ b/tinystories.py @@ -9,6 +9,7 @@ import os import random from typing import List from concurrent.futures import ProcessPoolExecutor +from functools import partial import numpy as np import requests @@ -37,7 +38,7 @@ def download_file(url: str, fname: str, chunk_size=1024): def download(): - """Downloads the dataset to disk.""" + """Downloads the TinyStories dataset to DATA_CACHE_DIR""" os.makedirs(DATA_CACHE_DIR, exist_ok=True) # download the TinyStories dataset, unless it's already downloaded @@ -66,10 +67,63 @@ def download(): print(f"Number of shards: {len(shard_filenames)}") print(f"Example story:\n{data[0]}") +def train_vocab(vocab_size): + """ + Trains a custom sentencepiece tokenizer on the TinyStories dataset. 
+ The custom tokenizer files will be saved in DATA_CACHE_DIR/tok{N} directories, + where N is the vocab size. This is also where the pretok .bin files will go. + """ + assert vocab_size > 0, "Vocab size must be positive" -def process_shard(args): + # output file prefix path for sentencepiece + prefix = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") + + # how many shards we'll use for vocab training, kept low for efficiency + num_shards = 10 + + # 1) export a large chunk of text as a single text file tiny.txt + tiny_file = os.path.join(DATA_CACHE_DIR, "tiny.txt") + data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") + shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.json"))) + + print(f"Writing temporary file {tiny_file} with {num_shards} shards...") + with open(tiny_file, "w") as of: + for shard in tqdm(shard_filenames[:num_shards]): + with open(shard, "r") as f: + data = json.load(f) + for example in data: + text = example["story"] + text = text.strip() + of.write(text + "\n") + print(f"Size is: {os.path.getsize(tiny_file) / 1024 / 1024:.2f} MB") + + # 2) run the train_vocab.sh script that trains the sentencepiece model + print("Will now train the vocab with:") + cmd = f"bash train_vocab.sh {tiny_file} {prefix} {vocab_size}" + print(cmd) + print("OK? [y/N] ") + dec = input() + if dec.lower() != "y": + print("Exiting...") + return + os.system(cmd) + + # 3) optional cleanup, ask the user if they'd like to delete tiny.txt + dec = input(f"Delete the temporary file {tiny_file}? [y/N] ") + if dec.lower() == "y": + os.remove(tiny_file) + print(f"Deleted {tiny_file}") + + print(f"Trained tokenizer is in {prefix}.model") + print("Done.") + + +def process_shard(args, vocab_size): shard_id, shard = args - enc = Tokenizer() + tokenizer_model = None + if vocab_size > 0: + tokenizer_model = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}.model") + enc = Tokenizer(tokenizer_model) with open(shard, "r") as f: data = json.load(f) all_tokens = [] @@ -80,21 +134,37 @@ def process_shard(args): all_tokens.extend(tokens) # convert to uint16 nparray all_tokens = np.array(all_tokens, dtype=np.uint16) - # write to disk - tokenized_filename = shard.replace(".json", ".bin") + # calculate the output filename + if vocab_size == 0: + # if we're using Llama 2, just save the tokenized file in the same dir + tokenized_filename = shard.replace(".json", ".bin") + else: + # save .bin files into a new tok{N} directory + bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") + shard_basename = os.path.basename(shard) + bin_basename = shard_basename.replace(".json", ".bin") + tokenized_filename = os.path.join(bin_dir, bin_basename) + # write the bytes with open(tokenized_filename, "wb") as f: f.write(all_tokens.tobytes()) - print(f"Saved {tokenized_filename}") + # calculate the average sequence length (they are separated by BOS=1) + avg_seq_len = all_tokens.size / ((all_tokens == 1).sum()) + print(f"Saved {tokenized_filename}, average seqlen: {avg_seq_len:.2f}") -def pretokenize(): +def pretokenize(vocab_size): # iterate the shards and tokenize all of them one by one data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.json"))) + if vocab_size > 0: + # .bin files will be saved into tok{N} directory, create it once here + bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") + os.makedirs(bin_dir, exist_ok=True) # process all the shards in a process pool + fun = partial(process_shard, vocab_size=vocab_size) with 
ProcessPoolExecutor() as executor: - executor.map(process_shard, enumerate(shard_filenames)) + executor.map(fun, enumerate(shard_filenames)) print("Done.") @@ -155,14 +225,29 @@ class Task: if __name__ == "__main__": + """ + These stages are designed to be run in order. + + To tokenize data with the Llama 2 tokenizer: + python tinystories.py download + python tinystories.py pretokenize + + To tokenize data with a custom tokenizer we train ourselves with sentencepiece, e.g.: + python tinystories.py download + python tinystories.py train_vocab --vocab_size=2048 + python tinystories.py pretokenize --vocab_size=2048 + """ parser = argparse.ArgumentParser() - parser.add_argument("stage", type=str, choices=["download", "train_tokenizer", "pretokenize"]) + parser.add_argument("stage", type=str, choices=["download", "pretokenize", "train_vocab"]) + parser.add_argument("--vocab_size", type=int, default=0, help="pretokenization vocab size. 0 = use Llama 2 tokenizer.") args = parser.parse_args() # depending on the stage call the appropriate function - fun = { - "download": download, - "pretokenize": pretokenize, - } - fun[args.stage]() - + if args.stage == "download": + download() + elif args.stage == "train_vocab": + train_vocab(vocab_size=args.vocab_size) + elif args.stage == "pretokenize": + pretokenize(vocab_size=args.vocab_size) + else: + raise ValueError(f"Unknown stage {args.stage}") diff --git a/tokenizer.py b/tokenizer.py index 35eee20..981b2ac 100644 --- a/tokenizer.py +++ b/tokenizer.py @@ -10,14 +10,13 @@ from typing import List from sentencepiece import SentencePieceProcessor TOKENIZER_MODEL = "tokenizer.model" # the llama sentencepiece tokenizer model -TOKENIZER_BIN = "tokenizer.bin" # binary version of the tokenizer for inference in C class Tokenizer: - def __init__(self): - model_path = TOKENIZER_MODEL + def __init__(self, tokenizer_model=None): + model_path = tokenizer_model if tokenizer_model else TOKENIZER_MODEL assert os.path.isfile(model_path), model_path self.sp_model = SentencePieceProcessor(model_file=model_path) - #print(f"Loaded SentencePiece model from {model_path}") + self.model_path = model_path # BOS / EOS token IDs self.n_words: int = self.sp_model.vocab_size() @@ -59,12 +58,14 @@ class Tokenizer: tokens.append(b) scores.append(s) - + # record the max token length max_token_length = max(len(t) for t in tokens) # write to a binary file - with open(TOKENIZER_BIN, 'wb') as f: + # the tokenizer.bin file is the same as .model file, but .bin + tokenizer_bin = self.model_path.replace('.model', '.bin') + with open(tokenizer_bin, 'wb') as f: f.write(struct.pack("I", max_token_length)) for bytes, score in zip(tokens, scores): f.write(struct.pack("fI", score, len(bytes))) diff --git a/train_vocab.sh b/train_vocab.sh new file mode 100755 index 0000000..7803af8 --- /dev/null +++ b/train_vocab.sh @@ -0,0 +1,126 @@ +#!/bin/bash + +# Trains a sentencepiece tokenizer model on a bunch of given data, my best +# effort attempt to replicate how Meta trained their Llama 2 tokenizer. + +# usage: $ train_vocab.sh +# example: +# ./train_vocab.sh tiny.txt tokenizer_tiny 1024 +# requirements: +# install https://github.com/google/sentencepiece + +# check if the correct number of arguments are provided +if [ $# -ne 3 ]; then + echo "Usage: $0 " + exit 1 +fi + +# assign command-line arguments to variables +input=$1 +model_prefix=$2 +vocab_size=$3 + +# check if input file exists +if [ ! -f "$input" ]; then + echo "Usage: $0 " + echo "input '$input' not found." 
+ exit 1 +fi + +# check if vocab_size is a positive integer +if ! [[ "$vocab_size" =~ ^[0-9]+$ ]] || [ "$vocab_size" -lt 1 ]; then + echo "Usage: $0 " + echo "vocab_size size must be a positive integer." + exit 1 +fi + +# Print the processed inputs +echo "Input: $input" +echo "Model Prefix: $model_prefix" +echo "Vocabulary Size: $vocab_size" + +# train a sentencepiece tokenizer model +# Llama 2 config can be printed as follows: + +# import sentencepiece.sentencepiece_model_pb2 +# mp = sentencepiece.sentencepiece_model_pb2.ModelProto() +# mp.ParseFromString(open("tokenizer.model", "rb").read()) +# print(mp.trainer_spec) +# print(mp.normalizer_spec) + +# this gives: + +# trainer_spec { +# input: "/large_experiments/theorem/datasets/MERGED/all.test1.merged" +# model_prefix: "spm_model_32k_200M_charcov099995_allowWSO__v2" +# model_type: BPE +# vocab_size: 32000 +# self_test_sample_size: 0 +# input_format: "text" +# character_coverage: 0.9999499917030334 +# input_sentence_size: 200000000 +# seed_sentencepiece_size: 1000000 +# shrinking_factor: 0.75 +# num_threads: 80 +# num_sub_iterations: 2 +# max_sentence_length: 4192 +# shuffle_input_sentence: true +# max_sentencepiece_length: 16 +# split_by_unicode_script: true +# split_by_whitespace: true +# split_by_number: true +# treat_whitespace_as_suffix: false +# split_digits: true +# allow_whitespace_only_pieces: true +# vocabulary_output_piece_score: true +# hard_vocab_limit: true +# use_all_vocab: false +# byte_fallback: true +# required_chars: "" +# unk_id: 0 +# bos_id: 1 +# eos_id: 2 +# pad_id: -1 +# unk_surface: " \342\201\207 " +# unk_piece: "" +# bos_piece: "" +# eos_piece: "" +# pad_piece: "" +# train_extremely_large_corpus: false +# enable_differential_privacy: false +# differential_privacy_noise_level: 0.0 +# differential_privacy_clipping_threshold: 0 +# } +# normalizer_spec { +# name: "identity" +# precompiled_charsmap: "" +# add_dummy_prefix: true +# remove_extra_whitespaces: false +# normalization_rule_tsv: "" +# } + +# let's now use spm_train to train this exact model +# options docs: https://github.com/google/sentencepiece/blob/master/doc/options.md + +# we'll depart on a few settings: +# character_coverage -> 1.0 + +# other important notes: +# --split-digits = true, per the paper +# --allow_whitespace_only_pieces is true, default in spm is false +# --byte_fallback is true, default in spm is false +# --normalization_rule_name is identity, default in spm is nmt_nfkc + +spm_train --input="$input" \ + --model_prefix="$model_prefix" \ + --model_type=bpe \ + --vocab_size="$vocab_size" \ + --self_test_sample_size=0 \ + --input_format="text" \ + --character_coverage=1.0 \ + --num_threads="$(nproc)" \ + --split_digits=true \ + --allow_whitespace_only_pieces=true \ + --byte_fallback=true \ + --unk_surface=" \342\201\207 " \ + --normalization_rule_name=identity \ From f96c7afb2d6a8cac90c8d64ef97f51ed3cb3d2f7 Mon Sep 17 00:00:00 2001 From: icpp Date: Fri, 11 Aug 2023 10:11:32 -0400 Subject: [PATCH 24/79] Notable fork section for WebAssembly Added my repo `icpp-lmm` for running it on the Internet Computer --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index be6d8d9..fd0726f 100644 --- a/README.md +++ b/README.md @@ -245,6 +245,8 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - [llama2.py](https://github.com/tairov/llama2.py) by @[tairov](https://github.com/tairov): a simple one file pure Python port of this project with zero dependencies - C# - 
[llama2.cs](https://github.com/trrahul/llama2.cs) by @[trrahul](https://github.com/trrahul): a C# port of this project +- WebAssembly + - [icpp-llm](https://github.com/icppWorld/icpp-llm): LLMs for the Internet Computer - [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 ## unsorted todos From b0cfa2458d65747424fb4712f072680e2b3bc330 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Fri, 11 Aug 2023 16:47:29 +0000 Subject: [PATCH 25/79] ok i can train and sample a model with a custom tokenizer --- model.py | 5 +++-- sample.py | 6 +++++- tinystories.py | 37 +++++++++++++++++++++++++++++-------- train.py | 14 +++++++++++--- 4 files changed, 48 insertions(+), 14 deletions(-) diff --git a/model.py b/model.py index f7edbb6..7329d6c 100644 --- a/model.py +++ b/model.py @@ -11,12 +11,13 @@ from torch import nn @dataclass class ModelArgs: + # default hyperparameters for the Llama 7B model dim: int = 4096 n_layers: int = 32 n_heads: int = 32 n_kv_heads: Optional[int] = None - vocab_size: int = -1 # defined later by tokenizer - multiple_of: int = 256 # make SwiGLU hidden layer size multiple of large power of 2 + vocab_size: int = 32000 + multiple_of: int = 256 # MLP hidden layer size will be multiple of norm_eps: float = 1e-5 max_seq_len: int = 2048 dropout: float = 0.0 diff --git a/sample.py b/sample.py index 040bc14..93c9407 100644 --- a/sample.py +++ b/sample.py @@ -9,6 +9,8 @@ import tiktoken from model import ModelArgs, Transformer from tokenizer import Tokenizer +from tinystories import get_tokenizer_model_path + # ----------------------------------------------------------------------------- out_dir = 'out' # ignored if init_from is not 'resume' start = "" # or "<|endoftext|>" or etc. 
Can also specify a file, use as: "FILE:prompt.txt" @@ -51,7 +53,9 @@ if compile: model = torch.compile(model) # requires PyTorch 2.0 (optional) # load the tokenizer -enc = Tokenizer() +assert checkpoint["config"]["dataset"] == "tinystories" # TODO: generalize +tokenizer_model = get_tokenizer_model_path(vocab_size=gptconf.vocab_size) +enc = Tokenizer(tokenizer_model=tokenizer_model) # encode the beginning of the prompt if start.startswith('FILE:'): diff --git a/tinystories.py b/tinystories.py index d41f8fc..278c817 100644 --- a/tinystories.py +++ b/tinystories.py @@ -120,9 +120,7 @@ def train_vocab(vocab_size): def process_shard(args, vocab_size): shard_id, shard = args - tokenizer_model = None - if vocab_size > 0: - tokenizer_model = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}.model") + tokenizer_model = get_tokenizer_model_path() enc = Tokenizer(tokenizer_model) with open(shard, "r") as f: data = json.load(f) @@ -171,10 +169,12 @@ def pretokenize(vocab_size): class PretokDataset(torch.utils.data.IterableDataset): """Loads pretokenized examples from disk and yields them as PyTorch tensors.""" - def __init__(self, split, max_seq_len): + def __init__(self, split, max_seq_len, vocab_size, vocab_source): super().__init__() self.split = split self.max_seq_len = max_seq_len + self.vocab_size = vocab_size + self.vocab_source = vocab_source def __iter__(self): # get worker info within a DataLoader @@ -186,8 +186,14 @@ class PretokDataset(torch.utils.data.IterableDataset): seed = 42 + worker_id + 1337 * rank rng = random.Random(seed) print(f"Created a PretokDataset with rng seed {seed}") - data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") - shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.bin"))) + if self.vocab_source == "llama2": + # the .bin files are right along the .json files + bin_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") + shard_filenames = sorted(glob.glob(os.path.join(bin_dir, "*.bin"))) + elif self.vocab_source == "custom": + # the .bin files are in tok{N} directory + bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{self.vocab_size}") + shard_filenames = sorted(glob.glob(os.path.join(bin_dir, "*.bin"))) # train/test split. let's use only shard 0 for test split, rest train shard_filenames = shard_filenames[1:] if self.split == "train" else shard_filenames[:1] while True: @@ -209,12 +215,25 @@ class PretokDataset(torch.utils.data.IterableDataset): y = chunk[1:] yield x, y +# ----------------------------------------------------------------------------- +# public interface functions + +def get_tokenizer_model_path(vocab_size): + """ + Returns path to the sentencepiece tokenizer model for a given vocab size + vocab_size = 0 designates the default Llama 2 tokenizer, in that case + None is returned. 
+ """ + if vocab_size == 0: + return None + else: + return os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}.model") class Task: @staticmethod - def iter_batches(split, batch_size, max_seq_len, device, num_workers=0): - ds = PretokDataset(split, max_seq_len) + def iter_batches(batch_size, device, num_workers=0, **dataset_kwargs): + ds = PretokDataset(**dataset_kwargs) dl = torch.utils.data.DataLoader( ds, batch_size=batch_size, pin_memory=True, num_workers=num_workers ) @@ -223,6 +242,8 @@ class Task: y = y.to(device, non_blocking=True) yield x, y +# ----------------------------------------------------------------------------- +# CLI for constructing the dataset if __name__ == "__main__": """ diff --git a/train.py b/train.py index dbf0b24..662afcf 100644 --- a/train.py +++ b/train.py @@ -47,6 +47,8 @@ wandb_run_name = "run" + datetime.now().strftime("%Y_%m_%d_%H_%M_%S") # data batch_size = 128 # if gradient_accumulation_steps > 1, this is the micro-batch size max_seq_len = 256 +vocab_source = "custom" # llama2|custom; use Lllama 2 vocab from Meta, or custom trained +vocab_size = 512 dataset = "tinystories" # tinystories|tinyshakespeare # model dim = 288 @@ -83,6 +85,10 @@ config = {k: globals()[k] for k in config_keys} # will be useful for logging lr_decay_iters = max_iters # should be ~= max_iters per Chinchilla min_lr = 0.0 # minimum learning rate, should be ~= learning_rate/10 per Chinchilla +# validating checks +assert vocab_source in ["llama2", "custom"] +assert vocab_source == "custom" or vocab_size == 32000, "The vocab from Meta has 32K tokens" + # various inits, derived attributes, I/O setup ddp = int(os.environ.get("RANK", -1)) != -1 # is this a ddp run? if ddp: @@ -128,6 +134,8 @@ iter_batches = partial( task.iter_batches, batch_size=batch_size, max_seq_len=max_seq_len, + vocab_size=vocab_size, + vocab_source=vocab_source, device=device, num_workers=0, ) @@ -142,7 +150,7 @@ model_args = dict( n_layers=n_layers, n_heads=n_heads, n_kv_heads=n_heads, - vocab_size=32000, + vocab_size=vocab_size, multiple_of=multiple_of, max_seq_len=max_seq_len, dropout=dropout, @@ -206,7 +214,7 @@ def estimate_loss(): out = {} model.eval() for split in ["train", "val"]: - batch_iter = iter_batches(split) + batch_iter = iter_batches(split=split) losses = torch.zeros(eval_iters) # keep on CPU for k in range(eval_iters): X, Y = next(batch_iter) @@ -238,7 +246,7 @@ if wandb_log and master_process: wandb.init(project=wandb_project, name=wandb_run_name, config=config) # training loop -train_batch_iter = iter_batches("train") +train_batch_iter = iter_batches(split="train") X, Y = next(train_batch_iter) # fetch the very first batch t0 = time.time() local_iter_num = 0 # number of iterations in the lifetime of this process From d421a95b2bfe593b2d9e5c147f3efc8d128afe0e Mon Sep 17 00:00:00 2001 From: Johannes Rudolph Date: Sat, 12 Aug 2023 20:31:19 +0200 Subject: [PATCH 26/79] optimize sample_topp by filtering out small value elements up front This works because we know that in worst case only 1 element will be selected and therefore the remaining (n-1) elements have to split the remaining (1-topp) probability. Probabilities smaller than that cannot be selected and can be filtered out up front. --- run.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/run.c b/run.c index afe695f..9fd8f76 100644 --- a/run.c +++ b/run.c @@ -465,17 +465,24 @@ int sample_topp(float* probabilities, int n, float topp, ProbIndex* probindex) { // tokens that exceed probability topp. 
This way we never sample tokens that // have very low probabilities and are less likely to go "off the rails". + int n0 = 0; // quicksort indices in descending order of probabilities + // elements smaller than (1 - topp) / (n - 1) cannot be part of the result + // and can be filtered out directly + const float cutoff = (1.0f - topp) / (n - 1); for (int i = 0; i < n; i++) { - probindex[i].index = i; - probindex[i].prob = probabilities[i]; + if (probabilities[i] >= cutoff) { + probindex[n0].index = i; + probindex[n0].prob = probabilities[i]; + n0++; + } } - qsort(probindex, n, sizeof(ProbIndex), compare); + qsort(probindex, n0, sizeof(ProbIndex), compare); // truncate the list where cumulative probability exceeds topp float cumulative_prob = 0.0f; - int last_idx = 0; - for (int i = 0; i < n; i++) { + int last_idx = n0 - 1; // in case of rounding errors consider all elements + for (int i = 0; i < n0; i++) { cumulative_prob += probindex[i].prob; if (cumulative_prob > topp) { last_idx = i; From ea4cedc5884ddbf18da82dc088f33a3ae980f1c6 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 02:00:19 +0000 Subject: [PATCH 27/79] add ability to export custom tokenizer to .bin format for run.c file --- tokenizer.py | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/tokenizer.py b/tokenizer.py index 981b2ac..bc2a35a 100644 --- a/tokenizer.py +++ b/tokenizer.py @@ -4,7 +4,7 @@ import os import struct -from logging import getLogger +import argparse from typing import List from sentencepiece import SentencePieceProcessor @@ -72,5 +72,9 @@ class Tokenizer: f.write(bytes) if __name__ == "__main__": - t = Tokenizer() + parser = argparse.ArgumentParser() + parser.add_argument("-t", "--tokenizer-model", type=str, help="optional path to custom tokenizer ") + args = parser.parse_args() + + t = Tokenizer(args.tokenizer_model) t.export() From f5fc0c245fe10826d4b038d9b9ddd3a6bfc01b92 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 02:12:13 +0000 Subject: [PATCH 28/79] final piece: run.c support for new tokenizer, super ez --- run.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/run.c b/run.c index afe695f..14469ad 100644 --- a/run.c +++ b/run.c @@ -508,6 +508,7 @@ void error_usage() { fprintf(stderr, " -s random seed, default time(NULL)\n"); fprintf(stderr, " -n number of steps to run for, default 256. 0 = max_seq_len\n"); fprintf(stderr, " -i input prompt\n"); + fprintf(stderr, " -z optional path to custom tokenizer\n"); exit(EXIT_FAILURE); } @@ -515,6 +516,7 @@ int main(int argc, char *argv[]) { // default inits char *checkpoint = NULL; // e.g. out/model.bin + char *tokenizer = "tokenizer.bin"; float temperature = 1.0f; // 0.0 = greedy deterministic. 1.0 = original. don't set higher float topp = 1.0f; // top-p in nucleus sampling. 1.0 = off. 
0.9 works well, but slower rng_seed = 0; // seed rng with time by default @@ -534,6 +536,7 @@ int main(int argc, char *argv[]) { else if (argv[i][1] == 's') { rng_seed = atoi(argv[i + 1]); } else if (argv[i][1] == 'n') { steps = atoi(argv[i + 1]); } else if (argv[i][1] == 'i') { prompt = argv[i + 1]; } + else if (argv[i][1] == 'z') { tokenizer = argv[i + 1]; } else { error_usage(); } } if(rng_seed == 0) { rng_seed = (unsigned int)time(NULL);} @@ -567,13 +570,13 @@ int main(int argc, char *argv[]) { // right now we cannot run for more than config.seq_len steps if (steps <= 0 || steps > config.seq_len) { steps = config.seq_len; } - // read in the tokenizer.bin file + // read in the tokenizer .bin file char** vocab = (char**)malloc(config.vocab_size * sizeof(char*)); float* vocab_scores = (float*)malloc(config.vocab_size * sizeof(float)); unsigned int max_token_length; { - FILE *file = fopen("tokenizer.bin", "rb"); - if (!file) { fprintf(stderr, "couldn't load tokenizer.bin\n"); return 1; } + FILE *file = fopen(tokenizer, "rb"); + if (!file) { fprintf(stderr, "couldn't load %s\n", tokenizer); return 1; } if (fread(&max_token_length, sizeof(int), 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1; } int len; for (int i = 0; i < config.vocab_size; i++) { From 00a61dc7f92a94069c0b03bc83c8bf30db1b4aa2 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 02:18:30 +0000 Subject: [PATCH 29/79] remove the tinyshakespeare dataset until i can bring it back later in a nicer form, otherwise right now we just have a ton of copy paste code here --- tinyshakespeare.py | 140 --------------------------------------------- train.py | 5 +- 2 files changed, 1 insertion(+), 144 deletions(-) delete mode 100644 tinyshakespeare.py diff --git a/tinyshakespeare.py b/tinyshakespeare.py deleted file mode 100644 index 602624c..0000000 --- a/tinyshakespeare.py +++ /dev/null @@ -1,140 +0,0 @@ -""" -Download, preprocess and serve the TinyShakespeare dataset as a DataLoader. - -Follows the same interface as the TinyStories dataset. 
-""" - -import argparse -import os -import random - -import numpy as np -import requests -import torch -import torch.distributed as dist -from tqdm import tqdm - -from tokenizer import Tokenizer - -DATA_CACHE_DIR = "data" - -def download_file(url: str, fname: str, chunk_size=1024): - """Helper function to download a file from a given url""" - resp = requests.get(url, stream=True) - total = int(resp.headers.get("content-length", 0)) - with open(fname, "wb") as file, tqdm( - desc=fname, - total=total, - unit="iB", - unit_scale=True, - unit_divisor=1024, - ) as bar: - for data in resp.iter_content(chunk_size=chunk_size): - size = file.write(data) - bar.update(size) - - -def download(): - """Downloads the dataset to disk.""" - os.makedirs(DATA_CACHE_DIR, exist_ok=True) - - # download the TinyShakespeare dataset, unless it's already downloaded - data_url = "https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt" - data_filename = os.path.join(DATA_CACHE_DIR, "tinyshakespeare.txt") - if not os.path.exists(data_filename): - print(f"Downloading {data_url} to {data_filename}...") - download_file(data_url, data_filename) - else: - print(f"{data_filename} already exists, skipping download...") - - print("Download done.") - -def pretokenize(): - enc = Tokenizer() - - data_file = os.path.join(DATA_CACHE_DIR, "tinyshakespeare.txt") - - all_tokens = [] - with open(data_file, "r") as f: - for line in f: - text = line.strip() - tokens = enc.encode(text, bos=True, eos=False) - all_tokens.extend(tokens) - all_tokens = np.array(all_tokens, dtype=np.uint16) - print(f"Total tokens: {len(all_tokens)}") - with open(data_file.replace(".txt", ".bin"), "wb") as f: - f.write(all_tokens.tobytes()) - print(f"Saved {data_file.replace('.txt', '.bin')}") - print("Done.") - - -class PretokDataset(torch.utils.data.IterableDataset): - """Loads pretokenized examples from disk and yields them as PyTorch tensors.""" - - def __init__(self, split, max_seq_len): - super().__init__() - self.split = split - self.max_seq_len = max_seq_len - - def __iter__(self): - # get worker info within a DataLoader - worker_info = torch.utils.data.get_worker_info() - worker_id = worker_info.id if worker_info else 0 - # get DDP rank info - rank = dist.get_rank() if dist.is_initialized() else 0 - # combine the worker_id and worker_rank to create a unique seed for rng - seed = 42 + worker_id + 1337 * rank - rng = random.Random(seed) - print(f"Created a PretokDataset with rng seed {seed}") - data_file = os.path.join(DATA_CACHE_DIR, "tinyshakespeare.bin") - m_all = np.memmap(data_file, dtype=np.uint16, mode="r") - - # split out 10% of the data for validation - split_ix = int(len(m_all) * 0.9) - if self.split == "train": - m = m_all[:split_ix] - else: - m = m_all[split_ix:] - - num_batches = len(m) // self.max_seq_len - num_batches -= 1 # drop the last partial batch - assert num_batches > 0, "this split is way too small? investigate." 
- - while True: - ixs = list(range(num_batches)) - rng.shuffle(ixs) - for ix in ixs: - start = ix * self.max_seq_len - end = start + self.max_seq_len + 1 - # calling .astype will copy the data into a new numpy array, now in RAM - chunk = torch.from_numpy((m[start:end]).astype(np.int64)) - x = chunk[:-1] - y = chunk[1:] - yield x, y - - -class ShakespeareTask: - - @staticmethod - def iter_batches(split, batch_size, max_seq_len, device, num_workers=0): - ds = PretokDataset(split, max_seq_len) - dl = torch.utils.data.DataLoader( - ds, batch_size=batch_size, pin_memory=True, num_workers=num_workers - ) - for x, y in dl: - x = x.to(device, non_blocking=True) - y = y.to(device, non_blocking=True) - yield x, y - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("stage", type=str, choices=["download", "train_tokenizer", "pretokenize"]) - args = parser.parse_args() - - # depending on the stage call the appropriate function - fun = { - "download": download, - "pretokenize": pretokenize, - } - fun[args.stage]() \ No newline at end of file diff --git a/train.py b/train.py index 662afcf..39b4f49 100644 --- a/train.py +++ b/train.py @@ -29,7 +29,6 @@ from torch.distributed import destroy_process_group, init_process_group from torch.nn.parallel import DistributedDataParallel as DDP from tinystories import Task -from tinyshakespeare import ShakespeareTask # ----------------------------------------------------------------------------- # I/O @@ -49,7 +48,6 @@ batch_size = 128 # if gradient_accumulation_steps > 1, this is the micro-batch max_seq_len = 256 vocab_source = "custom" # llama2|custom; use Lllama 2 vocab from Meta, or custom trained vocab_size = 512 -dataset = "tinystories" # tinystories|tinyshakespeare # model dim = 288 n_layers = 6 @@ -129,9 +127,8 @@ ctx = ( ) # task-specific setup -task = {'tinystories': Task, 'tinyshakespeare': ShakespeareTask}[dataset] iter_batches = partial( - task.iter_batches, + Task.iter_batches, batch_size=batch_size, max_seq_len=max_seq_len, vocab_size=vocab_size, From 9c3cfb46a32cc529792f8ae08217035d997c1b3b Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 03:08:07 +0000 Subject: [PATCH 30/79] make default be the llama2 tokenizer --- train.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/train.py b/train.py index 39b4f49..24d6fa6 100644 --- a/train.py +++ b/train.py @@ -46,8 +46,8 @@ wandb_run_name = "run" + datetime.now().strftime("%Y_%m_%d_%H_%M_%S") # data batch_size = 128 # if gradient_accumulation_steps > 1, this is the micro-batch size max_seq_len = 256 -vocab_source = "custom" # llama2|custom; use Lllama 2 vocab from Meta, or custom trained -vocab_size = 512 +vocab_source = "llama2" # llama2|custom; use Lllama 2 vocab from Meta, or custom trained +vocab_size = 32000 # the Llama 2 tokenizer has 32K tokens # model dim = 288 n_layers = 6 From fe49eb222c88787853f47fd3ae5223bb6a5419f3 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 03:16:18 +0000 Subject: [PATCH 31/79] readme for custom tokenizers --- README.md | 41 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 41 insertions(+) diff --git a/README.md b/README.md index be6d8d9..95fd98a 100644 --- a/README.md +++ b/README.md @@ -142,6 +142,47 @@ Which gives the same results. More detailed testing will be done in `test_all.py $ pytest ``` +## custom tokenizers + +In everything above, we've assumed the custom Lllama 2 tokenizer with 32,000 tokens. 
However, in many boutique LLMs, using vocabulary this big might be an overkill. If you have a small application you have in mind, you might be much better off training your own tokenizers. This can make everything nicer - with smaller vocabs your model has fewer parameters (because the token embedding table is a lot smaller), the inference is faster (because there are fewer tokens to predict), and your average sequence length per example could also get smaller (because the compression is a lot more efficient on your data). So let's see how we train a custom tokenizer. + +By default, to pretokenize the tinystories dataset we had to run, in order: + +``` +python tinystories.py download +python tinystories.py pretokenize +``` + +The `pretokenize` stage here loads the Llama 2 tokenizer (vocab size 32,000) and uses it to convert the downloaded text into integers, and saves that to file. We now change this as follows, to train an example 4096-token tokenizer: + +``` +python tinystories.py download +python tinystories.py train_vocab --vocab_size=4096 +python tinystories.py pretokenize --vocab_size=4096 +``` + +The `train_vocab` stage will call the `train_vocab.sh` script, which calls the `sentencepiece` library to train the tokenizer, storing it in a new file `data/tok4096.model`. I tried to reproduce as well as I could the settings that (I think) Meta used to train their vocabulary. This uses the Byte Pair Encoding algorithm that starts out with raw utf8 byte sequences of the text data and then iteratively merges the most common consecutive pairs of tokens to form the vocabulary. Inspect the `tinystories.py` file - the custom tokenizers are stored in a special directory structure indexed by the vocab size. + +Now that we have pretokenized the dataset with our custom tokenizer, we can train the model. The training script `train.py` doesn't care about the exact tokens, it only cares about the vocabulary size so it can correctly initialize the model. So when training your model, make sure to pass in + +``` +python train.py --vocab_source=custom --vocab_size=4096 +``` + +(The defaults are `llama2` and `32000` respectively, which indicates the default Llama 2 tokenizer). This trains the model. Finally we are ready to run inference with our `run.c` script. For that we need two things. Number one, we have to export our tokenizer in the `.bin` format, do that with: + +``` +python tokenizer.py --tokenizer-model=data/tok4096.model +``` + +This writes the tokenizer to `data/tok4096.bin`. Now we can run inference, pointing it to this tokenizer using the `-z` flag: + +``` +./run out/model.bin -z data/tok4096.bin +``` + +This should print the samples. If you leave out the `-z` flag, it will use the default Llama 2 tokenizer, which would generate a good sequence of integers, but they would get translated using a different vocabulary to text, so it would look like gibberish. + ## performance There are many ways to potentially speed up this code depending on your system. Have a look at the [Makefile](Makefile), which contains a lot of notes. 
The `make run` command currently uses the `-O3` optimization by default, i.e.: From 1d14cb8dd8884eefa3f15d06263ec4ab95a4b703 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 03:19:35 +0000 Subject: [PATCH 32/79] add note about 4096 vs 32000 token size on tinystories --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index 95fd98a..331bb7a 100644 --- a/README.md +++ b/README.md @@ -163,6 +163,8 @@ python tinystories.py pretokenize --vocab_size=4096 The `train_vocab` stage will call the `train_vocab.sh` script, which calls the `sentencepiece` library to train the tokenizer, storing it in a new file `data/tok4096.model`. I tried to reproduce as well as I could the settings that (I think) Meta used to train their vocabulary. This uses the Byte Pair Encoding algorithm that starts out with raw utf8 byte sequences of the text data and then iteratively merges the most common consecutive pairs of tokens to form the vocabulary. Inspect the `tinystories.py` file - the custom tokenizers are stored in a special directory structure indexed by the vocab size. +A quick note of interest is that vocab size of 4096 trained specifically on tinystories creates integer sequences with about the same sequence length per example as the default Llama 2 tokenizer of 32000 tokens! This means that our custom, tailored tokenizer is a lot better adapted to our specific text, and can compress it very effectively. So our trained models are smaller and faster. + Now that we have pretokenized the dataset with our custom tokenizer, we can train the model. The training script `train.py` doesn't care about the exact tokens, it only cares about the vocabulary size so it can correctly initialize the model. So when training your model, make sure to pass in ``` From 9ff459b9258c20a5fcf6539e988f003e6e31f255 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 03:24:31 +0000 Subject: [PATCH 33/79] todo changes --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 331bb7a..2c3614e 100644 --- a/README.md +++ b/README.md @@ -292,12 +292,12 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg ## unsorted todos +- revive tests; train a tiny Llama test model (committed to repo) and use it as reference in unit tests +- make it easier to add a new dataset with not too much pain - add multiquery support into run.c -- add custom bpe training code and the ability to train a smaller vocabulary (32K is to much) - should calculate freq_cis online in the script run.c instead of loading them - int4/8 quantization - export the model in a more sensible output format with a proper header, etc. 
-- train a tiny Llama test model (committed to repo) and use it as reference in unit tests - support Llama 2 7B Chat models and tune run.c to Chat UI/UX - llama2.cu investigate and merge - (LoRA) finetuning and export of Llama 2 models From daa9fd9b8a288996b5a3c7913881a46d15cc3932 Mon Sep 17 00:00:00 2001 From: atamyrat Date: Sat, 12 Aug 2023 23:12:35 +0300 Subject: [PATCH 34/79] sort vocabulary for faster lookup with bsearch() --- run.c | 37 ++++++++++++++++++++++++++----------- 1 file changed, 26 insertions(+), 11 deletions(-) diff --git a/run.c b/run.c index f69c21a..46d7a41 100644 --- a/run.c +++ b/run.c @@ -342,24 +342,38 @@ void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* // ---------------------------------------------------------------------------- // byte pair encoding (BPE) tokenizer, encodes strings into tokens so we can prompt -int str_lookup(char *str, char **vocab, int vocab_size) { - // find the first perfect match for str in vocab, return its index or -1 if not found - for (int i = 0; i < vocab_size; i++) { - if (strcmp(str, vocab[i]) == 0) { - return i; - } - } - return -1; +typedef struct { + char *str; + int id; +} TokenIndex; + +int compare_tokens(const void *a, const void *b) { + return strcmp(((TokenIndex*)a)->str, ((TokenIndex*)b)->str); +} + +int str_lookup(char *str, TokenIndex *sorted_vocab, int vocab_size) { + // find the perfect match for str in vocab, return its index or -1 if not found + TokenIndex tok = {str=str}; + TokenIndex *res = bsearch(&tok, sorted_vocab, vocab_size, sizeof(TokenIndex), compare_tokens); + return res!=NULL ? res->id : -1; } void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, unsigned int max_token_length, int *tokens, int *n_tokens) { + // sort vocabulary + TokenIndex *sorted_vocab = malloc(vocab_size * sizeof(TokenIndex)); + for (int i = 0; i < vocab_size; i++) { + sorted_vocab[i].str = vocab[i]; + sorted_vocab[i].id = i; + } + qsort(sorted_vocab, vocab_size, sizeof(TokenIndex), compare_tokens); + // a temporary buffer to merge two consecutive tokens char* str_buffer = malloc((max_token_length*2+1) * sizeof(char)); // *2 for concat, +1 for null terminator size_t str_len = 0; // add_dummy_prefix is true by default - tokens[0] = str_lookup(" ", vocab, vocab_size); + tokens[0] = str_lookup(" ", sorted_vocab, vocab_size); *n_tokens = 1; // the number of tokens // first encode every individual byte in the input string @@ -374,7 +388,7 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u if ((*(c+1) & 0xC0) == 0x80) // skip if in middle of multi-byte utf8 encoding continue; - int id = str_lookup(str_buffer, vocab, vocab_size); + int id = str_lookup(str_buffer, sorted_vocab, vocab_size); if (id != -1) { tokens[(*n_tokens)++] = id; @@ -395,7 +409,7 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u for (int i=0; i < (*n_tokens-1); i++) { // check if we can merge the pair (tokens[i], tokens[i+1]) sprintf(str_buffer, "%s%s", vocab[tokens[i]], vocab[tokens[i+1]]); - int id = str_lookup(str_buffer, vocab, vocab_size); + int id = str_lookup(str_buffer, sorted_vocab, vocab_size); if (id != -1 && vocab_scores[id] > best_score) { // this merge pair exists in vocab! 
record its score and position best_score = vocab_scores[id]; @@ -418,6 +432,7 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u } free(str_buffer); + free(sorted_vocab); } // convert token to printable string From 27adb082f1b71147616e104081bf6a86a93e06b1 Mon Sep 17 00:00:00 2001 From: Tian Lin Date: Sun, 13 Aug 2023 21:58:14 +0800 Subject: [PATCH 35/79] Update README.md --- README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/README.md b/README.md index 2c3614e..57f981c 100644 --- a/README.md +++ b/README.md @@ -259,6 +259,7 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - [llama2.rs](https://github.com/gaxler/llama2.rs) by @[gaxler](https://github.com/gaxler): a Rust port of this project - [llama2.rs](https://github.com/leo-du/llama2.rs) by @[leo-du](https://github.com/leo-du): A Rust port of this project - [llama2-rs](https://github.com/danielgrittner/llama2-rs) by @[danielgrittner](https://github.com/danielgrittner): a Rust port of this project + - [llama2.rs](https://github.com/lintian06/llama2.rs) by @[lintian06](https://github.com/lintian06): A Rust port of this project - Go - [go-llama2](https://github.com/tmc/go-llama2) by @[tmc](https://github.com/tmc): a Go port of this project - [llama2.go](https://github.com/nikolaydubina/llama2.go) by @[nikolaydubina](https://github.com/nikolaydubina): a Go port of this project From 570789aa04e2c487c18778d71f16c33f1bf45d04 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Mihai=20Nad=C4=83=C8=99?= Date: Sun, 13 Aug 2023 17:49:10 +0300 Subject: [PATCH 36/79] Fixes https://github.com/karpathy/llama2.c/issues/280 There was a small bug in tinystories.py, described here: https://github.com/karpathy/llama2.c/issues/280 This commit simply passes vocab_size to get_tokenizer_model_path to avoid silent crash when processing shards (in process_shard) --- tinystories.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tinystories.py b/tinystories.py index 278c817..690cb02 100644 --- a/tinystories.py +++ b/tinystories.py @@ -120,7 +120,7 @@ def train_vocab(vocab_size): def process_shard(args, vocab_size): shard_id, shard = args - tokenizer_model = get_tokenizer_model_path() + tokenizer_model = get_tokenizer_model_path(vocab_size) enc = Tokenizer(tokenizer_model) with open(shard, "r") as f: data = json.load(f) From 1d68a36d14b13200a191e5fe88fbd97db4d88a39 Mon Sep 17 00:00:00 2001 From: Oleksandr Nikitin Date: Sun, 13 Aug 2023 19:10:07 +0300 Subject: [PATCH 37/79] Add TypeScript port I've never been so happy to have missed that the JS port already exists :D also it was nice to discover that the JS can reach 80% of the single-threaded C speed (10 tokens/s for TinyStories-110M) --- README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/README.md b/README.md index 2c3614e..62fdb76 100644 --- a/README.md +++ b/README.md @@ -271,6 +271,7 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - [llama2.cpp](https://github.com/leloykun/llama2.cpp) by @[leloykun](https://github.com/leloykun): a C++ port of this project - JavaScript - [llama2.js](https://github.com/epicure/llama2.js) by @[epicure](https://github.com/epicure): a JavaScript port of this project + - [llama2.ts](https://github.com/wizzard0/llama2.ts) by @[oleksandr_now](https://twitter.com/oleksandr_now): a TypeScript port of this project - [llama2.c-emscripten](https://github.com/gohai/llama2.c-emscripten) by @[gohai](https://github.com/gohai): Emscripten (JavaScript) port, based on 
@ggerganov's initial prototype - Zig - [llama2.zig](https://github.com/cgbur/llama2.zig) by @[cgbur](https://github.com/cgbur): A Zig port of this project From 0e6213c6e0f636d9609761b19e6bc97e4109fd95 Mon Sep 17 00:00:00 2001 From: Oleksandr Nikitin Date: Sun, 13 Aug 2023 20:02:34 +0300 Subject: [PATCH 38/79] Mention I can run the full 7B model --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 62fdb76..fee7fc5 100644 --- a/README.md +++ b/README.md @@ -271,7 +271,7 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - [llama2.cpp](https://github.com/leloykun/llama2.cpp) by @[leloykun](https://github.com/leloykun): a C++ port of this project - JavaScript - [llama2.js](https://github.com/epicure/llama2.js) by @[epicure](https://github.com/epicure): a JavaScript port of this project - - [llama2.ts](https://github.com/wizzard0/llama2.ts) by @[oleksandr_now](https://twitter.com/oleksandr_now): a TypeScript port of this project + - [llama2.ts](https://github.com/wizzard0/llama2.ts) by @[oleksandr_now](https://twitter.com/oleksandr_now): a TypeScript port of this project. Full Llama2-7B capable. - [llama2.c-emscripten](https://github.com/gohai/llama2.c-emscripten) by @[gohai](https://github.com/gohai): Emscripten (JavaScript) port, based on @ggerganov's initial prototype - Zig - [llama2.zig](https://github.com/cgbur/llama2.zig) by @[cgbur](https://github.com/cgbur): A Zig port of this project From 38bfac90a887a1f8d7b61849f4ec58e26b267efe Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 19:34:05 +0000 Subject: [PATCH 39/79] bigchange: add multiquery support in run.c. we can now train and inference multiquery models (where n_kv_heads < n_heads). this also means that we, in principle, support Llama 2 34B and 70B models, which are multiquery --- README.md | 1 - model.py | 1 + run.c | 53 ++++++++++++++++++++++++++++++----------------------- sample.py | 1 - train.py | 3 ++- 5 files changed, 33 insertions(+), 26 deletions(-) diff --git a/README.md b/README.md index 2c3614e..664fb0f 100644 --- a/README.md +++ b/README.md @@ -294,7 +294,6 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - revive tests; train a tiny Llama test model (committed to repo) and use it as reference in unit tests - make it easier to add a new dataset with not too much pain -- add multiquery support into run.c - should calculate freq_cis online in the script run.c instead of loading them - int4/8 quantization - export the model in a more sensible output format with a proper header, etc. diff --git a/model.py b/model.py index 7329d6c..c8c82a9 100644 --- a/model.py +++ b/model.py @@ -94,6 +94,7 @@ class Attention(nn.Module): def __init__(self, args: ModelArgs): super().__init__() self.n_kv_heads = args.n_heads if args.n_kv_heads is None else args.n_kv_heads + assert args.n_heads % self.n_kv_heads == 0 model_parallel_size = 1 self.n_local_heads = args.n_heads // model_parallel_size self.n_local_kv_heads = self.n_kv_heads // model_parallel_size diff --git a/run.c b/run.c index 14469ad..4a6e8c2 100644 --- a/run.c +++ b/run.c @@ -39,11 +39,11 @@ typedef struct { // weights for rmsnorms float* rms_att_weight; // (layer, dim) rmsnorm weights float* rms_ffn_weight; // (layer, dim) - // weights for matmuls - float* wq; // (layer, dim, dim) - float* wk; // (layer, dim, dim) - float* wv; // (layer, dim, dim) - float* wo; // (layer, dim, dim) + // weights for matmuls. 
note dim == n_heads * head_size + float* wq; // (layer, dim, n_heads * head_size) + float* wk; // (layer, dim, n_kv_heads * head_size) + float* wv; // (layer, dim, n_kv_heads * head_size) + float* wo; // (layer, n_heads * head_size, dim) // weights for ffn float* w1; // (layer, hidden_dim, dim) float* w2; // (layer, dim, hidden_dim) @@ -82,6 +82,7 @@ typedef struct { void malloc_run_state(RunState* s, Config* p) { // we calloc instead of malloc to keep valgrind happy + int kv_dim = (p->dim * p->n_kv_heads) / p->n_heads; s->x = calloc(p->dim, sizeof(float)); s->xb = calloc(p->dim, sizeof(float)); s->xb2 = calloc(p->dim, sizeof(float)); @@ -93,8 +94,8 @@ void malloc_run_state(RunState* s, Config* p) { s->att = calloc(p->n_heads * p->seq_len, sizeof(float)); s->logits = calloc(p->vocab_size, sizeof(float)); s->probindex = calloc(p->vocab_size, sizeof(ProbIndex)); - s->key_cache = calloc(p->n_layers * p->seq_len * p->dim, sizeof(float)); - s->value_cache = calloc(p->n_layers * p->seq_len * p->dim, sizeof(float)); + s->key_cache = calloc(p->n_layers * p->seq_len * kv_dim, sizeof(float)); + s->value_cache = calloc(p->n_layers * p->seq_len * kv_dim, sizeof(float)); // ensure all mallocs went fine if (!s->x || !s->xb || !s->xb2 || !s->hb || !s->hb2 || !s->q || !s->k || !s->v || !s->att || !s->logits || !s->key_cache @@ -124,19 +125,20 @@ void free_run_state(RunState* s) { // initialization: read from checkpoint void checkpoint_init_weights(TransformerWeights *w, Config* p, float* f, int shared_weights) { + int head_size = p->dim / p->n_heads; float* ptr = f; w->token_embedding_table = ptr; ptr += p->vocab_size * p->dim; w->rms_att_weight = ptr; ptr += p->n_layers * p->dim; w->wq = ptr; - ptr += p->n_layers * p->dim * p->dim; + ptr += p->n_layers * p->dim * (p->n_heads * head_size); w->wk = ptr; - ptr += p->n_layers * p->dim * p->dim; + ptr += p->n_layers * p->dim * (p->n_kv_heads * head_size); w->wv = ptr; - ptr += p->n_layers * p->dim * p->dim; + ptr += p->n_layers * p->dim * (p->n_kv_heads * head_size); w->wo = ptr; - ptr += p->n_layers * p->dim * p->dim; + ptr += p->n_layers * (p->n_heads * head_size) * p->dim; w->rms_ffn_weight = ptr; ptr += p->n_layers * p->dim; w->w1 = ptr; @@ -148,7 +150,6 @@ void checkpoint_init_weights(TransformerWeights *w, Config* p, float* f, int sha w->rms_final_weight = ptr; ptr += p->dim; w->freq_cis_real = ptr; - int head_size = p->dim / p->n_heads; ptr += p->seq_len * head_size / 2; w->freq_cis_imag = ptr; ptr += p->seq_len * head_size / 2; @@ -218,6 +219,8 @@ void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* // a few convenience variables float *x = s->x; int dim = p->dim; + int kv_dim = (p->dim * p->n_kv_heads) / p->n_heads; + int kv_mul = p->n_heads / p->n_kv_heads; // integer multiplier of the kv sharing in multiquery int hidden_dim = p->hidden_dim; int head_size = dim / p->n_heads; @@ -237,29 +240,33 @@ void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* // qkv matmuls for this position matmul(s->q, s->xb, w->wq + l*dim*dim, dim, dim); - matmul(s->k, s->xb, w->wk + l*dim*dim, dim, dim); - matmul(s->v, s->xb, w->wv + l*dim*dim, dim, dim); + matmul(s->k, s->xb, w->wk + l*dim*kv_dim, dim, kv_dim); + matmul(s->v, s->xb, w->wv + l*dim*kv_dim, dim, kv_dim); // RoPE relative positional encoding: complex-valued rotate q and k by freq_cis in each head for (int i = 0; i < dim; i+=2) { float q0 = s->q[i]; float q1 = s->q[i+1]; - float k0 = s->k[i]; - float k1 = s->k[i+1]; float fcr = freq_cis_real_row[(i % 
head_size) / 2]; float fci = freq_cis_imag_row[(i % head_size) / 2]; s->q[i] = q0 * fcr - q1 * fci; s->q[i+1] = q0 * fci + q1 * fcr; + } + for (int i = 0; i < kv_dim; i+=2) { + float k0 = s->k[i]; + float k1 = s->k[i+1]; + float fcr = freq_cis_real_row[(i % head_size) / 2]; + float fci = freq_cis_imag_row[(i % head_size) / 2]; s->k[i] = k0 * fcr - k1 * fci; s->k[i+1] = k0 * fci + k1 * fcr; } // save key,value at this time step (pos) to our kv cache - int loff = l * p->seq_len * dim; // kv cache layer offset for convenience - float* key_cache_row = s->key_cache + loff + pos * dim; - float* value_cache_row = s->value_cache + loff + pos * dim; - memcpy(key_cache_row, s->k, dim*sizeof(*key_cache_row)); - memcpy(value_cache_row, s->v, dim*sizeof(*value_cache_row)); + int loff = l * p->seq_len * kv_dim; // kv cache layer offset for convenience + float* key_cache_row = s->key_cache + loff + pos * kv_dim; + float* value_cache_row = s->value_cache + loff + pos * kv_dim; + memcpy(key_cache_row, s->k, kv_dim * sizeof(*key_cache_row)); + memcpy(value_cache_row, s->v, kv_dim * sizeof(*value_cache_row)); // multihead attention. iterate over all heads int h; @@ -272,7 +279,7 @@ void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* // iterate over all timesteps, including the current one for (int t = 0; t <= pos; t++) { // get the key vector for this head and at this timestep - float* k = s->key_cache + loff + t * dim + h * head_size; + float* k = s->key_cache + loff + t * kv_dim + (h / kv_mul) * head_size; // calculate the attention score as the dot product of q and k float score = 0.0f; for (int i = 0; i < head_size; i++) { @@ -291,7 +298,7 @@ void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* memset(xb, 0, head_size * sizeof(float)); for (int t = 0; t <= pos; t++) { // get the value vector for this head and at this timestep - float* v = s->value_cache + loff + t * dim + h * head_size; + float* v = s->value_cache + loff + t * kv_dim + (h / kv_mul) * head_size; // get the attention weight for this timestep float a = att[t]; // accumulate the weighted value into xb diff --git a/sample.py b/sample.py index 93c9407..2f66e7f 100644 --- a/sample.py +++ b/sample.py @@ -53,7 +53,6 @@ if compile: model = torch.compile(model) # requires PyTorch 2.0 (optional) # load the tokenizer -assert checkpoint["config"]["dataset"] == "tinystories" # TODO: generalize tokenizer_model = get_tokenizer_model_path(vocab_size=gptconf.vocab_size) enc = Tokenizer(tokenizer_model=tokenizer_model) diff --git a/train.py b/train.py index 24d6fa6..b1972dc 100644 --- a/train.py +++ b/train.py @@ -52,6 +52,7 @@ vocab_size = 32000 # the Llama 2 tokenizer has 32K tokens dim = 288 n_layers = 6 n_heads = 6 +n_kv_heads = 6 multiple_of = 32 dropout = 0.0 # adamw optimizer @@ -146,7 +147,7 @@ model_args = dict( dim=dim, n_layers=n_layers, n_heads=n_heads, - n_kv_heads=n_heads, + n_kv_heads=n_kv_heads, vocab_size=vocab_size, multiple_of=multiple_of, max_seq_len=max_seq_len, From 36b54321e519cdabac2ecb3a1247db82f2aea4bb Mon Sep 17 00:00:00 2001 From: atamyrat Date: Sun, 13 Aug 2023 23:23:32 +0300 Subject: [PATCH 40/79] bugfix: allocate +1 in tokens buffer for dummy whitespace --- run.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/run.c b/run.c index 46d7a41..4680dc5 100644 --- a/run.c +++ b/run.c @@ -641,7 +641,7 @@ int main(int argc, char *argv[]) { int *prompt_tokens = NULL; int num_prompt_tokens = 0; if (prompt != NULL) { - prompt_tokens = (int*)malloc(strlen(prompt) * 
sizeof(int)); + prompt_tokens = (int*)malloc((strlen(prompt)+1) * sizeof(int)); bpe_encode(prompt, vocab, vocab_scores, config.vocab_size, max_token_length, prompt_tokens, &num_prompt_tokens); } From 58075b5ac5935d1f22c3935fdedbcf60de3e1474 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 20:31:32 +0000 Subject: [PATCH 41/79] update API of sample.py to be better, small changes here --- README.md | 3 +-- sample.py | 17 ++++++++++------- 2 files changed, 11 insertions(+), 9 deletions(-) diff --git a/README.md b/README.md index 9b054dc..19db674 100644 --- a/README.md +++ b/README.md @@ -132,8 +132,7 @@ Watch the tokens stream by, fun! We can also run the PyTorch inference script fo ```bash wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.pt -P out15M -mv out15M/stories15M.pt out15M/ckpt.pt # sorry the sample script current assumes this directory structure / filename... -python sample.py --out_dir=out15M +python sample.py --checkpoint=out15M/stories15M.pt ``` Which gives the same results. More detailed testing will be done in `test_all.py`. Currently you will need two files to test or sample: both the .bin file, and the .ckpt file inside a directory (see `test_all.py` for details). Sorry this is a bit janky right now, I have to think through running the tests without having to download 200MB of data. But run the tests with pytest: diff --git a/sample.py b/sample.py index 2f66e7f..64bb177 100644 --- a/sample.py +++ b/sample.py @@ -12,12 +12,13 @@ from tokenizer import Tokenizer from tinystories import get_tokenizer_model_path # ----------------------------------------------------------------------------- -out_dir = 'out' # ignored if init_from is not 'resume' +checkpoint = 'out/ckpt.pt' start = "" # or "<|endoftext|>" or etc. Can also specify a file, use as: "FILE:prompt.txt" num_samples = 1 # number of samples to draw max_new_tokens = 100 # number of tokens generated in each sample temperature = 1.0 # 1.0 = no change, < 1.0 = less random, > 1.0 = more random, in predictions top_k = 300 # retain only the top_k most likely tokens, clamp others to have 0 probability +tokenizer = "" # override the tokenizer model path seed = 1337 device = 'cuda' if torch.cuda.is_available() else 'cpu' # examples: 'cpu', 'cuda', 'cuda:0', 'cuda:1', etc. #dtype = 'bfloat16' if torch.cuda.is_available() and torch.cuda.is_bf16_supported() else 'float16' # 'float32' or 'bfloat16' or 'float16' @@ -35,11 +36,10 @@ ptdtype = {'float32': torch.float32, 'bfloat16': torch.bfloat16, 'float16': torc ctx = nullcontext() if device_type == 'cpu' else torch.amp.autocast(device_type=device_type, dtype=ptdtype) # init from a model saved in a specific directory -ckpt_path = os.path.join(out_dir, 'ckpt.pt') -checkpoint = torch.load(ckpt_path, map_location=device) -gptconf = ModelArgs(**checkpoint['model_args']) +checkpoint_dict = torch.load(checkpoint, map_location=device) +gptconf = ModelArgs(**checkpoint_dict['model_args']) model = Transformer(gptconf) -state_dict = checkpoint['model'] +state_dict = checkpoint_dict['model'] unwanted_prefix = '_orig_mod.' 
for k,v in list(state_dict.items()): if k.startswith(unwanted_prefix): @@ -52,8 +52,11 @@ if compile: print("Compiling the model...") model = torch.compile(model) # requires PyTorch 2.0 (optional) -# load the tokenizer -tokenizer_model = get_tokenizer_model_path(vocab_size=gptconf.vocab_size) +# load the tokenizer, either provided, or attempt to find it +if tokenizer: + tokenizer_model = tokenizer +else: + tokenizer_model = get_tokenizer_model_path(vocab_size=gptconf.vocab_size) enc = Tokenizer(tokenizer_model=tokenizer_model) # encode the beginning of the prompt From 3e989e21f2c25b29caa9ea9f7e22bdb4385c4780 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 20:38:05 +0000 Subject: [PATCH 42/79] link to stories260K model --- README.md | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index 19db674..8b49b74 100644 --- a/README.md +++ b/README.md @@ -85,11 +85,12 @@ base models... ¯\\_(ツ)_/¯. Since we can inference the base model, it should For the sake of examples of smaller, from-scratch models, I trained a small model series on TinyStories. All of these trained in a few hours on my training setup (4X A100 40GB GPUs). The 110M took around 24 hours. I am hosting them on huggingface hub [tinyllamas](https://huggingface.co/karpathy/tinyllamas), both in the original PyTorch .pt, and also in the llama2.c format .bin: -| model | dim | n_layers | n_heads | max context length | parameters | val loss | download -| --- | --- | --- | --- | --- | --- | --- | --- | -| OG | 288 | 6 | 6 | 256 | 15M | 1.072 | [stories15M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin) | -| 42M| 512 | 8 | 8 | 1024 | 42M | 0.847 | [stories42M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin) | -| 110M| 768 | 12 | 12 | 1024 | 110M | 0.760 | [stories110M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin) | +| model | dim | n_layers | n_heads | n_kv_heads | max context length | parameters | val loss | download +| --- | --- | --- | | --- | --- | --- | --- | --- | --- | +| 260K | 64 | 5 | 8 | 4 | 512 | 260K | 1.2968 | [stories260K](https://huggingface.co/karpathy/tinyllamas/tree/main/stories260K) +| OG | 288 | 6 | 6 | 6 | 256 | 15M | 1.072 | [stories15M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin) | +| 42M| 512 | 8 | 8 | 8 | 1024 | 42M | 0.847 | [stories42M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin) | +| 110M| 768 | 12 | 12 | 12 | 1024 | 110M | 0.760 | [stories110M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin) | You'll notice that the 110M model is equivalent to GPT-1 in size. Alternatively, this is also the smallest model in the GPT-2 series (`GPT-2 small`), except the max context length is only 1024 instead of 2048. The only notable changes from GPT-1/2 architecture is that Llama uses RoPE relatively positional embeddings instead of absolute/learned positional embeddings, a bit more fancy SwiGLU non-linearity in the MLP, RMSNorm instead of LayerNorm, bias=False on all Linear layers, and is optionally multiquery (but this is not yet supported in llama2.c). 
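A brief aside on the multiquery ("grouped-query") attention that the n_kv_heads column in the table above refers to, and that the "bigchange: add multiquery support in run.c" patch earlier in this series implements: the sketch below shows the query-head to kv-head mapping that scheme uses. Only the kv_mul = n_heads / n_kv_heads ratio and the h / kv_mul index are taken from that patch; the standalone toy program around them is illustrative and is not code from this repo.

```c
/* Sketch only: shows which kv head each query head reads under multiquery /
   grouped-query attention. The kv_mul ratio and the h / kv_mul index mirror
   the run.c patch above; everything else here (the standalone main, the
   printf) is purely for illustration. */
#include <stdio.h>

int main(void) {
    /* e.g. the stories260K config from the table above: 8 query heads, 4 kv heads */
    int n_heads = 8;
    int n_kv_heads = 4;
    int kv_mul = n_heads / n_kv_heads; /* how many query heads share one kv head */

    for (int h = 0; h < n_heads; h++) {
        /* query head h attends over the cached keys/values of kv head h / kv_mul */
        printf("query head %d -> kv head %d\n", h, h / kv_mul);
    }
    return 0;
}
```

Because several query heads share one cached key/value head, the per-layer kv cache in that patch shrinks from seq_len * dim to seq_len * kv_dim floats, where kv_dim = (dim * n_kv_heads) / n_heads.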
From b2cce341e06bb4699edc8307812643a1da9943c7 Mon Sep 17 00:00:00 2001 From: Andrej Date: Sun, 13 Aug 2023 13:39:12 -0700 Subject: [PATCH 43/79] oops typo fix in readme --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 8b49b74..15efce0 100644 --- a/README.md +++ b/README.md @@ -86,7 +86,7 @@ base models... ¯\\_(ツ)_/¯. Since we can inference the base model, it should For the sake of examples of smaller, from-scratch models, I trained a small model series on TinyStories. All of these trained in a few hours on my training setup (4X A100 40GB GPUs). The 110M took around 24 hours. I am hosting them on huggingface hub [tinyllamas](https://huggingface.co/karpathy/tinyllamas), both in the original PyTorch .pt, and also in the llama2.c format .bin: | model | dim | n_layers | n_heads | n_kv_heads | max context length | parameters | val loss | download -| --- | --- | --- | | --- | --- | --- | --- | --- | --- | +| --- | --- | --- | --- | --- | --- | --- | --- | --- | | 260K | 64 | 5 | 8 | 4 | 512 | 260K | 1.2968 | [stories260K](https://huggingface.co/karpathy/tinyllamas/tree/main/stories260K) | OG | 288 | 6 | 6 | 6 | 256 | 15M | 1.072 | [stories15M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin) | | 42M| 512 | 8 | 8 | 8 | 1024 | 42M | 0.847 | [stories42M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin) | From 0805cb2c317146a38e893bad8286b0f14860fe97 Mon Sep 17 00:00:00 2001 From: Andrej Date: Sun, 13 Aug 2023 13:40:09 -0700 Subject: [PATCH 44/79] tiny whitespace fix to try to eliminate scrollbar --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 15efce0..5c04483 100644 --- a/README.md +++ b/README.md @@ -87,7 +87,7 @@ For the sake of examples of smaller, from-scratch models, I trained a small mode | model | dim | n_layers | n_heads | n_kv_heads | max context length | parameters | val loss | download | --- | --- | --- | --- | --- | --- | --- | --- | --- | -| 260K | 64 | 5 | 8 | 4 | 512 | 260K | 1.2968 | [stories260K](https://huggingface.co/karpathy/tinyllamas/tree/main/stories260K) +| 260K | 64 | 5 | 8 | 4 | 512 | 260K | 1.297 | [stories260K](https://huggingface.co/karpathy/tinyllamas/tree/main/stories260K) | OG | 288 | 6 | 6 | 6 | 256 | 15M | 1.072 | [stories15M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin) | | 42M| 512 | 8 | 8 | 8 | 1024 | 42M | 0.847 | [stories42M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin) | | 110M| 768 | 12 | 12 | 12 | 1024 | 110M | 0.760 | [stories110M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin) | From f0024cfc885a1f5bac58200ee4aaf00caefcf0b4 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 21:22:44 +0000 Subject: [PATCH 45/79] revive tests. now that we have a tiny stories260K model this only requires a 2MB download. phew --- README.md | 17 +++++++---- test_all.py | 86 ++++++++++++++++++++++++++++++++++++----------------- 2 files changed, 70 insertions(+), 33 deletions(-) diff --git a/README.md b/README.md index 5c04483..d2a478a 100644 --- a/README.md +++ b/README.md @@ -136,11 +136,7 @@ wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.pt -P ou python sample.py --checkpoint=out15M/stories15M.pt ``` -Which gives the same results. More detailed testing will be done in `test_all.py`. 
Currently you will need two files to test or sample: both the .bin file, and the .ckpt file inside a directory (see `test_all.py` for details). Sorry this is a bit janky right now, I have to think through running the tests without having to download 200MB of data. But run the tests with pytest: - -```bash -$ pytest -``` +Which gives the same results. ## custom tokenizers @@ -227,6 +223,17 @@ On **Windows**, use `build_msvc.bat` in a Visual Studio Command Prompt to build On **Centos 7**, **Amazon Linux 2018** use `rungnu` Makefile target: `make rungnu` or `make runompgnu` to use openmp. +## tests + +You can run tests simply with pytest: + +```bash +$ pip install pytest +$ pytest +``` + +This will currently invoke two tests inside `test_all.py`, which forward the model in both C and Python for 200 steps and check the output against a known good expected output. The tests currently run in only a few seconds, but will have to download and cache the stories260K models in a temporary `test` directory (only ~2MB download). + ## ack I trained the llama2.c storyteller models on a 4X A100 40GB box graciously provided by the excellent [Lambda labs](https://lambdalabs.com/service/gpu-cloud), thank you. diff --git a/test_all.py b/test_all.py index 8563614..e8590ea 100644 --- a/test_all.py +++ b/test_all.py @@ -4,37 +4,65 @@ $ pytest """ import os import pytest # pip install pytest +import requests import subprocess + import torch from model import ModelArgs, Transformer +from tokenizer import Tokenizer -def test_argmax_inference(): - """ - Only the simplest test for now: run inference with temperature 0 - (for determinism) in both C and PyTorch, and see that the sampled tokens - are the same. - """ - test_ckpt_dir = "out" # TODO create a dummy test checkpoint for this? +# ----------------------------------------------------------------------------- +# test utilities - # run C version - model_path = os.path.join(test_ckpt_dir, "model.bin") - command = ["./run", model_path, "0.0"] - proc = subprocess.Popen(command, stdout=subprocess.PIPE) - c_tokens = [] - for line in proc.stdout: - token = int(line.decode('utf-8').strip()) - c_tokens.append(token) - proc.wait() - #print(c_tokens) +test_ckpt_dir = "test" - # run PyTorch version - device = "cuda" if torch.cuda.is_available() else "cpu" - ckpt_path = os.path.join(test_ckpt_dir, "ckpt.pt") - checkpoint = torch.load(ckpt_path, map_location=device) - gptconf = ModelArgs(**checkpoint['model_args']) +def download_file(url, filename): + print(f"Downloading {url} to {filename}") + response = requests.get(url, stream=True) + response.raise_for_status() # Raise an HTTPError on bad status code + with open(filename, 'wb') as file: + for chunk in response.iter_content(chunk_size=8192): + file.write(chunk) + +def attempt_download_files(): + os.makedirs(test_ckpt_dir, exist_ok=True) + root_url = "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories260K" + need = ["stories260K.bin", "stories260K.pt", "tok512.bin", "tok512.model"] + for file in need: + url = os.path.join(root_url, file) + filename = os.path.join(test_ckpt_dir, file) + if not os.path.exists(filename): + download_file(url, filename) + +expected_stdout = b'Once upon a time, there was a little girl named Lily. She loved to play outside in the park. One day, she saw a big, red ball. She wanted to play with it, but it was too high.\nLily\'s mom said, "Lily, let\'s go to the park." Lily was sad and didn\'t know what to do. 
She said, "I want to play with your ball, but I can\'t find it."\nLily was sad and didn\'t know what to do. She said, "I\'m sorry, Lily. I didn\'t know what to do."\nLily didn\'t want to help her mom, so she' + +# ----------------------------------------------------------------------------- +# actual tests + +def test_runc(): + """ Forwards a model against a known-good desired outcome in run.c for 200 steps""" + + model_path = os.path.join(test_ckpt_dir, "stories260K.bin") + tokenizer_path = os.path.join(test_ckpt_dir, "tok512.bin") + command = ["./run", model_path, "-z", tokenizer_path, "-t", "0.0", "-n", "200"] + proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + + stdout, stderr = proc.communicate() + + # strip the very last \n that is added by run.c for aesthetic reasons + stdout = stdout[:-1] + assert stdout == expected_stdout + +def test_python(): + """ Forwards a model against a known-good desired outcome in sample.py for 200 steps""" + + device = "cpu" # stories260K is small enough to just breeze through it on CPU + checkpoint = os.path.join(test_ckpt_dir, "stories260K.pt") + checkpoint_dict = torch.load(checkpoint, map_location=device) + gptconf = ModelArgs(**checkpoint_dict['model_args']) model = Transformer(gptconf) - state_dict = checkpoint['model'] + state_dict = checkpoint_dict['model'] unwanted_prefix = '_orig_mod.' for k,v in list(state_dict.items()): if k.startswith(unwanted_prefix): @@ -44,10 +72,12 @@ def test_argmax_inference(): model.to(device) x = torch.tensor([[1]], dtype=torch.long, device=device) # 1 is BOS with torch.inference_mode(): - y = model.generate(x, max_new_tokens=gptconf.max_seq_len, temperature=0.0) + y = model.generate(x, max_new_tokens=200, temperature=0.0) pt_tokens = y[0].tolist() - pt_tokens = pt_tokens[1:] # remove BOS - #print(pt_tokens) - # compare - assert c_tokens == pt_tokens + tokenizer_model = os.path.join(test_ckpt_dir, "tok512.model") + enc = Tokenizer(tokenizer_model=tokenizer_model) + text = enc.decode(pt_tokens) + text = text.encode('ascii') # turn into bytes + + assert text == expected_stdout \ No newline at end of file From 850603618597cd1ef88de482b2ba49be2190cfd1 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 21:23:27 +0000 Subject: [PATCH 46/79] remove 'revive tests' as a todo from the readme --- README.md | 1 - 1 file changed, 1 deletion(-) diff --git a/README.md b/README.md index d2a478a..e14e39f 100644 --- a/README.md +++ b/README.md @@ -302,7 +302,6 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg ## unsorted todos -- revive tests; train a tiny Llama test model (committed to repo) and use it as reference in unit tests - make it easier to add a new dataset with not too much pain - should calculate freq_cis online in the script run.c instead of loading them - int4/8 quantization From 86325bf7e83392e488e4442649b65f73d70d2b07 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 23:35:29 +0000 Subject: [PATCH 47/79] attempt to upgrade the CI to run our pytest --- .github/workflows/build.yml | 22 +++++++++++++++++++--- 1 file changed, 19 insertions(+), 3 deletions(-) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index a954469..f8b216b 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -4,10 +4,12 @@ on: push: branches: - master - paths: ['.github/workflows/**', '**/Makefile', '**/*.c', '**/*.h'] + paths: ['.github/workflows/**', '**/Makefile', '**/*.c', '**/*.h', '**/*.py'] 
pull_request: types: [opened, synchronize, reopened] - paths: ['**/Makefile', '**/*.c', '**/*.h'] + paths: ['**/Makefile', '**/*.c', '**/*.h', '**/*.py'] + # for manual triggering + workflow_dispatch: env: BRANCH_NAME: ${{ github.head_ref || github.ref_name }} @@ -15,7 +17,7 @@ env: jobs: # check basic builds to avoid breaking changes ubuntu-focal-make: - runs-on: ubuntu-20.04 + runs-on: ubuntu-latest steps: - name: Clone @@ -28,6 +30,16 @@ jobs: sudo apt-get update sudo apt-get install build-essential -y + - name: Set up Python 3.10 + uses: actions/setup-python@v3 + with: + python-version: "3.10" + + - name: Pip setup + run: | + python -m pip install --upgrade pip + if [ -f requirements.txt ]; then pip install -r requirements.txt; fi + - name: Build id: make_build run: | @@ -38,6 +50,10 @@ jobs: run: | make runfast + - name: Test with pytest + run: | + pytest + macOS-latest-make: runs-on: macos-latest From 223a67048adede28f43993dd862d49d6950c4347 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 23:39:37 +0000 Subject: [PATCH 48/79] add optional manual dispatch of actions --- .github/workflows/build.yml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index a954469..13b5be4 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -8,6 +8,8 @@ on: pull_request: types: [opened, synchronize, reopened] paths: ['**/Makefile', '**/*.c', '**/*.h'] + # for manual triggering + workflow_dispatch: env: BRANCH_NAME: ${{ github.head_ref || github.ref_name }} From c970f69334fa8f07a8d359430097bca86a96e754 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Sun, 13 Aug 2023 23:48:01 +0000 Subject: [PATCH 49/79] oops i should probably call this function lol --- test_all.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/test_all.py b/test_all.py index e8590ea..625af44 100644 --- a/test_all.py +++ b/test_all.py @@ -42,6 +42,7 @@ expected_stdout = b'Once upon a time, there was a little girl named Lily. She lo def test_runc(): """ Forwards a model against a known-good desired outcome in run.c for 200 steps""" + attempt_download_files() model_path = os.path.join(test_ckpt_dir, "stories260K.bin") tokenizer_path = os.path.join(test_ckpt_dir, "tok512.bin") @@ -56,6 +57,7 @@ def test_runc(): def test_python(): """ Forwards a model against a known-good desired outcome in sample.py for 200 steps""" + attempt_download_files() device = "cpu" # stories260K is small enough to just breeze through it on CPU checkpoint = os.path.join(test_ckpt_dir, "stories260K.pt") From 854c97b660fc8527a979ab5cf26436a6146f2ade Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Mon, 14 Aug 2023 00:12:45 +0000 Subject: [PATCH 50/79] turn topp 0.9 back on by default thanks to recent PR contributions truncating before quicksort --- README.md | 2 +- run.c | 8 ++++---- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index fe1b32f..99416d5 100644 --- a/README.md +++ b/README.md @@ -58,7 +58,7 @@ You can also prompt the model with a prefix or a number of additional command li There is also an even better 110M param model available, see [models](#models). -Quick note on sampling, the recommendation for ~best results is to sample with `-t 1.0 -p 0.9`, i.e. temperature 1.0 (default) but also top-p sampling at 0.9 (not default!). The top-p sampling is turned off by default because it can run quite a bit slower. More generally, to control the diversity of samples use either the temperature (i.e. 
vary `-t` between 0 and 1 and keep top-p off with `-p 0`) or the top-p value (i.e. vary `-p` between 0 and 1 and keep `-t 1`), but not both. Nice explainers on LLM sampling strategies include [this](https://peterchng.com/blog/2023/05/02/token-selection-strategies-top-k-top-p-and-temperature/), [this](https://docs.cohere.com/docs/controlling-generation-with-top-k-top-p) or [this](https://huggingface.co/blog/how-to-generate). +Quick note on sampling, the recommendation for ~best results is to sample with `-t 1.0 -p 0.9`, i.e. temperature 1.0 (default) but also top-p sampling at 0.9 (default). Intuitively, top-p ensures that tokens with tiny probabilities do not get sampled, so we can't get "unlucky" during sampling, and we are less likely to go "off the rails" afterwards. More generally, to control the diversity of samples use either the temperature (i.e. vary `-t` between 0 and 1 and keep top-p off with `-p 0`) or the top-p value (i.e. vary `-p` between 0 and 1 and keep `-t 1`), but not both. Nice explainers on LLM sampling strategies include [this](https://peterchng.com/blog/2023/05/02/token-selection-strategies-top-k-top-p-and-temperature/), [this](https://docs.cohere.com/docs/controlling-generation-with-top-k-top-p) or [this](https://huggingface.co/blog/how-to-generate). ## Meta's Llama 2 models diff --git a/run.c b/run.c index d66e838..426e7e8 100644 --- a/run.c +++ b/run.c @@ -474,8 +474,8 @@ int sample_topp(float* probabilities, int n, float topp, ProbIndex* probindex) { int n0 = 0; // quicksort indices in descending order of probabilities - // elements smaller than (1 - topp) / (n - 1) cannot be part of the result - // and can be filtered out directly + // values smaller than (1 - topp) / (n - 1) cannot be part of the result + // so for efficiency we crop these out as candidates before sorting const float cutoff = (1.0f - topp) / (n - 1); for (int i = 0; i < n; i++) { if (probabilities[i] >= cutoff) { @@ -518,7 +518,7 @@ void error_usage() { fprintf(stderr, "Example: run model.bin -n 256 -i \"Once upon a time\"\n"); fprintf(stderr, "Options:\n"); fprintf(stderr, " -t temperature, default 1.0\n"); - fprintf(stderr, " -p p value in top-p (nucleus) sampling. default 1.0 (=off)\n"); + fprintf(stderr, " -p p value in top-p (nucleus) sampling. default 0.9\n"); fprintf(stderr, " -s random seed, default time(NULL)\n"); fprintf(stderr, " -n number of steps to run for, default 256. 0 = max_seq_len\n"); fprintf(stderr, " -i input prompt\n"); @@ -532,7 +532,7 @@ int main(int argc, char *argv[]) { char *checkpoint = NULL; // e.g. out/model.bin char *tokenizer = "tokenizer.bin"; float temperature = 1.0f; // 0.0 = greedy deterministic. 1.0 = original. don't set higher - float topp = 1.0f; // top-p in nucleus sampling. 1.0 = off. 0.9 works well, but slower + float topp = 0.9f; // top-p in nucleus sampling. 1.0 = off. 
0.9 works well, but slower rng_seed = 0; // seed rng with time by default int steps = 256; // number of steps to run for char *prompt = NULL; // prompt string From 45afa91dca8808f4d767d132210e7093c42f004c Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Mon, 14 Aug 2023 02:54:27 +0000 Subject: [PATCH 51/79] the accum function has been bothering me, there is no real need to add a function here, it does something trivial and is only used twice, scrap --- run.c | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/run.c b/run.c index 426e7e8..df95e6f 100644 --- a/run.c +++ b/run.c @@ -159,12 +159,6 @@ void checkpoint_init_weights(TransformerWeights *w, Config* p, float* f, int sha // ---------------------------------------------------------------------------- // neural net blocks -void accum(float *a, float *b, int size) { - for (int i = 0; i < size; i++) { - a[i] += b[i]; - } -} - void rmsnorm(float* o, float* x, float* weight, int size) { // calculate sum of squares float ss = 0.0f; @@ -312,7 +306,9 @@ void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* matmul(s->xb2, s->xb, w->wo + l*dim*dim, dim, dim); // residual connection back into x - accum(x, s->xb2, dim); + for (int i = 0; i < dim; i++) { + x[i] += s->xb2[i]; + } // ffn rmsnorm rmsnorm(s->xb, x, w->rms_ffn_weight + l*dim, dim); @@ -336,7 +332,9 @@ void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* matmul(s->xb, s->hb, w->w2 + l*dim*hidden_dim, hidden_dim, dim); // residual connection - accum(x, s->xb, dim); + for (int i = 0; i < dim; i++) { + x[i] += s->xb[i]; + } } // final rmsnorm From bae0bcf484493df65097a9fdae8b6157f338bf8d Mon Sep 17 00:00:00 2001 From: Andrej Date: Sun, 13 Aug 2023 20:03:00 -0700 Subject: [PATCH 52/79] Small tweaks to Readme intro --- README.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 99416d5..a180208 100644 --- a/README.md +++ b/README.md @@ -4,9 +4,11 @@ Cute Llama

-With the code in this repo you can train the Llama 2 LLM architecture from scratch in PyTorch, then export the weights to a binary file, and load that into one ~simple 500-line C file ([run.c](run.c)) that inferences the model. Alternatively, you can load, finetune, and inference Meta's Llama 2 (but this is still being actively fleshed out). Hence, this repo is a "fullstack" train + inference solution for Llama 2 LLM, with a focus on minimalism and simplicity. You might think that you need many billion parameter LLMs to do anything useful, but in fact very small LLMs can have surprisingly strong performance if you make the domain narrow enough. I recommend looking at the [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) paper for inspiration. +Train the Llama 2 LLM architecture in PyTorch then inference it with one simple 700-line C file ([run.c](run.c)). You might think that you need many billion parameter LLMs to do anything useful, but in fact very small LLMs can have surprisingly strong performance if you make the domain narrow enough (ref: [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) paper). This repo is a "fullstack" train + inference solution for Llama 2 LLM, with a focus on minimalism and simplicity. -Please note that this started recently as just a fun weekend project: I took my earlier [nanoGPT](https://github.com/karpathy/nanoGPT), tuned it to implement the Llama-2 architecture instead of GPT-2, and the meat of it was writing the C inference engine in [run.c](run.c). So the project is young and moving quickly. Hat tip to the awesome [llama.cpp](https://github.com/ggerganov/llama.cpp) for inspiring this project. I wanted something super minimal so I chose to hard-code the Llama 2 architecture, stick to fp32, and just roll one inference file of pure C with no dependencies. +As the architecture is identical, you can also load and inference Meta's Llama 2 models. However, the current code only inferences models in fp32, so you will most likely not be able to productively load models larger than 7B. Work on model quantization is currently ongoing. + +Please note that this repo started recently as a fun weekend project: I took my earlier [nanoGPT](https://github.com/karpathy/nanoGPT), tuned it to implement the Llama-2 architecture instead of GPT-2, and the meat of it was writing the C inference engine in [run.c](run.c). So the project is young and moving quickly. Hat tip to the awesome [llama.cpp](https://github.com/ggerganov/llama.cpp) for inspiring this project. Compared to llama.cpp, I wanted something super simple, minimal, and educational so I chose to hard-code the Llama 2 architecture and just roll one inference file of pure C with no dependencies. ## feel the magic From c39f19f1a9e32f1d33eb47fa5f674d10dd2d382f Mon Sep 17 00:00:00 2001 From: Nikhil Gupta Date: Mon, 14 Aug 2023 10:18:51 +0530 Subject: [PATCH 53/79] [Feat]: Add support for meta llama hf model conversion Description: Llama 2 hf models have weights stored with different names Signed-off-by: Nikhil Gupta --- export_meta_llama_hf_bin.py | 113 ++++++++++++++++++++++++++++++++++++ 1 file changed, 113 insertions(+) create mode 100644 export_meta_llama_hf_bin.py diff --git a/export_meta_llama_hf_bin.py b/export_meta_llama_hf_bin.py new file mode 100644 index 0000000..e3a8c73 --- /dev/null +++ b/export_meta_llama_hf_bin.py @@ -0,0 +1,113 @@ +""" +This script exports the Llama 2 weights in llama2c.bin format. 
+""" +import os +import sys +import struct +from pathlib import Path +import json + +import torch + +from model import precompute_freqs_cis + + +def export(p, state_dict, filepath='model.bin'): + """export the model weights in fp32 into .bin file to be read from C""" + f = open(filepath, 'wb') + + def serialize(key): + print(f"writing {key}...") + t = state_dict[key].contiguous().view(-1).type(torch.float32).numpy() + f.write(memoryview(t)) + del state_dict[key] + + # first write out the header + hidden_dim = state_dict['model.layers.0.mlp.gate_proj.weight'].shape[0] + p['vocab_size'] = 32000 + p['max_seq_len'] = 2048 + + n_kv_heads = p.get('n_kv_heads') or p['n_heads'] + header = struct.pack( + 'iiiiiii', + p['dim'], hidden_dim, p['n_layers'], p['n_heads'], + n_kv_heads, -p['vocab_size'], p['max_seq_len'] + ) + # NOTE ABOVE: -ve vocab_size is indicating that the classifier weights are present + # in the checkpoint and should be loaded. + f.write(header) + + # next write out the embedding weights + print("writing tok_embeddings...") + serialize('model.embed_tokens.weight') + + # now all the layers + # attention weights + for i in range(p['n_layers']): serialize(f'model.layers.{i}.input_layernorm.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.q_proj.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.k_proj.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.v_proj.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.o_proj.weight') + # ffn weights + for i in range(p['n_layers']): serialize(f'model.layers.{i}.post_attention_layernorm.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.mlp.gate_proj.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.mlp.down_proj.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.mlp.up_proj.weight') + + # final rmsnorm + serialize('model.norm.weight') + # freqs_cos, freqs_sin + freqs_cos, freqs_sin = precompute_freqs_cis(p['dim'] // p['n_heads'], p['max_seq_len'] * 2) + state_dict['freqs_cos'] = freqs_cos[:p['max_seq_len']] + state_dict['freqs_sin'] = freqs_sin[:p['max_seq_len']] + # check if this requires addtional conversion + serialize('freqs_cos') + serialize('freqs_sin') + + # finally write the output weights + serialize('lm_head.weight') + + f.close() + print(f"wrote {filepath}") + + +def concat_weights(models): + state_dict = {} + for name in list(models[0]): + tensors = [model[name] for model in models] + if len(tensors) == 1 or len(tensors[0].shape) == 1: + state_dict[name] = tensors[0] + continue + is_axis_1 = ( + name.startswith('model.embed_tokens.weight') + or name.endswith('.self_attn.o_proj.weight') + or name.endswith('.mlp.down_proj.weight') + ) + axis = 1 if is_axis_1 else 0 + state_dict[name] = torch.cat(tensors, dim=axis) + for model in models: + del model[name] + return state_dict + + +def load_and_export(model_path, output_path): + params_path = os.path.join(model_path, 'params.json') + with open(params_path) as f: + params = json.load(f) + print(params) + + model_paths = sorted(list(Path(model_path).glob('consolidated.*.pth'))) + models = [torch.load(p, map_location='cpu') for p in model_paths] + state_dict = concat_weights(models) + del models + export(params, state_dict, output_path) + + +if __name__ == '__main__': + if len(sys.argv) == 1: + print('[Llama model folder path] [output path]') + exit() + + model_path = sys.argv[1] + output_path = sys.argv[2] + 
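    # illustrative usage (paths are placeholders, not part of the original script):
    #   python export_meta_llama_hf_bin.py /path/to/llama-2-7b llama2_7b.bin
    # the model folder is expected to contain params.json and the
    # consolidated.*.pth shards that load_and_export() globs for above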
load_and_export(model_path, output_path) From 82ad2ba34ead544883ac84248c2dbd98a690c0aa Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Mon, 14 Aug 2023 05:53:57 +0000 Subject: [PATCH 54/79] remove tiktoken as dependency --- requirements.txt | 1 - sample.py | 1 - 2 files changed, 2 deletions(-) diff --git a/requirements.txt b/requirements.txt index e3f97c4..7187a73 100644 --- a/requirements.txt +++ b/requirements.txt @@ -2,7 +2,6 @@ numpy==1.23.5 pytest==7.4.0 Requests==2.31.0 sentencepiece==0.1.99 -tiktoken==0.3.3 torch==2.0.1 tqdm==4.64.1 wandb==0.15.5 diff --git a/sample.py b/sample.py index 64bb177..b26e277 100644 --- a/sample.py +++ b/sample.py @@ -5,7 +5,6 @@ import os import pickle from contextlib import nullcontext import torch -import tiktoken from model import ModelArgs, Transformer from tokenizer import Tokenizer From 79900ff68ee662ddc72e5392843ba1e4c4bf860d Mon Sep 17 00:00:00 2001 From: chenyang Date: Mon, 14 Aug 2023 15:00:33 +0800 Subject: [PATCH 55/79] update readme with a simple line to introduce llama2.c-zh --- README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/README.md b/README.md index a180208..2a76e47 100644 --- a/README.md +++ b/README.md @@ -302,6 +302,7 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - WebAssembly - [icpp-llm](https://github.com/icppWorld/icpp-llm): LLMs for the Internet Computer - [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 +- [llama2.c-zh - Bilingual Chinese and English](https://github.com/chenyangMl/llama2.c-zh) by @[chenyangMl](https://github.com/chenyangMl): Expand tokenizer to support training and inferencing in both Chinese and English ## unsorted todos From 2a9a4c4e14d548a216503dae19f58c3af284f953 Mon Sep 17 00:00:00 2001 From: chenyang Date: Mon, 14 Aug 2023 15:12:30 +0800 Subject: [PATCH 56/79] update readme with a simple line to introduce llama2.c-zh --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 2a76e47..d2d19d9 100644 --- a/README.md +++ b/README.md @@ -302,7 +302,7 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - WebAssembly - [icpp-llm](https://github.com/icppWorld/icpp-llm): LLMs for the Internet Computer - [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 -- [llama2.c-zh - Bilingual Chinese and English](https://github.com/chenyangMl/llama2.c-zh) by @[chenyangMl](https://github.com/chenyangMl): Expand tokenizer to support training and inferencing in both Chinese and English +- [llama2.c-zh - Bilingual Chinese and English](https://github.com/chenyangMl/llama2.c-zh) by @[chenyangMl](https://github.com/chenyangMl): Expand tokenizer to support training and inference in both Chinese and English ## unsorted todos From 32c1ff97fbe69a4d030e0bc05b156a3733da396c Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Mon, 14 Aug 2023 14:52:07 +0000 Subject: [PATCH 57/79] missed p->dim to kv_dim for k,v vectors. we're not doing anything wrong we're just being wasteful with memory. 
thanks @xefoci7612 for pointing out --- run.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/run.c b/run.c index df95e6f..56ceff5 100644 --- a/run.c +++ b/run.c @@ -89,8 +89,8 @@ void malloc_run_state(RunState* s, Config* p) { s->hb = calloc(p->hidden_dim, sizeof(float)); s->hb2 = calloc(p->hidden_dim, sizeof(float)); s->q = calloc(p->dim, sizeof(float)); - s->k = calloc(p->dim, sizeof(float)); - s->v = calloc(p->dim, sizeof(float)); + s->k = calloc(kv_dim, sizeof(float)); + s->v = calloc(kv_dim, sizeof(float)); s->att = calloc(p->n_heads * p->seq_len, sizeof(float)); s->logits = calloc(p->vocab_size, sizeof(float)); s->probindex = calloc(p->vocab_size, sizeof(ProbIndex)); From 4bf36ecc1792ce2ed579d6c5718fc38b5a035677 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Tue, 15 Aug 2023 01:04:10 +0000 Subject: [PATCH 58/79] get rid of the special byte decoding logic --- run.c | 19 ++++--------------- 1 file changed, 4 insertions(+), 15 deletions(-) diff --git a/run.c b/run.c index 8da8823..33560fe 100644 --- a/run.c +++ b/run.c @@ -358,7 +358,7 @@ int compare_tokens(const void *a, const void *b) { int str_lookup(char *str, TokenIndex *sorted_vocab, int vocab_size) { // find the perfect match for str in vocab, return its index or -1 if not found - TokenIndex tok = {str=str}; + TokenIndex tok = {str=str}; TokenIndex *res = bsearch(&tok, sorted_vocab, vocab_size, sizeof(TokenIndex), compare_tokens); return res!=NULL ? res->id : -1; } @@ -440,19 +440,6 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u free(sorted_vocab); } -// convert token to printable string -char *token_to_str(char **vocab, int token, int prev_token) { - // following BOS (1) token, sentencepiece decoder strips any leading whitespace (see PR #89) - char *token_str = (prev_token == 1 && vocab[token][0] == ' ') ? vocab[token]+1 : vocab[token]; - // make '<0x01>' into '\x01' - static char byte_piece[4]; - if (sscanf(token_str, "<0x%02X>", (int*)(&byte_piece)) == 1) { - byte_piece[1] = '\0'; - token_str = byte_piece; - } - return token_str; -} - // ---------------------------------------------------------------------------- // utilities: time / rng @@ -699,7 +686,9 @@ int main(int argc, char *argv[]) { // data-dependent terminating condition: the BOS (1) token delimits sequences if (next == 1) { break; } - printf("%s", token_to_str(vocab, next, token)); + // following BOS (1) token, sentencepiece decoder strips any leading whitespace (see PR #89) + char *token_str = (token == 1 && vocab[next][0] == ' ') ? vocab[next]+1 : vocab[next]; + printf("%s", token_str); fflush(stdout); token = next; From d459fd4243cddf5893231cbaa70da26e598cfa53 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Tue, 15 Aug 2023 01:42:33 +0000 Subject: [PATCH 59/79] add back careful processing of the byte tokens --- run.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/run.c b/run.c index 33560fe..37d3018 100644 --- a/run.c +++ b/run.c @@ -10,6 +10,7 @@ $ ./run #include #include +#include #include #include #include @@ -688,7 +689,20 @@ int main(int argc, char *argv[]) { // following BOS (1) token, sentencepiece decoder strips any leading whitespace (see PR #89) char *token_str = (token == 1 && vocab[next][0] == ' ') ? vocab[next]+1 : vocab[next]; - printf("%s", token_str); + // careful, some tokens designate raw bytes, and look like e.g. 
'<0x01>' + unsigned char byte_val; + if (sscanf(token_str, "<0x%02hhX>", &byte_val) == 1) { + // ok this token is a raw byte token, carefuly to only print printable chars or whitespace + // some of the other bytes can be various control codes, backspace, etc. => skip + if (isprint(byte_val) || isspace(byte_val)) { + char byte_piece[2]; + byte_piece[0] = byte_val; + byte_piece[1] = '\0'; + printf("%s", byte_piece); + } + } else { + printf("%s", token_str); + } fflush(stdout); token = next; From a9a0628c9254c0efcc0249cdf3d5dc0b692201a6 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Tue, 15 Aug 2023 02:18:49 +0000 Subject: [PATCH 60/79] thoroughly commented the UTF-8 byte reading code --- run.c | 49 ++++++++++++++++++++++++++++++++++++------------- 1 file changed, 36 insertions(+), 13 deletions(-) diff --git a/run.c b/run.c index 37d3018..8f565cd 100644 --- a/run.c +++ b/run.c @@ -358,10 +358,10 @@ int compare_tokens(const void *a, const void *b) { } int str_lookup(char *str, TokenIndex *sorted_vocab, int vocab_size) { - // find the perfect match for str in vocab, return its index or -1 if not found - TokenIndex tok = {str=str}; + // efficiently find the perfect match for str in vocab, return its index or -1 if not found + TokenIndex tok = { .str = str }; // acts as the key to search for TokenIndex *res = bsearch(&tok, sorted_vocab, vocab_size, sizeof(TokenIndex), compare_tokens); - return res!=NULL ? res->id : -1; + return res != NULL ? res->id : -1; } void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, unsigned int max_token_length, int *tokens, int *n_tokens) { @@ -374,7 +374,7 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u } qsort(sorted_vocab, vocab_size, sizeof(TokenIndex), compare_tokens); - // a temporary buffer to merge two consecutive tokens + // create a temporary buffer that will store merge candidates of always two consecutive tokens char* str_buffer = malloc((max_token_length*2+1) * sizeof(char)); // *2 for concat, +1 for null terminator size_t str_len = 0; @@ -382,25 +382,48 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u tokens[0] = str_lookup(" ", sorted_vocab, vocab_size); *n_tokens = 1; // the number of tokens - // first encode every individual byte in the input string - for (char *c = text; *c != '\0'; c++) { - // reset buffer if the current byte is ASCII or leading byte - if ((*c & 0xC0) != 0x80) - str_len = 0; + // Okay UTF-8 time. This will get messy. Here is the reference from Wikipedia: + // Code point ↔ UTF-8 conversion + // First code point Last code point Byte 1 Byte 2 Byte 3 Byte 4 + // U+0000 U+007F 0xxxxxxx + // U+0080 U+07FF 110xxxxx 10xxxxxx + // U+0800 U+FFFF 1110xxxx 10xxxxxx 10xxxxxx + // U+10000 U+10FFFF 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx - str_buffer[str_len++] = *c; // append byte to the buffer + // process the raw (UTF-8) byte sequence of the input string + for (char *c = text; *c != '\0'; c++) { + + // reset buffer if the current byte is ASCII or a leading byte + // 0xC0 is 11000000, so (*c & 0xC0) keeps the first 2 bits and zeros the rest + // 0x80 is 10000000 + // in UTF-8, all continuation bytes start with "10" in first two bits + // so in English this is: "if this byte is not a continuation byte" + if ((*c & 0xC0) != 0x80) { + // this byte must be either a leading byte (11...) or an ASCII char (0x...) 
+ // => reset our location, as we're starting a new UTF-8 codepoint + str_len = 0; + } + + // append the current byte to the buffer + str_buffer[str_len++] = *c; // ++ is post-increment, incremented after this line str_buffer[str_len] = '\0'; - if ((*(c+1) & 0xC0) == 0x80) // skip if in middle of multi-byte utf8 encoding + // while the next character is a continuation byte, continue appending + if ((*(c+1) & 0xC0) == 0x80) { continue; + } + // ok c+1 is not a continuation byte, so we've read in a full codepoint int id = str_lookup(str_buffer, sorted_vocab, vocab_size); if (id != -1) { + // we found this codepoint in vocab, add it as a token tokens[(*n_tokens)++] = id; } else { - // byte_fallback encoding - for (int i=0; i, , + // so the individual bytes only start at index 3 + for (int i=0; i < str_len; i++) { tokens[(*n_tokens)++] = (unsigned char)str_buffer[i] + 3; } } From fe2de68688ec35502b566fcef227a94935a3f3b7 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Tue, 15 Aug 2023 02:33:01 +0000 Subject: [PATCH 61/79] fix sample.py from tokenizer changes before --- sample.py | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/sample.py b/sample.py index b26e277..d2f56ea 100644 --- a/sample.py +++ b/sample.py @@ -51,11 +51,16 @@ if compile: print("Compiling the model...") model = torch.compile(model) # requires PyTorch 2.0 (optional) -# load the tokenizer, either provided, or attempt to find it +# load the tokenizer +vocab_source = checkpoint_dict.get("vocab_source", "llama2") +vocab_size = gptconf.vocab_size if tokenizer: + # a specific tokenizer is provided, use it tokenizer_model = tokenizer else: - tokenizer_model = get_tokenizer_model_path(vocab_size=gptconf.vocab_size) + # let's try to find the tokenizer model automatically. bit gross here... 
+ query_vocab_size = 0 if vocab_source == "llama2" else vocab_size + tokenizer_model = get_tokenizer_model_path(vocab_size=query_vocab_size) enc = Tokenizer(tokenizer_model=tokenizer_model) # encode the beginning of the prompt From 88eb238255a44536a7d8adfadbf49e2bfa093d64 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Tue, 15 Aug 2023 15:57:27 +0000 Subject: [PATCH 62/79] add tests into Makefile convenience --- Makefile | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/Makefile b/Makefile index 8debdc6..a4c6588 100644 --- a/Makefile +++ b/Makefile @@ -45,6 +45,16 @@ rungnu: runompgnu: $(CC) -Ofast -fopenmp -std=gnu11 run.c -lm -o run +# run all tests +.PHONY: test +test: + pytest + +# run only tests for run.c C implementation (is a bit faster if only C code changed) +.PHONY: testc +testc: + pytest -k runc + .PHONY: clean clean: rm -f run From 66c9f5e6c82b592eaee5b3e9d50de57285952e0c Mon Sep 17 00:00:00 2001 From: Ruhollah Majdoddin Date: Tue, 15 Aug 2023 15:58:04 +0000 Subject: [PATCH 63/79] Adding pytest with the tiny model to macOS and windows (except amd64_arm64) runners --- .github/workflows/build.yml | 66 ++++++++++++++++++++++++++++++++++--- 1 file changed, 61 insertions(+), 5 deletions(-) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index f8b216b..16bbbe8 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -68,6 +68,21 @@ jobs: run: | brew update + - name: Set up Python 3.10 + uses: actions/setup-python@v3 + with: + python-version: "3.10" + + - name: Pip setup + run: | + python -m pip install --upgrade pip + if [ -f requirements.txt ]; then pip install -r requirements.txt; fi + + - name: Build clang + id: make_build_clang + run: | + make run CC=clang + - name: Build id: make_build run: | @@ -77,16 +92,18 @@ jobs: id: make_build_runfast run: | make runfast + + - name: Test with pytest + run: pytest + + - - name: Build clang - id: make_build_clang - run: | - make run CC=clang windows-latest-make: runs-on: windows-latest strategy: + fail-fast: false #necessary, otherwise the matrix breaks matrix: arch: - amd64 @@ -106,11 +123,30 @@ jobs: with: arch: ${{ matrix.arch }} + - name: Set up Python 3.10 + if: matrix.arch != 'amd64_arm64' + uses: actions/setup-python@v3 + with: + python-version: "3.10" + + - name: Pip setup + if: matrix.arch != 'amd64_arm64' + run: | + python -m pip install --upgrade pip + if (Test-Path requirements.txt) { + pip install -r requirements.txt + } + - name: Build ${{ matrix.arch }} id: build_msvc run: | .\build_msvc.bat + #cross-comiled, cannot be run on host + - name: Test with pytest + if: matrix.arch != 'amd64_arm64' + run: pytest + windows-latest-mingw: runs-on: windows-latest @@ -135,6 +171,26 @@ jobs: install: mingw-w64-${{matrix.env}}-gcc make - name: Build ${{ matrix.sys }} ${{ matrix.env }} - id: build_mingw + id: build_mingw run: | make win64 + + - name: Set up Python 3.10 + uses: actions/setup-python@v3 + with: + python-version: "3.10" + + - name: Pip setup + shell: powershell + run: | + python -m pip install --upgrade pip + if (Test-Path requirements.txt) { + pip install -r requirements.txt + } + + - name: Test with pytest + shell: powershell + run: pytest + + + \ No newline at end of file From 87b11edf270feefd9606662e73dd2a202c5b4b7a Mon Sep 17 00:00:00 2001 From: Ruhollah Majdoddin Date: Tue, 15 Aug 2023 16:01:53 +0000 Subject: [PATCH 64/79] modifiying test_all so it can safely run on windows --- test_all.py | 22 ++++++++++++++-------- 1 file changed, 14 insertions(+), 8 
deletions(-) diff --git a/test_all.py b/test_all.py index 625af44..4423cb7 100644 --- a/test_all.py +++ b/test_all.py @@ -30,7 +30,7 @@ def attempt_download_files(): root_url = "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories260K" need = ["stories260K.bin", "stories260K.pt", "tok512.bin", "tok512.model"] for file in need: - url = os.path.join(root_url, file) + url = root_url + '/' + file #os.path.join inserts \\ on windows filename = os.path.join(test_ckpt_dir, file) if not os.path.exists(filename): download_file(url, filename) @@ -46,13 +46,17 @@ def test_runc(): model_path = os.path.join(test_ckpt_dir, "stories260K.bin") tokenizer_path = os.path.join(test_ckpt_dir, "tok512.bin") - command = ["./run", model_path, "-z", tokenizer_path, "-t", "0.0", "-n", "200"] - proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE) - - stdout, stderr = proc.communicate() - + command = ["./run", model_path, "-z", tokenizer_path, "-t", "0.0", "-n", "200"] + with open('err.txt', mode='wb') as fe: + with open('stdout.txt', mode='wb') as fo: + proc = subprocess.Popen(command, stdout=fo, stderr=fe) #pipe in windows terminal does funny things like replacing \n with \r\n + proc.wait() + + with open('stdout.txt', mode='r') as f: + stdout = f.read() # strip the very last \n that is added by run.c for aesthetic reasons - stdout = stdout[:-1] + stdout = stdout[:-1].encode('ascii') + assert stdout == expected_stdout def test_python(): @@ -82,4 +86,6 @@ def test_python(): text = enc.decode(pt_tokens) text = text.encode('ascii') # turn into bytes - assert text == expected_stdout \ No newline at end of file + assert text == expected_stdout + +test_runc() \ No newline at end of file From a47f9b3969e9f2eb4e41eb177d8c39e33d45153b Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Tue, 15 Aug 2023 16:03:11 +0000 Subject: [PATCH 65/79] collapsing copy paste code because it's driving my ocd crazy --- run.c | 26 +++++++++++--------------- 1 file changed, 11 insertions(+), 15 deletions(-) diff --git a/run.c b/run.c index 8f565cd..fb9a428 100644 --- a/run.c +++ b/run.c @@ -239,21 +239,17 @@ void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* matmul(s->v, s->xb, w->wv + l*dim*kv_dim, dim, kv_dim); // RoPE relative positional encoding: complex-valued rotate q and k by freq_cis in each head - for (int i = 0; i < dim; i+=2) { - float q0 = s->q[i]; - float q1 = s->q[i+1]; - float fcr = freq_cis_real_row[(i % head_size) / 2]; - float fci = freq_cis_imag_row[(i % head_size) / 2]; - s->q[i] = q0 * fcr - q1 * fci; - s->q[i+1] = q0 * fci + q1 * fcr; - } - for (int i = 0; i < kv_dim; i+=2) { - float k0 = s->k[i]; - float k1 = s->k[i+1]; - float fcr = freq_cis_real_row[(i % head_size) / 2]; - float fci = freq_cis_imag_row[(i % head_size) / 2]; - s->k[i] = k0 * fcr - k1 * fci; - s->k[i+1] = k0 * fci + k1 * fcr; + for (int v = 0; v < 2; v++) { + float* vec = v == 0 ? s->q : s->k; // the vector to rotate (query or key) + int vec_size = v == 0 ? 
dim : kv_dim; // the size of the vector + for (int i = 0; i < vec_size; i+=2) { + float v0 = vec[i]; + float v1 = vec[i+1]; + float fcr = freq_cis_real_row[(i % head_size) / 2]; + float fci = freq_cis_imag_row[(i % head_size) / 2]; + vec[i] = v0 * fcr - v1 * fci; + vec[i+1] = v0 * fci + v1 * fcr; + } } // save key,value at this time step (pos) to our kv cache From 4c63c5608d5f567dc62aa6a76e3754e743203812 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Tue, 15 Aug 2023 16:07:48 +0000 Subject: [PATCH 66/79] shorten top comment on run.c file --- run.c | 10 +--------- 1 file changed, 1 insertion(+), 9 deletions(-) diff --git a/run.c b/run.c index fb9a428..6152919 100644 --- a/run.c +++ b/run.c @@ -1,12 +1,4 @@ -/* -Inference for Llama-2 Transformer model in pure C. - -Example compile: (see README for more details) -$ gcc -O3 -o run run.c -lm - -Then run with: -$ ./run -*/ +/* Inference for Llama-2 Transformer model in pure C */ #include #include From ca67253f28f95b11a8d3b76a3058eccd70c2b471 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Tue, 15 Aug 2023 16:09:33 +0000 Subject: [PATCH 67/79] smallfix: not sure what the point of this indirection was --- run.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/run.c b/run.c index 6152919..43af271 100644 --- a/run.c +++ b/run.c @@ -117,9 +117,8 @@ void free_run_state(RunState* s) { // ---------------------------------------------------------------------------- // initialization: read from checkpoint -void checkpoint_init_weights(TransformerWeights *w, Config* p, float* f, int shared_weights) { +void checkpoint_init_weights(TransformerWeights *w, Config* p, float* ptr, int shared_weights) { int head_size = p->dim / p->n_heads; - float* ptr = f; w->token_embedding_table = ptr; ptr += p->vocab_size * p->dim; w->rms_att_weight = ptr; From 62a6d69d86670fca162473f866011ffb617e8ba4 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Wed, 16 Aug 2023 02:22:13 +0000 Subject: [PATCH 68/79] style changes and remove spurious runc test call at the bottom --- .github/workflows/build.yml | 25 +++++++++++-------------- test_all.py | 12 +++++------- 2 files changed, 16 insertions(+), 21 deletions(-) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 16bbbe8..7e6474d 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -92,10 +92,10 @@ jobs: id: make_build_runfast run: | make runfast - + - name: Test with pytest run: pytest - + @@ -103,7 +103,7 @@ jobs: runs-on: windows-latest strategy: - fail-fast: false #necessary, otherwise the matrix breaks + fail-fast: false #necessary, otherwise the matrix breaks matrix: arch: - amd64 @@ -128,7 +128,7 @@ jobs: uses: actions/setup-python@v3 with: python-version: "3.10" - + - name: Pip setup if: matrix.arch != 'amd64_arm64' run: | @@ -144,8 +144,8 @@ jobs: #cross-comiled, cannot be run on host - name: Test with pytest - if: matrix.arch != 'amd64_arm64' - run: pytest + if: matrix.arch != 'amd64_arm64' + run: pytest windows-latest-mingw: runs-on: windows-latest @@ -171,15 +171,15 @@ jobs: install: mingw-w64-${{matrix.env}}-gcc make - name: Build ${{ matrix.sys }} ${{ matrix.env }} - id: build_mingw + id: build_mingw run: | make win64 - + - name: Set up Python 3.10 uses: actions/setup-python@v3 with: python-version: "3.10" - + - name: Pip setup shell: powershell run: | @@ -187,10 +187,7 @@ jobs: if (Test-Path requirements.txt) { pip install -r requirements.txt } - + - name: Test with pytest shell: powershell - run: pytest - - - \ No newline at end of 
file + run: pytest diff --git a/test_all.py b/test_all.py index 4423cb7..a4d0976 100644 --- a/test_all.py +++ b/test_all.py @@ -30,7 +30,7 @@ def attempt_download_files(): root_url = "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories260K" need = ["stories260K.bin", "stories260K.pt", "tok512.bin", "tok512.model"] for file in need: - url = root_url + '/' + file #os.path.join inserts \\ on windows + url = root_url + '/' + file #os.path.join inserts \\ on windows filename = os.path.join(test_ckpt_dir, file) if not os.path.exists(filename): download_file(url, filename) @@ -46,17 +46,17 @@ def test_runc(): model_path = os.path.join(test_ckpt_dir, "stories260K.bin") tokenizer_path = os.path.join(test_ckpt_dir, "tok512.bin") - command = ["./run", model_path, "-z", tokenizer_path, "-t", "0.0", "-n", "200"] - with open('err.txt', mode='wb') as fe: + command = ["./run", model_path, "-z", tokenizer_path, "-t", "0.0", "-n", "200"] + with open('err.txt', mode='wb') as fe: with open('stdout.txt', mode='wb') as fo: proc = subprocess.Popen(command, stdout=fo, stderr=fe) #pipe in windows terminal does funny things like replacing \n with \r\n proc.wait() - + with open('stdout.txt', mode='r') as f: stdout = f.read() # strip the very last \n that is added by run.c for aesthetic reasons stdout = stdout[:-1].encode('ascii') - + assert stdout == expected_stdout def test_python(): @@ -87,5 +87,3 @@ def test_python(): text = text.encode('ascii') # turn into bytes assert text == expected_stdout - -test_runc() \ No newline at end of file From befe4867b34723d0ba95d30784a42e4f522a4057 Mon Sep 17 00:00:00 2001 From: rdentato Date: Wed, 16 Aug 2023 07:42:53 +0000 Subject: [PATCH 69/79] minimal protection against invalid UTF8 encoding. --- run.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/run.c b/run.c index 43af271..70951c0 100644 --- a/run.c +++ b/run.c @@ -396,7 +396,8 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u str_buffer[str_len] = '\0'; // while the next character is a continuation byte, continue appending - if ((*(c+1) & 0xC0) == 0x80) { + // but if there are too many of them, just stop to avoid overruning str_buffer size. + if ((*(c+1) & 0xC0) == 0x80 && str_len < 4) { continue; } @@ -414,6 +415,7 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u tokens[(*n_tokens)++] = (unsigned char)str_buffer[i] + 3; } } + str_len = 0; // protect against a sequence of stray UTF8 continuation bytes } // merge the best consecutive pair each iteration, according the scores in vocab_scores From 55e60740f5c94ec37f66212864242bb6ee910065 Mon Sep 17 00:00:00 2001 From: rdentato Date: Wed, 16 Aug 2023 07:58:07 +0000 Subject: [PATCH 70/79] Added space to str_buffer in case max_token_length is 1. 
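A rough way to see why the two extra bytes matter (a small Python sketch for intuition only, not code from this patch): during the byte-fallback pass, str_buffer temporarily holds the raw bytes of a single UTF-8 codepoint plus a null terminator, and a codepoint can take up to 4 bytes even when max_token_length is 1.

```python
# worst case the buffer must hold: one UTF-8 codepoint (up to 4 bytes) + '\0'
max_token_length = 1
old_size = max_token_length * 2 + 1      # 3 bytes: too small for a 4-byte codepoint
new_size = max_token_length * 2 + 1 + 2  # 5 bytes: 4 codepoint bytes + terminator
assert len("🦙".encode("utf-8")) + 1 <= new_size  # U+1F999 encodes to 4 bytes
```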
--- run.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/run.c b/run.c index 70951c0..513eda9 100644 --- a/run.c +++ b/run.c @@ -362,7 +362,7 @@ void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, u qsort(sorted_vocab, vocab_size, sizeof(TokenIndex), compare_tokens); // create a temporary buffer that will store merge candidates of always two consecutive tokens - char* str_buffer = malloc((max_token_length*2+1) * sizeof(char)); // *2 for concat, +1 for null terminator + char* str_buffer = malloc((max_token_length*2 +1 +2) * sizeof(char)); // *2 for concat, +1 for null terminator +2 for UTF8 (in case max_token_lenght is 1) size_t str_len = 0; // add_dummy_prefix is true by default From 9fbe96fc2e80ffa38d08229e6062e38681b36d54 Mon Sep 17 00:00:00 2001 From: madroid Date: Wed, 16 Aug 2023 20:23:27 +0800 Subject: [PATCH 71/79] Jupter Notebook: Add run Meta's Llama 2 models --- run.ipynb | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/run.ipynb b/run.ipynb index cd69b79..ac57593 100644 --- a/run.ipynb +++ b/run.ipynb @@ -89,6 +89,27 @@ "cmd = f'./run {model_file} -t {temperature} -p {top_p} -n {max_token} -i \"{prompt}\"'\n", "!{cmd}" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#@title Run Meta's Llama 2 models\n", + "\n", + "#@markdown input your huggingface [access token](https://huggingface.co/settings/tokens) to download Meta's Llama 2 models.\n", + "\n", + "from huggingface_hub import snapshot_download\n", + "\n", + "token = \"replace your huggingface access token\" #@param {type:\"string\"}\n", + "path = snapshot_download(repo_id=\"meta-llama/Llama-2-7b\",cache_dir=\"Llama-2-7b\", use_auth_token=token)\n", + "\n", + "!python export_meta_llama_bin.py $path llama2_7b.bin\n", + "\n", + "print(\"./run llama2_7b.bin\\n\")\n", + "!./run llama2_7b.bin" + ] } ], "metadata": { From bd182289c596fa6059eb7b3b7c8ccd04b5c90fc3 Mon Sep 17 00:00:00 2001 From: Andrej Karpathy Date: Thu, 17 Aug 2023 04:13:13 +0000 Subject: [PATCH 72/79] calculate the freq_cis online, no need to write/read them to/from checkpoints --- run.c | 23 +++++++++++------------ 1 file changed, 11 insertions(+), 12 deletions(-) diff --git a/run.c b/run.c index 513eda9..10d468b 100644 --- a/run.c +++ b/run.c @@ -43,7 +43,7 @@ typedef struct { float* w3; // (layer, hidden_dim, dim) // final rmsnorm float* rms_final_weight; // (dim,) - // freq_cis for RoPE relatively positional embeddings + // freq_cis for RoPE relatively positional embeddings (not used anymore) float* freq_cis_real; // (seq_len, head_size/2) float* freq_cis_imag; // (seq_len, head_size/2) // (optional) classifier weights for the logits, on the last layer @@ -214,10 +214,6 @@ void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* float* content_row = &(w->token_embedding_table[token * dim]); memcpy(x, content_row, dim*sizeof(*x)); - // pluck out the "pos" row of freq_cis_real and freq_cis_imag - float* freq_cis_real_row = w->freq_cis_real + pos * head_size / 2; - float* freq_cis_imag_row = w->freq_cis_imag + pos * head_size / 2; - // forward all the layers for(int l = 0; l < p->n_layers; l++) { @@ -229,15 +225,18 @@ void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* matmul(s->k, s->xb, w->wk + l*dim*kv_dim, dim, kv_dim); matmul(s->v, s->xb, w->wv + l*dim*kv_dim, dim, kv_dim); - // RoPE relative positional encoding: complex-valued rotate q and k by freq_cis in each head - for 
(int v = 0; v < 2; v++) { - float* vec = v == 0 ? s->q : s->k; // the vector to rotate (query or key) - int vec_size = v == 0 ? dim : kv_dim; // the size of the vector - for (int i = 0; i < vec_size; i+=2) { + // RoPE relative positional encoding: complex-valued rotate q and k in each head + for (int i = 0; i < dim; i+=2) { + int head_dim = i % head_size; + float freq = 1.0f / powf(10000.0f, head_dim / (float)head_size); + float val = pos * freq; + float fcr = cosf(val); + float fci = sinf(val); + int rotn = i < kv_dim ? 2 : 1; // how many vectors? 2 = q & k, 1 = q only + for (int v = 0; v < rotn; v++) { + float* vec = v == 0 ? s->q : s->k; // the vector to rotate (query or key) float v0 = vec[i]; float v1 = vec[i+1]; - float fcr = freq_cis_real_row[(i % head_size) / 2]; - float fci = freq_cis_imag_row[(i % head_size) / 2]; vec[i] = v0 * fcr - v1 * fci; vec[i+1] = v0 * fci + v1 * fcr; } From 8607b11ea1f287c2f0fdff6c40cd915a55dcd89b Mon Sep 17 00:00:00 2001 From: YiMing Han Date: Fri, 18 Aug 2023 15:07:41 -0400 Subject: [PATCH 73/79] working one --- .dart_tool/package_config.json | 20 + Makefile | 60 --- ORIGINAL.md | 322 +++++++++++++ README.md | 356 ++------------- build_msvc.bat | 1 - pubspec.lock | 13 + pubspec.yaml | 10 + run.c | 740 ------------------------------ run.dart | 799 +++++++++++++++++++++++++++++++++ test_all.py | 89 ---- win.c | 180 -------- win.h | 69 --- 12 files changed, 1210 insertions(+), 1449 deletions(-) create mode 100644 .dart_tool/package_config.json delete mode 100644 Makefile create mode 100644 ORIGINAL.md delete mode 100644 build_msvc.bat create mode 100644 pubspec.lock create mode 100644 pubspec.yaml delete mode 100644 run.c create mode 100644 run.dart delete mode 100644 test_all.py delete mode 100644 win.c delete mode 100644 win.h diff --git a/.dart_tool/package_config.json b/.dart_tool/package_config.json new file mode 100644 index 0000000..ca60c60 --- /dev/null +++ b/.dart_tool/package_config.json @@ -0,0 +1,20 @@ +{ + "configVersion": 2, + "packages": [ + { + "name": "args", + "rootUri": "file:///Users/yiminghan/.pub-cache/hosted/pub.dev/args-2.4.2", + "packageUri": "lib/", + "languageVersion": "2.19" + }, + { + "name": "llama2.dart", + "rootUri": "../", + "packageUri": "lib/", + "languageVersion": "3.1" + } + ], + "generated": "2023-08-18T18:58:12.764817Z", + "generator": "pub", + "generatorVersion": "3.1.0" +} diff --git a/Makefile b/Makefile deleted file mode 100644 index a4c6588..0000000 --- a/Makefile +++ /dev/null @@ -1,60 +0,0 @@ -# choose your compiler, e.g. gcc/clang -# example override to clang: make run CC=clang -CC = gcc - -# the most basic way of building that is most likely to work on most systems -.PHONY: run -run: run.c - $(CC) -O3 -o run run.c -lm - -# useful for a debug build, can then e.g. analyze with valgrind, example: -# $ valgrind --leak-check=full ./run out/model.bin -n 3 -rundebug: run.c - $(CC) -g -o run run.c -lm - -# https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html -# https://simonbyrne.github.io/notes/fastmath/ -# -Ofast enables all -O3 optimizations. -# Disregards strict standards compliance. -# It also enables optimizations that are not valid for all standard-compliant programs. -# It turns on -ffast-math, -fallow-store-data-races and the Fortran-specific -# -fstack-arrays, unless -fmax-stack-var-size is specified, and -fno-protect-parens. -# It turns off -fsemantic-interposition. 
-# In our specific application this is *probably* okay to use -.PHONY: runfast -runfast: run.c - $(CC) -Ofast -o run run.c -lm - -# additionally compiles with OpenMP, allowing multithreaded runs -# make sure to also enable multiple threads when running, e.g.: -# OMP_NUM_THREADS=4 ./run out/model.bin -.PHONY: runomp -runomp: run.c - $(CC) -Ofast -fopenmp -march=native run.c -lm -o run - -.PHONY: win64 -win64: - x86_64-w64-mingw32-gcc -Ofast -D_WIN32 -o run.exe -I. run.c win.c - -# compiles with gnu99 standard flags for amazon linux, coreos, etc. compatibility -.PHONY: rungnu -rungnu: - $(CC) -Ofast -std=gnu11 -o run run.c -lm - -.PHONY: runompgnu -runompgnu: - $(CC) -Ofast -fopenmp -std=gnu11 run.c -lm -o run - -# run all tests -.PHONY: test -test: - pytest - -# run only tests for run.c C implementation (is a bit faster if only C code changed) -.PHONY: testc -testc: - pytest -k runc - -.PHONY: clean -clean: - rm -f run diff --git a/ORIGINAL.md b/ORIGINAL.md new file mode 100644 index 0000000..35d20a2 --- /dev/null +++ b/ORIGINAL.md @@ -0,0 +1,322 @@ +## llama2.c + +

+ Cute Llama +

+ +Train the Llama 2 LLM architecture in PyTorch then inference it with one simple 700-line C file ([run.c](run.c)). You might think that you need many billion parameter LLMs to do anything useful, but in fact very small LLMs can have surprisingly strong performance if you make the domain narrow enough (ref: [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) paper). This repo is a "fullstack" train + inference solution for Llama 2 LLM, with focus on minimalism and simplicity. + +As the architecture is identical, you can also load and inference Meta's Llama 2 models. However, the current code only inferences models in fp32, so you will most likely not be able to productively load models larger than 7B. Work on model quantization is currently ongoing. + +Please note that this repo started recently as a fun weekend project: I took my earlier [nanoGPT](https://github.com/karpathy/nanoGPT), tuned it to implement the Llama-2 architecture instead of GPT-2, and the meat of it was writing the C inference engine in [run.c](run.c). So the project is young and moving quickly. Hat tip to the awesome [llama.cpp](https://github.com/ggerganov/llama.cpp) for inspiring this project. Compred to llama.cpp, I wanted something super simple, minimal, and educational so I chose to hard-code the Llama 2 architecture and just roll one inference file of pure C with no dependencies. + +## feel the magic + +[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/karpathy/llama2.c/blob/master/run.ipynb) + +First, navigate to the folder when you keep your projects and clone this repository to this folder: + +```bash +git clone https://github.com/karpathy/llama2.c.git +``` + +Then, open the repository folder: + +```bash +cd llama2.c +``` + +Now, let's just run a baby Llama 2 model in C. You need a model checkpoint. Download this 15M parameter model I trained on the [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) dataset (~60MB download): + +```bash +wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin +``` + +Compile and run the C code: + +```bash +make run +./run stories15M.bin +``` + +You'll see the text stream a sample. On my M1 MacBook Air this runs at ~110 tokens/s. See [performance](#performance) or the Makefile for compile flags that can significantly speed this up. We can also try a bit bigger 42M parameter model: + +```bash +wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin +./run stories42M.bin +``` + +This still runs at interactive rates and samples more coherent and diverse stories: + +> Once upon a time, there was a little girl named Lily. She loved playing with her toys on top of her bed. One day, she decided to have a tea party with her stuffed animals. She poured some tea into a tiny teapot and put it on top of the teapot. Suddenly, her little brother Max came into the room and wanted to join the tea party too. Lily didn't want to share her tea and she told Max to go away. Max started to cry and Lily felt bad. She decided to yield her tea party to Max and they both shared the teapot. But then, something unexpected happened. The teapot started to shake and wiggle. Lily and Max were scared and didn't know what to do. Suddenly, the teapot started to fly towards the ceiling and landed on the top of the bed. Lily and Max were amazed and they hugged each other. They realized that sharing was much more fun than being selfish. 
From that day on, they always shared their tea parties and toys. + +You can also prompt the model with a prefix or a number of additional command line arguments, e.g. to sample at temperature 0.8 for 256 steps and with a prompt: + +```bash +./run stories42M.bin -t 0.8 -n 256 -i "One day, Lily met a Shoggoth" +``` + +> One day, Lily met a Shoggoth. He was very shy, but was also very generous. Lily said “Hello Shoggy! Can I be your friend?” Shoggy was happy to have a friend and said “Yes, let’s explore the universe together!” So they set off on a journey to explore the universe. As they travelled, Shoggy was happy to explain to Lily about all the wonderful things in the universe. At the end of the day, Lily and Shoggy had gathered lots of wonderful things from the universe, and they both felt very proud. They promised to explore the universe as one big pair and to never stop being generous to each other. + +There is also an even better 110M param model available, see [models](#models). + +Quick note on sampling, the recommendation for ~best results is to sample with `-t 1.0 -p 0.9`, i.e. temperature 1.0 (default) but also top-p sampling at 0.9 (default). Intuitively, top-p ensures that tokens with tiny probabilities do not get sampled, so we can't get "unlucky" during sampling, and we are less likely to go "off the rails" afterwards. More generally, to control the diversity of samples use either the temperature (i.e. vary `-t` between 0 and 1 and keep top-p off with `-p 0`) or the top-p value (i.e. vary `-p` between 0 and 1 and keep `-t 1`), but not both. Nice explainers on LLM sampling strategies include [this](https://peterchng.com/blog/2023/05/02/token-selection-strategies-top-k-top-p-and-temperature/), [this](https://docs.cohere.com/docs/controlling-generation-with-top-k-top-p) or [this](https://huggingface.co/blog/how-to-generate). + +## Meta's Llama 2 models + +As the neural net architecture is identical, we can also inference the Llama 2 models released by Meta. Sadly there is a bit of friction here due to licensing (I can't directly upload the checkpoints, I think). So Step 1, get the Llama 2 checkpoints by following the [Meta instructions](https://github.com/facebookresearch/llama). Once we have those checkpoints, we have to convert them into the llama2.c format. +For this we need to install the python dependencies (`pip install -r requirements.txt`) and then use the `export_meta_llama_bin.py` file, e.g. for 7B model: + +```bash +python export_meta_llama_bin.py path/to/llama/model/7B llama2_7b.bin +``` + +The export will take ~10 minutes or so and generate a 26GB file (the weights of the 7B model in float32) called `llama2_7b.bin` in the current directory. It has been [reported](https://github.com/karpathy/llama2.c/pull/85) that despite efforts, the 13B export currently doesn't work for unknown reasons (accepting PRs for fix). We can run the model as normal: + +```bash +./run llama2_7b.bin +``` + +This ran at about 4 tokens/s compiled with [OpenMP](#OpenMP) on 96 threads on my CPU Linux box in the cloud. (On my MacBook Air M1, currently it's closer to 30 seconds per token if you just build with `make runfast`.) Example output: + +> The purpose of this document is to highlight the state-of-the-art of CoO generation technologies, both recent developments and those in commercial use. The focus is on the technologies with the highest merit to become the dominating processes of the future and therefore to be technologies of interest to S&T ... R&D. 
As such, CoO generation technologies developed in Russia, Japan and Europe are described in some depth. The document starts with an introduction to cobalt oxides as complex products and a short view on cobalt as an essential material. The document continues with the discussion of the available CoO generation processes with respect to energy and capital consumption as well as to environmental damage. + +base models... ¯\\_(ツ)_/¯. Since we can inference the base model, it should be possible to also inference the chat model quite easily, and have a conversation with it. And if we can find a way to run 7B more efficiently, we can start adding LoRA to our training script, and going wild with finetunes all within the repo! + +## models + +For the sake of smaller, from-scratch examples, I trained a small model series on TinyStories. All of these trained in a few hours on my training setup (4X A100 40GB GPUs). The 110M took around 24 hours. I am hosting them on huggingface hub [tinyllamas](https://huggingface.co/karpathy/tinyllamas), both in the original PyTorch .pt, and also in the llama2.c format .bin: + +| model | dim | n_layers | n_heads | n_kv_heads | max context length | parameters | val loss | download | +| ----- | --- | -------- | ------- | ---------- | ------------------ | ---------- | -------- | ------------------------------------------------------------------------------------------ | +| 260K | 64 | 5 | 8 | 4 | 512 | 260K | 1.297 | [stories260K](https://huggingface.co/karpathy/tinyllamas/tree/main/stories260K) | +| OG | 288 | 6 | 6 | 6 | 256 | 15M | 1.072 | [stories15M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin) | +| 42M | 512 | 8 | 8 | 8 | 1024 | 42M | 0.847 | [stories42M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin) | +| 110M | 768 | 12 | 12 | 12 | 1024 | 110M | 0.760 | [stories110M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin) | + +You'll notice that the 110M model is equivalent to GPT-1 in size. Alternatively, this is also the smallest model in the GPT-2 series (`GPT-2 small`), except the max context length is only 1024 instead of 2048. The only notable changes from the GPT-1/2 architecture are that Llama uses RoPE relative positional embeddings instead of absolute/learned positional embeddings, a bit more fancy SwiGLU non-linearity in the MLP, RMSNorm instead of LayerNorm, bias=False on all Linear layers, and is optionally multiquery (but this is not yet supported in llama2.c). + +## training + +Let's see how we can train a baby Llama 2 from scratch using the code in this repo. First, let's download and pretokenize some source dataset, e.g. I like [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) so this is the only example currently available in this repo. But it should be very easy to add datasets, see the code. + +```bash +python tinystories.py download +python tinystories.py pretokenize +``` + +Then train our model: + +```bash +python train.py +``` + +**brief training guide**. See the train.py script for more exotic launches and hyperparameter overrides. Here is a brief guide to how to set the parameters. Look at the table at the very end of the [Chinchilla paper](https://arxiv.org/abs/2203.15556) to get a sense of how the Transformer parameters (dim, n_layers, n_heads) grow or shrink together. Extrapolate/interpolate this pattern to get bigger or smaller transformers.
Set the max context length however you wish, depending on the problem: this should be the max number of tokens that matter to predict the next token. E.g. Llama 2 uses 2048. Next, you want the _total_ batch size per update (printed by the script as "tokens per iteration will be:") to be somewhere around 100K tokens for medium-sized applications. For tiny applications it could be lower, for large training (e.g. GPTs/Llamas) it is usually ~0.5M, or even more. You get there by first maxing out the batch_size to whatever your system allows (e.g. mine was 16 in a recent run because after that my GPU runs out of memory), and then you want to increase gradient_accumulation_steps to be as high as necessary to reach the total batch size of ~100K. Finally, you want to tune your learning_rate (LR). You want this to be as high as your training allows. Very small networks can get away with a large LR (e.g. 1e-3 or even higher). Large networks need lower LRs. 3e-4 is a safe choice in most medium-sized applications, but can be too low for small networks, so try to increase it! Finally, max_iters is the length of training. Play with different settings. I mostly only ever tune these parameters and leave most of the others unchanged. Here is an example of how I trained the 110M model, which I don't think is anywhere near optimal, but looked sensible to me: dim 768, n_layers 12, n_heads 12 (so size of each head is 768 / 12 = 64 channels), seq len of 1024, batch size 16 (this is the most that fit my A100 40GB GPU), gradient_accumulation_steps = 8 was needed to get the total tokens batch size to be 16 batch size \* 1024 tokens in sequence \* 8 grad_accum = 131,072 tokens per update. Good. Learning rate 4e-4 (probably a little too low). max_iters 200K (probably a bit too high). Dropout 0.1, as that usually helps a bit at medium size. That was it. I ran using Distributed Data Parallel (DDP) on 4 GPUs on my cloud machine, training took ~day or so. + +Totally understand if you want to skip model training, for a simple demo just download one of the pretrained models (see [models](#models) section), e.g.: + +```bash +wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin +``` + +Once we have the model.bin file, we can inference in C. Compile the C code first: + +```bash +make run +``` + +You can now run it simply as + +```bash +./run stories15M.bin +``` + +Watch the tokens stream by, fun! We can also run the PyTorch inference script for a comparison. Download one of the models again from huggingface hub and point the `sample.py` script at it: + +```bash +wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.pt -P out15M +python sample.py --checkpoint=out15M/stories15M.pt +``` + +Which gives the same results. + +## custom tokenizers + +In everything above, we've assumed the custom Llama 2 tokenizer with 32,000 tokens. However, in many boutique LLMs, using vocabulary this big might be overkill. If you have a small application in mind, you might be much better off training your own tokenizers. This can make everything nicer - with smaller vocabs your model has fewer parameters (because the token embedding table is a lot smaller), the inference is faster (because there are fewer tokens to predict), and your average sequence length per example could also get smaller (because the compression is a lot more efficient on your data). So let's see how we train a custom tokenizer.
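To make the "fewer parameters" point above concrete, here is a quick back-of-the-envelope sketch (my own illustration, not code from this repo). It assumes the `dim=288` configuration of the 15M "OG" model from the table above and fp32 weights; the token embedding table is `vocab_size * dim` floats, and the saving roughly doubles if the output classifier weights are not shared with the embedding table:

```c
// embedding_size.c -- back-of-the-envelope sketch (not part of this repo):
// how much the token embedding table shrinks with a 4096-token custom vocab.
// Assumes the dim=288 config of the 15M "OG" model and fp32 weights.
#include <stdio.h>

int main(void) {
    int dim = 288;                  // transformer dimension of the 15M model
    int vocabs[2] = {32000, 4096};  // default Llama 2 vocab vs. custom vocab
    for (int i = 0; i < 2; i++) {
        long params = (long)vocabs[i] * dim;              // embedding table is vocab_size x dim
        double mb = (double)params * sizeof(float) / 1e6; // fp32 bytes -> MB
        printf("vocab %5d: %9ld embedding params, %5.1f MB in fp32\n",
               vocabs[i], params, mb);
    }
    return 0;
}
```

The rest of the network is untouched by the vocab change, so a smaller tokenizer mostly buys you a smaller embedding (and possibly classifier) matrix plus the shorter sequences described above.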
+ +By default, to pretokenize the tinystories dataset we had to run, in order: + +``` +python tinystories.py download +python tinystories.py pretokenize +``` + +The `pretokenize` stage here loads the Llama 2 tokenizer (vocab size 32,000) and uses it to convert the downloaded text into integers, and saves that to a file. We now change this as follows, to train an example 4096-token tokenizer: + +``` +python tinystories.py download +python tinystories.py train_vocab --vocab_size=4096 +python tinystories.py pretokenize --vocab_size=4096 +``` + +The `train_vocab` stage will call the `train_vocab.sh` script, which calls the `sentencepiece` library to train the tokenizer, storing it in a new file `data/tok4096.model`. I tried to reproduce as well as I could the settings that (I think) Meta used to train their vocabulary. This uses the Byte Pair Encoding algorithm that starts out with raw utf8 byte sequences of the text data and then iteratively merges the most common consecutive pairs of tokens to form the vocabulary. Inspect the `tinystories.py` file - the custom tokenizers are stored in a special directory structure indexed by the vocab size. + +A quick note of interest is that a vocab size of 4096 trained specifically on tinystories creates integer sequences with about the same sequence length per example as the default Llama 2 tokenizer of 32000 tokens! This means that our custom, tailored tokenizer is a lot better adapted to our specific text, and can compress it very effectively. So our trained models are smaller and faster. + +Now that we have pretokenized the dataset with our custom tokenizer, we can train the model. The training script `train.py` doesn't care about the exact tokens, it only cares about the vocabulary size so it can correctly initialize the model. So when training your model, make sure to pass in + +``` +python train.py --vocab_source=custom --vocab_size=4096 +``` + +(The defaults are `llama2` and `32000` respectively, which indicate the default Llama 2 tokenizer). This trains the model. Finally we are ready to run inference with our `run.c` script. For that we need two things. Number one, we have to export our tokenizer in the `.bin` format; do that with: + +``` +python tokenizer.py --tokenizer-model=data/tok4096.model +``` + +This writes the tokenizer to `data/tok4096.bin`. Now we can run inference, pointing it to this tokenizer using the `-z` flag: + +``` +./run out/model.bin -z data/tok4096.bin +``` + +This should print the samples. If you leave out the `-z` flag, it will use the default Llama 2 tokenizer, which would generate a good sequence of integers, but they would get translated using a different vocabulary to text, so it would look like gibberish. + +## performance + +There are many ways to potentially speed up this code depending on your system. Have a look at the [Makefile](Makefile), which contains a lot of notes. The `make run` command currently uses the `-O3` optimization by default, i.e.: + +```bash +gcc -O3 -o run run.c -lm +``` + +-O3 includes optimizations that are expensive in terms of compile time and memory usage, including vectorization, loop unrolling, and branch prediction. + +To get much better performance, try compiling with `make runfast`. This turns on the `-Ofast` flag, which includes additional optimizations that may break compliance with the C/IEEE specifications, in addition to `-O3`. See [the GCC docs](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html) for more information.
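As a side note on why `-Ofast` is allowed to change the numbers at all: floating-point addition is not associative, and `-ffast-math` permits the compiler to reorder reductions like the ones inside `matmul` and `rmsnorm`. Here is a tiny standalone illustration (my own sketch, not code from this repo):

```c
// fastmath_demo.c -- why reassociation (allowed under -Ofast/-ffast-math) can
// change results: float addition is not associative.
#include <stdio.h>

int main(void) {
    float big = 1e8f, tiny = 3.0f;  // the float spacing (ulp) around 1e8 is 8

    float left = big + tiny;        // 3 < 4 (half an ulp), so it rounds away...
    left = left + tiny;             // ...twice: left stays at 100000000

    float pair = tiny + tiny;       // 6 survives as an exact float...
    float right = big + pair;       // ...and 1e8 + 6 rounds up to 100000008

    printf("(big + tiny) + tiny = %.1f\n", left);
    printf("big + (tiny + tiny) = %.1f\n", right);
    return 0;
}
```

So a `make runfast` build is not guaranteed to be bit-identical to a `make run` build; in practice the sampled stories are still fine.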
+ +Try `-march=native` to compile the program to use the architecture of the machine you're compiling on rather than a more generic CPU. This may enable additional optimizations and hardware-specific tuning such as improved vector instructions/width. + +The fastest throughput I have seen so far on my MacBook Air (M1) is with `make runfast`. + +You can also experiment with replacing `gcc` with `clang`. + +If compiling with gcc, try experimenting with `-funroll-all-loops`, see PR [#183](https://github.com/karpathy/llama2.c/pull/183). + +### OpenMP + +Big improvements can also be achieved by compiling with OpenMP, which "activates" the `#pragma omp parallel for` inside the matmul and attention, allowing the work in the loops to be split up over multiple processors. +You'll need to install the OpenMP library and the clang compiler first (e.g. `apt install clang libomp-dev` on ubuntu). Then you can compile with `make runomp`, which does: + +```bash +clang -Ofast -fopenmp -march=native run.c -lm -o run +``` + +When you run inference, make sure to use OpenMP flags to set the number of threads, e.g.: + +```bash +OMP_NUM_THREADS=4 ./run out/model.bin +``` + +Depending on your system resources, you may want to tweak these hyperparameters and use more threads. But more is not always better; usually this is a bit U-shaped. + +## platforms + +On **Windows**, use `build_msvc.bat` in a Visual Studio Command Prompt to build with msvc, or you can use `make win64` to use the mingw compiler toolchain from linux or windows to build the windows target. The MSVC build will automatically use openmp and max threads appropriate for your CPU unless you set the `OMP_NUM_THREADS` env var. + +On **Centos 7** and **Amazon Linux 2018**, use the `rungnu` Makefile target: `make rungnu` or `make runompgnu` to use openmp. + +On **Mac**, use clang from brew for the openmp build. Install clang with `brew install llvm` and use the installed clang binary to compile with openmp: `make runomp CC=/opt/homebrew/opt/llvm/bin/clang` + +## tests + +You can run tests simply with pytest: + +```bash +$ pip install pytest +$ pytest +``` + +This will currently invoke two tests inside `test_all.py`, which forward the model in both C and Python for 200 steps and check the output against a known good expected output. The tests currently run in only a few seconds, but will have to download and cache the stories260K models in a temporary `test` directory (only ~2MB download). + +## ack + +I trained the llama2.c storyteller models on a 4X A100 40GB box graciously provided by the excellent [Lambda labs](https://lambdalabs.com/service/gpu-cloud), thank you. + +## discord + +Figured it's possible to reuse my existing discord channel (that I use for my [zero to hero youtube series](https://karpathy.ai/zero-to-hero.html)), see the #llama2c channel on [discord](https://discord.gg/3zy8kqD9Cp), for any quick questions, related discussions, etc. + +## contributing + +A few words on this repo and the kinds of PRs that are likely to be accepted. What is the goal of this repo? Basically I think there will be a lot of interest in training or finetuning custom micro-LLMs (think ~100M - ~1B params, but let's say up to ~10B params) across a large diversity of applications, and deploying them in edge-adjacent environments (think MCUs, phones, web browsers, laptops, etc.). I'd like this repo to be the simplest, smallest, most hackable repo to support this workflow, both training and inference.
In particular, this repo is not a complex framework with 1000 knobs controlling inscrutable code across a nested directory structure of hundreds of files. Instead, I expect most applications will wish to create a fork of this repo and hack it to their specific needs and deployment platforms. + +People who care about deployment efficiency above all else should look at [llama.cpp](https://github.com/ggerganov/llama.cpp). This repo still cares about efficiency, but not at the cost of simplicity, readability or portability. Basically, I expect that a lot of people come to this repo because the training code is 2 readable .py files and the inference code is 500 lines of C. So I'd like this to continue to be the simplest kind of "reference implementation" that can be easily hacked in a separate fork into whatever downstream application people are excited about. It shouldn't be full-featured. It shouldn't take 100 different options or settings. It shouldn't be the most efficient. A few examples: + +- someone re-ordered two loops to improve data locality for a small efficiency win => instant merge. +- someone added the one line "pragma omp parallel for", which allows you to compile with OpenMP and dramatically speed up the code, or acts as just a comment if you don't compile it that way => instant merge. +- bug fixes and touchups etc. => happy to merge + +A few examples of PRs that are not an excellent fit: + +- adding more than several #ifdefs all over the place in the code. If they are localized / few, might be okay. +- adding a lot of code that is very specific to some specific platform (e.g. MCUs, or some special version of linux or processor). These may be a better fit for forks of the project, and I am very happy to maintain a list of these forks in the section below. +- adding hundreds of lines of code to run.c that are only active in specific scenarios or platforms. + +If your candidate PRs have elements of these, it doesn't mean they won't get merged, it just means they will make it into the gray territory. TLDR: I am eager to merge any mostly small, mostly localized, broadly applicable, clean changes that improve the efficiency and portability of the repo, while keeping its hackability and readability. I appreciate all PRs seeking to help me improve the project, thank you! <3.
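For readers who haven't seen the OpenMP one-liner mentioned above (in the OpenMP section and in the "instant merge" example), here is roughly what that kind of change looks like, as a minimal standalone sketch in the spirit of run.c's `matmul` rather than a drop-in patch. The pragma is ignored unless you compile with `-fopenmp`, which is exactly why it is such an easy merge:

```c
// omp_matmul.c -- minimal sketch of the "one line pragma" pattern used in run.c.
// Without -fopenmp the pragma acts like a comment and this is plain single-threaded C.
#include <stdio.h>

// W (d,n) @ x (n,) -> xout (d,)
void matmul(float* xout, const float* x, const float* w, int n, int d) {
    int i;
    #pragma omp parallel for private(i)  // each output row is independent, so rows can run in parallel
    for (i = 0; i < d; i++) {
        float val = 0.0f;
        for (int j = 0; j < n; j++) {
            val += w[i * n + j] * x[j];
        }
        xout[i] = val;
    }
}

int main(void) {
    float w[6] = {1, 2, 3, 4, 5, 6};  // 2x3 matrix, row-major
    float x[3] = {1, 1, 1};
    float y[2];
    matmul(y, x, w, 3, 2);
    printf("%.1f %.1f\n", y[0], y[1]);  // prints 6.0 15.0
    return 0;
}
```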
+ +## notable forks + +- Rust + - [llama2.rs](https://github.com/gaxler/llama2.rs) by @[gaxler](https://github.com/gaxler): a Rust port of this project + - [llama2.rs](https://github.com/leo-du/llama2.rs) by @[leo-du](https://github.com/leo-du): A Rust port of this project + - [llama2-rs](https://github.com/danielgrittner/llama2-rs) by @[danielgrittner](https://github.com/danielgrittner): a Rust port of this project + - [llama2.rs](https://github.com/lintian06/llama2.rs) by @[lintian06](https://github.com/lintian06): A Rust port of this project +- Go + - [go-llama2](https://github.com/tmc/go-llama2) by @[tmc](https://github.com/tmc): a Go port of this project + - [llama2.go](https://github.com/nikolaydubina/llama2.go) by @[nikolaydubina](https://github.com/nikolaydubina): a Go port of this project + - [llama2.go](https://github.com/haormj/llama2.go) by @[haormj](https://github.com/haormj): a Go port of this project + - [llama2.go](https://github.com/saracen/llama2.go) by @[saracen](https://github.com/saracen): a Go port of this project +- Android + - [llama2.c-android](https://github.com/Manuel030/llama2.c-android): by @[Manuel030](https://github.com/Manuel030): adds Android binaries of this project + - [llama2.c-android-wrapper](https://github.com/celikin/llama2.c-android-wrapper): by @[celikin](https://github.com/celikin): added JNI wrapper, PoC +- C++ + - [llama2.cpp](https://github.com/leloykun/llama2.cpp) by @[leloykun](https://github.com/leloykun): a C++ port of this project +- JavaScript + - [llama2.js](https://github.com/epicure/llama2.js) by @[epicure](https://github.com/epicure): a JavaScript port of this project + - [llama2.ts](https://github.com/wizzard0/llama2.ts) by @[oleksandr_now](https://twitter.com/oleksandr_now): a TypeScript port of this project. Full Llama2-7B capable. 
+ - [llama2.c-emscripten](https://github.com/gohai/llama2.c-emscripten) by @[gohai](https://github.com/gohai): Emscripten (JavaScript) port, based on @ggerganov's initial prototype +- Zig + - [llama2.zig](https://github.com/cgbur/llama2.zig) by @[cgbur](https://github.com/cgbur): A Zig port of this project + - [llama2.zig](https://github.com/vodkaslime/llama2.zig) by @[vodkaslime](https://github.com/vodkaslime): a Zig port of this project + - [llama2.zig](https://github.com/clebert/llama2.zig) by @[clebert](https://github.com/clebert): a Zig port of this project +- Julia + - [llama2.jl](https://github.com/juvi21/llama2.jl) by @[juvi21](https://github.com/juvi21): a Julia port of this project +- Scala + - [llama2.scala](https://github.com/jrudolph/llama2.scala) by @[jrudolph](https://github.com/jrudolph): a Scala port of this project +- Java + - [llama2.java](https://github.com/mukel/llama2.java) by @[mukel](https://github.com/mukel): a Java port of this project +- Kotlin + - [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @[madroidmaq](https://github.com/madroidmaq): a Kotlin port of this project +- Python + - [llama2.py](https://github.com/tairov/llama2.py) by @[tairov](https://github.com/tairov): a simple one file pure Python port of this project with zero dependencies +- C# + - [llama2.cs](https://github.com/trrahul/llama2.cs) by @[trrahul](https://github.com/trrahul): a C# port of this project +- WebAssembly + - [icpp-llm](https://github.com/icppWorld/icpp-llm): LLMs for the Internet Computer +- [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 +- [llama2.c-zh - Bilingual Chinese and English](https://github.com/chenyangMl/llama2.c-zh) by @[chenyangMl](https://github.com/chenyangMl): Expands the tokenizer to support training and inference in both Chinese and English + +## unsorted todos + +- make it easier to add a new dataset with not too much pain +- should calculate freq_cis online in the script run.c instead of loading them +- int4/8 quantization +- export the model in a more sensible output format with a proper header, etc. +- support Llama 2 7B Chat models and tune run.c to Chat UI/UX +- llama2.cu investigate and merge +- (LoRA) finetuning and export of Llama 2 models + +## License + +MIT diff --git a/README.md b/README.md index 8c36285..de13c23 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,48 @@ -## llama2.c +## llama2.dart + +This is a fork of Andrej Karpathy's [llama2.c](https://github.com/karpathy/llama2.c), implemented in (almost) pure Dart, except for an args-parsing utility library. + +### To run: + +Install Dart: + +```bash +brew tap dart-lang/dart +brew install dart +``` + +Install the arg parsing dependency: + +```bash +dart pub add args +``` + +Download the pretrained model checkpoints: + +```bash +wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin +wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin +wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin +``` + +Then run inference on a checkpoint with a prompt: + +```bash +dart run run.dart -c ./stories15M.bin -i "PROMPT GOES HERE" +``` + +## Performance + +Dart delivers surprisingly OK performance for a single-threaded language, though it starts to struggle at 110M. +Tested on an M2 Max chip: + +| Model | Tokens/s | +| ----- | -------- | +| 15M | 17.78 | +| 42M | 6.43 | +| 110M | 2.47 | + +### Original README + +Extract from the original repo:

Cute Llama @@ -10,312 +54,4 @@ As the architecture is identical, you can also load and inference Meta's Llama 2 Please note that this repo started recently as a fun weekend project: I took my earlier [nanoGPT](https://github.com/karpathy/nanoGPT), tuned it to implement the Llama-2 architecture instead of GPT-2, and the meat of it was writing the C inference engine in [run.c](run.c). So the project is young and moving quickly. Hat tip to the awesome [llama.cpp](https://github.com/ggerganov/llama.cpp) for inspiring this project. Compred to llama.cpp, I wanted something super simple, minimal, and educational so I chose to hard-code the Llama 2 architecture and just roll one inference file of pure C with no dependencies. -## feel the magic - -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/karpathy/llama2.c/blob/master/run.ipynb) - -First, navigate to the folder when you keep your projects and clone this repository to this folder: - -```bash -git clone https://github.com/karpathy/llama2.c.git -``` - -Then, open the repository folder: - -```bash -cd llama2.c -``` - -Now, let's just run a baby Llama 2 model in C. You need a model checkpoint. Download this 15M parameter model I trained on the [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) dataset (~60MB download): - -```bash -wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin -``` - -Compile and run the C code: - -```bash -make run -./run stories15M.bin -``` - -You'll see the text stream a sample. On my M1 MacBook Air this runs at ~110 tokens/s. See [performance](#performance) or the Makefile for compile flags that can significantly speed this up. We can also try a bit bigger 42M parameter model: - -```bash -wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin -./run stories42M.bin -``` - -This still runs at interactive rates and samples more coherent and diverse stories: - -> Once upon a time, there was a little girl named Lily. She loved playing with her toys on top of her bed. One day, she decided to have a tea party with her stuffed animals. She poured some tea into a tiny teapot and put it on top of the teapot. Suddenly, her little brother Max came into the room and wanted to join the tea party too. Lily didn't want to share her tea and she told Max to go away. Max started to cry and Lily felt bad. She decided to yield her tea party to Max and they both shared the teapot. But then, something unexpected happened. The teapot started to shake and wiggle. Lily and Max were scared and didn't know what to do. Suddenly, the teapot started to fly towards the ceiling and landed on the top of the bed. Lily and Max were amazed and they hugged each other. They realized that sharing was much more fun than being selfish. From that day on, they always shared their tea parties and toys. - -You can also prompt the model with a prefix or a number of additional command line arguments, e.g. to sample at temperature 0.8 for 256 steps and with a prompt: - -```bash -./run stories42M.bin -t 0.8 -n 256 -i "One day, Lily met a Shoggoth" -``` - -> One day, Lily met a Shoggoth. He was very shy, but was also very generous. Lily said “Hello Shoggy! Can I be your friend?” Shoggy was happy to have a friend and said “Yes, let’s explore the universe together!” So they set off on a journey to explore the universe. As they travelled, Shoggy was happy to explain to Lily about all the wonderful things in the universe. 
At the end of the day, Lily and Shoggy had gathered lots of wonderful things from the universe, and they both felt very proud. They promised to explore the universe as one big pair and to never stop being generous to each other. - -There is also an even better 110M param model available, see [models](#models). - -Quick note on sampling, the recommendation for ~best results is to sample with `-t 1.0 -p 0.9`, i.e. temperature 1.0 (default) but also top-p sampling at 0.9 (default). Intuitively, top-p ensures that tokens with tiny probabilities do not get sampled, so we can't get "unlucky" during sampling, and we are less likely to go "off the rails" afterwards. More generally, to control the diversity of samples use either the temperature (i.e. vary `-t` between 0 and 1 and keep top-p off with `-p 0`) or the top-p value (i.e. vary `-p` between 0 and 1 and keep `-t 1`), but not both. Nice explainers on LLM sampling strategies include [this](https://peterchng.com/blog/2023/05/02/token-selection-strategies-top-k-top-p-and-temperature/), [this](https://docs.cohere.com/docs/controlling-generation-with-top-k-top-p) or [this](https://huggingface.co/blog/how-to-generate). - -## Meta's Llama 2 models - -As the neural net architecture is identical, we can also inference the Llama 2 models released by Meta. Sadly there is a bit of friction here due to licensing (I can't directly upload the checkpoints, I think). So Step 1, get the Llama 2 checkpoints by following the [Meta instructions](https://github.com/facebookresearch/llama). Once we have those checkpoints, we have to convert them into the llama2.c format. -For this we need to install the python dependencies (`pip install -r requirements.txt`) and then use the `export_meta_llama_bin.py` file, e.g. for 7B model: - -```bash -python export_meta_llama_bin.py path/to/llama/model/7B llama2_7b.bin -``` - -The export will take ~10 minutes or so and generate a 26GB file (the weights of the 7B model in float32) called `llama2_7b.bin` in the current directory. It has been [reported](https://github.com/karpathy/llama2.c/pull/85) that despite efforts, the 13B export currently doesn't work for unknown reasons (accepting PRs for fix). We can run the model as normal: - -```bash -./run llama2_7b.bin -``` - -This ran at about 4 tokens/s compiled with [OpenMP](#OpenMP) on 96 threads on my CPU Linux box in the cloud. (On my MacBook Air M1, currently it's closer to 30 seconds per token if you just build with `make runfast`.) Example output: - -> The purpose of this document is to highlight the state-of-the-art of CoO generation technologies, both recent developments and those in commercial use. The focus is on the technologies with the highest merit to become the dominating processes of the future and therefore to be technologies of interest to S&T ... R&D. As such, CoO generation technologies developed in Russia, Japan and Europe are described in some depth. The document starts with an introduction to cobalt oxides as complex products and a short view on cobalt as an essential material. The document continues with the discussion of the available CoO generation processes with respect to energy and capital consumption as well as to environmental damage. - -base models... ¯\\_(ツ)_/¯. Since we can inference the base model, it should be possible to also inference the chat model quite easily, and have a conversation with it. And if we can find a way to run 7B more efficiently, we can start adding LoRA to our training script, and going wild with finetunes all within the repo! 
- -## models - -For the sake of examples of smaller, from-scratch models, I trained a small model series on TinyStories. All of these trained in a few hours on my training setup (4X A100 40GB GPUs). The 110M took around 24 hours. I am hosting them on huggingface hub [tinyllamas](https://huggingface.co/karpathy/tinyllamas), both in the original PyTorch .pt, and also in the llama2.c format .bin: - -| model | dim | n_layers | n_heads | n_kv_heads | max context length | parameters | val loss | download -| --- | --- | --- | --- | --- | --- | --- | --- | --- | -| 260K | 64 | 5 | 8 | 4 | 512 | 260K | 1.297 | [stories260K](https://huggingface.co/karpathy/tinyllamas/tree/main/stories260K) -| OG | 288 | 6 | 6 | 6 | 256 | 15M | 1.072 | [stories15M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin) | -| 42M| 512 | 8 | 8 | 8 | 1024 | 42M | 0.847 | [stories42M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin) | -| 110M| 768 | 12 | 12 | 12 | 1024 | 110M | 0.760 | [stories110M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin) | - -You'll notice that the 110M model is equivalent to GPT-1 in size. Alternatively, this is also the smallest model in the GPT-2 series (`GPT-2 small`), except the max context length is only 1024 instead of 2048. The only notable changes from GPT-1/2 architecture is that Llama uses RoPE relatively positional embeddings instead of absolute/learned positional embeddings, a bit more fancy SwiGLU non-linearity in the MLP, RMSNorm instead of LayerNorm, bias=False on all Linear layers, and is optionally multiquery (but this is not yet supported in llama2.c). - -## training - -Let's see how we can train a baby Llama 2 from scratch using the code in this repo. First let's download and pretokenize some source dataset, e.g. I like [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) so this is the only example currently available in this repo. But it should be very easy to add datasets, see the code. - -```bash -python tinystories.py download -python tinystories.py pretokenize -``` - -Then train our model: - -```bash -python train.py -``` - -**brief training guide**. See the train.py script for more exotic launches and hyperparameter overrides. Here is a brief guide to how to set the parameters. Look at the table at the very end of the [Chinchilla paper](https://arxiv.org/abs/2203.15556) to get a sense of how the Transformer parameters (dim, n_layers, n_heads) grow or shrink together. Extrapolate/interpolate this pattern to get bigger or smaller transformers. Set the max context length however you wish, depending on the problem: this should be the max number of tokens that matter to predict the next token. E.g. Llama 2 uses 2048. Next, you want the _total_ batch size per update (printed by the script as "tokens per iteration will be:") to be somewhere around 100K tokens for medium-sized applications. For tiny applications it could be lower, for large training (e.g. GPTs/LLamas) it is usually ~0.5M, or even more. You get there by first maxing out the batch_size to whatever your system allows (e.g. mine was 16 in a recent run because after that my GPU runs out of memory), and then you want to increase gradient_accumulation_steps to be as high as necessary to reach the total batch size of ~100K. Finally, you want to tune your learning_rate (LR). You want this to be as high as your training allows. Very small networks can get away with a large LR (e.g. 1e-3 or even higher). 
Large networks need lower LRs. 3e-4 is a safe choice in most medium-sized applications, but can be too low for small networks, so try to increase it! Finally, max_iters is the length of training. Play with different settings. I mostly only ever tune these parameters and leave most of the others unchanged. Here is an example of how I trained the 110M model, which I don't think is anywhere near optimal, but looked sensible to me: dim 768, n_layers 12, n_heads 12 (so size of each head is 768 / 12 = 64 channels), seq len of 1024, batch size 16 (this is the most that fit my A100 40GB GPU), gradient_accumulation_steps = 8 was needed to get total tokens batch size to be 16 batch size * 1024 tokens in sequence * 8 grad_accum = 131,072 tokens per update. Good. Learning rate 4e-4 (probably a little too low). max_iters 200K (probably a bit too high). Dropout 0.1, as that usually helps a bit at medium size. That was it. I ran using Distributed Data Parallel (DDP) on 4 GPUs on my cloud machine, training took ~day or so. - -Totally understand if you want to skip model training, for simple demo just download one of the pretrained models (see [models](#models) section), e.g.: - -```bash -wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin -``` - -Once we have the model.bin file, we can inference in C. Compile the C code first: - -```bash -make run -``` - -You can now run it simply as - -```bash -./run stories15M.bin -``` - -Watch the tokens stream by, fun! We can also run the PyTorch inference script for a comparison. Download one of the models again from huggingface hub and point the `sample.py` script at it: - -```bash -wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.pt -P out15M -python sample.py --checkpoint=out15M/stories15M.pt -``` - -Which gives the same results. - -## custom tokenizers - -In everything above, we've assumed the custom Lllama 2 tokenizer with 32,000 tokens. However, in many boutique LLMs, using vocabulary this big might be an overkill. If you have a small application you have in mind, you might be much better off training your own tokenizers. This can make everything nicer - with smaller vocabs your model has fewer parameters (because the token embedding table is a lot smaller), the inference is faster (because there are fewer tokens to predict), and your average sequence length per example could also get smaller (because the compression is a lot more efficient on your data). So let's see how we train a custom tokenizer. - -By default, to pretokenize the tinystories dataset we had to run, in order: - -``` -python tinystories.py download -python tinystories.py pretokenize -``` - -The `pretokenize` stage here loads the Llama 2 tokenizer (vocab size 32,000) and uses it to convert the downloaded text into integers, and saves that to file. We now change this as follows, to train an example 4096-token tokenizer: - -``` -python tinystories.py download -python tinystories.py train_vocab --vocab_size=4096 -python tinystories.py pretokenize --vocab_size=4096 -``` - -The `train_vocab` stage will call the `train_vocab.sh` script, which calls the `sentencepiece` library to train the tokenizer, storing it in a new file `data/tok4096.model`. I tried to reproduce as well as I could the settings that (I think) Meta used to train their vocabulary. This uses the Byte Pair Encoding algorithm that starts out with raw utf8 byte sequences of the text data and then iteratively merges the most common consecutive pairs of tokens to form the vocabulary. 
Inspect the `tinystories.py` file - the custom tokenizers are stored in a special directory structure indexed by the vocab size. - -A quick note of interest is that vocab size of 4096 trained specifically on tinystories creates integer sequences with about the same sequence length per example as the default Llama 2 tokenizer of 32000 tokens! This means that our custom, tailored tokenizer is a lot better adapted to our specific text, and can compress it very effectively. So our trained models are smaller and faster. - -Now that we have pretokenized the dataset with our custom tokenizer, we can train the model. The training script `train.py` doesn't care about the exact tokens, it only cares about the vocabulary size so it can correctly initialize the model. So when training your model, make sure to pass in - -``` -python train.py --vocab_source=custom --vocab_size=4096 -``` - -(The defaults are `llama2` and `32000` respectively, which indicates the default Llama 2 tokenizer). This trains the model. Finally we are ready to run inference with our `run.c` script. For that we need two things. Number one, we have to export our tokenizer in the `.bin` format, do that with: - -``` -python tokenizer.py --tokenizer-model=data/tok4096.model -``` - -This writes the tokenizer to `data/tok4096.bin`. Now we can run inference, pointing it to this tokenizer using the `-z` flag: - -``` -./run out/model.bin -z data/tok4096.bin -``` - -This should print the samples. If you leave out the `-z` flag, it will use the default Llama 2 tokenizer, which would generate a good sequence of integers, but they would get translated using a different vocabulary to text, so it would look like gibberish. - -## performance - -There are many ways to potentially speed up this code depending on your system. Have a look at the [Makefile](Makefile), which contains a lot of notes. The `make run` command currently uses the `-O3` optimization by default, i.e.: - -```bash -gcc -O3 -o run run.c -lm -``` - --O3 includes optimizations that are expensive in terms of compile time and memory usage. Including vectorization, loop unrolling, and predicting branches. - -To get a much better performance, try to compile with `make runfast`. This turns on the `-Ofast` flag, which includes additional optimizations that may break compliance with the C/IEEE specifications, in addition to `-O3`. See [the GCC docs](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html) for more information. - -Try `-march=native` to compile the program to use the architecture of the machine you're compiling on rather than a more generic CPU. This may enable additional optimizations and hardware-specific tuning such as improved vector instructions/width. - -The fastest throughput I saw so far on my MacBook Air (M1) so far is with `make runfast`. - -You can also experiment with replacing `gcc` with `clang`. - -If compiling with gcc, try experimenting with `-funroll-all-loops`, see PR [#183](https://github.com/karpathy/llama2.c/pull/183) - -### OpenMP -Big improvements can also be achieved by compiling with OpenMP, which "activates" the `#pragma omp parallel for` inside the matmul and attention, allowing the work in the loops to be split up over multiple processors. -You'll need to install the OpenMP library and the clang compiler first (e.g. `apt install clang libomp-dev` on ubuntu). 
Then you can compile with `make runomp`, which does: - -```bash -clang -Ofast -fopenmp -march=native run.c -lm -o run -``` - -When you run inference make sure to use OpenMP flags to set the number of threads, e.g.: - -```bash -OMP_NUM_THREADS=4 ./run out/model.bin -``` - -Depending on your system resources you may want to tweak these hyperparameters and use more threads. But more is not always better, usually this is a bit U shaped. - -## platforms - -On **Windows**, use `build_msvc.bat` in a Visual Studio Command Prompt to build with msvc, or you can use `make win64` to use mingw compiler toolchain from linux or windows to build the windows target. MSVC build will automatically use openmp and max threads appropriate for your CPU unless you set `OMP_NUM_THREADS` env. - -On **Centos 7**, **Amazon Linux 2018** use `rungnu` Makefile target: `make rungnu` or `make runompgnu` to use openmp. - -On **Mac**, use clang from brew for openmp build. Install clang as `brew install llvm` and use the installed clang binary to compile with openmp: `make runomp CC=/opt/homebrew/opt/llvm/bin/clang` - -## tests - -You can run tests simply with pytest: - -```bash -$ pip install pytest -$ pytest -``` - -This will currently invoke two tests inside `test_all.py`, which forward the model in both C and Python for 200 steps and check the output against a known good expected output. The tests currently run in only a few seconds, but will have to download and cache the stories260K models in a temporary `test` directory (only ~2MB download). - -## ack - -I trained the llama2.c storyteller models on a 4X A100 40GB box graciously provided by the excellent [Lambda labs](https://lambdalabs.com/service/gpu-cloud), thank you. - -## discord - -Figured it's possible to reuse my existing discord channel (that I use for my [zero to hero youtube series](https://karpathy.ai/zero-to-hero.html)), see #llama2c channel on [discord](https://discord.gg/3zy8kqD9Cp), for any quick questions, related discussions, etc. - -## contributing - -A few words on this repo and the kinds of PRs that are likely to be accepted. What is the goal of this repo? Basically I think there will be a lot of interest in training or finetuning custom micro-LLMs (think ~100M - ~1B params, but let's say up to ~10B params) across a large diversity of applications, and deploying them in edge-adjacent environments (think MCUs, phones, web browsers, laptops, etc.). I'd like this repo to be the simplest, smallest, most hackable repo to support this workflow, both training and inference. In particular, this repo is not a complex framework with a 1000 knobs controlling inscrutible code across a nested directory structure of hundreds of files. Instead, I expect most applications will wish to create a fork of this repo and hack it to their specific needs and deployment platforms. - -People who care about deployment efficiency above all else should look at [llama.cpp](https://github.com/ggerganov/llama.cpp). This repo still cares about efficiency, but not at the cost of simplicity, readability or portability. Basically, I expect that a lot of people come to this repo because the training code is 2 readable .py files and the inference code is 500 lines of C. So I'd like this to continue to be a kind of simplest "reference implementation" that can be easily hacked in a separate fork into whatever downstream application people are excited about. It shouldn't be full-featured. It shouldn't take 100 different options or settings. It shouldn't be the most efficient. 
A few examples: - -- someone re-ordered two loops to improve data locality for a small efficieny win => instant merge. -- someone added the one line "pragma omp parallel for", which allows you to compile with OpenMP and dramatically speed up the code, or acts as just a comment if you don't compile it that way => instant merge. -- bug fixes and touchups etc. => happy to merge - -A few examples of PRs are that are not an excellent fit: - -- adding more than several #ifdefs all over the place in code. If they are localized / few, might be okay. -- adding a lot of code that is very specific to some specific platform (e.g. MCUs, or some special version of linux or processor). These may be a better fit for forks of the project, and I am very happy to maintain a list of these forks in section below. -- adding hundreds of lines of code to run.c that are only active in specific scenarios or platforms. - -If your candidate PRs have elements of these it doesn't mean they won't get merged, it just means they will make it into the gray territory. TLDR: I am eager to merge any mostly small, mostly localized, broadly applicable, clean changes that improve the efficiency and portability of the repo, while keep its hackability and readability. I appreciate all PRs seeking to help me improve the project, thank you! <3. - -## notable forks - -- Rust - - [llama2.rs](https://github.com/gaxler/llama2.rs) by @[gaxler](https://github.com/gaxler): a Rust port of this project - - [llama2.rs](https://github.com/leo-du/llama2.rs) by @[leo-du](https://github.com/leo-du): A Rust port of this project - - [llama2-rs](https://github.com/danielgrittner/llama2-rs) by @[danielgrittner](https://github.com/danielgrittner): a Rust port of this project - - [llama2.rs](https://github.com/lintian06/llama2.rs) by @[lintian06](https://github.com/lintian06): A Rust port of this project -- Go - - [go-llama2](https://github.com/tmc/go-llama2) by @[tmc](https://github.com/tmc): a Go port of this project - - [llama2.go](https://github.com/nikolaydubina/llama2.go) by @[nikolaydubina](https://github.com/nikolaydubina): a Go port of this project - - [llama2.go](https://github.com/haormj/llama2.go) by @[haormj](https://github.com/haormj): a Go port of this project - - [llama2.go](https://github.com/saracen/llama2.go) by @[saracen](https://github.com/saracen): a Go port of this project -- Android - - [llama2.c-android](https://github.com/Manuel030/llama2.c-android): by @[Manuel030](https://github.com/Manuel030): adds Android binaries of this project - - [llama2.c-android-wrapper](https://github.com/celikin/llama2.c-android-wrapper): by @[celikin](https://github.com/celikin): added JNI wrapper, PoC -- C++ - - [llama2.cpp](https://github.com/leloykun/llama2.cpp) by @[leloykun](https://github.com/leloykun): a C++ port of this project -- JavaScript - - [llama2.js](https://github.com/epicure/llama2.js) by @[epicure](https://github.com/epicure): a JavaScript port of this project - - [llama2.ts](https://github.com/wizzard0/llama2.ts) by @[oleksandr_now](https://twitter.com/oleksandr_now): a TypeScript port of this project. Full Llama2-7B capable. 
- - [llama2.c-emscripten](https://github.com/gohai/llama2.c-emscripten) by @[gohai](https://github.com/gohai): Emscripten (JavaScript) port, based on @ggerganov's initial prototype -- Zig - - [llama2.zig](https://github.com/cgbur/llama2.zig) by @[cgbur](https://github.com/cgbur): A Zig port of this project - - [llama2.zig](https://github.com/vodkaslime/llama2.zig) by @[vodkaslime](https://github.com/vodkaslime): a Zig port of this project - - [llama2.zig](https://github.com/clebert/llama2.zig) by @[clebert](https://github.com/clebert): a Zig port of this project -- Julia - - [llama2.jl](https://github.com/juvi21/llama2.jl) by @[juvi21](https://github.com/juvi21): a Julia port of this project -- Scala - - [llama2.scala](https://github.com/jrudolph/llama2.scala) by @[jrudolph](https://github.com/jrudolph): a Scala port of this project -- Java - - [llama2.java](https://github.com/mukel/llama2.java) by @[mukel](https://github.com/mukel): a Java port of this project -- Kotlin - - [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @[madroidmaq](https://github.com/madroidmaq): a Kotlin port of this project -- Python - - [llama2.py](https://github.com/tairov/llama2.py) by @[tairov](https://github.com/tairov): a simple one file pure Python port of this project with zero dependencies -- C# - - [llama2.cs](https://github.com/trrahul/llama2.cs) by @[trrahul](https://github.com/trrahul): a C# port of this project -- WebAssembly - - [icpp-llm](https://github.com/icppWorld/icpp-llm): LLMs for the Internet Computer -- [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 -- [llama2.c-zh - Bilingual Chinese and English](https://github.com/chenyangMl/llama2.c-zh) by @[chenyangMl](https://github.com/chenyangMl): Expand tokenizer to support training and inference in both Chinese and English - -## unsorted todos - -- make it easier to add a new dataset with not too much pain -- should calculate freq_cis online in the script run.c instead of loading them -- int4/8 quantization -- export the model in a more sensible output format with a proper header, etc. -- support Llama 2 7B Chat models and tune run.c to Chat UI/UX -- llama2.cu investigate and merge -- (LoRA) finetuning and export of Llama 2 models - -## License - -MIT +Please refer to [Original README](/ORIGINAL.md) or the upstream repo for more information on llama2.c diff --git a/build_msvc.bat b/build_msvc.bat deleted file mode 100644 index f3b2c98..0000000 --- a/build_msvc.bat +++ /dev/null @@ -1 +0,0 @@ -cl.exe /fp:fast /Ox /openmp /I. run.c win.c diff --git a/pubspec.lock b/pubspec.lock new file mode 100644 index 0000000..fae61c8 --- /dev/null +++ b/pubspec.lock @@ -0,0 +1,13 @@ +# Generated by pub +# See https://dart.dev/tools/pub/glossary#lockfile +packages: + args: + dependency: "direct main" + description: + name: args + sha256: eef6c46b622e0494a36c5a12d10d77fb4e855501a91c1b9ef9339326e58f0596 + url: "https://pub.dev" + source: hosted + version: "2.4.2" +sdks: + dart: ">=3.1.0 <4.0.0" diff --git a/pubspec.yaml b/pubspec.yaml new file mode 100644 index 0000000..47a3ad5 --- /dev/null +++ b/pubspec.yaml @@ -0,0 +1,10 @@ +name: llama2.dart +description: A one file implementation of llama2 inference +version: 1.0.0 + +environment: + sdk: ^3.1.0 + +# Add regular dependencies here. 
+dependencies: + args: ^2.4.2 diff --git a/run.c b/run.c deleted file mode 100644 index 10d468b..0000000 --- a/run.c +++ /dev/null @@ -1,740 +0,0 @@ -/* Inference for Llama-2 Transformer model in pure C */ - -#include -#include -#include -#include -#include -#include -#include -#if defined _WIN32 - #include "win.h" -#else - #include - #include -#endif -// ---------------------------------------------------------------------------- -// Transformer and RunState structs, and related memory management - -typedef struct { - int dim; // transformer dimension - int hidden_dim; // for ffn layers - int n_layers; // number of layers - int n_heads; // number of query heads - int n_kv_heads; // number of key/value heads (can be < query heads because of multiquery) - int vocab_size; // vocabulary size, usually 256 (byte-level) - int seq_len; // max sequence length -} Config; - -typedef struct { - // token embedding table - float* token_embedding_table; // (vocab_size, dim) - // weights for rmsnorms - float* rms_att_weight; // (layer, dim) rmsnorm weights - float* rms_ffn_weight; // (layer, dim) - // weights for matmuls. note dim == n_heads * head_size - float* wq; // (layer, dim, n_heads * head_size) - float* wk; // (layer, dim, n_kv_heads * head_size) - float* wv; // (layer, dim, n_kv_heads * head_size) - float* wo; // (layer, n_heads * head_size, dim) - // weights for ffn - float* w1; // (layer, hidden_dim, dim) - float* w2; // (layer, dim, hidden_dim) - float* w3; // (layer, hidden_dim, dim) - // final rmsnorm - float* rms_final_weight; // (dim,) - // freq_cis for RoPE relatively positional embeddings (not used anymore) - float* freq_cis_real; // (seq_len, head_size/2) - float* freq_cis_imag; // (seq_len, head_size/2) - // (optional) classifier weights for the logits, on the last layer - float* wcls; -} TransformerWeights; - -typedef struct { - float prob; - int index; -} ProbIndex; // struct used when sorting probabilities during top-p sampling - -typedef struct { - // current wave of activations - float *x; // activation at current time stamp (dim,) - float *xb; // same, but inside a residual branch (dim,) - float *xb2; // an additional buffer just for convenience (dim,) - float *hb; // buffer for hidden dimension in the ffn (hidden_dim,) - float *hb2; // buffer for hidden dimension in the ffn (hidden_dim,) - float *q; // query (dim,) - float *k; // key (dim,) - float *v; // value (dim,) - float *att; // buffer for scores/attention values (n_heads, seq_len) - float *logits; // output logits - ProbIndex *probindex; // buffer used in top-p sampling - // kv cache - float* key_cache; // (layer, seq_len, dim) - float* value_cache; // (layer, seq_len, dim) -} RunState; - -void malloc_run_state(RunState* s, Config* p) { - // we calloc instead of malloc to keep valgrind happy - int kv_dim = (p->dim * p->n_kv_heads) / p->n_heads; - s->x = calloc(p->dim, sizeof(float)); - s->xb = calloc(p->dim, sizeof(float)); - s->xb2 = calloc(p->dim, sizeof(float)); - s->hb = calloc(p->hidden_dim, sizeof(float)); - s->hb2 = calloc(p->hidden_dim, sizeof(float)); - s->q = calloc(p->dim, sizeof(float)); - s->k = calloc(kv_dim, sizeof(float)); - s->v = calloc(kv_dim, sizeof(float)); - s->att = calloc(p->n_heads * p->seq_len, sizeof(float)); - s->logits = calloc(p->vocab_size, sizeof(float)); - s->probindex = calloc(p->vocab_size, sizeof(ProbIndex)); - s->key_cache = calloc(p->n_layers * p->seq_len * kv_dim, sizeof(float)); - s->value_cache = calloc(p->n_layers * p->seq_len * kv_dim, sizeof(float)); - // ensure all mallocs 
went fine - if (!s->x || !s->xb || !s->xb2 || !s->hb || !s->hb2 || !s->q - || !s->k || !s->v || !s->att || !s->logits || !s->key_cache - || !s->value_cache || !s->probindex) { - fprintf(stderr, "malloc failed!\n"); - exit(EXIT_FAILURE); - } -} - -void free_run_state(RunState* s) { - free(s->x); - free(s->xb); - free(s->xb2); - free(s->hb); - free(s->hb2); - free(s->q); - free(s->k); - free(s->v); - free(s->att); - free(s->logits); - free(s->probindex); - free(s->key_cache); - free(s->value_cache); -} - -// ---------------------------------------------------------------------------- -// initialization: read from checkpoint - -void checkpoint_init_weights(TransformerWeights *w, Config* p, float* ptr, int shared_weights) { - int head_size = p->dim / p->n_heads; - w->token_embedding_table = ptr; - ptr += p->vocab_size * p->dim; - w->rms_att_weight = ptr; - ptr += p->n_layers * p->dim; - w->wq = ptr; - ptr += p->n_layers * p->dim * (p->n_heads * head_size); - w->wk = ptr; - ptr += p->n_layers * p->dim * (p->n_kv_heads * head_size); - w->wv = ptr; - ptr += p->n_layers * p->dim * (p->n_kv_heads * head_size); - w->wo = ptr; - ptr += p->n_layers * (p->n_heads * head_size) * p->dim; - w->rms_ffn_weight = ptr; - ptr += p->n_layers * p->dim; - w->w1 = ptr; - ptr += p->n_layers * p->dim * p->hidden_dim; - w->w2 = ptr; - ptr += p->n_layers * p->hidden_dim * p->dim; - w->w3 = ptr; - ptr += p->n_layers * p->dim * p->hidden_dim; - w->rms_final_weight = ptr; - ptr += p->dim; - w->freq_cis_real = ptr; - ptr += p->seq_len * head_size / 2; - w->freq_cis_imag = ptr; - ptr += p->seq_len * head_size / 2; - w->wcls = shared_weights ? w->token_embedding_table : ptr; -} - -// ---------------------------------------------------------------------------- -// neural net blocks - -void rmsnorm(float* o, float* x, float* weight, int size) { - // calculate sum of squares - float ss = 0.0f; - for (int j = 0; j < size; j++) { - ss += x[j] * x[j]; - } - ss /= size; - ss += 1e-5f; - ss = 1.0f / sqrtf(ss); - // normalize and scale - for (int j = 0; j < size; j++) { - o[j] = weight[j] * (ss * x[j]); - } -} - -void softmax(float* x, int size) { - // find max value (for numerical stability) - float max_val = x[0]; - for (int i = 1; i < size; i++) { - if (x[i] > max_val) { - max_val = x[i]; - } - } - // exp and sum - float sum = 0.0f; - for (int i = 0; i < size; i++) { - x[i] = expf(x[i] - max_val); - sum += x[i]; - } - // normalize - for (int i = 0; i < size; i++) { - x[i] /= sum; - } -} - -void matmul(float* xout, float* x, float* w, int n, int d) { - // W (d,n) @ x (n,) -> xout (d,) - // by far the most amount of time is spent inside this little function - int i; - #pragma omp parallel for private(i) - for (i = 0; i < d; i++) { - float val = 0.0f; - for (int j = 0; j < n; j++) { - val += w[i * n + j] * x[j]; - } - xout[i] = val; - } -} - -void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* w) { - - // a few convenience variables - float *x = s->x; - int dim = p->dim; - int kv_dim = (p->dim * p->n_kv_heads) / p->n_heads; - int kv_mul = p->n_heads / p->n_kv_heads; // integer multiplier of the kv sharing in multiquery - int hidden_dim = p->hidden_dim; - int head_size = dim / p->n_heads; - - // copy the token embedding into x - float* content_row = &(w->token_embedding_table[token * dim]); - memcpy(x, content_row, dim*sizeof(*x)); - - // forward all the layers - for(int l = 0; l < p->n_layers; l++) { - - // attention rmsnorm - rmsnorm(s->xb, x, w->rms_att_weight + l*dim, dim); - - // qkv matmuls for this 
position - matmul(s->q, s->xb, w->wq + l*dim*dim, dim, dim); - matmul(s->k, s->xb, w->wk + l*dim*kv_dim, dim, kv_dim); - matmul(s->v, s->xb, w->wv + l*dim*kv_dim, dim, kv_dim); - - // RoPE relative positional encoding: complex-valued rotate q and k in each head - for (int i = 0; i < dim; i+=2) { - int head_dim = i % head_size; - float freq = 1.0f / powf(10000.0f, head_dim / (float)head_size); - float val = pos * freq; - float fcr = cosf(val); - float fci = sinf(val); - int rotn = i < kv_dim ? 2 : 1; // how many vectors? 2 = q & k, 1 = q only - for (int v = 0; v < rotn; v++) { - float* vec = v == 0 ? s->q : s->k; // the vector to rotate (query or key) - float v0 = vec[i]; - float v1 = vec[i+1]; - vec[i] = v0 * fcr - v1 * fci; - vec[i+1] = v0 * fci + v1 * fcr; - } - } - - // save key,value at this time step (pos) to our kv cache - int loff = l * p->seq_len * kv_dim; // kv cache layer offset for convenience - float* key_cache_row = s->key_cache + loff + pos * kv_dim; - float* value_cache_row = s->value_cache + loff + pos * kv_dim; - memcpy(key_cache_row, s->k, kv_dim * sizeof(*key_cache_row)); - memcpy(value_cache_row, s->v, kv_dim * sizeof(*value_cache_row)); - - // multihead attention. iterate over all heads - int h; - #pragma omp parallel for private(h) - for (h = 0; h < p->n_heads; h++) { - // get the query vector for this head - float* q = s->q + h * head_size; - // attention scores for this head - float* att = s->att + h * p->seq_len; - // iterate over all timesteps, including the current one - for (int t = 0; t <= pos; t++) { - // get the key vector for this head and at this timestep - float* k = s->key_cache + loff + t * kv_dim + (h / kv_mul) * head_size; - // calculate the attention score as the dot product of q and k - float score = 0.0f; - for (int i = 0; i < head_size; i++) { - score += q[i] * k[i]; - } - score /= sqrtf(head_size); - // save the score to the attention buffer - att[t] = score; - } - - // softmax the scores to get attention weights, from 0..pos inclusively - softmax(att, pos + 1); - - // weighted sum of the values, store back into xb - float* xb = s->xb + h * head_size; - memset(xb, 0, head_size * sizeof(float)); - for (int t = 0; t <= pos; t++) { - // get the value vector for this head and at this timestep - float* v = s->value_cache + loff + t * kv_dim + (h / kv_mul) * head_size; - // get the attention weight for this timestep - float a = att[t]; - // accumulate the weighted value into xb - for (int i = 0; i < head_size; i++) { - xb[i] += a * v[i]; - } - } - } - - // final matmul to get the output of the attention - matmul(s->xb2, s->xb, w->wo + l*dim*dim, dim, dim); - - // residual connection back into x - for (int i = 0; i < dim; i++) { - x[i] += s->xb2[i]; - } - - // ffn rmsnorm - rmsnorm(s->xb, x, w->rms_ffn_weight + l*dim, dim); - - // Now for FFN in PyTorch we have: self.w2(F.silu(self.w1(x)) * self.w3(x)) - // first calculate self.w1(x) and self.w3(x) - matmul(s->hb, s->xb, w->w1 + l*dim*hidden_dim, dim, hidden_dim); - matmul(s->hb2, s->xb, w->w3 + l*dim*hidden_dim, dim, hidden_dim); - - // F.silu; silu(x)=x*σ(x),where σ(x) is the logistic sigmoid - for (int i = 0; i < hidden_dim; i++) { - s->hb[i] = s->hb[i] * (1.0f / (1.0f + expf(-s->hb[i]))); - } - - // elementwise multiply with w3(x) - for (int i = 0; i < hidden_dim; i++) { - s->hb[i] = s->hb[i] * s->hb2[i]; - } - - // final matmul to get the output of the ffn - matmul(s->xb, s->hb, w->w2 + l*dim*hidden_dim, hidden_dim, dim); - - // residual connection - for (int i = 0; i < dim; i++) { - x[i] += 
s->xb[i]; - } - } - - // final rmsnorm - rmsnorm(x, x, w->rms_final_weight, dim); - - // classifier into logits - matmul(s->logits, x, w->wcls, p->dim, p->vocab_size); -} - -// ---------------------------------------------------------------------------- -// byte pair encoding (BPE) tokenizer, encodes strings into tokens so we can prompt - -typedef struct { - char *str; - int id; -} TokenIndex; - -int compare_tokens(const void *a, const void *b) { - return strcmp(((TokenIndex*)a)->str, ((TokenIndex*)b)->str); -} - -int str_lookup(char *str, TokenIndex *sorted_vocab, int vocab_size) { - // efficiently find the perfect match for str in vocab, return its index or -1 if not found - TokenIndex tok = { .str = str }; // acts as the key to search for - TokenIndex *res = bsearch(&tok, sorted_vocab, vocab_size, sizeof(TokenIndex), compare_tokens); - return res != NULL ? res->id : -1; -} - -void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, unsigned int max_token_length, int *tokens, int *n_tokens) { - - // sort vocabulary - TokenIndex *sorted_vocab = malloc(vocab_size * sizeof(TokenIndex)); - for (int i = 0; i < vocab_size; i++) { - sorted_vocab[i].str = vocab[i]; - sorted_vocab[i].id = i; - } - qsort(sorted_vocab, vocab_size, sizeof(TokenIndex), compare_tokens); - - // create a temporary buffer that will store merge candidates of always two consecutive tokens - char* str_buffer = malloc((max_token_length*2 +1 +2) * sizeof(char)); // *2 for concat, +1 for null terminator +2 for UTF8 (in case max_token_lenght is 1) - size_t str_len = 0; - - // add_dummy_prefix is true by default - tokens[0] = str_lookup(" ", sorted_vocab, vocab_size); - *n_tokens = 1; // the number of tokens - - // Okay UTF-8 time. This will get messy. Here is the reference from Wikipedia: - // Code point ↔ UTF-8 conversion - // First code point Last code point Byte 1 Byte 2 Byte 3 Byte 4 - // U+0000 U+007F 0xxxxxxx - // U+0080 U+07FF 110xxxxx 10xxxxxx - // U+0800 U+FFFF 1110xxxx 10xxxxxx 10xxxxxx - // U+10000 U+10FFFF 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx - - // process the raw (UTF-8) byte sequence of the input string - for (char *c = text; *c != '\0'; c++) { - - // reset buffer if the current byte is ASCII or a leading byte - // 0xC0 is 11000000, so (*c & 0xC0) keeps the first 2 bits and zeros the rest - // 0x80 is 10000000 - // in UTF-8, all continuation bytes start with "10" in first two bits - // so in English this is: "if this byte is not a continuation byte" - if ((*c & 0xC0) != 0x80) { - // this byte must be either a leading byte (11...) or an ASCII char (0x...) - // => reset our location, as we're starting a new UTF-8 codepoint - str_len = 0; - } - - // append the current byte to the buffer - str_buffer[str_len++] = *c; // ++ is post-increment, incremented after this line - str_buffer[str_len] = '\0'; - - // while the next character is a continuation byte, continue appending - // but if there are too many of them, just stop to avoid overruning str_buffer size. 
- if ((*(c+1) & 0xC0) == 0x80 && str_len < 4) { - continue; - } - - // ok c+1 is not a continuation byte, so we've read in a full codepoint - int id = str_lookup(str_buffer, sorted_vocab, vocab_size); - - if (id != -1) { - // we found this codepoint in vocab, add it as a token - tokens[(*n_tokens)++] = id; - } else { - // byte_fallback encoding: just encode each byte as a token - // +3 is here because the first 3 vocab elements are , , - // so the individual bytes only start at index 3 - for (int i=0; i < str_len; i++) { - tokens[(*n_tokens)++] = (unsigned char)str_buffer[i] + 3; - } - } - str_len = 0; // protect against a sequence of stray UTF8 continuation bytes - } - - // merge the best consecutive pair each iteration, according the scores in vocab_scores - while (1) { - float best_score = -1e10; - int best_id = -1; - int best_idx = -1; - - for (int i=0; i < (*n_tokens-1); i++) { - // check if we can merge the pair (tokens[i], tokens[i+1]) - sprintf(str_buffer, "%s%s", vocab[tokens[i]], vocab[tokens[i+1]]); - int id = str_lookup(str_buffer, sorted_vocab, vocab_size); - if (id != -1 && vocab_scores[id] > best_score) { - // this merge pair exists in vocab! record its score and position - best_score = vocab_scores[id]; - best_id = id; - best_idx = i; - } - } - - if (best_idx == -1) { - break; // we couldn't find any more pairs to merge, so we're done - } - - // merge the consecutive pair (best_idx, best_idx+1) into new token best_id - tokens[best_idx] = best_id; - // delete token at position best_idx+1, shift the entire sequence back 1 - for (int i = best_idx+1; i < (*n_tokens-1); i++) { - tokens[i] = tokens[i+1]; - } - (*n_tokens)--; // token length decreased - } - - free(str_buffer); - free(sorted_vocab); -} - -// ---------------------------------------------------------------------------- -// utilities: time / rng - -long time_in_ms() { - // return time in milliseconds, for benchmarking the model speed - struct timespec time; - clock_gettime(CLOCK_REALTIME, &time); - return time.tv_sec * 1000 + time.tv_nsec / 1000000; -} - -unsigned long long rng_seed; -unsigned int random_u32() { - // xorshift rng: https://en.wikipedia.org/wiki/Xorshift#xorshift.2A - rng_seed ^= rng_seed >> 12; - rng_seed ^= rng_seed << 25; - rng_seed ^= rng_seed >> 27; - return (rng_seed * 0x2545F4914F6CDD1Dull) >> 32; -} -float random_f32() { // random float32 in [0,1) - return (random_u32() >> 8) / 16777216.0f; -} - -// ---------------------------------------------------------------------------- -// sampling can be done in a few ways: greedy argmax, sampling, top-p sampling - -int argmax(float* probabilities, int n) { - // return the index that has the highest probability - int max_i = 0; - float max_p = probabilities[0]; - for (int i = 1; i < n; i++) { - if (probabilities[i] > max_p) { - max_i = i; - max_p = probabilities[i]; - } - } - return max_i; -} - -int sample(float* probabilities, int n) { - // sample index from probabilities (they must sum to 1!) 
- float r = random_f32(); - float cdf = 0.0f; - for (int i = 0; i < n; i++) { - cdf += probabilities[i]; - if (r < cdf) { - return i; - } - } - return n - 1; // in case of rounding errors -} - -int compare(const void* a, const void* b) { - ProbIndex* a_ = (ProbIndex*) a; - ProbIndex* b_ = (ProbIndex*) b; - if (a_->prob > b_->prob) return -1; - if (a_->prob < b_->prob) return 1; - return 0; -} - -int sample_topp(float* probabilities, int n, float topp, ProbIndex* probindex) { - // top-p sampling (or "nucleus sampling") samples from the smallest set of - // tokens that exceed probability topp. This way we never sample tokens that - // have very low probabilities and are less likely to go "off the rails". - - int n0 = 0; - // quicksort indices in descending order of probabilities - // values smaller than (1 - topp) / (n - 1) cannot be part of the result - // so for efficiency we crop these out as candidates before sorting - const float cutoff = (1.0f - topp) / (n - 1); - for (int i = 0; i < n; i++) { - if (probabilities[i] >= cutoff) { - probindex[n0].index = i; - probindex[n0].prob = probabilities[i]; - n0++; - } - } - qsort(probindex, n0, sizeof(ProbIndex), compare); - - // truncate the list where cumulative probability exceeds topp - float cumulative_prob = 0.0f; - int last_idx = n0 - 1; // in case of rounding errors consider all elements - for (int i = 0; i < n0; i++) { - cumulative_prob += probindex[i].prob; - if (cumulative_prob > topp) { - last_idx = i; - break; // we've exceeded topp by including last_idx - } - } - - // sample from the truncated list - float r = random_f32() * cumulative_prob; - float cdf = 0.0f; - for (int i = 0; i <= last_idx; i++) { - cdf += probindex[i].prob; - if (r < cdf) { - return probindex[i].index; - } - } - return probindex[last_idx].index; // in case of rounding errors -} - - -// ---------------------------------------------------------------------------- -// int main - -void error_usage() { - fprintf(stderr, "Usage: run [options]\n"); - fprintf(stderr, "Example: run model.bin -n 256 -i \"Once upon a time\"\n"); - fprintf(stderr, "Options:\n"); - fprintf(stderr, " -t temperature, default 1.0\n"); - fprintf(stderr, " -p p value in top-p (nucleus) sampling. default 0.9\n"); - fprintf(stderr, " -s random seed, default time(NULL)\n"); - fprintf(stderr, " -n number of steps to run for, default 256. 0 = max_seq_len\n"); - fprintf(stderr, " -i input prompt\n"); - fprintf(stderr, " -z optional path to custom tokenizer\n"); - exit(EXIT_FAILURE); -} - -int main(int argc, char *argv[]) { - - // default inits - char *checkpoint = NULL; // e.g. out/model.bin - char *tokenizer = "tokenizer.bin"; - float temperature = 1.0f; // 0.0 = greedy deterministic. 1.0 = original. don't set higher - float topp = 0.9f; // top-p in nucleus sampling. 1.0 = off. 
0.9 works well, but slower - rng_seed = 0; // seed rng with time by default - int steps = 256; // number of steps to run for - char *prompt = NULL; // prompt string - - // poor man's C argparse so we can override the defaults above from the command line - if (argc >= 2) { checkpoint = argv[1]; } else { error_usage(); } - for (int i = 2; i < argc; i+=2) { - // do some basic validation - if (i + 1 >= argc) { error_usage(); } // must have arg after flag - if (argv[i][0] != '-') { error_usage(); } // must start with dash - if (strlen(argv[i]) != 2) { error_usage(); } // must be -x (one dash, one letter) - // read in the args - if (argv[i][1] == 't') { temperature = atof(argv[i + 1]); } - else if (argv[i][1] == 'p') { topp = atof(argv[i + 1]); } - else if (argv[i][1] == 's') { rng_seed = atoi(argv[i + 1]); } - else if (argv[i][1] == 'n') { steps = atoi(argv[i + 1]); } - else if (argv[i][1] == 'i') { prompt = argv[i + 1]; } - else if (argv[i][1] == 'z') { tokenizer = argv[i + 1]; } - else { error_usage(); } - } - if(rng_seed == 0) { rng_seed = (unsigned int)time(NULL);} - - // read in the model.bin file - Config config; - TransformerWeights weights; - int fd = 0; // file descriptor for memory mapping - float* data = NULL; // memory mapped data pointer - ssize_t file_size; // size of the checkpoint file in bytes - { - FILE *file = fopen(checkpoint, "rb"); - if (!file) { fprintf(stderr, "Couldn't open file %s\n", checkpoint); return 1; } - // read in the config header - if (fread(&config, sizeof(Config), 1, file) != 1) { return 1; } - // negative vocab size is hacky way of signaling unshared weights. bit yikes. - int shared_weights = config.vocab_size > 0 ? 1 : 0; - config.vocab_size = abs(config.vocab_size); - // figure out the file size - fseek(file, 0, SEEK_END); // move file pointer to end of file - file_size = ftell(file); // get the file size, in bytes - fclose(file); - // memory map the Transformer weights into the data pointer - fd = open(checkpoint, O_RDONLY); // open in read only mode - if (fd == -1) { fprintf(stderr, "open failed!\n"); return 1; } - data = mmap(NULL, file_size, PROT_READ, MAP_PRIVATE, fd, 0); - if (data == MAP_FAILED) { fprintf(stderr, "mmap failed!\n"); return 1; } - float* weights_ptr = data + sizeof(Config)/sizeof(float); - checkpoint_init_weights(&weights, &config, weights_ptr, shared_weights); - } - // right now we cannot run for more than config.seq_len steps - if (steps <= 0 || steps > config.seq_len) { steps = config.seq_len; } - - // read in the tokenizer .bin file - char** vocab = (char**)malloc(config.vocab_size * sizeof(char*)); - float* vocab_scores = (float*)malloc(config.vocab_size * sizeof(float)); - unsigned int max_token_length; - { - FILE *file = fopen(tokenizer, "rb"); - if (!file) { fprintf(stderr, "couldn't load %s\n", tokenizer); return 1; } - if (fread(&max_token_length, sizeof(int), 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1; } - int len; - for (int i = 0; i < config.vocab_size; i++) { - if (fread(vocab_scores + i, sizeof(float), 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1;} - if (fread(&len, sizeof(int), 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1; } - vocab[i] = (char *)malloc(len + 1); - if (fread(vocab[i], len, 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1; } - vocab[i][len] = '\0'; // add the string terminating token - } - fclose(file); - } - - // create and init the application RunState - RunState state; - malloc_run_state(&state, &config); - - // process the prompt, if any - 
int *prompt_tokens = NULL; - int num_prompt_tokens = 0; - if (prompt != NULL) { - prompt_tokens = (int*)malloc((strlen(prompt)+1) * sizeof(int)); - bpe_encode(prompt, vocab, vocab_scores, config.vocab_size, max_token_length, prompt_tokens, &num_prompt_tokens); - } - - // start the main loop - long start = 0; // used to time our code, only initialized after first iteration - int next; // will store the next token in the sequence - int token = 1; // init with token 1 (=BOS), as done in Llama-2 sentencepiece tokenizer - int pos = 0; // position in the sequence - while (pos < steps) { - - // forward the transformer to get logits for the next token - transformer(token, pos, &config, &state, &weights); - - // advance the state state machine - if(pos < num_prompt_tokens) { - // if we are still processing the input prompt, force the next prompt token - next = prompt_tokens[pos]; - } else { - // sample the next token - if (temperature == 0.0f) { - // greedy argmax sampling: take the token with the highest probability - next = argmax(state.logits, config.vocab_size); - } else { - // apply the temperature to the logits - for (int q=0; q= 1) { - // simply sample from the predicted probability distribution - next = sample(state.logits, config.vocab_size); - } else { - // top-p (nucleus) sampling, clamping the least likely tokens to zero - next = sample_topp(state.logits, config.vocab_size, topp, state.probindex); - } - } - } - pos++; - - // data-dependent terminating condition: the BOS (1) token delimits sequences - if (next == 1) { break; } - - // following BOS (1) token, sentencepiece decoder strips any leading whitespace (see PR #89) - char *token_str = (token == 1 && vocab[next][0] == ' ') ? vocab[next]+1 : vocab[next]; - // careful, some tokens designate raw bytes, and look like e.g. '<0x01>' - unsigned char byte_val; - if (sscanf(token_str, "<0x%02hhX>", &byte_val) == 1) { - // ok this token is a raw byte token, carefuly to only print printable chars or whitespace - // some of the other bytes can be various control codes, backspace, etc. 
=> skip - if (isprint(byte_val) || isspace(byte_val)) { - char byte_piece[2]; - byte_piece[0] = byte_val; - byte_piece[1] = '\0'; - printf("%s", byte_piece); - } - } else { - printf("%s", token_str); - } - fflush(stdout); - token = next; - - // init the timer here because the first iteration can be slower - if (start == 0) { start = time_in_ms(); } - } - printf("\n"); - - // report achieved tok/s (pos-1 because the timer starts after first iteration) - if (pos > 1) { - long end = time_in_ms(); - fprintf(stderr, "achieved tok/s: %f\n", (pos-1) / (double)(end-start)*1000); - } - - // memory and file handles cleanup - free_run_state(&state); - for (int i = 0; i < config.vocab_size; i++) { free(vocab[i]); } - free(vocab); - free(vocab_scores); - if (prompt_tokens != NULL) free(prompt_tokens); - if (data != MAP_FAILED) munmap(data, file_size); - if (fd != -1) close(fd); - return 0; -} diff --git a/run.dart b/run.dart new file mode 100644 index 0000000..7b7be5f --- /dev/null +++ b/run.dart @@ -0,0 +1,799 @@ +import 'dart:convert'; +import 'dart:developer'; +import 'dart:io'; +import 'dart:math'; +import 'dart:typed_data'; + +import 'package:args/args.dart'; + +class Config { + // transformer dimension + late int dim; + // for ffn layers + late int hidden_dim; + // number of layers + late int n_layers; + // number of query heads + late int n_heads; + // number of key/value heads (can be < query heads because of multiquery) + late int n_kv_heads; + // vocabulary size, usually 256 (byte-level) + late int vocab_size; + // max sequence length + late int seq_len; + + @override + String toString() { + return "Config(dim: $dim, hidden_dim: $hidden_dim, n_layers: $n_layers, n_heads: $n_heads, n_kv_heads: $n_kv_heads, vocab_size: $vocab_size, seq_len: $seq_len)"; + } +} + +const configByteSize = 7 * 4; + +//We are using 32 bit percision floats here +class TransformerWeights { + // token embedding table + late Float32List token_embedding_table; // (vocab_size, dim) + // weights for rmsnorms + late Float32List rms_att_weight; // (layer, dim) rmsnorm weights + late Float32List rms_ffn_weight; // (layer, dim) + // weights for matmuls. 
note dim == n_heads * head_size + late Float32List wq; // (layer, dim, n_heads * head_size) + late Float32List wk; // (layer, dim, n_kv_heads * head_size) + late Float32List wv; // (layer, dim, n_kv_heads * head_size) + late Float32List wo; // (layer, n_heads * head_size, dim) + // weights for ffn + late Float32List w1; // (layer, hidden_dim, dim) + late Float32List w2; // (layer, dim, hidden_dim) + late Float32List w3; // (layer, hidden_dim, dim) + // final rmsnorm + late Float32List rms_final_weight; // (dim,) + // freq_cis for RoPE relatively positional embeddings + late Float32List freq_cis_real; // (seq_len, head_size/2) + late Float32List freq_cis_imag; // (seq_len, head_size/2) + // (optional) classifier weights for the logits, on the last layer + late Float32List wcls; +} + +class ProbIndex { + double prob; + int index; + ProbIndex(this.prob, this.index); +} + +class TokenIndex { + String str; + int id; + TokenIndex(this.str, this.id); +} + +class RunState { + // current wave of activations + late Float32List x; // activation at current time stamp (dim,) + late Float32List xb; // same, but inside a residual branch (dim,) + late Float32List xb2; // an additional buffer just for convenience (dim,) + late Float32List hb; // buffer for hidden dimension in the ffn (hidden_dim,) + late Float32List hb2; // buffer for hidden dimension in the ffn (hidden_dim,) + late Float32List q; // query (dim,) + late Float32List k; // key (dim,) + late Float32List v; // value (dim,) + late Float32List att; // buffer for scores/attention values (n_heads, seq_len) + late Float32List logits; // output logits + late List probindex; // buffer used in top-p sampling + // kv cache + late Float32List key_cache; // (layer, seq_len, dim) + late Float32List value_cache; // (layer, seq_len, dim) +} + +initialize_run_state(RunState s, Config config) { + // we calloc instead of malloc to keep valgrind happy + int kv_dim = (config.dim * config.n_kv_heads) ~/ config.n_heads; + s.x = Float32List(config.dim); + s.xb = Float32List(config.dim); + s.xb2 = Float32List(config.dim); + s.hb = Float32List(config.hidden_dim); + s.hb2 = Float32List(config.hidden_dim); + s.q = Float32List(config.dim); + s.k = Float32List(kv_dim); + s.v = Float32List(kv_dim); + s.att = Float32List(config.n_heads * config.seq_len); + s.logits = Float32List(config.vocab_size); + s.probindex = []; + s.key_cache = Float32List(config.n_layers * config.seq_len * kv_dim); + s.value_cache = Float32List(config.n_layers * config.seq_len * kv_dim); +} + +class Tokenizer { + List vocab; + List vocab_scores; + Tokenizer( + this.vocab, + this.vocab_scores, + ); + + bpe_encode(String text, List tokens, int n_tokens) { + tokens = []; + + // First pass, combine raw tokens + text.runes.forEach((element) { + String decoded = utf8.decode([element]); + if (vocab.contains(decoded)) { + tokens.add(vocab.indexOf(decoded)); + } + }); + + // Second pass, combine bpe tokens + while (true) { + double best_score = -1e10; + int best_id = -1; + int best_index = -1; + + for (int i = 0; i < tokens.length - 1; i++) { + String newStr = vocab[tokens[i]] + vocab[tokens[i + 1]]; + int newStrIndex = vocab.indexOf(newStr); + if (newStrIndex != -1 && vocab_scores[newStrIndex] > best_score) { + best_score = vocab_scores[newStrIndex]; + best_id = newStrIndex; + best_index = i; + } + } + + if (best_index == -1) break; + + tokens[best_index] = best_id; + tokens.removeAt(best_index + 1); + } + return tokens; + } +} + +// 
---------------------------------------------------------------------------- +// sampling can be done in a few ways: greedy argmax, sampling, top-p sampling + +int argmax(Float32List probabilities) { + // return the index that has the highest probability + int max_i = 0; + double max_p = probabilities[0]; + for (int i = 1; i < probabilities.length; i++) { + if (probabilities[i] > max_p) { + max_i = i; + max_p = probabilities[i]; + } + } + return max_i; +} + +int sample(Float32List probabilities) { + // sample index from probabilities (they must sum to 1!) + double r = Random().nextDouble(); + double cdf = 0.0; + for (int i = 0; i < probabilities.length; i++) { + cdf += probabilities[i]; + if (r < cdf) return i; + } + return probabilities.length - 1; // in case of rounding errors +} + +int sample_topp(Float32List probabilities, double topp) { + // top-p sampling (or "nucleus sampling") samples from the smallest set of + // tokens that exceed probability topp. This way we never sample tokens that + // have very low probabilities and are less likely to go "off the rails". + + // quicksort indices in descending order of probabilities + // values smaller than (1 - topp) / (n - 1) cannot be part of the result + // In the original llama.c they crop these out as candidates before sorting + List probindex = []; + + double cutoff = (1.0 - topp) / (probabilities.length - 1); + + for (int i = 0; i < probabilities.length; i++) { + if (probabilities[i] >= cutoff) { + probindex.add(ProbIndex(probabilities[i], i)); + } + } + + probindex.sort((a, b) => b.prob.compareTo(a.prob)); + + // truncate the list where cumulative probability exceeds topp + double cumulative_prob = 0.0; + int last_idx = + probindex.length - 1; // in case of rounding errors consider all elements + for (int i = 0; i < probindex.length; i++) { + cumulative_prob += probindex[i].prob; + if (cumulative_prob > topp) { + last_idx = i; + break; // we've exceeded topp by including last_idx + } + } + + probindex.removeRange(last_idx + 1, probindex.length); + + // sample from the truncated list + double r = new Random().nextDouble() * cumulative_prob; + double cdf = 0.0; + for (int i = 0; i <= last_idx; i++) { + cdf += probindex[i].prob; + if (r < cdf) { + return probindex[i].index; + } + } + return probindex[last_idx].index; // in case of rounding errors +} + +rmsnorm(Float32List out, Float32List x, Float32List weight) { + assert(out.length == x.length); + assert(x.length == weight.length); + // calculate sum of squares + double ss = 0.0; + x.forEach((element) { + ss += element * element; + }); + ss /= x.length; + ss += 1e-5; + ss = 1.0 / sqrt(ss); // sqr mean sum of squares + + // normalize and scale + for (int j = 0; j < x.length; j++) { + out[j] = weight[j] * (ss * x[j]); + } +} + +void softmax(Float32List x, int size) { + // find max value (for numerical stability) + double max_val = x[0]; + for (int i = 1; i < size; i++) { + if (x[i] > max_val) { + max_val = x[i]; + } + } + // exp and sum + double sum = 0.0; + for (int i = 0; i < size; i++) { + x[i] = exp(x[i] - max_val); + sum += x[i]; + } + // normalize + for (int i = 0; i < size; i++) x[i] /= sum; +} + +void matmul(Float32List out, Float32List x, Float32List w, int n, int d) { + assert(out.length == d); + assert(x.length == n); + assert(w.length == n * d); + + // W (d,n) @ x (n,) -> xout (d,) + // by far the most amount of time is spent inside this little function + for (int i = 0; i < d; i++) { + double val = 0.0; + for (int j = 0; j < n; j++) { + val += w[i * n + j] * x[j]; + } + 
out[i] = val;
+  }
+}
+
+transformer(int token, int pos, Config config, RunState state,
+    TransformerWeights weights) {
+  int dim = config.dim;
+  int kv_dim = config.dim * config.n_kv_heads ~/ config.n_heads;
+  int kv_mul = config.n_heads ~/
+      config.n_kv_heads; // integer multiplier of the kv sharing in multiquery
+  int hidden_dim = config.hidden_dim;
+  int head_size = config.dim ~/ config.n_heads;
+
+  // copy the token embedding into x
+  Float32List current_row = Float32List.sublistView(
+      weights.token_embedding_table,
+      token * config.dim,
+      (token + 1) * config.dim);
+  for (int i = 0; i < config.dim; i++) state.x[i] = current_row[i];
+
+  // Note: divide by 2 here because RoPE parameters repeat after every 2 dimensions
+  Float32List freq_cis_real_row = weights.freq_cis_real
+      .sublist(pos * head_size ~/ 2, (pos + 1) * head_size ~/ 2);
+  Float32List freq_cis_imag_row = weights.freq_cis_imag
+      .sublist(pos * head_size ~/ 2, (pos + 1) * head_size ~/ 2);
+
+  // forward all the layers
+  for (int l = 0; l < config.n_layers; l++) {
+    rmsnorm(
+        state.xb,
+        state.x,
+        Float32List.sublistView(
+            weights.rms_att_weight, l * dim, (l + 1) * dim));
+
+    // qkv matmuls for this position
+    // NOTE:yiming This looks like a place for lots of parallel work :thinking:
+    // x = x @ wq, wq with dim * dim
+    matmul(
+        state.q,
+        state.xb,
+        Float32List.sublistView(weights.wq, l * dim * dim, (l + 1) * dim * dim),
+        dim,
+        dim);
+
+    // x = x @ wk, wk with dim * kv_dim
+    matmul(
+        state.k,
+        state.xb,
+        Float32List.sublistView(
+            weights.wk, l * dim * kv_dim, (l + 1) * dim * kv_dim),
+        dim,
+        kv_dim);
+
+    // x = x @ wv, wv with dim * kv_dim
+    matmul(
+        state.v,
+        state.xb,
+        Float32List.sublistView(
+            weights.wv, l * dim * kv_dim, (l + 1) * dim * kv_dim),
+        dim,
+        kv_dim);
+
+    // RoPE relative positional encoding: complex-valued rotate q and k by freq_cis in each head
+    // https://arxiv.org/pdf/2104.09864v4.pdf
+    // We are just reusing the loop for the k and q rotation
+    for (int v = 0; v < 2; v++) {
+      Float32List vec =
+          v == 0 ? state.q : state.k; // the vector to rotate (query or key)
+      int vec_size = v == 0 ? dim : kv_dim; // the size of the vector
+
+      // We are only rotating in a group of 2
+      for (int i = 0; i < vec_size; i += 2) {
+        double v0 = vec[i];
+        double v1 = vec[i + 1];
+        double fcr = freq_cis_real_row[(i % head_size) ~/ 2];
+        double fci = freq_cis_imag_row[(i % head_size) ~/ 2];
+        // See the RoPE paper for this section
+        // 3.4.2 Computational efficient realization of rotary matrix multiplication
+        // x1 = x1 cos mθ_1 - x2 sin mθ_1
+        vec[i] = v0 * fcr - v1 * fci;
+        // x2 = x1 sin mθ_1 + x2 cos mθ_1
+        vec[i + 1] = v0 * fci + v1 * fcr;
+      }
+    }
+
+    // save key,value at this time step (pos) to our kv cache
+    // offset by n_layer * seq_len * kv_dim
+    int loff =
+        l * config.seq_len * kv_dim; // kv cache layer offset for convenience
+    // key cache = loff + pos * kv_dim
+    int key_cache_row_offset = loff + pos * kv_dim;
+    // save k,v into kv cache
+    for (int i = 0; i < state.k.length; i++)
+      state.key_cache[key_cache_row_offset + i] = state.k[i];
+
+    for (int i = 0; i < state.v.length; i++)
+      state.value_cache[key_cache_row_offset + i] = state.v[i];
+
+    // multihead attention.
iterate over all heads + for (int h = 0; h < config.n_heads; h++) { + // get the query vector for this head + Float32List q = + Float32List.sublistView(state.q, h * head_size, (h + 1) * head_size); + // attention scores for this head + Float32List att = Float32List.sublistView( + state.att, h * config.seq_len, (h + 1) * config.seq_len); + // iterate over all timesteps, including the current one + for (int t = 0; t <= pos; t++) { + // get the key vector for this head and at this timestep + // kv_mul is just 1 now + int key_cache_offset = loff + + t * kv_dim + + (h ~/ kv_mul) * + head_size; // it's still offset by head size kv_dim = head_size * h! + // but sometimes multiple head can share a key_cache + Float32List k = Float32List.sublistView( + state.key_cache, key_cache_offset, key_cache_offset + kv_dim); + // calculate the attention score as the dot product of q and k + double score = 0.0; + for (int ll = 0; ll < head_size; ll++) { + score += q[ll] * k[ll]; + } + // TODO(yiming): reread the paper to understand better + score /= sqrt(head_size); + // save the score to the attention buffer + att[t] = score; + } + + // softmax the scores to get attention weights, from 0..pos inclusively + // soft max happens before attention * v + // softmax is done on the entire attention + // I think there's some trick in pytorch for this + softmax(att, pos + 1); + + // Now we have calculated the weighted attention vector, it's time to apply attention value + // weighted sum of the values, store back into xb + // Clear out xb for the next stage + for (int i = 0; i < head_size; i++) { + state.xb[h * head_size + i] = 0.0; + } + + Float32List xb_off = + Float32List.sublistView(state.xb, h * head_size, (h + 1) * head_size); + for (int t = 0; t <= pos; t++) { + // get the value vector for this head and at this timestep + int v_cache_offset = loff + t * kv_dim + (h ~/ kv_mul) * head_size; + Float32List v = Float32List.sublistView( + state.value_cache, v_cache_offset, v_cache_offset + head_size); + // get the attention weight for this timestep + double a = att[t]; + // accumulate the weighted value into xb + for (int i = 0; i < head_size; i++) { + xb_off[i] += a * v[i]; + } + } + } + + // final matmul to get the output of the attention + // The "Aggregate output" of all the attention heads + matmul( + state.xb2, + state.xb, + Float32List.sublistView(weights.wo, l * dim * dim, (l + 1) * dim * dim), + dim, + dim); + + // residual connection back into x + for (int i = 0; i < dim; i++) { + state.x[i] += state.xb2[i]; + } + + // ffn rmsnorm + rmsnorm( + state.xb, + state.x, + Float32List.sublistView( + weights.rms_ffn_weight, l * dim, (l + 1) * dim)); + + // Now for FFN in PyTorch we have: self.w2(F.silu(self.w1(x)) * self.w3(x)) + // first calculate self.w1(x) and self.w3(x) + matmul( + state.hb, + state.xb, + Float32List.sublistView( + weights.w1, (l * dim * hidden_dim), (l + 1) * dim * hidden_dim), + dim, + hidden_dim); + + matmul( + state.hb2, + state.xb, + Float32List.sublistView( + weights.w3, (l * dim * hidden_dim), (l + 1) * dim * hidden_dim), + dim, + hidden_dim); + + // F.silu; silu(x)=x*σ(x),where σ(x) is the logistic sigmoid + for (int i = 0; i < hidden_dim; i++) { + state.hb[i] = state.hb[i] * (1.0 / (1.0 + exp(-state.hb[i]))); + } + + // elementwise multiply with w3(x) + // F.silu(self.w1(x)) * self.w3(x) + for (int i = 0; i < hidden_dim; i++) { + state.hb[i] = state.hb[i] * state.hb2[i]; + } + + // final matmul to get the output of the ffn + // here we are reusing xb again! 
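+    // (w2 maps the hidden_dim activation back down to dim before the residual add)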
+ // x = self.w2(F.silu(self.w1(x)) * self.w3(x)) + matmul( + state.xb, + state.hb, + Float32List.sublistView( + weights.w2, l * dim * hidden_dim, (l + 1) * dim * hidden_dim), + hidden_dim, + dim); + + // residual connection + for (int i = 0; i < dim; i++) { + state.x[i] += state.xb[i]; + } + } + + // final rmsnorm + rmsnorm(state.x, state.x, weights.rms_final_weight); + + // classifier into logits + matmul(state.logits, state.x, weights.wcls, config.dim, config.vocab_size); +} + +void main(List args) { + String? checkpoint_path = "./stories15M.bin"; + String tokenizer_path = "tokenizer.bin"; + double temperature = 1.0; + double top_p = 0.9; + int rng_seed = 0; // seed rng with time by default + int steps = 256; // number of steps to run for + String? prompt = " One"; + + var parser = ArgParser(); + parser.addOption( + 'checkpoint_path', + abbr: 'c', + callback: (value) => checkpoint_path = value, + ); + parser.addOption('temp', + abbr: 't', + callback: (value) => + {if (value != null) temperature = double.parse(value)}, + defaultsTo: "1.0"); + parser.addOption('topp', + abbr: 'p', + callback: (value) => {if (value != null) top_p = double.parse(value)}, + defaultsTo: "0.9"); + parser.addOption('seed', + abbr: 's', + callback: (value) => {if (value != null) rng_seed = int.parse(value)}, + defaultsTo: "0"); + parser.addOption('steps', + abbr: 'n', + callback: (value) => {if (value != null) steps = int.parse(value)}, + defaultsTo: "256"); + parser.addOption('prompt', + abbr: 'i', + callback: (value) => {if (value != null) prompt = value}, + defaultsTo: ""); + parser.addOption('tokenizer_path', + abbr: 'z', + callback: (value) => {if (value != null) tokenizer_path = value}); + + parser.parse(args); + + if (rng_seed == 0) rng_seed = Timeline.now; + + print("===========llama2.dart==========="); + print("check_point_path: $checkpoint_path"); + print("tokenizer_path: $tokenizer_path"); + print("temperature: $temperature"); + print("top_p: $top_p"); + print("rng_seed: $rng_seed"); + print("steps: $steps"); + print("prompt: $prompt"); + + var config = Config(); + var weights = TransformerWeights(); + + if (checkpoint_path == null) return print("No checkpoint path provided"); + + print("========= Reading Weights ========="); + + // Read Weights and Config from file + { + Uint8List checkpoint_bytes = File(checkpoint_path!).readAsBytesSync(); + print("Read ${checkpoint_bytes.length} bytes from $checkpoint_path"); + + { + // Reading Config + Uint8List config_bytes = checkpoint_bytes.sublist(0, configByteSize); + Int32List config_ints = config_bytes.buffer.asInt32List(); + config.dim = config_ints[0]; + config.hidden_dim = config_ints[1]; + config.n_layers = config_ints[2]; + config.n_heads = config_ints[3]; + config.n_kv_heads = config_ints[4]; + config.vocab_size = config_ints[5]; + config.seq_len = config_ints[6]; + print("Read Config: $config"); + } + + { + bool shared_weights = config.vocab_size > 0; + // negative vocab size is hacky way of signaling unshared weights. bit yikes. 
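+      // (a negative vocab_size in the header flags that separate classifier weights (wcls)
+      // follow at the end of the weight blob; otherwise wcls just aliases token_embedding_table)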
+ config.vocab_size = config.vocab_size.abs(); + // Load the weights + int offset = 0; + Float32List weight_floats = + checkpoint_bytes.buffer.asFloat32List(configByteSize); + + int head_size = config.dim ~/ config.n_heads; + weights.token_embedding_table = weight_floats.sublist( + offset, offset + config.vocab_size * config.dim); + offset += config.vocab_size * config.dim; + print( + "Read ${weights.token_embedding_table.lengthInBytes} bytes into token_embedding_table"); + + weights.rms_att_weight = + weight_floats.sublist(offset, offset + config.n_layers * config.dim); + offset += config.n_layers * config.dim; + print( + "Read ${weights.rms_att_weight.lengthInBytes} bytes into rms_att_weight"); + + weights.wq = weight_floats.sublist(offset, + offset + config.n_layers * config.dim * config.n_heads * head_size); + offset += config.n_layers * config.dim * config.n_heads * head_size; + print("Read ${weights.wq.lengthInBytes} bytes into wq"); + + weights.wk = weight_floats.sublist( + offset, + offset + + config.n_layers * config.dim * config.n_kv_heads * head_size); + offset += config.n_layers * config.dim * config.n_kv_heads * head_size; + print("Read ${weights.wk.lengthInBytes} bytes into wk"); + + weights.wv = weight_floats.sublist( + offset, + offset + + config.n_layers * config.dim * config.n_kv_heads * head_size); + offset += config.n_layers * config.dim * config.n_kv_heads * head_size; + print("Read ${weights.wv.lengthInBytes} bytes into wv"); + + weights.wo = weight_floats.sublist(offset, + offset + config.n_layers * config.n_heads * head_size * config.dim); + offset += config.n_layers * config.n_heads * head_size * config.dim; + print("Read ${weights.wo.lengthInBytes} bytes into wo"); + + weights.rms_ffn_weight = + weight_floats.sublist(offset, offset + config.n_layers * config.dim); + offset += config.n_layers * config.dim; + print( + "Read ${weights.rms_ffn_weight.lengthInBytes} bytes into rms_ffn_weight"); + + weights.w1 = weight_floats.sublist( + offset, offset + config.n_layers * config.hidden_dim * config.dim); + offset += config.n_layers * config.hidden_dim * config.dim; + print("Read ${weights.w1.lengthInBytes} bytes into w1"); + + weights.w2 = weight_floats.sublist( + offset, offset + config.n_layers * config.dim * config.hidden_dim); + offset += config.n_layers * config.dim * config.hidden_dim; + print("Read ${weights.w2.lengthInBytes} bytes into w2"); + + weights.w3 = weight_floats.sublist( + offset, offset + config.n_layers * config.hidden_dim * config.dim); + offset += config.n_layers * config.hidden_dim * config.dim; + print("Read ${weights.w3.lengthInBytes} bytes into w3"); + + weights.rms_final_weight = + weight_floats.sublist(offset, offset + config.dim); + offset += config.dim; + print( + "Read ${weights.rms_final_weight.lengthInBytes} bytes into rms_final_weight"); + + weights.freq_cis_real = weight_floats.sublist( + offset, offset + config.seq_len * head_size ~/ 2); + offset += config.seq_len * head_size ~/ 2; + print( + "Read ${weights.freq_cis_real.lengthInBytes} bytes into freq_cis_real"); + + weights.freq_cis_imag = weight_floats.sublist( + offset, offset + config.seq_len * head_size ~/ 2); + offset += config.seq_len * head_size ~/ 2; + print( + "Read ${weights.freq_cis_imag.lengthInBytes} bytes into freq_cis_imag"); + + if (shared_weights) { + print("Read shared weights into wcls"); + weights.wcls = weights.token_embedding_table; + } else { + weights.wcls = weight_floats.sublist( + offset, offset + config.vocab_size * config.dim); + offset += config.dim; + 
print("Read ${weights.wcls.lengthInBytes} bytes into wcls"); + } + } + } + + // clamp number of steps to supported range + if (steps <= 0 || steps > config.seq_len) { + steps = config.seq_len; + } + + // read in the tokenizer .bin file + List vocab = new List.filled( + config.vocab_size, new Uint8List(0)); // config.vocab_size; + Float32List vocab_scores = new Float32List(config.vocab_size); + { + ByteData tokenizer_bytes = + File(tokenizer_path).readAsBytesSync().buffer.asByteData(0); + int offset = 0; + // Not being used but read anyways + int max_token_length = tokenizer_bytes.getUint32(offset, Endian.little); + offset += 4; + int next_str_length = 0; + for (int i = 0; i < config.vocab_size; i++) { + double score = tokenizer_bytes.getFloat32(offset, Endian.little); + offset += 4; + next_str_length = tokenizer_bytes.getUint32(offset, Endian.little); + offset += 4; + Uint8List next_chunk = + tokenizer_bytes.buffer.asUint8List(offset, next_str_length); + vocab_scores[i] = score; + offset += next_str_length; + vocab[i] = next_chunk; + } + } + + print("=====beginning generation====="); + + Tokenizer tokenizer; + tokenizer = + Tokenizer(vocab.map((e) => utf8.decode(e)).toList(), vocab_scores); + + // process the prompt, if any + List prompt_tokens = []; + int num_prompt_tokens = 0; + if (prompt != null) { + prompt_tokens = + tokenizer.bpe_encode(prompt!, prompt_tokens, num_prompt_tokens); + } + + RunState state = RunState(); + + initialize_run_state(state, config); + // Finally! the main loop + // used to time our code, only initialized after first iteration + int start = 0; + int next; // will store the next token in the sequence + // init with token 1 (=BOS), as done in Llama-2 sentencepiece tokenizer + int token = 1; + int pos = 0; // position in the sequence + + while (pos < steps) { + // transformer! Run the model + transformer(token, pos, config, state, weights); + + // advance the state state machine + if (pos < prompt_tokens.length) { + // if we are still processing the input prompt, force the next prompt token + next = prompt_tokens[pos]; + } else { + // sample the next token + if (temperature == 0.0) { + // greedy argmax sampling: take the token with the highest probability + next = argmax(state.logits); + } else { + // apply the temperature to the logits + for (int q = 0; q < config.vocab_size; q++) { + state.logits[q] /= temperature; + } + // apply softmax to the logits to get the probabilities for next token + softmax(state.logits, state.logits.length); + + // we sample from this distribution to get the next token + if (top_p <= 0 || top_p >= 1) { + // simply sample from the predicted probability distribution + next = sample(state.logits); + } else { + // top-p (nucleus) sampling, clamping the least likely tokens to zero + next = sample_topp(state.logits, top_p); + } + } + } + pos++; + + // data-dependent terminating condition: the BOS (1) token delimits sequences + if (next == 1) { + break; + } + + // following BOS (1) token, sentencepiece decoder strips any leading whitespace (see PR #89) + Uint8List token_str = + (token == 1 && (vocab[next][0] == ' ')) ? vocab[next + 1] : vocab[next]; + + // careful, some tokens designate raw bytes, and look like e.g. '<0x01>' + String str; + str = utf8.decode(token_str); + + // In the original llama2.c they check for a lot of special tokens, but I've only seen this token really being used + // Being a little lazy here Hehe. 
+ if (str == "<0x0A>") { + str = "\n"; + } + stdout.write("$str"); + token = next; + + // init the timer here because the first iteration can be slower + if (start == 0) { + start = DateTime.now().millisecondsSinceEpoch; + } + } + stdout.write("\n"); + + // report achieved tok/s (pos-1 because the timer starts after first iteration) + if (pos > 1) { + int end = DateTime.now().millisecondsSinceEpoch; + print("achieved tok/s: ${(pos - 1) / (end - start) * 1000} \n"); + } +} diff --git a/test_all.py b/test_all.py deleted file mode 100644 index a4d0976..0000000 --- a/test_all.py +++ /dev/null @@ -1,89 +0,0 @@ -""" -Run simply with -$ pytest -""" -import os -import pytest # pip install pytest -import requests -import subprocess - - -import torch -from model import ModelArgs, Transformer -from tokenizer import Tokenizer - -# ----------------------------------------------------------------------------- -# test utilities - -test_ckpt_dir = "test" - -def download_file(url, filename): - print(f"Downloading {url} to {filename}") - response = requests.get(url, stream=True) - response.raise_for_status() # Raise an HTTPError on bad status code - with open(filename, 'wb') as file: - for chunk in response.iter_content(chunk_size=8192): - file.write(chunk) - -def attempt_download_files(): - os.makedirs(test_ckpt_dir, exist_ok=True) - root_url = "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories260K" - need = ["stories260K.bin", "stories260K.pt", "tok512.bin", "tok512.model"] - for file in need: - url = root_url + '/' + file #os.path.join inserts \\ on windows - filename = os.path.join(test_ckpt_dir, file) - if not os.path.exists(filename): - download_file(url, filename) - -expected_stdout = b'Once upon a time, there was a little girl named Lily. She loved to play outside in the park. One day, she saw a big, red ball. She wanted to play with it, but it was too high.\nLily\'s mom said, "Lily, let\'s go to the park." Lily was sad and didn\'t know what to do. She said, "I want to play with your ball, but I can\'t find it."\nLily was sad and didn\'t know what to do. She said, "I\'m sorry, Lily. 
I didn\'t know what to do."\nLily didn\'t want to help her mom, so she' - -# ----------------------------------------------------------------------------- -# actual tests - -def test_runc(): - """ Forwards a model against a known-good desired outcome in run.c for 200 steps""" - attempt_download_files() - - model_path = os.path.join(test_ckpt_dir, "stories260K.bin") - tokenizer_path = os.path.join(test_ckpt_dir, "tok512.bin") - command = ["./run", model_path, "-z", tokenizer_path, "-t", "0.0", "-n", "200"] - with open('err.txt', mode='wb') as fe: - with open('stdout.txt', mode='wb') as fo: - proc = subprocess.Popen(command, stdout=fo, stderr=fe) #pipe in windows terminal does funny things like replacing \n with \r\n - proc.wait() - - with open('stdout.txt', mode='r') as f: - stdout = f.read() - # strip the very last \n that is added by run.c for aesthetic reasons - stdout = stdout[:-1].encode('ascii') - - assert stdout == expected_stdout - -def test_python(): - """ Forwards a model against a known-good desired outcome in sample.py for 200 steps""" - attempt_download_files() - - device = "cpu" # stories260K is small enough to just breeze through it on CPU - checkpoint = os.path.join(test_ckpt_dir, "stories260K.pt") - checkpoint_dict = torch.load(checkpoint, map_location=device) - gptconf = ModelArgs(**checkpoint_dict['model_args']) - model = Transformer(gptconf) - state_dict = checkpoint_dict['model'] - unwanted_prefix = '_orig_mod.' - for k,v in list(state_dict.items()): - if k.startswith(unwanted_prefix): - state_dict[k[len(unwanted_prefix):]] = state_dict.pop(k) - model.load_state_dict(state_dict, strict=False) - model.eval() - model.to(device) - x = torch.tensor([[1]], dtype=torch.long, device=device) # 1 is BOS - with torch.inference_mode(): - y = model.generate(x, max_new_tokens=200, temperature=0.0) - pt_tokens = y[0].tolist() - - tokenizer_model = os.path.join(test_ckpt_dir, "tok512.model") - enc = Tokenizer(tokenizer_model=tokenizer_model) - text = enc.decode(pt_tokens) - text = text.encode('ascii') # turn into bytes - - assert text == expected_stdout diff --git a/win.c b/win.c deleted file mode 100644 index 5cd7f1c..0000000 --- a/win.c +++ /dev/null @@ -1,180 +0,0 @@ -#include "win.h" -#include -#include - -#ifndef FILE_MAP_EXECUTE -#define FILE_MAP_EXECUTE 0x0020 -#endif /* FILE_MAP_EXECUTE */ - -static int __map_mman_error(const uint32_t err, const int deferr) -{ - if (err == 0) - return 0; - //TODO: implement - return err; -} - -static uint32_t __map_mmap_prot_page(const int prot) -{ - uint32_t protect = 0; - - if (prot == PROT_NONE) - return protect; - - if ((prot & PROT_EXEC) != 0) - { - protect = ((prot & PROT_WRITE) != 0) ? - PAGE_EXECUTE_READWRITE : PAGE_EXECUTE_READ; - } - else - { - protect = ((prot & PROT_WRITE) != 0) ? 
- PAGE_READWRITE : PAGE_READONLY; - } - - return protect; -} - -static uint32_t __map_mmap_prot_file(const int prot) -{ - uint32_t desiredAccess = 0; - - if (prot == PROT_NONE) - return desiredAccess; - - if ((prot & PROT_READ) != 0) - desiredAccess |= FILE_MAP_READ; - if ((prot & PROT_WRITE) != 0) - desiredAccess |= FILE_MAP_WRITE; - if ((prot & PROT_EXEC) != 0) - desiredAccess |= FILE_MAP_EXECUTE; - - return desiredAccess; -} - -void* mmap(void *addr, size_t len, int prot, int flags, int fildes, ssize_t off) -{ - HANDLE fm, h; - void * map = MAP_FAILED; - -#ifdef _MSC_VER -#pragma warning(push) -#pragma warning(disable: 4293) -#endif - - const uint32_t dwFileOffsetLow = (uint32_t)(off & 0xFFFFFFFFL); - const uint32_t dwFileOffsetHigh = (uint32_t)((off >> 32) & 0xFFFFFFFFL); - const uint32_t protect = __map_mmap_prot_page(prot); - const uint32_t desiredAccess = __map_mmap_prot_file(prot); - - const ssize_t maxSize = off + (ssize_t)len; - - const uint32_t dwMaxSizeLow = (uint32_t)(maxSize & 0xFFFFFFFFL); - const uint32_t dwMaxSizeHigh = (uint32_t)((maxSize >> 32) & 0xFFFFFFFFL); - -#ifdef _MSC_VER -#pragma warning(pop) -#endif - - errno = 0; - - if (len == 0 - /* Unsupported flag combinations */ - || (flags & MAP_FIXED) != 0 - /* Usupported protection combinations */ - || prot == PROT_EXEC) - { - errno = EINVAL; - return MAP_FAILED; - } - - h = ((flags & MAP_ANONYMOUS) == 0) ? - (HANDLE)_get_osfhandle(fildes) : INVALID_HANDLE_VALUE; - - if ((flags & MAP_ANONYMOUS) == 0 && h == INVALID_HANDLE_VALUE) - { - errno = EBADF; - return MAP_FAILED; - } - - fm = CreateFileMapping(h, NULL, protect, dwMaxSizeHigh, dwMaxSizeLow, NULL); - - if (fm == NULL) - { - errno = __map_mman_error(GetLastError(), EPERM); - return MAP_FAILED; - } - - map = MapViewOfFile(fm, desiredAccess, dwFileOffsetHigh, dwFileOffsetLow, len); - - CloseHandle(fm); - - if (map == NULL) - { - errno = __map_mman_error(GetLastError(), EPERM); - return MAP_FAILED; - } - - return map; -} - -int munmap(void *addr, size_t len) -{ - if (UnmapViewOfFile(addr)) - return 0; - - errno = __map_mman_error(GetLastError(), EPERM); - - return -1; -} - -int mprotect(void *addr, size_t len, int prot) -{ - uint32_t newProtect = __map_mmap_prot_page(prot); - uint32_t oldProtect = 0; - - if (VirtualProtect(addr, len, newProtect, &oldProtect)) - return 0; - - errno = __map_mman_error(GetLastError(), EPERM); - - return -1; -} - -int msync(void *addr, size_t len, int flags) -{ - if (FlushViewOfFile(addr, len)) - return 0; - - errno = __map_mman_error(GetLastError(), EPERM); - - return -1; -} - -int mlock(const void *addr, size_t len) -{ - if (VirtualLock((LPVOID)addr, len)) - return 0; - - errno = __map_mman_error(GetLastError(), EPERM); - - return -1; -} - -int munlock(const void *addr, size_t len) -{ - if (VirtualUnlock((LPVOID)addr, len)) - return 0; - - errno = __map_mman_error(GetLastError(), EPERM); - - return -1; -} - -// Portable clock_gettime function for Windows -int clock_gettime(int clk_id, struct timespec *tp) { - uint32_t ticks = GetTickCount(); - tp->tv_sec = ticks / 1000; - tp->tv_nsec = (ticks % 1000) * 1000000; - return 0; -} diff --git a/win.h b/win.h deleted file mode 100644 index 383cfad..0000000 --- a/win.h +++ /dev/null @@ -1,69 +0,0 @@ -#ifndef _WIN_H_ -#define _WIN_H_ - -#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers -#include -#include -#include - -#define ssize_t int64_t -#define ftell _ftelli64 - -// Below code is originally from mman-win32 -// -/* - * sys/mman.h - * mman-win32 - */ - -#ifndef 
_WIN32_WINNT // Allow use of features specific to Windows XP or later. -#define _WIN32_WINNT 0x0501 // Change this to the appropriate value to target other versions of Windows. -#endif - -/* All the headers include this file. */ -#ifndef _MSC_VER -#include <_mingw.h> -#endif - -#include - -#ifdef __cplusplus -extern "C" { -#endif - -#define PROT_NONE 0 -#define PROT_READ 1 -#define PROT_WRITE 2 -#define PROT_EXEC 4 - -#define MAP_FILE 0 -#define MAP_SHARED 1 -#define MAP_PRIVATE 2 -#define MAP_TYPE 0xf -#define MAP_FIXED 0x10 -#define MAP_ANONYMOUS 0x20 -#define MAP_ANON MAP_ANONYMOUS - -#define MAP_FAILED ((void *)-1) - -/* Flags for msync. */ -#define MS_ASYNC 1 -#define MS_SYNC 2 -#define MS_INVALIDATE 4 - -/* Flags for portable clock_gettime call. */ -#define CLOCK_REALTIME 0 - -void* mmap(void *addr, size_t len, int prot, int flags, int fildes, ssize_t off); -int munmap(void *addr, size_t len); -int mprotect(void *addr, size_t len, int prot); -int msync(void *addr, size_t len, int flags); -int mlock(const void *addr, size_t len); -int munlock(const void *addr, size_t len); -int clock_gettime(int clk_id, struct timespec *tp); - -#ifdef __cplusplus -}; -#endif - -#endif /* _WIN_H_ */ From 01df3731d6747659ad4d8cf7d9f4bcb27eb6d5f0 Mon Sep 17 00:00:00 2001 From: YiMing Han Date: Fri, 18 Aug 2023 15:09:24 -0400 Subject: [PATCH 74/79] only dart --- .github/workflows/build.yml | 193 ------------------ configurator.py | 47 ----- export_meta_llama_bin.py | 112 ----------- export_meta_llama_hf_bin.py | 113 ----------- model.py | 392 ------------------------------------ requirements.txt | 7 - run.ipynb | 130 ------------ sample.py | 79 -------- save_torchscript.py | 66 ------ tinystories.py | 274 ------------------------- tokenizer.py | 78 ------- train.py | 342 ------------------------------- train_vocab.sh | 126 ------------ 13 files changed, 1959 deletions(-) delete mode 100644 .github/workflows/build.yml delete mode 100644 configurator.py delete mode 100644 export_meta_llama_bin.py delete mode 100644 export_meta_llama_hf_bin.py delete mode 100644 model.py delete mode 100644 requirements.txt delete mode 100644 run.ipynb delete mode 100644 sample.py delete mode 100755 save_torchscript.py delete mode 100644 tinystories.py delete mode 100644 tokenizer.py delete mode 100644 train.py delete mode 100755 train_vocab.sh diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml deleted file mode 100644 index 7e6474d..0000000 --- a/.github/workflows/build.yml +++ /dev/null @@ -1,193 +0,0 @@ -name: Continuous Integration - -on: - push: - branches: - - master - paths: ['.github/workflows/**', '**/Makefile', '**/*.c', '**/*.h', '**/*.py'] - pull_request: - types: [opened, synchronize, reopened] - paths: ['**/Makefile', '**/*.c', '**/*.h', '**/*.py'] - # for manual triggering - workflow_dispatch: - -env: - BRANCH_NAME: ${{ github.head_ref || github.ref_name }} - -jobs: - # check basic builds to avoid breaking changes - ubuntu-focal-make: - runs-on: ubuntu-latest - - steps: - - name: Clone - id: checkout - uses: actions/checkout@v3 - - - name: Dependencies - id: depends - run: | - sudo apt-get update - sudo apt-get install build-essential -y - - - name: Set up Python 3.10 - uses: actions/setup-python@v3 - with: - python-version: "3.10" - - - name: Pip setup - run: | - python -m pip install --upgrade pip - if [ -f requirements.txt ]; then pip install -r requirements.txt; fi - - - name: Build - id: make_build - run: | - make - - - name: Build runfast - id: make_build_runfast - run: | - make 
runfast - - - name: Test with pytest - run: | - pytest - - macOS-latest-make: - runs-on: macos-latest - - steps: - - name: Clone - id: checkout - uses: actions/checkout@v3 - - - name: Dependencies - id: depends - continue-on-error: true - run: | - brew update - - - name: Set up Python 3.10 - uses: actions/setup-python@v3 - with: - python-version: "3.10" - - - name: Pip setup - run: | - python -m pip install --upgrade pip - if [ -f requirements.txt ]; then pip install -r requirements.txt; fi - - - name: Build clang - id: make_build_clang - run: | - make run CC=clang - - - name: Build - id: make_build - run: | - make - - - name: Build runfast - id: make_build_runfast - run: | - make runfast - - - name: Test with pytest - run: pytest - - - - - windows-latest-make: - runs-on: windows-latest - - strategy: - fail-fast: false #necessary, otherwise the matrix breaks - matrix: - arch: - - amd64 - - amd64_x86 - - amd64_arm64 - - steps: - - name: Clone - id: checkout - uses: actions/checkout@v3 - - - name: Setup MSBuild - uses: microsoft/setup-msbuild@v1 - - - name: Setup MSVC ${{ matrix.arch }} - uses: ilammy/msvc-dev-cmd@v1 - with: - arch: ${{ matrix.arch }} - - - name: Set up Python 3.10 - if: matrix.arch != 'amd64_arm64' - uses: actions/setup-python@v3 - with: - python-version: "3.10" - - - name: Pip setup - if: matrix.arch != 'amd64_arm64' - run: | - python -m pip install --upgrade pip - if (Test-Path requirements.txt) { - pip install -r requirements.txt - } - - - name: Build ${{ matrix.arch }} - id: build_msvc - run: | - .\build_msvc.bat - - #cross-comiled, cannot be run on host - - name: Test with pytest - if: matrix.arch != 'amd64_arm64' - run: pytest - - windows-latest-mingw: - runs-on: windows-latest - - defaults: - run: - shell: msys2 {0} - - strategy: - matrix: - include: - - { sys: mingw64, env: x86_64 } - - steps: - - name: Checkout - id: checkout - uses: actions/checkout@v3 - - - uses: msys2/setup-msys2@v2 - id: setup-msys2 - with: - msystem: ${{ matrix.sys }} - install: mingw-w64-${{matrix.env}}-gcc make - - - name: Build ${{ matrix.sys }} ${{ matrix.env }} - id: build_mingw - run: | - make win64 - - - name: Set up Python 3.10 - uses: actions/setup-python@v3 - with: - python-version: "3.10" - - - name: Pip setup - shell: powershell - run: | - python -m pip install --upgrade pip - if (Test-Path requirements.txt) { - pip install -r requirements.txt - } - - - name: Test with pytest - shell: powershell - run: pytest diff --git a/configurator.py b/configurator.py deleted file mode 100644 index a8bba95..0000000 --- a/configurator.py +++ /dev/null @@ -1,47 +0,0 @@ -""" -Poor Man's Configurator. Probably a terrible idea. Example usage: -$ python train.py config/override_file.py --batch_size=32 -this will first run config/override_file.py, then override batch_size to 32 - -The code in this file will be run as follows from e.g. train.py: ->>> exec(open('configurator.py').read()) - -So it's not a Python module, it's just shuttling this code away from train.py -The code in this script then overrides the globals() - -I know people are not going to love this, I just really dislike configuration -complexity and having to prepend config. to every single variable. If someone -comes up with a better simple Python solution I am all ears. 
-""" - -import sys -from ast import literal_eval - -for arg in sys.argv[1:]: - if '=' not in arg: - # assume it's the name of a config file - assert not arg.startswith('--') - config_file = arg - print(f"Overriding config with {config_file}:") - with open(config_file) as f: - print(f.read()) - exec(open(config_file).read()) - else: - # assume it's a --key=value argument - assert arg.startswith('--') - key, val = arg.split('=') - key = key[2:] - if key in globals(): - try: - # attempt to eval it it (e.g. if bool, number, or etc) - attempt = literal_eval(val) - except (SyntaxError, ValueError): - # if that goes wrong, just use the string - attempt = val - # ensure the types match ok - assert type(attempt) == type(globals()[key]) - # cross fingers - print(f"Overriding: {key} = {attempt}") - globals()[key] = attempt - else: - raise ValueError(f"Unknown config key: {key}") diff --git a/export_meta_llama_bin.py b/export_meta_llama_bin.py deleted file mode 100644 index 4e42197..0000000 --- a/export_meta_llama_bin.py +++ /dev/null @@ -1,112 +0,0 @@ -""" -This script exports the Llama 2 weights in llama2c.bin format. -""" -import os -import sys -import struct -from pathlib import Path -import json - -import torch - -from model import precompute_freqs_cis - - -def export(p, state_dict, filepath='model.bin'): - """export the model weights in fp32 into .bin file to be read from C""" - f = open(filepath, 'wb') - - def serialize(key): - print(f"writing {key}...") - t = state_dict[key].contiguous().view(-1).type(torch.float32).numpy() - f.write(memoryview(t)) - del state_dict[key] - - # first write out the header - hidden_dim = state_dict['layers.0.feed_forward.w1.weight'].shape[0] - p['vocab_size'] = 32000 - p['max_seq_len'] = 2048 - - n_kv_heads = p.get('n_kv_heads') or p['n_heads'] - header = struct.pack( - 'iiiiiii', - p['dim'], hidden_dim, p['n_layers'], p['n_heads'], - n_kv_heads, -p['vocab_size'], p['max_seq_len'] - ) - # NOTE ABOVE: -ve vocab_size is indicating that the classifier weights are present - # in the checkpoint and should be loaded. 
- f.write(header) - - # next write out the embedding weights - print("writing tok_embeddings...") - serialize('tok_embeddings.weight') - - # now all the layers - # attention weights - for i in range(p['n_layers']): serialize(f'layers.{i}.attention_norm.weight') - for i in range(p['n_layers']): serialize(f'layers.{i}.attention.wq.weight') - for i in range(p['n_layers']): serialize(f'layers.{i}.attention.wk.weight') - for i in range(p['n_layers']): serialize(f'layers.{i}.attention.wv.weight') - for i in range(p['n_layers']): serialize(f'layers.{i}.attention.wo.weight') - # ffn weights - for i in range(p['n_layers']): serialize(f'layers.{i}.ffn_norm.weight') - for i in range(p['n_layers']): serialize(f'layers.{i}.feed_forward.w1.weight') - for i in range(p['n_layers']): serialize(f'layers.{i}.feed_forward.w2.weight') - for i in range(p['n_layers']): serialize(f'layers.{i}.feed_forward.w3.weight') - - # final rmsnorm - serialize('norm.weight') - # freqs_cos, freqs_sin - freqs_cos, freqs_sin = precompute_freqs_cis(p['dim'] // p['n_heads'], p['max_seq_len'] * 2) - state_dict['freqs_cos'] = freqs_cos[:p['max_seq_len']] - state_dict['freqs_sin'] = freqs_sin[:p['max_seq_len']] - serialize('freqs_cos') - serialize('freqs_sin') - - # finally write the output weights - serialize('output.weight') - - f.close() - print(f"wrote {filepath}") - - -def concat_weights(models): - state_dict = {} - for name in list(models[0]): - tensors = [model[name] for model in models] - if len(tensors) == 1 or len(tensors[0].shape) == 1: - state_dict[name] = tensors[0] - continue - is_axis_1 = ( - name.startswith('tok_embeddings.') - or name.endswith('.attention.wo.weight') - or name.endswith('.feed_forward.w2.weight') - ) - axis = 1 if is_axis_1 else 0 - state_dict[name] = torch.cat(tensors, dim=axis) - for model in models: - del model[name] - return state_dict - - -def load_and_export(model_path, output_path): - params_path = os.path.join(model_path, 'params.json') - with open(params_path) as f: - params = json.load(f) - print(params) - - model_paths = sorted(list(Path(model_path).glob('consolidated.*.pth'))) - models = [torch.load(p, map_location='cpu') for p in model_paths] - state_dict = concat_weights(models) - del models - export(params, state_dict, output_path) - - -if __name__ == '__main__': - if len(sys.argv) == 1: - print('[Llama model folder path] [output path]') - exit() - - model_path = sys.argv[1] - output_path = sys.argv[2] - load_and_export(model_path, output_path) diff --git a/export_meta_llama_hf_bin.py b/export_meta_llama_hf_bin.py deleted file mode 100644 index e3a8c73..0000000 --- a/export_meta_llama_hf_bin.py +++ /dev/null @@ -1,113 +0,0 @@ -""" -This script exports the Llama 2 weights in llama2c.bin format. 
-""" -import os -import sys -import struct -from pathlib import Path -import json - -import torch - -from model import precompute_freqs_cis - - -def export(p, state_dict, filepath='model.bin'): - """export the model weights in fp32 into .bin file to be read from C""" - f = open(filepath, 'wb') - - def serialize(key): - print(f"writing {key}...") - t = state_dict[key].contiguous().view(-1).type(torch.float32).numpy() - f.write(memoryview(t)) - del state_dict[key] - - # first write out the header - hidden_dim = state_dict['model.layers.0.mlp.gate_proj.weight'].shape[0] - p['vocab_size'] = 32000 - p['max_seq_len'] = 2048 - - n_kv_heads = p.get('n_kv_heads') or p['n_heads'] - header = struct.pack( - 'iiiiiii', - p['dim'], hidden_dim, p['n_layers'], p['n_heads'], - n_kv_heads, -p['vocab_size'], p['max_seq_len'] - ) - # NOTE ABOVE: -ve vocab_size is indicating that the classifier weights are present - # in the checkpoint and should be loaded. - f.write(header) - - # next write out the embedding weights - print("writing tok_embeddings...") - serialize('model.embed_tokens.weight') - - # now all the layers - # attention weights - for i in range(p['n_layers']): serialize(f'model.layers.{i}.input_layernorm.weight') - for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.q_proj.weight') - for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.k_proj.weight') - for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.v_proj.weight') - for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.o_proj.weight') - # ffn weights - for i in range(p['n_layers']): serialize(f'model.layers.{i}.post_attention_layernorm.weight') - for i in range(p['n_layers']): serialize(f'model.layers.{i}.mlp.gate_proj.weight') - for i in range(p['n_layers']): serialize(f'model.layers.{i}.mlp.down_proj.weight') - for i in range(p['n_layers']): serialize(f'model.layers.{i}.mlp.up_proj.weight') - - # final rmsnorm - serialize('model.norm.weight') - # freqs_cos, freqs_sin - freqs_cos, freqs_sin = precompute_freqs_cis(p['dim'] // p['n_heads'], p['max_seq_len'] * 2) - state_dict['freqs_cos'] = freqs_cos[:p['max_seq_len']] - state_dict['freqs_sin'] = freqs_sin[:p['max_seq_len']] - # check if this requires addtional conversion - serialize('freqs_cos') - serialize('freqs_sin') - - # finally write the output weights - serialize('lm_head.weight') - - f.close() - print(f"wrote {filepath}") - - -def concat_weights(models): - state_dict = {} - for name in list(models[0]): - tensors = [model[name] for model in models] - if len(tensors) == 1 or len(tensors[0].shape) == 1: - state_dict[name] = tensors[0] - continue - is_axis_1 = ( - name.startswith('model.embed_tokens.weight') - or name.endswith('.self_attn.o_proj.weight') - or name.endswith('.mlp.down_proj.weight') - ) - axis = 1 if is_axis_1 else 0 - state_dict[name] = torch.cat(tensors, dim=axis) - for model in models: - del model[name] - return state_dict - - -def load_and_export(model_path, output_path): - params_path = os.path.join(model_path, 'params.json') - with open(params_path) as f: - params = json.load(f) - print(params) - - model_paths = sorted(list(Path(model_path).glob('consolidated.*.pth'))) - models = [torch.load(p, map_location='cpu') for p in model_paths] - state_dict = concat_weights(models) - del models - export(params, state_dict, output_path) - - -if __name__ == '__main__': - if len(sys.argv) == 1: - print('[Llama model folder path] [output path]') - exit() - - model_path = sys.argv[1] - output_path = sys.argv[2] - 
load_and_export(model_path, output_path) diff --git a/model.py b/model.py deleted file mode 100644 index c8c82a9..0000000 --- a/model.py +++ /dev/null @@ -1,392 +0,0 @@ -import math -import struct -import inspect -from dataclasses import dataclass -from typing import Any, Optional, Tuple - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - -@dataclass -class ModelArgs: - # default hyperparameters for the Llama 7B model - dim: int = 4096 - n_layers: int = 32 - n_heads: int = 32 - n_kv_heads: Optional[int] = None - vocab_size: int = 32000 - multiple_of: int = 256 # MLP hidden layer size will be multiple of - norm_eps: float = 1e-5 - max_seq_len: int = 2048 - dropout: float = 0.0 - - -class RMSNorm(torch.nn.Module): - def __init__(self, dim: int, eps: float): - super().__init__() - self.eps = eps - self.weight = nn.Parameter(torch.ones(dim)) - - def _norm(self, x): - return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps) - - def forward(self, x): - output = self._norm(x.float()).type_as(x) - return output * self.weight - - -def precompute_freqs_cis(dim: int, end: int, theta: float = 10000.0): - freqs = 1.0 / (theta ** (torch.arange(0, dim, 2)[: (dim // 2)].float() / dim)) - t = torch.arange(end, device=freqs.device) # type: ignore - freqs = torch.outer(t, freqs).float() # type: ignore - freqs_cos = torch.cos(freqs) # real part - freqs_sin = torch.sin(freqs) # imaginary part - return freqs_cos, freqs_sin - -def reshape_for_broadcast(freqs_cis: torch.Tensor, x: torch.Tensor): - ndim = x.ndim - assert 0 <= 1 < ndim - assert freqs_cis.shape == (x.shape[1], x.shape[-1]) - shape = [d if i == 1 or i == ndim - 1 else 1 for i, d in enumerate(x.shape)] - return freqs_cis.view(shape) - -def apply_rotary_emb( - xq: torch.Tensor, - xk: torch.Tensor, - freqs_cos: torch.Tensor, - freqs_sin: torch.Tensor -) -> Tuple[torch.Tensor, torch.Tensor]: - - # reshape xq and xk to match the complex representation - xq_r, xq_i = xq.float().reshape(xq.shape[:-1] + (-1, 2)).unbind(-1) - xk_r, xk_i = xk.float().reshape(xk.shape[:-1] + (-1, 2)).unbind(-1) - - # reshape freqs_cos and freqs_sin for broadcasting - freqs_cos = reshape_for_broadcast(freqs_cos, xq_r) - freqs_sin = reshape_for_broadcast(freqs_sin, xq_r) - - # apply rotation using real numbers - xq_out_r = xq_r * freqs_cos - xq_i * freqs_sin - xq_out_i = xq_r * freqs_sin + xq_i * freqs_cos - xk_out_r = xk_r * freqs_cos - xk_i * freqs_sin - xk_out_i = xk_r * freqs_sin + xk_i * freqs_cos - - # flatten last two dimensions - xq_out = torch.stack([xq_out_r, xq_out_i], dim=-1).flatten(3) - xk_out = torch.stack([xk_out_r, xk_out_i], dim=-1).flatten(3) - - return xq_out.type_as(xq), xk_out.type_as(xk) - -def repeat_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: - """torch.repeat_interleave(x, dim=2, repeats=n_rep)""" - bs, slen, n_kv_heads, head_dim = x.shape - if n_rep == 1: - return x - return ( - x[:, :, :, None, :] - .expand(bs, slen, n_kv_heads, n_rep, head_dim) - .reshape(bs, slen, n_kv_heads * n_rep, head_dim) - ) - -class Attention(nn.Module): - def __init__(self, args: ModelArgs): - super().__init__() - self.n_kv_heads = args.n_heads if args.n_kv_heads is None else args.n_kv_heads - assert args.n_heads % self.n_kv_heads == 0 - model_parallel_size = 1 - self.n_local_heads = args.n_heads // model_parallel_size - self.n_local_kv_heads = self.n_kv_heads // model_parallel_size - self.n_rep = self.n_local_heads // self.n_local_kv_heads - self.head_dim = args.dim // args.n_heads - self.wq = nn.Linear(args.dim, 
args.n_heads * self.head_dim, bias=False) - self.wk = nn.Linear(args.dim, self.n_kv_heads * self.head_dim, bias=False) - self.wv = nn.Linear(args.dim, self.n_kv_heads * self.head_dim, bias=False) - self.wo = nn.Linear(args.n_heads * self.head_dim, args.dim, bias=False) - self.attn_dropout = nn.Dropout(args.dropout) - self.resid_dropout = nn.Dropout(args.dropout) - self.dropout = args.dropout - - # use flash attention or a manual implementation? - self.flash = hasattr(torch.nn.functional, 'scaled_dot_product_attention') - if not self.flash: - print("WARNING: using slow attention. Flash Attention requires PyTorch >= 2.0") - mask = torch.full((1, 1, args.max_seq_len, args.max_seq_len), float("-inf")) - mask = torch.triu(mask, diagonal=1) - self.register_buffer("mask", mask) - - def forward( - self, - x: torch.Tensor, - freqs_cos: torch.Tensor, - freqs_sin: torch.Tensor, - ): - bsz, seqlen, _ = x.shape - - # QKV - xq, xk, xv = self.wq(x), self.wk(x), self.wv(x) - xq = xq.view(bsz, seqlen, self.n_local_heads, self.head_dim) - xk = xk.view(bsz, seqlen, self.n_local_kv_heads, self.head_dim) - xv = xv.view(bsz, seqlen, self.n_local_kv_heads, self.head_dim) - - # RoPE relative positional embeddings - xq, xk = apply_rotary_emb(xq, xk, freqs_cos, freqs_sin) - - # grouped multiquery attention: expand out keys and values - xk = repeat_kv(xk, self.n_rep) # (bs, seqlen, n_local_heads, head_dim) - xv = repeat_kv(xv, self.n_rep) # (bs, seqlen, n_local_heads, head_dim) - - # make heads into a batch dimension - xq = xq.transpose(1, 2) # (bs, n_local_heads, seqlen, head_dim) - xk = xk.transpose(1, 2) - xv = xv.transpose(1, 2) - - # flash implementation - if self.flash: - output = torch.nn.functional.scaled_dot_product_attention(xq, xk, xv, attn_mask=None, dropout_p=self.dropout if self.training else 0.0, is_causal=True) - else: - # manual implementation - scores = torch.matmul(xq, xk.transpose(2, 3)) / math.sqrt(self.head_dim) - assert hasattr(self, 'mask') - scores = scores + self.mask[:, :, :seqlen, :seqlen] # (bs, n_local_heads, seqlen, cache_len + seqlen) - scores = F.softmax(scores.float(), dim=-1).type_as(xq) - scores = self.attn_dropout(scores) - output = torch.matmul(scores, xv) # (bs, n_local_heads, seqlen, head_dim) - - # restore time as batch dimension and concat heads - output = output.transpose(1, 2).contiguous().view(bsz, seqlen, -1) - - # final projection into the residual stream - output = self.wo(output) - output = self.resid_dropout(output) - return output - - -class FeedForward(nn.Module): - def __init__(self, dim: int, hidden_dim: int, multiple_of: int, dropout: float): - super().__init__() - hidden_dim = int(2 * hidden_dim / 3) - hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of) - self.w1 = nn.Linear(dim, hidden_dim, bias=False) - self.w2 = nn.Linear(hidden_dim, dim, bias=False) - self.w3 = nn.Linear(dim, hidden_dim, bias=False) - self.dropout = nn.Dropout(dropout) - - def forward(self, x): - return self.dropout(self.w2(F.silu(self.w1(x)) * self.w3(x))) - - -class TransformerBlock(nn.Module): - def __init__(self, layer_id: int, args: ModelArgs): - super().__init__() - self.n_heads = args.n_heads - self.dim = args.dim - self.head_dim = args.dim // args.n_heads - self.attention = Attention(args) - self.feed_forward = FeedForward( - dim=args.dim, - hidden_dim=4 * args.dim, - multiple_of=args.multiple_of, - dropout=args.dropout, - ) - self.layer_id = layer_id - self.attention_norm = RMSNorm(args.dim, eps=args.norm_eps) - self.ffn_norm = RMSNorm(args.dim, 
eps=args.norm_eps) - - def forward(self, x, freqs_cos, freqs_sin): - h = x + self.attention.forward(self.attention_norm(x), freqs_cos, freqs_sin) - out = h + self.feed_forward.forward(self.ffn_norm(h)) - return out - - -class Transformer(nn.Module): - last_loss: Optional[torch.Tensor] - - def __init__(self, params: ModelArgs): - super().__init__() - self.params = params - self.vocab_size = params.vocab_size - self.n_layers = params.n_layers - - self.tok_embeddings = nn.Embedding(params.vocab_size, params.dim) - self.dropout = nn.Dropout(params.dropout) - self.layers = torch.nn.ModuleList() - for layer_id in range(params.n_layers): - self.layers.append(TransformerBlock(layer_id, params)) - self.norm = RMSNorm(params.dim, eps=params.norm_eps) - self.output = nn.Linear(params.dim, params.vocab_size, bias=False) - - # share the unembedding parameters with the embedding parameters - self.tok_embeddings.weight = self.output.weight # https://paperswithcode.com/method/weight-tying - - # some useful precompute for the RoPE relative positional embeddings - freqs_cos, freqs_sin = precompute_freqs_cis(self.params.dim // self.params.n_heads, self.params.max_seq_len) - self.register_buffer("freqs_cos", freqs_cos, persistent=False) - self.register_buffer("freqs_sin", freqs_sin, persistent=False) - - # init all weights - self.apply(self._init_weights) - # apply special scaled init to the residual projections, per GPT-2 paper - for pn, p in self.named_parameters(): - if pn.endswith('w3.weight') or pn.endswith('wo.weight'): - torch.nn.init.normal_(p, mean=0.0, std=0.02/math.sqrt(2 * params.n_layers)) - - # Initialize attribute for the loss of the last forward call. This will be set if the forward is called with a targets tensor. - self.last_loss = None - - def _init_weights(self, module): - if isinstance(module, nn.Linear): - torch.nn.init.normal_(module.weight, mean=0.0, std=0.02) - if module.bias is not None: - torch.nn.init.zeros_(module.bias) - elif isinstance(module, nn.Embedding): - torch.nn.init.normal_(module.weight, mean=0.0, std=0.02) - - def forward(self, tokens: torch.Tensor, targets: Optional[torch.Tensor] = None) -> torch.Tensor: - _bsz, seqlen = tokens.shape - h = self.tok_embeddings(tokens) - h = self.dropout(h) - freqs_cos = self.freqs_cos[:seqlen] - freqs_sin = self.freqs_sin[:seqlen] - - for layer in self.layers: - h = layer(h, freqs_cos, freqs_sin) - h = self.norm(h) - - if targets is not None: - # if we are given some desired targets also calculate the loss - logits = self.output(h) - self.last_loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1), ignore_index=-1) - else: - # inference-time mini-optimization: only forward the output on the very last position - logits = self.output(h[:, [-1], :]) # note: using list [-1] to preserve the time dim - self.last_loss = None - - return logits - - def configure_optimizers(self, weight_decay, learning_rate, betas, device_type): - # start with all of the candidate parameters - param_dict = {pn: p for pn, p in self.named_parameters()} - # filter out those that do not require grad - param_dict = {pn: p for pn, p in param_dict.items() if p.requires_grad} - # create optim groups. Any parameters that is 2D will be weight decayed, otherwise no. - # i.e. all weight tensors in matmuls + embeddings decay, all biases and layernorms don't. 
- decay_params = [p for n, p in param_dict.items() if p.dim() >= 2] - nodecay_params = [p for n, p in param_dict.items() if p.dim() < 2] - optim_groups = [ - {'params': decay_params, 'weight_decay': weight_decay}, - {'params': nodecay_params, 'weight_decay': 0.0} - ] - num_decay_params = sum(p.numel() for p in decay_params) - num_nodecay_params = sum(p.numel() for p in nodecay_params) - print(f"num decayed parameter tensors: {len(decay_params)}, with {num_decay_params:,} parameters") - print(f"num non-decayed parameter tensors: {len(nodecay_params)}, with {num_nodecay_params:,} parameters") - # Create AdamW optimizer and use the fused version if it is available - fused_available = 'fused' in inspect.signature(torch.optim.AdamW).parameters - use_fused = fused_available and device_type == 'cuda' - extra_args = dict(fused=True) if use_fused else dict() - optimizer = torch.optim.AdamW(optim_groups, lr=learning_rate, betas=betas, **extra_args) - print(f"using fused AdamW: {use_fused}") - - return optimizer - - def estimate_mfu(self, fwdbwd_per_iter, dt): - """ estimate model flops utilization (MFU) in units of A100 bfloat16 peak FLOPS """ - # first estimate the number of flops we do per iteration. - # see PaLM paper Appendix B as ref: https://arxiv.org/abs/2204.02311 - N = sum(p.numel() for p in self.parameters()) - cfg = self.params - L, H, Q, T = cfg.n_layers, cfg.n_heads, cfg.dim//cfg.n_heads, cfg.max_seq_len - flops_per_token = 6*N + 12*L*H*Q*T - flops_per_fwdbwd = flops_per_token * T - flops_per_iter = flops_per_fwdbwd * fwdbwd_per_iter - # express our flops throughput as ratio of A100 bfloat16 peak flops - flops_achieved = flops_per_iter * (1.0/dt) # per second - flops_promised = 312e12 # A100 GPU bfloat16 peak flops is 312 TFLOPS - mfu = flops_achieved / flops_promised - return mfu - - @torch.inference_mode() - def generate(self, idx, max_new_tokens, temperature=1.0, top_k=None): - """ - Take a conditioning sequence of indices idx (LongTensor of shape (b,t)) and complete - the sequence max_new_tokens times, feeding the predictions back into the model each time. - Most likely you'll want to make sure to be in model.eval() mode of operation for this. - Also note this is a super inefficient version of sampling with no key/value cache. 
- """ - for _ in range(max_new_tokens): - # if the sequence context is growing too long we must crop it at block_size - idx_cond = idx if idx.size(1) <= self.params.max_seq_len else idx[:, -self.params.max_seq_len:] - # forward the model to get the logits for the index in the sequence - logits = self(idx_cond) - logits = logits[:, -1, :] # crop to just the final time step - if temperature == 0.0: - # "sample" the single most likely index - _, idx_next = torch.topk(logits, k=1, dim=-1) - else: - # pluck the logits at the final step and scale by desired temperature - logits = logits / temperature - # optionally crop the logits to only the top k options - if top_k is not None: - v, _ = torch.topk(logits, min(top_k, logits.size(-1))) - logits[logits < v[:, [-1]]] = -float('Inf') - # apply softmax to convert logits to (normalized) probabilities - probs = F.softmax(logits, dim=-1) - idx_next = torch.multinomial(probs, num_samples=1) - # append sampled index to the running sequence and continue - idx = torch.cat((idx, idx_next), dim=1) - - return idx - - def export(self, filepath='model.bin'): - """export the model weights in fp32 into .bin file to be read from C""" - f = open(filepath, 'wb') - - def serialize(t): - d = t.detach().cpu().view(-1).numpy().astype(np.float32) - b = struct.pack(f'{len(d)}f', *d) - f.write(b) - - # first write out the header - hidden_dim = self.layers[0].feed_forward.w1.weight.shape[0] - p = self.params - n_kv_heads = p.n_heads if p.n_kv_heads is None else p.n_kv_heads - header = struct.pack('iiiiiii', p.dim, hidden_dim, p.n_layers, p.n_heads, - n_kv_heads, p.vocab_size, p.max_seq_len) - f.write(header) - - # next write out the embedding weights - serialize(self.tok_embeddings.weight) - - # now all the layers - # attention weights - for layer in self.layers: - serialize(layer.attention_norm.weight) - for layer in self.layers: - serialize(layer.attention.wq.weight) - for layer in self.layers: - serialize(layer.attention.wk.weight) - for layer in self.layers: - serialize(layer.attention.wv.weight) - for layer in self.layers: - serialize(layer.attention.wo.weight) - # ffn weights - for layer in self.layers: - serialize(layer.ffn_norm.weight) - for layer in self.layers: - serialize(layer.feed_forward.w1.weight) - for layer in self.layers: - serialize(layer.feed_forward.w2.weight) - for layer in self.layers: - serialize(layer.feed_forward.w3.weight) - # final rmsnorm - serialize(self.norm.weight) - # note: no need to write final classifier weights due to weight sharing - # freqs_cis - serialize(self.freqs_cos[:p.max_seq_len]) - serialize(self.freqs_sin[:p.max_seq_len]) - - # write to binary file - f.close() - print(f"wrote {filepath}") diff --git a/requirements.txt b/requirements.txt deleted file mode 100644 index 7187a73..0000000 --- a/requirements.txt +++ /dev/null @@ -1,7 +0,0 @@ -numpy==1.23.5 -pytest==7.4.0 -Requests==2.31.0 -sentencepiece==0.1.99 -torch==2.0.1 -tqdm==4.64.1 -wandb==0.15.5 diff --git a/run.ipynb b/run.ipynb deleted file mode 100644 index ac57593..0000000 --- a/run.ipynb +++ /dev/null @@ -1,130 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "HLdoj4cz-xal" - }, - "source": [ - "# Run.c\n", - "\n", - "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/karpathy/llama2.c/blob/master/run.ipynb)\n", - "\n", - "More details can be found in the [README.md](README.md) ." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Une3Ozlnu1B7" - }, - "outputs": [], - "source": [ - "#@title Clone Project\n", - "\n", - "!git clone https://github.com/karpathy/llama2.c.git\n", - "%cd llama2.c" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "#@title Build\n", - "\n", - "!make runfast" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "thm0ZBrtSgoC" - }, - "outputs": [], - "source": [ - "#@title Pick Your Model\n", - "\n", - "#@markdown Choose model\n", - "model = \"stories15M\" #@param [\"stories15M\", \"stories42M\", \"stories110M\"]\n", - "\n", - "download_url = \"\"\n", - "\n", - "if(model == \"stories15M\"):\n", - " download_url = \"https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin\"\n", - "if(model == \"stories42M\"):\n", - " download_url = \"https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin\"\n", - "if(model == \"stories110M\"):\n", - " download_url = \"https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin\"\n", - "\n", - "print(f\"download_url: {download_url}\")\n", - "\n", - "!wget $download_url\n", - "\n", - "model_file = model + \".bin\"" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "OgAc3KjuT-NM" - }, - "outputs": [], - "source": [ - "#@title Generate Stories\n", - "\n", - "# Generate args\n", - "max_token = 256 #@param {type:\"slider\", min:32, max:1024, step:32}\n", - "temperature = 0.8 #@param {type:\"slider\", min:0.0, max:1, step:0.05}\n", - "top_p = 0.9 #@param {type:\"slider\", min:0.0, max:1.0, step:0.05}\n", - "prompt = \"One day, Lily met a Shoggoth\" #@param {type:\"string\"}\n", - "\n", - "print(f\"model: {model_file}, max_token: {max_token}, temperature: {temperature}, top_p: {top_p}, prompt: {prompt}\")\n", - "print(f\"----------------------------\\n\")\n", - "\n", - "cmd = f'./run {model_file} -t {temperature} -p {top_p} -n {max_token} -i \"{prompt}\"'\n", - "!{cmd}" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "#@title Run Meta's Llama 2 models\n", - "\n", - "#@markdown input your huggingface [access token](https://huggingface.co/settings/tokens) to download Meta's Llama 2 models.\n", - "\n", - "from huggingface_hub import snapshot_download\n", - "\n", - "token = \"replace your huggingface access token\" #@param {type:\"string\"}\n", - "path = snapshot_download(repo_id=\"meta-llama/Llama-2-7b\",cache_dir=\"Llama-2-7b\", use_auth_token=token)\n", - "\n", - "!python export_meta_llama_bin.py $path llama2_7b.bin\n", - "\n", - "print(\"./run llama2_7b.bin\\n\")\n", - "!./run llama2_7b.bin" - ] - } - ], - "metadata": { - "colab": { - "private_outputs": true, - "provenance": [] - }, - "kernelspec": { - "display_name": "Python 3", - "name": "python3" - }, - "language_info": { - "name": "python" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} diff --git a/sample.py b/sample.py deleted file mode 100644 index d2f56ea..0000000 --- a/sample.py +++ /dev/null @@ -1,79 +0,0 @@ -""" -Sample from the trained model with PyTorch -""" -import os -import pickle -from contextlib import nullcontext -import torch -from model import ModelArgs, Transformer -from tokenizer import Tokenizer - -from tinystories import get_tokenizer_model_path - -# ----------------------------------------------------------------------------- -checkpoint = 'out/ckpt.pt' -start = "" 
# or "<|endoftext|>" or etc. Can also specify a file, use as: "FILE:prompt.txt" -num_samples = 1 # number of samples to draw -max_new_tokens = 100 # number of tokens generated in each sample -temperature = 1.0 # 1.0 = no change, < 1.0 = less random, > 1.0 = more random, in predictions -top_k = 300 # retain only the top_k most likely tokens, clamp others to have 0 probability -tokenizer = "" # override the tokenizer model path -seed = 1337 -device = 'cuda' if torch.cuda.is_available() else 'cpu' # examples: 'cpu', 'cuda', 'cuda:0', 'cuda:1', etc. -#dtype = 'bfloat16' if torch.cuda.is_available() and torch.cuda.is_bf16_supported() else 'float16' # 'float32' or 'bfloat16' or 'float16' -dtype = "float32" -compile = False # use PyTorch 2.0 to compile the model to be faster -exec(open('configurator.py').read()) # overrides from command line or config file -# ----------------------------------------------------------------------------- - -torch.manual_seed(seed) -torch.cuda.manual_seed(seed) -torch.backends.cuda.matmul.allow_tf32 = True # allow tf32 on matmul -torch.backends.cudnn.allow_tf32 = True # allow tf32 on cudnn -device_type = 'cuda' if 'cuda' in device else 'cpu' # for later use in torch.autocast -ptdtype = {'float32': torch.float32, 'bfloat16': torch.bfloat16, 'float16': torch.float16}[dtype] -ctx = nullcontext() if device_type == 'cpu' else torch.amp.autocast(device_type=device_type, dtype=ptdtype) - -# init from a model saved in a specific directory -checkpoint_dict = torch.load(checkpoint, map_location=device) -gptconf = ModelArgs(**checkpoint_dict['model_args']) -model = Transformer(gptconf) -state_dict = checkpoint_dict['model'] -unwanted_prefix = '_orig_mod.' -for k,v in list(state_dict.items()): - if k.startswith(unwanted_prefix): - state_dict[k[len(unwanted_prefix):]] = state_dict.pop(k) -model.load_state_dict(state_dict, strict=False) - -model.eval() -model.to(device) -if compile: - print("Compiling the model...") - model = torch.compile(model) # requires PyTorch 2.0 (optional) - -# load the tokenizer -vocab_source = checkpoint_dict.get("vocab_source", "llama2") -vocab_size = gptconf.vocab_size -if tokenizer: - # a specific tokenizer is provided, use it - tokenizer_model = tokenizer -else: - # let's try to find the tokenizer model automatically. bit gross here... - query_vocab_size = 0 if vocab_source == "llama2" else vocab_size - tokenizer_model = get_tokenizer_model_path(vocab_size=query_vocab_size) -enc = Tokenizer(tokenizer_model=tokenizer_model) - -# encode the beginning of the prompt -if start.startswith('FILE:'): - with open(start[5:], 'r', encoding='utf-8') as f: - start = f.read() -start_ids = enc.encode(start, bos=True, eos=False) -x = (torch.tensor(start_ids, dtype=torch.long, device=device)[None, ...]) - -# run generation -with torch.no_grad(): - with ctx: - for k in range(num_samples): - y = model.generate(x, max_new_tokens, temperature=temperature, top_k=top_k) - print(enc.decode(y[0].tolist())) - print('---------------') diff --git a/save_torchscript.py b/save_torchscript.py deleted file mode 100755 index af3a299..0000000 --- a/save_torchscript.py +++ /dev/null @@ -1,66 +0,0 @@ -#!/usr/bin/env python -"""Saves the model as a TorchScript. 
-
-Usage examples:
-    ./save_torchscript.py
-    ./save_torchscript.py --dim=300
-    ./save_torchscript.py --gzip_output=True --zero_params=True
-
-The resulting file can be loaded in C++ code and then used for training or
-inference with:
-    #include <torch/script.h>
-    torch::jit::Module module = torch::jit::load("model.pt")
-
-Note that the serialized model includes the initial parameters and with the default
-ModelArgs the file is 59M and gzips down to 55M. If you want to serialize/distribute
-the model parameters separately you can zero out the parameters before saving it and
-it will gzip down to 780K.
-"""
-import gzip
-import os
-import shutil
-from inspect import signature
-
-import torch
-
-from model import ModelArgs, Transformer
-
-# Model args config
-dim = 288
-n_layers = 6
-n_heads = 6
-n_kv_heads = n_heads
-multiple_of = 32
-max_seq_len = 256
-dropout = 0.0
-vocab_size = 32000
-norm_eps = 1e-5
-# Save config
-model_path = "model.pt"
-zero_params = False
-gzip_output = False
-# Allow config overrides
-exec(open("configurator.py").read())
-
-
-def main() -> None:
-    model_args = {k: globals()[k] for k in signature(ModelArgs).parameters}
-    model = Transformer(ModelArgs(**model_args))
-
-    # If requested zero params before saving the model. This is useful in
-    # conjunction with gzip_output.
-    if zero_params:
-        for p in model.parameters():
-            p.detach().zero_()
-
-    torch.jit.save(torch.jit.script(model), model_path)
-
-    if gzip_output:
-        with open(model_path, "rb") as f_in:
-            with gzip.open(f"{model_path}.gz", "wb") as f_out:
-                shutil.copyfileobj(f_in, f_out)
-        os.unlink(model_path)
-
-
-if __name__ == "__main__":
-    main()
diff --git a/tinystories.py b/tinystories.py
deleted file mode 100644
index 690cb02..0000000
--- a/tinystories.py
+++ /dev/null
@@ -1,274 +0,0 @@
-"""
-Download, preprocess and serve the TinyStories dataset as a DataLoader.
-""" - -import argparse -import glob -import json -import os -import random -from typing import List -from concurrent.futures import ProcessPoolExecutor -from functools import partial - -import numpy as np -import requests -import torch -import torch.distributed as dist -from tqdm import tqdm - -from tokenizer import Tokenizer - -DATA_CACHE_DIR = "data" - -def download_file(url: str, fname: str, chunk_size=1024): - """Helper function to download a file from a given url""" - resp = requests.get(url, stream=True) - total = int(resp.headers.get("content-length", 0)) - with open(fname, "wb") as file, tqdm( - desc=fname, - total=total, - unit="iB", - unit_scale=True, - unit_divisor=1024, - ) as bar: - for data in resp.iter_content(chunk_size=chunk_size): - size = file.write(data) - bar.update(size) - - -def download(): - """Downloads the TinyStories dataset to DATA_CACHE_DIR""" - os.makedirs(DATA_CACHE_DIR, exist_ok=True) - - # download the TinyStories dataset, unless it's already downloaded - data_url = "https://huggingface.co/datasets/roneneldan/TinyStories/resolve/main/TinyStories_all_data.tar.gz" - data_filename = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data.tar.gz") - if not os.path.exists(data_filename): - print(f"Downloading {data_url} to {data_filename}...") - download_file(data_url, data_filename) - else: - print(f"{data_filename} already exists, skipping download...") - - # unpack the tar.gz file into all the data shards (json files) - data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") - if not os.path.exists(data_dir): - os.makedirs(data_dir, exist_ok=True) - print(f"Unpacking {data_filename}...") - os.system(f"tar -xzf {data_filename} -C {data_dir}") - else: - print(f"{data_dir} already exists, skipping unpacking...") - - # print a single example just for debugging and such - shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.json"))) - with open(shard_filenames[0], "r") as f: - data = json.load(f) - print("Download done.") - print(f"Number of shards: {len(shard_filenames)}") - print(f"Example story:\n{data[0]}") - -def train_vocab(vocab_size): - """ - Trains a custom sentencepiece tokenizer on the TinyStories dataset. - The custom tokenizer files will be saved in DATA_CACHE_DIR/tok{N} directories, - where N is the vocab size. This is also where the pretok .bin files will go. - """ - assert vocab_size > 0, "Vocab size must be positive" - - # output file prefix path for sentencepiece - prefix = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") - - # how many shards we'll use for vocab training, kept low for efficiency - num_shards = 10 - - # 1) export a large chunk of text as a single text file tiny.txt - tiny_file = os.path.join(DATA_CACHE_DIR, "tiny.txt") - data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") - shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.json"))) - - print(f"Writing temporary file {tiny_file} with {num_shards} shards...") - with open(tiny_file, "w") as of: - for shard in tqdm(shard_filenames[:num_shards]): - with open(shard, "r") as f: - data = json.load(f) - for example in data: - text = example["story"] - text = text.strip() - of.write(text + "\n") - print(f"Size is: {os.path.getsize(tiny_file) / 1024 / 1024:.2f} MB") - - # 2) run the train_vocab.sh script that trains the sentencepiece model - print("Will now train the vocab with:") - cmd = f"bash train_vocab.sh {tiny_file} {prefix} {vocab_size}" - print(cmd) - print("OK? 
[y/N] ") - dec = input() - if dec.lower() != "y": - print("Exiting...") - return - os.system(cmd) - - # 3) optional cleanup, ask the user if they'd like to delete tiny.txt - dec = input(f"Delete the temporary file {tiny_file}? [y/N] ") - if dec.lower() == "y": - os.remove(tiny_file) - print(f"Deleted {tiny_file}") - - print(f"Trained tokenizer is in {prefix}.model") - print("Done.") - - -def process_shard(args, vocab_size): - shard_id, shard = args - tokenizer_model = get_tokenizer_model_path(vocab_size) - enc = Tokenizer(tokenizer_model) - with open(shard, "r") as f: - data = json.load(f) - all_tokens = [] - for example in tqdm(data, position=shard_id): - text = example["story"] - text = text.strip() # get rid of leading/trailing whitespace - tokens = enc.encode(text, bos=True, eos=False) # encode the text, use BOS - all_tokens.extend(tokens) - # convert to uint16 nparray - all_tokens = np.array(all_tokens, dtype=np.uint16) - # calculate the output filename - if vocab_size == 0: - # if we're using Llama 2, just save the tokenized file in the same dir - tokenized_filename = shard.replace(".json", ".bin") - else: - # save .bin files into a new tok{N} directory - bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") - shard_basename = os.path.basename(shard) - bin_basename = shard_basename.replace(".json", ".bin") - tokenized_filename = os.path.join(bin_dir, bin_basename) - # write the bytes - with open(tokenized_filename, "wb") as f: - f.write(all_tokens.tobytes()) - # calculate the average sequence length (they are separated by BOS=1) - avg_seq_len = all_tokens.size / ((all_tokens == 1).sum()) - print(f"Saved {tokenized_filename}, average seqlen: {avg_seq_len:.2f}") - - -def pretokenize(vocab_size): - # iterate the shards and tokenize all of them one by one - data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") - shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.json"))) - if vocab_size > 0: - # .bin files will be saved into tok{N} directory, create it once here - bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") - os.makedirs(bin_dir, exist_ok=True) - - # process all the shards in a process pool - fun = partial(process_shard, vocab_size=vocab_size) - with ProcessPoolExecutor() as executor: - executor.map(fun, enumerate(shard_filenames)) - print("Done.") - - -class PretokDataset(torch.utils.data.IterableDataset): - """Loads pretokenized examples from disk and yields them as PyTorch tensors.""" - - def __init__(self, split, max_seq_len, vocab_size, vocab_source): - super().__init__() - self.split = split - self.max_seq_len = max_seq_len - self.vocab_size = vocab_size - self.vocab_source = vocab_source - - def __iter__(self): - # get worker info within a DataLoader - worker_info = torch.utils.data.get_worker_info() - worker_id = worker_info.id if worker_info else 0 - # get DDP rank info - rank = dist.get_rank() if dist.is_initialized() else 0 - # combine the worker_id and worker_rank to create a unique seed for rng - seed = 42 + worker_id + 1337 * rank - rng = random.Random(seed) - print(f"Created a PretokDataset with rng seed {seed}") - if self.vocab_source == "llama2": - # the .bin files are right along the .json files - bin_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") - shard_filenames = sorted(glob.glob(os.path.join(bin_dir, "*.bin"))) - elif self.vocab_source == "custom": - # the .bin files are in tok{N} directory - bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{self.vocab_size}") - shard_filenames = sorted(glob.glob(os.path.join(bin_dir, 
"*.bin"))) - # train/test split. let's use only shard 0 for test split, rest train - shard_filenames = shard_filenames[1:] if self.split == "train" else shard_filenames[:1] - while True: - rng.shuffle(shard_filenames) - for shard in shard_filenames: - # open the dataset for reading but keep it on disk with memmap - m = np.memmap(shard, dtype=np.uint16, mode="r") - num_batches = len(m) // self.max_seq_len - num_batches -= 1 # drop the last partial batch - assert num_batches > 0, "this shard is way too small? investigate." - ixs = list(range(num_batches)) - rng.shuffle(ixs) - for ix in ixs: - start = ix * self.max_seq_len - end = start + self.max_seq_len + 1 - # calling .astype will copy the data into a new numpy array, now in RAM - chunk = torch.from_numpy((m[start:end]).astype(np.int64)) - x = chunk[:-1] - y = chunk[1:] - yield x, y - -# ----------------------------------------------------------------------------- -# public interface functions - -def get_tokenizer_model_path(vocab_size): - """ - Returns path to the sentencepiece tokenizer model for a given vocab size - vocab_size = 0 designates the default Llama 2 tokenizer, in that case - None is returned. - """ - if vocab_size == 0: - return None - else: - return os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}.model") - -class Task: - - @staticmethod - def iter_batches(batch_size, device, num_workers=0, **dataset_kwargs): - ds = PretokDataset(**dataset_kwargs) - dl = torch.utils.data.DataLoader( - ds, batch_size=batch_size, pin_memory=True, num_workers=num_workers - ) - for x, y in dl: - x = x.to(device, non_blocking=True) - y = y.to(device, non_blocking=True) - yield x, y - -# ----------------------------------------------------------------------------- -# CLI for constructing the dataset - -if __name__ == "__main__": - """ - These stages are designed to be run in order. - - To tokenize data with the Llama 2 tokenizer: - python tinystories.py download - python tinystories.py pretokenize - - To tokenize data with a custom tokenizer we train ourselves with sentencepiece, e.g.: - python tinystories.py download - python tinystories.py train_vocab --vocab_size=2048 - python tinystories.py pretokenize --vocab_size=2048 - """ - parser = argparse.ArgumentParser() - parser.add_argument("stage", type=str, choices=["download", "pretokenize", "train_vocab"]) - parser.add_argument("--vocab_size", type=int, default=0, help="pretokenization vocab size. 0 = use Llama 2 tokenizer.") - args = parser.parse_args() - - # depending on the stage call the appropriate function - if args.stage == "download": - download() - elif args.stage == "train_vocab": - train_vocab(vocab_size=args.vocab_size) - elif args.stage == "pretokenize": - pretokenize(vocab_size=args.vocab_size) - else: - raise ValueError(f"Unknown stage {args.stage}") diff --git a/tokenizer.py b/tokenizer.py deleted file mode 100644 index f3c0cc3..0000000 --- a/tokenizer.py +++ /dev/null @@ -1,78 +0,0 @@ -# Taken from llama code and lightly modified -# Copyright (c) Meta Platforms, Inc. and affiliates. -# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement. 
-
-import os
-import struct
-import argparse
-from typing import List
-
-from sentencepiece import SentencePieceProcessor
-
-TOKENIZER_MODEL = "tokenizer.model" # the llama sentencepiece tokenizer model
-
-class Tokenizer:
-    def __init__(self, tokenizer_model=None):
-        model_path = tokenizer_model if tokenizer_model else TOKENIZER_MODEL
-        assert os.path.isfile(model_path), model_path
-        self.sp_model = SentencePieceProcessor(model_file=model_path)
-        self.model_path = model_path
-
-        # BOS / EOS token IDs
-        self.n_words: int = self.sp_model.vocab_size()
-        self.bos_id: int = self.sp_model.bos_id()
-        self.eos_id: int = self.sp_model.eos_id()
-        self.pad_id: int = self.sp_model.pad_id()
-        #print(f"#words: {self.n_words} - BOS ID: {self.bos_id} - EOS ID: {self.eos_id}")
-        assert self.sp_model.vocab_size() == self.sp_model.get_piece_size()
-
-    def encode(self, s: str, bos: bool, eos: bool) -> List[int]:
-        assert type(s) is str
-        t = self.sp_model.encode(s)
-        if bos:
-            t = [self.bos_id] + t
-        if eos:
-            t = t + [self.eos_id]
-        return t
-
-    def decode(self, t: List[int]) -> str:
-        return self.sp_model.decode(t)
-
-    def export(self):
-
-        # get all the tokens (postprocessed) and their scores as floats
-        tokens, scores = [], []
-        for i in range(self.n_words):
-
-            # decode the token and light postprocessing
-            t = self.sp_model.id_to_piece(i)
-            s = self.sp_model.get_score(i)
-            if i == self.bos_id:
-                t = '\n<s>\n'
-            elif i == self.eos_id:
-                t = '\n</s>\n'
-            t = t.replace('▁', ' ') # sentencepiece uses this character as whitespace
-            b = t.encode('utf-8') # bytes of this token, utf-8 encoded
-
-            tokens.append(b)
-            scores.append(s)
-
-        # record the max token length
-        max_token_length = max(len(t) for t in tokens)
-
-        # write to a binary file
-        # the tokenizer.bin file is the same as .model file, but .bin
-        tokenizer_bin = self.model_path.replace('.model', '.bin')
-        with open(tokenizer_bin, 'wb') as f:
-            f.write(struct.pack("I", max_token_length))
-            for bytes, score in zip(tokens, scores):
-                f.write(struct.pack("fI", score, len(bytes)))
-                f.write(bytes)
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser()
-    parser.add_argument("-t", "--tokenizer-model", type=str, help="optional path to custom tokenizer ")
-    args = parser.parse_args()
-
-    t = Tokenizer(args.tokenizer_model)
-    t.export()
diff --git a/train.py b/train.py
deleted file mode 100644
index b1972dc..0000000
--- a/train.py
+++ /dev/null
@@ -1,342 +0,0 @@
-"""
-This training script can be run both on a single gpu in debug mode,
-and also in a larger training run with distributed data parallel (ddp).
-
-To run on a single GPU small debug run, example:
-$ python train.py --compile=False --eval_iters=10 --batch_size=8
-
-To run with DDP on 4 gpus on 1 node, example:
-$ torchrun --standalone --nproc_per_node=4 train.py
-
-To run with DDP on 4 gpus across 2 nodes, example:
-- Run on the first (master) node with example IP 123.456.123.456:
-$ torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=123.456.123.456 --master_port=1234 train.py
-- Run on the worker node:
-$ torchrun --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr=123.456.123.456 --master_port=1234 train.py
-(If your cluster does not have Infiniband interconnect prepend NCCL_IB_DISABLE=1)
-"""
-
-import math
-import os
-import time
-from contextlib import nullcontext
-from datetime import datetime
-from functools import partial
-
-import torch
-from model import Transformer, ModelArgs
-from torch.distributed import destroy_process_group, init_process_group
-from torch.nn.parallel import DistributedDataParallel as DDP
-
-from tinystories import Task
-
-# -----------------------------------------------------------------------------
-# I/O
-out_dir = "out"
-eval_interval = 2000
-log_interval = 1
-eval_iters = 100
-eval_only = False  # if True, script exits right after the first eval
-always_save_checkpoint = False  # if True, always save a checkpoint after each eval
-init_from = "scratch"  # 'scratch' or 'resume'
-# wandb logging
-wandb_log = False  # disabled by default
-wandb_project = "llamac"
-wandb_run_name = "run" + datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
-# data
-batch_size = 128  # if gradient_accumulation_steps > 1, this is the micro-batch size
-max_seq_len = 256
-vocab_source = "llama2" # llama2|custom; use Llama 2 vocab from Meta, or custom trained
-vocab_size = 32000 # the Llama 2 tokenizer has 32K tokens
-# model
-dim = 288
-n_layers = 6
-n_heads = 6
-n_kv_heads = 6
-multiple_of = 32
-dropout = 0.0
-# adamw optimizer
-gradient_accumulation_steps = 4  # used to simulate larger batch sizes
-learning_rate = 5e-4  # max learning rate
-max_iters = 100000  # total number of training iterations
-weight_decay = 1e-1
-beta1 = 0.9
-beta2 = 0.95
-grad_clip = 1.0  # clip gradients at this value, or disable if == 0.0
-# learning rate decay settings
-decay_lr = True  # whether to decay the learning rate
-warmup_iters = 1000  # how many steps to warm up for
-# system
-device = "cuda"  # examples: 'cpu', 'cuda', 'cuda:0', 'cuda:1' etc., or try 'mps' on macbooks
-dtype = "bfloat16"  # float32|bfloat16|float16
-compile = True  # use PyTorch 2.0 to compile the model to be faster
-# -----------------------------------------------------------------------------
-config_keys = [
-    k
-    for k, v in globals().items()
-    if not k.startswith("_") and isinstance(v, (int, float, bool, str))
-]
-exec(open("configurator.py").read())  # overrides from command line or config file
-config = {k: globals()[k] for k in config_keys}  # will be useful for logging
-# -----------------------------------------------------------------------------
-
-# fixing some hyperparams to sensible defaults
-lr_decay_iters = max_iters  # should be ~= max_iters per Chinchilla
-min_lr = 0.0  # minimum learning rate, should be ~= learning_rate/10 per Chinchilla
-
-# validating checks
-assert vocab_source in ["llama2", "custom"]
-assert vocab_source == "custom" or vocab_size == 32000, "The vocab from Meta has 32K tokens"
-
-# various inits, derived attributes, I/O setup
-ddp = int(os.environ.get("RANK", -1)) != -1  # is this a ddp run?
-if ddp: - init_process_group(backend="nccl") - ddp_rank = int(os.environ["RANK"]) - ddp_local_rank = int(os.environ["LOCAL_RANK"]) - ddp_world_size = int(os.environ["WORLD_SIZE"]) - device = f"cuda:{ddp_local_rank}" - torch.cuda.set_device(device) - master_process = ddp_rank == 0 # this process will do logging, checkpointing etc. - seed_offset = ddp_rank # each process gets a different seed - # world_size number of processes will be training simultaneously, so we can scale - # down the desired gradient accumulation iterations per process proportionally - assert gradient_accumulation_steps % ddp_world_size == 0 - gradient_accumulation_steps //= ddp_world_size -else: - # if not ddp, we are running on a single gpu, and one process - master_process = True - seed_offset = 0 - ddp_world_size = 1 -tokens_per_iter = gradient_accumulation_steps * ddp_world_size * batch_size * max_seq_len -if master_process: - print(f"tokens per iteration will be: {tokens_per_iter:,}") - print(f"breaks down as: {gradient_accumulation_steps} grad accum steps * {ddp_world_size} processes * {batch_size} batch size * {max_seq_len} max seq len") - -if master_process: - os.makedirs(out_dir, exist_ok=True) -torch.manual_seed(1337 + seed_offset) -torch.backends.cuda.matmul.allow_tf32 = True # allow tf32 on matmul -torch.backends.cudnn.allow_tf32 = True # allow tf32 on cudnn -device_type = "cuda" if "cuda" in device else "cpu" # for later use in torch.autocast -# note: float16 data type will automatically use a GradScaler -ptdtype = {"float32": torch.float32, "bfloat16": torch.bfloat16, "float16": torch.float16}[dtype] -ctx = ( - nullcontext() - if device_type == "cpu" - else torch.amp.autocast(device_type=device_type, dtype=ptdtype) -) - -# task-specific setup -iter_batches = partial( - Task.iter_batches, - batch_size=batch_size, - max_seq_len=max_seq_len, - vocab_size=vocab_size, - vocab_source=vocab_source, - device=device, - num_workers=0, -) - -# init these up here, can override if init_from='resume' (i.e. from a checkpoint) -iter_num = 0 -best_val_loss = 1e9 - -# model init -model_args = dict( - dim=dim, - n_layers=n_layers, - n_heads=n_heads, - n_kv_heads=n_kv_heads, - vocab_size=vocab_size, - multiple_of=multiple_of, - max_seq_len=max_seq_len, - dropout=dropout, -) # start with model_args from command line -if init_from == "scratch": - # init a new model from scratch - print("Initializing a new model from scratch") - gptconf = ModelArgs(**model_args) - model = Transformer(gptconf) -elif init_from == "resume": - print(f"Resuming training from {out_dir}") - # resume training from a checkpoint. - ckpt_path = os.path.join(out_dir, "ckpt.pt") - checkpoint = torch.load(ckpt_path, map_location=device) - checkpoint_model_args = checkpoint["model_args"] - # force these config attributes to be equal otherwise we can't even resume training - # the rest of the attributes (e.g. dropout) can stay as desired from command line - for k in ["dim", "n_layers", "n_heads", "n_kv_heads", "vocab_size", "multiple_of", "max_seq_len"]: - model_args[k] = checkpoint_model_args[k] - # create the model - gptconf = ModelArgs(**model_args) - model = Transformer(gptconf) - state_dict = checkpoint["model"] - # fix the keys of the state dictionary :( - # honestly no idea how checkpoints sometimes get this prefix, have to debug more - unwanted_prefix = "_orig_mod." 
- for k, v in list(state_dict.items()): - if k.startswith(unwanted_prefix): - state_dict[k[len(unwanted_prefix) :]] = state_dict.pop(k) - model.load_state_dict(state_dict) - iter_num = checkpoint["iter_num"] - best_val_loss = checkpoint["best_val_loss"] -model.to(device) - -# initialize a GradScaler. If enabled=False scaler is a no-op -scaler = torch.cuda.amp.GradScaler(enabled=(dtype == "float16")) - -# optimizer -optimizer = model.configure_optimizers(weight_decay, learning_rate, (beta1, beta2), device_type) -if init_from == "resume" and "optimizer" in checkpoint: - optimizer.load_state_dict(checkpoint["optimizer"]) -checkpoint = None # free up memory - -# compile the model -if compile: - print("compiling the model... (takes a ~minute)") - unoptimized_model = model - model = torch.compile(model) # requires PyTorch 2.0 - -# wrap model into DDP container -if ddp: - # Ignore the `freqs_cis` buffer so that DDP does not broadcast it at - # construction time since NCCL does not support `ComplexFloat` - prefix = "_orig_mod." if compile else "" - model._ddp_params_and_buffers_to_ignore = {prefix + "freqs_cis"} - model = DDP(model, device_ids=[ddp_local_rank]) - -# helps estimate an arbitrarily accurate loss over either split using many batches -@torch.no_grad() -def estimate_loss(): - out = {} - model.eval() - for split in ["train", "val"]: - batch_iter = iter_batches(split=split) - losses = torch.zeros(eval_iters) # keep on CPU - for k in range(eval_iters): - X, Y = next(batch_iter) - with ctx: - logits = model(X, Y) - loss = raw_model.last_loss - losses[k] = loss.item() - out[split] = losses.mean() - model.train() - return out - -# learning rate decay scheduler (cosine with warmup) -def get_lr(it): - # 1) linear warmup for warmup_iters steps - if it < warmup_iters: - return learning_rate * it / warmup_iters - # 2) if it > lr_decay_iters, return min learning rate - if it > lr_decay_iters: - return min_lr - # 3) in between, use cosine decay down to min learning rate - decay_ratio = (it - warmup_iters) / (lr_decay_iters - warmup_iters) - assert 0 <= decay_ratio <= 1 - coeff = 0.5 * (1.0 + math.cos(math.pi * decay_ratio)) # coeff ranges 0..1 - return min_lr + coeff * (learning_rate - min_lr) - -# logging -if wandb_log and master_process: - import wandb - wandb.init(project=wandb_project, name=wandb_run_name, config=config) - -# training loop -train_batch_iter = iter_batches(split="train") -X, Y = next(train_batch_iter) # fetch the very first batch -t0 = time.time() -local_iter_num = 0 # number of iterations in the lifetime of this process -raw_model = model.module if ddp else model # unwrap DDP container if needed -running_mfu = -1.0 -while True: - # determine and set the learning rate for this iteration - lr = get_lr(iter_num) if decay_lr else learning_rate - for param_group in optimizer.param_groups: - param_group["lr"] = lr - - # evaluate the loss on train/val sets and write checkpoints - if iter_num % eval_interval == 0 and master_process: - losses = estimate_loss() - print(f"step {iter_num}: train loss {losses['train']:.4f}, val loss {losses['val']:.4f}") - if wandb_log: - try: - wandb.log( - { - "iter": iter_num, - "tokens": iter_num * tokens_per_iter, - "loss/train": losses["train"], - "loss/val": losses["val"], - "lr": lr, - "mfu": running_mfu * 100, # convert to percentage - } - ) - except Exception as e: - print(f"logging to wandb failed: {e}") - if losses["val"] < best_val_loss or always_save_checkpoint: - best_val_loss = losses["val"] - if iter_num > 0: - checkpoint = { - "model": 
raw_model.state_dict(), - "optimizer": optimizer.state_dict(), - "model_args": model_args, - "iter_num": iter_num, - "best_val_loss": best_val_loss, - "config": config, - } - print(f"saving checkpoint to {out_dir}") - torch.save(checkpoint, os.path.join(out_dir, "ckpt.pt")) - raw_model.export(os.path.join(out_dir, "model.bin")) - if iter_num == 0 and eval_only: - break - - # forward backward update, with optional gradient accumulation to simulate larger batch size - # and using the GradScaler if data type is float16 - for micro_step in range(gradient_accumulation_steps): - if ddp: - # in DDP training we only need to sync gradients at the last micro step. - # the official way to do this is with model.no_sync() context manager, but - # I really dislike that this bloats the code and forces us to repeat code - # looking at the source of that context manager, it just toggles this variable - model.require_backward_grad_sync = micro_step == gradient_accumulation_steps - 1 - with ctx: - logits = model(X, Y) - loss = raw_model.last_loss - loss = loss / gradient_accumulation_steps - # immediately async prefetch next batch while model is doing the forward pass on the GPU - X, Y = next(train_batch_iter) - # backward pass, with gradient scaling if training in fp16 - scaler.scale(loss).backward() - # clip the gradient - if grad_clip != 0.0: - scaler.unscale_(optimizer) - torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip) - # step the optimizer and scaler if training in fp16 - scaler.step(optimizer) - scaler.update() - # flush the gradients as soon as we can, no need for this memory anymore - optimizer.zero_grad(set_to_none=True) - - # timing and logging - t1 = time.time() - dt = t1 - t0 - t0 = t1 - if iter_num % log_interval == 0 and master_process: - # get loss as float, scale up due to the divide above. note: this is a CPU-GPU sync point - lossf = loss.item() * gradient_accumulation_steps - if local_iter_num >= 5: # let the training loop settle a bit - mfu = raw_model.estimate_mfu(batch_size * gradient_accumulation_steps, dt) - running_mfu = mfu if running_mfu == -1.0 else 0.9 * running_mfu + 0.1 * mfu - print( - f"{iter_num} | loss {lossf:.4f} | lr {lr:e} | {dt*1000:.2f}ms | mfu {running_mfu*100:.2f}%" - ) - iter_num += 1 - local_iter_num += 1 - - # termination conditions - if iter_num > max_iters: - break - -if ddp: - destroy_process_group() diff --git a/train_vocab.sh b/train_vocab.sh deleted file mode 100755 index 7803af8..0000000 --- a/train_vocab.sh +++ /dev/null @@ -1,126 +0,0 @@ -#!/bin/bash - -# Trains a sentencepiece tokenizer model on a bunch of given data, my best -# effort attempt to replicate how Meta trained their Llama 2 tokenizer. - -# usage: $ train_vocab.sh -# example: -# ./train_vocab.sh tiny.txt tokenizer_tiny 1024 -# requirements: -# install https://github.com/google/sentencepiece - -# check if the correct number of arguments are provided -if [ $# -ne 3 ]; then - echo "Usage: $0 " - exit 1 -fi - -# assign command-line arguments to variables -input=$1 -model_prefix=$2 -vocab_size=$3 - -# check if input file exists -if [ ! -f "$input" ]; then - echo "Usage: $0 " - echo "input '$input' not found." - exit 1 -fi - -# check if vocab_size is a positive integer -if ! [[ "$vocab_size" =~ ^[0-9]+$ ]] || [ "$vocab_size" -lt 1 ]; then - echo "Usage: $0 " - echo "vocab_size size must be a positive integer." 
- exit 1 -fi - -# Print the processed inputs -echo "Input: $input" -echo "Model Prefix: $model_prefix" -echo "Vocabulary Size: $vocab_size" - -# train a sentencepiece tokenizer model -# Llama 2 config can be printed as follows: - -# import sentencepiece.sentencepiece_model_pb2 -# mp = sentencepiece.sentencepiece_model_pb2.ModelProto() -# mp.ParseFromString(open("tokenizer.model", "rb").read()) -# print(mp.trainer_spec) -# print(mp.normalizer_spec) - -# this gives: - -# trainer_spec { -# input: "/large_experiments/theorem/datasets/MERGED/all.test1.merged" -# model_prefix: "spm_model_32k_200M_charcov099995_allowWSO__v2" -# model_type: BPE -# vocab_size: 32000 -# self_test_sample_size: 0 -# input_format: "text" -# character_coverage: 0.9999499917030334 -# input_sentence_size: 200000000 -# seed_sentencepiece_size: 1000000 -# shrinking_factor: 0.75 -# num_threads: 80 -# num_sub_iterations: 2 -# max_sentence_length: 4192 -# shuffle_input_sentence: true -# max_sentencepiece_length: 16 -# split_by_unicode_script: true -# split_by_whitespace: true -# split_by_number: true -# treat_whitespace_as_suffix: false -# split_digits: true -# allow_whitespace_only_pieces: true -# vocabulary_output_piece_score: true -# hard_vocab_limit: true -# use_all_vocab: false -# byte_fallback: true -# required_chars: "" -# unk_id: 0 -# bos_id: 1 -# eos_id: 2 -# pad_id: -1 -# unk_surface: " \342\201\207 " -# unk_piece: "" -# bos_piece: "" -# eos_piece: "" -# pad_piece: "" -# train_extremely_large_corpus: false -# enable_differential_privacy: false -# differential_privacy_noise_level: 0.0 -# differential_privacy_clipping_threshold: 0 -# } -# normalizer_spec { -# name: "identity" -# precompiled_charsmap: "" -# add_dummy_prefix: true -# remove_extra_whitespaces: false -# normalization_rule_tsv: "" -# } - -# let's now use spm_train to train this exact model -# options docs: https://github.com/google/sentencepiece/blob/master/doc/options.md - -# we'll depart on a few settings: -# character_coverage -> 1.0 - -# other important notes: -# --split-digits = true, per the paper -# --allow_whitespace_only_pieces is true, default in spm is false -# --byte_fallback is true, default in spm is false -# --normalization_rule_name is identity, default in spm is nmt_nfkc - -spm_train --input="$input" \ - --model_prefix="$model_prefix" \ - --model_type=bpe \ - --vocab_size="$vocab_size" \ - --self_test_sample_size=0 \ - --input_format="text" \ - --character_coverage=1.0 \ - --num_threads="$(nproc)" \ - --split_digits=true \ - --allow_whitespace_only_pieces=true \ - --byte_fallback=true \ - --unk_surface=" \342\201\207 " \ - --normalization_rule_name=identity \ From bc7cb7d0e87ac7cbaa67cd51cdcc52cbfcacce32 Mon Sep 17 00:00:00 2001 From: YiMing Han Date: Fri, 18 Aug 2023 15:13:59 -0400 Subject: [PATCH 75/79] Revert "only dart" This reverts commit 01df3731d6747659ad4d8cf7d9f4bcb27eb6d5f0. 
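For reference, the spm_train invocation in train_vocab.sh above (deleted just before this patch and re-added by this revert) maps one-to-one onto sentencepiece's Python trainer; the keyword arguments mirror the CLI flags. A minimal sketch, with stand-in values for the script's three positional arguments (input text file, model prefix, vocab size):

    import os
    import sentencepiece as spm

    # stand-in values for the script's three positional arguments
    tiny_file, prefix, vocab_size = "tiny.txt", "tokenizer_tiny", 1024

    spm.SentencePieceTrainer.train(
        input=tiny_file,                # raw text, one story/sentence per line
        model_prefix=prefix,            # writes {prefix}.model and {prefix}.vocab
        model_type="bpe",
        vocab_size=vocab_size,
        self_test_sample_size=0,
        input_format="text",
        character_coverage=1.0,
        num_threads=os.cpu_count(),
        split_digits=True,
        allow_whitespace_only_pieces=True,
        byte_fallback=True,
        unk_surface=r" \342\201\207 ",
        normalization_rule_name="identity",
    )

This yields the same {prefix}.model / {prefix}.vocab pair that the shell script produces.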
--- .github/workflows/build.yml | 193 ++++++++++++++++++ configurator.py | 47 +++++ export_meta_llama_bin.py | 112 +++++++++++ export_meta_llama_hf_bin.py | 113 +++++++++++ model.py | 392 ++++++++++++++++++++++++++++++++++++ requirements.txt | 7 + run.ipynb | 130 ++++++++++++ sample.py | 79 ++++++++ save_torchscript.py | 66 ++++++ tinystories.py | 274 +++++++++++++++++++++++++ tokenizer.py | 78 +++++++ train.py | 342 +++++++++++++++++++++++++++++++ train_vocab.sh | 126 ++++++++++++ 13 files changed, 1959 insertions(+) create mode 100644 .github/workflows/build.yml create mode 100644 configurator.py create mode 100644 export_meta_llama_bin.py create mode 100644 export_meta_llama_hf_bin.py create mode 100644 model.py create mode 100644 requirements.txt create mode 100644 run.ipynb create mode 100644 sample.py create mode 100755 save_torchscript.py create mode 100644 tinystories.py create mode 100644 tokenizer.py create mode 100644 train.py create mode 100755 train_vocab.sh diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml new file mode 100644 index 0000000..7e6474d --- /dev/null +++ b/.github/workflows/build.yml @@ -0,0 +1,193 @@ +name: Continuous Integration + +on: + push: + branches: + - master + paths: ['.github/workflows/**', '**/Makefile', '**/*.c', '**/*.h', '**/*.py'] + pull_request: + types: [opened, synchronize, reopened] + paths: ['**/Makefile', '**/*.c', '**/*.h', '**/*.py'] + # for manual triggering + workflow_dispatch: + +env: + BRANCH_NAME: ${{ github.head_ref || github.ref_name }} + +jobs: + # check basic builds to avoid breaking changes + ubuntu-focal-make: + runs-on: ubuntu-latest + + steps: + - name: Clone + id: checkout + uses: actions/checkout@v3 + + - name: Dependencies + id: depends + run: | + sudo apt-get update + sudo apt-get install build-essential -y + + - name: Set up Python 3.10 + uses: actions/setup-python@v3 + with: + python-version: "3.10" + + - name: Pip setup + run: | + python -m pip install --upgrade pip + if [ -f requirements.txt ]; then pip install -r requirements.txt; fi + + - name: Build + id: make_build + run: | + make + + - name: Build runfast + id: make_build_runfast + run: | + make runfast + + - name: Test with pytest + run: | + pytest + + macOS-latest-make: + runs-on: macos-latest + + steps: + - name: Clone + id: checkout + uses: actions/checkout@v3 + + - name: Dependencies + id: depends + continue-on-error: true + run: | + brew update + + - name: Set up Python 3.10 + uses: actions/setup-python@v3 + with: + python-version: "3.10" + + - name: Pip setup + run: | + python -m pip install --upgrade pip + if [ -f requirements.txt ]; then pip install -r requirements.txt; fi + + - name: Build clang + id: make_build_clang + run: | + make run CC=clang + + - name: Build + id: make_build + run: | + make + + - name: Build runfast + id: make_build_runfast + run: | + make runfast + + - name: Test with pytest + run: pytest + + + + + windows-latest-make: + runs-on: windows-latest + + strategy: + fail-fast: false #necessary, otherwise the matrix breaks + matrix: + arch: + - amd64 + - amd64_x86 + - amd64_arm64 + + steps: + - name: Clone + id: checkout + uses: actions/checkout@v3 + + - name: Setup MSBuild + uses: microsoft/setup-msbuild@v1 + + - name: Setup MSVC ${{ matrix.arch }} + uses: ilammy/msvc-dev-cmd@v1 + with: + arch: ${{ matrix.arch }} + + - name: Set up Python 3.10 + if: matrix.arch != 'amd64_arm64' + uses: actions/setup-python@v3 + with: + python-version: "3.10" + + - name: Pip setup + if: matrix.arch != 'amd64_arm64' + run: | + python 
-m pip install --upgrade pip + if (Test-Path requirements.txt) { + pip install -r requirements.txt + } + + - name: Build ${{ matrix.arch }} + id: build_msvc + run: | + .\build_msvc.bat + + #cross-comiled, cannot be run on host + - name: Test with pytest + if: matrix.arch != 'amd64_arm64' + run: pytest + + windows-latest-mingw: + runs-on: windows-latest + + defaults: + run: + shell: msys2 {0} + + strategy: + matrix: + include: + - { sys: mingw64, env: x86_64 } + + steps: + - name: Checkout + id: checkout + uses: actions/checkout@v3 + + - uses: msys2/setup-msys2@v2 + id: setup-msys2 + with: + msystem: ${{ matrix.sys }} + install: mingw-w64-${{matrix.env}}-gcc make + + - name: Build ${{ matrix.sys }} ${{ matrix.env }} + id: build_mingw + run: | + make win64 + + - name: Set up Python 3.10 + uses: actions/setup-python@v3 + with: + python-version: "3.10" + + - name: Pip setup + shell: powershell + run: | + python -m pip install --upgrade pip + if (Test-Path requirements.txt) { + pip install -r requirements.txt + } + + - name: Test with pytest + shell: powershell + run: pytest diff --git a/configurator.py b/configurator.py new file mode 100644 index 0000000..a8bba95 --- /dev/null +++ b/configurator.py @@ -0,0 +1,47 @@ +""" +Poor Man's Configurator. Probably a terrible idea. Example usage: +$ python train.py config/override_file.py --batch_size=32 +this will first run config/override_file.py, then override batch_size to 32 + +The code in this file will be run as follows from e.g. train.py: +>>> exec(open('configurator.py').read()) + +So it's not a Python module, it's just shuttling this code away from train.py +The code in this script then overrides the globals() + +I know people are not going to love this, I just really dislike configuration +complexity and having to prepend config. to every single variable. If someone +comes up with a better simple Python solution I am all ears. +""" + +import sys +from ast import literal_eval + +for arg in sys.argv[1:]: + if '=' not in arg: + # assume it's the name of a config file + assert not arg.startswith('--') + config_file = arg + print(f"Overriding config with {config_file}:") + with open(config_file) as f: + print(f.read()) + exec(open(config_file).read()) + else: + # assume it's a --key=value argument + assert arg.startswith('--') + key, val = arg.split('=') + key = key[2:] + if key in globals(): + try: + # attempt to eval it it (e.g. if bool, number, or etc) + attempt = literal_eval(val) + except (SyntaxError, ValueError): + # if that goes wrong, just use the string + attempt = val + # ensure the types match ok + assert type(attempt) == type(globals()[key]) + # cross fingers + print(f"Overriding: {key} = {attempt}") + globals()[key] = attempt + else: + raise ValueError(f"Unknown config key: {key}") diff --git a/export_meta_llama_bin.py b/export_meta_llama_bin.py new file mode 100644 index 0000000..4e42197 --- /dev/null +++ b/export_meta_llama_bin.py @@ -0,0 +1,112 @@ +""" +This script exports the Llama 2 weights in llama2c.bin format. 
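+The output file begins with a 7-int32 header (dim, hidden_dim, n_layers, n_heads,
+n_kv_heads, vocab_size, max_seq_len) followed by the tensors in fp32; vocab_size is
+written as a negative number to signal that the final classifier weights are also
+stored in the file.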
+""" +import os +import sys +import struct +from pathlib import Path +import json + +import torch + +from model import precompute_freqs_cis + + +def export(p, state_dict, filepath='model.bin'): + """export the model weights in fp32 into .bin file to be read from C""" + f = open(filepath, 'wb') + + def serialize(key): + print(f"writing {key}...") + t = state_dict[key].contiguous().view(-1).type(torch.float32).numpy() + f.write(memoryview(t)) + del state_dict[key] + + # first write out the header + hidden_dim = state_dict['layers.0.feed_forward.w1.weight'].shape[0] + p['vocab_size'] = 32000 + p['max_seq_len'] = 2048 + + n_kv_heads = p.get('n_kv_heads') or p['n_heads'] + header = struct.pack( + 'iiiiiii', + p['dim'], hidden_dim, p['n_layers'], p['n_heads'], + n_kv_heads, -p['vocab_size'], p['max_seq_len'] + ) + # NOTE ABOVE: -ve vocab_size is indicating that the classifier weights are present + # in the checkpoint and should be loaded. + f.write(header) + + # next write out the embedding weights + print("writing tok_embeddings...") + serialize('tok_embeddings.weight') + + # now all the layers + # attention weights + for i in range(p['n_layers']): serialize(f'layers.{i}.attention_norm.weight') + for i in range(p['n_layers']): serialize(f'layers.{i}.attention.wq.weight') + for i in range(p['n_layers']): serialize(f'layers.{i}.attention.wk.weight') + for i in range(p['n_layers']): serialize(f'layers.{i}.attention.wv.weight') + for i in range(p['n_layers']): serialize(f'layers.{i}.attention.wo.weight') + # ffn weights + for i in range(p['n_layers']): serialize(f'layers.{i}.ffn_norm.weight') + for i in range(p['n_layers']): serialize(f'layers.{i}.feed_forward.w1.weight') + for i in range(p['n_layers']): serialize(f'layers.{i}.feed_forward.w2.weight') + for i in range(p['n_layers']): serialize(f'layers.{i}.feed_forward.w3.weight') + + # final rmsnorm + serialize('norm.weight') + # freqs_cos, freqs_sin + freqs_cos, freqs_sin = precompute_freqs_cis(p['dim'] // p['n_heads'], p['max_seq_len'] * 2) + state_dict['freqs_cos'] = freqs_cos[:p['max_seq_len']] + state_dict['freqs_sin'] = freqs_sin[:p['max_seq_len']] + serialize('freqs_cos') + serialize('freqs_sin') + + # finally write the output weights + serialize('output.weight') + + f.close() + print(f"wrote {filepath}") + + +def concat_weights(models): + state_dict = {} + for name in list(models[0]): + tensors = [model[name] for model in models] + if len(tensors) == 1 or len(tensors[0].shape) == 1: + state_dict[name] = tensors[0] + continue + is_axis_1 = ( + name.startswith('tok_embeddings.') + or name.endswith('.attention.wo.weight') + or name.endswith('.feed_forward.w2.weight') + ) + axis = 1 if is_axis_1 else 0 + state_dict[name] = torch.cat(tensors, dim=axis) + for model in models: + del model[name] + return state_dict + + +def load_and_export(model_path, output_path): + params_path = os.path.join(model_path, 'params.json') + with open(params_path) as f: + params = json.load(f) + print(params) + + model_paths = sorted(list(Path(model_path).glob('consolidated.*.pth'))) + models = [torch.load(p, map_location='cpu') for p in model_paths] + state_dict = concat_weights(models) + del models + export(params, state_dict, output_path) + + +if __name__ == '__main__': + if len(sys.argv) == 1: + print('[Llama model folder path] [output path]') + exit() + + model_path = sys.argv[1] + output_path = sys.argv[2] + load_and_export(model_path, output_path) diff --git a/export_meta_llama_hf_bin.py b/export_meta_llama_hf_bin.py new file mode 100644 index 0000000..e3a8c73 
--- /dev/null +++ b/export_meta_llama_hf_bin.py @@ -0,0 +1,113 @@ +""" +This script exports the Llama 2 weights in llama2c.bin format. +""" +import os +import sys +import struct +from pathlib import Path +import json + +import torch + +from model import precompute_freqs_cis + + +def export(p, state_dict, filepath='model.bin'): + """export the model weights in fp32 into .bin file to be read from C""" + f = open(filepath, 'wb') + + def serialize(key): + print(f"writing {key}...") + t = state_dict[key].contiguous().view(-1).type(torch.float32).numpy() + f.write(memoryview(t)) + del state_dict[key] + + # first write out the header + hidden_dim = state_dict['model.layers.0.mlp.gate_proj.weight'].shape[0] + p['vocab_size'] = 32000 + p['max_seq_len'] = 2048 + + n_kv_heads = p.get('n_kv_heads') or p['n_heads'] + header = struct.pack( + 'iiiiiii', + p['dim'], hidden_dim, p['n_layers'], p['n_heads'], + n_kv_heads, -p['vocab_size'], p['max_seq_len'] + ) + # NOTE ABOVE: -ve vocab_size is indicating that the classifier weights are present + # in the checkpoint and should be loaded. + f.write(header) + + # next write out the embedding weights + print("writing tok_embeddings...") + serialize('model.embed_tokens.weight') + + # now all the layers + # attention weights + for i in range(p['n_layers']): serialize(f'model.layers.{i}.input_layernorm.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.q_proj.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.k_proj.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.v_proj.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.self_attn.o_proj.weight') + # ffn weights + for i in range(p['n_layers']): serialize(f'model.layers.{i}.post_attention_layernorm.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.mlp.gate_proj.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.mlp.down_proj.weight') + for i in range(p['n_layers']): serialize(f'model.layers.{i}.mlp.up_proj.weight') + + # final rmsnorm + serialize('model.norm.weight') + # freqs_cos, freqs_sin + freqs_cos, freqs_sin = precompute_freqs_cis(p['dim'] // p['n_heads'], p['max_seq_len'] * 2) + state_dict['freqs_cos'] = freqs_cos[:p['max_seq_len']] + state_dict['freqs_sin'] = freqs_sin[:p['max_seq_len']] + # check if this requires addtional conversion + serialize('freqs_cos') + serialize('freqs_sin') + + # finally write the output weights + serialize('lm_head.weight') + + f.close() + print(f"wrote {filepath}") + + +def concat_weights(models): + state_dict = {} + for name in list(models[0]): + tensors = [model[name] for model in models] + if len(tensors) == 1 or len(tensors[0].shape) == 1: + state_dict[name] = tensors[0] + continue + is_axis_1 = ( + name.startswith('model.embed_tokens.weight') + or name.endswith('.self_attn.o_proj.weight') + or name.endswith('.mlp.down_proj.weight') + ) + axis = 1 if is_axis_1 else 0 + state_dict[name] = torch.cat(tensors, dim=axis) + for model in models: + del model[name] + return state_dict + + +def load_and_export(model_path, output_path): + params_path = os.path.join(model_path, 'params.json') + with open(params_path) as f: + params = json.load(f) + print(params) + + model_paths = sorted(list(Path(model_path).glob('consolidated.*.pth'))) + models = [torch.load(p, map_location='cpu') for p in model_paths] + state_dict = concat_weights(models) + del models + export(params, state_dict, output_path) + + +if __name__ == '__main__': + if 
len(sys.argv) == 1: + print('[Llama model folder path] [output path]') + exit() + + model_path = sys.argv[1] + output_path = sys.argv[2] + load_and_export(model_path, output_path) diff --git a/model.py b/model.py new file mode 100644 index 0000000..c8c82a9 --- /dev/null +++ b/model.py @@ -0,0 +1,392 @@ +import math +import struct +import inspect +from dataclasses import dataclass +from typing import Any, Optional, Tuple + +import numpy as np +import torch +import torch.nn.functional as F +from torch import nn + +@dataclass +class ModelArgs: + # default hyperparameters for the Llama 7B model + dim: int = 4096 + n_layers: int = 32 + n_heads: int = 32 + n_kv_heads: Optional[int] = None + vocab_size: int = 32000 + multiple_of: int = 256 # MLP hidden layer size will be multiple of + norm_eps: float = 1e-5 + max_seq_len: int = 2048 + dropout: float = 0.0 + + +class RMSNorm(torch.nn.Module): + def __init__(self, dim: int, eps: float): + super().__init__() + self.eps = eps + self.weight = nn.Parameter(torch.ones(dim)) + + def _norm(self, x): + return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps) + + def forward(self, x): + output = self._norm(x.float()).type_as(x) + return output * self.weight + + +def precompute_freqs_cis(dim: int, end: int, theta: float = 10000.0): + freqs = 1.0 / (theta ** (torch.arange(0, dim, 2)[: (dim // 2)].float() / dim)) + t = torch.arange(end, device=freqs.device) # type: ignore + freqs = torch.outer(t, freqs).float() # type: ignore + freqs_cos = torch.cos(freqs) # real part + freqs_sin = torch.sin(freqs) # imaginary part + return freqs_cos, freqs_sin + +def reshape_for_broadcast(freqs_cis: torch.Tensor, x: torch.Tensor): + ndim = x.ndim + assert 0 <= 1 < ndim + assert freqs_cis.shape == (x.shape[1], x.shape[-1]) + shape = [d if i == 1 or i == ndim - 1 else 1 for i, d in enumerate(x.shape)] + return freqs_cis.view(shape) + +def apply_rotary_emb( + xq: torch.Tensor, + xk: torch.Tensor, + freqs_cos: torch.Tensor, + freqs_sin: torch.Tensor +) -> Tuple[torch.Tensor, torch.Tensor]: + + # reshape xq and xk to match the complex representation + xq_r, xq_i = xq.float().reshape(xq.shape[:-1] + (-1, 2)).unbind(-1) + xk_r, xk_i = xk.float().reshape(xk.shape[:-1] + (-1, 2)).unbind(-1) + + # reshape freqs_cos and freqs_sin for broadcasting + freqs_cos = reshape_for_broadcast(freqs_cos, xq_r) + freqs_sin = reshape_for_broadcast(freqs_sin, xq_r) + + # apply rotation using real numbers + xq_out_r = xq_r * freqs_cos - xq_i * freqs_sin + xq_out_i = xq_r * freqs_sin + xq_i * freqs_cos + xk_out_r = xk_r * freqs_cos - xk_i * freqs_sin + xk_out_i = xk_r * freqs_sin + xk_i * freqs_cos + + # flatten last two dimensions + xq_out = torch.stack([xq_out_r, xq_out_i], dim=-1).flatten(3) + xk_out = torch.stack([xk_out_r, xk_out_i], dim=-1).flatten(3) + + return xq_out.type_as(xq), xk_out.type_as(xk) + +def repeat_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: + """torch.repeat_interleave(x, dim=2, repeats=n_rep)""" + bs, slen, n_kv_heads, head_dim = x.shape + if n_rep == 1: + return x + return ( + x[:, :, :, None, :] + .expand(bs, slen, n_kv_heads, n_rep, head_dim) + .reshape(bs, slen, n_kv_heads * n_rep, head_dim) + ) + +class Attention(nn.Module): + def __init__(self, args: ModelArgs): + super().__init__() + self.n_kv_heads = args.n_heads if args.n_kv_heads is None else args.n_kv_heads + assert args.n_heads % self.n_kv_heads == 0 + model_parallel_size = 1 + self.n_local_heads = args.n_heads // model_parallel_size + self.n_local_kv_heads = self.n_kv_heads // model_parallel_size + 
self.n_rep = self.n_local_heads // self.n_local_kv_heads + self.head_dim = args.dim // args.n_heads + self.wq = nn.Linear(args.dim, args.n_heads * self.head_dim, bias=False) + self.wk = nn.Linear(args.dim, self.n_kv_heads * self.head_dim, bias=False) + self.wv = nn.Linear(args.dim, self.n_kv_heads * self.head_dim, bias=False) + self.wo = nn.Linear(args.n_heads * self.head_dim, args.dim, bias=False) + self.attn_dropout = nn.Dropout(args.dropout) + self.resid_dropout = nn.Dropout(args.dropout) + self.dropout = args.dropout + + # use flash attention or a manual implementation? + self.flash = hasattr(torch.nn.functional, 'scaled_dot_product_attention') + if not self.flash: + print("WARNING: using slow attention. Flash Attention requires PyTorch >= 2.0") + mask = torch.full((1, 1, args.max_seq_len, args.max_seq_len), float("-inf")) + mask = torch.triu(mask, diagonal=1) + self.register_buffer("mask", mask) + + def forward( + self, + x: torch.Tensor, + freqs_cos: torch.Tensor, + freqs_sin: torch.Tensor, + ): + bsz, seqlen, _ = x.shape + + # QKV + xq, xk, xv = self.wq(x), self.wk(x), self.wv(x) + xq = xq.view(bsz, seqlen, self.n_local_heads, self.head_dim) + xk = xk.view(bsz, seqlen, self.n_local_kv_heads, self.head_dim) + xv = xv.view(bsz, seqlen, self.n_local_kv_heads, self.head_dim) + + # RoPE relative positional embeddings + xq, xk = apply_rotary_emb(xq, xk, freqs_cos, freqs_sin) + + # grouped multiquery attention: expand out keys and values + xk = repeat_kv(xk, self.n_rep) # (bs, seqlen, n_local_heads, head_dim) + xv = repeat_kv(xv, self.n_rep) # (bs, seqlen, n_local_heads, head_dim) + + # make heads into a batch dimension + xq = xq.transpose(1, 2) # (bs, n_local_heads, seqlen, head_dim) + xk = xk.transpose(1, 2) + xv = xv.transpose(1, 2) + + # flash implementation + if self.flash: + output = torch.nn.functional.scaled_dot_product_attention(xq, xk, xv, attn_mask=None, dropout_p=self.dropout if self.training else 0.0, is_causal=True) + else: + # manual implementation + scores = torch.matmul(xq, xk.transpose(2, 3)) / math.sqrt(self.head_dim) + assert hasattr(self, 'mask') + scores = scores + self.mask[:, :, :seqlen, :seqlen] # (bs, n_local_heads, seqlen, cache_len + seqlen) + scores = F.softmax(scores.float(), dim=-1).type_as(xq) + scores = self.attn_dropout(scores) + output = torch.matmul(scores, xv) # (bs, n_local_heads, seqlen, head_dim) + + # restore time as batch dimension and concat heads + output = output.transpose(1, 2).contiguous().view(bsz, seqlen, -1) + + # final projection into the residual stream + output = self.wo(output) + output = self.resid_dropout(output) + return output + + +class FeedForward(nn.Module): + def __init__(self, dim: int, hidden_dim: int, multiple_of: int, dropout: float): + super().__init__() + hidden_dim = int(2 * hidden_dim / 3) + hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of) + self.w1 = nn.Linear(dim, hidden_dim, bias=False) + self.w2 = nn.Linear(hidden_dim, dim, bias=False) + self.w3 = nn.Linear(dim, hidden_dim, bias=False) + self.dropout = nn.Dropout(dropout) + + def forward(self, x): + return self.dropout(self.w2(F.silu(self.w1(x)) * self.w3(x))) + + +class TransformerBlock(nn.Module): + def __init__(self, layer_id: int, args: ModelArgs): + super().__init__() + self.n_heads = args.n_heads + self.dim = args.dim + self.head_dim = args.dim // args.n_heads + self.attention = Attention(args) + self.feed_forward = FeedForward( + dim=args.dim, + hidden_dim=4 * args.dim, + multiple_of=args.multiple_of, + dropout=args.dropout, + ) + 
self.layer_id = layer_id + self.attention_norm = RMSNorm(args.dim, eps=args.norm_eps) + self.ffn_norm = RMSNorm(args.dim, eps=args.norm_eps) + + def forward(self, x, freqs_cos, freqs_sin): + h = x + self.attention.forward(self.attention_norm(x), freqs_cos, freqs_sin) + out = h + self.feed_forward.forward(self.ffn_norm(h)) + return out + + +class Transformer(nn.Module): + last_loss: Optional[torch.Tensor] + + def __init__(self, params: ModelArgs): + super().__init__() + self.params = params + self.vocab_size = params.vocab_size + self.n_layers = params.n_layers + + self.tok_embeddings = nn.Embedding(params.vocab_size, params.dim) + self.dropout = nn.Dropout(params.dropout) + self.layers = torch.nn.ModuleList() + for layer_id in range(params.n_layers): + self.layers.append(TransformerBlock(layer_id, params)) + self.norm = RMSNorm(params.dim, eps=params.norm_eps) + self.output = nn.Linear(params.dim, params.vocab_size, bias=False) + + # share the unembedding parameters with the embedding parameters + self.tok_embeddings.weight = self.output.weight # https://paperswithcode.com/method/weight-tying + + # some useful precompute for the RoPE relative positional embeddings + freqs_cos, freqs_sin = precompute_freqs_cis(self.params.dim // self.params.n_heads, self.params.max_seq_len) + self.register_buffer("freqs_cos", freqs_cos, persistent=False) + self.register_buffer("freqs_sin", freqs_sin, persistent=False) + + # init all weights + self.apply(self._init_weights) + # apply special scaled init to the residual projections, per GPT-2 paper + for pn, p in self.named_parameters(): + if pn.endswith('w3.weight') or pn.endswith('wo.weight'): + torch.nn.init.normal_(p, mean=0.0, std=0.02/math.sqrt(2 * params.n_layers)) + + # Initialize attribute for the loss of the last forward call. This will be set if the forward is called with a targets tensor. + self.last_loss = None + + def _init_weights(self, module): + if isinstance(module, nn.Linear): + torch.nn.init.normal_(module.weight, mean=0.0, std=0.02) + if module.bias is not None: + torch.nn.init.zeros_(module.bias) + elif isinstance(module, nn.Embedding): + torch.nn.init.normal_(module.weight, mean=0.0, std=0.02) + + def forward(self, tokens: torch.Tensor, targets: Optional[torch.Tensor] = None) -> torch.Tensor: + _bsz, seqlen = tokens.shape + h = self.tok_embeddings(tokens) + h = self.dropout(h) + freqs_cos = self.freqs_cos[:seqlen] + freqs_sin = self.freqs_sin[:seqlen] + + for layer in self.layers: + h = layer(h, freqs_cos, freqs_sin) + h = self.norm(h) + + if targets is not None: + # if we are given some desired targets also calculate the loss + logits = self.output(h) + self.last_loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1), ignore_index=-1) + else: + # inference-time mini-optimization: only forward the output on the very last position + logits = self.output(h[:, [-1], :]) # note: using list [-1] to preserve the time dim + self.last_loss = None + + return logits + + def configure_optimizers(self, weight_decay, learning_rate, betas, device_type): + # start with all of the candidate parameters + param_dict = {pn: p for pn, p in self.named_parameters()} + # filter out those that do not require grad + param_dict = {pn: p for pn, p in param_dict.items() if p.requires_grad} + # create optim groups. Any parameters that is 2D will be weight decayed, otherwise no. + # i.e. all weight tensors in matmuls + embeddings decay, all biases and layernorms don't. 
+ decay_params = [p for n, p in param_dict.items() if p.dim() >= 2] + nodecay_params = [p for n, p in param_dict.items() if p.dim() < 2] + optim_groups = [ + {'params': decay_params, 'weight_decay': weight_decay}, + {'params': nodecay_params, 'weight_decay': 0.0} + ] + num_decay_params = sum(p.numel() for p in decay_params) + num_nodecay_params = sum(p.numel() for p in nodecay_params) + print(f"num decayed parameter tensors: {len(decay_params)}, with {num_decay_params:,} parameters") + print(f"num non-decayed parameter tensors: {len(nodecay_params)}, with {num_nodecay_params:,} parameters") + # Create AdamW optimizer and use the fused version if it is available + fused_available = 'fused' in inspect.signature(torch.optim.AdamW).parameters + use_fused = fused_available and device_type == 'cuda' + extra_args = dict(fused=True) if use_fused else dict() + optimizer = torch.optim.AdamW(optim_groups, lr=learning_rate, betas=betas, **extra_args) + print(f"using fused AdamW: {use_fused}") + + return optimizer + + def estimate_mfu(self, fwdbwd_per_iter, dt): + """ estimate model flops utilization (MFU) in units of A100 bfloat16 peak FLOPS """ + # first estimate the number of flops we do per iteration. + # see PaLM paper Appendix B as ref: https://arxiv.org/abs/2204.02311 + N = sum(p.numel() for p in self.parameters()) + cfg = self.params + L, H, Q, T = cfg.n_layers, cfg.n_heads, cfg.dim//cfg.n_heads, cfg.max_seq_len + flops_per_token = 6*N + 12*L*H*Q*T + flops_per_fwdbwd = flops_per_token * T + flops_per_iter = flops_per_fwdbwd * fwdbwd_per_iter + # express our flops throughput as ratio of A100 bfloat16 peak flops + flops_achieved = flops_per_iter * (1.0/dt) # per second + flops_promised = 312e12 # A100 GPU bfloat16 peak flops is 312 TFLOPS + mfu = flops_achieved / flops_promised + return mfu + + @torch.inference_mode() + def generate(self, idx, max_new_tokens, temperature=1.0, top_k=None): + """ + Take a conditioning sequence of indices idx (LongTensor of shape (b,t)) and complete + the sequence max_new_tokens times, feeding the predictions back into the model each time. + Most likely you'll want to make sure to be in model.eval() mode of operation for this. + Also note this is a super inefficient version of sampling with no key/value cache. 
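+        temperature=0.0 greedily takes the single most likely token at each step; if top_k is
+        given, sampling is restricted to the k most likely tokens.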
+ """ + for _ in range(max_new_tokens): + # if the sequence context is growing too long we must crop it at block_size + idx_cond = idx if idx.size(1) <= self.params.max_seq_len else idx[:, -self.params.max_seq_len:] + # forward the model to get the logits for the index in the sequence + logits = self(idx_cond) + logits = logits[:, -1, :] # crop to just the final time step + if temperature == 0.0: + # "sample" the single most likely index + _, idx_next = torch.topk(logits, k=1, dim=-1) + else: + # pluck the logits at the final step and scale by desired temperature + logits = logits / temperature + # optionally crop the logits to only the top k options + if top_k is not None: + v, _ = torch.topk(logits, min(top_k, logits.size(-1))) + logits[logits < v[:, [-1]]] = -float('Inf') + # apply softmax to convert logits to (normalized) probabilities + probs = F.softmax(logits, dim=-1) + idx_next = torch.multinomial(probs, num_samples=1) + # append sampled index to the running sequence and continue + idx = torch.cat((idx, idx_next), dim=1) + + return idx + + def export(self, filepath='model.bin'): + """export the model weights in fp32 into .bin file to be read from C""" + f = open(filepath, 'wb') + + def serialize(t): + d = t.detach().cpu().view(-1).numpy().astype(np.float32) + b = struct.pack(f'{len(d)}f', *d) + f.write(b) + + # first write out the header + hidden_dim = self.layers[0].feed_forward.w1.weight.shape[0] + p = self.params + n_kv_heads = p.n_heads if p.n_kv_heads is None else p.n_kv_heads + header = struct.pack('iiiiiii', p.dim, hidden_dim, p.n_layers, p.n_heads, + n_kv_heads, p.vocab_size, p.max_seq_len) + f.write(header) + + # next write out the embedding weights + serialize(self.tok_embeddings.weight) + + # now all the layers + # attention weights + for layer in self.layers: + serialize(layer.attention_norm.weight) + for layer in self.layers: + serialize(layer.attention.wq.weight) + for layer in self.layers: + serialize(layer.attention.wk.weight) + for layer in self.layers: + serialize(layer.attention.wv.weight) + for layer in self.layers: + serialize(layer.attention.wo.weight) + # ffn weights + for layer in self.layers: + serialize(layer.ffn_norm.weight) + for layer in self.layers: + serialize(layer.feed_forward.w1.weight) + for layer in self.layers: + serialize(layer.feed_forward.w2.weight) + for layer in self.layers: + serialize(layer.feed_forward.w3.weight) + # final rmsnorm + serialize(self.norm.weight) + # note: no need to write final classifier weights due to weight sharing + # freqs_cis + serialize(self.freqs_cos[:p.max_seq_len]) + serialize(self.freqs_sin[:p.max_seq_len]) + + # write to binary file + f.close() + print(f"wrote {filepath}") diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000..7187a73 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,7 @@ +numpy==1.23.5 +pytest==7.4.0 +Requests==2.31.0 +sentencepiece==0.1.99 +torch==2.0.1 +tqdm==4.64.1 +wandb==0.15.5 diff --git a/run.ipynb b/run.ipynb new file mode 100644 index 0000000..ac57593 --- /dev/null +++ b/run.ipynb @@ -0,0 +1,130 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "HLdoj4cz-xal" + }, + "source": [ + "# Run.c\n", + "\n", + "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/karpathy/llama2.c/blob/master/run.ipynb)\n", + "\n", + "More details can be found in the [README.md](README.md) ." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Une3Ozlnu1B7" + }, + "outputs": [], + "source": [ + "#@title Clone Project\n", + "\n", + "!git clone https://github.com/karpathy/llama2.c.git\n", + "%cd llama2.c" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#@title Build\n", + "\n", + "!make runfast" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "thm0ZBrtSgoC" + }, + "outputs": [], + "source": [ + "#@title Pick Your Model\n", + "\n", + "#@markdown Choose model\n", + "model = \"stories15M\" #@param [\"stories15M\", \"stories42M\", \"stories110M\"]\n", + "\n", + "download_url = \"\"\n", + "\n", + "if(model == \"stories15M\"):\n", + " download_url = \"https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin\"\n", + "if(model == \"stories42M\"):\n", + " download_url = \"https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin\"\n", + "if(model == \"stories110M\"):\n", + " download_url = \"https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin\"\n", + "\n", + "print(f\"download_url: {download_url}\")\n", + "\n", + "!wget $download_url\n", + "\n", + "model_file = model + \".bin\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "OgAc3KjuT-NM" + }, + "outputs": [], + "source": [ + "#@title Generate Stories\n", + "\n", + "# Generate args\n", + "max_token = 256 #@param {type:\"slider\", min:32, max:1024, step:32}\n", + "temperature = 0.8 #@param {type:\"slider\", min:0.0, max:1, step:0.05}\n", + "top_p = 0.9 #@param {type:\"slider\", min:0.0, max:1.0, step:0.05}\n", + "prompt = \"One day, Lily met a Shoggoth\" #@param {type:\"string\"}\n", + "\n", + "print(f\"model: {model_file}, max_token: {max_token}, temperature: {temperature}, top_p: {top_p}, prompt: {prompt}\")\n", + "print(f\"----------------------------\\n\")\n", + "\n", + "cmd = f'./run {model_file} -t {temperature} -p {top_p} -n {max_token} -i \"{prompt}\"'\n", + "!{cmd}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#@title Run Meta's Llama 2 models\n", + "\n", + "#@markdown input your huggingface [access token](https://huggingface.co/settings/tokens) to download Meta's Llama 2 models.\n", + "\n", + "from huggingface_hub import snapshot_download\n", + "\n", + "token = \"replace your huggingface access token\" #@param {type:\"string\"}\n", + "path = snapshot_download(repo_id=\"meta-llama/Llama-2-7b\",cache_dir=\"Llama-2-7b\", use_auth_token=token)\n", + "\n", + "!python export_meta_llama_bin.py $path llama2_7b.bin\n", + "\n", + "print(\"./run llama2_7b.bin\\n\")\n", + "!./run llama2_7b.bin" + ] + } + ], + "metadata": { + "colab": { + "private_outputs": true, + "provenance": [] + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + }, + "language_info": { + "name": "python" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} diff --git a/sample.py b/sample.py new file mode 100644 index 0000000..d2f56ea --- /dev/null +++ b/sample.py @@ -0,0 +1,79 @@ +""" +Sample from the trained model with PyTorch +""" +import os +import pickle +from contextlib import nullcontext +import torch +from model import ModelArgs, Transformer +from tokenizer import Tokenizer + +from tinystories import get_tokenizer_model_path + +# ----------------------------------------------------------------------------- +checkpoint = 'out/ckpt.pt' +start = "" # or 
"<|endoftext|>" or etc. Can also specify a file, use as: "FILE:prompt.txt" +num_samples = 1 # number of samples to draw +max_new_tokens = 100 # number of tokens generated in each sample +temperature = 1.0 # 1.0 = no change, < 1.0 = less random, > 1.0 = more random, in predictions +top_k = 300 # retain only the top_k most likely tokens, clamp others to have 0 probability +tokenizer = "" # override the tokenizer model path +seed = 1337 +device = 'cuda' if torch.cuda.is_available() else 'cpu' # examples: 'cpu', 'cuda', 'cuda:0', 'cuda:1', etc. +#dtype = 'bfloat16' if torch.cuda.is_available() and torch.cuda.is_bf16_supported() else 'float16' # 'float32' or 'bfloat16' or 'float16' +dtype = "float32" +compile = False # use PyTorch 2.0 to compile the model to be faster +exec(open('configurator.py').read()) # overrides from command line or config file +# ----------------------------------------------------------------------------- + +torch.manual_seed(seed) +torch.cuda.manual_seed(seed) +torch.backends.cuda.matmul.allow_tf32 = True # allow tf32 on matmul +torch.backends.cudnn.allow_tf32 = True # allow tf32 on cudnn +device_type = 'cuda' if 'cuda' in device else 'cpu' # for later use in torch.autocast +ptdtype = {'float32': torch.float32, 'bfloat16': torch.bfloat16, 'float16': torch.float16}[dtype] +ctx = nullcontext() if device_type == 'cpu' else torch.amp.autocast(device_type=device_type, dtype=ptdtype) + +# init from a model saved in a specific directory +checkpoint_dict = torch.load(checkpoint, map_location=device) +gptconf = ModelArgs(**checkpoint_dict['model_args']) +model = Transformer(gptconf) +state_dict = checkpoint_dict['model'] +unwanted_prefix = '_orig_mod.' +for k,v in list(state_dict.items()): + if k.startswith(unwanted_prefix): + state_dict[k[len(unwanted_prefix):]] = state_dict.pop(k) +model.load_state_dict(state_dict, strict=False) + +model.eval() +model.to(device) +if compile: + print("Compiling the model...") + model = torch.compile(model) # requires PyTorch 2.0 (optional) + +# load the tokenizer +vocab_source = checkpoint_dict.get("vocab_source", "llama2") +vocab_size = gptconf.vocab_size +if tokenizer: + # a specific tokenizer is provided, use it + tokenizer_model = tokenizer +else: + # let's try to find the tokenizer model automatically. bit gross here... + query_vocab_size = 0 if vocab_source == "llama2" else vocab_size + tokenizer_model = get_tokenizer_model_path(vocab_size=query_vocab_size) +enc = Tokenizer(tokenizer_model=tokenizer_model) + +# encode the beginning of the prompt +if start.startswith('FILE:'): + with open(start[5:], 'r', encoding='utf-8') as f: + start = f.read() +start_ids = enc.encode(start, bos=True, eos=False) +x = (torch.tensor(start_ids, dtype=torch.long, device=device)[None, ...]) + +# run generation +with torch.no_grad(): + with ctx: + for k in range(num_samples): + y = model.generate(x, max_new_tokens, temperature=temperature, top_k=top_k) + print(enc.decode(y[0].tolist())) + print('---------------') diff --git a/save_torchscript.py b/save_torchscript.py new file mode 100755 index 0000000..af3a299 --- /dev/null +++ b/save_torchscript.py @@ -0,0 +1,66 @@ +#!/usr/bin/env python +"""Saves the model as a TorchScript. 
+ +Usage examples: + ./save_torchscript.py + ./save_torchscript.py --dim=300 + ./save_torchscript.py --gzip_output=True --zero_params=True + +The resulting file can be loaded in C++ code and then used for training or +inference with: + #include + torch::jit::Module module = torch::jit::load("model.pt") + +Note that the serialized model includes the initial parameters and with the default +ModelArgs the file is 59M and gzips down to 55M. If you want to serialize/distribute +the model parameters separately you can zero out the parameters before saving it and +it will gzip down to 780K. +""" +import gzip +import os +import shutil +from inspect import signature + +import torch + +from model import ModelArgs, Transformer + +# Model args config +dim = 288 +n_layers = 6 +n_heads = 6 +n_kv_heads = n_heads +multiple_of = 32 +max_seq_len = 256 +dropout = 0.0 +vocab_size = 32000 +norm_eps = 1e-5 +# Save config +model_path = "model.pt" +zero_params = False +gzip_output = False +# Allow config overrides +exec(open("configurator.py").read()) + + +def main() -> None: + model_args = {k: globals()[k] for k in signature(ModelArgs).parameters} + model = Transformer(ModelArgs(**model_args)) + + # If requested zero params before saving the model. This is useful in + # conjunction with gzip_output. + if zero_params: + for p in model.parameters(): + p.detach().zero_() + + torch.jit.save(torch.jit.script(model), model_path) + + if gzip_output: + with open(model_path, "rb") as f_in: + with gzip.open(f"{model_path}.gz", "wb") as f_out: + shutil.copyfileobj(f_in, f_out) + os.unlink(model_path) + + +if __name__ == "__main__": + main() diff --git a/tinystories.py b/tinystories.py new file mode 100644 index 0000000..690cb02 --- /dev/null +++ b/tinystories.py @@ -0,0 +1,274 @@ +""" +Download, preprocess and serve the TinyStories dataset as a DataLoader. 
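+The CLI stages are meant to be run in order: download (fetch and unpack the data),
+train_vocab (optionally train a custom sentencepiece tokenizer via train_vocab.sh),
+and pretokenize (write the token shards as .bin files). PretokDataset and the Task
+class then serve those shards as (x, y) tensor batches for train.py.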
+""" + +import argparse +import glob +import json +import os +import random +from typing import List +from concurrent.futures import ProcessPoolExecutor +from functools import partial + +import numpy as np +import requests +import torch +import torch.distributed as dist +from tqdm import tqdm + +from tokenizer import Tokenizer + +DATA_CACHE_DIR = "data" + +def download_file(url: str, fname: str, chunk_size=1024): + """Helper function to download a file from a given url""" + resp = requests.get(url, stream=True) + total = int(resp.headers.get("content-length", 0)) + with open(fname, "wb") as file, tqdm( + desc=fname, + total=total, + unit="iB", + unit_scale=True, + unit_divisor=1024, + ) as bar: + for data in resp.iter_content(chunk_size=chunk_size): + size = file.write(data) + bar.update(size) + + +def download(): + """Downloads the TinyStories dataset to DATA_CACHE_DIR""" + os.makedirs(DATA_CACHE_DIR, exist_ok=True) + + # download the TinyStories dataset, unless it's already downloaded + data_url = "https://huggingface.co/datasets/roneneldan/TinyStories/resolve/main/TinyStories_all_data.tar.gz" + data_filename = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data.tar.gz") + if not os.path.exists(data_filename): + print(f"Downloading {data_url} to {data_filename}...") + download_file(data_url, data_filename) + else: + print(f"{data_filename} already exists, skipping download...") + + # unpack the tar.gz file into all the data shards (json files) + data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") + if not os.path.exists(data_dir): + os.makedirs(data_dir, exist_ok=True) + print(f"Unpacking {data_filename}...") + os.system(f"tar -xzf {data_filename} -C {data_dir}") + else: + print(f"{data_dir} already exists, skipping unpacking...") + + # print a single example just for debugging and such + shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.json"))) + with open(shard_filenames[0], "r") as f: + data = json.load(f) + print("Download done.") + print(f"Number of shards: {len(shard_filenames)}") + print(f"Example story:\n{data[0]}") + +def train_vocab(vocab_size): + """ + Trains a custom sentencepiece tokenizer on the TinyStories dataset. + The custom tokenizer files will be saved in DATA_CACHE_DIR/tok{N} directories, + where N is the vocab size. This is also where the pretok .bin files will go. + """ + assert vocab_size > 0, "Vocab size must be positive" + + # output file prefix path for sentencepiece + prefix = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") + + # how many shards we'll use for vocab training, kept low for efficiency + num_shards = 10 + + # 1) export a large chunk of text as a single text file tiny.txt + tiny_file = os.path.join(DATA_CACHE_DIR, "tiny.txt") + data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") + shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.json"))) + + print(f"Writing temporary file {tiny_file} with {num_shards} shards...") + with open(tiny_file, "w") as of: + for shard in tqdm(shard_filenames[:num_shards]): + with open(shard, "r") as f: + data = json.load(f) + for example in data: + text = example["story"] + text = text.strip() + of.write(text + "\n") + print(f"Size is: {os.path.getsize(tiny_file) / 1024 / 1024:.2f} MB") + + # 2) run the train_vocab.sh script that trains the sentencepiece model + print("Will now train the vocab with:") + cmd = f"bash train_vocab.sh {tiny_file} {prefix} {vocab_size}" + print(cmd) + print("OK? 
[y/N] ") + dec = input() + if dec.lower() != "y": + print("Exiting...") + return + os.system(cmd) + + # 3) optional cleanup, ask the user if they'd like to delete tiny.txt + dec = input(f"Delete the temporary file {tiny_file}? [y/N] ") + if dec.lower() == "y": + os.remove(tiny_file) + print(f"Deleted {tiny_file}") + + print(f"Trained tokenizer is in {prefix}.model") + print("Done.") + + +def process_shard(args, vocab_size): + shard_id, shard = args + tokenizer_model = get_tokenizer_model_path(vocab_size) + enc = Tokenizer(tokenizer_model) + with open(shard, "r") as f: + data = json.load(f) + all_tokens = [] + for example in tqdm(data, position=shard_id): + text = example["story"] + text = text.strip() # get rid of leading/trailing whitespace + tokens = enc.encode(text, bos=True, eos=False) # encode the text, use BOS + all_tokens.extend(tokens) + # convert to uint16 nparray + all_tokens = np.array(all_tokens, dtype=np.uint16) + # calculate the output filename + if vocab_size == 0: + # if we're using Llama 2, just save the tokenized file in the same dir + tokenized_filename = shard.replace(".json", ".bin") + else: + # save .bin files into a new tok{N} directory + bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") + shard_basename = os.path.basename(shard) + bin_basename = shard_basename.replace(".json", ".bin") + tokenized_filename = os.path.join(bin_dir, bin_basename) + # write the bytes + with open(tokenized_filename, "wb") as f: + f.write(all_tokens.tobytes()) + # calculate the average sequence length (they are separated by BOS=1) + avg_seq_len = all_tokens.size / ((all_tokens == 1).sum()) + print(f"Saved {tokenized_filename}, average seqlen: {avg_seq_len:.2f}") + + +def pretokenize(vocab_size): + # iterate the shards and tokenize all of them one by one + data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") + shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.json"))) + if vocab_size > 0: + # .bin files will be saved into tok{N} directory, create it once here + bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") + os.makedirs(bin_dir, exist_ok=True) + + # process all the shards in a process pool + fun = partial(process_shard, vocab_size=vocab_size) + with ProcessPoolExecutor() as executor: + executor.map(fun, enumerate(shard_filenames)) + print("Done.") + + +class PretokDataset(torch.utils.data.IterableDataset): + """Loads pretokenized examples from disk and yields them as PyTorch tensors.""" + + def __init__(self, split, max_seq_len, vocab_size, vocab_source): + super().__init__() + self.split = split + self.max_seq_len = max_seq_len + self.vocab_size = vocab_size + self.vocab_source = vocab_source + + def __iter__(self): + # get worker info within a DataLoader + worker_info = torch.utils.data.get_worker_info() + worker_id = worker_info.id if worker_info else 0 + # get DDP rank info + rank = dist.get_rank() if dist.is_initialized() else 0 + # combine the worker_id and worker_rank to create a unique seed for rng + seed = 42 + worker_id + 1337 * rank + rng = random.Random(seed) + print(f"Created a PretokDataset with rng seed {seed}") + if self.vocab_source == "llama2": + # the .bin files are right along the .json files + bin_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") + shard_filenames = sorted(glob.glob(os.path.join(bin_dir, "*.bin"))) + elif self.vocab_source == "custom": + # the .bin files are in tok{N} directory + bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{self.vocab_size}") + shard_filenames = sorted(glob.glob(os.path.join(bin_dir, 
"*.bin"))) + # train/test split. let's use only shard 0 for test split, rest train + shard_filenames = shard_filenames[1:] if self.split == "train" else shard_filenames[:1] + while True: + rng.shuffle(shard_filenames) + for shard in shard_filenames: + # open the dataset for reading but keep it on disk with memmap + m = np.memmap(shard, dtype=np.uint16, mode="r") + num_batches = len(m) // self.max_seq_len + num_batches -= 1 # drop the last partial batch + assert num_batches > 0, "this shard is way too small? investigate." + ixs = list(range(num_batches)) + rng.shuffle(ixs) + for ix in ixs: + start = ix * self.max_seq_len + end = start + self.max_seq_len + 1 + # calling .astype will copy the data into a new numpy array, now in RAM + chunk = torch.from_numpy((m[start:end]).astype(np.int64)) + x = chunk[:-1] + y = chunk[1:] + yield x, y + +# ----------------------------------------------------------------------------- +# public interface functions + +def get_tokenizer_model_path(vocab_size): + """ + Returns path to the sentencepiece tokenizer model for a given vocab size + vocab_size = 0 designates the default Llama 2 tokenizer, in that case + None is returned. + """ + if vocab_size == 0: + return None + else: + return os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}.model") + +class Task: + + @staticmethod + def iter_batches(batch_size, device, num_workers=0, **dataset_kwargs): + ds = PretokDataset(**dataset_kwargs) + dl = torch.utils.data.DataLoader( + ds, batch_size=batch_size, pin_memory=True, num_workers=num_workers + ) + for x, y in dl: + x = x.to(device, non_blocking=True) + y = y.to(device, non_blocking=True) + yield x, y + +# ----------------------------------------------------------------------------- +# CLI for constructing the dataset + +if __name__ == "__main__": + """ + These stages are designed to be run in order. + + To tokenize data with the Llama 2 tokenizer: + python tinystories.py download + python tinystories.py pretokenize + + To tokenize data with a custom tokenizer we train ourselves with sentencepiece, e.g.: + python tinystories.py download + python tinystories.py train_vocab --vocab_size=2048 + python tinystories.py pretokenize --vocab_size=2048 + """ + parser = argparse.ArgumentParser() + parser.add_argument("stage", type=str, choices=["download", "pretokenize", "train_vocab"]) + parser.add_argument("--vocab_size", type=int, default=0, help="pretokenization vocab size. 0 = use Llama 2 tokenizer.") + args = parser.parse_args() + + # depending on the stage call the appropriate function + if args.stage == "download": + download() + elif args.stage == "train_vocab": + train_vocab(vocab_size=args.vocab_size) + elif args.stage == "pretokenize": + pretokenize(vocab_size=args.vocab_size) + else: + raise ValueError(f"Unknown stage {args.stage}") diff --git a/tokenizer.py b/tokenizer.py new file mode 100644 index 0000000..f3c0cc3 --- /dev/null +++ b/tokenizer.py @@ -0,0 +1,78 @@ +# Taken from llama code and lightly modified +# Copyright (c) Meta Platforms, Inc. and affiliates. +# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement. 
+ +import os +import struct +import argparse +from typing import List + +from sentencepiece import SentencePieceProcessor + +TOKENIZER_MODEL = "tokenizer.model" # the llama sentencepiece tokenizer model + +class Tokenizer: + def __init__(self, tokenizer_model=None): + model_path = tokenizer_model if tokenizer_model else TOKENIZER_MODEL + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + self.model_path = model_path + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.pad_id() + #print(f"#words: {self.n_words} - BOS ID: {self.bos_id} - EOS ID: {self.eos_id}") + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + def encode(self, s: str, bos: bool, eos: bool) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + return self.sp_model.decode(t) + + def export(self): + + # get all the tokens (postprocessed) and their scores as floats + tokens, scores = [], [] + for i in range(self.n_words): + + # decode the token and light postprocessing + t = self.sp_model.id_to_piece(i) + s = self.sp_model.get_score(i) + if i == self.bos_id: + t = '\n\n' + elif i == self.eos_id: + t = '\n\n' + t = t.replace('▁', ' ') # sentencepiece uses this character as whitespace + b = t.encode('utf-8') # bytes of this token, utf-8 encoded + + tokens.append(b) + scores.append(s) + + # record the max token length + max_token_length = max(len(t) for t in tokens) + + # write to a binary file + # the tokenizer.bin file is the same as .model file, but .bin + tokenizer_bin = self.model_path.replace('.model', '.bin') + with open(tokenizer_bin, 'wb') as f: + f.write(struct.pack("I", max_token_length)) + for bytes, score in zip(tokens, scores): + f.write(struct.pack("fI", score, len(bytes))) + f.write(bytes) + +if __name__ == "__main__": + parser = argparse.ArgumentParser() + parser.add_argument("-t", "--tokenizer-model", type=str, help="optional path to custom tokenizer ") + args = parser.parse_args() + + t = Tokenizer(args.tokenizer_model) + t.export() diff --git a/train.py b/train.py new file mode 100644 index 0000000..b1972dc --- /dev/null +++ b/train.py @@ -0,0 +1,342 @@ +""" +This training script can be run both on a single gpu in debug mode, +and also in a larger training run with distributed data parallel (ddp). 
+ +To run on a single GPU small debug run, example: +$ python -m train.py --compile=False --eval_iters=10 --batch_size=8 + +To run with DDP on 4 gpus on 1 node, example: +$ torchrun --standalone --nproc_per_node=4 train.py + +To run with DDP on 4 gpus across 2 nodes, example: +- Run on the first (master) node with example IP 123.456.123.456: +$ torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=123.456.123.456 --master_port=1234 train.py +- Run on the worker node: +$ torchrun --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr=123.456.123.456 --master_port=1234 train.py +(If your cluster does not have Infiniband interconnect prepend NCCL_IB_DISABLE=1) +""" + +import math +import os +import time +from contextlib import nullcontext +from datetime import datetime +from functools import partial + +import torch +from model import Transformer, ModelArgs +from torch.distributed import destroy_process_group, init_process_group +from torch.nn.parallel import DistributedDataParallel as DDP + +from tinystories import Task + +# ----------------------------------------------------------------------------- +# I/O +out_dir = "out" +eval_interval = 2000 +log_interval = 1 +eval_iters = 100 +eval_only = False # if True, script exits right after the first eval +always_save_checkpoint = False # if True, always save a checkpoint after each eval +init_from = "scratch" # 'scratch' or 'resume' +# wandb logging +wandb_log = False # disabled by default +wandb_project = "llamac" +wandb_run_name = "run" + datetime.now().strftime("%Y_%m_%d_%H_%M_%S") +# data +batch_size = 128 # if gradient_accumulation_steps > 1, this is the micro-batch size +max_seq_len = 256 +vocab_source = "llama2" # llama2|custom; use Lllama 2 vocab from Meta, or custom trained +vocab_size = 32000 # the Llama 2 tokenizer has 32K tokens +# model +dim = 288 +n_layers = 6 +n_heads = 6 +n_kv_heads = 6 +multiple_of = 32 +dropout = 0.0 +# adamw optimizer +gradient_accumulation_steps = 4 # used to simulate larger batch sizes +learning_rate = 5e-4 # max learning rate +max_iters = 100000 # total number of training iterations +weight_decay = 1e-1 +beta1 = 0.9 +beta2 = 0.95 +grad_clip = 1.0 # clip gradients at this value, or disable if == 0.0 +# learning rate decay settings +decay_lr = True # whether to decay the learning rate +warmup_iters = 1000 # how many steps to warm up for +# system +device = "cuda" # examples: 'cpu', 'cuda', 'cuda:0', 'cuda:1' etc., or try 'mps' on macbooks +dtype = "bfloat16" # float32|bfloat16|float16 +compile = True # use PyTorch 2.0 to compile the model to be faster +# ----------------------------------------------------------------------------- +config_keys = [ + k + for k, v in globals().items() + if not k.startswith("_") and isinstance(v, (int, float, bool, str)) +] +exec(open("configurator.py").read()) # overrides from command line or config file +config = {k: globals()[k] for k in config_keys} # will be useful for logging +# ----------------------------------------------------------------------------- + +# fixing some hyperparams to sensible defaults +lr_decay_iters = max_iters # should be ~= max_iters per Chinchilla +min_lr = 0.0 # minimum learning rate, should be ~= learning_rate/10 per Chinchilla + +# validating checks +assert vocab_source in ["llama2", "custom"] +assert vocab_source == "custom" or vocab_size == 32000, "The vocab from Meta has 32K tokens" + +# various inits, derived attributes, I/O setup +ddp = int(os.environ.get("RANK", -1)) != -1 # is this a ddp run? 
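+# torchrun exports RANK, LOCAL_RANK and WORLD_SIZE; if RANK is absent we assume a single-process run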
+if ddp: + init_process_group(backend="nccl") + ddp_rank = int(os.environ["RANK"]) + ddp_local_rank = int(os.environ["LOCAL_RANK"]) + ddp_world_size = int(os.environ["WORLD_SIZE"]) + device = f"cuda:{ddp_local_rank}" + torch.cuda.set_device(device) + master_process = ddp_rank == 0 # this process will do logging, checkpointing etc. + seed_offset = ddp_rank # each process gets a different seed + # world_size number of processes will be training simultaneously, so we can scale + # down the desired gradient accumulation iterations per process proportionally + assert gradient_accumulation_steps % ddp_world_size == 0 + gradient_accumulation_steps //= ddp_world_size +else: + # if not ddp, we are running on a single gpu, and one process + master_process = True + seed_offset = 0 + ddp_world_size = 1 +tokens_per_iter = gradient_accumulation_steps * ddp_world_size * batch_size * max_seq_len +if master_process: + print(f"tokens per iteration will be: {tokens_per_iter:,}") + print(f"breaks down as: {gradient_accumulation_steps} grad accum steps * {ddp_world_size} processes * {batch_size} batch size * {max_seq_len} max seq len") + +if master_process: + os.makedirs(out_dir, exist_ok=True) +torch.manual_seed(1337 + seed_offset) +torch.backends.cuda.matmul.allow_tf32 = True # allow tf32 on matmul +torch.backends.cudnn.allow_tf32 = True # allow tf32 on cudnn +device_type = "cuda" if "cuda" in device else "cpu" # for later use in torch.autocast +# note: float16 data type will automatically use a GradScaler +ptdtype = {"float32": torch.float32, "bfloat16": torch.bfloat16, "float16": torch.float16}[dtype] +ctx = ( + nullcontext() + if device_type == "cpu" + else torch.amp.autocast(device_type=device_type, dtype=ptdtype) +) + +# task-specific setup +iter_batches = partial( + Task.iter_batches, + batch_size=batch_size, + max_seq_len=max_seq_len, + vocab_size=vocab_size, + vocab_source=vocab_source, + device=device, + num_workers=0, +) + +# init these up here, can override if init_from='resume' (i.e. from a checkpoint) +iter_num = 0 +best_val_loss = 1e9 + +# model init +model_args = dict( + dim=dim, + n_layers=n_layers, + n_heads=n_heads, + n_kv_heads=n_kv_heads, + vocab_size=vocab_size, + multiple_of=multiple_of, + max_seq_len=max_seq_len, + dropout=dropout, +) # start with model_args from command line +if init_from == "scratch": + # init a new model from scratch + print("Initializing a new model from scratch") + gptconf = ModelArgs(**model_args) + model = Transformer(gptconf) +elif init_from == "resume": + print(f"Resuming training from {out_dir}") + # resume training from a checkpoint. + ckpt_path = os.path.join(out_dir, "ckpt.pt") + checkpoint = torch.load(ckpt_path, map_location=device) + checkpoint_model_args = checkpoint["model_args"] + # force these config attributes to be equal otherwise we can't even resume training + # the rest of the attributes (e.g. dropout) can stay as desired from command line + for k in ["dim", "n_layers", "n_heads", "n_kv_heads", "vocab_size", "multiple_of", "max_seq_len"]: + model_args[k] = checkpoint_model_args[k] + # create the model + gptconf = ModelArgs(**model_args) + model = Transformer(gptconf) + state_dict = checkpoint["model"] + # fix the keys of the state dictionary :( + # honestly no idea how checkpoints sometimes get this prefix, have to debug more + unwanted_prefix = "_orig_mod." 
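+    # (the "_orig_mod." prefix is most likely left over from torch.compile, which wraps
+    # the model in an OptimizedModule and records its parameters under that name)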
+ for k, v in list(state_dict.items()): + if k.startswith(unwanted_prefix): + state_dict[k[len(unwanted_prefix) :]] = state_dict.pop(k) + model.load_state_dict(state_dict) + iter_num = checkpoint["iter_num"] + best_val_loss = checkpoint["best_val_loss"] +model.to(device) + +# initialize a GradScaler. If enabled=False scaler is a no-op +scaler = torch.cuda.amp.GradScaler(enabled=(dtype == "float16")) + +# optimizer +optimizer = model.configure_optimizers(weight_decay, learning_rate, (beta1, beta2), device_type) +if init_from == "resume" and "optimizer" in checkpoint: + optimizer.load_state_dict(checkpoint["optimizer"]) +checkpoint = None # free up memory + +# compile the model +if compile: + print("compiling the model... (takes a ~minute)") + unoptimized_model = model + model = torch.compile(model) # requires PyTorch 2.0 + +# wrap model into DDP container +if ddp: + # Ignore the `freqs_cis` buffer so that DDP does not broadcast it at + # construction time since NCCL does not support `ComplexFloat` + prefix = "_orig_mod." if compile else "" + model._ddp_params_and_buffers_to_ignore = {prefix + "freqs_cis"} + model = DDP(model, device_ids=[ddp_local_rank]) + +# helps estimate an arbitrarily accurate loss over either split using many batches +@torch.no_grad() +def estimate_loss(): + out = {} + model.eval() + for split in ["train", "val"]: + batch_iter = iter_batches(split=split) + losses = torch.zeros(eval_iters) # keep on CPU + for k in range(eval_iters): + X, Y = next(batch_iter) + with ctx: + logits = model(X, Y) + loss = raw_model.last_loss + losses[k] = loss.item() + out[split] = losses.mean() + model.train() + return out + +# learning rate decay scheduler (cosine with warmup) +def get_lr(it): + # 1) linear warmup for warmup_iters steps + if it < warmup_iters: + return learning_rate * it / warmup_iters + # 2) if it > lr_decay_iters, return min learning rate + if it > lr_decay_iters: + return min_lr + # 3) in between, use cosine decay down to min learning rate + decay_ratio = (it - warmup_iters) / (lr_decay_iters - warmup_iters) + assert 0 <= decay_ratio <= 1 + coeff = 0.5 * (1.0 + math.cos(math.pi * decay_ratio)) # coeff ranges 0..1 + return min_lr + coeff * (learning_rate - min_lr) + +# logging +if wandb_log and master_process: + import wandb + wandb.init(project=wandb_project, name=wandb_run_name, config=config) + +# training loop +train_batch_iter = iter_batches(split="train") +X, Y = next(train_batch_iter) # fetch the very first batch +t0 = time.time() +local_iter_num = 0 # number of iterations in the lifetime of this process +raw_model = model.module if ddp else model # unwrap DDP container if needed +running_mfu = -1.0 +while True: + # determine and set the learning rate for this iteration + lr = get_lr(iter_num) if decay_lr else learning_rate + for param_group in optimizer.param_groups: + param_group["lr"] = lr + + # evaluate the loss on train/val sets and write checkpoints + if iter_num % eval_interval == 0 and master_process: + losses = estimate_loss() + print(f"step {iter_num}: train loss {losses['train']:.4f}, val loss {losses['val']:.4f}") + if wandb_log: + try: + wandb.log( + { + "iter": iter_num, + "tokens": iter_num * tokens_per_iter, + "loss/train": losses["train"], + "loss/val": losses["val"], + "lr": lr, + "mfu": running_mfu * 100, # convert to percentage + } + ) + except Exception as e: + print(f"logging to wandb failed: {e}") + if losses["val"] < best_val_loss or always_save_checkpoint: + best_val_loss = losses["val"] + if iter_num > 0: + checkpoint = { + "model": 
raw_model.state_dict(), + "optimizer": optimizer.state_dict(), + "model_args": model_args, + "iter_num": iter_num, + "best_val_loss": best_val_loss, + "config": config, + } + print(f"saving checkpoint to {out_dir}") + torch.save(checkpoint, os.path.join(out_dir, "ckpt.pt")) + raw_model.export(os.path.join(out_dir, "model.bin")) + if iter_num == 0 and eval_only: + break + + # forward backward update, with optional gradient accumulation to simulate larger batch size + # and using the GradScaler if data type is float16 + for micro_step in range(gradient_accumulation_steps): + if ddp: + # in DDP training we only need to sync gradients at the last micro step. + # the official way to do this is with model.no_sync() context manager, but + # I really dislike that this bloats the code and forces us to repeat code + # looking at the source of that context manager, it just toggles this variable + model.require_backward_grad_sync = micro_step == gradient_accumulation_steps - 1 + with ctx: + logits = model(X, Y) + loss = raw_model.last_loss + loss = loss / gradient_accumulation_steps + # immediately async prefetch next batch while model is doing the forward pass on the GPU + X, Y = next(train_batch_iter) + # backward pass, with gradient scaling if training in fp16 + scaler.scale(loss).backward() + # clip the gradient + if grad_clip != 0.0: + scaler.unscale_(optimizer) + torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip) + # step the optimizer and scaler if training in fp16 + scaler.step(optimizer) + scaler.update() + # flush the gradients as soon as we can, no need for this memory anymore + optimizer.zero_grad(set_to_none=True) + + # timing and logging + t1 = time.time() + dt = t1 - t0 + t0 = t1 + if iter_num % log_interval == 0 and master_process: + # get loss as float, scale up due to the divide above. note: this is a CPU-GPU sync point + lossf = loss.item() * gradient_accumulation_steps + if local_iter_num >= 5: # let the training loop settle a bit + mfu = raw_model.estimate_mfu(batch_size * gradient_accumulation_steps, dt) + running_mfu = mfu if running_mfu == -1.0 else 0.9 * running_mfu + 0.1 * mfu + print( + f"{iter_num} | loss {lossf:.4f} | lr {lr:e} | {dt*1000:.2f}ms | mfu {running_mfu*100:.2f}%" + ) + iter_num += 1 + local_iter_num += 1 + + # termination conditions + if iter_num > max_iters: + break + +if ddp: + destroy_process_group() diff --git a/train_vocab.sh b/train_vocab.sh new file mode 100755 index 0000000..7803af8 --- /dev/null +++ b/train_vocab.sh @@ -0,0 +1,126 @@ +#!/bin/bash + +# Trains a sentencepiece tokenizer model on a bunch of given data, my best +# effort attempt to replicate how Meta trained their Llama 2 tokenizer. + +# usage: $ train_vocab.sh +# example: +# ./train_vocab.sh tiny.txt tokenizer_tiny 1024 +# requirements: +# install https://github.com/google/sentencepiece + +# check if the correct number of arguments are provided +if [ $# -ne 3 ]; then + echo "Usage: $0 " + exit 1 +fi + +# assign command-line arguments to variables +input=$1 +model_prefix=$2 +vocab_size=$3 + +# check if input file exists +if [ ! -f "$input" ]; then + echo "Usage: $0 " + echo "input '$input' not found." + exit 1 +fi + +# check if vocab_size is a positive integer +if ! [[ "$vocab_size" =~ ^[0-9]+$ ]] || [ "$vocab_size" -lt 1 ]; then + echo "Usage: $0 " + echo "vocab_size size must be a positive integer." 
+ exit 1 +fi + +# Print the processed inputs +echo "Input: $input" +echo "Model Prefix: $model_prefix" +echo "Vocabulary Size: $vocab_size" + +# train a sentencepiece tokenizer model +# Llama 2 config can be printed as follows: + +# import sentencepiece.sentencepiece_model_pb2 +# mp = sentencepiece.sentencepiece_model_pb2.ModelProto() +# mp.ParseFromString(open("tokenizer.model", "rb").read()) +# print(mp.trainer_spec) +# print(mp.normalizer_spec) + +# this gives: + +# trainer_spec { +# input: "/large_experiments/theorem/datasets/MERGED/all.test1.merged" +# model_prefix: "spm_model_32k_200M_charcov099995_allowWSO__v2" +# model_type: BPE +# vocab_size: 32000 +# self_test_sample_size: 0 +# input_format: "text" +# character_coverage: 0.9999499917030334 +# input_sentence_size: 200000000 +# seed_sentencepiece_size: 1000000 +# shrinking_factor: 0.75 +# num_threads: 80 +# num_sub_iterations: 2 +# max_sentence_length: 4192 +# shuffle_input_sentence: true +# max_sentencepiece_length: 16 +# split_by_unicode_script: true +# split_by_whitespace: true +# split_by_number: true +# treat_whitespace_as_suffix: false +# split_digits: true +# allow_whitespace_only_pieces: true +# vocabulary_output_piece_score: true +# hard_vocab_limit: true +# use_all_vocab: false +# byte_fallback: true +# required_chars: "" +# unk_id: 0 +# bos_id: 1 +# eos_id: 2 +# pad_id: -1 +# unk_surface: " \342\201\207 " +# unk_piece: "" +# bos_piece: "" +# eos_piece: "" +# pad_piece: "" +# train_extremely_large_corpus: false +# enable_differential_privacy: false +# differential_privacy_noise_level: 0.0 +# differential_privacy_clipping_threshold: 0 +# } +# normalizer_spec { +# name: "identity" +# precompiled_charsmap: "" +# add_dummy_prefix: true +# remove_extra_whitespaces: false +# normalization_rule_tsv: "" +# } + +# let's now use spm_train to train this exact model +# options docs: https://github.com/google/sentencepiece/blob/master/doc/options.md + +# we'll depart on a few settings: +# character_coverage -> 1.0 + +# other important notes: +# --split-digits = true, per the paper +# --allow_whitespace_only_pieces is true, default in spm is false +# --byte_fallback is true, default in spm is false +# --normalization_rule_name is identity, default in spm is nmt_nfkc + +spm_train --input="$input" \ + --model_prefix="$model_prefix" \ + --model_type=bpe \ + --vocab_size="$vocab_size" \ + --self_test_sample_size=0 \ + --input_format="text" \ + --character_coverage=1.0 \ + --num_threads="$(nproc)" \ + --split_digits=true \ + --allow_whitespace_only_pieces=true \ + --byte_fallback=true \ + --unk_surface=" \342\201\207 " \ + --normalization_rule_name=identity \ From d09ebbb32ba62e2e6594b73c2520bd382d17f58f Mon Sep 17 00:00:00 2001 From: YiMing Han Date: Fri, 18 Aug 2023 15:14:08 -0400 Subject: [PATCH 76/79] Revert "working one" This reverts commit 8607b11ea1f287c2f0fdff6c40cd915a55dcd89b. 
--- .dart_tool/package_config.json | 20 - Makefile | 60 +++ ORIGINAL.md | 322 ------------- README.md | 356 +++++++++++++-- build_msvc.bat | 1 + pubspec.lock | 13 - pubspec.yaml | 10 - run.c | 740 ++++++++++++++++++++++++++++++ run.dart | 799 --------------------------------- test_all.py | 89 ++++ win.c | 180 ++++++++ win.h | 69 +++ 12 files changed, 1449 insertions(+), 1210 deletions(-) delete mode 100644 .dart_tool/package_config.json create mode 100644 Makefile delete mode 100644 ORIGINAL.md create mode 100644 build_msvc.bat delete mode 100644 pubspec.lock delete mode 100644 pubspec.yaml create mode 100644 run.c delete mode 100644 run.dart create mode 100644 test_all.py create mode 100644 win.c create mode 100644 win.h diff --git a/.dart_tool/package_config.json b/.dart_tool/package_config.json deleted file mode 100644 index ca60c60..0000000 --- a/.dart_tool/package_config.json +++ /dev/null @@ -1,20 +0,0 @@ -{ - "configVersion": 2, - "packages": [ - { - "name": "args", - "rootUri": "file:///Users/yiminghan/.pub-cache/hosted/pub.dev/args-2.4.2", - "packageUri": "lib/", - "languageVersion": "2.19" - }, - { - "name": "llama2.dart", - "rootUri": "../", - "packageUri": "lib/", - "languageVersion": "3.1" - } - ], - "generated": "2023-08-18T18:58:12.764817Z", - "generator": "pub", - "generatorVersion": "3.1.0" -} diff --git a/Makefile b/Makefile new file mode 100644 index 0000000..a4c6588 --- /dev/null +++ b/Makefile @@ -0,0 +1,60 @@ +# choose your compiler, e.g. gcc/clang +# example override to clang: make run CC=clang +CC = gcc + +# the most basic way of building that is most likely to work on most systems +.PHONY: run +run: run.c + $(CC) -O3 -o run run.c -lm + +# useful for a debug build, can then e.g. analyze with valgrind, example: +# $ valgrind --leak-check=full ./run out/model.bin -n 3 +rundebug: run.c + $(CC) -g -o run run.c -lm + +# https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html +# https://simonbyrne.github.io/notes/fastmath/ +# -Ofast enables all -O3 optimizations. +# Disregards strict standards compliance. +# It also enables optimizations that are not valid for all standard-compliant programs. +# It turns on -ffast-math, -fallow-store-data-races and the Fortran-specific +# -fstack-arrays, unless -fmax-stack-var-size is specified, and -fno-protect-parens. +# It turns off -fsemantic-interposition. +# In our specific application this is *probably* okay to use +.PHONY: runfast +runfast: run.c + $(CC) -Ofast -o run run.c -lm + +# additionally compiles with OpenMP, allowing multithreaded runs +# make sure to also enable multiple threads when running, e.g.: +# OMP_NUM_THREADS=4 ./run out/model.bin +.PHONY: runomp +runomp: run.c + $(CC) -Ofast -fopenmp -march=native run.c -lm -o run + +.PHONY: win64 +win64: + x86_64-w64-mingw32-gcc -Ofast -D_WIN32 -o run.exe -I. run.c win.c + +# compiles with gnu99 standard flags for amazon linux, coreos, etc. compatibility +.PHONY: rungnu +rungnu: + $(CC) -Ofast -std=gnu11 -o run run.c -lm + +.PHONY: runompgnu +runompgnu: + $(CC) -Ofast -fopenmp -std=gnu11 run.c -lm -o run + +# run all tests +.PHONY: test +test: + pytest + +# run only tests for run.c C implementation (is a bit faster if only C code changed) +.PHONY: testc +testc: + pytest -k runc + +.PHONY: clean +clean: + rm -f run diff --git a/ORIGINAL.md b/ORIGINAL.md deleted file mode 100644 index 35d20a2..0000000 --- a/ORIGINAL.md +++ /dev/null @@ -1,322 +0,0 @@ -## llama2.c - -

-  [image: Cute Llama]
- -Train the Llama 2 LLM architecture in PyTorch then inference it with one simple 700-line C file ([run.c](run.c)). You might think that you need many billion parameter LLMs to do anything useful, but in fact very small LLMs can have surprisingly strong performance if you make the domain narrow enough (ref: [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) paper). This repo is a "fullstack" train + inference solution for Llama 2 LLM, with focus on minimalism and simplicity. - -As the architecture is identical, you can also load and inference Meta's Llama 2 models. However, the current code only inferences models in fp32, so you will most likely not be able to productively load models larger than 7B. Work on model quantization is currently ongoing. - -Please note that this repo started recently as a fun weekend project: I took my earlier [nanoGPT](https://github.com/karpathy/nanoGPT), tuned it to implement the Llama-2 architecture instead of GPT-2, and the meat of it was writing the C inference engine in [run.c](run.c). So the project is young and moving quickly. Hat tip to the awesome [llama.cpp](https://github.com/ggerganov/llama.cpp) for inspiring this project. Compred to llama.cpp, I wanted something super simple, minimal, and educational so I chose to hard-code the Llama 2 architecture and just roll one inference file of pure C with no dependencies. - -## feel the magic - -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/karpathy/llama2.c/blob/master/run.ipynb) - -First, navigate to the folder when you keep your projects and clone this repository to this folder: - -```bash -git clone https://github.com/karpathy/llama2.c.git -``` - -Then, open the repository folder: - -```bash -cd llama2.c -``` - -Now, let's just run a baby Llama 2 model in C. You need a model checkpoint. Download this 15M parameter model I trained on the [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) dataset (~60MB download): - -```bash -wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin -``` - -Compile and run the C code: - -```bash -make run -./run stories15M.bin -``` - -You'll see the text stream a sample. On my M1 MacBook Air this runs at ~110 tokens/s. See [performance](#performance) or the Makefile for compile flags that can significantly speed this up. We can also try a bit bigger 42M parameter model: - -```bash -wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin -./run stories42M.bin -``` - -This still runs at interactive rates and samples more coherent and diverse stories: - -> Once upon a time, there was a little girl named Lily. She loved playing with her toys on top of her bed. One day, she decided to have a tea party with her stuffed animals. She poured some tea into a tiny teapot and put it on top of the teapot. Suddenly, her little brother Max came into the room and wanted to join the tea party too. Lily didn't want to share her tea and she told Max to go away. Max started to cry and Lily felt bad. She decided to yield her tea party to Max and they both shared the teapot. But then, something unexpected happened. The teapot started to shake and wiggle. Lily and Max were scared and didn't know what to do. Suddenly, the teapot started to fly towards the ceiling and landed on the top of the bed. Lily and Max were amazed and they hugged each other. They realized that sharing was much more fun than being selfish. 
From that day on, they always shared their tea parties and toys. - -You can also prompt the model with a prefix or a number of additional command line arguments, e.g. to sample at temperature 0.8 for 256 steps and with a prompt: - -```bash -./run stories42M.bin -t 0.8 -n 256 -i "One day, Lily met a Shoggoth" -``` - -> One day, Lily met a Shoggoth. He was very shy, but was also very generous. Lily said “Hello Shoggy! Can I be your friend?” Shoggy was happy to have a friend and said “Yes, let’s explore the universe together!” So they set off on a journey to explore the universe. As they travelled, Shoggy was happy to explain to Lily about all the wonderful things in the universe. At the end of the day, Lily and Shoggy had gathered lots of wonderful things from the universe, and they both felt very proud. They promised to explore the universe as one big pair and to never stop being generous to each other. - -There is also an even better 110M param model available, see [models](#models). - -Quick note on sampling, the recommendation for ~best results is to sample with `-t 1.0 -p 0.9`, i.e. temperature 1.0 (default) but also top-p sampling at 0.9 (default). Intuitively, top-p ensures that tokens with tiny probabilities do not get sampled, so we can't get "unlucky" during sampling, and we are less likely to go "off the rails" afterwards. More generally, to control the diversity of samples use either the temperature (i.e. vary `-t` between 0 and 1 and keep top-p off with `-p 0`) or the top-p value (i.e. vary `-p` between 0 and 1 and keep `-t 1`), but not both. Nice explainers on LLM sampling strategies include [this](https://peterchng.com/blog/2023/05/02/token-selection-strategies-top-k-top-p-and-temperature/), [this](https://docs.cohere.com/docs/controlling-generation-with-top-k-top-p) or [this](https://huggingface.co/blog/how-to-generate). - -## Meta's Llama 2 models - -As the neural net architecture is identical, we can also inference the Llama 2 models released by Meta. Sadly there is a bit of friction here due to licensing (I can't directly upload the checkpoints, I think). So Step 1, get the Llama 2 checkpoints by following the [Meta instructions](https://github.com/facebookresearch/llama). Once we have those checkpoints, we have to convert them into the llama2.c format. -For this we need to install the python dependencies (`pip install -r requirements.txt`) and then use the `export_meta_llama_bin.py` file, e.g. for 7B model: - -```bash -python export_meta_llama_bin.py path/to/llama/model/7B llama2_7b.bin -``` - -The export will take ~10 minutes or so and generate a 26GB file (the weights of the 7B model in float32) called `llama2_7b.bin` in the current directory. It has been [reported](https://github.com/karpathy/llama2.c/pull/85) that despite efforts, the 13B export currently doesn't work for unknown reasons (accepting PRs for fix). We can run the model as normal: - -```bash -./run llama2_7b.bin -``` - -This ran at about 4 tokens/s compiled with [OpenMP](#OpenMP) on 96 threads on my CPU Linux box in the cloud. (On my MacBook Air M1, currently it's closer to 30 seconds per token if you just build with `make runfast`.) Example output: - -> The purpose of this document is to highlight the state-of-the-art of CoO generation technologies, both recent developments and those in commercial use. The focus is on the technologies with the highest merit to become the dominating processes of the future and therefore to be technologies of interest to S&T ... R&D. 
As such, CoO generation technologies developed in Russia, Japan and Europe are described in some depth. The document starts with an introduction to cobalt oxides as complex products and a short view on cobalt as an essential material. The document continues with the discussion of the available CoO generation processes with respect to energy and capital consumption as well as to environmental damage. - -base models... ¯\\_(ツ)_/¯. Since we can inference the base model, it should be possible to also inference the chat model quite easily, and have a conversation with it. And if we can find a way to run 7B more efficiently, we can start adding LoRA to our training script, and going wild with finetunes all within the repo! - -## models - -For the sake of examples of smaller, from-scratch models, I trained a small model series on TinyStories. All of these trained in a few hours on my training setup (4X A100 40GB GPUs). The 110M took around 24 hours. I am hosting them on huggingface hub [tinyllamas](https://huggingface.co/karpathy/tinyllamas), both in the original PyTorch .pt, and also in the llama2.c format .bin: - -| model | dim | n_layers | n_heads | n_kv_heads | max context length | parameters | val loss | download | -| ----- | --- | -------- | ------- | ---------- | ------------------ | ---------- | -------- | ------------------------------------------------------------------------------------------ | -| 260K | 64 | 5 | 8 | 4 | 512 | 260K | 1.297 | [stories260K](https://huggingface.co/karpathy/tinyllamas/tree/main/stories260K) | -| OG | 288 | 6 | 6 | 6 | 256 | 15M | 1.072 | [stories15M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin) | -| 42M | 512 | 8 | 8 | 8 | 1024 | 42M | 0.847 | [stories42M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin) | -| 110M | 768 | 12 | 12 | 12 | 1024 | 110M | 0.760 | [stories110M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin) | - -You'll notice that the 110M model is equivalent to GPT-1 in size. Alternatively, this is also the smallest model in the GPT-2 series (`GPT-2 small`), except the max context length is only 1024 instead of 2048. The only notable changes from GPT-1/2 architecture is that Llama uses RoPE relatively positional embeddings instead of absolute/learned positional embeddings, a bit more fancy SwiGLU non-linearity in the MLP, RMSNorm instead of LayerNorm, bias=False on all Linear layers, and is optionally multiquery (but this is not yet supported in llama2.c). - -## training - -Let's see how we can train a baby Llama 2 from scratch using the code in this repo. First let's download and pretokenize some source dataset, e.g. I like [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) so this is the only example currently available in this repo. But it should be very easy to add datasets, see the code. - -```bash -python tinystories.py download -python tinystories.py pretokenize -``` - -Then train our model: - -```bash -python train.py -``` - -**brief training guide**. See the train.py script for more exotic launches and hyperparameter overrides. Here is a brief guide to how to set the parameters. Look at the table at the very end of the [Chinchilla paper](https://arxiv.org/abs/2203.15556) to get a sense of how the Transformer parameters (dim, n*layers, n_heads) grow or shrink together. Extrapolate/interpolate this pattern to get bigger or smaller transformers. 
Set the max context length however you wish, depending on the problem: this should be the max number of tokens that matter to predict the next token. E.g. Llama 2 uses 2048. Next, you want the \_total* batch size per update (printed by the script as "tokens per iteration will be:") to be somewhere around 100K tokens for medium-sized applications. For tiny applications it could be lower, for large training (e.g. GPTs/LLamas) it is usually ~0.5M, or even more. You get there by first maxing out the batch*size to whatever your system allows (e.g. mine was 16 in a recent run because after that my GPU runs out of memory), and then you want to increase gradient_accumulation_steps to be as high as necessary to reach the total batch size of ~100K. Finally, you want to tune your learning_rate (LR). You want this to be as high as your training allows. Very small networks can get away with a large LR (e.g. 1e-3 or even higher). Large networks need lower LRs. 3e-4 is a safe choice in most medium-sized applications, but can be too low for small networks, so try to increase it! Finally, max_iters is the length of training. Play with different settings. I mostly only ever tune these parameters and leave most of the others unchanged. Here is an example of how I trained the 110M model, which I don't think is anywhere near optimal, but looked sensible to me: dim 768, n_layers 12, n_heads 12 (so size of each head is 768 / 12 = 64 channels), seq len of 1024, batch size 16 (this is the most that fit my A100 40GB GPU), gradient_accumulation_steps = 8 was needed to get total tokens batch size to be 16 batch size * 1024 tokens in sequence \_ 8 grad_accum = 131,072 tokens per update. Good. Learning rate 4e-4 (probably a little too low). max_iters 200K (probably a bit too high). Dropout 0.1, as that usually helps a bit at medium size. That was it. I ran using Distributed Data Parallel (DDP) on 4 GPUs on my cloud machine, training took ~day or so. - -Totally understand if you want to skip model training, for simple demo just download one of the pretrained models (see [models](#models) section), e.g.: - -```bash -wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin -``` - -Once we have the model.bin file, we can inference in C. Compile the C code first: - -```bash -make run -``` - -You can now run it simply as - -```bash -./run stories15M.bin -``` - -Watch the tokens stream by, fun! We can also run the PyTorch inference script for a comparison. Download one of the models again from huggingface hub and point the `sample.py` script at it: - -```bash -wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.pt -P out15M -python sample.py --checkpoint=out15M/stories15M.pt -``` - -Which gives the same results. - -## custom tokenizers - -In everything above, we've assumed the custom Lllama 2 tokenizer with 32,000 tokens. However, in many boutique LLMs, using vocabulary this big might be an overkill. If you have a small application you have in mind, you might be much better off training your own tokenizers. This can make everything nicer - with smaller vocabs your model has fewer parameters (because the token embedding table is a lot smaller), the inference is faster (because there are fewer tokens to predict), and your average sequence length per example could also get smaller (because the compression is a lot more efficient on your data). So let's see how we train a custom tokenizer. 
- -By default, to pretokenize the tinystories dataset we had to run, in order: - -``` -python tinystories.py download -python tinystories.py pretokenize -``` - -The `pretokenize` stage here loads the Llama 2 tokenizer (vocab size 32,000) and uses it to convert the downloaded text into integers, and saves that to file. We now change this as follows, to train an example 4096-token tokenizer: - -``` -python tinystories.py download -python tinystories.py train_vocab --vocab_size=4096 -python tinystories.py pretokenize --vocab_size=4096 -``` - -The `train_vocab` stage will call the `train_vocab.sh` script, which calls the `sentencepiece` library to train the tokenizer, storing it in a new file `data/tok4096.model`. I tried to reproduce as well as I could the settings that (I think) Meta used to train their vocabulary. This uses the Byte Pair Encoding algorithm that starts out with raw utf8 byte sequences of the text data and then iteratively merges the most common consecutive pairs of tokens to form the vocabulary. Inspect the `tinystories.py` file - the custom tokenizers are stored in a special directory structure indexed by the vocab size. - -A quick note of interest is that vocab size of 4096 trained specifically on tinystories creates integer sequences with about the same sequence length per example as the default Llama 2 tokenizer of 32000 tokens! This means that our custom, tailored tokenizer is a lot better adapted to our specific text, and can compress it very effectively. So our trained models are smaller and faster. - -Now that we have pretokenized the dataset with our custom tokenizer, we can train the model. The training script `train.py` doesn't care about the exact tokens, it only cares about the vocabulary size so it can correctly initialize the model. So when training your model, make sure to pass in - -``` -python train.py --vocab_source=custom --vocab_size=4096 -``` - -(The defaults are `llama2` and `32000` respectively, which indicates the default Llama 2 tokenizer). This trains the model. Finally we are ready to run inference with our `run.c` script. For that we need two things. Number one, we have to export our tokenizer in the `.bin` format, do that with: - -``` -python tokenizer.py --tokenizer-model=data/tok4096.model -``` - -This writes the tokenizer to `data/tok4096.bin`. Now we can run inference, pointing it to this tokenizer using the `-z` flag: - -``` -./run out/model.bin -z data/tok4096.bin -``` - -This should print the samples. If you leave out the `-z` flag, it will use the default Llama 2 tokenizer, which would generate a good sequence of integers, but they would get translated using a different vocabulary to text, so it would look like gibberish. - -## performance - -There are many ways to potentially speed up this code depending on your system. Have a look at the [Makefile](Makefile), which contains a lot of notes. The `make run` command currently uses the `-O3` optimization by default, i.e.: - -```bash -gcc -O3 -o run run.c -lm -``` - --O3 includes optimizations that are expensive in terms of compile time and memory usage. Including vectorization, loop unrolling, and predicting branches. - -To get a much better performance, try to compile with `make runfast`. This turns on the `-Ofast` flag, which includes additional optimizations that may break compliance with the C/IEEE specifications, in addition to `-O3`. See [the GCC docs](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html) for more information. 
- -Try `-march=native` to compile the program to use the architecture of the machine you're compiling on rather than a more generic CPU. This may enable additional optimizations and hardware-specific tuning such as improved vector instructions/width. - -The fastest throughput I saw so far on my MacBook Air (M1) so far is with `make runfast`. - -You can also experiment with replacing `gcc` with `clang`. - -If compiling with gcc, try experimenting with `-funroll-all-loops`, see PR [#183](https://github.com/karpathy/llama2.c/pull/183) - -### OpenMP - -Big improvements can also be achieved by compiling with OpenMP, which "activates" the `#pragma omp parallel for` inside the matmul and attention, allowing the work in the loops to be split up over multiple processors. -You'll need to install the OpenMP library and the clang compiler first (e.g. `apt install clang libomp-dev` on ubuntu). Then you can compile with `make runomp`, which does: - -```bash -clang -Ofast -fopenmp -march=native run.c -lm -o run -``` - -When you run inference make sure to use OpenMP flags to set the number of threads, e.g.: - -```bash -OMP_NUM_THREADS=4 ./run out/model.bin -``` - -Depending on your system resources you may want to tweak these hyperparameters and use more threads. But more is not always better, usually this is a bit U shaped. - -## platforms - -On **Windows**, use `build_msvc.bat` in a Visual Studio Command Prompt to build with msvc, or you can use `make win64` to use mingw compiler toolchain from linux or windows to build the windows target. MSVC build will automatically use openmp and max threads appropriate for your CPU unless you set `OMP_NUM_THREADS` env. - -On **Centos 7**, **Amazon Linux 2018** use `rungnu` Makefile target: `make rungnu` or `make runompgnu` to use openmp. - -On **Mac**, use clang from brew for openmp build. Install clang as `brew install llvm` and use the installed clang binary to compile with openmp: `make runomp CC=/opt/homebrew/opt/llvm/bin/clang` - -## tests - -You can run tests simply with pytest: - -```bash -$ pip install pytest -$ pytest -``` - -This will currently invoke two tests inside `test_all.py`, which forward the model in both C and Python for 200 steps and check the output against a known good expected output. The tests currently run in only a few seconds, but will have to download and cache the stories260K models in a temporary `test` directory (only ~2MB download). - -## ack - -I trained the llama2.c storyteller models on a 4X A100 40GB box graciously provided by the excellent [Lambda labs](https://lambdalabs.com/service/gpu-cloud), thank you. - -## discord - -Figured it's possible to reuse my existing discord channel (that I use for my [zero to hero youtube series](https://karpathy.ai/zero-to-hero.html)), see #llama2c channel on [discord](https://discord.gg/3zy8kqD9Cp), for any quick questions, related discussions, etc. - -## contributing - -A few words on this repo and the kinds of PRs that are likely to be accepted. What is the goal of this repo? Basically I think there will be a lot of interest in training or finetuning custom micro-LLMs (think ~100M - ~1B params, but let's say up to ~10B params) across a large diversity of applications, and deploying them in edge-adjacent environments (think MCUs, phones, web browsers, laptops, etc.). I'd like this repo to be the simplest, smallest, most hackable repo to support this workflow, both training and inference. 
In particular, this repo is not a complex framework with a 1000 knobs controlling inscrutible code across a nested directory structure of hundreds of files. Instead, I expect most applications will wish to create a fork of this repo and hack it to their specific needs and deployment platforms. - -People who care about deployment efficiency above all else should look at [llama.cpp](https://github.com/ggerganov/llama.cpp). This repo still cares about efficiency, but not at the cost of simplicity, readability or portability. Basically, I expect that a lot of people come to this repo because the training code is 2 readable .py files and the inference code is 500 lines of C. So I'd like this to continue to be a kind of simplest "reference implementation" that can be easily hacked in a separate fork into whatever downstream application people are excited about. It shouldn't be full-featured. It shouldn't take 100 different options or settings. It shouldn't be the most efficient. A few examples: - -- someone re-ordered two loops to improve data locality for a small efficieny win => instant merge. -- someone added the one line "pragma omp parallel for", which allows you to compile with OpenMP and dramatically speed up the code, or acts as just a comment if you don't compile it that way => instant merge. -- bug fixes and touchups etc. => happy to merge - -A few examples of PRs are that are not an excellent fit: - -- adding more than several #ifdefs all over the place in code. If they are localized / few, might be okay. -- adding a lot of code that is very specific to some specific platform (e.g. MCUs, or some special version of linux or processor). These may be a better fit for forks of the project, and I am very happy to maintain a list of these forks in section below. -- adding hundreds of lines of code to run.c that are only active in specific scenarios or platforms. - -If your candidate PRs have elements of these it doesn't mean they won't get merged, it just means they will make it into the gray territory. TLDR: I am eager to merge any mostly small, mostly localized, broadly applicable, clean changes that improve the efficiency and portability of the repo, while keep its hackability and readability. I appreciate all PRs seeking to help me improve the project, thank you! <3. 
- -## notable forks - -- Rust - - [llama2.rs](https://github.com/gaxler/llama2.rs) by @[gaxler](https://github.com/gaxler): a Rust port of this project - - [llama2.rs](https://github.com/leo-du/llama2.rs) by @[leo-du](https://github.com/leo-du): A Rust port of this project - - [llama2-rs](https://github.com/danielgrittner/llama2-rs) by @[danielgrittner](https://github.com/danielgrittner): a Rust port of this project - - [llama2.rs](https://github.com/lintian06/llama2.rs) by @[lintian06](https://github.com/lintian06): A Rust port of this project -- Go - - [go-llama2](https://github.com/tmc/go-llama2) by @[tmc](https://github.com/tmc): a Go port of this project - - [llama2.go](https://github.com/nikolaydubina/llama2.go) by @[nikolaydubina](https://github.com/nikolaydubina): a Go port of this project - - [llama2.go](https://github.com/haormj/llama2.go) by @[haormj](https://github.com/haormj): a Go port of this project - - [llama2.go](https://github.com/saracen/llama2.go) by @[saracen](https://github.com/saracen): a Go port of this project -- Android - - [llama2.c-android](https://github.com/Manuel030/llama2.c-android): by @[Manuel030](https://github.com/Manuel030): adds Android binaries of this project - - [llama2.c-android-wrapper](https://github.com/celikin/llama2.c-android-wrapper): by @[celikin](https://github.com/celikin): added JNI wrapper, PoC -- C++ - - [llama2.cpp](https://github.com/leloykun/llama2.cpp) by @[leloykun](https://github.com/leloykun): a C++ port of this project -- JavaScript - - [llama2.js](https://github.com/epicure/llama2.js) by @[epicure](https://github.com/epicure): a JavaScript port of this project - - [llama2.ts](https://github.com/wizzard0/llama2.ts) by @[oleksandr_now](https://twitter.com/oleksandr_now): a TypeScript port of this project. Full Llama2-7B capable. 
- - [llama2.c-emscripten](https://github.com/gohai/llama2.c-emscripten) by @[gohai](https://github.com/gohai): Emscripten (JavaScript) port, based on @ggerganov's initial prototype -- Zig - - [llama2.zig](https://github.com/cgbur/llama2.zig) by @[cgbur](https://github.com/cgbur): A Zig port of this project - - [llama2.zig](https://github.com/vodkaslime/llama2.zig) by @[vodkaslime](https://github.com/vodkaslime): a Zig port of this project - - [llama2.zig](https://github.com/clebert/llama2.zig) by @[clebert](https://github.com/clebert): a Zig port of this project -- Julia - - [llama2.jl](https://github.com/juvi21/llama2.jl) by @[juvi21](https://github.com/juvi21): a Julia port of this project -- Scala - - [llama2.scala](https://github.com/jrudolph/llama2.scala) by @[jrudolph](https://github.com/jrudolph): a Scala port of this project -- Java - - [llama2.java](https://github.com/mukel/llama2.java) by @[mukel](https://github.com/mukel): a Java port of this project -- Kotlin - - [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @[madroidmaq](https://github.com/madroidmaq): a Kotlin port of this project -- Python - - [llama2.py](https://github.com/tairov/llama2.py) by @[tairov](https://github.com/tairov): a simple one file pure Python port of this project with zero dependencies -- C# - - [llama2.cs](https://github.com/trrahul/llama2.cs) by @[trrahul](https://github.com/trrahul): a C# port of this project -- WebAssembly - - [icpp-llm](https://github.com/icppWorld/icpp-llm): LLMs for the Internet Computer -- [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 -- [llama2.c-zh - Bilingual Chinese and English](https://github.com/chenyangMl/llama2.c-zh) by @[chenyangMl](https://github.com/chenyangMl): Expand tokenizer to support training and inference in both Chinese and English - -## unsorted todos - -- make it easier to add a new dataset with not too much pain -- should calculate freq_cis online in the script run.c instead of loading them -- int4/8 quantization -- export the model in a more sensible output format with a proper header, etc. -- support Llama 2 7B Chat models and tune run.c to Chat UI/UX -- llama2.cu investigate and merge -- (LoRA) finetuning and export of Llama 2 models - -## License - -MIT diff --git a/README.md b/README.md index de13c23..8c36285 100644 --- a/README.md +++ b/README.md @@ -1,48 +1,4 @@ -## llama2.dart - -This is a fork of Andrej Karpathy's [llama2.c](https://github.com/karpathy/llama2.c), implemented in (Almost) Pure Dart, except for some args parsing utility library. - -### To run : - -Instal Dart - -```bash -brew tap dart-lang/dart -brew install dart -``` - -Install the arg parsing dependency - -```bash -dart pub add args -``` - -Download the dataset: - -```bash -wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin -wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin -wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin -``` - -```bash -dart run run.dart -c ./stories15M.bin -i "PROMPT GOES HERE" -``` - -## Performance - -Dart suprisingly ok performance being a single threaded language, tho it's starting to struggle at 110M: -Tested on M2 Max Chip - -| Model | Token/s | -| ----- | ------------ | -| 15M | tok/s: 17.78 | -| 42M | tok/s: 6.43 | -| 110M | tok/s: 2.47 | - -### Original README - -Extract from the original Repo: +## llama2.c

Cute Llama @@ -54,4 +10,312 @@ As the architecture is identical, you can also load and inference Meta's Llama 2 Please note that this repo started recently as a fun weekend project: I took my earlier [nanoGPT](https://github.com/karpathy/nanoGPT), tuned it to implement the Llama-2 architecture instead of GPT-2, and the meat of it was writing the C inference engine in [run.c](run.c). So the project is young and moving quickly. Hat tip to the awesome [llama.cpp](https://github.com/ggerganov/llama.cpp) for inspiring this project. Compred to llama.cpp, I wanted something super simple, minimal, and educational so I chose to hard-code the Llama 2 architecture and just roll one inference file of pure C with no dependencies. -Please refer to [Original README](/ORIGINAL.md) or the upstream repo for more information on llama2.c +## feel the magic + +[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/karpathy/llama2.c/blob/master/run.ipynb) + +First, navigate to the folder when you keep your projects and clone this repository to this folder: + +```bash +git clone https://github.com/karpathy/llama2.c.git +``` + +Then, open the repository folder: + +```bash +cd llama2.c +``` + +Now, let's just run a baby Llama 2 model in C. You need a model checkpoint. Download this 15M parameter model I trained on the [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) dataset (~60MB download): + +```bash +wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin +``` + +Compile and run the C code: + +```bash +make run +./run stories15M.bin +``` + +You'll see the text stream a sample. On my M1 MacBook Air this runs at ~110 tokens/s. See [performance](#performance) or the Makefile for compile flags that can significantly speed this up. We can also try a bit bigger 42M parameter model: + +```bash +wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin +./run stories42M.bin +``` + +This still runs at interactive rates and samples more coherent and diverse stories: + +> Once upon a time, there was a little girl named Lily. She loved playing with her toys on top of her bed. One day, she decided to have a tea party with her stuffed animals. She poured some tea into a tiny teapot and put it on top of the teapot. Suddenly, her little brother Max came into the room and wanted to join the tea party too. Lily didn't want to share her tea and she told Max to go away. Max started to cry and Lily felt bad. She decided to yield her tea party to Max and they both shared the teapot. But then, something unexpected happened. The teapot started to shake and wiggle. Lily and Max were scared and didn't know what to do. Suddenly, the teapot started to fly towards the ceiling and landed on the top of the bed. Lily and Max were amazed and they hugged each other. They realized that sharing was much more fun than being selfish. From that day on, they always shared their tea parties and toys. + +You can also prompt the model with a prefix or a number of additional command line arguments, e.g. to sample at temperature 0.8 for 256 steps and with a prompt: + +```bash +./run stories42M.bin -t 0.8 -n 256 -i "One day, Lily met a Shoggoth" +``` + +> One day, Lily met a Shoggoth. He was very shy, but was also very generous. Lily said “Hello Shoggy! Can I be your friend?” Shoggy was happy to have a friend and said “Yes, let’s explore the universe together!” So they set off on a journey to explore the universe. 
As they travelled, Shoggy was happy to explain to Lily about all the wonderful things in the universe. At the end of the day, Lily and Shoggy had gathered lots of wonderful things from the universe, and they both felt very proud. They promised to explore the universe as one big pair and to never stop being generous to each other. + +There is also an even better 110M param model available, see [models](#models). + +Quick note on sampling, the recommendation for ~best results is to sample with `-t 1.0 -p 0.9`, i.e. temperature 1.0 (default) but also top-p sampling at 0.9 (default). Intuitively, top-p ensures that tokens with tiny probabilities do not get sampled, so we can't get "unlucky" during sampling, and we are less likely to go "off the rails" afterwards. More generally, to control the diversity of samples use either the temperature (i.e. vary `-t` between 0 and 1 and keep top-p off with `-p 0`) or the top-p value (i.e. vary `-p` between 0 and 1 and keep `-t 1`), but not both. Nice explainers on LLM sampling strategies include [this](https://peterchng.com/blog/2023/05/02/token-selection-strategies-top-k-top-p-and-temperature/), [this](https://docs.cohere.com/docs/controlling-generation-with-top-k-top-p) or [this](https://huggingface.co/blog/how-to-generate). + +## Meta's Llama 2 models + +As the neural net architecture is identical, we can also inference the Llama 2 models released by Meta. Sadly there is a bit of friction here due to licensing (I can't directly upload the checkpoints, I think). So Step 1, get the Llama 2 checkpoints by following the [Meta instructions](https://github.com/facebookresearch/llama). Once we have those checkpoints, we have to convert them into the llama2.c format. +For this we need to install the python dependencies (`pip install -r requirements.txt`) and then use the `export_meta_llama_bin.py` file, e.g. for 7B model: + +```bash +python export_meta_llama_bin.py path/to/llama/model/7B llama2_7b.bin +``` + +The export will take ~10 minutes or so and generate a 26GB file (the weights of the 7B model in float32) called `llama2_7b.bin` in the current directory. It has been [reported](https://github.com/karpathy/llama2.c/pull/85) that despite efforts, the 13B export currently doesn't work for unknown reasons (accepting PRs for fix). We can run the model as normal: + +```bash +./run llama2_7b.bin +``` + +This ran at about 4 tokens/s compiled with [OpenMP](#OpenMP) on 96 threads on my CPU Linux box in the cloud. (On my MacBook Air M1, currently it's closer to 30 seconds per token if you just build with `make runfast`.) Example output: + +> The purpose of this document is to highlight the state-of-the-art of CoO generation technologies, both recent developments and those in commercial use. The focus is on the technologies with the highest merit to become the dominating processes of the future and therefore to be technologies of interest to S&T ... R&D. As such, CoO generation technologies developed in Russia, Japan and Europe are described in some depth. The document starts with an introduction to cobalt oxides as complex products and a short view on cobalt as an essential material. The document continues with the discussion of the available CoO generation processes with respect to energy and capital consumption as well as to environmental damage. + +base models... ¯\\_(ツ)_/¯. Since we can inference the base model, it should be possible to also inference the chat model quite easily, and have a conversation with it. 
And if we can find a way to run 7B more efficiently, we can start adding LoRA to our training script, and going wild with finetunes all within the repo! + +## models + +For the sake of examples of smaller, from-scratch models, I trained a small model series on TinyStories. All of these trained in a few hours on my training setup (4X A100 40GB GPUs). The 110M took around 24 hours. I am hosting them on huggingface hub [tinyllamas](https://huggingface.co/karpathy/tinyllamas), both in the original PyTorch .pt, and also in the llama2.c format .bin: + +| model | dim | n_layers | n_heads | n_kv_heads | max context length | parameters | val loss | download +| --- | --- | --- | --- | --- | --- | --- | --- | --- | +| 260K | 64 | 5 | 8 | 4 | 512 | 260K | 1.297 | [stories260K](https://huggingface.co/karpathy/tinyllamas/tree/main/stories260K) +| OG | 288 | 6 | 6 | 6 | 256 | 15M | 1.072 | [stories15M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin) | +| 42M| 512 | 8 | 8 | 8 | 1024 | 42M | 0.847 | [stories42M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin) | +| 110M| 768 | 12 | 12 | 12 | 1024 | 110M | 0.760 | [stories110M.bin](https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin) | + +You'll notice that the 110M model is equivalent to GPT-1 in size. Alternatively, this is also the smallest model in the GPT-2 series (`GPT-2 small`), except the max context length is only 1024 instead of 2048. The only notable changes from GPT-1/2 architecture is that Llama uses RoPE relatively positional embeddings instead of absolute/learned positional embeddings, a bit more fancy SwiGLU non-linearity in the MLP, RMSNorm instead of LayerNorm, bias=False on all Linear layers, and is optionally multiquery (but this is not yet supported in llama2.c). + +## training + +Let's see how we can train a baby Llama 2 from scratch using the code in this repo. First let's download and pretokenize some source dataset, e.g. I like [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) so this is the only example currently available in this repo. But it should be very easy to add datasets, see the code. + +```bash +python tinystories.py download +python tinystories.py pretokenize +``` + +Then train our model: + +```bash +python train.py +``` + +**brief training guide**. See the train.py script for more exotic launches and hyperparameter overrides. Here is a brief guide to how to set the parameters. Look at the table at the very end of the [Chinchilla paper](https://arxiv.org/abs/2203.15556) to get a sense of how the Transformer parameters (dim, n_layers, n_heads) grow or shrink together. Extrapolate/interpolate this pattern to get bigger or smaller transformers. Set the max context length however you wish, depending on the problem: this should be the max number of tokens that matter to predict the next token. E.g. Llama 2 uses 2048. Next, you want the _total_ batch size per update (printed by the script as "tokens per iteration will be:") to be somewhere around 100K tokens for medium-sized applications. For tiny applications it could be lower, for large training (e.g. GPTs/LLamas) it is usually ~0.5M, or even more. You get there by first maxing out the batch_size to whatever your system allows (e.g. mine was 16 in a recent run because after that my GPU runs out of memory), and then you want to increase gradient_accumulation_steps to be as high as necessary to reach the total batch size of ~100K. Finally, you want to tune your learning_rate (LR). 
You want this to be as high as your training allows. Very small networks can get away with a large LR (e.g. 1e-3 or even higher). Large networks need lower LRs. 3e-4 is a safe choice in most medium-sized applications, but can be too low for small networks, so try to increase it! Finally, max_iters is the length of training. Play with different settings. I mostly only ever tune these parameters and leave most of the others unchanged. Here is an example of how I trained the 110M model, which I don't think is anywhere near optimal, but looked sensible to me: dim 768, n_layers 12, n_heads 12 (so size of each head is 768 / 12 = 64 channels), seq len of 1024, batch size 16 (this is the most that fit my A100 40GB GPU), gradient_accumulation_steps = 8 was needed to get total tokens batch size to be 16 batch size * 1024 tokens in sequence * 8 grad_accum = 131,072 tokens per update. Good. Learning rate 4e-4 (probably a little too low). max_iters 200K (probably a bit too high). Dropout 0.1, as that usually helps a bit at medium size. That was it. I ran using Distributed Data Parallel (DDP) on 4 GPUs on my cloud machine, training took ~day or so. + +Totally understand if you want to skip model training, for simple demo just download one of the pretrained models (see [models](#models) section), e.g.: + +```bash +wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin +``` + +Once we have the model.bin file, we can inference in C. Compile the C code first: + +```bash +make run +``` + +You can now run it simply as + +```bash +./run stories15M.bin +``` + +Watch the tokens stream by, fun! We can also run the PyTorch inference script for a comparison. Download one of the models again from huggingface hub and point the `sample.py` script at it: + +```bash +wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.pt -P out15M +python sample.py --checkpoint=out15M/stories15M.pt +``` + +Which gives the same results. + +## custom tokenizers + +In everything above, we've assumed the custom Lllama 2 tokenizer with 32,000 tokens. However, in many boutique LLMs, using vocabulary this big might be an overkill. If you have a small application you have in mind, you might be much better off training your own tokenizers. This can make everything nicer - with smaller vocabs your model has fewer parameters (because the token embedding table is a lot smaller), the inference is faster (because there are fewer tokens to predict), and your average sequence length per example could also get smaller (because the compression is a lot more efficient on your data). So let's see how we train a custom tokenizer. + +By default, to pretokenize the tinystories dataset we had to run, in order: + +``` +python tinystories.py download +python tinystories.py pretokenize +``` + +The `pretokenize` stage here loads the Llama 2 tokenizer (vocab size 32,000) and uses it to convert the downloaded text into integers, and saves that to file. We now change this as follows, to train an example 4096-token tokenizer: + +``` +python tinystories.py download +python tinystories.py train_vocab --vocab_size=4096 +python tinystories.py pretokenize --vocab_size=4096 +``` + +The `train_vocab` stage will call the `train_vocab.sh` script, which calls the `sentencepiece` library to train the tokenizer, storing it in a new file `data/tok4096.model`. I tried to reproduce as well as I could the settings that (I think) Meta used to train their vocabulary. 
+Inspect the `tinystories.py` file - the custom tokenizers are stored in a special directory structure indexed by the vocab size.
+
+A quick note of interest is that a vocab size of 4096 trained specifically on tinystories creates integer sequences with about the same sequence length per example as the default Llama 2 tokenizer of 32000 tokens! This means that our custom, tailored tokenizer is much better adapted to our specific text, and can compress it very effectively. So our trained models are smaller and faster.
+
+Now that we have pretokenized the dataset with our custom tokenizer, we can train the model. The training script `train.py` doesn't care about the exact tokens, it only cares about the vocabulary size so it can correctly initialize the model. So when training your model, make sure to pass in
+
+```
+python train.py --vocab_source=custom --vocab_size=4096
+```
+
+(The defaults are `llama2` and `32000` respectively, which indicate the default Llama 2 tokenizer). This trains the model. Finally, we are ready to run inference with our `run.c` script. For that we need two things. Number one, we have to export our tokenizer in the `.bin` format, do that with:
+
+```
+python tokenizer.py --tokenizer-model=data/tok4096.model
+```
+
+This writes the tokenizer to `data/tok4096.bin`. Now we can run inference, pointing it to this tokenizer using the `-z` flag:
+
+```
+./run out/model.bin -z data/tok4096.bin
+```
+
+This should print the samples. If you leave out the `-z` flag, it will use the default Llama 2 tokenizer, which would generate a good sequence of integers, but they would get translated to text using a different vocabulary, so it would look like gibberish.
+
+## performance
+
+There are many ways to potentially speed up this code depending on your system. Have a look at the [Makefile](Makefile), which contains a lot of notes. The `make run` command currently uses the `-O3` optimization by default, i.e.:
+
+```bash
+gcc -O3 -o run run.c -lm
+```
+
+-O3 includes optimizations that are expensive in terms of compile time and memory usage, including vectorization, loop unrolling, and branch prediction.
+
+To get much better performance, try compiling with `make runfast`. This turns on the `-Ofast` flag, which includes additional optimizations that may break compliance with the C/IEEE specifications, in addition to `-O3`. See [the GCC docs](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html) for more information.
+
+Try `-march=native` to compile the program to use the architecture of the machine you're compiling on rather than a more generic CPU. This may enable additional optimizations and hardware-specific tuning such as improved vector instructions/width.
+
+The fastest throughput I have seen so far on my MacBook Air (M1) is with `make runfast`.
+
+You can also experiment with replacing `gcc` with `clang`.
+
+If compiling with gcc, try experimenting with `-funroll-all-loops`, see PR [#183](https://github.com/karpathy/llama2.c/pull/183).
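To compare these build options on your own machine, something like the following throughput harness can help. It leans on the `achieved tok/s:` line that `run.c` prints to stderr; the binary name, model file and prompt are placeholders to adjust for your setup:

```python
# Rough benchmarking sketch: run a few generations and average the "achieved tok/s"
# figure reported by run.c on stderr. Paths and the prompt are assumptions, not repo defaults.
import re
import statistics
import subprocess

def bench(binary="./run", model="stories15M.bin", steps=256, n_runs=3):
    speeds = []
    for _ in range(n_runs):
        proc = subprocess.run(
            [binary, model, "-n", str(steps), "-i", "Once upon a time"],
            capture_output=True, text=True, check=True)
        match = re.search(r"achieved tok/s: ([\d.]+)", proc.stderr)
        if match:
            speeds.append(float(match.group(1)))
    return statistics.mean(speeds) if speeds else None

print("tok/s:", bench())
```

Rebuild with `make runfast` (or `make runomp`, described below) and run it again to see the difference on your hardware.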
+### OpenMP
+Big improvements can also be achieved by compiling with OpenMP, which "activates" the `#pragma omp parallel for` inside the matmul and attention, allowing the work in the loops to be split up over multiple processors.
+You'll need to install the OpenMP library and the clang compiler first (e.g. `apt install clang libomp-dev` on ubuntu). Then you can compile with `make runomp`, which does:
+
+```bash
+clang -Ofast -fopenmp -march=native run.c -lm -o run
+```
+
+When you run inference, make sure to use OpenMP flags to set the number of threads, e.g.:
+
+```bash
+OMP_NUM_THREADS=4 ./run out/model.bin
+```
+
+Depending on your system resources you may want to tweak these hyperparameters and use more threads. But more is not always better; throughput is usually a bit U-shaped in the number of threads.
+
+## platforms
+
+On **Windows**, use `build_msvc.bat` in a Visual Studio Command Prompt to build with msvc, or you can use `make win64` to use the mingw compiler toolchain from linux or windows to build the windows target. The MSVC build will automatically use openmp and the max threads appropriate for your CPU unless you set the `OMP_NUM_THREADS` env variable.
+
+On **Centos 7** and **Amazon Linux 2018**, use the `rungnu` Makefile target: `make rungnu` or `make runompgnu` to use openmp.
+
+On **Mac**, use clang from brew for the openmp build. Install clang as `brew install llvm` and use the installed clang binary to compile with openmp: `make runomp CC=/opt/homebrew/opt/llvm/bin/clang`
+
+## tests
+
+You can run tests simply with pytest:
+
+```bash
+$ pip install pytest
+$ pytest
+```
+
+This will currently invoke two tests inside `test_all.py`, which forward the model in both C and Python for 200 steps and check the output against a known good expected output. The tests currently run in only a few seconds, but will have to download and cache the stories260K models in a temporary `test` directory (only ~2MB download).
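For a sense of what such an end-to-end check looks like, here is a hypothetical sketch of the idea. It is not the actual `test_all.py`; the model file, flags and reference transcript are placeholders:

```python
# Sketch of an end-to-end check: run the C binary deterministically (greedy decoding,
# fixed seed) and compare its output against a known-good transcript saved earlier.
import subprocess

def test_c_inference_matches_expected():
    expected = open("expected_stories260K.txt").read().strip()  # placeholder reference output
    proc = subprocess.run(
        ["./run", "stories260K.bin", "-t", "0.0", "-s", "42", "-n", "200"],
        capture_output=True, text=True, check=True)
    assert proc.stdout.strip() == expected
```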
+## ack
+
+I trained the llama2.c storyteller models on a 4X A100 40GB box graciously provided by the excellent [Lambda labs](https://lambdalabs.com/service/gpu-cloud), thank you.
+
+## discord
+
+Figured it's possible to reuse my existing discord channel (that I use for my [zero to hero youtube series](https://karpathy.ai/zero-to-hero.html)), see the #llama2c channel on [discord](https://discord.gg/3zy8kqD9Cp), for any quick questions, related discussions, etc.
+
+## contributing
+
+A few words on this repo and the kinds of PRs that are likely to be accepted. What is the goal of this repo? Basically I think there will be a lot of interest in training or finetuning custom micro-LLMs (think ~100M - ~1B params, but let's say up to ~10B params) across a large diversity of applications, and deploying them in edge-adjacent environments (think MCUs, phones, web browsers, laptops, etc.). I'd like this repo to be the simplest, smallest, most hackable repo to support this workflow, both training and inference. In particular, this repo is not a complex framework with 1000 knobs controlling inscrutable code across a nested directory structure of hundreds of files. Instead, I expect most applications will wish to create a fork of this repo and hack it to their specific needs and deployment platforms.
+
+People who care about deployment efficiency above all else should look at [llama.cpp](https://github.com/ggerganov/llama.cpp). This repo still cares about efficiency, but not at the cost of simplicity, readability or portability. Basically, I expect that a lot of people come to this repo because the training code is 2 readable .py files and the inference code is 500 lines of C. So I'd like this to continue to be a kind of simplest "reference implementation" that can be easily hacked in a separate fork into whatever downstream application people are excited about. It shouldn't be full-featured. It shouldn't take 100 different options or settings. It shouldn't be the most efficient. A few examples:
+
+- someone re-ordered two loops to improve data locality for a small efficiency win => instant merge.
+- someone added the one line "pragma omp parallel for", which allows you to compile with OpenMP and dramatically speed up the code, or acts as just a comment if you don't compile it that way => instant merge.
+- bug fixes and touchups etc. => happy to merge
+
+A few examples of PRs that are not an excellent fit:
+
+- adding more than several #ifdefs all over the place in the code. If they are localized / few, might be okay.
+- adding a lot of code that is very specific to some specific platform (e.g. MCUs, or some special version of linux or processor). These may be a better fit for forks of the project, and I am very happy to maintain a list of these forks in the section below.
+- adding hundreds of lines of code to run.c that are only active in specific scenarios or platforms.
+
+If your candidate PRs have elements of these, it doesn't mean they won't get merged, it just means they will make it into the gray territory. TLDR: I am eager to merge any mostly small, mostly localized, broadly applicable, clean changes that improve the efficiency and portability of the repo, while keeping its hackability and readability. I appreciate all PRs seeking to help me improve the project, thank you! <3.
+
+## notable forks
+
+- Rust
+  - [llama2.rs](https://github.com/gaxler/llama2.rs) by @[gaxler](https://github.com/gaxler): a Rust port of this project
+  - [llama2.rs](https://github.com/leo-du/llama2.rs) by @[leo-du](https://github.com/leo-du): A Rust port of this project
+  - [llama2-rs](https://github.com/danielgrittner/llama2-rs) by @[danielgrittner](https://github.com/danielgrittner): a Rust port of this project
+  - [llama2.rs](https://github.com/lintian06/llama2.rs) by @[lintian06](https://github.com/lintian06): A Rust port of this project
+- Go
+  - [go-llama2](https://github.com/tmc/go-llama2) by @[tmc](https://github.com/tmc): a Go port of this project
+  - [llama2.go](https://github.com/nikolaydubina/llama2.go) by @[nikolaydubina](https://github.com/nikolaydubina): a Go port of this project
+  - [llama2.go](https://github.com/haormj/llama2.go) by @[haormj](https://github.com/haormj): a Go port of this project
+  - [llama2.go](https://github.com/saracen/llama2.go) by @[saracen](https://github.com/saracen): a Go port of this project
+- Android
+  - [llama2.c-android](https://github.com/Manuel030/llama2.c-android): by @[Manuel030](https://github.com/Manuel030): adds Android binaries of this project
+  - [llama2.c-android-wrapper](https://github.com/celikin/llama2.c-android-wrapper): by @[celikin](https://github.com/celikin): added JNI wrapper, PoC
+- C++
+  - [llama2.cpp](https://github.com/leloykun/llama2.cpp) by @[leloykun](https://github.com/leloykun): a C++ port of this project
+- JavaScript
+  - [llama2.js](https://github.com/epicure/llama2.js) by @[epicure](https://github.com/epicure): a JavaScript port of this project
+  - [llama2.ts](https://github.com/wizzard0/llama2.ts) by @[oleksandr_now](https://twitter.com/oleksandr_now): a TypeScript port of this project. Full Llama2-7B capable.
+ - [llama2.c-emscripten](https://github.com/gohai/llama2.c-emscripten) by @[gohai](https://github.com/gohai): Emscripten (JavaScript) port, based on @ggerganov's initial prototype +- Zig + - [llama2.zig](https://github.com/cgbur/llama2.zig) by @[cgbur](https://github.com/cgbur): A Zig port of this project + - [llama2.zig](https://github.com/vodkaslime/llama2.zig) by @[vodkaslime](https://github.com/vodkaslime): a Zig port of this project + - [llama2.zig](https://github.com/clebert/llama2.zig) by @[clebert](https://github.com/clebert): a Zig port of this project +- Julia + - [llama2.jl](https://github.com/juvi21/llama2.jl) by @[juvi21](https://github.com/juvi21): a Julia port of this project +- Scala + - [llama2.scala](https://github.com/jrudolph/llama2.scala) by @[jrudolph](https://github.com/jrudolph): a Scala port of this project +- Java + - [llama2.java](https://github.com/mukel/llama2.java) by @[mukel](https://github.com/mukel): a Java port of this project +- Kotlin + - [llama2.kt](https://github.com/madroidmaq/llama2.kt) by @[madroidmaq](https://github.com/madroidmaq): a Kotlin port of this project +- Python + - [llama2.py](https://github.com/tairov/llama2.py) by @[tairov](https://github.com/tairov): a simple one file pure Python port of this project with zero dependencies +- C# + - [llama2.cs](https://github.com/trrahul/llama2.cs) by @[trrahul](https://github.com/trrahul): a C# port of this project +- WebAssembly + - [icpp-llm](https://github.com/icppWorld/icpp-llm): LLMs for the Internet Computer +- [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 +- [llama2.c-zh - Bilingual Chinese and English](https://github.com/chenyangMl/llama2.c-zh) by @[chenyangMl](https://github.com/chenyangMl): Expand tokenizer to support training and inference in both Chinese and English + +## unsorted todos + +- make it easier to add a new dataset with not too much pain +- should calculate freq_cis online in the script run.c instead of loading them +- int4/8 quantization +- export the model in a more sensible output format with a proper header, etc. +- support Llama 2 7B Chat models and tune run.c to Chat UI/UX +- llama2.cu investigate and merge +- (LoRA) finetuning and export of Llama 2 models + +## License + +MIT diff --git a/build_msvc.bat b/build_msvc.bat new file mode 100644 index 0000000..f3b2c98 --- /dev/null +++ b/build_msvc.bat @@ -0,0 +1 @@ +cl.exe /fp:fast /Ox /openmp /I. run.c win.c diff --git a/pubspec.lock b/pubspec.lock deleted file mode 100644 index fae61c8..0000000 --- a/pubspec.lock +++ /dev/null @@ -1,13 +0,0 @@ -# Generated by pub -# See https://dart.dev/tools/pub/glossary#lockfile -packages: - args: - dependency: "direct main" - description: - name: args - sha256: eef6c46b622e0494a36c5a12d10d77fb4e855501a91c1b9ef9339326e58f0596 - url: "https://pub.dev" - source: hosted - version: "2.4.2" -sdks: - dart: ">=3.1.0 <4.0.0" diff --git a/pubspec.yaml b/pubspec.yaml deleted file mode 100644 index 47a3ad5..0000000 --- a/pubspec.yaml +++ /dev/null @@ -1,10 +0,0 @@ -name: llama2.dart -description: A one file implementation of llama2 inference -version: 1.0.0 - -environment: - sdk: ^3.1.0 - -# Add regular dependencies here. 
-dependencies: - args: ^2.4.2 diff --git a/run.c b/run.c new file mode 100644 index 0000000..10d468b --- /dev/null +++ b/run.c @@ -0,0 +1,740 @@ +/* Inference for Llama-2 Transformer model in pure C */ + +#include +#include +#include +#include +#include +#include +#include +#if defined _WIN32 + #include "win.h" +#else + #include + #include +#endif +// ---------------------------------------------------------------------------- +// Transformer and RunState structs, and related memory management + +typedef struct { + int dim; // transformer dimension + int hidden_dim; // for ffn layers + int n_layers; // number of layers + int n_heads; // number of query heads + int n_kv_heads; // number of key/value heads (can be < query heads because of multiquery) + int vocab_size; // vocabulary size, usually 256 (byte-level) + int seq_len; // max sequence length +} Config; + +typedef struct { + // token embedding table + float* token_embedding_table; // (vocab_size, dim) + // weights for rmsnorms + float* rms_att_weight; // (layer, dim) rmsnorm weights + float* rms_ffn_weight; // (layer, dim) + // weights for matmuls. note dim == n_heads * head_size + float* wq; // (layer, dim, n_heads * head_size) + float* wk; // (layer, dim, n_kv_heads * head_size) + float* wv; // (layer, dim, n_kv_heads * head_size) + float* wo; // (layer, n_heads * head_size, dim) + // weights for ffn + float* w1; // (layer, hidden_dim, dim) + float* w2; // (layer, dim, hidden_dim) + float* w3; // (layer, hidden_dim, dim) + // final rmsnorm + float* rms_final_weight; // (dim,) + // freq_cis for RoPE relatively positional embeddings (not used anymore) + float* freq_cis_real; // (seq_len, head_size/2) + float* freq_cis_imag; // (seq_len, head_size/2) + // (optional) classifier weights for the logits, on the last layer + float* wcls; +} TransformerWeights; + +typedef struct { + float prob; + int index; +} ProbIndex; // struct used when sorting probabilities during top-p sampling + +typedef struct { + // current wave of activations + float *x; // activation at current time stamp (dim,) + float *xb; // same, but inside a residual branch (dim,) + float *xb2; // an additional buffer just for convenience (dim,) + float *hb; // buffer for hidden dimension in the ffn (hidden_dim,) + float *hb2; // buffer for hidden dimension in the ffn (hidden_dim,) + float *q; // query (dim,) + float *k; // key (dim,) + float *v; // value (dim,) + float *att; // buffer for scores/attention values (n_heads, seq_len) + float *logits; // output logits + ProbIndex *probindex; // buffer used in top-p sampling + // kv cache + float* key_cache; // (layer, seq_len, dim) + float* value_cache; // (layer, seq_len, dim) +} RunState; + +void malloc_run_state(RunState* s, Config* p) { + // we calloc instead of malloc to keep valgrind happy + int kv_dim = (p->dim * p->n_kv_heads) / p->n_heads; + s->x = calloc(p->dim, sizeof(float)); + s->xb = calloc(p->dim, sizeof(float)); + s->xb2 = calloc(p->dim, sizeof(float)); + s->hb = calloc(p->hidden_dim, sizeof(float)); + s->hb2 = calloc(p->hidden_dim, sizeof(float)); + s->q = calloc(p->dim, sizeof(float)); + s->k = calloc(kv_dim, sizeof(float)); + s->v = calloc(kv_dim, sizeof(float)); + s->att = calloc(p->n_heads * p->seq_len, sizeof(float)); + s->logits = calloc(p->vocab_size, sizeof(float)); + s->probindex = calloc(p->vocab_size, sizeof(ProbIndex)); + s->key_cache = calloc(p->n_layers * p->seq_len * kv_dim, sizeof(float)); + s->value_cache = calloc(p->n_layers * p->seq_len * kv_dim, sizeof(float)); + // ensure all mallocs went 
fine + if (!s->x || !s->xb || !s->xb2 || !s->hb || !s->hb2 || !s->q + || !s->k || !s->v || !s->att || !s->logits || !s->key_cache + || !s->value_cache || !s->probindex) { + fprintf(stderr, "malloc failed!\n"); + exit(EXIT_FAILURE); + } +} + +void free_run_state(RunState* s) { + free(s->x); + free(s->xb); + free(s->xb2); + free(s->hb); + free(s->hb2); + free(s->q); + free(s->k); + free(s->v); + free(s->att); + free(s->logits); + free(s->probindex); + free(s->key_cache); + free(s->value_cache); +} + +// ---------------------------------------------------------------------------- +// initialization: read from checkpoint + +void checkpoint_init_weights(TransformerWeights *w, Config* p, float* ptr, int shared_weights) { + int head_size = p->dim / p->n_heads; + w->token_embedding_table = ptr; + ptr += p->vocab_size * p->dim; + w->rms_att_weight = ptr; + ptr += p->n_layers * p->dim; + w->wq = ptr; + ptr += p->n_layers * p->dim * (p->n_heads * head_size); + w->wk = ptr; + ptr += p->n_layers * p->dim * (p->n_kv_heads * head_size); + w->wv = ptr; + ptr += p->n_layers * p->dim * (p->n_kv_heads * head_size); + w->wo = ptr; + ptr += p->n_layers * (p->n_heads * head_size) * p->dim; + w->rms_ffn_weight = ptr; + ptr += p->n_layers * p->dim; + w->w1 = ptr; + ptr += p->n_layers * p->dim * p->hidden_dim; + w->w2 = ptr; + ptr += p->n_layers * p->hidden_dim * p->dim; + w->w3 = ptr; + ptr += p->n_layers * p->dim * p->hidden_dim; + w->rms_final_weight = ptr; + ptr += p->dim; + w->freq_cis_real = ptr; + ptr += p->seq_len * head_size / 2; + w->freq_cis_imag = ptr; + ptr += p->seq_len * head_size / 2; + w->wcls = shared_weights ? w->token_embedding_table : ptr; +} + +// ---------------------------------------------------------------------------- +// neural net blocks + +void rmsnorm(float* o, float* x, float* weight, int size) { + // calculate sum of squares + float ss = 0.0f; + for (int j = 0; j < size; j++) { + ss += x[j] * x[j]; + } + ss /= size; + ss += 1e-5f; + ss = 1.0f / sqrtf(ss); + // normalize and scale + for (int j = 0; j < size; j++) { + o[j] = weight[j] * (ss * x[j]); + } +} + +void softmax(float* x, int size) { + // find max value (for numerical stability) + float max_val = x[0]; + for (int i = 1; i < size; i++) { + if (x[i] > max_val) { + max_val = x[i]; + } + } + // exp and sum + float sum = 0.0f; + for (int i = 0; i < size; i++) { + x[i] = expf(x[i] - max_val); + sum += x[i]; + } + // normalize + for (int i = 0; i < size; i++) { + x[i] /= sum; + } +} + +void matmul(float* xout, float* x, float* w, int n, int d) { + // W (d,n) @ x (n,) -> xout (d,) + // by far the most amount of time is spent inside this little function + int i; + #pragma omp parallel for private(i) + for (i = 0; i < d; i++) { + float val = 0.0f; + for (int j = 0; j < n; j++) { + val += w[i * n + j] * x[j]; + } + xout[i] = val; + } +} + +void transformer(int token, int pos, Config* p, RunState* s, TransformerWeights* w) { + + // a few convenience variables + float *x = s->x; + int dim = p->dim; + int kv_dim = (p->dim * p->n_kv_heads) / p->n_heads; + int kv_mul = p->n_heads / p->n_kv_heads; // integer multiplier of the kv sharing in multiquery + int hidden_dim = p->hidden_dim; + int head_size = dim / p->n_heads; + + // copy the token embedding into x + float* content_row = &(w->token_embedding_table[token * dim]); + memcpy(x, content_row, dim*sizeof(*x)); + + // forward all the layers + for(int l = 0; l < p->n_layers; l++) { + + // attention rmsnorm + rmsnorm(s->xb, x, w->rms_att_weight + l*dim, dim); + + // qkv matmuls for this 
position + matmul(s->q, s->xb, w->wq + l*dim*dim, dim, dim); + matmul(s->k, s->xb, w->wk + l*dim*kv_dim, dim, kv_dim); + matmul(s->v, s->xb, w->wv + l*dim*kv_dim, dim, kv_dim); + + // RoPE relative positional encoding: complex-valued rotate q and k in each head + for (int i = 0; i < dim; i+=2) { + int head_dim = i % head_size; + float freq = 1.0f / powf(10000.0f, head_dim / (float)head_size); + float val = pos * freq; + float fcr = cosf(val); + float fci = sinf(val); + int rotn = i < kv_dim ? 2 : 1; // how many vectors? 2 = q & k, 1 = q only + for (int v = 0; v < rotn; v++) { + float* vec = v == 0 ? s->q : s->k; // the vector to rotate (query or key) + float v0 = vec[i]; + float v1 = vec[i+1]; + vec[i] = v0 * fcr - v1 * fci; + vec[i+1] = v0 * fci + v1 * fcr; + } + } + + // save key,value at this time step (pos) to our kv cache + int loff = l * p->seq_len * kv_dim; // kv cache layer offset for convenience + float* key_cache_row = s->key_cache + loff + pos * kv_dim; + float* value_cache_row = s->value_cache + loff + pos * kv_dim; + memcpy(key_cache_row, s->k, kv_dim * sizeof(*key_cache_row)); + memcpy(value_cache_row, s->v, kv_dim * sizeof(*value_cache_row)); + + // multihead attention. iterate over all heads + int h; + #pragma omp parallel for private(h) + for (h = 0; h < p->n_heads; h++) { + // get the query vector for this head + float* q = s->q + h * head_size; + // attention scores for this head + float* att = s->att + h * p->seq_len; + // iterate over all timesteps, including the current one + for (int t = 0; t <= pos; t++) { + // get the key vector for this head and at this timestep + float* k = s->key_cache + loff + t * kv_dim + (h / kv_mul) * head_size; + // calculate the attention score as the dot product of q and k + float score = 0.0f; + for (int i = 0; i < head_size; i++) { + score += q[i] * k[i]; + } + score /= sqrtf(head_size); + // save the score to the attention buffer + att[t] = score; + } + + // softmax the scores to get attention weights, from 0..pos inclusively + softmax(att, pos + 1); + + // weighted sum of the values, store back into xb + float* xb = s->xb + h * head_size; + memset(xb, 0, head_size * sizeof(float)); + for (int t = 0; t <= pos; t++) { + // get the value vector for this head and at this timestep + float* v = s->value_cache + loff + t * kv_dim + (h / kv_mul) * head_size; + // get the attention weight for this timestep + float a = att[t]; + // accumulate the weighted value into xb + for (int i = 0; i < head_size; i++) { + xb[i] += a * v[i]; + } + } + } + + // final matmul to get the output of the attention + matmul(s->xb2, s->xb, w->wo + l*dim*dim, dim, dim); + + // residual connection back into x + for (int i = 0; i < dim; i++) { + x[i] += s->xb2[i]; + } + + // ffn rmsnorm + rmsnorm(s->xb, x, w->rms_ffn_weight + l*dim, dim); + + // Now for FFN in PyTorch we have: self.w2(F.silu(self.w1(x)) * self.w3(x)) + // first calculate self.w1(x) and self.w3(x) + matmul(s->hb, s->xb, w->w1 + l*dim*hidden_dim, dim, hidden_dim); + matmul(s->hb2, s->xb, w->w3 + l*dim*hidden_dim, dim, hidden_dim); + + // F.silu; silu(x)=x*σ(x),where σ(x) is the logistic sigmoid + for (int i = 0; i < hidden_dim; i++) { + s->hb[i] = s->hb[i] * (1.0f / (1.0f + expf(-s->hb[i]))); + } + + // elementwise multiply with w3(x) + for (int i = 0; i < hidden_dim; i++) { + s->hb[i] = s->hb[i] * s->hb2[i]; + } + + // final matmul to get the output of the ffn + matmul(s->xb, s->hb, w->w2 + l*dim*hidden_dim, hidden_dim, dim); + + // residual connection + for (int i = 0; i < dim; i++) { + x[i] += 
s->xb[i]; + } + } + + // final rmsnorm + rmsnorm(x, x, w->rms_final_weight, dim); + + // classifier into logits + matmul(s->logits, x, w->wcls, p->dim, p->vocab_size); +} + +// ---------------------------------------------------------------------------- +// byte pair encoding (BPE) tokenizer, encodes strings into tokens so we can prompt + +typedef struct { + char *str; + int id; +} TokenIndex; + +int compare_tokens(const void *a, const void *b) { + return strcmp(((TokenIndex*)a)->str, ((TokenIndex*)b)->str); +} + +int str_lookup(char *str, TokenIndex *sorted_vocab, int vocab_size) { + // efficiently find the perfect match for str in vocab, return its index or -1 if not found + TokenIndex tok = { .str = str }; // acts as the key to search for + TokenIndex *res = bsearch(&tok, sorted_vocab, vocab_size, sizeof(TokenIndex), compare_tokens); + return res != NULL ? res->id : -1; +} + +void bpe_encode(char *text, char **vocab, float *vocab_scores, int vocab_size, unsigned int max_token_length, int *tokens, int *n_tokens) { + + // sort vocabulary + TokenIndex *sorted_vocab = malloc(vocab_size * sizeof(TokenIndex)); + for (int i = 0; i < vocab_size; i++) { + sorted_vocab[i].str = vocab[i]; + sorted_vocab[i].id = i; + } + qsort(sorted_vocab, vocab_size, sizeof(TokenIndex), compare_tokens); + + // create a temporary buffer that will store merge candidates of always two consecutive tokens + char* str_buffer = malloc((max_token_length*2 +1 +2) * sizeof(char)); // *2 for concat, +1 for null terminator +2 for UTF8 (in case max_token_lenght is 1) + size_t str_len = 0; + + // add_dummy_prefix is true by default + tokens[0] = str_lookup(" ", sorted_vocab, vocab_size); + *n_tokens = 1; // the number of tokens + + // Okay UTF-8 time. This will get messy. Here is the reference from Wikipedia: + // Code point ↔ UTF-8 conversion + // First code point Last code point Byte 1 Byte 2 Byte 3 Byte 4 + // U+0000 U+007F 0xxxxxxx + // U+0080 U+07FF 110xxxxx 10xxxxxx + // U+0800 U+FFFF 1110xxxx 10xxxxxx 10xxxxxx + // U+10000 U+10FFFF 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx + + // process the raw (UTF-8) byte sequence of the input string + for (char *c = text; *c != '\0'; c++) { + + // reset buffer if the current byte is ASCII or a leading byte + // 0xC0 is 11000000, so (*c & 0xC0) keeps the first 2 bits and zeros the rest + // 0x80 is 10000000 + // in UTF-8, all continuation bytes start with "10" in first two bits + // so in English this is: "if this byte is not a continuation byte" + if ((*c & 0xC0) != 0x80) { + // this byte must be either a leading byte (11...) or an ASCII char (0x...) + // => reset our location, as we're starting a new UTF-8 codepoint + str_len = 0; + } + + // append the current byte to the buffer + str_buffer[str_len++] = *c; // ++ is post-increment, incremented after this line + str_buffer[str_len] = '\0'; + + // while the next character is a continuation byte, continue appending + // but if there are too many of them, just stop to avoid overruning str_buffer size. 
+ if ((*(c+1) & 0xC0) == 0x80 && str_len < 4) { + continue; + } + + // ok c+1 is not a continuation byte, so we've read in a full codepoint + int id = str_lookup(str_buffer, sorted_vocab, vocab_size); + + if (id != -1) { + // we found this codepoint in vocab, add it as a token + tokens[(*n_tokens)++] = id; + } else { + // byte_fallback encoding: just encode each byte as a token + // +3 is here because the first 3 vocab elements are , , + // so the individual bytes only start at index 3 + for (int i=0; i < str_len; i++) { + tokens[(*n_tokens)++] = (unsigned char)str_buffer[i] + 3; + } + } + str_len = 0; // protect against a sequence of stray UTF8 continuation bytes + } + + // merge the best consecutive pair each iteration, according the scores in vocab_scores + while (1) { + float best_score = -1e10; + int best_id = -1; + int best_idx = -1; + + for (int i=0; i < (*n_tokens-1); i++) { + // check if we can merge the pair (tokens[i], tokens[i+1]) + sprintf(str_buffer, "%s%s", vocab[tokens[i]], vocab[tokens[i+1]]); + int id = str_lookup(str_buffer, sorted_vocab, vocab_size); + if (id != -1 && vocab_scores[id] > best_score) { + // this merge pair exists in vocab! record its score and position + best_score = vocab_scores[id]; + best_id = id; + best_idx = i; + } + } + + if (best_idx == -1) { + break; // we couldn't find any more pairs to merge, so we're done + } + + // merge the consecutive pair (best_idx, best_idx+1) into new token best_id + tokens[best_idx] = best_id; + // delete token at position best_idx+1, shift the entire sequence back 1 + for (int i = best_idx+1; i < (*n_tokens-1); i++) { + tokens[i] = tokens[i+1]; + } + (*n_tokens)--; // token length decreased + } + + free(str_buffer); + free(sorted_vocab); +} + +// ---------------------------------------------------------------------------- +// utilities: time / rng + +long time_in_ms() { + // return time in milliseconds, for benchmarking the model speed + struct timespec time; + clock_gettime(CLOCK_REALTIME, &time); + return time.tv_sec * 1000 + time.tv_nsec / 1000000; +} + +unsigned long long rng_seed; +unsigned int random_u32() { + // xorshift rng: https://en.wikipedia.org/wiki/Xorshift#xorshift.2A + rng_seed ^= rng_seed >> 12; + rng_seed ^= rng_seed << 25; + rng_seed ^= rng_seed >> 27; + return (rng_seed * 0x2545F4914F6CDD1Dull) >> 32; +} +float random_f32() { // random float32 in [0,1) + return (random_u32() >> 8) / 16777216.0f; +} + +// ---------------------------------------------------------------------------- +// sampling can be done in a few ways: greedy argmax, sampling, top-p sampling + +int argmax(float* probabilities, int n) { + // return the index that has the highest probability + int max_i = 0; + float max_p = probabilities[0]; + for (int i = 1; i < n; i++) { + if (probabilities[i] > max_p) { + max_i = i; + max_p = probabilities[i]; + } + } + return max_i; +} + +int sample(float* probabilities, int n) { + // sample index from probabilities (they must sum to 1!) 
+ float r = random_f32(); + float cdf = 0.0f; + for (int i = 0; i < n; i++) { + cdf += probabilities[i]; + if (r < cdf) { + return i; + } + } + return n - 1; // in case of rounding errors +} + +int compare(const void* a, const void* b) { + ProbIndex* a_ = (ProbIndex*) a; + ProbIndex* b_ = (ProbIndex*) b; + if (a_->prob > b_->prob) return -1; + if (a_->prob < b_->prob) return 1; + return 0; +} + +int sample_topp(float* probabilities, int n, float topp, ProbIndex* probindex) { + // top-p sampling (or "nucleus sampling") samples from the smallest set of + // tokens that exceed probability topp. This way we never sample tokens that + // have very low probabilities and are less likely to go "off the rails". + + int n0 = 0; + // quicksort indices in descending order of probabilities + // values smaller than (1 - topp) / (n - 1) cannot be part of the result + // so for efficiency we crop these out as candidates before sorting + const float cutoff = (1.0f - topp) / (n - 1); + for (int i = 0; i < n; i++) { + if (probabilities[i] >= cutoff) { + probindex[n0].index = i; + probindex[n0].prob = probabilities[i]; + n0++; + } + } + qsort(probindex, n0, sizeof(ProbIndex), compare); + + // truncate the list where cumulative probability exceeds topp + float cumulative_prob = 0.0f; + int last_idx = n0 - 1; // in case of rounding errors consider all elements + for (int i = 0; i < n0; i++) { + cumulative_prob += probindex[i].prob; + if (cumulative_prob > topp) { + last_idx = i; + break; // we've exceeded topp by including last_idx + } + } + + // sample from the truncated list + float r = random_f32() * cumulative_prob; + float cdf = 0.0f; + for (int i = 0; i <= last_idx; i++) { + cdf += probindex[i].prob; + if (r < cdf) { + return probindex[i].index; + } + } + return probindex[last_idx].index; // in case of rounding errors +} + + +// ---------------------------------------------------------------------------- +// int main + +void error_usage() { + fprintf(stderr, "Usage: run [options]\n"); + fprintf(stderr, "Example: run model.bin -n 256 -i \"Once upon a time\"\n"); + fprintf(stderr, "Options:\n"); + fprintf(stderr, " -t temperature, default 1.0\n"); + fprintf(stderr, " -p p value in top-p (nucleus) sampling. default 0.9\n"); + fprintf(stderr, " -s random seed, default time(NULL)\n"); + fprintf(stderr, " -n number of steps to run for, default 256. 0 = max_seq_len\n"); + fprintf(stderr, " -i input prompt\n"); + fprintf(stderr, " -z optional path to custom tokenizer\n"); + exit(EXIT_FAILURE); +} + +int main(int argc, char *argv[]) { + + // default inits + char *checkpoint = NULL; // e.g. out/model.bin + char *tokenizer = "tokenizer.bin"; + float temperature = 1.0f; // 0.0 = greedy deterministic. 1.0 = original. don't set higher + float topp = 0.9f; // top-p in nucleus sampling. 1.0 = off. 
0.9 works well, but slower + rng_seed = 0; // seed rng with time by default + int steps = 256; // number of steps to run for + char *prompt = NULL; // prompt string + + // poor man's C argparse so we can override the defaults above from the command line + if (argc >= 2) { checkpoint = argv[1]; } else { error_usage(); } + for (int i = 2; i < argc; i+=2) { + // do some basic validation + if (i + 1 >= argc) { error_usage(); } // must have arg after flag + if (argv[i][0] != '-') { error_usage(); } // must start with dash + if (strlen(argv[i]) != 2) { error_usage(); } // must be -x (one dash, one letter) + // read in the args + if (argv[i][1] == 't') { temperature = atof(argv[i + 1]); } + else if (argv[i][1] == 'p') { topp = atof(argv[i + 1]); } + else if (argv[i][1] == 's') { rng_seed = atoi(argv[i + 1]); } + else if (argv[i][1] == 'n') { steps = atoi(argv[i + 1]); } + else if (argv[i][1] == 'i') { prompt = argv[i + 1]; } + else if (argv[i][1] == 'z') { tokenizer = argv[i + 1]; } + else { error_usage(); } + } + if(rng_seed == 0) { rng_seed = (unsigned int)time(NULL);} + + // read in the model.bin file + Config config; + TransformerWeights weights; + int fd = 0; // file descriptor for memory mapping + float* data = NULL; // memory mapped data pointer + ssize_t file_size; // size of the checkpoint file in bytes + { + FILE *file = fopen(checkpoint, "rb"); + if (!file) { fprintf(stderr, "Couldn't open file %s\n", checkpoint); return 1; } + // read in the config header + if (fread(&config, sizeof(Config), 1, file) != 1) { return 1; } + // negative vocab size is hacky way of signaling unshared weights. bit yikes. + int shared_weights = config.vocab_size > 0 ? 1 : 0; + config.vocab_size = abs(config.vocab_size); + // figure out the file size + fseek(file, 0, SEEK_END); // move file pointer to end of file + file_size = ftell(file); // get the file size, in bytes + fclose(file); + // memory map the Transformer weights into the data pointer + fd = open(checkpoint, O_RDONLY); // open in read only mode + if (fd == -1) { fprintf(stderr, "open failed!\n"); return 1; } + data = mmap(NULL, file_size, PROT_READ, MAP_PRIVATE, fd, 0); + if (data == MAP_FAILED) { fprintf(stderr, "mmap failed!\n"); return 1; } + float* weights_ptr = data + sizeof(Config)/sizeof(float); + checkpoint_init_weights(&weights, &config, weights_ptr, shared_weights); + } + // right now we cannot run for more than config.seq_len steps + if (steps <= 0 || steps > config.seq_len) { steps = config.seq_len; } + + // read in the tokenizer .bin file + char** vocab = (char**)malloc(config.vocab_size * sizeof(char*)); + float* vocab_scores = (float*)malloc(config.vocab_size * sizeof(float)); + unsigned int max_token_length; + { + FILE *file = fopen(tokenizer, "rb"); + if (!file) { fprintf(stderr, "couldn't load %s\n", tokenizer); return 1; } + if (fread(&max_token_length, sizeof(int), 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1; } + int len; + for (int i = 0; i < config.vocab_size; i++) { + if (fread(vocab_scores + i, sizeof(float), 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1;} + if (fread(&len, sizeof(int), 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1; } + vocab[i] = (char *)malloc(len + 1); + if (fread(vocab[i], len, 1, file) != 1) { fprintf(stderr, "failed read\n"); return 1; } + vocab[i][len] = '\0'; // add the string terminating token + } + fclose(file); + } + + // create and init the application RunState + RunState state; + malloc_run_state(&state, &config); + + // process the prompt, if any + 
int *prompt_tokens = NULL; + int num_prompt_tokens = 0; + if (prompt != NULL) { + prompt_tokens = (int*)malloc((strlen(prompt)+1) * sizeof(int)); + bpe_encode(prompt, vocab, vocab_scores, config.vocab_size, max_token_length, prompt_tokens, &num_prompt_tokens); + } + + // start the main loop + long start = 0; // used to time our code, only initialized after first iteration + int next; // will store the next token in the sequence + int token = 1; // init with token 1 (=BOS), as done in Llama-2 sentencepiece tokenizer + int pos = 0; // position in the sequence + while (pos < steps) { + + // forward the transformer to get logits for the next token + transformer(token, pos, &config, &state, &weights); + + // advance the state state machine + if(pos < num_prompt_tokens) { + // if we are still processing the input prompt, force the next prompt token + next = prompt_tokens[pos]; + } else { + // sample the next token + if (temperature == 0.0f) { + // greedy argmax sampling: take the token with the highest probability + next = argmax(state.logits, config.vocab_size); + } else { + // apply the temperature to the logits + for (int q=0; q= 1) { + // simply sample from the predicted probability distribution + next = sample(state.logits, config.vocab_size); + } else { + // top-p (nucleus) sampling, clamping the least likely tokens to zero + next = sample_topp(state.logits, config.vocab_size, topp, state.probindex); + } + } + } + pos++; + + // data-dependent terminating condition: the BOS (1) token delimits sequences + if (next == 1) { break; } + + // following BOS (1) token, sentencepiece decoder strips any leading whitespace (see PR #89) + char *token_str = (token == 1 && vocab[next][0] == ' ') ? vocab[next]+1 : vocab[next]; + // careful, some tokens designate raw bytes, and look like e.g. '<0x01>' + unsigned char byte_val; + if (sscanf(token_str, "<0x%02hhX>", &byte_val) == 1) { + // ok this token is a raw byte token, carefuly to only print printable chars or whitespace + // some of the other bytes can be various control codes, backspace, etc. 
=> skip + if (isprint(byte_val) || isspace(byte_val)) { + char byte_piece[2]; + byte_piece[0] = byte_val; + byte_piece[1] = '\0'; + printf("%s", byte_piece); + } + } else { + printf("%s", token_str); + } + fflush(stdout); + token = next; + + // init the timer here because the first iteration can be slower + if (start == 0) { start = time_in_ms(); } + } + printf("\n"); + + // report achieved tok/s (pos-1 because the timer starts after first iteration) + if (pos > 1) { + long end = time_in_ms(); + fprintf(stderr, "achieved tok/s: %f\n", (pos-1) / (double)(end-start)*1000); + } + + // memory and file handles cleanup + free_run_state(&state); + for (int i = 0; i < config.vocab_size; i++) { free(vocab[i]); } + free(vocab); + free(vocab_scores); + if (prompt_tokens != NULL) free(prompt_tokens); + if (data != MAP_FAILED) munmap(data, file_size); + if (fd != -1) close(fd); + return 0; +} diff --git a/run.dart b/run.dart deleted file mode 100644 index 7b7be5f..0000000 --- a/run.dart +++ /dev/null @@ -1,799 +0,0 @@ -import 'dart:convert'; -import 'dart:developer'; -import 'dart:io'; -import 'dart:math'; -import 'dart:typed_data'; - -import 'package:args/args.dart'; - -class Config { - // transformer dimension - late int dim; - // for ffn layers - late int hidden_dim; - // number of layers - late int n_layers; - // number of query heads - late int n_heads; - // number of key/value heads (can be < query heads because of multiquery) - late int n_kv_heads; - // vocabulary size, usually 256 (byte-level) - late int vocab_size; - // max sequence length - late int seq_len; - - @override - String toString() { - return "Config(dim: $dim, hidden_dim: $hidden_dim, n_layers: $n_layers, n_heads: $n_heads, n_kv_heads: $n_kv_heads, vocab_size: $vocab_size, seq_len: $seq_len)"; - } -} - -const configByteSize = 7 * 4; - -//We are using 32 bit percision floats here -class TransformerWeights { - // token embedding table - late Float32List token_embedding_table; // (vocab_size, dim) - // weights for rmsnorms - late Float32List rms_att_weight; // (layer, dim) rmsnorm weights - late Float32List rms_ffn_weight; // (layer, dim) - // weights for matmuls. 
note dim == n_heads * head_size - late Float32List wq; // (layer, dim, n_heads * head_size) - late Float32List wk; // (layer, dim, n_kv_heads * head_size) - late Float32List wv; // (layer, dim, n_kv_heads * head_size) - late Float32List wo; // (layer, n_heads * head_size, dim) - // weights for ffn - late Float32List w1; // (layer, hidden_dim, dim) - late Float32List w2; // (layer, dim, hidden_dim) - late Float32List w3; // (layer, hidden_dim, dim) - // final rmsnorm - late Float32List rms_final_weight; // (dim,) - // freq_cis for RoPE relatively positional embeddings - late Float32List freq_cis_real; // (seq_len, head_size/2) - late Float32List freq_cis_imag; // (seq_len, head_size/2) - // (optional) classifier weights for the logits, on the last layer - late Float32List wcls; -} - -class ProbIndex { - double prob; - int index; - ProbIndex(this.prob, this.index); -} - -class TokenIndex { - String str; - int id; - TokenIndex(this.str, this.id); -} - -class RunState { - // current wave of activations - late Float32List x; // activation at current time stamp (dim,) - late Float32List xb; // same, but inside a residual branch (dim,) - late Float32List xb2; // an additional buffer just for convenience (dim,) - late Float32List hb; // buffer for hidden dimension in the ffn (hidden_dim,) - late Float32List hb2; // buffer for hidden dimension in the ffn (hidden_dim,) - late Float32List q; // query (dim,) - late Float32List k; // key (dim,) - late Float32List v; // value (dim,) - late Float32List att; // buffer for scores/attention values (n_heads, seq_len) - late Float32List logits; // output logits - late List probindex; // buffer used in top-p sampling - // kv cache - late Float32List key_cache; // (layer, seq_len, dim) - late Float32List value_cache; // (layer, seq_len, dim) -} - -initialize_run_state(RunState s, Config config) { - // we calloc instead of malloc to keep valgrind happy - int kv_dim = (config.dim * config.n_kv_heads) ~/ config.n_heads; - s.x = Float32List(config.dim); - s.xb = Float32List(config.dim); - s.xb2 = Float32List(config.dim); - s.hb = Float32List(config.hidden_dim); - s.hb2 = Float32List(config.hidden_dim); - s.q = Float32List(config.dim); - s.k = Float32List(kv_dim); - s.v = Float32List(kv_dim); - s.att = Float32List(config.n_heads * config.seq_len); - s.logits = Float32List(config.vocab_size); - s.probindex = []; - s.key_cache = Float32List(config.n_layers * config.seq_len * kv_dim); - s.value_cache = Float32List(config.n_layers * config.seq_len * kv_dim); -} - -class Tokenizer { - List vocab; - List vocab_scores; - Tokenizer( - this.vocab, - this.vocab_scores, - ); - - bpe_encode(String text, List tokens, int n_tokens) { - tokens = []; - - // First pass, combine raw tokens - text.runes.forEach((element) { - String decoded = utf8.decode([element]); - if (vocab.contains(decoded)) { - tokens.add(vocab.indexOf(decoded)); - } - }); - - // Second pass, combine bpe tokens - while (true) { - double best_score = -1e10; - int best_id = -1; - int best_index = -1; - - for (int i = 0; i < tokens.length - 1; i++) { - String newStr = vocab[tokens[i]] + vocab[tokens[i + 1]]; - int newStrIndex = vocab.indexOf(newStr); - if (newStrIndex != -1 && vocab_scores[newStrIndex] > best_score) { - best_score = vocab_scores[newStrIndex]; - best_id = newStrIndex; - best_index = i; - } - } - - if (best_index == -1) break; - - tokens[best_index] = best_id; - tokens.removeAt(best_index + 1); - } - return tokens; - } -} - -// 
---------------------------------------------------------------------------- -// sampling can be done in a few ways: greedy argmax, sampling, top-p sampling - -int argmax(Float32List probabilities) { - // return the index that has the highest probability - int max_i = 0; - double max_p = probabilities[0]; - for (int i = 1; i < probabilities.length; i++) { - if (probabilities[i] > max_p) { - max_i = i; - max_p = probabilities[i]; - } - } - return max_i; -} - -int sample(Float32List probabilities) { - // sample index from probabilities (they must sum to 1!) - double r = Random().nextDouble(); - double cdf = 0.0; - for (int i = 0; i < probabilities.length; i++) { - cdf += probabilities[i]; - if (r < cdf) return i; - } - return probabilities.length - 1; // in case of rounding errors -} - -int sample_topp(Float32List probabilities, double topp) { - // top-p sampling (or "nucleus sampling") samples from the smallest set of - // tokens that exceed probability topp. This way we never sample tokens that - // have very low probabilities and are less likely to go "off the rails". - - // quicksort indices in descending order of probabilities - // values smaller than (1 - topp) / (n - 1) cannot be part of the result - // In the original llama.c they crop these out as candidates before sorting - List probindex = []; - - double cutoff = (1.0 - topp) / (probabilities.length - 1); - - for (int i = 0; i < probabilities.length; i++) { - if (probabilities[i] >= cutoff) { - probindex.add(ProbIndex(probabilities[i], i)); - } - } - - probindex.sort((a, b) => b.prob.compareTo(a.prob)); - - // truncate the list where cumulative probability exceeds topp - double cumulative_prob = 0.0; - int last_idx = - probindex.length - 1; // in case of rounding errors consider all elements - for (int i = 0; i < probindex.length; i++) { - cumulative_prob += probindex[i].prob; - if (cumulative_prob > topp) { - last_idx = i; - break; // we've exceeded topp by including last_idx - } - } - - probindex.removeRange(last_idx + 1, probindex.length); - - // sample from the truncated list - double r = new Random().nextDouble() * cumulative_prob; - double cdf = 0.0; - for (int i = 0; i <= last_idx; i++) { - cdf += probindex[i].prob; - if (r < cdf) { - return probindex[i].index; - } - } - return probindex[last_idx].index; // in case of rounding errors -} - -rmsnorm(Float32List out, Float32List x, Float32List weight) { - assert(out.length == x.length); - assert(x.length == weight.length); - // calculate sum of squares - double ss = 0.0; - x.forEach((element) { - ss += element * element; - }); - ss /= x.length; - ss += 1e-5; - ss = 1.0 / sqrt(ss); // sqr mean sum of squares - - // normalize and scale - for (int j = 0; j < x.length; j++) { - out[j] = weight[j] * (ss * x[j]); - } -} - -void softmax(Float32List x, int size) { - // find max value (for numerical stability) - double max_val = x[0]; - for (int i = 1; i < size; i++) { - if (x[i] > max_val) { - max_val = x[i]; - } - } - // exp and sum - double sum = 0.0; - for (int i = 0; i < size; i++) { - x[i] = exp(x[i] - max_val); - sum += x[i]; - } - // normalize - for (int i = 0; i < size; i++) x[i] /= sum; -} - -void matmul(Float32List out, Float32List x, Float32List w, int n, int d) { - assert(out.length == d); - assert(x.length == n); - assert(w.length == n * d); - - // W (d,n) @ x (n,) -> xout (d,) - // by far the most amount of time is spent inside this little function - for (int i = 0; i < d; i++) { - double val = 0.0; - for (int j = 0; j < n; j++) { - val += w[i * n + j] * x[j]; - } - 
out[i] = val; - } -} - -transformer(int token, int pos, Config config, RunState state, - TransformerWeights weights) { - int dim = config.dim; - int kv_dim = config.dim * config.n_kv_heads ~/ config.n_heads; - int kv_mul = config.n_kv_heads ~/ - config.n_heads; // integer multiplier of the kv sharing in multiquery - int hidden_dim = config.hidden_dim; - int head_size = config.dim ~/ config.n_heads; - - // copy the token embedding into x - Float32List current_row = Float32List.sublistView( - weights.token_embedding_table, - token * config.dim, - (token + 1) * config.dim); - for (int i = 0; i < config.dim; i++) state.x[i] = current_row[i]; - - // Note: Divide by 2 here because Rope Parameters repeat after every 2 dimensions - Float32List freq_cis_real_row = weights.freq_cis_real - .sublist(pos * head_size ~/ 2, (pos + 1) * head_size ~/ 2); - Float32List freq_cis_imag_row = weights.freq_cis_imag - .sublist(pos * head_size ~/ 2, (pos + 1) * head_size ~/ 2); - - // forward all the layers - for (int l = 0; l < config.n_layers; l++) { - rmsnorm( - state.xb, - state.x, - Float32List.sublistView( - weights.rms_att_weight, l * dim, (l + 1) * dim)); - - // qkv matmuls for this position - // NOTE:yiming This look slike a place for lots of paralle work :thinking: - // x = x @ wq, wq with dim * dim - matmul( - state.q, - state.xb, - Float32List.sublistView(weights.wq, l * dim * dim, (l + 1) * dim * dim), - dim, - dim); - - // x = x @ wk, wq with dim * kv_dim - matmul( - state.k, - state.xb, - Float32List.sublistView( - weights.wk, l * dim * kv_dim, (l + 1) * dim * kv_dim), - dim, - kv_dim); - - // x = x @ wv, wq with dim * kv_dim - matmul( - state.v, - state.xb, - Float32List.sublistView( - weights.wv, l * dim * kv_dim, (l + 1) * dim * kv_dim), - dim, - kv_dim); - - // RoPE relative positional encoding: complex-valued rotate q and k by freq_cis in each head - // https://arxiv.org/pdf/2104.09864v4.pdf - // We are just reusing the loop for k and q distance calculation - for (int v = 0; v < 2; v++) { - Float32List vec = - v == 0 ? state.q : state.k; // the vector to rotate (query or key) - int vec_size = v == 0 ? dim : kv_dim; // the size of the vector - - // We are only rotating in a group of 2 - for (int i = 0; i < vec_size; i += 2) { - double v0 = vec[i]; - double v1 = vec[i + 1]; - double fcr = freq_cis_real_row[(i % head_size) ~/ 2]; - double fci = freq_cis_imag_row[(i % head_size) ~/ 2]; - // See the RoPE paper for this section - // 3.4.2 Computational efficient realization of rotary matrix multiplication - // x1 = x1 + cos mθ_1 - x2 sin mθ_1 - vec[i] = v0 * fcr - v1 * fci; - // x2 = x1 sin mθ_1 + x2 + cos mθ_1 - vec[i + 1] = v0 * fci + v1 * fcr; - } - } - - // save key,value at this time step (pos) to our kv cache - // offset by n_layer * seq_len * kv_dim - int loff = - l * config.seq_len * kv_dim; // kv cache layer offset for convenience - // key cache = loff + pos * kv_dim - int key_cache_row_offset = loff + pos * kv_dim; - // save k,v into kv cache - for (int i = 0; i < state.k.length; i++) - state.key_cache[key_cache_row_offset + i] = state.k[i]; - - for (int i = 0; i < state.v.length; i++) - state.value_cache[key_cache_row_offset + i] = state.v[i]; - - // multihead attention. 
iterate over all heads - for (int h = 0; h < config.n_heads; h++) { - // get the query vector for this head - Float32List q = - Float32List.sublistView(state.q, h * head_size, (h + 1) * head_size); - // attention scores for this head - Float32List att = Float32List.sublistView( - state.att, h * config.seq_len, (h + 1) * config.seq_len); - // iterate over all timesteps, including the current one - for (int t = 0; t <= pos; t++) { - // get the key vector for this head and at this timestep - // kv_mul is just 1 now - int key_cache_offset = loff + - t * kv_dim + - (h ~/ kv_mul) * - head_size; // it's still offset by head size kv_dim = head_size * h! - // but sometimes multiple head can share a key_cache - Float32List k = Float32List.sublistView( - state.key_cache, key_cache_offset, key_cache_offset + kv_dim); - // calculate the attention score as the dot product of q and k - double score = 0.0; - for (int ll = 0; ll < head_size; ll++) { - score += q[ll] * k[ll]; - } - // TODO(yiming): reread the paper to understand better - score /= sqrt(head_size); - // save the score to the attention buffer - att[t] = score; - } - - // softmax the scores to get attention weights, from 0..pos inclusively - // soft max happens before attention * v - // softmax is done on the entire attention - // I think there's some trick in pytorch for this - softmax(att, pos + 1); - - // Now we have calculated the weighted attention vector, it's time to apply attention value - // weighted sum of the values, store back into xb - // Clear out xb for the next stage - for (int i = 0; i < head_size; i++) { - state.xb[h * head_size + i] = 0.0; - } - - Float32List xb_off = - Float32List.sublistView(state.xb, h * head_size, (h + 1) * head_size); - for (int t = 0; t <= pos; t++) { - // get the value vector for this head and at this timestep - int v_cache_offset = loff + t * kv_dim + (h ~/ kv_mul) * head_size; - Float32List v = Float32List.sublistView( - state.value_cache, v_cache_offset, v_cache_offset + head_size); - // get the attention weight for this timestep - double a = att[t]; - // accumulate the weighted value into xb - for (int i = 0; i < head_size; i++) { - xb_off[i] += a * v[i]; - } - } - } - - // final matmul to get the output of the attention - // The "Aggregate output" of all the attention heads - matmul( - state.xb2, - state.xb, - Float32List.sublistView(weights.wo, l * dim * dim, (l + 1) * dim * dim), - dim, - dim); - - // residual connection back into x - for (int i = 0; i < dim; i++) { - state.x[i] += state.xb2[i]; - } - - // ffn rmsnorm - rmsnorm( - state.xb, - state.x, - Float32List.sublistView( - weights.rms_ffn_weight, l * dim, (l + 1) * dim)); - - // Now for FFN in PyTorch we have: self.w2(F.silu(self.w1(x)) * self.w3(x)) - // first calculate self.w1(x) and self.w3(x) - matmul( - state.hb, - state.xb, - Float32List.sublistView( - weights.w1, (l * dim * hidden_dim), (l + 1) * dim * hidden_dim), - dim, - hidden_dim); - - matmul( - state.hb2, - state.xb, - Float32List.sublistView( - weights.w3, (l * dim * hidden_dim), (l + 1) * dim * hidden_dim), - dim, - hidden_dim); - - // F.silu; silu(x)=x*σ(x),where σ(x) is the logistic sigmoid - for (int i = 0; i < hidden_dim; i++) { - state.hb[i] = state.hb[i] * (1.0 / (1.0 + exp(-state.hb[i]))); - } - - // elementwise multiply with w3(x) - // F.silu(self.w1(x)) * self.w3(x) - for (int i = 0; i < hidden_dim; i++) { - state.hb[i] = state.hb[i] * state.hb2[i]; - } - - // final matmul to get the output of the ffn - // here we are reusing xb again! 
- // x = self.w2(F.silu(self.w1(x)) * self.w3(x)) - matmul( - state.xb, - state.hb, - Float32List.sublistView( - weights.w2, l * dim * hidden_dim, (l + 1) * dim * hidden_dim), - hidden_dim, - dim); - - // residual connection - for (int i = 0; i < dim; i++) { - state.x[i] += state.xb[i]; - } - } - - // final rmsnorm - rmsnorm(state.x, state.x, weights.rms_final_weight); - - // classifier into logits - matmul(state.logits, state.x, weights.wcls, config.dim, config.vocab_size); -} - -void main(List args) { - String? checkpoint_path = "./stories15M.bin"; - String tokenizer_path = "tokenizer.bin"; - double temperature = 1.0; - double top_p = 0.9; - int rng_seed = 0; // seed rng with time by default - int steps = 256; // number of steps to run for - String? prompt = " One"; - - var parser = ArgParser(); - parser.addOption( - 'checkpoint_path', - abbr: 'c', - callback: (value) => checkpoint_path = value, - ); - parser.addOption('temp', - abbr: 't', - callback: (value) => - {if (value != null) temperature = double.parse(value)}, - defaultsTo: "1.0"); - parser.addOption('topp', - abbr: 'p', - callback: (value) => {if (value != null) top_p = double.parse(value)}, - defaultsTo: "0.9"); - parser.addOption('seed', - abbr: 's', - callback: (value) => {if (value != null) rng_seed = int.parse(value)}, - defaultsTo: "0"); - parser.addOption('steps', - abbr: 'n', - callback: (value) => {if (value != null) steps = int.parse(value)}, - defaultsTo: "256"); - parser.addOption('prompt', - abbr: 'i', - callback: (value) => {if (value != null) prompt = value}, - defaultsTo: ""); - parser.addOption('tokenizer_path', - abbr: 'z', - callback: (value) => {if (value != null) tokenizer_path = value}); - - parser.parse(args); - - if (rng_seed == 0) rng_seed = Timeline.now; - - print("===========llama2.dart==========="); - print("check_point_path: $checkpoint_path"); - print("tokenizer_path: $tokenizer_path"); - print("temperature: $temperature"); - print("top_p: $top_p"); - print("rng_seed: $rng_seed"); - print("steps: $steps"); - print("prompt: $prompt"); - - var config = Config(); - var weights = TransformerWeights(); - - if (checkpoint_path == null) return print("No checkpoint path provided"); - - print("========= Reading Weights ========="); - - // Read Weights and Config from file - { - Uint8List checkpoint_bytes = File(checkpoint_path!).readAsBytesSync(); - print("Read ${checkpoint_bytes.length} bytes from $checkpoint_path"); - - { - // Reading Config - Uint8List config_bytes = checkpoint_bytes.sublist(0, configByteSize); - Int32List config_ints = config_bytes.buffer.asInt32List(); - config.dim = config_ints[0]; - config.hidden_dim = config_ints[1]; - config.n_layers = config_ints[2]; - config.n_heads = config_ints[3]; - config.n_kv_heads = config_ints[4]; - config.vocab_size = config_ints[5]; - config.seq_len = config_ints[6]; - print("Read Config: $config"); - } - - { - bool shared_weights = config.vocab_size > 0; - // negative vocab size is hacky way of signaling unshared weights. bit yikes. 
- config.vocab_size = config.vocab_size.abs(); - // Load the weights - int offset = 0; - Float32List weight_floats = - checkpoint_bytes.buffer.asFloat32List(configByteSize); - - int head_size = config.dim ~/ config.n_heads; - weights.token_embedding_table = weight_floats.sublist( - offset, offset + config.vocab_size * config.dim); - offset += config.vocab_size * config.dim; - print( - "Read ${weights.token_embedding_table.lengthInBytes} bytes into token_embedding_table"); - - weights.rms_att_weight = - weight_floats.sublist(offset, offset + config.n_layers * config.dim); - offset += config.n_layers * config.dim; - print( - "Read ${weights.rms_att_weight.lengthInBytes} bytes into rms_att_weight"); - - weights.wq = weight_floats.sublist(offset, - offset + config.n_layers * config.dim * config.n_heads * head_size); - offset += config.n_layers * config.dim * config.n_heads * head_size; - print("Read ${weights.wq.lengthInBytes} bytes into wq"); - - weights.wk = weight_floats.sublist( - offset, - offset + - config.n_layers * config.dim * config.n_kv_heads * head_size); - offset += config.n_layers * config.dim * config.n_kv_heads * head_size; - print("Read ${weights.wk.lengthInBytes} bytes into wk"); - - weights.wv = weight_floats.sublist( - offset, - offset + - config.n_layers * config.dim * config.n_kv_heads * head_size); - offset += config.n_layers * config.dim * config.n_kv_heads * head_size; - print("Read ${weights.wv.lengthInBytes} bytes into wv"); - - weights.wo = weight_floats.sublist(offset, - offset + config.n_layers * config.n_heads * head_size * config.dim); - offset += config.n_layers * config.n_heads * head_size * config.dim; - print("Read ${weights.wo.lengthInBytes} bytes into wo"); - - weights.rms_ffn_weight = - weight_floats.sublist(offset, offset + config.n_layers * config.dim); - offset += config.n_layers * config.dim; - print( - "Read ${weights.rms_ffn_weight.lengthInBytes} bytes into rms_ffn_weight"); - - weights.w1 = weight_floats.sublist( - offset, offset + config.n_layers * config.hidden_dim * config.dim); - offset += config.n_layers * config.hidden_dim * config.dim; - print("Read ${weights.w1.lengthInBytes} bytes into w1"); - - weights.w2 = weight_floats.sublist( - offset, offset + config.n_layers * config.dim * config.hidden_dim); - offset += config.n_layers * config.dim * config.hidden_dim; - print("Read ${weights.w2.lengthInBytes} bytes into w2"); - - weights.w3 = weight_floats.sublist( - offset, offset + config.n_layers * config.hidden_dim * config.dim); - offset += config.n_layers * config.hidden_dim * config.dim; - print("Read ${weights.w3.lengthInBytes} bytes into w3"); - - weights.rms_final_weight = - weight_floats.sublist(offset, offset + config.dim); - offset += config.dim; - print( - "Read ${weights.rms_final_weight.lengthInBytes} bytes into rms_final_weight"); - - weights.freq_cis_real = weight_floats.sublist( - offset, offset + config.seq_len * head_size ~/ 2); - offset += config.seq_len * head_size ~/ 2; - print( - "Read ${weights.freq_cis_real.lengthInBytes} bytes into freq_cis_real"); - - weights.freq_cis_imag = weight_floats.sublist( - offset, offset + config.seq_len * head_size ~/ 2); - offset += config.seq_len * head_size ~/ 2; - print( - "Read ${weights.freq_cis_imag.lengthInBytes} bytes into freq_cis_imag"); - - if (shared_weights) { - print("Read shared weights into wcls"); - weights.wcls = weights.token_embedding_table; - } else { - weights.wcls = weight_floats.sublist( - offset, offset + config.vocab_size * config.dim); - offset += config.dim; - 
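The float32 payload after the header is then sliced strictly sequentially, in the order shown above. A compact sketch of that pattern — the tensor order and sizes follow the Dart code, while the `take` helper and the dict layout are illustrative assumptions, not the patch's API:

```python
# Sketch of the sequential weight layout read above (not part of the patch).
import numpy as np

def read_weights(checkpoint_bytes, cfg):
    floats = np.frombuffer(checkpoint_bytes[28:], dtype=np.float32)
    head_size = cfg["dim"] // cfg["n_heads"]
    pos = 0
    def take(n):                       # hypothetical helper: next n floats
        nonlocal pos
        out = floats[pos:pos + n]
        pos += n
        return out
    w = {}
    w["token_embedding_table"] = take(cfg["vocab_size"] * cfg["dim"])
    w["rms_att_weight"] = take(cfg["n_layers"] * cfg["dim"])
    w["wq"] = take(cfg["n_layers"] * cfg["dim"] * cfg["n_heads"] * head_size)
    w["wk"] = take(cfg["n_layers"] * cfg["dim"] * cfg["n_kv_heads"] * head_size)
    w["wv"] = take(cfg["n_layers"] * cfg["dim"] * cfg["n_kv_heads"] * head_size)
    w["wo"] = take(cfg["n_layers"] * cfg["n_heads"] * head_size * cfg["dim"])
    w["rms_ffn_weight"] = take(cfg["n_layers"] * cfg["dim"])
    w["w1"] = take(cfg["n_layers"] * cfg["hidden_dim"] * cfg["dim"])
    w["w2"] = take(cfg["n_layers"] * cfg["dim"] * cfg["hidden_dim"])
    w["w3"] = take(cfg["n_layers"] * cfg["hidden_dim"] * cfg["dim"])
    w["rms_final_weight"] = take(cfg["dim"])
    w["freq_cis_real"] = take(cfg["seq_len"] * head_size // 2)  # RoPE tables
    w["freq_cis_imag"] = take(cfg["seq_len"] * head_size // 2)
    w["wcls"] = (w["token_embedding_table"] if cfg["shared_weights"]
                 else take(cfg["vocab_size"] * cfg["dim"]))
    return w
```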
print("Read ${weights.wcls.lengthInBytes} bytes into wcls"); - } - } - } - - // clamp number of steps to supported range - if (steps <= 0 || steps > config.seq_len) { - steps = config.seq_len; - } - - // read in the tokenizer .bin file - List vocab = new List.filled( - config.vocab_size, new Uint8List(0)); // config.vocab_size; - Float32List vocab_scores = new Float32List(config.vocab_size); - { - ByteData tokenizer_bytes = - File(tokenizer_path).readAsBytesSync().buffer.asByteData(0); - int offset = 0; - // Not being used but read anyways - int max_token_length = tokenizer_bytes.getUint32(offset, Endian.little); - offset += 4; - int next_str_length = 0; - for (int i = 0; i < config.vocab_size; i++) { - double score = tokenizer_bytes.getFloat32(offset, Endian.little); - offset += 4; - next_str_length = tokenizer_bytes.getUint32(offset, Endian.little); - offset += 4; - Uint8List next_chunk = - tokenizer_bytes.buffer.asUint8List(offset, next_str_length); - vocab_scores[i] = score; - offset += next_str_length; - vocab[i] = next_chunk; - } - } - - print("=====beginning generation====="); - - Tokenizer tokenizer; - tokenizer = - Tokenizer(vocab.map((e) => utf8.decode(e)).toList(), vocab_scores); - - // process the prompt, if any - List prompt_tokens = []; - int num_prompt_tokens = 0; - if (prompt != null) { - prompt_tokens = - tokenizer.bpe_encode(prompt!, prompt_tokens, num_prompt_tokens); - } - - RunState state = RunState(); - - initialize_run_state(state, config); - // Finally! the main loop - // used to time our code, only initialized after first iteration - int start = 0; - int next; // will store the next token in the sequence - // init with token 1 (=BOS), as done in Llama-2 sentencepiece tokenizer - int token = 1; - int pos = 0; // position in the sequence - - while (pos < steps) { - // transformer! Run the model - transformer(token, pos, config, state, weights); - - // advance the state state machine - if (pos < prompt_tokens.length) { - // if we are still processing the input prompt, force the next prompt token - next = prompt_tokens[pos]; - } else { - // sample the next token - if (temperature == 0.0) { - // greedy argmax sampling: take the token with the highest probability - next = argmax(state.logits); - } else { - // apply the temperature to the logits - for (int q = 0; q < config.vocab_size; q++) { - state.logits[q] /= temperature; - } - // apply softmax to the logits to get the probabilities for next token - softmax(state.logits, state.logits.length); - - // we sample from this distribution to get the next token - if (top_p <= 0 || top_p >= 1) { - // simply sample from the predicted probability distribution - next = sample(state.logits); - } else { - // top-p (nucleus) sampling, clamping the least likely tokens to zero - next = sample_topp(state.logits, top_p); - } - } - } - pos++; - - // data-dependent terminating condition: the BOS (1) token delimits sequences - if (next == 1) { - break; - } - - // following BOS (1) token, sentencepiece decoder strips any leading whitespace (see PR #89) - Uint8List token_str = - (token == 1 && (vocab[next][0] == ' ')) ? vocab[next + 1] : vocab[next]; - - // careful, some tokens designate raw bytes, and look like e.g. '<0x01>' - String str; - str = utf8.decode(token_str); - - // In the original llama2.c they check for a lot of special tokens, but I've only seen this token really being used - // Being a little lazy here Hehe. 
- if (str == "<0x0A>") { - str = "\n"; - } - stdout.write("$str"); - token = next; - - // init the timer here because the first iteration can be slower - if (start == 0) { - start = DateTime.now().millisecondsSinceEpoch; - } - } - stdout.write("\n"); - - // report achieved tok/s (pos-1 because the timer starts after first iteration) - if (pos > 1) { - int end = DateTime.now().millisecondsSinceEpoch; - print("achieved tok/s: ${(pos - 1) / (end - start) * 1000} \n"); - } -} diff --git a/test_all.py b/test_all.py new file mode 100644 index 0000000..a4d0976 --- /dev/null +++ b/test_all.py @@ -0,0 +1,89 @@ +""" +Run simply with +$ pytest +""" +import os +import pytest # pip install pytest +import requests +import subprocess + + +import torch +from model import ModelArgs, Transformer +from tokenizer import Tokenizer + +# ----------------------------------------------------------------------------- +# test utilities + +test_ckpt_dir = "test" + +def download_file(url, filename): + print(f"Downloading {url} to {filename}") + response = requests.get(url, stream=True) + response.raise_for_status() # Raise an HTTPError on bad status code + with open(filename, 'wb') as file: + for chunk in response.iter_content(chunk_size=8192): + file.write(chunk) + +def attempt_download_files(): + os.makedirs(test_ckpt_dir, exist_ok=True) + root_url = "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories260K" + need = ["stories260K.bin", "stories260K.pt", "tok512.bin", "tok512.model"] + for file in need: + url = root_url + '/' + file #os.path.join inserts \\ on windows + filename = os.path.join(test_ckpt_dir, file) + if not os.path.exists(filename): + download_file(url, filename) + +expected_stdout = b'Once upon a time, there was a little girl named Lily. She loved to play outside in the park. One day, she saw a big, red ball. She wanted to play with it, but it was too high.\nLily\'s mom said, "Lily, let\'s go to the park." Lily was sad and didn\'t know what to do. She said, "I want to play with your ball, but I can\'t find it."\nLily was sad and didn\'t know what to do. She said, "I\'m sorry, Lily. 
I didn\'t know what to do."\nLily didn\'t want to help her mom, so she' + +# ----------------------------------------------------------------------------- +# actual tests + +def test_runc(): + """ Forwards a model against a known-good desired outcome in run.c for 200 steps""" + attempt_download_files() + + model_path = os.path.join(test_ckpt_dir, "stories260K.bin") + tokenizer_path = os.path.join(test_ckpt_dir, "tok512.bin") + command = ["./run", model_path, "-z", tokenizer_path, "-t", "0.0", "-n", "200"] + with open('err.txt', mode='wb') as fe: + with open('stdout.txt', mode='wb') as fo: + proc = subprocess.Popen(command, stdout=fo, stderr=fe) #pipe in windows terminal does funny things like replacing \n with \r\n + proc.wait() + + with open('stdout.txt', mode='r') as f: + stdout = f.read() + # strip the very last \n that is added by run.c for aesthetic reasons + stdout = stdout[:-1].encode('ascii') + + assert stdout == expected_stdout + +def test_python(): + """ Forwards a model against a known-good desired outcome in sample.py for 200 steps""" + attempt_download_files() + + device = "cpu" # stories260K is small enough to just breeze through it on CPU + checkpoint = os.path.join(test_ckpt_dir, "stories260K.pt") + checkpoint_dict = torch.load(checkpoint, map_location=device) + gptconf = ModelArgs(**checkpoint_dict['model_args']) + model = Transformer(gptconf) + state_dict = checkpoint_dict['model'] + unwanted_prefix = '_orig_mod.' + for k,v in list(state_dict.items()): + if k.startswith(unwanted_prefix): + state_dict[k[len(unwanted_prefix):]] = state_dict.pop(k) + model.load_state_dict(state_dict, strict=False) + model.eval() + model.to(device) + x = torch.tensor([[1]], dtype=torch.long, device=device) # 1 is BOS + with torch.inference_mode(): + y = model.generate(x, max_new_tokens=200, temperature=0.0) + pt_tokens = y[0].tolist() + + tokenizer_model = os.path.join(test_ckpt_dir, "tok512.model") + enc = Tokenizer(tokenizer_model=tokenizer_model) + text = enc.decode(pt_tokens) + text = text.encode('ascii') # turn into bytes + + assert text == expected_stdout diff --git a/win.c b/win.c new file mode 100644 index 0000000..5cd7f1c --- /dev/null +++ b/win.c @@ -0,0 +1,180 @@ +#include "win.h" +#include +#include + +#ifndef FILE_MAP_EXECUTE +#define FILE_MAP_EXECUTE 0x0020 +#endif /* FILE_MAP_EXECUTE */ + +static int __map_mman_error(const uint32_t err, const int deferr) +{ + if (err == 0) + return 0; + //TODO: implement + return err; +} + +static uint32_t __map_mmap_prot_page(const int prot) +{ + uint32_t protect = 0; + + if (prot == PROT_NONE) + return protect; + + if ((prot & PROT_EXEC) != 0) + { + protect = ((prot & PROT_WRITE) != 0) ? + PAGE_EXECUTE_READWRITE : PAGE_EXECUTE_READ; + } + else + { + protect = ((prot & PROT_WRITE) != 0) ? 
+ PAGE_READWRITE : PAGE_READONLY; + } + + return protect; +} + +static uint32_t __map_mmap_prot_file(const int prot) +{ + uint32_t desiredAccess = 0; + + if (prot == PROT_NONE) + return desiredAccess; + + if ((prot & PROT_READ) != 0) + desiredAccess |= FILE_MAP_READ; + if ((prot & PROT_WRITE) != 0) + desiredAccess |= FILE_MAP_WRITE; + if ((prot & PROT_EXEC) != 0) + desiredAccess |= FILE_MAP_EXECUTE; + + return desiredAccess; +} + +void* mmap(void *addr, size_t len, int prot, int flags, int fildes, ssize_t off) +{ + HANDLE fm, h; + void * map = MAP_FAILED; + +#ifdef _MSC_VER +#pragma warning(push) +#pragma warning(disable: 4293) +#endif + + const uint32_t dwFileOffsetLow = (uint32_t)(off & 0xFFFFFFFFL); + const uint32_t dwFileOffsetHigh = (uint32_t)((off >> 32) & 0xFFFFFFFFL); + const uint32_t protect = __map_mmap_prot_page(prot); + const uint32_t desiredAccess = __map_mmap_prot_file(prot); + + const ssize_t maxSize = off + (ssize_t)len; + + const uint32_t dwMaxSizeLow = (uint32_t)(maxSize & 0xFFFFFFFFL); + const uint32_t dwMaxSizeHigh = (uint32_t)((maxSize >> 32) & 0xFFFFFFFFL); + +#ifdef _MSC_VER +#pragma warning(pop) +#endif + + errno = 0; + + if (len == 0 + /* Unsupported flag combinations */ + || (flags & MAP_FIXED) != 0 + /* Usupported protection combinations */ + || prot == PROT_EXEC) + { + errno = EINVAL; + return MAP_FAILED; + } + + h = ((flags & MAP_ANONYMOUS) == 0) ? + (HANDLE)_get_osfhandle(fildes) : INVALID_HANDLE_VALUE; + + if ((flags & MAP_ANONYMOUS) == 0 && h == INVALID_HANDLE_VALUE) + { + errno = EBADF; + return MAP_FAILED; + } + + fm = CreateFileMapping(h, NULL, protect, dwMaxSizeHigh, dwMaxSizeLow, NULL); + + if (fm == NULL) + { + errno = __map_mman_error(GetLastError(), EPERM); + return MAP_FAILED; + } + + map = MapViewOfFile(fm, desiredAccess, dwFileOffsetHigh, dwFileOffsetLow, len); + + CloseHandle(fm); + + if (map == NULL) + { + errno = __map_mman_error(GetLastError(), EPERM); + return MAP_FAILED; + } + + return map; +} + +int munmap(void *addr, size_t len) +{ + if (UnmapViewOfFile(addr)) + return 0; + + errno = __map_mman_error(GetLastError(), EPERM); + + return -1; +} + +int mprotect(void *addr, size_t len, int prot) +{ + uint32_t newProtect = __map_mmap_prot_page(prot); + uint32_t oldProtect = 0; + + if (VirtualProtect(addr, len, newProtect, &oldProtect)) + return 0; + + errno = __map_mman_error(GetLastError(), EPERM); + + return -1; +} + +int msync(void *addr, size_t len, int flags) +{ + if (FlushViewOfFile(addr, len)) + return 0; + + errno = __map_mman_error(GetLastError(), EPERM); + + return -1; +} + +int mlock(const void *addr, size_t len) +{ + if (VirtualLock((LPVOID)addr, len)) + return 0; + + errno = __map_mman_error(GetLastError(), EPERM); + + return -1; +} + +int munlock(const void *addr, size_t len) +{ + if (VirtualUnlock((LPVOID)addr, len)) + return 0; + + errno = __map_mman_error(GetLastError(), EPERM); + + return -1; +} + +// Portable clock_gettime function for Windows +int clock_gettime(int clk_id, struct timespec *tp) { + uint32_t ticks = GetTickCount(); + tp->tv_sec = ticks / 1000; + tp->tv_nsec = (ticks % 1000) * 1000000; + return 0; +} diff --git a/win.h b/win.h new file mode 100644 index 0000000..383cfad --- /dev/null +++ b/win.h @@ -0,0 +1,69 @@ +#ifndef _WIN_H_ +#define _WIN_H_ + +#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers +#include +#include +#include + +#define ssize_t int64_t +#define ftell _ftelli64 + +// Below code is originally from mman-win32 +// +/* + * sys/mman.h + * mman-win32 + */ + +#ifndef 
_WIN32_WINNT // Allow use of features specific to Windows XP or later. +#define _WIN32_WINNT 0x0501 // Change this to the appropriate value to target other versions of Windows. +#endif + +/* All the headers include this file. */ +#ifndef _MSC_VER +#include <_mingw.h> +#endif + +#include + +#ifdef __cplusplus +extern "C" { +#endif + +#define PROT_NONE 0 +#define PROT_READ 1 +#define PROT_WRITE 2 +#define PROT_EXEC 4 + +#define MAP_FILE 0 +#define MAP_SHARED 1 +#define MAP_PRIVATE 2 +#define MAP_TYPE 0xf +#define MAP_FIXED 0x10 +#define MAP_ANONYMOUS 0x20 +#define MAP_ANON MAP_ANONYMOUS + +#define MAP_FAILED ((void *)-1) + +/* Flags for msync. */ +#define MS_ASYNC 1 +#define MS_SYNC 2 +#define MS_INVALIDATE 4 + +/* Flags for portable clock_gettime call. */ +#define CLOCK_REALTIME 0 + +void* mmap(void *addr, size_t len, int prot, int flags, int fildes, ssize_t off); +int munmap(void *addr, size_t len); +int mprotect(void *addr, size_t len, int prot); +int msync(void *addr, size_t len, int flags); +int mlock(const void *addr, size_t len); +int munlock(const void *addr, size_t len); +int clock_gettime(int clk_id, struct timespec *tp); + +#ifdef __cplusplus +}; +#endif + +#endif /* _WIN_H_ */ From 882e480bc0a5abfaf2958a15e126d1e60df3b8cf Mon Sep 17 00:00:00 2001 From: YiMing Han Date: Fri, 18 Aug 2023 15:18:29 -0400 Subject: [PATCH 77/79] update read me --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index 8c36285..230d4ad 100644 --- a/README.md +++ b/README.md @@ -301,6 +301,8 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - [llama2.py](https://github.com/tairov/llama2.py) by @[tairov](https://github.com/tairov): a simple one file pure Python port of this project with zero dependencies - C# - [llama2.cs](https://github.com/trrahul/llama2.cs) by @[trrahul](https://github.com/trrahul): a C# port of this project +- Dart + - [llama2.dart](https://github.com/yiminghan/llama2.dart) by @[yiminghan](https://github.com/yiminghan/llama2.dart): one-file dart port of this project, works with Flutter! - WebAssembly - [icpp-llm](https://github.com/icppWorld/icpp-llm): LLMs for the Internet Computer - [llama2.c - Llama 2 Everywhere](https://github.com/trholding/llama2.c) by @[trholding](https://github.com/trholding): Standalone, Bootable & Portable Binary Llama 2 From 978c311b3078e216d670e5b57fe4fee8419c60d5 Mon Sep 17 00:00:00 2001 From: rahoua <142625193+rahoua@users.noreply.github.com> Date: Fri, 18 Aug 2023 14:58:21 -0700 Subject: [PATCH 78/79] Add pecca-rs to README.md --- README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/README.md b/README.md index 8c36285..32a25ab 100644 --- a/README.md +++ b/README.md @@ -271,6 +271,7 @@ If your candidate PRs have elements of these it doesn't mean they won't get merg - [llama2.rs](https://github.com/leo-du/llama2.rs) by @[leo-du](https://github.com/leo-du): A Rust port of this project - [llama2-rs](https://github.com/danielgrittner/llama2-rs) by @[danielgrittner](https://github.com/danielgrittner): a Rust port of this project - [llama2.rs](https://github.com/lintian06/llama2.rs) by @[lintian06](https://github.com/lintian06): A Rust port of this project + - [pecca.rs](https://github.com/rahoua/pecca-rs) by @[rahoua](https://github.com/rahoua): A Rust port leveraging [ndarray](https://github.com/rust-ndarray/ndarray), supports BLAS. 
 - Go
   - [go-llama2](https://github.com/tmc/go-llama2) by @[tmc](https://github.com/tmc): a Go port of this project
   - [llama2.go](https://github.com/nikolaydubina/llama2.go) by @[nikolaydubina](https://github.com/nikolaydubina): a Go port of this project

From fbefeec1b1e215206060079e3abe7b9bf95ea548 Mon Sep 17 00:00:00 2001
From: rahulschand
Date: Sat, 19 Aug 2023 13:05:26 +0530
Subject: [PATCH 79/79] add assert message to give better warning

---
 tinystories.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tinystories.py b/tinystories.py
index 690cb02..90d576b 100644
--- a/tinystories.py
+++ b/tinystories.py
@@ -196,6 +196,7 @@ class PretokDataset(torch.utils.data.IterableDataset):
         shard_filenames = sorted(glob.glob(os.path.join(bin_dir, "*.bin")))
         # train/test split. let's use only shard 0 for test split, rest train
         shard_filenames = shard_filenames[1:] if self.split == "train" else shard_filenames[:1]
+        assert len(shard_filenames)>0, f"No bin files found in {bin_dir}"
         while True:
             rng.shuffle(shard_filenames)
             for shard in shard_filenames:
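The hunk above guards PretokDataset against an empty shard directory: shard 0 is reserved for the test split, the remaining shards form the training set, and the new assert fails with a clear message when pretokenization has not been run yet. A self-contained sketch of that shard-selection behaviour — paraphrased from the lines shown, with an illustrative generator wrapper rather than the class in tinystories.py:

```python
# Sketch of the shard selection patched above (illustrative wrapper only).
import glob
import os
import random

def iter_shards(bin_dir, split, seed=42):
    shard_filenames = sorted(glob.glob(os.path.join(bin_dir, "*.bin")))
    # train/test split: shard 0 is the test split, the rest are training shards
    shard_filenames = shard_filenames[1:] if split == "train" else shard_filenames[:1]
    assert len(shard_filenames) > 0, f"No bin files found in {bin_dir}"
    rng = random.Random(seed)
    while True:
        rng.shuffle(shard_filenames)
        yield from shard_filenames
```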