Morgan Funtowicz | b98c635781 | feat(backend): entirely rewrite backend | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | 0c1dd0ed2b | feat(llamacpp): wip explosion | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | a316c53255 | feat(llamacpp): expose number of threads for the backend when constructing the model | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | e4d803c94e | feat(backend): build and link through build.rs | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | 355d8a55b4 | feat(backend): wip Rust binding | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | f9c248657d | chore(backend): minor formatting | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | 37faeb34b2 | feat(backend): expose frequency and repetition penalties | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | d4b5be10f9 | feat(backend): minor refactor | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | 92bb113653 | feat(backend): use llama_token as TokenId type | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | 45d5a6a8c5 | feat(backend): add some initial decoding steps | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | 0911076320 | feat(backend): correctly load llama.cpp model from llama api and not gpt2 | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | 52d57dca79 | feat(llamacpp): initial end2end build | 2024-11-14 08:42:01 +01:00
Morgan Funtowicz | aa1fcba59f | feat(llamacpp): initial commit | 2024-11-14 08:42:01 +01:00
    # Conflicts:
    # Cargo.lock