llama-metadata · d0445a2e · better documentation · Nov 10, 2023
fix-tensor-split-zero · 47d604fa · fix issues · Nov 05, 2023
revert-pool · 3ef358ff · Revert "cuda : use CUDA memory pool with async memory allocation/deallocation... · Nov 04, 2023
metal-soft-max · 46868a49 · metal : multi-simd softmax · Nov 01, 2023
llm-build-context · a8796f96 · llm : cleanup + comments · Nov 01, 2023
llm-reuse-constants · 7420bef8 · wip wip wip · Nov 01, 2023
llama-refactor · afb39292 · Merge branch 'master' into llama-refactor · Oct 31, 2023
test-mmv · 29fe5169 · wip · Oct 31, 2023
deploy · dab42893 · scripts : working curl pipe · Oct 31, 2023
llama-refactor-norm · 7923b70c · llama : add llm_build_inp_embd helper · Oct 31, 2023
ggml-impl · 4b3cb98d · ggml-impl : move extern "C" to start of file · Oct 30, 2023
lto · bc28aaa8 · make : use -lfto=auto to avoid warnings and maintain perf · Oct 30, 2023
scratch · 15267192 · llama : refactor tensor offloading as callback · Oct 29, 2023
ggml-quants · 8a86b95e · quantize : --pure option for disabling k-quant mixtures · Oct 28, 2023
apply-3585 · de7e0912 · convert : ignore tokens if their IDs are within [0, vocab_size) · Oct 28, 2023
sampling-greedy-with-probs · bbfc62ac · sampling : temp == 0.0 -> no probs, temp < 0.0 -> probs · Oct 28, 2023
cuda-multi-gpu · cd3e20fb · cuda : fix multi-gpu with tensor cores · Oct 27, 2023
cuda-quantum-batch · 49af767f · build : add compile option to force use of MMQ kernels · Oct 27, 2023
cuda-batched-gemm · d798a17c · cuda : add TODO for calling cublas from kernel + using mem pool · Oct 24, 2023
cuda-batched-gemm-deq · 69664749 · cuda : play with faster Q4_0 dequantization · Oct 24, 2023