GitLab branches
alloc-assert-fix · ee745692 · ggml-alloc : fix assert in debug builds · Oct 09, 2023
batched-bench · 2fcdf869 · batched-bench : add mmq CLI arg · Oct 11, 2023
rev-sampling · 5261aee8 · sampling : one sequence per sampling context · Oct 12, 2023
llava-fix-offloading · 932589c0 · Honor -ngl option for Cuda offloading in llava · Oct 14, 2023
speculative-tree · ad2727d0 · Merge branch 'master' into speculative-tree · Oct 18, 2023
sampling-refactor · 56ba00b9 · sampling : hide prev behind API and apply #3661 · Oct 20, 2023
perf-study · cb79f8a2 · llama : add SKIP_KQ_KQV option · Oct 22, 2023
server-rev · c0f4d548 · server : add comment about changing slot_state to bool · Oct 22, 2023
upd-issue-templates · b9bb4cbe · Separate bug and enhancement template + no default title · Oct 23, 2023
cuda-batched-gemm-deq · 69664749 · cuda : play with faster Q4_0 dequantization · Oct 24, 2023
cuda-batched-gemm · d798a17c · cuda : add TODO for calling cublas from kernel + using mem pool · Oct 24, 2023
cuda-quantum-batch · 49af767f · build : add compile option to force use of MMQ kernels · Oct 27, 2023
cuda-multi-gpu · cd3e20fb · cuda : fix multi-gpu with tensor cores · Oct 27, 2023
sampling-greedy-with-probs · bbfc62ac · sampling : temp == 0.0 -> no probs, temp < 0.0 -> probs · Oct 28, 2023
apply-3585 · de7e0912 · convert : ignore tokens if their IDs are within [0, vocab_size) · Oct 28, 2023
ggml-quants · 8a86b95e · quantize : --pure option for disabling k-quant mixtures · Oct 28, 2023
scratch · 15267192 · llama : refactor tensor offloading as callback · Oct 29, 2023
lto · bc28aaa8 · make : use -lfto=auto to avoid warnings and maintain perf · Oct 30, 2023
ggml-impl · 4b3cb98d · ggml-impl : move extern "C" to start of file · Oct 30, 2023
llama-refactor-norm · 7923b70c · llama : add llm_build_inp_embd helper · Oct 31, 2023