AI coding assistance discussion

Jul 27, 2020

Pretty CRAZY model.

Don't believe me? Ask it something and watch it go. Like really, really go.

It doesn't stop until it runs up against some sort of limit. It just keeps going through different code possibilities.
 
Jul 27, 2020


Speculative decoding now allows a larger 231B model to oversee the draft work of the smaller 13B model, resulting in improved response times.
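For anyone curious what the mechanism actually looks like, here's a minimal greedy-acceptance sketch in Python. To be clear, this is not LM Studio's implementation (real engines verify the whole draft in one batched forward pass and use probabilistic accept/reject sampling); it just shows the idea of the small model drafting ahead and the big model only confirming or correcting:

```python
def speculative_step(draft_next, target_next, context, k=4):
    """One round of (greedy-acceptance) speculative decoding.

    draft_next(tokens)  -> next token proposed by the small draft model
    target_next(tokens) -> next token the large target model would emit
    Returns the list of tokens accepted in this round.
    """
    # 1) The cheap draft model runs ahead and proposes k tokens.
    proposed, ctx = [], list(context)
    for _ in range(k):
        tok = draft_next(ctx)
        proposed.append(tok)
        ctx.append(tok)

    # 2) The expensive target model checks the proposals. In a real engine this
    #    is a single batched forward pass, which is where the speed-up comes from.
    accepted, ctx = [], list(context)
    for tok in proposed:
        if target_next(ctx) != tok:
            break
        accepted.append(tok)
        ctx.append(tok)

    # 3) Even if a proposal is rejected, the target model still emits one token,
    #    so every round makes progress.
    if len(accepted) < k:
        accepted.append(target_next(ctx))
    return accepted


# Toy usage: the "draft" guesses the next letter of the alphabet, the "target" agrees.
if __name__ == "__main__":
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    nxt = lambda toks: alphabet[len(toks)]
    print(speculative_step(nxt, nxt, list("abc"), k=4))  # -> ['d', 'e', 'f', 'g']
```

When the big model agrees with the draft most of the time, several tokens come out per expensive target-model pass, which is where the response-time win comes from.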
 
Jul 27, 2020
It's LIVE!


Kids can now create their own CPU benchmarks!

(yes, I'm a 44-year-old kid...)
 
Jul 27, 2020
RAM latency checker: https://www.overclock.net/posts/29439133/

As described there, it doesn't report the absolute latency, but the readings seem fairly consistent.

Tested to work on Haswell and onwards. I don't think I can try it on my Epyc today, so would someone like to volunteer and test it on their Ryzen? Thanks!

EDIT: Tested and working as intended on Tiger Lake. The average latency deviation isn't wild, which means it can be useful.
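For anyone who just wants the idea without grabbing the tool (which may well work differently under the hood), here's a rough pointer-chasing sketch in Python: build one big random cycle that's larger than the caches and time dependent loads. Like the linked checker, the absolute number isn't meaningful (interpreter overhead inflates it a lot), but run-to-run consistency is what you'd look at:

```python
import random
import time

N = 1 << 21          # ~2M entries; working set should comfortably exceed the L3 cache
STEPS = 2_000_000

# Build one big random cycle so every access depends on the previous result,
# which defeats simple stride prefetching.
order = list(range(N))
random.shuffle(order)
nxt = [0] * N
for a, b in zip(order, order[1:]):
    nxt[a] = b
nxt[order[-1]] = order[0]

idx = 0
t0 = time.perf_counter()
for _ in range(STEPS):
    idx = nxt[idx]
t1 = time.perf_counter()

# Compare runs, not nanoseconds: interpreter overhead is baked into this number.
print(f"{(t1 - t0) / STEPS * 1e9:.1f} ns per dependent access")
```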
 
Jul 27, 2020
A disappointment to report, in the hope it dissuades someone else from investing in expensive hardware (good thing the LLM wasn't the only reason I bought the laptop).

So my Thinkpad now has 128GB RAM and an RTX 5000 16GB dGPU. I was hoping I would be able to run Llama 3.3 70B. It loads, at a context length of 16384, and consumes 71GB of system RAM plus all of the VRAM. Unfortunately, the calculations are not offloaded to the GPU, despite lowering the CPU core count to 1 and offloading all 80 layers of the model to the GPU; it stays at 0% utilization. The processing happens on the CPU, and even when setting it to a max of 6 cores (HT not supported by LM Studio, I guess), the CPU utilization does not go beyond 17%. It does give a response, but at the most horrible speed of something like 0.05 tokens per second or even lower.

Gave up on it and am now downloading another 8B LLM at F16 and Q8, to take advantage of speculative decoding. If I still don't get any GPU utilization, I will need to troubleshoot (maybe a driver issue?).
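One way to sanity-check whether the GPU is getting any layers at all, outside LM Studio, is llama-cpp-python, which wraps the same llama.cpp engine LM Studio builds on for GGUF models; the load log states how many layers actually landed on the GPU. The file name and layer count below are placeholders, not my exact setup:

```python
# Hypothetical standalone check with llama-cpp-python (pip install llama-cpp-python,
# built with CUDA support). Model path and layer count are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.3-70B-Instruct-Q4_K_M.gguf",  # whatever quant you downloaded
    n_ctx=16384,
    n_gpu_layers=20,   # only as many layers as fit in 16 GB of VRAM
    verbose=True,      # load log prints how many layers were offloaded to the GPU
)

out = llm("Explain speculative decoding in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

If even this reports zero offloaded layers, that would point at the driver or CUDA build rather than an LM Studio setting.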
 
Jul 27, 2020

LM Studio can't use both GPUs in parallel so one of them is doing the hard work while the other is chilling and just holding some data in its VRAM.
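If you drop down to llama.cpp / llama-cpp-python directly, you can at least control how the weights are divided between the two cards, though the split is layer-wise, so for a single prompt the GPUs still mostly take turns rather than truly working in parallel. A rough sketch, with the split ratio as a guess:

```python
from llama_cpp import Llama

# Hypothetical two-GPU layout: roughly 60/40 split of layers between the cards.
llm = Llama(
    model_path="model.gguf",   # placeholder path
    n_gpu_layers=-1,           # try to put every layer on a GPU
    tensor_split=[0.6, 0.4],   # fraction of the layers assigned to each device
    main_gpu=0,                # device that holds the scratch/small tensors
    verbose=True,
)
```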
 
Jul 27, 2020
Tried the same prompt with and without GPU offloading, and in the CPU-only scenario it generated twice as many tokens to arrive at the solution. I may have to repeat that a number of times to verify whether the behavior is consistent, but it does raise the question of why the "thinking" is better with the GPU involved.
 
Jul 27, 2020
I was planning to benchmark my 9950X3D using the LG ExaOne Deep F16 model. On the Xeon 6248R, I got a speed of roughly 3.7 tokens per second. Tried it at home and, first, it loads up only the bottom half of the threads in Task Manager. Second, it keeps processing and never gets to the "thinking" stage, just wasting a whole lot of power for nothing. So I suspect:

1) LM Studio isn't optimized for the 9950X3D, or it's getting confused by the CCD crap.

2) LM Studio is secretly co-owned by Intel or AMD or both and it only works flawlessly on server CPUs.

Extremely annoyed since the Xeon was only 41% utilized with max speeds of 3.9 GHz on 24 cores while the 9950X3D was hitting 5+ GHz on 16 cores and still failed to progress to the thinking stage.
 

MS_AT

Senior member
Jul 15, 2024
Extremely annoyed since the Xeon was only 41% utilized with max speeds of 3.9 GHz on 24 cores while the 9950X3D was hitting 5+ GHz on 16 cores and still failed to progress to the thinking stage.
The prompt-processing part is compute heavy, and during that phase, with 16 threads, your clocks should be sinking low if the code is well optimized. SMT will by definition be useless in this case.

The token-generation part, in the single-user case, is dominated by memory bandwidth; during that phase the clocks will be high, and you could even get by with fewer than 16 threads (it takes 2 threads, pinned to different CCDs, to maximize memory-bandwidth usage, and whether more threads help depends on the model's compute needs).

The discussions in https://github.com/ikawrakow/ik_llama.cpp are quite insightful, as are those in https://github.com/ggml-org/llama.cpp.
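The memory-bandwidth point is easy to sanity-check with back-of-envelope arithmetic: in the single-user case every generated token has to stream (roughly) the entire set of quantized weights from RAM once, so bandwidth divided by model size is a hard ceiling. The numbers below are illustrative guesses, not measurements:

```python
# Back-of-envelope ceiling for single-stream token generation on CPU.
# Every token reads (roughly) all of the quantized weights from RAM once.
model_size_gb = 35.0   # e.g. a ~70B model at ~4 bits per weight (illustrative)
mem_bw_gb_s = 80.0     # rough dual-channel DDR5-6200 read bandwidth (illustrative)

ceiling_tok_s = mem_bw_gb_s / model_size_gb
print(f"~{ceiling_tok_s:.1f} tok/s ceiling, regardless of core count or clocks")
```

Which would also explain why a six-channel Xeon can hang with, or beat, a much higher-clocked desktop chip at this particular job.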
 
Jul 27, 2020

Athene V2 Chat IQ4_XS 73B model

9950X3D

DDR5-6200 C52, FCLK 2133, UCLK 3100, CO -37

Pretty impressive that it's maintaining a solid 5.35 GHz, even with my less-than-stellar 240mm AIO cooler.