
About logical cores and instruction execution

life24

Senior member
Hello,
Could you explain the claim below? Why is that the case?


This is important to realize when you examine your computer hardware and estimate performance gains of a parallel application.
For our examples of performance estimations using Amdahl's and Gustafson's laws, we will only be counting physical cores because technically logical cores, in a single physical core, cannot execute instructions during the same clock cycle.
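The distinction matters as soon as you plug a core count into Amdahl's law. Here is a quick sketch with hypothetical numbers (a program whose parallel fraction is 90%, on a 4-core machine that reports 8 logical cores):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: upper bound on speedup with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9  # hypothetical: 90% of the work parallelizes
print(round(amdahl_speedup(p, 4), 2))  # counting 4 physical cores -> 3.08
print(round(amdahl_speedup(p, 8), 2))  # counting 8 logical cores  -> 4.71
```

Counting logical cores would inflate the estimate by over 50% here, a gain the hardware usually cannot deliver, which is why the book counts only physical cores.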
 
[Citation Needed]

Implementations vary, but I'd say that the highlighted text is not true for some modern processors in some cases.
 

C# Multithreaded and Parallel Programming by Rodney Ringler

link deleted
Spamming is not allowed here.
Markfw900
 
Last edited by a moderator:
Do your own homework?

I attached the pages from the above book, so this isn't homework.

[Attachments: two scanned pages from the book]
 
That's quite a simplified view. Often a pipeline will have openings in the various stages where instructions in one thread are stalled/flushed/bubbled, in some of these cases another thread can take advantage of the unused resource if SMT is implemented. So in some cases, two threads can essentially "fill" the single physical core more consistently, making better use of the pipelined resources.
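To make the bubble-filling idea concrete, here is a toy model (my own simplified sketch, not how any real pipeline or scheduler works): each thread's trace mixes ready instructions with stall cycles, the core has one issue slot per cycle, and a stall in one thread lets the other thread issue.

```python
# Toy SMT model (illustrative only). A trace is a string of 'I'
# (instruction ready to issue) and '.' (stall cycle, e.g. waiting
# on memory). The core has one issue slot per cycle.

def solo_cycles(trace):
    # Running alone, every stall cycle is wasted issue bandwidth.
    return len(trace)

def smt_cycles(a, b):
    ia = ib = cycles = 0
    while ia < len(a) or ib < len(b):
        cycles += 1
        a_ready = ia < len(a) and a[ia] == 'I'
        b_ready = ib < len(b) and b[ib] == 'I'
        # Stall cycles elapse in parallel for both threads.
        if ia < len(a) and a[ia] == '.':
            ia += 1
        if ib < len(b) and b[ib] == '.':
            ib += 1
        # One issue slot: a ready instruction from either thread fills it.
        if a_ready:
            ia += 1
        elif b_ready:
            ib += 1
    return cycles

A = B = "I..I..I..I.."                  # 4 instructions, 8 stall cycles each
print(solo_cycles(A) + solo_cycles(B))  # run back to back on one core: 24 cycles
print(smt_cycles(A, B))                 # interleaved via SMT: 13 cycles
```

With these made-up traces the two threads together finish in barely more time than one thread alone, because each thread's stalls are mostly covered by the other's work.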
 
More than that, modern Intel cores have far more execution resources than a single typical thread can usually utilize, and the execution core is completely agnostic about which thread the instructions in the instruction window originate from. This means they do usually execute instructions from both threads on almost every cycle in normal operation.
 

See page 132 of one of Agner's optimization guides[1] for what resources are shared between the two threads of a core. If the shared resources are a bottleneck, there won't be an advantage from SMT.

[1] http://www.agner.org/optimize/microarchitecture.pdf
 
That's not true, outside of the Xeon Phi. A modern Intel core will schedule instructions from both Hyperthreads to be executed simultaneously.

I am almost certain that even the (current) Xeon Phi can execute instructions from all HT threads on the core at the same time, as long as there are resources available. Being in-order has nothing to do with that ability.
 
The point here is that introductory books are just that, an introduction. It's leaving out quite a few "modern" details. But that's ok for learning, jumping straight into a full modern core isn't advisable.
 
I attached the pages from the above book, so this isn't homework.

[Attachments: two scanned pages from the book]

How old is that book? We've had superscalar x86 CPU cores that can execute more than one instruction per clock cycle for two decades.

I suggest you get a different book.
 
Last edited:
Yes, this is a simplified view of CPU architectures. I agree with the above comments concerning today's CPU architectures.

The book views it from the software side: specifically, the .NET Common Language Runtime's view of how it schedules threads and uses its thread pool. The book is accurate from that perspective.
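For what it's worth, the count a runtime's scheduler sees is the OS-reported logical processor count, not the physical core count (in C# that's `Environment.ProcessorCount`). A quick illustration of the same idea, sketched in Python rather than C#:

```python
import os

# The OS exposes logical processors; a thread pool sized from this
# number counts SMT siblings too (e.g. 8 on a 4-core machine with SMT).
print("Logical CPUs visible to the runtime:", os.cpu_count())
```

So from the runtime's perspective, logical cores really are the schedulable units, even though the hardware underneath shares execution resources between SMT siblings.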

Hope this helps.

Rodney
 