Viewing a single comment thread. View all comments

tokynambu t1_j1tq28j wrote

No, it isn’t. That is why CPUs need a lot of pipelining, speculative execution, caching, and the like.

Mid-1980s, the choice when buying asynchronous RAM was, if memory serves, 70ns or 35ns latency. That was when a fast processor ran at less than 20MHz (a Sun 3/160 was a 68020 clocked at 16.67MHz). So one cycle was about 60ns, and processors did not need to wait for RAM.

Today, synchronous memory has a first-word latency on the order of 10ns; that isn’t exactly the same as asynchronous latency, but it is roughly comparable. But the processor is running at, say, 3GHz. So now, instead of being able to read RAM in a clock cycle or at most two, you need 30 cycles to access RAM. The clock is ~100x faster; RAM latency is only ~3x better. RAM is much faster in bulk transfer (perhaps 50x or more), which helps for some operations, but a single random read is not helped.
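For reference, the arithmetic here can be sketched in a few lines of Python. The clock speeds and latencies below are the approximate figures quoted in this comment, not measured values:

```python
def cycles_per_ram_access(clock_hz: float, latency_ns: float) -> float:
    """Clock cycles a CPU spends waiting on one random RAM read."""
    cycle_ns = 1e9 / clock_hz  # duration of one clock cycle in nanoseconds
    return latency_ns / cycle_ns

# Mid-1980s: ~16.67 MHz CPU, ~70 ns asynchronous DRAM
print(cycles_per_ram_access(16.67e6, 70))  # ~1.2 cycles -- RAM keeps up

# Today: ~3 GHz CPU, ~10 ns first-word latency
print(cycles_per_ram_access(3e9, 10))      # 30 cycles -- CPU stalls
```

So the 100x clock speedup against the ~3x latency improvement is exactly what turns a one-cycle read into a ~30-cycle stall.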

Hence cache, pipelines, speculative execution, caches, more caches.

https://en.m.wikipedia.org/wiki/CAS_latency

−1

Potatoswatter t1_j1u2olv wrote

You just claimed that DDR5 SDRAM isn’t faster than magnetic core memory. Then you moved the starting point from core memory to mid-’80s DRAM, a completely different thing. Then you gave the state of the art as 100ns when it’s more like 25ns if you stop to search. Then you still came up with a 3x improvement, contradicting your first sentence.

2

tokynambu t1_j1u53d8 wrote

That's a fair comment: I was comparing with the 80s, not the 70s.

However, I don't see how you get "Then gave the state of the art as 100ns" when I explicitly wrote "Today synchronous memory has a first word latency of the order of 10ns;"

1