Submitted by maxtility t3_ymzs3z in singularity
Comments
iNstein t1_iv8asco wrote
Came looking for this. It is probably equivalent to 10nm over at Intel. One thing I always liked about Intel is they keep things fairly honest. TSMC doesn't even try.
Kinexity t1_iv8b9q3 wrote
Nah, it's probably around OG Intel 5 nm (before rebranding). TSMC's naming scheme is disingenuous but they aren't that much behind if you look at transistor density.
Down_The_Rabbithole t1_iv9fyae wrote
Intel stopped doing that this generation because they got frustrated by consumers thinking they are behind in transistor density. So they have now renamed their 7nm as 5nm, and will rename their 5nm to 2.1nm to be more in line with the fake names of TSMC.
Samsung is the worst of all. Their "4nm" is equivalent to GlobalFoundries 12nm, Intel 14nm and TSMC 10nm.
justowen4 t1_iv8kqea wrote
Lol that’s hillllllllaaaaaarious
justowen4 t1_iv8kur9 wrote
I love Pat, and the CHIPS Act is wise, but Intel historically has been anything but transparent regarding practically anything related to chip marketing
[deleted] t1_iv8c3wp wrote
[deleted]
[deleted] t1_iv780w3 wrote
And your comment has nothing to do with legitimate progress being made
Kaarssteun t1_iv874lz wrote
What's wrong with them pointing out a misleading claim?
vernes1978 t1_iv8bcok wrote
It ruins the fanfiction.
Low_Job_4937 t1_iv6rlof wrote
Knnnk re uu uu⁶⁶7 u, j
Gasoline_Dreams t1_iv7zjtz wrote
Indeed.
modestLife1 t1_iv8x92q wrote
wait – he's giving us the cheat code to activate agi!
now-here-be t1_iv6ew6r wrote
ELI5 - why does this matter. Chips are so tiny anyways. What does a jump from say 3nm to 1nm mean for me as an end consumer? Thanks!
smenjas t1_iv6jv9w wrote
Smaller process nodes allow chips to perform operations faster and use less energy. So your computer or your phone can have better battery life, operate with a lower electricity bill, and run more complex software without feeling slow.
Because the distance between components is smaller, the electrical signal can reach them more quickly, allowing the clock to operate at a higher frequency.
Because there is less material for the electricity to pass through, there is less electrical resistance, so the chip uses less energy to perform the same computations as a larger process node.
The problem with making the wires smaller and closer together is that electrons will “tunnel” through the insulating layer between them, causing the electrical signals to behave unpredictably. It is also very difficult to etch the patterns into the chips, because the size of the wires is approaching the limits of our ability to focus light accurately enough.
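For a rough feel of why that tunneling worry blows up as the gap shrinks, here is a back-of-the-envelope sketch using the textbook rectangular-barrier approximation; the barrier height and widths are illustrative guesses, not real process parameters:

```python
import math

# Textbook rectangular-barrier estimate of tunneling probability:
#   T ~ exp(-2 * d * sqrt(2 * m * U) / hbar)
# Illustrative numbers only; not real process parameters.
HBAR = 1.054_571_8e-34   # reduced Planck constant, J*s
M_E = 9.109_383_7e-31    # electron mass, kg
EV = 1.602_176_6e-19     # joules per electron-volt

def tunneling_probability(barrier_nm: float, barrier_ev: float = 3.0) -> float:
    """Rough probability an electron tunnels through an insulating barrier."""
    d = barrier_nm * 1e-9
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * d)

for nm in (3.0, 2.0, 1.0):
    print(f"{nm:.0f} nm barrier -> T ~ {tunneling_probability(nm):.1e}")
```

The point is only the exponential: shaving a couple of nanometres off the insulator raises the leak probability by many orders of magnitude.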
CompressionNull t1_iv7ne8k wrote
Yea but the flip side of that coin is lazy software development. Hardware is so fast now that coders don’t need to optimize code anymore, so performance for the end user does not advance as rapidly as it should.
xeneks t1_iv7z3mx wrote
Actually they do.
It lags the leading edge though. I think that’s due to the long time it takes to recode at a simple or base level: replacing routines or libraries, writing entirely new code bases, or implementing algorithms that take advantage of unique or abstracted hardware.
Rewriting software with new codebases, new libraries or upgraded dependencies often addresses software bloat issues. If you upgrade the OS you can often run more recent apps.
Guessing mostly,
If you take a bunch of computers that are <1 year old and the best OS & software you can find, the software choices often work fine, but are actually not so optimised. Sometimes they are brutal in their resource requirements.
Then you take a bunch of computers >5 years old and install the best OS & software you can find. The software choices apply many code optimisations that actually take substantial advantage of the full set of hardware features.
It’s another reason why old hardware is amazing and always worth keeping, repairing and maintaining, and even actively using privately, professionally or commercially.
It’s why even a low-end old mobile phone is worth spending hours to repair, service, and make reliable.
Apply an upgraded OS or different apps and suddenly the phone is a completely different machine: not only functional, but usable and even satisfying and enjoyable to use.
This is really easy to do with PCs, which typically run Windows or Linux, but with phones or with Apple hardware it’s less possible due to the closed development environment.
I’ve done it using jailbroken Android stacks though, and been very happy as old hardware suddenly works on par with new hardware, with no additional resource, pollution or water needs, and deferred recycling costs.
When old hardware is reliable and operating consistently, or even just low cost but working well and repairable, you really warm to the manufacturers.
Source:
Personal/professional experience over 20+ years of trying to get new, optimal OS & software working on old hardware, to avoid wasting the embedded resource, material and carbon costs and the associated water and air pollution.
Ps: 1nm… low power! Low temperature! Awesome! I’m thinking this might create the first generations of hardware for computers, phones and tablets that might remain functional in the field past two decades… I hope it’s able to be adjusted to remain viable even if hardware code exploits are discovered after a decade or more of use. Aside from microcode, air gaps and isolation from networks, what other approaches are taken to make hardware reliable?
wen_mars t1_iv8r7f8 wrote
> Guessing mostly,
>
> If you take a bunch of computers that are <1 year old and the best OS & software you can find, the software choices often work fine, but are actually not so optimised. Sometimes they are brutal in their resource requirements.
>
> Then you take a bunch of computers >5 years old and install the best OS & software you can find. The software choices apply many code optimisations that actually take substantial advantage of the full set of hardware features.
>
> It’s another reason why old hardware is amazing and always worth keeping, repairing and maintaining, and even actively using privately, professionally or commercially.
This is not true. The actual reasons why old hardware works just fine are that CPUs have not improved all that much in single-threaded performance over the last decade or so and RAM does not meaningfully impact performance unless you have too little of it. The only big change has been the transition from HDDs to SSDs. Loading times and boot times have improved a lot because of it.
CPUs now have more cores than before but most software does not take advantage of it.
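As a toy illustration of that last point, here is a minimal sketch (stdlib only, made-up workload) comparing a CPU-bound task run serially against the same work spread across processes; software that stays on the serial path never sees the extra cores:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    """A deliberately CPU-bound toy task."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [2_000_000] * 8

    start = time.perf_counter()
    serial = [busy_work(n) for n in jobs]            # one core does all the work
    print(f"serial:   {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:              # work spread across cores
        parallel = list(pool.map(busy_work, jobs))
    print(f"parallel: {time.perf_counter() - start:.2f}s")

    assert serial == parallel
```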
hagaiak t1_iv9hv6l wrote
Indeed. I also blame some language designers. The fact that so many computers, including smartphones, are running so much software written in dynamic languages instead of proper compiled ones accounts for an insane amount of lost performance in the world.
I feel it is just disrespectful. These languages could have been designed a bit differently to solve the same use cases and still be in a similar performance category to compiled languages.
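Not a dynamic-vs-compiled benchmark, but a rough way to feel the per-operation interpreter overhead being described: the same reduction done in an explicit Python loop versus inside CPython's C-implemented sum() (N is an arbitrary size picked for illustration):

```python
import timeit

N = 5_000_000

def interpreted_loop() -> int:
    """Every addition is dispatched one bytecode at a time."""
    total = 0
    for i in range(N):
        total += i
    return total

def c_level_sum() -> int:
    """Same reduction, but the loop runs inside CPython's C implementation of sum()."""
    return sum(range(N))

print("interpreted loop:", timeit.timeit(interpreted_loop, number=5))
print("C-level sum:     ", timeit.timeit(c_level_sum, number=5))
```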
TheRidgeAndTheLadder t1_iv6irbn wrote
50% power usage drop. Same as any other process change.
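The rule of thumb usually behind figures like that is the first-order dynamic-power relation P ≈ alpha·C·V²·f; a minimal sketch with made-up per-node scaling factors, just to show how capacitance and voltage improvements compound:

```python
# Dynamic (switching) power scales roughly as P ~ alpha * C * V^2 * f.
# The scaling factors below are invented purely to show how modest per-node
# improvements in capacitance and voltage compound into a large power drop.
def dynamic_power(alpha: float, c: float, v: float, f: float) -> float:
    """First-order switching-power model: activity * capacitance * V^2 * frequency."""
    return alpha * c * v**2 * f

old_node = dynamic_power(alpha=0.2, c=1.0, v=1.0, f=3.5e9)
new_node = dynamic_power(alpha=0.2, c=0.8, v=0.85, f=3.5e9)  # ~20% less C, ~15% less V
print(f"relative power: {new_node / old_node:.2f}")          # ~0.58 of the old node
```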
cocopuffs239 t1_iv6mhvf wrote
Easiest way to explain is this: I put 3.2k into my computer in 2014, and right now the phone I bought in 2020 has almost the same amount of processing power my desktop has.
But instead of needing desktop power, my phone battery is now enough to power my phone's processing.
This is all due to shrinkage
Economy_Variation365 t1_iv6zal5 wrote
The phone you bought in 2000? Huh?
Lawjarp2 t1_iv741yg wrote
He just upgraded it
cocopuffs239 t1_iv7dp6g wrote
Lmao meant 2020
wordyplayer t1_iv7gevo wrote
George: "You ladies know about shrinkage, right? Right??"
RikerT_USS_Lolipop t1_iv86k8r wrote
And where have all those gains gone? The user experience is identical. Everything the hardware guys give us, the software guys take away.
cocopuffs239 t1_iv8l0y2 wrote
I wouldn't say that; it's a different form factor and a different OS, it's not the same as a PC
muchcharles t1_iv6xcb5 wrote
Increasing linear density by 2X (not necessarily happening depending on how they are applying the marketing term to actual sizes) means quadrupling the number of transistors.
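The arithmetic in miniature, with made-up pitches (one idealised device per pitch-by-pitch cell, which no real process matches):

```python
# Doubling linear density shrinks the pitch in both X and Y, so four times as
# many transistors fit in the same area. Pitches below are invented purely to
# show the arithmetic, not taken from any real process.
def transistors_per_mm2(pitch_nm: float) -> float:
    """Idealised density assuming one device per pitch-by-pitch cell."""
    return 1e12 / pitch_nm**2          # 1 mm^2 = 1e12 nm^2

coarse = transistors_per_mm2(pitch_nm=60)
fine = transistors_per_mm2(pitch_nm=30)   # 2x linear density
print(fine / coarse)                       # -> 4.0
```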
WheelyFreely t1_iv75mol wrote
A chip that was 3mm in size is now 1mm. Not only does it shrink in size, allowing more chips to be installed, it also lessens the material required to build one and the energy to operate them.
Chop1n t1_iv82gip wrote
Have you been thinking all this time that processor architecture is described in terms of millimeters?
toastjam t1_iv84zwu wrote
They're talking about chips, not transistors. The scale change would be roughly proportional.
WheelyFreely t1_iv8tky1 wrote
The comment I replied to asked that we explain it like he was 5. So it's an oversimplification, but done for the sake of the analogy.
Down_The_Rabbithole t1_iv9ggzh wrote
Smaller chips are faster because things are closer together.
Smaller chips are cheaper to produce because you can make more of them at the same time (rough arithmetic in the sketch after this comment)
Smaller chips consume less power and thus increase battery life on smartphones/laptops
Smaller chips produce less heat and thus can either be clocked higher for more speed, or laptops/smartphones can be made smaller/thinner as they need less cooling.
But most often the chips don't actually shrink, they just use the new production technology to put more stuff in a chip of similar size.
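A rough sketch of the cost point, using the standard first-order dies-per-wafer estimate; the die areas and the 300 mm wafer are illustrative, and scribe lines, edge exclusion and defect yield are ignored:

```python
import math

# First-order dies-per-wafer estimate: how many dies of a given area fit on a
# round wafer. Purely illustrative; die areas below are made up.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius**2 / die_area_mm2                       # simple area ratio
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)  # partial dies at the edge
    return int(gross - edge_loss)

print(dies_per_wafer(die_area_mm2=150))   # larger die  -> ~416 chips per wafer
print(dies_per_wafer(die_area_mm2=75))    # smaller die -> ~865 chips per wafer
```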
2Punx2Furious t1_iv7lc68 wrote
Yeah, but what about quantum tunneling?
inspectorgadget9999 t1_iv7oukf wrote
Maybe it's a problem, maybe it's not
Down_The_Rabbithole t1_iv9g8vz wrote
Quantum tunneling has been a problem since 32nm. The solution is to have hardware that does the calculation multiple times to ensure a bit didn't get switched; the result that comes up most often is assumed to be the correct one.
Jim Keller has an entire talk about how to manage quantum tunneling bit flips statistically.
Sadly it means more and more of the actual silicon is used for redundancy stuff like this instead of actually used for normal computing.
We can clearly see this as a CPU from 2008 (I7 920) and a CPU from 2022 (I7 13900k) have almost 100x difference in amount of transistors, yet the 13900k is "only" 5-10x faster.
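The voting idea in miniature; real chips do this in circuitry rather than software, and flaky_add and its flip probability here are invented purely for illustration:

```python
import random
from collections import Counter

def flaky_add(a: int, b: int, flip_prob: float = 0.05) -> int:
    """Toy stand-in for a computation whose result occasionally gets a bit flipped."""
    result = a + b
    if random.random() < flip_prob:
        result ^= 1 << random.randrange(8)   # flip one random low bit
    return result

def voted_add(a: int, b: int, copies: int = 3) -> int:
    """Run the computation several times and keep the most common answer."""
    results = [flaky_add(a, b) for _ in range(copies)]
    return Counter(results).most_common(1)[0][0]

print(voted_add(40, 2))   # almost always 42, even with occasional bit flips
```

With three copies, a single corrupted result gets outvoted by the two clean ones; the cost is doing the work more than once, which is the redundancy overhead described above.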
2Punx2Furious t1_iv9jgib wrote
> The solution is to have hardware that does the calculation multiple times to ensure a bit didn't get switched; the result that comes up most often is assumed to be the correct one.
So we have to do the same calculation multiple times, effectively negating any gains coming from smaller transistors? Or even counting the additional calculations, it's still worth it? I assume the latter, since we're still doing it.
> We can clearly see this as a CPU from 2008 (I7 920) and a CPU from 2022 (I7 13900k) have almost 100x difference in amount of transistors, yet the 13900k is "only" 5-10x faster.
Ah, there's the answer. Thanks.
space_troubadour t1_iv6e0l4 wrote
Damn, so soon we’ll be talking about picometer processes?
Sotamiro2 t1_iv9g5fb wrote
I think they want to use the term "angstrom" which is 0.1nm or 100 picometers
Kinexity t1_iv6fmnp wrote
TSMC isn't approaching 1 nm - they are approaching "1 nm". This name has nothing to do with any dimension of their transistors as it's only a marketing name.