Comments
BLACK-C4T t1_iwgierp wrote
Will be more, it just doesn't make sense to upgrade every time something better comes along
ShavedPapaya t1_iwgoku8 wrote
Don’t tell r/pcmasterrace that
Bobert_Manderson t1_iwgptsc wrote
Hey, I’ll dig myself into debt however I want. American dream.
ShavedPapaya t1_iwh1i0n wrote
If that debt isn't medical, then you're an amateur.
Bobert_Manderson t1_iwhg14n wrote
Jokes on you, I put my life savings into GameStop, bought $20k worth of guns, an $85k lifted truck, and a $500k house while making $50k a year. All I need to do is have a mild medical emergency and I might beat the American Dream speedrun record.
djphreshprince t1_iwikwcj wrote
Presents mortgage-sized student loan debt
Lucius-Halthier t1_iwhdbjn wrote
I’ve already alerted the council and they became as hot in the face and angry as a 4090
[deleted] t1_iwhqfpe wrote
[removed]
SomeToxicRivenMain t1_iwgfrh7 wrote
That’s over 20%!
molybdenum99 t1_iwiwmz0 wrote
And here I was thinking they only had 5 of them
Mowensworld t1_iwfp3qy wrote
At the moment EPYC is just too good and new chips are looking even better, so I don't see this changing any time soon. Considering AMD was literally almost down and out a decade ago, I can't wait to see what Intel fires back with or what other architectures have in store.
rtb001 t1_iwgntj1 wrote
It is super impressive that Intel is a much bigger company that until recently only did CPUs, and nVidia is a much bigger company that mostly does GPUs, while AMD does BOTH yet has survived all this time.
frostnxn t1_iwgo5kz wrote
Yes, but AMD builds the consoles exclusively, which helped them stay afloat for sure.
rtb001 t1_iwgov0f wrote
Also I think in hindsight, AMD spinning off GlobalFoundries was a really good move. Maybe at the time it was because AMD didn't have the money to keep and maintain their own fab, so they had to become a contract manufacturer. However, in later years we would see that not having their own fab meant AMD could be agile about the design of their next-gen PC and server chips. So long as TSMC or Samsung could make it, then AMD could design it. But Intel was forced to only make chip designs that could be made at a good yield in their own fabs.
sultry_eyes t1_iy5dvg4 wrote
This is because of two emerging markets:
NAND flash
Mobile phones and tablets/phablets
The tablet is somewhat like a phone and a laptop, but not quite either.
Intel and NVIDIA were already in their own respective markets: CPUs and GPUs.
AMD sat in between CPUs and GPUs, and IBM no longer made great console chips; see the Sony Cell processor (poor-performing, difficult to program) and the Xbox 360's red-ring-of-death issues.
There suddenly needed to be a fab that could fill the gap for the emerging mobile phone sector. Intel failed and failed HARD in this market. They could not pivot to mobile phones.
Samsung and TSMC however did not fail. And NAND Flash is necessary in order for mobile phones to store the amount of data that they store.
This new market heavily funded both Samsung and TSMC, to the point where TSMC is able to encroach on Intel's big data center customers. Before this, those customers mostly went with Intel, which was the most reliable option as opposed to 2010s AMD. Back then you would be laughed out of the room if you so much as mentioned going with an AMD system.
They had a very tiny laptop (mobile) segment.
Desktops, Servers, and Laptops were all Intel. And that made sense for them to stick to just that and not pivot into the new and emerging mobile phone market/segments.
And yeah hindsight is 20/20 and all that. Now it is Samsung and TSMC with heavy mobile segment growth. And because they are capital rich, they are encroaching into Intel's territory faster than Intel can pivot to theirs.
Intel Foundry won't fire up until 2025. And even then, we will see how many customers they can win back. (Just Qualcomm and Apple pretty much).
I can see Apple wanting to diversify their suppliers away from TSMC. Apple designs chips spanning most of what Intel and TSMC can sell: smartphone, watch, iPad/tablet, laptop, and desktop chips.
Qualcomm just sells many, many mobile phone CPUs/GPUs, so they may go with Intel if priced correctly.
I don't see anyone dethroning Samsung from their NAND flash memory business. They are pretty good at that. And there is demand for that type of storage.
HDD manufacturers appear content with pumping out 10TB+ drives forever. No change and no one clamoring for big changes there.
Halvus_I t1_iwh5igk wrote
Steam Deck too. Switch is Nvidia though.
mule_roany_mare t1_iwgp1l4 wrote
I’m honestly surprised Intel didn’t try to launch their GPUs with a console.
There’s no better environment to prove your hardware while devs optimize to it.
The whole DX12-vs-older-APIs situation would have been a non-issue & given them another year or two to work things out.
SpicyMintCake t1_iwgtz6s wrote
A lot harder to convince Sony or Microsoft to leave the established AMD platform for a new and untested at scale platform. Especially when consoles are thin margin items, any hardware issue is going to cut deep.
frostnxn t1_iwgu9su wrote
Also Intel did not have the patent for GPUs, which expired in 2020 I believe.
mule_roany_mare t1_iwguwll wrote
..Intel has been making GPUs for a few decades. Just not discrete GPUs
thad137 t1_iwh0fpk wrote
The patent for what exactly? There's any number of GPU manufacturers. I don't believe any of them all have a common patent.
Justhe3guy t1_iwgvetu wrote
They do work on very thin margins for that though so they don’t earn massively from consoles, still worthwhile
DatTF2 t1_iwgos0d wrote
Part of the reason why Intel had so much more market share, at least in the late 90s and early 00s, is that Intel was bribing companies like DELL to only use Intel processors. Most computers you went to buy in a store only had Intel processors, and that's why they dominated the home computing space. While I try not to fanboy and have used both Intel and AMD systems, I am really glad for AMD.
WormRabbit t1_iwgyixd wrote
Their compiler also produced very inefficient code for AMD chips. Not because they didn't implement the optimizations, but because the compiled code detected your CPU model at runtime and used the suboptimal code paths.
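To illustrate the kind of vendor-keyed dispatch being described, here's a toy Python sketch (Linux-only, since it reads /proc/cpuinfo; purely illustrative, not Intel's actual compiler runtime):

```python
# Toy illustration of vendor-based dispatch: pick a code path from the CPU's
# vendor string instead of from the features it actually supports.
# Linux-only (reads /proc/cpuinfo); not how Intel's dispatcher is implemented.
def cpu_vendor() -> str:
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("vendor_id"):
                    return line.split(":", 1)[1].strip()
    except OSError:
        pass
    return "unknown"

def pick_code_path(vendor: str) -> str:
    # The criticism: keying on "GenuineIntel" rather than asking
    # "does this CPU support SSE2/AVX?", so AMD chips fell through to the slow path.
    return "vectorized fast path" if vendor == "GenuineIntel" else "generic slow path"

vendor = cpu_vendor()
print(vendor, "->", pick_code_path(vendor))
```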
pterofactyl t1_iwin5v6 wrote
That’s not a bribe, that’s literally just how business deals work. It’s a bribe when the money is used to influence the decision of a person when money should not be an influence.
qualverse t1_iwiymmx wrote
A regular business deal would be Intel saying "we'll give you a 30% discount if you buy a million Intel processors".
A bribe would be Intel saying "we'll give you a 30% discount if you don't buy any AMD processors" which is what they actually did.
pterofactyl t1_iwjbz5l wrote
Ok so again… that's a business deal. Do you understand that me paying you to exclusively use my product is completely legal and not even immoral unless it causes harm to a person? If a company bribes a doctor to use only their brand of medicine, that's immoral. If a company pays a sports team to only use their products and avoid all others, that's literally the basis of sports sponsorships. Intel presented the best case for Dell to only use their chips. Is your workplace bribing you by paying you a set fee with the understanding that you only work for them and no one else? Come on man
Earthborn92 t1_iwpvteo wrote
Read about Antitrust law.
pterofactyl t1_iwq1ls0 wrote
https://www.investopedia.com/ask/answers/09/antitrust-law.asp
I think you should. Antitrust laws prevent buyers from stopping suppliers from supplying other businesses, but if a supplier pays to be your supplier, that is not antitrust.
Is Nike in violation because they pay teams to use only their shoes and clothes? Literally think about this. Are restaurants in violation for agreeing to stock only Pepsi products?
AsleepNinja t1_iwhq6mm wrote
Intel has been making graphics for decades.
They're just mostly integrated GPUs in CPUs. They're in an enormous number of things.
They're also low-performance and low-power, so not for gaming.
https://en.m.wikipedia.org/wiki/List_of_Intel_graphics_processing_units
More recently, Intel has been launching discrete GPUs in its Arc series.
No idea how good they are.
Dry-Purchase-3022 t1_iwj0qll wrote
AMD doesn't make their own chips, making beating Intel much easier. The fact that Intel is even close to AMD while having a significantly worse manufacturing line is a testament to how great their designs are.
Mowensworld t1_iwiv895 wrote
AMD originally only made CPUs. They bought ATi, who at the time were Nvidia's main competitor, for 5 billion dollars. This was only back in 2006.
Coincedence t1_iwjtwk7 wrote
With upcoming platforms, AMD is shaping up to be a powerhouse. The majority of the performance for a fraction of the price compared to the corresponding Nvidia card is very tempting. Not to mention 3D V-Cache coming up soon to further dominate the gaming CPU market.
damattdanman t1_iwga7yk wrote
What do they get these super computers to do? Like what calculations are they running for this kind of power to make sense?
emize t1_iwgbm0r wrote
While not exciting, weather prediction and analysis is a big one.
Astrophysics is another popular one.
Anything where you need to do calculations that have large numbers of variables.
asdechlpc t1_iwhlzr3 wrote
Another big one is high resolution fluid simulations
[deleted] t1_iwhvijs wrote
[removed]
atxweirdo t1_iwhowxd wrote
Bioinformatics and ML has taken off in recent years. Not to mention data analytics for research projects. I used to work for a supercomputer center. Lots of interesting projects were going through our queues
paypaytr t1_iwj6zbm wrote
For ML this is useless though. They don't need supercomputers, but rather a cluster of efficient GPUs.
DeadFIL t1_iwjflz1 wrote
All modern supercomputers are just massive clusters of nodes, and this list includes GPU-based supercomputers. Check out #4 on the list: Leonardo, which is basically just a cluster of ~3,500 Nvidia A100-based nodes.
My_reddit_account_v3 t1_iwjmbs9 wrote
Ok, but why would supercomputers suck? Are they not equipped with arrays of GPUs as well?
DeadFIL t1_iwjpc1o wrote
Supercomputers cost a lot of money and are generally funded for specific reasons. Supercomputers are generally not very general purpose, but rather particularly built to be as good as possible at one class of task. Some computers will have a lot of CPUs, some will have a lot of GPUs, some will have a lot of both, and some will have completely different types of units that are custom built for a specific task.
It all depends on the supercomputer, but some aren't designed to excel at the ML algorithms. Any of them will do wayyyy better than your home computer due to their processing power, but many will be relatively inefficient.
My_reddit_account_v3 t1_iwjshyb wrote
Right. I guess what you are saying is you prefer to control the composition of the array of CPUs/GPUs, rather than rely on a “static” supercomputer, right?
QuentinUK t1_iwgdsbh wrote
Oak Ridge National Laboratory: materials, nuclear science, neutron science, energy, high-performance computing, systems biology and national security.
damattdanman t1_iwgegbh wrote
I get the rest. But national security?
StrategicBlenderBall t1_iwgigpd wrote
Nuclear forensics, nonproliferation modeling, etc.
nuclear_splines t1_iwgik6l wrote
Goes with the rest - precise simulations of nuclear material are often highly classified. Sometimes also things like "simulating the spread of a bioweapon attack, using AT&T's cell tower data to get high-precision info about population density across an entire city."
Ok-disaster2022 t1_iwiccqm wrote
Well, there are numerous nuclear modeling codes, but one of the biggest and most validated is MCNP. The team in charge of it has accepted bug fix reports from researchers around the world regardless of whether they're allowed to have access to the files and data or not, export control be damned. Hell, the most important part is the cross-section libraries (which cut out above 2 MeV) and you can access those on a public website.
I'm sure there are top secret codes, but it costs millions to build and validate codes and keep them up to date, and there's no profit in nuclear. In aerospace the modeling software is proprietary, but that's because it's how those companies make billion-dollar airplane deals.
nuclear_splines t1_iwid0jw wrote
Yeah, I wasn’t thinking of the code being proprietary, but the data. One of my friends is a nuclear engineer, and as an undergraduate student she had to pass a background check before the DoE would mail her a DVD containing high-accuracy data on measurements of nuclear material, because that’s not shared publicly. Not my background, so I don’t know precisely what the measurements were, but I imagine data on weapons grade materials is protected more thoroughly than the reactor tech she was working with.
[deleted] t1_iwgw7w6 wrote
[deleted]
Defoler t1_iwge3xy wrote
Huge financial models.
Nuclear models.
Environment models.
Things that have millions of millions of data points that you need to calculate each turn
blyatseeker t1_iwhpceq wrote
Each turn? Are they playing one match of civilization?
Defoler t1_iwhsogi wrote
Civ 7 with 1000 random PC faction players on an extra-ultra-max size map and barbarians on maximum.
That is still a bit tight for a supercomputer to run, but they are doing their best.
Broadband- t1_iwgcmvh wrote
Nuclear detonation modelling
0biwanCannoli t1_iwhby3f wrote
They’re trying to play Star Citizen.
johnp299 t1_iwhl0th wrote
Mostly porn deepfakes and HPC benchmarks.
Ok-disaster2022 t1_iwibhfx wrote
For some models, instead of attempting to derive a sexy formulation, you take random numbers, assign them to certain properties for a given particle, and use other random numbers to decide how that particle acts. Do this billions of times and you can build a pretty reliable, detailed model of weather patterns or nuclear reactors or whatever.
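As a concrete (and very toy) illustration of that Monte Carlo idea, here's a short Python sketch that estimates pi by throwing random points instead of deriving anything in closed form; real reactor or weather codes are enormously more sophisticated:

```python
# Toy Monte Carlo: sample random points and let the statistics converge,
# rather than computing a closed-form answer.
import random

def estimate_pi(samples: int) -> float:
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()   # random "particle" position
        if x * x + y * y <= 1.0:                  # inside the quarter circle?
            hits += 1
    return 4.0 * hits / samples

for n in (1_000, 1_000_000):
    print(n, estimate_pi(n))   # the estimate tightens as the sample count grows
```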
These supercomputers will rarely be used all at once for a single calculation. Instead, the different research groups may be given certain amounts of computation resources according to a set schedule. A big deal at DOE SCs is making sure there isn't idle time. It costs millions to power and cool the systems, and letting them run idle is pretty costly. The same can be said for universities and such.
[deleted] t1_iwj8q21 wrote
[deleted]
My_reddit_account_v3 t1_iwjmtbc wrote
My former employer would run simulations for new models of their products (ex: identify design flaws in aerodynamics). Every ounce of power reduced the lead time to get all our results for a given model / design iteration. I don’t understand anything that was actually going on there, but I know that our lead times highly depended on the “super power” 😅
[deleted] t1_iwikq4p wrote
[removed]
supermoderators t1_iwfle2n wrote
Which is the fastest of all the fastest supercomputers?
wsippel t1_iwfy16v wrote
Frontier, the first supercomputer to exceed 1 exaFLOPS, almost three times as fast as number two. Powered by Epyc CPUs and AMD Instinct compute accelerators.
Here's the current list: https://www.top500.org/lists/top500/2022/11/
Contortionietzsche t1_iwg1263 wrote
21,000 kilowatts of power. That's a lot, right? I read a story recently about a company that bought a Sun Enterprise 10000 server and an executive shut it down when they got the electricity bill.
wsippel t1_iwg3vqe wrote
It's a lot, but the performance per watt is actually really good, and that's what matters. It's the sixth most energy efficient supercomputer: https://www.top500.org/lists/green500/2022/11/
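A quick back-of-the-envelope check using the round numbers quoted in this thread (roughly 1 exaFLOPS at roughly 21,000 kW; the official Green500 figure is measured differently):

```python
# Rough efficiency estimate from the round numbers quoted in this thread.
flops = 1e18              # ~1 exaFLOPS sustained
power_w = 21_000 * 1_000  # ~21,000 kW drawn
print(f"{flops / power_w / 1e9:.1f} GFLOPS per watt")  # ~47.6
```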
nexus1011 t1_iwg1krr wrote
Look at the 2nd one on the list.
29,000 almost 30k KW of power!
Zeraleen t1_iwg64o3 wrote
30k kW, that is almost 30MW. wow!
Gnash_ t1_iwgf457 wrote
My Factorio factory only consumes 9 MW and I had to build 3 nuclear reactors, just to keep the power up at night. That’s one big supercomputer
calvin4224 t1_iwgfzlw wrote
IRL a nuclear generator has around 1 GW (1000 MW). But 30 MW is still about 6 land-based wind turbines running at full load. It's a lot!
Ok-disaster2022 t1_iwid396 wrote
Physics-wise you can run a GW reactor at 30 W and it will essentially last forever from a fuel standpoint; just the turbines and such have to be re-engineered for that lower output.
But there are smaller reactors. I believe for example the Ford class supercarriers run on 4x250w reactors.
calvin4224 t1_iwror1m wrote
I don't think that's how nuclear fission works.
Also, 4x250 Watts will run your kettle but not a ship :P
PhantomTroupe-2 t1_iwgjgwp wrote
So is the guy above a lying little shit, or?
Gnash_ t1_iwgk22o wrote
Factorio is a video game, did you really think I went out and built 3 reactors all by myself
also was the uncalled for insult really necessary?
PhantomTroupe-2 t1_iwgo5sw wrote
Yeah I think it was
Alexb2143211 t1_iwgpnuv wrote
Damn, hope you become a better person
PhantomTroupe-2 t1_iwgq52t wrote
Well, you can always cry I guess
depressedbee t1_iwgfvjf wrote
Yea maybe, but can it run Crysis?
MattLogi t1_iwga7wo wrote
What's its power draw? Isn't something like 30000 kWh only like $3000 a month? Which sure isn't cheap, but if you're buying these supercomputers, I feel like $3000 is a drop in the bucket for them
Edit: yup, made a huge mistake in calculation. Much much larger number
Catlover419-20 t1_iwgfacd wrote
No no, that means 30,000 kWh for every hour of operation. For one month of 24/7 at 30 days you'd need 21,600,000 kWh, or 21,600 MWh, which is €2,741,040 at 12.69 ct/kWh. So about $2.75M if I'm correct
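The same arithmetic as a tiny Python sketch (the 12.69 ct/kWh rate is just the figure used above, not an official tariff):

```python
# Monthly energy cost for a machine drawing ~30,000 kW around the clock.
power_kw = 30_000            # sustained draw
hours = 24 * 30              # one month of 24/7 operation
rate_eur_per_kwh = 0.1269    # the rate assumed in the comment above

energy_kwh = power_kw * hours
cost_eur = energy_kwh * rate_eur_per_kwh
print(f"{energy_kwh:,.0f} kWh -> {cost_eur:,.0f} EUR per month")
# 21,600,000 kWh -> 2,741,040 EUR per month
```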
MattLogi t1_iwguqiu wrote
Yeah I messed up! I was thinking W, as I do the calculation a lot with my computers at home, so I always divide by 1000 to get kWh. Like you said, this is 30000 kWh! Oof, yeah, that's a big bill.
Contortionietzsche t1_iwgark3 wrote
True. Frontier is for the US Department of Energy, right? The company that bought the E10K probably was not. AFAIK the E10K requires a 100 amp power line, and back in those days (late 90s) I don't think performance per watt was a thing they worried about; could be wrong though.
Dodgy_Past t1_iwgrgw5 wrote
I was selling sun servers back then and customers never considered power consumption.
Diabotek t1_iwgf8be wrote
Lol, not even close. 30,000 kW * 720 hours * price per kWh.
MattLogi t1_iwguf3a wrote
Oooo yeah I made a major mistake in calculation. I'm so used to calculating W with home computers and dividing by 1000 to get my kWh… this is 30000 kWh! Ooooof! Yeah that's a huge bill. Makes a lot more sense now lol
Diabotek t1_iwh5jxx wrote
Yeah 30000kW is an insanely massive number. The amount of power required to run that for an hour, could power my stack for 7 years.
The-Protomolecule t1_iwgismm wrote
It's easy to power when you're Oak Ridge and have your own nuclear power plant.
LaconicLacedaemonian t1_iwgsooq wrote
Data centers are generally located specifically where they can get cheap power.
chillinwithmypizza t1_iwgftvr wrote
Wouldn’t they lease it though 🤔 Idk any company that outright buys a server
Contortionietzsche t1_iwgiihr wrote
You’re probably right.
fllr t1_iwg8ywq wrote
An exaflop in a singular computer… that’s absolutely insane :O
neoplastic_pleonasm t1_iwgv80c wrote
It's a cluster. I forget if they've published the official number yet but I want to say it was something like 256 racks of servers. I turned down a job there last year.
diacewrb t1_iwg8ks6 wrote
If you include distributed computing then Folding@Home is probably the fastest in the world with 2.43 exaflops of power since 2020.
jackfabalous t1_iwgectg wrote
woah that’s cool as fuck 🤯
IAmTaka_VG t1_iwgz2bx wrote
I think they basically ran out of simulations because so many people signed up no?
Ripcord t1_iwgempu wrote
It's right at the top of the goddamn article
MurderDoneRight t1_iwgkq9m wrote
If you want your mind blown you should look into quantum computers! They're insane! They can create time crystals, which are crystals that can change state without adding or losing energy, creating true perpetual motion! And with time crystals we might be able to create even faster quantum computers by using them as quantum memory.
And even though I have no idea what any of it means, I am excited because this is real-life sci-fi stuff! There's a great mini-series called DEVS where they use a quantum computer, and it's nuts that they exist in real life.
And you might say "yeah yeah, everyone says there's new tech on the horizon that will change the world, but it always takes way longer for anything close to be developed", but check this out: the IDEA of time crystals was thought up just 10 years ago. Since then they have not just been proven to exist, we can actually create them. And if you take a deep dive into everything quantum computers are doing, it's just speeding up exponentially every day!
bigtallsob t1_iwgosoo wrote
Keep in mind that anything that appears to be "true perpetual motion" at first glance always has a catch that prevents it from being actual perpetual motion.
SAI_Peregrinus t1_iwhcom7 wrote
Perpetual motion is fine; perpetual motion you can extract energy from isn't. An object in a stable orbit with no drag (hypothetical truly empty space) around another object would never stop or slow down.
A time crystal is a harmonic oscillator that neither loses nor gains energy while oscillating. It's "perpetual motion" in the "orbits forever" sense, not the "free energy" sense. Also has nothing to do with quantum computers.
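For a feel of that "moves forever, but no free energy" distinction, here's a tiny Python sketch of an undamped, frictionless oscillator whose total energy stays essentially constant no matter how long it runs (a toy classical numerical example only, nothing quantum about it):

```python
# Undamped harmonic oscillator, integrated with a symplectic (kick-drift) step
# so the numerical energy stays bounded. The motion never stops, but there is
# no surplus energy to extract without also stopping the motion.
k, m, dt = 1.0, 1.0, 0.001   # spring constant, mass, time step
x, v = 1.0, 0.0              # start displaced, at rest

def energy(x, v):
    return 0.5 * m * v * v + 0.5 * k * x * x

e0 = energy(x, v)
for _ in range(1_000_000):   # 1,000 time units
    v -= (k / m) * x * dt    # kick
    x += v * dt              # drift
print(f"energy drift after 1,000 time units: {energy(x, v) - e0:+.2e}")
```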
pterofactyl t1_iwinoyy wrote
Well no because for that “no drag” space to exist, it would need to be in an imaginary world, so perpetual motion does not exist either way.
MurderDoneRight t1_iwh3wgv wrote
True, a perpetual motion machine is impossible according to the laws of physics. But time crystals are not a machine, it's an entirely new kind of exotic matter on par with supersolids, superfluids and Bose-Einstein condensates!
bigtallsob t1_iwh8ebm wrote
Yeah, but you are dealing with quantum funkiness. There's always a catch, like with quantum entanglement, and how despite one's state affecting the other regardless of distance, you can't use it for faster than light communication, since the act of observing the state changes the state.
MurderDoneRight t1_iwhacjs wrote
Yeah, like I mentioned in my first comment I don't really know anything so you may be right too. 😉
But I don't know, there's a lot of cool discoveries being made right now anyway. I did read up on quantum entanglement too because of this year's Nobel Prize winners in physics, who used it to prove that the universe is not "real". How crazy is that?
SAI_Peregrinus t1_iwhc0ph wrote
Time crystals have no direct relation to quantum computers.
Quantum computers currently are very limited, but may be able to eventually compute Fourier Transforms in an amount of time that's a polynomial function of the input size (aka polynomial time), even for large inputs. That would be really cool! There are a few other problems they can solve for which there's no known classical polynomial time algorithm, but the Quantum Fourier Transform (QFT) is the big one. AFAIK nobody has yet managed to even factor the number 21 with a quantum computer, so they're a tad impractical still. Also there's no proof that classical computers can't do everything quantum computers can do just as efficiently (i.e. that BQP ≠ P), but it is strongly suspected.
Quantum annealers like D-wave's do exist now, but solve a more limited set of problems, and can't compute the QFT. It's not certain whether they're even any faster than classical computers.
I've made several enormous simplifications above.
mule_roany_mare t1_iwgpo4d wrote
Devs was an imperfect show, but good enough to be measured against one.
It deserved a bigger audience & should get a watch.
TheDevilsAdvokaat t1_iwgnkic wrote
NVIDIA technologies power 342 systems on the TOP500 list released at the ISC High Performance event today, including 70 percent of all new systems and eight of the top 10. (June 28 2021)
Not a fanboy of either, just posted this for the sake of comparison.
jollingo t1_iwgglb3 wrote
The real question is can it run crysis at max settings?
PhantomTroupe-2 t1_iwgjnuj wrote
Not at max settings but yeah
jollingo t1_iwgkj3v wrote
I’ve edited it
12358 t1_iwggel6 wrote
And BTW, they run Arch, right?
dasda_23 t1_iwgr56q wrote
But can they run doom?
nascarhero t1_iwh1jxw wrote
Downvote for sponsored content
[deleted] t1_iwguhoy wrote
[deleted]
[deleted] t1_iwgy6wu wrote
[removed]
tobsn t1_iwh32gt wrote
just until the RTX 50 series is released and then it's going to be a lot more!
/s
dhalem t1_iwhhoun wrote
I wonder how one of Google’s data centers compares.
The_Zobe t1_iwhjjfp wrote
We want percents! We want percents!
beaumax1 t1_iwi2dc0 wrote
Zoinks
[deleted] t1_iwiu79k wrote
[removed]
[deleted] t1_iwiytrx wrote
[removed]
csbc801 t1_iwj6awg wrote
So my computer got their only dud?
tripodal t1_iwjhv1m wrote
And top 3
IdeaJailbreak t1_iwk1cjw wrote
Pfft only 5 supercomputers?
[deleted] t1_iwk74m4 wrote
[removed]
lohvei0r t1_iwmauak wrote
4 supercomputers!
GroundbreakingDot961 t1_iwmx7e6 wrote
Looks like a factory
Sarah_Rainbow t1_iwgfy31 wrote
Serious question, what is the need for supercomputers when you have access to cloud computing in all its glory?
alexandre95sang t1_iwgkk3d wrote
The cloud is just someone else's computer
dddd0 t1_iwghfua wrote
Interconnect
Supercomputer nodes are usually connected using 100-200 Gbit/s fabrics with latencies in the microsecond range. That's pretty expensive and requires a lot of power, too, but it allows you to treat a supercomputer much more like one Really Big Computer (and previous generations of supercomputers were indeed SSI - Single System Image - systems) instead of A Bunch Of Servers. Simulations like Really Big Computers instead of A Bunch Of Servers. On an ELI5 level something like a weather simulation will divide the world into many regions and each node of a supercomputer handles one region. Interactions between regions are handled through the interconnect, so it's really important for performance.
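As a very rough sketch of that region-per-node picture, here's a toy mpi4py example: each rank owns a strip of cells and only exchanges its edge ("halo") cells with its neighbours each step, and that exchange is exactly the traffic the fast interconnect exists to carry. Purely illustrative, nothing like a real weather code:

```python
# Toy 1D domain decomposition with halo exchange (run: mpiexec -n 4 python demo.py).
# Each MPI rank owns one "region"; only the edge cells travel over the interconnect.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.full(10 + 2, float(rank))  # 10 interior cells plus 2 ghost cells
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(100):
    # Halo exchange: copy my edge cells into my neighbours' ghost cells.
    comm.Sendrecv(local[1:2], dest=left, recvbuf=local[-1:], source=right)
    comm.Sendrecv(local[-2:-1], dest=right, recvbuf=local[0:1], source=left)
    # Cheap local update (simple averaging/diffusion) on the interior cells.
    local[1:-1] = 0.5 * local[1:-1] + 0.25 * (local[:-2] + local[2:])

print(f"rank {rank}: mean of my region = {local[1:-1].mean():.3f}")
```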
Sarah_Rainbow t1_iwghri2 wrote
Thats awesome, thanks!
LaconicLacedaemonian t1_iwgt8rr wrote
I maintain a 20k node cluster of computers that pretends to be a single computer. The reason to do it that way is that if we 10x our size we can 10x the hardware, and individual machines that die just get replaced.
_ytrohs t1_iwgziml wrote
and cost.. and hypervisor overhead etc
krokotak47 t1_iwghwko wrote
So cloud computing literally happens in the sky and we don't need hardware for it?
ButMoreToThePoint t1_iwgjhug wrote
It says it right in the name
Sarah_Rainbow t1_iwgikys wrote
Why else would i buy a telescope for?!??
I mean with the cloud you can have your computing power distributed over a larger geographic area, plus the hardware cost is lower and setting it up is relatively simple. I've heard stories from the physics department at UofT where students preferred to use AWS over other available options (supercomputers in Canada) to run their models and stuff.
Ericchen1248 t1_iwgnt5q wrote
While I don’t know the costs for them. I would wager the students chose to use AWS not because it was cheaper but because registering/queueing for super computer time is a pain/can take a while.
Sarah_Rainbow t1_iwgo5u4 wrote
Thats a great point. thanks
krokotak47 t1_iwgq1bd wrote
I believe it all comes down to cost. I've seen some calculations on reddit that were like 30k USD for the compute needed on Amazon (absolutely no idea what the task was, something with GPUs). So that's obviously not possible for many people. What's the price for a supercomputer compared to that? I imagine it may be free for students? In my university you can gain access to serious hardware (I'm talking powerful servers, not comparable to a supercomputer) by just asking around. What is it like in bigger universities?
Pizza_Low t1_iwhkwyo wrote
Cloud is great for when you want to rent someone else’s computer space. It can be cheaper than building a data center, maintaining the hardware and software, expand and contract dynamically.
For example, a ton of servers can be brought online for something like Netflix streaming the Super Bowl. They might suddenly need 3 times the servers they normally need; cloud is good for that sudden expansion, but tends to be more costly for regular use.
Super computers are great for lots of calculations very quickly. For example you want to simulate the air flow of individual air molecules over a new airplane wing design. Or some other kind of complex mathematical modeling in science, or finance.
JohnnyCupcakes t1_iwh0z9y wrote
yea, for poor people
izza123 t1_iwgkio8 wrote
Nvidia on suicide watch
_HiWay t1_iwguxle wrote
not at all, with their acquisition of Mellanox and smart NICs (Bluefield 2 and beyond) they are accelerating things right on the edge of the interconnect. Will vastly improve performance once scalability and software have been figured out at super computer size.
mikegwald t1_iwgrklq wrote
AMD is still a thing ?
fishbulbx t1_iwgkw4u wrote
AlltheCopics t1_iwg11ga wrote
Intel the good guys
JJagaimo t1_iwgbvil wrote
Neither AMD nor Intel are the "good guys." Both are corporations that, while we may support one or the other for whatever reason, we should not treat as if they were an individual we personally know or as if they were infallible.
imetators t1_iwgr6fj wrote
If you knew that these corporations are not actually competitors but more like teammates in market rigging, the statement about 'good guys' becomes much funnier.
AlltheCopics t1_iwgie5e wrote
Aight
[deleted] t1_iwfsmap wrote
[deleted]
Substantial_Boiler t1_iwfvem8 wrote
Supercomputers aren't really meant to be impressive tech demos, at the end of the day they're meant for actual real-world applications
Avieshek OP t1_iwg3qod wrote
Then quantum computers would simply become the next supercomputers, as "supercomputer" is just a term for commercial purposes with multiple stacks, you do realise that right?
What we are using can be termed classical computers, and if tomorrow's iPhone is a quantum computer in everyone's hands, then there's no reason a supercomputer in a university would still be a classical computer.
12358 t1_iwggr2p wrote
Quantum computers are not a more powerful version of a supercomputer; they do different kinds of calculations, and solve problems differently, so they are used to solve different kinds of problems. They are not a replacement for supercomputers.
Avieshek OP t1_iwgh5i4 wrote
As said, quantum and classical are different breeds of computers with no parallel between them, so please refrain from twisting this into your own version: nothing was said about "quantum being more powerful than a supercomputer". I just stated what a supercomputer itself is, so comparing it with quantum is dumb.
themikker t1_iwfutq9 wrote
Quantum computers can still be fast.
You just won't be able to know where they are.
[deleted] t1_iwg455v wrote
[removed]
SAI_Peregrinus t1_iwhczon wrote
They still can't find the prime factors of the number 21 with a quantum computer. They're promising, not impressive (yet).
iiitme t1_iwfvjfl wrote
What's with the downvotes? This isn't a serious comment.
jahwls t1_iwfo1lq wrote
101/500 pretty good.