Comments
Wuyley t1_itml7cu wrote
The week after you install your "at home fusion reactor"
SwarfDive01 t1_itmnbcg wrote
But really though. This is probably going to be reserved for extremely data-heavy research. Like recording photon propagation in real time, or upgrading the LHC sensors
amcrambler t1_itmovue wrote
Ahh but can it mine bitcoins? If so, you’ll never get your hands on one.
DangerStranger138 OP t1_itmpjtq wrote
Did somebody say r/Buttcoin!?
chisayne t1_itmpq6p wrote
Nobody needs that much porn
DangerStranger138 OP t1_itmq1nn wrote
INB4 Your mom's webcam! hurrhurrhurr GOTTEEM!!!
DangerStranger138 OP t1_itmr52e wrote
But then what will I barter with the doompreppers for a can of beans!?
am0x t1_itmzpg6 wrote
And then in 3 years it will mainly be used for porn and gaming. The true drivers of the market.
arrjen t1_itn1x0r wrote
r/titlegore
MsGoogleEyes t1_itn4lfv wrote
i literally hate being 5’4
SwarfDive01 t1_itnhqn1 wrote
They could handle all the data streams in a single cable for every streamer plus their cyber feeler 5000 for a lower cost than the competition
RockItGuyDC t1_itnhy2n wrote
Transferred from what to what? Is there any memory or storage media that has I/O fast enough to say you actually transferred any information?
BigGayGinger4 t1_itnj17g wrote
yo dawg we herd u like ray tracing
rastilin t1_itnj19o wrote
DDR5 is ~50 GB/s, which is workable. But it's possible there'll be a way to apply this technology to get even better throughput on memory and processors as well.
EDIT: Here's a thought. Mosix, the software that allowed pooling processors from multiple separate computers over a network, always had the issue that the network was too slow to copy memory around fast enough. With this kind of transfer speed it's actually practical to stream memory from one machine for processing on another.
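Rough numbers, for anyone curious (a sketch; the ~50 GB/s DDR5 figure is from the comment above, and the 1.84 Pbit/s figure is from the article):

```python
# Back-of-the-envelope: how long to move 64 GB (one machine's worth of RAM)?
PAYLOAD_GB = 64
DDR5_GBPS = 50                        # GB/s, local memory bandwidth
LINK_GBPS = 1.84e15 / 8 / 1e9         # 1.84 Pbit/s -> ~230,000 GB/s

print(f"Local DDR5 copy: {PAYLOAD_GB / DDR5_GBPS:.2f} s")        # 1.28 s
print(f"Over the link:   {PAYLOAD_GB / LINK_GBPS * 1e6:.0f} µs") # ~278 µs
# At these speeds the bottleneck flips: in a Mosix-style cluster, the
# network would no longer be the slow part; local memory bandwidth would be.
```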
Moontoya t1_itnnfd1 wrote
Nah
It'll be used by Wall Street if it can shunt that much data that fast.
Reducing latency is mega$: shaving the tiniest fraction of latency gets your trades and positions in 'first', bumping your likely profits.
Commercial-Cicada726 t1_ito4ugs wrote
15 yrs from now...AR is made accessible and seamlessly integrated into everyday life because of this type of research
Account_343 t1_itol8ij wrote
So how can we turn these into a supercomputer? I can think of a lot of medical research and scientific research that can benefit.
November19 t1_itoo63e wrote
Speed trading isn’t a bandwidth problem, and co-located servers already measure cable length to account for the speed of light. This really has nothing to do with that.
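For a sense of why cable length matters here, propagation delay in fibre is easy to estimate (a sketch; ~1.47 is the usual refractive index quoted for silica fibre):

```python
# Light in silica fibre travels at roughly c / n, with n ~ 1.47.
C_KM_S = 299_792.458   # speed of light in vacuum, km/s
N_FIBRE = 1.47         # approximate refractive index of silica fibre

def fibre_delay_us(km: float) -> float:
    """One-way propagation delay over `km` of fibre, in microseconds."""
    return km / (C_KM_S / N_FIBRE) * 1e6

print(f"{fibre_delay_us(1):.2f} µs per km")            # ~4.90 µs
print(f"{fibre_delay_us(0.001) * 1000:.1f} ns per m")  # ~4.9 ns
# Bandwidth doesn't change this number at all, which is the point above:
# HFT cares about distance, not throughput.
```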
archaeolinuxgeek t1_itorb5m wrote
Found the Node developer.
onyxengine t1_itoreij wrote
I think this stuff is going to get rolled into a new wave of tech products. We're mapping the brain this century, hooking devices into the human nervous system on a level we've never seen before: thoughts controlling robots, even fleets of robots. Sure, there will be tons of use cases for AI.
R1150gsguy t1_itp63vi wrote
> …10-100g to the home is on its way
With a 50g cap…
SwarfDive01 t1_itpebvf wrote
Oh, AI was a given too. But I didn't consider brain/computer interfaces. I'd imagine there's plenty of data transfer to maximize with minimal extra hardware.
eddiepaperhands t1_itphrqn wrote
Misleading headline.
eddiepaperhands t1_itphv3z wrote
I already have 10G to my house with no cap.
GoldWallpaper t1_itpno6l wrote
False. The internet is way, WAY bigger than a petabyte. Orders of magnitude bigger. Hell, YouTube alone is bigger than a petabyte.
The chip transmitted the equivalent of one second's worth of all of the internet's traffic. These are totally different things.
GoldWallpaper t1_itpobbh wrote
From point A to point B, in a lab under laboratory conditions. There's no need for storage or memory, because all that's being measured is data flow.
This is a very interesting experiment that could have implications for switches in massive data centers, but isn't all that useful for anyone here.
I'm sad that redditors in this sub seem to not understand even a little what this article is saying, given that what's being described (data transmission) is pretty basic technology. What's new is the technique and hardware.
GoldWallpaper t1_itpoej1 wrote
Do you own a massive data center? If not, this has no application for you.
RockItGuyDC t1_itprxw0 wrote
>I'm sad that redditors in this sub seem to not understand even a little what this article is saying,
First, fuck off with your condescending tone. Fuck right off with it.
Second, what did they transmit from point A to point B? If it's actual data, then in order to confirm that what you think you sent from point A to point B is what actually got sent, without appreciable loss, you would need to check the data. Via a checksum or something. I'm asking how that's done.
If they didn't do that, then I don't see how one can claim any data was sent. It sounds to me like they sent noise at that point.
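For a conventional transfer, the check is straightforward; something like this minimal sketch (a generic approach, not a claim about how the researchers validated their link):

```python
import hashlib
import os

def digest(data: bytes) -> str:
    """Hex digest used to compare the sent and received payloads."""
    return hashlib.sha256(data).hexdigest()

payload = os.urandom(1024 * 1024)   # stand-in for the transmitted data
received = payload                  # in a real test: bytes read at point B

# The transfer only counts if the digests match end to end.
assert digest(received) == digest(payload), "corruption detected"
print("checksum OK:", digest(payload)[:16], "...")
```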
thepoprock t1_itq5a1p wrote
MICROCOMBS! The bees have done it again!!
DangerStranger138 OP t1_itq923y wrote
Bee the light you wanna shine on the world - Bee Arthur
almightySapling t1_itr8pp0 wrote
By no means do I understand the details, but I have to imagine they are only sending a very small amount of data, such that the chip itself is able to locally store the information and release it at a speed the next layer of hardware is capable of handling.
RockItGuyDC t1_itraqv5 wrote
Thanks for your insight. That would make sense to me, and perhaps then they simply sent that data many multiples of times in that short span to add up to the petabytes they're quoting. Or maybe they simply sent a tone over each of the channels and used that as a proof of concept that they could have sent those petabytes.
In any case, I'm still curious about what information actually was transmitted.
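For what it's worth, high-speed optical demos are typically validated by sending known pseudo-random bit sequences (PRBS) and counting bit errors at the receiver, rather than by shipping real files. A toy sketch of that idea (the PRBS-7 polynomial is standard; the flip probability here is made up for illustration):

```python
import random

def prbs7(n_bits: int, seed: int = 0x7F):
    """PRBS-7 test pattern from the standard x^7 + x^6 + 1 LFSR."""
    state = seed & 0x7F
    for _ in range(n_bits):
        bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | bit) & 0x7F
        yield bit

sent = list(prbs7(100_000))
# Simulate a slightly noisy channel by flipping bits (illustrative only).
received = [b ^ (random.random() < 1e-4) for b in sent]

errors = sum(s != r for s, r in zip(sent, received))
print(f"BER = {errors / len(sent):.2e}")
# The receiver regenerates the same PRBS locally, so it knows exactly
# what *should* have arrived and can count every mismatched bit.
```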
McDonaldsMapping t1_itreti5 wrote
Excited for 20 years from now where people use this to grind on League of Legends
wrt-wtf- t1_its7ds0 wrote
Good, closer…
DangerStranger138 OP t1_itm5vck wrote
CLIFFNOTES FROM ARTICLE
>A team of scientists from universities in Denmark, Sweden, and Japan has managed to transfer over a petabit per second through a single chip. That's over one million gigabits of data per second over a fibre optic cable, or basically the entire internet's worth of traffic.
>
>The researchers—A. A. Jørgensen, D. Kong, L. K. Oxenløwe—and their team successfully showed a data transmission of 1.84 petabits per second over a 7.9km fibre cable using just a single chip. That's not quite as fast as some other alternatives with larger, bulkier systems, which have reached up to 10.66 petabits per second, but the key here is scale: the proposed system is very compact.
>
>By splitting a data stream into 37 sections, one for each core of a fibre optic cable, and then further splitting each of those streams into 223 channels, the researchers were able to remove a great deal of interference that slows down optical systems and therefore deliver an internet's worth of data transmission using a single chip.
>
>The researchers also theorise that such a system could support speeds of up to 100 petabits-per-second in massively parallel systems.
>
>Essentially, high-speed data transmission that often requires a fibre optic cable and bulky equipment is now being miniaturised into a smaller on-chip package. Instead of multiple lasers in parallel, which come with their own set of challenges, it's possible to shrink a good deal of this equipment to the silicon level. And with that even remove some of the difficulties in sending massive data packages long distances and at high speeds.
>
>A big part of these new breakthroughs is microcombs, which are a way of generating constant and measurable frequencies of light.
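The aggregate figure decomposes neatly if you take the article's own numbers at face value (simple arithmetic, no assumptions beyond the quoted figures):

```python
# How 1.84 Pbit/s breaks down across the parallel lanes described above.
TOTAL_BITS_S = 1.84e15   # demonstrated aggregate, bits per second
CORES = 37               # spatial cores in the fibre
CHANNELS = 223           # frequency channels per core (from the microcomb)

lanes = CORES * CHANNELS
print(f"{lanes} parallel lanes")                            # 8251
print(f"~{TOTAL_BITS_S / lanes / 1e9:.0f} Gbit/s per lane") # ~223 Gbit/s
# Each individual lane runs at an ordinary transceiver-class rate; the
# headline number comes from running thousands of them off one chip.
```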