jormungandrsjig

jormungandrsjig t1_jcknp7i wrote

> Following up on point A, let’s say hypothetically the reactor does have a meltdown and we have another Chernobyl disaster. How would we suppress a reactor meltdown were it to happen in space, away from the equipment and resources we have down here on Earth? How would it affect future missions to the Moon?

Use an oversized spatula and flip it off the surface into interstellar space.


jormungandrsjig OP t1_j7alpz6 wrote

> New research published in Nature Physics by collaborating scientists from Stanford University in the U.S. and University College Dublin (UCD) in Ireland has shown that a novel type of highly specialized analog computer, whose circuits feature quantum components, can solve problems from the cutting edge of quantum physics that were previously beyond reach. When scaled up, such devices may be able to shed light on some of the most important unsolved problems in physics.


jormungandrsjig OP t1_j2c5jpu wrote

We now have five spacecraft that have either reached the edges of our solar system or are fast approaching it: Pioneer 10, Pioneer 11, Voyager 1, Voyager 2 and New Horizons.

From close fly-bys of the outer planets to exploring humans' furthest reach in space, these two spacecraft have contributed immensely to astronomers' understanding of the solar system.

Now, the spacecraft will provide better-than-ever measurements of the background of light and cosmic rays in space, trace the distribution of dust throughout our solar system, and obtain crucial information on the sun's influence, complementary to the Voyagers.


jormungandrsjig OP t1_j22tk5e wrote

There's a lot of work to be done, and if we can somehow solve value pluralism for AI, that would be exciting. We could think of it this way: AI shouldn't suggest that humans do dangerous things, it shouldn't generate statements that are racist or sexist, and when somebody says the Holocaust never happened, it shouldn't agree. And yet there have been incidents such as the Tay bot. So I think we have a long way to go.


jormungandrsjig OP t1_j1rz7qn wrote

Until now, national governments have been slow to adopt this cutting-edge technology. But in 2023, governments will finally start using AI and big data to tackle some of society’s biggest problems. Done right, and with the proper privacy protections in place, such projects can generate a trove of data that is itself a competitive asset, helping research and innovation flourish. Just consider Biobank, one of the most important government-led biomedical initiatives worldwide. The project has produced a public database with genetic information on more than half a million people. To date, it has been accessed by nearly 30,000 researchers from 86 countries, helping AI and biotech startups create new drugs and therapeutics.


jormungandrsjig t1_ix3kh0v wrote

Reply to comment by ttkciar in is linkedin dying? by diogo_ao

> As a LinkedIn user since 2007, I can confidently say that it's become more annoying than ever. It didn't use to be this way. I don't even go there anymore.
>
> If it isn't dying, maybe it should. It's no longer useful.

I haven't logged in since 2017. I was surprised my account wasn't deleted for inactivity. Most of the people I had on there are senior managers and above. It seems like an echo chamber now where people just reshare links from r/technews. It feels pointless, and I probably won't log in again for another five years.


jormungandrsjig t1_isanycr wrote

> It’s amazing we live in a world where automation is a threat to livelihoods rather than an instrument to allow people to pursue more important callings.

Retraining isn't an option for many people today, given the cost of post-secondary education and the absolute need to find replacement work as soon as possible to keep the wolves at bay. Job losses in this sector will hit the poor hard, and not many of them have the credit to borrow enough to go back to school full time and cover living expenses.
