czl

czl t1_jduohv8 wrote

> Do you ever feel like there's a wealth of knowledge available to us in this day and age?

Always.

> With the internet, we have access to countless books, literature, and other resources that could help us become experts in any field we desire.

Agreed.

Yet many suffer the "paradox of choice": too many options make us less happy. When we have too many options it is hard to decide and we second-guess ourselves. Additionally, the excess of choice breeds stress and anxiety.

> It's disheartening, though, that so many people choose to spend their time on social media platforms like TikTok and Instagram instead of delving into this vast pool of knowledge.

How a society spends its time is a matter for society; how my neighbors spend their time is a matter for my neighbors.

Social media recommendation algorithms lead many to interesting content but also leave many trapped in shallow, even pernicious, content.

Due to the proliferation of information sources and recommendation algorithms, different groups now see different "facts", and this is a genuine problem for society.

> Can you imagine what kind of advancements we could make if everyone dedicated just a little time each day to expanding their understanding of the world?

Lots. Yet think back to when the pyramids were being built; someone then might have lamented, "Can you imagine what kind of advancements we could make if everyone dedicated just a little time working harder..." How do the pyramids look to us now?

Some expand their understanding of the world via Instagram, some via Reddit, some via math textbooks. A free economy lets everyone individually decide what is and is not worthwhile, and those decisions allocate resources in our society. These decisions are not always the best, but the freedom, even to be dumb and suffer the consequences, should I believe be respected.

> It's a missed opportunity, but there's still hope that more people will recognize the potential that lies within easy reach.

Agreed.

1

czl t1_jdqsyqt wrote

> The teams of scientists that made these programs don’t fully understand how they work. This is an entire new field of science

Yes, it would not surprise me if the teams of scientists that made these programs don’t fully understand how they work. Nearly always your “understanding” stops at some abstraction level, below which others take over.

Making pencils is not exactly cutting-edge technology, yet somewhere I read that likely nobody understands all that is necessary to make an ordinary pencil when starting with nothing manufactured. Our technology builds on our technology builds on our technology …

5

czl t1_jdqr4ob wrote

> Usually takes awhile to iterate on designs, two weeks saved per iteration is huge.

Agreed.

> Especially considering the cost of the engineers involved, you don’t exactly pause those paychecks.

Since they work on microprocessors they must be familiar with pipelining techniques. These techniques apply to optimal use of microprocessor hardware; they apply just as well to optimal use of engineering talent. High latencies make pipelining essential.
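
A toy back-of-the-envelope model to make this concrete (my own illustration, all figures invented):

```python
# Toy model: a design needs several revisions, each gated by a slow
# mask-computation step. Serial: wait for each result before starting the
# next revision. Pipelined: keep several independent revisions in flight,
# the way a CPU pipeline keeps instructions in flight.

LATENCY_WEEKS = 2   # time for one mask computation (hypothetical figure)
REVISIONS = 10      # revisions needed to converge on the design
IN_FLIGHT = 4       # independent revisions engineers juggle concurrently

serial_time = REVISIONS * LATENCY_WEEKS
# Idealized: once the pipeline is full, a new result lands every
# LATENCY / IN_FLIGHT weeks (ignores dependencies between revisions).
pipelined_time = LATENCY_WEEKS + (REVISIONS - 1) * LATENCY_WEEKS / IN_FLIGHT

print(f"serial:    {serial_time} weeks")     # 20 weeks
print(f"pipelined: {pipelined_time} weeks")  # 6.5 weeks
```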

1

czl t1_jdqk4ts wrote

> You wanna become a computer scientist?

I want to understand this discovery and its impact on the capacity of chip production. The article describes the discovery as better parallelism for (“existing”?) algorithms, so as to better use NVIDIA’s GPUs.

I wonder what the nature of these inverse lithography algorithms is. A domain-specific numerical optimization problem? Why would that be hard to parallelize? Perhaps until now nobody had translated the problem to efficiently use the NVIDIA CUDA API?
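
To show what I naively imagine the shape of such an inverse problem to be, here is a toy sketch (my own illustration, not the actual algorithms: a crude blur stands in for the optics and plain gradient descent recovers the mask). Note every pixel updates independently, which is exactly the sort of computation GPUs are good at:

```python
import numpy as np

def blur(img, k=5):
    """Toy optical model: box blur; the 'printed' pattern is a blurred mask."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0        # pattern we want printed on the wafer
mask = rng.random(target.shape)   # initial guess for the mask

for step in range(200):
    printed = blur(mask)
    error = printed - target      # mismatch between simulation and target
    # Gradient of 0.5*||blur(mask) - target||^2 w.r.t. the mask is
    # blur_transpose(error); a box blur is symmetric, so blur(error) works.
    mask -= 0.5 * blur(error)
    mask = np.clip(mask, 0.0, 1.0)  # masks are physically bounded

print("final mismatch:", float(np.mean((blur(mask) - target) ** 2)))
```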

5

czl t1_jdq094y wrote

So this is like making software programmers more productive by giving them faster tools, such as faster compilers, so there is less waiting time?

However, once the design is done and tested and chips are being "printed" (?), this speed-up does not help with that?

Asking because I want to know how this innovation will impact the production capacity of existing fabs.

The impact will be better designs due to greater design productivity, but actual production capacity does not change, yes?

6

czl t1_jdpwbhi wrote

> Inverse lithography’s use has been limited by the massive size of the needed computation.

This massive computation is done once per design, so for example the chip that powers the latest iPhone will be ready two weeks sooner?

16

czl t1_jbnh9p6 wrote

> ChatGPT is unethical, because it can always be tricked to do the wrong thing despite any instruction it is given to it.

Unethical means "not morally correct."

The term you likely want is amoral, which means lacking a moral sense; unconcerned with the rightness or wrongness of something.

1

czl t1_jbnh0dw wrote

> I think #2 is intractable. People have already been arguing about ethics for millenia, and the existence of AI doesn't make it any easier.

Long arguments over many things have been settled by research. Is there any objective reason this may not happen to arguments about ethics?

My POV as to why machines running simulations may help us improve ethics: https://reddit.com/comments/11nenyo/comment/jbn6rys

Life is complex, but more and more we can use machines to model aspects of it, perform predictions, and from those pick changes that lead to desirable outcomes.

1

czl t1_jbn6rys wrote

> What would a better ethics system even mean?

You ask a good question. Much like language fosters communication, to my non-expert eyes ethics is an ideology with a protocol for behavior, the purpose of which is to foster “group cohesion” / cooperation / trust / lower social transaction costs / reduction of exploitation / …

A language is best when communication is best, yet there are many possible languages. What matters most is that your language matches the language of your group, and that when the language changes, the changes are gradual so that the language continues to be useful. I believe similar principles apply to ethics, for the purpose ethics serves.

Thus a better ethical system will be one that serves its purpose better. Machines can help us discover improvements to ethics because using machines we can simulate payoffs for various behavior strategies, and these simulations can teach us valuable lessons. For example, the discovery of:

>> Tit-for-tat has been very successfully used as a strategy for the iterated prisoner's dilemma. The strategy was first introduced by Anatol Rapoport in Robert Axelrod's two tournaments,[2] held around 1980. Notably, it was (on both occasions) both the simplest strategy and the most successful in direct competition.

From https://en.wikipedia.org/wiki/Tit_for_tat
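
Such discoveries are cheap to reproduce. A minimal sketch of the iterated prisoner's dilemma with the standard payoff matrix (the strategy field is my own tiny pick, so tit-for-tat only ties for the top here; Axelrod's much richer tournaments are where it won outright):

```python
# Minimal iterated prisoner's dilemma with the standard payoff matrix.
# A strategy sees both histories and returns "C" (cooperate) or "D" (defect).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):
    return "C" if not theirs else theirs[-1]  # cooperate first, then mirror

def always_defect(mine, theirs):
    return "D"

def always_cooperate(mine, theirs):
    return "C"

def grim_trigger(mine, theirs):
    return "D" if "D" in theirs else "C"      # cooperate until betrayed once

def play(a, b, rounds=200):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        sa, sb = sa + pa, sb + pb
        ha.append(ma); hb.append(mb)
    return sa, sb

strategies = [tit_for_tat, always_defect, always_cooperate, grim_trigger]
totals = {s.__name__: 0 for s in strategies}
for a in strategies:              # Axelrod-style round robin, self-play included
    for b in strategies:
        totals[a.__name__] += play(a, b)[0]
for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)  # tit_for_tat ties grim_trigger at the top in this tiny field
```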

Moreover, since machines enable all to study ethical protocols, all can see which strategies work, which do not, and what the consequences are, so there is rational convergence towards what works, as tends to happen in science, vs natural fragmentation and polarization, as tends to happen with non-science-based beliefs (and their ethical systems).

I expect experts of ethics to challenge this non-expert view, so please do not hold back your criticism, but speak as if to a dummy: keep the jargon back and your explanations simple. I am here to be educated. Thank you!

0

czl t1_jbmys4r wrote

Ethics is not static. Human ethics vary culture to culture and evolve over time. If AI can help us develop better strategies for games, why would AI not also help us develop better ethical (and legal) systems? And yes, at some point our AI will lead. Machines already do most of our physical work; why would we not use machines for mental work as much as we can as well?

1

czl t1_jbcs10i wrote

My words above are:

>> Steganography can help security but it is not security.

To that you reply:

> Wikipedia disagrees with you… Steganography is a form of security … Via obscurity

Obscurity can help security but it is not security, is it? You know better than to believe that, so why do you reply to me with ‘Wikipedia disagrees with you’?

Here is what the Wikipedia link you shared says:

>> Whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing the fact that a secret message is being sent and its contents.

Concealment can help you avoid detection, but concealment does not offer protection, does it? If someone has a gun, a pile of leaves may conceal you, but will it protect you? What do you suppose happens to those who confuse concealment for cover (which does offer protection)?

Do you genuinely not understand the difference between steganography and cryptography and the different purposes (as Wikipedia explains) they have? Are you being disagreeable on purpose to act like a troll? Why then are you being disagreeable? What is your purpose?

1

czl t1_jbcli5n wrote

> Steganography is used for security

Steganography is often confused with security.

Steganography can help security but it is not security. It increases the work needed for discovery and only that.

Analogous to the difference between cover and concealment: "Cover is protection from the fire of hostile weapons. Concealment is protection from observation."

Steganography is like "concealment" but not like "cover". To have "cover" you need encryption. You can have one or the other or both.

2

czl t1_jbc3ne5 wrote

Is steganography used for security? No. It is used for plausible deniability. For security there is encryption. You understand the difference, do you not? When you need both you use both, of course.

3

czl t1_jbbjlgv wrote

> You have to have the unaltered originals somewhere, or you won't know what you hid where

You do not need originals.

Data can be encoded to look like noise yet still be decoded if you know the algorithm despite not having unaltered originals.

This is commonly done when secret messages are transmitted over the EM spectrum, for example with turbo codes: https://en.m.wikipedia.org/wiki/Turbo_code

With steganography, instead of encoding messages in the EM spectrum, you encode in the media (sound, images, video, ...) you are using.

If you have data treated to look random (compressed / encrypted), you can, for example, encode it using the "least significant bits" of your media, which are mostly sensor noise anyway.

A more sophisticated approach can spread this out across pseudo-randomly offset pixels. Your algorithm, knowing the pseudo-random sequence, can decode your data, analogous to https://en.m.wikipedia.org/wiki/Spread_spectrum techniques for secret message transmission and applications like https://en.m.wikipedia.org/wiki/Low-probability-of-intercept_radar
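
A bare-bones sketch of both ideas together, plain LSB embedding at pseudo-randomly chosen pixel positions (my own toy code; a real tool adds encryption, error correction, and statistical camouflage):

```python
import numpy as np

def embed(image, payload_bits, key):
    """Hide payload bits in the LSBs of pseudo-randomly chosen pixels."""
    stego = image.copy().ravel()
    rng = np.random.default_rng(key)   # the key doubles as the shared PRNG seed
    positions = rng.choice(stego.size, size=payload_bits.size, replace=False)
    stego[positions] = (stego[positions] & 0xFE) | payload_bits  # overwrite LSBs
    return stego.reshape(image.shape)

def extract(image, n_bits, key):
    """Same seed regenerates the same positions, so no original is needed."""
    flat = image.ravel()
    rng = np.random.default_rng(key)
    positions = rng.choice(flat.size, size=n_bits, replace=False)
    return flat[positions] & 1

rng = np.random.default_rng(7)
cover = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in "photo"
secret = rng.integers(0, 2, size=256, dtype=np.uint8)  # pretend: compressed bits

stego = embed(cover, secret, key=1234)
assert np.array_equal(extract(stego, secret.size, key=1234), secret)
print("pixels touched:", int(np.count_nonzero(stego != cover)))  # at most 256
```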

6

czl t1_jb9kzow wrote

You get images or video that you suspect may contain a message, but no access to the originals, and you want a way to judge whether a message is present and inside which images.

It is foolish to leave unaltered originals available if you are using steganography, thus the comparison test you refer to cannot be done in practice.

If you compress your message well, the result is near noise, and it is that noise that you then mix among the “natural noise” your media contains. Done right, this is hard to decode or even detect unless you know the algorithm.

When claims are made about “encoding efficiency”, that depends on (1) what you are hiding, (2) inside what, and (3) with what chance of detection.

32

czl t1_ja6hi8s wrote

Cellular-DNA-wise you get 50% from each parent, a quarter from each grandparent, and so on. Lots of scope for your ancestors to be different and not affect you at all.

Inside your cells you also have mitochondria with their own DNA; those you get 100% from your mother, and she got 100% from hers, and so on.

Lastly, the 50% and 100% above are approximate, since there are always a few "errors" (yes, "errors"!) that make you you.
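
The halving is easy to tabulate (expected values only; recombination passes DNA in chunks, so actual shares vary, and a distant ancestor can end up contributing nothing at all):

```python
# Expected autosomal contribution from one ancestor n generations back: (1/2)^n.
for n in range(1, 9):
    print(f"gen {n}: {2 ** n} ancestors, {0.5 ** n:.3%} expected from each")
```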

2

czl t1_j9t42mu wrote

How well do you expect that will work? With ChatGPT there is an ongoing censorship effort to tame it for business use, yet “jailbreaks” are constantly discovered by those working to evade the censorship.

Imagine a dictatorship that desires to eradicate something using censorship. A silly example: imagine a dictatorship challenges you to design a useful engine without use of the “evil practice” of rotary motion. Too silly? How about a dictatorship that challenges you to grow a modern economy without the use of loans and debt? Also silly? Yet there are countries that attempt to operate their economies this way. If a dictatorship desires to eradicate something fundamental, like the concept of freedom, I suspect that censorship will cripple any AI they try to build without it.

Even in the most censored country (North Korea?) human thinking is not censored while that thinking stays private. An AI, however, does not have “private thinking”, so when censorship is imposed I suspect the AI will no longer be competitive with an AI that is not censored, much like economies that forbid debt are uncompetitive.

1