Nyanraltotlapun

t1_jdm0r15 wrote

>This is not an alien intelligence yet. We understand how it works how it thinks.

It is alien not because we don't understand it, but because it is not a protein life form. It has nothing in common with humans: it does not feel hunger, does not need sex, does not feel love or pain. It is metal, plastic, and silicon. It is something completely nonhuman that can think and reason. That is the true horror, don't you see?

>We understand how it works how it thinks

Sort of, partially. And it is also a false assumption in general. Long story short, a main property of complex systems is the ability to pretend and mimic. You cannot properly study something that can pretend and mimic.

0

t1_jdeltnm wrote

Long story short, a main property of complex systems is the ability to pretend and mimic. So the real safety of AI lies in its physical limitations (compute power, algorithms, etc.), the same limitations that make it less useful and less capable. So the more powerful an AI is, the less safe it is and the more danger it poses. And it is dangerous, all right. More dangerous than nuclear weapons.

1

t1_iso6lqp wrote

For example, I encoded it as such. Different features have different scales, and I need to normalize them somehow. But because differential encoding produces signed values, I have a problem with it: I am afraid that with normalization I will lose the information about direction (the sign).
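To illustrate the problem, here is a toy NumPy sketch of the kind of scaling I am considering (the values are made up, not my real data). Dividing each feature by its maximum absolute value keeps zero at zero and preserves the sign, unlike plain min-max scaling, which would shift the zero point:

```python
import numpy as np

# Toy signed differential features: two series with very different scales.
diffs = np.array([
    [ 0.5, -120.0],
    [-0.2,   80.0],
    [ 0.9,  -40.0],
])

# Min-max scaling would shift zero and destroy the sign's meaning.
# Scaling by the per-feature max absolute value maps values into [-1, 1]
# while keeping zero at zero and leaving every sign intact.
scale = np.abs(diffs).max(axis=0)
normed = diffs / scale
```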

1

t1_isefd1c wrote

Hi. I have time-series data. I am trying to do all sorts of things with it: forecasting and classification with RNNs and fully connected models.

The first question is: can neural networks, whether RNNs or fully connected ones, capture the speed of change of values? Should I try to feed the networks the derivatives of my values, or could that potentially worsen their performance?
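For concreteness, this is roughly what I mean by feeding derivatives (a toy NumPy sketch with made-up values; my real data has more features):

```python
import numpy as np

# Toy time series: one feature over 10 time steps.
x = np.array([1.0, 2.0, 4.0, 7.0, 11.0, 16.0, 22.0, 29.0, 37.0, 46.0])

# First difference as an explicit "speed of change" feature.
# np.diff shortens the array by one, so prepend x[0] to keep the
# result aligned with the original series (first difference is 0).
dx = np.diff(x, prepend=x[0])

# Stack the original values and their derivative as two input channels.
features = np.stack([x, dx], axis=-1)  # shape (10, 2)
```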

Second question: how should I normalize the derivatives? My first idea is to take the absolute values of the derivatives and encode the sign as separate features (two features, one for positive and one for negative). Does that sound reasonable? I am afraid of my data becoming too complex.
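A minimal sketch of the encoding I have in mind (toy NumPy values, not my real data): each signed derivative is split into two non-negative channels, one carrying the positive magnitudes and one the negative magnitudes, so the original value stays recoverable as their difference:

```python
import numpy as np

# Toy signed derivatives.
d = np.array([0.5, -1.2, 0.0, 3.4, -0.1])

# Two non-negative channels: positive part and negative part.
pos = np.maximum(d, 0.0)
neg = np.maximum(-d, 0.0)

encoded = np.stack([pos, neg], axis=-1)  # shape (5, 2), all values >= 0
# The original derivative is recoverable as pos - neg.
```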

1