terath

terath t1_j9x6v7k wrote

My point is that you don't need AI to spread propaganda: you can just hire a hundred people to do it manually, and that's been going on for a few years now. AI makes it cheaper, yes, but banning or restricting AI in no way fixes the problem.

People are very enamoured with AI but seem to ignore the many existing technological tools already being used to disrupt things today.

0

terath t1_j9u4o7b wrote

If we're getting philosophical: in a weird way, if we ever do manage to build human-like AI, and I personally don't believe we're at all close yet, that AI may well be our legacy. Long after we've all died, that AI could potentially still survive in space or in environments we can't.

Even if we somehow survive for millennia, it will always be near infeasible for us to travel between the stars. But it would be pretty easy for an AI that can just put itself in sleep mode for the time it takes to move between systems.

If such a thing happens, I just hope we don't truly build them in our image. The universe doesn't need such an aggressive and illogical species spreading. It deserves something far better.

1

terath t1_j8oemyz wrote

Another key phrase to use with Google Scholar is "online learning": this is where you have a stream of new examples and you update a model one example at a time. Usually you can use the model for inference at any point in this process, and some algorithms in this area are designed to be a bit more aggressive, or at least to control the update rates so they adapt more quickly or more slowly to new data.
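If it helps, here's a minimal sketch of that setup using scikit-learn's `partial_fit`; the synthetic stream and the learning-rate choice are just illustrative, not a recommendation:

```python
# Minimal online-learning sketch: a linear model updated one example
# at a time from a (simulated) stream via partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# learning_rate / eta0 control how aggressively the model adapts
# to new data (the "update rate" mentioned above).
model = SGDClassifier(loss="log_loss", learning_rate="constant", eta0=0.01)
classes = np.array([0, 1])  # must be declared on the first partial_fit call

for step in range(1000):
    # Simulated stream: one labelled example arrives at a time.
    x = rng.normal(size=(1, 5))
    y = np.array([int(x[0, 0] + x[0, 1] > 0)])
    model.partial_fit(x, y, classes=classes)

    # The model is usable for inference at any point in the stream.
    if step % 250 == 0:
        print(step, model.predict(rng.normal(size=(1, 5))))
```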

21

terath t1_j5l8t4k wrote

Oh, I see what you mean. I remember that there were some character-level language models, but they fell out of favour to subword models, as I think the accuracy difference wasn't enough to justify the extra compute required at the character level.

Reviewing the fastText approach, they still end up hashing the character n-grams rather than training an embedding for each, which could introduce the same sorts of inconsistencies that you're observing. That said, the final fastText embeddings are already the sum of the character n-gram embeddings, so I'm not clear on how your approach differs from just using the final fastText embeddings.
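For concreteness, here's a rough sketch of that hashing scheme. The bucket count, dimension, n-gram range, and the use of Python's built-in `hash()` are stand-ins (fastText uses its own FNV-style hash and much larger tables), but the mechanics are the same:

```python
# fastText-style word vectors: character n-grams are hashed into a
# fixed number of buckets (no per-n-gram trained embedding), and the
# word vector is the sum of the bucket vectors.
import numpy as np

NUM_BUCKETS = 100_000  # illustrative; fastText defaults to 2M
DIM = 50               # illustrative dimension
rng = np.random.default_rng(0)
bucket_vectors = rng.normal(scale=0.1, size=(NUM_BUCKETS, DIM))

def char_ngrams(word, n_min=3, n_max=6):
    # fastText wraps words in boundary markers before extracting n-grams.
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word):
    # Hash collisions mean distinct n-grams can share a bucket, which
    # is one source of the inconsistencies mentioned above.
    idxs = [hash(g) % NUM_BUCKETS for g in char_ngrams(word)]
    return bucket_vectors[idxs].sum(axis=0)

print(word_vector("where").shape)  # (50,)
```

The point being: since the word vector is already this sum, recombining the component embeddings yourself should reproduce the final fastText embedding, collisions and all.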

3