ArnoF7
ArnoF7 t1_je510x4 wrote
Reply to comment by Level3Kobold in [OC] Research Funding vs Human Development: a country's R&D spending correlates with its societal well-being by latinometrics
Yes, it’s complicated. If you are a petro state, or rich in some natural resource, then you don’t have to spend as much on R&D to live a cushy life.
It can also be hard to reap the rewards of R&D investment. A lot of high-tech industries go through several consolidations, and only the very top players remain and control the market. For example semiconductors, aerospace, etc. Being okay-ish doesn’t bring much return in many industries.
ArnoF7 t1_je4zrch wrote
Reply to comment by Anonymous_linux in How racing drones are used as improvised missiles in Ukraine - They are light, fast and cheap by speckz
As a person who grew up in a post-communist country, the people in the West who keep commenting about how communism works, and how if it weren’t for xxx it would have worked, really depress me.
Like, can we just move on, please? It has already caused some of the most traumatic moments in the entire history of this land. Can we please just move on from this? I don’t want any risk that my future kids have to live through what my grandparents lived through just to keep running this mostly baseless ideological experiment.
ArnoF7 t1_je0dzqg wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Funnily enough, I actually found GPT-4 far worse at coding than I expected, especially after seeing its impressive performance on other exams. I guess it’s still progress in terms of LLMs for coding, just a little underwhelming compared to the other standardized tests it aces? GPT-4’s performance on Codeforces is borderline abhorrent.
And now you are telling me there is data leakage, so the actual performance would be even worse than what’s on paper???
ArnoF7 t1_j9sbjc8 wrote
Reply to comment by LetterRip in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Yes, I am aware of the paper you linked, although I can’t say I am super familiar with the details.
This is very cool and solves some of the problems in robotics, but not a whole lot. Not discrediting the authors (especially Fei Xia, whom I really admire as a robotics researcher, and of course Sergey Levine, who is probably my favorite), but the idea of fusing NLP and robotics to create a robot that can understand commands and serve you is not super new. Even 10+ years ago there was this famous video from the ROS developers at Open Robotics (at the time it was still Willow Garage, IIRC) in which they tell the robot to grab a beer and the robot navigates the entire office and fetches it from the kitchen. Note that this is not the innovation these papers claim (these papers are actually investigating a possibility rather than solving a problem), but I assume this is what everyone assumes to be the bottleneck of service robots, which in reality it isn’t.
ArnoF7 t1_j9rzhcc wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I must say I am not very involved with the alignment community and do not have much exposure to their discussions, so I may be missing some ideas, but as a researcher in robotics I am not super worried about some of his concerns just from reading his post.
Currently there is no clear roadmap in the robotics community toward an agent that can autonomously and robustly interact with the unstructured physical world, even in a relatively specialized environment. Robotics is still very far from its ChatGPT moment, and I think current socioeconomic conditions are rather adversarial to robotics R&D compared to other domains. So such an agent would have very limited physical agency.
If you assume current auto-regressive LLMs can somehow lead to a super-intelligent agent that just figures out the robotics/physical interaction problem by itself, then sure, you could worry about it. But if we assume an omnipotent oracle, then we could worry about anything. It’s not that different from worrying about a scenario in which the laws of physics just change in the next instant and all biological creatures explode under the new laws. I mean, it’s possible, just not falsifiable, so I wouldn’t worry too much about it.
Btw, I want to stress that I think most of EY’s chains of reasoning that I have had the chance to read are logical. But his assumptions are usually extremely powerful, and when you have such powerful assumptions, a lot of things become possible.
Also, I wouldn’t dismiss alignment research in general like many ML researchers do, precisely because I work with physical robots. There are many moments during my experiments when I think to myself, “this robot system could be a very efficient killing machine if people really tried,” or “this system could make many people lose their jobs if it scaled economically.” So yeah, in general I think some “alignment” research has its merits. Maybe we should start by addressing the problems that have already happened or are very imminent.
ArnoF7 t1_j8azbzj wrote
Reply to [D] Quality of posts in this sub going down by MurlocXYZ
Discussion in this subreddit has always been a bit hit and miss. After all, Reddit as a community has almost no gatekeeping. While this can be a good thing, there are of course downsides to it.
If you look at this post about batch norm, you see that there are people who bring up interesting insights, and a good chunk of people who clearly have never even read the paper carefully. And that post is from 5 years ago.
ArnoF7 t1_j8a606r wrote
Reply to comment by konrradozuse in [D] Can Google sue OpenAI for using the Transformer in their products? by t0t0t4t4
Not every innovation can be materialized by just a handful of people like a software app, and not everyone involved in the process is your buddy who can be assumed to have goodwill.
In any hardware-related industry, you need corporations to mass produce your innovations. If there were no patent system, the moment the manufacturer figured out how to produce it, the innovation would no longer be yours. In fact, this is one of the major reasons there is this whole US-China trade war in the first place. Basically, local Chinese contract manufacturers have access to the manufacturing procedures of the foreign companies who invent the products, so they just directly copy them and undercut their customers.
Patents also protect the interests of individual researchers who do R&D for corporations. But that’s another topic.
ArnoF7 t1_j8a296a wrote
Reply to comment by konrradozuse in [D] Can Google sue OpenAI for using the Transformer in their products? by t0t0t4t4
If there were no patent system, every innovation by any individual would be copied and mass produced by big corporations the day it was invented.
Imagine you spend a few years designing a new motor. With no patent system, Toyota or Tesla would mass produce it the moment they understood how it works. And since they are far more resourceful, you would never be able to produce anything that competes with them in quality or scale. At least now, with a patent system, they have to pay you a little to use your invention.
You may not care whether you can benefit from your own innovation, but I still think a system that protects individual ingenuity is somewhat useful.
ArnoF7 t1_j5omrfh wrote
Reply to comment by FastestLearner in [D] Multiple Different GPUs? by Maxerature
Great insight. Appreciate it.
ArnoF7 t1_j5lknua wrote
Reply to comment by FastestLearner in [D] Multiple Different GPUs? by Maxerature
I have a similar suspicion, that training will be bottlenecked by the slow 1080. But I am wondering if it’s possible to treat the 1080 as a pure VRAM extension?
Although it’s possible that the time spent transferring between the different memories would make the gain from the extra VRAM pointless.
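In case it helps, here’s a minimal sketch of what that “VRAM extension” could look like in PyTorch: naive model parallelism, where the early layers live on the fast GPU and the later layers on the 1080. The layer sizes and device indices here are purely illustrative (and it falls back to CPU if two GPUs aren’t available), so treat it as a sketch rather than a benchmark:

```python
import torch
import torch.nn as nn

# Ideally dev0 is the fast card and dev1 is the 1080; fall back to CPU
# so the sketch runs anywhere. Device indices are assumptions.
dev0 = torch.device("cuda:0") if torch.cuda.device_count() >= 1 else torch.device("cpu")
dev1 = torch.device("cuda:1") if torch.cuda.device_count() >= 2 else dev0

class SplitModel(nn.Module):
    """Naive model parallelism: first half on dev0, second half on dev1."""
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(128, 256), nn.ReLU()).to(dev0)
        self.part2 = nn.Linear(256, 10).to(dev1)

    def forward(self, x):
        x = self.part1(x.to(dev0))
        # The activations cross the bus (PCIe) here on every forward and
        # backward pass -- this transfer is exactly the overhead that can
        # cancel out the benefit of the extra VRAM.
        return self.part2(x.to(dev1))

model = SplitModel()
out = model(torch.randn(4, 128))
print(out.shape)
```

Note this only spreads the model’s parameters and activations across the cards; it doesn’t speed anything up, and the per-step device-to-device copy is why people usually only bother when the model genuinely doesn’t fit on one card.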
ArnoF7 t1_iz13so3 wrote
Reply to [D] Are ML platforms honestly useful or just money-making on software that's really free? by [deleted]
Azure is not an ML platform. I am not sure where you got the idea that it’s dedicated to ML, or are we talking about a completely different Azure?
As for why they aren’t free: that’s their business model. Software with the complexity of Kubernetes used to be premium software, but Google figured out a way to make it completely free and still afford to pay its developers handsome salaries. So yeah, technically speaking, you can make similar products free, as long as you can figure out a viable business model to support them in the long run.
ArnoF7 t1_ir4fhmm wrote
Reply to comment by BlueGuyBuff in Micron’s investing $100 billion to bring the country’s ‘largest semiconductor fabrication facility’ to New York by Avieshek
People not in the industry probably don’t know this, and that’s very understandable, but New York State has a lot going on in semiconductors at the moment. IBM has had a semiconductor research hub in upstate NY for a long time, and it’s leading the new US-Japan collaboration. Wolfspeed is building a very big SiC fab. GF is also expanding.
A lot of action is happening in upstate New York. It’s a good thing. Chip manufacturing is one of the few kinds of manufacturing that isn’t very sensitive to labor costs, making it a good fit for developed countries. And although the industry has its boom and bust cycles, in the long run demand is always going up.
ArnoF7 t1_je54egq wrote
Reply to comment by Level3Kobold in [OC] Research Funding vs Human Development: a country's R&D spending correlates with its societal well-being by latinometrics
I wouldn’t say no industry has a place for second place, but yeah, this is true for a good many of them.