artificial intelligence

Them (01)

Well, Sora doesn't destroy the world; LLMs are the ones with the potential. Here is one scenario: for the past two years there have been agentic frameworks for ChatGPT (AutoGPT, BabyAGI) which have allowed GPT to reprompt itself and follow through on sophisticated tasks. Assume GPT7 is put into this framework and can superhumanly do everything a human can. I can think of extreme destabilization, maybe perverse goals that are not human-aligned (this could lead to the extermination of the human race), or just techno-feudalism.
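
To make this concrete, here is a minimal sketch of the kind of reprompting loop those frameworks use (not AutoGPT's or BabyAGI's actual code; call_llm is a hypothetical stand-in for any chat-completion API): the model proposes a step, the result is fed back in as context, and the loop repeats until the model declares the task done.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real agent framework would call an LLM API here.
    return "DONE: example response"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Repeatedly re-prompt the model with its own progress until it reports DONE."""
    history: list[str] = []
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nProgress so far: {history}\nWhat is the next step?"
        step = call_llm(prompt)        # the model effectively prompts itself
        history.append(step)
        if step.startswith("DONE"):    # the model decides the task is finished
            break
    return history
```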

Another scenario: imagine open-source AI continues. Everyone now has the ability to create world-ending bioweapons in their garage or basement. When the model weights are publicly accessible, what happens next?

Me (01)

Assume GPT7 is put into this framework and can superhumanly do everything a human can. I can think of extreme destabilization, maybe perverse goals that are not human-aligned (this could lead to the extermination of the human race), or just techno-feudalism.

So we have to (1) assume it can superhumanly do everything a human can (this would be an insane leap for GPT7, since you'd be talking AGI, which some think might not even be possible), and (2) assume the people running the AI would either add no fail-safes or actively desire self-destruction. You've not given a single reason to think either of those assumptions is correct, especially considering assumption (1), AGI, might not even be possible.

Another scenario: imagine open-source AI continues. Everyone now has the ability to create world-ending bioweapons in their garage or basement. When the model weights are publicly accessible, what happens next?

Firstly, the idea that everyone would have access to the level of compute needed to run these models is absurd (since for world-ending bioweapons you'd likely need ASI). Secondly, the reason people aren't making world-ending bioweapons isn't a knowledge problem; there are countries that cannot successfully build bioweapons. Why do you think the thing stopping people from building bioweapons is knowledge? And more to the point, do you think we should start banning certain people from studying chemistry at university, since that could give them the knowledge to build bioweapons?

Them (02)

So we have to (1) assume it can superhumanly do everything a human can (this would be an insane leap for GPT7, since you'd be talking AGI, which some think might not even be possible), and (2) assume the people running the AI would either add no fail-safes or actively desire self-destruction. You've not given a single reason to think either of those assumptions is correct, especially considering assumption (1), AGI, might not even be possible.

  1. There is no theoretical reason that a machine can't do everything a human can. SOTA models are already better than average humans on many benchmarks, and they are attempting to exceed expert-level humans in certain realms of specialized knowledge. Yes, it's possible it could be 100+ years away, but I believe there is a >50% chance it comes within the next 20.

  2. Fail-safes and safety measures are disincentivized. AI labs are incentivized to progress as fast as possible. People have termed the current situation a race to the bottom, since the more time and resources you dedicate to safety, the further behind you fall compared to an unshackled, reckless competitor. This dynamic exists between US AI labs and also between the US and China. Slowing down or stopping is out of the question without multi-party coordination. As for incentives against doom-causing tech, I don't think they are strong enough. I think an AI lab would take a gamble of 99% extreme profit if the flipside was 1% doom; humanity would not, however (see the back-of-the-envelope sketch below).
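
As a back-of-the-envelope illustration of that asymmetry (entirely made-up numbers): the lab bears only a fraction of the cost of the doom outcome, so the same gamble can look positive in expectation to the lab and catastrophic for everyone else.

```python
# Entirely made-up numbers, purely to illustrate the incentive asymmetry.
P_DOOM = 0.01
PROFIT = 1e12          # hypothetical payoff to the lab if the gamble pays off
LAB_LOSS = 1e11        # what the lab itself stands to lose in the doom case
SOCIETAL_LOSS = 1e18   # stand-in for the cost of doom to humanity as a whole

ev_lab = (1 - P_DOOM) * PROFIT - P_DOOM * LAB_LOSS
ev_humanity = (1 - P_DOOM) * PROFIT - P_DOOM * SOCIETAL_LOSS

print(f"expected value to the lab:  {ev_lab:+.2e}")       # comes out positive
print(f"expected value to humanity: {ev_humanity:+.2e}")  # comes out hugely negative
```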

Firstly, the idea that everyone would have access to the level of compute needed to run these models is absurd (since for world-ending bioweapons you'd likely need ASI).

Firstly, running a model is cheaper than training it. Secondly, Jensen Huang noted GPUs were "25 times faster than five years ago." It's very possible that running ASI is doable locally at some point. But yes, I expect that labs will have much more control of the technology. My point is that the world is likely not stable if people are given superhuman intelligence at their fingertips.

Secondly, the reason people aren't making world-ending bioweapons isn't a knowledge problem; there are countries that cannot successfully build bioweapons. Why do you think the thing stopping people from building bioweapons is knowledge?

Bioweapon research is, in some cases, discouraged or banned because its potential to cause harm outweighs any safety benefit. I'm not sure if the ability to create bioweapons is constrained by knowledge alone; maybe it's tech + knowledge + personnel. It is inevitable that more and more actors become capable of building bioweapons as scientific knowledge progresses. The problem is that I expect Robotics + ASI to rapidly accelerate this trend.

And more to the point, do you think we should start banning certain people from studying chemistry at university, since that could give them the knowledge to build bioweapons?

I think we both agree there is a line where certain knowledge should not be taught due to its potential for harm. We would both agree this line is not drawn between high-school and university-level chemistry. This might be a non-issue, since an AI capable of making bioweapons would also be able to teach someone how to make one.

Me (02)

There is no theoretical reason that a machine can't do everything a human can.

A lot of prominent scientists believe consciousness isn't computational; the fact that you'd state this as a matter of fact shows you're biased. My guess is you only watch/consume AI doomer shit.

Fail-safes and safety measures are disincentivized. AI labs are incentivized to progress as fast as possible.

Both OpenAI and Anthropic, the two biggest players, purposefully hold back certain tech because of safety concerns, and both are trying to get the government to regulate development. Even Meta, who are big on open models, restrict their AI over safety concerns.

Jensen Huang noted GPUs were "25 times faster than five years ago."

We're already reaching an upper limit with GPU inference, and most are looking for the next pivot.

It's very possible that running ASI is doable locally at some point.

I see absolutely no reason to believe this. As of right now most people struggle to consistently run a 70B-parameter model locally, and ASI would require near-unlimited context length.
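
Rough weights-only arithmetic (ignoring the KV cache and activations, which only make things worse) on why a 70B-parameter model is already a stretch for consumer hardware:

```python
# Weights-only memory footprint of a 70B-parameter model at common precisions,
# versus the ~24 GB of VRAM on a high-end consumer GPU. Ignores KV cache/activations.
PARAMS = 70e9
for precision, bytes_per_param in [("fp16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
    gigabytes = PARAMS * bytes_per_param / 1e9
    print(f"{precision:>5}: ~{gigabytes:.0f} GB of weights")
```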

My point is that the world is likely not stable if people are given superhuman intelligence at their fingertips.

This is a fair viewpoint; however, I believe making people smarter is a good thing.

It is inevitable that more and more actors become capable of building bioweapons as scientific knowledge progresses. The problem is that I expect Robotics + ASI to rapidly accelerate this trend.

Isn't this true if society just continually gets smarter? If we as a society got so smart that we were on the verge of the average person being able to build bioweapons, would you want to stop us getting smarter?

I think we both agree there is a line where certain knowledge should not be taught due to its potential for harm.

I do not. I think people being smarter is good, and I don't think intelligence should be regulated (human intelligence, I mean).

We would both agree this line is not drawn between high-school and university-level chemistry.

But at that point a person would know enough to start making drugs or explosives capable of pretty serious terror attacks. My point was that this is an example where a person is given the knowledge to manufacture explosives and also has access to the raw materials, yet we don't have widespread terrorism in Western countries.

END