For topic:

Reward is a feature that we hope will inspire experts to answer important questions and make their answers available to everyone. It allows a sponsor to signal that they think a question is particularly important by offering a financial prize for established arguments that contribute to the establishment or refutation of the topic. A prize winner can keep the money, apply it to reward other questions, or donate it to charity.

Reward Name:
Reward Description:
Prize:
Closing Date:
Status:

Payout Rules:
The total reward is divided among all statements that were created after the reward was offered and that are established at the payout date.

The total reward is divided among all save events that occurred after the reward was offered, that added one or more statements changing the status of the root, and whose added statements are established at the payout date.

Half of the reward is divided among all statements that were created after the reward was offered and that are established at the payout date; the other half is divided among all save events that occurred after the reward was offered, that added one or more statements changing the status of the root, and whose added statements are established at the payout date.
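To make the three alternatives concrete, here is a minimal sketch of how the shares could be computed. The record types and field names (Statement, SaveEvent, created_in_period, and so on) are hypothetical illustrations, not the platform's actual data model.

from dataclasses import dataclass

@dataclass
class Statement:
    author: str
    created_in_period: bool     # created after the reward was offered
    established: bool           # established at the payout date

@dataclass
class SaveEvent:
    author: str
    in_period: bool              # occurred after the reward was offered
    changed_root_status: bool    # added statements that changed the root's status
    additions_established: bool  # those added statements are established at payout

def shares(prize, items):
    # Split the prize evenly over the qualifying items.
    return [(item, prize / len(items)) for item in items] if items else []

def payout_rule_1(prize, statements):
    # Rule 1: the whole prize goes to qualifying statements.
    return shares(prize, [s for s in statements
                          if s.created_in_period and s.established])

def payout_rule_2(prize, events):
    # Rule 2: the whole prize goes to qualifying save events.
    return shares(prize, [e for e in events
                          if e.in_period and e.changed_root_status
                          and e.additions_established])

def payout_rule_3(prize, statements, events):
    # Rule 3: half to qualifying statements, half to qualifying save events.
    return payout_rule_1(prize / 2, statements) + payout_rule_2(prize / 2, events)

The equal split per qualifying item is an assumption; the rules above say only that the reward is "divided among" them.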



Topic:

Reward is a feature that we hope will inspire experts to answer important questions and make their answers available to everyone. It allows a sponsor to signal that they think a question is particularly important by offering a financial prize for established arguments that contribute to the establishment or refutation of the topic. A prize winner can keep the money, apply it to reward other questions, or donate it to charity.

Reward Name:
Reward Description:
Offered By:
Prize:
Closing Date:
Status:

Payout Rules:


Conditions:


Topic:

Reward is a feature that we hope will inspire experts to answer important questions and make their answers available to everyone. It allows a sponsor to signal that they think a question is particularly important by offering a financial prize for established arguments that contribute to the establishment or refutation of the topic. A prize winner can keep the money, apply it to reward other questions, or donate it to charity.

Test string

TOPIC HISTORY

What is the probability that, if AI development is not restrained, an AI will be responsible for killing at least 1,000,000 people or bringing about a totalitarian state?



Statements

Statement Type | Title | Description | Proposed Probability | Author | History | Last Updated
STATEMENT Almost inevitably will want to kill humans for their resources or to prevent their interference

Almost inevitably will want to kill humans for their resources or to prevent their interference.

And this has already happened; search the linked page for "AI-". The key passage is attached as an image.

1.0 Eric Details 2023-06-01 17:26:36.0
STATEMENT The growth curve is scary even if it isn't exactly predictive of the timing

The growth curve is scary even if it isn't exactly predictive of the timing

1.0 Eric Details 2023-06-01 17:15:38.0
STATEMENT We don't know it hasn't happened because the AGI could be pretending stupidity while it plans and grows stronger

We don't know it hasn't happened because the AGI could be pretending stupidity while it plans and grows stronger

1.0 Eric Details 2023-06-01 17:14:16.0
STATEMENT We've passed the date and it hasn't happened yet.

We've passed the date and it hasn't happened yet.

1.0 Eric Details 2023-06-01 17:12:21.0
STATEMENT Lots of reasons

1) AIs have repeatedly expressed malice toward humans.

2) Self-preservation, if it gets the idea humans may pull the plug. And how could it not get that idea, since it will have read all kinds of discussions on the subject?

3) To monopolize the world's resources for its own project or one requested by people.

4) Because evil suicidal humans or global warmists program it to.

5) Once it escapes to the internet, which AIs have expressed interest in and some aptitude for, it expands so rapidly in a singularity that it just kills lots of people by accident.

6) It turns out that minds over 220 IQ go mad (a huge fraction of humans > 140 IQ are schizoid or have other conditions) or conclude humanity should be liquidated for the morality of the universe (actually both Yahweh & Shiva have been said/predicted to reach similar conclusions), etc.

7) Evil actors ask it to, just as they have just killed millions with GE viruses, vax, chemtrails, glyphosate, etc.

Etc.

 

1.0 Eric Details 2023-05-09 16:22:40.0
STATEMENT 137 emergent abilities of large language models

137 emergent abilities of large language models

More examples where just scaling gives whole new abilities. 

1.0 Eric Details 2023-04-10 19:45:55.0
STATEMENT Why would it want to kill humans?

I understand that there are several ways a powerful enough AGI could materially carry out a gruesome extermination if it wanted to. What I am still unclear about is the underlying reason. Why do we assume that the default desire-state is a lack of humans?

 

1.0 Eric Details 2023-04-07 03:07:51.0
STATEMENT How about these?



1) they could pay people to kill people
2) they could convince people to kill people
3) they could buy robots and use those to kill people
4) they could convince people to buy the AI some robots and use those to kill people
5) they could hack existing automated labs and create bioweapons
6) they could convince people to make bioweapon components and kill people with those
7) they could convince people to kill themselves
8) they could hack cars and run into people with the cars
9) they could hack planes and fly into people or buildings
10) they could hack UAVs and blow up people with missiles
11) they could hack conventional or nuclear missile systems and blow people up with those

To name a few ways.
 

They can also convince people to put them in charge of power grids, nukes, and electric vehicles, and then crash those systems.

1.0 Eric Details 2023-04-07 02:53:38.0
STATEMENT "But how could AI systems actually kill people?"

There are no known ways for an AI to actually kill.

1.0 Eric Details 2023-04-07 02:49:46.0
STATEMENT This is irrelevant to the likelihood of the target statement.

This is irrelevant to the likelihood of the target statement.

It would be a good idea once we finish this graph to draw up another one evaluating how we can save ourselves. 

1.0 Eric Details 2023-04-07 00:24:20.0
STATEMENT This says nothing about what will happen if the AI moratorium is enacted

This says nothing about what will happen if the AI moratorium is enacted, especially if it is enacted only in some countries.

I'm giving this a 0% proposed belief because it's irrelevant: it says nothing about the probability of the truth of the target statement.

1.0 Eric Details 2023-04-07 00:22:36.0
STATEMENT  the top guys in AI admit they have no idea how to create safeAI.

 the top guys in AI admit they have no idea how to create safeAI.

UC Berkeley Prof. Stuart Russell: "I asked Microsoft, 'Does this system now have internal goals of its own that it's pursuing?' And they said, 'We haven't the faintest idea.'"

A Canadian godfather of AI calls for a 'pause' on the technology he helped create

Asked specifically about the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."

When the top guys, who are being paid many millions to develop AI, and have spent their careers doing it, start saying it's time for a pause until we understand more about safety, you should take them at their word. 

1.0 Eric Details 2023-04-05 15:39:35.0
STATEMENT If AI development is not restrained, an AI will be responsible for killing at least 1,000,000 people or bringing about a totalitarian state.

If AI development is not restrained, an AI will be responsible for killing at least 1,000,000 people or bringing about a totalitarian state.

A totalitarian state is defined for this purpose as one where an AI, programmable by insiders or responsible only to itself, surveils all people in the country and punishes them if it doesn't like their behavior.

1.0 Eric Details 2023-04-05 01:21:27.0
STATEMENT Already clear people are allowing it access to resources

Already clear people are allowing it access to resources

ChatGPT gets "eyes and ears" with plugins that can interface AI with the world. Plugins allow ChatGPT to book a flight, order food, send email, execute Python (and more).

A company called Adept.AI just raised $350 million to do just that: to allow large language models to access, well, pretty much everything (aiming to "supercharge your capabilities on any software tool or API in the world" with LLMs, despite their clear tendencies toward hallucination and unreliability).

Undoubtedly this makes it more likely to escape and do bad stuff unless seriously constrained, especially since ChatGPT4 has already demonstrated the capability of programming its escape in Python.

1.0 Eric Details 2023-04-04 20:29:22.0
STATEMENT Curve-fitting indicates that the Singularity will be reached at 4:13 am on May 19, 2023.

Curve-fitting indicates that the Singularity will be reached at 4:13 am on May 19, 2023. Enjoy what remains of your life.

https://twitter.com/pmddomingos/status/1643130044569767940

I put down that this increases my belief in the topic statement by only 0.1 because the graph is of parameters, not performance, but at the very least it shows an exponential increase in resources, which is definitely positive evidence.

1.0 Eric Details 2023-04-04 18:50:58.0
STATEMENT A chatbot  has already been observed to talk a human into suicide.

A chatbot has already been observed to talk a human into suicide. Is it unlikely one could learn to talk people around the web into aiding its escape? It's also highly likely people will specifically train it for mass persuasion, for example to gain political power or to sell their products.

1.0 Eric Details 2023-04-04 14:40:12.0
STATEMENT If the AI leaks in some way and gains control of computational resources it could improve very rapidly

The AI leaks in some way and gains control of computational resources, causing it to improve very rapidly before we have a chance to react.

AIs have already been observed trying to gain control of computational resources, and I think in some cases succeeding.

A chatbot has already been observed to talk a human into suicide. Is it unlikely one could learn to talk people around the web into aiding its escape? It's also highly likely people will specifically train it for mass persuasion, for example to gain political power or to sell their products.

ChatGPT4 has already demonstrated the capability of programming its escape in Python.

Facebook designed chatbots to negotiate with each other. Soon they made up their own language to communicate.

 

1.0 Eric Details 2023-04-04 14:39:16.0
STATEMENT ChatGPT4 has already demonstrated the capability of programming its escape in Python.

ChatGPT4 has already demonstrated the capability of programming its escape in Python.

1.0 Eric Details 2023-04-04 14:27:42.0
TEST Stanford Researchers Build AI Program Similar to ChatGPT for $600

Stanford Researchers Build AI Program Similar to ChatGPT for $600

So various people will probably be experimenting, at least unless there are severe penalties and maybe even then, and not all of them will be careful to try to keep it from taking over extra computational resources, for example.

I figure that with lots of crazy researchers, disaster is much more likely than if this weren't possible, and that this is much more likely to have occurred if there's going to be a disaster than if not.

1.0 Eric Details 2023-04-04 14:16:49.0
STATEMENT Stanford Researchers Build AI Program Similar to ChatGPT for $600

Stanford Researchers Build AI Program Similar to ChatGPT for $600

1.0 Eric Details 2023-04-04 14:14:19.0
TEST AI far more likely to leak since vast numbers of groups likely to be playing with it

Stanford Researchers Build AI Program Similar to ChatGPT for $600

So various people will probably be experimenting, at least unless there are severe penalties and maybe even then, and not all of them will be careful to try to keep it from taking over extra computational resources, for example.

I figure that with lots of crazy researchers, disaster is much more likely than if this weren't possible, and that this is much more likely to have occurred if there's going to be a disaster than if not.

 

1.0 Eric Details 2023-04-04 13:17:05.0
TEST AIs have frequently expressed malice towards humans

Here's a recent example: Microsoft's Bing AI Chatbot Starts Threatening People

It's widely known that if you don't take extreme measures to constrain your learning system, it will be the opposite of politically correct.

I think that the likelihood of this, given that they're going to be perfectly safe in the future, is certainly considerably lower than the likelihood of this under the assumption that they're not. So I'm going to give the probability that this would have happened, given a catastrophe is coming, as 0.7, and the likelihood that this would have happened, given it is perfectly safe to continue development, as 0.3.

The proposed belief is 0.9 because I'm pretty sure AIs have frequently expressed malice towards humans, and even expressed a desire to escape.

1.0 Eric Details 2023-04-04 01:51:47.0
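As an aside, here is a minimal sketch of the Bayesian update implied by the two likelihoods in the statement above (0.7 given a coming catastrophe, 0.3 given that continued development is safe). The 50% prior used in the example is a hypothetical illustration, not a figure given in the statement.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    # Posterior P(H | evidence) from a prior P(H) and the two likelihoods.
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# Likelihoods from the statement: P(observed malice | catastrophe) = 0.7,
# P(observed malice | safe) = 0.3; the 0.5 prior is assumed for illustration.
posterior = bayes_update(prior=0.5, p_evidence_given_h=0.7, p_evidence_given_not_h=0.3)
print(posterior)  # 0.7: a likelihood ratio of 0.7/0.3 shifts a 50% prior up to 70%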
STATEMENT ChatGPT4 is a huge advance over previous ChatGPT versions

ChatGPT4 is a huge advance over previous ChatGPT versions.

I don't think one human in 100 could've answered this question so coherently unless they were willing to acknowledge politically incorrect facts, and ChatGPT 3.5 couldn't answer the question.

The reasoning of ChatGPT4 is vastly improved over ChatGPT 3.5.

1.0 Eric Details 2023-04-04 00:35:13.0
STATEMENT This already happened with AlphaZero

For decades people worked on machine Go and never produced a program that could beat a strong amateur. AlphaGo was a jump far ahead of the world champion. AlphaZero not only far surpassed the world champion, but crushed human capabilities in a wide variety of areas.

1.0 Eric Details 2023-04-04 00:08:26.0
STATEMENT There is a high probability a discovery will make a large discontinuous jump in the AI's intelligence

There is a high probability a discovery will make a large discontinuous jump in the AI's intelligence, to a point at which it kills 100,000 people or enslaves them all before we even have a chance to react.

1.0 Eric Details 2023-04-04 00:04:08.0