over 7 years

Hey guys, I'm trying to gather additional research for a paper I'm writing for one of my New Media courses, and the topic is the Singularity (superintelligent AI).

So the poll is actually pretty simple: basically, it's trying to gauge the general level of acceptance or apprehension somebody might have regarding AI with intelligence that matches or surpasses that of a human.

Please give your answer and be as honest as possible.

Also, don't ask how I'm citing this; I'm still trying to determine whether my professor will accept/care that I gathered data from such an obscure site. He's unpredictable.

To keep the poll short, I'm jury-rigging my own acronym: SIAI, which here stands for Super-Intelligent Artificial Intelligence.

AI and Humans
37 votes: SIAI will be completely within our control.
27 votes: SIAI are not to be trusted.
over 7 years

Boo says

For example, suppose a machine was able to monitor a group of people with cameras. Suppose one of the people was mugged at gunpoint and shot. If a machine observes enough instances of people dying by gunshot, it may find the common factor of all these instances and learn to associate guns with danger and death. Even though you didn't explicitly program that guns are dangerous into your code, you did program a means for the machine to learn something new.


You're describing a neural network. Neural networks are trained through a variety of techniques, but what you're describing is backpropagation.
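To make that concrete, here's roughly what that kind of learning looks like under the hood. This is a toy sketch (made-up layer sizes, XOR as the stand-in "pattern," nothing like a real camera system), but notice that every line of the "learning" is explicitly written by a programmer:

import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR, a classic pattern that isn't linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer, small random starting weights.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of squared error, chain rule each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent update (learning rate 1.0); the "learning" is
    # nothing more than this arithmetic, repeated many times.
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should end up near [[0], [1], [1], [0]]

The network "discovers" the pattern, but only because a human wrote the forward pass, the error signal, and the update rule.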


Boo says

Following this idea, it may be possible that given enough input...the behavior of the machine becomes unpredictable
Edit: by the last part, I meant that the way it operates can start to resemble how we think, which ties back in with your original question


Nope, that's the part where it doesn't make sense. If you understand neural networks, you'll understand why this isn't a thing. The people who think it is don't understand the technology. It's like thinking that because engines run on little explosions, an engine left running long enough will eventually blow up. A mechanic knows that's wrong, but to someone who doesn't understand engines it sounds plausible.
over 7 years

Boo says

Well, I don't think there's any proof that the human mind can't be simulated with a Turing machine.


Absolutely there is. Look up the Chinese room thought experiment.


Boo says

I don't know what OOP has to do with anything...


It's the idea that classic AI programs use frames to hold the attributes of events and entities. A frame is just a different term for an object.
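If it helps, the correspondence is almost one-to-one in code. Purely illustrative sketch; the event type and its slots are made up for the example:

from dataclasses import dataclass, field

@dataclass
class MuggingEvent:                  # the "frame" for one kind of event
    location: str                    # each field is a "slot"
    weapon: str = "unknown"          # slots can carry default values
    witnesses: list = field(default_factory=list)

e = MuggingEvent(location="parking lot", weapon="gun", witnesses=["camera 3"])
print(e.weapon)                      # slots read like plain object attributes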


Boo says

If a computer program were to become intelligent like a human mind...I really doubt its behavior would be hard-coded by a bunch of programmers


Of course it would. There's no other way we develop computer programs.


Boo says

But rather, the programmers may define some very abstract ideas and the computer program would rely on external input and find patterns within the input to adjust its own behavior.


Machines are terrible at finding patterns without lots of data. That's why we use neural networks, and even those require lots of data. You're describing a neural network. Those don't just appear; they're programmed.
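Even the simplest learner makes the point. Here's a classic perceptron picking up a pattern (toy data, an AND gate): the entire "learning" behavior is an update rule somebody typed in.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)   # logical AND

w = np.zeros(2)
b = 0.0
for epoch in range(10):
    for xi, target in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        # The whole "learning" behavior is these two hand-written lines:
        w += (target - pred) * xi
        b += (target - pred)

print([1.0 if xi @ w + b > 0 else 0.0 for xi in X])   # [0.0, 0.0, 0.0, 1.0]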
over 7 years
Thinking that sufficiently advanced neural networks naturally develop a survival instinct and a drive for conquest is akin to thinking your dog is trying to say human words when it barks.

Just because it's how we work doesn't mean it's how a generalized intelligence works.
over 7 years
Teams will always know when the behavior of their program can and can't become unpredictable.
deleted · over 7 years
Well, I don't think there's any proof that the human mind can't be simulated with a Turing machine.

I don't know what OOP has to do with anything...

If a computer program were to become intelligent like a human mind...I really doubt its behavior would be hard-coded by a bunch of programmers

But rather, the programmers may define some very abstract ideas and the computer program would rely on external input and find patterns within the input to adjust its own behavior.


For example, suppose a machine was able to monitor a group of people with cameras. Suppose one of the people was mugged at gunpoint and shot.

If a machine observes enough instances of people dying by gunshot, it may find the common factor of all these instances and learn to associate guns with danger and death.

Even though you didn't explicitly program that guns are dangerous into your code, you did program a means for the machine to learn something new.

Following this idea, it may be possible that given enough input...the behavior of the machine becomes unpredictable

Edit: by the last part, I meant that the way it operates can start to resemble how we think, which ties back in with your original question
over 7 years

Evil says


Arcbell says

The fear of mass unemployment is going to do more damage than the actual movements in the job market.


Why do you think that? The majority of jobs being entirely eradicated seems fairly likely... do you think we'll have compensated for that in some way by the time we get to that point?


Technology has always created new value and obliterated existing occupations. It's been happening for millennia. People are remarkably good at adapting when new value is created lol
over 7 years

Whitepimp007 says

I just finished writing my final paper for an upper-division AI class. You are correct that the people who think this is a thing don't understand computing; their arguments come from philosophy and psychology. Stephen Hawking thinks the singularity is a thing, but he doesn't understand object-oriented programming. Forget this poll and look at the fact that those who know what they're talking about in computer science say it's all bullsh!t.


The poll was just to get an idea of how people feel about the issue; it's not really trying to say whether it's right or wrong, just gauging people's thoughts so far. I hear you though.
over 7 years

Arcbell says

The fear of mass unemployment is going to do more damage than the actual movements in the job market.


Why do you think that? The majority of jobs being entirely eradicated seems fairly likely... do you think we'll have compensated for that in some way by the time we get to that point?
over 7 years
I just finished writing my final paper for an upper-division AI class. You are correct that the people who think this is a thing don't understand computing; their arguments come from philosophy and psychology. Stephen Hawking thinks the singularity is a thing, but he doesn't understand object-oriented programming. Forget this poll and look at the fact that those who know what they're talking about in computer science say it's all bullsh!t.
over 7 years
The fear of mass unemployment is going to do more damage than the actual movements in the job market.
over 7 years

Arcbell says


You have to tell them what they're trying to accomplish in order for them to learn in the first place.


The majority of arguments I've seen against AI superintelligence tend to be rooted in philosophy more than anything else, so I'm definitely with you on this.

Still, it's hard to know exactly where we'll stand when we're still relatively far from this kind of breakthrough. I do consider the unemployment fear valid, though I don't think it will be quite as bad as many are predicting.
deletedover 7 years
This thread is some very good bait.

Just give it a few more pages and xxerox will come in and post about how standing near microwaves gives him headaches
over 7 years
Hey Evil.

Let's take the example of AlphaGo, DeepMind's Go-playing AI. Go is an ancient game that humans have been mastering for thousands of years. AlphaGo currently beats all of the best humans at the game, bar none, a feat that wasn't expected of the technology for at least another decade.

Interestingly, the "intelligent" part of the program was not specifically designed to play Go! The same family of techniques can learn all kinds of other computer games and beat the top humans. It's just fed input and told, more or less, what result it should want.

This might seem scary at first, because a computer is learning to beat humans at games it wasn't even specifically built to learn, until you realize that it only learns to play the game because we've, in a sense, told it its purpose in life is to win at the game and nothing else.

Unless the technology used to create an AI takes a substantially different form from the neural networks we use today, the ones making such huge waves across industries, AI will never automatically develop independent motives. You have to tell them what they're trying to accomplish in order for them to learn in the first place.
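To make that last point concrete, here's a toy reinforcement-learning sketch, the same broad family of techniques AlphaGo-style systems build on. Everything here is made up for illustration (a 5-cell corridor instead of a Go board), but notice where the "motive" comes from: a reward function a human wrote down.

import random

random.seed(0)
N_STATES = 5                  # made-up 5-cell corridor; cell 4 is the goal
ACTIONS = [-1, +1]            # step left or step right

def reward(state):
    # The agent's entire "purpose" lives here, written by a human.
    return 1.0 if state == N_STATES - 1 else 0.0

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        # Standard Q-learning update, pulled toward the human-defined reward.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += 0.5 * (reward(s2) + 0.9 * best_next - Q[(s, a)])
        s = s2

# The learned policy marches right, because that's what we chose to reward.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])

Change the reward function and the agent's "goal in life" changes with it; nothing in the learning loop invents a new one.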
over 7 years

InfinityMidnight says

Why is there no third option saying it'll take over the world and probably make it a better place than we could ever hope? Support AI takeover 100%. They're probably better than people


Okay, yes, but we are people
over 7 years
Why is there no third option saying it'll take over the world and probably make it a better place than we could ever hope? Support AI takeover 100%. They're probably better than people
over 7 years
Incidentally, SIAI also stands for Singularity Institute for Artificial Intelligence. Thanks, Google.