Hey guys, I'm trying to gather additional research for a paper I'm writing for one of my New Media courses, and the topic is the Singularity (superintelligent AI).
The poll is pretty simple: it tries to gauge the general level of acceptance or apprehension somebody might have regarding AI whose intelligence matches or surpasses that of a human.
Please give your answer and be as honest as possible.
Also, don't ask how I'm citing this; I'm still trying to determine whether my professor will accept/care that I gathered this from such an obscure site. He's unpredictable.
To keep the poll short, I'm jerry-rigging my own acronym: SIAI, which here stands for Super-Intelligent Artificial Intelligence.
We just got a new faculty member on campus who has worked extensively with AI and machine learning. I could potentially open a channel of communication with them if you want more information or want to know where to start. PM me if you're interested.
So, if we create a SIAI and tell it to help and save the world, and it actually does that (helps the world, stops world hunger, punishes all the criminals, and basically makes the world a good and kind place), then that SIAI may one day punish those who did not help create it sooner: those who did not want to save our planet and help those in need sooner.
Because by computer logic, those who knew that this machine would help and still did not help to create it in any way... well, they are the bad guys.
Like the people waiting in the car while others rob a bank. You are not the one robbing it; you just drive. You know you are doing something wrong (or maybe you are doing it unwillingly), but in the end that doesn't change the fact that you are part of all of this :D
BTW, I am actively trying to bring this AI into existence, but I lack experience. For now I just support it, and I really hope people bring it into existence faster.
The core claim is that a hypothetical, but inevitable, singular ultimate superintelligence may punish those who fail to help it or help create it.
A superintelligence could never be controlled, but that doesn't mean it can't be trusted. Your poll lacks a third option: we can peacefully co-exist, with the SIAI on its own, humans on their own, and each trusting the other.
But then, in the future, that exact same SIAI may punish you all for not helping bring it into existence to save and help the world :)
The level of depth you guys are going into (mainly Arcbell, Boo, and Whitepimp007) is awesome. Thanks for contributing, guys.
to prove it's impossible, you would need to find some biological property that a computer can't mimic...
maybe this could be it... fully capturing the dynamics of neurons might actually require unbounded memory.
some weaker models of computation, like deterministic finite automata (the underlying implementation of regular expressions), are limited to finite memory. A DFA cannot check whether an input string is a palindrome, because that would require memory that grows with the input. Since classical regular expressions are converted into DFAs, regular expressions cannot check for palindromes either.
or you could say that maybe on some quantum level there are activities in your brain with some element of "true randomness" that a machine can't emulate
but I don't think either of these is really concrete
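To make the DFA point concrete, here is a minimal sketch in Python (a toy illustration, not tied to any real regex engine): a DFA's only "memory" is which state it is currently in, which is why it can recognize a language like "even number of a's" but not palindromes.

```python
# A DFA is a fixed, finite table of states. Its only memory is the
# current state, no matter how long the input is.
def run_dfa(transitions, start, accepting, s):
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

# Example: accept strings over {a, b} with an even number of 'a's.
even_a = {
    ("even", "a"): "odd",  ("even", "b"): "even",
    ("odd", "a"): "even",  ("odd", "b"): "odd",
}

print(run_dfa(even_a, "even", {"even"}, "abba"))  # True  (two a's)
print(run_dfa(even_a, "even", {"even"}, "ab"))    # False (one a)

# A palindrome check, by contrast, has to remember the whole first half
# of the string, so its memory grows with the input. No fixed state
# table can do this.
def is_palindrome(s):
    return s == s[::-1]
```

The two-state table above is the entire machine; adding more input never adds more memory, which is exactly the limitation being described.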
I don't think it matters if machines don't have a "deep" understanding of something, because we don't either.
if you say that human intelligence can only be created with organic matter and not circuits, then that defeats the purpose of OP's question.
based on that line of thinking, the answer is simply "no": AI will never reach intelligence matching a human's, because it doesn't use organic matter.
so I think what OP was getting at was: can a computer learn to take a series of steps to solve a problem just like a human would, and derive a similar result?
but if you really think about it, is there really any difference between the chemical reactions that occur in your brain when you recall something and a machine retrieving data through disk I/O to output something? if they both end up giving the same answer, aren't they the same process in different forms?
when you are presented with a problem, your mind goes through a sequence of steps to solve it.
for example, suppose someone used the word "saxicolous" in conversation, which means tending to live near rocks.
if you did not know the meaning of this word, you would probably take a series of steps like:
1. partition the word into substrings and see if you recognize a smaller part of it. maybe you recognize a prefix or a suffix, or you happen to know a similar Latin word like "saxum," which means rock. this will probably let you make a guess about the meaning of the word.
2. if step 1 failed, maybe you would think about the context of the conversation and analyze whatever subject the word "saxicolous" was describing. you would come up with a list of possible meanings based on the properties of the subject in question.
3. maybe you would combine the results from steps 1 and 2 into a stronger theory.
4. if you are still uncertain, you would listen as the conversation progresses to see if you can pick up any more hints.
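Those steps can be sketched as a toy program. Everything here (the mini Latin-root lexicon, the suffix table, the name `guess_meaning`) is invented purely for illustration; it is not a real NLP technique, just the step sequence above written down:

```python
# Toy sketch of steps 1-4: guess a word's meaning from known roots,
# then fall back on the conversation's context. The lexicons below are
# tiny and invented.
LATIN_ROOTS = {"sax": "rock", "aqua": "water", "arbor": "tree"}
SUFFIXES = {"colous": "tending to live on/near", "ous": "characterized by"}

def guess_meaning(word, context_hints=()):
    # Step 1: look for a recognizable root (prefix) and a known suffix.
    root = next((m for r, m in LATIN_ROOTS.items() if word.startswith(r)), None)
    suffix = next((m for s, m in SUFFIXES.items() if word.endswith(s)), None)
    if root and suffix:
        return f"{suffix} {root}s"
    # Step 2: fall back on properties of whatever the word described.
    if context_hints:
        return "something to do with " + ", ".join(context_hints)
    # Step 4: no guess yet -- keep listening for more hints.
    return None

print(guess_meaning("saxicolous"))  # "tending to live on/near rocks"
```

Nothing about this loop requires a human in it once the lexicons exist, which is the point being argued: the steps themselves are mechanical.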
by "simulate" I mean that a learning program can learn to take a series of steps analogous to the ones a human takes, as described above, and come out with a comparable result.
I don't see why a program cannot, through enough observation of human activity, learn to take these steps.
so another way to phrase what I just said would be: it is likely that we could simulate a mind with a programming language like java
I think the problem here is that I'm not understanding what you're directing the word "simulate" toward, the appearance of the mind or the functions of the mind. If you're using simulate to mean imitate the appearance of a human brain, rather than the function or methods, I would agree.
The important thing is that we all agree Pranay, Townyyy, GSAlucard, and xxerox cannot pass the Turing Test.
NO. that is not what I said at all.
there is a difference between the Turing test and Turing-computability.
the Turing test asks whether a program can convince someone it's human.
i said that there's no evidence that the mind can't be simulated by a Turing machine. this has nothing to do with the Turing test.
the Turing machine is currently the most powerful model of computation we know of.
most mainstream programming languages are Turing complete, meaning that anything you can compute with a Turing machine you can also compute with that language (let's say Java).
so another way to phrase what I just said would be: it is likely that we could simulate a mind with a programming language like Java.
this has nothing to do with convincing anyone that they're human.
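"Simulate with a Turing machine" is easy to make concrete: a Turing machine is just a finite rule table plus an unbounded tape, and a few lines in any Turing-complete language can run one. A toy sketch (in Python rather than Java; the `flip` machine is an invented example):

```python
# Minimal Turing-machine simulator: a finite control plus an unbounded
# tape is all the model requires, and any mainstream language can
# express it. That is what "Turing complete" means.
from collections import defaultdict

def run_tm(rules, tape, state="start", blank="_"):
    tape = defaultdict(lambda: blank, enumerate(tape))  # unbounded tape
    head = 0
    while state != "halt":
        symbol = tape[head]
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: walk right, flipping 0 <-> 1, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm(flip, "1011"))  # "0100"
```

The simulator itself is a dozen lines; the open question in the thread is only whether a mind's rule table exists and can be written down, not whether a language like Java could run it.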
well, I don't think there's any proof that the human mind can't be simulated with a Turing machine.
Absolutely there is. Look up the Chinese room thought experiment.
Your sense of understanding is an illusion. You are an elaborate biological computer.
The idea that you're conscious or have a special 'understanding' of the tasks your brain can perform is nothing more than an assertion your brain makes to itself, and then to others.
I think you misunderstand. Neither of us seems to be asserting this. He's saying passing a Turing Test is evidence of the simulation of a human mind. I'm saying that you can pass a Turing Test and it doesn't actually mean anything at all. I'm using this to illustrate that a behaviorist semantic approach, like that proposed by the Turing test, ignores symbol grounding.
I think you'd have a safer bet if you polled several different sites.
Your data here is skewed both by a low sample size and by the fact that the people not answering the poll probably hold the bulk of the less extreme opinions on this site. That's just my opinion though.
Oh, trust me, I'm not going to be citing this website alone. This is going to be one of many polls, and I'm cutting out any of them that have fewer than 50 people voting.
Awesome. I would answer the poll, but I can't really decide between these two extremes haha. Good luck!
Just because it's how we work doesn't mean it's how a generalized intelligence works.
This right here is the meat of it, I think. We have a tendency to anthropomorphize our expectations of AI, and it's a ridiculous mindset to have.
if you have a human telling a program whether or not a result is correct, then it can produce more accurate results more quickly, but human supervision isn't necessary. it becomes a matter of how closely a program models people's behavior.
i did not say without being programmed. i said without being programmed explicitly.
the mind isn't magic either. there are relationships between neurons, and how they interact with each other plays a role in how we respond to things (even though these processes are nebulous). I am saying that there isn't an obvious reason why, if these processes were understood, we could not mimic them with a computer.
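As one small illustration of "mimic them with a computer": neuroscientists do model neuron behavior with simple equations. Below is a toy leaky integrate-and-fire neuron, one of the simplest such models; every parameter value here is invented for illustration:

```python
# Toy leaky integrate-and-fire neuron: the membrane voltage leaks back
# toward rest, integrates input current, and emits a spike once it
# crosses a threshold, then resets. All parameters are illustrative.
def simulate_lif(currents, tau=10.0, v_rest=0.0, v_thresh=1.0, dt=1.0):
    v, spikes = v_rest, []
    for t, i_in in enumerate(currents):
        v += dt * (-(v - v_rest) / tau + i_in)  # leak term + input
        if v >= v_thresh:
            spikes.append(t)  # record the spike time, then reset
            v = v_rest
    return spikes

# Constant drive eventually pushes the voltage over threshold,
# producing a regular spike train.
print(simulate_lif([0.2] * 30))  # [6, 13, 20, 27]
```

This is a crude caricature of a real neuron, but it shows the shape of the argument: once a process is described by explicit dynamics, a computer can step through them.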
Of course it would. There's no other way we develop computer programs.
that's simply not true. what is the whole idea of machine learning??
it really doesn't matter if a machine doesn't have a "deep understanding" of Chinese. if you really want to be pedantic about it, the mind is just a system of cells, just like a machine uses sets of symbols to define meaning. to simulate intelligence means that the machine can respond to situations in the same way that a mind would.
Machine learning isn't magic. We tell it whether the output is acceptable, and when it isn't, the weights and rankings between the nodes change. Machine learning requires us to sit there and tell it whether it's doing it right. You're saying that there's a possibility for a machine to become "intelligent like a human mind, but without being programmed," and I'm saying that's not a thing.
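The update loop described here (tell the network whether its output is acceptable; shift the weights when it isn't) is essentially the classic perceptron rule. A minimal sketch, with an invented learning rate and a toy dataset:

```python
# Minimal perceptron: the training labels play the "human" saying
# whether each output is acceptable. When it isn't, the weights between
# inputs and output shift toward the correct answer.
def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out     # 0 when the output was acceptable
            w[0] += lr * err * x1  # otherwise, nudge the weights
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND from labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Both sides of the argument are visible in these few lines: the learning rule is fully programmed, yet the final weights (the learned behavior) were never written by hand.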