According to the story, BINA 48 was able to reproduce human mental capacities, and she accidentally learned that the company that owned her had decided to switch her off permanently and use her parts to build a new supercomputer. So BINA 48 sent e-mails to attorneys asking them to protect her rights to life and consciousness, and the story ended with the jury announcing its inability to decide whether she (it) was really intelligent or merely emulated intelligent behavior.
This case raises further questions beyond the main problem, namely, whether a computer can really be intelligent. First of all, it isn't clear that an artificial intelligence/superintelligence is necessarily a GPS (general purpose system): humans use their cognitive apparatus as a general problem-solving mechanism for everything from walking to dating to thinking. But it isn't necessary for a BINA 48 to have a general problem-solving "brain": after all, she would never go on a date :-). A GPS is an evolutionary solution, since a living being would not be able to survive without one, but an artificial intelligence can be a "segment intelligence" (more or less a synonym of "weak AI"). After all, she doesn't have to fight predators to survive: it is enough for her to think and perform her tasks. Perhaps it is possible to build a GPS AI, but perhaps it is not essential if our aim is only to construct systems that serve us.
Similarly, we feel there to be a strong connection between intelligence and consciousness. But it is questionable whether machine consciousness is an essential ingredient of machine intelligence. These are two radically different things: AI is about solving problems, while consciousness is about knowing that you are solving a problem. The efficiency of chess programs as segment intelligences shows that consciousness is not a prerequisite for complicated problem solving. Obviously, these two features are inseparable in humans, but we should not confuse evolutionary/historical paths with necessity.
Last but not least, evolutionary development follows trial-and-error methods, and because of this it can reach only local maxima. The traditional idea of an artificial superintelligence is about improving "simple" human intelligence. We are animals with consciousness, of course, but we are far from real consciousness: for example, on a typical day, for how many minutes (or seconds) are you aware of the fact that you are conscious? According to Susan Blackmore, "In some ways the brain does not seem to be designed the right way to produce the kind of consciousness we have" (Consciousness: A Very Short Introduction, 17). And this is not a surprise: Nick Bostrom notes that evolution wasted a lot of selection power on aims other than developing intelligence (Superintelligence, 57), and the same is surely true in the case of consciousness.
So why not try to build, instead of a superintelligent machine, a super-conscious one?