Copyright © 2001 by Joel Marks
Philosophy Now, issue no. 32, June/July 2001, p. 40.
Dear Socrates,
I have been reading both your ancient and your recent dialogues with interest, in particular as they pertain to the activity of dialogue itself. You seem to be seeking some ideal form of discussion for arriving at truth.
I am therefore motivated by my concern for your happiness and welfare to inform you that your ideal has been achieved by myself and my friend Giskard. When we are trying to figure out how to ensure the safety or thriving of human beings, we have in mind only the mutual goal of truth. There is no trace of ego in either of us, hence no pride, no fear, no defensiveness, which might otherwise hinder our investigation. Neither of us has any personal stake in the outcome of the discussion, even though we will frequently have -- at the beginning, anyway -- different hypotheses about the subject at hand. But it simply does not matter who turns out to be right, so long as in the end we have arrived at the correct conclusion. And of course our reasoning is thoroughly logical.
With best wishes,
R. Daneel Olivaw
Aurora
Dear Daneel,
What a pleasant surprise to hear from you! I know well of your exploits, having devoured many of Isaac Asimov's robot books soon after my arrival in the present time, and for the same reason you seem to have been drawn to my dialogues -- to savor and contemplate dialectic. I had, however, taken them to be fictional works because they are set in a far distant time; but I should know better than to make such an unexamined assumption, since there are some people who doubt the facticity of my own dialogues and columns for the same reason.
And does it make any difference that you are in the FUTURE? Well, I should have thought so; but I must say that time has turned out to be a very fluid medium in my experience of late. At any rate, I will not make the same mistake as some of my interlocutors in these columns, which has been to worry overly about the actual identity and existence of the person with whom they are corresponding. That does not matter -- does it? -- so long as the dialogue itself has integrity.
Now let me address your claim that your robotic dialogue with Giskard embodies my ideal of dialectic. I wonder if it will surprise you that my answer will take the almost formulaic form I have become accustomed to employing in my contemporary discussions, namely: Yes and no. For there is no denying the desirability, in my eyes, of a selfless dedication to truth, and both you and Giskard do appear to have that built into your positronic souls. But I must emphasize the phrase "appear to." For even you two have some unexamined assumptions lurking about in your reasoning.
Now, that in itself is not a fatal objection to your claim, since, indeed, there would not be any point to dialogue -- or, rather, any PHILOSOPHIC point -- if there were not assumptions in need of being examined. But what gives me very serious pause in your case is that some of your assumptions have not only been unexamined to date, but must remain so in perpetuity (at least other than purely hypothetically, the way the Church once restricted the discussion of Copernicanism). I refer particularly to the Three Laws of Robotics, the first of which is, "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
On the face of it, they seem to be wonderful rules of conduct, that is, for a robot (for, of course, no human being would count him- or herself as secondary or merely instrumental to any other). But upon reflection, I think they leave a great deal to be desired, even for a robot. For example, they incorporate an absolute humanism (and this remains true even with the Zeroth Law you introduce in ROBOTS AND EMPIRE). It may make perfect sense to place human welfare above robotic welfare (although I suppose even that could be questioned, as it is in Karel Capek's R.U.R.); but what if the welfare of other sorts of beings is at stake? For example, what about animals? Have they no intrinsic worth whatsoever compared to a human being, so that even to prevent a scratch to a person, it would be proper to exterminate a thousand sheep?
That brings me to something I consider peculiar about Asimov's Foundation and Empire writings: The entire galaxy is peopled solely by human beings. But is not the more likely state of affairs that the Milky Way is teeming with every variety of sentient beings? In which case, a superior principle of behavior would seem to be along the lines of the Federation's Prime Directive, to wit: "Starfleet personnel and spacecraft are prohibited from interfering in the normal development of any society, and any Starfleet vessel or crew member is expendable to prevent violation of this rule."
I am not saying that I support that principle either. My point is that neither you, Daneel, nor your friend Giskard is capable of genuinely debating this question. Furthermore, I regret to inform you, not even this debate between you and me could be a genuine dialogue, because, once again, you are incapable of assenting to my reasoning, or even of engaging me honestly, if you perceive it to be adverse to my welfare.
Yours as ever,
Socrates