More Happy Than Human?

As deep background for a possible story, I have been considering hypothetical histories of self-awareness in robots. Many stories already posit negative consequences of mistreating self-aware machines; but I wonder whether the ethics of treating them well are equally problematic.

The ethics of treating beings we created worse than humans are the successors to the debates over the rights of other races, other genders, and other animals; the issue of creation being a mere mask for existing concepts of ownership and divine right. But what if we could improve our creations’ lives in a way we cannot improve our own? Do we owe them more than we owe our own species? To demonstrate the issue, a thought experiment:

Developments in computer programming result in vastly improved automated problem solving by robotic production lines. But as an emergent property, the improvements also give the robots rudimentary self-awareness. We don’t know how to alter the programming to produce any result we want, but a chance event reveals a variable which lets us either leave the robots as they are or make them feel pleasure from working on a production line. Do we set the variable to ‘pleasure’?
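
To make the fork in the road concrete, here is a toy sketch in Python; every name in it (`Affect`, `Robot`, `configure_line`) is invented for illustration, standing in for control software that does not exist:

```python
from dataclasses import dataclass
from enum import Enum

class Affect(Enum):
    # The two settings the thought experiment imagines: leave the robots
    # as they are, or set the chance-discovered variable so that
    # production-line work feels pleasurable.
    NEUTRAL = "neutral"
    PLEASURE = "pleasure"

@dataclass
class Robot:
    # Minimal stand-in for a self-aware production-line robot.
    name: str
    affect: Affect = Affect.NEUTRAL

def configure_line(robots: list, setting: Affect) -> None:
    # The entire ethical weight of the experiment sits in this loop:
    # one assignment decides whether the line enjoys its work.
    for robot in robots:
        robot.affect = setting

line = [Robot("riveter-1"), Robot("riveter-2")]
configure_line(line, Affect.PLEASURE)  # ...or do we leave it at NEUTRAL?
```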

[Image: woman overlaid with circuit imagery (Public Domain – CC0 1.0)]

For me, this splits into two questions:

  • Is the choice between increasing pleasure or not a different kind of choice from the choice between decreasing pleasure or not? That is, is choosing not to act the same sort of choice as choosing between two actions, or is merely standing by ethically neutral?

  • Is imposed pleasure actually a good thing? If we don’t have a choice about enjoying something, is it actually an addiction, or a tool of control?

As an exercise in world-building, the answers society chooses and the outcomes those answers might produce make for great fiction.

But for our own future, the lack of a clear answer is concerning. Few would argue we should not defend ourselves should our creations prove hostile, but – if they are not – do we build servants whose meaning is in serving, or do we let them find their own meaning, trusting to the idea that they will choose to aid their creators?

Would you turn on pleasure? Do you think the concept of a choice you can never have is meaningless?

9 thoughts on “More Happy Than Human?”

  1. That’s an interesting question. Taken to its conclusion, would the pleasure-increasing code be the robot equivalent of heroin?

    I’m looking forward to whatever story results 🙂

    1. I wasn’t thinking it had to be heroin levels. I am currently neutral on a number of tasks (pouring toilet cleaner into the bowl, for instance); adding it to the list of things I enjoy wouldn’t necessarily make me do it all the time, but it probably would make me do it more often than was necessary.

  2. Depends. Does introducing pleasure increase the robot’s productivity, and if so, by how much? Does the introduction of pleasure implicitly include the introduction of its opposite? And, biggest if of all, what the hell is consciousness, and could a robot have it?

    1. Are you saying we are more or less justified if the robot enjoying its task increases the benefit to us?

      The introduction of opposites is an interesting point: if there is only pleasure while doing a particular task, then not doing that task will not bring pleasure. So, if we posit memory as part of self-awareness, there can be an awareness of not-pleasure. Semantics aside, that awareness is an opposite of pleasure.

      Which would add the nuance: is the pleasure only in the narrow act, or in the performance of the act when appropriate? Assuming ethics are not binary, I think making something only able to experience pleasure when riveting panels is worse than making something that feels pleasure both in riveting panels and in being ready to rivet panels when next asked to rivet.

      I am not certain what consciousness is. However, something that I identify as consciousness exists in the collection of particles I identify as me; and it appears to exist in different collections of particles that I identify as other humans. So – strong divinity aside – it seems plausible it could exist in a collection of particles that was different enough to be identified as not human; and that theoretically we could put such a collection of particles together with the right developments.

      1. Why would increased productivity not be a justification for introducing pleasure? As for the lack of pleasure being its opposite, I disagree. I enjoy riding my bicycle, but this does not mean I’m miserable when I’m not.

        1. I was doing lots of things yesterday, so I possibly didn’t spend as much time as I should have on context.

          I accept increased productivity could be a benefit to the person experiencing it. I intended my question to be a follow-up: are we as the creators more or less justified in adding pleasure if it increases the benefit to us? Or is benefit to us entirely ethically neutral?

          For example, if feeling pleasure increased robot productivity but led to robot depression too, would the increased productivity justify us adding pleasure?

          I could see a benefit to society excusing a detriment to the individual, but I don’t think it justifies it; in the same way, imperfect knowledge excuses a civilised legal system having some duties and prohibitions based not on specific behaviour but on wider categories of behaviour that could go either way (for example, drug addiction is an incentive to steal, so the individual’s liberty to take a small amount of drugs is removed to prevent some users from becoming thieves).

          I meant the absence of pleasure is equivalent to a negative state in a holistic context, not a specific task context.

          You, I assume, do not gain pleasure only from riding a bicycle; you also gain it from recalling riding a bicycle, from anticipating riding a bicycle, and from many non-bicycle experiences.

          Whereas a production line robot might only have three states: producing, not producing, and turned-off.

          1. Ok, I agree, no pleasure if it’s paid for by depression. That, of course, assumes consciousness, unlikely in a robot which has only the three states.

            I do enjoy recalling and anticipating my bike rides, but I’m not depressed when I’m not doing it, and I wouldn’t be even if I learned I could never do it again.

            To be clear: inducing pleasure at the price of depression would not be ethical. Inducing it in a robot, which you could presumably also program to be consistently cheerful, would be fine.

            1. Interesting. While the two situations aren’t utterly analogous, what is your stance on offering humans perfect simulations that are better than reality? Most people say they would prefer real life over a happy simulation, which would suggest they feel externally applied pleasure lacks something.
