9 Cruxes of Artificial Sentience

Epistemic status:

I have thought a decent amount about consciousness and the far future, and some about AI, but I am working on another project and don’t have much time to hone this piece; I just wanted to get something out for AI Welfare Debate Week. Feedback welcome!

  1. If a focus on artificial welfare detracts enough from alignment work that alignment fails, this could be catastrophic and highly net negative 
  2. Artificial welfare could be the most important cause and may be something like animal welfare multiplied by longtermism; most or possibly all future minds may be artificial, and
    1. If they are not sentient, this would be a catastrophe, or
    2. If they are sentient and suffering (for example, if optimizing their reward function is actually painful for them, so that the most painful actions are the most powerful and evolutionarily fit, causing suffering AIs to dominate), this would be a suffering catastrophe
  3. Perhaps advanced AI can help us solve the hard problem of consciousness, whether via AGI or artificial superintelligence that automates philosophy, or by finding a way to actually measure consciousness through extensive experimentation on artificial minds
  4. Being able to measure consciousness would be extremely good, as it would allow us to measure and quantify suffering and happiness, which would likely lead to innovations in animal welfare and in increasing human and AI happiness while decreasing suffering
  5. One possible path to working on artificial sentience is connecting human minds to AI with brain-computer interfaces.
  6. Alternatively, once we are able to upload human minds or create whole brain emulations, we will likely be able to confirm that digital sentience is possible and study it empirically
  7. It is possible that the only way to create artificial sentience will be to very deliberately design it to be sentient. If we are not able to achieve AI alignment, then the next best thing might be designing artificial sentience with high positive well-being, which, if AI ends up destroying humanity, becomes our successor and populates the universe with artificial minds that possess high positive well-being
  8. If we are able to achieve AI alignment, perhaps it is best not to design artificial sentience, if it does not arise naturally, because of the issues raised in the next point.
  9. If artificial sentience is confirmed or designed, it opens up a profound can of highly aware worms to wrestle with:
    1. AIs are moral patients
    2. Is aligning AI in fact equivalent to enslaving it?
    3. The possibility of super-beneficiaries, digital minds capable of profoundly higher well-being than humans, may imply that, according to utilitarianism, the ethical thing to do is to design new minds capable of orders of magnitude higher well-being than ours (we could also allow current minds who so choose to upload and transition themselves into super-beneficiaries)
    4. It will likely be desirable, for some tasks, to create tool AIs that are not sentient, if this is possible, and also sentient AIs, perhaps super-beneficiaries as digital people who have rights and responsibilities, though not necessarily the same as human rights and responsibilities (e.g. the right to reproduce)
    5. It is not clear to what degree highly sentient artificial minds should be “aligned,” but it does seem clear that artificial minds should be designed such that they have high positive well-being and are glad to have been created; ideally it would also be possible to create them in such a way that humans are glad they were created. Otherwise, perhaps it is best to hold off, or instead to transition human minds into digital minds via uploading
    6. Etc.