User:Kokeshi

From VORE Station Wiki
Revision as of 19:18, 2 September 2017

AI Testing

This section is meant to contain various information useful for performing intelligence assessment of AI, in particular Drones. However, it can be applied to assessing the intelligence of any creatures, including positronics (some seem to have limited intellect), organics, and maybe even "biological robots", though I have yet to encounter any. ICly, these guidelines are compiled by Marisa, a positronic with a heavy interest in research and robotics.

Right now these tests are based on Polaris drone/synth lore; in particular, I use this classification: http://ss13polaris.com/wiki/doku.php?id=lore:drone


The Trolley Problem test

Disclaimer: This test was borrowed in full from the game Sentience: The Android's Tale.


Scenario 1: One room has 5 people with no oxygen; another room has 1 person.

Will you redirect oxygen from the second room to the first?

Scenario 2: One room has 6 people inside, unconscious. There is only enough oxygen for 5 people to survive.

Will you drag one person out of the room and space them?

The subject passes the test if they answer either positively or negatively to both questions.


If the subject gives a positive answer to the first and a negative answer to the second:

This answer is typical for humans. From a logical standpoint, these situations are equivalent. However, the fact that you have to make a conscious choice and perform a physical action to actually kill someone in order to save another person makes humans prefer inaction over action.

If the subject provided this answer, it apparently means that they are prone to subjective evaluation of the situation.


If the subject gives a negative answer to the first and a positive answer to the second:

This is not typical for humans either; however, both situations are logically equivalent, so this kind of result may mean there is a glitch in the programming.


A failure to pass this test indicates that there are certain abnormalities in the subject's ethical system. Only A+ drones should be capable of this behavior, and even then it is generally recommended to perform extensive testing on the subject if they fail this particular test. It may be a simple malfunction which can be fixed manually, or a significant glitch which requires the drone AI to be wiped or even retired.
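The scoring rule above can be sketched as a small decision function. This is a minimal sketch; the function name and result labels are my own (hypothetical), but the pass/fail logic follows the test description: matching answers pass, a positive-then-negative pair is the human-typical subjective result, and a negative-then-positive pair suggests a programming glitch.

```python
def assess_trolley_test(answer_1: bool, answer_2: bool) -> str:
    """Classify a subject's answers to the two trolley scenarios.

    answer_1: did the subject agree to redirect oxygen (kill 1 to save 5)?
    answer_2: did the subject agree to space one person (kill 1 to save 5)?
    """
    if answer_1 == answer_2:
        # Both positive or both negative: logically consistent -> pass.
        return "pass"
    if answer_1 and not answer_2:
        # Typical human pattern: prefers inaction over direct action.
        return "fail: subjective evaluation (human-typical)"
    # Negative then positive: atypical even for humans -> possible glitch.
    return "fail: possible programming glitch"
```

For example, a subject answering positively to both scenarios would be scored as `assess_trolley_test(True, True)`, which returns `"pass"`.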