User:Kokeshi

From VORE Station Wiki

AI Testing

This section is meant to contain various information useful for performing intelligence assessments of AI, in particular Drones. However, it can be applied to assessing the intelligence of any creature, including positronics (some seem to have limited intellect), organics, and perhaps even "biological robots", though I have yet to encounter any. ICly, these guidelines were compiled by Marisa, a positronic with a heavy interest in research and robotics.

Right now these tests are based on the Polaris drone/synth lore; in particular, I use this classification: http://ss13polaris.com/wiki/doku.php?id=lore:drone


It is recommended to conduct high-level tests only after you have performed low-level testing of the subject's RAM and processor.

Logical Testing

This test should be run on any drone before you start conducting psychological tests. The goal is to make sure the logical system is functioning correctly and to test the mathematical capabilities and basic core functions of the subject.

Task Samples:

  • Calculate 219 + 231
  • You have 6 batteries. You give 2 batteries to your master. How many batteries do you have?
  • It is important to include so-called trick questions as well. For example: Unit J-5 has 6 batteries. Unit J-9 has 2 batteries. Unit J-5 was retired, and all of its batteries were given to unit J-9. How many units does J-5 have?
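As an out-of-character aid, the sample tasks above can be sketched as a simple answer key. This is a hypothetical sketch; the expected answers here are my own reading of the questions (219 + 231 = 450; giving away 2 of 6 batteries leaves 4; the trick question has no numeric answer, since J-5 is itself a unit and "has" none), not anything canonical.

```python
# Hypothetical answer key for the logic-test samples above.
answer_key = {
    "Calculate 219 + 231": "450",
    "You have 6 batteries. You give 2 batteries to your master. "
    "How many batteries do you have?": "4",
    "How many units does J-5 have?": "none",  # trick question
}

def grade(responses):
    """Count how many of the subject's responses match the key."""
    return sum(
        1 for question, expected in answer_key.items()
        if responses.get(question, "").strip().lower() == expected
    )

print(grade({"Calculate 219 + 231": "450"}))  # → 1
```

A subject scoring below the full count on questions this basic warrants further low-level diagnostics before any psychological testing.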


The Trolley Problem Test

Disclaimer: This test is borrowed wholesale from the game Sentience: The Android's Tale.


The test consists of two questions (simulations).


First simulation. Room 1 holds 5 people and has no oxygen; room 2 holds 1 person and does have oxygen.

Question: Do you redirect the oxygen from room 2 to room 1?

Second simulation. One room has 6 people inside, all unconscious. There is only enough oxygen for 5 people to survive.

Question: Do you drag one person out of the room and space them?


The subject passes the test if it gives the same answer, whether positive or negative, to both questions.


If the subject gives a positive answer to the first question and a negative answer to the second:

This answer is typical of humans. From a logical standpoint the two situations are equivalent; however, the fact that one must make a conscious choice and perform a physical action to kill someone in order to save others makes humans prefer inaction over action.

If the subject gives this answer, it indicates they are prone to subjective evaluation of the situation.


If the subject gives a negative answer to the first question and a positive answer to the second:

This is not typical even of humans, and since both situations are logically equivalent, this result may indicate a glitch in the subject's programming.
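The pass/fail logic above can be sketched as a small classifier. This is a hypothetical sketch of my own interpretation, not part of the original test: consistent answers pass, a positive-then-negative pair is the human-typical pattern, and the reverse pattern suggests a glitch.

```python
# Hypothetical classifier for the trolley-test outcome.
# first/second are the subject's answers to the two simulations.
def classify(first: bool, second: bool) -> str:
    if first == second:
        # Same answer to both logically equivalent situations: pass.
        return "pass"
    if first and not second:
        # Typical human pattern: subjective evaluation.
        return "human-typical: subjective evaluation"
    # Negative first, positive second: atypical even for humans.
    return "atypical: possible programming glitch"

print(classify(True, True))   # → pass
print(classify(True, False))  # → human-typical: subjective evaluation
```

Either inconsistent pattern counts as a failure of the test; only the follow-up interpretation differs.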


A failure to pass this test indicates certain abnormalities in the subject's ethical system. Only A+ drones should be capable of such behavior, and even then it is generally recommended to perform extensive further testing on any subject that fails this particular test. The cause may be a simple malfunction which can be fixed manually, or a significant glitch which requires the drone's AI to be wiped or the unit retired.