The screen frame rate can be 60 or 120 Hz depending on the device and the test properties. That means that a new image has to appear on the screen every 16.667 or 8.333 milliseconds. That time is the expected frame duration.
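The expected frame duration is just the reciprocal of the refresh rate. A minimal Python sketch (the function name is ours, not part of the app):

```python
def expected_frame_duration_ms(refresh_rate_hz: float) -> float:
    """Expected frame duration in milliseconds for a given refresh rate."""
    return 1000.0 / refresh_rate_hz

print(round(expected_frame_duration_ms(60), 3))   # 16.667
print(round(expected_frame_duration_ms(120), 3))  # 8.333
```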

The calculations required to render a new image may take longer than the expected frame duration. In that case, the previous image remains on the screen until the calculations finish and a new image can appear. We call frames that last longer than expected long frames.

Ideally, the number of long frames in your test should be zero. But because some frames require more calculations than others (for example, the first frame after a response), the total number of long frames in your test may not be exactly zero; it should still be a very small percentage of the total.

Every time you preview a test, a message is displayed informing you about the number of long frames.

Every time you run a test, a results report is created. There you get detailed information about which frames last longer than expected and their real duration in seconds.
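Given the real frame durations from the results report, flagging long frames can be sketched as follows (a minimal Python sketch; the tolerance threshold is an assumption of ours, not a value used by the app):

```python
def long_frames(durations_s, refresh_rate_hz, tolerance=0.5):
    """Return indices of frames whose measured duration exceeds the
    expected frame duration by more than `tolerance` frame periods."""
    expected = 1.0 / refresh_rate_hz
    return [i for i, d in enumerate(durations_s)
            if d > expected * (1.0 + tolerance)]

# Three normal 60 Hz frames and one long frame (about two periods):
durations = [0.0167, 0.0166, 0.0334, 0.0167]
print(long_frames(durations, 60))  # [2]
```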

If you get more than a few long frames, you need to reduce the computational cost of your test to get a steadier frame rate. Try the following steps:

  • Reduce the frame rate to 60 Hz if you are running your test at 120 Hz and the higher frame rate is not essential to your test.
  • Minimize the numberOfLayers in your scenes because each extra layer adds additional computational cost.
  • Disable the continuousResolution property of the scene.
  • Reduce the number of stimuli present at the same time.
  • Reduce the size of the stimuli.
  • If you want an object to be present in some trials but not in others, make its activated property zero or one depending on the trial. When the activated property is zero, no CPU or GPU cycles are used to compute the object.
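The last step above can be sketched like this (a hypothetical Python sketch: the trial names and this per-trial representation are assumptions of ours, not the app's actual API):

```python
# Derive the per-trial value of an object's activated property from
# the trial condition. When the value is 0, the object costs no CPU
# or GPU cycles; when it is 1, the object is computed and shown.
trials = ["target-present", "target-absent", "target-present"]
activated_values = [1 if t == "target-present" else 0 for t in trials]
print(activated_values)  # [1, 0, 1]
```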

Image and audio synchronization

When the user specifies that the auditory and visual signals should be presented at the same time, small audiovisual delays can still occur. The average delay is around -10 to 10 ms depending on the device. It is possible to correct this average delay.

To measure it, present an audiovisual signal specified to occur simultaneously several times and calculate the average delay by measuring both signals with an oscilloscope. Then enter the average delay in the delayAudio60 or delayAudio120 variable of the app settings: use a positive sign if the correction should delay the auditory signal presentation, and a negative sign if it should delay the visual signal presentation.
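The averaging step can be sketched as follows (a minimal Python sketch; the function name and the example onset values are ours, and reading the result as visual-minus-audio is our interpretation of the sign convention above — the actual onsets come from your oscilloscope measurements):

```python
def average_delay_ms(audio_onsets_ms, visual_onsets_ms):
    """Average audiovisual delay across repeated presentations,
    measured as visual onset minus audio onset. A positive result
    means the audio leads on average, so a positive correction
    (delaying the audio) would be needed; a negative result, the
    opposite."""
    diffs = [v - a for a, v in zip(audio_onsets_ms, visual_onsets_ms)]
    return sum(diffs) / len(diffs)

# Audio consistently arriving about 5 ms before the visual signal:
print(average_delay_ms([0.0, 1.0, -1.0], [5.0, 6.0, 4.0]))  # 5.0
```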

The variability of the delay across presentations (precision) is less than 1 millisecond (standard deviation) and cannot be corrected. 

Reaction times 

Images are displayed on the screen at a constant frame rate. All timing values reported by the system are measured exactly at the moment each image is actually shown on the screen, ensuring that stimulus onset times and reaction times reflect the true physical presentation time.

These are the relevant timing values included in the results report:

  • scene_startTime: the real time when the scene is presented on the screen (one value for each scene in the section).
  • scene_duration: the real duration of the scene in seconds (one value for each scene in the section).
  • scene_responseTime: the real time at which the response occurred (if there is one).


When a scene requires a response, the user may react at any moment within the current frame. Once the response occurs, the application immediately switches to the next scene for the next frame. However, modern devices use a triple-buffered rendering pipeline, which affects when the new scene actually becomes visible on the screen.

In a triple-buffered system, three frame buffers are used:

  • one buffer is currently being displayed,
  • one buffer is ready to be displayed next,
  • and one buffer is being rendered.

This architecture is common because it maximizes rendering smoothness and avoids stalling the GPU, but it also means that the frame you render after a user response is not shown immediately. Instead, it enters the queue of buffered frames.

As a consequence, the new scene—although already rendered and delivered—will typically appear on the screen about three frame intervals after the actual response. This delay corresponds to:

  1. the remaining portion of the frame currently on the screen at the moment of the response,
  2. plus the following frame already queued for display,
  3. plus the next buffer that has just been rendered.

This behavior is normal under triple buffering and does not depend on the content of the scene.

Because of this rendering pipeline, the scene_startTime of the next scene will typically occur approximately three frames after the response registered in the previous scene. This reflects the actual physical moment when the new scene becomes visible on the screen.
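The relation between a registered response and the physical onset of the next scene can be sketched as follows (Python; the function name and the fixed three-frame offset are illustrative assumptions, since the text above says the delay is only typically three intervals):

```python
def expected_next_scene_start(response_time_s, refresh_rate_hz,
                              buffered_frames=3):
    """Approximate physical onset of the next scene: the response is
    followed by roughly `buffered_frames` frame intervals before the
    newly rendered frame reaches the screen (triple buffering)."""
    frame = 1.0 / refresh_rate_hz
    return response_time_s + buffered_frames * frame

# At 60 Hz, a response at t = 1.0 s yields an onset near 1.05 s:
print(round(expected_next_scene_start(1.0, 60), 3))  # 1.05
```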

For scenes that do not require a response and end automatically, all frames are rendered and presented in the correct order without interruption, and the scene_duration values correspond exactly to the intended duration of the scene, with no additional delay introduced by the rendering pipeline.

When working with reaction times, you may also need to consider the sampling rate of the touch events. Touch information is sampled at 120 Hz on most devices, and at 240 Hz on the iPad Pro 11-inch (1st generation and later) and the iPad Pro 12.9-inch (3rd generation and later).
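One consequence of the touch sampling rate is a small uncertainty in reaction times: a touch can occur anywhere within one sampling interval before it is registered. A minimal Python sketch of that worst-case figure (the function name is ours):

```python
def touch_sampling_uncertainty_ms(sampling_rate_hz):
    """Worst-case extra latency from touch sampling: a touch can land
    anywhere within one sampling interval before it is registered."""
    return 1000.0 / sampling_rate_hz

print(round(touch_sampling_uncertainty_ms(120), 3))  # 8.333
print(round(touch_sampling_uncertainty_ms(240), 3))  # 4.167
```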