All visual elements

Visual elements include visual stimuli as well as elements that use other kinds of screen-based functionality, like the mouse. All of them have the properties below, which set a range of core visual options.

Input properties all visual elements have

position
layer

Default: position = screen center
Default: layer = 0 (doesn't matter if element won't overlap with other elements)

position is a vector [x y] setting element position on screen (deg). + = right⁠/⁠down (like reading), <cd>[0 0]<cd> = screen center. Typically elements center at this point, but see element type documentation for exceptions.

layer is a number setting element layering on screen. + = backward (away from the viewer), − = forward (toward the viewer). Absolute values don't matter, only relative values. You can ignore layer for element displays that won't overlap.

To set drift or movement, you can use property vary.

Examples

element.position = [2 0]

→ 2 deg right from screen center

element.position = {[-400 -400], <cdsm>"px"<cdsm>}

→ 400 px left and up from screen center
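For reference, the relationship between deg and px position units is just standard visual angle geometry. Below is a minimal sketch in Python (PsychBench computes this internally from your screen setup; the viewing distance and pixel density here are made-up values for illustration):

```python
import math

# Hypothetical display geometry -- not PsychBench API, just the standard
# visual-angle arithmetic relating deg and px position units:
viewing_distance_cm = 57.0   # at ~57 cm, 1 cm on screen is close to 1 deg
px_per_cm = 40.0             # assumed pixel density of the screen

def deg_to_px(deg):
    """Convert a position in degrees of visual angle to screen pixels."""
    return math.tan(math.radians(deg)) * viewing_distance_cm * px_per_cm

# 2 deg right of center, as in element.position = [2 0]:
print(round(deg_to_px(2), 1))   # -> 79.6 px
```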

nn_eyes

Default: show on both eyes

If you turn on stereo display (screen object property stereo), use this property to set whether the element shows on the left eye (<cd>1<cd>), the right eye (<cd>2<cd>), or both eyes (a vector <cd>[1 2]<cd>). The other position properties above apply as usual within each eye.

rotation

Default: no rotation

Element orientation clockwise (from +x to +y screen axis) about position (deg).

flipHorz
flipVert

Defaults: don't flip

<cd>true<cd>/<cd>false<cd>: flip element horizontally/vertically. Note if you want to flip the whole display for the experiment, use screen object properties flipHorz, flipVert instead.

pixelResolution

Default: same as experiment window

Intensity/color resolution for the element display (bits/pixel). The most common value is 32 bits/px, corresponding to 8 bits (256 levels) per RGBA color component. Possible higher resolutions are 64 bits/px (16 bits/component) and 128 bits/px (32 bits/component). Generally you can leave this at default = same as experiment window, since pixel resolution of an element display is eventually limited by pixel resolution of the window (usually 32 bits/px). However, in unusual cases you may want to set it higher than the window to facilitate pre-window processing. Note this property is ignored for a stimulus sourced from a file with its own pixel resolution, e.g. a movie element showing a movie file.

Example

If the element display has low dynamic range at low levels (e.g. levels 15–20 out of 256 at 32 bits/px) and you apply a filter like intensity that scales it up to e.g. 150–200, it would remain quantized into 6 levels but the quantization would then be easily visible. To fix this you could set the element's pixel resolution = 64 or 128 such that the same initial range would be quantized into 6,144 or 100,663,296 levels, which would continue to look smooth when scaled up.
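The effect described above can be sketched numerically. Below is a plain-Python illustration (not PsychBench code; the ramp and gain values are made up): an element held at 8 bits/component keeps only 6 distinct levels after scaling, while one held at 16 bits/component keeps over a thousand in the same range:

```python
# A smooth ramp spanning intensity levels 15-20 out of 256:
N = 10000
ramp = [15 / 255 + (5 / 255) * i / (N - 1) for i in range(N)]

def quantize(values, bits):
    """Snap values onto a (2**bits - 1)-step grid, like a framebuffer."""
    q = 2 ** bits - 1
    return [round(v * q) / q for v in values]

gain = 10  # an intensity filter scaling the range up to ~150-200 / 255

# At 8 bits/component only 6 distinct levels survive quantization, and
# after the gain they sit 10 8-bit steps apart -- visible banding:
low = quantize(ramp, 8)
print(len({round(v * gain * 255) for v in low}))      # -> 6

# At 16 bits/component over a thousand levels survive in the same range,
# so the scaled result still looks smooth:
high = quantize(ramp, 16)
print(len({round(v * gain * 65535) for v in high}))   # over a thousand
```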

opacity

Default: no additional transparency

A number between 0–1 applying transparency to the element. 0 = fully transparent (invisible), 1 = no additional transparency. Note many elements have a transparent background regardless of this property.

intensity

Default: normal

A number ≥ 0 multiplying the RGB intensity of the element. < 1 = decrease brightness, 1 = normal, > 1 = increase brightness. Note often you can set brightness more directly through type-specific properties like color. This property is just for when that’s inconvenient or impossible.

contrastMult

Default: normal

A number ≥ 0 multiplying the contrast of the element, assuming a mean intensity of 0.5 (50%). < 1 = decrease contrast, 1 = normal, > 1 = increase contrast.

Or a vector [mult mean] if you want to use a different mean intensity value. Currently PsychBench cannot automatically detect actual mean intensity values for use with this property.

The transformation at each pixel is:

rgb = (rgb-mean)*mult+mean
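For illustration, here is that transformation sketched in plain Python (PsychBench itself is MATLAB-based; the function name is hypothetical):

```python
# The contrastMult transformation applied per pixel, per color channel:
def contrast_mult(rgb, mult, mean=0.5):
    """Scale contrast about a mean intensity: rgb -> (rgb - mean)*mult + mean."""
    return [(c - mean) * mult + mean for c in rgb]

# Halving contrast pulls values toward the mean: 0.2 -> 0.35, 0.8 -> 0.65
print(contrast_mult([0.2, 0.5, 0.8], 0.5))

# Doubling contrast pushes them apart: 0.2 -> -0.1, 0.8 -> 1.1
# (values outside 0-1 clip on screen)
print(contrast_mult([0.2, 0.5, 0.8], 2.0))
```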

convolution

Default: no convolution

You can apply various types of blur or any other convolution filter using this property. convolution is a further struct that can have the following fields. You can omit fields (or leave them = <cd>[]<cd>) to leave them at default.

type
size
sigma

Default: no generated kernel

type is a string <cdsm>"average"<cdsm>, <cdsm>"disk"<cdsm>, or <cdsm>"gaussian"<cdsm> to tell PsychBench to use a symmetric convolution kernel of that type. You then need to set size and other parameters in further fields below:

size is the width of the symmetric kernel. Convolution is applied at the resolution the element shows at on screen. As such, kernel size is a distance on screen, so by default it’s in deg units as usual. The resulting size will round up to an odd integer number of px on screen. Note if you want to set size in px units, you can use <cdm>{value, <cdm><cdsm>"unit"<cdsm><cdm>}<cdm> form directly in the field, e.g. convolution.size = <cdm>{value, <cdm><cdsm>"px"<cdsm><cdm>}<cdm>.

sigma (type = <cdsm>"gaussian"<cdsm> only): This is the standard deviation of the Gaussian. Again this is a distance on screen, by default in deg, but you can specify other units like px directly in the field if you want. For type = <cdsm>"gaussian"<cdsm>, you can set either size or sigma, and the other will default such that size = 4 × sigma. You can also set both if you want a different ratio.

Note if you use filterResolution below to apply the convolution at a resolution lower than screen resolution, PsychBench will automatically scale size and sigma down. So, always set them assuming convolution at screen resolution.

type requires the MATLAB Image Processing Toolbox.
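For example, the size rounding described above amounts to arithmetic like the following Python sketch (the px/deg factor is a made-up value; PsychBench derives the actual value from your screen setup):

```python
import math

# Assumed display resolution for illustration:
px_per_deg = 40.0

def kernel_px(size_deg):
    """Round a kernel width given in deg up to an odd integer number of px."""
    n = math.ceil(size_deg * px_per_deg)
    return n if n % 2 == 1 else n + 1

print(kernel_px(0.5))   # 0.5 deg -> 20 px -> rounds up to 21 px

# For type = "gaussian", setting only sigma gives size = 4 * sigma:
sigma_deg = 0.25
print(kernel_px(4 * sigma_deg))   # 1 deg -> 40 px -> 41 px
```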

kernel

Default: no custom kernel

If you don’t have the Image Processing Toolbox or want to use a different kernel than one of the above, you can directly specify any kernel as a 2D matrix in kernel instead. Specifically this is a correlation kernel like those generated by fspecial (it will be rotated 180 deg at application to implement convolution). Pixels in the kernel correspond to pixels at screen resolution. Like a MATLAB image matrix, rows correspond to height and columns to width. The kernel must be an odd integer number of px wide and high (not necessarily symmetric). If the kernel is separable, you can use a 1×2 cell array containing x and y kernels for greater efficiency.

Note if you use filterResolution below to apply the convolution at a resolution lower than screen resolution, PsychBench does not scale your custom kernel down. You must set it assuming convolution at the lower resolution.
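For illustration, here is a minimal custom kernel sketched as plain Python lists (in PsychBench itself kernel would be a MATLAB matrix or cell array):

```python
# A normalized 5 x 5 averaging kernel -- odd width and height, entries
# summing to 1 so it preserves overall brightness:
n = 5
kernel = [[1 / n ** 2] * n for _ in range(n)]

assert n % 2 == 1                                        # odd size required
assert abs(sum(sum(row) for row in kernel) - 1) < 1e-9   # normalized

# An averaging kernel is separable, so it could equivalently be given as
# 1-D x and y kernels (the 1 x 2 cell array form), which applies faster:
kx = [1 / n] * n
ky = [1 / n] * n
# the outer product of ky and kx reproduces the 2-D kernel:
assert all(abs(kernel[r][c] - ky[r] * kx[c]) < 1e-12
           for r in range(n) for c in range(n))
```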

backColor

Default: same as trial background color

If you use convolution, PsychBench needs to fill any transparent parts of the element background with a solid color to blend with. Even if the element doesn’t have any transparent background, this still affects pixels near its edges since PsychBench pads the element display with this color for convolution. backColor is a 1×3 RGB vector with numbers between 0–1 setting this color.

noise

(Coming soon)

gamma

Default: same gamma decoding as rest of the experiment

The most efficient way to set gamma decoding (correction) is to set it for the whole experiment using screen object property gamma. However, you can instead or in addition use element property gamma if you want an element to have different gamma decoding. For the element this replaces (doesn't add to) screen object gamma. Usage is the same as screen object gamma: This is an n×3 matrix with columns corresponding to RGB color channels, rows corresponding to input intensities (e.g. for 1024 rows: 0, 1/1024, 2/1024, ... 1), and numbers = output intensities between 0–1. Or you can use one number for simple power law gamma. <cd>[]<cd> = same gamma as rest of the experiment.

See screen object property gamma for more information. See also filterGamma below.
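For illustration, a power-law gamma table in the n×3 form described above can be built as in this Python sketch (in PsychBench it would be a MATLAB matrix; here n rows are spaced evenly over inputs 0 to 1):

```python
# A simple power-law gamma decode table, exponent 2.2, identical R, G, B:
n = 1024
g = 2.2
table = [[(i / (n - 1)) ** g] * 3 for i in range(n)]

assert table[0] == [0.0, 0.0, 0.0]      # input 0 -> output 0
assert table[-1] == [1.0, 1.0, 1.0]     # input 1 -> output 1
# gamma decoding darkens mid-tones: input ~0.5 decodes to ~0.22
assert table[n // 2][0] < 0.25
```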

filterOrder
filterResolution
filterGamma

Default: filterOrder = noise, convolution, intensity, contrastMult
Default: filterResolution = filter at screen resolution
Default: filterGamma = apply filters to standard gamma-encoded RGB

Properties noise, convolution, intensity, contrastMult above are implemented as OpenGL shaders that run in real time on your graphics hardware. The properties below affect all of them.

filterOrder: If you enable more than one of these filters, the default order they apply in is listed above. You can set filterOrder if you want to use a different order. This is an array of strings that can include any of <cdsm>"noise"<cdsm>, <cdsm>"convolution"<cdsm>, <cdsm>"intensity"<cdsm>, <cdsm>"contrastMult"<cdsm> in any order. Note if you set gamma it's always applied last.

filterResolution lets you apply these filters at a lower resolution than the element shows at on screen. This is a number between 0–1 that is the fraction of screen resolution to use, e.g. 0.5 = half screen resolution, 1 = screen resolution. The result is then scaled up to screen resolution for display. filterResolution < 1 makes the filters faster, which can help if you have a dynamic stimulus where PsychBench needs to filter a new image at each frame of animation and some combination of big filter / large stimulus / high screen resolution / slow system causes dropped frames. However, filterResolution < 1 comes at the cost of reduced image quality.

Note for a stimulus sourced from a file with its own resolution which puts an upper limit on image quality (e.g. a movie element showing a movie file), all filters except convolution automatically apply at the source resolution if it's lower than the resolution the stimulus will show at on screen. In this case filterResolution only has an effect if filterResolution × screen resolution < source resolution.
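The interaction between source resolution and filterResolution amounts to taking a minimum, as in this Python sketch (assumed arithmetic, not PsychBench internals; the resolutions are made-up values):

```python
# Effective resolution for non-convolution filters on a file-sourced
# stimulus, per the rule above:
screen_px = 1000        # resolution the stimulus shows at on screen
source_px = 600         # native resolution of the movie/image file

def effective_resolution(filter_resolution):
    """Filters run at the lower of source resolution and
    filterResolution x screen resolution."""
    return min(source_px, filter_resolution * screen_px)

print(effective_resolution(1.0))   # -> 600: capped by the source
print(effective_resolution(0.8))   # -> 600: 800 > 600, so no effect
print(effective_resolution(0.5))   # -> 500.0: now filterResolution matters
```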

filterGamma applies a gamma decoding before all these filters and then its inverse (re-encoding) after them and before display. You can use this to apply the filters to a representation that models the physical luminance the element will produce, as opposed to its standard gamma-encoded RGB. This assumes the filterGamma you set correctly models the total mapping from gamma-encoded RGB → luminance for your display (note this total mapping is generally not just your graphics card's gamma table—see screen object gamma for more information). For example, you could use filterGamma with a blur convolution to simulate reduced visual acuity at the eye.

filterGamma can be one number for simple power law gamma decoding/re-encoding (e.g. the standard 2.2). Or it can be an n×3 matrix with columns corresponding to RGB color channels, rows corresponding to input intensities (e.g. for 1024 rows: 0, 1/1024, 2/1024, ... 1), and numbers = output intensities between 0–1.

filterGamma doesn't change the gamma decoding that is actually applied to the element display—for that use element gamma above or screen object gamma. You can also mix using filterGamma with element or screen gamma.
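For illustration, the decode → filter → re-encode round trip can be sketched in Python (assumed arithmetic; the function name is hypothetical, and an intensity multiply stands in for the filter chain):

```python
# What filterGamma does around the other filters: decode to a linear-light
# representation, filter there, re-encode before display.
g = 2.2

def filter_in_linear_light(v, mult):
    """Apply an intensity multiply in linear light via decode/re-encode."""
    linear = v ** g           # filterGamma decode
    linear *= mult            # the intensity filter, now acting on ~luminance
    return linear ** (1 / g)  # inverse re-encode before display

v = 0.5
# Halving physical luminance changes the encoded value less than halving
# the encoded value directly would:
print(round(filter_in_linear_light(v, 0.5), 3))   # -> 0.365
print(round(v * 0.5, 3))                          # -> 0.25
```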

Input properties all objects have

report
info

Record properties all visual elements have

PsychBench uses record properties to record information during experiments. You can't set record properties, but you can see them in experiment results using input property report.