Coding overview and setup

The collaborating sites in PLAY perform a variety of roles (see people for details). Each site that performs a coding role is pre-assigned to complete one or more coding functions (highlighted in yellow below). This webpage contains detailed help for coding setup, workflow, and step-by-step procedures for each coding function.

Quality Assurance

All coded videos go through a quality assurance (QA) process in which the PLAY team ensures that the coded videos are eligible to be included in the final sample. Videos that pass QA will be checked for reliability, merged, and added to the final PLAY dataset. Coded videos that don’t pass QA will be sent back to the coding site for recoding. In some cases, the coding site might be provided additional training.

Getting Started

To code videos, you will need:

  • access to Databrary, the video repository that contains the files to be coded
  • a local installation of Datavyu, a video-coding application
  • coding templates and scripts

  1. Download the development version of Datavyu.
  2. Download the PLAY_CodingTemplate.opf file from the PLAY Databrary Volume. Name this file with the PLAY naming convention (e.g., PLAY_NYU001, … PLAY_NYU010, … PLAY_NYU030).
    • This template contains all of the primary variables that will be coded by each site: communication, gesture, locomotion, object interaction, and emotion.
  3. Download Ruby scripts for each coding variable as needed from the PLAY GitHub repository.
  4. Familiarize yourself with Datavyu before you begin coding (resources on Datavyu.org, videos from past workshops, etc.).

Coding in Passes

  • The coding manual describes the transcription process and codes for 5 content areas: communication, gesture, locomotion, object interaction, and emotion.
  • Each content area includes two passes: one pass for the infant and one pass for the mother. For gesture alone, the child and mom are coded together in a single pass.
  • A pass entails scoring the relevant codes for 1 hour of video.

Please visit our GitHub Repository for all of the scripts mentioned in this wiki.


Communication passes

Workflow

  1. You will receive video files that have been transcribed by the PLAY team. Run two additional scripts that will prepare new Communication columns for further coding.
  2. Run splitmomchild_transcribe.rb. This script pulls mom and child language out of the transcribe column into two new columns: (1) momspeech and (2) childvoc. Each column is automatically populated with cells from the correspondingly tagged utterances in the transcribe column (e.g., the script ports all utterances coded as ‘m’ to the momspeech column and all utterances coded as ‘b’ to the childvoc column). Each new cell in momspeech and childvoc is a point cell created at the onset of the corresponding transcription cell.
  3. Run create_momchild_utterancetype.rb. This script also creates two new columns: (1) momutterancetype and (2) childutterancetype. For each cell in momspeech and childvoc, a new cell is created in momutterancetype and childutterancetype, respectively. The codes for these cells are left blank, and the coder then scores mom and child communication according to the definitions in Communication Codes. (A sketch of what these two scripts do appears below.)
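
For reference, the core logic of the two preparation scripts looks roughly like the sketch below. This is a minimal sketch, not the authoritative code: it assumes the Datavyu Ruby scripting API (Datavyu_API.rb with getVariable, createVariable, make_new_cell, change_code, and setVariable; names vary across Datavyu versions), and it assumes the transcribe column stores the speaker tag in a code named source and the utterance text in a code named content. Consult the PLAY GitHub repository for the real scripts.

# Sketch of splitmomchild_transcribe.rb + create_momchild_utterancetype.rb.
# Assumed API and code names; see the PLAY GitHub repository for the real scripts.
require 'Datavyu_API.rb'

transcribe = getVariable("transcribe")

# One <content> column per speaker, plus a blank utterance-type column for each.
momspeech = createVariable("momspeech", "content")
childvoc  = createVariable("childvoc", "content")
momtype   = createVariable("momutterancetype", "imperative", "interrog_declar", "filler", "unintell")
childtype = createVariable("childutterancetype", "language", "langlike", "crygrunt", "unintell")

transcribe.cells.each do |c|
  # Route each utterance by its speaker tag: 'm' -> mom, 'b' -> child.
  speech_col, type_col = case c.source
                         when "m" then [momspeech, momtype]
                         when "b" then [childvoc, childtype]
                         else next
                         end

  # Point cell: onset and offset both set to the transcription cell's onset.
  cell = speech_col.make_new_cell
  cell.onset  = c.onset
  cell.offset = c.onset
  cell.change_code("content", c.content)   # change_arg in older Datavyu versions

  # Matching blank cell for the coder to fill during the utterance-type pass.
  tcell = type_col.make_new_cell
  tcell.onset  = c.onset
  tcell.offset = c.onset
end

setVariable("momspeech", momspeech)
setVariable("childvoc", childvoc)
setVariable("momutterancetype", momtype)
setVariable("childutterancetype", childtype)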


Datavyu Communication Codes

Make sure you are logged in to Databrary to view the embedded video examples in this wiki.

This section covers the four main codes: childvoc, childutterancetype, momspeech, and momutterancetype.

1. childvoc

<content>

General Orientation

Contains a transcript of all of the utterances/vocalizations of the child.

This column is automatically populated after the transcribe pass is completed, using a Ruby script. All of the utterances tagged with ‘b’ in transcribe are transferred here. The onset and offset are equal and set to the onset from the transcribe column, which reflects a time as close as possible to the onset of that utterance.

2. childutterancetype

<language_s-w> <langlike_b-v> <crygrunt_c-g> <unintell-x>

General Orientation

Utterance type = categorization of previously coded utterances as a specific type of speech form. Read the utterance transcribed in childvoc column and categorize each utterance based on codes below.

Codes are mutually exclusive. The prompts/arguments in the code are ordered to speed the coder from the easiest-to-detect and easiest-to-code categories (language, language-like sounds, etc.) down through the more nuanced and time-consuming codes. Once the proper code has been found, enter it into the prompt you are at, then code all of the remaining prompts as periods “.”. For instance, if the child did not produce language or a language-like sound, but did cry/scream, then code <.,.,c,.>.

The transcript will expedite this process. Double-check by listening again as you read the transcript. Note any disagreements in a comment.

Value List

<language_s-w>

s = sentence

w = word

<langlike_b-v>

b = babble

v = vowel

<crygrunt_c-g>

c = cry

g = grunt

<unintell-x>

x = unintelligible

Operational Definitions

<s>

Sentence = an utterance in which the speaker utters more than one word, producing a sentence or phrase (e.g., “Daddy’s shoe” or “Go to the park”). 3 videos (“ooh gimme that”, “i take this”, “goodbye sad face?”)

<w>

Word = an utterance in which the speaker utters a single word, such as “dolly” or “ball.” 3 videos (“car”, “basketball”, “truck”)

<b>

Babble = an utterance in which the speaker utters a series of repeated canonical syllables, such as ba-ba-ba, or ga-ga-ga. 3 videos (all “b”)

<v>

Vowel = an utterance in which the speaker utters a vowel sound (e.g., /a/, /i:/). 3 videos (all “v”)

<c>

Cry = an utterance in which the speaker is experiencing a period of prolonged distress. 3 videos (all “c”)

<g>

Grunt = an utterance in which the speaker produces a low, short, inarticulate, guttural sound often used to express effort or exertion. Vegetative sounds, such as coughing and sneezing, should be captured using this code. 2 videos (both “g”)

<x>

Unintelligible = either what the child said was not intelligible to the transcriber, or, even after listening again with the transcript, you cannot understand the utterance well enough to code it.

How to Code

Set the “JUMP-BACK-BY” key to 2 s.

Hit “FIND” on the controller to go to the onset of each utterance, which was populated in the childvoc column. JUMP-BACK-BY 2 s so the utterance can be viewed in context.

Play in real time to code each utterance; the categories are mutually exclusive. TAB between each argument/prompt, inserting a period “.” until you reach the appropriate code. Then insert periods “.” through the end of the cell.

3. momspeech

<content>

General Orientation

Contains a transcript of all of the utterances of the mom.

This column is automatically populated after the transcribe pass is completed, using a Ruby script. All of the utterances tagged with ‘m’ in transcribe are transferred here. The onset and offset are equal and set to the onset from the transcribe column, which reflects a time as close as possible to the onset of that utterance.

4. momutterancetype

<imperative_l-a-p> <interrog-i_declar-d> <filler-f> <unintell-x>

General Orientation

Utterance type = categorization of previously coded utterances as a specific type of speech form. Read the utterance transcribed in the momspeech column and categorize each utterance based on the codes below.

Codes are mutually exclusive. The prompts/arguments in the code are ordered to speed the coder from the easiest-to-detect and easiest-to-code categories (imperatives, then interrogatives, declaratives, etc.) down through the more nuanced and time-consuming codes. After the proper code has been found, enter it into the prompt you are at, then code all of the remaining prompts as periods “.”. For instance, if the mom did not produce an imperative, but did produce a declarative, then code <.,d,.,.>.

What is coded is not solely based on the transcript. Listen to the audio, watch the video, and read the transcript so you are sure of the intent behind the mom’s speech.

Value List

<imperative_l-a-p>

l = imperative look

a = imperative act

p = imperative prohibit

<interrog-i_declar-d>

i = interrogative

d = declarative

<filler-f>

f = filler/affirmation

<unintell-x>

x = unintelligible

Operational Definitions

<l>

Imperative Look = an utterance in which the speaker directs a child’s attention (e.g., “Look here”, “See?”, or calls child’s name to alert attention). 2 videos (“evelyn”, “look”)

<a>

Imperative Act = an utterance in which the speaker directs a child’s action, such as asking child to do something, or to play with an object. An example would be if a mother tells her child “let’s play with the ball”. 3 videos (“turn the page”, “come here please”, “go get the basketball”)

NOTE: The imperative look and imperative act can be collapsed if the breakdown takes too long to code/specify (although we don’t think it will save time).

<p>

Imperative Prohibit = an utterance in which the speaker prohibits a child’s behavior, such as asking child to stop what they’re doing. 3 videos (“dont knock it over”, “dont be so rough”, “no no tv”)

<i>

Interrogative = an utterance in which the speaker asks for information about objects or ongoing activities (e.g., “What is this called?”, “What color is this?”). Questions that start with “Can you” or “Would you” (e.g., “Can you put that down”) should not be coded as interrogatives; their function is to regulate the child’s behavior, so they should be coded as imperatives. Tag questions, in which the speaker adds a question at the end of a statement (“That’s a blue truck, right?”), are not considered questions; these should be coded as declaratives. 2 videos (“what does the pig say?”, “what is this?”)

<d>

Declarative = an utterance in which the speaker provides information about objects, events, or ongoing activities (e.g., “This is a fun toy”; “Red truck”; “You are stirring in the cup”). 2 videos (“child’s clothes”, “that’s a lemonade”)

<f>

Affirmations/Fillers = an utterance in which the speaker acknowledges and agrees with another speaker’s behavior, or uses words as conversational fillers. For instance, when the mother says “There you go” when the child successfully completes a puzzle, or when she says “yeah” or “uhuh”. 2 videos (“wow”, “there you go”)

<x>

Unintelligible = either what the mom said was not intelligible to the transcriber, or, even after listening again with the transcript, you cannot understand the utterance well enough to code it. 2 videos (both “xxx”)

How to Code

Set the JUMP-BACK-BY key to 2 s.

Hit “FIND” on the controller to go to the onset of each utterance, which was populated in momspeech column.

JUMP-BACK-BY 2 s so the utterance can be viewed in context.

Play in real time to code each utterance; the categories are mutually exclusive. TAB between each argument/prompt, inserting a period “.” until you reach the appropriate code. Then insert periods “.” through the end of the cell.



Gesture pass

Workflow

  1. Score child and mom gesture together in a single pass according to definitions in Gesture Codes.
  2. After the gesture coding pass (for both mom and child) has been done, run a script that will separate mom and child gestures into two columns.
  3. Run Split-MomchildGesture.rb. This script pulls mom and child gestures out of the gesture column into two new columns: (1) childgesture and (2) momgesture. Each column is automatically populated with cells from the correspondingly tagged events in the gesture column (e.g., the script ports all gestures coded as ‘m’ to the momgesture column and all gestures coded as ‘b’ to the childgesture column). Each new cell in childgesture and momgesture is a point cell created at the onset of the corresponding cell in the gesture column. (A sketch of the split appears below.)
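
The split follows the same pattern as the communication scripts. This is again a minimal sketch under the same Datavyu Ruby API assumptions, plus the assumption that the gesture column’s codes are named source and gesture; the real Split-MomchildGesture.rb is in the PLAY GitHub repository.

# Sketch of Split-MomchildGesture.rb (assumed API and code names).
require 'Datavyu_API.rb'

gesture      = getVariable("gesture")
momgesture   = createVariable("momgesture", "gesture")
childgesture = createVariable("childgesture", "gesture")

gesture.cells.each do |c|
  target = case c.source
           when "m" then momgesture
           when "b" then childgesture
           else next   # e.g., 'h' (mom holding child) is not split in this sketch
           end
  cell = target.make_new_cell
  cell.onset  = c.onset    # point cell at the gesture onset
  cell.offset = c.onset
  cell.change_code("gesture", c.gesture)
end

setVariable("momgesture", momgesture)
setVariable("childgesture", childgesture)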

Datavyu Gesture Codes

gesture

<source_m-b>, <gesture_p-s-i-c>

General Orientation

Gestures are segmented, durative, event-based behaviors. Watch the video paying attention to the communicative gestures used by the parent and the child. When coding for gesture, focus on the mother’s or child’s hands and head.

Code mother and child gesture simultaneously in one pass. Then based on the <source> of the gesture, a script breaks apart mom and child into separate childgesture and momgesture columns.

Only onsets are coded to expedite coding; offsets could be coded later if duration of gesture or overlap with specific other domains is of interest.

Value List

<source_m-b>

m = mom

b = child

h = mom holding child

<gesture_p-s-i-c>

p = point

s = show/hold up

i = iconic gesture

c = conventional gesture

Operational Definitions

<source_m-b>

<m>: Code ‘m’ if the mom is the source of the gesture.

<b>: Code ‘b’ if the child is the source of the gesture.

<gesture_p-s-i-c>

Gesturing by either mom or child to the investigator (or anyone else in the room) should not be coded. The following should NOT be coded as gestures: tapping child to get his/her attention; pushing an object away; hugging and kissing; one partner moving the other’s hand (e.g., to initiate contact, like proximity seeking); jerking the head to indicate “come here.”

<p>

Code ‘p’ when the child or mom extends their index finger to indicate reference to objects, people, events, or locations in the environment.

Onset is the frame when the finger is fully extended in space toward a referent, or when the point finger is extended and makes contact with the object. Repetitive points should be coded as separate gesture events.

<s>

Code ‘s’ when the child or mom holds up an object to present it as if to say: “look at this” or “do you want this” or “I want you to take this”. Given that it’s not possible to distinguish intention, when a participant shows, offers, or gives an object (e.g., child actually hands toy to mom, offering toy to mom but mom doesn’t take) code as ‘s’, to save decision-making time.

Onset is the frame when the object is fully held up or out to show it. Repetitive instances of holding up or offering an object should be coded as separate gesture events.

<i>

Code ‘i’ when the child or mom engages in an iconic gesture. They are called iconic because they represent an object, idea, or action that can’t easily be referenced with a deictic (point/show) or conventional gesture. The movement of these gestures usually calls to mind something about the nature of the object, idea or action being referenced. For example, you could move your arms back and forth to represent running, or you could trace a square in the air with your finger, or flap your arms as if flying.

Onset is the frame when the child or mom has clearly begun the iconic gesture and the coder can clearly identify it as a gesture that does not fall into the conventional gesture category (see <c>). Repetitive instances of an iconic gesture should be coded as separate gesture events.

<c>

Code ‘c’ when the child or mom engages in a conventional gesture. Conventional gestures are culturally-agreed-upon hand or head movements with a specific meaning, like nodding the head to mean “yes,” shaking the head to mean “no,” and moving the finger to lips to indicate “be quiet”. 2 videos (shaking head “no”; holding out hand for “give me”)

If a gesture is conventional, you should be able to understand its meaning just by seeing it in isolation, without knowing any of the context. Some additional examples of conventional gestures include: waving, clapping, flipping arms out to the side to indicate “I don’t know” or “where is it”, come-here gestures (finger motions or palms), sit-down gestures (pats ground), pickup gestures (child holds up arms to be picked up), thumbs up, shrugs, naughties (wag finger), hug me (hold arms out asking for hug), etc.

Onset is the frame when the child or mom has clearly begun the conventional gesture and the coder can clearly identify it as a gesture that does not fall into the iconic gesture category (see <i>). Repetitive instances of a conventional gesture should be coded as separate gesture events.

How to Code

Set “JUMP-BACK-BY” key to 2 s.

Gestures are best coded with the volume low or muted so that the language content does not confound the coding process.

Watch at 1x speed until either mom or child gestures. Focus on the mom’s and infant’s hands and head to identify instances of gestures.

Gestures are defined purely as they relate to the communicative nature of each action. The coder can establish whether something is communicative by looking at things like eye contact, conversational context, and the reaction of the person being spoken or gestured to. If the movement isn’t supposed to communicate anything, then it’s not a gesture. For example, a child might reach for an object and pick it up and look at it. This is an action, not a gesture. But, if the child points to the object to indicate its presence, or if the parent claps her hands to indicate “good job,” then these are gestures. (If there is significant ambiguity in whether a gesture is communicative, or how to code it, sound may be of assistance.)

When the coder identifies a mom or child gesturing, jump back 2 seconds and play the video again at ½ speed until you find the frame where the gesture is clearly underway. Hit the = key (equal sign) to insert a point cell, so that the current video frame becomes both the onset and the offset.

Type ‘m’ or ‘b’ to indicate whether the mom or the child was the <source> of the gesture. Hit the TAB key to advance the cursor to <gesture>, then type ‘p’, ‘s’, ‘i’, or ‘c’ to indicate the type of gesture.

Splitting Mom and Child Gestures

It’s faster to code mom and child gesture together in one pass. But for consistency with the other coding passes, we want mom gestures and child gestures to be in two separate columns.

Run the Split-MomchildGesture.rb script to pull child and mom gestures from the gesture column into the childgesture and momgesture columns.

childgesture

<gesture_p-s-i-c>

General Orientation

Contains gestures produced by the child.

This column is automatically populated after the gesture pass is completed, using a Ruby script. All of the gestures tagged with ‘b’ in <source> in the gesture column are transferred here. The onset and offset are equal, and set to the onset from the gesture column, which reflects the time when the coder was sure the gesture had begun.

Value List

<gesture_p-s-i-c>

p = point

s = show/hold up

i = iconic gesture

c = conventional gesture

momgesture

<gesture_p-s-i-c>

General Orientation

Contains gestures produced by the mom.

This column is automatically populated after the gesture pass is completed, using a Ruby script. All of the gestures tagged with ‘m’ in <source> in the gesture column are transferred here. The onset and offset are equal, and set to the onset from the gesture column, which reflects the time when the coder was sure the gesture had begun.

Value List

<gesture_p-s-i-c>

p = point

s = show/hold up

i = iconic gesture

c = conventional gesture



Locomotion passes

Workflow

  1. Choose whether to code child or mom first
  2. Score each pass according to definitions below

1. Datavyu - Child locomotion pass

1.1 General orientation:

Screenshot of template (coming soon!)
Column name: childloc
Prompt: <loc_l-f-h-c>

This code captures the times that the child is engaged in salient self-generated locomotion in any form (e.g., bum shuffling, scooting, belly crawling, hands-knees crawling, cruising, supported walking, independent walking, etc.).

Coders score only instances of child-generated locomotion, and instances of falling, riding, or being constrained by mom or device. Coders do not score instances where child is stationary but could have locomoted. Bouts of locomotion are scored as events, where the gray spaces between cells mean the child is stationary but not held and not constrained.

Coders watch for and tag the duration of each of these events (locomotion, falls, mom-constrained, device-constrained, riding a toy with wheels, not visible) by marking onset/offset times. To determine locomotion, coders watch for steps with the feet, the knees, or the bum. Any other movement not initiated from these three body locations is considered a transition between postures and is subsumed by stationary, because it is likely a transition rather than salient locomotion.

1.2 List of values and definitions

l = locomotion (steps with the feet, knees, or bum)

f = fall

m = constrained by mom

d = constrained by a device

r = riding a toy with wheels

. = child is not visible; coder cannot see or infer whether the child is stationary or moving

1.3 How to code


Turn the volume off by clicking the speaker icon on the video track in the data controller and dragging the bar all the way down.
Set “JUMP-BACK-BY” key to 1 s.
Watch in real time except when the child is restrained by a device.
Watch the baby’s feet and knees.

As soon as you see the baby’s foot/knee lift up off of the ground, hit #5-STOP and then hit “JUMP-BACK-BY” to go back to a timestamp just before the lift. Then JOG forward by hitting #3-JOGFORWARD until you reach the Onset of that cell. If you go too far, you can JOG backward by hitting #1-JOGBACK. You will likely have to hit the JOG keys numerous times. If you feel that you have jumped too far back or gone too far forward, hold the JOG keys to move in either direction a bit faster. Hit ENTER to create a new cell at this Onset.

Now, watch in real time to see when the baby stops moving. The Offset is when the baby stops moving for at least 1 s (the pause has to look and feel like an actual pause when you are watching in real time; don’t simply end a bout of locomotion because there was a 1-s pause, especially if it looks like the baby is about to take another step). The first frame when the foot/knee stops moving or when the foot settles into its final position (sometimes infants stop their walking bout on their tip-toes) is the offset. The same applies to sliding steps.

To set the Offset, use the same mechanics as for the Onset. Hit #5-STOP and then hit “JUMP-BACK-BY” to go back to a timestamp just before the pause. Then JOG forward by hitting #3-JOGFORWARD until you reach the Offset of that cell. If you go too far, you can JOG backward by hitting #1-JOGBACK. You will likely have to hit the JOG keys numerous times. If you feel that you have jumped too far back or gone too far forward, hold the JOG keys to move in either direction a bit faster.

If the child begins engaging in a stationary activity that is likely to last more than a few seconds (e.g., watching TV, coloring, sitting on the floor with a book, etc.), move to 2x or even 4x speed. When the child begins moving, use the jump-back button to go back to the onset of locomotion. THIS WILL SAVE YOU TIME! The child may make small movements during this time to re-adjust body position, but if the movements are fewer than 3 steps on the bum or knees, do not slow down to code them, as these would be ignored per the coding rules discussed above.

2. Datavyu - Mom locomotion pass

2.1 General orientation:

Screenshot of template
Column name: momloc
Prompt: <loc_l-f-.>

This code captures the times that the mom is engaged in locomotion or falls.

Bouts of locomotion are scored as events, where the gray spaces between cells mean the mom is stationary.

Coders are watching/tagging each of these events by marking onset/offset times for the duration of locomotion bouts.

Coders are watching for steps with the feet, the knees, or the bum.

Any other movement that is not initiated from these three body locations is considered a transition between postures and is subsumed by stationary, as it is not locomotion.

Bouts coded as “.” mean that the mom or her legs are off camera and the coder cannot see or infer whether she is stationary or moving.

2.2 List of values and definitions

l = locomotion (steps with the feet, knees, or bum)

f = fall

. = mom or her legs are off camera; coder cannot see or infer whether she is stationary or moving


2.3 How to code

Set “JUMP-BACK-BY” key to 1 s.

Enable cell highlighting on the controller.

Watch in real time for the mom’s movement.

Watch for the feet and knees.

As soon as you see the mom’s foot/knee lift up off of the ground, hit #5-STOP and then hit “JUMP-BACK-BY” to go back to a timestamp just before the lift. Then JOG forward by hitting #3-JOGFORWARD until you reach the Onset of that cell. If you go too far, you can JOG backward by hitting #1-JOGBACK. If you feel that you have jumped too far back or gone too far forward, hold the JOG keys to move in either direction a bit faster. Hit ENTER to create a new cell at this Onset.

Now, watch in real time to see when the mom stops moving. The Offset is when the mom stops moving for at least 1 s (the pause has to look and feel like an actual pause when you are watching in real time; don’t simply end a bout of locomotion because there was a 1-s pause, especially if it looks like the mom is about to take another step). The first frame when the foot/knee stops moving or when the foot settles into its final position is the offset. The same applies to sliding steps.

To set the Offset, use the same mechanics as for the Onset. Hit #5-STOP and then hit “JUMP-BACK-BY” to go back to a timestamp just before the pause. Then JOG forward by hitting #3-JOGFORWARD until you reach the Offset of that cell. If you go too far, you can JOG backward by hitting #1-JOGBACK. You will likely have to hit the JOG keys numerous times. If you feel that you have jumped too far back or gone too far forward, hold the JOG keys to move in either direction a bit faster.


Object Interaction passes

Workflow

  1. Choose whether to code child or mom first
  2. Score each pass according to definitions in Object Codes

Datavyu Object Codes

Make sure you are logged in to Databrary to view the embedded video examples in this wiki.

childobject

<obj>

General Orientation

This code captures the times that the child is manually engaged with an object. Coders score only when object events occur, not when they don’t occur. This is an event code, where gray spaces between cells mean that the child is not engaged with an object. (2 videos)

Value List

o = object

. = when child is off camera and coder cannot determine whether child has an object in hand.

Operational Definitions

<obj>

Object = any manipulable, moveable item that may be detached and moved through space (e.g., toys, household items, and smaller moveable elements of larger objects, like beads on a busy box or a doorknob) (2 videos). Objects may include large objects (e.g., a stroller, adult furniture, a door) when the child moves them, thus manually engaging with them. If the object never moves (e.g., the child has a hand on the stroller but does not displace it), then this is not coded as ‘o’. (1 video) The displacement rule lets us differentiate object-engagement episodes from instances where the child is exploring a surface or resting hands on a surface for support. (1 video) The infant does not have to be looking at the object for the event to count as object engagement (e.g., child is carrying the object). (1 video)

Riding on toys with wheels does not count as object engagement; this is coded in the childloc pass.

Code ‘o’ if the child is engaged with an object by making contact with the item with hand(s) and/or moving the item in space (e.g., carrying, pushing on the floor, etc.) (1 video)

Onset is the frame when the child first causes the object to move while making contact with any part of the hand(s), not the feet. Contact could be from any part of the hand (fingers, palm, side of hand). Movement could include lifting, holding, pressing, grasping, shaking, banging, or any other type of displacement event. DO NOT code the onset at the moment the hand touches the object if the object is not displaced (e.g., if the child touches a pillow but only grasps and moves it 1 minute later, code the onset at the movement, not when the hand first touches the object). Offset is the frame when the child is no longer in manual contact with an object for at least 3 s, OR when the child is in manual contact but the object is no longer being displaced (displacement includes holding and lifting) for at least 3 s. There is no minimum duration for the child to touch an object for it to be scored as ‘o’, but if the infant is touching multiple objects, the offset of the ‘o’ cell is when the child is no longer in manual contact with the last object contacted for 3+ s. If the child is in manual contact with an object in one hand and makes contact with another object with the second hand, count this as the same bout. (1 video)

How to Code

Set “JUMP-BACK-BY” key to 3 s.

Enable cell highlighting.

Watch in real time for the child’s hand(s). As soon as you see the hand(s) touch an object (as defined above), continue watching for a couple of seconds to see if the child moves/manipulates the object. Then, hit #4-SHUTTLEBACK to get to the onset of the cell. The Onset is the first frame when the child makes manual contact with the item. Set this onset by hitting ENTER to create a new cell with that onset time. Now, continue watching the object bout in real time and set the Offset when the child breaks manual contact or stops moving the object (e.g., a stroller) for at least 3 s. Once you’ve determined that the bout has ended, set the offset by hitting #5-STOP and then #6-SHUTTLEFORWARD or #4-SHUTTLEBACK to the frame where the child is no longer in manual contact with the item and/or no longer moving it. Then, hit #9-SETOFFSET.

Continue watching in real time for the next object bout. If the child is holding an object while crawling or walking around, you can watch faster by SHUTTLING at 2x speed to find the end of the object engagement.

To check whether a 3-s pause has occurred between object engagements, go to the offset of the previous object cell and watch until you reach the next instance of ‘o’. Then, hit the ‘JUMP-BACK-BY’ key and check to see if the previous cell lights up. If it does, then the two cells are <3 s apart and should be combined into one bout of ‘o’.
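
This adjacency check can also be automated after the pass. The sketch below (not an official PLAY script) merges ‘o’ cells separated by less than 3 s, under the same Datavyu Ruby API assumptions as the earlier sketches; the output column name childobject_merged is a placeholder.

# Sketch: merge childobject cells whose gap is under 3 s (3000 ms).
require 'Datavyu_API.rb'

GAP_MS = 3000
bouts  = []

getVariable("childobject").cells.sort_by(&:onset).each do |c|
  if bouts.any? && (c.onset - bouts.last[:offset]) < GAP_MS
    bouts.last[:offset] = c.offset   # extend the previous bout instead of starting a new one
  else
    bouts << { onset: c.onset, offset: c.offset, obj: c.obj }
  end
end

out = createVariable("childobject_merged", "obj")
bouts.each do |b|
  cell = out.make_new_cell
  cell.onset  = b[:onset]
  cell.offset = b[:offset]
  cell.change_code("obj", b[:obj])
end
setVariable("childobject_merged", out)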

momobject

<obj>

General Orientation

This code captures the times that the mom is engaged with an object. Coders score only when object events occur, not when they don’t occur. This is an event code, where gray space in between cells means that the mom is not engaged with an object.

Value List

o = object.

. = when mother is off camera and coder cannot determine whether she has an object in hand.

Operational Definitions

<obj>

Object = any manipulable, moveable item that may be detached and moved through space (e.g., toys, household items). Object can include parts of a stationary object (e.g., a doorknob on a door, a clasp on a drawer) that can be moved or manipulated (5 videos). Object can also include large objects that the mom may move (e.g., chairs).

Code ‘o’ if mom is engaged with an object by making contact with the item with her hand(s). Onset is the frame when mom first makes contact with hands. Offset is the frame when mom is no longer in manual contact with an object for at least 3 s. If the mom has multiple items in hand, the Onset of object is when a hand(s) touched the first object in the multiple-object-bout and the Offset is when the hand(s) release the last object.

In cases of larger objects (i.e., a stroller, a box, a chair, a table, etc.), the object engagement begins when the object starts to move. If the large object never moves (e.g., the mom has a hand on the stroller but does not displace it), then this is not coded as ‘o’. (1 video)


If the mom is not in the camera view, code this with a ‘.’ as missing data.

How to Code

Set “JUMP-BACK-BY” key to 3 s.

Enable cell highlighting.

Watch in real time for the mom’s hand(s). As soon as you see the hand(s) touch an object (as defined above), continue watching for a couple of seconds to see if the mom moves/manipulates the object (which would make this an instance of Object). Then, hit #4-SHUTTLEBACK to get to the onset of the cell. The Onset is the first frame when the mom makes manual contact with the item and moves it through space. Set this onset by hitting ENTER to create a new cell with that onset time. Now, continue watching the Object bout in real time and set the Offset when the mom breaks manual contact or stops moving the object for at least 3 s (i.e., Object bouts that are separated by gray space are more than 3 s apart).

There is no minimum duration for object engagement during the ‘o’ bout to be coded as Object. In other words, the mom can engage with an item for as little or as much time as she likes; however, she must make manual contact and move the object through space for it to count.

Once you’ve determined that the bout has ended, set the offset by hitting #5-STOP and then #6-SHUTTLEFORWARD or #4-SHUTTLEBACK to the frame where the mom is no longer in manual contact with the item and/or no longer moving it. Then, hit #9-SETOFFSET.

Continue watching in real time for the next object bout. If the mom is walking or crawling with an object, watch at 2x speed.

Do not agonize. If the mom goes in and out of the camera view, but you know she is still holding the same object and has not put it down, code it in the same bout of ‘o’. Do not mark the “.” for every few seconds she is out of frame.

Code as Object event if mom’s back is to the camera, but you see her arms moving and she overtly appears to be manipulating something—even if you can’t see exactly what it is.

Many times, onsets and offsets are coded as the mom goes in and out of frame. In these instances, hit the 0 key to set a continuous cell, whose onset is 1 ms after the previous cell’s offset.



Emotion passes

Workflow

  1. Choose whether to code child or mom first
  2. Score each pass according to definitions in Emotion Codes.

Coding emotion

childemotion

<emotion_p-n>

General Orientation

This code captures the times that the child is clearly displaying positive or negative emotion through facial expressions. Times when the child is in a neutral emotion are not marked. Bouts of emotion are scored as events, where the grey spaces between emotion events mean the child is neutral or the emotion is unclear. Coders also mark times as “missing” when the child’s face could not possibly be coded for emotion (e.g., face completely turned away from camera, child’s head out of the video). When the child’s face is clearly not visible, negative emotion may be coded only if there is absolutely clear vocal affect (e.g., child is screaming and crying).

Coders are watching/tagging the duration of each positive or negative emotion event by marking onset/offset times. To determine emotion, coders are watching the child’s face, not vocal affect. To determine if emotion is not codeable/missing, coders are watching for when the face fully moves out of the camera view.

Value List

p = positive emotion

n = negative emotion

. = child’s face is completely not visible

Operational Definitions

<p>

Code ‘p’ when the child is clearly displaying positive emotion (e.g., smiling). Code based off of the face and not off of the voice. Look for raising of the corners of the mouth, grinning and showing the teeth, along with closing of the eyes because of the raised cheeks. If there is any doubt that the child is showing positive emotion, then do not begin the code.

Positive emotion cannot be coded based on the voice alone, so positive emotion cannot be scored when the child’s face is absolutely not visible (i.e., missing).

<n>

Code ‘n’ when the child is clearly displaying negative emotion (e.g., frowning, wincing). Code based off of the face and not off of the voice. Look for lowering of the corners of the mouth, stretching and tautness of the lips, along with closing of the eyes because of furrowed brow. If there is any doubt that the child is showing negative emotion, then do not begin the code.

Do not defer to the voice to code negative emotion when the face is visible. If the child’s face is clearly not visible (i.e., missing), then ‘n’ can be coded if the child is displaying clear negative emotion through the voice. The child could be screaming, crying, or yelling. If there is any doubt whether the voice is negative, then do not begin the code.

Onset

Onset of an emotion event is the first frame the child is clearly displaying positive/negative emotion through the face. The onset does not need to be completely frame accurate, since emotion could begin in any part of the face. The coder is looking to identify when any lay person would absolutely agree the child is showing positive/negative emotion based on the face.

There is no criterion for how long an emotion event should be. It is easy for the coder to mark the first frame when they see clear positive/negative emotion, even if it ends a few frames later. Events that are later deemed “too brief” could be removed via scripting.
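
Because brief events are kept at coding time and filtered afterward, the cleanup can be a one-off script. Below is a minimal sketch (assumed Datavyu Ruby API; the 250-ms threshold and the output column name are placeholders), equally applicable to momemotion.

# Sketch: drop emotion events shorter than a chosen threshold.
# Note: this filters every value, including '.'; adjust as needed.
require 'Datavyu_API.rb'

MIN_MS = 250
keep = getVariable("childemotion").cells.reject { |c| (c.offset - c.onset) < MIN_MS }

out = createVariable("childemotion_filtered", "emotion")
keep.each do |c|
  cell = out.make_new_cell
  cell.onset  = c.onset
  cell.offset = c.offset
  cell.change_code("emotion", c.emotion)
end
setVariable("childemotion_filtered", out)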

When coding ‘n’ from voice during missing time, set the onset when the negative voice starts and end the ‘n’ code when the voice ends. The onset/offset do not need to be frame accurate. For cases when an emotion code begins right out of missing: the face has not been visible, and in the first frame you can see the face again the infant is clearly displaying positive/negative emotion. Code the onset of the emotion code as the first frame when the face reappears. Use the “0” key to set the onset of the emotion event and simultaneously set the offset of ‘missing’. We want to preserve the 1-ms difference between ‘missing’ and the emotion code so we know that the ‘missing’ event was ended because of the onset of an emotion code.

Offset

Offset of a positive/negative emotion is the first frame the child is clearly back to a neutral emotion through the face. The offset does not need to be completely frame accurate, since emotion could end in any part of the face. The coder is looking to identify when any lay person would absolutely agree the child is no longer showing any positive/negative emotion based on the face.

If the child’s face returns to neutral for less than 5 frames during one emotion code (e.g., positive, then neutral for 4 frames, then back to positive), continue the ‘p’ or ‘n’ code. The coder would have to expend unneeded effort to identify and tag those offset and onset times, since reliability does not need to be frame accurate.

For cases when an emotion code is ended by missing: the emotion event may or may not have ended, but the coder can no longer see the face to code the offset. Code the offset as the first frame the face is completely not visible (see ‘missing’ code below). Use the “0” key to set the offset of the emotion and simultaneously code the onset of ‘missing’. We want to preserve the 1-ms difference between the emotion code and ‘missing’ so we know that the ‘missing’ event caused the offset of that emotion event.
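
Because of this 1-ms convention, a script can later recover which ‘missing’ events bounded an emotion event. A minimal sketch (assumed Datavyu Ruby API; not an official PLAY script):

# Sketch: flag 'missing' cells that were ended by an emotion onset 1 ms later.
require 'Datavyu_API.rb'

cells = getVariable("childemotion").cells.sort_by(&:onset)
cells.each_cons(2) do |a, b|
  if a.emotion == "." && (b.onset - a.offset) == 1
    puts "missing ended by '#{b.emotion}' onset at #{b.onset} ms"
  end
end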

<.>

Code ‘.’ for missing when the child’s face is completely not visible. The child’s full face could be turned away from the camera, the child’s head completely off camera, or the child out of frame entirely. If the coder can see the emotion the child is expressing from a side or oblique angle, then do not code missing.

If the face is not visible but the child is displaying clear negative emotion in the voice (e.g., child crying or screaming) then the missing code is ended and ‘n’ is coded (see ‘n’ code). Positive emotion cannot be coded by voice.

Onset is the first frame in which the coder clearly cannot see the face. Offset is the first frame in which the coder can clearly see the face again. The onset and offset do not need to be completely frame accurate, since reliability does not need to be exactly frame accurate.

If the child’s face was ‘missing’ and then reappeared for less than 5 frames, don’t stop the ‘.’ code to mark those few frames.

How to Code

Set “JUMP-BACK-BY” key to 1 s.

Play with #8-PLAY in real time (1x speed) until the child changes to clear positive or clear negative emotion or the face is clearly not visible. Focus on the child’s face and do not be distracted by what the child is saying or doing.

Pause with #5-STOP once you have identified a clear change in emotion or that the child’s face is no longer visible at all. Shuttle back with #4-SHUTTLEBACK at 1/8-1/4x speed to identify the onset. Use the mouth and eyes as the guide to onset. Press ENTER to set the onset as the frame where any lay person would say that child is happy or sad. The coder may even feel happy or sad watching the child’s face; use this as a guide for onset.

Then hit #8-PLAY, then #4-SHUTTLEBACK once, to watch at 1/2x speed and look for the offset of that emotion or when the face comes completely back into view. For missing, if it seems like there may be a long stretch of missing (e.g., the child has completely wandered out of the room), watch at 1x or 2x speed, but keep listening in case there is negative emotion in the voice. Pause when you identify the offset.

Hit #1-JOGBACK and #3-JOGFORWARD to tag the frame where the child’s face has clearly returned to neutral (no longer positive or negative), or where the face is visible again so emotion can be coded.

Then return to real time (1x speed) with #8-PLAY to watch for the next event.

momemotion

<emotion_p-n>

General Orientation

This code captures the times that the mom is clearly displaying positive or negative emotion through facial expressions. Times when the mom is in a neutral emotion are not marked. Bouts of emotion are scored as events, where the grey spaces between emotion events mean the mom is neutral or the emotion is unclear. Coders also mark times as “missing” when the mom’s face could not possibly be coded for emotion (e.g., face completely turned away from camera, mom’s head out of the video). When the mom’s face is clearly not visible, negative emotion may be coded only if there is absolutely clear vocal affect (e.g., mom is yelling).

Coders are watching/tagging the duration of each positive or negative emotion event by marking onset/offset times. To determine emotion, coders are watching the mom’s face, not vocal affect. To determine if emotion is not codeable/missing, coders are watching for when the face fully moves out of the camera view.

Value List

p = positive emotion

n = negative emotion

. = mom’s face is completely not visible

Operational Definitions

<p>

Code ‘p’ when the mom is clearly displaying positive emotion (e.g., smiling). Code based off of the face and not off of the voice. Look for raising of the corners of the mouth, grinning and showing the teeth, along with closing of the eyes because of the raised cheeks. If there is any doubt that the mom is showing positive emotion, then do not begin the code.

Positive emotion cannot be coded based on the voice alone, so positive emotion cannot be scored when the mom’s face is absolutely not visible (i.e., missing).

<n>

Code ‘n’ when the mom is clearly displaying negative emotion (e.g., frowning, wincing). Code based off of the face and not off of the voice. Look for lowering of the corners of the mouth, stretching and tautness of the lips, along with closing of the eyes because of furrowed brow. If there is any doubt that the mom is showing negative emotion, then do not begin the code.

Do not defer to the voice to code negative emotion when the face is visible. If the mom’s face is clearly not visible (i.e., missing), then ‘n’ can be coded if the mom is displaying clear negative emotion through her voice. The mom could be screaming or upset. If there is any doubt whether the voice is negative, then do not begin the code.

Onset

Onset of an emotion event is the first frame the mom is clearly displaying positive/negative emotion through the face. The onset does not need to be completely frame accurate, since emotion could begin in any part of the face. The coder is looking to identify when any lay person would absolutely agree the mom is showing positive/negative emotion based on the face.

There is no criterion for how long an emotion event should be. It is easy for the coder to mark the first frame when they see clear positive/negative emotion, even if it ends a few frames later. Events that are later deemed “too brief” could be removed via scripting.

When coding ‘n’ from voice during missing time, set the onset when the negative voice starts and end the ‘n’ code when the voice ends. The onset/offset do not need to be frame accurate. For cases when an emotion code begins right out of missing: the face has not been visible, and in the first frame you can see the face again the mom is clearly displaying positive/negative emotion. Code the onset of the emotion code as the first frame when the face reappears. Use the “0” key to set the onset of the emotion event and simultaneously set the offset of ‘missing’. We want to preserve the 1-ms difference between ‘missing’ and the emotion code so we know that the ‘missing’ event was ended because of the onset of an emotion code.

Offset

Offset of a positive/negative emotion is the first frame the mom is clearly back to a neutral emotion through the face. The offset does not need to be completely frame accurate, since emotion could end in any part of the face. The coder is looking to identify when any lay person would absolutely agree the mom is no longer showing any positive/negative emotion based on the face.

If the mom’s face returns to neutral for less than 5 frames during one emotion code (e.g., positive, then neutral for 4 frames, then back to positive), continue the ‘p’ or ‘n’ code. The coder would have to expend unneeded effort to identify and tag those offset and onset times, since reliability does not need to be frame accurate.

For cases when an emotion code is ended by missing: the emotion event may or may not have ended, but the coder can no longer see the face to code the offset. Code the offset as the first frame the face is completely not visible (see ‘missing’ code below). Use the “0” key to set the offset of the emotion and simultaneously code the onset of ‘missing’. We want to preserve the 1-ms difference between the emotion code and ‘missing’ so we know that the ‘missing’ event caused the offset of that emotion event.

<.>

Code ‘.’ for missing when the mom’s face is completely not visible. The mom’s full face could be turned away from the camera, her head completely off camera, or she may be out of frame entirely. If the coder can see the emotion the mom is expressing from a side or oblique angle, then do not code missing.

If the face is not visible but the mom is displaying clear negative emotion in the voice (e.g., mom yelling) then the missing code is ended and ‘n’ is coded (see ‘n’ code). Positive emotion cannot be coded by voice.

Onset is the first frame in which the coder clearly cannot see the face. Offset is the first frame in which the coder can clearly see the face again. The onset and offset do not need to be completely frame accurate, since reliability does not need to be exactly frame accurate.

If the mom’s face was ‘missing’ and then reappeared for less than 5 frames, don’t stop the ‘.’ code to mark those few frames.

How to Code

Set “JUMP-BACK-BY” key to 1 s.

Play with #8-PLAY in real time (1x speed) until the mom changes to clear positive or clear negative emotion or the face is clearly not visible. Focus on the mom’s face and do not be distracted by what the mom is saying or doing.

Pause with #5-STOP once you have identified a clear change in emotion or that the mom’s face is no longer visible at all. Shuttle back with #4-SHUTTLEBACK at 1/8-1/4x speed to identify the onset. Use the mouth and eyes as the guide to onset. Press ENTER to set the onset as the frame where any lay person would say that mom is happy or sad. The coder may even feel happy or sad watching the mom’s face; use this as a guide for onset.

Then hit #8-PLAY, then #4-SHUTTLEBACK once, to watch at 1/2x speed and look for the offset of that emotion or when the face comes completely back into view. For missing, if it seems like there may be a long stretch of missing (e.g., the mom is on a different side of the room from the child), watch at 1x or 2x speed, but keep listening in case there is negative emotion in the voice. Pause when you identify the offset.

Hit #1-JOGBACK and #3-JOGFORWARD to tag the frame where the mom’s face has clearly returned to neutral (no longer positive or negative), or where the face is visible again so emotion can be coded.

Then return to real time (1x speed) with #8-PLAY to watch for the next event.



Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.