How has the test structure changed from the WISC–IV Integrated? | The WISC–V Integrated is organized into the five cognitive domains defined in WISC–V: Verbal Comprehension, Visual Spatial, Fluid Reasoning, Working Memory, and Processing Speed. The WISC–IV and WISC–IV Integrated subtests were organized into a four-index test framework: Verbal Comprehension Index, Perceptual Reasoning Index, Working Memory Index, and Processing Speed Index. The Perceptual Reasoning Index included subtest scores that measured visual spatial and fluid reasoning abilities. With the separation of the Perceptual Reasoning Index into the Visual Spatial and Fluid Reasoning index scores, greater interpretive clarity is possible. Two new index scores, the Multiple Choice Verbal Comprehension Index (MCVCI) and the Visual Working Memory Index (VWMI), are introduced in the WISC–V Integrated. Each index score is derived after administering and scoring two subtests. The MCVCI is derived from the Similarities Multiple Choice and Vocabulary Multiple Choice scaled scores. The VWMI is derived from the WISC–V Picture Span scaled score and the WISC–V Integrated Spatial Span scaled score. The MCVCI is useful in a number of situations. For example, if a child obtains a disproportionately low score on Vocabulary, it may be useful to know whether the low performance reflects limited word knowledge or expressive language difficulties. A separate test of receptive word knowledge could be administered in this situation. However, if the receptive word knowledge test is based on different item content and the relationship between the two tests is unknown, the comparison is of limited utility. With the WISC–V Integrated, the child’s performance on Vocabulary Multiple Choice and/or Picture Vocabulary Multiple Choice can be compared directly with his or her WISC–V Vocabulary subtest performance because of shared item content.
For instance, if the child’s performance is higher on Picture Vocabulary Multiple Choice than Vocabulary, poor retrieval or expressive skills may be interfering with the ability to express verbal concept formation skills. Likewise, a Vocabulary Multiple Choice score that is higher than Vocabulary suggests that poor retrieval or expressive language difficulties may be interfering with the ability to express verbal concept formation and word knowledge. Such information may be especially important for children who present with school difficulties and are suspected of having language-based learning impairments. The WISC–V Integrated VWMI facilitates expanded interpretation of working memory ability when used together with WISC–V measures of working memory. Both auditory and visual working memory subtests are included on the WISC–V Integrated because they provide somewhat different information, potentially expanding the construct coverage. The new auditory working memory subtest, Sentence Recall, is a complex span task that provides additional insight into working memory functioning as cognitive processing demands increase, complementing the WISC–V auditory working memory subtests. The visual working memory subtest, Spatial Span, is a visual-spatial measure that can be interpreted with Picture Span, a WISC–V visual working memory subtest. Together, Picture Span and Spatial Span provide increased construct coverage because they tap both visual and visual-spatial aspects of working memory. The VWMI can be contrasted with the WISC–V Working Memory Index and the WISC–V Auditory Working Memory Index, providing a more complete assessment of domain-specific working memory functioning. Taken together with the WISC–V Working Memory subtests, these diverse measures of working memory can assist with accommodation recommendations for children with working memory deficits in the auditory and/or visual realms. 
| What theory or models influenced the development of the WISC–V Integrated? | The process approach to test performance interpretation was the primary model that influenced the development of the WISC–V Integrated. A primary assumption of the process-oriented approach to assessment is that cognitive tests are multifactorial and that any one factor, or a combination of factors, may contribute to an individual’s performance on a task. For example, one child may struggle with a measure of expressive vocabulary because he or she lacks semantic knowledge, whereas another child may encounter problems on the same measure because he or she is unable to express or describe semantic knowledge. These two children may achieve a similar low score, but the underlying reasons for their performances are different. A process-oriented evaluation of these low scores attempts to identify the cognitive subprocesses contributing to the score (e.g., lack of word knowledge, difficulty with memory retrieval, and/or problems with verbal expression). To identify the cognitive subprocesses, tasks that break down the processes involved in a subtest may be used to help identify the specific weaknesses. For example, the vocabulary words may be presented in a format that reduces reliance on memory retrieval and the need for verbal expression (e.g., multiple-choice or pictorial format). In addition, neurodevelopmental research, contemporary working memory models and research, and issues of clinical utility were important factors in the development of the WISC–V Integrated. | Is the WISC–V Integrated quicker to administer than the WISC–IV Integrated? | During WISC–V Integrated development, substantial efforts were made to achieve the shortest testing time possible while offering greater construct coverage. In addition to shortening subtest instructions, the number of administered items is held to a minimum by reducing the overall number of items and modifying discontinue rules.
The discontinue rules for all retained subtests were substantially reduced. For example, the discontinue rule for Similarities Multiple Choice, which was 5 consecutive scores of 0 on the WISC–IV Integrated, is reduced to 3 consecutive scores of 0 on the WISC–V Integrated. For retained tasks, the overall number of items was reduced relative to the WISC–IV Integrated, and administration time was reduced by an average of one minute per subtest. The greatest time savings is on Arithmetic Process Approach, which most examinees complete in approximately six fewer minutes on the WISC–V Integrated, relative to the WISC–IV Integrated version of the subtest. This dramatic reduction in administration time is possible because only the items that are not awarded full credit on Arithmetic are readministered. On the WISC–IV Integrated, all Arithmetic items were readministered. | Is there information in the WISC–V Integrated Technical and Interpretive Manual about the proportions of children with various clinical conditions that were included in the normative sample? Are norms available that do not include children from these special groups? | As shown in Table 3.4 of the WISC–V Integrated Technical and Interpretive Manual, representative proportions of children from the special group studies were included in the normative sample. In addition to children with various clinical conditions, children with intellectual giftedness also were included to represent children with extremely high scores. The proportions of children from special group studies are low and reflect their presence in the U.S. population. For instance, 5% of children in the U.S. are diagnosed with ADHD, and 4.7% of children in the normative sample were diagnosed with ADHD. It is unlikely that the inclusion of such small proportions of children with disabilities in the normative sample will result in more children scoring within the normal range, so separate norms excluding children from special groups are not necessary. | How long do professionals have to transition from using the WISC–IV Integrated to using the WISC–V Integrated? | Publications such as the Standards for Educational and Psychological Testing provide guidance about the use of obsolete tests. Most practitioners move to the new edition within 8–12 months of its release. Consider your own practice situation and how critical the evaluations you conduct are when making the decision. For example, in cases where the older edition is used and an independent educational evaluation is requested, a school system or clinician may be at greater risk of having results called into question.
| Does the WISC–V Integrated support use of a pattern of strengths and weaknesses approach to learning disability evaluation? What are the scoring software requirements? | Yes. The WISC–V Integrated index scores (i.e., the Multiple Choice Verbal Comprehension Index and the Visual Working Memory Index) may be used with the WIAT–III and/or the KTEA–3 to conduct these evaluations. Using the WISC–V standard scores in tandem with the WISC–V Integrated index scores provides a more comprehensive selection of standard scores for these purposes. The pattern of strengths and weaknesses (PSW) analysis is calculated using the Q-global® web-based scoring and reporting platform; tables for calculating PSW by hand are not included in the WISC–V Technical and Interpretive Manual. The data are too complex to provide in a paper format; the scoring software must be used for this purpose. The Q-global scoring reports can be used to help evaluate a specific learning disability using this approach. The scoring software for the WISC–V Integrated on Q-global is planned for release in the fourth quarter of 2015. In order to obtain the PSW analysis using Q-global, you must have index scores from the WISC–V Integrated (and WISC–V standard scores, if desired) as well as KTEA–3 and/or WIAT–III results. You must manually enter the WISC–V Integrated index scores (and the WISC–V standard scores) when creating a KTEA–3 or a WIAT–III score report to conduct the PSW analysis. Refer to Chapter 6 of the WISC–V Integrated Technical and Interpretive Manual and the WISC–V Technical and Interpretive Manual for more information. | I have seen children get correct answers but just after the time limit has expired. These children had the correct answers but were just somewhat slower in responding. Are these children penalized due to their slow processing speed rather than their cognitive abilities on these higher-level cognitive reasoning tasks?
For any of the subtests, did the WISC–V Integrated standardization research compare the accuracy of answers versus just their time-based raw scores? | In early research phases of the project, data were collected with extended time limits. Analyses indicated that children who responded correctly after the time limit were of lower ability than children who responded within the time limit. There was little benefit to extending the time, as few children could answer correctly after the time limit expired. Data were not collected with extended time limits at standardization because that would have provided more exposure to the items, which could result in some additional procedural learning or practice that is not standard. Process observations to test the limits can be done at the end of all testing and described qualitatively in the report. Research shows that tasks such as Figure Weights and Arithmetic may be well within the capacity of a child with neurodevelopmental difficulties if extra time is allotted. Figure Weights Process Approach and Arithmetic Process Approach provide the opportunity to observe the effects of time on responses. Both subtests offer extended time for responses to items scored 0 points and those beyond the discontinue points. | I found a discrepancy between two scores that is rare, but I am unsure how to interpret it. Is there somewhere I can see specifics? | After identifying critical value and base rate information for comparisons in Table B.2 of the WISC–V Integrated Administration and Scoring Manual, you can find the interpretive hypotheses for every discrepancy that appears on the Record Form in Chapter 6 of the WISC–V Integrated Technical and Interpretive Manual. | Is color blindness a factor on the WISC–V Integrated? | Color blindness occurs in approximately 10% of the general population and more commonly in males.
We have made every effort to ensure that items on Pearson tests, including those on the WISC–V Integrated, are free of bias against these individuals. Reviews for color blindness involve a variety of procedures. Items are reviewed by experts in color blindness, as well as individuals with color blindness, during early stages of test development. Acetate overlays are used to give the test developers a visual representation of the stimuli as they appear to individuals with the various types of color blindness. Items are copied in greyscale to check their appearance for those with monochromatic color blindness. In addition, items are subjected to a simulation of color blindness to check item appearance with every type of color blindness and to ensure that the intensity and saturation of colors are not overly similar and do not suggest different responses.