Otherwise known as how to fool your brain into doing something it doesn’t really want to do because you’ve procrastinated far too long as it is and have a deadline coming up.
Also known as, although reading journal articles can be tedious it is very necessary and extremely informative…
In reality, this blog post almost didn’t come about as my mind just wasn’t into reading journals today (I KNOW! It’s almost sacrilege). I have been super busy (not an excuse) and I set aside today as the day to write the blog (it’s even on my calendar). Unfortunately, my brain did NOT want to read a dry, boring article (I swear some people think they get paid by the word!). But I persevered and finally found one that was 1) relevant to me, and 2) interesting.
So…without further ado I bring you…(hey that rhymes)….
Betz, S., Eickhoff, J., & Sullivan, S. (2013). Language, Speech, and Hearing Services in Schools, 44, 133–146.
Background: In most school districts, standardized tests (the bane of our existence sometimes) are required to determine eligibility for services, etc. There are seemingly endless numbers of assessments available, as evidenced by the vendors at various conferences. However, how SLPs decide which assessments to administer is unknown. The researchers wanted to determine whether the assessments chosen were selected based on quality, as measured by their psychometric properties, which would be best practice.
Question: Does the psychometric quality of a standardized assessment relate to how frequently the test is used to diagnose suspected Specific Language Impairment?
What they did: The researchers created a survey and emailed a link to it via the ASHA Division 1 (Language, Learning, and Education) list-serv. They also contacted the presidents of the 50 state speech-language-hearing associations, requesting that they send the link to their members. Finally, as the data came in, they filled in underrepresented areas by sending recruitment emails to school districts, hospitals, and private clinics. The survey took approximately 20 minutes to complete, and a total of 364 responding SLPs met the criteria for inclusion in the study.
In addition to the survey, the researchers compiled information on 55 standardized language tests. They looked at general test characteristics (purpose, publication year, administration time), reliability (test-retest reliability and SEM), criterion-related validity, sensitivity, specificity, and diagnostic accuracy.
When the survey results came in, the researchers compared the survey data against the compiled test data. For tests rated twice (once for expressive and once for receptive language), the higher frequency rating was used. The tests were then rank-ordered to determine which were used most often, and correlations were computed between frequency of test use and the psychometric test data.
Results: The article breaks down all the individual assessments and their frequency of use. Essentially, the CELF-4 was the most widely used test, with 67% of SLPs using it at least sometimes. In addition to the CELF-4, the most commonly used tests were the PLS-4, PPVT-4, EOWPVT-3, CASL, CELF-P2, ROWPVT-2, TOLD-P:4, OWLS, and EVT-2. These most commonly used tests are comprehensive language and/or vocabulary assessments.
Twelve tests were seldom used (by less than 2% of respondents), including the ACLC, DELV-S, NSST, PEST, Bus Story, CADeT, TROG, PSLT, MY, TEGI, and VCS. These least commonly used tests primarily assess morphology/syntax.
Unfortunately, not all of the test manuals included all the details needed for the correlations to be entirely accurate. For example, all 55 manuals reported a publication year (I hope so!), but only 22 reported sensitivity and specificity.
Of all the correlations examined, publication year was the only test characteristic significantly correlated with frequency of test use: more recently published tests were used more often.
Test reliability was not correlated with frequency of use. The researchers noted that SLPs are advised to use assessments with a test-retest reliability of at least .90 (Salvia, 2010). Of the 47 (out of 55) tests that reported test-retest reliability, 64% had values greater than .90, while 94% had values greater than .80. Since most of the tests in the study had acceptable test-retest reliability, the researchers determined reliability was not a factor.
Test validity was also not a factor. Only 22 (out of 55) tests reported sensitivity and specificity. Of those, only 13 had both measures higher than the accepted .80 standard. Of those 13, only two were in the top 10 most frequently used tests. Three of the tests were in the least commonly used tests. The researchers stated this further supported that test accuracy was not related to the frequency of test use.
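For anyone (like me) who hasn’t thought about sensitivity and specificity since grad school, here’s a quick refresher sketch of how those two numbers are calculated and checked against the .80 standard the article mentions. The counts below are made up for illustration; they are not from the Betz et al. study or any real test manual:

```python
# Refresher: sensitivity and specificity from a diagnostic 2x2 table.
# All numbers below are hypothetical, NOT from the study or any real test.

def sensitivity(true_pos, false_neg):
    """Proportion of children WITH a language impairment that the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of typically developing children that the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical validation sample: 50 impaired and 50 typically developing children
sens = sensitivity(true_pos=43, false_neg=7)   # 43 of 50 impaired kids flagged
spec = specificity(true_neg=41, false_pos=9)   # 41 of 50 typical kids cleared

# The .80 standard: a test is considered diagnostically acceptable
# only if BOTH measures meet or exceed .80
meets_standard = sens >= 0.80 and spec >= 0.80
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, acceptable={meets_standard}")
```

Notice that the two numbers can pull in opposite directions: a test that flags nearly everyone will have great sensitivity but terrible specificity, which is why the standard requires both to clear .80.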
The study found that SLPs tend to use only a small number of tests on a regular basis. Comprehensive tests may be most common because they allow SLPs to identify weaker areas. The researchers concluded that the majority of SLPs may not consider the “technical section” of the manuals when determining which assessment to give.
Discussion: Personally, I think the one question lacking in the research is which tests are actually available to the school-based SLP. I remember that at my university clinic we had seemingly hundreds of assessments just waiting to be used. We would research them, figure out which one would be best for the particular client we were assessing, and away we went. In the real world, however, the school-based SLP does NOT have an unlimited number of assessments available. For myself, I have roughly 8 assessments housed in my school. I have roughly 5 more that I can “borrow” from other nearby schools within my co-operative, which usually entails a 3–5 day “wait” to get to me IF the other SLP isn’t already using it.
The second factor that was missing is the sheer amount of testing that must be done, the time to do it, and the eligibility requirements put forth by the department of education. For instance, I need to carefully pick the test that will give me the most for my time. If I suspect a specific area is weak, I may be able to supplement the comprehensive assessment. But I have to give the comprehensive assessment, because that’s required by the Dept. of Ed.
This research frustrated me just a bit, simply because I came away feeling like it was doing a disservice to school-based SLPs. While I freely acknowledge (and agree) that we should be doing more evidence-based assessment — and we do have to take into consideration all the validity measures, etc., and think about whether the test truly assesses what we want assessed — we also have to make do with what we have on hand. At a few hundred dollars per assessment, most school districts are reluctant to buy additional testing that isn’t necessarily “required.”
Even though this article was frustrating…it was also a bit of an eye opener. I’ll admit I haven’t looked at specificity and sensitivity since grad school (mainly b/c I have the tests I have and that’s all she wrote)…but also because I had forgotten how important it was. I’ll have to do better…but, the flip side is – my most commonly used test is the CELF-4 (well CELF-5 now – and it was rated high).
So…what are your thoughts? Drop me a note…
Until then…Adventure on!