Trina Hurdman

Thoughts on comparing PAT results to the “province”

The results of the 2016 Provincial Achievement Tests, or PATs, were publicly released on October 7th. You can find overviews of the provincial results as well as results broken down by school authority and by school on the Alberta Education website. These tests are written in Grades 6 and 9 in the areas of English Language Arts, Mathematics, Science, Social Studies and French Language Arts (for those in French Immersion or Francophone schools).

Schools and districts use the information, along with a multitude of other data, to inform their school development plans and three-year education plans. This data and the resulting plans are usually presented at school council meetings and school board meetings in November. I always tell school community members that if they attend only one school council meeting during the year, it should be the November one. At that meeting you will learn about the past year’s results, the strategies the school will use to improve student outcomes in the coming year, and how you can support those efforts. For further information on how school development plans are created in CBE schools, I recommend watching the presentation made to the Board on this subject at the October 11, 2016 board meeting.

When schools or districts present their PAT data, they will almost always compare their results to the provincial results. When the province provides schools and districts with their Accountability Pillar results, it includes the provincial results right beside the school or district data for comparison. This seems perfectly reasonable: many would argue that test results are relative and that comparisons to the province are a valid way to determine how well the students in a school or district are doing.

I agree that comparisons are needed, but I will now explain why I believe that comparing results to the “province” is misleading.

I first became suspicious of this comparison when I went to the websites of the largest school districts in the province and found that they were all claiming to be outperforming the province on a majority of the PAT results at the acceptable standard. The four largest districts alone account for 44% of the total provincial student population. How is that statistically possible?

First, a little background on how PAT results are presented. The province provides data on how many students are enrolled, how many actually wrote the test, how many passed the test (i.e., met the acceptable standard), and how many met the standard of excellence. (The province also provides data on the mean or average score of the students who wrote the exam, as well as the standard deviation, but this information is rarely disclosed or used in provincial reports.)

Obviously, if a student does not write the test, they have no chance of passing it, so some people incorrectly refer to these absent or exempted students as receiving a “mark of zero.” These students are not given any mark, but when the province determines the percentage of enrolled students who passed the test, these students are included in the calculation. Thus, you will see some school boards report their results based on the total number of students enrolled, while others may base them on the students who actually wrote the test. On the provincial Accountability Pillar, the results are based on the total number of students enrolled, as we should be monitoring the outcomes of all students, not just the subset who wrote. This has the added benefit of discouraging schools or districts from attempting to inflate their results by exempting large numbers of students.
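The effect of the denominator choice is easy to see with a small sketch. The numbers below are invented for illustration, not actual PAT figures:

```python
# Illustration (hypothetical numbers): how the choice of denominator
# changes a reported PAT pass rate.

enrolled = 100   # students enrolled in the grade
wrote = 90       # students who actually wrote the test
passed = 72      # students who met the acceptable standard

# The rate the Accountability Pillar uses: passed / enrolled
rate_of_enrolled = passed / enrolled * 100   # 72.0%

# The rate some boards report instead: passed / wrote
rate_of_writers = passed / wrote * 100       # 80.0%

print(f"Acceptable standard (of enrolled): {rate_of_enrolled:.1f}%")
print(f"Acceptable standard (of writers):  {rate_of_writers:.1f}%")
```

The same ten absent students turn a 72% result into an 80% result, which is why the choice of denominator matters so much when comparing reports.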

Now, back to why I believe provincial comparisons are misleading. As I mentioned above, the province releases provincial data as well as “school authority” data. The CBE is one of 175 school authorities. A “school authority” is a public, separate, or francophone school district, or a private or charter school. Isn’t that all of the students in Alberta? No. For example, in 2016 there were 47,606 students in Gr. 6 across the province, but if you add up all the Gr. 6 students enrolled in all of the school authorities, they come out to only 45,860, or 96% of the students in Alberta. However, the students in school authorities accounted for 99% of the students who actually wrote the exam. Again, you can’t pass the test if you don’t write the test.
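The coverage figure above is a quick check, using only the enrolment numbers quoted in this post:

```python
# Grade 6 enrolment figures quoted above (Alberta, 2016)
provincial_enrolled = 47_606   # all Gr. 6 students in the province
authority_enrolled = 45_860    # Gr. 6 students enrolled across all school authorities

share = authority_enrolled / provincial_enrolled * 100
print(f"School authorities cover {share:.1f}% of provincial Gr. 6 enrolment")
# → roughly 96%, matching the figure in the text
```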

You might be thinking that such small differences in percentages wouldn’t affect a provincial comparison in any meaningful way, but it is important to remember that a school or district only has to be 0.1% above the provincial result to be “outperforming” the province. The following are two charts of the Gr. 6 English Language Arts data. The first is the provincial data and the second is the cumulative data of all of the school authorities. Notice a difference?

[Chart: Gr. 6 English Language Arts results, province]

[Chart: Gr. 6 English Language Arts results, all school authorities]

First, as noted previously, the percentage of students actually writing the test is consistently and significantly higher in school authorities than in the province as a whole. The percentage of total enrolled students who pass the exam (i.e., meet the acceptable standard) is also consistently and significantly higher in school authorities than in the province as a whole. This explains why so many school districts can claim that they are outperforming the province. You will notice that in the measures that look only at students who wrote the exam (i.e., the last three columns), the results are much more closely aligned.
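The mechanism behind this can be sketched with a weighted average. All numbers here are invented for illustration; the point is only the direction of the effect:

```python
# Hypothetical sketch of why most authorities can "outperform the province"
# on an enrolment-based pass rate. All numbers are invented for illustration.

# Students inside school authorities: nearly all write, so a solid pass rate.
auth_enrolled = 45_000
auth_passed = 37_000          # pass rate of enrolled ≈ 82.2%

# Students outside any authority: few write, so very few can pass.
other_enrolled = 2_000
other_passed = 400            # pass rate of enrolled ≈ 20%

auth_rate = auth_passed / auth_enrolled * 100
province_rate = (auth_passed + other_passed) / (auth_enrolled + other_enrolled) * 100

print(f"All school authorities: {auth_rate:.1f}%")
print(f"Province as a whole:    {province_rate:.1f}%")

# The provincial rate is dragged down by students who belong to no authority,
# so an authority with merely average results still sits "above the province."
```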

Over the next couple of weeks, I will be writing a series of posts on each subject area, comparing the results of the four largest school boards in Alberta with both the provincial results and the all-school-authorities results. That should make it even clearer why I do not find a provincial comparison adequate in providing useful information.

2 thoughts on “Thoughts on comparing PAT results to the “province””

  1. Pingback: Notes on examining district PAT data – Trina Hurdman

  2. Val

    As a retired teacher, like you I am distressed by the results.
    1. Bring back Grade 3 PATs. An enormous amount of preparation went into having students know basic facts, and it was tested on the Grade 3 PAT. Students who didn’t go through Grade 3 PATs are at a real disadvantage with regard to the quick recall necessary to complete problem-solving in subsequent years.
    2. There is a mismatch between the optimum learning zone for a lot of students and the difficulty of the multi-step, more critical-thinking problems being taught. I saw this with math grade expectations in the USA. If it is not in the child’s developmental zone (some children cannot hold several steps in their heads until they are older), then the activity is a complete waste of time, no matter how enthusiastic the teacher or the testing agencies, or their desire to improve scores.
    The present set of Grade 6 math PAT problems is developmentally inappropriate compared to the standards of several years ago. Hence the poor performances. It is better to have students competent at a slightly lower level than to have students underperforming.

    I am sure the wonderful researchers in Alberta Ed who design these tests have excellent research to back their exams, but as a “very long in the tooth” teacher and a grandmother who helps her grandchildren with their homework, I can assure you that the expectations have changed. Some might say this is how it should be in this critical-thinking-focused age, but if the content is not connecting with the students, then we have to adjust.

    I suspect other students are finding this new math just too hard (hence the rise of Kumon math and other tutoring services) and getting turned off math altogether. For children to succeed, the activities have to be relevant, meaningful, and within their possible educational zone: not so easy or so impossibly hard that they are either bored or just can’t begin to connect.

    I do hope we go back to having students spend more time on basic computation skills, along with fewer of these complex critical-thinking activities. Both are necessary; the results show we just have the balance wrong. The results are showing that the level of difficulty is inappropriate for a significant portion of our students, who won’t learn anything if it is just too hard or if they haven’t been given time to master the concepts.
