The double-edged sword of competency assessment

Educators face an uphill battle when embracing 21st-century teaching models. Whatever technology a particular classroom offers, the instructor tasked with using those tools must be trained to apply them effectively, and must be equipped to create and adapt lesson plans that improve student learning.

How do educators avoid a reactive cycle, in which a training deficiency is discovered only when a new piece of technology or an emerging concept is encountered? Often, the answer is pre-assessment of technology literacy. By identifying the areas where an individual can best focus training time and energy, pre-assessment makes professional development more efficient and increases overall technical effectiveness.

This sounds reasonable enough until we consider the human element. People don't always want to be told where they need help, or that they don't adequately understand something. Technical assessment cuts both ways: it identifies training gaps and enables more effective learning, but it can also leave people feeling embarrassed or demeaned when initial results fall short of their expectations. The ideal approach is to prepare those taking an assessment for the tool's tremendous potential while avoiding feelings of inadequacy.

Any assessment worth taking will indicate a threshold of knowledge: a starting point from which improvement can be made and measured. A rubric that shows high initial knowledge for a group is essentially useless; it reveals no shortcomings, suggests no direction for training, and cannot indicate progress. By its very nature, a useful rubric must allow for maximum expected improvement in knowledge and competency. For this reason, initial average results must be low.

It is therefore paramount that individuals understand the purpose and intent of technical assessment. Administrators must explain that low initial scores are expected, not abnormal or a reason for embarrassment. The assessment itself must be presented as a facilitator of learning, not a judge of intelligence or competence. Most educators, being people who value education highly, will embrace this approach if given a proper understanding of its intent.

Atomic Learning has created a technical literacy assessment for teachers with the sole intention of making professional development more efficient. It is based directly on the ISTE NETS-T 2008 standards, so many individuals will achieve lower initial scores than they would like. As training is undertaken over the course of a school year, however, re-assessment scores often climb rapidly, producing a genuine feeling of accomplishment. Without the intentional difficulty built into the rubric, this would not be possible.

It's important to understand the goals of an assessment rubric before using it. Only then can learning commence with confidence and direction.

Atomic Learning subscribers who wish to take advantage of the 21st Century Skills Teacher Assessment should contact their technology administrator for access.
