Recently I've been on a flipping mission to get people to flip learning, and I've talked about the flipped LMS in detail. What that really means is concentrating on the LMS as a tool for assessing learning (rather than holding all the knowledge and learning). It stands to reason that if you're going to achieve that, your assessments need to be well designed. The good news is that your learning system is pretty much made for this - tracking things is what it does best, after all.
If you're going to properly assess learning, it really does help to have a good understanding of learning levels. There are certainly a few models out there, but Bloom's Taxonomy (or classification of learning) gives us a simple starting point. The thing to notice here is that knowledge sits at the lowest level. In simple terms, that means if we set the regurgitation of knowledge as our assessments, we're limiting what our assessment actually tests to the lowest level. Think about tests you've done where the only outcome is to recall basic facts. The big issue is that this doesn't test anything beyond recall: do you know what that information really means, or how to apply it?
'Knowledge is power' is a misdirection in my opinion. It's not absolutely wrong, because knowledge is certainly a contributing factor (and the base of the power pyramid, perhaps), but just knowing 'stuff' probably isn't going to get the job done on its own. An electrician, for example, can hold all the technical knowledge, legislation and standards, but if they can't actually do the work required - stripping wire, operating the tools of the trade - they're entirely limited in their abilities. Then you'd have some 'power' issues! So knowing sits at the bottom, closely followed by comprehension or understanding. Again, it's great that you understand, but until you really apply your knowledge it's not going to hit the higher notes. Application is the real turning point in assessments - it shows the ability to take what's been learned and start to demonstrate an improvement in abilities. After all, that's what we're really after in learning solutions, isn't it? At some level we're capability building, and that relies on at least some level of application.
From here there are a number of higher levels of learning - from being able to analyse more complex issues through to synthesis and evaluation. The good news is that these fit very neatly into the types of assessment that we can put together and track in our LMS. First up, let's think about scenarios, as they hold the key to lots of good assessments. I wrote a piece on scenarios many moons ago and most of that still holds true. By setting a decent scenario you can certainly get learners to analyse information, and by taking it further they can evaluate the effectiveness of their choices too. There are plenty of tools out there to achieve this, like the Storyline series and Captivate - but even using built-in LMS tools, like the excellent Quiz tool in Moodle and Totara, you can create engaging scenarios. One thing to realise is that a multi-choice test doesn't have to mean a simple knowledge check. If you write your scenarios well, there's nothing wrong with presenting the options as choices - you can then get learners to analyse the situation and select an appropriate course of action. I love getting them to reflect on and evaluate their choices later too.
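To make that concrete, here's a minimal sketch of what a scenario-style multi-choice question could look like in Moodle's GIFT import format (the electrical-safety scenario itself is a made-up example for illustration, not from any real course):

```
// A hypothetical scenario question in Moodle GIFT format.
// "=" marks the correct option, "~" marks distractors,
// and "#" attaches feedback to each answer.
::Site scenario:: You arrive on site and find an extension lead
running through a puddle to a running power tool.
What do you do first? {
    =Isolate the supply at the board before touching anything
        #Right - make the situation safe before investigating.
    ~Unplug the lead at the tool
        #Think about the path the current could take - is that safe?
    ~Carry on and report it at the end of the day
        #What risk are you leaving in place in the meantime?
}
```

Because the distractors are plausible courses of action rather than trivia, the question asks learners to analyse the situation instead of simply recalling a fact - and the per-answer feedback builds in that reflection step.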
Another good type of assessment to use is the synthesis path. For an LMS this is harder to fit into a standard quiz-type assessment, but most learning systems will allow you to add file uploads for assessment. I've seen this used most effectively when asking learners to develop a plan or course of action. One good example you may have seen is the 'Fire Plan', where you work through a fire plan with your children - escape routes and so on. It's a good example of synthesis in that you put together the plan based on your learning, pulling in all the important 'stuff' that's required. The disadvantage is that this needs manual grading, but sometimes a meaningful assessment demands it. Of course, you can flip this around and use a forum-type assessment where other learners evaluate each other's plans. You're then hitting all the high notes, with both synthesis and evaluation on display. I've also seen this work really well using groups and a wiki-type approach, where the group puts together a plan and refines it before submitting. Again, using social learning techniques, peer review is another valuable tool (and puts less emphasis on the trainer).
Another thing it's easy to forget is that learning doesn't have to be demonstrated through an exam, quiz or assignment-type activity. One of the best ways to evaluate the effectiveness of training is through on-the-job assessment. Use your learning system to track how someone is performing on the job by having assessments completed by a supervisor. I've seen this work really well where the assessments were completed on a tablet, feeding straight into the LMS.
Finally, assessments are a really important part of the learning process - it's important to recognise that assessments aren't just knowledge checks and that they can actually be part of the learning itself. Those of you who have read some of my older posts will know that I'm a big fan of pull-type learning, and the assessment can be used to drive the learning rather than just measure it...