Evaluation

from Jim's template


[Provide a **high-level description** of how this product should be evaluated. Use Kirkpatrick’s Levels of Evaluation to frame your evaluation plan where helpful. What should be measured? The main things to be measured fall into two subcategories. First, the learners' reaction:
 * 1) Did the learner (who would be a teacher) find enough helpful information in the presentation to feel it was worth their time?
 * 2) Does this presentation provide enough instruction to enable a teacher to prepare and present an engaging algebra lesson?
Second, what was learned:
 * 1) Is the information presented in a logical manner?
 * 2) Is all of the information presented easily understandable?
 * 3) What percentage of the material does the student remember when the course is completed?

What are the key evaluation questions that should be investigated? How should these questions be measured? They would be measured by a test of student knowledge, a written evaluation completed by the students, and ultimately by each student submitting a presentation to be evaluated. On what time frame should evaluation occur? It would be useful to have both an initial evaluation at the end of the course and a follow-up evaluation whenever learners log back in for added information. Consider when the evaluation should be done, and for how long data should be collected. Define the parameters necessary to give your product a **true and accurate** test of effectiveness.] The four levels of Kirkpatrick's evaluation model essentially measure:
 * **reaction of student - what they thought and felt about the training**
 * **learning - the resulting increase in knowledge or capability**
 * **behaviour - extent of behaviour and capability improvement and implementation/application**
 * **results - the effects on the business or environment resulting from the trainee's performance**
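The "what percentage of the material does the student remember" question above could be answered directly from end-of-course mastery-test results. A minimal sketch, assuming a simple per-objective pass/fail record (the objective names and scores are illustrative, not real course data):

```python
# Hypothetical sketch: estimating the percentage of material a student
# remembers at course completion, from per-objective mastery-test results.
def percent_remembered(results):
    """results maps each objective to True (mastered) or False (not)."""
    if not results:
        return 0.0
    mastered = sum(1 for passed in results.values() if passed)
    return 100.0 * mastered / len(results)

sample = {
    "solving linear equations": True,
    "graphing functions": True,
    "factoring polynomials": False,
    "word problems": True,
}
print(percent_remembered(sample))  # 75.0
```

The same number taken once at course completion and again at a later login would give a simple retention-over-time comparison.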

[Begin with a brief introduction that describes the over-arching evaluation intent.]

[Following an opening sentence or stem of some sort, list a handful of relevant evaluation questions. Again, tie to Kirkpatrick where helpful. How many questions should you have? The answer is: you should have enough questions to measure user reaction to your product that can be used to inform your future development efforts, followed by questions to measure both the immediate learning derived from the product and its application in the field.]

[How will you collect the necessary data? What instrument(s) will be required? Is it possible to use data from the product itself to complement any additional data you collect via evaluator-constructed instruments?]


|| **Evaluation Question** || **Information Required to Answer Question** || **Source of Information** || **Data Collection Strategy** ||
|| 1. Which objectives do learners master? || Results from each module’s mastery tests || Product management system || Extant data: product database of results (individual learner results) ||
|| 2. How does training get applied on the job? || … and so on… ||  ||  ||
|| 3. And so on… ||  ||  ||  ||
|| 4. ||  ||  ||  ||

[Use the above table to summarize your questions, sources and strategies.]
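The extant-data strategy in row 1 of the table (mining the product database of individual learner results) could be sketched as follows. The record layout — (learner, objective, passed) tuples — is an assumption for illustration, not the actual product's schema:

```python
# Hypothetical sketch of the extant-data strategy: summarising individual
# learner results to show, per objective, what fraction of learners mastered it.
from collections import defaultdict

def mastery_by_objective(records):
    """records: iterable of (learner, objective, passed) tuples."""
    totals = defaultdict(lambda: [0, 0])  # objective -> [mastered, attempted]
    for _learner, objective, passed in records:
        totals[objective][1] += 1
        if passed:
            totals[objective][0] += 1
    return {obj: mastered / attempted
            for obj, (mastered, attempted) in totals.items()}

records = [
    ("pat", "linear equations", True),
    ("sam", "linear equations", True),
    ("pat", "factoring", False),
    ("sam", "factoring", True),
]
print(mastery_by_objective(records))
# {'linear equations': 1.0, 'factoring': 0.5}
```

A summary like this answers evaluation question 1 without any new instrument, leaving the evaluator-constructed instruments (tests, written evaluations, submitted presentations) for the reaction and application questions.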