Intro
The components of instructional design are often referred to by ADDIE, which is an acronym, not a person who felt so important we had to write her name in all caps. ADDIE stands for analyze, design, develop, implement, and evaluate. Based on these components, several instructional design practitioners and researchers developed models to describe the instructional design process. ADDIE itself is not a model in that it doesn't give steps, guidelines, or instructions on HOW to analyze, design, develop, implement, and evaluate instruction. So, the next time you are at an instructional design party (aren't you the lucky one?), if someone refers to the ADDIE model, you can quickly correct them, thus sounding intelligent (it also helps if you throw in "thus").
Because the information can be somewhat involved for people new to instructional design, I will discuss the first two components--analyzing and designing--in this post. Aww, don't be disappointed; part 2 will come soon.
As a side note (and to avoid accusations of plagiarism), most of this information is found in Dick, Carey, and Carey's book The Systematic Design of Instruction, seventh edition. The opinions, however, are mine and do not represent the beliefs of Dick or either Carey.
Enough babble, let's get started!
ANALYZE
Analysis, by definition, means breaking something into its component parts. In the analysis phase, writers generally talk about two areas: learners and instruction. Since this is the instructional design process, it is generally assumed that you have done some analysis prior to this to make sure training is necessary. It is important to establish that there is a need (the technical definition of a need is a gap between what is desired and what exists) for training and that the need can be tied directly to an organization's goals. Sure, we want everyone to follow the same procedure for clocking in/out because it makes for a more streamlined process, but unless we can determine that a) not clocking in according to the process is caused by a lack of knowledge of the procedure and b) not following the procedure directly (and measurably) affects an organization's goal, we should not recommend training. We will talk more about needs analysis later. I bring this up because instructional designers are often accused of trying to force training as the answer to every organizational problem.
ANYWAY, are you still with me? Good. Back to the two types of analysis. First, analyzing learners means that we get a good idea of who will be receiving training and what they already know. We look at the conditions under which they will perform the skills we are going to teach them. We should also consider obstacles to implementing training (access to resources, motivation, culture, etc.). Second, we analyze the instruction that we are going to develop. We do this by identifying our instructional goals. What do we want learners to be able to do/know/believe by the time they finish the training?
The Dick and Carey model (I always wonder which Carey got left out of the model name) goes into detail about identifying subordinate skills and entry skills, but that is beyond the scope of this post.
DESIGN
Once we know what we want the learners to know, we have to do some designing. I think this component is when everything you have learned about learning theory comes into play. Based on what you know about the learners, what approach do you think will most effectively facilitate learning? What media are options for conveying the message? How much information can you chunk together at one time? The learners' prior knowledge and current level of performance should factor into how much information you throw at them at one time. How do you plan to manage cognitive load so that learners are engaged but not overwhelmed? What motivational strategies do you plan to use? I am assuming here that you are flexible in your approaches, not that you are rigidly stuck to one theory that works for everyone. But, if you are...
These "simple" decisions are best made here, in addition to deciding how you are going to evaluate their learning. In a sense, teaching to the test is what you want--as long as your assessment is in line with your objectives, and your assessment is a valid representation of how the skills will be used in the real world.
Whew! Ok, I think we have done enough for now. Next time, we will talk about development, implementation, and evaluation. Please leave any comments and subscribe if you dare!