Reading more of The Gamification of Learning and Education, I dug into a chapter about producing motivation and several theories on the subject. A major element is the difference between intrinsic and extrinsic motivation: motivation that comes from within versus motivation that comes from without. A basic example of the latter is doing something for an explicit reward; to misquote the book's example, "someone offers you $20 to wash their [cat], and while this is something you'd take no pleasure from, you want the $20." Curiosity, by contrast, is a profoundly intrinsic motivation.
From this section, I personally feel there is also a really important division between positive and negative motivation, meaning motivation to do something versus motivation not to do something. This is the basic carrot-and-stick dichotomy, and motivation, especially (in my opinion) intrinsic motivation, comes purely from the carrot side: positive motivations.
Then there were a series of models of intrinsic motivation; I'll overview them here (this is more for my reference than yours as the reader).
The first is the ARCS model, which stands for Attention, Relevance, Confidence, and Satisfaction. Its subcomponents include:
- Goal Orientation
- Motive Matching (making sure the instructor and learner are on the same page)
- Modeling Results of Learning (understanding what the takeaway will look like)
- Perceivable Value
- Positive Feedback
- Intrinsic Motivation
So, these are four broad categories related to producing and maintaining motivation, but I really want to take a second and look at the contents of the Confidence portion. My own working model of intrinsic motivation/satisfaction (built, in fact, from what constitutes my positive experiences with board games) consists of three major elements: Agency, Validation, and Engagement. The book tends to use the word autonomy in place of agency, but I like agency, as it has a bit more weight and nuance to it. Self-confidence is a huge thing. Having it makes things so much easier, and not having it (speaking from a lifetime of experience) makes things incredibly difficult.

When I play board games, it's extremely important for me to have agency, a sense that my actions produce results in a way where my decisions matter. I don't need complete control, but I need to feel that I can affect my environment, can effect change. Validation, unsurprisingly, overlaps with both the success and feedback pieces of the ARCS model's Confidence: the confirmation, or affirmation, that you are doing things correctly, or even at times, incorrectly. This ties into later ideas from my reading about microgoals, little goalposts that you constantly pass to trigger the reward centers of your brain. Finally, engagement is a vast concept, and in many ways ties into several other pieces of the ARCS model. Does the game hold my attention? Does the game make me feel invested? Does my success or failure matter to me? If I have to wait ten minutes between turns, and have very little interest in what is happening in those ten minutes, I'm not engaged. There's a section discussing the idea of flow later, and when you're doing things, having a well-established flow is so important.
Next is Malone's Theory of Intrinsically Motivating Instruction.
It consists of three main elements: Challenge, Fantasy, and Curiosity.
- Challenge
  - Uncertain outcomes
  - Using a tool should be easy; completing a task with it can be a challenge
  - Understanding how to use a toy can be a rewarding challenge
- Fantasy
  - Extrinsic fantasy: fantasizing about winning the game; a fantasy exterior to the game's function itself
  - Intrinsic fantasy: embedded in the game itself; this is almost always a far more effective means of producing intrinsic motivation
- Curiosity
  - Sensory: changes in stimuli
  - Cognitive: create a sense of incomplete, inconsistent, or nonparsimonious understanding to drive the desire to complete it
- Feedback should be surprising (random) and constructive
This model, I must say, is pretty intuitive, but I really, really like the part about Cognitive Curiosity. It exploits the human desire to collect, to complete, to finish things, in order to make people better understand something. You can produce motivation by creating a sense of incompleteness, and people LOVE to complete things. As someone of the age when Beanie Babies, Pokémon cards, comic books, and video games were must-haves, I have been stung too many times by the innate human need for completed collections to fall for such simple traps, but damn if they aren't VERY effective, and absolutely beautiful in this meta-application for learning.
Next is Lepper's Instructional Design Principles for Intrinsic Motivation; you'll note that this one has some overlap with Malone's, and this will be pertinent soon. It consists of four concepts, which I don't need to flesh out:
- Control (familiar?)
- Challenge (?!)
- Curiosity
- Contextualization
Now, if those seem to show a lot of similarity, then it's no surprise to encounter the next theory, which is beautifully named The Taxonomy of Intrinsic Motivation. This is the direct result of Lepper and Malone combining their ideas to create a new model, which is divided into two components:
- Individual motivations: Challenge, Curiosity, Control, and Fantasy
- Interpersonal motivations: Cooperation, Competition, and Recognition (having the effort you've undertaken be perceived and appreciated by others)
What I find rather interesting here is the addition of the interpersonal elements, two of which are mentioned earlier in the book in the discussion of what makes a game a game. This theory has nothing to do with games, but has stumbled into game territory regardless, with Cooperation and Competition (two major facets of gaming, the third, conflict, being a foundational component of the other two). Recognition then falls into a rather interesting place, as I am not entirely sure I can agree with it, at least at a personal level. Recognition seems to be an extrinsic goal or motivation, as its nature is very similar to the idea of an extrinsic fantasy. Winning is not in and of itself a reward; rather, having others see you win, and appreciate the labor required to win, is the motivation. While the water becomes rather murky, this seems to sit in a strange in-between state. Recognition, for me, falls into a spot I am wary of, in part due to my own issues with shame and pride, self-confidence, and an unfamiliarity with how to gracefully receive compliments.
The next section delves into some more robustly tested ideas, the first of which is Operant Conditioning.
Before the development of operant conditioning, there were classical experiments performed by Pavlov (yes, the dogs). He would produce a sound with a tuning fork (not a bell!) when the dogs were fed, and eventually he could produce the salivation response associated with being fed by sounding the tuning fork, even if food was not present.
In the classical operant conditioning experiments, Skinner (yes, the box) wanted to condition the animals to perform an action separate from their normal behavior. So the mice were put into boxes that contained buttons, and the buttons would produce food only when a corresponding noise was made. Soon the mice would only press the button when the sound was made, as they learned the button did nothing without that additional element. Skinner then proceeded to more complex experiments with rather interesting variations. What's important to note is that if a single press of the button (sans other prompts) produced food, the mice would be content until the button press ceased to produce food, at which point the mice would almost immediately cease to associate the button with the production of food. This is called the extinction of the behavior.
The first was the Variable Ratio Schedule experiment. In this, the number of times the button had to be pressed to produce food was variable. It would eventually produce food, but sometimes it had to be pressed three times, or ten times, or some other number. The result was rather simple: the mice knew the button produced food, but never knew how many times they'd have to press it, and as such, they would press it all the time, hoping for food. A human analogy would be a slot machine: it's going to pay off eventually, so you just keep pulling that arm, but who knows when.
The second experiment was the Fixed Ratio Schedule. In this, the number of times the button had to be pressed was fixed. So every three presses, food would come out. In this case, the behavior changed. The mice would wander about and wouldn't return to the button until they wanted food, at which point they would press it the requisite number of times (presumably just pressing repeatedly until food came out, rather than actually tracking the count, although that component has likely been tested as well with regard to the counting ability of mice). This is similar to any task that produces a reward. When you want the reward, you perform the task to completion as quickly as possible to save time and energy, but there is no point in starting the task until you desire the reward (not very forward thinking, admittedly).
Next was the Fixed Interval Schedule. The button would do nothing until a certain amount of time passed, at which point it would produce food. In this setup, the mice would loiter near the button, and would only try pressing it when they thought enough time had passed. As more time went by, the mice would press the button more frequently, eagerly awaiting the food they expected to be delivered. Interestingly, this is something I see a lot in existing games, particularly those with microtransactions, where the ability to perform certain actions is dependent on a fixed interval, and the ability to shorten or remove that interval can be procured with real-life money. I find this sort of use of games, exploiting human psychology for financial gain, rather disgusting, but that isn't really what this is a discussion of.
The final setup is the one that is most interesting, and is what most people think of when they think of the Skinner Box experiments: The Variable Interval Schedule. In this, the amount of time that must pass before a button press will produce food is random. Pressing the button will produce food... sometimes, but there is no direct correlation with either the number of presses or the amount of time passed. In this setup, the mice will press the button at fairly regular intervals. They may not be constantly pressing it, but they will check on it frequently; they never know when it will produce food, and the only way to find out is to check.
Now, with these results, the variable ratio and variable interval experiments seem to produce similar behavior, but there is a rather important difference. In the variable ratio experiment, the mice learn that pressing the button produces food. The action of pressing it some number of times generates food; they recognize that if they keep pressing the button, it will produce food, and so the more they press it, the more food they get. They know the action produces a reward, they simply can never tell how many times they must repeat it. If we think about games, often the nature of the objective (defeat enough enemies, collect enough points, etc.) is known, but the actual goal isn't. You know you have to defeat all the enemies, but you don't know how many there are; you have to collect enough points in an event, but you don't know how many are actually available. This variation helps maintain attention, if we revisit some of the earlier ideas. So while making the interval variable creates a scenario where you are simply waiting for a reward to become available, the variable ratio creates a sense of actual reward for your actions. This is in many ways intuitive, as the ratio rewards a behavior, while the interval merely makes a reward available after an amount of time (something you cannot affect) passes.
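To make the four schedules concrete, here is a toy sketch of each rule deciding whether a button press dispenses food. This is entirely my own illustration, not from the book; the class names, parameters, and the uniform redraw in the variable-interval case are all invented:

```python
import random

class FixedRatio:
    """Reward every n-th press: press the button n times, get food."""
    def __init__(self, n):
        self.n, self.presses = n, 0

    def press(self, seconds_since_reward):
        self.presses += 1
        if self.presses >= self.n:
            self.presses = 0
            return True
        return False

class VariableRatio:
    """Reward each press with probability 1/mean: the slot machine."""
    def __init__(self, mean):
        self.p = 1.0 / mean

    def press(self, seconds_since_reward):
        return random.random() < self.p

class FixedInterval:
    """The first press after `interval` seconds have elapsed is rewarded."""
    def __init__(self, interval):
        self.interval = interval

    def press(self, seconds_since_reward):
        return seconds_since_reward >= self.interval

class VariableInterval:
    """Like FixedInterval, but the wait is redrawn at random after each
    reward, so the only way to know whether food is ready is to check."""
    def __init__(self, mean):
        self.mean = mean
        self.wait = random.uniform(0, 2 * mean)

    def press(self, seconds_since_reward):
        if seconds_since_reward >= self.wait:
            self.wait = random.uniform(0, 2 * self.mean)
            return True
        return False
```

The difference discussed above falls out of the code: under `FixedRatio` the best strategy is a quick burst of presses only when you want food, while under `VariableRatio` every single press might pay off, which is why the mice (and the slot-machine player) never stop.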
The next set of ideas falls under Self-Determination Theory, the first subtheory being the Cognitive Evaluation Theory. This consists of three components, two of which fall under the defining statement "Enhancing a sense of autonomy and competence supports the development of intrinsic motivation". This again ties into earlier sentiments: self-confidence is a foundation for self-motivation. The third component is relatedness, referring to a sense of connection to other people, a degree of social engagement. Again, nothing terribly new, after what we've already seen.
Distributed Practice refers to the idea of reinforcing learning through multiple sessions. Whether it be multiple days of coursework focusing on a single topic or playing a game multiple times, the important thing is the gaps between individual sessions, which give you time to think about the activity outside of the activity. This is, in essence, a fancy way of describing the antithesis of cramming. The longer you spend on something, with gaps in between (ideally at least 24 hours), the better you'll be able to both retain the information and recall it, because by placing these gaps you're forcing yourself to develop the skill of pulling out information that has been stored in your head for lengths of time. Cramming can help you pass a test, but won't help you remember the information long term. A lesson I myself have barely applied up until now (but will be applying for comps by rotating through subjects from day to day).
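The day-to-day rotation I mention can be sketched as a trivial round-robin planner. This is a minimal sketch of my own; the subject names and numbers are placeholders, not anything the book prescribes:

```python
from datetime import date, timedelta
from itertools import cycle

def rotation_plan(subjects, start, days):
    """Assign one subject per day, round-robin, so each subject returns
    only after len(subjects) - 1 days away: the spacing is built in."""
    subj = cycle(subjects)
    return {start + timedelta(days=i): next(subj) for i in range(days)}

plan = rotation_plan(["stats", "theory", "genetics"], date(2024, 1, 1), 6)
# each subject recurs every third day, guaranteeing at least a 48-hour gap
```

The point is not the code but the constraint it encodes: no subject is ever seen on consecutive days, so every session forces retrieval of material at least two days old.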
Scaffolding is another fairly simple idea, one that overlaps somewhat with the idea of Learning Progressions: building difficulty and complexity over time. Whether through gradual increases in difficulty or discrete levels, the idea is to build a scaffold, then use it to expand further. Anyone who has played any modern "non-casual" game understands intuitively what scaffolding is.
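One common way games implement this is adaptive difficulty: keep the challenge one step ahead of demonstrated skill, and step back when the player falters. The staircase rule below is my own illustration of that idea, not something from the book:

```python
def next_difficulty(level, passed):
    """Climb one step after a success, retreat one after a failure,
    so the challenge tracks just above the player's current skill."""
    return level + 1 if passed else max(1, level - 1)

# a player who clears levels 1-3, stalls at 4, then recovers
history = [True, True, True, False, False, True]
level = 1
for passed in history:
    level = next_difficulty(level, passed)
```

Even this crude rule keeps the player near the edge of their ability, which is exactly the region where the later discussion of Flow says people want to be.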
Episodic Memory refers to our ability to remember visual cues, to see large patterns as opposed to memorizing individual details. In game design this is very important for the concept of seeing a game-state: being able to see the status of the game at a glance from the visual cues of the game itself. While the book uses multiple examples, including the classic one of chess masters being able to memorize thousands of board arrangements, my preferred example is the simpler one: while you may not be able to recall half of the things you learned in high school, can you remember what the rooms looked like? Where you sat? The arrangement of the desks? Where your friends were? It gets even easier if you recall a specific instance, the episodes of this concept's name. We as humans construct narratives, built from episodes, from events, and so this is yet another instance of taking something that is key to human experience and adapting it for learning, both within the context of game design and in the broader application to learning.
There are a few more theories listed in the book, several of which I found rather drab, almost not worth mentioning, but I will examine one more, the idea of Flow.
When I was little, I read like a demon. I would lie on a couch, with a book in my lap, and would read for hours on end. My mother would call me for dinner, and until she walked directly up to me and shouted in my ear, I heard nothing. This is, from my own personal experience, a classic example of Flow: when time loses all meaning and your body loses all focus on everything but the activity. There is nothing else, and it is everything. Flow is... the dream state, the envy of all tasks. It is described as the ideal state between anxiety and boredom, a sense of rich immersion and control that is at the same time engaging and fascinating. Flow is not something you can design for, but you can optimize conditions for it. Here's a basic list of things that contribute to and support the Flow state, some more or less obvious than others:
- Achievable Task
- Clear Goals
- Immediate Feedback
- Effortless Involvement
- Control Over Actions
- Concern for self disappears
- Loss of Sense of Time
Again we encounter some familiar ideas: feedback, control, structure, agency. Flow is the sort of endeavor where everything is intuitive. The book uses the term "grok", which conveys that sense of intuition and understanding, and while I don't much care for saying grok, it does the job quite nicely. For me in game design, the game should have a flow, if not induce Flow. I should understand the game, even if I do not have mastery of it. In learning, especially when you combine ideas like Scaffolding and Flow, you should see the underlying current of an idea long before you see where it leads: first understanding, then mastery.
The next chapters examine meta-analyses of the literature on gaming, and produce generally uninteresting, broad results. Another chapter examines what gamification can be useful for and how it can be used for addressing problem-solving (which is more interesting in its explanation of the usefulness of teaching problem-solving as opposed to rote facts), and a rather bizarre chapter covers different types of players. That chapter focuses on very specific types of games and the resultant factors they produce. Frankly, these typologies feel utterly out of touch, and somewhat patronizing in tone, in part because they carry a vague implication that any one type of game can be representative of gaming as a whole. There is a heavy focus on the types of psychological behavior in multiplayer (particularly mass multiplayer) gaming, with minimal focus on why people play and more on how, in a way that is somewhat nonfunctional. Essentially these all come down to overgeneralization of what games are (one theory essentially reduces games to those of strategy, chance, role play, and disorder, the latter of which almost explicitly lacks any actual examples of games).
But, this chapter ends with a wonderfully accurate observation on game design, namely that the game is defined by its interactions. You do not build a game off of a theme, or an aesthetic. You build it off of the interactions the game produces. You might build your mechanics and themes simultaneously, and they may inform each other, but deciding to make a game about goblins does not inform you or others of what the game is, but deciding to make a game using certain mechanics gives you a foundation to work from.