Monday, May 11, 2009

Reinforcement

For the construction materials reinforcement, see Rebar.
For reinforcement learning in computer science, see Reinforcement learning.
In operant conditioning, reinforcement occurs when an event following a response causes an increase in the probability of that response occurring in the future. Response strength can be assessed by measures such as the frequency with which the response is made (for example, a pigeon may peck a key more times in the session), or the speed with which it is made (for example, a rat may run a maze faster). The environment change contingent upon the response is called a reinforcer.
Contents
1 Types of reinforcement
1.1 Primary reinforcers
1.2 Secondary reinforcers
1.3 Other reinforcement terms
2 Natural and artificial reinforcement
3 Schedules of reinforcement
3.1 Simple schedules
3.1.1 Effects of different types of simple schedules
3.2 Compound schedules
3.3 Superimposed schedules
3.4 Concurrent schedules
4 Shaping
5 Chaining
6 Criticisms
6.1 History of the terms
7 See also
8 Footnotes
9 References
10 External links

Types of reinforcement
B.F. Skinner, the researcher who articulated the major theoretical constructs of reinforcement and behaviorism, refused to specify causal origins of reinforcers. Skinner argued that reinforcers are defined by a change in response strength (that is, functionally rather than causally), and that which is a reinforcer to one person may not be to another. Accordingly, activities, foods or items which are generally considered pleasant or enjoyable may not necessarily be reinforcing; they can only be considered so if the behavior that immediately precedes the potential reinforcer increases in similar future situations. If a child receives a cookie when he or she asks for one, and the frequency of 'cookie-requesting behavior' increases, the cookie can be seen as reinforcing 'cookie-requesting behavior'. If however, cookie-requesting behavior does not increase, the cookie cannot be considered reinforcing. The sole criterion which can determine if an item, activity or food is reinforcing is the change in the probability of a behavior after the administration of a potential reinforcer. Other theories may focus on additional factors such as whether the person expected the strategy to work at some point, but a behavioral theory of reinforcement would focus specifically upon the probability of the behavior.
The study of reinforcement has produced an enormous body of reproducible experimental results. Reinforcement is the central concept and procedure in the experimental analysis of behavior and much of quantitative analysis of behavior.
Positive reinforcement is an increase in the future frequency of a behavior due to the addition of a stimulus immediately following a response. Giving (or adding) food to a dog contingent on its sitting is an example of positive reinforcement (if this results in an increase in the future behavior of the dog sitting).
Negative reinforcement is an increase in the future frequency of a behavior when the consequence is the removal of an aversive stimulus. Turning off (or removing) an annoying song when a child asks their parent is an example of negative reinforcement (if this results in an increase in asking behavior of the child in the future).
Avoidance conditioning is a form of negative reinforcement that occurs when a behavior prevents an aversive stimulus from starting or being applied.
Skinner noted that, although it may appear so, punishment is not simply the opposite of reinforcement; besides decreasing undesired behavior, it has other effects of its own.

                 decreases likelihood of behavior   increases likelihood of behavior
presented        positive punishment                positive reinforcement
taken away       negative punishment                negative reinforcement
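The fourfold contingency above is simply a two-key lookup: the kind of stimulus change and the observed effect on future behavior together name the procedure. This can be sketched in Python (an illustrative sketch only; the dictionary and function names are our own, not standard terminology):

```python
# Hypothetical sketch: the operant-conditioning 2x2 grid as a lookup table.
PROCEDURES = {
    # (stimulus change, effect on future behavior) -> procedure name
    ("presented",  "increases"): "positive reinforcement",
    ("presented",  "decreases"): "positive punishment",
    ("taken away", "increases"): "negative reinforcement",
    ("taken away", "decreases"): "negative punishment",
}

def classify(stimulus_change: str, behavior_effect: str) -> str:
    """Name the operant procedure for a consequence and its observed effect."""
    return PROCEDURES[(stimulus_change, behavior_effect)]
```

Note that classification depends on the observed effect on behavior, matching Skinner's functional definition: the same stimulus change counts as reinforcement only if the behavior actually increases.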
Distinguishing "positive" from "negative" can be difficult, and the necessity of the distinction is often debated[1]. For example, in a very warm room, a current of external air may serve as positive reinforcement (because it is pleasantly cool) or as negative reinforcement (because it removes uncomfortably hot air)[2]. Some reinforcement can be simultaneously positive and negative, as when a drug addict takes a drug both for the added euphoria and for the elimination of withdrawal symptoms. Many behavioral psychologists simply refer to reinforcement or punishment, without polarity, to cover all consequent environmental changes.

Primary reinforcers
A primary reinforcer, sometimes called an unconditioned reinforcer, is a stimulus that does not require pairing to function as a reinforcer and most likely obtained this function through evolution and its role in species' survival[3]. Examples of primary reinforcers include sleep, food, air, water, and sex. Some stimuli, such as certain drugs, may act as primary reinforcers by mimicking the effects of others. While primary reinforcers are fairly stable through life and across individuals, their reinforcing value varies due to multiple factors (e.g., genetics, experience). Thus, one person may prefer one type of food while another abhors it, or one person may eat lots of food while another eats very little. So even though food is a primary reinforcer for both individuals, its value as a reinforcer differs between them.
Often primary reinforcers shift their reinforcing value temporarily through satiation and deprivation. Food, for example, may cease to be effective as a reinforcer after a certain amount of it has been consumed (satiation). After a period during which it does not receive any of the primary reinforcer (deprivation), however, the primary reinforcer may once again regain its effectiveness in increasing response strength.

Secondary reinforcers
A secondary reinforcer, sometimes called a conditioned reinforcer, is a stimulus or situation that has acquired its function as a reinforcer after pairing with a stimulus which already functions as a reinforcer. This stimulus may be a primary reinforcer or another conditioned reinforcer (such as money). An example of a secondary reinforcer is the sound from a clicker, as used in clicker training: the sound has been paired with praise or treats, and subsequently the sound alone may function as a reinforcer. As with primary reinforcers, an organism can experience satiation and deprivation with secondary reinforcers.

Other reinforcement terms
A generalized reinforcer is a conditioned reinforcer that has obtained its reinforcing function by pairing with many other reinforcers (money, for example, is a generalized conditioned reinforcer).
In reinforcer sampling a potentially reinforcing but unfamiliar stimulus is presented to an organism without regard to any prior behavior. The stimulus may then later be used more effectively in reinforcement.
Socially mediated reinforcement (direct reinforcement) involves the delivery of reinforcement which requires the behavior of another organism.
Premack principle is a special case of reinforcement elaborated by David Premack, which states that a highly preferred activity can be used effectively as a reinforcer for a less preferred activity.
Reinforcement hierarchy is a list of actions, rank-ordering the most desirable to least desirable consequences that may serve as a reinforcer. A reinforcement hierarchy can be used to determine the relative frequency and desirability of different activities, and is often employed when applying the Premack principle.[citation needed]
Contingent outcomes are more likely to reinforce behavior than non-contingent responses. Contingent outcomes are those directly linked to a causal behavior, such as a light turning on being contingent on flipping a switch. Note that contingent outcomes are not necessary to demonstrate reinforcement, but perceived contingency may increase learning.
Contiguous stimuli are stimuli closely associated by time and space with specific behaviors. They reduce the amount of time needed to learn a behavior while increasing its resistance to extinction. Giving a dog a piece of food immediately after sitting is more contiguous with (and therefore more likely to reinforce) the behavior than a several minute delay in food delivery following the behavior.
Noncontingent reinforcement refers to response-independent delivery of stimuli identified as reinforcers for some behaviors of that organism. However, this typically entails time-based delivery of stimuli identified as maintaining aberrant behavior, which serves to decrease the rate of the target behavior[4]. As no measured behavior is identified as being strengthened, there is controversy surrounding the use of the term noncontingent "reinforcement".[5]

Natural and artificial reinforcement
In his 1967 paper, Arbitrary and Natural Reinforcement, Charles Ferster proposed that reinforcement can be classified into events which increase the frequency of an operant as a natural consequence of the behavior itself, and those which are presumed to affect frequency only through human mediation, such as in a token economy where subjects are "rewarded" for certain behavior with an arbitrary token of a negotiable value. In 1970, Baer and Wolf coined a name for the use of natural reinforcers: behavior traps.[6] A behavior trap is one in which only a simple response is necessary to enter, yet once entered, it produces lasting, general behavior change. Using a behavior trap increases a person's repertoire by exposing them to the naturally occurring reinforcement of that behavior. Behavior traps have four characteristics:
They are "baited" with virtually irresistible reinforcers that "lure" the student to the trap
Only a low-effort response already in the repertoire is necessary to enter the trap
Interrelated contingencies of reinforcement inside the trap motivate the person to acquire, extend, and maintain targeted academic/social skills[7]
They can remain effective for a long time because the person shows few, if any, satiation effects.
As can be seen from the above, artificial reinforcement is created to build or develop skills; for a skill to generalize, it is important that a behavior trap be introduced to 'capture' the skill and let naturally occurring reinforcement maintain or increase it. This behavior trap may simply be a social situation that will generally result from a specific behavior once it has met a certain criterion (for example, if you use edible reinforcers to train a person to smile and say hello when meeting people, then once that skill has been built up, the natural reinforcer of other people smiling back and having friendlier interactions will maintain the skill, and the edibles can be faded).[8]

Schedules of reinforcement
When an animal's surroundings are controlled, its behavior patterns after reinforcement become predictable, even for very complex behavior patterns. A schedule of reinforcement is the protocol for determining when responses or behaviors will be reinforced, ranging from continuous reinforcement, in which every response is reinforced, to extinction, in which no response is reinforced. Between these extremes lies intermittent or partial reinforcement, where only some responses are reinforced.
Specific variations of intermittent reinforcement reliably induce specific patterns of response, irrespective of the species being investigated (including humans in some conditions). The orderliness and predictability of behaviour under schedules of reinforcement was evidence for B. F. Skinner's claim that using operant conditioning he could obtain "control over behaviour", in a way that rendered the theoretical disputes of contemporary comparative psychology obsolete. The reliability of schedule control supported the idea that a radical behaviourist experimental analysis of behavior could be the foundation for a psychology that did not refer to mental or cognitive processes. The reliability of schedules also led to the development of Applied Behavior Analysis as a means of controlling or altering behavior.
Many of the simpler possibilities, and some of the more complex ones, were investigated at great length by Skinner using pigeons, but new schedules continue to be defined and investigated.

Simple schedules

A chart demonstrating the different response rates of the four simple schedules of reinforcement; each hatch mark designates a reinforcer being given
Simple schedules have a single rule to determine when a single type of reinforcer is delivered for a specific response.
Fixed ratio (FR) schedules deliver reinforcement after every nth response
Example: FR2 = every second response is reinforced
Lab example: FR5 = rat reinforced with food after every 5 bar-presses in a Skinner box.
Real-world example: FR10 = used car dealer gets a $1000 bonus for every 10 cars sold on the lot.
Continuous reinforcement (CRF) schedules are a special form of fixed ratio (FR1). In a continuous reinforcement schedule, reinforcement follows each and every response.
Lab example: each time a rat presses a bar it gets a pellet of food
Real world example: each time a dog defecates outside its owner gives it a treat
Fixed interval (FI) schedules deliver reinforcement for the first response after a fixed length of time since the last reinforcement, while premature responses are not reinforced.
Example: FI1" = reinforcement provided for the first response after 1 second
Lab example: FI15" = rat is reinforced for the first bar press after 15 seconds passes since the last reinforcement
Real world example: FI24 hour = calling a radio station is reinforced with a chance to win a prize, but the person can only sign up once per day
Variable ratio (VR) schedules deliver reinforcement after a random number of responses (based upon a predetermined average)
Example: VR3 = on average, every third response is reinforced
Lab example: VR10 = on average, a rat is reinforced once every 10 bar presses
Real world example: VR37 = a roulette player betting on single numbers wins on average once every 37 spins on a European wheel (on a U.S. roulette wheel, with 38 pockets, this would be VR38)
Variable interval (VI) schedules deliver reinforcement for the first response after a random average length of time passes since the last reinforcement
Example: VI3" = reinforcement is provided for the first response after an average of 3 seconds since the last reinforcement.
Lab example: VI10" = a rat is reinforced for the first bar press after an average of 10 seconds passes since the last reinforcement
Real world example: a predator can expect to come across prey on a variable interval schedule
Other simple schedules include:
Differential reinforcement of incompatible behavior (DRI) is used to reduce a frequent behavior without punishing it by reinforcing an incompatible response. An example would be reinforcing clapping to reduce nose picking.
Differential reinforcement of other behavior (DRO) is used to reduce a frequent behavior by reinforcing any behavior other than the undesired one. An example would be reinforcing any hand action other than nose picking.
Differential reinforcement of low response rate (DRL) is used to encourage low rates of responding. It is like an interval schedule, except that premature responses reset the time required between behaviors.
Lab example: DRL10" = a rat is reinforced for the first response after 10 seconds, but if the rat responds earlier than 10 seconds there is no reinforcement and the rat has to wait 10 seconds from that premature response without another response before bar pressing will lead to reinforcement.
Real world example: "If you ask me for a potato chip no more than once every 10 minutes, I will give it to you. If you ask more often, I will give you none."
Differential reinforcement of high rate (DRH) is used to increase high rates of responding. It is like an interval schedule, except that a minimum number of responses are required in the interval in order to receive reinforcement.
Lab example: DRH10"/15 responses = a rat must press a bar 15 times within a 10 second increment in order to be reinforced
Real world example: "If Lance Armstrong is going to win the Tour de France he has to pedal x number of times during the y hour race."
Fixed Time (FT) provides reinforcement at a fixed time since the last reinforcement, irrespective of whether the subject has responded or not. In other words, it is a non-contingent schedule.
Lab example: FT5": rat gets food every 5" regardless of the behavior.
Real world example: a person gets an annuity check every month regardless of behavior between checks
Variable Time (VT) provides reinforcement at an average variable time since last reinforcement, regardless of whether the subject has responded or not.
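Because each simple schedule is just a decision rule for whether a given response is reinforced, the main ones can be sketched as small Python closures. This is an illustrative sketch only: the function names, the injected clock, and the choice of a uniform distribution for the VR target are our own conventions, not part of the operant-conditioning literature.

```python
import random

def fixed_ratio(n):
    """FR n: reinforce every nth response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True
        return False
    return respond

def variable_ratio(mean, rng=None):
    """VR mean: reinforce after a random number of responses averaging `mean`.
    (Uniform over 1..2*mean-1 is one simple way to get that average.)"""
    rng = rng or random.Random(0)
    target, count = rng.randint(1, 2 * mean - 1), 0
    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count, target = 0, rng.randint(1, 2 * mean - 1)
            return True
        return False
    return respond

def fixed_interval(seconds, clock):
    """FI t: reinforce the first response at least `seconds` after the last
    reinforcement; premature responses are simply not reinforced."""
    last = clock()
    def respond():
        nonlocal last
        now = clock()
        if now - last >= seconds:
            last = now
            return True
        return False
    return respond

def drl(seconds, clock):
    """DRL t: reinforce a response only if at least `seconds` have elapsed
    since the previous response; a premature response resets the timer."""
    last_response = clock()
    def respond():
        nonlocal last_response
        now = clock()
        reinforced = (now - last_response) >= seconds
        last_response = now
        return reinforced
    return respond
```

Each call to the returned function models one response and reports whether it was reinforced; passing in a clock function makes the interval-based schedules easy to test with simulated time. VI and FT/VT schedules follow the same pattern, varying only in how the next target time is chosen and whether a response is required at all.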

Effects of different types of simple schedules
Ratio schedules produce higher rates of responding than interval schedules, when the rates of reinforcement are otherwise similar.
Variable schedules produce higher rates and greater resistance to extinction than most fixed schedules. This is also known as the Partial Reinforcement Extinction Effect (PREE).
The variable ratio schedule produces both the highest rate of responding and the greatest resistance to extinction (an example would be the behavior of gamblers at slot machines).
Fixed schedules produce 'post-reinforcement pauses' (PRP), where responses will briefly cease immediately following reinforcement, though the pause is a function of the upcoming response requirement rather than the prior reinforcement.
The PRP of a fixed interval schedule is frequently followed by an accelerating rate of response which is "scallop shaped," while those of fixed ratio schedules are more angular.
Organisms whose schedules of reinforcement are 'thinned' (that is, requiring more responses or a greater wait before reinforcement) may experience 'ratio strain' if thinned too quickly. This produces behavior similar to that seen during extinction.
Partial reinforcement schedules are more resistant to extinction than continuous reinforcement schedules.
Ratio schedules are more resistant than interval schedules and variable schedules more resistant than fixed ones.

Compound schedules
Compound schedules combine two or more different simple schedules in some way using the same reinforcer for the same behaviour. There are many possibilities; among those most often used are:
Alternative schedules - A type of compound schedule where two or more simple schedules are in effect and whichever simple schedule is completed first results in reinforcement.[9]
Conjunctive schedules - A complex schedule of reinforcement where two or more simple schedules are in effect independently of each other and requirements on all of the simple schedules must be met for reinforcement.
Multiple schedules - either of two, or more, schedules may occur with a stimulus indicating which is in force.
Example: FR4 when given a whistle and FI 6 when given a bell ring.
Mixed schedules - either of two, or more, schedules may occur with no stimulus indicating which is in force.
Example: FI6 and then VR 3 without any stimulus warning of the change in schedule.
Concurrent schedules - two schedules are simultaneously in force though not necessarily on two different response devices, and reinforcement on those schedules is independent of each other.
Interlocking schedules - A single schedule with two components where progress in one component affects progress in the other. In an interlocking FR60-FI120, for example, each response subtracts time from the interval component, such that each response is "equal" to removing two seconds from the FI.
Chained schedules - reinforcement occurs after two or more successive schedules have been completed, with a stimulus indicating when one schedule has been completed and the next has started.
Example: FR10 under a green light; when completed, the light changes to yellow to indicate FR3; when that is completed, the light changes to red to indicate VI6, and so on. At the end of the chain, a reinforcer is given.
Tandem schedules - reinforcement occurs when two or more successive schedule requirements have been completed, with no stimulus indicating when a schedule has been completed and the next has started.
Example: VR10; after it is completed, the schedule changes without warning to FR10; after that it changes without warning to FR16, and so on. At the end of the series of schedules, a reinforcer is finally given.
Higher order schedules - completion of one schedule is reinforced according to a second schedule; e.g. in FR2 (FI 10 secs), two successive fixed interval schedules would have to be completed before a response is reinforced.
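A chained schedule can be sketched as a sequence of component schedules, each signaled by its own stimulus, where completing an earlier component only advances the stimulus and the terminal reinforcer arrives when the last component completes. This is an illustrative sketch only; FR components and the stimulus/schedule pairing are simplifications of our own.

```python
def fixed_ratio(n):
    """FR n component: completes after every nth response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True
        return False
    return respond

def chained(components):
    """Chained schedule: a list of (stimulus, schedule) pairs completed in
    order. Each response returns the stimulus currently in force and whether
    the terminal reinforcer was delivered on that response."""
    index = 0
    def respond():
        nonlocal index
        stimulus, schedule = components[index]
        if schedule():
            if index == len(components) - 1:
                index = 0
                return stimulus, True   # terminal reinforcer delivered
            index += 1                  # advance to the next component's stimulus
        return stimulus, False
    return respond
```

A tandem schedule would be the same machine with the stimulus omitted from the return value, and a mixed or multiple schedule would select among components by external signal rather than by completion order.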

Superimposed schedules
Superimposed schedules of reinforcement is a term in psychology which refers to a structure of rewards where two or more simple schedules of reinforcement operate simultaneously. The reinforcers can be positive and/or negative. An example would be a person who comes home after a long day at work: the behavior of opening the front door is rewarded by a big kiss on the lips from the person's spouse and a rip in the pants from the family dog jumping up enthusiastically. Another example of superimposed schedules of reinforcement would be a pigeon in an experimental cage pecking at a button, where the pecks deliver a hopper of grain every twentieth peck and access to water after every two hundred pecks.
Superimposed schedules of reinforcement are a type of compound schedule that evolved from the initial work on simple schedules of reinforcement by B. F. Skinner and his colleagues (Ferster and Skinner, 1957). They demonstrated that reinforcers could be delivered on schedules, and further that organisms behaved differently under different schedules. Rather than a reinforcer, such as food or water, being delivered every time as a consequence of some behavior, a reinforcer could be delivered after more than one instance of the behavior. For example, a pigeon may be required to peck a button switch ten times before food is made available. This is called a "ratio schedule." Also, a reinforcer could be delivered after an interval of time has passed following a target behavior. An example is a rat that is given a food pellet two minutes after it pressed a lever. This is called an "interval schedule." In addition, ratio schedules can deliver reinforcement following a fixed or variable number of behaviors by the individual organism. Likewise, interval schedules can deliver reinforcement following fixed or variable intervals of time following a single response by the organism. Individual behaviors tend to generate response rates that differ based upon how the reinforcement schedule is created. Much subsequent research in many labs examined the effects on behaviors of scheduling reinforcers. If an organism is offered the opportunity to choose between or among two or more simple schedules of reinforcement at the same time, the reinforcement structure is called a "concurrent schedule of reinforcement." Brechner (1974, 1977) introduced the concept of "superimposed schedules of reinforcement" in an attempt to create a laboratory analogy of social traps, such as when humans overharvest their fisheries or tear down their rainforests. Brechner created a situation where simple reinforcement schedules were superimposed upon each other.
In other words, a single response or group of responses by an organism led to multiple consequences. Concurrent schedules of reinforcement can be thought of as "or" schedules, and superimposed schedules of reinforcement can be thought of as "and" schedules. Brechner and Linder (1981) and Brechner (1987) expanded the concept to describe how superimposed schedules and the social trap analogy could be used to analyze the way energy flows through systems.
Superimposed schedules of reinforcement have many real-world applications in addition to generating social traps. Many different human individual and social situations can be created by superimposing simple reinforcement schedules. For example, a human being could have simultaneous tobacco and alcohol addictions. Even more complex situations can be created or simulated by superimposing two or more concurrent schedules. For example, a high school senior could have a choice between going to Stanford University or UCLA, and at the same time have the choice of going into the Army or the Air Force, and simultaneously the choice of taking a job with an internet company or a job with a software company. That would be a reinforcement structure of three superimposed concurrent schedules of reinforcement. Superimposed schedules of reinforcement can be used to create the three classic conflict situations (approach-approach conflict, approach-avoidance conflict, and avoidance-avoidance conflict) described by Kurt Lewin (1935) and can be used to operationalize other Lewinian situations analyzed by his force field analysis. Another example of the use of superimposed schedules of reinforcement as an analytical tool is its application to the contingencies of rent control (Brechner, 2003).
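The "and" character of superimposed schedules (a single response is checked against every component schedule at once, and each completing component delivers its own consequence) can be sketched as follows. This is illustrative only, with small FR components standing in for the grain and water schedules of the pigeon example.

```python
def fixed_ratio(n):
    """FR n component: completes after every nth response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True
        return False
    return respond

def superimposed(schedules):
    """Superimposed ('and') schedules: one response is evaluated by every
    component schedule simultaneously; the result lists which components
    delivered their consequence on that response."""
    def respond():
        return [s() for s in schedules]
    return respond
```

By contrast, a concurrent ("or") schedule would route each response to only one of the component schedules, chosen by the organism.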

Concurrent schedules
In operant conditioning, concurrent schedules of reinforcement are schedules of reinforcement that are simultaneously available to an animal subject or human participant, so that the subject or participant can respond on either schedule. For example, a pigeon in a Skinner box might be faced with two pecking keys; pecking responses can be made on either, and food reinforcement might follow a peck on either. The schedules of reinforcement arranged for pecks on the two keys can be different. They may be independent, or they may have some links between them so that behaviour on one key affects the likelihood of reinforcement on the other.
It is not necessary for the responses on the two schedules to be physically distinct: in an alternative way of arranging concurrent schedules, introduced by Findley in 1958, both schedules are arranged on a single key or other response device, and the subject or participant can respond on a second key in order to change over between the schedules. In such a "Findley concurrent" procedure, a stimulus (e.g. the colour of the main key) is used to signal which schedule is currently in effect.
Concurrent schedules often induce rapid alternation between the keys. To prevent this, a "changeover delay" is commonly introduced: each schedule is inactivated for a brief period after the subject switches to it.
When both the concurrent schedules are variable intervals, a quantitative relationship known as the matching law is found between relative response rates in the two schedules and the relative reinforcement rates they deliver; this was first observed by R. J. Herrnstein in 1961.
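Herrnstein's matching law states that the proportion of responses emitted on one of two concurrent VI schedules matches the proportion of reinforcement obtained from it: B1 / (B1 + B2) = r1 / (r1 + r2). A minimal sketch (the function names are our own):

```python
def matching_allocation(r1, r2):
    """Matching law: relative response rate on schedule 1 equals its
    relative reinforcement rate, B1/(B1+B2) = r1/(r1+r2)."""
    return r1 / (r1 + r2)

def predicted_responses(total, r1, r2):
    """Split a total response count between the two schedules by matching."""
    share = matching_allocation(r1, r2)
    return total * share, total * (1 - share)
```

For example, if the two VI schedules yield 40 and 20 reinforcers per hour, matching predicts two thirds of responses on the richer key.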

Shaping
Main article: Shaping (psychology)
Shaping involves reinforcing successive, increasingly accurate approximations of a response desired by a trainer. In training a rat to press a lever, for example, simply turning toward the lever will be reinforced at first. Then, only turning and stepping toward it will be reinforced. As training progresses, the response reinforced becomes progressively more like the desired behavior.

Chaining
Main article: Chaining
Chaining involves linking discrete behaviors together in a series, such that the result of each behavior is both the reinforcer (or consequence) for the previous behavior and the stimulus (or antecedent) for the next. There are many ways to teach chaining, such as forward chaining (starting from the first behavior in the chain), backward chaining (starting from the last behavior) and total task chaining (in which the entire behavior is taught from beginning to end, rather than as a series of steps). An example would be opening a locked door: first the key is inserted, then turned, then the door opened. Forward chaining would teach the subject first to insert the key. Once that task is mastered, they are told to insert the key and taught to turn it. Once that task is mastered, they are told to perform the first two steps, then taught to open the door. Backward chaining would involve the teacher first inserting and turning the key, with the subject taught to open the door. Once that is learned, the teacher inserts the key, and the subject is taught to turn it, then open the door. Finally, the subject is taught to insert the key, then turn it and open the door; once this first step is mastered, the entire task has been taught. Total task chaining would involve teaching the entire task as a single series, prompting through all steps. Prompts are faded (reduced) at each step as they are mastered.

Criticisms
The standard definition of behavioral reinforcement has been criticized as circular, since it appears to argue that response strength is increased by reinforcement while defining reinforcement as something which increases response strength; that is, the standard definition says only that response strength is increased by things which increase response strength. However, the correct usage[10] is that something is a reinforcer because of its effect on behavior, and not the other way around. The definition becomes circular only if one says that a particular stimulus strengthens behavior because it is a reinforcer; the term should not be used to explain why a stimulus produces that effect on behavior. Other definitions have been proposed, such as F. D. Sheffield's "consummatory behavior contingent on a response," but these are not broadly used in psychology.[11]

History of the terms
In the 1920s Russian physiologist Ivan Pavlov may have been the first to use the word reinforcement with respect to behavior, but (according to Dinsmoor) he used its approximate Russian cognate sparingly, and even then it referred to strengthening an already-learned but weakening response. He did not use it, as it is today, for selecting and strengthening new behavior. Pavlov's introduction of the word extinction (in Russian) approximates today's psychological use.
In popular use, positive reinforcement is often used as a synonym for reward, with people (not behavior) thus being "reinforced," but this is contrary to the term's consistent technical usage, as it is a dimension of behavior, and not the person, which is strengthened. Negative reinforcement is often used by laypeople and even social scientists outside psychology as a synonym for punishment. This is contrary to modern technical use, but it was B. F. Skinner who first used it this way in his 1938 book. By 1953, however, he followed others in employing the word punishment for such cases, and redefined negative reinforcement as the removal of aversive stimuli.
There are some within the field of behavior analysis[12] who have suggested that the terms "positive" and "negative" constitute an unnecessary distinction in discussing reinforcement as it is often unclear whether stimuli are being removed or presented. For example, Iwata[13] poses the question: “…is a change in temperature more accurately characterized by the presentation of cold (heat) or the removal of heat (cold)?” (p. 363). Thus, it may be best to conceptualize reinforcement simply as a pre-change condition being replaced by a post-change condition which reinforces the behavior which was followed by the change in stimulus conditions.

Motivation

Motivation is the set of reasons that move a person to engage in a particular behavior. The term is generally used for human motivation but, theoretically, it can also describe the causes of animal behavior. This article refers to human motivation. According to various theories, motivation may be rooted in the basic need to minimize physical pain and maximize pleasure; it may include specific needs such as eating and resting, or a desired object, hobby, goal, state of being, or ideal; or it may be attributed to less apparent reasons such as altruism, morality, or avoiding mortality.
Contents
1 Motivational concepts
1.1 The Incentive Theory of Motivation
1.2 Intrinsic and extrinsic motivation
1.2.1 Intrinsic motivation
1.2.2 Extrinsic motivation
1.3 Self-control
2 Motivational Theories
2.1 Drive Reduction Theories
2.1.1 Cognitive dissonance theory
2.2 Need Theories
2.2.1 Need Hierarchy Theory
2.2.2 Herzberg’s two-factor theory
2.2.3 Alderfer’s ERG theory
2.2.4 Self-determination theory
2.3 Broad Theories
2.4 Cognitive theories
2.4.1 Goal-setting theory
2.5 Models of Behavior Change
2.6 Unconscious motivation
2.7 Intrinsic motivation and the 16 basic desires theory
3 Controlling motivation
3.1 Early programming
3.2 Organization
3.3 Drugs
4 Applications
4.1 Education
4.2 Business
4.3 Online communities
5 See also
6 References
6.1 Further readings
7 External links

[edit] Motivational concepts

[edit] The Incentive Theory of Motivation
A reward, tangible or intangible, is presented after the occurrence of an action (i.e. a behavior) with the intent of causing the behavior to occur again. This is done by associating positive meaning with the behavior. Studies show that the effect is greater if the person receives the reward immediately, and that it decreases as the delay lengthens. A repeated action-reward combination can cause the action to become a habit. Motivation comes from two sources: oneself and other people. These correspond to intrinsic motivation, which comes from within, and extrinsic motivation, which comes from others.
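The claim that a reward's effect weakens as the delay between action and reward grows can be sketched with a simple exponential-discounting model. This is an illustrative toy, not an empirical model; the decay factor is an assumption.

```python
GAMMA = 0.8  # assumed per-second decay factor (illustrative, not empirical)

def reward_effect(reward: float, delay_seconds: float) -> float:
    """Effective reinforcing strength of a reward delivered after a delay."""
    return reward * (GAMMA ** delay_seconds)

# The same reward is less effective the longer it is delayed.
immediate = reward_effect(10.0, 0)  # delivered at once
delayed = reward_effect(10.0, 5)    # delivered 5 seconds later
assert immediate > delayed
```

Any monotonically decreasing function of delay would express the same qualitative point; the exponential form is chosen only for simplicity.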
Applying proper motivational techniques can be much harder than it seems. Steven Kerr notes that when creating a reward system, it can be easy to reward A, while hoping for B, and in the process, reap harmful effects that can jeopardize your goals.[1]
Rewards can also be organized as extrinsic or intrinsic. Extrinsic rewards are external to the person; for example, praise or money. Intrinsic rewards are internal to the person; for example, satisfaction or a feeling of accomplishment.
Some authors distinguish between two forms of intrinsic motivation: one based on enjoyment, the other on obligation. In this context, obligation refers to motivation based on what an individual thinks ought to be done. For instance, a feeling of responsibility for a mission may lead to helping others beyond what is easily observable, rewarded, or fun.
A reinforcer is different from a reward in that reinforcement is intended to create a measured increase in the rate of a desirable behavior following the addition of something to the environment.

[edit] Intrinsic and extrinsic motivation

[edit] Intrinsic motivation
Intrinsic motivation occurs when people engage in an activity, such as a hobby, without obvious external incentives. This form of motivation has been studied by social and educational psychologists since the early 1970s. Research has found that it is usually associated with high educational achievement and enjoyment by students. Intrinsic motivation has been explained by Fritz Heider's attribution theory, Bandura's work on self-efficacy [2], and Ryan and Deci's cognitive evaluation theory. Students are likely to be intrinsically motivated if they:
attribute their educational results to internal factors that they can control (e.g. the amount of effort they put in),
believe they can be effective agents in reaching desired goals (i.e. the results are not determined by luck),
are interested in mastering a topic, rather than just rote-learning to achieve good grades.
In knowledge-sharing communities and organizations (such as Wikipedia), people often cite altruistic reasons for their participation, including contributing to a common good, a moral obligation to the group, mentorship or 'giving back'. In work environments, money may provide a more powerful extrinsic factor than the intrinsic motivation provided by an enjoyable workplace.
In terms of sports, intrinsic motivation is the motivation that comes from inside the performer. That is, the athlete competes for the love of the sport.
See also the theory of 16 basic desires below.

[edit] Extrinsic motivation
Extrinsic motivation comes from outside of the performer. Money is the most obvious example, but coercion and threat of punishment are also common extrinsic motivations.
In sports, the crowd may cheer the performer on, and this motivates him or her to do well. Trophies are also extrinsic incentives. Competition is often extrinsic because it encourages the performer to win and beat others, not to enjoy the intrinsic rewards of the activity.
Social psychological research has indicated that extrinsic rewards can lead to overjustification and a subsequent reduction in intrinsic motivation.
Extrinsic incentives can sometimes weaken motivation as well. In one classic study by Lepper and Greene, children who were lavishly rewarded for drawing with felt-tip pens later showed little interest in playing with the pens again.

[edit] Self-control
See also: Self motivation
The self-control of motivation is increasingly understood as a subset of emotional intelligence; a person may be highly intelligent according to a more conservative definition (as measured by many intelligence tests), yet unmotivated to dedicate this intelligence to certain tasks. Yale School of Management professor Victor Vroom's "expectancy theory" provides an account of when people will decide whether to exert self-control to pursue a particular goal.
Drives and desires can be described as a deficiency or need that activates behaviour that is aimed at a goal or an incentive. These are thought to originate within the individual and may not require external stimuli to encourage the behaviour. Basic drives could be sparked by deficiencies such as hunger, which motivates a person to seek food; whereas more subtle drives might be the desire for praise and approval, which motivates a person to behave in a manner pleasing to others.
By contrast, the role of extrinsic rewards and stimuli can be seen in the example of training animals by giving them treats when they perform a trick correctly. The treat motivates the animals to perform the trick consistently, even later when the treat is removed from the process.

[edit] Motivational Theories

[edit] Drive Reduction Theories
Main article: Drive theory
There are a number of drive theories. The Drive Reduction Theory grows out of the concept that we have certain biological needs, such as hunger. As time passes the strength of the drive increases as it is not satisfied. Then as we satisfy that drive by fulfilling its desire, such as eating, the drive's strength is reduced. It is based on the theories of Freud and the idea of feedback control systems, such as a thermostat.
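The thermostat analogy above can be made concrete with a minimal feedback-control sketch: a drive (e.g. hunger) grows while unsatisfied, and the consummatory behavior (eating) reduces it once a threshold is crossed. The growth rate, threshold, and reduction amount are illustrative assumptions.

```python
def simulate(hours: int, growth: float = 1.0,
             threshold: float = 5.0, meal: float = 4.0) -> list:
    """Return the hunger drive at each hour; eat whenever drive >= threshold."""
    drive, history = 0.0, []
    for _ in range(hours):
        drive += growth                     # the need accumulates over time
        if drive >= threshold:              # drive triggers the behavior...
            drive = max(0.0, drive - meal)  # ...which reduces the drive
        history.append(drive)
    return history

trace = simulate(12)
assert max(trace) < threshold_bound if False else max(trace) <= 5.0  # loop keeps drive bounded
```

Like a thermostat, the loop never eliminates the drive permanently; it only keeps it within bounds, which is exactly the feedback-control idea the theory borrows.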
There are several problems, however, that leave the validity of Drive Reduction Theory open for debate. The first problem is that it does not explain how secondary reinforcers reduce drive. For example, money does not satisfy any biological or psychological need directly, yet it reduces drive on a regular basis through second-order conditioning (e.g., a paycheck). Secondly, if drive reduction theory held true, we would not be able to explain how a hungry human being can prepare a meal without eating the food before finishing cooking it.
However, when comparing this to a real-life situation such as preparing food, one does get hungrier as the food is being made (the drive increases), and after the food has been consumed the drive decreases. The only reason the food does not get eaten earlier is the human element of restraint, which has nothing to do with drive theory. Also, the food will either be better after it is cooked, or not edible at all before it is cooked.

[edit] Cognitive dissonance theory
Main article: Cognitive dissonance
Suggested by Leon Festinger, this occurs when an individual experiences some degree of discomfort resulting from an incompatibility between two cognitions. For example, a consumer may seek to reassure himself regarding a purchase, feeling, in retrospect, that another decision may have been preferable.
Another example of cognitive dissonance is when a belief and a behavior are in conflict. A person may wish to be healthy, believes smoking is bad for one's health, and yet continues to smoke.

[edit] Need Theories

[edit] Need Hierarchy Theory
Main article: Hierarchy of needs
Abraham Maslow's hierarchy of human needs theory is one of the most widely discussed theories of motivation.
The theory can be summarized as follows:
Human beings have wants and desires which influence their behavior. Only unsatisfied needs influence behavior, satisfied needs do not.
Since needs are many, they are arranged in order of importance, from the basic to the complex.
The person advances to the next level of needs only after the lower level need is at least minimally satisfied.
The further the progress up the hierarchy, the more individuality, humanness and psychological health a person will show.
The needs, listed from basic (lowest, earliest) to most complex (highest, latest) are as follows:
Physiological
Safety
Belongingness
Esteem
Self actualization
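The progression rule summarized above (a person advances only after the lower-level need is at least minimally satisfied) can be sketched as finding the lowest unsatisfied level in the hierarchy. The satisfaction scores and the 0.5 threshold are hypothetical illustrations, not part of Maslow's theory.

```python
HIERARCHY = ["physiological", "safety", "belongingness",
             "esteem", "self-actualization"]

def active_need(satisfaction: dict, minimum: float = 0.5) -> str:
    """Return the lowest need not yet minimally satisfied (hypothetical threshold)."""
    for need in HIERARCHY:
        if satisfaction.get(need, 0.0) < minimum:
            return need
    return "self-actualization"  # all lower needs are met

# A person whose basic needs are met but who lacks social connection
# is, under the rule above, motivated at the belongingness level.
person = {"physiological": 0.9, "safety": 0.8, "belongingness": 0.3}
assert active_need(person) == "belongingness"
```

Because only the first unmet level is returned, satisfied needs do not influence the result, matching the claim that "only unsatisfied needs influence behavior."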

[edit] Herzberg’s two-factor theory
Main article: Frederick Herzberg
Frederick Herzberg's two-factor theory, also known as the intrinsic/extrinsic motivation theory, concludes that certain factors in the workplace result in job satisfaction, while a separate set of factors, if absent, lead to dissatisfaction.
The factors that motivate people can change over their lifetime, but "respect for me as a person" is one of the top motivating factors at any stage of life.
He distinguished between:
Motivators; (e.g. challenging work, recognition, responsibility) which give positive satisfaction, and
Hygiene factors; (e.g. status, job security, salary and fringe benefits) that do not motivate if present, but, if absent, result in demotivation.
The name "hygiene factors" is used because, like hygiene, their presence will not make you healthier, but their absence can cause health to deteriorate.
The theory is sometimes called the "Motivator-Hygiene Theory."

[edit] Alderfer’s ERG theory
Main article: Clayton Alderfer
Clayton Alderfer, expanding on Maslow's hierarchy of needs, created the ERG theory (existence, relatedness and growth). Physiological and safety needs, the lower-order needs, are placed in the existence category, while love and esteem needs are placed in the relatedness category. The growth category contains the self-actualization and self-esteem needs.

[edit] Self-determination theory
Self-determination theory, developed by Edward Deci and Richard Ryan, focuses on the importance of intrinsic motivation in driving human behavior. Like Maslow's hierarchical theory and others that built on it, SDT posits a natural tendency toward growth and development. Unlike these other theories, however, SDT does not include any sort of "autopilot" for achievement, but instead requires active encouragement from the environment. The primary factors that encourage motivation and development are autonomy, competence feedback, and relatedness.[3]

[edit] Broad Theories
The latest approach in achievement motivation is an integrative perspective as outlined in the "Onion-Ring Model of Achievement Motivation" by Heinz Schuler, George C. Thornton III, Andreas Frintrup and Rose Mueller-Hanson. It is based on the premise that performance motivation results from the way broad components of personality are directed towards performance. As a result, it includes a range of dimensions that are relevant to success at work but which are not conventionally regarded as part of performance motivation. In particular, it integrates formerly separate approaches, such as Need for Achievement, with social motives such as Dominance. The Achievement Motivation Inventory (AMI; Schuler, Thornton, Frintrup & Mueller-Hanson, 2003) is based on this theory and assesses three factors (17 separate scales) relevant to vocational and professional success.

[edit] Cognitive theories

[edit] Goal-setting theory
Goal-setting theory is based on the notion that individuals sometimes have a drive to reach a clearly defined end state. Often, this end state is a reward in itself. A goal's efficiency is affected by three features: proximity, difficulty and specificity. An ideal goal presents a situation where the time between the initiation of behavior and the end state is short; this explains why some children are more motivated to learn how to ride a bike than to master algebra. A goal should be of moderate difficulty, neither too hard nor too easy to complete: at either extreme, most people are not optimally motivated, as many want a challenge (which presumes some uncertainty of success) while still feeling that there is a substantial probability of succeeding. Specificity concerns how clearly the goal is described: it should be objectively defined and intelligible to the individual. A classic example of a poorly specified goal is to get the highest possible grade; most children have no idea how much effort they need to reach that goal.[4]
Douglas Vermeeren has done extensive research into why many people fail to reach their goals, attributing the failure directly to motivating factors. Vermeeren states that unless individuals can clearly identify their motivating factor, the significant and meaningful reasons why they wish to attain the goal, they will never have the power to attain it.

[edit] Models of Behavior Change
Social-cognitive models of behavior change include the constructs of motivation and volition. Motivation is seen as a process that leads to the forming of behavioral intentions; volition is seen as a process that leads from intention to actual behavior. In other words, motivation and volition refer to goal setting and goal pursuit, respectively. Both processes require self-regulatory efforts, and several self-regulatory constructs need to operate in concert to attain goals. An example of such a motivational and volitional construct is perceived self-efficacy. Self-efficacy is thought to facilitate the forming of behavioral intentions, the development of action plans, and the initiation of action, and it can support the translation of intentions into action.
See also:
Health Action Process Approach
I-Change Model

[edit] Unconscious motivation
Some psychologists believe that a significant portion of human behavior is energized and directed by unconscious motives. According to Maslow, "Psychoanalysis has often demonstrated that the relationship between a conscious desire and the ultimate unconscious aim that underlies it need not be at all direct [5]." In other words, stated motives do not always match those inferred by skilled observers. For example, it is possible that a person can be accident-prone because he has an unconscious desire to hurt himself and not because he is careless or ignorant of the safety rules. Similarly, some overweight people are not hungry at all for food but for attention and love. Eating is merely a defensive reaction to lack of attention. Some workers damage more equipment than others do because they harbor unconscious feelings of aggression toward authority figures.
Psychotherapists point out that some behavior is so automatic that the reasons for it are not available in the individual's conscious mind. Compulsive cigarette smoking is an example. Sometimes maintaining self-esteem is so important and the motive for an activity is so threatening that it is simply not recognized and, in fact, may be disguised or repressed. Rationalization, or "explaining away", is one such disguise, or defense mechanism, as it is called. Another is projecting or attributing one's own faults to others. "I feel I am to blame", becomes "It is her fault; she is selfish". Repression of powerful but socially unacceptable motives may result in outward behavior that is the opposite of the repressed tendencies. An example of this would be the employee who hates his boss but overworks himself on the job to show that he holds him in high regard.
Unconscious motives add to the hazards of interpreting human behavior and, to the extent that they are present, complicate the life of the administrator. On the other hand, knowledge that unconscious motives exist can lead to a more careful assessment of behavioral problems. Although few contemporary psychologists deny the existence of unconscious factors, many do believe that these are activated only in times of anxiety and stress, and that in the ordinary course of events, human behavior — from the subject's point of view — is rationally purposeful.

[edit] Intrinsic motivation and the 16 basic desires theory
Starting from studies involving more than 6,000 people, Professor Steven Reiss has proposed a theory of 16 basic desires that guide nearly all human behavior.[6][7]
The desires are:
Acceptance, the need for approval
Curiosity, the need to think
Eating, the need for food
Family, the need to raise children
Honor, the need to be loyal to the traditional values of one's clan/ethnic group
Idealism, the need for social justice
Independence, the need for individuality
Order, the need for organized, stable, predictable environments
Physical Activity, the need for exercise
Power, the need for influence of will
Romance, the need for sex
Saving, the need to collect
Social Contact, the need for friends (peer relationships)
Status, the need for social standing/importance
Tranquility, the need to be safe
Vengeance, the need to strike back
In this model, people differ in the strength of these basic desires. The basic desires represent intrinsic desires that directly motivate a person's behaviour and are not aimed at indirectly satisfying other desires. People may also be motivated by non-basic desires, but these either do not relate to deep motivation or serve only as means of achieving the basic desires.

[edit] Controlling motivation
The control of motivation is only understood to a limited extent. There are many different approaches to motivation training, but many of these are considered pseudoscientific by critics. To understand how to control motivation, it is first necessary to understand why many people lack motivation.

[edit] Early programming
Modern imaging has provided solid empirical support for the psychological theory that emotional programming is largely defined in childhood. Harold Chugani, Medical Director of the PET Clinic at the Children's Hospital of Michigan and professor of pediatrics, neurology and radiology at Wayne State University School of Medicine, has found that children's brains are much more capable of consuming new information (linked to emotions) than those of adults. Brain activity in cortical regions is about twice as high in children as in adults from the third to the ninth year of life. After that period, it declines constantly to the low levels of adulthood. Brain volume, on the other hand, is already at about 95% of adult levels in the ninth year of life.

[edit] Organization
Besides the very direct approaches to motivation, beginning in early life, there are solutions which are more abstract but perhaps nevertheless more practical for self-motivation. Virtually every motivation guidebook includes at least one chapter about the proper organization of one's tasks and goals. It is usually suggested that it is critical to maintain a list of tasks, with a distinction between those which are completed and those which are not, thereby moving some of the required motivation for their completion from the tasks themselves into a "meta-task", namely the processing of the tasks in the task list, which can become a routine. The viewing of the list of completed tasks may also be considered motivating, as it can create a satisfying sense of accomplishment.
Most electronic to-do lists have this basic functionality, although the distinction between completed and non-completed tasks is not always clear (completed tasks are sometimes simply deleted, instead of kept in a separate list).
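The mechanism described above, keeping completed tasks visible in a separate list rather than deleting them, can be sketched in a few lines. The class and task names are illustrative, not drawn from any particular to-do application.

```python
class TaskList:
    """Minimal to-do list that preserves completed tasks instead of deleting them."""

    def __init__(self):
        self.pending = []    # tasks still requiring motivation
        self.completed = []  # visible record of accomplishment

    def add(self, task):
        self.pending.append(task)

    def complete(self, task):
        self.pending.remove(task)    # raises ValueError if the task is unknown
        self.completed.append(task)  # moved, not deleted

todo = TaskList()
todo.add("write report")
todo.add("file expenses")
todo.complete("write report")
assert todo.pending == ["file expenses"]
assert todo.completed == ["write report"]
```

The design choice of moving rather than deleting is the point: the growing `completed` list is what the text suggests can create a motivating sense of accomplishment.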
Other forms of information organization may also be motivational, such as the use of mind maps to organize one's ideas, and thereby "train" the neural network that is the human brain to focus on the given task. Simpler forms of idea notation such as simple bullet-point style lists may also be sufficient, or even more useful to less visually oriented persons.

[edit] Drugs
Some authors, especially in the transhumanist movement, have suggested the use of "smart drugs", also known as nootropics, as "motivation-enhancers". The effects of many of these drugs on the brain are emphatically not well understood, and their legal status often makes open experimentation difficult.
Converging neurobiological evidence also supports the idea that addictive drugs such as cocaine, nicotine, alcohol, and heroin act on brain systems underlying motivation for natural rewards, such as the mesolimbic dopamine system. Normally, these brain systems serve to guide us toward fitness-enhancing rewards (food, water, sex, etc.), but they can be co-opted by repeated use of drugs, causing addicts to excessively pursue drug rewards. Therefore, drugs can hijack brain systems underlying other motivations, causing the almost singular pursuit of drugs characteristic of addiction.

[edit] Applications

[edit] Education

Motivation is of particular interest to educational psychologists because of the crucial role it plays in student learning. However, the specific kind of motivation that is studied in the specialized setting of education differs qualitatively from the more general forms of motivation studied by psychologists in other fields.
Motivation in education can have several effects on how students learn and how they behave towards subject matter[8]. It can:
Direct behavior toward particular goals
Lead to increased effort and energy
Increase initiation of, and persistence in, activities
Enhance cognitive processing
Determine what consequences are reinforcing
Lead to improved performance.
Because students are not always internally motivated, they sometimes need situated motivation, which is found in environmental conditions that the teacher creates.
There are two kinds of motivation:
Intrinsic motivation occurs when people are internally motivated to do something because it either brings them pleasure, they think it is important, or they feel that what they are learning is significant.
Extrinsic motivation comes into play when a student is compelled to do something or act a certain way because of factors external to him or her (like money or good grades).
Note also that this dichotomy has already been questioned and expanded, for example by Self-Determination Theory.
Motivation has been found to be a pivotal area in treating Autism Spectrum Disorders, as in Pivotal Response Therapy.
Motivation is also an important element in the concept of Andragogy (what motivates the adult learner).

[edit] Business
At lower levels of Maslow's hierarchy of needs, such as physiological needs, money is a motivator; however, it tends to have a motivating effect on staff that lasts only for a short period (in accordance with Herzberg's two-factor model of motivation). At higher levels of the hierarchy, praise, respect, recognition, empowerment and a sense of belonging are far more powerful motivators than money, as both Abraham Maslow's theory of motivation and Douglas McGregor's Theory X and Theory Y (pertaining to the theory of leadership) demonstrate.
Maslow has money at the lowest level of the hierarchy and shows other needs are better motivators to staff. McGregor places money in his Theory X category and feels it is a poor motivator. Praise and recognition are placed in the Theory Y category and are considered stronger motivators than money.
Motivated employees always look for better ways to do a job.
Motivated employees are more quality oriented.
Motivated workers are more productive.
The average workplace is about midway between the extremes of high threat and high opportunity. Motivation by threat is a dead-end strategy, and naturally staff are more attracted to the opportunity side of the motivation curve than the threat side. Motivation is a powerful tool in the work environment that can lead to employees working at their most efficient levels of production. [9]
Nonetheless, Steinmetz also discusses three common character types of subordinates (ascendant, indifferent, and ambivalent) who all react and interact uniquely and must be treated, managed, and motivated accordingly. An effective leader must understand how to manage all characters; more importantly, the manager must utilize avenues that allow room for employees to work, grow, and find answers independently.[10]
The assumptions of Maslow and Herzberg were challenged by a classic study[11] at Vauxhall Motors' UK manufacturing plant. This introduced the concept of orientation to work and distinguished three main orientations: instrumental (where work is a means to an end), bureaucratic (where work is a source of status, security and immediate reward) and solidaristic (which prioritises group loyalty).
Other theories which expanded and extended those of Maslow and Herzberg included Kurt Lewin's Force Field Theory, Edwin Locke's Goal Theory and Victor Vroom's Expectancy theory. These tend to stress cultural differences and the fact that individuals tend to be motivated by different factors at different times.[12]
According to the system of scientific management developed by Frederick Winslow Taylor, a worker's motivation is solely determined by pay, and therefore management need not consider psychological or social aspects of work. In essence, scientific management bases human motivation wholly on extrinsic rewards and discards the idea of intrinsic rewards.
In contrast, David McClelland believed that workers could not be motivated by the mere need for money — in fact, extrinsic motivation (e.g., money) could extinguish intrinsic motivation such as achievement motivation, though money could be used as an indicator of success for various motives, e.g., keeping score. In keeping with this view, his consulting firm, McBer & Company, had as its first motto "To make everyone productive, happy, and free." For McClelland, satisfaction lay in aligning a person's life with their fundamental motivations.
Elton Mayo found that the social contacts a worker has at the workplace are very important and that boredom and repetitiveness of tasks lead to reduced motivation. Mayo believed that workers could be motivated by acknowledging their social needs and making them feel important. As a result, employees were given freedom to make decisions on the job, and greater attention was paid to informal work groups. Mayo named this the Hawthorne effect. His model has been judged as placing undue reliance on social contacts at work for motivating employees.[13]

[edit] Online communities
Motivation to participate and contribute represents one of the most important elements in the success of online communities (and virtual communities).
See more at: online participation

[edit] See also
Academy of Management
Addiction
Amotivational syndrome
Aptitude
Behavior
Equity theory
Human behavior
Humanistic psychology
Human Potential Movement
I-Change Model
Organizational behavior
Personality psychology
Preference
Successories
Regulatory Focus Theory
Social cycle theory
Victor Vroom
Operant conditioning
Flow
Motivation crowding theory
Organismic theory
Humanism
Andragogy
Health Action Process Approach
Self-efficacy
Volition

[edit] References
^ Kerr, Steven (1995) On the folly of rewarding A, while hoping for B. http://pages.stern.nyu.edu/~wstarbuc/mob/kerrab.html
^ Bandura, A. (1997), Self-efficacy: The exercise of control, New York: Freeman, pp. 604, ISBN 9780716726265, http://books.google.com/books?id=mXoYHAAACAAJ
^ Deci, Edward L.; & Ryan, Richard M. (1985). Intrinsic motivation and self-determination in human behavior. New York: Plenum. ISBN 0-30-642022-8.
^ Locke and Latham (2002)
^ Maslow, Motivation and Personality, p. 66.
^ Reiss, Steven (2000), Who am I: The 16 basic desires that motivate our actions and define our personalities, New York: Tarcher/Putnam, pp. 288, ISBN 1-58542-045-X, http://books.google.fr/books?id=EbOjA5oAsEUC
^ Reiss, Steven (2004), "Multifaceted nature of intrinsic motivation: The theory of 16 basic desires", Review of General Psychology 8 (3): 179-193, doi:10.1037/1089-2680.8.3.179, http://nisonger.osu.edu/papers/Multifaceted%20nature%20of%20intrinsic%20motivation.pdf
^ Ormrod, 2003
^ Steinmetz, L. (1983) Nice Guys Finish Last: Management Myths and Reality. Boulder, Colorado: Horizon Publications Inc.
^ Steinmetz, L.L. (1983) Nice Guys Finish Last: Management Myths and Reality. Boulder, Colorado: Horizon Publications Inc. (p.43-44)
^ Goldthorpe, J.H., Lockwood, D., Bechhofer, F. and Platt, J. (1968) The Affluent Worker: Attitudes and Behaviour Cambridge: Cambridge University Press.
^ Weightman, J. (2008) The Employee Motivation Audit: Cambridge Strategy Publications
^ Human Resources Management, HT Graham and R Bennett M+E Handbooks(1993) ISBN 0-7121-0844-0