Priscilla Lee

IB Psychology Sample Essay for Paper 2 Options Exam

By priscillalee | May 6, 2015

I’ve been preparing notes for my students taking this year’s (2015) IB Psychology HL examination, and I thought it might be useful to compile some essays I found online. These essays cover only two options of the International Baccalaureate Psychology examination: Abnormal Psychology and Human Relationships. There’s no guarantee that these are 6- or 7-mark answers, but they are sample essays for your reference.

If you are the owner of any of the essays and feel uncomfortable with me publishing your work, do let me know! Otherwise I think that this is a great resource for students mugging for tomorrow’s examinations! Good luck!

(Sociocultural Level of Analysis)

Examine factors influencing bystanderism

The bystander effect occurs when people do not offer help in an emergency while other people are present, even though they are capable of doing so. This essay will examine the main factors that influence bystanderism.

One factor that influences bystanderism is pluralistic ignorance. When in a group, people often look to others to know how to react; this is known as informational social influence. If people see that others are not reacting, they will not react either, conforming to a group norm of bystanderism. Latane and Darley carried out an experiment on pluralistic ignorance. They asked participants to sit in a waiting room before taking part in an experiment. While they were waiting, they heard the female experimenter fall and cry out for help. Participants reacted more quickly and more often when they were alone in the waiting room than when they were sitting with a confederate who showed no reaction. In post-experimental interviews, the participants said that they had felt anxious when they heard the experimenter fall, but because the other people in the waiting room did not react, they assumed it was not an emergency. This experiment shows that people look to others to know how to react. However, participants were not in their natural environment, so the study lacked ecological validity.

Another factor that influences bystanderism is diffusion of responsibility, a phenomenon in large groups whereby individuals refrain from taking responsibility and expect that someone else, more competent, will help. Latane and Darley conducted a laboratory experiment in which they told student participants that they would be interviewed about living in a high-pressure urban environment. Anonymity was preserved because they were interviewed over an intercom. Some of the students were told that there were five other people in the discussion group, others that there was just one other, and some that they were alone. All the comments they heard from the other participants were pre-recorded. At one point, one of the voices cried for help. When the students thought that they were the only person there, 85% rushed to help. When they thought there was one other person, this dropped to 65%, and when they thought there were five other people, to 31%. Believing that someone else will intervene lowers the probability of a person taking responsibility. This experiment shows that people tend to shirk responsibility in larger groups.
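
As an aside, those percentages invite a simple illustration. Below is a toy model (my own illustration, not Latane and Darley’s analysis): if each bystander feels only a 1/n share of the responsibility, their individual probability of helping shrinks as the perceived group size n grows.

```python
# Toy model of diffusion of responsibility (illustrative only, not the
# analysis from the actual study): assume a bystander's probability of
# helping scales with the 1/n share of responsibility they feel.

def p_individual_helps(base_rate, perceived_group_size):
    """Helping probability for one bystander under a naive 1/n responsibility split."""
    return base_rate / perceived_group_size

# Perceived group sizes roughly matching the three conditions:
# alone (1), one other listener (2), five other listeners (6).
for n, observed in [(1, 0.85), (2, 0.65), (6, 0.31)]:
    predicted = p_individual_helps(0.85, n)
    print(f"n={n}: toy prediction {predicted:.2f}, observed {observed:.2f}")
```

The crude 1/n split underestimates the observed rates, but it captures the qualitative pattern: the more bystanders people believe are present, the less likely any one of them is to act.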

Another factor that influences bystanderism is the proximity of the bystander to the victim. The smaller the distance between the bystander and the victim, the more directly responsible the bystander will feel, and thus the more likely they are to help. Piliavin et al conducted a field experiment in which confederates acted as strangers in need of help on the New York subway. This was an opportunity sample of over 4,000 participants. They found that help was offered just as often in a crowded subway car as in a non-crowded one, which suggests that it is difficult to refuse help in a face-to-face emergency. This shows that people are more likely to help when the distance between them and the victim is small. However, this study can be criticized on ethical grounds. Participants could not give their consent, as they did not know that they were participants in an experiment. They were also deceived, because they were unaware that it was not a genuine emergency, and they were not debriefed, as this would have been almost impossible. Participants may also have experienced guilt, distress or anxiety. Another problem with this field experiment is that it was difficult to control; for example, we cannot be sure whether travellers on the train saw more than one trial. Field experiments are also difficult to replicate and time-consuming. However, a main strength of this study is that it is high in ecological validity, and the sample size was very large, so the findings can be generalized.

Bystanderism is a complex issue and does not depend only on pluralistic ignorance, diffusion of responsibility and proximity of the bystander, but also on age, identification with the victim and perception of the emergency.

 

 

 ABNORMAL PSYCHOLOGY SAMPLE ESSAYS

Discuss cultural and gender variations in prevalence of disorders

This essay will focus on the cultural and gender variations in the prevalence of an affective disorder, depression, and an eating disorder, bulimia nervosa, and whether or not there is a cultural and gender difference in the onset of these disorders.

Individuals with bulimia are afraid of weight gain; they binge eat and then use compensatory methods to lose weight, such as induced vomiting, excessive exercise and the use of laxatives. Symptoms of bulimia include swollen salivary glands (due to vomiting), stomach and intestinal problems, feelings of guilt after binge eating, and a negative, distorted image of their body weight.

According to statistical evidence, eating disorders are more common in females than in males. An estimated 35% of those with binge-eating disorders are male, with the remainder being female. Eating disorders are also more common in teenagers, with 50% of girls between the ages of 11 and 13 seeing themselves as overweight.

Fallon and Rozin wanted to see if there was a gender difference in body image. They showed US undergraduates figures of their own sex and asked them to indicate the figure that looked most like their own shape and the figure that was their ideal. Men selected very similar figures for both, whereas women chose ideal figures that were much thinner than the shape they indicated as their own. The researchers also asked men to choose the female figure they found most attractive, and found that it was heavier than the ideal figure the women had chosen: women believed that men prefer thinner women than they actually do. They concluded that there is a gender difference in the perception of body image, which may explain why women are more susceptible to eating disorders than men.

There are also cultural differences in susceptibility to bulimia. Lee, Hsu and Wing found that bulimia and anorexia were virtually non-existent among the Chinese in Hong Kong. Chinese people are usually slim, and therefore they do not share the Western fear of being fat. The Chinese regard thinness as a sign of ill health, unlike the Western view that it is a sign of self-discipline; obesity is seen as a sign of weak self-control in the West, whereas Chinese people see it as a sign of wealth and prosperity. Having grown up in Hong Kong myself, I have never met someone with an eating disorder, and the majority of people are either underweight or within the normal weight range.

Baguma et al also found cultural differences in susceptibility to bulimia; it seems that the culture we live in really does affect our eating behaviours. They asked British and Ugandan students to examine a set of nude bodies ranging from very thin to very obese. When asked to rate which body they thought was ideal, the British students tended to choose very thin bodies, whereas the Ugandan students chose very obese bodies. In Ugandan society, fat is beautiful; in British society, slim is attractive. This shows that cultural factors affect the way we think, which may explain why Western societies have such a high rate of eating disorders.

Similarly, there are also cultural and gender variations in the prevalence of depression. People with depression usually experience feelings of guilt and sadness and a lack of enjoyment or pleasure in anything. They have frequent negative thoughts, including low self-esteem and suicidal thoughts.

According to statistical evidence, women are two to three times more likely to be clinically depressed than men. Women are also more likely to experience several episodes of depression. This may be explained by gender norms or gender differences in society. Koss et al found that discrimination against women begins early in their lives: women are twice as likely to suffer sexual abuse in childhood, and this pattern of victimization is maintained in adulthood, where women make up the majority of victims of physical assault.

Culture may also influence the onset of depression, as some cultures discourage depression more than others. For example, Chiao et al found that depression was higher in individualistic cultures than in collectivistic cultures. Similarly, Gabilondo et al found that depression occurs less frequently in Spain (a collectivistic culture) and that it has a lower suicide rate than more individualistic European countries. This is perhaps because collectivist groups provide more social support, whereas individualistic cultures encourage independence.

In conclusion, there are cultural and gender variations in the onset of depression and bulimia. In both disorders, women are more vulnerable and susceptible to developing the disorder than men. Similarly, culture plays an important role in the prevalence of these disorders. People are affected by their culture: if their culture rewards thinness, they will strive to be thin.

 

Discuss cultural and ethical considerations in diagnosis

Diagnosis is the identification of groups or patterns of mental symptoms that reliably occur together to form a type of disorder. Diagnosing mental disorders is a very delicate process. Psychologists and clinicians must take precautions when making a diagnosis, as once a diagnosis is made, the life of the individual may be changed forever.

Concepts of abnormality differ between cultures, and this can have a significant influence over the validity of diagnosis. Behaviour that seems abnormal in one culture may be seen as perfectly normal in another, and therefore clinicians must take into account cultural considerations when making a diagnosis. They must take an emic approach to diagnosis.

For example, Koro is a culture-bound syndrome in China in which men believe that the penis is shrinking and will withdraw into the abdomen, causing death. Symptoms include fear and anxiety, and attempts to attach weights to the penis to prevent it from retracting. Since this disorder is found only in China, some diagnostic manuals will not include it. The existence of culture-bound syndromes means that it is important for clinicians to consult a range of diagnostic manuals in order to make a fair assessment of the individual.

Cultural bias is also found in diagnosis. Sabin found cultural bias when clinicians were exposed to non-English-speaking patients such as Mexican-Americans: the patients’ emotional problems and symptoms were often misunderstood, which may explain why there is a much higher incidence of diagnoses made on ethnic minorities in the US and UK. Jenkins-Hall and Sacco asked Western clinicians to watch interviews with possible patients under four different conditions: a white American woman who was not depressed, an African-American woman who was not depressed, a white American woman who was depressed, and an African-American woman who was depressed. The researchers found that the clinicians rated the two non-depressed women the same, but were more likely to diagnose the depressed African-American woman as depressed and to rate her as less socially competent than the depressed white American woman. This shows that cultural bias exists, and therefore clinicians must take it into account. For a more reliable diagnosis, perhaps more than one clinician, from different cultural backgrounds, should assess a patient.

Apart from cultural issues, the diagnosis of abnormality can also raise serious ethical issues, and these should be considered before a diagnosis is made, as afterwards there may be no turning back.

The labelling of people with a mental disorder is called stigmatization. Rosenhan (1973) conducted a study in which eight mentally healthy pseudo-patients tried to gain admittance to psychiatric hospitals by claiming to be hearing unfamiliar voices. All were admitted, and all but one were diagnosed with schizophrenia. Once admitted, the pseudo-patients stopped displaying the symptoms, and they were discharged after an average of 19 days. However, they were stigmatized with the label “schizophrenia in remission”. Had these participants been real patients, this label would follow them everywhere and might affect their ability to find a job or qualify for medical insurance. There are, however, criticisms of this study. Firstly, the staff at the hospitals are not entirely to blame, as the participants admitted themselves and told the staff about their symptoms; the staff were simply doing their job in identifying the symptoms and making a diagnosis. In real life, doctors are not normally confronted with people wishing to be admitted to psychiatric hospitals. The sample was also small, so there is a problem of whether the findings can be generalized. Even so, if a patient no longer shows any symptoms, the label “disorder in remission” still remains, and this can affect the individual’s self-esteem and confidence.

The self-fulfilling prophecy states that when a stereotype or label is placed on an individual, they will internalise the role, conform to the stereotype and start believing that they are abnormal. For example, if a patient is diagnosed with a mental disorder, the patient may start to believe that they are abnormal and begin to behave in ways consistent with the illness. Doherty et al found that patients who did not internalise the mentally ill role recovered much faster than those who exhibited the self-fulfilling prophecy. This finding emphasizes the importance of taking ethical considerations into account before diagnosing patients.

Another ethical issue in diagnosis is confirmation bias, whereby clinicians tend to attribute a patient’s behaviours to a disorder and look for behaviours that confirm it. This may be due to the assumption that if the patient is there in the first place, there must be something to diagnose. This is demonstrated again in Rosenhan’s (1973) study. Once the pseudo-patients stopped exhibiting symptoms, they took notes on their experience, and this note-taking was interpreted as a symptom of schizophrenia; walking down the hallway was seen as a sign of nervousness. This shows that once a person is deemed mentally ill, any action may be interpreted as a symptom of the disorder.

In conclusion, it is extremely important for clinicians to take cultural and ethical considerations into account in diagnosis, as once a diagnosis has been made, it stays with the patient for the rest of their life.

 

 

Discuss validity and reliability of diagnosis

Classification of mental disorder involves the identification of a group or pattern of mental symptoms that reliably occur together to form a type of disorder. This allows psychiatrists, doctors and psychologists to easily identify groups of similar patients. A diagnosis can be made, and a suitable treatment can be developed and administered to all those showing similar symptoms.

The DSM (Diagnostic and Statistical Manual of Mental Disorders) defines abnormality as a clinically significant syndrome associated with distress, loss of functioning, and an increase in the risk of pain or death. The DSM is a manual with over 200 specific diagnostic categories for mental disorder and lists the specific diagnostic criteria that have to be met for a diagnosis to be given.

However, one of the biggest problems in diagnosing patients is whether the diagnosis is valid and reliable. Reliability is whether the same, consistent diagnosis would be made for the same group of symptoms, and validity is whether a correct diagnosis is made.

There are two types of reliability. Inter-rater reliability is assessed by asking more than one practitioner to make a diagnosis for the same person and seeing whether the diagnoses are consistent. Beck et al found that agreement on diagnosis for 153 patients, each assessed by two psychiatrists, was only 54%. This shows the unreliability of diagnosis. Similarly, Cooper et al found that, when shown the same videotaped clinical interviews, New York psychiatrists were twice as likely to diagnose schizophrenia as London psychiatrists, who in turn were twice as likely to diagnose mania or depression. This suggests that psychiatrists may be influenced by their cultural beliefs when making a diagnosis. Lipton and Simon randomly selected 131 patients from a psychiatric hospital and attempted to re-diagnose them. When this diagnosis was compared with the original, only 16 of the 89 patients originally diagnosed with schizophrenia received the same diagnosis on re-evaluation. This clearly shows the unreliability of diagnosis, and how different psychiatrists will arrive at different diagnoses.
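
The figures above are raw percentage agreement. A standard refinement in the reliability literature (not used in these essays, so take this as background) is Cohen’s kappa, which corrects raw agreement for the agreement two raters would reach by chance. A minimal sketch, with made-up diagnoses for illustration:

```python
# Minimal sketch of Cohen's kappa for two raters. The diagnoses below are
# made up for illustration, not data from Beck et al or Cooper et al.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same patients."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected by chance, from each rater's marginal frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

rater_1 = ["schizophrenia", "depression", "mania", "depression", "schizophrenia"]
rater_2 = ["schizophrenia", "mania", "mania", "depression", "depression"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # 0.41: moderate agreement
```

A kappa of 1 means perfect agreement and 0 means no better than chance, so a raw agreement of 54% would translate into an even lower kappa once chance agreement is stripped out.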

Test-retest reliability concerns whether the same patient will receive the same diagnosis if assessed more than once. Mary Seeman completed a literature review examining evidence on the reliability of diagnosis over time. She found that initial diagnoses of schizophrenia, especially in women, were susceptible to change as clinicians found out more about their patients.

This clearly shows the unreliability of diagnosis. Although the DSM is constantly being revised to improve reliability, psychiatrists are still human and are bound to make mistakes when diagnosing patients. Many factors need to be considered when making a diagnosis, such as the clinician’s own researcher bias (reflexivity) and cultural bias.

Validity can also be a problem in diagnosis. Rosenhan (1973) conducted a study in which eight mentally healthy pseudo-patients tried to gain admittance to psychiatric hospitals by claiming to be hearing unfamiliar voices. All were admitted, and all but one were diagnosed with schizophrenia. Once admitted, they stopped displaying the symptoms, and they were discharged after an average of 19 days, stigmatized with the label “schizophrenia in remission”. Rosenhan was not satisfied with the result that normal people could be classified as abnormal, so in a follow-up he told psychiatrists that pseudo-patients would try to gain admittance to their hospital. In fact there were no pseudo-patients, yet 41 real patients were judged with great confidence to be pseudo-patients by at least one member of staff. Rosenhan concluded that it was not possible to distinguish the sane from the insane in psychiatric hospitals. The study shows good reliability but poor validity, in that normal people could be given a diagnosis. However, there are criticisms of this study. Firstly, the staff at the hospitals are not entirely to blame, as the participants admitted themselves and reported their symptoms; the staff were simply doing their job in identifying the symptoms and making a diagnosis. In real life, doctors are not normally confronted with people wishing to be admitted to psychiatric hospitals. The sample was also small, so there is a problem of whether the findings can be generalized.

In conclusion, there will always be issues of validity and reliability in diagnosis. Certain groups of people will be more likely to receive a diagnosis of a disorder than others, and it is very difficult to remove the subjectivity and bias of practitioners from the diagnostic process. Psychiatrists need to be careful when diagnosing patients, as once a diagnosis is made, the life of the individual may be changed forever.

 

 

Examine the concepts of normality and abnormality

Abnormal behaviour presents psychologists with a difficult task: it is difficult to define and therefore it is difficult to diagnose as it is based on the symptoms that people report or exhibit. There are four definitions of abnormality: statistical infrequency, deviation from social norms, dysfunctional behaviour and deviation from ideal mental health.

 

Statistical infrequency defines abnormality as a deviation from the statistical norm, meaning infrequently occurring behaviour. This approach is useful when looking at human characteristics that can be reliably measured, such as height: most people’s scores cluster around the average, with very few very tall people and very few very short people. This is known as a normal distribution. Therefore, statistically frequent behaviour is defined as normal and statistically infrequent behaviour is defined as abnormal.
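
The essay does not say where “infrequent” begins. A common operationalization (my assumption here, not a rule from the text) is to flag scores more than two standard deviations from the mean, which under a normal distribution covers roughly the most extreme 5% of people. A minimal sketch with made-up, IQ-like scores:

```python
# Sketch: treating "statistically infrequent" as more than two standard
# deviations from the mean (a common but ultimately arbitrary cutoff).
import statistics

def is_statistically_infrequent(score, population, cutoff_sd=2.0):
    """True if the score lies more than cutoff_sd standard deviations from the mean."""
    mean = statistics.mean(population)
    sd = statistics.stdev(population)
    return abs(score - mean) > cutoff_sd * sd

# Made-up IQ-like scores clustered around 100.
population = [85, 92, 97, 100, 103, 105, 108, 112, 115, 118]
print(is_statistically_infrequent(70, population))   # True: unusually low
print(is_statistically_infrequent(100, population))  # False: near the average
print(is_statistically_infrequent(145, population))  # True: unusually high
```

Notice that a very high score is flagged just as readily as a very low one, which is exactly the desirability objection raised below.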

 

However, there is no agreed definition of how far behaviour must deviate from the norm to be considered abnormal, and statistical deviation says nothing about the desirability of the deviation. For example, both musical talent and high IQ are statistically infrequent, yet both are highly desirable. Establishing that behaviour is statistically infrequent also requires the collection and maintenance of data, which is difficult and time-consuming: by the time data has been collected from a population and plotted on a bell curve, the population may already have changed. The accuracy of the data is also questionable.

 

Deviation from social norms defines abnormality as behaviour that departs from what is considered acceptable in a society. Norms are expected ways to behave in a society, and those who do not think or behave like everyone else break these norms and are considered abnormal. Most members of a society are aware of these norms and adjust their behaviour accordingly; examples include the norms governing student-teacher relationships or behaviour on public transport.

 

However, there is no universal agreement on social norms. Different societies have different social norms, and norms change over time: for example, it is much less socially acceptable to smoke cigarettes today than it was 20 years ago. Another problem with this definition is that it labels anyone who goes against social norms as abnormal, which means that people could be defined as abnormal because of their sexual preferences or religious beliefs.

 

Dysfunctional behaviour defines abnormality as psychological distress, such as negative thoughts, feelings or emotions, that causes discomfort to the individual. This approach is much clearer in defining abnormality than statistical infrequency or deviation from social norms, as many of those with a mental disorder do suffer psychological distress. For example, people with eating disorders are typically disturbed by the perception that they are fat, and this causes distress and discomfort to the individual, hence they can be defined as abnormal. Rosenhan and Seligman suggested that dysfunctional behaviour can be judged against seven criteria:

 

  1. Personal distress (experiencing unpleasant emotions)
  2. Maladaptiveness (behaviour that interferes with everyday functioning)
  3. Irrationality (behaviour that has no rational basis)
  4. Unpredictability (impulsive behaviour)
  5. Statistical infrequency (deviation from the statistical norm)
  6. Observer discomfort (behaviour that causes discomfort to others)
  7. Violation of moral and ideal standards

 

However, many people experience distress at some point in their lives, and this does not mean that they are abnormal. For example, the loss of a loved one may cause someone distress and lead them to behave in ways that are irrational and unpredictable, but this does not make them abnormal; it may even be an appropriate response to the circumstances. Observer discomfort also depends on who the observer is: what discomforts one observer may seem perfectly normal to another. Violation of moral and ideal standards likewise depends on whose standards we are using.

 

Deviation from ideal mental health defines abnormality as behaviour that departs from what is considered mentally healthy. In this context, normal can be defined as mentally healthy, and abnormal as mentally unhealthy. Jahoda defined six criteria by which mental health can be measured:

 

  1. Attitudes of an individual towards his/herself
  2. Growth, development or self-actualization
  3. Integration
  4. Autonomy
  5. Perception of reality
  6. Environmental mastery

According to this approach, the more of these criteria that are satisfied, the healthier the individual. However, very few people are likely to achieve all of Jahoda’s criteria, and it is hard to measure the extent to which an individual falls short of them. Furthermore, different cultures have different ideas about what is ideal: autonomy is valued in individualistic cultures, for example, whereas collectivist cultures value working together.

 

None of the above definitions provide a complete definition of abnormality. Attempting to define abnormality is in itself a culturally specific task. What seems abnormal in one culture may be seen as perfectly normal in another, and hence it is difficult to define abnormality.

 

 

Using one or more Research Studies, Explain Cross-Cultural Differences in Prosocial Behaviour.

There are cross-cultural differences in many prosocial behaviors, such as helping behavior. One study that investigated cross-cultural differences in helping behavior was Levine et al. (1990).

In the 1990s, Levine et al. conducted studies to measure helping behavior in 36 American cities and 23 large cities around the world. The field experiments used simple staged non-emergency situations, such as dropping a pen, helping a blind person cross a busy intersection, giving someone change, and picking up and posting a stamped, addressed letter that had been dropped.

 

One finding of the studies was that population density seemed to play a role in helping behavior; in fact, it was the best predictor of it. People tended to be more helpful in small and medium-sized cities in the southern United States than in large North-eastern and West coast cities. Explanations for the lower rates of helping in areas with high population density include an overload of stimuli, which makes it more difficult to recognize that a person needs help, and group factors such as pluralistic ignorance and diffusion of responsibility. It may also be that the population of larger cities is more atomized or individualistic: in such cities, people stick to small in-groups such as family and friends and care less for other people, whereas in smaller cities there is less anonymity and a stronger sense of community.

 

The studies did not find a clear relationship between helping and the individualism-collectivism dimension. Individualistic societies are oriented to the individual, whereas collectivistic societies give higher priority to the welfare of the collective. Although there was a slight overall tendency for big cities in individualistic countries to be less helpful, there were several exceptions to the rule. Levine et al. speculate that this is because of the vagueness of the collectivism-individualism construct: it does not make clear predictions about behavior towards out-group members, or about whether pedestrians will be categorized as such. Some studies have argued that collectivist societies focus less on outsiders, which may actually make them less helpful than individualistic societies. It is likely that there are many subtypes of collectivist and individualist societies. Individualist and collectivist societies that emphasize social responsibility, such as Sweden, Denmark, Austria and countries in Latin America, may be more helpful; this hypothesis is supported by the findings.

 

Other findings are also of interest. There was a negative correlation between helping behavior and the economic situation of the city: cities with low purchasing power per capita tended to be more helpful than cities with high purchasing power per capita. Helping rates were also higher in cities where people were less stressed (as measured by average walking speed). In addition, the findings suggested that people tend to conform to the cultural norms of the area they live in, meaning that South Americans were less helpful in New York and New Yorkers more helpful in Rio de Janeiro. As the saying goes, “When in Rome, do as the Romans do.”

 

In conclusion, the findings demonstrate that cross-cultural differences in helping depend on a multitude of factors, such as cultural norms, population density, economic factors, and stress levels.

 

 

 

 

Discuss the relative effectiveness of two strategies for reducing violence

 

According to the World Health Organization, violence can be defined as “the intentional use of physical force or power, threatened or actual, against oneself, another person, or against a group or community that either results in or has a high likelihood of resulting in injury, death, psychological harm, maldevelopment or deprivation.” Examples of violence include suicide, terrorism, child abuse, rape, and bullying. Violence is a leading cause of death and disability worldwide, and it disproportionately affects low- and middle-income countries, where it has a severe economic and social impact. Every day, more than 4,000 people die because of violence; of those killed, approximately 2,300 die by suicide and over 1,500 because of violence by another person. It is therefore essential to find strategies to reduce its impact.

 

One way to reduce violence is to change social and cultural norms that promote or glorify violence towards others. Many studies have shown that such norms can increase the incidence of violence. The American South, for instance, is considered to have a “culture of honor”, in which men do not accept insults or improper conduct against them and are willing to resort to violent retribution in order to maintain their reputation. The American South also has a higher level of violence than the American North. This may, of course, also be due to economic differences and the prevalence of guns, but when those factors are controlled for, violence is still more prevalent in the South. The media can help perpetuate norms of violence. In a field experiment by Cohen and Nisbett, employers were sent letters from job applicants who had allegedly killed someone in an honor-related conflict; Southern and Western companies were more likely than Northern companies to respond in an understanding and cooperative way.

 

One way to change social and cultural norms of violence is through education. One program aimed at preventing adolescent dating violence is Safe Dates. In a study by Foshee et al. evaluating the effectiveness of the program, fourteen schools in a rural county in the United States were randomly allocated to treatment conditions, and the participants’ attitudes toward adolescent dating violence were measured through questionnaires before and after the program. Less psychological abuse and sexual violence was reported in the treatment group than in the control group, and most of these effects were explained by changes in dating violence norms, gender stereotyping and awareness of services. Even though one should be cautious about drawing too far-reaching conclusions from the results, as the measure was based on self-report rather than actual behavior, reducing adolescent dating violence through education seems promising.

 

The evidence for the effectiveness of modifying cultural norms and values is, however, limited. It can also be argued that campaigns aimed at changing norms have secondary positive effects: victims may be informed about services offering protection from violence, offenders may be informed about treatment, and such campaigns usually also address other issues related to violent behavior, such as alcohol consumption. Cultural norms of violence exist in every culture to a greater or lesser degree, and it may not be possible to eradicate them completely, as our proneness to violence may have an evolutionary basis. Our violent behavior can change, however. As Steven Pinker has demonstrated, humans are less violent now than during the Stone Age, which suggests that violence also has a cultural component. It is therefore important for the media and schools to be aware of their influence on norms of violence.

 

Another way to reduce violence is to improve social skills and enhance life opportunities for children. High impulsiveness and low empathy in children and adolescents are related to acts of violence. Many treatment programs, such as cognitive-behavioral skills training and social development programs, have been shown to increase empathy and reduce impulsiveness, antisocial behavior and aggression in children. These programs are also beneficial for improving prosocial behavior and skills. They can be carried out in school settings and typically focus on managing anger, behavior modification, adopting a social perspective, moral development, building social skills, solving social problems and resolving conflicts. They can be seen as positive in that they involve children in trying to solve problems related to violence, and they show promise of being effective: in a systematic review of these programs, children who had participated in the training reduced their violent and delinquent behavior by 10 percent compared with controls, and the most effective program, cognitive-behavioral skills training, produced an average 25 percent decrease in delinquency. Another intervention of this kind is to enhance vocational opportunities through academic enrichment programs, helping youths at high risk of violence to complete secondary schooling and pursue higher education, or providing vocational training for youths and young adults in the risk zone. While these programs show promise in reducing violence among youths and young adults, more evidence is needed to confirm that they also prevent violence and aggression in these individuals.

 

These two strategies for reducing violence have much in common. Both aim to reduce violence through intervention and education programs in youth, but whereas the first strategy focuses on changing norms, the second focuses on changing behavior and providing better opportunities. While the first strategy seems to have an effect, the latter seems to be more effective, even though more research is needed to determine the effectiveness of both. Current research suggests that neither is capable of preventing violence completely; combined with other strategies, such as gun control and the reduction of alcohol consumption, they may nonetheless help to reduce violence.

 

HUMAN RELATIONSHIPS SAMPLE ESSAYS

 

Discuss the effects of short term and long term exposure to violence

Researchers have long known that children who grow up in aggressive or violent households are more likely to become violent or aggressive in future relationships. Most of us have experienced bullying or violence at least once in our lives, and some experience it for longer periods than others. The short-term and long-term effects of this exposure on an individual will be the focus of this essay.

According to Olweus (1992), short-term exposure to violence typically leads to anger, depression, a higher rate of illness, lower grades than non-bullied peers, and suicidal thoughts and feelings. Long-term exposure to violence, on the other hand, leads to lingering feelings of anger and bitterness, difficulty in trusting people, avoidance of new social situations, an increased tendency to be a loner and low self-esteem.

Barbara Wilson carried out a study investigating the short-term effects of media violence, in which elementary school children were exposed to one episode of Mighty Morphin Power Rangers. She found that these children demonstrated significantly more intentional acts of aggression afterwards compared to those who did not watch the program.

Carney and Hazler found that bullying affects our levels of cortisol, a hormone secreted when we are stressed. The researchers measured cortisol levels in the saliva of 6th-grade students and asked them to fill out a questionnaire on their experience of being bullied or watching someone being bullied. Cortisol levels were measured first thing in the morning and just before lunchtime; lunchtime was chosen because it is one of the less supervised times of the day, when adolescents are more likely to be bullied or to observe someone being bullied. They found that bullying appears to cause a spike in cortisol levels. However, Carney and Hazler also found that people who experienced long-term bullying had lower cortisol levels than those who experienced short-term bullying.

Elliot et al carried out a survey to discover whether bullying at school affects people in later life. The survey of over 1,000 adults showed that bullying affects not only your self-esteem as an adult, but also your ability to make friends, succeed in education, and thrive in work and social relationships. Nearly half (46%) of those who were bullied contemplated suicide, compared with only 7% of those who were not bullied. The majority of the adults reported still feeling angry and bitter about the bullying they suffered as children. This shows that long-term exposure to violence can leave us feeling angry and bitter.

Patterson et al (1989) wanted to see if children are affected by aggression at home. Families with at least one very aggressive child were compared with families with a normal child, matched for family size, socio-economic status and many other factors. They found that aggressive children were more likely to come from homes where less affection was shown and where more arguments and punishment occurred. This study shows that long-term exposure to violence can cause individuals to become violent themselves, consistent with social learning theory, which assumes that people learn behaviour by observing others.

Having the support of family members and peers, who can be confided in when one has been bullied, tends to lessen the impact of bullying.

 

 

Contrast two theories explaining altruism in humans

Altruistic behaviour is when people help others with no reward, even at a cost to themselves. Darwin suggested that the evolution of altruism should be seen in relation to what is advantageous to the group a person belongs to, rather than what is advantageous to the individual alone. This essay will contrast two theories that explain altruism in humans: the Kin Selection Theory, which is a biological explanation, and the Empathy-Altruism Model, which is a cognitive explanation.

The Kin Selection Theory predicts that the degree of altruism depends on the proportion of genes shared by the helper and the individual being helped: the closer the relationship, the greater the chance of altruistic behaviour. This is supported by many animal studies, in which animals tend to help those related to them. Dawkins proposed the “selfish gene” theory, arguing that there is an innate drive for the survival and propagation of one’s own genes: organisms will try to make sure that their genes are passed on to the next generation. This may explain why mothers often protect their children and are willing to sacrifice themselves to do so, whereas the reverse is rare. However, this theory does not explain why some people work for charity or help strangers cross the road. It is also questionable whether animal behaviour can be generalized to human behaviour. Similarly, adoption does not benefit kin and thus cannot be explained by this theory.
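
The essay states the prediction verbally; its standard formalization, not named in the essay, is Hamilton’s rule, which says an altruistic act is favoured by selection when r × B > C, where r is the genetic relatedness between helper and recipient, B the fitness benefit to the recipient, and C the fitness cost to the helper. A minimal sketch with illustrative numbers:

```python
# Hamilton's rule: kin selection favours altruism when r * B > C, where
# r = genetic relatedness, B = benefit to the recipient, C = cost to the
# helper (all fitness values below are illustrative).

def altruism_favoured(relatedness, benefit, cost):
    """Apply Hamilton's rule: helping is selected for when r * B exceeds C."""
    return relatedness * benefit > cost

print(altruism_favoured(0.5, 3.0, 1.0))    # full sibling (r = 0.5): True
print(altruism_favoured(0.125, 3.0, 1.0))  # first cousin (r = 0.125): False
print(altruism_favoured(0.0, 3.0, 1.0))    # stranger (r = 0): False
```

The r = 0 case makes the essay’s criticism concrete: kin selection has nothing to say about helping strangers, which is the gap the Empathy-Altruism Model addresses below.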

By contrast, the Empathy-Altruism Model does explain why people help others who are not family. The model, proposed by Batson et al, is based on the idea that an emotional response of empathy is generated when another person is perceived to be in need. According to Batson, we experience one of two emotional responses when we see someone in need. The first is personal distress, where we feel bad for the person concerned; this leads to egoistic helping, carried out to make ourselves feel better. The second is empathic concern, where we feel that we ought to help others if we can; this leads to altruistic behaviour. In other words, if you feel empathy towards someone, you will help them regardless of what you gain from it, but if you do not feel empathy, you will weigh the costs and benefits before deciding whether to help.

Batson et al carried out an experiment in which students listened to a recording of a student named Carol, who had broken both of her legs and was struggling to catch up with her school work. The students were divided into a low-empathy group and a high-empathy group. They were then given a letter asking them to meet up with Carol and share their lecture notes with her. Some participants were told that Carol would be finishing her work at home, while others were told that she would be in their class when she returned to school. Participants in the high-empathy group were about equally likely to help Carol whether or not she would be in their class, whereas those in the low-empathy group were more likely to help if they thought Carol would be in their class. The experiment concluded that if you feel empathy towards someone, you will be more likely to help them.

Thus this theory explains what the Kin Selection Theory does not. However, the Empathy-Altruism Model also has weaknesses. The study only looked at short-term altruism: would the participants in the high-empathy group have continued to help Carol throughout her time in school? The interpretation of the results also does not take personality factors into account, and it is difficult to measure a person’s level of empathy. Nonetheless, Batson et al’s study has been replicated repeatedly with similar results.

In conclusion, the Kin Selection Theory and the Empathy-Altruism Model both explain altruism in humans. We are more likely to help our family than our friends even when no empathy is felt, but we will help those who are not family if we feel empathy towards them.

 

Discuss the Effects of Short Term and Long Term Exposure to Violence

The 1990s suggested a possible displacement in children’s technology use from television to computer games and the internet, but despite the introduction of the new digital media, children still watch TV regularly (Pecora, Murray, & Wartella, 2007). A telephone survey of more than 1,000 parents of toddlers and preschoolers showed that 73% of children below 6 years of age watch TV on a regular day, 43% of all children below 2 years of age watch TV every day, and 74% of all infants and toddlers have watched TV before the age of two. On average, children watch about one hour per day (Rideout, Vandewater & Wartella, 2003). While watching television, children risk being exposed to media violence: a content analysis of more than 9,000 programs over three years found that approximately 60% of programs contain some physical aggression, and a typical TV hour features six different violent incidents on average.

 

As it has long been common knowledge that children often imitate what they see on TV, they are likely to imitate the violence they observe. This is in line with Bandura’s social cognitive theory, which emphasizes learning by imitating models. In a classic study by Bandura, Ross, & Ross (1963), children watched video clips of adults acting aggressively toward a Bobo doll and were later observed imitating the same aggressive behavior. In a more recent study, elementary school children who were exposed to one episode of Mighty Morphin Power Rangers demonstrated significantly more intentional acts of aggression, such as hitting, kicking and shoving, than a group that did not watch the program. In another experiment, five- to six-year-old children who had just watched a violent movie and were then observed playing together were rated much higher on physical assault and other types of aggression than a control group.

 

These studies clearly demonstrate a short-term effect of TV violence on behavior, but it is important to be careful about drawing too far-reaching conclusions. Not all of the aggressive behavior that the children in these studies copied from television was necessarily dangerous to other children; many children are aware of the difference between hitting a toy and hitting another child. Some children, however, may have been more negatively affected by the observed violence. Violence is also less common in children’s television than in adult television, and as long as children are not allowed to watch adult television, they may be better protected.

 

Children can also be frightened by violent media content. Younger children tend to be frightened by characters and events that look frightening, whereas older children are frightened by scenes that involve injury, violence and personal harm. Older children are also more responsive than younger children to violence that seems realistic or could happen in real life; older children, for instance, are more frightened by television news than younger children are. Repeated exposure to television may also increase children’s fear of victimization. In one study, primary school children who often watched the news believed that there were more murders in a nearby city than did children who watched the news less often; the researchers controlled for grade level, gender, exposure to fictional media violence, and overall TV viewing. This effect has also been observed in adults. Realistic violence seems to have a more detrimental effect than violence perceived as imaginary, as suggested by an experiment by Feshbach (1976).

 

Current research has found some support for short-term effects of television violence on children, such as imitation and anxiety. It has, however, been difficult to establish long-term effects. One longitudinal study by Eron & Huesmann (1986) found that the amount of exposure to television violence in childhood was positively related to physical aggression in adulthood; the researchers controlled for the child’s initial level of aggressiveness, IQ, parents’ education, parents’ TV habits, and parents’ aggression. Still, it is possible that children who watch more TV violence in childhood have a different disposition from those who watch less. There is, for instance, no evidence that violent television increases violent crime, nor has the introduction of television been shown to increase violence: Charlton, Gunter & Hannan’s (2002) longitudinal case study on the introduction of television on the island of St Helena showed no increase in crime or in children’s violence. However strange it may seem to us, we live in a comparably non-violent time in history; in earlier societies, when there was no television, people were more prone to commit violent acts. Some children’s misconduct may stem less from television violence than from low parental involvement in child rearing, with parents leaving the child in front of the television instead of interacting with him or her.

 

As most of the research on the effects of short-term and long-term exposure to television is correlational or laboratory-based, the conclusions that can be drawn from the findings must be modest: correlational data cannot establish cause and effect, and laboratory research may have problems with ecological validity. Nonetheless, it is not daring to claim that television has some short-term effects, such as an increase in aggressive display and fright. As long as an aggressive act does not hurt another child or the actor, however, one cannot claim that all role play with mild aggressive content is bad; splashing water at each other, shooting with cap guns, or pretending to be Harry Potter fighting villains seem to be harmless activities common in any ordinary childhood. There are also many differences in how children are affected by television violence. Because of stereotyping in the media, and possibly due to genetic differences, boys are more likely than girls to imitate aggressive behavior. The evidence also suggests developmental and individual differences in the extent to which television violence has an influence: children with ADHD-like symptoms, or children who watch more television than others, may be more at risk, and children may be more negatively influenced by realistic or real violence than by fantasy violence.

 

Although there are established short-term effects of exposure to TV violence, the long-term effects are less evident. TV violence may have an impact on aggression and psychological distress, but children are also influenced by many other social and psychological factors, and it seems obvious that exposure to real violence, such as bullying or abuse, must have a more detrimental effect on children than exposure to TV violence. It is nonetheless recommended that parents retain some degree of control over their young child’s TV viewing and its content, and that children, especially those at risk, are educated about how programs are made and about what is real and what is pretend on television. Such curricula have been tried with emotionally disturbed children, with positive effects (e.g. Sprafkin, Gadow & Kant, 1988).


Distinguish Between Altruism and Prosocial Behaviour

Contrast Two Theories Explaining Altruism in Humans

 

Prosocial behavior is used within social psychology to denote any behavior that benefits others, such as caring, loving, helping, and feeling empathy. Altruism is a type of prosocial behavior: according to evolutionary theory, it is a behavior that reduces the fitness of the altruistic individual but increases the fitness of the individual receiving help (Okasha, 2008). On the face of it, altruism does not make much sense from an evolutionary point of view, as the behavior seems unlikely to have become an adaptation. Adaptations, which are driven by natural selection, are features especially important for an animal’s survival, and evolutionary psychologists believe that many psychological functions are adaptations. As altruism by definition decreases the fitness of individuals, genes influencing altruistic behavior should be less likely to be passed on to the next generation.

 

Altruism has therefore posed a challenge to evolutionary theory, but there have nevertheless been efforts to explain the behavior from a biological point of view. One such explanation is Reciprocal Altruism (RA), a model developed by Trivers (1971). It assumes that individuals can be expected to behave altruistically if they believe there is a chance that they will be in the same predicament and will need somebody else’s help in the future; you are therefore more likely to act altruistically if you expect to meet the person you are helping again. For example, individuals in prairie dog colonies give alarm calls if they see a predator approaching, even though calling puts the caller at risk (Sterelny and Griffiths, 1999). This behavior may be explained by RA: the callers warn the rest of the group because they expect to need to be warned by others in the future. Another example is the behavior of vampire bats. Vampire bats feed on blood and can share it mouth to mouth with bats that have failed to find blood during their nightly hunt; this is vital, as vampire bats begin to starve if they do not consume blood within 48 hours (Wilkinson, 1985). On an RA explanation, individuals in a group of vampire bats expect to end up without blood once in a while, and therefore share their blood in order to receive the same favor later on.

 

A hypothesis that differs greatly from RA is Batson’s Empathy-Altruism Hypothesis (EA). Batson recognizes that people sometimes help out of self-interest, anxiety or fear, but holds that they often help out of empathy. Batson demonstrated this in a famous experiment. Participants listened to an interview with a girl named Carol who had been in a car accident and had both of her legs broken. One group of participants was asked to focus on how she was feeling; a second group was not asked to be concerned about Carol’s feelings. After listening to the interview, participants were asked to share lecture notes with Carol. As a second independent variable, the experimenters varied the cost of not helping: participants in the high-cost condition were told Carol would be in the same psychology class when she returned to school, whereas the low-cost group was told that she would finish the class at home. The findings showed that participants who had not been encouraged to sympathize with Carol were more likely to help her if they were told she would be in their class than if they were told she would not. In contrast, participants who had been told to empathize with Carol were not influenced by the likelihood of seeing her in class. The logic behind the Carol experiment is that if helping behavior is driven by pure self-interest, helping should be more likely in the situation where participants risk embarrassment for not helping Carol. As this was not the case in the empathizing condition, the findings suggest that empathy can sometimes motivate helping.

 

Naturally, the explanations of altruism differ between the theories. Whereas RA explains altruistic behavior as helping given the belief that one may need help oneself in the future, EA suggests that in some situations we can be motivated by empathic concern. In this sense, RA is based on rational self-interest to a greater degree than EA, and RA is arguably more reductionist than EA because it focuses on self-interest alone; EA, on the other hand, asserts that humans sometimes act out of self-interest and sometimes not. Paradoxically, it can also be claimed that RA, unlike EA, is not a purely altruistic theory at all, as it has its basis in self-interest and in increasing the fitness of the helping individual.

 

A second division between the explanations is the type of research they are based on. Whereas EA is based on experimental evidence from humans, RA is mainly based on naturalistic observations of non-human animals and only in part on human research. RA in humans has been investigated using the Prisoner’s Dilemma, a game of cooperation between two players. It is in the players’ best interest to cooperate, but because of a lack of trust, participants tend not to collaborate in single-round games and thus lose collectively. If the players play the game repeatedly, however, they tend to be more cooperative (e.g. Axelrod & Hamilton, 1981). This behavior is in line with the predictions of RA: according to the model, individuals behave more helpfully towards individuals they anticipate meeting again, as in reiterated games, and are conversely more likely to cheat people they do not expect to meet in the future, as in single games.
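
To make the single-round versus repeated-round contrast concrete, here is a minimal simulation; the payoff values and the tit-for-tat strategy are standard textbook choices rather than details taken from the studies cited. Defection wins a one-shot game, but over repeated rounds mutual cooperation between reciprocating players earns more, which is the pattern RA predicts.

```python
# Minimal iterated Prisoner's Dilemma with standard payoffs:
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5; exploited cooperator -> 0.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # each player's record of the opponent's moves
    for _ in range(rounds):
        move_a, move_b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect, 1))   # (0, 5): defection pays in one round
print(play(tit_for_tat, always_defect, 10))  # (9, 14): the defector stays ahead
print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30): reciprocators earn the most
```

Against an unconditional defector the reciprocator loses a little, but two reciprocators earn the highest totals of all, mirroring the finding that cooperation emerges in reiterated games.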

 

A third distinction between the theories is their validity, and it can be argued that EA has higher validity than RA for several reasons. Firstly, the research supporting EA is mainly based on humans, in contrast to RA, making the former more applicable to humans. Secondly, the experimental research favoring RA in humans has lower ecological validity than the experimental research favoring EA: the scenario presented to participants in the Carol experiment is more realistic than the scenario presented in Prisoner’s Dilemma games. Thirdly, there is more evidence challenging RA. The examples of reciprocal altruism observed in the animal world may have alternative explanations. For instance, the apparently altruistic behavior of individuals in prairie dog colonies may actually stem from egoistic motives rather than RA: as the alarm call causes the whole group to flee, it provides a distraction and may increase the caller’s own chances of escape. Likewise, the observed tendency of vampire bats to donate blood to the less fortunate may be better explained by kin selection, the theory that we tend to give more help to those who are closer to us genetically (Hamilton, 1964). Indeed, the data suggest that vampire bats are more likely to share blood with relatives than with non-relatives (Wilkinson, 1985).

 

Currently, EA seems to have more validity than RA. Due to the methodological weaknesses of the studies supporting either theory, one should nonetheless be careful about drawing too far-reaching conclusions; with that precaution in mind, contrasting the two theories may still increase our understanding of altruism.

 
