Evaluating Programs With These Approaches

Objective Testing Programs: Similar to this case, I would use standard multiple-choice testing programs to compare language proficiency in study abroad students against non-study abroad students. One goal of the Eurotech Program, as of other study abroad programs, is to improve participants' linguistic ability by immersing them in a foreign study program. The assumption is that improvement will happen, but have studies been undertaken to see whether it actually does? There are several reasons why the program might have no positive effect on language learning. One is that most of the professionals Eurotech students deal with in Germany are capable English speakers, so some of the work is carried out in English. Another is that the course work Eurotech students enroll in at German universities can be taught in English rather than German. Testing Eurotech students in German before they go abroad and after they return, along with a control group of students in German classes who do not go abroad, would be a useful evaluation of the language goal. Perhaps the results could lead to improved selection of students (students who are motivated to learn German might do better than those who simply want the experience of foreign travel).
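The pre/post comparison with a control group described above comes down to comparing mean gain scores between the two groups. A minimal sketch, using entirely hypothetical test scores (real data would come from the actual testing program):

```python
from statistics import mean

# Hypothetical pre/post German test scores (0-100) for a small
# illustrative sample of students in each group.
eurotech_pre  = [62, 55, 70, 58, 66]
eurotech_post = [78, 72, 85, 70, 80]
control_pre   = [60, 57, 68, 59, 64]
control_post  = [66, 61, 72, 63, 67]

def mean_gain(pre, post):
    """Average per-student improvement from pre-test to post-test."""
    return mean(b - a for a, b in zip(pre, post))

eurotech_gain = mean_gain(eurotech_pre, eurotech_post)
control_gain  = mean_gain(control_pre, control_post)

print(f"Eurotech mean gain: {eurotech_gain:.1f}")
print(f"Control mean gain:  {control_gain:.1f}")
print(f"Difference in gains: {eurotech_gain - control_gain:.1f}")
```

A real evaluation would follow the descriptive comparison with a significance test and would match the groups on prior proficiency, but the basic logic is this gain-score contrast.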

Case Study Evaluations: This could be used for an in-depth description and analysis of the Eurotech Program, which has not been performed since the program began in 1993. I would look at its organizational and historical aspects, as well as the geographic and cultural aspects of the German universities involved. Such a study could examine the preparation and internship components of the program to see how they affect its outcomes. Would students with greater preparation in German language and culture have better outcomes in their internships abroad and in jobs at home? The main thrust of the evaluation would be to spell out the processes of selection, placement, and support of the students, and the benefits to them in post-graduation job seeking, among other things.

Approaches Not Appropriate To My Program:

11. Clarification Hearing. It would not be appropriate to put the program on trial, find it guilty of having failed its stakeholders, and recommend that it be terminated. One does not do that with educational programs. For the most part, educational programs have a long history, are well thought out, and involve professional, well-trained people who are committed to the goals of the program. This is certainly true of the Eurotech Program. The goal of the evaluation is to improve the program, not to demolish it.

Questions:

1. What is the Eurotech program in concept? Why was it created? Does it fulfill its goals? How has it changed over time?

2. How does the program actually operate to meet its goals? What has it produced? Are there success stories? Are there failures? How could the successes be increased and the failures avoided?

There may be more appropriate approaches, and less appropriate ones too. Approach (5) can be quantitative: average scores can be computed and compared. Approach (12) can be largely qualitative: interviews with students, faculty, and employers, here and abroad, could form a large part of it.

In this case, the word “model” means an approach to evaluation that is characterized by a particular developer’s idealized view of the main structure of what evaluation work would look like using his/her guidelines and descriptions. Some of the model developers prefer the word “approach,” so for our purposes the words “model” and “approach” can be used interchangeably. You can expect to see similarities, as well as distinct differences, between the various models, thereby increasing your awareness of alternative points of view and practice. In addition, you will be exposed to critical appraisal and endorsements of various approaches to program evaluation within the module.
Before we explore these models, let's consider two major research methods: quantitative and qualitative. This is not intended to be a research methods course, but it is imperative that you have a basic understanding to enable proper selection of evaluation methods. Simply put, should we evaluate program effectiveness using quantitative data, qualitative data, or both? Is one more valid than the other?
In the end, our evaluation goal is to make a judgment or decision concerning effectiveness of some sort. Was the program implemented properly? Did the results meet our expectations? Did the investment yield positive returns? Did the target audience benefit from the program? The debate over quantitative versus qualitative methods is ongoing in the social sciences and is partially a matter of preference, of how one was trained, and possibly of prejudice on the part of some researchers (Collins, 1984; Smeyers, 2001; Smith, 2005).
Evaluators and researchers can usually agree that we must have a research question, possibly some hypotheses or a theory we are evaluating, a target audience, and data that can help us answer the question. Basically, quantitative research uses objective data that is number-based or can be converted to numbers, so that statistical methods such as descriptive statistics, correlational methods, and difference testing can be applied (Creswell, 2005). Qualitative methods are more subjective, based on observations, and grounded in the researcher's ability to interpret the situation under study by looking at words, language, or symbols (Zawawi, 2007).
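To make the quantitative side concrete, here is a minimal sketch of two of the methods just named, descriptive statistics and a simple difference test, using made-up scores for two hypothetical groups (the group names and numbers are illustrative, not from any real study):

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical outcome scores for a program group and a comparison group.
group_a = [72, 78, 69, 81, 75, 74]
group_b = [64, 70, 66, 68, 71, 63]

def welch_t(x, y):
    """Welch's t statistic for the difference between two group means
    (does not assume equal variances)."""
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    return (mean(x) - mean(y)) / sqrt(vx / len(x) + vy / len(y))

# Descriptive statistics summarize each group...
print(f"Group A: mean={mean(group_a):.1f}, sd={stdev(group_a):.2f}")
print(f"Group B: mean={mean(group_b):.1f}, sd={stdev(group_b):.2f}")
# ...and a difference test asks whether the gap between means is large
# relative to the variability within the groups.
print(f"Welch's t = {welch_t(group_a, group_b):.2f}")
```

A full analysis would convert the t statistic to a p-value against the appropriate degrees of freedom; the point here is only to show what "number-based, statistically evaluable data" looks like in practice.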
Choosing the research method should be based on the research goal, not the skill of the researcher. If advanced statistical data analysis is needed to properly evaluate the research question and we are not skilled in those methods, we need to find outside help or expertise. Similarly, if case study analysis or ethnography is the proper approach, we may need assistance. In this module, we will begin to explore models and methods that will enable you to outline appropriate approaches to evaluation for your final project and for future evaluation projects.

References
Collins, R. (1984). Statistics versus words. Sociological Theory, 329.
Creswell, J. W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Pearson.
Smeyers, P. (2001). Qualitative versus quantitative research design: A plea for paradigmatic tolerance in educational research. Journal of Philosophy of Education, 35(3), 477.
Smith, R. B. (2005). Qualitative versus quantitative research design: A plea for paradigmatic tolerance in educational research. Quality & Quantity, 39(6), 801-825.
Zawawi, D. (2007). Quantitative versus qualitative methods in social sciences: Bridging the gap. Integration & Dissemination, 1, 3-4.

Evaluators tend either to rely exclusively on “objective” survey questionnaires and statistical analyses or to use only qualitative methodologies, rejecting the quantitative approach. I believe in a combination of quantitative and qualitative evaluation methods. An integrated approach is needed in order to gain a broader understanding of the program and to guard against misleading information.

The strength of the quantitative paradigm is that its methods produce quantifiable, reliable data that can usually be generalized to some larger population. However, as Stufflebeam (2000) states, these data may still be manipulated by evaluators who report only positive results and not any shortcomings. I would use a quantitative evaluation to do needs assessments to determine who best fits the program. I would follow up with a qualitative evaluation to gain more insight into the participants, to make certain that the neediest people, and the ones who would return the greatest benefit, are assisted. For instance, in an HIV intervention with truck drivers, I would administer a needs-assessment questionnaire asking how many unprotected sex partners each person has had and how many times the person has paid for sex. Because this survey is not anonymous, I would be able to determine which truck drivers are at greatest risk and in the most danger of spreading the disease to others. At that point, a qualitative evaluation would take place, analyzing details such as the truck drivers' motivation to change their own behavior, to be sure the correct participants are selected for the treatment.

An extreme example of a failure of quantifiable data is cited in Stufflebeam (2000), where a superintendent hid aggregate information on the school district's segregation and imbalances in educational test scores. Looking only at what the superintendent provided would leave the evaluators naïve; an in-person interview with the superintendent might have surfaced this information. Even though it was unethical to withhold results to keep the superintendent's job, it was arguably not illegal. Had the superintendent lied to an evaluator directly, that might have been considered fraud.

Furthermore, when the program under study is difficult to measure or quantify, such as how many people are actually being prevented from contracting HIV in an HIV intervention program, qualitative evaluation methods can be integrated to assist, such as direct interaction with the people under study. These direct connections are imperative, especially in a psychological program, in order to study the stakeholders' improvements or setbacks. Such qualitative methods may include observations, in-depth interviews, and focus groups. However, it is important to integrate quantitative data to verify consistency, for there is a Hawthorne effect when people know they are being studied. In addition, because the researcher becomes the instrument of data collection, the results may vary greatly depending on who conducts the research.

I am suggesting that qualitative and quantitative studies should be done simultaneously whenever financially feasible. Both have strengths and weaknesses, and it appears to me that the weaknesses of qualitative subjectivity and the Hawthorne effect can be mitigated by a subsequent quantitative study. A partial and biased counselor may be less so if they know a quantitative study will be done afterward to verify the results. Subjects who exhibit Hawthorne effects during one-on-one interviews may answer accurately on forms. Furthermore, some of the people in the HIV study may not tell the truth to a counselor about how many sexual partners they have had, but they may do so on a paper survey.

References:

Stufflebeam, D. L. (2000). Foundational models for 21st century program evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 33-83). Boston: Kluwer Academic Publishers.

Quantitative methods are research methods concerned with numbers and anything that is quantifiable. Qualitative research uses very different methods of collecting information, mainly individual in-depth interviews and focus groups. Quantitative research uses objective data that is number-based or can be converted to numbers. Qualitative methods are more subjective, based on observations and grounded in the researcher's ability to interpret the situation under study by looking at words, language, or symbols. Although qualitative methods rest on subjective observation, this does not remove the need for objectivity in interpreting those observations in light of verifiable facts.
APPROPRIATE RESEARCH METHOD
I would consider the qualitative research method the most appropriate evaluation model for my final project, which is to evaluate the peacekeeping performance of the United Nations in Darfur, Sudan. The project would adopt a qualitative approach for the following reasons. First, the goal of the evaluation project is to analyze the quality of the UN peacekeeping mission in Darfur. The evaluation takes a critical look at the methods being used in Darfur by the UN peacekeepers. Are these methods in line with United Nations peacekeeping principles and standards? What are the practical outcomes of the peacekeeping? Can it be said that there is true peace and security in Darfur? Did the peacekeeping succeed in establishing political stability in Darfur?
Second, the evaluation would use in-depth interviews with various participants in the peacekeeping mission, international observers, African opinion leaders, and inhabitants of the region. It would also use focus groups and other individuals in information gathering and interpretation.
Third, the outcome of any peacekeeping mission is usually visible, concrete, and acceptable both in the region and in the international community. Mathematical data may not be required to show a state without crisis or conflict.
NOT APPROPRIATE EVALUATION MODEL
I would consider a quantitative evaluation model not so appropriate for my final project paper. However, this method could prove very helpful in some other ways in the work. Providing mathematical data would be good for clarifying certain issues and forming opinions that might lead to good decision making. For example, a quantitative model could provide information such as how many rebels were disarmed, how many lives were saved through the peacekeeping mission, and so on.
Despite these points, the qualitative approach is still the most appropriate, because a peacekeeping mission is a practical, open-field intervention whose results speak for themselves.
TWO KEY QUESTIONS
1. Has the peacekeeping mission achieved its goal of restoring peace and establishing political stability in Darfur?
2. What areas of the peacekeeping mission still need attention in order to enable a comprehensive recovery in Darfur, and in Sudan in general?

Stufflebeam, D. L. (2000). Foundational models for 21st century program evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 33-83). Boston: Kluwer Academic Publishers.

Quantitative Approach: Benefit-Cost Analysis

As a small private medical practice, I think it would be wise for us to analyze the benefits and costs of the implementation and continued use of our electronic medical record (EMR) system. According to Stufflebeam (2000), this quantitative approach strives to “…determine costs associated with program inputs, determine the monetary value of the program outcomes, compute benefit-cost ratios, compare the computed ratios to those of similar programs, and ultimately judge a program’s productivity in economic terms” (p. 51). Ideally, the evaluator would be well trained in computing these valuable quantitative measurements of program effectiveness. I may not be qualified, but this approach would be very useful in my proposed study. In my view, the worthiness of an EMR cannot be judged without factoring in the costs of the program.
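The core arithmetic of the benefit-cost ratio Stufflebeam describes is simple: total the monetized costs, total the monetized benefits, and divide. A minimal sketch with entirely hypothetical annual figures (the categories and dollar amounts are invented for illustration; a real analysis would draw them from the practice's accounting records):

```python
# Hypothetical annual EMR costs for a small practice (illustrative only).
emr_costs = {
    "software_licenses": 18_000,
    "it_support":         7_500,
    "staff_training":     4_000,
}
# Hypothetical annual monetized benefits (illustrative only).
emr_benefits = {
    "reduced_transcription": 12_000,
    "fewer_billing_errors":   9_500,
    "saved_staff_hours":     15_000,
}

total_cost = sum(emr_costs.values())
total_benefit = sum(emr_benefits.values())
bc_ratio = total_benefit / total_cost  # ratio above 1.0 suggests positive return

print(f"Total cost:         ${total_cost:,}")
print(f"Total benefit:      ${total_benefit:,}")
print(f"Benefit-cost ratio: {bc_ratio:.2f}")
```

The hard part of the method is not this division but defensibly assigning dollar values to outcomes such as patient safety, which is why Stufflebeam notes the evaluator should be trained in these techniques.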

Questions:

1. What is the average estimated cost for the practice to deliver medical services to patients using the EMR and how does this compare to previous costs?
2. Has the EMR’s streamlined electronic workflows reduced costs as measured by hourly wages and overtime?

Qualitative Approach: Client-Centered Evaluation

Since I am both a stakeholder and evaluator, this approach may work well for the purpose of my study. “The client-centered study embraces local autonomy and helps people who are involved in a program to evaluate it and use the evaluation for program improvement” (Stufflebeam, 2000, p. 69). It will be very important for us to evaluate the effectiveness of our EMR, but in the end, we must find ways to make improvements. Our program is not optional. We cannot evaluate, determine that it does not work, and then terminate the program. So, an evaluation method that focuses on improvement is key for our organization.

Questions:

1. Has the EMR contributed to improvements in patient safety? If so, how so? If not, what action is necessary to make necessary improvements?
2. How has the EMR affected patient satisfaction?

Inappropriate Model: Constructivist Evaluation

According to Stufflebeam (2000), this approach is “…heavily philosophical, service oriented, and paradigm-driven” (p. 71). It works best with a wide range of stakeholders, participants, and evaluators. It seeks to empower the disenfranchised and is seen by some as too utopian (Stufflebeam, 2000). This approach may be useful elsewhere, but it is not suited to my proposed evaluation. We do not have a wide range of stakeholders, and constructivists may object to a stakeholder also serving as evaluator because of ethical concerns. A utopian mentality cannot work in health care either.

Reference:

Stufflebeam, D. L. (2000). Foundational models for 21st century program evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 33-83). Boston: Kluwer Academic Publishers.

This review was done by Thomas Woodfin.
