1) I would like to discuss objection #6.
2) This objection states that, as a human being, Ashley has the same rights to sexual pleasure and proper development as you or I do. The main concern is the value of bodily integrity as it relates to life and human rights.
3) I paraphrased the objection somewhat, since it contained many organized reasons. The Ashley case raises concerns about individual rights, including the right to bodily integrity, the right to grow, and the right to sexual pleasure. The Washington Protection and Advocacy System states that constitutional rights to liberty and privacy, such as the freedom to choose whether to have children, to decline life-sustaining care, and to refuse psychiatric medication, may be infringed upon. Ashley’s rights include love, care, and appropriate treatment in light of her unique needs.
4) I believe the reasoning behind this response was plausible and consistent. Their reasons include how unique Ashley’s situation is and how unfortunate it is that she cannot grow and develop in the typical way. She nevertheless has the right to proper care and appropriate treatment.
5) I buy this response because it makes sense to me and all the reasons were well articulated. I believe it was successful in turning aside the objection.
Part of learning about argumentation involves learning how to respond to objections made against one’s reasoning as well as learning how to evaluate author responses to objections. Author responses to objections should always be (1) relevant to the objection, (2) plausible, and (3) consistent with what the author claims/defends elsewhere. For an author’s response to be relevant, it must first demonstrate an accurate understanding of the concerns contained within the objection and make every effort to assuage, mitigate, or demonstrate how the concerns should not really be concerns. For an author’s response to be plausible, the counter-reasoning contained in the response should at least appear to be true, i.e., it should not be immediately easy to falsify. Finally, for a response to be consistent, it must not contradict something the author has claimed or attempted to defend elsewhere (within the list of objections being responded to or within other articles related to the objection). These are the minimum criteria that a response to an objection must meet in order to be decent/good.
This week’s tasks:
1) Select one objection/response from Diekema & Fost, Ashley Revisited: Response to Critics. There are 25 to choose from. Tell us which one you select. For example, ‘I will discuss objection 5.’
2) Summarize the objection in your own words. Make sure to be as charitable as possible in your restatement of the objection and to note the concerns and/or values/disvalues that were likely motivating the people that raised the objection. (Identifying the concerns, values/disvalues that may have been motivating an objection is often helpful in determining whether an author’s response was actually relevant to the objection.)
3) Summarize Diekema/Fost’s response to the objection in your own words. Make sure to be as charitable as possible.
4) Analyze the counter-reasoning contained within their response. That is, was their response/reasoning relevant to the objection, plausible, and consistent? What were its strengths? What were its weaknesses? Your reasoning? Note that most arguments, responses/counters, etc., will have at least one strength and at least one weakness.
5) Finally, let us know whether or not you ‘buy it.’ That is, do you think the response/counter-reasoning was successful in rebutting/turning aside the objection? If so, why so? If not, why not?
1) The side of the transgender high school athletes debate that seems to show the most “respect for persons” is the one that doesn’t allow transgender athletes on female teams. But showing respect for more people does not make it the best choice, because transgender athletes are not only male-to-female; there are also female-to-male athletes. If you deny all transgender students the right to compete, you are also denying women the chance to compete in male sports, which falls under Title IX. I would say letting a transgender athlete compete in male or female sports would show the most respect for all.
2) I think we should try to have “respect for persons” always; however, some interactions with people make that difficult. I would say show kindness to everyone because, most of the time, you get back what you put out. If you cannot respect someone, you can be kind and let them earn the respect. I think respect is earned before it can be given.
3) I do think having the intent to be respectful and kind is sufficient to ensure your actions are moral toward others. Having respect for all is sometimes hard when some may feel disrespected. Either side taken on Title IX is doomed to cause some disrespect toward part of the students.
4) It is definitely not possible for a person, administrator, policy maker, law, etc., to be respectful of all people all of the time. Our culture is made up of so many different types of people (by sex, race, age, ability, etc.) that it is almost impossible to be respectful to everyone. Someone will always feel disrespected.
1-Some people do not trust technology because they fear using it. For example, Amish people recognize that technology can lead to evil. If you had the ability to prove them wrong, how would you make your case?
2-Why were programming languages invented? How did programming change computers?
3-In your own words, what is plagiarism? In an academic setting, how does a student commit plagiarism? You are required to add an example.
4-Suppose you and I are debating a moral problem in front of a crowd. You have concluded that a particular course of action is right, while I believe it is wrong. It is only natural for me to ask you, “Why do you think doing such-and-such is right?” In order to answer, which one of the eight ethical theories would you use, and why?
Now that we are familiar with consequentialism, its core moral/ethical imperative to “Always do the best for the most!”, its values/disvalues, etc., we can consider an interesting set of questions.
1) Should driverless cars be programmed to be consequentialists when confronted with trolley-style problems/decisions? If so, why so? If not, why not?
Alternatively stated, does it seem like a good idea to program a driverless car to tally up the potential positive and negative consequences of its possible actions and then to always select whichever action stands the statistically best chance of resulting in the most positive consequences/results for the greatest number of people affected by its actions? Or might it be a better idea to program driverless cars to always choose actions that are most likely to protect/save its driver/passengers, even when such might result in greater overall harm, suffering, death, etc., to the majority of affected people outside of the car? Your reasons?
2) If you tend to agree with the idea of programming driverless cars to calculate like consequentialists, then which of consequentialism’s assumptions and values/disvalues do you tend to find especially appealing? Your reasons?
If you tend to disagree with the idea of programming driverless cars to calculate like consequentialists, then which of consequentialism’s assumptions and values/disvalues do you tend to find especially unappealing? Your reasons?
Also, if you tend to think that programming cars to calculate like consequentialists is a bad idea, then what might be a better idea? That is, what should driverless cars be programmed to do when confronted with trolley-style problems/decisions? What values/disvalues should be given the greatest priority in such scenarios? Your reasons?
*Please make sure to watch Consequences & the “Best for the Most” (SE) Parts 1-2, and to read/watch this week’s articles/videos prior to posting to this discussion.*
I encourage everyone interested in learning even more about consequentialism, the ethical issues surrounding driverless car technology, etc., to check out some of the resources located inside this week’s voluntary submodule.
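The “tally up the consequences” procedure described in question 1 can be made concrete with a short sketch. Everything below (the action names, probabilities, and harm scores) is a hypothetical illustration of a strict consequentialist calculation, not a real autonomous-vehicle API:

```python
# Hypothetical sketch of a consequentialist "tally" for a driverless car.
# All action names and numbers are illustrative assumptions.

def consequentialist_choice(actions):
    """Pick the action whose expected net outcome is best for the most.

    `actions` maps an action name to a list of (probability, net_benefit)
    outcomes, where net_benefit counts harms/benefits to everyone
    affected, inside and outside the car, with equal weight.
    """
    def expected_value(outcomes):
        return sum(p * benefit for p, benefit in outcomes)

    return max(actions, key=lambda name: expected_value(actions[name]))

# Toy trolley-style dilemma: swerving likely harms one passenger,
# braking late likely harms three pedestrians. The strict
# consequentialist tally minimizes total expected harm.
dilemma = {
    "swerve": [(0.9, -1), (0.1, 0)],      # expected harm: -0.9
    "brake_late": [(0.8, -3), (0.2, 0)],  # expected harm: -2.4
}
print(consequentialist_choice(dilemma))  # → swerve
```

Note that the passenger-protective alternative in question 1 would amount to weighting harms to occupants far more heavily than harms to outsiders, which is exactly the kind of value choice the prompt asks you to defend or reject.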
Discuss how increasing your level of education would affect your competitiveness in the current job market and your role in the future of nursing.
Discuss the relationship of continuing nursing education to competency, attitudes, knowledge, and the ANA Scope and Standards for Practice and Code of Ethics.
Suppose that Marcus works as a programmer for a decent-sized company. Around midday, his daughter calls and asks him to stop by the local office supply store after work to purchase some ink and printer paper so that she can finish a school project. Marcus readily agrees to the request. As the day progresses, Marcus’ workload becomes immense. By the end of the day, he is very tired. As he is about to leave work, Marcus remembers his daughter’s request. He definitely doesn’t want to disappoint or be guilty of lying to his daughter. He very much wants his daughter to always think of him as being very reliable and trustworthy. Nor does he want to hinder her ability to complete the school project. Success in school is, after all, very important. However, Marcus is so very tired and simply wants to return home and rest for the evening. Stopping by the office supply store would add an extra 45+ minutes to his return commute. At that moment, Marcus notices the numerous printer ink cartridges and reams of paper on shelves in the company breakroom. He begins to think, ‘No one is around and no one will ever miss a few ink cartridges or packages of paper. Moreover, company administrators seem to waste multiple ink cartridges and packages of paper a week on useless interoffice memos. And, I worked really hard today. Come to think of it, I’ve worked really hard for the entire month! The company can well afford to give me a small bonus for my recent work performance and to donate some supplies to my daughter’s educational success.’ Marcus quickly relocates two ink cartridges and packages of paper from the shelves into his bag and then exits the building. Upon returning home, his daughter is happy to receive the supplies and his company never notices the missing ink or paper.
Questions:
(1) What moral values/disvalues (or moral/ethical principles) does Marcus seem to assume?
(2) What ethical/moral question/s does this scenario raise? [minimum of two]
(3) What answer/s would Marcus’ thoughts/actions provide to the question/s you identified in (2)?
(4) What reasons does Marcus provide in defense of the answer/s provided in (3)?
(5) Do you tend to agree or disagree with Marcus’ reasoning? If you agree, then which of Marcus’ reasons seem to be especially plausible, true, etc.? Why so? If you tend to disagree, then which of Marcus’ reasons seem to be especially mistaken, false, etc.? Why so?
(1) Review List of Values/Disvalues (SE)
(2) Offer three values/disvalues that are not on the list. For example, “X, Y, and Z are not on the list.”
(3) Provide your reasoning for adding each to the list. For example, “X should be on the list because…..” “Y should be on the list because….”
Restrictions:
— Your proposed values/disvalues must not already be on the list.
— Your proposed values/disvalues must be different from your peers. This means that you will need to read through peer posts/proposals before making your own.