home > Tertiary education

BEST TERTIARY EDUCATION ESSAYS

  • Tertiary education
    Genetically modified organisms (GMO)
    GMO How are genetically modified organisms different from non-genetically modified organism? Genetically modified organisms are animals, plants and other organisms whose genetic composition was altered using genetic recombination and modification techniques performed in a laboratory. On the other hand, non-GMO organisms are those organisms that are produced naturally and were not modified (the organic & non organic report 2017; rumiano cheese 2011 & non-gmoproject 2016). The recent acts of activist intent on destruction of research plots included plants altered by molecular as well as classical genetic techniques. Is it possible to distinguish between plants altered by classical genetics and those altered by modern techniques? If it’s possible, how is it done?  It is possible and it can be distinguished by checking the DNA of the organism. Thion et al. 2002 conducted an experiment on how to extract/purify DNA of soybeans to check if the sample was transgenic and had undergone extraction and purification. The checking can be done through the use of a microscopic technology. Meanwhile, Schreiber (2013) adds that the detection could be done through a biochemical means where the present GMO will be measured. In isolating and amplifying a piece of DNA, the technique using polymerase chain reaction (PCR) is used to make millions of copies of the strands of the DNA. It is easier to see visually the altered and non-altered DNA if there are millions of copies of the DNA. What safeguards are in place to protect Americans from unsafe food? Are these methods science-based? Mention at least 2 methods. The US government safeguards the Americans from unsafe foods through the FDA or US Food and Drug Administration. Their methods are science-based, i.e. its whole genome sequencing technology and its measures in controlling microbial hazards. The whole genome sequencing technology is used by the FDA in identifying pathogens isolated from food. 
The FDA also safeguards foods by controlling microbial hazards through the process of elimination of growth and reduction of growth. The elimination methods are either through heating or freezing while the reduction of growth method involves the use of acidity, temperature and water activity. (Bradsher et al. 2015, pp. 85 – 86; FDA 2007; FDA 2013). Name at least 10 examples of harm to citizens from unsafe food. What percentage of these illnesses was caused by genetically modified organisms? If so, mention any example Some examples of harm to people from unsafe foods are harmful diseases extending from diarrhea to cancer caused by eating foods contaminated with viruses, bacteria, chemical substances and parasites. Around 600 million people around the world fell ill after consumption of contaminated food; diarrheal diseases cause around 125,000 death of children 0-5 years of age (WHO 2015). Based on the studies made by IRT (2011), foods from genetically modified organisms cause damage to the immune system, gastrointestinal and other organs, infertility and accelerated aging. These happen because residue or bits of materials of the GMO food can be left inside the person’s body, which eventually can cause long-term problems. Statistics show that in 9 years after the introduction of GMOs in the market, Americans who had chronic illnesses rose from 7 to 13% and other diseases such as digestive problems, autism, and reproductive disorders are rising (IRT 2011).
    $ 0.13
    0words / 0page
    READ MORE
  • Tertiary education
    ‘Globalisation is good’ or ‘is it not?’
    ‘Globalisation is good’ or ‘is it not?’ Globalisation is good because it opens doors of opportunities to many. It was the reason for the broad and speedy worldwide interconnectedness of the current social life – from cultural to criminal and from financial to spiritual. This is synonymous to having a borderless world but critics argue stating that globalisation has in fact disconnected the world from its national geographical divisions – the countries (Yoong & Huff 2007). Although some are discounting the benefits of globalisation to the world, I still consider globalisation to be the driving force in the global partnerships between companies that created more opportunities and jobs. The world trade may have plunged, the dollar dwindled, commodities slumped, but overall, globalisation has brought good to the peoples of the world. Globalisation through the internet has unlocked the doors to the sharing of cultures, knowledge, goods and services between peoples of all countries and the modern technologies lifted the barriers for accommodating a speedy transfer. The case of Inditex in marketing their Zara brand globally manifests that in business, one formula does not fit all. Every country has its own culture and styles and a business that is going global must do their homework well before entering the new market. Inditex’s Zara brand was a success to the Europeans but struggles in America and still trying their luck with the Chinese. But despite of these differences, the company is still considering going global because they needed new markets and they knew they will be opening bigger opportunities and jobs to more people (La Coruna 2012). Moreover, globalisation has also done well to the manufacturing sector. Statistics show that the global industrial output in 2010 registered fifty-seven times more than the production in the 1900. Also, globalisation has changed the way things are produced. 
The manufacturers going global take advantage of the skills and the costs of producing products in different countries. This means that the design of the product may be done in the US, manufactured in China or Taiwan then assembled in the Philippines. So every item – be it an iPad, a doll or a washing machine is collaboratively produced by the best skilled workers in the world at the lowest labor cost (The economist 2012). Consequently, since the product was a collaboration of different countries so it can be also marketed and patronized in those countries (The economist 2012). However, there are some who are openly argues that it failed to deliver the many publicized benefits to the poor. A Filipino economist, Walden Bello, coins a new term to describe the present global economic situation as caused by “deglobalisation” due to the downturn of the economies of big countries such as Singapore, Taiwan, Germany, Japan and Brazil. However, the poor countries are the ones that show faster growth than the rich countries, making globalisation still good because of the opportunities it gives to the needy. On the other hand, Dunning, et al (2007) claims that the current inclinations in the global economy reflect a more distributed rather than a geographical sharing of multi-national enterprise activity and foreign direct investments and to the carrying-out of transactions that are globally oriented. Contrary to the common beliefs, globalisation is not a new thing in the global business world. According to McMahon (2004) it existed since the late parts of the fifteenth century when a society of nations consisting of the countries in Northern Europe entered the rest of the world through exploration, trade and then conquest. This process which involves the exploitation of wealth and power by the European voyagers lead to industrialization in Britain, then mass international industrialization and eventually globalisation (McMahon 2004). 
Sheel (2005) adds that the interchange of technology and markets between countries have been among the first human innovations since the most primitive times. Globalisation was termed that time as “exchange” where the country’s surpluses were exchanged with other surpluses of peoples from other countries. This old system of exchange was developed, continued to grow and increased to greater heights in the modern times (Waters 2001 as cited in van Krieken, et al 2006). Robertson (2003) asserts that globalisation is inherent in people, motivated by their desire for self-interest and cooperation for survival. The author theorizes that globalisation existed due to the encouragement of interconnectedness by the social, political, economic and technological growths performing as catalysts for both local and global developments (Robertson 2003). Robertson (2003) claims that globalisation has emerged in three waves – during the 1500 to 1800 for the first wave, 18th century up to the 20th century for the second wave and the third wave is after the World War 2. However, Sheel (2008) categorizes globalisation in four phases – the 1st phase took place on the 16th century, the 2nd phase on the late 18th century, the 3rd phase during the 19th to 20th century and the fourth phase is during the end of the 20th century. According to the analysis of Robertson (2003), the first wave (1500 to 1800) saw the upsurge of colonization, invasion, imperialism, misery of the indigenous people, migration and changes in politics, economy and culture. The first wave has encouraged the creation of interconnectedness between peoples, countries and cultures, as instigated by commerce and trade. The second phase (18th to 20th century) was characterized by the start of Industrial Revolution, paving the way for industrialization and increase of income and profits especially to those who had technological skills. 
The trade routes created during the first wave were utilized by the manufacturers in sourcing their raw materials from other countries. However, by the end of the second wave, civil conflicts in many countries arose, same with the unfortunate events of World Wars 1 and 2 and the Great Depression. The third phase of globalisation transpired after World War 2. This was the phase when European economies were down whilst USA was enjoying a flourishing economy with tough industrial foundation and strong military. The latter part of the third phase (during the middle of the 20th century), the growth of globalisation was challenged by the emergence of communist ideology and the military force of Soviet Union. This challenge resulted to cold war between USA and Soviet Union where Soviet Union collapsed in 1989 (Robertson 2003). In addition to Robertson’s analysis, Sheel (2005) adds that there exists a fourth phase of globalisation that happened during the end of the 20th century where countries the developing and developed countries merged as partners in cross border trade and investments, stimulating the convergence of India and China. However, issues about globalisation’s worthiness have surfaced, some critics consisting of anti-globalisation groups argue that globalisation in corporate organisations have increased povery and inequality (Engler 2007). A study was made by World Value Survey regarding globalisation and 57% of the survey respondents consider globalisation as good. Most of the approving respondents were optimistic that globalisation would encourage the improvement of the workers’ working conditions, economic equality, global peace, global stability and human rights (Leiserowitz, et al 2006). But still, anti-globalisation groups insist that poverty, homelessness and environmental destruction will be highlighted if globalisation continues as it only centers on increasing trade and investment but ignores environmental protections and human rights (Engler 2007). 
But Edwards & Usher (2008) comment that the argument of the anti-globalisation groups are only superficial because despite their protests against globalisation they still engage in globalisation practices such the use of computers, internets and mobiles in their dissemination of their opposition. This manifests that these protesters are only selective in their opposition. They are not against the good effects of globalisation in communication but only on the aspect of capitalism. The inequality of wealth and poverty is one of the issues that plagued globalisation where critics claim that it makes the poor countries poorer and the rich countries richer as they exploit and amass the wealth of the minority country. But Holmes, et al (2007) reason that there is really a big difference on the distribution of benefits as the developed country provides the money or the capital whilst the developing country (minority) offers its resources and labor. This set-up ends-up with the developed country that provided the financial capitalization getting the bigger share of the profit. However, one aspect of globalisation that really brought good benefits to the people is the technological globalisation. Dahlman (2007) describes technological globalisation as the development of knowledge and skills through research by capable engineers and scientists and offering them to countries that have no inventive capability. The acquisition of these inventions by other countries enables them of acquiring technological transfer. Technologies can be transferred through technical assistance, direct foreign investment, importation of goods and components of products, licensing, copying and reverse engineering (Dahlman 2007). The advancement of communication technology through networking has opened more opportunities and economic growth. 
In addition, the video of Johan Norberg entitled “Globalisation is good – the case of Taiwan” illustrates the importance of globalisation in uplifting the poor conditions of poor countries. The video presented two former poor countries – Taiwan and Kenya – and compare and contrast what have they become 50 years after. Taiwan became 20 times progressive than Kenya whilst Kenya remained a poor country. Norberg explains that the reason for this difference is the globalisation that Taiwan embraced 50 years ago. Taiwan allowed capitalists to invest in their country whilst they provide the resources and labor. Moreover, Taiwan allowed the integration of their economy to the global trade whilst Kenya continued to shun globalisation. The video also presented the value of the multinational companies like Nike that employs the labor force of Vietnam in their sweatshop. Instead of being exploited, the Vietnamese were given good working conditions, high salaries and more benefits. Contrary of the claim of anti-globalisation groups that multinational investors will only exploit local workers, Vietnamese workers were given the opportunity to rise from their poverty through the works provided for them by globalisation. Conclusion: Contrary to what most people believe, globalisation has been in existence since time immemorial through surplus “exchange” and though the people were not yet privy to the term, they were already using the method of globalisation in their interconnection with other people’s business and lives. Now that the term globalisation is out in the open, people all around the world become mindful of each other’s affairs and consequences, disapproving how the system of globalisation makes the rich countries richer and the poor countries poorer. But as Norberg (2012) has seen it, globalisation is good as it intends to improve productivity and working condition. 
Though critics argue that it only exploits and amass the wealth of the poor country, Norberg was right when he said that if it is exploitation, then the world’s problem is by not exploiting the poor properly. The case of Taiwan and Kenya is already an eye-opener to those who still shut the door to globalisation. There may be ups and downs in the world of business but it cannot be blamed everything to globalisation because globalisation is only a method of interaction and not the one that is making the business or the deal. Globalisation through the internet has opened the doors to the sharing of cultures, knowledge, goods and services between peoples of all countries and the modern technologies lifted the barriers for accommodating a speedy transfer. The case of Inditex in marketing their Zara brand globally manifests that in business, one formula does not fit all. Every country has its own culture and styles and a business that is going global must be well prepared before entering the new market. Inditex’s Zara brand was a success to the Europeans but struggles in America and still trying their luck with the Chinese. But despite of these differences, the company is still considering going global because they needed new markets and they knew they will be opening bigger opportunities and jobs to more people. This proves that globalization brings good to many but one must know how to diversify and take advantage of the various benefits of globalization to reach greater success in the future.
    $ 0.13
    12,670words / 1page
    READ MORE
  • Tertiary education
    Explicit Teaching
    Explicit Teaching Introduction Not all students are equal. Some are fast learners; others need assistance while others are unruly – not because they are doing it intentionally, but because they are suffering from learning disabilities causing hyperactivity, inattention and impulsiveness. Some adjustments are needed in the learning environment and these adjustments should be tailored based on the individual learning needs of the students. Explicit teaching provides active communication and interaction between the student and the teacher and it involves direct explanation, modeling and guided practice (Rupley & Blair 2009). This paper will demonstrate Explicit Teaching applied to a class scenario with students suffering from a learning disability known as Attention Deficit/Hyperactivity. Furthermore, a lesson will be developed featuring an example of an explicit teaching approach showing how to differentiate the lesson to meet the needs of every student, with or without learning disability before finally concluding. 2A: ET Creating a Scenario One of the learning disabilities encountered is AD/HD or Attention Deficit/Hyperactivity Disorder, a neurological disorder that is likely instigated by biological factors that impact chemical messages (neurotransmitters) in some specific parts of the brain. In this type of learning disability, the parts of the brain that control reflective thought and the restriction of ill-considered behavior are affected by the slight imbalances in the neurotransmitters (ADCET 2014). AD/HD is characterized by hyperactivity, inattention and impulsiveness. Students with ADHD are those who never seem to listen, cannot sit still, do not follow instruction no matter how clear the instructions are presented to them, or those who just interrupt others and blurt-out improper comments at improper times. Moreover, these students are oftentimes branded as undisciplined, troublemakers or lazy (NHS 2008). 
In managing students with AD/HD, some adjustments in the learning environment are needed and these adjustments should be tailored based on the individual needs of the student. It should be noted that persons with AD/HD have different manifestations and the nature of disability as well as its effect on the student’s learning also vary (ADCET 2014). Direct instruction is considered as one of the best approaches in teaching students with AD/HD, but it must be used skilfully and the teacher should think of strategies to prevent it from becoming boring. Killen (2003) states that in using direct instruction, the teacher should emphasise teaching in small steps so the student will be able to practice every step and their practice will be guided to come-up with high level of success. In teaching a student with AD/HD, creative presentation of course material is advisable and this could be done through the use of visual aids and hands-on experience to stimulate the student’s senses. The teacher may use personal stories such as the student’s ideas and experiences (Killen (2003). It will also help if the teacher encourages the student with AD/HD to sit in front or near in front of the classroom to limit distractions (Tait 2010). Telling the student of what the teacher wants him to learn or be able to do – such as reading, writing, etc. - will help in the student’s better understanding of the lesson. In presenting the lesson, the teacher should present the lesson at a pace that the student can handle, such as not too slow or too fast. Important points should be emphasised so the student will realise its significance. To check if the student understands the lesson, the teacher may ask questions and if the student cannot answer, the teacher should re-explain everything that the student gets confused with. New words or new terms should be explained through examples. Assigning colors to different objects is a good visual aid in processing visual information. 
To help the student with AD/HD process written material, the teacher may use various verbal descriptions as possible. A list of acronyms and terms will also help, as well as a variety of teaching formats like films, flow charts or handouts. At the end of the lesson, a summary should be given, stressing the important points of the lesson. 2B: ET Lesson PlanKey Learning Area: Math Stage: 7 Year level: Year 7 Unit/Topic: Algebra Learner Outcomes: This lesson focuses in essential algebraic topics intended to prepare students for the study of Algebra and its applications. Students are introduced to topics involving mathematical operations with whole numbers, decimals and integers. Upon completion of this lesson, students are expected to answer and use mathematical language to show understanding; use reasoning to identify mathematical relationships; and continue and be familiar with repeating patterns. Indicators: At the end of the lesson, students are able to recognise what comes next in repeating patterns, identify patterns used in familiar activities, recognise an error in a pattern, able to simplify algebraic fractions, factorise quadratic expressions and operate with algebraic expressions. Resources: Whiteboard, colored visual aids, workbooks and class notes where the procedures are listed. Prior Knowledge: Students possess basic math knowledge (addition, subtraction, multiplication and division). They also have basic understanding of the terms such as whole numbers, positive, negative, decimals and integers. Assessment Strategies: To assess the students’ learning, students will be asked to do mathematical operations. Their answers will be checked, marked and recorded; and those who are unable to answer correctly will be asked what is it that they are getting confused. For students with learning disability, their computations will be checked and evaluated. Comments will be recorded in a record book regarding the student’s performance.
    $ 0.13
    0words / 0page
    READ MORE
  • Tertiary education
    Ethical Promotion Paper (Nursing)
    Ethical Promotion Paper In today’s globalization, the use of electronic health record significantly helps in sharing patient’s information to other healthcare providers across health organizations for patient’s better access to health care, decrease of costs and improvement of the quality of care (Ozair et al. 2015). However, the increasing use of electronic health record of patients over paper records sometimes generates ethical issues that should be given attention. Nurses are bound to follow the Code of Ethics and sharing of patient information, even digitally, should be done within the right conduct. This paper will discuss the article written by Ozair, Jamshed, Sharma & Aggrawal (2015) entitled, “Ethical issues in electronic health records: a general overview”, which was published in Perspectives in Clinical Research. My thoughts on the role that health care professionals should play in resolving the said ethical issue will also be discussed, as well as the specific theory that will support my position. Article’s Summary Ozair et al. (2015) aimed to explore the ethical issues created in the use of electronic health record (EHR), as well as to discuss its possible solutions. Although the use of digital health record could improve the patient’s quality of healthcare and decrease cost, transferring or sharing information through digital technology poses hazards that could lead to security breaches and endanger safety of information. When the patient’s information or health data are shared to others without the patient’s consent, then their autonomy is put at risk. Electronic health record contains the patient’s health data including his/her medical diagnoses, history, immunization dates, treatment plans and laboratory results. Every person has the right to privacy and confidentiality and his information can only be shared if he permits it or dictated by law. 
If the information was shared because of clinical interaction, then that information should be treated as confidential and be protected. The confidentiality of information can be protected by allowing only the authorized personnel to have access. Thus, the users are identified and assigned with passwords and usernames. However, these may not be enough to protect the confidentiality of the patient’s information and stronger policies on security and privacy are needed to secure the information. According to a survey, around 73% of doctors communicate with other doctors through text about work and when mobile devices get lost or stolen, the confidentiality of the information about patients are put at stake. Hence, security measures such as intrusion detection software, antivirus software and firewalls should be used to protect the integrity of data and maintain patient’s confidentiality and privacy. When patient data is transferred, there is a possibility of the data getting lost or destructed especially when errors are made during the “cut and paste” process. The integrity of data may also be compromised when the physician uses drop down menu and his/her choices become limited due to the choices available in the menu, causing him/her to select the wrong choice, thus, leading to huge errors. However, the authors claim that these ethical issues can be resolved through the creation of an effective EHR system, involving clinicians, educators, information technologies and consultants in the development and implementation of the ERH system. My Thoughts on the role of health care professionals The role of health care professionals is vital in ensuring that the right of patients to privacy and confidentiality are observed even in the use of electronic health record (EHR). Patient’s human rights in care include their rights to confidentiality and privacy (Cohen & Ezer 2013). 
To ensure that there will be no ethical issues created in the use of EHR, health care professionals should be properly informed about the importance of the system, as well as the ethical issues that could arise if the rights of the patient are not properly observed. Hence, it is vital that the knowledge of the health care professionals regarding the right implementation of EHR starts from their education curriculum, as well as in their continuous training and nurses’ participation in the workflow of EHR (Koolaee, Safdan & Bouraghi 2015). Computer literacy is a must for health care professionals to ensure that the sharing of health data information are not lost or destructed during the process and medical errors are not committed. Conclusion The use of electronic health record improves and increases efficiency in patient care, as well as patients’ access to care across health organizations. However, health care professionals should never ignore the rights of patients to their privacy and confidentiality so they should be properly informed if ever there is a need for their health data information to be shared to others to avoid ethical issues. List of References Cohen J. & Ezer T. (2013). ‘Human rights in patient care: a theoretical and practical
    $ 0.09
    0words / 0page
    READ MORE
  • Tertiary education
    HEALTH CARE
    In The Report, Analyze the Business Issue, Summarize the Methods You Have Chosen To Analyze the Data, Create Five Descriptive Statistical Techniques, and Complete One Inferential Statistical Technique to Estimate, Test, or Predict Something Student Name Institution Abstract Before a business establishes in a specific location or moves its operation to a new location, various factors come to play to ensure that the choice of location guarantees success and sustainability of the business enterprise. The choice of location is determined to a larger extent ease of access to the market, the evidence that the market is growing into the future, availability of requisite raw materials, availability of skilled labor, logistical convenience, competitive forces, and a favorable business environment in terms of prevailing laws and government regulations among other factors. For a moving business, the ideal location would be a state or a locality with the highest number of moving people, which could be because of job mobility or a high incidence of immigration. This report will compare the two states of California and Alabama to determine the ideal state for a fast growing moving company. The report will use statistical methods to derive essential market indicators to select the most appropriate state for the growth and sustainability of the moving company. Moving Company
    $ 0.09
    0words / 0page
    READ MORE
  • Tertiary education
    HEALTH CARE
    Name Professor Course Date Moral Boundaries around Gene Editing Using CRISPR Overview of the Topic CRISPR-Cas9 is a technology that is based on the bacterial immune system, which has been modified to recognize a short DNA sequence. In this technology, the sequence is cut out and inserted into a new sequence. The technology has been met with an avalanche of commentaries with some arguing that it rekindles hope for gene therapy while others criticize it for engineering genes in the future. This study relies on research articles that emphasize on ethics and bioethics. One of the area of agreement is that CRISPR-Cas9 has taken the prospect and pace for genetic applications and discovery to a high level. This has heightened anticipation for somatic genetic engineering that will help patients. The ethical concerns with CRISPR gene editing revolves around the human germline editing. The ethical concerns stem from the fact that changes in the human germline are transferred to future generations. The moral boundary with gene editing using CRISPR is that human genome editing for reproductive objectives is not ethically acceptable. However, gene editing using CRISPR should be used in research that will make gene therapy effective and safe. Introduction CRISPR-Cas9 has brought in a new era of gene editing. This has become attractive to both the public and scientific community. The metaphorical term gene editing implies that genes are texts, which are static in nature and can be corrected easily. This simplistic approach to gene editing is not useful in the appreciation of the analogy of gene editing. It can easily lead to the dismissal of the adverse consequences or collateral damage associated with gene editing. This exploration of the moral boundaries of gene editing consider the ethics and science of CRISPR-Cas9 as co-dependent factors that require safe, better, and efficient ways of treating diseases. 
CRISPR-Cas9 was first developed in 2012 as a technique that facilitated the precision and targeted manipulation of DNA sequences. It is vital to note that gene editing is not a new concept of developed in biology because transgenic mice were used in research in the 1970s. This heralded an era in which trans-genesis became a research tool for understanding the underlying biological mechanisms of diseases. Even though the technique is not highly employed in the introduction of a genetic component (transgene) in a cell, it is not able to execute targeted insertion in a genome. Advancements in the 1980s revealed that the technique has a high level of directionality. However, this could be achieved using genome alterations of embryonic stem cells. These cells retained their pluripotency, which gave rise to several cell types. In 2012, gene researchers discovered that Streptococcus pyogenes has a remarkable viral defense system. It was discovered that the defense system could be adapted and used as a programmable system for gene editing. CRISPR-Cas9 has two parts with the first part acting as a guide for genome testing. The second part is an associated protein that serves as an endonuclease that enables the double-stranded break. Scientists studying CRISPR established that they could manipulate the CRISPR-RNA molecule into single guide RNA. This could be engineered to target a genome of interest. However, this requires that sgRNA to recognize 17-22 bp genomic region, which is followed by 5’-NGG-3-protospacer adjacent motif site. Boundaries and Limitations of the CRISPR-Cas-9 Despite the rapid advancements with this technology, several questions arise that focus on the potential limitations and boundaries of the technology. One of the main concern with the technology is the potential off-target impact in a genome, which are sensitive to double-stranded breaks. 
Furthermore, given the short length of the sgRNA, the CRISPR-Cas9 system can tolerate 1-3 mismatches between the target site and the sgRNA, which increases the probability of off-target effects. This problem can be reduced by evaluating potential off-target sites and choosing sgRNAs that do not target other functional regions or proteins. However, because an sgRNA may still have off-target effects in long-range enhancer regions, the effects are not eliminated. Scientists have engineered the sgRNA and the Cas9 protein to increase specificity, which lowers the off-target effects. The rapid advancement of the technology has provided researchers with a viable approach for dissecting the molecular mechanisms that underlie cellular function. Moving the technology toward the clinic requires improvements in the delivery of Cas9. In the laboratory, transfection reagents deliver the Cas9 protein and sgRNA simultaneously into target cells, and this has proven highly efficient in cell lines. However, it requires the integration of all or part of the plasmid into the genome of the host, and the plasmid DNA can be inserted at both on- and off-target sites. Additionally, the bacterial sequences can induce a host immune response, which dampens the efficiency of genome editing. To overcome the limitations of transfection delivery, viral delivery systems are being evaluated. Historically, viral vectors have been marred by ethical controversy because of clinical trial events that led to the deaths of patients. Application of CRISPR-Cas9 in Human Health CRISPR-Cas9 has potential applications in research because of its ability to cleave the genome at a desired location. In genome editing, CRISPR-Cas9 has been used in the generation of stable cell lines.
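The 1-3 mismatch tolerance mentioned above can be made concrete with a naive screen: count base differences between the sgRNA spacer and an equal-length genomic site, and flag the site as a potential off-target if it differs at one to three positions. This is a hypothetical illustration of the stated rule, not a real off-target prediction tool, and the sequences are invented.

```python
# Illustrative sketch (assumption-laden, not a production off-target
# predictor): the text states Cas9 tolerates 1-3 mismatches between the
# sgRNA spacer and a genomic site, so a naive screen flags any
# equal-length site within that mismatch budget.

def mismatches(spacer, site):
    """Hamming distance between two equal-length DNA strings."""
    if len(spacer) != len(site):
        raise ValueError("spacer and site must be the same length")
    return sum(a != b for a, b in zip(spacer.upper(), site.upper()))

def is_potential_off_target(spacer, site, max_mismatches=3):
    """True if site differs from spacer at 1..max_mismatches bases.

    An exact match (0 mismatches) is the intended on-target site,
    so it is not counted as an off-target here.
    """
    d = mismatches(spacer, site)
    return 0 < d <= max_mismatches

spacer = "GACGTTACGGATCCATTGCA"
print(is_potential_off_target(spacer, "GACGTTACGGATCCATTGCA"))  # exact match
print(is_potential_off_target(spacer, "GACGATACGGATCCATAGCA"))  # 2 mismatches
```

Real tools additionally weight mismatch position (PAM-proximal mismatches are less tolerated) and account for bulges; a flat Hamming-distance cutoff is only a first approximation of the rule quoted in the text.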
This has the specific function of developing experimental models that deepen the understanding of underlying disease pathology. It is important to note that CRISPR-Cas9 is effective in multiple organisms. The initial application of the technique to human cells in 2012 was successful, made possible through the engineering of a novel CRISPR-Cas9 system (Saey 1). The use of CRISPR-Cas9 as a novel technique for investigating biological processes and pathologies then began to expand into different fields. For instance, the technique was modified to program specific transcription factors that target and activate, or silence, specific genes. Additional applications have enabled the manipulation of methyl groups at specific positions of the DNA, allowing researchers to evaluate the resulting changes in gene expression. Recently, the technique has been employed to turn cells into programmable computers, in which researchers integrate molecular switches that control the fate of cells and allow them to program conditioned behaviors. These examples show the versatility of the CRISPR-Cas9 system in the generation of basic research tools. Beyond its in-vitro applications, CRISPR-Cas9 can also be used to generate in-vivo animal models for the study of diseases. For instance, CRISPR-Cas9 has been used to create mouse models that are deployed in the study of the deleterious effects of mutations in cancer. These studies use a system that introduces loss-of-function mutations in tumor suppressors or gain-of-function mutations in oncogenes. Additionally, germline gene manipulation in mice has enabled researchers to generate conditional models or whole organisms that model human diseases. Germline editing has also been used to investigate early-onset human diseases in human embryos.
A potential goal in the development of CRISPR-Cas9 genome editing is its use in the treatment or prevention of disability or disease. For instance, evidence shows that CRISPR-Cas9 can be used to target the genomes of viruses such as HIV and hepatitis B, which has been found to control the infection or ultimately cure the patient. CRISPR-Cas9 studies have shown that the introduction of indels into the HIV genome is lethal to the virus (Regalado 1). However, additional modifications to the HIV virus could potentially increase its virulence. Currently, modifying the immune system so that it attacks the HIV virus has gained traction as a gene-editing therapeutic strategy, and the same strategy has been used in the treatment of leukemia and other blood cancers. Cell-based therapies have significant advantages because cells can be removed, expanded, manipulated, and reintroduced into the patient with the aim of enhancing the desired therapeutic effect. For diseases such as solid tumors or diseases that affect whole organs or tissues, CRISPR-Cas9 is not yet effective. Despite this setback, there are active research areas pursuing CRISPR-Cas9 editing of the dystrophin gene in muscular dystrophy and the CFTR gene in cystic fibrosis. Moral Questions with CRISPR-Cas9 Despite the advances in gene editing and in medical and biological research using CRISPR-Cas9, the technique has become controversial because of its modification of cells in the human body. Alteration of the germline genome in humans means that the technique can transfer both intended and unintended modifications, and that it can lead to unforeseeable alterations in offspring. This has led to questions about the irreversible effects of gene editing with a technique such as CRISPR-Cas9 on future generations. Recently, a study by Junjiu Huang has led to concerns and discussions on the suitability of the technology for pre-implantation embryos.
Historically, social engineering and genetics have had a profoundly toxic relationship; a notorious example is their misuse in Nazi Germany, where fears of the degeneration of the human race led to policies that blocked the breeding of supposedly inferior humans. As a rapidly evolving field, gene editing using CRISPR-Cas9 has become attractive to interest groups for investment regardless of ethical restrictions, so it is important to consider the ethical limits or moral boundaries of CRISPR-Cas9. Even though bioethicists hold a range of opinions on gene editing, the most compelling argument concerns the ethical use of the technique in germline manipulation (Yong 1). Limited knowledge in the area of germ-cell manipulation and mutagenesis has the potential to cause uncertain consequences in the future. The argument from potentiality thus incorporates concerns for safety and susceptibility to non-Mendelian diseases. It also implies that gene technology has the potential to transform societies with respect to economic status, social values, injustice, individuality, and accessibility, and it appeals to the transformative potential of gene editing as a technique with wide implications for the ethical and moral texture of a society. Presently, over 40 countries have banned or discouraged research on germline editing because of the safety and ethical concerns associated with the technique. One of the main areas of concern is the safety of the methods and of the outcomes. This area is associated with the probability of off-target effects, meaning edits made in the wrong place, and of mosaicism, meaning that some of the targeted cells carry the edit while others do not. Ethicists and researchers under the body of the International Summit on Human Gene Editing argue that gene editing using CRISPR must first be deemed safe through research (Regalado 1).
Otherwise, the method should not be used in clinical reproduction. Other researchers have also argued that gene editing in embryos may not offer benefits beyond those of existing technologies such as in-vitro fertilization (IVF) or pre-implantation genetic diagnosis (PGD). It is vital to note, however, that bioethicists and scientists acknowledge that germline editing has the potential to address needs that PGD has not met. This applies to cases where both parents are homozygous for disease-causing variants, families that object to some elements of PGD, and cases of polygenic disorders. Bioethicists and researchers are also concerned that genome editing could open the door to non-therapeutic as well as therapeutic uses of the technique, including enhancement. From a moral-imperative viewpoint, it has been argued that once proved effective and safe, the technique should be used to cure genetic diseases. The second area of concern is informed consent. Scientists and bioethicists argue that it is challenging to obtain informed consent for gene editing, especially germline therapy, because the people affected by the edits are embryos and members of future generations. As a counterargument, it is noted that parents already make numerous decisions affecting the future of their children, including the complicated decisions involved in IVF and PGD. Informed consent therefore remains a controversial area for gene editing with CRISPR as a reproductive option: the decision has the potential to affect the genetic traits of future generations whose informed consent is not obtainable. In most countries, IVF is a standard method for the screening of germline-transmitted diseases in humans.
In IVF, informed consent is given by the family or couple that desires IVF. These people are properly informed, and they make a conscious choice on behalf of their offspring. Conversely, the unforeseeable effects of CRISPR can be greater than its benefits (Saey 1), which means it is challenging to obtain informed consent on behalf of the embryo or offspring. Additionally, the off-target effects of the technology mean that its potential effects can be transmitted to future offspring and may not be observed until several generations later. The third area of concern is equity and justice. As with other new technologies, there are concerns that gene editing will be accessible only to the wealthy, which would increase disparities in access to healthcare and associated interventions. There are also concerns that germline editing could create a class of people defined by the quality of their engineered genomes. The fourth area of concern is gene-editing research that involves embryos. There are religious and moral objections to the use of human embryos in research (Cyranoski 1). Governments such as that of the U.S. place restrictions on the use of federal resources for research that leads to the creation or destruction of embryos, and the National Institutes of Health does not fund gene editing that uses human embryos. The moral boundaries of this technique are set by its use as a treatment option rather than a reproductive option. This requires a focus on defining an appropriate risk-to-benefit ratio that facilitates beneficial outcomes for the patient, which depends on factors such as disease progression, disease type, cell type, and mode of therapeutic application. The risk-to-benefit ratio may also be affected by the method of delivery.
For instance, an appropriate delivery method is the use of the lentivirus approach, which is deemed stable and efficient.
“Parent Involvement in Education in Terms of Their Socio-Economic Status” Article Analysis Name Institution Affiliation Article Analysis Education plays an important role in society by ensuring that people make informed decisions on social, economic, and political issues. Different aspects of education have attracted interest from scholars, researchers, and policymakers, who have produced a dynamic body of publications addressing issues in the education fraternity. Thorough analysis of such articles is encouraged in order to note the shortfalls and discrepancies that might structure further research topics, as well as to improve current research by addressing concerns more deeply. This paper presents an analysis of an academic article titled “Parent Involvement in Education in Terms of Their Socio-Economic Status” by Kuru Cetin and Taskin (2016), published in the Eurasian Journal of Educational Research. The article addresses the subject of improving the quality of education by considering informal resources, including the family, and the impact that parental involvement in educational matters would have. The problem statement, according to Kuru Cetin and Taskin (2016), is that formal education aims to increase the quality of education and to produce well-qualified students, objectives that can be realized by using both formal and informal resources effectively. The authors note that the family is the most critical informal resource, and they set out to study families' level of involvement in educational activities at schools in relation to their socio-economic status.
The study by Kuru Cetin and Taskin (2016) aimed to examine the perceptions of primary stakeholders in the education sector, including teachers, parents, and administrators, regarding the involvement of families in the education process in terms of socio-economic status, in both primary and secondary schools from the public and private sectors. The study used a qualitative method: the authors conducted interviews and analyzed relevant documents containing literature on the subject, using a study group drawn from the primary stakeholders in education. The findings note that parents with a high socio-economic status have shown great interest in actively participating in the improvement of their children's education. The study also found that the primary reasons parents engage with schools are to follow the personal development of their children as well as their academic success. New Vocabulary Words: Phenomenological design: a research approach that captures individuals' viewpoints and perceptions about a certain phenomenon. Semi-structured interviews: interviews that allow respondents to give detailed explanations in response to different questions. Opinion: The topic is controversial. The title of the work is difficult to understand because it is not clear and straight to the point; it presents ambiguous phrases such as “socio-economic status” and makes it difficult to link the concept to parents' decision-making in educational matters. The most interesting information in the piece is the different ways parents can, directly and indirectly, get involved in educational matters, because it makes it easy for readers to understand the challenges the article tries to address. Additional information that should have been included is the incorporation of both qualitative and quantitative research methods in data collection and analysis (Denscombe & Overdrive Inc., 2014).
This is because using only one method (qualitative) leaves out data that could be quantified to help people more easily understand the results and findings of the study (Creswell, 2014). For example, the researchers should have differentiated, in percentage terms, the responses of public- and private-school parents on different matters related to the subject. References Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, California: SAGE Publications. Denscombe, M., & Overdrive Inc. (2014). The good research guide. S.l.: McGraw-Hill Education. Kuru Cetin, S., & Taskin, P. (2016). Parent involvement in education in terms of their socio-economic status. Eurasian Journal of Educational Research, 66, 105-122. http://dx.doi.org/10.14689/ejer.2016.66.6
Name Professor Course Date MIXXO Mix Recommendation Overview The objective of this project is to provide a plan that will enable Coca-Cola to launch a new product. The new product is a combination of two existing products, Fanta and Sprite: Coca-Cola intends to mix the two to create a product that will be sold under the slogan “Half Fanta, Half Sprite.” Coca-Cola is a global brand with numerous product brands. It is vital to note that even though Coca-Cola appears to be a convenience product, it is actually a specialty product. The firm is also a loyalty brand, because customers do not select its product brands at random; instead, they select the products because of the specialty brands or because the products are available in a convenience store. In this case, the product concept will be based on the firm's trademark, its packaging, the mix of two existing products to create a unique product, and the differentiation of the new product from existing products. Basis for the Recommendation Coca-Cola should pursue this product because there is a ready market for new and unique products. Additionally, the firm's logistical resources give it the capability to design and make the new product, as well as to support it in the market. Currently, customer needs are based on personal preferences: customers are after products that are healthy and can be conveniently purchased. The new product will benefit customers by refreshing them in a different way from existing products. There are also health benefits based on the number of calories in a bottle of the new product, and customers will have the opportunity to experience a new taste. Customers also have an opportunity to experience new values; for instance, they will have the opportunity to experience new senses and cultures.
The new product will elevate the customer's sense of taste because it will have a taste that has never existed in the market. The second customer value is the firm's passion for a unique and singular experience, expressed through Coca-Cola's identity and, in turn, through the new product. Customers will also derive value from the firm's recognition of changing preferences and tastes and of the emerging trend of health-conscious customers. This has motivated the firm to offer a wide range of beverages, including beverages with few calories, functional benefits, and natural attributes. The size of the opportunity is large enough to compel Coca-Cola to pursue the new product. Currently, the beverage industry is expanding: market studies show onward and upward growth, and new and emerging beverage categories have provided additional growth opportunities for the industry. Current projections indicate an expected compound annual growth rate of 6 percent in the global soft drink market. Customers will buy the new product because of their desire for new beverage and drinking experiences, and because it offers them an authentic and nutritious experience. Strategic Fit The concept fits into Coca-Cola's corporate strategy and business-unit product line. It fits the corporate strategy because Coca-Cola is a renowned market leader in product innovation, and product design plays a crucial role in the value the firm seeks in the market. Coca-Cola seeks to give customers a refreshing experience through its products. In the case of its product line, Coca-Cola has adopted a strategy of expanding its business line. This has increased the brands and beverages offered to customers and has enabled customers to recognize Coca-Cola in different ways.
Forecast The current revenue estimate for the new product is $134 million annually, with a unit price estimate of $0.50. Revenues are projected to increase by 15 percent annually, with the increases expected from can and bottle sales of the new beverage. The cost estimate is $80 million annually, covering activities such as research and development, logistics, and sales and marketing. The learning curve will show the trend in sales and consumption of the new product, as well as the break-even point. The capital requirements are substantial because the firm must acquire a new product line in addition to increasing its logistics capacity. The return on investment is expected to be realized after 15 years, depending on the growth in sales and the profitability of the new product. Project Plan Coca-Cola requires a research and development team to realize the concept. The team will create the new product within the framework provided by the Food and Drug Administration for soft drinks. In addition, the firm requires a product line for making and bottling the new beverage, as well as logistical resources for the marketing and distribution of the new product, including warehousing and transportation. The main issue that could affect the new product is timing. Additional roadblocks include competition and Food and Drug Administration regulations and restrictions. The packaging and the resulting taste of the new product could also be roadblocks if they are not pleasing to customers. Customer perception may affect the performance of the new product in the market; for instance, the new product could be perceived as unhealthy.
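The forecast figures quoted above can be sketched as back-of-the-envelope arithmetic: $134 million first-year revenue growing 15 percent annually against $80 million in annual costs. Holding costs flat is my simplifying assumption, not part of the plan.

```python
# Back-of-the-envelope sketch of the quoted forecast: $134M first-year
# revenue, 15% annual growth, $80M annual costs. Constant costs are an
# assumption made for illustration only.

def project(rev0=134.0, cost=80.0, growth=0.15, years=5):
    """Return yearly (revenue, profit) pairs in $M, rounded to 0.1."""
    rows = []
    rev = rev0
    for _ in range(years):
        rows.append((round(rev, 1), round(rev - cost, 1)))
        rev *= 1 + growth
    return rows

for year, (rev, profit) in enumerate(project(), start=1):
    print(f"Year {year}: revenue ${rev}M, profit ${profit}M")
```

Under these assumptions the first year yields $54M in operating profit; whether the 15-year payback quoted above holds depends on the unstated capital requirements, which the sketch deliberately leaves out.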
New Deal Documentary Assignment Name Institution Affiliation Date New Deal Documentary Assignment 1) How did Franklin Delano Roosevelt communicate his agenda to the American people? He communicated through his campaign pledges and in his acceptance speech as the Democratic presidential candidate. He also used close allies and friends from the “Tammany Hall” group to advance his agenda, as well as leaders from minority groups such as the black community and immigrants. 2) How did FDR restore confidence in the American banking system? By introducing reforms such as the Emergency Banking Bill, which revitalized the banking and credit system of the US. The banks were first closed and were reopened only when they were solvent. The changes also brought adjustments, such as moderate currency inflation that relieved debtors and controlled prices. 3) What was the “alphabet soup” of the New Deal? What were three of the programs mentioned, and what did they accomplish? It was the set of New Deal acts and agencies formed in the second wave of reforms. Three of them were the Federal Emergency Relief Administration (FERA), the Civilian Conservation Corps (CCC), and the Public Works Administration (PWA). They accomplished reforms that created jobs for millions, provided economic relief to farmers, and created strong unions (Yiu, 2012). 4) What were some of the reasons given by critics for opposing the New Deal? The deal did not provide enough relief; some aspects, such as the AAA, were seen as unconstitutional; and the deals were seen as favoring only big business and corporate interests. 5) What was Tammany Hall? How did it control New York City politics? It was a New York City political organization. It organized and funded campaigns for the mayoral elections in New York, and it advocated for the interests of the poor and immigrants of New York City, who were the majority there, a factor that helped it control the city's politics.
6) How did Mayor Fiorello La Guardia promote reforms in his district and in New York City during the Great Depression? The mayor embraced the reforms and ensured that they worked in his district more visibly than in other districts in New York and in America at large. He read comic strips on the radio, fought the perception of bossism, worked alongside firefighters in putting out fires, and introduced slum-clearance projects. 7) How did the fight between FDR and Robert Moses impact La Guardia's plans for New York City? It impacted La Guardia's plans negatively by dividing followers and supporters in New York, which contributed to revolt and continuous criticism of the New Deal. Robert Moses was equally popular and influential, which increased resistance to the mayor's plans for the city. 8) How did the residents of Harlem cope with the Depression and participate in the New Deal? The residents of Harlem, most of whom were African American, were hard hit by the Depression, which resulted in massive retrenchment and worsened the black community's economy and society. They formed movements and political organizations that advocated for the interests of the black community, for example “Jobs for Negroes.” The leaders of Harlem communities supported the New Deal and advised FDR on the issues that affected the black community. 9) What was the Works Progress Administration? Why was it so important? The Works Progress Administration was a Second New Deal agency whose economic and social measures targeted poverty and unemployment. It aimed to provide work for people rather than welfare. It was important because it helped the economy by empowering people and inducing consumption that would stimulate economic growth. Reference Yiu, R. (2012, Oct 13). The Great Depression 3 - New Deal, New York. Retrieved from https://www.youtube.com/watch?v=a5n4u4cF4Pg
Intersectionality and Crime Name of Student Name of University Intersectionality and Crime The concept of “intersectionality” within the broader study of criminology is fundamental to understanding the various dynamics underpinning crime and security in society. It provides a basis upon which structures of domination and power, including homophobia, sexism, and racism, can be seen to operate simultaneously (Creek and Dunn, 2014). Applying intersectional analysis to issues of punishment, surveillance, and crime, with particular reference to how ability, sexuality, race, gender, and class shape people's experiences of the criminal justice system, is considered vital in explaining the specific dynamics within that system (Brown, 2015). As an analytic framework, intersectionality helps scholars and practitioners in criminal justice identify how interlinked power structures affect individuals and groups that are marginalized in society (Oliver, 2013). This paper discusses the experience of my friend, Michael, using the concept of intersectionality. Michael is a young African American man living in one of the poor neighborhoods of Chicago. Because of his race and low socio-economic class, people have frequently suspected him of engaging in criminal activities in his neighborhood, and he has been subjected to oppression and victimization by police. On a number of occasions when criminal activities have been reported in his neighborhood, he has been among the first to be questioned by local security guards and law enforcement officers. He has been rounded up many times by police when in a group of other African American young men, on the mere suspicion that they might be planning to commit a crime, even when in reality they are just walking around the neighborhood or simply relaxing in a friend's car, talking and “catching up”.
This predicament has deprived Michael of his dignity and self-confidence. He does not understand why, despite being a law-abiding citizen, he has to be harassed and oppressed by law enforcement officers. Michael's experiences relate significantly to those of many young African American men around the United States, especially those living in or coming from poor neighborhoods. In the most extreme cases, the “criminalization” of young African American men goes beyond mere suspicion; it can result in tragic police shootings of innocent black individuals assumed to be criminals (Oliver, 2013). Naturally, an individual may be mistakenly assumed to be criminal or violent due to a host of variables such as gender, dress, and age. In the United States, however, race appears to be a major contributory factor, as black men are often targeted by police as criminals even when in reality they are not (Rios, 2011). According to Brown (2015), this trend can be explained using key intersectionality concepts, particularly those relating to the interlocking matrix of oppression. The interlocking matrix of oppression, also called the vectors of oppression and privilege, refers to how differences among individuals, including age, race, class, and sexual orientation, serve as oppressive measures and alter how they experience life in society (Creek and Dunn, 2014). Michael's experiences are consistent with crime statistics in the United States, which show that African American males are stereotyped, perceived, and depicted as dangerous criminals. It is a perception that appears regularly in the popular culture of American society (Oliver, 2013). It has also been linked to consequences in the American criminal justice system, such as stiffer sentences for African American defendants facing trial. Available statistics suggest that in 2015, about 92 percent of interracial crimes in America were committed by blacks against whites (Brown, 2015).
In addition, statistics show that even though the share of African Americans in American prison populations declined from 80 percent in 1979 to 61 percent in 2008, this category of the population still represents the majority of those incarcerated (Brown, 2015). Statistics further show that African Americans account for the majority of those arrested for crimes such as aggravated assault, rape, murder, and robbery; it is estimated that they are approximately six times more likely than whites to be apprehended for violent crimes (Rios, 2011). Besides, statistics show that they constitute the majority of those who commit homicide offenses. Additionally, African Americans have been found to be the majority of those incarcerated for drug-related crimes; it is estimated that they are twice as likely as whites to be arrested for this kind of crime (Oliver, 2013). The negative portrayal of African Americans on crime-related issues has led to their criminalization, a situation that has contributed to their oppression. American society has internalized this criminal perception of African Americans: statistics indicate that over 80 percent of African Americans believe that whites view them as violent and more likely to engage in crime (Brown, 2015). This portrayal and perception also have an impact on the justice system. According to psychologists, the cultural stereotype of African American criminality can exert an unconscious but significant influence on how people form judgments, process information, and perceive others (Rios, 2011). This explains why Michael and other young adults of his race face criminal stereotyping, and it contributes to why they are disproportionately more likely to be oppressed and targeted by law enforcement officers as suspects while their white counterparts are less likely to be targeted.
It is due to this stereotyping that Michael has on several occasions been interrogated and some of his African American peers wrongfully convicted (Creek and Dunn, 2014). The concept of intersectionality equally explains the privilege that Michael’s white friends enjoy within the same neighborhood. Even though they all live in a poor neighborhood, his white friends are rarely interrogated by law enforcement officers despite the very high crime level. This privilege is attached to other social identity features and stereotypes as well. In the neighborhood where Michael lives, and in other parts of the United States, the white race is treated as the epitome of prestige, power, and beauty (Rios, 2011). It is believed that the fairer the complexion of an individual, the more power they possess. It is a perception that has had negative effects in other aspects of life as well, and that consequently shapes the dynamics of the criminal justice system (Creek and Dunn, 2014). For example, whites often earn higher wages and salaries compared to African Americans even when they are doing the same kind of work. In addition, whites are more likely to be employed than blacks. The aggregate implication of these trends is that a bigger proportion of the black population is likely to be poor and unemployed, thereby increasing their likelihood of being involved in crime (Oliver, 2013). In conclusion, it is evident from the above discussion that the concept of intersectionality provides a framework for analyzing issues related to crime, surveillance, and punishment. Based on the experiences of my friend Michael, it is clear that structures of power such as race and age operate simultaneously to influence people’s experiences in the criminal justice system. The case of Michael demonstrates how African Americans are more likely to be oppressed on crime-related issues compared to whites. They are more likely to be interrogated and wrongfully convicted merely because of their race. 
References

Brown, W. (2015). An Intersectional Approach to Criminological Theory: Incorporating the
  • Tertiary education
    General Knowledge
Percentage of Oxygen in a Compound: Stoichiometry and Catalysis Name of Student Name of University Percentage of Oxygen in a Compound: Stoichiometry and Catalysis Purpose The aim of this lab experiment was to determine the percentage of oxygen in potassium chlorate. The experiment also used stoichiometry to determine the percentage of potassium chlorate in a mixture of potassium chloride and potassium chlorate. Both the weighing technique and the use of stoichiometry to determine the percentage of substances were reviewed. Introduction The quantitative relationship among the products and reactants in a chemical reaction is known as stoichiometry. This relationship is used in different ways. Firstly, stoichiometry is used to predict the amount of products formed when the amount of the starting reactant is known. Conversely, it can determine the starting amount of reactants when a given amount of product is desired. Stoichiometry is at the core of most of the calculations done in chemistry. Heat can decompose most of the compounds that contain oxygen. For instance, potassium chlorate (KClO3) can be subjected to heat to remove the oxygen in the compound. The resultant product is potassium chloride, and the reaction on heating is 2KClO3 (s) → 2KCl (s) + 3O2 (g). The quantity of oxygen expelled can be determined by weighing the compound before and after heating it. The mass difference represents the amount of oxygen released (Burdge, 2014). Once the grams of oxygen released are known, stoichiometry can determine the grams of KClO3 that have decomposed. This enables the calculation of the amount of KClO3 in a mixture that contains KCl and KClO3 (Graves, 2013). It is vital to note that in this reaction, one of the products (KCl) does not contain oxygen, which validates the method: all of the lost mass can be attributed to the released oxygen. Thermal decomposition of potassium chlorate is usually a slow reaction. 
To observe the reaction in a laboratory setting, it is essential to use a catalyst, which speeds up the reaction. The catalyst used in this experiment is manganese dioxide. Therefore, thermal decomposition of
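The mass-difference calculation described in this excerpt can be sketched numerically. The snippet below is illustrative only and is not part of the original lab report: the sample masses are hypothetical placeholder values, while the molar masses are standard reference figures.

```python
# Sketch of the mass-difference stoichiometry for 2 KClO3(s) -> 2 KCl(s) + 3 O2(g).
# Molar masses (g/mol) are standard values; the sample masses below are
# hypothetical illustration values, not data from the lab report.
M_K, M_Cl, M_O = 39.10, 35.45, 16.00
M_KCLO3 = M_K + M_Cl + 3 * M_O   # 122.55 g/mol
M_O2 = 2 * M_O                   # 32.00 g/mol

# 1) Theoretical percentage of oxygen in pure KClO3.
pct_oxygen = 3 * M_O / M_KCLO3 * 100          # ~39.17 %

# 2) Percentage of KClO3 in a KCl/KClO3 mixture, from the mass lost on heating.
#    mol KClO3 decomposed = (2/3) * mol O2 released.
mass_before = 2.000                           # g, hypothetical sample mass
mass_after = 1.616                            # g, hypothetical mass after heating
mass_o2 = mass_before - mass_after            # g of O2 released
mol_o2 = mass_o2 / M_O2
mol_kclo3 = mol_o2 * 2 / 3
pct_kclo3 = mol_kclo3 * M_KCLO3 / mass_before * 100

print(f"% O in KClO3:       {pct_oxygen:.2f}")
print(f"% KClO3 in mixture: {pct_kclo3:.2f}")
```

With these placeholder masses, roughly 39.2 percent of pure KClO3 is oxygen by mass, and the hypothetical 2.000 g mixture works out to about 49 percent KClO3.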
  • Tertiary education
    General Knowledge
Narcissist, Psychopath, or Sociopath: How to Spot the Differences Name of Student Name of University Narcissist, Psychopath, or Sociopath: How to Spot the Differences From the Information Presented in the Video, Discuss Narcissism (a Person Who Is a Narcissist) The video provides valuable insights into narcissism, psychopathy, and sociopathy and the differences among them.
  • Tertiary education
    General Knowledge
Mythological Creatures in Ancient Grecian Culture Name Student ID: For (Course) Professor: Seminar leader: Date: Mythological Creatures in Ancient Grecian Culture History plays an important part in enabling people to understand different aspects of life that touch on social, economic, and political features. Ancient communities such as the Roman Empire, the Greek community, and the Egyptian empire have contributed positively to shaping world history and archaeology in many respects. This is because of the unique and exceptional products that depicted the culture of the ancient peoples, including tangible products such as buildings and pyramids, which showed the different cultures of the people in ancient times between 3000 B.C. and 31 B.C. The ancient Grecian culture is rich not only in its history but also in the many physical products that defined it and that have continued to impact present populations in many respects, touching on religion, politics, architecture, war, and the military, among many other aspects of culture that dominated the Grecian community. It is, hence, imperative to conduct a thorough analysis of the ancient culture through a review of its primary and secondary sources and the impacts that it had then and now. This paper examines the domestic architecture of the Grecian culture and notes how ancient Grecian architectural designs, which showed proportion, harmony, simplicity, and perspective, greatly influenced the Roman world and beyond and provided the foundation for the architectural orders. The piece presents the classical architectural orders and notes how the Greeks influenced others. It then discusses the materials used to develop different architectural products and the impact that this had on the Roman world and beyond. 
Next, it notes how temples, treasuries, and stoas also contributed to shaping the architectural field, both in the past and presently. Theatres and stadiums are presented as well, indicating the Greeks' contribution, and the paper finalizes by looking at housing as a product of the architectural process. Different architectural products of the ancient Greeks were strategically placed, mostly in cities and towns as well as religious and political administration sites, among other strategic locations. The Architectural Orders Classical architecture presents five orders that group the different architectural products and designs developed at various times, as classified by the Romans, including products from Greek architects. The orders are Doric, Ionic, Corinthian, Tuscan, and Composite. The first three were created by the Greeks, and they were genuine innovations (Eberhart 2011, 184). The latter two originated from the Romans and were composites. Columns in the buildings, made with or without a base and an entablature, formed the basis of the order and category used in the classification. The Doric column in stone, an architectural idea from the Greeks, evolved from the earlier use of wooden pillars. The development of different pillars and columns by incorporating newer ideas and materials, such as the addition of a base and a volute or scroll capital, resulted in the Ionic architectural order and showed how the Greeks would positively impact the world of architecture. The other orders also saw the incorporation of newer ideas and practices that resulted in the different classifications. It was through the innovation and creativity shown by the Greeks that exceptional products resulted, which were emulated by the Romans in the creation of the architectural products noted in the Tuscan and Composite orders, which used composites rather than genuine innovations. 
The influence that the Greeks had on the development and progress of architecture traversed the Roman Empire and era to include Western architecture, which many architects and designers have referenced in one way or another in the formulation and design of their work. Many cities in the world presently show buildings and other architectural products that have similarities, in one element or another, to those developed by the Greeks (Senseney 2011, 23-24). The buildings and other architectural products of the Greeks displayed artistic and design elements that enabled the proportion, harmony, perspective, and simplicity presented in their buildings and other constructions. This not only added beauty and aesthetics to such products but also conveyed messages that resonated with particular ideologies and events of the time, touching on the political, economic, and social concerns of the society. Materials Materials play a significant part in any design and construction project. The materials used in the architectural plans and products displayed by the Greeks indicated a variety of options that at times were combined in different parts of the buildings and architectural designs that they developed (Casper 2014, 360). As indicated in most of their public buildings, the Greeks used and preferred marble, but wood also played an essential role among the materials they used, particularly in the interior designs of the buildings constructed as well as in the support of columns. The choice of materials used in the architectural buildings was mainly determined by the type of structure and the purpose that the building had to serve, including the projected number of people that could use the premises at any one time. 
It is for these reasons that most architectural structures developed by the Greeks showed strength and durability, such that some have been able to survive for centuries to the present day while requiring minimal service and maintenance (Seaman and Schultz 2017, 17). This has helped society realize benefits such as those arising from tourism, where millions of tourists flock to the country to see the genius of the architecture and buildings constructed by the Greeks many years ago. Temples with thatched roofs were noted in the 8th-7th centuries BCE; these were later changed to include durable materials, such as stone edifices, that made them last longer. A mixture of materials in the construction of such architectural buildings enhanced beauty and decoration, where architects and those building such products relied on creativity and innovation to mix the materials used, such as in the creation of column capitals and entablatures, among other parts of the constructed buildings. The wooden architectural elements also resulted in the development of carpentry as a profession with its associated technical skills, where some specialized in wooden products that blend well with different architectural designs, as observed in the different Greek architectural products. The primary materials that the Greeks preferred included limestone, which was often polished with marble-dust stucco, but pure white marble was also used, particularly in high-profile areas such as palaces and administrative units of the time. Carved stones that allowed the presentation of different symbols, including gods as well as other symbols of worship and religion as noted in temples, among other sculptures designed and carved from stone, were polished with chamois. This provided resistance to water and gave a bright finish that made the products recognizable from a distance, hence making Greek products earn respect and recognition in the architectural world beyond the times in which they were developed, including the present.
  • Tertiary education
    General Knowledge
In The Report, Analyze the Business Issue, Summarize the Methods You Have Chosen To Analyze the Data, Create Five Descriptive Statistical Techniques, and Complete One Inferential Statistical Technique to Estimate, Test, or Predict Something Student Name Institution Abstract Before a business establishes itself in a specific location or moves its operation to a new location, various factors come into play to ensure that the choice of location guarantees the success and sustainability of the business enterprise. The choice of location is determined to a large extent by ease of access to the market, evidence that the market is growing into the future, availability of requisite raw materials, availability of skilled labor, logistical convenience, competitive forces, and a favorable business environment in terms of prevailing laws and government regulations, among other factors. For a moving business, the ideal location would be a state or a locality with the highest number of people moving, which could be because of job mobility or a high incidence of immigration. This report will compare the two states of California and Alabama to determine the ideal state for a fast-growing moving company. The report will use statistical methods to derive essential market indicators to select the most appropriate state for the growth and sustainability of the moving company. Moving Company
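To make the planned analysis concrete, the kind of comparison this excerpt describes can be sketched with Python's standard library. The monthly household-move counts below are hypothetical placeholder data, not figures from the report, and Welch's two-sample t statistic stands in for whichever inferential technique the report ultimately uses.

```python
# Sketch of a descriptive + inferential comparison of two states.
# The data are hypothetical placeholders, not figures from the report.
import statistics
from math import sqrt

california = [5200, 4800, 5600, 5100, 4900, 5400]   # hypothetical monthly moves
alabama = [1300, 1250, 1400, 1350, 1200, 1450]      # hypothetical monthly moves

# Descriptive techniques: mean, median, standard deviation, and range.
for name, data in (("California", california), ("Alabama", alabama)):
    print(name, statistics.mean(data), statistics.median(data),
          round(statistics.stdev(data), 1), max(data) - min(data))

# Inferential technique: Welch's two-sample t statistic, testing whether
# mean moving volume differs between the two states.
def welch_t(a, b):
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / sqrt(va / len(a) + vb / len(b))

t_stat = welch_t(california, alabama)
print(f"Welch t = {t_stat:.2f}")   # a large |t| suggests the means differ
```

With real data, the t statistic would be compared against a t distribution (or a library such as SciPy would supply the p-value) before drawing any conclusion about the ideal state.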
  • Tertiary education
    General Knowledge
Name Professor Course Date Moral Boundaries around Gene Editing Using CRISPR Overview of the Topic CRISPR-Cas9 is a technology based on the bacterial immune system that has been modified to recognize a short DNA sequence. In this technology, the sequence is cut out and a new sequence is inserted. The technology has been met with an avalanche of commentaries, with some arguing that it rekindles hope for gene therapy while others criticize it for enabling the engineering of future genes. This study relies on research articles that emphasize ethics and bioethics. One of the areas of agreement is that CRISPR-Cas9 has taken the prospects and pace of genetic applications and discovery to a high level. This has heightened anticipation for somatic genetic engineering that will help patients. The ethical concerns with CRISPR gene editing revolve around human germline editing. These concerns stem from the fact that changes in the human germline are transferred to future generations. The moral boundary with gene editing using CRISPR is that human genome editing for reproductive objectives is not ethically acceptable. However, gene editing using CRISPR should be used in research that will make gene therapy effective and safe. Introduction CRISPR-Cas9 has brought in a new era of gene editing. This has become attractive to both the public and the scientific community. The metaphorical term gene editing implies that genes are texts, which are static in nature and can be corrected easily. This simplistic approach to gene editing is not useful in appreciating the analogy of gene editing. It can easily lead to the dismissal of the adverse consequences or collateral damage associated with gene editing. This exploration of the moral boundaries of gene editing considers the ethics and science of CRISPR-Cas9 as co-dependent factors that require safe, better, and more efficient ways of treating diseases. 
CRISPR-Cas9 was first developed in 2012 as a technique that facilitated the precise and targeted manipulation of DNA sequences. It is vital to note that gene editing is not a new concept in biology, because transgenic mice were used in research in the 1970s. This heralded an era in which trans-genesis became a research tool for understanding the underlying biological mechanisms of diseases. Even though that technique is widely employed in the introduction of a genetic component (transgene) into a cell, it is not able to execute targeted insertion in a genome. Advancements in the 1980s revealed that the technique could attain a higher level of directionality. However, this could only be achieved using genome alterations of embryonic stem cells. These cells retained their pluripotency, which gave rise to several cell types. In 2012, gene researchers discovered that Streptococcus pyogenes has a remarkable viral defense system. It was discovered that the defense system could be adapted and used as a programmable system for gene editing. CRISPR-Cas9 has two parts, with the first part acting as a guide for genome targeting. The second part is an associated protein that serves as an endonuclease enabling the double-stranded break. Scientists studying CRISPR established that they could manipulate the CRISPR-RNA molecule into a single guide RNA (sgRNA). This could be engineered to target a genome of interest. However, this requires the sgRNA to recognize a 17-22 bp genomic region, which is followed by a 5’-NGG-3’ protospacer adjacent motif (PAM) site. Boundaries and Limitations of CRISPR-Cas9 Despite the rapid advancements with this technology, several questions arise that focus on its potential limitations and boundaries. One of the main concerns with the technology is the potential off-target impact in a genome, whose regions are sensitive to double-stranded breaks. 
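The targeting rule described in this excerpt (a roughly 20-nt protospacer followed by a 5'-NGG-3' PAM) can be sketched as a simple sequence scan. This is an illustrative snippet, not part of the source essay; the example sequence is made up, and real guide-design tools apply many additional criteria.

```python
import re

# Sketch of the Cas9 targeting rule: a 20-nt protospacer must be followed
# immediately by a 5'-NGG-3' PAM site. The example sequence is made up.
def find_cas9_targets(dna, protospacer_len=20):
    """Return (start, protospacer, pam) for every NGG-adjacent candidate site."""
    dna = dna.upper()
    # Lookahead so that overlapping candidate sites are all reported.
    pattern = re.compile(r"(?=([ACGT]{%d})([ACGT]GG))" % protospacer_len)
    return [(m.start(), m.group(1), m.group(2)) for m in pattern.finditer(dna)]

seq = "TTACGATCGATCGATTAGCCATTAGGCATCGATCGGTACGATCAGG"
for start, proto, pam in find_cas9_targets(seq):
    print(start, proto, pam)
```

Each reported hit gives the protospacer that an sgRNA could be designed against and the adjacent PAM that Cas9 requires; only the strand shown is scanned here, whereas a real search would also scan the reverse complement.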
Furthermore, given the short length of the sgRNA, the CRISPR-Cas9 system can tolerate 1-3 mismatches between the target site and the sgRNA. This increases the probability of off-target effects. It is important to note that this problem can be reduced or addressed through the evaluation of potential off-target effects and the use of sgRNAs that do not target other functional regions or proteins. Given the probability of the sgRNA having off-target effects in long-range enhancer regions, the effects are not eliminated. Scientists have engineered the sgRNA and Cas9 protein to increase specificity, which lowers the off-target effects. The rapid advancement of the technology has provided researchers with a viable approach for dissecting the molecular mechanisms that underlie cellular function. Progress of the technology towards the clinical field requires improvements in the delivery of Cas9. The use of transfection reagents simultaneously delivers the Cas9 protein and sgRNA into the target cells in a laboratory. This has proven to be highly efficient in cell lines. However, it requires the integration of all or part of the plasmid into the genome of the host. It has also been shown that the plasmid DNA can be inserted at both on- and off-target sites. Additionally, a host immune response can be induced by the bacterial sequences, which dampens the efficiency of the genome editing. To overcome the limitations of the transfection delivery system, the application of viral delivery systems is being evaluated. Historically, viral vectors have been marred with ethical controversies because of events that followed clinical trials, which led to the deaths of patients. Application of CRISPR-Cas9 in Human Health CRISPR-Cas9 has potential applications in research because of its ability to cleave the genome at a desired location. In the case of genome editing, CRISPR-Cas9 has been used in the generation of stable cell lines. 
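The mismatch tolerance described in this excerpt (Cas9 accepting 1-3 mismatches between the sgRNA and its target) is why candidate off-target sites are searched for by sequence similarity. The sketch below is illustrative only; the ten-base sequences are made up and far shorter than a real 20-nt spacer, and real off-target prediction also weights mismatch position and PAM context.

```python
# Sketch of an off-target scan: sites within a few mismatches of the
# sgRNA spacer are candidate off-target sites. Sequences are made up.
def mismatches(spacer, site):
    """Hamming distance between an sgRNA spacer and an equal-length site."""
    assert len(spacer) == len(site)
    return sum(a != b for a, b in zip(spacer, site))

def off_target_candidates(spacer, genome, max_mm=3):
    """Return (position, site, mismatch_count) for imperfect near-matches."""
    n = len(spacer)
    return [(i, genome[i:i + n], mismatches(spacer, genome[i:i + n]))
            for i in range(len(genome) - n + 1)
            if 0 < mismatches(spacer, genome[i:i + n]) <= max_mm]

spacer = "ACGTACGTAC"
genome = "TTACGTACCTACGG"
for pos, site, mm in off_target_candidates(spacer, genome):
    print(pos, site, mm)
```

Sites with a mismatch count of zero would be the intended on-target site and are excluded; everything else within the tolerance is flagged for review when choosing an sgRNA.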
This has the specific function of developing experimental models that increase understanding of underlying disease pathology. It is important to note that CRISPR-Cas9 is effective when used in multiple organisms. The initial application of the technique in 2012 in human cells was successful. This was possible through the engineering of a novel CRISPR-Cas9 (Saey 1). The use of CRISPR-Cas9 as a novel technique for the investigation of biological pathologies and processes began to expand to different fields. For instance, the technique was modified to program specific transcription factors that activate or silence specific target genes. Additional applications of the technique have enabled the manipulation of methyl groups at specific positions of the DNA. This has allowed researchers to evaluate the changes that affect gene expression. Recently, the technique has been employed in turning cells into programmable computers, in which researchers integrate molecular switches that control the fate of cells and allow them to program conditioned behaviors. These examples show the versatility of the CRISPR-Cas9 system in the generation of basic research tools. Beyond the in-vitro applications of the CRISPR-Cas9 technique, it can also be used in the generation of in-vivo animal models for the study of diseases. For instance, CRISPR-Cas9 has been used in the creation of mouse models, which are deployed in the study of the deleterious effects of mutations in cancer. These studies are done using a system that introduces loss-of-function mutations in tumor suppressors or gain-of-function mutations in oncogenes. Additionally, gene manipulation of mice at the germline level has enabled researchers to generate conditional models or whole organisms that model diseases affecting humans. Germline editing is a technique that has been used in the investigation of early-onset human diseases through human embryos. 
A potential goal in the development of CRISPR-Cas9 genome editing is its use in the treatment or prevention of disability or disease. For instance, evidence shows that CRISPR-Cas9 can be used to target the genomes of viruses such as HIV and hepatitis B. This has been found to control the infection or ultimately cure the patient. For instance, CRISPR-Cas9 has shown that the introduction of indels in HIV is lethal to the virus (Regalado 1). However, additional modifications to the HIV virus can potentially lead to an increase in virulence. Currently, modifying the immune system and targeting it to attack the HIV virus has gained traction as a use of gene editing as a therapeutic strategy. The same strategy has been used in the treatment of leukemia and other blood cancers. Cell-based therapies have significant advantages because cells are removed, expanded, manipulated, and reintroduced into the patient with the aim of enhancing the desired therapeutic effect. For diseases such as solid tumor cancers or diseases that affect organs or tissues, CRISPR-Cas9 is not effective. Despite this setback, there are active research areas that pursue the use of CRISPR-Cas9 in editing the CFTR gene in cystic fibrosis and the dystrophin gene in muscular dystrophy. Moral Questions with CRISPR-Cas9 Despite the advancements in gene editing and in medical and biological research using CRISPR-Cas9, the technique has become controversial because of its modification of cells in the human body. Alteration of the germline genome in humans presumes that the technique can also transfer intended and non-intended modifications. It also presumes that the technique can lead to unforeseeable alterations to an offspring. This has led to questions on the irreversible effects of gene editing using a technique such as CRISPR-Cas9 on future generations. Recently, a study by Junjiu Huang has led to concerns and discussions on the suitability of the technology in pre-implantation embryos. 
Historically, social engineering and genetics have had a profoundly toxic relationship. A suitable example is the misuse in Nazi Germany, where fears of the degeneration of the human race led to policies that blocked the breeding of supposedly inferior humans. As a rapidly evolving field, gene editing using CRISPR-Cas9 has become attractive to interest groups for investments regardless of the ethical restrictions. It is important to consider the ethical limits or moral boundaries of CRISPR-Cas9. Even though bioethicists have a range of opinions on gene editing, the most compelling argument is in the area of the ethical use of the technique in germline manipulations (Yong 1). Limited knowledge in the area of germ-cell manipulation and mutagenesis has the potential to cause uncertain consequences in the future. It implies that the argument from a potentiality viewpoint incorporates concerns for safety and susceptibility to non-Mendelian diseases. It also implies that the gene technology has the potential to transform societies in terms of economic status, social values, injustices, individuality, and accessibility. It appeals to the transformative potential of gene editing as a technique that has wide implications for the ethical and moral texture of a society. Presently, over 40 countries have banned or discouraged research on germline editing because of the safety and ethical concerns associated with the technique. One of the main areas of concern with this technique is the safety of the methods and the safety of the outcomes. This area is associated with the probability of off-target effects, meaning edits made in the wrong place, and mosaicism, meaning that some of the targeted cells carry the edits while others do not. Ethicists and researchers under the body of the International Summit on Human Gene Editing argue that gene editing using CRISPR must first be deemed safe through research (Regalado 1). 
Otherwise, the method should not be used in clinical reproduction. Other researchers have also argued that gene editing in embryos may not offer benefits beyond those of existing technologies such as in-vitro fertilization or pre-implantation genetic diagnosis. It is vital to note that bioethicists and scientists acknowledge that germline editing has the potential to address needs that have not been met by pre-implantation genetic diagnosis. This is applicable in cases where both parents are homozygous for disease-causing variants, in families that object to some of the elements of pre-implantation genetic diagnosis, and in cases of polygenic disorders. Bioethicists and researchers are concerned that genome editing has the potential to initiate the use of the technique for both therapeutic and non-therapeutic uses, in addition to its use for enhancement purposes. Based on a moral imperative viewpoint, it has been argued that once proved effective and safe, the technique can be used to cure genetic diseases. The second area of concern is informed consent. Scientists and bioethicists argue that it is challenging to obtain informed consent for gene editing, especially germline therapy. This argument is based on the knowledge that the people affected by the gene edits are usually embryos and the future generations. As a counterargument, it is argued that parents make numerous decisions affecting the future of their children. Some of these decisions include the complicated decisions made with in-vitro fertilization (IVF) and pre-implantation genetic diagnosis (PGD). Informed consent is a controversial area with gene editing using CRISPR as a reproductive option. The decision has the potential to affect the genetic traits of future generations in cases where their informed consent is not obtainable. In most countries, IVF is a standard method for the screening of germline-transmitted diseases in humans. 
In IVF, informed consent is given by the family or couple that desires IVF. It is vital to note that these people are properly informed, and they make their decision based on a conscious choice for their offspring. Conversely, the unforeseeable effects of CRISPR can be greater than the benefits of the technique (Saey 1). This means that it is challenging to get informed consent on behalf of the embryo or offspring. Additionally, the off-target effects of the technology mean that its potential effects can be transmitted to future offspring. These effects may not be observed until after several subsequent generations. The third area of concern is equity and justice. As with other new technologies, there are concerns that gene editing will only be accessible to the wealthy population. This would lead to an increase in disparities in access to healthcare and associated interventions. There are concerns that germline editing has the potential to create a class of people defined by the quality of the engineered genome they have. The fourth area of concern is gene editing research that involves embryos. There are religious and moral objections to the use of human embryos in research (Cyranoski 1). Governments such as that of the U.S. place restrictions on the use of federal resources for research that leads to the creation or destruction of embryos. Additionally, the National Institutes of Health does not fund gene editing that uses human embryos. The moral boundaries of this technique are set by its use as a treatment option rather than a reproductive option. This requires a focus on the definition of an appropriate risk-to-benefit ratio that facilitates beneficial outcomes for a patient. This is dependent on factors such as disease progression, disease type, type of cell, and mode of therapeutic application. The risk-to-benefit ratio may also be affected by the method of delivery. 
For instance, an appropriate delivery method is the use of the lentivirus approach, which is deemed stable and efficient.
  • Tertiary education
    General Knowledge
“Parent Involvement in Education in Terms of Their Socio-Economic Status” Article Analysis Name Institution Affiliation Article Analysis Education plays an important role in society by ensuring that people make informed decisions, whether on issues touching on social, economic, or political setups. Different aspects of education have attracted interest from scholars, researchers, and policymakers. They have produced dynamic publications that address issues touching on the education fraternity. A thorough analysis of such articles is encouraged by scholars in order to note the shortfalls and discrepancies that might structure further topics for research in the future, as well as to improve on current research studies by addressing concerns more deeply. This paper presents an analysis of an academic article titled “Parent Involvement in Education in Terms of Their Socio-Economic Status” by Kuru Cetin and Taskin (2016), published in the Eurasian Journal of Educational Research. The piece addresses the subject of improving the quality of education by considering informal resources, which include the family, and the impacts that parental involvement in education would have. The problem statement, according to Kuru Cetin and Taskin (2016), is that among the objectives of formal education are increasing the quality of education and producing well-qualified students, which can be realized by using both formal and informal resources effectively. The authors note that the family is the most critical informal resource and aim to study families' level of involvement in educational activities at schools according to their socio-economic status. 
The study by Kuru Cetin and Taskin (2016) aimed to examine the perceptions of the primary stakeholders in the education sector, including teachers, parents, and administrators, on the involvement of families in the education process in terms of socio-economic status, in both primary and secondary schools from the public and private sectors. The study used a qualitative method in which the researchers conducted interviews and analyzed relevant documents containing literature on the subject, using a study group drawn from the primary stakeholders in education. The findings of the study note that parents with a high socio-economic status have shown great interest in actively participating in the improvement of their children's education. The study also found that the primary reasons parents engage with schools are to follow the personal development of their children as well as their academic success. New Vocabulary Words: Phenomenological design. A research approach or method that captures individuals' viewpoints and perceptions about a certain phenomenon. Semi-structured interviews. An interview format that allows respondents to give detailed explanations in response to the questions asked. Opinion: The topic is controversial. The title of the work proves difficult to understand and comprehend, as it is not clear and straight to the point. It presents ambiguous clauses such as “socio-economic status” and difficulty in linking it to parents' decision-making in matters of education. The interesting information found in the piece is the different ways parents can, directly and indirectly, get involved in education matters. This makes it easy for readers to understand the challenges presented and helps address the topic effectively. Additional information that should have been included is the incorporation of both qualitative and quantitative research methods in data collection and analysis (Denscombe & Overdrive Inc., 2014). 
Using only one method (qualitative) leaves out data that could be quantified to help readers understand the results and findings more easily (Creswell, 2014). For example, the researchers could have compared, in terms of percentages, the responses of public and private school parents on the different matters related to the subject. References Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, California: SAGE Publications. Denscombe, M., & Overdrive Inc. (2014). The good research guide. S.I.: McGraw-Hill Education. Kuru Cetin, S., & Taskin, P. (2016). Parent involvement in education in terms of their socio-economic status. Eurasian Journal of Educational Research, 66, 105-122. http://dx.doi.org/10.14689/ejer.2016.66.6
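The percentage comparison recommended in this analysis can be illustrated with a short sketch. All response data here are invented for illustration, and the `percentages_by_group` helper is hypothetical, not part of the reviewed study:

```python
# Hypothetical sketch of the suggested percentage breakdown; all
# response data and the helper name are invented for illustration.
from collections import Counter

# invented sample: (school_type, reported_involvement_level)
responses = [
    ("public", "high"), ("public", "low"), ("public", "low"),
    ("private", "high"), ("private", "high"), ("private", "low"),
]

def percentages_by_group(pairs):
    """Return {group: {answer: percent-of-group}} for (group, answer) pairs."""
    totals = Counter(group for group, _ in pairs)
    counts = Counter(pairs)
    return {
        group: {answer: round(100 * n / totals[group], 1)
                for (g, answer), n in counts.items() if g == group}
        for group in totals
    }

print(percentages_by_group(responses))
```

A table of such percentages, one row per school type, would let readers compare public and private parents' involvement at a glance.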
    $ 0.09
    0words / 0page
    READ MORE
  • Tertiary education
    General Knowledge
    Name Professor Course Date MIXXO Mix Recommendation Overview The objective of this project is to provide a plan that will enable Coca-Cola to launch a new product. The new product is a combination of two existing products, Fanta and Sprite, which Coca-Cola intends to mix and sell under the slogan “Half Fanta, Half Sprite.” Coca-Cola is a global brand with numerous product brands. It is vital to note that even though Coca-Cola appears to be a convenience product, it is actually a specialty product. The firm is also a loyalty brand, because customers do not select its products at random. Instead, customers select the products because of their specialty branding or because the products are available in a convenience store. In this case, the product concept will be based on the firm's trademark, its packaging, the mix of two existing products to create a unique product, and the differentiation of the new product from existing products. Basis for the Recommendation Coca-Cola should pursue this product because there is a ready market for new and unique products. Additionally, the firm's logistical resources give it the capability to design and make the new product, as well as to support it in the market. Currently, customer needs are based on personal preferences. It is evident that customers are after products that are healthy and can be conveniently purchased. The customer benefit is that the new product will refresh them in a different way from existing products. There are also health benefits to the customer based on the number of calories in a bottle of the new product. Another benefit is the opportunity to experience a new taste. Customers also have an opportunity to experience new values; for instance, they will have the opportunity to experience new senses and cultures. 
The new product will elevate the customer's sense of taste because it will have a taste that has never existed in the market. The second customer value is the firm's passion for a unique and singular experience, expressed through Coca-Cola's identity; it will also be expressed through the new product. Customers will also get value from the firm's recognition of changing preferences and tastes and the emerging trend of health-conscious customers. This has motivated the firm to offer a wide range of beverages, including beverages with few calories, functional benefits, and natural attributes. The size of the opportunity is large enough to compel Coca-Cola to pursue the new product. Currently, the beverage industry is experiencing an expansion. Studies of the market show upward and onward growth. There are opportunities in new and emerging beverage categories, which have provided additional growth opportunities for the industry. Current projections indicate that the expected compound annual growth of the global soft drink market is 6 percent. Customers will buy the new product because of their desire for new beverage and drinking experiences; they will buy it because it offers them an authentic and nutritious experience. Strategic Fit The concept fits into Coca-Cola's corporate strategy and business-unit product line. It fits into the corporate strategy because Coca-Cola is a renowned market leader in product innovation. Product design plays a crucial role in the value the firm seeks in the market, and Coca-Cola seeks to give customers a refreshing experience through its products. In the case of its product line, Coca-Cola has adopted a strategy of expanding its business line. This has led to an increase in the brands and beverages offered to customers and has enabled customers to recognize Coca-Cola in different ways. 
Forecast The current revenue estimates are $134 million annually for the new product, at an estimated $0.50 per unit. It is projected that revenues will increase by 15 percent annually. Increased revenues are expected from can and bottle sales of the new beverage. The cost estimates are $80 million annually, covering activities such as research and development, logistics, and sales and marketing. The learning curve will show the trend in sales and consumption of the new product; it will also show the break-even point. The capital requirements are large because the firm must acquire a new product line in addition to increasing its logistics capacity. The return on investment will be realized after 15 years, depending on the growth in sales and profitability of the new product. Project Plan Coca-Cola requires a research and development team for the realization of the concept. The research and development team will create the new product based on the framework provided by the Food and Drug Administration for a soft drink. In addition to the research and development team, the firm requires a product line for making and bottling the new beverage. The firm also requires logistical resources for the marketing and distribution of the new product, including warehousing and transportation. The main issue that could affect the new product is timing. Additional roadblocks include competition and Food and Drug Administration regulations and restrictions. Packaging and the resultant taste of the new product could also be roadblocks, because they may not please customers. Customer perception may also affect the performance of the new product in the market if the new product is perceived as unhealthy.
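The forecast figures above can be turned into a simple projection. This is an illustrative sketch only: the $134 million starting revenue, $80 million annual cost, and 15 percent growth come from the forecast, while the flat-cost assumption, the five-year horizon, and the `project` helper are choices of mine:

```python
# Illustrative projection of the plan's forecast. Starting revenue
# ($134M), annual cost ($80M), and 15% growth come from the text; the
# flat-cost assumption and five-year horizon are assumptions.
def project(revenue0=134e6, annual_cost=80e6, growth=0.15, years=5):
    """Yield (year, revenue, profit) with revenue compounding at `growth`."""
    revenue = revenue0
    for year in range(1, years + 1):
        yield year, revenue, revenue - annual_cost
        revenue *= 1 + growth

for year, revenue, profit in project():
    print(f"Year {year}: revenue ${revenue / 1e6:.1f}M, profit ${profit / 1e6:.1f}M")
```

Extending the horizon and comparing cumulative profit against the capital outlay would give a rough check on the stated 15-year payback.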
  • Tertiary education
    General Knowledge
    Mixed-Methods Approach Name Institution
  • Tertiary education
    General Knowledge
    Mixed Method Approach to Teacher Effectiveness and Student Achievement Name University Mixed Method Approach to Teacher Effectiveness and Student Achievement Research Design The study will use a case study approach that integrates the mixed research methodology. Thus, the study will have a quantitative and a qualitative portion. The quantitative portion will include a descriptive analysis of a teacher's average evaluation scores in areas such as environment, instruction, SST benchmark scores, and peer and parent survey scores over a period of two years. The qualitative portion will comprise in-depth interviews with school administrators and teachers. The study can be characterized as a case study because it comprises a detailed and in-depth exploration of administrator and teacher experiences in a comprehensive teacher evaluation program (Biddix, 2015). It will also include their perspectives on the relationship between their performance in the evaluation program and student achievement. The in-depth interviews will be supported by a descriptive analysis of the quantitative data. Thus, the relationship that manifests in the school setting will constitute the unit of analysis, or case. In designing the study, I will locate schools that have a comprehensive teacher evaluation program. I have identified six Marion County public schools that are in the full implementation stage of a comprehensive teacher evaluation program: two elementary, two middle, and two high schools. These schools have a total student population of 1,552 and a total teacher population of 144. It is also important to note that, according to the State Department of Education, the selected schools were rated as excelling, and that the school populations are diverse, with students and teachers from different racial, social, and cultural backgrounds. When planning the mixed-methods study, I considered four critical aspects. 
These aspects are weighting, timing, mixing, and theorizing. It is important to collect the qualitative and quantitative data concurrently. This will provide an opportunity to give each data set equal weight based on a method called the concurrent triangulation strategy. The concurrent triangulation method is a popular method among the six major methods that researchers can use in a mixed-methods model. Through the concurrent triangulation method, researchers collect qualitative and quantitative data within the same period. They then compare the two data sets with the aim of determining convergence. In this mixed-methods approach, concurrent triangulation will be used to compensate for the weaknesses inherent in one method (Borman & Kimball, 2015) with the strengths inherent in the other. In the mixed method I will use in this study, I will place the two data sets side by side with the aim of completing a detailed analysis, followed by a detailed discussion of the findings. The main advantage of the mixed method is that it is a popular research approach whose findings are well substantiated and validated. Mixed Method The specific mixed method approach that will be used in this study is the sequential explanatory approach. The sequential explanatory strategy is one of six methods proposed by Dr. John Creswell (2003). The selection of this method was based on the consideration of factors that led to the generation of information used to select the sequential explanatory design. Developing a mixed methods design is challenging because it incorporates both qualitative and quantitative data collection methods (Burch, 2014). Thus, selecting the best mixed method approach requires a researcher to choose the most appropriate qualitative and quantitative research approaches necessary to address the research questions. 
The first consideration I made in the design of this research approach was my philosophy, capability, and comfort level with the qualitative and quantitative approaches. I also considered the resources at my disposal for completing the research. In this case, I was certain that the approaches I selected were realistic for my parameters and timeframes. I also considered the goals of the study and determined which portions required qualitative or quantitative methods. The main principle of the mixed method approach is that a researcher employs a mix of qualitative and quantitative methods in order to give the research structure complementary strengths while avoiding overlapping weaknesses. This means that a haphazard selection of approaches could lead to a weak research design. Selecting an appropriate method to mix in the mixed methods approach requires purposeful and logical planning and thought. Additional considerations that helped me select the sequential explanatory approach include the type of data to be collected, the period when the data will be collected, whether the data will be collected in stages or simultaneously, and the methods I will use to integrate the data. Based on these considerations, I chose the sequential explanatory research design, which is a two-phase method. In this method, I will begin by collecting the quantitative data; after that, I will collect the qualitative data. In the sequential explanatory design, the objective is to use the qualitative data and results to explain and interpret the quantitative data (Croninger, Rice, Rathbun & Nishio, 2012). For instance, a survey used to collect quantitative data can help researchers recruit the survey respondents for interviews. During these interviews, respondents can explain and provide insights into their survey responses. 
The rationale for selecting the sequential explanatory design is that the quantitative results and data give a researcher a picture of the research problem, and the researcher can then use the analysis of the collected qualitative data to refine, explain, and extend that problem. Site Selection The study will be conducted in Marion County and will target Marion County public schools. I selected two elementary schools, two middle schools, and two high schools, which means that I will use participants from six Marion County public schools. It is important to note that the selected schools have a comprehensive teacher evaluation program. The two elementary schools have a total population of 610 students, the two middle schools a total of 247 students, and the two high schools a total of 695 students. A significant percentage of students in the middle and high schools receive free or reduced-price lunch. Additionally, the parents of students at the three levels of public schools are highly involved and motivated in the school and their children's education. Parents of elementary students are expected to complete volunteer hours at the school each year. All schools selected for this study are classified by the State Department of Education as excelling. There are more white and English-speaking students in these schools than speakers of other languages or members of other ethnic groups. It is also important to note that a significant percentage of the students have disabilities and special needs. The demographics of the teacher population are as diverse as those of the student population: most of the teachers are white English speakers, but there is a significant population of non-white teachers and teachers with other native languages. The six schools selected for the research have a teacher evaluation program that takes into account the design, planning, and implementation of instruction. 
The evaluation program also considers the classroom and school environment, in addition to the teacher's effort in facilitating student achievement. In the selected elementary schools, the student population in a classroom is 24 students, with an instructional assistant and a certified teacher in each classroom. In the middle schools, a classroom has 60 students with two certified teachers and an instructional assistant. In the high schools, one classroom has 50 students with four certified teachers and two instructional assistants. Additionally, the six schools are departmentalized, and a cluster of five students on average has teachers who are highly qualified in specific content areas. In the middle and high schools, students are grouped according to their abilities in reading and mathematics, and both use project-based learning for sciences and arts. The two high schools give students individual learning plans (ILP), which are used during the ILP conferences conducted three times a year with the student, teacher, and parents (Goldhaber & Liddle, 2012). The two high schools also have three categories of teachers: teachers that serve as classroom teachers, or career teachers; classroom teachers that are mentors; and teachers not assigned to a classroom, who serve as coaches or support for the classroom teachers. Career and mentor teachers are certified in a content area. Participants
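The descriptive portion of the quantitative analysis in this design, averaging a teacher's evaluation scores across areas and years, could be sketched as follows. All teacher names and scores are invented, and `average_scores` is a hypothetical helper, not part of the evaluation program:

```python
# Sketch of the descriptive quantitative portion: a teacher's average
# evaluation score across areas and years. Names and scores invented.
from statistics import mean

# invented data: teacher -> list of (area, year, score)
evaluations = {
    "Teacher A": [("environment", 1, 3.8), ("instruction", 1, 3.5),
                  ("environment", 2, 4.0), ("instruction", 2, 3.7)],
    "Teacher B": [("environment", 1, 3.2), ("instruction", 1, 3.0)],
}

def average_scores(evals):
    """Average each teacher's scores over all areas and years."""
    return {teacher: round(mean(score for _, _, score in rows), 2)
            for teacher, rows in evals.items()}

print(average_scores(evaluations))
```

Averages of this kind would then be set side by side with the interview themes, as the concurrent triangulation strategy described above requires.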
  • Tertiary education
    General Knowledge
    Mitigation Plan Part 3 Student's Name Institutional Affiliation Mitigation Plan, Part 3 Introduction The assignment focuses on developing mitigation strategies for cyberbullying among adolescents. Over the last two decades, children have increasingly used technology to bully one another on social media platforms such as Facebook and Instagram. In response to the rising trend of cyberbullying among children, different states in the United States have enacted anti-cyberbullying laws to protect children, especially adolescents, from online bullies. However, many schools and districts lack actual action plans, strategies, and guidelines to effectively address the rising cases of cyberbullying among adolescents. To fight cyberbullying, all stakeholders must work to change the online culture that encourages it. The strategies to fight cyberbullying among adolescents include: Safeguarding Personal Information
  • Tertiary education
    General Knowledge
    Making Biblically-Based Ethical Decisions Name Course Abstract Currently, businesses operating on a local and global scale face a myriad of ethical issues. These issues emerge from their interactions with stakeholders such as employees, customers, and the environment in which they operate. Knowledge of the ethical context in which businesses operate is important in the development of ways to deal with emerging and existing ethical issues. This exploration focuses on the guiding principles that should be applied in making ethical decisions. It also focuses on the ethical frameworks and perspectives used to make ethical decisions.
  • Tertiary education
    General Knowledge
    Mindfulness-Based Treatment and Depression Name University Abstract Mindfulness-based treatment uses cognitively focused therapeutic interventions to provide relief for clients with chronic depression. The purpose of mindfulness-based cognitive therapy is to disempower depressive thoughts and enable the client to adopt thoughts that are more positive. It is a solution-focused, rather than problem-centered, therapeutic intervention. A literature review of fourteen critical studies on mindfulness-based treatment shows that the technique reduces a patient's probability of relapsing for a considerable duration regardless of the age, education, sex, or relationship status of the patient. It is important to note that the literature review compares mindfulness-based treatment with routine therapies for depression such as anti-depressants. The fourteen studies selected for review show the popularity of mindfulness-based treatment in the treatment of depression and, in some instances, multiple sclerosis. This literature review presents an overview of current research on the efficacy of mindfulness-based treatment, especially in comparison with other active control conditions, moderators of treatment outcomes, and the mechanisms of change for the therapy.
  • Tertiary education
    General Knowledge
    Name Instructor Course Date Document 24-4: Militant Suffrage Emmeline Pankhurst, Speech from the Dock (1908) The Women's Social and Political Union (WSPU) was at the forefront of fighting for women's rights in Britain in the late 19th and early 20th centuries, particularly their right to political inclusion. In order to achieve its objective, the movement resorted to tactics that the British government deemed militant. It is based on this perception that some of the movement's leaders and members were arrested and prosecuted. However, the WSPU's members, led by Emmeline Pankhurst, appeared to be using these kinds of tactics as a means of challenging the conventional notions of proper behavior for women at the time. These notions held that women ought to behave in a calm and “peaceful” manner; as such, society perceived women who were militant and combative as unconventional and as breaking social norms. The WSPU's tactics challenged these notions by demonstrating that women could use combative and militant methods to achieve their goals in society. Pankhurst captures this aspect in her speech, where she says that men got their reforms by being impatient and that women can do the same. According to Pankhurst, the WSPU adopted such militant tactics because its members had tried constitutional methods that did not achieve their intended goal of enabling women to enjoy the same political rights as men. In supporting her assertion, she stated that they had tried using feminine influence without any positive outcome. They saw it appropriate to use combative or militant tactics because they had realized that the position of their sex was so deplorable that it was their responsibility to break the law in order to call attention to why they were doing so. 
In further justifying why the WSPU adopted such tactics, Pankhurst argued that its members had been mistreated and harassed even when they had broken no law. It is against this background that they saw no reason to continue following the law in their effort to attain the same political rights as men. Instead, it was necessary for them to use unconstitutional tactics if that was what would enable them to achieve political inclusion. The WSPU's agitation for political enfranchisement, irrespective of the tactics used to realize it, rested on the belief that it was necessary. Pankhurst argued that women had both a right to and a need for political enfranchisement because they performed the same duties as men, if not more, including ordinary duties such as educating their children and earning a living for them. She also argued that women had both a right to and a need for political enfranchisement because they had a role in making British society a better place than it was. Therefore, they needed the same political rights as men in order to have the power to influence decisions on public issues such as taxation and public legislation. In conclusion, Emmeline Pankhurst's “Speech from the Dock” provides a picture of the political challenges that Britain experienced in the late 19th and early 20th centuries. One of the primary challenges was political disenfranchisement, whereby women were denied the political rights that men enjoyed. As has been noted, movements such as the WSPU played an important role in fighting for women's rights and promoting political enfranchisement regardless of gender, race, or socio-economic class. Work Cited Pankhurst, Emmeline. “Speech from the Dock [Police Court]” in Votes for Women (October 29, 1908), 1.
  • Tertiary education
    General Knowledge
    Section I: Concepts and Foundations Q.1. Define cost. How would you distinguish between (a) direct and indirect costs, (b) fixed and variable costs, (c) sunk and future costs, (d) implicit and explicit costs, and (e) controllable and uncontrollable costs? Give an example of each. Answer to Q1: Cost refers to the value or price required for a specific task to be done, or a payment (in money) made for a certain object. Direct costs are monetary valuations that can be traced back to a specific cost object, while indirect costs cannot be directly related to the object or product. Fixed costs are expenses that remain the same regardless of the volume of output, while variable costs depend on the amount of output. Sunk costs are expenses that have already been made and cannot be recovered, while future costs are expenses expected to be incurred on future projects and can vary depending on future values and prices. An implicit cost is the value of what a firm gives up by using its own resources for one purpose rather than another (an opportunity cost), while an explicit cost is a direct monetary payment the firm makes for a factor of production. A controllable cost is one over which a firm has a certain level of management, while uncontrollable costs are costs that are not under the influence of the organization and that no decision of its can alter. Distinguishing between types of costs Direct vs. Indirect Cost: A direct cost can be traced to the specific object that is purchased, and an easy connection can be drawn between the two. Calculations of direct costs are easy because the specific product reveals the factors of production behind the cost. For instance, a company laptop is a direct cost, as money was given directly to the supplier in exchange for the specific asset. However, an indirect cost cannot be directly traced to the product or service that was purchased. 
It is also known as an overhead, as costing such objects requires combining various elements to arrive at the value of the product. For instance, labor, insurance, and utilities are indirect costs: utilities such as electricity must be paid to the supplier, and rent has to be paid to retain premises over the specified intervals, yet neither can be attributed to a single product. Fixed vs. Variable Cost: A fixed cost is one that does not change regardless of changes in production, variety, or quantity. Salaries paid by a company are fixed costs, as the firm must pay them whether it makes a loss or a profit. A variable cost, on the other hand, depends on the organization's factors of production. It can change with differences in the company's investments, incentives, and materials used. For instance, a company's variable costs will depend on the amount of materials used for production, the utilities paid, and the wages incurred during production; labor and capital may increase or decrease depending on the variables of production. Sunk vs. Future Cost: A sunk cost is an expense a company has made with no possibility of having it recovered in the future, either through resale or reversal. For instance, when a firm purchases assets such as computers, the portion of the cost that cannot be recouped through resale is sunk. On the other hand, a company may plan to purchase computers at a future date, and in that case the amount it will pay at that future date is a future cost. A future cost depends on changes that can occur over time, unlike a sunk cost, whose amount is already fixed. Implicit vs. Explicit Cost: An implicit cost is the value of what must be given up when the firm uses its own resources to produce one good or service instead of another. 
It can also be referred to as an imputed or implied cost: most implicit costs involve no actual cash payment, but they are still necessary in an organization's decision-making processes. For example, if a company bought a car for $1,000 and maintenance now amounts to $4,500 a month, the implicit cost of keeping the car is the resale value forgone by not selling it; the original $1,000 is sunk and irrelevant to the decision. In contrast, an explicit cost is an out-of-pocket cost that the company directly incurs and pays out, such as wages, rent, or purchases of materials, and it is recorded in the company's accounts. Controllable vs. Uncontrollable Cost: A controllable cost is one that a company can manage to a certain level, as it can be adjusted by the company's designated manager. For example, wages, salaries, and bonuses are costs that the company can manage effectively without outside influence. Uncontrollable costs, however, exceed the company's ability to make decisions about them. These include government-imposed costs, such as import duties payable after a purchase, over which an organization has no control. In most cases, the company's own decisions cannot alter such costs. Q.2. How does a cost differ from expense and expenditure? How would you reconcile their differences? Explain with an example. Answer to Q2: A cost is the value of economic resources, such as money, used in exchange for the production or delivery of a good or service. An expenditure is the actual outlay or payment made to acquire the good or service at a particular point in time. An expense, in turn, is the portion of a cost that is consumed while performing a particular task or earning revenue during a period. 
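The relationship between the three terms can be shown numerically. A minimal sketch, assuming straight-line depreciation and hypothetical figures (the function name and the $4,000 amount are illustrative, not from the text):

```python
# Hypothetical sketch: a $4,000 computer is bought. The payment at purchase
# is the expenditure, the asset's value is its cost, and straight-line
# depreciation converts that cost into a periodic expense.

def straight_line_expense(cost, salvage_value, useful_life_years):
    """Annual depreciation expense under the straight-line method."""
    return (cost - salvage_value) / useful_life_years

cost = 4000.0  # expenditure at purchase, capitalized as the asset's cost
annual = straight_line_expense(cost, salvage_value=0.0, useful_life_years=4)

print(annual)              # 1000.0 expensed each year
print(annual * 4 == cost)  # True: cumulative expense eventually equals the cost
```

The point of the sketch is that cost, expense, and expenditure coincide only when purchase and consumption happen in the same period; otherwise the expense is spread over time.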
Reconciling the difference between cost, expense, and expenditure: the expenditure is the payment made when an item is purchased at a particular time; the cost is the value of the item so acquired; and the expense is recognized as the item is used up to obtain results. The three can therefore be reconciled through the timing and rate of consumption. It means that goods purchased and consumed in the same period have equal cost, expenditure, and expense, while goods purchased but not immediately consumed show a difference between the expenditure at purchase and the expenses recognized over the following periods. For example, developing a computer may involve an expenditure of $4,000 on direct production, while the same computer may carry a final cost of $50,000 once labor, sales taxes, cost of delivery and repairs, manufacturing, and market supply are included. In both calculations, the company takes into account the level of consumption the computer will have. The expense, then, is the portion of the computer's cost recognized as its utility is consumed after purchase: through depreciation, the cost is gradually expensed over the computer's useful life until its ultimate sale or disposal. Q.3. Define price. Why is it difficult to determine the price of public goods? What pricing method would you use for (a) public utilities, (b) recreation facilities, and (c) a toll road? Why? Answer to Q3: A price is the amount of payment, usually in the form of money, offered in exchange for a particular good or service. 
Public goods are complex to price because their defining characteristics, non-excludability and non-rivalry, mean that consumption by one person does not limit consumption by another. This makes it difficult to calculate prices for public goods compared with the easier calculation for private goods. The levelized pricing method is the most suitable for public utilities, since consumers are required to pay a fixed price set by the government or Public Sector Enterprises (PSE). Because public utilities such as electricity or water are accessed by all consumers, setting a fixed price should take into account all the factors involved in providing the good or service. A consumer is then expected to pay a charge that reflects his or her calculated consumption. The government can also offer subsidies for public utilities, reducing the expenditure borne by consumers. The marginal cost/average cost pricing method is suitable for recreational facilities, since a Public Sector Enterprise does not need to raise or lower the prices of such facilities to earn a profit. Applying marginal cost pricing can allocate recreational facilities more efficiently, and it often suits these services because their price is frequently lower than the average cost of providing them. It is also easier to calculate, as output is evaluated by equating the price to the average cost of the goods and services, in this case the recreational facilities. This is calculated as P = AC. The total cost pricing method is the most suitable for a toll road, as it works on a no-profit, no-loss policy which ensures that the price of the goods and services provided covers the total costs of the items. 
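The average-cost rule P = AC can be checked with a short calculation. The figures and names below are illustrative assumptions, not data from the text:

```python
def average_cost_price(total_fixed_cost, variable_cost_per_unit, output_units):
    """Average-cost pricing: set price P equal to average cost AC,
    so total revenue just covers total cost (no profit, no loss)."""
    total_cost = total_fixed_cost + variable_cost_per_unit * output_units
    return total_cost / output_units  # P = AC

# Hypothetical recreational facility: $50,000 fixed cost,
# $2 variable cost per visit, 40,000 visits a year.
print(average_cost_price(50_000, 2.0, 40_000))  # 3.25 per visit
```

At this price, 40,000 visits generate exactly $130,000 of revenue, matching the facility's total cost.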
In this case, the toll road price is easily calculated by recovering the expenses incurred from the total costs of the toll road. Q.4. What is break-even analysis? What is the difference between a linear and a non-linear break-even analysis? Discuss the assumptions that underlie a linear break-even analysis and explain what happens if the assumptions are relaxed. Answer to Q4: Break-even analysis is a cost analysis method that integrates the costs, revenue, and output of an activity in order to evaluate alternative courses of action for that activity. It works by determining the level of operation at which a given good or service experiences neither loss nor gain, referred to as the break-even point. It categorizes the costs of production into variable and fixed costs and compares them with sales revenues. The underlying logic is that at low levels of activity, cost exceeds revenue, producing a loss, but as activity increases, revenue grows faster than cost until the two are equal; this point of equality is the break-even point. Linear Break-Even Analysis: this evaluates the costs of producing goods and services, calculating the costs, revenues, and minimum output level that must be produced to avoid losses. It uses the relationship between fixed and variable costs to calculate the output, under several conditions: (1) total cost and total revenue are linear functions of output; (2) the fixed cost is independent of output; (3) no other financial costs are incurred; (4) output can be increased without significantly affecting the cost structure; and (5) all output units are charged or sold at the same price. 
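Under conditions (1)-(5), the break-even output follows directly from equating total revenue p*Q with total cost F + v*Q. A minimal sketch with hypothetical figures:

```python
def break_even_units(fixed_cost, price_per_unit, variable_cost_per_unit):
    """Output level at which total revenue equals total cost:
    Q* = F / (p - v), valid only when price exceeds unit variable cost."""
    contribution = price_per_unit - variable_cost_per_unit
    if contribution <= 0:
        raise ValueError("price must exceed unit variable cost")
    return fixed_cost / contribution

# Illustrative figures (assumptions, not from the text):
q_star = break_even_units(fixed_cost=10_000, price_per_unit=25,
                          variable_cost_per_unit=15)
print(q_star)  # 1000.0 units: below this output a loss, above it a profit
```

The denominator (p - v) is the contribution margin per unit; each unit sold contributes that amount toward covering the fixed cost.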
These conditions ensure that calculations over varying levels of output can be represented by straight lines with constant prices and unit costs. Nonlinear Break-Even Analysis: this involves calculating the non-linear relationship between the cost, revenue, and output levels of an activity. It recognizes that costs and revenues are not always proportional to output, and deals with the reality that they vary with changing prices rather than in fixed, constant amounts. Output likewise varies depending on the amount of materials used, labor, and investments made in production. In other words, when the conditions required for a linear break-even analysis are not all met, the analysis becomes non-linear. For instance, changes in the price per unit alter the revenue line; this is what happens when the linear assumptions are relaxed: the break-even point shifts, and there may be more than one such point. Q.5. What is payback period? What are some of its strengths and weaknesses? Why is it important to consider cumulative net flow as opposed to simple net flow, when using the payback period? When should one use time-value of money for payback period? Answer to Q5: The payback period is the length of time a project takes for its cumulative net cash inflows to recover the initial investment, that is, the point at which the cumulative net cash flow reaches zero.
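The definition, and the question about cumulative versus simple net flow, can be illustrated with a short sketch (figures are hypothetical): tracking the cumulative net cash flow shows exactly when the initial outlay is recovered, which a single year's net flow alone cannot.

```python
def payback_period(initial_outlay, net_cash_flows):
    """Years until cumulative net cash flow recovers the initial outlay,
    interpolating within the recovery year; None if never recovered."""
    cumulative = -initial_outlay
    for year, flow in enumerate(net_cash_flows, start=1):
        previous = cumulative
        cumulative += flow
        if cumulative >= 0:
            # fraction of the year needed to cover the remaining shortfall
            return year - 1 + (-previous) / flow
    return None

# Hypothetical project: $10,000 outlay, then uneven annual inflows.
print(round(payback_period(10_000, [3_000, 4_000, 5_000]), 2))  # 2.6 years
```

Note the sketch ignores the time value of money; a discounted payback period would apply a discount factor to each year's flow before accumulating.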
  • Tertiary education
    General Knowledge
    Business/Purchasing, Procurement and Contracts Management Name of Student Name of Institution Business/Purchasing, Procurement and Contracts Management Part A The performance of the Nestle stock and the General Mills stock in the last two years has been analyzed using the charts reflected below. The charts demonstrate the trend of each stock's price changes over the last two years. Equally, the capital gain or loss realized by each stock has been computed to determine whether the shareholders of the two companies realized wealth maximization in the last two financial years. [Figure: Nestle stock price performance chart]
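The capital gain or loss computation described above can be sketched as follows; the prices are illustrative assumptions, not the actual Nestle or General Mills data:

```python
def capital_gain_pct(buy_price, sell_price):
    """Percentage capital gain (positive) or loss (negative) over the holding period."""
    return (sell_price - buy_price) * 100.0 / buy_price

# Hypothetical two-year holding: bought at 80.00, now trading at 92.00.
print(capital_gain_pct(80.0, 92.0))  # 15.0 (a 15% gain for shareholders)
```

A positive result over the period indicates that the stock contributed to shareholder wealth maximization; a negative result indicates a capital loss.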


Other Tertiary education Essays

  • Tertiary education
    Genetically modified organisms (GMO)
    GMO How are genetically modified organisms different from non-genetically modified organisms? Genetically modified organisms are animals, plants, and other organisms whose genetic composition was altered using genetic recombination and modification techniques performed in a laboratory. On the other hand, non-GMO organisms are those produced naturally, without modification (The Organic & Non-Organic Report 2017; Rumiano Cheese 2011; Non-GMO Project 2016). The recent acts of activists intent on destroying research plots involved plants altered by molecular as well as classical genetic techniques. Is it possible to distinguish between plants altered by classical genetics and those altered by modern techniques? If it is possible, how is it done? It is possible, and the distinction can be made by checking the DNA of the organism. Thion et al. (2002) conducted an experiment on extracting and purifying DNA from soybeans to check whether a sample was transgenic. Meanwhile, Schreiber (2013) adds that detection can also be done through biochemical means, in which the GMO content present is measured. To isolate and amplify a piece of DNA, the polymerase chain reaction (PCR) technique is used to make millions of copies of the DNA strands; with millions of copies, it is easier to visualize the difference between altered and non-altered DNA. What safeguards are in place to protect Americans from unsafe food? Are these methods science-based? Mention at least 2 methods. The US government safeguards Americans from unsafe foods through the US Food and Drug Administration (FDA). Its methods are science-based, e.g., its whole-genome sequencing technology and its measures for controlling microbial hazards. Whole-genome sequencing is used by the FDA to identify pathogens isolated from food. 
The FDA also safeguards foods by controlling microbial hazards through the elimination of growth and the reduction of growth. Elimination methods include heating or freezing, while growth-reduction methods involve controlling acidity, temperature, and water activity (Bradsher et al. 2015, pp. 85-86; FDA 2007; FDA 2013). Name at least 10 examples of harm to citizens from unsafe food. What percentage of these illnesses was caused by genetically modified organisms? If so, mention an example. Some examples of harm to people from unsafe foods are diseases ranging from diarrhea to cancer, caused by eating foods contaminated with viruses, bacteria, chemical substances, and parasites. Around 600 million people around the world fell ill after consuming contaminated food, and diarrheal diseases cause around 125,000 deaths of children aged 0-5 (WHO 2015). Based on studies by IRT (2011), foods from genetically modified organisms cause damage to the immune system, the gastrointestinal tract and other organs, as well as infertility and accelerated aging. These happen because residues or bits of GMO material can remain inside a person's body, which can eventually cause long-term problems. Statistics show that in the 9 years after the introduction of GMOs to the market, the proportion of Americans with chronic illnesses rose from 7% to 13%, and other conditions such as digestive problems, autism, and reproductive disorders are rising (IRT 2011).
  • Tertiary education
    ‘Globalisation is good’ or ‘is it not?’
    ‘Globalisation is good’ or ‘is it not?’ Globalisation is good because it opens doors of opportunity to many. It is the reason for the broad and speedy worldwide interconnectedness of current social life – from the cultural to the criminal and from the financial to the spiritual. This is synonymous with having a borderless world, though critics argue that globalisation has in fact disconnected the world from its national geographical divisions – the countries (Yoong & Huff 2007). Although some discount the benefits of globalisation to the world, I still consider globalisation to be the driving force in the global partnerships between companies that have created more opportunities and jobs. World trade may have plunged, the dollar dwindled, and commodities slumped, but overall, globalisation has brought good to the peoples of the world. Globalisation through the internet has unlocked the doors to the sharing of cultures, knowledge, goods and services between peoples of all countries, and modern technologies have lifted the barriers to speedy transfer. The case of Inditex marketing its Zara brand globally shows that in business, one formula does not fit all. Every country has its own culture and styles, and a business going global must do its homework well before entering a new market. Inditex’s Zara brand was a success with Europeans but struggles in America and is still trying its luck with the Chinese. Despite these differences, the company keeps going global because it needs new markets and knows it will be opening bigger opportunities and jobs to more people (La Coruna 2012). Moreover, globalisation has also done well by the manufacturing sector. Statistics show that global industrial output in 2010 was fifty-seven times greater than production in 1900. Globalisation has also changed the way things are produced. 
Manufacturers going global take advantage of the skills and production costs available in different countries. This means the design of a product may be done in the US, manufactured in China or Taiwan, then assembled in the Philippines. So every item – be it an iPad, a doll or a washing machine – is collaboratively produced by the best skilled workers in the world at the lowest labor cost (The Economist 2012). Consequently, since the product was a collaboration of different countries, it can also be marketed and patronized in those countries (The Economist 2012). However, some openly argue that globalisation has failed to deliver its much-publicized benefits to the poor. A Filipino economist, Walden Bello, coined the term “deglobalisation” to describe the present global economic situation, pointing to the downturn of the economies of big countries such as Singapore, Taiwan, Germany, Japan and Brazil. Yet the poor countries are the ones showing faster growth than the rich countries, which suggests globalisation is still good because of the opportunities it gives to the needy. On the other hand, Dunning et al. (2007) claim that current inclinations in the global economy reflect a more distributed rather than a geographically concentrated sharing of multinational enterprise activity and foreign direct investment, and the carrying out of globally oriented transactions. Contrary to common belief, globalisation is not a new thing in the global business world. According to McMahon (2004), it has existed since the late fifteenth century, when a society of nations consisting of the countries of Northern Europe entered the rest of the world through exploration, trade and then conquest. This process, which involved the exploitation of wealth and power by the European voyagers, led to industrialization in Britain, then mass international industrialization and eventually globalisation (McMahon 2004). 
Sheel (2005) adds that the interchange of technology and markets between countries has been among the first human innovations since the most primitive times. Globalisation was then termed “exchange”, where a country’s surpluses were exchanged with the surpluses of peoples from other countries. This old system of exchange developed, continued to grow and rose to greater heights in modern times (Waters 2001, as cited in van Krieken et al. 2006). Robertson (2003) asserts that globalisation is inherent in people, motivated by their desire for self-interest and cooperation for survival. The author theorizes that globalisation arose from the interconnectedness encouraged by social, political, economic and technological growth, acting as catalysts for both local and global developments (Robertson 2003). Robertson (2003) claims that globalisation emerged in three waves – from 1500 to 1800 for the first wave, the 18th century up to the 20th century for the second wave, and after World War 2 for the third wave. However, Sheel (2008) categorizes globalisation into four phases – the 1st phase took place in the 16th century, the 2nd phase in the late 18th century, the 3rd phase during the 19th to 20th centuries, and the fourth phase at the end of the 20th century. According to the analysis of Robertson (2003), the first wave (1500 to 1800) saw the upsurge of colonization, invasion, imperialism, misery of indigenous peoples, migration, and changes in politics, economy and culture. The first wave encouraged the creation of interconnectedness between peoples, countries and cultures, instigated by commerce and trade. The second wave (18th to 20th century) was characterized by the start of the Industrial Revolution, paving the way for industrialization and increases in income and profits, especially for those who had technological skills. 
The trade routes created during the first wave were utilized by manufacturers sourcing raw materials from other countries. By the end of the second wave, however, civil conflicts arose in many countries, along with the unfortunate events of World Wars 1 and 2 and the Great Depression. The third wave of globalisation transpired after World War 2. This was the phase when European economies were down whilst the USA enjoyed a flourishing economy with a tough industrial foundation and a strong military. In the latter part of the third wave (the middle of the 20th century), the growth of globalisation was challenged by the emergence of communist ideology and the military force of the Soviet Union. This challenge resulted in the Cold War between the USA and the Soviet Union, which ended when the Soviet Union collapsed in 1989 (Robertson 2003). In addition to Robertson’s analysis, Sheel (2005) adds that there exists a fourth phase of globalisation, at the end of the 20th century, when developing and developed countries merged as partners in cross-border trade and investment, stimulating the convergence of India and China. However, issues about globalisation’s worthiness have surfaced; some critics, consisting of anti-globalisation groups, argue that globalisation by corporate organisations has increased poverty and inequality (Engler 2007). A study was made by the World Value Survey regarding globalisation, and 57% of the survey respondents considered globalisation to be good. Most of the approving respondents were optimistic that globalisation would encourage the improvement of workers’ working conditions, economic equality, global peace, global stability and human rights (Leiserowitz et al. 2006). Still, anti-globalisation groups insist that poverty, homelessness and environmental destruction will worsen if globalisation continues, as it only centers on increasing trade and investment but ignores environmental protections and human rights (Engler 2007). 
But Edwards & Usher (2008) comment that the argument of the anti-globalisation groups is only superficial, because despite their protests against globalisation they still engage in globalisation practices, such as the use of computers, the internet and mobile phones to disseminate their opposition. This shows that these protesters are only selective in their opposition: they are not against the good effects of globalisation on communication, only against its capitalist aspect. The inequality of wealth and poverty is one of the issues that has plagued globalisation, with critics claiming that it makes the poor countries poorer and the rich countries richer as the latter exploit and amass the wealth of the minority country. But Holmes et al. (2007) reason that there is naturally a big difference in the distribution of benefits, as the developed country provides the money or capital whilst the developing (minority) country offers its resources and labor. This set-up ends up with the developed country that provided the financial capital getting the bigger share of the profit. However, one aspect of globalisation that has really brought good benefits to people is technological globalisation. Dahlman (2007) describes technological globalisation as the development of knowledge and skills through research by capable engineers and scientists, and the offering of these to countries that have no inventive capability of their own. The acquisition of these inventions by other countries enables them to achieve technology transfer. Technologies can be transferred through technical assistance, direct foreign investment, importation of goods and components of products, licensing, copying and reverse engineering (Dahlman 2007). The advancement of communication technology through networking has opened more opportunities and economic growth. 
In addition, the video by Johan Norberg entitled “Globalisation is good – the case of Taiwan” illustrates the importance of globalisation in uplifting the conditions of poor countries. The video presents two formerly poor countries – Taiwan and Kenya – and compares and contrasts what they have become 50 years later. Taiwan became 20 times more prosperous than Kenya, whilst Kenya remained a poor country. Norberg explains that the reason for this difference is the globalisation that Taiwan embraced 50 years ago. Taiwan allowed capitalists to invest in the country whilst it provided the resources and labor. Moreover, Taiwan allowed the integration of its economy into global trade whilst Kenya continued to shun globalisation. The video also presents the role of multinational companies like Nike, which employs the labor force of Vietnam in its so-called sweatshops. Instead of being exploited, the Vietnamese were given good working conditions, higher salaries and more benefits. Contrary to the claim of anti-globalisation groups that multinational investors will only exploit local workers, Vietnamese workers were given the opportunity to rise from poverty through the work provided for them by globalisation. 
Though critics argue that it only exploits and amasses the wealth of the poor country, Norberg was right when he said that if this is exploitation, then the world’s problem is that the poor are not being “exploited” properly. The case of Taiwan and Kenya is already an eye-opener to those who still shut the door on globalisation. There may be ups and downs in the world of business, but not everything can be blamed on globalisation, because globalisation is only a method of interaction and not the one making the business or the deal. As the case of Inditex’s Zara brand discussed earlier shows, one formula does not fit all: a business going global must prepare well for each market’s culture and styles, yet the company keeps expanding because new markets open bigger opportunities and jobs to more people. This proves that globalisation brings good to many, but one must know how to diversify and take advantage of its various benefits to reach greater success in the future.
  • Tertiary education
    Explicit Teaching
    Explicit Teaching Introduction Not all students are equal. Some are fast learners; others need assistance, while others are unruly – not because they misbehave intentionally, but because they suffer from learning disabilities causing hyperactivity, inattention and impulsiveness. Some adjustments are needed in the learning environment, and these adjustments should be tailored to the individual learning needs of the students. Explicit teaching provides active communication and interaction between student and teacher, and it involves direct explanation, modeling and guided practice (Rupley & Blair 2009). This paper will demonstrate explicit teaching applied to a class scenario with students suffering from a learning disability known as Attention Deficit/Hyperactivity Disorder. Furthermore, a lesson will be developed featuring an example of an explicit teaching approach, showing how to differentiate the lesson to meet the needs of every student, with or without a learning disability, before finally concluding. 2A: ET Creating a Scenario One of the learning disabilities encountered is AD/HD or Attention Deficit/Hyperactivity Disorder, a neurological disorder likely instigated by biological factors that affect chemical messengers (neurotransmitters) in specific parts of the brain. In this type of learning disability, the parts of the brain that control reflective thought and the restraint of ill-considered behavior are affected by slight imbalances in the neurotransmitters (ADCET 2014). AD/HD is characterized by hyperactivity, inattention and impulsiveness. Students with AD/HD are those who never seem to listen, cannot sit still, do not follow instructions no matter how clearly the instructions are presented to them, or who interrupt others and blurt out improper comments at improper times. Moreover, these students are oftentimes branded as undisciplined, troublemakers or lazy (NHS 2008). 
In managing students with AD/HD, some adjustments in the learning environment are needed, and these adjustments should be tailored to the individual needs of the student. It should be noted that persons with AD/HD show different manifestations, and the nature of the disability, as well as its effect on the student’s learning, also varies (ADCET 2014). Direct instruction is considered one of the best approaches for teaching students with AD/HD, but it must be used skilfully, and the teacher should think of strategies to prevent it from becoming boring. Killen (2003) states that in using direct instruction, the teacher should emphasise teaching in small steps so the student is able to practice every step under guidance and achieve a high level of success. In teaching a student with AD/HD, creative presentation of course material is advisable, and this can be done through visual aids and hands-on experience that stimulate the student’s senses. The teacher may also use personal stories, such as the student’s own ideas and experiences (Killen 2003). It will also help if the teacher encourages the student with AD/HD to sit at or near the front of the classroom to limit distractions (Tait 2010). Telling the student what the teacher wants him to learn or be able to do – such as reading, writing, etc. – will aid the student’s understanding of the lesson. The teacher should present the lesson at a pace the student can handle, neither too slow nor too fast. Important points should be emphasised so the student realises their significance. To check whether the student understands the lesson, the teacher may ask questions, and if the student cannot answer, the teacher should re-explain whatever the student finds confusing. New words or terms should be explained through examples. Assigning colors to different objects is a good visual aid for processing visual information. 
To help the student with AD/HD process written material, the teacher may use as many verbal descriptions as possible. A list of acronyms and terms will also help, as well as a variety of teaching formats like films, flow charts or handouts. At the end of the lesson, a summary should be given, stressing the important points of the lesson. 2B: ET Lesson Plan Key Learning Area: Math Stage: 7 Year level: Year 7 Unit/Topic: Algebra Learner Outcomes: This lesson focuses on essential algebraic topics intended to prepare students for the study of Algebra and its applications. Students are introduced to topics involving mathematical operations with whole numbers, decimals and integers. Upon completion of this lesson, students are expected to answer and use mathematical language to show understanding; use reasoning to identify mathematical relationships; and continue and be familiar with repeating patterns. Indicators: At the end of the lesson, students are able to recognise what comes next in repeating patterns, identify patterns used in familiar activities, recognise an error in a pattern, simplify algebraic fractions, factorise quadratic expressions and operate with algebraic expressions. Resources: Whiteboard, colored visual aids, workbooks and class notes where the procedures are listed. Prior Knowledge: Students possess basic math knowledge (addition, subtraction, multiplication and division). They also have a basic understanding of terms such as whole numbers, positive, negative, decimals and integers. Assessment Strategies: To assess the students’ learning, students will be asked to perform mathematical operations. Their answers will be checked, marked and recorded, and those who are unable to answer correctly will be asked what it is they find confusing. For students with a learning disability, their computations will be checked and evaluated, and comments on each student’s performance will be recorded in a record book.
    Ethical Promotion Paper (Nursing)
Ethical Promotion Paper
In today's globalized world, the use of electronic health records significantly helps in sharing patient information with other healthcare providers across health organizations, giving patients better access to health care, decreasing costs and improving the quality of care (Ozair et al. 2015). However, the increasing use of electronic health records over paper records sometimes generates ethical issues that deserve attention. Nurses are bound to follow the Code of Ethics, and sharing of patient information, even digitally, should be done within the right conduct. This paper will discuss the article written by Ozair, Jamshed, Sharma & Aggrawal (2015) entitled "Ethical issues in electronic health records: a general overview", which was published in Perspectives in Clinical Research. My thoughts on the role that health care professionals should play in resolving the said ethical issues will also be discussed, as well as the specific theory that supports my position.

Article's Summary
Ozair et al. (2015) aimed to explore the ethical issues created by the use of electronic health records (EHR), as well as to discuss possible solutions. Although the use of digital health records can improve patients' quality of healthcare and decrease costs, transferring or sharing information through digital technology poses hazards that could lead to security breaches and endanger the safety of information. When a patient's information or health data are shared with others without the patient's consent, the patient's autonomy is put at risk. An electronic health record contains the patient's health data, including his/her medical diagnoses, history, immunization dates, treatment plans and laboratory results. Every person has the right to privacy and confidentiality, and his or her information can only be shared if he or she permits it or if it is dictated by law.
If the information was shared in the course of a clinical interaction, then it should be treated as confidential and protected. The confidentiality of information can be protected by allowing only authorized personnel to have access; thus, users are identified and assigned passwords and usernames. However, these measures may not be enough to protect the confidentiality of patient information, and stronger security and privacy policies are needed. According to a survey cited by the authors, around 73% of doctors communicate with other doctors by text about work, and when mobile devices are lost or stolen, the confidentiality of patient information is put at stake. Hence, security measures such as intrusion detection software, antivirus software and firewalls should be used to protect the integrity of data and maintain patient confidentiality and privacy. When patient data are transferred, there is a possibility of the data being lost or destroyed, especially when errors are made during the "cut and paste" process. The integrity of data may also be compromised when the physician uses a drop-down menu and his/her choices are limited to those available in the menu, causing him/her to select the wrong option and leading to serious errors. However, the authors claim that these ethical issues can be resolved through the creation of an effective EHR system, involving clinicians, educators, information technologists and consultants in its development and implementation.

My Thoughts on the Role of Health Care Professionals
The role of health care professionals is vital in ensuring that patients' rights to privacy and confidentiality are observed even in the use of electronic health records (EHR). Patients' human rights in care include their rights to confidentiality and privacy (Cohen & Ezer 2013).
To ensure that no ethical issues arise from the use of EHR, health care professionals should be properly informed about the importance of the system, as well as the ethical issues that could arise if patients' rights are not properly observed. Hence, it is vital that health care professionals' knowledge of the right implementation of EHR starts with their education curriculum, as well as with continuous training and nurses' participation in the EHR workflow (Koolaee, Safdan & Bouraghi 2015). Computer literacy is a must for health care professionals, to ensure that health data are not lost or destroyed during sharing and that medical errors are not committed.

Conclusion
The use of electronic health records improves and increases efficiency in patient care, as well as patients' access to care across health organizations. However, health care professionals should never ignore patients' rights to privacy and confidentiality, so patients should be properly informed whenever their health data need to be shared with others, in order to avoid ethical issues.

List of References
Cohen J. & Ezer T. (2013). ‘Human rights in patient care: a theoretical and practical