Wednesday, November 27, 2019

Cold Dark Matter (CDM)

The universe is made up of at least two kinds of matter. Primarily, there's the material we can detect, which astronomers call baryonic matter. It's thought of as ordinary matter because it's made of protons and neutrons, which can be measured. Baryonic matter includes stars and galaxies, plus all the objects they contain. However, there is also stuff out there in the universe that can't be detected through normal observational means. Yet it does exist, because astronomers can measure its gravitational effect on baryonic matter. Astronomers call this material dark matter because, well, it's dark. It doesn't reflect or emit light. This mysterious form of matter presents some major challenges to understanding a great many things about the universe, going right back to the beginning, some 13.7 billion years ago.

The Discovery of Dark Matter

Decades ago, astronomers found that there wasn't enough mass in the universe to explain things like the rotation of stars in galaxies and the movements of star clusters. Mass affects an object's motion through space, whether it's a galaxy, a star, or a planet. Judging by the way some galaxies rotated, for example, it appeared that there was more mass out there somewhere, but it wasn't being detected. It was somehow missing from the mass inventory astronomers assembled using stars and nebulae to assign a galaxy a given mass. Dr. Vera Rubin and her team were observing galaxies when they first noticed a difference between expected rotation rates (based on estimated masses of those galaxies) and the actual rates they observed. Researchers began to dig more deeply into figuring out where all the missing mass had gone. They considered that perhaps our understanding of physics, i.e. general relativity, was flawed, but too many other things didn't add up. So, they decided that perhaps the mass was still there, but simply not visible. While it is still possible that we are missing something fundamental in our theories of gravity, the second option has been more palatable to physicists. Out of that revelation was born the idea of dark matter. There's observational evidence for it around galaxies, and theories and models point to the involvement of dark matter early in the universe's formation. So, astronomers and cosmologists know it's out there, but haven't yet figured out what it is.

Cold Dark Matter (CDM)

So, what could dark matter be? As of yet, there are only theories and models. They can be slotted into three general groups: hot dark matter (HDM), warm dark matter (WDM), and cold dark matter (CDM). Of the three, CDM has long been the leading candidate for what this missing mass in the universe is. However, some researchers still favor a combination theory, where aspects of all three types of dark matter exist together to make up the total missing mass. CDM is a kind of dark matter that, if it exists, moves slowly compared to the speed of light. It is thought to have been present in the universe since the very beginning and has very likely influenced the growth and evolution of galaxies, as well as the formation of the first stars. Astronomers and physicists think that it's most likely some exotic particle that hasn't yet been detected. It very likely has some very specific properties: it would have to lack any interaction with the electromagnetic force. This is fairly obvious, since dark matter is dark; it doesn't interact with, reflect, or radiate any type of energy in the electromagnetic spectrum.
However, any candidate particle that makes up cold dark matter would have to interact with a gravitational field. For proof of this, astronomers have noticed that dark matter accumulations in galaxy clusters wield a gravitational influence on light from more distant objects that happens to be passing by. This so-called gravitational lensing effect has been observed many times.

Candidate Cold Dark Matter Objects

While no known matter meets all of the criteria for cold dark matter, at least three theories have been advanced to explain what CDM might be (if it exists).

Weakly Interacting Massive Particles: Also known as WIMPs, these particles, by definition, meet all the requirements of CDM. However, no such particle has ever been found to exist. WIMPs have become the catch-all term for all cold dark matter candidates, regardless of why the particle is thought to arise.

Axions: These particles possess (at least marginally) the necessary properties of dark matter, but for various reasons are probably not the answer to the question of cold dark matter.

MACHOs: This is an acronym for Massive Compact Halo Objects, which are objects like black holes, ancient neutron stars, brown dwarfs, and planetary objects. These are all non-luminous and massive. But because of their large sizes, in terms of both volume and mass, they would be relatively easy to detect by monitoring localized gravitational interactions. However, there are problems with the MACHO hypothesis. The observed motion of galaxies, for instance, is uniform in a way that would be hard to explain if MACHOs supplied the missing mass. Furthermore, star clusters would require a very uniform distribution of such objects within their boundaries, which seems very unlikely. Also, the sheer number of MACHOs would have to be fairly large in order to explain the missing mass.

Right now, the mystery of dark matter doesn't have an obvious solution. Astronomers continue to design experiments to search for these elusive particles. When they figure out what dark matter is and how it is distributed throughout the universe, they will have unlocked another chapter in our understanding of the cosmos.

Edited by Carolyn Collins Petersen.

Saturday, November 23, 2019

The Woman Who Explained the Sun and Stars

Today, ask any astronomer what the Sun and other stars are made of, and you'll be told, "Hydrogen and helium, and trace amounts of other elements." We know this through the study of sunlight, using a technique called spectroscopy. Essentially, it dissects sunlight into its component wavelengths, called a spectrum. Specific characteristics in the spectrum tell astronomers what elements exist in the Sun's atmosphere. We see hydrogen, helium, silicon, carbon, and other common elements in stars and nebulae throughout the universe. We have this knowledge thanks to the pioneering work done by Dr. Cecilia Payne-Gaposchkin throughout her career.

In 1925, astronomy student Cecilia Payne turned in her doctoral thesis on the topic of stellar atmospheres. One of her most important findings was that the Sun is very rich in hydrogen and helium, more so than astronomers thought. Based on that, she concluded that hydrogen is THE major constituent of all stars, making hydrogen the most abundant element in the universe. It makes sense, since the Sun and other stars fuse hydrogen in their cores to create heavier elements. As they age, stars also fuse those heavier elements to make more complex ones. This process of stellar nucleosynthesis is what populates the universe with many of the elements heavier than hydrogen and helium. It's also an important part of the evolution of stars, which Cecilia sought to understand.

The idea that stars are made mostly of hydrogen seems like a very obvious thing to astronomers today, but for its time, Dr. Payne's idea was startling. One of her advisors, Henry Norris Russell, disagreed with it and demanded she take it out of her thesis defense. Later, he decided it was a great idea, published it on his own, and got the credit for the discovery. She continued to work at Harvard, but for a time, because she was a woman, she received very low pay, and the classes she taught weren't even recognized in the course catalogs of the day.

In recent decades, the credit for her discovery and subsequent work has been restored to Dr. Payne-Gaposchkin. She is also credited with establishing that stars can be classified by their temperatures, and she published more than 150 papers on stellar atmospheres and stellar spectra. She also worked with her husband, Serge I. Gaposchkin, on variable stars. She published five books and won a number of awards. She spent her entire research career at Harvard College Observatory, eventually becoming the first woman to chair a department at Harvard. Despite successes that would have gained male astronomers of the time incredible praise and honors, she faced gender discrimination throughout much of her life. Nonetheless, she is now celebrated as a brilliant and original thinker for contributions that changed our understanding of how stars work.

As one of the first of a group of female astronomers at Harvard, Cecilia Payne-Gaposchkin blazed a trail for women in astronomy that many cite as their own inspiration to study the stars. In 2000, a special centenary celebration of her life and science at Harvard drew astronomers from around the world to discuss her life and findings and how they changed the face of astronomy. Largely due to her work and example, as well as the example of women who were inspired by her courage and intellect, the role of women in astronomy is slowly improving, as more select it as a profession.

A Portrait of the Scientist Throughout Her Life
Dr. Payne-Gaposchkin was born Cecilia Helena Payne in England on May 10, 1900. She became interested in astronomy after hearing Sir Arthur Eddington describe his experiences on an eclipse expedition in 1919. She then studied astronomy, but because she was female, she was refused a degree from Cambridge. She left England for the United States, where she studied astronomy and received her PhD from Radcliffe College (which is now part of Harvard University).

After she received her doctorate, Dr. Payne went on to study a number of different types of stars, particularly the very brightest high-luminosity stars. Her main interest was to understand the stellar structure of the Milky Way, and she ultimately studied variable stars in our galaxy and the nearby Magellanic Clouds. Her data played a large role in determining the ways that stars are born, live, and die.

Cecilia Payne married fellow astronomer Serge Gaposchkin in 1934, and they worked together on variable stars and other targets throughout their lives. They had three children. Dr. Payne-Gaposchkin continued teaching at Harvard until 1966 and continued her research into stars with the Smithsonian Astrophysical Observatory (headquartered at Harvard's Center for Astrophysics). She died in 1979.

Thursday, November 21, 2019

The Social Movement of the 21st Century

This most diverse of cities has gone through wave after wave of progressive social activism in its history, from the rising tide of immigration from Asia and Latin America to America's counterculture: the Beat Generation, the hippies in Haight-Ashbury, and the gay rights movement. In the 1950s there was the civil rights movement, through which Black people appealed for liberation. In the 1960s and '70s came the women's liberation movement, and in the '80s and '90s, environmental activism. In the latter half of the twentieth century, the term "globalization" was coined, and this leads us to the question: what would be the ideal social movement of the 21st century?

Globalization encouraged the development of networks, identities, and opportunities for organizations across borders. For that matter, even when social movements never place a toe in transnational waters, the fact that their societies are affected by globalization makes their domestic actions part of global civil society. Some have begun to posit the development of a whole new spectrum of transnational social movements; others have focused on one particular movement, like human rights, the environment, or the concerns of indigenous peoples; still others focus on cultural forms, deducing from the collapse of old meta-narratives a groping across borders toward new cultural codes and connections. To the extent that many such networks continue to appear, we can expect to see more boomerangs whizzing across transnational space. However, it is as yet unclear how they relate to the existing domestic system, to international organizations, or to domestic social actors in their "target": Do they depend indirectly on the power of the domestic social networks that they come from? Do they depend on the support of international organizations? If so, how far beyond the policies of these organizations can their campaigns go? Are they

Wednesday, November 20, 2019

The Cheesecake Factory Marketing Plan

This essay discusses how the Cheesecake Factory is one of the great American success stories. The company has won awards such as the 2010 Zagat awards for best dessert and best salads and the 2010 Halo gold award for best cause-marketing event. The Cheesecake Factory is planning to expand its business over the next five years into several markets: the United Arab Emirates, Bahrain, Kuwait, the Kingdom of Saudi Arabia, and Qatar in the Middle East and North Africa, as well as Europe, Russia, and Turkey. One market where the Cheesecake Factory can trade is Europe. The European market is expected to grow at an annual rate of 0.37% over 2008-2011. The top companies are expected to supply at a rate of 23%. The largest market is Germany, with a total share of 25%.

The company deals in bakery products. It produces good-quality cheesecakes and puts a great deal of innovation into its products. The new product to be launched is the 'Green Tea Cheesecake'. The company's target market is people who love bakery products and desserts. The restaurant environment is one that makes customers love the place and its offerings. Its target audience ranges from kids to elderly people.

The US has long run a trade deficit in bakery products, and its deficit was largest with Canada and Europe. The situation improved in 2002, when the import value stood at $685 million. Since then the US market has been growing in terms of bakery products. Since trade conditions have improved, the company can take advantage and expand its business into countries where conditions have improved. The company has been dealing in bakery products and is planning to expand into other countries; the best option would be to expand into Europe. The company is a sole proprietorship. It started its business in 1978 and has remained among the top companies since. The factory was started by Oscar and Evelyn Overton and was eventually handed over to their son David, who founded The Cheesecake Factory restaurant in Beverly Hills, California.

Marketing Plan

The European market produces about 25 million tons of bread each year, with industrial plants accounting for 8 million tons. Craft bakers account for about 48% of total bread production by volume, while plant bakeries hold 75% to 80% of the retail market share. The fastest-growing sector is in-store bakeries, as the retail market is booming and retailers gain market share (The Federation of Bakers, 2007). The retail bakery market in Europe was estimated at 65.5 billion by 2000. There has been variation among the European countries: in states like Italy and Sweden, the market grew from 12% to 17% in 2000, but in large markets like France and Germany it grew by only 4% and 3% respectively. Per capita consumption in Europe is generally high compared to other regions (Payne, 2003, p.25). There has been a good trend in the European bakery market, so the Cheesecake Factory should be able to adapt itself to the European market. The market presents a good opportunity, and the potential to grow is high. The industry is bakery products, so there are few barriers to entry and exit. The European market is an open market, and the barriers to entry are low. The European

Sunday, November 17, 2019

A Review on Lifeboat Ethics

"Lifeboat Ethics: The Case Against Helping the Poor" is a famous essay written in 1974 by Garrett Hardin, a human ecologist. This article aims to re-examine the lifeboat ethics the author developed to support his controversial proposal. In the metaphor, the world is compared to a lifeboat with a carrying capacity of 60. There are 50 people on board, representing the comparatively rich nations, while the 100 others swimming in the ocean outside the lifeboat stand for the poor nations. To solve the dilemma of whether the swimmers should be allowed to climb aboard at the risk of the lifeboat's safety, Hardin suggested that no admission should be granted to the boat, or to put it plainly, that no humanitarian aid should be offered to the poor countries.

Regardless of the additional real-world factors the author took into consideration in the essay, in my opinion the basic metaphor itself is questionable. Firstly, the status of the lifeboat is not an accurate reflection of reality. Arguably, the natural resources of the earth are finite; however, this does not amount to a scarcity of resources under the control of the rich nations. On the contrary, in the developed countries today, what the rich consume is out of proportion to their actual needs, which not only leads to colossal waste each year but also creates disposal problems. A familiar example is the popularity of weight loss in the Western world, which is not solely a way of pursuing beauty but also a clear indication of the growing number of obese people who consume food excessively. In contrast, in the third world, especially in poverty-stricken nations like Ethiopia, millions of people endure untold suffering. They drag themselves along the street from day to day, begging for only a slice of stale bread. Due to the unfair distribution of resources caused by affluent people's favorable political position, most rich nations currently obtain more than enough resources, and they are still casting greedy eyes on the untapped poor regions. In light of these facts, the people on board in the lifeboat metaphor actually occupy more room than they need, and the real carrying capacity of the lifeboat is more than 60. With no admission given to those swimmers who are in need, the room is not allocated "to each according to his needs," a principle the author cited in explaining the rationale behind the lifeboat ethics.

The second doubtful point is related to Hardin's computation of conscience. In defense of the survivors' guilt arising from not helping the poor, he claimed that "the net result of conscience-stricken people giving up their unjustly held seats is the elimination of that sort of conscience from the lifeboat". He defined guilt about one's good luck as a type of conscience, and the newcomers' lack of guilt about the rich people's loss as a conscience drain; but the author deliberately omitted the morality of the rich people's indifference to the poor asking for help. If the negative effects on total conscience in the lifeboat when no rescue is attempted were counted, the final solution to the lifeboat dilemma might change. Essentially, the author's negligence of the social injustice against impoverished people and of the ethics of indifference is simply a result of his bias toward the rich countries.
To improve general population quality, the author repeatedly emphasized the necessity of reproduction control in poor nations and of increasing the proportion of the rich nations' population. This suggestion is in fact based on the assumption that people in rich nations are innately superior to their counterparts in poor countries, which is an apparent violation of the creed that everyone is born equal. In conclusion, poor people should not be made the sacrifice for population growth in the developed regions. Logical and rigorous as the essay "Lifeboat Ethics: The Case Against Helping the Poor" may appear to be, the author wrote more on behalf of the countries on board, the group to which he belonged. The author urged people to get rid of sentiment and make rational decisions, but ironically he himself deceived his own mind with prejudice and a sense of superiority.

Friday, November 15, 2019

The Impact of ICT on Manufacturing

Acknowledgements: This report has been completed with the help of my teacher, my friends, and of course ICT itself (the Internet). I researched through the Internet and found many helpful sites to complete my report. Search engines such as www.google.com helped me to find the relevant sites. I used some information from http://www.thekjs.essex.sch.uk/yates/it08_-_9.htm, and there are also some other sites from which information has been taken. I also acquired some information from PC World magazine to learn about the different software used in the world of manufacturing.

Contents page

Summary: In this report I set out the uses, advantages, and disadvantages of ICT, listed below in their respective categories. I mainly concentrated on the manufacturing sector, where ICT is used extensively. I found out the different ways in which industries work nowadays. ICT has improved communication technology and the way different companies interact with each other and with their customers. ICT has brought the world to its feet; there is hardly a place where ICT is not being used. I found a vast amount of information with the help of ICT to complete my report. I found out how companies use ICT to manufacture products in bulk with less effort and less wasted time. The accuracy of each product made is exactly as it was designed on the computer. I also discovered software that helps in the design of a product and can manufacture it automatically: CAD/CAM software. I conclude my report by saying that ICT has revolutionised the manufacturing sector.

Terms of reference: This report is for portfolio Unit 12 of the GNVQ course, following the criteria laid down by OCR. The deadline given to me for completing this report was 31/01/2003, and I have managed to stick to that deadline. The overall purpose of my report is to make sure that I have met the requirements to achieve a high grade.

Methodology: I found out about the task from my teacher and practised it on the Thomas Telford website using banking scenarios. After my practice session I decided on the topic of my report: 'Manufacturing'. I chose this section as ICT has changed the way of production and manufacturing. I started my research on 'The Impact of ICT on Manufacturing' on the Internet, in magazines and books, and also took some assistance from my parents on what they think about ICT changing the way we work and interact with the

Tuesday, November 12, 2019

Barilla Case Study

Executive Summary

Barilla is operating a very old-fashioned distribution system that needs to be changed. Implementing the new JITD program will increase efficiency across the supply chain. The system will reduce manufacturing costs, increase supply chain visibility, increase distributors' dependence on Barilla, establish better relationships with distributors, reduce inventory levels, and, most importantly, improve manufacturing planning and forecasting using objective data. JITD will see Barilla's supply chain synchronized from manufacturing to end-users. Strategically, the best decision for Barilla is to implement the JITD program. This will allow for greater capability and flexibility to respond to inputs from end-consumers. In the JITD system, each distributor would provide Barilla with data on the products it shipped to retailers in previous days, as well as current stock levels for each Barilla SKU. This data would then be used to make forecasting and replenishment decisions, resulting in a smoother-running operating system and excellent customer service.

To prove the credibility of JITD and win over apprehensive customers, top management will be involved. Within the next six months, Maggiali and the top management team will analyze daily shipment data from the distribution chain. Next, a database of historical and present distributor demand patterns will be created, and shipments will be simulated with JITD in place. This system will reduce stock-out rates and inventory levels while increasing service levels. Then experiments will be run at the Pedrignano depot and at the Milano depot. This will establish the credibility of JITD and win over distributors and retailers who are apprehensive about buying into the new system. Approximately ten top managers, from customer service managers and vice presidents to logistics, purchasing, sales and marketing, and information technology managers, will be involved in the decision making, implementation, and monitoring of the new system. This will prove the credibility of JITD and convince customers that change is inevitable and that, in this case, the benefits will be mutual.

Issues

The core decision is whether or not the Just-In-Time Distribution (JITD) model should be implemented in Barilla's operations. Barilla is suffering from escalating operational inefficiencies. The company is burdened by demand fluctuations in its manufacturing and distribution systems, and the large weekly variation in distributors' orders is increasing overhead costs. Convincing internal and external customers of the benefits of JITD is difficult: Barilla's customers are unwilling to give up the authority to place orders as they please. A lack of faith in Barilla's inventory management also made some customers reluctant to share the detailed sales data Barilla needs to improve its demand forecasting. Customers perceived the JITD move as a quest by Barilla to transfer power to itself. Internal customers are also resistant to the change, as they view the concept as infeasible and/or dangerous.

Environmental and Root Cause Analysis

There is a growing burden that demand fluctuations impose on the company's manufacturing and distribution system. Vitali had suggested for years that the company implement the innovative JITD program, which is modeled on JIT manufacturing.
Vitali proposed that rather than follow the practice of delivering products to Barilla's distributors on the basis of whatever orders the distributors placed with the company, Barilla's own logistics organization would instead specify delivery quantities that would more effectively meet end users' needs and would also more evenly distribute the workload on Barilla's manufacturing and logistics. This was heavily resisted both internally and externally. External people said that Barilla wanted power over its distributors and wanted to manage their inventory for them. On the other hand, the internal sales and marketing people thought JITD was unworkable and would reduce their workload, so they saw it as a threat and as a result put up resistance.

The variability in demand is a result of the lack of forecasting systems or sophisticated analytical tools at the distributors' end. Orders for Barilla's dry products swing from week to week, and such extreme demand swings strain Barilla's manufacturing and logistics operations. For example, the specific sequence of pasta production necessitated by the tight heat and humidity specifications in the tunnel kiln made it difficult to quickly produce a particular pasta that had sold out due to unexpectedly high demand. In addition, holding sufficient finished goods inventories to meet distributors' order requirements was extremely expensive when weekly demand fluctuated so much and was so difficult to forecast. Advertising and trade promotions also intensify the resistance to implementing JITD. Distributors have become accustomed to price discounts through volume orders, promotional activities, and transportation. Barilla's sales strategy relied on the use of trade promotions to push products into the grocery distribution network. Distributors look forward to these promotions, and salespeople within Barilla likewise look forward to giving distributors discounts in this very old-fashioned distribution system.

Alternatives and/or Options

Implementing the JITD system would prove beneficial to the company and its overall supply chain management. The benefits of JITD would be reduced manufacturing costs and inventory levels, better relationships with distributors due to increased supply chain visibility and distributors' dependence on Barilla, and overall improvement in manufacturing planning using the objective data collected. For salespeople, this would be a selling tool rather than a threat to sales. Distributors would also see an improved fill rate to retail stores, additional service from Barilla without any extra cost, and reduced inventory holding costs. The disadvantages are the lack of infrastructure to handle JITD, the risk of being unable to adjust shipments quickly to stock-outs, cost-benefit uncertainties, unconvinced distributors, and a reduction in responsibilities for sales representatives.

Recommendation

It is recommended that Barilla implement the JITD system in its supply chain. The system will provide customers with additional service at no extra cost. It will also improve Barilla's visibility with the trade and make distributors more dependent on the company. This dependence, a vendor-managed inventory (VMI) system, will improve relationships between Barilla and its distributors. More importantly, the information regarding supply at the distributors' warehouses will provide the company with objective data that allows for improvements in planning procedures and forecasting.
In addition, distributors will not only improve their fill rates to retail stores but also reduce their inventory holding costs. Sales and marketing people will come to see JITD as a selling tool rather than a threat to sales. In the long run, this improves overall operational performance.

Implementation

Maggiali needs to look at JITD not only as a logistics program but as a company-wide effort, and get top management from both sides involved in decision-making and teamwork. With top management on board, the first implementation will be done at Barilla's largest DO (organized distributor), Cortese. Within the next six months, Maggiali and the top management team will analyze daily shipment data from the distribution chain. Next, a database of historical and present distributor demand patterns will be created, and shipments will be simulated with JITD in place. This system will reduce stock-out rates and inventory levels while increasing service levels. Then experiments will be run at the Pedrignano depot and at the Milano depot. This will establish the credibility of JITD and win over distributors and retailers who are apprehensive about buying into the new system. An information system will also be implemented to communicate with all customers. SKUs will be barcoded with both Barilla's code and distributor/customer codes so that they are easily identifiable. Using this coding system, the company will be able to receive information through either code and also reduce the impact of internal product changes on DO systems. Barilla's forecasting systems will be improved so that the company can make good use of the information received.

Monitor and Control

This new venture has to be credible in order to convince customers, both internal and external, to sign on, and for any new initiative to succeed, top management has to be involved. A team of approximately ten top managers, including managing directors, marketing and sales managers, logistics managers, purchasing managers, vice presidents, and information technology managers, will monitor the implementation of the new JITD initiative. Each day, customers will send information to Barilla using EDI (electronic data interchange) systems. This information will include customer codes, the previous day's stock-outs, the previous day's sales, and advance orders for future retailer promotions. This will help Barilla improve internal operations for the company and its customers alike, now that Barilla will be responsible for determining quantities and delivery schedules. The result will be a reduction in inventory levels, distribution costs, and manufacturing costs, and improved responsiveness to distributors' demands. Overall efficiencies in the company's operations will be evident in every link of the supply chain. Monitoring and control will be an ongoing process to minimize inefficiencies in operations.
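As an illustration of the kind of daily decision the JITD program would automate, here is a minimal, hypothetical Python sketch of a replenishment calculation driven by distributor-reported sell-through and stock levels. The function name, the seven-day sales window, the target days of cover, and the numbers are all invented for illustration; the case itself does not specify this logic.

```python
from statistics import mean

def jitd_replenishment(daily_sales, stock_on_hand, target_days_cover=7):
    """Decide today's shipment quantity for one SKU.

    daily_sales: recent units sold per day, as reported by the distributor via EDI.
    stock_on_hand: the distributor's current inventory for this SKU.
    target_days_cover: how many days of demand to keep in stock
    (an invented policy parameter for this sketch).
    """
    forecast = mean(daily_sales)                  # naive forecast: average daily demand
    target_stock = forecast * target_days_cover   # desired inventory position
    shipment = max(0, round(target_stock - stock_on_hand))
    return shipment

# Hypothetical EDI feed for one SKU: last week's daily sales and current stock.
print(jitd_replenishment(daily_sales=[120, 95, 130, 110, 105, 90, 125],
                         stock_on_hand=400))     # ships 375 units to reach ~7 days of cover
```

The point of the sketch is that the shipment quantity is derived from objective sell-through data rather than from the distributor's order, which is exactly the shift in decision rights that made JITD contentious.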

Sunday, November 10, 2019

Psychology Essay

Operant conditioning is a form of learning shaped by environmental consequences: learn the skill, practice the skill, then step back and examine the results. Observational learning is also called social learning. A person's behavior is influenced by what happens to other people when they behave in certain ways. The person who is learning does so by seeing what responses are elicited by others' behaviors, and then bases their own behavior on the lessons learned by watching what happens to those people. Social learning takes place in a social context and can occur purely through observation or through direct instruction.

The different kinds of learning can be utilized in the workplace. Operant conditioning: one of my coworkers is having trouble understanding the job, so I voluntarily help them out. That increases my reputation at work, and afterward I get positive feedback from coworkers. Observational learning: at the workplace, it is forbidden to do something you have never done before, so before you start working on something new, you ask someone to show you how to do that job; by watching, you learn and become able to do it. Social learning: advertisements, TV, and the internet shape behavior because we observe them and then copy them.

How is prejudice developed and nurtured through classical and operant conditioning? Give specific examples that demonstrate each kind of learning. Prejudice is a learned, generally negative attitude directed toward specific people solely because of their membership in an identified group. Prejudice is developed and nurtured through classical and operant conditioning from three elements: affective (emotions about the group), behavioral (negative actions toward members of the group), and cognitive (stereotypical beliefs about group members). People learn prejudice the same way they learn all attitudes: through classical and operant conditioning. For example, repeated exposure to stereotypical portrayals of minorities and women on TV, in movies, and in magazines teaches children that such images are correct. Similarly, hearing parents, friends, and teachers express their prejudices also reinforces prejudice.

3.) You are scheduled to present the results of your work on creating a new software program for your company. What memory techniques will you use in order to be free of too much dependence on notes and PowerPoint slides? Be specific as to how you will relate the technique to the content of the presentation. Long-term memory: encoding through elaborative rehearsal, because the processing goes beyond the visual. The three R's: registration, retention, and retrieval.

4.) Name and describe the three qualities of emotional intelligence according to Goleman. If you were interviewing applicants for a position in your company and wanted to know whether they had emotional intelligence, how would you go about discovering that? Would you do that in an interview or by some other means? The three qualities are: possessing self-control over emotions such as anger, impulsiveness, and anxiety; the ability to understand what others feel, such as empathy; and the ability to motivate oneself. I feel you can discover a person's emotional intelligence in an interview, because a person who can manage their emotions doesn't get angry in stressful situations; they have the ability to look at a problem calmly and find a solution. I would go about discovering this by asking questions, observing how the questions are answered, and presenting different scenarios while paying attention to the responses.

Friday, November 8, 2019

What Is Statistical Significance? How Is It Calculated?

If you've ever read a wild headline like "Study Shows Chewing Rocks Prevents Cancer," you've probably wondered how that could be possible. If you look closer at this type of article, you may find that the sample size for the study was a mere handful of people. If one person in a group of five chewed rocks and didn't get cancer, does that mean chewing rocks prevented cancer? Definitely not. The study behind such a conclusion doesn't have statistical significance: though the study was performed, its conclusions don't really mean anything, because the sample size was small. So what is statistical significance, and how do you calculate it? In this article, we'll cover what it is, when it's used, and go step by step through the process of determining whether an experiment is statistically significant on your own.

What Is Statistical Significance?

As mentioned above, the fake study about chewing rocks isn't statistically significant. That means the conclusion reached in it isn't valid, because there's not enough evidence that what happened was not random chance. A statistically significant result is one where, after rigorous testing, you reach a certain degree of confidence in the results. We call that degree of confidence our confidence level, which demonstrates how sure we are that our data was not skewed by random chance. More specifically, the confidence level is the likelihood that an interval will contain values for the parameter we're testing. There are three major ways of determining statistical significance:

If you run an experiment and your p-value is less than your alpha (significance) level, your test is statistically significant.
If your confidence interval doesn't contain your null hypothesis value, your test is statistically significant.
If your p-value is less than your alpha, your confidence interval will not contain your null hypothesis value, and your test will therefore be statistically significant.

This info probably doesn't make a whole lot of sense if you're not already acquainted with the terms involved in calculating statistical significance, so let's take a look at what it means in practice. Say, for example, that we want to determine the average typing speed of 12-year-olds in America. We'll confirm our results using the second method, the confidence interval, as it's the simplest to explain quickly.

First, we'll need to set our significance level, or alpha: the threshold for how improbable our results would have to be, assuming the null hypothesis is true, before we reject that hypothesis (the null hypothesis is a statement that there is no difference between the things being tested, such as that all 12-year-old students type at the same speed). A typical alpha is 5 percent, or 0.05, which is appropriate for many situations but can be tightened for more sensitive experiments, such as building airplanes. For our experiment, 5 percent is fine. If our alpha is 5 percent, our confidence level is 95 percent; it's always the inverse of your alpha. Our confidence level expresses how sure we are that, if we were to repeat our experiment with another sample, the interval we compute would again capture the true value; it is not a statement that the entire population falls within this range.

Testing the typing speed of every 12-year-old in America is unfeasible, so we'll take a sample: 100 12-year-olds from a variety of places and backgrounds within the US. Once we average all that data, suppose we determine the average typing speed of our sample is 45 words per minute, with a standard deviation of 5 words per minute. From there, we can extrapolate that the average typing speed of 12-year-olds in America is somewhere between $45 - z(5/√100)$ and $45 + z(5/√100)$ words per minute, where $5/√100 = 0.5$ is the standard error of the mean (the sample standard deviation divided by the square root of the sample size). That's our confidence interval: a range of numbers we can be confident contains our true value, in this case the real average typing speed of 12-year-old Americans. Our z-score, $z$, is determined by our confidence level; for 95 percent confidence, $z = 1.96$. In our case, that works out to $45 ± 1.96(0.5)$, making our confidence interval 44.02 to 45.98 words per minute. A noisier sample, say with a standard deviation of 15 words per minute, would give a wider interval ($45 ± 1.96(1.5)$, or 42.06 to 47.94): we would still be 95 percent confident the true average falls inside it, but the estimate would be less precise. More importantly for our purposes, if your confidence interval doesn't include the null hypothesis value, your result is statistically significant.

One reason you might set your confidence level lower is if you are concerned about sampling errors. A sampling error, which is a common cause of skewed data, is what happens when your study is based on flawed data. For example, if you polled a group of people at McDonald's about their favorite foods, you'd probably get a good number of people saying hamburgers. If you polled the people at a vegan restaurant, you'd be unlikely to get the same results, so if your conclusion from the first study is that most people's favorite food is hamburgers, you're relying on a sampling error.

It's important to remember that statistical significance is not necessarily a guarantee that something is objectively true. Statistical significance can be strong or weak, and researchers can factor in bias or variance to figure out how valid the conclusion is. Any rigorous study will have numerous phases of testing; one person chewing rocks and not getting cancer is not a rigorous study. Essentially, statistical significance tells you that your hypothesis has a basis and is worth studying further. For example, say you have a suspicion that a quarter might be weighted unevenly. If you flip it 100 times and get 75 heads and 25 tails, that might suggest that the coin is rigged. That result, which would be extremely unlikely for a fair coin, is statistically significant. Because each individual coin flip has a 50/50 chance of being heads or tails, these results would tell you to look deeper, not that your coin is definitely rigged. The results are statistically significant in that there is a clear tendency to flip heads over tails, but that itself is not proof that the coin is flawed.
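To make the typing-speed arithmetic concrete, here is a minimal Python sketch that computes the 95 percent confidence interval. The sample statistics (mean 45 wpm, standard deviation 5 wpm, n = 100) come from the hypothetical example above; the function name is our own.

```python
import math

def confidence_interval(mean, sd, n, z=1.96):
    """Return the (low, high) confidence interval for a sample mean.

    z defaults to 1.96, the z-score for a 95% confidence level.
    """
    se = sd / math.sqrt(n)   # standard error of the mean
    margin = z * se          # margin of error
    return mean - margin, mean + margin

low, high = confidence_interval(mean=45, sd=5, n=100)
print(f"95% CI: {low:.2f} to {high:.2f} wpm")  # 95% CI: 44.02 to 45.98 wpm
```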
What Is Statistical Significance Used For?

Statistical significance is important in a variety of fields: any time you need to test whether something is effective, statistical significance plays a role. This can be very simple, like determining whether the dice produced for a tabletop role-playing game are well balanced, or it can be very complex, like determining whether a new medicine that sometimes causes an unpleasant side effect is still worth releasing. Statistical significance is also frequently used in business to determine whether one thing is more effective than another. This is called A/B testing: two variants, one A and one B, are tested to see which is more successful. In school, you're most likely to learn about statistical significance in a science or statistics context, but it can be applied in a great number of fields. Any time you need to determine whether something is demonstrably true or just up to chance, you can use statistical significance!

How to Calculate Statistical Significance

Calculating statistical significance is complex, so most people use calculators rather than try to solve the equations by hand. Z-test calculators and t-test calculators are two ways you can drastically slim down the amount of work you have to do. However, learning how to calculate statistical significance by hand is a great way to ensure you really understand how each piece works. Let's go through the process step by step!

Step 1: Set a Null Hypothesis

To set up the calculation, first designate your null hypothesis, or H0. Your null hypothesis should state that there is no difference between your data sets. For example, let's say we're testing the effectiveness of a fertilizer by taking a group of 20 plants and treating half of them with fertilizer. Our null hypothesis will be something like, "This fertilizer will have no effect on the plants' growth."

Step 2: Set an Alternative Hypothesis

Next, you need an alternative hypothesis, Ha. Your alternative hypothesis is generally the opposite of your null hypothesis, so in this case it would be something like, "This fertilizer will cause the plants treated with it to grow faster."

Step 3: Determine Your Alpha

Third, you'll want to set the significance level, also known as alpha, or α. The alpha is the probability of rejecting the null hypothesis when that hypothesis is true. In the case of our fertilizer example, the alpha is the probability of concluding that the fertilizer makes treated plants grow more when it does not actually have an effect. An alpha of 0.05, or 5 percent, is standard, but if you're running a particularly sensitive experiment, such as testing a medicine or building an airplane, 0.01 may be more appropriate. For our fertilizer experiment, a 0.05 alpha is fine. Your confidence level is $(1 - α) × 100%$, so if your alpha is 0.05, your confidence level is 95%. Again, your alpha can be changed depending on the sensitivity of the experiment, but most studies use 0.05.

Step 4: Decide on a One- or Two-Tailed Test

Fourth, you'll need to decide whether a one- or two-tailed test is more appropriate. One-tailed tests examine the relationship between two things in one direction, such as whether the fertilizer makes the plants grow. A two-tailed test measures in two directions, such as whether the fertilizer makes the plants grow or shrink. Since in our example we don't care whether the plants shrink, we'd choose a one-tailed test. But if we were testing something more complex, like whether a particular ad placement made customers more likely or less likely to click on it, a two-tailed test would be more appropriate. A two-tailed test is also appropriate if you're not sure which direction the results will go, just that you think there will be an effect. For example, if you wanted to test whether adding salt to the water while boiling pasta made a difference to the taste, but weren't sure whether the effect would be positive or negative, you'd probably want a two-tailed test.

Step 5: Determine Your Sample Size

Next, determine your sample size.
To do so, you'll conduct a power analysis, which gives you the probability of seeing your hypothesis demonstrated given a particular sample size. Statistical power is the probability that the test rejects the null hypothesis when the alternative hypothesis is true. Higher statistical power lowers the probability of getting a false negative from our experiment. In the case of our fertilizer experiment, higher statistical power means we will be less likely to conclude there is no effect from the fertilizer when there is, in fact, an effect.

A power analysis consists of four major pieces:

The effect size, which tells us the magnitude of a result within the population
The sample size, which tells us how many observations we have within the sample
The significance level, which is our alpha
The statistical power, which is the probability that we reject the null hypothesis when the alternative hypothesis is true

Many experiments are run with a typical power of 80 percent (equivalently, a type II error rate, β, of 20 percent). Because these calculations are complex, it's not recommended to try to do them by hand; instead, most people use a calculator to figure out their sample size. Conducting a power analysis lets you know how big a sample size you'll need to determine statistical significance. If you only test a handful of samples, you may end up with an inaccurate result: it may give you a false positive or a false negative. Doing an accurate power analysis helps ensure that your results are legitimate.

Step 6: Find the Standard Deviation

Sixth, you'll calculate the standard deviation, $s$ (also sometimes written $σ$). This is where the formula gets more involved, as the standard deviation tells you how spread out your data is. The formula for the standard deviation of a sample is:

$$s = \sqrt{\frac{\sum (x_i - µ)^2}{N - 1}}$$

In this equation:

$s$ is the standard deviation
$\sum$ tells you to sum over all the data you collected
$x_i$ is each individual data point
$µ$ is the mean of your data for each group
$N$ is your total sample size

So, to work this out, let's go with our preliminary fertilizer test on ten plants, which might give us data something like this:

Plant   Growth (inches)
1       2
2       1
3       4
4       5
5       3
6       1
7       5
8       4
9       4
10      4

We need to average that data, so we add it all together and divide by the total sample number:

$(2 + 1 + 4 + 5 + 3 + 1 + 5 + 4 + 4 + 4) / 10 = 3.3$

Next, we subtract the average from each sample $(x_i - µ)$, which looks like this:

Plant   Growth (inches)   $x_i - µ$
1       2                 -1.3
2       1                 -2.3
3       4                 0.7
4       5                 1.7
5       3                 -0.3
6       1                 -2.3
7       5                 1.7
8       4                 0.7
9       4                 0.7
10      4                 0.7

Now we square all of those numbers and add them together:

$(-1.3)^2 + (-2.3)^2 + 0.7^2 + 1.7^2 + (-0.3)^2 + (-2.3)^2 + 1.7^2 + 0.7^2 + 0.7^2 + 0.7^2 = 20.1$

Next, we divide that number by the total sample number, N, minus 1:

$20.1 / 9 ≈ 2.233$

And finally, to find the standard deviation, we take the square root of that number:

$√2.233 ≈ 1.494$
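As a quick sanity check on the arithmetic so far, Python's standard library reproduces this sample standard deviation; a minimal sketch, using the hypothetical plant-growth figures above:

```python
from statistics import mean, stdev

fertilized = [2, 1, 4, 5, 3, 1, 5, 4, 4, 4]  # growth in inches

print(mean(fertilized))   # 3.3
print(stdev(fertilized))  # 1.494..., the sample (N - 1) standard deviation
```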
But that's not the end: we also need the standard deviation of our second sample group, since we are comparing two groups. In our case, let's say we ran a second experiment where we didn't add fertilizer, so we could see what the growth looked like on its own, and these were our results:

Plant   Growth (inches)
1       1
2       1
3       2
4       1
5       3
6       1
7       1
8       2
9       1
10      1

So let's run through the standard deviation calculation again.

#1: Average the data.

$(1 + 1 + 2 + 1 + 3 + 1 + 1 + 2 + 1 + 1) / 10 = 1.4$

#2: Subtract the average from each sample $(x_i - µ)$, square each difference, and add them together.

$(-0.4)^2 + (-0.4)^2 + 0.6^2 + (-0.4)^2 + 1.6^2 + (-0.4)^2 + (-0.4)^2 + 0.6^2 + (-0.4)^2 + (-0.4)^2 = 4.4$

#3: Divide that by the total sample number, N, minus 1.

$4.4 / 9 ≈ 0.489$

#4: Take the square root of the result.

$√0.489 ≈ 0.699$

Step 7: Run the Standard Error Formula

Now we have our two standard deviations (one for the group with fertilizer, one for the group without). Next, we run them through the standard error formula for the difference between two means:

$$s_d = \sqrt{\frac{s_1^2}{N_1} + \frac{s_2^2}{N_2}}$$

In this equation:

$s_d$ is the standard error
$s_1$ is the standard deviation of group one
$N_1$ is the sample size of group one
$s_2$ is the standard deviation of group two
$N_2$ is the sample size of group two

So let's work through this. First, $s_1^2/N_1$: with our numbers, that becomes $2.233/10 = 0.223$. Next, $s_2^2/N_2$: with our numbers, that becomes $0.489/10 = 0.049$. We add those two numbers together:

$0.223 + 0.049 = 0.272$

And finally, we take the square root:

$√0.272 ≈ 0.522$

So our standard error, $s_d$, is about 0.522.

Step 8: Find the t-Score

Now you're probably seeing why most people use a calculator for this. Next up: the t-score. Your t-score is what allows you to compare your two groups; it tells you how large the difference between them is relative to the noise in the data. The formula for the t-score is:

$$t = \frac{µ_1 - µ_2}{s_d}$$

where:

$t$ is the t-score
$µ_1$ is the average of group one
$µ_2$ is the average of group two
$s_d$ is the standard error

For our numbers, this equation looks like:

$t = (3.3 - 1.4) / 0.522 ≈ 3.64$

Step 9: Find the Degrees of Freedom

We're almost there! Next, we find our degrees of freedom ($df$), which tell you how many values in the calculation are free to vary. To calculate this, we add the number of samples in each group and subtract two. In our case, that looks like this:

$$(10 + 10) - 2 = 18$$

Step 10: Use a t-Table to Find Statistical Significance

And now we use a t-table to figure out whether our conclusion is significant. To use the t-table, we first look down the left-hand side for our $df$, which in this case is 18. Next, we scan along that row to see where our t-score of 3.64 falls. Scanning up to the p-values at the top of the chart, we find that our one-tailed p-value is smaller than 0.001, well below our significance level.

So is our study on whether the fertilizer makes plants grow taller valid? The final stage of determining statistical significance is comparing your p-value to your alpha. In this case, our alpha is 0.05 and our p-value is well below 0.05. Since one of the methods of determining statistical significance is to demonstrate that your p-value is less than your alpha level, we've succeeded! The data suggest that our fertilizer does make plants grow, and with a p-value below 0.001 at a significance level of 0.05, the result is definitely significant. Now, if we're doing a rigorous study, we should test again on a larger scale to verify that the results can be replicated and that there weren't any other variables at work making the plants taller.
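To double-check the by-hand arithmetic, here is a minimal Python sketch of the same two-sample test using only the standard library. The critical value 1.734 (one-tailed, α = 0.05, df = 18) is taken from a standard t-table; the data are the hypothetical growth figures above.

```python
import math
from statistics import mean, stdev

fertilized = [2, 1, 4, 5, 3, 1, 5, 4, 4, 4]
control    = [1, 1, 2, 1, 3, 1, 1, 2, 1, 1]

m1, m2 = mean(fertilized), mean(control)
s1, s2 = stdev(fertilized), stdev(control)   # sample (N - 1) standard deviations
n1, n2 = len(fertilized), len(control)

# Standard error of the difference between the two means.
se = math.sqrt(s1**2 / n1 + s2**2 / n2)
t = (m1 - m2) / se
df = n1 + n2 - 2

T_CRITICAL = 1.734  # one-tailed, alpha = 0.05, df = 18 (from a t-table)

print(f"t = {t:.2f} with df = {df}")  # t = 3.64 with df = 18
print("significant" if t > T_CRITICAL else "not significant")
```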
Tools to Use for Statistical Significance

Calculators make calculating statistical significance a lot easier, and most people will do their calculations this way instead of by hand, as doing them without tools is more likely to introduce errors into an already sensitive process. To get you started, here are some calculators you can use to make your work simpler:

How to Calculate a t-Score on a TI-83
Find Sample Size and Confidence Interval
t-Test Calculator
t-Test Formula for Excel
Find a P-Value with Excel

What's Next?

Need to brush up on AP Stats? These free AP Statistics practice tests are exactly what you need! If you're struggling with statistics on the SAT Math section, check out this guide to strategies for mean, median, and mode! This formula sheet for AP Statistics covers all the formulas you'll need to know for a great score on your AP test!

Tuesday, November 5, 2019

Marshal Michel Ney - Napoleonic Wars Biography

Michel Ney - Early Life:

Born in Saarlouis, France on January 10, 1769, Michel Ney was the son of master barrel cooper Pierre Ney and his wife Margarethe. Due to Saarlouis' location in Lorraine, Ney was raised bilingual and was fluent in both French and German. Coming of age, he received his education at the Collège des Augustins and became a notary in his hometown. After a brief stint as an overseer of mines, he ended his career as a civil servant and enlisted in the Colonel-General Hussar Regiment in 1787. Proving himself a gifted soldier, Ney swiftly moved through the non-commissioned ranks.

Michel Ney - Wars of the French Revolution:

With the beginning of the French Revolution, Ney's regiment was assigned to the Army of the North. In September 1792, he was present at the French victory at Valmy and was commissioned as an officer the next month. The following year he served at the Battle of Neerwinden and was wounded at the siege of Mainz. Transferring to the Army of the Sambre-et-Meuse in June 1794, Ney saw his talents quickly recognized, and he continued to advance in rank, reaching général de brigade in August 1796. With this promotion came command of the French cavalry on the German front. In April 1797, Ney led the cavalry at the Battle of Neuwied. Charging a body of Austrian lancers that was attempting to seize French artillery, Ney's men found themselves counterattacked by enemy cavalry. In the fighting that ensued, Ney was unhorsed and taken prisoner. He remained a prisoner of war for a month until being exchanged in May. Returning to active service, Ney participated in the capture of Mannheim later that year. Two years later, in March 1799, he was promoted to général de division. Commanding the cavalry in Switzerland and along the Danube, Ney was wounded in the wrist and thigh at Winterthur. Recovering from his wounds, he joined General Jean Moreau's Army of the Rhine and took part in the victory at the Battle of Hohenlinden on December 3, 1800. In 1802, he was assigned to command French troops in Switzerland and oversaw French diplomacy in the region. On August 5 of that year, Ney returned to France to marry Aglaé Louise Auguié. The couple would remain married for the rest of Ney's life and had four sons.

Michel Ney - Napoleonic Wars:

With the rise of Napoleon, Ney's career accelerated, and he was appointed one of the first eighteen Marshals of the Empire on May 19, 1804. Assuming command of the VI Corps of La Grande Armée the following year, Ney defeated the Austrians at the Battle of Elchingen that October. Pressing into the Tyrol, he captured Innsbruck a month later. During the 1806 campaign, Ney's VI Corps took part in the Battle of Jena on October 14, and then moved to occupy Erfurt and capture Magdeburg. As winter set in, the fighting continued, and Ney played a key role in rescuing the French army at the Battle of Eylau on February 8, 1807. Pressing on, Ney participated in the Battle of Güttstadt and commanded the right wing of the army during Napoleon's decisive triumph over the Russians at Friedland on June 14. For his exemplary service, Napoleon created him Duke of Elchingen on June 6, 1808. Shortly thereafter, Ney and his corps were dispatched to Spain. After two years on the Iberian Peninsula, he was ordered to aid in the invasion of Portugal. After capturing Ciudad Rodrigo and Coa, he was defeated at the Battle of Buçaco.
Working with Marshal André Masséna, Ney and the French flanked the British position and continued their advance until they were turned back at the Lines of Torres Vedras. Unable to penetrate the allied defenses, Masséna ordered a retreat. During the withdrawal, Ney was removed from command for insubordination. Returning to France, Ney was given command of the III Corps of la Grande Armée for the 1812 invasion of Russia. In August of that year, he was wounded in the neck while leading his men at the Battle of Smolensk. As the French drove deeper into Russia, Ney commanded the central section of the French lines at the Battle of Borodino on September 7, 1812. With the collapse of the invasion later that year, Ney was assigned to command the French rearguard as Napoleon retreated back toward France. Cut off from the main body of the army, Ney's men were able to fight their way through and rejoin their comrades. For this action he was dubbed "the bravest of the brave" by Napoleon. After taking part in the Battle of Berezina, Ney helped hold the bridge at Kovno and reputedly was the last French soldier to leave Russian soil. In reward for his service in Russia, he was given the title Prince of the Moskowa on March 25, 1813.

As the War of the Sixth Coalition raged, Ney took part in the victories at Lützen and Bautzen. That fall he was present when French troops were defeated at the Battles of Dennewitz and Leipzig. With the French Empire collapsing, Ney aided in defending France through early 1814, but became the spokesman for the Marshals' revolt in April and encouraged Napoleon to abdicate. With the defeat of Napoleon and the restoration of Louis XVIII, Ney was promoted and made a peer for his role in the revolt.

Michel Ney - The Hundred Days and Death: Ney's loyalty to the new regime was quickly tested in 1815 with Napoleon's return to France from Elba. Swearing allegiance to the king, he began assembling forces to counter Napoleon and pledged to bring the former emperor back to Paris in an iron cage. Aware of Ney's plans, Napoleon sent him a letter encouraging him to rejoin his old commander. This Ney did on March 18, when he joined Napoleon at Auxerre. Three months later, Ney was made commander of the left wing of the new Army of the North. In this role, he engaged the Duke of Wellington at the Battle of Quatre Bras on June 16, 1815. Two days later, Ney played a key role at the Battle of Waterloo. His most famous order during the decisive battle was to send the French cavalry forward against the allied lines. Surging ahead, they were unable to break the squares formed by the British infantry and were forced to retreat.

Following the defeat at Waterloo, Ney was hunted down and arrested. Taken into custody on August 3, he was tried for treason that December by the Chamber of Peers. Found guilty, he was executed by firing squad near the Luxembourg Garden on December 7, 1815. At his execution, Ney refused to wear a blindfold and insisted upon giving the order to fire himself. His final words were reportedly: "Soldiers, when I give the command to fire, fire straight at my heart. Wait for the order. It will be my last to you. I protest against my condemnation. I have fought a hundred battles for France, and not one against her... Soldiers, fire!"

Sunday, November 3, 2019

The Market Failure - Term Paper Example

The Market Failure - Term Paper Example

A healthy market is one that maintains a balance between supply and demand. When an imbalance arises between supply and demand, the market may be considered to be going through a failure phase. The market is not an absolute entity; it undergoes constant change because of its association with many internal and external parameters. In other words, the market fluctuates whenever problems arise among its associated entities. The market often fails when individual interests try to dominate the general interests of the market. For example, China is accused of implementing unhealthy strategies in the market. China concentrates on mass production of goods and is able to sell those goods at cheaper prices because of that mass production. The cheaper prices attract consumers, who then purchase more and more goods of Chinese origin. Even though the profit obtained from selling a single unit may be small, China is able to compensate by selling huge volumes of goods. Moreover, the huge volume of production mobilizes China's economic resources, and unemployed youth in China may find more work because of the healthy movement of Chinese products in the world market. On the other hand, consumers who purchased cheap goods of Chinese origin may realize later that the goods were not of adequate quality. When they run into trouble with the products they purchased, they begin to look suspiciously at genuine products from other manufacturers as well. The resulting reluctance of consumers to participate actively in the market may cause problems not only for China but for other countries too. In short, the market may fail in such cases because of the inefficient production and distribution of goods by even a single entity.

Friday, November 1, 2019

Compare and contrast parliamentary and congressional democracies - Essay

Compare and contrast parliamentary and congressional democracies - Essay Example

This paper aims to answer this question, as well as to consider which of the two is best and why. To evaluate the congressional and parliamentary systems, we must first understand the basic political structure. The legislature, in modern political systems, is representative of the population (Cheibub, 2011). It is composed of members elected directly or indirectly via a popular vote, and it is empowered to make, change, or repeal the nation’s laws and to levy and regulate its taxes. Legislatures that provide for direct representation are considered more democratic, since they are less liable to domination by one faction. The executive is devoted to the administration and enforcement of the laws created by the legislature. The key variance between the two systems is the relationship between the legislature and the executive and their degree of linkage (Cheibub, 2011).

Under a parliamentary democracy, the executive is subordinate to the legislature’s majority (Cheibub, 2011). The executive must retain the support of the legislature’s majority to remain in power. This is a source of stability, since it promotes the creation and development of disciplined, cohesive parties and places a premium on compromise and cooperation. To keep the majority, the government may have to form coalitions with other parliamentary parties, building a majority based on mutual gain and compromise. The parliamentary structure also allows for easy transitions of power, since leadership is based on parties rather than individuals.

As opposed to a parliamentary structure, a presidential form of democracy separates the legislature and the executive (Cheibub, 2011). The president gains power not through a majority in the legislature but through direct election. The population in this system votes for an individual rather than a party. The winner then becomes president for a fixed term. In a majority of cases, a major political party backs the president, who gains popularity based on party stature and personal qualities (Cheibub, 2011).

The discrepancy between the two begins with the origin of the two words (Cheibub, 2011). Congress comes from a Latin word meaning "a coming together," where representatives from all over the country come together to discuss state matters. Parliament, however, is rooted in a French term meaning "to talk," since a lot of talking goes on in parliament. A congress is based on primary elections, where the population elects candidates based on their individual plans for office and their personalities, while in a parliament the delegates are chosen to run by their parties based on their willingness to adhere to party standards. In a congressional election, then, the individual means more than the party does. In a parliamentary democracy, the PM and cabinet are drawn from the country’s majority party in parliament. Therefore, if members begin to vote against the ideals of their party, the government may come apart and force new elections. Because of this, most parties restrict the freedom of their delegates to keep the PM's position safe. In congress, however, the executive branch is separated entirely from the legislature, which allows members to vote based on the wishes of their constituents and their consciences without fear of permanently harming the government. This increased power of the individual leads to