

History of malaria -- Hasanur Rahman, 06:50:58 02/09/16 Tue [1]

The history of malaria stretches from its prehistoric origin as a zoonotic disease in the primates of Africa through to the 21st century. A widespread and potentially lethal human infectious disease, at its peak malaria was endemic on every continent except Antarctica.[1] Various scientists and scientific journals, including Nature and National Geographic, have theorized that malaria may have killed around half or more of all humans who have ever lived.[2][3][4][5][6][7] Its prevention and treatment have been targets of science and medicine for hundreds of years. Since the discovery of the parasites which cause it, research attention has focused on their biology, as well as that of the mosquitoes which transmit them.

The most critical factors in the spread or eradication of disease have been human behavior (shifting population centers, changing farming methods and the like) and living standards. Precise statistics do not exist because many cases occur in rural areas where people do not have access to hospitals or other health care. As a consequence, the majority of cases are undocumented.[8] Poverty has been and remains associated with the disease.[9]

References to its unique, periodic fevers are found throughout recorded history beginning in 2700 BC in China.[10]

For thousands of years, traditional herbal remedies have been used to treat malaria.[11] The first effective treatment for malaria came from the bark of the cinchona tree, which contains quinine. After the link between mosquitoes and the parasites they transmit was identified in the early twentieth century, mosquito control measures such as the widespread use of DDT, swamp drainage, covering or oiling the surface of open water sources, indoor residual spraying and the use of insecticide-treated nets were initiated. Prophylactic quinine was prescribed in malaria-endemic areas, and new therapeutic drugs, including chloroquine and artemisinins, were used to resist the scourge.

Malaria researchers have won multiple Nobel Prizes for their achievements, although the disease continues to afflict some 200 million patients each year, killing more than 600,000.

Malaria was the most important health hazard encountered by U.S. troops in the South Pacific during World War II, where about 500,000 men were infected.[12] According to Joseph Patrick Byrne, "Sixty thousand American soldiers died of malaria during the African and South Pacific campaigns."[13]

At the close of the 20th century, malaria remained endemic in more than 100 countries throughout the tropical and subtropical zones, including large areas of Central and South America, Hispaniola (Haiti and the Dominican Republic), Africa, the Middle East, the Indian subcontinent, Southeast Asia, and Oceania. Resistance of Plasmodium to anti-malarial drugs, as well as resistance of mosquitoes to insecticides and the discovery of zoonotic species of the parasite, have complicated control measures.

The first evidence of malaria parasites was found in mosquitoes preserved in amber from the Palaeogene period that are approximately 30 million years old.[14] Human malaria likely originated in Africa and coevolved with its hosts, mosquitoes and non-human primates. Malaria protozoa are diversified into primate, rodent, bird, and reptile host lineages.[15][16] Humans may have originally caught Plasmodium falciparum from gorillas.[17] P. vivax, another malarial Plasmodium species among the six that infect humans, also likely originated in African gorillas and chimpanzees.[18] Another malarial species recently discovered to be transmissible to humans, P. knowlesi, originated in Asian macaque monkeys.[19] While P. malariae is highly host specific to humans, there is spotty evidence that low level non-symptomatic infection persists among wild chimpanzees.[20]

About 10,000 years ago, malaria started having a major impact on human survival, coinciding with the start of agriculture in the Neolithic revolution. Consequences included natural selection for sickle-cell disease, thalassaemias, glucose-6-phosphate dehydrogenase deficiency, Southeast Asian ovalocytosis, elliptocytosis and loss of the Gerbich antigen (glycophorin C) and the Duffy antigen on the erythrocytes, because such blood disorders confer a selective advantage against malaria infection (balancing selection).[21] The three major types of inherited genetic resistance (sickle-cell disease, thalassaemias, and glucose-6-phosphate dehydrogenase deficiency) were present in the Mediterranean world by the time of the Roman Empire, about 2000 years ago.[22]

Molecular methods have confirmed the high prevalence of P. falciparum malaria in ancient Egypt.[23] The Ancient Greek historian Herodotus wrote that the builders of the Egyptian pyramids (circa 2700 - 1700 BCE) were given large amounts of garlic,[24] probably to protect them against malaria. The Pharaoh Sneferu, the founder of the Fourth dynasty of Egypt, who reigned from around 2613 – 2589 BCE, used bed-nets as protection against mosquitoes. Cleopatra VII, the last Pharaoh of Ancient Egypt, similarly slept under a mosquito net.[25] The presence of malaria in Egypt from circa 800 BCE onwards has been confirmed using DNA-based methods.

Malaria became widely recognized in ancient Greece by the 4th century BCE, and is implicated in the decline of many city-state populations. The term μίασμα (Greek for miasma: "stain, pollution") was coined by Hippocrates of Kos, who used it to describe dangerous fumes from the ground that are transported by winds and can cause serious illnesses. Hippocrates (460–370 BCE), the "father of medicine", related the presence of intermittent fevers to climatic and environmental conditions and classified the fevers according to periodicity: Gk. tritaios pyretos / L. febris tertiana (fever every third day), and Gk. tetartaios pyretos / L. febris quartana (fever every fourth day).[28]

The Chinese Huangdi Neijing (The Inner Canon of the Yellow Emperor), dating from ~300 BCE – 200 CE, apparently refers to repeated paroxysmal fevers associated with enlarged spleens and a tendency to epidemic occurrence.[29] Around 168 BCE, the herbal remedy Qing-hao (青蒿) (Artemisia annua) came into use in China to treat female hemorrhoids (Wushi'er bingfang, translated as "Recipes for 52 kinds of diseases", unearthed from the Mawangdui tombs).[27] Qing-hao was first recommended for acute intermittent fever episodes by Ge Hong as an effective medication in the 4th-century Chinese manuscript Zhou hou bei ji fang, usually translated as "Emergency Prescriptions Kept in One's Sleeve".[30] His recommendation was to soak fresh plants of the artemisia herb in cold water, wring them out and ingest the expressed bitter juice in its raw state.[31][32]

'Roman fever' refers to a particularly deadly strain of malaria that affected the Roman Campagna and the city of Rome throughout various epochs in history. An epidemic of Roman fever during the fifth century AD may have contributed to the fall of the Roman empire.[33][34] The many remedies to reduce the spleen in Pedanius Dioscorides's De Materia Medica have been suggested to have been a response to chronic malaria in the Roman empire.[35]

In 835, the celebration of Hallowmas was moved from May to November at the behest of Pope Gregory IV, on the "practical grounds that Rome in summer could not accommodate the great number of pilgrims who flocked to it", and perhaps because of public health considerations regarding Roman Fever, which claimed the lives of many pilgrims during the sultry summers of the region.

European Renaissance

The name malaria derives from mal aria ('bad air' in Medieval Italian). The idea came from the Ancient Romans, who thought that the disease arose from the horrible fumes of the swamps. The word malaria has its roots in the miasma theory, as described by historian and chancellor of Florence Leonardo Bruni in his Historiarum Florentini populi libri XII, the first major example of Renaissance historical writing:[37]

Avuto i Fiorentini questo fortissimo castello e fornitolo di buone guardie, consigliavano fra loro medesimi fosse da fare. Erano alcuni a' quali pareva sommamente utile e necessario a ridurre lo esercito, e massimamente essendo affaticato per la infermità e per la mala aria e per lungo e difficile campeggiare nel tempo dell'autunno e in luoghi infermi, e vedendo ancora ch'egli era diminuito assai per la licenza conceduta a molti pel capitano di potersi partire: perocché, nel tempo che eglino erano stati lungamente a quello assedio, molti, o per disagio del campo o per paura d'infermità, avevano domandato e ottenuto licenza da lui (Acciajuoli 1476).

After the Florentines had conquered this stronghold, after putting good guardians on it they were discussing among themselves how to proceed. For some of them it appeared most useful and necessary to reduce the army, more so as it was extremely stressed by disease and bad air, and due to the long-lasting and difficult camps in unhealthy places during the autumn. They (the Florentines) further considered that the army was reduced in numbers due to the leave permits granted to many soldiers by their officers. In fact, during the siege, many soldiers had asked and obtained leave permits due to the camp hardships and fear of illness [translated from medieval Italian, Tuscan dialect].

The coastal plains of southern Italy fell from international prominence when malaria expanded in the sixteenth century. At roughly the same time, in the coastal marshes of England, mortality from "marsh fever" or "tertian ague" (ague: via French from medieval Latin acuta (febris), acute fever) was comparable to that in sub-Saharan Africa today.[38] William Shakespeare was born at the start of the especially cold period that climatologists call the "Little Ice Age", yet he was aware enough of the ravages of the disease to mention it in eight of his plays.[39]

Medical accounts and ancient autopsy reports state that tertian malarial fevers caused the death of four members of the prominent Medici family of Florence [Note 1]. These claims have been confirmed with more modern methodologies.[40]
Spread to the Americas

Malaria was not referenced in the "medical books" of the Mayans or Aztecs. European settlers and their West African slaves likely brought malaria to the Americas in the 16th century.[41][42]
Cinchona tree

Spanish missionaries found that fever was treated by Amerindians near Loxa (Peru) with powder from Peruvian bark (later established to be from any of several trees of genus Cinchona).[43] It was used by the Quechua Indians of Peru to reduce the shaking effects caused by severe chills.[44] Jesuit Brother Agostino Salumbrino (1561–1642), who lived in Lima and was an apothecary by training, observed the Quechua using the bark of the cinchona tree for that purpose. While its effect in treating malaria (and hence malaria-induced shivering) was unrelated to its effect in controlling shivering from cold, it was nevertheless effective for malaria. The use of the “fever tree” bark was introduced into European medicine by Jesuit missionaries (Jesuit's bark).[45] Jesuit Bernabé de Cobo (1582–1657), who explored Mexico and Peru, is credited with taking cinchona bark to Europe. He brought the bark from Lima to Spain, and then to Rome and other parts of Italy, in 1632. Francesco Torti wrote in 1712 that only “intermittent fever” was amenable to the fever tree bark.[46] This work finally established the specific nature of cinchona bark and brought about its general use in medicine.[47]

It would be nearly 200 years before the active principles, quinine and other alkaloids, of cinchona bark were isolated. Quinine, a toxic plant alkaloid, is, in addition to its anti-malarial properties, an effective muscle relaxant, as the modern use for nocturnal leg cramps suggests (corroborating its use for shivering by the Peruvian Indians).[48]
Clinical indications

In 1717, the dark pigmentation of a postmortem spleen and brain was described by the epidemiologist Giovanni Maria Lancisi in his malaria textbook De noxiis paludum effluviis eorumque remediis. This was one of the earliest reports of the characteristic enlargement and dark color of the spleen and brain, which are the most constant post-mortem indications of chronic malaria infection. He related the prevalence of malaria in swampy areas to the presence of flies and recommended swamp drainage to prevent it.

Antimalarial drugs

In 1820, French chemist Pierre Joseph Pelletier and French pharmacist Joseph Bienaimé Caventou separated the alkaloids cinchonine and quinine from powdered fever tree bark, allowing for the creation of standardized doses of the active ingredients.[50] Prior to 1820, the bark was simply dried, ground to a fine powder and mixed into a liquid (commonly wine) for drinking.[51]

An English trader, Charles Ledger, and his Amerindian servant spent four years collecting cinchona seeds in the Andes in Bolivia; the seeds were highly prized for their quinine, but their export was prohibited. Ledger managed to get seeds out, and in 1865 the Dutch government cultivated 20,000 trees of Cinchona ledgeriana in Java (Indonesia). By the end of the nineteenth century, the Dutch had established a world monopoly over its supply.[52]
'Warburg's Tincture'

In 1834, in British Guiana, a German physician, Carl Warburg, invented an antipyretic medicine: 'Warburg's Tincture'. This secret, proprietary remedy contained quinine and other herbs. Trials were made in Europe in the 1840s and 1850s. It was officially adopted by the Austrian Empire in 1847. It was considered by many eminent medical professionals to be a more efficacious antimalarial than quinine. It was also more economical. The British Government supplied Warburg's Tincture to troops in India and other colonies.[53]
Methylene blue

In 1876, methylene blue was synthesized by German chemist Heinrich Caro.[54] Paul Ehrlich in 1880 described the use of "neutral" dyes – mixtures of acidic and basic dyes – for the differentiation of cells in peripheral blood smears. In 1891 Ernst Malachowski[55] and Dmitri Leonidovich Romanowsky[56] independently developed techniques using a mixture of Eosin Y and modified methylene blue (methylene azure) that produced a surprising hue unattributable to either staining component: a shade of purple.[57] Malachowski used alkali-treated methylene blue solutions, while Romanowsky used methylene blue solutions that had been aged or had grown moldy. This new method differentiated blood cells and demonstrated the nuclei of malarial parasites. Malachowski's staining technique was one of the most significant technical advances in the history of malaria.[58]

In 1891, Paul Guttmann and Ehrlich noted that methylene blue had a high affinity for some tissues and that this dye had a slight antimalarial property.[59] Methylene blue and its congeners may act by preventing the biocrystallization of heme.

In 1848, German anatomist Johann Heinrich Meckel[62] recorded black-brown pigment granules in the blood and spleen of a patient who had died in a mental hospital. Meckel was thought to have been looking at malaria parasites without realizing it; he did not mention malaria in his report. He hypothesized that the pigment was melanin.[63] The causal relationship of pigment to the parasite was established in 1880, when French physician Charles Louis Alphonse Laveran, working in the military hospital of Constantine, Algeria, observed pigmented parasites inside the red blood cells of malaria sufferers. He witnessed the events of exflagellation and became convinced that the moving flagella were parasitic microorganisms. He noted that quinine removed the parasites from the blood. Laveran called this microscopic organism Oscillaria malariae and proposed that malaria was caused by this protozoan.[64] This discovery remained controversial until the development of the oil immersion lens in 1884 and of superior staining methods in 1890–1891.

In 1885, Ettore Marchiafava, Angelo Celli and Camillo Golgi studied the reproduction cycles in human blood (Golgi cycles). Golgi observed that all parasites present in the blood divided almost simultaneously at regular intervals and that division coincided with attacks of fever. In 1886 Golgi described the morphological differences that are still used to distinguish two malaria parasite species, Plasmodium vivax and Plasmodium malariae. Shortly after this, Sakharov in 1889 and Marchiafava & Celli in 1890 independently identified Plasmodium falciparum as a species distinct from P. vivax and P. malariae. In 1890, Grassi and Feletti reviewed the available information and named both P. malariae and P. vivax (although within the genus Haemamoeba).[65] By 1890, Laveran's germ was generally accepted, but most of his initial ideas had been discarded in favor of the taxonomic work and clinical pathology of the Italian school. Marchiafava and Celli called the new microorganism Plasmodium.[66] H. vivax was soon renamed Plasmodium vivax. In 1892, Marchiafava and Bignami proved that the multiple forms seen by Laveran were from a single species. This species was eventually named P. falciparum. Laveran was awarded the 1907 Nobel Prize for Physiology or Medicine "in recognition of his work on the role played by protozoa in causing diseases".[67]

Dutch physician Pieter Pel first proposed a tissue stage of the malaria parasite in 1886, presaging its discovery by over 50 years. This suggestion was reiterated in 1893, when Golgi suggested that the parasites might have an undiscovered tissue phase (this time in endothelial cells).[68] Pel in 1896 supported Golgi's latent phase theory.


History of Microsoft Word -- Hasanur Rahman, 06:47:28 02/09/16 Tue [1]

The first version of Microsoft Word was developed by Charles Simonyi and Richard Brodie, former Xerox programmers hired by Bill Gates and Paul Allen in 1981. Both programmers had worked on Xerox Bravo, the first WYSIWYG (What You See Is What You Get) word processor. The first version, Word 1.0, was released in October 1983 for Xenix and MS-DOS; it was followed by four very similar versions that were not very successful. The first Windows version was released in 1989, with a slightly improved interface. When Windows 3.0 was released in 1990, Word became a huge commercial success. Word for Windows 1.0 was followed by Word 2.0 in 1991 and Word 6.0 in 1993. The product was then renamed Word 95 and Word 97, Word 2000 and Word for Office XP (following Windows commercial names). With the release of Word 2003, the numbering was again year-based. Since then, Word 2007, Word 2010 and, most recently, Word 2013 have been released for Windows.

In 1986, an agreement between Atari and Microsoft brought Word to the Atari ST.[1] The Atari ST version was a translation of Word 1.05 for the Apple Macintosh; however, it was released under the name Microsoft Write (the name of the word processor included with Windows during the 80s and early 90s).[2][3] Unlike other versions of Word, the Atari version was a one-time release with no future updates or revisions. Microsoft Write, released for the Atari ST in 1988, was one of two major PC applications released for the platform (the other being WordPerfect).

In 2014, the source code for Word for Windows version 1.1a was made available to the Computer History Museum and the public for educational purposes.

Word 1990 to 1995

The first version of Word for Windows was released in 1990 at a price of US$498, but was not very popular as Windows users still comprised a minority of the market.[6] The next year, Windows 3.0 debuted, followed shortly afterwards by WinWord 1.1 which was updated for the new OS (WinWord 1.0 had been designed for Windows 2.x and could not operate in protected mode on 286 and up PCs). The failure of WordPerfect to produce a Windows version proved a fatal mistake. The following year, WinWord 2.0 was released which had further improvements and finally solidified Word's marketplace dominance. WinWord 3.0 came out in 1992 and was designed for the newly released Windows 3.1, also requiring a 386-based PC for the first time.[7]

The early versions of Word also included copy protection mechanisms that tried to detect debuggers, and if one was found, it produced the message "The tree of evil bears bitter fruit. Only the Shadow knows. Now trashing program disk." and performed a zero seek on the floppy disk (but did not delete its contents).[8][9][10]

After MacWrite, Word for Macintosh never had any serious rivals, although programs such as Nisus Writer provided features such as non-continuous selection, which were not added until Word 2002 in Office XP. Word 5.1 for the Macintosh, released in 1992, was a very popular word processor, owing to its elegance, relative ease of use and feature set. However, version 6.0 for the Macintosh, released in 1994, was widely derided, unlike the Windows version. It was the first version of Word based on a common code base between the Windows and Mac versions; many accused it of being slow, clumsy and memory intensive.

With the release of Word 6.0 in 1993 Microsoft again attempted to synchronize the version numbers and coordinate product naming across platforms; this time across the three versions for DOS, Macintosh, and Windows (where the previous version was Word for Windows 2.0). There may have also been thought given to matching the current version 6.0 of WordPerfect for DOS and Windows, Word's major competitor. However, this wound up being the last version of Word for DOS. In addition, subsequent versions of Word were no longer referred to by version number, and were instead named after the year of their release (e.g. Word 95 for Windows, synchronizing its name with Windows 95, and Word 98 for Macintosh), once again breaking the synchronization.

When Microsoft became aware of the Year 2000 problem, it released the entire DOS port of Microsoft Word 5.5 as a free download rather than charging for an update. As of February 2014, it was still available for download from Microsoft's web site.[11]

Word 6.0 was the second attempt to develop a common code base version of Word. The first, code-named Pyramid, had been an attempt to completely rewrite the existing product. It was abandoned when it was determined that it would take the development team too long to rewrite and then catch up with all the new capabilities that could have been added in the same time without a rewrite. Supporters of Pyramid claimed that it would have been faster, smaller, and more stable than the product eventually released for the Macintosh, which was compiled using a beta version of Visual C++ 2.0 targeting the Macintosh, so many optimizations had to be turned off (version 4.2.1 of Office was compiled using the final version) and a bundled Windows API simulation library was sometimes used.[12] Pyramid would have been truly cross-platform, with machine-independent application code and a small mediation layer between the application and the operating system.

More recent versions of Word for Macintosh are no longer ported versions of Word for Windows.

Later versions of Word have more capabilities than merely word processing. The drawing tool allows simple desktop publishing operations, such as adding graphics to documents.
Word 97

Word 97 had the same general operating performance as later versions such as Word 2000. This was the first version of Word to feature the Office Assistant, "Clippit", an animated helper used in all Office programs; it was a carry-over from the concept introduced earlier in Microsoft Bob. Word 97 introduced the macro programming language Visual Basic for Applications (VBA), which remains in use in Word 2013.
Word 98

Word 98 for the Macintosh gained many features of Word 97, and was bundled with the Macintosh Office 98 package. Document compatibility reached parity with Office 97, and Word on the Mac became a viable business alternative to its Windows counterpart. Unfortunately, Word on the Mac in this and later releases also became vulnerable to macro viruses that could compromise Word (and Excel) documents, leading to one of the few situations in which viruses could be cross-platform. A Windows version of this release was bundled only with the Japanese/Korean Microsoft Office 97 Powered By Word 98, released in the same period, and could not be purchased separately.
Word 2001/Word X

Word 2001, bundled with the Macintosh version of Office, acquired most, if not all, of the feature set of Word 2000. Released in October 2000, Word 2001 was also sold as an individual product. Its successor, Word X, released in 2001, was the first version to run natively on (and require) Mac OS X.
Word 2002/XP
See also: Microsoft Office XP

Word 2002 was bundled with Office XP and was released in 2001. It had many of the same features as Word 2000, but added a major new feature, task panes, which gave quicker access to information and controls for many features that had previously been available only in modal dialog boxes. One of the key advertising strategies for the software was the replacement of the Office Assistant with a new help system, although the Assistant was simply disabled by default.
Word 2003
See also: Microsoft Office 2003

For the 2003 version, the Office programs, including Word, were rebranded to emphasize the unity of the Office suite, so that Microsoft Word officially became Microsoft Office Word.
Word 2004

A new Macintosh version of Office was released in May 2004. Substantial cleanup of the various applications (Word, Excel, PowerPoint) and feature parity with Office 2003 (for Microsoft Windows) created a very usable release. Microsoft released patches through the years to eliminate most known macro vulnerabilities from this version. While Apple released Pages and the open source community created NeoOffice, Word remains the most widely used word processor on the Macintosh.
Word 2007
See also: Microsoft Office 2007

The release includes numerous changes, including a new XML-based file format, a redesigned interface, an integrated equation editor and bibliographic management. Additionally, an XML data store called Custom XML, accessible via the object model and file format, was introduced; it can be used in conjunction with a new feature called Content Controls to implement structured documents. The release also has contextual tabs, which expose functionality specific to the object in focus, and many other features such as Live Preview (which shows the effect of a formatting choice without making permanent changes), the Mini Toolbar, Super-tooltips, the Quick Access Toolbar, and SmartArt.

Word 2007 uses a new file format, docx. Users of Word 2000–2003 on Windows systems can install a free add-on, the Microsoft Office Compatibility Pack, to open, edit, and save the new Word 2007 files.[13] Alternatively, Word 2007 can save to the old doc format of Word 97–2003.[14][15]
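The XML basis of the docx format is easy to see in practice: a docx file is a ZIP container holding XML parts (Office Open XML). The sketch below, using only Python's standard library, builds a minimal illustrative package of that shape in memory (the part contents and the "Hello, Word" text are hypothetical examples, not a complete Word-compatible file) and then reads the body text back out of the main document part:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

W_NS = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

# Two minimal XML parts in the style of an Office Open XML package:
# a content-types manifest and the main document part.
CONTENT_TYPES = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    '<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">'
    '<Default Extension="xml" ContentType="application/xml"/>'
    '</Types>'
)
DOCUMENT = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    f'<w:document xmlns:w="{W_NS}">'
    '<w:body><w:p><w:r><w:t>Hello, Word</w:t></w:r></w:p></w:body>'
    '</w:document>'
)

# A docx file is just a ZIP archive: write the parts into an in-memory container.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("[Content_Types].xml", CONTENT_TYPES)
    z.writestr("word/document.xml", DOCUMENT)

# Reading it back: the visible body text lives in <w:t> elements.
with zipfile.ZipFile(buf) as z:
    parts = z.namelist()
    root = ET.fromstring(z.read("word/document.xml"))
text = "".join(t.text or "" for t in root.iter(f"{{{W_NS}}}t"))

print(parts)  # ['[Content_Types].xml', 'word/document.xml']
print(text)   # Hello, Word
```

This container-of-XML design is what lets tools other than Word (including the Compatibility Pack's converters and third-party libraries) read and write the format.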
Word 2008
See also: Microsoft Office 2008 for Mac

Word 2008 was released on January 15, 2008. It includes some new features from Word 2007, such as a ribbon-like feature that can be used to select page layouts and insert custom diagrams and images. Word 2008 also features native support for the new Office Open XML format, although the old doc format can be set as a default.[16]
Word 2010
See also: Microsoft Office 2010
Word 2011
See also: Microsoft Office for Mac 2011
Word 2013

The release of Word 2013 brought Word a cleaner look, and this version focuses further on cloud computing, with documents saved automatically to OneDrive (previously SkyDrive). If enabled, documents and settings roam with the user. Other notable features are a new read mode that allows horizontal scrolling of pages in columns, a bookmark that lets users return to where they left off reading, and the ability to open and edit PDF documents in Word just like Word content. The version released for the Windows 8 operating system is modified for use with a touchscreen and on tablets. It is the first version of Word that does not run on Windows XP or Windows Vista.


History of Microsoft -- Hasanur Rahman, 06:45:07 02/09/16 Tue [1]

Microsoft is a multinational computer technology corporation. The history of Microsoft began on April 4, 1975, when it was founded by Bill Gates and Paul Allen in Albuquerque.[1] Its current best-selling products are the Microsoft Windows operating system, the Microsoft Office suite of productivity software, the Xbox line of entertainment products (games, music and video), and the Bing line of search engines.

In 1980, Microsoft formed a partnership with IBM that allowed IBM to bundle Microsoft's operating system with IBM computers, paying Microsoft a royalty for every sale. In 1985, IBM requested that Microsoft write a new operating system for its computers called OS/2; Microsoft wrote the operating system, but also continued to sell its own alternative, which proved to be in direct competition with OS/2. Microsoft Windows eventually overshadowed OS/2 in terms of sales. By the time Microsoft had launched several versions of Microsoft Windows in the 1990s, it had captured over 90% market share of the world's personal computers.

As of June 30, 2014, Microsoft had a global annual revenue of US$86.83 billion and 128,076 employees worldwide.[2] It develops, manufactures, licenses, and supports a wide range of software products for computing devices.

1975–1985: The founding of Microsoft
The idea that would spawn Microsoft germinated when Paul Allen showed Bill Gates the January 1, 1975 issue of Popular Electronics that demonstrated the Altair 8800.[8] Allen and Gates saw the potential to develop an implementation of a BASIC interpreter for the system.[9] Bill Gates called the creators of the new microcomputer, Micro Instrumentation and Telemetry Systems (MITS), offering to demonstrate the implementation in order to win a contract with the company. Allen and Gates had neither an interpreter nor an Altair system, yet in the eight weeks before the demo they developed an interpreter. When Allen flew to Albuquerque, New Mexico to meet with MITS, the interpreter worked and MITS agreed to distribute Altair BASIC.[10] Allen and Gates left Boston, where Allen had worked for Honeywell and Gates had been enrolled at Harvard,[11] moved to Albuquerque (where MITS was located), and co-founded Microsoft there. Revenues of the company totaled $16,005 by the end of 1976.

Allen came up with the original name of Micro-Soft, a portmanteau of microcomputer and software.[12] Hyphenated in its early incarnations, the company was registered under that name with the Secretary of State of New Mexico on November 26, 1976. The company's first international office was founded on November 1, 1978, in Japan, named "ASCII Microsoft" (now called "Microsoft Japan"), and on November 29, 1975, the term "Microsoft" had first been used by Bill Gates.[8] On January 1, 1979, the company moved from Albuquerque to a new home in Bellevue, Washington,[8] since it was hard to recruit top programmers to Albuquerque. Shortly before the move, eleven of the then-thirteen employees posed for a staff photo.[13]

Steve Ballmer joined the company on June 11, 1980, and would later succeed Bill Gates as CEO,[8] from January 2000 until February 2014. The company restructured on June 25, 1981, to become an incorporated business in its home state of Washington (with a further change of its name to "Microsoft Corporation, Inc."). As part of the restructuring, Bill Gates became president of the company and chairman of the board, and Paul Allen became Executive Vice President.[8]

Microsoft's early products were different variants of Microsoft BASIC, the dominant programming language on late-1970s and early-1980s home computers such as the Apple II (Applesoft BASIC) and the Commodore 64 (Commodore BASIC); a version was also provided with early versions of the IBM PC as IBM Cassette BASIC.

Microsoft's first hardware product[14] was the Z-80 SoftCard, which enabled the Apple II to run the CP/M operating system, at the time an industry-standard operating system for running business software and many compilers and interpreters for high-level languages on microcomputers. The SoftCard was first demonstrated publicly at the West Coast Computer Faire in March 1980.[15][16] It was an immediate success: 5,000 cards, a large number for the microcomputer market at the time, were purchased in the first three months at $349 each, and it was Microsoft's number-one revenue source in 1980.[17]

The first operating system publicly released by the company was a variant of Unix, announced on August 25, 1980. Microsoft acquired it from AT&T through a distribution license, dubbed it Xenix, and hired the Santa Cruz Operation to port the operating system to several platforms.[18][19] This Unix variant would become home to the first version of Microsoft's word processor, Microsoft Word. Originally titled "Multi-Tool Word", Microsoft Word became notable for its use of "What You See Is What You Get" (WYSIWYG), pioneered by the Xerox Alto and the Bravo text editor in the 1970s.[20][21]

Word was first released in the spring of 1983, and free demonstration copies of the application were bundled with the November 1983 issue of PC World, making it the first program to be distributed on disk with a magazine.[22] Xenix, by contrast, was never sold to end users directly, although it was licensed to many software OEMs for resale. It grew to become the most popular version of Unix, measured by the number of machines running it[23] (Unix being a multi-user operating system that allows simultaneous access to a machine by several users). By the mid-1980s Microsoft had exited the Unix business, except for an interest in SCO.[18]

DOS (Disk Operating System) was the operating system that brought the company its real success. International Business Machines (IBM) first approached Microsoft about its upcoming IBM Personal Computer (IBM PC) in July 1980.[24] On August 12, 1981, after negotiations with Digital Research failed, IBM awarded Microsoft a contract to provide a version of the CP/M operating system for the IBM PC. For this deal, Microsoft purchased a CP/M clone called 86-DOS from Tim Paterson of Seattle Computer Products for less than US$100,000, which IBM renamed IBM PC DOS. Microsoft did not have an operating system of its own when it closed the deal with IBM, and IBM had not done its due diligence. Due to potential copyright-infringement problems with CP/M, IBM marketed both CP/M and PC DOS, at US$240 and US$40 respectively, with PC DOS eventually becoming the standard because of its lower price.[25][26] Thirty-five of the company's 100 employees worked on the IBM project for more than a year. When the IBM PC debuted, Microsoft was the only company that offered an operating system, a programming language, and application software for the new computer.[24]

InfoWorld stated in 1984 that Microsoft, with $55 million in 1983 sales,[27]

is widely recognized as the most influential company in the microcomputer-software industry. Claiming more than a million installed MS-DOS machines, founder and chairman Bill Gates has decided to certify Microsoft's jump on the rest of the industry by dominating applications, operating systems, peripherals and, most recently, book publishing. Some insiders say Microsoft is attempting to be the IBM of the software industry.

In 1983, in collaboration with numerous companies, Microsoft created a home computer system, MSX, which contained its own version of the DOS operating system, called MSX-DOS; this became relatively popular in Japan, Europe and South America.[10][28][29] Later, the market saw a flood of IBM PC clones after Columbia Data Products successfully cloned the IBM BIOS, quickly followed by Eagle Computer and Compaq.[30][31][32][33] The deal with IBM allowed Microsoft to have control of its own QDOS derivative, MS-DOS, and through aggressive marketing of the operating system to manufacturers of IBM-PC clones Microsoft rose from a small player to one of the major software vendors in the home computer industry.[34] With the release of the Microsoft Mouse on May 2, 1983, Microsoft continued to expand its product line in other markets. This expansion included Microsoft Press, a book publishing division, on July 11 the same year, which debuted with two titles: Exploring the IBM PCjr Home Computer by Peter Norton, and The Apple Macintosh Book by Cary Lu.

Ireland became home to one of Microsoft's international production facilities in 1985, and on November 20 Microsoft released its first retail version of Microsoft Windows (Windows 1.0), originally a graphical extension for its MS-DOS operating system.[8] In August 1985, Microsoft and IBM partnered in the development of a different operating system called OS/2. OS/2 was marketed in connection with a new hardware design proprietary to IBM, the PS/2.[36] On February 16, 1986, Microsoft relocated to Redmond, Washington. Around one month later, on March 13, the company went public with an IPO, raising US$61 million at US$21.00 per share. By the end of the trading day, the price had risen to US$28.00. In 1987, Microsoft eventually released its first version of OS/2 to OEMs.[37]

Meanwhile, Microsoft began introducing its most prominent office products. Microsoft Works, an integrated office program which combined features typically found in a word processor, spreadsheet, database and other office applications, saw its first release as an application for the Apple Macintosh towards the end of 1986.[10] Microsoft Works would later be sold with other Microsoft products including Microsoft Word and Microsoft Bookshelf, a reference collection introduced in 1987 that was the company's first CD-ROM product.[8][38] Later, on August 8, 1989, Microsoft would introduce its most successful office product, Microsoft Office. Unlike the model of Microsoft Works, Microsoft Office was a bundle of separate office productivity applications, such as Microsoft Word, Microsoft Excel and so forth. While Microsoft Word and Microsoft Office were mostly developed internally, Microsoft also continued its trend of rebranding products from other companies, such as Microsoft SQL Server on January 13, 1988, a relational database management system for companies that was based on technology licensed from Sybase.[8]

On May 22, 1990, Microsoft launched Windows 3.0.[10] The new version of Microsoft's operating system boasted new features such as a streamlined graphical user interface (GUI) and improved protected-mode support for the Intel 386 processor; it sold over 100,000 copies in two weeks.[10][39] Windows at the time generated more revenue for Microsoft than OS/2, and the company decided to move more resources from OS/2 to Windows.[40] In an internal memo to Microsoft employees on May 16, 1991, Bill Gates announced that the OS/2 partnership was over and that Microsoft would henceforth focus its platform efforts on Windows and the Windows NT kernel. Some people, especially developers who had ignored Windows and committed most of their resources to OS/2, were taken by surprise and accused Microsoft of deception. This changeover from OS/2 was frequently referred to in the industry as "the head-fake".[41][42] In the ensuing years, the popularity of OS/2 declined, and Windows quickly became the favored PC platform. 1991 also marked the founding of Microsoft Research, an organization within Microsoft for researching computer science subjects, and the release of Microsoft Visual Basic, a popular development product for companies and individuals.[8]

During the transition from MS-DOS to Windows, the success of Microsoft Office allowed the company to gain ground on application-software competitors such as WordPerfect and Lotus 1-2-3.[10][43] Novell, an owner of WordPerfect for a time, alleged that Microsoft used its inside knowledge of the DOS and Windows kernels and of undocumented Application Programming Interface features to make Office perform better than its competitors.[44] Eventually, Microsoft Office became the dominant business suite, with a market share far exceeding that of its competitors.[45] In March 1992, Microsoft released Windows 3.1 along with its first promotional campaign on TV; the software sold over three million copies in its first two months on the market.[8][10] In October, Windows for Workgroups 3.1 was released with integrated networking abilities such as peer-to-peer file and printer sharing.[10] In November, Microsoft released the first version of its popular database software, Microsoft Access.[10]

By 1993, Windows had become the most widely used GUI operating system in the world.[10] Fortune Magazine named Microsoft the "1993 Most Innovative Company Operating in the U.S."[46] The year also marked the end of a five-year copyright-infringement legal case brought by Apple Computer, Apple Computer, Inc. v. Microsoft Corp., in which the ruling went in Microsoft's favor; the release of Windows for Workgroups 3.11, a new version of the consumer line of Windows; and Windows NT 3.1, a server-based operating system with a user interface similar to the consumer versions of the operating system but an entirely different kernel.[10] As part of its strategy to broaden its business, Microsoft released Microsoft Encarta on March 22, 1993, the first encyclopedia designed to run on a computer.[8] Soon after, the Microsoft Home brand was introduced, encompassing Microsoft's new multimedia applications for Windows 3.x. In 1994, Microsoft changed its slogan to "Where do you want to go today?" as part of a US$100 million advertising campaign intended to appeal to nontechnical audiences.[10]

Microsoft continued to make strategic decisions directed at consumers. The company released Microsoft Bob, a graphical user interface designed for novice computer users, in March 1995. The interface was discontinued in 1996 due to poor sales; Bill Gates later attributed its failure to hardware requirements that were too high for typical computers, and Microsoft Bob is widely regarded as Microsoft's most unsuccessful product.[47][48] DreamWorks SKG and Microsoft formed a new company, DreamWorks Interactive (acquired in 2000 by Electronic Arts, which renamed it EA Los Angeles), to produce interactive and multimedia entertainment properties.[8] On August 24, 1995, Microsoft released Microsoft Windows 95, a new version of the company's flagship operating system which featured a completely new user interface, including a novel Start button; more than a million copies of Microsoft Windows 95 were sold in the first four days after its release.[10]

Windows 95 was released without a web browser, as Microsoft had not yet developed one. The success of the web caught the company by surprise, and it subsequently approached Spyglass to license its browser as Internet Explorer. Spyglass later disputed the terms of the agreement, under which Microsoft was to pay a royalty for every copy sold; Microsoft sold no copies of Internet Explorer, choosing instead to bundle it for free with the operating system.

Internet Explorer was first included in the Windows 95 Plus! Pack, released in August 1995.[49] In September, the Chinese government chose Windows to be the operating system of choice in that country and entered into an agreement with the company to standardize a Chinese version of the operating system.[10] Microsoft also released the Microsoft SideWinder 3D Pro joystick in an attempt to further expand its profile in the computer hardware market.[10]
1995–1999: Foray into the Web and other ventures

On May 26, 1995, Bill Gates sent the "Internet Tidal Wave" memorandum to Microsoft executives. The memo described Netscape and its Netscape Navigator as a "new competitor 'born' on the Internet," outlined Microsoft's failure up to that point to grasp the Internet's importance, and assigned "the Internet the highest level of importance" from then on.[50] Microsoft began to expand its product line into computer networking and the World Wide Web. On August 24, 1995, it launched a major online service, MSN (Microsoft Network), as a direct competitor to AOL. MSN became an umbrella service for Microsoft's online services, using Microsoft Passport (now called a Microsoft account) as a universal login system for all of its web sites.[8][10][51] The company continued to branch out into new markets in 1996, starting with a joint venture with NBC to create a new 24-hour cable news television station, MSNBC. The station was launched on July 15, 1996, to compete with similar news outlets such as CNN.[10][52] Microsoft also launched Slate, an online magazine edited by Michael Kinsley, which offered political and social commentary along with the cartoon Doonesbury.[8] In an attempt to extend its reach in the consumer market, the company acquired WebTV, which enabled consumers to access the Web from their televisions.[8] Microsoft entered the personal digital assistant (PDA) market in November with Windows CE 1.0, a new built-from-scratch version of its flagship operating system designed to run on low-memory, low-performance machines such as handhelds and other small computers.[53] 1996 also saw the release of Windows NT 4.0, which brought the Windows 95 GUI and the Windows NT kernel together.[54]

While Microsoft largely failed to participate in the rise of the Internet in the early 1990s, some of the key technologies in which the company had invested to enter the Internet market started to pay off by the mid-1990s. One of the most prominent of these was ActiveX, an application programming interface built on the Microsoft Component Object Model (COM); this enabled Microsoft and others to embed controls in many programming languages, including the company's own scripting languages, such as JScript and VBScript. ActiveX included frameworks for documents and server solutions.[10] The company also released Microsoft SQL Server 6.5, which had built-in support for Internet applications.[10] In 1997, Microsoft Office 97 and Internet Explorer 4.0 were released, marking the beginning of the takeover of the browser market from rival Netscape; by agreement with Apple Computer, Internet Explorer was bundled with the Apple Macintosh operating system as well as with Windows.[10] Windows CE 2.0, the handheld version of Windows, was released the same year, including a host of bug fixes and new features designed to make it more appealing to corporate customers.[53] In October, the Justice Department filed a motion in federal district court stating that Microsoft had violated an agreement signed in 1994, and asked the court to stop the bundling of Internet Explorer with Windows.

The year 1998 was significant in Microsoft's history: Bill Gates appointed Steve Ballmer as president of Microsoft while remaining chairman and CEO himself.[8] The company released an update to the consumer version of Windows, Windows 98.[8] Windows 98 came with Internet Explorer 4.0 SP1 (which had the Windows Desktop Update bundled) and included new features from Windows 95 OSR 2.x, including the FAT32 file system, as well as features new to Windows 98, such as support for multiple displays.[55] Microsoft launched its Indian headquarters as well, which would eventually become the company's second largest after its U.S. headquarters.[10] Finally, a great deal of controversy took place when a set of internal memos from the company was leaked on the Internet. These documents, colloquially referred to as "the Halloween documents", were widely reported by the media and detailed the threats that free and open-source software posed to Microsoft's own software, threats previously voiced mainly by analysts and advocates of open-source software. The documents also alluded to legal and other actions against Linux and other open-source software.[56][57] While Microsoft acknowledged the documents, it claimed they were merely engineering studies. Despite this, some believe that these studies were used in the real strategies of the company.

Microsoft, in 2000, released new products for all three lines of the company's flagship operating system, and saw the beginning of the end of one of its most prominent legal cases. On February 17, 2000, Microsoft released an update to its business line of software in Windows 2000. It provided a high level of stability similar to that of its Unix counterparts due to its usage of the Windows NT kernel, and matching features found in the consumer line of the Windows operating system including a DOS emulator that could run many legacy DOS applications.[10]

On April 3, 2000, a judgment was handed down in the case of United States v. Microsoft,[59] calling the company an "abusive monopoly"[60] and ordering it to split into two separate units. Part of this ruling was later overturned by a federal appeals court, and the case was eventually settled with the U.S. Department of Justice in 2001. On June 15, 2000, the company released a new version of its hand-held operating system, Windows CE 3.0.[53] The main change was a new set of programming APIs. Previous versions of Windows CE supported only a small subset of the WinAPI, the main development library for Windows; with version 3 of Windows CE, the operating system supported nearly all of the core functionality of the WinAPI. The next update to the consumer line, Windows Me (Windows Millennium Edition), was released on September 14, 2000.[8] It sported several new features, such as enhanced multimedia abilities and consumer-oriented PC maintenance options, but is often regarded as one of the worst versions of Windows due to installation problems and other issues.

Microsoft released Windows XP and Office XP in 2001. Windows XP aimed to encompass the features of both the business and home product lines: it included an updated version of the Windows 2000 kernel, enhanced DOS emulation abilities, and many of the home-user features found in previous consumer versions. XP introduced a new graphical user interface, the first such change since Windows 95.[8][62] The operating system was the first to require Microsoft Product Activation, an anti-piracy mechanism that requires users to activate the software with Microsoft within 30 days. Later, Microsoft entered the multibillion-dollar game console market dominated by Sony and Nintendo with the release of the Xbox.[8] The Xbox finished behind the dominant PlayStation 2, selling 24 million units compared to the PlayStation 2's 136 million; however, it outsold the Nintendo GameCube, which sold 21 million units. Microsoft launched its second console, the Xbox 360, in 2005, and it proved far more successful than the first: it had sold 40 million units as of 2010, outselling Sony's PlayStation 3, which had sold 35 million units by then. Despite beating Sony, however, Microsoft was outsold by the Nintendo Wii, which introduced gesture control and opened up a new market for video games. Microsoft later used its popular controller-free Kinect peripheral to increase the popularity of the Xbox, with great success: as of 2011, Kinect was the fastest-selling consumer electronics product in history.[63] It sold 8 million units from November 4, 2010 to January 3, 2011 (its first 60 days), averaging 133,333 units per day and outselling the iPhone and iPad over equivalent post-launch periods.[63]
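The per-day average cited above follows directly from the launch-window numbers; a throwaway arithmetic check (the figures are the ones quoted in the text, not independent data):

```python
# Sanity check on the Kinect launch figure cited above: 8 million units
# over the first 60 days on sale (November 4, 2010 – January 3, 2011).
units_sold = 8_000_000
days = 60

per_day = units_sold / days
print(round(per_day))  # 133333, matching the cited 133,333-units-per-day average
```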

In 2002, Microsoft launched the .NET initiative, along with new versions of some of its development products, such as Microsoft Visual Studio.[8] The initiative introduced an entirely new development API for Windows programming and included a new programming language, C#. Windows Server 2003 was launched, featuring enhanced administration abilities, such as new user interfaces to server tools.[10] In 2004, the company released Windows XP Media Center Edition 2005, a version of Windows XP designed for multimedia abilities, and Windows XP Starter Edition, a version of Windows XP with a smaller feature set designed for entry-level consumers.[8] However, Microsoft encountered more turmoil in March 2004, when the European Union brought antitrust legal action against it for allegedly abusing its market dominance (see European Union Microsoft antitrust case). Microsoft was eventually fined €497 million (US$613 million), ordered to divulge certain protocols to competitors, and required to produce a new version of its Windows XP platform, Windows XP Home Edition N, that did not include its Windows Media Player.[64][65] Microsoft was also ordered to produce separate packages of Windows after South Korea reached a settlement against the company in 2005. It had to pay out US$32 million and, in the same vein as the European Union ruling, produce more than one version of Windows for the country: one with Windows Media Player and Windows Messenger and one without the two programs.[66]
2005–present: Vista, Windows 7, Windows 8 and Windows 10

To compete with other Internet companies, such as the search service Google, Microsoft announced a new version of its MSN search service in 2005.[67] In 2006, the company launched Microsoft adCenter, a service offering pay-per-click advertisements, in an effort to further develop its search marketing revenue.[68] Soon afterward, Microsoft created CodePlex, a collaborative development site for hosting open-source projects. Activity grew quickly as developers from around the world began to participate, and by early 2007 commercial open-source companies, such as Aras Corp.,[69] began to offer enterprise open-source software exclusively on the Microsoft platform.

On June 15, 2006, Bill Gates announced plans for a two-year transition out of his day-to-day role with Microsoft, to conclude on July 31, 2008. After that date, Gates would continue as the company's chairman and head of the board of directors, and act as an adviser on key projects. His role as Chief Software Architect was filled immediately by Ray Ozzie, the company's Chief Technical Officer as of June 15, 2006.[70] Bill Gates stated, "My announcement is not a retirement – it's a reordering of my priorities."[71]

Formerly codenamed "Longhorn" in its early development stages, Windows Vista was released to consumers on January 30, 2007.[72][73] Microsoft also released a new version of its Office suite, Microsoft Office 2007, alongside Windows Vista. Windows Server 2008 and Visual Studio 2008, the next versions of the company's server operating system and development suite, respectively, were released on February 27, 2008.[74] Windows Vista was criticized for being resource-heavy, requiring large amounts of processing power to run its desktop widgets and the Aero theme. Many people continued to use Windows XP for years afterward, due to its stability and low processing requirements.

On December 19, 2007, Microsoft signed a five-year, $500 million contract with Viacom that included content sharing and advertisements. The deal allowed Microsoft to license many shows from Viacom owned cable television and film studios for use on Xbox Live and MSN. The deal also made Viacom a preferred publisher partner for casual game development and distribution through MSN and Windows. On the advertisement side of the deal, Microsoft's Atlas ad-serving division became the exclusive provider of previously unsold advertising inventory on Viacom owned web sites. Microsoft also purchased a large amount of advertising on Viacom owned broadcasts and online networks, and collaborated on promotions and sponsorships for MTV and BET award shows, two Viacom owned cable networks.[75]

In 2008, Microsoft sought to purchase Yahoo (first completely, later partially) in order to strengthen its position in the search engine market vis-à-vis Google.[76][77] Yahoo rejected the offer, saying that it undervalued the company, and in response Microsoft withdrew its offer.[78]

In 2009, the opening show of the Consumer Electronics Show (CES) was hosted by Steve Ballmer for the first time; in past years it had been hosted by Bill Gates. At the show, Ballmer announced the first public beta test of Windows 7, available to partners and developers on January 8 and to the general public on January 10.

On June 26, 2009, Microsoft began taking pre-orders at a discounted price for Windows 7, which launched on October 22, 2009. Windows 7 came in several editions, which acknowledged the rise of netbook computers with reduced processing power.

On May 10, 2011, Microsoft Corp. acquired Skype Communications S.à r.l. for US$8.5 billion.[79]

On June 18, 2012, CEO Steve Ballmer announced that they would be releasing a range of Surface tablets running both Windows RT and Windows 8 Pro.

On August 23, 2012, Microsoft unveiled a new corporate logo at the opening of its 23rd Microsoft store in Boston indicating the company's shift of focus from the classic style to the tile-centric Metro interface which is used on the Windows Phone, Xbox 360 and Windows 8 platforms.

The new logo also includes four squares with the colors (red, green, blue, and yellow) of the then-current Windows logo, and it incorporates the company's Segoe font.

On October 26, 2012, Microsoft released Windows 8 to the general public. Nearly a year later, Microsoft released the first major revision to Windows 8, Windows 8.1, on October 17, 2013.

On January 31, 2013, I/P Engine filed a lawsuit against Microsoft over search-related patents.[80]

On April 8, 2013, Microsoft sold its IPTV business, Mediaroom, to Ericsson.[81]

On July 12, 2013, Microsoft sued the U.S. Customs and Border Protection agency to attempt to force a ban on imports of Motorola Mobility phones. Homeland Security Secretary Janet Napolitano is also named in the lawsuit.[82]

On September 2, 2013, Microsoft acquired Nokia's smartphone and cellular business for $7.2 billion. Microsoft paid $5 billion for Nokia's Devices & Services and $2.2 billion to license Nokia's patents.[83]

On October 22, 2013, the Surface 2 and Surface Pro 2 were released, featuring a number of improvements, including longer battery life and a redesigned kickstand.

In late November 2013, Microsoft launched the Xbox One and the Kinect 2 sensor to succeed the Xbox 360.

On February 4, 2014, Satya Nadella succeeded Steve Ballmer as CEO of Microsoft.[84]

On May 20, 2014, Microsoft announced the Surface Pro 3, with it being released the following month.[85]

On September 30, 2014, Microsoft announced Windows 10, the replacement for the Windows 8/8.1 operating system, with the first public technical preview build released on October 1, 2014.


B movie -- Hasanur Rahman, 06:40:58 02/09/16 Tue [1]

A B movie is a low-budget commercial motion picture that is not an arthouse film. In its original usage, during the Golden Age of Hollywood, the term more precisely identified a film intended for distribution as the less-publicized, bottom half of a double feature. Although the U.S. production of movies intended as second features largely ceased by the end of the 1950s, the term B movie continued to be used in the broader sense it maintains today. In its post–Golden Age usage, there is ambiguity on both sides of the definition: on the one hand, the primary interest of many inexpensive exploitation films is prurient; on the other, many B movies display a high degree of craft and aesthetic ingenuity.

In either usage, most B movies represent a particular genre—the Western was a Golden Age B movie staple, while low-budget science-fiction and horror films became more popular in the 1950s. Early B movies were often part of series in which the star repeatedly played the same character. Almost always shorter than the top-billed films they were paired with, many had running times of 70 minutes or less. The term connoted a general perception that B movies were inferior to the more handsomely budgeted headliners; individual B films were often ignored by critics.

Latter-day B movies still sometimes inspire multiple sequels, but series are less common. As the average running time of top-of-the-line films increased, so did that of B pictures. In its current usage, the term has somewhat contradictory connotations: it may signal an opinion that a certain movie is (a) a genre film with minimal artistic ambitions or (b) a lively, energetic film uninhibited by the constraints imposed on more expensive projects and unburdened by the conventions of putatively "serious" independent film. The term is also now used loosely to refer to some higher-budgeted, mainstream films with exploitation-style content, usually in genres traditionally associated with the B movie.

From their beginnings to the present day, B movies have provided opportunities both for those coming up in the profession and others whose careers are waning. Celebrated filmmakers such as Anthony Mann and Jonathan Demme learned their craft in B movies. They are where actors such as John Wayne and Jack Nicholson first became established, and they have provided work for former A movie actors, such as Vincent Price and Karen Black. Some actors, such as Béla Lugosi, Eddie Constantine and Pam Grier, worked in B movies for most of their careers. The term B actor is sometimes used to refer to a performer who finds work primarily or exclusively in B pictures.

In 1927–28, at the end of the silent era, the production cost of an average feature from a major Hollywood studio ranged from $190,000 at Fox to $275,000 at MGM. That average reflected both "specials" that might cost as much as $1 million and films made quickly for around $50,000. These cheaper films (not yet called B movies) allowed the studios to derive maximum value from facilities and contracted staff in between a studio's more important productions, while also breaking in new personnel.[2] Studios in the minor leagues of the industry, such as Columbia Pictures and Film Booking Offices of America (FBO), focused on exactly that sort of cheap production. Their movies, with relatively short running times, targeted theaters that had to economize on rental and operating costs, particularly small-town and urban neighborhood venues, or "nabes". Even smaller production houses, known as Poverty Row studios, made films whose costs might run as low as $3,000, seeking a profit through whatever bookings they could pick up in the gaps left by the larger concerns.[3]

With the widespread arrival of sound film in American theaters in 1929, many independent exhibitors began dropping the then-dominant presentation model, which involved live acts and a broad variety of shorts before a single featured film. A new programming scheme developed that would soon become standard practice: a newsreel, a short and/or serial, and a cartoon, followed by a double feature. The second feature, which actually screened before the main event, cost the exhibitor less per minute than the equivalent running time in shorts. The majors' "clearance" rules favoring their affiliated theaters prevented the independents' timely access to top-quality films; the second feature allowed them to promote quantity instead.[4] The additional movie also gave the program "balance"—the practice of pairing different sorts of features suggested to potential customers that they could count on something of interest no matter what specifically was on the bill. The low-budget picture of the 1920s thus evolved into the second feature, the B movie, of Hollywood's Golden Age.[5]
Golden Age of Hollywood
Main article: B movies (Hollywood Golden Age)

The major studios, at first resistant to the double feature, soon adapted. All established B units to provide films for the expanding second-feature market. Block booking became standard practice: to get access to a studio's attractive A pictures, many theaters were obliged to rent the company's entire output for a season. With the B films rented at a flat fee (rather than the box office percentage basis of A films), rates could be set virtually guaranteeing the profitability of every B movie. The parallel practice of blind bidding largely freed the majors from worrying about their Bs' quality—even when booking in less than seasonal blocks, exhibitors had to buy most pictures sight unseen. The five largest studios—Metro-Goldwyn-Mayer, Paramount Pictures, Fox Film Corporation (20th Century Fox as of 1935), Warner Bros., and RKO Radio Pictures (descendant of FBO)—also belonged to companies with sizable theater chains, further securing the bottom line.[6]

Poverty Row studios, from modest outfits like Mascot Pictures, Tiffany Pictures, and Sono Art-World Wide Pictures down to shoestring operations, made exclusively B movies, serials, and other shorts, and also distributed totally independent productions and imported films. In no position to directly block book, they mostly sold regional distribution exclusivity to "states rights" firms, which in turn peddled blocks of movies to exhibitors, typically six or more pictures featuring the same star (a relative status on Poverty Row).[7] Two "major-minors"—Universal Studios and rising Columbia Pictures—had production lines roughly similar to, though somewhat better endowed than, the top Poverty Row studios. In contrast to the Big Five majors, Universal and Columbia had few or no theaters, though they did have top-rank film distribution exchanges.[8]

In the standard Golden Age model, the industry's top product, the A films, premiered at a small number of select first-run houses in major cities. Double features were not the rule at these prestigious venues. As described by historian Edward Jay Epstein, "During these first runs, films got their reviews, garnered publicity, and generated the word of mouth that served as the principal form of advertising."[9] Then it was off to the subsequent-run market where the double feature prevailed. At the larger local venues controlled by the majors, movies might turn over on a weekly basis. At the thousands of smaller, independent theaters, programs often changed two or three times a week. To meet the constant demand for new B product, the low end of Poverty Row turned out a stream of micro-budget movies rarely much more than sixty minutes long; these were known as "quickies" for their tight production schedules—as short as four days.[10] As Brian Taves describes, "Many of the poorest theaters, such as the 'grind houses' in the larger cities, screened a continuous program emphasizing action with no specific schedule, sometimes offering six quickies for a nickel in an all-night show that changed daily."[11] Many small theaters never saw a big-studio A film, getting their movies from the states rights concerns that handled almost exclusively Poverty Row product. Millions of Americans went to their local theaters as a matter of course: for an A picture, along with the trailers, or screen previews, that presaged its arrival, "[t]he new film's title on the marquee and the listings for it in the local newspaper constituted all the advertising most movies got", writes Epstein.[12] Aside from at the theater itself, B films might not be advertised at all.

The introduction of sound had driven costs higher: by 1930, the average U.S. feature film cost $375,000 to produce.[13] A broad range of motion pictures occupied the B category. The leading studios made not only clear-cut A and B films, but also movies classifiable as "programmers" (also known as "in-betweeners" or "intermediates"). As Taves describes, "Depending on the prestige of the theater and the other material on the double bill, a programmer could show up at the top or bottom of the marquee."[14] On Poverty Row, many Bs were made on budgets that would have barely covered petty cash on a major's A film, with costs at the bottom of the industry running as low as $5,000.[10] By the mid-1930s, the double feature was the dominant U.S. exhibition model, and the majors responded. In 1935, B movie production at Warner Bros. was raised from 12 to 50 percent of studio output. The unit was headed by Bryan Foy, known as the "Keeper of the Bs."[15] At Fox, which also shifted half of its production line into B territory, Sol M. Wurtzel was similarly in charge of more than twenty movies a year during the late 1930s.[16]


Ancient Chinese coinage -- Hasanur Rahman, 06:37:43 02/09/16 Tue [1]

Ancient Chinese coinage includes some of the earliest known coins. These coins, used as early as the Spring and Autumn period (770-476 BC), took the form of imitations of the cowrie shells that were used in ceremonial exchanges. The Spring and Autumn period also saw the introduction of the first metal coins; however, they were not initially round, instead being either knife shaped or spade shaped. Round metal coins with a round, and later square, hole in the center were first introduced around 350 BC. The beginning of the Qin Dynasty (221-206 BC), the first dynasty to unify China, saw the introduction of a standardised coinage for the whole Empire. Subsequent dynasties produced variations on these round coins throughout the imperial period. At first, distribution of the coinage was limited to use around the capital city district, but by the beginning of the Han Dynasty, coins were widely used for purposes such as paying taxes, salaries, and fines.

Ancient Chinese coins are markedly different from coins produced in the west. Chinese coins were manufactured by being cast in molds, whereas western coins were typically cut and hammered or, in later times, milled. Chinese coins were usually made from mixtures of metals such as copper, tin and lead, in the form of bronze, brass or iron; precious metals like gold and silver were uncommonly used. The ratios and purity of the coin metals varied considerably. Most Chinese coins were produced with a square hole in the middle. This allowed collections of coins to be threaded on a square rod so that the rough edges could be filed smooth, and then threaded on strings for ease of handling.

Official coin production was not always centralised, but could be spread over many mint locations throughout the country. Aside from officially produced coins, private coining was common during many stages of history. Various steps were taken over time to combat private coining and limit its effects, including making it illegal; at other times private coining was tolerated. The coins varied in value throughout history.

Some coins were produced in very large numbers—during the Western Han an average of 220 million coins a year were produced. Other coins were of limited circulation and are today extremely rare—only six examples of Da Quan Wu Qian from the Eastern Wu Dynasty (222–280) are known to exist. Occasionally, large hoards of coins have been uncovered. For example, a hoard discovered in Jiangsu contained 4,000 Tai Qing Feng Le coins, and at Zhangpu in Shaanxi a sealed jar containing 1,000 Ban Liang coins of various weights and sizes was discovered.

Pre-Imperial (770-220 BC)
Main article: Chinese coinage during the Spring and Autumn and Warring States periods

The earliest coinage of China was described by Sima Qian, the great historian of c. 100 BC:

"With the opening of exchange between farmers, artisans, and merchants, there came into use money of tortoise shells, cowrie shells, gold, coins (Chinese: 錢; pinyin: qián), knives (Chinese: 刀; pinyin: dāo), spades (Chinese: 布; pinyin: bù). This has been so from remote antiquity."

While nothing is known about the use of tortoise shells as money, gold and cowries (either real shells or replicas) were used to the south of the Yellow River. Although there is no doubt that the well-known spade and knife money were used as coins, it has not been demonstrated that other items often offered by dealers as coins, such as fish, halberds, and metal chimes, were actually used as money. They are not found in coin hoards, and the probability is that all of these are in fact funerary items. Archaeological evidence shows that the earliest use of the spade and knife money was in the Spring and Autumn period (770-476 BC). As in Ancient Greece, socio-economic conditions at the time were favourable to the adoption of coinage.[1]

Inscriptions and archaeological evidence show that cowrie shells were regarded as important objects of value in the Shang Dynasty (c. 1766-1154 BC). In the Zhou period, they are frequently referred to as gifts or rewards from kings and nobles to their subjects. Later imitations in bone, stone or bronze were probably used as money in some instances. Some think the first Chinese metallic coins were bronze imitations of cowrie shells[2][3] found in a tomb near Anyang dating from around 900 BC, but these items lack inscriptions.[4][5]

Similar bronze pieces with inscriptions, known as Ant Nose Money (Chinese: 蟻鼻錢; pinyin: yǐ bí qián) or Ghost Face Money (Chinese: 鬼臉錢; pinyin: guǐ liǎn qián) were definitely used as money. They have been found in areas to the south of the Yellow River corresponding to the State of Chu in the Warring States period. One hoard was of some 16,000 pieces. Their weight is very variable, and their alloy often contains a high proportion of lead. The name Ant [and] Nose refers to the appearance of the inscriptions, and has nothing to do with keeping ants out of the noses of corpses.[6]

The only minted gold of this period known is Chu Gold Block Money (Chinese: 郢爰; pinyin: yǐng yuán), which consists of sheets of gold 3-5mm thick, of various sizes, with inscriptions consisting of square or round stamps in which there are one or two characters. They have been unearthed in various locations south of the Yellow River indicating that they were products of the State of Chu. One of the characters in their inscription is often a monetary unit or weight which is normally read as yuan (Chinese: 爰; pinyin: yuán). Pieces are of a very variable size and thickness, and the stamps appear to be a device to validate the whole block, rather than a guide to enable it to be broken up into unit pieces. Some specimens have been reported in copper, lead, or clay. It is probable that these were funeral money, not circulating coinage, as they are found in tombs, but the gold coins are not. [7]
Jade pieces

It has been suggested that pieces of jade were a form of money in the Shang Dynasty.[8]
Money brand

Metal money brands (Chinese: 錢牌; pinyin: qián pái) were rarely used in the state of Chu.[9] They were used again in the Song dynasty.[10][11]

Hollow handled spades (Chinese: 布幣; pinyin: bùbì) are a link between weeding tools used for barter and stylised objects used as money. They are clearly too flimsy for use, but retain the hollow socket by which a genuine tool could be attached to a handle. This socket is rectangular in cross-section, and still retains the clay from the casting process. In the socket, the hole by which the tool was fixed to its handle is also reproduced.

Prototype spade money: This type of Spade money is similar in shape and size to the original agricultural implements. While some are perhaps robust enough to be used in the fields, others are much lighter and bear an inscription, probably the name of the city which issued it. Some of these objects have been found in Shang and Western Zhou tombs, so they date from c. 1200-800 BC. Inscribed specimens appear to date from c. 700 BC.[12]
Square shoulder spades: Square shoulder spade coins have square shoulders, a straight or slightly curving foot, and three parallel lines on the obverse and reverse. They are found in quantities of up to several hundreds in the area corresponding to the Royal Domain of Zhou (south Hebei and north Henan). Archaeological evidence dates them to the early Spring and Autumn period, around 650 BC onwards. The inscriptions on these coins usually consist of one character, which can be a number, a cyclical character, a place name, or the name of a clan. The possibility that some inscriptions are the names of merchants has not been entertained. The crude writing is that of the artisans who made the coins, not the more careful script of the scholars who wrote the votive inscriptions on bronzes. The style of writing is consistent with that of the middle Zhou period. Over 200 inscriptions are known; many have not been fully deciphered. The characters can be found on the left or the right of the central line and are sometimes inverted or retrograde. The alloy of these coins is typically 80% copper, 15% lead, and 5% tin. They are found in hoards of hundreds, rather than thousands, sometimes tied together in bundles. Although there is no mention in the literature of their purchasing power, it is clear that they were not small change.[13]
Sloping shoulder spades: Sloping shoulder spades usually have a sloping shoulder, with the two outside lines on the obverse and reverse at an angle. The central line is often missing. This type is generally smaller than the prototype or square shoulder spades. Their inscriptions are clearer, and usually consist of two characters. They are associated with the Kingdom of Zhou and the Henan area. Their smaller size indicates that they are later in date than the square shoulder spades.[14]

Flat handled spade money

These have lost the hollow handle of the early spades. They nearly all have distinct legs, suggesting that their pattern was influenced by the pointed shoulder Hollow Handled Spades, but had been further stylized for easy handling. They are generally smaller, and sometimes have denominations specified in their inscriptions as well as place names. This, together with such little evidence as can be gleaned from the dates of the establishment of some of the mint towns, shows that they were a later development. Archaeological evidence dates them to the Warring States period (475-221 BC). Arched Foot spades have an alloy consisting of about 80% copper; for other types the copper content varies between 40% and 70%.[16]

Arched foot spades: This type has an arched crutch, often like an inverted U. The shoulders can be rounded or angular. Denominations of half, one, or two jin are normally specified. They are associated with the State of Liang (also known as Wei) which flourished between 425 and 344 BC, and the State of Han (403-230 BC).[17]
Special spades of Liang: Similar in shape to the Arched Foot spades. Their inscriptions have been the subject of much debate. It is now generally agreed that these coins were issued by the State of Liang, and that the inscriptions indicate a relationship between the jin weight of the coins and the lie, another unit of weight or money.[18]
Pointed foot spades: This type has pointed feet, and a square crutch; the shoulders can be pointing upwards or straight. They are a clear descendant of the pointed shoulder Hollow Handled Spade. The weight and size of the larger specimens are compatible with the one jin unit of the Arched Foot Flat Handled Spades; smaller specimens sometimes specify the unit as one jin or, more often, as a half jin, but frequently do not specify a unit. This seems to imply that the half jin unit became the norm. They are associated with the State of Zhao, and their find spots are usually in Shanxi or Hebei provinces. They frequently have numerals on their reverses. The two character mint names mean that the cities that cast these coins can be identified with more certainty than those of earlier series.


Money creation -- Hasanur Rahman, 06:34:30 02/09/16 Tue [1]

Money creation is the process by which the money supply of a country or a monetary region (such as the Eurozone) is increased. A central bank may introduce new money into the economy (termed "expansionary monetary policy", or "money printing" by detractors) by purchasing financial assets or lending money to financial institutions. Commercial bank lending also creates money in the form of demand deposits. Through fractional reserve banking, bank lending multiplies the amount of broad money beyond the amount of base money originally created by the central bank. Reserve requirements and other capital adequacy ratios imposed by the central bank can limit this process.

Central banks monitor the amount of money in the economy by measuring monetary aggregates such as M2. The effect of monetary policy on the money supply is indicated by comparing these measurements on various dates. For example, in the United States, money supply measured as M2 grew from $6.407 trillion in January 2005, to $8.319 trillion in January 2009.
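The M2 comparison above implies a compound growth rate; a minimal back-of-the-envelope sketch, using only the figures quoted in the text, illustrates how such measurements are compared across dates:

```python
# Back-of-the-envelope check of the M2 growth quoted above
# (figures in trillions of US dollars, taken from the text).
m2_jan_2005 = 6.407
m2_jan_2009 = 8.319
years = 4

total_growth = m2_jan_2009 / m2_jan_2005 - 1            # growth over the whole period
cagr = (m2_jan_2009 / m2_jan_2005) ** (1 / years) - 1   # compound annual growth rate

print(f"Total M2 growth, Jan 2005 to Jan 2009: {total_growth:.1%}")  # ~29.8%
print(f"Compound annual growth rate:           {cagr:.1%}")          # ~6.7%
```

The variable names are illustrative only; the point is that comparing aggregates such as M2 on two dates yields both a cumulative and an annualized measure of monetary expansion.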

Money creation by the commercial banks
Main article: Monetary policy

In the contemporary monetary system, most money in circulation exists not as cash or coins but as bank deposits. The main way in which those bank deposits are created is through loans made by commercial banks. When a bank makes a loan, a deposit is created at the same time in the borrower's bank account. In that way, new money is created as a bookkeeping entry, with the loan representing an asset and the deposit a liability on the bank's balance sheet.[2]

Monetary policy regulates a country's money supply, the amount of broad currency in circulation. Almost all modern nations have central banks such as the United States Federal Reserve System, the European Central Bank (ECB), and the People's Bank of China for conducting monetary policy. Charged with the smooth functioning of the money supply and financial markets, these institutions are generally independent of the government executive.

The primary tool of monetary policy is open market operations: the central bank buys and sells financial assets such as treasury bills, government bonds, or foreign currencies from private parties. Purchases of these assets result in currency entering market circulation, while sales of these assets remove currency. Usually, open market operations are designed to target a specific short-term interest rate. For example, the U.S. Federal Reserve may target the federal funds rate, the rate at which member banks lend to one another overnight. In other instances, they might instead target a specific exchange rate relative to some foreign currency, the price of gold, or indices such as the consumer price index.

Other monetary policy tools to expand the money supply include decreasing interest rates by fiat; increasing the monetary base; and decreasing reserve requirements. Some other means are: discount window lending (as lender of last resort); moral suasion (cajoling the behavior of certain market players); and "open mouth operations" (publicly asserting future monetary policy). The conduct and effects of monetary policy and the regulation of the banking system are of central concern to monetary economics.
Quantitative easing
Main article: Quantitative easing

Quantitative easing involves the creation of a significant amount of new base money by a central bank by the buying of assets that it usually does not buy. Usually, a central bank will conduct open market operations by buying short-term government bonds or foreign currency. However, during a financial crisis, the central bank may buy other types of financial assets as well. The central bank may buy long-term government bonds, company bonds, asset-backed securities, stocks, or even extend commercial loans. The intent is to stimulate the economy by increasing liquidity and promoting bank lending, even when interest rates cannot be pushed any lower.

Quantitative easing increases reserves in the banking system (i.e., deposits of commercial banks at the central bank), giving depository institutions the ability to make new loans. Quantitative easing is usually used when lowering the discount rate is no longer effective because interest rates are already close to or at zero. In such a case, normal monetary policy cannot further lower interest rates, and the economy is in a liquidity trap.
Physical currency

In modern economies, relatively little of the supply of broad money is in physical currency. For example, in December 2010 in the United States, of the $8.853 trillion in broad money supply (M2), only about 10% (or $915.7 billion) consisted of physical coins and paper money.[3] The manufacturing of new physical money is usually the responsibility of the central bank, or sometimes, the government's treasury.

Contrary to popular belief, money creation in a modern economy does not directly involve the manufacturing of new physical money, such as paper currency or metal coins. Instead, when the central bank expands the money supply through open market operations (e.g., by purchasing government bonds), it credits the accounts that commercial banks hold at the central bank (termed high-powered money). Commercial banks may draw on these accounts to withdraw physical money from the central bank. Commercial banks may also return soiled or spoiled currency to the central bank in exchange for new currency.[4]
Money creation through the fractional reserve system
Main article: Fractional reserve banking

Through fractional reserve banking, the modern banking system expands the money supply of a country beyond the amount initially created by the central bank.[5] There are two types of money in a fractional-reserve banking system: currency originally issued by the central bank, and bank deposits at commercial banks:[6][7]

Central bank money (all money created by the central bank regardless of its form, e.g., banknotes, coins, electronic money)
Commercial bank money (money created in the banking system through borrowing and lending) – sometimes referred to as checkbook money[8]

When a commercial bank loan is extended, new commercial bank money is created if the loan proceeds are issued in the form of an increase in a customer's demand deposit account (that is, an increase in the bank's demand deposit liability owed to the customer). As a loan is paid back through reductions in the demand deposit liabilities the bank owes to a customer, that commercial bank money disappears from existence. Because loans are continually being issued in a normally functioning economy, the amount of broad money in the economy remains relatively stable. Because of this money creation process by the commercial banks, the money supply of a country is usually a multiple larger than the money issued by the central bank; that multiple was traditionally determined by the reserve requirements and now essentially by other financial ratios (primarily the capital adequacy ratio that limits the overall credit creation of a bank) set by the relevant banking regulators in the jurisdiction.
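The expansion described above follows a geometric series: each round of lending re-deposits the fraction of the previous deposit not held as reserves. A minimal sketch of that process, assuming for illustration a 10% reserve ratio (the figure is hypothetical, not from the text, and real-world limits today come mainly from capital adequacy rules, as noted above):

```python
# Illustrative sketch of deposit expansion under fractional reserve banking.
# Each lending round re-deposits (1 - reserve_ratio) of the prior deposit,
# so total deposits form a geometric series converging to initial / reserve_ratio.

def deposit_expansion(initial_deposit: float, reserve_ratio: float, rounds: int) -> float:
    """Total deposits outstanding after `rounds` cycles of lending and re-deposit."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit                 # the deposit appears on some bank's books
        deposit *= (1 - reserve_ratio)   # the loanable portion is lent and re-deposited
    return total

# With a 10% reserve ratio, 100 units of central bank money support
# total deposits approaching 100 / 0.10 = 1000 (the "money multiplier").
print(deposit_expansion(100.0, 0.10, 1000))
```

This is the traditional textbook multiplier; as the paragraph above notes, in practice the overall credit creation of a bank is constrained by regulatory ratios rather than a mechanical reserve requirement.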

An early table, featuring reinvestment from one period to the next and a geometric series, is found in the tableau économique of the Physiocrats, which is credited as the "first precise formulation" of such interdependent systems and the origin of multiplier theory.

Alternative theories

There are also heterodox theories of how money is created. These include:

Chartalism sees the state as creating money when it spends and destroying it when it taxes. More importantly, the private banking system is not, in empirical terms, reserve-limited, so its creation of money is an endogenous process, driven by credit demand and lending willingness. This accounts for the power of the state's interest rate policy in governing most of the money supply in normal times.[13][14]
Credit Theory of Money. This approach was initiated by Joseph Schumpeter. Credit theory asserts the central role of banks as creators and allocators of money supply, and distinguishes between "productive credit creation" (allowing non-inflationary economic growth even at full employment, in the presence of technological progress) and "unproductive credit creation" (resulting in inflation of either the consumer- or asset-price variety).


Banknote -- Hasanur Rahman, 06:18:28 02/09/16 Tue [1]

A banknote (often known as a bill, paper money, or simply a note) is a type of negotiable instrument known as a promissory note, made by a bank, payable to the bearer on demand. Banknotes were originally issued by commercial banks, who were legally required to redeem the notes for legal tender (usually gold or silver coin) when presented to the chief cashier of the originating bank. These commercial banknotes only traded at face value in the market served by the issuing bank.[2] Commercial banknotes have been replaced by national banknotes issued by central banks.

National banknotes are legal tender, meaning that this medium of payment is allowed by law or recognized by a legal system as valid for meeting a financial obligation.[3] Historically, banks sought to ensure that they could always pay customers in coins when they presented banknotes for payment. This practice of "backing" notes with something of substance is the basis for the history of central banks backing their currencies in gold or silver. Today, most national currencies have no backing in precious metals or commodities and have value only by fiat. With the exception of non-circulating high-value or precious metal issues, coins are used for lower valued monetary units, while banknotes are used for higher values.

The idea of using a durable light-weight substance as evidence of a promise to pay a bearer on demand originated in China during the Han Dynasty in 118 BC, when notes made of leather were used.[4] The first known banknote was developed in China during the Tang and Song dynasties, starting in the 7th century. Its roots were in merchant receipts of deposit during the Tang Dynasty (618–907), as merchants and wholesalers desired to avoid the heavy bulk of copper coinage in large commercial transactions.[5][6][7] During the Yuan Dynasty, banknotes were adopted by the Mongol Empire. In Europe, the concept of banknotes was first introduced during the 13th century by travelers such as Marco Polo,[8][9] with European banknotes appearing in 1661 in Sweden.

Counterfeiting, the forgery of banknotes, is an inherent challenge in issuing currency. It is countered by anticounterfeiting measures in the printing of banknotes. Fighting the counterfeiting of banknotes and cheques has been a principal driver of security printing methods development in recent centuries.

Main article: History of money

Paper currency first developed in Tang Dynasty China during the 7th century, although true paper money did not appear until the 11th century, during the Song Dynasty. The usage of paper currency later spread throughout the Mongol Empire. European explorers like Marco Polo introduced the concept in Europe during the 13th century.[8][9] Napoleon issued paper banknotes in the early 1800s.[10] Paper money originated in two forms: drafts, which are receipts for value held on account, and "bills", which were issued with a promise to convert at a later date.

The perception of banknotes as money has evolved over time. Originally, money was based on precious metals. Banknotes were seen as essentially an I.O.U. or promissory note: a promise to pay someone in precious metal on presentation (see representative money). With the gradual removal of precious metals from the monetary system, banknotes evolved to represent credit money, or (if backed by the credit of a government) also fiat money.

Notes or bills were often referred to in 18th century novels and were often a key part of the plot, such as a "note drawn by Lord X for £100 which becomes due in 3 months' time".

Development of the banknote began in the Tang Dynasty during the 7th century, with local issues of paper currency, although true paper money did not appear until the 11th century, during the Song Dynasty.[11][12] Its roots were in merchant receipts of deposit during the Tang Dynasty (618–907), as merchants and wholesalers desired to avoid the heavy bulk of copper coinage in large commercial transactions.[5][6][7]

Before the use of paper, the Chinese used coins that were circular, with a rectangular hole in the middle. Several coins could be strung together on a rope. Merchants in China, if they became rich enough, found that their strings of coins were too heavy to carry around easily. To solve this problem, coins were often left with a trustworthy person, and the merchant was given a slip of paper recording how much money he had with that person. If he showed the paper to that person he could regain his money. Eventually, the Song Dynasty paper money called "jiaozi" originated from these promissory notes.

By 960 the Song Dynasty, short of copper for striking coins, issued the first generally circulating notes. A note is a promise to redeem later for some other object of value, usually specie. The issue of credit notes is often for a limited duration, and at some discount to the promised amount later. The jiaozi nevertheless did not replace coins during the Song Dynasty; paper money was used alongside the coins.

The central government soon observed the economic advantages of printing paper money, granting several of the deposit shops a monopoly right to issue these certificates of deposit.[13] By the early 12th century, the amount of banknotes issued in a single year amounted to an annual rate of 26 million strings of cash coins.[14] By the 1120s the central government officially stepped in and produced its own state-issued paper money (using woodblock printing).

Even before this point, the Song government was amassing large amounts of paper tribute. It was recorded that each year before 1101 AD, the prefecture of Xinan (modern Xi-xian, Anhui) alone would send 1,500,000 sheets of paper in seven different varieties to the capital at Kaifeng.[15] In that year of 1101, the Emperor Huizong of Song decided to lessen the amount of paper taken in the tribute quota, because it was causing detrimental effects and creating heavy burdens on the people of the region.[16] However, the government still needed masses of paper product for the exchange certificates and the state's new issuing of paper money. For the printing of paper money alone, the Song court established several government-run factories in the cities of Huizhou, Chengdu, Hangzhou, and Anqi.[16]

The size of the workforce employed in these paper money factories was quite large, as it was recorded in 1175 AD that the factory at Hangzhou alone employed more than a thousand workers a day.[16] However, the government issues of paper money were not yet nationwide standards of currency at that point; issues of banknotes were limited to regional zones of the empire, and were valid for use only for a designated and temporary limit of three years.[14]

The geographic limitation changed between the years 1265 and 1274, when the late Southern Song government finally produced a nationwide standard currency of paper money, once its widespread circulation was backed by gold or silver.[14] The range of varying values for these banknotes was perhaps from one string of cash to one hundred at the most.[14] From 1107 onwards, the government printed money in no fewer than six ink colors and printed notes with intricate designs, and sometimes even with a mixture of unique fibers in the paper, to avoid counterfeiting.

The founder of the Yuan Dynasty, Kublai Khan, issued paper money known as Chao in his reign. The original notes during the Yuan Dynasty were restricted in area and duration, as in the Song Dynasty, but in the later course of the dynasty, facing massive shortages of specie to fund their rule in China, the Yuan began printing paper money without restrictions on duration. The Venetian merchants were impressed by the fact that the Chinese paper money was guaranteed by the State.

History of money -- Hasanur Rahman, 06:15:30 02/09/16 Tue [1]

The history of money concerns the development of means of carrying out transactions involving a physical medium of exchange. Money is any clearly identifiable object of value that is generally accepted as payment for goods and services and repayment of debts within a market or which is legal tender within a country.

Many things have been used as a medium of exchange in markets, including, for example, livestock and sacks of cereal grain (from which the shekel is derived) – things directly useful in themselves – but sometimes merely attractive items, such as cowry shells or beads, were exchanged for more useful commodities. Precious metals, from which early coins were made, fall into this second category.

Non-monetary exchange
Main article: Barter

In Politics Book 1:9[1] (c. 350 BC) the Greek philosopher Aristotle contemplated the nature of money. He considered that every object has two uses: the first being the original purpose for which the object was designed, and the second being to conceive of the object as an item to sell or barter.[2] The assignment of monetary value to an otherwise insignificant object such as a coin or promissory note arises as people and their trading associates evolve a psychological capacity to place trust in each other and in external authority within barter exchange.[3][4]

With barter, an individual possessing any surplus of value, such as a measure of grain or a quantity of livestock, could directly exchange it for something perceived to have similar or greater value or utility, such as a clay pot or a tool. The capacity to carry out barter transactions is limited in that it depends on a coincidence of wants. The seller of food grain has to find a buyer who wants to buy grain and who can also offer in return something the seller wants to buy. There is no agreed standard measure by which seller and buyer could value commodities relative to all the various goods and services offered by other potential barter partners.

David Kinley considers the theory of Aristotle to be flawed because the philosopher probably lacked sufficient understanding of the ways and practices of primitive communities, and so may have formed his opinion from personal experience and conjecture.[citation needed]

In his book Debt: The First 5000 Years, anthropologist David Graeber argues against the suggestion that money was invented to replace barter. The problem with this version of history, he suggests, is the lack of any supporting evidence. His research indicates that "gift economies" were common, at least at the beginnings of the first agrarian societies, when humans used elaborate credit systems. Graeber proposes that money as a unit of account was invented the moment when the unquantifiable obligation "I owe you one" transformed into the quantifiable notion of "I owe you one unit of something". In this view, money emerged first as credit and only later acquired the functions of a medium of exchange and a store of value.[5][6]
Gift economy

In a gift economy, valuable goods and services are regularly given without any explicit agreement for immediate or future rewards (i.e. there is no formal quid pro quo).[7] Ideally, simultaneous or recurring giving serves to circulate and redistribute valuables within the community.

There are various social theories concerning gift economies. Some consider the gifts to be a form of reciprocal altruism. Another interpretation is that implicit "I owe you" debt[8] and social status are awarded in return for the "gifts".[9] Consider, for example, the sharing of food in some hunter-gatherer societies, where food-sharing is a safeguard against the failure of any individual's daily foraging. This custom may reflect altruism, may be a form of informal insurance, or may bring with it social status or other benefits.
Emergence of money

Anatolian obsidian, as a raw material for stone-age tools, was distributed as early as 12,000 BC, with organized trade occurring in the 9th millennium (Cauvin & Chataigner 1998).[10] In Sardinia, one of the four main sites for sourcing the material deposits of obsidian within the Mediterranean, trade in obsidian was replaced in the 3rd millennium by trade in copper and silver.[11][12][13][14]

As early as 9000 BC both grain and cattle were used as money or as barter (Davies); the first grain remains found that are considered evidence of pre-agricultural practice date to 17,000 BC.[15][16][17]

In the earliest instances of trade with money, the things with the greatest utility and reliability in terms of re-use and re-trading (their marketability) determined what was chosen as the object of exchange. Thus in agricultural societies, things needed for the efficient and comfortable employment of energies in the production of cereals and the like were the easiest to transfer to monetary significance for direct exchange. As more of the basic conditions of human existence were met,[18] the division of labour increased, creating new activities and freeing time for more advanced concerns. As people's needs became more refined, indirect exchange became more likely, as the physical separation of skilled labourers (suppliers) from their prospective clients (demand) required a medium common to all communities to facilitate a wider market.[19][20]

Aristotle's opinion on the creation of money[21] as a new thing in society is:

When the inhabitants of one country became more dependent on those of another, and they imported what they needed, and exported what they had too much of, money necessarily came into use.[22]

The worship of Moneta is recorded by Livy, with the temple built in the time of Rome 413 (123); a temple consecrated to the same goddess was built in the earlier part of the fourth century (perhaps the same temple).[23][24][25] The temple contained the mint of Rome for a period of four centuries.[26][27]

The Code of Hammurabi, the best preserved ancient law code, was created ca. 1760 BC (middle chronology) in ancient Babylon. It was enacted by the sixth Babylonian king, Hammurabi. Earlier collections of laws include the code of Ur-Nammu, king of Ur (ca. 2050 BC), the Code of Eshnunna (ca. 1930 BC) and the code of Lipit-Ishtar of Isin (ca. 1870 BC).[28] These law codes formalized the role of money in civil society. They set amounts of interest on debt... fines for 'wrongdoing'... and compensation in money for various infractions of formalized law.

The Mesopotamian civilization developed a large-scale economy based on commodity money. The Babylonians and their neighboring city states later developed the earliest system of economics as we think of it today, in terms of rules on debt,[8] legal contracts and law codes relating to business practices and private property. Money was not merely an emergence; it was a necessity.[29][30]
Early usage

The earliest places of storage were thought to be money-boxes (θησαυροί[31]) constructed similarly to a bee-hive,[32][33] as in the Mycenae tombs of 1550–1500 BC.[34][35][36]

Cattle were an early form of money, used as such from between 9000 and 6000 BC onwards (Davies 1996 & 1999).[37][38] Both the animals and the manure they produced were valuable; animals are recorded as being used as payment in Roman law, where fines were paid in oxen and sheep (Rollin 1836),[39][40][41] and within the Iliad and Odyssey, attesting to a value c. 850–800 BC (Evans & Schmalensee 2005).[42][43]

It has long been assumed that metals, where available, were favored for use as proto-money over such commodities as cattle, cowry shells, or salt, because metals are at once durable, portable, and easily divisible.[44] The use of gold as proto-money has been traced back to the fourth millennium BC when the Egyptians used gold bars of a set weight as a medium of exchange,[citation needed] as had been done earlier in Mesopotamia with silver bars.[citation needed]

The first mention of the use of money within the Bible is in the Book of Genesis,[45] in reference to the criteria for the circumcision of a bought slave. Later, the Cave of Machpelah is purchased (with silver[46][47]) by Abraham,[48] during a period dated to the beginning of the twentieth century BC,[49] some time close to 1900 BC[50] (after 1985).[51] The currency was also in use amongst the Philistine people of the same time-period.[52]

The shekel was an ancient unit[53] used in Mesopotamia around 3000 BC to define both a specific weight of barley and equivalent amounts of materials such as silver, bronze and copper. The use of a single unit to define both mass and currency was a similar concept to the British pound, which was originally defined as a one-pound mass of silver.

A description of how trade proceeded includes, for sales, the dividing (clipping) of an amount from a weight of something corresponding to the perceived value of the purchase. The ancient Greek term for this was κέρδος. From this one might understand how coinage developed from the small metallic clippings (of silver[54][55][56]) resulting from trade exchanges.[57] The word used for money in Thucydides' History is χρήματα ("chremata"), translated in some contexts as "goods" or "property", although with a wider range of possible applications, having the definite meaning "valuable things".[58][59][60][61]

The Indus Valley Civilisation of India dates back to between 2500 BC and 1750 BC. There is, however, no consensus on whether the seals excavated from the sites were in fact coins. The first gold coins of the Grecian age were struck in Lydia at a time approximated to the year 700 BC.[62] The talent,[53][63] in use during the periods of Grecian history both before and during the time of the life of Homer, weighed between 8.42 and 8.75 grammes.

Commodity money
Main article: Commodity money

Bartering has several problems, most notably that it requires a "coincidence of wants". For example, if a wheat farmer needs what a fruit farmer produces, a direct swap is impossible as seasonal fruit would spoil before the grain harvest. A solution is to trade fruit for wheat indirectly through a third, "intermediate", commodity: the fruit is exchanged for the intermediate commodity when the fruit ripens. If this intermediate commodity doesn't perish and is reliably in demand throughout the year (e.g. copper, gold, or wine) then it can be exchanged for wheat after the harvest. The function of the intermediate commodity as a store-of-value can be standardized into a widespread commodity money, reducing the coincidence of wants problem. By overcoming the limitations of simple barter, a commodity money makes the market in all other commodities more liquid.

Many cultures around the world eventually developed the use of commodity money. Ancient China, Africa, and India used cowry shells. Trade in Japan's feudal system was based on the koku – a unit of rice. The shekel was an ancient unit of weight and currency. The first usage of the term came from Mesopotamia circa 3000 BC and referred to a specific weight of barley, to which other values such as silver, bronze and copper were related. A barley shekel was originally both a unit of currency and a unit of weight.[65]

Wherever trade is common, barter systems usually lead quite rapidly to several key goods being imbued with monetary properties[citation needed]. In the early British colony of New South Wales, rum emerged quite soon after settlement as the most monetary of goods. When a nation is without a currency it commonly adopts a foreign currency. In prisons where conventional money is prohibited, it is quite common for cigarettes to take on a monetary quality. Contrary to popular belief, precious metals have rarely been used outside of large societies. Gold, in particular, is sufficiently scarce that it has only been used as a currency for a few relatively brief periods in history.

From approximately 1000 BC, money in the shape of small knives and spades made of bronze was in use in China during the Zhou dynasty, with cast bronze replicas of cowrie shells in use before this. The first manufacture of coins seems to have taken place separately in India, China, and the cities around the Aegean Sea between 700 and 500 BC.[66] While the Aegean coins were stamped (heated and hammered with insignia), the Indian coins (from the Ganges river valley) were punched metal disks, and Chinese coins (first developed in the Great Plain) were cast bronze with holes in the center to be strung together. The different forms and metallurgical processes imply a separate development.[67]

The first ruler in the Mediterranean known to have officially set standards of weight and money was Pheidon.[68] Minting occurred in the latter parts of the 7th century amongst the cities of Grecian Asia Minor, spreading to the Aegean Greek islands and the south of Italy by 500 BC.[27] The first stamped money (having the mark of some authority in the form of a picture or words) can be seen in the Bibliothèque Nationale in Paris. It is an electrum stater of a turtle coin, minted at Aegina island. This coin[69] dates to about 700 BC.[70]

Other coins made of electrum (a naturally occurring alloy of silver and gold) were manufactured on a larger scale about 650 BC in Lydia (on the coast of what is now Turkey).[71] Similar coinage was adopted and manufactured to their own standards in nearby cities of Ionia, including Mytilene and Phokaia (using coins of electrum) and Aegina (using silver), during the 6th century BC, and was soon adopted in mainland Greece itself and in the Persian Empire (after it incorporated Lydia in 547 BC).

The use and export of silver coinage, along with soldiers paid in coins, contributed to the Athenian Empire's dominance of the region in the 5th century BC. The silver used was mined in southern Attica at Laurium and Thorikos by a huge workforce of slave labour. A major silver vein discovery at Laurium in 483 BC led to the huge expansion of the Athenian military fleet.

It was the discovery of the touchstone which led the way for metal-based commodity money and coinage. Any soft metal can be tested for purity on a touchstone, allowing one to quickly calculate the total content of a particular metal in a lump. Gold is a soft metal, which is also hard to come by, dense, and storable. As a result, monetary gold spread very quickly from Asia Minor, where it first gained wide usage, to the entire world.

Using such a system still required several steps and mathematical calculation. The touchstone allows one to estimate the amount of gold in an alloy, which is then multiplied by the weight to find the amount of gold alone in a lump. To make this process easier, the concept of standard coinage was introduced. Coins were pre-weighed and pre-alloyed, so as long as the manufacturer was aware of the origin of the coin, no use of the touchstone was required. Coins were typically minted by governments in a carefully protected process, and then stamped with an emblem that guaranteed the weight and value of the metal. It was, however, extremely common for governments to assert that the value of such money lay in its emblem and thus to subsequently reduce the value of the currency by lowering the content of valuable metal.[citation needed]
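The two-step calculation described above can be sketched as simple arithmetic; the figures below are hypothetical, chosen only for illustration:

```python
# A touchstone lets one estimate the purity of a lump of soft metal;
# multiplying that estimate by the lump's weight gives its gold content.

def gold_content(weight_grams: float, purity: float) -> float:
    """Amount of gold in a lump, given its weight and estimated purity (0-1)."""
    if not 0.0 <= purity <= 1.0:
        raise ValueError("purity must be a fraction between 0 and 1")
    return weight_grams * purity

# A 50 g lump judged by its touchstone streak to be about 60% gold:
print(gold_content(50.0, 0.60))  # roughly 30 g of gold
```

Standard coinage removed both steps: a coin of known weight and alloy required neither the touchstone test nor the multiplication.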

Gold and silver were used as the most common form of money throughout history. In many languages, such as Spanish, French, and Italian, the word for silver is still directly related to the word for money. Although gold and silver were commonly used to mint coins, other metals were used. For instance, Ancient Sparta minted coins from iron to discourage its citizens from engaging in foreign trade.[72] In the early seventeenth century Sweden lacked precious metal and so produced "plate money": large slabs of copper approximately 50 cm or more in length and width, appropriately stamped with indications of their value.

Gold coinage began to be minted again in Europe in the thirteenth century. Frederick II is credited with having re-introduced the metal to currency during the time of the Crusades. During the fourteenth century, Europe converted en masse from the use of silver in currency to the minting of gold.[73][74] Vienna switched from minting silver to gold in 1328.[73]

Metal-based coins had the advantage of carrying their value within the coins themselves. On the other hand, they invited manipulation: the clipping of coins in an attempt to recover and recycle the precious metal. A greater problem was the simultaneous co-existence of gold, silver and copper coins in Europe. English and Spanish traders valued gold coins more than silver coins, as many of their neighbors did, with the effect that the English gold-based guinea began to rise against the English silver-based crown in the 1670s and 1680s. Consequently, silver was ultimately pulled out of England for dubious amounts of gold coming into the country at a rate no other European nation would share. The effect was worsened by Asian traders not sharing the European appreciation of gold at all: gold left Asia and silver left Europe in quantities that European observers like Isaac Newton, Master of the Royal Mint, noted with unease.[75]

Stability came into the system with national banks guaranteeing to change money into gold at a promised rate; it did not, however, come easily. The Bank of England risked a national financial catastrophe in the 1730s when customers demanded their money be changed into gold in a moment of crisis. Eventually London's merchants saved the bank and the nation with financial guarantees.[citation needed]

Another step in the evolution of money was the change from a coin being a unit of weight to being a unit of value. A distinction could be made between its commodity value and its specie value. The difference between these values is seigniorage.[76]
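The distinction can be made concrete with a small, hypothetical worked example: seigniorage is the face (specie) value of a coin minus its commodity (metal) value, here computed in the smallest currency unit:

```python
# Hypothetical illustration of seigniorage: the issuer's revenue per coin
# is the coin's declared face value minus the market value of its metal.
# Values are integers in the smallest currency unit to keep arithmetic exact.

def seigniorage(face_value: int, metal_value: int) -> int:
    """Issuer's gain per coin: face (specie) value minus commodity value."""
    return face_value - metal_value

# A coin declared worth 100 units that contains metal worth 85 units:
print(seigniorage(100, 85))  # 15 units of seigniorage per coin minted
```

Debasement, as described earlier, amounts to lowering the metal value while keeping the face value, which increases the seigniorage per coin.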
Trade bills of exchange

Bills of exchange became prevalent with the expansion of European trade toward the end of the Middle Ages. A flourishing Italian wholesale trade in cloth, woolen clothing, wine, tin and other commodities was heavily dependent on credit for its rapid expansion. Goods were supplied to a buyer against a bill of exchange, which constituted the buyer's promise to make payment at some specified future date. Provided that the buyer was reputable or the bill was endorsed by a credible guarantor, the seller could then present the bill to a merchant banker and redeem it in money at a discounted value before it actually became due. An equally important purpose of these bills was safety: traveling with cash was particularly dangerous at the time. A deposit could be made with a banker in one town and a bill of exchange handed out in return, which could be redeemed in another town.

These bills could also be used as a form of payment by the seller to make additional purchases from his own suppliers. Thus, the bills – an early form of credit – became both a medium of exchange and a medium for storage of value. Like the loans made by the Egyptian grain banks, this trade credit became a significant source for the creation of new money. In England, bills of exchange became an important form of credit and money during the last quarter of the 18th century and the first quarter of the 19th century, before banknotes, checks and cash credit lines were widely available.[77]

The acceptance of symbolic forms of money opened up vast new realms for human creativity. A symbol could be used to represent something of value that was available in physical storage somewhere else in space, such as grain in the warehouse. It could also be used to represent something of value that would be available later in time, such as a promissory note or bill of exchange, a document ordering someone to pay a certain sum of money to another on a specific date or when certain conditions have been fulfilled.

In the 12th century, the English monarchy introduced an early version of the bill of exchange in the form of a notched piece of wood known as a tally stick. Tallies originally came into use at a time when paper was rare and costly, but their use persisted until the early 19th century, even after paper forms of money had become prevalent. The notches were used to denote various amounts of taxes payable to the crown. Initially, tallies were simply used as a form of receipt to the taxpayer at the time of rendering his dues. As the revenue department became more efficient, it began issuing tallies to denote a promise of the tax assessee to make future tax payments at specified times during the year. Each tally consisted of a matching pair: one stick was given to the assessee at the time of assessment, representing the amount of taxes to be paid later, and the other was held by the Treasury, representing the amount of taxes to be collected at a future date.

The Treasury discovered that these tallies could also be used to create money. When the crown had exhausted its current resources, it could use the tally receipts representing future tax payments due to the crown as a form of payment to its own creditors, who in turn could either collect the tax revenue directly from those assessed or use the same tally to pay their own taxes to the government. The tallies could also be sold to other parties in exchange for gold or silver coin at a discount reflecting the length of time remaining until the taxes were due for payment. Thus, the tallies became an accepted medium of exchange for some types of transactions and an accepted medium for store of value. Like the girobanks before it, the Treasury soon realized that it could also issue tallies that were not backed by any specific assessment of taxes. By doing so, the Treasury created new money that was backed by public trust and confidence in the monarchy rather than by specific revenue receipts.[78]
Goldsmith bankers

Goldsmiths in England had been craftsmen, bullion merchants, money changers and money lenders since the 16th century. But they were not the first to act as financial intermediaries; in the early 17th century, the scriveners were the first to keep deposits for the express purpose of relending them.[79] Merchants and traders had amassed huge hoards of gold and entrusted their wealth to the Royal Mint for storage. In 1640 King Charles I seized the private gold stored in the mint as a forced loan (which was to be paid back over time). Thereafter merchants preferred to store their gold with the goldsmiths of London, who possessed private vaults and charged a fee for that service. In exchange for each deposit of precious metal, the goldsmiths issued receipts certifying the quantity and purity of the metal they held as a bailee (i.e. in trust). These receipts could not be assigned (only the original depositor could collect the stored goods). Gradually the goldsmiths took over the scriveners' function of relending on behalf of a depositor and also developed modern banking practices; promissory notes were issued for money deposited, which by custom and/or law was a loan to the goldsmith,[80] i.e. the depositor expressly allowed the goldsmith to use the money for any purpose, including advances to his customers. The goldsmith charged no fee, or even paid interest, on these deposits. Since the promissory notes were payable on demand, while the advances (loans) to the goldsmith's customers were repayable over a longer time period, this was an early form of fractional reserve banking.
The promissory notes developed into an assignable instrument, which could circulate as a safe and convenient form of money backed by the goldsmith's promise to pay.[81] Hence goldsmiths could advance loans in the form of gold money, or in the form of promissory notes, or in the form of checking accounts.[82] Gold deposits were relatively stable, often remaining with the goldsmith for years on end, so there was little risk of default so long as public trust in the goldsmith's integrity and financial soundness was maintained. Thus, the goldsmiths of London became the forerunners of British banking and prominent creators of new money based on credit.

Hollywood -- Hasanur Rahman, 20:40:50 02/08/16 Mon [1]

Hollywood (/ˈhɒliwʊd/ HOL-ee-wuud) is a neighborhood in the central region of Los Angeles, California. The neighborhood is notable for its place as the home of the U.S. film industry, including several of its historic studios. Its name has come to be a metonym for the motion picture industry of the United States. Hollywood is also a highly ethnically diverse, densely populated, economically diverse neighborhood and retail business district.

Hollywood was a small community in 1870 and was incorporated as a municipality in 1903.[1][2] It officially merged with the city of Los Angeles in 1910, and soon thereafter a prominent film industry began to emerge, eventually becoming the most recognizable film industry in the world.

Early history and development

In 1853, one adobe hut stood in Nopalera (Nopal field), named for the Mexican Nopal cactus indigenous to the area. By 1870, an agricultural community flourished. The area was known as the Cahuenga Valley, after the pass in the Santa Monica Mountains immediately to the north.

According to the diary of H. J. Whitley, known as the "Father of Hollywood", on his honeymoon in 1886 he stood at the top of the hill looking out over the valley. Along came a Chinese man in a wagon carrying wood. The man got out of the wagon and bowed. The Chinese man was asked what he was doing and replied, "I holly-wood", meaning 'hauling wood.' H. J. Whitley had an epiphany and decided to name his new town Hollywood. Holly would represent England and wood would represent his Scottish heritage. Whitley had already started over 100 towns across the western United States.[5][6]

Whitley arranged to buy the 500-acre (200 ha) E.C. Hurd ranch and disclosed to him his plans for the land. They agreed on a price and Hurd agreed to sell at a later date. Before Whitley got off the ground with Hollywood, plans for the new town had spread to General Harrison Gray Otis, Hurd's wife, eastern adjacent ranch co-owner Daeida Wilcox, and others.

Daeida Wilcox may have learned of the name Hollywood from Ivar Weid, her neighbor in Holly Canyon (now Lake Hollywood) and a prominent investor and friend of Whitley's.[7][8] She recommended the same name to her husband, Harvey H. Wilcox. In August 1887, Wilcox filed with the Los Angeles County Recorder's office a deed and parcel map of property he had sold, named "Hollywood, California." Wilcox wanted to be the first to record it on a deed. The early real-estate boom went bust that same year, yet Hollywood began its slow growth.

By 1900, the region had a post office, newspaper, hotel, and two markets. Los Angeles, with a population of 102,479, lay 10 miles (16 km) east through the vineyards, barley fields, and citrus groves. A single-track streetcar line ran down the middle of Prospect Avenue from Los Angeles, but service was infrequent and the trip took two hours. The old citrus fruit-packing house was converted into a livery stable, improving transportation for the inhabitants of Hollywood.

The Hollywood Hotel was opened in 1902 by H. J. Whitley, president of the Los Pacific Boulevard and Development Company. Having finally acquired the Hurd ranch and subdivided it, Whitley built the hotel to attract land buyers. Flanking the west side of Highland Avenue, the structure fronted on Prospect Avenue, which, still a dusty, unpaved road, was regularly graded and graveled. The hotel was to become internationally known and was the center of civic and social life and the home of the stars for many years.[9]

Whitley's company developed and sold one of the early residential areas, the Ocean View Tract.[10] Whitley did much to promote the area. He paid thousands of dollars for electric lighting, brought electricity to the area, and built a bank, as well as a road into the Cahuenga Pass. The lighting ran for several blocks down Prospect Avenue. Whitley's land was centered on Highland Avenue.[11][12] His 1918 development, Whitley Heights, was named for him.
Incorporation and merger

Hollywood was incorporated as a municipality on November 14, 1903, by a vote of 88 for and 77 against. On January 30, 1904, the voters in Hollywood decided, by a vote of 113 to 96, for the banishment of liquor in the city, except when it was being sold for medicinal purposes. Neither hotels nor restaurants were allowed to serve wine or liquor before or after meals.[13]

In 1910, the city voted for merger with Los Angeles in order to secure an adequate water supply and to gain access to the L.A. sewer system. With annexation, the name of Prospect Avenue changed to Hollywood Boulevard and all the street numbers were also changed.[14]

Bollywood -- Hasanur Rahman, 20:36:38 02/08/16 Mon [1]

Bollywood is the sobriquet for the Hindi language film industry, based in Mumbai, Maharashtra.[3][4] The term is often incorrectly used as a synecdoche to refer to the whole of Indian cinema; however, it is only a part of the large Indian film industry, which includes other production centres producing films in many languages.[5]

Bollywood is one of the largest film producers in India, representing 43% of net box office revenue, while Telugu and Tamil cinema represent 36% and the rest of the regional cinemas constitute 21%, as of 2014.[6] Bollywood is also one of the largest centers of film production in the world.[7][8][9] It is more formally referred to as Hindi cinema.[10] Bollywood is classified as the biggest movie industry in the world in terms of the number of people employed and the number of films produced.[11] In 2011 alone, over 3.5 billion tickets were sold across the globe, 900,000 more than Hollywood sold.[12] Bollywood also makes approximately 1,041 films yearly, as opposed to fewer than 500 made by Hollywood.


The name "Bollywood" is a portmanteau derived from Bombay (the former name for Mumbai) and Hollywood, the center of the American film industry.[13] However, unlike Hollywood, Bollywood does not exist as a physical place. Some deplore the name, arguing that it makes the industry look like a poor cousin to Hollywood.[13][14]

The naming scheme for "Bollywood" was inspired by "Tollywood", the name that was used to refer to the cinema of West Bengal. Dating back to 1932, "Tollywood" was the earliest Hollywood-inspired name, referring to the Bengali film industry based in Tollygunge, Calcutta, whose name is reminiscent of "Hollywood" and was the centre of the cinema of India at the time.[15] It was this "chance juxtaposition of two pairs of rhyming syllables," Holly and Tolly, that led to the portmanteau name "Tollywood" being coined. The name "Tollywood" went on to be used as a nickname for the Bengali film industry by the popular Kolkata-based Junior Statesman youth magazine, establishing a precedent for other film industries to use similar-sounding names, eventually leading to the term "Bollywood" being coined.[16] However, more popularly, Tollywood is now used to refer to the Telugu Film Industry in Telangana & Andhra Pradesh. The term "Bollywood" itself has origins in the 1970s, when India overtook America as the world's largest film producer. Credit for the term has been claimed by several different people, including the lyricist, filmmaker and scholar Amit Khanna,[17] and the journalist Bevinda Collaco.

Raja Harishchandra (1913), by Dadasaheb Phalke, is known as the first silent feature film made in India. By the 1930s, the industry was producing over 200 films per annum.[19] The first Indian sound film, Ardeshir Irani's Alam Ara (1931), was a major commercial success.[20] There was clearly a huge market for talkies and musicals; Bollywood and all the regional film industries quickly switched to sound filming.

The 1930s and 1940s were tumultuous times: India was buffeted by the Great Depression, World War II, the Indian independence movement, and the violence of the Partition. Most Bollywood films were unabashedly escapist, but there were also a number of filmmakers who tackled tough social issues, or used the struggle for Indian independence as a backdrop for their plots.[19]

In 1937, Ardeshir Irani, of Alam Ara fame, made the first colour film in Hindi, Kisan Kanya. The next year, he made another colour film, a version of Mother India. However, colour did not become a popular feature until the late 1950s. At this time, lavish romantic musicals and melodramas were the staple fare at the cinema.
Golden Age

Following India's independence, the period from the late 1940s to the 1960s is regarded by film historians as the "Golden Age" of Hindi cinema.[21][22][23] Some of the most critically acclaimed Hindi films of all time were produced during this period. Examples include the Guru Dutt films Pyaasa (1957) and Kaagaz Ke Phool (1959), the Raj Kapoor films Awaara (1951) and Shree 420 (1955), and Dilip Kumar's Aan (1952). These films expressed social themes mainly dealing with working-class urban life in India; Awaara presented the city as both a nightmare and a dream, while Pyaasa critiqued the unreality of city life.[24] Some of the most famous epic films of Hindi cinema were also produced at the time, including Mehboob Khan's Mother India (1957), which was nominated for the Academy Award for Best Foreign Language Film,[25] and K. Asif's Mughal-e-Azam (1960).[26] Madhumati (1958), directed by Bimal Roy and written by Ritwik Ghatak, popularised the theme of reincarnation in Western popular culture.[27] Other acclaimed mainstream Hindi filmmakers of the time included Kamal Amrohi and Vijay Bhatt. Successful actors of the period included Dev Anand, Dilip Kumar, Raj Kapoor and Guru Dutt, while successful actresses included Nargis, Vyjayanthimala, Meena Kumari, Nutan, Madhubala, Waheeda Rehman and Mala Sinha.[28]

While commercial Hindi cinema was thriving, the 1950s also saw the emergence of a new Parallel Cinema movement.[24] Though the movement was mainly led by Bengali cinema, it also began gaining prominence in Hindi cinema. Early examples of Hindi films in this movement include Chetan Anand's Neecha Nagar (1946)[29] and Bimal Roy's Do Bigha Zamin (1953). Their critical acclaim, as well as the latter's commercial success, paved the way for Indian neorealism[30] and the Indian New Wave.[31] Some of the internationally acclaimed Hindi filmmakers involved in the movement included Mani Kaul, Kumar Shahani, Ketan Mehta, Govind Nihalani, Shyam Benegal and Vijaya Mehta.[24]

Ever since the social realist film Neecha Nagar won the Grand Prize at the first Cannes Film Festival,[29] Hindi films were frequently in competition for the Palme d'Or at Cannes throughout the 1950s and early 1960s, with some winning major prizes at the festival.[32] Guru Dutt, while overlooked in his own lifetime, belatedly gained international recognition in the 1980s.[32][33] Dutt is now regarded as one of the greatest Asian filmmakers of all time, alongside the more famous Indian Bengali filmmaker Satyajit Ray. The 2002 Sight & Sound critics' and directors' poll of greatest filmmakers ranked Dutt at No. 73 on the list.[34] Some of his films are now included among the greatest films of all time, with Pyaasa (1957) featured in Time magazine's "All-TIME" 100 best movies list,[35] and both Pyaasa and Kaagaz Ke Phool (1959) tied at No. 160 in the 2002 Sight & Sound critics' and directors' poll of all-time greatest films. Several other Hindi films from this era were also ranked in the Sight & Sound poll, with Raj Kapoor's Awaara (1951), Vijay Bhatt's Baiju Bawra (1952), Mehboob Khan's Mother India (1957) and K. Asif's Mughal-e-Azam (1960) all tied at No. 346 on the list.[36]
Modern cinema

In the late 1960s and early 1970s, romance movies and action films starred actors like Rajesh Khanna, Dharmendra, Sanjeev Kumar and Shashi Kapoor and actresses like Sharmila Tagore, Mumtaz and Asha Parekh. In the mid-1970s, romantic confections made way for gritty, violent films about gangsters (see Indian mafia) and bandits. Amitabh Bachchan, the star known for his "angry young man" roles, rode the crest of this trend with actors like Mithun Chakraborty, Anil Kapoor and Sunny Deol, which lasted into the early 1990s. Actresses from this era included Hema Malini, Jaya Bachchan and Rekha.[28]

Some Hindi filmmakers such as Shyam Benegal continued to produce realistic Parallel Cinema throughout the 1970s,[37] alongside Mani Kaul, Kumar Shahani, Ketan Mehta, Govind Nihalani and Vijaya Mehta.[24] However, the 'art film' bent of the Film Finance Corporation came under criticism during a Committee on Public Undertakings investigation in 1976, which accused the body of not doing enough to encourage commercial cinema. The 1970s thus saw the rise of commercial cinema in the form of enduring films such as Sholay (1975), which consolidated Amitabh Bachchan's position as a lead actor. The devotional classic Jai Santoshi Ma was also released in 1975.[38] Another important film from 1975 was Deewar, directed by Yash Chopra and written by Salim-Javed. A crime film pitting "a policeman against his brother, a gang leader based on real-life smuggler Haji Mastan" (portrayed by Amitabh Bachchan), it was described as "absolutely key to Indian cinema" by Danny Boyle.[39] The most internationally acclaimed Hindi film of the 1980s was Mira Nair's Salaam Bombay! (1988), which won the Camera d'Or at the 1988 Cannes Film Festival and was nominated for the Academy Award for Best Foreign Language Film.

During the late 1980s and early 1990s, the pendulum swung back toward family-centric romantic musicals, with the success of films such as Qayamat Se Qayamat Tak (1988), Maine Pyar Kiya (1989), Dil (1990), Hum Aapke Hain Kaun (1994), Dilwale Dulhania Le Jayenge (1995) and Kuch Kuch Hota Hai (1998) making stars of a new generation of actors (such as Aamir Khan, Salman Khan and Shahrukh Khan) and actresses (such as Madhuri Dixit, Sridevi and Juhi Chawla).[28] At that point in time, action and comedy films were also successful, with actors like Govinda and actresses such as Raveena Tandon and Karisma Kapoor appearing in popular comedy films, and stunt actor Akshay Kumar gaining popularity for performing dangerous stunts in his well-known Khiladi film series and other action films.[40][41] Furthermore, the decade marked the entry of new performers in arthouse and independent films, some of which succeeded commercially, the most influential example being Satya (1998), directed by Ram Gopal Varma and written by Anurag Kashyap. The critical and commercial success of Satya led to the emergence of a distinct genre known as Mumbai noir:[42] urban films reflecting social problems in the city of Mumbai.[43] This led to a resurgence of Parallel Cinema by the end of the decade.[42] These films often featured actors like Nana Patekar, Manoj Bajpai, Manisha Koirala, Tabu and Urmila Matondkar, whose performances were usually critically acclaimed.

The 2000s saw a growth in Bollywood's popularity across the world. This took the nation's filmmaking to new heights in terms of production values, cinematography and innovative storylines, as well as technical advances in areas such as special effects and animation.[44] Some of the largest production houses, among them Yash Raj Films and Dharma Productions, were the producers of new modern films.[44] Popular films of the decade included Koi... Mil Gaya (2003), Kal Ho Naa Ho (2003), Veer-Zaara (2004), Dhoom (2004), Hum Tum (2004), Dhoom 2 (2006), Krrish (2006) and Jab We Met (2007). These films starred established actors. However, the mid-2000s also saw the rise of popular actors like Hrithik Roshan, Saif Ali Khan, Shahid Kapoor and Abhishek Bachchan, as well as actresses like Rani Mukerji, Preity Zinta, Aishwarya Rai, Kareena Kapoor and Priyanka Chopra.

In the early 2010s, established actors like Salman Khan and Akshay Kumar became known for making big-budget masala entertainers like Dabangg and Rowdy Rathore opposite younger actresses like Sonakshi Sinha. These films were often not the subject of critical acclaim, but were nonetheless major commercial successes. While most stars from the 2000s continued their successful careers into the next decade, the 2010s also saw the rise of a new generation of actors like Ranbir Kapoor, Imran Khan, Ranveer Singh, and Arjun Kapoor, as well as actresses like Vidya Balan, Katrina Kaif, Deepika Padukone, Kangana Ranaut, Anushka Sharma, and Parineeti Chopra.

Hindi films can achieve distribution across at least 22 of India’s 29 states.[45] The Hindi film industry has preferred films that appeal to all segments of the audience (see the discussion in Ganti, 2004, cited in references), and has resisted making films that target narrower audiences. It was believed that aiming for a broad spectrum would maximise box office receipts. However, filmmakers may be moving towards accepting some box-office segmentation, between films that appeal to rural Indians, and films that appeal to urban and international audiences.
Influences for Bollywood

Gokulsing and Dissanayake identify six major influences that have shaped the conventions of Indian popular cinema:[46]

The ancient Indian epics of Mahabharata and Ramayana which have exerted a profound influence on the thought and imagination of Indian popular cinema, particularly in its narratives. Examples of this influence include the techniques of a side story, back-story and story within a story. Indian popular films often have plots which branch off into sub-plots; such narrative dispersals can clearly be seen in the 1993 films Khalnayak and Gardish.[46]
Ancient Sanskrit drama, with its highly stylised nature and emphasis on spectacle, where music, dance and gesture combined "to create a vibrant artistic unit with dance and mime being central to the dramatic experience." Sanskrit dramas were known as natya, derived from the root word nrit (dance), characterising them as spectacular dance-dramas, a tradition that has continued in Indian cinema.[46] The theory of rasa, dating back to ancient Sanskrit drama, is believed to be one of the most fundamental features differentiating Indian cinema, particularly Hindi cinema, from that of the Western world.[47]
The traditional folk theatre of India, which became popular from around the 10th century with the decline of Sanskrit theatre. These regional traditions include the Yatra of Bengal, the Ramlila of Uttar Pradesh, and the Terukkuttu of Tamil Nadu.[46]
The Parsi theatre, which "blended realism and fantasy, music and dance, narrative and spectacle, earthy dialogue and ingenuity of stage presentation, integrating them into a dramatic discourse of melodrama. The Parsi plays contained crude humour, melodious songs and music, sensationalism and dazzling stagecraft."[46]
Hollywood, where musicals were popular from the 1920s to the 1950s, though Indian filmmakers departed from their Hollywood counterparts in several ways. "For example, the Hollywood musicals had as their plot the world of entertainment itself. Indian filmmakers, while enhancing the elements of fantasy so pervasive in Indian popular films, used song and music as a natural mode of articulation in a given situation in their films. There is a strong Indian tradition of narrating mythology, history, fairy stories and so on through song and dance." In addition, "whereas Hollywood filmmakers strove to conceal the constructed nature of their work so that the realistic narrative was wholly dominant, Indian filmmakers made no attempt to conceal the fact that what was shown on the screen was a creation, an illusion, a fiction. However, they demonstrated how this creation intersected with people's day to day lives in complex and interesting ways."[46]
Western musical television, particularly MTV, which has had an increasing influence since the 1990s, as can be seen in the pace, camera angles, dance sequences and music of 2000s Indian films. An early example of this approach was in Mani Ratnam's Bombay (1995).[46]

Influence of Bollywood

Perhaps the biggest influence of Bollywood has been on nationalism in India itself, where, along with the rest of Indian cinema, it has become part and parcel of the 'Indian story'.[48] In the words of the economist and Bollywood biographer Lord Meghnad Desai:[48]

Cinema actually has been the most vibrant medium for telling India its own story, the story of its struggle for independence, its constant struggle to achieve national integration and to emerge as a global presence.

In the 2000s, Bollywood began influencing musical films in the Western world, and played a particularly instrumental role in the revival of the American musical film genre. Baz Luhrmann stated that his musical film Moulin Rouge! (2001) was directly inspired by Bollywood musicals.[49] The film incorporated an Indian-themed play based on the ancient Sanskrit drama Mṛcchakatika and a Bollywood-style dance sequence with a song from the film China Gate. The critical and financial success of Moulin Rouge! renewed interest in the then-moribund Western musical genre, and subsequently films such as Chicago, The Producers, Rent, Dreamgirls, Hairspray, Sweeney Todd, Across the Universe, The Phantom of the Opera, Enchanted and Mamma Mia! were produced, fuelling a renaissance of the genre.[50][51]

A. R. Rahman, an Indian film composer, wrote the music for Andrew Lloyd Webber's Bombay Dreams, and a musical version of Hum Aapke Hain Koun has played in London's West End. The Bollywood musical Lagaan (2001) was nominated for the Academy Award for Best Foreign Language Film, and two other Bollywood films Devdas (2002) and Rang De Basanti (2006) were nominated for the BAFTA Award for Best Film Not in the English Language. Danny Boyle's Slumdog Millionaire (2008), which has won four Golden Globes and eight Academy Awards, was also directly inspired by Bollywood films,[39][52] and is considered to be a "homage to Hindi commercial cinema".[53] The theme of reincarnation was also popularised in Western popular culture through Bollywood films, with Madhumati (1958) inspiring the Hollywood film The Reincarnation of Peter Proud (1975),[27] which in turn inspired the Bollywood film Karz (1980), which in turn influenced another Hollywood film Chances Are (1989).[54] The 1975 film Chhoti Si Baat is believed to have inspired Hitch (2005), which in turn inspired the Bollywood film Partner (2007).[55]

The influence of Bollywood filmi music can also be seen in popular music elsewhere in the world. In 1978, technopop pioneers Haruomi Hosono and Ryuichi Sakamoto of the Yellow Magic Orchestra produced an electronic album Cochin Moon based on an experimental fusion between electronic music and Bollywood-inspired Indian music.[56] Devo's 1988 hit song "Disco Dancer" was inspired by the song "I am a Disco Dancer" from the Bollywood film Disco Dancer (1982).[57] The 2002 song "Addictive", sung by Truth Hurts and produced by DJ Quik and Dr. Dre, was lifted from Lata Mangeshkar's "Thoda Resham Lagta Hai" from Jyoti (1981).[58] The Black Eyed Peas' Grammy Award winning 2005 song "Don't Phunk with My Heart" was inspired by two 1970s Bollywood songs: "Ye Mera Dil Yaar Ka Diwana" from Don (1978) and "Ae Nujawan Hai Sub" from Apradh (1972).[59] Both songs were originally composed by Kalyanji Anandji, sung by Asha Bhosle, and featured the dancer Helen.[60] Also in 2005, the Kronos Quartet re-recorded several R. D. Burman compositions, with Asha Bhosle as the singer, into an album You've Stolen My Heart: Songs from R.D. Burman's Bollywood, which was nominated for "Best Contemporary World Music Album" at the 2006 Grammy Awards. Filmi music composed by A. R. Rahman (who would later win two Academy Awards for the Slumdog Millionaire soundtrack) has frequently been sampled by musicians elsewhere in the world, including the Singaporean artist Kelly Poon, the Uzbek artist Iroda Dilroz, the French rap group La Caution, the American artist Ciara, and the German band Löwenherz,[61] among others. Many Asian Underground artists, particularly those among the overseas Indian diaspora, have also been inspired by Bollywood music.
Genre conventions
See also: Masala (film genre) and Parallel Cinema

Bollywood films are mostly musicals and are expected to contain catchy music in the form of song-and-dance numbers woven into the script. A film's success often depends on the quality of such musical numbers.[62] Indeed, a film's music is often released before the movie itself and helps build its audience.

Indian audiences expect full value for their money, with a good entertainer generally referred to as paisa vasool, (literally, "money's worth").[63] Songs and dances, love triangles, comedy and dare-devil thrills are all mixed up in a three-hour extravaganza with an intermission. They are called masala films, after the Hindi word for a spice mixture. Like masalas, these movies are a mixture of many things such as action, comedy, romance and so on. Most films have heroes who are able to fight off villains all by themselves.

Bollywood plots have tended to be melodramatic. They frequently employ formulaic ingredients such as star-crossed lovers and angry parents, love triangles, family ties, sacrifice, corrupt politicians, kidnappers, conniving villains, courtesans with hearts of gold, long-lost relatives and siblings separated by fate, dramatic reversals of fortune, and convenient coincidences.

There have always been Indian films with more artistic aims and more sophisticated stories, both inside and outside the Bollywood tradition (see Parallel Cinema). They often lost out at the box office to movies with more mass appeal. Bollywood conventions are changing, however. A large Indian diaspora in English-speaking countries, and increased Western influence at home, have nudged Bollywood films closer to Hollywood models.[64]

Film critic Lata Khubchandani writes, "our earliest films ... had liberal doses of sex and kissing scenes in them. Strangely, it was after Independence the censor board came into being and so did all the strictures."[65] Plots now tend to feature Westernised urbanites dating and dancing in clubs rather than centring on pre-arranged marriages. Though these changes can widely be seen in contemporary Bollywood, the traditional conservative ways of Indian culture persist outside the industry, along with an element of resistance by some to Western-based influences.[64] Despite this, Bollywood continues to play a major role in fashion in India.[64] Some studies of fashion in India have revealed that some people are unaware that the changing fashions of Bollywood films are often influenced by globalisation; many consider the clothes worn by Bollywood actors to be authentically Indian.


The Hands Resist Him -- robin, 05:44:40 02/07/16 Sun [1]

The Hands Resist Him is a painting created by artist Bill Stoneham in 1972. It depicts a young boy and female doll standing in front of a glass paneled door against which many hands are pressed. According to Stoneham, the boy is based on a photograph of himself at age five, the doorway is a representation of the dividing line between the waking world and the world of fantasy and impossibilities, while the doll is a guide that will escort the boy through it. The titular hands represent alternate lives or possibilities.[1][2] The painting became the subject of an urban legend and a viral internet meme in February 2000 when it was posted for sale on eBay along with an elaborate backstory implying that it was haunted.


The painting was first displayed at the Feingarten Gallery in Beverly Hills, California, during the early 1970s. A one-man Stoneham show at the gallery, which included the piece, was reviewed by the art critic of the Los Angeles Times. During the show, the painting was purchased by actor John Marley,[1] notable for his role as Jack Woltz in The Godfather.[5] Sometime after Marley's death, the painting was found on the site of an old brewery by an elderly Californian couple, as stated in their original eBay listing.[3][4] The painting appeared on the auction website eBay in February 2000. The couple's description made a series of claims that the painting was cursed or haunted: that the characters in the painting moved during the night, and that they would sometimes leave the painting and enter the room in which it was being displayed. Also included with the listing was a series of photographs said to be evidence of an incident in which the female doll character threatened the male character with a gun she was holding, causing him to attempt to leave the painting.[2][3] A disclaimer was included with the listing absolving the seller of all liability if the painting was purchased.[3][4]

News of the listing was quickly spread by internet users who forwarded the link to their friends or wrote their own pages about it.[3] Some people claimed that simply viewing the photos of the painting made them feel ill or have unpleasant experiences. Eventually, the auction page was viewed over 30,000 times.[3][4]

After an initial bid of $199, the painting eventually received 30 bids and sold for $1,025.00. The buyer, Perception Gallery in Grand Rapids, Michigan, eventually contacted Bill Stoneham and related the unusual story of its auction on eBay and their acquisition of it. He reported being quite surprised by all the stories and strange interpretations of the images in the painting.[2][3][4] According to the artist, the object presumed by the eBay sellers to be a gun is actually nothing more than a dry cell battery and a tangle of wires.[2]

Stoneham recalls that both the owner of the gallery in which the painting was first displayed, and the art critic who reviewed it, died within one year of coming into contact with the painting.[1]
New paintings in the series

An individual who saw the story about the original painting contacted Stoneham about commissioning a sequel.[6] Stoneham accepted and painted a sequel called Resistance at the Threshold.[7] It depicts the same characters more than 40 years later, in the same style as the original. A second sequel, Threshold of Revelation, was completed in 2012 and can be seen on Stoneham's website.


The Crying Boy -- robin, 05:40:56 02/07/16 Sun [1]

The Crying Boy is a mass-produced print of a painting by the Italian painter Bruno Amadio, also known as Giovanni Bragolin.[1] It was widely distributed from the 1950s onwards. There are numerous alternative versions, all portraits of tearful young boys or girls.[1] In addition to being widely known, certain urban legends attribute a "curse" to the painting.


On September 4, 1985, the British tabloid newspaper The Sun reported that a firefighter from Princes Road, Chelmsford, Essex, was claiming that undamaged copies of the painting were frequently found amidst the ruins of burned houses.[1] He stated that no firefighter would allow a copy of the painting into his own house.[citation needed] Over the next few months, The Sun and other tabloids ran several articles on house fires suffered by people who had owned the painting.[citation needed]

By the end of November, belief in the painting's curse was widespread enough that The Sun was organising mass bonfires of the paintings, sent in by readers.[2]

Karl Pilkington has made reference to these events on The Ricky Gervais Show. Ricky Gervais dismissed the curse as "bollocks".

Steve Punt, a British writer and comedian, investigated the curse of the crying boy in a BBC Radio Four production called Punt PI. Although the programme is comic in nature, Punt researched the history of the Crying Boy painting.[3] The conclusion reached by the programme, following testing at the Building Research Establishment, was that the prints were treated with a fire-retardant varnish, and that the string holding the painting to the wall would be the first to perish in a fire, leaving the painting lying face down on the floor and thus protected, although no explanation was given as to why no other paintings were turning up unscathed. The picture was also mentioned in an episode about curses in the TV series Weird or What? in 2012.


Pyramus and Thisbe Club -- robin, 05:38:41 02/07/16 Sun [1]

The Pyramus and Thisbe Club is a UK-based non-profit organization. It was founded in 1974 by a group of party wall surveyors. It helped draft the Party Wall etc. Act 1996 and continues to educate its members and the public about party wall issues.

The Pyramus and Thisbe Club was founded in 1974 at the instigation of the late John Anstey, following widespread misreporting of the case of Gyle-Thompson v Wall Street (1973). The Club was formed to exchange news and opinions about interesting party wall cases. Its first Chairman was Alan Gillett. The original membership of 46 active party wall surveyors agreed to meet quarterly and these early meetings took place at the Little Ship Club in the City of London. Membership grew but was then limited to 100 and the Club moved its meetings to The Cafe Royal in Regent Street.

The club takes its name from Ovid's Pyramus and Thisbe, lovers kept apart by their rival parents, but who whispered through a chink in a wall. The story has been retold by many including Shakespeare in A Midsummer Night's Dream. The Club's motto, a quotation from the play, is "The wall is down that parted their fathers." The Club's quarterly newsletter is called Whispers.

Until 1997, the Club's activities were confined to inner London, where the London Building Acts (Amendment) Act 1939 applied only to party walls in the former LCC area. In 1993 a Club working party began drafting a Parliamentary Private Bill for England and Wales. The Bill which was sponsored through Parliament by The Earl of Lytton (now a past chairman of the Club) received Government support and became the Party Wall etc. Act 1996. It came into force in July 1997.

The Club's pivotal role in framing the Act was acknowledged by The Earl of Kinnoull during the debate following the Bill's second reading in the House of Lords, when he said of the Club, "I know that that club of professionals has done tremendous work. I pay particular tribute to its chairman, John Anstey, who, like other colleagues has been active in helping to draft the Bill." The Pyramus and Thisbe Club continues to maintain relationships with Government and Parliament. Members of the Club have formed advisory panels to consider the Subterranean Development Bill and the Property Boundaries (Resolution of Disputes) Bill. The Club has assisted the Government in producing a guide to the Act and Club members have advised overseas governments on party wall and neighbourly matters.

In a 2008 case in Romford County Court, His Honour Judge Platt acknowledged the Club's members when he said, "It is a tribute to the surveyor's profession as a whole and to the members of the Pyramus and Thisbe Club in particular that issues over party walls have generally been resolved by a pragmatic and cooperative approach to the provisions of the Act and consequently appeals to the County Court have been extremely rare."

The Club's membership is drawn from a mixture of surveyors, architects, engineers, other construction professionals and lawyers, all of whom have an interest in party wall matters. Today there are some 1000 members practising throughout England and Wales. The only qualification for membership is a serious professional interest in the subject and a willingness to disseminate information among fellow members about difficult or interesting cases.

The Club is a non profit-making organisation and has acquired the status of a Learned Society.[citation needed] It promotes the highest standards of professional conduct among its members. The Club has published a two volume "Collected Papers" from the first 20 years of its proceedings and "The Party Wall Act Explained" written by the members of the original working party, now in its second, revised edition.


Mem and Zin -- robin, 05:36:50 02/07/16 Sun [1]

Mem and Zin (Kurdish: Mem û Zîn) is a Kurdish classic love story, written down in 1692, and is considered the epic of Kurdish literature. It is the most important work of the Kurdish writer and poet Ahmad Khani (1651–1707). Mem and Zin is based on a true story, passed down from generation to generation through oral tradition. The content is similar to that of Romeo and Juliet. For Kurds, Mem and Zin are symbols of the Kurdish people and Kurdistan, which are separated and cannot come together.[citation needed] The Mem-u Zin Mausoleum in Cizre has become a tourist attraction.


It tells the tragic story of two young people in love. Mem, a young Kurdish boy of the "Alan" clan and heir to the City of the West,[1] falls in love with Zin, of the "Botan" clan, the daughter of the governor of Botan. They meet during the festival of Newroz (the ancient national ceremony of the Kurds), when the people are celebrating. Their union is blocked by Bakr of the Bakran clan, Mem's antagonist throughout the story,[2] who is jealous of the two star-crossed lovers. Mem eventually dies during a complicated conspiracy by Bakr. When Zin receives the news, she collapses, and her immense grief leads to her death while mourning Mem at his grave; she is buried next to him in Cizre. The news of the deaths of Mem and Zin spreads quickly among the people of Jazira Botan. When Bakr's role in the tragedy is revealed, the people are so angry that they threaten him with death, and he takes sanctuary between the two graves. He is eventually captured and slain by the people of Jazira, and buried beneath the feet of Mem and Zin. However, a thorn bush, nourished by the blood of Bakr, grows out of his grave: the roots of malice penetrate deep into the earth between the lovers' graves, thus separating the two even in death.
Symbolic Meaning of the Main Characters

For modern Kurdish nationalists, Mem and Zin symbolize their struggle for a homeland: Mem represents the Kurdish people and Zin the Kurdish country. Both remain separated by unfortunate circumstances, and there can be no unity until they are reunited.

Among the various versions of the story, the work of Ahmad Khani is the best known. In the 1930s the French Orientalist Roger Lescot recorded the Meme Alan narrative with the help of several Kurdish Dengbêj singers from Syria. The tale, which has partly historical roots and probably originated in the 14th century, had been handed down by the Dengbêj. In precise and poetic language, it describes the ill-fated love of Mem and Zin against a background of chivalric traditions and social conventions. This version is the closest to the folk tale.[3] The full version of the legend of Meme Alan is now an integral part of Kurdish literature.
Filming of the epic

In 1992, a film of the same name, Mem and Zin, was made from the book by Ümit Elçi. Since the Kurdish language was prohibited in Turkey until the late 1990s and early 2000s, the Kurdish epic had to be filmed in Turkish.

In 2002, the Kurdistan satellite channel Kurdistan TV produced a dramatised series of Memi Alan.[4] Nasir Hassan, the director of the successful drama, called "Memi Alan" the most substantial and sophisticated artistic work yet done, with a crew of more than 1,000 people and 250 actors.


Pyramus and Thisbe -- robin, 05:34:38 02/07/16 Sun [1]

Pyramus and Thisbē are a pair of ill-fated lovers whose story forms part of Ovid's Metamorphoses. The story has since been retold by many authors.

In the Ovidian version, Pyramus and Thisbe are two lovers in the city of Babylon who occupy connected houses/walls, forbidden by their parents to be wed, because of their parents' rivalry. Through a crack in one of the walls, they whisper their love for each other. They arrange to meet near Ninus' tomb under a mulberry tree and state their feelings for each other. Thisbe arrives first, but upon seeing a lioness with a mouth bloody from a recent kill, she flees, leaving behind her veil. When Pyramus arrives he is horrified at the sight of Thisbe's veil, assuming that a wild beast has killed her. Pyramus kills himself, falling on his sword in proper Babylonian fashion, and in turn splashing blood on the white mulberry leaves. Pyramus' blood stains the white mulberry fruits, turning them dark. Thisbe returns, eager to tell Pyramus what had happened to her, but she finds Pyramus' dead body under the shade of the mulberry tree. Thisbe, after a brief period of mourning, stabs herself with the same sword. In the end, the gods listen to Thisbe's lament, and forever change the colour of the mulberry fruits into the stained colour to honour the forbidden love.

Ovid's is the oldest surviving version of the story, published in 8 AD, but he adapted an existing etiological myth. While in Ovid's telling Pyramus and Thisbe lived in Babylon, and Ctesias had placed the tomb of his imagined king Ninus near that city, the myth probably originated in Cilicia (part of Ninus' Babylonian empire), as Pyramos is the historical Greek name of the local Ceyhan River. The metamorphosis in the primary story involves Pyramus changing into this river and Thisbe into a nearby spring. A 2nd-century mosaic unearthed near Nea Paphos on Cyprus depicts this older version of the myth.

The story of Pyramus and Thisbe appears in Giovanni Boccaccio's On Famous Women as biography number twelve (sometimes thirteen) [2] and in his Decameron, in the fifth story on the seventh day, where a desperate housewife falls in love with her neighbor, and communicates with him through a crack in the wall, attracting his attention by dropping pieces of stone and straw through the crack.

In the 1380s, Geoffrey Chaucer, in his The Legend of Good Women, and John Gower, in his Confessio Amantis, were the first to tell the story in English. Gower altered the story somewhat into a cautionary tale. John Metham's Amoryus and Cleopes (1449) is another early English adaptation.

The tragedy of Romeo and Juliet ultimately sprang from Ovid's story. Here the star-crossed lovers cannot be together because Juliet has been engaged by her parents to another man and the two families hold an ancient grudge. As in Pyramus and Thisbe, the mistaken belief in one lover's death leads to consecutive suicides. The earliest version of Romeo and Juliet was published in 1476 by Masuccio Salernitano, while it mostly obtained its present form when written down in 1524 by Luigi da Porto. Salernitano and Da Porto both are thought to have been inspired by Ovid and Boccaccio's writing.[3] Shakespeare's most famous 1590s adaptation is a dramatization of Arthur Brooke's 1562 poem The Tragical History of Romeus and Juliet, itself a translation of a French translation of Da Porto's novella.

In Shakespeare's A Midsummer Night's Dream (Act V, sc 1), written in the 1590s, a group of "mechanicals" enact the story of "Pyramus and Thisbe". Their production is crude and, for the most part, badly done until the final monologues of Nick Bottom (as Pyramus) and Francis Flute (as Thisbe). The theme of forbidden love is also present in A Midsummer Night's Dream (albeit in a less tragic and dark representation) in that a girl, Hermia, is not able to marry the man she loves, Lysander, because her father Egeus despises him and wishes her to marry Demetrius; meanwhile, Hermia and Lysander are confident that Helena is in love with Demetrius.

Spanish poet Luis de Góngora wrote a Fábula de Píramo y Tisbe in 1618, while French poet Théophile de Viau wrote Les amours tragiques de Pyrame et Thisbée, a tragedy in five acts, in 1621.

In 1718 Giuseppe Antonio Brescianello wrote his only opera, La Tisbe, for the Württemberg court. François Francoeur and François Rebel composed Pirame et Thisbée, a lyric tragedy in five acts and a prologue, with a libretto by Jean-Louis-Ignace de la Serre; it was played at the Académie royale de musique on October 17, 1726. The story was adapted by John Frederick Lampe as a "Mock Opera" in 1745, containing a singing "Wall" which was described as "the most musical partition that was ever heard."[4] In 1768 in Vienna, Johann Adolph Hasse composed a serious opera on the tale, titled Piramo e Tisbe.

Edmond Rostand adapted the tale in Les Romanesques,[citation needed] making the fathers of the lovers conspire to bring their children together by pretending to forbid their love; its musical adaptation, The Fantasticks, became the world's longest-running musical.

In Geoffrey Chaucer's 'The Merchant's Tale' from The Canterbury Tales (1389-1400), the two illicit lovers Damian and May are likened to Pyramus and Thisbe for their forbidden love.

In Fernando de Rojas' book La Celestina or Tragicomedy of Calisto and Melibea (1499), Calisto talks briefly about the unfortunate Pyramus and Thisbe.

In part I of Miguel de Cervantes' novel Don Quixote (1605), when Cardenio relates the story of Luscinda and himself, he refers to "that famous Thisbe".

In part II of Miguel de Cervantes' novel Don Quixote (1615), the poet Don Lorenzo recites a sonnet he has written on the story of Pyramus and Thisbe.

There is a chapter entitled "Pyramus and Thisbe" in Alexandre Dumas' The Count of Monte Cristo (1844-45), alluding to the secret romance between Maximillian Morrel and Valentine de Villefort.

In Edmond Rostand's play Cyrano de Bergerac (1897), Cyrano mocks his "traitorous nose" in "parody of weeping Pyramus" during his "nose monologue".

In Edith Wharton's short story "The House of the Dead Hand", published in the Atlantic Monthly in 1904, the romance between Sybilla and Count Ottoviano is seen as "a new Pyramus and Thisbe".

In Willa Cather's novel O Pioneers! (1913), two of the story's lovers are killed under a mulberry tree.

In Luis Rafael Sánchez's play "Sol 13 Interior" (1961), the elderly married couple is called Píramo and Tisbe, in obvious reference to the doomed couple. In the part of the play called "La Hiel Nuestra de Cada Día" (Our Daily Gall), the aging, impoverished couple is separated by a death brought on by a life of hard knocks.

The Beatles made a parody of A Midsummer Night's Dream's version of Pyramus and Thisbe for the 400th anniversary of Shakespeare's birth. It was transmitted on the one-hour TV special "Around The Beatles" on May 6, 1964.

In Gail Carriger's novel Changeless (2010), Ivy Hisselpenny says that she loves Mr. Tunstell "as Pyramid did Thirsty", a comically inaccurate reference.

In The Simpsons episode "The Daughter Also Rises" (season 23, episode 13, 2012), Grampa Simpson talks to Lisa about Pyramus and Thisbe.

