
Subject: Plum pudding model


Author:
Anonymous

Date Posted: 16:27:42 01/22/16 Fri

The plum pudding model is an obsolete scientific model of the atom proposed by J. J. Thomson in 1904. It was devised shortly after the discovery of the electron but before the discovery of the atomic nucleus.

In this model, the atom is composed of electrons (which Thomson still called "corpuscles", though G. J. Stoney had proposed in 1894 that atoms of electricity be called "electrons") surrounded by a soup of positive charge to balance the electrons' negative charges, like negatively charged "plums" surrounded by positively charged "pudding". The electrons (as we know them today) were thought to be positioned throughout the atom, but many structures were possible for positioning multiple electrons, particularly rotating rings of electrons (see below). Instead of a soup, the atom was also sometimes said to have had a "cloud" of positive charge. With this model, Thomson abandoned his earlier "nebular atom" hypothesis, in which the atom was composed of immaterial vortices. Now, at least part of the atom was to be composed of Thomson's particulate negative "corpuscles", although the rest of the positively charged part of the atom remained somewhat nebulous and ill-defined.

The 1904 Thomson model was disproved by the 1909 gold foil experiment of Hans Geiger and Ernest Marsden. This was interpreted by Ernest Rutherford in 1911 to imply a very small nucleus of the atom containing a very high positive charge (in the case of gold, enough to balance about 100 electrons), thus leading to the Rutherford model of the atom. Although gold has an atomic number of 79, immediately after Rutherford's paper appeared in 1911 Antonius Van den Broek made the intuitive suggestion that atomic number is nuclear charge. The matter required experiment to decide. Henry Moseley's work showed experimentally in 1913 (see Moseley's law) that the effective nuclear charge was very close to the atomic number (Moseley found only one unit difference), and Moseley referenced only the papers of Van den Broek and Rutherford. This work culminated in the solar-system-like (but quantum-limited) Bohr model of the atom in the same year, in which a nucleus containing an atomic number of positive charge is surrounded by an equal number of electrons in orbital shells. Bohr had also inspired Moseley's work.

Thomson's model was compared (though not by Thomson) to a British dessert called plum pudding, hence the name. Thomson's paper was published in the March 1904 edition of the Philosophical Magazine, the leading British science journal of the day. In Thomson's view:

... the atoms of the elements consist of a number of negatively electrified corpuscles enclosed in a sphere of uniform positive electrification, ...

In this model, the electrons were free to rotate within the blob or cloud of positive substance. These orbits were stabilized in the model by the fact that when an electron moved farther from the centre of the positive cloud, it felt a larger net positive inward force, because there was more material of opposite charge inside its orbit (see Gauss's law). In Thomson's model, electrons were free to rotate in rings which were further stabilized by interactions between the electrons, and spectra were to be accounted for by energy differences between different ring orbits. Thomson attempted to make his model account for some of the major spectral lines known for some elements, but was not notably successful at this. Still, Thomson's model (along with a similar Saturnian ring model for atomic electrons put forward in 1904 by Nagaoka, after James Clerk Maxwell's model of Saturn's rings) was an early harbinger of the later and more successful solar-system-like Bohr model of the atom.
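To make the Gauss's-law argument concrete, here is a minimal numerical sketch (an illustration, not part of Thomson's paper): only the positive charge enclosed within the electron's current radius pulls it inward, so the restoring force grows linearly with displacement. The sphere radius, charge, and displacement values below are assumed for illustration.

```python
# Illustrative sketch: restoring force on an electron inside a uniformly charged
# positive sphere of radius R and total charge +Z*e, following Gauss's law.
# All numerical values are assumptions chosen only for illustration.

K = 8.9875e9          # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, C

def inward_force(r, R=1e-10, Z=1):
    """Magnitude of the force pulling an electron at radius r < R back to the centre.

    By Gauss's law, only the positive charge enclosed within radius r acts:
    q_enc = Z*e*(r/R)**3, so F = K*e*q_enc/r**2 = K*Z*e**2*r/R**3,
    i.e. a harmonic restoring force that grows linearly with displacement.
    """
    if r >= R:
        raise ValueError("this sketch only covers the interior of the sphere")
    return K * Z * E_CHARGE**2 * r / R**3

# Doubling the displacement doubles the restoring force (linear in r):
print(inward_force(0.4e-10), inward_force(0.8e-10))
```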

The plum pudding model with a single electron was used in part by the physicist Arthur Erich Haas in 1910 to estimate the numerical value of Planck's constant and the Bohr radius of hydrogen atoms. Haas' work estimated these values to within an order of magnitude and preceded the work of Niels Bohr by three years. Of note, the Bohr model itself only provides reasonably accurate predictions for atomic and ionic systems having a single effective electron.

A particularly useful mathematical problem related to the plum pudding model is that of optimally distributing equal point charges on a unit sphere, known as the Thomson problem. The Thomson problem is a natural consequence of the plum pudding model in the absence of its uniform positive background charge.
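The sketch below illustrates the Thomson problem numerically: it relaxes N equal point charges on a unit sphere by a crude projected gradient descent on their Coulomb energy. The optimizer, step size, and iteration count are illustrative choices, not a prescribed method.

```python
import numpy as np

def thomson_energy(points):
    """Total Coulomb energy (in units of k*q^2) of unit charges at `points`."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return np.sum(1.0 / dist[iu])

def solve_thomson(n, steps=2000, lr=1e-3, seed=0):
    """Crude projected gradient descent: move each charge along the net
    repulsive force from the others, then re-project onto the unit sphere."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)
        force = np.sum(diff / dist[..., None] ** 3, axis=1)  # -gradient of the energy
        p += lr * force
        p /= np.linalg.norm(p, axis=1, keepdims=True)
    return p, thomson_energy(p)

# N = 4 should relax towards a regular tetrahedron (energy close to 3.674).
points, energy = solve_thomson(4)
print(round(energy, 3))
```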

The classical electrostatic treatment of electrons confined to spherical quantum dots is also similar to their treatment in the plum pudding model. In this classical problem, the quantum dot is modeled as a simple dielectric sphere (in place of a uniform, positively charged sphere as in the plum pudding model) in which free, or excess, electrons reside. The electrostatic N-electron configurations are found to be exceptionally close to solutions found in the Thomson problem, with electrons residing at the same radius within the dielectric sphere. Notably, the plotted distribution of geometry-dependent energetics has been shown to bear a remarkable resemblance to the distribution of anticipated electron orbitals in natural atoms as arranged on the periodic table of elements. Solutions of the Thomson problem exhibit this corresponding energy distribution when the energy of each N-electron solution is compared with the energy of its neighbouring (N-1)-electron solution with one charge at the origin. However, when treated within a dielectric sphere model, the features of the distribution are much more pronounced and provide greater fidelity with respect to electron orbital arrangements in real atoms.

Subject: Atomic nucleus


Author:
Anonymous

Date Posted: 16:25:21 01/22/16 Fri

The nucleus is the small, dense region consisting of protons and neutrons at the center of an atom. The atomic nucleus was discovered in 1911 by Ernest Rutherford based on the 1909 Geiger–Marsden gold foil experiment. After the discovery of the neutron in 1932, models for a nucleus composed of protons and neutrons were quickly developed by Dmitri Ivanenko and Werner Heisenberg. Almost all of the mass of an atom is located in the nucleus, with a very small contribution from the electron cloud. Protons and neutrons are bound together to form a nucleus by the nuclear force.

The diameter of the nucleus is in the range of 1.75 fm (1.75×10⁻¹⁵ m) for hydrogen (the diameter of a single proton)[7] to about 15 fm for the heaviest atoms, such as uranium. These dimensions are much smaller than the diameter of the atom itself (nucleus + electron cloud), by a factor of about 23,000 (uranium) to about 145,000 (hydrogen).
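As a rough consistency check on the quoted figures (a back-of-the-envelope illustration, not from the source), multiplying each nuclear diameter by its stated size ratio should recover atomic diameters of a few ångströms:

```python
# Back-of-the-envelope check of the quoted size ratios (illustrative only).
nucleus_diameter_fm = {"hydrogen": 1.75, "uranium": 15.0}   # from the text above
atom_to_nucleus_ratio = {"hydrogen": 145_000, "uranium": 23_000}

for element, d_fm in nucleus_diameter_fm.items():
    atom_diameter_m = d_fm * 1e-15 * atom_to_nucleus_ratio[element]
    # hydrogen: ~2.5e-10 m, uranium: ~3.5e-10 m, i.e. a few angstroms, as expected
    print(element, f"{atom_diameter_m:.2e} m")
```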

The branch of physics concerned with the study and understanding of the atomic nucleus, including its composition and the forces which bind it together, is called nuclear physics.

The nucleus was discovered in 1911, as a result of Ernest Rutherford's efforts to test Thomson's "plum pudding model" of the atom. The electron had already been discovered earlier by J. J. Thomson himself. Knowing that atoms are electrically neutral, Thomson postulated that there must be a positive charge as well. In his plum pudding model, Thomson suggested that an atom consisted of negative electrons randomly scattered within a sphere of positive charge. Ernest Rutherford later devised an experiment, performed by Hans Geiger and Ernest Marsden under Rutherford's direction, that involved the deflection of alpha particles (helium nuclei) directed at a thin sheet of metal foil. He reasoned that if Thomson's model were correct, the positively charged alpha particles would easily pass through the foil with very little deviation in their paths, since the foil should act as electrically neutral if the negative and positive charges were so intimately mixed. To his surprise, many of the particles were deflected at very large angles. Because the mass of an alpha particle is about 8,000 times that of an electron, it became apparent that a very strong force must be present to deflect such massive and fast-moving particles. He realized that the plum pudding model could not be accurate and that the deflections of the alpha particles could only be explained if the positive and negative charges were separated from each other and the mass of the atom were concentrated in a tiny region of positive charge. This justified the idea of a nuclear atom with a dense center of positive charge and mass.

The term nucleus is from the Latin word nucleus, a diminutive of nux ("nut"), meaning the kernel (i.e., the "small nut") inside a watery type of fruit (like a peach). In 1844, Michael Faraday used the term to refer to the "central point of an atom". The modern atomic meaning was proposed by Ernest Rutherford in 1912. The adoption of the term "nucleus" into atomic theory, however, was not immediate. In 1916, for example, Gilbert N. Lewis stated, in his famous article The Atom and the Molecule, that "the atom is composed of the kernel and an outer atom or shell".

Subject: X-ray


Author:
Anonymous

Date Posted: 16:21:36 01/22/16 Fri

X-radiation (composed of X-rays) is a form of electromagnetic radiation. Most X-rays have a wavelength ranging from 0.01 to 10 nanometers, corresponding to frequencies in the range 30 petahertz to 30 exahertz (3×10¹⁶ Hz to 3×10¹⁹ Hz) and energies in the range 100 eV to 100 keV. X-ray wavelengths are shorter than those of UV rays and typically longer than those of gamma rays. In many languages, X-radiation is referred to with terms meaning Röntgen radiation, after Wilhelm Röntgen, who is usually credited as its discoverer, and who had named it X-radiation to signify an unknown type of radiation. Spelling of X-ray(s) in the English language includes the variants x-ray(s), xray(s), and X ray(s).
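The quoted wavelength, frequency, and energy ranges are related by f = c/λ and E = hf. A small sketch of the conversion (constants and sample values are illustrative):

```python
# Sketch: converting an X-ray wavelength to photon energy via E = h*c / wavelength.
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

def photon_energy_ev(wavelength_m):
    return H * C / wavelength_m / EV

# The 0.01-10 nm range quoted above maps to roughly 124 keV down to 124 eV.
for wl_nm in (0.01, 10):
    print(f"{wl_nm} nm -> {photon_energy_ev(wl_nm * 1e-9):.3g} eV")
```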

X-rays with photon energies above 5–10 keV (below 0.2–0.1 nm wavelength) are called hard X-rays, while those with lower energy are called soft X-rays. Due to their penetrating ability, hard X-rays are widely used to image the inside of objects, e.g., in medical radiography and airport security. As a result, the term X-ray is metonymically used to refer to a radiographic image produced using this method, in addition to the method itself. Since the wavelengths of hard X-rays are similar to the size of atoms, they are also useful for determining crystal structures by X-ray crystallography. By contrast, soft X-rays are easily absorbed in air; the attenuation length of 600 eV (~2 nm) X-rays in water is less than 1 micrometer.

There is no consensus for a definition distinguishing between X-rays and gamma rays. One common practice is to distinguish between the two types of radiation based on their source: X-rays are emitted by electrons, while gamma rays are emitted by the atomic nucleus. This definition has several problems: other processes can also generate these high-energy photons, or sometimes the method of generation is not known. One common alternative is to distinguish X- and gamma radiation on the basis of wavelength (or, equivalently, frequency or photon energy), with radiation shorter than some arbitrary wavelength, such as 10⁻¹¹ m (0.1 Å), defined as gamma radiation. This criterion assigns a photon to an unambiguous category, but is only possible if the wavelength is known. (Some measurement techniques do not distinguish between detected wavelengths.) However, these two definitions often coincide, since the electromagnetic radiation emitted by X-ray tubes generally has a longer wavelength and lower photon energy than the radiation emitted by radioactive nuclei. Occasionally, one term or the other is used in specific contexts due to historical precedent, based on measurement (detection) technique, or based on the intended use rather than the wavelength or source. Thus, gamma rays generated for medical and industrial uses, for example radiotherapy, in the range of 6–20 MeV, can in this context also be referred to as X-rays.

X-ray photons carry enough energy to ionize atoms and disrupt molecular bonds. This makes X-radiation a type of ionizing radiation, and therefore harmful to living tissue. A very high radiation dose over a short amount of time causes radiation sickness, while lower doses can give an increased risk of radiation-induced cancer. In medical imaging this increased cancer risk is generally greatly outweighed by the benefits of the examination. The ionizing capability of X-rays can be utilized in cancer treatment to kill malignant cells using radiation therapy. It is also used for material characterization using X-ray spectroscopy.


[Figure: Attenuation length of X-rays in water, showing the oxygen absorption edge at 540 eV, the E⁻³ dependence of photoabsorption, and a leveling off at higher photon energies due to Compton scattering. The attenuation length is about four orders of magnitude longer for hard X-rays (right half of the plot) than for soft X-rays (left half).]
Hard X-rays can traverse relatively thick objects without being much absorbed or scattered. For this reason, X-rays are widely used to image the inside of visually opaque objects. The most often seen applications are in medical radiography and airport security scanners, but similar techniques are also important in industry (e.g. industrial radiography and industrial CT scanning) and research (e.g. small animal CT). The penetration depth varies by several orders of magnitude over the X-ray spectrum. This allows the photon energy to be adjusted for the application so as to give sufficient transmission through the object and at the same time good contrast in the image.
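The energy-dependent penetration described above can be illustrated with the exponential attenuation law I/I0 = exp(-t/L), where L is the attenuation length. The attenuation lengths used below are assumed order-of-magnitude values, not measured data:

```python
import math

def transmission(thickness_m, attenuation_length_m):
    """Fraction of X-ray intensity transmitted through a layer of given thickness,
    using the exponential attenuation law I/I0 = exp(-t / L)."""
    return math.exp(-thickness_m / attenuation_length_m)

# Assumed, order-of-magnitude attenuation lengths in water:
#   soft X-rays (~600 eV): under 1 micrometre (consistent with the text above)
#   hard X-rays (tens of keV): of order centimetres
print(transmission(10e-6, 1e-6))   # soft X-rays through 10 um of water: almost nothing
print(transmission(10e-6, 1e-2))   # hard X-rays through the same layer: nearly all
```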

X-rays have much shorter wavelength than visible light, which makes it possible to probe structures much smaller than what can be seen using a normal microscope. This can be used in X-ray microscopy to acquire high resolution images, but also in X-ray crystallography to determine the positions of atoms in crystals.

X-rays interact with matter in three main ways: photoabsorption, Compton scattering, and Rayleigh scattering. The strength of these interactions depends on the energy of the X-rays and the elemental composition of the material, but not much on chemical properties, since the X-ray photon energy is much higher than chemical binding energies. Photoabsorption or photoelectric absorption is the dominant interaction mechanism in the soft X-ray regime and for the lower hard X-ray energies. At higher energies, Compton scattering dominates.

Subject: Photomask


Author:
Anonymous

Date Posted: 16:19:25 01/22/16 Fri

A photomask is an opaque plate with holes or transparencies that allow light to shine through in a defined pattern. Photomasks are commonly used in photolithography.

Lithographic photomasks are typically transparent fused silica blanks covered with a pattern defined with a chrome metal-absorbing film. Photomasks are used at wavelengths of 365 nm, 248 nm, and 193 nm. Photomasks have also been developed for other forms of radiation such as 157 nm, 13.5 nm (EUV), X-ray, electrons, and ions; but these require entirely new materials for the substrate and the pattern film.

A set of photomasks, each defining a pattern layer in integrated circuit fabrication, is fed into a photolithography stepper or scanner, and individually selected for exposure. In double patterning techniques, a photomask would correspond to a subset of the layer pattern.

In photolithography for the mass production of integrated circuit devices, the more correct term is usually photoreticle or simply reticle. In the case of a photomask, there is a one-to-one correspondence between the mask pattern and the wafer pattern. This was the standard for the 1:1 mask aligners that were succeeded by steppers and scanners with reduction optics. As used in steppers and scanners, the reticle commonly contains only one layer of the chip. (However, some photolithography fabrications utilize reticles with more than one layer patterned onto the same mask). The pattern is projected and shrunk by four or five times onto the wafer surface. To achieve complete wafer coverage, the wafer is repeatedly "stepped" from position to position under the optical column until full exposure is achieved.

Features 150 nm or below in size generally require phase-shifting to enhance the image quality to acceptable values. This can be achieved in many ways. The two most common methods are to use an attenuated phase-shifting background film on the mask to increase the contrast of small intensity peaks, or to etch the exposed quartz so that the edge between the etched and unetched areas can be used to image nearly zero intensity. In the second case, unwanted edges would need to be trimmed out with another exposure. The former method is attenuated phase-shifting, and is often considered a weak enhancement, requiring special illumination for the most enhancement, while the latter method is known as alternating-aperture phase-shifting, and is the most popular strong enhancement technique.

As leading-edge semiconductor features shrink, photomask features that are 4× larger must inevitably shrink as well. This could pose challenges since the absorber film will need to become thinner, and hence less opaque. A recent study by IMEC has found that thinner absorbers degrade image contrast and therefore contribute to line-edge roughness, using state-of-the-art photolithography tools. One possibility is to eliminate absorbers altogether and use "chromeless" masks, relying solely on phase-shifting for imaging.

The emergence of immersion lithography has a strong impact on photomask requirements. The commonly used attenuated phase-shifting mask is more sensitive to the higher incidence angles applied in "hyper-NA" lithography, due to the longer optical path through the patterned film.

Leading-edge photomasks carry (pre-corrected) images of the final chip patterns, magnified by 4 times. This magnification factor has been a key benefit in reducing pattern sensitivity to imaging errors. However, as features continue to shrink, two trends come into play: the first is that the mask error factor begins to exceed one, i.e., the dimension error on the wafer may be more than 1/4 the dimension error on the mask,[6] and the second is that the mask feature is becoming smaller, and the dimension tolerance is approaching a few nanometers. For example, a 25 nm wafer pattern should correspond to a 100 nm mask pattern, but the wafer tolerance could be 1.25 nm (5% spec), which translates into 5 nm on the photomask. The variation of electron beam scattering in directly writing the photomask pattern can easily exceed this.
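A short sketch of the tolerance arithmetic in this paragraph (illustrative only; the mask error factor values below are assumptions): with a 4× reduction, a wafer tolerance maps to a mask budget four times as large, which then shrinks again as the mask error factor exceeds one.

```python
# Sketch of the 4x-reduction tolerance arithmetic described above (illustrative).
MAG = 4                                   # mask-to-wafer reduction factor
wafer_cd_nm = 25                          # target wafer feature size
mask_cd_nm = wafer_cd_nm * MAG            # 100 nm feature on the mask
wafer_tol_nm = 0.05 * wafer_cd_nm         # 5% spec -> 1.25 nm on the wafer

def mask_tolerance(wafer_tol_nm, mag=MAG, mask_error_factor=1.0):
    """Allowed mask dimension error for a given wafer tolerance.

    A mask error dCD_mask maps to dCD_wafer = mask_error_factor * dCD_mask / mag,
    so the mask budget shrinks as the mask error factor grows above one.
    """
    return wafer_tol_nm * mag / mask_error_factor

print(mask_tolerance(wafer_tol_nm))                         # 5.0 nm when the factor is 1
print(mask_tolerance(wafer_tol_nm, mask_error_factor=2.0))  # 2.5 nm when the factor is 2
```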

The term "pellicle" is used to mean "film," "thin film," or "membrane." Beginning in the 1960s, thin film stretched on a metal frame, also known as a "pellicle," was used as a beam splitter for optical instruments. It has been used in a number of instruments to split a beam of light without causing an optical path shift due to its small film thickness. In 1978, Shea et al. at IBM patented a process to use the "pellicle" as a dust cover to protect a photomask or reticle(hence will all be called "photomask" in the rest of this chapter) In the context of this entry, "pellicle" means "thin film dust cover to protect a photomask".

Particle contamination can be a significant problem in semiconductor manufacturing. A photomask is protected from particles by a pellicle – a thin transparent film stretched over a frame that is glued over one side of the photomask. The pellicle is far enough away from the mask patterns so that moderate-to-small sized particles that land on the pellicle will be too far out of focus to print. Although they are designed to keep particles away, pellicles become a part of the imaging system and their optical properties need to be taken into account. Pellicle membranes are typically made of nitrocellulose and are produced for various transmission wavelengths.

Subject: Photolithography


Author:
Anonymous

Date Posted: 16:17:12 01/22/16 Fri

Photolithography, also termed optical lithography or UV lithography, is a process used in microfabrication to pattern parts of a thin film or the bulk of a substrate. It uses light to transfer a geometric pattern from a photomask to a light-sensitive chemical "photoresist", or simply "resist," on the substrate. A series of chemical treatments then either engraves the exposure pattern into the material underneath the photoresist, or enables deposition of a new material in the desired pattern upon it. For example, in complex integrated circuits, a modern CMOS wafer will go through the photolithographic cycle up to 50 times.

Photolithography shares some fundamental principles with photography in that the pattern in the etching resist is created by exposing it to light, either directly (without using a mask) or with a projected image using an optical mask. This procedure is comparable to a high precision version of the method used to make printed circuit boards. Subsequent stages in the process have more in common with etching than with lithographic printing. It is used because it can create extremely small patterns (down to a few tens of nanometers in size), it affords exact control over the shape and size of the objects it creates, and because it can create patterns over an entire surface cost-effectively. Its main disadvantages are that it requires a flat substrate to start with, it is not very effective at creating shapes that are not flat, and it can require extremely clean operating conditions.

The root words photo, litho, and graphy all have Greek origins, with the meanings 'light', 'stone' and 'writing' respectively. As suggested by the name compounded from them, photolithography is a printing method (originally based on the use of limestone printing plates) in which light plays an essential role. In the 1820s, Nicephore Niepce invented a photographic process that used Bitumen of Judea, a natural asphalt, as the first photoresist. A thin coating of the bitumen on a sheet of metal, glass or stone became less soluble where it was exposed to light; the unexposed parts could then be rinsed away with a suitable solvent, baring the material beneath, which was then chemically etched in an acid bath to produce a printing plate. The light-sensitivity of bitumen was very poor and very long exposures were required, but despite the later introduction of more sensitive alternatives, its low cost and superb resistance to strong acids prolonged its commercial life into the early 20th century. In 1940, Oskar Süß created a positive photoresist by using diazonaphthoquinone, which worked in the opposite manner: the coating was initially insoluble and was rendered soluble where it was exposed to light.[1] In 1954, Louis Plambeck Jr. developed the Dycryl polymeric letterpress plate, which made the platemaking process faster.

If organic or inorganic contamination is present on the wafer surface, it is usually removed by wet chemical treatment, e.g. the RCA clean procedure based on solutions containing hydrogen peroxide. Other solutions made with trichloroethylene, acetone or methanol can also be used to clean the wafer.

The wafer is initially heated to a temperature sufficient to drive off any moisture that may be present on the wafer surface; 150 °C for ten minutes is sufficient. Wafers that have been in storage must be chemically cleaned to remove contamination. A liquid or gaseous "adhesion promoter", such as bis(trimethylsilyl)amine ("hexamethyldisilazane", HMDS), is applied to promote adhesion of the photoresist to the wafer. The surface layer of silicon dioxide on the wafer reacts with HMDS to form tri-methylated silicon dioxide, a highly water-repellent layer not unlike the layer of wax on a car's paint. This water-repellent layer prevents the aqueous developer from penetrating between the photoresist layer and the wafer's surface, thus preventing so-called lifting of small photoresist structures in the (developing) pattern. To ensure reliable development of the image, the wafer is best covered, placed on a hot plate, and allowed to dry while the temperature is stabilized at 120 °C.

The wafer is covered with photoresist by spin coating. A viscous, liquid solution of photoresist is dispensed onto the wafer, and the wafer is spun rapidly to produce a uniformly thick layer. The spin coating typically runs at 1200 to 4800 rpm for 30 to 60 seconds, and produces a layer between 0.5 and 2.5 micrometres thick. The spin coating process results in a uniform thin layer, usually with uniformity of within 5 to 10 nanometres. This uniformity can be explained by detailed fluid-mechanical modelling, which shows that the resist moves much faster at the top of the layer than at the bottom, where viscous forces bind the resist to the wafer surface. Thus, the top layer of resist is quickly ejected from the wafer's edge while the bottom layer still creeps slowly radially along the wafer. In this way, any 'bump' or 'ridge' of resist is removed, leaving a very flat layer. Final thickness is also determined by the evaporation of liquid solvents from the resist. For very small, dense features (roughly 125 nm or below), lower resist thicknesses (< 0.5 micrometres) are needed to overcome collapse effects at high aspect ratios; typical aspect ratios are < 4:1.
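As a rough illustration of how spin speed sets resist thickness, the sketch below uses the commonly quoted empirical scaling t ∝ 1/√(rpm); both the scaling law and the resist constant are assumptions chosen only so the quoted speed range lands inside the quoted thickness range, not data from the text:

```python
import math

# Assumed empirical spin-coating model (not from the text): thickness ~ k / sqrt(rpm),
# where k depends on the resist viscosity and solvent content. The value of k below
# is hypothetical, picked so 1200-4800 rpm spans roughly 2.0 down to 1.0 micrometres.
K_RESIST = 70.0   # micrometre * sqrt(rpm), hypothetical resist-dependent constant

def resist_thickness_um(rpm):
    return K_RESIST / math.sqrt(rpm)

for rpm in (1200, 4800):
    print(rpm, "rpm ->", round(resist_thickness_um(rpm), 2), "um")
```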

The photoresist-coated wafer is then prebaked to drive off excess photoresist solvent, typically at 90 to 100 °C for 30 to 60 seconds on a hotplate.

Subject: Disk read-and-write head


Author:
Anonymous

Date Posted: 16:13:03 01/22/16 Fri

Disk read/write heads are the small parts of a disk drive that move above the disk platter and transform the platter's magnetic field into electrical current (reading the disk) or, vice versa, transform electrical current into a magnetic field (writing the disk). The heads have gone through a number of changes over the years.

In a hard drive, the heads 'fly' above the disk surface with clearance of as little as 3 nanometres. The "flying height" is constantly decreasing to enable higher areal density. The flying height of the head is controlled by the design of an air-bearing etched onto the disk-facing surface of the slider. The role of the air bearing is to maintain the flying height constant as the head moves over the surface of the disk. If the head hits the disk's surface, a catastrophic head crash can result.

The heads themselves started out similar to the heads in tape recorders—simple devices made out of a tiny C-shaped piece of highly magnetizable material called ferrite wrapped in a fine wire coil. When writing, the coil is energized, a strong magnetic field forms in the gap of the C, and the recording surface adjacent to the gap is magnetized. When reading, the magnetized material rotates past the heads, the ferrite core concentrates the field, and a current is generated in the coil. In the gap the field is very strong and quite narrow. That gap is roughly equal to the thickness of the magnetic media on the recording surface. The gap determines the minimum size of a recorded area on the disk. Ferrite heads are large and write fairly large features. They must also be flown fairly far from the surface, thus requiring stronger fields and larger heads.

Metal in Gap (MIG) heads are ferrite heads with a small piece of metal in the head gap that concentrates the field. This allows smaller features to be read and written. MIG heads were replaced by thin-film heads. Thin-film heads were electronically similar to ferrite heads and used the same physics, but they were manufactured using photolithographic processes and thin films of material that allowed fine features to be created. Thin-film heads were much smaller than MIG heads and therefore allowed smaller recorded features to be used. Thin-film heads allowed 3.5-inch drives to reach 4 GB storage capacities in 1995. The geometry of the head gap was a compromise between what worked best for reading and what worked best for writing.

The next head improvement was to optimize the thin-film head for writing and to create a separate head for reading. The separate read head uses the magnetoresistive (MR) effect, in which the resistance of a material changes in the presence of a magnetic field. These MR heads are able to read very small magnetic features reliably, but cannot be used to create the strong field used for writing. The term AMR (A = anisotropic) is used to distinguish it from the later-introduced improvements in MR technology called GMR (giant magnetoresistance) and TMR (tunneling magnetoresistance). The introduction of the AMR head in 1990 by IBM[2] led to a period of rapid areal density increases of about 100% per year. In 1997, GMR (giant magnetoresistive) heads started to replace AMR heads.

In 2004, the first drives to use tunneling MR (TMR) heads were introduced by Seagate,[2] allowing 400 GB drives with 3 disk platters. Seagate introduced TMR heads featuring integrated microscopic heater coils to control the shape of the transducer region of the head during operation. The heater can be activated prior to the start of a write operation to ensure proximity of the write pole to the disk/medium. This improves the written magnetic transitions by ensuring that the head's write field fully saturates the magnetic disk medium. The same thermal actuation approach can be used to temporarily decrease the separation between the disk medium and the read sensor during the readback process, thus improving signal strength and resolution. By mid-2006, other manufacturers had begun to use similar approaches in their products.

Subject: Hard disk drive


Author:
Anonymous

Date Posted: 16:06:17 01/22/16 Fri

A hard disk drive (HDD), hard disk, hard drive or fixed disk[b] is a data storage device used for storing and retrieving digital information using one or more rigid ("hard") rapidly rotating disks (platters) coated with magnetic material. The platters are paired with magnetic heads arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are a type of non-volatile memory, retaining stored data even when powered off.

Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDD units, though most current units are manufactured by Seagate, Toshiba and Western Digital. As of 2015, HDD production (exabytes per year) and areal density are growing, although unit shipments are declining.

The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB; where 1 gigabyte = 1 billion bytes). Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. Performance is specified by the time required to move the heads to a track or cylinder (average access time) plus the time it takes for the desired sector to move under the head (average latency, which is a function of the physical rotational speed in revolutions per minute), and finally the speed at which the data is transmitted (data rate).
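Two of the figures above can be made concrete with a short sketch (the 7200 rpm spindle speed and 1 TB capacity are illustrative sample values): average rotational latency follows directly from the spindle speed, and a decimal terabyte appears smaller when reported in binary units.

```python
# Sketch of the performance and capacity arithmetic described above (illustrative values).

def average_rotational_latency_ms(rpm):
    """Average latency is half a revolution: 0.5 * (60 / rpm) seconds."""
    return 0.5 * 60.0 / rpm * 1000.0

def decimal_tb_as_binary_gib(capacity_tb):
    """A '1 TB' drive (10**12 bytes) expressed in binary gibibytes (2**30 bytes)."""
    return capacity_tb * 10**12 / 2**30

print(round(average_rotational_latency_ms(7200), 2))  # ~4.17 ms for a 7200 rpm drive
print(round(decimal_tb_as_binary_gib(1), 1))          # ~931.3 GiB reported by many OSes
```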

The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as PATA (Parallel ATA), SATA (Serial ATA), USB or SAS (Serial attached SCSI) cables.

As of 2016, the primary competing technology for secondary storage is flash memory in the form of solid-state drives (SSDs), which have higher data transfer rates, better reliability, and significantly lower latency and access times, but HDDs remain the dominant medium for secondary storage due to advantages in price per bit and per-device recording capacity. However, SSDs are replacing HDDs where speed, power consumption and durability are more important considerations.

Hard disk drives were introduced in 1956 as data storage for an IBM real-time transaction processing computer and were developed for use with general-purpose mainframe and minicomputers. The first IBM drive, the 350 RAMAC, was approximately the size of two refrigerators and stored five million six-bit characters (3.75 megabytes) on a stack of 50 disks.

The IBM 350 RAMAC disk storage unit was superseded by the IBM 1301 disk storage unit, which consisted of 50 platters, each about 1/8-inch thick and 24 inches in diameter. Whereas the IBM 350 used two read/write heads, pneumatically actuated and moving through two dimensions, the 1301 was one of the first disk storage units to use an array of heads, one per platter, moving as a single unit. Cylinder-mode read/write operations were supported, while the heads flew about 250 micro-inches above the platter surface. Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes. Access time was about 200 milliseconds.

In 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could buy additional packs and interchange them as needed, much like reels of magnetic tape. Later models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives.

Some high-performance HDDs were manufactured with one head per track (e.g. IBM 2305) so that no time was lost physically moving the heads to a track. Known as fixed-head or head-per-track disk drives, they were very expensive and are no longer in production.

Subject: Random-access memory


Author:
Anonymous

Date Posted: 16:02:58 01/22/16 Fri

Random-access memory (RAM /ræm/) is a form of computer data storage. A random-access memory device allows data items to be accessed (read or written) in almost the same amount of time irrespective of the physical location of data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older drum memory, the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement.

Today, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory (such as DRAM memory modules), where stored information is lost if power is removed, although many efforts have been made to develop non-volatile RAM chips. Other types of non-volatile memory exist that allow random access for read operations, but either do not allow write operations or have limitations on them. These include most types of ROM and a type of flash memory called NOR-Flash.

Integrated-circuit RAM chips came into the market in the late 1960s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970.

Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order it was written. Drum memory could be expanded at relatively low cost but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later, out of discrete transistors, were used for smaller and faster memories such as registers. Such registers were relatively large and too costly to use for large amounts of data; generally only a few dozen or few hundred bits of such memory could be provided.

The first practical form of random-access memory was the Williams tube, starting in 1947. It stored data as electrically charged spots on the face of a cathode ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller, faster, and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented, in the Manchester Small-Scale Experimental Machine (SSEM) computer, which first successfully ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the SSEM, the SSEM was a testbed to demonstrate the reliability of the memory.

Magnetic-core memory was invented in 1947 and developed up until the mid-1970s. It became a widespread form of random-access memory, relying on an array of magnetized rings. By changing the sense of each ring's magnetization, data could be stored with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible.

The two main forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six-transistor memory cell. This form of RAM is more expensive to produce, but is generally faster and requires less power than DRAM and, in modern computers, is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a DRAM memory cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers.
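To illustrate why DRAM needs periodic refresh, the sketch below models the storage capacitor as a leaky RC element; the capacitance, leakage resistance, and voltage levels are hypothetical values chosen only to show the shape of the decay, not real device parameters.

```python
import math

# Hypothetical DRAM cell parameters (illustration only, not real device data).
C_CELL = 25e-15      # storage capacitance, farads
R_LEAK = 5e12        # effective leakage resistance, ohms
V_WRITE = 1.2        # volts written for a logical '1'
V_SENSE = 0.6        # lowest voltage the sense amplifier still reads as '1'

def cell_voltage(t_seconds):
    """Exponential RC decay of a stored '1' as charge leaks off the capacitor."""
    return V_WRITE * math.exp(-t_seconds / (R_LEAK * C_CELL))

# Time until the stored '1' decays to the sensing threshold; the refresh interval
# must be comfortably shorter than this.
t_limit = -R_LEAK * C_CELL * math.log(V_SENSE / V_WRITE)
print(round(t_limit * 1000, 1), "ms")
```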

Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM (such as EEPROM and flash memory) share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment. These persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, etc. ECC memory (which can be either SRAM or DRAM) includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction code.

In general, the term RAM refers solely to solid-state memory devices (either DRAM or SRAM), and more specifically the main memory in most computers. In optical storage, the term DVD-RAM is somewhat of a misnomer since, unlike CD-RW or DVD-RW, it does not need to be erased before reuse. Nevertheless, a DVD-RAM behaves much like a hard disk drive, if somewhat slower.

Subject: Government agency


Author:
Anonymous

Date Posted: 15:56:23 01/22/16 Fri

A government or state agency, often an appointed commission, is a permanent or semi-permanent organization in the machinery of government that is responsible for the oversight and administration of specific functions, such as an intelligence agency. There is a notable variety of agency types. Although usage differs, a government agency is normally distinct both from a department or ministry and from other types of public body established by government. The functions of an agency are normally executive in character, since different types of organizations (such as commissions) are most often constituted in an advisory role, although this distinction is often blurred in practice.

A government agency may be established by either a national government or a state government within a federal system. The term is not normally used for an organization created by the powers of a local government body. Agencies can be established by legislation or by executive powers. The autonomy, independence and accountability of government agencies also vary widely.

Early examples of organizations that would now be termed a government agency include the British Navy Board, responsible for ships and supplies, which was established[1] in 1546 by King Henry VIII and the British Commissioners of Bankruptcy established[2] in 1570.

From 1933, the New Deal saw rapid growth in US federal agencies, the so-called "alphabet agencies", which were used to deliver new programs mandated by legislation, such as federal emergency relief.

From the 1980s, as part of New Public Management, several countries including Australia and the United Kingdom developed the use of agencies to improve efficiency in public services.

Administrative law in France refers to autorité administrative indépendante (AAI), or Independent Administrative Authorities. They tend to be prominent in the following areas of public policy:

Economic and financial regulation
Information and communication
Defence of citizens' rights
Independent Administrative Authorities in France may not be instructed or ordered to take specific actions by government.

Subject: Financial accounting


Author:
Anonymous

Date Posted: 15:54:24 01/22/16 Fri

Financial accounting (or financial accountancy) is the field of accounting concerned with the summary, analysis and reporting of financial transactions pertaining to a business. This involves the preparation of financial statements available for public consumption. Stockholders, suppliers, banks, employees, government agencies, business owners, and other stakeholders are examples of people interested in receiving such information for decision-making purposes.

Financial accountancy is governed by both local and international accounting standards. Generally Accepted Accounting Principles (GAAP) is the standard framework of guidelines for financial accounting used in any given jurisdiction. It includes the standards, conventions and rules that accountants follow in recording and summarising and in the preparation of financial statements. On the other hand, International Financial Reporting Standards (IFRS) is a set of international accounting standards stating how particular types of transactions and other events should be reported in financial statements. IFRS are issued by the International Accounting Standards Board (IASB). With IFRS becoming more widespread on the international scene, consistency in financial reporting has become more prevalent between global organisations.

While financial accounting is used to prepare accounting information for people outside the organisation or not involved in the day-to-day running of the company, management accounting provides accounting information to help managers make decisions to manage the business.

Financial accounting is the preparation of financial statements that can be consumed by the public and the relevant stakeholders, using either historical cost accounting (HCA) or constant purchasing power accounting (CPPA). When producing financial statements, they must comply with the following:[6]

Relevance: accounting information must be decision-specific; it must be possible for the information to influence decisions. Unless this characteristic is present, the information merely clutters the statements.
Materiality: information is material if its omission or misstatement could influence the economic decisions of users taken on the basis of the financial statements.
Reliability: accounting must be free from significant error or bias. It should be capable of being relied upon by managers. Often, information that is highly relevant is not very reliable, and vice versa.
Understandability: accounting reports should be expressed as clearly as possible and should be understood by those at whom the information is aimed.
Comparability: financial reports from different periods should be comparable with one another in order to derive meaningful conclusions about the trends in an entity’s financial performance and position over time. Comparability can be ensured by applying the same accounting policies over time.

Objectives of Financial Accounting

Systematic recording of transactions: the basic objective of accounting is to systematically record the financial aspects of business transactions (i.e. book-keeping). These recorded transactions are later classified and summarized logically for the preparation of financial statements and for their analysis and interpretation.[8]
Ascertainment of the results of the above recorded transactions: the accountant prepares a profit and loss account to determine the result of business operations for a particular period of time (a minimal numeric sketch follows this list). If expenses exceed revenue, the business is said to be running at a loss. The profit and loss account helps the management and different stakeholders in taking rational decisions. For example, if the business proves not to be profitable, the cause of such a state of affairs can be investigated by the management so that remedial steps can be taken.
Ascertainment of the financial position of the business: a business owner is not only interested in knowing the result of the business in terms of profit or loss for a particular period but also needs to know what the business owes (liabilities) to outsiders and what it owns (assets) on a certain date. To this end, the accountant prepares a statement of the assets and liabilities of the business at a particular point in time, which helps in ascertaining the financial health of the business.
Providing information to the users for rational decision-making: accounting as a ‘language of business’ communicates the financial result of an enterprise to various stakeholders by means of financial statements. Accounting aims to meet the financial information needs of the decision-makers and helps them in rational decision-making.
To know the solvency position: by preparing the balance sheet, management not only reveals what is owned and owed by the enterprise, but also provides information about the concern's ability to meet its liabilities in the short run (liquidity position) and in the long run (solvency position) as and when they fall due.
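A minimal numeric sketch of the two statements described in this list (all figures are invented for illustration): the profit and loss account compares revenues with expenses, and the statement of financial position checks that assets equal liabilities plus owner's equity.

```python
# Minimal sketch with invented figures: a profit-and-loss computation and a
# check of the accounting equation (assets = liabilities + owner's equity).

def profit_or_loss(revenues, expenses):
    """Positive result means a profit, negative result means a loss."""
    return sum(revenues) - sum(expenses)

def balance_sheet_balances(assets, liabilities, equity):
    """True if the statement of financial position balances."""
    return sum(assets) == sum(liabilities) + sum(equity)

print(profit_or_loss(revenues=[120_000, 15_000], expenses=[90_000, 30_000]))  # 15000 -> profit
print(balance_sheet_balances(assets=[80_000, 20_000],
                             liabilities=[40_000],
                             equity=[60_000]))                                # True
```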

Subject: Accounting


Author:
Anonymous

Date Posted: 15:49:02 01/22/16 Fri

Accounting or accountancy is the measurement, processing and communication of financial information about economic entities. The modern field was established by the Italian mathematician Luca Pacioli in 1494. Accounting, which has been called the "language of business", measures the results of an organization's economic activities and conveys this information to a variety of users including investors, creditors, management, and regulators. Practitioners of accounting are known as accountants. The terms accounting and financial reporting are often used as synonyms.

Accounting can be divided into several fields including financial accounting, management accounting, auditing, and tax accounting. Accounting information systems are designed to support accounting functions and related activities. Financial accounting focuses on the reporting of an organization's financial information, including the preparation of financial statements, to external users of the information, such as investors, regulators and suppliers; and management accounting focuses on the measurement, analysis and reporting of information for internal use by management. The recording of financial transactions, so that summaries of the financials may be presented in financial reports, is known as bookkeeping, of which double-entry bookkeeping is the most common system.
Accounting is facilitated by accounting organizations such as standard-setters, accounting firms and professional bodies. Financial statements are usually audited by accounting firms, and are prepared in accordance with generally accepted accounting principles (GAAP). GAAP is set by various standard-setting organizations such as the Financial Accounting Standards Board (FASB) in the United States and the Financial Reporting Council in the United Kingdom. As of 2012, "all major economies" have plans to converge towards or adopt the International Financial Reporting Standards (IFRS).

Subject: National Association for the Education of Young Children


Author:
Anonymous

Date Posted: 15:43:30 01/22/16 Fri

The National Association for the Education of Young Children (NAEYC) is a large nonprofit association in the United States representing early childhood education teachers, para-educators, center directors, trainers, college educators, families of young children, policy makers, and advocates. NAEYC is focused on improving the well-being of young children, with particular emphasis on the quality of educational and developmental services for children from birth through age 8.

In the 1920s, concern over the varying quality of emerging nursery school programs in the United States inspired Patty Smith Hill to gather prominent figures in the field to decide how best to ensure the existence of high-quality programs. Meeting in Washington, DC, the group arranged the publication of a manual, called "Minimum Essentials for Nursery Education," that set out standards and methods for acceptable nursery schools. Three years later, the group formally established a professional association of nursery school experts named the National Association for Nursery Education (NANE). NANE changed its name to NAEYC in 1964.

The association has existed for over 80 years. It holds two national early childhood conferences per year, the NAEYC Annual Conference & Expo and the NAEYC National Institute for Early Childhood Professional Development.[5] The NAEYC Annual Conference & Expo is the largest early childhood education conference in the world. The association publishes periodicals, books, professional development materials, and resources, all of which relate to the education of young children. The association is also active in public policy work. The association is well known for accrediting high-quality child care/preschool centers, and more than 10,000 centers, programs and schools have earned NAEYC Accreditation.

NAEYC's mission is to serve and act on behalf of the needs, rights and well-being of all young children with primary focus on the provision of educational and developmental services and resources (NAEYC Bylaws, Article I., Section 1.1).

NAEYC's mission is based on three major goals and guidelines: supporting well-qualified practitioners and improving the conditions in which these professionals work; improving early childhood education by working to deliver a high-quality system of support for early childhood programs; and encouraging excellence in childhood education for all children by building a broad organization of groups and individuals who are committed to promoting excellence in early childhood education for all young children.

Subject: Early childhood education


Author:
Anonymous

Date Posted: 15:40:26 01/22/16 Fri

Early childhood education (ECE) is a branch of education theory which relates to the teaching of young children (formally and informally) up until the age of about eight. Infant/toddler education, a subset of early childhood education, denotes the education of children from birth to age two. In recent years, early childhood education has become a prevalent public policy issue, as municipal, state, and federal lawmakers consider funding for preschool and pre-k.

While the first two years of a child's life are spent in the creation of a child's first "sense of self", most children are able to differentiate between themselves and others by their second year. This differentiation is crucial to the child's ability to determine how they should function in relation to other people.

Early childhood education often focuses on learning through play, based on the research and philosophy of Jean Piaget, which posits that play meets the physical, intellectual, language, emotional and social needs (PILES) of children. Children's natural curiosity and imagination evoke learning when unfettered. Thus, children learn more efficiently and gain more knowledge through activities such as dramatic play, art, and social games.

Tassoni suggests that "some play opportunities will develop specific individual areas of development, but many will develop several areas." Thus, it is important that practitioners promote children's development through play by using various types of play on a daily basis. Key guidelines for creating a play-based learning environment include providing a safe space, correct supervision, and culturally aware, trained teachers who are knowledgeable about the Early Years Foundation.

The Developmental Interaction Approach is based on the theories of Jean Piaget, Erik Erikson, John Dewey and Lucy Sprague Mitchell. The approach focuses on learning through discovery. Jean Jacques Rousseau recommended that teachers should exploit individual children's interests in order to make sure each child obtains the information most essential to his personal and individual development. The five developmental domains of childhood development include:

Physical: the way in which a child develops biological and physical functions, including eyesight and motor skills
Social: the way in which a child interacts with others.[15] Children develop an understanding of their responsibilities and rights as members of families and communities, as well as an ability to relate to and work with others.
Emotional: the way in which a child creates emotional connections and develops self-confidence. Emotional connections develop when children relate to other people and share feelings.
Language: the way in which a child communicates, including how they present their feelings and emotions. At 3 months, children employ different cries for different needs. At 6 months they can recognize and imitate the basic sounds of spoken language. In the first 3 years, children need to be exposed to communication with others in order to pick up language. "Normal" language development is measured by the rate of vocabulary acquisition.
Cognitive skills: the way in which a child organizes information. Cognitive skills include problem solving, creativity, imagination and memory. They embody the way in which children make sense of the world. Piaget believed that children exhibit prominent differences in their thought patterns as they move through the stages of cognitive development: the sensorimotor period, the pre-operational period, and the operational period.

Subject: Education


Author:
Anonymous

Date Posted: 15:36:48 01/22/16 Fri

Education is the process of facilitating learning, or the acquisition of knowledge, skills, values, beliefs, and habits. Educational methods include storytelling, discussion, teaching, training, and directed research. Education frequently takes place under the guidance of educators, but learners may also educate themselves.[1] Education can take place in formal or informal settings and any experience that has a formative effect on the way one thinks, feels, or acts may be considered educational. The methodology of teaching is called pedagogy.

Education is commonly and formally divided into stages such as preschool or kindergarten, primary school, secondary school and then college, university or apprenticeship.

A right to education has been recognized by some governments, including at the global level: Article 13 of the United Nations' 1966 International Covenant on Economic, Social and Cultural Rights recognizes a universal right to education.[2] In most regions education is compulsory up to a certain age.

Education began in prehistory, as adults trained the young in the knowledge and skills deemed necessary in their society. In pre-literate societies this was achieved orally and through imitation. Story-telling passed knowledge, values, and skills from one generation to the next. As cultures began to extend their knowledge beyond skills that could be readily learned through imitation, formal education developed. Schools existed in Egypt at the time of the Middle Kingdom.

Plato founded the Academy in Athens, the first institution of higher learning in Europe. The city of Alexandria in Egypt, established in 330 BCE, became the successor to Athens as the intellectual cradle of Ancient Greece. There, the great Library of Alexandria was built in the 3rd century BCE. European civilizations suffered a collapse of literacy and organization following the fall of Rome in AD 476.

In China, Confucius (551-479 BCE), of the State of Lu, was the country's most influential ancient philosopher, whose educational outlook continues to influence the societies of China and neighbors like Korea, Japan and Vietnam. Confucius gathered disciples and searched in vain for a ruler who would adopt his ideals for good governance, but his Analects were written down by followers and have continued to influence education in East Asia into the modern era.

Formal education occurs in a structured environment whose explicit purpose is teaching students. Usually, formal education takes place in a school environment with classrooms of multiple students learning together with a trained, certified teacher of the subject. Most school systems are designed around a set of values or ideals that govern all educational choices in that system. Such choices include curriculum, physical classroom design, student-teacher interactions, methods of assessment, class size, educational activities, and more.

Preschools provide education from ages approximately three to seven, depending on the country, when children enter primary education. These are also known as nursery schools and as kindergarten, except in the US, where kindergarten is a term used for primary education.[citation needed] Kindergarten "provide[s] a child-centered, preschool curriculum for three- to seven-year-old children that aim[s] at unfolding the child's physical, intellectual, and moral nature with balanced emphasis on each of them."

Primary (or elementary) education consists of the first five to seven years of formal, structured education. In general, primary education consists of six to eight years of schooling starting at the age of five or six, although this varies between, and sometimes within, countries. Globally, around 89% of children aged six to twelve are enrolled in primary education, and this proportion is rising.[14] Under the Education For All programs driven by UNESCO, most countries have committed to achieving universal enrollment in primary education by 2015, and in many countries, it is compulsory. The division between primary and secondary education is somewhat arbitrary, but it generally occurs at about eleven or twelve years of age. Some education systems have separate middle schools, with the transition to the final stage of secondary education taking place at around the age of fourteen. Schools that provide primary education are mostly referred to as primary schools or elementary schools. Primary schools are often subdivided into infant schools and junior schools.

[ Post a Reply to This Message ]
Subject: Information


Author:
Anonymous
[ Edit | View ]

Date Posted: 15:32:40 01/22/16 Fri

Information (shortened as info) is that which informs. In other words, it is the answer to a question of some kind. It is also that from which knowledge and data can be derived, as data represents values attributed to parameters, and knowledge signifies understanding of real things or abstract concepts.[1] As it regards data, the information's existence is not necessarily coupled to an observer (it exists beyond an event horizon, for example), while in the case of knowledge, the information requires a cognitive observer.

At its most fundamental, information is any propagation of cause and effect within a system. Information is conveyed either as the content of a message or through direct or indirect observation of some thing. That which is perceived can be construed as a message in its own right, and in that sense, information is always conveyed as the content of a message.

Information can be encoded into various forms for transmission and interpretation (for example, information may be encoded into a sequence of signs, or transmitted via a sequence of signals). It can also be encrypted for safe storage and communication.

Information resolves uncertainty. The uncertainty of an event is measured by its probability of occurrence and is inversely proportional to that. The more uncertain an event, the more information is required to resolve uncertainty of that event. The bit is a typical unit of information, but other units such as the nat may be used. Example: information in one "fair" coin flip: log2(2/1) = 1 bit, and in two fair coin flips is log2(4/1) = 2 bits.
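
To make the coin-flip arithmetic above concrete, here is a minimal sketch in Python (illustrative only, not part of the original post) that computes the self-information of an outcome from its probability using the same log2(1/p) idea:

    import math

    def self_information_bits(probability):
        # Self-information, in bits, of an outcome with the given probability.
        return math.log2(1 / probability)

    print(self_information_bits(1 / 2))  # one fair coin flip  -> 1.0 bit
    print(self_information_bits(1 / 4))  # two fair coin flips -> 2.0 bits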

The concept that information is the message has different meanings in different contexts.[2] Thus the concept of information becomes closely related to notions of constraint, communication, control, data, form, education, knowledge, meaning, understanding, mental stimuli, pattern, perception, representation, and entropy.

From the stance of information theory, information is taken as an ordered sequence of symbols from an alphabet, say an input alphabet χ, and an output alphabet ϒ. Information processing consists of an input-output function that maps any input sequence from χ into an output sequence from ϒ. The mapping may be probabilistic or deterministic. It may have memory or be memoryless.
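
As a rough illustration of this input-output view (a sketch only; the alphabets and mappings below are invented for the example, not taken from the text), a deterministic, memoryless processor maps each input symbol independently of the others, while a processor with memory lets earlier input symbols influence later outputs:

    # Hypothetical alphabets, chosen purely for illustration.
    INPUT_ALPHABET = {"a", "b", "c"}
    OUTPUT_ALPHABET = {"0", "1"}

    def memoryless_map(symbol):
        # Deterministic, memoryless: each output depends only on the current input symbol.
        return "1" if symbol == "a" else "0"

    def map_with_memory(sequence):
        # With memory: each output also depends on the previous input symbol.
        outputs, previous = [], None
        for symbol in sequence:
            outputs.append("1" if symbol == previous else "0")
            previous = symbol
        return "".join(outputs)

    print("".join(memoryless_map(s) for s in "abca"))  # -> 1001
    print(map_with_memory("aabb"))                     # -> 0101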

Dusenbery called inputs that directly affect the functioning of an organism or system causal inputs. Other inputs (information) are important only because they are associated with causal inputs and can be used to predict the occurrence of a causal input at a later time (and perhaps another place). Some information is important because of association with other information, but eventually there must be a connection to a causal input. In practice, information is usually carried by weak stimuli that must be detected by specialized sensory systems and amplified by energy inputs before they can be functional to the organism or system. For example, light is often a causal input to plants but provides information to animals. The colored light reflected from a flower is too weak to do much photosynthetic work, but the visual system of the bee detects it and the bee's nervous system uses the information to guide the bee to the flower, where the bee often finds nectar or pollen, which are causal inputs serving a nutritional function.

[ Post a Reply to This Message ]
Subject: Publishing


Author:
Anonymous
[ Edit | View ]

Date Posted: 15:27:25 01/22/16 Fri

Publishing is the process of production and dissemination of literature, music, or information — the activity of making information available to the general public. In some cases, authors may be their own publishers, meaning that the originators and developers of content also provide the media to deliver and display that content. The word publisher can also refer to the individual who leads a publishing company or an imprint, or to a person who owns or heads a magazine.

Traditionally, the term refers to the distribution of printed works such as books (the "book trade") and newspapers. With the advent of digital information systems and the Internet, the scope of publishing has expanded to include electronic resources such as the electronic versions of books and periodicals, as well as micropublishing, websites, blogs, video game publishers, and the like.

Publishing includes the following stages of development: acquisition, copy editing, production, printing (and its electronic equivalents), and marketing and distribution.

Publication is also important as a legal concept:

As the process of giving formal notice to the world of a significant intention, for example, to marry or enter bankruptcy;
As the essential precondition of being able to claim defamation; that is, the alleged libel must have been published, and
For copyright purposes, where there is a difference in the protection of published and unpublished works.
There are two categories of book publisher:

Non-Paid Publishers: The term non-paid publisher refers to those publication houses that do not charge authors at all to publish the book.
Paid Publishers: The author has to meet the total expense of getting the book published, and the author has full right to set up marketing policies. This is also known as vanity publishing.

Book and magazine publishers spend a lot of their time buying or commissioning copy; newspaper publishers, by contrast, usually hire their staff to produce copy, although they may also employ freelance journalists, called stringers. At a small press, it is possible to survive by relying entirely on commissioned material. But as activity increases, the need for works may outstrip the publisher's established circle of writers.

For works written independently of the publisher, writers often first submit a query letter or proposal directly to a literary agent or to a publisher. Submissions sent directly to a publisher are referred to as unsolicited submissions, and the majority come from previously unpublished authors. If the publisher accepts unsolicited manuscripts, then the manuscript is placed in the slush pile, which the publisher's readers sift through to identify manuscripts of sufficient quality or revenue potential to be referred to acquisitions editors for review. The acquisitions editors send their choices to the editorial staff. The time and number of people involved in the process depend on the size of the publishing company, with larger companies having more degrees of assessment between unsolicited submission and publication. Unsolicited submissions have a very low rate of acceptance, with some sources estimating that publishers ultimately choose about three out of every ten thousand unsolicited manuscripts they receive.

Many book publishers around the world maintain a strict "no unsolicited submissions" policy and will only accept submissions via a literary agent. This policy shifts the burden of assessing and developing writers away from the publisher and onto literary agents. At these publishers, unsolicited manuscripts are thrown out or, if the author has provided pre-paid postage, sometimes returned.

Established authors may be represented by a literary agent to market their work to publishers and negotiate contracts. Literary agents take a percentage of author earnings (typically between 10 and 15 percent) as payment for their services.

Some writers follow a non-standard route to publication. For example, this may include bloggers who have attracted large readerships producing a book based on their websites, books based on Internet memes, instant "celebrities" such as Joe the Plumber, retiring sports figures and in general anyone a publisher feels could produce a marketable book. Such books often employ the services of a ghostwriter.

For a submission to reach publication, it must be championed by an editor or publisher who must work to convince other staff of the need to publish a particular title. An editor who discovers or champions a book that subsequently becomes a best-seller may find their reputation enhanced as a result of their success.

[ Post a Reply to This Message ]
Subject: Quality Communications


Author:
Anonymous
[ Edit | View ]

Date Posted: 15:23:36 01/22/16 Fri

Quality Communications is a British publishing company founded by Dez Skinn in 1982. Quality was initially formed to publish the award-winning monthly comics anthology Warrior. The company has been involved with comics in both the UK and the U.S., mainly with reprint material from Warrior and repackaging 2000 AD material for the U.S. market. Quality was also involved in the U.S. completion of Marvelman and V for Vendetta.

Quality's main period as a comics publisher was from 1982–1988. In 1990, the company launched the comics trade magazine Comics International, which Skinn published and edited for the following 16 years. His "Sez Dez" column was a regular feature in issues #100–#200, at which point Skinn sold the magazine in 2006 to Cosmic Publications.

More recently, Quality has published Toy Max, a magazine for toy collectors, and the hardcovers The Art of John Watkiss and Comix: The Underground Revolution. In 2008, Quality Communications took over publishing The Jack Kirby Quarterly.

[ Post a Reply to This Message ]
Subject: Mission statement


Author:
Anonymous
[ Edit | View ]

Date Posted: 15:18:05 01/22/16 Fri

A mission statement is a statement which is used as a way of communicating the purpose of the organization. Although it will usually remain the same for a long period of time, it is not uncommon for organizations to update their mission statement; this generally happens when an organization evolves. Mission statements are normally short and simple statements which outline what the organization's purpose is and are related to the specific sector an organization operates in.

Properly crafted mission statements (1) serve as filters to separate what is important from what is not, (2) clearly state which markets will be served and how, and (3) communicate a sense of intended direction to the entire organization. A mission is different from a vision in that the former is the cause and the latter is the effect; a mission is something to be accomplished whereas a vision is something to be pursued for that accomplishment. A mission statement is also called a company mission, corporate mission, or corporate purpose.

The mission statement should guide the actions of the organization, spell out its overall goal, provide a path, and guide decision-making. It provides "the framework or context within which the company's strategies are formulated." It is like a goal for what the company wants to do for the world.

According to Dr. Christopher Bart, the commercial mission statement consists of three essential components:

Key market: Who is your target client or customer (generalize if needed)?
Contribution: What product or service do you provide to that client?
Distinction: What makes your product or service unique, so that the client would choose you?
A personal mission statement is developed in much the same way that an organizational mission statement is created. A personal mission statement is a brief description of what an individual wants to focus on, wants to accomplish and wants to become. It is a way to focus energy, actions, behaviors and decisions towards the things that are most important to the individual.

The sole purpose of a mission statement is to serve as the company's goal or agenda: it outlines clearly what the goal of the company is. Some generic examples of mission statements would be "To provide the best service possible within the banking sector for our customers." or "To provide the best experience for all of our customers." The reason why businesses make use of mission statements is to make it clear what they look to achieve as an organisation, not only to themselves and their employees but to the customers and other people who are a part of the business, such as shareholders. As a company evolves, so will its mission statement; this is to make sure that the company remains on track and to ensure that the mission statement does not lose its touch and become boring or stale.

One article explains the purpose of a mission statement as follows:

"The mission statement reflects every facet of your business: the range and nature of the products you offer, pricing, quality, service, marketplace position, growth potential, use of technology, and your relationships with your customers, employees, suppliers, competitors and the community."

It is important that a mission statement is not confused with a vision statement. As discussed earlier, the main purpose of a mission statement is to get across the ambitions of an organisation in a short and simple fashion; it is not necessary to go into detail, as is evident in the examples given. The reason why it is important that a mission statement and a vision statement are not confused is that they serve different purposes. Vision statements tend to be more related to strategic planning and lean more towards discussing where a company aims to be in the future.

The definition of a vision statement according to BusinessDictionary is "An aspirational description of what an organisation would like to achieve or accomplish in the mid-term or long-term future. It is intended to serve as a clear guide for choosing current and future courses of action."

It is not hard to see why a lot of people confuse a mission statement and a vision statement, although both statements serve a different purpose for a company.

A mission statement is all about how an organisation will get to where it wants to be, and it makes the purposes and objectives clear, whereas a vision statement outlines where the organisation wants to be in the future. Mission statements are more concerned with the present and tend to answer questions about what the business does or what makes it stand out compared to the competition, whilst vision statements are solely focused on where the organisation sees itself in the future and where it aims to be. Both statements may be adapted later in the organisation's life; however, it is important to keep the core of the statement intact, such as core values, customer needs and vision.

Although it may not seem very important to know the difference between the two types of statements, it is very important to businesses. This is because it is common for businesses to base their strategic plans around clear vision and mission statements. Both statements play a big factor in the strategic planning of a business. A study carried out by Bain & Company showed that companies which had clearly outlined vision and mission statements outperformed other businesses that did not have clear vision and mission statements.

[ Post a Reply to This Message ]
Subject: Performance management


Author:
Anonymous
[ Edit | View ]

Date Posted: 15:10:22 01/22/16 Fri

Performance management (PM) includes activities which ensure that goals are consistently being met in an effective and efficient manner. Performance management can focus on the performance of an organization, a department, employee, or even the processes to build a product or service, as well as many other areas.

Although it is used most often in the workplace, performance management can apply wherever people interact: schools, churches, community meetings, sports teams, health settings, governmental agencies, social events, and even political settings; anywhere in the world that people interact with their environments to produce desired effects. Armstrong and Baron (1998) defined it as a “strategic and integrated approach to increase the effectiveness of companies by improving the performance of the people who work in them and by developing the capabilities of teams and individual contributors.”

It may be possible to get all employees to reconcile personal goals with organizational goals and increase productivity and profitability of an organization using this process.[1] It can be applied by organizations or a single department or section inside an organization, as well as an individual person. The performance process is appropriately named the self-propelled performance process (SPPP).

First, a commitment analysis must be done where a job mission statement is drawn up for each job. The job mission statement is a job definition in terms of purpose, customers, product and scope. The aim with this analysis is to determine the continuous key objectives and performance standards for each job position.

Following the commitment analysis is the work analysis of a particular job in terms of the reporting structure and job description. If a job description is not available, then a systems analysis can be done to draw up a job description. The aim with this analysis is to determine the continuous critical objectives and performance standards for each job.

Werner Erhard, Michael C. Jensen, and their colleagues have developed a new approach to improving performance in organizations. Their model stresses how the constraints imposed by one’s own worldview can impede cognitive abilities that would otherwise be available. Their work delves into the source of performance, which is not accessible by mere linear cause-and-effect analysis. They assert that the level of performance that people achieve correlates with how work situations occur to them and that language (including what is said and unsaid in conversations) plays a major role in how situations occur to the performer. They assert that substantial gains in performance are more likely to be achieved by management understanding how employees perceive the world and then encouraging and implementing changes that make sense to employees' worldview.

Managing employee or system performance and aligning their objectives facilitates the effective delivery of strategic and operational goals. Some proponents argue that there is a clear and immediate correlation between using performance management programs or software and improved business and organizational results.[citation needed] In the public sector, the effects of performance management systems have ranged from positive to negative, suggesting that differences in the characteristics of performance management systems and the contexts into which they are implemented play an important role in the success or failure of performance management.

Benefits of managing performance in this way may include:

Direct financial gain:
Grow sales
Reduce costs in the organization
Stop project overruns
Align the organization directly behind the CEO's goals
Decrease the time it takes to create strategic or operational changes by communicating the changes through a new set of goals

Motivated workforce:
Optimize incentive plans to specific goals for over-achievement, not just business as usual
Improve employee engagement because everyone understands how they are directly contributing to the organization's high-level goals
Create transparency in achievement of goals
High confidence in the bonus payment process
Professional development programs are better aligned directly to achieving business-level goals

Improved management control:
Flexible, responsive to management needs
Displays data relationships
Helps audit / comply with legislative requirements
Simplifies communication of strategic goals and scenario planning
Provides well-documented and communicated process documentation

[ Post a Reply to This Message ]
Subject: SMART criteria


Author:
Anonymous
[ Edit | View ]

Date Posted: 15:06:10 01/22/16 Fri

SMART is a mnemonic acronym, giving criteria to guide in the setting of objectives, for example in project management, employee-performance management and personal development. The letters S and M usually mean specific and measurable. The other letters have meant different things to different authors, as described below. Additional letters have been added by some authors.

SMART criteria are commonly attributed to Peter Drucker's management by objectives concept.

The principal advantage of SMART objectives is that they are easier to understand and it is easier to tell when they have been achieved.

The November 1981 issue of Management Review contained a paper by George T. Doran called "There's a S.M.A.R.T. way to write management's goals and objectives".[2][3] It discussed the importance of objectives and the difficulty of setting them.

Ideally speaking, each corporate, department, and section objective should be:

Specific – target a specific area for improvement.
Measurable – quantify or at least suggest an indicator of progress.
Assignable – specify who will do it.
Realistic – state what results can realistically be achieved, given available resources.
Time-related – specify when the result(s) can be achieved.
Notice that these criteria don’t say that all objectives must be quantified on all levels of management. In certain situations it is not realistic to attempt quantification, particularly in staff middle-management positions. Practising managers and corporations can lose the benefit of a more abstract objective in order to gain quantification. It is the combination of the objective and its action plan that is really important. Therefore, serious management should focus on these twins and not just the objective.

Specific
The criterion stresses the need for a specific goal rather than a more general one. This means the goal is clear and unambiguous, without vagaries and platitudes. To make goals specific, they must tell a team exactly what's expected, why it's important, who's involved, where it's going to happen and which attributes are important.

A specific goal will usually answer the five 'W' questions:

What: What do I want to accomplish?
Why: Specific reasons, purpose or benefits of accomplishing the goal.
Who: Who is involved?
Where: Identify a location.
Which: Identify requirements and constraints.

Measurable
The second criterion stresses the need for concrete criteria for measuring progress toward the attainment of the goal. The thought behind this is that if a goal is not measurable it is not possible to know whether a team is making progress toward successful completion. Measuring progress is supposed to help a team stay on track, reach its target dates and experience the exhilaration of achievement that spurs it on to continued effort required to reach the ultimate goal.

A measurable goal will usually answer questions such as:

How much?
How many?
How will I know when it is accomplished?
Indicators should be quantifiable.

[ Post a Reply to This Message ]
Subject: Goal


Author:
Anonymous
[ Edit | View ]

Date Posted: 14:58:57 01/22/16 Fri

A goal is a desired result that a person or a system envisions, plans and commits to achieve: a personal or organizational desired end-point in some sort of assumed development. Many people endeavor to reach goals within a finite time by setting deadlines.

It is roughly similar to purpose or aim, the anticipated result which guides reaction, or an end, which is an object, either a physical object or an abstract object, that has intrinsic value.

Goal setting may involve establishing specific, measurable, achievable, relevant, and time-bounded (SMART) objectives, but not all researchers agree that these SMART criteria are necessary.

Research on goal setting by Edwin A. Locke and his colleagues suggests that goal setting can serve as an effective tool for making progress when it ensures that group members have a clear awareness of what each person must do to achieve a shared objective. On a personal level, the process of setting goals allows individuals to specify and then work toward their own objectives (such as financial or career-based goals). Goal-setting comprises a major component of personal development and management.

Goals can be long-term, intermediate, or short-term. The primary difference is the time required to achieve them.

Short-term goals expect accomplishment in a short period of time, such as trying to get a bill paid in the next few days. The definition of a short-term goal need not relate to any specific length of time. In other words, one may achieve (or fail to achieve) a short-term goal in a day, week, month, year, etc. The time-frame for a short-term goal relates to its context in the overall time line that it is being applied to. For instance, one could measure a short-term goal for a month-long project in days; whereas one might measure a short-term goal for someone's lifetime in months or in years. Planners usually define short-term goals in relation to long-term goals.

Individuals can set personal goals. A student may set a goal of a high mark in an exam. An athlete might run five miles a day. A traveler might try to reach a destination-city within three hours. Financial goals are a common example, such as saving for retirement or saving for a purchase.

Managing goals can give returns in all areas of personal life. Knowing precisely what one wants to achieve makes clear what to concentrate and improve on, and often subconsciously prioritizes that goal.

Goal setting and planning ("goal work") promotes long-term vision and short-term motivation. It focuses intention, desire, and the acquisition of knowledge, and helps to organize resources.

Efficient goal work includes recognizing and resolving any guilt, inner conflict, or limiting beliefs that might cause one to sabotage one's efforts. By setting clearly defined goals, one can subsequently measure and take pride in the accomplishment of those goals. One can see progress in what might have seemed a long, perhaps difficult, grind.

Achieving complex and difficult goals requires focus, long-term diligence and effort (see Goal pursuit). Success in any field requires forgoing excuses and justifications for poor performance or lack of adequate planning; in short, success requires emotional maturity. The measure of belief that people have in their ability to achieve a personal goal also affects that achievement.

Long-term achievements rely on short-term achievements. Emotional control over the small moments of the single day makes a big difference in the long term.

Goal efficacy refers to how likely an individual is to succeed in achieving their goal. Goal integrity refers to how consistent one's goals are with core aspects of the self. Research has shown that a focus on goal efficacy is associated with the well-being factor of happiness (subjective well-being), while goal integrity is associated with the well-being factor of meaning.

The self-concordance model is a model that looks at the sequence of steps that occur from the commencement of a goal to attaining that goal. It looks at the likelihood and impact of goal achievement based on the type of goal and the meaning of the goal to the individual. Different types of goals impact goal achievement and the sense of subjective well-being brought about by achieving the goal. The model breaks down factors that promote, first, striving to achieve a goal, then achieving a goal, and then the factors that connect goal achievement to changes in subjective well-being.

Goals that are pursued to fulfill intrinsic values or to support an individual's self-concept are called self-concordant goals. Self-concordant goals fulfill basic needs and are aligned with what psychoanalyst Donald Winnicott called an individual's "True Self". Because these goals have personal meaning to an individual and reflect an individual's self-identity, self-concordant goals are more likely to receive sustained effort over time. In contrast, goals that do not reflect an individual's internal drive and are pursued due to external factors (e.g. social pressures) emerge from a non-integrated region of a person and are therefore more likely to be abandoned when obstacles occur.

[ Post a Reply to This Message ]
Subject: Management


Author:
Anonymous
[ Edit | View ]

Date Posted: 14:53:02 01/22/16 Fri

Management in businesses and organizations is the function that coordinates the efforts of people to accomplish goals and objectives by using available resources efficiently and effectively.

Management includes planning, organizing, staffing, leading or directing, and controlling an organization to accomplish the goal or target. Resourcing encompasses the deployment and manipulation of human resources, financial resources, technological resources, and natural resources. Management is also an academic discipline, a social science whose objective is to study social organization.

The English verb "manage" comes from the Italian maneggiare (to handle, especially tools), which derives from the two Latin words manus (hand) and agere (to act).

The French word for housekeeping, ménagerie, derived from ménager ("to keep house"; compare ménage for "household"), also encompasses taking care of domestic animals. The French word mesnagement (or ménagement) influenced the semantic development of the English word management in the 17th and 18th centuries.

Views on the definition and scope of management include:

According to Henri Fayol, "to manage is to forecast and to plan, to organise, to command, to co-ordinate and to control."[3]
Fredmund Malik defines it as "the transformation of resources into utility."
Management is sometimes included as one of the factors of production, along with machines, materials and money
Ghislain Deslandes defines it as “a vulnerable force, under pressure to achieve results and endowed with the triple power of constraint, imitation and imagination, operating on subjective, interpersonal, institutional and environmental levels”.[4]
Peter Drucker (1909–2005) saw the basic task of management as twofold: marketing and innovation. Nevertheless, innovation is also linked to marketing (product innovation is a central strategic marketing issue). Drucker identified marketing as essential for business success, but management and marketing are generally understood[by whom?] as two different branches of business administration knowledge.
Andreas Kaplan specifically defines European Management as a cross-cultural, societal management approach based on interdisciplinary principles.

In profitable organizations, management's primary function is the satisfaction of a range of stakeholders. This typically involves making a profit (for the shareholders), creating valued products at a reasonable cost (for customers), and providing great employment opportunities for employees. Nonprofit management adds the importance of keeping the faith of donors. In most models of management and governance, shareholders vote for the board of directors, and the board then hires senior management. Some organizations have experimented with other methods (such as employee-voting models) of selecting or reviewing managers, but this is rare.

In the public sector of countries constituted as representative democracies, voters elect politicians to public office. Such politicians hire many managers and administrators, and in some countries like the United States political appointees lose their jobs on the election of a new president/governor/mayor.

[ Post a Reply to This Message ]
Subject: Competition


Author:
Anonymous
[ Edit | View ]

Date Posted: 14:42:45 01/22/16 Fri

Competition, in biology and sociology, is a contest between two or more organisms, animals, individuals, groups, etc., for territory, a niche, a location of resources, resources and goods, mates, prestige, recognition, awards, group or social status, or leadership. Competition is the opposite of cooperation.

It arises whenever at least two parties strive for a goal which cannot be shared or which is desired individually but not in sharing and cooperation. Competition occurs naturally between living organisms which co-exist in the same environment.

For example, animals compete over water supplies, food, mates, and other biological resources. Humans usually compete for food and mates, though when these needs are met deep rivalries often arise over the pursuit of wealth, prestige, and fame. Competition is also a major tenet of market economies and business: most companies are in competition with at least one other firm over the same group of customers, and competition inside a company is often stimulated to reach a higher quality of the services or products that the company produces or develops.

Competition can have both beneficial and detrimental effects. Many evolutionary biologists view inter-species and intra-species competition as the driving force of adaptation, and ultimately of evolution. However, some biologists disagree, citing competition as a driving force only on a small scale, and citing the larger scale drivers of evolution to be abiotic factors (termed 'Room to Roam').

Richard Dawkins prefers to think of evolution in terms of competition between single genes, which have the welfare of the organism 'in mind' only insofar as that welfare furthers their own selfish drives for replication (termed the 'selfish gene').

Some social Darwinists claim that competition also serves as a mechanism for determining the best-suited group politically, economically, and ecologically. Positively, competition may serve as a form of recreation or a challenge provided that it is non-hostile. On the negative side, competition can cause injury and loss to the organisms involved, and drain valuable resources and energy. In the human species competition can be expensive on many levels, not only in lives lost to war, physical injuries, and damaged psychological well-being, but also in the health effects from everyday civilian life caused by work stress, long work hours, abusive working relationships, and poor working conditions, which detract from the enjoyment of life, even as such competition results in financial gain for the owners.

[ Post a Reply to This Message ]
Subject: Market research


Author:
Anonymous
[ Edit | View ]

Date Posted: 14:37:23 01/22/16 Fri

Market research is any organized effort to gather information about target markets or customers. It is a very important component of business strategy.

The term is commonly interchanged with marketing research; however, expert practitioners may wish to draw a distinction, in that marketing research is concerned specifically about marketing processes, while market research is concerned specifically with markets.


Market research is a key factor in maintaining an advantage over competitors. Market research provides important information to identify and analyze the market need, market size and competition. Market-research techniques encompass both qualitative techniques, such as focus groups, in-depth interviews, and ethnography, and quantitative techniques, such as customer surveys and analysis of secondary data.

Market research, which includes social and opinion research, is the systematic gathering and interpretation of information about individuals or organizations using statistical and analytical methods and techniques of the applied social sciences to gain insight or support decision making.

Market research began to be conceptualized and put into formal practice during the 1920s, as an offshoot of the advertising boom of the Golden Age of radio in the United States. Advertisers began to realize the significance of demographics revealed by sponsorship of different radio programs.

Market research is a way of getting an overview of consumers' wants, needs and beliefs. It can also involve discovering how they act. The research can be used to determine how a product could be marketed. Peter Drucker believed market research to be the quintessence of marketing.

There are two major types of market research: primary research, which is sub-divided into quantitative and qualitative research, and secondary research.

Factors that can be investigated through market research include:

Market information
Through market information one can learn the prices of different commodities in the market, as well as the supply and demand situation. Market researchers have a wider role than previously recognized by helping their clients to understand social, technical, and even legal aspects of markets.

Market segmentation
Market segmentation is the division of the market or population into subgroups with similar motivations. It is widely used for segmenting on geographic differences, personality differences, demographic differences, technographic differences, use of product differences, psychographic differences and gender differences. For B2B segmentation firmographics is commonly used.

Market trends
Market trends are the upward or downward movement of a market during a period of time. Determining the market size may be more difficult if one is starting with a new innovation. In this case, the figures have to be derived from the number of potential customers, or customer segments. [Ilar 1998]
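
As a rough illustration of deriving a market-size figure from customer segments (the segment names and numbers below are hypothetical, not taken from the text), one can sum, over each segment, the number of potential customers multiplied by their expected spend:

    # Hypothetical segments: (potential customers, expected annual spend per customer in dollars).
    segments = {
        "early adopters": (5_000, 120.0),
        "mainstream": (40_000, 60.0),
    }

    # Estimated market size = sum over segments of customers x spend per customer.
    market_size = sum(customers * spend for customers, spend in segments.values())
    print(f"Estimated annual market size: ${market_size:,.0f}")  # -> $3,000,000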

SWOT analysis
SWOT is a written analysis of the Strengths, Weaknesses, Opportunities and Threats to a business entity. Not only should a SWOT be used in the creation stage of the company, but it can also be used throughout the life of the company. A SWOT may also be written up for the competition to understand how to develop the marketing and product mixes.

Another factor that can be measured is marketing effectiveness. This includes:

Customer analysis
Choice modelling
Competitor analysis
Risk analysis
Product research
Advertising research
Marketing mix modeling
Simulated Test Marketing

[ Post a Reply to This Message ]
Subject: PEST analysis


Author:
Anonymous
[ Edit | View ]

Date Posted: 14:17:39 01/22/16 Fri

PEST analysis ("Political, Economic, Social and Technological") describes a framework of macro-environmental factors used in the environmental scanning component of strategic management. It is a part of the external analysis when conducting a strategic analysis or doing market research, and gives an overview of the different macro-environmental factors that the company has to take into consideration. It is a useful strategic tool for understanding market growth or decline, business position, potential and direction for operations.

The growing importance of environmental or ecological factors in the first decade of the 21st century has given rise to green business and encouraged widespread use of an updated version of the PEST framework. STEER analysis systematically considers Socio-cultural, Technological, Economic, Ecological, and Regulatory factors.

Other variants of the mnemonic include "Legal" to make SLEPT; inserting Environmental factors expands it to PESTEL or PESTLE, which is popular in the United Kingdom.

The basic PEST analysis includes four factors:

Political factors are basically how the government intervenes in the economy. Specifically, political factors include areas such as tax policy, labor law, environmental law, trade restrictions, tariffs, and political stability. Political factors may also include goods and services which the government aims to provide or be provided (merit goods) and those that the government does not want to be provided (demerit goods or merit bads). Furthermore, governments have a high impact on the health, education, and infrastructure of a nation.
Economic factors include economic growth, interest rates, exchange rates, and the inflation rate. These factors greatly affect how businesses operate and make decisions. For example, interest rates affect a firm's cost of capital and therefore to what extent a business grows and expands. Exchange rates can affect the costs of exporting goods and the supply and price of imported goods in an economy.
Social factors include the cultural aspects and health consciousness, population growth rate, age distribution, career attitudes and emphasis on safety. Trends in social factors affect the demand for a company's products and how that company operates. For example, an aging population may imply a smaller and less-willing workforce (thus increasing the cost of labor). Furthermore, companies may change various management strategies to adapt to these social trends (such as recruiting older workers).
Technological factors include technological aspects like R&D activity, automation, technology incentives and the rate of technological change. These can determine barriers to entry, minimum efficient production level and influence the outsourcing decisions. Furthermore, technological shifts would affect costs, quality, and lead to innovation.
Expanding the analysis to PESTLE or PESTEL adds:

Legal factors include discrimination law, consumer law, antitrust law, employment law, and health and safety law. These factors can affect how a company operates, its costs, and the demand for its products.
Environmental factors include ecological and environmental aspects such as weather, climate, and climate change, which may especially affect industries such as tourism, farming, and insurance. Furthermore, growing awareness of the potential impacts of climate change is affecting how companies operate and the products they offer, both creating new markets and diminishing or destroying existing ones.
Other factors for the various offshoots include:

Demographic factors include gender, age, ethnicity, knowledge of languages, disabilities, mobility, home ownership, employment status, religious belief or practice, culture and tradition, living standards and income level.
Regulatory factors include acts of parliament and associated regulations, international and national standards, local government by-laws, and mechanisms to monitor and ensure compliance with these.
More factors discussed in the SPELIT Power Matrix include:

Intercultural factors consider collaboration in a global setting.
Other specialized factors discussed in chapter 10 of the SPELIT Power Matrix include the Ethical, Educational, Physical, Religious, and Security environments. The security environment may include either personal, company, or national security.
Other business-related factors that might be considered in an environmental analysis include Competition, Demographics, Ecological, Geographical, Historical, Organizational, and Temporal (schedule).

The model's factors will vary in importance to a given company based on its industry and the goods it produces. For example, consumer and B2B companies tend to be more affected by the social factors, while a global defense contractor would tend to be more affected by political factors. Additionally, factors that are more likely to change in the future or more relevant to a given company will carry greater importance. For example, a company which has borrowed heavily will need to focus more on the economic factors (especially interest rates).

Furthermore, conglomerate companies that produce a wide range of products (such as Sony, Disney, or BP) may find it more useful to analyze one department of the company at a time with the PESTEL model, thus focusing on the specific factors relevant to that one department. A company may also wish to divide factors into geographical relevance, such as local, national, and global.

The PEST factors, combined with external micro-environmental factors and internal drivers, can be classified as opportunities and threats in a SWOT analysis. A graphical method for PEST analysis called 'PESTLEWeb' has been developed at Henley Business School in the UK. Research has shown that PESTLEWeb diagrams are considered by users to be more logical, rational, and convincing than traditional PEST analysis.

[ Post a Reply to This Message ]