Friday, February 26, 2016

Automatic fire suppression

Automatic fire suppression systems control and extinguish fires without human intervention. Examples of automatic systems include fire sprinkler system, gaseous fire suppression, and condensed aerosol fire suppression.

The first fire extinguisher patent was issued to Alanson Crane of Virginia on Feb. 10, 1863. The first fire sprinkler system was patented by H.W. Pratt in 1872. But the first practical automatic sprinkler system was invented in 1874 by Henry S. Parmalee of New Haven, CT. He installed the system in a piano factory he owned.

Types of automatic systems
Today there are numerous types of Automatic Fire Suppression Systems, as diverse as their many applications. In general, however, they fall into two categories: engineered and pre-engineered systems.

Engineered Fire Suppression Systems are design specific. Engineered systems are usually for larger installations where the system is designed for the particular application. Examples include marine and land vehicle applications, computer clean rooms, public and private buildings, industrial paint lines, dip tanks and electrical switch rooms. Engineered systems use a number of gaseous or solid agents. Many are specifically formulated. Some, such as 3M Novec 1230 Fire Protection Fluid, are stored as a liquid and discharged as a gas.

Pre-Engineered Fire Suppression Systems use pre-designed elements to eliminate the need for engineering work beyond the original product design. Typical industrial solutions use a simple wet or dry chemical agent, such as potassium carbonate or monoammonium phosphate (MAP), to protect spaces such as paint rooms and booths, storage areas and commercial kitchens. A small number of residential designs have also emerged that typically employ water mist with or without a surfactant additive, and target retrofit applications where the risk of fire or fire injury is high but where a conventional fire sprinkler system would be unacceptably expensive. In addition, residential range hood fire suppression systems are becoming more common in shared-use cooking spaces, such as those found in assisted living facilities, hospice homes, and group homes.

Components
By definition, an automatic fire suppression system can operate without human intervention. To do so it must possess a means of detection, actuation and delivery.

In many systems, detection is accomplished by mechanical or electrical means. Mechanical detection uses fusible-link or thermo-bulb detectors. These detectors are designed to separate at a specific temperature and release tension on a release mechanism. Electrical detection uses heat detectors equipped with self-restoring, normally-open contacts which close when a predetermined temperature is reached. Remote and local manual operation is also possible.
Actuation usually involves either a pressurised fluid and a release valve, or in some cases an electric pump.
Delivery is accomplished by means of piping and nozzles. Nozzle design is specific to the agent used and coverage desired.
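To make the detection, actuation and delivery chain concrete, here is a minimal, purely illustrative Python sketch of an electrically detected system: a heat detector whose normally-open contacts close at a set temperature, which in turn actuates a release valve so the agent can reach the piping and nozzles. The 68 C setpoint and the class names are invented for this example; real systems are specified by listings and fire codes, not application code.

# Minimal, purely illustrative sketch of the detection -> actuation -> delivery
# chain. The 68 C setpoint and class names are invented for this example.

class HeatDetector:
    """Electrical detection: normally-open contacts close at a set temperature."""
    def __init__(self, setpoint_c=68.0):
        self.setpoint_c = setpoint_c

    def contacts_closed(self, temperature_c):
        return temperature_c >= self.setpoint_c


class ReleaseValve:
    """Actuation: opening the valve lets the pressurised agent reach the piping."""
    def __init__(self):
        self.open = False

    def actuate(self):
        self.open = True


def run_cycle(detector, valve, temperature_c):
    if detector.contacts_closed(temperature_c):    # detection
        valve.actuate()                            # actuation
        return "agent discharged through nozzles"  # delivery
    return "standby"


if __name__ == "__main__":
    detector = HeatDetector(setpoint_c=68.0)
    valve = ReleaseValve()
    print(run_cycle(detector, valve, 25.0))    # standby
    print(run_cycle(detector, valve, 120.0))   # agent discharged through nozzles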


Extinguishing agents
In the early days, water was the exclusive fire suppression agent. Although still used today, water has limitations. Most notably, its liquid and conductive properties can cause as much property damage as fire itself.

Agent: HFC 227ea (e.g. FM-200)
Primary ingredient: Heptafluoropropane
Applications: Electronics, medical equipment, production equipment, libraries, data centers, medical record rooms, server rooms, oil pumping stations, engine compartments, telecommunications rooms, switch rooms, engine and machinery spaces, pump rooms, control rooms

Agent: FK-5-1-12 (3M Novec 1230 Fire Protection Fluid)
Primary ingredient: Fluorinated ketone
Applications: Electronics, medical equipment, production equipment, libraries, data centers, medical record rooms, server rooms, oil pumping stations, engine compartments, telecommunications rooms, switch rooms, engine and machinery spaces, pump rooms, control rooms

Health and environmental concerns
Despite their effectiveness, chemical fire extinguishing agents are not without disadvantages. In the early 20th century, carbon tetrachloride was extensively used as a dry cleaning solvent, a refrigerant and as a fire extinguishing agent. In time, it was found carbon tetrachloride could lead to severe health effects.
From the mid-1960s, Halon 1301 was the industry standard for protecting high-value assets from the threat of fire. Halon 1301 had many benefits as a fire suppression agent: it was fast-acting, safe for assets and required minimal storage space. Its major drawbacks are that it depletes atmospheric ozone and is potentially harmful to humans.
Since 1987, some 191 nations have signed The Montreal Protocol on Substances That Deplete the Ozone Layer. The Protocol is an international treaty designed to protect the ozone layer by phasing out the production of a number of substances believed to be responsible for ozone depletion. Among these were halogenated hydrocarbons often used in fire suppression. As a result, manufacturers have focused on alternatives to Halon 1301 and Halon 1211 (halogenated hydrocarbons).
A number of countries have also taken steps to mandate the removal of installed Halon systems. Most notably these include Germany and Australia, the first two countries in the world to require this action. In both countries, the removal of installed Halon systems has been completed, except for a very few essential-use applications. The European Union is currently undergoing a similar mandated removal of installed Halon systems.


Modern systems
Since the early 1990s manufacturers have successfully developed safe and effective Halon alternatives. These include DuPont FM-200, American Pacific’s Halotron and 3M Novec 1230 Fire Protection Fluid. Generally, the Halon replacement agents available today fall into two broad categories, in-kind (gaseous extinguishing agents) or not in-kind (alternative technologies). In-kind gaseous agents generally fall into two further categories, halocarbons and inert gases. Not in-kind alternatives include such options as water mist or the use of early warning smoke detection systems.
 


Tuesday, February 16, 2016

El Niño

El Niño /ɛl ˈniːnjoʊ/ (Spanish pronunciation: [el ˈniɲo]) is the warm phase of the El Niño Southern Oscillation (commonly called ENSO) and is associated with a band of warm ocean water that develops in the central and east-central equatorial Pacific (between approximately the International Date Line and 120°W), including off the Pacific coast of South America. El Niño Southern Oscillation refers to the cycle of warm and cold temperatures, as measured by sea surface temperature (SST), of the tropical central and eastern Pacific Ocean. El Niño is accompanied by high air pressure in the western Pacific and low air pressure in the eastern Pacific. The cool phase of ENSO is called "La Niña", with SST in the eastern Pacific below average and air pressures high in the eastern and low in the western Pacific. The ENSO cycle, both El Niño and La Niña, causes global changes of both temperatures and rainfall. Mechanisms that cause the oscillation remain under study.


Definition
El Niño is defined as prolonged warming of Pacific Ocean sea surface temperatures compared with the average value.
The U.S. NOAA definition is a 3-month average warming of at least 0.5 °C (0.9 °F) in a specific area of the east-central tropical Pacific Ocean; other organizations define the term slightly differently. Typically, this anomaly happens at irregular intervals of two to seven years, and lasts nine months to two years. The average period length is five years. When this warming occurs for seven to nine months, it is classified as El Niño "conditions"; when its duration is longer, it is classified as an El Niño "episode".
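As a rough illustration of the NOAA-style definition above, the following Python sketch computes overlapping 3-month running means of a monthly sea surface temperature anomaly series and flags windows that reach the 0.5 °C threshold. The anomaly values are invented sample data and the function names are my own; NOAA's operational Oceanic Niño Index additionally requires the threshold to be met for several consecutive overlapping 3-month seasons.

# Illustrative only: overlapping 3-month means of a monthly SST anomaly series
# (degrees C relative to a long-term average), flagged against the 0.5 C
# threshold described above. The sample anomalies are invented.

def three_month_means(anomalies_c):
    """ONI-style overlapping 3-month running means."""
    return [sum(anomalies_c[i:i + 3]) / 3.0 for i in range(len(anomalies_c) - 2)]

def warm_windows(anomalies_c, threshold_c=0.5):
    """Indices of 3-month windows whose mean anomaly reaches the threshold."""
    return [i for i, m in enumerate(three_month_means(anomalies_c)) if m >= threshold_c]

if __name__ == "__main__":
    sample = [0.2, 0.4, 0.6, 0.8, 0.9, 0.7, 0.4, 0.1]   # invented anomalies, C
    print([round(m, 2) for m in three_month_means(sample)])
    print(warm_windows(sample))   # windows averaging at least +0.5 C
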
The first signs of an El Niño are a weakening of the Walker circulation (the trade winds) and a strengthening of the Hadley circulation, and may include:

  1. Rise in surface pressure over the Indian Ocean, Indonesia, and Australia
  2. Fall in air pressure over Tahiti and the rest of the central and eastern Pacific Ocean
  3. Trade winds in the south Pacific weaken or head east
  4. Warm air rises near Peru, causing rain in the northern Peruvian deserts
El Niño's warm rush of nutrient-poor water, heated by its eastward passage in the Equatorial Current, replaces the cold, nutrient-rich surface water of the Humboldt Current.
A recent study applied network theory to the analysis of El Niño events, presenting evidence that the dynamics of a described "climate network" were very sensitive to such events, with many links in the network failing during them.

Effects of ENSO warm phase (El Niño)

Economic impact
When El Niño conditions last for many months, extensive ocean warming and the reduction in easterly trade winds limit upwelling of cold, nutrient-rich deep water, and the economic impact on local fishing for an international market can be serious.

More generally, El Niño can affect commodity prices and the macroeconomy of different countries. It can constrain the supply of rain-driven agricultural commodities; reduce agricultural output, construction, and services activities; create food-price and generalised inflation; and may trigger social unrest in commodity-dependent poor countries that primarily rely on imported food. A University of Cambridge Working Paper shows that while Australia, Chile, Indonesia, India, Japan, New Zealand and South Africa face a short-lived fall in economic activity in response to an El Niño shock, other countries may actually benefit from an El Niño weather shock (either directly or indirectly through positive spillovers from major trading partners), for instance, Argentina, Canada, Mexico and the United States. Furthermore, most countries experience short-run inflationary pressures following an El Niño shock, while global energy and non-fuel commodity prices increase. The IMF estimates a significant El Niño can boost the GDP of the United States by about 0.5% (due largely to lower heating bills) and reduce the GDP of Indonesia by about 1.0%.

Health and social impacts
Extreme weather conditions related to the El Niño cycle correlate with changes in the incidence of epidemic diseases.
For example, the El Niño cycle is associated with increased risks of some of the diseases transmitted by mosquitoes, such as malaria, dengue, and Rift Valley fever. Cycles of malaria in India, Venezuela, Brazil, and Colombia have now been linked to El Niño. Outbreaks of another mosquito-transmitted disease, Australian encephalitis (Murray Valley encephalitis—MVE), occur in temperate south-east Australia after heavy rainfall and flooding, which are associated with La Niña events. A severe outbreak of Rift Valley fever occurred after extreme rainfall in north-eastern Kenya and southern Somalia during the 1997–98 El Niño.
ENSO conditions have also been related to Kawasaki disease incidence in Japan and the west coast of the United States, via the linkage to tropospheric winds across the north Pacific Ocean.
ENSO may be linked to civil conflicts. Scientists at The Earth Institute of Columbia University, having analyzed data from 1950 to 2004, suggest ENSO may have had a role in 21% of all civil conflicts since 1950, with the risk of annual civil conflict doubling from 3% to 6% in countries affected by ENSO during El Niño years relative to La Niña years.

Recent occurrences
Since 2000, El Niño events have been observed in 2002–03, 2004–05, 2006–07, 2009–10 and 2015–16.
In December 2014, the Japan Meteorological Agency declared the onset of El Niño conditions, as warmer than normal sea surface temperatures were measured over the Pacific, albeit citing the lack of atmospheric conditions related to the event. In March and May 2015, NOAA's Climate Prediction Center (CPC) and the Australian Bureau of Meteorology respectively confirmed the arrival of weak El Niño conditions. In July, El Niño conditions were forecast to intensify into strong conditions by fall and winter of 2015, and the NOAA CPC expected a greater than 90% chance that El Niño would continue through the 2015-16 winter and a more than 80% chance of it lasting into spring 2016. In addition to the warmer than normal waters generated by the El Niño conditions, the Pacific Decadal Oscillation was also creating persistently higher than normal sea surface temperatures in the northeastern Pacific. In August, the NOAA CPC predicted that the 2015 El Niño "could be among the strongest in the historical record dating back to 1950". In mid-November, NOAA reported that the temperature anomaly in the Niño 3.4 region for the three-month average from August to October 2015 was the second warmest on record, with only 1997 warmer.

Relation to climate change
During the last several decades the number of El Niño events increased, although a much longer period of observation is needed to detect robust changes.
The question is whether this is a random fluctuation, a normal instance of variability for the phenomenon, or the result of global warming. A 2014 study reported a robust tendency toward more frequent extreme El Niños, in agreement with a separate recent model projection for the future.
Several studies of historical data suggest the recent El Niño variation is linked to anthropogenic climate change, in accordance with the larger consensus on climate change. For example, even after subtracting the positive influence of decade-to-decade variation (which is shown to be present in the ENSO trend), the amplitude of the ENSO variability in the observed data still increases, by as much as 60% in the last 50 years.
It may be that the observed phenomenon of more frequent and stronger El Niño events occurs only in the initial phase of climate change, and that later (for example, after the lower layers of the ocean warm as well) El Niño will become weaker than it was. It may also be that the stabilizing and destabilizing forces influencing the phenomenon will eventually compensate for each other. More research is needed to provide a better answer to that question. However, a 2014 modelling study indicated that unmitigated climate change would particularly affect the surface waters of the eastern equatorial Pacific and possibly double extreme El Niño occurrences.
 

Thursday, January 7, 2016

Gamma ray

Gamma radiation, also known as gamma rays and denoted by the Greek letter γ, refers to electromagnetic radiation of extremely high frequency and therefore consists of high-energy photons. Gamma rays are ionizing radiation, and are thus biologically hazardous. They are classically produced by the decay of atomic nuclei as they transition from a high energy state to a lower state, a process known as gamma decay, but may also be produced by other processes. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900, while studying radiation emitted from radium. Villard's radiation was named "gamma rays" by Ernest Rutherford in 1903.

Natural sources of gamma rays on Earth include gamma decay from naturally occurring radioisotopes, and secondary radiation from atmospheric interactions with cosmic ray particles. Rare terrestrial natural sources produce gamma rays that are not of a nuclear origin, such as lightning strikes and terrestrial gamma-ray flashes. Additionally, gamma rays are produced by a number of astronomical processes in which very high-energy electrons are produced, which in turn cause secondary gamma rays via bremsstrahlung, inverse Compton scattering, and synchrotron radiation. However, a large fraction of such astronomical gamma rays are screened by Earth's atmosphere and can only be detected by spacecraft. Gamma rays are produced by nuclear fusion in the cores of stars including the Sun (such as the CNO cycle), but are absorbed or inelastically scattered by the stellar material before escaping and are not observable from Earth.
Gamma rays typically have frequencies above 10 exahertz (>10^19 Hz), and therefore have energies above 100 keV and wavelengths less than 10 picometers (10^-11 meter), which is less than the diameter of an atom. However, this is not a strict definition, but rather only a rule-of-thumb description for natural processes. Electromagnetic radiation from radioactive decay of atomic nuclei is referred to as "gamma rays" no matter its energy, so that there is no lower limit to gamma energy derived from radioactive decay. This radiation commonly has energy of a few hundred keV, and almost always less than 10 MeV. In astronomy, gamma rays are defined by their energy, and no production process needs to be specified. The energies of gamma rays from astronomical sources range to over 10 TeV, an energy far too large to result from radioactive decay. A notable example is extremely powerful bursts of high-energy radiation referred to as long duration gamma-ray bursts, of energies higher than can be produced by radioactive decay. These bursts of gamma rays, thought to be due to the collapse of stars called hypernovae, are the most powerful events so far discovered in the cosmos.
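The quoted figures are tied together by the photon relations E = h*nu and lambda = c/nu. The short Python sketch below is my own illustration using standard physical constants; it converts the 10 exahertz rule-of-thumb frequency into an energy and wavelength, and finds the wavelength of a 100 keV photon, which is why the text above describes these numbers as rough markers rather than a strict definition.

# Back-of-envelope illustration of E = h*nu and lambda = c/nu for the
# rule-of-thumb figures quoted above, using standard physical constants.

H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

def photon_energy_kev(frequency_hz):
    return H * frequency_hz / EV / 1e3

def wavelength_pm(frequency_hz):
    return C / frequency_hz * 1e12

if __name__ == "__main__":
    nu = 1e19   # 10 exahertz
    print(round(photon_energy_kev(nu), 1), "keV")    # about 41 keV
    print(round(wavelength_pm(nu), 1), "pm")         # about 30 pm

    nu_100kev = 100e3 * EV / H                       # frequency of a 100 keV photon
    print(round(wavelength_pm(nu_100kev), 1), "pm")  # about 12.4 pm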

General characteristics
The distinction between X-rays and gamma rays has changed in recent decades. Originally, the electromagnetic radiation emitted by X-ray tubes almost invariably had a longer wavelength than the radiation (gamma rays) emitted by radioactive nuclei. Older literature distinguished between X- and gamma radiation on the basis of wavelength, with radiation shorter than some arbitrary wavelength, such as 10^-11 m, defined as gamma rays. However, with artificial sources now able to duplicate any electromagnetic radiation that originates in the nucleus, as well as far higher energies, the wavelengths characteristic of radioactive gamma ray sources and of other types now completely overlap. Thus, gamma rays are now usually distinguished by their origin: X-rays are emitted by definition by electrons outside the nucleus, while gamma rays are emitted by the nucleus. Exceptions to this convention occur in astronomy, where gamma decay is seen in the afterglow of certain supernovas, but radiation from high-energy processes known to involve sources other than radioactive decay is still classed as gamma radiation.

Health effects
Gamma rays cause damage at a cellular level and are penetrating, causing diffuse damage throughout the body. However, they are less ionising than alpha or beta particles, which are less penetrating.
Low levels of gamma rays cause a stochastic health risk, which for radiation dose assessment is defined as the probability of cancer induction and genetic damage. High doses produce deterministic effects, in which acute tissue damage is certain to occur and is assessed by its severity. These effects are compared to the physical quantity absorbed dose, measured by the unit gray (Gy).


Uses
Gamma rays provide information about some of the most energetic phenomena in the universe; however, they are largely absorbed by the Earth's atmosphere. Instruments aboard high-altitude balloons and satellite missions, such as the Fermi Gamma-ray Space Telescope, provide our only view of the universe in gamma rays.
Gamma-induced molecular changes can also be used to alter the properties of semi-precious stones, and are often used to change white topaz into blue topaz.
Non-contact industrial sensors commonly use sources of gamma radiation in the refining, mining, chemical, food, soaps and detergents, and pulp and paper industries, for the measurement of levels, density, and thicknesses. Typically, these use Co-60 or Cs-137 isotopes as the radiation source.
In the US, gamma ray detectors are beginning to be used as part of the Container Security Initiative (CSI). These machines are advertised to be able to scan 30 containers per hour.
Gamma radiation is often used to kill living organisms, in a process called irradiation. Applications of this include the sterilization of medical equipment (as an alternative to autoclaves or chemical means), the removal of decay-causing bacteria from many foods, and the prevention of the sprouting of fruit and vegetables to maintain freshness and flavor. Despite their cancer-causing properties, gamma rays are also used to treat some types of cancer, since the rays also kill cancer cells. In the procedure called gamma-knife surgery, multiple concentrated beams of gamma rays are directed to the growth in order to kill the cancerous cells. The beams are aimed from different angles to concentrate the radiation on the growth while minimizing damage to surrounding tissues.
Gamma rays are also used for diagnostic purposes in nuclear medicine in imaging techniques. A number of different gamma-emitting radioisotopes are used. For example, in a PET scan a radiolabeled sugar called fludeoxyglucose emits positrons that are annihilated by electrons, producing pairs of gamma rays that highlight cancer as the cancer often has a higher metabolic rate than the surrounding tissues. The most common gamma emitter used in medical applications is the nuclear isomer technetium-99m which emits gamma rays in the same energy range as diagnostic X-rays. When this radionuclide tracer is administered to a patient, a gamma camera can be used to form an image of the radioisotope's distribution by detecting the gamma radiation emitted (see also SPECT). Depending on which molecule has been labeled with the tracer, such techniques can be employed to diagnose a wide range of conditions (for example, the spread of cancer to the bones via bone scan).

Body response
When gamma radiation breaks DNA molecules, a cell may be able to repair the damaged genetic material, within limits. However, a study by Rothkamm and Lobrich has shown that this repair process works well after high-dose exposure but is much slower in the case of a low-dose exposure.

Risk assessment
The natural outdoor exposure in Great Britain ranges from 0.1 to 0.5 µSv/h with significant increase around known nuclear and contaminated sites. Natural exposure to gamma rays is about 1 to 2 mSv per year, and the average total amount of radiation received in one year per inhabitant in the USA is 3.6 mSv. There is a small increase in the dose, due to naturally occurring gamma radiation, around small particles of high atomic number materials in the human body caused by the photoelectric effect.
By comparison, the radiation dose from chest radiography (about 0.06 mSv) is a fraction of the annual naturally occurring background radiation dose. A chest CT delivers 5 to 8 mSv. A whole-body PET/CT scan can deliver 14 to 32 mSv depending on the protocol. The dose from fluoroscopy of the stomach is much higher, approximately 50 mSv (about 14 times the average annual background).
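As a quick back-of-envelope check on these figures (my own arithmetic, using only the rates and doses quoted above), the Python sketch below converts a continuous dose rate in µSv/h into an annual dose in mSv and compares the 50 mSv fluoroscopy dose with the 3.6 mSv average annual US dose.

# Back-of-envelope checks on the dose figures above; the rates and doses are
# those quoted in the text, and the arithmetic is plain unit conversion.

HOURS_PER_YEAR = 24 * 365.25

def annual_dose_msv(rate_usv_per_hour):
    """Continuous dose rate in microsieverts/hour -> millisieverts per year."""
    return rate_usv_per_hour * HOURS_PER_YEAR / 1000.0

if __name__ == "__main__":
    # 0.1-0.5 uSv/h of outdoor exposure is roughly 0.9-4.4 mSv/year, bracketing
    # the 1-2 mSv/year natural gamma figure quoted above.
    print(round(annual_dose_msv(0.1), 2), round(annual_dose_msv(0.5), 2))

    # 50 mSv stomach fluoroscopy versus the 3.6 mSv average annual US dose:
    print(round(50 / 3.6, 1))   # about 14 times the annual background
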
An acute full-body equivalent single exposure dose of 1 Sv (1000 mSv) causes slight blood changes, but 2.0–3.5 Sv (2.0–3.5 Gy) causes a very severe syndrome of nausea, hair loss, and hemorrhaging, and will cause death in a sizable number of cases (about 10% to 35% without medical treatment). A dose of 5 Sv (5 Gy) is considered approximately the LD50 (lethal dose for 50% of the exposed population) for an acute exposure to radiation, even with standard medical treatment. A dose higher than 5 Sv (5 Gy) brings an increasing chance of death above 50%. Above 7.5–10 Sv (7.5–10 Gy) to the entire body, even extraordinary treatment, such as bone-marrow transplants, will not prevent the death of the individual exposed (see Radiation poisoning). (Doses much larger than this may, however, be delivered to selected parts of the body in the course of radiation therapy.)
For low-dose exposure, for example among nuclear workers, who receive an average yearly radiation dose of 19 mSv, the risk of dying from cancer (excluding leukemia) increases by 2 percent. For a dose of 100 mSv, the risk increase is 10 percent. By comparison, the risk of dying from cancer was increased by 32 percent for the survivors of the atomic bombing of Hiroshima and Nagasaki.
 
 
 


 