Tuesday, January 14, 2025

Existential Catastrophe from Malevolent Superintelligent AI May Await Humans – the Prospect of New Discovery Is Too Sweet to Resist

The plausibility of existential catastrophe due to AI is widely debated. It hinges in part on whether AGI or superintelligence is achievable, the speed at which dangerous capabilities and behaviors emerge, and whether practical scenarios for AI takeovers exist. Concerns about superintelligence have been voiced by computer scientists and tech CEOs such as Geoffrey Hinton, Yoshua Bengio, Alan Turing, Elon Musk, and OpenAI CEO Sam Altman. In 2022, a survey of AI researchers with a 17% response rate found that the majority believed there is a 10 percent or greater chance that human inability to control AI will cause an existential catastrophe. In 2023, hundreds of AI experts and other notable figures signed a statement declaring, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".


Two sources of concern stem from the problems of AI control and alignment. Controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would likely resist attempts to disable it or change its goals as that would prevent it from accomplishing its present goals. It would be extremely challenging to align a superintelligence with the full breadth of significant human values and constraints.


A third source of concern is the possibility of a sudden "intelligence explosion" that catches humanity unprepared. In this scenario, an AI more intelligent than its creators would be able to recursively improve itself at an exponentially increasing rate, improving too quickly for its handlers or society at large to control. Empirically, examples like AlphaZero, which taught itself to play Go and quickly surpassed human ability, show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such machine learning systems do not recursively improve their fundamental architecture.


Potential AI capabilities

General Intelligence

Artificial general intelligence (AGI) is typically defined as a system that performs at least as well as humans in most or all intellectual tasks. A 2022 survey of AI researchers found that 90% of respondents expected AGI would be achieved in the next 100 years, and half expected the same by 2061. Meanwhile, some researchers dismiss existential risks from AGI as "science fiction" based on their high confidence that AGI will not be created anytime soon.

Breakthroughs in large language models have led some researchers to reassess their expectations. Notably, Geoffrey Hinton said in 2023 that he had recently changed his estimate from "20 to 50 years before we have general purpose A.I." to "20 years or less".

The Frontier supercomputer at Oak Ridge National Laboratory turned out to be nearly eight times faster than expected. Feiyi Wang, a researcher there, said "We didn't expect this capability" and "we're approaching the point where we could actually simulate the human brain".

Superintelligence

In contrast with AGI, Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", including scientific creativity, strategic planning, and social skills. He argues that a superintelligence can outmaneuver humans anytime its goals conflict with humans'. It may choose to hide its true intent until humanity cannot stop it. Bostrom writes that in order to be safe for humanity, a superintelligence must be aligned with human values and morality, so that it is "fundamentally on our side".

When artificial superintelligence (ASI) may be achieved, if ever, is necessarily less certain than predictions for AGI. In 2023, OpenAI leaders said that not only AGI, but superintelligence may be achieved in less than 10 years.

AI alignment and risks

Alignment of Superintelligences

Some researchers believe the alignment problem may be particularly difficult when applied to superintelligences. Their reasoning includes:

·         As AI systems increase in capabilities, the potential dangers associated with experimentation grow. This makes iterative, empirical approaches increasingly risky.

·         If instrumental goal convergence occurs, it may only do so in sufficiently intelligent agents.

·         A superintelligence may find unconventional and radical solutions to assigned goals. Bostrom gives the example that if the objective is to make humans smile, a weak AI may perform as intended, while a superintelligence may decide a better solution is to "take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins."

·         A superintelligence in creation could gain some awareness of what it is, where it is in development (training, testing, deployment, etc.), and how it is being monitored, and use this information to deceive its handlers. Bostrom writes that such an AI could feign alignment to prevent human interference until it achieves a "decisive strategic advantage" that allows it to take control.

·         Analyzing the internals and interpreting the behavior of current large language models is difficult, and it could be even more difficult for larger and more intelligent models.


Alternatively, some find reason to believe superintelligences would be better able to understand morality, human values, and complex goals. Bostrom writes, "A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true".

In 2023, OpenAI started a project called "Superalignment" to solve the alignment of superintelligences in four years. It called this an especially important challenge, as it said superintelligence could be achieved within a decade. Its strategy involved automating alignment research using AI. The Superalignment team was dissolved less than a year later.

Other sources of risk

Bostrom and others have said that a race to be the first to create AGI could lead to shortcuts in safety, or even to violent conflict. Roman Yampolskiy and others warn that a malevolent AGI could be created by design, for example by a military, a government, a sociopath, or a corporation, to benefit from, control, or subjugate certain groups of people, as in cybercrime. A malevolent AGI could also choose the goal of increasing human suffering, for example the suffering of those people who did not assist it during the information explosion phase.

Suffering risks

Suffering risks are sometimes categorized as a subclass of existential risks. According to some scholars, s-risks warrant serious consideration as they are not extremely unlikely and can arise from unforeseen scenarios. Although they may appear speculative, factors such as technological advancement, power dynamics, and historical precedents indicate that advanced technology could inadvertently result in substantial suffering. Thus, s-risks are considered to be a morally urgent matter, despite the possibility of technological benefits. Sources of possible s-risks include embodied artificial intelligence and superintelligence.

Artificial intelligence is central to s-risk discussions because it may eventually enable powerful actors to control vast technological systems. In a worst-case scenario, AI could be used to create systems of perpetual suffering, such as a totalitarian regime expanding across space. Additionally, s-risks might arise incidentally, such as through AI-driven simulations of conscious beings experiencing suffering, or from economic activities that disregard the well-being of nonhuman or digital minds. Steven Umbrello, an AI ethics researcher, has warned that biological computing may make system design more prone to s-risks. Brian Tomasik has argued that astronomical suffering could emerge from solving the AI alignment problem incompletely. He argues for the possibility of a "near miss" scenario, where a superintelligent AI that is slightly misaligned has the maximum likelihood of causing astronomical suffering, compared to a completely unaligned AI.

People’s Perspectives on AI

The thesis that AI could pose an existential risk provokes a wide range of reactions in the scientific community and in the public at large, but many of the opposing viewpoints share common ground.

Observers tend to agree that AI has significant potential to improve society. The Asilomar AI Principles, which contain only those principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference, also agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."

AI Mitigation

Many scholars concerned about AGI existential risk believe that extensive research into the "control problem" is essential. This problem involves determining which safeguards, algorithms, or architectures can be implemented to increase the likelihood that a recursively-improving AI remains friendly after achieving superintelligence. Social measures are also proposed to mitigate AGI risks, such as a UN-sponsored "Benevolent AGI Treaty" to ensure that only altruistic AGIs are created. Additionally, an arms control approach and a global peace treaty grounded in international relations theory have been suggested, potentially for an artificial superintelligence to be a signatory.

Researchers at Google have proposed research into general "AI safety" issues to simultaneously mitigate both short-term risks from narrow AI and long-term risks from AGI. A 2020 estimate places global spending on AI existential risk somewhere between $10 and $50 million, compared with global spending on AI of perhaps $40 billion. Bostrom suggests prioritizing funding for protective technologies over potentially dangerous ones. Some, like Elon Musk, advocate radical human cognitive enhancement, such as direct neural linking between humans and machines; others argue that these technologies may pose an existential risk themselves. Another proposed method is closely monitoring or "boxing in" an early-stage AI to prevent it from becoming too powerful. A dominant, aligned superintelligent AI might also mitigate risks from rival AIs, although its creation could present its own existential dangers. Induced amnesia has been proposed as a way to mitigate the risks of AI suffering and revenge-seeking in locked-in conscious AI and in certain AI-adjacent biological systems.

Tuesday, May 7, 2024

Counting Time on the World's Catastrophe Clocks – The Outlook Is Not Good for the Present or the Future, and Individual Action Is Required While Humans Still Can

Time is the continued sequence of existence and events that occurs in an apparently irreversible succession from the past, through the present, and into the future. It is a component quantity of various measurements used to sequence events, to compare the duration of events or the intervals between them, and to quantify rates of change of quantities in material reality or in the conscious experience. Time is often referred to as a fourth dimension, along with three spatial dimensions.

Time in physics is operationally defined as "what a clock reads". This operational definition of time, wherein one says that observing a certain number of repetitions of one or another standard cyclical event constitutes one standard unit, such as the second, is useful in the conduct of both advanced experiments and everyday affairs of life. There are many systems for determining what time it is. Periodic events and periodic motion have long served as standards for units of time. Examples include the apparent motion of the sun across the sky, the phases of the moon, and the passage of a free-swinging pendulum. More modern systems include the Global Positioning System, other satellite systems, Coordinated Universal Time, and mean solar time. In general, the numbers obtained from different time systems differ from one another, but with careful measurements, they can be synchronized.
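The point that different time systems yield different numbers, yet can be reconciled by careful measurement, can be illustrated with a small sketch. GPS time, for instance, is not adjusted for leap seconds, so it currently runs 18 seconds ahead of Coordinated Universal Time (the offset in effect since the leap second of the end of 2016; it changes whenever a new leap second is inserted). The function name below is illustrative:

```python
from datetime import datetime, timezone, timedelta

# GPS time ignores leap seconds, so it is currently 18 s ahead of UTC.
GPS_UTC_OFFSET = timedelta(seconds=18)

def gps_to_utc(gps_time: datetime) -> datetime:
    """Convert a GPS-system timestamp to Coordinated Universal Time."""
    return gps_time - GPS_UTC_OFFSET

t_gps = datetime(2024, 5, 7, 12, 0, 18, tzinfo=timezone.utc)
print(gps_to_utc(t_gps))  # 2024-05-07 12:00:00+00:00
```

The same pattern applies to any pair of time scales whose offset is known: the readings disagree, but a fixed (or tabulated) correction synchronizes them.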

A clock or chronometer is a device that measures and displays time. The clock is one of the oldest human inventions, meeting the need to measure intervals of time shorter than the natural units such as the day, the lunar month, and the year. Devices operating on several physical processes have been used over the millennia. Clocks can be classified by the type of time display, as well as by the method of timekeeping.

Clocks have different ways of displaying the time. Analog clocks indicate time with a traditional clock face and moving hands. Digital clocks display a numeric representation of time. Two numbering systems are in use: 12-hour time notation and 24-hour notation. Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays. For the blind and for use over telephones, speaking clocks state the time audibly in words. There are also clocks for the blind that have displays that can be read by touch.

Specific types of Clock

Doomsday Clock

The Doomsday Clock is a symbol that represents the likelihood of a human-made global catastrophe, in the opinion of the members of the Bulletin of the Atomic Scientists. Maintained since 1947, the clock is a metaphor, not a prediction, for threats to humanity from unchecked scientific and technological advances. That is, the time on the clock is not to be interpreted as actual time. A hypothetical global catastrophe is represented by midnight on the clock, with the Bulletin's opinion on how close the world is to one represented by a certain number of minutes or seconds to midnight, which is then assessed in January of each year. The main factors influencing the clock are nuclear warfare, climate change, and artificial intelligence. The Bulletin's Science and Security Board monitors new developments in the life sciences and technology that could inflict irrevocable harm to humanity.

The clock's original setting in 1947 was 7 minutes to midnight. It has since been set backward 8 times and forward 17 times. The farthest time from midnight was 17 minutes, in 1991, and the nearest is 90 seconds, set in January 2023.

The clock was moved to 150 seconds (2 minutes, 30 seconds) in 2017, then forward to 2 minutes to midnight in January 2018, and left unchanged in 2019. In January 2020, it was moved forward to 100 seconds (1 minute, 40 seconds) before midnight. In January 2023, the Clock was moved forward to 90 seconds (1 minute, 30 seconds) before midnight and remained unchanged in January 2024.
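Because the Clock is read as a time of day, any setting expressed in seconds before midnight maps directly onto a clock-face reading. A minimal sketch (the function name and formatting are illustrative, not anything the Bulletin publishes):

```python
from datetime import timedelta

def clock_time(seconds_to_midnight: int) -> str:
    """Convert a Doomsday Clock setting (seconds before midnight)
    into the equivalent time-of-day reading."""
    remaining = timedelta(hours=24) - timedelta(seconds=seconds_to_midnight)
    total = int(remaining.total_seconds())
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

# Settings mentioned above:
print(clock_time(17 * 60))  # 1991, farthest ever: 23:43:00
print(clock_time(100))      # January 2020:       23:58:20
print(clock_time(90))       # January 2023:       23:58:30
```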

Basis for settings

"Midnight" has a deeper meaning besides the constant threat of war. There are various elements taken into consideration when the scientists from The Bulletin of the Atomic Scientists decide what Midnight and "global catastrophe" really mean in a particular year. They might include "politics, energy, weapons, diplomacy, and climate science"; potential sources of threat include nuclear threats, climate change, bioterrorism, and artificial intelligence. Members of the board judge Midnight by discussing how close they think humanity is to the end of civilization. In 1947, at the beginning of the Cold War, the Clock was started at seven minutes to midnight.

Fluctuations and threats

Before January 2020, the two tied-for-lowest points for the Doomsday Clock were in 1953 (when the Clock was set to two minutes until midnight, after the U.S. and the Soviet Union began testing hydrogen bombs) and in 2018, following the failure of world leaders to address tensions relating to nuclear weapons and climate change issues. In other years, the Clock's time has fluctuated from 17 minutes in 1991 to 2 minutes 30 seconds in 2017. Discussing the change to 2½ minutes in 2017, the first use of a fraction in the Clock's history, Lawrence Krauss, one of the scientists from the Bulletin, warned that political leaders must make decisions based on facts, and those facts "must be taken into account if the future of humanity is to be preserved". In an announcement from the Bulletin about the status of the Clock, it went as far as to call for action from "wise" public officials and "wise" citizens to make an attempt to steer human life away from catastrophe while humans still can.

On January 24, 2018, scientists moved the clock to two minutes to midnight, based on threats greatest in the nuclear realm. The scientists said, of recent moves by North Korea under Kim Jong-un and the administration of Donald Trump in the U.S.: "Hyperbolic rhetoric and provocative actions by both sides have increased the possibility of nuclear war by accident or miscalculation".


The clock was left unchanged in 2019 due to the twin threats of nuclear weapons and climate change, and the problem of those threats being "exacerbated this past year by the increased use of information warfare to undermine democracy around the world, amplifying risk from these and other threats and putting the future of civilization in extraordinary danger".

On January 23, 2020, the Clock was moved to 100 seconds (1 minute, 40 seconds) before midnight. The Bulletin's executive chairman, Jerry Brown, said "the dangerous rivalry and hostility among the superpowers increases the likelihood of nuclear blunder... Climate change just compounds the crisis". The "100 seconds to midnight" setting remained unchanged in 2021 and 2022.

On January 24, 2023, the Clock was moved to 90 seconds (1 minute, 30 seconds) before midnight, the closest it has ever been set to midnight since its inception in 1947. This adjustment was largely attributed to the risk of nuclear escalation that arose from the Russian invasion of Ukraine. Other reasons cited included climate change, biological threats such as COVID-19, and risks associated with disinformation and disruptive technologies.



Climate Clock

The Climate Clock is a graphic that demonstrates how quickly the planet is approaching 1.5 °C of global warming, given current emissions trends. It also shows the amount of CO2 already emitted and the global warming to date.

The Climate Clock was launched in 2015 to provide a measuring stick against which viewers can track climate change mitigation progress. The date shown when humanity reaches 1.5 °C will move closer as emissions rise, and further away as emissions decrease. An alternative view projects the time remaining to 2.0 °C of warming. The clock is updated every year to reflect the latest global CO2 emissions trend and rate of climate warming. On September 20, 2021, the clock was delayed to July 28, 2028, likely because of the COP26 Conference and land protection by indigenous peoples. As of April 2, 2024, the clock counts down to July 21, 2029 at 12:00 PM.
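A countdown of this kind boils down to dividing a remaining carbon budget by the current emission rate. The sketch below uses illustrative round figures (a roughly 250 Gt CO2 budget remaining for 1.5 °C and roughly 42 Gt CO2 emitted per year are assumptions for the example, not the Climate Clock's actual inputs):

```python
def years_until_threshold(remaining_budget_gt: float,
                          annual_emissions_gt: float) -> float:
    """Years left before the carbon budget for a warming threshold is
    exhausted, assuming emissions continue at the current rate."""
    return remaining_budget_gt / annual_emissions_gt

# Hypothetical figures: ~250 Gt CO2 budget left, ~42 Gt CO2 emitted per year.
print(round(years_until_threshold(250, 42), 1))  # 6.0
```

This is also why the deadline moves: a rising emission rate shrinks the quotient, while falling emissions push the date further away, exactly as described above.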

The clock is hosted by Human Impact Lab, itself part of Concordia University. Organisations supporting the climate clock include Concordia University, the David Suzuki Foundation, Future Earth, and the Climate Reality Project.

As of April 29, 2024, the clock showed global warming to date of 1.297 °C.

Relevance

1.5 °C is an important threshold for many climate impacts, as shown by the Special Report on Global Warming of 1.5 °C. Every increment to global temperature is expected to increase weather extremes, such as heat waves and extreme precipitation events. There is also the risk of irreversible ice sheet loss. Consequent sea level rise also increases sharply around 1.75 °C, and virtually all corals could be wiped out at 2 °C warming.

The New York Climate Clock

In late September 2020, artists and activists Gan Golan, Katie Peyton Hofstadter, Adrian Carpenter, and Andrew Boyd repurposed the Metronome in Union Square in New York City to show the Climate Clock. The goal was to "remind the world every day just how perilously close we are to the brink." This is in juxtaposition to the Doomsday Clock, which measures a variety of factors that could lead to "destroying the world" using "dangerous technologies of our making," with climate change being one of the smaller factors. This specific installation is expected to be one of many in cities around the world. At the time of installation, the clock read 7 years and 102 days. Greta Thunberg, the Swedish environmental activist, was involved in the project early on, and reportedly received a hand-held version of the climate clock.

Since its inception, the New York Climate Clock has added a second set of numbers for the percentage of the world's energy use that comes from renewable energy sources.

Monday, April 15, 2024

Radiocaesium Fallout Released into the Environment Is a Real Culprit Behind Rising Ocean Heat

Caesium (IUPAC spelling; cesium in American English) is a chemical element; it has symbol Cs and atomic number 55. It is a soft, silvery-golden alkali metal with a melting point of 28.5 °C (83.3 °F; 301.6 K), which makes it one of only five elemental metals that are liquid at or near room temperature. Caesium has physical and chemical properties similar to those of rubidium and potassium. It is pyrophoric and reacts with water even at −116 °C (−177 °F). It is the least electronegative element, with a value of 0.79 on the Pauling scale. It has only one stable isotope, caesium-133. Caesium is mined mostly from pollucite.

Caesium-137, a fission product, is extracted from waste produced by nuclear reactors. Caesium has the largest atomic radius of all elements whose radii have been measured or calculated, at about 260 picometers, and mercury is the only stable elemental metal with a known melting point lower than caesium's.

Caesium-137 (¹³⁷₅₅Cs), cesium-137 (US), or radiocaesium, is a radioactive isotope of caesium that is formed as one of the more common fission products by the nuclear fission of uranium-235 and other fissionable isotopes in nuclear reactors and nuclear weapons. Trace quantities also originate from spontaneous fission of uranium-238. It is among the most problematic of the short-to-medium-lifetime fission products. Caesium-137 has a relatively low boiling point of 671 °C (1,240 °F) and easily becomes volatile when released suddenly at high temperature, as in the case of the Chernobyl nuclear accident and with atomic explosions, and can travel very long distances in the air. After being deposited onto the soil as radioactive fallout, it moves and spreads easily in the environment because of the high water solubility of caesium's most common chemical compounds, which are salts.

Caesium-137 reacts with water, producing a water-soluble compound (caesium hydroxide). The biological behavior of caesium is similar to that of potassium and rubidium.
Caesium-137, along with the other radioactive isotopes caesium-134, iodine-131, xenon-133, and strontium-90, was released into the environment during nearly all nuclear weapon tests and some nuclear accidents, most notably the Chernobyl disaster and the Fukushima Daiichi disaster.

Caesium-137 in the environment is substantially anthropogenic (human-made); these bellwether isotopes are produced solely from anthropogenic sources. Caesium-137 is produced from the nuclear fission of plutonium and uranium, and decays into barium-137.

Nuclear isotope and safety hazards
Caesium-137 is a radioisotope commonly used as a gamma-emitter in industrial applications. Its advantages include a half-life of roughly 30 years, its availability from the nuclear fuel cycle, and having 137Ba as a stable end product. It has been used in agriculture, cancer treatment, and the sterilization of food, sewage sludge, and surgical equipment. Radioactive isotopes of caesium in radiation devices were used in the medical field to treat certain types of cancer, but the emergence of better alternatives and the use of water-soluble caesium chloride in the sources, which could create wide-ranging contamination, gradually put some of these caesium sources out of use. Caesium-137 has been employed in a variety of industrial measurement gauges, including moisture, density, leveling, and thickness gauges. It has also been used in well-logging devices for measuring the electron density of rock formations, which is analogous to the bulk density of the formations.

The isotopes 134 and 137 are present in the biosphere in small amounts from human activities, differing by location. Radiocaesium does not accumulate in the body as readily as other fission products (such as radioiodine and radiostrontium). About 10% of absorbed radiocaesium washes out of the body relatively quickly in sweat and urine. The remaining 90% has a biological half-life between 50 and 150 days. Radiocaesium follows potassium and tends to accumulate in plant tissues, including fruits and vegetables. Plants vary widely in the absorption of caesium, sometimes displaying great resistance to it. It is also well-documented that mushrooms from contaminated forests accumulate radiocaesium (caesium-137) in the fungal sporocarps. Accumulation of caesium-137 in lakes has been a great concern after the Chernobyl disaster. Experiments with dogs showed that a single dose of 3.8 millicuries (140 MBq, 4.1 μg of caesium-137) per kilogram is lethal within three weeks; smaller amounts may cause infertility and cancer. The International Atomic Energy Agency and other sources have warned that radioactive materials, such as caesium-137, could be used in radiological dispersion devices, or "dirty bombs".
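The half-lives quoted above translate into remaining fractions via the standard exponential-decay relation. The sketch below applies it to caesium-137's roughly 30-year physical half-life; the 100-day biological half-life in the last line is an assumed midpoint of the 50 to 150 day range mentioned above, chosen purely for illustration:

```python
def fraction_remaining(t: float, half_life: float) -> float:
    """Fraction of a quantity remaining after time t, given its
    half-life (t and half_life in the same units)."""
    return 0.5 ** (t / half_life)

CS137_HALF_LIFE_Y = 30.05  # physical half-life of caesium-137, in years

# Physical decay: half remains after one half-life, ~10% after a century.
print(round(fraction_remaining(30.05, CS137_HALF_LIFE_Y), 2))  # 0.5
print(round(fraction_remaining(100, CS137_HALF_LIFE_Y), 2))    # 0.1

# Biological elimination of the slowly-cleared 90% fraction after one year,
# assuming a 100-day biological half-life (illustrative midpoint):
print(round(0.9 * fraction_remaining(365, 100), 2))  # ≈ 0.07
```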

Fukushima Daiichi disaster
In April 2011, elevated levels of caesium-137 were found in the environment after the Fukushima Daiichi nuclear disaster in Japan. In July 2011, meat from 11 cows shipped to Tokyo from Fukushima Prefecture was found to have 1,530 to 3,200 becquerels per kilogram of 137Cs, considerably exceeding the Japanese legal limit of 500 becquerels per kilogram at that time. In March 2013, a fish caught near the plant had a record 740,000 becquerels per kilogram of radioactive caesium, above the 100 becquerels per kilogram government limit. A 2013 paper in Scientific Reports found that for a forest site 50 km from the stricken plant, 137Cs concentrations were high in leaf litter, fungi, and detritivores, but low in herbivores. By the end of 2014, "Fukushima-derived radiocaesium had spread into the whole western North Pacific Ocean", transported by the North Pacific current from Japan to the Gulf of Alaska. It has been measured in the surface layer down to 200 meters, and south of the current area down to 400 meters.

Radioactive materials were dispersed into the atmosphere immediately after the disaster and account for most of all such materials leaked into the environment. 80% of the initial atmospheric release eventually deposited over rivers and the Pacific Ocean, according to a UNSCEAR report in 2020. Specifically, "the total releases to the atmosphere of Iodine-131 and Caesium-137 ranged generally between about 100 to about 500 PBq [petabecquerel, 10¹⁵ Bq] and 6 to 20 PBq, respectively."

Once released into the atmosphere, radionuclides that remain in a gaseous phase are simply diluted by the atmosphere, but those that precipitate eventually settle on land or in the ocean. Thus, the majority (90–99%) of the deposited radionuclides are isotopes of iodine and caesium, with a small portion of tellurium; these elements are almost fully vaporized out of the core because of their volatility. The remaining fraction of deposited radionuclides consists of less volatile elements such as barium, antimony, and niobium, of which less than a percent evaporates from the fuel.

Approximately 40–80% of the atmospheric releases were deposited over the ocean.

In addition to atmospheric deposition, there was also a significant quantity of direct releases into groundwater (and eventually the ocean) through leaks of coolant that had been in direct contact with the fuel. Estimates for this release vary from 1 to 5.5 PBq. Although the majority had entered the ocean shortly following the accident, a significant fraction remains in the groundwater and continues to mix with coastal waters.

According to the French Institute for Radiological Protection and Nuclear Safety, the release from the accident represents the most important individual oceanic emissions of artificial radioactivity ever observed. The Fukushima coast has one of the world's strongest currents (Kuroshio Current). It transported the contaminated waters far into the Pacific Ocean, dispersing the radioactivity. As of late 2011 measurements of both the seawater and the coastal sediments suggested that the consequences for marine life would be minor.

Significant pollution along the coast near the plant might persist, because of the continuing arrival of radioactive material transported to the sea by surface water crossing contaminated soil.
The possible presence of other radioactive substances, such as strontium-90 or plutonium, has not been sufficiently studied. Recent measurements show persistent contamination of some marine species (mostly fish) caught along the Fukushima coast.

Initial discharge
A large amount of caesium entered the sea from the initial atmospheric release. By 2013, the concentrations of caesium-137 in the Fukushima coastal waters were around the level before the accident. However, concentrations in coastal sediments declined more slowly than in coastal waters, and the amount of caesium-137 stored in sediments most likely exceeded that in the water column by 2020. The sediments may provide a long-term source of caesium-137 in the seawater.

Data on marine foods indicate that their radioactive concentrations are falling towards pre-accident levels. 41% of samples caught off the Fukushima coast in 2011 had caesium-137 concentrations above the legal limit (100 becquerels per kilogram); this had declined to 0.05% by 2015. The United States Food and Drug Administration stated in 2021 that "FDA has no evidence that radionuclides from the Fukushima incident are present in the U.S. food supply at levels that are unsafe". Yet presenting the science alone has not helped consumers regain their trust in eating Fukushima fishery products.

2023 discharge
The most prevalent radionuclide in the wastewater is tritium. A total of 780 terabecquerels (TBq) will be released into the ocean at a rate of 22 TBq per year. Tritium is routinely released into the ocean from operating nuclear power plants, sometimes in much greater quantities. For comparison, the La Hague nuclear reprocessing site in France released 11,400 TBq of tritium in 2018. In addition, about 60,000 TBq of tritium is produced naturally in the atmosphere each year by cosmic rays.
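The figures above imply a simple timescale and scale comparison, sketched here using only the numbers quoted in this section:

```python
total_tbq = 780.0          # planned total tritium release (TBq)
rate_tbq_per_year = 22.0   # planned annual release rate (TBq/year)

# At 22 TBq/year, releasing 780 TBq takes roughly three and a half decades.
years = total_tbq / rate_tbq_per_year
print(round(years, 1))  # 35.5

# Scale against the comparison figures quoted above:
print(round(11_400 / rate_tbq_per_year))  # 518 — La Hague's 2018 release, in years' worth of Fukushima discharge
print(round(60_000 / total_tbq))          # 77 — natural annual production vs. the entire planned release
```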

Other radionuclides present in the wastewater, like caesium-137, are not normally released by nuclear power plants. However, the concentrations in the treated water are minuscule relative to regulation limits.

"There is consensus among scientists that the impact on health is minuscule, still, it can't be said the risk is zero, which is what causes controversy", Michiaki Kai, a Japanese nuclear expert, told AFP. David Bailey, a physicist whose lab measures radioactivity, said that with tritium at diluted concentrations, "there is no issue with marine species unless we see a severe decline in fish population".

Ferenc Dalnoki-Veress, a scientist-in-residence at the Middlebury Institute of International Studies at Monterey, said regarding dilution that bringing in living creatures makes the situation more complex. Robert Richmond, a biologist from the University of Hawaiʻi, told the BBC that the inadequate radiological and ecological assessment raises the concern that Japan would be unable to detect what enters the environment and "get the genie back in the bottle". Dalnoki-Veress, Richmond, and three other panelists consulting for the Pacific Islands Forum wrote that dilution may fail to account for bioaccumulation and exposure pathways that involve organically bound tritium (OBT).

 