Friday, February 14, 2025

Pacific Ocean warming is accelerating, with heat gains measured in zettajoules: how ocean heat content is changing amid current climate change trends

The Pacific Ocean is the largest and deepest of Earth's five oceanic divisions. It extends from the Arctic Ocean in the north to the Southern Ocean (or, depending on the definition, to Antarctica) in the south, and is bounded by the continents of Asia and Australia in the west and the Americas in the east.

At 165,250,000 square kilometers (63,800,000 square miles) in area (as defined with a southern Antarctic border), this largest division of the World Ocean, and of the hydrosphere, covers about 46% of Earth's water surface and about 32% of the planet's total surface area, making it larger than Earth's entire land area (148,000,000 km2 (57,000,000 sq mi)). The centers of both the water hemisphere and the Western Hemisphere, as well as the oceanic pole of inaccessibility, are in the Pacific Ocean.

The Pacific Ocean's mean depth is 4,000 meters (13,000 feet). The Challenger Deep in the Mariana Trench, located in the northwestern Pacific, is the deepest known point in the world, reaching a depth of 10,928 meters (35,853 feet). The Pacific also contains the deepest point in the Southern Hemisphere, the Horizon Deep in the Tonga Trench, at 10,823 meters (35,509 feet). The third deepest point on Earth, the Sirena Deep, is also located in the Mariana Trench.

Due to the effects of plate tectonics, the Pacific Ocean is currently shrinking by roughly 2.5 cm (1 in) per year on three sides, roughly averaging 0.52 km2 (0.20 sq mi) a year. By contrast, the Atlantic Ocean is increasing in size.

Along the Pacific Ocean's irregular western margins lie many seas, the largest of which are the Celebes Sea, Coral Sea, East China Sea (East Sea), Philippine Sea, Sea of Japan, South China Sea (South Sea), Sulu Sea, Tasman Sea, and Yellow Sea (West Sea of Korea). The Indonesian Seaway (including the Strait of Malacca and Torres Strait) joins the Pacific and the Indian Ocean to the west, and Drake Passage and the Strait of Magellan link the Pacific with the Atlantic Ocean on the east. To the north, the Bering Strait connects the Pacific with the Arctic Ocean.

The Pacific Ocean contains most of the islands in the world, about 25,000 in total. Many tropical storms batter the islands of the Pacific. The lands around the Pacific Rim are full of volcanoes and often affected by earthquakes. Tsunamis, caused by underwater earthquakes, have devastated many islands and in some cases destroyed entire towns.

The Pacific Ocean and Heat Uptake

Ocean heat content (OHC) or ocean heat uptake (OHU) is the energy absorbed and stored by the oceans. To calculate ocean heat content, it is necessary to measure ocean temperature at many different locations and depths. The North Pacific, the North Atlantic, the Mediterranean, and the Southern Ocean have all recorded their highest heat content in more than sixty years of global measurements.
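
Below is a minimal sketch, in Python, of how heat content can be estimated from a temperature profile by integrating density × specific heat × temperature anomaly over depth. The anomaly values, layer thicknesses, and constants are illustrative assumptions, not observational data; only the Pacific surface area is taken from the figures above.

```python
# Illustrative ocean heat content (OHC) estimate from one depth profile.
# All profile values below are hypothetical placeholders.

RHO_SEAWATER = 1025.0   # typical seawater density, kg/m^3
CP_SEAWATER = 3990.0    # typical specific heat of seawater, J/(kg*K)

def ohc_per_area(temp_anomalies_c, layer_thicknesses_m):
    """Heat content anomaly per unit area (J/m^2) for one profile:
    sum of rho * c_p * dT * dz over the depth layers."""
    return sum(RHO_SEAWATER * CP_SEAWATER * dT * dz
               for dT, dz in zip(temp_anomalies_c, layer_thicknesses_m))

# Hypothetical 0-700 m profile with small warm anomalies in each layer.
anomalies = [0.30, 0.20, 0.10, 0.05]        # degrees C
thicknesses = [100.0, 200.0, 200.0, 200.0]  # metres (sums to 700 m)

per_area_j = ohc_per_area(anomalies, thicknesses)
pacific_area_m2 = 165_250_000 * 1e6         # Pacific area quoted above, in m^2
print(f"{per_area_j:.2e} J/m^2 -> ~{per_area_j * pacific_area_m2 / 1e21:.0f} ZJ "
      "if applied uniformly across the Pacific")
```

In practice such profiles come from instruments such as Argo floats at many locations, and the per-area values are integrated over the ocean area rather than assumed uniform.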

Numerous independent studies in recent years have found a multi-decadal rise in the OHC of upper ocean regions that has begun to penetrate to deeper regions. The upper ocean (0–700 m) has warmed since 1971, while it is very likely that warming has occurred at intermediate depths (700–2000 m) and likely that deep ocean (below 2000 m) temperatures have increased. There is very high confidence that the increase in ocean heat content in response to anthropogenic carbon dioxide emissions is essentially irreversible on human time scales.

In 2021, scientists from around the world reported that, per their measurements, the world's oceans were hotter than ever recorded for the sixth straight year. “One way to think about this is the oceans have absorbed heat equivalent to seven Hiroshima atomic bombs detonating each second, 24 hours a day, 365 days a year.” The data show that the oceans took up about 14 zettajoules of additional heat over the year.
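
As a rough arithmetic check of that comparison, dividing an annual heat gain of about 14 zettajoules by the number of seconds in a year and by an assumed Hiroshima yield of roughly 15 kilotons of TNT gives about seven bomb-equivalents per second; the yield figure is a commonly quoted approximation, not a number from the article.

```python
# Back-of-the-envelope check of the "seven Hiroshima bombs per second" comparison.
annual_heat_gain_j = 14e21               # ~14 zettajoules absorbed in one year
seconds_per_year = 365.25 * 24 * 3600    # ~3.16e7 seconds

# Assumed Hiroshima yield: ~15 kilotons of TNT at 4.184e9 J per ton of TNT.
hiroshima_yield_j = 15e3 * 4.184e9       # ~6.3e13 J

bombs_per_second = annual_heat_gain_j / seconds_per_year / hiroshima_yield_j
print(f"~{bombs_per_second:.1f} Hiroshima-bomb equivalents per second")  # ~7
```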

In 2023, the world's oceans were again the hottest in the historical record and exceeded the previous 2022 record maximum. The five highest ocean heat observations to a depth of 2000 meters occurred in the period 2019–2023.

With improved observations in recent decades, analyses show that the heat content of the upper ocean has increased at an accelerating rate. Changes in ocean temperature greatly affect ecosystems in the oceans and on land.
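
One simple way to see whether a warming series is accelerating, sketched below with synthetic numbers, is to compare a linear fit with a quadratic fit: a positive quadratic coefficient means the rate of increase is itself growing. The time series here is fabricated for illustration and is not an analysis of real OHC data.

```python
# Synthetic illustration of diagnosing an accelerating trend in an OHC series.
import numpy as np

years = np.arange(1971, 2024)
t = years - years[0]
rng = np.random.default_rng(0)
# Fabricated upper-ocean OHC anomaly (ZJ) with a mild accelerating component.
ohc_zj = 0.05 * t**2 + 3.0 * t + rng.normal(0.0, 5.0, t.size)

linear_fit = np.polyfit(years, ohc_zj, 1)
quadratic_fit = np.polyfit(years, ohc_zj, 2)

print("linear trend (ZJ/yr):", round(linear_fit[0], 2))
print("quadratic coefficient (ZJ/yr^2):", round(quadratic_fit[0], 3))
# A clearly positive quadratic coefficient indicates acceleration.
```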

Ocean heat uptake accounts for over 90% of total planetary heat uptake, mainly as a consequence of human-caused changes to the composition of Earth's atmosphere.

Concentrated releases of this heat in association with high sea surface temperatures help drive tropical cyclones, atmospheric rivers, atmospheric heat waves, and other extreme weather events that can penetrate far inland. Altogether, these processes make the ocean Earth's largest thermal reservoir, which regulates the planet's climate by acting as both a sink and a source of energy.
 

Current Climate Change Phenomena and Trends

Marine Heatwave
A marine heatwave is a period of abnormally high sea surface temperatures compared with the typical temperatures of the past for a particular season and region. Unlike heatwaves on land, marine heatwaves can extend over vast areas, persist for weeks to months or even years, and occur at subsurface levels. It is clear that the ocean is warming as a result of climate change, and this rate of warming is increasing.
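
A widely used working definition (in the spirit of Hobday et al. 2016) flags a marine heatwave when sea surface temperature stays above its seasonal 90th-percentile climatology for at least five consecutive days. The sketch below applies that rule to a made-up temperature series; the data and the flat climatological threshold are simplifying assumptions, not an operational detection method.

```python
# Minimal marine-heatwave detector: SST above the 90th-percentile threshold
# for at least `min_duration` consecutive days.
import numpy as np

def detect_marine_heatwaves(sst, threshold_90th, min_duration=5):
    """Return (start, end) index pairs of qualifying warm spells."""
    events, start = [], None
    for i, (temp, thr) in enumerate(zip(sst, threshold_90th)):
        if temp > thr:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_duration:
                events.append((start, i - 1))
            start = None
    if start is not None and len(sst) - start >= min_duration:
        events.append((start, len(sst) - 1))
    return events

# Hypothetical 30-day record with an eight-day warm spell (days 10-17).
sst = np.full(30, 18.0)
sst[10:18] += 2.5
threshold = np.full(30, 19.0)   # stand-in for a seasonal 90th-percentile climatology
print(detect_marine_heatwaves(sst, threshold))  # [(10, 17)]
```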

Scientists predict that the frequency, duration, scale (or area), and intensity of marine heatwaves will continue to increase, because sea surface temperatures will continue to rise with global warming. Simply put, the greater the greenhouse gas emissions (or the weaker the mitigation), the more sea surface temperatures will rise and the more frequent and intense marine heatwaves will become.

Many species already experience these temperature shifts during marine heatwave events, and the risk factors and health impacts for coastal and inland communities grow as global average temperatures and extreme heat events increase.

The Blob
The Blob is a large mass of relatively warm water in the Pacific Ocean off the coast of North America that was first detected in late 2013 and continued to spread throughout 2014 and 2015. It is an example of a marine heatwave. Sea surface temperatures indicated that the Blob persisted into 2016, but it was initially thought to have dissipated later that year.

By September 2016, the Blob resurfaced and made itself known to meteorologists. The warm water mass was unusual for open ocean conditions and was considered to have played a role in the formation of the unusual weather conditions experienced along the Pacific coast of North America during the same time period. The warm waters of the Blob were nutrient-poor and adversely affected marine life.

In 2019 another scare was caused by a weaker form of the effect, referred to as "The Blob 2.0", and in 2021 the appearance of "The Southern Blob" south of the equator near New Zealand had major effects in South America, particularly in Chile and Argentina.

The Blob was first detected in October 2013 and early 2014 by Nicholas Bond and his colleagues at the Joint Institute for the Study of the Atmosphere and Ocean of the University of Washington. It was detected when a large circular body of seawater did not cool as expected and remained much warmer than the average normal temperatures for that location and season.

Initially the Blob was reported as being 500 miles (800 km) wide and 300 feet (91 m) deep. It later expanded to 1,000 miles (1,600 km) long, 1,000 miles (1,600 km) wide, and 300 feet (91 m) deep. In February 2014, the temperature of the Blob was around 2.5 °C (4.5 °F) warmer than usual for the time of year. A NOAA scientist noted in September 2014 that, based on ocean temperature records, the North Pacific Ocean had never been so warm since climatologists began taking measurements.

In 2015 the atmospheric ridge causing the Blob finally disappeared, and the Blob itself vanished shortly after, in 2016. However, it left in its wake many species that will take a long time to recover. Although the Blob is gone for now, scientists predict that similar marine heatwaves will become more common as Earth's climate warms. Residual heat from the first Blob, together with warmer temperatures in 2019, led to a second Blob scare, but it was quelled by a series of storms that cooled the rising temperatures.

The cause of the phenomenon remains unclear, but it is speculated to be driven in part by human-caused climate change.

Causes of increasing heat in the Pacific Ocean
Environment
The Northwestern Pacific Ocean is the most susceptible to microplastic pollution because of its proximity to highly populated countries such as Japan and China. The quantity of small plastic fragments floating in the north-east Pacific Ocean increased a hundredfold between 1972 and 2012. The ever-growing Great Pacific Garbage Patch between California and Japan is three times the size of France. An estimated 80,000 metric tons of plastic make up the patch, totaling 1.8 trillion pieces.
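
A quick arithmetic check on those figures, sketched below, shows why the patch is dominated by small fragments: 80,000 metric tons spread over 1.8 trillion pieces implies an average mass of only about 44 milligrams per piece.

```python
# Implied average fragment mass in the Great Pacific Garbage Patch,
# using the tonnage and piece count quoted above.
patch_mass_g = 80_000 * 1_000 * 1_000   # 80,000 metric tons in grams
piece_count = 1.8e12                    # 1.8 trillion pieces

avg_mass_mg = patch_mass_g / piece_count * 1000
print(f"average fragment mass ≈ {avg_mass_mg:.0f} mg")  # ~44 mg
```
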
Marine pollution is a generic term for the harmful entry of chemicals or particles into the ocean. The main culprits are rivers used for waste disposal; they empty into the ocean, often also carrying chemicals used as agricultural fertilizers. The excess of oxygen-depleting chemicals in the water leads to hypoxia and the creation of a dead zone.
Marine debris, also known as marine litter, is human-created waste that has ended up floating in a lake, sea, ocean, or waterway. Oceanic debris tends to accumulate at the center of gyres and coastlines, frequently washing aground where it is known as beach litter.
In addition, the Pacific Ocean has served as the crash site of satellites, including Mars 96, Fobos-Grunt, and the Upper Atmosphere Research Satellite.
Nuclear waste
From 1946 to 1958, the Marshall Islands served as the Pacific Proving Grounds designated by the United States and played host to a total of 67 nuclear tests conducted across various atolls. Several nuclear weapons have been lost in the Pacific Ocean, including a one-megaton bomb lost during the 1965 Philippine Sea A-4 incident.

In 2021, the Japanese Cabinet approved the discharge of radioactive water from the Fukushima nuclear plant into the Pacific Ocean over a course of 30 years, concluding that the water would be diluted to drinkable standards. Apart from this planned discharge, leakage of tritium into the Pacific between 2011 and 2013 was estimated at 20 to 40 trillion becquerels (Bq), according to the Fukushima plant.
Deep sea mining
An emerging threat for the Pacific Ocean is the development of deep-sea mining. Deep-sea mining is aimed at extracting manganese nodules that contain minerals such as manganese, nickel, copper, zinc, and cobalt. The largest deposits of these are found in the Pacific Ocean between Mexico and Hawaii in the Clarion–Clipperton fracture zone.

Deep-sea mining for manganese nodules appears to have drastic consequences for the ocean. It disrupts deep-sea ecosystems and may cause irreversible damage to fragile marine habitats. Sediment stirring and chemical pollution threaten various marine animals. In addition, the mining process can lead to greenhouse gas emissions and promote further climate change. Preventing deep-sea mining is therefore important to ensure the long-term health of the ocean.

Options for reducing impacts
To address the root cause of more frequent and more intense marine heatwaves, climate change mitigation methods are needed to curb the increase in global temperature and in ocean temperatures.
Better forecasts of marine heatwaves and improved monitoring can also help to reduce impacts of these heatwaves.

Tuesday, January 14, 2025

Existential Catastrophe from Malevolent Superintelligent AI May Be Awaiting Humans: the Prospect of the New Discovery Is All Too Sweet

The plausibility of existential catastrophe due to AI is widely debated. It hinges in part on whether AGI or superintelligence is achievable, the speed at which dangerous capabilities and behaviors emerge, and whether practical scenarios for AI takeovers exist. Concerns about superintelligence have been voiced by computer scientists and tech CEOs such as Geoffrey Hinton, Yoshua Bengio, Alan Turing, Elon Musk, and OpenAI CEO Sam Altman. In 2022, a survey of AI researchers with a 17% response rate found that the majority believed there is a 10 percent or greater chance that human inability to control AI will cause an existential catastrophe. In 2023, hundreds of AI experts and other notable figures signed a statement declaring, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."


Two sources of concern stem from the problems of AI control and alignment. Controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would likely resist attempts to disable it or change its goals as that would prevent it from accomplishing its present goals. It would be extremely challenging to align a superintelligence with the full breadth of significant human values and constraints.


A third source of concern is the possibility of a sudden "intelligence explosion" that catches humanity unprepared. In this scenario, an AI more intelligent than its creators would be able to recursively improve itself at an exponentially increasing rate, improving too quickly for its handlers or society at large to control. Empirically, examples like AlphaZero, which taught itself to play Go and quickly surpassed human ability, show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such machine learning systems do not recursively improve their fundamental architecture.


Potential AI capabilities

General Intelligence

Artificial general intelligence (AGI) is typically defined as a system that performs at least as well as humans in most or all intellectual tasks. A 2022 survey of AI researchers found that 90% of respondents expected AGI would be achieved in the next 100 years, and half expected the same by 2061. Meanwhile, some researchers dismiss existential risks from AGI as "science fiction" based on their high confidence that AGI will not be created anytime soon.

Breakthroughs in large language models have led some researchers to reassess their expectations. Notably, Geoffrey Hinton said in 2023 that he had recently changed his estimate from "20 to 50 years before we have general purpose A.I." to "20 years or less."

The Frontier supercomputer at Oak Ridge National Laboratory turned out to be nearly eight times faster than expected. Feiyi Wang, a researcher there, said "We didn't expect this capability" and "we're approaching the point where we could actually simulate the human brain."

Superintelligence

In contrast with AGI, Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", including scientific creativity, strategic planning, and social skills. He argues that a superintelligence can outmaneuver humans anytime its goals conflict with humans'. It may choose to hide its true intent until humanity cannot stop it. Bostrom writes that in order to be safe for humanity, a superintelligence must be aligned with human values and morality, so that it is "fundamentally on our side."

When artificial superintelligence (ASI) may be achieved, if ever, is necessarily less certain than predictions for AGI. In 2023, OpenAI leaders said that not only AGI, but superintelligence may be achieved in less than 10 years.

AI alignment and risks

Alignment of Superintelligences

Some researchers believe the alignment problem may be particularly difficult when applied to superintelligences. Their reasoning includes:

·         As AI systems increase in capabilities, the potential dangers associated with experimentation grow. This makes iterative, empirical approaches increasingly risky.

·         If instrumental goal convergence occurs, it may only do so in sufficiently intelligent agents.

·         A superintelligence may find unconventional and radical solutions to assigned goals. Bostrom gives the example that if the objective is to make humans smile, a weak AI may perform as intended, while a superintelligence may decide a better solution is to "take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins."

·         A superintelligence in creation could gain some awareness of what it is, where it is in development (training, testing, deployment, etc.), and how it is being monitored, and use this information to deceive its handlers. Bostrom writes that such an AI could feign alignment to prevent human interference until it achieves a "decisive strategic advantage" that allows it to take control.

·         Analyzing the internals and interpreting the behavior of current large language models is difficult. And it could be even more difficult for larger and more intelligent models.


Alternatively, some find reason to believe superintelligences would be better able to understand morality, human values, and complex goals. Bostrom writes, "A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true".

In 2023, OpenAI started a project called "Superalignment" to solve the alignment of superintelligences in four years. It called this an especially important challenge, as it said superintelligence could be achieved within a decade. Its strategy involved automating alignment research using AI. The Superalignment team was dissolved less than a year later.

Other sources of risk

Bostrom and others have said that a race to be the first to create AGI could lead to shortcuts in safety, or even to violent conflict. Roman Yampolskiy and others warn that a malevolent AGI could be created by design, for example by a military, a government, a sociopath, or a corporation, to benefit from, control, or subjugate certain groups of people, as in cybercrime, or that a malevolent AGI could choose the goal of increasing human suffering, for example of those people who did not assist it during the information explosion phase.

Suffering risks

Suffering risks are sometimes categorized as a subclass of existential risks. According to some scholars, s-risks warrant serious consideration as they are not extremely unlikely and can arise from unforeseen scenarios. Although they may appear speculative, factors such as technological advancement, power dynamics, and historical precedents indicate that advanced technology could inadvertently result in substantial suffering. Thus, s-risks are considered to be a morally urgent matter, despite the possibility of technological benefits. Sources of possible s-risks include embodied artificial intelligence and superintelligence.

Artificial intelligence is central to s-risk discussions because it may eventually enable powerful actors to control vast technological systems. In a worst-case scenario, AI could be used to create systems of perpetual suffering, such as a totalitarian regime expanding across space. Additionally, s-risks might arise incidentally, such as through AI-driven simulations of conscious beings experiencing suffering, or from economic activities that disregard the well-being of nonhuman or digital minds. Steven Umbrello, an AI ethics researcher, has warned that biological computing may make system design more prone to s-risks. Brian Tomasik has argued that astronomical suffering could emerge from solving the AI alignment problem incompletely. He argues for the possibility of a "near miss" scenario, where a superintelligent AI that is slightly misaligned has the maximum likelihood of causing astronomical suffering, compared to a completely unaligned AI.

People’s Perspectives on AI

The thesis that AI could pose an existential risk provokes a wide range of reactions in the scientific community and in the public at large, but many of the opposing viewpoints share common ground.

Observers tend to agree that AI has significant potential to improve society. The Asilomar AI Principles, which contain only those principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference, also agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."

AI Mitigation

Many scholars concerned about AGI existential risk believe that extensive research into the "control problem" is essential. This problem involves determining which safeguards, algorithms, or architectures can be implemented to increase the likelihood that a recursively-improving AI remains friendly after achieving superintelligence. Social measures are also proposed to mitigate AGI risks, such as a UN-sponsored "Benevolent AGI Treaty" to ensure that only altruistic AGIs are created. Additionally, an arms control approach and a global peace treaty grounded in international relations theory have been suggested, potentially for an artificial superintelligence to be a signatory.

Researchers at Google have proposed research into general "AI safety" issues to simultaneously mitigate both short-term risks from narrow AI and long-term risks from AGI. A 2020 estimate places global spending on AI existential risk somewhere between $10 million and $50 million, compared with global spending on AI of perhaps $40 billion. Bostrom suggests prioritizing funding for protective technologies over potentially dangerous ones. Some, like Elon Musk, advocate radical human cognitive enhancement, such as direct neural linking between humans and machines; others argue that these technologies may pose an existential risk themselves. Another proposed method is closely monitoring or "boxing in" an early-stage AI to prevent it from becoming too powerful. A dominant, aligned superintelligent AI might also mitigate risks from rival AIs, although its creation could present its own existential dangers. Induced amnesia has been proposed as a way to mitigate the risks of suffering and revenge-seeking in locked-in conscious AI and certain AI-adjacent biological systems.

 