UK has made ‘significant progress’ on climate change readiness
by Renee Karunungan on May 4, 2022
There has been significant progress in the UK on reporting and implementing climate change adaptation, according to a new study led by Katie Jenkins of the Tyndall Centre at UEA. Jenkins has created an Adaptation Inventory of adaptation actions under way, based on official records of adaptation projects being implemented by both the public and private sectors, accompanied by a systematic review of the peer-reviewed literature on adaptation case studies.
Adapting to climate change means taking action to prepare for and adjust to the current and predicted effects of climate change. Adaptation plays an important role in managing past, present and future climate risks and impacts. However, there is an “adaptation gap”: according to the United Nations Environment Programme’s Adaptation Gap Report, the gap between existing adaptation efforts and adaptation needs is widening. Tracking national adaptation plans is deemed critical to support future decision-making and drive future action.
According to the UK Climate Change Committee’s assessment, adaptation planning for 2°C and 4°C of global warming is not happening, and the gap between future risks and planned adaptation has widened, with only the minimum level of resilience being delivered.
Katie’s new Adaptation Inventory provides insight on what is currently being implemented, which helps policymakers and practitioners learn from existing knowledge and practical case studies.
“The Adaptation Inventory provides a consistent and easily searchable database which will continue to evolve. It can provide evidence on the specific types of adaptation implemented on the ground as well as provide more detailed insight into the specific examples of action being implemented. This has the potential to help and inform UK-based decision-making,” Katie said.
The Adaptation Inventory identifies and documents current and planned adaptation in the UK: how it is being implemented through adaptation actions, the sectors where adaptation is occurring, and where gaps remain. The Inventory identifies 360 adaptation actions, comprising 134 adaptation types; 80% of these actions have already been implemented.
The private sector accounts for 74% of the actions, with water companies dominating. Regulatory frameworks, standards and reporting requirements imposed on water companies by the regulator are key drivers. For example, water companies are already required to plan for their resilience to drought.
The most common types of adaptation actions are flood protection (12%), leakage reduction (4%), water metering (3%), property level flood protection (3%), operational improvements (3%), and back-up generators (3%). Most actions were categorized as structural and physical interventions. Other interventions were categorized as technological and ecosystem based.
The Adaptation Inventory also looks at the types of climate hazards being addressed. It found that 76% of the actions were in response to drought, 26% to extreme rainfall, 13% to flooding, and 11% to higher temperatures (a single action can address more than one hazard). One example of adaptation for drought is rainwater recovery using on-site storage facilities, reducing demand for fresh water during drought. To alleviate flooding, one water company is using afforestation. The London Underground has doubled the capacity of ventilation shafts on the Victoria line, providing more airflow on hot days.
The Brain Bank has released findings from its first three years of operation, analyzing the brains of professional and non-professional athletes who donate them after death.
The researchers say 12 of the athletes’ brains showed signs of chronic traumatic encephalopathy (CTE), a condition associated with psychiatric problems ranging from mood and behavior disorders to cognitive impairment and dementia.
“CTE was identified in the brains of older former professionals with long playing careers, but also in younger, non-professional sportsmen and in recent professionals who had played under modern concussion guidelines,” the authors found.
“Screening for CTE in all deaths by suicide is probably impractical, but our finding suggests it should be undertaken if a history of repetitive head injury is known or suspected,” the authors say.
The authors note that brains donated to the bank are more likely to show signs of trauma because donation is often done when an athlete’s family have concerns about the role head trauma may have played in a person’s death or condition.
Nonetheless, they say: “Our findings should encourage clinicians and policymakers to develop measures that further mitigate the risk of sport-related repetitive head injury.”
One Step Closer to Hydrogen-Fueled Planes
Airbus to Test Zero-Emissions Aircraft, but How Does It Work?
Hydrogen fuel, touted by some as the fuel of the future, is seen as a potential solution for the deeply polluting aviation and shipping industries in a net-zero world: hydrogen burns cleanly, producing just energy and water vapor.
But while engineers have promoted hydrogen as a possible transport fuel since at least the 1920s, real-world technologies are still in their infancy, thanks to the destructive dominance of fossil fuels over the last century.
Airbus’ announcement, then, marks an important early step in a move towards making the sector compatible with net-zero.
“This is the most significant step undertaken at Airbus to usher in a new era of hydrogen-powered flight since the unveiling of our ZEROe concepts back in September 2020,” said Sabine Klauke, Airbus Chief Technical Officer, in a statement.
“By leveraging the expertise of American and European engine manufacturers to make progress on hydrogen combustion technology, this international partnership sends a clear message that our industry is committed to making zero-emission flight a reality.”
“Our ambition is to take this aircraft and add a stub in between the two rear doors at the upper level,” said Glenn Llewellyn, Airbus’ Vice President of Zero Emissions Aircraft, in a promotional video on YouTube. “That stub will have on the end of it a hydrogen powered gas turbine.”
There will be instruments and sensors around the hydrogen storage unit and engine, to monitor how the system functions both in ground tests and in-flight. Up in the cockpit, instruments will need to be modified with a new throttle to change the amount of power the engine operates at, and a display for pilots to monitor the system.
Why Hydrogen Fuel?
Hydrogen, the most abundant element in the Universe, burns cleanly, and can be produced using renewable energy through the electrolysis of water (though it can be produced using fossil fuels, too).
Given that it’s so abundant, can be made from water, and combusts to produce water vapor, it can form a closed-loop energy system: the definition of renewable.
It’s also highly reactive: hydrogen gas, whose molecules are made up of two hydrogen atoms, can combust at extremely low concentrations. It can ignite from a simple spark, and it’s even been known to combust when exposed to sunlight or minor increases in temperature. That reactivity is part of what makes it a viable replacement for kerosene, but it’s also why the system needs to be tested for safety.
“Aviation is one of these things that everyone agrees needs hydrogen for decarbonization, because it’s not going to be possible to electrify long distance air travel in the next few decades,” explains Fiona Beck, a senior lecturer at ANU and convener of the Hydrogen Fuels Project in the University’s Zero-carbon energy for the Asia Pacific grand challenge. “We just don’t have the battery technologies.
“One kilogram of hydrogen has 130 times the energy of one kilogram of batteries, so in something like air travel, where weight is really important, there’s just no way you’re going to get batteries light enough to directly electrify air travel.”
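Beck’s figure is easy to sanity-check against commonly cited specific-energy values. A back-of-envelope comparison (both numbers below are assumed, commonly quoted values, not figures from the article):

```python
# Rough sanity check of the "130 times" comparison.
# Both specific-energy figures are assumptions from commonly cited
# ranges, not numbers taken from the article itself.
H2_MJ_PER_KG = 142.0    # hydrogen, higher heating value
LI_ION_MJ_PER_KG = 1.1  # a good lithium-ion cell (~300 Wh/kg)

ratio = H2_MJ_PER_KG / LI_ION_MJ_PER_KG
print(round(ratio))  # roughly 130, in line with Beck's figure
```

Of course, this compares the fuel alone; hydrogen's low volumetric density means the tanks and insulation claw back some of that mass advantage in practice.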
The Hindenburg disaster of 1937, in which a hydrogen-filled airship burst into flames, is the most high-profile incident in which hydrogen proved deadly. But a proverbial boatload of hydrogen gas encased within a fabric covering is nothing like the fuel cells proponents of hydrogen fuel are creating in the modern era.
Nonetheless, the incident demonstrates why it’s important to ensure the safety and impregnability of fuel storage; a single spark can prove fatal (though that’s the case with existing fuels, too).
“The key will be to have really good storage containers for the hydrogen, and you’re going to have to re-engineer all the fuel delivery lines,” says Beck, “because you can’t assume that the systems that deliver kerosene safely to an engine are going to be suitable for delivering hydrogen.”
Ultimately, Beck says pre-existing, sophisticated hydrogen technologies, even if they aren’t derived from aviation, mean engineers aren’t going into this blind.
“We already use quite a lot of hydrogen in industry, which is very different than flying a plane full of hydrogen, but still, we know how to handle it relatively safely.
“So, it’s just about designers and engineers making sure that they consider all the safety aspects of it. It’s different, but not necessarily more challenging.”
Two Paths to a Hydrogen Fueled Future of Flight?
Beck notes that Airbus aren’t the only commercial entity exploring hydrogen as a fuel type. In fact, Boeing are incorporating hydrogen into their vision of a cleaner future, but in a different way.
“There’s a difference between just getting hydrogen and burning it in a modified jet engine and what Boeing are doing, which is using sustainable air fuels,” she says.
But what are sustainable air fuels (SAFs)? Beck says they’re made by combining hydrogen with carbon dioxide to make a sustainably produced kerosene.
“The difference is that instead of getting fossil fuels and refining them, you start with hydrogen, which you would hope comes from green sources, and then you take some carbon dioxide captured from another industrial process, and you’re cycling the carbon dioxide one more time before it gets released.”
So, CO2 is still released into the atmosphere, but the individual flight is not adding its own new load of greenhouse gases to the amount. Instead, it essentially piggy-backs off a pre-existing quantity of emissions that were already produced somewhere else.
The type of fuel that wins out remains to be seen.
“It’ll be really interesting to see which approach we go for in the longer term,” Beck muses. “With synthetic air fuels, your plane engine doesn’t need to change at all, nothing about the demand side needs to change–it’s just kerosene.
“But then there’s issues, because you’re still using carbon dioxide.”
Some commentators see Boeing’s bet on SAFs as a more pragmatic approach that may help us usher in a less polluting age, quicker. On the other hand, if successful, the Airbus system can be fully carbon-neutral from fuel production through to combustion.
“Climate Adaptation by Itself Is Not Enough”: The Latest IPCC Report Installment
The Second of Three Reports Shows Our Vulnerabilities and How We Can Address Them.
In the latest installment of its Sixth Assessment Report, released today, the IPCC has examined the world population’s vulnerability to climate change, and what must be done to adapt to current and future changes.
It’s the second of three sections of the report (Working Group II). Working Group I’s section, released last August, demonstrated that anthropogenic climate change is continuing, while Working Group III’s component, on mitigation, will be released in April. An overall synthesis report is coming in September.
The IPCC reports represent a phenomenal amount of work from hundreds of researchers and government officials. This installment synthesizes information from over 10,000 studies and incorporates over 62,000 comments from expert peer reviewers.
Literally every sentence of the summary for policymakers has been agreed by consensus among a group of experts and government delegations; the line-by-line approval process alone takes a fortnight. The report in its entirety is the product of several years’ work.
Given the time and expertise involved in making the report, its conclusions aren’t revelatory: the world is becoming increasingly vulnerable to the effects of climate change, the poorest people are often the most at risk, and adaptation to these effects will force changes in our lifestyles, infrastructure, economy and agriculture.
While adaptation is necessary, it’s also insufficient. “It’s increasingly clear that the pace of adaptation across the globe is not enough to keep up with climate change,” says Professor Mark Howden, Working Group II’s vice-chair and director of the Institute for Climate, Energy & Disaster Solutions at the Australian National University.
Under the IPCC’s projected emissions scenarios, the climate could warm slightly or substantially, depending on the volume of greenhouse gases released into the atmosphere.
“Depending on which of those trajectories we go on, our adaptation options differ,” says Howden.
On our current, business-as-usual trajectory, we can’t avoid the crisis, no matter how much we change our human systems to prepare for or recover from the ravages of climate change.
“Climate adaptation, risk management, by itself is not enough,” says Howden.
The report comes at a pertinent time for Australia, as southern Queensland and northern New South Wales experience dramatic flooding from high, La Niña-related rainfall.
“One of the clear projections is an increase in the intensity of heavy rainfall events,” says Professor Brendan Mackey, director of National Climate Change Adaptation Research Facility at Griffith University, and a lead author on the Australasian chapter of the report.
Mackey also notes that he has extended family members in Lismore, NSW, who today needed to be rescued from their rooftops as the town floods.
Howden says that while it’s hard to link individual disasters to climate change as they occur, he agrees that there are more floods projected for northern Australia.
“I think we can say that climate change is already embedded in this event,” adds Howden.
“These events are driven by, particularly, ocean temperatures, and we know very well that those have gone up due to climate change due to human influence.”
He points out that flooding is a common side effect of a La Niña event, of which more are expected as the climate warms.
Flooding is not the only extreme weather event that can be linked to climate change.
“We’ve observed further warming and sea level rise, we’ve observed more flood days and heat waves, we’ve observed less snow,” says Mackey.
“Interestingly, [we’ve observed] more rainfall in the north, less winter rainfall in the southwest and southeast, and more extreme fire weather days in the south and east.”
All of these trends are expected to continue, especially under high-emissions scenarios.
For Australians, the predictions the IPCC has made with very high or high confidence include: both a decline in agricultural production and increase in extreme fire weather across the south of the continent; a nation-wide increase in heat-related mortality; increased stress on cities, infrastructure and supply chains from natural disasters; and inundation of low-lying coastal communities from sea level rise.
The final high-confidence prediction is that Australian institutions and governments aren’t currently able to manage these risks.
“Climate change impacts are becoming more complex and difficult to manage,” says Professor Lauren Rickards, director of the Urban Futures Enabling Capability Platform at RMIT, also a lead author on the Australasian chapter.
“Not only are climatic hazards becoming more severe–including, sometimes, nonlinear effects such as, for example, tipping over flood levees that have historically been sufficient–but also those climatic hazards are intersecting in very, very complex ways. And in turn, the flow-on effects on the ground are interacting, causing what’s called cascading and compounding impacts.”
She adds that many local and state governments, as well as the private sector, have recognized the importance of changing their practices to prepare for or react to climate extremes.
“We have these systems, these infrastructural systems–energy, transport, water, communications, for example–and it’s the need to adapt those at the base of a lot of the adaptation that’s needed,” says Rickards.
But Australia lacks large-scale investment in research on how different places and systems can adapt to the changing climate.
“We’ve seen a really significant reduction in the research into what actions different individuals, communities, sectors, can take,” says Howden.
“And what that means is we don’t have the portfolio of options available for people in a way that is easily communicable, and easily understood, and easily adopted.”
Without this research, as well as work from local and Indigenous experts, some adaptations can even risk worsening the impacts of climate change.
“The evidence that we’ve looked at shows really clearly that adaptation strategies, when they build on Indigenous and local knowledge and integrate science, that’s when they are most successful,” says Dr. Johanna Nalau, leader of the Adaptation Science Research Theme at Cities Research Institute, Griffith University.
While the risks Australia faces are dramatic, things are much worse for other parts of the world. Nalau, who was a lead author on the report’s chapter on small islands, says that “most of the communities and countries are constrained in what they can do in terms of adaptation”.
In April, we will have access to the IPCC’s dossier on mitigating climate change and emissions reduction. But in the meantime, Working Group II’s battalion of researchers advocate for better planning for climate disaster, more research into ways human systems can adapt, sustainable and just development worldwide, and rapid emissions reduction.
“Adaptation can’t be divorced from mitigation, conceptually or in practice,” says Rickards.
“We need adaptation to enable effective mitigation. We need effective mitigation to enable adaptation to give it a chance of succeeding. At present, we’re not on track and we need to pivot quickly.”
Piecing Together Pandemic Origins
New Research Asserts Market, Not Laboratory, Is the “Unambiguous” Birthplace of SARS-CoV-2
by Jamie Priest
Now in our third year of woe, most of us are naturally focused on the end of the pandemic. The global death toll is approaching 6 million, and the world is desperately searching for signs the ordeal’s over.
But amid the future watching, a team of researchers have turned their attention back to the beginning, tackling the question that was once on everyone’s lips: where did SARS-CoV-2 originate?
Outlining their evidence in two preprints, researchers assert an “unambiguous” origin in the Huanan market in Wuhan, spilling over not once, but twice into the human population and kicking off a global health crisis.
The paired papers, which have yet to undergo peer review and publication in a scientific journal, critically undermine the competing, and controversial, alternative origin story that involves a leak–intentional or otherwise–from a nearby Wuhan virology lab where scientists study coronaviruses.
The Huanan market was an immediate suspect when COVID first emerged in late 2019. Workers at the market were amongst the first individuals to present with the pneumonia that was quickly linked to a novel coronavirus, and Chinese officials, fearing a repeat of the 2002 SARS epidemic that killed 774 people, were quick to close the market down.
But by the time Chinese researchers descended on the Huanan market in 2020 to collect genetic samples, they found no wildlife present at all. Although they were able to detect traces of the virus in samples taken from surfaces and sewers in the market, the lack of direct evidence of infection in market animals sparked a debate over whether this truly was the epicenter of the outbreak. Alternative theories centered around the Wuhan Institute of Virology.
In the face of this absence of evidence, researchers working on the new reports turned to alternative information sources.
Using data pulled from the Chinese social media app Weibo, they were able to map the location of 737 COVID-positive Wuhan residents who turned to the app to seek health advice during the first three months of the outbreak.
Plotting the geographic concentrations of cases through time, the researchers clearly identified the market as the centre of origin, with the virus spreading radially through surrounding suburbs and across the city as time progressed. Through statistical analysis, the researchers showed that such a pattern was exceedingly unlikely to have arisen by mere chance.
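The logic of that kind of test can be sketched as a toy Monte Carlo simulation: how often would the same number of cases, scattered at random across the study area, sit as tightly around a candidate origin point as the observed cases do? (All coordinates below are invented for illustration; the actual study used real case locations and more sophisticated spatial statistics.)

```python
import math
import random

def mean_distance(points, centre):
    """Average distance from each case to the candidate origin."""
    return sum(math.dist(p, centre) for p in points) / len(points)

random.seed(0)
centre = (0.0, 0.0)  # hypothetical origin (e.g. the market)

# Invented "observed" cases, clustered tightly around the centre
observed = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
obs_stat = mean_distance(observed, centre)

# Null model: the same number of cases scattered uniformly
# across a 20 x 20 km study area
trials = 2000
hits = 0
for _ in range(trials):
    null = [(random.uniform(-10, 10), random.uniform(-10, 10))
            for _ in range(50)]
    if mean_distance(null, centre) <= obs_stat:
        hits += 1

# Fraction of random scatters at least as concentrated as observed
p_value = hits / trials
print(p_value)  # tiny: random scatter almost never clusters this tightly
```

A small p-value here says only that the clustering is non-random around that point; as the article notes, it cannot by itself distinguish *how* the virus arrived there.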
However, the pattern alone was open to interpretation, with questions remaining about pathways of introduction to the market–was the virus carried in inside a caged animal, on the coat of an unwitting scientist, or via some as-yet unidentified vector?
To dig further into the mystery, the researchers looked at the genetic samples obtained from market surfaces in January 2020 by Chinese scientists, tracing the locations of individual positive samples to their exact location within the market complex.
This second map revealed a strong concentration of positive samples in one corner of the market, a sector that had been previously documented to house a range of wild mammals that are considered potential coronavirus hosts.
Finally, the researchers created an evolutionary family tree of the earliest coronavirus lineages that emerged in the first few panicked weeks of the pandemic.
Even in its very earliest stages SARS-CoV-2 was a variable beast, with evidence of two distinct lineages, dubbed A and B. Looking closely at the mutations that separate the two, the researchers found something surprising–rather than one descending from the other, it appears that they had separate origins and entries into the human population, with lineage B making the leap in late November and lineage A following suit shortly afterwards.
Initial studies of the Huanan market genetic samples found only lineage B, but this latest investigation detected the presence of lineage A in people who lived in close proximity to the market–a finding corroborated by a recent Chinese study that identified lineage A on a single glove collected from the market during the initial shutdown.
Questions remain about the identity of the intermediary animal host species. But by narrowing research focus to the most likely centre of origin, this research will significantly aid efforts to understand the process that saw COVID-19 enter the world, and hopefully help avert future pandemics.
Fake Viral Footage Is Spreading alongside the Real Horror in Ukraine—Here Are 5 Ways to Spot It
Manipulated or Falsified Videos and Images Can Spread Quickly—but There Are Strategies You Can Take to Evaluate Them.
By TJ Thompson, Daniel Angus and Paul Dootson
Amid the alarming images of Russia’s invasion of Ukraine over the past few days, millions of people have also seen misleading, manipulated or false information about the conflict on social media platforms such as Facebook, Twitter, TikTok and Telegram.
One example is this video of military jets posted to TikTok, which is historical footage but captioned as live video of the situation in Ukraine.
Visuals, because of their persuasive potential and attention-grabbing nature, are an especially potent choice for those seeking to mislead. Where creating, editing or sharing inauthentic visual content isn’t satire or art, it is usually politically or economically motivated.
Disinformation campaigns aim to distract, confuse, manipulate and sow division, discord, and uncertainty in the community. This is a common strategy for highly polarized nations where socioeconomic inequalities, disenfranchisement and propaganda are prevalent.
How is this fake content created and spread, what’s being done to debunk it, and how can you ensure you don’t fall for it yourself?
What Are the Most Common Fakery Techniques?
Using an existing photo or video and claiming it came from a different time or place is one of the most common forms of misinformation in this context. This requires no special software or technical skills—just a willingness to upload an old video of a missile attack or other arresting image, and describe it as new footage.
Another low-tech option is to stage or pose actions or events and present them as reality. This was the case with destroyed vehicles that Russia claimed were bombed by Ukraine.
Using a particular lens or vantage point can also change how the scene looks and can be used to deceive. A tight shot of people, for example, can make it hard to gauge how many were in a crowd, compared with an aerial shot.
Taking things further still, Photoshop or equivalent software can be used to add or remove people or objects from a scene, or to crop elements out of a photograph. An example of object addition is the below photograph, which purports to show construction machinery outside a kindergarten in eastern Ukraine. The satirical text accompanying the image jokes about the “calibre of the construction machinery”—the author suggesting that reports of damage to buildings from military ordnance are exaggerated or untrue.
Close inspection reveals this image was digitally altered to include the machinery. This tweet could be seen as an attempt to downplay the extent of damage resulting from a Russian-backed missile attack, and in a wider context to create confusion and doubt as to veracity of other images emerging from the conflict zone.
What’s Being Done to Debunk It?

Journalists and fact-checkers are working to verify content and raise awareness of known fakes. Large, well-resourced news outlets such as the BBC are also calling out misinformation.
Social media platforms have added new labels to identify state-run media organisations or provide more background information about sources or people in your networks who have also shared a particular story.
They have also tweaked their algorithms to change what content is amplified and have hired staff to spot and flag misleading content. Platforms are also doing some work behind the scenes to detect and publicly share information on state-linked information operations.
What Can I Do about It?
You can attempt to fact-check images for yourself rather than taking them at face value. An article we wrote late last year for the Australian Associated Press explains the fact-checking process at each stage: image creation, editing and distribution.
Here are five simple steps you can take:
Examine the metadata
This Telegram post claims Polish-speaking saboteurs attacked a sewage facility in an attempt to place a tank of chlorine for a “false flag” attack.
But the video’s metadata—the details about how and when the video was created—show it was filmed days before the alleged date of the incident.
To check metadata for yourself, you can download the file and use software such as Adobe Photoshop or Bridge to examine it. Online metadata viewers also exist that allow you to check by using the image’s web link.
One hurdle to this approach is that social media platforms such as Facebook and Twitter often strip the metadata from photos and videos when they are uploaded to their sites. In these cases, you can try requesting the original file or consulting fact-checking websites to see whether they have already verified or debunked the footage in question.
Do a reverse image search

If old content has been recycled and repurposed, you may be able to find the same footage used elsewhere. You can use Google Images or TinEye to “reverse image search” a picture and see where else it appears online.
But be aware that simple edits such as reversing the left-right orientation of an image can fool search engines and make them think the flipped image is new.
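The reason a flip defeats naive matching is that it rearranges the pixels, and with them any simple fingerprint computed from the image. A minimal sketch, using a toy 4×4 grayscale “image” as nested lists and a basic average-hash (real reverse-image-search systems use far more robust, sometimes flip-aware, fingerprints):

```python
# Toy demonstration: a horizontal flip changes a naive average-hash,
# so a fingerprint lookup would treat the mirrored copy as new.
# The "image" is just nested lists of 0-255 grayscale values.

def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # One bit per pixel: brighter or darker than the mean
    return tuple(1 if p > avg else 0 for p in flat)

def flip_horizontal(pixels):
    return [list(reversed(row)) for row in pixels]

image = [
    [10, 40, 200, 220],
    [12, 35, 190, 210],
    [15, 30, 180, 205],
    [11, 38, 195, 215],
]

original = average_hash(image)
flipped = average_hash(flip_horizontal(image))

print(original != flipped)  # True: the mirrored image hashes differently
```

This is why fact-checkers often try both orientations (and crops) when a straightforward reverse search comes up empty.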
Look for inconsistencies
Does the purported time of day match the direction of light you would expect at that time, for example? Do watches or clocks visible in the image correspond to the alleged timeline claimed?
You can also compare other data points, such as politicians’ schedules or verified sightings, Google Earth vision or Google Maps imagery, to try and triangulate claims and see whether the details are consistent.
Ask yourself some simple questions
Do you know where, when and why the photo or video was made? Do you know who made it, and whether what you’re looking at is the original version?
Using online tools such as InVID or Forensically can potentially help answer some of these questions. Or you might like to refer to this list of 20 questions you can use to “interrogate” social media footage with the right level of healthy skepticism.
Ultimately, if you’re in doubt, don’t share or repeat claims that haven’t been published by a reputable source such as an international news organization. And consider using some of these principles when deciding which sources to trust.
By doing this, you can help limit the influence of misinformation, and help clarify the true situation in Ukraine.
by Avid Mohammadi, Sareh Bagherichimeh, Yoojin Choi, Azadeh Fazel, Elizabeth Tevlin, Sanja Huibner, Zhongtian Shao, David Zuanazzi, Jessica L. Prodger, Sara V. Good, Wangari Tharao & Rupert Kaul
Summary: In heterosexual men, the penis is the primary site of Human Immunodeficiency Virus (HIV) acquisition. Levels of inflammatory cytokines in the coronal sulcus are associated with an increased HIV risk, and we hypothesized that these may be altered after insertive penile sex. Therefore, we designed the Sex, Couples and Science Study (SECS study) to define the impact of penile–vaginal sex on the penile immune correlates of HIV susceptibility. We found that multiple coronal sulcus cytokines increased dramatically and rapidly after sex, regardless of condom use, with a return to baseline levels by 72 hours. The changes observed after condomless sex were strongly predicted by cytokine concentrations in the vaginal secretions of the female partner, and were similar in circumcised and uncircumcised men. We believe that these findings have important implications for understanding the immunopathogenesis of penile HIV acquisition; in addition, they have important implications for the design of clinical studies of penile HIV acquisition and prevention.
by Quang Vinh Phan, Boris Bogdanow, Emanuel Wyler, Markus Landthaler, Fan Liu, Christian Hagemeier & Lüder Wiebusch
Summary: Human cytomegalovirus (HCMV) infection is associated with systemic disease in immunocompromised individuals and congenitally infected neonates. Animal CMVs and their bacterial artificial chromosome (BAC) clones have been utilized as models for CMV infection and have thereby contributed immensely to the understanding of pathogenesis, host immune response and the underlying molecular mechanisms of CMV infections. As the closest relative of HCMV, the chimpanzee CMV (CCMV) holds great potential as a model system for HCMV infection, but its application has been limited by the lack of tools and data for functional genomic analyses. Here, the cloning of CCMV as a BAC vector made its viral genome available to gene-targeting techniques that allow the efficient application of reverse genetic strategies. Furthermore, the multi-omic datasets created in this study provide an in-depth view of the viral gene repertoire and the host cell responses to infection, confirming the close phylogenetic relationship between HCMV and CCMV at a system level. Taken together, the newly established CCMV–BAC system presents a framework for HCMV modeling and comparative studies to address key questions about evolutionary processes and infection mechanisms.
by Dan Wang, Xinxin Zhang, Liwen Yin, Qi Liu, Zhaoli Yu, Congjuan Xu, Zhenzhen Ma, Yushan Xia, Jing Shi, Yuehua Gong, Fang Bai, Zhihui Cheng, Weihui Wu, Jinzhong Lin & Yongxin Jin
Summary: Ribosomes provide all living organisms the capacity to synthesize proteins. The production of many ribosomal proteins is often controlled by an autoregulatory feedback mechanism. Pseudomonas aeruginosa is an opportunistic human pathogen, and its type III secretion system (T3SS) is a critical virulence determinant in host infections. In this study, by screening a Tn5 mutant library, we identified rplI, encoding ribosomal large subunit protein L9, as a novel repressor of the T3SS. Exploring the regulatory mechanism further, we found that the RplI protein interacts with the 5’ UTR (5’ untranslated region) of exsA, a gene coding for the transcriptional activator of the T3SS. Such an interaction likely blocks ribosome loading on the exsA 5’ UTR, inhibiting the initiation of exsA translation. The significance of this work lies in the identification of a novel repressor of the T3SS and the elucidation of its molecular mechanism. Furthermore, this work provides evidence for an individual ribosomal protein regulating mRNA translation beyond its autogenous feedback control.
by Patrick Günther, Dennis Quentin, Shehryar Ahmad, Kartik Sachar, Christos Gatsogiannis, John C. Whitney & Stefan Raunser
Summary: Bacteria have developed a variety of strategies to compete for nutrients and limited resources. One system widely used by Gram-negative bacteria is the T6 secretion system, which delivers a plethora of effectors into competing bacterial cells. Known functions of effectors include degradation of the cell wall, depletion of essential metabolites such as NAD+, and cleavage of DNA. RhsA is an effector from the widespread plant-protecting bacterium Pseudomonas protegens. We found that RhsA forms a closed cocoon similar to that found in bacterial Tc toxins and metazoan teneurin proteins. The effector cleaves its own polypeptide chain into three pieces: the N-terminal domain including a seal, the cocoon, and the actual toxic component, which potentially cleaves DNA. The toxic component is encapsulated in the large cocoon, so that the effector-producing bacterium is protected from the toxin. In order for the toxin to exit the cocoon, we propose that the seal, which closes the cocoon at one end, is removed by mechanical forces during injection of the effector by the T6 secretion system. We further hypothesize about different scenarios for the delivery of the toxin into the cytoplasm of the host cell. Together, our findings expand the knowledge of the mechanism of action of the T6 secretion system and its essential role in interbacterial competition.
by Catarina E. Hioe, Guangming Li, Xiaomei Liu, Ourania Tsahouridis, Xiuting He, Masaya Funaki, Jéromine Klingler, Alex F. Tang, Roya Feyznezhad, Daniel W. Heindel, Xiao-Hong Wang, David A. Spencer, Guangnan Hu, Namita Satija, Jérémie Prévost, Andrés Finzi, Ann J. Hessell, Shixia Wang, Shan Lu, Benjamin K. Chen, Susan Zolla-Pazner, Chitra Upadhyay, Raymond Alvarez & Lishan Su
Summary: In the past decade, HIV-1 has infected an estimated 1.5 to 2 million people every year, yet the vaccines needed to control this pandemic remain unavailable. Among vaccines tested in human efficacy trials, the RV144 vaccine regimen showed modest efficacy and revealed non-neutralizing antibodies against the virus envelope glycoproteins as a correlate of reduced virus acquisition. To design more efficacious HIV-1 vaccines, a better understanding of the antiviral mechanisms of these antibodies is needed. Here, non-neutralizing monoclonal antibodies against two immunogenic sites on the virus envelope were evaluated by passive administration to humanized mice that were subsequently challenged with HIV-1. The antibodies did not block mucosal HIV-1 infection but reduced virus burden. The level of virus reduction correlated with antibody binding potency and with the effector functions mediated through their Fc fragments, which included antibody-dependent phagocytosis and complement activation, but not the commonly studied antibody-dependent cellular cytotoxicity. The importance of the Fc functions was further demonstrated by reduced virus control when mutations were introduced to decrease Fc activities. This study provides new evidence for the important contribution of multiple Fc-dependent antibody functions to immune control of HIV-1.
by Evan John, Silke Jacques, Huyen T. T. Phan, Lifang Liu, Danilo Pereira, Daniel Croll, Karam B. Singh, Richard P. Oliver & Kar-Chun Tan
Summary: Breeding for durable resistance to fungal diseases in crops is a continual challenge for crop breeders. Fungal pathogens evolve to overcome host resistance by masking themselves through effector evolution and by evading broad-spectrum defense responses. Association studies on mapping populations infected by isolate mixtures are often used by researchers to seek out novel sources of genetic resistance. Disease resistance quantitative trait loci (QTL) are often minor or inconsistent across environments. This is a particular problem with septoria diseases of cereals, such as septoria nodorum blotch (SNB) of wheat caused by Parastagonospora nodorum. The fungus uses a suite of necrotrophic effectors (NEs) to cause SNB. We characterized a genetic element, called PE401, in the promoter of the major NE gene Tox1, which is present in some P. nodorum isolates. PE401 functions as a transcriptional repressor of Tox1 and exerts epistatic control over another major SNB resistance QTL in the host. In the context of crop protection, constant surveillance of the pathogen population for the frequency of PE401, in conjunction with NE diversity, will enable agronomists to advise growers on which wheat varieties can be tailored to provide optimal SNB resistance against regional pathogen population genotypes.
by Rommel J. Gestuveo, Rhys Parry, Laura B. Dickson, Sebastian Lequime, Vattipally B. Sreenu, Matthew J. Arnold, Alexander A. Khromykh, Esther Schnettler, Louis Lambrechts, Margus Varjak & Alain Kohl
Summary: Aedes aegypti mosquitoes that transmit human-pathogenic viruses rely on the exogenous small interfering RNA (exo-siRNA) pathway as part of their antiviral responses. This pathway is triggered by virus-derived double-stranded RNA (dsRNA) produced during viral replication, which is cleaved by Dicer 2 (Dcr2) into virus-derived small interfering RNAs (vsiRNAs). These vsiRNAs target viral RNA, leading to suppression of viral replication. The importance of Dcr2 in this pathway has been intensely studied in the Drosophila melanogaster model but remains largely unexplored in mosquitoes. Here, we have identified conserved and functionally relevant amino acids in the helicase and RNase III domains of Ae. aegypti Dcr2 that are important for its silencing activity and antiviral responses against Semliki Forest virus (SFV). Small RNA sequencing of SFV-infected mosquito cells with functional or mutated Dcr2 gave new insights into the nature and origin of vsiRNAs. The findings of this study, together with the molecular tools we have previously developed to investigate the exo-siRNA pathway of mosquito cells, have started to uncover important properties of Dcr2 that could be valuable in understanding mosquito-arbovirus interactions and potentially in developing or assisting vector control strategies.
by Kwok-ho Lam, Jacqueline M. Tremblay, Kay Perry, Konstantin Ichtchenko, Charles B. Shoemaker & Rongsheng Jin
Summary: Botulinum neurotoxins (BoNTs) are extremely toxic to humans, causing the flaccid paralysis of botulism. The catalytic light chain (LC) is the warhead of the toxin and is mainly responsible for BoNT’s neurotoxic effects. As an endopeptidase, the LC is delivered by the toxin into neurons, where it specifically cleaves neuronal SNARE proteins and causes muscle paralysis. While the currently available equine and human antitoxin sera can prevent further intoxication, they do not promote recovery from paralysis that has already occurred. We strive to develop single-domain variable heavy-chain (VHH) antibodies targeting the LC of BoNT/A (LC/A) and BoNT/B (LC/B) as antidotes to inhibit or eliminate the intraneuronal LC protease. Here, we report the identification and characterization of large panels of new and unique VHHs that bind to LC/A or LC/B. Using a combination of X-ray crystallography and biochemical assays, we reveal that VHHs exploit diverse mechanisms to interact with LC/A and LC/B and inhibit their protease activity, and that this knowledge can be harnessed to predict their specificity towards different toxin subtypes within each serotype. We anticipate that the new VHHs and their characterization reported here will contribute to the development of improved botulism therapeutics with high potency and broad specificity.
by Clinton O. Ogega, Nicole E. Skinner, Andrew I. Flyak, Kaitlyn E. Clark, Nathan L. Board, Pamela J. Bjorkman, James E. Crowe Jr., Andrea L. Cox, Stuart C. Ray & Justin R. Bailey
Summary: Antiviral immunity relies on the production of protective immunoglobulin G (IgG) by B cells, but many hepatitis C virus (HCV)-infected individuals have very low levels of HCV-specific IgG in their serum. Elucidating the mechanisms underlying this suboptimal IgG expression is paramount for guiding therapeutic and vaccine strategies. In this study, we developed a highly specific method to capture HCV-specific B cells and characterized their surface protein expression. Two of the proteins analyzed were Fc receptor-like protein 5 (FCRL5), a cell surface receptor for IgG, and programmed cell death protein-1 (PD-1), a marker of lymphocyte activation and exhaustion. We measured serum levels of anti-HCV IgG in these subjects and demonstrated that overexpression of FCRL5 and PD-1 on memory B cells was associated with reduced anti-E2 IgG levels. This study uses HCV as a viral model, but the findings may be applicable to many viral infections, and they offer new potential targets to enhance antiviral IgG production.
(from the Forest Policy Info Mailing List and IUFRO WFSE)
By Dr. Pia Katila
Forests provide vital ecosystem services crucial to human well-being and sustainable development, and have an important role to play in achieving the seventeen Sustainable Development Goals (SDGs) of the United Nations 2030 Agenda. Little attention, however, has yet been paid to how efforts to achieve the SDGs will impact forests and forest-related livelihoods, and how these impacts may, in turn, enhance or undermine the contributions of forests to climate and development. Understanding the potential impacts of the SDGs on forests and forest-related livelihoods and development, as well as the related trade-offs and synergies, is crucial for the efforts undertaken to reach these goals. It is especially important for reducing potential negative impacts and for leveraging opportunities to create the synergies that will ultimately determine whether comprehensive progress towards the SDGs can be made.
This book discusses the conditions that influence how SDGs are implemented and prioritized, and provides a systematic, multidisciplinary global assessment of interlinkages among the SDGs and their targets, increasing understanding of potential synergies and unavoidable trade-offs between goals from the point of view of forests and people. Ideal for academic researchers, students and decision-makers interested in sustainable development in the context of forests, this book will provide invaluable knowledge for efforts to reach the SDGs.
The assessment was undertaken by the International Union of Forest Research Organizations’ Special Project World Forests, Society and Environment (IUFRO WFSE). It involved 120 scientists and experts from 60 different universities and research and development institutions, as well as 38 scientists who acted as peer reviewers of the individual SDG chapters. The development and publication of the book and policy brief were made possible by the financial contributions of the Ministry for Foreign Affairs of Finland and the Natural Resources Institute Finland.
The Roadmap to Ocean and Climate Action (ROCA) Initiative is pleased to share the third annual Report on Assessing Progress on Oceans and Climate Action 2019, which provides a summary of major developments in ocean and climate science, policy, and action in 2019. The report has been written in collaboration with 47 contributing authors writing in their own personal capacities. The report reviews major developments taking place on each of the following major themes: New Scientific Findings on Oceans and Climate, Central Role of Nationally Determined Contributions, Mitigation, Adaptation, Low Carbon Blue Economy, Population Displacement, Financing, and Capacity Development. See the ROCA Report, Assessing Progress on Ocean and Climate Action: 2019 [archived PDF].
To find out more about the ROCA initiative and to find past ROCA reports, please visit the ROCA Initiative website.
The ROCA Progress Report 2019 will be presented at the Oceans Action Day taking place on December 6 and 7, 2019 at COP25 in Madrid, Spain. Please visit the Oceans Action Day at COP25 webpage for more information, including the official program and postcard.
Is Article 6 on the home stretch? In this issue [PDF] of the Carbon Mechanisms Review, we look at success factors for the negotiations, and analyze what is needed to make the Article 6 rulebook text ready for enabling up-scaled mitigation action. With regard to the ‘Latin American COP’ (taking place in Madrid), we cover the emerging carbon pricing landscape in the region while our cover feature reports on and analyses the current Article 6 pilot initiatives. An analysis of the latest CORSIA developments rounds off the issue.
Summary: Using airborne laser scanning, 400 million snow depth measurements have been collected at Hardangervidda in Southern Norway. This amount of data has made possible in-depth studies of the spatial distribution of snow and its interaction with terrain and vegetation. We find that terrain variability, expressed by the square slope, the average amount of snow, and whether the terrain is vegetated or not largely explain the variation in snow depth. With this information it is possible to develop equations that predict snow depth variability for use in environmental models, which in turn are used for important tasks such as flood forecasting and hydropower planning. One major advantage is that these equations can be determined from data that are, in principle, available everywhere, provided a detailed digital model of the terrain exists.
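The approach described above can be sketched as a simple linear regression of snow-depth variability on terrain and vegetation predictors. Everything below (variable names, coefficients, synthetic data) is an invented illustration of the idea, not the study's actual equations or data:

```python
import numpy as np

# Hypothetical illustration: regress snow-depth variability (standard
# deviation) on square slope, mean snow depth, and a vegetation flag.
# The "true" coefficients below are made up for demonstration only.
rng = np.random.default_rng(0)
n = 200
square_slope = rng.uniform(0.0, 0.5, n)          # terrain variability proxy
mean_depth = rng.uniform(0.2, 3.0, n)            # average snow depth (m)
vegetated = rng.integers(0, 2, n).astype(float)  # 0 = bare, 1 = vegetated

# Synthetic observations generated from an assumed linear relation + noise
sigma_depth = 0.3 + 1.2 * square_slope + 0.25 * mean_depth - 0.15 * vegetated
sigma_depth += rng.normal(0.0, 0.02, n)

# Fit the predictive equation by ordinary least squares
X = np.column_stack([np.ones(n), square_slope, mean_depth, vegetated])
coef, *_ = np.linalg.lstsq(X, sigma_depth, rcond=None)
print(coef)  # recovered coefficients, close to the assumed true values
```

Because the predictors can be derived from a digital terrain model, such a fitted equation could in principle be applied anywhere that elevation and vegetation data exist, which is the advantage the summary points to.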
by Christine L. Dolph, Evelyn Boardman, Mohammad Danesh-Yazdi, Jacques C. Finlay, Amy T. Hansen, Anna C. Baker & Brent Dalzell
Abstract: When phosphorus from farm fertilizer, eroded soil, and septic waste enters our water, it leads to problems like toxic algae blooms, fish kills, and contaminated drinking supplies. In this study, we examine how phosphorus travels through streams and rivers of farmed areas. In the past, soil lost from farm fields was considered the biggest contributor to phosphorus pollution in agricultural areas, but our study shows that phosphorus originating from fertilizer stores in the soil and from crop residue, as well as from soil eroded from sensitive ravines and bluffs, contributes strongly to the total amount of phosphorus pollution in agricultural rivers. We also found that most phosphorus leaves farmed watersheds during the very highest river flows. Increased frequency of large storms due to climate chaos will therefore likely worsen water quality in areas that are heavily loaded with phosphorus from farm fertilizers. Protecting water in agricultural watersheds will require knowledge of the local landscape along with strategies to address (1) drivers of climate chaos, (2) reduction in the highest river flows, and (3) ongoing inputs and legacy stores of phosphorus that are readily transported across land and water.
by Matteo Giuliani, Marta Zaniolo, Andrea Castelletti, Guido Davoli & Paul Block
Abstract: Increasingly variable hydrologic regimes combined with more frequent and intense extreme events are challenging water systems management worldwide. These trends emphasize the need for accurate medium- to long-term predictions to prompt timely anticipatory operations. Although in some locations global climate oscillations, particularly the El Niño-Southern Oscillation (ENSO), may contribute to extending forecast lead times, in other regions there is no consensus on how ENSO can be detected and used, as local conditions are also influenced by other concurrent climate signals. In this work, we introduce the Climate State Intelligence framework to capture the state of multiple global climate signals via artificial intelligence and improve seasonal forecasts. These forecasts are used as additional inputs for informing water system operations, and their value is quantified as the corresponding gain in system performance. We apply the framework to the Lake Como basin, a regulated lake in northern Italy operated mainly for flood control and irrigation supply. Numerical results show the existence of notable teleconnection patterns, dependent on both ENSO and the North Atlantic Oscillation, over the Alpine region, which contribute to generating skillful seasonal precipitation and hydrologic forecasts. Using this information to condition the lake operations produces an average 44% improvement in system performance with respect to a baseline solution not informed by any forecast, a gain that increases further during extreme drought episodes. Our results also suggest that observed preseason sea surface temperature anomalies appear more valuable than hydrologic-based seasonal forecasts, producing an average 59% improvement in system performance.
by Daniel J. Short Gianotti, Guido D. Salvucci, Ruzbeh Akbar, Kaighin A. McColl, Richard Cuenca & Dara Entekhabi
Abstract: Surface soil moisture measurements are typically correlated to some degree with changes in subsurface soil moisture. We calculate a hydrologic length scale, λ, which represents (1) the mean-state estimator of total column water changes from surface observations, (2) an e-folding length scale for subsurface soil moisture profile covariance fall-off, and (3) the best second-moment mass-conserving surface layer thickness for a simple bucket model, defined by the data streams of satellite soil moisture and precipitation retrievals. Calculations are simple, based on three variables: the autocorrelation and variance of surface soil moisture and the variance of the net flux into the column (precipitation minus estimated losses), which can be estimated directly from the soil moisture and precipitation time series. We develop a method to calculate the lag-one autocorrelation for irregularly observed time series and show global surface soil moisture autocorrelation. λ is driven in part by local hydroclimate conditions and is generally larger than the 50-mm nominal radiometric length scale for the soil moisture retrievals, suggesting broad subsurface correlation due to moisture drainage. In all but the most arid regions, radiometric soil moisture retrievals provide more information about ecosystem-relevant water fluxes than satellite radiometers can explicitly “see”; lower-frequency radiometers are expected to provide still more statistical information about subsurface water dynamics.
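One simple way to estimate lag-one autocorrelation from an irregularly observed series, such as satellite overpasses with jittered revisit times, is to correlate only those observation pairs whose spacing is close to the nominal revisit interval. This is a hedged sketch of that general idea, not the authors' estimator; the function `lag_one_autocorr`, its tolerance scheme, and the synthetic data are our assumptions:

```python
import numpy as np

def lag_one_autocorr(times, values, nominal_dt, tol=0.25):
    """Estimate lag-one autocorrelation by pairing observations whose
    spacing t_j - t_i is within tol * nominal_dt of nominal_dt."""
    times = np.asarray(times, dtype=float)
    x = np.asarray(values, dtype=float) - np.mean(values)
    pairs_a, pairs_b = [], []
    for i in range(len(times)):
        dt = times - times[i]
        for j in np.where(np.abs(dt - nominal_dt) <= tol * nominal_dt)[0]:
            pairs_a.append(x[i])
            pairs_b.append(x[j])
    a, b = np.array(pairs_a), np.array(pairs_b)
    return np.sum(a * b) / (len(a) * np.var(x))

# Synthetic AR(1)-like series sampled at irregular (~1-day) intervals
rng = np.random.default_rng(1)
t = np.cumsum(rng.uniform(0.8, 1.2, 2000))
x = np.zeros_like(t)
for i in range(1, len(t)):
    x[i] = 0.7 * x[i - 1] + rng.normal()

rho = lag_one_autocorr(t, x, nominal_dt=1.0)
print(rho)  # close to the true lag-one coefficient of 0.7
```

For an AR(1)-type process, the recovered autocorrelation together with the variance terms mentioned in the abstract is the kind of input from which a length scale like λ could be derived.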
by Jordan S. Read, Xiaowei Jia, Jared Willard, Alison P. Appling, Jacob A. Zwart, Samantha K. Oliver, Anuj Karpatne, Gretchen J. A. Hansen, Paul C. Hanson, William Watkins, Michael Steinbach & Vipin Kumar
Abstract: The rapid growth of data in water resources has created new opportunities to accelerate knowledge discovery with the use of advanced deep learning tools. Hybrid models that integrate theory with state-of-the-art empirical techniques have the potential to improve predictions while remaining true to physical laws. This paper evaluates the Process-Guided Deep Learning (PGDL) hybrid modeling framework with a use-case of predicting depth-specific lake water temperatures. The PGDL model has three primary components: a deep learning model with temporal awareness (long short-term memory recurrence), theory-based feedback (model penalties for violating conservation of energy), and model pre-training to initialize the network with synthetic data (water temperature predictions from a process-based model). In situ water temperatures were used to train the PGDL model, a deep learning (DL) model, and a process-based (PB) model. Model performance was evaluated in various conditions, including when training data were sparse and when predictions were made outside of the range in the training data set. The PGDL model performance (as measured by root-mean-square error (RMSE)) was superior to DL and PB for two detailed study lakes, but only when pretraining data included greater variability than the training period. The PGDL model also performed well when extended to 68 lakes, with a median RMSE of 1.65 °C during the test period (DL: 1.78 °C, PB: 2.03 °C; in a small number of lakes PB or DL models were more accurate). This case study demonstrates that integrating scientific knowledge into deep learning tools shows promise for improving predictions of many important environmental variables.
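The "theory-based feedback" component amounts to adding a penalty term to the training loss whenever predictions violate conservation of energy. The sketch below shows that idea in its simplest form; the function names and the simplified heat-balance relation are our illustration, not the actual PGDL code:

```python
import numpy as np

# Conceptual sketch: total loss = supervised error + a penalty for
# predictions whose implied change in stored heat disagrees with the
# net surface heat flux. All names and forms here are assumptions.

def supervised_loss(pred_temp, obs_temp):
    return np.mean((pred_temp - obs_temp) ** 2)

def energy_penalty(pred_temp, net_heat_flux, heat_capacity, dt=1.0):
    # Change in stored heat per step should match the net flux in.
    stored = heat_capacity * np.diff(pred_temp) / dt
    return np.mean((stored - net_heat_flux[:-1]) ** 2)

def pgdl_loss(pred_temp, obs_temp, net_heat_flux, heat_capacity, lam=0.1):
    return (supervised_loss(pred_temp, obs_temp)
            + lam * energy_penalty(pred_temp, net_heat_flux, heat_capacity))

# Toy check: a prediction consistent with both the observations and the
# fluxes incurs zero loss (heat_capacity * 0.5 degC/step == flux of 2.0)
flux = np.array([2.0, 2.0, 2.0, 2.0])
temp = np.array([10.0, 10.5, 11.0, 11.5])
loss = pgdl_loss(temp, temp, flux, heat_capacity=4.0)
print(loss)  # 0.0
```

In the real framework this penalty would be differentiated through during LSTM training; the point of the sketch is only that physical consistency enters as an extra loss term weighted by a coefficient (`lam` here).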
by Qiang Dai, Qiqi Yang, Dawei Han, Miguel A. Rico-Ramirez & Shuliang Zhang
Abstract: Radar-gauge rainfall discrepancies are usually attributed to errors in radar rainfall measurements, ignoring the fact that radar observes rain aloft while a rain gauge measures rainfall on the ground. Conventional radar rainfall estimation assumes that raindrops observed aloft fall vertically to the ground without changing in size. This premise obviously does not hold, because raindrop location changes due to wind drift and raindrop size changes due to evaporation; however, both effects are usually ignored. This study proposes a fully formulated scheme to numerically simulate both raindrop drift and evaporation in the air and thereby reduce the uncertainties of radar rainfall estimation. The Weather Research and Forecasting model is used to simulate high-resolution three-dimensional atmospheric fields. A dual-polarization radar retrieves the raindrop size distribution for each radar pixel. Three schemes are designed and implemented using the Hameldon Hill radar in Lancashire, England. The first considers only raindrop drift, the second only evaporation, and the last considers both. Results show that wind advection can cause a large drift for small raindrops, and considerable loss of rainfall is observed due to raindrop evaporation. Overall, the three schemes improve the radar-gauge correlation by 3.2%, 2.9%, and 3.8% and reduce the discrepancy by 17.9%, 8.6%, and 21.7%, respectively, over eight selected events. This study contributes to the improvement of quantitative precipitation estimation from radar polarimetry and allows a better understanding of precipitation processes.
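As a back-of-envelope illustration of why small raindrops drift much farther than large ones, one can combine an empirical terminal-velocity fit with a constant horizontal wind. This is our simplification for intuition only, not the paper's WRF-based scheme, and the beam height, drop sizes, and wind speed are made-up inputs:

```python
import numpy as np

def drift_distance(beam_height_m, drop_diameter_mm, wind_speed_ms):
    """Horizontal drift of a drop falling from radar beam height,
    assuming it falls at terminal velocity and moves with the wind."""
    # Atlas-type empirical terminal velocity fit for rain (m/s, D in mm)
    v_t = 9.65 - 10.3 * np.exp(-0.6 * drop_diameter_mm)
    fall_time = beam_height_m / v_t
    return wind_speed_ms * fall_time

# From a 1 km beam height in a 10 m/s wind: a 0.5 mm drop drifts
# several kilometres, while a 4 mm drop drifts only about 1 km.
small = drift_distance(1000.0, 0.5, 10.0)
large = drift_distance(1000.0, 4.0, 10.0)
print(round(small), round(large))
```

Even this crude estimate shows why mapping the rain observed aloft straight down onto the gauge below can misplace light rain by kilometres, motivating the drift correction scheme.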
by K. Zhao, Z. Gong, F. Xu, Z. Zhou, C. K. Zhang, G. M. E. Perillo & G. Coco
Abstract: We develop a process-based model to simulate the geomorphodynamic evolution of tidal channels, considering hydrodynamics, flow-induced bank erosion, gravity-induced bank collapse, and sediment dynamics. A stress-deformation analysis and the Mohr-Coulomb criterion, calibrated through previous laboratory experiments, are included in a model simulating bank collapse. Results show that collapsed bank soil plays a primary role in the dynamics of bank retreat. For bank collapse with small bank height, tensile failure in the middle of the bank (Stage I), tensile failure on the bank top (Stage II), and sectional cracking from bank top to toe (Stage III) occur sequentially before bank collapse. A significant linear relation is observed between bank height and the contribution of bank collapse to bank retreat. In contrast to flow-induced bank erosion, bank collapse prevents further widening, since the collapsed bank soil protects the bank from direct erosion. The bank profile is linear or slightly convex, and the planimetric shape of tidal channels (gradually decreasing in width landward) is similar when approaching equilibrium, regardless of the consideration of bank erosion and collapse. Moreover, the simulated width-to-depth ratio in all runs is comparable with observations from the Venice Lagoon. This indicates that the equilibrium configuration of tidal channels depends on hydrodynamic conditions and sediment properties, while bank erosion and collapse greatly affect the transient behavior (before equilibrium) of the tidal channels. Overall, this contribution highlights the importance of collapsed bank soil in investigating tidal channel morphodynamics using a combined perspective of geotechnics and soil mechanics.
by Yunquan Wang, Oliver Merlin, Gaofeng Zhu & Kun Zhang
Abstract: While numerous models exist for soil evaporation estimation, they are more or less empirically based, either in model structure or in the determination of the parameters they introduce. The main difficulty lies in representing the water stress factor, which is usually thought to be limited by capillarity-supported water supply or by the vapor diffusion flux. Recent progress in understanding soil hydraulic properties, however, has shown that film flow, which is often neglected, is the dominant process under low-moisture conditions. Including the impact of film flow, a reexamination of the typical evaporation process suggests that this usually neglected film flow may be the dominant process supporting Stage II evaporation (i.e., the fast falling-rate stage), besides the generally accepted capillary-flow-supported Stage I evaporation and the vapor-diffusion-controlled Stage III evaporation. A physically based model for estimating the evaporation rate was then developed by parameterizing the Buckingham-Darcy law. Interestingly, the empirical Bucket model was found to be a specific form of the proposed model. The proposed model requires the in-equilibrium relative humidity as the sole input for representing water stress and introduces no adjustable parameters related to soil texture. The impact of vapor diffusion is also discussed. Model testing with laboratory data yielded excellent agreement with observations for both thin-soil and thick-soil-column evaporation experiments. Model evaluation at 15 field sites generally showed close agreement with observations, with a great improvement in the lower range of evaporation rates in comparison with the widely applied Priestley-Taylor Jet Propulsion Laboratory model.
by Carmelo Juez, C. Schärer, H. Jenny, A. J. Schleiss & M. J. Franca
Abstract: Overbank sedimentation is predominantly due to fine sediments transported in suspension that become trapped and settle in floodplains when high-flow conditions occur in rivers. In a compound channel, the processes of exchanging water and fine sediments between the main channel and floodplains regulate the geomorphological evolution and are crucial for the maintenance of the ecosystem functions of the floodplains. These hydrodynamic and morphodynamic processes depend on variables such as the flow-depth ratio between the water depth in the main channel and the water depth in the floodplain, the width ratio between the width of the main channel and the width of the floodplain, and the floodplain land cover characterized by its type of roughness. This paper examines, by means of laboratory experiments, how these variables are interlinked and how they jointly determine the deposition of sediments in the compound channel. The combination of these compound channel characteristics modulates the production of vertical-axis large turbulent vortical structures in the mixing interface. Such vortical structures determine the water mass exchange between the main channel and the floodplain, conditioning in turn the transport of sediment particles conveyed in the water and, therefore, the resulting overbank sedimentation. The existence and pattern of sedimentation are conditioned by both the hydrodynamic variables (the flow-depth ratio and the width ratio) and the floodplain land cover, simulated in terms of smooth walls, meadow-type roughness, sparse-wood-type roughness, and dense-wood-type roughness.
by D. F. Gold, P. M. Reed, B. C. Trindade & G. W. Characklis
Summary: Cooperation among neighboring urban water utilities can help water managers face challenges stemming from climate change and population growth. Water utilities can cooperate by coordinating water transfers and water restrictions in times of water scarcity (drought) so that water is provided to the areas that need it most. In order to successfully implement these policies, however, cooperative partners must find a compromise that is acceptable to all regional actors, a task complicated by the asymmetries in resources and risks often present in regional systems. The possibility of deviations from agreed-upon actions is another complicating factor that has not been addressed in the water resources literature. Our study focuses on four urban water utilities in the Research Triangle region of North Carolina that are investigating cooperative drought mitigation strategies. We contribute a framework that uses simulation models, optimization algorithms, and statistical tools to aid cooperating partners in finding acceptable compromises that are tolerant of modest deviations from planned actions. Our results can be used by regional utilities to avoid or alleviate potential planning conflicts and are broadly applicable to urban regional water supply planning across the globe.
Abstract: Changes in river flow may arise from shifts in land cover, constructions in the river channel, and climatic change, but there is currently a lack of understanding of the relative importance of these drivers. We therefore collected gauged river flow time series from 1961 to 2018 from across Sweden for 34 disturbed catchments to quantify how the various types of disturbance have affected river flow. We used trend analysis and the differences between observations and hydrological modeling to explore the effects on river flow of (1) land cover changes from wildfires, storms, and urbanization; (2) dam constructions with regulations for hydropower production; and (3) climate-change impact in otherwise undisturbed catchments. A mini model ensemble, consisting of three versions of the S-HYPE model, was used, and the three models gave similar results. We searched for changes in annual and daily stream flow, seasonal flow regime, and flow duration curves. The results show that regulation of river flow has the largest impact, reducing spring floods by up to 100% and increasing winter flow by several orders of magnitude, with substantial effects transmitted far downstream. Climate change altered total river flow by up to 20%. Tree removal by wildfires and storms has minor impacts at medium and large scales. Urbanization, in contrast, increased high flows by 20% even at medium scales. This study emphasizes the benefits of combining observed time series with numerical modeling to exclude the effect of varying weather conditions when quantifying the effects of various drivers on long-term streamflow shifts.
by Matthew A. Thomas, Brian D. Collins & Benjamin B. Mirus
Summary: Soil wetness and rainfall contribute to landslides across the world. Using soil moisture sensors and rain gauges, these environmental conditions have been monitored at numerous points across the Earth’s surface to define threshold conditions, above which landsliding should be expected for a localized area. Satellite-based technologies also deliver estimates of soil wetness and rainfall, potentially offering an approach to develop thresholds as part of landslide warning systems over larger spatial scales. To evaluate the potential for using satellite-based measurements for landslide warning, we compare the accuracy of landslide thresholds defined with ground- versus satellite-based soil wetness and rainfall information. We find that the satellite-based data over-predict soil wetness during the time of year when landslides are most likely to occur, resulting in thresholds that also over-predict the potential for landslides relative to thresholds informed by direct measurements on the ground. Our results encourage the installation of more ground-based monitoring stations in landslide-prone settings and the cautious use of satellite-based data when more direct measurements are not available.
by Giuseppe Brunetti, Radka Kodešová & Jiří Šimůnek
Abstract: Food contamination is responsible for thousands of deaths worldwide every year. Plants represent the most common pathway for chemicals into the human and animal food chain. Although existing dynamic plant uptake models for chemicals are crucial for the development of reliable mitigation strategies for food pollution, they nevertheless simplify the description of physicochemical processes in soil and plants, mass transfer processes between soil and plants and within plants, and transformation in plants. To fill this scientific gap, we couple a widely used hydrological model (HYDRUS) with a multi-compartment dynamic plant uptake model, which accounts for differentiated multiple metabolization pathways in plant tissues. The developed model is validated first theoretically and then experimentally against measured data from an experiment on the translocation and transformation of carbamazepine in three vegetables. The analysis is further enriched by a global sensitivity analysis of the soil–plant model to identify the factors driving the compound’s accumulation in plant shoots, as well as to elucidate the role and importance of soil hydraulic properties in the plant uptake process. Results of the multilevel numerical analysis emphasize the model’s flexibility and demonstrate its ability to accurately reproduce the physicochemical processes involved in the dynamic plant uptake of chemicals from contaminated soils.
by Lee R. Harrison, Erin Bray, Brandon Overstreet, Carl J. Legleiter, Rocko A. Brown, Joseph E. Merz, Rosealea M. Bond, Colin L. Nicol & Thomas Dunne
Abstract: Large-scale river restoration programs have emerged recently as a tool for improving spawning habitat for native salmonids in highly altered river ecosystems. Few studies have quantified the extent to which restored habitat is utilized by salmonids, which habitat features influence redd site selection, or the persistence of restored habitat over time. We investigated fall-run Chinook salmon spawning site utilization and measured and modeled corresponding habitat characteristics in two restored reaches: a reach of channel and floodplain enhancement completed in 2013 and a reconfigured channel and floodplain constructed in 2002. Redd surveys demonstrated that both restoration projects supported a high density of salmon redds, 3 and 14 years following restoration. Salmon redds were constructed in coarse gravel substrates located in areas of high sediment mobility, as determined by measurements of gravel friction angles and a grain entrainment model. Salmon redds were located near transitions between pool-riffle bedforms in regions of high predicted hyporheic flows. Habitat quality (quantified as a function of stream hydraulics) and hyporheic flow were both strong predictors of redd occurrence, though the relative roles of these variables differed between sites. Our findings indicate that physical controls on redd site selection in restored channels were similar to those reported for natural channels elsewhere. Our results further highlight that in addition to traditional habitat criteria (e.g., water depth, velocity, and substrate size), quantifying sediment texture and mobility, as well as intragravel flow, provides a more complete understanding of the ecological benefits provided by river restoration projects.
by Katherine H. Markovich, Andrew H. Manning, Laura E. Condon & Jennifer C. McIntosh
Abstract: Mountain-block recharge (MBR) is the subsurface inflow of groundwater to lowland aquifers from adjacent mountains. MBR can be a major component of recharge but remains difficult to characterize and quantify due to limited hydrogeologic, climatic, and other data in the mountain block and at the mountain front. The number of MBR-related studies has increased dramatically in the 15 years since the last review of the topic was conducted by Wilson and Guan (2004), generating important advancements. We review this recent body of literature, summarize current understanding of factors controlling MBR, and provide recommendations for future research priorities. Prior to 2004, most MBR studies were performed in the southwestern United States. Since then, numerous studies have detected and quantified MBR in basins around the world, typically estimating MBR to be 5–50% of basin-fill aquifer recharge. Theoretical studies using generic numerical modeling domains have revealed fundamental hydrogeologic and topographic controls on the amount of MBR and where it originates within the mountain block. Several mountain-focused hydrogeologic studies have confirmed the widespread existence of mountain bedrock aquifers hosting considerable groundwater flow and, in some cases, identified the occurrence of interbasin flow leaving headwater catchments in the subsurface—both of which are required for MBR to occur. Future MBR research should focus on the collection of high-priority data (e.g., subsurface data near the mountain front and within the mountain block) and the development of sophisticated coupled models calibrated to multiple data types to best constrain MBR and predict how it may change in response to climate warming.
by Mohamed Hayek, Banda S. RamaRao & Marsh Lavenue
Abstract: This work presents an efficient mathematical/numerical model to compute the sensitivity coefficients of a predefined performance measure to model parameters for one-dimensional steady-state sequentially coupled radionuclide transport in a finite heterogeneous porous medium. The model is based on the adjoint sensitivity approach that offers an elegant and computationally efficient alternative way to compute the sensitivity coefficients. The transport parameters include the radionuclide retardation factors due to sorption, the Darcy velocity, and the effective diffusion/dispersion coefficients. Both continuous and discrete adjoint approaches are considered. The partial differential equations associated with the adjoint system are derived based on the adjoint state theory for coupled problems. Physical interpretations of the adjoint states are given in analogy to results obtained in the theory of groundwater flow. For the homogeneous case, analytical solutions for primary and adjoint systems are derived and presented in closed forms. Numerically calculated solutions are compared to the analytical results and show excellent agreement. Insights from sensitivity analysis are discussed to get a better understanding of the values of sensitivity coefficients. The sensitivity coefficients are also computed numerically by finite differences. The numerical sensitivity coefficients successfully reproduce the analytically derived sensitivities based on adjoint states. A derivative-based global sensitivity method coupled with the adjoint state method is presented and applied to a real field case represented by a site currently being considered for underground nuclear storage in Northern Switzerland, “Zürich Nordost,” to demonstrate the proposed method. The results show the advantage of the adjoint state method compared to other methods in terms of computational effort.
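The finite-difference check mentioned in the abstract can be illustrated with a stand-in model. The sketch below differentiates the steady-state solution of 1D advection-dispersion with first-order decay with respect to velocity; this toy column model and its parameter values are our own assumptions, not the paper's coupled radionuclide system or its adjoint formulation.

```python
# Central finite differences approximate the sensitivity of a model
# output to a transport parameter. Stand-in model: steady-state 1D
# advection-dispersion-decay on a semi-infinite column,
# D c'' - v c' - lam c = 0, c(0) = c0, c bounded as x -> infinity.
import math

def concentration(x, v, D, lam, c0=1.0):
    """c(x) = c0 * exp(r x), with r the negative root of
    D r^2 - v r - lam = 0 (the bounded solution)."""
    r = (v - math.sqrt(v * v + 4.0 * D * lam)) / (2.0 * D)
    return c0 * math.exp(r * x)

def sensitivity(x, v, D, lam, dv=1e-6):
    """Central finite-difference sensitivity d c(x) / d v."""
    return (concentration(x, v + dv, D, lam)
            - concentration(x, v - dv, D, lam)) / (2.0 * dv)
```

A higher velocity shortens residence time and reduces decay, so the sensitivity of downstream concentration to velocity is positive; an adjoint code would be expected to reproduce the same number at a fraction of the cost when many parameters are involved.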
by C. Ancey, E. Bardou, M. Funk, M. Huss, M. A. Werder & T. Trewhela
Summary: Every year, natural and man-made dams fail and cause flooding. For public authorities, estimating the risk posed by dams is essential to good risk management. Efficient computational tools are required for analyzing flood risk. Testing these tools is an important step toward ensuring their reliability and performance. Knowledge of major historical floods makes it possible, in principle, to benchmark models, but because historical data are often incomplete and fraught with potential inaccuracies, validation is seldom satisfactory. Here we present one of the few major historical floods for which information on flood initiation and propagation is available and detailed: the Giétro flood. This flood occurred in June 1818 and devastated the Drance Valley in Switzerland. In the spring of that year, ice avalanches blocked the valley floor and formed a glacial lake, whose volume is today estimated at 25×10⁶ m³. The local authorities initiated protection works: A tunnel was drilled through the ice dam, and about half of the stored water volume was drained in 2.5 days. On 16 June 1818, the dam failed suddenly because of significant erosion at its base; this caused a major flood. This paper presents a numerical model for estimating flow rates, velocities, and depths during the dam drainage and flood flow phases. The numerical results agree well with historical data. The flood reconstruction shows that relatively simple models can be used to estimate the effects of a major flood with good accuracy.
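A drainage phase like the one described, where outflow through a channel in the ice dam gradually lowers the lake, can be caricatured with a one-line water balance. The rating law, coefficient, head-volume relation, and time stepping below are our own illustrative assumptions; only the initial lake volume (25×10⁶ m³) comes from the abstract.

```python
# Toy lake-drainage balance: dV/dt = -c * h^1.5 (a weir-type rating),
# integrated explicitly, with head assumed to fall linearly with the
# remaining volume. Purely illustrative, not the paper's model.

def drain(volume, head0, c=0.5, dt=3600.0, steps=60):
    """Drain a lake of initial volume `volume` (m3) and initial head
    `head0` (m) for `steps` hourly time steps; returns remaining volume."""
    v = volume
    for _ in range(steps):
        h = head0 * v / volume          # crude head-volume relation
        q = c * max(h, 0.0) ** 1.5      # weir-type discharge (m3/s)
        v = max(v - q * dt, 0.0)
    return v

remaining = drain(volume=25e6, head0=60.0)
```

Even this caricature shows the qualitative behavior the historical record implies: discharge decays as the head drops, so the lake drains quickly at first and then ever more slowly.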
by Marialaura Bancheri, Francesco Serafin & Riccardo Rigon
Abstract: This work presents a new graphical system to represent hydrological dynamical models and their interactions. We propose an extended version of the Petri Nets mathematical modeling language, the Extended Petri Nets (EPN), which allows for an immediate translation from the graphics of the model to its mathematical representation in a clear way. We introduce the principal objects of the EPN representation (i.e., places, transitions, arcs, controllers, and splitters) and their use in hydrological systems. We show how to cast hydrological models in EPN and how to complete their mathematical description using a dictionary for the symbols and an expression table for the flux equations. Thanks to the compositional property of EPN, we show how it is possible to represent either a single hydrological response unit or a complex catchment where multiple systems of equations are solved simultaneously. Finally, EPN can be used to describe complex Earth system models that include feedback between the water, energy, and carbon budgets. The representation of hydrological dynamical systems with EPN provides a clear visualization of the relations and feedback between subsystems, which can be studied with techniques introduced in nonlinear systems theory and control theory.
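The core Petri-net idea, storages as places and fluxes as transitions wired by arcs, can be sketched in plain code. The two-bucket example and flux laws below are ours, not from the paper, which defines a much richer EPN vocabulary (controllers, splitters, symbol dictionaries, and expression tables).

```python
# Minimal Petri-net-flavored hydrological system: places hold storages,
# transitions carry fluxes between them. Values and laws are assumed.

places = {"root_zone": 50.0, "groundwater": 120.0}  # storages [mm]

# Each transition: (source place or None for an external input,
# target place or None for an external output, flux law of the
# source storage).
transitions = [
    (None, "root_zone", lambda s: 2.0),               # rainfall input
    ("root_zone", "groundwater", lambda s: 0.05 * s), # recharge
    ("groundwater", None, lambda s: 0.01 * s),        # baseflow output
]

def step(places, transitions, dt=1.0):
    """One explicit Euler step over the flux network."""
    new = dict(places)
    for src, dst, law in transitions:
        q = law(places[src] if src else None) * dt
        if src:
            new[src] -= q
        if dst:
            new[dst] += q
    return new

state = step(places, transitions)
```

Because the graph, not the code, carries the structure, composing two such units into a larger catchment is just a union of places and transitions, which is the compositional property the abstract highlights.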
by A. de Lavenne, V. Andréassian, G. Thirel, M.-H. Ramos & C. Perrin
Abstract: In semidistributed hydrological modeling, sequential calibration usually refers to the calibration of a model by considering not only the flows observed at the outlet of a catchment but also the different gauging points inside the catchment from upstream to downstream. While sequential calibration aims to optimize the performance at these interior gauged points, we show that it generally fails to improve performance at ungauged points. In this paper, we propose a regularization approach for the sequential calibration of semidistributed hydrological models. It consists of adding a priori information on optimal parameter sets for each modeling unit of the semidistributed model. Calibration iterations are then performed by jointly maximizing simulation performance and minimizing drifts from the a priori parameter sets. The combination of these two sources of information is handled by a parameter k to which the method is quite sensitive. The method is applied to 1,305 catchments in France over 30 years. The leave-one-out validation shows that, at locations considered as ungauged, model simulations are significantly improved (over all the catchments, the median KGE criterion is increased from 0.75 to 0.83 and the first quartile from 0.35 to 0.66), while model performance at gauged points is not significantly impacted by the use of the regularization approach. Small catchments benefit most from this calibration strategy. These performances are, however, very similar to the performances obtained with a lumped model based on similar conceptualization.
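The regularized objective described above, performance plus a k-weighted penalty on drift from the a priori parameters, can be sketched in a few lines. The quadratic penalty form and all numbers are illustrative assumptions; the paper's exact formulation may differ.

```python
# Sketch of a regularized calibration objective: maximize performance
# while penalizing drift from a priori parameter sets, weighted by k.

def regularized_objective(perf, params, priors, k):
    """perf: simulation performance (e.g. KGE, higher is better);
    params/priors: parameter vector and its a priori counterpart;
    k: weight on the drift penalty (the method is quite sensitive
    to this choice)."""
    drift = sum((p - p0) ** 2 for p, p0 in zip(params, priors))
    return perf - k * drift

# A parameter set close to the prior wins when raw performance is
# (nearly) tied -- the mechanism that protects ungauged points.
score_near = regularized_objective(0.80, [1.0, 0.5], [1.0, 0.5], k=0.1)
score_far = regularized_objective(0.81, [3.0, 2.0], [1.0, 0.5], k=0.1)
```

The example shows why the approach helps at ungauged points: marginal performance gains at a gauge no longer justify parameter sets that stray far from regionally plausible values.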
Summary: Droughts lasting longer than 1 year can have severe ecological, social, and economic impacts. They are characterized by below-average flows, not only during the low-flow period but also in the high-flow period when water stores such as groundwater or artificial reservoirs are usually replenished. Limited catchment storage might worsen the impacts of droughts and make water management more challenging. Knowledge on the occurrence of multiyear drought events enables better adaptation and increases preparedness. In this study, we assess the proneness of European catchments to multiyear droughts by simulating long discharge records. Our findings show that multiyear drought events mainly occur in regions where the discharge seasonality is mostly influenced by rainfall, whereas catchments whose seasonality is dominated by melt processes are less affected. The strong link between the proneness of a catchment to multiyear events and its discharge seasonality leads to the conclusion that future changes toward less snow storage and thus less snow melt will increase the probability of multiyear drought occurrence.
by Sina Khatami, Murray C. Peel, Tim J. Peterson & Andrew W. Western
Abstract: Uncertainty analysis is an integral part of any scientific modeling, particularly within the domain of hydrological sciences given the various types and sources of uncertainty. At the center of uncertainty rests the concept of equifinality, that is, reaching a given endpoint (finality) through different pathways. The operational definition of equifinality in hydrological modeling is that various model structures and/or parameter sets (i.e., equal pathways) are equally capable of reproducing a similar (not necessarily identical) hydrological outcome (i.e., finality). Here we argue that there is more to model equifinality than model structures/parameters, that is, other model components can give rise to model equifinality and/or could be used to explore equifinality within model space. We identified six facets of model equifinality, namely, model structure, parameters, performance metrics, initial and boundary conditions, inputs, and internal fluxes. Focusing on model internal fluxes, we developed a methodology called flux mapping that has fundamental implications in understanding and evaluating model process representation within the paradigm of multiple working hypotheses. To illustrate this, we examine the equifinality of runoff fluxes of a conceptual rainfall-runoff model for a number of different Australian catchments. We demonstrate how flux maps can give new insights into the model behavior that cannot be captured by conventional model evaluation methods. We discuss the advantages of flux space, as a subspace of the model space not usually examined, over parameter space. We further discuss the utility of flux mapping in hypothesis generation and testing, extendable to any field of scientific modeling of open complex systems under uncertainty.
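The equifinality idea behind flux mapping can be made concrete with a toy model: two configurations of a two-store model can deliver similar total runoff while partitioning it very differently between internal fluxes. The two-bucket model, its parameters, and the rainfall series below are our own illustration, not the conceptual rainfall-runoff model used in the paper.

```python
# Two linear stores (fast and slow); `split` is the fraction of rain
# routed to the fast store. Two runs with different splits can look
# alike in total outflow yet differ sharply in their internal fluxes.

def simulate(rain, k_fast, k_slow, split):
    """Route rainfall through a fast and a slow linear store; returns
    the accumulated fast and slow flux components."""
    s_fast = s_slow = 0.0
    q_fast_total = q_slow_total = 0.0
    for r in rain:
        s_fast += split * r
        s_slow += (1.0 - split) * r
        q_fast, q_slow = k_fast * s_fast, k_slow * s_slow
        s_fast -= q_fast
        s_slow -= q_slow
        q_fast_total += q_fast
        q_slow_total += q_slow
    return q_fast_total, q_slow_total

rain = [5.0, 0.0, 10.0, 2.0, 0.0, 8.0, 1.0, 0.0] * 5
a = simulate(rain, k_fast=0.8, k_slow=0.1, split=0.7)
b = simulate(rain, k_fast=0.8, k_slow=0.1, split=0.3)
```

Plotting the (fast, slow) flux pairs rather than the parameters is the spirit of a flux map: runs that a lumped performance metric treats as interchangeable separate clearly in flux space.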
Abstract: Floods are the most frequent natural calamity in India. The Godavari river basin (GRB) witnessed several floods in the past 50 years. Notwithstanding the large damage and economic loss, the role of extreme precipitation and antecedent moisture conditions on floods in the GRB remains unexplored. Using the observations and the well-calibrated Variable Infiltration Capacity model, we estimate the changes in the extreme precipitation and floods in the observed (1955–2016) and projected future (2071–2100) climate in the GRB. We evaluate the role of initial hydrologic conditions and extreme precipitation on floods in both observed and projected future climate. We find a statistically significant increase in annual maximum precipitation for the catchments upstream of four gage stations during the 1955–2016 period. However, the rise in annual maximum streamflow at all four gage stations in the GRB was not statistically significant. The probability of floods driven by extreme precipitation (PFEP) varies between 0.55 and 0.7 at the four gage stations of the GRB, which declines with the size of the basins. More than 80% of extreme precipitation events that cause floods occur under wet antecedent moisture conditions at all four locations in the GRB. The frequency of extreme precipitation events is projected to rise twofold or more (under RCP 8.5) in the future (2071–2100) at all four locations. However, the increased frequency of floods under the future climate will largely be driven by the substantial rise in the extreme precipitation events rather than wet antecedent moisture conditions.
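A diagnostic like PFEP can be sketched as a simple attribution count: the share of annual-maximum flood dates that fall within a short window of an extreme precipitation event. The window length and the toy day-of-year series below are assumptions for illustration, not data from the GRB study.

```python
# Fraction of annual-maximum floods attributable to extreme
# precipitation: a flood counts as "driven" if its date falls within
# `window` days of an extreme-precipitation date.

def pfep(flood_days, extreme_precip_days, window=1):
    """Share of flood dates within `window` days of an
    extreme-precipitation date."""
    hits = sum(
        any(abs(f - p) <= window for p in extreme_precip_days)
        for f in flood_days
    )
    return hits / len(flood_days)

# Hypothetical ten-year record: day-of-year of annual peak flow vs
# day-of-year of extreme precipitation events.
floods = [190, 202, 215, 188, 240, 199, 210, 205, 230, 195]
extremes = [189, 201, 216, 241, 209, 150, 120, 130, 160, 170]
p = pfep(floods, extremes)
```

In this toy record six of the ten annual peaks coincide with an extreme-precipitation date, giving a PFEP of 0.6, which sits inside the 0.55-0.7 range the abstract reports for the four gage stations.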
by Woei Keong Kuan, Pei Xin, Guangqiu Jin, Clare E. Robinson, Badin Gibbes & Ling Li
Abstract: Tides and seasonally varying inland freshwater input, with different fluctuation periods, are important factors affecting flow and salt transport in coastal unconfined aquifers. These processes affect submarine groundwater discharge (SGD) and associated chemical transport to the sea. While the individual effects of these forcings have previously been studied, here we conducted physical experiments and numerical simulations to evaluate the interactions between varying inland freshwater input and tidal oscillations. Varying inland freshwater input was shown to induce significant water exchange across the aquifer-sea interface as the saltwater wedge shifted landward and seaward over the fluctuation cycle. Tidal oscillations led to seawater circulations through the intertidal zone that also enhanced the density-driven circulation, resulting in a significant increase in the total SGD. The combination of the tide and varying inland freshwater input, however, decreased the SGD components driven by the separate forcings (e.g., tides and density). Tides restricted the landward and seaward movement of the saltwater wedge in response to the varying inland freshwater input in addition to reducing the time delay between the varying freshwater input signal and landward-seaward movement in the saltwater wedge interface. This study revealed the nonlinear interaction between tidal fluctuations and varying inland freshwater input, and will help to improve our understanding of SGD, seawater intrusion, and chemical transport in coastal unconfined aquifers.