• Panic and the Coronavirus: Is There a Better Approach?
    by (Cliff Mass Weather Blog) on April 7, 2020 at 9:43 pm

    See my new blog on this: society is now transitioning into panic about the coronavirus. Universities and schools are being shuttered, sports activities and public gatherings are being cancelled, individuals are hoarding toilet paper and supplies, travel is being severely constrained, the stock market has crashed, and business activity is nose-diving. Major businesses are forcing their employees to work at home.

    This blog will try to summarize the coronavirus threat, suggest that some of the panic-driven actions may not be well-founded, and argue that there may be a far better, more effective approach to dealing with the virus.

    Before I begin, let me note two things. I am not a medical doctor, epidemiologist, or viral expert. But I am a scientist with some facility with statistics and data, and my specialty, weather prediction, is all about helping people react appropriately to estimates of risk. And I have talked to a number of doctors about this issue. But don't read any more if my background bothers you.

    How Bad is the Situation Today?

    If one steps back and looks at the actual numbers, particularly against other threats we face, the situation is far less apocalyptic than some are suggesting. As of today, the Centers for Disease Control and Prevention (CDC) notes 1215 cases and 36 deaths in the U.S. since January 1. This is a very, very small percentage of the U.S. population of 331 million. The number of U.S. cases no longer appears to be going up as rapidly, as noted by the latest CDC graphic (see below). Note the drop after the peak in early March. In China, where the problem started, the number of cases is rapidly declining (see below).

    According to Washington State's Department of Health, the state has had 457 coronavirus cases and 31 deaths. Most (23) of the deaths in Washington have been limited to one nursing facility in Kirkland with a large number of elderly, chronically ill patients.
    In fact, according to the NY Times, this facility would typically lose 5 patients a month. This facility also represents about 50 of the coronavirus cases in Washington, since several first responders and staff were sickened (with no fatalities) due to exposure at this site. In many ways, the Kirkland facility represented an unfortunate random event--the random exposure of a group of extremely vulnerable patients. If this random exposure had not happened, Washington State would probably not be getting headlines as a center for this virus outbreak.

    An extremely important element of this coronavirus outbreak is that it hardly sickens young people, and healthy individuals of middle age or younger generally do not face a life-threatening illness. To illustrate, here is the age distribution of cases in King County. Few folks under 40 are sickened, and none of them died. The problem is with the sick and elderly. This age distribution is going to be very, very important. Similar statistics are found in China.

    There are undoubtedly many, many cases of coronavirus infection among the younger, healthier members of society, many of whom are not aware of their infection. But without testing, we don't really know, other than by indirect statistical approaches. Thus, the "death rates" are clearly far too high, and highly deceptive.

    Comparison to the Flu

    It is important to note that the coronavirus numbers are far smaller than those of the flu. Below is a flu graphic I got from the CDC, to which I added the coronavirus cases (see the gray dot). In fact, the gray dot should be much smaller. For example, we have had 36 coronavirus deaths nationally so far, compared to 61,000 flu deaths in 2017-2018, and 45 million flu cases that year compared to 1200 coronavirus cases so far this year.
    In WA state, 75 have died of flu through the end of February, and several years have brought 200-300 deaths from influenza. Coronavirus is not even in the same league as flu, which also kills the youngest among us. We did not close down universities, businesses, and more for flu. Interestingly, many who are panicking about the coronavirus today refused to get a flu shot in past years, or to practice reasonable hygiene when flu is around (e.g., washing hands carefully).

    Coronavirus is also not in the same league as auto accidents, which kill 1.25 million people a year worldwide (3287 deaths a day), with 25-50 million injured or disabled, while about 38,000 die in the U.S. each year in auto wrecks. Are our political leaders shutting down society for the flu, or stopping auto travel because of deaths on the roadway? The answer is no. So why are they willing to close down society to deal with the coronavirus, which has represented a far smaller risk to the general population? Life is full of risks that must be considered, mitigated, and dealt with. But society must continue to function.

    Poor Response and Lack of Testing

    As the virus began to spread in China, the U.S. needed to develop a coherent plan for understanding and dealing with the crisis. This did not happen. President Trump probably made the right call about cutting off travel to China, but the lack of coherent planning beyond that is apparent. The lack of testing is a major failure of his administration and others. A key capability is to develop sufficient testing resources to determine the progression of the disease in the U.S. This was sorely lacking, and the flawed testing developed by the CDC was one example of it. Other countries have tested vastly greater numbers of individuals. Importantly, the U.S. has not begun to randomly test the general population to determine the extent of spread among U.S.
    residents.

    The Extreme Cost of the Current "Social Distancing" Approach

    Currently, the "social distancing" approach is being stressed by politicians and others. The idea is that by canceling schools and large public gatherings, coupled with workers working online from home, there will be a reduction of coronavirus community spread, reducing the peak in the number of cases and putting less stress on the limited resources of the medical community. This is illustrated by the figures below. Notice that the number of cases doesn't change (the area under the curve). And it has another issue: it greatly extends the period in which society is affected by the disease.

    The cost of social distancing is immense, something many politicians do not seem to have thought through. The stock market is in free fall, the economy is tanking, colleges are poorly educating their students through questionable online learning, K-12 students aren't being taught, business is contracting, and workers are losing salaries and being laid off. The lowest-income folks are hurt worst, making "social distancing" highly regressive. I have read estimates that the world economy could lose trillions of dollars and that recession is now becoming more likely in the U.S.

    Social distancing is attractive and even necessary for a short period to slow the virus, but in the end it is not sustainable. It is also inefficient. In an attempt to prevent the virus from getting to elderly people with health problems, a huge population that does not have the disease, or is unlikely to get very sick from it, is restrained from normal activity. Something more effective is needed, something I would call "smart quarantine."
    More on that later.

    A number of local politicians and others have been motivated to try massive social distancing based on a modeling study completed by several local researchers, suggesting that only extreme social distancing can prevent a massive increase in cases and up to 400 deaths in our region. This is a relatively simple modeling approach, which from my reading does not consider the variation of death rate with age, or the varying social interactions with age. It assumes a uniform death rate of 1.6%. I think it would be useful to test an alternative strategy, based mainly on testing those that are not ill, and removing those people from social interaction.

    Media, Politicians, and the Web: How and Why They Can Promote Panic

    The tendency to stampede the population into panic and to promote actions that are in the end counterproductive is a real risk of the current political and media landscape. For politicians, there is the potential for endless attention, with opportunities to give sober pronouncements and promote increasingly harsh measures. Resources become freely available from a worried citizenry. And the situation provides fuel to attack political foes, as is apparent with the attacks on Trump for virtually every action he takes (and some have been reasonable, like the China ban). That said, President Trump is certainly guilty of underplaying the seriousness of the situation and providing inaccurate information. The lack of testing is a massive failure. There is, however, plenty of bipartisan blame to go around for ineffective responses.

    For the media, the situation is a bonanza, with huge increases in attention, which promotes more "clicks" and revenue. An increasingly isolated and home-bound populace is glued to the constant media barrage, promoting fear and anxiety.
    A highly connected population, unlike any population before, is unable to escape the incessant coronavirus coverage, which constantly features the latest deaths and shutdowns.

    Another Way

    So is there another way to deal with the coronavirus epidemic that could be more effective, at far less cost to society? I suspect there is. This approach would take advantage of several unique and new aspects of the current situation: the fact that young and healthy people, the bulwark of our nation's productive capacity, are only minimally affected by the coronavirus; that most of the mortality is among the sick and elderly; and that the technology to test millions of individuals quickly is available. Perhaps these facts allow us to deal with the situation in a dramatically new way. If a rational actor were running the response, perhaps they would:

    1. Protect the most vulnerable with all available resources. All nursing facilities, retirement homes, and the like would be essentially quarantined, with all patients and staff tested for the virus, and those testing positive isolated from the remainder. All visitors would have to be tested. All individuals who are over 60 and have serious health problems would be asked to self-quarantine, with food and other assistance provided to allow them to reduce contact with the outside community. Governor Inslee has initiated some measures; more are needed.

    2. Extensive random testing of the general population would be initiated, with millions of tests available for this purpose. Such general testing would allow a determination of the extent of COVID-19 spread and the isolation of affected individuals and their close associates. This is what I call "smart quarantine"--the use of massive testing to identify the carriers and the currently sick, and to take them out of circulation. South Korea is trying this approach and it appears to be working (see below).

    3. A fund to provide salaries for quarantined individuals would be initiated.
    This would encourage all individuals to be tested and encourage financially marginal individuals to isolate themselves.

    4. Social distancing would end and all schools would reopen within a month. It is poor public policy to cripple education and the productive capacity of the individuals who are the bulwark of the U.S. economy, particularly since most of them are not at risk of serious impacts from the coronavirus. Sustained social distancing is not a long-term solution; it is just a short-term stopgap. Massive testing and isolation should be put in place within weeks.

    5. Federal grants would be initiated to support additional hospital costs, the acquisition of additional medical supplies and equipment, and the huge testing program.

    These measures would help pull the nation back from the brink of economic disaster, effectively restrain the crisis, and restore normal life to most individuals.

    The American people have a long history of panicking when they are threatened, at enormous financial and human cost. After 9/11, the American people agreed to a loss of privacy and civil liberties, and allowed a tragic invasion of Iraq. And after the attack on Pearl Harbor, fears of a fifth column led to the internment and loss of liberty of over 100,000 Japanese Americans. Hopefully, fears of coronavirus won't lead to the unnecessary destruction of our economy and the undermining of the prospects of many Americans. A creative solution to this crisis may be possible, acting as a bridge to the situation a year from now when hopefully a vaccine will be available.
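    The trade-off behind "flattening the curve" discussed in this post can be sketched with a toy SIR model: lowering the contact rate (a stand-in for social distancing) lowers and delays the epidemic peak, but stretches the outbreak over a longer period. This is only an illustrative sketch, not the model used by the local researchers; every parameter value below is an assumption chosen to show the qualitative effect.

```python
# Toy SIR model: a lower contact rate (beta) flattens and delays the
# epidemic peak. All parameters are illustrative assumptions.

def sir_peak(beta, gamma=0.1, n=1_000_000, i0=100, days=400):
    """Forward-Euler SIR integration; returns (peak_infected, peak_day)."""
    s, i, r = n - i0, i0, 0
    peak_i, peak_day = i, 0
    for day in range(days):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # new recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        if i > peak_i:
            peak_i, peak_day = i, day
    return peak_i, peak_day

no_distancing = sir_peak(beta=0.30)  # faster spread
distancing = sir_peak(beta=0.15)     # contact rate halved

print(f"No distancing: peak {no_distancing[0]:,.0f} infected on day {no_distancing[1]}")
print(f"Distancing:    peak {distancing[0]:,.0f} infected on day {distancing[1]}")
```

    Halving the contact rate roughly cuts the peak and pushes it weeks later, which is exactly the "less stress on hospitals, but a longer disruption" pattern the post describes.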

  • Flying Blind on Coronavirus: Why Random Testing is so Important.
    by (Cliff Mass Weather Blog) on April 7, 2020 at 5:16 pm

    The coronavirus crisis is one of the most disruptive events to hit the U.S. in a very long time. Major sectors of the economy are being shuttered, people's lives are being altered in the most profound ways, and the nation is facing extreme stress, the implications of which we do not understand.

    Extraordinarily serious decisions are being made without key information. How many individuals have active infections? How many have had the virus and now have immunity? What percentage of infected individuals have few or no symptoms? Who is currently infected and needs to be quarantined? Is the current reduction in cases in Washington and elsewhere mainly the result of social distancing, or of the herd immunity of an increasing number of individuals who have had the virus? For all these questions, we do not know the answer.

    Our best medical scientists and epidemiologists, including a highly respected group at the University of Washington, are making projections of the future progression of the pandemic, but without sufficient information for initializing their models, resulting in a rapid decline of accuracy with time and large uncertainty in the projections. We are flying blind. Lack of information undermines our ability to manage the crisis. And it doesn't have to be this way.

    The bottom line of this blog is that we must immediately begin random sampling of the population, using both PCR testing, which tells us about active infections, and antibody testing, which tells us who has been infected in the past.

    Random sampling of populations is an essential tool for the social, biological, and physical sciences. Political pollsters randomly sample potential voters to predict the outcomes of elections. They don't provide election projections by counting the number of avowed Democrats or Republicans who come knocking on their door. In a variety of fields, random sampling of populations is a key tool for decision making.
    But in the coronavirus situation, we are content not to use this powerful tool, even when we make the gravest decisions. It doesn't make sense.

    Here in Washington State, virtually all the testing is being used to determine whether individuals suffering from respiratory ailments have active coronavirus infections. There is, of course, good reason to test sick individuals: their treatment plan can be enhanced with such knowledge, and their caregivers need to know whether they require protection (PPE--personal protective equipment). We know how many folks are getting tested (and not everyone who is sick is getting tested) and the percentage of those tests that are positive for active COVID infections. That is not enough.

    The current testing regime leaves decision makers poorly informed. How many individuals currently have active infections, with or without symptoms? How many people have had the virus and are now potentially immune? We don't know. And without such information, it is nearly impossible to project the future.

    Some of the best information we have on infections locally comes from the testing done by the UW Virology Department. As of yesterday, they had tested about 51,000 individuals, with roughly 10% testing positive. Surely the percentage of the total population that is currently infected with COVID-19 must be less than this percentage derived from sick folks (who can be ill for a number of reasons)--but how much less? Testing ramped up in March, though the daily numbers go up and down.

    Below is a figure that has never been shown in the media: the ratio of positive to negative results from the UW Virology testing. The ratio substantially increased between mid-March and the end of the month (from roughly 6% to 17%), suggesting that a higher proportion of tested sick people were suffering from COVID-19.
    Importantly, that number has been declining rapidly in April, suggesting major progress.

    The Washington State Department of Health provides statewide totals of confirmed cases (see below), which have increased substantially since early March. But how much of the increase in the number of confirmed infections is the result of vastly increased testing during the past several weeks (as shown by the second figure), and how much reflects the increased presence of COVID-19 in the population? We simply don't have that information.

    Epidemiological projection, like numerical weather prediction, is an initial value problem. You start with an estimate of the initial state, and your model, which contains information about the processes of the phenomenon in question, attempts to project the initial state into the future. If your initial state is uncertain, so is your forecast.

    There is an intrepid group at the University of Washington (IHME) attempting to do such projections using a statistical model, with COVID-19 forecasts for both individual states and the nation. Decision makers are making heavy use of their estimates. Here is a sample of their latest forecasts for Washington State deaths from the virus. The observed COVID deaths are shown by the solid red line. The dashed line shows their best estimate of the future, and the shading indicates the uncertainty in the projections. HUGE uncertainties, in part because of the uncertainties in our knowledge of past and current active infections and the "herd immunity" of folks who have recovered from the virus.

    But poor initial data (lack of knowledge of infections in the total population) has resulted in their projections changing substantially in time, undermining their value to decision makers. Below are their projections starting on March 26, April 1, and April 5. Huge differences are apparent. Their projection made on March 26th (when they started this valuable service) is shown below.
    The light gray line is their best projection (actually the median of their distribution), and the uncertainty is noted by light gray shading. This projection shows a peak in late April of around 27 deaths a day, with high numbers continuing into May. The April 5th projection (solid red line, again the median of their forecast distribution) is much more benign, showing the epidemic collapsing by the end of April with far less than half the deaths, and the peak occurring in early April. According to their latest projections, we are now probably past the peak, something consistent with the drop in hospitalizations in our state due to the virus (see below). I suspect that the next projection will be even lower.

    Washington State is rapidly getting out of this terrible situation due to the combination of social distancing and the building immunity of the population. But we are pretty much ignorant about the magnitude of the herd immunity, because we do not know how many have had, or currently have, the virus.

    Some researchers are trying to indirectly secure an idea of what has happened using a combination of genetic testing of viral samples coupled with epidemiological modeling. A recent paper by Bedford et al. (2020, not yet peer reviewed) suggests that the virus reached Washington State during mid to late January and then spread throughout the local population, asymptomatically for many. Since random sampling was not available, they used mathematical models to estimate spread (see below), using a large number of simulations informed by their genetic testing. The red line is the median of these simulations (you might consider that to be their best estimate). They suggest that by mid-March roughly 2000 Washington citizens were infected, as the infection increased exponentially in the population.
    Without any influence of social distancing, their work suggests the potential for roughly 10,000 cases by April 1 (simply by extrapolating the exponential rise of the red line).

    Unfortunately, we lack the information to know what has happened here in Washington. But it is clear that this virus has been spreading among us for at least two months, with many individuals unaware of being infected. There is substantial evidence for this asymptomatic spread, such as the example of the Diamond Princess cruise ship, a vessel on which the disease ran rampant and nearly everyone was tested. Roughly 50% of those infected, in a population heavily skewed toward older folks, had no symptoms. Could it be even higher in a younger, healthier population? We don't know.

    The wake-up call for our state came when the spreading virus randomly hit a nursing home full of elderly, sick individuals in Kirkland, producing dozens of deaths and serving as a focus for spread into the community. What would have happened without that alarming situation?

    We need random sampling of the population.

    Not knowing what is happening in the general population is crippling society's ability to deal with the virus in an informed, smart way. Testing capability is increasing now, and our state and nation must give random testing of the population very high priority immediately, devoting a significant percentage of the rapidly increasing testing capacity to random sampling.

    Such sampling offers enormous benefits. First, it will greatly improve the quality of our projections of the disease. Second, it will tell us why the situation is improving (social distancing versus herd immunity) and guide governmental actions. Third, it will enable us to squash the disease spread, by quarantining every individual who tests positive for an active infection, and by tracing (and quarantining) his/her contacts.
    South Korea has shown the huge value of the widespread testing/quarantining approach. It is the only way we can stay on top of the virus when we inevitably have to loosen the restrictions on our population, which we will have to do to prevent economic collapse and social disorder.

    Here in Seattle, we have some of the most sophisticated mathematical modelers, statistical analysts, medical experts, and private-sector analytic experts in the world, yet we are content to cripple our economy and deal with such an historic event in nearly complete ignorance of the situation. We, of all places in the world, can do much better. It is time for random sampling of our population for COVID-19, looking for both active and past infections.

    When the story of this event is written, the inability to rapidly initiate and actively use large-scale random testing back in February will be seen as a terrible failure of the Centers for Disease Control, state health organizations, and the Federal/state leadership that did not demand it. And this failure was compounded by other issues, such as not encouraging all Americans to wear masks early in the event and not moving immediately to secure necessary PPE and other supplies.

    But we are where we are, and it is time now to move from a reactive to a more reasoned approach, with random testing of the population being a measure that needs immediate priority.
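    Why is a relatively small random sample so informative? Because a binomial confidence interval on the sampled positive fraction pins down the population prevalence directly, something testing only the sick can never do. Below is a minimal sketch of the arithmetic, using a Wilson score interval; the sample size and positive count are hypothetical numbers chosen only for illustration.

```python
# Sketch: estimating population prevalence from a random sample.
# Uses a Wilson score interval for a binomial proportion.
# The sample size and positive count below are hypothetical.
import math

def wilson_interval(positives, sample_size, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = positives / sample_size
    denom = 1 + z**2 / sample_size
    center = (p + z**2 / (2 * sample_size)) / denom
    half = z * math.sqrt(p * (1 - p) / sample_size
                         + z**2 / (4 * sample_size**2)) / denom
    return center - half, center + half

# Hypothetical survey: 4000 randomly sampled residents, 20 test positive.
lo, hi = wilson_interval(20, 4000)
print(f"Estimated prevalence: 0.50% (95% CI {lo:.2%} to {hi:.2%})")
```

    A few thousand randomly chosen tests bound the prevalence to within a fraction of a percent, whereas the roughly 10% positive rate among tested sick people says little about the general population.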

  • A Dry Five Days, With Welcome Warming, Thanks To Omega
    by (Cliff Mass Weather Blog) on April 5, 2020 at 3:45 am

    Get out your sunglasses and put away your umbrella--the next five days should be bright and dry here in western Washington. Who can you thank for this meteorological bounty? The Greek letter omega.

    The total precipitation for the five days ending Friday at 11 PM from the European Center model is, well, zero over most of western Washington. Bizarrely, all the precipitation will be going into normally much drier California! Here is the California precipitation total for the same period. Wow. Much of California, particularly in the mountains, will get several inches. While we are dry.

    This extreme difference in precipitation, and the persistence of the pattern this week, is due to a highly persistent "locking" of the atmosphere called an "omega block." To illustrate, here are the forecast upper-level (500 hPa) heights (like pressure) for Tuesday evening. The solid lines are the heights, and the colors represent anomalies from normal (blue is lower than normal--a trough; orange is higher than normal--a ridge). There is a ridge, with two adjacent troughs/lows to the SE and SW. Doesn't it look like the Greek letter "omega"? (see below)

    This omega configuration is very, very stable and can stick around for days (or even longer). The Northwest is immediately downstream (east) of the ridge of high pressure/heights, thus we will have dry skies and warming temperatures. Those poor folks in California are downstream of the low/trough, bringing them unseasonably cool temperatures and lots of rain.

    Because of the ridge, temperatures will progressively warm during the week. Here are the temperature forecasts from the European Center model for this week in Seattle. Temperatures first rise into the fifties and then to 65F on Friday. Can you imagine how good that will feel? After getting through a March that was cooler than a normal January, we deserve this.
For California, the cool/wet weather is good news--filling the reservoirs, upping the snowpack, and delaying the inevitable fires.

  • It's Bizarre: March was Colder than January In Seattle
    by (Cliff Mass Weather Blog) on April 2, 2020 at 12:00 pm

    Everything seems topsy-turvy and unnatural these days, and there is a meteorological oddity that must be added to the list: March was colder than January in Seattle this year.

    I knew March was a cool one, but it was not until Dr. Joseph Zagrodnik, a talented atmospheric scientist working at WSU's AgWeatherNet organization, pointed it out to me that I realized how unusual the past month had been. According to Dr. Zagrodnik, the average temperature in March at Sea-Tac Airport was 44.8F, compared to 45.1F in January.

    How unusual is this? Rare, but not unprecedented. March has been cooler than January 8 times in the 126 years we have temperature records in Seattle, with the last occurrence in 2006.

    To appreciate this oddity visually, the graph below shows the number of years the March-minus-January temperature differences fell into various bins. On average, March is about 5F warmer than January, but in some extreme years March has been as much as 17F warmer. That would get folks' attention. Ten years were close to zero (within 0.5F), and only a handful (3) were 0.5 to 1.5F cooler in March.

    Another way to appreciate our cool March is to look at a map of the difference of this year's March temperatures from normal (see below). Western Washington was much cooler than normal, with some areas 4-5F cooler than typical values. Temperatures were closer to normal east of the Cascade crest.

    I know your next question: Why? A good question. It has to do with an unusual weather pattern that has persisted over the North Pacific during the past month, one that includes a ridge of high pressure offshore with persistent cool, northerly flow over the Northwest. The figure below shows the height (like pressure) anomalies (differences from normal) around 18,000 ft above the surface (500 hPa). Note the unusually high heights offshore (red) and lower-than-normal heights (blue/purple).
    This is a cold pattern for us, with unusually strong/cold northerly flow over the Northwest coast.

    Finally, I wanted to show you an extraordinary picture taken yesterday (Wednesday) around 5:20 PM from the Seattle Space Needle PanoCam. With cold air aloft and great instability, there was a magnificent line of cumulus clouds along the western slopes of the Cascades. Just stunning.

  • A Weak Tornado Hits Richland
    by (Cliff Mass Weather Blog) on April 1, 2020 at 7:52 pm

    We have had very unstable air over the Northwest, with lots of convective showers and thunderstorms. This instability is the result of colder-than-normal air aloft and a warming surface, producing a large decrease of temperature with height. Not unlike your cereal pot on the stove top.

    One sign of the percolating atmosphere was a tornado that hit the north side of Richland, Washington yesterday (Tuesday) afternoon around 2:45 PM. This picture was taken by Gabriel Sanchez and was part of a video he made. A satellite image near the time of the tornado shows a line of convection/thunderstorms extending over Richland (see below). I put a black oval around the tornado location. You can also see a number of convective storms bubbling up over the western side of the state as well.

    The NWS Pendleton radar imagery at the time of the tornado (2:46 PM) was not exactly impressive, but clearly showed the convective line (I put an oval around the relevant portion--the image shows reflectivity, a measure of the intensity of the precipitation). You can see the most intense part (red spot). The tops of the thunderstorms were wimpy--only around 15,000 ft.
    Oklahoma folks would laugh at us! And the Doppler velocities (below), showing the strength of the flow toward or away from the radar, did not indicate any rotation (which would be shown by contrasting colors signifying a couplet of flow toward and away from the radar).

    The UW WWLLN lightning detection network did sense some lightning in portions of the convective line, but to the east of the tornado (the picture shows lightning strikes between 2:30 and 3 PM).

    This kind of weak tornado, sometimes called a landspout, has a different mechanism from the big supercell tornadoes of the Midwest. If there is a change of wind speed or direction over distance, that implies some inherent rotation (see schematic). The updraft from the modest convective cell over Richland can take that rotation and spin it up, not unlike a skater speeding up when they stretch upward and pull in their arms.  Most western WA tornadoes result from a similar mechanism.

    At this point, there are no reports of damage or injury from the landspout.
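    The skater analogy is conservation of angular momentum: as the updraft stretches a rotating column and shrinks its radius, the tangential wind speeds up roughly in proportion to the contraction. A back-of-the-envelope sketch, where the radii and the initial wind speed are purely illustrative assumptions, not measurements from the Richland event:

```python
# Back-of-the-envelope vortex stretching: if angular momentum (v * r) is
# conserved, contracting a broad, weak circulation spins it up.
# All numbers below are illustrative assumptions.

def spun_up_speed(v_initial, r_initial, r_final):
    """Tangential wind after the column contracts, assuming v * r is conserved."""
    return v_initial * r_initial / r_final

# A broad circulation ~2 km across (1000 m radius) turning at a gentle 2 m/s,
# stretched and contracted by the updraft to a ~100 m wide (50 m radius) funnel:
v = spun_up_speed(v_initial=2.0, r_initial=1000.0, r_final=50.0)
print(f"Spun-up tangential wind: {v:.0f} m/s")
```

    A twenty-fold contraction turns an imperceptible drift into damaging winds, which is why even modest updrafts over pre-existing shear can produce a landspout.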

  • Exeter builds on its world-class status as centre for climate research
    by Met Office Press Office on April 7, 2020 at 1:23 pm

    In Devon the concentration of environmental scientists is four times higher than the national UK average and Exeter is widely recognised as a global centre for climate science research with teams of scientists at both the Met Office and the …

  • Procession of spring begins with a moderate March
    by Met Office Press Office on April 1, 2020 at 1:31 pm

    Meteorologically March has been a relatively average month, certainly when compared to the record-breaking wet February the UK has just seen. The first month of meteorological Spring has delivered everything we expect in March from warm days to strong winds …

  • Climate change to bring heavier rainfall events
    by Met Office Press Office on March 23, 2020 at 2:05 pm

    The rise of global temperature is the most easily understood metric of climate change. Its outwardly simple concept has instant appeal as it captures the key element of climate change – the warmer the planet becomes, the more the climate …

  • Record-breaking rainfall
    by Met Office Press Office on March 4, 2020 at 2:28 pm

    Those interested in the UK’s climate records have only had to wait two months of the new decade for a significant new rainfall record to be set. February 2020 set a new UK record for February rainfall in a series …

  • Space mission blasts off for the sun
    by Met Office Press Office on February 11, 2020 at 10:55 am

    Solar Orbiter blasted off from Cape Canaveral at 04:03 GMT this morning. Solar Orbiter is a European Space Agency (ESA) science mission, with support from NASA in the US, that will provide data to help improve our understanding of the … Continue reading →

  • Blending Satellite Imagery is Both ‘Science and Art’ to Maximize Information Delivery
    by Chris on April 6, 2020 at 5:11 pm

    Monitoring the atmosphere by satellite has come a long, long way technologically since TIROS sent back its first snapshots of Earth in 1960. Along with marked advances in spectral, spatial, temporal, and radiometric resolution of state-of-the-art instrumentation, however, come copious volumes of new data as well as unique challenges with how to view it all.

  • “Decision-making under meteorological uncertainty” for D-Day’s Famous Forecast
    by Chris on March 27, 2020 at 4:13 pm

    The success of the D-Day Invasion of Normandy was due in part to one of history’s most famous weather forecasts, but new research shows this scientific success resulted more from luck than skill. Oft-neglected historical documentation, including audio files of top-secret phone calls, shows the forecasters were experiencing a situation still researched and practiced today: “decision-making

  • COVID-19 and the Weather, Water, and Climate Enterprise
    by Matt on March 27, 2020 at 3:48 pm

    by Mary Glackin, AMS President In normal times, our thousands of AMS professionals and colleagues are completely dedicated to helping people make the best possible weather-, water-, and climate-related decisions. In this COVID-19 period, we're not just providing critical information; we are also receiving it. We are each of us following guidance from public health

  • AMS’s New Culture and Inclusion Cabinet
    by Matt on March 26, 2020 at 3:43 pm

    by Keith L. Seitter, CCM, AMS Executive Director One of the AMS Core Values is: “We believe that a diverse, inclusive, and respectful community is essential for our science.” AMS lives this value, which is articulated in the Centennial Update to the AMS Strategic Goals. We work to foster a culture that celebrates our diversity,

  • Snowflake Selfies as Meteo Teaching Tools
    by Chris on March 25, 2020 at 5:54 pm

    Undergrads at Penn State recently took to their cellphones to mingle with and snap pics of tiny snowflakes to reinforce meteorological concepts. The class, called “Snowflake Selfies” and described in a new paper in BAMS, was designed to use low-cost, low-tech methods that can be widely adapted at other institutions to engage students in hands-on

  • A simple model of convection to study the atmospheric surface layer
    by Katherine Fodor on September 20, 2019 at 9:10 am

    Since being immortalised in Hollywood film, “the butterfly effect” has become a commonplace concept, despite its obscure origins. Its name derives from an object known as the Lorenz attractor, which has the form of a pair of butterfly wings (Fig. 1). It is a portrait of chaos, the underlying principle hindering long-term weather prediction: just a small change in initial conditions leads to vastly different outcomes in the long run. The three-equation system that gives rise to the Lorenz attractor is often referred to as a simple model of atmospheric convection, yet amongst the atmospheric science community, attention is rarely paid to the original fluid flow that the Lorenz equations describe. Consisting of a fluid layer heated from below and cooled from above, Rayleigh-Bénard convection (Fig. 2) is a hallmark flow beloved by fluid dynamicists and mathematicians alike for its analytical tractability, yet rich behaviour. It is often cited as being of immediate relevance for many geophysical and astrophysical flows [1]. The success of turbulent Rayleigh-Bénard convection in leading to our understanding of chaos, as exemplified by the Lorenz attractor, suggests the enticing possibility of gaining other key conceptual insights into the behaviour of the Earth’s atmosphere through the use of this simple convective system.   In a recent study [2] we explored this potential by investigating to what extent turbulent Rayleigh-Bénard convection serves as an analogue of the daytime atmospheric boundary layer, also known as the convective boundary layer (CBL). In particular, we investigated whether statistical properties in the surface layer develop with height in a similar way in both systems. The surface layer is typically just a few tens of metres thick, but due to the strong turbulent mixing that takes place there, it is of primary importance for the development of the boundary layer. 
The surface boundary conditions of Rayleigh-Bénard convection and the CBL are the same, which might lead one to think that surface-layer properties should behave similarly in both cases. However, differences in the upper boundary conditions between the two systems modify the large-scale circulations that appear in both systems, and this may have an impact on the surface layer. Indeed, despite the much-heralded relevance of Rayleigh-Bénard convection to geophysical flows, we find that its cooled upper plate modifies the large-scale structures in such a way that it substantially alters the behaviour of near-surface properties compared to the CBL. In particular, the downdrafts in Rayleigh-Bénard convection are considerably stronger than in the CBL and their impingement into the surface layer changes how velocity and temperature statistics develop with height. However, we also find that just an incremental change to the upper boundary condition of Rayleigh-Bénard convection is needed to closely match surface-layer statistics in the CBL. If instead of being cooled, the upper plate is made adiabatic, i.e. no heat is allowed to escape (Fig. 3), the influence of the strong, cold downdrafts is removed, resulting in surface-layer similarity between this modified version of Rayleigh-Bénard convection and the CBL. Rayleigh-Bénard convection with an adiabatic top lid has the advantage that it is a simpler experimental set-up than the CBL and provides a longer statistically steady state, allowing for greater statistical convergence to be achieved through long-time averaging. In the long term, the classical Rayleigh-Bénard system will continue to serve as a paradigm for studies of natural convection, though we are increasingly beginning to see that its practical application to geophysical and astrophysical [3] flows may not be as straightforward as past literature seems to suggest. References [1] A. Pandey, J. Scheel, and J. Schumacher.
Turbulent superstructures in Rayleigh-Bénard convection. Nature Communications, 9:2118, 2018. [2] K. Fodor, J.P. Mellado, and M. Wilczek. On the Role of Large-Scale Updrafts and Downdrafts in Deviations From Monin-Obukhov Similarity Theory in Free Convection. Boundary-Layer Meteorology, 2019. [3] F. Wilczynski, D. Hughes, S. Van Loo, W. Arter, and F. Militello. Stability of scrape-off layer plasma: a modified Rayleigh-Bénard problem. Physics of Plasmas, 26:022510, 2019. Edited by Dasaraden Mauree Katherine Fodor is a PhD candidate at the Max Planck Institute for Meteorology in Hamburg, Germany. She uses very high resolution computer simulations to study turbulence in the atmosphere. In particular, her research concerns interactions between large-scale structures and small-scale turbulence. You can find her on Twitter @FodorKatherine where, in addition to science, she also tweets about cycling.
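The three-equation Lorenz system mentioned above is easy to experiment with. The sketch below is a minimal illustration, not code from the study: it integrates the system with a crude forward-Euler step using Lorenz's classic parameter values, and shows how a tiny perturbation of the initial conditions produces completely different trajectories (the "butterfly effect"); the step size and horizon are illustrative choices.

```python
# A minimal sketch (not from the paper): the three-equation Lorenz system
# integrated with a simple forward-Euler step. Parameters are the classic
# values Lorenz used (sigma=10, rho=28, beta=8/3).

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance (x, y, z) by one Euler step of the Lorenz equations."""
    x, y, z = state
    return (
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    )

def trajectory(state, n_steps):
    """Final state after n_steps Euler steps."""
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

# Two runs whose initial z differs by one part in 10^8: after 30 model
# time units the trajectories no longer resemble each other.
a = trajectory((1.0, 1.0, 1.0), 3000)
b = trajectory((1.0, 1.0, 1.0 + 1e-8), 3000)
print(a)
print(b)
```

Halving `dt` or switching to a Runge-Kutta integrator changes the numbers but not the qualitative result: the separation grows roughly exponentially until it saturates at the size of the attractor.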

  • A brighter future for the Arctic
    by Patrik Winiger on September 13, 2019 at 7:00 am

    This is a follow-up from a previous publication. Recently, a new analysis of the impact of Black Carbon in the Arctic was conducted within a European Union Action. “Difficulty in evaluating, or even discerning, a particular landscape is related to the distance a culture has traveled from its own ancestral landscape. As temperate-zone people, we have long been ill-disposed toward deserts and expanses of tundra and ice. They have been wastelands for us; historically we have not cared at all what happened in them or to them. I am inclined to think, however, that this landscape is able to expose in startling ways the complacency of our thoughts about land in general. Its unfamiliar rhythms point up the narrow impetuosity of Western schedules, by simply changing the basis of the length of the day. And the periodically frozen Arctic Ocean is at present an insurmountable impediment to timely shipping. This land, for some, is irritatingly and uncharacteristically uncooperative.” – Barry Lopez, Arctic Dreams, 1986 Study: Back in the 1980s the Arctic was a different place. It is one of the fastest-changing regions of our planet, and Arctic sea ice volume has more than halved since then (Figure 1). Our study took place in the 2010s, when the Arctic moved into a new regime and sea ice volume showed unprecedented lows. In the three years since our study ended, this decline has simply continued. Over four years, we collected small airborne particles at five different sites around the Arctic, for one to two years per site. Later, we measured the concentrations and isotopic sources of black carbon (BC) aerosols, a product of incomplete combustion of biomass and fossil fuels, and a subfraction of the total collected aerosol. All living organisms have more or less the same relative amount of radiocarbon atoms. We call it a similar ‘isotopic fingerprint’. Through photosynthesis plants take up CO2.
About 1 in 10¹² CO2 molecules contains the naturally occurring (but unstable) radiocarbon atom (14C), which is formed high up in the atmosphere through cosmic radiation. Black carbon from biomass burning thereby has a contemporary radiocarbon fingerprint. When plants die, the radiocarbon atoms are left to decay, and no new radiocarbon is built into the plant. Radiocarbon’s half-life is 5730 years, which means that fossils, and consequently soot from fossil fuels, are completely depleted of radiocarbon. For the same periods and sites of our observations (see Figure 2), we also simulated black carbon concentrations and sources. This was done with an atmospheric transport model (FLEXPART), using emission inventory data for fossil and biofuels (ECLIPSE), and biomass burning (GFED) (see Figure 3). Emission inventories like ECLIPSE calculate emissions of air pollutants and greenhouse gases in a consistent framework. They rely on international and national statistics of the amount of consumed energy sources for, e.g., energy use, industrial production, and agricultural activities. GFED uses MODIS satellite measurements of daily burnt area. This is used – together with ’emission factors’ (i.e. the amount of emitted species per consumed energy source unit) – to calculate emissions of several different gas and particle species. The details and methodologies have also been described in a previous EGU ASxCR blogpost. Black carbon, a short-lived climate pollutant (SLCP), is the second or, more likely, third largest warming agent in the atmosphere after the greenhouse gases carbon dioxide and methane. Unlike the two gases, it is less clear how big the net warming effect of BC is. There are several open questions that lead to the current uncertainty: 1. How much BC is exactly put into the atmosphere? 2. How long does it stay in the air and where is it located? 3. Where from and where to is it transported, and where and when is it deposited? 4.
How does it affect the Earth’s radiative balance by darkening snow and ice, and, most importantly of all, how does it interact with clouds? We have a fair understanding of all these processes, but still, relatively large uncertainties remain to be resolved. Depending on how much BC is in the air and where it is located in the atmosphere, it can have different effects (e.g., strong warming, warming, or even cooling). And all these things need to be measured, and simulated correctly by computer models. Current multi-model best estimates by the Arctic Monitoring and Assessment Programme say that BC leads to increases of Arctic surface temperature of 0.6°C (0.4°C from BC in the atmosphere and 0.2°C from BC in snow) based on their radiative forcing (see Figure 4). It is important to note, however, that our main focus on emission reduction should target (fossil-fuel) CO2 emissions, because they will affect the climate long after (several centuries) they have been emitted. And reduction in these sources means a reduction in soot as well, since soot is also a combustion product. Reduction that targets soot specifically can be achieved by the installation of particulate filters (retrofitting of old engines and stringent standards for new vehicles), shifting to cleaner fuels and burning techniques, or the introduction and enforcement of inspection and maintenance programs to assure compliance with already existing legislation. It is recognized internationally that for effective implementation of the Paris Agreement (the mitigation effort to hold the average global temperature well below 2°C relative to preindustrial levels), mitigation measures for short-lived climate pollutants (such as BC and methane) need to be considered. As the Arctic environment is more sensitive to climate change, knowing exactly which origins (source types and regions) are contributing to black carbon in this part of the world is important for effective mitigation measures.
Source attributions of black carbon depend on the altitude where the aerosol is located at the time of measurement or modelling. Wildfires are known to contribute more at higher elevations during the fire seasons (Paris et al., 2009) than at the Arctic surface. Several of the global chemical models have already approximately predicted the proportion of source influence, but their accuracy depends on the emissions input and performance of the model. Part of the problem is that these models get input from emission inventories. These inventories tell the model where, when and how much black carbon is emitted, kind of like instructions from a cookbook. But the different cookbooks don’t agree on the amount of black carbon that goes into our annual black carbon cake. Additionally, all the different cookbooks have different recipes for different years. If we take a best estimate of global black carbon emissions, our annual cake has about the size (and weight, because of the similar densities of limestone/granite and soot) of the Great Pyramid of Giza (7500 gigagrams). But the range of estimates varies immensely (2000–29000 gigagrams) (see Figure 5). And these numbers are only for man-made emissions (fossil fuels and biofuels), i.e. excluding wildfires and natural biomass burning. A recent multi-model analysis puts global annual BC fire emissions between 1000 and 6000 gigagrams. To correct these models and the emission inventories, we rely on observational data to validate the model results. The model set-up we used did really well in simulating soot concentrations, and a bit less well in simulating fuel types (sources) – better for fossil fuels than for biofuels and biomass burning. The model simulated that 90% of BC emissions (by mass) reaching surface level in the Arctic originated from countries north of 42°N.
In our isotope measurements, we found that black carbon sources had a strong seasonality, with high contributions of fossil fuels to black carbon in the winter (75%) and moderate contributions (60%) in the summer. Black carbon concentrations were roughly four times higher in winter than in summer. Concentrations of black carbon also differed considerably between the stations. These surface-level (<500 m above sea level) Pan-Arctic results, based on our 14C method, were not very surprising. A few individual locations, as used in our latest study, have previously been published and showed similar sources (e.g., Barrett et al., 2015; Winiger et al., 2015, 2016). However, the sources in our study were relatively uniform for all stations and almost in seasonal sync with each other (high fossil in winter, low fossil in summer). This could have important implications for policy-related questions. Uniform sources could mean that mitigation measures could have a stronger impact, if the right sources are tackled at the right time, to keep the Arctic from becoming a small ice floe, not large enough to stand on. There could be brighter days ahead of us. Edited by Dasaraden Mauree Patrik Winiger is Research Manager at ETH Zürich and guest researcher at the Department of Earth Sciences, Vrije Universiteit Amsterdam. His research interest focuses on sources and impacts of natural and anthropogenic Short-Lived Climate Pollutants and Greenhouse Gases. He tweets as @PatrikWiniger.
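The radiocarbon bookkeeping behind this source apportionment can be sketched in a few lines. This is an illustrative simplification, with hypothetical function names and sample values that are not data from the study: biomass soot carries the contemporary 14C level, fossil soot carries essentially none, and a measured sample splits linearly between the two.

```python
# Hedged sketch of the 14C source-apportionment logic described above.
# Function names and sample values are illustrative, not from the study.

HALF_LIFE_YEARS = 5730.0  # radiocarbon half-life quoted in the post

def fraction_remaining(age_years):
    """Fraction of the original 14C left after age_years of decay."""
    return 0.5 ** (age_years / HALF_LIFE_YEARS)

def fossil_fraction(fraction_modern):
    """Fossil share of a sample, given its measured 'fraction modern'
    14C content: (14C/C)_sample / (14C/C)_contemporary."""
    return 1.0 - fraction_modern

# After one half-life, half the 14C remains:
print(fraction_remaining(5730.0))   # 0.5
# Fossil-fuel carbon (~100 million years old) retains effectively none:
print(fraction_remaining(100e6))    # 0.0 (underflows: no 14C left)
# A hypothetical winter sample measuring 25% of the contemporary 14C
# level would be attributed 75% to fossil sources:
print(fossil_fraction(0.25))        # 0.75
```

This linear two-endmember mixing is what lets a single isotope ratio separate biomass-burning from fossil-fuel soot without knowing anything else about the sample.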

  • Water vapor isotopes: a never ending story!
    by Jean-Louis Bonne on April 9, 2019 at 7:00 am

    Water stable isotopes are commonly exploited in various types of archives for their information on past climate evolution. Ice cores retrieved from polar ice sheets or high-altitude glaciers are probably the most famous type of climate archive. In ice cores, the message about past temperature variations is conserved in the ice, formed from snowfalls whose isotopic composition varies with the temperatures governing the formation of the snow crystals. However, deciphering temperature variations from water isotopes is not always straightforward, as temperature is not the only parameter which can leave an imprint on the isotopic composition of the ice; other factors, like changes in the origin of the moisture, can also play a role. Water isotopic observations are useful tools to understand the water cycle, as water masses keep in their isotopic composition a signal of the phase changes they have undergone. Many studies use complex atmospheric models representing the water isotopes to refine the analysis of paleoclimatic data. The same models are also used for future climate projections, a domain where large uncertainties are still linked to the prediction of the water cycle and changes in precipitation. Water isotopes can be useful to benchmark such models and contribute to their improvement. To better understand the processes affecting the water isotopic composition in the atmosphere, our group at the Alfred Wegener Institut in Germany has been focusing on the first step of the atmospheric water cycle: evaporation at the oceanic surface. For this purpose, we have been continuously measuring the water vapour isotopic composition since summer 2015 directly above the ocean surface, on board the German research icebreaker Polarstern (Figure 1).
We measured the humidity level, δ¹⁸O and δ²H (representing the variations in the amounts of the water isotopes H₂¹⁸O and ¹H²H¹⁶O compared to the most abundant isotope H₂¹⁶O) and calculated the second-order parameter deuterium excess, defined as d-excess = δ²H − 8·δ¹⁸O. Our observations, newly published in Nature Communications, extend over all latitudes in the Atlantic sector, from the North Pole to the coast of Antarctica (Figure 2), and could therefore also be exploited for many projects involving water isotopes at any latitude around the whole Atlantic basin. They allowed us to experimentally explore the interactions between the atmospheric moisture and the open sea as well as the sea ice. According to a commonly accepted theory proposed about 40 years ago (Merlivat and Jouzel, 1979), the meteorological conditions under which oceanic evaporation takes place leave their fingerprint in the isotopic composition of the vapour. We have been able to test this theory in the field for the first time under such a large range of climatic conditions. Our observations indeed confirm the expected role of relative humidity and sea surface temperature in the isotopic composition of the evaporated flux. However, contrary to what was expected from this theory, our records reveal that the wind speed at which the evaporation takes place does not leave its mark in the vapour isotopic composition above the oceans (Figure 3). In the sea-ice-covered areas of the polar oceans, the observations showed markedly different signals than above the open ocean. We have shown that an atmospheric model simulating the isotopic composition of water (ECHAM5-wiso) could at first not reproduce the intensity of these variations. We managed to identify the cause of this discrepancy by adding a new source of humidity in the model. This model was already considering the sublimation of sea ice as a source of humidity.
But so far, it had assumed that the sea ice was formed only from frozen oceanic water and therefore had the same isotopic composition as this oceanic water. However, the surface of the sea ice is also affected by snow falling on top of already-formed sea ice. Since these snowfalls have a totally different isotopic composition than the oceanic water, once integrated into the model as a new potential source of sublimation they drastically changed the simulations of vapour isotopes in polar regions, and the model now resolves our observations in these sectors much better (Figure 4). Our observations have shown their ability to benchmark atmospheric models of the atmospheric water cycle. They highlight different processes having significant consequences for the simulation of water isotopic composition in vapour and precipitation at the global scale, which should be considered in all atmospheric water cycle modelling experiments. They contribute to a better understanding of the creation of the first water isotopic signal during oceanic evaporation. This is particularly important as the oceanic evaporation will later determine the isotopic signal found in subsequent precipitation. The interpretation of past climate archives originally formed from precipitation can therefore be significantly affected by these results. Edited by Dasaraden Mauree Jean-Louis Bonne is an atmospheric scientist at the Alfred Wegener Institut, Bremerhaven, Germany. He prepared his PhD at the LSCE, Gif-sur-Yvette, France. His research aims at understanding the contributions of local and remote sources to locally observed atmospheric composition through their combination with atmospheric simulations. He has been working on the atmospheric components of the water and carbon cycles, exploiting present-day observations of greenhouse gases and water vapour isotopic compositions. His current project, ISOARC, is focusing on the identification of the moisture sources of the eastern Arctic.
His personal blog can be reached here.
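The second-order parameter defined in the post, d-excess = δ2H − 8·δ18O, is a one-line computation. Below is a minimal sketch with hypothetical sample values (not measurements from the cruise):

```python
# A minimal sketch of the deuterium-excess parameter defined above:
# d-excess = delta-2H - 8 * delta-18O (values in per mil). The sample
# numbers are hypothetical, not measurements from the cruise.

def d_excess(delta_2h, delta_18o):
    """Deuterium excess (per mil) from delta-2H and delta-18O."""
    return delta_2h - 8.0 * delta_18o

# Vapour with delta-2H = -80 and delta-18O = -11.25 per mil:
print(d_excess(-80.0, -11.25))  # 10.0
```

A d-excess of 10 per mil is the intercept of the global meteoric water line; deviations from it are what carry information about evaporation conditions such as relative humidity and sea surface temperature.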

  • The puzzle of high Arctic aerosols
    by Julia Schmale, Andrea Baccarini, Paul Zieger on September 19, 2018 at 2:53 pm

    Current Position: 86°24’ N, 13°29’E (17th September 2018) The Arctic Ocean 2018 Expedition drifted for 33 days in the high Arctic and is now heading back south to Tromsø, Norway. With continuous aerosol observations, we hope to add new pieces to the high Arctic aerosol puzzle to create a more complete picture that can help us to improve our understanding of the surface energy budget in the region. In recent years, considerable efforts have been undertaken to study Arctic aerosol. However, there are many facets to Arctic aerosol, so that different kinds of study designs are necessary to capture the full picture. Just to name a few efforts, during the International Polar Year in 2008, flight campaigns over the North American and western European Arctic studied the northward transport of pollution plumes in spring and summer time [1,2,3]. More survey-oriented flights (PAMARCMIP) have been carried out over several years and seasons [4] around the western Arctic coasts. The NETCARE campaigns [5] have studied summertime Canadian Arctic aerosol in the marginal ice zone. And the Arctic Monitoring and Assessment Programme (AMAP) has issued reports on the radiative forcing of anthropogenic aerosol in the Arctic [6,7]. These and many other studies have advanced our understanding of Arctic aerosol substantially. Since the 1950s we have been aware of the Arctic Haze phenomenon, which describes the accumulation of air pollution in the Arctic emitted from high latitude sources during winter and early spring. In these seasons, the Arctic atmosphere is very stratified, air masses are trapped under the so-called polar dome and atmospheric cleansing processes are minimal. In springtime, with sunlight, when the Arctic atmosphere becomes more dynamic, the Arctic Haze dissolves with air mass movement and precipitation. Then, long-range transport from the mid-latitudes can be a source of Arctic aerosol. This includes anthropogenic as well as forest fire emissions.
The latest AMAP assessment report [6] has estimated that the direct radiative forcing of current global black and organic carbon as well as sulfur emissions leads to a total Arctic equilibrium surface temperature response of 0.35 °C. While black carbon has a warming effect, organic carbon and particulate sulfate cool. Hence, over the past decades the reductions in sulfur emissions from Europe and North America have led to less cooling from air pollution in the Arctic [8]. Currently, much effort is invested in understanding new Arctic emission sources that might contribute to the black carbon burden in the future, for example from oil and gas facilities or shipping [9, 10, 11]. These studies contribute to a more thorough understanding of direct radiative effects from anthropogenic aerosol and fire emissions transported to the Arctic. However, neither long-range transported aerosol nor emissions within the lower Arctic contribute substantially to the aerosol found in the boundary layer of the high Arctic [12]. These particles are emitted in locations with warmer temperatures and these air masses travel north along isentropes that rise in altitude the further north they go. The high Arctic boundary layer aerosol, however, is important because it modulates the radiative properties of the persistent Arctic low-level clouds that are decisive for the surface energy budget (see first Arctic Ocean blog in August 2018). Currently, knowledge about sources and properties of high Arctic aerosol as well as their interactions with clouds is very limited, mainly because observations in the high Arctic are very rare. 
In principle, there are four main processes that shape the aerosol population in the high north: a) primary sea spray aerosol production from open water areas including open leads in the pack ice area, b) new particle formation, c) horizontal and vertical transport of natural and anthropogenic particles, and d) resuspension of particles from the snow and ice surface (snowflakes, frost flowers etc.). From previous studies, especially in the marginal ice zone and land-based Arctic observatories, we know that microbial emissions of dimethyl sulfide and volatile organic compounds are an important source of secondary aerosol species such as particulate sulfate or organics [13]. The marginal ice zone has also been identified as a potential source region for new particle formation [14]. What is not known is to what degree these particles are transported further north. Several scavenging processes can occur during transport. These include coagulation of smaller particles to form larger particles, loss of smaller particles during cloud processing, precipitation of particles that acted as cloud condensation nuclei or ice nucleating particles, or sedimentation of large particles to the surface. Further north in the pack ice, the biological activity is thought to be different compared to the marginal ice zone, because it is limited by the availability of nutrients and light under the ice. Hence, local natural emissions in the high Arctic are expected to be lower. Similarly, since open water areas are smaller, the contribution of primary marine aerosol is expected to be lower. In addition, the sources of compounds for new particle formation that far north are not very well researched. To understand some of these sources and their relevance to cloud properties, an international team is currently measuring the aerosol chemical and microphysical properties in detail during the Arctic Ocean 2018 expedition on board the Swedish icebreaker Oden.
It is the fifth expedition in a series of high Arctic field campaigns on the same icebreaker. Previous campaigns took place in 1991, 1996, 2001 and 2008 (see refs [15, 16, 17, 18] and references therein). The picture below describes the various types of air inlets and cloud probes that are used to sample ambient aerosol particles and cloud droplets or ice crystals. A large suite of instrumentation is used to determine in high detail the particle number concentrations and size distribution of particles in the diameter range between 2 nm and 20 µm. Several aerosol mass spectrometers help us to identify the chemical composition of particles between 15 nm and 1 µm as well as the clusters and ions that contribute to new particle formation. Filter samples of particles smaller than 10 µm will allow a detailed determination of chemical components of coarse particles. They will also give a visual impression of the nature of particles through electron microscopy. Filter samples are also used for the determination of ice nucleation particles at different temperatures. Cloud condensation nuclei counters provide information on the ability of particles to form cloud droplets. A multi-parameter bioaerosol spectrometer measures the number, shape and fluorescence of particles. Further instruments such as black carbon and gas monitors help us to distinguish pristine air masses from long-range pollution transport as well as from the influence of the ship exhaust. 
We can distinguish and characterize the particle populations that do or do not influence low-level Arctic clouds and fogs in detail by using three different inlets: i) a total inlet, which samples all aerosol particles and cloud droplets/ice crystals, ii) an interstitial inlet, which selectively samples particles that do not form droplets when we are situated inside fog or clouds, and iii) a counterflow virtual impactor inlet (CVI), which samples only cloud droplets or ice crystals (neglecting non-activated aerosol particles). The cloud droplets or ice crystals sampled by the CVI inlet are then dried and thus only the cloud residuals (or cloud condensation nuclei) are characterized in the laboratory situated below. To gain more knowledge about the chemical composition and ice nucleating activity of particles in clouds, we also collect cloud and fog water on the uppermost deck of the ship and from clouds further aloft by using tethered balloon systems. When doing vertical profiles with two tethered balloons, particle number concentration and size distribution information is also obtained, to understand to what extent the boundary layer aerosol is mixed with the cloud level aerosol. Furthermore, a floating aerosol chamber is operated at an open lead near the ship to measure the fluxes of particles from the water to the atmosphere. It is still unknown whether open leads are a significant source of particles. For more details on the general set-up of the expedition see the first two blogs of the Arctic Ocean Expedition (here and here). After 33 days of continuous measurements while drifting with the ice floe and after having experienced the partial freeze-up of the melt ponds and open water areas, it is now time for the expedition to head back south. We will use two stations in the marginal ice zone during the transit into and out of the pack ice as benchmarks for Arctic aerosol characteristics south of our 5-week ice floe station.
As Oden is working her way back through the ice and the expedition comes to an end, we recapitulate what we have measured in the past weeks. What was striking, especially for those who have already spent several summers in the pack ice, is that this time the weather was very variable. There were hardly two days in a row with stable conditions. Instead, one low pressure system after the other passed over us, skies changed from bright blue to pale grey, calm winds to storms… On average, we have experienced the same number of days with fog, clouds and sunshine as previous expeditions, but the rhythm was clearly different. From an aerosol perspective these conditions meant that we were able to sample a wide variety of characteristics, including new particle formation, absence of cloud condensation nuclei with total number concentrations as low as 2 particles per cubic centimeter, coarse mode particles, and size distributions with a Hoppel minimum that is typical for cloud-processed particles. Coming back home, we can hardly wait to fully exploit our recorded datasets. Stay tuned! Do not hesitate to contact us for any question regarding the expedition and measurements. Check out this blog for more details of life during the expedition and our project website which is part of the Arctic Ocean 2018 expedition. Edited by Dasaraden Mauree Julia Schmale is a scientist in the Laboratory of Atmospheric Chemistry at the Paul Scherrer Institute, Switzerland. She has been involved in Arctic aerosol research for the past 10 years. Andrea Baccarini is doing his PhD in the Laboratory of Atmospheric Chemistry at the Paul Scherrer Institute, Switzerland. He specializes in new particle formation in polar regions. Paul Zieger is an Assistant Professor in Atmospheric Sciences at the University of Stockholm, Sweden. He is specialized in experimental techniques for studying atmospheric aerosols and clouds at high latitudes.

  • The perfect ice floe
    by Julia Schmale on August 28, 2018 at 8:07 pm

Current position: 89°31.85 N, 62°0.45 E, drifting with a multi-year ice floe (24th August 2018). A little more than three weeks into the Arctic Ocean 2018 Expedition, the team has found the right ice floe and settled down to routine operations. Finding the perfect ice floe for an interdisciplinary science cruise is not an easy task. The Arctic Ocean 2018 Expedition aims to understand the linkages between the sea, microbial life, the chemical composition of the lower atmosphere and clouds (see previous blog entry) in the high Arctic. This means that the “perfect floe” needs to serve a multitude of scientific activities that involve sampling from open water, drilling ice cores, setting up a meteorological tower, installing balloons, driving a remotely operated vehicle, measuring fluxes from open leads and sampling air uncontaminated by the expedition activities. The floe hence needs to be composed of multi-year ice, thick enough to carry all installations but not too thick to allow for drilling through it. There should also be an open lead large enough for floating platforms, and the shape of the floe needs to be such that the icebreaker can be moored against it on the port or starboard side, facing any of the four cardinal directions depending on where the wind is coming from. The search for the ice floe actually turned out to be more challenging than expected. The tricky task was not only to find a floe that would satisfy all scientific needs; getting to it north of 89°N proved exceptionally difficult this year. After passing the marginal ice zone north of Svalbard, see blue line on the track (Figure 2), progress through the first-year ice was relatively easy. Making roughly 6 knots, about 11 km/h, we advanced quickly. After a couple of days, however, the ice became unexpectedly thick, up to three meters, which made progress difficult and slow, even for Oden with her 24,500 horsepower.
In such conditions the strategy is to send a helicopter ahead to scout for a convenient route through cracks and thinner ice. However, persistent fog kept the pilot from taking off, which meant the expedition had to sit and wait in the same spot. For us aerosol scientists looking at aerosol-cloud interactions, this was a welcome occasion to get our hands on the first exciting data. In the meantime, strong winds from the east pushed the pack ice together even harder, producing ridges that are hard to overcome with the ship. But with a bit of patience and improved weather conditions, we progressed northwards, keeping our eyes open for the floe. As it happened, we met insurmountable ice conditions at 89°54’ N, 38°32’ E, just about 12 km from the North Pole – reason enough to celebrate our farthest North. Going back south from there, it took just a bit more than a day, with helicopter flights and good visibility, until we finally found ice conditions featuring multiple floes. And here we are. After a week of intense mobilization on the floe, the four sites on the ice and the instrumentation on the ship are now in full operation, and routine, if you stretch the meaning of the term a bit, has taken over. A normal day looks approximately like this: 7:45: breakfast, meteorological briefing, information about the plan of the day; 8:30 – 9:00: heavy lifting of material from the ship to the ice floe with the crane; 9:00 (or later): weather permitting, teams go to their sites, CTDs are cast from the ship if the aft is not covered by ice; 11:45: lunch for all on board and picnic on the floe; 17:30: end of day activities on the ice, lifting of the gangway to prevent polar bear visits on the ship; 17:45: dinner; evening: science meetings, data crunching, lab work or recreation. At the balloon site, about 200 m from the ship, one balloon and one heli-kite are lifted alternately to take profiles of radiation, basic meteorological variables and aerosol concentrations.
Other instruments are lifted up to sit for hours in and above clouds to sample cloud water and ice nucleating particles, respectively. At the met alley, a 15 m tall mast carries radiation and flux instrumentation to characterize heat fluxes in the boundary layer. The red tent at the remotely operated vehicle (ROV) site houses a pool through which the ROV dives under the floe to measure physical properties of the water. The longest walk, about 20 minutes, is to the open lead site, where a catamaran takes sea surface microlayer samples, a floating platform observes aerosol production and cameras image underwater bubbles. The ice core drilling team visits different sites on the floe to take samples for microbial and halogen analyses. Importantly, all activities on the ice need to be accompanied by bear guards. Everybody carries a radio and needs to report when they go off the ship and come back. If the visibility decreases, everyone needs to come in for safety reasons. Lab work and continuous measurements on the ship happen throughout the day and night. More details on the ship-based aerosol laboratory follow in the next contribution. Edited by Dasaraden Mauree. Julia Schmale is an atmospheric scientist at the Paul Scherrer Institute in Switzerland. Her research focuses on aerosol-cloud interactions in extreme environments. She is a member of the Atmosphere Working Group of the International Arctic Science Committee and a member of the Arctic Monitoring and Assessment Programme Expert Group on Short-lived Climate Forcers.

  • At Home Meteorological Experiments
    by lewisferris on April 6, 2020 at 3:28 am

If you are trying to entertain the kids at home or are just looking for a meteorological experiment, then you've come to the right place! First up we have the Cloud in a Bottle experiment: Equipment: • 1 large clear-plastic bottle with a screw-on top (e.g. a 3-litre juice bottle) • 1 match • 1 cup of water Instructions: Step 1. Pour the water into the bottle, put the top on and shake it around. Leave it for a few hours so that the air in the bottle gets very humid. (You can do the word find below while you wait.) Step 2. Remove the top of the bottle. Step 3. Light the match (children need adult supervision), let it burn a few seconds, blow it out, then quickly drop the match into the bottle. Screw the top tightly back on the bottle. Step 4. Now squeeze the bottle firmly and hold for about 30 seconds. Step 5. Now look closely at the air in the bottle as you suddenly release the pressure on the bottle. The saturated air will cool, and there are so many condensation nuclei in there that tiny droplets should form. It’s a cloud! Step 6. If you squeeze the bottle again, the cloud droplets will evaporate and the cloud disappears. Technical explanations: Step 1: When air is very humid, it is said to be “saturated”. Step 3: The smoke from the match acts as tiny ‘condensation nuclei’ on which cloud droplets can form. Step 4: Squeezing increases the pressure of the air in the bottle, warming it slightly; continuing to squeeze the bottle allows it to cool back to room temperature. Meteorological Word Find. Create a rain gauge: Measuring how much rain has fallen requires a rain gauge. We can make a simple rain gauge by using a straight-sided bottle, some stones, scissors, a marker and a ruler:
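The cooling in Step 5 can be estimated with the standard dry-adiabatic (Poisson) relation. This is a rough sketch with illustrative numbers (a ~5% squeeze is an assumption, not a measurement from the experiment):

```python
# Sketch of why the air cools when you release the squeeze: rapid expansion
# is approximately dry-adiabatic, so T2 = T1 * (P2/P1)**(R/cp).
R_OVER_CP = 0.286  # R/cp for dry air

def temp_after_expansion(t_initial_k, p_initial, p_final):
    """Temperature (K) after adiabatic expansion from p_initial to p_final
    (any consistent pressure units)."""
    return t_initial_k * (p_final / p_initial) ** R_OVER_CP

# Suppose squeezing raised the pressure about 5% (105 kPa), and the air had
# re-warmed to room temperature (20 °C = 293 K) before release:
t2 = temp_after_expansion(293.0, 105.0, 100.0)
print(round(293.0 - t2, 1))  # cooling of roughly 4 K on release
```

Even a few degrees of sudden cooling is enough to push already-saturated air past its dew point, which is why the droplets appear.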

  • The Physics of Fog
    by Met_Team on September 7, 2018 at 5:09 am

By MetService Meteorologist Claire Flynn. While the weather conditions that lead to the formation of fog are usually quite benign, fog itself can be very disruptive. In particular, the aviation and marine industries are often interested in how fog or mist will affect visibility for their journeys, though fog also affects road users. Fog can also make for some excellent photography opportunities, creating an eerie backdrop (as you can see in some of the photos below!). A ‘fogbow’, photographed in the Waipa District by Peter Urich. To find out how a fogbow is formed, take a look at this blog on atmospheric optics. But what is fog, and how is it formed? Before we can talk about what fog is, we first need to define a key factor: visibility. Visibility is defined differently at night and during the day. The definitions given by the International Civil Aviation Organisation (ICAO) are: during the day, visibility is the greatest distance at which a black object of suitable dimensions situated near the ground can be seen and recognised when observed against a bright background; at night, visibility is the greatest distance at which a light of around 1000 candelas can be seen and identified against an unlit black background. Fog and mist are created by microscopic water droplets suspended in the air. These tiny water droplets scatter any light that passes through or past them, meaning that objects in the fog become hard to see. Fog and mist are then defined by how much the visibility is reduced. The ICAO definitions are: Fog: a suspension of very small water droplets in air, reducing the visibility to 1000 metres or less. Mist: similar to fog, but visibility is reduced to no less than 1000 metres. Haze is another phenomenon that can reduce visibility; however, this is a reduction in visibility caused by microscopic particles in the air, rather than by water droplets.
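The fog/mist/haze distinction above lends itself to a tiny classifier. This is only a sketch of the definitions quoted in this post; the function name and the `water_droplets` flag are illustrative, not an operational rule:

```python
# Sketch of the ICAO-style definitions above: fog when visibility drops to
# 1000 m or less, mist when it is reduced but stays above 1000 m, and haze
# when the reduction is caused by dry particles rather than water droplets.
def classify_obscuration(visibility_m, water_droplets=True):
    if not water_droplets:
        return "haze"  # reduction caused by particles, not droplets
    if visibility_m <= 1000:
        return "fog"
    return "mist"      # reduced visibility, but above the 1000 m threshold

print(classify_obscuration(800))                          # fog
print(classify_obscuration(3000))                         # mist
print(classify_obscuration(2000, water_droplets=False))   # haze
```

A real observation would also need an upper visibility bound for reporting mist at all, which this sketch omits for simplicity.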
For example, in New Zealand we sometimes get haze around our coastlines after we have had strong winds, as the wind can whip up sea spray and cause small particles of sea salt to be suspended in the air. So now we know what fog is, how is it formed? There are a number of processes that can cause fog, and fog is classified into different ‘types’ based on how it forms. The most common types are radiation fog and valley fog, followed by advection fog. We can also get post-frontal (or evaporation) fog and steaming fog. Let’s take a look at the different types of fog in more detail: Radiation Fog Radiation fog is so-called as the formation process depends upon a balance of “radiative heat fluxes” (I’ll explain what this means below). Radiation fog usually forms overnight or early morning during the coldest hours of the day, and then dissipates after the sun comes up. It primarily forms over land but has been observed to form over shallow inlets and harbours as well. It is the most common type of fog we see in New Zealand. Radiation Fog in Christchurch, photographed by Julienne Nacion. For radiation fog to form, we need several ingredients: Clear skies, Light winds, Sufficient moisture in the lowest layer of the atmosphere, near the ground. These conditions are most commonly met when we have high pressure over the country, but there are other situations where these conditions can be met too. In short, on a clear night with light winds, the air can cool down enough to reach its “dew point” (100% relative humidity) and water vapour in the air will start to condense into fog. Let’s have a look at this process in a little more detail. Radiative Heat Fluxes The process of radiation fog formation starts at sunset, when the balance of radiative heat fluxes changes. Everything radiates heat – with warmer objects emitting more radiation than cooler objects. 
On a sunny day, incoming solar radiation is greater than the radiation emitted by the Earth back to space – this causes the ground and the air directly above it to warm up. On a cloud-less night, the Earth continues to radiate heat away to space – but with no incoming solar radiation, this causes the ground, and the air directly above it, to cool quickly. Clear Skies Cloudy skies change the radiative heat balance described above. If there is cloud in the sky, it will absorb the heat emitted by the Earth’s surface and re-radiate it back towards the ground. This prevents the land surface from cooling as much as it would on a cloudless night. Clear skies allow the ground to cool quickly by emitting heat to space via radiation, allowing a ‘temperature inversion’ to form (this is when the temperature increases with height, rather than decreasing with height like it usually does – more on inversions here.) Radiative cooling is an essential part of the formation of radiation fog, so the less cloud around, the greater the chances that fog will form. Light winds If the wind is too strong, then this can cause warmer, drier air from aloft to be mixed up with the air near the ground, meaning that the land surface cannot cool as quickly. Conversely, if there is no wind at all, you are more likely to get a very heavy dew instead of fog – this is because very light winds help to mix the cool air through a shallow layer of air near the land surface. If there isn’t enough wind, the cooling will be restricted to the lowest few centimetres of the atmosphere, allowing dew but not fog. Sufficient moisture The relative humidity in the lowest layers of the atmosphere needs to be high enough that radiative cooling overnight will allow the air to reach its “dew point”. The dew point is a measure of how much water vapour is in the air – when the dew point is equal to the air temperature, this is equivalent to 100% relative humidity, and the air cannot hold any more water vapour. 
In addition, warm air can hold more water vapour than cool air. If the air cools to its dew point, and then continues to cool, the water vapour needs to go somewhere – initially it condenses on the ground as dew, but if we have sufficient radiative cooling and light winds to mix the cool air through a shallow layer as described above, the water vapour will begin to condense into fog as well. Shallow radiation fog by Addington Brook in Hagley Park, photographed by Don Gracia. The brook serves to provide a source of moisture in the lowest layer of the atmosphere, aiding the formation of fog. While having lots of water vapour in the lowest layer of the atmosphere helps fog to form, it is in fact beneficial for the air aloft to be quite dry. This is because water vapour in the air absorbs infra-red radiation, and, much like cloud, can re-radiate some of this heat back to Earth. This means that if the air aloft is dry, the land surface can cool much more efficiently. Many of our airports in New Zealand are prone to radiation fog, and our aviation forecasters spend a lot of time thinking about whether or not radiation fog is likely to affect each airport when night falls. Hamilton Airport is our foggiest airport in New Zealand, with an average of 92.4 nights a year when fog is reported. This is followed by Dunedin Airport, with 64 nights a year. Since it needs to be cool and damp for radiation fog to form, it is much more common during winter than summer. Auckland Airport shows a particularly strong seasonal trend – with only a couple of cases of fog occurring during the warmer months between September 20th and March 20th over the past ten years (all these cases were advection fog rather than radiation fog – you can find an explanation of advection fog later). Radiation fog can be particularly disruptive when it occurs at Auckland Airport, which has an average of 19.3 foggy nights a year. 
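The temperature/humidity/dew-point relationship described under "Sufficient moisture" can be sketched numerically with the Magnus approximation. The constants below are a common textbook choice, not values from this article:

```python
import math

# Magnus approximation for dew point from temperature and relative humidity.
# Constants are a widely used approximation, valid roughly for -40 to 50 °C.
A, B = 17.625, 243.04  # dimensionless, °C

def dew_point_c(temp_c, rh_percent):
    """Dew point (°C) given air temperature (°C) and relative humidity (%)."""
    gamma = math.log(rh_percent / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

# At 100% relative humidity the dew point equals the air temperature:
print(round(dew_point_c(10.0, 100.0), 1))  # 10.0
# Drier air has a lower dew point, so more overnight cooling is needed
# before fog can form:
print(round(dew_point_c(10.0, 80.0), 1))   # roughly 6.7
```

This is why forecasters watch the gap between temperature and dew point in the evening: the smaller the gap, the less radiative cooling it takes to reach saturation.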
While some aircraft are equipped to land in low visibility, taxiing around the runway is still dangerous in fog, and fewer aircraft can land per hour during low visibility operations. Another type of fog that can affect Auckland Airport is post-frontal fog, which is a special case of radiation fog. Post-Frontal Fog (Evaporation Fog) Suppose a front moves over an area, bringing cloud and rain. Sometimes, after a front has moved over, the skies clear up and the winds ease. If this occurs at night, then the radiation fog formation process can begin. In this case, however, we now also have water sitting on the ground, which can be evaporated into the air near the Earth’s surface, increasing the dew point of the air. This means that fog may start to form at slightly higher temperatures than it would have done had the ground been dry. The wet ground also means that fog can form without the need for dew. The satellite image above was taken at 4.40pm on 3rd August 2018. The frontal cloud had cleared away from Auckland Airport shortly before sunset (you can see that it is beginning to get dark in the bottom right hand corner of the image). Following the front, the air was relatively cloud-free. This meant that Auckland Airport had a ready supply of moisture (water on the ground due to the front that had just passed over) and clear skies. The winds had also died back overnight, creating the perfect conditions for fog. In this case, fog formed between midnight and 2am, persisting until the sun came up the next day and clearing around 8am. Valley Fog Valley fog is another special case of radiation fog. The formation process also relies on the land surface cooling by emitting radiation to space, allowing the air to reach its dew point temperature and causing moisture to condense. Most people know that warm air rises. This is because warm air is less dense than cool air – it weighs less, and therefore becomes buoyant and rises above the cool air. Equally, cold air sinks.
In hilly areas, on a calm and cloudless night, the land surface on the hills cools. This in turn cools the air directly above it. The cooler air then sinks down into the valleys, much like water flowing through a river. Pooling of cold air means that fog can form more quickly, and persist for longer, in valleys than over flat terrain. At times in the Winter, some valleys or basins (for example, the Mackenzie Basin) will see fog that persists for days, and does not even clear during the height of the day. This is because the weak sunshine is not enough to warm the extensive pool of cold air sitting in the valley. Valley fog, Dovedale River, Tasman District. Photographed by Sheryl Waters. Valley fog in the Wakatipu Basin, photographed by Marin Corinthian Kohn. Advection Fog The word ‘advection’ refers to the transfer of heat and moisture from one place to another by the wind. Advection fog primarily forms over the sea (also called sea fog), though we can get advection fog over the land sometimes as well. Let’s start with sea fog. Sea fog occurs when the wind pushes warm, moist air over a cooler sea surface. The sea surface cools the air directly above it, causing water vapour to condense into fog. Around New Zealand, high pressure to the east of the country is a good situation for sea fog to form – with northeasterly winds bringing air down from the sub-tropics, across cooler waters. If the high pressure stays in place for a long time (called a blocking high), then the sea fog can persist over the water for many days. We learnt above that radiation fog forms after the sun sets and the land cools, and then dissipates when the sun comes up again the next morning. Unlike the land, the ocean is deep and dark, and it does not warm up and cool down as the sun rises and sets. For this reason, sea fog can happen at any time of day (and this is also why radiation fog will not occur over the sea). But if sea fog doesn’t dissipate when the sun rises, what makes it go away? 
Essentially, we need an ‘air mass change’ – or another weather system to come along and push the sea fog away. Sea fog is most common during spring, as the sea surface temperatures lag behind the increasingly warm land and air. The satellite image above shows an area of sea fog sitting along the coast of Canterbury on the morning of 7th August 2018. You can see Banks Peninsula sticking out above the fog. Christchurch Airport reported fog overnight, which began to lift to low cloud early in the morning. The satellite image below is an infra-red image of the same scenario. By measuring infra-red radiation, we can infer the temperature of the cloud tops or land surface. The red and green area to the west of the image indicates very cold temperatures of about -50°C. This is part of a front, with cloud tops extending high into the atmosphere where it is very cold indeed. You’ll notice that the sea fog doesn’t appear on the infra-red image – this is because it is very low to the ground, and therefore it is similar in temperature to its surroundings. The front coming from the west moved over the country later in the day, pushing away the sea fog, and no fog was reported at Christchurch Airport the following night. Hybrid Radiation-Advection Fog Those who live along our coastlines will likely have found themselves engulfed in sea fog from time to time. When extensive sea fog forms around our coasts over the sea, it can sometimes advect onto the land as well, but tends to dissipate on warm days as it moves further inland. Once sea fog has advected onto the land, it will likely show a slight diurnal variation – thinning out during the afternoon and becoming denser again overnight – this is called hybrid radiation-advection fog. Wellington Airport is never affected by pure radiation fog, but it is affected by sea fog on a handful of days a year.
The graphs below compare the times of day when fog was observed at Wellington Airport (sea fog, though once it is over the airport it becomes hybrid radiation-advection fog) and Hamilton Airport (radiation fog alone). As you can see, Hamilton Airport has a very strong diurnal trend, with fog reported frequently at night, and very rarely in the afternoons. On the other hand, fog at Wellington Airport happens any time of day, with only a slight diurnal trend. You can read more about how fog occurs at Wellington airport here. While sea fog that moves onshore becomes hybrid radiation-advection fog, we can also get radiation fog that forms over land, and then later advects over another area. This is also hybrid radiation-advection fog. For example, radiation fog frequently forms up the Hutt Valley, north of Wellington. In rare cases the radiation fog then drifts southwards across the harbour and over Wellington Airport in a light northeasterly drift. Hybrid radiation-advection fog that occurs at Wellington Airport via this process tends to be very short-lived, whereas sea fog that moves onshore at Wellington Airport can persist for most of the day. Steaming Fog The final type of fog I will explain in this blog is steaming fog. Steaming fog is also the rarest type of fog in New Zealand, and only occurs under very specific conditions. It tends to be very shallow, so is not as troublesome as the likes of advection fog or radiation fog. The best example of steaming fog around New Zealand would be steam rising from hot pools in places like Rotorua. Steaming fog occurs when cold air lies over a much warmer water surface. Air close to the water surface is warmed and moistened by the water underneath. This makes the air buoyant (remember, “warm air rises” – this process is called convection). The air rises up and mixes with the cooler air slightly above it. 
If the increasing water content of the air outstrips the heating of the air from beneath, then the air can become supersaturated, and excess water vapour will condense into mist or fog. Here at MetService, we have dedicated marine and aviation meteorologists. There are also meteorologists who write the public weather forecasts for our app, TV and newspapers. Each of these disciplines has a different focus when it comes to forecasting fog, and forecasters spend a lot of time thinking about how fog will affect the people they are forecasting for. Marine forecasters look for situations where extensive sea fog is likely. Radiation fog is less of a concern for our marine forecasters, but sometimes radiation fog can form over the likes of Manukau Harbour. Meanwhile, aviation forecasters spend a lot of time thinking about radiation fog, but if sea fog looks likely to move onto the coast, then they need to take this into consideration also. While the processes behind fog formation are generally well understood, fog can still be very difficult to forecast. Any error in the temperature or humidity forecasts can drastically change the likelihood of fog forming. To forecast fog perfectly, you would need a perfect forecast of the cloud cover, temperature, humidity and wind speed. You would also need to know whether the ground will be wet or frosty. Weather models alone are not currently able to resolve weather conditions on a small enough scale to accurately depict the risk of fog. Therefore it comes down to our meteorologists’ experience and knowledge, together with local observations of the conditions on the ground. Because of this, fog can be very challenging to forecast – but it also makes it more rewarding when you get a fog forecast just right! Tags: fog, radiation fog, advection fog, valley fog, post-frontal fog, evaporation fog, steaming fog

  • Am I at risk of experiencing a thunderstorm?
    by Lisa Murray on May 14, 2018 at 3:19 am

By MetService meteorologist April Clark. Whether you actively seek the thrill of a good lightning storm or you need to make sure your nervous dog is inside in your protective arms when a thunderstorm hits, MetService has got your back. MetService has an experienced team of specialist meteorologists who take care of what we call 'mesoscale forecasting' – that is, the forecasting of weather features that are very small in size but can be very intense in nature, e.g. thunderstorms, squalls etc. The relatively small size of these weather features is what makes communicating their potential threat to you and your nervous Labrador a challenge. Thunderstorms, by nature, are coy things. They require very specific conditions to ‘kick off’ and when they do it can be within localised areas. By localised we mean that a thunderstorm is small enough that, while you sip your cool lemonade under a sunny sky, 5 km down the road a cumulonimbus cloud (thunderstorm cloud) could be throwing 5 cm hail on your Aunt Betty’s house. Below is a bit of an explainer of how we forecast thunderstorms; feel free to skip ahead a couple of paragraphs for the nitty gritty on our Thunderstorm Charts. For thunderstorms to form they first need an atmosphere that is conducive to vigorous updrafts (i.e. air rising very quickly from the ground). These conditions arise when warm, moist air near the ground is overlaid by cooler, drier air aloft – in meteorologist speak this is called an unstable atmosphere. These conditions can happen in a number of ways. A well-known example happens during summer, when strong heating from the sun creates a much warmer air layer just above the ground, producing an unstable atmosphere. If this layer of warm air is also moist (often moisture is provided by a water source like the sea) then the atmosphere is ‘extra ripe’ for thunderstorms.
So, there is the potential for thunderstorms when an unstable atmosphere is present, but this doesn’t necessarily mean any will form. What we call a trigger is needed to ‘nudge’ off a thunderstorm. These thunderstorms are turning out to be quite hard work to get going! The trigger to start a thunderstorm can come from several sources. To name a few: fronts, which create their own updraft; sea breeze wind convergences, where ground-level winds from two directions meet and are forced upwards; mountains, which force the air up as it tries to pass over them; or features in the upper atmosphere, which can create the uplift needed near the ground to generate a thunderstorm. The trigger is often the trickiest part of thunderstorm forecasting, as a very subtle change in wind, temperature or frontal strength can be the difference between a crack of lightning and a non-event. Now you have an idea of what ‘princess and the pea’ type conditions are needed to produce a thunderstorm, let’s investigate how MetService relays the risk of thunderstorms to you at home. Three different risk charts are produced at MetService, depending on the certainty, the severity and how far ahead we expect a thunderstorm to affect an area. The first and most frequent type issued is called the Thunderstorm Outlook. These charts are issued every day in the morning with an update in the evening (or more frequently if required). As its name suggests, the Thunderstorm Outlook chart has the longest forecast range, highlighting areas anywhere in New Zealand with the potential for thunderstorm formation (both moderate and severe) for today and tomorrow. Three risk levels convey the likelihood of thunderstorms forming over the whole area, ranging from low to high. If you live within a shaded area that is under a moderate or high risk that day, it is likely that a thunderstorm will form somewhere within that shaded area, but it is still possible that you, sitting on your patio, will not see one.
This is due to the small and confined ‘mesoscale’ nature of a localised thunderstorm. For example, thundery showers caused by summer sea breezes can be very isolated, leading to the ‘sunny day at my place, hail at my Auntie’s 5 km down the road’ situation (more on this ‘afternoon convection’ here). However, if we have an active squall that moves across a region, you can expect more people to experience thunderstorm conditions. The second chart our meteorologists produce is called a Severe Thunderstorm Watch. This chart is much like the Thunderstorm Outlook but is only issued when severe thunderstorms are expected anywhere in New Zealand, and it usually has a shorter forecast outlook of 6-12 hours. Again, much like the Thunderstorm Outlook, these Thunderstorm Watches are forecasts, created to alert people to the possibility of a severe thunderstorm forming in their area. So, what makes a thunderstorm severe, you may be asking? For MetService, these heavyweights of the thunderstorm world must exhibit at least one of the following criteria: heavy rainfall of 25 mm/h or more, hailstones with a diameter of 20 mm or more, strong wind gusts of 110 km/h or more, or damaging tornadoes. The third and final chart issued for thunderstorms is the Severe Thunderstorm Warning. This chart is different from the two previous charts as it is only issued after a thunderstorm has formed and is deemed severe through careful analysis of volumetric (3-dimensional) radar data. Mesoscale meteorologists need a thunderstorm to be within about 150 kilometres of the radar to fully probe the structure of the storm and therefore categorically define it as severe. For this reason, a Severe Thunderstorm Warning will only be issued inside the deep blue areas in the right-hand image below. When issued, a Severe Thunderstorm Warning will include the current location of the thunderstorm, a cone for its forecast path and an indication of where the storm will be in 30 and 60 minutes’ time.
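The "at least one criterion" severity test described above can be sketched as a simple predicate. The parameter names are illustrative, not MetService's actual data fields:

```python
# Sketch of the severe-thunderstorm criteria quoted in this post: a storm is
# severe if it meets at least one of the four thresholds.
def is_severe(rain_mm_per_h=0.0, hail_diameter_mm=0.0, gust_kmh=0.0,
              tornado=False):
    """True if any single criterion is met (25 mm/h rain, 20 mm hail,
    110 km/h gusts, or a damaging tornado)."""
    return (rain_mm_per_h >= 25
            or hail_diameter_mm >= 20
            or gust_kmh >= 110
            or tornado)

print(is_severe(rain_mm_per_h=30))                   # True: rainfall criterion
print(is_severe(hail_diameter_mm=15, gust_kmh=90))   # False: below all thresholds
```

Note that the thresholds are "or", not "and": a storm producing 20 mm hail under otherwise modest conditions still qualifies as severe.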
Because the forecast time on these warnings is so short, they will usually cover a more specific area that our mesoscale meteorologists have deemed at high risk of seeing a thunderstorm. This means if you are within the Severe Thunderstorm Warning area, keep your Labrador inside or get out the popcorn – you’re likely to see some lightning. Though the chance of being caught under a severe thunderstorm in New Zealand is low, if you are unlucky enough to experience one unprepared, it can be dangerous, with lightning, large hail, flash flooding and squally winds among the effects of a severe thunderstorm. What can you do if you have a lead time of an hour or so? Quite a lot, in fact. A few simple examples are listed below. Torrential rain: If you're in a narrow watercourse or working in a stormwater drain, get out of it. Very strong winds: If you're up on the roof, get down; secure loose roofing iron and other potentially dangerous flying objects. Lightning: Go inside, or at least stay away from trees which are out in the open, and consider unplugging electrical appliances. Hail: If you're driving, be ready to slow down or stop, or put the car in the garage. Thunderstorms are incredible forces of nature, which both fascinate and frighten. Although their effects are often very localised, they can be extremely hazardous. Our highly skilled meteorologists use a range of charts to communicate thunderstorm risks and severity through a three-step alert process from Outlook to Watch to Warning. Now that you’ve read this blog, we hope you understand how we alert the public about the chance of thunderstorms from the forecasting bench! Tags: thunderstorm, downburst, downpour, Thunderstorm Warning, Thunderstorm Watch, Thunderstorm Outlook, updrafts, April Clark

  • Tropical Cyclone Hola Update - 11/03/18
    by Met_Team on March 11, 2018 at 3:25 am

    Tropical Cyclone Hola Update - 11/3/2018. By Meteorologists Andy Best and April Clark.

    This blog is a look at what has changed from yesterday's update, which can be found on this blog site.

    CURRENT STATUS OF CYCLONE ACTIVITY

    Tropical Cyclone Hola (971hPa, Category 3) was analysed near 21.4S 169.0E (near the Loyalty Islands of New Caledonia) at 1300 New Zealand time this afternoon and is moving southeast at 14 knots. Tropical Cyclone Hola is expected to track southwards, gradually weaken, and move out of the tropics on Sunday. The system is expected to undergo extra-tropical transition as it approaches 30S later on Sunday. Cyclone Hola is expected to be extra-tropical and track close to the upper North Island of New Zealand on Monday.

    Current models are predicting Cyclone Hola to swiftly skirt the northeast of the North Island during Monday, bringing severe weather to eastern parts of the upper North Island. Severe Weather Watches and Warnings have now been issued for Monday, with rain accumulations of as much as 150mm and wind gusts of up to 130km/h forecast for eastern regions from Northland to northern Hawkes Bay.

    Hola will bring severe gale force winds and heavy rain to northern and eastern parts of the North Island during Monday and the early hours of Tuesday, and could also cause coastal inundation for eastern areas from Northland to western Bay of Plenty, including Gisborne. The heaviest rain is expected in Northland, Great Barrier Island, Coromandel Peninsula and Gisborne, where a heavy rain Warning is now in force. In addition, gale southeasterlies are expected to develop from early Monday morning, then change southwesterly during Monday afternoon and evening. The strongest winds are expected in Northland, Auckland, Coromandel Peninsula, Bay of Plenty and Gisborne, and a strong wind Warning is in force for these areas.

    This system is compact but fast moving, meaning that those expected to be impacted should expect weather conditions to change rapidly, so keeping updated with the forecast, Civil Defence and the New Zealand Transport Agency is key.

    Another low was located near 6.7S 160.1E, near the Solomon Islands, at 1300 New Zealand time today. This low is expected to track into the northern Coral Sea over the next few days, with the risk of it developing into a tropical cyclone being LOW, but increasing to MODERATE from Tuesday next week.

    Tags: TC Hola, severe weather, TC, tropical cyclone

  • Tropical Cyclone Hola Update - 10/3/18
    by Met_Team on March 10, 2018 at 3:04 am

    Tropical Cyclone Hola Update - 10/3/2018. By Meteorologist Andy Best.

    This blog is a look at what has changed from yesterday's update, which can be found on this blog site. You can find further information on the Tropical Cyclone page.

    Severe Tropical Cyclone Hola remains a Category 3 TC at 2:10pm today New Zealand time, with winds close to its centre of around 130km/h and gales extending up to 260km from the centre. The TC is currently moving southeast at 25km/h. By 1am Sunday morning (NZ time) it is expected to weaken to a Category 2 TC and still lie north of latitude 25S. This means that the winds near the centre are expected to weaken to around 120km/h, and by 1am Monday morning to around 90km/h. TC Hola will continue southeast out of the tropics, and is expected to pass close to the upper North Island as a deep low during Monday and Tuesday.

    There is still some uncertainty with respect to Hola's forecast track, and the affected areas and associated confidence levels depicted are highly dependent on it. Relevant authorities and the public are advised to keep up to date with the latest forecasts from MetService. Hola will cause significant damage and disruption if it tracks across, or very near to, the North Island. This includes wind damage to structures and powerlines, large amounts of rain causing flooding, slips and damage to roads, and large waves affecting low-lying coastal areas.

    Based on Hola's current forecast track, there is high confidence of heavy rain and severe gales in Northland, Auckland, Coromandel Peninsula, Waikato, Bay of Plenty and Gisborne from early Monday through to overnight Monday. The confidence for heavy rain and severe gales is moderate during Monday and early Tuesday for the central North Island from Waitomo and Taranaki across to Hawkes Bay, including the Tongariro National Park and the Whanganui Hill Country, while the confidence is low for heavy rain in Wairarapa over the same period.

    It is very likely that Warnings and Watches associated with Hola will be issued tomorrow (Sunday March 11th). The Severe Weather Outlook map shows the areas we are currently concerned about. The MetService TC Bench are keeping a close eye on developments 24/7 and will issue Severe Weather Watches and Warnings for any areas which could see severe weather associated with this event, along with press releases and updates on social media. We'll post further updates on TC Hola in the coming days, and as always, you can keep up to date with the latest forecasts and warnings. You can also watch the latest Tropical Cyclone video.

    Tags: TC, Tropical Cyclone Hola, severe weather

  • An overview of a dataset digitized by citizen science volunteers – the 1900-1910 Daily Weather Reports
    by danaallen on April 6, 2020 at 5:46 am

    By: Philip Craig Two years ago the citizen science project Weather Rescue was used to digitize hand-written weather observations from the Met Office’s Daily Weather Reports from the years 1900-1910 (Figure 1). These were, as the name suggests, daily documents … Continue reading →

  • The Atmospheres of Two Exoplanets
    by danaallen on March 30, 2020 at 5:45 am

    By: Peter Cook One of the big discoveries of recent decades has been the finding of thousands of exoplanets, and it now seems that most stars have planets.  Remarkably, detailed measurements have now been made of some of these despite … Continue reading →

  • What is “net zero” for methane?
    by danaallen on March 23, 2020 at 5:45 am

    By: Bill Collins Recent research is suggesting that the way methane is accounted for in climate targets overemphasises its contribution to climate change at the end of the century. This might mean that countries or sectors (e.g. agriculture) with large … Continue reading →

  • Housebuilding ban on floodplains isn’t enough – flood-prone communities should take back control
    by danaallen on March 16, 2020 at 5:45 am

    By: Hannah Cloke February 2020 has brought more than its fair share of bad weather to the north of England, the Midlands and Wales. Shrewsbury, Bewdley and Telford swam in the Severn, while the Ouse invaded York. For some, the … Continue reading →

  • Meiyu—Baiu—Changma Rains
    by danaallen on March 9, 2020 at 5:45 am

    By: Amulya Chevuturi The arrival of the summer monsoon rains over southern China is called Meiyu, literally translated as “plum rains”. These are also called Baiu in Japan and Changma in Korea. As the monsoon progresses, these rain belts first occur … Continue reading →

