Thursday, April 9, 2009

Aerosols May Drive A Significant Portion Of Arctic Warming

Aerosols can influence climate directly by either reflecting or absorbing the sun's radiation as it moves through the atmosphere. The tiny airborne particles enter the atmosphere from sources such as industrial pollution, volcanoes and residential cooking stoves.


Though greenhouse gases are invariably at the center of discussions about global climate change, new NASA research suggests that much of the atmospheric warming observed in the Arctic since 1976 may be due to changes in tiny airborne particles called aerosols.

Emitted by natural and human sources, aerosols can directly influence climate by reflecting or absorbing the sun's radiation. The small particles also affect climate indirectly by seeding clouds and changing cloud properties, such as reflectivity.
A new study, led by climate scientist Drew Shindell of the NASA Goddard Institute for Space Studies, New York, used a coupled ocean-atmosphere model to investigate how sensitive different regional climates are to changes in levels of carbon dioxide, ozone, and aerosols.
The researchers found that the mid and high latitudes are especially responsive to changes in the level of aerosols. Indeed, the model suggests aerosols likely account for 45 percent or more of the warming that has occurred in the Arctic during the last three decades. The results were published in the April issue of Nature Geoscience.
Though there are several varieties of aerosols, previous research has shown that two types -- sulfates and black carbon -- play an especially critical role in regulating climate change. Both are products of human activity.
Sulfates, which come primarily from the burning of coal and oil, scatter incoming solar radiation and have a net cooling effect on climate. Over the past three decades, the United States and European countries have passed a series of laws that have reduced sulfate emissions by 50 percent. While improving air quality and aiding public health, the result has been less atmospheric cooling from sulfates.
At the same time, black carbon emissions have steadily risen, largely because of increasing emissions from Asia. Black carbon -- small, soot-like particles produced by industrial processes and the combustion of diesel and biofuels -- absorb incoming solar radiation and have a strong warming influence on the atmosphere.
In the modeling experiment, Shindell and colleagues compiled detailed, quantitative information about the relative roles of various components of the climate system, such as solar variations, volcanic events, and changes in greenhouse gas levels. They then ran through various scenarios of how temperatures would change as the levels of ozone and aerosols -- including sulfates and black carbon -- varied in different regions of the world. Finally, they teased out the amount of warming that could be attributed to different climate variables. Aerosols loomed large.
The regions of Earth that showed the strongest responses to aerosols in the model are the same regions that have witnessed the greatest real-world temperature increases since 1976. The Arctic region has seen its surface air temperatures increase by 1.5 C (2.7 F) since the mid-1970s. In the Antarctic, where aerosols play less of a role, the surface air temperature has increased about 0.35 C (0.6 F).
That makes sense, Shindell explained, because of the Arctic's proximity to North America and Europe. The two highly industrialized regions have produced most of the world's aerosol emissions over the last century, and some of those aerosols drift northward and collect in the Arctic. Precipitation, which normally flushes aerosols out of the atmosphere, is minimal there, so the particles remain in the air longer and have a stronger impact than in other parts of the world.
Since decreasing amounts of sulfates and increasing amounts of black carbon both encourage warming, temperature increases can be especially rapid. The build-up of aerosols also triggers positive feedback cycles that further accelerate warming as snow and ice cover retreat.
In the Antarctic, in contrast, the impact of sulfates and black carbon is minimized because of the continent's isolation from major population centers and the emissions they produce.
"There's a tendency to think of aerosols as small players, but they're not," said Shindell. "Right now, in the mid-latitudes of the Northern Hemisphere and in the Arctic, the impact of aerosols is just as strong as that of the greenhouse gases."
The growing recognition that aerosols may play a larger climate role can have implications for policymakers.
"We will have very little leverage over climate in the next couple of decades if we're just looking at carbon dioxide," Shindell said. "If we want to try to stop the Arctic summer sea ice from melting completely over the next few decades, we're much better off looking at aerosols and ozone."
Aerosols tend to be quite short-lived, residing in the atmosphere for just a few days or weeks. Greenhouse gases, by contrast, can persist for hundreds of years. Atmospheric chemists theorize that the climate system may be more responsive to changes in aerosol levels over the next few decades than to changes in greenhouse gas levels, which will have the more powerful effect in coming centuries.
"This is an important model study, raising lots of great questions that will need to be investigated with field research," said Loretta Mickley, an atmospheric chemist from Harvard University, Cambridge, Mass. who was not directly involved in the research. Understanding how aerosols behave in the atmosphere is still very much a work-in-progress, she noted, and every model needs to be compared rigorously to real life observations. But the science behind Shindell's results should be taken seriously.
"It appears that aerosols have quite a powerful effect on climate, but there's still a lot more that we need to sort out," said Shindell.
NASA's upcoming Glory satellite is designed to enhance our current aerosol measurement capabilities to help scientists reduce uncertainties about aerosols by measuring the distribution and microphysical properties of the particles.
________________________________________
Adapted from materials provided by NASA/Goddard Space Flight Center.


Sound From Exploding Volcanoes Compared With Jet Engines

Scripps researchers installed an array of microbarometers at Mount St. Helens in November 2004 to collect infrasound near the site


New research on infrasound from volcanic eruptions shows an unexpected connection with jet engines. Researchers at Scripps Institution of Oceanography at UC San Diego speeded up the recorded sounds from two volcanoes and uncovered a noise very similar to typical jet engines.

These new research findings provide scientists with a more useful probe of the inner workings of volcanic eruptions. Infrasound is sound that is lower in frequency than 20 cycles per second, below the limit of human hearing.
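The speed-up trick itself is easy to reproduce. Below is a minimal sketch using synthetic data only (not the Scripps recordings; the 250-samples-per-second rate, the 100x factor, and the 5 Hz tone are assumptions for illustration): playing samples back faster multiplies every frequency by the same factor, so a 5 Hz rumble played 100 times faster is heard at 500 Hz, well inside the audible range.

# Minimal illustration with synthetic data (not the Scripps recordings);
# the sample rate, speed-up factor, and 5 Hz tone are assumed for illustration.
import numpy as np
from scipy.io import wavfile

fs = 250                          # assumed microbarometer sample rate, in Hz
speedup = 100                     # assumed playback speed-up factor
t = np.arange(0, 60.0, 1.0 / fs)  # one minute of synthetic "infrasound"

# Stand-in for a volcanic trace: a 5 Hz tone buried in noise.
trace = np.sin(2 * np.pi * 5.0 * t) + 0.3 * np.random.randn(t.size)

# Declaring a sample rate of fs * speedup plays the data 100x faster,
# shifting 5 Hz up to an audible 500 Hz.
audio = np.int16(trace / np.abs(trace).max() * 32767)
wavfile.write("volcano_sped_up.wav", fs * speedup, audio)
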
The study, led by Robin Matoza, a graduate student at Scripps Oceanography, will be published in an upcoming issue of the journal Geophysical Research Letters, a publication of the American Geophysical Union (AGU). Matoza measured infrasonic sound from Mount St. Helens in Washington State and Tungurahua volcano in Ecuador, both of which are highly active volcanoes close to large population centers.
"We hypothesized that these very large natural volcanic jets were making very low frequency jet noise," said Matoza, who conducts research in the Scripps Laboratory for Atmospheric Acoustics.
Using 100-meter aperture arrays of microbarometers (similar to weather barometers but sensitive to smaller changes in atmospheric pressure) and low-frequency infrasonic microphones, the research team tested the hypothesis, revealing the physics of how the large-amplitude signals from eruptions are produced. Jet noise is generated by the turbulent flow of air out of a jet engine. Matoza and colleagues recorded very large-amplitude infrasonic signals during the times when ash-laden gas was being ejected from the volcano. The study concluded that these large-scale volcanic jets produce sound in a similar way to smaller-scale man-made jets.
"We can draw on this area of research to speed up our own study of volcanoes for both basic research interests, to provide a deeper understanding of eruptions, and for practical purposes, to determine which eruptions are likely ash-free and therefore less of a threat and which are loaded with ash," said Michael Hedlin, director of Scripps' Atmospheric Acoustics Lab and a co-author on the paper.
Large-amplitude infrasonic signals from volcanic eruptions are currently used in a prototype real-time warning system that informs the Volcanic Ash Advisory Center (VAAC) when large infrasonic signals have come from erupting volcanoes. Researchers hope this new information can improve hazard mitigation and inform pilots and the aviation industry.
"The more quantitative we can get about how the sound is produced the more information we can provide to the VAAC," said Matoza. "Eventually it could be possible to provide detailed information such as the size or flow rate of the volcanic jet to put into ash-dispersal forecasting models."
The paper's co-authors include D. Fee and M.A. Garcés of the Infrasound Laboratory at the University of Hawaii at Manoa; J.M. Seiner of the National Center for Physical Acoustics at the University of Mississippi; and P.A. Ramón of the Instituto Geofísico, Escuela Politécnica Nacional. The research study was funded by a National Science Foundation grant.
________________________________________
Adapted from materials provided by University of California - San Diego.


Saturday, April 4, 2009

Physical Activity May Strengthen Children's Ability To Pay Attention

Charles Hillman and Darla Castelli, professors of kinesiology and community health, have found that physical activity may increase students' cognitive control -- or ability to pay attention -- and also result in better performance on academic achievement tests.


As school districts across the nation revamped curricula to meet requirements of the federal “No Child Left Behind” Act, opportunities for children to be physically active during the school day diminished significantly.

Future mandates, however, might be better served by taking into account findings from a University of Illinois study suggesting the academic benefits of physical education classes, recess periods and after-school exercise programs. The research, led by Charles Hillman, a professor of kinesiology and community health and the director of the Neurocognitive Kinesiology Laboratory at Illinois, suggests that physical activity may increase students’ cognitive control – or ability to pay attention – and also result in better performance on academic achievement tests.
“The goal of the study was to see if a single acute bout of moderate exercise – walking – was beneficial for cognitive function in a period of time afterward,” Hillman said. “This question has been asked before by our lab and others, in young adults and older adults, but it’s never been asked in children. That’s why it’s an important question.”
For each of three testing criteria, researchers noted a positive outcome linking physical activity, attention and academic achievement.
Study participants were 9-year-olds (eight girls, 12 boys) who performed a series of stimulus-discrimination tests known as flanker tasks, to assess their inhibitory control.
On one day, students were tested following a 20-minute resting period; on another day, after a 20-minute session walking on a treadmill. Students were shown congruent and incongruent stimuli on a screen and asked to push a button to respond to incongruencies. During the testing, students were outfitted with an electrode cap to measure electroencephalographic (EEG) activity.
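For readers unfamiliar with the paradigm, the sketch below is a rough console illustration of flanker-style stimuli (congruent versus incongruent arrow arrays); it is not the protocol, timing, or response rules used in the Illinois study.

# Rough console illustration of flanker-style stimuli; not the actual
# protocol, timing, or response rules used in the study.
import random
import time

def make_trial():
    target = random.choice("<>")
    congruent = random.random() < 0.5
    flanker = target if congruent else ("<" if target == ">" else ">")
    return flanker * 2 + target + flanker * 2, target, congruent

for _ in range(5):
    stimulus, target, congruent = make_trial()
    start = time.time()
    response = input(f"{stimulus}   direction of the CENTER arrow (< or >): ").strip()
    rt = time.time() - start
    kind = "congruent" if congruent else "incongruent"
    result = "correct" if response == target else "error"
    print(f"{kind} trial: {result}, reaction time {rt:.2f} s")
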
“What we found is that following the acute bout of walking, children performed better on the flanker task,” Hillman said. “They had a higher rate of accuracy, especially when the task was more difficult. Along with that behavioral effect, we also found that there were changes in their event-related brain potentials (ERPs) – in these neuroelectric signals that are a covert measure of attentional resource allocation.”
One aspect of the neuroelectric activity of particular interest to researchers is a measure referred to as the P3 potential. Hillman said the amplitude of the potential relates to the allocation of attentional resources.
“What we found in this particular study is, following acute bouts of walking, children had a larger P3 amplitude, suggesting that they are better able to allocate attentional resources, and this effect is greater in the more difficult conditions of the flanker test, suggesting that when the environment is more noisy – visual noise in this case – kids are better able to gate out that noise and selectively attend to the correct stimulus and act upon it.”
In an effort to see how performance on such tests relates to actual classroom learning, researchers next administered an academic achievement test. The test measured performance in three areas: reading, spelling and math.
Again, the researchers noted better test results following exercise.
“And when we assessed it, the effect was largest in reading comprehension,” Hillman said. In fact, he said, “If you go by the guidelines set forth by the Wide Range Achievement Test, the increase in reading comprehension following exercise equated to approximately a full grade level.
“Thus, the exercise effect on achievement is not just statistically significant, but a meaningful difference.”
Hillman said he’s not sure why the students’ performance on the spelling and math portions of the test didn’t show as much of an improvement as did reading comprehension, but suspects it may be related to design of the experiment. Students were tested on reading comprehension first, leading him to speculate that too much time may have elapsed between the physical activity and the testing period for those subjects.
“Future attempts will definitely look at the timing,” he said. Subsequent testing also will introduce other forms of physical-activity testing.
“Treadmills are great,” Hillman said. “But kids don’t walk on treadmills, so it’s not an externally valid form of exercise for most children. We currently have an ongoing project that is looking at treadmill walking at the same intensity relative to a Wii Fit game – which is a way in which kids really do exercise.”
Still, given the preliminary study’s positive outcomes on the flanker task, ERP data and academic testing, study co-author Darla Castelli believes these early findings could be used to inform useful curricular changes.
“Modifications are very easy to integrate,” Castelli said. For example, she recommends that schools make outside playground facilities accessible before and after school.
“If this is not feasible because of safety issues, then a school-wide assembly containing a brief bout of physical activity is a possible way to begin each day,” she said. “Some schools are using the Intranet or internal TV channels to broadcast physical activity sessions that can be completed in each classroom.”
Among Castelli’s other recommendations for school personnel interested in integrating physical activity into the curriculum:

  • scheduling outdoor recess as a part of each school day;
  • offering formal physical education 150 minutes per week at the elementary level, 225 minutes at the secondary level;
  • encouraging classroom teachers to integrate physical activity into learning.
An example of how physical movement could be introduced into an actual lesson would be “when reading poetry (about nature or the change of seasons), students could act like falling leaves,” she said.
The U. of I. study appears in the current issue of the journal Neuroscience. Along with Castelli and Hillman, co-authors are U. of I. psychology professor Art Kramer and kinesiology and community health graduate student Mathew Pontifex and undergraduate Lauren Raine.
________________________________________
Adapted from materials provided by University of Illinois at Urbana-Champaign.


Rocket Launches May Need Regulation To Prevent Ozone Depletion, Says Study

A Delta rocket launches from NASA's Kennedy Space Center carrying Mars Phoenix lander in 2007.


The global market for rocket launches may require more stringent regulation in order to prevent significant damage to Earth's stratospheric ozone layer in the decades to come, according to a new study by researchers in California and Colorado.

Future ozone losses from unregulated rocket launches will eventually exceed ozone losses due to chlorofluorocarbons, or CFCs, which stimulated the 1987 Montreal Protocol banning ozone-depleting chemicals, said Martin Ross, chief study author from The Aerospace Corporation in Los Angeles. The study, which includes the University of Colorado at Boulder and Embry-Riddle Aeronautical University, provides a market analysis for estimating future ozone layer depletion based on the expected growth of the space industry and known impacts of rocket launches.
"As the rocket launch market grows, so will ozone-destroying rocket emissions," said Professor Darin Toohey of CU-Boulder's atmospheric and oceanic sciences department. "If left unregulated, rocket launches by the year 2050 could result in more ozone destruction than was ever realized by CFCs."
A paper on the subject by Ross and Manfred Peinemann of The Aerospace Corporation, CU-Boulder's Toohey and Embry-Riddle Aeronautical University's Patrick Ross appeared online in March in the journal Astropolitics.
Since some proposed space efforts would require frequent launches of large rockets over extended periods, the new study was designed to bring attention to the issue in hopes of sparking additional research, said Ross. "In the policy world uncertainty often leads to unnecessary regulation," he said. "We are suggesting this could be avoided with a more robust understanding of how rockets affect the ozone layer."
Current global rocket launches deplete the ozone layer by no more than a few hundredths of 1 percent annually, said Toohey. But as the space industry grows and other ozone-depleting chemicals decline in the Earth's stratosphere, the issue of ozone depletion from rocket launches is expected to move to the forefront.
Today, just a handful of NASA space shuttle launches release more ozone-depleting substances into the stratosphere than the entire annual U.S. use of CFC-based medical inhalers, which were used to treat asthma and other diseases and are now banned, said Toohey. "The Montreal Protocol has left out the space industry, which could have been included."
Highly reactive trace-gas molecules known as radicals dominate stratospheric ozone destruction, and a single radical in the stratosphere can destroy up to 10,000 ozone molecules before being deactivated and removed from the stratosphere. Microscopic particles, including soot and aluminum oxide particles emitted by rocket engines, provide chemically active surface areas that increase the rate such radicals "leak" from their reservoirs and contribute to ozone destruction, said Toohey.
In addition, every type of rocket engine causes some ozone loss, and rocket combustion products are the only human sources of ozone-destroying compounds injected directly into the middle and upper stratosphere where the ozone layer resides, he said.
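The "one radical, up to 10,000 ozone molecules" figure reflects catalytic chemistry. As one standard textbook example (not specific to this study), a chlorine radical destroys ozone and is regenerated, ready to react again, until it is finally locked up in a reservoir species and removed:

% Standard chlorine catalytic cycle (textbook chemistry, not from the study):
% the Cl radical is regenerated, so one radical can destroy many O3 molecules.
\begin{align*}
  \mathrm{Cl} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2} \\
  \mathrm{ClO} + \mathrm{O} &\rightarrow \mathrm{Cl} + \mathrm{O_2} \\
  \text{net:}\qquad \mathrm{O_3} + \mathrm{O} &\rightarrow 2\,\mathrm{O_2}
\end{align*}
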
Although U.S. science agencies spent millions of dollars to assess the ozone loss potential from a hypothetical fleet of 500 supersonic aircraft -- a fleet that never materialized -- much less research has been done to understand the potential range of effects the existing global fleet of rockets might have on the ozone layer, said Ross.
Since 1987 CFCs have been banned from use in aerosol cans, freezer refrigerants and air conditioners. Many scientists expect the stratospheric ozone layer -- which absorbs more than 90 percent of harmful ultraviolet radiation that can harm humans and ecosystems -- to return to levels that existed prior to the use of ozone-depleting chemicals by the year 2040.
Rockets around the world use a variety of propellants, including solids, liquids and hybrids. Ross said while little is currently known about how they compare to each other with respect to the ozone loss they cause, new studies are needed to provide the parameters required to guide possible regulation of both commercial and government rocket launches in the future.
"Twenty years may seem like a long way off, but space system development often takes a decade or longer and involves large capital investments," said Ross. "We want to reduce the risk that unpredictable and more strict ozone regulations would be a hindrance to space access by measuring and modeling exactly how different rocket types affect the ozone layer."
The research team is optimistic that a solution to the problem exists. "We have the resources, we have the expertise, and we now have the regulatory history to address this issue in a very powerful way," said Toohey. "I am optimistic that we are going to solve this problem, but we are not going to solve it by doing nothing."
The research was funded by the National Science Foundation, NASA and The Aerospace Corporation.
________________________________________
Adapted from materials provided by University of Colorado at Boulder.


Tuesday, March 31, 2009

Action Video Games Improve Vision, New Research Shows

This is a photo illustrating 58 percent better contrast perception versus "regular" contrast perception.



Video games that involve high levels of action, such as first-person-shooter games, improve a player's real-world vision, according to research published in Nature Neuroscience on March 29.

The ability to discern slight differences in shades of gray has long been thought to be an attribute of the human visual system that cannot be improved. But Daphne Bavelier, professor of brain and cognitive sciences at the University of Rochester, has discovered that very practiced action gamers become 58 percent better at perceiving fine differences in contrast.
"Normally, improving contrast sensitivity means getting glasses or eye surgery—somehow changing the optics of the eye," says Bavelier. "But we've found that action video games train the brain to process the existing visual information more efficiently, and the improvements last for months after game play stopped."
The finding builds on Bavelier's past work showing that action video games decrease visual crowding and increase visual attention. Contrast sensitivity, she says, is the primary limiting factor in how well a person can see. Bavelier says the findings show that action video game training may be a useful complement to eye-correction techniques, since game training may teach the visual cortex to make better use of the information it receives.
To learn whether high-action games could affect contrast sensitivity, Bavelier, in collaboration with graduate student Renjie Li and colleagues Walt Makous, professor of brain and cognitive sciences at the University of Rochester, and Uri Polat, professor at the Eye Institute at Tel Aviv University, tested the contrast sensitivity function of 22 students, then divided them into two groups: One group played the action video games "Unreal Tournament 2004" and "Call of Duty 2." The second group played "The Sims 2," which is a richly visual game, but does not include the level of visual-motor coordination of the other group's games. The volunteers played 50 hours of their assigned games over the course of 9 weeks. At the end of the training, the students who played the action games showed an average 43% improvement in their ability to discern close shades of gray—close to the difference she had previously observed between game players and non-game players—whereas the Sims players showed none.
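As a concrete, purely illustrative picture of how a contrast threshold can be measured, here is a toy 2-down/1-up staircase run on a simulated observer. It is not the psychophysical procedure used in the Rochester study, and every number in it (starting contrast, step size, the observer's threshold) is made up.

# Toy 2-down/1-up staircase for a contrast-detection threshold with a
# simulated observer. Illustration only; not the study's procedure, and
# all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
true_threshold = 0.02          # the simulated observer's "real" threshold
contrast, step = 0.10, 1.25    # starting contrast and multiplicative step size
streak, last_move, reversals = 0, None, []

while len(reversals) < 10:
    # Toy psychometric function: more contrast, more likely a correct answer.
    p_correct = 1.0 - 0.5 * np.exp(-(contrast / true_threshold) ** 2)
    correct = rng.random() < p_correct
    if correct:
        streak += 1
        move = "down" if streak == 2 else None
    else:
        streak, move = 0, "up"
    if move:
        if last_move is not None and move != last_move:
            reversals.append(contrast)
        last_move = move
        contrast = contrast / step if move == "down" else contrast * step
        if move == "down":
            streak = 0

print(f"estimated contrast threshold ~= {np.mean(reversals[-6:]):.3f}")
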
"To the best of our knowledge, this is the first demonstration that contrast sensitivity can be improved by simple training," says Bavelier. "When people play action games, they're changing the brain's pathway responsible for visual processing. These games push the human visual system to the limits and the brain adapts to it, and we've seen the positive effect remains even two years after the training was over."
Bavelier says that the findings suggest that despite the many concerns about the effects of action video games and the time spent in front of a computer screen, that time may not necessarily be harmful, at least for vision.
Bavelier is now taking what she has learned with her video game research and collaborating with a consortium of researchers to look into treatments for amblyopia, a problem caused by poor transmission of the visual image to the brain.
This research was funded by the National Eye Institute and the Office of Naval Research.
________________________________________
Adapted from materials provided by University of Rochester, via EurekAlert!, a service of AAAS.


Ice Storms Devastating To Pecan Orchards

This is the aftermath of an ice storm in a pecan grove near Eufaula, Okla.


Ice storms and other severe weather can have devastating impacts on agricultural crops, including perennial tree crops. Major ice storms occur at least once a decade, with truly catastrophic "icing events" recorded once or twice a century within a broad belt extending from eastern Texas through New England. Ice storms can result in overwhelming losses to orchards and expensive cleanup for producers.
Because the long limbs of pecan trees act as levers and increase the likelihood of breakage, pecan orchards and groves are particularly susceptible to damage from tornadoes, hurricanes, and ice storms. Ice damage is typically more severe in pecan orchards than other orchard crops.

Oklahoma has 85,740 acres of pecans on 2,879 farms. Ice storms struck Oklahoma four times from 2000 through 2007. The crippling ice storm in December 2000, which hit the southeast quarter of Oklahoma, extended into parts of Texas, Louisiana, and Arkansas. An estimated 25,000 to 30,000 acres of pecans were damaged in Oklahoma during this storm alone.
Michael W. Smith from the Department of Horticulture and Landscape Architecture at Oklahoma State University, and Charles T. Rohla of the Samuel Roberts Noble Foundation published a research report in the latest issue of HortTechnology that provides pecan producers, government agencies, and insurance companies with important information concerning orchard management and economics following destructive ice storms.
Cleanup of pecan orchards following ice damage presents enormous challenges for producers. Typical damage, cleanup, and recovery from four ice storms that hit the region from 2000 to 2007 were reported in the study. Trees less than 15 feet tall typically had the least damage; trees 15 to 30 feet tall incurred as much or more damage than larger trees and cleanup costs were greater.
The silver lining: pecan trees are resilient. Most trees can survive and eventually return to productivity following loss of most of their crown. But cleanup costs to ice-damaged pecan orchards are high, ranging from $207 to $419 per acre based on the dollar value in 2008. According to the researchers, these costs were consistent among orchards where the owner supervised the labor and had the resources to obtain equipment necessary to prune and remove debris from the orchard. The cleanup costs paid to "custom operators" for renovating orchards following ice storms were significantly more expensive, ranging from $500 to $800 per acre in 2008 for orchards with similar damage levels.
Explaining the outcomes of the research study, Smith stated: "Following damaging weather events, producers seek information concerning effective cleanup procedures, subsequent management, recovery duration, and economic impact. State and Federal agencies and insurance companies seek guidance concerning economic impact and how to assist producers. Our objective was to provide information for producers and others regarding the impact of an ice storm on pecans."
________________________________________
Adapted from materials provided by American Society for Horticultural Science, via EurekAlert!, a service of AAAS.





Mice And Humans Should Have More In Common In Clinical Trials

Purdue researcher Joseph Garner found that traditional testing methods in mice increase errors in lab results. His study suggests researchers vary the environmental conditions for mice during tests to lessen the possibility of false positives.



Just as no two humans are the same, a Purdue University scientist has shown that treating mice more as individuals in laboratory testing cuts down on erroneous results and could significantly reduce the cost of drug development.

Mice have long been used as test subjects for treatments and drugs before those products are approved for human testing. But new research shows that the customary practice of standardizing mice by trying to limit environmental variation in laboratories actually increases the chance of getting an incorrect result.
The study, done by Joseph Garner, a Purdue assistant professor of animal sciences, and professor Hanno Würbel of the Justus-Liebig University of Giessen in Germany, was published in the early online edition of Nature Methods on Monday (March 30). It suggests scientists should change their methods and test mice in deliberately varying environmental conditions. Garner said that will decrease the number of false positive test results and eliminate further costly testing of drugs or treatments destined to fail.
"In lab animals, we have this bizarre idea that we can control everything that happens," Garner said. "But we would never be able to do that with humans, and we wouldn't want to. You want to know if a drug is going to work in all people, so you test it on a wide range of different people. We should do the same thing with mice."
Garner said human testing uses a broad range of subjects, giving scientists an idea of how a drug or treatment might affect different types of people. But scientists often use mice that are basically genetically identical and try to limit internal and external environmental factors such as stress, diet and age to eliminate variables affecting the outcome.
Garner said there is no practical way to ensure that all environmental conditions are the same with mice, however, because they respond to cues humans cannot detect. For example, a researcher's odor in one lab might cause more stress for a mouse than another researcher's odor in a second lab with different mice, giving different results. But scientists, unaware of the odor difference, may believe a treatment worked when the mice were actually responding to an environmental cue, giving a false positive.
The study used three different strains of mice from previously published data and compared their behavioral characteristics against each other. The observations were done in three different labs, two different types of cages and at three different times to make 18 different replicates of the same experiment. Traditional testing theories say the results should have been the same in all those experiments.
Once the results were compared, however, the researchers found many false positives, or instances when one strain appeared to act differently from another when it actually should not.
"There were nearly 10 times more false positives than we would expect by chance," Garner said. "There had to be a gremlin causing these false positives."
The researchers suspected the problem was in the traditional lab experiment design. So they reevaluated the data, picking a mouse of each strain from each environment - similar to matching pairs in human clinical trials - and found only the same number of false positives as would be expected by chance.
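The statistical point can be demonstrated with a toy simulation (a sketch under assumed sizes and effect scales, not the published analysis): give two strains no true difference, let each environment add its own strain-by-environment quirk, and compare a fully standardized design with one that spreads mice across environments.

# Toy simulation of the design argument (assumed sizes and effect scales,
# not the published analysis): with no true strain difference, a fully
# standardized design lets environment-by-strain quirks masquerade as
# strain effects, while spreading mice across environments keeps false
# positives near the nominal 5 percent.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_envs, n_mice, alpha = 2000, 6, 12, 0.05

def false_positive_rate(heterogenized):
    hits = 0
    for _ in range(n_sims):
        # Each environment nudges each strain in its own way (odor, handler,
        # cage type, ...); there is no real strain effect to detect.
        quirk = rng.normal(0.0, 1.0, size=(n_envs, 2))
        if heterogenized:
            # One mouse of each strain from every environment, analyzed as
            # per-environment differences (akin to matched pairs).
            noise = rng.normal(0.0, 1.0, size=(n_envs, 2))
            diffs = (quirk[:, 0] + noise[:, 0]) - (quirk[:, 1] + noise[:, 1])
            p = stats.ttest_1samp(diffs, 0.0).pvalue
        else:
            # Standardized design: every mouse lives in the same environment.
            env = rng.integers(0, n_envs)
            a = quirk[env, 0] + rng.normal(0.0, 1.0, n_mice)
            b = quirk[env, 1] + rng.normal(0.0, 1.0, n_mice)
            p = stats.ttest_ind(a, b).pvalue
        hits += p < alpha
    return hits / n_sims

print("standardized design :", false_positive_rate(False))   # far above 0.05
print("heterogenized design:", false_positive_rate(True))    # close to 0.05

In the standardized arm the shared environmental quirk shifts the two group means apart, so an ordinary t-test flags far more than 5 percent of null experiments; sampling one mouse of each strain from every environment and analyzing per-environment differences brings the false-positive rate back toward the nominal level.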
When mouse testing creates a false positive, leading a researcher to believe a drug has worked, the drug could be sent to further animal testing and human clinical trials at a cost of millions of dollars. Drugs that fail in clinical trials cannot be marketed, and the money is wasted. To recoup those losses, drug companies must increase the costs of marketable drugs.
"Drugs aren't expensive because they're costly to make," Garner said. "They're expensive because the company has to recoup the costs of the other drugs that have failed in human clinical trials. Numbers are hard to estimate, but for every drug that reaches the marketplace, well over 100 have been abandoned at some point in their development."
Garner said giving mice varying environments also could be better for the animals because fewer could be used. Weeding out an unsuccessful drug would eliminate an unnecessary second round of animal testing.
"The really exciting message is that we have shown how the false positives in early drug discovery can be drastically reduced without costing anything more than a change in experimental design," Garner said. "These are positive results for pharmaceutical research, patients and for mice."
Garner and Würbel, along with Würbel's doctoral student Helene Richter, received research funding from the German Research Foundation. Their research will now focus on which environmental factors have the most impact on results.
________________________________________
Adapted from materials provided by Purdue University.


Hollow Gold Nanospheres Show Promise For Biomedical And Other Applications

Partial view of a gold nanosphere (shown), magnified by a factor of one billion, as seen through an electron microscope. The darker ring shows the "wall" of the nanosphere, while the lighter area to the right of the ring shows the interior region of the shell.



A new metal nanostructure developed by researchers at the University of California, Santa Cruz, has already shown promise in cancer therapy studies and could be used for chemical and biological sensors and other applications as well.
The hollow gold nanospheres developed in the laboratory of Jin Zhang, a professor of chemistry and biochemistry at UCSC, have a unique set of properties, including strong, narrow, and tunable absorption of light. Zhang is collaborating with researchers at the University of Texas M. D. Anderson Cancer Center, who have used the new nanostructures to target tumors for photothermal cancer therapy. They reported good results from preclinical studies earlier this year (Clinical Cancer Research, February 1, 2009).

Zhang will describe his lab's work on the hollow gold nanospheres in a talk on Sunday, March 22, at the annual meeting of the American Chemical Society in Salt Lake City.
"What makes this structure special is the combination of the spherical shape, the small size, and the strong absorption in visible and near infrared light," Zhang said. "The absorption is not only strong, it is also narrow and tunable. All of these properties are important for cancer treatment."
Zhang's lab is able to control the synthesis of the hollow gold nanospheres to produce particles with consistent size and optical properties. The hollow particles can be made in sizes ranging from 20 to 70 nanometers in diameter, which is an ideal range for biological applications that require particles to be incorporated into living cells. The optical properties can be tuned by varying the particle size and wall thickness.
In the cancer studies, led by Chun Li of the M. D. Anderson Cancer Center, researchers attached a short peptide to the nanospheres that enabled the particles to bind to tumor cells. After injecting the nanospheres into mice with melanoma, the researchers irradiated the animals' tumors with near-infrared light from a laser, heating the gold nanospheres and selectively killing the cancer cells to which the particles were bound.
Cancer therapy was not the goal, however, when Zhang's lab began working several years ago on the synthesis and characterization of hollow gold nanospheres. Zhang has studied a wide range of metal nanostructures to optimize their properties for surface-enhanced Raman scattering (SERS). SERS is a powerful optical technique that can be used for sensitive detection of biological molecules and other applications.
Adam Schwartzberg, then a graduate student in Zhang's lab at UCSC, initially set out to reproduce work reported by Chinese researchers in 2005. In the process, he perfected the synthesis of the hollow gold nanospheres, then demonstrated and characterized their SERS activity.
"This process is able to produce SERS-active nanoparticles that are significantly smaller than traditional nanoparticle structures used for SERS, providing a sensor element that can be more easily incorporated into cells for localized intracellular measurements," Schwartzberg, now at UC Berkeley, reported in a 2006 paper published in Analytical Chemistry.
The collaboration with Li began when Zhang heard him speak at a conference about using solid nanoparticles for photothermal cancer therapy. Zhang immediately saw the advantages of the hollow gold nanospheres for this technique. Li uses near-infrared light in the procedure because it provides good tissue penetration. But the solid gold nanoparticles he was using do not absorb near-infrared light efficiently. Zhang told Li he could synthesize hollow gold nanospheres that absorb light most efficiently at precisely the wavelength (800 nanometers) emitted by Li's near-infrared laser.
"The heat that kills the cancer cells depends on light absorption by the metal nanoparticles, so more efficient absorption of the light is better," Zhang said. "The hollow gold nanospheres were 50 times more effective than solid gold nanoparticles for light absorption in the near-infrared."
Zhang's group has been exploring other nanostructures that can be synthesized using the same techniques. For example, graduate student Tammy Olson has designed hollow double-nanoshell structures of gold and silver, which show enhanced SERS activities compared to the hollow gold nanospheres.
The ability to tune the optical properties of the hollow nanospheres makes them highly versatile, Zhang said. "It is a unique structure that offers true advantages over other nanostructures, so it has a lot of potential," he said.
________________________________________
Adapted from materials provided by University of California - Santa Cruz, via EurekAlert!, a service of AAAS.


Food Choices Evolve Through Information Overload

Just as information overload leads people to repeatedly choose what they know, the same concept applies to hundreds of animal species, too, new research shows.

Ever been so overwhelmed by a huge restaurant menu that you end up choosing an old favourite instead of trying something new?
Psychologists have long thought that information overload leads to people repeatedly choosing what they know. Now, new research has shown that the same concept applies equally to hundreds of animal species, too.
Researchers from the University of Leeds have used computer modelling to examine the evolution of specialisation, casting light on why some animal species have evolved to eat one particular type of food. For example, some aphids choose to eat garden roses but not other plants that would offer similar nutritional value.
"This is a major leap forward in our understanding of the way in which animals interact with their environment," says lead researcher Dr Colin Tosh from the University's Faculty of Biological Sciences. "Our computer models show the way in which neural networks operate in different environments. They have made it possible for us to see how different species make decisions, based on what's happening – or in this case, which foods are available - around them."
Despite the prevalence of specialisation in the animal kingdom, very little is known about why it occurs. The work conducted at Leeds has provided strong evidence in support of the 'neural limitations' hypothesis put forward by academics in the 1990s. This hypothesis, derived from human psychology, is based on the concept of information overload.
"There are several hypotheses to explain specialisation: one suggests that animals adapt to eat certain foods and this prevents them from eating other types of food," says Dr Tosh.
"For example, cows have evolved flat teeth which allow them to chew grass but they are unable to efficiently process meat. However, the problem with these hypotheses is that they don't apply across the board. Some species – such as many plant eating insects – have evolved to specialise even though there are many other available foods they could eat perfectly well."
This is the first study to provide a realistic representation of neural information processing in animals and of how these processes interact with the environment. The research team believe that it could also have major implications for predicting the effects of environmental change.
"A good example of a struggling specialist is the giant panda, which relies on high mountain bamboo," says Dr Tosh. "In understanding how neural processes work, we may be able to gain an insight into how future environmental conditions – such as the dying out of particular types of plants - may affect a range of different animal species that utilise them for food."
This research was funded by the Natural Environment Research Council in the UK.
________________________________________
Adapted from materials provided by University of Leeds, via EurekAlert!, a service of AAAS.

Wednesday, March 11, 2009

Genetic Study Finds Treasure Trove Of New Lizards

New species of gecko that was once thought to be Diplodactylus tessellatus.


University of Adelaide research has discovered that there are many more species of Australian lizards than previously thought, raising new questions about conservation and management of Australia's native reptiles.
PhD student Paul Oliver, from the University's School of Earth and Environmental Sciences, has done a detailed genetic study of the Australian gecko genus Diplodactylus and found more than twice the recognised number of gecko species, from 13 species to 29. This study was done in collaboration with the South Australian Museum and Western Australian Museum.

"Many of these species are externally very similar, leading to previous severe underestimation of true species diversity," says Mr Oliver.
"One of the major problems for biodiversity conservation and management is that many species remain undocumented.
"This problem is widely acknowledged to be dire among invertebrates and in developing countries.
"But in this group of vertebrates in a developed nation, which we thought we knew reasonably well, we found more than half the species were unrecognised."
Mr Oliver says this has great significance for conservation. For instance, what was thought to be a single very widespread species of gecko has turned out to be eight or nine separate species with much narrower, more restricted habitats, which may make them much more vulnerable to environmental change, he says.
"This completely changes how we look at conservation management of these species," he says.
"Even at just the basic inventory level, this shows that there is a lot of work still to be done. Vertebrate taxonomy clearly remains far from complete with many species still to be discovered. This will require detailed genetic and morphological work, using integrated data from multiple sources. It will require considerable effort and expense but with potentially rich returns."
The research was supported by grants from the Australia Pacific Science Foundation and the Australian Biological Resources Study.
________________________________________
Adapted from materials provided by University of Adelaide.


Coral Reefs May Start Dissolving When Atmospheric Carbon Dioxide Doubles

Coral reef. If carbon dioxide reaches double pre-industrial levels, coral reefs can be expected to not just stop growing, but also to begin dissolving all over the world.


Rising carbon dioxide in the atmosphere and the resulting effects on ocean water are making it increasingly difficult for coral reefs to grow, say scientists. A study to be published online March 13, 2009 in Geophysical Research Letters by researchers at the Carnegie Institution and the Hebrew University of Jerusalem warns that if carbon dioxide reaches double pre-industrial levels, coral reefs can be expected to not just stop growing, but also to begin dissolving all over the world.
The impact on reefs is a consequence of both ocean acidification caused by the absorption of carbon dioxide into seawater and rising water temperatures. Previous studies have shown that rising carbon dioxide will slow coral growth, but this is the first study to show that coral reefs can be expected to start dissolving just about everywhere in just a few decades, unless carbon dioxide emissions are cut deeply and soon.

"Globally, each second, we dump over 1000 tons of carbon dioxide into the atmosphere and, each second, about 300 tons of that carbon dioxide is going into the oceans," said co-author Ken Caldeira of the Carnegie Institution's Department of Global Ecology, testifying to the U.S. House of Representatives Subcommittee on Insular Affairs, Oceans and Wildlife of the Committee on Natural Resources on February 25, 2009. "We can say with a high degree of certainty that all of this CO2 will make the oceans more acidic – that is simple chemistry taught to freshman college students."
The study was designed to determine the impact of this acidification on coral reefs. The research team, consisting of Jacob Silverman, Caldeira, and Long Cao of the Carnegie Institution as well as Boaz Lazar and Jonathan Erez from The Hebrew University of Jerusalem, used field data from coral reefs to determine the effects of temperature and water chemistry on coral calcification rates. Armed with this information, they plugged the data into a computer model that calculated global seawater temperature and chemistry at different atmospheric levels of CO2, ranging from the pre-industrial value of 280 ppm (parts per million) to 750 ppm. The current atmospheric concentration is over 380 ppm and is rising rapidly due to human-caused emissions, primarily the burning of fossil fuels.
Based on the model results for more than 9,000 reef locations, the researchers determined that at the highest concentration studied, 750 ppm, acidification of seawater would reduce calcification rates of three quarters of the world's reefs to less than 20% of pre-industrial rates. Field studies suggest that at such low rates, coral growth would not be able to keep up with dissolution and other natural as well as manmade destructive processes attacking reefs.
Prospects for reefs are even gloomier when the effects of coral bleaching are included in the model. Coral bleaching refers to the loss of symbiotic algae that are essential for healthy growth of coral colonies. Bleaching is already a widespread problem, and high temperatures are among the factors known to promote bleaching. According to their model the researchers calculated that under present conditions 30% of reefs have already undergone bleaching and that at CO2 levels of 560 ppm (twice pre-industrial levels) the combined effects of acidification and bleaching will reduce the calcification rates of all the world's reefs by 80% or more. This lowered calcification rate will render all reefs vulnerable to dissolution, without even considering other threats to reefs, such as pollution.
"Our fossil-fueled lifestyle is killing off coral reefs," says Caldeira. "If we don't change our ways soon, in the next few decades we will destroy what took millions of years to create."
"Coral reefs may be the canary in the coal mine," he adds. "Other major pieces of our planet may be similarly threatened because we are using the atmosphere and oceans as dumps for our CO2 pollution. We can save the reefs if we decide to treat our planet with the care it deserves. We need to power our economy with technologies that do not dump carbon dioxide into the atmosphere or oceans."
________________________________________
Adapted from materials provided by Carnegie Institution, via EurekAlert!, a service of AAAS.


Teenage Boys Who Eat Fish At Least Once A Week Achieve Higher Intelligence Scores

New research has found that 15-year-old males who ate fish at least once a week displayed higher cognitive skills at the age of 18 than those who ate it less frequently.


Fifteen-year-old males who ate fish at least once a week displayed higher cognitive skills at the age of 18 than those who ate it less frequently, according to a study of nearly 4,000 teenagers published in the March issue of Acta Paediatrica.
Eating fish once a week was enough to increase combined, verbal and visuospatial intelligence scores by an average of six per cent, while eating fish more than once a week increased them by just under 11 per cent.
Swedish researchers compared the responses of 3,972 males who took part in the survey with the cognitive scores recorded in their Swedish Military Conscription records three years later.

"We found a clear link between frequent fish consumption and higher scores when the teenagers ate fish at least once a week" says Professor Kjell Torén from the Sahlgrenska Academy at the University of Gothenburg, one of the senior scientists involved in the study. "When they ate fish more than once a week the improvement almost doubled.
"These findings are significant because the study was carried out between the ages of 15 and 18 when educational achievements can help to shape the rest of a young man's life."
The research team found that:
  • 58 per cent of the boys who took part in the study ate fish at least once a week and a further 20 per cent ate fish more than once a week.
  • When male teenagers ate fish more than once a week their combined intelligence scores were on average 12 per cent higher than those who ate fish less than once a week. Teenagers who ate fish once a week scored seven per cent higher.
  • The verbal intelligence scores for teenagers who ate fish more than once a week were on average nine per cent higher than those who ate fish less than once a week. Those who ate fish once a week scored four per cent higher.
  • The same pattern was seen in the visuospatial intelligence scores, with teenagers who ate fish more than once a week scoring on average 11 per cent higher than those who ate fish less than once a week. Those who ate fish once a week scored seven per cent higher.
"A number of studies have already shown that fish can help neurodevelopment in infants, reduce the risk of impaired cognitive function from middle age onwards and benefit babies born to women who ate fish during pregnancy" says Professor Torén.

"However we believe that this is the first large-scale study to explore the effect on adolescents."
The exact mechanism that links fish consumption to improved cognitive performance is still not clear.
"The most widely held theory is that it is the long-chain polyunsaturated fatty acids found in fish that have positive effects on cognitive performance" explains Professor Torén.
"Fish contains both omega-3 and omega-6 fatty acids which are known to accumulate in the brain when the foetus is developing. Other theories have been put forward that highlight their vascular and anti-inflammatory properties and their role in suppressing cytokines, chemicals that can affect the immune system."
In order to isolate the effect of fish consumption on the study subjects, the research team looked at a wide range of variables, including ethnicity, where they lived, their parents' educational level, the teenagers' well-being, how frequently they exercised and their weight.
"Having looked very carefully at the wide range of variables explored by this study it was very clear that there was a significant association between regular fish consumption at 15 and improved cognitive performance at 18" concludes lead author Dr Maria Aberg from the Centre for Brain Repair and Rehabilitation at the University of Gothenburg.
"We also found the same association between fish and intelligence in the teenagers regardless of their parents' level of education."
The researchers are now keen to carry out further research to see if the kind of fish consumed - for example lean fish in fish fingers or fatty fish such as salmon - makes any difference to the results.
"But for the time being it appears that including fish in a diet can make a valuable contribution to cognitive performance in male teenagers" says Dr Aberg.
________________________________________
Adapted from materials provided by Wiley-Blackwell, via EurekAlert!, a service of AAAS.




Gray Wolves No Longer To Be Listed As Threatened And Endangered Species In Western Great Lakes, Portion Of Northern Rockies

Two gray wolves.


Secretary of the Interior Ken Salazar has affirmed on March 6 the decision by the U.S. Fish and Wildlife Service to remove gray wolves from the list of threatened and endangered species in the western Great Lakes and the northern Rocky Mountain states of Idaho and Montana and parts of Washington, Oregon and Utah. Wolves will remain a protected species in Wyoming.
“The recovery of the gray wolf throughout significant portions of its historic range is one of the great success stories of the Endangered Species Act,” Salazar said. “When it was listed as endangered in 1974, the wolf had almost disappeared from the continental United States. Today, we have more than 5,500 wolves, including more than 1,600 in the Rockies.”
“The successful recovery of this species is a stunning example of how the Act can work to keep imperiled animals from sliding into extinction,” he said. “The recovery of the wolf has not been the work of the federal government alone. It has been a long and active partnership including states, tribes, landowners, academic researchers, sportsmen and other conservation groups, the Canadian government and many other partners.”

The Fish and Wildlife Service originally announced the decision to delist the wolf in January, but the new administration decided to review the decision as part of an overall regulatory review when it came into office. The Service will now send the delisting regulation to the Federal Register for publication.
The Service decided to delist the wolf in Idaho and Montana because they have approved state wolf management plans in place that will ensure the conservation of the species in the future.
At the same time, the Service determined that wolves in Wyoming would still be listed under the Act because Wyoming’s current state law and wolf management plan are not sufficient to conserve its portion of the northern Rocky Mountain wolf population.
Gray wolves were previously listed as endangered in the lower 48 states, except in Minnesota where they were listed as threatened. The Service oversees three separate recovery programs for the gray wolf; each has its own recovery plan and recovery goals based on the unique characteristics of wolf populations in each geographic area.
Wolves in other parts of the 48 states, including the Southwest wolf population, remain endangered and are not affected by the actions taken today.

About Northern Rocky Mountain Wolves
The northern Rocky Mountain Distinct Population Segment includes all of Montana, Idaho and Wyoming, the eastern one-third of Washington and Oregon, and a small part of north-central Utah. The minimum recovery goal for wolves in the northern Rocky Mountains is at least 30 breeding pairs and at least 300 wolves for at least three consecutive years, a goal that was attained in 2002 and has been exceeded every year since. There are currently about 95 breeding pairs and 1,600 wolves in Montana, Idaho, and Wyoming.
The Service believes that with approved state management plans in place in Montana and Idaho, all threats to the wolf population will be sufficiently reduced or eliminated in those states. Montana and Idaho will always manage for more than 15 breeding pairs and 150 wolves per state, and their target population levels are about 400 wolves in Montana and 500 in Idaho.
As a result of a July 18, 2008, decision by a United States District Court in Montana, the Service reexamined Wyoming’s law, management plan and implementing regulations. While the Service has approved wolf management plans in Montana and Idaho, it has determined that Wyoming’s state law and wolf management plan are not sufficient to conserve Wyoming’s portion of a recovered northern Rocky Mountain wolf population. Therefore, even though Wyoming is included in the northern Rocky Mountain Distinct Population Segment, the subpopulation of gray wolves in Wyoming is not being removed from the protection of the Endangered Species Act.
Continued management under the Endangered Species Act by the Service will ensure that wolves in Wyoming will be conserved. Acting U.S. Fish and Wildlife Service Director Rowan Gould said the Service will continue to work with the State of Wyoming in developing its state regulatory framework so that the state can continue to maintain its share of a recovered northern Rocky Mountain population. Once adequate state regulatory mechanisms are in place, the Service could propose removing the Act’s protections for wolves in Wyoming. National parks and the Wind River Reservation in Wyoming already have adequate regulatory mechanisms in place to conserve wolves. However, at this time, wolves will remain protected as a nonessential, experimental population under the ESA throughout the state, including within the boundaries of the Wind River Reservation and national park and refuge units.

Western Great Lakes Region
The Service’s delisting of the gray wolf also applies to wolves in the Western Great Lakes Distinct Population Segment (DPS). As a result of another legal ruling, issued by a United States District Court in Washington, D.C., on September 29, 2008, the Service reexamined its legal authorization to simultaneously identify and delist a population of wolves in the western Great Lakes. The Service today reissued the delisting decision to address the Court’s concerns.
The DPS boundary encompasses the states of Minnesota, Wisconsin and Michigan, as well as parts of North Dakota, South Dakota, Iowa, Illinois, Indiana and Ohio. The DPS includes all the areas currently occupied by wolf packs in Minnesota, Michigan, and Wisconsin, nearby areas in those states in which wolf packs may become established in the future, and surrounding areas into which wolves may disperse but are not likely to establish packs.
Rebounding from a few hundred wolves in Minnesota in the 1970s, when the species was listed as endangered, the region’s gray wolf population now numbers about 4,000 and occupies large portions of Wisconsin, Michigan and Minnesota. Wolf numbers in the three states have exceeded the numerical recovery criteria established in the species’ recovery plan for several years. In Minnesota, the population is estimated at 2,922. The estimated wolf population in Wisconsin is a minimum of 537, and about 520 wolves are believed to inhabit Michigan’s Upper Peninsula.
The Michigan, Minnesota, and Wisconsin Departments of Natural Resources have developed plans to guide future wolf management. The Service has determined that these plans establish a sufficient basis for long-term wolf management. They address issues such as protective regulations, control of problem animals, possible hunting and trapping seasons, and the long-term health of the wolf population, with management governed by the appropriate state or tribe.
The Service will monitor the delisted wolf populations for a minimum of five years to ensure that they continue to sustain their recovery. At the end of the monitoring period, the Service will decide if relisting, continued monitoring or ending Service monitoring is appropriate.
________________________________________
Adapted from materials provided by U.S. Department of the Interior.


Big-hearted Fish Reveals Genetics Of Cardiovascular Condition

Enlarged heart of a 48-hour-post-fertilization zebrafish embryo lacking the gene for ccm2. Nuclei of endothelial cells are shown in red and the junctions between them in green.


Researchers at the University of Pennsylvania School of Medicine have unlocked the mystery of a puzzling human disease and gained insight into cardiovascular development, all thanks to a big-hearted fish.
Mark Kahn, MD, Associate Professor of Medicine, graduate student Benjamin Kleaveland, and colleagues report in the February issue of Nature Medicine that a human vascular condition called Cerebral Cavernous Malformation (CCM) is caused by leaky junctions between cells in the lining of blood vessels. By combining studies with zebrafish and mice, the researchers found that the aberrant junctions are the result of mutated or missing proteins in a novel biochemical process, the so-called Heart-of-glass (HEG)-CCM pathway.
The HEG-CCM pathway "is essential to regulate endothelial cell-cell interaction, both during the time that vertebrates make the cardiovascular system and later in life," says Kahn. "Its loss later in life confers this previously unexplained disease, cerebral cavernous malformation."
CCM proteins, along with the receptor HEG, are responsible for building properly formed blood and lymphatic vessels during embryonic development by sealing the cell-cell junctions in the walls of vessels; loss of any of these proteins disrupts those seals, causing leaky vasculature.

Cerebral Cavernous Malformations are abnormal clusters of leaky blood vessels, typically in the brain, which can cause both seizures and strokes. The condition affects about 1 in 1,000 people, about 20% of whom carry a genetic predisposition for it. Researchers had already identified the genes responsible for the disease (indeed, they were named CCM1, CCM2, and CCM3 in recognition of that fact), but not what those genes did.
That's where the big-hearted fish come in. Several years ago, another research team discovered that mutations in CCM1, CCM2, or HEG (which had not previously been linked to CCM) caused zebrafish to develop enlarged hearts. Sensing that this observation could help unlock the mystery of what CCM proteins do, Kleaveland decided to see if these results could be extended to mice.
"Our notion was to take the zebrafish developmental studies and use the mouse as a way of bridging between what appeared to be a role in heart development in fish and blood vessel disease in people," says Kahn.
Kleaveland genetically engineered mice that both completely lack the HEG protein and produce diminished amounts of CCM2. This combination of genetic defects is fatal for the mice; they die during embryonic development. But examination of their cardiovascular systems, as well as those of genetically altered fish, revealed several key findings, Kleaveland says.
First, loss of HEG produces cardiovascular defects—mainly leakiness—in the heart, in blood vessels in the lung, and in the lymphatic system. Second, loss of HEG with partial loss of CCM2 produces a worse cardiovascular defect—failure to even form critical blood vessels. Third, all of these defects are characterized by malformed cell-cell junctions in the endothelial cells that line these organs. And finally, HEG actually physically interacts with CCM proteins.
"It looks like the disease is a reflection of a disruption in endothelial cell-cell junctions, and this pathway is required to regulate them," Kahn says.
These data underscore the evolutionary significance of the biochemical process underlying CCM. "With millions of years of evolution between fish and mammals, genes typically acquire new roles and lose old roles," Kahn explains. "When things are that conserved, it just tends to mean that it's a highly important and central process, and it probably also tells us that whatever it's doing is fundamental to blood vessels and the whole cardiovascular system."
The study, Kahn adds, addresses a debate in the field over whether the defects that cause CCM reside in the affected endothelial cells themselves or in the cells that surround them, such as neurons in the brain.
"We think the developmental model has shown us that the requirement is in the endothelial cell," he says.
Now Kahn, Kleaveland, and their colleagues are working to determine just what HEG does at endothelial cell-cell junctions, including which proteins it "talks" to on adjacent endothelial cells, and to build a true mouse model of CCM disease.
The mice in this study died in utero, but CCM disease tends to affect humans in their 30s and older. With a good model, however, "you could watch the progression of it, and you could try to change that progression, essentially to treat a mouse,” Kleaveland says.
The research was funded by the National Institutes of Health, the Swiss National Science Foundation, and the European Community, and involved researchers from the University of California, San Diego, Columbia University Medical Center, New York, and the University of Basel, Switzerland.
________________________________________
Adapted from materials provided by University of Pennsylvania School of Medicine.




Inactivity Of Proteins Behind Longer Shelf Life When Freezing



Frozen biological material, for example food, can be kept for a long time without perishing. A study by researchers at the University of Gothenburg, Sweden, is close to providing answers as to why.
A cell's proteins are programmed to carry out various biological functions. A protein's level of activity, and its ability to carry out these functions successfully, depends on the amount of water surrounding it. Dry proteins, for example, are completely inactive. A critical amount of water is required for activity to begin; beyond that point, the protein's activity increases as the amount of surrounding water increases. Proteins achieve full biological activity when the surrounding water weighs approximately the same as the protein itself.
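Expressed as a rough rule of thumb, the relationship could be sketched as follows. This is purely illustrative: the critical hydration value used below is a hypothetical placeholder, not a figure from the study, while the 1.0 threshold reflects the statement that full activity is reached when water and protein have roughly equal weight.

    # Illustrative sketch of the hydration-activity relationship described above.
    # critical_ratio is a hypothetical placeholder; 1.0 g water per g protein
    # corresponds to the "equal weight" condition for full activity.
    def protein_activity(water_mass_g, protein_mass_g, critical_ratio=0.2):
        hydration = water_mass_g / protein_mass_g  # grams of water per gram of protein
        if hydration == 0:
            return "inactive (dry protein)"
        if hydration < critical_ratio:
            return "inactive (below critical hydration)"
        if hydration < 1.0:
            return "partially active (activity rises with hydration)"
        return "fully active"

    # Hypothetical example: equal weights of water and protein give full activity.
    print(protein_activity(water_mass_g=1.0, protein_mass_g=1.0))  # fully active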

Researchers at the University of Gothenburg and Chalmers University of Technology, together with a group of American researchers, have used advanced experimental techniques to study how movements in the water that surrounds a protein cause movements in the protein itself. The study, published in the journal PNAS, indicates that the dynamics of the surrounding water have a direct effect on the protein's dynamics, which in turn should affect its activity.
The results explain, for example, why biological material such as foodstuffs or research material can be stored at low temperatures for a long period of time without perishing.
"When the global movements in the surrounding water freeze, then significant movements within the protein also come to a stop. This results in the protein being preserved in a state of minimum energy and biological activity comes to a stop," says researcher Helén Jansson at the Swedish NMR Centre, University of Gothenburg, Sweden.
________________________________________
Adapted from materials provided by University of Gothenburg.

