An am/pm store in Osaka City, Japan.

By analyzing 100 million receipts from 1,000 Japanese am/pm convenience stores, researchers have discovered strong economic inequality among shoppers. Among their findings is that the top 25% and the top 2% of the customers in a given group account for 80% and 25% of a store's sales, respectively.

The researchers, Takayuki Mizuno from Hitotsubashi University, and Masahiro Toriyama, Takao Terano, and Misako Takayasu from the Tokyo Institute of Technology, performed the "econophysics" study, the first to use a large, recently published point-of-sale (POS) database to analyze individual shopping habits. They presented their results at the APFA6 international conference, and have submitted their study to the Proceedings of the APFA6.

"The data we analyzed from 100 million receipts is much larger than the data in previous similar studies in the marketing field," Mizuno said. "We scientifically studied the inequality of wealth distribution by using results with high statistical precision."

One question the researchers investigated was how much a store's sales depend on a few loyal customers. They analyzed data from customers who paid with an Edy card (a pre-paid card with a unique ID), used on about 5% of the receipts. The researchers observed the 80/20 rule: about 80% of the store's sales come from about 20% of its customers.

The observation is not too surprising, as the so-called "Pareto principle" has been observed in a wide variety of economic phenomena, such as wealth distribution (20% of a nation's citizens own 80% of its wealth). The principle has also become a rule of thumb for business owners trying to maximize profits by focusing on the high-spending 20% of their customers.

Using this data, the researchers were also able to quantify the inequality of wealth distribution with the "Gini coefficient." A perfectly equal distribution corresponds to a coefficient of zero, and if one person were to purchase everything, the coefficient would be one. In a market economy, the coefficient is usually less than 0.4; in the convenience store data, the estimated coefficient was 0.7, implying significant inequality in the store's sales.

"We think that the inequality of wealth distribution is representative of the larger society," Mizuno explained. "When Sega Corporation introduced the Edy into game arcades, a similar wealth distribution was observed in the game arcades. Also, when we conducted hearing investigations with employees of convenience stores, many thought that most customers have similar buying behaviors and that the wealth distribution doesn't have a fat tail. Therefore, they were surprised at this research result."

The researchers also analyzed an individual's spending in a single shopping trip, and found that this expenditure follows a power law. Specifically, for receipt totals above about 100 yen (about US $0.86), the probability of an individual spending a given amount decreases as a power function of that amount. This probability is independent of store location, the shopper's age, and the time of day.

The researchers hope that the results may assist convenience store chains in developing marketing strategies aimed at the high-spending shoppers who contribute significantly to a store's sales.

"When a product that a customer needs is out of stock in a store, the customer often stops going to the store in the future," Mizuno explained. "Conversely, when there are products that a new visitor needs, the store can get new regular customers. It is important that sellers always display products that the high-spending shoppers always buy. They can execute this strategy by using the purchase history that can be observed from Edy IDs."

More information: Mizuno, Takayuki, Toriyama, Masahiro, Terano, Takao, and Takayasu, Misako. "Pareto law of the expenditure of a person in convenience stores." arXiv:0710.1432v1, 7 Oct 2007.

Citation: Shoppers' Spending Habits Follow Well-Known Economic Law (2007, October 26), retrieved 18 August 2019.

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
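The three quantities discussed above (the 80/20 rule, the Gini coefficient, and the power-law tail) can be illustrated with a short sketch on synthetic data. The sample size and the Pareto exponent below are illustrative assumptions, not values from the study; the exponent is chosen so that, in expectation, the top 20% of customers account for roughly 80% of sales, and the corresponding theoretical Gini coefficient, 1/(2*alpha - 1), comes out near the 0.7 reported in the article.

```python
import random

random.seed(0)

# Synthetic per-customer spending drawn from a Pareto (power-law) distribution.
# alpha = 1.16 gives an expected top-20% share of 0.2**(1 - 1/alpha) ~ 0.8.
alpha = 1.16
spending = [100 * random.paretovariate(alpha) for _ in range(100_000)]

def gini(values):
    """Gini coefficient: 0 = perfect equality, 1 = one person buys everything."""
    xs = sorted(values)
    n = len(xs)
    weighted_cumsum = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted_cumsum / (n * sum(xs)) - (n + 1) / n

def top_share(values, fraction):
    """Fraction of total sales contributed by the top `fraction` of customers."""
    xs = sorted(values, reverse=True)
    k = int(len(xs) * fraction)
    return sum(xs[:k]) / sum(xs)

print(f"Sample Gini coefficient:  {gini(spending):.2f}")
print(f"Top 20% share of sales:   {top_share(spending, 0.20):.0%}")
```

Because the tail is very heavy at this exponent, the sample estimates fluctuate noticeably from run to run; the qualitative point, a Gini well above the ~0.4 typical of market economies, is stable.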

Plastic cards with security features are ubiquitous these days, with uses ranging from credit cards to employee badges and licenses. Many carry holographic images, but those are relatively easy to tamper with. Now researchers at SABIC Innovative Plastics and GE Global Research have developed a new class of thermoplastic holographic materials that embed holograms within the plastic of the card itself, making them virtually impossible to copy or alter.

The new "Secure ID Technology" will be much more secure than current approaches because the holograms are built into the volume of the plastic rather than being stamped on the surface. The system has been developed by GE Global Research with SABIC Innovative Plastics and will have wider applications than just cards, because the new class of holographic materials can be shaped, cast into film, or injection molded into plastics.

Holograms are recorded within the thermoplastic-based holographic material, which can then be processed like a normal plastic and laminated within the card itself. A single card can carry multiple holograms embedded in the plastic for maximum flexibility, personalization, and security.

The GE Global Research blog shows an example of a 3-D image of a face that rotates as the card is tilted. Holograms can also include binary images, images of fingerprints, or even animations, all of which give the authenticating card an unprecedented level of security.

SABIC's Vice President of Technology, Tom Stanley, said the new technology could be used to authenticate all kinds of electronic devices, such as cell phones and laptops, and numerous other kinds of consumer goods, apart from ID cards and credit cards.

General Electric employs over 300,000 people in more than 100 countries. SABIC (Saudi Basic Industries Corporation) Innovative Plastics employs around 9,000 people in 25 countries, and manufactures and supplies thermoplastic coatings, resins and other products around the globe. SABIC Innovative Plastics and GE Global Research have been working on the system for over six years, and hope to commercialize the new holographic materials within the next two or three years.

© 2009

Citation: Security ID cards with built-in holograms (w/ Video) (2009, December 1), retrieved 18 August 2019.

An unpiloted, air-breathing spaceplane that takes off from an airport runway, carries up to 30 passengers, and costs less than one-tenth as much to launch into space as a conventional rocket could be ready to fly in 10 years, according to its developers, Reaction Engines of Oxfordshire, UK. Although the spaceplane is currently in the proof-of-concept phase, the country's new UK Space Agency is hosting a workshop this week to discuss developing the spaceplane commercially. If successful, the spaceplane could be the first single-stage-to-orbit craft to reach orbit.

The spaceplane, called Skylon, is 82 meters long and has a 25-meter wingspan. Like an airplane, it takes off and lands horizontally from a typical airport runway. Traveling at speeds of up to Mach 25, the vehicle could reach altitudes of 460 km (285 miles). It could carry payloads of up to 12 tonnes (twice that of a normal rocket), as well as about 30 passengers.

Skylon has no external rockets; it is propelled by two hybrid air-breathing/rocket engines that burn liquid hydrogen and liquid oxygen. In the first phase, the vehicle combines air from the atmosphere with on-board liquid hydrogen to reach speeds of Mach 5.5. In the second phase, on-board liquid hydrogen and liquid oxygen propel the vehicle to orbital velocities of Mach 25.

Before take-off, the spaceplane weighs 275 tonnes, but only 55 tonnes when landing. The weight difference is due to the on-board fuel: at take-off, the vehicle carries about 66 tonnes of liquid hydrogen and 150 tonnes of liquid oxygen. Before re-entering the atmosphere, any unused liquid hydrogen is evaporated and vented overboard, since re-entry is easier for lighter vehicles.

It will cost an estimated $12 billion to develop the spaceplane (about what it costs to develop an Airbus jet). Each launch would then cost about $10 million, compared with roughly $150 million for a rocket launch. The company predicts that a two-week trip to orbit would cost tourists about $500,000 per seat.

For these reasons, Reaction Engines expects that Skylon could replace the space shuttles that travel to the International Space Station, as well as revolutionize space travel and open the way to space-based industry. The company predicts a market for up to 70 reusable Skylon spaceplanes worldwide. "You can imagine a situation when some of our industrially important but polluting processes are done in space and the finished products are brought back down to Earth," said Richard Varvill, technical director and one of the founders of Reaction Engines.

Skylon takes off and lands on a normal runway, reducing the launch cost. Image credit: Reaction Engines.
Skylon can reach speeds of up to Mach 25 and altitudes of up to 460 km (285 miles). Image credit: Reaction Engines.

More information: The Engineer and Daily Mail

© 2010

Citation: Spaceplane that takes off from airport runway could be ready in 10 years (2010, September 21), retrieved 18 August 2019.
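The mass and cost figures quoted in the article can be cross-checked with simple arithmetic. All numbers below come from the text; the small gap between the implied post-burn mass and the quoted 55-tonne landing mass is consistent with the "about" rounding of the fuel loads and the hydrogen vented before re-entry.

```python
# Figures quoted in the article (tonnes and millions of USD).
TAKEOFF_MASS_T = 275   # mass at take-off
LANDING_MASS_T = 55    # quoted mass at landing
LH2_T = 66             # liquid hydrogen carried at take-off
LOX_T = 150            # liquid oxygen carried at take-off

propellant_t = LH2_T + LOX_T
propellant_fraction = propellant_t / TAKEOFF_MASS_T
implied_landing_t = TAKEOFF_MASS_T - propellant_t

print(f"Propellant carried: {propellant_t} t "
      f"({propellant_fraction:.0%} of take-off mass)")
print(f"Mass with all propellant expended: {implied_landing_t} t "
      f"(quoted landing mass: {LANDING_MASS_T} t)")

# The "less than one-tenth the launch cost" claim.
SKYLON_LAUNCH_M = 10    # per-launch cost, $M
ROCKET_LAUNCH_M = 150   # approximate conventional rocket launch cost, $M
print(f"Skylon launch cost: {SKYLON_LAUNCH_M / ROCKET_LAUNCH_M:.1%} "
      f"of a rocket launch")
```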

In yet another novel use for graphene, researchers from Seoul National University have devised a method of creating transparent loudspeakers by printing them onto a special kind of plastic using an ordinary inkjet printer. Jyongsik Jang and coworkers describe the process in Chemical Communications.

Graphene, a single layer of graphite, has been in the news a great deal since it was first isolated by Andre Geim and Konstantin Novoselov (work that won them the Nobel Prize in Physics last year), owing to its unique properties. Some have even suggested it will completely revolutionize the technology field. To create loudspeakers from graphene, the researchers used a simple four-step process.

The first step, as described in the paper, was to synthesize graphene oxide (GO) using a method previously demonstrated by other researchers. Next, the GO was exfoliated in water using sound waves, to prevent the inkjet printer nozzles from clogging, and the result was flushed with water to remove any impurities. After that, an empty inkjet printer cartridge was thoroughly cleaned and the graphene "ink" inserted into it.

To create the surface on which to print the ink, a low-temperature oxygen plasma treatment was applied to the surface of a piece of poly(vinylidene fluoride) (PVDF).

Next, the newly created ink was printed onto the treated PVDF (repeatedly, on both sides) using a commercially available inkjet printer, creating graphene electrodes. The output was immediately immersed in a hydrazine and ammonia solution (in a vacuum) for 3 minutes. This completed the graphene portion of the speaker; the rest of the project consisted mostly of hooking up the ordinary acoustic electronics found in regular speakers.

The resulting speakers exploit the piezoelectric effect: the applied signal causes the PVDF to distort, creating sound waves. The research team suggests the speakers could be used as window or computer-screen speakers, or even as a means of damping external noise by running anti-noise waves through them.

Because the process uses readily available materials (the graphite flakes used to make the ink were simply purchased from a vendor) and is relatively simple and straightforward, the resulting speakers are expected to be inexpensive as well. Jang readily concedes, though, that the product isn't ready for prime time just yet; the sound quality leaves much to be desired, especially the bass tones, a problem the team is already hard at work trying to solve.

More information: Flexible and transparent graphene films as acoustic actuator electrodes using inkjet printing, Keun-Young Shin, Jin-Yong Hong and Jyongsik Jang, Chem. Commun., 2011, Advance Article, DOI: 10.1039/C1CC12913A

Abstract: Flexible and transparent graphene films have been fabricated via inkjet printing and vapor deposition (VDP) methods, and the graphene-based acoustic actuator could be used as an extremely thin and lightweight loudspeaker.

© 2010

Citation: Korean researchers use graphene to create transparent loudspeakers (2011, July 12), retrieved 18 August 2019.

Archeologists and historians have long known that it wasn't really Christopher Columbus who discovered America: Native Americans had been living all over North, Central and South America long before he arrived, having come from Asia across the frozen-over Bering Sea in the west. But now it appears Europeans might have been first to arrive on the scene after all. Stone tools found recently in Delaware, Maryland and Virginia in the eastern United States all bear a striking resemblance to tools used by Stone Age peoples in early Europe, and have been dated to between 19,000 and 26,000 years ago, a period during which Stone Age people were making such tools, and long before the early Asians arrived.

It's not an implausible theory, suggest Dennis Stanford of the Smithsonian Institution and Bruce Bradley of the University of Exeter, because Stone Age people could have come from Europe by traveling across the ice-bound North Atlantic during the Ice Age. The evidence is further bolstered by the recent discovery that an ancient knife found in Virginia in 1971 was made of flint that originated in France. The two have coauthored a book on the subject, Across Atlantic Ice.

Stanford and Bradley also point out the lack of evidence of any human activity in the northeast part of Siberia or in Alaska earlier than 15,500 years ago. And the reason the early Asians won out, evolving into the people now called Native Americans, was that their window of opportunity was much wider: 15,000 years versus just 4,500 for the early Europeans. Thus the original Native Americans were either assimilated or killed by the large numbers of migrating Asians. Evidence that it was likely the former has been found in the DNA of skeletons of North American Native American people. Also, the language of several Native American tribes doesn't seem to have originated in Asia.

The two also say that it's conceivable that Stone Age people could have traveled such a long way over ice from Europe to America because there would have been more than enough food to be had from the ocean. It all adds up, the two say, to a compelling case for Stone Age travelers being credited as the people who truly did discover America.

© 2011 PhysOrg.com

Citation: European style stone tools suggest Stone Age people actually discovered America (2012, February 29), retrieved 18 August 2019.

Windows 8 screenshot

Microsoft architects must wake up to the smell of burning blogs once again. While not everyone may have, or want, Windows 8, the situation is neither good for the brand nor at all good for the people who do have it: Windows 8 already has a security vulnerability, in which the built-in Internet Explorer puts users at risk of exploitation via the Flash plugin. Windows 8 for PCs won't be available until next month, so who would this affect? Windows 8 has been released to hardware manufacturers, and some users also have Windows 8 for evaluation purposes.

Last month, Adobe released a batch of critical security updates for Flash Player. Those updates were available for other browsers, but Microsoft has yet to release the update for IE10 in Windows 8, and will not until well into October. The problem is that Flash is built right into IE10. How convenient? How inconvenient, as only Microsoft can deliver the updates, and users may have to wait for them. Internet Explorer 10's bundled Flash leaves users exploitable; the flaw can cause Flash to crash, with the attacker wresting control over the system.

How could that happen? The answer appears to lie in the timing between Adobe's and Microsoft's responses. The troublesome, now out-of-date version of Flash was baked into Windows 8: Microsoft decided to add Adobe's Flash Player to the browser as a built-in component instead of as a third-party plugin. So when Adobe patched Flash on August 21 to resolve known security flaws, the standalone version used by Firefox could be patched, but not the embedded version in Internet Explorer.

Microsoft is aware of the timing disconnect. According to a Microsoft response, while the current version of Flash in the "Windows 8 RTM build" does not have the latest fix, a security update will come through Windows Update in the GA timeframe. RTM refers to "release to manufacturing"; GA, or "general availability," refers to the target date of October 26, when Windows 8 will go on sale. Critics note that Microsoft is thus talking about fixing something two months after Adobe released its critical security update for the same problem. That puts users of Windows 8 in danger. "If you're using Internet Explorer 10 on any version of Windows 8, including the RTM bits available via MSDN or TechNet and the enterprise preview, you are at risk," warned Ed Bott on ZDNet.

Adobe had already classified this as an important patch. Its statement said, "This update resolves vulnerabilities being targeted, or which have a higher risk of being targeted, by exploit(s) in the wild for a given product version and platform. Adobe recommends administrators install the update as soon as possible. (for instance, within 72 hours)."

The Flash security flaw in this instance involves Windows 8, which is not yet in widespread use. Still, technology watchers hope the situation sends a stronger message: users will always appreciate aligned timing between Adobe and Microsoft when it comes to browser updates and security patches. Outside Microsoft, several technology sites are advising early Windows 8 users to disable the built-in Flash player for now.

© 2012 Phys.org

Citation: Flash in Windows 8 RTM build is missing latest fix (2012, September 8), retrieved 18 August 2019.

In the 4th century, the Romans built a special glass cup, called the Lycurgus cup, that changes colors depending on which way light shines through it. The glass is made with finely ground silver and gold dust that produces a dichroic, or color-changing, effect. Although the makers of the Lycurgus cup likely did not know the mechanism responsible for the color-changing glass, today scientists recognize the mechanism as surface plasmon resonance, and it is a phenomenon that continues to hold great scientific interest.

In a new study published in the Proceedings of the National Academy of Sciences, Yunuen Montelongo, et al., at the University of Cambridge in the UK, have used surface plasmon resonance as a new way to construct holograms. Similar to the Lycurgus cup, the new holograms can change colors due to light scattering off silver nanoparticles of specific sizes and shapes. Due to their ability to simultaneously create two colors and to store large amounts of information, the new holograms could have applications in 3D displays and information storage devices, among others.

"This experiment is inspired by the very unique optical properties shown by the Lycurgus cup," Montelongo said. "This exceptional piece changes in color according to the position of the light source. If illuminated from one side it looks green, but if it is illuminated from the other it becomes red. In contrast to other dichroic effects produced by some crystals, such as precious opals, the colorful effects of the Lycurgus cup have little dependence on the position of the observer. In fact, the dichroism found in the Lycurgus cup has a different origin than crystals, and so far this 'plasmonic effect' has not been observed in naturally occurring materials."

Although there are several different ways to construct holograms, almost all traditional holograms are single-color, and the multicolor holograms that do exist face limitations. For instance, no methodology exists that can produce multicolor holograms from a surface. Here, the researchers demonstrated that it is possible to construct multicolor holograms from a single plane. The new holograms consist of precisely engineered silver nanoparticles patterned over a substrate.

Different types of nanoparticles, which scatter light at different wavelengths, are used to create a multicolored hologram. Credit: Montelongo, et al. ©2014 PNAS

A key difference in the new holograms is the smaller size of the diffraction fringes, which control the light wavelength interference. In traditional holograms, these fringes are larger than half the wavelength of light. In contrast, the fringes here are replaced with nanoparticles smaller than half the wavelength of light. The researchers showed that the narrower-band diffraction, which creates the colorful effects, is produced by plasmonic-enhanced optical scattering off the nanostructures.

The subwavelength distance offers certain advantages. For instance, two different types of plasmonic nanoparticles can be multiplexed (combined but not coupled) at subwavelength distances. By using silver nanoparticles with different shapes and sizes, the researchers could control the colors. In addition to providing multiple colors, multiplexing two nanoparticles increases the bandwidth information limits. The researchers showed that each nanoparticle carries independent information, such as polarization and wavelength, which can be controlled simultaneously. With twice the number of nanoparticles, the total amount of binary information stored can exceed the traditional limits of diffraction.

"It has been shown that nanoparticles with resonant properties can be uncoupled over subwavelength distances so their electromagnetic fields have minimal interaction," Montelongo said. "The device presented demonstrates that these nanoparticles can store and transfer independent information beyond the diffraction limits, which is in contrast to nonresonant structures. Because of the nature of this phenomenon, it has been possible to demonstrate, for the first time, a hologram that projects color images in 180 degrees. This projection is so wide that it is not even possible to display it on a plane, and a diffusive sphere should be used."

The new multicolored holograms offer a wide field of view, projecting images in 180°, which is better displayed on a sphere than on a plane. Credit: Montelongo, et al. ©2014 PNAS

These features make the new hologram very attractive for future applications. "Besides the evident application in replacing the typical 'rainbow holograms' of credit cards and other security items, this phenomenon can be used for image projection on spheres, which so far has not been achieved with conventional optics," said coauthor Calum Williams of the University of Cambridge. "Furthermore, this concept can be applied as the basis to produce dynamic three-dimensional color displays. In the area of informatics, these holographic configurations could store information in subwavelength areas. This means that optical data storage devices such as CDs, DVDs or Blu-ray could potentially expand their storage limits."

The researchers plan to further investigate these applications and others in the future. "Future research is focused on the study of mechanisms for tuning the plasmonic effect for display applications," Montelongo said. "The main goal is the integration of new modulation schemes to produce ultra-thin displays and dynamic holograms."

More information: Yunuen Montelongo, et al. "Plasmonic nanoparticle scattering for color holograms." PNAS Early Edition. DOI: 10.1073/pnas.1405262111

Journal information: Proceedings of the National Academy of Sciences

© 2014

Citation: Color hologram uses plasmonic nanoparticles to store large amounts of information (2014, August 21), retrieved 18 August 2019.
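Why fringe size matters can be seen from the textbook grating equation, d·sin(θ) = m·λ. This is a standard optics illustration, not the paper's model: at normal incidence, a periodic structure with pitch d smaller than the wavelength λ produces no propagating first diffraction order at all, which is why redirecting light from subwavelength features requires a different mechanism, here the resonant scattering of the plasmonic nanoparticles.

```python
import math

def first_order_angle_deg(pitch_nm, wavelength_nm):
    """First-order diffraction angle from d*sin(theta) = m*lambda (m = 1,
    normal incidence). Returns None when sin(theta) would exceed 1, i.e.
    no propagating first order exists."""
    s = wavelength_nm / pitch_nm
    if s > 1:
        return None
    return math.degrees(math.asin(s))

green = 532  # nm, a representative visible wavelength

# Conventional hologram fringe spacing: a propagating order at a modest angle.
print(first_order_angle_deg(1000, green))

# Pitch equal to the wavelength: the diffracted order grazes the surface (90 deg).
print(first_order_angle_deg(532, green))

# Subwavelength pitch, as in the plasmonic hologram: ordinary diffraction gives
# no propagating order at all.
print(first_order_angle_deg(250, green))  # None
```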

A scanning electron microscope micrograph depicting a mass of Yersinia pestis bacteria in the foregut of an infected flea. Credit: Wikipedia

A pair of researchers with NIH has discovered the evolutionary path that the bacterium that causes plague took to allow for transmission via fleas. In their paper published in Proceedings of the National Academy of Sciences, Iman Chouikha and Joseph Hinnebusch describe how they studied the bacterium and its genes to learn how it adapted to become less lethal to fleas, and thus better able to infect more hosts.

The plague, as most are aware, has a deadly history: it has killed millions of people over time and still evokes fear when mentioned today. Scientists have known for some time that the reason it was so deadly was its easy transmission route, from fleas to rodents and humans. But fleas, it turns out, weren't always such great carriers, the researchers behind this new effort learned. In fact, the bacterium had to evolve to be less harmful to fleas so that they could be better carriers.

The research pair studied two types of bacteria: Yersinia pseudotuberculosis and Yersinia pestis. The former is relatively harmless in that it is not a good carrier of plague; the latter is the real culprit. Prior research has shown that Y. pseudotuberculosis appears to be a representation of what Y. pestis used to be; thus, to learn how the bacterium evolved to take better advantage of fleas, the team needed to look at how the two types differed.

They found that while Y. pseudotuberculosis colonizes just the end of the flea digestive tract, Y. pestis forms a film from one end to the other. The former makes it more difficult to infect a host, and is also more toxic to the flea, causing death in almost half of those infected. Further research revealed that the bacterium needed to develop just a single gene to allow it to grow deeper in the GI tract, and had to lose three that hindered the spread of the film. They also discovered the gene change that made the bacterium less toxic to the flea: ureD.

Taken together, these simple genetic changes allowed the bacterium to harness the carrier strength of fleas, which ultimately led to the deaths of millions of people over many years from the dreaded plague.

More information: Silencing urease: A key evolutionary step that facilitated the adaptation of Yersinia pestis to the flea-borne transmission route, PNAS, Iman Chouikha, DOI: 10.1073/pnas.1413209111

Abstract: The arthropod-borne transmission route of Yersinia pestis, the bacterial agent of plague, is a recent evolutionary adaptation. Yersinia pseudotuberculosis, the closely related food- and water-borne enteric species from which Y. pestis diverged less than 6,400 y ago, exhibits significant oral toxicity to the flea vectors of plague, whereas Y. pestis does not. In this study, we identify the Yersinia urease enzyme as the responsible oral toxin. All Y. pestis strains, including those phylogenetically closest to the Y. pseudotuberculosis progenitor, contain a mutated ureD allele that eliminated urease activity. Restoration of a functional ureD was sufficient to make Y. pestis orally toxic to fleas. Conversely, deletion of the urease operon in Y. pseudotuberculosis rendered it nontoxic. Enzymatic activity was required for toxicity. Because urease-related mortality eliminates 30-40% of infective flea vectors, ureD mutation early in the evolution of Y. pestis was likely subject to strong positive selection because it significantly increased transmission potential.

Journal information: Proceedings of the National Academy of Sciences

© 2014 Phys.org

Citation: Research pair learn how plague bacterium adapted to help fleas pass on disease (2014, December 2), retrieved 18 August 2019.

A small team of researchers from the U.S., Australia and Germany has found evidence suggesting that cloud formation may have a much bigger impact on weather patterns associated with El Niño events than has been thought. In their paper published in the journal Nature Geoscience, the team describes the differences they found when they added cloud data to computer models simulating weather patterns associated with El Niño events, and why they now believe that all such models should include such data going forward.

Scientists predicted that this winter's El Niño weather events would be more severe than recent ones, and thus far their predictions have proved true: temperatures have fluctuated wildly in parts of Europe and the U.S., along with associated rain events that have led to serious flooding. In this new effort, the researchers found that cloud formation may have more influence on such weather events than has been thought.

The El Niño/Southern Oscillation (ENSO), as it is formally known, causes the most weather variability on short time scales and attracts an enormous amount of attention due to the changes in rain patterns associated with it: the western parts of the U.S. and the northern parts of South America, for example, typically see more than normal amounts of rainfall, while parts of Africa experience droughts. As scientists work to fully understand the global weather patterns associated with ENSO, they debate the relative impact of oceanic processes versus those that occur in the atmosphere. The researchers suggest that the atmosphere may exert much more influence on ENSO events than has been thought, due in large part to cloud formations, which can serve as a blanket of sorts, preventing warm air from escaping from lower elevations and leading to more rainfall.

To better understand the impact of cloud formations on ENSO weather events, the researchers input cloud data into standard climate models and compared the results with the same models run under identical conditions without cloud data. They report that they were surprised to find that the cloud data caused large changes to atmospheric circulation patterns and accounted for more than half of the strength of ENSO events. They suggest their findings indicate that all future climate models should include cloud data so that they offer a better representation of real events and thus give better predictions.

The 1997 El Niño seen by TOPEX/Poseidon. Credit: NASA

More information: Gaby Rädel et al. Amplification of El Niño by cloud longwave coupling to atmospheric circulation, Nature Geoscience (2016). DOI: 10.1038/ngeo2630

Abstract: The El Niño/Southern Oscillation (ENSO) is the dominant mode of inter-annual variability, with major impacts on social and ecological systems through its influence on extreme weather, droughts and floods. The ability to forecast El Niño, as well as anticipate how it may change with warming, requires an understanding of the underlying physical mechanisms that drive it. Among these, the role of atmospheric processes remains poorly understood. Here we present numerical experiments with an Earth system model, with and without coupling of cloud radiative effects to the circulation, suggesting that clouds enhance ENSO variability by a factor of two or more. Clouds induce heating in the mid and upper troposphere associated with enhanced high-level cloudiness over the El Niño region, and low-level clouds cool the lower troposphere in the surrounding regions. Together, these effects enhance the coupling of the atmospheric circulation to El Niño surface temperature anomalies, and thus strengthen the positive Bjerknes feedback mechanism between west Pacific zonal wind stress and sea surface temperature gradients. Behaviour consistent with the proposed mechanism is robustly represented in other global climate models and in satellite observations. The mechanism suggests that the response of ENSO amplitude to climate change will in part be determined by a balance between increasing cloud longwave feedback and a possible reduction in the area covered by upper-level clouds.

Journal information: Nature Geoscience

Citation: Clouds may have more of an impact on El Nino than thought (2016, January 5) retrieved 18 August 2019 from

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
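The experiment the researchers describe, identical model runs with and without cloud radiative coupling, can be illustrated with a toy stochastic model. This is only a sketch, not the Earth system model used in the study: the damping and feedback values are invented for illustration, and the idea is simply that a positive cloud feedback weakens the net damping of a noise-driven temperature anomaly, roughly doubling its variability.

```python
import random
import statistics

def simulate_sst_anomaly(cloud_feedback, n_steps=20000, seed=42):
    """Toy damped, noise-driven SST anomaly (illustrative only).

    cloud_feedback is a hypothetical positive feedback strength; a
    larger value weakens the net damping and amplifies variability.
    """
    rng = random.Random(seed)
    damping = 0.10  # baseline damping per step (assumed value)
    t, series = 0.0, []
    for _ in range(n_steps):
        # net damping is reduced by the cloud longwave feedback
        t += -(damping - cloud_feedback) * t + rng.gauss(0, 0.1)
        series.append(t)
    return series

no_clouds = simulate_sst_anomaly(cloud_feedback=0.0)
with_clouds = simulate_sst_anomaly(cloud_feedback=0.075)

ratio = statistics.stdev(with_clouds) / statistics.stdev(no_clouds)
print(f"amplification of variability: {ratio:.2f}x")
```

With these assumed parameters the standard deviation of the anomaly roughly doubles when the feedback term is switched on, mirroring the "factor of two or more" amplification reported in the abstract.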

Two teams working independently of one another have identified several CRISPR-Cas12a inhibitors. The first team was made up of members from the University of California, Berkeley; the other had members from Massachusetts General Hospital and the University of California. Both used bioinformatics tools to scan bacterial genomes for possible inhibitors, and both have published their results in the journal Science.

CRISPR is a gene-editing technique that can identify DNA segments and snip them out of a genome. It can also be used to replace segments that have been cut out. To perform its cuts, CRISPR pairs an RNA guide, which specifies the genes to be cut and/or replaced, with a nuclease protein that does the cutting. Cas9, the nuclease used in many early studies, came to be associated with the technique. But for Cas9 to work properly, researchers also had to add a Cas9 inhibitor, whose job was to prevent extraneous cutting. More recently, researchers have used Cas12a because it offers desirable characteristics not found in Cas9. Until now, however, there were no known inhibitors for Cas12a, which stymied its use in research efforts. In these two new efforts, both teams have found several inhibitors that they claim are suitable for use with CRISPR.

Both teams used a bioinformatics pipeline approach in their search for Cas12a inhibitors: a system for searching through bacterial genomes for genetic fragments that would normally be deadly to a bacterium but are tolerated by bacteria that carry inhibitors. Using this approach, the first team found three inhibitors, including one that stood out, called AcrVA1. Testing with human cells showed that all three could be used with CRISPR. Using the same basic approach, the second team found several candidates that also worked as hoped when tested with human cells. The second team also found AcrVA1 to be particularly effective.

Taken together, the work by the two teams has yielded several possible Cas12a inhibitors, one of which appears to be particularly promising. That could lead to more fruitful research using CRISPR-Cas12a as a gene-editing tool.

CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) with a DNA fragment, E. coli. Credit: Mulepati, S., Bailey, S.; Astrojan/Wikipedia, CC BY 3.0

More information:
1. Kyle E. Watters et al. Systematic discovery of natural CRISPR-Cas12a inhibitors, Science (2018). DOI: 10.1126/science.aau5138
Abstract: Cas12a (Cpf1) is a CRISPR-associated nuclease with broad utility for synthetic genome engineering, agricultural genomics, and biomedical applications. While bacteria harboring CRISPR-Cas9 or CRISPR-Cas3 adaptive immune systems sometimes acquire mobile genetic elements encoding anti-CRISPR proteins that inhibit Cas9, Cas3, or the DNA-binding Cascade complex, no such inhibitors have been found for CRISPR-Cas12a. Here we employ a comprehensive bioinformatic and experimental screening approach to identify three different inhibitors that block or diminish CRISPR-Cas12a-mediated genome editing in human cells. We also find a widespread connection between CRISPR self-targeting and inhibitor prevalence in prokaryotic genomes, suggesting a straightforward path to the discovery of many more anti-CRISPRs from the microbial world.

2. Nicole D. Marino et al. Discovery of widespread Type I and Type V CRISPR-Cas inhibitors, Science (2018). DOI: 10.1126/science.aau5174
Abstract: Bacterial CRISPR-Cas systems protect their host from bacteriophages and other mobile genetic elements. Mobile elements, in turn, encode various anti-CRISPR (Acr) proteins to inhibit the immune function of CRISPR-Cas. To date, Acr proteins have been discovered for type I (subtypes I-D, I-E, and I-F) and type II (II-A and II-C) but not other CRISPR systems. Here we report the discovery of 12 acr genes, including inhibitors of type V-A and I-C CRISPR systems. Notably, AcrVA1 inhibits a broad spectrum of Cas12a (Cpf1) orthologs, including MbCas12a, Mb3Cas12a, AsCas12a, and LbCas12a, when assayed in human cells. The acr genes reported here provide useful biotechnological tools and mark the discovery of acr loci in many bacteria and phages.

Journal information: Science

Citation: Two unrelated studies result in discovery of CRISPR-Cas12a inhibitors (2018, September 7) retrieved 18 August 2019 from
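The self-targeting screen described above can be sketched in a few lines of Python. This is an illustrative toy, with hypothetical mini-genomes and exact substring matching standing in for the real alignment-based pipeline: a bacterium whose own CRISPR spacer matches its chromosome should not survive unless an anti-CRISPR (Acr) protein disables the system, so viable self-targeting genomes are flagged as likely Acr carriers.

```python
def find_acr_candidates(genomes):
    """Flag genomes that carry a self-targeting CRISPR spacer.

    genomes: list of dicts with 'name', 'sequence', and 'spacers'.
    A genome in this list is viable by definition, so self-targeting
    implies an anti-CRISPR inhibitor is probably present.
    """
    candidates = []
    for g in genomes:
        self_targeting = any(spacer in g["sequence"] for spacer in g["spacers"])
        if self_targeting:
            candidates.append(g["name"])
    return candidates

# Hypothetical mini-genomes for illustration only
genomes = [
    {"name": "strain_A", "sequence": "ATGCCGTTAGGCTAACGT", "spacers": ["GTTAGGC"]},
    {"name": "strain_B", "sequence": "ATGAAACCCGGGTTTACG", "spacers": ["TTTTTTT"]},
]
print(find_acr_candidates(genomes))  # strain_A self-targets, so it is flagged
```

Real screens work on whole prokaryotic genome assemblies with alignment tools rather than exact string matching, but the selection logic, "viable despite self-targeting implies an inhibitor," is the same.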