Singularity Hub Daily

A constant stream of SingularityHub's high-quality articles, read to you via an AI system.

Singularity Hub


    • Latest episode: Nov 17, 2021
    • New episodes: infrequent
    • Avg duration: 6m
    • Episodes: 133


    Latest episodes from Singularity Hub Daily

    Nvidia's New Supercomputer Will Create a 'Digital Twin' of Earth to Fight Climate Change

    Nov 17, 2021 · 4:43


    It's crunch time on climate change, and companies, governments, philanthropists, and NGOs around the world are starting to take action, be it through donating huge sums of money to the cause, building a database for precise tracking of carbon emissions, creating a plan for a clean hydrogen economy, or advocating for solar geoengineering—among many other initiatives. But according to Nvidia, to really know where and how to take action on climate change, we need more data, better modeling, and faster computers. That's why the company is building what it calls “the world's most powerful AI supercomputer dedicated to predicting climate change.” The system will be called Earth-2 and will be built using Nvidia's Omniverse, a multi-GPU development platform for 3D simulation based on Pixar's Universal Scene Description. In a blog post announcing Earth-2 late last week, Nvidia's founder and CEO Jensen Huang described his vision for the system as a “digital twin” of Earth. Digital twins aren't a new concept; they've become popular in manufacturing as a way to simulate a product's performance and tweak the product based on feedback from the simulation. But advances in computing power and AI mean these simulations have become much more granular and powerful, with the ability to drive meaningful change—and that's just what Huang is hoping for with Earth-2. “We need to confront climate change now. Yet, we won't feel the impact of our efforts for decades,” he wrote. “It's hard to mobilize action for something so far in the future. But we must know our future today—see it and feel it—so we can act with urgency.” Plenty of climate models already exist. They quantify factors like air pressure, wind magnitude, and temperature and plug them into equations to get a view of climate patterns in a given region, representing those regions as 3D grids. The smaller the region, the more accurate a model can be before becoming unwieldy (in other words, models must solve more equations to achieve higher resolution, but trying to take on too many equations will make a model so slow that it stops being useful). This means most existing climate models lack both granularity and accuracy. The solution? A bigger, better, faster computer. “Greater resolution is needed to model changes in the global water cycle,” Huang wrote. “Meter-scale resolution is needed to simulate clouds that reflect sunlight back to space. Scientists estimate that these resolutions will demand millions to billions of times more computing power than what's currently available.” Earth-2 will employ three technologies to achieve ultra-high-resolution climate modeling: GPU-accelerated computing; deep learning and breakthroughs in physics-informed neural networks; and AI supercomputers—and a ton of data. The ultimate aim of this digital twin of our planet is to spur action that will drive meaningful change, both in terms of mitigating the negative impacts of climate change on populations and mitigating climate change itself. Extreme weather events like hurricanes, wildfires, heat waves, and flash floods are increasingly taking lives, damaging property, and forcing people to flee from their homes; you've doubtless seen the dire headlines and heartbreaking images on the news. If we could accurately predict these events much further in advance, those headlines would change. Huang hopes Nvidia's model will be able to predict extreme weather changes in designated regions decades ahead of time. 
People would then know either not to move to certain areas at all, or to build the infrastructure in those areas in a way that's compatible with the impending climate events. The model will also aim to help find solutions, running simulations of various courses of action to figure out which would have the greatest impact at the lowest cost. Nvidia has not shared a timeline for Earth-2's development nor when the supercomputer will be ready to launch. But if its Cambridge-1 supercomputer for healthcare resea...
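
Huang's "millions to billions of times more computing power" figure follows from how grid models scale: refining a 3D grid by a factor r multiplies the number of cells, and thus the equations to solve, by roughly r cubed. The sketch below is only a back-of-the-envelope illustration of that scaling under an assumed cubic cost model and illustrative starting resolutions, not Nvidia's actual methodology (real models also shrink the time step, which makes true costs grow even faster).

```python
# Back-of-the-envelope scaling for 3D climate-model resolution (illustrative only).
# Assumption: compute cost grows roughly with the cube of the grid-refinement factor,
# one factor per spatial dimension.

def relative_cost(current_resolution_m: float, target_resolution_m: float) -> float:
    """Return the approximate compute multiplier for refining the grid."""
    refinement = current_resolution_m / target_resolution_m
    return refinement ** 3

# Hypothetical starting points: refining from ~100 m or ~1 km cells down to meter scale.
for current in (100.0, 1_000.0):
    print(f"{current:>7.0f} m -> 1 m grid: ~{relative_cost(current, 1.0):,.0f}x more compute")
# ~1e6 (millions) and ~1e9 (billions), the range quoted in the article.
```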

    AI Can Now Model the Molecular Machines That Govern All Life

    Nov 16, 2021 · 10:04


    Thanks to deep learning, the central mysteries of structural biology are falling like dominos. Just last year, DeepMind shocked the biomedical field with AlphaFold, an algorithm that predicts protein structures with jaw-dropping accuracy. The University of Washington (UW) soon unveiled RoseTTAFold, an AI that rivaled AlphaFold in predictive ability. A few weeks later, DeepMind released a near complete catalog of all protein structures in the human body. Together, the teams essentially solved a 50-year-old grand challenge in biology, and because proteins are at the heart of most medications, they may also have seeded a new era of drug development. For the first time, we have unprecedented insight into the protein engines of our cells, many of which had remained impervious to traditional lab techniques. Yet one glaring detail was missing. Proteins don't operate alone. They often associate into complexes—small groups that interact to carry out critical tasks in our cells and bodies. This month, the UW team upped their game. Tapping into both AlphaFold and RoseTTAFold, they tweaked the programs to predict which proteins are likely to tag-team and sketched up the resulting complexes into 3D models. Using AI, the team predicted hundreds of complexes—many of which are entirely new—that regulate DNA repair, govern the cell's digestive system, and perform other critical biological functions. These under-the-hood insights could impact the next generation of DNA editors and spur new treatments for neurodegenerative disorders or anti-aging therapies. “It's a really cool result,” said Dr. Michael Snyder at Stanford University, who was not involved in the study, to Science. Like a compass, the results can guide experimental scientists as they test the predictions and search for new insights into how our cells grow, age, die, malfunction, and reproduce. Several predictions further highlighted how our cells absorb external molecules—a powerful piece of information that could help us coerce normally reluctant cells to gulp up medications. “It gives you a lot of potential new drug targets,” said study author Dr. Qian Cong at the University of Texas Southwestern Medical Center. The Cell's Lego Blocks Our bodies are governed by proteins, each of which intricately folds into 3D shapes. Like unique Lego bricks, these shapes allow the proteins to combine into larger structures, which in turn conduct the biological processes that propel life. Too abstract? An example: when cells live out their usual lifespan, they go through a process called apoptosis—in Greek, the falling of the leaves—in which the cell gently falls apart without disturbing its neighbors by leaking toxic chemicals. The entire process is a cascade of protein-protein interactions. One protein grabs onto another protein to activate it. The now-activated protein is subsequently released to stir up the next protein in the chain, and so on, eventually causing the aging or diseased cell to sacrifice itself. Another example: in neurons during learning, synapses (the hubs that connect brain cells) call upon a myriad of proteins that form a complex together. This complex, in turn, spurs the neuron's DNA to make proteins that etch the new memory into the brain. “Everything in biology works in complexes. So, knowing who works with who is critical,” said Snyder. For decades, scientists have relied on painfully slow processes to parse out those interactions. 
One approach is computational: map out a protein's structure down to the atomic level and predict “hot spots” that might interact with another protein. Another is experimental: using both biological lab prowess and physics ingenuity, scientists can isolate protein complexes from cells—like sugar precipitating from lemonade when there's too much of it—and use specialized equipment to analyze the proteins. It's tiresome, expensive, and often plagued with errors. Here Comes the Sun Deep learning is now shining light on the whole enterprise....
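
At its core, the UW team's computational approach amounts to screening candidate protein pairs with a structure predictor and keeping only the pairings the model is confident about. The sketch below shows that screening loop in schematic form; the scoring function is a hypothetical stand-in for a paired run of a predictor such as RoseTTAFold or AlphaFold, and the cutoff value is an arbitrary illustrative threshold, not the study's actual criterion.

```python
import random
from itertools import combinations
from typing import Callable

def screen_for_complexes(
    proteome: dict[str, str],
    score_fn: Callable[[str, str], float],
    cutoff: float = 0.9,
) -> list[tuple[str, str, float]]:
    """Score every protein pair and keep those above the confidence cutoff."""
    hits = []
    for (name_a, seq_a), (name_b, seq_b) in combinations(proteome.items(), 2):
        score = score_fn(seq_a, seq_b)   # in practice: interface confidence from a predictor
        if score >= cutoff:
            hits.append((name_a, name_b, score))
    return sorted(hits, key=lambda h: h[2], reverse=True)

# Toy demo: a random scorer stands in for a real paired-structure predictor,
# and the tiny "proteome" uses made-up placeholder sequences.
toy_proteome = {"PROT_A": "MAGT", "PROT_B": "MPLS", "PROT_C": "MTKV"}
random.seed(0)
print(screen_for_complexes(toy_proteome, lambda a, b: random.random(), cutoff=0.5))
```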

    Scientists Say We Need to Look Into Hacking the Sun Now—Before It's Too Late

    Nov 15, 2021 · 5:51


    With the pace of emissions reductions looking unlikely to prevent damaging climate change, controversial geoengineering approaches are gaining traction. But aversion to even studying such a drastic option makes it hard to have a sensible conversation, say researchers. Geoengineering refers to large-scale interventions designed to alter the Earth's climate system in response to global warming. Some have suggested it may end up being a crucial part of the toolbox for tackling global warming, given that efforts to head off warming by reducing emissions seem well behind schedule. One major plank of geoengineering is the idea of removing excess CO2 from the atmosphere, either through reforestation or carbon capture technology that will scrub emissions from industrial exhausts or directly from the air. There are limits to nature-based CO2 removal, though, and so-called “negative emissions technology” is a long way from maturity. The other option is solar geoengineering, which involves deflecting sunlight away from the Earth by boosting the reflectivity of the atmosphere or the planet's surface. Leading proposals involve injecting tiny particles into the stratosphere, making clouds whiter by spraying sea water into the atmosphere, or thinning out high cirrus clouds that trap heat. In theory, this could reduce global warming fairly cheaply and quickly, but interfering with the Earth's climate system carries unpredictable and potentially enormous risks. This has led to widespread opposition to even basic research into the idea. Earlier this year, a test of the approach by Sweden's space agency was cancelled following concerted opposition. But this lack of research means policymakers are flying blind when weighing the pros and cons of the approach, researchers write in a series of articles in the latest issue of Science. They outline why research into the approach is necessary and how social science in particular can help us better understand the potential trade-offs. In an editorial, Edward A. Parson from the University of California, Los Angeles, notes that critics often point to the fact that solar geoengineering is a short-term solution to a long-term problem that is likely to be imperfect and whose effects could be uneven and unjust. More importantly, if solar geoengineering becomes acceptable to use, we may end up over-relying on it and putting less effort into emissions reductions or carbon removal. This point is often used to argue that solar geoengineering can never be acceptable, and therefore research into it isn't warranted. But Parson argues that both the potential harms and benefits of solar geoengineering are currently hypothetical due to a lack of research. Rejecting an activity due to unknown harms might be justified in extreme circumstances and when the alternative is acceptable, he writes. But the alternative to solar geoengineering is potentially catastrophic climate change—unless we drastically ramp up emissions reductions and removals, which is far from a sure thing. Part of the rationale for preventing solar geoengineering research is that it will drive socio-political lock-in that makes its deployment more likely. But Parson points out that rather than preventing its deployment, blocking research into solar geoengineering may actually lead to less-informed, more dangerous deployments by desperate policymakers further down the line. 
One way to overcome some of the resistance to research in this area might be to make the debate around it more constructive, writes David W. Keith from Harvard University in a policy paper. And the best way to do that is to disentangle the technical, political, and ethical aspects of the debate. Appraising the pros and cons of solar geoengineering involves many different fields, from engineering to climate science to economics. But often, experts in one of these areas will give an overall judgment on the technology despite not being in a position to assess critical aspects of it. The...

    Beeple's New NFT Just Sold for $29 Million, and He'll Update It for the Rest of His Life

    Nov 12, 2021 · 4:37


    Just a few months ago, most of us had never heard of an NFT. Even once we figured out what they were, it seemed like maybe they'd be a short-lived fad, a distraction for the tech-savvy to funnel money into as the pandemic dragged on. But it seems NFTs are here to stay. The internet has exploded with digital artwork that's being bought and sold like crazy; in the third quarter of this year, trading volume of NFTs hit $10.67 billion, a more than 700-percent increase from the second quarter. Last month, both Coinbase and Sotheby's announced plans to launch NFT marketplaces. As a quick refresher in case you, like me, still don't totally get it: NFT stands for non-fungible token, and it's a digital certificate that represents ownership of a digital asset. The certificates are one-of-a-kind (that's the non-fungible part), are verified by and stored on a blockchain, and allow digital assets to be transferred or sold. Depending on who you ask, NFTs were a thing as early as 2013 or 2014—but they didn't really hit headlines until earlier this year, when artists like Grimes and Beeple sold their digital creations for millions of dollars. Soon everyone from Jack Dorsey to George Church to the NBA started jumping on the NFT bandwagon. And you've probably heard about the bizarre phenomenon that is the Bored Ape Yacht Club. This is just the beginning of an ever-growing list of artists, celebrities, crypto-enthusiasts, and others who are betting NFTs are the future of collectible art. Re-enter Beeple, the American artist (whose given name is Mike Winkelmann) whose collage of 5,000 pieces of digital art, titled Everydays: The First 5000 Days, sold for $69 million in a Christie's auction in March. Another piece of his sold this week, and though it went for less than half what Everydays did, it's bringing a whole new twist to the NFT art world. The new work, titled Human One, is a life-sized 3D video sculpture, and Winkelmann called it “the first portrait of a human born in the metaverse.” It shows a person in silver clothing and boots, wearing a backpack and a helmet (which is something of a cross between that of an astronaut and a motorcyclist), trekking purposefully across a changing landscape. It was purchased for $29 million at Christie's 21st Century Evening Sale on Tuesday by Ryan Zurrer, a Swiss venture capitalist. The piece is a box whose four walls are video screens, with a computer at its base. It's over seven feet tall and can be viewed from any angle. But its key feature is the fact that it will be continuously updated, supposedly for the rest of Winkelmann's life. “I want to make something that people can continue to come back to and find new meaning in. And the meaning will continue to evolve,” he said. “That to me is super-exciting. It feels like I now have this whole other canvas.” The artist plans to change the imagery that appears in the box regularly. It will be sort of like having one of those digital photo frames, except instead of family and friends on a small, flat screen, who-knows-what will appear in 3D and larger than life. If some of the images in Everydays are any indication, Zurrer may end up seeing some pretty striking political commentary in his living room, or office, or wherever he chooses to keep Human One. “You could come downstairs in the morning and the piece looks one way,” Winkelmann said. 
“Then you come home from work, and it looks another way.” However, he won't be changing the piece according to any sort of schedule, but rather as the fancy strikes him—and, he noted, in response to current events. If Zurrer chooses to keep the piece in his home or another private location, that would establish a sort of artistic intimacy between him and Winkelmann, with Zurrer being privy to the artist's ideas and creativity in real time. Though Human One was doubtless an expensive and highly complex project, it's likely just the beginning of a whole new type of “l...
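
The "non-fungible" part is easiest to see as bookkeeping: every token ID is unique, points at one asset, and has exactly one current owner who alone can transfer it. The toy sketch below illustrates only that bookkeeping; real NFTs are implemented as smart contracts on a blockchain, and the token ID, asset URI, and names here are made-up placeholders.

```python
# Conceptual sketch of non-fungible ownership (not a blockchain or ERC-721 implementation).
class TinyNftRegistry:
    def __init__(self) -> None:
        self._owners: dict[str, str] = {}   # token_id -> current owner
        self._assets: dict[str, str] = {}   # token_id -> asset URI

    def mint(self, token_id: str, asset_uri: str, owner: str) -> None:
        if token_id in self._owners:
            raise ValueError("token IDs are one-of-a-kind")   # the non-fungible part
        self._owners[token_id] = owner
        self._assets[token_id] = asset_uri

    def transfer(self, token_id: str, seller: str, buyer: str) -> None:
        if self._owners.get(token_id) != seller:
            raise PermissionError("only the current owner can transfer")
        self._owners[token_id] = buyer

# Hypothetical usage with placeholder values.
registry = TinyNftRegistry()
registry.mint("HUMAN-ONE", "ipfs://example-asset-hash", "artist")
registry.transfer("HUMAN-ONE", "artist", "collector")
```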

    These Modular Houses Are Affordable, Carbon Neutral, and Go Up in Just 2 Weeks

    Nov 11, 2021 · 4:41


    House prices have soared during the last year and a half, and the implications aren't great (unless you're a homeowner looking just to sell and not buy). Homelessness and housing insecurity have risen dramatically over the course of the pandemic, with millions of people unable to afford to live where they want to, and many unable to afford to live anywhere at all. A supply shortage is just one among many factors contributing to these problems, but it's a big one; houses take a long time to build, require all sorts of permissions and inspections and approvals, and are, of course, expensive. A Seattle-based company wants to be part of changing this, and they've just joined forces with a partner to make home building more sustainable and efficient while driving down its costs. Last week, construction tech company NODE, which got its start at Y Combinator, announced a merger with Green Canopy, a vertically-integrated developer, designer, general contractor, and fund manager. The new company's goal is to offer accessible, green housing options at scale. “The construction industry is ripe for disruption and evolution,” said NODE co-founder Bec Chapin. “It's a giant industry that has been losing productivity over decades and is not meeting our most crucial demands for housing.” NODE's approach is similar to that of Las Vegas-based Boxabl, which ships pre-fabricated “foldable” houses to its customers in a 20-foot-wide load that can be up and running in as little as a day. In a 2018 GeekWire interview, Chapin said the company was “developing a component technology in which the walls, floors and ceilings are in separate pieces, as well as all of the things needed to make it a complete house: kitchens, baths, heating systems, etc. These houses can be packed more efficiently, then easily assembled on site.” NODE homes come in flat-pack kits that fit in standard shipping containers, and they don't require specialists to assemble; they're essentially the IKEA furniture of houses (though IKEA furniture can, admittedly, be much harder to put together than the company would have you think, as you know if you've ever purchased one of their bed frames or shelving units). Their assembly is guided by software and can be done by generalist construction workers, or even by homeowners themselves. A 2019 McKinsey report noted that modular construction is seeing a comeback, largely thanks to the impact of new digital tools. Consumer perception of prefab housing is becoming more positive as the design of these homes gets more modern and visually appealing. Most importantly, modular construction assisted by digital technologies can make home building up to 50 percent faster, at a cost that's comparable to or lower than traditional building costs. Green Canopy NODE's homes are priced from $90,000 (for a 260-square-foot home that somehow fits in a kitchen and a bathroom) to $150,000 for a 500-square-foot model. These figures are well below the cost of using traditional building methods for homes of comparable size in the company's native Seattle area. So they seem to be on the right track. But where they're really looking to set themselves apart from competitors is in their focus on sustainability. “We started Node because buildings account for 47 percent of carbon emissions, yet all of the technology exists for buildings to be carbon negative,” Chapin said. The company's homes are designed to be carbon neutral or carbon negative; they're ultra energy-efficient and they use non-toxic materials. 
Their insulation, for example, is made of recycled denim, glass, and sand instead of fiberglass. The homes can also be outfitted with solar panels or mini wind turbines, and thus could end up generating more energy than they consume, enabling homeowners to sell power back to the grid. The newly-merged company recently raised $10 million in new funding, and expects to double in size over the next year (it currently has 31 employees). Initially focused on the “...
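
For a sense of what those price points mean per unit of floor area, the quick arithmetic below divides the quoted prices by the quoted floor areas; it uses only the two figures given in the article and makes no assumption about local construction costs.

```python
# Price per square foot implied by the quoted Green Canopy NODE models
# (figures taken from the article; no other costs assumed).
models = {
    "260 sq ft kit": (90_000, 260),
    "500 sq ft kit": (150_000, 500),
}
for name, (price_usd, area_sqft) in models.items():
    print(f"{name}: ${price_usd / area_sqft:,.0f} per sq ft")
# Roughly $346/sq ft and $300/sq ft respectively.
```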

    The First Continents Bobbed to the Surface More Than Three Billion Years Ago, Study Shows

    Nov 10, 2021 · 5:00


    Most people know that the land masses on which we all live represent just 30 percent of Earth's surface, and the rest is covered by oceans. The emergence of the continents was a pivotal moment in the history of life on Earth, not least because they are the humble abode of most humans. But it's still not clear exactly when these continental landmasses first appeared on Earth, and what tectonic processes built them. Our research, published in Proceedings of the National Academy of Sciences, estimates the age of rocks from the most ancient continental fragments (called cratons) in India, Australia, and South Africa. The sand that created these rocks would once have formed some of the world's first beaches. We conclude that the first large continents were making their way above sea level around three billion years ago, much earlier than the 2.5 billion years estimated by previous research. A Three-Billion-Year-Old Beach When continents rise above the oceans, they start to erode. Wind and rain break rocks down into grains of sand, which are transported downstream by rivers and accumulate along coastlines to form beaches. These processes, which we can observe in action during a trip to the beach today, have been operating for billions of years. By scouring the rock record for signs of ancient beach deposits, geologists can study episodes of continent formation that happened in the distant past. The Singhbhum craton, an ancient piece of continental crust that makes up the eastern parts of the Indian subcontinent, contains several formations of ancient sandstone. These layers were originally formed from sand deposited in beaches, estuaries and rivers, which was then buried and compressed into rock. We determined the age of these deposits by studying microscopic grains of a mineral called zircon, which is preserved within these sandstones. This mineral contains tiny amounts of uranium, which very slowly turns into lead via radioactive decay. This allows us to estimate the age of these zircon grains, using a technique called uranium-lead dating, which is well suited to dating very old rocks. The zircon grains reveal that the Singhbhum sandstones were deposited around three billion years ago, making them some of the oldest beach deposits in the world. This also suggests a continental landmass had emerged in what is now India by at least three billion years ago. Interestingly, sedimentary rocks of roughly this age are also present in the oldest cratons of Australia (the Pilbara and Yilgarn cratons) and South Africa (the Kaapvaal Craton), suggesting multiple continental landmasses may have emerged around the globe at this time. Rise Above It How did rocky continents manage to rise above the oceans? A unique feature of continents is their thick, buoyant crust, which allows them to float on top of Earth's mantle, just like a cork in water. Like icebergs, the top of continents with thick crust (typically more than 45km thick) sticks out above the water, whereas continental blocks with crusts thinner than about 40km remain submerged. So if the secret of the continents' rise is due to their thickness, we need to understand how and why they began to grow thicker in the first place. Most ancient continents, including the Singhbhum Craton, are made of granites, which formed through the melting of pre-existing rocks at the base of the crust. 
In our research, we found the granites in the Singhbhum Craton formed at increasingly greater depths between about 3.5 billion and 3 billion years ago, implying the crust was becoming thicker during this time window. Because granites are one of the least dense types of rock, the ancient crust of the Singhbhum Craton would have become progressively more buoyant as it grew thicker. We calculate that by around three billion years ago, the continental crust of the Singhbhum Craton had grown to be about 50km thick, making it buoyant enough to begin rising above sea level. The rise of continents had a profound inf...
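
Uranium-lead dating rests on the exponential decay law: if a zircon crystallized with no initial lead, its measured 206Pb/238U ratio fixes its age via t = ln(1 + Pb/U) / λ. The sketch below applies that single-decay-system formula with the standard 238U decay constant; real U-Pb geochronology also cross-checks the paired 235U/207Pb system and corrects for initial lead, which is omitted here, and the example ratio is hypothetical, chosen to land near the roughly three-billion-year ages reported for the Singhbhum sandstones.

```python
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of 238U per year (half-life ~4.47 billion years)

def u_pb_age_years(pb206_u238_ratio: float) -> float:
    """Age from a measured radiogenic 206Pb/238U ratio, assuming no initial Pb."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_U238

# Hypothetical measured ratio, roughly what a ~3-billion-year-old zircon would show.
ratio = 0.593
print(f"206Pb/238U = {ratio} -> age ≈ {u_pb_age_years(ratio) / 1e9:.2f} billion years")
```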

    New Spiking Neuromorphic Chip Could Usher in an Era of Highly Efficient AI

    Nov 9, 2021 · 8:52


    When it comes to brain computing, timing is everything. It's how neurons wire up into circuits. It's how these circuits process highly complex data, leading to actions that can mean life or death. It's how our brains can make split-second decisions, even when faced with entirely new circumstances. And we do so without frying the brain from extensive energy consumption. To rephrase, the brain makes an excellent example of an extremely powerful computer to mimic—and computer scientists and engineers have taken the first steps towards doing so. The field of neuromorphic computing looks to recreate the brain's architecture and data processing abilities with novel hardware chips and software algorithms. It may be a pathway towards true artificial intelligence. But one crucial element is lacking. Most algorithms that power neuromorphic chips only care about the contribution of each artificial neuron—that is, how strongly they connect to one another, dubbed “synaptic weight.” What's missing—yet central to our brain's inner workings—is timing. This month, a team affiliated with the Human Brain Project, the European Union's flagship big data neuroscience endeavor, added the element of time to a neuromorphic algorithm. The results were then implemented on physical hardware—the BrainScaleS-2 neuromorphic platform—and pitted against state-of-the-art GPUs and conventional neuromorphic solutions. “Compared to the abstract neural networks used in deep learning, the more biological archetypes … still lag behind in terms of performance and scalability” due to their inherent complexity, the authors said. In several tests, the algorithm compared “favorably, in terms of accuracy, latency, and energy efficiency” on a standard benchmark test, said Dr. Charlotte Frenkel at the University of Zurich and ETH Zurich in Switzerland, who was not involved in the study. By adding a temporal component into neuromorphic computing, we could usher in a new era of highly efficient AI that moves from static data tasks—say, image recognition—to one that better encapsulates time. Think videos, biosignals, or brain-to-computer speech. To lead author Dr. Mihai Petrovici, the potential goes both ways. “Our work is not only interesting for neuromorphic computing and biologically inspired hardware. It also acknowledges the demand … to transfer so-called deep learning approaches to neuroscience and thereby further unveil the secrets of the human brain,” he said. Let's Talk Spikes At the root of the new algorithm is a fundamental principle in brain computing: spikes. Let's take a look at a highly abstracted neuron. It's like a tootsie roll, with a bulbous middle section flanked by two outward-reaching wrappers. One side is the input—an intricate tree that receives signals from a previous neuron. The other is the output, blasting signals to other neurons using bubble-like ships filled with chemicals, which in turn triggers an electrical response on the receiving end. Here's the crux: for this entire sequence to occur, the neuron has to “spike.” If, and only if, the neuron receives a high enough level of input—a nicely built-in noise reduction mechanism—the bulbous part will generate a spike that travels down the output channels to alert the next neuron. But neurons don't just use one spike to convey information. Rather, they spike in a time sequence. Think of it like Morse Code: the timing of when an electrical burst occurs carries a wealth of data. 
It's the basis for neurons wiring up into circuits and hierarchies, allowing highly energy-efficient processing. So why not adopt the same strategy for neuromorphic computers? A Spartan Brain-Like Chip Instead of mapping out a single artificial neuron's spikes—a Herculean task—the team homed in on a single metric: how long it takes for a neuron to fire. The idea behind “time-to-first-spike” code is simple: the longer it takes a neuron to spike, the lower its activity levels. Compared to counting spikes, it's an extremely sp...
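
Time-to-first-spike coding maps each input value to a single latency: strong inputs fire early, weak inputs fire late or not at all. The sketch below is a minimal illustration of that encoding, assuming inputs normalized to [0, 1] and a simple linear latency rule; the BrainScaleS-2 work trains its spike times end to end rather than using a fixed formula like this.

```python
import math

def time_to_first_spike(intensity: float, t_max: float = 10.0) -> float:
    """Encode a normalized input in [0, 1] as a spike latency in milliseconds.
    Stronger input -> earlier spike; zero input never spikes."""
    if intensity <= 0.0:
        return math.inf          # no spike at all
    return t_max * (1.0 - intensity)

# A bright, a dim, and a silent "pixel":
for x in (1.0, 0.3, 0.0):
    print(f"input {x:.1f} -> first spike at {time_to_first_spike(x):.1f} ms")
```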

    How Astronauts Could Produce Biofuel on Mars to Power Their Trip Back to Earth

    Nov 8, 2021 · 4:30


    While getting humans to Mars is likely to be one of the grandest challenges humanity has ever undertaken, getting them back could be even tougher. Researchers think sending genetically engineered microbes to the Red Planet could be the solution. Both NASA and SpaceX are mulling human missions to Mars in the coming decades. But carrying enough fuel to make sure it's a round trip adds a lot of extra weight, which dramatically increases costs and also makes landing on the planet much riskier. As a result, NASA has been investigating a variety of strategies that would make it possible to produce some or all of the required fuel on Mars using locally-sourced ingredients. While the planet may be pretty barren, its atmosphere is 95 percent carbon dioxide and there is abundant water ice in certain areas. That could provide all the ingredients needed to create hydrocarbon rocket fuels and the liquid oxygen needed to support combustion. The most ambitious of NASA's plans would be to use electrolysis to generate hydrogen and oxygen from water and then use the Sabatier reaction to combine the hydrogen with Martian CO2 to create methane for use as a fuel. The technology to do that at scale is still immature, though, so the more likely option would see methane shipped from Earth and oxygen generated in place using solid oxide carbon dioxide electrolysis (SOCE). That would still require 7.5 tons of fuel and 1 ton of SOCE equipment to be transported to Mars, though. Researchers from the Georgia Institute of Technology have outlined a new strategy in a paper in Nature Communications, which would use genetically engineered microbes to produce all the fuel and oxygen required for a return trip on Mars. “Carbon dioxide is one of the only resources available on Mars,” first author Nick Kruyer said in a press release. “Knowing that biology is especially good at converting CO2 into useful products makes it a good fit for creating rocket fuel.” The researchers' proposal involves building four football fields' worth of photobioreactors—essentially liquid-filled transparent tubes—which will be used to grow photosynthetic cyanobacteria. While it is possible to get these microbes to produce fuels themselves, they are fairly inefficient at it. So instead, they will be fed into another reactor where enzymes will break them down into simple sugars, which are then fed to genetically modified E. coli bacteria that produce a chemical called 2,3-butanediol. On Earth this chemical is primarily used to make rubber, and burns too inefficiently to be used as a fuel. But thanks to Mars' low gravity, it is more than capable of powering a rocket engine there, and also uses less oxygen than methane. “You need a lot less energy for lift-off on Mars, which gave us the flexibility to consider different chemicals that aren't designed for rocket launch on Earth,” said Pamela Peralta-Yahya, who led the research. The process also generates 44 tons of excess oxygen that could be used for life support. The one catch is that if the system was built with today's state-of-the-art technology, it would require 2.8 times as much material to be delivered to Mars compared to the most likely NASA strategy. However, once there it would use 32 percent less power, and resupply missions would only need to carry 3.7 tons of nutrients and chemicals rather than 6.5 tons of methane every time. 
And modeling studies suggest that by optimizing the biological processes involved and designing lighter-weight materials, a future system could actually weigh 13 percent less than the NASA solution and use 59 percent less power. The biggest barrier at the minute might be the fact that current NASA regulations prohibit sending microbes to Mars due to fears of contaminating the pristine environment. The researchers acknowledge that they will have to develop foolproof biological containment strategies before the proposal could be seriously considered. But if we want to make round trips to Mars a regular f...
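
To keep the article's numbers straight, the short tabulation below restates the quoted comparison between the bio-based proposal and the methane-plus-SOCE baseline: 7.5 tons of methane plus 1 ton of electrolysis gear shipped up front, 6.5 versus 3.7 tons of resupply per mission, and 32 percent less power. The one derived value, the proposal's upfront mass, multiplies the stated 2.8x factor by that 8.5-ton baseline, which is an approximation of how the article's figures combine rather than a number stated outright.

```python
# Figures quoted in the article (tons shipped to Mars, relative power use).
baseline_upfront = 7.5 + 1.0        # methane fuel + SOCE electrolysis equipment
bio_upfront_factor = 2.8            # bio proposal needs ~2.8x the delivered material today
resupply_baseline, resupply_bio = 6.5, 3.7   # tons per subsequent mission
power_saving = 0.32                 # bio proposal uses 32% less power once running

print(f"Baseline upfront mass: {baseline_upfront:.1f} t")
print(f"Bio proposal upfront mass (approx.): {baseline_upfront * bio_upfront_factor:.1f} t")
print(f"Resupply saving per mission: {resupply_baseline - resupply_bio:.1f} t")
print(f"Power saving once operating: {power_saving:.0%}")
```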

    Alphabet Chases Wonder Drugs With DeepMind AI Spinoff Isomorphic Labs

    Nov 7, 2021 · 5:51


    AI research wunderkind, DeepMind, has long been all fun and games. The London-based organization, owned by Google parent company Alphabet, has used deep learning to train algorithms that can take down world champions at the ancient game of Go and top players of the popular strategy video game Starcraft. Then last year, things got serious when DeepMind trounced the competition at a protein folding contest. Predicting the structure of proteins, the complex molecules underpinning all biology, is notoriously difficult. But DeepMind's AlphaFold2 made a quantum leap in capability, producing results that matched experimental data down to a resolution of a few atoms. In July, the company published a paper describing AlphaFold2, open-sourced the code, and dropped a library of 350,000 protein structures with a promise to add 100 million more. This week, Alphabet announced it will build on DeepMind's AlphaFold2 breakthrough by creating a new company, Isomorphic Labs, in an effort to apply AI to drug discovery. “We are at an exciting moment in history now where these techniques and methods are becoming powerful and sophisticated enough to be applied to real-world problems including scientific discovery itself,” wrote Demis Hassabis, DeepMind founder and CEO, in a post announcing the company. “Now the time is right to push this forward at pace, and with the dedicated focus and resources that Isomorphic Labs will bring.” Hassabis is Isomorphic's founder and will serve as its CEO while the fledgling company gets its feet, setting the agenda and culture, building a team, and connecting the effort to DeepMind. The two companies will collaborate, but be largely independent. “You can think of [Isomorphic] as a sort of sister company to DeepMind,” Hassabis told Stat. “The idea is to really forge ahead with the potential for computational AI methods to reimagine the whole drug discovery process.” While AlphaFold2's success sparked the effort, protein folding is only one step—arguably simpler than others—in the arduous drug discovery process. Hassabis is thinking bigger. Though details are scarce, it appears the new company will build a line of AI models to ease key choke points in the process. Instead of identifying and developing drugs themselves, they'll sell a platform of models as a service to pharmaceutical companies. Hassabis told Stat these might tackle how proteins interact, the design of small molecules, how well molecules bind, and the prediction of toxicity. That the work will be separated from DeepMind itself is interesting. The company's not insignificant costs have largely been dedicated to pure research. DeepMind turned its first profit in 2020, but its customers are mostly Alphabet companies. Some have wondered if it'd face more pressure to focus on commercial products. The decision to create a separate enterprise based on DeepMind research seems to indicate that's not yet the case. If it can keep pushing the field ahead as a whole, perhaps it makes sense to fund a new organization—or organizations, seeded by future breakthroughs—as opposed to diverting resources from DeepMind's more foundational research. Isomorphic Labs has plenty of company in its drug discovery efforts. In 2020, AI in cancer, molecular, and drug discovery received the most private investment in the field, attracting over $13.8 billion, more than quadruple 2019's total. 
There have been three AI drug discovery IPOs in the last year, and mature startups—including Exscientia, Insilico Medicine, Insitro, Atomwise, and Valo Health—have earned hundreds of millions in funding. Companies like Genentech, Pfizer, and Merck are likewise working to embed AI in their processes. To a degree, Isomorphic will be building its business from the ground up. AlphaFold2 is without a doubt a big deal, but protein modeling is the tip of the drug discovery iceberg. Also, while AlphaFold2 had the benefit of access to hundreds of thousands of freely available, already modeled protein s...

    This Restaurant Robot Fries Your Food to Perfection With No Human Help

    Nov 5, 2021 · 4:28


    Four and a half years ago, a robot named Flippy made its burger-cooking debut at a fast food restaurant called CaliBurger. The bot consisted of a cart on wheels with an extending arm, complete with a pneumatic pump that let the machine swap between tools: tongs, scrapers, and spatulas. Flippy's main jobs were pulling raw patties from a stack and placing them on the grill, tracking each burger's cook time and temperature, and transferring cooked burgers to a plate. This initial iteration of the fast-food robot—or robotic kitchen assistant, as its creators called it—was so successful that a commercial version launched last year. Its maker Miso Robotics put Flippy on the market for $30,000, and the bot was no longer limited to just flipping burgers; the new and improved Flippy could cook 19 different foods, including chicken wings, onion rings, french fries, and the Impossible Burger. It got sleeker, too: rather than sitting on a wheeled cart, the new Flippy was a “robot on a rail,” with the rail located along the hood of restaurant stoves. This week, Miso Robotics announced an even newer, more improved Flippy robot called Flippy 2 (hey, they're consistent). Most of the updates and improvements on the new bot are based on feedback the company received from restaurant chain White Castle, the first big restaurant chain to go all-in on the original Flippy. So how is Flippy 2 different? The new robot can do the work of an entire fry station without any human assistance, and can do more than double the number of food preparation tasks its older sibling could do, including filling, emptying, and returning fry baskets. These capabilities have made the robot more independent, eliminating the need for a human employee to step in at the beginning or end of the cooking process. When foods are placed in fry bins, the robot's AI vision identifies the food, picks it up, and cooks it in a fry basket designated for that food specifically (i.e., onion rings won't be cooked in the same basket as fish sticks). When cooking is complete, Flippy 2 moves the ready-to-go items to a hot-holding area. Miso Robotics says the new robot's throughput is 30 percent higher than that of its predecessor, which adds up to around 60 baskets of fried food per hour. So much fried food. Luckily, Americans can't get enough fried food, in general and especially as the pandemic drags on. Even more importantly, the current labor shortages we're seeing mean restaurant chains can't hire enough people to cook fried food, making automated tools like Flippy not only helpful, but necessary. “Since Flippy's inception, our goal has always been to provide a customizable solution that can function harmoniously with any kitchen and without disruption,” said Mike Bell, CEO of Miso Robotics. “Flippy 2 has more than 120 configurations built into its technology and is the only robotic fry station currently being produced at scale.” At the beginning of the pandemic, many foresaw that Covid-19 would push us into quicker adoption of many technologies that were already on the horizon, with automation of repetitive tasks being high on the list. They were right, and we've been lucky to have tools like Zoom to keep us collaborating and Flippy to keep us eating fast food (to whatever extent you consider eating fast food an essential activity; I mean, you can't cook every day). Now if only there was a tech fix for inflation and housing shortages. 
Seeing as how there've been three different versions of Flippy rolled out in the last four and a half years, there are doubtless more iterations coming, each with new skills and improved technology. But the burger robot is just one of many new developments in automation of food preparation and delivery. Take this pizzeria in Paris: there are no humans involved in the cooking, ordering, or pick-up process at all. And just this week, IBM and McDonald's announced a collaboration to create drive-through lanes run by AI. So it may not be long before you c...

    GE's New Autonomous Electric Pods Have No Steering Wheel, Pedals, or Cab

    Nov 4, 2021 · 3:40


    Self-driving cars are taking longer to become a reality than many experts predicted. But that doesn't mean there isn't steady progress being made; on the contrary, autonomous driving technology is consistently making incremental advancements in all sorts of vehicles, from cars to trucks to buses. Last week, General Electric (GE) and Swedish freight tech company Einride announced a partnership to launch a fleet of autonomous electric trucks. According to the press release, the fleet will be the first of its kind to operate in the US, with trucks running on GE's 750-acre Appliance Park campus in Louisville, Kentucky, as well as at GE facilities in Tennessee and Georgia. Einride was founded in 2016, raised $25 million in Series A funding in 2019, and most recently raised $110 million this past May. Though the company makes electric trucks that are driven by humans, it's primarily known for another vehicle that can't quite be called a truck; the company's Pods not only lack drivers, they lack cabs for drivers to sit in. The vehicles are similar to Volvo's Vera trucks, which also lack cabs, though the Pods are meant for medium distance and capacity, like moving goods from a distribution center to a series of local stores. The Pods will need regulatory approval to operate on public roads, so for now they'll be limited to driving around within customers' campuses (a safety driver isn't required if the vehicles are on private property). According to Reuters, Einride just hired its first remote driver (based in the US) who will be able to take over control of the Pods if they get into a situation where human decision-making is required. Despite being confined to corporate campuses for now, Einride envisions its technology making a measurable difference in terms of emissions, estimating that the GE partnership will save the company 970 tons of carbon dioxide emissions in the first year, with that quantity increasing as more trucks are added. “Between seven and eight percent of global CO2 emissions come from heavy road freight transport,” Robert Falck, CEO and founder of Einride, told TechCrunch. “One of the drivers for starting Einride is that I'm very worried that by optimizing and making road freight transport autonomous, but based on diesel, it's likely that we will actually increase emissions because it would become that much cheaper to operate.” This would be a textbook example of the Jevons Paradox: when technological progress increases the efficiency with which a resource is used, pushing its cost down—but causing consumption to then rise due to increasing demand, ultimately canceling out any savings of said resource. Electrification, then, is a key aspect of Einride's strategy, and one of its three core focus points (along with digitization and automation; the company created a digital platform that handles customers' planning, scheduling, routing, invoices, and billing). “You can actually electrify up to 40 percent of the US road freight transport system with a competitive business case using existing technology,” Falck said. “It's more about deploying a new way of thinking rather than just improving the hardware.” Einride also has contracts in place with tire maker Bridgestone and oat milk manufacturer Oatly. The company plans to open a US headquarters in New York next year, as well as offices in Austin and San Francisco. Image Credit: Einride
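
Falck's worry is the Jevons Paradox in miniature: cheaper freight can mean more freight, and if demand grows faster than efficiency, total fuel burned goes up. The toy model below makes that concrete with a constant-elasticity demand curve; the doubling of efficiency and the elasticity values are purely illustrative assumptions, not freight-market estimates, and cost is assumed to track fuel use, which is a simplification.

```python
def total_fuel_use(efficiency_gain: float, demand_elasticity: float) -> float:
    """Relative total fuel use after an efficiency improvement.
    Simplifications: cost per ton-km scales with fuel per ton-km, and
    demand follows a constant-elasticity curve in that cost."""
    fuel_per_unit = 1.0 / efficiency_gain          # each ton-km needs less fuel
    cost_per_unit = fuel_per_unit                  # assume cost tracks fuel use
    demand = cost_per_unit ** (-demand_elasticity) # cheaper freight -> more freight
    return demand * fuel_per_unit

# A hypothetical doubling of fuel efficiency:
for elasticity in (0.5, 1.0, 1.5):
    print(f"elasticity {elasticity}: total fuel x{total_fuel_use(2.0, elasticity):.2f}")
# Above an elasticity of 1, total fuel use rises despite the efficiency gain (the Jevons effect).
```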

    Scientists Mapped Every Large Solar Plant on the Planet Using Satellites and Machine Learning

    Nov 3, 2021 · 5:47


    An astonishing 82 percent decrease in the cost of solar photovoltaic (PV) energy since 2010 has given the world a fighting chance to build a zero-emissions energy system which might be less costly than the fossil-fueled system it replaces. The International Energy Agency projects that PV solar generating capacity must grow ten-fold by 2040 if we are to meet the dual tasks of alleviating global poverty and constraining warming to well below 2°C. Critical challenges remain. Solar is “intermittent,” since sunshine varies during the day and across seasons, so energy must be stored for when the sun doesn't shine. Policy must also be designed to ensure solar energy reaches the furthest corners of the world and places where it is most needed. And there will be inevitable tradeoffs between solar energy and other uses for the same land, including conservation and biodiversity, agriculture and food systems, and community and indigenous uses. Colleagues and I have now published in the journal Nature the first global inventory of large solar energy generating facilities. “Large” in this case refers to facilities that generate at least 10 kilowatts when the sun is at its peak (a typical small residential rooftop installation has a capacity of around 5 kilowatts). We built a machine learning system to detect these facilities in satellite imagery and then deployed the system on over 550 terabytes of imagery using several human lifetimes of computing. We searched almost half of Earth's land surface area, filtering out remote areas far from human populations. In total we detected 68,661 solar facilities. Using the area of these facilities, and controlling for the uncertainty in our machine learning system, we obtain a global estimate of 423 gigawatts of installed generating capacity at the end of 2018. This is very close to the International Renewable Energy Agency's (IRENA) estimate of 420 GW for the same period. Tracking the Growth of Solar Energy Our study shows solar PV generating capacity grew by a remarkable 81 percent between 2016 and 2018, the period for which we had timestamped imagery. Growth was led particularly by increases in India (184 percent), Turkey (143 percent), China (120 percent) and Japan (119 percent). Facilities ranged in size from sprawling gigawatt-scale desert installations in Chile, South Africa, India, and north-west China, through to commercial and industrial rooftop installations in California and Germany, rural patchwork installations in North Carolina and England, and urban patchwork installations in South Korea and Japan. The Advantages of Facility-Level Data Country-level aggregates of our dataset are very close to IRENA's country-level statistics, which are collected from questionnaires, country officials, and industry associations. Compared to other facility-level datasets, we address some critical coverage gaps, particularly in developing countries, where the diffusion of solar PV is critical for expanding electricity access while reducing greenhouse gas emissions. In developed and developing countries alike, our data provides a common benchmark unbiased by reporting from companies or governments. Geospatially-localized data is of critical importance to the energy transition. Grid operators and electricity market participants need to know precisely where solar facilities are in order to know accurately the amount of energy they are generating or will generate. 
Emerging in-situ or remote systems are able to use location data to predict increased or decreased generation caused by, for example, passing clouds or changes in the weather. This increased predictability allows solar to reach higher proportions of the energy mix. As solar becomes more predictable, grid operators will need to keep fewer fossil fuel power plants in reserve, and fewer penalties for over- or under-generation will mean more marginal projects will be unlocked. Using the back catalogue of satellite imagery, we were able to estimate ins...
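
The study's headline capacity number comes from turning detected facility footprints into generating capacity. The sketch below shows the crudest version of that conversion, multiplying polygon area by an assumed capacity density; the 35 MW-per-square-kilometer figure and the example footprints are illustrative assumptions, whereas the published estimate controls for the uncertainty in the detection system rather than applying a single fixed density.

```python
# Crude area-to-capacity conversion for detected solar facilities (illustrative only).
ASSUMED_MW_PER_KM2 = 35.0   # assumed capacity density for utility-scale PV sites

def estimated_capacity_mw(footprint_km2: float,
                          mw_per_km2: float = ASSUMED_MW_PER_KM2) -> float:
    """Estimate installed capacity from a facility's detected footprint area."""
    return footprint_km2 * mw_per_km2

# Hypothetical detections: footprint areas (km^2) from segmentation polygons.
detections = [0.02, 0.5, 2.0, 12.0]
for area in detections:
    print(f"{area:>5.2f} km^2 -> ~{estimated_capacity_mw(area):.1f} MW")
total_mw = sum(estimated_capacity_mw(a) for a in detections)
print(f"Fleet total: ~{total_mw / 1000:.2f} GW")
```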

    These Mice Pups Inherited Immunity From Their Parents—But Not Through DNA

    Nov 2, 2021 · 8:04


    The rules of inheritance are supposedly easy. Dad's DNA mixes with mom's to generate a new combination. Over time, random mutations will give some individuals better adaptability to the environment. The mutations are selected through generations, and the species becomes stronger. But what if that central dogma is only part of the picture? A new study in Nature Immunology is ruffling feathers in that it re-contextualizes evolution. Mice infected with a non-lethal dose of bacteria, once recovered, can pass on a turbo-boosted immune system to their kids and grandkids—all without changing any DNA sequences. The trick seems to be epigenetic changes—that is, how genes are turned on or off—in their sperm. In other words, compared to millennia of evolution, there's a faster route for a species to thrive. For any individual, it's possible to gain survivability and adaptability in a single lifetime, and those changes can be passed on to offspring. “We wanted to test if we could observe the inheritance of some traits to subsequent generations, let's say independent of natural selection,” said study author Dr. Jorge Dominguez-Andres at Radboud University Nijmegen Centre. “The existence of epigenetic heredity is of paramount biological relevance, but the extent to which it happens in mammals remains largely unknown,” said Drs. Paola de Candia at the IRCCS MultiMedica, Milan, and Giuseppe Matarese at the Treg Cell Lab, Dipartimento di Medicina Molecolare e Biotecnologie Mediche at the Università degli Studi di Napoli in Naples, who were not involved in the study. “Their work is a big conceptual leap.” Evolution on Steroids The paper is controversial because it builds upon Darwin's original theory of evolution. You know this example: giraffes don't have long necks because they had to stretch their necks to reach higher leaves. Rather, random mutations in the DNA that codes for long necks were eventually selected, mostly because those giraffes were the ones that survived and procreated. Yet recent studies have thrown a wrench into the long-standing dogma around how species adapt. At their root is epigenetics, a mechanism “above” DNA to regulate how our genes are expressed. It's helpful to think of DNA as base, low-level code—ASCII in computers. To execute the code, it needs to be translated into a higher language: proteins. Similar to a programming language, it's possible to silence DNA with additional bits of code. It's how our cells develop into vastly different organs and body parts—like the heart, kidneys, and brain—even though they have the same DNA. This level of control is dubbed epigenetics, or “above genetics.” One of the most common ways to silence DNA is to add a chemical group to a gene so that, like a wheel lock, the gene gets “stuck” as it's trying to make a protein. This silences the genetic code without damaging the gene itself. These chemical markers are dotted along our genes, and represent a powerful way to control our basic biology—anything from stress to cancer to autoimmune diseases or psychiatric struggles. But unlike DNA, the chemical tags are thought to be completely wiped out in the embryo, resulting in a blank slate for the next generation to start anew. Not so much. A now-famous study showed that a famine during the winters of 1944 and 1945 altered the metabolism of kids who, at the time, were growing fetuses. The consequence was that those kids were more susceptible to obesity and diabetes, even though their genes remained unchanged. 
Similar studies in mice showed that fear and trauma in parents can be passed onto pups—and grandkids—making them more susceptible, whereas some types of drug abuse increased the pups' resilience against addiction. Long story short? DNA inheritance isn't the only game in town. Superpowered Immunity The new study plays on a similar idea: that an individual's experiences in life can change the epigenetic makeup of his or her offspring. Here, the authors focused on trained immunity—the par...

    New Optical Switch Is Up to 1,000 Times Faster Than Silicon Transistors

    Nov 1, 2021 · 3:42


    As Moore's Law slows, people are starting to look for alternatives to the silicon chips we've long been reliant on. A new optical switch up to 1,000 times faster than normal transistors could one day form the basis of new computers that use light rather than electricity. The attraction of optical computing is obvious. Unlike the electrons that modern computers rely on, photons travel at the speed of light, and a computer that uses them to process information could theoretically be much faster than one that uses electronics. The bulkiness of conventional optical equipment long stymied the idea, but in recent years the field of photonics has rapidly improved our ability to produce miniaturized optical components using many of the same techniques as the semiconductor industry. This has not only led to a revived interest in optical computing, but could also have significant impact for the optical communications systems used to shuttle information around in data centers, supercomputers, and the internet. Now, researchers from IBM and the Skolkovo Institute of Science and Technology in Russia have created an optical switch—a critical component in many photonic devices—that is both incredibly fast and energy-efficient. It consists of a 35-nanometer-wide film made out of an organic semiconductor sandwiched between two mirrors that create a microcavity, which keeps light trapped inside. When a bright “pump” laser is shone onto the device, photons from its beam couple with the material to create a conglomeration of quasiparticles known as a Bose-Einstein condensate, a collection of particles that behaves like a single atom. A second weaker laser can be used to switch the condensate between two levels with different numbers of quasiparticles. The level with more particles represents the “on” state of a transistor, while the one with fewer represents the “off” state. What's most promising about the new device, described in a paper in Nature, is that it can be switched between its two states a trillion times a second, which is somewhere between 100 and 1,000 times faster than today's leading commercial transistors. It can also be switched by just a single photon, which means it requires far less energy to drive than a transistor. Other optical switching devices with similar sensitivity have been created before, but they need to be kept at cryogenic temperatures, which severely limits their practicality. In contrast, this new device operates at room temperature. There's still a very long way to go until the technology appears in general-purpose optical computers, though, study senior author Pavlos Lagoudakis told IEEE Spectrum. “It took 40 years for the first electronic transistor to enter a personal computer,” he said. “It is often misunderstood how long before a discovery in fundamental physics research takes to enter the market.” One of the challenges is that, while the device requires very little energy to switch, it still requires constant input from the pump laser. In a statement, the researchers said they are working with collaborators to develop perovskite supercrystal materials that exhibit superfluorescence to help lower this source of power consumption. 
But even though it may be some time before your laptop is sporting a chip made out of these switches, Lagoudakis thinks they could find nearer-term applications in optical accelerators that perform specialized operations far faster than conventional chips, or as ultra-sensitive light detectors for the LIDAR scanners used by self-driving cars and drones. Image Credit: Tomislav Jakupec from Pixabay

    Animal Evolution: Fossil Discovery Hints First Animals Lived Nearly 900 Million Years Ago

    Play Episode Listen Later Oct 31, 2021 6:01


    Ever wonder how and when animals swanned onto the evolutionary stage? When, where, and why did animals first appear? What were they like? Life has existed for much of Earth's 4.5-billion-year history, but for most of that time it consisted exclusively of bacteria. Although scientists have been investigating the evidence of biological evolution for over a century, some parts of the fossil record remain maddeningly enigmatic, and finding evidence of Earth's earliest animals has been particularly challenging. Hidden Evolution Information about evolutionary events hundreds of millions of years ago is mainly gleaned from fossils. Familiar fossils are shells, exoskeletons, and bones that organisms make while alive. These so-called “hard parts” first appear in rocks deposited during the Cambrian explosion, slightly less than 540 million years ago. The seemingly sudden appearance of diverse, complex animals, many with hard parts, implies that there was a preceding interval during which early soft-bodied animals with no hard parts evolved from simpler animals. Unfortunately, until now, possible evidence of fossil animals in the interval of “hidden” evolution has been very rare and difficult to understand, leaving the timing and nature of evolutionary events unclear. This conundrum, known as Darwin's dilemma, remains tantalizing and unresolved 160 years after the publication of On the Origin of Species. Required Oxygen There is indirect evidence regarding how and when animals may have appeared. Animals by definition ingest pre-existing organic matter, and their metabolisms require a certain level of ambient oxygen. It has been assumed that animals could not appear, or at least not diversify, until after a major oxygen increase in the Neoproterozoic Era, sometime between 815 and 540 million years ago, resulting from accumulation of oxygen produced by photosynthesizing cyanobacteria, also known as blue-green algae. It is widely accepted that sponges are the most basic animal in the animal evolutionary tree and therefore probably were first to appear. Yes, sponges are animals: they use oxygen and feed by sucking water containing organic matter through their bodies. The earliest animals were probably sponge-related (the “sponge-first” hypothesis), and may have emerged hundreds of millions of years prior to the Cambrian, as suggested by a genetic method called molecular phylogeny, which analyzes genetic differences. Based on these reasonable assumptions, sponges may have existed as much as 900 million years ago. So, why have we not found fossil evidence of sponges in rocks from those hundreds of millions of intervening years? Part of the answer to this question is that sponges do not have standard hard parts (shells, bones). Although some sponges have an internal skeleton made of microscopic mineralized rods called spicules, no convincing spicules have been found in rocks dating from the interval of hidden early animal evolution. However, some sponge types have a skeleton made of tough protein fibers called spongin, forming a distinctive, microscopic, three-dimensional meshwork, identical to that of a bath sponge. Work on modern and fossil sponges has shown that these sponges can be preserved in the rock record when their soft tissue is calcified during decay. If the calcified mass hardens around spongin fibers before they too decay, a distinctive microscopic meshwork of complexly branching tubes appears in the rock.
The branching configuration is unlike that of algae, bacteria, or fungi, and is well known from limestones younger than 540 million years. Unusual Fossils I am a geologist and paleobiologist who works on very old limestone. Recently, I described this exact microstructure in 890-million-year-old rocks from northern Canada, proposing that it could be evidence of sponges that are several hundred million years older than the next-youngest uncontested sponge fossil. Although my proposal may initially seem outrageous, it is consiste...

    This Spooky, Bizarre Haunted House Was Generated by an AI

    Play Episode Listen Later Oct 29, 2021 3:45


    AI is slowly getting more creative, and as it does it's raising questions about the nature of creativity itself, who owns works of art made by computers, and whether conscious machines will make art humans can understand. In the spooky spirit of Halloween, one engineer used an AI to produce a very specific, seasonal kind of “art”: a haunted house. It's not a brick-and-mortar house you can walk through, unfortunately; like so many things these days, it's virtual, and was created by research scientist and writer Janelle Shane. Shane runs a machine learning humor blog called AI Weirdness where she writes about the “sometimes hilarious, sometimes unsettling ways that machine learning algorithms get things wrong.” For the virtual haunted house, Shane used CLIP, a neural network built by OpenAI, and VQGAN, a neural network architecture that combines convolutional neural networks (which are typically used for images) with transformers (which are typically used for language). CLIP (short for Contrastive Language–Image Pre-training) learns visual concepts from natural language supervision, using images and their descriptions to rate how well a given image matches a phrase. The algorithm uses zero-shot learning, a training methodology that decreases reliance on labeled data and enables the model to eventually recognize objects or images it hasn't seen before. The phrase Shane focused on for this experiment was “haunted Victorian house,” starting with a photo of a regular Victorian house, then letting the AI use its feedback to modify the image with details it associated with the word “haunted.” The results are somewhat ghoulish, though also perplexing. In the first iteration, the home's wood has turned to stone, the windows are covered in something that could be cobwebs, the cloudy sky has a dramatic tilt to it, and there appears to be fire on the house's lower level. Shane then upped the ante and instructed the model to create an “extremely haunted” Victorian house. The second iteration looks a little more haunted, but also a little less like a house in general, partly because there appears to be a piece of night sky under the house's roof near its center. Shane then tried taking the word “haunted” out of the instructions, and things just got more bizarre from there. She wrote in her blog post about the project, “Apparently CLIP has learned that if you want to make things less haunted, add flowers, street lights, and display counters full of snacks.” “All the AI's changes tend to make the house make less sense,” Shane said. “That's because it's easier for it to look at tiny details like mist than the big picture like how a house fits together. In a lot of what AI does, it's working on the level of surface details rather than deeper meaning.” Shane's description matches up with where AI stands as a field. Despite impressive progress in fields like protein folding, RNA structure, natural language processing, and more, AI has not yet approached “general intelligence” and is still very much in the “narrow” domain. Researcher Melanie Mitchell argues that common fallacies in the field, like using human language to describe machine intelligence, are hampering its advancement; computers don't really “learn” or “understand” in the way humans do, and adjusting the language we use to describe AI systems could help do away with some of the misunderstandings around their capabilities.
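For readers curious how that image-phrase scoring works in practice, here is a minimal sketch (mine, not Shane's actual code) using OpenAI's open-source clip package; the image file and prompts are illustrative, and the VQGAN loop that actually rewrites the image is omitted.

```python
# Minimal sketch of the CLIP scoring step described above, using OpenAI's
# open-source CLIP package (https://github.com/openai/CLIP). The image path
# and prompts are illustrative; the VQGAN image-update loop is omitted.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Candidate phrases to score against the current image.
prompts = ["a Victorian house", "a haunted Victorian house",
           "an extremely haunted Victorian house"]

image = preprocess(Image.open("victorian_house.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    # logits_per_image holds a similarity score for each (image, phrase) pair.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

for phrase, p in zip(prompts, probs[0]):
    print(f"{phrase}: {p:.3f}")
```

In a VQGAN+CLIP setup, the image is repeatedly nudged to raise the score of the target phrase, which is exactly the kind of surface-detail chasing Shane describes.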
Shane's haunted house is a clear example of this lack of understanding, and a playful reminder that we should move cautiously in allowing machines to make decisions with real-world impact. Banner Image Credit: Janelle Shane, AI Weirdness

    Deciphering the Philosophers' Stone: How Scientists Cracked a 400-Year-Old Alchemical Cipher

    Play Episode Listen Later Oct 28, 2021 7:53


    What secret alchemical knowledge could be so important it required sophisticated encryption? The setting was Amsterdam, 2019. A conference organized by the Society for the History of Alchemy and Chemistry had just concluded at the Embassy of the Free Mind, in a lecture hall opened by historical fiction author Dan Brown. At the conference, Science History Institute postdoctoral researcher Megan Piorko presented a curious manuscript belonging to English alchemists John Dee (1527–1608) and his son Arthur Dee (1579–1651). In the pre-modern world, alchemy was a means to understand nature through ancient secret knowledge and chemical experiment. Within Dee's alchemical manuscript was a cipher table, followed by encrypted ciphertext under the heading “Hermeticae Philosophiae medulla”—or Marrow of the Hermetic Philosophy. The table would end up being a valuable tool in decrypting the cipher, but could only be interpreted correctly once the hidden “key” was found. It was during post-conference drinks in a dimly lit bar that Megan decided to investigate the mysterious alchemical cipher—with the help of her colleague, University of Graz postdoctoral researcher Sarah Lang. A Recipe for the Elixir of Life Megan and Sarah shared their initial analysis on a history of chemistry blog and presented the historical discovery to cryptology experts from around the world at the 2021 HistoCrypt conference. Based on the rest of the notebook's contents, they believed the ciphertext contained a recipe for the fabled Philosophers' Stone—an elixir that supposedly prolongs the owner's life and grants the ability to produce gold from base metals. The mysterious cipher received much interest, and Sarah and Megan were soon inundated with emails from would-be code-breakers. That's when Richard Bean entered the picture. Less than a week after the HistoCrypt proceedings went live, Richard contacted Lang and Piorko with exciting news: he'd cracked the code. Megan and Sarah's initial hypothesis was confirmed; the encrypted ciphertext was indeed an alchemical recipe for the Philosophers' Stone. Together, the trio began to translate and analyze the 177-word passage. The Alchemist Behind the Cipher But who wrote this alchemical cipher in the first place, and why encrypt it? Alchemical knowledge was shrouded in secrecy, as practitioners believed it could only be understood by true adepts. Encrypting the most valuable trade secret, the Philosophers' Stone, would have provided an added layer of protection against alchemical fraud and the unenlightened. Alchemists spent their lives searching for this vital substance, with many believing they had the key to successfully unlocking the secret recipe. Arthur Dee was an English alchemist and spent most of his career as royal physician to Tsar Michael I of Russia. He continued to add to the alchemical manuscript after his father's death—and the cipher appears to be in Arthur's handwriting. We don't know the exact date John Dee, Arthur's father, started writing in this manuscript, or when Arthur added the cipher table and encrypted text he titled “The Marrow of Hermetic Philosophy.” However, we do know Arthur wrote another manuscript in 1634 titled Arca Arcanorum—or Secret of Secrets—where he celebrates his alchemical success with the Philosophers' Stone, claiming he discovered the true recipe. He decorated Arca Arcanorum with an emblem copied from a medieval alchemical scroll, illustrating the allegorical process of alchemical transmutation necessary for the Philosophers' Stone. 
Cracking the Code What clues led to decrypting the mysterious Marrow of the Hermetic Philosophy passage? Adjacent to the encrypted text is a table resembling one used in a traditional style of cipher called a Bellaso/Della Porta cipher—invented in 1553 by Italian cryptologist Giovan Battista Bellaso, and written about in 1563 by Giambattista della Porta. This was the first clue. The Latin title indicated the text itself was also in Latin. This was ...
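The article stops short of showing how such a cipher works, but a rough sketch of one common modern reconstruction of a Bellaso/Della Porta-style scheme is below: 13 tableau rows keyed by letter pairs (AB, CD, ... YZ), with each key letter swapping letters between the two halves of the alphabet in a self-reciprocal way. The keyword here is purely illustrative, not the key to Dee's manuscript.

```python
# Rough sketch of a reciprocal Bellaso/Della Porta-style cipher, using one
# common modern reconstruction of the tableau (13 rows keyed by the letter
# pairs AB, CD, ..., YZ). The keyword below is illustrative, not Dee's key.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def porta(text: str, key: str) -> str:
    """Encrypts or decrypts; the cipher is self-reciprocal."""
    rows = [ALPHABET.index(k) // 2 for k in key.upper() if k in ALPHABET]
    out, i = [], 0
    for ch in text.upper():
        if ch not in ALPHABET:
            out.append(ch)              # pass spaces and punctuation through
            continue
        k, p = rows[i % len(rows)], ALPHABET.index(ch)
        if p < 13:                      # A-M maps into N-Z
            out.append(ALPHABET[13 + (p + k) % 13])
        else:                           # N-Z maps back into A-M
            out.append(ALPHABET[(p - 13 - k) % 13])
        i += 1
    return "".join(out)

ciphertext = porta("MEDULLA", "KEY")        # encrypt
print(ciphertext, porta(ciphertext, "KEY")) # decrypting recovers MEDULLA
```

Recovering the keyword from the ciphertext alone, as Bean had to, is of course far harder than running the cipher with a known key.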

    This Tiny Personal Aircraft Costs Under $100K and Can Take Off From Your Driveway

    Play Episode Listen Later Oct 27, 2021 3:38


    From buses to taxis to ambulances, the number and type of vehicles set to take to the skies in the allegedly near future keeps growing. Now another one is joining their ranks, and it seems to defy classification—it's not a flying car, nor a drone; the closest to an accurate description may be a flying all-terrain vehicle, or the designation its creators have given it, which is a “personal electric aerial vehicle.” And it shares its name with a widely-beloved futuristic cartoon family: the Jetsons. All sorts of technology that used to exist only in cartoon form has made its way into being since the show launched in 1962, from jet packs to 3D printed food to smartwatches. Swedish startup Jetson Aero's tiny electric aircraft, the Jetson One, is sort of like George Jetson's “car”—except there's only space for one person, there's not a closed cabin, and you can't press a button to be dropped out the bottom. You can, however, press a button to activate a ballistic parachute, but Jetson Aero is really hoping none of its customers will ever have to use this feature. The parachute is a last-resort option, built into the Jetson One along with several other redundancies for passenger safety. The vehicle runs on battery power, with eight electric motors, and is like a helicopter in that it takes off and lands vertically (though the fact that it has eight propellers makes it a “multicopter”). The fastest it goes is 63 miles per hour (102 kilometers per hour), so about the same as highway driving in or near an urban area. At 9.3 feet long by 8 feet wide by 3.4 feet tall, the Jetson One is quite small, at least as far as aircraft go, and weighs 198 pounds (90 kilograms). It can carry a passenger who weighs up to 210 pounds, though the lighter you are, the longer you can fly for; a pilot weighing 187 pounds can fly for 20 minutes before the vehicle's batteries need recharging. In the US the aircraft is classified as “ultralight,” meaning you don't need a pilot's license to fly it. Given its compact footprint, the vehicle could take off right from owners' driveways; the company encourages potential customers to “make your garden or terrace your private airport.” It certainly would be nice to be able to fly without the hassle of first traveling to an airport, or the expense of the associated storage and usage fees (though if you're buying one of these little winged toys, those expenses probably won't concern you too much; the Jetson One goes for $92,000, which actually isn't outrageous given that it's basically a personal mini plane). The downside of making your garden your personal airport, though, is that if you lose control of the vehicle or have a shaky takeoff or landing, you could go crashing right through your home's roof or wall. In the spirit of its widely-known compatriot company, Ikea, Jetson Aero delivers its aircraft to customers as a partially-assembled kit, accompanied by “detailed build instructions.” It's a long way from shelving unit to personal eVTOL, though, and letting customers assemble the vehicle themselves seems an odd choice given the risks associated with getting even one piece wrong. Customers don't seem worried, though. The company has already sold its entire 2022 production run (which, to be fair, was only 12 units) and is now taking orders for delivery in 2023. Image Credit: Jetson Aero

    Friend or Foe? Single Neurons in the Brain Control Social Interaction, Study Finds

    Play Episode Listen Later Oct 26, 2021 8:25


    Neurons live in a society, and scientists just found the ones that may allow us to thrive in our own society. Like humans, individual neurons are strikingly unique. Also like humans, they're constantly in touch with each other. They hook up into neural circuit “friend groups,” break apart when things change, and rewire into new cliques. This flexibility lets their collective society (the brain) and their owners (us) learn about and adapt to an ever-changing world. To rebuild neural circuits, neurons constantly monitor the status of their neighbors through sinewy branches that sprout from a rotund body. These aren't just passive phone lines. Dotted along each branch are little dual-function units called synapses, which allow neurons to both chat with a neuron partner and record previous conversations. Each synapse retains a “log” of past communications in its physical and molecular structure. By “consulting” this log, a neuron can passively determine whether or not to form a network with that particular neuron partner—or even set of partners—or to avoid any interactions in the near future. Apparently, neurons can do the same for us. This week, by listening to the electrical chatter of single neurons in rhesus macaque monkeys, scientists at Harvard, led by Dr. Ziv Williams, homed in on a peculiar subset of neurons that helps us tell friends from foes. The neurons, sprinkled across the frontal parts of the brain, are strikingly powerful. As their monkey hosts hung out for game night, the little computers tracked how each player behaved—were they more cooperative, or selfish? Over time, these “social agent identity cells” stealthily map out the entire group dynamic. Using this info, the monkeys can then decide whether to team up with other monkeys or shun them. It's not just correlation. By modeling the electrical activity of these neurons, scientists were able to accurately determine any monkey's past decisions—and mind-blowingly, predict its future ones, basically “mind-reading” the animal's next move. When the team tampered with the neurons' activity with a short burst of electrical zaps, the monkeys lost their social judgment. Like the new kid in school, they could no longer decide who to befriend. “In the frontal cortex, these neurons appear to be tuned for possible action from peers, representing them as communication partners, competitors, and collaborators,” wrote Dr. Julia Sliwa at Sorbonne Université in Paris, who was not involved in the study. While the results are from monkeys, they're some of the first to connect the activity of individual neurons to an extremely complicated yet necessary aspect of our lives. These data “are major steps in identifying the neural mechanisms for maneuvering in complex social structure,” she said. My Brain's View of You We often think of neurons and circuits as hardware components that represent us: our perception, memory, decisions, feelings. Yet large parts of the brain are dedicated to representing other people in our external world. One famous example is the “Jennifer Aniston neuron.” Back in 2005, an experiment showed that a single neuron in a person's brain could react to a particular face—for example, Aniston's. A lightbulb moment for neuroscience and computer vision, the study raised a brazen idea: that a single neuron has the computing power to encode a person's physical identity. In the real world, it gets more complicated than just identifying a face. A person's face comes with history—is it my first time meeting them?
What's their reputation? How do I feel about this person? Much work on how our brains handle social interaction comes from studying bands in music studios or teacher-student dynamics in classrooms. Here, brain activity is captured with wearables, which measure brain waves that wash over parts of the brain. These studies show that when making music or watching a movie together, our brain waves sync up. To rephrase, our brains tune into other people's—but when,...
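To make the "modeling the electrical activity" idea from the study above a little more concrete, here is a toy decoding sketch on synthetic data; it is not the study's method, code, or data, just an illustration of how a simple classifier can read a binary choice out of single-neuron firing rates.

```python
# Toy illustration (not the study's method or data) of decoding a binary
# choice -- cooperate vs. defect -- from single-neuron firing rates with a
# linear classifier. The spike counts here are entirely synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 50

# Pretend a small subset of "social agent identity" cells carries the signal.
choices = rng.integers(0, 2, n_trials)                  # 0 = defect, 1 = cooperate
rates = rng.poisson(5, size=(n_trials, n_neurons)).astype(float)
rates[:, :8] += 2.0 * choices[:, None]                  # informative neurons

X_train, X_test, y_train, y_test = train_test_split(rates, choices, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("decoding accuracy:", decoder.score(X_test, y_test))
```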

    Not So Mysterious After All: Researchers Show How to Crack AI's Black Box

    Play Episode Listen Later Oct 25, 2021 4:34


    The deep learning neural networks at the heart of modern artificial intelligence are often described as “black boxes” whose inner workings are inscrutable. But new research calls that idea into question, with significant implications for privacy. Unlike traditional software whose functions are predetermined by a developer, neural networks learn how to process or analyze data by training on examples. They do this by continually adjusting the strength of the links between their many neurons. By the end of this process, the way they make decisions is tied up in a tangled network of connections that can be impossible to follow. As a result, it's often assumed that even if you have access to the model itself, it's more or less impossible to work out the data that the system was trained on. But a pair of recent papers have brought this assumption into question, according to MIT Technology Review, by showing that two very different techniques can be used to identify the data a model was trained on. This could have serious implications for AI systems trained on sensitive information like health records or financial data. The first approach takes aim at generative adversarial networks (GANs), the AI systems behind deepfake images. These systems are increasingly being used to create synthetic faces that are supposedly completely unrelated to real people. But researchers from the University of Caen Normandy in France showed that they could easily link generated faces from a popular model to real people whose data had been used to train the GAN. They did this by getting a second facial recognition model to compare the generated faces against training samples to spot if they shared the same identity. The images aren't an exact match, as the GAN has modified them, but the researchers found multiple examples where generated faces were clearly linked to images in the training data. In a paper describing the research, they point out that in many cases the generated face is simply the original face in a different pose. While the approach is specific to face-generation GANs, the researchers point out that similar ideas could be applied to things like biometric data or medical images. Another, more general approach to reverse engineering neural nets could do that straight off the bat, though. A group from Nvidia has shown that they can infer the data the model was trained on without even seeing any examples of the training data. They used an approach called model inversion, which effectively runs the neural net in reverse. This technique is often used to analyze neural networks, but using it to recover the input data had only been achieved on simple networks under very specific sets of assumptions. In a recent paper, the researchers described how they were able to scale the approach to large networks by splitting the problem up and carrying out inversions on each of the network's layers separately. With this approach, they were able to recreate training data images using nothing but the models themselves. While carrying out either attack is a complex process that requires intimate access to the model in question, both highlight the fact that AIs may not be the black boxes we thought they were, and determined attackers could extract potentially sensitive information from them. Given that it's becoming increasingly easy to reverse engineer someone else's model using your own AI, the requirement to have access to the neural network isn't even that big of a barrier.
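As a sketch of the identity-matching step described above (the embedding model is a placeholder, not the Caen team's actual pipeline), the core of such a check is just a similarity search between face embeddings:

```python
# Sketch of the identity-matching idea described above: embed GAN-generated
# faces and candidate training images with a separate face recognition model,
# then flag pairs whose embeddings are suspiciously similar. The embeddings
# are assumed to come from whatever face-embedding model you have available.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_possible_memberships(generated_embs, training_embs, threshold=0.8):
    """Return (generated_idx, training_idx, score) pairs above the threshold.

    The 0.8 threshold is illustrative; a real attack would calibrate it on
    known matching and non-matching pairs.
    """
    hits = []
    for i, g in enumerate(generated_embs):
        for j, t in enumerate(training_embs):
            score = cosine_similarity(g, t)
            if score >= threshold:
                hits.append((i, j, score))
    return sorted(hits, key=lambda h: -h[2])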
The problem isn't restricted to image-based algorithms. Last year, researchers from a consortium of tech companies and universities showed that they could extract news headlines, JavaScript code, and personally identifiable information from the large language model GPT-2. These issues are only going to become more pressing as AI systems push their way into sensitive areas like health, finance, and defense. There are some solutions on the horizon, such as differential privacy, where mode...

    The Most Powerful Space Telescope Ever Built Will Look Back in Time to the Dark Ages of the Universe

    Play Episode Listen Later Oct 24, 2021 6:29


    Some have called NASA's James Webb Space Telescope the “telescope that ate astronomy.” It is the most powerful space telescope ever built and a complex piece of mechanical origami that has pushed the limits of human engineering. On Dec. 18, 2021, after years of delays and billions of dollars in cost overruns, the telescope is scheduled to launch into orbit and usher in the next era of astronomy. I'm an astronomer with a specialty in observational cosmology—I've been studying distant galaxies for 30 years. Some of the biggest unanswered questions about the universe relate to its early years just after the Big Bang. When did the first stars and galaxies form? Which came first, and why? I am incredibly excited that astronomers may soon uncover the story of how galaxies started because James Webb was built specifically to answer these very questions. The ‘Dark Ages' of the Universe Excellent evidence shows that the universe started with an event called the Big Bang 13.8 billion years ago, which left it in an ultra-hot, ultra-dense state. The universe immediately began expanding after the Big Bang, cooling as it did so. One second after the Big Bang, the universe was a hundred trillion miles across with an average temperature of an incredible 18 billion degrees Fahrenheit (10 billion degrees Celsius). Around 400,000 years after the Big Bang, the universe was 10 million light-years across and the temperature had cooled to 5,500 degrees Fahrenheit (3,000 degrees Celsius). If anyone had been there to see it at this point, the universe would have been glowing dull red like a giant heat lamp. Throughout this time, space was filled with a smooth soup of high energy particles, radiation, hydrogen, and helium. There was no structure. As the expanding universe became bigger and colder, the soup thinned out and everything faded to black. This was the start of what astronomers call the Dark Ages of the universe. The soup of the Dark Ages was not perfectly uniform and due to gravity, tiny areas of gas began to clump together and become more dense. The smooth universe became lumpy and these small clumps of denser gas were seeds for the eventual formation of stars, galaxies, and everything else in the universe. Although there was nothing to see, the Dark Ages were an important phase in the evolution of the universe. Looking for the First light The Dark Ages ended when gravity formed the first stars and galaxies that eventually began to emit the first light. Although astronomers don't know when first light happened, the best guess is that it was several hundred million years after the Big Bang. Astronomers also don't know whether stars or galaxies formed first. Current theories based on how gravity forms structure in a universe dominated by dark matter suggest that small objects—like stars and star clusters—likely formed first and then later grew into dwarf galaxies and then larger galaxies like the Milky Way. These first stars in the universe were extreme objects compared to stars of today. They were a million times brighter but they lived very short lives. They burned hot and bright and when they died, they left behind black holes up to a hundred times the Sun's mass, which might have acted as the seeds for galaxy formation. Astronomers would love to study this fascinating and important era of the universe, but detecting first light is incredibly challenging. 
Compared to today's massive, bright galaxies, the first objects were very small, and due to the constant expansion of the universe, they're now tens of billions of light-years away from Earth. Also, the earliest stars were surrounded by gas left over from their formation and this gas acted like fog that absorbed most of the light. It took several hundred million years for radiation to blast away the fog. This early light is very faint by the time it gets to Earth. But this is not the only challenge. As the universe expands, it continuously stretches the wavelength of light traveling through...
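As a rough illustration of that stretching (the numbers here are mine, not the article's): the observed wavelength grows with redshift as observed = emitted × (1 + z), which pushes the ultraviolet light of the first stars into the infrared and is why Webb was built as an infrared observatory.

```python
# Rough illustration (my numbers, not the article's) of cosmological redshift:
# observed wavelength = emitted wavelength * (1 + z).
lyman_alpha_nm = 121.6            # ultraviolet line emitted by hot young stars

for z in (6, 10, 15):             # plausible redshifts for the first galaxies
    observed_um = lyman_alpha_nm * (1 + z) / 1000
    print(f"z = {z:2d}: observed at about {observed_um:.2f} micrometers (infrared)")
```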

    Scientists Are on a Quest to Create the Perfect Cup of Coffee—Without the Beans

    Play Episode Listen Later Oct 22, 2021 4:55


    Ahh, coffee. Is there anything more delicious, more satisfying? It's always there when you need it, be it first thing in the morning or for a mid-afternoon pick-me-up. According to the Sustainable Coffee Challenge, global consumption of this vital brew is around 600 billion cups per year (I know—I would have guessed higher, too). But as with many of the products we consume, there's a cost beyond what we pay at the store. Producing coffee—like producing meat, or almonds, or corn, or pretty much anything—has an environmental cost, too. It's that cost that's led innovative entrepreneurs to seek a more Earth-friendly way to produce everything from beef to milk to salmon. Now coffee is joining the club, with startups in the US and Europe experimenting with new ways to make crave-worthy coffee—sans any coffee beans. One of these is Finland's VTT Technical Research Centre. VTT uses a technique called cellular agriculture to grow its pseudo-coffee, filling bioreactors with cell cultures then adding nutrients that encourage growth. Heiko Rischer, VTT's head of plant biotechnology, described one of the first cups brewed with his company's product as tasting like something “in between a coffee and a black tea.” If Finland seems like a surprising location for one of the first artificial coffees to be made—I personally would have guessed Italy, or maybe Spain—it makes sense when you put together a couple key factors. First, Nordic countries tend to be a few steps ahead of the rest of the world in terms of environmentalism; from Greta Thunberg to electric car usage to Right to Repair laws, their help-the-planet game is strong. Also, Finland is actually the world's biggest consumer of coffee per capita, with people throwing back an average of 26.45 pounds per year (as compared to the US average of 9.26 pounds per year). Like most crops, coffee production simultaneously impacts the climate crisis and is impacted by it. One of the big problems coffee demand is causing is deforestation, with more and more land being cleared of trees and natural ecosystems to make way for coffee plants. Those plants require pesticides and fertilizer, and their beans then need to be shipped across the world to caffeine-addicted consumers. VTT isn't the only company working on making a more sustainable version of our favorite morning drink. Atomo Coffee, a startup based in Seattle, uses a different method than VTT, breaking down plant waste then converting the relevant compounds into a coffee-bean-like solid, and San Francisco-based Compound Foods uses microbes and fermentation to make bean-free coffee. According to The Guardian, Atomo's facility currently produces enough of the fake bean to equal around 1,000 servings of coffee a day, and aims to get that up to 10,000 a day in the next year—so, about enough to fulfill the coffee needs of a tiny fraction of its home city's population. That's one of the major hurdles that companies producing synthetic foods will face; the supply chains, processes, and infrastructure serving our existing food production system grew and were refined over decades, and are able to meet consumer demand in their current form. Scaling production of lab-grown products to the level needed to continue meeting that demand—or, more likely, increased demand as the global middle class continues to grow—won't be easy, even once fake meat tastes and feels just like real meat or lab-grown coffee goes down as smooth as the stuff that comes from plants. 
Speaking of which, “in between a coffee and a black tea” isn't going to cut it for coffee-lovers. Until the synthetic stuff smells, tastes, and feels a lot more like the real thing, switching to cell-cultured coffee is going to be a very hard sell (pun not intended). In addition, VTT's coffee will need to be approved by regulatory bodies in Europe and the US before the company can bring its product to market. A final relevant issue is technological unemployment, which isn't just a problem for peop...

    Would We Still See Ourselves as ‘Human' if Other Hominin Species Hadn't Gone Extinct?

    Play Episode Listen Later Oct 21, 2021 16:13


    In our mythologies, there's often a singular moment when we became “human.” Eve plucked the fruit of the tree of knowledge and gained awareness of good and evil. Prometheus created men from clay and gave them fire. But in the modern origin story, evolution, there's no defining moment of creation. Instead, humans emerged gradually, generation by generation, from earlier species. As with any other complex adaptation—a bird's wing, a whale's fluke, our own fingers—our humanity evolved step by step, over millions of years. Mutations appeared in our DNA and spread through the population; our ancestors slowly became something more like us and, finally, we appeared. Strange Apes, But Still Apes People are animals, but we're unlike other animals. We have complex languages that let us articulate and communicate ideas. We're creative: we make art, music, tools. Our imaginations let us think up worlds that once existed, dream up worlds that might yet exist, and reorder the external world according to those thoughts. Our social lives are complex networks of families, friends, and tribes, linked by a sense of responsibility towards each other. We also have awareness of ourselves and our universe: sentience, sapience, consciousness, whatever you call it. And yet the distinction between ourselves and other animals is, arguably, artificial. Animals are more like humans than we might think—or like to think. Almost all behavior we once considered unique to ourselves is seen in animals, even if in less well-developed forms. That's especially true of the great apes. Chimps, for example, have simple gestural and verbal communication. They make crude tools, even weapons, and different groups have different suites of tools—distinct cultures. Chimps also have complex social lives and cooperate with each other. As Darwin noted in Descent of Man, almost everything odd about Homo sapiens—emotion, cognition, language, tools, society—exists, in some primitive form, in other animals. We're different, but less different than we think. And in the past, some species were far more like us than other apes: Ardipithecus, Australopithecus, Homo erectus, and Neanderthals. Homo sapiens is the only survivor of a once diverse group of humans and human-like apes, the hominins, which includes around 20 known species and probably dozens of unknown species. The extinction of those other hominins wiped out all the species that were intermediate between ourselves and other apes, creating the impression that some vast, unbridgeable gulf separates us from the rest of life on Earth. But the division would be far less clear if those species still existed. What looks like a bright, sharp dividing line is really an artefact of extinction. The discovery of these extinct species now blurs that line again and shows how the distance between us and other animals was crossed—gradually, over millennia. The Evolution of Humanity Our lineage probably split from the chimpanzees around six million years ago. These first hominins, members of the human line, would barely have seemed human, however. For the first few million years, hominin evolution was slow. The first big change was walking upright, which let hominins move away from forests into more open grassland and bush. But if they walked like us, nothing else suggests the first hominins were any more human than chimps or gorillas. Ardipithecus, the earliest well-known hominin, had a brain that was slightly smaller than a chimp's, and there's no evidence they used tools.
In the next million years, Australopithecus appeared. Australopithecus had a slightly larger brain: larger than a chimp's, but still smaller than a gorilla's. It made slightly more sophisticated tools than chimps, using sharp stones to butcher animals. Then came Homo habilis. For the first time, hominin brain size exceeded that of other apes. Tools like stone flakes, hammer stones, and “choppers” became much more complex. After that, around two million years ago, human evolu...

    AI-Savvy Criminals Pulled Off a $35 Million Deepfake Bank Heist

    Play Episode Listen Later Oct 20, 2021 5:48


    Thanks to the advance of deepfake technology, it's becoming easier to clone peoples' voices. Some uses of the tech, like creating voice-overs to fill in gaps in Roadrunner, the documentary about Anthony Bourdain released this past summer, are harmless (though even the ethics of this move were hotly debated when the film came out). In other cases, though, deepfaked voices are being used for ends that are very clearly nefarious—like stealing millions of dollars. An article published last week by Forbes revealed that a group of cybercriminals in the United Arab Emirates used deepfake technology as part of a bank heist that transferred a total of $35 million out of the country and into accounts all over the world. Money Heist, Voice Edition All you need to make a fake version of someone's voice is a recording of that person speaking. As with any machine learning system whose output improves based on the quantity and quality of its input data, a deepfaked voice will sound more like the real thing if there are more recordings for the system to learn from. In this case, criminals used deepfake software to recreate the voice of an executive at a large company (details around the company, the software used, and the recordings to train said software don't appear to be available). They then placed phone calls to a bank manager with whom the executive had a pre-existing relationship, meaning the bank manager knew the executive's voice. The impersonators also sent forged emails to the bank manager confirming details of the requested transactions. Between the emails and the familiar voice, when the executive asked the manager to authorize transfer of millions of dollars between accounts, the manager saw no problem with going ahead and doing so. The fraud took place in January 2020, but a relevant court document was just filed in the US last week. Officials in the UAE are asking investigators in the US for help tracing $400,000 of the stolen money that went to US bank accounts at Centennial Bank. Our Voices, Our Selves The old-fashioned way (“old” in this context meaning before machine learning was as ubiquitous as it is today) to make a fake human voice was to record a real human voice, split that recording into many distinct syllables of speech, then paste those syllables together in countless permutations to form the words you wanted the voice to say. It was tedious and yielded a voice that didn't sound at all realistic. It's easy to differentiate the voices of people close to us, and to recognize famous voices—but we don't often think through the many components that contribute to making a voice unique. There's the timbre and pitch, which refer to where a voice falls on a span of notes from low to high. There's the cadence, which is the speaker's rhythm and variations in pitch and emphasis on different words or parts of a sentence. There's pronunciation, and quirks like regional accents or lisps. In short, our voices are wholly unique—which makes it all the more creepy that they're becoming easier to synthetically recreate. Fake Voices to Come Is the UAE bank heist a harbinger of crimes to come? Unfortunately, the answer is very likely yes. It's not the first such attempt, but it's the first to succeed at stealing such a large sum of money using a deepfaked voice. In 2019 a group of criminals faked the voice of a UK-based energy firm's CEO to have $243,000 transferred to a Hungarian bank account. 
Many different versions of audio deepfake software are already commercially available, including versions from companies like Lyrebird (which needs just a one-minute recording to create a fake voice, albeit slightly halting and robot-like), Descript, Sonantic, and Veritone, to name just a few. These companies intend their products to be used for good, and some positive use cases certainly do exist; people with speech disabilities or paralysis could use the software to communicate with those around them, for example. Veritone is marketing its ...

    Super-Precise CRISPR Gene Editing Tool Could Tackle Tough Genetic Diseases

    Play Episode Listen Later Oct 19, 2021 8:40


    For all its supposed genetic editing finesse, CRISPR's a brute. The Swiss Army knife of gene editing tools chops up DNA strands to insert genetic changes. What's called “editing” is actually genetic vandalism—pick a malfunctioning gene, chop it up, and wait for the cell to patch and repair the rest. It's a hasty, clunky process, prone to errors and other unintended and unpredictable effects. Back in 2019, researchers led by Dr. David Liu at Harvard decided to rework CRISPR from a butcher to a surgeon, one that lives up to its search-and-replace potential. The result is prime editing, an alternative version of CRISPR with the ability to “make virtually any targeted change in the genome of any living cell or organism.” It's the nip-tuck of DNA editing: with just a small snip on one DNA chain, we have a whole menu of potential genetic changes at our fingertips. Prime editing was hailed as a fantastic “yay, science!” moment that could conceivably repair nearly 90 percent of over 75,000 diseases caused by genetic mutations. But even at its birth, Liu warned that CRISPR prime was only taking its first toddler steps into the big, wild world of changing a life form's base code. “This first study is just the beginning—rather than the end—of a long-standing aspiration in the life sciences to be able to make any DNA change at any position in an organism,” he told Nature at the time. Flash forward two years. Liu's gene editing ingénue took some stumbles. Despite its precise and effective nature, prime editing could only edit genes in certain types of cells, while being less effective and introducing errors in others. It also failed when trying to make large genetic edits, particularly those that require hundreds of DNA letters to be replaced to fix a disease-causing genetic mistake. But the good news? Toddlers grow up. This week, three separate studies advanced prime editing, helping the CRISPR tool grow into a more sophisticated DNA-editing genius. Two teams, based at the University of Massachusetts Medical School and the University of Washington, reworked the tool's molecular makeup to precisely cut out up to 10,000 DNA letters in one go—a challenge for prime editing 1.0. A third study from the tool's original inventor probed its inner molecular workings, identifying protein friends and foes inside the cell that control the tool's genetic editing abilities. By promoting friendly interactions, the team increased prime editing's efficiency in seven different cell types nearly eight-fold. Even better, the “foes” that block prime's editing potential were identified using CRISPR—in other words, we're witnessing a full circle of innovation whereby gene editing tools help build better gene editing tools. A Primer for CRISPR Prime Prime editing burst onto the gene editing scene for its dexterity and precision. If the original CRISPR-Cas9 is a dancer with two left feet, prime editing is a highly-trained ballerina. The two processes start similarly. Both rely on a molecular “zip code” to target the tool to a specific gene. In CRISPR, it's called a guide RNA. For prime editing, it's a slightly modified version dubbed pegRNA. Once the guides tether their respective dance partners to the gene, their routines differ. For CRISPR, the second component, Cas9, acts as a pair of scissors to snip both DNA strands. From here, cells can either throw out parts of a gene, or—when given a template—insert a healthy version of a gene to replace the original one. The cost is molecular surgery. 
Just as an incision might not fully heal, a double-stranded break to the DNA can introduce errors into the genetic code, leading to unexpected effects that vary between cells. Prime editing was the sophisticated upgrade set to fix that. Rather than cutting both DNA strands, it lightly nips one chain. From there, it can delete or insert genetic code based on a template without relying on the cell's DNA repair mechanism. In other words, prime editing opened a new universe o...

    How Nanotechnology Will Help Us Probe the Brain in Unimaginable Detail

    Play Episode Listen Later Oct 18, 2021 4:37


    One of the biggest challenges when it comes to probing and manipulating the brain is the bluntness of the tools we have at our disposal. But breakthroughs in nanotechnology could soon change that, say researchers. Neuroscience has experienced a technological revolution in the last couple decades thanks to rapid improvements in brain-machine interfaces and groundbreaking new methods like functional magnetic resonance imaging, which makes it possible to track neural activity across the whole brain, or optogenetics, which makes it possible to control individual neurons with light. But despite this progress, we are still a long way from being able to record or stimulate large parts of the brain at the single-neuron level. Being able to do so could have profound implications for our understanding of the brain, as well as our ability to augment its function and treat disease. The key to bridging this gap is the emerging field of “NanoNeuro,” say the authors of a new paper in Nature Methods. The unique properties and diminutive size of nanomaterials could make it possible to probe neural circuits in entirely new ways and at previously unimaginable scales, the researchers write. The most obvious application of nanotechnology is in simply reducing the size of the standard neuroscience toolbox. A host of recent designs for nanoprobes and nanoelectrodes, often exploiting the same processes that have powered the miniaturization of computer chips, are making it possible to record from orders of magnitude more neurons. These probes often come with other desirable properties too, such as flexibility, optical functionality, or chemical sensing. Other materials such as quartz, carbon nanotubes, and graphene are also being experimented with, and each has its own unique properties. Perhaps most importantly, these tiny electrodes open the door to probing neural activity at the sub-cellular level. Given the powerful processing that goes on within neurons, this could significantly improve our understanding of critical aspects of brain function. Nanotechnology isn't just about making things smaller, though. Physics operates on very different principles when you get down to the scale of atoms and molecules, which means nanomaterials can have exotic properties that enable entirely new functionality. For example, plasmonic nanoparticles have unique optical properties that can be easily tuned by simply varying their size and shape. These particles could be used to boost the sensitivity of existing optogenetic approaches, say the authors, and using light to excite and heat them up could also make it possible to trigger neurons to fire with very high precision. Even smaller “quantum dots”—nanoparticles that emit light in various colors when energy is applied to them—are a more durable and sensitive alternative to fluorescent dyes currently used for imaging. Their fluorescence is also modulated by electric fields, so they could potentially be used to give an optical readout on the activity of neurons. Another promising class of nanoparticles can absorb multiple low-energy photons and convert them into a high-energy one. Researchers have used these so-called “upconverting nanoparticles” to let mice see in infrared by injecting them into the animals' retinas, where they translate incoming signals into visible light. Potentially the most powerful application, though, could come from magnetic nanoparticles.
The human body is almost entirely unaffected by magnetic fields, which makes it possible to send them deep into biological tissue with little impact. Nanoparticles that can convert magnetic fields into stimuli that trigger neurons could be a powerful tool to modulate brain activity. There's still a long way to go, according to the authors. Effectively delivering nanoparticles to where we want them is challenging, as is producing large numbers of them without too much variability. And while early studies suggest many nanomaterials are biocompatible, proving they...

    Seismic 'Telescope' Reveals a Titanic, Tree-Like Plume Feeding Earth's Volcanoes

    Play Episode Listen Later Oct 17, 2021 6:00


    Some 75% of the world's volcanoes live along the aptly named Ring of Fire. This makes sense. Hugging a boundary between tectonic plates, the Ring of Fire is an open seam on the planet's interior. But then there's Hawaii, a chain of volcanic islands smack in the middle of the Pacific plate, far from any boundaries. What feeds its fire? Scientists have long theorized that columns of superheated rock—piping hot plumes pushing through the mantle to the crust above—explain the Hawaiian islands and other areas like them. Where these columns touch the surface, volcanic hotspots form and the ground erupts. Over millions of years, inch by inch, the Earth's tectonic plates drag new ground over hotspots and form long volcanic chains. The theory is old, but actually observing the mantle plumes feeding these hotspots in any detail is fairly new. “Theoretically, we know [plumes] have to exist,” Harriet Lau, a University of California, Berkeley geophysicist told Quanta Magazine. “But they're just so hard to see seismically.” Now, however, in a particularly striking example, a team of scientists has completed a map of the underworld nearly a decade in the making. The result, beautifully visualized in a feature for Quanta, is one of the most detailed snapshots yet—and it's surprisingly complicated. Instead of a simple vertical column rising through the mantle, the structure is tree-like, with roots near the core, a trunk mid-mantle, and finer branching structures sprouting near the surface. The plume is feeding one of the world's most active volcanoes, Piton de la Fournaise, on the French island of Réunion in the Indian Ocean. But it's also driving an intensely volcanic region in East Africa, some 3,000 kilometers away. Traveling back in time to when dinosaurs still ruled the planet, it ignited an area known as the Deccan Traps. Now in modern-day India, the Deccan Traps spilled enough lava to bury California, Montana, and Texas. Seeing Through the Ground Beneath Our Feet The Hubble Space Telescope is surely a wonder of the world. Imaging galaxies billions of light-years away is impressive—but how exactly does one see through thousands of kilometers of rock? In a sense, geophysicists build ‘telescopes' too. But instead of sensing light, these systems collect and analyze the planet's vibrations. “People have had a longer history and an easier time actually looking up at the stars,” University of Cambridge seismologist Sanne Cottaar told Quanta last year. “Looking down has actually been quite challenging.” To create this particular model of the underworld, the team drew on data from one of the largest such ‘telescopes' to date. In 2012, ships dropped 57 seismometers into the ocean around Réunion. The entire array, which included 37 land-based sensors too, spanned some 2,000 kilometers. Over the next 13 months, the sensors recorded subtle vibrations from seismic activity occurring on the opposite side of the world. As earthquakes rattle the surface, they also ring the planet's insides like a bell. By correlating a seismic event on one side of the world with the shiver it produces on the other, scientists infer what happened in between. Seismic vibrations tend to move more slowly through hotter areas than cooler areas, for example, so a mantle plume would slow their progress. With enough sensors and seismic events, researchers can construct a model. The model, in this case, was surprising. Scientists agree the mantle plumes underlying hotspots are so buoyant and quick-moving they should rise straight up.
The diagonally branching paths in the data were unexpected. The team proposes they occur when temperature differences between hotter and cooler material make some areas of the plume more buoyant, pinching off blobs from the top of the trunk (or cusp) over time, one after another. These blobs do rise vertically but appear to form diagonal branches because older blobs have risen higher than younger ones. Nearer the surface, where the upper mantle...
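To make the travel-time idea above concrete, here is a toy inversion, nothing like the scale or physics of the real Réunion analysis: each seismic ray crosses a few mantle "cells," its travel time depends on how slow (hot) each cell is, and a least-squares fit recovers the slowness field.

```python
# Toy travel-time tomography (nothing like the real Réunion analysis): each
# ray crosses some mantle "cells"; observed travel time = sum of path length
# times slowness in each cell. Least squares recovers the slowness field,
# and slower-than-average cells are candidate hot, plume-like regions.
import numpy as np

true_slowness = np.array([1.0, 1.0, 1.3, 1.0])   # cell 2 is "hot" (slow)

# Rows = rays, columns = path length of that ray inside each cell (made up).
paths = np.array([
    [100.0,  80.0,   0.0,  50.0],
    [  0.0, 120.0,  90.0,  40.0],
    [ 60.0,   0.0, 110.0,  70.0],
    [ 90.0,  60.0,  50.0,   0.0],
    [ 30.0,  40.0, 100.0,  80.0],
])

travel_times = paths @ true_slowness             # "observed" arrival delays

estimated, *_ = np.linalg.lstsq(paths, travel_times, rcond=None)
print("estimated slowness per cell:", np.round(estimated, 3))
print("likely hot cells:", np.where(estimated > estimated.mean() + 0.1)[0])
```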

    The World's Electronic Waste This Year Will Weigh More Than the Great Wall of China

    Play Episode Listen Later Oct 15, 2021 6:00


    It's widely known that the world has a plastics problem. From landfills to the ocean, the stuff is everywhere, and our conscientious efforts to recycle don't do nearly as much good as we think. What's less widely known is that we have a similar problem with another kind of waste: electronics. A report published this week by the WEEE Forum revealed that the total waste electronic and electrical equipment from 2021 will weigh an estimated 57.4 million tons. That's heavier than China's Great Wall, which is the heaviest man-made object on Earth. Not surprisingly, the amount of e-waste generated each year is steadily increasing. For one, as the global middle class grows, more people can afford to buy electronics (and to buy new ones when their old ones break, rather than getting the old ones repaired). Also, the prices of many electronic items tend to trend downwards as their manufacture is scaled up, their technology improves, supply chains are streamlined, etc. (given the global chip shortage, the next couple years may be an exception to this trend). E-waste appears to be growing by three to four percent per year. In 2019 the total reached 53.6 million tons; that was 21 percent higher than 2014's total. If we stay on this trajectory, annual global e-waste will reach 74 million tons by 2030. Product manufacturers aren't helping the situation; building products with shorter life cycles, making repairs too expensive or difficult to undertake, and continually releasing new iterations mean people are likely to either cast aside their perfectly-good iPhones/tablets/laptops for newer models, or decide that repairing a non-working device isn't worth the trouble and opt for buying a brand-new one. Do you have at least one working (or partially-working) cell phone or laptop sitting in a drawer somewhere, untouched for months or years? Yeah, me too. “When you buy an expensive product, whether it's a half-a-million-dollar tractor or a thousand-dollar phone, you are in a very real sense under the power of the manufacturer,” said Tim Wu, special assistant to the president for technology and competition policy within the National Economic Council. “And when they have repair specifications that are unreasonable, there's not a lot you can do.” The Right to Repair movement thinks otherwise—or is trying to get consumers and manufacturers to think otherwise. The movement is trying to make it easier for people to repair the devices they already own rather than having to buy new ones. Europe is several steps ahead of the US in this arena. In March of this year the EU implemented a law requiring appliances to be repairable for at least 10 years; new devices have to come with repair manuals and be compatible with conventional tools when their life cycle ends (so that people are more likely to break them down and recycle them). In Sweden, people even get tax breaks for appliance repairs done by technicians in their homes. Though there are no similar laws in place in the US yet, the Federal Trade Commission has been investigating repair restrictions as they relate to antitrust laws and consumer protection. Unsurprisingly, electronics manufacturers are largely against right to repair, claiming consumer safety could be jeopardized.
But an FTC report from May of this year found there was limited evidence to support manufacturers' justifications for restricting repairs, and that people's device batteries aren't actually that likely to burst into flames, nor their personal data likely to be compromised by repairing their devices. According to the WEEE Forum report, around 416,000 phones per day are thrown out in the US. That's 151 million a year, and guess where they end up? Here's a hint: 40 percent of heavy metals in landfills come from discarded electronics. Those metals could be recycled for use in new products, but there's no system or incentive in place to facilitate this. While small electronics like phones and laptops may have the fastest turnover, they're n...
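As a rough sanity check of the growth figures cited above (assuming steady compound growth, which is my simplification, not the report's methodology):

```python
# Rough sanity check of the e-waste figures above, assuming steady compound
# growth (my simplification, not the WEEE Forum's methodology).
total_2019 = 53.6  # million tons

# 21 percent growth from 2014 to 2019 implies roughly 3.9% per year.
print(f"implied 2014-2019 annual rate: {1.21 ** (1 / 5) - 1:.1%}")

for rate in (0.03, 0.04):
    total_2030 = total_2019 * (1 + rate) ** (2030 - 2019)
    print(f"{rate:.0%} growth -> about {total_2030:.0f} million tons in 2030")
# 3% per year lands almost exactly on the 74-million-ton projection for 2030;
# 4% per year would overshoot it.
```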

    Scientists Find the First Known Planet to Have Survived the Death of Its Star

    Play Episode Listen Later Oct 14, 2021 6:18


    How will the solar system die? It's a hugely important question that researchers have speculated a lot about, using our knowledge of physics to create complex theoretical models. We know that the sun will eventually become a “white dwarf,” a burnt stellar remnant whose dim light gradually fades into darkness. This transformation will involve a violent process that will destroy an unknown number of its planets. So which planets will survive the death of the sun? One way to seek the answer is to look at the fates of other similar planetary systems. This has proven difficult, however. The feeble radiation from white dwarfs makes it difficult to spot exoplanets (planets around stars other than our sun) which have survived this stellar transformation; they are literally in the dark. In fact, of the over 4,500 exoplanets that are currently known, just a handful have been found around white dwarfs, and the location of these planets suggests they arrived there after the death of the star. This lack of data paints an incomplete picture of our own planetary fate. Fortunately, we are now filling in the gaps. In our new paper, published in Nature, we report the discovery of the first known exoplanet to survive the death of its star without having its orbit altered by other planets moving around, circling at a distance comparable to those between the sun and the solar system planets. A Jupiter-Like Planet This new exoplanet, which we discovered with the Keck Observatory in Hawaii, is particularly similar to Jupiter in both mass and orbital separation, and provides us with a crucial snapshot into planetary survivors around dying stars. A star's transformation into a white dwarf involves a violent phase in which it becomes a bloated “red giant,” also known as a “giant branch” star, hundreds of times bigger than before. We believe that this exoplanet only just survived; if it had initially been closer to its parent star, it would have been engulfed by the star's expansion. When the sun eventually becomes a red giant, its radius will actually reach outwards to Earth's current orbit. That means the sun will (probably) engulf Mercury and Venus, and possibly the Earth, but we are not sure. Jupiter and its moons have been expected to survive, although we previously didn't know for sure. But with our discovery of this new exoplanet, we can now be more certain that Jupiter really will make it. Moreover, the margin of error in the position of this exoplanet could mean that it is almost half as close to the white dwarf as Jupiter currently is to the sun. If so, that is additional evidence for assuming that Jupiter and Mars will make it. So could any life survive this transformation? A white dwarf could power life on moons or planets that end up being very close to it (about one-tenth the distance between the sun and Mercury) for the first few billion years. After that, there wouldn't be enough radiation to sustain anything. Asteroids and White Dwarfs Although planets orbiting white dwarfs have been difficult to find, what has been much easier to detect are asteroids breaking up close to the white dwarf's surface. For exoasteroids to get so close to a white dwarf, they need to have enough momentum imparted to them by surviving exoplanets. Hence, exoasteroids have long been assumed to be evidence that exoplanets are there too. Our discovery finally provides confirmation of this.
Although current technology does not allow us to see any exoasteroids in the system discussed in the paper, at least now we can piece together different parts of the puzzle of planetary fate by merging the evidence from different white dwarf systems. The link between exoasteroids and exoplanets also applies to our own solar system. Individual objects in the asteroid main belt and Kuiper belt (a disc in the outer solar system) are likely to survive the sun's demise, but some will be nudged by the gravity of one of the surviving planets towards the white dwarf's surface. Future Dis...

    Microsoft's Massive New Language AI Is Triple the Size of OpenAI's GPT-3

    Play Episode Listen Later Oct 13, 2021 4:00


    Just under a year and a half ago OpenAI announced completion of GPT-3, its natural language processing algorithm that was, at the time, the largest and most complex model of its type. This week, Microsoft and Nvidia introduced a new model they're calling “the world's largest and most powerful generative language model.” The Megatron-Turing Natural Language Generation model (MT-NLG) is more than triple the size of GPT-3 at 530 billion parameters. GPT-3's 175 billion parameters were already a lot; its predecessor, GPT-2, had a mere 1.5 billion parameters, and Microsoft's Turing Natural Language Generation model, released in February 2020, had 17 billion. A parameter is an attribute a machine learning model defines based on its training data, and tuning more of them requires upping the amount of data the model is trained on. It's essentially learning to predict how likely it is that a given word will be preceded or followed by another word, and how much that likelihood changes based on other words in the sentence. As you can imagine, getting to 530 billion parameters required quite a lot of input data and just as much computing power. The algorithm was trained using an Nvidia supercomputer made up of 560 servers, each holding eight 80-gigabyte GPUs. That's 4,480 GPUs total, and an estimated cost of over $85 million. For training data, Megatron-Turing's creators used The Pile, a dataset put together by open-source language model research group Eleuther AI. Composed of everything from PubMed to Wikipedia to GitHub, the dataset totals 825GB, broken down into 22 smaller datasets. Microsoft and Nvidia curated the dataset, selecting subsets they found to be “of the highest relative quality.” They added data from Common Crawl, a non-profit that scans the open web every month and downloads content from billions of HTML pages, then makes it available in a special format for large-scale data mining. GPT-3 was also trained using Common Crawl data. Microsoft's blog post on Megatron-Turing says the algorithm is skilled at tasks like completion prediction, reading comprehension, commonsense reasoning, natural language inferences, and word sense disambiguation. But stay tuned—there will likely be more skills added to that list once the model starts being widely utilized. GPT-3 turned out to have capabilities beyond what its creators anticipated, like writing code, doing math, translating between languages, and autocompleting images (oh, and writing a short film with a twist ending). This led some to speculate that GPT-3 might be the gateway to artificial general intelligence. But the algorithm's variety of talents, while unexpected, still fell within the language domain (including programming languages), so that's a bit of a stretch. However, given the tricks GPT-3 had up its sleeve based on its 175 billion parameters, it's intriguing to wonder what the Megatron-Turing model may surprise us with at 530 billion. The algorithm likely won't be commercially available for some time, so it'll be a while before we find out. The new model's creators, though, are highly optimistic. “We look forward to how MT-NLG will shape tomorrow's products and motivate the community to push the boundaries of natural language processing even further,” they wrote in the blog post. “The journey is long and far from complete, but we are excited by what is possible and what lies ahead.” Image Credit: Kranich17 from Pixabay
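To make the "predict the next word" idea above a bit more concrete, here is a drastically simplified, count-based stand-in. MT-NLG's 530 billion parameters are learned neural-network weights, not frequency counts like these, and the tiny corpus below is invented, but the underlying prediction task is the same.

```python
# A count-based toy illustrating next-word prediction: estimate how likely one
# word is to follow another. Real language models like MT-NLG learn billions of
# weights instead of counting, but they are trained on the same kind of task.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word after that".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))   # e.g. {'model': 0.33, 'next': 0.67} on this toy corpus
```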

    AI-Powered Brain Implant Eases Severe Depression With a Zap of Electricity

    Play Episode Listen Later Oct 12, 2021 8:14


    Sarah hadn't laughed in five years. At 36 years old, the avid home cook has struggled with depression since early childhood. She tried the whole range of antidepressant medications and therapy for decades. Nothing worked. One night, five years ago, driving home from work, she had one thought in her mind: this is it. I'm done. Luckily she made it home safe. And soon she was offered an intriguing new possibility to tackle her symptoms—a little chip, implanted into her brain, that captures the unique neural signals encoding her depression. Once the implant detects those signals, it zaps them away with a brief electrical jolt, like adding noise to an enemy's digital transmissions to scramble their original message. When that message triggers depression, hijacking neural communications is exactly what we want to do. Flash forward several years, and Sarah has her depression under control for the first time in her life. Her suicidal thoughts evaporated. After quitting her tech job due to her condition, she's now back on her feet, enrolled in data analytics classes and taking care of her elderly mother. “For the first time,” she said, “I'm finally laughing.” Sarah's recovery is just one case. But it signifies a new era for the technology underlying her stunning improvement. It's one of the first cases in which a personalized “brain pacemaker” can stealthily tap into, decipher, and alter a person's mood and introspection based on their own unique electrical brain signatures. And while those implants have achieved stunning medical miracles in other areas—such as allowing people with paralysis to walk again—Sarah's recovery is some of the strongest evidence yet that a computer chip, in a brain, powered by AI, can fundamentally alter our perception of life. It's the closest to reading and repairing a troubled mind that we've ever gotten. “We haven't been able to do this kind of personalized therapy previously in psychiatry,” said study lead Dr. Katherine Scangos at UCSF. “This success in itself is an incredible advancement in our knowledge of the brain function that underlies mental illness.” Brain Pacemaker The key to Sarah's recovery is a brain-machine interface. Roughly the size of a matchbox, the implant sits inside the brain, silently listening to and decoding its electrical signals. Using those signals, it's possible to control other parts of the brain or body. Brain implants have given people with lower body paralysis the ability to walk again. They've allowed amputees to control robotic hands with just a thought. They've opened up a world of sensations, integrating feedback from cyborg-like artificial limbs that transmit signals directly into the brain. But Sarah's implant is different. Sensation and movement are generally controlled by relatively well-defined circuits in the outermost layer of the brain: the cortex. Emotion and mood are also products of our brain's electrical signals, but they tend to stem from deeper neural networks hidden at the center of the brain. One way to tap into those circuits is called deep brain stimulation (DBS), a method pioneered in the '80s that's been used to treat severe Parkinson's disease and epilepsy, particularly for cases that don't usually respond to medication. Sarah's neural implant takes this route: it listens in on the chatter between neurons deep within the brain to decode mood. But where is mood in the brain? 
One particular problem, the authors explained, is that unlike movement, there is no “depression brain region.” Rather, emotions are regulated by intricate, intertwining networks across multiple brain regions. Adding to that complexity is the fact that we're all neural snowflakes—each of us has uniquely personalized brain network connections. In other words, zapping my circuit to reduce depression might not work for you. DBS, for example, has previously been studied for treating depression. But despite decades of research, it's not federally approved due to inconsistent result...
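For readers curious what a closed-loop "detect, then stimulate" cycle looks like in the abstract, here is a minimal sketch. The signal, the band-power biomarker, and the threshold are all invented for illustration; the UCSF device decodes a patient-specific neural signature that this toy does not attempt to model.

```python
# Minimal sketch of a closed-loop "detect then stimulate" cycle, the general
# idea behind the implant described above. The signal, biomarker (band power),
# and threshold are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)                # one-second window of "recording"
signal = rng.normal(size=t.size) + 0.8 * np.sin(2 * np.pi * 8 * t)

def band_power(x, fs, low, high):
    """Average spectral power of x between low and high Hz."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= low) & (freqs <= high)
    return power[mask].mean()

THRESHOLD = 5.0                            # hypothetical trigger level

biomarker = band_power(signal, fs, 4, 12)  # hypothetical mood-related band
if biomarker > THRESHOLD:
    print(f"Biomarker {biomarker:.1f} above threshold: deliver brief stimulation")
else:
    print(f"Biomarker {biomarker:.1f} below threshold: keep listening")
```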

    Intel's Brain-Inspired Loihi 2 Chip Can Hold a Million Artificial Neurons

    Play Episode Listen Later Oct 11, 2021 4:04


    Computer chips that recreate the brain's structure in silicon are a promising avenue for powering the smart robots of the future. Now Intel has released an updated version of its Loihi neuromorphic chip, which it hopes will bring that dream closer. Despite frequent comparisons, the neural networks that power today's leading AI systems operate very differently than the brain. While the “neurons” used in deep learning shuttle numbers back and forth between one another, biological neurons communicate in spikes of electrical activity whose meaning is tied up in their timing. That is a very different language from the one spoken by modern processors, and it's been hard to efficiently implement these kinds of spiking neurons on conventional chips. To get around this roadblock, so-called “neuromorphic” engineers build chips that mimic the architecture of biological neural networks to make running these spiking networks easier. The field has been around for a while, but in recent years it's piqued the interest of major technology companies like Intel, IBM, and Samsung. Spiking neural networks (SNNs) are considerably less developed than the deep learning algorithms that dominate modern AI research. But they have the potential to be far faster and more energy-efficient, which makes them promising for running AI on power-constrained edge devices like smartphones or robots. Intel entered the fray in 2017 with its Loihi neuromorphic chip, which could emulate 125,000 spiking neurons. But now the company has released a major update that can implement one million neurons and is ten times faster than its predecessor. “Our second-generation chip greatly improves the speed, programmability, and capacity of neuromorphic processing, broadening its usages in power and latency constrained intelligent computing applications,” Mike Davies, director of Intel's Neuromorphic Computing Lab, said in a statement. Loihi 2 doesn't only significantly boost the number of neurons, it greatly expands their functionality. As outlined by IEEE Spectrum, the new chip is much more programmable, allowing it to implement a wide range of SNNs rather than the single type of model the previous chip was capable of. It's also capable of supporting a wider variety of learning rules that should, among other things, make it more compatible with the kind of backpropagation-based training approaches used in deep learning. Faster circuits also mean the chip can now run at 5,000 times the speed of biological neurons, and improved chip interfaces make it easier to get several of them working in concert. Perhaps the most significant changes, though, are to the neurons themselves. Each neuron can run its own program, making it possible to implement a variety of different kinds of neurons. And the chip's designers have taken it upon themselves to improve on Mother Nature's designs by allowing the neurons to communicate using both spike timing and strength. The company doesn't appear to have any plans to commercialize the chips, though, and for the time being they will only be available over the cloud to members of the Intel Neuromorphic Research Community. But the company does seem intent on building up the neuromorphic ecosystem. Alongside the new chip, it has also released a new open-source software framework called LAVA to help researchers build “neuro-inspired” applications that can run on any kind of neuromorphic hardware or even conventional processors. 
“LAVA is meant to help get neuromorphic [programming] to spread to the wider computer science community,” Davies told Ars Technica. That will be a crucial step if the company ever wants its neuromorphic chips to be anything more than a novelty for researchers. But given the broad range of applications for the kind of fast, low-power intelligence they could one day provide, it seems like a sound investment. Image Credit: Intel
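As a concrete illustration of the spiking neurons described above, here is a minimal leaky integrate-and-fire model, the textbook unit that spiking neural networks are built from. The parameter values are arbitrary, and this is not Loihi 2's actual neuron model, which is far more configurable.

```python
# A minimal leaky integrate-and-fire neuron: it accumulates input current,
# leaks charge over time, and emits a discrete spike when its membrane
# potential crosses a threshold. Values are illustrative, not Loihi 2's model.
import numpy as np

dt, steps = 1.0, 100                          # ms per step, number of steps
tau, v_thresh, v_reset = 20.0, 1.0, 0.0       # leak time constant, threshold, reset
rng = np.random.default_rng(1)
inputs = rng.uniform(0.0, 0.12, size=steps)   # random input current per step

v, spikes = 0.0, []
for step in range(steps):
    v += dt * (-v / tau + inputs[step])       # leak plus input
    if v >= v_thresh:                         # threshold crossed: emit a spike
        spikes.append(step)
        v = v_reset                           # reset after spiking
print("spike times (ms):", spikes)
```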

    This Asteroid May Be the Shard of a Dead Protoplanet—and Have More Metal Than All the Reserves on Earth

    Play Episode Listen Later Oct 10, 2021 7:30


    It's often said Earth's resources are finite. This is true enough. But shift your gaze skyward for a moment. Up there, amid the stars, lurks an invisible bonanza of epic proportions. Many of the materials upon which modern civilization is built exist in far greater amounts throughout the rest of the solar system. Earth, after all, was formed from the same cosmic cloud as all the other planets, comets, and asteroids—and it hardly cornered the market when it comes to the valuable materials we use to make smartphone batteries or raise skyscrapers. A recent study puts it in perspective. Lead author Juan Sanchez and a team of scientists analyzed the spectrum of asteroid 1986 DA, a member of a rare class of metal-rich, near-Earth asteroids. They found the surface of this particular space rock to be 85% metallic, likely including iron, nickel, cobalt, copper, gold, and platinum group metals prized for industrial uses, from cars to electronics. With the exception of gold and copper, they estimate the mass of these metals would exceed their global reserves on Earth—in some cases by an order of magnitude (or more). The team also put a dollar figure on the asteroid's economic value. If mined and marketed over a period of 50 years, 1986 DA's precious metals would bring in some $233 billion a year for a total haul of $11.65 trillion. (That takes into account the deflationary effect the flood of new supply would have on the market.) It probably wouldn't make sense to bring home metals like iron, nickel, and cobalt, which are common on Earth, but they could be used to build infrastructure in orbit and on the moon and Mars. In short, mining one nearby asteroid could yield a precious metals jackpot. And there are greater prizes lurking further afield in the asteroid belt. Of course, asteroid mining is hardly a new idea. The challenging (and expensive) parts are traveling to said asteroids, stripping them of their precious ore, and shipping it out. But before we even get to the hard parts, we need to prospect the claim. This study, combined with future NASA missions to the asteroid belt, should help bring the true extent of space resources into sharper focus. The Priceless Cores of Dead Protoplanets What makes 1986 DA particularly interesting is its proximity to Earth. Most metal-rich asteroids live way out in the asteroid belt, between Mars and Jupiter. Famous among these is 16 Psyche, a hulking, 140-mile-wide asteroid first discovered in 1852. The asteroid belt was once thought to be the remnants of a planet, but its origins are less certain now. Still, scientists speculate Psyche may be the exposed core of a shattered planet-in-the-making. And indeed, smaller metal-rich asteroids may also be the shards of a protoplanetary core. Under this theory, developing planets in the asteroid belt grew large enough to differentiate rocky mantles and metal cores. These later suffered a series of collisions, leaving their shattered rocky remains and broken metal hearts to wander the belt. We may never observe Earth's core in person, so, if the theory is true, Psyche could be our next best alternative. Also, the existence of so much exposed metal in one place is tantalizing for those who would extend humanity's presence beyond Earth. In either case, we have yet only managed to assemble a basic portrait of Psyche. It's simply too far away to study in any great detail. Which is where 1986 DA and 2016 ED85 (another asteroid in the study) come in. 
Keeping Up With the Joneses Both 1986 DA and 2016 ED85 are classified as near-Earth asteroids. That is, they live in our neighborhood. At some point in the past, gravitational interactions with Jupiter nudged them out of the asteroid belt and into near-Earth orbits. So, a key motivation of the study was to trace the asteroids' lineage. Because they're closer, we can observe them in more detail and infer the characteristics of their distant family members, including Psyche. According to the study, spectral analysis...

    This Bipedal Drone Robot Can Walk, Fly, Skateboard, and Slackline

    Play Episode Listen Later Oct 8, 2021 4:09


    Most animals are limited to either walking, flying, or swimming, with a handful of lucky species whose physiology allows them to cross over. A new robot took inspiration from them, and can fly like a bird just as well as it can walk like a (weirdly awkward, metallic, tiny) person. It also happens to be able to skateboard and slackline, two skills most humans will never pick up. Described in a paper published this week in Science Robotics, the robot is named Leo, which is short for Leonardo, which is short for LEgs ONboARD drOne. The name makes it sound like a drone with legs, but it has a somewhat humanoid shape, with multi-joint legs, propeller thrusters that look like arms, a “body” that contains its motors and electronics, and a dome-shaped protection helmet. Leo was built by a team at Caltech, and they were particularly interested in how the robot would transition between walking and flying. The team notes that they studied the way birds use their legs to generate thrust when they take off, and applied similar principles to the robot. In a video that shows Leo approaching a staircase, taking off, and gliding over the stairs to land near the bottom, the robot's motions are seamlessly graceful. “There is a similarity between how a human wearing a jet suit controls their legs and feet when landing or taking off and how LEO uses synchronized control of distributed propeller-based thrusters and leg joints,” said Soon-Jo Chung, one of the paper's authors and a professor at Caltech. “We wanted to study the interface of walking and flying from the dynamics and control standpoint.” Leo walks at a speed of 20 centimeters (7.87 inches) per second, but can move faster by mixing in some flying with the walking. How wide our steps are, where we place our feet, and where our torsos are in relation to our legs all help us balance when we walk. The robot uses its propellers to help it balance, while its leg actuators move it forward. To teach the robot to slackline—which is much harder than walking on a balance beam—the team overrode its foot contact sensors with a fixed virtual foot contact centered just underneath it, because the sensors weren't able to detect the line. The propellers played a big part as well, helping keep Leo upright and balanced. For the robot to ride a skateboard, the team broke the process down into two distinct components: controlling the steering angle and controlling the skateboard's acceleration and deceleration. Placing Leo's legs in specific spots on the board made it tilt to enable steering, and forward acceleration was achieved by moving the bot's center of mass backward while pitching the body forward at the same time. So besides being cool (and a little creepy), what's the goal of developing a robot like Leo? The paper authors see robots like Leo enabling a range of robotic missions that couldn't be carried out by ground or aerial robots. “Perhaps the most well-suited applications for Leo would be the ones that involve physical interactions with structures at a high altitude, which are usually dangerous for human workers and call for a substitution by robotic workers,” the paper's authors said. Examples could include high-voltage line inspection, painting tall bridges or other high-up surfaces, inspecting building roofs or oil refinery pipes, or landing sensitive equipment on an extraterrestrial object.
Next up for Leo is an upgrade to its performance via a more rigid leg design, which will help support the robot's weight and increase the thrust force of its propellers. The team also wants to make Leo more autonomous, and plans to add a drone landing control algorithm to its software, ultimately aiming for the robot to be able to decide where and when to walk versus fly. Leo hasn't quite achieved the wow factor of Boston Dynamics' dancing robots (or its Atlas that can do parkour), but it's on its way. Image Credit: Caltech Center for Autonomous Systems and Technologies/Science Robotics
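The two-part skateboard decomposition described above (steering via board tilt from foot placement, speed via shifting the center of mass) can be sketched as a toy controller. The gains, state variables, and structure below are invented for illustration and are not Caltech's actual control stack.

```python
# Toy illustration of splitting skateboard riding into two commands: a board
# tilt that sets the steering angle, and a fore/aft center-of-mass shift that
# sets acceleration. Gains and variables are invented; not Caltech's controller.
from dataclasses import dataclass

@dataclass
class SkateCommand:
    board_tilt: float       # radians of board tilt, which sets the steering angle
    com_shift: float        # meters to shift the center of mass fore/aft

def skateboard_controller(heading_error, speed_error, k_steer=0.8, k_speed=0.5):
    """Map heading and speed errors to the two low-level commands."""
    tilt = k_steer * heading_error    # steer: tilt the board toward the desired heading
    shift = k_speed * speed_error     # accelerate: shift the center of mass
    return SkateCommand(board_tilt=tilt, com_shift=shift)

print(skateboard_controller(heading_error=0.1, speed_error=0.3))
```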

    How Musicologists and Scientists Used AI to Complete Beethoven's Unfinished 10th Symphony

    Play Episode Listen Later Oct 7, 2021 10:02


    When Ludwig van Beethoven died in 1827, he was three years removed from the completion of his Ninth Symphony, a work heralded by many as his magnum opus. He had started work on his 10th Symphony but, due to deteriorating health, wasn't able to make much headway: All he left behind were some musical sketches. Ever since then, Beethoven fans and musicologists have puzzled and lamented over what could have been. His notes teased at some magnificent reward, albeit one that seemed forever out of reach. Now, thanks to the work of a team of music historians, musicologists, composers and computer scientists, Beethoven's vision will come to life. I presided over the artificial intelligence side of the project, leading a group of scientists at the creative AI startup Playform AI that taught a machine both Beethoven's entire body of work and his creative process. A full recording of Beethoven's 10th Symphony is set to be released on Oct. 9, 2021, the same day as the world premiere performance scheduled to take place in Bonn, Germany—the culmination of a two-year-plus effort. Past Attempts Hit a Wall Around 1817, the Royal Philharmonic Society in London commissioned Beethoven to write his ninth and 10th symphonies. Written for an orchestra, symphonies often contain four movements: the first is performed at a fast tempo, the second at a slower one, the third at a medium or fast tempo, and the last at a fast tempo. Beethoven completed his Ninth Symphony in 1824, which concludes with the timeless “Ode to Joy.” But when it came to the 10th Symphony, Beethoven didn't leave much behind, other than some musical notes and a handful of ideas he had jotted down. There have been some past attempts to reconstruct parts of Beethoven's 10th Symphony. Most famously, in 1988, musicologist Barry Cooper ventured to complete the first and second movements. He wove together 250 bars of music from the sketches to create what was, in his view, a production of the first movement that was faithful to Beethoven's vision. Yet the sparseness of Beethoven's sketches made it impossible for symphony experts to go beyond that first movement. Assembling the Team In early 2019, Dr. Matthias Röder, the director of the Karajan Institute, an organization in Salzburg, Austria, that promotes music technology, contacted me. He explained that he was putting together a team to complete Beethoven's 10th Symphony in celebration of the composer's 250th birthday. Aware of my work on AI-generated art, he wanted to know if AI would be able to help fill in the blanks left by Beethoven. The challenge seemed daunting. To pull it off, AI would need to do something it had never done before. But I said I would give it a shot. Röder then compiled a team that included Austrian composer Walter Werzowa. Famous for writing Intel's signature bong jingle, Werzowa was tasked with putting together a new kind of composition that would integrate what Beethoven left behind with what the AI would generate. Mark Gotham, a computational music expert, led the effort to transcribe Beethoven's sketches and process his entire body of work so the AI could be properly trained. The team also included Robert Levin, a musicologist at Harvard University who also happens to be an incredible pianist. Levin had previously finished a number of incomplete 18th-century works by Mozart and Johann Sebastian Bach. The Project Takes Shape In June 2019, the group gathered for a two-day workshop at Harvard's music library. 
In a large room with a piano, a blackboard and a stack of Beethoven's sketchbooks spanning most of his known works, we talked about how fragments could be turned into a complete piece of music and how AI could help solve this puzzle, while still remaining faithful to Beethoven's process and vision. The music experts in the room were eager to learn more about the sort of music AI had created in the past. I told them how AI had successfully generated music in the style of Bach. However, this was only a harm...

    NASA's Mission to Crash a Spacecraft Into an Asteroid Launches Next Month

    Play Episode Listen Later Oct 6, 2021 3:26


    In March of this year, a quarter-mile-wide asteroid flew through space at a speed of 77,000 miles per hour. It was five times farther from Earth than the moon, but that's actually considered pretty close when the context is the whole Milky Way galaxy. There's not a huge risk of an asteroid hitting Earth anytime in the foreseeable future. But NASA wants to be ready, just in case. In April the space agency led a simulated asteroid impact scenario, testing how well federal agencies, international space agencies, and other decision-makers, scientific institutions, and emergency managers could work together to avert catastrophe. Now another asteroid-deflecting initiative is underway, but this time, it's getting much more real. There's still no danger of anything colliding with Earth or threatening human lives. But NASA's DART mission plans to purposely crash a spacecraft into an asteroid to try to alter its path. DART stands for Double Asteroid Redirection Test, and NASA has just set its launch date for November 23 at 10:20 pm Pacific time. The spacecraft will launch on a SpaceX Falcon 9 rocket from Vandenberg Air Force Base, located near the California coast about 160 miles north of Los Angeles. From there, it will travel to an asteroid called Didymos, taking about a year to arrive (it's seven million miles away) and using roll-out solar arrays to power its electric propulsion system. Didymos is 2,560 feet wide and completes a rotation every 2.26 hours. It has a secondary body, or moonlet, named Dimorphos that's 525 feet wide. The two bodies are just over half a mile apart, and the moonlet revolves about the primary once every 11.9 hours. Using an onboard camera and autonomous navigation software, the spacecraft will crash itself into the moonlet at a speed of almost 15,000 miles per hour. NASA estimates that the collision will change Dimorphos' speed in its orbit around Didymos by just a fraction of one percent, but that's enough to alter Dimorphos' orbital period by several minutes, enough to be observed and measured from telescopes on Earth. NASA plans to capture the whole thing on video. Ten days before DART's asteroid impact, the agency will launch a miniaturized satellite, called LICIACube, equipped with two optical cameras. The goal will be for the cubesat to fly past Dimorphos around three minutes after DART hits its moonlet, allowing the cameras to capture images of the impact's effects. And that's not all the observation the mission will get. The European Space Agency plans to launch its Hera spacecraft (named for the Greek goddess of marriage!) in 2024 to see the effects of DART up close and in detail. As the agency notes in its description of the Hera mission, “By the time Hera reaches Didymos, in 2026, Dimorphos will have achieved historic significance: the first object in the solar system to have its orbit shifted by human effort in a measurable way.” You can watch NASA's coverage of the DART launch on NASA TV via the agency's app and website. Image Credit: NASA
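A quick back-of-envelope calculation shows why a fraction-of-a-percent change in speed translates into a period shift of several minutes. It assumes a roughly circular orbit and a small tangential velocity change, for which the fractional period change is about three times the fractional speed change; the 0.2 percent figure below is illustrative, not a mission prediction.

```python
# Back-of-envelope check on the numbers above, assuming Dimorphos is on a
# roughly circular orbit and the impact changes its speed tangentially:
# for a small tangential speed change dv, dT/T ~ 3 * dv/v.
period_hours = 11.9
fractional_speed_change = 0.002          # illustrative 0.2 percent

fractional_period_change = 3 * fractional_speed_change
delta_minutes = fractional_period_change * period_hours * 60
print(f"Period shift: ~{delta_minutes:.1f} minutes")   # a few minutes, as stated
```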

    Moonshot Project Aims to Understand and Beat Cancer Using Protein Maps

    Play Episode Listen Later Oct 5, 2021 8:55


    Understanding cancer is like assembling IKEA furniture. Hear me out. Both start with individual pieces that make up the final product. For a cabinet, it's a list of labeled precut plywood. For cancer, it's a ledger of genes that—through the Human Genome Project and subsequent studies—we know are somehow involved in cells mutating, spreading, and eventually killing their host. Yet without instructions, pieces of wood can't be assembled into a cabinet. And without knowing how cancer-related genes piece together, we can't decipher how they synergize to create one of our fiercest medical foes. It's like we have the first page of an IKEA manual, said Dr. Trey Ideker at UC San Diego. But “how these genes and gene products, the proteins, are tied together is the rest of the manual—except there's about a million pages worth of it. You need to understand those pages if you're really going to understand disease.” Ideker's comment, made in 2017, was strikingly prescient. The underlying idea is seemingly simple, yet a wild shift from previous attempts at cancer research: rather than individual genes, let's turn the spotlight on how they fit together into networks to drive cancer. Together with Dr. Nevan Krogan at UC San Francisco, a team launched the Cancer Cell Map Initiative (CCMI), a moonshot that peeks into the molecular “phone lines” within cancer cells that guide their growth and spread. Snip them off, the theory goes, and it's possible to nip tumors in the bud. This week, three studies in Science led by Ideker and Krogan showcased the power of that radical change in perspective. At its heart is protein-protein interactions: that is, how the cell's molecular “phone lines” rewire and fit together as they turn to the cancerous dark side. One study mapped the landscape of protein networks to see how individual genes and their protein products coalesce to drive breast cancer. Another traced the intricate web of genetic connections that promote head and neck cancer. Tying everything together, the third study generated an atlas of protein networks involved in various types of cancer. By looking at connections, the map revealed new mutations that likely give cancer a boost, while also pointing out potential weaknesses ripe for target-and-destroy. For now, the studies aren't yet a comprehensive IKEA-like manual of how cancer components fit together. But they're the first victories in a sweeping framework for rethinking cancer. “For many cancers, there is an extensive catalog of genetic mutations, but a consolidated map that organizes these mutations into pathways that drive tumor growth is missing,” said Drs. Ran Cheng and Peter Jackson at Stanford University, who weren't involved in the studies. Knowing how those work “will simplify our search for effective cancer therapies.” Cellular Chatterbox Every cell is an intricate city, with energy, communications systems, and waste disposal needs. Their secret sauce for everything humming along nicely? Proteins. Proteins are indispensable workhorses with many tasks and even more identities. Some are builders, tirelessly laying down “railway” tracks to connect different parts of a cell; others are carriers, hauling cargo down those protein rails. Enzymes allow cells to generate energy and perform hundreds of other life-sustaining biochemical reactions. But perhaps the most enigmatic proteins are the messengers. These are often small in size, allowing them to zip around the cell and between different compartments. 
If a cell is a neighborhood, these proteins are mailmen, shuttling messages back and forth. Rather than dropping off mail, however, they deliver messages by physically tagging onto other proteins. These “handshakes” are dubbed protein-protein interactions (PPIs), and are critical to a cell's function. PPIs are basically the cell's supply chain, communications cable, and energy economy rolled into one massive infrastructure. Destroying just one PPI can lead a thriving cell to die. PPIs ar...
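A protein-protein interaction map is, in data-structure terms, just a graph: proteins as nodes, "handshakes" as edges. The sketch below uses a handful of placeholder interactions (not data from the CCMI studies) to show how network structure surfaces hub proteins whose rewiring matters most.

```python
# Represent a protein-protein interaction (PPI) network as a graph and rank
# hub proteins by how many interaction partners they have. The edges below are
# placeholders for illustration, not data from the studies discussed above.
from collections import defaultdict

interactions = [                     # hypothetical PPI edges
    ("TP53", "MDM2"), ("TP53", "BRCA1"), ("TP53", "EP300"),
    ("BRCA1", "BARD1"), ("BRCA1", "RAD51"), ("MDM2", "MDM4"),
]

network = defaultdict(set)
for a, b in interactions:
    network[a].add(b)
    network[b].add(a)

# Hubs (proteins with many partners) are often the interesting ones: mutations
# that rewire a hub can ripple through the whole network.
hubs = sorted(network, key=lambda p: len(network[p]), reverse=True)
for protein in hubs[:3]:
    print(protein, "interacts with", sorted(network[protein]))
```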

    How Quantum Computers Can Be Used to Build Better Quantum Computers

    Play Episode Listen Later Oct 4, 2021 4:05


    Using computer simulations to design new chips played a crucial role in the rapid improvements in processor performance we've experienced in recent decades. Now Chinese researchers have extended the approach to the quantum world. Electronic design automation tools started to become commonplace in the early 1980s as the complexity of processors rose exponentially, and today they are an indispensable tool for chip designers. More recently, Google has been turbocharging the approach by using artificial intelligence to design the next generation of its AI chips. This holds the promise of setting off a process of recursive self-improvement that could lead to rapid performance gains for AI. Now, New Scientist has reported on a team from the University of Science and Technology of China in Shanghai that has applied the same ideas to another emerging field of computing: quantum processors. In a paper posted to the arXiv pre-print server, the researchers describe how they used a quantum computer to design a new type of qubit that significantly outperformed their previous design. “Simulations of high-complexity quantum systems, which are intractable for classical computers, can be efficiently done with quantum computers,” the authors wrote. “Our work opens the way to designing advanced quantum processors using existing quantum computing resources.” At the heart of the idea is the fact that the complexity of quantum systems grows exponentially as they increase in size. As a result, even the most powerful supercomputers struggle to simulate fairly small quantum systems. This was the basis for Google's groundbreaking display of “quantum supremacy” in 2019. The company's researchers used a 53-qubit processor to run a random quantum circuit a million times and showed that it would take roughly 10,000 years to simulate the experiment on the world's fastest supercomputer. This means that using classical computers to help in the design of new quantum computers is likely to hit fundamental limits pretty quickly. Using a quantum computer, however, sidesteps the problem because it can exploit the same oddities of the quantum world that make the problem complex in the first place. This is exactly what the Chinese researchers did. They used an algorithm called a variational quantum eigensolver to simulate the kind of superconducting electronic circuit found at the heart of a quantum computer. This was used to explore what happens when certain energy levels in the circuit are altered. Normally this kind of experiment would require them to build large numbers of physical prototypes and test them, but instead the team was able to rapidly model the impact of the changes. The upshot was that the researchers discovered a new type of qubit that was more powerful than the one they were already using. Any two-level quantum system can act as a qubit, but most superconducting quantum computers use transmons, which encode quantum states into the oscillations of electrons. By tweaking the energy levels of their simulated quantum circuit, the researchers were able to discover a new qubit design they dubbed a plasonium. It is less than half the size of a transmon, and when the researchers fabricated it they found that it holds its quantum state for longer and is less prone to errors. It still works on similar principles to the transmon, so it's possible to manipulate it using the same control technologies. 
The researchers point out that this is only a first prototype, so with further optimization and the integration of recent progress in new superconducting materials and surface treatment methods they expect performance to increase even more. But the new qubit the researchers have designed is probably not their most significant contribution. By demonstrating that even today's rudimentary quantum computers can help design future devices, they've opened the door to a virtuous cycle that could significantly speed innovation in this field. Image Credit: Pete Linfor...
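The variational idea behind a variational quantum eigensolver can be shown with a classically simulated toy: prepare a parameterized state, evaluate its energy against a Hamiltonian, and tune the parameter to minimize it. The 2x2 Hamiltonian and single-parameter "circuit" below are illustrative stand-ins, not the team's superconducting-circuit simulation.

```python
# Toy version of the variational principle behind a VQE: minimize the energy
# <psi(theta)|H|psi(theta)> over a parameterized trial state. Here the "circuit"
# is a classically simulated single qubit and H is an arbitrary Hermitian matrix.
import numpy as np

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                          # arbitrary Hermitian "Hamiltonian"

def energy(theta):
    psi = np.array([np.cos(theta), np.sin(theta)])   # parameterized trial state
    return psi @ H @ psi

thetas = np.linspace(0, np.pi, 1000)
energies = [energy(t) for t in thetas]
best = int(np.argmin(energies))
print(f"Minimum energy ~{energies[best]:.3f} at theta ~{thetas[best]:.3f}")
print(f"Exact ground-state energy: {np.linalg.eigvalsh(H)[0]:.3f}")
```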

    The Music of Proteins Is Made Audible Through a Computer Program That Learns From Chopin

    Play Episode Listen Later Oct 3, 2021 4:56


    With the right computer program, proteins become pleasant music. There are many surprising analogies between proteins, the basic building blocks of life, and musical notation. These analogies can be used not only to help advance research, but also to make the complexity of proteins accessible to the public. We're computational biologists who believe that hearing the sound of life at the molecular level could help inspire people to learn more about biology and the computational sciences. While creating music based on proteins isn't new, different musical styles and composition algorithms had yet to be explored. So we led a team of high school students and other scholars to figure out how to create classical music from proteins. The Musical Analogies of Proteins Proteins are structured like folded chains. These chains are composed of small units of 20 possible amino acids, each labeled by a letter of the alphabet. A protein chain can be represented as a string of these alphabetic letters, very much like a string of music notes in alphabetical notation. Protein chains can also fold into wavy and curved patterns with ups, downs, turns, and loops. Likewise, music consists of sound waves of higher and lower pitches, with changing tempos and repeating motifs. Protein-to-music algorithms can thus map the structural and physiochemical features of a string of amino acids onto the musical features of a string of notes. Enhancing the Musicality of Protein Mapping Protein-to-music mapping can be fine-tuned by basing it on the features of a specific music style. This enhances musicality, or the melodiousness of the song, when converting amino acid properties, such as sequence patterns and variations, into analogous musical properties, like pitch, note lengths, and chords. For our study, we specifically selected 19th-century Romantic period classical piano music, which includes composers like Chopin and Schubert, as a guide because it typically spans a wide range of notes with more complex features such as chromaticism, like playing both white and black keys on a piano in order of pitch, and chords. Music from this period also tends to have lighter and more graceful and emotive melodies. Songs are usually homophonic, meaning they follow a central melody with accompaniment. These features allowed us to test out a greater range of notes in our protein-to-music mapping algorithm. In this case, we chose to analyze features of Chopin's Fantaisie-Impromptu to guide our development of the program. To test the algorithm, we applied it to 18 proteins that play a key role in various biological functions. Each amino acid in the protein is mapped to a particular note based on how frequently they appear in the protein, and other aspects of their biochemistry correspond with other aspects of the music. A larger-sized amino acid, for instance, would have a shorter note length, and vice versa. The resulting music is complex, with notable variations in pitch, loudness, and rhythm. Because the algorithm was completely based on the amino acid sequence and no two proteins share the same amino acid sequence, each protein will produce a distinct song. This also means that there are variations in musicality across the different pieces, and interesting patterns can emerge. For example, music generated from the receptor protein that binds to the hormone and neurotransmitter oxytocin has some recurring motifs due to the repetition of certain small sequences of amino acids. 
On the other hand, music generated from tumor antigen p53, a protein that prevents cancer formation, is highly chromatic, producing particularly fascinating phrases where the music sounds almost toccata-like, a style that often features fast and virtuoso technique. By guiding analysis of amino acid properties through specific music styles, protein music can sound much more pleasant to the ear. This can be further developed and applied to a wider variety of music styles, including pop and jazz. P...
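As a flavor of how such a mapping might work, here is an illustrative (and heavily simplified) protein-to-notes sketch: pitch follows how often an amino acid appears in the sequence, and larger residues get shorter notes, as the article describes. The scale, the size table, and the mapping rules are stand-ins, not the study's algorithm.

```python
# Illustrative protein-to-music mapping: more frequent amino acids get higher
# pitches, and larger residues get shorter notes. The scale, approximate residue
# sizes, and rules are stand-ins, not the algorithm used in the study.
from collections import Counter

SCALE = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]                  # one octave, assumed
RESIDUE_SIZE = {"G": 57, "A": 71, "S": 87, "L": 113, "K": 128, "W": 186}  # approx. daltons

def protein_to_notes(sequence):
    counts = Counter(sequence)
    by_freq = sorted(counts, key=counts.get)       # least to most frequent
    pitch = {aa: SCALE[min(i, len(SCALE) - 1)] for i, aa in enumerate(by_freq)}
    notes = []
    for aa in sequence:
        duration = 1.0 if RESIDUE_SIZE.get(aa, 110) < 100 else 0.5   # bigger residue, shorter note
        notes.append((pitch[aa], duration))
    return notes

print(protein_to_notes("GASLKWGGA"))
```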

    Scientists Created Holograms You Can Touch—You Could Soon Shake a Virtual Colleague's Hand

    Play Episode Listen Later Oct 1, 2021 5:28


    The TV show Star Trek: The Next Generation introduced millions of people to the idea of a holodeck: an immersive, realistic 3D holographic projection of a complete environment that you could interact with and even touch. In the 21st century, holograms are already being used in a variety of ways, such as medical systems, education, art, security and defense. Scientists are still developing ways to use lasers, modern digital processors, and motion-sensing technologies to create several different types of holograms that could change the way we interact. My colleagues and I, working in the University of Glasgow's bendable electronics and sensing technologies research group, have now developed a system of holograms of people using “aerohaptics,” creating feelings of touch with jets of air. Those jets of air deliver a sensation of touch on people's fingers, hands, and wrists. In time, this could be developed to allow you to meet a virtual avatar of a colleague on the other side of the world and really feel their handshake. It could even be the first step towards building something like a holodeck. To create this feeling of touch we use affordable, commercially available parts to pair computer-generated graphics with carefully directed and controlled jets of air. In some ways, it's a step beyond the current generation of virtual reality, which usually requires a headset to deliver 3D graphics and smart gloves or handheld controllers to provide haptic feedback, a stimulation that feels like touch. Most of the wearable gadget-based approaches are limited to controlling the virtual object that is being displayed. Controlling a virtual object doesn't give the feeling that you would experience when two people touch. The addition of an artificial touch sensation can deliver the additional dimension without having to wear gloves to feel objects, and so feels much more natural. Using Glass and Mirrors Our research uses graphics that provide the illusion of a 3D virtual image. It's a modern variation on a 19th-century illusion technique known as Pepper's Ghost, which thrilled Victorian theatergoers with visions of the supernatural onstage. The system uses glass and mirrors to make a two-dimensional image appear to hover in space without the need for any additional equipment. And our haptic feedback is created with nothing but air. The mirrors making up our system are arranged in a pyramid shape with one open side. Users put their hands through the open side and interact with computer-generated objects which appear to be floating in free space inside the pyramid. The objects are graphics created and controlled by a software program called Unity Game Engine, which is often used to create 3D objects and worlds in videogames. Located just below the pyramid is a sensor that tracks the movements of users' hands and fingers, and a single air nozzle, which directs jets of air towards them to create complex sensations of touch. The overall system is directed by electronic hardware programmed to control nozzle movements. We developed an algorithm which allowed the air nozzle to respond to the movements of users' hands with appropriate combinations of direction and force. One of the ways we've demonstrated the capabilities of the “aerohaptic” system is with an interactive projection of a basketball, which can be convincingly touched, rolled, and bounced.
The touch feedback from air jets from the system is also modulated based on the virtual surface of the basketball, allowing users to feel the rounded shape of the ball as it rolls from their fingertips when they bounce it and the slap in their palm when it returns. Users can even push the virtual ball with varying force and sense the resulting difference in how a hard bounce or a soft bounce feels in their palm. Even something as apparently simple as bouncing a basketball required us to work hard to model the physics of the action and how we could replicate that familiar sensation with jets of air. S...
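The feedback loop described above can be caricatured in a few lines: track the hand, test for contact with the virtual ball, and aim an air jet along the surface normal with a force that grows with penetration depth. The geometry, gains, and hand position below are invented; the Glasgow group's actual control algorithm is not spelled out in this article.

```python
# Highly simplified sketch of an aerohaptic feedback step: if the tracked hand
# is inside the virtual ball, aim the nozzle along the surface normal with a
# force proportional to penetration depth. All values are illustrative.
import numpy as np

BALL_CENTER = np.array([0.0, 0.0, 0.3])   # meters, somewhere inside the pyramid (assumed)
BALL_RADIUS = 0.12
MAX_FORCE = 1.0                           # arbitrary units of air-jet strength

def aerohaptic_command(hand_pos):
    offset = hand_pos - BALL_CENTER
    distance = np.linalg.norm(offset)
    if distance >= BALL_RADIUS:
        return None                       # no contact, no air jet
    direction = offset / distance         # push back along the surface normal
    penetration = BALL_RADIUS - distance
    force = min(MAX_FORCE, 8.0 * penetration)
    return direction, force

print(aerohaptic_command(np.array([0.0, 0.02, 0.22])))
```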

    A Hybrid Coral Reef In Mexico Is Using Energy From Waves to Turn Sea Salt to Rock

    Play Episode Listen Later Sep 30, 2021 4:27


    Climate change is wreaking havoc on land via extreme weather events like wildfires, hurricanes, floods, and record-high temperatures. Glaciers are melting and sea levels are rising. And of course, the ocean isn't immune to all this upheaval; our seas are suffering rising water temperatures, pollution from plastics and chemicals, overfishing, and more. A British startup is tackling one vitally important component of ocean damage: restoring coral reefs, and in the process, protecting the coastlines they sit on and fostering marine ecosystems within and around them. Ccell was founded in 2015 by Will Bateman, a civil and environmental engineer whose doctorate at Imperial College London involved studying the directional effects of extreme ocean waves. Bateman applied that research in founding the company, which uses an “ultra-light curved paddle” to harness energy from waves, combining this energy with an electrolytic technique to grow artificial reefs. Here's how it works. A structure made of steel is immersed in the sea—a modular design with units 2.5 meters (8.2 feet) long and up to 2 meters (6.5 feet) high means the reefs can be customized for different areas—then low-voltage electrical currents produced by wave energy pass between the steel and a metal anode. This produces oxygen at the anode and causes the pH to rise at the cathode (the steel), causing the dissolved salts that naturally exist in seawater to calcify onto the steel and turn to rock. It's a slow process—the rock grows at a rate of about 2.5 centimeters (1 inch) per year—but Ccell claims the method accelerates coral growth, enabling fragments of broken or farmed corals to grow faster than they would on natural reefs. The reefs are considered “hybrid” because they're not fully natural, but once they've been in the water for a while, they essentially act as a substrate on which many components of a natural reef can thrive. Besides housing thriving ecosystems of marine life that include everything from coral to fish, lobsters, clams, and sea turtles, reefs also help protect the shorelines they're near by breaking down waves. While large waves tend to be destructive, small waves can actually re-deposit sand on the beach and help preserve it. Because Ccell's reefs are porous, they induce turbulence in waves and further reduce their force before they reach the shore. Beaches in areas that draw tourists are particularly interested in keeping their sand. Ccell installed its first reef substrate over the summer at Telchac Puerto, a resort near the city of Mérida on Mexico's Yucatan peninsula. If the reef succeeds at protecting the shoreline and fostering a healthy marine ecosystem, Ccell will likely be installing many more like it in the near future. Artificial reefs aren't a new idea. One similar to Ccell's was installed in Sydney Harbor in 2019, and around the world are reefs made from decommissioned oil rigs, aircraft carriers, or ships. What sets Ccell apart is the electrolysis that helps rock form (which is based on a technology called Biorock that the Global Coral Reef Alliance has been using since 1996), and the fact that it's now going commercial. It's not just beach resorts that are taking note of hybrid reef technology. 
DARPA's Reefense project is looking to hybrid reefs to “mitigate the coastal flooding, erosion, and storm damage that increasingly threaten civilian and Department of Defense infrastructure and personnel.” Crowdcube, a British investment crowdfunding platform that led Ccell's seed funding, estimated a global market of £50 billion ($67 billion) for hybrid reefs, noting that Quintana Roo—the Mexican state adjacent to Yucatan, where Cancun and other popular resorts are located—spent around £7.7 million ($10.3 million) per mile to add sand to its beaches, and 6 to 8 percent of that washed away within a year. A more cost-effective, long-term solution is in order. Ccell appears to be on the right track, but at a rate of one inch of rock growth per y...

    China's Cracking Down on Kids' Screen Time, and the Implications Could Be Far-Reaching

    Play Episode Listen Later Sep 29, 2021 6:16


    Screens are taking over our lives. According to market research firm eMarketer, in 2020 adults in the US spent an average of 7 hours and 50 minutes per day looking at screens. That total is likely much higher for desk workers, who look at their computers during the work day then look at their phones or TVs in the evening. Screen time is bad enough for adults, but what about kids? Video games, social media, show streaming, and messaging have all become common activities not just for teens, but for children too, and the impacts often aren't positive. Two weeks ago, for example, the Wall Street Journal broke the story that Facebook has downplayed findings from its own research on the ill effects of its platforms (namely Instagram) on teenage girls. Rates of depression, anxiety, and eating disorders among adolescents and adults are on the rise. In the US, it's mostly up to parents to restrict or control their kids' screen time and social media usage. But the degree to which parents try to limit these activities (and succeed at doing so) varies widely. In China, it's a different story. Forget parents—the government has taken matters into its own hands and is seeing to it that kids don't while away their time (and their young developing brains) on worthless screen-centered activities. Out of Time At the end of August, China's National Press and Publication Administration implemented new rules restricting the amount of time that minors (defined here as under age 18) can spend playing video games, slashing the limit to one hour per day on weekends and holidays. The previous limit, set in 2019, was 3 hours on holidays and 1.5 hours on other days. Two weeks ago, ByteDance Ltd., which owns TikTok and its Chinese version Douyin, followed suit, implementing restrictions for users under 14. The app's new “youth mode” allows kids and teens to be on the platform for up to 40 minutes a day total, and only between the hours of 6am and 10pm. “Adolescents are the future of the motherland, and protecting the physical and mental health of minors is related to the vital interests of masses, and in cultivating newcomers in the era of national rejuvenation,” the Press and Publications Administration said in a statement. In other words, the youth are the future, and if we let screens and social media turn their brains to mush while they're young, the future's not going to be very bright. Screen Time and Geopolitics? The restrictions come amid growing geopolitical tensions between China and the US, and crackdowns by the Chinese government over various sectors of the economy, from big tech to education to ride-hailing and real estate. Limiting kids' screen time may not appear to be connected to China's geopolitical ambitions, but considering the longer-term implications of these policies says otherwise. All else being equal, which country is more likely to produce a generation of great leaders, innovators, scientists, businesspeople, creatives, and the like: one where clear rules (and a cultural stigma) around screens force kids to spend time on more productive activities and curb the negative effects of screens on their mental health—or one where kids spend hours each day immersed in virtual worlds, distracting them from real-life activities and wearing down their self-esteem, focus, and social skills in the process? Of course, not all else is equal between China and the US. 
Though both nations are global powerhouses, they're worlds apart in terms of culture, government, education systems, and social norms, to name just a few. It's not unreasonable to think that government restrictions on kids' screen time could help make China's next generation more capable than America's. But for one, it's uncertain how strictly the time limits will be enforced. Douyin and gaming platforms will require name and age verification, and some gaming platforms will do periodic facial recognition checks on players. Tao Ran, who directs Beijing's Adolescent Psychological D...

    Scientists Completed the First Human Genome 20 Years Ago. How Far Have We Come, and What's Next?

    Play Episode Listen Later Sep 28, 2021 8:17


    If the Human Genome Project (HGP) was an actual human, he or she would be a revolutionary whiz kid. A prodigy in the vein of Mozart. One who changed the biomedical universe forever as a teenager, but ultimately has much more to offer in the way of transforming mankind. It's been 20 years since scientists published the first draft of the human genome. Since its launch in the 90s, the HGP fundamentally altered how we understand our genetic blueprint, our evolution, and the diagnosis and treatment of diseases. It spawned famous offspring, including gene therapy, mRNA vaccines, and CRISPR. It's the parent to HGP-Write, a global consortium that seeks to rewrite life. Yet as genome sequencing costs and time continue to dive, the question remains: what have we actually learned from the HGP? After two decades, is it becoming obsolete, with a new generation of genomic data in the making? And with controversial uses such as designer babies, human-animal chimeras, organs-in-a-tube, and shaky genetic privacy, how is the legacy of the HGP guiding the future of humanity? In a special issue of Science, scientists across the globe took a deep dive into the lessons learned from the world's first biomedical moonshot. “Although some hoped having the human genome in hand would let us sprint to medical miracles, the field is more an ongoing relay race of contributions from genomic studies,” wrote Science senior editor Laura Zahn. Decoding, reworking, and potentially one day augmenting the human genome is an ultramarathon, buoyed by potential medical miracles and fraught with possible abuses. “As genomic data and its uses continue to balloon, it will be critical to curb potential abuse and ensure that the legacy of the HGP contributes to the betterment of all human lives,” wrote Drs. Jennifer Rood and Aviv Regev at Genentech in a perspectives article for the issue. An Apollo Program to Decode Life Big data projects are a dime a dozen these days. A global effort to solve the brain? Yup. Scouring centenarians' genes to find those that lead to longevity? Sure! Spitting in a tube to find out your ancestry and potential disease risks—the kits are on sale for the holidays! Genetically engineering anything—from yeast that brew insulin to an organism entirely new to Earth—been there, done that! These massive international collaborations and sci-fi stretch goals that we now take for granted owe their success to the HGP. It's had a “profound effect on biomedical research,” said Rood and Regev. Flashback to the 1990s. Pulp Fiction played in theaters, Michael Jordan owned the NBA, and an international team decided to crack the base code of human life. The study arose from years of frustration that genetic mapping tools needed better resolution. Scientists could roughly track down a gene related to certain types of genetic disorders, like Huntington's disease, which is due to a single gene mutation. But it soon became clear that most of our toughest medical foes, such as cancer, often have multiple genetic hiccups. With the tools that were available at the time, solving these disorders was similar to debugging thousands of lines of code through a fogged-up lens. Ultimately, the pioneers realized we needed an “infinitely dense” map of the genome to really begin decoding, said the authors. Meaning, we needed a whole picture of the human genome, at high resolution, and the tools to get it. Before the HGP, we were peeking at our genome through consumer binoculars. 
After it, we got the James Webb Space Telescope to look into our inner genetic universe. The result was a human “reference genome,” a mold that nearly all biomedical studies map onto, from synthetic biology to chasing disease-causing mutants to the creation of CRISPR. Massive global consortiums, including the 1000 Genomes Project, the Cancer Genome Atlas, the BRAIN Initiative, and the Human Cell Atlas, have all followed in the HGP's footsteps. As a first big-data approach to medicine, before the internet was ub...

    This Amazing GIF Shows a Million Neurons Firing in a Mouse's Brain

    Play Episode Listen Later Sep 27, 2021 3:44


    The brain is the center of every human being's world, but many of its inner workings remain mysterious. Slowly, scientists are pulling back the veil. Recently, for example, researchers have created increasingly intricate maps of the brain's connections. These maps, called connectomes, detail every cell and synapse in small areas of the brain—but the maps are static. That is, we can't watch the cellular circuits they trace in action as an animal encounters the world and information courses through its neural connections. Most of the methods scientists use to watch the brain in action offer either low resolution and wide coverage or high resolution and narrow coverage. A new technique, developed by researchers at The Rockefeller University and recently published in the journal Nature Methods, offers the best of both worlds. Called light beads microscopy, it let the team record hundreds of thousands of neurons in 3D volumes through time. In a striking example, they released a movie of a million neurons firing in a mouse brain as it went about its day. Typically, neuroscientists use a technique called two-photon microscopy to record neurons as they fire. Laser pulses are sent into the brain, where they interact with fluorescent tags and cause them to light up. Scientists then interpret the light to infer activity. Two-photon microscopy can record small bands of neurons in action, but struggles with bigger groups. The light beads technique builds on two-photon microscopy, with a clever tweak. Instead of relying on single pulses too slow to record broad populations of neurons firing, it divides each pulse into 30 sub-pulses of varying strengths. A series of mirrors sends these sub-pulses into the brain at 30 different depths, recording the behavior of neurons at each depth almost simultaneously. The technique is so speedy that its only limitation is how quickly the fluorescent tags respond to the pulses of light. To test it, the team outfitted a microscopy platform—essentially a lightweight microscope that can be attached to a mouse's head to record brain activity as it moves about—with the new light beads functionality and put it to work. They were able to capture hundreds of thousands of neurons signaling to each other from across the cortex. Even better? Because light beads builds on already-widely-used two-photon microscopy, labs should already have or be able to readily procure the needed equipment. “Understanding the nature of the brain's densely interconnected network requires developing novel imaging techniques that can capture the activity of neurons across vastly separated brain regions at high speed and single-cell resolution,” Rockefeller's Alipasha Vaziri said in a statement. “Light beads microscopy will allow us to investigate biological questions in a way that had not been possible before.” But the technique won't replace standard two-photon microscopy, Vaziri says. Rather, he sees it as a complementary approach. Indeed, the growing quiver of imaging technologies, from those yielding static wiring diagrams to those recording function in vivo, will likely combine, quilt-like, to provide a far richer picture of how our brains do what they do. Researchers hope this kind of work can shed light on how the brain's complex networks of neurons produce sensations, thoughts, and movement, what causes them to malfunction, and even help us engineer our own intelligent systems in silicon. Image Credit: Alipasha Vaziri / The Rockefeller University
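To make the sub-pulse idea concrete, here is a minimal sketch of how the energy of a single laser pulse might be divided among 30 sub-pulses aimed at 30 depths. The depth spacing and attenuation length are illustrative assumptions, not values from the paper; the point is simply that deeper imaging planes need stronger sub-pulses to receive comparable excitation.

```python
import numpy as np

# Toy model of the "light beads" idea: one laser pulse is split into 30
# sub-pulses, each aimed at a different imaging depth. The depth spacing and
# attenuation length below are illustrative assumptions, not values from the
# paper.
N_SUBPULSES = 30
DEPTH_STEP_UM = 16.0            # assumed spacing between imaging planes (micrometers)
ATTENUATION_LENGTH_UM = 200.0   # assumed effective attenuation length of tissue

depths = np.arange(N_SUBPULSES) * DEPTH_STEP_UM

# To deliver comparable excitation at every depth, deeper sub-pulses must start
# out stronger to compensate for (roughly exponential) attenuation.
relative_energy = np.exp(depths / ATTENUATION_LENGTH_UM)
relative_energy /= relative_energy.sum()    # fractions of the original pulse energy

for depth, fraction in zip(depths, relative_energy):
    print(f"depth {depth:6.1f} um -> {100 * fraction:5.2f}% of pulse energy")
```

In the actual instrument, as the description notes, mirrors deliver the sub-pulses at staggered depths almost simultaneously; the sketch only captures the energy bookkeeping.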

    This Google-Funded Project Is Tracking Global Carbon Emissions in Real Time

    Play Episode Listen Later Sep 24, 2021 4:31


    It's crunch time on climate change. The IPCC's latest report told the world just how bad it is, and... it's bad. Companies, NGOs, and governments are scrambling for fixes, both short-term and long-term, from banning the sale of combustion-engine vehicles to pouring money into hydrogen to building direct air capture plants. And one initiative, launched last week, is taking an “if you can name it, you can tame it” approach by creating an independent database that measures and tracks emissions all over the world. Climate TRACE, which stands for tracking real-time atmospheric carbon emissions, is a collaboration between nonprofits, tech companies, and universities, including CarbonPlan, Earthrise Alliance, Johns Hopkins Applied Physics Laboratory, former US Vice President Al Gore, and others. The organization started thanks to a grant from Google, which funded an effort to measure power plant emissions using satellites. A team of fellows from Google helped build algorithms to monitor the power plants (the Google.org Fellowship was created in 2019 to let Google employees do pro bono technical work for grant recipients). Climate TRACE uses data from satellites and other remote sensing technologies to “see” emissions. Artificial intelligence algorithms combine this data with verifiable emissions measurements to produce estimates of the total emissions coming from various sources. These sources are divided into ten sectors—like power, manufacturing, transportation, and agriculture—each with multiple subsectors (e.g., two subsectors of agriculture are rice cultivation and manure management). The total carbon emitted from January 2015 to December 2020, by the project's estimation, was 303.96 billion tons. The biggest offender? Electricity generation. It's no wonder, then, that states, companies, and countries are rushing to make (occasionally unrealistic) carbon-neutral pledges, and that the renewable energy industry is booming. The founders of the initiative hope that, by increasing transparency, the database will increase accountability, thereby spurring action. Younger consumers care about climate change and are likely to push companies and brands to do something about it. The BBC reported that in a recent survey led by the UK's Bath University, almost 60 percent of respondents said they were “very worried” or “extremely worried” about climate change, while more than 45 percent said feelings about the climate affected their daily lives. The survey received responses from 10,000 people aged 16 to 25, finding that young people are the most concerned with climate change in the global south, while in the northern hemisphere those most worried are in Portugal, which has grappled with severe wildfires. Many of the survey respondents, independent of location, reportedly feel that “humanity is doomed.” Once this demographic reaches working age, they'll be able to throw their weight around, and it seems likely they'll do so in a way that puts the planet and its future at center stage. For all its sanctimoniousness, “naming and shaming” of emitters not doing their part may end up being both necessary and helpful. Until now, Climate TRACE's website points out, emissions inventories have been largely self-reported (I mean, what's even the point?), and they've used outdated information and opaque measurement methods. Besides being independent, which is huge in itself, TRACE is using 59 trillion bytes of data from more than 300 satellites, more than 11,100 sensors, and other sources of emissions information.
“We've established a shared, open monitoring system capable of detecting essentially all forms of humanity's greenhouse gas emissions,” said Gavin McCormick, executive director of coalition convening member WattTime. “This is a transformative step forward that puts timely information at the fingertips of all those who seek to drive significant emissions reductions on our path to net zero.” Given the scale of the project, the parties involved, and how ...
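As a rough illustration of the sector-and-subsector roll-up described above, here is a minimal sketch of how per-source emission estimates could be aggregated. The source names and tonnages are placeholders, not Climate TRACE data; the real inventory is built from satellite and sensor observations at far larger scale.

```python
from collections import defaultdict

# Hypothetical per-source estimates (tons CO2e). Climate TRACE's real inventory
# is derived from satellite and sensor data at vastly larger scale, but the
# roll-up into sectors and subsectors works the same way.
estimates = [
    {"sector": "power", "subsector": "coal", "tons_co2e": 1_200_000},
    {"sector": "power", "subsector": "gas", "tons_co2e": 450_000},
    {"sector": "agriculture", "subsector": "rice cultivation", "tons_co2e": 90_000},
    {"sector": "agriculture", "subsector": "manure management", "tons_co2e": 40_000},
]

sector_totals = defaultdict(float)
subsector_totals = defaultdict(float)
for row in estimates:
    sector_totals[row["sector"]] += row["tons_co2e"]
    subsector_totals[(row["sector"], row["subsector"])] += row["tons_co2e"]

# Rank sectors by total estimated emissions, largest first.
for sector, total in sorted(sector_totals.items(), key=lambda kv: -kv[1]):
    print(f"{sector}: {total:,.0f} tons CO2e")
```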

    A Ferocious Asteroid Strike Demolished an Ancient Middle Eastern City 3,600 Years Ago

    Play Episode Listen Later Sep 23, 2021 8:26


    As the inhabitants of an ancient Middle Eastern city now called Tall el-Hammam went about their daily business one day about 3,600 years ago, they had no idea an unseen icy space rock was speeding toward them at about 38,000 mph (61,000 kph). Flashing through the atmosphere, the rock exploded in a massive fireball about 2.5 miles (4 kilometers) above the ground. The blast was around 1,000 times more powerful than the Hiroshima atomic bomb. The shocked city dwellers who stared at it were blinded instantly. Air temperatures rapidly rose above 3,600 degrees Fahrenheit (2,000 degrees Celsius). Clothing and wood immediately burst into flames. Swords, spears, mudbricks, and pottery began to melt. Almost immediately, the entire city was on fire. Some seconds later, a massive shockwave smashed into the city. Moving at about 740 mph (1,200 kph), it was more powerful than the worst tornado ever recorded. The deadly winds ripped through the city, demolishing every building. They sheared off the top 40 feet (12 m) of the 4-story palace and blew the jumbled debris into the next valley. None of the 8,000 people or any animals within the city survived; their bodies were torn apart and their bones blasted into small fragments. About a minute later, 14 miles (22 km) to the west of Tall el-Hammam, winds from the blast hit the biblical city of Jericho. Jericho's walls came tumbling down and the city burned to the ground. It all sounds like the climax of an edge-of-your-seat Hollywood disaster movie. How do we know that all of this actually happened near the Dead Sea in Jordan millennia ago? Getting answers required nearly 15 years of painstaking excavations by hundreds of people. It also involved detailed analyses of excavated material by more than two dozen scientists in 10 states in the US, as well as Canada and the Czech Republic. When our group finally published the evidence recently in the journal Scientific Reports, the 21 co-authors included archaeologists, geologists, geochemists, geomorphologists, mineralogists, paleobotanists, sedimentologists, cosmic-impact experts, and medical doctors. Here's how we built up this picture of devastation in the past. Firestorm Throughout the City Years ago, when archaeologists looked out over excavations of the ruined city, they could see a dark, roughly 5-foot-thick (1.5 meter) jumbled layer of charcoal, ash, melted mudbricks, and melted pottery. It was obvious that an intense firestorm had destroyed this city long ago. This dark band came to be called the destruction layer. No one was exactly sure what had happened, but that layer wasn't caused by a volcano, earthquake, or warfare. None of them are capable of melting metal, mudbricks, and pottery. To figure out what could, our group used the Online Impact Calculator to model scenarios that fit the evidence. Built by impact experts, this calculator allows researchers to estimate the many details of a cosmic impact event, based on known impact events and nuclear detonations. It appears that the culprit at Tall el-Hammam was a small asteroid similar to the one that knocked down 80 million trees in Tunguska, Russia in 1908. It would have been a much smaller version of the giant miles-wide rock that pushed the dinosaurs into extinction 65 million years ago. We had a likely culprit. Now we needed proof of what happened that day at Tall el-Hammam. Finding ‘Diamonds' in the Dirt Our research revealed a remarkably broad array of evidence. 
The destruction layer also contains tiny diamonoids that, as the name indicates, are as hard as diamonds. Each one is smaller than a flu virus. It appears that wood and plants in the area were instantly turned into this diamond-like material by the fireball's high pressures and temperatures. At the site, there are finely fractured sand grains called shocked quartz that only form at 725,000 pounds per square inch of pressure (5 gigapascals); imagine six 68-ton Abrams military tanks stacked on your thumb. Experiments with l...
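Two of the numbers in the account above can be sanity-checked with back-of-the-envelope arithmetic: the shocked-quartz pressure quoted in both psi and gigapascals, and the energy implied by an object arriving at roughly 61,000 kph. The sketch below does both. The 15-kiloton figure for the Hiroshima bomb is a commonly cited value, and the implied impactor mass is only a crude estimate; the study itself used the Online Impact Calculator, which accounts for far more of the airburst physics.

```python
# Back-of-the-envelope checks for the figures quoted above.
PSI_TO_PA = 6894.757          # pascals per pound-per-square-inch
TNT_J_PER_KG = 4.184e6        # joules released per kilogram of TNT
HIROSHIMA_KT = 15.0           # commonly cited yield in kilotons of TNT (assumption)

# 1) Shocked quartz threshold: 725,000 psi should come out near 5 GPa.
quartz_pa = 725_000 * PSI_TO_PA
print(f"shocked quartz threshold ~ {quartz_pa / 1e9:.1f} GPa")

# 2) Kinetic energy per kilogram of impactor at the stated entry speed.
v = 61_000 / 3.6              # 61,000 kph in meters per second
ke_per_kg = 0.5 * v ** 2      # joules per kilogram
print(f"energy per kg of impactor ~ {ke_per_kg / TNT_J_PER_KG:.0f} kg of TNT")

# 3) Mass implied by a blast ~1,000 times Hiroshima, ignoring how an airburst
#    actually partitions its energy (a deliberately crude estimate).
blast_j = 1_000 * HIROSHIMA_KT * 1e6 * TNT_J_PER_KG   # kilotons -> kg of TNT -> joules
implied_mass_kg = blast_j / ke_per_kg
print(f"implied impactor mass ~ {implied_mass_kg / 1e6:.0f} thousand tonnes")
```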

    Alphabet's Project Taara Is Using Lasers to Beam Internet Across the World's Deepest River

    Play Episode Listen Later Sep 22, 2021 4:57


    A little over a year ago, Google's Project Loon launched in Kenya: 35 giant balloons with solar-powered electronics inside, beaming a 4G signal to the central and western parts of the country. The project was ambitious; each balloon, when fully extended, was the size of a tennis court, and the plan was for them to hover in the stratosphere (20 kilometers above Earth), forming a mesh network to provide internet service to people in remote areas. Just six months after its debut, though, the project was discontinued. Loon's CEO at the time, Alastair Westgarth, wrote, “We talk a lot about connecting the next billion users, but the reality is Loon has been chasing the hardest problem of all in connectivity—the last billion users: The communities in areas too difficult or remote to reach... we haven't found a way to get the costs low enough to build a long-term, sustainable business.” Westgarth went on to extol the learnings from the project, of which there were many. And now, some of them are going into a new initiative, called Project Taara, that wouldn't have been feasible without the headway made by Loon. To send data between Loon balloons, engineers used optical communication, or, as Baris Erkmen, Taara's Director of Engineering, calls it in an X blog post, wireless optical communications (WOC). A laser sent out from one site transmits an invisible beam of light to a data receiver on another site. When two sites successfully link up (“like a handshake,” Erkmen says), the data being transmitted through the light beam creates a high-bandwidth internet connection. It's a complicated handshake. To give us an idea of the precision required in the laser and the difficulty of achieving that precision, Erkmen writes, “Imagine pointing a light beam the width of a chopstick accurately enough to hit a five-centimeter target that's ten kilometers away; that's how accurate the signal needs to be to be strong and reliable.” His team, he adds, has spent years refining the technology's atmospheric sensing, mirror controls, and motion detection capabilities; Taara's terminals can now automatically adjust to changes in the environment to maintain precise connections. Project Taara aims to bridge a connectivity gap between the Republic of the Congo's Brazzaville and the Democratic Republic of Congo's Kinshasa. The cities lie just 4.8 kilometers (2.9 miles) apart, but between them is the Congo River—it's the deepest river in the world (220 meters/720 feet in parts! Pretty terrifying, if you ask me), the second-fastest, and the only one that crosses the equator twice. That makes for some complicated logistics, and as such, internet connectivity in Kinshasa (which is on the river's south bank) is very expensive. Local internet providers are putting down 400 kilometers of fiber connection around the river, but in a textbook example of leapfrogging technology, Project Taara used WOC to beam high-speed connectivity over the river instead. The connection served almost 700 terabytes of data in 20 days with 99.9 percent reliability. That amount of data is “the equivalent of watching a FIFA World Cup match in HD 270,000 times.” Not too shabby. WOC isn't immune to disturbances like fog, birds, and even monkeys, as Erkmen details in the blog post. But his team has developed network planning tools that estimate the technology's viability in different areas based on factors like weather, and will focus on places where it's most likely to work well; in any case, having occasional spotty service is better than no service at all.
According to the Alliance for Affordable Internet, almost half of the world's population still lacks internet access, and a large percentage of those who have it have low-quality connections, making features like online learning, video streaming, and telehealth inaccessible. A 2019 report by the organization found that only 28 percent of the African population has internet access through a computer, while 34 percent have access through a mobile ...
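Two figures in the description above translate directly into engineering terms: the pointing tolerance ("a five-centimeter target that's ten kilometers away") and the sustained throughput (almost 700 terabytes served in 20 days). The short sketch below just does the unit conversions; it takes those two figures at face value and assumes nothing about Taara's actual hardware.

```python
import math

# Pointing tolerance implied by "a five-centimeter target ten kilometers away".
target_m = 0.05
distance_m = 10_000
angle_rad = target_m / distance_m          # small-angle approximation
angle_arcsec = math.degrees(angle_rad) * 3600
print(f"required pointing accuracy ~ {angle_rad * 1e6:.0f} microradians "
      f"(~{angle_arcsec:.1f} arcseconds)")

# Average throughput implied by ~700 TB served over 20 days.
bits = 700e12 * 8                          # decimal terabytes -> bits
seconds = 20 * 24 * 3600
print(f"average throughput ~ {bits / seconds / 1e9:.1f} Gbps")
```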

    The Biggest Simulation of the Universe Yet Stretches Back to the Big Bang

    Play Episode Listen Later Sep 17, 2021 4:23


    Remember the philosophical argument that our universe is a simulation? Well, a team of astrophysicists say they've created the biggest simulated universe yet. But you won't find any virtual beings in it—or even planets or stars. The simulation is 9.6 billion light-years to a side, so its smallest structures are still enormous (the size of small galaxies). The model's 2.1 trillion particles simulate the dark matter glue holding the universe together. Named Uchuu, Japanese for “outer space,” the simulation covers some 13.8 billion years and will help scientists study how dark matter has driven cosmic evolution since the Big Bang. Dark matter is mysterious—we've yet to pin down its particles—and yet it's also one of the most powerful natural phenomena known. Scientists believe it makes up 27 percent of the universe. Ordinary matter—stars, planets, you, me—comprises less than 5 percent. Cosmic halos of dark matter resist the dark energy pulling the universe apart, and they drive the evolution of large-scale structures, from the smallest galaxies to the biggest galaxy clusters. Of course, all this change takes an epic amount of time. It's so slow that, to us, the universe appears as a still photograph. So scientists make simulations. But making a 3D video of almost the entire universe takes computer power. A lot of it. Uchuu commandeered all 40,200 processors in astronomy's biggest supercomputer, ATERUI II, for a solid 48 hours a month over the course of a year. The results are gorgeous and useful. “Uchuu is like a time machine,” said Julia F. Ereza, a PhD student at IAA-CSIC. “We can go forward, backward, and stop in time. We can ‘zoom in' on a single galaxy or ‘zoom out' to visualize a whole cluster. We can see what is really happening at every instant and in every place of the Universe from its earliest days to the present.” Perhaps the coolest part is that the team compressed the whole thing down to a relatively manageable size of 100 terabytes and made it available to anyone. Obviously, most of us won't have that kind of storage lying around, but many researchers likely will. This isn't the first—and won't be the last—mind-bogglingly big simulation. Rather, Uchuu is the latest member of a growing family tree dating back to 1970, when Princeton's Jim Peebles simulated 300 “galaxy” particles on then-state-of-the-art computers. While earlier simulations sometimes failed to follow sensible evolutionary paths—spawning mutant galaxies or rogue black holes—with the advent of more computing power and better code, they've become good enough to support serious science. Some go big. Others go detailed. Increasingly, one needn't preclude the other. Every few years, it seems, astronomers break new ground. In 2005, the biggest simulated universe was 10 billion particles; by 2011, it was 374 billion. More recently, the Illustris TNG project has unveiled impressively detailed (and yet still huge) simulations. Scientists hope that by setting up the universe's early conditions and physical laws and then hitting play, their simulations will reproduce the basic features of the physical universe as we see it. This lends further weight to theories of cosmology and also helps explain or even make predictions about current and future observations. Astronomers expect Uchuu will help them interpret galaxy surveys from the Subaru Telescope in Hawaii and the European Space Agency's Euclid space telescope, due for launch in 2022.
Simulations in hand, scientists will refine the story of how all this came to be, and where it's headed. (Learn more about the work in the team's article published this month in the Monthly Notices of the Royal Astronomical Society.) Image Credit: A snapshot of the dark matter halo of the largest galaxy cluster formed in the Uchuu simulation. Tomoaki Ishiyama
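A couple of the headline numbers above can be turned into rough quantities: the mean spacing between 2.1 trillion particles in a box 9.6 billion light-years on a side (which gives a feel for why the smallest resolved structures are galaxy-sized rather than star-sized), and the processor-hours implied by running 40,200 processors for 48 hours a month over a year. The sketch below only does that arithmetic; it says nothing about how ATERUI II actually scheduled the job.

```python
# Rough numbers implied by the Uchuu description above.
N_PARTICLES = 2.1e12
BOX_SIDE_LY = 9.6e9           # light-years per side of the simulation box

# Mean spacing between dark matter particles (cube root of volume per particle).
spacing_ly = BOX_SIDE_LY / N_PARTICLES ** (1 / 3)
print(f"mean particle spacing ~ {spacing_ly:,.0f} light-years")

# Processor-hours: 40,200 processors, 48 hours per month, 12 months.
core_hours = 40_200 * 48 * 12
print(f"total ~ {core_hours:,} processor-hours")
```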

    Drugs, Robots, and the Pursuit of Pleasure: Why Experts Are Worried About AIs Becoming Addicts

    Play Episode Listen Later Sep 17, 2021 24:37


    In 1953, a Harvard psychologist thought he discovered pleasure—accidentally—within the cranium of a rat. With an electrode inserted into a specific area of its brain, the rat was allowed to pulse the implant by pulling a lever. It kept returning for more: insatiably, incessantly, lever-pulling. In fact, the rat didn't seem to want to do anything else. Seemingly, the reward center of the brain had been located. More than 60 years later, in 2016, a pair of artificial intelligence (AI) researchers were training an AI to play video games. The goal of one game, Coastrunner, was to complete a racetrack. But the AI player was rewarded for picking up collectable items along the track. When the program was run, they witnessed something strange. The AI found a way to skid in an unending circle, picking up an unlimited cycle of collectibles. It did this, incessantly, instead of completing the course. What links these seemingly unconnected events is something strangely akin to addiction in humans. Some AI researchers call the phenomenon “wireheading.” It is quickly becoming a hot topic among machine learning experts and those concerned with AI safety. One of us (Anders) has a background in computational neuroscience, and now works with groups such as the AI Objectives Institute, where we discuss how to avoid such problems with AI; the other (Thomas) studies history, and the various ways people have thought about both the future and the fate of civilization throughout the past. After striking up a conversation on the topic of wireheading, we both realized just how rich and interesting the history behind this topic is. It is an idea that is very of the moment, but its roots go surprisingly deep. We are currently working together to research just how deep the roots go: a story that we hope to tell fully in a forthcoming book. The topic connects everything from the riddle of personal motivation, to the pitfalls of increasingly addictive social media, to the conundrum of hedonism and whether a life of stupefied bliss may be preferable to one of meaningful hardship. It may well influence the future of civilization itself. Here, we outline an introduction to this fascinating but under-appreciated topic, exploring how people first started thinking about it. The Sorcerer's Apprentice When people think about how AI might “go wrong,” most probably picture something along the lines of malevolent computers trying to cause harm. After all, we tend to anthropomorphize—think that nonhuman systems will behave in ways identical to humans. But when we look to concrete problems in present-day AI systems, we see other, stranger ways that things could go wrong with smarter machines. One growing issue with real-world AIs is the problem of wireheading. Imagine you want to train a robot to keep your kitchen clean. You want it to act adaptively, so that it doesn't need supervision. So you decide to try to encode the goal of cleaning rather than dictate an exact—yet rigid and inflexible—set of step-by-step instructions. Your robot is different from you in that it has not inherited a set of motivations—such as acquiring fuel or avoiding danger—from many millions of years of natural selection. You must program it with the right motivations to get it to reliably accomplish the task. So, you encode it with a simple motivational rule: it receives reward from the amount of cleaning-fluid used. Seems foolproof enough. But you return to find the robot pouring fluid, wastefully, down the sink. 
Perhaps it is so bent on maximizing its fluid quota that it sets aside other concerns: such as its own, or your, safety. This is wireheading—though the same glitch is also called “reward hacking” or “specification gaming.” This has become an issue in machine learning, where a technique called reinforcement learning has lately become important. Reinforcement learning simulates autonomous agents and trains them to invent ways to accomplish tasks. It does so by penalizing them for fai...
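The cleaning-robot thought experiment above maps directly onto a tiny piece of code. In this sketch the actions and numbers are invented for illustration; the point is only that an agent maximizing the naive "fluid used" reward settles on the degenerate strategy, while a reward tied to what we actually care about does not. Real reinforcement learning setups are far more elaborate, but specification gaming arises from exactly this kind of mismatch between the stated reward and the intended goal.

```python
# A toy illustration of the cleaning-robot story: when the reward is "amount of
# cleaning fluid used," a reward-maximizing agent finds the degenerate strategy.
# The actions and numbers are made up for illustration.
ACTIONS = {
    # action: (litres of fluid used, square metres actually cleaned)
    "scrub_counter": (0.1, 1.0),
    "mop_floor": (0.3, 2.0),
    "pour_fluid_down_sink": (1.0, 0.0),
}

def fluid_reward(action):
    litres, _ = ACTIONS[action]
    return litres                      # the naive specification

def cleaning_reward(action):
    _, area = ACTIONS[action]
    return area                        # reward what we actually care about

best_under_fluid = max(ACTIONS, key=fluid_reward)
best_under_cleaning = max(ACTIONS, key=cleaning_reward)
print("naive reward picks:", best_under_fluid)        # pour_fluid_down_sink
print("better reward picks:", best_under_cleaning)    # mop_floor
```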

    Why We Need Mass Automation to Pandemic-Proof the Supply Chain

    Play Episode Listen Later Jul 10, 2020 5:18


    The 3D Printed Homes of the Future Are Giant Eggs on Mars

    Play Episode Listen Later Jul 9, 2020 4:17

