News and Research
Catherine Iacobo named industry co-director for MIT Leaders for Global Operations

Cathy Iacobo, a lecturer at the MIT Sloan School of Management, has been named the new industry co-director for the MIT Leaders for Global Operations (LGO) program. Read more

2 from MIT Sloan make Forbes 30 Under 30 List

A pair of MIT Sloan alums, Jason Troutner, LGO ’19, and Sascha Eder, SM ’15, have been named to Forbes’ 30 Under 30 lists for 2020.

According to Forbes, the annual lists spotlight “revolutionaries … changing the course — and the face — of business and society.”

January 10, 2020 | More

Preventing energy loss in windows

“The choice of windows in a building has a direct influence on energy consumption,” says Nicholas Fang, professor of mechanical engineering and current LGO Thesis Advisor. “We need an effective way of blocking solar radiation.”

In the quest to make buildings more energy efficient, windows present a particularly difficult problem. According to the U.S. Department of Energy, heat that either escapes or enters through windows accounts for roughly 30 percent of the energy used to heat and cool buildings. Researchers are developing a variety of window technologies that could prevent this massive loss of energy.

January 6, 2020 | More

Sixteen grad students named to the Siebel Scholars class of 2020

LGO ’20 Hans Nowak is among the 2020 cohort of Siebel Scholars hailing from the world’s top graduate programs in bioengineering, business, computer science, and energy science. They were recognized at a luncheon and awards ceremony on campus on Oct. 31.

“You’re among a very select group of students to receive this honor,” Anantha Chandrakasan, dean of the School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science, told the students. “Your department heads obviously think very highly of your accomplishments.”

Honored for their academic achievements, leadership, and commitments to addressing crucial global challenges, the MIT students are among 93 Siebel Scholars from 16 leading institutions in the United States, China, France, Italy, and Japan.

Siebel Scholars each receive an award of $35,000 to cover their final year of study. In addition, they will join a community of more than 1,400 past Siebel Scholars, including about 260 from MIT, who serve as advisors to the Thomas and Stacy Siebel Foundation and collaborate “to find solutions to society’s most pressing problems,” according to the foundation.

Past Siebel Scholars have launched more than 1,100 products, received at least 370 patents, published nearly 40 books, and founded at least 150 companies, among other achievements, according to the Siebel Scholars Foundation, which administers the program.

MIT’s 2020 class of Siebel Scholars includes:

  • Katie Bacher, Department of Electrical Engineering and Computer Science
  • Alexandra (Allie) Beizer, MIT Sloan School of Management
  • Sarah Bening, Department of Biological Engineering
  • Allison (Allie) Brouckman, MIT Sloan School of Management
  • Enric Boix, Department of Electrical Engineering and Computer Science
  • M. Doga Dogan, Department of Electrical Engineering and Computer Science
  • Jared Kehe, Department of Biological Engineering
  • Emma Kornetsky, MIT Sloan School of Management
  • Kyungmi Lee, Department of Electrical Engineering and Computer Science
  • Graham Leverick, Department of Mechanical Engineering
  • Lauren Milling, Department of Biological Engineering
  • Hans Nowak, MIT Sloan School of Management
  • Lauren Stopfer, Department of Biological Engineering
  • Jon Tham, MIT Sloan School of Management
  • Andrea Wallace, Department of Biological Engineering
  • Clinton Wang, Department of Electrical Engineering and Computer Science

November 19, 2019 | More

Practicing for a voyage to Mars

If you want to make the long voyage to Mars, you first have to train and rehearse, and MIT LGO alumnus Barret Schlegelmilch SM ’18, MBA ’18 is doing just that. He recently commanded a 45-day practice mission living and working with three other would-be astronauts in a cramped simulated spaceship.

NASA’s Human Exploration Research Analog (HERA) mission “departed” last spring for a trip to Phobos, the larger of the two moons of Mars. It was the second of four planned missions to Phobos in the mock spacecraft located at the Johnson Space Center in Houston. The goal is to study the physiological and psychological effects of extended isolation and confinement, team dynamics, and conflict resolution.

While on the mission, Schlegelmilch and three other crew members …

November 1, 2019 | More

New leadership for Bernard M. Gordon-MIT Engineering Leadership Program

Olivier de Weck, frequent LGO advisor, professor of aeronautics and astronautics and of engineering systems at MIT, has been named the new faculty co-director of the Bernard M. Gordon-MIT Engineering Leadership Program (GEL). He joins Reza Rahaman, who was appointed the Bernard M. Gordon-MIT Engineering Leadership Program industry co-director and senior lecturer on July 1, 2018.

“Professor de Weck has a longstanding commitment to engineering leadership, both as an educator and a researcher. I look forward to working with him and the GEL team as they continue to strengthen their outstanding undergraduate program and develop the new program for graduate students,” says Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

A leader in systems engineering, de Weck researches how complex human-made systems such as aircraft, spacecraft, automobiles, and infrastructures are designed, manufactured, and operated. By investigating their lifecycle properties, de Weck and members of his research group have developed a range of novel techniques broadly adopted by industry to maximize the value of these systems over time.

August 1, 2019 | More

Building the tools of the next manufacturing revolution

John Hart, an associate professor of mechanical engineering at MIT, LGO adviser, and the director of the Laboratory for Manufacturing and Productivity and the Center for Additive and Digital Advanced Production Technologies, is an expert in 3-D printing, also known as additive manufacturing, which involves the computer-guided deposition of material layer by layer into precise three-dimensional shapes. (Conventional manufacturing usually entails making a part by removing material, for example through machining, or by forming the part using a mold tool.)

Hart’s research includes the development of advanced materials — new types of polymers, nanocomposites, and metal alloys — and the development of novel machines and processes that use and shape materials, such as high-speed 3-D printing, roll-to-roll graphene growth, and manufacturing techniques for low-cost sensors and electronics.

June 19, 2019 | More

LGO Best Thesis 2019 for Big Data Analysis at Amgen, Inc.

After the official MIT commencement ceremonies, Thomas Roemer, LGO’s executive director, announced the best thesis winner at LGO’s annual post-graduation celebration. This year’s winner was Maria Emilia Lopez Marino (Emi), who developed a predictive framework to evaluate and assess the impact of raw material attributes on the manufacturing process at Amgen. Thesis readers described Marino’s project as an “extremely well-written thesis.  Excellent coverage of not only the project, but also the industry as a whole.”

Applying MIT knowledge in the real world

Marino, who earned her MBA and SM in Civil and Environmental Engineering, completed her six-month LGO internship project at Amgen, Inc. For her project, Marino developed a new predictive framework through machine learning techniques to assess the impact of raw material variability on the performance of several commercial processes of biologics manufacturing.  Finding this solution represents a competitive advantage for biopharmaceutical leaders. The results from her analysis showed an 80% average accuracy on predictions for new data. Additionally, the framework she developed is the starting point of a new methodology towards material variability understanding in the manufacturing process for the pharmaceutical industry.
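
The thesis itself is not available as code, but the kind of workflow described above can be sketched briefly; the file name, column names, model choice, and metrics below are illustrative assumptions, not details from Marino’s project.

    # Hedged sketch: relating raw-material attributes to a process outcome.
    # The dataset, columns, and model here are hypothetical placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # One row per raw-material lot: measured attributes plus a pass/fail label
    # for the downstream manufacturing run.
    df = pd.read_csv("raw_material_lots.csv")
    X = df.drop(columns=["process_outcome"])
    y = df["process_outcome"]

    model = RandomForestClassifier(n_estimators=300, random_state=0)
    scores = cross_val_score(model, X, y, cv=5)   # held-out accuracy estimate
    print("mean cross-validated accuracy:", scores.mean())

    # Fitting on all data exposes which attributes drive the predictions.
    model.fit(X, y)
    ranked = sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1])
    print(ranked[:5])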

Each year, the theses are nominated by faculty advisors and then reviewed by LGO alumni readers to determine the winner. Thesis advisor Professor Roy Welsch said that Emi “understood variation both in a statistical sense and in manufacturing in the biopharmaceutical industry and left behind highly accurate and interpretable models in a form that others can use and expand. We hope she will share her experiences with us in the future at LGO alumni reunions and on DPT visits.”

Marino, who earned her undergraduate degree in Chemical Engineering from the National University of Mar Del Plata in Argentina, has accepted a job offer with Amgen in Puerto Rico.

 

June 11, 2019 | More

The tenured engineers of 2019

The School of Engineering has announced that 17 members of its faculty have been granted tenure by MIT, including three LGO advisors: Saurabh Amin, Kerri Cahoy, and Julie Shah.

“The tenured faculty in this year’s cohort are a true inspiration,” said Anantha Chandrakasan, dean of the School of Engineering. “They have shown exceptional dedication to research and teaching, and their innovative work has greatly advanced their fields.”

This year’s newly tenured associate professors are:

Antoine Allanore, in the Department of Materials Science and Engineering, develops more sustainable technologies and strategies for mining, metal extraction, and manufacturing, including novel methods of fertilizer production.

Saurabh Amin, in the Department of Civil and Environmental Engineering, focuses on the design and implementation of network inspection and control algorithms for improving the resilience of large-scale critical infrastructures, such as transportation systems and water and energy distribution networks, against cyber-physical security attacks and natural events.

Emilio Baglietto, in the Department of Nuclear Science and Engineering, uses computational modeling to characterize and predict the underlying heat-transfer processes in nuclear reactors, including turbulence modeling, unsteady flow phenomena, multiphase flow, and boiling.

Paul Blainey, the Karl Van Tassel (1925) Career Development Professor in the Department of Biological Engineering, integrates microfluidic, optical, and molecular tools for application in biology and medicine across a range of scales.

Kerri Cahoy, the Rockwell International Career Development Professor in the Department of Aeronautics and Astronautics, develops nanosatellites that demonstrate weather sensing using microwave radiometers and GPS radio occultation receivers, high data-rate laser communications with precision time transfer, and active optical imaging systems using MEMS deformable mirrors for exoplanet exploration applications.

Juejun Hu, in the Department of Materials Science and Engineering, focuses on novel materials and devices to exploit interactions of light with matter, with applications in on-chip sensing and spectroscopy, flexible and polymer photonics, and optics for solar energy.

Sertac Karaman, the Class of 1948 Career Development Professor in the Department of Aeronautics and Astronautics, studies robotics, control theory, and the application of probability theory, stochastic processes, and optimization for cyber-physical systems such as driverless cars and drones.

R. Scott Kemp, the Class of 1943 Career Development Professor in the Department of Nuclear Science and Engineering, combines physics, politics, and history to identify options for addressing nuclear weapons and energy. He investigates technical threats to nuclear-deterrence stability and the information theory of treaty verification; he is also developing technical tools for reconstructing the histories of secret nuclear-weapon programs.

Aleksander Mądry, in the Department of Electrical Engineering and Computer Science, investigates topics ranging from developing new algorithms using continuous optimization, to combining theoretical and empirical insights, to building a more principled and thorough understanding of key machine learning tools. A major theme of his research is rethinking machine learning from the perspective of security and robustness.

Frances Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering, performs research on nanostructures using transmission electron microscopes that allow researchers to see, in real-time, how structures form and develop in response to changes in temperature, environment, and other variables. Understanding crystal growth at the nanoscale is helpful in creating precisely controlled materials for applications in microelectronics and energy conversion and storage.

Daniel Sanchez, in the Department of Electrical Engineering and Computer Science, works on computer architecture and computer systems, with an emphasis on large-scale multi-core processors, scalable and efficient memory hierarchies, architectures with quality-of-service guarantees, and scalable runtimes and schedulers.

Themistoklis Sapsis, the Doherty Career Development Professor in the Department of Mechanical Engineering, develops analytical, computational, and data-driven methods for the probabilistic prediction and quantification of extreme events in high-dimensional nonlinear systems such as turbulent fluid flows and nonlinear mechanical systems.

Julie Shah, the Boeing Career Development Professor in the Department of Aeronautics and Astronautics, develops innovative computational models and algorithms expanding the use of human cognitive models for artificial intelligence. Her research has produced novel forms of human-machine teaming in manufacturing assembly lines, healthcare applications, transportation, and defense.

Hadley Sikes, the Esther and Harold E. Edgerton Career Development Professor in the Department of Chemical Engineering, employs biomolecular engineering and knowledge of reaction networks to detect epigenetic modifications that can guide cancer treatment, induce oxidant-specific perturbations in tumors for therapeutic benefit, and improve signaling reactions and assay formats used in medical diagnostics.

William Tisdale, the ARCO Career Development Professor in the Department of Chemical Engineering, works on energy transport in nanomaterials, nonlinear spectroscopy, and spectroscopic imaging to better understand and control the mechanisms by which excitons, free charges, heat, and reactive chemical species are converted to more useful forms of energy, and on leveraging this understanding to guide materials design and process optimization.

Virginia Vassilevska Williams, the Steven and Renee Finn Career Development Professor in the Department of Electrical Engineering and Computer Science, applies combinatorial and graph theoretic tools to develop efficient algorithms for matrix multiplication, shortest paths, and a variety of other fundamental problems. Her recent research is centered on proving tight relationships between seemingly different computational problems. She is also interested in computational social choice issues, such as making elections computationally resistant to manipulation.

Amos Winter, the Tata Career Development Professor in the Department of Mechanical Engineering, focuses on connections between mechanical design theory and user-centered product design to create simple, elegant technological solutions for applications in medical devices, water purification, agriculture, automotive, and other technologies used in highly constrained environments.

June 7, 2019 | More

MIT team places second in 2019 NASA BIG Idea Challenge

An MIT student team, including LGO ’20 Hans Nowak, took second place for its design of a multilevel greenhouse to be used on Mars in NASA’s 2019 Breakthrough, Innovative and Game-changing (BIG) Idea Challenge last month.

Each year, NASA holds the BIG Idea competition in its search for innovative and futuristic ideas. This year’s challenge invited universities across the United States to submit designs for a sustainable, cost-effective, and efficient method of supplying food to astronauts during future crewed explorations of Mars. Dartmouth College was awarded first place in this year’s closely contested challenge.

“This was definitely a full-team success,” says team leader Eric Hinterman, a graduate student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). The team had contributions from 10 undergraduates and graduate students from across MIT departments. Support and assistance were provided by four architects and designers in Italy. This project was completely voluntary; all 14 contributors share a similar passion for space exploration and enjoyed working on the challenge in their spare time.

The MIT team dubbed its design “BEAVER” (Biosphere Engineered Architecture for Viable Extraterrestrial Residence). “We designed our greenhouse to provide 100 percent of the food requirements for four active astronauts every day for two years,” explains Hinterman.

The ecologists and agriculture specialists on the MIT team identified eight types of crops to provide the calories, protein, carbohydrates, and oils and fats that astronauts would need; these included potatoes, rice, wheat, oats, and peanuts. The flexible menu suggested substitutes, depending on astronauts’ specific dietary requirements.

“Most space systems are metallic and very robotic,” Hinterman says. “It was fun working on something involving plants.”

Parameters provided by NASA — a power budget, dimensions necessary for transporting by rocket, the capacity to provide adequate sustenance — drove the shape and the overall design of the greenhouse.

Last October, the team held an initial brainstorming session and pitched project ideas. The iterative process continued until they reached their final design: a cylindrical growing space 11.2 meters in diameter and 13.4 meters tall after deployment.

An innovative design

The greenhouse would be packaged inside a rocket bound for Mars and, after landing, a waiting robot would move it to its site. Programmed with folding mechanisms, it would then expand horizontally and vertically and begin forming an ice shield around its exterior to protect plants and humans from the intense radiation on the Martian surface.

Two years later, when the orbits of Earth and Mars were again in optimal alignment for launching and landing, a crew would arrive on Mars, where they would complete the greenhouse setup and begin growing crops. “About every two years, the crew would leave and a new crew of four would arrive and continue to use the greenhouse,” explains Hinterman.

To maximize space, BEAVER employs a large spiral that moves around a central core within the cylinder. Seedlings are planted at the top and flow down the spiral as they grow. By the time they reach the bottom, the plants are ready for harvesting, and the crew enters at the ground floor to reap the potatoes and peanuts and grains. The planting trays are then moved to the top of the spiral, and the process begins again.

“A lot of engineering went into the spiral,” says Hinterman. “Most of it is done without any moving parts or mechanical systems, which makes it ideal for space applications. You don’t want a lot of moving parts or things that can break.”

The human factor

“One of the big issues with sending humans into space is that they will be confined to seeing the same people every day for a couple of years,” Hinterman explains. “They’ll be living in an enclosed environment with very little personal space.”

The greenhouse provides a pleasant area to ensure astronauts’ psychological well-being. On the top floor, just above the spiral, a windowed “mental relaxation area” overlooks the greenery. The ice shield admits natural light, and the crew can lounge on couches and enjoy the view of the Mars landscape. And rather than running pipes from the water tank at the top level down to the crops, Hinterman and his team designed a cascading waterfall …

May 24, 2019 | More

MIT team places first in U.S. Air Force virtual reality competition

When the United States Air Force put out a call for submissions for its first-ever Visionary Q-Prize competition in October 2018, a six-person team of three MIT students and three LGO alumni took up the challenge. Last month, they emerged as a first-place winner for their prototype of a virtual reality tool they called CoSMIC (Command, Sensing, and Mapping Information Center).

The challenge was hosted by the Air Force Research Labs Space Vehicles Directorate and the Wright Brothers Institute to encourage nontraditional sources with innovative products and ideas to engage with military customers to develop solutions for safe and secure operations in space.

April 12, 2019 | More

Sloan

For the first 200 or so years of the industrial era, technology and capitalism were sometimes seen as forces that helped economies flourish, to the detriment of the environment. Eventually, some feared, there wouldn’t be enough resources left to sustain humanity.

But Andrew McAfee sees a plot twist: Those same forces are now helping humans reduce their footprint on the planet. McAfee, a research scientist at MIT Sloan, lays out his optimistic case for how the “voracious appetite” of the industrial era led to technology that improves sustainability in his new book, “More from Less: The Surprising Story of How We Learned to Prosper Using Fewer Resources — and What Happens Next.”

January 21, 2020 | More

5 steps to ‘people-centered’ artificial intelligence

As companies double down on business initiatives built around technologies like predictive analytics, machine learning, and cognitive computing, there’s one element they ignore at their peril — humans.

That was the message from a pair of experts at a recent MIT Sloan Management Review webinar, “People-Centered Principles for AI Implementation.” As organizations push forward on their artificial intelligence …

January 7, 2020 | More

3 ways to reexamine the future digital workforce

A recent report from the MIT Work of the Future Task Force finds that companies are still in the “early stages of adoption” when it comes to incorporating new technology into their workflows, while a 2018 Pew Research Center study showed that 65-90% of surveyed people think human-held jobs will be replaced by robots and computers.

When and how future workplaces will ultimately change remain unanswered, but Daniel Huttenlocher, inaugural dean of the MIT Stephen A. Schwarzman College of Computing, has some ideas.

December 30, 2019 | More

The 5 greatest challenges to fighting climate change

Climate change: Most of the world agrees it’s a danger, but how do we conquer it? What’s holding us back? Christopher Knittel, professor of applied economics at the MIT Sloan School of Management, laid out five of the biggest challenges in a recent interview.

CO2 is a global pollutant that can’t be locally contained

“The first key feature of climate change that puts it at odds with past environmental issues is that it’s a global pollutant, rather than a local pollutant. [Whether] I release a ton of CO2 in Cambridge, Massachusetts, or in London, it does the same damage to the globe,” Knittel said. “Contrast that …

December 27, 2019 | More

See the future of global warming in less than one second

Global warming is a defining problem of our age. But how to solve it? MIT has an answer, thanks to the En-ROADS climate solutions simulator, a new online interface that simulates 100 years of energy, land, and climate data to identify solutions in less than a second.

December 20, 2019 | More

Big leap or small steps? Charting a path to a ‘future ready’ firm

Which is the better path to digital transformation — should your company take a giant, radical leap or a series of small, incremental steps?

Both approaches are effective for top-performing companies, although each entails different risks, according to a research brief recently published by the MIT Sloan Center for Information Systems Research.

The paper examined results of CISR’s Future Ready survey, which polled over 4,000 companies about their digital journeys, to assess which approach correlated with better financial performance.

December 16, 2019 | More

Platform report outlines threats, opportunities

As 2020 arrives, platforms are dominating the business world. The five most valuable companies in the world are platform businesses. Amazon processes 44% of all e-commerce sales. Facebook and Google control 84% of global online advertising. If your industry hasn’t already been rocked by platform transformation, there’s a good chance it will be eventually.

With growth comes new challenges for platform businesses, including regulatory constraints, growth pressures, a shift to enterprise markets, and technology disruptions. These threats, opportunities, and more were addressed at the 2019 Platform Strategy summit, an annual gathering of business and academic leaders.

A new report from the summit features insights from MIT researchers and a variety of speakers …

December 11, 2019 | More

The top 10 MIT Sloan news stories of 2019

 

Toxic employees can have a huge impact on workplace morale, productivity, and turnover, but identifying toxic people can be difficult. Here are red flags to look for. It’s okay if your career path resembles a game of Chutes and Ladders, Apple VP Kate Bergeron told students during a talk loaded with career hacks.

 

December 3, 2019 | More

AFL-CIO president: Engage workers as technologies evolve

As technology displaces workers and real wages emerge from a long period of stagnation, the future of work appears in flux. But AFL-CIO President Richard L. Trumka sees a way forward, detailed in a galvanizing speech on the future of work and labor Nov. 20 at MIT Sloan.

The AFL-CIO is the largest federation of unions in the United States, consisting of 55 national and international unions. It recently formed the AFL-CIO Commission on Work and Unions, designed to rethink the role of unions in the modern workforce.

Trumka offered a glimpse into that thinking.

November 25, 2019 | More

An 8-step guide for improving workplace processes

Most people know what it’s like to be overwhelmed at work: managing a chaotic to-do list and constant emails, developing a poor work/life balance, putting out fires, and responding to the loudest voice in the room.

“It’s easy to get caught up in a situation where you’re doing so much firefighting that you don’t ever have time to put out the fire permanently,” said Daniel Norton, EMBA ’19 and a co-founder of the software company LeanKit. “You don’t have time to make things better. All you’re doing is just getting up every day and trying to avoid disaster.”

This is a common scenario for knowledge-based workers. It’s difficult for workers to even acknowledge they are struggling, let alone find and fix the source of the problem. On the other hand, it is easy to …

November 8, 2019 | More

Engineering

Using artificial intelligence to enrich digital maps

A model invented by researchers at MIT and Qatar Computing Research Institute (QCRI) that uses satellite imagery to tag road features in digital maps could help improve GPS navigation.

Showing drivers more details about their routes can often help them navigate in unfamiliar locations. Lane counts, for instance, can enable a GPS system to warn drivers of diverging or merging lanes. Incorporating information about parking spots can help drivers plan ahead, while mapping bicycle lanes can help cyclists negotiate busy city streets. Providing updated information on road conditions can also improve planning for disaster relief.

But creating detailed maps is an expensive, time-consuming process done mostly by big companies, such as Google, which sends vehicles around with cameras strapped to their hoods to capture video and images of an area’s roads. Combining that with other data can create accurate, up-to-date maps. Because this process is expensive, however, some parts of the world are ignored.

A solution is to unleash machine-learning models on satellite images — which are easier to obtain and updated fairly regularly — to automatically tag road features. But roads can be occluded by, say, trees and buildings, making it a challenging task. In a paper being presented at the Association for the Advancement of Artificial Intelligence conference, the MIT and QCRI researchers describe “RoadTagger,” which uses a combination of neural network architectures to automatically predict the number of lanes and road types (residential or highway) behind obstructions.

In testing RoadTagger on occluded roads from digital maps of 20 U.S. cities, the model counted lane numbers with 77 percent accuracy and inferred road types with 93 percent accuracy. The researchers are also planning to enable RoadTagger to predict other features, such as parking spots and bike lanes.

“Most updated digital maps are from places that big companies care the most about. If you’re in places they don’t care about much, you’re at a disadvantage with respect to the quality of map,” says co-author Sam Madden, a professor in the Department of Electrical Engineering and Computer Science (EECS) and a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “Our goal is to automate the process of generating high-quality digital maps, so they can be available in any country.”

The paper’s co-authors are CSAIL graduate students Songtao He, Favyen Bastani, and Edward Park; EECS undergraduate student Satvat Jagwani; CSAIL professors Mohammad Alizadeh and Hari Balakrishnan; and QCRI researchers Sanjay Chawla, Sofiane Abbar, and Mohammad Amin Sadeghi.

Combining CNN and GNN

Qatar, where QCRI is based, is “not a priority for the large companies building digital maps,” Madden says. Yet, it’s constantly building new roads and improving old ones, especially in preparation for hosting the 2022 FIFA World Cup.

“While visiting Qatar, we’ve had experiences where our Uber driver can’t figure out how to get where he’s going, because the map is so off,” Madden says. “If navigation apps don’t have the right information, for things such as lane merging, this could be frustrating or worse.”

RoadTagger relies on a novel combination of a convolutional neural network (CNN) — commonly used for image-processing tasks — and a graph neural network (GNN). GNNs model relationships between connected nodes in a graph and have become popular for analyzing things like social networks and molecular dynamics. The model is “end-to-end,” meaning it’s fed only raw data and automatically produces output, without human intervention.

The CNN takes as input raw satellite images of target roads. The GNN breaks the road into roughly 20-meter segments, or “tiles.” Each tile is a separate graph node, connected by lines along the road. For each node, the CNN extracts road features and shares that information with its immediate neighbors. Road information propagates along the whole graph, with each node receiving some information about road attributes in every other node. If a certain tile is occluded in an image, RoadTagger uses information from all tiles along the road to predict what’s behind the occlusion.
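
The authors’ code is not reproduced here; the sketch below only illustrates the general idea of a per-tile CNN encoder feeding a simple message-passing step along the road graph. The tile size, feature dimension, number of rounds, and lane-count range are assumptions, not the paper’s configuration.

    # Schematic RoadTagger-style model: per-tile CNN features plus neighbor
    # averaging along the road graph. All sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class TileEncoder(nn.Module):
        """CNN mapping one satellite tile (3 x 64 x 64, assumed) to a feature vector."""
        def __init__(self, dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))

        def forward(self, tiles):              # tiles: (num_tiles, 3, 64, 64)
            return self.net(tiles)             # (num_tiles, dim)

    class RoadGNN(nn.Module):
        """Mixes each tile's features with its neighbors' for a few rounds,
        then predicts a lane count (1-6 lanes assumed) for every tile."""
        def __init__(self, dim=64, rounds=4, max_lanes=6):
            super().__init__()
            self.update = nn.Linear(2 * dim, dim)
            self.head = nn.Linear(dim, max_lanes)
            self.rounds = rounds

        def forward(self, feats, adj):         # adj: (num_tiles, num_tiles) 0/1
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
            for _ in range(self.rounds):
                neighbor_mean = adj @ feats / deg
                feats = torch.relu(self.update(torch.cat([feats, neighbor_mean], dim=1)))
            return self.head(feats)            # per-tile lane-count logits

    # Toy usage: a straight road of five consecutive tiles.
    tiles = torch.randn(5, 3, 64, 64)
    adj = torch.zeros(5, 5)
    for i in range(4):
        adj[i, i + 1] = adj[i + 1, i] = 1.0
    logits = RoadGNN()(TileEncoder()(tiles), adj)
    print(logits.shape)                        # torch.Size([5, 6])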

This combined architecture represents a more human-like intuition, the researchers say. Say part of a four-lane road is occluded by trees, so certain tiles show only two lanes. Humans can easily surmise that a couple lanes are hidden behind the trees. Traditional machine-learning models — say, just a CNN — extract features only of individual tiles and most likely predict the occluded tile is a two-lane road.

“Humans can use information from adjacent tiles to guess the number of lanes in the occluded tiles, but networks can’t do that,” He says. “Our approach tries to mimic the natural behavior of humans, where we capture local information from the CNN and global information from the GNN to make better predictions.”

Learning weights

To train and test RoadTagger, the researchers used a real-world map dataset, called OpenStreetMap, which lets users edit and curate digital maps around the globe. From that dataset, they collected confirmed road attributes from 688 square kilometers of maps of 20 U.S. cities — including Boston, Chicago, Washington, and Seattle. Then, they gathered the corresponding satellite images from a Google Maps dataset.

In training, RoadTagger learns weights — which assign varying degrees of importance to features and node connections — of the CNN and GNN. The CNN extracts features from pixel patterns of tiles and the GNN propagates the learned features along the graph. From randomly selected subgraphs of the road, the system learns to predict the road features at each tile. In doing so, it automatically learns which image features are useful and how to propagate those features along the graph. For instance, if a target tile has unclear lane markings, but its neighbor tile has four lanes with clear lane markings and shares the same road width, then the target tile is likely to also have four lanes. In this case, the model automatically learns that the road width is a useful image feature, so if two adjacent tiles share the same road width, they’re likely to have the same lane count.

Given a road not seen in training from OpenStreetMap, the model breaks the road into tiles and uses its learned weights to make predictions. Tasked with predicting a number of lanes in an occluded tile, the model notes that neighboring tiles have matching pixel patterns and, therefore, a high likelihood to share information. So, if those tiles have four lanes, the occluded tile must also have four.

In another result, RoadTagger accurately predicted lane numbers in a dataset of synthesized, highly challenging road disruptions. As one example, an overpass with two lanes covered a few tiles of a target road with four lanes. The model detected mismatched pixel patterns of the overpass, so it ignored the two lanes over the covered tiles, accurately predicting four lanes were underneath.

The researchers hope to use RoadTagger to help humans rapidly validate and approve continuous modifications to infrastructure in datasets such as OpenStreetMap, where many maps don’t contain lane counts or other details. A specific area of interest is Thailand, Bastani says, where roads are constantly changing, but there are few if any updates in the dataset.

“Roads that were once labeled as dirt roads have been paved over so are better to drive on, and some intersections have been completely built over. There are changes every year, but digital maps are out of date,” he says. “We want to constantly update such road attributes based on the most recent imagery.”

January 23, 2020 | More

A new approach to making airplane parts, minus the massive infrastructure

A modern airplane’s fuselage is made from multiple sheets of different composite materials, like so many layers in a phyllo-dough pastry. Once these layers are stacked and molded into the shape of a fuselage, the structures are wheeled into warehouse-sized ovens and autoclaves, where the layers fuse together to form a resilient, aerodynamic shell.

Now MIT engineers have developed a method to produce aerospace-grade composites without the enormous ovens and pressure vessels. The technique may help to speed up the manufacturing of airplanes and other large, high-performance composite structures, such as blades for wind turbines.

The researchers detail their new method in a paper published today in the journal Advanced Materials Interfaces.

“If you’re making a primary structure like a fuselage or wing, you need to build a pressure vessel, or autoclave, the size of a two- or three-story building, which itself requires time and money to pressurize,” says Brian Wardle, professor of aeronautics and astronautics at MIT. “These things are massive pieces of infrastructure. Now we can make primary structure materials without autoclave pressure, so we can get rid of all that infrastructure.”

Wardle’s co-authors on the paper are lead author and MIT postdoc Jeonyoon Lee, and Seth Kessler of Metis Design Corporation, an aerospace structural health monitoring company based in Boston.

Out of the oven, into a blanket

In 2015, Lee led the team, along with another member of Wardle’s lab, in creating a method to make aerospace-grade composites without requiring an oven to fuse the materials together. Instead of placing layers of material inside an oven to cure, the researchers essentially wrapped them in an ultrathin film of carbon nanotubes (CNTs). When they applied an electric current to the film, the CNTs, like a nanoscale electric blanket, quickly generated heat, causing the materials within to cure and fuse together.

With this out-of-oven, or OoO, technique, the team was able to produce composites as strong as the materials made in conventional airplane manufacturing ovens, using only 1 percent of the energy.
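
The “nanoscale electric blanket” effect is ordinary resistive (Joule) heating, which can be estimated from P = V²/R; every number in the short calculation below is an illustrative assumption, not a measurement from the study.

    # Back-of-the-envelope Joule-heating estimate for a resistive film heater.
    # All numbers are illustrative assumptions, not values from the work.
    sheet_resistance = 20.0   # ohms per square, assumed for a thin CNT film
    squares = 5.0             # heated strip length divided by its width
    resistance = sheet_resistance * squares       # total film resistance, ohms
    voltage = 30.0            # volts applied across the film, assumed
    power = voltage**2 / resistance               # watts delivered into the laminate
    cure_time_h = 2.0         # assumed cure schedule, hours
    energy_kwh = power * cure_time_h / 1000.0
    print(f"{power:.0f} W for {cure_time_h} h, about {energy_kwh:.2f} kWh per cure")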

The researchers next looked for ways to make high-performance composites without the use of large, high-pressure autoclaves — building-sized vessels that generate high enough pressures to press materials together, squeezing out any voids, or air pockets, at their interface.

“There’s microscopic surface roughness on each ply of a material, and when you put two plies together, air gets trapped between the rough areas, which is the primary source of voids and weakness in a composite,” Wardle says. “An autoclave can push those voids to the edges and get rid of them.”

Researchers including Wardle’s group have explored “out-of-autoclave,” or OoA, techniques to manufacture composites without using the huge machines. But most of these techniques have produced composites where nearly 1 percent of the material contains voids, which can compromise a material’s strength and lifetime. In comparison, aerospace-grade composites made in autoclaves are of such high quality that any voids they contain are negligible and not easily measured.

“The problem with these OoA approaches is also that the materials have been specially formulated, and none are qualified for primary structures such as wings and fuselages,” Wardle says. “They’re making some inroads in secondary structures, such as flaps and doors, but they still get voids.”

Straw pressure

Part of Wardle’s work focuses on developing nanoporous networks — ultrathin films made from aligned, microscopic materials such as carbon nanotubes that can be engineered with exceptional properties, including color, strength, and electrical capacity. The researchers wondered whether these nanoporous films could be used in place of giant autoclaves to squeeze out voids between two material layers, as unlikely as that may seem.

A thin film of carbon nanotubes is somewhat like a dense forest of trees, and the spaces between the trees can function like thin nanoscale tubes, or capillaries. A capillary such as a straw can generate pressure based on its geometry and its surface energy, or the material’s ability to attract liquids or other materials.

The researchers proposed that if a thin film of carbon nanotubes were sandwiched between two materials, then, as the materials were heated and softened, the capillaries between the carbon nanotubes should have a surface energy and geometry such that they would draw the materials in toward each other, rather than leaving a void between them. Lee calculated that the capillary pressure should be larger than the pressure applied by the autoclaves.
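
For intuition, this style of estimate can be reproduced with the Young-Laplace relation ΔP = 2γ·cosθ / r; the surface tension, contact angle, capillary radius, and autoclave pressure below are assumed values for illustration, not figures from the paper.

    # Rough Young-Laplace estimate of capillary pressure between aligned CNTs.
    # All numbers below are illustrative assumptions, not values from the paper.
    import math

    surface_tension = 0.035    # N/m, assumed for a heated, softened resin
    contact_angle_deg = 0.0    # assumed near-perfect wetting of the CNTs
    capillary_radius = 50e-9   # m, assumed effective spacing between nanotubes

    capillary_pressure = (2 * surface_tension
                          * math.cos(math.radians(contact_angle_deg))
                          / capillary_radius)
    autoclave_pressure = 0.7e6  # Pa, a typical aerospace autoclave setting (~7 bar)

    print(f"capillary ~ {capillary_pressure/1e6:.1f} MPa "
          f"vs autoclave ~ {autoclave_pressure/1e6:.1f} MPa")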

The researchers tested their idea in the lab by growing films of vertically aligned carbon nanotubes using a technique they previously developed, then laying the films between layers of materials that are typically used in the autoclave-based manufacturing of primary aircraft structures. They wrapped the layers in a second film of carbon nanotubes, to which they applied an electric current to heat it up. They observed that as the materials heated and softened in response, they were pulled into the capillaries of the intermediate CNT film.

The resulting composite lacked voids, similar to aerospace-grade composites that are produced in an autoclave. The researchers subjected the composites to strength tests, attempting to push the layers apart, the idea being that voids, if present, would allow the layers to separate more easily.

“In these tests, we found that our out-of-autoclave composite was just as strong as the gold-standard autoclave process composite used for primary aerospace structures,” Wardle says.

The team will next look for ways to scale up the pressure-generating CNT film. In their experiments, they worked with samples measuring several centimeters wide — large enough to demonstrate that nanoporous networks can pressurize materials and prevent voids from forming. To make this process viable for manufacturing entire wings and fuselages, researchers will have to find ways to manufacture CNT and other nanoporous films at a much larger scale.

“There are ways to make really large blankets of this stuff, and there’s continuous production of sheets, yarns, and rolls of material that can be incorporated in the process,” Wardle says.

He plans also to explore different formulations of nanoporous films, engineering capillaries of varying surface energies and geometries, to be able to pressurize and bond other high-performance materials.

“Now we have this new material solution that can provide on-demand pressure where you need it,” Wardle says. “Beyond airplanes, most of the composite production in the world is composite pipes, for water, gas, oil, all the things that go in and out of our lives. This could make all those things, without the oven and autoclave infrastructure.”

This research was supported, in part, by Airbus, ANSYS, Embraer, Lockheed Martin, Saab AB, Saertex, and Teijin Carbon America through MIT’s Nano-Engineered Composite aerospace Structures (NECST) Consortium.

January 13, 2020 | More

Making real a biotechnology dream: nitrogen-fixing cereal crops

As food demand rises due to growing and changing populations around the world, increasing crop production has been a vital target for agriculture and food systems researchers who are working to ensure there is enough food to meet global need in the coming years. One MIT research group mobilizing around this challenge is the Voigt lab in the Department of Biological Engineering, led by Christopher Voigt, the Daniel I.C. Wang Professor of Advanced Biotechnology at MIT.

For the past four years, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has funded Voigt with two J-WAFS Seed Grants. With this support, Voigt and his team are working on a significant and longstanding research challenge: transforming cereal crops so they are able to fix their own nitrogen.

Chemical fertilizer: how it helps and hurts

Nitrogen is a key nutrient that enables plants to grow. Plants like legumes are able to provide their own through a symbiotic relationship with bacteria that are capable of fixing nitrogen from the air and putting it into the soil, which is then drawn up by the plants through their roots. Other types of crops — including major food crops such as corn, wheat, and rice — typically rely on added fertilizers for nitrogen, including manure, compost, and chemical fertilizers. Without these, the plants that grow are smaller and produce less grain.

Over 3.5 billion people today depend on chemical fertilizers for their food. Eighty percent of chemical nitrogen fertilizers today are made using the Haber-Bosch process, which involves transforming nitrogen gas into ammonia. While nitrogen fertilizer has boosted agriculture production in the last century, this has come with some significant costs. First, the Haber-Bosch process itself is very energy- and fossil fuel-intensive, making it unsustainable in the face of a rapidly changing climate. Second, using too much chemical fertilizer results in nitrogen pollution. Fertilizer runoff pollutes rivers and oceans, resulting in algae blooms that suffocate marine life. Cleaning up this pollution and paying for the public health and environmental damage costs the United States $157 billion annually. Third, when it comes to chemical fertilizers, there are problems with equity and access. These fertilizers are made in the northern hemisphere by major industrialized nations, where potash, a main ingredient, is abundant. However, transportation costs are high, especially to countries in the southern hemisphere. So, for farmers in poorer regions, this barrier results in lower crop yield.

These environmental and societal challenges pose large problems, yet farmers still need to apply nitrogen to maintain the necessary agriculture productivity to meet the world’s food needs, especially as population and climate change stress the world’s food supplies. So, fertilizers are and will continue to be a critical tool.

But, might there be another way?

The bacterial compatibility of chloroplasts and mitochondria

This is the question that drives researchers in the Voigt lab, as they work to develop nitrogen-fixing cereal grains. The strategy they have developed is to target the specific genes in the nitrogen-fixing bacteria that operate symbiotically with legumes, called the nif genes. These genes cause the expression of the protein structures (nitrogenase clusters) that fix nitrogen from the air. If these genes could be successfully transferred to and expressed in cereal crops, chemical fertilizers would no longer be needed to supply nitrogen, as these crops would be able to obtain it themselves.

This genetic engineering work has long been regarded as a major technical challenge, however. The nif pathway is very large and involves many different genes. Transferring any large gene cluster is itself a difficult task, but there is added complexity in this particular pathway. The nif genes in microbes are controlled by a precise system of interconnected genetic parts. In order to successfully transfer the pathway’s nitrogen-fixing capabilities, researchers not only have to transfer the genes themselves, but also replicate the cellular components responsible for controlling the pathway.

This leads into another challenge. The microbes responsible for nitrogen fixation in legumes are bacteria (prokaryotes), and, as explained by Eszter Majer, a postdoc in the Voigt lab who has been working on the project for the past two years, “the gene expression is completely different in plants, which are eukaryotes.” For example, prokaryotes organize their genes into operons, a genetic organization system that does not exist in eukaryotes such as the tobacco leaves the Voigt lab is using in its experiments. Reengineering the nif pathway in a eukaryote is tantamount to a complete system overhaul.

The Voigt lab has found a workaround: Rather than target the entire plant cell, they are targeting organelles within the cell — specifically, the chloroplasts and the mitochondria. Mitochondria and chloroplasts both have ancient bacterial origins and once lived independently outside of eukaryotic cells as prokaryotes. Millions of years ago, they were incorporated into the eukaryotic system as organelles. They are unique in that they have their own genetic data and have also maintained many similarities to modern-day prokaryotes. As a result, they are excellent candidates for nitrogenase transfer. Majer explains, “It’s much easier to transfer from a prokaryote to a prokaryote-like system than reengineer the whole pathway and try to transfer to a eukaryote.”

Beyond gene structure, these organelles have additional attributes that make them suitable environments for nitrogenase clusters to function. Nitrogenase requires a lot of energy to function, and both chloroplasts and mitochondria already produce high amounts of energy — in the form of ATP — for the cell. Nitrogenase is also very sensitive to oxygen and will not function if there is too much of it in its environment. However, chloroplasts at night and mitochondria in plants have low oxygen levels, making them ideal locations for the nitrogenase protein to operate.

An international team of experts

While the team devised an approach for transforming eukaryotic cells, their project still involved highly technical biological engineering challenges. Thanks to the J-WAFS grants, the Voigt lab has been able to collaborate with two specialists at overseas universities to obtain critical expertise.

One was Luis Rubio, an associate professor focusing on the biochemistry of nitrogen fixation at the Polytechnic University of Madrid, Spain. Rubio is an expert in nitrogenase and nitrogen-inspired chemistry. Transforming mitochondrial DNA is a challenging process, so the team designed a nitrogenase gene delivery system using yeast. Yeast is an easy eukaryotic organism to engineer and can be used to target the mitochondria. The team inserted the nitrogenase genes into the yeast nuclei, which were then targeted to the mitochondria using peptide fusions. This research resulted in the first eukaryotic organism to demonstrate the formation of nitrogenase structural proteins.

The Voigt lab also collaborated with Ralph Bock, a chloroplast expert from the Max Planck Institute of Molecular Plant Physiology in Germany. He and the Voigt team have made great strides toward the goal of nitrogen-fixing cereal crops; the details of their recent accomplishments advancing the field of crop engineering and furthering the nitrogen-fixing work will be published in the coming months.

Continuing in pursuit of the dream

The Voigt lab, with the support of J-WAFS and the invaluable international collaboration that has resulted, was able to obtain groundbreaking results, moving us closer to fertilizer independence through nitrogen-fixing cereals. They made headway in targeting nitrogenase to mitochondria and were able to express a complete NifDK tetramer — a key protein in the nitrogenase cluster — in yeast mitochondria. Despite these milestones, more work is yet to be done.

“The Voigt lab is invested in moving this research forward in order to get ever closer to the dream of creating nitrogen-fixing cereal crops,” says Chris Voigt. With these milestones under their belt, these researchers have made great advances, and will continue to push toward the realization of this transformative vision, one that could revolutionize cereal production globally.

January 10, 2020 | More

Tool predicts how fast code will run on a chip

MIT researchers have invented a machine-learning tool that predicts how fast computer chips will execute code from various applications.

To get code to run as fast as possible, developers and compilers — programs that translate programming languages into machine-readable code — typically use performance models that run the code through a simulation of given chip architectures.

Compilers use that information to automatically optimize code, and developers use it to tackle performance bottlenecks on the microprocessors that will run it. But performance models for machine code are handwritten by a relatively small group of experts and are not properly validated. As a consequence, the simulated performance measurements often deviate from real-life results.

In a series of conference papers, the researchers describe a novel machine-learning pipeline that automates this process, making it easier, faster, and more accurate. In a paper presented at the International Conference on Machine Learning in June, the researchers presented Ithemal, a neural-network model that trains on labeled data in the form of “basic blocks” — fundamental snippets of computing instructions — to automatically predict how long it takes a given chip to execute previously unseen basic blocks. Results suggest Ithemal performs far more accurately than traditional hand-tuned models.

Then, at the November IEEE International Symposium on Workload Characterization, the researchers presented a benchmark suite of basic blocks from a variety of domains, including machine learning, compilers, cryptography, and graphics, that can be used to validate performance models. They pooled more than 300,000 of the profiled blocks into an open-source dataset called BHive. During their evaluations, Ithemal predicted how fast Intel chips would run code even better than a performance model built by Intel itself.

Ultimately, developers and compilers can use the tool to generate code that runs faster and more efficiently on an ever-growing number of diverse and “black box” chip designs. “Modern computer processors are opaque, horrendously complicated, and difficult to understand. It is also incredibly challenging to write computer code that executes as fast as possible for these processors,” says co-author Michael Carbin, an assistant professor in the Department of Electrical Engineering and Computer Science (EECS) and a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “This tool is a big step forward toward fully modeling the performance of these chips for improved efficiency.”

Most recently, in a paper presented at the NeurIPS conference in December, the team proposed a new technique to automatically generate compiler optimizations.  Specifically, they automatically generate an algorithm, called Vemal, that converts certain code into vectors, which can be used for parallel computing. Vemal outperforms hand-crafted vectorization algorithms used in the LLVM compiler — a popular compiler used in the industry.

Learning from data

Designing performance models by hand can be “a black art,” Carbin says. Intel provides extensive documentation of more than 3,000 pages describing its chips’ architectures. But there currently exists only a small group of experts who will build performance models that simulate the execution of code on those architectures.

“Intel’s documents are neither error-free nor complete, and Intel will omit certain things, because it’s proprietary,” says co-author Charith Mendis, an MIT graduate student. “However, when you use data, you don’t need to know the documentation. If there’s something hidden you can learn it directly from the data.”

To do so, the researchers clocked the average number of cycles a given microprocessor takes to compute basic block instructions — basically, the sequence of boot-up, execute, and shut down — without human intervention. Automating the process enables rapid profiling of hundreds of thousands or millions of blocks.

Domain-specific architectures

In training, the Ithemal model analyzes millions of automatically profiled basic blocks to learn exactly how different chip architectures will execute computation. Importantly, Ithemal takes raw text as input and does not require manually adding features to the input data. In testing, Ithemal can be fed previously unseen basic blocks and a given chip, and will generate a single number indicating how fast the chip will execute that code.
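
The released Ithemal model is a hierarchical recurrent network; the much-simplified sketch below only illustrates the core idea of regressing a cycle count directly from the raw tokens of a basic block. The vocabulary size, dimensions, and toy targets are assumptions, not the actual system.

    # Simplified sketch of an Ithemal-style throughput predictor: embed the
    # tokens of a basic block, run an LSTM, and regress the cycle count.
    import torch
    import torch.nn as nn

    class ThroughputPredictor(nn.Module):
        def __init__(self, vocab_size=1000, dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.lstm = nn.LSTM(dim, dim, batch_first=True)
            self.head = nn.Linear(dim, 1)

        def forward(self, token_ids):              # (batch, seq_len) token ids
            _, (h, _) = self.lstm(self.embed(token_ids))
            return self.head(h[-1]).squeeze(-1)    # predicted cycles per block

    model = ThroughputPredictor()
    tokens = torch.randint(0, 1000, (4, 12))       # four toy blocks, 12 tokens each
    pred_cycles = model(tokens)
    # Training would minimize the error against measured cycle counts:
    loss = nn.functional.mse_loss(pred_cycles, torch.tensor([10.0, 7.0, 22.0, 5.0]))
    loss.backward()
    print(pred_cycles.shape)                       # torch.Size([4])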

The researchers found Ithemal cut error rates in accuracy — meaning the difference between the predicted speed versus real-world speed — by 50 percent over traditional hand-crafted models. Further, in their next paper, they showed that Ithemal’s error rate was 10 percent, while the Intel performance-prediction model’s error rate was 20 percent on a variety of basic blocks across multiple different domains.

The tool now makes it easier to quickly learn performance speeds for any new chip architectures, Mendis says. For instance, domain-specific architectures, such as Google’s new Tensor Processing Unit used specifically for neural networks, are now being built but aren’t widely understood. “If you want to train a model on some new architecture, you just collect more data from that architecture, run it through our profiler, use that information to train Ithemal, and now you have a model that predicts performance,” Mendis says.

Next, the researchers are studying methods to make models interpretable. Much of machine learning is a black box, so it’s not really clear why a particular model made its predictions. “Our model is saying it takes a processor, say, 10 cycles to execute a basic block. Now, we’re trying to figure out why,” Carbin says. “That’s a fine level of granularity that would be amazing for these types of tools.”

They also hope to use Ithemal to enhance the performance of Vemal even further and achieve better performance automatically.

January 6, 2020 | More

Top MIT research stories of 2019

With a new year just begun, we take a moment to look back at the most popular articles of 2019 reflecting innovations, breakthroughs, and new insights from the MIT community. The following 10 research-related stories published in the previous 12 months received top views on MIT News. A selection of additional top news that you might have missed follows.

10. We’ve seen a black hole. An international team of astronomers, including scientists from MIT’s Haystack Observatory, announced the first direct images of a black hole in April. They accomplished this remarkable feat by coordinating the power of eight major radio observatories on four continents, to work together as a virtual, Earth-sized telescope.

9. The kilo is dead. Long live the kilo! On World Metrology Day, MIT Professor Wolfgang Ketterle delivered a talk on scientists’ new definition of the kilogram and the techniques for its measurement. As of May 20, a kilo is now defined by fixing the numerical value of a fundamental constant of nature known as the Planck constant.

8. A new record for blackest black. MIT engineers led by Professor Brian Wardle cooked up a material that is 10 times blacker than anything previously reported. The material is made from carbon nanotubes grown on chlorine-etched aluminum foil and captures at least 99.995 percent of incoming light. The material was featured as part of an exhibit at the New York Stock Exchange that was conceived by Diemut Strebe, MIT Center for Art, Science, and Technology artist-in-residence, in collaboration with Wardle and his lab.

7. Further evidence that Einstein was right. Physicists from MIT and elsewhere studied the ringing of an infant black hole, and found that the pattern of this ringing accurately predicts the black hole’s mass and spin — more evidence that Albert Einstein’s general theory of relativity is correct.

6. Understanding infections and autism. MIT and Harvard Medical School researchers uncovered a cellular mechanism that may explain why some children with autism experience a temporary reduction in behavioral symptoms when they have a fever.

5. A step toward pain-free diabetes treatments. An MIT-led research team developed a drug capsule that could be used to deliver oral doses of insulin, potentially replacing the injections that people with type 1 diabetes have to give themselves every day.

4. Da Vinci’s design holds up. Some 500 years after his death, MIT engineers and architects tested a design by Leonardo da Vinci for what would have been the world’s longest bridge span of its time. Their proof of the bridge’s feasibility sheds light on what ambitious construction projects might have been possible using only the materials and methods of the early Renaissance.

3. A novel kind of airplane wing. MIT and NASA engineers built and tested a radically new kind of airplane wing, assembled from hundreds of tiny identical pieces. The wing can change shape to control the plane’s flight, and, according to the researchers, could provide a significant boost in aircraft production, flight, and maintenance efficiency.

2. Simple programming for everyone. MIT researchers created a programming system with artificial intelligence that can easily be used by novices and experts alike. Users can create models and algorithms with the system, “Gen,” without having to deal with equations or handwrite high-performance code; experts can also use it to write sophisticated models and inference algorithms that were previously infeasible.

1. A new way to remove carbon dioxide from air. MIT researchers developed a system that can remove carbon dioxide from a stream of air at virtually any concentration level. The new method is significantly less energy-intensive and expensive than existing processes, and could provide a significant tool in the battle against climate change.

In case you missed it…

Additional top research stories of 2019 included a study finding better sleep habits lead to better college grades; a meta-study on the efficacy of educational technology; findings that science blooms after star researchers die; a system for converting the molecular structures of proteins into musical passages; and the answer to life, the universe, and everything.

January 3, 2020 | More

Bose grants for 2019 reward bold ideas across disciplines

Now in its seventh year, the Professor Amar G. Bose Research Grants support visionary projects that represent intellectual curiosity and a pioneering spirit. Three MIT faculty members have each been awarded one of these prestigious grants for 2019 to pursue diverse questions in the humanities, biology, and engineering.

At a ceremony hosted by MIT President L. Rafael Reif on Nov. 25 and attended by past awardees, Provost Martin Schmidt, the Ray and Maria Stata Professor of Electrical Engineering and Computer Science, formally announced this year’s Amar G. Bose Research Fellows: Sandy Alexandre, Mary Gehring, and Kristala L.J. Prather.

The fellowships are named for the late Amar G. Bose ’51, SM ’52, ScD ’56, a longtime MIT faculty member and the founder of the Bose Corporation. Speaking at the event, President Reif expressed appreciation for the Bose Fellowships, which enable highly creative and unusual research in areas that can be hard to fund through traditional means. “We are tremendously grateful to the Bose family for providing the support that allows bold and curious thinkers at MIT to dream big, challenge themselves, and explore.”

Judith Bose, widow of Amar’s son, Vanu ’87, SM ’94, PhD ’99, congratulated the fellows on behalf of the Bose family. “We talk a lot at this event about the power of a great innovative idea, but I think it was a personal mission of Dr. Bose to nurture the ability, in each individual that he met along the way, to follow through — not just to have the great idea but the agency that comes with being able to pursue your idea, follow it through, and actually see where it leads,” Bose said. “And Vanu was the same way. That care that was epitomized by Dr. Bose not just in the idea itself, but in the personal investment, agency, and nurturing necessary to bring the idea to life — that care is a large part of what makes true change in the world.”

The relationship between literature and engineering

Many technological innovations have resulted from the influence of literature, one of the most notable being the World Wide Web. According to many sources, Sir Tim Berners-Lee, the web’s inventor, found inspiration from a short story by Arthur C. Clarke titled “Dial F for Frankenstein.” Science fiction has presaged a number of real-life technological innovations, including the defibrillator, noted in Mary Shelley’s “Frankenstein;” the submarine, described in Jules Verne’s “20,000 Leagues Under the Sea;” and earbuds, described in Ray Bradbury’s “Fahrenheit 451.” But the data about literature’s influence on STEM innovations are spotty, and these one-to-one relationships are not always clear-cut.

Sandy Alexandre, associate professor of literature, intends to change that by creating a large-scale database of the imaginary inventions found in literature. Alexandre’s project will enact the step-by-step mechanics of STEM innovation via one of its oft-unsung sources: literature. “To deny or sever the ties that bind STEM and literature is to suggest — rather disingenuously — that the ideas for many of the STEM devices that we know and love miraculously just came out of nowhere or from an elsewhere where literature isn’t considered relevant or at all,” she says.

During the first phase of her work, Alexandre will collaborate with students to enter into the database the imaginary inventions as they are described verbatim in a selection of books and other texts that fall under the category of speculative fiction — a category that includes but is not limited to the subgenres of fantasy, Afrofuturism, and science fiction. This first phase will, of course, require that students carefully read these texts in general, but also read for these imaginary inventions more specifically. Additionally, students with drawing skills will be tasked with interpreting the descriptions by illustrating them as two-dimensional images.

From this vast inventory of innovations, Alexandre, in consultation with students involved in the project, will decide on a short list of inventions that meet five criteria: they must be feasible, ethical, worthwhile, useful, and necessary. This vetting process, which constitutes the second phase of the project, is guided by a very important question: what can creating and thinking with a vast database of speculative fiction’s imaginary inventions teach us about what kinds of ideas we should (and shouldn’t) attempt to make into a reality? For the third and final phase, Alexandre will convene a team to build a real-life prototype of one of the imaginary inventions. She envisions this prototype being placed on exhibit at the MIT Museum.

The Bose research grant, Alexandre says, will allow her to take this project from a thought experiment to a lab experiment. “This project aims to ensure that literature no longer play an overlooked role in STEM innovations. Therefore, the STEM innovation, which will be the culminating prototype of this research project, will cite a work of literature as the main source of information used in its invention.”

Nature’s role in chemical production

Kristala L.J. Prather ’94, the Arthur D. Little Professor of Chemical Engineering, has focused on using biological systems for chemical production during the 15 years she’s been at the Institute. Biology as a medium for chemical synthesis has been successfully exploited to commercially produce molecules for uses that range from food to pharmaceuticals — ethanol is a good example. However, scientists have been trying to produce a range of other molecules this way and have faced challenges, both with too little material being produced and with a lack of defined steps for making a specific compound.

Prather’s research is rooted in the fact that there are a number of naturally (and unnaturally) occurring chemical compounds in the environment, and cells have evolved to be able to consume them. These cells have evolved or developed a protein that will sense a compound’s presence — a biosensor — and in response will make other proteins that help the cells utilize that compound for their benefit.

“We know biology can do this,” Prather says, “so if we can put together a sufficiently diverse set of microorganisms, can we just let nature make these regulatory molecules for anything that we want to be able to sense or detect?” Her hypothesis is that if her team exposes cells to a new compound for a long enough period of time, the cells will evolve the ability to either utilize that carbon source or develop an ability to respond to it. If Prather and her team can then identify the protein that’s now recognizing what that new compound is, they can isolate it and use it to improve the production of that compound in other systems. “The idea is to let nature evolve specificity for particular molecules that we’re interested in,” she adds.

Prather’s lab has been working with biosensors for some time, but her team has been limited to sensors that are already well characterized and that were readily available. She’s interested in how her team can gain access to a wider range of the sensors she knows nature has available by incrementally exposing a more comprehensive set of microorganisms to new compounds.

“To accelerate the transformation of the chemical industry, we must find a way to create better biological catalysts and to create new tools when the existing ones are insufficient,” Prather says. “I am grateful to the Bose Fellowship Committee for allowing me to explore this novel idea.”

Prather’s findings as a result of this project hold the possibility of broad impacts in the field of metabolic engineering, including the development of microbial systems that can be engineered to enhance degradation of both toxic and nontoxic waste.

Adopting orphan crops to adapt to climate change

In the context of increased environmental pressure and competing land uses, meeting global food security needs is a pressing challenge. Although yield gains in staple grains such as rice, wheat, and corn have been high over the last 50 years, these have been accompanied by a homogenization of the global food supply; only 50 crops provide 90 percent of global food needs.

However, there are at least 3,000 plants that can be grown and consumed by humans, and many of these species thrive in marginal soils, at high temperatures, and with little rainfall. These “orphan” crops are important food sources for farmers in less developed countries but have been the subject of little research.

Mary Gehring, associate professor of biology at MIT, seeks to bring orphan crops into the molecular age through epigenetic engineering. She is working to promote hybridization, increase genetic diversity, and reveal desired traits for two orphan seed crops: an oilseed crop, Camelina sativa (false flax), and a high-protein legume, Cajanus cajan (pigeon pea).

C. sativa, which produces seeds with potential for uses in food and biofuel applications, can grow on land with low rainfall, requires minimal fertilizer inputs, and is resistant to several common plant pathogens. Until the mid-20th century, C. sativa was widely grown in Europe but was supplanted by canola, with a resulting loss of genetic diversity. Gehring proposes to recover this genetic diversity by creating and characterizing hybrids between C. sativa and wild relatives that have increased genetic diversity.

“To find the best cultivars of orphan crops that will withstand ever increasing environmental insults requires a deeper understanding of the diversity present within these species. We need to expand the plants we rely on for our food supply if we want to continue to thrive in the future,” says Gehring. “Studying orphan crops represents a significant step in that direction. The Bose grant will allow my lab to focus on this historically neglected but vitally important field.”

December 23, 2019 | More

Researchers produce first laser ultrasound images of humans

For most people, getting an ultrasound is a relatively easy procedure: As a technician gently presses a probe against a patient’s skin, sound waves generated by the probe travel through the skin, bouncing off muscle, fat, and other soft tissues before reflecting back to the probe, which detects and translates the waves into an image of what lies beneath.

Conventional ultrasound doesn’t expose patients to harmful radiation as X-ray and CT scanners do, and it’s generally noninvasive. But it does require contact with a patient’s body, and as such, may be limiting in situations where clinicians might want to image patients who don’t tolerate the probe well, such as babies, burn victims, or other patients with sensitive skin. Furthermore, ultrasound probe contact induces significant image variability, which is a major challenge in modern ultrasound imaging.

Now, MIT engineers have come up with an alternative to conventional ultrasound that doesn’t require contact with the body to see inside a patient. The new laser ultrasound technique leverages an eye- and skin-safe laser system to remotely image the inside of a person. When trained on a patient’s skin, one laser remotely generates sound waves that bounce through the body. A second laser remotely detects the reflected waves, which researchers then translate into an image similar to conventional ultrasound.

In a paper published today in the Nature journal Light: Science and Applications, the team reports generating the first laser ultrasound images in humans. The researchers scanned the forearms of several volunteers and observed common tissue features such as muscle, fat, and bone, down to about 6 centimeters below the skin. These images, comparable to conventional ultrasound, were produced using remote lasers focused on a volunteer from half a meter away.

“We’re at the beginning of what we could do with laser ultrasound,” says Brian W. Anthony, a principal research scientist in MIT’s Department of Mechanical Engineering and Institute for Medical Engineering and Science (IMES), a senior author on the paper. “Imagine we get to a point where we can do everything ultrasound can do now, but at a distance. This gives you a whole new way of seeing organs inside the body and determining properties of deep tissue, without making contact with the patient.”

Anthony’s co-authors on the paper are lead author and MIT postdoc Xiang (Shawn) Zhang and recent doctoral graduate Jonathan Fincke, along with Charles Wynn, Matthew Johnson, and Robert Haupt of MIT’s Lincoln Laboratory.

Yelling into a canyon — with a flashlight

In recent years, researchers have explored laser-based methods in ultrasound excitation in a field known as photoacoustics. Instead of directly sending sound waves into the body, the idea is to send in light, in the form of a pulsed laser tuned at a particular wavelength, that penetrates the skin and is absorbed by blood vessels.

The blood vessels rapidly expand and relax — instantly heated by a laser pulse then rapidly cooled by the body back to their original size — only to be struck again by another light pulse. The resulting mechanical vibrations generate sound waves that travel back up, where they can be detected by transducers placed on the skin and translated into a photoacoustic image.

While photoacoustics uses lasers to remotely probe internal structures, the technique still requires a detector in direct contact with the body in order to pick up the sound waves. What’s more, light can only travel a short distance into the skin before fading away. As a result, other researchers have used photoacoustics to image blood vessels just beneath the skin, but not much deeper.

Since sound waves travel further into the body than light, Zhang, Anthony, and their colleagues looked for a way to convert a laser beam’s light into sound waves at the surface of the skin, in order to image deeper in the body.

Based on their research, the team selected lasers operating at a wavelength of 1,550 nanometers, which is highly absorbed by water (and is eye- and skin-safe with a large safety margin). Because skin is largely composed of water, the team reasoned that it should efficiently absorb this light and heat up and expand in response. As it oscillates back to its normal state, the skin itself should produce sound waves that propagate through the body.

The researchers tested this idea with a laser setup, using one pulsed laser set at 1,550 nanometers to generate sound waves, and a second continuous laser, tuned to the same wavelength, to remotely detect reflected sound waves.  This second laser is a sensitive motion detector that measures vibrations on the skin surface caused by the sound waves bouncing off muscle, fat, and other tissues. Skin surface motion, generated by the reflected sound waves, causes a change in the laser’s frequency, which can be measured. By mechanically scanning the lasers over the body, scientists can acquire data at different locations and generate an image of the region.
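As a rough sketch of how such scanned measurements become a picture (assumed post-processing for illustration, not the authors' reconstruction pipeline), each laser position contributes one column of the image, and echo arrival time maps to depth through the speed of sound in tissue:

```python
import numpy as np

SPEED_OF_SOUND = 1540.0  # m/s, a typical assumed value for soft tissue

def traces_to_image(echo_traces, sample_rate_hz):
    """echo_traces: (num_scan_positions, num_time_samples) array of the
    vibration signal detected by the second laser at each scan position."""
    envelope = np.abs(echo_traces)              # crude envelope of the echoes
    image = 20 * np.log10(envelope + 1e-12)     # log compression, as in B-mode display
    dt = 1.0 / sample_rate_hz
    depths = SPEED_OF_SOUND * np.arange(echo_traces.shape[1]) * dt / 2  # round trip, so /2
    return image.T, depths                      # rows: depth, columns: scan position
```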

“It’s like we’re constantly yelling into the Grand Canyon while walking along the wall and listening at different locations,” Anthony says. “That then gives you enough data to figure out the geometry of all the things inside that the waves bounced against — and the yelling is done with a flashlight.”

In-home imaging

The researchers first used the new setup to image metal objects embedded in a gelatin mold roughly resembling skin’s water content. They imaged the same gelatin using a commercial ultrasound probe and found both images were encouragingly similar. They moved on to image excised animal tissue — in this case, pig skin — where they found laser ultrasound could distinguish subtler features, such as the boundary between muscle, fat, and bone.

Finally, the team carried out the first laser ultrasound experiments in humans, using a protocol that was approved by the MIT Committee on the Use of Humans as Experimental Subjects. After scanning the forearms of several healthy volunteers, the researchers produced the first fully noncontact laser ultrasound images of a human. The fat, muscle, and tissue boundaries are clearly visible and comparable to images generated using commercial, contact-based ultrasound probes.

The researchers plan to improve their technique, and they are looking for ways to boost the system’s performance to resolve fine features in the tissue. They are also looking to hone the detection laser’s capabilities. Further down the road, they hope to miniaturize the laser setup, so that laser ultrasound might one day be deployed as a portable device.

“I can imagine a scenario where you’re able to do this in the home,” Anthony says. “When I get up in the morning, I can get an image of my thyroid or arteries, and can have in-home physiological imaging inside of my body. You could imagine deploying this in the ambient environment to get an understanding of your internal state.”

This research was supported in part by the MIT Lincoln Laboratory Biomedical Line Program for the United States Air Force and by the U.S. Army Medical Research and Materiel Command’s Military Operational Medicine Research Program.

December 20, 2019 | More

Model beats Wall Street analysts in forecasting business financials

Knowing a company’s true sales can help determine its value. Investors, for instance, often employ financial analysts to predict a company’s upcoming earnings using various public data, computational tools, and their own intuition. Now MIT researchers have developed an automated model that significantly outperforms humans in predicting business sales using very limited, “noisy” data.

In finance, there’s growing interest in using imprecise but frequently generated consumer data — called “alternative data” — to help predict a company’s earnings for trading and investment purposes. Alternative data can comprise credit card purchases, location data from smartphones, or even satellite images showing how many cars are parked in a retailer’s lot. Combining alternative data with more traditional but infrequent ground-truth financial data — such as quarterly earnings, press releases, and stock prices — can paint a clearer picture of a company’s financial health on even a daily or weekly basis.

But, so far, it’s been very difficult to get accurate, frequent estimates using alternative data. In a paper published this week in the Proceedings of the ACM SIGMETRICS conference, the researchers describe a model for forecasting financials that uses only anonymized weekly credit card transactions and three-month earnings reports.

Tasked with predicting quarterly earnings of more than 30 companies, the model outperformed the combined estimates of expert Wall Street analysts on 57 percent of predictions. Notably, the analysts had access to any available private or public data and other machine-learning models, while the researchers’ model used a very small dataset of the two data types.

“Alternative data are these weird, proxy signals to help track the underlying financials of a company,” says first author Michael Fleder, a postdoc in the Laboratory for Information and Decision Systems (LIDS). “We asked, ‘Can you combine these noisy signals with quarterly numbers to estimate the true financials of a company at high frequencies?’ Turns out the answer is yes.”

The model could give an edge to investors, traders, or companies looking to frequently compare their sales with competitors. Beyond finance, the model could help social and political scientists, for example, to study aggregated, anonymous data on public behavior. “It’ll be useful for anyone who wants to figure out what people are doing,” Fleder says.

Joining Fleder on the paper is EECS Professor Devavrat Shah, who is the director of MIT’s Statistics and Data Science Center, a member of the Laboratory for Information and Decision Systems, a principal investigator for the MIT Institute for Foundations of Data Science, and an adjunct professor at the Tata Institute of Fundamental Research.

Tackling the “small data” problem

For better or worse, a lot of consumer data is up for sale. Retailers, for instance, can buy credit card transactions or location data to see how many people are shopping at a competitor. Advertisers can use the data to see how their advertisements are impacting sales. But getting those answers still primarily relies on humans. No machine-learning model has been able to adequately crunch the numbers.

Counterintuitively, the problem is actually a lack of data. Each financial input, such as a quarterly report or weekly credit card total, is only one number. Quarterly reports over two years total only eight data points. Credit card data for, say, every week over the same period add up to only roughly another 100 “noisy” data points, meaning they contain potentially uninterpretable information.

“We have a ‘small data’ problem,” Fleder says. “You only get a tiny slice of what people are spending and you have to extrapolate and infer what’s really going on from that fraction of data.”

For their work, the researchers obtained consumer credit card transactions — at typically weekly and biweekly intervals — and quarterly reports for 34 retailers from 2015 to 2018 from a hedge fund. Across all companies, they gathered 306 quarters’ worth of data in total.

Computing daily sales is fairly simple in concept. The model assumes a company’s daily sales remain similar, only slightly decreasing or increasing from one day to the next. Mathematically, that means each day’s sales equal the previous day’s sales multiplied by some constant, plus a statistical noise term that captures some of the inherent randomness in a company’s sales. Tomorrow’s sales, for instance, equal today’s sales multiplied by, say, 0.998 or 1.01, plus the estimated noise term.

If given accurate model parameters for the daily constant and noise level, a standard inference algorithm can evaluate that equation to produce an accurate forecast of daily sales. But the trick is calculating those parameters.
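Here is a minimal sketch of that daily dynamic under assumed parameters (the constant and noise level below are illustrative, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
DAILY_CONSTANT = 1.002   # assumed slight day-to-day growth in sales
NOISE_STD = 50.0         # assumed randomness in daily sales, in dollars

def simulate_daily_sales(initial_sales, num_days=90):
    """Each day's sales = constant * previous day's sales + random noise."""
    sales = [initial_sales]
    for _ in range(num_days - 1):
        sales.append(DAILY_CONSTANT * sales[-1] + rng.normal(0.0, NOISE_STD))
    return np.array(sales)

quarter = simulate_daily_sales(10_000.0)
print(quarter.sum())     # the quarterly total these 90 simulated days imply
```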

Untangling the numbers

That’s where quarterly reports and probability techniques come in handy. In a simple world, a quarterly report could be divided by, say, 90 days to calculate the daily sales (implying sales are roughly constant day-to-day). In reality, sales vary from day to day. Also, including alternative data to help understand how sales vary over a quarter complicates matters: Apart from being noisy, purchased credit card data always capture only some indeterminate fraction of the total sales. All of that makes it very difficult to know exactly how the credit card totals factor into the overall sales estimate.

“That requires a bit of untangling the numbers,” Fleder says. “If we observe 1 percent of a company’s weekly sales through credit card transactions, how do we know it’s 1 percent? And, if the credit card data is noisy, how do you know how noisy it is? We don’t have access to the ground truth for daily or weekly sales totals. But the quarterly aggregates help us reason about those totals.”

To do so, the researchers use a variation of the standard inference algorithm, called Kalman filtering or belief propagation, which has been used in various technologies from space shuttles to smartphone GPS. Kalman filtering uses data measurements observed over time, containing noise inaccuracies, to generate a probability distribution for unknown variables over a designated timeframe. In the researchers’ work, that means estimating the possible sales of a single day.

To train the model, the technique first breaks down quarterly sales into a set number of measured days, say 90 — allowing sales to vary day-to-day. Then, it matches the observed, noisy credit card data to unknown daily sales. Using the quarterly numbers and some extrapolation, it estimates the fraction of total sales the credit card data likely represents. Then, it calculates each day’s fraction of observed sales, noise level, and an error estimate for how well it made its predictions.

The inference algorithm plugs all those values into the formula to predict daily sales totals. Then, it can sum those totals to get weekly, monthly, or quarterly numbers. Across all 34 companies, the model beat a consensus benchmark — which combines estimates of Wall Street analysts — on 57.2 percent of 306 quarterly predictions.
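A minimal scalar Kalman-filter sketch of that combination (the structure and parameters here are assumptions for illustration, not the authors' implementation): the hidden state is true daily sales, and each observation is a noisy credit-card total assumed to capture only a fraction of those sales.

```python
import numpy as np

def kalman_daily_sales(observations, obs_fraction, daily_constant,
                       process_var, obs_var, init_mean, init_var):
    """observations: one entry per day; None on days with no credit-card data."""
    mean, var = init_mean, init_var
    estimates = []
    for y in observations:
        # Predict: push yesterday's estimate through the daily-sales dynamics.
        mean = daily_constant * mean
        var = daily_constant ** 2 * var + process_var
        # Update: fold in the noisy, partial credit-card observation.
        if y is not None:
            gain = var * obs_fraction / (obs_fraction ** 2 * var + obs_var)
            mean = mean + gain * (y - obs_fraction * mean)
            var = (1.0 - gain * obs_fraction) * var
        estimates.append(mean)
    return np.array(estimates)  # daily estimates; sum slices for weekly or quarterly totals
```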

Next, the researchers are designing the model to analyze a combination of credit card transactions and other alternative data, such as location information. “This isn’t all we can do. This is just a natural starting point,” Fleder says.

December 19, 2019 | More

A new way to remove contaminants from nuclear wastewater

Nuclear power continues to expand globally, propelled, in part, by the fact that it produces few greenhouse gas emissions while providing steady power output. But along with that expansion comes an increased need for dealing with the large volumes of water used for cooling these plants, which becomes contaminated with radioactive isotopes that require special long-term disposal.

Now, a method developed at MIT provides a way of substantially reducing the volume of contaminated water that needs to be disposed of, instead concentrating the contaminants and allowing the rest of the water to be recycled through the plant’s cooling system. The proposed system is described in the journal Environmental Science and Technology, in a paper by graduate student Mohammad Alkhadra, professor of chemical engineering Martin Bazant, and three others.

The method makes use of a process called shock electrodialysis, which uses an electric field to generate a deionization shockwave in the water. The shockwave pushes the electrically charged particles, or ions, to one side of a tube filled with charged porous material, so that a concentrated stream of contaminants can be separated out from the rest of the water. The group discovered that two radionuclide contaminants — isotopes of cobalt and cesium — can be selectively removed from water that also contains boric acid and lithium. After the water stream is cleansed of its cobalt and cesium contaminants, it can be reused in the reactor.

The shock electrodialysis process was initially developed by Bazant and his co-workers as a general method of removing salt from water, as demonstrated in their first scalable prototype four years ago. Now, the team has focused on this more specific application, which could help improve the economics and environmental impact of working nuclear power plants. In ongoing research, they are also continuing to develop a system for removing other contaminants, including lead, from drinking water.

Not only is the new system inexpensive and scalable to large sizes, but in principle it also can deal with a wide range of contaminants, Bazant says. “It’s a single device that can perform a whole range of separations for any specific application,” he says.

In their earlier desalination work, the researchers used measurements of the water’s electrical conductivity to determine how much salt was removed. In the years since then, the team has developed other methods for detecting and quantifying the details of what’s in the concentrated radioactive waste and the cleaned water.

“We carefully measure the composition of all the stuff going in and out,” says Bazant, who is the E.G. Roos Professor of Chemical Engineering as well as a professor of mathematics. “This really opened up a new direction for our research.” They began to focus on separation processes that would be useful for health reasons or that would result in concentrating material that has high value, either for reuse or to offset disposal costs.

The method they developed works for sea water desalination, but it is a relatively energy-intensive process for that application. The energy cost is dramatically lower when the method is used for ion-selective separations from dilute streams such as nuclear plant cooling water. For this application, which also requires expensive disposal, the method makes economic sense, he says. It also hits both of the team’s targets: dealing with high-value materials and helping to safeguard health. The scale of the application is also significant — a single large nuclear plant can circulate about 10 million cubic meters of water per year through its cooling system, Alkhadra says.

For their tests of the system, the researchers used simulated nuclear wastewater based on a recipe provided by Mitsubishi Heavy Industries, which sponsored the research and is a major builder of nuclear plants. In the team’s tests, after a three-stage separation process, they were able to remove 99.5 percent of the cobalt radionuclides in the water while retaining about 43 percent of the water in cleaned-up form so that it could be reused. As much as two-thirds of the water can be reused if the cleanup level is cut back to 98.3 percent of the contaminants removed, the team found.

While the overall method has many potential applications, the nuclear wastewater separation is “one of the first problems we think we can solve [with this method] that no other solution exists for,” Bazant says. No other practical, continuous, economic method has been found for separating out the radioactive isotopes of cobalt and cesium, the two major contaminants of nuclear wastewater, he adds.

While the method could be used for routine cleanup, it could also make a big difference in dealing with more extreme cases, such as the millions of gallons of contaminated water at the damaged Fukushima Daiichi power plant in Japan, where the accumulation of that contaminated water has threatened to overpower the containment systems designed to prevent it from leaking out into the adjacent Pacific. While the new system has so far only been tested at much smaller scales, Bazant says that such large-scale decontamination systems based on this method might be possible “within a few years.”

The research team also included MIT postdocs Kameron Conforti and Tao Gao and graduate student Huanhuan Tian.

December 19, 2019 | More

Taking the carbon out of construction with engineered wood

To meet the long-term goals of the Paris Agreement on climate change — keeping global warming well below 2 degrees Celsius and ideally capping it at 1.5 C — humanity will ultimately need to achieve net-zero emissions of greenhouse gases (GHGs) into the atmosphere. To date, emissions reduction efforts have largely focused on decarbonizing the two economic sectors responsible for the most emissions, electric power and transportation. Other approaches aim to remove carbon from the atmosphere and store it through carbon capture technology, biofuel cultivation, and massive tree planting.

As it turns out, planting trees is not the only way forestry can help in climate mitigation; how we use wood harvested from trees may also make a difference. Recent studies have shown that engineered wood products — composed of wood and various types of adhesive to enhance physical strength — involve far fewer carbon dioxide emissions than mineral-based building materials, and at lower cost. Now new research in the journal Energy Economics explores the potential environmental and economic impact in the United States of substituting lumber for energy-intensive building materials such as cement and steel, which account for nearly 10 percent of human-made GHG emissions and are among the hardest to reduce.

“To our knowledge, this study is the first economy-wide analysis to evaluate the economic and emissions impacts of substituting lumber products for more CO2-intensive materials in the construction sector,” says the study’s lead author Niven Winchester, a research scientist at the MIT Joint Program on the Science and Policy of Global Change and Motu Economic and Public Policy Research. “There is no silver bullet to reduce GHGs, so exploiting a suite of emission-abatement options is required to mitigate climate change.”

The study compared the economic and emissions impacts of replacing CO2-intensive building materials (e.g., steel and concrete) with lumber products in the United States under an economy-wide cap-and-trade policy consistent with the nation’s Paris Agreement GHG emissions-reduction pledge. It found that the CO2 intensity (tons of CO2 emissions per dollar of output) of lumber production is about 20 percent less than that of fabricated metal products, under 50 percent that of iron and steel, and under 25 percent that of cement. In addition, shifting construction toward lumber products lowers the GDP cost of meeting the emissions cap by approximately $500 million and reduces the carbon price.

The authors caution that these results only take into account emissions resulting from the use of fossil fuels in harvesting, transporting, fabricating, and milling lumber products, and neglect potential increases in atmospheric CO2 associated with tree harvesting or beneficial long-term carbon sequestration provided by wood-based building materials.

“The source of lumber, and the conditions under which it is grown and harvested, and the fate of wood products deserve further attention to develop a full accounting of the carbon implications of expanded use of wood in building construction,” they write. “Setting aside those issues, lumber products appear to be advantageous compared with many other building materials, and offer one potential option for reducing emissions from sectors like cement, iron and steel, and fabricated metal products — by reducing the demand for these products themselves.”

Funded, in part, by Weyerhaeuser and the Softwood Lumber Board, the study develops and utilizes a customized economy-wide model that includes a detailed representation of energy production and use and represents production of construction, forestry, lumber, and mineral-based construction materials.

December 11, 2019 | More