News and Research
Catherine Iacobo named industry co-director for MIT Leaders for Global Operations

Cathy Iacobo, a lecturer at the MIT Sloan School of Management, has been named the new industry co-director for the MIT Leaders for Global Operations (LGO) program. Read more

LGO

New leadership for Bernard M. Gordon-MIT Engineering Leadership Program

Olivier de Weck, a frequent LGO advisor and professor of aeronautics and astronautics and of engineering systems at MIT, has been named the new faculty co-director of the Bernard M. Gordon-MIT Engineering Leadership Program (GEL). He joins Reza Rahaman, who was appointed the program’s industry co-director and senior lecturer on July 1, 2018.

“Professor de Weck has a longstanding commitment to engineering leadership, both as an educator and a researcher. I look forward to working with him and the GEL team as they continue to strengthen their outstanding undergraduate program and develop the new program for graduate students,” says Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

A leader in systems engineering, de Weck researches how complex human-made systems such as aircraft, spacecraft, automobiles, and infrastructures are designed, manufactured, and operated. By investigating their lifecycle properties, de Weck and members of his research group have developed a range of novel techniques broadly adopted by industry to maximize the value of these systems over time.

August 1, 2019 | More

Building the tools of the next manufacturing revolution

John Hart, an associate professor of mechanical engineering at MIT, an LGO advisor, and the director of the Laboratory for Manufacturing and Productivity and the Center for Additive and Digital Advanced Production Technologies, is an expert in 3-D printing, also known as additive manufacturing, which involves the computer-guided deposition of material layer by layer into precise three-dimensional shapes. (Conventional manufacturing usually entails making a part by removing material, for example through machining, or by forming the part using a mold tool.)

Hart’s research includes the development of advanced materials — new types of polymers, nanocomposites, and metal alloys — and the development of novel machines and processes that use and shape materials, such as high-speed 3-D printing, roll-to-roll graphene growth, and manufacturing techniques for low-cost sensors and electronics.

June 19, 2019 | More

LGO Best Thesis 2019 for Big Data Analysis at Amgen, Inc.

After the official MIT commencement ceremonies, Thomas Roemer, LGO’s executive director, announced the best thesis winner at LGO’s annual post-graduation celebration. This year’s winner was Maria Emilia Lopez Marino (Emi), who developed a predictive framework to assess the impact of raw material attributes on the manufacturing process at Amgen. Thesis readers described Marino’s project as an “extremely well-written thesis. Excellent coverage of not only the project, but also the industry as a whole.”

Applying MIT knowledge in the real world

Marino, who earned her MBA and SM in Civil and Environmental Engineering, completed her six-month LGO internship project at Amgen, Inc. For her project, Marino developed a new predictive framework using machine learning techniques to assess the impact of raw material variability on the performance of several commercial biologics manufacturing processes. This solution represents a competitive advantage for biopharmaceutical leaders. The results from her analysis showed an 80% average accuracy on predictions for new data. Additionally, the framework she developed is the starting point of a new methodology for understanding material variability in pharmaceutical manufacturing.
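
To make the concept concrete, here is a minimal sketch of this kind of workflow: training a supervised model on raw-material attributes and scoring it on held-out data. The feature names, synthetic data, and choice of model are illustrative assumptions, not details of Marino’s actual framework.

```python
# Minimal sketch of a raw-material variability model. The features, data,
# and model choice are illustrative assumptions, not Amgen's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: each row is a production lot, each column a raw-material
# attribute (e.g., purity, particle size, moisture); the label marks whether
# the lot's process performance was acceptable.
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Evaluate on data the model never saw, mirroring the thesis's reported
# ~80% average accuracy on predictions for new data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```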

Each year, the theses are nominated by faculty advisors and then reviewed by LGO alumni readers to determine the winner. Thesis advisor Professor Roy Welsch said that Emi “understood variation both in a statistical sense and in manufacturing in the biopharmaceutical industry and left behind highly accurate and interpretable models in a form that others can use and expand. We hope she will share her experiences with us in the future at LGO alumni reunions and on DPT visits.”

Marino, who earned her undergraduate degree in chemical engineering from the National University of Mar del Plata in Argentina, has accepted a job offer with Amgen in Puerto Rico.

June 11, 2019 | More

The tenured engineers of 2019

The School of Engineering has announced that 17 members of its faculty have been granted tenure by MIT, including three LGO advisors: Saurabh Amin, Kerri Cahoy, and Julie Shah.

“The tenured faculty in this year’s cohort are a true inspiration,” said Anantha Chandrakasan, dean of the School of Engineering. “They have shown exceptional dedication to research and teaching, and their innovative work has greatly advanced their fields.”

This year’s newly tenured associate professors are:

Antoine Allanore, in the Department of Materials Science and Engineering, develops more sustainable technologies and strategies for mining, metal extraction, and manufacturing, including novel methods of fertilizer production.

Saurabh Amin, in the Department of Civil and Environmental Engineering, focuses on the design and implementation of network inspection and control algorithms for improving the resilience of large-scale critical infrastructures, such as transportation systems and water and energy distribution networks, against cyber-physical security attacks and natural events.

Emilio Baglietto, in the Department of Nuclear Science and Engineering, uses computational modeling to characterize and predict the underlying heat-transfer processes in nuclear reactors, including turbulence modeling, unsteady flow phenomena, multiphase flow, and boiling.

Paul Blainey, the Karl Van Tassel (1925) Career Development Professor in the Department of Biological Engineering, integrates microfluidic, optical, and molecular tools for application in biology and medicine across a range of scales.

Kerri Cahoy, the Rockwell International Career Development Professor in the Department of Aeronautics and Astronautics, develops nanosatellites that demonstrate weather sensing using microwave radiometers and GPS radio occultation receivers, high data-rate laser communications with precision time transfer, and active optical imaging systems using MEMS deformable mirrors for exoplanet exploration applications.

Juejun Hu, in the Department of Materials Science and Engineering, focuses on novel materials and devices to exploit interactions of light with matter, with applications in on-chip sensing and spectroscopy, flexible and polymer photonics, and optics for solar energy.

Sertac Karaman, the Class of 1948 Career Development Professor in the Department of Aeronautics and Astronautics, studies robotics, control theory, and the application of probability theory, stochastic processes, and optimization for cyber-physical systems such as driverless cars and drones.

R. Scott Kemp, the Class of 1943 Career Development Professor in the Department of Nuclear Science and Engineering, combines physics, politics, and history to identify options for addressing nuclear weapons and energy. He investigates technical threats to nuclear-deterrence stability and the information theory of treaty verification; he is also developing technical tools for reconstructing the histories of secret nuclear-weapon programs.

Aleksander Mądry, in the Department of Electrical Engineering and Computer Science, investigates topics ranging from developing new algorithms using continuous optimization, to combining theoretical and empirical insights, to building a more principled and thorough understanding of key machine learning tools. A major theme of his research is rethinking machine learning from the perspective of security and robustness.

Frances Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering, performs research on nanostructures using transmission electron microscopes that allow researchers to see, in real-time, how structures form and develop in response to changes in temperature, environment, and other variables. Understanding crystal growth at the nanoscale is helpful in creating precisely controlled materials for applications in microelectronics and energy conversion and storage.

Daniel Sanchez, in the Department of Electrical Engineering and Computer Science, works on computer architecture and computer systems, with an emphasis on large-scale multi-core processors, scalable and efficient memory hierarchies, architectures with quality-of-service guarantees, and scalable runtimes and schedulers.

Themistoklis Sapsis, the Doherty Career Development Professor in the Department of Mechanical Engineering, develops analytical, computational, and data-driven methods for the probabilistic prediction and quantification of extreme events in high-dimensional nonlinear systems such as turbulent fluid flows and nonlinear mechanical systems.

Julie Shah, the Boeing Career Development Professor in the Department of Aeronautics and Astronautics, develops innovative computational models and algorithms expanding the use of human cognitive models for artificial intelligence. Her research has produced novel forms of human-machine teaming in manufacturing assembly lines, healthcare applications, transportation, and defense.

Hadley Sikes, the Esther and Harold E. Edgerton Career Development Professor in the Department of Chemical Engineering, employs biomolecular engineering and knowledge of reaction networks to detect epigenetic modifications that can guide cancer treatment, induce oxidant-specific perturbations in tumors for therapeutic benefit, and improve signaling reactions and assay formats used in medical diagnostics.

William Tisdale, the ARCO Career Development Professor in the Department of Chemical Engineering, works on energy transport in nanomaterials, nonlinear spectroscopy, and spectroscopic imaging to better understand and control the mechanisms by which excitons, free charges, heat, and reactive chemical species are converted to more useful forms of energy, and on leveraging this understanding to guide materials design and process optimization.

Virginia Vassilevska Williams, the Steven and Renee Finn Career Development Professor in the Department of Electrical Engineering and Computer Science, applies combinatorial and graph theoretic tools to develop efficient algorithms for matrix multiplication, shortest paths, and a variety of other fundamental problems. Her recent research is centered on proving tight relationships between seemingly different computational problems. She is also interested in computational social choice issues, such as making elections computationally resistant to manipulation.

Amos Winter, the Tata Career Development Professor in the Department of Mechanical Engineering, focuses on connections between mechanical design theory and user-centered product design to create simple, elegant technological solutions for applications in medical devices, water purification, agriculture, automotive, and other technologies used in highly constrained environments.

June 7, 2019 | More

MIT team places second in 2019 NASA BIG Idea Challenge

An MIT student team, including LGO ’20 Hans Nowak, took second place for its design of a multilevel greenhouse to be used on Mars in NASA’s 2019 Breakthrough, Innovative and Game-changing (BIG) Idea Challenge last month.

Each year, NASA holds the BIG Idea competition in its search for innovative and futuristic ideas. This year’s challenge invited universities across the United States to submit designs for a sustainable, cost-effective, and efficient method of supplying food to astronauts during future crewed explorations of Mars. Dartmouth College was awarded first place in this year’s closely contested challenge.

“This was definitely a full-team success,” says team leader Eric Hinterman, a graduate student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). The team had contributions from 10 undergraduates and graduate students from across MIT departments. Support and assistance were provided by four architects and designers in Italy. This project was completely voluntary; all 14 contributors share a similar passion for space exploration and enjoyed working on the challenge in their spare time.

The MIT team dubbed its design “BEAVER” (Biosphere Engineered Architecture for Viable Extraterrestrial Residence). “We designed our greenhouse to provide 100 percent of the food requirements for four active astronauts every day for two years,” explains Hinterman.

The ecologists and agriculture specialists on the MIT team identified eight types of crops to provide the calories, protein, carbohydrates, and oils and fats that astronauts would need; these included potatoes, rice, wheat, oats, and peanuts. The flexible menu suggested substitutes, depending on astronauts’ specific dietary requirements.

“Most space systems are metallic and very robotic,” Hinterman says. “It was fun working on something involving plants.”

Parameters provided by NASA — a power budget, dimensions necessary for transporting by rocket, the capacity to provide adequate sustenance — drove the shape and the overall design of the greenhouse.

Last October, the team held an initial brainstorming session and pitched project ideas. The iterative process continued until they reached their final design: a cylindrical growing space 11.2 meters in diameter and 13.4 meters tall after deployment.

An innovative design

The greenhouse would be packaged inside a rocket bound for Mars and, after landing, a waiting robot would move it to its site. Programmed with folding mechanisms, it would then expand horizontally and vertically and begin forming an ice shield around its exterior to protect plants and humans from the intense radiation on the Martian surface.

Two years later, when the orbits of Earth and Mars were again in optimal alignment for launching and landing, a crew would arrive on Mars, where they would complete the greenhouse setup and begin growing crops. “About every two years, the crew would leave and a new crew of four would arrive and continue to use the greenhouse,” explains Hinterman.

To maximize space, BEAVER employs a large spiral that moves around a central core within the cylinder. Seedlings are planted at the top and flow down the spiral as they grow. By the time they reach the bottom, the plants are ready for harvesting, and the crew enters at the ground floor to gather the potatoes, peanuts, and grains. The planting trays are then moved to the top of the spiral, and the process begins again.

“A lot of engineering went into the spiral,” says Hinterman. “Most of it is done without any moving parts or mechanical systems, which makes it ideal for space applications. You don’t want a lot of moving parts or things that can break.”

The human factor

“One of the big issues with sending humans into space is that they will be confined to seeing the same people every day for a couple of years,” Hinterman explains. “They’ll be living in an enclosed environment with very little personal space.”

The greenhouse provides a pleasant area to ensure astronauts’ psychological well-being. On the top floor, just above the spiral, a windowed “mental relaxation area” overlooks the greenery. The ice shield admits natural light, and the crew can lounge on couches and enjoy the view of the Mars landscape. And rather than running pipes from the water tank at the top level down to the crops, Hinterman and his team designed a cascading waterfall…

May 24, 2019 | More

MIT team places first in U.S. Air Force virtual reality competition

When the United States Air Force put out a call for submissions for its first-ever Visionary Q-Prize competition in October 2018, a six-person team of three MIT students and three LGO alumni took up the challenge. Last month, they emerged as a first-place winner for their prototype of a virtual reality tool they called CoSMIC (Command, Sensing, and Mapping Information Center).

The challenge was hosted by the Air Force Research Labs Space Vehicles Directorate and the Wright Brothers Institute to encourage nontraditional sources with innovative products and ideas to engage with military customers to develop solutions for safe and secure operations in space.

April 12, 2019 | More

MIT graduate engineering, business programs earn top rankings from U.S. News for 2020

Graduate engineering program is No. 1 in the nation; MIT Sloan is No. 3.

MIT’s graduate program in engineering has again earned a No. 1 spot in U.S. News and World Report’s annual rankings, a place it has held since 1990, when the magazine first ranked such programs.

The MIT Sloan School of Management also placed highly, occupying the No. 3 spot for the best graduate business program, which it shares with Harvard University and the University of Chicago.

March 22, 2019 | More

Leading to Green

More efficient or more sustainable? Janelle Heslop, LGO ’19, helps businesses achieve both. Heslop is no shrinking violet. She found a voice for herself and the environment when she was in middle school, volunteering as a junior docent for the Hudson River Museum. “I was a 12-year-old giving tours, preaching to people: we’ve got to protect our resources,” Heslop says. “At a very early age, I learned to have a perspective, and assert it.”

February 22, 2019 | More

Winners of inaugural AUS New Ventures Challenge announced

Danielle Castley, a Dartmouth PhD candidate; Jordan Landis, LGO ’20; and Ian McDonald, PhD, of Neutroelectric LLC won the inaugural American University of Sharjah New Ventures Challenge, taking the $50,000 Chancellor’s Prize with radiation-shielding materials developed to improve safety margins and reduce costs for nuclear power plant operations and for the transport and storage of spent nuclear waste.

February 20, 2019 | More

Tackling greenhouse gases

While a number of other MIT researchers are developing capture and reuse technologies to minimize greenhouse gas emissions, Professor Timothy Gutowski, a frequent LGO advisor, is approaching climate change from a completely different angle: the economics of manufacturing.

Gutowski understands manufacturing. He has worked on both the industry and academic side of manufacturing, was the director of MIT’s Laboratory for Manufacturing and Productivity for a decade, and currently leads the Environmentally Benign Manufacturing research group at MIT. His primary research focus is assessing the environmental impact of manufacturing.

January 11, 2019 | More

Sloan

24 MIT startups to watch

A robotic bartender, smart clothing, a fantasy sports app, remote eye exams.

The wide-ranging ideas of this year’s MIT delta v cohort were on display during the Sept. 6 Demo Day in Cambridge, Massachusetts.

“Concepts don’t win here, ideas don’t win, it’s impact that wins,” said Bill Aulet, managing director of the Martin Trust Center for MIT Entrepreneurship.

Friday’s Demo Day was the culmination of delta v’s eighth cohort. The accelerator runs through the summer at the Trust Center, where seventeen teams were based this year; seven other teams worked out of the New York City cohort, now in its third year.

September 11, 2019 | More

4 strategies for future-proofing your workforce

Equipping a company to excel in a changing business landscape isn’t just about technology — a successful company needs a digital-savvy workforce with the mindset to take on new challenges and embrace new ways of working.

To future-proof the workforce, companies are developing new performance, reward, and training strategies. Some embrace peer-led education while others make a game of digital knowledge. In other companies, preparing the workforce for the future means making sure digital natives are versed in business fundamentals.

Cookie-cutter approaches aren’t enough to transform companies, according to research…

September 9, 2019 | More

How a hybrid housing policy is opening doors to good neighborhoods

Jackie used short-term financial assistance to help pay the deposit on a new apartment for her and her 9-year-old son. With the help of a housing advocate, Dee, a mother of five, was able to compare neighborhood amenities in ways she’d never thought to do. Melinda received a list of property owners and landlords who ignored “Section 8 stereotypes” and were ready to work with her to find a new home for her and her 2-year-old son.

Through a combination of financial support and custom guidance and advocacy, these Washington state mothers were able to move to neighborhoods with more opportunities for their children. They’re three of the 420 families who participated in a regional housing mobility program that refutes the assumption that low-income families want to stay in low-opportunity areas.

September 4, 2019 | More

4 keys to modernizing legacy technology

Digital transformations are a daunting undertaking for any organization, but they’re especially challenging for companies that rely on legacy technology that’s been in place for generations.

It’s a high-tech process that’s made easier by some low-tech, time-tested business practices — communicating clearly, deputizing company influencers, and continuing to deliver quality products and superior customer service, according to executives from Salesforce, Blue Cross Blue Shield…

August 24, 2019 | More

Supply chain visibility boosts consumer trust, and even sales

Global supply chains are complex. Transforming raw materials into completed goods often requires a multitude of workers crossing different countries and cultures. Companies undertaking efforts to learn more about their supply chain often face a significant investment of time and resources.

Those costs are worth it, according to a new study by an MIT Sloan professor and a visiting assistant professor, along with León Valdés, an assistant professor at the University of Pittsburgh.

The researchers found that investing in supply chain visibility is a surefire way for companies to gain consumer trust.

August 20, 2019 | More

Looking to stay relevant, big enterprises embrace the platform

How hot are digital platforms? Very: The five most valuable companies on the planet right now — Microsoft, Amazon, Apple, Alphabet, and Facebook — are platform companies, and “myriad startups and smaller companies are thriving as well,” according to Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy.

With both the “behemoths of the digital economy,” as Brynjolfsson called them, and startups reaping the benefits of these digital ecosystems, the pressure is on incumbent organizations to join the platform economy.

August 10, 2019 | More

A study of more than 250 platforms reveals why most fail

Platforms have become one of the most important business models of the 21st century. In our newly published book, we divide all platforms into two types. Innovation platforms enable third-party firms to add complementary products and services to a core product or technology; prominent examples include the Google Android and Apple iPhone operating systems, as well as Amazon Web Services. Transaction platforms enable the exchange of information, goods, or services; examples include Amazon Marketplace, Airbnb, and Uber. Five of the six most valuable firms in the world are built around these types of platforms.

In our analysis of data going back 20 years, we also identified 43 publicly listed platform companies in the Forbes Global 2000. These platforms generated the same level of annual revenues (about $4.5 billion) as their non-platform counterparts, but used half the number of employees. They also had twice the operating profits and much higher market values.

This post by Michael A. Cusumano, David B. Yoffie, and Annabelle Gawer appeared first on MIT Sloan Experts.

August 8, 2019 | More

The unsung heroes of global technology? Standard-setters.

When we think about major figures in global technology, we often focus on inventors like Thomas Edison, with the light bulb, or Tim Berners-Lee, with the World Wide Web. Alternatively, we may look to builders of organizations that develop and spread innovations, such as Thomas Watson with IBM or Bill Gates with Microsoft. But other little-known figures have also played a critical role in the spread of technologies: engineers who set national and, especially, global standards.

August 6, 2019 | More

3 forces pushing on the platform economy

Digital platforms are transforming the way companies do business, but the last few years have shown these platforms still have their own need to evolve. Today especially, digital platforms are navigating a changing landscape.

During the recent 2019 MIT Platform Strategy Summit, three experts shared their predictions, offered advice, and asked questions about the changing platform business model.

Regulation

With Facebook and Google in control of 84% of global spending on online ads (excluding China) and Amazon handling close to half of all e-commerce purchases, it’s no surprise that regulating big tech and avoiding data monopolies are frequent topics of conversation.

August 5, 2019 | More

Business leaders gird for ‘organizational explosions’

You understand there is no one road to digital transformation. You remind your stakeholders that different companies take different paths, and none of them are easy. You warn your team to look out for potholes. But are you ready for “organizational explosions?”

For the last four years, the MIT Sloan Center for Information Systems Research has collected data from more than 800 organizations that are undergoing digital transformation, according to Nick van der Meulen, research scientist at the center. In its study of the data and interviews with the organizations, the center developed a framework for successful transformations.

August 1, 2019 | More

Engineering

MIT named No. 3 university by U.S. News for 2020

For a second year in a row, U.S. News and World Report has placed MIT third in its annual rankings of the nation’s best colleges and universities, which were announced today. Columbia University and Yale University also share the No. 3 ranking.

MIT’s engineering program continues to top the magazine’s list of undergraduate engineering programs at a doctoral institution. The Institute also placed first in six out of 12 engineering disciplines. No other institution is No. 1 in more than two disciplines.

MIT also remains the No. 2 undergraduate business program. Among business subfields, MIT is ranked No. 1 in two specialties.

In the overall institutional rankings, U.S. News placed Princeton University in the No. 1 spot, followed by Harvard University.

MIT ranks as the third most innovative university in the nation, according to the U.S. News peer assessment survey of top academics. And it’s fourth on the magazine’s list of national universities that offer students the best value, based on a school’s ranking, the net cost of attendance for a student who received the average level of need-based financial aid, and other variables.

MIT placed first in six engineering specialties: aerospace/aeronautical/astronautical engineering; chemical engineering; computer engineering; electrical/electronic/communication engineering; materials engineering; and mechanical engineering. It placed second in biomedical engineering.

Other schools in the top five overall for undergraduate engineering programs are Stanford University, University of California at Berkeley, Caltech, and Georgia Tech.

Among undergraduate business specialties, the MIT Sloan School of Management leads in production/operations management and in quantitative analysis/methods. It ranks second in entrepreneurship and in management information systems.

The No. 1-ranked undergraduate business program overall is at the University of Pennsylvania; other schools ranking in the top five include Berkeley, the University of Michigan at Ann Arbor, New York University, Carnegie Mellon University, and the University of Texas at Austin.

September 9, 2019 | More

Taking the next giant leaps

In July, the world celebrated the 50th anniversary of the historic Apollo 11 moon landing. MIT played an enormous role in that accomplishment, helping to usher in a new age of space exploration. Now MIT faculty, staff, and students are working toward the next great advances — ones that could propel humans back to the moon, and to parts still unknown.

“I am hard-pressed to think of another event that brought the world together in such a collective way as the Apollo moon landing,” says Daniel Hastings, the Cecil and Ida Green Education Professor and head of the Department of Aeronautics and Astronautics (AeroAstro). “Since the spring, we have been celebrating the role MIT played in getting us there and reflecting on how far technology has come in the past five decades.”

“Our community continues to build on the incredible legacy of Apollo,” Hastings adds. Some aspects of the future of space exploration, he notes, will follow from lessons learned. Others will come from newly developed technologies that were unimaginable in the 1960s. And still others will arise from novel collaborations that will fuel the next phases of research and discovery.

“This is a tremendously exciting time to think about the future of space exploration,” Hastings says. “And MIT is leading the way.”

Sticking the landing

Making a safe landing — anywhere — can be a life-or-death situation. On Earth, thanks to a network of global positioning satellites and a range of ground-based systems, pilots have instantaneous access to real-time data on every aspect of a landing environment. The moon, however, is not home to any of this precision navigation technology, making it rife with potential danger.

NASA’s recent decision to return to the moon has made this a more pressing challenge — and one that MIT has risen to before. The former MIT Instrumentation Lab (now the independent Draper) developed the guidance systems that enabled Neil Armstrong and Buzz Aldrin to land safely on the moon, and that were used on all Apollo spacecraft. This system relied on inertial navigation, in which a digital computer integrates acceleration and velocity measurements from electronic sensors on the vehicle to determine the spacecraft’s location. It was a remarkable achievement — the first time that humans traveled in a vehicle controlled by a computer.
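
In principle, inertial navigation is dead reckoning: measured acceleration is integrated once to update velocity and again to update position. The one-dimensional loop below is a toy illustration of that idea using simple Euler integration, not the Apollo Guidance Computer’s actual implementation.

```python
# Toy 1-D dead-reckoning loop: integrate acceleration to velocity, then
# velocity to position. Illustration only; the Apollo system worked in 3-D
# with gyro-stabilized inertial sensors and far more careful numerics.
def propagate(position, velocity, accel_samples, dt):
    for a in accel_samples:
        velocity += a * dt          # first integration: acceleration -> velocity
        position += velocity * dt   # second integration: velocity -> position
    return position, velocity

# Example: 10 seconds of constant 2 m/s^2 thrust sampled at 100 Hz.
pos, vel = propagate(0.0, 0.0, [2.0] * 1000, dt=0.01)
print(pos, vel)  # ~100 m (about 0.5*a*t^2) and 20 m/s (a*t)
```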

Today, working in MIT’s Aerospace Controls Lab with Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, graduate student Lena Downes — who is also co-advised by Ted Steiner at Draper — is developing a camera-based navigation system that can sense the terrain beneath the landing vehicle and use that information to update the location estimate. “If we want to explore a crater to determine its age or origin,” Downes explains, “we will need to avoid landing on the more highly sloped rim of the crater. Since lunar landings can have errors as high as several kilometers, we can’t plan to land too close to the edge.”

Downes’s research on crater detection involves processing images using convolutional neural networks and traditional computer vision methods. The images are combined with other data, such as previous measurements and known crater locations, enabling more precise estimation of the vehicle’s location.
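
As a rough sketch of the crater-detection step (the lab’s actual architecture is not described here, so the network shape and input resolution below are assumptions), a small convolutional network for labeling terrain patches as crater or non-crater could look like this:

```python
# Illustrative crater-patch classifier; layer sizes and 64x64 grayscale input
# are assumptions for this sketch, not the Aerospace Controls Lab's model.
import torch
import torch.nn as nn

class CraterNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # crater / not-crater logits

    def forward(self, x):                # x: (batch, 1, 64, 64) terrain patches
        return self.classifier(self.features(x).flatten(1))

net = CraterNet()
print(net(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 2])
```

In a full pipeline, detections from such a network would be fused with the prior measurements and known crater locations mentioned above to refine the vehicle’s position estimate.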

“When we return to the moon, we want to visit more interesting locations, but the problem is that more interesting can often mean more hazardous,” says Downes. “Terrain-relative navigation will allow us to explore these locations more safely.”

“Make it, don’t take it”

NASA also has its sights set on Mars — and with that objective comes a very different challenge: What if something breaks? Given that the estimated travel time to Mars is between 150 and 300 days, there is a relatively high chance that something will break or malfunction during flight. (Just ask Jim Lovell or Fred Haise, whose spacecraft needed serious repairs only 55 hours and 54 minutes into the Apollo 13 mission.)

Matthew Moraguez, a graduate student in Professor Olivier L. de Weck’s Engineering Systems Lab, wants to empower astronauts to manufacture whatever they need, whenever they need it. (“On the fly,” you could say).

“In-space manufacturing (ISM) — where astronauts can carry out the fabrication, assembly, and integration of components — could revolutionize this paradigm,” says Moraguez. “Since components wouldn’t be limited by launch-related design constraints, ISM could reduce the cost and improve the performance of existing space systems while also enabling entirely new capabilities.”

Historically, a key challenge facing ISM has been correctly pairing components with the manufacturing processes needed to produce them. Moraguez approached this problem by first defining the constraints created by a stressful launch environment, which can limit the size and weight of a payload. He then itemized the challenges that could potentially be alleviated by ISM and developed cost-estimating relationships and performance models to determine the exact break-even point at which ISM surpasses the current approach.
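
The break-even idea can be illustrated with a back-of-the-envelope comparison. If launch cost scales with mass, ISM pays off once the one-time mass of the manufacturing equipment is amortized over enough parts whose feedstock is lighter than their launch-ready equivalents. The function and numbers below are placeholder assumptions, not the actual cost-estimating relationships from this research.

```python
# Back-of-the-envelope ISM break-even sketch; all inputs are placeholders,
# not the actual cost-estimating relationships developed in the research.
def breakeven_parts(printer_kg, feedstock_kg_per_part, finished_kg_per_part):
    """Smallest number of parts for which launching a printer plus raw
    feedstock is lighter (hence cheaper at a fixed $/kg launch cost) than
    launching the finished parts. Assumes feedstock is lighter per part."""
    n = 1
    while printer_kg + n * feedstock_kg_per_part >= n * finished_kg_per_part:
        n += 1
    return n

# Example: a 50 kg printer and 1.2 kg of feedstock per part, versus 2.0 kg
# per finished part (finished parts carry packaging and launch-load margin).
print(breakeven_parts(50, 1.2, 2.0))  # -> 63 parts to break even
```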

Moraguez points to Made in Space, an additive manufacturing facility that is currently in use on the International Space Station. The facility produces tools and other materials as needed, reducing both the cost and the wait time of replenishing supplies from Earth. Moraguez is now developing physics-based manufacturing models that will determine the size, weight, and power required for the next generation of ISM equipment.

“We have been able to evaluate the commercial viability of ISM across a wide range of application areas,” says Moraguez. “Armed with this framework, we aim to determine the best components to produce with ISM and their appropriate manufacturing processes. We want to develop the technology to a point where it truly revolutionizes the future of spaceflight. Ultimately, it could allow humans to travel further into deep space for longer durations than ever before,” he says.

Partnering with industry

The MIT Instrumentation Lab was awarded the first contract for the Apollo program in 1961. In one brief paragraph on a Western Union telegram, the lab was charged with developing the program’s guidance and control system. Today the future of space exploration depends as much as ever on deep collaborations.

Boeing is a longstanding corporate partner of MIT, supporting such efforts as the Wright Brothers Wind Tunnel renovation and the New Engineering Education Transformation (NEET) program, which focuses on modern industry and real-world projects in support of MIT’s educational mission. In 2020, Boeing is slated to open the Aerospace and Autonomy Center in Kendall Square, which will focus on advancing enabling technologies for autonomous aircraft.

Just last spring the Institute announced a new relationship with Blue Origin in which it will begin planning and developing new payloads for missions to the moon. These new science experiments, rovers, power systems, and more will hitch a ride to the moon via Blue Moon, Blue Origin’s flexible lunar lander.

Working with IBM, MIT researchers are exploring the potential uses of artificial intelligence in space research. This year, IBM’s AI Research Week (Sept. 16-20) will feature an event, co-hosted with AeroAstro, in which researchers will pitch ideas for projects related to AI and the International Space Station.

“We are currently in an exciting new era marked by the development and growth of entrepreneurial private enterprises driving space exploration,” says Hastings. “This will lead to new and transformative ways for human beings to travel to space, to create new profit-making ventures in space for the world’s economy, and, of course, to lower the barrier of access to space so that many other countries can join this exciting new enterprise.”

September 5, 2019 | More

MIT report examines how to make technology work for society

Automation is not likely to eliminate millions of jobs any time soon — but the U.S. still needs vastly improved policies if Americans are to build better careers and share prosperity as technological changes occur, according to a new MIT report about the workplace.

The report, which represents the initial findings of MIT’s Task Force on the Work of the Future, punctures some conventional wisdom and builds a nuanced picture of the evolution of technology and jobs, the subject of much fraught public discussion.

The likelihood of robots, automation, and artificial intelligence (AI) wiping out huge sectors of the workforce in the near future is exaggerated, the task force concludes — but there is reason for concern about the impact of new technology on the labor market. In recent decades, technology has contributed to the polarization of employment, disproportionately helping high-skilled professionals while reducing opportunities for many other workers, and new technologies could exacerbate this trend.

Moreover, the report emphasizes, at a time of historic income inequality, a critical challenge is not necessarily a lack of jobs, but the low quality of many jobs and the resulting lack of viable careers for many people, particularly workers without college degrees. With this in mind, the work of the future can be shaped beneficially by new policies, renewed support for labor, and reformed institutions, not just new technologies. Broadly, the task force concludes, capitalism in the U.S. must address the interests of workers as well as shareholders.

“At MIT, we are inspired by the idea that technology can be a force for good. But if as a nation we want to make sure that today’s new technologies evolve in ways that help build a healthier, more equitable society, we need to move quickly to develop and implement strong, enlightened policy responses,” says MIT President L. Rafael Reif, who called for the creation of the Task Force on the Work of the Future in 2017.

“Fortunately, the harsh societal consequences that concern us all are not inevitable,” Reif adds. “Technologies embody the values of those who make them, and the policies we build around them can profoundly shape their impact. Whether the outcome is inclusive or exclusive, fair or laissez-faire, is therefore up to all of us. I am deeply grateful to the task force members for their latest findings and their ongoing efforts to pave an upward path.”

“There is a lot of alarmist rhetoric about how the robots are coming,” adds Elisabeth Beck Reynolds, executive director of the task force, as well as executive director of the MIT Industrial Performance Center. “MIT’s job is to cut through some of this hype and bring some perspective to this discussion.”

Reynolds also calls the task force’s interest in new policy directions “classically American in its willingness to consider innovation and experimentation.”

Anxiety and inequality

The core of the task force consists of a group of MIT scholars. Its research has drawn upon new data, expert knowledge of many technology sectors, and a close analysis of both technology-centered firms and economic data spanning the postwar era.

The report addresses several workplace complexities. Unemployment in the U.S. is low, yet workers have considerable anxiety, from multiple sources. One is technology: A 2018 survey by the Pew Research Center found that 65 to 90 percent of respondents in industrialized countries think computers and robots will take over many jobs done by humans, while less than a third think better-paying jobs will result from these technologies.

Another concern for workers is income stagnation: Adjusted for inflation, 92 percent of Americans born in 1940 earned more money than their parents, but only about half of people born in 1980 can say that.

“The persistent growth in the quantity of jobs has not been matched by an equivalent growth in job quality,” the task force report states.

Applications of technology have fed inequality in recent decades. High-tech innovations have displaced “middle-skilled” workers who perform routine tasks, from office assistants to assembly-line workers, but these innovations have complemented the activities of many white-collar workers in medicine, science and engineering, finance, and other fields. Technology has also not displaced lower-skilled service workers, leading to a polarized workforce. Higher-skill and lower-skill jobs have grown, middle-skill jobs have shrunk, and increased earnings have been concentrated among white-collar workers.

“Technological advances did deliver productivity growth over the last four decades,” the report states. “But productivity growth did not translate into shared prosperity.”

Indeed, says David Autor, who is the Ford Professor of Economics at MIT, associate head of MIT’s Department of Economics, and a co-chair of the task force, “We think people are pessimistic because they’re on to something. Although there’s no shortage of jobs, the gains have been so unequally distributed that most people have not benefited much. If the next four decades of automation are going to look like the last four decades, people have reason to worry.”

Productive innovations versus “so-so technology”

A big question, then, is what the next decades of automation have in store. As the report explains, some technological innovations are broadly productive, while others are merely “so-so technologies” — a term coined by economists Daron Acemoglu of MIT and Pascual Restrepo of Boston University to describe technologies that replace workers without markedly improving services or increasing productivity.

For instance, electricity and light bulbs were broadly productive, allowing the expansion of other types of work. But automated technology allowing for self-checkout at pharmacies or supermarkets merely replaces workers without notably increasing efficiency for the customer or overall productivity.

“That’s a strong labor-displacing technology, but it has very modest productivity value,” Autor says of these automated systems. “That’s a ‘so-so technology.’ The digital era has had fabulous technologies for skill complementarity [for white-collar workers], but so-so technologies for everybody else. Not all innovations that raise productivity displace workers, and not all innovations that displace workers do much for productivity.”

Several forces have contributed to this skew, according to the report. “Computers and the internet enabled a digitalization of work that made highly educated workers more productive and made less-educated workers easier to replace with machinery,” the authors write.

Given the mixed record of the last four decades, does the advent of robotics and AI herald a brighter future, or a darker one? The task force suggests the answer depends on how humans shape that future. New and emerging technologies will raise aggregate economic output and boost wealth, and offer people the potential for higher living standards, better working conditions, greater economic security, and improved health and longevity. But whether society realizes this potential, the report notes, depends critically on the institutions that transform aggregate wealth into greater shared prosperity instead of rising inequality.

One thing the task force does not foresee is a future where human expertise, judgment, and creativity are less essential than they are today.

“Recent history shows that key advances in workplace robotics — those that radically increase productivity — depend on breakthroughs in work design that often take years or even decades to achieve,” the report states.

As robots gain flexibility and situational adaptability, they will certainly take over a larger set of tasks in warehouses, hospitals, and retail stores — such as lifting, stocking, transporting, and cleaning, as well as awkward physical tasks that require picking, harvesting, stooping, or crouching.

The task force members believe such advances in robotics will displace relatively low-paid human tasks and boost the productivity of workers, whose attention will be freed to focus on higher-value-added work. The pace at which these tasks are delegated to machines will be hastened by slowing growth, tight labor markets, and the rapid aging of workforces in most industrialized countries, including the U.S.

And while machine learning — image classification, real-time analytics, data forecasting, and more — has improved, it may just alter jobs, not eliminate them: Radiologists do much more than interpret X-rays, for instance. The task force also observes that developers of autonomous vehicles, another hot media topic, have been “ratcheting back” their timelines and ambitions over the last year.

“The recent reset of expectations on driverless cars is a leading indicator for other types of AI-enabled systems as well,” says David A. Mindell, co-chair of the task force, professor of aeronautics and astronautics, and the Dibner Professor of the History of Engineering and Manufacturing at MIT. “These technologies hold great promise, but it takes time to understand the optimal combination of people and machines. And the timing of adoption is crucial for understanding the impact on workers.”

Policy proposals for the future

Still, if the worst-case scenario of a “job apocalypse” is unlikely, the continued deployment of so-so technologies could make the future of work worse for many people.

If people are worried that technologies could limit opportunity, social mobility, and shared prosperity, the report states, “Economic history confirms that this sentiment is neither ill-informed nor misguided. There is ample reason for concern about whether technological advances will improve or erode employment and earnings prospects for the bulk of the workforce.”

At the same time, the task force report finds reason for “tempered optimism,” asserting that better policies can significantly improve tomorrow’s work.

“Technology is a human product,” Mindell says. “We shape technological change through our choices of investments, incentives, cultural values, and political objectives.”

To this end, the task force focuses on a few key policy areas. One is renewed investment in postsecondary workforce education outside of the four-year college system — not just in STEM skills (science, technology, engineering, and math) but also in reading, writing, and the “social skills” of teamwork and judgment.

Community colleges are the biggest training providers in the country, with 12 million for-credit and non-credit students, and are a natural location for bolstering workforce education. A wide range of new models for gaining educational credentials is also emerging, the task force notes. The report also emphasizes the value of multiple types of on-the-job training programs for workers.

However, the report cautions, investments in education may be necessary but not sufficient for workers: “Hoping that ‘if we skill them, jobs will come,’ is an inadequate foundation for constructing a more productive and economically secure labor market.”

More broadly, therefore, the report argues that the interests of capital and labor need to be rebalanced. The U.S., it notes, “is unique among market economies in venerating pure shareholder capitalism,” even though workers and communities are business stakeholders too.

“Within this paradigm [of pure shareholder capitalism], the personal, social, and public costs of layoffs and plant closings should not play a critical role in firm decision-making,” the report states.

The task force recommends greater recognition of workers as stakeholders in corporate decision making. Redressing the decades-long erosion of worker bargaining power will require new institutions that bend the arc of innovation toward making workers more productive rather than less necessary. The report holds that the adversarial system of collective bargaining, enshrined in U.S. labor law adopted during the Great Depression, is overdue for reform.

The U.S. tax code can be altered to help workers as well. Right now, it favors investments in capital rather than labor — for instance, capital depreciation can be written off, and R&D investment receives a tax credit, whereas investments in workers produce no such equivalent benefits. The task force recommends new tax policy that would also incentivize investments in human capital, through training programs, for instance.

Additionally, the task force recommends restoring support for R&D to past levels and rebuilding U.S. leadership in the development of new AI-related technologies, “not merely to win but to lead innovation in directions that will benefit the nation: complementing workers, boosting productivity, and strengthening the economic foundation for shared prosperity.”

Ultimately the task force’s goal is to encourage investment in technologies that improve productivity, and to ensure that workers share in the prosperity that could result.

“There’s no question technological progress that raises productivity creates opportunity,” Autor says. “It expands the set of possibilities that you can realize. But it doesn’t guarantee that you will make good choices.”

Reynolds adds: “The question for firms going forward is: How are they going to improve their productivity in ways that can lead to greater quality and efficiency, and aren’t just about cutting costs and bringing in marginally better technology?”

Further research and analyses

In addition to Reynolds, Autor, and Mindell, the central group within MIT’s Task Force on the Work of the Future consists of 18 MIT professors representing all five Institute schools. Additionally, the project has a 22-person advisory board drawn from the ranks of industry leaders, former government officials, and academia; a 14-person research board of scholars; and eight graduate students. The task force also consulted with business executives, labor leaders, and community college leaders, among others.

The task force follows other influential MIT projects such as the Commission on Industrial Productivity, an intensive multiyear study of U.S. industry in the 1980s. That effort resulted in the widely read book, “Made in America,” as well as the creation of MIT’s Industrial Performance Center.

The current task force taps into MIT’s depth of knowledge across a full range of technologies, as well as its strengths in the social sciences.

“MIT is engaged in developing frontier technology,” Reynolds says. “Not necessarily what will be introduced tomorrow, but five, 10, or 25 years from now. We do see what’s on the horizon, and our researchers want to bring realism and context to the public discourse.”

The current report is an interim finding from the task force; the group plans to conduct additional research over the next year, and then will issue a final version of the report.

“What we’re trying to do with this work,” Reynolds concludes, “is to provide a holistic perspective, which is not just about the labor market and not just about technology, but brings it all together, for a more rational and productive discussion in the public sphere.”

September 4, 2019 | More

Students spearhead group to enhance the graduate experience

What do graduate students in engineering want?

This was the question before a new advisory group launched by the MIT School of Engineering in late 2017 — the school’s first composed entirely of graduate students. This fall the group is rolling out its inaugural initiatives: a graduate-level leadership minor or certificate and a set of recommendations intended to improve advisor-advisee relations.

GradSAGE (short for Graduate Student Advisory Group for Engineering) was established by Anantha Chandrakasan just months after he became dean of the MIT School of Engineering.

“I thought it would be great to get student engagement as we shaped new initiatives, and to learn their perspectives on important issues and challenges they face,” says Chandrakasan. “In a sense, we are listening to our customers.”

The dean already counted department heads and other school stakeholders among his advisors. But Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science, felt he was missing the voice of students.

“The beauty of this group is that the students came up with a list of topics and priorities for us to focus on,” Chandrakasan says. “This was an opportunity for them to tell me what was most important, and while I wasn’t surprised by their choices, I was surprised by how passionately they felt about these areas.”

Soft skills matter

The very first gathering of GradSAGE, on Dec. 5, 2017, was like “a brainstorming-schmooze session,” recalls Parker Vascik, a fifth-year graduate student in aeronautics and astronautics (AeroAstro). “But we quickly moved toward identifying specific topics where we felt we could make significant changes in the academic culture and environment.”

One topic that immediately seized the interest of the group involved expanded opportunities to learn and practice leadership abilities.

“Grad students come to MIT hoping to have an impact on the world, and they are probably in the top 1 percent in terms of technical skills,” says Lucio Milanese, a fourth-year graduate student in nuclear science and engineering. “But there are nontechnical skills, soft skills, that are essential to communicating ideas and managing people that are just as important in solving really important problems.”

GradSAGE research suggested MIT engineering graduate students could benefit from more structured opportunities to learn and practice soft skills.

“There is an ocean of knowledge to acquire around teamwork — giving and receiving feedback, conflict resolution, growth mindset — that the basic graduate school curriculum doesn’t address,” says Dhanushkodi Mariappan, a fourth-year graduate student in mechanical engineering. After working in industry and launching his own startup before grad school, Mariappan felt strongly about what was needed.

“A formal leadership program could propel MIT graduate students in their careers, whether they are interested in taking on jobs in industry or in academia, where in some sense they will be running labs or research groups that are like little companies.”

A readymade leadership curriculum

Potential solutions to the leadership education challenge lay close at hand. Mariappan pointed the group to the Bernard M. Gordon-MIT Engineering Leadership Program (GEL), a center focused on helping undergraduates acquire leadership skills. Mariappan made particular note of a GEL course he had taken, 6.928 (Leading Creative Teams), taught by David Niño.

“The class was eye-opening,” says Mariappan. “We were introduced to frameworks that can be applied to solve problems in an incredible range of real-world situations.” It was a course with a blueprint for the kind of curriculum GradSAGE hoped to advance, so Mariappan recruited Niño to the effort.

“To achieve something great in engineering takes a team, but engineers often don’t know how to develop a vision, recruit a talented team, facilitate group decisions, negotiate, delegate, and lead everyone in the same direction,” says Niño, who now works closely with GradSAGE. “Our courses involve practice of these leadership skills, so students can continue to evolve after graduation, and apply these over a lifetime.”

As a result of this collaboration, a new option for satisfying a doctoral minor requirement draws on GEL’s classes, including new ones offered this fall that can serve as cornerstones for the minor: 6.S978 (Negotiation and Influence Skills for Technical Leaders) and 6.S976 (Engineering Leadership in the Age of Artificial Intelligence). Students whose doctoral programs do not permit a minor can instead pursue the GEL Leadership Certificate, which will be launched in the spring of 2020. Leadership classes taken before then will be retroactively recognized and can count toward the certificate.

“We envision hundreds of graduate students pursuing some sort of leadership development experience — not just in the School of Engineering but in the other MIT schools,” says Milanese. “In 10 to 15 years, we want employers to recognize a unique brand of MIT leadership and value MIT graduate students as nearly universally possessing outstanding leadership skills.”

“A very special relationship”

The second major thrust of GradSAGE focused on an aspect of graduate life universally acknowledged as critical.

“Advisor-advisee relations arose in every single GradSAGE discussion as a root issue for nearly everything graduates experience, from mental health to taking on leadership opportunities,” says Vascik. “Graduate students have a very special relationship with one person who is boss, mentor, and a little bit of family, and this person guides your destiny while you’re here.”

“Most problems between advisors and students boil down to two issues: poor advisor-advisee fit and poor communication,” according to Jessica Boles, who is starting her third year as a graduate student in electrical engineering and computer science (EECS).

“Many students arrive at MIT thinking, ‘Here is a field I’d like to work in, here’s a prominent person in the field I’d like to work with,’” says Boles. “But there are lots of other things to consider: Who will directly mentor them, what’s the work environment like, what are the advisor’s expectations and policies?”

From informal surveys, Boles and her GradSAGE colleagues knew that an unclear understanding of an advisor’s standards and styles could lead to friction, disappointment, stress, lab-switching, and sometimes even departure from MIT.

Different professors have starkly different approaches to dealing with their graduate students, notes Vascik. “One might like to see students three times a week and micromanage research, while another wants to get together once per semester,” he says. “Factors such as these can dramatically shape a student’s experience in graduate school, and we believe these styles and expectations should be communicated to incoming and current students more effectively.”

Transparency and communication

Approaching the challenge like engineers, the GradSAGE students developed flow charts of specific advisor-advisee problems, interviewed faculty, reviewed literature, and derived a set of potential mitigations. They ran their proposals by the Office of Graduate Education, MIT Chancellor Cynthia Barnhart, MIT Vice Chancellor Ian Waitz, and then presented their recommendations to Chandrakasan. In a matter of months, the group had approval to pilot several initiatives.

Among these efforts: asking advisors to post brief statements online about their philosophies and policies related to research advising (an effort now being explored within the AeroAstro and EECS departments); and centralizing and publicizing resources for graduate students who encounter difficulties with their advisors. In addition, Boles produced a video that details the kinds of questions admitted students should consider during the graduate school selection process, which she unveiled online to admitted EECS students just prior to MIT’s visit weekend last spring.

“It was well-received, especially among the populations of students we really hope to reach: international students, underrepresented minorities, and students without prior graduate school experience,” she says. “So many more students sought information on the roles advisors would play in their research and career, and on the work environments in potential research labs, including expectations around publications, work hours, and group interactions.” A new, enhanced video, intended for all incoming engineering graduate students, is in the works.

“Our goal is to increase transparency of advising style so we can ensure better advisor-advisee fits from the beginning,” says Boles. Down the line, adds Vascik, this work could translate to reduced stress among graduate students, fewer students switching labs, and more cohesive and productive labs. “Prospective students stand to benefit the most, because with online information, and their ability to ask smart questions, they will have a good sense before they arrive of what awaits them here.”

For both the advising and leadership GradSAGE ventures, this fall marks just the start of a longer process. Growing these programs will take both time and money, which Chandrakasan seems intent on providing. “What we have done so far is expose important issues, and now it’s a matter of actually converting them into actionable items, which we must do,” he says.

September 4, 2019 | More

MIT engineers build advanced microprocessor out of carbon nanotubes

After years of tackling numerous design and manufacturing challenges, MIT researchers have built a modern microprocessor from carbon nanotube transistors, which are widely seen as a faster, greener alternative to their traditional silicon counterparts.

The microprocessor, described today in the journal Nature, can be built using traditional silicon-chip fabrication processes, representing a major step toward making carbon nanotube microprocessors more practical.

Silicon transistors — critical microprocessor components that switch between 1 and 0 bits to carry out computations — have carried the computer industry for decades. As predicted by Moore’s Law, industry has been able to shrink down and cram more transistors onto chips every couple of years to help carry out increasingly complex computations. But experts now foresee a time when silicon transistors will stop shrinking, and become increasingly inefficient.

Making carbon nanotube field-effect transistors (CNFETs) has become a major goal for building next-generation computers. Research indicates CNFETs have properties that promise around 10 times the energy efficiency and far greater speeds compared to silicon. But when fabricated at scale, the transistors often come with many defects that affect performance, so they remain impractical.

The MIT researchers have invented new techniques to dramatically limit defects and enable full functional control in fabricating CNFETs, using processes in traditional silicon chip foundries. They demonstrated a 16-bit microprocessor with more than 14,000 CNFETs that performs the same tasks as commercial microprocessors. The Nature paper describes the microprocessor design and includes more than 70 pages detailing the manufacturing methodology.

The microprocessor is based on the RISC-V open-source chip architecture, which specifies the set of instructions a microprocessor can execute. The researchers’ microprocessor executed the full instruction set accurately. It also ran a modified version of the classic “Hello, World!” program, printing out, “Hello, World! I am RV16XNano, made from CNTs.”

“This is by far the most advanced chip made from any emerging nanotechnology that is promising for high-performance and energy-efficient computing,” says co-author Max M. Shulaker, the Emanuel E Landsman Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) and a member of the Microsystems Technology Laboratories. “There are limits to silicon. If we want to continue to have gains in computing, carbon nanotubes represent one of the most promising ways to overcome those limits. [The paper] completely re-invents how we build chips with carbon nanotubes.”

Joining Shulaker on the paper are: first author and postdoc Gage Hills, graduate students Christian Lau, Andrew Wright, Mindy D. Bishop, Tathagata Srimani, Pritpal Kanhaiya, Rebecca Ho, and Aya Amer, all of EECS; Arvind, the Johnson Professor of Computer Science and Engineering and a researcher in the Computer Science and Artificial Intelligence Laboratory; Anantha Chandrakasan, the dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science; and Samuel Fuller, Yosi Stein, and Denis Murphy, all of Analog Devices.

Fighting the “bane” of CNFETs

The microprocessor builds on a previous iteration designed by Shulaker and other researchers six years ago that had only 178 CNFETs and ran on a single bit of data. Since then, Shulaker and his MIT colleagues have tackled three specific challenges in producing the devices: material defects, manufacturing defects, and functional issues. Hills did the bulk of the microprocessor design, while Lau handled most of the manufacturing.

For years, the defects intrinsic to carbon nanotubes have been a “bane of the field,” Shulaker says. Ideally, CNFETs need semiconducting properties to switch their conductivity on and off, corresponding to the bits 1 and 0. But unavoidably, a small portion of carbon nanotubes will be metallic, and will slow or stop the transistor from switching. To be robust to those failures, advanced circuits would need carbon nanotubes at around 99.999999 percent purity, which is virtually impossible to produce today.

The researchers came up with a technique called DREAM (an acronym for “designing resiliency against metallic CNTs”), which positions metallic CNFETs so that they won’t disrupt computing. In doing so, they relaxed that stringent purity requirement by around four orders of magnitude — or 10,000 times — meaning they only need carbon nanotubes at about 99.99 percent purity, which is currently possible.
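The arithmetic behind those figures is worth making explicit. A minimal sketch, using only the purity numbers quoted above:

```python
strict_purity = 0.99999999    # purity needed without DREAM
relaxed_purity = 0.9999       # purity sufficient with DREAM

strict_metallic = 1 - strict_purity     # ~1e-8: one metallic CNT in 100 million
relaxed_metallic = 1 - relaxed_purity   # ~1e-4: one metallic CNT in 10,000

print(f"tolerable metallic fraction, no DREAM:   {strict_metallic:.0e}")
print(f"tolerable metallic fraction, with DREAM: {relaxed_metallic:.0e}")
print(f"relaxation: {relaxed_metallic / strict_metallic:,.0f}x")   # ~10,000x
```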

Designing circuits basically requires a library of different logic gates attached to transistors that can be combined to, say, create adders and multipliers — like combining letters in the alphabet to create words. The researchers realized that the metallic carbon nanotubes impacted different pairings of these gates differently. A single metallic carbon nanotube in gate A, for instance, may break the connection between A and B. But several metallic carbon nanotubes in gate B may not impact any of its connections.

In chip design, there are many ways to implement code onto a circuit. The researchers ran simulations to determine which gate combinations would, and which would not, remain robust to metallic carbon nanotubes. They then customized a chip-design program to automatically learn the combinations least likely to be affected by metallic carbon nanotubes. When designing a new chip, the program uses only the robust combinations and ignores the vulnerable ones.
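To make that filtering step concrete, here is a deliberately toy sketch of the idea. The gate library, the failure test, and all numbers are invented placeholders; the real system runs circuit-level simulations inside a customized chip-design program:

```python
import itertools
import random

GATES = ["NAND", "NOR", "INV", "AOI"]   # toy standard-cell library
TRIALS = 1000                            # sampled metallic-CNT placements
METALLIC_FRACTION = 1e-4                 # ~99.99 percent purity, per the article

def connection_fails(gate_a, gate_b, rng):
    """Toy stand-in for a circuit simulation: some gate pairings are more
    sensitive to a metallic CNT landing in a critical position."""
    sensitivity = {("NAND", "NOR"): 50, ("INV", "AOI"): 200}.get((gate_a, gate_b), 5)
    return rng.random() < METALLIC_FRACTION * sensitivity

rng = random.Random(0)
robust_pairs = [
    (a, b)
    for a, b in itertools.product(GATES, repeat=2)
    if not any(connection_fails(a, b, rng) for _ in range(TRIALS))
]

# A DREAM-style design flow would then restrict placement and routing to
# these pairings and ignore the vulnerable ones.
print("pairings kept for design:", robust_pairs)
```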

“The ‘DREAM’ pun is very much intended, because it’s the dream solution,” Shulaker says. “This allows us to buy carbon nanotubes off the shelf, drop them onto a wafer, and just build our circuit like normal, without doing anything else special.”

Exfoliating and tuning

CNFET fabrication starts with depositing carbon nanotubes in a solution onto a wafer with predesigned transistor architectures. However, some carbon nanotubes inevitably stick together randomly to form big bundles — like strands of spaghetti tangled into little balls — creating large particle contamination on the chip.

To remove that contamination, the researchers created RINSE (for “removal of incubated nanotubes through selective exfoliation”). The wafer is pretreated with an agent that promotes carbon nanotube adhesion. Then, the wafer is coated with a certain polymer and dipped in a special solvent that washes away the polymer, carrying off only the big bundles while the single carbon nanotubes remain stuck to the wafer. The technique yields about a 250-fold reduction in particle density on the chip compared to similar methods.

Lastly, the researchers tackled common functional issues with CNFETs. Binary computing requires two types of transistors: “N” types, which turn on with a 1 bit and off with a 0 bit, and “P” types, which do the opposite. Traditionally, making the two types out of carbon nanotubes has been challenging, often yielding transistors that vary in performance. To solve this, the researchers developed a technique called MIXED (for “metal interface engineering crossed with electrostatic doping”), which precisely tunes transistors for function and optimization.

In this technique, they attach certain metals to each transistor — platinum or titanium — which allows them to fix that transistor as P or N. Then, they coat the CNFETs in an oxide compound through atomic-layer deposition, which allows them to tune the transistors’ characteristics for specific applications. Servers, for instance, often require transistors that switch very fast but consume more power. Wearables and medical implants, on the other hand, may use slower, low-power transistors.

The main goal is to get the chips out into the real world. To that end, the researchers have now started implementing their manufacturing techniques in a silicon chip foundry through a program run by the Defense Advanced Research Projects Agency (DARPA), which supported the research. Although no one can say when chips made entirely from carbon nanotubes will hit the shelves, Shulaker says it could be fewer than five years. “We think it’s no longer a question of if, but when,” he says.

The work was also supported by Analog Devices, the National Science Foundation, and the Air Force Research Laboratory.

August 28, 2019 | More

IBM gives artificial intelligence computing at MIT a lift

IBM designed Summit, the fastest supercomputer on Earth, to run the calculation-intensive models that power modern artificial intelligence (AI). Now MIT is about to get a slice.

IBM pledged earlier this year to donate an $11.6 million computer cluster to MIT modeled after the architecture of Summit, the supercomputer it built at Oak Ridge National Laboratory for the U.S. Department of Energy. The donated cluster is expected to come online this fall when the MIT Stephen A. Schwarzman College of Computing opens its doors, allowing researchers to run more elaborate AI models to tackle a range of problems, from developing a better hearing aid to designing a longer-lived lithium-ion battery.

“We’re excited to see a range of AI projects at MIT get a computing boost, and we can’t wait to see what magic awaits,” says John E. Kelly III, executive vice president of IBM, who announced the gift in February at MIT’s launch celebration of the MIT Schwarzman College of Computing.

IBM has named the cluster Satori, a Zen Buddhism term for “sudden enlightenment.” Physically the size of a shipping container, Satori is intellectually closer to a Ferrari, capable of zipping through 2 quadrillion calculations per second. That’s the equivalent of each person on Earth performing more than 10 million multiplication problems each second for an entire year, making Satori nimble enough to join the middle ranks of the world’s 500 fastest computers.

Rapid progress in AI has fueled a relentless demand for computing power to train more elaborate models on ever-larger datasets. At the same time, federal funding for academic computing facilities has been on a three-decade decline. Christopher Hill, director of MIT’s Research Computing Project, puts the current demand at MIT at five times what the Institute can offer.

“IBM’s gift couldn’t come at a better time,” says Maria Zuber, a geophysics professor and MIT’s vice president of research. “The opening of the new college will only increase demand for computing power. Satori will go a long way in helping to ease the crunch.”

The computing gap was immediately apparent to John Cohn, chief scientist at the MIT-IBM Watson AI Lab, when the lab opened last year. “The cloud alone wasn’t giving us all that we needed for challenging AI training tasks,” he says. “The expense and long run times made us ask, could we bring more compute power here, to MIT?”

It’s a gap Satori was built to fill, with IBM Power9 processors, a fast internal network, large memory, and 256 graphics processing units (GPUs). Designed to rapidly process video-game images, graphics processors have become the workhorse for modern AI applications. Satori, like Summit, has been configured to wring as much power from each GPU as possible.

IBM’s gift follows a history of collaborations with MIT that have paved the way for computing breakthroughs. In 1956, IBM helped launch the MIT Computation Center with the donation of an IBM 704, the first mass-produced computer to handle complex math. Nearly three decades later, IBM helped fund Project Athena, an initiative that brought networked computing to campus. Together, these initiatives spawned time-sharing operating systems, foundational programming languages, instant messaging, and the network-security protocol Kerberos, among other technologies.

More recently, IBM agreed to invest $240 million over 10 years to establish the MIT-IBM Watson AI Lab, a founding sponsor of MIT’s Quest for Intelligence. In addition to filling the computing gap at MIT, Satori will be configured to allow researchers to exchange data with all major commercial cloud providers, as well as prepare their code to run on IBM’s Summit supercomputer.

Josh McDermott, an associate professor at MIT’s Department of Brain and Cognitive Sciences, is currently using Summit to develop a better hearing aid, but before he and his students could run their models, they spent countless hours getting the code ready. In the future, Satori will expedite the process, he says, and in the longer term, make more ambitious projects possible.

“We’re currently building computer systems to model one sensory system but we’d like to be able to build models that can see, hear and touch,” he says. “That requires a much bigger scale.”

Richard Braatz, the Edwin R. Gilliland Professor at MIT’s Department of Chemical Engineering, is using AI to improve lithium-ion battery technologies. He and his colleagues recently developed a machine learning algorithm to predict a battery’s lifespan from past charging cycles, and now they’re developing multiscale simulations to test new materials and designs for extending battery life. With a boost from a computer like Satori, the simulations could capture key physical and chemical processes that accelerate discovery. “With better predictions, we can bring new ideas to market faster,” he says.

Satori will be housed at the Massachusetts Green High Performance Computing Center (MGHPCC), a former silk mill turned data center in Holyoke, Massachusetts, and will connect to MIT via dedicated, high-speed fiber optic cables. At 150 kilowatts, Satori will consume as much energy as a mid-sized building at MIT, but its carbon footprint will be nearly fully offset by the use of hydro and nuclear power at the Holyoke facility. Equipped with energy-efficient cooling, lighting, and power distribution, the MGHPCC was the first academic data center to receive LEED-platinum status, the highest green-building award, in 2011.

“Siting Satori at Holyoke minimizes its carbon emissions and environmental impact without compromising its scientific impact,” says John Goodhue, executive director of the MGHPCC.

Visit the Satori website for more information.

August 26, 2019 | More

A battery-free sensor for underwater exploration

To investigate the vastly unexplored oceans covering most of our planet, researchers aim to build a submerged network of interconnected sensors that send data to the surface — an underwater “internet of things.” But how to supply constant power to scores of sensors designed to stay for long durations in the deep ocean?

MIT researchers have an answer: a battery-free underwater communication system that uses near-zero power to transmit sensor data. The system could be used to monitor sea temperatures to study climate change and track marine life over long periods — and even sample waters on distant planets. They are presenting the system at the SIGCOMM conference this week, in a paper that has won the conference’s “best paper” award.

The system makes use of two key phenomena. One, called the “piezoelectric effect,” occurs when vibrations in certain materials generate an electrical charge. The other is “backscatter,” a communication technique commonly used for RFID tags, which transmits data by reflecting modulated wireless signals off a tag and back to a reader.

In the researchers’ system, a transmitter sends acoustic waves through water toward a piezoelectric sensor that has stored data. When the wave hits the sensor, the material vibrates and stores the resulting electrical charge. Then the sensor uses the stored energy to reflect a wave back to a receiver — or it doesn’t reflect one at all. Alternating between reflecting and not reflecting encodes the bits of the transmitted data: for a reflected wave, the receiver decodes a 1; for no reflected wave, the receiver decodes a 0.

“Once you have a way to transmit 1s and 0s, you can send any information,” says co-author Fadel Adib, an assistant professor in the MIT Media Lab and the Department of Electrical Engineering and Computer Science and founding director of the Signal Kinetics Research Group. “Basically, we can communicate with underwater sensors based solely on the incoming sound signals whose energy we are harvesting.”

The researchers demonstrated their Piezo-Acoustic Backscatter System in an MIT pool, using it to collect water temperature and pressure measurements. The system was able to transmit 3 kilobytes per second of accurate data from two sensors simultaneously at a distance of 10 meters between sensor and receiver.

Applications go beyond our own planet. The system, Adib says, could be used to collect data in the recently discovered subsurface ocean on Saturn’s largest moon, Titan. In June, NASA announced the Dragonfly mission to send a rover in 2026 to explore the moon, sampling water reservoirs and other sites.

“How can you put a sensor under the water on Titan that lasts for long periods of time in a place that’s difficult to get energy?” says Adib, who co-wrote the paper with Media Lab researcher JunSu Jang. “Sensors that communicate without a battery open up possibilities for sensing in extreme environments.”

Preventing deformation

Inspiration for the system hit while Adib was watching “Blue Planet,” a nature documentary series exploring various aspects of sea life. Oceans cover about 72 percent of Earth’s surface. “It occurred to me how little we know of the ocean and how marine animals evolve and procreate,” he says. Internet-of-things (IoT) devices could aid that research, “but underwater you can’t use Wi-Fi or Bluetooth signals … and you don’t want to put batteries all over the ocean, because that raises issues with pollution.”

That led Adib to piezoelectric materials, which have been used in microphones and other devices for about 150 years. They produce a small voltage in response to vibrations. But that effect is also reversible: applying voltage causes the material to deform. If placed underwater, that deformation produces a pressure wave that travels through the water. Piezoelectric elements are often used to detect sunken vessels, fish, and other underwater objects.

“That reversibility is what allows us to develop a very powerful underwater backscatter communication technology,” Adib says.

Communication relies on controlling whether the piezoelectric resonator deforms when an acoustic wave strikes it. At the heart of the system is a submerged node, a circuit board that houses a piezoelectric resonator, an energy-harvesting unit, and a microcontroller. Any type of sensor can be integrated into the node by programming the microcontroller. An acoustic projector (transmitter) and underwater listening device, called a hydrophone (receiver), are placed some distance away.

Say the sensor wants to send a 0 bit. When the transmitter sends its acoustic wave at the node, the piezoelectric resonator absorbs the wave and naturally deforms, and the energy harvester stores a little charge from the resulting vibrations. The receiver then sees no reflected signal and decodes a 0.

However, when the sensor wants to send a 1 bit, the behavior changes. When the transmitter sends a wave, the microcontroller uses the stored charge to send a small voltage to the piezoelectric resonator. That voltage reorients the material’s structure in a way that stops it from deforming, so it reflects the wave instead. Sensing a reflected wave, the receiver decodes a 1.
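Stripped of the physics, the scheme is a one-bit acoustic backscatter protocol. A minimal sketch under stated assumptions (the function names are ours; real signals, noise, and timing are omitted):

```python
# Minimal sketch of the piezo-acoustic backscatter scheme described above:
# the node absorbs the incoming pulse for a 0 (harvesting its energy) and
# reflects it for a 1; the receiver maps "echo" to 1 and "no echo" to 0.

def node_response(bit: int) -> bool:
    """Return True if the node reflects the incoming acoustic pulse."""
    if bit == 1:
        # Microcontroller applies stored voltage, stopping the resonator
        # from deforming so the pulse bounces back to the hydrophone.
        return True
    # Resonator deforms and absorbs the pulse; the harvester stores charge.
    return False

def backscatter(bits):
    """Transmitter pings once per bit; receiver decodes echoes."""
    return [1 if node_response(b) else 0 for b in bits]

payload = [0, 1, 1, 0, 1]               # sensor readings to send
assert backscatter(payload) == payload  # receiver recovers the data
print("decoded:", backscatter(payload))
```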

Long-term deep-sea sensing

The transmitter and receiver must have power but can be planted on ships or buoys, where batteries are easier to replace, or connected to outlets on land. One transmitter and one receiver can gather information from many sensors covering one area or many areas.

“When you’re tracking a marine animal, for instance, you want to track it over a long range and want to keep the sensor on them for a long period of time. You don’t want to worry about the battery running out,” Adib says. “Or, if you want to track temperature gradients in the ocean, you can get information from sensors covering a number of different places.”

Another interesting application is monitoring brine pools, large accumulations of extremely salty water that collect in basins on the seafloor and are difficult to monitor long-term. They exist, for instance, on the Antarctic Shelf, where salt settles during the formation of sea ice; sensors there could aid in studying melting ice and how marine life interacts with the pools. “We could sense what’s happening down there, without needing to keep hauling sensors up when their batteries die,” Adib says.

Polly Huang, a professor of electrical engineering at National Taiwan University, praised the work for its technical novelty and potential impact on environmental science. “This is a cool idea,” Huang says. “It’s not news one uses piezoelectric crystals to harvest energy … [but is the] first time to see it being used as a radio at the same time [which] is unheard of to the sensor network/system research community. Also interesting and unique is the hardware design and fabrication. The circuit and the design of the encapsulation are both sound and interesting.”

While noting that the system still needs more experimentation, especially in sea water, Huang adds that “this might be the ultimate solution for researchers in marine biology, oceanography, or even meteorology — those in need of long-term, low-human-effort underwater sensing.”

Next, the researchers aim to demonstrate that the system can work at farther distances and communicate with more sensors simultaneously. They’re also hoping to test if the system can transmit sound and low-resolution images.

The work is sponsored, in part, by the U.S. Office of Naval Research.

August 20, 2019 | More

Using Wall Street secrets to reduce the cost of cloud infrastructure

Stock market investors often rely on financial risk theories that help them maximize returns while minimizing financial loss due to market fluctuations. These theories help investors maintain a balanced portfolio to ensure they’ll never lose more money than they’re willing to part with at any given time.

Inspired by those theories, MIT researchers, in collaboration with Microsoft, have developed a “risk-aware” mathematical model that could improve the performance of cloud-computing networks across the globe. The stakes are high: cloud infrastructure is extremely expensive and consumes a lot of the world’s energy.

Their model takes into account failure probabilities of links between data centers worldwide — akin to predicting the volatility of stocks. Then, it runs an optimization engine to allocate traffic through optimal paths to minimize loss, while maximizing overall usage of the network.

The model could help major cloud-service providers — such as Microsoft, Amazon, and Google — better utilize their infrastructure. The conventional approach is to keep links idle to handle unexpected traffic shifts resulting from link failures, which wastes energy, bandwidth, and other resources. The new model, called TeaVar, instead guarantees that for a target percentage of time — say, 99.9 percent — the network can handle all data traffic, so there is no need to keep any links idle. During the remaining 0.1 percent of the time, the model keeps any dropped data to a minimum.

In experiments based on real-world data, the model supported three times the traffic throughput of traditional traffic-engineering methods, while maintaining the same high level of network availability. A paper describing the model and results will be presented at the ACM SIGCOMM conference this week.

Better network utilization can save service providers millions of dollars, but benefits will “trickle down” to consumers, says co-author Manya Ghobadi, the TIBCO Career Development Assistant Professor in the MIT Department of Electrical Engineering and Computer Science and a researcher at the Computer Science and Artificial Intelligence Laboratory (CSAIL).

“Having greater utilized infrastructure isn’t just good for cloud services — it’s also better for the world,” Ghobadi says. “Companies don’t have to purchase as much infrastructure to sell services to customers. Plus, being able to efficiently utilize datacenter resources can save enormous amounts of energy consumption by the cloud infrastructure. So, there are benefits both for the users and the environment at the same time.”

Joining Ghobadi on the paper are her students Jeremy Bogle and Nikhil Bhatia, both of CSAIL; Ishai Menache and Nikolaj Bjorner of Microsoft Research; and Asaf Valadarsky and Michael Schapira of Hebrew University.

On the money

Cloud service providers use networks of fiber optic cables running underground, connecting data centers in different cities. To route traffic, the providers rely on “traffic engineering” (TE) software that optimally allocates data bandwidth — the amount of data that can be transferred at one time — through all network paths.

The goal is to ensure maximum availability to users around the world. But that’s challenging when some links can fail unexpectedly, due to drops in optical signal quality resulting from outages or lines cut during construction, among other factors. To stay robust to failure, providers keep many links at very low utilization, lying in wait to absorb full data loads from downed links.

Thus, it’s a tricky tradeoff between network availability and utilization, which would enable higher data throughputs. And that’s where traditional TE methods fail, the researchers say. They find optimal paths based on various factors, but never quantify the reliability of links. “They don’t say, ‘This link has a higher probability of being up and running, so that means you should be sending more traffic here,’” Bogle says. “Most links in a network are operating at low utilization and aren’t sending as much traffic as they could be sending.”

The researchers instead designed a TE model that adapts core mathematics from “conditional value at risk,” a risk-assessment measure that quantifies the average loss in worst-case scenarios. With stock investing, if you have a one-day 99 percent conditional value at risk of $50, your expected loss in the worst 1 percent of scenarios on that day is $50. But 99 percent of the time, you’ll do much better. That measure is used for investing in the stock market — which is notoriously difficult to predict.
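Pinned down in code, the measure is simple to compute empirically. A minimal sketch with illustrative numbers; in TeaVar the “loss” would be dropped network traffic rather than dollars:

```python
import random

def cvar(losses, alpha=0.99):
    """Mean loss over the worst (1 - alpha) fraction of scenarios."""
    ordered = sorted(losses, reverse=True)        # worst losses first
    tail = max(1, int(len(ordered) * (1 - alpha)))
    return sum(ordered[:tail]) / tail

random.seed(0)
# 10,000 simulated one-day dollar losses (illustrative distribution only)
samples = [random.expovariate(1 / 20) for _ in range(10_000)]

# "Your expected loss in the worst 1 percent of scenarios":
print(f"one-day 99% CVaR: ${cvar(samples):.2f}")
```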

“But the math is actually a better fit for our cloud infrastructure setting,” Ghobadi says. “Mostly, link failures are due to the age of equipment, so the probabilities of failure don’t change much over time. That means our probabilities are more reliable, compared to the stock market.”

Risk-aware model

In networks, data bandwidth shares are analogous to invested “money,” and pieces of network equipment with different probabilities of failure are the “stocks,” whose values change unpredictably. Using the underlying formulas, the researchers designed a “risk-aware” model that, like its financial counterpart, guarantees data will reach its destination 99.9 percent of the time, but keeps traffic loss to a minimum during the 0.1 percent worst-case failure scenarios. That allows cloud providers to tune the availability-utilization tradeoff.

The researchers statistically mapped three years’ worth of network signal strength from the Microsoft network that connects its data centers to a probability distribution of link failures. The input is the network topology represented as a graph, with source-destination data flows connected through lines (links) and nodes (cities), and each link assigned a bandwidth.

Failure probabilities were obtained by checking the signal quality of every link every 15 minutes. If the signal quality ever dipped below a receiving threshold, they considered that a link failure. Anything above meant the link was up and running. From that, the model generated an average time that each link was up or down, and calculated a failure probability — or “risk” — for each link at each 15-minute time window. From those data, it was able to predict when risky links would fail at any given window of time.
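A minimal sketch of that bookkeeping, with invented link names, readings, and threshold (the real model was fit to three years of Microsoft signal-quality logs):

```python
THRESHOLD = -28.0   # hypothetical receiving threshold for signal quality

# (link, 15-minute window) -> signal-quality samples from historical logs
readings = {
    ("bos-sea", "00:00"): [-25.1, -26.0, -30.2, -25.8],
    ("bos-sea", "00:15"): [-25.3, -25.9, -26.1, -25.7],
}

# A sample below the threshold counts as the link being down; the fraction
# of down samples in each window is that window's failure probability.
failure_prob = {
    key: sum(1 for s in samples if s < THRESHOLD) / len(samples)
    for key, samples in readings.items()
}

print(failure_prob)  # {('bos-sea', '00:00'): 0.25, ('bos-sea', '00:15'): 0.0}
```

A TE engine can then steer more traffic over links whose risk in the upcoming window is low.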

The researchers tested the model against other TE software on simulated traffic sent through networks from Google, IBM, AT&T, and others that span the globe. The researchers created various failure scenarios based on their probability of occurrence. Then, they sent simulated and real-world data demands through the network and cued their models to start allocating bandwidth.

The researchers’ model kept reliable links working to near full capacity, while steering data clear of riskier links. Over traditional approaches, their model ran three times as much data through the network, while still ensuring all data got to its destination. The code is freely available on GitHub.

August 19, 2019 | More

Yearlong hackathon engages nano community around health issues

A traditional hackathon focuses on computer science and programming, attracts coders in droves, and spans an entire weekend with three stages: problem definition, solution development, and business formation.

Hacking Nanomedicine, however, recently brought together graduate and postgraduate students for a single morning of hands-on problem solving and innovation in health care while offering networking opportunities across departments and research interests. Moreover, the July hackathon was the first in a series of three half-day events structured to allow ideas to develop over time.

This deliberately deconstructed, yearlong process promotes necessary ebb and flow as teams shift in scope and recruit new members throughout each stage. “We believe this format is a powerful combination of intense, collaborative, multidisciplinary interactions, separated by restful research periods for reflecting on new ideas, allowing additional background research to take place and enabling additional people to be pulled into the fray as ideas take shape,” says Brian Anthony, associate director of MIT.nano and principal research scientist in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Mechanical Engineering.

Organized by Marble Center for Cancer Nanomedicine Assistant Director Tarek Fadel, Foundation Medicine’s Michael Woonton, and MIT Hacking Medicine Co-Directors Freddy Nguyen and Kriti Subramanyam, the event was sponsored by IMES, the Koch Institute’s Marble Center for Cancer Nanomedicine, and MIT.nano, the new 200,000-square-foot nanoscale research center that launched at MIT last fall.

Sangeeta Bhatia, director of the Marble Center, emphasizes the importance of creating these communication channels between community members working in tangentially related research spheres. “The goal of the event is to galvanize the nanotechnology community around Boston — including MIT.nano, the Marble Center, and IMES — to leverage the unique opportunities presented by miniaturization and to answer critical questions impacting health care,” says Bhatia, who is also the John J. and Dorothy Wilson Professor of Health Sciences and Technology at MIT.

At the kickoff session, organizers sought to create a smaller, workshop-based event that would introduce students, medical residents, and trainees to the world of hacking and disruptive problem solving. Representatives from MIT Hacking Medicine started the day with a brief overview and case study on PillPack, a successful internet pharmacy startup created from a previous hackathon event.

Participants then each had 30 seconds to develop and pitch problems highlighting critical health care industry shortcomings before forming into five teams based on shared interests. Groups pinpointed a wide array of timely topics, from the nation’s fight against obesity to minimizing vaccine pain. Each cohort had two hours to work through multifaceted, nanotechnology-based solutions.

Mentors Cicely Fadel, a clinical researcher at the Wyss Institute for Biologically Inspired Engineering and neonatologist at Beth Israel Deaconess Medical Center, and David Chou, a hematopathologist at Massachusetts General Hospital and clinical fellow at the Wyss Institute, roamed the room during the solution phase, offering feedback on feasibility based on their own clinical experience.

At the conclusion of the problem-solving block, each of the five teams presented their solution to a panel of expert judges: Imran Babar, chief business officer of Cydan; Adama Marie Sesay, senior staff engineer of the Wyss Institute; Craig Mak, director of strategy at Arbor Bio; Jaideep Dudani, associate director of Relay Therapeutics; and Zen Chu, senior lecturer at the MIT Sloan School of Management and faculty director of MIT Hacking Medicine.

Given the introductory nature of the event, judges opted to forgo the traditional scoring rubric and instead paired with each team to offer individualized, qualitative feedback. Event sponsors note that the decision to steer away from a black-and-white, ranked-placing system encourages participants to continue thinking about the pain points of their problem in anticipation of the next hackathon in the series this fall.

During this second phase, participants will further develop their solution and explore the issue’s competitive landscape. Organizers plan to bring together local business and management stakeholders for a final event in the spring that will allow participants to pitch their project for acquisition or initial seed funding.

Founded in 2011, MIT Hacking Medicine consists of both students and community members and aims to promote medical innovation to benefit the health care community. The group recognizes that technological advancement is often born out of collaboration rather than isolation. Monday’s event accordingly encouraged networking among students and postdocs not just from MIT but from institutions all around Boston, creating lasting relationships rooted in a commitment to deliver crucial health care solutions.

Indeed, these events have proven successful in fostering connections and propelling innovation. According to MIT Hacking Medicine’s website, more than 50 companies with over $240 million in venture funding have been created since June 2018 thanks to their hackathons, workshops, and networking gatherings. The organization’s events across the globe have engaged nearly 22,000 hackers eager to disrupt the status quo and think critically about health systems in place.

This past weekend, MIT Hacking Medicine hosted its flagship Grand Hack event in Washington. Over the course of the weekend, like-minded students and professionals across a range of industries joined forces to tackle issues related to health care access, mental health and professional burnout, rare diseases, and more. Sponsors hope that Monday’s shorter, more intimate event will garner enthusiasm for larger hackathons like this one and sustain communication among a diverse community of experts in their respective fields.

August 9, 2019 | More

Guided by AI, robotic platform automates molecule manufacture

Guided by artificial intelligence and powered by a robotic platform, a system developed by MIT researchers moves a step closer to automating the production of small molecules that could be used in medicine, solar energy, and polymer chemistry.

The system, described in the August 8 issue of Science, could free up bench chemists from a variety of routine and time-consuming tasks, and may suggest possibilities for how to make new molecular compounds, according to the study co-leaders Klavs F. Jensen, the Warren K. Lewis Professor of Chemical Engineering, and Timothy F. Jamison, the Robert R. Taylor Professor of Chemistry and associate provost at MIT.

The technology “has the promise to help people cut out all the tedious parts of molecule building,” including looking up potential reaction pathways and building the components of a molecular assembly line each time a new molecule is produced, says Jensen.

“And as a chemist, it may give you inspirations for new reactions that you hadn’t thought about before,” he adds.

Other MIT authors on the Science paper include Connor W. Coley, Dale A. Thomas III, Justin A. M. Lummiss, Jonathan N. Jaworski, Christopher P. Breen, Victor Schultz, Travis Hart, Joshua S. Fishman, Luke Rogers, Hanyu Gao, Robert W. Hicklin, Pieter P. Plehiers, Joshua Byington, John S. Piotti, William H. Green, and A. John Hart.

From inspiration to recipe to finished product

The new system combines three main steps. First, software guided by artificial intelligence suggests a route for synthesizing a molecule, then expert chemists review this route and refine it into a chemical “recipe,” and finally the recipe is sent to a robotic platform that automatically assembles the hardware and performs the reactions that build the molecule.

Coley and his colleagues have been working for more than three years to develop the open-source software suite that suggests and prioritizes possible synthesis routes. At the heart of the software are several neural network models, which the researchers trained on millions of previously published chemical reactions drawn from the Reaxys and U.S. Patent and Trademark Office databases. The software uses these data to identify the reaction transformations and conditions that it believes will be suitable for building a new compound.

“It helps make high-level decisions about what kinds of intermediates and starting materials to use, and then slightly more detailed analyses about what conditions you might want to use and if those reactions are likely to be successful,” says Coley.

“One of the primary motivations behind the design of the software is that it doesn’t just give you suggestions for molecules we know about or reactions we know about,” he notes. “It can generalize to new molecules that have never been made.”
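As a rough illustration of that suggestion step, the sketch below ranks candidate retrosynthetic transformations for a target and keeps the top-ranked ones. The template list and the scoring heuristic are invented stand-ins for the suite’s neural network models trained on the Reaxys and USPTO data:

```python
# Toy sketch of route suggestion: score candidate retrosynthetic
# "templates" for a target molecule and keep the most promising ones.

def score_template(target: str, template: str) -> float:
    """Placeholder for a trained model's estimate that this
    transformation is suitable for the target molecule."""
    # Invented heuristic: prefer templates whose product class
    # appears in the target description.
    return 1.0 if template.split(":")[0] in target else 0.1

def suggest_routes(target: str, templates, top_k=3):
    """Return the top_k highest-scoring transformations."""
    ranked = sorted(templates, key=lambda t: score_template(target, t), reverse=True)
    return ranked[:top_k]

templates = [
    "amide:acid + amine",
    "ester:acid + alcohol",
    "ether:alcohol + halide",
]
print(suggest_routes("amide-containing target", templates))
```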

Chemists then review the suggested synthesis routes produced by the software to build a more complete recipe for the target molecule. The chemists sometimes need to perform lab experiments or tinker with reagent concentrations and reaction temperatures, among other changes.

“They take some of the inspiration from the AI and convert that into an executable recipe file, largely because the chemical literature at present does not have enough information to move directly from inspiration to execution on an automated system,” Jamison says.

The final recipe is then loaded onto a platform where a robotic arm assembles modular reactors, separators, and other processing units into a continuous flow path, connecting pumps and lines that bring in the molecular ingredients.

“You load the recipe — that’s what controls the robotic platform — you load the reagents on, and press go, and that allows you to generate the molecule of interest,” says Thomas. “And then when it’s completed, it flushes the system and you can load the next set of reagents and recipe, and allow it to run.”
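The article does not publish the recipe format, but a hypothetical recipe for lidocaine, one of the molecules the team made, might carry fields like the sketch below. Every field name, concentration, and condition is our invention for illustration:

```python
# Hypothetical shape of an executable "recipe" for the robotic platform.
# The article says only that a recipe encodes reagents, modules, and
# conditions that the platform assembles into a continuous flow path.

recipe = {
    "target": "lidocaine",
    "reagents": [
        {"name": "2,6-dimethylaniline", "position": "pump_1", "conc_M": 0.5},
        {"name": "chloroacetyl chloride", "position": "pump_2", "conc_M": 0.55},
        {"name": "diethylamine", "position": "pump_3", "conc_M": 1.0},
    ],
    "steps": [
        {"module": "reactor", "temp_C": 25, "residence_min": 5},
        {"module": "reactor", "temp_C": 120, "residence_min": 15},
        {"module": "separator", "phase": "organic"},
    ],
    "flush_after": True,   # flush the system before the next run
}

print(f"{len(recipe['steps'])} flow modules for {recipe['target']}")
```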

Unlike the continuous flow system the researchers presented last year, which had to be manually configured after each synthesis, the new system is entirely configured by the robotic platform.

“This gives us the ability to sequence one molecule after another, as well as generate a library of molecules on the system, autonomously,” says Jensen.

The design for the platform, which is about two cubic meters in size — slightly smaller than a standard chemical fume hood — resembles a telephone switchboard and operator system that moves connections between the modules on the platform.

“The robotic arm is what allowed us to manipulate the fluidic paths, which reduced the number of process modules and fluidic complexity of the system, and by reducing the fluidic complexity we can increase the molecular complexity,” says Thomas. “That allowed us to add additional reaction steps and expand the set of reactions that could be completed on the system within a relatively small footprint.”

Toward full automation

The researchers tested the full system by creating 15 different medicinal small molecules of varying synthesis complexity, with processes taking anywhere from two hours for the simplest creations to about 68 hours for manufacturing multiple compounds.

The team synthesized a variety of compounds: aspirin and the antibiotic secnidazole in back-to-back processes; the painkiller lidocaine and the antianxiety drug diazepam in back-to-back processes using a common feedstock of reagents; the blood thinner warfarin and the Parkinson’s disease drug safinamide, to show how the software could design compounds with similar molecular components but differing 3-D structures; and a family of five ACE inhibitor drugs and a family of four nonsteroidal anti-inflammatory drugs.

“I’m particularly proud of the diversity of the chemistry and the kinds of different chemical reactions,” says Jamison, who said the system handled about 30 different reactions compared to about 12 different reactions in the previous continuous flow system.

“We are really trying to close the gap between idea generation from these programs and what it takes to actually run a synthesis,” says Coley. “We hope that next-generation systems will further increase the fraction of time and effort that scientists can focus on creativity and design.”

The research was supported, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) Make-It program.

August 8, 2019 | More