News and Research
Catherine Iacobo named industry co-director for MIT Leaders for Global Operations

Cathy Iacobo, a lecturer at the MIT Sloan School of Management, has been named the new industry co-director for the MIT Leaders for Global Operations (LGO) program. Read more


New leadership for Bernard M. Gordon-MIT Engineering Leadership Program

Olivier de Weck, frequent LGO advisor, professor of aeronautics and astronautics and of engineering systems at MIT, has been named the new faculty co-director of the Bernard M. Gordon-MIT Engineering Leadership Program (GEL). He joins Reza Rahaman, who was appointed the Bernard M. Gordon-MIT Engineering Leadership Program industry co-director and senior lecturer on July 1, 2018.

“Professor de Weck has a longstanding commitment to engineering leadership, both as an educator and a researcher. I look forward to working with him and the GEL team as they continue to strengthen their outstanding undergraduate program and develop the new program for graduate students,” says Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

A leader in systems engineering, de Weck researches how complex human-made systems such as aircraft, spacecraft, automobiles, and infrastructures are designed, manufactured, and operated. By investigating their lifecycle properties, de Weck and members of his research group have developed a range of novel techniques broadly adopted by industry to maximize the value of these systems over time.

August 1, 2019 | More

Building the tools of the next manufacturing revolution

John Hart, an associate professor of mechanical engineering at MIT, LGO adviser, and the director of the Laboratory for Manufacturing and Productivity and the Center for Additive and Digital Advanced Production Technologies, is an expert in 3-D printing, also known as additive manufacturing, which involves the computer-guided deposition of material layer by layer into precise three-dimensional shapes. (Conventional manufacturing usually entails making a part by removing material, for example through machining, or by forming the part using a mold tool.)

Hart’s research includes the development of advanced materials — new types of polymers, nanocomposites, and metal alloys — and the development of novel machines and processes that use and shape materials, such as high-speed 3-D printing, roll-to-roll graphene growth, and manufacturing techniques for low-cost sensors and electronics.

June 19, 2019 | More

LGO Best Thesis 2019 for Big Data Analysis at Amgen, Inc.

After the official MIT commencement ceremonies, Thomas Roemer, LGO’s executive director, announced the best thesis winner at LGO’s annual post-graduation celebration. This year’s winner was Maria Emilia Lopez Marino (Emi), who developed a predictive framework to assess the impact of raw material attributes on the manufacturing process at Amgen. Thesis readers described Marino’s project as “an extremely well-written thesis” with “excellent coverage of not only the project, but also the industry as a whole.”

Applying MIT knowledge in the real world

Marino, who earned her MBA and SM in Civil and Environmental Engineering, completed her six-month LGO internship project at Amgen, Inc. For her project, Marino developed a new predictive framework that uses machine learning techniques to assess the impact of raw material variability on the performance of several commercial biologics manufacturing processes. Such a solution represents a competitive advantage for biopharmaceutical leaders: her models achieved 80% average accuracy on predictions for new data. Additionally, the framework she developed is the starting point of a new methodology for understanding material variability in pharmaceutical manufacturing.

Each year, the theses are nominated by faculty advisors and then reviewed by LGO alumni readers to determine the winner. Thesis advisor Professor Roy Welsch said that Emi “understood variation both in a statistical sense and in manufacturing in the biopharmaceutical industry and left behind highly accurate and interpretable models in a form that others can use and expand. We hope she will share her experiences with us in the future at LGO alumni reunions and on DPT visits.”

Marino, who earned her undergraduate degree in Chemical Engineering from the National University of Mar del Plata in Argentina, has accepted a job offer with Amgen in Puerto Rico.


June 11, 2019 | More

The tenured engineers of 2019

The School of Engineering has announced that 17 members of its faculty have been granted tenure by MIT, including three LGO advisors: Saurabh Amin, Kerri Cahoy, and Julie Shah.

“The tenured faculty in this year’s cohort are a true inspiration,” said Anantha Chandrakasan, dean of the School of Engineering. “They have shown exceptional dedication to research and teaching, and their innovative work has greatly advanced their fields.”

This year’s newly tenured associate professors are:

Antoine Allanore, in the Department of Materials Science and Engineering, develops more sustainable technologies and strategies for mining, metal extraction, and manufacturing, including novel methods of fertilizer production.

Saurabh Amin, in the Department of Civil and Environmental Engineering, focuses on the design and implementation of network inspection and control algorithms for improving the resilience of large-scale critical infrastructures, such as transportation systems and water and energy distribution networks, against cyber-physical security attacks and natural events.

Emilio Baglietto, in the Department of Nuclear Science and Engineering, uses computational modeling to characterize and predict the underlying heat-transfer processes in nuclear reactors, including turbulence modeling, unsteady flow phenomena, multiphase flow, and boiling.

Paul Blainey, the Karl Van Tassel (1925) Career Development Professor in the Department of Biological Engineering, integrates microfluidic, optical, and molecular tools for application in biology and medicine across a range of scales.

Kerri Cahoy, the Rockwell International Career Development Professor in the Department of Aeronautics and Astronautics, develops nanosatellites that demonstrate weather sensing using microwave radiometers and GPS radio occultation receivers, high data-rate laser communications with precision time transfer, and active optical imaging systems using MEMS deformable mirrors for exoplanet exploration applications.

Juejun Hu, in the Department of Materials Science and Engineering, focuses on novel materials and devices to exploit interactions of light with matter, with applications in on-chip sensing and spectroscopy, flexible and polymer photonics, and optics for solar energy.

Sertac Karaman, the Class of 1948 Career Development Professor in the Department of Aeronautics and Astronautics, studies robotics, control theory, and the application of probability theory, stochastic processes, and optimization for cyber-physical systems such as driverless cars and drones.

R. Scott Kemp, the Class of 1943 Career Development Professor in the Department of Nuclear Science and Engineering, combines physics, politics, and history to identify options for addressing nuclear weapons and energy. He investigates technical threats to nuclear-deterrence stability and the information theory of treaty verification; he is also developing technical tools for reconstructing the histories of secret nuclear-weapon programs.

Aleksander Mądry, in the Department of Electrical Engineering and Computer Science, investigates topics ranging from developing new algorithms using continuous optimization, to combining theoretical and empirical insights, to building a more principled and thorough understanding of key machine learning tools. A major theme of his research is rethinking machine learning from the perspective of security and robustness.

Frances Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering, performs research on nanostructures using transmission electron microscopes that allow researchers to see, in real-time, how structures form and develop in response to changes in temperature, environment, and other variables. Understanding crystal growth at the nanoscale is helpful in creating precisely controlled materials for applications in microelectronics and energy conversion and storage.

Daniel Sanchez, in the Department of Electrical Engineering and Computer Science, works on computer architecture and computer systems, with an emphasis on large-scale multi-core processors, scalable and efficient memory hierarchies, architectures with quality-of-service guarantees, and scalable runtimes and schedulers.

Themistoklis Sapsis, the Doherty Career Development Professor in the Department of Mechanical Engineering, develops analytical, computational, and data-driven methods for the probabilistic prediction and quantification of extreme events in high-dimensional nonlinear systems such as turbulent fluid flows and nonlinear mechanical systems.

Julie Shah, the Boeing Career Development Professor in the Department of Aeronautics and Astronautics, develops innovative computational models and algorithms expanding the use of human cognitive models for artificial intelligence. Her research has produced novel forms of human-machine teaming in manufacturing assembly lines, healthcare applications, transportation, and defense.

Hadley Sikes, the Esther and Harold E. Edgerton Career Development Professor in the Department of Chemical Engineering, employs biomolecular engineering and knowledge of reaction networks to detect epigenetic modifications that can guide cancer treatment, induce oxidant-specific perturbations in tumors for therapeutic benefit, and improve signaling reactions and assay formats used in medical diagnostics.

William Tisdale, the ARCO Career Development Professor in the Department of Chemical Engineering, works on energy transport in nanomaterials, nonlinear spectroscopy, and spectroscopic imaging to better understand and control the mechanisms by which excitons, free charges, heat, and reactive chemical species are converted to more useful forms of energy, and on leveraging this understanding to guide materials design and process optimization.

Virginia Vassilevska Williams, the Steven and Renee Finn Career Development Professor in the Department of Electrical Engineering and Computer Science, applies combinatorial and graph theoretic tools to develop efficient algorithms for matrix multiplication, shortest paths, and a variety of other fundamental problems. Her recent research is centered on proving tight relationships between seemingly different computational problems. She is also interested in computational social choice issues, such as making elections computationally resistant to manipulation.

Amos Winter, the Tata Career Development Professor in the Department of Mechanical Engineering, focuses on connections between mechanical design theory and user-centered product design to create simple, elegant technological solutions for applications in medical devices, water purification, agriculture, automotive, and other technologies used in highly constrained environments.

June 7, 2019 | More

MIT team places second in 2019 NASA BIG Idea Challenge

An MIT student team, including LGO ’20 Hans Nowak, took second place for its design of a multilevel greenhouse to be used on Mars in NASA’s 2019 Breakthrough, Innovative and Game-changing (BIG) Idea Challenge last month.

Each year, NASA holds the BIG Idea competition in its search for innovative and futuristic ideas. This year’s challenge invited universities across the United States to submit designs for a sustainable, cost-effective, and efficient method of supplying food to astronauts during future crewed explorations of Mars. Dartmouth College was awarded first place in this year’s closely contested challenge.

“This was definitely a full-team success,” says team leader Eric Hinterman, a graduate student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). The team had contributions from 10 undergraduates and graduate students from across MIT departments. Support and assistance were provided by four architects and designers in Italy. This project was completely voluntary; all 14 contributors share a similar passion for space exploration and enjoyed working on the challenge in their spare time.

The MIT team dubbed its design “BEAVER” (Biosphere Engineered Architecture for Viable Extraterrestrial Residence). “We designed our greenhouse to provide 100 percent of the food requirements for four active astronauts every day for two years,” explains Hinterman.

The ecologists and agriculture specialists on the MIT team identified eight types of crops to provide the calories, protein, carbohydrates, and oils and fats that astronauts would need; these included potatoes, rice, wheat, oats, and peanuts. The flexible menu suggested substitutes, depending on astronauts’ specific dietary requirements.

“Most space systems are metallic and very robotic,” Hinterman says. “It was fun working on something involving plants.”

Parameters provided by NASA — a power budget, dimensions necessary for transporting by rocket, the capacity to provide adequate sustenance — drove the shape and the overall design of the greenhouse.

Last October, the team held an initial brainstorming session and pitched project ideas. The iterative process continued until they reached their final design: a cylindrical growing space 11.2 meters in diameter and 13.4 meters tall after deployment.

An innovative design

The greenhouse would be packaged inside a rocket bound for Mars and, after landing, a waiting robot would move it to its site. Programmed with folding mechanisms, it would then expand horizontally and vertically and begin forming an ice shield around its exterior to protect plants and humans from the intense radiation on the Martian surface.

Two years later, when Earth and Mars orbits were again in optimal alignment for launching and landing, a crew would arrive on Mars, where they would complete the greenhouse setup and begin growing crops. “About every two years, the crew would leave and a new crew of four would arrive and continue to use the greenhouse,” explains Hinterman.

To maximize space, BEAVER employs a large spiral that moves around a central core within the cylinder. Seedlings are planted at the top and flow down the spiral as they grow. By the time they reach the bottom, the plants are ready for harvesting, and the crew enters at the ground floor to harvest the potatoes, peanuts, and grains. The planting trays are then moved back to the top of the spiral, and the process begins again.

“A lot of engineering went into the spiral,” says Hinterman. “Most of it is done without any moving parts or mechanical systems, which makes it ideal for space applications. You don’t want a lot of moving parts or things that can break.”

The human factor

“One of the big issues with sending humans into space is that they will be confined to seeing the same people every day for a couple of years,” Hinterman explains. “They’ll be living in an enclosed environment with very little personal space.”

The greenhouse provides a pleasant area to ensure astronauts’ psychological well-being. On the top floor, just above the spiral, a windowed “mental relaxation area” overlooks the greenery. The ice shield admits natural light, and the crew can lounge on couches and enjoy the view of the Mars landscape. And rather than running pipes from the water tank at the top level down to the crops, Hinterman and his team designed a cascading waterfall.

May 24, 2019 | More

MIT team places first in U.S. Air Force virtual reality competition

When the United States Air Force put out a call for submissions for its first-ever Visionary Q-Prize competition in October 2018, a six-person team of three MIT students and three LGO alumni took up the challenge. Last month, they emerged as a first-place winner for their prototype of a virtual reality tool they called CoSMIC (Command, Sensing, and Mapping Information Center).

The challenge was hosted by the Air Force Research Labs Space Vehicles Directorate and the Wright Brothers Institute to encourage nontraditional sources with innovative products and ideas to engage with military customers to develop solutions for safe and secure operations in space.

April 12, 2019 | More

MIT graduate engineering, business programs earn top rankings from U.S. News for 2020

Graduate engineering program is No. 1 in the nation; MIT Sloan is No. 3.

MIT’s graduate program in engineering has again earned a No. 1 spot in U.S. News and World Report’s annual rankings, a place it has held since 1990, when the magazine first ranked such programs.

The MIT Sloan School of Management also placed highly, occupying the No. 3 spot for the best graduate business program, which it shares with Harvard University and the University of Chicago.

March 22, 2019 | More

Leading to Green

More efficient or more sustainable? Janelle Heslop, LGO ’19, helps businesses achieve both. Heslop is no shrinking violet. She found a voice for herself and the environment when she was in middle school, volunteering as a junior docent for the Hudson River Museum. “I was a 12-year-old giving tours, preaching to people: we’ve got to protect our resources,” Heslop says. “At a very early age, I learned to have a perspective, and assert it.”

February 22, 2019 | More

Winners of inaugural AUS New Venture Challenge Announced

Danielle Castley, Dartmouth PhD candidate, Jordan Landis, LGO ’20, and Ian McDonald, PhD, of Neutroelectric LLC won the inaugural American University of Sharjah New Ventures Challenge, taking the Chancellor’s Prize of $50,000 with radiation-shielding materials developed to improve safety margins and reduce costs for nuclear power plant operations and for the transport and storage of spent nuclear fuel.

February 20, 2019 | More

Tackling greenhouse gases

While a number of other MIT researchers are developing capture and reuse technologies to minimize greenhouse gas emissions, Professor Timothy Gutowski, frequent LGO advisor, is approaching climate change from a completely different angle: the economics of manufacturing.

Gutowski understands manufacturing. He has worked on both the industry and academic side of manufacturing, was the director of MIT’s Laboratory for Manufacturing and Productivity for a decade, and currently leads the Environmentally Benign Manufacturing research group at MIT. His primary research focus is assessing the environmental impact of manufacturing.

January 11, 2019 | More


Supply chain visibility boosts consumer trust, and even sales

Global supply chains are complex. Transforming raw materials into completed goods often requires a multitude of workers crossing different countries and cultures. Companies undertaking efforts to learn more about their supply chain often face a significant investment of time and resources.

Those costs are worth it, according to a new study by an MIT Sloan professor and a visiting assistant professor, along with León Valdés, an assistant professor at the University of Pittsburgh.

The researchers found that investing in supply chain visibility is a surefire way for companies to gain consumer trust.

August 20, 2019 | More

Looking to stay relevant, big enterprises embrace the platform

How hot are digital platforms? Very: The five most valuable companies on the planet right now — Microsoft, Amazon, Apple, Alphabet, and Facebook — are platform companies, and “myriad startups and smaller companies are thriving as well,” according to Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy.

With both the “behemoths of the digital economy,” as Brynjolfsson called them, and startups reaping the benefits of these digital ecosystems, the pressure is on incumbent organizations to join the platform economy.

August 10, 2019 | More

A study of more than 250 platforms reveals why most fail

Platforms have become one of the most important business models of the 21st century. In our newly published book, we divide all platforms into two types. Innovation platforms enable third-party firms to add complementary products and services to a core product or technology; prominent examples include the Google Android and Apple iPhone operating systems as well as Amazon Web Services. The other type, transaction platforms, enable the exchange of information, goods, or services; examples include Amazon Marketplace, Airbnb, and Uber. Five of the six most valuable firms in the world are built around these types of platforms. In our analysis of data going back 20 years, we also identified 43 publicly listed platform companies in the Forbes Global 2000. These platforms generated the same level of annual revenues (about $4.5 billion) as their non-platform counterparts, but used half the number of employees. They also had twice the operating profits and much higher market values … Read More »

The post A study of more than 250 platforms reveals why most fail – Michael A. Cusumano, David B. Yoffie, and Annabelle Gawer appeared first on MIT Sloan Experts.

August 8, 2019 | More

The unsung heroes of global technology? Standard-setters.

When we think about major figures in global technology, we often focus on inventors like Thomas Edison, with the light bulb, or Tim Berners-Lee, with the World Wide Web. Alternatively, we may look to builders of organizations that develop and spread innovations, such as Thomas Watson with IBM or Bill Gates with Microsoft. But other little-known figures have also played a critical role in the spread of technologies: engineers who set national and, especially, global standards.

August 6, 2019 | More

3 forces pushing on the platform economy

Digital platforms are transforming the way companies do business, but the last few years have shown these platforms still have their own need to evolve. Today especially, digital platforms are navigating a changing landscape.

During the recent 2019 MIT Platform Strategy Summit, three experts shared their predictions, offered advice, and asked questions about the changing platform business model.


With Facebook and Google in control of 84% of global spending on online ads (excluding China) and Amazon handling close to half of all e-commerce purchases, it’s no surprise that regulating big tech and avoiding data monopolies are frequent topics of conversation.

August 5, 2019 | More

Business leaders gird for ‘organizational explosions’

You understand there is no one road to digital transformation. You remind your stakeholders that different companies take different paths, and none of them are easy. You warn your team to look out for potholes. But are you ready for “organizational explosions?”

For the last four years, the MIT Sloan Center for Information Systems Research has collected data from more than 800 organizations that are undergoing digital transformation, according to Nick van der Meulen, research scientist at the center. In its study of the data and interviews with the organizations, the center developed a framework for successful transformations.

August 1, 2019 | More

Improving strategic execution with machine learning

Machine learning (ML) is changing how leaders use metrics to drive business performance, customer experience, and growth. A small but growing group of companies is investing in ML to augment strategic decision-making with key performance indicators (KPIs). Our research, based on a global survey and more than a dozen interviews with executives and academics, suggests that ML is literally, and figuratively, redefining how businesses create and measure value. KPIs traditionally have had a retrospective, reporting bias, but by surfacing hidden variables that anticipate “key performance,” machine learning is making KPIs more predictive and prescriptive. With more forward-looking KPIs, progressive leaders can treat strategic measures as high-octane data fuel for training machine-learning algorithms to optimize business processes. Our survey and interviews suggest that this flip ― transforming KPIs from analytic outputs to data inputs ― is at an early, albeit promising, stage. Those companies that are already … Read More »

The post Improving strategic execution with machine learning – Michael Schrage appeared first on MIT Sloan Experts.

July 24, 2019 | More

Industrial sector amped for digital transformation

Manufacturing often gets dinged for being stuck in another era, but in truth the industrial sector is well positioned to surge ahead with digital transformation, thanks to investments in augmented reality, the “industrial internet of things,” machine learning, and artificial intelligence.

These advanced technologies are already having an impact on how manufacturers design, produce, and service products, according to Joseph Biron, chief technology officer, IoT, at Boston-based PTC, a company with roots in 3D design that now offers software for industrial transformation.

July 22, 2019 | More

Ethics and automation: What to do when workers are displaced

As companies embrace automation and artificial intelligence, some jobs will be created or enhanced, but many more are likely to go away. What obligation do organizations have to displaced workers in such situations? Is there an ethical way for business leaders to usher their workforces through digital disruption?

July 19, 2019 | More

Using machine learning to better predict clinical trial outcomes

Randomized clinical trials for new drugs and devices have always been a high-risk venture for a variety of stakeholders — investors, biopharma leaders, regulators, and, of course, patients and their families.

Now, MIT researchers are employing machine learning and statistical techniques to enhance data on clinical trial outcomes, allowing them to better handicap the drug and device approval process.

A new study, published in the debut issue of the Harvard Data Science Review, aims to provide more timely and accurate estimates of the risks of clinical trials. That data can help stakeholders manage their resources more efficiently.

July 5, 2019 | More


A battery-free sensor for underwater exploration

To investigate the vastly unexplored oceans covering most of our planet, researchers aim to build a submerged network of interconnected sensors that send data to the surface — an underwater “internet of things.” But how do you supply constant power to scores of sensors designed to stay in the deep ocean for long durations?

MIT researchers have an answer: a battery-free underwater communication system that uses near-zero power to transmit sensor data. The system could be used to monitor sea temperatures to study climate change and track marine life over long periods — and even sample waters on distant planets. They are presenting the system at the SIGCOMM conference this week, in a paper that has won the conference’s “best paper” award.

The system makes use of two key phenomena. One, called the “piezoelectric effect,” occurs when vibrations in certain materials generate an electrical charge. The other is “backscatter,” a communication technique commonly used for RFID tags, which transmits data by reflecting modulated wireless signals off a tag and back to a reader.

In the researchers’ system, a transmitter sends acoustic waves through water toward a piezoelectric sensor that has stored data. When the wave hits the sensor, the material vibrates and stores the resulting electrical charge. Then the sensor uses the stored energy to reflect a wave back to a receiver — or it doesn’t reflect one at all. Alternating between reflecting and not reflecting in this way encodes the bits of the transmitted data: for a reflected wave, the receiver decodes a 1; for no reflected wave, it decodes a 0.

“Once you have a way to transmit 1s and 0s, you can send any information,” says co-author Fadel Adib, an assistant professor in the MIT Media Lab and the Department of Electrical Engineering and Computer Science and founding director of the Signal Kinetics Research Group. “Basically, we can communicate with underwater sensors based solely on the incoming sound signals whose energy we are harvesting.”

The researchers demonstrated their Piezo-Acoustic Backscatter System in an MIT pool, using it to collect water temperature and pressure measurements. The system was able to transmit 3 kilobytes per second of accurate data from two sensors simultaneously at a distance of 10 meters between sensor and receiver.

Applications go beyond our own planet. The system, Adib says, could be used to collect data in the recently discovered subsurface ocean on Saturn’s largest moon, Titan. In June, NASA announced the Dragonfly mission to send a rover in 2026 to explore the moon, sampling water reservoirs and other sites.

“How can you put a sensor under the water on Titan that lasts for long periods of time in a place that’s difficult to get energy?” says Adib, who co-wrote the paper with Media Lab researcher JunSu Jang. “Sensors that communicate without a battery open up possibilities for sensing in extreme environments.”

Preventing deformation

Inspiration for the system hit while Adib was watching “Blue Planet,” a nature documentary series exploring various aspects of sea life. Oceans cover about 72 percent of Earth’s surface. “It occurred to me how little we know of the ocean and how marine animals evolve and procreate,” he says. Internet-of-things (IoT) devices could aid that research, “but underwater you can’t use Wi-Fi or Bluetooth signals … and you don’t want to put batteries all over the ocean, because that raises issues with pollution.”

That led Adib to piezoelectric materials, which have been used in microphones and other devices for about 150 years. They produce a small voltage in response to vibrations. But the effect is also reversible: Applying voltage causes the material to deform. If placed underwater, that deformation produces a pressure wave that travels through the water. Such materials are often used to detect sunken vessels, fish, and other underwater objects.

“That reversibility is what allows us to develop a very powerful underwater backscatter communication technology,” Adib says.

Communicating relies on preventing the piezoelectric resonator from naturally deforming in response to strain. At the heart of the system is a submerged node, a circuit board that houses a piezoelectric resonator, an energy-harvesting unit, and a microcontroller. Any type of sensor can be integrated into the node by programming the microcontroller. An acoustic projector (transmitter) and underwater listening device, called a hydrophone (receiver), are placed some distance away.

Say the sensor wants to send a 0 bit. When the transmitter sends its acoustic wave at the node, the piezoelectric resonator absorbs the wave and naturally deforms, and the energy harvester stores a little charge from the resulting vibrations. The receiver then sees no reflected signal and decodes a 0.

However, when the sensor wants to send a 1 bit, the process changes. When the transmitter sends a wave, the microcontroller uses the stored charge to send a small voltage to the piezoelectric resonator. That voltage reorients the material’s structure in a way that stops it from deforming, so it reflects the wave instead. Sensing a reflected wave, the receiver decodes a 1.

Long-term deep-sea sensing

The transmitter and receiver must have power but can be planted on ships or buoys, where batteries are easier to replace, or connected to outlets on land. One transmitter and one receiver can gather information from many sensors covering one area or many areas.

“When you’re tracking a marine animal, for instance, you want to track it over a long range and want to keep the sensor on them for a long period of time. You don’t want to worry about the battery running out,” Adib says. “Or, if you want to track temperature gradients in the ocean, you can get information from sensors covering a number of different places.”

Another interesting application is monitoring brine pools — large, dense pools of extremely salty water that collect in depressions in ocean basins and are difficult to monitor long-term. They exist, for instance, on the Antarctic Shelf, where salt settles during the formation of sea ice, and monitoring them could aid in studying melting ice and how marine life interacts with the pools. “We could sense what’s happening down there, without needing to keep hauling sensors up when their batteries die,” Adib says.

Polly Huang, a professor of electrical engineering at National Taiwan University, praised the work for its technical novelty and potential impact on environmental science. “This is a cool idea,” Huang says. “It’s not news one uses piezoelectric crystals to harvest energy … [but is the] first time to see it being used as a radio at the same time [which] is unheard of to the sensor network/system research community. Also interesting and unique is the hardware design and fabrication. The circuit and the design of the encapsulation are both sound and interesting.”

While noting that the system still needs more experimentation, especially in sea water, Huang adds that “this might be the ultimate solution for researchers in marine biology, oceanography, or even meteorology — those in need of long-term, low-human-effort underwater sensing.”

Next, the researchers aim to demonstrate that the system can work at farther distances and communicate with more sensors simultaneously. They’re also hoping to test if the system can transmit sound and low-resolution images.

The work is sponsored, in part, by the U.S. Office of Naval Research.

August 20, 2019 | More

Using Wall Street secrets to reduce the cost of cloud infrastructure

Stock market investors often rely on financial risk theories that help them maximize returns while minimizing financial loss due to market fluctuations. These theories help investors maintain a balanced portfolio to ensure they’ll never lose more money than they’re willing to part with at any given time.

Inspired by those theories, MIT researchers in collaboration with Microsoft have developed a “risk-aware” mathematical model that could improve the performance of cloud-computing networks across the globe. Notably, cloud infrastructure is extremely expensive and consumes a lot of the world’s energy.

Their model takes into account failure probabilities of links between data centers worldwide — akin to predicting the volatility of stocks. Then, it runs an optimization engine to allocate traffic through optimal paths to minimize loss, while maximizing overall usage of the network.

The model could help major cloud-service providers — such as Microsoft, Amazon, and Google — better utilize their infrastructure. The conventional approach is to keep links idle to handle unexpected traffic shifts resulting from link failures, which is a waste of energy, bandwidth, and other resources. The new model, called TeaVar, on the other hand, guarantees that for a target percentage of time — say, 99.9 percent — the network can handle all data traffic, so there is no need to keep any links idle. During the remaining 0.1 percent of the time, the model keeps any dropped data to a minimum.

In experiments based on real-world data, the model supported three times the traffic throughput as traditional traffic-engineering methods, while maintaining the same high level of network availability. A paper describing the model and results will be presented at the ACM SIGCOMM conference this week.

Better network utilization can save service providers millions of dollars, but benefits will “trickle down” to consumers, says co-author Manya Ghobadi, the TIBCO Career Development Assistant Professor in the MIT Department of Electrical Engineering and Computer Science and a researcher at the Computer Science and Artificial Intelligence Laboratory (CSAIL).

“Having greater utilized infrastructure isn’t just good for cloud services — it’s also better for the world,” Ghobadi says. “Companies don’t have to purchase as much infrastructure to sell services to customers. Plus, being able to efficiently utilize datacenter resources can save enormous amounts of energy consumption by the cloud infrastructure. So, there are benefits both for the users and the environment at the same time.”

Joining Ghobadi on the paper are her students Jeremy Bogle and Nikhil Bhatia, both of CSAIL; Ishai Menache and Nikolaj Bjorner of Microsoft Research; and Asaf Valadarsky and Michael Schapira of Hebrew University.

On the money

Cloud service providers use networks of fiber optical cables running underground, connecting data centers in different cities. To route traffic, the providers rely on “traffic engineering” (TE) software that optimally allocates data bandwidth — amount of data that can be transferred at one time — through all network paths.

The goal is to ensure maximum availability to users around the world. But that’s challenging when some links can fail unexpectedly, due to drops in optical signal quality resulting from outages or lines cut during construction, among other factors. To stay robust to failure, providers keep many links at very low utilization, lying in wait to absorb full data loads from downed links.

Thus, it’s a tricky tradeoff between network availability and utilization, which would enable higher data throughputs. And that’s where traditional TE methods fail, the researchers say. They find optimal paths based on various factors, but never quantify the reliability of links. “They don’t say, ‘This link has a higher probability of being up and running, so that means you should be sending more traffic here,’” Bogle says. “Most links in a network are operating at low utilization and aren’t sending as much traffic as they could be sending.”

The researchers instead designed a TE model that adapts core mathematics from “conditional value at risk,” a risk-assessment measure that quantifies the average loss of money. With investing in stocks, if you have a one-day 99 percent conditional value at risk of $50, your expected loss in the worst 1 percent of scenarios on that day is $50. But 99 percent of the time, you’ll do much better. That measure is used for investing in the stock market — which is notoriously difficult to predict.
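In terms of a sample of losses, conditional value at risk is just the mean of the tail beyond the chosen quantile. The sketch below is a generic estimator with simulated data, not the TeaVar code:

```python
import numpy as np

# Illustrative sketch of conditional value at risk (CVaR): the mean
# loss over the worst (1 - alpha) fraction of scenarios.

def cvar(losses, alpha=0.99):
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * len(losses)))   # index of the VaR cutoff
    tail = losses[k:]                        # worst (1 - alpha) scenarios
    return tail.mean() if tail.size else losses[-1]

# 1,000 simulated one-day dollar losses (hypothetical numbers).
rng = np.random.default_rng(0)
daily_losses = rng.normal(loc=0.0, scale=20.0, size=1000)
print(cvar(daily_losses, alpha=0.99))  # mean of the 10 worst days
```

In the networking setting, “losses” become the bandwidth shortfalls under different link-failure scenarios, weighted by their probabilities.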

“But the math is actually a better fit for our cloud infrastructure setting,” Ghobadi says. “Mostly, link failures are due to the age of equipment, so the probabilities of failure don’t change much over time. That means our probabilities are more reliable, compared to the stock market.”

Risk-aware model

In networks, data bandwidth shares are analogous to invested “money,” and the pieces of network equipment, each with a different probability of failure, are the “stocks,” with their uncertain, changing values. Using the underlying formulas, the researchers designed a “risk-aware” model that, like its financial counterpart, guarantees data will reach its destination 99.9 percent of time, but keeps traffic loss at minimum during 0.1 percent worst-case failure scenarios. That allows cloud providers to tune the availability-utilization tradeoff.

The researchers statistically mapped three years’ worth of network signal strength from Microsoft’s network that connects its data centers to a probability distribution of link failures. The input is the network topology in a graph, with source-destination flows of data connected through lines (links) and nodes (cities), with each link assigned a bandwidth.

Failure probabilities were obtained by checking the signal quality of every link every 15 minutes. If the signal quality ever dipped below a receiving threshold, they considered that a link failure. Anything above meant the link was up and running. From that, the model generated an average time that each link was up or down, and calculated a failure probability — or “risk” — for each link at each 15-minute time window. From those data, it was able to predict when risky links would fail at any given window of time.
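The thresholding step described above can be sketched in a few lines; the threshold value and readings here are hypothetical:

```python
# Sketch of turning periodic signal-quality checks into a per-link
# failure probability. Threshold and data are invented for illustration.

RECEIVE_THRESHOLD = -28.0  # assumed signal-quality cutoff (arbitrary units)

def failure_probability(quality_samples, threshold=RECEIVE_THRESHOLD):
    """Fraction of 15-minute windows in which the link was down."""
    down = sum(1 for q in quality_samples if q < threshold)
    return down / len(quality_samples)

# One simulated day: 96 fifteen-minute readings, 4 below threshold.
samples = [-25.0] * 92 + [-30.0] * 4
print(failure_probability(samples))  # ≈ 0.0417
```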

The researchers tested the model against other TE software on simulated traffic sent through networks from Google, IBM, AT&T, and others that spread across the world. The researchers created various failure scenarios based on their probability of occurrence. Then, they sent simulated and real-world data demands through the network and cued their models to start allocating bandwidth.

The researchers’ model kept reliable links working to near full capacity, while steering data clear of riskier links. Over traditional approaches, their model ran three times as much data through the network, while still ensuring all data got to its destination. The code is freely available on GitHub.

August 19, 2019 | More

Yearlong hackathon engages nano community around health issues

A traditional hackathon focuses on computer science and programming, attracts coders in droves, and spans an entire weekend with three stages: problem definition, solution development, and business formation.

Hacking Nanomedicine, however, recently brought together graduate and postgraduate students for a single morning of hands-on problem solving and innovation in health care while offering networking opportunities across departments and research interests. Moreover, the July hackathon was the first in a series of three half-day events structured to allow ideas to develop over time.

This deliberately deconstructed, yearlong process promotes necessary ebb and flow as teams shift in scope and recruit new members throughout each stage. “We believe this format is a powerful combination of intense, collaborative, multidisciplinary interactions, separated by restful research periods for reflecting on new ideas, allowing additional background research to take place and enabling additional people to be pulled into the fray as ideas take shape,” says Brian Anthony, associate director of MIT.nano and principal research scientist in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Mechanical Engineering.

Organized by Marble Center for Cancer Nanomedicine Assistant Director Tarek Fadel, Foundation Medicine’s Michael Woonton, and MIT Hacking Medicine Co-Directors Freddy Nguyen and Kriti Subramanyam, the event was sponsored by IMES, the Koch Institute’s Marble Center for Cancer Nanomedicine, and MIT.nano, the new 200,000-square-foot nanoscale research center that launched at MIT last fall.

Sangeeta Bhatia, director of the Marble Center, emphasizes the importance of creating these communication channels between community members working in tangentially-related research spheres. “The goal of the event is to galvanize the nanotechnology community around Boston — including MIT.nano, the Marble Center, and IMES — to leverage the unique opportunities presented by miniaturization and to answer critical questions impacting health care,” says Bhatia, who is also the John J. and Dorothy Wilson Professor of Health Sciences and Technology at MIT.

At the kickoff session, organizers sought to create a smaller, workshop-based event that would introduce students, medical residents, and trainees to the world of hacking and disruptive problem solving. Representatives from MIT Hacking Medicine started the day with a brief overview and case study on PillPack, a successful internet pharmacy startup created from a previous hackathon event.

Participants then each had 30 seconds to develop and pitch problems highlighting critical health care industry shortcomings before forming into five teams based on shared interests. Groups pinpointed a wide array of timely topics, from the nation’s fight against obesity to minimizing vaccine pain. Each cohort had two hours to work through multifaceted, nanotechnology-based solutions.

Mentors Cicely Fadel, a clinical researcher at the Wyss Institute for Biologically Inspired Engineering and neonatologist at Beth Israel Deaconess Medical Center, and David Chou, a hematopathologist at Massachusetts General Hospital and clinical fellow at the Wyss Institute, roamed the room during the solution phase, offering feedback on feasibility based on their own clinical experience.

At the conclusion of the problem-solving block, each of the five teams presented their solution to a panel of expert judges: Imran Babar, chief business officer of Cydan; Adama Marie Sesay, senior staff engineer of the Wyss Institute; Craig Mak, director of strategy at Arbor Bio; Jaideep Dudani, associate director of Relay Therapeutics; and Zen Chu, senior lecturer at the MIT Sloan School of Management and faculty director of MIT Hacking Medicine.

Given the introductory nature of the event, judges opted to forego the traditional scoring rubric and instead paired with each team to offer individualized, qualitative feedback. Event sponsors note that the decision to steer away from a black-and-white, ranked-placing system encourages participants to continue thinking about the pain points of their problem in anticipation of the next hackathon in the series this fall.

During this second phase, participants will further develop their solution and explore the issue’s competitive landscape. Organizers plan to bring together local business and management stakeholders for a final event in the spring that will allow participants to pitch their project for acquisition or initial seed funding.

Founded in 2011, MIT Hacking Medicine consists of both students and community members and aims to promote medical innovation to benefit the health care community. The group recognizes that technological advancement is often born out of collaboration rather than isolation. Monday’s event accordingly encouraged networking among students and postdocs not just from MIT but institutions all around Boston, creating lasting relationships rooted in a commitment to deliver crucial health care solutions.

Indeed, these events have proven successful in fostering connections and propelling innovation. According to MIT Hacking Medicine’s website, more than 50 companies with over $240 million in venture funding have been created since June 2018 thanks to their hackathons, workshops, and networking gatherings. The organization’s events across the globe have engaged nearly 22,000 hackers eager to disrupt the status quo and think critically about health systems in place.

This past weekend, MIT Hacking Medicine hosted its flagship Grand Hack event in Washington. Over the course of the weekend, like-minded students and professionals across a range of industries joined forces to tackle issues related to health care access, mental health and professional burnout, rare diseases, and more. Sponsors hope that shorter, more intimate events like July’s will garner enthusiasm for larger hackathons like this one, sustaining communication among a diverse community of experts in their respective fields.

August 9, 2019 | More

Guided by AI, robotic platform automates molecule manufacture

Guided by artificial intelligence and powered by a robotic platform, a system developed by MIT researchers moves a step closer to automating the production of small molecules that could be used in medicine, solar energy, and polymer chemistry.

The system, described in the August 8 issue of Science, could free up bench chemists from a variety of routine and time-consuming tasks, and may suggest possibilities for how to make new molecular compounds, according to the study co-leaders Klavs F. Jensen, the Warren K. Lewis Professor of Chemical Engineering, and Timothy F. Jamison, the Robert R. Taylor Professor of Chemistry and associate provost at MIT.

The technology “has the promise to help people cut out all the tedious parts of molecule building,” including looking up potential reaction pathways and building the components of a molecular assembly line each time a new molecule is produced, says Jensen.

“And as a chemist, it may give you inspirations for new reactions that you hadn’t thought about before,” he adds.

Other MIT authors on the Science paper include Connor W. Coley, Dale A. Thomas III, Justin A. M. Lummiss, Jonathan N. Jaworski, Christopher P. Breen, Victor Schultz, Travis Hart, Joshua S. Fishman, Luke Rogers, Hanyu Gao, Robert W. Hicklin, Pieter P. Plehiers, Joshua Byington, John S. Piotti, William H. Green, and A. John Hart.

From inspiration to recipe to finished product

The new system combines three main steps. First, software guided by artificial intelligence suggests a route for synthesizing a molecule, then expert chemists review this route and refine it into a chemical “recipe,” and finally the recipe is sent to a robotic platform that automatically assembles the hardware and performs the reactions that build the molecule.

Coley and his colleagues have been working for more than three years to develop the open-source software suite that suggests and prioritizes possible synthesis routes. At the heart of the software are several neural network models, which the researchers trained on millions of previously published chemical reactions drawn from the Reaxys and U.S. Patent and Trademark Office databases. The software uses these data to identify the reaction transformations and conditions that it believes will be suitable for building a new compound.

“It helps make high-level decisions about what kinds of intermediates and starting materials to use, and then slightly more detailed analyses about what conditions you might want to use and if those reactions are likely to be successful,” says Coley.

“One of the primary motivations behind the design of the software is that it doesn’t just give you suggestions for molecules we know about or reactions we know about,” he notes. “It can generalize to new molecules that have never been made.”

Chemists then review the suggested synthesis routes produced by the software to build a more complete recipe for the target molecule. The chemists sometimes need to perform lab experiments or tinker with reagent concentrations and reaction temperatures, among other changes.

“They take some of the inspiration from the AI and convert that into an executable recipe file, largely because the chemical literature at present does not have enough information to move directly from inspiration to execution on an automated system,” Jamison says.

The final recipe is then loaded on to a platform where a robotic arm assembles modular reactors, separators, and other processing units into a continuous flow path, connecting pumps and lines that bring in the molecular ingredients.

“You load the recipe — that’s what controls the robotic platform — you load the reagents on, and press go, and that allows you to generate the molecule of interest,” says Thomas. “And then when it’s completed, it flushes the system and you can load the next set of reagents and recipe, and allow it to run.”

Unlike the continuous flow system the researchers presented last year, which had to be manually configured after each synthesis, the new system is entirely configured by the robotic platform.

“This gives us the ability to sequence one molecule after another, as well as generate a library of molecules on the system, autonomously,” says Jensen.

The design for the platform, which is about two cubic meters in size — slightly smaller than a standard chemical fume hood — resembles a telephone switchboard and operator system that moves connections between the modules on the platform.

“The robotic arm is what allowed us to manipulate the fluidic paths, which reduced the number of process modules and fluidic complexity of the system, and by reducing the fluidic complexity we can increase the molecular complexity,” says Thomas. “That allowed us to add additional reaction steps and expand the set of reactions that could be completed on the system within a relatively small footprint.”

Toward full automation

The researchers tested the full system by creating 15 different medicinal small molecules of different synthesis complexity, with processes taking anywhere from two hours for the simplest creations to about 68 hours for manufacturing multiple compounds.

The team synthesized a variety of compounds: aspirin and the antibiotic secnidazole in back-to-back processes; the painkiller lidocaine and the antianxiety drug diazepam in back-to-back processes using a common feedstock of reagents; the blood thinner warfarin and the Parkinson’s disease drug safinamide, to show how the software could design compounds with similar molecular components but differing 3-D structures; and a family of five ACE inhibitor drugs and a family of four nonsteroidal anti-inflammatory drugs.

“I’m particularly proud of the diversity of the chemistry and the kinds of different chemical reactions,” says Jamison, who said the system handled about 30 different reactions compared to about 12 different reactions in the previous continuous flow system.

“We are really trying to close the gap between idea generation from these programs and what it takes to actually run a synthesis,” says Coley. “We hope that next-generation systems will increase further the fraction of time and effort that scientists can focus on creativity and design.”

The research was supported, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) Make-It program.

August 8, 2019 | More

Automating artificial intelligence for medical decision-making

MIT computer scientists are hoping to accelerate the use of artificial intelligence to improve medical decision-making, by automating a key step that’s usually done by hand — and that’s becoming more laborious as certain datasets grow ever-larger.

The field of predictive analytics holds increasing promise for helping clinicians diagnose and treat patients. Machine-learning models can be trained to find patterns in patient data to aid in sepsis care, design safer chemotherapy regimens, and predict a patient’s risk of having breast cancer or dying in the ICU, to name just a few examples.

Typically, training datasets consist of many sick and healthy subjects, but with relatively little data for each subject. Experts must then find just those aspects — or “features” — in the datasets that will be important for making predictions.

This “feature engineering” can be a laborious and expensive process. But it’s becoming even more challenging with the rise of wearable sensors, because researchers can more easily monitor patients’ biometrics over long periods, tracking sleeping patterns, gait, and voice activity, for example. After only a week’s worth of monitoring, experts could have several billion data samples for each subject.

In a paper being presented at the Machine Learning for Healthcare conference this week, MIT researchers demonstrate a model that automatically learns features predictive of vocal cord disorders. The features come from a dataset of about 100 subjects, each with about a week’s worth of voice-monitoring data and several billion samples — in other words, a small number of subjects and a large amount of data per subject. The dataset contains signals captured from a small accelerometer sensor mounted on subjects’ necks.

In experiments, the model used features automatically extracted from these data to classify, with high accuracy, patients with and without vocal cord nodules. These are lesions that develop in the larynx, often because of patterns of voice misuse such as belting out songs or yelling. Importantly, the model accomplished this task without a large set of hand-labeled data.

“It’s becoming increasingly easy to collect long time-series datasets. But you have physicians that need to apply their knowledge to labeling the dataset,” says lead author Jose Javier Gonzalez Ortiz, a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to remove that manual part for the experts and offload all feature engineering to a machine-learning model.”

The model can be adapted to learn patterns of any disease or condition. But the ability to detect the daily voice-usage patterns associated with vocal cord nodules is an important step in developing improved methods to prevent, diagnose, and treat the disorder, the researchers say. That could include designing new ways to identify and alert people to potentially damaging vocal behaviors.

Joining Gonzalez Ortiz on the paper is John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering and head of CSAIL’s Data Driven Inference Group; Robert Hillman, Jarrad Van Stan, and Daryush Mehta, all of Massachusetts General Hospital’s Center for Laryngeal Surgery and Voice Rehabilitation; and Marzyeh Ghassemi, an assistant professor of computer science and medicine at the University of Toronto.

Forced feature-learning

For years, the MIT researchers have worked with the Center for Laryngeal Surgery and Voice Rehabilitation to develop and analyze data from a sensor to track subject voice usage during all waking hours. The sensor is an accelerometer with a node that sticks to the neck and is connected to a smartphone. As the person talks, the smartphone gathers data from the displacements in the accelerometer.

In their work, the researchers collected a week’s worth of this data — called “time-series” data — from 104 subjects, half of whom were diagnosed with vocal cord nodules. For each patient, there was also a matching control, meaning a healthy subject of similar age, sex, occupation, and other factors.

Traditionally, experts would need to manually identify features that may be useful for a model to detect various diseases or conditions. That helps prevent a common machine-learning problem in health care: overfitting. That’s when, in training, a model “memorizes” subject data instead of learning just the clinically relevant features. In testing, those models often fail to discern similar patterns in previously unseen subjects.

“Instead of learning features that are clinically significant, a model sees patterns and says, ‘This is Sarah, and I know Sarah is healthy, and this is Peter, who has a vocal cord nodule.’ So, it’s just memorizing patterns of subjects. Then, when it sees data from Andrew, which has a new vocal usage pattern, it can’t figure out if those patterns match a classification,” Gonzalez Ortiz says.

The main challenge, then, was preventing overfitting while automating manual feature engineering. To that end, the researchers forced the model to learn features without subject information. For their task, that meant capturing all moments when subjects speak and the intensity of their voices.

As their model crawls through a subject’s data, it’s programmed to locate voicing segments, which comprise only roughly 10 percent of the data. For each of these voicing windows, the model computes a spectrogram, a visual representation of the spectrum of frequencies varying over time, which is often used for speech processing tasks. The spectrograms are then stored as large matrices of thousands of values.
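The spectrogram step might look roughly like the numpy sketch below: one magnitude-FFT column per short frame of a voicing segment. The frame sizes, sampling rate, and synthetic signal are assumptions, not the paper’s parameters:

```python
import numpy as np

# Sketch: compute a spectrogram (magnitude of a short-time FFT) for
# one voicing window, as a matrix of frequencies over time.

def spectrogram(signal, frame_len=256, hop=128):
    frames = np.array([signal[i:i + frame_len]
                       for i in range(0, len(signal) - frame_len + 1, hop)])
    window = np.hanning(frame_len)
    # Rows are frequency bins, columns are time frames.
    return np.abs(np.fft.rfft(frames * window, axis=1)).T

fs = 8000                                  # Hz, assumed sampling rate
t = np.arange(fs) / fs
voicing = np.sin(2 * np.pi * 220 * t)      # 1-second synthetic voiced segment
spec = spectrogram(voicing)
print(spec.shape)                          # (frame_len // 2 + 1, num frames)
```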

But those matrices are huge and difficult to process. So, an autoencoder — a neural network optimized to generate efficient data encodings from large amounts of data — first compresses the spectrogram into an encoding of 30 values. It then decompresses that encoding into a separate spectrogram.

Basically, the model must ensure that the decompressed spectrogram closely resembles the original spectrogram input. In doing so, it’s forced to learn the compressed representation of every spectrogram segment input over each subject’s entire time-series data. The compressed representations are the features that help train machine-learning models to make predictions.

Mapping normal and abnormal features

In training, the model learns to map those features to “patients” or “controls.” Patients will have more abnormal voicing patterns than controls. In testing on previously unseen subjects, the model similarly condenses all spectrogram segments into a reduced set of features. Then, it’s majority rules: If the subject has mostly abnormal voicing segments, they’re classified as patients; if they have mostly normal ones, they’re classified as controls.
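The majority-rule decision amounts to a few lines of code; the labels and names here are hypothetical:

```python
# Sketch of the majority-rule step: classify a subject from
# per-segment predictions.

def classify_subject(segment_labels):
    """segment_labels: 'abnormal' or 'normal', one per voicing segment."""
    abnormal = sum(1 for s in segment_labels if s == "abnormal")
    return "patient" if abnormal > len(segment_labels) / 2 else "control"

print(classify_subject(["abnormal"] * 7 + ["normal"] * 3))  # patient
print(classify_subject(["abnormal"] * 2 + ["normal"] * 8))  # control
```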

In experiments, the model performed as accurately as state-of-the-art models that require manual feature engineering. Importantly, the researchers’ model performed accurately in both training and testing, indicating it’s learning clinically relevant patterns from the data, not subject-specific information.

Next, the researchers want to monitor how various treatments — such as surgery and vocal therapy — impact vocal behavior. If patients’ behaviors move from abnormal to normal over time, they’re most likely improving. They also hope to use a similar technique on electrocardiogram data, which is used to track muscular functions of the heart.

August 6, 2019 | More

Microfluidics device helps diagnose sepsis in minutes

A novel sensor designed by MIT researchers could dramatically accelerate the process of diagnosing sepsis, a leading cause of death in U.S. hospitals that kills nearly 250,000 patients annually.

Sepsis occurs when the body’s immune response to infection triggers an inflammation chain reaction throughout the body, causing high heart rate, high fever, shortness of breath, and other issues. If left unchecked, it can lead to septic shock, where blood pressure falls and organs shut down. To diagnose sepsis, doctors traditionally rely on various diagnostic tools, including vital signs, blood tests, and other imaging and lab tests.

In recent years, researchers have found protein biomarkers in the blood that are early indicators of sepsis. One promising candidate is interleukin-6 (IL-6), a protein produced in response to inflammation. In sepsis patients, IL-6 levels can rise hours before other symptoms begin to show. But even at these elevated levels, the concentration of this protein in the blood is too low overall for traditional assay devices to detect it quickly.

In a paper being presented this week at the Engineering in Medicine and Biology Conference, MIT researchers describe a microfluidics-based system that automatically detects clinically significant levels of IL-6 for sepsis diagnosis in about 25 minutes, using less than a finger prick of blood.

In one microfluidic channel, microbeads laced with antibodies mix with a blood sample to capture the IL-6 biomarker. In another channel, only beads containing the biomarker attach to an electrode. Running voltage through the electrode produces an electrical signal for each biomarker-laced bead, which is then converted into the biomarker concentration level.

“For an acute disease, such as sepsis, which progresses very rapidly and can be life-threatening, it’s helpful to have a system that rapidly measures these nonabundant biomarkers,” says first author Dan Wu, a PhD student in the Department of Mechanical Engineering. “You can also frequently monitor the disease as it progresses.”

Joining Wu on the paper is Joel Voldman, a professor and associate head of the Department of Electrical Engineering and Computer Science, co-director of the Medical Electronic Device Realization Center, and a principal investigator in the Research Laboratory of Electronics and the Microsystems Technology Laboratories.

Integrated, automated design

Traditional assays that detect protein biomarkers are bulky, expensive machines relegated to labs that require about a milliliter of blood and produce results in hours. In recent years, portable “point-of-care” systems have been developed that use microliters of blood to get similar results in about 30 minutes.

But point-of-care systems can be very expensive since most use pricey optical components to detect the biomarkers. They also capture only a small number of proteins, many of which are among the more abundant ones in blood. Any effort to decrease the price, shrink down components, or increase the range of detectable proteins negatively impacts their sensitivity.

In their work, the researchers wanted to shrink components of the magnetic-bead-based assay, which is often used in labs, onto an automated microfluidics device that’s roughly several square centimeters. That required manipulating beads in micron-sized channels and fabricating a device in the Microsystems Technology Laboratory that automated the movement of fluids.

The beads are coated with an antibody that attracts IL-6, as well as a catalyzing enzyme called horseradish peroxidase. The beads and blood sample are injected into the device, entering into an “analyte-capture zone,” which is basically a loop. Along the loop is a peristaltic pump — commonly used for controlling liquids — with valves automatically controlled by an external circuit. Opening and closing the valves in specific sequences circulates the blood and beads to mix together. After about 10 minutes, the IL-6 proteins have bound to the antibodies on the beads.

Automatically reconfiguring the valves at that time forces the mixture into a smaller loop, called the “detection zone,” where it stays trapped. A tiny magnet collects the beads for a brief wash before releasing them around the loop. After about 10 minutes, many beads have stuck on an electrode coated with a separate antibody that attracts IL-6. At that time, a solution flows into the loop and washes away the untethered beads, while the ones with IL-6 protein remain on the electrode.

The solution carries a specific molecule that reacts to the horseradish enzyme to create a compound that responds to electricity. When a voltage is applied to the solution, each remaining bead creates a small current. A common chemistry technique called “amperometry” converts that current into a readable signal. The device counts the signals and calculates the concentration of IL-6.
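The readout step described above amounts to a calibration: each enzyme-bearing bead contributes a roughly fixed current, so total current maps to a bead count and then to a concentration. The sketch below is illustrative only, not the researchers' code, and the per-bead current and calibration slope are invented placeholders.

```python
# Illustrative sketch (not the authors' code): converting an amperometric
# current readout into a biomarker concentration via a calibration curve.
# All numeric constants here are hypothetical placeholders.

def beads_from_current(total_current_na, current_per_bead_na=0.5):
    """Each enzyme-laced bead contributes a roughly fixed current."""
    return total_current_na / current_per_bead_na

def concentration_pg_ml(bead_count, beads_per_pg_ml=2.0):
    """Map bead count to concentration via a (hypothetical) linear calibration."""
    return bead_count / beads_per_pg_ml

current = 40.0                        # nA measured across the electrode
beads = beads_from_current(current)   # -> 80 beads
il6 = concentration_pg_ml(beads)      # -> 40.0 pg/mL
print(f"{il6:.1f} pg/mL IL-6")
```

In practice the calibration would be nonlinear and fit against known standards; the linear mapping is only to make the counting idea concrete.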

“On their end, doctors just load in a blood sample using a pipette. Then, they press a button and 25 minutes later they know the IL-6 concentration,” Wu says.

The device uses about 5 microliters of blood, which is about a quarter the volume of blood drawn from a finger prick and a fraction of the 100 microliters required to detect protein biomarkers in lab-based assays. The device captures IL-6 concentrations as low as 16 picograms per milliliter, which is below the concentrations that signal sepsis, meaning the device is sensitive enough to provide clinically relevant detection.

A general platform

The current design has eight separate microfluidics channels to measure as many different biomarkers or blood samples in parallel. Different antibodies and enzymes can be used in separate channels to detect different biomarkers, or different antibodies can be used in the same channel to detect several biomarkers simultaneously.

Next, the researchers plan to create a panel of important sepsis biomarkers for the device to capture, including interleukin-6, interleukin-8, C-reactive protein, and procalcitonin. But there’s really no limit to how many different biomarkers the device can measure, for any disease, Wu says. Notably, more than 200 protein biomarkers for various diseases and conditions have been approved by the U.S. Food and Drug Administration.

“This is a very general platform,” Wu says. “If you want to increase the device’s physical footprint, you can scale up and design more channels to detect as many biomarkers as you want.”

The work was funded by Analog Devices, Maxim Integrated, and the Novartis Institutes for BioMedical Research.

July 23, 2019 | More

Behind the scenes of the Apollo mission at MIT

Fifty years ago this week, humanity made its first expedition to another world, when Apollo 11 touched down on the moon and two astronauts walked on its surface. That moment changed the world in ways that still reverberate today.

MIT’s deep and varied connections to that epochal event — many of which have been described on MIT News — began years before the actual landing, when the MIT Instrumentation Laboratory (now Draper Labs) signed the very first contract to be awarded for the Apollo program after its announcement by President John F. Kennedy in 1961. The Institute’s involvement continued throughout the program — and is still ongoing today.

MIT’s role in creating the navigation and guidance system that got the mission to the moon and back has been widely recognized in books, movies, and television series. But many other aspects of the Institute’s involvement in the Apollo program and its legacy, including advances in mechanical and computational engineering, simulation technology, biomedical studies, and the geophysics of planet formation, have remained less celebrated.

Amid the growing chorus of recollections in various media that have been appearing around this 50th anniversary, here is a small collection of bits and pieces about some of the unsung heroes and lesser-known facts from the Apollo program and MIT’s central role in it.

A new age in electronics

The computer system and its software that controlled the spacecraft — called the Apollo Guidance Computer and designed by the MIT Instrumentation Lab team under the leadership of Eldon Hall — were remarkable achievements that helped push technology forward in many ways.

The AGC’s programs were written in one of the first-ever compiler languages, called MAC, which was developed by Instrumentation Lab engineer Hal Laning. The 1-cubic-foot computer itself was the first significant use of silicon integrated circuit chips, and it greatly accelerated the development of the microchip technology that has gone on to change virtually every consumer product.

In an age when most computers took up entire climate-controlled rooms, the compact AGC was uniquely small and lightweight. But most of its “software” was actually hard-wired: The programs were woven, with tiny donut-shaped metal “cores” strung like beads along a set of wires; a given wire passed outside a donut to represent a 0, or through its hole to represent a 1. These so-called rope memories were made in the Boston suburbs at Raytheon, mostly by women who had been hired because they had experience in the weaving industry. Once a rope was made, there was no way to change individual bits within it, so any change to the software required weaving a whole new rope, and last-minute changes were impossible.
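The wire-through-core scheme can be made concrete with a small sketch. This is a conceptual model only, not the AGC's actual memory organization: the point is that the bit pattern is fixed at "weaving" time and can only be read, never rewritten.

```python
# Conceptual sketch of core rope memory: each bit of a word is fixed by
# whether a sense wire threads through (1) or bypasses (0) a magnetic core.
# Once woven, the pattern cannot change -- a software edit meant a new rope.

def weave_word(bits):
    """Return an immutable 'rope' for one word: True = wire through the core."""
    return tuple(bit == 1 for bit in bits)

def read_word(rope):
    """Sense each core: a threaded wire reads as 1, a bypassing wire as 0."""
    return [1 if threaded else 0 for threaded in rope]

rope = weave_word([1, 0, 1, 1, 0])  # fixed at manufacture time
print(read_word(rope))              # [1, 0, 1, 1, 0]
```

Using a tuple (immutable in Python) mirrors the physical constraint: there is no operation that changes a bit in place.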

As David Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing, points out in his book “Digital Apollo,” that system represented the first time a computer of any kind had been used to control, in real time, many functions of a vehicle carrying human beings — a trend that continues to accelerate as the world moves toward self-driving vehicles. Right after the Apollo successes, the AGC was directly adapted to an F-8 fighter jet, to create the first-ever fly-by-wire system for aircraft, in which the plane’s control surfaces are moved via a computer rather than direct cables and hydraulic systems. This approach is now widespread in the aerospace industry, says John Tylko, who teaches MIT’s class 16.895J (Engineering Apollo: The Moon Project as a Complex System) every other year.

As sophisticated as the computer was for its time, computer users today would barely recognize it as such. Its keyboard and display screen looked more like those on a microwave oven than a computer: a simple numeric keypad and a few lines of five-digit luminous displays. Even the big mainframe computer used to test the code as it was being developed had no keyboard or monitor that the programmers ever saw. Programmers wrote their code by hand, then typed it onto punch cards — one card per line — and handed the deck of cards to a computer operator. The next day, the cards would be returned with a printout of the program’s output. And in this time long before email, communications among the team often relied on handwritten paper notes.

Priceless rocks

MIT’s involvement in the geophysical side of the Apollo program also extends back to the early planning stages — and continues today. For example, Professor Nafi Toksöz, an expert in seismology, helped to develop a seismic monitoring station that the astronauts placed on the moon, where it helped lead to a greater understanding of the moon’s structure and formation. “It was the hardest work I have ever done, but definitely the most exciting,” he has said.

Toksöz says that the data from the Apollo seismometers “changed our understanding of the moon completely.” The seismic waves, which on Earth continue for a few minutes, lasted for two hours on the moon, which turned out to be the result of the moon’s extreme lack of water. “That was something we never expected, and had never seen,” he recalls.

The first seismometer was placed on the moon’s surface very shortly after the astronauts landed, and seismologists including Toksöz started seeing the data right away — including every footstep the astronauts took on the surface. Even when the astronauts returned to the lander to sleep before the morning takeoff, the team could see that Buzz Aldrin ScD ’63 and Neil Armstrong were having a sleepless night, with every toss and turn dutifully recorded on the seismic traces.

MIT Professor Gene Simmons was among the first group of scientists to gain access to the lunar samples as soon as NASA released them from quarantine, and he and others in what is now the Department of Earth, Planetary and Atmospheric Sciences (EAPS) have continued to work on these samples ever since. As part of a conference on campus, he exhibited some samples of lunar rock and soil in their first close-up display to the public, where some people may even have had a chance to touch the samples.

Others in EAPS have also been studying those Apollo samples almost from the beginning. Timothy Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences, started studying the Apollo samples in 1971 as a graduate student at Harvard University, and has been doing research on them ever since. Grove says that these samples have led to major new understandings of planetary formation processes that have helped us understand the Earth and other planets better as well.

Among other findings, the rocks showed that ratios of the isotopes of oxygen and other elements in the moon rocks were identical to those in terrestrial rocks but completely different from those of any meteorites, proving that the Earth and the moon had a common origin and leading to the hypothesis that the moon was created through a giant impact from a planet-sized body. The rocks also showed that the entire surface of the moon had likely been molten at one time. The idea that a planetary body could be covered by an ocean of magma was a major surprise to geologists, Grove says.

Many puzzles remain to this day, and the analysis of the rock and soil samples goes on. “There’s still a lot of exciting stuff” being found in these samples, Grove says.

Sorting out the facts

In the spate of publicity and new books, articles, and programs about Apollo, inevitably some of the facts — some trivial, some substantive — have been scrambled along the way. “There are some myths being advanced,” says Tylko, some of which he addresses in his “Engineering Apollo” class. “People tend to oversimplify” many aspects of the mission, he says.

For example, many accounts have described the sequence of alarms that came from the guidance computer during the final minutes of the lunar descent, forcing mission controllers to make the daring decision to go ahead despite the unknown nature of the problem. But Don Eyles, one of the Instrumentation Lab’s programmers who had written the landing software for the AGC, says that he can’t think of a single published account of that sequence of events that gets it entirely right. According to Eyles, many have claimed the problem was caused by the fact that the rendezvous radar switch had been left on, so that its data were overloading the computer and causing it to reboot.

But Eyles says the actual reason was a much more complex sequence of events, including a crucial mismatch between two circuits that would only occur in rare circumstances and thus would have been hard to detect in testing, and a probable last-minute decision to put a vital switch in a position that allowed it to happen. Eyles has described these details in a memoir about the Apollo years and in a technical paper available online, but he says they are difficult to summarize simply. He thinks the author Norman Mailer may have come closest, capturing the essence of it in his book “Of a Fire on the Moon,” where he describes the issue as caused by a “sneak circuit” and an “undetectable” error in the onboard checklist.

Some accounts have described the AGC as a very limited and primitive computer compared to today’s average smartphone, and Tylko acknowledges that it had a tiny fraction of the power of today’s smart devices — but, he says, “that doesn’t mean they were unsophisticated.” While the AGC only had about 36 kilobytes of read-only memory and 2 kilobytes of random-access memory, “it was exceptionally sophisticated and made the best use of the resources available at the time,” he says.

In some ways it was even ahead of its time, Tylko says. For example, the compiler language developed by Laning along with Ramon Alonso at the Instrumentation Lab used an architecture that he says was relatively intuitive and easy to interact with. Based on a system of “verbs” (actions to be performed) and “nouns” (data to be worked on), “it could probably have made its way into the architecture of PCs,” he says. “It’s an elegant interface based on the way humans think.”
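The verb/noun idea is simple enough to sketch: a verb names an action and a noun names the data it acts on. The toy dispatcher below is purely illustrative; the verb and noun codes and the state values are invented for the example, not actual AGC assignments.

```python
# Toy sketch of the AGC's verb/noun interface concept: a "verb" selects an
# action, a "noun" selects the quantity it applies to. Codes and values
# below are invented for illustration, not real AGC verb/noun assignments.

STATE = {"velocity": 1524.0, "altitude": 1200.0}  # hypothetical telemetry

VERBS = {16: "display"}                  # e.g., a "monitor/display" verb
NOUNS = {62: "velocity", 63: "altitude"}

def execute(verb, noun):
    """Dispatch a keyed-in verb/noun pair to an action on a quantity."""
    action, target = VERBS[verb], NOUNS[noun]
    if action == "display":
        return f"{target} = {STATE[target]}"

print(execute(16, 62))  # velocity = 1524.0
```

The appeal Tylko describes is visible even here: the operator composes an action and an object, much as a sentence pairs a verb with a noun.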

Some accounts go so far as to claim that the computer failed during the descent and astronaut Neil Armstrong had to take over the controls and land manually, but in fact partial manual control was always part of the plan, and the computer remained in ultimate control throughout the mission. None of the onboard computers ever malfunctioned through the entire Apollo program, according to astronaut David Scott SM ’62, who used the computer on two Apollo missions: “We never had a failure, and I think that is a remarkable achievement.”

Behind the scenes

At the peak of the program, a total of about 1,700 people at MIT’s Instrumentation Lab were working on the Apollo program’s software and hardware, according to Draper Laboratory, the Instrumentation Lab’s successor, which spun off from MIT in 1973. A few of those, such as the near-legendary “Doc” Draper himself — Charles Stark Draper ’26, SM ’28, ScD ’38, former head of the Department of Aeronautics and Astronautics (AeroAstro) — have become widely known for their roles in the mission, but most did their work in near-anonymity, and many went on to entirely different kinds of work after the Apollo program’s end.

Margaret Hamilton, who directed the Instrumentation Lab’s Software Engineering Division, was little known outside of the program itself until an iconic photo of her next to the original stacks of AGC code began making the rounds on social media in the mid 2010s. In 2016, when she was awarded the Presidential Medal of Freedom by President Barack Obama, MIT Professor Jaime Peraire, then head of AeroAstro, said of Hamilton that “She was a true software engineering pioneer, and it’s not hyperbole to say that she, and the Instrumentation Lab’s Software Engineering Division that she led, put us on the moon.” After Apollo, Hamilton went on to found a software services company, which she still leads.

Many others who played major roles in that software and hardware development have also had their roles little recognized over the years. For example, Hal Laning ’40, PhD ’47, who developed the programming language for the AGC, also devised its executive operating system, which employed what was at the time a new way of handling multiple programs at once, by assigning each one a priority level so that the most important tasks, such as controlling the lunar module’s thrusters, would always be taken care of. “Hal was the most brilliant person we ever had the chance to work with,” Instrumentation Lab engineer Dan Lickly told MIT Technology Review. And that priority-driven operating system proved crucial in allowing the Apollo 11 landing to proceed safely in spite of the 1202 alarms going off during the lunar descent.
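The priority-driven executive described above can be sketched in a few lines: every job carries a priority, and under overload the scheduler keeps servicing the most important jobs and sheds the rest. This is a minimal illustration of the scheduling idea, not Laning's actual executive.

```python
# Sketch of a priority-driven executive (illustrative, not Laning's code):
# jobs carry priorities, and when capacity runs out, the least important
# jobs are shed while critical ones (e.g., thruster control) still run.
import heapq

def run(jobs, capacity):
    """jobs: list of (priority, name); lower number = higher priority.
    Run at most `capacity` jobs, most important first."""
    heapq.heapify(jobs)
    done = []
    while jobs and capacity > 0:
        _, name = heapq.heappop(jobs)
        done.append(name)
        capacity -= 1
    return done

jobs = [(3, "update display"), (1, "control thrusters"), (2, "navigation")]
print(run(jobs, 2))  # ['control thrusters', 'navigation']
```

This is essentially what let the landing continue through the 1202 alarms: when the computer restarted under overload, low-priority work was dropped while the thruster-control tasks kept running.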

While the majority of the team working on the project was male, software engineer Dana Densmore recalls that compared to the heavily male-dominated workforce at NASA at the time, the MIT lab was relatively welcoming to women. Densmore, who was a control supervisor for the lunar landing software, told The Wall Street Journal that “NASA had a few women, and they kept them hidden. At the lab it was very different,” and there were opportunities for women there to take on significant roles in the project.

Hamilton recalls the atmosphere at the Instrumentation Lab in those days as one of real dedication and meritocracy. As she told MIT News in 2009, “Coming up with solutions and new ideas was an adventure. Dedication and commitment were a given. Mutual respect was across the board. Because software was a mystery, a black box, upper management gave us total freedom and trust. We had to find a way and we did. Looking back, we were the luckiest people in the world; there was no choice but to be pioneers.”

July 18, 2019 | More

Experiments show dramatic increase in solar cell output

In any conventional silicon-based solar cell, there is an absolute limit on overall efficiency, based partly on the fact that each photon of light can only knock loose a single electron, even if that photon carried twice the energy needed to do so. But now, researchers have demonstrated a method for getting high-energy photons striking silicon to kick out two electrons instead of one, opening the door for a new kind of solar cell with greater efficiency than was thought possible.

While conventional silicon cells have an absolute theoretical maximum efficiency of about 29.1 percent conversion of solar energy, the new approach, developed over the last several years by researchers at MIT and elsewhere, could bust through that limit, potentially adding several percentage points to that maximum output. The results are described today in the journal Nature, in a paper by graduate student Markus Einzinger, professor of chemistry Moungi Bawendi, professor of electrical engineering and computer science Marc Baldo, and eight others at MIT and at Princeton University.

The basic concept behind this new technology has been known for decades, and the first demonstration that the principle could work was carried out by some members of this team six years ago. But actually translating the method into a full, operational silicon solar cell took years of hard work, Baldo says.

That initial demonstration “was a good test platform” to show that the idea could work, explains Daniel Congreve PhD ’15, an alumnus now at the Rowland Institute at Harvard, who was the lead author in that prior report and is a co-author of the new paper. Now, with the new results, “we’ve done what we set out to do” in that project, he says.

The original study demonstrated the production of two electrons from one photon, but it did so in an organic photovoltaic cell, which is less efficient than a silicon solar cell. It turned out that transferring the two electrons from a top collecting layer made of tetracene into the silicon cell “was not straightforward,” Baldo says. Troy Van Voorhis, a professor of chemistry at MIT who was part of that original team, points out that the concept was first proposed back in the 1970s, and says wryly that turning that idea into a practical device “only took 40 years.”

The key to splitting the energy of one photon into two electrons lies in a class of materials that possess “excited states” called excitons, Baldo says: In these excitonic materials, “these packets of energy propagate around like the electrons in a circuit,” but with quite different properties than electrons. “You can use them to change energy — you can cut them in half, you can combine them.” In this case, they were going through a process called singlet exciton fission, which is how the light’s energy gets split into two separate, independently moving packets of energy. The material first absorbs a photon, forming an exciton that rapidly undergoes fission into two excited states, each with half the energy of the original state.

But the tricky part was then coupling that energy over into the silicon, a material that is not excitonic. This coupling had never been accomplished before.

As an intermediate step, the team tried coupling the energy from the excitonic layer into a material called quantum dots. “They’re still excitonic, but they’re inorganic,” Baldo says. “That worked; it worked like a charm,” he says. By understanding the mechanism taking place in that material, he says, “we had no reason to think that silicon wouldn’t work.”

What that work showed, Van Voorhis says, is that the key to these energy transfers lies in the very surface of the material, not in its bulk. “So it was clear that the surface chemistry on silicon was going to be important. That was what was going to determine what kinds of surface states there were.” That focus on the surface chemistry may have been what allowed this team to succeed where others had not, he suggests.

The key was in a thin intermediate layer. “It turns out this tiny, tiny strip of material at the interface between these two systems [the silicon solar cell and the tetracene layer with its excitonic properties] ended up defining everything. It’s why other researchers couldn’t get this process to work, and why we finally did.” It was Einzinger “who finally cracked that nut,” he says, by using a layer of a material called hafnium oxynitride.

The layer is only a few atoms thick, or just 8 angstroms (an angstrom is one ten-billionth of a meter), but it acted as a “nice bridge” for the excited states, Baldo says. That finally made it possible for the single high-energy photons to trigger the release of two electrons inside the silicon cell. That produces a doubling of the amount of energy produced by a given amount of sunlight in the blue and green part of the spectrum. Overall, that could produce an increase in the power produced by the solar cell — from a theoretical maximum of 29.1 percent, up to a maximum of about 35 percent.
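The headline numbers admit a quick back-of-the-envelope check. The sketch below is not from the paper: the 20 percent share assumed for blue/green photocurrent is an assumption chosen for illustration, so that doubling that share reproduces the reported jump from 29.1 percent to roughly 35 percent.

```python
# Rough sanity check of the reported efficiency gain (not from the paper).
# Assumption: ~20% of a conventional cell's output comes from the
# high-energy blue/green photons whose current the new approach doubles.

base_limit = 29.1        # %, theoretical ceiling of conventional silicon
blue_green_share = 0.20  # assumed fraction of output from those photons

boosted = (base_limit * (1 - blue_green_share)
           + base_limit * blue_green_share * 2)
print(f"{boosted:.1f}%")  # ~34.9%, close to the reported ~35% ceiling
```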

Actual silicon cells are not yet at their maximum, and neither is the new material, so more development needs to be done, but the crucial step of coupling the two materials efficiently has now been proven. “We still need to optimize the silicon cells for this process,” Baldo says. For one thing, with the new system those cells can be thinner than current versions. Work also needs to be done on stabilizing the materials for durability. Overall, commercial applications are probably still a few years off, the team says.

Other approaches to improving the efficiency of solar cells tend to involve adding another kind of cell, such as a perovskite layer, over the silicon. Baldo says “they’re building one cell on top of another. Fundamentally, we’re making one cell — we’re kind of turbocharging the silicon cell. We’re adding more current into the silicon, as opposed to making two cells.”

The researchers have measured one special property of hafnium oxynitride that helps it transfer the excitonic energy. “We know that hafnium oxynitride generates additional charge at the interface, which reduces losses by a process called electric field passivation. If we can establish better control over this phenomenon, efficiencies may climb even higher,” Einzinger says. So far, no other material they’ve tested can match its properties.

The research was supported as part of the MIT Center for Excitonics, funded by the U.S. Department of Energy.

July 3, 2019 | More

Drag-and-drop data analytics

In the Iron Man movies, Tony Stark uses a holographic computer to project 3-D data into thin air, manipulate them with his hands, and find fixes to his superhero troubles. In the same vein, researchers from MIT and Brown University have now developed a system for interactive data analytics that runs on touchscreens and lets everyone — not just billionaire tech geniuses — tackle real-world issues.

For years, the researchers have been developing an interactive data-science system called Northstar, which runs in the cloud but has an interface that supports any touchscreen device, including smartphones and large interactive whiteboards. Users feed the system datasets, and manipulate, combine, and extract features on a user-friendly interface, using their fingers or a digital pen, to uncover trends and patterns.

In a paper being presented at the ACM SIGMOD conference, the researchers detail a new component of Northstar, called VDS for “virtual data scientist,” that instantly generates machine-learning models to run prediction tasks on their datasets. Doctors, for instance, can use the system to help predict which patients are more likely to have certain diseases, while business owners might want to forecast sales. If using an interactive whiteboard, everyone can also collaborate in real-time.

The aim is to democratize data science by making it easy to do complex analytics, quickly and accurately.

“Even a coffee shop owner who doesn’t know data science should be able to predict their sales over the next few weeks to figure out how much coffee to buy,” says co-author and long-time Northstar project lead Tim Kraska, an associate professor of electrical engineering and computer science at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and founding co-director of the new Data System and AI Lab (DSAIL). “In companies that have data scientists, there’s a lot of back and forth between data scientists and nonexperts, so we can also bring them into one room to do analytics together.”

VDS is based on an increasingly popular technique in artificial intelligence called automated machine-learning (AutoML), which lets people with limited data-science know-how train AI models to make predictions based on their datasets. Currently, the tool leads the DARPA D3M Automatic Machine Learning competition, which every six months decides on the best-performing AutoML tool.

Joining Kraska on the paper are: first author Zeyuan Shang, a graduate student, and Emanuel Zgraggen, a postdoc and main contributor of Northstar, both of EECS, CSAIL, and DSAIL; Benedetto Buratti, Yeounoh Chung, Philipp Eichmann, and Eli Upfal, all of Brown; and Carsten Binnig, who recently moved from Brown to the Technical University of Darmstadt in Germany.

An “unbounded canvas” for analytics

The new work builds on years of collaboration on Northstar between researchers at MIT and Brown. Over four years, the researchers have published numerous papers detailing components of Northstar, including the interactive interface, operations on multiple platforms, accelerating results, and studies on user behavior.

Northstar starts as a blank, white interface. Users upload datasets into the system, which appear in a “datasets” box on the left. Any data labels will automatically populate a separate “attributes” box below. There’s also an “operators” box that contains various algorithms, as well as the new AutoML tool. All data are stored and analyzed in the cloud.

The researchers like to demonstrate the system on a public dataset that contains information on intensive care unit patients. Consider medical researchers who want to examine co-occurrences of certain diseases in certain age groups. They drag and drop into the middle of the interface a pattern-checking algorithm, which at first appears as a blank box. As input, they move into the box disease features labeled, say, “blood,” “infectious,” and “metabolic.” Percentages of those diseases in the dataset appear in the box. Then, they drag the “age” feature into the interface, which displays a bar chart of the patients’ age distribution. Drawing a line between the two boxes links them together. When the researchers circle an age range, the algorithm immediately computes the co-occurrence of the three diseases within that range.
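The query being built by those gestures is straightforward to express in code. The sketch below is a toy version of it on an invented patient table (field names and values are made up for illustration); Northstar itself runs such queries in the cloud against the full dataset.

```python
# Toy version of the drag-and-drop query described above: the fraction of
# patients in a circled age range who have all three selected diseases.
# The patient rows below are invented for illustration.

patients = [
    {"age": 45, "blood": True,  "infectious": True,  "metabolic": True},
    {"age": 52, "blood": True,  "infectious": False, "metabolic": True},
    {"age": 47, "blood": True,  "infectious": True,  "metabolic": True},
    {"age": 70, "blood": False, "infectious": True,  "metabolic": False},
]

def cooccurrence(rows, features, age_range):
    """Fraction of patients in the age range having all listed diseases."""
    lo, hi = age_range
    cohort = [r for r in rows if lo <= r["age"] <= hi]
    if not cohort:
        return 0.0
    hits = [r for r in cohort if all(r[f] for f in features)]
    return len(hits) / len(cohort)

result = cooccurrence(patients, ["blood", "infectious", "metabolic"], (40, 60))
print(result)  # 2 of the 3 patients aged 40-60 have all three -> ~0.667
```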

“It’s like a big, unbounded canvas where you can lay out how you want everything,” says Zgraggen, who is the key inventor of Northstar’s interactive interface. “Then, you can link things together to create more complex questions about your data.”

Approximating AutoML

With VDS, users can now also run predictive analytics on that data by getting models custom-fit to their tasks, such as data prediction, image classification, or analyzing complex graph structures.

Using the above example, say the medical researchers want to predict which patients may have blood disease based on all features in the dataset. They drag and drop “AutoML” from the list of algorithms. It’ll first produce a blank box, but with a “target” tab, under which they’d drop the “blood” feature. The system will automatically find the best-performing machine-learning pipelines, presented as tabs with constantly updated accuracy percentages. Users can stop the process at any time, refine the search, and examine each model’s error rates, structure, computations, and other properties.
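At its core, that search is a loop over candidate pipelines with a constantly updated leaderboard. The stripped-down sketch below illustrates the idea only; VDS's real candidates are full machine-learning pipelines, whereas the "models" here are trivial stand-in functions.

```python
# Stripped-down sketch of the AutoML search idea (not the actual VDS code):
# score each candidate pipeline on the data and keep a leaderboard, so the
# search can be stopped at any point with a best-so-far answer.

def evaluate(pipeline, data):
    """Stand-in scorer: fraction of rows the toy 'pipeline' labels correctly."""
    return sum(1 for x, y in data if pipeline(x) == y) / len(data)

data = [(0, 0), (1, 1), (2, 0), (3, 1)]  # toy (feature, label) rows
candidates = {
    "always_zero": lambda x: 0,
    "parity":      lambda x: x % 2,
}

leaderboard = sorted(
    ((evaluate(fn, data), name) for name, fn in candidates.items()),
    reverse=True,
)
print(leaderboard[0])  # (1.0, 'parity') -- the best-scoring candidate
```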

According to the researchers, VDS is the fastest interactive AutoML tool to date, thanks, in part, to their custom “estimation engine.” The engine, which sits between the interface and the cloud storage, automatically creates several representative samples of a dataset that can be processed progressively to produce high-quality approximate results in seconds.

“Together with my co-authors I spent two years designing VDS to mimic how a data scientist thinks,” Shang says, meaning it instantly identifies which models and preprocessing steps it should or shouldn’t run on certain tasks, based on various encoded rules. It first chooses from a large list of those possible machine-learning pipelines and runs simulations on the sample set. In doing so, it remembers results and refines its selection. After delivering fast approximated results, the system refines the results in the back end. But the final numbers are usually very close to the first approximation.
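The progressive-sampling idea behind the estimation engine can be sketched simply: compute the statistic of interest on growing random samples, so a rough answer appears almost immediately and then refines toward the exact one. This is a hypothetical illustration, not the engine's implementation.

```python
# Sketch of progressive sampling (illustrative, not the VDS engine):
# estimate a statistic on growing random samples so early, fast
# approximations refine toward the exact full-data answer.
import random

def progressive_mean(values, sample_sizes, seed=0):
    """Return one mean estimate per sample size, smallest (fastest) first."""
    rng = random.Random(seed)
    estimates = []
    for n in sample_sizes:
        sample = rng.sample(values, n)  # sampling without replacement
        estimates.append(sum(sample) / n)
    return estimates

data = list(range(1000))  # true mean = 499.5
print(progressive_mean(data, [10, 100, 1000]))
```

The last estimate uses the full dataset and is exact; the earlier ones trade a little accuracy for a large speedup, which is what keeps the interface responsive.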

“For using a predictor, you don’t want to wait four hours to get your first results back. You want to already see what’s going on and, if you detect a mistake, you can immediately correct it. That’s normally not possible in any other system,” Kraska says. The researchers’ previous user studies, in fact, show that the moment you delay giving users results, they start to lose engagement with the system.

The researchers evaluated the tool on 300 real-world datasets. Compared with other state-of-the-art AutoML systems, VDS’s approximations were just as accurate but were generated within seconds, rather than the minutes to hours other tools require.

Next, the researchers are looking to add a feature that alerts users to potential data bias or errors. For instance, to protect patient privacy, sometimes researchers will label medical datasets with patients aged 0 (if they do not know the age) and 200 (if a patient is over 95 years old). But novices may not recognize such errors, which could completely throw off their analytics.

“If you’re a new user, you may get results and think they’re great,” Kraska says. “But we can warn people that there, in fact, may be some outliers in the dataset that may indicate a problem.”
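A simple version of such a warning is easy to sketch: flag values that look like privacy sentinels or are physically implausible. The rule below is invented for illustration; a real feature would learn or configure such checks per dataset.

```python
# Illustrative sketch of the planned bias/error warning (invented rule):
# flag age values that look like privacy sentinels (0 = unknown age,
# 200 = over 95) or are otherwise implausible, before analysis proceeds.

SENTINELS = {0, 200}

def audit_ages(ages):
    """Return a warning string if any ages look like sentinels/outliers."""
    flagged = [a for a in ages if a in SENTINELS or not (0 < a < 120)]
    if flagged:
        return (f"Warning: {len(flagged)} suspicious age value(s): "
                f"{sorted(set(flagged))}")
    return "No obvious age outliers."

print(audit_ages([34, 0, 57, 200, 61]))
# Warning: 2 suspicious age value(s): [0, 200]
```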

June 27, 2019 | More

Confining cell-killing treatments to tumors

Cytokines, small proteins released by immune cells to communicate with each other, have for some time been investigated as a potential cancer treatment.

However, despite their known potency and potential for use alongside other immunotherapies, cytokines have yet to be successfully developed into an effective cancer therapy.

That is because the proteins are highly toxic to healthy tissue and tumors alike, making them unsuitable for use in treatments administered to the entire body.

Injecting the cytokine treatment directly into the tumor itself could provide a method of confining its benefits to the tumor and sparing healthy tissue, but previous attempts to do this have resulted in the proteins leaking out of the cancerous tissue and into the body’s circulation within minutes.

Now researchers at the Koch Institute for Integrative Cancer Research at MIT have developed a technique to prevent cytokines from escaping once they have been injected into the tumor, by adding a Velcro-like protein that attaches itself to the tissue.

In this way the researchers, led by Dane Wittrup, the Carbon P. Dubbs Professor in Chemical Engineering and Biological Engineering and a member of the Koch Institute, hope to limit the harm caused to healthy tissue, while prolonging the treatment’s ability to attack the tumor.

To develop their technique, which they describe in a paper published today in the journal Science Translational Medicine, the researchers first investigated the different proteins found in tumors, to find one that could be used as a target for the cytokine treatment. They chose collagen, which is expressed abundantly in solid tumors.

They then undertook an extensive literature search to find proteins that bind effectively to collagen. They discovered a collagen-binding protein called lumican, which they then attached to the cytokines.

“When we inject (a collagen-anchoring cytokine treatment) intratumorally, we don’t have to worry about collagen found elsewhere in the body; we just have to make sure we have a protein that binds to collagen very tightly,” says lead author Noor Momin, a graduate student in the Wittrup Lab at MIT.

To test the treatment, the researchers used two cytokines known to stimulate and expand immune cell responses. The cytokines, interleukin-2 (IL-2) and interleukin-12 (IL-12), are also known to combine well with other immunotherapies.

Although IL-2 already has FDA approval, its severe side effects have so far limited its clinical use. Meanwhile, IL-12 therapies have not yet reached phase 3 clinical trials due to their severe toxicity.

The researchers tested the treatment by injecting the two different cytokines into tumors in mice. To make the test more challenging, they chose a type of melanoma that contains relatively low amounts of collagen, compared to other tumor types.

They then compared the effects of administering the cytokines alone and of injecting cytokines attached to the collagen-binding lumican.

“In addition, all of the cytokine therapies were given alongside a form of systemic therapy, such as a tumor-targeting antibody, a vaccine, a checkpoint blockade, or chimeric antigen receptor (CAR)-T cell therapy, as we wanted to show the potential of combining cytokines with many different immunotherapy modalities,” Momin says.

They found that when any of the treatments were administered individually, the mice did not survive. Combining the treatments improved survival rates slightly, but when the cytokine was administered with the lumican to bind to the collagen, the researchers found that over 90 percent of the mice survived with some combinations.

“So we were able to show that these combinations are synergistic, they work really well together, and that cytokines attached to lumican really helped reap the full benefits of the combination,” Momin says.

What’s more, attaching the lumican eliminated the problem of toxicity associated with cytokine treatments alone.

The paper attempts to address a major obstacle in the oncology field, that of how to target potent therapeutics to the tumor microenvironment to enable their local action, according to Shannon Turley, a staff scientist and specialist in cancer immunology at Genentech, who was not involved in the research.

“This is important because many of the most promising cancer drugs can have unwanted side effects in tissues beyond the tumor,” Turley says. “The team’s approach relies on two principles that together make for a novel approach: injection of the drug directly into the tumor site, and engineering of the drug to contain a ‘Velcro’ that attaches the drug to the tumor to keep it from leaking into circulation and acting all over the body.”

The researchers now plan to carry out further work to improve the technique, and to explore other treatments that could benefit from being combined with collagen-binding lumican, Momin says.

Ultimately, they hope the work will encourage other researchers to consider the use of collagen binding for cancer treatments, Momin says.

“We’re hoping the paper seeds the idea that collagen anchoring could be really advantageous for a lot of different therapies across all solid tumors.”

June 26, 2019 | More