News and Research

Commercial space: Can we privatize our way to the stars?

Barret Schlegelmilch (LGO ’18) and a team of LGOs in the MIT Sloan Astropreneurship and Space Industry Club hosted the recent conference.
Read more


Using statistics can improve clinical trials and outcomes

Dimitris Bertsimas, LGO faculty member and thesis advisor, Professor of Operations Research, and Co-Director of the Operations Research Center at MIT, explains how using more data would mean better treatments and fewer tears in clinical trials.

Sometimes science can be personal. When my father, who was living in Greece at the time, was diagnosed with stage IV gastric cancer in 2007, I set out to find the best possible care for him. As is the case with many patients with advanced disease, drug therapy was his best course. So, after unsuccessful surgery in Greece, he came to the US for treatment.

I contacted the most prestigious cancer hospitals in the country and found that they all used different drugs in different treatment regimens to treat advanced gastric cancer. As both a son and a scientist, I was surprised to discover that there was no standard treatment – something I would later realise was true of many different kinds of late-stage cancers.

My family and I were therefore left without a good way to make treatment decisions. As a result, I was forced to do a kind of back-of-the-envelope calculation. Based on the small number of published findings I could locate, I plotted different drug combinations on a curve, seeking to discover the sweet spot between the estimated survival period given by the chemotherapy treatment and the expected toxicity of the treatment.
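That back-of-the-envelope calculation can be sketched in a few lines of code. The regimen names, survival figures, toxicity scores, and weighting below are hypothetical placeholders for illustration, not clinical data:

```python
# Toy version of the survival-vs-toxicity trade-off described above.
# All names and numbers are hypothetical, not clinical data.
regimens = [
    {"name": "A", "survival_months": 11.0, "toxicity": 0.70},
    {"name": "B", "survival_months": 9.5, "toxicity": 0.35},
    {"name": "C", "survival_months": 10.2, "toxicity": 0.45},
]

def score(r, toxicity_weight=6.0):
    """Higher is better: expected survival penalized by expected toxicity."""
    return r["survival_months"] - toxicity_weight * r["toxicity"]

best = max(regimens, key=score)
print(best["name"])  # prints "C"
```

The "sweet spot" is simply the regimen maximizing the penalized score; in practice the weight on toxicity is a judgment call, which is exactly why a principled, data-driven approach is preferable to a hand calculation.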

Read the full post at Times Higher Education 

Dimitris Bertsimas is the Boeing Leaders for Global Operations Professor of Management, a Professor of Operations Research, and the Co-Director of the Operations Research Center at MIT.

May 23, 2017 | More

Illuminating uncertainty

Associate professor of aeronautics and astronautics and LGO thesis advisor Youssef Marzouk has been working to quantify and reduce the uncertainty in complex computational models, work that can be applied to tracking underground contaminants and improving the accuracy of weather forecasts.

How does today’s weather compare with what was forecast a week or even a day ago? Is that torrential Nor’easter that was predicted in fact just a light drizzle? Has the sun, projected to emerge from the clouds at 11 a.m., instead appeared at noon?

It may come as no surprise that weather predictions come with a fair amount of uncertainty, as do any predictions of large, complex, and interacting systems. And yet, many of us depend on such simulations for information, from everyday traffic and weather reports, to long-term projections for climate.

“You’re sort of using a simulation as an oracle,” says Youssef Marzouk, associate professor of aeronautics and astronautics at MIT. “But if we’re really going to use computations as a way of predicting what’s happening in the world, how can we get a handle on this very fuzzy problem of how believable the computations are?”

Quantifying and reducing the uncertainty in complex computational models is the major theme in Marzouk’s work, which he is applying to a wide range of problems, including tracking underground contaminants, characterizing combustion in jet engines, estimating the concentrations of trace gases in the atmosphere, and improving the accuracy in weather forecasts.

“I’m driven by developing methodology that will be broadly useful,” says Marzouk, who earned tenure in 2016.

An abstract pull

In the 1970s, Marzouk’s parents emigrated from Egypt to the U.S., ultimately settling in St. Louis, Missouri, where his father took up a faculty position at Washington University’s School of Dental Medicine. Before their move, Marzouk’s mother worked as a translator, performing simultaneous translations in French and Arabic for diplomats in the Middle East.

Marzouk and his sister were born and raised in suburban St. Louis, and he remembers feeling a pull toward science and math from an early age.

“When I was in grade school, kids would be playing in the playground, and I stayed in the classroom because I wanted to add the biggest numbers I possibly could,” Marzouk recalls. “I was a relatively nerdy kid.”

As his academic pursuits grew, so did his passion for music. Marzouk, following in his sister’s footsteps, took up piano lessons when he was 6 years old. In high school, he regularly participated in science fairs, and he entered and sometimes won piano competitions. In retrospect, he says that his interests in music and math may have shared some overlap.

“There’s a level of abstract thinking and conceptual thought involved in understanding the structure of a piece of music that, in some indirect way, can carry over to math and quantitative thinking,” Marzouk says.

Solving puzzles

In the summer before his junior year of high school, Marzouk took part in a program that placed students in local university labs as summer interns. He worked in a combustion lab at Washington University, studying the physical and chemical interactions involved in producing flames.

“It was my first exposure to mechanical engineering,” Marzouk says. “There were high temperatures, flames, blowing things up — what more could you want?”

He liked the work so much that he continued participating in the lab for the next two and a half years. After graduating high school, he decided to study mechanical engineering at MIT, where he found an immediate connection during Campus Preview Weekend.

“People were down to earth; they were excited about what they were doing. It was very relatable and full of techie people I could talk to,” Marzouk says. “People wanted to work with and help each other to solve puzzles. I think this is still true of the students I see today.”

As an undergraduate, he pursued a degree in mechanical engineering with a concentration in physics and a minor in music. He also took part in the Undergraduate Research Opportunity Program (UROP), working in the lab of mechanical engineering professor Doug Hart, where he first learned to develop algorithms to track the direction of vortices in a fluid flow.

That experience helped steer Marzouk toward more computational research. He continued his master’s and PhD work at MIT under the guidance of mechanical engineering professor Ahmed Ghoniem, concentrating again in the field of fluid dynamics but this time from a modeling perspective. For his thesis, he developed a computational model of how a jet of fluid mixes with a cross-flow, yielding new insights useful for improving the design of jet engines.

Uncertain knowledge

After receiving his PhD, Marzouk spent four years working as a postdoc and staff member at Sandia National Laboratories, developing computational models to simulate processes in the physical world. Specifically, he looked for ways to model subsurface flows of radioactive waste and other contaminants that seep through soil and rocks — information that is crucial for determining where and how to clean up contamination and prevent its leakage into aquifers. He soon found that modeling such physical systems was a daunting task with countless unknowns.

“I started getting interested in what aspects of your computational prediction should you actually believe, and what parts are not so reliable or trustworthy?” Marzouk says.

For instance, he says a model’s mathematical formulas that should characterize a certain relationship, such as a porous medium’s response to a given pressure, may not be “perfectly predictive.” Data from actual subsurface measurements can help refine the model, but the number of measurements that can be practically made are limited. Uncertainty, therefore, is intrinsic to both building models and drawing conclusions from data.

Marzouk surmised that if researchers could quantify the uncertainty in a model and a dataset, they could begin to reduce that uncertainty, to produce a more accurate prediction of the state of a physical system. To do this, he dove into Bayesian statistics, a philosophy of statistics which represents the state of the world in terms of probability, or degrees of uncertainty.
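The Bayesian idea of representing what we know as a probability distribution, then sharpening it as measurements arrive, can be illustrated with a minimal one-parameter sketch. The flat prior, Gaussian noise model, and measurement values here are invented for illustration:

```python
import math

# Grid-based Bayesian update for one uncertain model parameter.
# Prior, noise model, and measurements are invented for illustration.
grid = [i * 0.01 for i in range(0, 201)]      # candidate values 0.00 .. 2.00
prior = [1.0 / len(grid)] * len(grid)         # flat prior: maximal uncertainty

def likelihood(theta, y, sigma=0.1):
    """Gaussian measurement model: y = theta + noise."""
    return math.exp(-0.5 * ((y - theta) / sigma) ** 2)

posterior = prior
for y in [1.02, 0.97, 1.05]:                  # three noisy measurements
    posterior = [p * likelihood(t, y) for p, t in zip(posterior, grid)]
    z = sum(posterior)
    posterior = [p / z for p in posterior]    # renormalize to a distribution

mean = sum(t * p for t, p in zip(grid, posterior))
var = sum((t - mean) ** 2 * p for t, p in zip(grid, posterior))
```

Each measurement shrinks the posterior variance: uncertainty is never eliminated, but it is quantified and reduced, which is the essence of the approach.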

“Uncertainty can still encode knowledge,” says Marzouk, who continued this line of work, known as uncertainty quantification, when he accepted a faculty position in MIT’s Department of Aeronautics and Astronautics in 2009.

Modeling a data stream

At MIT, Marzouk has been developing methods to quantify and reduce uncertainty in computational models. He’s also finding ways to identify the best quantities to measure in order to improve a model’s prediction.

“You might have thousands of uncertain parameters, and you could boil them down to maybe 20 that matter and that are informed by data, which makes the problem much more tractable,” Marzouk says.
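That kind of parameter screening can be sketched with a toy linear model in which only a handful of inputs actually influence the output; the dimensions, weights, and one-at-a-time perturbation scheme here are invented for illustration:

```python
import random

# Toy parameter screening: rank many inputs by how much they move the
# output, then keep only the influential few. Model and sizes are made up.
random.seed(0)
n = 1000
weights = [0.0] * n
for i in random.sample(range(n), 20):        # only 20 inputs truly matter
    weights[i] = random.uniform(1.0, 2.0)

def model(x):
    return sum(w * xi for w, xi in zip(weights, x))

base = [0.0] * n
f0 = model(base)
sensitivity = []
for i in range(n):
    x = base[:]
    x[i] = 1e-3                              # perturb one input at a time
    sensitivity.append(abs(model(x) - f0))

keep = sorted(range(n), key=lambda i: sensitivity[i], reverse=True)[:20]
```

In this linear toy the screen recovers exactly the 20 influential inputs; real models are nonlinear and expensive to evaluate, which is what makes the dimension-reduction methodology nontrivial.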

He’s applied his methods to a number of wide-ranging, high-dimensional problems, from subsurface flow and combustion in jet engines, to estimating the concentration of gases throughout Earth’s atmosphere.

“I’m particularly interested in geophysical phenomena which are complex and where data may be very expensive to acquire,” Marzouk says. “There’s a lot of uncertainty, and characterizing that uncertainty is important.”

In the coming years, he plans to nail down and reduce the sources of uncertainty in continuously changing models, such as weather forecasting tools, which are constantly updated with a glut of streaming data from ground, air, and space sensors.

“We know mathematically how to formulate rigorous predictions of uncertainty,” Marzouk says. “But doing that for a system the size of the weather is hopelessly out of reach. Our algorithms are giving rise to a new class of approximations that might make uncertainty quantification for these kinds of problems more tractable.”

May 12, 2017 | More

Teaching robots to teach other robots

A team under professor of aeronautics and astronautics and LGO thesis advisor Julie Shah developed a system that enables users to teach robots skills that can be automatically transferred to other robots.

Most robots are programmed using one of two methods: learning from demonstration, in which they watch a task being done and then replicate it, or via motion-planning techniques such as optimization or sampling, which require a programmer to explicitly specify a task’s goals and constraints.

Both methods have drawbacks. Robots that learn from demonstration can’t easily transfer one skill they’ve learned to another situation and remain accurate. On the other hand, motion planning systems that use sampling or optimization can adapt to these changes but are time-consuming, since they usually have to be hand-coded by expert programmers.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently developed a system that aims to bridge the two techniques: C-LEARN, which allows noncoders to teach robots a range of tasks simply by providing some information about how objects are typically manipulated and then showing the robot a single demo of the task.

Importantly, this enables users to teach robots skills that can be automatically transferred to other robots that have different ways of moving — a key time- and cost-saving measure for companies that want a range of robots to perform similar actions.

“By combining the intuitiveness of learning from demonstration with the precision of motion-planning algorithms, this approach can help robots do new types of tasks that they haven’t been able to learn before, like multistep assembly using both of their arms,” says Claudia Pérez-D’Arpino, a PhD student who wrote a paper on C-LEARN with MIT Professor Julie Shah.

The team tested the system on Optimus, a new two-armed robot designed for bomb disposal that they programmed to perform tasks such as opening doors, transporting objects, and extracting objects from containers. In simulations they showed that Optimus’ learned skills could be seamlessly transferred to Atlas, CSAIL’s 6-foot-tall, 400-pound humanoid robot.

A paper describing C-LEARN was recently accepted to the IEEE International Conference on Robotics and Automation (ICRA), which takes place May 29 to June 3 in Singapore.

How it works

With C-LEARN, the user first gives the robot a knowledge base of information on how to reach and grasp various objects that have different constraints. (The C in C-LEARN stands for “constraints.”) For example, a tire and a steering wheel have similar shapes, but to attach them to a car, the robot has to configure its arms differently to move them. The knowledge base contains the information needed for the robot to do that.

The operator then uses a 3-D interface to show the robot a single demonstration of the specific task, which is represented by a sequence of relevant moments known as “keyframes.” By matching these keyframes to the different situations in the knowledge base, the robot can automatically suggest motion plans for the operator to approve or edit as needed.
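The matching step can be caricatured in a few lines of code. The template names, fields, and objects below are invented for illustration and do not reflect the actual C-LEARN implementation:

```python
# Caricature of matching demonstrated keyframes against a knowledge base
# of manipulation templates. Names and fields are invented; this is not
# the actual C-LEARN implementation.
knowledge_base = {
    "axial_grasp": {"hand_orientation": "perpendicular"},  # e.g. a wheel face
    "side_grasp": {"hand_orientation": "parallel"},        # e.g. a tire tread
}

demo_keyframes = [
    {"object": "wheel", "hand_orientation": "perpendicular"},
    {"object": "tire", "hand_orientation": "parallel"},
]

def match(keyframe):
    """Return the first template whose constraint the keyframe satisfies."""
    for name, template in knowledge_base.items():
        if template["hand_orientation"] == keyframe["hand_orientation"]:
            return name
    return None

plan = [match(kf) for kf in demo_keyframes]
```

Because the plan is expressed in terms of templates rather than joint angles, the same matched sequence can be re-solved for a robot with different kinematics, which is what enables skill transfer between robots.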

“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Pérez-D’Arpino. “We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”

One challenge was that existing constraints that could be learned from demonstrations weren’t accurate enough to enable robots to precisely manipulate objects. To overcome that, the researchers developed constraints inspired by computer-aided design (CAD) programs that can tell the robot if its hands should be parallel or perpendicular to the objects it is interacting with.
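A parallel-or-perpendicular constraint of this kind reduces to a simple geometric test on direction vectors. Here is a minimal sketch; the tolerance value is an arbitrary assumption:

```python
# Geometric test behind "parallel or perpendicular" constraints: compare
# the hand axis and the object axis via their dot product.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

def alignment(u, v):
    """Cosine of the angle between two 3-D direction vectors."""
    return dot(u, v) / (norm(u) * norm(v))

def is_parallel(u, v, tol=0.05):
    # |cos| near 1 means the axes point along the same line
    return abs(abs(alignment(u, v)) - 1.0) < tol

def is_perpendicular(u, v, tol=0.05):
    # cos near 0 means the axes are at right angles
    return abs(alignment(u, v)) < tol
```

A motion planner can then reject candidate arm configurations that violate the declared constraint, rather than trying to infer millimeter-precise poses from a single demonstration.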

The team also showed that the robot performed even better when it collaborated with humans. While the robot successfully executed tasks 87.5 percent of the time on its own, it did so 100 percent of the time when it had an operator that could correct minor errors related to the robot’s occasional inaccurate sensor measurements.

“Having a knowledge base is fairly common, but what’s not common is integrating it with learning from demonstration,” says Dmitry Berenson, an assistant professor of computer science at the University of Michigan who was not involved in the research. “That’s very helpful, because if you are dealing with the same objects over and over again, you don’t want to then have to start from scratch to teach the robot every new task.”


The system is part of a larger wave of research focused on making learning-from-demonstration approaches more adaptive. If you’re a robot that has learned to take an object out of a tube from a demonstration, you might not be able to do it if there’s an obstacle in the way that requires you to move your arm differently. However, a robot trained with C-LEARN can do this, because it does not learn one specific way to perform the action.

“It’s good for the field that we’re moving away from directly imitating motion, toward actually trying to infer the principles behind the motion,” Berenson says. “By using these learned constraints in a motion planner, we can make systems that are far more flexible than those which just try to mimic what’s being demonstrated.”

Shah says that advanced learning-from-demonstration (LfD) methods could prove important in time-sensitive scenarios such as bomb disposal and disaster response, where robots are currently tele-operated at the level of individual joint movements.

“Something as simple as picking up a box could take 20-30 minutes, which is significant for an emergency situation,” says Pérez-D’Arpino.

C-LEARN can’t yet handle certain advanced tasks, such as avoiding collisions or planning for different step sequences for a given task. But the team is hopeful that incorporating more insights from human learning will give robots an even wider range of physical capabilities.

“Traditional programming of robots in real-world scenarios is difficult, tedious, and requires a lot of domain knowledge,” says Shah. “It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration. This is an exciting step toward teaching robots to perform complex multiarm and multistep tasks necessary for assembly manufacturing and ship or aircraft maintenance.”

May 11, 2017 | More

Economic Tectonics Episode 5: Technology

Andrew McAfee (LGO ’90) – a tech optimist – explores how he thinks technology could change our economic futures for the better.

April 19, 2017 | More

A toolset for getting stuck conversations back on track

Jason Jay, an LGO thesis advisor, Senior Lecturer at the MIT Sloan School of Management, and Director of the Sustainability Initiative at MIT Sloan, explains how to rethink and reboot the conversations holding you back.

“Understand what the other person is for — not what they’re against,” suggests MIT Sloan Senior Lecturer Jason Jay.

As anyone who’s argued with a colleague — or simply tried to persuade their spouse to unload the dishwasher —  knows, our worlds are rife with disagreements that go nowhere. MIT Sloan Senior Lecturer Jason Jay calls these ruts “gridlock.”

Jay co-authored the upcoming book “Breaking Through Gridlock: The Power of Conversation in a Polarized World” to transform these disagreements into progress.

His book is based on conversation workshops focusing on social change, refined over the course of several years and run alongside co-author Gabriel Grant, a founder of the social-change-focused Byron Fellowship. Together, they’ve coached Fortune 500 companies, small businesses, and students on the power of authentic, effective dialogue.

Here, Jay, the director of the MIT Sloan Sustainability Initiative, explains how you can get yourself unstuck. “Breaking Through Gridlock” is out May 22.

Why do you use the term “gridlock” in the title, something that’s typically associated with traffic, not conversations?

Breaking through “gridlock” applies to when conversations get stuck. We offer a toolset for helping conversations get back into gear and back on track. The metaphor is really designed to capture what it feels like when we have an agenda that we’re trying to advance, whether it’s in our organization, community, family, or on the wider political stage, and we’re not getting where we want to go.

What types of “gridlock” exist in conversations?

Broadly speaking, there are two kinds of getting stuck: One is that you’re avoiding a conversation, even though you want to advance an agenda, because it seems like your perspectives are too different, with too much potential conflict. The second kind is when it turns into a debate over who’s right, whose facts are right, and everyone is walking away entrenched in their own points of view. Therefore, you haven’t gotten where you want to go.

What’s usually at the root of conversational conflict?

Our book is organized around six steps. The first step is reflecting on your own internal monologue. What’s the baggage that you bring into a conversation? You could have the best talking points prepared, but if what’s going through your mind is, “I’m right, and they’re wrong,” it will bleed through. Often people think they’re being passionate and clear, but come off as arrogant and preachy.

The next step is to locate the bait. We fall into traps because there’s bait — there’s some benefit to being stuck. Why would you want to be stuck? Even when conversations get stuck, when we walk away, we get to feel right, righteous, and certain in our own perspective. We stay safe in our own unchallenged worldview. We retreat to a safe group of people who agree with us, and we can go back to preaching to the choir.

How can people break out of gridlock?

Dare to share. Get over “winning” and think about what you want out of the relationship. Focus on asking questions and understanding. If there’s a choice you don’t agree with, ask, “What inspires you to make that choice?” Understand it and acknowledge the differences. For example, my [conservative] cousin and I spend a good amount of conversation talking about where we get our news. Not in a tone of, “Your facts are wrong,” but recognizing we live in a culture where people live in bubbles, and we interact with people who only agree with us. So talk about values instead of facts; talk about personal values and perspectives, as opposed to just iterating talking points. All of that stuff creates space for difference.

Understand what the other person is for — not what they’re against. Say I’m an advocate for a renewable energy strategy, and I want my company to go carbon neutral.

If my CFO is pushing back because it’s too expensive, my tendency is to say, “He’s against it; I’m for it.” But what does he stand for? It’s wise allocation of company resources and economic sustainability of the company.

How do you reboot a conversation gone awry?

Two ways. An “apology” is where you say, “You know what? I’m responsible for your background conversation. I’ve come into your office five times now with ideas I was really passionate about but didn’t think through. I want to explore something new, a financially exciting approach to doing renewable energy. Here’s what it would look like, and I tried my best to run the numbers.”

Or use “contrast.” Say, “I’m passionate about what we do, and you might expect me to do such-and-such, and you wouldn’t be wrong. But that’s not what I’m doing today. I’m bringing you something that will save you money while moving you toward renewable energy, and I’d love to learn from your perspective, too.”

Which companies do a good job of communicating this way?

We really like Patagonia. They’re a leader in sustainability, but they don’t show up saying, “We’re the best.” It’s the opposite: They start a conversation about their footprint by saying, “Look how bad we are. These are all the ways we say we care, but we still have these challenges in our supply chains.” It’s this simultaneous ambition to make a difference with humility about where they are, and this has been very effective. They show up not as self-righteous and hot, but as, “We’re on this adventure together.”

What’s the goal of your work?

There’s a goal for the reader and a goal for our wider society. The goal for the reader is that we want to create stronger relationships and creative solutions to problems you care about, with people you didn’t think you could work with. If a lot of people do that, and if organizations do this, social and political movements — which are often characterized by people preaching to the choir or burning out due to conflict or lack of progress — will be transformed.

What was your most surprising takeaway from the book?

How dramatically people can turn around relationships. We share a story in the preface about a young woman who had tried to change her mother’s eating habits to address her obesity. The two hadn’t shared a meal in over a year because it had gotten so contentious. When she took a new approach that was more compassionate and helpful — she ended up shopping and cooking with her mom — the two had shared a meal every night for two weeks by the time she reported back.

We also found that turning around a conversation gives people confidence to dive into bigger and higher-stakes contexts. One participant had to repair a friendship that had gotten frayed because of intense debate about climate change. After she had this experience, which included bringing her friend around on the issue, it gave her confidence to raise environmental issues with the Republican governor of her state — who now happens to be our vice president.

April 14, 2017 | More

Disorder can be good

A team under professor of aeronautics and astronautics and LGO thesis advisor Brian L. Wardle has found a tangible link between the random ordering of carbon atoms within a phenol-formaldehyde resin “baked” at high temperatures and the strength and density of the resulting graphite-like carbon material. Phenol-formaldehyde resin is a hydrocarbon commonly known as “SU-8” in the electronics industry. Additionally, by comparing the performance of carbon materials baked at different temperatures, the MIT researchers identified a “sweet spot” manufacturing temperature: 1,000 C (1,832 F).

“These materials we’re working with, which are commonly found in SU-8 and other hydrocarbons that can be hardened using ultraviolet [UV] light, are really promising for making strong and light lattices of beams and struts on the nanoscale, which only recently became possible due to advances in 3-D printing,” says MIT postdoc Itai Stein SM ’13, PhD ’16. “But up to now, nobody really knew what happens when you’re changing the manufacturing temperature, that is, how the structure affects the properties. There was a lot of work on structure and a lot of work on properties, but there was no connection between the two. … We hope that our study will help to shed some light on the governing physical mechanisms that are at play.”

Stein, who is the lead author of the paper published in Carbon, led a team under professor of aeronautics and astronautics Brian L. Wardle, consisting of MIT junior Chlöe V. Sackier, alumni Mackenzie E. Devoe ’15 and Hanna M. Vincent ’14, and undergraduate Summer Scholars Alexander J. Constable and Naomi Morales-Medina.

March 21, 2017 | More

MBAs in space: rocket science absorbs business school thinking

“I’m trying to get more technical and business education to transition into the space industry,” says Barret Schlegelmilch (LGO ’18), a former submarine officer in the US Navy, who is pursuing an MBA at the same time as a masters of science in astronautical and space engineering.

February 21, 2017 | More

Learning: the key to continuous improvement

Steven Spear is an LGO thesis advisor and Senior Lecturer at the MIT Sloan School of Management. Certain companies continually deliver more value to the market. They do so with greater speed and ease than their rivals, even when they lack the classic elements of strategic advantage: locked-in customers, dependent suppliers and barriers that keep competitors at bay. Absent such structural advantages, you would expect parity. There are, however, still those companies that regularly outscore the competition. Toyota, Intel, and Apple are among them, as are many lesser-known but no less disproportionately successful ventures.

The source of uneven outcomes on otherwise level playing fields? Learning, at which the very best organizations excel. They are far faster and better at discovering what to do and how to do it, as well as at refreshing the set of problems to be solved and solutions to be delivered faster than the ecosystem can render their relevance obsolete.

For sure, learning is not simply training. Training involves accepted skills with an accepted application, and then using an accepted approach to deliver those skills to the organization. Learning, on the other hand, involves converting ignorance and a lack of capacity into knowledge, new skills and understanding. It requires recognizing what you do not know and finding new approaches to solve new problems. This, in turn, requires critical thinking and a willingness to challenge accepted practices, even when those practices are perceived as successful.

Challenge—even respectful challenge—is not a natural act. When something has worked well, complacency and inertia accumulate and interests get vested in sustaining what is familiar, even if it is not optimal. Challenging historical approaches goes along with challenging the emotions, status and prestige associated with those approaches. That is not typically welcome.

For aspiring leaders to overcome inertia, as well as to realize and capitalize on the innate potential of those they wish to lead, they must embrace a two-pronged approach. First, they need to cultivate a sense of dissatisfaction with current practices, actively encourage paranoia about the status quo and incite a spirit of relentlessly seeking flaws. Second, they must make this constant challenge both respectful and safe, communicating the expectation that associates at all levels identify problems, try new approaches and evaluate those approaches based on both the results and the discipline and speed with which insights are generated.

This is a skunkworks approach, not a tactic isolated to a few top projects given to an elite group of researchers. It is everyone striving ahead on the work that is within their control and subject to their influence, so that both the pieces and the whole get better together.

Successful practitioners of high-velocity learning have made it a fundamental part of leadership to develop less-experienced associates’ ability to actively convert experiences into bona fide learning. A problem-solving/learning dynamic is broadly diffused throughout the enterprise. These organizations have expanded our typical concept of “the knowledge worker” from doctors, scientists and IT staff to the people wearing hard hats, coveralls and khakis.

Global manufacturer Alcoa enjoyed a profound transformation by embracing this approach. For example, when an Alcoa manager new to a recently acquired facility formed a quality and safety committee, he chose to depend on unionized workers. Previously, these employees expressed their insights by filing labor grievances, because more genteel methods of calling out issues were ignored and diminished.

This led the company to implement a system for all workers to easily document practices that led to injuries or beneficial outcomes. By changing practices based on workers’ insights, the risk of job injury collapsed from 2 percent to 0.07 percent. Costs dropped, and productivity soared. The company’s stock, a mainstay of the Dow Jones Industrial Average, started tracking the NASDAQ—the domicile of dot-coms, high-techs and other ventures valued for what they know and what they are expected to invent.

Alcoa’s success reflects the essence of high-velocity learning: By motivating and enabling all employees to challenge the norm, organizations can realize competitive advantage.

Steven Spear is a Senior Lecturer at the MIT Sloan School of Management and at the Engineering Systems Division at MIT.

February 9, 2017 | More

Featured video: MIT Hyperloop

A team of MIT students, including LGOs, are competing in the SpaceX Hyperloop Pod Competition in California.

January 25, 2017 | More

MIT Students Tour Pratt & Whitney’s Columbus Facility

A group of more than 50 students and faculty members from MIT’s Leaders for Global Operations program toured the Columbus Engine Center on January 9 to experience what it’s like to work in a high-tech manufacturing business.

January 11, 2017 | More


Five ways to prepare tech employees for the future of work

A warehouse employee works with an OTTO 1500, a self-driving robot that moves pallets, racks, and other large payloads.

There’s no question that artificial intelligence, self-driving cars, chatbots, devices connected to the Internet of Things, and other rapidly advancing technologies all pose threats to the jobs that make up a large chunk of the global economy — and to the livelihood of the men and women who hold those jobs.

Finding a way to mitigate the impact of these digital transformations on workers as well as businesses was a key theme of the May 24 MIT Sloan CIO Symposium. Here are five ideas from IT and strategy executives who have led the way in digital innovation without leaving their employees behind.

Pursue business platforms and services enabled by technology
This is the holy grail, said Lucille Mayer, head of client experience delivery and global innovation at BNY Mellon. “You have to change behavior so that tech is the business, a partner at the table to … transform the business model and become more digital.”

Making this change happen requires a cultural shift that starts with executive leadership and product line CEOs, Mayer said. At BNY Mellon, “all the product lines had to come up with their own digital transformation roadmap” to create consumable services and “get the biggest bang for their platform.” This initiative helped the investment firm shift its offerings from siloed products to a portfolio of products, which provided a more seamless customer experience, she said.

General Electric Co. vice president and CIO Jim Fowler discusses how technology drives process at the May 24 MIT Sloan CIO Symposium. Photo: Kent Reichert

General Electric Co., on the other hand, built a platform that connects its machines, its enterprise customers, and GE employees, said vice president and CIO Jim Fowler. This means that business processes are now tied to a machine (and not a person) issuing a material replenishment order or maintenance request. When pursuing such technology initiatives, he said, it’s important to ask, “How do we do this and add value to the customer?”

Augment, don’t automate
One of the biggest questions about the technology’s impact on the future of work is whether AI and machine learning will complement human workers or replace them outright. While some jobs will in fact become automated — sorry, paper-pushers — those roles that require supervision, initiative, and working with complex equipment will be augmented by AI, said George Westerman, principal research scientist for the MIT Initiative on the Digital Economy.

AI can be used in several ways to improve employee productivity. “Take the intelligence of the best knowledge worker who just nails it every time,” said Ross Meyercord, executive vice president and CIO at Salesforce. “How can the machine understand the processes that person does and raise the output of the average worker?”

For example, Cogito’s emotional intelligence software can analyze call center conversations and determine whether agents talk too fast or interrupt callers, CTO Ali Azarbayejani said. This real-time feedback helps agents learn on the job and better serve callers, he said — and it gives a company a competitive edge in customer service.
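To illustrate one signal such a tool might compute, a talk-rate check could be as simple as the sketch below. Cogito’s actual models are proprietary; the function name and the 160-words-per-minute threshold here are invented for illustration.

```python
# Illustrative only: one signal an agent-feedback tool might compute.
# Cogito's real analysis is proprietary; the threshold is invented.
def talk_rate_flags(words: int, seconds: float, max_wpm: float = 160.0):
    """Return (words-per-minute, warning-or-None) for a speech segment."""
    wpm = words / (seconds / 60.0)
    warning = "slow down" if wpm > max_wpm else None
    return wpm, warning

wpm, warning = talk_rate_flags(words=110, seconds=30.0)
print(f"{wpm:.0f} wpm -> {warning}")  # 220 wpm -> slow down
```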

Focus on problem-solving
Fowler said he was taught that process drives technology, but said that’s no longer true. When AI and other technologies drive process, workers have little choice but to take on different roles. Instead of processing transactions, workers must get used to forming and disbanding teams that use the data presented to them to solve problems. “That’s the future of work,” he said.

This forces workers to be agile, Meyercord said: “Don’t be stuck to your existing roadmap, that North Star direction.” The days of building a product once and getting it right are gone, he added. “It’s not that people had an end solution nailed at day one, but they had the ability to be open-minded, they have the metadata exhaust, and they iterate as they go.”

To embrace agility, organizations need to adopt what Meyercord described as a “citizen development model.” This flips the concept of shadow IT on its head, as it encourages ideas from the growing percentage of employees who work outside the IT department but nonetheless understand tech.

“What becomes interesting for our jobs is, how do you encourage grassroots innovation and standardize it to get to standard products and services for global processes?” he asked. “How do we bring them in and incubate the best ideas and bring them into production?”

Don’t set lofty expectations
When new technology comes along, hype is sure to follow. Executives need to set clear expectations about what a product can and cannot do.

OTTO Motors deploys fleets of self-driving vehicles in industrial settings such as warehouses. These autonomous fleets can deliver savings, since as much as 75 percent of the cost of any item is a “tax” on people moving it, but they can’t replace human drivers just yet, CTO and co-founder Ryan Gariepy said. With so many variables to consider, he added, “there will need to be interventions from time to time.”

Likewise, Cogito’s Azarbayejani said AI chatbots aren’t yet ready to solve the biggest problem facing call centers. Bots can handle simple customer interactions, but they aren’t yet set up to pull data from disconnected systems. Human agents are needed to mitigate those types of problems, he said.

Encourage people to learn
Robot drivers won’t be all over the road tomorrow, but it’s not unreasonable to predict that 90 percent of truck driving jobs will be eliminated within a generation, Gariepy said. To confront this, OTTO Motors’ customers teach their drivers to manage the robot fleets instead of simply moving materials.

Companies must be proactive about education and retraining, Gariepy added, saying that a laid-off truck driver may not be able to afford to go back to school.

Nor can white-collar workers remain complacent. As data is commoditized, and it becomes easier to build predictive models, “even I have to step up my game,” said Ernest Ng, Salesforce’s senior director of employee success strategy and people analytics. “Everybody needs to think of ways to grow.”

Health Management Systems does this by encouraging employees to pursue internal training opportunities, even if they aren’t directly related to their current role. One year into the program, nearly two-thirds of employees have participated, and many have offered to teach classes, said Cynthia Nustad, the company’s executive vice president and chief strategy officer.

“We want people to stay and unleash their passion to do what they want,” she said.

May 26, 2017 | More

Soft skills, partnerships needed to bridge economic divide

Year Up National Director Shawn Bohen and Alphabet Executive Chairman Eric Schmidt at an MIT discussion on closing the economic divide

Bridging the nation’s growing economic divide will require partnerships among businesses, governments, and colleges and universities, as well as investments in programs as diverse as early education, job training, family leave, and infrastructure.

It’s a tall order, but such work is critical to addressing labor shortages, skills gaps, and a lack of diversity in the science and technology fields, according to a group of experts who spoke May 3 at a panel hosted by the Inclusive Innovation Challenge within the MIT Initiative on the Digital Economy.

“The talent shortage drives everything,” said Eric Schmidt, executive chairman of Alphabet Inc., Google’s parent company. During the event, Schmidt announced that Google’s charitable organization is donating $500,000 to the Inclusive Innovation Challenge, which gives awards to organizations that use technology to create economic opportunities and redefine the future of work.

“Too often, we hear it’s a world without work, but it’s a dangerous and misleading meme, because there are tremendous opportunities to create work,” said MIT Sloan Professor Erik Brynjolfsson, director of the Initiative on the Digital Economy. The challenge is finding the right people for the right jobs — even so-called “middle-skilled” positions that require data input and processing knowledge.

MIT Sloan Professor Erik Brynjolfsson (left) with Massachusetts House Speaker Robert DeLeo

Address soft skills
One solution, Brynjolfsson said, is teaching the underlying cause and effect of economic and business case studies. The particular use case may not be relevant in five years, he added, but the principles behind it will be.

This approach emphasizes critical thinking as opposed to rote memorization, and it helps address the need for soft skills that are necessary to succeed in today’s workforce — problem solving, creativity, collaboration, and analysis.

The process must begin early, Brynjolfsson said. Education gaps can appear as young as age 5 and only increase as children grow up, depending on where they live and go to school.

“Most employers say people are hired for skills and fired for attitude and behavior. It starts in preschool, learning how to play well with others,” said Shawn Bohen, national director for growth and impact at Year Up. The program pairs young adults from low-income neighborhoods with employer partners for one year of on-the-job training as well as education, helping companies find “untapped talent” among those with life experience but less exposure to traditional education, Bohen said.

At the state level, initiatives such as Massachusetts’ STEM Starter Academy pair community colleges with employers so that students who may not have considered careers in science, technology, engineering, and math know more about the opportunities available to them. Massachusetts House Speaker Robert DeLeo said those partnerships are critical to driving employment growth in the state.

Short-term pain, long-term gain?
In the short term, Schmidt said, the problem may get worse before it gets better. In the meantime, Alphabet has been training existing employees in artificial intelligence skills, as there are too few college graduates with the right data science background to work with increasingly sophisticated AI systems, he said.

Within the next five to 10 years, though, AI technology will evolve from analysis of inputted data to supervised and reinforcement learning. In other words, Schmidt said, workers with soft skills will soon be able to accomplish things that today require an advanced computer science degree.

“I would love everyone to become a PhD in computer science. It’s an unrealistic goal,” he said. “But a vast majority of ‘normal’ people will be able to program computer systems to do powerful things.”

May 24, 2017 | More

Many wrongs make it right

Productively wrong: Netflix’s original model was the wrong one for growth, but it was essential to developing its now-thriving business.

In 1971, the environmental nonprofit Greenpeace formed and launched its first expedition. A ragged crew of friends and acquaintances decided to pilot a fishing boat from Vancouver, Canada, to the small island of Amchitka, Alaska, where the U.S. government was testing nuclear warheads. Greenpeace assumed that their presence would stall the tests. The crew raised money, set sail and, soon after departure, was intercepted by the Navy, which sent them home.

Everyone onboard lamented the failure until the boat pulled into harbor and, through media attention, public interest swelled. Greenpeace realized that a vocal public could be just as effective for protest as an anchored fishing boat. “Being wrong showed a path to success,” said Luis Perez-Breva, director of the MIT Innovation Teams Program at MIT Sloan. To put it another way: Greenpeace was productively wrong.


Perez-Breva, in his new book on innovating, uses this term to demonstrate how pursuing a thorough understanding of how ideas are wrong is necessary to successful innovating. (“Innovation,” he notes, is an outcome; “innovating” is the process.) Whether for entrepreneurs just getting started or for a well-established global business, the lesson holds. “We all love to be right,” he said. “But if we make it an operating principle to identify the ways that our ideas are wrong, we tend to learn more.”

Here are three reasons why it’s right to pursue what’s wrong.

The certainty of wrongness
One of the fundamental benefits of being wrong is its definitiveness. “When something is wrong, you know that with full certainty,” Perez-Breva said. “We don’t often realize that the alternative to being wrong is gambling.” Rightness cannot be proved; it is uncertain. We may believe we’re right about an idea, but we cannot know for sure.

Physicist Theodore Maiman, for instance, invented the first laser in 1960 by shining a lamp purchased from a photographic equipment catalog on a ruby rod. His colleagues told him it would never work. “But simply trying it out was easier, and better, than ‘being right,’” Perez-Breva said. “He started out thinking that he was wrong. It turned out everyone else was wrong.”

When pushing an idea to see if it’s wrong, Perez-Breva explained that one of two things might happen. “Either you’ll be wrong and find out about it, and so you fix it before you’ve spent an enormous amount of money, or, if the idea is stubbornly resistant to being proven wrong, then maybe there’s something to it.”

Being wrong is the opposite of failing 
While it’s easy to conflate being wrong and failure, Perez-Breva notes that the former is actually intended to prevent the latter. “I don’t believe you should fail,” Perez-Breva said. “Nobody wants to fail.”

Being productively wrong instead implies “running a story forward” to ferret out the weaknesses in an idea. He pointed to astronauts as exemplars of this strategy. Before a mission, astronauts don’t try to find the best way to stay alive in space; instead, they try to determine everything that might get them killed. Through this inquiry, they develop the tools to stay alive.

Though counterintuitive, any company can standardize this practice by flipping the way it approaches problems: rather than trying to find a single best answer, try to find all the pitfalls that need to be sidestepped.

“Throughout my work I’ve seen so many examples of failure that could have been avoided,” Perez-Breva said. In the business and nonprofit worlds, the cost of this avoidable failure extends beyond simply lost money. “It is a societal waste because those people in the company had energy, they had a desire to bring about change,” he said.

Taking wrongness to scale
In writing his book, Perez-Breva sorted through hundreds of examples of innovative companies. He found that those that succeed cannot be thought of as simply growing out from core competencies, but as “layering up,” he said. “What these companies did first becomes an innovation for what comes next.”

He used Netflix as an example: the company started as a DVD-by-mail service and has since added a streaming service and a production studio — two endeavors that have little in common with the original model. The DVD rental business was an essential way for Netflix to create revenue and learn — it was a prototype — but it was the wrong organization for acquiring new customers and growing. Netflix needed to correct inefficiencies and ultimately layer a new company on top of the old one.

“You can’t hack scale; you learn your way through it,” Perez-Breva said. “The ‘old organization’ teaches you, but is ill-suited to target the next scale. It is wrong. Everything that’s wrong about it tells you what you ought to solve for at the next scale.”

Paying attention to what’s wrong inside a company opens avenues to successful growth. “As an organization grows, some parts will be reorganized, some discarded, and what you end up with may have no obvious resemblance to the original organization,” Perez-Breva said. “This is how being productively wrong can be brought up to scale.”

May 24, 2017 | More

How to survive a hack: Management trumps technology

Email phishing scams and nefarious attachments are a fact of life in any industry. To fight back, build a culture of security, with employees who are cyber-prepared and cyber-resilient.

Signs in power plants, manufacturing facilities, and office buildings around the world remind employees that “Safety is our top priority.”

The same cannot be said for information security, a gap that reflects the fact that data breaches are often the result of, and exacerbated by, organizational and management issues rather than technical ones.

“We build a culture of safety, but not security,” said Dr. Keri Pearlson, executive director of MIT’s Interdisciplinary Consortium for Improving Critical Infrastructure Cybersecurity, or (IC)3. “People open email attachments without thinking, but they wouldn’t put their fingers in the gears. You need to make sure the people in your organization are cyber-prepared and cyber-resilient.”

The recent WannaCry ransomware attack illustrated this need all too well. The National Security Agency had previously identified a vulnerability and alerted the cybersecurity community to it. Yet just last week, the malware shut down public and private institutions around the world, including parts of the United Kingdom’s National Health Service. What’s worse, WannaCry targeted systems that had not been updated with the latest security patches, particularly older Windows technology that companies should have replaced long ago: Windows XP and Windows Server 2003.

When a hack becomes a breach
In today’s security environment, hacks are inevitable. In fact, large enterprises probably face hundreds of hack attempts at any one time, said Pearlson, who joined (IC)3 in January.

Anti-virus software, firewalls, and other technology can protect against hacks. The trouble begins when the hackers get in and a breach occurs — when an employee opens that email attachment, leaves important data unencrypted, or fails to upgrade network security.

The infamous 2005–2007 TJX breach, for example, was the result of substandard wireless LAN security that went undetected for 18 months. The 2013 Target breach, meanwhile, happened after employees ignored warning signs identified by a third-party software vendor. In the wake of the Target breach, both the CEO and CIO resigned.

Pearlson referred to the cybersecurity framework [PDF] developed by the National Institute of Standards and Technology, with five key steps — identify, protect, detect, respond, and recover — but added that response and recovery aren’t always adequately covered by a company’s cybersecurity plans.

“You have to do all of that, but a lot of companies don’t have sufficient plans to respond to or recover from a major incident,” said Pearlson, who will moderate a panel on effectively responding to such incidents at the May 24 MIT Sloan CIO Symposium.

Along with adopting the cybersecurity framework in full, executives should build a strong relationship with their cyberinsurers, Pearlson said. Insurers have developed an “ecosystem” to minimize the chances of attack, but also to effectively respond and recover. They can also advise executives about how to notify law enforcement — since that’s not always the first phone call to make, she said. “Hackers might go dark once they figure out that the authorities are involved.”

People prep and backup plans
One reason it’s important to plan for major incidents is that a lot of second- and third-order things can happen, Pearlson said. The tsunami that hit Japan in 2011 had such a devastating impact in large part because the Fukushima nuclear plant lost its primary and secondary power sources, which caused the cooling system failure that led to radiation exposure.

The (IC)3 aims to address the security of infrastructure — electricity grids, water and sewer systems, and so on. Right now, public and private entities may have plans in place to address power outages or mechanical failures that last a few days. A cyberattack on infrastructure, on the other hand, could knock these systems out for months, MIT Sloan Professor Stuart Madnick wrote in Harvard Business Review this month.

“A lot of the prep is people prep,” Pearlson said. “You have to assume that tech may or may not be available, so you need to have a backup plan just in case.”

In that sense, cybersecurity today is similar to the early days of the Internet and IT management. “It’s people, process, and technology,” she said. “You need people who manage, not who wring people’s necks if there’s a problem. You need a culture of security. You need to do an audit, and put things in place so people know what to do and what not to do.”

May 24, 2017 | More

Live Stream of “Strategic Analytics: Changing the Future of Healthcare,” May 25, 9 am ET

Join MIT Sloan and the Universidad de Chile virtually at a conference on May 25th in Santiago titled “Strategic Analytics: Changing the Future of Healthcare,” which aims to highlight the many ways in which data and analytics promise to transform the provision of healthcare. The conference is expected to draw hundreds of researchers and leaders from academia, health care, government, and industry.

MIT Sloan recently hosted a Twitter chat on the subject and previewed the conference in a blog post. You can join the live stream below:

May 23, 2017 | More

The Businesses That Platforms Are Actually Disrupting

Platforms are all the rage these days. Powered by online technologies, they are sweeping across the economic landscape, striking down companies large and small. Uber’s global assault on the taxi industry is well known. Many platforms, some household names and others laboring in obscurity, are doing the same in other sectors.

Surveying these changes, you might conclude that if your business isn’t a platform, you had better worry that one is coming your way. Everyone from automakers to plumbers should count their days as traditional businesses. And maybe you should jump on the platform bandwagon too. If it worked for Airbnb, why not you?

Based on our research into the wave of online platforms that have started in the last two decades, we don’t necessarily disagree. Traditional businesses should worry, and maybe they should think about platform strategies. But we think these conclusions are overwrought — and miss what’s really going on.

Read the full post at Harvard Business Review

David S. Evans is an economist, business adviser, and entrepreneur. He has done pioneering research into the new economics of multisided platforms. He is the co-author of Matchmakers: The New Economics of Multisided Platforms.

Richard Schmalensee is the Howard W. Johnson Professor of Management Emeritus and Professor of Economics Emeritus. 

May 18, 2017 | More


Reimagining Chile’s healthcare system: Harnessing the power of strategic analytics and Big Data to keep patients healthier for less money – Rafael Epstein, Marcelo Larraguibel, Lee Ullmann

Economic growth, urbanization, and rising affluence are having a profound impact on the health of Latin Americans. Very little of it is positive, especially in Chile.

While life expectancy has increased faster in Chile than in most OECD countries and income per person has quadrupled over the last quarter-century, great disparities continue to exist between the country’s public and private healthcare systems. Healthcare costs are skyrocketing and many of the country’s public hospitals—especially those in rural areas—face a shortage of general practitioners and family physicians.

The modern Chilean diet—made up largely of ultra-processed foods and sugary drinks—is taking a toll. One third of Chilean children are overweight or obese; one quarter of Chilean adults are in those categories. Chronic diseases, like diabetes, are increasingly prevalent. Stress-related disorders and mental illnesses are also on the rise, as are rates of alcoholism, tobacco use, and certain types of cancer. Over the last decade, suicide has been one of the top 10 causes of death in Chilean men.

Today’s statistics are bleak, but we have hope for tomorrow. Technological innovations and discoveries, powered by Big Data, hold enormous opportunities for Chile and Latin America overall. To explore this further, we are hosting a conference next month in Santiago—“Strategic Analytics: Changing the Future of Healthcare”—that aims to highlight the many ways in which data and analytics promise to transform the provision of healthcare. The conference is expected to draw hundreds of researchers and leaders from academia, health care, government, and industry.

Our agenda is ambitious. By combining MIT’s expertise in analyzing massive amounts of data and optimizing complex systems with Universidad de Chile’s path-breaking medical research and Virtus Partners’ strategic and operational insights, we aim to unravel the complicated underlying problems that plague the healthcare system.

Of course many countries—including the US—face healthcare challenges. Our hope is that this conference inspires engineers, medical professionals, economists, and technologists from all over the world to see the benefits of working together to improve human health. Our goal is simple: to keep patients healthier for less money.

Progress is afoot. At MIT, researchers have devised algorithms that boost treatment for certain diseases, including diabetes, using a combination of machine learning and electronic medical records. At a time when 1.7 million Chileans, or about 12.3% of the population, have diabetes, this research has important implications.
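The core idea behind that research is to match a new patient against similar historical cases in the medical records and favor the treatment that worked best for them. A toy nearest-neighbor sketch of that idea, with entirely invented patient data (the actual published algorithms are far more sophisticated), might look like:

```python
# Toy sketch of the idea described above: recommend the treatment that
# produced the best outcomes among the most similar historical patients.
# All records and features are invented; real EMR models are far richer.
from math import dist

# (features, treatment, outcome_score) -- higher outcome is better
history = [
    ((55, 7.2), "drug_A", 0.8),   # features are (age, HbA1c)
    ((60, 8.1), "drug_B", 0.6),
    ((52, 7.0), "drug_A", 0.9),
    ((70, 9.3), "drug_B", 0.7),
]

def recommend(patient, records, k=3):
    # k nearest historical patients by Euclidean distance in feature space
    nearest = sorted(records, key=lambda r: dist(patient, r[0]))[:k]
    # average outcome per treatment among those neighbors
    scores = {}
    for _, treatment, outcome in nearest:
        scores.setdefault(treatment, []).append(outcome)
    return max(scores, key=lambda t: sum(scores[t]) / len(scores[t]))

print(recommend((56, 7.4), history))  # drug_A
```

In practice the feature space would hold many clinical variables and the neighbor weighting would be tuned, but the matching-then-comparing structure is the same.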

The dawn of telemedicine—which enables doctors to monitor patients from afar—also holds promise, particularly for patients who live in remote areas. (Chile is a long and skinny country, and about 10% of the population lives in rural areas.) Researchers at the Universidad de Chile’s Medical Informatics and Telemedicine Center are using sensors and other devices to monitor patients’ blood pressure, heart rate, weight, and blood sugar levels from great distances. Technologists at the MIT Media Lab are finding new ways to apply emotion technology and wearable devices to help people with autism, anxiety, and epilepsy manage their symptoms.

Researchers are also finding new ways to contain medical costs. Using Big Data to measure returns on healthcare spending, economists are able to help hospitals uncover best practices and align incentives to improve the quality of the care they provide. This has special relevance to Chile. The country’s Fondo Nacional de Salud (FONASA) struggles with overwhelming management challenges and increasing costs. Meanwhile, access to high-quality technology and healthcare services is still limited to the wealthy.

The promise of Big Data is immense, but so, too, are its perils. Many questions remain: How do we ensure that patient data stays both confidential and secure? How do we safeguard against Big Data applications creating even more disparities between the rich and poor, and instead use it to build a more equitable healthcare system for all? And how should governments cope with managing the high costs of aging populations?

These are big challenges and nothing will be solved overnight. Our hope is that the conference will point to new ideas and solutions that improve patient health for generations to come.

Read the original blog post at El Mercurio.

Lee Ullmann is the Director of the MIT Sloan Latin America Office.

Rafael Epstein is the Provost of Universidad de Chile.

Marcelo Larraguibel is the Founder of Virtus Partners, the management consultancy, and an Advisory Council Member of the MIT Sloan Latin America Office (MSLAO).

May 17, 2017 | More

Digital transformation: MIT's Westerman shares new lessons

As a Principal Research Scientist with the MIT Initiative on the Digital Economy, George Westerman leads the group’s research portfolio on digital transformation. In his latest book, Leading Digital, Westerman and his colleagues studied more than 400 major organizations in traditional industries around the world to learn how they’re using digital technology to innovate.

May 17, 2017 | More

Digital business talent wars: MIT expert shares new strategies

Kristine Dery, research scientist for the MIT Sloan Center for Information Systems Research (CISR), has been exploring the evolving digital workplace at the annual MIT Sloan CIO Symposium for the past several years.

May 16, 2017 | More

Overcoming the culture of waste

One of the key messages in the Pope’s recent TED Talk was an entreaty to overcome the “culture of waste.” I wholeheartedly agree — this is a critical issue. The question is: how do we even begin to take on such a large problem?

The “culture of waste” can be viewed through many different lenses: moral, philosophical, societal — just to name a few. But in addition to these broader notions of waste, there is simply the mundane notion of trash. Although mundane, trash is omnipresent, and perhaps understanding our mentality towards it can yield insights into broader cultural issues on waste.

Embarking on the study of waste (of the trash kind) several years ago, I was surprised to find that most waste is generated on purpose. Aside from the trash that we discard as individuals (municipal solid waste), there is trash (industrial waste) generated by supply chain processes that make the products we use. It turns out that the amount of industrial waste is orders of magnitude greater than municipal solid waste, which is already staggering. Moreover, the generation of this kind of waste is codified in the processes we use to produce our goods.

Pick any product and look at how it is produced. You will find that along with the desired product, whether it be an automobile or a hamburger, the process that produced it also produced other stuff, which we generally refer to as waste. An industrial example is production of pig iron, a key ingredient for making steel. In the process of making pig iron, a waste stream of oxides and silicates called slag is generated.

Why do we call the pig iron “product” and the slag “waste”? The distinction is driven by the intent of the process designer. The process was designed with the intent to produce iron, so iron is the product and all other output is waste. This mindset has enormous impact on our approach to process design, and on the resulting waste of resources by those processes.

We value, and assign as “the product,” the item we intended to make. But it turns out that you can’t make just the product. There is always other stuff that comes out of the process. Even something as simple as making an apple pie in your own kitchen creates waste in the form of apple cores. A few apple cores are easily forgotten when that pie comes out of your oven, but scale that up to industrial manufacturing of pies and we end up with a massive amount of waste: apple cores, seeds, stems, and so on.

However, the apple core is not inherently useless — it contains nutrients and juice that can be productively used elsewhere. Relegating it to the status of waste is simply a matter of perspective.

This output-oriented approach to process design implicitly codifies the generation of waste into our production processes. Stated another way, every output-oriented process is designed to produce waste.

So how can we overcome this “culture of waste” that is designed into virtually all our production processes? At the risk of sounding simplistic — instead of an output-oriented approach, we can shift to an input-oriented approach. Inputs are the resources we use to create value. Instead of working backwards from the output that we think we want, start with the resources we have and work forward to use them most effectively to create value.

An example of how vastly the results of these two approaches can differ is the production of beef. An output-oriented approach would start with what we want to produce — beef — then construct a process using as few resources as possible to produce that output. The key step in producing beef is growing the cow, so we would design the most “efficient” process to make cows gain weight quickly: the feedlot. Cows are raised in confined lots and given ample access to specially designed feed that promotes muscle growth.

Confinement in the lot reduces the amount of land required and inhibits movement, which conserves energy that can then be used for weight gain. Measured by the cost of inputs required to produce a pound of beef, feedlots are incredibly efficient. They are also efficient at producing waste in the form of tons and tons of manure.

Consider the different outcomes that result with an input-oriented approach. Joel Salatin of Polyface Farm designed a process that also grows cows for the purpose of beef production. However, instead of starting with the desired output (beef), Salatin began by assessing the resources at his disposal — primarily the land on his farm. Salatin could have used his land in any number of different ways, including clearing the land to grow corn. However, he recognized that if tended carefully, his land contained an engine for sustainable value creation: the soil.

Cow manure is toxic by the ton, but when incorporated at a reasonable rate into soil, manure is a valuable fertilizer. Recognizing this age-old fact and through careful study of inter-species symbiosis, Salatin designed a process to fully utilize his most valuable resource: his land. Salatin’s process orchestrates the growth of cows, chickens, and grass, where animal manure is used to enrich the soil which supports grass and foliage growth that in turn feeds the animals.

The movement of animals over the pasture is carefully sequenced and timed according to optimum grass height. Custom equipment using state-of-the-art farming technology was designed to facilitate efficient livestock movement. These and many other sophisticated techniques ensure that another kind of “efficiency” is at work here that fully utilizes resources, and in the process, honors those resources for the value they create.

What is the result of Salatin’s process? Over time, his engine for value creation — the soil — becomes richer and supports more vegetation, which means his land can then support more livestock.

The output-oriented process approach of feedlots produces an intended product (beef) and waste (manure). The input-oriented process approach of Salatin recognizes and harnesses the multi-faceted characteristics of each resource to produce cows, chickens, eggs, and fertile soil. Very little is wasted because each resource is appreciated and utilized in its entirety.

It may be tempting to dismiss Polyface as an anomaly that is possible only in agriculture, but we also have examples of input-oriented thinking in industrial manufacturing. The pig iron example from above? It turns out that the slag “waste stream” can be used to make high-quality Portland cement. How did this discovery come about? Gordon Forward, CEO of Chaparral Steel, challenged his organization to shift its mindset from, “We make steel,” to “We have particular raw material and technology resources, what can we do with them?”

As these examples illustrate, an input-oriented mindset can be powerful in many types of organizations from manufacturing and agriculture to community and non-profit organizations. A good way to recognize an opportunity for input-oriented process design is to look for waste streams. Whether it is material, energy, or labor, when you see waste streams, that means valuable resources are being wasted.

At the heart of an input-oriented approach to process design is an appreciation of resources. The word “waste” can either be a noun or a verb. The difference between the two lies in the attribution of fault, and the distinction is critical to how we overcome the “culture of waste.” The noun “waste” attributes the fault to the item itself. The verb “to waste” attributes the fault to the party who neglects to appreciate the value of the item. By focusing on the latter, we can begin to tackle our “culture of waste.”

Deishin Lee is a Visiting Assistant Professor at the MIT Sloan School of Management.

May 15, 2017 | More


Bacteria with multicolor vision

MIT researchers have engineered bacteria with “multicolor vision” — E. coli that recognize red, green, or blue (RGB) light and, in response to each color, express different genes that perform different biological functions.

To showcase the technology, the researchers produced several colored images on culture plates — one of which spells out “MIT” — by using RGB lights to control the pigment produced by the bacteria. Outside of the lab, the technology could also prove useful for commercial, pharmaceutical, and other applications.

The E. coli is programmed with a protein- and enzyme-based system, analogous to a computer chip, with several different modules to process the light input and produce a biological output. In computing terms, a “sensor array” first becomes activated in the presence of either red, green, or blue light, and a “circuit” processes the signal. Then, a “resource allocator” connects the processed information to “actuators” that implement the corresponding biological function.

Think of the new E. coli as microbial marionettes, with colored light instead of puppet strings making the bacteria act in a certain way, says MIT professor of biological engineering Chris Voigt, co-author of a paper in Nature describing the technology. “Using different colors, we can control different genes that are being expressed,” he says.

The paper’s co-authors are former postdocs Jesus Fernandez-Rodriguez, Felix Moser, and Miryoung Song.

Synthetic-biology innovation comes together

In 2005, Voigt, who co-directs the Synthetic Biology Center at MIT, and other researchers pioneered a “bacterial camera” by programming a light sensor into a strain of E. coli, along with a gene that produced black pigment. When light shone through a stencil onto a bacteria-coated plate, the microbes formed black-and-white images. At the time, this feat required only four genes and three promoters — regions of DNA that initiate gene transcription — to get the job done.

New synthetic biology tools, such as the genome-editing system CRISPR, have cropped up since then, opening broader possibilities to researchers. In contrast to the 2005 system, the new RGB system — the first to use three colors — consists of 18 genes and 14 promoters, among other parts, as well as 46,000 base pairs of DNA.

But with greater complexity come greater challenges. Because the researchers were dealing with a sensor array that could detect three separate colors, for instance, they had to include in the microbial program a protein that prevents gene transcription of the two unused sensors.

In computing terms, this is called a “NOT gate,” a circuit that produces an output signal — in this case, gene repression — only when there is not a signal on its input. With bacteria under a red light, for instance, the NOT gate would unleash that gene-repressing protein on the green and blue sensors, turning them off.

About five years ago, Voigt led a team that engineered microbes to respond to red and green light. Adding a third sensor was a major challenge of the new research. “Inside the cell, all the new protein sensors you add interfere with each other, because it’s all molecules bumping around the cell, and they all require keeping the cell alive and happy. With every additional sensor you add, that gets exponentially harder,” he says.

In that regard, Voigt adds, the system’s resource allocator, a new feature, also acts as a circuit breaker, shutting down the sensors if all three turn on at once, overloading the cell.
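The sensor, NOT-gate, and circuit-breaker behavior described above can be sketched at the logic level. This is a hypothetical illustration in software; the function and variable names are invented here, and the real system is of course implemented in DNA parts, not code.

```python
def sensor_array(lights):
    """Map a set of illuminating colors to sensor states.

    Each matching sensor turns on; a NOT gate represses the unused
    sensors; and if all three sensors would fire at once, the resource
    allocator trips like a circuit breaker and shuts them all down.
    """
    colors = {"red", "green", "blue"}
    active = colors & set(lights)
    if active == colors:          # overload: all three sensors on at once
        active = set()            # circuit breaker shuts the sensors down
    repressed = colors - active   # NOT-gate output: repression of unused sensors
    return {c: c in active for c in colors}, repressed

# Under red light, only the red sensor is on; green and blue are repressed.
states, repressed = sensor_array({"red"})
```

The point of the sketch is the two safeguards: repression is an active output (the NOT gate), not mere inactivity, and the allocator enforces a global shutdown rather than letting the cell be overloaded.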

From a genetic engineering perspective, the four-subsystem configuration was “the biggest impact of this work,” Voigt says. Each subsystem — the sensor array, circuits, resource allocator, and actuators — was designed, built, and optimized in isolation before being assembled into a final structure. This simplified, modular process could pave the way for more complex biological programming in the future, according to the researchers.

Generally speaking, Voigt sees the new system as a culmination of a decade of synthetic-biology innovations. “It’s a representation of where we are currently, and all the pieces that needed to come together over the last decade to create systems of this scale and complexity,” he says.

Making “disco bacteria”

To make the new color images, the researchers programmed bacteria to produce the same pigment as the red, green, or blue light shone upon them. In an incubator, the researchers coated a petri dish with bacteria that are genetically identical. “You can think of it like undeveloped film, where you have the petri dish with bacteria on it,” Voigt says, “and the camera is the incubator.”

At the top of the incubator is a hole, where a stenciled image is projected onto the plate. Over time, the bacteria grow, producing an enzyme that produces a pigment corresponding to whichever RGB color they’re illuminated by. In addition to the MIT logo, the researchers produced images of various patterns, multicolored fruit, and the video game character Super Mario.

The engineered bacteria could also be used to rapidly start and stop the chemical reactions of microbes in industrial fermentation processes, which are used to make pharmaceuticals and other products. Today, controlling such chemical reactions requires dumping different chemical additives into large fermenting vats, which is time-consuming and inefficient.

In their paper, the researchers demonstrated this “chemicals on-demand” concept on a small scale. Using CRISPR gene-editing tools, they modified three genes that produce acetate — a sometimes-unwanted byproduct of various bioprocesses — to produce less of the chemical in response to RGB lights.

“Individually, and in combination with one another, the different colors of light reduce acetate production without sacrificing biomass accumulation,” the researchers wrote in their paper.

Voigt has coined an amusing name for these industrial microbes. “I refer to them as ‘disco bacteria,’” he says, “because different colored lights are flashing inside the fermenter and controlling the cells.”

A future application, Voigt adds, could be in controlling cells to form various materials and structures. Researchers, including some at MIT, have started programming cells to assemble into living materials that one day could be used to design solar cells, self-healing materials, or diagnostic sensors.

“It’s amazing when you look at the world and see all the different materials,” Voigt says. “Things like cellulose, silk proteins, metals, nanowires, and living materials like organs — all these different things in nature we get from cells growing into different patterns. You can imagine using different colors of light to tell the cells how they should be growing as part of building that material.”

The research was funded by the National Science Foundation’s Synthetic Biology Engineering Research Center, the Office of Naval Research’s Multidisciplinary University Research Initiative, and the National Institutes of Health.

May 26, 2017 | More

Toward mass-producible quantum computers

Quantum computers are experimental devices that offer large speedups on some computational problems. One promising approach to building them involves harnessing nanometer-scale atomic defects in diamond materials.

But practical, diamond-based quantum computing devices will require the ability to position those defects at precise locations in complex diamond structures, where the defects can function as qubits, the basic units of information in quantum computing. In today’s issue of Nature Communications, a team of researchers from MIT, Harvard University, and Sandia National Laboratories reports a new technique for creating targeted defects, which is simpler and more precise than its predecessors.

In experiments, the defects produced by the technique were, on average, within 50 nanometers of their ideal locations.

“The dream scenario in quantum information processing is to make an optical circuit to shuttle photonic qubits and then position a quantum memory wherever you need it,” says Dirk Englund, an associate professor of electrical engineering and computer science who led the MIT team. “We’re almost there with this. These emitters are almost perfect.”

The new paper has 15 co-authors. Seven are from MIT, including Englund and first author Tim Schröder, who was a postdoc in Englund’s lab when the work was done and is now an assistant professor at the University of Copenhagen’s Niels Bohr Institute. Edward Bielejec led the Sandia team, and physics professor Mikhail Lukin led the Harvard team.

Appealing defects

Quantum computers, which are still largely hypothetical, exploit the phenomenon of quantum “superposition,” or the counterintuitive ability of small particles to inhabit contradictory physical states at the same time. An electron, for instance, can be said to be in more than one location simultaneously, or to have both of two opposed magnetic orientations.

Where a bit in a conventional computer can represent zero or one, a “qubit,” or quantum bit, can represent zero, one, or both at the same time. It’s the ability of strings of qubits to, in some sense, simultaneously explore multiple solutions to a problem that promises computational speedups.
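A minimal numeric illustration of superposition: a qubit state is a combination a|0⟩ + b|1⟩ whose squared amplitudes sum to one. The snippet below is purely pedagogical arithmetic; no quantum hardware or library is involved, and the variable names are this sketch's own.

```python
import math

a = b = 1 / math.sqrt(2)           # equal superposition of 0 and 1
p0, p1 = abs(a) ** 2, abs(b) ** 2  # probabilities of measuring 0 or 1
assert math.isclose(p0 + p1, 1.0)  # amplitudes are normalized
assert math.isclose(p0, 0.5) and math.isclose(p1, 0.5)
```

Until measured, the qubit carries both amplitudes at once, which is the property strings of qubits exploit to explore multiple solutions simultaneously.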

Diamond-defect qubits result from the combination of “vacancies,” which are locations in the diamond’s crystal lattice where there should be a carbon atom but there isn’t one, and “dopants,” which are atoms of materials other than carbon that have found their way into the lattice. Together, the dopant and the vacancy create a dopant-vacancy “center,” which has free electrons associated with it. The electrons’ magnetic orientation, or “spin,” which can be in superposition, constitutes the qubit.

A perennial problem in the design of quantum computers is how to read information out of qubits. Diamond defects present a simple solution, because they are natural light emitters. In fact, the light particles emitted by diamond defects can preserve the superposition of the qubits, so they could move quantum information between quantum computing devices.

Silicon switch

The most-studied diamond defect is the nitrogen-vacancy center, which can maintain superposition longer than any other candidate qubit. But it emits light in a relatively broad spectrum of frequencies, which can lead to inaccuracies in the measurements on which quantum computing relies.

In their new paper, the MIT, Harvard, and Sandia researchers instead use silicon-vacancy centers, which emit light in a very narrow band of frequencies. They don’t naturally maintain superposition as well, but theory suggests that cooling them down to temperatures in the millikelvin range — fractions of a degree above absolute zero — could solve that problem. (Nitrogen-vacancy-center qubits require cooling to a relatively balmy 4 kelvins.)

To be readable, however, the signals from light-emitting qubits have to be amplified, and it has to be possible to direct them and recombine them to perform computations. That’s why the ability to precisely locate defects is important: It’s easier to etch optical circuits into a diamond and then insert the defects in the right places than to create defects at random and then try to construct optical circuits around them.

In the process described in the new paper, the MIT and Harvard researchers first planed a synthetic diamond down until it was only 200 nanometers thick. Then they etched optical cavities into the diamond’s surface. These increase the brightness of the light emitted by the defects (while shortening the emission times).

Then they sent the diamond to the Sandia team, who had customized a commercial device called the Nano-Implanter to eject streams of silicon ions. The Sandia researchers fired 20 to 30 silicon ions into each of the optical cavities in the diamond and sent it back to Cambridge.

Mobile vacancies

At this point, only about 2 percent of the cavities had associated silicon-vacancy centers. But the MIT and Harvard researchers have also developed processes for blasting the diamond with beams of electrons to produce more vacancies, and then heating the diamond to about 1,000 degrees Celsius, which causes the vacancies to move around the crystal lattice so they can bond with silicon atoms.

After the researchers had subjected the diamond to these two processes, the yield had increased tenfold, to 20 percent. In principle, repetitions of the processes should increase the yield of silicon vacancy centers still further.

When the researchers analyzed the locations of the silicon-vacancy centers, they found that they were within about 50 nanometers of their optimal positions at the edge of the cavity. That translated to emitted light that was about 85 to 90 percent as bright as it could be, which is still very good.

“It’s an excellent result,” says Jelena Vuckovic, a professor of electrical engineering at Stanford University who studies nanophotonics and quantum optics. “I hope the technique can be improved beyond 50 nanometers, because 50-nanometer misalignment would degrade the strength of the light-matter interaction. But this is an important step in that direction. And 50-nanometer precision is certainly better than not controlling position at all, which is what we are normally doing in these experiments, where we start with randomly positioned emitters and then make resonators.”

May 26, 2017 | More

Faster, more nimble drones on the horizon

There’s a limit to how fast autonomous vehicles can fly while safely avoiding obstacles. That’s because the cameras used on today’s drones can only process images so fast, frame by individual frame. Beyond roughly 30 miles per hour, a drone is likely to crash simply because its cameras can’t keep up.

Recently, researchers in Zurich invented a new type of camera, known as the Dynamic Vision Sensor (DVS), that continuously visualizes a scene in terms of changes in brightness, at extremely short, microsecond intervals. But this deluge of data can overwhelm a system, making it difficult for a drone to distinguish an oncoming obstacle through the noise.

Now engineers at MIT have come up with an algorithm to tune a DVS camera to detect only specific changes in brightness that matter for a particular system, vastly simplifying a scene to its most essential visual elements.

The results, which they presented this week at the IEEE American Control Conference in Seattle, can be applied to any linear system that directs a robot to move from point A to point B as a response to high-speed visual data. Eventually, the results could also help to increase the speeds for more complex systems such as drones and other autonomous robots.

“There is a new family of vision sensors that has the capacity to bring high-speed autonomous flight to reality, but researchers have not developed algorithms that are suitable to process the output data,” says lead author Prince Singh, a graduate student in MIT’s Department of Aeronautics and Astronautics. “We present a first approach for making sense of the DVS’ ambiguous data, by reformulating the inherently noisy system into an amenable form.”

Singh’s co-authors are MIT visiting professor Emilio Frazzoli of the Swiss Federal Institute of Technology in Zurich, and Sze Zheng Yong of Arizona State University.

Taking a visual cue from biology

The DVS camera is the first commercially available “neuromorphic” sensor — a class of sensors that is modeled after the vision systems in animals and humans. In the very early stages of processing a scene, photosensitive cells in the human retina, for example, are activated in response to changes in luminosity, in real time.

Neuromorphic sensors are designed with multiple circuits arranged in parallel, similarly to photosensitive cells, that activate and produce blue or red pixels on a computer screen in response to either a drop or spike in brightness.

Instead of a typical video feed, a drone with a DVS camera would “see” a grainy depiction of pixels that switch between two colors, depending on whether that point in space has brightened or darkened at any given moment. The sensor requires no image processing and is designed to enable, among other applications, high-speed autonomous flight.

Researchers have used DVS cameras to enable simple linear systems to see and react to high-speed events, and they have designed controllers, or algorithms, to quickly translate DVS data and carry out appropriate responses. For example, engineers have designed controllers that interpret pixel changes in order to control the movements of a robotic goalie to block an incoming soccer ball, as well as to direct a motorized platform to keep a pencil standing upright.

But for any given DVS system, researchers have had to start from scratch in designing a controller to translate DVS data in a meaningful way for that particular system.

“The pencil and goalie examples are very geometrically constrained, meaning if you give me those specific scenarios, I can design a controller,” Singh says. “But the question becomes, what if I want to do something more complicated?”

Cutting through the noise

In the team’s new paper, the researchers report developing a sort of universal controller that can translate DVS data in a meaningful way for any simple linear, robotic system. The key to the controller is that it identifies the ideal value for a parameter Singh calls “H,” or the event-threshold value, signifying the minimum change in brightness that the system can detect.

Setting the H value for a particular system can essentially determine that system’s visual sensitivity: A system with a low H value would be programmed to take in and interpret changes in luminosity that range from very small to relatively large, while a high H value would exclude small changes, and only “see” and react to large variations in brightness.
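The role of the event threshold can be sketched as a simple filter over a stream of brightness-change events. The event format and numbers below are made up for illustration, and this is only the thresholding step; the paper's actual contribution is deriving the right H via H-infinity synthesis, which is not reproduced here.

```python
def filter_events(events, h):
    """Keep only events whose brightness-change magnitude is at least h.

    Each event is a (pixel, delta) pair. A low h admits small
    fluctuations (including noise); a high h keeps only large changes.
    """
    return [(px, d) for px, d in events if abs(d) >= h]

stream = [(0, 0.02), (1, -0.40), (2, 0.15), (3, -0.01)]
filter_events(stream, 0.1)  # low h keeps moderate and large changes
filter_events(stream, 0.3)  # high h keeps only the largest change
```

Choosing h is exactly the “sweet spot” problem described later: too low and the controller drowns in spurious events, too high and it misses obstacles.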

The researchers formulated an algorithm first by taking into account the possibility that a change in brightness would occur for every “event,” or pixel activated in a particular system. They also estimated the probability for “spurious events,” such as a pixel randomly misfiring, creating false noise in the data.

Once they derived a formula with these variables in mind, they were able to work it into a well-known algorithm known as an H-infinity robust controller, to determine the H value for that system.

The team’s algorithm can now be used to set a DVS camera’s sensitivity to detect the most essential changes in brightness for any given linear system, while excluding extraneous signals. The researchers performed a numerical simulation to test the algorithm, identifying an H value for a theoretical linear system, which they found was able to remain stable and carry out its function without being disrupted by extraneous pixel events.

“We found that this H threshold serves as a ‘sweet-spot,’ so that a system doesn’t become overwhelmed with too many events,” Singh says. He adds that the new results “unify control of many systems,” and represent a first step toward faster, more stable autonomous flying robots, such as the Robobee, developed by researchers at Harvard University.

“We want to break that speed limit of 20 to 30 miles per hour, and go faster without colliding,” Singh says. “The next step may be to combine DVS with a regular camera, which can tell you, based on the DVS rendering, that an object is a couch versus a car, in real time.”

This research was supported in part by the Singapore National Research Foundation through the SMART Future Urban Mobility project.

May 25, 2017 | More

T.W. “Bill” Lambe, professor emeritus of civil and environmental engineering, dies at 96

T. William “Bill” Lambe, professor emeritus in civil and environmental engineering, passed away on March 6. He was 96 years old.

Lambe SM ’44 PhD ’48 arrived at MIT to pursue graduate studies in civil engineering after a brief stint working in the engineering industry.

As a graduate student in 1945, Lambe began working as an instructor at MIT. By July 1959, he was a full professor in the Department of Civil and Environmental Engineering. He held the first Edmund K. Turner Professor of Civil Engineering professorship from 1969 until his retirement from teaching in June 1981.

Lambe’s research is remembered for its close relation to engineering practice, reflective of his own career path. His academic contributions to geotechnical engineering were fundamental and far-reaching, and included research on soil chemistry, soil stabilization and freezing, the stress path method, and the formalizing of geotechnical prediction. His prediction exercises are one instance of the overlap between engineering practice and academia.

His textbooks, “Soil Testing for Engineers,” published in 1951, and “Soil Mechanics,” co-authored with Robert Whitman and published in 1969, were also groundbreaking in the field.

Another example of Lambe’s ability to have research and practical engineering benefit from each other was the instrumentation of foundation work on multiple MIT buildings constructed during the building boom of the 1960s and for Boston-area subway construction. MIT geotechnical students were educated to become engineers through practice-oriented research and direct or indirect involvement in Lambe’s consulting projects.

Following his retirement from MIT, Lambe returned to the engineering industry, serving as a consultant on numerous international projects. These projects included landslides; earth dams for storage of oil, mining waste, and water; building foundations; foundations for an off-shore storm surge barrier; and hydraulic reclamation projects, among others. He remained active as a consultant until his early 90s.

Lambe was a member of the National Academy of Engineering, an honorary member of the American Society of Civil Engineers (ASCE), a fellow of the Institution of Civil Engineers, an honorary member of the Southeast Asian Society of Geotechnical Engineering, and an honorary member of the Venezuelan Society of Soil Mechanics and Foundation Engineering. His more than 100 publications earned him many awards including the ASCE’s highest award, the Norman Medal, in 1964; the ASCE Terzaghi Award in 1975; and the N.C. State University Distinguished Engineering Alumnus Award in 1982.

He is survived by five children: Philip and wife Catherine; Virginia and husband Robert Guaraldi; Richard and wife Michele; Robert and wife Judith; and Susan and husband Scott Clary, who live in North Carolina, New Hampshire, Washington, Massachusetts, and Virginia, respectively. His growing family includes 14 grandchildren and their six spouses, and seven great-grandchildren.

May 23, 2017 | More

Speeding up quality control for biologics

Drugs manufactured by living cells, also called biologics, are one of the fastest-growing segments of the pharmaceutical industry. These drugs, often antibodies or other proteins, are being used to treat cancer, arthritis, and many other diseases.

Monitoring the quality of these drugs has proven challenging, however, because protein production by living cells is much more difficult to control than the synthesis of traditional drugs, which typically consist of small organic molecules produced by a series of chemical reactions.

MIT engineers have devised a new way to analyze biologics as they are being produced, which could lead to faster and more efficient safety tests for such drugs. The system, based on a series of nanoscale filters, could also be deployed to test drugs immediately before administering them, to ensure they haven’t degraded before reaching the patient.

“Right now there is no mechanism for checking the validity of the protein postrelease,” says Jongyoon Han, an MIT professor of electrical engineering and computer science. “If you have analytics that consume a very small amount of a sample but also provide critical safety information about aggregation and binding, we can think about point-of-care analytics.”

Han is the senior author of the paper, which appears in the May 22 issue of Nature Nanotechnology. The paper’s lead author is MIT postdoc Sung Hee Ko.

A complicated process

Many biologics are produced in “bioreactors” populated by cells that have been engineered to produce large quantities of certain proteins such as antibodies or cytokines (a type of signaling molecule used by the immune system). Some of these protein drugs also require the addition of sugar molecules through a process known as glycosylation.

“Proteins are inherently more complicated than small-molecule drugs. Even if you run the same exact bioreactor process, you may end up with different proteins, with different glycosylation and different activity,” Han says.

Although manufacturers can monitor bioreactor conditions such as temperature and pH, which may warn of potential problems, there is no way to test the quality of the proteins until after production is complete, and that process can take months.

“At the end of that process, you may or may not get a good batch. And if you happen to get a bad batch, this means a lot of waste in overall manufacturing workflow,” Han says.

Han believed that nanofilters he had previously developed could be adapted to sort proteins by size as they flow through a tiny channel, which could allow for continuous, automatic monitoring as the proteins are produced. This size information can reveal whether the proteins have clumped together, which is a sign that the protein has lost its original structure.

After proteins enter the nanofilter array device, they are directed to one side of the wall. This narrow line of proteins then encounters a series of slanted filters with tiny pores (15 to 30 nanometers). The pores are designed so that smaller proteins will fit through them easily, while larger proteins will move along the diagonal for some distance before making it through one of the pores. This allows the proteins to be separated based on their size: Smaller proteins stay closer to the side where they started, while larger proteins drift toward the opposite side.
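The separation mechanism above can be rendered as a toy model. The pore cutoff and the one-lateral-step-per-row drift rule are simplifying assumptions made for this sketch, not the device's actual physics, and all names here are hypothetical.

```python
def lateral_position(mass_kda, pore_cutoff_kda, rows):
    """Return a protein's lateral exit position after the filter array.

    Proteins below the pore cutoff pass each slanted filter where they
    meet it (no drift); larger proteins drift one lateral step along the
    diagonal per filter row before passing, so the exit position
    encodes protein size.
    """
    steps_per_row = 0 if mass_kda <= pore_cutoff_kda else 1
    return steps_per_row * rows

# A 20 kDa monomer exits where it entered; a 40 kDa aggregate drifts
# across the array, landing near the opposite side.
lateral_position(20, pore_cutoff_kda=25, rows=10)
lateral_position(40, pore_cutoff_kda=25, rows=10)
```

Even this binary version captures the key idea: no labeling or imaging of individual proteins is needed, because position at the outlet is itself the size readout.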

By changing the size of the pores, the researchers can use this system to separate proteins ranging in mass from 20 to hundreds of kilodaltons. This allows them to determine whether the proteins have formed large clumps that could provoke a dangerous immune response in patients.

The researchers tested their device on three proteins: human growth hormone; interferon alpha-2b, a cytokine that is being tested as a cancer drug; and granulocyte-colony stimulating factor (GCSF), which is used to stimulate production of white blood cells.

To demonstrate the device’s ability to reveal protein degradation, the researchers exposed these proteins to harmful conditions such as heat, hydrogen peroxide, and ultraviolet light. Separating the proteins through the nanofilter array device allowed the researchers to accurately determine if they had degraded or not.

Sorting by size can also reveal whether proteins bind to their intended targets. To do this, the researchers mixed the biologics with protein fragments that the drugs are meant to target. If the biologics and protein fragments bind correctly, they form a larger protein with a distinctive size.

Rapid analysis

This nanofluidic system can analyze a small protein sample in 30 to 40 minutes, plus the few hours it takes to prepare the sample. However, the researchers believe they can speed that up by further miniaturizing the device.

“We may be able to do it in tens of minutes, or even a few minutes,” Han says. “If we realize that, we may be able to do real point-of-care checks. That’s the future direction.”

The research was funded by the Defense Advanced Research Projects Agency, SPAWAR Systems Center Pacific, and some authors were supported by a Siebel Fellowship and a Samsung Scholarship.

May 22, 2017 | More

Researchers design moisture-responsive workout suit

A team of MIT researchers has designed a breathable workout suit with ventilating flaps that open and close in response to an athlete’s body heat and sweat. These flaps, which range from thumbnail- to finger-sized, are lined with live microbial cells that shrink and expand in response to changes in humidity. The cells act as tiny sensors and actuators, driving the flaps to open when an athlete works up a sweat, and pulling them closed when the body has cooled off.

The researchers have also fashioned a running shoe with an inner layer of similar cell-lined flaps to air out and wick away moisture. Details of both designs are published today in Science Advances.

Why use live cells in responsive fabrics? The researchers say that moisture-sensitive cells require no additional elements to sense and respond to humidity. The microbial cells they used have also been proven safe to touch and even consume. What’s more, with the genetic engineering tools available today, cells can be prepared quickly and in vast quantities to express multiple functionalities in addition to moisture response.

To demonstrate this last point, the researchers engineered moisture-sensitive cells to not only pull flaps open but also light up in response to humid conditions.

“We can combine our cells with genetic tools to introduce other functionalities into these living cells,” says Wen Wang, the paper’s lead author and a former research scientist in MIT’s Media Lab and Department of Chemical Engineering. “We use fluorescence as an example, and this can let people know you are running in the dark. In the future we can combine odor-releasing functionalities through genetic engineering. So maybe after going to the gym, the shirt can release a nice-smelling odor.”

Wang’s co-authors include 14 researchers from MIT, specializing in fields including mechanical engineering, chemical engineering, architecture, biological engineering, and fashion design, as well as researchers from New Balance Athletics. Wang co-led the project, dubbed bioLogic, with former graduate student Lining Yao as part of MIT’s Tangible Media group, led by Hiroshi Ishii, the Jerome B. Wiesner Professor of Media Arts and Sciences.

Shape-shifting cells

In nature, biologists have observed that living things and their components, from pine cone scales to microbial cells and even specific proteins, can change their structures or volumes when there is a change in humidity. The MIT team hypothesized that natural shape-shifters such as yeast, bacteria, and other microbial cells might be used as building blocks to construct moisture-responsive fabrics.

“These cells are so strong that they can induce bending of the substrate they are coated on,” Wang says.

The researchers first worked with the most common nonpathogenic strain of E. coli, which was found to swell and shrink in response to changing humidity. They further engineered the cells to express green fluorescent protein, enabling the cell to glow when it senses humid conditions.

They then used a cell-printing method they had previously developed to print E. coli onto sheets of rough, natural latex.

The team printed parallel lines of E. coli cells onto sheets of latex, creating two-layer structures, and exposed the fabric to changing moisture conditions. When the fabric was placed on a hot plate to dry, the cells began to shrink, causing the overlying latex layer to curl up. When the fabric was then exposed to steam, the cells began to glow and expand, causing the latex to flatten out. After 100 such dry/wet cycles, Wang says, the fabric showed “no dramatic degradation” in either its cell layer or its overall performance.
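
The bilayer behavior described above can be sketched with a toy model (the parameters and the linear relationship here are hypothetical illustrations, not taken from the paper): the cell layer shrinks as humidity drops, and the strain mismatch with the latex layer drives the flap's curvature.

```python
def flap_curvature(relative_humidity, strain_dry=0.05, rh_flat=0.9):
    """Toy bilayer model: the cell layer shrinks as humidity drops,
    producing a strain mismatch with the latex; curvature scales with
    that mismatch (hypothetical linear relationship and parameters)."""
    # Mismatch strain is maximal when fully dry and zero at/above rh_flat,
    # the humidity at which the cells are fully swollen and the flap lies flat.
    mismatch = strain_dry * max(0.0, 1.0 - relative_humidity / rh_flat)
    return mismatch  # proportional to curvature; flap thickness folded into the constant

print(flap_curvature(0.2))   # dry air  -> curled (nonzero curvature)
print(flap_curvature(0.95))  # steam    -> flat (zero curvature)
```

Under this sketch, drying curls the flap open and steam flattens it, matching the reported dry/wet cycling behavior.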

No sweat

The researchers worked the biofabric into a wearable garment, designing a running suit with cell-lined latex flaps patterned across the suit’s back. They tailored the size of each flap, as well as the degree to which they open, based on previously published maps of where the body produces heat and sweat.

“People may think heat and sweat are the same, but in fact, some areas like the lower spine produce lots of sweat but not much heat,” Yao says. “We redesigned the garment using a fusion of heat and sweat maps to, for example, make flaps bigger where the body generates more heat.”

Support frames underneath each flap keep the fabric’s inner cell layer from directly touching the skin, while at the same time, the cells are able to sense and react to humidity changes in the air lying just over the skin. In trials to test the running suit, study participants donned the garment and worked out on exercise treadmills and bicycles while researchers monitored their temperature and humidity using small sensors positioned across their backs.

After five minutes of exercise, the suit’s flaps started opening up, right around the time when participants reported feeling warm and sweaty. According to sensor readings, the flaps effectively removed sweat from the body and lowered skin temperature, more so than when participants wore a similar running suit with nonfunctional flaps.

When Wang tried on the suit herself, she found that the flaps created a welcome sensation. After pedaling hard for a few minutes, Wang recalls that “it felt like I was wearing an air conditioner on my back.”

Ventilated running shoes

The team also integrated the moisture-responsive fabric into a rough prototype of a running shoe. Where the bottom of the foot touches the sole of the shoe, the researchers sewed multiple flaps, curved downward, with the cell-lined layer facing toward — though not touching — a runner’s foot. They again designed the size and position of the flaps based on heat and sweat maps of the foot.

“In the beginning, we thought of making the flaps on top of the shoe, but we found people don’t normally sweat on top of their feet,” Wang says. “But they sweat a lot on the bottom of their feet, which can lead to diseases like warts. So we thought, is it possible to keep your feet dry and avoid those diseases?”

As with the workout suit, the flaps on the running shoe opened and lit up when researchers increased the surrounding humidity; in dry conditions the flaps faded and closed.

Going forward, the team is looking to collaborate with sportswear companies to commercialize their designs, and is also exploring other uses, including moisture-responsive curtains, lampshades, and bedsheets.

“We are also interested in rethinking packaging,” Wang says. “The concept of a second skin would suggest a new genre for responsive packaging.”

“This work is an example of harnessing the power of biology to design new materials and devices and achieve new functions,” says Xuanhe Zhao, the Robert N. Noyce Career Development Associate Professor in the Department of Mechanical Engineering and a co-author on the paper. “We believe this new field of ‘living’ materials and devices will find important applications at the interface between engineering and biological systems.”

This research was supported, in part, by MIT Media Lab and the Singapore-MIT Alliance for Research and Technology.

May 19, 2017 | More

MIT $100K winner’s optical chips perform AI computations at light speed

The big winner at this year’s MIT $100K Entrepreneurship Competition aims to drastically accelerate artificial-intelligence computations — to light speed.

Devices such as Apple’s Siri and Amazon’s Alexa, as well as self-driving cars, all rely on artificial intelligence algorithms. But the chips powering these innovations, which use electrical signals to do computations, could be much faster and more efficient.

That’s according to MIT team Lightmatter, which took home the $100,000 Robert P. Goldberg grand prize from last night’s competition for developing fully optical chips that compute using light, meaning they work many times faster — using much less energy — than traditional electronics-based chips. These new chips could be used to power faster, more efficient, and more advanced artificial-intelligence devices.

“Artificial intelligence has affected or will affect all industries,” said Nick Harris, an MIT PhD student, during the team’s winning pitch to a capacity crowd in the Kresge Auditorium. “We’re bringing the next step of artificial intelligence to light.”

Two other winners took home cash prizes from the annual competition, now in its 28th year. Winning a $5,000 Audience Choice award was change:WATER Labs, a team of MIT researchers and others making toilets that can condense waste into smaller bulk for easier transport in areas where people live without indoor plumbing. PipeGuard, an MIT team developing a sensor that can be sent through water pipes to detect leaks, won a $10,000 Booz Allen Hamilton data prize.

The competition is run by MIT students and supported by the Martin Trust Center for MIT Entrepreneurship and the MIT Sloan School of Management.

Computing at light speed

Founded out of MIT, Lightmatter has developed a new optical chip architecture that could in principle speed up artificial-intelligence computations by orders of magnitude.

In artificial intelligence, traditional chips rely on electrical signals that conduct millions of calculations using transistors (switches) to simulate a neural network that can produce an output. Lightmatter’s chip uses a completely different architecture that is more similar to the architecture of a real biological neural network. In addition, it uses light, instead of electrons, as a medium to carry the information during computing.
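
The workload such chips target is the dense linear algebra at the heart of neural-network inference; a minimal sketch of that core operation in plain NumPy (the article does not describe Lightmatter's actual design in detail — this only illustrates the computation an optical accelerator would carry out with light rather than transistor switching):

```python
import numpy as np

def dense_layer(x, W, b):
    """One neural-network layer. The matrix-vector product W @ x is the
    expensive step an accelerator speeds up; the ReLU nonlinearity is
    comparatively cheap and kept here only for completeness."""
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input activations
W = rng.standard_normal((3, 4))   # layer weights
b = np.zeros(3)
print(dense_layer(x, W, b))       # 3-element output activation vector
```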

The team has already built a prototype to carry out some simple speech recognition tasks.

The chips could be used by companies to develop faster and more sophisticated artificial-intelligence models. Consumers could see, for instance, smarter models of Alexa or Siri, or autonomous cars that compute faster, using less energy.

With the prize money, the team will travel to meet with potential customers, rent its first office space, and visit manufacturers. The competition also helped the team develop a detailed business plan, Harris told MIT News. “Our business plan was passed around to quite a number of judges before we were even vetted to get in here,” he said. “We were able to iterate on our understanding of how this thing is going to work, who we’re going to sell it to, how much money we’re going to make, and all the details of a business. Before this, we weren’t really there.”

Detecting leaks, shrinking waste

In PipeGuard’s pitch, Jonathan Miller, an integrated design and management student, and You Wu, a mechanical engineering PhD student, showcased Robot Daisy, a palm-sized bot wearing a sensor “skirt.” A worker puts the device into one end of a water pipe and collects it at the other end. If Daisy passes a leak while flowing through the pipe, the small amount of pressure pulls on the robot’s “skirt,” collecting data on the size of the leak. Data from Daisy is used to pinpoint leaks to within a couple of feet; traditional methods give only a general area of a potential leak.

“Moreover, Daisy can detect leaks too small for current technology,” Wu said. “We can find leaks when they’re really small, in their early stages, way before a pipe bursts.” Using that information, the team can predict which pipes will burst, and when.

Diana Yousef, a research associate at D-Lab, and Huda Elasaad, a technical research assistant in D-Lab and the Department of Mechanical Engineering, pitched for change:WATER Labs, which is developing a portable toilet that shrinks waste for easier removal.

Water makes up the bulk of human waste. The team’s toilet collects solid and liquid waste in a small pouch made of a novel membrane, which passively and rapidly vaporizes 95 percent of the waste’s liquid, releasing pure water vapor. The toilet can be used in the many parts of the world with off-line sewerage, where people lack access to indoor plumbing and rely on expensive sewerage removal.

“While all off-line sewerage requires collection and removal, this is usually frequent and costly. But by so drastically shrinking on-site sewerage volumes on a day-to-day basis, our toilets cut those costs in half and allow for unprecedented scalability,” Yousef said. About 40 cents’ worth of the material can shrink the waste of 20 people, according to the team.

The $100K Entrepreneurship Competition consists of three independent contests: Pitch, held in February; Accelerate, held in March; and the Launch grand finale, held last night. The winner of the Pitch competition was High Q Imaging, which cuts the cost of MRI machines tenfold with advanced algorithms and innovative hardware. The Accelerate contest winner was NeuroSleeve, a team developing an arm brace that detects carpal tunnel syndrome in its early stages; NeuroSleeve also competed last night.

Last night’s other competing teams were: Rendever, NeuroMesh, Legionarius, and CareMobile Transportation.

The $100K impact

Since its 1990 debut, the MIT $100K Entrepreneurship Competition has facilitated the birth of more than 160 companies, which have gone on to raise $1.3 billion in venture capital and build $16 billion in market capitalization. More than 30 of the startups have been acquired by major companies, such as Oracle and Merck, and more than 4,600 people are currently employed by former competing companies.

This year, 200 teams applied to the entrepreneurship competition. That number was winnowed to 50 semifinalist teams for the Launch contest. Judges then chose eight finalists to compete in Wednesday’s grand finale event. Semifinalist teams receive mentoring, prototyping funds, media exposure, and discounted services.

In his welcoming remarks, Bar Kafri, an MBA student and managing director of the MIT $100K Entrepreneurship Competition, who has been involved with the competition for many years, told the teams to embrace the process of competing because it walks them through all the intricacies of starting a company.

Noting that people often ask why he always gets involved with the competition, Kafri said, “It’s the same [reason] that brought me all the way from Israel to MIT. This Institution is a shining light of innovation, a light that guides science and humanity in a sea of uncertainty. The $100K competition is the lighthouse that helps carry this light high above and enables it to be seen from afar. I have the privilege of being the lighthouse keeper, fostering this light.” He added: “Keep shining this light.”

The keynote speaker was Jason Jacobs, founder and CEO of Runkeeper, a popular fitness app that was sold to Japanese sportswear giant Asics in 2016.

May 18, 2017 | More

Cinematography on the fly

In recent years, a host of Hollywood blockbusters — including “The Fast and the Furious 7,” “Jurassic World,” and “The Wolf of Wall Street” — have included aerial tracking shots provided by drone helicopters outfitted with cameras.

Those shots required separate operators for the drones and the cameras, and careful planning to avoid collisions. But a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and ETH Zurich hope to make drone cinematography more accessible, simple, and reliable.

At the International Conference on Robotics and Automation later this month, the researchers will present a system that allows a director to specify a shot’s framing — which figures or faces appear where, and at what distance. The system then generates, on the fly, control signals for a camera-equipped autonomous drone that preserve that framing as the actors move.

As long as the drone’s information about its environment is accurate, the system also guarantees that it won’t collide with either stationary or moving obstacles.

“There are other efforts to do autonomous filming with one drone,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and a senior author on the new paper. “They can follow someone, but if the subject turns, say 180 degrees, the drone will end up showing the back of the subject. With our solution, if the subject turns 180 degrees, our drones are able to circle around and keep focus on the face. We are able to specify richer higher-level constraints for the drones. The drones then map the high-level specifications into control and we end up with greater levels of interaction between the drones and the subjects.”

Joining Rus on the paper are Javier Alonso-Mora, who was a postdoc in her group when the work was done and is now an assistant professor of robotics at the Delft University of Technology; Tobias Nägeli, a graduate student at ETH Zurich, and his advisor Otmar Hilliges, an assistant professor of computer science; and Alexander Domahidi, CTO of Embotech, an autonomous-systems company that spun out of ETH Zurich.

In the picture

With the new system, the user can specify how much of the screen a face or figure should occupy, what part of the screen it should occupy, and what the subject’s orientation toward the camera should be — straight on, profile, three-quarter view from either side, or over the shoulder. Those parameters can be set separately for any number of subjects; in tests at MIT, the researchers used compositions involving up to three subjects.

Usually, the maintenance of the framing will be approximate. Unless the actors are extremely well-choreographed, the distances between them, the orientations of their bodies, and their distance from obstacles will vary, making it impossible to meet all constraints simultaneously. But the user can specify how the different factors should be weighed against each other. Preserving the actors’ relative locations onscreen, for instance, might be more important than maintaining a precise distance, or vice versa. The user can also assign a weight to minimize occlusion, ensuring that one actor doesn’t end up blocking another from the camera.

The key to the system, Alonso-Mora explains, is that it continuously estimates the velocities of all of the moving objects in the drone’s environment and projects their locations a second or two into the future. This buys it a little time to compute optimal flight trajectories and also ensures that it can recover smoothly if the drone needs to take evasive action to avoid collision.

The system updates its position projections about 50 times a second. Usually, the updates will have little effect on the drone’s trajectory, but the frequent updates ensure that the system can handle sudden changes of velocity.
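
The two ideas described above — projecting subject positions forward under a constant-velocity assumption, and trading off soft framing constraints by user-assigned weights — can be sketched as follows. All names, weights, and cost terms here are illustrative, not the authors' actual formulation:

```python
def predict(position, velocity, horizon=1.5):
    """Project a subject's location ~1-2 s ahead, assuming constant velocity."""
    return tuple(p + v * horizon for p, v in zip(position, velocity))

def framing_cost(actual, desired, weights):
    """Weighted sum of squared framing errors; a larger weight means the
    user cares more about preserving that aspect of the shot.
    `actual` and `desired` map constraint names to scalar values."""
    return sum(weights[k] * (actual[k] - desired[k]) ** 2 for k in desired)

# Example: on-screen position weighted more heavily than subject distance
desired = {"screen_x": 0.5, "distance": 3.0}
weights = {"screen_x": 10.0, "distance": 1.0}
actual  = {"screen_x": 0.6, "distance": 3.5}
print(predict((0.0, 0.0), (1.0, 0.5)))         # projected subject position
print(framing_cost(actual, desired, weights))  # scalar cost the planner minimizes
```

In a full planner, a cost of this shape would be re-evaluated against the projected positions on every update cycle to choose the next drone trajectory.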

The researchers tested the system at CSAIL’s motion-capture studio, using a quadrotor (four-propeller) drone. The motion-capture system provided highly accurate position data about the subjects, the studio walls, and the drone itself.

In one set of experiments, the subjects actively tried to collide with the drone, marching briskly toward it as it attempted to keep them framed within the shot. In all such cases, it avoided collision and immediately tried to resume the prescribed framing.

May 18, 2017 | More

MIT to spur global education hub for displaced populations and refugees

MIT is poised to become a global educational hub for displaced populations and refugees. With the launch of the Refugee ACTion Hub (ReACT), which was announced at the SOLVE at MIT annual flagship event, the Institute will develop digital and blended learning opportunities and serve as a catalyst for anyone dedicated to solving the problem of refugee education.

MIT ReACT stems from the vision and personal journey of its faculty founder, Admir Masic. “During the war in Yugoslavia my family lost everything, and I became a teenage refugee. I had access to a great deal of humanitarian support, such as food, clothes and shelter, but what changed my life was access to education,” he says.

It was “pure luck” that put Masic on the path to eventually becoming a faculty member at MIT, he adds. Now the Esther and Harold E. Edgerton Career Development Assistant Professor in the Department of Civil and Environmental Engineering, Masic has long dreamed about how to “bring this luck to everyone.”

“One of the greatest things I get to do as dean of engineering is to help catalyze ideas from faculty like Admir,” says Ian A. Waitz, dean of engineering and the Jerome C. Hunsaker Professor of Aeronautics and Astronautics. “When I heard his personal story about how education became his ‘ticket out’ of living as a refugee, I wanted to do anything and everything to help him create opportunities for others.”

MIT ReACT will focus on three main objectives: community engagement within MIT and beyond; the development of a certification system for displaced learners; and an outreach effort to connect with broader audiences. The founding team includes Hala Fadel MBA ’01, founder and chair of the MIT Enterprise Forum of the Pan-Arab region; Said Darwazah, CEO of Hikma Pharmaceuticals; Thomas Ermacora, futurist, urbanist and humanitarian; and Riccardo Sabatini, scientist and entrepreneur.

“With very limited educational and employment opportunities, there is little future for refugee populations,” says Fadel in reference to the nearly 65 million forcibly displaced people around the world. “The MIT community and technological innovation can become an inflection point and change a curse into an opportunity.”

To create a scalable program that meets the needs of displaced learners, MIT ReACT will initially pilot two efforts. Coding For Life, a hybrid learning program, will customize MIT’s MicroMasters concept, offering a professional and academic credential for online learners with the possibility of applying for campus-based programs. Through a partnership with MIT Media Lab’s Refugee Learning Accelerator, a parallel effort will aim to increase the digital innovation capacity in higher education institutions across the Middle East.

Masic and Fadel, who both served as judges of the refugee education challenge at SOLVE, are already building upon that experience and expect to offer seed research funding for faculty, postdocs and students and support for student-led fieldwork projects.

MIT ReACT will leverage other related global learning, engagement, and innovation activities throughout MIT such as the MIT International Science and Technology Initiatives (MISTI), MIT Sandbox Innovation Fund Program, and the Undergraduate Research Opportunities Program (UROP). Additionally, the debut of ReACT follows on the heels of the recent announcement of the Abdul Latif Jameel World Education Lab (J-WEL), aimed at learners in the developing world and those now underserved by education.

May 16, 2017 | More

Hacking discrimination

In July 2016, feeling frustrated about violence in the news and continued social and economic roadblocks to progress for minorities, members of the Black Alumni of MIT (BAMIT) were galvanized by a letter to the MIT community from President L. Rafael Reif. Responding to a recent series of tragic shootings, he asked “What are we to do?”

BAMIT members gathered in Washington to brainstorm a response, and out of that session emerged a plan to organize a hackathon aimed at finding technology-based solutions to address discrimination. The event, held at MIT last month, was called “Hacking Discrimination” and spearheaded by Elaine Harris ’78 and Lisa Egbuonu-Davis ’79 in partnership with the MIT Alumni Association.

The 11 pitches presented during the two-day hackathon covered a wide range of issues affecting communities of color, including making routine traffic stops less harmful for motorists and police officers, preventing bias in the hiring process by creating a professional profile using a secure blockchain system, flagging unconscious biases using haptic (touch-based) feedback and augmented reality, and providing advice for those who experience discrimination.

Hackathon winners were selected in three categories – Innovation, Impact, and Storytelling – and received gifts valued at $1,500. The teams also received advice from local experts on their topics throughout the second day of hacking.

The Innovation prize was awarded to Taste Voyager, a platform that enables individuals or families to host guests and foster cultural understanding over a home-cooked meal. The Impact prize went to Rahi, a smartphone app that makes shopping easier for recipients of the federally funded Women, Infant, and Children food-assistance program. The Storytelling prize was awarded to Just-Us and Health, which uses surveys to track the effects of discrimination in neighborhoods.

As Randal Pinkett SM ’98, MBA ’98, PhD ’02 said in his keynote speech, “Technology alone won’t solve bias in the U.S.,” and the hackathon made sure to focus on technology’s human users. Under the guidance of Fahad Punjwani, an MIT graduate student in integrated design and management, the event’s mentors ensured that participants considered not just how to deploy their technologies but also the people they aimed to serve.

With a human-centered design process as the guideline, Punjwani encouraged participants to speak with people affected by the problem and carefully define their target audience. For some, including the Taste Voyager team, which began the hackathon as Immigrant Integration, this resulted in an overhaul of the project. Examining their target audience led the team to switch their focus from helping immigrants integrate to creating a way for people of different backgrounds to connect and help each other in a safe space.

“We hacked the topic of our topic,” said Jennifer Williams of Lincoln Laboratory’s Human Language Technology Group, who led the team.

The Rahi team, which was led by Hildreth England, assistant director of the Media Lab’s Open Agriculture Initiative, also focused on the user as it attempted to improve the national Women, Infants, and Children (WIC) nutrition program by acknowledging the racial and ethnic inequalities embedded in the food system. For example, according to Feeding America, one in five African-American and Latino households is food insecure — lacking consistent and adequate access to affordable and nutritious food — compared to one in 10 Caucasian households.

The team created mockups for a smartphone app and focused on improving “the experience of using it before [shopping], and then in a store because that’s where all of the problems are,” explained England. In some states, WIC recipients have only a sheet of paper listing the foods available through the program.

During the first day of the event, speeches by Kirk Kolenbrander, vice president at MIT; J. Phillip Thompson, associate professor of urban studies and planning; and Shannon Al-Wakeel, executive director of the Muslim Justice League, reminded participants of the past and current social justice issues needing solutions. The following morning, in a keynote address, Pinkett stressed the strengths and weaknesses that come with cultural differences. “Our greatest strength is our diversity; our greatest liability is in our cultural ignorance,” he said.

A Hacking Discrimination Fund, which was announced at the event, has been created to support undergraduate and graduate students addressing racism and discrimination through events such as the hackathon, development of sustainable community dialogue, contest development, and other activities that specifically address racism in the U.S. The fund’s emphasis will be placed on solutions that aim to overcome challenges to safety or economic and professional success for populations that have historically been victims of racism.

Alumnae organizers Egbuonu-Davis and Harris worked closely with a number of collaborators to launch the inaugural event. Contributors included Punjwani; Leo Anthony G. Celi SM ’09, a principal research scientist at the MIT Institute of Medical Engineering and Science; Trishan Panch, an MIT lecturer, primary care physician, and co-founder and Chief Medical Officer at Wellframe; and Marzyeh Ghassemi and Tristan Naumann, both MIT CSAIL PhD candidates.

May 16, 2017 | More