News and Research

MIT Corporation elects LGO alum as term member

Vrajesh Y. Modi (LGO ’15) was elected to The MIT Corporation — the Institute’s board of trustees. He will serve for five years.
Read more

LGO

Designing climate-friendly concrete, from the nanoscale up

Franz-Josef Ulm, professor of CEE, LGO thesis advisor, and director of the MIT Concrete Sustainability Hub (CSHub), has been working to reduce concrete’s environmental footprint.

An MIT-led team has defined the nanoscale forces that control how particles pack together during the formation of cement “paste,” the material that holds together concrete and causes that ubiquitous construction material to be a major source of greenhouse gas emissions. By controlling those forces, the researchers will now be able to modify the microstructure of the hardened cement paste, reducing pores and other sources of weakness to make concrete stronger, stiffer, more fracture-resistant, and longer-lasting. Results from the researchers’ simulations explain experimental measurements that have confused observers for decades, and they may guide the way to other improvements, such as adding polymers to fill the pores and recycling waste concrete into a binder material, reducing the need to make new cement.

Each year, the world produces 2.3 cubic yards of concrete for every person on earth, in the process generating more than 10 percent of all industrial carbon dioxide (CO2) emissions. New construction and repairs to existing infrastructure currently require vast amounts of concrete, and consumption is expected to escalate dramatically in the future. “To shelter all the people moving into cities in the next 30 years, we’ll have to build the equivalent of several hundred New York cities,” says Roland Pellenq, senior research scientist in the MIT Department of Civil and Environmental Engineering (CEE) and research director at France’s National Center for Scientific Research (CNRS). “There’s no material up to that task but concrete.”

Recognizing the critical need for concrete, Pellenq and his colleague Franz-Josef Ulm, professor of CEE and director of the MIT Concrete Sustainability Hub (CSHub), have been working to reduce its environmental footprint. Their goal: to find ways to do more with less. “If we can make concrete stronger, we’ll need to use less of it in our structures,” says Ulm. “And if we can make it more durable, it’ll last longer before it needs to be replaced.”

Surprisingly, while concrete has been a critical building material for 2,000 years, improvements have largely come from trial and error rather than rigorous research. As a result, the factors controlling how it forms and performs have remained poorly understood. “People always deemed what they saw under a microscope as being coincidence or evidence of the special nature of concrete,” says Ulm, who with Pellenq co-directs the joint MIT-CNRS laboratory called MultiScale Material Science for Energy and Environment, hosted at MIT by the MIT Energy Initiative (MITEI). “They didn’t go to the very small scale to see what holds it together — and without that knowledge, you can’t modify it.”

Cement: the key to better concrete

The problems with concrete — both environmental and structural — are linked to the substance that serves as its glue, namely, cement. Concrete is made by mixing together gravel, sand, water, and cement. The last two ingredients combine to make cement hydrate, the binder in the hardened concrete. But making the dry cement powder requires cooking limestone (typically with clay) at temperatures of 1,500 degrees Celsius for long enough to drive off the carbon in it. Between the high temperatures and the limestone decarbonization, the process of making cement powder for concrete is by itself responsible for almost 6 percent of all CO2 emissions from industry worldwide. Structural problems can also be traced to the cement: When finished concrete cracks and crumbles, the failure inevitably begins within the cement hydrate that’s supposed to hold it together — and replacing that crumbling concrete will require making new cement and putting more CO2 into the atmosphere.

To improve concrete, then, the researchers had to address the cement hydrate — and they had to start with the basics: defining its fundamental structure through atomic-level analysis. In 2009, Pellenq, Ulm, and an international group of researchers associated with CSHub published the first description of cement hydrate’s three-dimensional molecular structure. Subsequently, they determined a new formula that yields cement hydrate particles in which the atoms occur in a specific configuration — a “sweet spot” — that increases particle strength by 50 percent.

However, that nanoscale understanding doesn’t translate directly into macroscale characteristics. The strength and other key properties of cement hydrate actually depend on its structure at the “mesoscale” — specifically, on how nanoparticles have packed together over hundred-nanometer distances as the binder material forms.

When dry cement powder dissolves in water, room-temperature chemical reactions occur, and nanoparticles of cement hydrate precipitate out. If the particles don’t pack tightly, the hardened cement will contain voids that are tens of nanometers in diameter — big enough to allow aggressive materials such as road salt to seep in. In addition, the individual cement hydrate particles continue to move around over time — at a tiny scale — and that movement can cause aging, cracking, and other types of degradation and failure.

To understand the packing process, the researchers needed to define the precise physics that drives the formation of the cement hydrate microstructure — and that meant they had to understand the physical forces at work among the particles. Every particle in the system exerts forces on every other particle, and depending on how close together they are, the forces either pull them together or push them apart. The particles seek an organization that minimizes energy over length scales of many particles. But reaching that equilibrium state takes a long time. When the Romans made concrete 2,000 years ago, they used a binder that took many months to harden, so the particles in it had time to redistribute so as to relax the forces between them. But construction time is money, so today’s binder has been optimized to harden in a few hours. As a result, the concrete is solid long before the cement hydrate particles have relaxed, and when they do, the concrete sometimes shrinks and cracks. So while the Roman Colosseum and Pantheon are still standing, concrete that’s made today can fail in just a few years.

The research challenge

Laboratory investigation of a process that can take place over decades isn’t practical, so the researchers turned to computer simulations. “Thanks to statistical physics and computational methods, we’re able to simulate this system moving toward the equilibrium state in a couple of hours,” says Ulm.

Based on their understanding of interactions among atoms within a particle, the researchers — led by MITEI postdoc Katerina Ioannidou — defined the forces that control how particles space out relative to one another as cement hydrate forms. The result is an algorithm that mimics the precipitation process, particle by particle. By constantly tracking the forces among the particles already present, the algorithm calculates the most likely position for each new one — a position that will move the system toward equilibrium. It thus adds more and more particles of varying sizes until the space is filled and the precipitation process stops.
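For readers who want a feel for how such a particle-by-particle algorithm can work, the toy sketch below places polydisperse spheres into a box one at a time, scoring candidate positions against the particles already present with an assumed pair potential and keeping the lowest-energy spot. The box size, particle radii, potential form, and acceptance rule are all illustrative assumptions, not the researchers’ actual model.

# Illustrative sketch only: a toy, particle-by-particle packing simulation in the
# spirit of the algorithm described above. The pair potential, particle sizes,
# box size, and acceptance rule are assumptions, not the researchers' code.
import numpy as np

rng = np.random.default_rng(0)
BOX = 600.0          # box width in nanometers (matches the ~600 nm figure cited)
N_STEPS = 800        # maximum number of particles to attempt to place
N_CANDIDATES = 100   # trial positions evaluated for each new particle

def pair_energy(r, sigma):
    """Simple Lennard-Jones-style interaction: short-range repulsion,
    longer-range attraction (an assumed stand-in for the real force field)."""
    x = sigma / np.maximum(r, 1e-6)
    return 4.0 * (x**12 - x**6)

def total_energy(pos, radius, positions, radii):
    """Interaction energy of a candidate particle with all existing particles."""
    if len(positions) == 0:
        return 0.0
    d = np.linalg.norm(np.asarray(positions) - pos, axis=1)
    sigma = radius + np.asarray(radii)        # contact distance for each pair
    return float(np.sum(pair_energy(d, sigma)))

positions, radii = [], []
for step in range(N_STEPS):                   # keep adding particles until full
    radius = rng.uniform(3.0, 20.0)           # polydisperse nanoparticles (nm)
    # Propose candidate positions and keep the one with the lowest energy,
    # i.e., the "most likely" position given the particles already present.
    best_pos, best_e = None, np.inf
    for _ in range(N_CANDIDATES):
        cand = rng.uniform(radius, BOX - radius, size=3)
        if positions:
            d = np.linalg.norm(np.asarray(positions) - cand, axis=1)
            if np.any(d < 0.8 * (radius + np.asarray(radii))):
                continue                      # reject strong overlaps outright
        e = total_energy(cand, radius, positions, radii)
        if e < best_e:
            best_pos, best_e = cand, e
    if best_pos is None:
        break                                 # no room left: precipitation stops
    positions.append(best_pos)
    radii.append(radius)

packing_fraction = sum(4/3 * np.pi * r**3 for r in radii) / BOX**3
print(f"placed {len(radii)} particles, packing fraction ~= {packing_fraction:.3f}")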

Results from sample analyses appear in the first two diagrams in Figure 1 of the slideshow above. The width of each simulation box is just under 600 nanometers — about one-tenth the diameter of a human hair. The two analyses assume different packing fractions, that is, the total fraction of the simulation box occupied by particles. The packing fraction is 0.35 in the left-hand diagram and 0.52 in the center diagram. At the lower fraction, far more of the volume is made up of open pores, indicated by the white regions.

The third diagram in Figure 1 is a sketch of the cement hydrate structure proposed in pioneering work by T.C. Powers in 1958. The similarity to the center figure is striking. The MIT results thus support Powers’ idea that the formation of mesoscale pores can be attributed to the use of excessive water during hydration — that is, more water than needed to dissolve and precipitate the cement hydrate. “Those pores are the fingerprint of the water you put into the mix in the first place,” says Pellenq. “Add too much water, and at the end you’ll have a cement paste that is too porous, and it will degrade faster over time.”

To validate their model, the researchers performed experimental tests and parallel theoretical analyses to determine the stiffness and hardness (or strength) of cement hydrate samples. The laboratory measurements were taken using a technique called nanoindentation, which involves pushing a hard tip into a sample to determine the relationship between the applied load and the volume of deformed material beneath the indenter.

The graphs in Figure 2 of the slideshow above show results from small-scale nanoindentation tests on three laboratory samples (small symbols) and from computations of those properties in a “sample” generated by the simulation (yellow squares). The graph on the left shows results for stiffness, the graph on the right results for hardness. In both cases, the X-axis indicates the packing fraction. The results from the simulations match the experimental results well. (The researchers note that at lower packing fractions, the material is too soggy to test experimentally — but the simulation can do the calculation anyway.)

In another test, the team investigated experimental measurements of cement hydrate that have mystified researchers for decades. A standard way to determine the structure of a material is small-angle neutron scattering (SANS): send a beam of neutrons into a sample, and the way they scatter back conveys information about the distribution of particles, pores, and other features on length scales of a few hundred nanometers.

SANS had been used on hardened cement paste for several decades, but the measurements always exhibited a regular pattern that experts in the field couldn’t explain. Some talked about fractal structures, while others proposed that concrete is simply unique.

To investigate, the researchers compared SANS analyses of laboratory samples with corresponding scattering data calculated using their model. The experimental and theoretical results showed excellent agreement, once again validating their technique. In addition, the simulation elucidated the source of the past confusion: The unexplained patterns are caused by the rough edges at the boundary between the pores and the solid regions. “All of a sudden we could explain this signature, this mystery, but on a physics basis in a bottom-up fashion,” says Ulm. “That was a really big step.”

New capabilities, new studies

“We now know that the microtexture of cement paste isn’t a given but is a consequence of an interplay of physical forces,” says Ulm. “And since we know those forces, we can modify them to control the microtexture and produce concrete with the characteristics we want.” The approach opens up a new field involving the design of cement-based materials from the bottom up to create a suite of products tailored to specific applications.

The CSHub researchers are now exploring ways to apply their new techniques to all steps in the life cycle of concrete. For example, a promising beginning-of-life approach may be to add another ingredient — perhaps a polymer — to alter the particle-particle interactions and serve as filler for the pore spaces that now form in cement hydrate. The result would be a stronger, more durable concrete for construction and also a high-density, low-porosity cement that would perform well in a variety of applications. For instance, at today’s oil and natural gas wells, cement sheaths are generally placed around drilling pipes to keep gas from escaping. “A molecule of methane is 500 times smaller than the pores in today’s cement, so filling those voids would help seal the gas in,” says Pellenq.

The ability to control the material’s microtexture could have other, less-expected impacts. For example, novel CSHub work has demonstrated that the fuel efficiency of vehicles is significantly affected by the interaction between tires and pavement. Simulations and experiments in the lab-scale setup shown in Figure 3 of the slideshow above suggest that making concrete surfaces stiffer could reduce vehicle fuel consumption by as much as 3 percent nationwide, saving energy and reducing emissions.

Perhaps most striking is a concept for recycling spent concrete. Today, methods of recycling concrete generally involve cutting it up and using it in place of gravel in new concrete. But that approach doesn’t reduce the need to manufacture more cement. The researchers’ idea is to reproduce the cohesive forces they’ve identified in cement hydrate. “If the microtexture is just a consequence of the physical forces between nanometer-sized particles, then we should be able to grind old concrete into fine particles and compress them so that the same force field develops,” says Ulm. “We can make new binder without needing any new cement — a true recycling concept for concrete!”

This research was supported by Schlumberger; France’s National Center for Scientific Research (through its Laboratory of Excellence Interdisciplinary Center on MultiScale Materials for Energy and Environment); and the Concrete Sustainability Hub at MIT. Schlumberger is a Sustaining Member of the MIT Energy Initiative. The research team also included other investigators at MIT; the University of California at Los Angeles; Newcastle University in the United Kingdom; and Sorbonne University, Aix-Marseille University, and the National Center for Scientific Research in France.

This article appears in the Spring 2016 issue of Energy Futures, the magazine of the MIT Energy Initiative.


July 25, 2016 | More

Predicting performance under pressure

Two LGO thesis advisors and MIT Sloan operations professors use sweat to measure stress, see surprising results. Many industries subject current and prospective employees to stress tests to see how they might perform under pressure. Those who remain cool, calm, and collected during the simulations are often seen as the best fit for stressful real-life situations, whether it’s landing an airplane or trading on the stock exchange floor.

July 15, 2016 | More

Ready for takeoff

“The system is large, and there’s a lot of connectivity,” says Hamsa Balakrishnan, associate professor of aeronautics and astronautics and LGO student advisor at MIT.

Over the next 25 years, the number of passengers flying through U.S. airport hubs is expected to skyrocket by almost 70 percent, to more than 900 million passengers per year. This projected boom in commercial fliers will almost certainly add new planes to an already packed airspace.

Any local delays, from a congested runway to a weather-related cancellation, could ripple through the aviation system and jam up a significant portion of it, making air traffic controllers’ jobs increasingly difficult.

“The system is large, and there’s a lot of connectivity,” says Hamsa Balakrishnan, associate professor of aeronautics and astronautics at MIT. “How do you move along today’s system to be more efficient, and at the same time think about technologies that are lightweight, that you can implement in the tower now?”

These are questions that Balakrishnan, who was recently awarded tenure, is seeking to answer. She is working with the Federal Aviation Administration and major U.S. airports to upgrade air traffic control tools in a way that can be easily integrated into the existing infrastructure. These tools are aimed at predicting and preventing air traffic delays, both at individual airports and across the aviation system. They will also ultimately make controllers’ jobs easier.

“We don’t necessarily want [controllers] to spend the bandwidth on processing 40 pieces of information,” says Balakrishnan, who is a member of MIT’s Institute for Data, Systems, and Society. “Instead, we can tell them the three top choices, and the difference between those choices would be something only a human could tell.”

Most recently Balakrishnan has developed algorithms to prevent congestion on airport runways. Large hubs like New York’s John F. Kennedy International Airport can experience significant jams, with up to 40 planes queuing up at a time, each idling in line — and generating emissions — before finally taking off. Balakrishnan found that runways run more smoothly, with less idling time, if controllers simply hold planes at the gate for a few extra minutes. She has developed a queuing model that predicts the wait time for each plane before takeoff, given weather conditions, runway traffic, and arriving schedules, and she has calculated the optimal times when planes should push back from the gate.
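As a rough illustration of the kind of queuing logic described here, the sketch below treats the runway as a single server: each flight taxis out after pushback, waits if the runway is still occupied, and its predicted wait becomes a suggested gate hold. The taxi time, runway service time, and schedule are invented placeholders, not values from Balakrishnan’s model.

# Minimal sketch (not Balakrishnan's actual model): a single-server runway queue
# used to estimate each departure's wait and a gate-hold time that trims idling.
# All parameters (taxi time, service rate, schedule) are made-up assumptions.
TAXI_MIN = 12.0            # unimpeded gate-to-runway taxi time, minutes
SERVICE_MIN = 1.5          # average runway occupancy per takeoff, minutes

def predict_queue(pushback_times):
    """Return (takeoff_time, wait_at_runway) for each flight, in pushback order."""
    runway_free = 0.0
    results = []
    for t_push in sorted(pushback_times):
        arrive_runway = t_push + TAXI_MIN
        takeoff = max(arrive_runway, runway_free)   # wait if the runway is busy
        results.append((takeoff, takeoff - arrive_runway))
        runway_free = takeoff + SERVICE_MIN
    return results

def gate_holds(pushback_times):
    """Suggest holding each flight at the gate for its predicted runway wait,
    shifting idle time (and emissions) from the taxiway to the gate."""
    return [wait for _, wait in predict_queue(pushback_times)]

# Example: ten flights scheduled to push back one minute apart
schedule = [float(i) for i in range(10)]
for flight, (takeoff, wait) in enumerate(predict_queue(schedule)):
    print(f"flight {flight}: takeoff at t={takeoff:5.1f} min, runway wait {wait:4.1f} min")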

In reality, air traffic controllers may also be balancing “human constraints,” such as maintaining a certain level of fairness in determining which plane lines up first. That’s why a large part of Balakrishnan’s work also involves talking directly with air traffic controllers and operators, to understand all the factors that impact their decision making.

“You can’t purely look at the theory to design these systems,” Balakrishnan says. “A lot of the constraints they need to work within are unwritten, and you want to be as nondisruptive as possible, in a way that a minor change does not increase their workload. Everybody understands in these systems that you have to modernize. If you’re willing to listen, people are very willing to tell you about what it looks like from where they are.”

First flight

Balakrishnan was born in Madras, now Chennai, a large metropolitan city in southern India, and was raised by academics: Her father is a recently retired physics professor at the Indian Institute of Technology at Madras, and her mother is a retired professor of physics at the Institute of Mathematical Sciences, in Chennai. Her brother, Hari, is now at MIT as the Fujitsu Professor of Electrical Engineering and Computer Science.

“A lot of people we knew were academics, and people used to talk about their research at our home,” Balakrishnan recalls. “I was surrounded by [academia] growing up.”

Following the family’s academic path wasn’t necessarily Balakrishnan’s goal, but as an undergraduate at the Indian Institute of Technology at Madras she found that she enjoyed math and physics. She eventually gravitated to computational fluid dynamics, as applied to aerospace engineering.

“My parents are physicists, and maybe I wanted to rebel, so I went into engineering,” Balakrishnan says, half-jokingly. “I liked practical things.”

She applied to graduate school at Stanford University, and after she was accepted, she took her first-ever plane ride, from India to the U.S.

“Air travel is much more affordable and common now, even in India,” Balakrishnan says. “It didn’t used to be that way, and a lot of work has been done, even in more developing economies, to make air travel more accessible.”

Clearing the runways

At Stanford, Balakrishnan shifted her focus from fluid dynamics to air traffic and control-related problems, first looking at ways to track planes in the sky.

“That got me interested in how the rest of the system works,” Balakrishnan says. “I started looking at all the different decisions that are getting made, who’s deciding what, and how do you end up with what you see eventually on the data side, in terms of the aircraft that are moving.”

After graduating from Stanford, she spent eight months at NASA’s Ames Research Center, where she worked on developing control algorithms to reduce airport congestion and optimize the routing of planes on the tarmac.

In 2007, Balakrishnan accepted a faculty position in MIT’s Department of Aeronautics and Astronautics, where she has continued to work on developing algorithms to cut down airport congestion. She’s also finding practical ways to integrate those algorithms in the stressful and often very human environment of an airport’s control tower.

She and her students have tested their algorithms at major airports including Boston’s Logan International, where they made suggestions, in real time, to controllers about when to push aircraft back from the gate. Those controllers who did take the team’s suggestions observed a surprising outcome: The time-saving method actually cleared traffic, making it easier for planes to cross the tarmac and queue up for takeoff.

“It wasn’t an intended consequence of what we were doing,” Balakrishnan says. “Just by making things calmer and a little more streamlined, it made it easier for them to make decisions in other dimensions.”

Such feedback from controllers, she says, is essential for implementing upgrades in a system that is projected to take on a far higher volume of flights in the next few years.

“You’re designing with the human decision-maker in mind,” Balakrishnan says. “In these systems, that’s a very important thing.”


July 15, 2016 | More

New microfluidic device offers means for studying electric field cancer therapy

Roger Kamm, Distinguished Professor of Mechanical and Biological Engineering at MIT and LGO thesis advisor, developed the device, which is used to study how low-intensity fields keep malignant cells from spreading while preserving healthy cells.

July 7, 2016 | More

Silk-based filtration material breaks barriers

Markus J. Buehler, head of MIT Civil and Environmental Engineering and LGO collaborator, contributed to the research team. Engineers find that nanosized building blocks of silk hold the secrets to improved filtration membranes.

July 1, 2016 | More

2016 MBAs to watch: Iris Zielske, MIT Sloan

An LGO 2016 graduate was named an “MBA to Watch.”

June 23, 2016 | More

A makerspace for students, by students

MakerWorks has the support and guidance of a MechE professor (the department’s unofficial “maker czar”), who has also overseen LGO theses.

June 23, 2016 | More

LGO Best Thesis Award for Raytheon project on additive manufacturing

Andrew Byron developed a test plan and process for metals AM at Raytheon Missile Systems. He received the LGO Best Thesis award for the Class of 2016 for his research project on additive manufacturing in metals. Byron, who received his MBA and an SM in aeronautics and astronautics, based his thesis on his six-month LGO internship at Raytheon Missile Systems.

Andrew Byron receives his Best Thesis Award for his cutting-edge research at Raytheon.

Additive manufacturing (AM) is an exciting new way to digitally manufacture complicated structures. Raytheon Missile Systems recognized that development of advanced missile systems would be accelerated by leveraging the strengths of AM, so the company wants to qualify AM for use on its flight-critical parts. Byron’s project delivered a qualification test plan and process that will be used next year to drive adoption and integration of Raytheon’s metals AM technology into new programs.

“His experiment design and execution has improved the state-of-the-art characterization of an important cutting-edge metallic additive manufacturing process. We also learned what hard work it takes to advance manufacturing process readiness,” said MIT Sloan School of Management Professor Steve Eppinger, one of Byron’s thesis advisors.

“Andrew realized greater potential in his project than originally conceived, and I am not at all surprised by this recognition,” said Aero/Astro Professor Brian Wardle, Byron’s other thesis advisor. “He should be very proud of the quality of the work and the quality of the presentation of that great work in his thesis.”

“At least two chapters of this thesis could become the standard reference handbook for DoE of additive manufacturing screening. I want to keep this as a desk reference,” said one of the LGO alumni who read and commented on the 2016 theses. Several other reviewers said they hoped to share Byron’s thesis throughout their organizations because of the relevance of its topic and clarity of presentation.

Byron, who earned his undergraduate degree in chemical engineering from the University of Maine at Orono, has accepted a position at The Boeing Company working on composite structures.

June 16, 2016 | More

Caterpillar, National Grid join LGO Governing Board

Representatives of LGO partner companies Caterpillar and National Grid have joined the LGO Governing Board, bringing the board’s membership to 12.

The Governing Board is the senior of the LGO program’s two governance groups, the other being the Operating Committee. Members include the deans of the MIT Sloan School of Management and the School of Engineering, LGO program staff and senior executives from industry partners that have contributed most significantly to the program over the total length of their partnership. The board advises the MIT and LGO leadership on strategic directions for the program as well as the admission of new industry partners.

Since joining LGO in 2010, National Grid has successfully leveraged expertise from MIT faculty research and student internships to study the use of drones for inspecting field assets and to develop a tool that predicts storm damage and optimizes crew deployment, improving storm response operations. This tool has resulted in research publications with MIT Sloan Professor of Operations Research and Operations Management Georgia Perakis and was touted on CNBC by National Grid CEO Steve Holliday.

National Grid chief customer officer Terry Sobolewski will represent the company on the LGO Governing Board.

Caterpillar, which has been an LGO partner since 2009, has among its senior leadership Denise Johnson (LGO ’97), group president of resource industries and the company’s Governing Board representative. The firm has also hired in recent years from a talent pipeline of LGO graduates including Glenn Bergevin (LGO ’13), Cullen Johnson (LGO ’12), Matt Reveley (LGO ’12), and Brandon Rowan (LGO ’12).

Current Governing Board co-chairs are Jeff Wilke (LGO ’93), CEO Worldwide Consumer, Amazon, and Mick Maurer, senior vice president, UTC–Otis. Other industry members are:

  • Cathy Arledge, vice president, business transformation programs, Dell Inc.
  • Matthew Bromberg, president, commercial engine aftermarket, UTC–Pratt & Whitney
  • Kim Caruso, vice president, corporate operations, Raytheon
  • Tim Copes, vice president, material services, Boeing
  • Rafael De Jesus, group vice president, ABB
  • Peter Dunn, executive medical director, perioperative administration, Massachusetts General Hospital
  • Sam Guhan, vice president, operations technology, Amgen Inc.
  • Aine Hanly, vice president, drug substance technologies, Amgen Inc.
  • Gerald Johnson, vice president, North America manufacturing, General Motors
  • Gerry Rogers, vice president, global supply chain, Nike

June 14, 2016 | More

MBAs Are Harnessing Big Data With The Internet Of Things — And B-Schools Showed Them How

Thomas Roemer, LGO Director, is quoted in this article about big data and strategic management.

June 14, 2016 | More

Sloan

The five keys to successfully negotiating your salary

Many people find asking to be paid more money awkward. How will your request be perceived? Will you look greedy or demanding? Are you sure you’re really worth what you’re asking for? The key to answering these questions and reaching a successful outcome is preparation. Fortunately, it’s not difficult to prepare for a salary negotiation. It just takes a few simple steps.

1. Think about timing.

The first step in preparing for a salary discussion is to consider timing. In general, it’s better to discuss salary after you receive a job offer rather than once you start a position. Companies generally expect there will be some negotiations before a person formally accepts a position, and assuming you have done your market research, you should be comfortable knowing the salary range and typical benefits for your position and in your location.

However, many people decide to have this conversation when they have been in a job for a time and desire a raise. If this is the case, look at whether you’ve had changes in job responsibilities. Have you taken on new roles or tasks? Or have you recently completed a successful project? If so, this would be an appropriate time to ask for an increase.

Another rule of thumb is that it’s better to ask for a raise when you’re happy in your job, versus feeling dissatisfied. You want to bring a positive attitude to the negotiating table, because that suggests you are committed to the company and are in for the long haul. After all, who wants to reward a disgruntled employee?

It’s also helpful to look at how the company is doing. If it just announced layoffs, don’t ask for a raise. On the other hand, if it just reported a 15% increase in profits over the last quarter, that is probably a better time.

Be strategic with your timing; don’t make your annual review the default time to discuss a raise. The purpose of a review is to evaluate your performance over a time period, whereas a salary discussion should reflect your recent achievements and value creation for the company.

Read the full post on Forbes.

Neal Hartman is a Senior Lecturer in Managerial Communication at the MIT Sloan School of Management.

July 21, 2016 | More


Digital platforms driving shift in supply chains, globalization

The political environment in Europe and the United States may point to a growth in nationalism, but the rapid growth of digital platforms, networks, and data business models in fact represents the latest shift in the forces of globalization, according to former IBM CEO Samuel J. Palmisano.

“You have to be cognizant of this technology shift,” he said, adding that advocates of ideas such as Great Britain’s exit from the European Union fail to see that supply chains are shifting around the world. “I’ve been working in technology for 40 years, and I’ve never seen anything move this fast.”

In fact, Palmisano said July 15 at the MIT Platform Strategy Summit hosted by the MIT Initiative on the Digital Economy, the companies achieving the largest scale today possess few assets but build an extensible platform ecosystem.

“These companies have tremendous leverage and return on capital,” Palmisano said.

Leaders and followers in global platform growth

A recent global survey of platform enterprises by the Center for Global Enterprise—where Palmisano is chairman—identified 176 firms with a market capitalization of more than $1 billion. E-commerce is the most common industry, followed by financial technology, business tools, and social media and messaging, said Peter C. Evans, vice president of the Center for Global Enterprise. Energy, health care, and the public sector are ripe for innovation, he added, but most platforms addressing these industries remain small.

According to the report, the largest platform companies are typically young, public, and American (think Amazon, Facebook, and Google). China is the second-largest platform market, in part because it has erected barriers to entry for American firms.

Asia and Africa are poised for rapid growth, Evans said, while Europe’s strict regulations have led to a paucity of homegrown platforms. Absent a multinational response, akin to the creation of Airbus to counter Boeing’s dominance of airplane manufacturing, Europe risks falling 15 or more years behind the rest of the world, Evans said.

“If you’re not on the platform wave, you’ll be in trouble, long-term,” Evans said.

Platform strategies come with advantages as well as risks, but the best value proposition for a platform may be reducing friction between people and organizations who are trying to connect with each other.

This can be a struggle for some enterprises, said Mynul Khan, founder and CEO of Field Nation, which connects project managers to certified contractors. While small and medium-sized businesses have been quick to pivot and adopt platform services, Khan said on a panel at the summit, “Working with a department within a very large organization that doesn’t have much influence, the adoption is slow. Anyone who wants to put on the red tape will put on the red tape.”

July 19, 2016 | More


The rise of data-driven decision making is real but uneven

Growing opportunities to collect and leverage digital information have led many managers to change how they make decisions – relying less on intuition and more on data. As Jim Barksdale, the former CEO of Netscape, quipped, “If we have data, let’s look at data. If all we have are opinions, let’s go with mine.” Following pathbreakers such as Caesars CEO Gary Loveman – who attributes his firm’s success to the use of databases and cutting-edge analytical tools – managers at many levels are now consuming data and analytical output in unprecedented ways.

This should come as no surprise. At their most fundamental level, all organizations can be thought of as “information processors” that rely on the technologies of hierarchy, specialization, and human perception to collect, disseminate, and act on insights. Therefore, it’s only natural that technologies delivering faster, cheaper, more accurate information create opportunities to re-invent the managerial machinery.

At the same time, large corporations are not always nimble creatures. How quickly are managers actually making the investments and process changes required to embrace decision-making practices rooted in objective data? And should all firms jump on this latest managerial bandwagon?

 

We recently worked with a team at the U.S. Census Bureau and our colleagues Nick Bloom of Stanford and John van Reenen of the London School of Economics to design and field a large-scale survey to pursue these questions in the U.S. manufacturing sector. The survey targeted a representative group of roughly 50,000 American manufacturing establishments.

Our initial line of inquiry delves into the spread of data-driven decision making, or “DDD” for short. We find that the use of DDD in U.S. manufacturing nearly tripled between 2005 and 2010, from 11% to 30% of plants. However, adoption has been uneven. DDD is primarily concentrated in plants with four key advantages: 1) high levels of information technology, 2) educated workers, 3) greater size, and 4) better awareness.

Read the full post at Harvard Business Review.

Kristina McElheran is a visiting scholar at the MIT Center for Digital Business. 

Erik Brynjolfsson is the Director of the MIT Initiative on the Digital Economy, the Schussel Family Professor at the MIT Sloan School, and Chairman of the MIT Sloan Management Review.

July 18, 2016 | More


Is your meal really gluten free?

Portable sensor detects trace amounts of gluten in food at restaurants. Now MIT spinout Nima — co-founded by CEO Shireen Yates MBA ’13 and Chief Product Officer Scott Sundvor ’12 — has developed a portable, highly sensitive gluten sensor that lets diners know if their food is, indeed, safe to eat.

July 15, 2016 | More

Should the federal government raise the minimum wage to $15?

Should the U.S. government increase the hourly minimum wage from $7.25 to $15? The issue is nuanced: Raising wages would boost employee paychecks, but it could also cause cost-conscious companies to reduce hiring. But with many states taking independent action to increase wages—and with a $15 federal minimum wage “over time” added to the Democratic Party platform last week—we asked three faculty experts to discuss the implications.

Boost jobs through an earned income tax credit, better education, and reduced licensing requirements

Erik Brynjolfsson

We’ve seen median wages stagnate for almost 20 years in the United States. How can we increase them while also boosting jobs?

Here are three ideas: One: expand the Earned Income Tax Credit, or EITC; Two: reinvent education; and Three: reduce occupational licensing.

Here’s how the EITC works. Suppose that someone is earning $12 per hour, and we’d like them to earn $15. With an EITC, they’d get an additional $3 per hour worked from the government. The money to pay for this would come from general tax revenue including income taxes, or ideally increased taxes on carbon dioxide emissions, congestion, and other things we’d like to discourage. One of the benefits of the EITC is that it encourages employers to hire more workers, unlike increasing the minimum wage. That’s important because I’ve been convinced by sociologists like Bob Putnam that work has value beyond the dollars it provides. It’s good for society to keep people engaged in the workforce, and we should be rewarding entrepreneurs and managers who come up with jobs.

Another way to increase both wages and jobs is by increasing the educational levels of our workforce. The wage gap between the most and least educated workers has grown enormously since the 1980s, and better-educated workers also have much lower unemployment rates and higher rates of workforce participation. But it’s not enough to simply do more of the same. We need to reinvent education for an age where machines are increasingly doing cognitive tasks—the second machine age. That means a greater emphasis on skills like teamwork, project management, persuasion, leadership, coaching, and creativity. I believe these can be fostered in the right educational settings.

Last but not least, we need to reduce unnecessary occupational licensing. Over 25 percent of workers now require a license to do their jobs, a five-fold increase since the 1950s. While some licenses are important for safety or other reasons, research has shown that excessive licensing requirements reduce employment and mobility. Requirements vary widely across states: Michigan requires three years of education and training to become a security guard, while most other states require 11 days or less.

Having more people working and earning good wages is good not just for the people we help, but for all of us: People who work are more engaged in community, creating a virtuous cycle. If we do these three things, we’ll be on track to becoming a richer, more engaged, and more dynamic nation.

Erik Brynjolfsson, professor of information technology and director of the MIT Initiative on the Digital Economy

A higher minimum wage, by region

Simon Johnson

I’m in favor of an increased minimum wage, but there is a valid question of “by how much?” Would you lose jobs as a consequence of increasing the minimum wage above some level? Labor economists have studied this carefully, and while there is no consensus, it’s not difficult to support an increase to $12 per hour on the basis of the available evidence.

In areas with higher living costs, a higher minimum wage can make sense—and some states are already planning to phase in $15 per hour over several years.

However, especially in less heavily urbanized areas with a lower cost of living, a higher national minimum wage could have unintended consequences, in the sense of reducing hiring and potentially increasing unemployment.

Simon Johnson, professor of global economics and management

A modest, stepwise increase over time

Thomas Kochan

It’s clearly beyond time to increase the minimum wage. But it’s a political stalemate: It has less to do with economics than politics. Congress has not acted positively on labor legislation for a long time. They block essentially all changes in labor policy, whether it’s increases in wages, updating hourly wage legislation, or in other areas of labor relations law, all of which badly need to be updated.

The stalemate has led states to take action on their own. Half the states have recognized the need for an increase. It’s time to catch up. We’re at $7.25, which is ridiculous.

My view is that $15 is a reasonable target for the future, but we should raise it in steps at the federal level. An immediate jump to $15 would be too abrupt a change. It could have significant negative employment consequences. If we increased it step-by-step with a goal toward $15 over a period of years, it wouldn’t have significant employment effects. We could start at $10, then go up to $15 over four years.

Thomas Kochan, professor of work and employment research, and co-director, MIT Sloan Institute for Work and Employment Research

July 15, 2016 | More


Online marketplace offers debt-relief benefits for new graduates

Laurel Taylor’s startup connects current students and graduates with employers offering student debt repayment. Laurel Taylor, EMBA ’15, was an A student in high school, but the most competitive universities were out of reach for her financially. Nearly two decades later, she is determined to make paying for college easier for future generations.

“The American dream has become an American nightmare for many,” Taylor said, noting that roughly 70 percent of college undergraduates borrow money to pay for their education and, according to the One Wisconsin Institute survey of approximately 67,000 respondents, it takes an average of 20 years for students to pay off those loans. The Institute for College Access & Success cites that students who graduated in 2014 carried an average debt of $28,950.

In response, Taylor founded FutureFuel.io, an online marketplace that enrolls employers offering debt-relief benefits and matches them with scarce talent—the young professionals and students they need to fill jobs in science, technology, engineering, math (STEM), as well as management.

“We are working with fundamental economics. There is massive demand for STEM talent, and an impressive shortfall of available human capital, which creates an exciting opportunity for the talent side of the marketplace to express their voice as to what compensation matters most when considering for whom they will work,” Taylor said, noting that in computer science alone, there are 1 million more openings than there are qualified applicants.

While many companies attract young people with free meals and foosball, Taylor believes many employees would prioritize debt repayment over other perks, and would welcome the idea of a “new normal in compensation.”

“The reality of paying $500-plus per month, every month, for years, is extremely painful,” she said. “Student debt has significant associated financial impact as indebted students save less, buy homes later, and are less prepared for retirement.”

FutureFuel works by connecting employers with tech-savvy talent via a media-rich, mobile-accessible platform. “Think of it as a career fair that’s open 24/7,” Taylor said. Every employer on FutureFuel offers debt repayment on top of existing compensation practices. The program is free for students and users.

“This whole marketplace is about system dynamics,” said Taylor, crediting MIT with giving her the skills—including multi-stakeholder system analysis—needed to establish FutureFuel. The startup recently finalized a seven-figure seed round of funding and begins its soft launch this summer.

Taylor attended MIT Sloan School of Management’s seven-day Entrepreneurship Development Program and then enrolled in the MIT Executive MBA program. “MIT taught me to really think about large-scale impact, and then have the courage to pursue it. Mind. Heart. And hand,” she said.

Taylor has also taken her ideas beyond her startup. She is arranging a meeting this fall between White House officials and industry representatives to discuss a potential policy change that would enable 100 percent of dollars applied to student loans to be pre-tax dollars, similar to 401(k) plans.

MIT students and alumni can participate in the invitation-only soft launch via the iOS app or website.

July 14, 2016 | More


When selling virtual products abroad, don’t put prices on autopilot

If you have a physical product that you want to sell in more than one country, determining the price in different markets can be challenging. You might have to open an office in each country, or at least hire a consultant to assess local demand and analyze the competition.

But if you have a virtual product — say an app for a mobile phone — setting the price for it in different countries is easy. Using the individual exchange rate, the app store instantly will convert the price from your home country to any of the world’s many currencies.

This is, very likely, how prices are set for most smartphone applications sold in different countries. As developers prefer to spend time solving technical challenges, it is all too convenient to leave the responsibility of currency calculations and pricing to Apple or Google or some other virtual marketplace.

But is this the best approach when selling internationally? Is there a more profitable way to price virtual products sold in different currencies?

We explored these questions in an experiment that was both a real-world business trial and an academic exercise. We wanted to see whether we could boost revenue for a virtual product, Root Checker Pro, an app that helps Android users customize their phones. The app is sold through Google Play — the app store for Android devices — in more than 130 countries.

For our experiment, we selected six different currencies — Australian dollar, Canadian dollar, British pound, Mexican peso, Malaysian ringgit and Saudi riyal. Over six months, we charged various prices for the app in each of the currencies to see how sales and revenue would respond.
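A hypothetical sketch of the bookkeeping such an experiment implies is shown below: group the observed sales by currency and tested price, total the revenue, and pick the best-performing price in each market. The records and numbers are invented for illustration and are not the authors’ data.

# Hypothetical sketch of the analysis such an experiment implies: for each currency,
# compare revenue at each tested price point. The data structure and numbers are
# invented for illustration; they are not the authors' actual results.
from collections import defaultdict

# (currency, price_charged, units_sold_that_period) -- made-up sample records
observations = [
    ("AUD", 3.99, 120), ("AUD", 4.99, 105), ("AUD", 5.99, 80),
    ("MXN", 49.0, 300), ("MXN", 59.0, 240), ("MXN", 69.0, 150),
]

# Total revenue observed at each (currency, price) combination
revenue = defaultdict(float)
for currency, price, units in observations:
    revenue[(currency, price)] += price * units

# Keep the revenue-maximizing tested price for each currency
best = {}
for (currency, price), rev in revenue.items():
    if currency not in best or rev > best[currency][1]:
        best[currency] = (price, rev)

for currency, (price, rev) in sorted(best.items()):
    print(f"{currency}: revenue-maximizing tested price {price} (revenue {rev:.0f})")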

Read the full post at TechCrunch.

Joey Conway is creator and owner of Android app Root Checker Pro. He received his MBA from the Sloan School of Management in May 2015.

Catherine Tucker is a Professor of Marketing at MIT Sloan.  She is also Chair of the MIT Sloan PhD Program.

July 14, 2016 | More


Pricing solar so it doesn’t raise everyone’s energy rates

Despite its recent growth, solar power remains an expensive energy alternative and accounts for only a small percentage of electricity generation in Massachusetts. If the state is going to make sharp reductions in carbon emissions as well as enjoy healthy economic growth, solar generation will have to be greatly expanded. But given the already high cost of electricity in Massachusetts, it is critical to obtain solar power as cost-effectively as possible to ensure that all consumers benefit.

In a recent study, an MIT team that I led presented a set of policy changes to make solar more affordable. The study shows that because of current policies, we are paying a good deal more for solar electricity than we need to. Residential solar systems are significantly more expensive per unit of capacity than utility-scale systems — about 70 percent more expensive on a levelized-cost basis. In addition, high levels of residential solar penetration often require substantial investments in distribution systems.

Residential solar continues to grow robustly, nonetheless, in large part because it is more heavily subsidized than utility-scale solar. The main federal subsidy, the investment tax credit, has just been extended for an additional five years. Since the amount of the tax credit is directly proportional to system cost, residential systems, which are more expensive on a per-unit of capacity basis, receive larger tax credits per unit of capacity than megawatt-scale, utility systems. This translates into a higher subsidy per kilowatt-hour of residential solar electricity, paid by taxpayers.
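A back-of-the-envelope sketch of that arithmetic appears below. The system costs, the 30 percent credit rate, the capacity factor, and the system lifetime are assumptions chosen only to show why a credit proportional to cost yields a larger subsidy per kilowatt-hour for the more expensive residential system.

# Back-of-the-envelope illustration of the point above. Capital costs, the 30%
# investment tax credit rate, the capacity factor, and the lifetime are rough
# placeholders for the sake of arithmetic, not data from the study.
ITC_RATE = 0.30          # federal investment tax credit as a fraction of system cost
LIFETIME_KWH_PER_KW = 25 * 365 * 24 * 0.15   # 25 years at a ~15% capacity factor

def subsidy_per_kwh(cost_per_kw):
    """Tax-credit dollars received per lifetime kilowatt-hour generated."""
    return ITC_RATE * cost_per_kw / LIFETIME_KWH_PER_KW

utility_cost = 1500.0                  # assumed $/kW for a utility-scale system
residential_cost = utility_cost * 1.7  # ~70% more expensive per unit of capacity

print(f"utility-scale subsidy: {subsidy_per_kwh(utility_cost):.3f} $/kWh")
print(f"residential subsidy:   {subsidy_per_kwh(residential_cost):.3f} $/kWh")
# Because the credit scales with cost, the pricier residential system receives a
# proportionally larger subsidy for every kilowatt-hour it ultimately produces.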

 

Massachusetts’ net metering policies provide another extra subsidy to qualifying solar. Retail rates, which residential generators receive, are higher than the wholesale rates that utility-scale generators earn. The difference is a per-kilowatt-hour distribution charge that was designed to cover the largely fixed costs of the grid itself — the wires and related equipment. As more residential solar comes on line, the distribution charge must be increased to cover those costs, and the burden of covering them is shifted to all customers without solar. Not only is this spending subsidy dollars wastefully, but the cost shift it entails has already produced an antisolar backlash in some states. Subsidizing the high-cost path to solar power more than lower-cost solar alternatives simply makes no economic or policy sense.

Read the full post at The Boston Globe.

Richard Schmalensee is the Howard W. Johnson Professor of Management Emeritus, Professor of Economics and Dean Emeritus. 

July 13, 2016 | More

Media bias and terrorism coverage

What’s in a word? More precisely, what’s in three words: “radical Islamic terrorist.” These words seem to be imbued with a strange power. By not uttering them, according to various Republicans, President Obama is losing the war on terrorism. Obama, on his part, has declined to use the three words together, insisting that the United States can’t be perceived as at war with the religion of Islam.

And there’s little the media loves more than a war of words – even if this squabble over semantics has, in fact, very little to do with parsing out the reasons for the horrific attack on an Orlando gay club, which left 49 people dead. The shooter, Omar Mateen, did pledge himself to ISIS, but other aspects of his life point to a troubled mind and history of violence.

Repeating the words “radical Islamic terrorist,” pundits note, won’t bring back the dead. It’s not a magical chant that will freeze jihadis in their tracks. And yet the words do have a bizarre power to turn what should be a reasonable debate over gun control, domestic surveillance and effective law enforcement (Mateen had been questioned by the FBI) into fisticuffs over word play.

By playing up this debate, the media is, however, setting itself up for more attacks on perceived bias – bias that depends on the very words that a journalist or columnist or anchor uses or the issues that newspapers or networks choose to focus upon.

Consider: Why are some events deemed important enough for the front pages of newspapers (or the top position on an online news site) while other events land on page 23 or are reached only by scrolling deep down into a web page? Certainly both events are “reported” – the information is there if you want to find it – and yet given readers’ busy lives and the increasing tendency to skim headlines, there’s a real sense that one event is far more important and vital than the other, which thus can be ignored. Whether this is actual “bias” or a matter of logistics is beside the point – the news/information producer is making a decision on what is important to know.

In a democracy, where we need an educated public, this can raise some troubling issues. (Just consider that NCAA basketball news may be covered five times more than the issue of our national debt, which arguably has more of a real impact on our lives.) Where this issue of priority and/or bias becomes increasingly critical is in the coverage of acts of terrorism, which rightly concern Americans and will likely influence their choice of president.

The attack in Orlando has been 24/7, world-wide news – even as it’s unclear whether this was an act of terrorism, a hate crime or  (more likely) a mix of both. But what about attacks in other parts of the world? Various media pundits have sounded an alarm that major news outlets focused on terrorist attacks in Western Europe, such as the Nov. 13 attacks in Paris and the March 22 attacks in Brussels, with 24/7, wall-to-wall coverage while giving scant attention to equally deadly and destructive terrorism attacks in Turkey, Lebanon and Africa.

Just after the Brussels attack, Nidhi Prakash, writing on Fusion, bluntly asked, “Why is the American media mostly ignoring two other terror attacks that happened this month?” She argued that two recent attacks in Turkey, as well as a brutal attack by Boko Haram in Nigeria that killed at least 65 people, did not get the extensive coverage given to Brussels.

While some commentators see ethnic bias or a double standard about the value of human life for Western Europeans versus non-Europeans, I tend to see a matter of proximity at work – Paris and Brussels appear closer and more like American cities like New York, giving American readers the visceral sense that “if it could happen there, it could happen here.” Sadly, the Paris attacks were both dramatic and unusual – you don’t get news like that out of Paris every day unlike, perhaps, the horrors that seem to unfold daily in places like Syria.

As The Guardian noted, “Ideally we should care about all deaths equally, but it’s human nature that we do not. Not out of some crass disregard for the lives of others, but the simple limitations of what we can care about, its proximity to home, and how it grabs our attention.” Unfortunately, I think violent events in war-torn areas become so commonplace that in an odd way it’s not news anymore – just another day of horrible things. The shock disappears.

Having said that, I must add that if there were more detailed coverage of what happens on a daily basis in such places, we would get a more complete picture and be more aware of what has happened. It would rattle our complacency. Over time, we lose a sense of perspective. Many Americans remember the kidnapping of nearly 300 girls in Nigeria by extremists in 2014, but in a random poll, it’s likely people would not recall whether the girls have been released or are still in captivity. (Most of the girls have not been returned.)

Even in the United States, coverage of the Orlando shooting quickly turned into a partisan free-for-all. Reporters faced with making sense of the events are subject to cries of bias merely for bringing up Australia’s successful gun control efforts or for failing to use the words “radical Islamic terrorist.”

Ultimately, with all the information available on the Internet, all good consumers of news have to be proactive in seeking out various sources of information, not just complain about media bias. The media can be an easy target – no question about that.  We have to look beyond the headlines. We have to do a good job of taking the time and effort to search a little more thoroughly, to pick up both sides of an issue to make informed choices.

The media have a responsibility to report the news, but it’s incumbent on us to dig a little deeper and explore to get the whole picture.

Neal Hartman is a Senior Lecturer in Managerial Communication at the MIT Sloan School of Management.

July 12, 2016 | More

Scaling customer development: Crowdsourcing the research

In the seminal book “The Four Steps to the Epiphany,” Steve Blank introduces the concept of “customer development” ― get out of the building and interview customers. While this is not a new concept ― product people with user-centered design training have always done this ― this is a huge development in startup-land, where technology used to run amuck.

Challenges with sample size

There is one small problem with customer development. It relies on qualitative research techniques like detailed interviews and observation, which are time consuming and costly.  Additionally, these techniques involve deep interactions with a few individuals, and you always run the risk of talking to the wrong people about the wrong problems.

How do you know whether you can trust your results? One way is to increase sample size – but given each interaction can take a couple of hours all-in, trying to get to 100 conversations quickly becomes daunting.

Later on in the product’s life cycle, prototypes will need to be tested.  Again, qualitative research techniques like usability benchmarks and observation are the best way to start.  Right away we run into the same sample size problem.

A two-step approach: In-person, then remote

There is a way forward that will start you off with the qualitative insights you need, and end up with a big enough pool of data to make it credible. Start with in-person sessions, then scale it up with remote sessions.

Here are two quick examples.

  • From interviews to an on-line survey for problem research
    • Start with 20 interviews to validate hypotheses
    • Develop a solid persona for these target customers
    • Come up with 5-10 questions you want a larger group to answer
    • Run an on-line survey with 500 people and chart the results
  • From in-person testing to crowdsourced testing for solution research
    • Start with 5 in-person product testing sessions
    • Fix glaring problems
    • Check your homework with another 5 in-person sessions
    • Now engage a crowdsourced testing service like User Testing or User Think, or run your own DIY testing using Amazon Mechanical Turk’s on-demand workforce.

Crowdsourced research platforms for product research

There is a reason why crowdsourced testing is so attractive.  For a very reasonable cost (under $100 per test in some cases), these platforms connect teams with large panels of on-demand testers, solving the subject recruitment problem and saving time and money.

There is one thing that new product teams need to keep in mind. Crowdsourced testing platforms are designed for solution research: you have to have a product for them to test first. You still have to lead with problem research the old-fashioned way. You have to do both – or fall into what Bill Aulet, Managing Director of the Martin Trust Center for MIT Entrepreneurship, calls “our dangerous obsession with the MVP”.

Doing without the two-way debrief

Another thing you get from in-person qualitative research, and give up when you crowdsource the testing, is the two-way debrief: a crowdsourced tester provides feedback, but you don’t get to debrief them and learn more through the back and forth.

Let’s say you are testing your product with a user and she gets stuck. In person, you can help her move past that task to complete the rest on the list, and later you can probe deeply into what happened during the debrief. None of this happens in a crowdsourced test – if the user gets stuck, she stays stuck. Crowdsourced feedback is still valuable – just not as a first line of defense.

When to go face to face, and when to crowdsource

There is a time and place for every research technique. In-person sessions are best when the uncertainty is high. Crowdsourced research is best for testing well-understood things. Both have a place in a robust, ongoing research program.

To recap: Lead with in-person sessions, and then transition to a scalable, faster and cheaper alternative.  That is the path to the epiphany.

Elaine Chen is a startup veteran and product strategy and innovation consultant who has brought numerous hardware and software products to market. As Founder and Managing Director of ConceptSpring, she works with executives and leaders of innovative teams to help them set up and run new product innovation initiatives with the speed and agility of a startup. She is also a Senior Lecturer at the MIT Sloan School of Management. Follow her at @chenelaine.

July 12, 2016 | More

Engineering


Avoiding stumbles, from spacewalks to sidewalks

Video of astronauts tripping over moon rocks can make for entertaining Internet viewing, but falls in space can jeopardize astronauts’ missions and even their lives. Getting to one’s feet in a bulky, pressurized spacesuit can consume time and precious oxygen reserves, and falls increase the risk that the suit will be punctured.

Most falls happen because spacesuits limit astronauts’ ability to both see and feel the terrain around them, so researchers from MIT’s Department of Aeronautics and Astronautics (AeroAstro) and the Charles Stark Draper Laboratory in Cambridge, Massachusetts, are developing a new space boot with built-in sensors and tiny “haptic” motors, whose vibrations can guide the wearer around or over obstacles.

This week, at the International Conference on Human-Computer Interaction, the researchers presented the results of a preliminary study designed to determine what types of stimuli, administered to what parts of the foot, could provide the best navigation cues. On the basis of that study, they’re planning further trials using a prototype of the boot.

The work could also have applications in the design of navigation systems for the visually impaired. The development of such systems has been hampered by a lack of efficient and reliable means of communicating spatial information to users.

“A lot of students in my lab are looking at this question of how you map wearable-sensor information to a visual display, or a tactile display, or an auditory display, in a way that can be understood by a nonexpert in sensor technologies,” says Leia Stirling, an assistant professor of AeroAstro and an associate faculty member at MIT’s Institute for Medical Engineering and Science, whose group led the work. “This initial pilot study allowed Alison [Gibson, a graduate student in AeroAstro and first author on the paper] to learn about how she could create a language for that mapping.” Gibson and Stirling are joined on the paper by Andrea Webb, a psychophysiologist at Draper.

What, where, and when

For the pilot study, Gibson developed a device that spaced six haptic motors around each of a subject’s feet — one motor each at the heel, big toe, and instep, and three motors along the outer edge of the foot. The intensity of the motors’ vibrations could be varied continuously between minimum and maximum settings.

A subject placed his or her feet in the device while seated before a computer. Software asked the subjects to indicate when they felt vibrations and at what locations on the foot. Tests were conducted under two conditions. In the first, the subjects focused on the stimuli to their feet. In the second, they were distracted by a simple cognitive test: The software would flash a random number on the screen, and the subject would count upward from that number by threes. The vibration of one of the motors would interrupt the counting, and the subject would report on the sensation.

Each subject was asked to report on more than 500 individual stimuli, divided between the two conditions.

The researchers had envisioned that variations in the intensity of the motors’ vibrations could indicate distance to obstacles, as measured by sensors built into the boot. But they found that when distracted by cognitive tests, subjects had difficulty identifying steady increases in intensity. And even when they were attending to the stimuli, the subjects still had difficulty identifying decreases in intensity.

Subjects also had difficulty distinguishing between the locations of stimuli on the outer edge of the foot. Strangely, in 20 percent of cases, distributed across all study participants, subjects were entirely unable to discern low-intensity stimuli to the middle location on the outer edge of the right foot.

Boot-building

On the basis of the study results, Gibson is developing a boot with motors at only three locations: at the toe, at the heel, and toward the front of the outside of the foot — away from the middle location where stimuli sometimes didn’t register.

Stimuli will not be varied continuously, but they will jump from low to high intensity when the wearer is at risk of colliding with an obstacle. The high-intensity stimuli will also be pulsed, to help distinguish them from the low-intensity ones.
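As a rough illustration of that cue design (a hypothetical sketch only; the threshold and return values below are invented, not taken from the researchers’ prototype), the mapping from sensed obstacle distance to a motor cue could look like this:

```python
# Hypothetical sketch of the cue mapping described above; the threshold is invented.
def haptic_cue(distance_to_obstacle_m, collision_threshold_m=0.5):
    """Return (intensity, pulsed) for one boot motor, given sensed obstacle distance."""
    if distance_to_obstacle_m <= collision_threshold_m:
        return ("high", True)   # risk of collision: strong, pulsed vibration
    return ("low", False)       # obstacle sensed but not imminent: steady low-level cue

print(haptic_cue(0.3))  # ('high', True)
print(haptic_cue(2.0))  # ('low', False)
```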

In principle, the motor at the side of the foot could help guide the user around obstacles, but the first trial of the boot will concentrate entirely on the problem of stepping over obstacles of different heights. The researchers will also be evaluating the haptic signals in conjunction with, and separately from, visual signals, to determine the optimal method of conveying spatial information.

“Trying to provide people with more information about the environment — especially when not only vision but other sensory information, auditory as well as proprioception, is compromised — is a really good idea,” says Shirley Rietdyk, a professor of health and kinesiology at Purdue University who studies the neurology and biomechanics of falls. “From my perspective, [this work could be useful] not only for astronauts but for firemen, who have well-documented issues interacting with their environment, and for people with compromised sensory systems, such as older adults and people with disease and disorders.”

Rietdyk points out that there’s some prior work on using vibrating insoles to alert people with impaired proprioception — physical self-awareness — when, for instance, they’re beginning to tip off-balance. “The big question is whether people will attend to it, which is why I really like the work that they did,” she says. “They came all the way back to the starting point and said, ‘Okay, where are people going to detect this most, and what is least likely to be compromised by attention-demanding tasks?’ Going back to the start, rather than diving into the deep end of the pool, was, I thought, a really good approach.”


July 22, 2016 | More


Borrowing from pastry chefs, engineers create nanolayered composites

Adapting an old trick used for centuries by both metalsmiths and pastry makers, a team of researchers at MIT has found a way to efficiently create composite materials containing hundreds of layers that are just atoms thick but span the full width of the material. The discovery could open up wide-ranging possibilities for designing new, easy-to-manufacture composites for optical devices, electronic systems, and high-tech materials.

The work is described this week in a paper in Science by Michael Strano, the Carbon P. Dubbs Professor in Chemical Engineering; postdoc Pingwei Liu; and 11 other MIT students, postdocs, and professors.

Materials such as graphene, a two-dimensional form of pure carbon, and carbon nanotubes, tiny cylinders that are essentially rolled-up graphene, are “some of the strongest, hardest materials we have available,” says Strano, because their atoms are held together entirely by carbon-carbon bonds, which are “the strongest nature gives us” for chemical bonds to work with. So, researchers have been searching for ways of using these nanomaterials to add great strength to composite materials, much the way steel bars are used to reinforce concrete.

The biggest obstacle has been finding ways to embed these materials within a matrix of another material in an orderly way. These tiny sheets and tubes have a strong tendency to clump together, so just stirring them into a batch of liquid resin before it sets doesn’t work at all. The MIT team’s insight was in finding a way to create large numbers of layers, stacked in a perfectly orderly way, without having to stack each layer individually.

Although the process is more complex than it sounds, at the heart of it is a technique similar to that used to make ultrastrong steel sword blades, as well as the puff pastry that’s in baklava and napoleons. A layer of material — be it steel, dough, or graphene — is spread out flat. Then, the material is doubled over on itself, pounded or rolled out, and then doubled over again, and again, and again.

With each fold, the number of layers doubles, thus producing an exponential increase in the layering. Just 20 simple folds would produce more than a million perfectly aligned layers.

Now, it doesn’t work out exactly that way on the nanoscale. In this research, rather than folding the material, the team cut the whole block — itself consisting of alternating layers of graphene and the composite material — into quarters, and then slid one quarter on top of another, quadrupling the number of layers, and then repeating the process. But the result was the same: a uniform stack of layers, quickly produced, and already embedded in the matrix material, in this case polycarbonate, to form a composite.
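A quick back-of-the-envelope calculation (a sketch for illustration, not code from the paper) shows why so few operations are needed: each fold doubles the layer count, while each cut-into-quarters-and-stack step quadruples it.

```python
def layers_after(steps, factor):
    """Aligned layers after repeated stacking operations, starting from one layer.
    factor=2 models folding in half; factor=4 models cutting into quarters and stacking."""
    return factor ** steps

print(layers_after(20, 2))  # 1,048,576 -- "just 20 simple folds" exceeds a million layers
print(layers_after(10, 4))  # 1,048,576 -- quartering reaches the same count in half the steps
```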

In their proof-of-concept tests, the MIT team produced composites with up to 320 layers of graphene embedded in them. They were able to demonstrate that even though the total amount of the graphene added to the material was minuscule — less than 1/10 of a percent by weight — it led to a clear-cut improvement in overall strength.

“The graphene has an effectively infinite aspect ratio,” Strano says, since it is infinitesimally thin yet can span sizes large enough to be seen and handled. “It can span two dimensions of the material,” even though it is only nanometers thick. Graphene and a handful of other known 2-D materials are “the only known materials that can do that,” he says.

The team also found a way to make structured fibers from graphene, potentially enabling the creation of yarns and fabrics with embedded electronic functions, as well as yet another class of composites. The method uses a shearing mechanism, somewhat like a cheese slicer, to peel off layers of graphene in a way that causes them to roll up into a scroll-like shape, technically known as an Archimedean spiral.

That could overcome one of the biggest drawbacks of graphene and nanotubes, in terms of their ability to be woven into long fibers: their extreme slipperiness. Because they are so perfectly smooth, strands slip past each other instead of sticking together in a bundle. And the new scrolled strands not only overcome that problem, they are also extremely stretchy, unlike other super-strong materials such as Kevlar. That means they might lend themselves to being woven into protective materials that could “give” without breaking.

One unexpected feature of the new layered composites, Strano says, is that the graphene layers, which are extremely electrically conductive, maintain their continuity all the way across their composite sample without any short-circuiting to the adjacent layers. So, for example, simply inserting an electrical probe into the stack to a certain precise depth would make it possible to uniquely “address” any one of the hundreds of layers. This could ultimately lead to new kinds of complex multilayered electronics, he says.

This paper “describes a rather unique and creative way to make composites using large area graphene films,” says Angelos Kyrlidis, research and development manager for graphenes at Cabot Corporation, who was not involved with this work. He adds, “This work assembles the composites from chemical vapor deposition graphene, where a very high aspect ratio can be obtained, while still maintaining many of the features and properties of the single layer graphene. … It would be quite interesting to evaluate in a broader range of polymers, such as thermosets and also other thermoplastics.”

The research was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies at MIT.


July 21, 2016 | More


Scientists program cells to remember and respond to series of stimuli

Synthetic biology allows researchers to program cells to perform novel functions such as fluorescing in response to a particular chemical or producing drugs in response to disease markers. In a step toward devising much more complex cellular circuits, MIT engineers have now programmed cells to remember and respond to a series of events.

These cells can remember, in the correct order, up to three different inputs, but this approach should be scalable to incorporate many more stimuli, the researchers say. Using this system, scientists can track cellular events that occur in a particular order, create environmental sensors that store complex histories, or program cellular trajectories.

“You can build very complex computing systems if you integrate the element of memory together with computation,” says Timothy Lu, an associate professor of electrical engineering and computer science and of biological engineering, and head of the Synthetic Biology Group at MIT’s Research Laboratory of Electronics.

This approach allows scientists to create biological “state machines” — devices that exist in different states depending on the identities and orders of inputs they receive. The researchers also created software that helps users design circuits that implement state machines with different behaviors, which can then be tested in cells.

Lu is the senior author of the new study, which appears in the 22 July issue of Science. Nathaniel Roquet, an MIT and Harvard graduate student, is the paper’s lead author. Other authors on the paper include Scott Aaronson, an associate professor of electrical engineering and computer science, recent MIT graduate Ava Soleimany, and recent Wellesley College graduate Alyssa Ferris.

Long-term memory

In 2013, Lu and colleagues designed cell circuits that could perform a logic function and then store a memory of the event by encoding it in their DNA.

The state machine circuits that they designed in the new paper rely on enzymes called recombinases. When activated by a specific input in the cell, such as a chemical signal, recombinases either delete or invert a particular stretch of DNA, depending on the orientation of two DNA target sequences known as recognition sites. The stretch of DNA between those sites may contain recognition sites for other recombinases that respond to different inputs. Flipping or deleting those sites alters what will happen to the DNA if a second or third recombinase is later activated. Therefore, a cell’s history can be determined by sequencing its DNA.

In the simplest version of this system, with just two inputs, there are five possible states for the circuit: states corresponding to neither input, input A only, input B only, A followed by B, and B followed by A. The researchers also designed and built circuits that record three inputs, in which 16 states are possible.
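Those state counts follow from counting the possible histories: every ordered sequence of distinct inputs seen so far, including the empty one. A short sketch (illustrative only, assuming each input registers at most once) reproduces the figures in the article:

```python
from math import factorial

def num_states(n_inputs):
    """Distinguishable input histories: ordered sequences of distinct inputs,
    including the empty history, i.e. the sum over k of n!/(n-k)!."""
    return sum(factorial(n_inputs) // factorial(n_inputs - k)
               for k in range(n_inputs + 1))

print(num_states(2))  # 5: none, A, B, A-then-B, B-then-A
print(num_states(3))  # 16
```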

For this study, the researchers programmed E. coli cells to respond to substances commonly used in lab experiments, including ATc (an analogue of the antibiotic tetracycline), a sugar called arabinose, and a chemical called DAPG. However, for medical or environmental applications, the recombinases could be re-engineered to respond to other conditions such as acidity or the presence of specific transcription factors (proteins that control gene expression).

Gene control

After creating circuits that could record events, the researchers then incorporated genes into the array of recombinase binding sites, along with genetic regulatory elements. In these circuits, when recombinases rearrange the DNA, the circuits not only record information but also control which genes get turned on or off.

The researchers tested this approach with three genes that code for different fluorescent proteins — green, red, and blue — constructing a circuit that expressed a different combination of the fluorescent proteins for each identity and order of two inputs. For example, cells carrying this circuit that received input A followed by input B fluoresced red and green, while cells that received B before A fluoresced red and blue.

Lu’s lab now hopes to use this approach to study cellular processes that are controlled by a series of events, such as the appearance of cytokines or other signaling molecules, or the activation of certain genes.

“This idea that we can record and respond to not just combinations of biological events but also their orders opens up a lot of potential applications. A lot is known about what factors regulate differentiation of specific cell types or lead to the progression of certain diseases, but not much is known about the temporal organization of those factors. That’s one of the areas we hope to dive into with our device,” Roquet says.

For example, scientists could use this technique to follow the trajectory of stem cells or other immature cells into differentiated, mature cell types. They could also follow the progression of diseases such as cancer. A recent study has shown that the order in which cancer-causing mutations are acquired can determine the behavior of the disease, including how cancer cells respond to drugs and develop into tumors. Furthermore, engineers could use the state machine platform developed here to program cell functions and differentiation pathways.

The MIT study represents “a new benchmark in the use of living cells to perform computation and to record information,” says Tom Ellis, a senior lecturer at the Centre for Synthetic Biology at Imperial College London.

“These recombinase-based state machines open up the possibility of cells being engineered to become recorders of temporal information about their environment, and they can be built to lead the cells to take actions in response to the appropriate string of inputs,” says Ellis, who was not involved in the research. “It’s an excellent paper that puts these recombinase-based switches to good use.”


July 21, 2016 | More


Reducing wait times at the doctor’s office

Ever waited entirely too long at your doctor’s office for an appointment to start? The long wait may soon be over: An MIT spinout’s schedule-optimizing software that gets more patients seen more quickly could soon be used by tens of thousands of health care providers across the country, after a recent acquisition by a major health care services company.

Arsenal Health has developed a schedule-optimization service for health care providers that began as a pitch at MIT Hacking Medicine, a hackathon that aims to solve problems in health care. The service analyzes scheduling and other data to predict which patients might not show up to appointments. Health-care providers can then double-book over those potential no-show patients or prioritize their outreach as needed.

Double-booking is often a necessary evil for health-care providers who see high volumes of patients. Inevitably, some of those patients cancel too late or don’t show up to appointments, which wastes time and money. Front-end staff can get ahead of this by double-booking, allowing them to move patients around to make up for the no-shows and enabling more patients to be seen overall. But if both parties show up for a single appointment slot, one person waits longer.

Arsenal Health’s service, on the other hand, is a type of “targeted double-booking” that can predict that one patient won’t show, much more accurately than when administrative staff double-book manually, says co-founder and former CEO Chris Moses ’10, now director of product innovation at athenahealth. “This means improved patient access and availability, and improved provider productivity by making their scheduling more open,” he says.

Arsenal Health was acquired in April by athenahealth, which provides cloud-based, network-enabled services and apps for more than 78,000 health-care providers nationwide. Currently, more than 800 of those providers use Arsenal Health’s technology.

Win-win-win

Arsenal Health’s solution gathers and analyzes clinical, administrative, and scheduling data to find trends in when and why patients cancel. (Such factors as potential sickness or bad weather, for instance, may be out of a patient’s control.) Using that information, the software applies predictive modeling to determine whether a given patient will or won’t show up on a particular day and time.

When front-desk staff are searching for open appointments in athenahealth, the scheduling tool flags the patients that might not show, so they can double-book those slots, called “smart open slots.” It also lists potential no-shows in a call list in a web app so that front-desk staff can reach out to those targeted patients, which has been shown to decrease no-show rates, Moses says.
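For illustration only, the prediction step could be prototyped as a simple classifier trained on historical appointment records; the features, data, and model below are hypothetical and are not Arsenal Health’s actual system.

```python
# Hypothetical no-show predictor; feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [days_since_booking, patient_prior_no_show_rate, is_monday, bad_weather_forecast]
X = np.array([
    [30, 0.50, 1, 1],
    [ 2, 0.00, 0, 0],
    [14, 0.25, 0, 1],
    [ 1, 0.10, 1, 0],
    [45, 0.60, 0, 1],
    [ 7, 0.05, 0, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = patient did not show up

model = LogisticRegression().fit(X, y)

# Appointments with a high predicted no-show probability get flagged so staff
# can double-book the slot or add the patient to an outreach call list.
upcoming = np.array([[21, 0.40, 1, 1]])
print(model.predict_proba(upcoming)[0, 1])  # predicted no-show probability
```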

According to Moses, manually double-booking patients without Arsenal Health is about 20 to 30 percent accurate, meaning both patients show up to an appointment about 70 to 80 percent of the time. The Arsenal Health solution, however, is about 75 percent accurate, with both patients showing up only about a quarter of the time, which cuts waiting time and improves patient satisfaction, Moses says.

In some cases, Arsenal Health’s service has also increased the number of new patients seen by providers, Moses says. On average, he says, a provider using Arsenal Health’s schedule optimization gains a couple more patients each month, and some have gained an additional 30 or 40 patients each month. “That’s important for medical groups and primary-care practices that work in these hospital systems, because new patients equals more revenue for the hospitals, while you’re improving patient experience,” Moses says.

By decreasing no-show rates and increasing new patient numbers, the software has boosted revenue of participating providers by roughly $700 per month, according to Arsenal Health. “So it’s a win-win-win on all sides,” Moses says.

Hack to acquisition

Two years after graduating from MIT, Moses attended the 2012 MIT Hacking Medicine event, which was co-organized by a friend. During the weekend-long hackathon, MIT postdoc Gabriel Belfort pitched the no-show problem that his wife, a pediatrician, often complained about. “I thought that was one of the biggest problems, one of the realest needs I heard that weekend,” Moses says.

Moses and Belfort joined up with another physician, Donald Misquitta, who was also a data scientist, and MIT PhD engineering student Andrea Ippolito, a Hacking Medicine co-founder. They formed a team dedicated to developing commercial software to solve the no-show problem, with Moses becoming the founding full-time employee.

That summer, the four-person team was accepted into the Founders’ Skills Accelerator at the Martin Trust Center for MIT Entrepreneurship, “which was an amazing opportunity,” Moses says. Among other things, the program provided a “board” of directors, made up of seasoned entrepreneurs, to whom the team submitted milestones on a monthly basis. “The board would say, ‘We’re not giving you $20,000 upfront. You have to earn it,’” Moses says. “That was a really cool structure that helped create accountability really early on.”

In the summer of 2012, the team entered the Healthbox startup accelerator in Boston, where they partnered with Steward Health Care, a major hospital system in Massachusetts. As luck would have it, Steward was an athenahealth enterprise customer, and agreed to work with Arsenal Health and make their data available for training the initial predictive models. “It was a match made in heaven,” Moses says.

The team looked through five years’ worth of scheduling data from 17 offices across several Steward hospitals, and found that of 700,000 appointments, there were 30,000 no-shows — which confirmed there was a real problem, Moses says. Using athenahealth’s data, they built their first prototype that predicted future no-shows at Steward pilot hospitals. After that, Steward became Arsenal’s first paying customer. “When their paycheck hit the bank, we became immediately profitable given our small team size,” Moses says.

In 2014, athenahealth launched its “More Disruption Please” accelerator program, recruiting Arsenal Health as its first investment — which ultimately led to a strong partnership and the acquisition in April. Now Moses is working with a team to implement the scheduling service across athenahealth’s entire network. “The growth opportunity is amazing,” Moses says. “That’s something we could never do as an independent company.”

The acquisition also means more development on predictive modeling and more research into what really causes patient no-shows. “[We’ll] look at the problems across athenahealth’s customer base — whether they’re internal efficiency problems or external customer problems,” Moses says. “Using data to better improve customers’ care.”


July 21, 2016 | More


Growing season

One of MIT’s strengths is bringing together business, technology, government, and academic leaders, as it does at the Institute’s Professional Education Short Programs. This spring, a new five-day course — Agriculture, Innovation and the Environment — showcased innovative technologies and strategies to make the agriculture industry more productive, and attracted a score of professionals from all over the world. The participants engaged in deep conversations with the instructors and each other, brainstorming new initiatives and ideas to take back to their companies and organizations.

The timing is opportune. Experts agree that by 2050 the earth’s population will likely reach 9.5 billion people, requiring an 80 percent increase in agricultural production. But how will this goal be achieved?

The instructors emphasized that people will need to work better together across disciplines to create the type of change necessary to make agriculture more efficient, effective, scalable, and sustainable, and to use fundamental understanding to create new solutions.

“We are proud to have introduced in our portfolio this year a highly interactive, practitioner-oriented course that harkens back to the founding days of MIT when serving ‘the advancement of agriculture’ was included in its core mission,” said Bhaskar Pant, executive director of MIT Professional Education. “It is gratifying to see that the course addressing the food production growth challenges of the 21st century elicited the interest of such a wide array of global professionals engaged in the field.”

The faculty director, Department of Civil and Environmental Engineering (CEE) head and McAfee Professor of Engineering Markus J. Buehler, said he and co-director Edmund W. Schuster designed the curriculum to include multiple MIT faculty and initiative heads, government leaders, and industry representatives to give a wide and deep view of agricultural productivity issues. Most of the days’ presentations were lecture-driven; however, the program also included plenty of time for classroom discussion, group work, lab demonstrations, and hands-on experiments.

Course participants — some with science and engineering backgrounds, others with data analytics, economics, policy, or entrepreneurial interests — came from as far away as South America and the United Kingdom, as well as from across the United States, to learn more.

Marcio Aurelio Soares Santos of Brazil is general manager of a multinational company that produces products and services for the management of irrigation water and road infrastructure. He said he attended the short course to better understand the complex issues behind the use of natural resources.

“There are some boundaries that need to be respected when you have controversy about natural resources use,” he said. “When you have controversy, you often don’t have a clear understanding of these things. By coming to MIT, I can put the science in cooperation together with the arguments, and that means a lot when addressing some of these questions. A broad view gives you the confidence to move forward despite uncertainty.”

Getting down and dirty

Buehler opened the course with remarks that set the stage for key course takeaways. He introduced guest speaker Ken Sudduth, research agricultural engineer of the U.S. Department of Agriculture’s (USDA) Agricultural Research Service, who gave an overview of the agriculture industry and then asked the group to imagine themselves as farmers and what farming could be someday.

“Imagine remote and in situ sensing of influential soil factors before you even begin to plant,” Sudduth began. “Next, imagine superimposing weather estimates and field topography, and using models and agri-informatics to generate maps of the genetic traits needed based on environmental factors plus yield and quality targets. Now go ahead and plant your crop while applying beneficial microbes and time release fertilizer. Remote sensing of real-time crop status and real-time adjustments can be obtained through nanotechnology breakthroughs. Get real-time sensing of product ‘ripeness’ based on weather forecast and market targets. Automate your harvest and use models to begin planning for best use next year taking into account field conditions, global markets, forecast weather, and environmental goals.”

Sudduth’s challenge highlighted the promise of agricultural advancements, accounting for climate variability and the opportunities to optimize the performance of genetic resources under varying environmental conditions.

Data-driven decision-making and technology improvements also played heavily in Sudduth’s ideal. Many of his points were expanded upon by additional speakers during the week. MIT faculty members, in particular, talked about ways small innovations in the lab often lead to the creation of systems with large-scale, tangible impacts. This topic — which CEE calls “big engineering” — also ties into the MIT.nano project, a new MIT facility opening in 2018 where researchers will study nanotechnology, including microscale innovations to improve plant health and crop production, and find ways to scale up nanotechnology into innovations that benefit society.

Tip of the iceberg, but not the lettuce variety

Professor Dennis McLaughlin, the H.M. King Bhumibol Professor in CEE, believes that strategies for increasing food production must consider environmental impacts if the resources needed to grow crops are to be preserved. He said the so-called Green Revolution from 1930-60 expanded the use of hybrid seeds, synthetic fertilizers, and irrigation. This fueled increased output but also had a dramatic impact on the natural environment, including the carbon, nitrogen, and phosphorous cycles. A new Green Revolution will be needed to simultaneously achieve increased demand and environmental sustainability.

“We have sufficient resources to meet reasonable demand for food. The real question is whether our use of these resources will be sustainable,” says McLaughlin. “We have a range of options for increasing production, but often poor understanding of their performance and impacts. Climate change adds further uncertainty. We need better data and more experiments, guided by a conceptual framework that considers crop production, costs, and the environment.”

McLaughlin told the class that there are five global changes that can be expected to impact food production: higher maximum temperatures during the growing season; increased variation in water availability; increased atmospheric carbon dioxide and ozone; changes in ocean acidification and temperature; and poorly understood interactions among climate, nutrient availability, and losses to pests. All could have positive or negative impacts, depending on their magnitude and location.

CEE Professor Martin Polz and Associate Professor Dan Cziczo emphasized the important roles climate, weather, and microbiology play in agricultural productivity.

Polz painted the big picture by describing challenges with microbial abundance and diversity, and their future threats to both agriculture and people. He talked about the overuse of antibiotics in some countries and ways emergent pathogens like bacteria, viruses, fungi, and protozoa often co-evolve with their hosts.

“We need to better understand interactions of plants and animals with their microbiomes,” Polz said, adding there are opportunities to enhance positive interactions and suppress negative ones such as targeting pathogens with phage, viruses that are specific to bacteria and which can be used to fight infections.

CEE Associate Professor Ruben Juanes, director of the Henry L. Pierce Laboratory for Infrastructure Science and Engineering, gave a deep-dive, technical review of his lab’s research at the intersection of water, soil, and infrastructure. His work applies theoretical, computational, and experimental research to energy and environment-driven geophysical problems, including carbon sequestration, methane hydrates, and water infiltration in soil.

The role of smart systems, climate, fluid dynamics, and biomaterials engineering

The second day began with a Smart(er) Agriculture presentation by Daniel Schmoldt, the U.S. Department of Agriculture’s National Institute of Food and Agriculture (USDA NIFA) national program leader. He spoke about advancing precision agriculture toward just-in-time-and-place farming, including the use of sensors, cyber-physical systems, robotics, and big data. He used the example of growing better blackberries and raspberries to showcase ways “smart systems” produce results through new sensing and measuring technologies, screening of plant genotypes and phenotypes, managing the resulting big data for improved insight, breeding desirable crop varieties, and enhancing harvesting and distribution systems.

Daniel Griffith Anderson, the Samuel A. Goldblith Professor of Applied Biology, Chemical Engineering and Health Sciences and Technology at MIT, followed with a technical talk about RNAi and biological frontiers.

CEE professors had the audience’s attention for the rest of the day, first with a tag team: postdoc Ross E. Alter of the Eltahir research group led a technical presentation on irrigation and rainfall, and then Lydia Bourouiba, the Esther and Harold E. Edgerton Career Development Assistant Professor, described her research on disease transmission and fluid mechanics. Bourouiba enthralled the class with her graphic research videos showing detailed slow-motion dynamics of fluid fragmentation and interfacial flows, and later with demonstrations and hands-on activities in her lab, the Fluid Dynamics of Disease Transmission Laboratory.

What does fluid fragmentation have to do with agriculture? Fluid fragmentation is the fundamental physics that governs droplet formation from bulk fluids. Bourouiba specializes in investigating how this process controls the ways pathogens are encapsulated, emitted, and transported in droplets into the environment, and then infiltrate beyond the immediate area of a contaminated plant or field. She relates this fluid mechanics to the ways rain droplets could distribute and disperse pathogens from one plant to another in a crop field. The implication is not to ask the farmer to space planted crops farther apart, but rather to exploit the inherent mechanical and surface properties of the crops to create natural defenses around the plant, such as planting complementary crops as buffers, or optimizing and tailoring irrigation and spray drops to the crop’s mechanical properties to minimize crop disease and foodborne disease amplification.

Extending medicine and materials research to agriculture

Mid-week sessions featured a thought-provoking bioengineering and biomaterials presentation about medicine and agriculture by Robert S. Langer, the David H. Koch Institute Professor; professor of chemical engineering, biological engineering, and mechanical engineering; and head of the Langer Lab. His lab’s research focuses on innovation in drug development, nanoscale drug delivery, novel biomaterials, tissue engineering, and stem cells.

Early in his research career studying the chemistry and biology of cartilage, Langer was frustrated with the lack of an effective delivery system to diffuse large molecules slowly through polymers. As an early leader of cross-disciplinary study, he eventually set out to create a solution himself, which he later patented and which led to the health care industry’s commercial application of long-acting microsphere injection technology. It was the first in a series of 1,100 patents Langer has filed and received, and it led him to extend his microscale research and its analogs to medicine, pharmaceuticals, and biotechnology innovation.

“Chemistry and biology used to be separate professions, but now researchers need both skills and a broader science and engineering understanding to solve many of the world’s most critical problems,” he said. “The program was great and I really liked the students’ enthusiasm.”

“I had no idea that biochemistry would have such an impact, or materials study would have such an impact, in agriculture,” said class participant Ambre Soubiran of France, who recently quit her investment banking career and enrolled in the short program to advance her interest in agriculture and animal feed.

“I have a master’s degree in theoretical mathematics, but have never applied it to science,” she said. “It was pretty amazing to see all the implications of research being done in the medical world and in the materials world that could be applied to agriculture.”

Langer’s talk was followed by Roger Beachy, chief scientific officer of Indigo Agriculture and the first director of USDA NIFA. Indigo is a U.S.-based startup focusing on microbiology in agriculture. Specifically, the company is working to identify missing microbiomes that occur within plants — such as those that make them resilient to environmental stresses like heat or drought — and then reintroduce the microbiome through a seed treatment that makes the plant healthier and, ultimately, improves the yield. This solution was created based on insights made through years of discovery and research on the human microbiome.

Indigo’s solutions are designed to help farmers sustainably feed the planet: “Modern seeds have [far fewer] microbes, with less diversity, compared to their ancestors,” said Beachy. “We are working to restore this lost function and are planning to release our first commercial product this year.”

Low- and high-tech ingenuity

Benedetto Marelli, the Paul M. Cook Career Development Assistant Professor in CEE, and Professor John Lienhard, director of the Jameel World Water and Food Security lab (J-WAFS) at MIT, rounded out this day’s sessions. Marelli presented on nature-inspired materials for use in agriculture and food preservation, and Lienhard gave an overview of his organization’s leadership role in solving agricultural challenges using low and high tech strategies.

Marelli asked whether the class knew how structural biopolymers such as collagen, silk, and keratin — the building materials of life — are made. He explained that material, structure, form, and function are all correlated, and that these materials grow by controlled assembly. The color of butterfly wings, for example, is produced by light passing through the wing’s nanostructure and getting trapped in photonic crystals to create different hues. This understanding of materials and engineering informs the work Marelli does in his lab, including using silk fibroins to make a bioprinted label that, when placed on a package of meat, changes color to detect and warn of contamination, and using an edible silk coating that can be applied to highly perishable food to preserve it longer. Later that day, Marelli took the attendees on a tour of his lab, including a look at his control and experimental strawberries, the latter coated with the silk preservation material.

The presenters were clear that low-tech solutions, including agriculture policy and implementation strategies, are just as important as high-tech solutions to transform agricultural development. Many of these low-tech examples were highlighted by J-WAFS director John Lienhard. The lab — named for Abdul Latif Jameel, father of CEE alumnus Mohammed Abdul Latif Jameel ’78 — was established in 2014 as an Institute-wide effort to bring MIT’s expertise to the challenge of the world’s diverse needs for water and food in the context of population growth, climate change, urbanization, and development. Lienhard spoke about the many ways J-WAFS is helping mitigate global food waste and foodborne illness; preparing countries for the impacts of climate change on food security; utilizing management best practices and economics to turn nascent ideas into new businesses; developing more productive food systems and processes; and extending water supplies to the underserved.

In addition to a range of policy, business, and economic solutions, J-WAFS supports research and commercialization around advanced technologies. For instance, Lienhard mentioned J-WAFS’ support of an innovative water sampling technique that allows faster water testing in remote areas by using dry sample preservation. Just like the method used today for fast and efficient blood testing, a drop of water on a card is dried and mailed to a central facility for analysis. No need to lug a liter of water long distances and wait for water quality professionals to test it. Lienhard added that a cross-disciplinary team is working on the design as well as implementation strategies for the new system.

Other technologies J-WAFS is supporting include the development of sensors for food and water quality, and separation technologies for water purification.

A model approach

The final days of the course featured many other scientists and engineers, including co-leaders Buehler and Schuster.

Buehler spoke of the current and potential role of new materials in agriculture, including the use of computation in materials design for agricultural applications. Buehler explained that his research is inspired by nature, such as plant materials, which can teach us lessons about new designs that improve on nature. He uses a modeling approach to experimentation, investigating what can be done with a minimal sequencing pattern and nature’s building blocks, such as proteins, to make something new and improved. For example, to optimize a microstructured composite for toughness and strength, he might simulate it in computer software and then make it right in his lab or office using 3-D printing in about 30 minutes — a process and time scale unfathomable just a decade ago.

“There are a lot of things humans can do that nature can’t do, but to do it, you need models,” Buehler said. “We try to synthesize material creation from the bottom up instead of the traditional top-down approach. How does a material work? How does it break? How can we use the same chemical components to optimize the structure or create a new material with a different function? These are questions whose answers provide deep insight and potential for innovative engineering solutions to agricultural problems with examples in seed coatings, synthetic soils, thin films as barriers for disease, or better products such as bioproduced fuels.”

Schuster, an Ohio native and the product of five generations of farmers, presented on the spatial design of experiments and the use of relevant data. He promotes the use of advanced technology such as drones, robotics, and remote sensing to provide data and statistics for modeling and analysis of agricultural inputs. But it is still difficult to capture all the information necessary to determine plant health, he said. Cameron Dryden of AOA Xinetics, a Northrop Grumman business unit, echoed his concerns: Are invasive species or mold growing underground? Does the soil lack nitrogen?

“The new grand challenge of this generation is seeing through the ground and the ocean with images and communicating that information quickly and in a way that’s easily interpreted,” said Dryden.

“Viewing agriculture through the lens of materials science and mechanical engineering is a unique perspective,” said Schuster, “but one which could have significant implications for innovation and the environment. Further, as we enter a new age of mapping with precision, we’ll be able to learn more about the root system of plants and organize the information to always know what’s happening out of sight.”

CEE research scientist and alumnus Abel Sanchez highlighted ways digital location information — included in approximately 80 percent of all data — can enhance scientific research. He offered examples of professional sports, transportation, logistics, robotics, and augmented reality to illustrate its use and benefits.

“Many industries are leveraging this location intelligence, open web standards, and powerful, intuitive platforms to discover and predict key insights,” he said. “Fortunately, the rising levels of abstraction in spatial technologies are enabling optimization of operational performance, higher farming yields, strategic investments, and everyday decisions for everyone.”

CEE Assistant Professor Ben Kocar, the final speaker of the week, got straight to the point about biogeochemistry: “Soils are amazingly complex,” he said. “They possess a diverse array of physical, chemical, and biological characteristics that impart overarching controls on the fate of toxic elements like arsenic and mercury, and the availability of nutrients like nitrogen and phosphorus for plant growth. However, many of these processes are poorly understood and lay hidden beneath our feet.”

He studies if and how soils might serve as a sink for pollutants like atmospheric methane, or for carbon released as bacteria break down soil organic matter. Since this often occurs at a microscopic level, he has developed a new sampling device capable of measuring methane concentrations within a volume about the size of a grain of sand. He encourages others to develop similar devices to measure micro-scale soil processes that may illuminate how important nutrients and chemicals behave in soils.

Plant the seeds; watch them grow

This inaugural short program offered a unique interdisciplinary experience, bringing together industry speakers and MIT faculty from many related areas. It covered many aspects of agriculture, innovation, and the environment, from the big picture and motivations — including fundamental science as well as environmental engineering considerations — to specific topics such as water-soil interactions, biomaterials in agriculture and environment, and foliar disease. Techniques studied included computing and big data, analytics, sensing and data assimilation, risk modeling, microbial dynamics, genomics, and synthetic biology.

Neither the course directors nor the participants were quite sure what to expect as they launched the new program last month. But given the initial response from instructors and students, it seems that many new opportunities will grow from this initial program.


July 19, 2016 | More


JuliaCon draws global users of a dynamic, easy-to-learn programming language

“Julia is a great tool.” That’s what New York University professor of economics and Nobel laureate Thomas J. Sargent told 250 engineers, computer scientists, programmers, and data scientists at the third annual JuliaCon held at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

If you have not yet heard of Julia, it is not a “who,” but a “what.” Developed at CSAIL, the MIT Department of Mathematics, and throughout the Julia community, it is a fast-maturing programming language designed to be simple to learn, highly dynamic, and operational at the speed of C, with uses ranging from general programming to highly quantitative applications such as scientific computing, machine learning, data mining, large-scale linear algebra, and distributed and parallel computing. The language was launched open-source in 2012 and has begun to amass a large following of users and contributors.

This year’s JuliaCon, held June 21-25, was the biggest yet, and featured presentations describing how Julia is being used to solve complex problems in areas as diverse as economic modeling, spaceflight, bioinformatics, and many others.

“We are very excited about Julia because our models are complicated,” said Sargent, who is also a senior fellow at the Hoover Institution. “It’s easy to write the problem down, but it’s hard to solve it — especially if our model is high dimensional. That’s why we need Julia. Figuring out how to solve these problems requires some creativity. The guys who deserve a lot of the credit are the ones who figured out how to put this into a computer. This is a walking advertisement for Julia.” Sargent added that the reason Julia is important is because the next generation of macroeconomic models is very computationally intensive, using high-dimensional models and fitting them over extremely large data sets.

Sargent was awarded the Nobel Memorial Prize in Economic Sciences in 2011 for his work on macroeconomics. Together with John Stachurski he founded quantecon.net, a Julia- and Python-based learning platform for quantitative economics focusing on algorithms and numerical methods for studying economic problems as well as coding skills.

The Julia programming language was created and open-sourced thanks, in part, to a 2012 innovation grant awarded by the MIT Deshpande Center for Technological Innovation. Julia combines the functionality of quantitative environments such as Matlab, R, SPSS, Stata, SAS, and Python with the speed of production programming languages like Java and C++ to solve big data and analytics problems. It delivers dramatic improvements in simplicity, speed, capacity, and productivity for data scientists, algorithmic traders, quants, scientists, and engineers who need to solve massive computation problems quickly and accurately. The number of Julia users has grown dramatically during the last five years, doubling every nine months. It is taught at MIT, Stanford University, and dozens of universities worldwide. Julia 0.5 will launch this month and Julia 1.0 in 2017.
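For a sense of scale (a back-of-the-envelope calculation based on the growth figure above, not a number reported by the Julia team), doubling every nine months compounds to roughly a hundredfold increase over five years:

```python
months = 5 * 12          # five years
doubling_period = 9      # months per doubling, per the figure above
growth = 2 ** (months / doubling_period)
print(f"about {growth:.0f}x growth in users over five years")  # about 102x
```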

Presenters at JuliaCon have included analysts, researchers and data scientists at the U.S. Federal Reserve, BlackRock, MIT Lincoln Laboratory, Intel, Conning, and a number of universities around the world. In addition to a community of 500 contributors, Julia’s co-creators include Alan Edelman, professor of applied mathematics at MIT; Jeff Bezanson SM ’12, PhD ’15; Viral Shah, co-founder of Julia Computing; and Stefan Karpinski, co-founder of Julia Computing.


July 18, 2016 | More


The quest for clean water

The air was hot and gritty. Shehazvi had to squint to see past the sun into the edge of town, past the cars and motorcycles whizzing by, past the scorched earth, to where old buildings stood beautiful in their own way, muted pinks and oranges still curving and curling in all the right places. No rain again today.

She and her daughter climbed out of the rickshaw and walked down the alley that leads to their home, 200 rupees lighter than when they left for Jalgaon city earlier that day. That’s how much it cost every time she took her daughter to the doctor for stomach pains. The culprit? The salty drinking water.

“Excessive salt intake can be quite detrimental to one’s health, both in the short and long term,” says Maulik D. Majmudar, a cardiologist at Massachusetts General Hospital.

But there is no grocery store in Shehazvi’s rural Indian village where she can stock up on bottled water. There is no on-demand tap of drinking water that’s already been prepared for her safety and comfort. There is no reliable electricity.

MIT researchers design a solar-powered desalination device for rural India.

Video: Mechanical Engineering/MIT

The cost of clean water

Shehazvi is a teacher and resident of Mhasawad, a village of about 8,400 people that flanks the Girna River in Maharashtra, India. Unable to watch her daughter suffer further pains from drinking salty water, she recently started paying 30 percent of her monthly income to receive treated water from a reverse osmosis (RO) plant. With an average salinity 75 percent lower than that of the untreated town water, the treated water is worth the cost to Shehazvi.

“The water that is supplied is contaminated, and my daughter was always in pain,” she says. “I had to repeatedly take her to the doctor in Jalgaon, and it was very expensive. So I started buying filtered water. Now the stomachaches and the illnesses are gone.”

But despite the benefits, most of the residents of Mhasawad can’t afford RO water, from which bacteria and salt have been filtered out, and thousands of people in the village regularly drink water with a salinity level above 1,200 parts per million (ppm). To put that into perspective, the World Health Organization recommends levels under 600 ppm, and the water in Cambridge, Massachusetts, usually doesn’t get above 350 ppm at its worst.

“Everyone wants to drink the clean water,” Shehazvi says. “But what do they do if they can’t afford it? I only get paid 2,000 rupees per month and buying this water has been difficult.”

If the lower-income households can’t afford the RO-treated water, they definitely can’t afford the health costs associated with drinking salty water. One man living in Mhasawad says he spends around 20,000 rupees a year on his kidney stone problem.

The townspeople of Mhasawad are particularly concerned about the health of their children, who, according to the teachers in the village, including Shehazvi, have continuous digestive problems and stomach pains that often distract them during school lessons. When the pain gets bad enough, the teachers have to send the children to the hospital during school.

Convergence of perspectives

In order to design a water treatment system that was affordable and would actually work in the context of rural Indian villages, Amos Winter, an assistant professor in the Department of Mechanical Engineering, and PhD candidate Natasha Wright, a researcher in Winter’s GEAR Lab and a fellow of the MIT Tata Center for Technology and Design — which supports this and other GEAR Lab projects for the developing world — knew they first had to develop an in-depth understanding of the problem by talking directly to the residents themselves.

“We are in the field every six months trying to figure out how socioeconomic factors influence technical factors,” Winter says. “We walk the lines between product designers, machine designers, ethnographers, and social scientists, and it’s at the convergence of all those perspectives that disruptive new solutions come together.”

In August 2012, Wright travelled to Jalgaon to meet with engineers at Jain Irrigation Systems and partner on the development of a system that would set in motion the company’s dream of providing poor villages in India access to affordable potable water.

The company’s plan was to develop affordable home water systems that would remove the biological contaminants from the water, and Wright’s first two trips to Jalgaon were spent researching which systems were already on the market and how they were working.

“I went to villages and interviewed women’s groups, men’s groups, and individual families,” she says. “I was focused on the removal of biological contaminants and was hearing that a lot of villagers had filters but weren’t using them regularly. I wanted to figure out how to improve the water and increase the likelihood of filter use to prevent sickness.”

“When I reviewed my survey results,” she continues, “I realized that everyone was complaining about salt, even though I never even asked about it. They said it tastes bad, leaves marks on their pots and pans, and makes their stomachs hurt.”

“As outsiders, our motivations are often fueled solely by health concerns,” Winter says. “And of course that is crucial, but you have to remember that villagers have almost always gotten their water for free. So to go to a person and say we want you to pay for water that basically looks and tastes the same — what’s the value added to them? It’s our job to figure out why people would choose to buy clean water and include it in our solution.”

Wright and Winter believe that by designing a community system that can provide tasty, desalinated water at an affordable price, all villagers — especially those who are poorer and tend to drink contaminated, high-saline water on a regular basis — will be more likely to consistently drink water that’s clean and healthy, even if they have to pay for it.

Not much water anywhere, and not a drop to drink

The issue goes much deeper than taste. About 50 to 70 meters deeper, in fact. That is the depth at which many villages in India have to dig new wells to access any water at all.

India’s climate is hot and dry for most of the year, and the country as a whole is overcrowded. With almost 1.3 billion people and counting, it has the second-highest population in the world, and rainfall is mostly isolated to the three-month monsoon season. So while the demand for water increases with population, the water remains scarce, and many places like Mhasawad are forced to dig into the ground for water.

But as water is removed from the ground, the water table, which depends on rainfall, drops because it is overdrawn faster than it is replenished. Wells must be dug deeper and deeper to reach water, and the salinity of that water often naturally increases with depth.

So it’s an understatement to say that water is precious and can’t be wasted. And yet that’s exactly what happens when RO systems are used with the water in these areas.

RO systems work by using a high-pressure pump to push water through a membrane; the saltier the water, the more energy is required to push it through. The problem is that after the first pass through an RO membrane, the now-pure water has been removed and what’s left is concentrated saltwater.

And now that it’s more concentrated, it requires proportionally more power to move it through the membrane — so much more that the cost of the power outweighs the benefits, and manufacturers forgo a second pass to keep the costs down.
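For readers who want to see why, here is a rough back-of-the-envelope sketch in Python. It uses the van 't Hoff approximation for osmotic pressure, the minimum pressure an RO pump has to overcome; the salinity values are assumptions chosen only to illustrate how the requirement grows as the water gets saltier, not measurements from these villages.

# Illustrative estimate only: how the minimum pressure (and hence pumping energy)
# for reverse osmosis grows as the feed water becomes more concentrated.
# Uses the van 't Hoff approximation for osmotic pressure of dissolved NaCl.

R = 8.314                 # gas constant, J/(mol*K)
T = 298.0                 # temperature, K (about 25 C)
MOLAR_MASS_NACL = 58.44   # g/mol
VANT_HOFF_I = 2           # NaCl dissociates into two ions

def osmotic_pressure_bar(salinity_ppm):
    """Approximate osmotic pressure (bar) for water with the given NaCl salinity in ppm (mg/L)."""
    molar_conc_per_l = (salinity_ppm / 1000.0) / MOLAR_MASS_NACL   # g/L -> mol/L
    pressure_pa = VANT_HOFF_I * molar_conc_per_l * 1000.0 * R * T  # mol/m^3 * J/mol = Pa
    return pressure_pa / 1e5

for ppm in (1200, 2400, 4800):   # feed water, then successively more concentrated reject
    print(f"{ppm:>5} ppm -> ~{osmotic_pressure_bar(ppm):.1f} bar minimum pressure")

In this approximation, doubling the salt concentration roughly doubles the minimum pressure the pump must exceed, which is why a second pass over the already-concentrated reject water quickly stops being cost-effective.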

As a result, many RO systems in this area have enormous water reject rates. For example, in Chellur, a city outside of Hyderabad, the reject rate is approximately 70 percent — meaning that 70 percent of potential drinking water is wasted before it ever gets desalinated.
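The arithmetic behind that figure is simple; a short sketch in Python (the household demand is a hypothetical number used only for illustration):

# Quick arithmetic behind the reject-rate figure; the household demand is a made-up example.
reject_rate = 0.70                 # fraction of feed water discarded as brine (Chellur figure)
recovery = 1.0 - reject_rate       # fraction that becomes drinking water

drinking_water_l = 10.0            # hypothetical daily household need, litres
feed_needed_l = drinking_water_l / recovery
wasted_l = feed_needed_l - drinking_water_l

print(f"Feed water pumped from the well: {feed_needed_l:.1f} L")
print(f"Water rejected as brine:         {wasted_l:.1f} L")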

Enter solar-powered electrodialysis

The way Wright and Winter see it, they have to engineer a system for low cost, low waste, and low energy consumption. It is a mighty tall order indeed, and certainly one that can’t be fulfilled just by simplifying a solution that already exists in a developed country.

They started by identifying a system that would work best for the salinity of brackish groundwater in these rural villages. They chose electrodialysis reversal (EDR), because at the area’s typical salinity level of 500 to 2,000 ppm, it requires 25 to 70 percent less energy than RO and can recover more than 90 percent of the feed water.

EDR, which has been commercially available since the 1960s, works by pumping feed water through a stack of alternating cation and anion exchange membranes. When a voltage is applied across the stack, anions in the water are pulled toward the anode but are stopped by the cation exchange membranes, which let only cations through; cations moving the other way are likewise stopped by the anion exchange membranes. In this way, the salt is separated from the feed water into a concentrate stream, which is recirculated until it is too salty to continue and is then pumped into a nearby evaporation pond. Wright’s system also uses UV light to kill biological contaminants in the water.
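The batch behavior described above can be illustrated with a toy model. None of the numbers below come from the actual GEAR Lab system; they are assumptions chosen to show how recirculating water through the stack gradually brings the product below a target salinity while the concentrate gets saltier.

# Toy model of batch electrodialysis (not the researchers' actual design).
# Each pass through the stack moves a fixed fraction of the remaining salt
# from the diluate (product) tank into the concentrate tank.

feed_ppm = 1600.0        # assumed brackish feed, within the 500-2,000 ppm range cited
target_ppm = 300.0       # assumed product target, well below the WHO guideline of 600 ppm
removal_per_pass = 0.25  # assumed fraction of salt transferred per pass

diluate_ppm = feed_ppm
concentrate_ppm = feed_ppm
passes = 0
while diluate_ppm > target_ppm:
    transferred = diluate_ppm * removal_per_pass
    diluate_ppm -= transferred
    concentrate_ppm += transferred   # assumes equal diluate and concentrate volumes
    passes += 1

print(f"Passes needed : {passes}")
print(f"Product water : {diluate_ppm:.0f} ppm")
print(f"Concentrate   : {concentrate_ppm:.0f} ppm (eventually sent to the evaporation pond)")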

Because water is not being forced through a membrane, the required pressure and pumping power are much lower than in RO systems, so Winter and Wright save energy. This energy gain also opens the door to affordable solar-powered desalination, because the system doesn’t need as many solar panels.

So the researchers have replaced grid electricity with solar power, bypassing the unreliability of the Indian electrical grid altogether and decreasing operational and capital costs at the same time. And because the EDR stacks use exchange membranes that need replacing only about every 10 years and require no filters at all, maintenance costs fall as well.
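To see how lower energy use translates into fewer panels, here is a back-of-the-envelope comparison. Every figure in it (daily output, specific energy, sun hours, panel rating) is an assumption chosen for illustration, not a number from the researchers.

# Rough panel-count comparison under assumed figures; only the ratio matters.
daily_output_m3 = 10.0        # assumed village-scale production, m^3 per day
peak_sun_hours = 5.0          # assumed usable full-sun hours per day
panel_kw = 0.3                # assumed 300 W panel

specific_energy_kwh_per_m3 = {"RO": 2.0, "EDR": 1.0}   # assumed energy per cubic meter treated

for system, kwh_per_m3 in specific_energy_kwh_per_m3.items():
    daily_energy_kwh = daily_output_m3 * kwh_per_m3
    panels = daily_energy_kwh / (panel_kw * peak_sun_hours)
    print(f"{system}: ~{daily_energy_kwh:.0f} kWh/day -> roughly {panels:.0f} panels")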

The capital costs of their photovoltaic (PV)-EDR system will depend on whether they’re able to manufacture their own stacks, but they are targeting a one-time investment of around 755,000 rupees, which is equivalent to the cost of current community on-grid reverse osmosis systems.

In Bahdupet, outside of Hyderabad, the local government pays approximately 7,600 rupees per month to power its village RO system, pay the plant operator, and replace filters and cartridges, incurring no loss but making no profit. Switching to Wright’s system could cut their monthly costs almost in half, and they could reinvest the savings back into their town and its people.
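The savings arithmetic, using the figures reported above and taking “almost in half” as exactly half for simplicity:

# Savings estimate based on the Bahdupet figures in the article.
ro_monthly_cost = 7600                   # rupees per month to run the current RO system
edr_monthly_cost = ro_monthly_cost / 2   # "almost in half", taken as exactly half here
monthly_savings = ro_monthly_cost - edr_monthly_cost
annual_savings = monthly_savings * 12

print(f"Monthly savings: ~{monthly_savings:.0f} rupees")
print(f"Annual savings : ~{annual_savings:,.0f} rupees to reinvest in the village")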

Solving the solar power problem

Wright and Winter have designed, built, and tested their prototype system, and their next step is to implement it in a village outside of Hyderabad, where the people are currently using a village-scale RO system that was originally sold to them on loan from a local company called Tata Projects. Wright and Winter have partnered with Tata Projects to help the company improve their village-scale water desalination systems and potentially transition from their current RO systems to the PV-EDR systems Wright is designing.

Meanwhile, Wright is looking into ways to make the system more efficient — for example, using alternate architectures for the EDR stack. At the same time, she is working with GEAR Lab graduate students David Bian and Sterling Watson to cost-optimize the combined solar power and EDR system. Currently, the solar panels are paired with batteries that store extra solar power and distribute it evenly throughout the day, but the team is investigating designs that would connect the panels directly to the EDR stack while still maintaining a steady supply of power around the clock.
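The trade-off can be sketched with a toy hourly model: a battery lets the stack run at a steady, modest rate around the clock, while a direct-drive design can only run while the sun shines and therefore needs a higher peak capacity. The solar profile and energy figures below are invented for illustration and are not the GEAR Lab design.

# Toy comparison of battery-buffered versus direct-drive solar operation.
# Assumed hourly solar yield in kWh, peaking at midday (24 values).
solar_kwh = [0, 0, 0, 0, 0, 0,
             0.2, 0.6, 1.0, 1.4, 1.6, 1.7,
             1.7, 1.6, 1.4, 1.0, 0.6, 0.2,
             0, 0, 0, 0, 0, 0]
kwh_per_m3 = 1.0                     # assumed specific energy of the EDR stack

total_kwh = sum(solar_kwh)

# Battery-buffered: store the day's energy and run the stack at a constant rate for 24 hours.
steady_rate_m3_per_h = (total_kwh / 24) / kwh_per_m3

# Direct-drive: the stack runs only while the sun provides power.
direct_m3_per_h = [kwh / kwh_per_m3 for kwh in solar_kwh]
hours_running = sum(1 for rate in direct_m3_per_h if rate > 0)

print(f"Energy harvested per day : {total_kwh:.1f} kWh")
print(f"Battery-buffered         : {steady_rate_m3_per_h:.2f} m^3/h for 24 h")
print(f"Direct-drive             : up to {max(direct_m3_per_h):.1f} m^3/h for {hours_running} h")

Both strategies deliver the same daily volume in this toy; the difference is that a direct-drive stack must be sized for the midday peak rather than the daily average, which is part of what makes the cost optimization worthwhile.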

“If we can solve that problem,” Wright says, “we can potentially provide about 250 million people in India who currently drink salty groundwater a safe and affordable source of water.”

The MIT Tata Center catalyzed GEAR Lab’s desalination work and, along with Jain Irrigation, enabled the team to enter and win last year’s USAID Desal Prize. GEAR Lab has also received funding from Tata Projects, USAID, and UNICEF for this work.

Join Professor Winter and Natasha Wright for a live Reddit AskScience AMA (Ask Me Anything) on Wednesday, July 20 at 4 p.m. EDT.


July 18, 2016 | More


How MIT gave Ghostbusters its geek cred

The energetic researchers who grounded the new “Ghostbusters” in hard science — giving it “geek cred” — are using a flurry of media attention to alter public perceptions.

Janet Conrad and Lindley Winslow, colleagues in the MIT Department of Physics and researchers in MIT’s Lab for Nuclear Science, were key consultants for the all-female reboot of the classic 1984 supernatural comedy that is opening in theaters today. And the creative side of the STEM fields — science, technology, engineering, and mathematics — will be on full display.

Creativity is, after all, a driving force at MIT, says Conrad. “MIT is like a giant sandbox. You can find a spot and start building your castle, and soon other people will come over to admire it and help. There is a sense that it is okay to think big and to play here that is really wonderful. Keeping in mind that I have an office full of physics toys, I feel like I fit right in.”

MIT Chancellor Cynthia Barnhart, the first woman to hold the post, says it’s inspiring to see faculty members influence pop culture for the good. “At MIT, we know that being ‘a geek’ is cool. Movies like this have the potential to tell the whole world that. It’s such an important, powerful message for young people — especially women — to receive,” she says.

Kristen Wiig’s character, Erin Gilbert, a no-nonsense physicist at Columbia University, is all the more convincing because of Conrad’s toys. Her office features demos and other actual trappings from Conrad’s workspace: books, posters, and scientific models. Conrad even created detailed academic papers and grant applications for use as desk props.

“I loved the original ‘Ghostbusters,’” says Conrad. “And I thought the switch to four women, the girl-power concept, was a great way to change it up for the reboot. Plus I love all of the stuff in my office. I was happy to have my books become stars.”

Conrad developed an affection for MIT while absorbing another piece of pop culture: “Doonesbury.” She remembers one cartoon strip featuring a girl doing Psets. She is discouraged until a robot comes to her door and beeps. All is right with the world again. The exchange made an impression. “Only at MIT do robots come by your door to cheer you up,” she thought.

Like her colleague, Winslow describes mainstream role models as powerful, particularly when fantasy elements in film and television enhance their childhood appeal. She, too, loved “Ghostbusters” as a kid. “I watched the original many times,” she recalls. “And my sister had a stuffed Slimer.”

Winslow jokes that she “probably put in too much time” helping with the remake. Indeed, as Wired magazine recently detailed: “In one scene in the movie, Wiig’s Gilbert stands in front of a lecture hall, speaking on challenges of reconciling quantum mechanics with Einstein’s gravity. On the whiteboards, behind her, a series of equations tells the same story: a self-contained narrative, written by Winslow and later transcribed on set, illustrating the failure of a once-promising physics theory called SU(5).”

Movie reviewers have been floored by the level of set detail. Also deserving of serious credit is James Maxwell, a postdoc at the Lab for Nuclear Science during the period he worked on “Ghostbusters.” He is now a staff scientist at Thomas Jefferson National Accelerator Facility in Newport News, Virginia.

Maxwell crafted realistic schematics of how proton packs, ghost traps, and other paranormal equipment might work. “I recalled myself as a kid, poring over the technical schematics of X-wings and Star Destroyers. I wanted to be sure that boys and especially girls of today could pore over my schematics, plug the components into Wikipedia, and find out about real tools that experimental physicists use to study the workings of the universe.”

He too hopes this behind-the-scenes MIT link with a Hollywood blockbuster will get people thinking. “I hope that it shows a little bit of the giddy side of science and of MIT: the laughs that can come with a spectacular experimental failure or an unexpected breakthrough.”

The movie depicts the worlds of science and engineering, as drawn from MIT, with remarkable conviction, says Maxwell. “So much of the feel of the movie, and to a great degree the personalities of the characters, is conveyed by the props,” he says.

Kate McKinnon’s character, Jillian Holtzmann, an eccentric engineer, is nearly inseparable from, as Maxwell says, “a mess of wires and magnets and lasers” — a pile of equipment replicated from his MIT lab. When she talks proton packs, her lines are drawn from his work.

Keep an eye out for treasures hidden in the props. For instance, Wiig’s character is the recipient of the Maria Goeppert Mayer “MGM Award” from the American Physical Society, which hangs on her office wall. Conrad and Winslow say the honor holds a special place in their hearts.

“We both think MGM was inspirational. She did amazing things at a time when it was tough for women to do anything in physics,” says Conrad. “She is one of our favorite women in physics,” adds Winslow. Clearly, some of the film’s props and scientific details reflect the pair’s personal predilections, but Hollywood — and the nation — is also getting a real taste of MIT.


July 15, 2016 | More


Microbiome genes on the move

The word “culture” typically refers to a group’s shared heritage — such as its customs, cuisine, music, and language — that connects people in unique ways. But what if culture extended to a population’s microbiome, the collection of microorganisms that live on and within the human body?

Scientists are learning that the state of the microbiome can have an impact on human health, with the risk for everything from autoimmune disease to certain cancers being linked to the diversity and wellbeing of the trillions of microbes living in and on the body. In work published in this week’s Nature, Eric Alm and Ilana Brito from MIT and the Broad Institute of MIT and Harvard and their colleagues took a deep look at the microbiomes in developing world populations to study how culture can influence their makeup.

They uncovered an interesting role for “mobile genes” — genetic material that moves between organisms by a process called horizontal gene transfer — in shaping culturally distinct microbiomes in developing world populations. These mobile genes are useful for highlighting key genes in microbial genomes that help individuals adapt to their environment.

Isolated populations provide a clear lens

In 2008, the Human Microbiome Project (HMP) of the National Institutes of Health began an effort to survey the human microbiome on a large scale, by gathering samples (such as skin swabs, saliva samples, and stool) from hundreds of healthy North Americans, primarily those living in urban areas. They sequenced the microbes in those samples with the goal of understanding how they influence health and disease, and produced an unprecedented look at the diversity of the healthy human microbiome.

The human race, however, is more diverse than urban-dwelling North Americans. To understand how the microbiome of a population from the developing world might compare to the HMP dataset, Brito traveled quite far from urban North America, venturing all the way to the South Pacific islands of Fiji, where many of the country’s native villagers live in remote, isolated communities. “I wanted to track microorganisms that move from place to place, and I thought the best place for doing this was where all contacts are local contacts who use local water and food,” explains Brito, a postdoc in the lab of Eric Alm, an institute member at the Broad Institute, professor of biological engineering at MIT, and co-director of the MIT Center for Microbiome Informatics and Therapeutics. “In contrast, in big cities, we come into contact with a lot of different people, eat food from around the world, and use lots of hygiene products and antibiotics, which can prevent the transmission of even endogenous microbes.”

The villages Brito studied were on the second-largest island in Fiji, but they were still fairly remote, with about 100-150 people living in each village. While the HMP had been limited in the amount of information it collected about its participants, Brito conducted a thorough survey of the villagers she met. She mapped out people’s family trees and social networks, noted what medications they took, and recorded the GPS coordinates of their homes and drinking water supplies. In addition to sampling the individuals’ microbiomes, she sampled their water, identified who touched livestock, and took samples from those livestock. Brito captured not only the human microbiome, but also the reservoirs of microbes in the community. The project’s name, the Fiji COmmunity Microbiome Project (FijiCOMP), reflects this holistic approach.

Metagenomic data reveals layers of stories

While much of the earlier microbiome research used a method known as 16S ribosomal subunit sequencing to identify microbial species in a population, that approach tells little about the rest of their genomes. Brito’s goal was instead to do metagenomic sequencing, a more comprehensive way to look at microbial genomes that allows for more granular, strain-level distinctions. “There are layers of stories that can be missed just looking at the 16S profiles,” says Brito.

When Brito arrived home from Fiji with more than 1,000 samples in hand, she and Alm joined forces with Broad’s microbial sequencing group and, with the support of Broad along with funding from the National Human Genome Research Institute, were able to do metagenomic sequencing on over 500 of the samples. This was a game-changer for the researchers, moving them from having very little data to building the largest data set of this type and the only one of its kind on a developing world population. So massive was the influx of data that the researchers had to develop a new way to assemble and analyze the information.

Mobile genes identify welcome genomic additions

Data in hand, Brito and Alm could now dig in. In particular, they were looking for mobile genes, genetic elements that have been shared among species and that likely perform some crucial or survival-promoting function. “If you look at a microbial genome with 5,000 genes in it, which ones are particularly important?” Alm asks. “Probably not all 5,000 genes. Most of them are probably either housekeeping genes that every bacterium has or some random selfish gene. But if you go into an environment and see a particular gene being transferred to many different species, to every bug in this environment, which is maybe rich in tetracycline, [and if this is a] tetracycline resistance gene, then you’re like, aha! Then it’s likely that gene is one … of the 5,000 genes that’s super important.”
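Alm’s heuristic can be expressed in a few lines of code: count how many distinct species in a given environment carry each gene, and treat the widely shared genes as candidates for functional importance. The gene and species names below are invented purely for illustration.

# Minimal sketch of the "widely transferred genes are probably important" heuristic.
from collections import defaultdict

# (gene, species) observations, e.g. from annotated assemblies of one environment.
# These entries are hypothetical examples, not data from the study.
observations = [
    ("tetM", "Bacteroides fragilis"), ("tetM", "Prevotella copri"),
    ("tetM", "Escherichia coli"),     ("tetM", "Clostridium sp."),
    ("hypothetical_123", "Bacteroides fragilis"),
    ("gyrA", "Escherichia coli"),
]

species_per_gene = defaultdict(set)
for gene, species in observations:
    species_per_gene[gene].add(species)

# Rank genes by how many distinct species carry them in this environment.
for gene, carriers in sorted(species_per_gene.items(), key=lambda kv: -len(kv[1])):
    print(f"{gene:<18} found in {len(carriers)} species")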

Brito and Alm scoured their data for signs of horizontal gene transfer — the process by which mobile genes move among species — and, in collaboration with researchers at Sandia National Laboratories and Broad core institute member Paul Blainey, they used microbial single-cell sequencing to create a new set of reference genomes to compare with the metagenomic data and identify mobile genes. To pinpoint gene transfer events, they took a cue from earlier work in Alm’s lab.

“In 2011, we created the first map of who was sharing genes with whom,” says Alm. “We downloaded all of the microbial genomes in GenBank and looked for identical stretches of DNA that were surprisingly present in two totally different species.” Microbes whose lineages diverged hundreds of millions of years ago are expected to have accumulated many sequence differences over that time. So if large stretches of DNA are identical between two very different organisms, the researchers reasoned, the DNA was most likely transferred horizontally.
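A minimal sketch of that idea: index one genome’s fixed-length windows and look them up in another genome, flagging any stretch that is exactly shared. The toy sequences and the 20-base window are placeholders; the published analysis compared full GenBank genomes and relied on far longer identical stretches.

# Flag DNA windows that occur verbatim in both genomes; long exact matches between
# distantly related species are candidates for horizontal transfer.
def shared_identical_windows(genome_a, genome_b, window=20):
    """Return windows of length `window` that occur verbatim in both genomes."""
    kmers_a = {genome_a[i:i + window] for i in range(len(genome_a) - window + 1)}
    hits = set()
    for i in range(len(genome_b) - window + 1):
        kmer = genome_b[i:i + window]
        if kmer in kmers_a:
            hits.add(kmer)
    return hits

# Toy sequences sharing one 23-base stretch (purely illustrative).
genome_1 = "ATGCCGTAGCTAGGATCCGATCGTTACGGATCCAGGT" + "AAGGCTTACGGATCCATTGCA"
genome_2 = "TTTACGCAT" + "ATGCCGTAGCTAGGATCCGATCG" + "GGCATTACAGATT"

for hit in shared_identical_windows(genome_1, genome_2):
    print("candidate transferred segment:", hit)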

What Alm found in this earlier work was that two bacteria of different species were more likely to share a gene if they came from essentially the same site on the human body (for example, both from different spots in the mouth) than if they came from different sites, such as one from the mouth and one from the gut. In that dataset, geography did not seem to matter much for gene sharing, though that may simply reflect its limited geographic coverage. What about the developing world? What about people in these relatively isolated Fijian villages?

In this new study, Brito and the team looked at the gene transfer events, not only for the Fijian samples but also those from the Human Microbiome Project, to understand how the local environment influences the microbiome. What emerged from the data was that among the Fijian samples it was actually possible to identify particular functional genes selected for within particular populations, which meant the genes were culturally important.

One big difference between participants in the FijiCOMP study and the HMP is diet. For example, the Fijian diet is rich in local fare such as cassava, coconut, and regional seafood. Looking at families of digestive enzymes called glycoside hydrolases, the researchers found that particular family members useful for digesting particular foods were transferred as mobile genes within groups of people who eat those foods. Here, looking at mobile genes allowed the researchers to assess the impact of environmental factors such as diet more directly than simply cataloging which species were present.

“While 16S sequencing can identify which species are present and let us make associations between particular species and disease, what the mobile genes tell us is that even if we know the species, there seem to be culturally important genes that are crossing species boundaries that don’t show up in the 16S data,” says Alm. “So if we want to fully understand the public health impact of the microbiome overall, we need to not only track the species, but also the genes of interest. Combining single-cell and metagenomic analysis provides a powerful way to do it.”

This research was supported by the Fiji Ministry of Health, the National Human Genome Research Institute, the Center for Environmental Health at MIT, the Center for Microbiome Informatics and Therapeutics at MIT, the Wildlife Conservation Society and the Earth Institute at Columbia University, the Broad Institute of MIT and Harvard, the Burroughs Wellcome Fund, the National Institute of Dental and Craniofacial Research, and the United States Department of Energy.


July 14, 2016 | More


At open forum, MIT community discusses recent U.S. tragedies

More than 600 members of the MIT community met on Wednesday in the Institute’s latest public discussion of diversity, tolerance, and inclusion — matters made all the more salient by the series of high-profile gun killings in the U.S. this month.

The event featured public remarks by a few MIT speakers, while devoting most of its time to private discussions among audience members. Randomly assigned to tables of 10, the participants engaged in extended conversations about values, sources of intolerance, and ways to help MIT sustain an inclusive community during a time of social tension.

The U.S. has been roiled most recently by two incidents in which black men were killed by police officers this month, followed by the killing of five police officers who were serving at a demonstration in Dallas.

“I urge us not to give in to the darkness, the darkness of doubt and fear,” said DiOnetta Jones Crayton, associate dean for undergraduate education and director of the Office of Minority Education, in closing remarks to the entire audience. Instead, she said, the “light” we all carry can help us “stand together against injustice, intolerance, and hatred.”

The event is part of an ongoing MIT effort to foster diversity and a culture of inclusion.

“Injustice, racism, mistrust, suspicion, fear, and violence corrode the foundations of a healthy society,” MIT President L. Rafael Reif wrote in an open letter to the MIT community on Monday. “We cannot stand as observers and accept a future of escalating violence and divisiveness. I believe our leading civic institutions have a responsibility to speak clearly against these corrosive forces and to act practically to inspire and create positive change.”

On campus, MIT has started implementing a series of measures intended to further extend an atmosphere of respect and inclusiveness for all — and of greater mutual understanding among community members regardless of differences in ethnicity, religion, gender, or sexual orientation.

These efforts have been spurred in part by recommendations that MIT’s Black Students’ Union and Black Graduate Student Association made in December 2015. Changes at MIT that have occurred or are being implemented include increased financial aid for undergraduate students; expanded diversity orientation for undergraduate and graduate students; increased capacity at MIT Medical, including race-based traumatic stress counseling, and new staff with expertise in issues pertaining to the African diaspora; and more extensive collection and release of data about ethnicity and MIT, on subjects from admissions to student life.

At the same time, MIT expects to keep holding community events on topics similar to those featured Wednesday, in order to generate frank and supportive dialogue.

“We can’t solve a problem we can’t hear each other talking about,” said Ed Bertschinger, Institute Community and Equity Officer and a professor in MIT’s Department of Physics.

Kester Barrow, Area Director for MacGregor House (a student residence), in MIT’s Division of Student Life, also spoke to the audience about the needs of a diverse student community. While race is a social construct, Barrow stated, it is also the case that “race is a lived experience for us all.” As such, he suggested, we have an obligation to understand how those sometimes very disparate experiences shape us, individually and communally.

Audience members at the forum also submitted written suggestions about new ways MIT can keep working to generate civic inclusion on campus. Additionally, MIT chaplains set up a “prayer and reflection” space used after the event, where, among other things, community members created a paper chain of written thoughts about recent events.

In her closing remarks, Crayton urged audience members to rise above the current climate — “Returning violence to violence can only multiply violence,” she said — and noted that MIT can “challenge itself” to “make a better world” for everyone, no matter how daunting that goal may seem at times.

“If we stay in a state of helplessness for too long, it will cloud our vision,” Crayton said, adding: “As a nation, I do not believe we are incapable of rising above our current state.”


July 14, 2016 | More