News and Research

Duane Boning named faculty co-director of Leaders for Global Operations

Boning succeeds David Simchi-Levi as engineering faculty director of master’s program.
Read more


Rick Dauch (LGO ’92), Five Lessons: Adventures in the Automotive Supply Chain

From collapsing roofs to late-night phone calls, a veteran automotive executive shares his stories of turning around companies for private equity firms.

August 24, 2016 | More

A revolutionary model to optimize promotion pricing

Georgia Perakis is an LGO advisor and professor of operations management, operations research and statistics at MIT Sloan School of Management. In recognition of their work in this area, Perakis and her team of students from the Operations Research Center at MIT, along with her collaborators at Oracle, received the 2014 INFORMS Service Science Best Student Paper Award and the 2015 NEDSI Best Application of Theory Paper Award. The team was also selected as a finalist for the INFORMS Revenue Management & Pricing Section Practice Award in 2015.

Grocery stores run price promotions all the time. You see them when a particular brand of spaghetti sauce is $1 off or your favorite coffee is buy one, get one free. Promotions are used for a variety of reasons, from increasing traffic in stores to boosting sales of a particular brand. They also account for a great deal of revenue: a 2009 A.C. Nielsen study found that 42.8% of grocery store sales in the U.S. are made during promotions. This raises an important question: How much money does a retailer leave on the table by relying on current pricing practices rather than a more scientific, data-driven approach to determining optimal promotional prices?

The promotion planning tools currently available in the industry are mostly manual and based on “what-if” scenarios. In other words, supermarkets tend to use intuition and habit to decide when, how deeply, and how often to promote products. Yet promotion pricing is very complicated. Product managers have to decide whether to promote an item in a particular week, whether to promote two items together, and how to sequence upcoming discounts ― not to mention incorporating seasonality into their decision-making.

There are plenty of people in the industry with years of experience who are good at this, but their brains are not computers. They can’t process the massive amounts of data available to determine optimal pricing. As a result, lots of money is left on the table.

To revolutionize the field of promotion pricing, my team of PhD students from the Operations Research Center at MIT, our collaborators from Oracle, and I sought to build a model based on several goals. It had to be simple and realistic. It had to be easy to estimate directly from the data, but also computationally easy and scalable. In addition, it had to lead to interesting and valuable results for retailers in practice.

Partnering with Oracle, we began by mining more than two years of sales and promotions data from several of Oracle’s clients. Using that data, our team developed various new demand models that captured price effects, promotion effects, and general consumer behavior. For example, when paper towels are promoted one week, people stockpile them, so the effect of promoting paper towels again the following week is smaller. Our model took that behavior into account. We then developed an optimization model that quickly determines the promotion schedule for every item.
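To make the demand-side ideas above concrete, here is a minimal, hypothetical sketch in Python. It is not the team’s estimated model: the log-linear price response, the elasticity value, and the single-week “post-promotion dip” factor that stands in for stockpiling are illustrative assumptions only.

```python
import math

def weekly_demand(base_demand, price, base_price, promoted_last_week,
                  elasticity=-4.0, dip_factor=0.8):
    """Toy demand model: log-linear price response plus a one-week
    post-promotion dip that mimics consumer stockpiling.
    All parameter values are illustrative, not estimated from data."""
    # Price effect: demand rises sharply as price drops below the base price.
    demand = base_demand * math.exp(elasticity * math.log(price / base_price))
    # Stockpiling effect: if the item was promoted last week,
    # part of this week's demand was already pulled forward.
    if promoted_last_week:
        demand *= dip_factor
    return demand

# A 20% discount this week, no promotion last week: demand roughly 2.4x baseline.
print(round(weekly_demand(1000, 0.80, 1.00, promoted_last_week=False)))  # 2441
# Full price the week after a promotion: demand dips below the 1000-unit baseline.
print(round(weekly_demand(1000, 1.00, 1.00, promoted_last_week=True)))   # 800
```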

The first formulation modeled demand “exactly.” In practice, however, that model proved extremely difficult to solve. As a result, we created a simpler version that captures more than 90% of the complicated version’s performance and can handle practical problems. This simpler version runs on accessible software such as Excel and provides answers in milliseconds. It allows product managers to test various what-if scenarios quickly and easily ― and to remain the final decision-makers on promotional pricing.
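The optimization layer can then be thought of as a search over candidate promotion calendars, scoring each one with the demand model and keeping the most profitable. The brute-force enumeration below, the limit of at most two promotion weeks, and every number in it are assumptions for illustration; the actual MIT–Oracle formulation is far more sophisticated and scales to realistic problem sizes.

```python
import math
from itertools import combinations

def season_profit(promo_weeks, weeks=8, base_price=1.00, promo_price=0.80,
                  unit_cost=0.60, base_demand=1000, elasticity=-4.0, dip=0.8):
    """Profit of one promotion schedule under the toy demand model sketched above."""
    profit, promoted_last_week = 0.0, False
    for w in range(weeks):
        price = promo_price if w in promo_weeks else base_price
        demand = base_demand * math.exp(elasticity * math.log(price / base_price))
        if promoted_last_week:              # post-promotion dip (stockpiling)
            demand *= dip
        profit += (price - unit_cost) * demand
        promoted_last_week = w in promo_weeks
    return profit

# What-if search: enumerate every schedule with at most two promotion weeks.
schedules = [frozenset(c) for k in range(3) for c in combinations(range(8), k)]
best = max(schedules, key=season_profit)
print("best promotion weeks:", sorted(best), "profit:", round(season_profit(best)))
```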

As for how it works in practice, the simple model is highly effective. When we compared that model with what is currently implemented, we found an average of 3-10% improvement in profits. With typical retail store margins close to 1.9%, promotions can contribute to a significant portion of stores’ profits. For instance, a 5% increase can mean $5 million for retailers with annual profits of $100 million.

So far, together with our Oracle collaborators, we have received very positive feedback on this model and have filed patents for this work. The model has a strong mathematical foundation and can be used by any retailer in any industry. It could be a game changer for retailers, as they seek to optimize promotion pricing.

 

August 8, 2016 | More

Replicating the connection between muscles and nerves

Roger Kamm, LGO advisor and the Cecil and Ida Green Distinguished Professor of Mechanical and Biological Engineering at MIT, and his colleagues developed a microfluidic device that replicates the neuromuscular junction — the vital connection where nerve meets muscle. The device, about the size of a U.S. quarter, contains a single muscle strip and a small set of motor neurons. Researchers can influence and observe the interactions between the two, within a realistic, three-dimensional matrix.

The researchers genetically modified the neurons in the device to respond to light. By shining light directly on the neurons, they can precisely stimulate these cells, which in turn send signals to excite the muscle fiber. The researchers also measured the force the muscle exerts within the device as it twitches or contracts in response.

The team’s results, published online today in Science Advances, may help scientists understand and identify drugs to treat amyotrophic lateral sclerosis (ALS), more commonly known as Lou Gehrig’s disease, as well as other neuromuscular-related conditions.

“The neuromuscular junction is involved in a lot of very incapacitating, sometimes brutal and fatal disorders, for which a lot has yet to be discovered,” says Sebastien Uzel, who led the work as a graduate student in MIT’s Department of Mechanical Engineering. “The hope is, being able to form neuromuscular junctions in vitro will help us understand how certain diseases function.”

Uzel’s coauthors include Roger Kamm, the Cecil and Ida Green Distinguished Professor of Mechanical and Biological Engineering at MIT, along with former graduate student and now postdoc Randall Platt, research scientist Vidya Subramanian, former undergraduate researcher Taylor Pearl, senior postdoc Christopher Rowlands, former postdoc Vincent Chan, associate professor of biology Laurie Boyer, and professor of mechanical engineering and biological engineering Peter So.

Closing in on a counterpart

Since the 1970s, researchers have come up with numerous ways to simulate the neuromuscular junction in the lab. Most of these experiments involve growing muscle and nerve cells in shallow Petri dishes or on small glass substrates. But such environments are a far cry from the body, where muscles and neurons live in complex, three-dimensional environments, often separated over long distances.

“Think of a giraffe,” says Uzel, who is now a postdoc at the Wyss Institute at Harvard University. “Neurons that live in the spinal cord send axons across very large distances to connect with muscles in the leg.”

To recreate more realistic in vitro neuromuscular junctions, Uzel and his colleagues fabricated a microfluidic device with two important features: a three-dimensional environment, and compartments that separate muscles from nerves to mimic their natural separation in the human body. The researchers suspended muscle and neuron cells in the millimeter-sized compartments, which they then filled with gel to mimic a three-dimensional environment.

A flash and a twitch

To grow a muscle fiber, the team used muscle precursor cells obtained from mice, which they then differentiated into muscle cells. They injected the cells into the microfluidic compartment, where the cells grew and fused to form a single muscle strip. Similarly, they differentiated motor neurons from a cluster of stem cells, and placed the resulting aggregate of neural cells in the second compartment. Before differentiating both cell types, the researchers genetically modified the neural cells to respond to light, using a now-common technique known as optogenetics.

Kamm says light “gives you pinpoint control of what cells you want to activate,” as opposed to using electrodes, which, in such a confined space, can inadvertently stimulate cells other than the targeted neural cells.

Finally, the researchers added one more feature to the device: force sensing. To measure muscle contraction, they fabricated two tiny, flexible pillars within the muscle cells’ compartment, around which the growing muscle fiber could wrap. As the muscle contracts, the pillars squeeze together, creating a displacement that researchers can measure and convert to mechanical force.
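The conversion from pillar deflection to force follows textbook beam mechanics. As a rough sketch only — assuming the pillars bend as ideal cylindrical cantilevers, with placeholder dimensions and stiffness rather than the actual device’s values:

```python
import math

def cantilever_tip_force(deflection_m, length_m, diameter_m, youngs_modulus_pa):
    """Tip force needed to deflect a cylindrical cantilever pillar:
    F = (3 * E * I / L**3) * deflection, with I = pi * d**4 / 64.
    The geometry and modulus used below are placeholders, not the device's."""
    second_moment = math.pi * diameter_m**4 / 64
    stiffness = 3 * youngs_modulus_pa * second_moment / length_m**3
    return stiffness * deflection_m

# Example: a soft polymer pillar (~1 MPa modulus), 100 um wide and 1 mm tall,
# deflected 10 um by the contracting muscle strip.
force = cantilever_tip_force(10e-6, 1e-3, 100e-6, 1e6)
print(f"{force * 1e6:.2f} micronewtons")  # ~0.15 micronewtons
```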

In experiments to test the device, Uzel and his colleagues first observed neurons extending axons toward the muscle fiber within the three-dimensional region. Once they observed that an axon had made a connection, they stimulated the neuron with a tiny burst of blue light and instantly observed a muscle contraction.

“You flash a light, you get a twitch,” Kamm says.

Judging from these experiments, Kamm says the microfluidic device may serve as a fruitful testing ground for drugs to treat neuromuscular disorders, and could even be tailored to individual patients.

“You could potentially take pluripotent cells from an ALS patient, differentiate them into muscle and nerve cells, and make the whole system for that particular patient,” Kamm says. “Then you could replicate it as many times as you want, and try different drugs or combinations of therapies to see which is most effective in improving the connection between nerves and muscles.”

On the flip side, he says the device may be useful in “modeling exercise protocols.” For instance, by stimulating muscle fibers at varying frequencies, scientists can study how repeated stress affects muscle performance.

“Now with all these new microfluidic approaches people are developing, you can start to model more complex systems with neurons and muscles,” Kamm says. “The neuromuscular junction is another unit people can now incorporate into those testing modalities.”

This research was funded, in part, by the National Science Foundation.


August 3, 2016 | More

Carbon nanotube “stitches” strengthen composites

AeroAstro professor and LGO advisor Brian Wardle: The newest Airbus and Boeing passenger jets flying today are made primarily from advanced composite materials such as carbon fiber reinforced plastic — extremely light, durable materials that reduce the overall weight of the plane by as much as 20 percent compared to aluminum-bodied planes. Such lightweight airframes translate directly to fuel savings, which is a major point in advanced composites’ favor.

But composite materials are also surprisingly vulnerable: While aluminum can withstand relatively large impacts before cracking, the many layers in composites can break apart due to relatively small impacts — a drawback that is considered the material’s Achilles’ heel.

Now MIT aerospace engineers have found a way to bond composite layers in such a way that the resulting material is substantially stronger and more resistant to damage than other advanced composites. Their results are published this week in the journal Composites Science and Technology.

The researchers fastened the layers of composite materials together using carbon nanotubes — atom-thin rolls of carbon that, despite their microscopic stature, are incredibly strong. They embedded tiny “forests” of carbon nanotubes within a glue-like polymer matrix, then pressed the matrix between layers of carbon fiber composites. The nanotubes, resembling tiny, vertically-aligned stitches, worked themselves within the crevices of each composite layer, serving as a scaffold to hold the layers together.

In experiments to test the material’s strength, the team found that, compared with existing composite materials, the stitched composites were 30 percent stronger, withstanding greater forces before breaking apart.

Roberto Guzman, who led the work as an MIT postdoc in the Department of Aeronautics and Astronautics (AeroAstro), says the improvement may lead to stronger, lighter airplane parts — particularly those that require nails or bolts, which can crack conventional composites.

“More work needs to be done, but we are really positive that this will lead to stronger, lighter planes,” says Guzman, who is now a researcher at the IMDEA Materials Institute, in Spain. “That means a lot of fuel saved, which is great for the environment and for our pockets.”

The study’s co-authors include AeroAstro professor Brian Wardle and researchers from the Swedish aerospace and defense company Saab AB.

“Size matters”

Today’s composite materials are composed of layers, or plies, of horizontal carbon fibers, held together by a polymer glue, which Wardle describes as “a very, very weak, problematic area.” Attempts to strengthen this glue region include Z-pinning and 3-D weaving — methods that involve pinning or weaving bundles of carbon fibers through composite layers, similar to pushing nails through plywood, or thread through fabric.

“A stitch or nail is thousands of times bigger than carbon fibers,” Wardle says. “So when you drive them through the composite, you break thousands of carbon fibers and damage the composite.”

Carbon nanotubes, by contrast, are about 10 nanometers in diameter — nearly a million times smaller than the carbon fibers.

“Size matters, because we’re able to put these nanotubes in without disturbing the larger carbon fibers, and that’s what maintains the composite’s strength,” Wardle says. “What helps us enhance strength is that carbon nanotubes have 1,000 times more surface area than carbon fibers, which lets them bond better with the polymer matrix.”

Stacking up the competition

Guzman and Wardle came up with a technique to integrate a scaffold of carbon nanotubes within the polymer glue. They first grew a forest of vertically-aligned carbon nanotubes, following a procedure that Wardle’s group previously developed. They then transferred the forest onto a sticky, uncured composite layer and repeated the process to generate a stack of 16 composite plies — a typical composite laminate makeup — with carbon nanotubes glued between each layer.

To test the material’s strength, the team performed a tension-bearing test — a standard test used to size aerospace parts — where the researchers put a bolt through a hole in the composite, then ripped it out. While existing composites typically break under such tension, the team found the stitched composites were stronger, able to withstand 30 percent more force before cracking.

The researchers also performed an open-hole compression test, applying force to squeeze the bolt hole shut. In that case, the stitched composite withstood 14 percent more force before breaking, compared to existing composites.

“The strength enhancements suggest this material will be more resistant to any type of damaging events or features,” Wardle says. “And since the majority of the newest planes are more than 50 percent composite by weight, improving these state-of-the-art composites has very positive implications for aircraft structural performance.”

Stephen Tsai, emeritus professor of aeronautics and astronautics at Stanford University, says advanced composites are unmatched in their ability to reduce fuel costs, and therefore, airplane emissions.

“With their intrinsically light weight, there is nothing on the horizon that can compete with composite materials to reduce pollution for commercial and military aircraft,” says Tsai, who did not contribute to the study. But he says the aerospace industry has refrained from wider use of these materials, primarily because of a “lack of confidence in [the materials’] damage tolerance. The work by Professor Wardle addresses directly how damage tolerance can be improved, and thus how higher utilization of the intrinsically unmatched performance of composite materials can be realized.”

This work was supported by Airbus Group, Boeing, Embraer, Lockheed Martin, Saab AB, Spirit AeroSystems Inc., Textron Systems, ANSYS, Hexcel, and TohoTenax through MIT’s Nano-Engineered Composite aerospace STructures (NECST) Consortium and, in part, by the U.S. Army.


August 2, 2016 | More

Dan Frey named D-Lab faculty director

LGO advisor and Professor Daniel Frey of the Department of Mechanical Engineering is the new faculty director of the MIT D-Lab, an innovative initiative to design technologies that improve the lives of those living in poverty.

Professor J. Kim Vandiver, MIT dean for undergraduate research, has appointed Professor Daniel Frey of the Department of Mechanical Engineering the new faculty director of the MIT D-Lab. Vandiver has served in that role since D-Lab’s earliest days and says, “I am delighted to have Dan Frey take on this important leadership role in D-Lab at an exciting time. He combines a deep interest in design with a desire to strengthen research, which will have positive impact in the developing world.”

Frey will work closely with D-Lab leadership and staff to advance the program’s mission, values, ideals, and culture. Says D-Lab founder Amy Smith, “I’ve had the pleasure of teaching with Dan and am excited that he will be joining us in this capacity. Dan and his approach to systems thinking will be a valuable asset for D-Lab as we move forward with our strategic plan.”

“I think D-Lab is a really important organization — for the communities it serves, for students, for MIT,” Frey says. “When this role cropped up, it seemed like a great opportunity.”

Frey is no stranger to D-Lab. “Dan has been a supporter of D-Lab since the early days,” says Smith. In fact, over the years, Frey has supervised or co-supervised 10 projects at D-Lab and has formed strong working relationships with the D-Lab research and program staff.

With Frey’s help, D-Lab expects to further develop its research program and deepen its scholarship while maintaining a focus on practical impact, genuine connection with communities, and respect for the creative capacity of people living in poverty. D-Lab’s current research and program portfolio includes biomass fuel and cookstoves, off-grid energy, mobile technology, local innovation, agricultural needs assessment, and developing world mobility.

“I really hope that a lot of my contributions are down in the trenches of projects,” Frey says. “My research concerns the planning of experiments and analysis of data, especially when the goal of the experiments is to inform mechanical design. I foresee lots of chances to partner with D-Lab teams and help them get more out of their experiments.”

A big part of Frey’s job is working with students on products intended for the developing world. “One example is a vaccine cooler design,” says Frey. “A remarkably large fraction of vaccines is damaged somewhere along the way as they travel through the cold chain. It turns out temperatures that are too low are a principal culprit rather than temperatures that are too warm,” he explains. “We had an idea — ‘we’ being me and Prithvi Sundar, a mechanical engineering graduate student at the time — to change the configuration of a cooler, to change the arrangement of phase change material, insulation, and vaccines to avoid such problems across a much broader range of conditions.”

Another project Frey is working on is Surgibox, which began its development at D-Lab in 2011 when Debbie Lin Teodorescu brought the then-nascent idea to D-Lab staff and researchers. Dan says, “I just made an offer — and it was accepted, I’m happy to say — for a research assistant to join the Surgibox effort. The idea is to make surgeries safer when they have to be done in austere settings by maintaining more nearly aseptic conditions around the surgical site for the most common abdominal and thoracic procedures.”

In addition to bringing his considerable experience in research and engineering, in teaching, and project supervision, Frey will play a key role in coordinating D-Lab’s activities with other parts of the Institute by promoting faculty engagement in D-Lab courses, programs, and research and strengthening D-Lab’s alignment and collaboration with programs throughout MIT.

Some of those connections and engagements are already in place. In Frey’s role as co-director of experimental design research in the SUTD-MIT International Design Center (IDC), he is helping to implement experimental designs in the field and also learn from those experiences to improve the methods for planning experiments. Much of the field work has been done in association with the IDC’s Developing World track which, according to the program website, “aims to significantly improve conditions in the developing world by working with partners in developing countries.”

Frey also serves as faculty advisor for the Comprehensive Initiative on Technology Evaluation (CITE), a consortium of six MIT partners established in 2012 by a multimillion-dollar award from USAID. CITE is led by the Department of Urban Studies and Planning and includes D-Lab, Center for Transportation and Logistics, the Priscilla King Gray Public Service Center, and the Sociotechnical Systems Research Center.

Of his four years working with CITE, Frey says, “I’ve learned that there’s work being done all over the world with good intentions to alleviate problems related to poverty. I’m surprised over and over by the many things that don’t ultimately succeed. This work is very hard, complicated, and demands a broad range of disciplinary knowledge. I’m strongly committed to making things, testing them, and I hope, scaling the solutions.”

In addition to the many other roles mentioned in this article, Dan Frey is also co-principal investigator of BLOSSOMS, a collection of video lessons intended to enrich students’ learning experiences in high school classrooms in the U.S. and around the world.

Dan holds a PhD in mechanical engineering from MIT, an MS in mechanical engineering from the University of Colorado, and a BS in aeronautical engineering from Rensselaer Polytechnic Institute.

July 27, 2016 | More

Data-driven approach to pavement management lowers greenhouse gas emissions

LGO thesis advisor and Professor Franz-Josef Ulm’s MIT Concrete Sustainability Hub (CSHub) introduces a way to reduce emissions across a roadway network by using big data to identify specific pavement sections where improvements will have the greatest impact.

The roadway network is an important part of the nation’s transportation system, but it also contributes heavily to greenhouse gas emissions. A paper published this month in the Journal of Cleaner Production by researchers with the MIT Concrete Sustainability Hub (CSHub) introduces a way to reduce emissions across a roadway network by using big data to identify specific pavement sections where improvements will have the greatest impact.

For the recent paper, CSHub researchers Arghavan Louhghalam and Mehdi Akbarian, and Professor Franz-Josef Ulm, the CSHub faculty director, studied over 5,000 lane-miles of Virginia’s interstate highway system.

“We found that the maintenance of just a few lane miles allows for significant performance improvement, along with lowered environmental impact, across the entire network,” explains Louhghalam, the paper’s lead author. “Maintaining just 1.5 percent of the roadway network would lead to a reduction of 10 percent in greenhouse gas emissions statewide.”

Use-phase impact has historically been ignored in the life cycle assessment of pavements, due in part to the difficulty of obtaining real-time data and a lack of effective quantitative tools. CSHub models recreate the interaction between the wheel and pavement and allow researchers to directly observe the interplay with varying road conditions, pavement properties, traffic loads, and climatic conditions.

The method presented in this paper integrates those pavement vehicle interaction (PVI) models into several databases used by transportation agencies. A ranking algorithm allows local results to be scaled up and applied to state or national sustainability goals, providing the shortest path to greenhouse gas emissions savings through maintenance at the network scale.
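A drastically simplified version of that ranking step can be written as a greedy sort: score every pavement segment by the estimated emissions savings its repair would deliver per lane-mile maintained, then work down the list until a lane-mile budget is reached. The field names and figures below are invented for illustration; the CSHub method itself rests on detailed PVI models and transportation-agency databases.

```python
def rank_segments(segments, lane_mile_budget):
    """Greedy selection: repair the segments with the highest estimated
    CO2 savings per lane-mile until the lane-mile budget is used up.
    Each segment is a dict with invented example fields."""
    ranked = sorted(segments,
                    key=lambda s: s["co2_savings_tons"] / s["lane_miles"],
                    reverse=True)
    plan, used = [], 0.0
    for seg in ranked:
        if used + seg["lane_miles"] <= lane_mile_budget:
            plan.append(seg["id"])
            used += seg["lane_miles"]
    return plan

example_segments = [
    {"id": "segment-A", "lane_miles": 4.0, "co2_savings_tons": 1200},
    {"id": "segment-B", "lane_miles": 2.0, "co2_savings_tons": 900},
    {"id": "segment-C", "lane_miles": 6.0, "co2_savings_tons": 1000},
]
# With a 6 lane-mile budget, the two most efficient segments are chosen.
print(rank_segments(example_segments, lane_mile_budget=6.0))  # ['segment-B', 'segment-A']
```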

“The quantitative approach is less subjective than qualitative methods, and it’s easy to use,” Louhghalam says. “Decision makers can take more factors into account and make smart choices that are economically and also environmentally optimal.”

This study quantified the impact of deflection-induced PVI (which refers to the stiffness of the pavement) and roughness-induced PVI (which refers to the unevenness of a road’s surface) on the excess fuel consumption of vehicles. Results showed deflection-induced PVI is a major contributor to excess fuel consumption for trucks, due to their higher weights, and roughness-induced PVI impacts are larger for passenger vehicles, mainly due to higher traffic volume.

The researchers compared their approach to other methods, such as random maintenance, choosing roads based on traffic volume, and the current common practice of selecting roads based on their International Roughness Index values. The data-driven method allows for a maximum reduction in CO2 emissions with minimum lane-mile road maintenance.

“There is huge potential to improve efficiency and lower environmental impact through better design and maintenance of roadways,” says Ulm. “This work supports one of our major goals, which is to aid decision makers, including engineers and politicians, in thinking about infrastructure as part of the solution in a carbon-constrained environment.”

The MIT Concrete Sustainability Hub is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

July 26, 2016 | More

Designing climate-friendly concrete, from the nanoscale up

Franz-Josef Ulm, professor of CEE, LGO thesis advisor, and director of the MIT Concrete Sustainability Hub (CSHub), has been working to reduce concrete’s environmental footprint.

An MIT-led team has defined the nanoscale forces that control how particles pack together during the formation of cement “paste,” the material that holds together concrete and causes that ubiquitous construction material to be a major source of greenhouse gas emissions. By controlling those forces, the researchers will now be able to modify the microstructure of the hardened cement paste, reducing pores and other sources of weakness to make concrete stronger, stiffer, more fracture-resistant, and longer-lasting. Results from the researchers’ simulations explain experimental measurements that have confused observers for decades, and they may guide the way to other improvements, such as adding polymers to fill the pores and recycling waste concrete into a binder material, reducing the need to make new cement.

Each year, the world produces 2.3 cubic yards of concrete for every person on earth, in the process generating more than 10 percent of all industrial carbon dioxide (CO2) emissions. New construction and repairs to existing infrastructure currently require vast amounts of concrete, and consumption is expected to escalate dramatically in the future. “To shelter all the people moving into cities in the next 30 years, we’ll have to build the equivalent of several hundred New York cities,” says Roland Pellenq, senior research scientist in the MIT Department of Civil and Environmental Engineering (CEE) and research director at France’s National Center for Scientific Research (CNRS). “There’s no material up to that task but concrete.”

Recognizing the critical need for concrete, Pellenq and his colleague Franz-Josef Ulm, professor of CEE and director of the MIT Concrete Sustainability Hub (CSHub), have been working to reduce its environmental footprint. Their goal: to find ways to do more with less. “If we can make concrete stronger, we’ll need to use less of it in our structures,” says Ulm. “And if we can make it more durable, it’ll last longer before it needs to be replaced.”

Surprisingly, while concrete has been a critical building material for 2,000 years, improvements have largely come from trial and error rather than rigorous research. As a result, the factors controlling how it forms and performs have remained poorly understood. “People always deemed what they saw under a microscope as being coincidence or evidence of the special nature of concrete,” says Ulm, who with Pellenq co-directs the joint MIT-CNRS laboratory called MultiScale Material Science for Energy and Environment, hosted at MIT by the MIT Energy Initiative (MITEI). “They didn’t go to the very small scale to see what holds it together — and without that knowledge, you can’t modify it.”

Cement: the key to better concrete

The problems with concrete — both environmental and structural — are linked to the substance that serves as its glue, namely, cement. Concrete is made by mixing together gravel, sand, water, and cement. The last two ingredients combine to make cement hydrate, the binder in the hardened concrete. But making the dry cement powder requires cooking limestone (typically with clay) at temperatures of 1,500 degrees Celsius for long enough to drive off the carbon in it. Between the high temperatures and the limestone decarbonization, the process of making cement powder for concrete is by itself responsible for almost 6 percent of all CO2 emissions from industry worldwide. Structural problems can also be traced to the cement: When finished concrete cracks and crumbles, the failure inevitably begins within the cement hydrate that’s supposed to hold it together — and replacing that crumbling concrete will require making new cement and putting more CO2 into the atmosphere.

To improve concrete, then, the researchers had to address the cement hydrate — and they had to start with the basics: defining its fundamental structure through atomic-level analysis. In 2009, Pellenq, Ulm, and an international group of researchers associated with CSHub published the first description of cement hydrate’s three-dimensional molecular structure. Subsequently, they determined a new formula that yields cement hydrate particles in which the atoms occur in a specific configuration — a “sweet spot” — that increases particle strength by 50 percent.

However, that nanoscale understanding doesn’t translate directly into macroscale characteristics. The strength and other key properties of cement hydrate actually depend on its structure at the “mesoscale” — specifically, on how nanoparticles have packed together over hundred-nanometer distances as the binder material forms.

When dry cement powder dissolves in water, room-temperature chemical reactions occur, and nanoparticles of cement hydrate precipitate out. If the particles don’t pack tightly, the hardened cement will contain voids that are tens of nanometers in diameter — big enough to allow aggressive materials such as road salt to seep in. In addition, the individual cement hydrate particles continue to move around over time — at a tiny scale — and that movement can cause aging, cracking, and other types of degradation and failure.

To understand the packing process, the researchers needed to define the precise physics that drives the formation of the cement hydrate microstructure — and that meant they had to understand the physical forces at work among the particles. Every particle in the system exerts forces on every other particle, and depending on how close together they are, the forces either pull them together or push them apart. The particles seek an organization that minimizes energy over length scales of many particles. But reaching that equilibrium state takes a long time. When the Romans made concrete 2,000 years ago, they used a binder that took many months to harden, so the particles in it had time to redistribute so as to relax the forces between them. But construction time is money, so today’s binder has been optimized to harden in a few hours. As a result, the concrete is solid long before the cement hydrate particles have relaxed, and when they do, the concrete sometimes shrinks and cracks. So while the Roman Colosseum and Pantheon are still standing, concrete that’s made today can fail in just a few years.

The research challenge

Laboratory investigation of a process that can take place over decades isn’t practical, so the researchers turned to computer simulations. “Thanks to statistical physics and computational methods, we’re able to simulate this system moving toward the equilibrium state in a couple of hours,” says Ulm.

Based on their understanding of interactions among atoms within a particle, the researchers — led by MITEI postdoc Katerina Ioannidou — defined the forces that control how particles space out relative to one another as cement hydrate forms. The result is an algorithm that mimics the precipitation process, particle by particle. By constantly tracking the forces among the particles already present, the algorithm calculates the most likely position for each new one — a position that will move the system toward equilibrium. It thus adds more and more particles of varying sizes until the space is filled and the precipitation process stops.
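As a very rough illustration of what “particle-by-particle” insertion means — not the group’s actual algorithm, which uses realistic interaction potentials, a range of particle sizes, and careful statistical mechanics — here is a toy two-dimensional version in which each new particle is dropped at whichever of several random candidate positions has the lowest interaction energy with the particles already placed:

```python
import math
import random

def pair_energy(r, diameter=1.0, well_depth=1.0):
    """Toy Lennard-Jones-style interaction between two particles a distance r apart."""
    if r <= 0:
        return float("inf")
    s = diameter / r
    return 4 * well_depth * (s**12 - s**6)

def insertion_energy(pos, particles):
    """Energy of placing a new particle at pos, given the particles already there."""
    return sum(pair_energy(math.dist(pos, p)) for p in particles)

def grow_packing(n_particles, box=20.0, candidates=50, seed=0):
    """Insert particles one at a time, each at the lowest-energy of several
    random candidate positions -- a crude stand-in for precipitation."""
    rng = random.Random(seed)
    particles = []
    for _ in range(n_particles):
        trials = [(rng.uniform(0, box), rng.uniform(0, box))
                  for _ in range(candidates)]
        particles.append(min(trials, key=lambda pos: insertion_energy(pos, particles)))
    return particles

packing = grow_packing(200)
print(len(packing), "particles placed")
```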

Results from sample analyses appear in the first two diagrams in Figure 1 of the slideshow above. The width of each simulation box is just under 600 nanometers — about one-tenth the diameter of a human hair. The two analyses assume different packing fractions, that is, the total fraction of the simulation box occupied by particles. The packing fraction is 0.35 in the left-hand diagram and 0.52 in the center diagram. At the lower fraction, far more of the volume is made up of open pores, indicated by the white regions.

The third diagram in Figure 1 is a sketch of the cement hydrate structure proposed in pioneering work by T.C. Powers in 1958. The similarity to the center figure is striking. The MIT results thus support Powers’ idea that the formation of mesoscale pores can be attributed to the use of excessive water during hydration — that is, more water than needed to dissolve and precipitate the cement hydrate. “Those pores are the fingerprint of the water you put into the mix in the first place,” says Pellenq. “Add too much water, and at the end you’ll have a cement paste that is too porous, and it will degrade faster over time.”

To validate their model, the researchers performed experimental tests and parallel theoretical analyses to determine the stiffness and hardness (or strength) of cement hydrate samples. The laboratory measurements were taken using a technique called nanoindentation, which involves pushing a hard tip into a sample to determine the relationship between the applied load and the volume of deformed material beneath the indenter.

The graphs in Figure 2 of the slideshow above show results from small-scale nanoindentation tests on three laboratory samples (small symbols) and from computations of those properties in a “sample” generated by the simulation (yellow squares). The graph on the left shows results for stiffness, the graph on the right results for hardness. In both cases, the X-axis indicates the packing fraction. The results from the simulations match the experimental results well. (The researchers note that at lower packing fractions, the material is too soggy to test experimentally — but the simulation can do the calculation anyway.)

In another test, the team investigated experimental measurements of cement hydrate that have mystified researchers for decades. A standard way to determine the structure of a material is using small-angle neutron scattering (SANS). Send a beam of neutrons into a sample, and how they bounce back conveys information about the distribution of particles and pores and other features on length scales of a few hundred nanometers.

SANS had been used on hardened cement paste for several decades, but the measurements always exhibited a regular pattern that experts in the field couldn’t explain. Some talked about fractal structures, while others proposed that concrete is simply unique.

To investigate, the researchers compared SANS analyses of laboratory samples with corresponding scattering data calculated using their model. The experimental and theoretical results showed excellent agreement, once again validating their technique. In addition, the simulation elucidated the source of the past confusion: The unexplained patterns are caused by the rough edges at the boundary between the pores and the solid regions. “All of a sudden we could explain this signature, this mystery, but on a physics basis in a bottom-up fashion,” says Ulm. “That was a really big step.”

New capabilities, new studies

“We now know that the microtexture of cement paste isn’t a given but is a consequence of an interplay of physical forces,” says Ulm. “And since we know those forces, we can modify them to control the microtexture and produce concrete with the characteristics we want.” The approach opens up a new field involving the design of cement-based materials from the bottom up to create a suite of products tailored to specific applications.

The CSHub researchers are now exploring ways to apply their new techniques to all steps in the life cycle of concrete. For example, a promising beginning-of-life approach may be to add another ingredient — perhaps a polymer — to alter the particle-particle interactions and serve as filler for the pore spaces that now form in cement hydrate. The result would be a stronger, more durable concrete for construction and also a high-density, low-porosity cement that would perform well in a variety of applications. For instance, at today’s oil and natural gas wells, cement sheaths are generally placed around drilling pipes to keep gas from escaping. “A molecule of methane is 500 times smaller than the pores in today’s cement, so filling those voids would help seal the gas in,” says Pellenq.

The ability to control the material’s microtexture could have other, less-expected impacts. For example, novel CSHub work has demonstrated that the fuel efficiency of vehicles is significantly affected by the interaction between tires and pavement. Simulations and experiments in the lab-scale setup shown in Figure 3 of the slideshow above suggest that making concrete surfaces stiffer could reduce vehicle fuel consumption by as much as 3 percent nationwide, saving energy and reducing emissions.

Perhaps most striking is a concept for recycling spent concrete. Today, methods of recycling concrete generally involve cutting it up and using it in place of gravel in new concrete. But that approach doesn’t reduce the need to manufacture more cement. The researchers’ idea is to reproduce the cohesive forces they’ve identified in cement hydrate. “If the microtexture is just a consequence of the physical forces between nanometer-sized particles, then we should be able to grind old concrete into fine particles and compress them so that the same force field develops,” says Ulm. “We can make new binder without needing any new cement — a true recycling concept for concrete!”

This research was supported by Schlumberger; France’s National Center for Scientific Research (through its Laboratory of Excellence Interdisciplinary Center on MultiScale Materials for Energy and Environment); and the Concrete Sustainability Hub at MIT. Schlumberger is a Sustaining Member of the MIT Energy Initiative. The research team also included other investigators at MIT; the University of California at Los Angeles; Newcastle University in the United Kingdom; and Sorbonne University, Aix-Marseille University, and the National Center for Scientific Research in France.

This article appears in the Spring 2016 issue of Energy Futures, the magazine of the MIT Energy Initiative.


July 25, 2016 | More

Predicting performance under pressure

Two LGO thesis advisors and MIT Sloan operations professors use sweat to measure stress, see surprising results. Many industries subject current and prospective employees to stress tests to see how they might perform under pressure. Those who remain cool, calm, and collected during the simulations are often seen as the best fit for stressful real-life situations, whether it’s landing an airplane or trading on the stock exchange floor.

July 15, 2016 | More

Ready for takeoff

“The system is large, and there’s a lot of connectivity,” says Hamsa Balakrishnan, associate professor of aeronautics and astronautics and LGO student advisor at MIT.

Over the next 25 years, the number of passengers flying through U.S. airport hubs is expected to skyrocket by almost 70 percent, to more than 900 million passengers per year. This projected boom in commercial fliers will almost certainly add new planes to an already packed airspace.

Any local delays, from a congested runway to a weather-related cancellation, could ripple through the aviation system and jam up a significant portion of it, making air traffic controllers’ jobs increasingly difficult.

“The system is large, and there’s a lot of connectivity,” says Hamsa Balakrishnan, associate professor of aeronautics and astronautics at MIT. “How do you move along today’s system to be more efficient, and at the same time think about technologies that are lightweight, that you can implement in the tower now?”

These are questions that Balakrishnan, who was recently awarded tenure, is seeking to answer. She is working with the Federal Aviation Administration and major U.S. airports to upgrade air traffic control tools in a way that can be easily integrated into the existing infrastructure. These tools are aimed at predicting and preventing air traffic delays, both at individual airports and across the aviation system. They will also ultimately make controllers’ jobs easier.

“We don’t necessarily want [controllers] to spend the bandwidth on processing 40 pieces of information,” says Balakrishnan, who is a member of MIT’s Institute for Data, Systems, and Society. “Instead, we can tell them the three top choices, and the difference between those choices would be something only a human could tell.”

Most recently Balakrishnan has developed algorithms to prevent congestion on airport runways. Large hubs like New York’s John F. Kennedy International Airport can experience significant jams, with up to 40 planes queuing up at a time, each idling in line — and generating emissions — before finally taking off. Balakrishnan found that runways run more smoothly, with less idling time, if controllers simply hold planes at the gate for a few extra minutes. She has developed a queuing model that predicts the wait time for each plane before takeoff, given weather conditions, runway traffic, and arriving schedules, and she has calculated the optimal times when planes should push back from the gate.
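In spirit, the gate-hold idea can be reduced to a simple queue calculation: predict how long a departure would wait for the runway, and if that wait exceeds a target, hold the aircraft at the gate, engines off, for the difference. The sketch below uses a deterministic queue and made-up numbers purely for illustration; Balakrishnan’s model also folds in weather, arrival schedules, and fairness considerations.

```python
def predicted_taxi_out_time(queue_length, departures_per_min, unimpeded_taxi_min):
    """Minutes from pushback to wheels-off for a simple deterministic runway queue."""
    return unimpeded_taxi_min + queue_length / departures_per_min

def gate_hold_minutes(queue_length, departures_per_min=0.75,
                      unimpeded_taxi_min=12.0, target_taxi_min=20.0):
    """Hold the aircraft at the gate for however long its predicted taxi-out
    time exceeds the target, so it waits with engines off instead of idling."""
    taxi_out = predicted_taxi_out_time(queue_length, departures_per_min,
                                       unimpeded_taxi_min)
    return max(0.0, taxi_out - target_taxi_min)

# Example: 18 aircraft already queued, runway handling about 45 departures/hour.
print(gate_hold_minutes(queue_length=18), "minutes of gate hold")  # 16.0 minutes
```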

In reality, air traffic controllers may also be balancing “human constraints,” such as maintaining a certain level of fairness in determining which plane lines up first. That’s why a large part of Balakrishnan’s work also involves talking directly with air traffic controllers and operators, to understand all the factors that impact their decision making.

“You can’t purely look at the theory to design these systems,” Balakrishnan says. “A lot of the constraints they need to work within are unwritten, and you want to be as nondestructive as possible, in a way that a minor change does not increase their workload. Everybody understands in these systems that you have to modernize. If you’re willing to listen, people are very willing to tell you about what it looks like from where they are.”

First flight

Balakrishnan was born in Madras, now Chennai, a large metropolitan city in southern India, and was raised by academics: Her father is a recently retired physics professor at the Indian Institute of Technology at Madras, and her mother is a retired professor of physics at the Institute of Mathematical Sciences, in Chennai. Her brother, Hari, is now at MIT as the Fujitsu Professor of Electrical Engineering and Computer Science.

“A lot of people we knew were academics, and people used to talk about their research at our home,” Balakrishnan recalls. “I was surrounded by [academia] growing up.”

Following the family’s academic path wasn’t necessarily Balakrishnan’s goal, but as an undergraduate at the Indian Institute of Technology at Madras she found that she enjoyed math and physics. She eventually gravitated to computational fluid dynamics, as applied to aerospace engineering.

“My parents are physicists, and maybe I wanted to rebel, so I went into engineering,” Balakrishnan says, half-jokingly. “I liked practical things.”

She applied to graduate school at Stanford University, and after she was accepted, she took her first-ever plane ride, from India to the U.S.

“Air travel is much more affordable and common now, even in India,” Balakrishnan says. “It didn’t used to be that way, and a lot of work has been done, even in more developing economies, to make air travel more accessible.”

Clearing the runways

At Stanford, Balakrishnan shifted her focus from fluid dynamics to air traffic and control-related problems, first looking at ways to track planes in the sky.

“That got me interested in how the rest of the system works,” Balakrishnan says. “I started looking at all the different decisions that are getting made, who’s deciding what, and how do you end up with what you see eventually on the data side, in terms of the aircraft that are moving.”

After graduating from Stanford, she spent eight months at NASA’s Ames Research Center, where she worked on developing control algorithms to reduce airport congestion and optimize the routing of planes on the tarmac.

In 2007, Balakrishnan accepted a faculty position in MIT’s Department of Aeronautics and Astronautics, where she has continued to work on developing algorithms to cut down airport congestion. She’s also finding practical ways to integrate those algorithms in the stressful and often very human environment of an airport’s control tower.

She and her students have tested their algorithms at major airports including Boston’s Logan International, where they made suggestions, in real-time, to controllers about when to push aircraft back from the gate. Those controllers who did take the team’s suggestions observed a surprising outcome: The time-saving method actually cleared traffic, making it easier for planes to cross the tarmac and queue up for takeoff.

“It wasn’t an intended consequence of what we were doing,” Balakrishnan says. “Just by making things calmer and a little more streamlined, it made it easier for them to make decisions in other dimensions.”

Such feedback from controllers, she says, is essential for implementing upgrades in a system that is projected to take on a far higher volume of flights in the next few years.

“You’re designing with the human decision-maker in mind,” Balakrishnan says. “In these systems, that’s a very important thing.”


July 15, 2016 | More

New microfluidic device offers means for studying electric field cancer therapy

Roger Kamm, Distinguished Professor of Mechanical and Biological Engineering at MIT and LGO thesis advisor, developed the device, in which low-intensity electric fields keep malignant cells from spreading while preserving healthy cells.

July 7, 2016 | More

Sloan


MIT Hip-Hop Speaker Series returns, with eye on diversity

On Sept. 21, Sophia Chang—hip-hop manager, producer, writer, and label manager—will visit MIT. She will be the first woman and the first Asian to speak in the burgeoning MIT Hip-Hop Speaker Series. The talk takes place at 7 p.m. in Wong Auditorium, with doors opening at 6:30 p.m.

The series, which began in 2014, has so far welcomed hip-hop artists to discuss topics ranging from prison reform to entrepreneurship. Until now, the series has been organized by a loose collective of MIT students, including Forest Sears, SB ’16, and Chris Nolte, MBA ’15, co-founder of music crowdfunding site TapTape. Ad hoc support has come from Arts at MIT and the MIT Sloan Entertainment, Sports, and Media Club.

The club’s involvement was recently formalized and the series will become more frequent, with three or four talks expected in the 2016-2017 academic year. Adam Mitchell, MBA ’17, an organizer of the series, discussed why hip-hop makes sense at MIT, why Chang will kick off the year, and his favorite moments from past talks.

What is the goal of the MIT Hip-Hop Speaker Series? Is it just to have artists and others speak about anything, or is it more specific than that?

Adam Mitchell, MBA ’17

The MIT Hip-Hop Speaker Series brings together leaders in hip-hop with students at MIT, the goal being to create entirely unique and meaningful dialogues on topics outside the traditional realms of both academia and entertainment. Past speakers have included Killer Mike, Lil B, Prodigy of Mobb Deep, and Young Guru, who have lectured on topics as diverse as prison reform, race relations and police brutality, entrepreneurship, and workplace culture.

As a lifelong fan of hip-hop, it’s been an absolute dream come true for me to be able to bring artists to campus to share their stories. Hip-hop is a fundamentally provocative, engaging genre so the series has been really well-received by students at MIT.

What do people need to know about Sophia Chang, your first speaker for the school year?

Sophia Chang is the music business matriarch who managed Wu-Tang Clan members ODB, RZA, and GZA, as well as D’Angelo, Raphael Saadiq, Q-Tip, A Tribe Called Quest, and Blackalicious, in addition to putting in label time at Atlantic, Jive, and Universal. She produced fashion shows for Vivienne Tam, “Project Runway All Stars,” and Ralph Lauren, wrote a screenplay that she sold to HBO, and has developed other film and TV properties. Most recently she ran Cinematic, the label and management company for Joey Bada$$, Pro Era, and G Herbo.

She’s the embodiment of what we value at Sloan. She has displayed tremendous leadership, talent, and entrepreneurialism across multiple industries and passions, all the while overcoming racism and sexism in a traditionally male-dominated industry and genre. She’s an engaging public speaker and it’s a major milestone as she’ll be the first woman and first executive to ever speak in the series, which is long overdue.

What unique perspectives or ideas do speakers from the hip-hop community offer the MIT community?

While our guests have achieved tremendous success in the creative arts and business, they rarely come from privileged backgrounds. Unlike many of the speakers we host at MIT, they’ve lived through tremendous adversity and can speak to their firsthand experiences with issues of national importance in the U.S.: racism, police brutality, disproportionate amounts of incarceration of black and Latino communities, institutionalized sexism, amongst other topics. What I think is so compelling about the series is that many of our students already feel a deep emotional connection to the music, but they’re given this chance to engage directly with an artist or executive on their life and beliefs, which makes the messages in the music resonate even further. The series presents a human face and connection to societal issues that many MIT students are generally insulated from.

What are some of your favorite moments from past talks?

Hearing firsthand Prodigy of Mobb Deep reflect on his experience in prison and the state of the prison system in the U.S. was a highlight. Getting to hang out with Pusha T, who is one of my favorite artists, to talk about technology and what we as MIT students could invent to help artists make music was surreal.

What should people expect from the speaker series this year?

More diversity, more unique perspectives, and provocative speakers who are making a difference in the industry. First and foremost, we’re really excited to have Sophia as our first-ever female speaker. So many women have contributed to the evolution of hip-hop, but their voices and stories have traditionally been underrepresented. Sophia will speak about her history in and perspective on hip-hop through the unique lens of an Asian woman in the industry dating back three decades. She will discuss the tightrope she learned to walk as a woman in such a male-dominated world as well as management as a service industry, acknowledgement of privilege, addiction and depression, and her evolving relationship with the culture.

She has chosen [the event] to launch her highly-anticipated blog Raised by Wu-Tang and will give the audience a sneak peek at the site, including video testimonials from the Abbot himself, the RZA, and Method Man.

August 17, 2016 | More


Reading list: Digital platform strategy

How to get it right. And how to not get it wrong. MIT Sloan expert insight into building your platform company.

 By 2013, platform companies already accounted for many of the leading global brands. Image: Geoffrey Parker and Marshall Van Alstyne

Platforms. Everyone’s building one. Many will fail. Make sure yours isn’t one of them.

Sure, Uber is (ostensibly) worth $66 billion. And Airbnb continues to grow, despite legal and regulatory challenges. But have you checked in on Groupon recently? Despite a few major players, many digital platform companies fade quickly or are never noticed at all.

That’s understandable. Platform companies are really hard to get right, MIT experts said time and time again in the past year. But there is a path to success.

Here’s what we know:

Digital platforms are remaking the global economic map. A recent study found the largest platform companies are young, public, and American. China is the second-largest platform market. Asia and Africa are poised for growth, while Europe lags behind. Former IBM CEO Samuel Palmisano discussed this growth at last month’s MIT Platform Strategy Summit.

Read “Digital platforms driving shift in supply chains, globalization.”

Successful platforms reduce frictions between people and organizations.

Richard Schmalensee

Making life and work easier for both sides is more important than having novel technology, says MIT Sloan professor emeritus and dean emeritus Richard Schmalensee, co-author of Matchmakers: The New Economics of Multisided Platforms.

“It’s not finding some clever technology connecting A types and B types more easily,” Schmalensee says. “You have to make the connection more valuable, something you can get paid for enabling.”

Read “Successful platforms? Matchmakers that reduce friction.”

Ecosystem management and governance are a must. A major benefit of platforms is that anyone can connect to them and use them. But platform companies must have a mechanism to determine when to step in and guide changes or halt behavior. MySpace failed in part because users were driven away by a glut of unanticipated advertising, spam, and pornography, says Geoffrey G. Parker, a visiting scholar and research fellow at the MIT Initiative on the Digital Economy. Parker is the co-author of Platform Revolution: How Networked Markets are Transforming the Economy — And How to Make Them Work for You.

Read “The return of platforms (and how to not fail at building one).”

Trying to do it all is to court certain failure. Platform companies must choose between more content and exclusive content and between a mass market or a niche market, researchers in Italy and Spain wrote in MIT Sloan Management Review. They also caution against overlooking the value partners bring to a platform ecosystem. The article includes lessons learned from failure at Groupon and Blackberry, and from an early stumble by Amazon’s Kindle.

Read “How to avoid platform traps” at MIT Sloan Management Review.

Product companies are attempting to transition to a platform model. The strategy will vary by industry, but the trick is to use a platform or network approach to get closer to the customer for insights into pricing, network effects, supply chains, and strategy, according to a blog post from MIT Sloan Executive Education.

Read “Why platforms beat products every time” at MIT Sloan Executive Education.

MIT professors are teaching platform strategy to executives and entrepreneurs. In Platform Strategy: Building and Thriving in a Vibrant Ecosystem, MIT Sloan professors Pierre Azoulay and Catherine Tucker show how business strategies can be revised to find success building a platform. Azoulay discusses the course in the short video above.

Apply to attend the course Oct. 19-20 on the MIT campus.

August 17, 2016 | More


Good gig? New employment proposals for contract workers could make it better

Last week, Sen. Elizabeth Warren unveiled a comprehensive set of proposals to provide basic employment policy protections and income security benefits to those working in the so-called “gig” economy and others in subcontracted or franchised arrangements. Whether one agrees with her specific ideas or not, the nation owes her a debt of gratitude for putting these issues squarely on the table for a discussion that is long overdue.

The gig economy, best embodied by Uber, Lyft and Task Rabbit, may account for less than 1 percent of the workforce, but it has sparked a debate over what to do about all those who make their living outside of standard employment relationships.

Standard employment relationships are ones in which there is a clearly defined and identifiable employer that is responsible for complying with the range of employment laws put on the books since the New Deal: unemployment insurance, Social Security, minimum wage and overtime rules, and the right to unionize and gain access to collective bargaining. To be clear, the vast majority of American workers, about 85 percent to be exact, still work in this type of employment relationship.

 

But the last decade has witnessed increased erosion of this model, with the growth of subcontracting, outsourcing, franchising, on-call, temporary and, more recently, gig economy workers. Between 2005 and 2015, the share of the workforce in these nonstandard work arrangements increased from 10 percent to 15 percent.

Read the full article at WBUR Cognoscenti.

Thomas Kochan is the George Maverick Bunker Professor of Management, a Professor of Work and Employment Research, and the Co-Director of the MIT Sloan Institute for Work and Employment Research at the MIT Sloan School of Management.

August 17, 2016 | More

Get to know MIT Sloan’s eight new faculty members

New professors examine manufacturing in the developing world, improving health care with limited resources, and more. One examines how multinational corporations work with manufacturers in the developing world. Another is an economist who served as senior policy advisor to the U.K.’s secretary of health. MIT Sloan this year welcomes eight new faculty members in fields ranging from global economics to operations management.

Greg Distelhorst
Assistant Professor of Global Economics and Management

Comes from: Saïd Business School at the University of Oxford, where he was an associate professor of international business and an associate member of the department of politics and international relations. He received his PhD in political science from MIT in 2013.

Research: Explores the social impact of multinational business, focusing on how multinationals engage with labor-intensive manufacturers in the developing world.

Find out more: On his website and his Google Scholar profile.

Colin Fogarty
Assistant Professor of Operations Research and Statistics

Comes from: The Wharton School of the University of Pennsylvania, where he received his PhD in statistics earlier this year.

Research: Examines the design and analysis of observational studies. Investigates whether existing qualitative advice from quasi-experimentalists on how to conduct a “good” observational study can produce demonstrable quantitative improvements in the resulting inference.

Find out more: At his faculty directory page.

Jacquelyn Gillette
Assistant Professor of Accounting

Comes from: Simon Business School at the University of Rochester, where she received her PhD in accounting earlier this year.

Research: Focuses on the mechanisms that shape the information environment and the pricing of securities in public debt markets, with particular attention to the role of accounting and information intermediaries in the corporate bond market.

Find out more: At her faculty directory page.

Daniel Greenwald
Assistant Professor of Finance

Comes from: New York University, where he received his PhD in economics earlier this year.

Research: Studies connections between financial markets and the macroeconomy, including how institutional features of mortgage markets can amplify the effects of interest rate movements on debt, house prices, and economic activity.

Find out more: At his website and his Google Scholar page.

Jónas Jónasson
Assistant Professor of Operations Management

Comes from: London Business School, where he received his PhD in management science and operations earlier this year.

Research: Focuses on improving health care delivery in settings with limited resources. Develops models showing the impact of health care delivery programs on individual disease progression and public health outcomes.

Find out more: At his faculty directory page.

David Thesmar
Professor of Finance

Comes from: HEC Paris, where he was a professor of finance.

Research: Has studied the impact of financing constraints on the real economy. Investigates risk management and systemic risk in banking, as well as the impact of firm organization and non-rational decision making on corporate strategies.

Find out more: At his faculty directory page, his Google Scholar page, and his Twitter profile.

Nikos Trichakis
Assistant Professor of Operations Management

Comes from: Harvard Business School, where he was an assistant professor of business administration. He received his PhD in operations research from MIT Sloan in 2011.

Research: Studies optimization under uncertainty, and data-driven optimization and analytics, with application in health care, supply chain management, and finance.

Find out more: At his website.

John Van Reenen
Professor of Applied Economics

Comes from: The London School of Economics, where he was a professor of economics and the director of the Centre for Economic Performance.

Research: Has published widely on the economics of innovation, labor markets, and productivity and has been a senior policy advisor to the U.K. Secretary of State for Health and for many international organizations.

Find out more: At his faculty directory page, his Google Scholar page, and his Twitter profile.

August 16, 2016 | More

Stephen Curry, the Golden State Warriors, and the power of analytics at work

A commitment to data-driven decisions is transforming the management of sports. Other industries can — and should — follow suit. Whether or not their 2016 season ends with a second consecutive NBA championship, the Golden State Warriors are making Silicon Valley proud. They broke the record for regular season wins with 73. They are headlined by Stephen Curry, the dynamic and eminently likeable two-time MVP. They have established themselves among the league’s elite franchises.

Like the “unicorns” along Highway 101, the Warriors have done it all with a deep organizational commitment to data-driven decision making – both on the court and as a business. The three-pointers Steph and running mate Klay Thompson hoist seemingly with abandon are actually grounded in troves of evidence supporting the shot’s relative value. Meanwhile, the business side of the organization is leveraging fan data to more effectively drive ticket, sponsorship, and merchandise revenue.

The Warriors are not the only team pioneering the analytics revolution in sports. Organizations across an increasing number of sports and levels (professional, college, and high school) are capitalizing on data to gain a competitive edge. Indeed, few industries have implemented data-driven decision making as successfully as sports.

 

What learnings from the sports analytics revolution are applicable to the broader management community? For those seeking to become more data-driven in approach, consider the following:

Adopt a measured mindset. In the simplest of terms, analytics refers to quantitative tools that help organizations find, interpret, and use data to make better decisions. Sports teams understand that other factors such as previous experience and even gut instinct influence the decision-making process. In this context, analytics is a single input, albeit a potentially powerful one.

Read the full post at the MIT Sloan Management Review.

Ben Shields is a Lecturer in Managerial Communication at the MIT Sloan School of Management.

August 15, 2016 | More


One earth, two social fields

From the Huffington Post: Dallas, Ferguson, Nice. Turkey, Trump & Brexit. The simultaneous rise of global terrorism, of authoritarian strongmen and the far-right are the twin faces of our current moment. Even though Trump-type politicians and terrorism pretend to fight each other, on a deeper level they feed off each other. The more terrorist attacks occur in the US, Turkey, France, or Germany, the greater the chances that Trump, Le Pen, and their allies will be elected. But what’s more interesting is the intertwined connection on a deeper spiritual level: both movements, to various degrees, thrive on activating a social-emotional field that is characterized by prejudice, anger, and fear.

Social-Emotional Fields

Geopolitics and international relations have long been framed by different levels: political issues arise; they are seen in light of their underlying systemic structures; those, in turn, are shaped by the self-interests of nation-states (levels 1, 2, and 3 … Read More »

The post One earth, two social fields – Otto Scharmer appeared first on MIT Sloan Experts.

August 11, 2016 | More

Activists march in a demonstration organized by the Blockupy movement to protest the policies of the European Central Bank after the ECB officially inaugurated its new headquarters in Frankfurt, Germany, on March 18, 2015. (Photo: Thomas Lohnes/Getty Images)

Here’s why negative interest rates are more dangerous than you think

Europe and other parts of the world are in for big risks. Desperate times call for desperate and somewhat speculative measures. The European Central Bank (ECB) cut its deposit rate last Thursday, pushing it deeper into negative territory. The move is not unprecedented. In 2009, Sweden’s Riksbank was the first central bank to utilize negative interest rates to bolster its economy, with the ECB, Danish National Bank, Swiss National Bank and, this past January, the Bank of Japan, all following suit. The ECB’s latest move, however, was coupled with the announcement that it would also ramp up its quantitative easing measures by increasing its monthly bond purchases to 80 billion euros from 60 billion euros — a highly aggressive policy shift. The fact that the ECB has adopted this approach raises two key questions: What are the risks? And, if the policy fails, what other options are left? Negative … Read More »

The post Here’s why negative interest rates are more dangerous than you think — Charles Kane appeared first on MIT Sloan Experts.

August 11, 2016 | More


The last mile: startup gets medical equipment up and running in Africa

Founded at MIT Sloan, Medical Devices as a Service provides defibrillators, ultrasound machines, and other equipment to Nigerian hospitals.

Oluwasoga Oni tests a newly installed patient monitor on a hospital staff member in Ondo State, Nigeria.

Oluwasoga Oni’s father, a doctor, delivered him at Inland Hospital in Ondo State, Nigeria, more than 30 years ago.

Today, Oni, a graduate of MIT’s System Design and Management program, is delivering new and refurbished medical equipment to that same hospital through his startup, Medical Devices as a Service, or MDaaS.

While he was still an MIT student, Oni realized he could solve some critical problems for Nigerian hospitals in need of basic medical devices. “The challenges [in Nigeria] are the high cost of the equipment, the fact that there is little to no financing available for medical equipment, and the lack of skilled biomedical technicians to fix the equipment,” said Oni, who was a Legatum Fellow at MIT.

Charities often donate medical equipment to African hospitals, Oni said, but frequently, factors such as high temperatures and an unreliable electrical grid are not considered. The equipment is sometimes left at hospitals with no instructions on setup or maintenance, and ends up unused or broken within a few months. Meanwhile, in the United States, viable used medical equipment is abandoned in warehouses for years, in a kind of equipment purgatory, after hospitals upgrade to the latest models.

“There’s a huge inventory of equipment not being used in the United States and a serious shortage of quality medical equipment, and maintenance services in places like Nigeria,” said co-founder Genevieve Barnard, MBA ’18, who is also a Legatum Fellow and a dual-degree student at the Harvard Kennedy School of Government.

The for-profit company, which is still in its pilot phase, aims to bridge this gap. The MDaaS team sources refurbished medical equipment in the United States, then targets small to medium-sized hospitals in Nigeria that need it. Although MDaaS occasionally identifies new equipment, the company mainly focuses on high-quality refurbished pieces because they are a better financial fit for most hospitals, Barnard said.

The Medical Devices as a Service team in Nigeria.

The company works closely with doctors and hospitals to find out what they need before they look for equipment. The team does research to understand the conditions—such as power fluctuations or humidity levels—in each setting, and provides support when maintenance is needed. MDaaS handles the shipping and installation and then offers initial training on each piece of equipment. Customers receive one year of free pre-planned maintenance support through the company’s biomedical technicians.

MDaaS currently has five employees including Joe McCord, MEng ’15, who is focusing on supply chain logistics. As it grows, the company plans to hire more service technicians.

MDaaS will formally transition from the pilot phase to full operations in 2017.

August 9, 2016 | More


Why your diversity program may be helping women but not minorities (or vice versa)

When it comes to issues of race, gender, and diversity in organizations, researchers have revealed the problems in ever more detail. We have found a lot less to say about what does work — what organizations can do to create the conditions in which stigmatized groups can reach their potential and succeed. That’s why my collaborators — Nicole Stephens at the Kellogg School of Management and Ray Reagans at MIT Sloan — and I decided to study what organizations can do to increase traditionally stigmatized groups’ performance and persistence, and curb the disproportionately high rates at which they leave jobs.

One tool at any organization’s disposal is the way its leaders choose to talk (or not to talk) about diversity and differences — what we refer to as their diversity approach. Diversity approaches are important because they provide employees with a framework for thinking about group differences in the workplace and how they should respond to them. We first studied the public diversity statements of 151 big law firms in the U.S. to understand the relationship between how organizations talk about diversity and the rates of attrition of associate-level women and racial minority attorneys at these firms. We assumed that how firms talked about diversity in their statements was a rough proxy for the firm’s approach to diversity more generally.

Read the full post at Harvard Business Review.

Evan Apfelbaum is the W. Maurice Young (1961) Career Development Professor and an Assistant Professor of Organizational Studies at the MIT Sloan School of Management. 

August 9, 2016 | More

The road to safe, secure driverless cars

The development of autonomous vehicles promises a future of safe and efficient roads, unimpeded by distracted, impaired, aggressive, or deliberately speeding drivers. But to achieve this, the companies involved in developing driverless cars will have to navigate significant obstacles.

The transition from personally controlled to automated vehicles can be likened to the shift that occurred over the past 20 years from brick-and-mortar retail to e-commerce. For traditional storeowners, security depended on door locks, alarm systems, cameras, and access to cash registers. For online retailers, security has to do with networks and software.

Similarly, the safety focus in driverless vehicles will be largely about securing the networks and software that drive the cars. Today’s cars have approximately 100 million lines of code in them. Autonomous cars will have many times more. The companies that manufacture driverless cars will have to actively manage all of the security aspects of the vehicles’ software.

Today’s carmakers have, over time, developed efficient procedures for recalling and fixing vehicles with parts identified as faulty or unsafe. Similarly, with autonomous vehicles, manufacturers will need to devise methods of identifying and fixing problems discovered in software. In many cases, repairs can be done remotely, in the same way that mobile phone and computer makers can send patches over networks. But however fixes are made, management of software supply chains will need to be as efficient as the management of the supply chains for physical parts.

 

Beyond being efficient, software providers for driverless cars will surely face requirements to certify that the code they deliver is free of security vulnerabilities that, if exploited, could enable a hacker to seize control of the vehicle. A faulty spark plug is one thing. Suddenly having your steering, acceleration and braking hijacked is quite another.

Read the full post at Xconomy.

Lou Shipley is a Lecturer at the Martin Trust Center for MIT Entrepreneurship at the MIT Sloan School of Management.

August 8, 2016 | More

Engineering


Scene at MIT: Margaret Hamilton’s Apollo code

Half a century ago, MIT played a critical role in the development of the flight software for NASA’s Apollo program, which landed humans on the moon for the first time in 1969. One of the many contributors to this effort was Margaret Hamilton, a computer scientist who led the Software Engineering Division of the MIT Instrumentation Laboratory, which in 1961 contracted with NASA to develop the Apollo program’s guidance system. For her work during this period, Hamilton has been credited with popularizing the concept of software engineering. 

In recent years, a striking photo of Hamilton and her team’s Apollo code has made the rounds on social media and in articles detailing her key contributions to Apollo 11’s success. According to Hamilton, this now-iconic image (at left, above) was taken at MIT in 1969 by a staff photographer for the Instrumentation Laboratory — later named the Draper Laboratory and today an independent organization — for use in promotion of the lab’s work on the Apollo project. The original caption, she says, reads:

“Here, Margaret is shown standing beside listings of the software developed by her and the team she was in charge of, the LM [lunar module] and CM [command module] on-board flight software team.”

Hamilton, now an independent computer scientist, described for MIT News in 2009 her contributions to the Apollo software — which last month was added in its entirety to the code-sharing site GitHub:

“From my own perspective, the software experience itself (designing it, developing it, evolving it, watching it perform and learning from it for future systems) was at least as exciting as the events surrounding the mission. … There was no second chance. We knew that. We took our work seriously, many of us beginning this journey while still in our 20s. Coming up with solutions and new ideas was an adventure. Dedication and commitment were a given. Mutual respect was across the board. Because software was a mystery, a black box, upper management gave us total freedom and trust. We had to find a way and we did. Looking back, we were the luckiest people in the world; there was no choice but to be pioneers.”

Have a creative photo of campus life you’d like to share? Submit it to Scene at MIT.


August 17, 2016 | More


Doubling battery power of consumer electronics

An MIT spinout is preparing to commercialize a novel rechargeable lithium metal battery that offers double the energy capacity of the lithium ion batteries that power many of today’s consumer electronics.

Founded in 2012 by MIT alumnus and former postdoc Qichao Hu ’07, SolidEnergy Systems has developed an “anode-free” lithium metal battery with several material advances that make it twice as energy-dense, yet just as safe and long-lasting as the lithium ion batteries used in smartphones, electric cars, wearables, drones, and other devices.

“With two-times the energy density, we can make a battery half the size, but that still lasts the same amount of time, as a lithium ion battery. Or we can make a battery the same size as a lithium ion battery, but now it will last twice as long,” says Hu, who co-invented the battery at MIT and is now CEO of SolidEnergy.

The battery essentially swaps out a common battery anode material, graphite, for very thin, high-energy lithium-metal foil, which can hold more ions — and, therefore, provide more energy capacity. Chemical modifications to the electrolyte also make the typically short-lived and volatile lithium metal batteries rechargeable and safer to use. Moreover, the batteries are made using existing lithium ion manufacturing equipment, which makes them scalable.

In October 2015, SolidEnergy demonstrated the first-ever working prototype of a rechargeable lithium metal smartphone battery with double energy density, which earned them more than $12 million from investors. At half the size of the lithium ion battery used in an iPhone 6, it offers 2.0 amp hours, compared with the lithium ion battery’s 1.8 amp hours.
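
To make the comparison concrete, here is a quick back-of-the-envelope calculation using only the figures quoted above. The relative volume is taken from the “half the size” description rather than from actual cell dimensions, so treat this as an illustrative sketch, not a specification:

```python
# Illustrative arithmetic only: the capacities are the figures quoted above, and
# the 0.5 relative volume is an assumption based on the "half the size" description.
li_ion_capacity_ah = 1.8         # iPhone 6 lithium ion cell, amp hours
solidenergy_capacity_ah = 2.0    # SolidEnergy prototype, amp hours
relative_volume = 0.5            # prototype occupies roughly half the space

# Capacity packed into each unit of volume, relative to the lithium ion cell.
volumetric_ratio = (solidenergy_capacity_ah / li_ion_capacity_ah) / relative_volume
print(f"~{volumetric_ratio:.1f}x the capacity per unit volume")  # prints ~2.2x
```

Assuming comparable cell voltages, that roughly 2.2x figure is consistent with the “double the energy density” claim.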

SolidEnergy plans to bring the batteries to smartphones and wearables in early 2017, and to electric cars in 2018. But the first application will be drones, coming this November. “Several customers are using drones and balloons to provide free Internet to the developing world, and to survey for disaster relief,” Hu says. “It’s a very exciting and noble application.”

Putting these new batteries in electric vehicles as well could represent “a huge societal impact,” Hu says: “Industry standard is that electric vehicles need to go at least 200 miles on a single charge. We can make the battery half the size and half the weight, and it will travel the same distance, or we can make it the same size and same weight, and now it will go 400 miles on a single charge.”

Tweaking the “holy grail” of batteries

Researchers have for decades sought to make rechargeable lithium metal batteries, because of their greater energy capacity, but to no avail. “It is kind of the holy grail for batteries,” Hu says.

Lithium metal, for one, reacts poorly with the battery’s electrolyte — a liquid that conducts ions between the cathode (positive electrode) and the anode (negative electrode) — and forms compounds that increase resistance in the battery and reduce cycle life. This reaction also creates mossy lithium metal bumps, called dendrites, on the anode, which lead to short circuits that generate high heat, ignite the flammable electrolyte, and make the batteries generally nonrechargeable.

Measures taken to make the batteries safer, such as switching out the liquid electrolyte for a poorly conductive solid polymer electrolyte that must be heated to high temperatures to work, or for an inorganic electrolyte that is difficult to scale up, come at the cost of the battery’s energy performance.

While working as a postdoc in the group of MIT professor Donald Sadoway, a well-known battery researcher who has developed several molten salt and liquid metal batteries, Hu helped make several key design and material advancements in lithium metal batteries, which became the foundation of SolidEnergy’s technology.

One innovation was using an ultrathin lithium metal foil for the anode, which is about one-fifth the thickness of a traditional lithium metal anode, and several times thinner and lighter than traditional graphite, carbon, or silicon anodes. That shrank the battery size by half.

But there was still a major setback: The battery only worked at 80 degrees Celsius or higher. “That was a showstopper,” Hu says. “If the battery doesn’t work at room temperature, then the commercial applications are limited.”

So Hu developed a solid and liquid hybrid electrolyte solution. He coated the lithium metal foil with a thin solid electrolyte that doesn’t need to be heated to function. He also created a novel quasi-ionic liquid electrolyte that isn’t flammable, and made additional chemical modifications to the separator and cell design to stop the electrolyte from reacting adversely with the lithium metal.

The end result was a battery that operates at room temperature and combines the energy-capacity perks of lithium metal batteries with the safety and longevity features of lithium ion batteries. “Combining the solid coating and new high-efficiency ionic liquid materials was the basis for SolidEnergy on the technology side,” Hu says.

Blessing in disguise

On the business side, Hu frequented the Martin Trust Center for MIT Entrepreneurship to gain valuable insight from mentors and investors. He also enrolled in Course 15.366 (Energy Ventures), where he formed a team to develop a business plan around the new battery.

With their business plan, the team won the first-place prize at the MIT $100K Entrepreneurship Competition’s Accelerator Contest, and was a finalist in the MIT Clean Energy Prize. After that, the team represented MIT at the national Clean Energy Prize competition held at the White House, where they placed second.

In late 2012, Hu was gearing up to launch SolidEnergy, when A123 Systems, the well-known MIT spinout developing advanced lithium ion batteries, filed for bankruptcy. The landscape didn’t look good for battery companies. “I didn’t think my company was doomed, I just thought my company would never even get started,” Hu says.

But this was somewhat of a blessing in disguise: Through Hu’s MIT connections, SolidEnergy was able to use A123’s then-idle facilities in Waltham — which included dry and clean rooms, and manufacturing equipment — to prototype. When A123 was acquired by Wanxiang Group in 2013, SolidEnergy signed a collaboration agreement to continue using A123’s resources.

At A123, SolidEnergy was forced to prototype with existing lithium ion manufacturing equipment — which, ultimately, led the startup to design novel, but commercially practical, batteries. Battery companies with new material innovations often develop new manufacturing processes around new materials, which are not practical and sometimes not scalable, Hu says. “But we were forced to use materials that can be implemented into the existing manufacturing line,” he says. “By starting with this real-world manufacturing perspective and building real-world batteries, we were able to understand what materials worked in those processes, and then work backwards to design new materials.”

After three years of sharing A123’s space in Waltham, SolidEnergy this month moved its headquarters to a brand new, state-of-the-art pilot facility in Woburn that’s 10 times larger — and “can house the wings of a Boeing 747,” Hu says — with aims of ramping up production for their November launch.


August 17, 2016 | More


New technique may help detect Martian life

In 2020, NASA plans to launch a new Mars rover that will be tasked with probing a region of the planet scientists believe could hold remnants of ancient microbial life. The rover will collect samples of rocks and soil, and store them on the Martian surface; the samples would be returned to Earth sometime in the distant future so that scientists can meticulously analyze them for signs of present or former extraterrestrial life.

Now, as reported in the journal Carbon, MIT scientists have developed a technique that will help the rover quickly and non-invasively identify sediments that are relatively unaltered, and that maintain much of their original composition. Such “pristine” samples give scientists the best chance for identifying signs of former life, if they exist, as opposed to rocks whose histories have been wiped clean by geological processes such as excessive heating or radiation damage.

Spectroscopy on Mars

The team’s technique centers on a new way to interpret the results of Raman spectroscopy, a common, non-destructive process that geologists use to identify the chemical composition of ancient rocks. Among its suite of scientific tools, the 2020 Mars rover includes SHERLOC (Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals), an instrument that will acquire Raman spectra from samples on or just below the Martian surface. SHERLOC will be pivotal in determining whether life ever existed on Mars.

Raman spectroscopy measures the minute vibrations of atoms within the molecules of a given material. For example, graphite is composed of a very orderly arrangement of carbon atoms. The bonds between these carbon atoms vibrate naturally, at a frequency that scientists can measure when they focus a laser beam on graphite’s surface.

As atoms and molecules vibrate at various frequencies depending on what they are bound to, Raman spectroscopy enables scientists to identify key aspects of a sample’s chemical composition. More importantly, the technique can determine whether a sample contains carbonaceous matter — a first clue that the sample may also harbor signs of life.

But Roger Summons, professor of earth, atmospheric, and planetary sciences at MIT, says the chemical picture that scientists have so far been able to discern using Raman spectroscopy has been somewhat fuzzy. For example, a Raman spectrum acquired from a piece of coal on Earth might look very similar to that of an organic particle in a meteorite that was originally made in space.

“We don’t have a way to confidently distinguish between organic matter that was once biological in origin, versus organic matter that came from some other chemical process,” Summons says.

However, Nicola Ferralis, a research scientist in MIT’s Department of Materials Science and Engineering, discovered hidden features in Raman spectra that can give a more informed picture of a sample’s chemical makeup. Specifically, the researchers were able to estimate the ratio of hydrogen to carbon atoms from the substructure of the peaks in Raman spectra. This is important because the more heating any rock has experienced, the more the organic matter becomes altered, specifically through the loss of hydrogen in the form of methane.

The improved technique enables scientists to more accurately interpret the meaning of existing Raman spectra, and quickly evaluate the ratio of hydrogen to carbon — thereby identifying the most pristine, ancient samples of rocks for further study. Summons says this may also help scientists and engineers working with the SHERLOC instrument on the 2020 Mars rover to zero in on ideal Martian samples.

“This may help in deciding what samples the 2020 rover will archive,” Summons says. “It will be looking for organic matter preserved in sediments, and this will allow a more informed selection of samples for potential return to Earth.”

Seeing the hidden peaks

A Raman spectrum represents the vibration of a molecule or atom, in response to laser light. A typical spectrum for a sample containing organic matter appears as a curve with two main peaks — one wide peak, and a sharper, narrower peak. Researchers have previously labeled the wide peak as the D (disordered) band, as vibrations in this region correlate with carbon atoms that have a disordered makeup, bound to any number of other elements. The second, narrower peak is the G (graphite) band, which is typically related to more ordered arrangements of carbon, such as is found in graphitic materials.

Ferralis, working with ancient sediment samples being investigated in the Summons lab, identified substructures within the main D band that are directly related to the amount of hydrogen in a sample. That is, the higher these sub-peaks, the more hydrogen is present — an indication that the sample has been relatively less altered, and its original chemical makeup better preserved.
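
The team’s exact fitting procedure and calibration are not spelled out here, but the underlying idea, decomposing the broad D band into sub-peaks and reading off their heights, can be sketched as an ordinary curve-fitting problem. In the sketch below, the peak positions, widths, number of sub-peaks, and the synthetic spectrum are all illustrative assumptions, not the published parameters:

```python
# Illustrative sketch of D-band decomposition; the sub-peak positions/widths and
# the synthetic spectrum are assumptions, not the MIT team's published values.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, width):
    return amplitude * np.exp(-0.5 * ((x - center) / width) ** 2)

def d_band_model(x, *params):
    """Sum of Gaussian sub-peaks, given as (amplitude, center, width) triplets."""
    total = np.zeros_like(x)
    for i in range(0, len(params), 3):
        total += gaussian(x, *params[i:i + 3])
    return total

# In practice, raman_shift (cm^-1) and intensity come from the measured spectrum;
# here a synthetic two-peak spectrum with a little noise stands in for real data.
raman_shift = np.linspace(1100, 1600, 500)
intensity = d_band_model(raman_shift, 1.0, 1350, 60, 0.4, 1250, 50)
intensity += 0.01 * np.random.randn(raman_shift.size)

initial_guess = [1.0, 1350, 60, 0.5, 1250, 50]  # two hypothetical sub-peaks
fit_params, _ = curve_fit(d_band_model, raman_shift, intensity, p0=initial_guess)

# The relative heights of the fitted sub-peaks are the quantity the researchers
# relate (through their own calibration) to the hydrogen-to-carbon ratio.
sub_peak_heights = fit_params[0::3]
print(sub_peak_heights)
```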

To test this new interpretation, the team sought to apply Raman spectroscopy, and their analytic technique, to samples of sediments whose chemical composition was already known. They obtained additional samples of ancient kerogen — fragments of organic matter in sedimentary rocks — from a team based at the University of California at Los Angeles, who in the 1980s had used meticulous, painstaking chemical methods to accurately determine the ratio of hydrogen to carbon.

The team quickly estimated the same ratio, first using Raman spectroscopy to generate spectra of the various kerogen samples, then using their method to interpret the peaks in each spectrum. The team’s ratios of hydrogen to carbon closely matched the original ratios.

“This means our method is sound, and we don’t need to do an insane or impossible amount of chemical purification to get a precise answer,” Summons says.

Mapping a fossil

Going a step further, the researchers wondered whether they could use their technique to map the chemical composition of a microscopic fossil, which ordinarily would contain so little carbon that it would be undetectable by traditional chemistry techniques.

“We were wondering, could we map across a single microscopic fossil and see if any chemical differences were preserved?” Summons says.

To answer that question, the team obtained a microscopic fossil of a protist — an ancient, single-celled organism that could represent a simple alga or its predator. Scientists deduce that such fossils were once biological in origin, simply from their appearance and their similarity to hundreds of other patterns in the fossil record.

The team used Raman spectroscopy to measure the atomic vibrations throughout the fossil, at a sub-micron resolution, and then analyzed the resulting spectra using their new analytic technique. They then created a chemical map based on their analysis.

“The fossil has seen the same thermal history throughout, and yet we found the cell wall and cell contents have higher hydrogen than the cell’s matrix or its exterior,” Summons says. “That to me is evidence of biology. It might not convince everybody, but it’s a significant improvement over what we had before.”

Ultimately, Summons says that, in addition to identifying promising samples on Mars, the group’s technique will help paleontologists understand Earth’s own biological evolution.

“We’re interested in the oldest organic matter preserved on the planet that might tell us something about the physiologies of Earth’s earliest forms of cellular life,” Summons says. “We’re hoping to understand, for example, when did the biological carbon cycle that we have on the Earth today first appear? How did it evolve over time? This technique will ultimately help us to find organic matter that is minimally altered, to help us learn more about what organisms were made of, and how they worked.”

This work was supported by Shell Oil Company and Schlumberger through the X-Shale Consortium under the MIT-Energy Initiative, and Extramural Research by Shell Innovation Research and Development, The Simons Foundation Collaboration on the Origins of Life, the NASA Astrobiology Institute, and the Max Planck Society.


August 16, 2016 | More


When to get your head out of the game

Head injuries are a hot topic today in sports medicine, with numerous studies pointing to a high prevalence of sports-related concussions, both diagnosed and undiagnosed, among youth and professional athletes. Now an MIT-invented tool is aiding in detecting and diagnosing concussions in real time.

In 2007, the American College of Sports Medicine estimated that each year roughly 300,000 high school and college athletes are diagnosed with sports-related head injuries — but that number may be seven times higher, due to undiagnosed cases. One-third of sports-related concussions among college athletes went undiagnosed in a 2013 study by the National Institutes of Health. And the Centers for Disease Control and Prevention has consistently referred to the rise of sports-related head injuries as a national epidemic.

Last October, MIT alumnus Ben Harvatine ’12 — who suffered several head injuries as a longtime wrestler — started selling a wearable sensor for athletes, called the Jolt Sensor, that detects and gathers data on head impacts in real time. Commercialized through Harvatine’s startup Jolt Athletics, the sensor is now being used nationwide by teams from grade-school to college levels, and is being trialed by professional teams.

“We’re trying to give parents and coaches another tool to make sure they don’t miss big hits, or maybe catch a hit that doesn’t look that big but measures off the charts,” Harvatine says.

Tracking impact

The Jolt Sensor is essentially a small, clip-on accelerometer that can be mounted on an athlete’s helmet, or other headgear, to measure any impact an athlete sustains. When the athlete receives a heavy blow, the sensor vibrates and sends alerts to a mobile app, which is monitored by coaches or parents on the sideline.

The app lists each player on a team wearing the sensor. Filtered to the top of the list are players that received the biggest hits, players with the most total hits, and players with above average hits compared to their past impacts. If a player sustains a hard hit, the player’s name turns red, and an alert appears telling the coach to evaluate that player. The app includes a concussion symptom checklist and cognitive assessment test.
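
The article does not publish Jolt’s actual alert criteria, but the sideline logic it describes (flag hard hits immediately, and float to the top players with the biggest hits, the most hits, or a latest hit above their own history) can be sketched roughly as follows. The 60 g threshold and the data structures are hypothetical stand-ins:

```python
# Rough sketch of sideline triage; the 60 g threshold and this structure are
# hypothetical illustrations, not Jolt Athletics' actual criteria or code.
from dataclasses import dataclass, field
from statistics import mean

HARD_HIT_G = 60.0  # assumed peak acceleration that turns a player's name red

@dataclass
class Player:
    name: str
    impacts_g: list = field(default_factory=list)  # peak g of each recorded impact

    def record_impact(self, peak_g: float) -> bool:
        """Store an impact; return True if it should trigger an evaluate-now alert."""
        self.impacts_g.append(peak_g)
        return peak_g >= HARD_HIT_G

    def priority(self) -> tuple:
        """Sort key mirroring the app's list: biggest hit, most hits, above-average latest hit."""
        biggest = max(self.impacts_g, default=0.0)
        latest_above_avg = (len(self.impacts_g) > 1
                            and self.impacts_g[-1] > mean(self.impacts_g[:-1]))
        return (biggest, len(self.impacts_g), latest_above_avg)

roster = [Player("Player A"), Player("Player B")]
roster[0].record_impact(35.0)
if roster[1].record_impact(72.0):
    print(f"Evaluate {roster[1].name}: hard hit recorded")

# Players needing attention float to the top, as in the app's filtered list.
watch_list = sorted(roster, key=Player.priority, reverse=True)
```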

“We can’t be overly diagnostic, but we do our best to communicate the urgency that that was a big hit and you need to check out the player,” Harvatine says.

By recording every impact, big or small, the app also creates impact statistics for each athlete. “You can watch how an athlete is trending — day to day, week to week, month to month — in terms of their total impact exposure, and mitigate high risk situations before they result in injury,” Harvatine says.

Several other concussion-monitoring sensors are currently available. But a key innovation of the Jolt Sensor, Harvatine says, is a custom communications protocol that allows an unlimited number of sensors to transfer data to the app from up to 200 yards away. “That gives us an unparalleled range,” he says. “You don’t have to chase your kids around the field with your phone to get those alerts. You can actually follow a whole team at once.”

Data: The voice of reason

Apart from developing the sensors, the startup, headquartered in Boston, is focusing on gathering and analyzing data, which could provide deeper, objective insights into concussions, Harvatine says.

Over the years, Harvatine has seen sports-related head injuries become increasingly polarizing in the U.S., especially among parents. Some parents, he says, deny concussions happen so frequently, while others say they’ll never let their kids play sports due to risk. By amassing data, Harvatine hopes Jolt Athletics can offer a scientific middle ground: “We’re trying to be that rational voice, saying, ‘Yes, there are risks in sports, but we can help you better understand that risk and intelligently mitigate it.’”

So far, the Jolt Sensor has uncovered a surprising frequency of big hits among kids as young as 10, Harvatine says. “We had a couple sensors that have registered so many hits, at such a high level, that we’ve contacted the owners to make sure we didn’t have a defective sensor,” he says. “Turns out, it’s just typical for that age range.”

Although that finding doesn’t come from a large data set, Harvatine has formed a hypothesis for why those young kids take such big hits. “They’re big enough, strong enough, and fast enough to put hard licks on each other, but not necessarily experienced enough that they’re in total control of their bodies,” he says. “That may be making that particular level of play a little more dangerous than the levels just before or just after.”

Getting knocked around — for science

Harvatine, who studied mechanical engineering at MIT, designed the Jolt Sensor for a class project, after a fateful incident: During a practice for MIT’s wrestling team in his junior year, he suffered a concussion that went unnoticed. “I was feeling dizzy and nauseous, but I thought I was dehydrated, so I pushed through,” he says. “But by the end of practice, I was having trouble getting up, and I couldn’t pull words together.”

Harvatine ended up in the hospital with a months-long recovery that required dropping out of all classes for the fall semester. Upon returning to MIT the following spring, he enrolled in Course 2.671 (Measurement and Instrumentation), where he was charged with using a sensor to collect real-world data.

And he had a revelation. “I grabbed a bunch of accelerometers, strapped them to my wrestling headgear, and, much to my parents’ chagrin, went back to the wrestling mat to get knocked around and start gathering data,” he says.

In his fraternity house, Harvatine and classmate and Jolt Athletics co-founder Seth Berg ’14 designed the first Jolt Sensor prototype: a data-collection unit strapped around Harvatine’s waist, with wires running from the device, up his back, and connecting to accelerometers on his headgear. Everything had to be connected to a laptop.

During open gym hours, Harvatine wrestled with teammates while wearing the prototype — and collected some interesting data. Wrestling moves that generated the biggest blows didn’t involve direct impact to the head, but instead came from snapping his head back and forth. “We were doing a lot of drills that cause that type of impact, and it was something that I would’ve never worried about,” Harvatine says.

After graduating, Harvatine launched Jolt Athletics in 2013 to commercialize the sensor. While doing so, he received valuable advice from mentors at MIT’s Venture Mentoring Service, with whom Harvatine still keeps in contact today. “Honestly, I wouldn’t have had a clue what to do without VMS,” he says.

Additionally, Harvatine says, MIT classes like Course 2.008 (Design and Manufacturing II) and Course 2.009 (Product Engineering Processes) taught valuable lessons in product design and manufacturing, and in applying engineering skills to real-world applications. “Those are a couple of a long list of MIT courses I can point to that gave some useful insight into how the world works,” Harvatine says.

August 11, 2016 | More


User-friendly language for programming efficient simulations

Computer simulations of physical systems are common in science, engineering, and entertainment, but they use several different types of tools.

If, say, you want to explore how a crack forms in an airplane wing, you need a very precise physical model of the crack’s immediate vicinity. But if you want to simulate the flexion of an airplane wing under different flight conditions, it’s more practical to use a simpler, higher-level description of the wing.

If, however, you want to model the effects of wing flexion on the crack’s propagation, or vice versa, you need to switch back and forth between these two levels of description, which is difficult not only for computer programmers but for computers, too.

A team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory, Adobe, the University of California at Berkeley, the University of Toronto, Texas A&M, and the University of Texas has developed a new programming language that handles that switching automatically.

In experiments, simulations written in the language were dozens or even hundreds of times as fast as those written in existing simulation languages. But they required only one-tenth as much code as meticulously hand-optimized simulations that could achieve similar execution speeds.

“The story of this paper is that the trade-off between concise code and good performance is false,” says Fredrik Kjolstad, an MIT graduate student in electrical engineering and computer science and first author on a new paper describing the language. “It’s not necessary, at least for the problems that this applies to. But it applies to a large class of problems.”

Indeed, Kjolstad says, the researchers’ language has applications outside physical simulation, in machine learning, data analytics, optimization, and robotics, among other areas. Kjolstad and his colleagues have already used the language to implement a version of Google’s original PageRank algorithm for ordering search results, and they’re currently collaborating with researchers in MIT’s Department of Physics on an application in quantum chromodynamics, a theory of the “strong force” that holds atomic nuclei together.

“I think this is a language that is not just going to be for physical simulations for graphics people,” says Saman Amarasinghe, Kjolstad’s advisor and a professor of electrical engineering and computer science (EECS). “I think it can do a lot of other things. So we are very optimistic about where it’s going.”

Kjolstad presented the paper in July at the Association for Computing Machinery’s Siggraph conference, the major conference in computer graphics. His co-authors include Amarasinghe; Wojciech Matusik, an associate professor of EECS; and Gurtej Kanwar, who was an MIT undergraduate when the work was done but is now an MIT PhD student in physics.

Graphs vs. matrices

As Kjolstad explains, the distinction between the low-level and high-level descriptions of physical systems is more properly described as the distinction between descriptions that use graphs and descriptions that use linear algebra.

In this context, a graph is a mathematical structure that consists of nodes, typically represented by circles, and edges, typically represented as line segments connecting the nodes. Edges and nodes can have data associated with them. In a physical simulation, that data might describe tiny triangles or tetrahedra that are stitched together to approximate the curvature of a smooth surface. Low-level simulation might require calculating the individual forces acting on, say, every edge and face of each tetrahedron.

Linear algebra instead represents a physical system as a collection of points, which exert forces on each other. Those forces are described by a big grid of numbers, known as a matrix. Simulating the evolution of the system in time involves multiplying the matrix by other matrices, or by vectors, which are individual rows or columns of numbers.

Matrix manipulations are second nature to many scientists and engineers, and popular simulation software such as MatLab provides a vocabulary for describing them. But using MatLab to produce graphical models requires special-purpose code that translates the forces acting on, say, individual tetrahedra into a matrix describing interactions between points. For every frame of a simulation, that code has to convert tetrahedra to points, perform matrix manipulations, then map the results back onto tetrahedra. This slows the simulation down drastically.
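
To make the overhead concrete, here is a rough, generic NumPy sketch (not Simit code, and not tied to any particular solver) of the gather-compute-scatter pattern just described: per-element contributions are assembled into a global matrix, a linear-algebra step runs, and the results are mapped back onto the mesh, every frame.

```python
# Generic illustration of the per-frame gather/scatter pattern described above;
# the mesh, the per-element values, and the "solve" step are toy placeholders.
import numpy as np

num_points = 5
tetrahedra = [(0, 1, 2, 3), (1, 2, 3, 4)]  # each element references four mesh points

def assemble_global_matrix(elements, n):
    """Gather: accumulate per-element contributions into one n x n system matrix."""
    K = np.zeros((n, n))
    for element in elements:
        local = np.ones((4, 4))  # stand-in for a real per-tetrahedron matrix
        for a, i in enumerate(element):
            for b, j in enumerate(element):
                K[i, j] += local[a, b]
    return K

positions = np.random.rand(num_points)

# Every simulation frame repeats this conversion round trip:
K = assemble_global_matrix(tetrahedra, num_points)        # graph -> matrix
forces = K @ positions                                    # linear-algebra step
per_point_forces = {i: f for i, f in enumerate(forces)}   # matrix -> graph
```

Simit, described below, aims to let programmers keep the concise linear-algebra view without paying for this round trip at runtime.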

So programmers who need to factor in graphical descriptions of physical systems will often write their own code from scratch. But manipulating data stored in graphs can be complicated, and tracking those manipulations requires much more code than matrix manipulation does. “It’s not just that it’s a lot of code,” says Kjolstad. “It’s also complicated code.”

Automatic translation

Kjolstad and his colleagues’ language, which is called Simit, requires the programmer to describe the translation between the graphical description of a system and the matrix description. But thereafter, the programmer can use the language of linear algebra to program the simulation.

During the simulation, however, Simit doesn’t need to translate graphs into matrices and vice versa. Instead, it can translate instructions issued in the language of linear algebra into the language of graphs, preserving the runtime efficiency of hand-coded simulations.

Unlike hand-coded simulations, however, programs written in Simit can run on either conventional microprocessors or on graphics processing units (GPUs), with no change to the underlying code. In the researchers’ experiments, Simit code running on a GPU was between four and 20 times as fast as on a standard chip.

“One of the biggest frustrations as a physics simulation programmer and researcher is adapting to rapidly changing computer architectures,” says Chris Wojtan, a professor at the Institute of Science and Technology Austria. “Making a simulation run fast often requires painstakingly specific rearrangements to be made to the code. To make matters worse, different code must be written for different computers. For example, a graphics processing unit has different strengths and weaknesses compared to a cluster of CPUs, and optimizing simulation code to perform well on one type of machine will usually result in sub-optimal performance on a different machine.”

“Simit and Ebb” — another experimental simulation language presented at Siggraph — “aim to handle all of these frustratingly specific optimizations automatically, so programmers can focus their time and energy on developing new algorithms,” Wojtan says. “This is especially exciting news for physics simulation researchers, because it can be difficult to defend creative and raw new ideas against traditional algorithms which have been thoroughly optimized for existing architectures.”


August 10, 2016 | More


Reducing emissions, improving technology: A mutually reinforcing cycle

In December 2015, much of the world celebrated when 195 nations plus the European Union reached an agreement to address climate change and pledged to meet nationally determined emissions-reduction targets at the United Nations climate talks in Paris. But many experts have observed that the national targets in the Paris Agreement aren’t sufficiently aggressive to meet the goal of limiting global warming to less than 2 degrees Celsius. Moreover, they worry that some countries won’t be willing — or able — to meet their targets.

Now, an MIT analysis shows that if countries meet their emissions-reduction pledges to the Paris climate agreement, the cost of electricity from solar photovoltaic systems could drop by 50 percent and from wind systems by 25 percent between now and 2030. The reason: To cut their emissions, countries will need to deploy low-carbon technologies, and with that deployment will come technological innovation and lower costs, enabling further deployment.

The researchers estimate that if countries reinvest their savings as costs decline, they can increase their solar deployment by 40 percent and wind deployment by 20 percent — for the same level of investment. The lower costs of these and other low-carbon technologies will also help developing countries meet their emissions-reduction commitments for the future. Results of the MIT analysis were presented at the White House and referenced by negotiators in Paris.

In the study, Jessika Trancik, the Atlantic Richfield Career Development Assistant Professor of Energy Studies at the MIT Institute for Data, Systems, and Society (IDSS), and her colleagues showed that the impact of this mutually reinforcing cycle of emissions reduction and technology development can be significant. “The return on emissions reductions can be astonishingly large … and should feature prominently in efforts to broker an ambitious, long-term agreement among nations,” she notes.

Trancik agrees that the targets as written are too weak to do the job. But she cautions that looking only at those targets doesn’t tell the whole story. “There’s something else going on below the surface that’s important to recognize,” she says. “If those pledges are realized, they’ll require an expansion of clean energy, which will mean further investment in developing key clean-energy technologies. If good investment and policy decisions are made, the technologies will improve, and costs will come down.” Thus, the act of cutting carbon emissions will drive down the cost of meeting current emissions-reduction targets and of adopting stronger targets for the future.

The study involved an interdisciplinary team of graduate students — Patrick Brown of the Department of Physics, Joel Jean of the Department of Electrical Engineering and Computer Science, and Goksin Kavlak and Magdalena Klemun of IDSS — in consultation with other colleagues at both MIT and Tsinghua University in Beijing, China.

Before the Paris climate talks, the researchers brought their message to Washington. In an invited talk at the White House, Trancik presented the research findings to U.S. policymakers, and the message apparently resonated: U.S. negotiators used the report during the talks to encourage agreement to revisit and strengthen commitments every five years; White House statements on the agreement, including the final press release, cited the mutually reinforcing cycle between enhanced mitigation and cost reductions; and the Paris Agreement cited the benefits of investing in emissions reductions early on to drive down the cost of future mitigation.

Understanding technology development

Trancik is not new to the study of technology development. For the past decade, she has been studying the underlying reasons why technologies improve over time. Of particular interest has been figuring out why the cost of a technology falls as its deployment increases — a phenomenon first observed some 80 years ago.

By developing fundamentally new research methods, Trancik has been able to look “under the hood” of solar photovoltaics (PV) and other technologies to model changes over time. The resulting models can be tested against data and then applied to many different technologies to pin down the general drivers of technological improvement. The research has required studying hundreds of technologies, looking for key trends in everything from individual device capabilities and constraints up to macroscale market behavior.

About a year ago, she decided to take a comprehensive look at PV and wind technologies — two low-carbon energy sources that have been improving rapidly and have large potential for expansion. Using her analytical methodology, she asked: How quickly are these technologies improving? How rapidly have costs fallen and why? And what can those insights about the past tell us about future trends — in particular, under the emissions-reduction targets stated in the Paris Agreement?

Expanding markets, falling costs

In recent decades, worldwide solar and wind electricity-generating capacities have grown at rates far outpacing experts’ forecasts, and associated costs have dropped dramatically. The charts in Figure 1 of the slideshow above show those changes. Between 2000 and 2014, global solar PV capacity increased 126 times and wind capacity 23 times. Over the same period, the price of a solar PV module dropped 86 percent per kilowatt, and the cost of wind-generated electricity dropped by 35 percent per megawatt-hour. (Changes in solar costs cited here are based on module price because the cost of installation varies so widely from country to country.)

Drawing on Trancik’s past research on the drivers of technological improvement, the researchers determined why those costs have been falling. Public funding of research and development has played a role, but a key contributor has been the policies enacted by governments worldwide to reward the use of emissions-reducing technologies. Those policies have caused deployment of solar and wind technologies to ramp up and markets to expand, increasing competition among firms to excel. For example, in-house researchers work to improve product designs and manufacturing procedures. Technicians on solar PV manufacturing lines find ways to waste less high-cost silicon and make processes more efficient. And increased output yields cost reductions from economies of scale.

“Policies to incentivize the growth of markets have unleashed the ingenuity of private companies to drive down costs,” says Trancik. “I think that’s an important angle that’s not always recognized.”

Interestingly, the gains have resulted from a hodgepodge of public policies adopted by a handful of countries in North America, Europe, and Asia. And the leadership role in installing the technologies has shifted over the past three decades. Solar PV deployment was led by Japan and later Germany, while wind deployment shifted from the United States to Germany and ultimately to China. “Effort was not coordinated,” says Trancik. “Nonetheless, something resembling a relay race emerged, with countries trading off the leader’s baton to maintain progress as efforts from individual nations rose and fell.”

Implications for the Paris Agreement

So what do those insights mean for the future under the Paris Agreement? To find out, the researchers first had to estimate how much solar and wind capacity would be deployed under the Intended Nationally Determined Contributions (INDCs) specified by countries in the Paris Agreement. They assumed a scenario that placed a “relatively heavy” emphasis on renewables but also allowed for expanded use of nuclear fission and hydropower, and they took into account any specific commitments to renewables adoption that countries had made. Based on analyses of all the INDCs, they concluded that global installed solar capacity could increase nearly fivefold and wind about threefold between now and 2030.

To forecast how costs will change at those deployment levels, the researchers used models that Trancik had developed in her previous research, including methods of dealing with inherent uncertainty and forecasting errors so as to generate robust results. In addition, they incorporated expert opinion into their estimates of the “soft costs” of PV installation — that is, labor, permitting, and on-site construction costs, which vary significantly from country to country. Based on their analyses, they forecast a cost decline of 50 percent for solar PV and 25 percent for wind between now and 2030 (and they quantify the expected errors in those forecasts).
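For a sense of how deployment growth and cost decline hang together, the same single-factor fit can be extended to the roughly fivefold (solar) and threefold (wind) capacity growth estimated under the INDCs. This is only an illustration; the exponents are the assumed values from the sketch above, and the team's actual forecasts rest on more detailed models, uncertainty quantification, and expert input on soft costs.

```python
# Extend the one-factor Wright's-law fit to the INDC-driven capacity growth.
# Exponents are the illustrative 2000-2014 values from the earlier sketch;
# growth factors reflect the roughly fivefold (solar) and threefold (wind)
# expansion estimated in the study.
b_solar, b_wind = 0.41, 0.14
growth_solar, growth_wind = 5.0, 3.0

def projected_decline(growth, b):
    """Cost decline implied by cost ~ capacity**(-b) for a given capacity growth."""
    return 1.0 - growth ** (-b)

print(f"solar PV: ~{projected_decline(growth_solar, b_solar):.0%} decline by 2030")  # ~48%
print(f"wind:     ~{projected_decline(growth_wind, b_wind):.0%} decline by 2030")    # ~14%
```

The crude solar extrapolation lands near the study's 50 percent solar forecast, while the wind figure comes in below the 25 percent wind forecast, a reminder of how much a one-parameter fit leaves out.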

Figure 2 in the slideshow above helps to put those costs into context. The bars show electricity costs (including contributions from construction and operation) in 2014 from solar, wind, coal, and natural gas and cost projections for solar and wind in 2030.

The results show that wind was already competitive with coal and natural gas in 2014. Solar PV could compete only with coal, and only when the coal cost was increased to account for health-related costs resulting from air pollution (as estimated in the literature). By 2030, projected solar costs are roughly comparable to the 2014 coal and natural gas costs, even without considering health costs. “So there are already circumstances under which switching from fossil fuels to renewable sources could both abate carbon emissions and reduce the cost of generating electricity,” says Trancik, adding that the “development of storage will become increasingly critical over time as intermittent renewables deployment grows.”

Of course, an obvious question is whether the coal and natural gas technologies will also improve between now and 2030, eroding the renewables’ ability to compete. According to the researchers, the cost of generating electricity with those fuels hasn’t followed long-term decreasing trends in recent decades. In both cases, a large fraction of the total cost is the fuel itself. Those fuel costs tend to fluctuate over the short term but trend neither up nor down over the longer term, limiting the cost decline for the technologies that rely on them.

Messages for policymakers

So what does this mean for international climate change efforts? Trancik cites several possible outcomes for the Paris Agreement if pledges are met. One is that the targets are reached, costs fall, and countries are that much better positioned by 2025 or 2030 to commit to further emissions reductions and expanded adoption of low-carbon technology.

Another possible outcome is that the deployment of wind and solar PV could actually outpace the INDC commitments, either due to market forces alone or because of increasingly aggressive public policy. Policymakers may become more ambitious over time because of the ability to deploy more low-carbon energy without additional financial investment. That possibility is demonstrated in Figure 3 of the slideshow above, which plots global installed solar capacity against the cost of electricity. According to the researchers’ scenario, the INDCs commit countries to deploying a total of 858 gigawatts (GW) of solar PV by 2030.

But if costs decline as forecast by the MIT team, then investing the same amount of money could fund the deployment of 1,210 GW — a 40 percent increase. Performing the same analysis for wind shows that the projected cost decline would permit a 20 percent increase in the amount of wind power deployed for the same investment.
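A quick sanity check on those figures, using only the numbers quoted above: for a fixed budget, deployable capacity scales inversely with the average per-unit cost paid over the build-out, so the reported capacity gains imply how much lower that average cost would be. It is lower than the 50 percent endpoint decline because costs fall gradually as capacity is added. The variable names below are illustrative.

```python
# For a fixed budget, deployable capacity scales as 1/average_cost, so the
# capacity figures quoted above imply the average per-unit cost over the build-out,
# relative to whatever reference cost underlies the INDC figure.
solar_indc_gw, solar_same_budget_gw = 858, 1210
wind_extra_fraction = 0.20                                # 20% more wind for the same investment

solar_cost_ratio = solar_indc_gw / solar_same_budget_gw   # ~0.71
wind_cost_ratio = 1.0 / (1.0 + wind_extra_fraction)       # ~0.83

print(f"solar: average cost ~{solar_cost_ratio:.0%} of reference ({1 - solar_cost_ratio:.0%} lower)")
print(f"wind:  average cost ~{wind_cost_ratio:.0%} of reference ({1 - wind_cost_ratio:.0%} lower)")
```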

“So if developed countries invest their cost savings back into deployment, they could increase their emissions-reduction commitments without changing the total investment — and the larger those commitments, the faster costs may fall,” says Trancik. “If good decisions are made, by the time the least-developed nations are required to cut emissions, technology development may have lowered costs so much that switching to low-carbon energy is a benefit rather than a burden.”

Sustaining the momentum

As solar PV and wind power begin to dominate electricity markets, other technologies and practices will be needed to ensure reliable delivery of power. Since electricity generation from solar and wind sources is intermittent, ensuring that supply is available to meet demand will require bulk storage devices, expanded long-distance transmission infrastructure, and methods of shifting demand to times of maximum supply. “We can draw lessons on how to drive innovation in those areas by observing the approaches that successfully grew PV and wind markets,” says Trancik. But, she notes, the future is uncertain and we shouldn’t “put all our eggs in one basket.” Other low-carbon electricity sources — such as hydropower and nuclear fission in some locations — as well as technologies for transportation and heating should also be supported.

On the solar side, a final challenge — and opportunity — is to bring down the soft costs of installation. PV modules and inverters are sold in a global marketplace, so cost-reducing advances in that hardware can be shared internationally. But the soft cost components aren’t currently traded on global markets, and they’re twice as high in some countries as in others. Finding ways to share knowledge and best practices relating to soft costs, or possibly even creating global markets, could significantly reduce total costs, both within some countries and globally.

Trancik and her collaborators offer one last encouraging observation: There appears to be growing recognition among negotiators of the long-term positive contributions their countries can make by supporting low-carbon energy and driving down costs. “I think countries now realize that by supporting the early-stage development of these low-carbon energy technologies, they’re helping to contribute knowledge that will last indefinitely and will enable the world to combat climate change, and they take pride in that,” says Trancik. “It’s something that can become part of their historical legacy — an opportunity that I believe played a role in the latest climate change negotiations.”

This research was supported by the MIT International Policy Laboratory.

A version of this article originally appeared in the Spring 2016 issue of Energy Futures, the magazine of the MIT Energy Initiative. 


August 8, 2016 | More

Toward practical quantum computers

Quantum computers are largely hypothetical devices that could perform some calculations much more rapidly than conventional computers can. Instead of the bits of classical computation, which can represent 0 or 1, quantum computers consist of quantum bits, or qubits, which can, in some sense, represent 0 and 1 simultaneously.
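To make the “both at once” idea concrete, here is a minimal sketch of a single qubit as a two-component state vector; the particular amplitudes are illustrative, and the Born rule converts them into the probabilities of reading out a 0 or a 1.

```python
import numpy as np

# A single qubit as a two-component state vector |psi> = alpha|0> + beta|1>,
# normalized so that |alpha|^2 + |beta|^2 = 1. The amplitudes here are illustrative.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # an equal superposition of 0 and 1
psi = np.array([alpha, beta])

probabilities = np.abs(psi) ** 2                # Born rule: chance of reading 0 or 1
print(f"P(0) = {probabilities[0]:.2f}, P(1) = {probabilities[1]:.2f}")   # 0.50 and 0.50
```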

Although quantum systems with as many as 12 qubits have been demonstrated in the lab, building quantum computers complex enough to perform useful computations will require miniaturizing qubit technology, much the way the miniaturization of transistors enabled modern computers.

Trapped ions are probably the most widely studied qubit technology, but they’ve historically required a large and complex hardware apparatus. In today’s Nature Nanotechnology, researchers from MIT and MIT Lincoln Laboratory report an important step toward practical quantum computers, with a paper describing a prototype chip that can trap ions in an electric field and, with built-in optics, direct laser light toward each of them.

“If you look at the traditional assembly, it’s a barrel that has a vacuum inside it, and inside that is this cage that’s trapping the ions. Then there’s basically an entire laboratory of external optics that are guiding the laser beams to the assembly of ions,” says Rajeev Ram, an MIT professor of electrical engineering and one of the senior authors on the paper. “Our vision is to take that external laboratory and miniaturize much of it onto a chip.”

Caged in

The Quantum Information and Integrated Nanosystems group at Lincoln Laboratory was one of several research groups already working to develop simpler, smaller ion traps known as surface traps. A standard ion trap looks like a tiny cage, whose bars are electrodes that produce an electric field. Ions line up in the center of the cage, parallel to the bars. A surface trap, by contrast, is a chip with electrodes embedded in its surface. The ions hover 50 micrometers above the electrodes.

Cage traps are intrinsically limited in size, but surface traps could, in principle, be extended indefinitely. With current technology, they would still have to be held in a vacuum chamber, but they would allow many more qubits to be crammed inside.

“We believe that surface traps are a key technology to enable these systems to scale to the very large number of ions that will be required for large-scale quantum computing,” says Jeremy Sage, who together with John Chiaverini leads Lincoln Laboratory’s trapped-ion quantum-information-processing project. “These cage traps work very well, but they really only work for maybe 10 to 20 ions, and they basically max out around there.”

Performing a quantum computation, however, requires precisely controlling the energy state of every qubit independently, and trapped-ion qubits are controlled with laser beams. In a surface trap, the ions are only about 5 micrometers apart. Hitting a single ion with an external laser, without affecting its neighbors, is incredibly difficult; only a few groups had previously attempted it, and their techniques weren’t practical for large-scale systems.

Getting onboard

That’s where Ram’s group comes in. Ram and Karan Mehta, an MIT graduate student in electrical engineering and first author on the new paper, designed and built a suite of on-chip optical components that can channel laser light toward individual ions. Sage, Chiaverini, and their Lincoln Lab colleagues Colin Bruzewicz and Robert McConnell retooled their surface trap to accommodate the integrated optics without compromising its performance. Together, both groups designed and executed the experiments to test the new system.

“Typically, for surface electrode traps, the laser beam is coming from an optical table and entering this system, so there’s always this concern about the beam vibrating or moving,” Ram says. “With photonic integration, you’re not concerned about beam-pointing stability, because it’s all on the same chip that the electrodes are on. So now everything is registered against each other, and it’s stable.”

The researchers’ new chip is built on a quartz substrate. On top of the quartz is a network of silicon nitride “waveguides,” which route laser light across the chip. Above the waveguides is a layer of glass, and on top of that are the niobium electrodes. Beneath the holes in the electrodes, the waveguides break into a series of sequential ridges, a “diffraction grating” precisely engineered to direct light up through the holes and concentrate it into a beam narrow enough that it will target a single ion, 50 micrometers above the surface of the chip.
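The grating’s behavior can be understood through the standard first-order phase-matching condition for grating couplers, in which the effective index of the guided mode, the wavelength, the grating period, and the output angle are linked by n_eff − λ/Λ = n_out·sin θ. The sketch below solves that relation for the period; the wavelength, effective index, output index, and target angle are assumed, illustrative values rather than the parameters of this particular device.

```python
import math

# Solve the first-order grating-coupler condition  n_eff - wavelength/period = n_out * sin(theta)
# for the grating period. All numerical values are assumptions chosen for illustration.
wavelength_nm = 674.0    # assumed free-space wavelength of the addressing laser
n_eff = 1.8              # assumed effective index of the silicon nitride waveguide mode
n_out = 1.0              # assumed output medium (vacuum above the chip)
theta_deg = 10.0         # assumed emission angle measured from vertical

period_nm = wavelength_nm / (n_eff - n_out * math.sin(math.radians(theta_deg)))
print(f"grating period = {period_nm:.0f} nm")   # about 414 nm for these assumed values
```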

Prospects

With the prototype chip, the researchers were evaluating the performance of the diffraction gratings and the ion traps, but there was no mechanism for varying the amount of light delivered to each ion. In ongoing work, the researchers are investigating the addition of light modulators to the diffraction gratings, so that different qubits can simultaneously receive light of different, time-varying intensities. That would make programming the qubits more efficient, which is vital in a practical quantum information system, since the number of quantum operations the system can perform is limited by the “coherence time” of the qubits.

“As far as I know, this is the first serious attempt to integrate optical waveguides in the same chip as an ion trap, which is a very significant step forward on the path to scaling up ion-trap quantum information processors [QIP] to the sort of size which will ultimately contain the number of qubits necessary for doing useful QIP,” says David Lucas, a professor of physics at Oxford University. “Trapped-ion qubits are well-known for being able to achieve record-breaking coherence times and very precise operations on small numbers of qubits. Arguably, the most important area in which progress needs to be made is technologies which will enable the systems to be scaled up to larger numbers of qubits. This is exactly the need being addressed so impressively by this research.”

“Of course, it’s important to appreciate that this is a first demonstration,” Lucas adds. “But there are good prospects for believing that the technology can be improved substantially. As a first step, it’s a wonderful piece of work.”


August 8, 2016 | More

Microbial engineering technique could reduce contamination in biofermentation plants

The cost and environmental impact of producing liquid biofuels and biochemicals as alternatives to petroleum-based products could be significantly reduced, thanks to a new metabolic engineering technique.

Liquid biofuels are increasingly used around the world, either as a direct “drop-in” replacement for gasoline, or as an additive that helps reduce carbon emissions.

The fuels and chemicals are often produced by fermentation, in which microbes convert sugars from corn, sugar cane, or cellulosic plant mass into ethanol and other products. However, this process can be expensive, and developers have struggled to cost-effectively ramp up production of advanced biofuels to large-scale manufacturing levels.

One particular problem facing producers is the contamination of fermentation vessels with other, unwanted microbes. These invaders can outcompete the producer microbes for nutrients, reducing yield and productivity.

Ethanol is known to be toxic to most microorganisms other than the yeast used to produce it, Saccharomyces cerevisiae, naturally preventing contamination of the fermentation process. However, this is not the case for the more advanced biofuels and biochemicals under development.

To kill off invading microbes, companies must instead use either steam sterilization, which requires fermentation vessels to be built from expensive stainless steels, or costly antibiotics. Exposing large numbers of bacteria to these drugs encourages the appearance of tolerant bacterial strains, which can contribute to the growing global problem of antibiotic resistance.

Now, in a paper published today in the journal Science, researchers at MIT and the Cambridge startup Novogy describe a new technique that gives producer microbes the upper hand against unwanted invaders, eliminating the need for such expensive and potentially harmful sterilization methods.

The researchers engineered microbes, such as Escherichia coli, with the ability to extract nitrogen and phosphorous — two vital nutrients needed for growth — from unconventional sources that could be added to the fermentation vessels, according to Gregory Stephanopoulos, the Willard Henry Dow Professor of Chemical Engineering and Biotechnology at MIT, and Joe Shaw, senior director of research and development at Novogy, who led the research.

What’s more, because the engineered strains only possess this advantage when they are fed these unconventional chemicals, the chances of them escaping and growing in an uncontrolled manner outside of the plant in a natural environment are extremely low.

“We created microbes that can utilize some xenobiotic compounds that contain nitrogen, such as melamine,” Stephanopoulos says. Melamine is a xenobiotic, or artificial, chemical that contains 67 percent nitrogen by weight.

Conventional biofermentation refineries typically use ammonium to supply microbes with a source of nitrogen. But contaminating organisms, such as Lactobacilli, can also extract nitrogen from ammonium, allowing them to grow and compete with the producer microorganisms.

These contaminating organisms, in contrast, do not have the genetic pathways needed to utilize melamine as a nitrogen source, says Stephanopoulos.

“They need that special pathway to be able to utilize melamine, and if they don’t have it they cannot incorporate nitrogen, so they cannot grow,” he says.

The researchers engineered E. coli with a synthetic six-step pathway that allows it to express enzymes needed to convert melamine to ammonia and carbon dioxide, in a strategy they have dubbed ROBUST (Robust Operation By Utilization of Substrate Technology).

When they experimented with a mixed culture of the engineered E. coli strain and a naturally occurring strain, they found that the engineered strain rapidly outcompeted the control when fed on melamine.
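The logic of that co-culture result can be captured with a toy growth model, sketched below, in which nitrogen is supplied only as melamine, growth follows simple Monod kinetics, and only the engineered strain carries the pathway to assimilate the nitrogen. Every rate constant is invented for illustration; this is not the paper's model.

```python
# Toy co-culture model (for illustration only; not the paper's analysis).
# Nitrogen is supplied solely as melamine; only the engineered strain carries
# the pathway needed to assimilate it, so the contaminant cannot grow.
dt, hours = 0.1, 48                    # time step and duration (hours)
mu_max, K, Y = 0.5, 0.1, 2.0           # assumed Monod kinetics and biomass yield
melamine_N, ammonium_N = 1.0, 0.0      # nitrogen available in each form (g/L)
engineered, contaminant = 0.01, 0.01   # starting biomass (g/L)

for _ in range(int(hours / dt)):
    mu_eng = mu_max * melamine_N / (K + melamine_N)   # can use melamine nitrogen
    mu_con = mu_max * ammonium_N / (K + ammonium_N)   # cannot: growth rate ~0
    d_eng = mu_eng * engineered * dt
    d_con = mu_con * contaminant * dt
    melamine_N = max(melamine_N - d_eng / Y, 0.0)     # nitrogen consumed by growth
    engineered += d_eng
    contaminant += d_con

print(f"after {hours} h: engineered {engineered:.2f} g/L, contaminant {contaminant:.3f} g/L")
```

Run as written, the engineered strain grows until the nitrogen is exhausted, while the contaminant, unable to tap the melamine, barely grows at all — the essence of the selection effect described here.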

They then investigated engineering the yeast Saccharomyces cerevisiae to express a gene that allowed it to convert the nitrile-containing chemical cyanamide into urea, from which it could obtain nitrogen.

The engineered strain was then able to grow with cyanamide as its only nitrogen source.

Finally, the researchers engineered both S. cerevisiae and the yeast Yarrowia lipolytica to use potassium phosphite as a source of phosphorous.

Like the engineered E. coli strain, both the engineered yeasts were able to outcompete naturally occurring strains when fed on these chemicals.

“So by engineering the strains to make them capable of utilizing these unconventional sources of phosphorous and nitrogen, we give them an advantage that allows them to outcompete any other microbes that may invade the fermenter without sterilization,” Stephanopoulos says.

The microbes were tested successfully on a variety of biomass feedstocks, including corn mash, cellulosic hydrolysate, and sugar cane, and showed no loss of productivity compared with naturally occurring strains.

The paper provides a novel approach to allow companies to select for their productive microbes and select against contaminants, according to Jeff Lievense, a senior engineering fellow at the San Diego-based biotechnology company Genomatica who was not involved in the research.

“In theory you could operate a fermentation plant with much less expensive equipment and lower associated operating costs,” Lievense says. “I would say you could cut the capital and capital-related costs [of fermentation] in half, and for very large-volume chemicals, that kind of saving is very significant,” he says.

The ROBUST strategy is now ready for industrial evaluation, Shaw says. The technique was developed with Novogy researchers, who have tested the engineered strains at laboratory scale and in trials with 1,000-liter fermentation vessels, and with Felix Lam of the MIT Whitehead Institute for Biomedical Research, who led the cellulosic hydrolysate testing.

Novogy now hopes to use the technology in its own advanced biofuel and biochemical production, and is also interested in licensing it for use by other manufacturers, Shaw says.


August 4, 2016 | More

Reach in and touch objects in videos with “Interactive Dynamic Video”

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an imaging technique called Interactive Dynamic Video (IDV) that lets you reach in and “touch” objects in videos. Using traditional cameras and algorithms, IDV looks at the tiny, almost invisible vibrations of an object to create video simulations that users can virtually interact with.

Interactive Dynamic Video demonstration from the MIT Computer Science and Artificial Intelligence Laboratory

Video: MIT CSAIL

“This technique lets us capture the physical behavior of objects, which gives us a way to play with them in virtual space,” says CSAIL PhD student Abe Davis, who will be publishing the work this month for his final dissertation. “By making videos interactive, we can predict how objects will respond to unknown forces and explore new ways to engage with videos.”

Davis says that IDV has many possible uses, from filmmakers producing new kinds of visual effects to architects determining if buildings are structurally sound. For example, he shows that while the popular Pokemon Go app can drop virtual characters into real-world environments, IDV goes a step further by enabling virtual objects (including Pokemon) to interact with their environments in specific, realistic ways, like bouncing off the leaves of a nearby bush.

He outlined the technique in a paper he published earlier this year with PhD student Justin G. Chen and professor Fredo Durand.

How it works

The most common way to simulate objects’ motions is by building a 3-D model. Unfortunately, 3-D modeling is expensive, and can be almost impossible for many objects. While algorithms exist to track motions in video and magnify them, there aren’t ones that can reliably simulate objects in unknown environments. Davis’ work shows that even five seconds of video can have enough information to create realistic simulations.

To simulate the objects, the team analyzed video clips to find “vibration modes” at different frequencies that each represent distinct ways that an object can move. By identifying these modes’ shapes, the researchers can begin to predict how these objects will move in new situations.
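In spirit, this resembles classical modal analysis applied to motion extracted from video. The heavily simplified sketch below works on a single synthetic motion signal: it picks the strongest spectral peaks as modal frequencies, then re-excites those modes as damped oscillators. The real IDV pipeline recovers dense, image-space mode shapes from local phase variations across the frame; every constant here is a stand-in.

```python
import numpy as np

# Heavily simplified, single-signal sketch of the modal idea behind IDV:
# (1) find dominant vibration frequencies in an observed motion signal,
# (2) re-excite those modes as damped oscillators in response to a new impulse.
fps, seconds = 240.0, 5.0                       # assumed frame rate and clip length
t = np.arange(0, seconds, 1.0 / fps)

# Synthetic stand-in for the tiny displacement of one tracked point in a video
observed = (0.8 * np.sin(2 * np.pi * 3.0 * t)
            + 0.3 * np.sin(2 * np.pi * 7.4 * t)
            + 0.05 * np.random.randn(t.size))

# Step 1: pick the two strongest spectral peaks as modal frequencies
spectrum = np.abs(np.fft.rfft(observed))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
modes = sorted(freqs[np.argsort(spectrum)[-2:]])

# Step 2: synthesize the response to an impulse at t = 0 via modal superposition
zeta = 0.02                                     # assumed damping ratio per mode
response = sum(np.exp(-zeta * 2 * np.pi * f * t) * np.sin(2 * np.pi * f * t) for f in modes)

print("recovered mode frequencies (Hz):", np.round(modes, 1))   # ~[3.0, 7.4]
print("first response samples:", np.round(response[:4], 3))
```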

“Computer graphics allows us to use 3-D models to build interactive simulations, but the techniques can be complicated,” says Doug James, a professor of computer science at Stanford University who was not involved in the research. “Davis and his colleagues have provided a simple and clever way to extract a useful dynamics model from very tiny vibrations in video, and shown how to use it to animate an image.”

Davis used IDV on videos of a variety of objects, including a bridge, a jungle gym, and a ukulele. With a few mouse-clicks, he showed that he can push and pull the image, bending and moving it in different directions. He even demonstrated how he can make his own hand appear to telekinetically control the leaves of a bush.

“If you want to model how an object behaves and responds to different forces, we show that you can observe the object respond to existing forces and assume that it will respond in a consistent way to new ones,” says Davis, who also found that the technique even works on some existing videos on YouTube.

Applications

Researchers say that the tool has many potential uses in engineering, entertainment, and more.

For example, in movies it can be difficult and expensive to get CGI characters to realistically interact with their real-world environments. Doing so requires filmmakers to use green-screens and create detailed models of virtual objects that can be synchronized with live performances.

But with IDV, a videographer could take video of an existing real-world environment and make some minor edits like masking, matting, and shading to achieve a similar effect in much less time — and at a fraction of the cost.

Engineers could also use the system to simulate how an old building or bridge would respond to strong winds or an earthquake.

“The ability to put real-world objects into virtual models is valuable for not just the obvious entertainment applications, but also for being able to test the stress in a safe virtual environment, in a way that doesn’t harm the real-world counterpart,” says Davis.

He says that he is also eager to see other applications emerge, from studying sports film to creating new forms of virtual reality.

“When you look at VR companies like Oculus, they are often simulating virtual objects in real spaces,” he says. “This sort of work turns that on its head, allowing us to see how far we can go in terms of capturing and manipulating real objects in virtual space.”

This work was supported by the National Science Foundation and the Qatar Computing Research Institute. Chen also received support from Shell Research through the MIT Energy Initiative.


August 2, 2016 | More

Portable device produces biopharmaceuticals on demand

For medics on the battlefield and doctors in remote or developing parts of the world, getting rapid access to the drugs needed to treat patients can be challenging.

Biopharmaceutical drugs, which are used in a wide range of therapies including vaccines and treatments for diabetes and cancer, are typically produced in large, centralized fermentation plants. This means they must be transported to the treatment site, which can be expensive, time-consuming, and challenging to execute in areas with poor supply chains.

Now a portable production system, designed to manufacture a range of biopharmaceuticals on demand, has been developed by researchers at MIT, with funding from the Defense Advanced Research Projects Agency (DARPA).

In a paper published today in the journal Nature Communications, the researchers demonstrate that the system can be used to produce a single dose of treatment from a compact device containing a small droplet of cells in a liquid.

In this way, the system could ultimately be carried onto the battlefield and used to produce treatments at the point of care. It could also be used to manufacture a vaccine to prevent a disease outbreak in a remote village, according to senior author Tim Lu, an associate professor of biological engineering and electrical engineering and computer science, and head of the Synthetic Biology Group at MIT’s Research Laboratory of Electronics.

“Imagine you were on Mars or in a remote desert, without access to a full formulary, you could program the yeast to produce drugs on demand locally,” Lu says.

The system is based on a programmable strain of yeast, Pichia pastoris, which can be induced to express one of two therapeutic proteins when exposed to a particular chemical trigger. The researchers chose P. pastoris because it can grow to very high densities on simple and inexpensive carbon sources, and is able to express large amounts of protein.

“We altered the yeast so it could be more easily genetically modified, and could include more than one therapeutic in its repertoire,” Lu says.

When the researchers exposed the modified yeast to the estrogen β-estradiol, the cells expressed recombinant human growth hormone (rHGH). In contrast, when they exposed the cells to methanol, the yeast expressed the protein interferon.

The cells are held within a millimeter-scale table-top microbioreactor, containing a microfluidic chip, which was originally developed by Rajeev Ram, a professor of electrical engineering at MIT, and his team, and then commercialized by Kevin Lee — an MIT graduate and co-author — through a spin-off company.

A liquid containing the desired chemical trigger is first fed into the reactor, to mix with the cells.

Inside the reactor, the cell-and-chemical mixture is surrounded on three sides by polycarbonate; on the fourth side is a flexible and gas-permeable silicone rubber membrane.

By pressurizing the gas above this membrane, the researchers are able to gently massage the liquid droplet to ensure its contents are fully mixed together.

“This makes sure that the one milliliter (of liquid) is homogeneous, and that is important because diffusion at these small scales, where there is no turbulence, takes a surprisingly long time,” says Ram, who was also a senior author of the paper.

Because the membrane is gas permeable, it allows oxygen to flow through to the cells, while any carbon dioxide they produce can be easily extracted.

The device continuously monitors conditions within the microfluidic chip, including oxygen levels, temperature, and pH, to ensure the optimum environment for cell growth. It also monitors cell density.

If the yeast is required to produce a different protein, the liquid is simply flushed through a filter, leaving the cells behind. Fresh liquid containing a new chemical trigger can then be added, to stimulate production of the next protein.
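Conceptually, switching products is a repeatable feed, express, and flush cycle. The sketch below captures that cycle as a tiny Python class; the class, its method names, and the trigger-to-protein mapping are illustrative stand-ins for the written description, not the device's actual control software.

```python
# Conceptual sketch of the feed-express-flush protocol described above.
# Names and the trigger-to-protein mapping are illustrative stand-ins.
class MicrobioreactorSketch:
    def __init__(self):
        self.cells = "programmable P. pastoris"   # retained by the filter throughout
        self.trigger = None

    def feed(self, trigger):
        """Introduce fresh liquid carrying a chemical trigger."""
        self.trigger = trigger

    def express(self):
        """Protein produced by the strain for the current trigger."""
        return {"beta-estradiol": "recombinant human growth hormone (rHGH)",
                "methanol": "interferon"}.get(self.trigger, "nothing")

    def flush(self):
        """Flush spent liquid through the filter; the cells stay behind."""
        self.trigger = None

reactor = MicrobioreactorSketch()
reactor.feed("beta-estradiol")
print(reactor.express())   # rHGH
reactor.flush()            # cells retained, liquid replaced
reactor.feed("methanol")
print(reactor.express())   # interferon
```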

Although other research teams have previously attempted to build microbioreactors, these have not had the ability to retain the protein-producing cells while flushing out the liquid they are mixed with, Ram says. “You want to keep the cells because they are your factory,” he says. “But you also want to rapidly change their chemical environment, in order to change the trigger for protein production.”

The researchers have demonstrated a very logical and practical way to produce biologic drugs, according to Luke P. Lee, a professor of bioengineering at the University of California at Berkeley, who was not involved in the research. Their smart biologics production technique uses one of the best integrated microfluidics systems, Lee says.

“It is a pragmatic solution for biomanufacturing, and the team’s flexible and portable platform shows an authentic way of producing personalized therapeutics,” he says.

The researchers are now investigating the use of the system in combinatorial treatments, in which multiple therapeutics, such as antibodies, are used together.

Combining multiple therapeutics in this way can be expensive if each requires its own production line, Lu says.

“But if you could engineer a single strain, or maybe even a consortium of strains that grow together, to manufacture combinations of biologics or antibodies, that could be a very powerful way of producing these drugs at a reasonable cost,” he says.


July 29, 2016 | More