News and Research
Don Rosenfield

Donald Rosenfield, a longtime leader of MIT LGO, dies at 70

With deep sadness, the LGO community mourns its founding program director, Don Rosenfield. He leaves a legacy of over 1,200 LGO alumni and countless colleagues, students, and friends who were touched and inspired by him.

LGO

This MIT program will purchase carbon offsets for student travel

Led by Yakov Berenshteyn (LGO ’19), a new Jetset Offset program will reduce the environmental impact of student travel by purchasing carbon offsets.

In one week about 100 MIT Sloan students will fly around the world to study regional economies, immerse themselves in different cultures, and produce more than 300 metric tons of carbon dioxide.

Because of the air travel required for study tours, those students will produce the same emissions in two weeks as 1,600 average American car commuters would in that same timeframe, said Yakov Berenshteyn, LGO ’19.

While Berenshteyn doesn’t want to do away with student travel at MIT Sloan, he is hoping to lessen the impact on the environment, with the help of his Jetset Offset program.

The pilot involves purchasing carbon offsets for the three MBA and one Master of Finance study tours for spring break 2018.

Carbon offsets are vetted projects that help capture or avoid carbon emissions. These projects can include reforestation and building renewable energy sources. The reductions might not have an immediate impact on emissions, Berenshteyn said, but they are “still the primary best practice for us to use.”

“This is raising awareness of, and starting to account for, our environmental impacts from student travel,” Berenshteyn said. “You don’t get much choice in the efficiency of the airplane that you board.”

The idea for the offset came in October, when Berenshteyn was helping to plan the January Leaders for Global Operations Domestic Plant Trek. He realized that, over the two weeks of the trip, the roughly 50 students and staff would log a total of 400,000 air miles.

Berenshteyn spent months researching how to counterbalance the emissions from the burned jet fuel. He also got input from MIT Sloan professor John Sterman. Berenshteyn said he looked at other options, like funding local projects such as solar panel installations, but those were too small in scale to make much of a difference.

Universities around the world are applying carbon offsets and carbon-neutral practices in some form to their operations. Berenshteyn said Duke University has a program similar to the air-travel carbon offsets he proposes for MIT Sloan.

The Leaders for Global Operations program purchased 67 metric tons of offsets through Gold Standard for the January student trek, and those offsets are going to reforestation efforts in Panama.
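
The article does not describe the offset accounting in detail. As a rough illustration, a back-of-the-envelope estimate with an assumed average emission factor per passenger-mile lands close to the purchased figure; the factor below is an assumption for the sketch, not a number from the article.

```python
# Back-of-the-envelope CO2 estimate for the January trek described above.
# ASSUMPTION: an average economy-class emission factor of ~0.17 kg CO2 per
# passenger-mile; real factors vary by aircraft, load factor, and route.

PASSENGER_MILES = 400_000          # total air miles from the article
KG_CO2_PER_PASSENGER_MILE = 0.17   # assumed factor, not from the article

total_kg = PASSENGER_MILES * KG_CO2_PER_PASSENGER_MILE
total_metric_tons = total_kg / 1000

print(f"Estimated emissions: {total_metric_tons:.0f} metric tons CO2")
# With this assumed factor the estimate is ~68 t, in line with the
# 67 metric tons of offsets purchased for the trek.
```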

In the case of the four upcoming study trips, MIT Sloan’s student life office is picking up the tab.

“My colleague Paul Buckley (associate director of student life) had an idea for something like this close to a decade ago, when he first arrived in student life, and noted the extent to which our students travel during their time at Sloan,” said Katie Ferrari, associate director of student life. “So this was an especially meaningful partnership for us. Yakov’s idea is exactly the kind of student initiative we love to support. He is practicing principled, innovative leadership with an eye toward improving the world.”

Ferrari said the support for the pilot this semester is a stake in the ground for incorporating carbon offset purchases into future student-organized travel — which is what Berenshteyn said was his hope for launching the pilot.

“It should be at Sloan, if a student is planning a trip, they have their checklist of insurance, emergency numbers, and carbon offsets,” he said.

March 21, 2018

A machine-learning approach to inventory-constrained dynamic pricing

LGO thesis advisor and MIT Civil and Environmental Engineering Professor David Simchi-Levi led a team on a new study showing how a model-based algorithm known as Thompson sampling can be used for revenue management.

In 1933, William R. Thompson published an article on a Bayesian model-based algorithm that would ultimately become known as Thompson sampling. This heuristic was largely ignored by the academic community until recently, when it became the subject of intense study, thanks in part to internet companies that successfully implemented it for online ad display.

Thompson sampling addresses the exploration-exploitation trade-off in the multiarmed bandit problem: it chooses actions to maximize current performance while continually acquiring new information to improve future performance.

In a new study, “Online Network Revenue Management Using Thompson Sampling,” MIT Professor David Simchi-Levi and his team have now demonstrated that Thompson sampling can be used for a revenue management problem where the demand function is unknown.

Incorporating inventory constraints

A main challenge to adopting Thompson sampling for revenue management is that the original method does not incorporate inventory constraints. However, the authors show that Thompson sampling can be naturally combined with a classical linear program formulation to include inventory constraints.

The result is a dynamic pricing algorithm that incorporates domain knowledge and has strong theoretical performance guarantees as well as promising numerical performance results.
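
The paper's full formulation handles multiple products sharing multiple resources. As a minimal single-product sketch of the idea, assuming Bernoulli demand at each candidate price with Beta priors, the code below samples a demand model from the posterior each period, solves a small linear program against remaining inventory, and updates the posterior with what it observes; the function name, prices, and probabilities are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def thompson_pricing(prices, true_probs, T=500, inventory=150, seed=0):
    """Single-product dynamic pricing sketch: Thompson sampling + a small LP.

    prices     : candidate price levels
    true_probs : unknown purchase probability at each price (simulation only)
    """
    rng = np.random.default_rng(seed)
    K = len(prices)
    alpha = np.ones(K)  # Beta posterior parameters for purchase probability
    beta = np.ones(K)
    revenue = 0.0

    for t in range(T):
        if inventory <= 0:
            break
        # 1) Sample a demand model from the posterior.
        d = rng.beta(alpha, beta)
        # 2) LP over price "time shares" x_k: maximize expected revenue with
        #    expected sales per period held below remaining inventory per
        #    remaining period.
        c_rate = inventory / (T - t)
        res = linprog(
            c=-(np.array(prices) * d),        # maximize -> minimize negative
            A_ub=np.vstack([d, np.ones(K)]),  # demand-rate row, time-share row
            b_ub=np.array([c_rate, 1.0]),
            bounds=[(0, 1)] * K,
        )
        x = np.clip(res.x, 0, None)
        # 3) Offer price k with probability x_k (offer nothing otherwise).
        probs = np.append(x, max(0.0, 1.0 - x.sum()))
        k = rng.choice(K + 1, p=probs / probs.sum())
        if k == K:
            continue  # no offer this period
        # 4) Observe demand and update the posterior for the offered price.
        sale = rng.random() < true_probs[k]
        alpha[k] += sale
        beta[k] += 1 - sale
        if sale:
            inventory -= 1
            revenue += prices[k]
    return revenue

# Example run with hypothetical prices and purchase probabilities.
print(thompson_pricing(prices=[5, 7, 9], true_probs=[0.8, 0.5, 0.3]))
```

The linear program is what carries the inventory constraint: it keeps expected sales per period within the remaining-inventory budget, while the sampling step supplies the demand estimates that an otherwise standard Thompson sampling loop would use.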

Interestingly, the authors demonstrate that Thompson sampling achieves poor performance when it does not take into account domain knowledge.

Simchi-Levi says, “It is exciting to demonstrate that Thompson sampling can be adapted to combine a classical linear program formulation, to include inventory constraints, and to see that this method can be applied to general revenue management problems in the business-to-consumer and business-to-business environments.”

Industry application improves revenue

The proposed dynamic pricing algorithm is highly flexible and is applicable in a range of industries, from airlines and internet advertising all the way to online retailing.

The new study, which has just been accepted by the journal Operations Research, is part of a larger research project by Simchi-Levi that combines machine learning and stochastic optimization to improve revenue, margins, and market share.

Algorithms developed in this research stream have been implemented at companies such as Groupon, a daily market maker, Rue La La, a U.S. online flash sales retailer, B2W Digital, a large online retailer in Latin America, and at a large brewing company, where Simchi-Levi and his team optimized the company’s promotion and pricing in various retail channels.


March 19, 2018

A revolutionary model to optimize promotion pricing

William F. Pounds Professor of Management and LGO thesis advisor Georgia Perakis recently authored a Huffington Post article about using a scientific, data-driven approach to determine optimal promotion pricing.

Grocery stores run price promotions all the time. You see them when a particular brand of spaghetti sauce is $1 off or your favorite coffee is buy one get one free. Promotions are used for a variety of reasons from increasing traffic in stores to boosting sales of a particular brand. They are responsible for a lot of revenue, as a 2009 A.C. Nielsen study found that 42.8% of grocery store sales in the U.S. are made during promotions. This raises an important question: How much money does a retailer leave on the table by using current pricing practices as opposed to a more scientific, data-driven approach in order to determine optimal promotional prices?

The promotion planning tools currently available in the industry are mostly manual and based on “what-if” scenarios. In other words, supermarkets tend to use intuition and habit to decide when, how deep, and how often to promote products. Yet promotion pricing is very complicated. Product managers have to solve problems like whether or not to promote an item in a particular week, whether or not to promote two items together, and how to order upcoming discounts ― not to mention incorporating seasonality issues in their decision-making process.

There are plenty of people in the industry with years of experience who are good at this, but their brains are not computers. They can’t process the massive amounts of data available to determine optimal pricing. As a result, lots of money is left on the table.

To revolutionize the field of promotion pricing, my team of PhD students from the Operations Research Center at MIT, our collaborators from Oracle, and I sought to build a model based on several goals. It had to be simple and realistic. It had to be easy to estimate directly from the data, but also computationally easy and scalable. In addition, it had to lead to interesting and valuable results for retailers in practice.
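
The post does not spell out the model itself. As a toy illustration of the kind of promotion-scheduling decision described above, here is a small brute-force sketch; the weekly lift estimates, the promotion budget, and the no-consecutive-weeks rule are all invented for the example and are not part of the MIT/Oracle model.

```python
from itertools import combinations

# Toy promotion-scheduling sketch (NOT the MIT/Oracle model): pick which
# weeks to promote a single item to maximize estimated incremental profit,
# subject to at most `max_promos` promotions and a rule forbidding
# promotions in consecutive weeks. All numbers are invented.

est_lift = [120, 80, 200, 150, 90, 170, 60, 140]  # estimated lift per week
max_promos = 3

def best_schedule(lift, max_promos):
    weeks = range(len(lift))
    best, best_value = (), 0.0
    for r in range(1, max_promos + 1):
        for combo in combinations(weeks, r):
            # Business rule: no two promotions in consecutive weeks.
            if any(b - a == 1 for a, b in zip(combo, combo[1:])):
                continue
            value = sum(lift[w] for w in combo)
            if value > best_value:
                best, best_value = combo, value
    return best, best_value

schedule, value = best_schedule(est_lift, max_promos)
print(f"Promote in weeks {schedule} for an estimated lift of {value}")
```

A realistic model also has to estimate how each promotion changes demand in later weeks, which is part of what makes the problem computationally hard at retail scale.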

Read the full post at The Huffington Post.

Georgia Perakis is the William F. Pounds Professor of Management and a Professor of Operations Research and Operations Management at the MIT Sloan School of Management.

March 16, 2018

JDA Software collaborates with MIT to advance research in intelligent supply chains

David Simchi-Levi, Professor of Civil and Environmental Engineering and LGO thesis advisor, is leading a multiyear collaboration with JDA Software.

MIT will work with JDA, leveraging their business domain expertise and client base, to advance research in intelligent supply chains.

The collaboration aims to improve supply chain performance and customer experiences by leveraging data, computational power, and machine learning.

Professor of civil and environmental engineering David Simchi-Levi says, “I am very pleased JDA has entered into a multiyear research collaboration with MIT, and I look forward to working with the JDA Lab and teams. The collaboration will support our students and advance research in machine learning, optimization and consumer behavior modeling.”

This collaboration with JDA brings real world challenges, opportunities, and data, and will help to further the advancement of MIT’s world-class research in supply chain and retail analytics.

The MIT and JDA research teams will create real-world use cases to expand predictive demand, intelligent execution, and smart supply chain and retail planning that will yield a unique business strategy. These use cases will explore new data science algorithms that combine natural language processing, predictive behavior, and prescriptive optimization by taking into account past behaviors, and predicting and changing future behaviors.

“It is more critical than ever to infuse innovation into every aspect of the supply chain, as edge technologies such as the Internet of Things (IoT) and artificial intelligence (AI) are essential to digitally transforming supply chains. This collaboration allows us to tap into the extraordinary mindshare at MIT to accelerate the research into more intelligent and cognitive capabilities moving forward,” says Desikan Madhavanur, executive vice president and chief development officer at JDA.

“We are excited to be working on the future of supply chain with MIT to double down on researching enhanced, innovative, and value-driven supply chain solutions,” Madhavanur says.

The multiyear collaboration will support students on the research teams and the development of knowledge and education.

Simchi-Levi will speak at JDA’s annual customer conference, JDA FOCUS 2018, in Orlando, May 6-9, 2018.

March 16, 2018

Making appliances and energy grids more efficient

Professor of electrical engineering and frequent LGO thesis advisor James Kirtley Jr. is working on a new design for fans that offers high efficiency at an affordable cost, which could have a huge impact for developing countries.

The ceiling fan is one of the most widely used mechanical appliances in the world. It is also, in many cases, one of the least efficient.

In India, ceiling fans have been used for centuries to get relief from the hot, humid climate. Hand-operated fans called punkahs can be traced as far back as 500 BC and were fixtures of life under the British Raj in the 18th and 19th centuries. Today’s ceiling fans run on electricity and are more ubiquitous than ever. The Indian Fan Manufacturers’ Association reported producing 40 million units in 2014 alone, and the number of fans in use nationwide is estimated in the hundreds of millions, perhaps as many as half a billion.

James Kirtley Jr., a professor of electrical engineering at MIT, has been investigating the efficiency of small motors like those found in ceiling fans for more than 30 years.

“A typical ceiling fan in India draws about 80 watts of electricity, and it does less than 10 watts of work on the air,” he says. “That gives you an efficiency of just 12.5 percent.”

Low-efficiency fans pose a variety of energy problems. Consumers don’t get good value for the electricity they buy from the grid, and energy utilities have to deal with the power losses and grid instability that result from low-quality appliances.

But there’s a reason these low-efficiency fans, driven by single-phase induction motors, are so popular: They’re inexpensive. “The best fans on the market in India — those that move a reasonable amount of air and have a low input power — are actually quite costly,” Kirtley says. The high price puts them out of reach for most of India’s population.

Now Kirtley, with support from the Tata Center for Technology and Design, is working on a single-phase motor design that offers high efficiency at an affordable cost. He says the potential impact is huge.

“If every fan in India saved just 2 watts of electricity, that would be the equivalent of a nuclear power plant’s generation capacity,” he says. “If we could make these fans substantially more efficient than they are, operating off of DC electricity, you could imagine extending the use of ceiling fans into rural areas where they could provide a benefit to the quality of life.”
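
As a rough check on that claim, using the article's earlier estimate of up to half a billion fans in use and an assumed plant capacity of about one gigawatt:

```python
# Rough check of the "2 watts per fan ~ one power plant" claim above.
# ASSUMPTIONS: ~500 million fans in use (the article's upper estimate) and
# a typical nuclear plant capacity of about 1 gigawatt.

fans_in_use = 500e6        # fans, upper estimate cited earlier in the article
savings_per_fan_w = 2.0    # watts saved per fan
plant_capacity_gw = 1.0    # assumed plant capacity, GW

total_savings_gw = fans_in_use * savings_per_fan_w / 1e9
print(f"Fleet-wide savings: {total_savings_gw:.1f} GW, "
      f"roughly {total_savings_gw / plant_capacity_gw:.1f} nuclear plant(s)")
```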

Mohammad Qasim, a graduate student in Kirtley’s research group and a fellow in the Tata Center, says the benefits could reach multiple stakeholders. “Having more efficient appliances means a lower electricity bill for the consumer and fewer power losses on the utility’s side,” he says.

Choosing the right motor

“The idea is to try and hit that high-efficiency mark at a cost that is only a little more than that of existing low-efficiency fans,” Kirtley says. “We imagine a fan that might have an input power of 15 watts and an efficiency of 75 percent.”

To accomplish that, Kirtley and Qasim are exploring two approaches: creating an improved version of the conventional induction motor, or switching to a brushless DC motor, which may be more expensive but can deliver superior efficiency.

In either case, they plan to use power electronics — devices that control and optimize the flow of electricity through the motor — to improve the power quality and grid compatibility of the fan. Power electronics can also be used to convert AC electricity from the grid into DC, opening up the possibility of using DC motors in ceiling fans.

Brushless DC motors, which are the younger technology, use permanent magnets to establish a magnetic field that creates torque between the motor’s two main components, the rotor and stator. “You can think of it almost like a dog chasing his tail,” Kirtley says. “If I establish the magnetic field in some direction, the magnet turns to align itself in that direction. As I rotate the magnetic field, the magnet moves to align, and that keeps the rotor spinning.”

Induction motors, on the other hand, use no magnets but instead create a rotating magnetic field by flowing current through the stator coils. Because they use AC electricity, they are directly grid compatible, but their efficiency and stability can be improved by using power electronics to optimize the speed of the motor.

International collaboration

In determining which path to take, induction or brushless DC motor, Kirtley and Qasim are leaning on the expertise of Vivek Agarwal, a professor of electrical engineering at the Indian Institute of Technology, Bombay (IITB). Agarwal is a specialist in power electronics.

“The collaboration with Professor Agarwal’s group is so important,” Kirtley says. “They can give us a good idea of what the two different power electronics packages will cost. You would typically think of the brushless motor package as the more expensive option, but it may or may not be.”

Outside of the lab, on-the-ground detective work is key. When Qasim visited India in January 2017, he hit the streets of Mumbai with one of the graduate students from Agarwal’s lab. Together, they visited people across the ceiling fan industry, from manufacturers to repairmen in street-side shops.

“This visit was a big motivation for us,” says Qasim, noting that they were able to glean insights that will help them design a more robust and durable motor. “We want to understand the major maintenance issues that cause these motors to break down so that we can avoid common sources of failure. It was important to make the effort to talk to local people who had real experience repairing these motors.”

Usha International, an appliance manufacturer based in New Delhi, has been a key advisor in the early stages of the project and helped identify ceiling fans as a critical focus area. Engineers at Usha agree with Kirtley’s assessment that there is an unmet need for high-efficiency motors at relatively low cost, and Qasim says the Usha team shared what they had learned from designing their own high-efficiency fans.

Now, Kirtley and Qasim are engaged in the daunting task of envisioning how an ideal motor might look.

“This is a very challenging problem, to design a motor that is both efficient and inexpensive,” Kirtley says. “There’s still a question of which type of motor is going to be the best one to pursue. If we can get a good understanding of what exactly the machine ought to do, we can proceed to do a good machine design.”

Qasim has built a test facility in Kirtley’s laboratory at MIT, which he is using to characterize a variety of existing fans. His experimental data, combined with his fieldwork in India, should provide a set of design requirements for the improved motor. From there, he and Kirtley will work with the IITB researchers to pair the machine with an appropriate power electronics package.

In reducing the power demands of the standard ceiling fan by as much as 65 watts, they hope to have a far-reaching, positive effect on India’s energy system. But that’s only the start. Ultimately, they believe efficient, affordable motors can be applied to a number of common appliances, potentially saving gigawatts of electricity in a country that is working hard to expand reliable energy access for what will soon be the world’s largest population.

This article appeared in the Autumn 2017 issue of Energy Futures, the magazine of the MIT Energy Initiative.


March 2, 2018

Urban heat island effects depend on a city’s layout

Franz-Josef Ulm, professor of civil and environmental engineering and LGO thesis advisor, led a recent study of the urban heat island effect, which causes cities to be hotter than their surroundings. The findings could help cities in hot locations be designed to minimize extra heating.

The arrangement of a city’s streets and buildings plays a crucial role in the local urban heat island effect, which causes cities to be hotter than their surroundings, researchers have found. The new finding could provide city planners and officials with new ways to influence those effects.

Some cities, such as New York and Chicago, are laid out on a precise grid, like the atoms in a crystal, while others such as Boston or London are arranged more chaotically, like the disordered atoms in a liquid or glass. The researchers found that the “crystalline” cities had a far greater buildup of heat compared to their surroundings than did the “glass-like” ones.

The study, published today in the journal Physical Review Letters, found that these differences in city patterns, which the researchers call “texture,” were the most important determinant of a city’s heat island effect. The research was carried out by MIT and National Center for Scientific Research senior research scientist Roland Pellenq, who is also director of a joint MIT/CNRS/Aix-Marseille University laboratory called <MSE>2 (MultiScale Material Science for Energy and Environment); professor of civil and environmental engineering Franz-Josef Ulm; research assistant Jacob Sobstyl; <MSE>2 senior research scientist T. Emig; and M.J. Abdolhosseini Qomi, assistant professor of civil and environmental engineering at the University of California at Irvine.

The heat island effect has been known for decades. It essentially results from the fact that urban building materials, such as concrete and asphalt, can absorb heat during the day and radiate it back at night, much more than areas covered with vegetation do. The effect can be quite dramatic, adding as much as 10 degrees Fahrenheit to night-time temperatures in places such as Phoenix, Arizona. In such places this effect can significantly increase health problems and energy use during hot weather, so a better understanding of what produces it will be important in an era when ever more people are living in cities.

The team found that mathematical models originally developed to analyze atomic structures in materials provide a useful tool, leading to a straightforward formula to describe the way a city’s design influences its heat island effect, Pellenq says.

“We use tools of classical statistical physics,” he explains. The researchers adapted formulas initially devised to describe how individual atoms in a material are affected by forces from the other atoms, and they reduced these complex sets of relationships to much simpler statistical descriptions of the relative distances of nearby buildings to each other. They then applied them to patterns of buildings determined from satellite images of 47 cities in the U.S. and other countries, ultimately ending up with a single index number for each — called the local order parameter — ranging between 0 (total disorder) and 1 (perfect crystalline structure), to provide a statistical description of the cluster of nearest neighbors of any given building.

For each city, they had to collect reliable temperature data, which came from one station within the city and another outside it but nearby, and then determine the difference.

To calculate this local order parameter, physicists typically have to use methods such as bombarding materials with neutrons to locate the positions of atoms within them. But for this project, Pellenq says, “to get the building positions we don’t use neutrons, just Google maps.” Using algorithms they developed to determine the parameter from the city maps, they found that the cities varied from 0.5 to 0.9.
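
The paper defines its own local order parameter; as a simplified illustration of the general idea, measuring how grid-like a set of building positions is, the sketch below computes a standard fourfold bond-orientational order measure from building centroid coordinates. The neighbor count and the metric itself are choices made for the example, not the study's exact definition.

```python
import numpy as np
from scipy.spatial import cKDTree

def fourfold_order(points, k_neighbors=4):
    """Average |psi_4| over sites: high for a square grid, low for disorder.

    points: (N, 2) array of building centroid coordinates.
    """
    pts = np.asarray(points, dtype=float)
    tree = cKDTree(pts)
    # k_neighbors + 1 because each point's nearest neighbor is itself.
    _, idx = tree.query(pts, k=k_neighbors + 1)
    order_values = []
    for i, neighbors in enumerate(idx):
        vectors = pts[neighbors[1:]] - pts[i]
        angles = np.arctan2(vectors[:, 1], vectors[:, 0])
        # Fourfold bond-orientational order for this site.
        order_values.append(abs(np.mean(np.exp(4j * angles))))
    return float(np.mean(order_values))

# A regular grid scores high; jittered ("glass-like") positions score lower.
grid = np.array([(x, y) for x in range(10) for y in range(10)], dtype=float)
rng = np.random.default_rng(0)
messy = grid + rng.normal(scale=0.3, size=grid.shape)
print(f"grid: {fourfold_order(grid):.2f}, disordered: {fourfold_order(messy):.2f}")
```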

The differences in the heating effect seem to result from the way buildings reradiate heat that can then be reabsorbed by other buildings that face them directly, the team determined.

Especially for places such as China where new cities are rapidly being built, and other regions where existing cities are expanding rapidly, the information could be important to have, he says. In hot locations, cities could be designed to minimize the extra heating, but in colder places the effect might actually be an advantage, and cities could be designed accordingly.

“If you’re planning a new section of Phoenix,” Pellenq says, “you don’t want to build on a grid, since it’s already a very hot place. But somewhere in Canada, a mayor may say no, we’ll choose to use the grid, to keep the city warmer.”

The effects are significant, he says. The team evaluated all the states individually and found, for example, that in the state of Florida alone urban heat island effects cause an estimated $400 million in excess costs for air conditioning. “This gives a strategy for urban planners,” he says. While in general it’s simpler to follow a grid pattern, in terms of placing utility lines, sewer and water pipes, and transportation systems, in places where heat can be a serious issue, it can be well worth the extra complications for a less linear layout.

This study also suggests that research on construction materials may offer a way forward to properly manage heat interaction between buildings in cities’ historical downtown areas.

The work was partly supported by the Concrete Sustainability Hub at MIT, sponsored by the Portland Cement Association and the Ready-Mixed Concrete Research and Education Foundation.

February 22, 2018

Getting to the heart of carbon nanotube clusters

Brian Wardle, LGO thesis advisor and professor of aeronautics and astronautics, has led a team of MIT researchers in developing a systematic method to predict the two-dimensional patterns that carbon nanotube (CNT) arrays form.

Integrating nanoscale fibers such as carbon nanotubes (CNTs) into commercial applications, from coatings for aircraft wings to heat sinks for mobile computing, requires them to be produced in large scale and at low cost. Chemical vapor deposition (CVD) is a promising approach to manufacture CNTs in the needed scales, but it produces CNTs that are too sparse and compliant for most applications.

Applying and evaporating a few drops of a liquid such as acetone to the CNTs is an easy, cost-effective method to more tightly pack them together and increase their stiffness, but until now, there was no way to forecast the geometry of these CNT cells.

MIT researchers have now developed a systematic method to predict the two-dimensional patterns CNT arrays form after they are packed together, or densified, by evaporating drops of either acetone or ethanol. CNT cell size and wall stiffness grow proportionally with cell height, they report in the Feb. 14 issue of Physical Chemistry Chemical Physics.

One way to think of this CNT behavior is to imagine how entangled fibers such as wet hair or spaghetti collectively reinforce each other. The larger this entangled region is, the higher its resistance to bending will be. Similarly, longer CNTs can better reinforce one another in a cell wall. The researchers also find that CNT binding strength to the base on which they are produced, in this case, silicon, makes an important contribution to predicting the cellular patterns that these CNTs will form.

“These findings are directly applicable to industry because when you use CVD, you get nanotubes that have curvature, randomness, and are wavy, and there is a great need for a method that can easily mitigate these defects without breaking the bank,” says Itai Stein SM ’13, PhD ’16, who is a postdoc in the Department of Aeronautics and Astronautics. Co-authors include materials science and engineering graduate student Ashley Kaiser, mechanical engineering postdoc Kehang Cui, and senior author Brian Wardle, professor of aeronautics and astronautics.

“From our previous work on aligned carbon nanotubes and their composites, we learned that more tightly packing the CNTs is a highly effective way to engineer their properties,” says Wardle. “The challenging part is to develop a facile way of doing this at scales that are relevant to commercial aircraft (hundreds of meters), and the predictive capabilities that we developed here are a large step in that direction.”

Detailed measurements

Carbon nanotubes are highly desirable because of their thermal, electrical, and mechanical properties, which are directionally dependent. Earlier work in Wardle’s lab demonstrated that waviness reduces the stiffness of CNT arrays by as little as 100 times, and up to 100,000 times. The technical term for this stiffness, or ability to bend without breaking, is elastic modulus. Carbon nanotubes are from 1,000 to 10,000 times longer than they are thick, so they deform principally along their length.

For an earlier paper published in the journal Applied Physics Letters, Stein and colleagues used nanoindentation techniques to measure the stiffness of aligned carbon nanotube arrays and found it to be 1/1,000 to 1/10,000 of the theoretical stiffness of individual carbon nanotubes. Stein, Wardle, and former visiting MIT graduate student Hülya Cebeci also developed a theoretical model explaining changes at different packing densities of the nanofibers.

The new work shows that compacting CNTs by capillary forces, by first wetting them with acetone or ethanol and then evaporating the liquid, also produces CNTs that are hundreds to thousands of times less stiff than theoretical values predict. This capillary effect, known as elastocapillarity, is similar to how a sponge often dries into a more compact shape after being wetted and then dried.

“Our findings all point to the fact that the CNT wall modulus is much lower than the normally assumed value for perfect CNTs because the underlying CNTs are not straight,” says Stein. “Our calculations show that the CNT wall is at least two orders of magnitude less stiff than we expect for straight CNTs, so we can conclude that the CNTs must be wavy.”

Heat adds strength

The researchers used a heating technique to increase the adhesion of their original, undensified CNT arrays to their silicon wafer substrate. CNTs densified after heat treatment were about four times harder to separate from the silicon base than untreated CNTs. Kaiser and Stein, who share first authorship of the paper, are currently developing an analytical model to describe this phenomenon and tune the adhesion force, which would further enable prediction and control of such structures.

“Many applications of vertically aligned carbon nanotubes [VACNTs], such as electrical interconnects, require much denser arrays of nanotubes than what is typically obtained for as-grown VACNTs synthesized by chemical vapor deposition,” says Mostafa Bedewy, assistant professor at the University of Pittsburgh, who was not involved in this work. “Hence, methods for postgrowth densification, such as those based on leveraging elastocapillarity have previously been shown to create interesting densified CNT structures. However, there is still a need for a better quantitative understanding of the factors that govern cell formation in densified large-area arrays of VACNTs. The new study by the authors contributes to addressing this need by providing experimental results, coupled with modeling insights, correlating parameters such as VACNT height and VACNT-substrate adhesion to the resulting cellular morphology after densification.

“There are still remaining questions about how the spatial variation of CNT density, tortuosity [twist], and diameter distribution across the VACNT height affects the capillary densification process, especially since vertical gradients of these features can be different when comparing two VACNT arrays having different heights,” says Bedewy. “Further work incorporating spatial mapping of internal VACNT morphology would be illuminating, although it will be challenging as it requires combining a suite of characterization techniques.”

Picturesque patterns

Kaiser, who was a 2016 MIT Summer Scholar, analyzed the densified CNT arrays with scanning electron microscopy (SEM) in the MIT Materials Research Laboratory’s NSF-MRSEC-supported Shared Experimental Facilities. While gently applying liquid to the CNT arrays in this study caused them to densify into predictable cells, vigorously immersing the CNTs in liquid imparts much stronger forces to them, forming randomly shaped CNT networks. “When we first started exploring densification methods, I found that this forceful technique densified our CNT arrays into highly unpredictable and interesting patterns,” says Kaiser. “As seen optically and via SEM, these patterns often resembled animals, faces, and even a heart — it was a bit like searching for shapes in the clouds.” A colorized version of her optical image showing a CNT heart is featured on the cover of the Feb. 14 print edition of Physical Chemistry Chemical Physics.

“I think there is an underlying beauty in this nanofiber self-assembly and densification process, in addition to its practical applications,” Kaiser adds. “The CNTs densify so easily and quickly into patterns after simply being wet by a liquid. Being able to accurately quantify this behavior is exciting, as it may enable the design and manufacture of scalable nanomaterials.”

This work made use of the MIT Materials Research Laboratory Shared Experimental Facilities, which are supported in part by the MRSEC Program of the National Science Foundation, and MIT Microsystems Technology Laboratories. This research was supported in part by Airbus, ANSYS, Embraer, Lockheed Martin, Saab AB, Saertex, and Toho Tenax through MIT’s Nano-Engineered Composite Aerospace Structures Consortium and by NASA through the Institute for Ultra-Strong Composites by Computational Design.

February 15, 2018

If retailers want to compete with Amazon, they should use their tax savings to raise wages

Zeynep Ton, professor of operations management and LGO thesis advisor, discusses the impact of new tax laws for retailers and the potential to achieve operational excellence.

Walmart announced today that it is raising its starting wages in the United States from $9 per hour to $11, giving employees one-time cash bonuses of as much as $1,000, and expanding maternity and parental leave benefits as a result of the recently enacted tax reform. It is part of Walmart’s broader effort to create a better experience for its employees and customers. The new tax law creates a major business opportunity for other retailers as well — if their leaders are wise enough to take advantage of it.

The U.S. corporate tax rate is dropping from 35% to 21%. Retailers, many of whom have been paying the full tax rate, are going to benefit substantially. Take a retailer that makes 15% pretax income. Assuming its effective tax rate goes from 35% to 21%, it could save the equivalent of 2.3% of sales. Specialty retailers with higher pretax income will save even more.
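
As a first-order sketch of that calculation (the published figure also depends on how effective rates and margins are measured, so the exact number varies with those assumptions):

```python
# First-order estimate of tax savings as a share of sales. The figure cited
# in the article (2.3% of sales) depends on assumptions about effective
# rates; this simplified version treats the statutory rates as effective.

pretax_margin = 0.15   # pretax income as a share of sales
old_rate = 0.35
new_rate = 0.21

savings_share_of_sales = pretax_margin * (old_rate - new_rate)
print(f"Savings: {savings_share_of_sales:.1%} of sales")  # ~2.1% under these assumptions
```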

Retail executives have a choice in how they use these savings. I believe the smartest choice — one that will help them compete against online retailers like Amazon — is to create a better experience for customers and to achieve operational excellence in stores. For most retailers, doing both requires more investment in store employees — starting with higher wages and more-predictable work schedules. My research shows that combining higher pay for retail employees with a set of smart operational choices that leverage that investment results in more-satisfied customers, employees, and investors.

Retailers that do not provide a compelling draw for their customers may not make it. In 2017, according to Fung Global Retail and Technology, there were nearly 7,000 store closing announcements, the second-largest number since 2000. There were 662 bankruptcy filings in retail, according to bankruptcydata.com, up 30% from 2016. This year is expected to be even worse. What’s more, two of my MIT Sloan MBA students analyzed store openings and closings from 2015 to 2017, looking at department stores with more than 50 stores and over $100 million in revenues, and found a positive correlation between customer satisfaction, as measured by Yelp ratings, and the net change in the number of open stores.

Many companies can no longer grow profitably just by adding stores — they need to get more out of their existing stores. Operational excellence makes that possible by ensuring that merchandise is in stock and well displayed, checkout is efficient, stores are clean, and employees are responsive to customers. Operational excellence also makes it possible to provide a better omnichannel experience by linking digital and brick-and-mortar channels. For instance, retailers are increasingly expecting in-store employees to serve customers who order online, by shipping products to those customers or enabling them to pick up their orders in the store. If that doesn’t work smoothly — that is, without operational excellence — it’s going to waste a lot of employee and customer time and convince customers they’re better off shopping online than in the store.

Creating a great customer experience and achieving operational excellence both require a capable and motivated workforce. You need knowledgeable employees who are cross-trained to manage customers’ needs wherever they arise. You need employees who can empathize with customers, are empowered to solve customer problems, and can spot opportunities to improve operations. You also need a capable and motivated workforce that can embrace and leverage new technologies.

Read the full post at Harvard Business Review

Zeynep Ton is an Adjunct Associate Professor of Operations Management at the MIT Sloan School of Management.

January 22, 2018

MIT launches MITx MicroMasters in Principles of Manufacturing

David Hardt, professor of mechanical engineering and LGO thesis advisor, discusses the launch of the Institute’s third MITx MicroMasters program, in principles of manufacturing.

MIT today announced the launch of the Institute’s third MITx MicroMasters program, in principles of manufacturing. The new program brings an advanced manufacturing curriculum to the MITx platform for the first time and enables learners worldwide to advance their careers by mastering the fundamental skills needed for global manufacturing excellence and competitiveness.

New manufacturing firms are growing at their fastest rate since 1993, as technology revolutionizes the field. The MITx Principles of Manufacturing MicroMasters program focuses on broad-based concepts that underlie all manufacturing environments, putting graduates of this unique program in a position to leverage the industry’s fast-paced growth. The graduate-level program enables engineers, product designers, and technology developers to advance their careers in a broad array of engineering capacities, including manufacturing, supply chain management, design, and product development.

“Throughout an entire undergraduate degree program, the conventional engineering curriculum teaches students that everything is certain, and results are exact, ignoring inherent uncertainty,” says David Hardt, a professor of mechanical engineering at MIT. “All too often, people fail to get products, and even companies, across what’s known as the valley of death, which is the gap between small-volume and full-scale production. Their efforts fail because they haven’t been given the fundamental skill set for managing uncertainties associated with production rate, quality, and cost. And, that’s exactly what we do in this new program.”

Noting the continued evolution of technologies, instability of supply chains, and introduction of new production processes, Hardt says that manufacturing technologies “change so quickly that unless students master the cohesive set of fundamentals that underlie production, they won’t know how to handle many of the unexpected challenges that arise. It’s not just about knowing the latest technologies. To be a good decision-maker in manufacturing, a person has to master the core principles that determine how to apply those technologies under uncertain conditions.”

By maintaining a technology-agnostic curriculum and embracing the fundamental principles that govern manufacturing, the MITx Principles of Manufacturing MicroMasters curriculum will maintain its relevance in this constantly changing environment.

The new MicroMasters program traces its roots back to the Master of Engineering in Advanced Manufacturing and Design, originally established at MIT in 2001 through the Singapore-MIT Alliance for Research and Technology. This master’s program provides a launchpad for graduates to become innovative future leaders in established manufacturing firms and new entrepreneurial ventures. The MITx Principles of Manufacturing MicroMasters program announced today leverages this curriculum.

The MITx Principles of Manufacturing MicroMasters curriculum consists of eight online courses, which span the fields of process control, manufacturing systems, engineering management, and supply chain planning and design. Each course runs for eight weeks, and students who complete the entire curriculum and earn their MicroMasters credential will be eligible to apply to the Master of Engineering in Advanced Manufacturing and Design degree program on campus at MIT. If accepted, course credits earned through the MITx Principles of Manufacturing MicroMasters will be applied to the on-campus degree program, enabling students to earn their master’s in eight months. Principles of Manufacturing online coursework commences in March 2018. The first cohort of students who have earned their MicroMasters credential and been admitted to the on-campus master’s degree program will arrive at MIT in January 2020 and graduate that August.

“We are excited to help the MIT faculty who have spent many years crafting this innovative curriculum teach the principles of manufacturing to learners around the country and around the world,” says Dean for Digital Learning Krishna Rajagopal. “At a time when manufacturing is changing rapidly, we are happy to make this learning opportunity open to all. For those who wish to advance their careers, the MITx MicroMasters will be a valuable professional credential. They will also be eligible to accelerate their completion of a master’s degree at MIT — or elsewhere. We are using digital technologies to leverage MIT’s commitment to rigorous, high-quality curricula in a way that expands access to, and transforms, graduate-level education for working professionals.”

The Rochester Institute of Technology (RIT) will also offer a pathway to their Master of Science in Professional Studies that awards credit to learners who successfully complete the MITx Principles of Manufacturing MicroMasters credential and are then admitted to RIT. The RIT MS in Professional Studies is an innovative open curriculum environment that enables students to create a customized degree path that meets their educational or career objectives. The curriculum can include courses from multiple RIT graduate programs across two or three areas of study. RIT has been working with MITx since early 2017, and they currently offer a similar pathway to holders of the MITx Supply Chain Management MicroMasters credential.

“Digital technologies are enabling us to extend this cutting-edge manufacturing curriculum, which is the result of many years of research and development, to learners around the world regardless of their location or socioeconomic status,” says Vice President for Open Learning Sanjay Sarma. “The innovative application of open learning technologies has broken down barriers and enabled people of all ages and backgrounds to access world-class educational content. We hope that Principles of Manufacturing, MIT’s third MicroMasters program, will dramatically expand the opportunities for professional and lifelong learners to advance their careers and pursue their passions.”

January 10, 2018

Turning any room into an operating room

Daniel Frey, LGO thesis advisor, professor of mechanical engineering, and faculty research director of MIT’s D-Lab, is working with a team to develop innovative access to clean surgical care through a product called SurgiBox.

Dust, dirt, bacteria, flies — these are just some of the many contaminants surgeons need to worry about when operating in the field or in hospitals located in developing nations. According to a 2015 study in The Lancet, 5 billion people don’t have access to safe, clean surgical care. Graduate student Sally Miller ’16 is hoping to change that with a product called SurgiBox.

“The idea of SurgiBox is to take the operating room and shrink it down to just the patient’s size,” Miller explains. “Keeping an entire room clean and surgery-ready requires a lot of resources that many hospitals and surgeons across the globe don’t have.”

Upon starting her master’s degree in the Department of Mechanical Engineering, where she also received her bachelor’s, Miller connected with Daniel Frey, professor of mechanical engineering and faculty research director of MIT’s D-Lab. Frey had been working on the concept of SurgiBox with Debbie Teodorescu, the company’s founder and CEO, who graduated from Harvard Medical School and acted as a D-Lab research affiliate. Having just won the Harvard President’s Challenge grant of $70,000, the SurgiBox team was looking for a mechanical engineering graduate student who could help enhance the product’s design.

“We were looking for a way to accelerate the project,” explains Frey, who also serves as Miller’s advisor. “At MIT, grad students can really deepen a project and move it forward at a faster pace.”

Enter Miller, who took on the project as her master’s thesis. “The first thing I did was assess the design they already had, but use my mechanical engineering lens to make the product more affordable, more usable, and easier to manufacture,” Miller explains.

Miller found inspiration in 2.75 (Medical Device Design). For the class project, she visited the VA Medical Center, where she watched a pacemaker surgery. During the surgery, doctors placed an incise drape — an adhesive, antimicrobial sheet infused with iodine — on the site of the incision.

“Watching the surgeons that day I realized, ‘Oh, I can use this adhesive drape idea for SurgiBox,’” Miller says.

In addition to incorporating adhesive drapes at the point of incision, Miller has redesigned the structure of SurgiBox. The original design had a rectangular frame that sealed to the patient at the armpit and waist. The frame held up a plastic, tent-like enclosure with a fan and high-efficiency particulate air (HEPA) filter that removes 99.997 percent of contaminants. Miller realized, to make SurgiBox more portable and cost effective, she had to get rid of the frames. With her new design, SurgiBox now consists of an inflatable tent; the outward pressure from the HEPA-filtered air gives the surgical site its structure.

This structural change marked a turning point in SurgiBox’s development. “Now the patient doesn’t have to be in the SurgiBox. Rather, the SurgiBox is on them,” Frey explains. “I thought that was a big breakthrough for us.”

Teodorescu agrees. “Sally is stunningly capable at both manual and digital forms of technical drafting,” she says. “Because of her designs, a key part of SurgiBox now fits into a Ziploc bag.” This latest iteration of SurgiBox now meets the same germ-proof and blood-proof standard as surgical gowns used by doctors treating Ebola patients.

The next step for the SurgiBox team is user testing. In addition to continuing particle testing, the team will partner with local Boston-area hospitals to test the ergonomics of the design and ensure it aligns with surgical workflows. After that, the team will test its efficacy at partner hospitals in developing nations where the technology is most needed.

As for Miller, after graduating with her master’s in January she is hoping to start a career in product design. “Working on SurgiBox during my master’s and in classes like 2.009 (Product Engineering Processes) in my undergraduate classes gave me hands-on experience in creating a product with real-world application,” Miller says. “I’m open to working on products in a number of fields and am excited to see what my future holds after MIT.”

January 10, 2018

Sloan

An inside look at Europe’s financial woes

European monetary policy leaders visit MIT Sloan to share news of the region’s complex economic challenges.

It’s been seven years since the financial crisis began with the slowing of the housing market and the fall of Bear Stearns, the Wall Street investment bank. Yet central bankers around the world are still struggling to chart the best course in the face of the uneven recovery, none more so than those in the eurozone, where recent economic numbers still reflect widespread unemployment and slow to nonexistent growth.

In an Oct. 9 talk at MIT Sloan, Ewald Nowotny, governor of the National Bank of Austria and a member of the governing council of the European Central Bank, shared the latest on the eurozone monetary picture with students and faculty, including Deputy Dean S.P. Kothari, who served as host and moderator, and Simon Johnson, professor of entrepreneurship and former chief economist at the International Monetary Fund. Johnson joined Nowotny on a panel along with Hannes Androsch, Austria’s former minister of finance and former vice chancellor.

With inflation low and interest rates at record lows, the ECB finds itself “in uncharted territory,” said Nowotny as efforts continue to revive the eurozone economies. “We have to be aware that monetary policy has limitations. It has to be decided how Europe will find a balance between monetary policy, fiscal policy and fiscal discipline, and reforms.” He expressed hope that policymakers would see these areas as complementary, rather than as alternatives to each other.

The ECB has introduced a new program to begin buying assets, an approach that the U.S. Federal Reserve Bank has taken with its extensive bond purchasing effort since the financial crisis, but that is “something quite new” for the ECB, said Nowotny. The central bank will also begin supervising the largest European banks starting in November, taking responsibility for the oversight of some 80 percent of the region’s collective bank balance sheet.

“We are aware that taking up responsibility for supervision of the banks is a very risky affair for central banks … but somebody has to do it,” said Nowotny. The move will make the ECB one of the world’s largest bank supervisors. The objective, said the governor, is to provide a higher degree of trust in European banks.

As the panel discussion began, Johnson expressed confidence in the long-term future of the European monetary union and noted that the ECB had already taken many steps to address the region’s economic crisis. Androsch joined the conversation on a more pessimistic note, pointing out that the unstable geopolitical situations in Eastern Europe, the Middle East, and West Africa were likely hindering recovery globally. “It would be an understatement to say the world economy is not in the best shape,” he said. “We have not yet overcome the crisis that started in 2007 as a banking crisis and became an economic crisis.”

Like Nowotny, Androsch acknowledged that monetary policy can only do so much. “That things did not go worse [following the financial crisis] is thanks to the activity of central banks,” he said. “But they have come to the edge. Monetary policy is very important but can only be one ingredient in a reasonable economic policy cocktail.”

Nowotny and Androsch were visiting MIT Sloan en route to the International Monetary Fund’s annual meeting in Washington, D.C.

March 29, 2018

Good questions: A conversation with leadership expert Hal Gregersen

For nearly 20 years, Harvard Business School professor Clayton Christensen and I have been trying to figure out what causes great leaders to ask the right questions. During the last few years we’ve interviewed about 100 senior leaders from around the world for a forthcoming book and so far the answers have been intriguing. We asked Stewart Brand, who started the Whole Earth Catalog decades ago, how he surfaces such incredible questions. He said, “Every day, I wonder how many things I am dead wrong about.”

When we interviewed A.G. Lafley, the chairman, president, and CEO at Procter & Gamble, he shared the same thing but in different words. Every Monday morning he wakes up asking himself, “What am I going to be curious about this week?” Which really means, “What don’t I know? What am I missing?”

In senior executive roles, the toughest challenge is figuring out what they don’t know they don’t know. This creates the most dangerous blind spot, but many executives are not actively working to uncover it. That’s why Kodak dove under. That’s why Nokia dove under. And the global list goes on where senior leaders failed to explore the crucial blind spots that came back to destroy their companies. It’s a dangerous emotional space for most executives to enter, and they avoid it at all costs. A critical question for any executive to answer is “How long ago did someone ask you a question that caused you to feel uncomfortable?” If the answer is more than seven days, it’s time for a leadership shake-up, because the bad news that you need to hear is likely not making it your way.

Is that type of inquisitiveness based on talent and intuition, or can it be learned?

I taught Leading Organizations to the MIT Executive MBA class this summer, and it was a powerful learning experience, both for me and many of them. At the beginning, some were skeptical about why I asked them to keep a questioning journal about all the questions they had related to the course content, as well as any insights gained. During class we also practiced a methodology that I call “catalytic questioning.” Each student identified a professional challenge for which they honestly did not have a solution but wanted one. They shared their challenge with two other peers in class and then spent four fast and furious minutes brainstorming nothing but questions about their challenge. Almost everybody walked away with a different perspective or angle on the challenge, as they uncovered new questions that they’d never considered before. And those new questions were just like keys that unlock a door; only in this case they opened up entirely new avenues of action. I’ve found the same response working with thousands of executives around the world.

Most leaders know that asking the right questions makes a difference, but they’re hard-pressed when it comes to teaching anyone else how to do it. That’s the core of what I’m trying to achieve through The 4-24 Project, which is a non-profit organization I founded dedicated to keeping questioning skills alive so individuals can pass this skill on to the next generation of leaders.

How will you continue your work on questioning at MIT Sloan?

I believe that powerful insights come from deep interactions across disciplinary boundaries. MIT is world class when it comes to technology and science, and then you combine that with a world-class business school—it’s unbeatable. Before joining MIT, I thought, “There must be some really incredible things going on there.”

After joining MIT, I discovered that data backed up my hunch. MIT alumni have started or lead companies that collectively account for the 11th largest economy in the world. Now my hunch is that to do what they’ve done, MIT alumni must have been asking different, better questions. Based on my brief interactions with students, faculty and alumni so far, I believe this is one of the core capabilities that leaders gain from an MIT experience. Now I’m hoping to figure out what it is that we can do to bring in the right people and then equip them to go off as better questioners, ultimately creating even greater value.

Recent photography by Hal Gregersen

You have incorporated your work on questioning and innovation into your work as a photographer. What can executives learn from the pursuit of photography and other arts?

Every 4-year-old on the face of the earth is both a great artist and a great questioner. And all of us were once 4 years old, so we have it in our genes to ask great questions and to be great artists. Photography is my art of choice. I fell in love with a camera at the age of fifteen and have been smitten ever since.

Thousands of pictures and four decades later, my photography work has come full swing into my leadership work. The mere act of photographing helps us see things that we wouldn’t otherwise see and that’s core to creating the right questions. If we don’t see new things, we won’t get new questions. The trick is using the camera to unlock new insights into leadership challenges—especially the things we don’t know we don’t know about ourselves, others and our organizations—as well as create beautiful images. That’s what we tried to accomplish recently in a unique workshop co-taught with Sam Abell, a 30-year National Geographic veteran, through the Santa Fe Photography Workshop. We brought together a small group of senior executives from around the world who love photography and taught them how to ask better questions as leaders and photographers. It was thrilling to work with Sam, and weave the power of photography into the heart of our leadership. After four intensive days of working together, each of us, including myself, walked away transformed. Powerful stuff.

March 22, 2018 | More

Andrew Lo

This is your brain on stocks

From MarketWatch Ever since I was a graduate student in economics, I’ve been struggling with the uncomfortable observation that economic theories often don’t seem to work in practice. That goes for that most influential economic theory, the Efficient Markets Hypothesis, which holds that investors are rational decision makers and market prices fully reflect all available information, that is, the “wisdom of crowds.” Certainly, the principles of Efficient Markets are an excellent approximation to reality during normal business environments. It is one of the most useful, powerful, and beautiful pieces of economic reasoning that economists have ever proposed. It has saved generations of portfolio managers from bad investment decisions, democratizing finance along the way through passive investment vehicles like index funds. Then came the Financial Crisis of 2008; the “wisdom of crowds” was replaced by the “madness of mobs.” Investors reacted emotionally and instinctively in response to extreme business environments — good or … Read More »

The post This is your brain on stocks–Andrew Lo appeared first on MIT Sloan Experts.

March 19, 2018 | More

Beer’s role in innovation

Many great—or seemingly great—ideas come to fruition during the course of drinking a beer. When you’re out with the guys (or girls), one or two cold ones could have you rhapsodizing about how you’re going to change the world. This is most likely when self-lowering toilet seats, automatic pet petters, and self-twirling ice cream cones were all dreamed into existence.

As great as these and other inventions are, we’re not sure beer had any role in their creation. But has beer had a role in actual innovation?

Self-driving cars are all the rage in the news lately, with Google and Uber fighting it out over patents and racing to the front of the line for consumer release. While they were focused on cars for the everyday driver, the first self-driving truck delivered 50,000 cans of Budweiser 120 miles in Colorado.

That’s right. The first self-driven truck was used to deliver beer.

Budweiser has come a long way since the days of the horse and cart, right? In the earliest days of beer delivery, customers got their drink of choice only because it was brought to them daily by horse and wagon.

You’re probably familiar with the Clydesdales, still often used in Budweiser commercials to tug at heartstrings. These horses were bred by farmers along the banks of the River Clyde in Lanarkshire, Scotland. The Great Flemish Horse was the forerunner of the Clydesdale, which was bred to pull loads of more than one ton at a walking speed of five miles per hour. While that kind of pulling power was amazing during those days, it was still slow and expensive. Each hitch horse needed 20 to 25 quarts of whole grains, minerals and vitamins, 50 to 60 pounds of hay, and 30 gallons of water per day.

Is it any wonder that Anheuser-Busch was the exclusive U.S. licensee of the Rudolf Diesel patents? One might assume Ford or the railroads would have been first on board with the development of diesel-powered trucks, but it was actually beer.

Knowing how much was needed to keep those magnificent horses healthy and hardy, it seems diesel was a logical next step. This is a classic example of early adopter customers driving a new technology.

Effect on Jobs

As with most disruptive innovations, jobs were affected first when the diesel truck was introduced and again with the self-driving truck. There is a fundamental difference between the two cases, as we’ll discuss.

When diesel trucks were introduced, some people naturally found their skills were no longer needed, but the overall net effect was not that great. Those who took care of the horses and maintained the stables did feel the sting; their jobs were replaced by mechanics and garage managers. Those who drove the horse-drawn wagons were replaced by truck drivers.

It’s worth noting here that the Teamsters Union was formed to protect those who drove teams of draft animals, such as oxen, horses, or mules. While they didn’t stop diesel trucks from eventually taking all the jobs, they did get workers organized. Organization led to training, which then led those horse and mule drivers to drive trucks instead. There’s one problem solved.

With some training and education, the jobs for mechanics and garage managers were eventually filled, too. What seemed insurmountable was, in fact, just a small step toward the future of trucking. Now, according to NPR, truck driving may be the most common job in America. Thus far, the trucking industry has been nearly immune to the automation that has eliminated thousands of other blue-collar jobs in the past forty years.

Until now.

Read the full post at Huffington Post.

Joseph Hadzima is a Senior Lecturer in the Martin Trust Center for MIT Entrepreneurship.

March 15, 2018 | More

Kevin O’Leary on fintech, customer acquisition, and cryptocurrency

Kevin O’Leary’s blunt delivery, honed in nearly a decade on “Shark Tank,” makes clear his positions on fintech investment and cryptocurrency regulation.

A financial technology entrepreneur — or anyone pitching a startup — with a concise pitch and a unique skill set won’t get funding without mastery of the company’s financials, according to Kevin O’Leary, the celebrity investor from ABC’s long-running TV show “Shark Tank.”

“If you don’t know your numbers,” O’Leary said, “you deserve to burn in hell, and I’ll put you there myself.”

Known to “Shark Tank” viewers as “Mr. Wonderful,” O’Leary offered blunt advice during a keynote address, “Entrepreneurship in Fintech: a VC Perspective,” at the fourth-annual MIT FinTech Conference March 10 in Cambridge, Mass. During his nearly hour-long presentation, O’Leary revealed some of the lessons for entrepreneurs he’s learned from a decade on Shark Tank and reviewed the challenges and opportunities in financial technology and cryptocurrencies.

Know your numbers
The inability to talk coherently about a company’s financials has come back to bite “Shark Tank” contestants, and it hurts financial technology entrepreneurs just as much. Indeed, fintech and traditional businesses face common problems.

“Customer acquisition cost is the number one issue,” O’Leary said. “Fintech is ubiquitous because there’s no barrier to entry. But the big barrier is … how to get the cost of acquiring a customer to be worth less than their lifetime value.”
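As a back-of-the-envelope illustration of that rule, the check is simply whether a customer’s lifetime value exceeds what it cost to acquire that customer. The sketch below uses entirely hypothetical numbers and a deliberately simple lifetime-value formula.

```python
# A minimal sketch of the unit-economics check described above.
# All figures are hypothetical; real LTV models also discount future revenue
# and model churn explicitly.
def lifetime_value(monthly_revenue: float, gross_margin: float,
                   avg_lifetime_months: float) -> float:
    """Contribution a customer generates over their expected lifetime."""
    return monthly_revenue * gross_margin * avg_lifetime_months

cac = 120.0                                  # hypothetical cost to acquire one customer
ltv = lifetime_value(monthly_revenue=15.0,   # e.g. a subscription fintech app
                     gross_margin=0.6,
                     avg_lifetime_months=24)
print(f"LTV = ${ltv:.0f}, CAC = ${cac:.0f}, ratio = {ltv / cac:.1f}x")
# A ratio below 1.0 means each new customer destroys value.
```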

“Shark Tank’s” millions of viewers each Friday night solve that acquisition-cost problem for entrepreneurs who appear on the show, O’Leary said, if they know how to take advantage. One of his most celebrated investments, Wicked Good Cupcakes, did. O’Leary said many businesses that got deals with “sharks” in the same season failed to acquire customers after appearing on the show because they were not prepared for the surge of orders. Wicked Good Cupcakes not only anticipated the surge — upgrading its internet capacity ahead of time — but has proved masterful over the years at using free media to attract new customers, O’Leary said.

Fintech opportunity and taking cryptocurrency mainstream
O’Leary’s financial technology offerings include a small-dollar investment product called Beanstox, which allows people to learn to invest without making big bets. The multi-millionaire entrepreneur, who routinely jets between countries with his extended family, finds the promise of cryptocurrency exciting.

“I should be able to land in Geneva and pay with any coin,” O’Leary said. “I want to have the government accept it and be able to pay taxes with it. That’s the dream.”

Real estate also is ripe for investment opportunity, he said. Asset-based coins with a smart contract will allow for fractional ownership, where an investor can put down 50 cents or $1 million, and provide a less expensive way to raise capital than a real estate investment trust.

But the dream isn’t a reality yet, as cryptocurrencies remain a constant source of regulatory scrutiny. There are those who view cryptocurrency as a tool for breaking down governments and others who embrace it within a vision of future commerce, according to O’Leary. Not an anarchist, O’Leary said financial technology players such as himself have to be willing to work with regulators to make cryptocurrency mainstream.

“I want to find a way to use this technology to be ubiquitous, as everybody else does,” O’Leary said. “The fringe movement will slowly erode away as those of us going in through the regulated portal do this. This is the most exciting technology in the friggin’ world, but we’ve got to get the regulators off of everybody’s back by complying with them.”

March 15, 2018 | More

Probing the origins of happiness

New research on what makes people happy serves as a handbook for policymakers and senior leaders.

“The Origins of Happiness: The Science of Well-Being over the Life Course” lays out a sweeping framework for happiness in childhood and adulthood — and explains how policymakers and leaders can implement this knowledge to improve society.

MIT Sloan PhD student George Ward co-authored the book. He explains what makes people happy, what doesn’t, and why it matters.

What was the impetus behind the book?
The broad idea was to bring together a lot of disparate work on happiness. In the last few decades, there’s been a huge amount of work on what you can loosely call the “science of happiness.” More recently, government policy has moved toward measuring happiness at a national level and using that data to inform policy. Over 23 countries are systematically tracking happiness indicators that complement traditional measures like GDP.

We wanted to bring together this body of work in a systematic, quantitative way. Often, these are single studies that say “x” is important for happiness; “y” is important for happiness. In this book we try to provide an overarching framework that documents what makes for a satisfying life.

On the policy side, more and more countries are using well-being data in the real world. As we speak to policymakers, they often ask the question: “Look, we’re measuring this now and we have the impetus to improve these figures, move up these tables of happiness, and make our citizens enjoy their lives more. What can we do?”

The idea of the book is to try to give policymakers a sort of broad road map, if you like, for what’s determining people’s happiness at different stages of life and help them to think about areas where they might target policy.

“The Origins of Happiness” co-author George Ward

How can a government measure happiness?
It’s usually measured with survey data, in questions like: “Overall, how satisfied are you with your life, on a scale of zero to ten?” In the United Kingdom, they do it with a number of questions on overall life satisfaction, feelings of happiness, whether the things you do in life are worthwhile, and so on. In a number of countries, it’s now included in major government surveys. Here in the U.S., there are well-being modules in the American Time Use survey from the Bureau of Labor Statistics, for example. The Centers for Disease Control and Prevention also track life satisfaction among the population.

Why should government care about who’s happy, anyway?
There are two ways to answer. One is more philosophical and, in a sense, goes back to the Enlightenment, if not even further to Aristotle’s “Politics.” There’s an influential school of thought that thinks well-being or happiness ought to be a policy goal, beyond GDP. Jefferson, for example, thought that happiness should be the goal of good government. Ultimately, just making countries wealthier isn’t enough. Life should be more enjoyable.

The second is that there seems to be some electoral self-interest to looking after citizens’ well-being. Governments of populations that are unhappy don’t tend to stay in power very long.

What makes adults happy?
Mental and physical health as well as social relationships are very significant. Money plays a role, of course, but isn’t quite as significant as people might think. A huge predictor of unhappiness is unemployment. It’s a big psychological hit: You lose a sense of purpose, and you lose social relationships, relationships with employees, and with management. Relationships are a driving force behind people’s happiness, and that’s not just at home but also at work and in the community.

What makes a successful child?
We look at three dimensions of success: emotional health, behavior, and academic achievement. If you follow a child into adulthood and try to predict their life satisfaction in their 40s, emotional health at age 16 is the strongest predictor [of happiness] much later in life. Even though academic achievement buys you a great deal else, emotional health is the strongest nuts-and-bolts predictor.

Family income in childhood predicts academic success, but it doesn’t predict emotional health and behavior particularly well. Schools can play a huge role in fostering happiness. They can teach kids life skills, resilience, and a lot more about what is important beyond just math and English.

What can we do with this information?
There’s a great deal that individuals can do themselves, of course. But government can also play a key role in fostering the conditions that allow people to live enjoyable lives. Take mental health. Depression and anxiety are a huge source of misery, and they are largely treatable. Currently public health expenditure is heavily geared towards physical health. And treating mental health usually brings with it huge savings in other areas like absenteeism, unemployment, crime, and so on.

There’s also a great deal that leaders in the business community can be doing to make jobs more satisfying and enjoyable. Similarly, a happier workforce can bring with it a number of benefits in terms of productivity, engagement, and reduced turnover.

March 15, 2018 | More

How Lies Spread Online

The spread of misinformation on social media is an alarming phenomenon that scientists have yet to fully understand. While the data show that false claims are increasing online, most studies have analyzed only small samples or the spread of individual fake stories.

My colleagues Soroush Vosoughi, Deb Roy and I set out to change that. We recently analyzed the diffusion of all of the major true and false stories that spread on Twitter from its inception in 2006 to 2017. Our data included approximately 126,000 Twitter “cascades” (unbroken chains of retweets with a common, singular origin) involving stories spread by three million people more than four and a half million times.

Disturbingly, we found that false stories spread significantly more than did true ones. Our findings were published on Thursday in the journal Science.

We started by identifying thousands of true and false stories, using information from six independent fact-checking organizations, including Snopes, PolitiFact and Factcheck.org. These organizations exhibited considerable agreement — between 95 percent and 98 percent — on the truth or falsity of these stories.

Then we searched Twitter for mentions of these stories, followed the sharing activity to the “origin” tweets (the first mention of a story on Twitter) and traced all the retweet cascades from every origin tweet. We then analyzed how they spread online.

For all categories of information — politics, entertainment, business and so on — we found that false stories spread significantly farther, faster and more broadly than did true ones. Falsehoods were 70 percent more likely to be retweeted, even when controlling for the age of the original tweeter’s account, its activity level, the number of its followers and followees, and whether Twitter had verified the account as genuine. These effects were more pronounced for false political stories than for any other type of false news.
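The article does not spell out the model behind that 70 percent figure. Purely as an illustration, the sketch below shows one way such a controlled comparison could be set up: a generic logistic regression with hypothetical column names, not the authors’ actual specification.

```python
# Illustrative only: a logistic regression of the kind described above, with
# hypothetical column names. The exponentiated coefficient on `is_false` gives
# an "X percent more likely to be retweeted" style figure, after controlling
# for the listed account characteristics.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per story tweet: is_retweeted (0/1), is_false (0/1),
# account_age_days, tweets_per_day, followers, followees, verified (0/1).
df = pd.read_csv("cascades.csv")  # hypothetical data file

model = smf.logit(
    "is_retweeted ~ is_false + np.log(account_age_days) + tweets_per_day"
    " + np.log(followers + 1) + np.log(followees + 1) + verified",
    data=df,
).fit()

odds_ratio = np.exp(model.params["is_false"])
print(f"Falsehoods are {100 * (odds_ratio - 1):.0f}% more likely to be retweeted")
```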

Surprisingly, Twitter users who spread false stories had, on average, significantly fewer followers, followed significantly fewer people, were significantly less active on Twitter, were verified as genuine by Twitter significantly less often and had been on Twitter for significantly less time than were Twitter users who spread true stories. Falsehood diffused farther and faster despite these seeming shortcomings.

And despite concerns about the role of web robots in spreading false stories, we found that human behavior contributed more to the differential spread of truth and falsity than bots did. Using established bot-detection algorithms, we found that bots accelerated the spread of true stories at approximately the same rate as they accelerated the spread of false stories, implying that false stories spread more than true ones as a result of human activity.

Why would that be? One explanation is novelty. Perhaps the novelty of false stories attracts human attention and encourages sharing, conferring status on sharers who seem more “in the know.”

Read the full post at The New York Times.

Sinan Aral is the David Austin Professor of Management at MIT, where he is a Professor of IT & Marketing, and Professor in the Institute for Data, Systems and Society where he co-leads MIT’s Initiative on the Digital Economy.

March 12, 2018 | More

Here’s Microsoft Chairman John Thompson’s advice for MBA students

John Thompson has led Microsoft, IBM, and Symantec. Here’s the guidance that lit his way.

In more than four decades at tech’s biggest companies, Microsoft Chairman John Thompson has encountered phenomenal success and faced a few disasters.

He spent 28 years at IBM, rising to lead IBM Americas. He took a risk, leaving IBM at the dawn of the Internet age in 1999 to become CEO of Symantec, presiding over the ascendance of Norton AntiVirus software.

Now he invests in early-stage companies and education initiatives. A 1983 MIT Sloan Fellows graduate, he visited campus March 7 for an iLead talk, explaining how to blend humanity with power, and how doing so helped avert those disasters.

Here’s his advice:

Assimilate. Thompson began his career at IBM in the 1970s, working as a sales rep. At the outset, he wore a polyester suit and “refused to wear a blue suit and a white shirt.”

Eventually, he modified his wardrobe to embrace the culture. The lesson transcends fashion: A key part of strong leadership is to absorb a company’s culture by setting aside ego and truly listening.

“If you go into a new job, you should spend 90 to 100 days listening — not espousing,” Thompson said.

“That ability to assimilate will influence your thinking and strategy more than anything else,” he said. “The human body was created with two ears and one mouth. Use them proportionately. Leadership is about listening and reacting.”

Own your errors. “As a leader, it’s important to be open and admit when you’re wrong and made a mistake. People on your team, or customers, are far more respectful of that,” he said.

When Thompson worked at IBM, there was a problem with the company’s mainframe computers. All IBM customers were affected, including retail giant Walmart. Thompson traveled from New York to Walmart’s Arkansas headquarters to meet with founder Sam Walton.

“I walked in, and I said, ‘Hi. I’m John Thompson.’ He said, ‘I know who you are. I don’t want to talk about the problems. I appreciate the fact that you even came, so let’s talk about something else,’” he recalled.

They ended up swapping notes on hunting, not business.

Presentation matters. When Thompson became chairman at Microsoft in 2014, it was a “company with resources and credibility, but it had lost its way,” he said, noting that some considered the culture toxic. The board wanted to transform that perception when searching for a new leader. They did that by hiring new CEO Satya Nadella.

“If he has a strong opinion counter to yours, you will hear it — but hear it in a way that’s not in your face and offensive,” Thompson said. “It’s one of the most pleasant business experiences I’ve had. He’s a deep technologist who had the right leadership attitude around sincerity.”

Stay humble. During a question-and-answer session, students asked for leadership advice after graduation.

“Humility,” Thompson replied. “One of the things you get to observe in Silicon Valley is the hubris that comes from the great entrepreneur leader. That hubris often creates powerful companies, but ultimately long-term success is about balance.”

Reciprocity is as essential as ambition, he said.

“People only want to connect with people willing to connect with them. If you don’t have the right attitude, I’m moving on,” he said.

March 9, 2018 | More

Study: False news spreads faster than the truth

To stop the spread of false news, first we have to understand it.

A new study published in Science finds that false news online travels “farther, faster, deeper, and more broadly than the truth.” And the effect is more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information.

Falsehoods are 70 percent more likely to be retweeted on Twitter than the truth, researchers found. And false news reached 1,500 people about six times faster than the truth.

The study, by Soroush Vosoughi and associate professor Deb Roy, both of the MIT Media Lab, and MIT Sloan professor Sinan Aral, is the largest-ever longitudinal study of the spread of false news online. It uses the term “false news” instead of “fake news” because the former “has lost all connection to the actual veracity of the information presented, rendering it meaningless for use in academic classification,” the authors write.

To track the spread of news, the researchers investigated all the true and false news stories verified by six independent fact-checking organizations that were distributed on Twitter from 2006 to 2017. They studied approximately 126,000 cascades — defined as “instances of a rumor spreading pattern that exhibits an unbroken retweet chain with a common, singular origin” — on Twitter about contested news stories tweeted by 3 million people more than 4.5 million times. Twitter provided access to data and provided funding for the study.

The researchers removed Twitter bots before running their analysis. They then included the bots and ran the analysis again and found “none of our main conclusions changed.”

“This suggests that false news spreads farther, faster, deeper, and more broadly than the truth because humans, not robots, are more likely to spread it,” the researchers wrote.

So what to do? In an interview for the MIT Sloan Experts video series, Aral said possible solutions include labeling fake news much as food is labeled, creating financial disincentives such as reducing the flow of advertising dollars to accounts that spread fake news, and using algorithms to find and dampen the effect of fake news.

But he cautioned that none of those solutions is tested.

“We do not know enough about what’s going on. We have to do more research about how false news spreads, why do people spread false news, how are bots involved or not involved,” he said.

Other findings from the study include:

  • The amount of false news on Twitter is increasing and spikes during key events, like the U.S. presidential elections of 2012 and 2016.
  • While one might think that characteristics of the people spreading the news could explain why falsity travels with greater velocity than the truth, the data revealed the opposite. Users that spread false news had significantly fewer followers, followed significantly fewer people, were significantly less active on Twitter, were “verified” significantly less often, and had been on Twitter for significantly less time. Falsehood diffused further and faster despite these differences, not because of them.
  • The data support a “novelty hypothesis.” False news was more novel than the truth and people were more likely to share novel information.
  • False rumors also inspired replies expressing greater surprise, corroborating the novelty hypothesis, and greater fear and disgust. The truth, on the other hand, inspired greater sadness, anticipation, joy, and trust. These emotions, expressed in reply to falsehoods, may shed light on what inspires people to share false news.

March 9, 2018 | More

The hybrid trap: Why most efforts to bridge old and new technology miss the mark

Technological transitions are challenging, particularly for companies in mature industries. Incumbents are frequently blindsided by new technologies, fully missing opportunities to enter emerging markets early. While some established companies do possess the awareness and dexterity to become early adopters of new technologies, they typically lack the vision and the commitment to become leaders. Too often, they cling to the familiar, developing hybrid products that combine elements of the old and the new. The trouble is, hybrid strategies put even the best incumbent companies in a weak position when the market finally embraces the new technology. We call this the “hybrid trap.”

The transition from internal combustion engines to electric vehicles (EVs) demonstrates the dangers of hesitating to embrace the new. Several internal combustion engine makers, such as General Motors Co. and Honda Motor Co. Ltd., entered the EV market early, but they backed away from these projects in favor of continued emphasis on established technology. Gradually, most of the automakers focused on hybrid cars that combined old and new technologies. This opened the door to new competitors that pursued EV technology exclusively, most notably Tesla Inc. It wasn’t until established players saw the market’s interest in Tesla that they began to question their hybrid strategies and realized that electric cars had the potential for broad market appeal. By mid-2017, nearly every old-line engine producer was playing catch-up on EV technology, working to release new electric models in the next two to five years.

Meanwhile, Tesla, having established a strong brand in EV, continues its move down market as a more direct threat to incumbent automakers.

Tesla’s first mass-market car, the Model 3, was announced in March 2016, and by summer 2017, it had a waiting list of more than 455,000 units. Although it is too early to know if Tesla will be successful in the long run, its clear leadership in EVs has exposed a fundamental weakness in the approach incumbents commonly take when faced with industry transformations, with lessons that apply to other industries that face similar transitions.

Conviction vs. Opportunism

New markets are often enabled by technological change and exploited by minds that can envision futures that are far different from the status quo. More so, they are convinced that such a future must happen. Amazon.com Inc. founder Jeff Bezos didn’t invest in Blue Origin LLC, the rocket company he founded for space transportation, based on short-term financial calculations or because he likes to invest in wild ideas. Bezos made the investment because he truly believes mankind needs to conquer space to survive and prosper. Steve Jobs led Apple Computer Inc. to enter the computer industry in the 1980s and the mobile phone industry in the 2000s with the belief that computers and phones needed to be not only fast and precise, but also easy to use and aesthetically pleasing. Like other innovators who have changed their companies and industries over the years, Bezos and Jobs had clear visions that they believed in and thought would one day become reality. Their convictions drove them to attempt what many would have considered wild or even insane.

It’s interesting to contrast the bold visions of Bezos and Jobs with the hesitant approaches taken by GM, Ford, Toyota, Nissan, BMW, Daimler-Benz, and other established automakers in the emerging EV market in the 1990s. Although some of these companies had done exploratory EV research for some time and even entered the market early, none of them had the vision and conviction to push forward as leaders in the emerging market. Rather, they all settled back on a hybrid product strategy. (See “About the Research.”)

The Market Creators

In the eyes of many people, the credit for the emergence of electric cars is closely associated with Tesla and Elon Musk. However, there were other visionaries who also tried to open up the market, including Shai Agassi, who developed plans for Better Place around the same time Musk was planning Tesla’s first car. Agassi’s venture proposed a battery-swap technology that would be licensed to existing automakers. Despite raising more than $1 billion, ultimately, he could not get enough traction in the industry, and the venture failed.1

Musk chose a different route: Tesla would make its own cars, without having to depend on legacy players. The Tesla Roadster, announced in 2006 and released in 2008, was the first EV to use lithium-ion battery cells and have a 200-plus-mile range. In addition to being stylish (it was built on a Lotus chassis), it was fun to drive; it could reach 60 miles per hour in less than four seconds. The hype around being an environmentally friendly, premium sports car was immense, attracting celebrity buyers such as George Clooney, Steven Spielberg, Demi Moore, and David Letterman, who added to the brand’s sex appeal. In 2012, Tesla released a luxury sedan, the Model S, with a 300-mile range. In 2015, the company started selling its crossover luxury SUV, the Model X.

The fanfare around Tesla’s products triggered reactions from existing automakers. Nissan, for example, launched an all-electric car, the Leaf, in 2010, aimed at the mass market. In 2007, BMW unveiled a new strategy labeled Project I, centered on alternative mobility concepts and new materials. Its first product was the experimental Mini E, an electric version of the popular Mini Cooper that was first made available to 500 U.S. customers in 2009. For its part, Daimler-Benz produced test quantities of its Smart car, the Smart ED, in 2011, using Tesla technology. However, with the exception of Nissan’s Leaf, production volumes of EVs were low. The mainstream was still hedging.

Indeed, most of the industry pursued a path typified by GM. Seeing the Tesla Roadster at the Detroit Auto Show in 2006, then GM vice chairman Robert Lutz reportedly challenged his company to produce an all-electric vehicle.2,3 “All the geniuses here at General Motors kept saying lithium-ion technology is 10 years away, and Toyota agrees with us — and, boom, along comes Tesla. So, I said, ‘How come some teeny little California startup run by guys who know nothing about the car business can do this and we can’t?’”4 But Lutz was in the minority; other GM executives argued that the technology was not there yet for an affordable electric car. They suggested that GM move forward on a “transitional car,” a hybrid vehicle that had a small battery pack with an all-electric range of 38 miles and a small gasoline-powered engine acting as a generator to extend the range. GM’s transitional car, the Chevrolet Volt, was introduced in December 2010. Its battery-engine configuration was designed to overcome the limitations of prior EVs. GM’s obsession with the past kept it from seeing the future — even with Tesla directly in its line of sight.

Wasting Precious Time

GM’s Volt is a good example of what incumbents in many industries do during times of technological transition: design and produce products that bring the old and new technologies together in a single product. Companies may tell themselves that this is the approach their customers will be most comfortable with, but more often, it is simply the only strategy the company itself has the collective nerve to execute.

It is a repeating pattern. In the 1960s, U.S. electronics companies responded to the introduction of Japanese transistor radios by developing products that blended transistor technology with traditional vacuum tubes.5 In the early 1990s, Kodak tried to sell a “film-based digital imaging” product, which merged film photography and digital technology.6 And a decade ago, BlackBerry tried to respond to the challenge of the iPhone by releasing a phone that had both a touchscreen display (like the iPhone) and a traditional keyboard (like earlier BlackBerry phones). At Verizon’s insistence, BlackBerry later came out with the Storm, which featured a specially designed touchscreen that still maintained the sounds and sensation of pushing buttons that BlackBerry users were accustomed to.7

These hybrid efforts, however common, have ultimately underperformed in the market. Why? For one thing, our research found, they give established companies a false sense of safety. In addition, they typically deliver suboptimal performance.

False Sense of Safety

Hybrids allow incumbents to claim they are investing in the new technology when, in reality, this is only partly true. By definition, hybrids require companies to acquire some knowledge about the new technology. However, companies approach the new technology from the perspective of the old one. Also, in the face of uncertainty, established organizations fall back on learned patterns, further slowing the development of the new technology. This is why most hybrid products developed by incumbents, particularly the earliest ones, are weighted toward the old technology. Toyota’s first Prius, for example, was primarily an internal combustion vehicle; it only used battery power at low speeds and recharged through the traditional engine, with no plug-in capability. Until the mid-2010s, most other hybrid autos operated in this manner. Indeed, although a hybrid strategy might seem to be a reasonable “bridge” strategy when the technological transitions take a long time to unfold, the reality is that hybrids never capture a significant portion of the market. (Hybrid cars represent only about 2% of total U.S. auto sales today.8) More important, they end up exposing incumbents to inroads from other actors who are fully committed to the new technology.

Suboptimal Performance

The second problem with hybrids is that they typically don’t optimize or excel in either the old technology or the new one. What’s more, they cost more and tend to be larger and clunkier, since they have to be designed to host subsystems and components for both technologies. When Japanese companies began selling portable transistor radios in the 1960s, U.S. manufacturers produced hybrid radios that used both transistors and older vacuum-tube technology, making them twice as heavy as the Japanese portable transistor radios.9 Starting from scratch on product design, the Japanese companies produced radios that were smaller and lighter than the U.S. hybrids. Since transistors required less power than vacuum tubes, they were able to reduce the dimensions of tuning capacitors, speakers, battery supplies, and other elements.10 Another example is Kodak’s Photo CD, which was bulky, expensive, and difficult to use, and soon superseded by advances in digital photography. Early versions of the Chevy Volt suffered from similar limitations — it was relatively heavy and had a small battery.

While hybrids might succeed in attracting customers and providing a reasonable value proposition for a period of time, they distract incumbents from developing the new technology. Incumbents that focus on hybrids waste precious time they could use to develop a real competitive advantage based on the new technology. It is no coincidence that the most successful companies producing and selling hybrid products tend to be the slowest ones to move to the new technology. As late as 2017, Toyota didn’t offer an EV, and it does not plan to begin mass producing EVs until 2019.11 Moreover, by focusing on hybrids, incumbents hand the new entrants a valuable advantage: sufficient time to not only gain technological leadership and market visibility, but also to build or acquire the assets they require to be successful in the long run.

The Role of Complementary Assets

According to a classical framework in innovation management, innovators often don’t profit from being early in complex markets because they lack the “complementary assets” needed to scale the innovation into a sustainable business.12 The experience of EMI Group, the British company that invented the CAT scanner and was the first entrant in the emergent market for CAT scanning machines in 1973, offers a good example. After introducing its early products, EMI wasn’t able to fend off the fast moves of competitors in the medical equipment business, such as General Electric Co. and Technicare, which within two to three years had competing products in the market.13 The established companies already had large, strong manufacturing capabilities, international distribution, recognized brands, equipment support, and training and service capabilities. Within a few years, GE and other companies developed CAT scanners that were more advanced than EMI’s, and they used their resources and complementary assets to take control of the market.

It’s interesting to compare what happened with EMI in the early years to Tesla’s experience. In many respects, Tesla and EMI were in similar positions. Tesla was new to the auto industry, and it had no dealership network, no manufacturing capabilities, and no brand name. It was totally lacking the complementary assets presumably required to compete. However, in contrast to the medical equipment incumbents that reacted quickly to EMI’s product, the auto industry incumbents didn’t treat Tesla as a serious threat. Why? Perhaps because the auto industry incumbents did not immediately see a big performance improvement with the EV. Electric vehicles still transported people from point A to point B, looked very similar to the existing cars, and were used in a similar fashion (wheel, accelerator, brake, etc.). Because the benefits of the new technology were not obvious (convenience of charging at home, in the office, or in the parking garage; zero noise and no pollution; software-driven interface, etc.), incumbents may have miscalculated its importance and, thus, missed the opportunity first to lead and then to react in a timely manner to Tesla. In contrast, from the start, CAT scanners were visibly superior to existing X-ray technology — they provided much richer, highly valuable information to doctors and patients. Incumbents in the medical device industry saw the writing on the wall: They had to either embrace the new technology or be left behind quickly.

In Tesla’s case, the incumbents’ slow reaction gave the company time to build its production capacity, brand reputation, and distribution capabilities. It also gave Tesla time to create other complementary assets specific to the new technology, which helped it fend off the late-entering competition. (See “Tesla’s Growth in the U.S. Electric Vehicle Market.”)

Chief among these assets is a network of fast and dependable electric charging stations. As of July 2017, Tesla had more fast-charging outlets in the United States than other providers.14 In recognition of the fact that one of the major obstacles for EV adoption would be “range anxiety” — that is, fear of running out of battery power — Tesla’s cars are designed to go farther than any of its competitors (approximately 300 miles between charges). And the way the company planned its network of charging stations was intended to minimize range anxiety: While competing EV charging networks are primarily concentrated in cities or narrow corridors within the United States, Tesla focused on offering intercity charging capacity so that a Tesla owner could drive throughout the country and always find a supercharger within range.

What’s more, Tesla made an important strategic decision with regard to its charging technology. Tesla supercharging stations, which charge significantly faster than other EV chargers, are based on a closed technology that can be used only on Tesla cars. Tesla owners therefore have the best of both worlds: In addition to having access to Tesla’s proprietary charging network, they can charge their cars on the other available charging networks using an adapter that comes with every Tesla vehicle.15

Avoiding the Trap

The stark message from our analysis is that hybrid product strategies are usually a lure toward failure. In the midst of threat and uncertainty created by an emerging technology, new and old competitors stake out positions in the new. However, as we have noted, only incumbent companies introduce products that combine elements of both new and old systems. The idea seems to be that the hybrids give them a beachhead in the new technology while enabling them to take advantage of their experience in the old technology. Hybrids, the thinking goes, help incumbents learn about the new technology while it is still developing, thereby assisting them in making a smooth transition.16

The problem with this argument is that the clock often moves too quickly for hybrid-focused incumbents. During most technological transitions, the pace of the transition is dictated by new entrants, who commit all of their resources and efforts to the emerging alternative. New market entrants rethink and redesign their products to take full advantage of the possibilities of the new technology. That is what Japanese radio makers did with the transistor, and it is what Tesla has been doing with the EV: exploit new technological knowledge, develop new complementary assets, establish strong market leadership, and both create and satisfy an appetite on the part of investors and customers for products that perform well in terms of range, responsiveness, and user interface.

Tesla’s hybrid-free vision does not stop with EVs. It has envisioned the electric vehicle as being part of a much larger system, one that includes batteries and home charging and backup systems, and even extending to roofing materials embedded with photovoltaic technology. If the company’s expansive vision pans out (though it is still far too early to tell), incumbent auto companies and others may come to see the hybrid trap as bigger and deeper than they could have imagined.

So, is the answer for incumbents simply to walk away from products based on the old technology and jump headlong into the new? No, it can’t be. Products based on old technologies may yield profitable results for years. But it is essential that a company’s legacy operations don’t hamper its ability to pursue new technology. Based on our research, this is the single biggest risk of hybrids. Not only do hybrid product strategies lead to products that underperform from the perspective of both the old and new technologies, but they also limit a company’s imagination and creativity. New technologies can open opportunities that extend well beyond the scope of legacy products, within both current markets and new ones. But such opportunities can be seen only by companies that are willing to view the world through the lens of the new technology.

March 4, 2018 | More

Engineering

3Q: John Heywood on the future of the internal combustion engine

The future of the internal combustion engine, with some 2 billion in use in the world today, was a hot topic at last week’s Society of Automotive Engineers (SAE) World Congress in Detroit. There, John Heywood, the Sun Jae Professor Emeritus of Mechanical Engineering at MIT, joined auto industry propulsion system leaders on a panel addressing the theme, “Not Dead Yet — The Ever Evolving Internal Combustion Engine Powertrain.”

Heywood is recognized as one of the world’s preeminent experts on internal combustion engines. In the late 1960s, Heywood joined MIT’s Sloan Automotive Lab, where he started researching why engines created air pollutants and how the amount of those pollutants could be reduced. Heywood thrived in this important emerging area of study.

His research over the past five decades has substantially increased our understanding of how engines work, how they can be designed to reduce their emissions of air pollutants and greenhouse gases, and how to improve their fuel economy. Thirty years after its first publication, Heywood has just completed a second edition of his seminal book, “Internal Combustion Engine Fundamentals.” Its publication comes at a critical time when the automotive industry is faced with difficult questions on how to move forward in an era when alternative propulsion options are getting a lot of attention.

Q: Much of your career has focused on internal combustion engines. What changes have been made in engine design to reduce air pollutant and greenhouse gas emissions?

A: In the past 30 years, there’s been a lot of progress in controlling air pollutant emissions using exhaust after-treatment technology. The key technology component is the catalytic converter in the exhaust system that cleans up the exhaust gases before they go out into the atmosphere. Within these catalytic converters there’s a honeycomb-like matrix with lots of passages with porous surfaces that maximize gas-to-surface contact. Noble metals like platinum, rhodium, and palladium are then spread over this surface area. These metals act as a catalyst that helps get rid of the pollutants at temperatures readily achievable in the exhaust system. As the exhaust gas goes through these channels, unburned fuel is oxidized and oxygen is removed from nitric oxide. It’s a clever combination of engineering and chemistry. This has been very successful in gasoline engines, but not as successful for diesel engines. As a consequence, the environmental problems presented by diesel engines haven’t yet been adequately resolved.
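For readers who want the cleanup in symbols, the work a three-way catalyst performs can be summarized by a few overall reactions. The equations below are simplified stoichiometry, with propane standing in for unburned hydrocarbons; the actual surface catalysis is more involved.

```latex
% Simplified overall reactions in a three-way catalytic converter
\begin{align*}
  2\,\mathrm{CO} + \mathrm{O_2} &\rightarrow 2\,\mathrm{CO_2}
    && \text{(oxidation of carbon monoxide)} \\
  \mathrm{C_3H_8} + 5\,\mathrm{O_2} &\rightarrow 3\,\mathrm{CO_2} + 4\,\mathrm{H_2O}
    && \text{(oxidation of unburned fuel)} \\
  2\,\mathrm{NO} + 2\,\mathrm{CO} &\rightarrow \mathrm{N_2} + 2\,\mathrm{CO_2}
    && \text{(reduction of nitric oxide)}
\end{align*}
```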

Q: Over the past decade or so, there has been a strong focus on electric vehicles as a solution to transportation’s greenhouse gas emissions problem. Why is work on internal combustion engines still important?

A: Behind this question, there’s this implication: “Why are you bothering with engines when electric vehicles are taking over?” Electric vehicles are certainly going to play a useful role moving forward, but right now it is really difficult to estimate how big a role they will eventually play. I’ve been researching the critical area of electric vehicle recharging for the MIT Energy Initiative’s “Mobility of the Future” project. If you own a battery electric vehicle, you really need a home recharger. The logistics of having a home charger at most of the homes in America are problematic, and the cost is high.

Various projections for the U.S. suggest that by 2030, some 10 to 25 percent of vehicles might be electrified. The question then remains, what about the other 75 to 90 percent? And what about the large trucks and ships that run on diesel fuel? There are, as yet, no convincing electric options for those vehicles. That is why it is still so important to continue working on internal combustion engines and make them as clean and efficient as we can. The SAE panel members agreed.

Q: The EPA recently issued a report saying that the auto industry is unlikely to meet the greenhouse gas emissions regulations set for model years 2021 to 2025. What is your perspective on this?

A: There are several implications of this recent assessment by the EPA for the automotive industry. Some dozen states have joined California, which was a key partner with the EPA in the setting of these ambitious standards during the Obama administration, and they are unlikely to back down. This could result in having more demanding standards for California and the dozen states that agree with the tougher regulations, with the remaining states following the EPA’s requirements. Having different standards in different states would be a real headache for the auto industry. My hope is that through negotiations, these parties will agree on an appropriate compromise: likely a delay of the original standards for a few years.

In a report my co-workers and I wrote as part of an MIT Energy Initiative project, we concluded that these 2021 to 2025 emission standards could be met within an additional five years beyond the target dates. Given the challenges in bringing the needed technologies into production vehicles quickly enough, and the shift in consumers’ preferences toward larger, heavier crossover vehicles and SUVs, it’s understandable why a delay might be needed. But this new EPA report has thrown a monkey wrench into the situation.

April 18, 2018 | More

P.L. Thibaut Brian, professor emeritus of chemical engineering, dies at 87

Pierre Leonc Thibaut Brian, professor emeritus in the Department of Chemical Engineering, died on April 2 at age 87.

Born in New Orleans, Louisiana, on July 8, 1930, Brian received a BS in chemical engineering from Louisiana State University in 1951. He earned his ScD in chemical engineering from MIT in 1956, supervised by Professor Edwin R. Gilliland. Upon graduation, he immediately joined the faculty of the Department of Chemical Engineering as director of the Bangor Station of the Chemical Engineering Practice School. As a professor, Brian’s research focused largely on mass and heat transfer with simultaneous chemical reaction. He was an early adopter of computers in chemical engineering and contributed to the associated opportunities in process control and numerical analysis.

“Thibaut was well known for many qualities but two may head the list: high energy and quickness of insight. He projected enormous energy and worked extremely hard — and this made him a captivating teacher,” says Ken Smith, the Gilliland Professor Emeritus of Chemical Engineering. “When Thibaut was presented with a complex, ill-defined problem, he would almost instantly understand what the essential elements really were and how one should go about attacking it.”

In 1972, Brian retired from MIT and joined Air Products as vice president of engineering, where he remained until 1994. Brian’s early contributions at Air Products were mainly of a technical sort, largely in the context of air separation. Later, he became a very effective advocate for enhanced safety in the chemical process industry, and particularly for sophisticated quantitative hazard analyses as a means of assessing risks. As a result of his efforts, Air Products’ safety record became one of the best in the industry, and other companies emulated its procedures.

Brian was an active member and director of the American Institute of Chemical Engineers; he received its Professional Progress in Chemical Engineering Award in 1973 and its R.L. Jacks Award (now re-named the Management Award) in 1989. Churchill College of Cambridge in the United Kingdom elected him to the position of Overseas Fellow, and hosted him for a sabbatical year. Brian was a member of the Chemical Industry Institute of Toxicology and the American Industrial Health Council. He was elected to the National Academy of Engineering in 1975 for his “contributions to both theory and engineering practice of desalination, mass transfer in chemically reactive systems, and the technology of liquefied gases.” Brian was elected to foreign membership in the Royal Academy of Engineering (UK) in 1991. In 1972, he authored the book, “Staged Cascades in Chemical Processing.”

Predeceased in 2016 by his wife of 64 years, Geraldine ‘Gerry,’ he is survived by his son Richard and daughter-in-law Susan; his son James and daughter-in-law Sheryl; his daughter, Evelyn ‘Evie’; his grandchildren, Richard Christopher Brian and Lauren Brian Spears; and by his great grandson, Olin Thomas Spears. Condolences may be made to brownandsonsfuneral.com.

April 18, 2018 | More

Artificial antimicrobial peptides could help overcome drug-resistant bacteria

During the past several years, many strains of bacteria have become resistant to existing antibiotics, and very few new drugs have been added to the antibiotic arsenal.

To help combat this growing public health problem, some scientists are exploring antimicrobial peptides — naturally occurring peptides found in most organisms. Most of these are not powerful enough to fight off infections in humans, so researchers are trying to come up with new, more potent versions.

Researchers at MIT and the Catholic University of Brasilia have now developed a streamlined approach to developing such drugs. Their new strategy, which relies on a computer algorithm that mimics the natural process of evolution, has already yielded one potential drug candidate that successfully killed bacteria in mice.

“We can use computers to do a lot of the work for us, as a discovery tool of new antimicrobial peptide sequences,” says Cesar de la Fuente-Nunez, an MIT postdoc and Areces Foundation Fellow. “This computational approach is much more cost-effective and much more time-effective.”

De la Fuente-Nunez and Octavio Franco of the Catholic University of Brasilia and the Dom Bosco Catholic University are the corresponding authors of the paper, which appears in the April 16 issue of Nature Communications. Timothy Lu, an MIT associate professor of electrical engineering and computer science, and of biological engineering, is also an author.

Artificial peptides

Antimicrobial peptides kill microbes in many different ways. They enter microbial cells by damaging their membranes, and once inside, they can disrupt cellular targets such as DNA, RNA, and proteins.

In their search for more powerful, artificial antimicrobial peptides, scientists typically synthesize hundreds of new variants, which is a laborious and time-consuming process, and then test them against different types of bacteria.

De la Fuente-Nunez and his colleagues wanted to find a way to make computers do most of the design work. To achieve that, the researchers created a computer algorithm that incorporates the same principles as Darwin’s theory of natural selection. The algorithm can start with any peptide sequence, generate thousands of variants, and test them for the desired traits that the researchers have specified.

“By using this approach, we were able to explore many, many more peptides than if we had done this manually. Then we only had to screen a tiny fraction of the entirety of the sequences that the computer was able to browse through,” de la Fuente-Nunez says.

In this study, the researchers began with an antimicrobial peptide found in the seeds of the guava plant. This peptide, known as Pg-AMP1, has only weak antimicrobial activity. The researchers told the algorithm to come up with peptide sequences with two features that help peptides to penetrate bacterial membranes: a tendency to form alpha helices and a certain level of hydrophobicity.
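As a rough illustration of the kind of evolutionary search described here, the sketch below evolves peptide strings toward helix-favoring, moderately hydrophobic sequences. The scoring function, mutation scheme, and seed sequence are toy placeholders, not the study’s actual fitness criteria or the Pg-AMP1 sequence.

```python
# A minimal evolutionary-search sketch (illustrative only; the real study's
# fitness function and parameters are not reproduced here).
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
# Kyte-Doolittle-style hydrophobicity values, used here only as a toy score.
HYDRO = {"A": 1.8, "C": 2.5, "D": -3.5, "E": -3.5, "F": 2.8, "G": -0.4,
         "H": -3.2, "I": 4.5, "K": -3.9, "L": 3.8, "M": 1.9, "N": -3.5,
         "P": -1.6, "Q": -3.5, "R": -4.5, "S": -0.8, "T": -0.7, "V": 4.2,
         "W": -0.9, "Y": -1.3}
# Residues loosely associated with alpha-helix formation (illustrative set).
HELIX_FAVORING = set("AELMQKRH")

def fitness(peptide: str) -> float:
    """Toy score rewarding helix-favoring residues and mild hydrophobicity."""
    hydro = sum(HYDRO[a] for a in peptide) / len(peptide)
    helix = sum(a in HELIX_FAVORING for a in peptide) / len(peptide)
    return helix - abs(hydro - 0.5)

def mutate(peptide: str, rate: float = 0.1) -> str:
    """Randomly substitute residues at the given per-position rate."""
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else a
                   for a in peptide)

def evolve(seed: str, generations: int = 200, pop_size: int = 100) -> str:
    """Keep the fittest fifth of each generation and refill by mutation."""
    population = [mutate(seed, 0.3) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 5]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

if __name__ == "__main__":
    seed = "GIMSLFKEAGKLLGNLIQR"  # hypothetical stand-in, not Pg-AMP1
    print(evolve(seed))
```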

After the algorithm generated and evaluated tens of thousands of peptide sequences, the researchers synthesized the most promising 100 candidates to test against bacteria grown in lab dishes. The top performer, known as guavanin 2, contains 20 amino acids. Unlike the original Pg-AMP1 peptide, which is rich in the amino acid glycine, guavanin is rich in arginine but has only one glycine molecule.

More powerful

These differences make guavanin 2 much more potent, especially against a type of bacteria known as Gram-negative. Gram-negative bacteria include many species responsible for the most common hospital-acquired infections, including pneumonia and urinary tract infections.

The researchers tested guavanin 2 in mice with a skin infection caused by a type of Gram-negative bacteria known as Pseudomonas aeruginosa, and found that it cleared the infections much more effectively than the original Pg-AMP1 peptide.

“This work is important because new types of antibiotics are needed to overcome the growing problem of antibiotic resistance,” says Mikhail Shapiro, an assistant professor of chemical engineering at Caltech, who was not involved in the study. “The authors take an innovative approach to this problem by computationally designing antimicrobial peptides using an ‘in silico’ evolutionary algorithm, which scores new peptides based on a set of properties known to be correlated with effectiveness. They also include an impressive array of experiments to show that the resulting peptides indeed have the properties needed to serve as antibiotics, and that they work in at least one mouse model of infections.”

De la Fuente-Nunez and his colleagues now plan to further develop guavanin 2 for potential human use, and they also plan to use their algorithm to seek other potent antimicrobial peptides. There are currently no artificial antimicrobial peptides approved for use in human patients.

“A report commissioned by the British government estimates that antibiotic-resistant bacteria will kill 10 million people per year by the year 2050, so coming up with new methods to generate antimicrobials is of huge interest, both from a scientific perspective and also from a global health perspective,” de la Fuente-Nunez says.

The research was funded by the Ramón Areces Foundation and the Defense Threat Reduction Agency (DTRA).

April 16, 2018 | More

Computer system transcribes words users “speak silently”

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

“We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

The researchers describe their device in a paper they presented at the Association for Computing Machinery’s ACM Intelligent User Interface conference. Kapur is first author on the paper, Maes is the senior author, and they’re joined by Shreyas Kapur, an undergraduate majoring in electrical engineering and computer science.

Subtle signals

The idea that internal verbalizations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s. One of the goals of the speed-reading movement of the 1960s was to eliminate internal verbalization, or “subvocalization,” as it’s known.

But subvocalization as a computer interface is largely unexplored. The researchers’ first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals. So they conducted experiments in which the same subjects were asked to subvocalize the same series of words four times, with an array of 16 electrodes at different facial locations each time.

The researchers wrote code to analyze the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalized words. In the conference paper, the researchers report a prototype of a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.

But in current experiments, the researchers are getting comparable results using only four electrodes along one jaw, which should lead to a less obtrusive wearable device.

Once they had selected the electrode locations, the researchers began collecting data on a few computational tasks with limited vocabularies — about 20 words each. One was arithmetic, in which the user would subvocalize large addition or multiplication problems; another was the chess application, in which the user would report moves using the standard chess numbering system.

Then, for each application, they used a neural network to find correlations between particular neuromuscular signals and particular words. Like most neural networks, the one the researchers used is arranged into layers of simple processing nodes, each of which is connected to several nodes in the layers above and below. Data are fed into the bottom layer, whose nodes process them and pass them to the next layer, whose nodes process them and pass them on in turn, until the output of the final layer yields the result of a classification task.

The basic configuration of the researchers’ system includes a neural network trained to identify subvocalized words from neuromuscular signals, but it can be customized to a particular user through a process that retrains just the last two layers.
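The article does not give the network’s architecture or feature pipeline, so the PyTorch sketch below only illustrates the customization idea it describes: keep a word classifier trained on many users frozen, and retrain just its last two layers on a new wearer’s calibration data. The feature size, layer widths, and 20-word vocabulary are assumptions.

```python
import torch
import torch.nn as nn

# Assumed dimensions: a fixed-length vector of preprocessed EMG features per
# utterance, and a roughly 20-word task vocabulary (e.g., digits or chess moves).
N_FEATURES, N_WORDS = 512, 20

model = nn.Sequential(
    nn.Linear(N_FEATURES, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, N_WORDS),
)

# Freeze everything except the last two Linear layers, which are retrained per user.
for module in list(model.children())[:-3]:
    for p in module.parameters():
        p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()


def calibrate(signals, labels, epochs=20):
    """Per-user calibration pass over a few minutes of labeled subvocalization data."""
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(signals), labels)
        loss.backward()
        optimizer.step()


# Random stand-in data; real inputs would be windows of neuromuscular signals.
signals = torch.randn(100, N_FEATURES)
labels = torch.randint(0, N_WORDS, (100,))
calibrate(signals, labels)
```

Because the frozen layers already encode a general mapping from neuromuscular signals to words, the handful of trainable parameters can adapt to an individual’s neurophysiology from the roughly 15 minutes of calibration data described in the usability study.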

Practical matters

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.

In ongoing work, the researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much more expansive vocabularies. “We’re in the middle of collecting data, and the results look nice,” Kapur says. “I think we’ll achieve full conversation some day.”

“I think that they’re a little underselling what I think is a real potential for the work,” says Thad Starner, a professor in Georgia Tech’s College of Computing. “Like, say, controlling the airplanes on the tarmac at Hartsfield Airport here in Atlanta. You’ve got jet noise all around you, you’re wearing these big ear-protection things — wouldn’t it be great to communicate with voice in an environment where you normally wouldn’t be able to? You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press. This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you’re a fighter pilot, or if you’re a firefighter, you’re already wearing these masks.”

“The other thing where this is extremely useful is special ops,” Starner adds. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent-speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because he lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?”

April 4, 2018 | More

Featured video: Magical Bob

As a child, Institute Professor Robert S. Langer was captivated by the “magic” of the chemical reactions in a toy chemistry set. Decades later, he continues to be enchanted by the potential of chemical engineering. He is the most cited engineer in the world, and shows no signs of slowing down, despite four decades of ground-breaking work in drug delivery and polymer research.

Langer explains, “For me, magic has been discovering and inventing things. Discovering substances that can stop blood vessels from growing in the body, which can ultimately lead to treatments for cancer and blindness.”

The Langer Lab has had close to 1,000 students and postdocs go through its doors. Hundreds are now professors around the world. Many have started companies.

“I’m very proud of all of them,” says Langer. “I hope that I help them a little bit. That’s what we try to do.”

Submitted by: Melanie Miller Kaufman / Department of Chemical Engineering | Video by: Lillie Paquette / School of Engineering | 1 min, 26 sec

March 27, 2018 | More

Paper-folding art inspires better bandages

Scraped-up knees and elbows are tricky places to securely apply a bandage. More often than not, the adhesive will peel away from the skin with just a few bends of the affected joint.

Now MIT engineers have come up with a stickier solution, in the form of a thin, lightweight, rubber-like film. The adhesive film can stick to highly deformable regions of the body, such as the knee and elbow, and maintain its hold even after 100 bending cycles. The key to the film’s clinginess is a pattern of slits that the researchers have cut into the film, similar to the cuts made in a paper-folding art form known as kirigami.

The researchers attached the “kirigami film” to a volunteer’s knee and found that each time she bent her knee, the film’s slits opened at the center, in the region of the knee with the most pronounced bending, while the slits at the edges remained closed, allowing the film to remain bonded to the skin. The kirigami cuts give the film not only stretch, but also better grip: The cuts that open release tension that would otherwise cause the entire film to peel away from the skin.

To demonstrate potential applications, the group fabricated a kirigami-patterned adhesive bandage, as well as a heat pad consisting of a kirigami film threaded with heating wires. With the application of a 3-volt power supply, the pad maintains a steady temperature of 100 degrees Fahrenheit. The group has also engineered a wearable electronic film outfitted with light-emitting diodes. All three films can function and stick to the skin, even after 100 knee bends.

Small, “kirigami” slits in polymer film enable the material to stick to the skin, even after 100 knee bends, compared to the same film without slits, which debonds after just one bending cycle.

Ruike Zhao, a postdoc in MIT’s Department of Mechanical Engineering, says kirigami-patterned adhesives may enable a whole swath of products, from everyday medical bandages to wearable and soft electronics.

“Currently in the soft electronics field, people mostly attach devices to regions with small deformations, but not in areas with large deformations such as joint regions, because they would detach,” Ruike says. “I think kirigami film is one solution to this problem commonly found in adhesives and soft electronics.”

Ruike is the lead author of a paper published online this month in the journal Soft Matter. Her co-authors are graduate students Shaoting Lin and Hyunwoo Yuk, along with Xuanhe Zhao, the Noyce Career Development Professor in MIT’s Department of Mechanical Engineering.

Adhesion from an art form

In August 2016, Ruike and her colleagues were approached by representatives from a medical supply company in China, who asked the group to develop an improved version of a popular pain-relieving bandage that the company currently manufactures.

“Adhesives like these bandages are very commonly used in our daily life, but when you try to attach them to places that encounter large, inhomogeneous bending motion, like elbows and knees, they usually detach,” Ruike says. “It’s a huge problem for the company, which they asked us to solve.”

The team considered kirigami as a potential solution. Originally an Asian folk art, kirigami is the practice of cutting intricate patterns into paper and folding this paper, much like origami, to create beautiful, elaborate three-dimensional structures. More recently, some scientists have been exploring kirigami as a way to develop new, functional materials.

“In most cases, people make cuts in a structure to make it stretchable,” Ruike says. “But we are the first group to find, with a systematic mechanism study, that a kirigami design can improve a material’s adhesion.”

The researchers fabricated thin kirigami films by pouring a liquid elastomer, or rubber solution, into 3-D-printed molds. Each mold was printed with rows of offset grooves of various spacings, which the researchers then filled with the rubber solution. Once cured and lifted out of the molds, the thin elastomer layers were studded with rows of offset slits. The researchers say the film can be made from a wide range of materials, from soft polymers to hard metal sheets.

Ruike applied a thin adhesive coating, similar to what is applied to bandages, to each film before attaching it to a volunteer’s knee. She took note of each film’s ability to stick to the knee after repeated bending, compared with an elastomer film that had no kirigami patterns. After just one cycle, the plain, continuous film quickly detached, whereas the kirigami film maintained its hold, even after 100 knee bends.

A balance in design

To find out why kirigami cuts enhance a material’s adhesive properties, the researchers first bonded a kirigami film to a polymer surface, then subjected the material to stretch tests. They measured the amount of stretch a kirigami film can undergo before peeling away from the polymer surface — a measurement they used to calculate the material’s critical “energy-release rate,” a quantity that characterizes how readily the film detaches.
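The article uses the term without writing it out; in standard fracture-mechanics notation (a conventional framing, not an equation quoted from the paper) the energy-release rate measures how much stored elastic energy the film gives up for each increment of newly debonded area, and peeling advances only where that rate exceeds the adhesion energy of the interface:

```latex
% G: energy-release rate, \Pi: total elastic potential energy of the stretched film,
% A: debonded area, \Gamma: adhesion energy of the film-skin (or film-polymer) interface.
G = -\frac{\partial \Pi}{\partial A},
\qquad \text{debonding advances where } G \ge \Gamma .
```

Read this way, the kirigami cuts do not change the adhesive itself; they redistribute the stored energy so that the rate stays below the adhesion threshold near the film’s edges even while the slits at the center open.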

They found that this energy-release rate varied throughout a single film: When they pulled the film from either end like an accordion, the slits toward the middle exhibited a higher energy-release rate and were first to peel open under less stretch. In contrast, the slits at either end of the film continued to stick to the underlying surface and remained closed.

The researchers stretched kirigami films and measured their “energy release rate,” or the critical amount of stretch a film can handle before peeling away from its surface.

Through these experiments, Ruike identified three main parameters that give kirigami films their adhesive properties: shear-lag, in which shear deformation of the film can reduce the strain on other parts of the film; partial debonding, in which the film segments around an open slit maintain a partial bond to the underlying surface; and inhomogeneous deformation, in which a film can maintain its overall adhesion, even as parts of its underlying surface may bend and stretch more than others.

Ruike says researchers can use the team’s findings as a design blueprint to identify the best pattern of cuts and the optimal balance of the three parameters for a given application.

“These three parameters will help guide the design of soft, advanced materials,” Ruike says. “You can always design other patterns, just like folk art. There are so many solutions that we can think of. Just follow the mechanical guidance for an optimized design, and you can achieve a lot of things.”

Ruike and her colleagues have filed a patent on their technique and are continuing to collaborate with the medical supply company, which is currently making plans to manufacture medicine patches made from kirigami films.

“They make this pain-relieving pad that’s pretty popular in China — even my parents use it,” Ruike says. “So it’s super exciting.”

The team is now branching out to explore other materials on which to pattern kirigami cuts.

“The current films are purely elastomers,” Ruike says. “We want to change the film material to gels, which can directly diffuse medicine into the skin. That’s our next step.”

This research was supported, in part, by the National Science Foundation and the Tibet Cheezheng Tibetan Medicine Co. Ltd.

March 27, 2018 | More

Cheetah III robot preps for a role as a first responder

If you were to ask someone to name a new technology that emerged from MIT in the 21st century, there’s a good chance they would name the robotic cheetah. Developed by the MIT Department of Mechanical Engineering’s Biomimetic Robotics Lab under the direction of Associate Professor Sangbae Kim, the quadruped MIT Cheetah has made headlines for its dynamic legged gait, speed, jumping ability, and biomimetic design.

The dog-sized Cheetah II can run on four articulated legs at up to 6.4 meters per second, make mild running turns, and leap to a height of 60 centimeters. The robot can also autonomously determine how to avoid or jump over obstacles.

Kim is now developing a third-generation robot, the Cheetah III. Instead of improving the Cheetah’s speed and jumping capabilities, Kim is converting the Cheetah into a commercially viable robot with enhancements such as a greater payload capability, wider range of motion, and a dexterous gripping function. The Cheetah III will initially act as a spectral inspection robot in hazardous environments such as a compromised nuclear plant or chemical factory. It will then evolve to serve other emergency response needs.

“The Cheetah II was focused on high speed locomotion and agile jumping, but was not designed to perform other tasks,” says Kim. “With the Cheetah III, we put a lot of practical requirements on the design so it can be an all-around player. It can do high-speed motion and powerful actions, but it can also be very precise.”

The Biomimetic Robotics Lab is also finishing up a smaller, stripped-down version of the Cheetah, called the Mini Cheetah, designed for robotics research and education. Other projects include a teleoperated humanoid robot called the Hermes that provides haptic feedback to human operators. There’s also an early-stage investigation into applying Cheetah-like actuator technology to address mobility challenges among the disabled and elderly.

Conquering mobility on the land

“With the Cheetah project, I was initially motivated by copying land animals, but I also realized there was a gap in ground mobility,” says Kim. “We have conquered air and water transportation, but we haven’t conquered ground mobility because our technologies still rely on artificially paved roads or rails. None of our transportation technologies can reliably travel over natural ground or even man-made environments with stairs and curbs. Dynamic legged robots can help us conquer mobility on the ground.”

One challenge with legged systems is that they “need high torque actuators,” says Kim. “A human hip joint can generate more torque than a sports car, but achieving such condensed high torque actuation in robots is a big challenge.”

Robots tend to achieve high torque at the expense of speed and flexibility, says Kim. Factory robots use high-torque actuators, but they are rigid and cannot absorb energy upon the impact that results from climbing steps. Hydraulically powered, dynamic legged robots, such as the larger, higher-payload quadruped BigDog from Boston Dynamics, can achieve very high force and power, but at the expense of efficiency. “Efficiency is a serious issue with hydraulics, especially when you move fast,” he adds.

A chief goal of the Cheetah project has been to create actuators that can generate high torque in designs that imitate animal muscles while also achieving efficiency. To accomplish this, Kim opted for electric rather than hydraulic actuators. “Our high torque electric motors have exceeded the efficiency of animals with biological muscles, and are much more efficient, cheaper, and faster than hydraulic robots,” he says.

Cheetah III: More than a speedster

Unlike the earlier versions, the Cheetah III design was motivated more by potential applications than pure research. Kim and his team studied the requirements for an emergency response robot and worked backward.

“We believe the Cheetah III will be able to navigate in a power plant with radiation in two or three years,” says Kim. “In five to 10 years it should be able to do more physical work like disassembling a power plant by cutting pieces and bringing them out. In 15 to 20 years, it should be able to enter a building fire and possibly save a life.”

In situations such as the Fukushima nuclear disaster, robots or drones are the only safe choice for reconnaissance. Drones have some advantages over robots, but they cannot apply large forces necessary for tasks such as opening doors, and there are many disaster situations in which fallen debris prohibits drone flight.

By comparison, the Cheetah III can apply human-level forces to the environment for hours at a time. It can often climb or jump over debris, or even move it out of the way. Compared to a drone, it’s also easier for a robot to closely inspect instrumentation, flip switches, and push buttons, says Kim. “The Cheetah III can measure temperatures or chemical compounds, or close and open valves.”

Advantages over tracked robots include the ability to maneuver over debris and climb stairs. “Stairs are some of the biggest obstacles for robots,” says Kim. “We think legged robots are better in man-made environments, especially in disaster situations where there are even more obstacles.”

The Cheetah III was slowed down a bit compared to the Cheetah II, but also given greater strength and flexibility. “We increased the torque so it can open the heavy doors found in power plants,” says Kim. “We increased the range of motion to 12 degrees of freedom by using 12 electric motors that can articulate the body and the limbs.”

This is still far short of the flexibility of animals, which have over 600 muscles. Yet, the Cheetah III can compensate somewhat with other techniques. “We maximize each joint’s work space to achieve a reasonable amount of reachability,” says Kim.

The design can even use the legs for manipulation. “By utilizing the flexibility of the limbs, the Cheetah III can open the door with one leg,” says Kim. “It can stand on three legs and equip the fourth limb with a customized swappable hand to open the door or close a valve.”

The Cheetah III has an improved payload capability to carry heavier sensors and cameras, and possibly even to drop off supplies to disabled victims. However, it’s a long way from being able to rescue them. The Cheetah III is still limited to a 20-kilogram payload, and can travel untethered for four to five hours with a minimal payload.

“Eventually, we hope to develop a machine that can rescue a person,” says Kim. “We’re not sure if the robot would carry the victim or bring a carrying device,” he says. “Our current design can at least see if there are any victims or if there are any more potential dangerous events.”

Experimenting with human-robot interaction

The semiautonomous Cheetah III can make ambulatory and navigation decisions on its own. However, for disaster work, it will primarily operate by remote control.

“Fully autonomous inspection, especially in disaster response, would be very hard,” says Kim. Among other issues, autonomous decision making often takes time, and can involve trial and error, which could delay the response.

“People will control the Cheetah III at a high level, offering assistance, but not handling every detail,” says Kim. “People could tell it to go to a specific location on the map, find this place, and open that door. When it comes to hand action or manipulation, the human will take over more control and tell the robot what tool to use.”

Humans may also be able to assist with more instinctive controls. For example, if the Cheetah uses one of its legs as an arm and then applies force, it’s hard to maintain balance. Kim is now investigating whether human operators can use “balanced feedback” to keep the Cheetah from falling over while applying full force.

“Even standing on two or three legs, it would still be able to perform high force actions that require complex balancing,” says Kim. “The human operator can feel the balance, and help the robot shift its momentum to generate more force to open or hammer a door.”

The Biomimetic Robotics Lab is exploring balanced feedback with another robot project called Hermes (Highly Efficient Robotic Mechanisms and Electromechanical System). Like the Cheetah III, it’s a fully articulated, dynamic legged robot designed for disaster response. Yet the Hermes is bipedal, and it is completely teleoperated by a human who wears a telepresence helmet and a full body suit. Both the helmet and the suit are rigged with sensors and haptic feedback devices.

“The operator can sense the balance situation and react by using body weight or directly implementing more forces,” says Kim.

The latency required for such intimate real-time feedback is difficult to achieve with Wi-Fi, even when it’s not blocked by walls, distance, or wireless interference. “In most disaster situations, you would need some sort of wired communication,” says Kim. “Eventually, I believe we’ll use reinforced optical fibers.”

Improving mobility for the elderly

Looking beyond disaster response, Kim envisions an important role for agile, dynamic legged robots in health care: improving mobility for the fast-growing elderly population. Numerous robotics projects are targeting the elderly market with chatty social robots. Kim is imagining something more fundamental.

“We still don’t have a technology that can help impaired or elderly people seamlessly move from the bed to the wheelchair to the car and back again,” says Kim. “A lot of elderly people have problems getting out of bed and climbing stairs. Some elderly with knee joint problems, for example, are still pretty mobile on flat ground, but can’t climb down the stairs unassisted. That’s a very small fraction of the day when they need help. So we’re looking for something that’s lightweight and easy to use for short-time help.”

Kim is currently working on “creating a technology that could make the actuator safe,” he says. “The electric actuators we use in the Cheetah are already safer than other machines because they can easily absorb energy. Most robots are stiff, which would cause a lot of impact forces. Our machines give a little.”

By combining such safe actuator technology with some of the Hermes technology, Kim hopes to develop a robot that can help elderly people in the future. “Robots can not only address the expected labor shortages for elder care, but also the need to maintain privacy and dignity,” he says.

March 26, 2018 | More

Introducing the Innovation Mentors

The MIT innovation ecosystem spans a wide range of departments, centers, programs, and student groups that are spread across campus and beyond. With over 85 resources available to the MIT community, many students find navigating this vast landscape on their own overwhelming, as not all possible pathways are equally viable or helpful for any given student or group.

Innovation Mentors are student advisors who help student innovators, entrepreneurs, and makers at MIT match their interests and needs with the resources that are available to them. Scholars of innovation, entrepreneurship, and making, the mentors are qualified undergraduate and graduate students who possess deep knowledge of the MIT innovation ecosystem and are up to date on the latest resources, events, and opportunities on campus in this domain.

The MIT Innovation Initiative, a cross-school effort to strengthen and promote innovation and entrepreneurship at MIT, is introducing the pilot program for the spring 2018 semester as a way of increasing access and easing the entry points into the landscape for students who aspire to move their ideas from conception to impact. Through a call for applications, the Innovation Initiative received 55 applicants. After a round of interviews, four students representing mechanical engineering, electrical engineering and computer science, mathematics, and business were selected to be mentors.

The program builds on the existing networks maintained by the Innovation Initiative. The mentors are deployed across campus to engage with the community during office hours, at events, and by working directly with resource centers and core classes to help students through the various stages of innovation and entrepreneurship.

Innovation Mentors are available to all students at MIT, regardless of their major, year, or program.

Innovation Mentors

Alli Davanzo is a senior majoring in mechanical engineering with a concentration in product development. Alli is a volleyball enthusiast and plays on the varsity team at MIT. Last semester, she took course 2.009 (Product Development) and learned about what’s available for innovators and entrepreneurs at MIT. She is excited to help others access those resources.

Marla Odell is a sophomore majoring in electrical engineering and computer science (EECS). Marla is the president of MIT Women in EECS and a researcher in the Media Lab’s Digital Currency Initiative. She is also a rower on MIT’s varsity crew team and a leader in Amphibious Achievement.

Michael Amoako is a junior double-majoring in mathematics with computer science and in business management. Michael was one of the organizers of the BetterMIT Hackathon and is one of the founders of the Minority Business Association, whose primary purpose is to provide mentorship and opportunities for students of underrepresented groups in business fields.

Sumit Khetan is an MBA candidate at MIT Sloan School of Management. Sumit graduated from New York University in 2013 with a bachelor’s in economics as a Lew Rudin Scholar. Upon graduating, he worked with early-stage Israeli startups in New York City to help them fit their technology and products in the U.S. market. He is on the organizing committee of the MIT $100K Entrepreneurship Competition as a mentorship lead and is vice president of community for the Technology Club.

For more information on scheduling an appointment with an Innovation Mentor, visit the MIT Innovation Initiative website or email innovation-mentors@mit.edu.

March 26, 2018 | More

Yuriy Roman: A chemical engineer pursuing renewable energy

A couple of years into graduate school, Yuriy Roman had what he calls a “tipping point” in his career. He realized that all of the classes he had taken were leading him toward a deep understanding of the concepts he needed to design his own solutions to chemical problems.

“All the classes I had taken suddenly came together, and that’s when I started understanding why I needed to know something about thermodynamics, kinetics, and transport. All of these concepts that I had seen as more theoretical things in my classes, I could now see being applied together to solve a problem. That really was what changed everything for me,” he says.

As a newly tenured faculty member in MIT’s Department of Chemical Engineering, Roman now tries to guide his students toward their own tipping points.

“It’s amazing to see it happen with my students,” says Roman, noting that working with students is one of his favorite things about being an MIT professor. His students also make major contributions to his lab’s mission: coming up with new catalysts to produce fuels, plastics, and other useful substances in a more efficient, sustainable manner.

“To me, the most rewarding aspect of my profession is to work with these extremely talented and bright students,” Roman says. “They really are great at coming up with outside-of-the-box concepts, and I love that. I think MIT’s biggest asset is precisely that, the students. To me it’s a pleasure to work with them and learn from them as well, and hopefully have the opportunity to teach them some of the things that I know.”

Chemical synthesis

Roman, who grew up in Mexico City, loved chemistry from a young age. “I just liked to play with things like soap and bleach, which maybe wasn’t the safest thing,” he recalls. Another favorite activity was juicing cabbages to produce a pH indicator. (Red cabbage contains a chemical called anthocyanin that changes color when exposed to acidic or basic environments.)

Roman’s mother was originally from Belarus, and with his multicultural background he developed a strong interest in learning about other cultures and visiting other countries. He won a full scholarship to Monterrey Institute of Technology and Higher Education, in Mexico, for high school and college, but during his first year of college, he became interested in going abroad to finish his degree.

A friend who was then an undergraduate at MIT encouraged Roman to apply to schools in the United States, and he ended up transferring to the University of Pennsylvania.

“My parents were very surprised. In Mexico, it is common to live with your parents long after finishing college. The concept of leaving for college is almost nonexistent,” Roman says.

Roman decided to study chemical engineering, allowing him to combine his love for chemical reactions and his desire to follow in the footsteps of a brother who was an engineer. After graduating, he planned to look for a job in the chemical industry, but his then-girlfriend, now his wife, was planning to begin medical school. She suggested that he go to graduate school with her, so they both ended up attending the University of Wisconsin at Madison.

There, Roman studied with James Dumesic, a chemistry professor who works on biofuels. For his PhD thesis, Roman devised a process to generate a chemical called hydroxymethylfurfural (HMF) from sugars derived from biomass. HMF is a “platform chemical” that can be converted into many different end products, including fuels.

After finishing graduate school, Roman thought he would go to work for a chemical company, but at Dumesic’s suggestion he decided to go into academia instead.

“When I started interviewing at different universities, I realized that as a professor, you can have a lot of freedom to explore ideas and tackle problems long-term, and you can still have a lot of contact with industry,” he says. “You have more control over your time and where you spend it, in terms of investing effort toward basic science.”

Out of graduate school, he got a job offer at MIT but first spent two years doing a postdoc at Caltech, while his wife did her residency at the University of California at Los Angeles. Working with Mark Davis, a professor of chemical engineering, Roman began studying materials called zeolites, which have pores the same size as many common molecules. Confining molecules in these pores allows for certain chemical reactions to occur much faster than they otherwise would, Roman says.

Davis also instilled in Roman the importance of designing his own catalysts rather than relying on those developed by others, which allows for more control over chemical reactions and the resulting materials. While many research groups focus either on designing catalysts or on using existing catalysts to come up with novel ways to synthesize materials, Roman believes it is critical to work on both.

“When you are in control of synthesizing your own catalysts, you can do much more systematic studies. You have the ability to manipulate things at will,” he says. “It’s working at this juncture of synthesis and catalysis that is the key to discovering new chemistry.”

Green chemistry

After arriving at MIT in 2010, Roman launched his lab with a focus on designing catalysts that can generate new and interesting materials. One key area of research is the conversion of biomass components, such as lignin, into fuels and chemicals. One of the biggest challenges in this type of synthesis is to selectively remove oxygen atoms from these molecules, which usually have many more oxygen atoms than fuels do.

During a brainstorming session, Roman and his students came up with the idea of using a metal oxide catalyst in which some oxygen atoms were removed from the surface, creating small pockets known as “vacancies.” Oxygenated molecules can be precisely anchored in those vacancies, allowing their carbon-oxygen bonds to be easily broken so the oxygen can be replaced with hydrogen.

In another project, Roman’s lab developed a more sustainable alternative to catalysts made from precious metals such as platinum and palladium. These metals are used in many renewable-energy technologies, including fuel cells and lithium-air batteries, but they are among the Earth’s scarcest metals.

“If we were to go from our current fleet of vehicles with internal combustion engines to a fuel cell fleet, there’s not enough platinum in the world to sustain that amount,” Roman says. “You need to use Earth-abundant materials because there simply aren’t enough of these other precious materials to do it.”

In 2014, Roman and his students showed that they could create powerful catalysts from compounds called metal carbides, made from plentiful metals such as tungsten, coated with just a thin layer of a rare metal such as platinum.

Developing and promoting this kind of sustainable technology is one of Roman’s biggest research priorities.

“It’s a tremendous battle because the energy sector is just so large. The scale is so big and the infrastructure that’s already in place for petroleum-based fuel is so extensive. But it’s important for us to develop technologies for renewable resources and really curb our emissions of greenhouse gases,” he says. “Big, hard problems. That’s what we’re going after.”

March 23, 2018 | More

A low-tech solution for high-impact health care

Over 17 million people around the world are forced to flee their homes by conflict or persecution each year. After enduring the long and treacherous passage to safety, many refugees arrive at settlement camps suffering from malnutrition and dehydration and require medical attention on site.

Most ailments are easily treatable if properly diagnosed, but communicating across languages and cultures can be difficult. Patients often struggle to convey their symptoms, and doctors worry they may be missing crucial information. To complicate matters, doctors jot patients’ medical notes on paper, which can result in incomplete and illegible information being recorded. Further, each time a refugee moves to another camp, a new record has to be started for the individual, making it hard to maintain a consistent medical history over time.

There’s no shortage of apps designed to address the current refugee crisis. Rather than impose an entirely new system or technology, a team of MIT undergraduate, graduate, and PhD students from multiple disciplines set out to develop a solution that recognizes and supports existing workflows while also helping to overcome the time, language, and cultural barriers in doctor-patient interactions, improving overall medical care for refugees.

Introducing Sajal

Sajal (meaning “record” in Arabic) is a lightweight electronic medical record that refugees can carry around with them on their journey.

Patients begin by completing a one-time registration form with their name, date of birth, country of origin, nationality and languages spoken, along with a few simple metrics like height, weight and blood type. A QR code is then generated for patients to carry between camps and countries and keep for the long term on their phones, a prized possession and lifeline for most refugees.

To access a patient’s information, doctors simply need a QR code reader to view the individual’s medical history online. In addition, they can record notes by text or voice directly onto the patient’s page. Using IBM’s speech-to-text API, the platform then automatically transcribes and translates the notes, making them searchable across languages. This way, doctors can see whether someone has a history of problems in a specific area without worrying about miscommunication with the patient.
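Sajal’s implementation is not published in the article, so the sketch below only illustrates the registration step it describes: assembling a one-time patient record and encoding a link to it in a QR code the patient keeps. The field names, record-ID scheme, URL, and use of the third-party Python `qrcode` library are assumptions for illustration, not details from the team’s codebase.

```python
import json
import uuid

import qrcode  # third-party: pip install qrcode[pil]


def register_patient(name, date_of_birth, country_of_origin, nationality,
                     languages, height_cm, weight_kg, blood_type):
    """One-time registration: build a patient record and a QR code that links to it."""
    record = {
        "id": str(uuid.uuid4()),   # stand-in for the platform's record identifier
        "name": name,
        "date_of_birth": date_of_birth,
        "country_of_origin": country_of_origin,
        "nationality": nationality,
        "languages": languages,
        "height_cm": height_cm,
        "weight_kg": weight_kg,
        "blood_type": blood_type,
        "notes": [],               # doctors append transcribed, translated notes here
    }
    # Hypothetical endpoint; the real service URL is not public.
    url = "https://example.org/sajal/patients/" + record["id"]
    qrcode.make(url).save(record["id"] + ".png")  # image the patient keeps on their phone
    return record, url


if __name__ == "__main__":
    record, url = register_patient(
        "Jane Doe", "1990-01-01", "Syria", "Syrian", ["Arabic", "English"],
        height_cm=165, weight_kg=58, blood_type="O+")
    print(json.dumps(record, indent=2))
    print("QR code points to:", url)
```

In a real deployment the record itself would sit behind authentication on the platform’s servers, with doctors’ text and voice notes appended after transcription and translation, while only the opaque link travels in the QR code.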

Leveraging technology to address current global problems 

The idea for Sajal came about as part of VHacks, the first-ever hackathon held at the Vatican City in Rome on the weekend of March 8-11. Student organizers from Harvard University and MIT worked with the Vatican’s Secretariat for Communications and OPTIC — a global think tank dedicated to ethical issues of disruptive technologies — to coordinate the historic event.

Organizers invited universities to provide teams, and an open application allowed any student enrolled in an undergraduate or graduate program to apply. The MIT Innovation Initiative — a cross-school effort to strengthen and promote innovation and entrepreneurship — put out a local call for applicants to bring this unique opportunity to MIT students. After reviewing applications and conducting interviews, the five students who best demonstrated real interest, commitment, and the ability to problem-solve creatively were selected for the team. The Innovation Initiative, with additional support from the MIT Sloan School of Management and Office of International Programs, covered travel and incidentals for the students, while accommodations and meals during the event were provided by VHacks.

Over the course of the 36-hour challenge, 120 students representing 30 countries and six universities came up with a total of 24 ideas that leveraged technology to address current global problems centered on the hackathon’s theme areas of social inclusion, interfaith dialogue, and migrants and refugees.

“We intentionally spent more time than the average team to ideate and conduct primary market research in order to ensure that the solution we built was actually solving an important problem and not just one we made up on the spot,” says Neil Gokhlay, an MBA candidate at MIT Sloan School.

Their idea came together when, through one of their mentors, they were able to connect directly with a doctor based in Rome who volunteers her time at refugee camps. “We interviewed the doctor in order to understand her pain points, and after coming up with a solution, we ran our idea by her to make sure we were really addressing the problems. It was through this process that we realized there were communication issues and a lack of health record management for refugees in these camps,” Gokhlay says.

After the hacking period came to an end, the students presented their concept to a panel of judges before being selected to move on to the final round, in which nine teams pitched their projects in front of a live audience. Teams were evaluated based on the viability of their project, the potential impact that project could have, and the technology used to build it.

At the end of the day, prizes were awarded in each of the three theme areas, with Sajal winning second place and $1,000 in the category of migrants and refugees.

An unconventional take

According to the organizers of VHacks, the inspiration for the event came in part from a TED Talk by Pope Francis last year in which he said, “How wonderful would it be if the growth of scientific and technological innovation would come along with more equality and social inclusion.”

It was a viewpoint that resonated with many students, including Sam Kim, a PhD candidate in electrical engineering and computer science. “I was particularly excited about VHacks due to its unique nature of problem and philosophy. As the outreach chair for the Sidney-Pacific Graduate Housing at MIT for the last two years, I am quite involved in community service in the Boston area, but I wanted to gain more exposure to the diverse set of problems people are experiencing, and I do believe VHacks certainly broadened my perspective.”

Mechanical engineering sophomore Claire Traweek appreciated in particular the emphasis throughout the weekend on understanding existing problems and building something sustainable. “The organizers seemed very intent on supporting meaningful projects into the long term, instead of having everything die out after a weekend. Because of this, I felt that VHacks might be a good opportunity to work on something potentially useful, and a starting point for something genuinely helpful,” she said, adding, “I was not wrong.”

Jessy Lin, an electrical engineering and computer science undergraduate and a regular participant and organizer of hackathons, including HackMIT, concurs: “’Move fast and break things,’ the typical mantra of Silicon Valley, doesn’t apply when these complex problems with peoples’ lives are at stake. It takes a deep understanding of the scope and nature of these problems and continued engagement past the weekend, which is what we’re trying to do with Sajal.”

A 36-hour idea from concept to impact

Fueled by the experience of coming together as a team and building something they think can have a real impact on those who need it most, Gokhlay, Lin, and Traweek — along with teammate Juliet Wanyiri, a graduate student in integrated design and management — all agree Sajal is an effort worth continuing.

“We are passionate about this project and we want to keep the ball rolling,” says Gokhlay. “It’s a project where we can leverage both our technical skills and our emphatic desire to help others.”

The team is especially proud they were able to take a human-centered approach to tackling the problems faced by migrants and refugees and that by putting people’s needs at the center, instead of the technology, they were able to come up with a solution that has immediate value to individuals.

“Our team felt strongly about creating a tool that not only addressed the challenges of tracking medical records, but further took into account the issue of social inclusion in how to better allocate and utilize medical resources for refugees,” says Wanyiri, who is also a fellow with the Legatum Center for Development and Entrepreneurship at MIT.

According to the students, the doctor who helped them design their initial prototype is already on board and excited to pilot Sajal with her patients. They hope to expand their user base to additional doctors at the same site before launching the platform at other refugee camps within Italy. Eventually, their goal is to release Sajal worldwide.

In the meantime, as hackathon code is notoriously messy, the students are in the midst of polishing the technology and will be adding a number of features they didn’t have time for earlier, including machine learning for diagnostic assistance, privacy and authentication measures to protect patient data, and SMS registration. To support their work on Sajal for the long term, the team has already applied to a couple of seed grants and is actively looking around for other resources across MIT’s innovation and entrepreneurship landscape.

March 22, 2018 | More