News and Research
Don Rosenfield

Donald Rosenfield, a longtime leader of MIT Leaders for Global Operations, dies at 70

With deep sadness, the LGO community mourns its founding program director, Don Rosenfield. He leaves a legacy of over 1,200 LGO alumni and countless colleagues, students, and friends who were touched and inspired by him.

LGO

Urban heat island effects depend on a city’s layout

Franz-Josef Ulm, professor of civil and environmental engineering and LGO thesis advisor, led a recent study on the urban heat island effect, which causes cities to be hotter than their surroundings. The research could help planners design future construction in hot locations to minimize extra heating.

The arrangement of a city’s streets and buildings plays a crucial role in the local urban heat island effect, which causes cities to be hotter than their surroundings, researchers have found. The new finding could provide city planners and officials with new ways to influence those effects.

Some cities, such as New York and Chicago, are laid out on a precise grid, like the atoms in a crystal, while others such as Boston or London are arranged more chaotically, like the disordered atoms in a liquid or glass. The researchers found that the “crystalline” cities had a far greater buildup of heat compared to their surroundings than did the “glass-like” ones.

The study, published today in the journal Physical Review Letters, found that these differences in city patterns, which the researchers call “texture,” were the most important determinant of a city’s heat island effect. The research was carried out by MIT and National Center for Scientific Research (CNRS) senior research scientist Roland Pellenq, who is also director of a joint MIT/CNRS/Aix-Marseille University laboratory called <MSE>² (MultiScale Material Science for Energy and Environment); professor of civil and environmental engineering Franz-Josef Ulm; research assistant Jacob Sobstyl; <MSE>² senior research scientist T. Emig; and M.J. Abdolhosseini Qomi, assistant professor of civil and environmental engineering at the University of California at Irvine.

The heat island effect has been known for decades. It essentially results from the fact that urban building materials, such as concrete and asphalt, can absorb heat during the day and radiate it back at night, much more than areas covered with vegetation do. The effect can be quite dramatic, adding as much as 10 degrees Fahrenheit to nighttime temperatures in places such as Phoenix, Arizona. In such places this effect can significantly increase health problems and energy use during hot weather, so a better understanding of what produces it will be important in an era when ever more people are living in cities.

The team found that mathematical models originally developed to analyze atomic structures in materials provide a useful tool, yielding a straightforward formula that describes how a city’s design influences its heat island effect, Pellenq says.

“We use tools of classical statistical physics,” he explains. The researchers adapted formulas initially devised to describe how individual atoms in a material are affected by forces from the other atoms, and they reduced these complex sets of relationships to much simpler statistical descriptions of the relative distances of nearby buildings to each other. They then applied them to patterns of buildings determined from satellite images of 47 cities in the U.S. and other countries, ultimately ending up with a single index number for each — called the local order parameter — ranging between 0 (total disorder) and 1 (perfect crystalline structure), to provide a statistical description of the cluster of nearest neighbors of any given building.

For each city, they had to collect reliable temperature data, which came from one station within the city and another outside it but nearby, and then determine the difference.

To calculate this local order parameter, physicists typically have to use methods such as bombarding materials with neutrons to locate the positions of atoms within them. But for this project, Pellenq says, “to get the building positions we don’t use neutrons, just Google maps.” Using algorithms they developed to determine the parameter from the city maps, they found that the cities varied from 0.5 to 0.9.
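
For readers who want a concrete sense of what such an order parameter measures, here is a minimal sketch. It is not the paper’s formula (the article does not reproduce it); it uses a standard bond-orientational order parameter, psi-4, as a stand-in, and the building coordinates are made up.

```python
# Illustrative stand-in for a "local order parameter" of building positions.
# Not the published formula: this is the standard psi_4 bond-orientational
# order parameter, which is 1 for a perfect grid and much lower for disorder.
import numpy as np
from scipy.spatial import cKDTree

def local_order(points, k=4):
    """Mean |psi_k| over all points, between 0 (disorder) and 1 (perfect grid)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)  # k+1: the nearest "neighbor" is the point itself
    psi = []
    for i, neighbors in enumerate(idx):
        vecs = points[neighbors[1:]] - points[i]
        angles = np.arctan2(vecs[:, 1], vecs[:, 0])
        psi.append(abs(np.mean(np.exp(1j * k * angles))))
    return float(np.mean(psi))

# Hypothetical data: a perfect street grid versus a random scatter of buildings
grid = np.array([[x, y] for x in range(20) for y in range(20)], dtype=float)
scatter = np.random.default_rng(0).uniform(0, 20, size=(400, 2))
print(local_order(grid))     # close to 1: "crystalline" city
print(local_order(scatter))  # well below 1: "glass-like" city
```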

The differences in the heating effect seem to result from the way buildings reradiate heat that can then be reabsorbed by other buildings that face them directly, the team determined.

Especially for places such as China where new cities are rapidly being built, and other regions where existing cities are expanding rapidly, the information could be important to have, he says. In hot locations, cities could be designed to minimize the extra heating, but in colder places the effect might actually be an advantage, and cities could be designed accordingly.

“If you’re planning a new section of Phoenix,” Pellenq says, “you don’t want to build on a grid, since it’s already a very hot place. But somewhere in Canada, a mayor may say no, we’ll choose to use the grid, to keep the city warmer.”

The effects are significant, he says. The team evaluated all the states individually and found, for example, that in the state of Florida alone urban heat island effects cause an estimated $400 million in excess costs for air conditioning. “This gives a strategy for urban planners,” he says. While in general it’s simpler to follow a grid pattern, in terms of placing utility lines, sewer and water pipes, and transportation systems, in places where heat can be a serious issue, it can be well worth the extra complications for a less linear layout.

This study also suggests that research on construction materials may offer a way forward to properly manage heat interaction between buildings in cities’ historical downtown areas.

The work was partly supported by the Concrete Sustainability Hub at MIT, sponsored by the Portland Cement Association and the Ready-Mixed Concrete Research and Education Foundation.

February 22, 2018 | More

Getting to the heart of carbon nanotube clusters

Brian Wardle, LGO thesis advisor and professor of aeronautics and astronautics, has led a team of MIT researchers in developing a systematic method to predict the two-dimensional patterns carbon nanotubes (CNTs) form when they are packed together.

Integrating nanoscale fibers such as carbon nanotubes (CNTs) into commercial applications, from coatings for aircraft wings to heat sinks for mobile computing, requires them to be produced in large scale and at low cost. Chemical vapor deposition (CVD) is a promising approach to manufacture CNTs in the needed scales, but it produces CNTs that are too sparse and compliant for most applications.

Applying and evaporating a few drops of a liquid such as acetone to the CNTs is an easy, cost-effective method to more tightly pack them together and increase their stiffness, but until now, there was no way to forecast the geometry of these CNT cells.

MIT researchers have now developed a systematic method to predict the two-dimensional patterns CNT arrays form after they are packed together, or densified, by evaporating drops of either acetone or ethanol. CNT cell size and wall stiffness grow proportionally with cell height, they report in the Feb. 14 issue of Physical Chemistry Chemical Physics.

One way to think of this CNT behavior is to imagine how entangled fibers such as wet hair or spaghetti collectively reinforce each other. The larger this entangled region is, the higher its resistance to bending will be. Similarly, longer CNTs can better reinforce one another in a cell wall. The researchers also find that CNT binding strength to the base on which they are produced, in this case, silicon, makes an important contribution to predicting the cellular patterns that these CNTs will form.

“These findings are directly applicable to industry because when you use CVD, you get nanotubes that have curvature, randomness, and are wavy, and there is a great need for a method that can easily mitigate these defects without breaking the bank,” says Itai Stein SM ’13, PhD ’16, who is a postdoc in the Department of Aeronautics and Astronautics. Co-authors include materials science and engineering graduate student Ashley Kaiser, mechanical engineering postdoc Kehang Cui, and senior author Brian Wardle, professor of aeronautics and astronautics.

“From our previous work on aligned carbon nanotubes and their composites, we learned that more tightly packing the CNTs is a highly effective way to engineer their properties,” says Wardle. “The challenging part is to develop a facile way of doing this at scales that are relevant to commercial aircraft (hundreds of meters), and the predictive capabilities that we developed here are a large step in that direction.”

Detailed measurements

Carbon nanotubes are highly desirable because of their thermal, electrical, and mechanical properties, which are directionally dependent. Earlier work in Wardle’s lab demonstrated that waviness reduces the stiffness of CNT arrays by anywhere from 100 to 100,000 times. The technical term for this stiffness, or ability to bend without breaking, is elastic modulus. Carbon nanotubes are from 1,000 to 10,000 times longer than they are thick, so they deform principally along their length.

For an earlier paper published in the journal Applied Physics Letters, Stein and colleagues used nanoindentation techniques to measure stiffness of aligned carbon nanotube arrays and found their stiffness to be 1/1,000 to 1/10,000 times less than the theoretical stiffness of individual carbon nanotubes. Stein, Wardle, and former visiting MIT graduate student Hülya Cebeci also developed a theoretical model explaining changes at different packing densities of the nanofibers.

The new work shows that compacting CNTs with capillary forces, by first wetting them with acetone or ethanol and then evaporating the liquid, also produces CNTs that are hundreds to thousands of times less stiff than theoretical values would predict. This capillary effect, known as elastocapillarity, is similar to how a sponge, after being wetted, often dries into a more compact shape.

“Our findings all point to the fact that the CNT wall modulus is much lower than the normally assumed value for perfect CNTs because the underlying CNTs are not straight,” says Stein. “Our calculations show that the CNT wall is at least two orders of magnitude less stiff than we expect for straight CNTs, so we can conclude that the CNTs must be wavy.”

Heat adds strength

The researchers used a heating technique to increase the adhesion of their original, undensified CNT arrays to their silicon wafer substrate. CNTs densified after heat treatment were about four times harder to separate from the silicon base than untreated CNTs. Kaiser and Stein, who share first authorship of the paper, are currently developing an analytical model to describe this phenomenon and tune the adhesion force, which would further enable prediction and control of such structures.

“Many applications of vertically aligned carbon nanotubes [VACNTs], such as electrical interconnects, require much denser arrays of nanotubes than what is typically obtained for as-grown VACNTs synthesized by chemical vapor deposition,” says Mostafa Bedewy, assistant professor at the University of Pittsburgh, who was not involved in this work. “Hence, methods for postgrowth densification, such as those based on leveraging elastocapillarity have previously been shown to create interesting densified CNT structures. However, there is still a need for a better quantitative understanding of the factors that govern cell formation in densified large-area arrays of VACNTs. The new study by the authors contributes to addressing this need by providing experimental results, coupled with modeling insights, correlating parameters such as VACNT height and VACNT-substrate adhesion to the resulting cellular morphology after densification.

“There are still remaining questions about how the spatial variation of CNT density, tortuosity [twist], and diameter distribution across the VACNT height affects the capillary densification process, especially since vertical gradients of these features can be different when comparing two VACNT arrays having different heights,” says Bedewy. “Further work incorporating spatial mapping of internal VACNT morphology would be illuminating, although it will be challenging as it requires combining a suite of characterization techniques.”

Picturesque patterns

Kaiser, who was a 2016 MIT Summer Scholar, analyzed the densified CNT arrays with scanning electron microscopy (SEM) in the MIT Materials Research Laboratory’s NSF-MRSEC-supported Shared Experimental Facilities. While gently applying liquid to the CNT arrays in this study caused them to densify into predictable cells, vigorously immersing the CNTs in liquid imparts much stronger forces to them, forming randomly shaped CNT networks. “When we first started exploring densification methods, I found that this forceful technique densified our CNT arrays into highly unpredictable and interesting patterns,” says Kaiser. “As seen optically and via SEM, these patterns often resembled animals, faces, and even a heart — it was a bit like searching for shapes in the clouds.” A colorized version of her optical image showing a CNT heart is featured on the cover of the Feb. 14 print edition of Physical Chemistry Chemical Physics.

“I think there is an underlying beauty in this nanofiber self-assembly and densification process, in addition to its practical applications,” Kaiser adds. “The CNTs densify so easily and quickly into patterns after simply being wet by a liquid. Being able to accurately quantify this behavior is exciting, as it may enable the design and manufacture of scalable nanomaterials.”

This work made use of the MIT Materials Research Laboratory Shared Experimental Facilities, which are supported in part by the MRSEC Program of the National Science Foundation, and MIT Microsystems Technology Laboratories. This research was supported in part by Airbus, ANSYS, Embraer, Lockheed Martin, Saab AB, Saertex, and Toho Tenax through MIT’s Nano-Engineered Composite Aerospace Structures Consortium and by NASA through the Institute for Ultra-Strong Composites by Computational Design.

February 15, 2018 | More

If retailers want to compete with Amazon, they should use their tax savings to raise wages

Zeynep Ton, professor of operations management and LGO thesis advisor, discusses the impact of the new tax law on retailers and the potential to achieve operational excellence.

Walmart announced today that it is raising its starting wages in the United States from $9 per hour to $11, giving employees one-time cash bonuses of as much as $1,000, and expanding maternity and parental leave benefits as a result of the recently enacted tax reform. It is part of Walmart’s broader effort to create a better experience for its employees and customers. The new tax law creates a major business opportunity for other retailers as well — if their leaders are wise enough to take advantage of it.

The U.S. corporate tax rate is dropping from 35% to 21%. Retailers, many of whom have been paying the full tax rate, are going to benefit substantially. Take a retailer that makes 15% pretax income. Assuming its effective tax rate goes from 35% to 21%, it could save the equivalent of 2.3% of sales. Specialty retailers with higher pretax income will save even more.

Retail executives have a choice in how they use these savings. I believe the smartest choice — one that will help them compete against online retailers like Amazon — is to create a better experience for customers and to achieve operational excellence in stores. For most retailers, doing both requires more investment in store employees — starting with higher wages and more-predictable work schedules. My research shows that combining higher pay for retail employees with a set of smart operational choices that leverage that investment results in more-satisfied customers, employees, and investors.

Retailers that do not provide a compelling draw for their customers may not make it. In 2017, according to Fung Global Retail and Technology, there were nearly 7,000 store closing announcements, the second-largest number since 2000. There were 662 bankruptcy filings in retail, according to bankruptcydata.com, up 30% from 2016. This year is expected to be even worse. What’s more, two of my MIT Sloan MBA students analyzed store openings and closings from 2015 to 2017, looking at department stores with more than 50 stores and over $100 million in revenues, and found a positive correlation between customer satisfaction, as measured by Yelp ratings, and the net change in the number of open stores.

Many companies can no longer grow profitably just by adding stores — they need to get more out of their existing stores. Operational excellence makes that possible by ensuring that merchandise is in stock and well displayed, checkout is efficient, stores are clean, and employees are responsive to customers. Operational excellence also makes it possible to provide a better omnichannel experience by linking digital and brick-and-mortar channels. For instance, retailers are increasingly expecting in-store employees to serve customers who order online, by shipping products to those customers or enabling them to pick up their orders in the store. If that doesn’t work smoothly — that is, without operational excellence — it’s going to waste a lot of employee and customer time and convince customers they’re better off shopping online than in the store.

Creating a great customer experience and achieving operational excellence both require a capable and motivated workforce. You need knowledgeable employees who are cross-trained to manage customers’ needs wherever they arise. You need employees who can empathize with customers, are empowered to solve customer problems, and can spot opportunities to improve operations. You also need a capable and motivated workforce that can embrace and leverage new technologies.

Read the full post at Harvard Business Review

Zeynep Ton is an Adjunct Associate Professor of Operations Management at the MIT Sloan School of Management.

January 22, 2018 | More

MIT launches MITx MicroMasters in Principles of Manufacturing

David Hardt, professor of mechanical engineering and LGO thesis advisor, discusses the launch of the Institute’s third MITx MicroMasters program, in principles of manufacturing.

MIT today announced the launch of the Institute’s third MITx MicroMasters program, in principles of manufacturing. The new program brings an advanced manufacturing curriculum to the MITx platform for the first time and enables learners worldwide to advance their careers by mastering the fundamental skills needed for global manufacturing excellence and competitiveness.

New manufacturing firms are growing at their fastest rate since 1993, as technology revolutionizes the field. The MITx Principles of Manufacturing MicroMasters program focuses on broad-based concepts that underlie all manufacturing environments, putting graduates of this unique program in a position to leverage the industry’s fast-paced growth. The graduate-level program enables engineers, product designers, and technology developers to advance their careers in a broad array of engineering capacities, including manufacturing, supply chain management, design, and product development.

“Throughout an entire undergraduate degree program, the conventional engineering curriculum teaches students that everything is certain, and results are exact, ignoring inherent uncertainty,” says David Hardt, a professor of mechanical engineering at MIT. “All too often, people fail to get products, and even companies, across what’s known as the valley of death, which is the gap between small-volume and full-scale production. Their efforts fail because they haven’t been given the fundamental skill set for managing uncertainties associated with production rate, quality, and cost. And, that’s exactly what we do in this new program.”

Noting the continued evolution of technologies, instability of supply chains, and introduction of new production processes, Hardt says that manufacturing technologies “change so quickly that unless students master the cohesive set of fundamentals that underlie production, they won’t know how to handle many of the unexpected challenges that arise. It’s not just about knowing the latest technologies. To be a good decision-maker in manufacturing, a person has to master the core principles that determine how to apply those technologies under uncertain conditions.”

By maintaining a technology-agnostic curriculum and embracing the fundamental principles that govern manufacturing, the MITx Principles of Manufacturing MicroMasters curriculum will maintain its relevance in this constantly changing environment.

The new MicroMasters program traces its roots back to the Master of Engineering in Advanced Manufacturing and Design, originally established at MIT in 2001 through the Singapore-MIT Alliance for Research and Technology. This master’s program provides a launchpad for graduates to become innovative future leaders in established manufacturing firms and new entrepreneurial ventures. The MITx Principles of Manufacturing MicroMasters program announced today leverages this curriculum.

The MITx Principles of Manufacturing MicroMasters curriculum consists of eight online courses, which span the fields of process control, manufacturing systems, engineering management, and supply chain planning and design. Each course runs for eight weeks, and students who complete the entire curriculum and earn their MicroMasters credential will be eligible to apply to the Master of Engineering in Advanced Manufacturing and Design degree program on campus at MIT. If accepted, course credits earned through the MITx Principles of Manufacturing MicroMasters will be applied to the on-campus degree program, enabling students to earn their master’s in eight months. Principles of Manufacturing online coursework commences in March 2018. The first cohort of students who have earned their MicroMasters credential and been admitted to the on-campus master’s degree program will arrive at MIT in January 2020 and graduate that August.

“We are excited to help the MIT faculty who have spent many years crafting this innovative curriculum teach the principles of manufacturing to learners around the country and around the world,” says Dean for Digital Learning Krishna Rajagopal. “At a time when manufacturing is changing rapidly, we are happy to make this learning opportunity open to all. For those who wish to advance their careers, the MITx MicroMasters will be a valuable professional credential. They will also be eligible to accelerate their completion of a master’s degree at MIT — or elsewhere. We are using digital technologies to leverage MIT’s commitment to rigorous, high-quality curricula in a way that expands access to, and transforms, graduate-level education for working professionals.”

The Rochester Institute of Technology (RIT) will also offer a pathway to their Master of Science in Professional Studies that awards credit to learners who successfully complete the MITx Principles of Manufacturing MicroMasters credential and are then admitted to RIT. The RIT MS in Professional Studies is an innovative open curriculum environment that enables students to create a customized degree path that meets their educational or career objectives. The curriculum can include courses from multiple RIT graduate programs across two or three areas of study. RIT has been working with MITx since early 2017, and they currently offer a similar pathway to holders of the MITx Supply Chain Management MicroMasters credential.

“Digital technologies are enabling us to extend this cutting-edge manufacturing curriculum, which is the result of many years of research and development, to learners around the world regardless of their location or socioeconomic status,” says Vice President for Open Learning Sanjay Sarma. “The innovative application of open learning technologies has broken down barriers and enabled people of all ages and backgrounds to access world-class educational content. We hope that Principles of Manufacturing, MIT’s third MicroMasters program, will dramatically expand the opportunities for professional and lifelong learners to advance their careers and pursue their passions.”

January 10, 2018 | More

Turning any room into an operating room

Daniel Frey, LGO thesis advisor, professor of mechanical engineering, and faculty research director of MIT’s D-Lab, is working with a team to expand access to clean surgical care through a product called SurgiBox.

Dust, dirt, bacteria, flies — these are just some of the many contaminants surgeons need to worry about when operating in the field or in hospitals located in developing nations. According to a 2015 study in The Lancet, 5 billion people don’t have access to safe, clean surgical care. Graduate student Sally Miller ’16 is hoping to change that with a product called SurgiBox.

“The idea of SurgiBox is to take the operating room and shrink it down to just the patient’s size,” Miller explains. “Keeping an entire room clean and surgery-ready requires a lot of resources that many hospitals and surgeons across the globe don’t have.”

Upon starting her master’s degree in the Department of Mechanical Engineering, where she also received her bachelor’s, Miller connected with Daniel Frey, professor of mechanical engineering and faculty research director of MIT’s D-Lab. Frey had been working on the concept of SurgiBox with Debbie Teodorescu, the company’s founder and CEO, who graduated from Harvard Medical School and served as a D-Lab research affiliate. Having just won the Harvard President’s Challenge grant of $70,000, the SurgiBox team was looking for a mechanical engineering graduate student who could help enhance the product’s design.

“We were looking for a way to accelerate the project,” explains Frey, who also serves as Miller’s advisor. “At MIT, grad students can really deepen a project and move it forward at a faster pace.”

Enter Miller, who took on the project as her master’s thesis. “The first thing I did was assess the design they already had, but use my mechanical engineering lens to make the product more affordable, more usable, and easier to manufacture,” Miller explains.

Miller found inspiration in 2.75 (Medical Device Design). For the class project, she visited the VA Medical Center, where she watched a pacemaker surgery. During the surgery, doctors placed an incise drape — an adhesive, antimicrobial sheet infused with iodine — on the site of the incision.

“Watching the surgeons that day I realized, ‘Oh, I can use this adhesive drape idea for SurgiBox,’” Miller says.

In addition to incorporating adhesive drapes at the point of incision, Miller has redesigned the structure of SurgiBox. The original design had a rectangular frame that sealed to the patient at the armpit and waist. The frame held up a plastic, tent-like enclosure with a fan and high-efficiency particulate air (HEPA) filter that removes 99.997 percent of contaminants. Miller realized, to make SurgiBox more portable and cost effective, she had to get rid of the frames. With her new design, SurgiBox now consists of an inflatable tent; the outward pressure from the HEPA-filtered air gives the surgical site its structure.

This structural change marked a turning point in SurgiBox’s development. “Now the patient doesn’t have to be in the SurgiBox. Rather, the SurgiBox is on them,” Frey explains. “I thought that was a big breakthrough for us.”

Teodorescu agrees. “Sally is stunningly capable at both manual and digital forms of technical drafting,” she says. “Because of her designs, a key part of SurgiBox now fits into a Ziploc bag.” This latest iteration of SurgiBox now meets the same germ-proof and blood-proof standard as surgical gowns used by doctors treating Ebola patients.

The next step for the SurgiBox team is user testing. In addition to continuing particle testing, the team will partner with local Boston-area hospitals to test the ergonomics of the design and ensure it aligns with surgical workflows. After that, the team will test its efficacy at partner hospitals in developing nations where the technology is most needed.

As for Miller, after graduating with her master’s in January she is hoping to start a career in product design. “Working on SurgiBox during my master’s, and in classes like 2.009 (Product Engineering Processes) during undergrad, gave me hands-on experience in creating a product with real-world application,” Miller says. “I’m open to working on products in a number of fields and am excited to see what my future holds after MIT.”

January 10, 2018 | More

No more blackouts?

Konstantin Turitsyn, associate professor of mechanical engineering and LGO thesis advisor, led a team that developed a method for improving the stability of microgrids, which many rural and some urban communities are turning to as an alternative source of electricity.

Today, more than 1.3 billion people are living without regular access to power, including more than 300 million in India and 600 million in sub-Saharan Africa. In these and other developing countries, access to a main power grid, particularly in rural regions, is remote and often unreliable.

Increasingly, many rural and some urban communities are turning to microgrids as an alternative source of electricity. Microgrids are small-scale power systems that supply local energy, typically in the form of solar power, to localized consumers, such as individual households or small villages.

However, the smaller a power system, the more vulnerable it is to outages. Small disturbances, such as plugging in a certain appliance or one too many phone chargers, can cause a microgrid to destabilize and short out.

For this reason, engineers have typically designed microgrids in simple, centralized configurations with thick cables and large capacitors. This limits the amount of power that any appliance can draw from a network — a conservative measure that increases a microgrid’s reliability but comes with a significant cost.

Now engineers at MIT have developed a method for guaranteeing the stability of any microgrid that runs on direct current, or DC — an architecture that was originally proposed as part of the MIT Tata Center’s uLink project. The researchers found they can ensure a microgrid’s stability by installing capacitors, which are devices that even out spikes and dips in voltage, of a particular size, or capacitance.

The team calculated the minimum capacitance on a particular load that is required to maintain a microgrid’s stability, given the total load, or power a community consumes. Importantly, this calculation does not rely on a network’s particular configuration of transmission lines and power sources. This means that microgrid designers do not have to start from scratch in designing power systems for each new community.

Instead, the researchers say this microgrid design process can be performed once to develop, for instance, power system “kits”: sets of modular power sources, loads, and lines that can be produced in bulk. As long as the load units include capacitors of the appropriate size, the system is guaranteed to be stable, no matter how the individual components are connected.

The researchers say such a modular design may be easily reconfigured for changing needs, such as additional households joining a community’s existing microgrid.

“What we propose is this concept of ad hoc microgrids: microgrids that can be created without any preplanning and can operate without any oversight. You can take different components, interconnect them in any way that’s suitable for you, and it is guaranteed to work,” says Konstantin Turitsyn, associate professor of mechanical engineering at MIT. “In the end, it is a step toward lower-cost microgrids that can provide some guaranteed level of reliability and security.”

The team’s results appear online in the IEEE journal Control Systems Letters, with graduate student Kathleen Cavanagh and Julia Belk ’17.

Returning to normal operations

Cavanagh says the team’s work sought to meet one central challenge in microgrid design: “What if we don’t know the network in advance and don’t know which village a microgrid will be deployed to? Can we design components in such a way that, no matter how people interconnect them, they will still work?”

The researchers looked for ways to constrain the dimensions of a microgrid’s main components — transmission lines, power sources, and loads, or power-consuming elements — in a way that guarantees a system’s overall stability without depending on the particular layout of the network.

To do so, they looked to Brayton-Moser potential theory — a general mathematical theory developed in the 1960s that characterizes the dynamics of the flow of energy within a system comprising various physical and interconnected components, such as in nonlinear circuits.

“Here we applied this theory to systems whose main goal is transfer of power, rather than to perform any logical operations,” Turitsyn says.

The team applied the theory to a simple yet realistic representation of a microgrid. This enabled the researchers to look at the disturbances caused when there was a variation in the loading, such as when a cell phone was plugged into its charger or a fan was turned off. They showed that the worst-case configuration is a simple network comprising a source connected to a load. The identification of this simple configuration allowed them to remove any dependence on a specific network configuration or topology.

“This theory was useful to prove that, for high-enough capacitance, a microgrid’s voltage will not go to critically low levels, and the system will bounce back and continue normal operations,” Turitsyn says.

Blueprint for power

From their calculations, the team developed a framework that relates a microgrid’s overall power requirements, the length of its transmission lines, and its power demands, to the specific capacitor size required to keep the system stable.

“Ensuring that this simple network is stable guarantees that all other networks with the same line length or smaller are also stable,” Turitsyn says. “That was the key insight that allowed us to develop statements that don’t depend on the network configuration.”

“This means you don’t have to oversize your capacitors by a factor of 10, because we give explicit conditions where it would remain stable, even in worst-case scenarios,” Cavanagh says.

In the end, the team’s framework provides a cheaper, flexible blueprint for designing and adapting microgrids, for any community configuration. For instance, microgrid operators can use the framework to determine the size of a given capacitor that will stabilize a certain load. Inversely, a community that has been delivered hardware to set up a microgrid can use the group’s framework to determine the maximum length the transmission lines should be, as well as the type of appliances that the components can safely maintain.
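
The article does not give the team’s actual bound, but its flavor can be illustrated with a textbook small-signal criterion for the worst-case circuit identified above: one source feeding one constant-power load through a resistive-inductive line. The sketch below is only a stand-in for the published result, and the function name and the 48-volt example values are hypothetical.

```python
# Minimal sketch (not the paper's formula): textbook stability bound for a DC
# source feeding a constant-power load P through a line with resistance R and
# inductance L, with a capacitor C across the load.
import math

def min_capacitance(R, L, P, V_source):
    """Smallest C (farads) keeping the linearized source-line-load loop stable."""
    if V_source**2 < 4 * R * P:
        raise ValueError("No operating point: the line cannot deliver this much power")
    # High-voltage equilibrium of the load bus: v^2 - V_source*v + R*P = 0
    v0 = (V_source + math.sqrt(V_source**2 - 4 * R * P)) / 2
    # Trace condition of the linearized dynamics: C > L*P / (R*v0^2)
    return L * P / (R * v0**2)

# Hypothetical 48 V branch: 0.5 ohm, 50 microhenry line, 100 W appliance
print(min_capacitance(R=0.5, L=50e-6, P=100.0, V_source=48.0))  # ~4.5e-6 F
```

In the same spirit as the team’s result, a bound of this kind depends only on the line and load parameters, which is why showing that the single source-load circuit is the worst case lets such a condition be applied load by load, regardless of how the rest of the network is wired.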

“In some situations, for given voltage levels, we cannot guarantee stability with respect to a given load change, and maybe a consumer can decide it’s ok to use this big of a fan, but not a bigger one,” Turitsyn says. “So it could not only be about a capacitor, but also could constrain the maximal accepted amount of power that individuals can use.”

Going forward, the researchers hope to take a similar approach to AC, or alternating current, microgrids, which are mostly used in developed countries such as the United States.

“In the future we want to extend this work to AC microgrids, so that we don’t have situations like after Hurricane Maria, where in Puerto Rico now the expectation is that it will be several more months before power is completely restored,” Turitsyn says. “In these situations, the ability to deploy solar-based microgrids without a lot of preplanning, and with flexibility in connections, would be an important step forward.”

This research was sponsored by the MIT Tata Center for Technology and Design.


November 16, 2017 | More

Let your car tell you what it needs

Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and LGO thesis advisor, has been working on a smartphone app that provides car diagnostic information by analyzing the car’s sounds and vibrations.

Imagine hopping into a ride-share car, glancing at your smartphone, and telling the driver that the car’s left front tire needs air, its air filter should be replaced next week, and its engine needs two new spark plugs.

Within the next year or two, people may be able to get that kind of diagnostic information in just a few minutes, in their own cars or any car they happen to be in. They wouldn’t need to know anything about the car’s history or to connect to it in any way; the information would be derived from analyzing the car’s sounds and vibrations, as measured by the phone’s microphone and accelerometers.

The MIT research behind this idea has been reported in a series of papers, most recently in the November issue of the journal Engineering Applications of Artificial Intelligence. The new paper’s co-authors include research scientist Joshua Siegel PhD ’16; Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president of open learning at MIT; and two others.

A smartphone app combining the various diagnostic systems the team developed could save the average driver $125 a year and improve their overall gas mileage by a few percentage points, Siegel says. For trucks, the savings could run to $600 a year, not counting the benefits of avoiding breakdowns that could result in lost income.

With today’s smartphones, Siegel explains, “the sensitivity is so high, you can do a good job [of detecting the relevant signals] without needing any special connection.” For some diagnostics, though, mounting the phone in a dashboard holder would improve the level of accuracy. Already, the accuracies of the diagnostic systems they have developed, he says, are “all well in excess of 90 percent.” And tests for misfire detection have produced no false positives, in which a problem would be incorrectly identified.

The basic idea is to provide diagnostic information that can warn the driver of upcoming issues or needed routine maintenance, before these conditions lead to breakdowns or blowouts.

Take the air filter, for example — the topic of the team’s latest findings. An engine’s sounds can reveal telltale signs of how clogged the air filter is and when to change it. And unlike many routine maintenance tasks, it’s just as bad to change air filters too soon as to wait too long, Siegel says.

That’s because brand-new air filters let more particles pass through, until they eventually build up enough of a coating of particles that the pore sizes get smaller and reach an optimal level of filtration. “As they age, they filter better,” he says. Then, as the buildup continues, eventually the pores get so small that they restrict the airflow to the engine, reducing its performance. Knowing just the right time to replace the filter can make a measurable difference in an engine’s performance and operating costs.

How can the phone tell the filter is getting clogged? “We’re listening to the car’s breathing, and listening for when it starts to snore,” Siegel says. “As it starts to get clogged, it makes a whistling noise as air is drawn in. Listening to it, you can’t differentiate it from the other engine noise, but your phone can.”
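
Siegel’s whistle cue lends itself to a simple signal-processing illustration. The sketch below is not the team’s method; it just shows one generic way an app might track the share of microphone energy in a high-frequency band, with the band edges chosen arbitrarily for illustration.

```python
# Illustrative sketch only (not the MIT team's algorithm): flag a possible
# intake "whistle" by tracking the fraction of microphone energy that falls
# in a narrow, arbitrarily chosen high-frequency band.
import numpy as np

def band_energy_ratio(audio, sample_rate, band=(4000.0, 6000.0)):
    """Fraction of total spectral energy inside the candidate whistle band."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()

# A sustained rise in this ratio, compared with a recording of the same engine
# when the filter was known to be clean, would be the cue to check the filter.
```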

To develop and test the various diagnostic systems, which also include detecting engine misfires that signal a bad spark plug or the need for a tune up, Siegel and his colleagues tested data from a variety of cars, including some that ran perfectly and others in which one of these issues, from a clogged filter to a misfire, was deliberately induced. Often, in order to test different models, the researchers rented cars, created a condition they wanted to be able to diagnose, and then restored the car to normal.

“For our data, we’ve induced failures [after renting] a perfectly good vehicle” and then fixed it and “returned the car better than when we took it out. I’ve rented cars and given them new air filters, balanced their tires, and done an oil change” before taking them back, he recalls.

Some of the diagnostics require a complicated multistep process. For example, to tell whether a car’s tires are getting bald and will need to be replaced soon, or whether they are overinflated and might risk a blowout, the researchers use a combination of data collection and analysis. First, the system uses the phone’s built-in GPS to monitor the car’s actual speed. Then, vibration data can be used to determine how fast the wheels are turning. That in turn can be used to derive the wheel’s diameter, which can be compared with the diameter that would be expected if the tire were new and properly inflated.
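
The tire check reduces to simple geometry once those two measurements are in hand. A minimal sketch with hypothetical readings (not the authors’ code):

```python
# Effective rolling diameter from GPS speed and wheel-rotation frequency.
import math

def wheel_diameter_m(gps_speed_mps, wheel_rotation_hz):
    """Distance traveled per revolution, divided by pi, gives the diameter."""
    circumference = gps_speed_mps / wheel_rotation_hz  # meters per revolution
    return circumference / math.pi

# Hypothetical reading: 25 m/s (about 56 mph) with the wheels turning 12.5 times per second
print(f"{wheel_diameter_m(25.0, 12.5):.3f} m")  # ~0.637 m, compared against the nominal new-tire value
```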

Many of the diagnostics are derived by using machine-learning processes to compare many recordings of sound and vibration from well-tuned cars with similar ones that have a specific problem. The machine learning systems can then extract even very subtle differences. For example, algorithms designed to detect wheel balance problems did a better job at detecting imbalances than expert drivers from a major car company, Siegel says.

A prototype smartphone app that incorporates all these diagnostic tools is being developed and should be ready for field testing in about six months, Siegel says, and a commercial version should be available within about a year after that. The system will be commercialized by a startup company Siegel founded called Data Driven.

October 26, 2017 | More

Mapping gender diversity at MIT

Karen Willcox, professor of aeronautics and astronautics and LGO thesis advisor, recently helped devise an interactive map that examines trends in undergraduate gender diversity at MIT.

A trio of researchers has created and published a data visualization map that examines trends in undergraduate gender diversity at MIT. The big reveal is heartening: Over the past 20 years, MIT’s female undergraduate population has risen to nearly 50 percent of total enrollment and such growth has been sustained across almost every department and school.

Professor of aeronautics and astronautics Karen Willcox, researcher Luwen Huang, and graduate student Elizabeth Qian devised an interactive map to show these aggregate trends, and much more. The tool, using data from the MIT Registrar’s Office, allows users to explore gender diversity on a class-by-class and department-level basis, to see links between classes, such as prerequisite requirements, and to conduct keyword searches to reveal variations in related subjects across MIT.

“MIT should be proud of the leadership it has shown,” says Willcox. “The positive trends in gender equity are not seen in just one or two departments, but literally across the spectrum of science, engineering, arts, humanities, social sciences, management and architecture. One of the unique features of our tool is that it provides insight at the subject level, going deeper beyond aggregate statistics at the major level. We hope that this will be a basis for data-driven decisions — for example, by understanding what about a particular subject’s pedagogy makes it appeal to a more diverse audience.”

The map appears as a series of discipline-based ball-and-stick clusters, with each node representing a class. The size of the node indicates the class’s total enrollment. The color of a node, ranging from teal (fewer women enrolled) to salmon (more women enrolled), represents the percentage of women in a particular class and helps to illustrate how diversity has changed over time.
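
As a rough illustration of that encoding (not the researchers’ tool, and with made-up class data), a node map along these lines can be drawn in a few lines of code:

```python
# Hypothetical sketch: nodes sized by class enrollment and colored from
# teal (fewer women) to salmon (more women), echoing the map's encoding.
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

classes = [           # (x, y, total_enrollment, fraction_women) -- made-up values
    (0.0, 0.0, 300, 0.22),
    (1.0, 0.3, 120, 0.48),
    (0.5, 1.0, 450, 0.35),
]
x, y, size, frac = zip(*classes)
cmap = LinearSegmentedColormap.from_list("gender", ["teal", "salmon"])
plt.scatter(x, y, s=size, c=frac, cmap=cmap, vmin=0.0, vmax=1.0)
plt.colorbar(label="Fraction of women enrolled")
plt.show()
```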

For example, in a slice across classes in MIT’s Department of Electrical Engineering and Computer Science (EECS) in 2006, the nodes appear as light and darker teal, indicating enrollments of less than 25 percent women. Fast forward to 2016, and the same slice has node colors all in shades of salmon, indicating female enrollments of 35 percent or more. In part, this change is a reflection of the steady increase in total female EECS majors, particularly over the past six years. However, since the analysis is conducted at the class level, this change is also a reflection of more women from other majors taking computer science classes.

“It is gratifying to see the change in composition of our EECS student body,” says Anantha Chandrakasan, former department head of EECS and now dean of the School of Engineering. “While it is true that we have had a dramatic increase in [computer science and engineering] majors, female enrollment has nearly tripled in the past five years. It’s a useful model for us to consider as we are improving gender equity across the school.”

Willcox credits the positive momentum in EECS to several different elements, saying, “anecdotal evidence suggests that the pedagogical reform undertaken by EECS in 2008 has played a large role.” She also points out the important role of leadership, namely Chandrakasan’s support of studies such as the EECS Undergraduate Experience Survey and his commitment to programs such as the Women in Technology Program and Rising Stars, an effort to bring together women who are interested in careers in academia.

Enrollments in the Department of Mechanical Engineering have achieved similar gender parity. This is especially impressive given that the national average of female undergraduate majors in the field is 13.2 percent. Willcox again highlights the efforts made by another leader, Mary Boyce, who was the first woman to head that department, leading it from 2008 to 2013, and is now dean of engineering at Columbia University. The results of an internal study announced in June suggested that the department’s ongoing proactive approach — revamping the curriculum, enhancing recruitment efforts — played a part in its success.

“The map, of course, cannot reveal specific causes of changes in gender diversity, but it does provide a place to begin a conversation,” says researcher Luwen Huang, who is an expert in visualization design. “The interactivity of the map was designed to encourage the user to explore, discover connections across classes, and ask questions.”

The researchers caution that looking at department-based data only provides one view. In the case of EECS, a deeper dive shows that introductory programming classes have historically had high female enrollments, but that finding may be deceptive. “When you look at introductory courses like 1.00 (Engineering Computation and Data Science) and 6.00 (Introduction to Computer Science and Programming), you see high levels of female enrollment,” Willcox explains. “That’s not because there are more women in those fields, but likely because women might lack the preparation and/or the self-confidence to skip introductory classes.”

Biannual surveys of MIT undergraduates and other internal reports seem to bolster such a supposition, suggesting that women at MIT may experience negative stereotyping and feel less confident than their male counterparts. Lower or higher female enrollment in certain classes and departments may also be due to a variety of other factors, from job prospects to the influence of peers to level of interest in the subject matter.

The data and tool provide a starting point to begin such analysis and to take potential actions. Being open about data, sharing data, and being data-driven are valuable forcing mechanisms, says the team, and a hallmark of MIT’s ethos of transparency. Further, having a visual map of gender diversity across MIT, they say, is literally eye opening.

“This map provides ample evidence that our efforts to enroll a diverse undergraduate class have had a dramatic impact on MIT,” says Ian A. Waitz, vice chancellor and the Jerome C. Hunsaker Professor of Aeronautics and Astronautics. “However, while these demographic trends are impressive, they are not sufficient. We must continue to work hard to create an inclusive, welcoming environment for all.”

October 26, 2017 | More

Identifying optimal product prices

David Simchi-Levi, LGO thesis advisor and professor of civil and environmental engineering, explains new insights into demand forecasting and price optimization.

How can online businesses leverage vast historical data, computational power, and sophisticated machine-learning techniques to quickly analyze and forecast demand, and to optimize pricing and increase revenue?

A research highlight article in the Fall 2017 issue of MIT Sloan Management Review by MIT Professor David Simchi-Levi describes new insights into demand forecasting and price optimization.

Algorithm increases revenue by 10 percent in six months

Simchi-Levi developed a machine-learning algorithm, which won the INFORMS Revenue Management and Pricing Section Practice Award, and first implemented it at online retailer Rue La La.

The initial research goal was to reduce inventory, but what the company ended up with was “a cutting-edge, demand-shaping application that has a tremendous impact on the retailer’s bottom line,” Simchi-Levi says.

Rue La La’s big challenge was pricing items that had never been sold before, which required a pricing algorithm that could set higher prices for some first-time items and lower prices for others.

Within six months of implementation, the algorithm increased Rue La La’s revenue by 10 percent.

Forecast, learn, optimize

Simchi-Levi’s process involves three steps for generating better price predictions:

The first step involves matching products with similar characteristics to the products to be optimized. A relationship between demand and price is then predicted with the help of a machine-learning algorithm.

The second step requires testing a price against actual sales, and adjusting the product’s pricing curve to match real-life results.

In the third and final step, a new curve is applied to help optimize pricing across many products and time periods.
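
As a rough sketch of those three steps (not the award-winning algorithm itself), the snippet below uses made-up sales data and a simple linear fit where the article describes a machine-learning model:

```python
# Toy version of the forecast-learn-optimize loop with hypothetical numbers.
import numpy as np

# Step 1 (stand-in): historical (price, units sold) pairs from comparable products
prices = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
units = np.array([480.0, 410.0, 330.0, 260.0, 190.0])

# Step 2: fit a demand curve, then shift it through an observed test-price sale
slope, intercept = np.polyfit(prices, units, 1)
test_price, test_units = 28.0, 370.0
intercept += test_units - (slope * test_price + intercept)

# Step 3: pick the price that maximizes predicted revenue = price * demand
candidates = np.linspace(15, 45, 301)
revenue = candidates * (slope * candidates + intercept)
print(f"Suggested price: ${candidates[np.argmax(revenue)]:.2f}")
```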

Predicting consumer demand at Groupon

Groupon has a huge product portfolio and launches thousands of new deals every day, offering each for only a short time. Because the sales period is so short, predicting demand was a big problem and forecasting nearly impossible.

Applying Simchi-Levi’s approach to this use case began with generating multiple candidate demand functions. By then applying a test price and observing customers’ decisions, the team gleaned insights into how much was sold — information that identified the demand function closest to the actual level of sales at the learning price. That function became the final demand-price model and served as the basis for optimizing price during the optimization period.

Analysis of the results from the field experiment showed that this new approach increased Groupon’s revenue by about 21 percent but had a much bigger impact on low-volume deals. For deals with fewer bookings per day than the median, the average increase in revenue was 116 percent, while revenue increased only 14 percent for deals with more bookings per day than the median.

Potential to disrupt consumer banking and insurance

The ability to automate pricing enables companies to optimize pricing for more products than most organizations currently find possible. The method has also been used in a bricks-and-mortar setting, applied to a company’s promotion and pricing across various retail channels, with similar results.

“I am very pleased that our pricing algorithm can achieve such positive results in a short timeframe,” Simchi-Levi says. “We expect that this method will soon be used not only in retail but also in the consumer banking industry. Indeed, my team at MIT has developed related methods that have recently been applied in the airline and insurance industries.”

September 22, 2017 | More

New robot rolls with the rules of pedestrian conduct

Jonathan How, professor of aeronautics and astronautics and LGO thesis advisor, recently co-authored a paper on a new design for autonomous robots with “socially aware navigation.”

September 13, 2017 | More

Sloan

How to be a game-changing leader

To build a game-changing organization, maintain a sense of duality: urgency with patience, leadership with individual accountability, learning with leading, and stewardship with change.

In a new webinar for MIT Sloan Alumni Online, senior lecturer Doug Ready shared five ways leaders can build game-changing organizations. Ready, an expert on organizational effectiveness, is the founder and CEO of the International Consortium for Executive Development Research. He was recently inducted into the Thinkers50 hall of fame.

“Game-changing organizations stand out from the pack and their peer group,” Ready said. “They let people raise their hand and have a point of view to share valuable contributions. They’re not afraid to swim across the stream and experiment and try new things. But they’re not just wildly innovative — they know how to get things done.”

These are the five goals that game-changing leaders must embrace.

1. Create clarity. Leaders create companies that are purpose-driven and know why they exist. They’re performance-focused and know what they want to achieve. And, finally, they’re principle-led and know what they believe in.

2. Unleash energy. “It’s not just about one charismatic leader. Leaders need to create a climate of high engagement and a robust level of dialogue, where there’s lots of questioning and lots of expectations,” Ready said. Good leaders provide a sense of inclusion, belonging, and emotional safety, where employees feel empowered to get help and to ask questions while also being held accountable for their work.

3. Build trust. “When leaders authentically solicit feedback, they build trust,” he said. They do this in four ways: By thinking about shared beliefs, articulated values, normative behaviors (what’s accepted in the organization and whether this aligns with its values), and rewards and consequences (whether there are incentives for aligning with those beliefs and values). Ideally, a game-changing culture has both “glue and grease,” he said. “Glue binds us together, and grease enables fresh thinking and resiliency.” Game-changing organizations allow employees to question the norms with grease while being bound like glue by key values and beliefs.

4. Win today. Smart leaders don’t just talk about change — they invest time and resources in it, even when change feels uncomfortable. “We need discipline to innovate and execute; we need to make sure we’re committing to constant customer obsession, shareholder happiness, and employee excitement,” Ready said.

5. Shape tomorrow. Savvy leaders scrutinize their organization to determine why change might not happen. Is it a problem with capability, or finding talent? Or is it a culture challenge, wherein employees don’t feel enabled to speak up? Leaders need to maintain a “robust sense of constant questioning,” he said.

Finally, he said, leaders don’t merely focus on these five things. They marry each skillset with a mindset. They’re able to maintain a sense of duality: urgency with patience; leadership with individual accountability; learning with leading; and stewardship with change.

“Leaders are never satisfied with the status quo. They’re always questioning and asking for input,” he said. “If we’re going to build organizations that are purpose-driven, performance-focused, and principle-led, we need to cultivate a new perspective on the skillsets and mindsets of our enterprise leaders.”

February 21, 2018 | More

Lost Einsteins: The US may have missed out on millions of inventors

Innovation has slowed in the U.S., stymying economic growth. To get back on track, the U.S. needs more low-income children, women, and minorities to become inventors — but that won’t be easy.

Innovation fueled economic growth in America for the past century, but since the 1970s, innovation (as measured by fundamental productivity growth) appears to have slowed — from an annual increase of 1.9 percent to 0.7 percent — and so has economic growth.

A new study shows that, thanks to inequality, the U.S. has potentially missed out on millions of inventors during that time — what the researchers refer to as “lost Einsteins.” Kids born into the richest 1 percent of society are 10 times more likely to be inventors than those born into the bottom 50 percent — and “this is having a big effect on innovation,” MIT Sloan professor John Van Reenen said.

The research also shows that innovation in the U.S. could quadruple if women, minorities, and children from low-income families became inventors at the same rate as men from high-income families. Making that happen is the hard part, though. It means exposing more children to innovation when they are young — and the younger they are, the better.
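
To see the arithmetic behind a claim like "could quadruple," consider a back-of-envelope sketch in Python. The population shares and invention rates below are illustrative placeholders, not figures from the study; the point is only the mechanics of re-weighting every group to the highest observed rate.

```python
# Back-of-envelope sketch of the "lost Einsteins" arithmetic.
# All shares and per-capita invention rates are made-up placeholders
# for illustration; they are NOT the study's numbers.

groups = {
    # group: (share of children, inventors per 1,000 children)
    "high-income men": (0.10, 8.0),
    "everyone else":   (0.90, 1.5),
}

def inventors_per_1000(groups):
    """Population-weighted invention rate, per 1,000 children."""
    return sum(share * rate for share, rate in groups.values())

actual = inventors_per_1000(groups)

# Counterfactual: every group invents at the highest observed rate.
top_rate = max(rate for _, rate in groups.values())
counterfactual = sum(share * top_rate for share, _ in groups.values())

print(f"actual rate:         {actual:.2f} inventors per 1,000")
print(f"counterfactual rate: {counterfactual:.2f} inventors per 1,000")
print(f"implied multiple:    {counterfactual / actual:.1f}x")
```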

The wealth factor
Since innovation is largely seen as a means of driving economic growth, researchers at the Equality of Opportunity Project wanted to see what part childhood wealth plays in future innovation.

The research [PDF], completed by Van Reenen alongside Raj Chetty, Xavier Jaravel, Neviana Petkova, and Alex Bell, showed stark results.

“The most striking thing was how sharp the relationship was between the wealth of your parents and whether you grew up to be an inventor or not,” Van Reenen said.

By linking patent records with de-identified IRS data and school district records for more than one million inventors, the researchers found that, while ability does play some part in a child’s chance of becoming an inventor in the future, it is far from the biggest factor.

Instead, wealth played a much larger role. Among children who excelled in math in third grade, those whose families’ incomes fell into the highest fifth of the population were more than five times as likely to become inventors as those whose families’ incomes were in the lowest fifth.

This disparity is amplified among children whose parents were in the top 1 percent of earners — they were 10 times more likely to be inventors than those in the bottom 50 percent.

Broken out by race, white children were three times as likely as black children to be inventors. Only 18 percent of inventors were women.

But why?

You cannot be what you cannot see
Researchers have long known that innovation is more concentrated in certain regions — near large universities, research centers, and areas with a high concentration of businesses. But no one had previously asked how that affects children growing up in those areas.

The researchers found that growing up in one of these clusters of innovation makes kids more likely to be inventors themselves — likely because those children are being exposed to innovation at an early age.

“Your community, your family, your friends — all these things seem to matter,” Van Reenen said. And for a very simple reason — if kids grow up around innovation, they grow up hearing people talk about innovation, and begin thinking they can be inventors.

And it is industry-specific. If a child’s parent is an inventor in synthetic rubber, that child is more likely to grow up and invent something in the synthetic rubber field. “There isn’t a specific gene for that — more likely it is due to something people call ‘dinner table capital.’ You are sitting around, talking about things, and you pick that up from your parents,” Van Reenen said.

What’s more, girls who are exposed to female inventors are more likely to grow up to be inventors themselves — the same is true for boys. “While having inventor exposure is always good, it is particularly strong when you see someone of your same gender,” Van Reenen said. While the researchers were unable to break this same metric out based on race, Van Reenen thinks that as research gets more granular, this is likely to be true for minorities, as well.

Increasing exposure 
Current U.S. policies meant to increase innovation aren’t working. The researchers found that things like tax incentives only affect those lucky enough to have already been exposed to innovation, and people who are inventors would likely be so even if the tax incentives didn’t exist.

Instead, children need to be exposed to innovation from an early age.

The researchers propose a number of ways to do that — everything from mentoring programs and internships to school programming and interventions through social networks. The goal is to get women and minorities to connect with people like them who have become inventors, to show them that they can be inventors, too.

Whatever policies are eventually adopted, policymakers need to think in terms of long-term solutions. “What you want is to increase the pipeline of people on the supply side who are great inventors,” Van Reenen said. “You want to take the talent that is already in America and get those kids imagining themselves being an inventor or potential inventor. It is not a quick fix, but in the long run it is going to be a more effective policy.”

And the younger children are reached, the better. The researchers looked at third grade math scores, but by that point the effects of inequality are already starting to take hold.

The ability to be an innovator doesn’t vary across race, gender, or income groups — but circumstances do. Many of the children in the under-represented groups could have grown up to be inventors, but didn’t, leaving us with a generation of “lost Einsteins.”

“These people have the talent to be inventors, but they don’t imagine that they could be,” Van Reenen said. “We are losing out on a real source of knowledge and ultimately growth — a factor that we need.”

February 17, 2018 | More

How 2 MIT entrepreneurs keep their companies focused

When your company grows, a focused vision and hiring strategy will keep you from spinning out.

In its quest to build a smoother, digital auto chassis, ClearMotion grew from 15 employees in 2011 to 115 in 2017, with $130 million in funding. Romulus Capital, an early-stage venture capital firm managing $200 million, started in Krishna Gupta’s MIT dorm room in 2008.

With growth comes risk, and at the Feb. 9 MIT Venture Capital and Innovation Conference, Gupta and ClearMotion CEO Shakeel Avadhany explained how they keep their companies focused, and what they focus on when building their teams.

“For any leader, the most important thing is you have a vision, you set the vision, you have to be very critical about that vision,” said Avadhany, SB ’09. “It’s very easy to sprint in the wrong direction. Be paranoid about that.”

What’s important is to establish your main thing, and “you’ve got to keep the main thing the main thing,” Avadhany said. “That sounds simple, but in practice it’s actually an extremely important principle to not let go.”

Gupta, SB ’08, said he lived that principle when he had to decide, in less than a week, the future of a struggling company he owned. Gupta said he basically lived in one of the company’s offices, interviewing employees to get a sense of what was going through their heads, their emotions, and their plans for the future if the company stayed open. (It did — Gupta invested several million dollars into the now-flourishing company.)

“Sometimes you just have to sit down and focus on one thing for several days at a time, which was actually quite difficult. That’s what I did. I just cut out everything else for those six days,” Gupta said. “One week really taught me the value of going all in, putting my focus on one thing, trying to get as many people-related signals as I could. We talk about facts, but sometimes — especially at the early stage — a lot of it is driven by human beings and what human beings are capable of doing and what they’re incentivized and motivated to do.”

Building a team
Hiring and investing in the right motivated people for your company is another challenge that takes skill.

Ideally, you’ll also find a team of people who are in many ways better leaders than you, Avadhany said. And make sure you’re all pulling on the same end of the rope.

“I have to be able to trust these people coming on board,” Avadhany said. “Because you’re not going to be up in their face day to day. That’s a great way to de-motivate.”

For any CEO looking to hire, Avadhany said, it’s helpful to tap the “referral machine” of the top performers in your business.

“You ask them, ‘Who are the best people you’ve worked with?’” Avadhany said. “I think in the VC business, a lot of deals that VCs do are by introduction. It’s a great way to bring in high-quality people.”

For Gupta, trust in a team is twofold: Can members do the work, and can they take feedback?

“I want to see that this person is signing up for creative destruction,” Gupta said. “We are going to clash, we are going to debate things.”

That’s what leads to great decision-making, Gupta said.

“It’s very easy to develop a groupthink in a company,” Gupta said. “A great partner is someone who will actually push you. That’s only going to work if I trust that the person I’m investing in is receptive to that. Oftentimes I will turn down a deal, even though I love the team and love the capabilities of the team, because I don’t think that entrepreneurial team will take feedback. And that to me is a huge red flag.”

February 15, 2018 | More

Starting a fintech venture? Keep these 3 things in mind.

When starting a fintech venture, the usual assumptions about startup costs and data don’t apply.

Starting a new company is always hard, but starting a fintech venture presents some unique challenges. That is something that Sophia Lin, MBA ’12, found out the hard way.

“We didn’t realize until we got our hands dirty that the hardest part of starting our company was going to be data,” Lin said.

In 2016 Lin, along with Andrew Kelley, co-founded Keel, a fintech startup that functions almost like a social network for investing, connecting rookie investors with more seasoned ones who can show them the ropes.

The company’s core technology is an algorithm that aggregates investment data from hundreds of brokers. This lets investors closely follow their returns and also highlights when someone has outperformed the market. Investors can then pay a fee to follow those who have performed well, helping them make smarter investment decisions.

Here is what Lin learned about starting and launching a fintech company:

It is going to be more expensive than you think
Starting a new business is never cheap, but Lin says that starting a fintech venture involves costs that aren’t always evident to outsiders.

“Starting costs are higher than other industries because of data, infrastructure, and security requirements,” Lin said. “Data is expensive, which stops a lot of early stage startups from entering this field. But you need data to build your product.”

Lin notes that organizations like FinTech Sandbox — which offers financial data for fintech companies that are just getting started — and fintech incubators that connect founders with other people in the same position can help.

Additionally, user acquisition cost — or cost of marketing to and acquiring new customers — is expensive for financial services in general and fintech businesses in particular. Entrepreneurs need to keep all these things in mind when starting a new venture.

You are going to have to clean up messy data
All technology companies deal in data, but fintech companies deal in financial data. And, according to Lin, “financial data is not only expensive, it is also very messy.”

Lin says when a company is pulling data — whether from leading data sources, third-party vendors, or smaller companies — it has to expect errors. “There will be missing data, there will be incorrect labels, and incorrect data that you will have to clean up,” she said.

While this is important for any fintech venture, Lin says it is especially important for Keel, since the accuracy level required for the company is extremely high. One mistake can mean delivering incorrect results to customers, which is why she and Kelley have put so much effort into training their algorithm — using real data that dates back 17 years — and applying machine learning to further clean up their data.

“A lot of people come into this space just thinking they can plug in some data API and they are good to go. Unfortunately, it is not true,” Lin said. “You will need to work on further data manipulation to get it going.”
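
As a rough illustration of the kind of cleanup Lin describes, the sketch below uses pandas to handle a few common problems in broker-supplied data: duplicate rows, missing prices, and inconsistently labeled tickers. The column names, sample values, and label-correction map are hypothetical, and a real pipeline would be far more involved.

```python
import pandas as pd

# Hypothetical broker feed with typical problems: duplicates,
# missing values, and inconsistently labeled tickers.
raw = pd.DataFrame({
    "ticker": ["AAPL", "aapl ", "GOOG", "GOOGL", None, "MSFT"],
    "price":  [185.2, 185.2, None, 141.8, 98.4, 402.1],
    "shares": [10, 10, 5, 5, 3, 2],
})

# 1. Normalize ticker labels (strip whitespace, upper-case).
raw["ticker"] = raw["ticker"].str.strip().str.upper()

# 2. Map known mislabels onto a canonical symbol (illustrative only).
label_fixes = {"GOOGL": "GOOG"}
raw["ticker"] = raw["ticker"].replace(label_fixes)

# 3. Drop rows that are unusable (no ticker) and exact duplicates.
clean = raw.dropna(subset=["ticker"]).drop_duplicates()

# 4. Flag remaining missing prices for manual or model-based repair
#    rather than silently filling them in.
needs_review = clean[clean["price"].isna()]

print(clean)
print("rows needing review:", len(needs_review))
```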

Find a cofounder with a complementary skill set
Lin recognizes that this is important in any startup venture, but thinks, “this is particularly true for fintech. Financial services are very specific, and domain knowledge is really important. But at the same time, you need someone who can help you build the product,” she said.

In Keel’s case, Lin had worked in banking for several years, and Kelley had a technology background. Only by combining the two skillsets were they able to start the company — Lin brought the financial know-how, and Kelley came up with what Lin calls the “secret sauce” to cleanse the data central to Keel’s business model.

Additionally, acquiring talent to build the team can be a challenge for fintech ventures. Ideally, Lin said, fintech startups will find people with both financial and technology backgrounds. Since that can be hard, Lin suggests looking for talent with strong domain knowledge of either finance or technology, but enough training or knowledge to be able to understand the other side.

Beyond that, Lin said when starting a fintech venture it is important to “be flexible. Be adaptable. Because you will hit a lot of unexpected challenges. It is good to be stubborn, but at the same time not to always hold on to your original idea. Think beyond that. Working on something that is not as sexy — even though it might not look as attractive — that is your gold mine.”

February 15, 2018 | More

3 steps to an effective ‘ask’

How you ask for something matters just as much as what you want. Time to improve your request.

Whether asking for a job, a raise, or money, it’s important to not only know your audience, but yourself. In a Feb. 9 talk at the MIT Venture Capital and Innovation Conference, 1843 Capital founding partner Tracy Chadwell shared three steps to improving your chances for a positive outcome.

Know who you’re talking to
Knowing your audience is the most difficult and critical step, and is also something people do the least, Chadwell said. But it doesn’t have to be.

“It’s easy to go onto LinkedIn, it’s easy to Google someone, it’s easy to find out what they’re interested in,” Chadwell said. “Try to make a personal connection. Really understanding what the venture capitalist strategy is, this is one I really struggle with. People will march up to me in an elevator and start just diving into their pitch without understanding exactly what I do.”

Know who you are
Know your value proposition, Chadwell said, because without a value proposition, you’re just executing a demand.

“It’s really important to have a very concise statement of exactly what you do,” Chadwell said. “What’s your total addressable market? What is your experience? And have that in a very small format that’s digestible.”

Be persistent
No doesn’t mean no forever, Chadwell said. Make a list of 100 people you want to talk to in a particular sector, and just get started on introductions.

“At least get the ball rolling in that space, and then you’ll start to know the sector better and better, and you can do a better job of it,” Chadwell said. “But no doesn’t mean no, it just means no at that time, for exactly what you’re asking.”

February 15, 2018 | More

How to be a mission-driven COO

In a growing company, leaders who focus on priorities, mission, and culture help define what the organization is about — and what its future holds.

When’s the last time you browsed a vending machine and ended up selecting the Greek salad freshly made with spinach, cucumber, feta, red onion, Kalamata olives, cherry tomatoes, romaine lettuce, whole wheat orzo, and toasted almonds?

Shayna Harris wants to change your answer.

Harris, who graduated from MIT Sloan in 2011, has long been interested in “breaking models in the food system.” As a manager at Oxfam, she was instrumental in establishing a sustainable coffee sourcing program that influenced major companies, like Starbucks. After earning her MBA, she spent almost five years at the food giant Mars, overhauling the company’s sourcing practices for raw ingredients like cocoa, and investing in sustainable tools and training to strengthen the supply chain and meet growing demand. In February of 2016, she made her most recent move, becoming the chief operating officer at Farmer’s Fridge.

Founded in 2013, Farmer’s Fridge is a network of automated, self-serve micro-restaurants, offering wholesome, vegetable-forward dishes that are prepared at the Farmer’s Fridge kitchen daily and delivered fresh to their fridges every morning. Each fridge measures 12 square feet and lives in strategic locations around Chicago and Milwaukee, where fresh food may be scarce. The company has roughly 80 employees and 100 fridges, and is growing by the day. As Harris enters her second year there, she reflected on a few principles that guide her leadership in a small and rapidly growing enterprise.

Nail down your priorities
Whether you oversee a Fortune 100 or a family-owned diner, the need to prioritize is universal. But this need is heavily magnified when a company is young. “In a startup, the possibilities are so open and endless that you need to be very purposeful in your learning and action,” Harris said.

For Harris, establishing priorities started with listening. In her first few weeks on the job, she spent hours in discussion with the CEO and founder of Farmer’s Fridge, Luke Saunders. She worked to understand not only his vision for the company, but also his broader mission of how the company might be used to improve people’s health, resolve issues of food waste, and create universal access to fresh and healthy food in the U.S. Because the founder possessed a clear sense of what he wished the company to be, this part was relatively straightforward. (These conversations can be difficult if the CEO has given no thought to a company’s intrinsic purpose.) “The real work,” Harris said, “was to figure out how we align ourselves with these principles.”

Filter strategy through principle
Understanding the values that define a company is one thing. Acting with consistency on those values is another. Harris noted that it would be easy, and no doubt profitable, for Farmer’s Fridge to stock its refrigerators with products that taste good but don’t carry much nutritional value. “But that’s not what we’re about,” she said.

Instead, Farmer’s Fridge maps its founding principles of fresh, healthy, and accessible food for all into explicit rules informing how the company makes decisions. For instance, Harris described “nutritional guardrails” to assure that products under development align with the company’s claims. As Farmer’s Fridge expands its offerings from salads to other dishes — such as quinoa and pasta — the guardrails assure that each new meal meets self-imposed nutritional expectations. “These guardrails are critical to how we grow,” she said. “The mission of the company is actually driving all of our research and development.”

Farmer’s Fridge salads are part of the company’s vegetable-forward menu that is prepared daily and delivered to self-serve fridges each morning.

Building a strategy rooted in purpose is important for two reasons. First, referring back to mission provides a template for decision-making: Is the question or challenge before me relevant given what we’re trying to accomplish? And, if so, how can I respond in a way that’s faithful to our roots? “This keeps you grounded when you come upon the inevitable million and one decisions that a start-up leader makes throughout the day to shape the business,” Harris said. Second, a purpose-driven organization motivates current employees and spurs recruitment, as employees can see that they are investing their time in something meaningful.

Keeping a company’s mission front-of-mind can also give rise to unconventional opportunities. Because Farmer’s Fridge wants to provide healthy and sustainable food for all, meals that don’t sell by the end of the day are donated to the Greater Chicago Food Depository (food scraps that can’t be donated are composted). This wide-angle vision of what a company is about — not selling food, but changing how we get healthy food — speaks to Harris’s final thought.

Think of culture beyond your company
Though she is plenty busy with the daily operations of Farmer’s Fridge, part of Harris’s responsibilities extends beyond the specific role of COO at a single corporation. She also needs to think about building the workplace of the future.

“We’re at a time in history when women are increasingly being recognized as leaders of industries and organizations,” she said. “I’m excited to be a part of the conversation about how we evolve and adapt the workplace to support excellence in work: No matter what walk of life, no matter what gender, no matter what orientation, and so on, the focus is on meaningful contributions.”

Harris attributes her success in part to mentors who encouraged her to “break mental models and ignore the mold of what pedigree you should hold for a certain position.” This proved to be invaluable support; now she’s paying it back and doing the same for others. Thinking beyond your seat and immediate workplace to the broader impact that you can have on an industry is something that Harris encourages leaders of every organization to embrace.

February 15, 2018 | More

Ending L.L. Bean lifetime return policy not a fatal marketing move

Sometimes you’ve got to mess with a good thing to make the right decision for your company and your customers.

Surrendering to the bottom line, L.L. Bean waved the flannel shirt on its lifetime return policy, trading its timeless customer service perk for a one-year version.

The decision was greeted with dismay, curiosity, and — perhaps most importantly — understanding, a reaction that’s vital for a successful strategic shift, said MIT Sloan senior marketing lecturer Sharmila Chatterjee.

“Any time there is a strategic change being done by a company, communication is key,” Chatterjee said. “That is critical regardless of the situation.”

In this situation, some customers were using L.L. Bean’s lifetime return policy as a “lifetime product replacement program, expecting refunds for heavily worn products used over many years,” L.L. Bean executive chairman Shawn Gorman said in a statement. “Others seek refunds for products that have been purchased through third parties, such as at yard sales.”

As a result, the company was losing millions of dollars from people returning unsalvageable clothing and other items under the retailer’s return policy. Gorman told the Associated Press this was neither sustainable for the company, nor fair to customers.

Not everyone agrees with the change, with some people already taking to social media to declare their severed ties with the century-old company.

That’s why communication is so important, Chatterjee said. A well-communicated message explains to the customer that a company trusts its external stakeholders, but wants to balance the system so that the company is not being taken advantage of, and “so that [they] can be fair to all the stakeholders.”

Without the right communication, however, the change could seem more self-serving for the company — rather than an attempt to strike a balance among stakeholders.

“You have to be very, very careful in communicating what is this underlying decision,” Chatterjee said. “If I were L.L. Bean, I’d spend significant resources in reaching out to customers,” explaining the what and the why, and assuring customers that the company does not mistrust them.

A company should assume in its approach that its customers are logical consumers, and it should explain the rationale behind the decision, she said.

“People are fair minded,” Chatterjee said. “They don’t want the company to go out of business. L.L. Bean has a responsibility to its employees. These businesses provide much-needed jobs. But they also want to treat customers fairly. It’s a balancing act.”

According to Gorman’s statement, customers will still be able to return items within a year — with proof of purchase — and the company is willing to work with customers “to reach a fair solution” if it’s after the one-year mark and the product is defective.

He also told the AP that L.L. Bean conducted internal surveys and found that 85 percent of customers were OK with the change.

The original policy was very generous, Chatterjee said, “but it feels like they’ve been taken advantage of.”

Sometimes a few bad apples can cause trouble for the larger population, Chatterjee said, but given the numbers from L.L. Bean, it seemed like it was more than just a few apples.

Ultimately for L.L. Bean, Chatterjee said, “as long as the policy is positioned properly, it’s communicated accurately in a convincing manner to show why it’s being done, and retailers sense it’s going to be fair, consumers are going to be trusted and the quality is going to be retained, they should come out OK.”

February 13, 2018 | More

MIT-led team is aiming to build a better cryptocurrency

New technologies that make it possible to reinvent our financial system have exploded over the past decade.

Bitcoin, ethereum, and other cryptocurrencies are proof that there’s a market for alternatives to the big, powerful players. And yet, it’s unclear how these cryptocurrencies will affect the economic landscape. Problems like bubbles, financial crashes and inflation aren’t going away any time soon. (Ahem, note recent events.)

But in the future, things could be different. These digital currencies and their supporting infrastructure hold great promise for deepening our understanding of the monetary circuit. With newfound clarity, we can build tools for minimizing financial risk; we can also learn to identify and act on early-warning signals, thus improving system stability. In addition, this new level of transparency could broaden participation in the economy and reduce the concentration of wealth.

A crypto alternative

How might this work? Leading cryptocurrencies, with bitcoin being perhaps the most famous, or infamous, example, have considerable logistical limitations. An alternative is needed.

For the past three years, our lab at the Massachusetts Institute of Technology (MIT) has worked on creating a new global currency, Digital Tradecoin, that combines the most recent technologies with the very old idea of a gold coin having intrinsic value. The currency will be backed by alliances of diverse players and anchored to a basket of real-world assets such as crops, energy and minerals, or perhaps by a portfolio of national currencies and bonds. These traits help stabilize its value and make it easier for the public to trust it. After all, a currency requires both efficient trade systems and trust.

This is where bitcoin falls short. For starters, it’s slow and clunky. Its infrastructure can handle about seven transactions per second, compared with the 2,000 on average handled by Visa. It’s an energy drain, too. The computer power required to create each digital token, a process known as “mining,” consumes at least as much electricity as the average American household burns through in two years.

Bitcoin is also not as free and libertarian as it’s often portrayed. The system was set up to spread authority among many miners; but because a small number of groups banded together into giant pools, a few players now dominate. Put simply, it’s not the peer-to-peer network it was designed to be.

Another problem is that bitcoin is not useful in day-to-day life. Bitcoin’s price against the U.S. dollar (and other government-issued legal tender) is exceedingly volatile, which makes it hard to spend. And because bitcoin isn’t backed by assets or a government guarantee, it’s essentially a speculative currency, which is a polite way of saying it’s not real money.

Blockchain-ledger combination

It’s important to point out that bitcoin’s digital token is not the ingenious invention here; that distinction goes to the “distributed ledger,” a communal database managed by multiple contributors that serves as a shared, digital bookkeeping system. Its underlying data structure, called a blockchain, is held in a series of encrypted blocks. A variety of “proving” mechanisms, which involve both humans and computers, helps keep those blocks secure.

Conceptually, blockchains and distributed ledgers aren’t new. What is new, however, is linking them together into a tamper-resistant computer system that can be applied to a broad spectrum of practical problems.
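
The core idea is that blocks of records are chained together by cryptographic hashes, so altering an old entry breaks every later link. The Python sketch below is a toy illustration of that general data structure, not the Tradecoin design or any production system.

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(records, prev_hash):
    """Append-only block: later hashes cover this block's prev_hash,
    so editing any earlier block breaks every subsequent link."""
    return {"timestamp": time.time(), "records": records, "prev_hash": prev_hash}

# Build a tiny chain of three blocks (toy records, not real transactions).
chain = []
prev = "0" * 64  # genesis placeholder
for batch in (["alice pays bob 5"], ["bob pays carol 2"], ["carol pays dan 1"]):
    block = new_block(batch, prev)
    chain.append(block)
    prev = hash_block(block)

def verify(chain) -> bool:
    """Recompute every link; any tampering changes a hash and fails."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = hash_block(block)
    return True

print(verify(chain))                            # True for the untouched chain
chain[0]["records"][0] = "alice pays bob 500"   # tamper with history
print(verify(chain))                            # now False
```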

Enter Tradecoin. The principles behind Tradecoin are profoundly different from cryptocurrencies like bitcoin or ethereum, which aren’t linked to real-world assets or alliances. Tradecoin also avoids the energy-intensive process of mining by using a preapproved network of diverse and trusted “validators.” The result: a fast, scalable, reliable and environmentally friendly financial instrument. (Tradecoin is described in greater detail in a recent article that Alexander Lipton and I wrote for Scientific American.)

Tradecoin is likely safer than today’s currencies because it can be created to make the details of the monetary circuit visible for supervision. This allows for distributed accounting, which means we can more reliably forecast risk. This kind of transparency is impossible today because the details of transactions and contracts are restricted. But if such a system had been in place in 2008, it could have recognized the concentration of traders in mortgage-backed credit-default obligations and waved a red warning flag of the consequences for home values.

Pilot programs

We’re working to make Tradecoin a reality. We’re building “trust network” software systems, also the backbone for Tradecoin, for European Union nations and U.S. financial companies to use as pilot programs. We’re also exploring pilots for two Tradecoin currencies: one that’s intended for international commerce and backed by an alliance of small countries, and another that’s backed by farmers for use in commodity markets.

Today, for the first time ever, there exists the possibility of worldwide digital currencies that are largely immune to the self-serving policies of powerful central banks. As a result, major currencies like the dollar might become less dominant, or perhaps the U.S. financial system might become better behaved. The hope is that these systems, backed by broad alliances of diverse participants, can bring more transparency, accountability and equity to the world.

Alex “Sandy” Pentland is the Toshiba Professor of Media Arts and Sciences at MIT. He also directs MIT’s Human Dynamics Laboratory and the MIT Media Lab Entrepreneurship Program.

February 13, 2018 | More

No one knows your strategy––not even your top leaders

The CEO of a large technology company (let’s call it Generex) recently reviewed the results of her company’s annual employee engagement survey and was delighted that strategic alignment emerged as an area of strength.1 Among the senior leaders surveyed, 97% said they had a clear understanding of the company’s priorities and how their work contributed to corporate objectives. Based on these scores, the CEO was confident that the company’s five strategic priorities — which had not changed over the past two years and which she communicated regularly — were well understood by the leaders responsible for executing them.

We then asked those same managers to list the company’s strategic priorities. Using a machine-learning algorithm and human coders, we classified their answers to assess how well their responses aligned with the official strategic priorities.2 The CEO was shocked at the results. Only one-quarter of the managers surveyed could list three of the company’s five strategic priorities. Even worse, one-third of the leaders charged with implementing the company’s strategy could not list even one.
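
The matching step described above, deciding whether a manager’s free-text answer corresponds to one of the official priorities, can be approximated crudely with string similarity. The sketch below uses Python’s standard-library difflib; the priorities, answers, and threshold are invented examples, and the actual study relied on a trained machine-learning model plus human coders.

```python
from difflib import SequenceMatcher

# Invented examples; the actual study used a trained model plus human coders.
official_priorities = [
    "expand into emerging markets",
    "improve customer retention",
    "reduce operating costs",
    "launch the next-generation platform",
    "develop leadership talent",
]

manager_answers = [
    "grow our presence in emerging markets",
    "cut operating costs",
    "hire more salespeople",
]

def score(answer: str, priority: str) -> float:
    """String similarity in [0, 1] between an answer and a priority."""
    return SequenceMatcher(None, answer.lower(), priority.lower()).ratio()

THRESHOLD = 0.6  # hypothetical cut-off for counting an answer as a match

for answer in manager_answers:
    best = max(official_priorities, key=lambda p: score(answer, p))
    s = score(answer, best)
    matched = best if s >= THRESHOLD else None
    print(f"{answer!r:45s} -> {matched} (score {s:.2f})")
```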

These results are typical not just in the technology industry, but across a range of companies we have studied. Most organizations fall far short when it comes to strategic alignment: Our analysis of 124 organizations revealed that only 28% of executives and middle managers responsible for executing strategy could list three of their company’s strategic priorities.3

When executives see these results, their first instinct is to schedule more town hall meetings or send another email blast describing the corporate strategy. The impulse to double down on existing corporate communication strategies is understandable, but unlikely to solve the problem. Our research has uncovered three nonintuitive causes of strategic misalignment and concrete steps that top leaders can take to improve how well the strategy is understood throughout the organization.

1. Acknowledge you have a problem. The first step in solving a problem is recognizing you have one. C-suite executives often assume that the entire company is on the same page when it comes to strategy, but this assumption is usually wrong.4 Our strategy execution survey includes a series of questions designed to measure whether a company has a shared set of strategic priorities, how well those objectives are understood, and whether they influence resource allocation and goal setting throughout the organization.5 Top executives rate their company higher on all of these dimensions than managers lower down the organization do.

The exhibit “Top Teams Overestimate Alignment” summarizes the strategic alignment gap. To interpret this chart, start with the first assessment statement, “Our organizational priorities support our strategy.” If supervisors, managers, and executives outside the C-suite assess their company as average (the 50th percentile in this figure), the typical top team will rate their company at the 67th percentile — well above average. The pattern repeats across every single measure of strategic alignment.6

2. Agree at the top. Lack of strategic alignment often starts at the top. In developing strategic priorities, the top team should agree on a single set of objectives for the business as a whole, rather than each leader pursuing his or her own agenda. Unfortunately, most top teams we have studied fail to agree among themselves on company-wide priorities. For the typical organization we studied, just over half of senior executives converged on the same list of strategic objectives. Bear in mind, we did not measure whether the team members were committed to achieving the strategic priorities; we measured only whether they agreed on what they were.

The results from Generex were typical of the companies we have studied. Just over half of the top team could list all or all but one of the company’s five official priorities. But the other half of the team was completely out of touch. (See “Lack of Agreement on Strategy at the Top.”) Three of the top team members could list only one of the company’s strategic priorities, and two executives did not get a single objective correct — despite having five tries. Between them, these C-suite members listed a total of eight additional priorities that were not among the company’s official objectives.

Of course, not every top team shares Generex’s problem of half the members flying blind. Some teams we have worked with produce a more normal distribution, where most of the senior executives know some of the priorities with a few executives (usually including the CEO) knowing all of them, and others who can name a few or none. The Generex example does, however, underscore the importance of checking whether everyone in the C-suite is on the same page strategically. If executives are not aligned, it is critical to understand why not and address the issues before communicating the strategy more broadly throughout the organization.

3. Bring level two along. Strategic misalignment often starts at the top, but it doesn’t end there. Managers’ ability to correctly list their company’s strategic priorities continues to drop as you move further down the organization, but the rate of decline is not what you might expect. You might predict a steady decrease in alignment as you move down the organizational hierarchy, or perhaps a sharp drop-off among the frontline supervisors who are furthest from the C-suite. In fact, our data suggests the opposite — the sharpest plunge in alignment occurs between the top team and their direct reports, and is more gradual thereafter.

“Alignment Plummets Between Top Executives and Their Direct Reports” plots the average number of managers, at each level in the organization, who can list the company’s top priorities. For the typical company, just over half of top team members can do so. It is pretty bad when only half the C-suite agrees on the same objectives, but things look even worse for their direct reports. Strategic convergence drops off a cliff between the top team (51% agreement) and senior executives who report to the top team (22%).

The gap between the top team and its members’ direct reports is less surprising than it seems at first glance. Top team members oversee their own function, business unit, or geography, but also serve on the enterprise-wide leadership team that charts the course for the company as a whole. Their direct reports, in contrast, are not privy to discussions in the C-suite, and tend to view the world through the lens of the organizational silo they are charged with managing.

Rather than hosting another town hall, top executives should focus first on their direct reports, making sure they understand the company’s overall strategy and how their function, geography, or business unit fits into the bigger picture. One powerful way to do this: Each top executive should consistently explain why his or her unit’s objectives matter for the team and for the company as a whole.

In our sample, half of executives who reported directly to a top team member said that their boss consistently explained how their goals supported the company’s overall agenda. Of the rest, 37% said their boss framed their activities in terms of their team’s objectives without reference to corporate strategy, and 12% said their boss struggled to explain why their priorities mattered at all. Many top team members need to do a better job explaining to their direct reports how their department, function, or regional goals fit into the company’s overall strategy.

To communicate strategic priorities throughout the organization, leaders at every level in the hierarchy should explain why their team’s goals matter — both for their team and for the organization as a whole. Across 69 items included in our execution survey, the single best predictor of strategic alignment was how consistently managers — from top executives to frontline supervisors — explained their team’s priorities in terms of their unit and the entire company.7

To quantify the impact of this behavior, imagine a company that is average on every survey item except for one — all the managers explain why goals matter for their unit and the company. A high score on that single item would propel an average company to the top quartile in terms of strategic alignment.

A shared understanding of strategic priorities among key leaders does not guarantee successful execution. But it is a good first step. Widespread confusion and disagreement about what matters most undermine the prioritization and coordination across teams necessary to implement strategy. If managers do not understand what the company as a whole is trying to achieve over the next few years, they cannot align their actions with the organization’s overall direction.

To increase the odds that their strategy is understood throughout the company, top executives should acknowledge that they may have a problem with alignment, agree as a team on strategic priorities for the company as a whole, make sure their direct reports understand these objectives, and ensure that leaders at every level in the organization communicate what corporate priorities mean for their own teams and for the company overall.

February 12, 2018 | More

How this MIT Sloan MBA is harnessing the new dominant force in politics

America has been to the polls for the first time since Donald Trump was elected in 2016. And more than anything, the races were a testing ground for the 2018 midterm elections and the presidential race to follow in 2020.

In that 2020 election, for the first time ever, Millennials will make up the largest segment of the American electorate, with 91 million of them composing roughly 35 percent of the voting population.

As a result, in just two years, Millennials will be poised to become a dominant force in politics—a force that can be harnessed effectively and decisively. The time has come for campaigns to redirect their attention away from their long-standing focus on baby boomers and engage Millennials in a meaningful and lasting way.

Right now, Millennials are fairly likely to be disengaged from the political process. With the average age of members of Congress at 58 and congressional leadership in their late 60s and 70s, Millennials, who seek social impact, simply feel as if they cannot relate to government across this generational divide.

Only 32 percent of Millennials report that they feel that “people like them” have a legitimate voice in the election, according to the Center for Information and Research on Civic Learning and Engagement at Tufts University.

In fact, most Millennials have never before been exposed to the political process in a truly meaningful and lasting way. They feel that their voices and votes simply do not register with older politicians. The Economist reported that only 30 percent of Millennials said they were even contacted by a campaign in 2016.

So, with Millennials now representing a third of the American electorate, my mission has been to create a process of real participatory inclusiveness, fostering lasting and meaningful Millennial engagement and, while doing so, making politics fun and appealing for young people.

I know something about getting Millennials engaged. After Hillary Clinton announced her candidacy in April 2015, I founded her Millennial fundraising program.

I did so by democratizing fundraising: first forming a steering committee and charging it with continually expanding the group outward; then bringing hundreds of young people together in a fun, unconventional venue at a very affordable price, with the added career benefit of high-level networking; and finally securing a high-profile campaign surrogate who could directly engage attendees by speaking to their issues: student loan reform and the rising cost of college tuition.

Our first fundraiser, in Philadelphia, drew over five hundred Millennials and was followed by similarly well-attended events in Boston, Chicago, Pittsburgh, New York, Seattle, Atlanta, and Virginia. All told, I raised over $270,000 for Hillary Clinton’s campaign and was the youngest fundraiser to do so.

Read the full post at BusinessBecause.

Dan Jordan Kessler is a first-year MBA student at the MIT Sloan School of Management.

February 5, 2018 | More

Engineering

Exploring his depth of field

Perhaps Corban Swain has inherited his idiosyncratic nature from his hometown. Huntsville, Alabama, has a dynamic history in the Deep South: Originally a small cotton mill town, it was catapulted into the space race when it was selected as a post-WWII missile development site. Later, it became an engineering enclave and hotspot for biotechnology.

Corban, too, defies stereotypes and ably wrangles his varied identities, as an artist and a scientist, a perfectionist and a procrastinator, a poet and an engineer, and — what in his youth sometimes seemed to him to be a dichotomy — an “intellectual brother.”

Now a first-year PhD student in biological engineering, Swain originally considered medical school while he was a student at Washington University in St. Louis. However, the year he took off from school to work as a full-time laboratory technician transformed his perspective on research. “It was definitely a tough decision because my path had been fairly linear up to that point … but so much personal and spiritual growth happened over that time,” he says.

Working in the lab, Swain, who is also a professional photographer, found himself compelled by the prospect of designing a physical tool to answer questions about invisible, microscopic phenomena — an interest that led him to MIT. Currently completing a laboratory rotation where he uses mathematical methods to reconstruct three-dimensional brain activity maps from light-field microscope images, Swain is drawn to visualization and the pursuit of a compelling image.

As he makes his way through his PhD program, Swain plans to continue to meld his technical and artistic interests, while exploring the potential for innovation that lies at the intersection of biology, engineering, and medicine.

Multidimensional identity

With the help of an artificial sun lamp, Swain is adjusting to his first Boston winter. As a first year, he is still exploring the Cambridge area from his apartment near Central Square. He particularly enjoys the live jazz at Wally’s Café, just across the river in Boston. While he has “mixed feelings” regarding the sometimes-hipster character of Cambridge (he espouses skepticism for a wood-grained drip coffee maker he encountered in a local café), he’s found the city to be a compelling place to take photographs.

Swain’s interest in photography began in high school, when he took a black-and-white photography class. His interest piqued, he worked on yearbook design, and then transitioned to professional photography in college, where he began to focus primarily on headshots. At first this was a pragmatic choice — headshots are commonly in demand for laboratory websites and LinkedIn profiles — but taking portraits became a creative exercise that suited his perfectionistic tendencies and his desire to “really get good at something.”

According to Swain, a great headshot relies on fine facial details and angles of light: “Fifty-plus percent is just forming a connection with the person … just to break the tension. [Then there are] angular things, where you put the light, the shoulders, a head tilt. You know when photographers do that? They’re actually doing stuff!” He laughs. “It’s little, subtle changes.”

Coming full circle, Swain is currently participating in a black-and-white photography course at MIT. Among the striking photos he is developing in black and white are a “Black Lives Matter” chalk drawing from a sidewalk near the Loop in St. Louis and a portrait of a pair of sharply dressed young men at the 50th anniversary of the Selma bridge crossing, staring straight into the camera.

Swain readily admits to an obsession with aesthetic detail in all aspects of his work. His eye for simplicity allows him to distill information in plots and graphics, but he also laughs at the time he spends on line spacing on even the smallest assignment, and notes that he has to manage his perfectionism in order to meet the hard deadlines of his academic work. This sometimes presents a challenge for someone who takes the time to organize their photos individually by year, then season, then (numbered, dated) shoot, then (numbered, dated) photo.

Swain’s other artistic interest, slam poetry, allows him to express himself personally and politically through performance. As a student in St. Louis, he, along with many others, used slam poetry as a medium for protest and catharsis during the Ferguson protests that followed the murder of Michael Brown. His emotional poem “The Silence of Michael Brown” was directly influenced by this challenging time.

A self-described introvert, Swain also feels able to explore his own thoughts on identity and race through performance. One poem, entitled “N*****,” was written in response to a prank on his college campus in which fraternity initiates used the n-word publicly, in a rap, in front of black students. Swain’s piece, a powerful salvo as well as a withering rebuke, describes the significance the word holds as both an inheritance of slavery and a weapon used against him.

Settling in at MIT

While Swain says he finally feels at home in his “nerdiness” at MIT, he occasionally misses the easy camaraderie of his black friends, whom he recently saw at their WashU reunion: “I had a big group of black friends. And there’s a certain dynamic there, of black culture. … I haven’t really found that space here, as of yet. And so that’s kind of tough.”

Swain is the only black student in his cohort in the biological engineering department, and while he is a member of a variety of groups for minority students — he is a Sloan-MIT University Center for Exemplary Mentoring (UCEM) Fellow, a member of Academy of Courageous Minority Engineers (ACME), and recently attended the Annual Biomedical Research Conference for Minority Students (ABRCMS) — he still finds that the underrepresentation of black graduate students and faculty can inhibit him from feeling completely natural in his black identity at MIT.

Currently rotating among bioengineering labs, Swain will join a permanent lab next month. He’s shouldering a heavy course load, lightened by a hugely supportive department and a close relationship with his cohort. He buys a card for each of his classmates’ birthdays for everyone to sign.

For now, Swain says he would be happiest developing software and tools in a lab that would prepare him for a future professorship. Swain is more interested in the design and advancement of bioengineering technology broadly than in any one specific application thereof. This is a natural extension of a tendency to see connections rather than limitations; it is clear that Swain has come to MIT to create.


February 22, 2018 | More

MIT’s growing global leadership in water achieves industry recognition

In early January 2018, MIT professor John Lienhard opened an unexpected email. A panel of water industry professionals from around the world had ranked him fourth in the Top 25 Global Water Leaders list. The list was compiled and published by Water and Wastewater International (WWi), a publication dedicated to the distribution of practical knowledge for water system operators, wastewater engineers, and other professionals in the water industry worldwide. Its readership includes industry professionals either working in or consulting with water and wastewater systems and plants, researchers and educators, government and development agencies, industrial wastewater facilities, and environmental management organizations.

“WWi is one of many industry publications that I turn to to stay informed of the latest trends in water management, as do many of my colleagues from within and outside of academia,” says Lienhard. “Therefore it was an honor and a surprise to find my name included in this year’s list of influential leaders in the water sector.”

On the 2018 list, Lienhard was joined by influential individuals from across the globe who were recognized for their leadership in water innovation for the digital age. Among the recognized achievements were: developing an innovative urban storm water management strategy in Milwaukee, Wisconsin; building a biofuel production facility for algae harvested using wastewater in Spain; producing a global hub for the big-data-informed smart water sector to exchange information; leadership of the only water utility company to be named on the World’s Most Ethical Companies list for seven years running (Northumbrian Water); and many more.

Lienhard was recognized for the specific research-based water sector innovations that have come out of his lab in the Department of Mechanical Engineering. The Lienhard Research Group at MIT focuses on developing technologies for clean water through a wide variety of approaches, among them desalination, wastewater remediation, and water recycling, all while retaining the core objective of energy efficiency and reduced environmental impact. However, his influence on water innovation extends beyond this role. As director of the Abdul Latif Jameel World Water and Food Security Lab (J-WAFS) at MIT, he leads an Institute-wide initiative to cultivate new water and food systems research across fields and disciplines. This work is catalyzing innovation through grants, sponsored research programs, and other partnerships that result in real progress toward a future where the world’s water and food needs are met with a minimal impact on the environment.

While Lienhard was the only academic on the list, three others who were also nominated have a strong relationship to academia, and specifically to MIT and J-WAFS. Patrick Decker, CEO of Xylem Inc., was recognized for the way he is driving the company into the smart water space. Xylem’s business includes industry-leading brands that are used in water infrastructure around the world. The company invests in research and innovation in order to develop and distribute new technologies that will improve the way water is used, conserved, and reused in the future. J-WAFS is a key collaborator in this effort. In 2016, Xylem became J-WAFS’ first research affiliate through a three-year agreement involving sponsored research and support for J-WAFS activities and students at MIT. The sponsored research addresses challenges related to water contaminants, energy requirements for moving water, sustainability, and data analytics. The funding is also supporting a new generation of water innovators, through sponsorship of the MIT Water Club’s Water Innovation Prize and a J-WAFS graduate student fellowship.

“The global challenges of climate change, resource scarcity, and economic development for a growing population require research-based innovations in order to find new solutions,” says Decker. “The research affiliation with MIT, through J-WAFS, is allowing us to tap into a deep well of knowledge, cutting-edge facilities, and a culture of creativity to produce solutions at the vanguard of our field. This relationship shows the power of cross-sector collaboration to scope and solve these challenges.”

Research at MIT was also critical to the deployment of an advanced water sensing system in Singapore that has since become one of the digital water management strategies for which PUB, Singapore’s national water agency, was recognized. This water sensing system began through a collaboration between PUB and the MIT Center for Environmental Sensing and Modeling. The research collaboration was initiated and led by Andrew Whittle, the Edmund K. Turner Professor in the Department of Civil and Environmental Engineering, and the technology that emerged grew into the city-scale wireless sensor network now utilized by PUB to monitor water distribution, measuring water pressure, flow, and quality to track and prevent leaks, bursts, and other water loss. Harry Seah, PUB’s chief engineering and technology officer, who was also recognized on the 2018 Top 25 list, initially encouraged the project, along with the institutional collaboration with MIT that made it possible. The technology launched a spinoff company, Visenti, which was purchased by Xylem in 2016. The sensors are now used in 25 cities worldwide.

At the top of this year’s Top 25 list is another familiar face at MIT: Carlos Cosín, CEO of Almar Water Solutions. Cosín is an agronomist and a well-known figure in the international water management community. He’s a good friend of MIT, following J-WAFS-supported research through multiple campus visits and through his company’s relationship with the Jameel organization, whose commitment to addressing the most significant problems facing mankind led to the gift that founded J-WAFS.

“MIT’s strength is creating basic research and translating it to innovations in technology that have broad societal benefits,” says Lienhard. “When our faculty and students bring this approach to the water sector, innovations abound, as is exemplified by our relationship to several of the water sector leaders recognized by the Top 25 list. The partnerships in industry and the public sector that J-WAFS has developed, and the research results that have come through the Singapore-MIT Alliance’s relationship with PUB, show the important cross-pollination between academic research and industrial development. While the nominating committee named only one of MIT’s water researchers, I am proud to see our institutional influence on water sector advancements represented here.”


February 21, 2018 | More

Robo-picker grasps and packs

Unpacking groceries is a straightforward albeit tedious task: You reach into a bag, feel around for an item, and pull it out. A quick glance will tell you what the item is and where it should be stored.

Now engineers from MIT and Princeton University have developed a robotic system that may one day lend a hand with this household chore, as well as assist in other picking and sorting tasks, from organizing products in a warehouse to clearing debris from a disaster zone.

The team’s “pick-and-place” system consists of a standard industrial robotic arm that the researchers outfitted with a custom gripper and suction cup. They developed an “object-agnostic” grasping algorithm that enables the robot to assess a bin of random objects and determine the best way to grip or suction onto an item amid the clutter, without having to know anything about the object before picking it up.

Once it has successfully grasped an item, the robot lifts it out from the bin. A set of cameras then takes images of the object from various angles, and with the help of a new image-matching algorithm the robot can compare the images of the picked object with a library of other images to find the closest match. In this way, the robot identifies the object, then stows it away in a separate bin.

In general, the robot follows a “grasp-first-then-recognize” workflow, which turns out to be an effective sequence compared to other pick-and-place technologies.
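
For readers who want a concrete picture of that workflow, the sketch below lays out the grasp-first-then-recognize loop in Python. Every name in it (propose_grasp, recognize, the toy product library) is a hypothetical stand-in for illustration, not the team’s actual software.

```python
"""A minimal sketch of the grasp-first-then-recognize workflow described above.
All names here are hypothetical stand-ins, not the researchers' actual code."""
import random


def propose_grasp(bin_view):
    """Object-agnostic step: choose a grasping behavior without identifying the object."""
    behaviors = ["suction-down", "suction-side", "grip-down", "grip-and-slide"]
    return random.choice(behaviors)  # placeholder for the learned grasp scorer


def recognize(object_views, library):
    """Perception step: match views of the now-isolated item to a labeled product library."""
    # Placeholder matcher: pick the library entry whose toy feature is closest.
    query = sum(object_views)
    return min(library, key=lambda item: abs(item["feature"] - query))["name"]


def pick_and_place(bin_view, object_views, library):
    grasp = propose_grasp(bin_view)            # 1. grasp first, identity unknown
    label = recognize(object_views, library)   # 2. recognize once out of the clutter
    return grasp, label                        # 3. stow according to the label


if __name__ == "__main__":
    library = [{"name": "duct tape", "feature": 3.0},
               {"name": "masking tape", "feature": 7.0}]
    print(pick_and_place(None, object_views=[1.0, 2.5], library=library))
```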

“This can be applied to warehouse sorting, but also may be used to pick things from your kitchen cabinet or clear debris after an accident. There are many situations where picking technologies could have an impact,” says Alberto Rodriguez, the Walter Henry Gale Career Development Professor in Mechanical Engineering at MIT.

Rodriguez and his colleagues at MIT and Princeton will present a paper detailing their system at the IEEE International Conference on Robotics and Automation, in May.

Building a library of successes and failures

While pick-and-place technologies may have many uses, existing systems are typically designed to function only in tightly controlled environments.

Today, most industrial picking robots are designed for one specific, repetitive task, such as gripping a car part off an assembly line, always in the same, carefully calibrated orientation. Rodriguez, however, is working to design robots that are more flexible, adaptable, and intelligent pickers for unstructured settings such as retail warehouses, where a picker may encounter and have to sort hundreds, if not thousands, of novel objects each day, often amid dense clutter.

The team’s design is based on two general operations: picking — the act of successfully grasping an object, and perceiving — the ability to recognize and classify an object, once grasped.

The researchers trained the robotic arm to pick novel objects out of a cluttered bin using any one of four main grasping behaviors: suctioning onto an object, either vertically or from the side; gripping the object vertically like the claw in an arcade game; or, for objects that lie flush against a wall, gripping vertically and then using a flexible spatula to slide between the object and the wall.

Rodriguez and his team showed the robot images of bins cluttered with objects, captured from the robot’s vantage point. They then showed the robot which objects were graspable, with which of the four main grasping behaviors, and which were not, marking each example as a success or failure. They did this for hundreds of examples, and over time, the researchers built up a library of picking successes and failures. They then incorporated this library into a “deep neural network” — a class of learning algorithms that enables the robot to match the current problem it faces with a successful outcome from the past, based on its library of successes and failures.
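
A tiny numpy example can make the training idea concrete. The real system learns a deep neural network over camera images; the sketch below, using made-up feature vectors and a plain logistic model, only illustrates how a library of labeled successes and failures can be turned into a scorer that predicts whether a grasp will work.

```python
# Illustrative only: learning a grasp-success scorer from labeled outcomes.
# The features, labels, and model here are invented for the sketch; the actual
# system trains a deep neural network on images of the cluttered bin.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical library: one feature vector per attempted grasp, labeled
# 1 for success and 0 for failure.
X = rng.normal(size=(200, 5))                    # e.g. clutter/geometry statistics
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic "graspable" rule

w = np.zeros(5)
for _ in range(500):                             # plain logistic regression
    p = 1.0 / (1.0 + np.exp(-X @ w))             # predicted success probability
    w -= 0.1 * X.T @ (p - y) / len(y)            # gradient step

new_scene = rng.normal(size=5)
print("predicted success probability:", 1.0 / (1.0 + np.exp(-(new_scene @ w))))
```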

“We developed a system where, just by looking at a tote filled with objects, the robot knew how to predict which ones were graspable or suctionable, and which configuration of these picking behaviors was likely to be successful,” Rodriguez says. “Once it was in the gripper, the object was much easier to recognize, without all the clutter.”

From pixels to labels

The researchers developed a perception system in a similar manner, enabling the robot to recognize and classify an object once it’s been successfully grasped.

To do so, they first assembled a library of product images taken from online sources such as retailer websites. They labeled each image with the correct identification — for instance, duct tape versus masking tape — and then developed another learning algorithm to relate the pixels in a given image to the correct label for a given object.

“We’re comparing things that, for humans, may be very easy to identify as the same, but in reality, as pixels, they could look significantly different,” Rodriguez says. “We make sure that this algorithm gets it right for these training examples. Then the hope is that we’ve given it enough training examples that, when we give it a new object, it will also predict the correct label.”
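
The matching step can be pictured as a nearest-neighbor search over image features, as sketched below. The feature vectors and labels are invented for illustration; the actual system learns its image representation from the labeled product photos rather than using random vectors.

```python
# Illustrative nearest-match classification: compare a feature vector computed
# from the picked object's views against a labeled library and return the
# closest entry. Features and labels here are fabricated for the sketch.
import numpy as np


def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def classify(query_feature, library_features, library_labels):
    scores = [cosine_similarity(query_feature, f) for f in library_features]
    return library_labels[int(np.argmax(scores))]


rng = np.random.default_rng(1)
library_labels = ["duct tape", "masking tape", "scissors"]
library_features = [rng.normal(size=64) for _ in library_labels]

# A "view" of item 1, perturbed by noise, should still match its library entry.
query = library_features[1] + 0.1 * rng.normal(size=64)
print(classify(query, library_features, library_labels))  # expected: masking tape
```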

Last July, the team packed up the 2-ton robot and shipped it to Japan, where, a month later, they reassembled it to participate in the Amazon Robotics Challenge, a yearly competition sponsored by the online megaretailer to encourage innovations in warehouse technology. Rodriguez’s team was one of 16 taking part in a competition to pick and stow objects from a cluttered bin.

In the end, the team’s robot had a 54 percent success rate in picking objects up using suction and a 75 percent success rate using grasping, and was able to recognize novel objects with 100 percent accuracy. The robot also stowed all 20 objects within the allotted time.

For his work, Rodriguez was recently granted an Amazon Research Award and will be working with the company to further improve pick-and-place technology — foremost, its speed and reactivity.

“Picking in unstructured environments is not reliable unless you add some level of reactiveness,” Rodriguez says. “When humans pick, we sort of do small adjustments as we are picking. Figuring out how to do this more responsive picking, I think, is one of the key technologies we’re interested in.”

The team has already taken some steps toward this goal by adding tactile sensors to the robot’s gripper and running the system through a new training regime.

“The gripper now has tactile sensors, and we’ve enabled a system where the robot spends all day continuously picking things from one place to another. It’s capturing information about when it succeeds and fails, and how it feels to pick up, or fails to pick up objects,” Rodriguez says. “Hopefully it will use that information to start bringing that reactiveness to grasping.”

This research was sponsored in part by ABB Inc., Mathworks, and Amazon.


February 20, 2018 | More

Houston gives a Texas-sized welcome to the MIT Better World Tour

A series of MIT Better World events has shined a spotlight on a number of MIT communities across the globe and the unique strengths that each region shares with the Institute and its mission to build a better world. On Jan. 19, these connections had special resonance when more than 400 alumni and friends gathered at the Hobby Center for the Performing Arts in Houston, for the largest-ever MIT event in Texas.

Greg Turner ’74, MArch ’77, president and founder of Turner Duran Architects and recipient of the MIT Alumni Association Bronze Beaver Award, welcomed attendees and introduced MIT President L. Rafael Reif.

“Last August, the whole world had its eyes on Houston,” said Reif. “Since then, this city has demonstrated exceptional resilience, creativity, and strength.” Like Houston, Reif noted, MIT relies on courage to explore new ideas, particularly in the face of daunting challenges. Other guest speakers in the evening’s program included MIT faculty members Dina Katabi and Paulo Lozano, and Olympic gold-medalist and Texas native Jordan Malone.

Dina Katabi SM ’99, PhD ’03 traces the start of her career in wireless technology, machine learning, and artificial intelligence to a childhood obsession with the “Star Wars” films. “I really wanted to feel the force,” she said, “[and] I have continued to search for that force here at MIT as a student and … faculty member.” Katabi, who is originally from Damascus, Syria, is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and a 2013 MacArthur Fellow. She and her colleagues developed Emerald, a groundbreaking tool for wireless monitoring of physiological signals like respiration, heart rate, medication response, and sleep quality. Katabi believes that Emerald will be a powerful force in health care and credits MIT’s unique culture for enabling her to develop these “outside-the-box” ideas.

The forces that propel the work of Paulo Lozano SM ’98, PhD ’03 are the revolutionary micropropulsion systems for satellites that he develops. Lozano studied space propulsion as an MIT student and is now the director of MIT’s Space Propulsion Laboratory. He also teaches space and rocket propulsion, fluid mechanics, and plasma physics to undergraduates, work for which he received MIT’s Outstanding Faculty UROP Mentor Award. Lozano hopes that his work will accelerate discovery by enabling more countries, like his native Mexico, to engage in space exploration. Lozano sees a kinship between MIT and space: “Like space, MIT doesn’t belong to one country, it belongs to the whole world.”

For Jordan Malone, it is the forces of acceleration — up to 3.2 Gs in a turn — that have shaped his career as an Olympic speed skater and MIT mechanical engineering major. Malone took up speed skating to boost his chances of getting into MIT and, along the way, won bronze and silver Olympic medals (at the Vancouver and Sochi games, respectively) and dozens of other championship titles. Today, he pairs his passion for skating with his engineering prowess to improve the technology and tools of the sport. At MIT, he said, “it takes everything you’ve got just to keep up,” and yet “we are all trying to make an impact. It’s impossible not to have that attitude when you’re exposed to the momentum that is MIT.”

President Reif returned to the stage to thank Malone, Lozano, and Katabi for sharing a “sample of the future” created by the people of MIT, and he encouraged the MIT community of Houston to support the MIT Campaign for a Better World. With their help, he said, MIT can realize its vision of “a future where prosperity is measured not in dollars alone,” but in the currencies of art and culture, innovation and technology, “and the richness of human understanding.”

The Better World tour continues with events on Feb. 20 in Seattle, Washington, and March 8 in Miami, Florida. To learn more or reserve your seat at either event, visit the MIT Better World Events webpage.


February 19, 2018 | More

Four MIT faculty elected to the National Academy of Engineering for 2018

Four MIT faculty are among the 83 new members and 16 foreign associates elected to the National Academy of Engineering.

Election to the National Academy of Engineering is among the highest professional distinctions accorded to an engineer. Academy membership honors those who have made outstanding contributions to “engineering research, practice, or education, including, where appropriate, significant contributions to the engineering literature,” and to “the pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education.”

The four elected this year include:

Lallit Anand, the Warren and Towneley Rohsenow Professor of Mechanical Engineering, for contributions to the development of plasticity for engineering technology, involving theory, experiment, and computation.

Angela Belcher, the James Mason Crafts Professor of Biological Engineering and Materials Science and Engineering, for development of novel genetic evolution methods for the generation of new materials and devices.

Stephen Graves, the Abraham J. Siegel Professor of Management Science and a professor of engineering systems and mechanical engineering in the Sloan School of Management, for contributions to the modeling and analysis of manufacturing systems and supply chains.

Yang Shao-Horn, the Keck Professor of Energy, from the Department of Mechanical Engineering and Department of Materials Science and Engineering, for contributions to design principles for catalytic activity for oxygen electrocatalysis for electrochemical energy storage for clean energy.

“My warm congratulations to the four members of our faculty inducted into the National Academy of Engineering for their outstanding contributions as leaders in their fields,” says Anantha Chandrakasan, the dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “It is wonderful to see the contributions of our engineering faculty recognized at such a high level.”

Including this year’s inductees, 146 current MIT faculty and staff are members of the National Academy of Engineering.


February 16, 2018 | More

Researchers advance CRISPR-based tool for diagnosing disease

The team that first unveiled the rapid, inexpensive, highly sensitive CRISPR-based diagnostic tool called SHERLOCK has greatly enhanced the tool’s power, and has developed a miniature paper test that allows results to be seen with the naked eye — without the need for expensive equipment.

The SHERLOCK team developed a simple paper strip to display test results for a single genetic signature, borrowing from the visual cues common in pregnancy tests. After the strip is dipped into a processed sample, a line appears, indicating whether the target molecule was detected.

This new feature helps pave the way for field use, such as during an outbreak. The team has also increased the sensitivity of SHERLOCK and added the capacity to accurately quantify the amount of target in a sample and to test for multiple targets at once. Altogether, these advancements accelerate SHERLOCK’s ability to quickly and precisely detect genetic signatures — including pathogens and tumor DNA — in samples.

Described today in Science, the innovations build on the team’s earlier version of SHERLOCK (shorthand for Specific High-sensitivity Enzymatic Reporter unLOCKing) and add to a growing field of research that harnesses CRISPR systems for uses beyond gene editing. The work, led by researchers from the Broad Institute of MIT and Harvard and from MIT, has the potential for a transformative effect on research and global public health.

“SHERLOCK provides an inexpensive, easy-to-use, and sensitive diagnostic method for detecting nucleic acid material — and that can mean a virus, tumor DNA, and many other targets,” said senior author Feng Zhang, a core institute member of the Broad Institute, an investigator at the McGovern Institute, and the James and Patricia Poitras ’63 Professor in Neuroscience and associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering at MIT. “The SHERLOCK improvements now give us even more diagnostic information and put us closer to a tool that can be deployed in real-world applications.”

The researchers previously showcased SHERLOCK’s utility for a range of applications. In the new study, the team uses SHERLOCK to detect cell-free tumor DNA in blood samples from lung cancer patients and to detect synthetic Zika and Dengue virus simultaneously, in addition to other demonstrations.

Clear results on a paper strip

“The new paper readout for SHERLOCK lets you see whether your target was present in the sample, without instrumentation,” said co-first author Jonathan Gootenberg, a Harvard graduate student in Zhang’s lab as well as the lab of Broad core institute member Aviv Regev. “This moves us much closer to a field-ready diagnostic.”

The team envisions a wide range of uses for SHERLOCK, thanks to its versatility in nucleic acid target detection. “The technology demonstrates potential for many health care applications, including diagnosing infections in patients and detecting mutations that confer drug resistance or cause cancer, but it can also be used for industrial and agricultural applications where monitoring steps along the supply chain can reduce waste and improve safety,” added Zhang.

At the core of SHERLOCK’s success is a CRISPR-associated protein called Cas13, which can be programmed to bind to a specific piece of RNA. Cas13’s target can be any genetic sequence, including viral genomes, genes that confer antibiotic resistance in bacteria, or mutations that cause cancer. In certain circumstances, once Cas13 locates and cuts its specified target, the enzyme goes into overdrive, indiscriminately cutting other RNA nearby. To create SHERLOCK, the team harnessed this “off-target” activity and turned it to their advantage, engineering the system to be compatible with both DNA and RNA.

SHERLOCK’s diagnostic potential relies on additional strands of synthetic RNA that are used to create a signal after being cleaved. Cas13 will chop up this RNA after it hits its original target, releasing the signaling molecule, which results in a readout that indicates the presence or absence of the target.

Multiple targets and increased sensitivity

The SHERLOCK platform can now be adapted to test for multiple targets. SHERLOCK initially could only detect one nucleic acid sequence at a time, but now one analysis can give fluorescent signals for up to four different targets at once — meaning less sample is required to run through diagnostic panels. For example, the new version of SHERLOCK can determine in a single reaction whether a sample contains Zika or dengue virus particles, which both cause similar symptoms in patients. The platform uses Cas13 and Cas12a (previously known as Cpf1) enzymes from different species of bacteria to generate the additional signals.

SHERLOCK’s second iteration also uses an additional CRISPR-associated enzyme to amplify its detection signal, making the tool more sensitive than its predecessor. “With the original SHERLOCK, we were detecting a single molecule in a microliter, but now we can achieve 100-fold greater sensitivity,” explained co-first author Omar Abudayyeh, an MIT graduate student in Zhang’s lab at Broad. “That’s especially important for applications like detecting cell-free tumor DNA in blood samples, where the concentration of your target might be extremely low. This next generation of features help make SHERLOCK a more precise system.”

The authors have made their reagents available to the academic community through Addgene and their software tools can be accessed via the Zhang lab website and GitHub.

This study was supported in part by the National Institutes of Health and the Defense Threat Reduction Agency.


February 15, 2018 | More

System draws power from daily temperature swings

Thermoelectric devices, which can generate power when one side of the device is a different temperature from the other, have been the subject of much research in recent years. Now, a team at MIT has come up with a novel way to convert temperature fluctuations into electrical power. Instead of requiring two different temperature inputs at the same time, the new system takes advantage of the swings in ambient temperature that occur during the day-night cycle.

The new system, called a thermal resonator, could enable continuous, years-long operation of remote sensing systems, for example, without requiring other power sources or batteries, the researchers say.

The findings are being reported in the journal Nature Communications, in a paper by graduate student Anton Cottrill, Carbon P. Dubbs Professor of Chemical Engineering Michael Strano, and seven others in MIT’s Department of Chemical Engineering.

“We basically invented this concept out of whole cloth,” Strano says. “We’ve built the first thermal resonator. It’s something that can sit on a desk and generate energy out of what seems like nothing. We are surrounded by temperature fluctuations of all different frequencies all of the time. These are an untapped source of energy.”

While the power levels generated by the new system so far are modest, the advantage of the thermal resonator is that it does not need direct sunlight; it generates energy from ambient temperature changes, even in the shade. That means it is unaffected by short-term changes in cloud cover, wind, or other environmental conditions, and can be located anywhere that’s convenient — even underneath a solar panel, in perpetual shadow, where it could also make the solar panel more efficient by drawing away waste heat, the researchers say.

The thermal resonator was shown to outperform an identically sized, commercial pyroelectric material — an established method for converting temperature fluctuations to electricity — by a factor of more than three in terms of power per area, according to Cottrill.

The researchers realized that to produce power from temperature cycles, they needed a material that is optimized for a little-recognized characteristic called thermal effusivity — a property that describes how readily the material can draw heat from its surroundings or release it. Thermal effusivity combines the properties of thermal conduction (how rapidly heat can propagate through a material) and thermal capacity (how much heat can be stored in a given volume of material). In most materials, if one of these properties is high, the other tends to be low. Ceramics, for example, have high thermal capacity but low conduction.
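
In its standard engineering definition, thermal effusivity is the square root of the product of these two properties:

e = \sqrt{k \, \rho c_p}

where k is the thermal conductivity, ρ the density, and c_p the specific heat capacity (so ρc_p is the volumetric heat capacity). The formula makes the trade-off described above explicit: a high-effusivity material needs both factors under the square root to be large at the same time.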

To get around this, the team created a carefully tailored combination of materials. The basic structure is a metal foam, made of copper or nickel, which is then coated with a layer of graphene to provide even greater thermal conductivity. Then, the foam is infused with a kind of wax called octadecane, a phase-change material, which changes between solid and liquid within a particular range of temperatures chosen for a given application.

A sample of the material made to test the concept showed that, simply in response to a 10-degree-Celsius temperature difference between night and day, it produced 350 millivolts of potential and 1.3 milliwatts of power — enough to power simple, small environmental sensors or communications systems.
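
As a purely illustrative back-of-the-envelope estimate (assuming, only for this calculation, that the quoted 1.3 milliwatts could be sustained around the clock, which the article does not claim), the daily energy harvest would be on the order of a hundred joules:

```python
# Back-of-the-envelope only: the sustained-output assumption is ours, not the paper's.
power_w = 1.3e-3                         # reported output of the test sample, in watts
seconds_per_day = 24 * 3600
energy_j = power_w * seconds_per_day     # joules per day if that output were continuous
print(f"{energy_j:.0f} J/day  (~{energy_j / 3.6:.0f} mWh)")   # ~112 J, ~31 mWh
```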

“The phase-change material stores the heat,” says Cottrill, the study’s lead author, “and the graphene gives you very fast conduction” when it comes time to use that heat to produce an electric current.

Essentially, Strano explains, one side of the device captures heat, which then slowly radiates through to the other side. One side always lags behind the other as the system tries to reach equilibrium. This perpetual difference between the two sides can then be harvested through conventional thermoelectrics. The combination of the three materials — metal foam, graphene, and octadecane — makes it “the highest thermal effusivity material in the literature to date,” Strano says.

While the initial testing was done using the 24-hour daily cycle of ambient air temperature, tuning the properties of the material could make it possible to harvest other kinds of temperature cycles, such as the heat from the on-and-off cycling of motors in a refrigerator, or of machinery in industrial plants.

“We’re surrounded by temperature variations and fluctuations, but they haven’t been well-characterized in the environment,” Strano says. This is partly because there was no known way to harness them.

Other approaches have been used to try to draw power from thermal cycles, with pyroelectric devices, for example, but the new system is the first that can be tuned to respond to specific periods of temperature variations, such as the diurnal cycle, the researchers say.

These temperature variations are “untapped energy,” says Cottrill, and could be a complementary energy source in a hybrid system that, by combining multiple pathways for producing power, could keep working even if individual components failed. The research was partly funded by a grant from Saudi Arabia’s King Abdullah University of Science and Technology (KAUST), which hopes to use the system as a way of powering networks of sensors that monitor conditions at oil and gas drilling fields, for example.

“They want orthogonal energy sources,” Cottrill says — that is, ones that are entirely independent of each other, such as fossil fuel generators, solar panels, and this new thermal-cycle power device. Thus, “if one part fails,” for example if solar panels are left in darkness by a sandstorm, “you’ll have this additional mechanism to give power, even if it’s just enough to send out an emergency message.”

Such systems could also provide low-power but long-lasting energy sources for landers or rovers exploring remote locations, including other moons and planets, says Volodymyr Koman, an MIT postdoc and co-author of the new study. For such uses, much of the system could be made from local materials rather than having to be premade, he says.

This approach “is a novel development with a great future,” says Kourosh Kalantar-zadeh, a distinguished professor of engineering at RMIT University in Melbourne, Australia, who was not involved in this work. “It can potentially play an unexpected role in complementary energy harvesting units.”

He adds, “To compete with other energy harvesting technologies, always higher voltages and powers are demanded. However, I personally feel that it is quite possible to gain a lot more out of this by investing more into the concept. … It is an attractive technology which will be potentially followed by many others in the near future.”

The team also included MIT chemical engineering graduate students Albert Tianxiang Liu, Amir Kaplan, and Sayalee Mahajan; visiting scientist Yuichiro Kunai; postdoc Pingwei Liu; and undergraduate Aubrey Toland. It was supported by the Office of Naval Research, KAUST, and the Swiss National Science Foundation.


February 15, 2018 | More

A microbial approach to agriculture

The ability of animals to digest plant material is facilitated by tiny microbes in the gut that can break down complex carbohydrates. This dependency on microbes is most extreme in herbivores such as cows, which have developed a symbiotic relationship with their gut microbes. Cow microbiomes — the assemblage of microbes that inhabit their digestive system — have tremendous importance for agriculture, as they mediate the conversion of solar energy, stored in plant tissues, into animal protein consumed by humans across the globe.

Engineering these microbial communities to increase the efficiency of the plant-to-biomass conversion pipeline is thus a major challenge for microbiologists and engineers, and one that puts to test the still-limited understanding of how to control complex ecological systems such as gut-associated microbiomes.

A new special subject, 1.S992 (Agricultural Microbial Ecology) from the Department of Civil and Environmental Engineering (CEE), took a group of undergraduate students to Israel over MIT’s Independent Activities Period to learn the cutting-edge techniques and methods used to study these microbial communities, and to explore the frontier of microbiome engineering alongside graduate students from Ben-Gurion University of the Negev (BGU).

“Professor Itzhak Mizrahi [of BGU] studies the cow rumen [one chamber of a cow stomach], and at MIT we study the ocean, but we are fundamentally studying processes that are very similar. We both study microbes degrading complex materials; in the ocean it’s algal cell walls, and in the rumen it’s plant fibers,” explains CEE assistant professor Otto X. Cordero, a microbiologist who studies micro-scale ecology and who led the special subject with Mizrahi. “The goal of the class is to learn how to explore the problem of how to rationally design a microbial consortium. What this means is we are trying to understand, for different species and organisms, what compounds are produced when they degrade one resource and how other organisms can utilize those compounds. We want to learn how to predict the functioning of an interconnected metabolic system.”

To help with their research, Cordero and Mizrahi enlisted the help of students from their respective universities. For two weeks, MIT students worked with students from BGU to reverse-engineer microbial communities that inhabit the cow rumen. It is in the rumen that microbial communities have the special ability to break down recalcitrant plant materials and turn them into energy. The students thus designed different microbial ecosystems that could potentially be transplanted into an animal to make it degrade plant material more efficiently — a feature with considerable agricultural benefits.

“By seeking to understand the collective functions of the microbial system and interactions within communities, we are trying to determine the factors that control the function and efficiency of the cow rumen,” Cordero explains. “For example, we could determine how much of the excreted product goes to biomass, the animal’s weight, versus going to gases such as methane, a potent greenhouse gas.”

Agricultural Microbial Ecology was designed to be a hands-on, lab-based program to give students a more complete understanding of the technology and methods used to analyze the microbiomes and to expose students to a new way of thinking about microbial communities.

“We specifically aimed to provide the students with conceptual and technical understanding of how microbes, which are the building blocks of the microbiome, could be isolated from their environments, studied for their characteristics, and reassembled again in a desired manner,” Mizrahi explains.

In the lab, the students analyzed the preferred resources and excretion products of more than 30 different relevant species from the rumen and human gut. Using this information, the students designed communities that could consume certain resources and convert them into desired compounds, while minimizing the production of others.

“We got to learn how to assemble a microbial community from the bottom up, and while it’s certainly fast-paced, it was really cool to be able to experience every step of the process and have ownership over almost every aspect of our microbial communities at the end,” says Mikayla Murphy, a CEE senior who participated in the class.

At BGU, the students used cutting-edge tools and techniques, including advanced microscopy that allowed the students to take images of the microbes, gas chromatography-mass spectrometry to identify the metabolites produced by rumen microbes, and genomics to predict the presence of various metabolic pathways.

These efforts provide insight into how these microbial communities can be manipulated. Biotechnology companies are trying to do similar work, but many are essentially mixing communities at random, Cordero explains. “The challenge is to understand the logic behind these processes,” he says.

After two weeks in Israel, the students bonded inside and outside of the lab. “It was amazing to see how science can make students from different backgrounds and cultures engaged towards a specific goal,” Mizrahi says.

In addition to experiencing Israeli culture and college life, the groups from MIT and BGU also took advantage of local attractions and historical sites. During the weekends, the students embarked on adventures like visiting and swimming (and floating) in the Dead Sea, hiking in the desert in the Negev region of Southern Israel, and spending time in Jerusalem.

“My biggest takeaway [from the subject] is that knowledge of microbial ecology can be a really powerful tool, and thus it will probably be an important field of study for many years to come,” Murphy says. “Although microbes are small, they have a large impact on a lot of our biggest problems, such as climate change and food production.”

The subject was made possible in part by MIT International Science and Technology Initiatives, the National Science Foundation, and the United States–Israel Binational Science Foundation.


February 14, 2018 | More

MIT alumnus to compete in Winter Olympics

AJ Edelman ’14 will compete in the men’s skeleton for Israel during the 2018 Winter Olympics on Wednesday, Feb. 14, at 8:00 p.m. EST. Watch the men’s skeleton races live online at the NBC Olympic Channel.

A club hockey player at MIT and former competitive body builder, Edelman caught the Olympic bug while watching the Olympic bobsled trials in the Burton 2 lounge in 2013. Learning more about the bobsled led to an interest in the specialized skeleton race, and after a test run in 2014 he became hooked on the sport.

“I wanted to keep playing sports after I left MIT,” Edelman says. “Skeleton is an eye-catching sport and seemed like the challenge I was looking for.”

Edelman, who grew up in a Modern Orthodox household near Boston, became an Israeli citizen in 2016. He is the first Israeli athlete to compete in the Olympic skeleton and is part of a small 10-athlete contingent representing the country in the games.

“I spent a summer in Israel in 2006 and resolved to make it my home one day,” he says. “I wanted to create an impact on my Jewish community. I realized the biggest impact I could make was to walk into the games wearing the Star of David.”

Edelman committed full-time to Olympic racing in 2015. He left his job as a product manager for Oracle in San Francisco and moved to Calgary, where he trained on the skeleton track used during the 1988 winter games.

“I dropped everything to focus on training,” he says. “I knew there was no other way to do it. If I wasn’t training, I watched skeleton videos on YouTube to build neuropathways on what to do in specific situations.”

After a few months of training, a 10-year plan to qualify for the games was shortened to four years — a remarkable feat considering Edelman did not have a coach.

“Training without a coach reminded me of those late nights at MIT,” Edelman says. “If you had an assignment due the next morning, it was up to you to do it.”

Edelman officially qualified for the games in January after medaling in the final two races of the 2017-18 World Cup skeleton season.

“Once I got the news, I was thinking about what happens next,” he says. “Just like at MIT, the goal is not the goal. The goal is really to prepare for the next step.”

Despite being four years and nearly 8,000 miles removed from Cambridge, his MIT connection remains strong. His father is Institute for Medical Engineering and Science Professor Elazer Edelman ’78, SM ’79, PhD ’84, and his brother, Austin, is a first-year student.

“Representing Israel in the Olympics is the greatest honor of my life,” he says. “Winning a medal was not my objective when I started. It was for kids to see that you can take a step on an impossible journey and that you can accomplish your goal.”

A version of this article originally appeared on the Slice of MIT blog, which features more coverage of MIT alumni and the Olympics, including a history of MIT alumni in the Olympic Games.


February 14, 2018 | More

Neural networks everywhere

Most recent advances in artificial-intelligence systems such as speech- or face-recognition programs have come courtesy of neural networks, densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training data.

But neural nets are large, and their computations are energy intensive, so they’re not very practical for handheld devices. Most smartphone apps that rely on neural nets simply upload data to internet servers, which process it and send the results back to the phone.

Now, MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption 94 to 95 percent. That could make it practical to run neural networks locally on smartphones or even to embed them in household appliances.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” says Avishek Biswas, an MIT graduate student in electrical engineering and computer science, who led the new chip’s development.

“Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption. But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?”

Biswas and his thesis advisor, Anantha Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, describe the new chip in a paper that Biswas is presenting this week at the International Solid State Circuits Conference.

Back to analog

Neural networks are typically arranged into layers. A single processing node in one layer of the network will generally receive data from several nodes in the layer below and pass data to several nodes in the layer above. Each connection between nodes has its own “weight,” which indicates how large a role the output of one node will play in the computation performed by the next. Training the network is a matter of setting those weights.

A node receiving data from multiple nodes in the layer below will multiply each input by the weight of the corresponding connection and sum the results. That operation — the summation of multiplications — is the definition of a dot product. If the dot product exceeds some threshold value, the node will transmit it to nodes in the next layer, over connections with their own weights.
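
The paragraph above is easy to restate in a few lines of numpy; the numbers below are arbitrary and only illustrate the per-node arithmetic, not anything specific to the MIT chip.

```python
# One node's computation: a dot product of inputs with connection weights,
# followed by a simple threshold. Values are arbitrary, for illustration only.
import numpy as np

inputs = np.array([0.8, 0.1, 0.4])      # outputs from nodes in the layer below
weights = np.array([0.5, -1.2, 0.7])    # one learned weight per connection
threshold = 0.0

activation = inputs @ weights           # the dot product: sum of multiplications
output = activation if activation > threshold else 0.0
print(activation, output)
```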

A neural net is an abstraction: The “nodes” are just weights stored in a computer’s memory. Calculating a dot product usually involves fetching a weight from memory, fetching the associated data item, multiplying the two, storing the result somewhere, and then repeating the operation for every input to a node. Given that a neural net will have thousands or even millions of nodes, that’s a lot of data to move around.

But that sequence of operations is just a digital approximation of what happens in the brain, where signals traveling along multiple neurons meet at a “synapse,” or a gap between bundles of neurons. The neurons’ firing rates and the electrochemical signals that cross the synapse correspond to the data values and weights. The MIT researchers’ new chip improves efficiency by replicating the brain more faithfully.

In the chip, a node’s input values are converted into electrical voltages and then multiplied by the appropriate weights. Only the combined voltages are converted back into a digital representation and stored for further processing.

The chip can thus calculate dot products for multiple nodes — 16 at a time, in the prototype — in a single step, instead of shuttling between a processor and memory for every computation.

All or nothing

One of the keys to the system is that all the weights are either 1 or -1. That means they can be implemented within the memory itself as simple switches that either close a circuit or leave it open. Recent theoretical work suggests that neural nets trained with only two weight values should lose little accuracy — somewhere between 1 and 2 percent.
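
The appeal of binary weights is easy to see in code: with weights restricted to +1 or -1, the dot product needs no multiplications at all, just additions and subtractions, which is what lets each weight act as a simple switch in memory. The sketch below is illustrative and is not a model of the chip's analog circuitry.

```python
# With +1/-1 weights, a dot product reduces to adding some inputs and
# subtracting the others -- no multiplier needed. Illustration only.
import numpy as np

inputs = np.array([0.8, 0.1, 0.4, 0.3])
binary_weights = np.array([1, -1, 1, -1])   # each weight is just a sign

full_dot = inputs @ binary_weights          # ordinary multiply-accumulate
no_multiply = inputs[binary_weights == 1].sum() - inputs[binary_weights == -1].sum()

assert np.isclose(full_dot, no_multiply)
print(full_dot)                             # 0.8 - 0.1 + 0.4 - 0.3 = 0.8
```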

Biswas and Chandrakasan’s research bears that prediction out. In experiments, they ran the full implementation of a neural network on a conventional computer and the binary-weight equivalent on their chip. Their chip’s results were generally within 2 to 3 percent of the conventional network’s.

“This is a promising real-world demonstration of SRAM-based in-memory analog computing for deep-learning applications,” says Dario Gil, vice president of artificial intelligence at IBM. “The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays. It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT [the internet of things] in the future.”


February 14, 2018 | More