News and Research
Dora Aldama: Drawn to multidimensional problems

Dora Aldama (LGO ’18) discusses her passion for multidimensional problems and her internship to improve Boeing’s production line.

No more blackouts?

Konstantin Turitsyn, associate professor of mechanical engineering and LGO thesis advisor, led a team to develop a method for improving the stability of microgrids, which many rural and some urban communities are turning to as an alternative source of electricity.

Today, more than 1.3 billion people are living without regular access to power, including more than 300 million in India and 600 million in sub-Saharan Africa. In these and other developing countries, main power grids, particularly in rural regions, are remote and often unreliable.

Increasingly, many rural and some urban communities are turning to microgrids as an alternative source of electricity. Microgrids are small-scale power systems that supply local energy, typically in the form of solar power, to localized consumers, such as individual households or small villages.

However, the smaller a power system, the more vulnerable it is to outages. Small disturbances, such as plugging in a certain appliance or one too many phone chargers, can cause a microgrid to destabilize and short out.

For this reason, engineers have typically designed microgrids in simple, centralized configurations with thick cables and large capacitors. This limits the amount of power that any appliance can draw from a network — a conservative measure that increases a microgrid’s reliability but comes with a significant cost.

Now engineers at MIT have developed a method for guaranteeing the stability of any microgrid that runs on direct current, or DC — an architecture that was originally proposed as part of the MIT Tata Center’s uLink project. The researchers found they can ensure a microgrid’s stability by installing capacitors, which are devices that even out spikes and dips in voltage, of a particular size, or capacitance.

The team calculated the minimum capacitance on a particular load that is required to maintain a microgrid’s stability, given the total load, or power a community consumes. Importantly, this calculation does not rely on a network’s particular configuration of transmission lines and power sources. This means that microgrid designers do not have to start from scratch in designing power systems for each new community.
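The paper's network-independent bound is not reproduced in this article. As a rough illustration of the kind of calculation involved, a classic textbook small-signal criterion for a constant-power load P fed at voltage V through a line with resistance R and inductance L requires the load-side capacitance to exceed LP/RV². The sketch below uses that textbook criterion, not the authors' result:

```python
def min_capacitance(load_power_w, voltage_v, line_resistance_ohm, line_inductance_h):
    """Textbook small-signal stability bound for a constant-power load
    fed through an R-L line: C must exceed L*P / (R*V^2).
    Illustrative only -- not the network-independent bound from the paper."""
    return line_inductance_h * load_power_w / (line_resistance_ohm * voltage_v ** 2)

# Example: a 200 W load on a 48 V DC microgrid, fed over a line with
# 0.5 ohm resistance and 10 microhenries of inductance.
c_min = min_capacitance(200, 48, 0.5, 10e-6)
print(f"Minimum load capacitance: {c_min * 1e6:.1f} uF")
```

Note that doubling the load power doubles the required capacitance, which matches the intuition that heavier loads need more voltage buffering.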

Instead, the researchers say this microgrid design process can be performed once to develop, for instance, power system “kits”: sets of modular power sources, loads, and lines that can be produced in bulk. As long as the load units include capacitors of the appropriate size, the system is guaranteed to be stable, no matter how the individual components are connected.

The researchers say such a modular design may be easily reconfigured for changing needs, such as additional households joining a community’s existing microgrid.

“What we propose is this concept of ad hoc microgrids: microgrids that can be created without any preplanning and can operate without any oversight. You can take different components, interconnect them in any way that’s suitable for you, and it is guaranteed to work,” says Konstantin Turitsyn, associate professor of mechanical engineering at MIT. “In the end, it is a step toward lower-cost microgrids that can provide some guaranteed level of reliability and security.”

The team’s results appear online in the IEEE journal Control Systems Letters, with graduate student Kathleen Cavanagh and Julia Belk ’17.

Returning to normal operations

Cavanagh says the team’s work sought to meet one central challenge in microgrid design: “What if we don’t know the network in advance and don’t know which village a microgrid will be deployed to? Can we design components in such a way that, no matter how people interconnect them, they will still work?”

The researchers looked for ways to constrain the dimensions of a microgrid’s main components — transmission lines, power sources, and loads, or power-consuming elements — in a way that guarantees a system’s overall stability without depending on the particular layout of the network.

To do so, they looked to Brayton-Moser potential theory — a general mathematical theory developed in the 1960s that characterizes the dynamics of the flow of energy within a system comprising various physical and interconnected components, such as in nonlinear circuits.

“Here we applied this theory to systems whose main goal is transfer of power, rather than to perform any logical operations,” Turitsyn says.

The team applied the theory to a simple yet realistic representation of a microgrid. This enabled the researchers to look at the disturbances caused when there was a variation in the loading, such as when a cell phone was plugged into its charger or a fan was turned off. They showed that the worst-case configuration is a simple network comprising a source connected to a load. The identification of this simple configuration allowed them to remove any dependence on a specific network configuration or topology.

“This theory was useful to prove that, for high-enough capacitance, a microgrid’s voltage will not go to critically low levels, and the system will bounce back and continue normal operations,” Turitsyn says.

Blueprint for power

From their calculations, the team developed a framework that relates a microgrid’s overall power requirements, the length of its transmission lines, and its power demands to the specific capacitor size required to keep the system stable.

“Ensuring that this simple network is stable guarantees that all other networks with the same line length or smaller are also stable,” Turitsyn says. “That was the key insight that allowed us to develop statements that don’t depend on the network configuration.”

“This means you don’t have to oversize your capacitors by a factor of 10, because we give explicit conditions where it would remain stable, even in worst-case scenarios,” Cavanagh says.

In the end, the team’s framework provides a cheaper, flexible blueprint for designing and adapting microgrids, for any community configuration. For instance, microgrid operators can use the framework to determine the size of a given capacitor that will stabilize a certain load. Inversely, a community that has been delivered hardware to set up a microgrid can use the group’s framework to determine the maximum length the transmission lines should be, as well as the type of appliances that the components can safely maintain.

“In some situations, for given voltage levels, we cannot guarantee stability with respect to a given load change, and maybe a consumer can decide it’s ok to use this big of a fan, but not a bigger one,” Turitsyn says. “So it could not only be about a capacitor, but also could constrain the maximal accepted amount of power that individuals can use.”

Going forward, the researchers hope to take a similar approach to AC, or alternating current, microgrids, which are mostly used in developed countries such as the United States.

“In the future we want to extend this work to AC microgrids, so that we don’t have situations like after Hurricane Maria, where in Puerto Rico now the expectation is that it will be several more months before power is completely restored,” Turitsyn says. “In these situations, the ability to deploy solar-based microgrids without a lot of preplanning, and with flexibility in connections, would be an important step forward.”

This research was sponsored by the MIT Tata Center for Technology and Design.


November 16, 2017

Let your car tell you what it needs

Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and LGO thesis advisor, has been working on a smartphone app that provides car diagnostic information by analyzing the car’s sounds and vibrations.

Imagine hopping into a ride-share car, glancing at your smartphone, and telling the driver that the car’s left front tire needs air, its air filter should be replaced next week, and its engine needs two new spark plugs.

Within the next year or two, people may be able to get that kind of diagnostic information in just a few minutes, in their own cars or any car they happen to be in. They wouldn’t need to know anything about the car’s history or to connect to it in any way; the information would be derived from analyzing the car’s sounds and vibrations, as measured by the phone’s microphone and accelerometers.

The MIT research behind this idea has been reported in a series of papers, most recently in the November issue of the journal Engineering Applications of Artificial Intelligence. The new paper’s co-authors include research scientist Joshua Siegel PhD ’16; Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president of open learning at MIT; and two others.

A smartphone app combining the various diagnostic systems the team developed could save the average driver $125 a year and improve their overall gas mileage by a few percentage points, Siegel says. For trucks, the savings could run to $600 a year, not counting the benefits of avoiding breakdowns that could result in lost income.

With today’s smartphones, Siegel explains, “the sensitivity is so high, you can do a good job [of detecting the relevant signals] without needing any special connection.” For some diagnostics, though, mounting the phone in a dashboard holder would improve accuracy. Already, the results from the diagnostic systems they have developed, he says, are “all well in excess of 90 percent.” And tests for misfire detection have produced no false positives, in which a problem would be incorrectly identified.

The basic idea is to provide diagnostic information that can warn the driver of upcoming issues or needed routine maintenance, before these conditions lead to breakdowns or blowouts.

Take the air filter, for example — the topic of the team’s latest findings. An engine’s sounds can reveal telltale signs of how clogged the air filter is and when to change it. And unlike many routine maintenance tasks, it’s just as bad to change air filters too soon as to wait too long, Siegel says.

That’s because brand-new air filters let more particles pass through, until they eventually build up enough of a coating of particles that the pore sizes get smaller and reach an optimal level of filtration. “As they age, they filter better,” he says. Then, as the buildup continues, eventually the pores get so small that they restrict the airflow to the engine, reducing its performance. Knowing just the right time to replace the filter can make a measurable difference in an engine’s performance and operating costs.

How can the phone tell the filter is getting clogged? “We’re listening to the car’s breathing, and listening for when it starts to snore,” Siegel says. “As it starts to get clogged, it makes a whistling noise as air is drawn in. Listening to it, you can’t differentiate it from the other engine noise, but your phone can.”

To develop and test the various diagnostic systems, which also include detecting engine misfires that signal a bad spark plug or the need for a tune up, Siegel and his colleagues tested data from a variety of cars, including some that ran perfectly and others in which one of these issues, from a clogged filter to a misfire, was deliberately induced. Often, in order to test different models, the researchers rented cars, created a condition they wanted to be able to diagnose, and then restored the car to normal.

“For our data, we’ve induced failures [after renting] a perfectly good vehicle” and then fixed it and “returned the car better than when we took it out. I’ve rented cars and given them new air filters, balanced their tires, and done an oil change” before taking them back, he recalls.

Some of the diagnostics require a complicated multistep process. For example, to tell if a car’s tires are getting bald and will need to be replaced soon, or that they are overinflated and might risk a blowout, the researchers use a combination of data collection and analysis. First, the system uses the phone’s built-in GPS to monitor the car’s actual speed. Then, vibration data can be used to determine how fast the wheels are turning. That in turn can be used to derive the wheel’s diameter, which can be compared with the diameter that would be expected if the tire were new and properly inflated.
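The article doesn’t spell out the formulas, but the geometry of that last step is simple: if GPS gives ground speed v and the vibration spectrum gives wheel rotation frequency f, the rolling diameter is v/(πf). A hypothetical sketch of the comparison (the function names and thresholds are invented, not from the paper):

```python
import math

def estimated_tire_diameter(gps_speed_mps, wheel_rotation_hz):
    """Rolling diameter implied by ground speed and wheel rotation rate:
    v = pi * d * f  =>  d = v / (pi * f)."""
    return gps_speed_mps / (math.pi * wheel_rotation_hz)

def tire_status(gps_speed_mps, wheel_rotation_hz, nominal_diameter_m, tolerance=0.01):
    """Flag tires whose rolling diameter deviates from nominal.
    A smaller diameter suggests underinflation or wear; a larger one
    suggests overinflation. The 1 percent threshold is illustrative."""
    d = estimated_tire_diameter(gps_speed_mps, wheel_rotation_hz)
    deviation = (d - nominal_diameter_m) / nominal_diameter_m
    if deviation < -tolerance:
        return "underinflated or worn"
    if deviation > tolerance:
        return "overinflated"
    return "ok"

# Example: 25 m/s (about 56 mph) with the wheels turning 12.5 times per
# second, checked against a 0.632 m nominal diameter.
print(tire_status(25.0, 12.5, 0.632))
```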

Many of the diagnostics are derived by using machine-learning processes to compare many recordings of sound and vibration from well-tuned cars with similar ones that have a specific problem. The machine learning systems can then extract even very subtle differences. For example, algorithms designed to detect wheel balance problems did a better job at detecting imbalances than expert drivers from a major car company, Siegel says.

A prototype smartphone app that incorporates all these diagnostic tools is being developed and should be ready for field testing in about six months, Siegel says, and a commercial version should be available within about a year after that. The system will be commercialized by a startup company Siegel founded called Data Driven.

October 26, 2017

Mapping gender diversity at MIT

Karen Willcox, professor of aeronautics and astronautics and LGO thesis advisor, recently helped devise an interactive map that examines trends in undergraduate gender diversity at MIT.

A trio of researchers has created and published a data visualization map that examines trends in undergraduate gender diversity at MIT. The big reveal is heartening: Over the past 20 years, MIT’s female undergraduate population has risen to nearly 50 percent of total enrollment and such growth has been sustained across almost every department and school.

Professor of aeronautics and astronautics Karen Willcox, researcher Luwen Huang, and graduate student Elizabeth Qian devised an interactive map to show these aggregate trends, and much more. The tool, using data from the MIT Registrar’s Office, allows users to explore gender diversity on a class-by-class and department-level basis, to see links between classes, such as prerequisite requirements, and to conduct keyword searches to reveal variations in related subjects across MIT.

“MIT should be proud of the leadership it has shown,” says Willcox. “The positive trends in gender equity are not seen in just one or two departments, but literally across the spectrum of science, engineering, arts, humanities, social sciences, management and architecture. One of the unique features of our tool is that it provides insight at the subject level, going deeper beyond aggregate statistics at the major level. We hope that this will be a basis for data-driven decisions — for example, by understanding what about a particular subject’s pedagogy makes it appeal to a more diverse audience.”

The map appears as a series of discipline-based ball-and-stick clusters, with each node representing a class. The size of the node indicates the class’s total enrollment. The color of a node, ranging from teal (fewer women enrolled) to salmon (more women enrolled), represents the percentage of women in a particular class, and helps to illustrate how diversity has changed over time.

For example, in a slice across classes in MIT’s Department of Electrical Engineering and Computer Science (EECS) in 2006, the nodes appear as light and darker teal, indicating enrollments of less than 25 percent women. Fast forward to 2016, and the same slice has node colors all in shades of salmon, indicating female enrollments of 35 percent or more. In part, this change is a reflection of the steady increase in total female EECS majors, particularly over the past six years. However, since the analysis is conducted at the class level, this change is also a reflection of more women from other majors taking computer science classes.

“It is gratifying to see the change in composition of our EECS student body,” says Anantha Chandrakasan, former department head of EECS and now dean of the School of Engineering. “While it is true that we have had a dramatic increase in [computer science and engineering] majors, female enrollment has nearly tripled in the past five years. It’s a useful model for us to consider as we are improving gender equity across the school.”

Willcox credits the positive momentum in EECS to several different elements, saying, “anecdotal evidence suggests that the pedagogical reform undertaken by EECS in 2008 has played a large role.” She also points out the important role of leadership, namely Chandrakasan’s support of studies such as the EECS Undergraduate Experience Survey and his commitment to programs such as the Women in Technology Program and Rising Stars, an effort to bring together women who are interested in careers in academia.

Enrollments in the Department of Mechanical Engineering have achieved similar gender parity. This is especially impressive given that the national average of female undergraduate majors in the field is 13.2 percent. Willcox again highlights the efforts made by another leader, Mary Boyce, the first woman to head that department, who led it from 2008 to 2013 and is now dean of engineering at Columbia University. The results of an internal study, announced in June, suggested that the department’s ongoing proactive approach (revamping the curriculum and enhancing recruitment efforts) played a part in its success.

“The map, of course, cannot reveal specific causes of changes in gender diversity, but it does provide a place to begin a conversation,” says researcher Luwen Huang, who is an expert in visualization design. “The interactivity of the map was designed to encourage the user to explore, discover connections across classes, and ask questions.”

The researchers caution that looking at department-based data only provides one view. In the case of EECS, a deeper dive shows that introductory programming classes have historically had high female enrollments, but that finding may be deceptive. “When you look at introductory courses like 1.00 (Engineering Computation and Data Science) and 6.00 (Introduction to Computer Science and Programming), you see high levels of female enrollment,” Willcox explains. “That’s not because there are more women in those fields, but likely because women might lack the preparation and/or the self-confidence to skip introductory classes.”

Biannual surveys of MIT undergraduates and other internal reports seem to bolster such a supposition, suggesting that women at MIT may experience negative stereotyping and feel less confident than their male counterparts. Lower or higher female enrollment in certain classes and departments may also be due to a variety of other factors, from job prospects to the influence of peers to level of interest in the subject matter.

The data and tool provide a starting point to begin such analysis and to take potential actions. Being open about data, sharing data, and being data-driven are valuable forcing mechanisms, says the team, and a hallmark of MIT’s ethos of transparency. Further, having a visual map of gender diversity across MIT, they say, is literally eye opening.

“This map provides ample evidence that our efforts to enroll a diverse undergraduate class have had a dramatic impact on MIT,” says Ian A. Waitz, vice chancellor and the Jerome C. Hunsaker Professor of Aeronautics and Astronautics. “However, while these demographic trends are impressive, they are not sufficient. We must continue to work hard to create an inclusive, welcoming environment for all.”

October 26, 2017

Identifying optimal product prices

David Simchi-Levi, professor of civil and environmental engineering and LGO thesis advisor, explains new insights into demand forecasting and price optimization.

How can online businesses leverage vast historical data, computational power, and sophisticated machine-learning techniques to quickly analyze and forecast demand, and to optimize pricing and increase revenue?

A research highlight article in the Fall 2017 issue of MIT Sloan Management Review by MIT Professor David Simchi-Levi describes new insights into demand forecasting and price optimization.

Algorithm increases revenue by 10 percent in six months

Simchi-Levi developed a machine-learning algorithm, which won the INFORMS Revenue Management and Pricing Section Practice Award, and first implemented it at online retailer Rue La La.

The initial research goal was to reduce inventory, but what the company ended up with was “a cutting-edge, demand-shaping application that has a tremendous impact on the retailer’s bottom line,” Simchi-Levi says.

Rue La La’s big challenge was pricing items that had never been sold before, which required a pricing algorithm that could set higher prices for some first-time items and lower prices for others.

Within six months of implementation, the algorithm increased Rue La La’s revenue by 10 percent.

Forecast, learn, optimize

Simchi-Levi’s process involves three steps for generating better price predictions:

The first step involves matching products with similar characteristics to the products to be optimized. A relationship between demand and price is then predicted with the help of a machine-learning algorithm.

The second step requires testing a price against actual sales, and adjusting the product’s pricing curve to match real-life results.

In the third and final step, a new curve is applied to help optimize pricing across many products and time periods.
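The three steps above can be sketched with a toy linear demand model (the model, numbers, and variable names here are illustrative assumptions, not Simchi-Levi’s actual algorithm):

```python
import numpy as np

# Step 1 (forecast): fit a demand-price relationship from comparable
# products, here via ordinary least squares on a linear model d = a - b*p.
past_prices = np.array([20.0, 25.0, 30.0, 35.0])
past_demand = np.array([180.0, 150.0, 130.0, 95.0])
slope, intercept = np.polyfit(past_prices, past_demand, 1)  # slope is negative
a, b = intercept, -slope  # rewrite as d(p) = a - b*p with b > 0

# Step 2 (learn): test a price, observe actual sales, and shift the
# curve so it passes through the observed point.
test_price, observed_demand = 28.0, 120.0
a = observed_demand + b * test_price  # adjust intercept to match reality

# Step 3 (optimize): maximize revenue p * (a - b*p); the optimum of this
# concave quadratic is p* = a / (2b).
optimal_price = a / (2 * b)
print(f"adjusted curve: d(p) = {a:.1f} - {b:.2f}p, optimal price = {optimal_price:.2f}")
```

In practice the forecast step would pool many comparable products and richer features, but the fit-adjust-optimize loop has the same shape.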

Predicting consumer demand at Groupon

Groupon has a huge product portfolio and launches thousands of new deals every day, offering them for only a short time period. Because each deal’s sales window is so short, there is little sales history to learn from, and forecasting demand was nearly impossible.

Applying Simchi-Levi’s approach to this use case began with generating multiple candidate demand functions. The team then applied a test price and observed customers’ decisions. The quantity sold identified the candidate demand function closest to actual sales at the learning price; that function became the final demand-price function and served as the basis for optimizing price during the optimization period.
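That learning step can be sketched as selecting, from a family of candidate demand curves, the one whose prediction at the test price is closest to observed sales, then optimizing price under the selected curve (the curves and numbers below are invented for illustration):

```python
# Candidate demand functions (price -> expected daily bookings).
# These particular curves are made up for illustration.
candidates = {
    "low":    lambda p: max(0.0, 60 - 2.0 * p),
    "medium": lambda p: max(0.0, 120 - 3.0 * p),
    "high":   lambda p: max(0.0, 200 - 4.0 * p),
}

# Learning phase: offer the deal at a test price and observe sales.
test_price, observed_sales = 20.0, 58.0

# Pick the candidate whose prediction is closest to what actually sold.
best_name = min(candidates,
                key=lambda name: abs(candidates[name](test_price) - observed_sales))

# Optimization phase: grid-search price to maximize revenue under the
# selected demand function.
demand = candidates[best_name]
prices = [p / 2 for p in range(1, 101)]  # $0.50 .. $50.00 in $0.50 steps
best_price = max(prices, key=lambda p: p * demand(p))

print(best_name, best_price)
```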

Analysis of the results from the field experiment showed that this new approach increased Groupon’s revenue by about 21 percent but had a much bigger impact on low-volume deals. For deals with fewer bookings per day than the median, the average increase in revenue was 116 percent, while revenue increased only 14 percent for deals with more bookings per day than the median.

Potential to disrupt consumer banking and insurance

The ability to automate pricing enables companies to optimize pricing for more products than most organizations currently find possible. This method has also been used for a bricks-and-mortar application by applying the method to a company’s promotion and pricing, in various retail channels, with similar results.

“I am very pleased that our pricing algorithm can achieve such positive results in a short timeframe,” Simchi-Levi says. “We expect that this method will soon be used not only in retail but also in the consumer banking industry. Indeed, my team at MIT has developed related methods that have recently been applied in the airline and insurance industries.”

September 22, 2017

New robot rolls with the rules of pedestrian conduct

Jonathan How, professor of aeronautics and astronautics and LGO thesis advisor, recently co-authored a paper on a new design for autonomous robots with “socially aware navigation.”

September 13, 2017

Finding leaks while they’re easy to fix

A team under Kamal Youcef-Toumi, professor of mechanical engineering and LGO thesis advisor, has developed a fast, inexpensive leak-detection system for water distribution pipes.
Access to clean, safe water is one of the world’s pressing needs, yet today’s water distribution systems lose an average of 20 percent of their supply because of leaks. These leaks not only make shortages worse but also can cause serious structural damage to buildings and roads by undermining foundations.

Unfortunately, leak detection systems are expensive and slow to operate, and they don’t work well in systems that use wood, clay, or plastic pipes, which account for the majority of systems in the developing world.

Now, a new system developed by researchers at MIT could provide a fast, inexpensive solution that can find even tiny leaks with pinpoint precision, no matter what the pipes are made of.

The system, which has been under development and testing for nine years by professor of mechanical engineering Kamal Youcef-Toumi, graduate student You Wu, and two others, will be described in detail at the upcoming IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in September. Meanwhile, the team is carrying out tests this summer on 12-inch concrete water-distribution pipes under the city of Monterrey, Mexico.

The system uses a small, rubbery robotic device that looks something like an oversized badminton birdie. The device can be inserted into the water system through any fire hydrant. It then moves passively with the flow, logging its position as it goes. It detects even small variations in pressure by sensing the pull at the edges of its soft rubber skirt, which fills the diameter of the pipe.

The device is then retrieved using a net through another hydrant, and its data is uploaded. No digging is required, and there is no need for any interruption of the water service. In addition to the passive device that is pushed by the water flow, the team also produced an active version that can control its motion.

Monterrey itself has a strong incentive to take part in this study, since it loses an estimated 40 percent of its water supply to leaks every year, costing the city about $80 million in lost revenue. Leaks can also lead to contamination of the water supply when polluted water backs up into the distribution pipes.

The MIT team, called PipeGuard, intends to commercialize its robotic detection system to help alleviate such losses. In Saudi Arabia, where most drinking water is provided through expensive desalination plants, some 33 percent is lost through leakage. That’s why that desert nation’s King Fahd University of Petroleum and Minerals has sponsored and collaborated on much of the MIT team’s work, including successful field tests there earlier this year that resulted in some further design improvements to the system, Youcef-Toumi says.

Those tests took place in a mile-long section of 2-inch rusty pipe provided by Pipetech LLC, a pipeline service company in Al Khobar, Saudi Arabia, that frequently uses the same pipe system for validating and certifying pipeline technologies. The tests, in pipes with many bends, T-joints, and connections, involved creating an artificial leak for the robot to find. The robot did so successfully, distinguishing the characteristics of the leak from false alarms caused by pressure variations or changes in pipe size, roughness, or orientation.

“We put the robot in from one joint, and took it out from the other. We tried it 14 times over three days, and it completed the inspection every time,” Wu says. What’s more, it found a leak that was about one gallon per minute, which is one-tenth the minimum size that conventional detection methods can find on average, and a third as large as those systems can find under even the best of conditions.

These leakage issues are widespread. “In China, there are many newly built cities and they all use plastic water pipes,” says Honghai Bi, CEO of Banzan International Group, one of the largest plastic pipe manufacturers in China. “In those new pipe systems there is still about 30 percent of water lost due to leaks every day. Currently there is not an effective tool to locate leaks in those plastic pipes, and MIT PipeGuard’s robot is the disruptive change we have been looking for.”

The next step for the team, after the field tests in Monterrey, is to make a more flexible, collapsible version of their robot that can quickly adapt itself to pipes of different diameters. Under the streets of Boston, for example, there is a mix of 6-, 8-, and 12-inch pipes to navigate, many of them installed so long ago that the city doesn’t even have accurate maps of their locations. The robot would expand “like an umbrella,” Wu says, to adapt to each pipe.

The value of the robot is not just for reducing water losses, but also for making water services safer and more reliable. “When a leak occurs, the force of the water flowing from underground can do serious structural damage undermining streets, flooding houses, and damaging other underground utilities. There is also the issue of loss of service to residents and businesses for extended periods of time,” says Mark Gallager, director of engineering and distribution at the Cambridge, Massachusetts, Water Department. The ability of this system to detect much smaller leaks could enable early detection and repair, long before serious pipe breaks occur.

Gallager says, “If we had the capability to find leaks when they first appear or before they get to the point of critical failure, that could equate to preventing the loss of millions of gallons of water annually. It could minimize the damage to infrastructure and the loss of water services to homes and businesses, and it could significantly reduce the associated cost.”

Not only could the system find leaks in virtually any kind of water pipe, it could also be used for other kinds of pipe distribution systems, such as those for natural gas. Such pipes, which are often old and also poorly mapped, have produced serious gas buildups and even explosions in some cities, but leaks are hard to detect until they become large enough for people to smell the added odorants. The MIT system was actually first developed to detect gas leaks, and later adapted for water pipes.

Ultimately, the team hopes, the robot could not just find leaks but also be equipped with a special mechanism they have designed, so that, at least for smaller leaks, it could carry out an instant repair on the spot.

The device has already attracted a series of honors and awards. The team members won the $10,000 prize at the 2017 MIT Water Innovation competition, and they were finalists in the MIT $100K Entrepreneurship Competition, where they won another $10,000. In the $100K finals, they won yet another $10,000 for the Booz Allen Hamilton Data Analytics Award, and they were one of the 25 winners nationwide to receive a $10,000 2017 Infy Maker Award from Infosys Foundation.

One of the judges in that $100K competition, DKNY CEO Caroline Brown, said “PipeGuard has created a simple, pragmatic and elegant solution to a complex problem. … This robot is a great example of utilizing smart design to simplify complexity and maximize efficiency.”

The team presenting the results at the IROS conference includes Kristina Kim ’17 and Michael Finn Henry, a local high school student who was a summer intern at MIT. The founders of PipeGuard are Wu and MIT graduate students Jonathan Miller and Daniel Gomez.

July 31, 2017

Two MIT documentaries win New England Emmy Awards

Amos Winter, associate professor of mechanical engineering and LGO faculty member, was the subject of the Emmy Award-winning film “Water is Life,” in which he travels to India gathering research on how to design a low-cost desalination system for use in developing areas.

On June 24, Boston-area journalists, videographers, and producers filled the halls of the Marriott Boston Copley Place for the 40th annual New England Emmy Awards. Staff from MIT’s Department of Mechanical Engineering (MechE) and MIT Video Productions (MVP) occupied two full tables at the black-tie affair. By the end of the night, two golden statues joined them as both groups were awarded Emmys.

MechE’s multimedia specialist John Freidah was honored with a New England Emmy in the Health/Science Program/Special category for the film “Water is Life,” which chronicles PhD student Natasha Wright and Professor Amos Winter as they travel to India gathering research on how to design a low-cost desalination system for use in developing areas. The film was also recently honored with a 2017 National Edward R. Murrow Award — one of the most prestigious awards in journalism — as well as a 2017 Circle of Excellence Award from the Council for Advancement and Support of Education (CASE).

Meanwhile, MVP’s Lawrence Gallagher, Joseph McMaster, and Jean Dunoyer received a New England Emmy in the Education/Schools category for their film “A Bold Move,” which recounts MIT’s relocation from Boston’s Back Bay to a swath of undeveloped land on the banks of the Charles River in Cambridge, Massachusetts. The film is the first in a four-part series that commemorates MIT’s 100th year in Cambridge.

“Water Is Life”

As the camera pans over an aerial shot of a lake in India, a flock of white birds majestically flies by. Capturing this moment in the opening shot of “Water is Life” required a lot of patience and a little help from a new friend. Unable to bring a drone into India, the film’s producer, editor, and cinematographer, John Freidah, had to come up with another plan. During a conversation on a flight from Delhi to Hyderabad, Freidah befriended a passenger in his row. He mentioned his search for a drone operator to get the perfect bird’s-eye-view shot of India’s landscape. As luck would have it, the day before departing India, Freidah received an email from his new friend saying he knew someone with a drone that he could use to film sweeping aerial shots.

Planning for “Water is Life” began months before Freidah flew to India, however. Interested in highlighting the important work done in Professor Amos Winter’s Global Engineering and Research (GEAR) Lab, Freidah and his colleagues in the media team at MechE homed in on the research PhD student and Tata Fellow Natasha Wright was conducting on designing an affordable desalination system for use in rural India. With the generous support of Robert Stoner, deputy director of the MIT Energy Initiative and director of the Tata Center for Technology and Design, plans were arranged to film Winter and Wright in India.

“India is a beautiful and amazing country, which is rich in imagery. I felt lucky to film there,” Freidah says. “We were fortunate to have the aid of stakeholders — Jain Engineering and Tata Projects — who facilitated our visits to the local villages where they were struggling with clean drinking water.”

Visiting these villages and talking to end-users who would benefit from and potentially use a desalination system was a crucial component of Winter and Wright’s research. Capturing the daily challenges these villagers face on film brought another level of exposure to the work being done by GEAR and the Tata Center.

“Having John travel to India enabled us to tell the story of our research in much greater depth than we could on campus,” says Winter. By capturing the many angles of Winter and Wright’s story, “Water Is Life” aims to show people first-hand what a problem access to clean water is on a global scale, and how essential it is to support new research and technologies that hope to solve it.

“I really wanted to give the viewer a first-person experience — through the visuals,” Freidah explains. “I wanted it to be a visual journey, as if they were there — with sound and imagery — from honking horns on the street and rickshaws going by.”

“A Bold Move”

It’s hard to imagine a time when the banks of the Charles River in Cambridge weren’t adorned with MIT’s Great Dome, interconnected buildings, and stately columns. MIT President Richard Cockburn Maclaurin’s aspiration to move the Institute from its overcrowded classrooms in Boston’s Back Bay to a plot of vacant land across the river in 1916 did more than shape the landscape around Kendall Square; it redefined MIT’s presence as a global pioneer in science and technology research. To celebrate the 1916 move to Cambridge, the program A Century in Cambridge was launched last year.

Well before the centennial fireworks exploded over Killian Court, Larry Gallagher, director of MVP, was approached by the Century in Cambridge Steering Committee. MVP was asked to produce a series of documentaries that explored MIT’s move to Cambridge in 1916 and other key aspects of the MIT experience that have helped shape MIT into what it is today. The first of this series, “A Bold Move,” chronicles the design and construction of MIT’s new campus, the whimsical celebrations commemorating the move, and the tragic and untimely passing of the man who orchestrated the entire process — President Maclaurin.

Capturing this period in MIT’s history required extensive research and the participation of faculty, staff, and historians well versed in the move to Cambridge. “We are deeply indebted to the faculty, staff, alumni, and members of the Cambridge community who so generously gave their time and expertise,” says producer and director Joe McMaster. “Without their insights, the film wouldn’t have successfully portrayed this moment in MIT’s history.”

In addition to interviewing those with extensive knowledge of the 1916 move, the MVP team had to dig deep into MIT’s robust archives. Thousands of photos from The MIT Museum, The Institute Archives, the Cambridge Historical Commission, and other sources were analyzed by McMaster and a team of research assistants. “I was amazed to see how thoroughly documented MIT’s history is in photographs — particularly everything to do with the move to Cambridge,” McMaster adds. “The whole affair seemed to be carried out with such a wonderful mixture of seriousness and whimsy, and I hoped the film would capture that feeling.”

Editor and co-producer Jean Dunoyer was tasked with weaving together the footage and photographs in a way that reflected this mixture of the silly and sacred. The imagery and footage was set to period music, to give viewers a feel for that particular era in history. In one of the concluding scenes, this period music is brought to life once more by MIT a cappella group The Chorallaries. The group performs a haunting rendition of “Mother Tech,” a piece originally performed at the conclusion of the celebrations in 1916.

The entire Century in Cambridge documentary series was produced over the course of 18 months, with assistance from the Century in Cambridge Steering Committee and the generous support of Jane and Neil Pappalardo ’64. The scope of “A Bold Move” required a massive collaboration across all of MVP. “This is indeed a huge collaborative effort for MVP,” says Gallagher. “Projects of this scope benefit from the contributions of the entire team, and for their work and talents to be recognized by their peers in the video production community with an Emmy is a great source of pride.”

July 5, 2017 | More

School of Engineering awards for 2017

Amos Winter, an associate professor of mechanical engineering and LGO faculty member, won the Junior Bose Award for being an outstanding contributor to education among the junior faculty of the School of Engineering.

The School of Engineering recently honored outstanding faculty, undergraduates, and graduate students, with the following awards:

  • Lorna Gibson, the Matoula S. Salapatas Professor of Materials Science and Engineering and a professor of mechanical engineering, won the Bose Award for Excellence in Teaching, given to a faculty member whose contributions have been characterized by dedication, care, and creativity.
  • Amos Winter, an associate professor of mechanical engineering, won the Junior Bose Award for being an outstanding contributor to education among the junior faculty of the School of Engineering.

The Ruth and Joel Spira Awards for Excellence in Teaching are awarded to one faculty member in each of three departments: Electrical Engineering and Computer Science (EECS), Mechanical Engineering (MechE), and Nuclear Science and Engineering (NSE). The awards acknowledge “the tradition of high-quality engineering education at MIT.” A fourth award rotates among the School of Engineering’s five other academic departments.

This year’s winners are:

  • John Hart, an associate professor of mechanical engineering;
  • Patrick Jaillet, the Dugald C. Jackson Professor in Electrical Engineering;
  • R. Scott Kemp, the Norman C. Rasmussen Career Development Professor of Nuclear Science and Engineering; and
  • Nir Shavit, a professor of electrical engineering and computer science.

Mary Elizabeth Wagner, a graduate student in materials science and engineering, won the School of Engineering Graduate Student Award for Extraordinary Teaching and Mentoring, established in 2006 to recognize an engineering graduate student who has demonstrated extraordinary teaching and mentoring as a teaching or research assistant.

Alexander H. Slocum, the Pappalardo Professor of Mechanical Engineering, won the Capers and Marion McDonald Award for Excellence in Mentoring and Advising, given to a faculty member who has demonstrated a lasting commitment to personal and professional development.

Hannah Diehl of the Department of Physics and Bryce Hwang of EECS were awarded the Barry M. Goldwater Scholarship, given to students who exhibit an outstanding potential and intend to pursue careers in mathematics, the natural sciences, or engineering disciplines that contribute significantly to technological advances in the United States.

Arinze C. Okeke, a biological engineering major, won the Henry Ford II Award, presented to a senior engineering student who has maintained a cumulative GPA of 5.0 at the end of their seventh term and who has exceptional potential for leadership in the profession of engineering and in society.

John Ochsendorf, the Class of 1942 Professor of Architecture and a professor of civil and environmental engineering, won the Samuel M. Seegal Prize, awarded for excellence in teaching to a faculty member (or members) in the Department of Civil and Environmental Engineering and/or the MIT Sloan School of Management who inspires students in pursuing and achieving excellence.

June 22, 2017 | More

Space junk: The cluttered frontier

A team led by Kerri Cahoy, professor of aeronautics and astronautics and LGO thesis advisor, has developed a laser sensing technique that can decipher not only where but what kind of space objects may be passing overhead.

Hundreds of millions of pieces of space junk orbit the Earth daily, from chips of old rocket paint to shards of solar panels to entire dead satellites. This cloud of high-tech detritus whirls around the planet at about 17,500 miles per hour. At these speeds, even trash as small as a pebble can torpedo a passing spacecraft.

NASA and the U.S. Department of Defense are using ground-based telescopes and laser radars (ladars) to track more than 17,000 orbital debris objects to help prevent collisions with operating missions. Such ladars shine high-powered lasers at target objects, measuring the time it takes for the laser pulse to return to Earth, to pinpoint debris in the sky.
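The ranging principle behind such ladars is simple pulse-timing arithmetic: the distance to the target is half the measured round-trip time multiplied by the speed of light. A minimal sketch, using illustrative numbers rather than figures from the article:

```python
# Laser ranging: infer target distance from a pulse's round-trip time.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """One-way distance (meters) for a given pulse round-trip time."""
    return C * t_seconds / 2.0

# A hypothetical debris object near 800 km altitude (roughly low Earth
# orbit) returns a pulse after about 5.3 milliseconds.
t = 2 * 800_000.0 / C  # round-trip time for an 800 km target
print(f"{range_from_round_trip(t) / 1000:.0f} km")  # → 800 km
```

Repeating this measurement as the object crosses the sky is what lets trackers pinpoint a debris orbit.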

Now aerospace engineers from MIT have developed a laser sensing technique that can decipher not only where but what kind of space junk may be passing overhead. For example, the technique, called laser polarimetry, may be used to discern whether a piece of debris is bare metal or covered with paint. The difference, the engineers say, could help determine an object’s mass, momentum, and potential for destruction.

“In space, things just tend to break up over time, and there have been two major collisions over the last 10 years that have caused pretty significant spikes in debris,” says Michael Pasqual, a former graduate student in MIT’s Department of Aeronautics and Astronautics. “If you can figure out what a piece of debris is made of, you can know how heavy it is and how quickly it could deorbit over time or hit something else.”

Kerri Cahoy, the Rockwell International Career Development Associate Professor of Aeronautics and Astronautics at MIT, says the technique can easily be implemented on existing ground-based systems that currently monitor orbital debris.

“[Government agencies] want to know where these chunks of debris are, so they can call the International Space Station and say, ‘Big chunk of debris coming your way, fire your thrusters and move yourself up so you’re clear,’” Cahoy says. “Mike came up with a way where, with a few modifications to the optics, they could use the same tools to get more information about what these materials are made of.”

Pasqual and Cahoy have published their results in the journal IEEE Transactions on Aerospace and Electronic Systems.

Seeing a signature

The team’s technique uses a laser to measure a material’s effect on the polarization state of light, which refers to the orientation of light’s oscillating electric field that reflects off the material. For instance, when the sun’s rays reflect off a rubber ball, the incoming light’s electric field may oscillate vertically. But certain properties of the ball’s surface, such as its roughness, may cause it to reflect with a horizontal oscillation instead, or in a completely different orientation. The same material can have multiple polarization effects, depending on the angle at which light hits it.

Pasqual reasoned that a material such as paint could have a different polarization “signature,” reflecting laser light in patterns that are distinct from the patterns of, say, bare aluminum. Polarization signatures therefore could be a reliable way for scientists to identify the composition of orbital debris in space.

To test this theory, he set up a benchtop polarimeter — an apparatus that measures, at many different angles, the polarization of laser light as it reflects off a material. The team used a high-powered laser beam with a wavelength of 1,064 nanometers, similar to the lasers used in existing ground-based ladars to track orbital debris. The laser was horizontally polarized, meaning that its light oscillated along a horizontal plane. Pasqual then used polarization filtering optics and silicon detectors to measure the polarization states of the reflected light.

Sifting through space trash

In choosing materials to analyze, Pasqual picked six that are commonly used in satellites: white and black paint, aluminum, titanium, and Kapton and Teflon — two filmlike materials used to shield satellites.

“We thought they were very representative of what you might find in space debris,” Pasqual says.

He placed each sample in the experimental apparatus, which was motorized so measurements could be made at different angles and geometries, and measured its polarization effects. In addition to reflecting light with the same polarization as the incident light, materials can also display other, stranger polarization behaviors, such as rotating the light’s oscillation slightly — a phenomenon called retardance. Pasqual identified 16 main polarization states, then took note of which effects a given material exhibited as it reflected laser light.

“Teflon had a very unique property where when you shine laser light with a vertical oscillation, it spits back some in-between angle of light,” Pasqual says. “And some of the paints had depolarization, where the material will spit out equal combinations of vertical and horizontal states.”

Each material had a sufficiently unique polarization signature to distinguish it from the other five samples. Pasqual believes other aerospace materials, such as various shielding films, or composite materials for antennas, solar cells, and circuit boards, may also exhibit unique polarization effects. His hope is that scientists can use laser polarimetry to establish a library of materials with unique polarization signatures. By adding certain filters to lasers on existing ground-based ladars, scientists can measure the polarization states of passing debris and match them to a library of signatures to determine the object’s composition.

“There are already a lot of facilities on the ground for tracking debris as it is,” Pasqual says. “The point of this technique is, while you’re doing that, you might as well put some filters on your detectors that detect polarization changes, and it’s those polarization features that can help you infer what the material is made of. And you can get more information, basically for free.”
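The matching step described above — comparing measured polarization features against a library of known materials — can be sketched as a nearest-neighbor lookup. The materials echo those in the study, but the feature vectors below are invented placeholders, not the paper’s measurements:

```python
import math

# Hypothetical signature library: material -> simplified polarization
# feature vector (e.g., fractions of reflected power in a few
# polarization states). Values are illustrative only.
LIBRARY = {
    "white paint": [0.40, 0.35, 0.25],
    "aluminum":    [0.85, 0.10, 0.05],
    "Teflon":      [0.30, 0.20, 0.50],
}

def classify(measured: list[float]) -> str:
    """Return the library material whose signature is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(LIBRARY, key=lambda name: dist(LIBRARY[name], measured))

print(classify([0.82, 0.12, 0.06]))  # → aluminum
```

A real system would use the full set of polarization states and account for measurement noise and viewing geometry, but the lookup logic is the same.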

This research was supported, in part, by the MIT Lincoln Scholars Program.


June 22, 2017 | More

LGO Best Thesis 2017 for Integrated Manufacturing Analytics Project

After graduation ceremonies at MIT, Jeremy Rautenbach won the Leaders for Global Operations Program’s Best Thesis award for his project at a subsidiary of Danaher Corporation. “Jeremy’s capstone thesis shows how all of the operations concepts we develop at LGO work together. He applied skills in advanced data analytics, manufacturing optimization, and leading all levels of an organization. In the end, he created sustainable solutions for his host company,” Thomas Roemer, the executive director of the LGO program, said when announcing the award winner.

Applying MIT knowledge in the real world

Jeremy Rautenbach won the 2017 LGO best thesis award for his work applying statistical analysis to a manufacturing line.

Jeremy prepared for his thesis project during one of his courses: Control of Manufacturing Processes. Current and former LGO Faculty Co-Directors Duane Boning (EECS) and David Hardt (ME) teach the course, which is jointly listed as a mechanical and electrical engineering graduate class. Students study statistical decision making, yield modeling and identifying root causes, multivariate SPC chart methods, and nested variance. Both professors noticed Jeremy’s passion for the topic and agreed to supervise his internship and thesis. Professor of Statistics and Engineering Systems Roy Welsch served as Jeremy’s final supervising professor. All LGO students work with at least one management professor and one engineering professor when completing their dual-degree internship and thesis.
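Among the course topics, SPC (statistical process control) charts flag when a process drifts beyond its normal variation. A minimal univariate X-bar sketch with made-up subgroup data (the course covers the multivariate versions):

```python
import statistics

def xbar_limits(subgroup_means: list[float]) -> tuple[float, float, float]:
    """Center line and ±3-sigma control limits from historical subgroup means."""
    center = statistics.mean(subgroup_means)
    sigma = statistics.stdev(subgroup_means)
    return center - 3 * sigma, center, center + 3 * sigma

# Hypothetical in-control history of subgroup means from a production line.
history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lcl, center, ucl = xbar_limits(history)

def out_of_control(x: float) -> bool:
    """Flag a new subgroup mean that falls outside the control limits."""
    return not (lcl <= x <= ucl)

print(out_of_control(10.05))  # within limits → False
print(out_of_control(11.0))   # beyond +3 sigma → True
```

Points flagged this way trigger a root-cause investigation rather than routine adjustment, which is the discipline the course teaches.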

During his internship, Jeremy worked at a biotech firm and analyzed the company’s manufacturing processes. “Jeremy took to heart ‘classical’ statistical ideas: sampling, experimental design, and variance analysis to improve the company’s processes. He learned to carefully observe both the human and technical factors at the plant and considered that in his recommendations too,” Welsch said. “He left behind a true spirit of continuous improvement.”

Jeremy found and fixed multiple problems in the company’s manufacturing process. He identified a number of small and very different yield loss sources using a highly methodical approach. He also worked with the team on site, Boning stated, “so that they ‘owned’ the improvements. More importantly, they now own the methodology for continually improving the line.”

Every spring, the LGO office asks for nominations from MIT professors throughout the Institute who advised LGO theses. LGO alumni read and comment on the nominated theses to select a winner. Jeremy will return to the Danaher Corporation at a facility in the United Kingdom in a full-time role after graduation. Before enrolling in the program, he completed undergraduate studies at the University of Pretoria, South Africa, and worked in the South African mining industry for four years.

LGO Class of 2017

Forty-five students graduated in the MIT LGO Class of 2017 on Friday, June 9. As of graduation, 98% had received a job offer. In addition to Danaher, recruiting companies included Amazon, Amgen, Boeing, Blue Origin, Cruise Automation (a General Motors subsidiary), Boston Scientific, Dell, Flex, and Nike.

June 9, 2017 | More

Sloan

Thomas Kochan

Four ways technology will change how people do business

Technology platforms and the IoT are clearly changing the structure of organizations — and the valuation of companies today is out of line with the numbers of jobs they create. In the past, the General Motors, and even the Googles, created lots of new jobs and the valuation of the company reflected this — but compare Netflix, just 3,700 employees, with its old-world equivalent, Blockbuster, which at its peak had $7 billion in revenue and 60,000 employees. Today, Netflix has a larger market capitalization. The world is changing — and the question is, will we create enough good quality jobs to meet the needs of the workforce of the future?

There’s an old Japanese phrase that came out of robotics work in the 1980s and 1990s in manufacturing that technologists ought to begin to understand and build into their work: “It’s workers who give wisdom to machines.” People give wisdom to the technology and then technology can in turn enhance human judgment. We can solve big problems in the world and create big opportunities and the next generation of inventions and jobs.

But it will mean some major changes to how businesses are run — and how companies view (and use) technology.

It’s not technologies that will solve future challenges — it’s how we use them that counts. That means we have to start with human and societal problems, and figure out how to put technology to work to address and complement what we can do together with other institutions. We must define the questions we ask of technologists and not view technology as an autonomous, deterministic force, but as an asset that is mobilized to address these important issues. Most important, we have to educate technologists and not leave them to define the objectives of the technology. If we do, they will define it very narrowly and squeeze out as much human variability as possible, which would lead us to false solutions. A broad participation in defining the problems will enable us to find our way to a better world for everyone.

There’s no single deterministic model or market design for digital platform design. Uber uses data to control its customers and driver workforce. What would be different about the experience for drivers, and maybe customers, if that information was decentralized so they could maximize their own incomes and improve their own livelihoods? We have to think about how we design these new platforms, so the benefits are more broadly shared across the different stakeholders.

In the long run, a good business model is one where the more your customers and employees know how the company works and have information to control their actions, the more committed they will be to building the business to benefit both themselves and the organization that’s providing that information. We’ve got to think about ways customers, employees, even public/private partnerships can share information to use these technologies much more holistically than for some specific stakeholder. In this way, customers become part of the innovation cycle. Maybe not the first mover for innovation but the second generation, and that will create new jobs, opportunities and applications.

It’s not about technology per se; it’s the interactions with the people who use it and the organizational designs around it that drive high levels of productivity, customer service, and innovation. The new, flexible enterprise also has to draw on people outside the organization more fully. We must ask what’s in it for various stakeholders, and have them contribute to further development and inventions. If they are invested in it and see joint gains, it continues a positive cycle of innovation.

A flexible corporate structure will need a lot more coordination across groups and different bodies of expertise. That means the “solid,” functional firms — finance, operations, HR, marketing and so on — are really going to be challenged to work out how the discovery and deployment of new apps will involve people across functions.

That doesn’t mean the old-world corporation is defunct. We still need people who have specialized knowledge in IT and marketing, but the productivity comes in linking them. Knowledge bases won’t go away, but the people and skills most valuable in the future (and incomes already reflect this) are the ones with hybrid skills: technology know-how plus the ability to apply it to functional areas. HR people won’t only specialize in compensation and performance management, but also know how to utilize technology to better design how we do our work.

With a lot of knowledge at the edges of organizations, strategy has to keep an eye on what the business is trying to achieve and ask how to be successful on a financial and sustainable basis, as well as when it needs to ally with others outside its traditional boundaries. This remains the role of the CEO and the board. However, they also have to rely on information flowing up rather than dictating what will be. That day is over.

Technology is leading to a more decentralized workplace, with the flexibility to work in different places at different times. But do we have the managerial wisdom to take advantage of the new norm? There’s still the legacy of Frederick Winslow Taylor’s management control thinking: “If you’re not in the office, I don’t trust you’re not at home playing computer games.” The distributed workplace calls for a mindset change in management to ensure that we work with people and don’t compensate them for the amount of time spent in the office, but for the contribution they make and the work they do. If we can get over this managerial hurdle, we can take advantage of distributed workplaces.

We have to get over the notion that it’s all about shareholder value and the shorter term, and instead invest for the long term and listen to employees. This means finding ways to expand and create value, but also discovering ways to distribute value more equitably. At MIT, we have a good companies and good jobs initiative, and are going to hold a series of multi-stakeholder forums around these broad questions: What makes a company a great place from the standpoint of financial return, but also good for jobs and career opportunities?

The reality is, if we don’t start to engage in this way and have a social contract where people feel their interests are being served, we are going to have an explosion. It happened with Brexit, it happened in the 2016 U.S. election. A new social contract must be based on trust, mutual interest and listening to each other, creating value together and negotiating how to distribute value more equitably. Use the knowledge of the workforce by all means, but we can’t have a world of winners and losers.

This article is excerpted and modified from “Telefonica and MIT Sloan Leaders Consider Distributed Future.”

Thomas Kochan is co-director of the MIT Sloan Institute for Work and Employment Research, where he is a professor of work and employment research.

November 20, 2017 | More

MIT Sloan faculty insights: 11 books from 2017


MIT Sloan faculty dig deep on topics with global impact. Learn about their latest ideas, tools, and approaches in their recent books.

Looking for a gift for an aspiring MBA student? Planning on spending the holidays catching up on some not-so-light reading? Here are 11 new books published by MIT Sloan faculty and lecturers in 2017.

“Adaptive Markets: Financial Evolution at the Speed of Thought”
By Andrew W. Lo, professor

Two prevalent theories command economic debates: that markets are either rational and efficient, or irrational and inefficient. In this book, Lo presents a third option, called the adaptive markets hypothesis, that reconciles the two prevailing theories using the principles of evolutionary biology.

Read Why financial markets behave like living organisms. Buy the book at Princeton University Press.


“Breaking Through Gridlock: The Power of Conversation in a Polarized World”
By Jason Jay, senior lecturer and director of the Sustainability Initiative, and Gabriel Grant, founder of Human Partners

In business, communities, families, and society at large, people often find themselves stuck in conversations that devolve into disagreements that seem to go nowhere. The authors of this book outline six steps people can take to break out of the gridlock and have an authentic, effective dialogue.

Read A toolset for getting stuck conversations back on track and buy the book on Amazon.


“Disciplined Entrepreneurship Workbook”
By Bill Aulet, professor and managing director of the Martin Trust Center for MIT Entrepreneurship

Aulet’s 2013 book, “Disciplined Entrepreneurship: 24 Steps to a Successful Startup,” offered entrepreneurs a framework for starting a business. The companion workbook provides readers with worksheets, a visual dashboard to track progress, creativity tools, and concrete examples to help entrepreneurs succeed with their new endeavors.

Buy it on Amazon.


“Faster, Greener, Smarter: The Future of the Car and Urban Mobility”
By Charles Fine, professor, David Gonsalvez, CEO and rector at MIT’s Malaysia Institute for Supply Chain Innovation, and Venkat Sumantran, chairman of Celeris Technologies

Cars are no longer always the fastest mode of transportation, and the emissions they produce hurt the environment. As city administrators shift away from designing cities for cars and people shift from owning assets to using services, mobility is changing. In this book, the authors lay out a mobility architecture that is connected, heterogeneous, intelligent, and personalized. It’s also one that they see as a social and economic necessity.

Buy the book at MIT Press.


“Machine, Platform, Crowd: Harnessing Our Digital Future”
By Erik Brynjolfsson, professor, and Andrew McAfee, principal research scientist

In an era when a machine plays strategy games better than humans, when tech upstarts displace industry mainstays, and crowd-sourced ideas are more innovative than those produced at research labs, executives have to rethink how they run their businesses. In this book, the authors analyze the new state of the world and offer a toolkit for operating in it.

Read When the automatons explode. Buy the book on Amazon.


“The Power of Little Ideas: A Low Risk, High Reward Approach to Innovation”
By David Robertson, senior lecturer

“Disrupt or be disrupted” is something business leaders often hear. Robertson argues that being an industry disruptor is not the only path to success. He advocates for a low-risk, high-reward strategy that involves building a family of complementary innovations around a central product or service. This method creates an atmosphere that supports sustained innovation, Robertson says.

Buy it from Harvard Business Review.


“A Practitioner’s Guide to Asset Allocation”
By Mark P. Kritzman, senior lecturer, William Kinlaw, senior managing director and global head of State Street Associates, and David Turkington, senior managing director and head of portfolio and risk research at State Street Associates.

Progress regarding asset allocation has been uneven and, at times, interrupted by misleading research. In this book, the authors dispel a number of common misconceptions about asset allocation and explore advances that address its key challenges.

Buy it on Amazon.


“Shaping the Future of Work: A Handbook for Action and a New Social Contract”
By Thomas Kochan, professor, and Lee Dyer, professor at Cornell University

The world of work is changing, exacerbating income inequality. In this book, which was updated in light of social and political divides that emerged worldwide in 2016, the authors lay out a framework for achieving a new social contract that emphasizes strong financial returns for businesses alongside good jobs for employees.

Read Why we need a new social contract now. Buy the book at MIT Press.


“Social Media Management: Persuasion in a Networked Culture”
By Ben Shields, senior lecturer

With social media disrupting the business world, how can company leaders best adapt? In this book, Shields offers a framework for generating business value from social media that includes targeting a social audience, using social media to promote a company’s brand, and measuring results.

Buy the book on Amazon.


“Strengthening Teaching and Learning in Research Universities: Strategies and Initiatives for Institutional Change”
Edited by Lori Breslow, senior lecturer, Bjorn Stensaker, professor at the University of Oslo, Grahame T. Bilbow, director of the Centre for the Enhancement of Teaching and Learning, and Rob van der Vaart, professor at Utrecht University

International assessments of leading research universities have typically focused on their research performance, while their approaches to teaching and learning have received less attention. In this book, the authors compare how research universities across Europe, Asia, and the U.S. are improving in these other areas.

Buy it on Amazon.


“Who Will Care for Us: Long-Term Care and the Long-Term Workforce”
By Paul Osterman, professor

As baby boomers age, the number of people who need daily care will increase significantly, and Medicaid costs will skyrocket. In this book, Osterman proposes that home health aide workers and certified nursing assistants can do much more than current regulations allow them to. This could improve quality of life for the elderly and disabled while also reducing costs.

Buy the book on Amazon.

November 19, 2017 | More


Create a finance committee at every public company

Almost all boards of U.S. public companies now have three committees that meet immediately before every board meeting and report to the full board — audit, compensation, and nominating-governance. Committees have become the workhorses of the governance process: with their small size and expert support, they can do more in-depth analysis of complex topics than the full board of directors.

However, since the passage of the 2002 Sarbanes-Oxley Act, the duties of the audit committee, especially, have become so large and complex that it cannot seriously assess broader financial issues.

Audit committees continue to perform the traditional functions of appointing the company’s independent auditor and reviewing its financial statements. But audit committees now have a long list of other obligations — including oversight of complaints by whistle blowers and violations of ethics codes; approval of non-audit functions by auditors; and review of the management report and auditor attestation on internal controls. The audit committee also holds private sessions with both external and internal auditors as well as the chief financial officer and the head of compliance/risk.

In other words, audit committees are overburdened by their increased obligations to oversee the details of the reporting and compliance processes. As a result, the audit committee no longer has enough time to seriously consider broader financial topics. If directors are going to have meaningful input into the broad financial issues faced by any public company, they need to form a finance committee with the time and expertise to address the issues.

Approximately 30% of S&P 500 companies have a committee with finance in its name, according to research by Russell Reynolds. That research showed that industrial and consumer companies have the highest percentage of finance-related committees, while technology and financial services companies have the lowest (the latter often have risk committees instead).

What should be the main subjects addressed by an effective finance committee? It should review the company’s pension plans, insurance coverage, cash management, debt issuance, tax strategies and, most importantly, capital allocation.

On capital allocation, finance committees should concentrate on three subjects — following up on significant acquisitions, monitoring debt levels, and scrutinizing share repurchase programs.

Of course, boards do a detailed review of significant acquisitions before they occur. Most boards will examine carefully the strategic fit, projected cost savings, potential revenue synergies, and justification for the price. By contrast, boards often do not systematically study, several years later, whether significant acquisitions achieve their objectives.

The finance committee provides a good forum to look systematically at how significant acquisitions fare. The committee may find, for example, that the company typically achieves projected reductions in operating costs but not revenue synergies through cross-selling. So, in the future, the board may decide to evaluate acquisitions without assuming that they will earn additional revenue due to synergies.

Read the full post at CFO.

Robert Pozen is a Senior Lecturer at the MIT Sloan School of Management and a Senior Fellow at the Brookings Institution.

November 16, 2017 | More

4 from MIT Sloan honored at Thinkers50 awards

4 from MIT Sloan honored at Thinkers50 awards

Every two years Thinkers50 recognizes the people developing new and insightful management ideas. Here are four working on the MIT campus.

Four members of the MIT Sloan community were honored Nov. 13 at the biennial Thinkers50 awards in London. Thinkers50 recognizes global leaders in management thinking through its ranked list of the 50 top business thinkers, ten individual awards, and inductees into the Thinkers50 Hall of Fame.

Here are those from MIT Sloan who were recognized this year:

Douglas Ready was inducted into the Thinkers50 Hall of Fame. Ready is a senior lecturer at MIT Sloan and the founder and CEO of the International Consortium for Executive Development Research. He also teaches the MIT Sloan Executive Education course “Building Game-Changing Organizations: Aligning Purpose, Performance, and People.”

Hal Gregersen won the Leadership Award, which recognizes those who “shed powerful and original new light onto” the role of the leaders in teams and organizations. He was also named to the 24th spot in the Thinkers50 rankings. Gregersen is the executive director of the MIT Leadership Center and a senior lecturer at MIT Sloan. He is the co-author of the 2011 book “The Innovator’s DNA: Mastering the Five Skills of Disruptive Innovators.”

Erik Brynjolfsson, PhD ’91, and Andrew McAfee, SB ’88, SB ’89, LFM ’90, shared the 12th spot in the Thinkers50 rankings. They are co-authors of several books, the most recent of which is “Machine, Platform, Crowd: Harnessing our Digital Future” (excerpted here). Brynjolfsson is a professor at MIT Sloan and the director of the MIT Initiative on the Digital Economy. McAfee is a principal research scientist at MIT Sloan and the co-director of the MIT Initiative on the Digital Economy.

November 15, 2017 | More

6 from MIT Sloan named to Boston 50 on Fire list

6 from MIT Sloan named to Boston 50 on Fire list

Boston’s innovation ecosystem stretches across industries, but nearly all its companies and organizations have a core technology element. Here’s who’s being celebrated right now.

Six companies and organizations led by MIT Sloan alumni and students were named to the 2017 BostInno 50 on Fire list Nov. 9. The list, culled from 150 finalists, recognizes the leaders and up-and-comers in the Greater Boston innovation ecosystem. Winners were named in 11 categories.

The 50 on Fire list includes many other companies and organizations led by MIT alumni and former MIT professors and researchers. Katie Rae, CEO and managing partner of MIT startup accelerator The Engine, won in the investment category.

The MIT Sloan winners are:

Ric Fulop, SF ’06, co-founder and chief technology officer of metal 3-D printing company Desktop Metal, won in the design category.

PillPack, an internet pharmacy that ships pills in pre-sorted packages with dates and times printed on them, won in the health and medicine category. Elliot Cohen, MBA ’13, is the company’s co-founder and chief technology officer.

Video advertising company Pixability won in the marketing and advertising category. The company’s leadership team includes founder and CEO Bettina Hein and chief technology officer Andreas Goeldi, both SF ’07.

Marketing company ThriveHive won in the marketing and advertising category. ThriveHive was previously Propel Marketing, which bought a startup called ThriveHive — co-founded by Max Faingezicht and Adam Blake, both MBA ’11 — in 2016. Follow? Today, Faingezicht is ThriveHive’s chief technology officer and Blake is its senior vice president of marketing and general manager of SaaS.

The MIT Sloan Sports Analytics Conference, which is led by a student organizing team, won in the sport and fitness category.

Cybersecurity company Cybereason won in the technology category. The company’s leadership includes chief marketing officer Mike Volpe, MBA ’03, Cybereason Japan CEO Shai Horovitz, MBA ’15, and Emmy Linder, MBA ’10, head of global operations. Earlier this year the company boosted its total funding to $189 million.

Did we miss any 50 on Fire winners led by MIT Sloan alumni? Let us know at news.mitsloan@mit.edu.

November 15, 2017 | More

Taking steps to reduce foreign social-media meddling in our elections

Taking steps to reduce foreign social-media meddling in our elections

One could almost pity the executives from Facebook, Google and Twitter as they were grilled on Capitol Hill earlier this week by senators upset about Russian meddling in last year’s presidential election, via the posting of cleverly worded propaganda ads and messages on social-media sites.

After all, how do you detect – let alone stop – a small group of determined foreign nationals manipulating and taking advantage of what are supposed to be open, free-flowing Internet platforms, idealistically designed to allow billions of people across the globe to voice their thoughts on everything from world politics to the type of pigeons in Trafalgar Square?

Of course, the Facebook, Google and Twitter executives at the Senate hearing earlier this week bowed their heads, expressed remorse and vowed to do better in combating the threat of foreign interference in our democratic elections.

But the question is: Can they do better? Is it possible? Remember: Facebook alone acknowledges that it received only about $100,000 in paid ads from those it later learned were tied to various Russian groups, but those ads were still seen by about 10 million people, according to media reports.

When you include free Facebook messages posted on bogus accounts taken out by Russians – and the corresponding “like” endorsements by almost countless readers – the number of people exposed to manipulative Russian antics ballooned to about 126 million, Facebook admits, according to media reports.

Now think about it: A swing of 100,000 votes or fewer in last year’s presidential election could have put Hillary Clinton in the White House, not Donald Trump. So you get the picture: it doesn’t take much on social media to effectively “interfere” in a presidential election.

In addition to these disturbing numbers is the fact that many of these fake ads and messages were cleverly written, deliberately appealing to the dogmatic extremes on both the left and right and their pre-conceived prejudices and notions about political issues. The goal of the Russians, it seems, was to simply stir up as much fear, anger and confusion among voters as possible.

Read the full post at Huffington Post.

Neal Hartman is a Senior Lecturer in Managerial Communication at the MIT Sloan School of Management.

November 15, 2017 | More

Can Uber evolve – quickly?

Can Uber evolve – quickly?

I’m a huge fan of Uber and use its services all the time. Still, I can’t deny it’s been a tough couple of weeks for the company. A blog post by a female employee credibly alleging sexual harassment, and retaliation for reporting it, was widely covered in the media. Days later, a video showing the CEO arguing vehemently with an Uber driver about rates went viral. Plus, revelations about “grey-balling” — preventing certain people from accessing the Uber system — put the company in an unfavorable light with a number of different stakeholders.

In the aftermath of these news items, Uber has taken some swift action. A board member, Arianna Huffington, flew in for listening meetings, and the company hired former Attorney General Eric Holder to investigate the allegations of sexual harassment. And the company announced it was searching for a chief operating officer to work with the CEO, Travis Kalanick.

Kalanick for his part acknowledged that he needs to “fundamentally change as a leader and grow up.” These actions are a necessary start, but are not sufficient to address the underlying causes of Uber’s problems.

Read the full post at Entrepreneur.

Court Chilton is a Senior Lecturer at the MIT Sloan School of Management.

November 13, 2017 | More

3 from MIT Sloan named to top women-led businesses in Massachusetts list

3 from MIT Sloan named to top women-led businesses in Massachusetts list

The corner office is less and less of a boys’ club. Here are three thriving Massachusetts companies helmed by MIT Sloan alumnae.

Three companies with MIT Sloan alumnae at the helm were named to the Boston Globe Magazine’s “2017 top 100 women-led businesses in Massachusetts” list.

The number of women-run businesses in the U.S. is small. In 2016, the number of Fortune 500 companies with women at the helm was only 21. This year, that number rose to 32.

At the same time, among Fortune 1000 companies, those with women CEOs between 2002 and 2014 produced equity returns 226 percent better than the S&P 500.

The Boston Globe Magazine partners with The Commonwealth Institute, a nonprofit devoted to advancing women as business leaders, to compile the annual list of the top organizations led by women in Massachusetts. This year’s list included organizations like Harvard University, Care.com (co-founded by MIT Sloan alumna Donna Levin), and Fidelity Investments.

Here are the businesses on this year’s list led by MIT Sloan graduates:

Analysis Group
Analysis Group is an economic consulting firm based in Boston. The company is led by CEO and chairman Martha S. Samuelson, who earned a master of science degree in management from MIT Sloan in 1986. Samuelson joined Analysis Group in 1992.

Axcelis Technologies
Axcelis Technologies provides equipment and services to semiconductor manufacturing businesses. The company, based in Beverly, Mass., is helmed by president and CEO Mary G. Puma, who earned a master of science degree in management from MIT Sloan in 1981. Puma joined Eaton Corporation in 1998 — Axcelis spun out from Eaton in 2000.

Pixability
Pixability provides video advertising technology for marketers, with a focus on performance, audience insights, and targeting. The company, based in Boston, was founded by current CEO Bettina Hein, who earned a master of science degree in management from MIT Sloan in 2007.

Did we miss an MIT Sloan graduate on the top 100 women-led businesses in Massachusetts list? Let us know at news@sloan.mit.edu.

November 10, 2017 | More

Tactile, translating printed text to Braille, wins MIT $100K Pitch

Tactile, translating printed text to Braille, wins MIT $100K Pitch

The MIT $100K Pitch Competition recognizes new MIT technologies and businesses every year. This year’s winners convert printed text to Braille and help users take birth control on time.

Tactile won the $5,000 top prize in the MIT $100K Pitch Competition for technology that converts printed text to Braille. The portable device is designed to help visually impaired individuals read short texts like agendas, printed receipts, and mail.

“There’s a great need for such a device. A lot of printed texts don’t have Braille translation, and there are 39 million visually impaired people,” said Tactile co-founder Grace Li, SB ’17. “We’re starting out in the U.S. market working with public institutions like libraries and schools, plus centers like the Carroll Center for the Blind and the Perkins School for the Blind. Boston has been a great place to develop our prototype, because there’s such a strong community for the visually impaired.”

Her team is now beta testing the product in the Boston area and will use the prize money to refine and streamline the prototype, intending to make the device even smaller.

The MIT $100K Pitch Competition, held Nov. 7 on campus, is the first of three annual contests held by the MIT $100K Entrepreneurship Competition. The grand prize winner of the third competition, held in the spring, receives the $100,000 prize.

This year, 20 teams competed by presenting a 90-second elevator pitch, followed by a round of judges’ questions.

Spectators also voted on their phones for an audience choice award. A smart device and app duo, aam, won that $2,000 prize for a birth control pill blister pack that inserts into a smart sleeve. The technology reminds its user to take the pill at a specific time and sends reminders to her phone if she forgets. The team is helmed by co-founder Aagya Mathur, MBA ’18.

The next phase of the MIT $100K is Accelerate, where teams develop a prototype with mentor guidance and customer research. Ten finalists will make public presentations this winter for a $10,000 prize and a $3,000 audience choice award.

November 9, 2017 | More

At Ayar Labs, optical chips born of MIT research

At Ayar Labs, optical chips born of MIT research

A breakthrough in optical chips could mean massive energy and cost savings in data centers.

In 1965, Gordon Moore noticed that computer chips were roughly doubling in power every year, following an exponential curve. The trend held with such reliability that the observation was codified into Moore’s law. “But the speed at which we take data in and out of chips has instead followed linear growth,” said Alexandra Wright-Gladstein, a 2015 graduate of MIT Sloan. This expanding gap between the speed of individual computer chips and the speed with which they can communicate creates a bottleneck. “Right now, processors in big data centers spend the majority of their time idling, simply waiting for data to come or go.”
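The gap Wright-Gladstein describes can be made concrete with a toy calculation. The doubling periods below are illustrative assumptions, not Ayar Labs figures: capacity that doubles every couple of years quickly dwarfs I/O bandwidth that doubles only every several years, let alone bandwidth growing linearly.

```python
# Toy comparison of chip throughput growth vs. I/O bandwidth growth.
# The doubling periods are illustrative assumptions, not measured figures.

def capacity(years, doubling_period):
    """Relative capacity after `years`, doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

for years in (0, 6, 12, 18):
    chip = capacity(years, doubling_period=2)  # fast, Moore's-law-like growth
    io = capacity(years, doubling_period=6)    # much slower I/O growth
    print(f"after {years:2d} years, compute outpaces I/O by {chip / io:.0f}x")
```

Even under these assumptions, processors increasingly wait on data, which is the idling bottleneck the article describes.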

Wright-Gladstein enrolled in business school seeking a solution to this drag on economic and environmental efficiency. She graduated as the cofounder and CEO of Ayar Labs, a company born around the use of light, instead of electricity, to transfer data between chips. The technology was invented by Rajeev Ram in MIT’s Research Laboratory of Electronics, with colleagues at Berkeley and University of Colorado Boulder. Ayar Labs expects its first product to be 10 times faster than anything currently on the market.

For more than a decade, big companies like Intel and IBM have been chasing an optical technology for this kind of data transfer. Their efforts, however, always assumed that such a technology required tinkering with the composition of silicon: pure silicon could be used for computer processors, but some admixture was needed for the optical device. This entailed building two separate chips.

“But maybe because we were a small research group across a few universities and we didn’t have the resources that Intel had we ended up questioning this assumption,” said Wright-Gladstein. “I think most of the industry figured it was a stupid idea, but we tried anyway, and after a decade of research we’ve created a new class of optical chips.”

Though a few years out from its first product, Ayar Labs raised $2.5 million in its first round of funding and has attracted interest from a range of large companies that provide supercomputing or data center networking services. The challenge of recruiting these customers was somewhat unconventional: “We had to get customers to think about what their tech can do in a different way,” said Wright-Gladstein. For decades, industry has watched input-output capacity double every 3-10 years. In that sense, Ayar is not simply introducing a new product, but a new frame of reference — a device that has potential to increase speed exponentially instead of linearly. “We’ve had to find the individuals within big organizations who are excited about thinking this way, about thinking differently,” Wright-Gladstein said. “Once we do find those people it’s really fun.”

But what excites Wright-Gladstein most is the company’s unforeseeable future. “I like to draw a comparison to the inductor in the radio,” she said. It was common knowledge in the 1990s that inductors should not be made of silicon. When a professor at Berkeley did make one of silicon, he laid the groundwork for today’s cellphones. “We’ve created a new platform for manufacturing optics and electronics together, for the first time, on the same chip,” she said. “In the same way that the first person to make an inductor in silicon likely had no idea he was enabling the wireless revolution, we can’t imagine today what this technology will enable in the future.”

November 5, 2017 | More

Engineering

A new perspective on ancient materials inspires future innovation

A new perspective on ancient materials inspires future innovation

Contemporary building materials are guaranteed for only about 100 years, yet structures built in Ancient Rome have survived for millennia. Questions about what accounts for this discrepancy in durability and resilience, and what engineers could learn from ancient technologies, are central to the research interests of Admir Masic, the Esther and Harold E. Edgerton Career Development Assistant Professor of Civil and Environmental Engineering at MIT.

“Ancient materials that have resisted degradation over time, despite environmental conditions, can inform us about properties that contribute to performance and longevity,” Masic says. “This ‘antiqua-inspired’ method offers a new way of approaching the creation of more sustainable, durable materials for the future.”

To encourage peer researchers to consider how an understanding of ancient materials and processes can be leveraged to develop innovative systems and devices, Masic composed a conceptual review of the many opportunities ancient materials afford. His co-authors were Loic Bertrand, director of IPANEMA, the European research platform on ancient materials and a joint laboratory of the French National Center for Scientific Research, the Ministry of Culture, Versailles Saint-Quentin-en-Yvelines University, and the University of Paris-Saclay; Claire Gervais, professor of materials chemistry at Bern University of the Arts; and Luc Robbiola, a research engineer at Toulouse University. The paper, titled “Paleo-inspired systems: Durability, Sustainability and Remarkable Properties,” was recently published in Angewandte Chemie International Edition.

The “paleo- or antiqua-inspired” train of thought put forth by the researchers is modeled after the bio-inspired research approach, in which researchers study the structure, composition, and outstanding properties of biological materials in order to mimic them and engineer new materials. Just as bio-inspired researchers draw on natural systems, paleo-inspired researchers explore ancient systems and processes, adding another dimension to the search for the materials of the future.

In the paper, a variety of ancient technologies were presented, including Roman concrete, and pigments such as Mayan and Egyptian blues. In the case of the concrete recipe and methods used in Ancient Rome, the concoction could be reverse-engineered to develop and optimize novel construction materials based on ancient practices and techniques.

Ancient Romans didn’t use 3-D printing technologies or highly sophisticated nanomaterials in their construction. In fact, they came up with solutions that include volcanic ash or recycled materials, like brick fragments, to strengthen the concrete. Interestingly, these approaches have considerably less embodied energy than the commonly used Portland cement, and exhibit durability beyond the lifespan of modern infrastructure.

“This work opens exciting new avenues for the development of novel materials. There is a great deal that we can learn from antiquity, and in particular, traditional Roman concrete offers many potential advantages for construction in the future,” says Class of 1942 Professor of Civil and Environmental Engineering and professor of architecture John Ochsendorf, an active proponent of antiqua-inspired structural designs. “We are really just beginning to understand the underlying chemistry of construction materials from antiquity.”

The Masic lab, established in 2015, uses advanced characterization tools to design complex multi-scale materials, Masic explains. Its position within the Department of Civil and Environmental Engineering (CEE) offers a unique opportunity for cutting-edge interdisciplinary thinking about solutions for the future, and a new avenue for contributing to this class of “antiqua-inspired” materials. For example, multi-scale computational modeling, combined with Masic’s characterization tools, constitutes a powerful toolbox for advancing innovative solutions for the long-term resilience of cement and infrastructure.

The “antiqua-inspired” method is not limited to research, however. As part of the CEE undergraduate curriculum, students interested in materials and infrastructure are encouraged to take part in the program in Materials in Art, Archaeology and Architecture (ONE-MA3). In this three-week fieldwork program, students witness firsthand the durable structures and discuss ancient technologies with experts. The program is a prerequisite for 1.057 (Heritage Science and Technology), a class that combines lectures and labs, applying concepts and observations from ONE-MA3 to antiqua-inspired research.

“ONE-MA3 is a unique opportunity for undergraduate students to learn about the ‘antiqua-inspired’ concept in the field, and hopefully get inspired to contribute to the corresponding research,” says Masic, the founder of the program. “Using ancient societies, architecture and building materials as time-proven examples of innovation in construction and material science, and building upon ONE-MA3, 1.057 introduces students to ancient materials and technologies and gives them the opportunity to explore material sustainability and durability from multiple perspectives.”


November 20, 2017 | More

Two MIT faculty elected 2017 AAAS Fellows

Two MIT faculty elected 2017 AAAS Fellows

Two current MIT faculty members have been elected as fellows of the American Association for the Advancement of Science (AAAS).

The new fellows are among a group of 396 AAAS members elected by their peers in recognition of their scientifically or socially distinguished efforts to advance science. This year’s fellows will be honored at a ceremony on Feb. 17, 2018, at the AAAS Annual Meeting in Austin, Texas.

Dorothy Hosler is professor of archaeology and ancient technology in the Department of Materials Science and Engineering. Her research incorporates the tools of materials engineering and geoscience to investigate materials processing technologies in the ancient Americas. She has demonstrated that metallurgy was transferred from Andean to Mesoamerican societies, whereas the invention of rubber was uniquely Mesoamerican. She has been recognized by AAAS for her “distinguished contributions to integration of materials science and social theory for understanding ancient technologies and what they indicate about social aspects of ancient societies.”

John D. Sterman is the Jay W. Forrester Professor of Management at the MIT Sloan School of Management and a professor in the MIT Institute for Data, Systems, and Society. He is also the director of the MIT System Dynamics Group and the MIT Sloan Sustainability Initiative. He has been recognized for his “distinguished contributions to improving decision-making in complex systems, including corporate strategy and operations, energy policy, public health, environmental sustainability, and climate change.”

This year’s fellows will be formally announced in the AAAS News and Notes section of Science on Nov. 24.


November 20, 2017 | More

Cell-weighing method could help doctors choose cancer drugs

Cell-weighing method could help doctors choose cancer drugs

Doctors have many drugs available to treat multiple myeloma, a type of blood cancer. However, there is no way to predict, by genetic markers or other means, how a patient will respond to a particular drug. This can lead to months of treatment with a drug that isn’t working.

Researchers at MIT have now shown that they can use a new type of measurement to predict how drugs will affect cancer cells taken from multiple-myeloma patients. Furthermore, they showed that their predictions correlated with how those patients actually fared when treated with those drugs.

This type of testing could help doctors predict drug responses based on measurements of cancer cell growth rates after drug exposure, says Scott Manalis, the Andrew and Erna Viterbi Professor in the MIT departments of Biological Engineering and Mechanical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research.

“For infectious diseases, antibiotic susceptibility testing based on cell proliferation has been extremely effective for many decades,” Manalis says. “Unlike bacteria, analogous tests for tumor cells have been challenging, in part because the cells don’t always proliferate upon removal from the patient. The measurement we developed doesn’t require proliferation.”

Manalis is the senior author of the study, which appears in the Nov. 20 issue of Nature Communications. The paper’s lead authors are Mark Stevens, a visiting scientist at the Koch Institute and research scientist at Dana-Farber Cancer Institute, and Arif Cetin, a former MIT postdoc.

Predicting response

The researchers’ new strategy is based on technology that Manalis and others in his lab have developed over the past several years to weigh cells. Their device, known as a suspended microchannel resonator (SMR), can measure cell masses 10 to 100 times more accurately than any other technique, allowing the researchers to precisely calculate growth rates of single cells over short periods of time.

The latest version of the device, which can measure 50 to 100 cells per hour, consists of a series of SMR sensors that weigh cells as they flow through tiny channels. Over a 20-minute period, each cell is weighed 10 times, which is enough to get an accurate growth-rate measurement.

A few years ago, Manalis and colleagues set out to adapt this technique to predict how cancer drugs affect tumor cell growth. They showed last year that the mass accumulation rate (MAR), a measurement of the rate at which the cells gain mass, can reveal drug susceptibility. A decrease in MAR following drug treatment means the cells are sensitive to the drug, but if they are resistant, there is no change in MAR.
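The MAR logic described above can be sketched in a few lines of code. This is a simplified illustration, not the study's actual analysis pipeline: the masses, times, and sensitivity threshold are invented for the example. Each cell's MAR is the slope of a line fit to its roughly ten mass measurements, and a marked drop in MAR after drug exposure is read as sensitivity.

```python
def mass_accumulation_rate(times_min, masses_pg):
    """MAR as the least-squares slope (pg/min) through a cell's
    repeated mass measurements (~10 weighings over ~20 minutes)."""
    n = len(times_min)
    t_bar = sum(times_min) / n
    m_bar = sum(masses_pg) / n
    num = sum((t - t_bar) * (m - m_bar) for t, m in zip(times_min, masses_pg))
    den = sum((t - t_bar) ** 2 for t in times_min)
    return num / den

def drug_sensitive(mar_treated, mar_control, drop_fraction=0.5):
    """Illustrative rule: 'sensitive' if treatment cuts MAR by more than
    drop_fraction relative to control; resistant cells show little change."""
    return mar_treated < (1.0 - drop_fraction) * mar_control

# Synthetic cells: a control gaining ~0.10 pg/min, and a treated cell
# whose growth has nearly stalled after drug exposure.
times = [i * 20.0 / 9 for i in range(10)]        # 10 weighings over 20 min
control = [50.0 + 0.10 * t for t in times]
treated = [50.0 + 0.01 * t for t in times]

mar_control = mass_accumulation_rate(times, control)
mar_treated = mass_accumulation_rate(times, treated)
print(drug_sensitive(mar_treated, mar_control))  # True: MAR dropped sharply
```

In practice the study aggregates MAR measurements across many single cells per sample rather than applying a fixed per-cell cutoff like the one assumed here.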

In the new study, the researchers teamed up with Nikhil Munshi at Dana-Farber Cancer Institute to test a variety of drugs on tumor cells from multiple-myeloma patients. They then compared the results to what happened when the patients were treated with those drugs. For each patient, they tracked the cells’ response to three different drugs, plus several combinations of those drugs. They found that in all nine cases, their data matched the outcomes seen in patients, as measured by clinical protein biomarkers found in the bloodstream, which are used by doctors to determine whether a drug is killing the tumor cells.

“When the clinical biomarkers showed that the patients should be sensitive to a drug, we also saw sensitivity by our measurement. Whereas in cases where the patients were resistant, we saw that in the clinical biomarkers as well as our measurement,” Stevens says.

Personalized medicine

One of the difficulties in treating multiple myeloma is choosing among the many drugs available. Patients usually respond well to the first round of treatment but eventually relapse, at which point doctors must choose another drug. However, there is no way to predict which drug would be best for that particular patient at that time.

In one scenario, the researchers envision that their sensor would be used at the time of disease relapse, when the tumor may have developed resistance to specific therapies.

“At this time of relapse, we would take a bone marrow biopsy from a patient, and we would test each therapy individually or in combinations that are typically used in the clinic. At that point we’d be able to inform the clinician as to which therapy or combinations of therapies this patient seems to be most sensitive or most resistant to,” Stevens says.

The new test holds “great promise” to screen myeloma cells for drug susceptibility, says Kenneth Anderson, a professor of medicine at Harvard Medical School and Dana-Farber Cancer Institute, who was not involved in the research.

“This assay may fast-forward personalized medical care and the choice of effective therapies for myeloma both at diagnosis and at relapse,” Anderson says. “It may also be useful to profile susceptibility of minimal residual disease in order to further inform therapy and improve patient outcome.”

Bone marrow biopsies often produce limited numbers of tumor cells to test — as few as 50,000 tumor cells in this study — but for this technique that is enough to test many different drugs and drug combinations. The MIT researchers have started a company to begin a larger clinical study for validating this approach, and they plan to investigate the possibility of using this technology for other types of cancer.

The research was funded by the National Institutes of Health, the Koch Institute’s Cancer Center Support (core) Grant from the National Cancer Institute, the Department of Veterans Affairs, a Koch Institute Quinquennial Cancer Research Fellowship, and the Bridge Project of the Koch Institute and Dana-Farber/Harvard Cancer Center.


November 20, 2017

Show the flow

When it comes to teaching, seeing is a key to believing, or at least understanding.

That’s the guiding principle of a new class, 1.079 (Rock-on-a-Chip), dedicated to exploring multiphase flow in porous media.

“This course is an opportunity to teach this subject in a completely different way, by visualizing the physics of flow,” says instructor Ruben Juanes, the ARCO Associate Professor in Energy Studies.

Juanes introduced 1.079 in the spring of 2017, seeking to kick-start an energy resources track within the Department of Civil and Environmental Engineering. “The class plays a very nice role in the curriculum, filling a gap in a subject that is crucial to many energy technologies,” he says.

Flows in porous media come into play in a range of real-world applications, from oil and gas recovery and groundwater resource management to seismic activity mapping and energy storage technology. These flows are frequently multiphase, composed of gases, solids, and liquids in diverse mixes. For example, hydrocarbon reservoirs simultaneously host water, oil, and gas; and fuel cells feature a porous layer next to the cathode where water vapor may condense into liquid water.

However, the processes by which liquids and gases move underground often take place out of sight. Rainwater infiltrates soil, displacing air. Oil and water compete as they seep through rock reservoirs. It has been difficult to observe and capture in scientific detail what Juanes calls “the marvelous physics and chemistry of multiphase flows.”

But recently, Juanes figured out a way of elucidating these subterranean processes. Employing 3-D printing and methods borrowed from the field of microfluidics, he created a multiphase flow laboratory on a chip.

The device consists of a microfluidic flow cell patterned with vertical posts using soft lithography, sandwiched between two thin layers of a transparent polymer. When one fluid is introduced to displace another, the chip permits direct visualization of fundamental physical mechanisms at the scale of actual rock and soil pores. Juanes can now study in vivid close-up the critical properties and porous media conditions that hamper, or hasten, underground flows.

What Juanes calls a “new approach to an old problem” proves especially effective in the classroom.

“With transparent porous media, you can demonstrate the process of oil recovery, filtration of water, extraction of gases,” he says. “You can’t really understand these applications without knowledge of the physics, and here, an image is worth a thousand words.”

Lubna Barghouty SM ’17, whose graduate research focused on predicting the flow of oil from rock reservoirs containing both oil and water, calls 1.079 a “one-of-a-kind class.”

“I had been reading about the concepts and trying to imagine these phenomena, and finally I was able to see them,” she says.

Rafael Villamor Lora, a graduate student in civil engineering and geomechanics, is studying rock permeability and fluid flow inside rock fractures. He says he found that 1.079 offered “a unique approach to presenting very difficult physics, making it clear and understandable.”

Juanes divides class time between lectures focused on theory and labs that bring the theory to life, a mix that students find both intellectually challenging and practical.

“I love experimenting and doing things hands-on,” says Omar Al-Dajani SM ’16, a petroleum engineer for Saudi Aramco now pursuing a doctoral degree in civil and environmental engineering. But sometimes his experiments failed. “It was amazing how Professor Juanes could change a few things on the fly so the experiment would run successfully,” he says. “He goes through derivations, formulates problems in a very elegant way, and comes up with the right solution for whatever problem comes up in the lab.”

Barghouty says she was anxious when she initially discovered that she would be responsible for fabricating her own lab tools.

“We did whole experiments from A to Z, including cutting sheets of acrylic glass with lasers and using 3-D printers to etch pores in these chips,” she says. “I am now confident that I have the skills necessary for experimental work and that I can apply those skills to other kinds of research.”

Lab-on-a-chip experiments that required hours of preparation might take mere moments to run. One experiment demonstrated the power of capillary forces. After filling their microfluidic chips with a fluid, students flipped them 180 degrees, expecting the fluid to flow down in response to gravity.

“In my cell, the fluid hung, and my jaw dropped,” recalls Al-Dajani. Surface tension made the fluid stick to the many tiny posts inside the chip, fabricated to simulate rock pores. When he added a drop of soap, suddenly the surface tension disappeared and the fluid dropped. “We saw the physics in action, the competition between gravity and capillary forces, which also takes place inside oil reservoirs,” he says.

Several labs featured Juanes’s research pursuits. “I asked students to change the wettability of the microfluidic cell and to look at displacement of multiphase flow under different wetting conditions,” says Juanes. Understanding and altering wettability — a measure of a substance’s attraction to or repulsion of water — is essential to fluid extraction applications.

“There are ways wettability could be modulated to recover more oil and gas in existing reservoirs,” Juanes notes. “There is a big margin for improvement in both fracking and conventional drilling.”

While he hopes to drive home the real-world applications of laboratory work, Juanes intends for the class to accomplish a broader pedagogical goal.

“When you perform an experiment not knowing the outcome, you are forced to make sense of what happens, especially something unexpected,” he says. “Moments like these captivate your attention, really allowing you to dig deep and giving you a better understanding of physics at play.”

The Rock-on-a-Chip class was developed with funding from the S.D. Bechtel, Jr. Foundation. It will be an elective for the energy studies minor starting in 2018.

This article appears in the Autumn 2017 issue of Energy Futures, the magazine of the MIT Energy Initiative.


November 17, 2017

Using polymeric membranes to clean up industrial separations

There are scores of promising technologies under development that can reduce energy consumption or capture carbon in fields including biotech, computer science, nanotechnology, materials science, and more. Not all will prove feasible, but with a little funding and nurturing, many could help solve the planet’s grand challenge.

One such solution is emerging from new approaches to industrial separation processes. At MIT’s Department of Chemical Engineering, Professor Zachary Smith is working on new polymeric membranes that can greatly reduce energy use in chemical separations. He’s also conducting longer-range research into enhancing polymeric membranes with nanoscale metal-organic frameworks (MOFs).

“Not only are we making and analyzing materials from the fundamental principles of transport, thermodynamics, and reactivity, but we’re beginning to take that knowledge to create models and design new materials with separation performance that has never been achieved before,” Smith says. “It’s exciting to go from the lab scale to thinking about the big process, and what will make a difference in society.”

Smith often consults with industry experts who share insights on separations technologies. With the Paris climate agreement of 2015 so far holding together, despite the retreat by the U.S., the chemical and petrochemical industries where Smith is primarily focused are starting to feel pressure to reduce emissions. These industries are also looking to reduce costs: the heating and cooling towers used for separations require considerable energy, and are expensive to build and maintain.

Industrial processes used in the chemical and petrochemical industries alone consume from a quarter to a third of the total energy in the U.S., and separations account for about half of that, says Smith. About half the energy consumption from separations comes from distillation, a process that requires extreme heat, or in the case of cryogenic distillation, even more energy-hungry extreme cooling.
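Taken at face value, those round fractions imply that distillation alone accounts for roughly 6 to 8 percent of total U.S. energy use. A quick back-of-envelope check (using only the article's quoted fractions, not independent data):

```python
# Back-of-envelope: distillation's share of total U.S. energy use,
# multiplying the round fractions quoted above.
industry_share = (0.25, 1 / 3)  # chemical/petrochemical share of U.S. energy
separations_share = 0.5         # separations' share of that industrial energy
distillation_share = 0.5        # distillation's share of separations energy

low, high = (s * separations_share * distillation_share for s in industry_share)
print(f"{low:.1%} to {high:.1%}")  # roughly 6% to 8% of total U.S. energy
```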

“It requires a lot of energy to boil and reboil mixtures, and it’s even more inefficient because it requires phase changes,” says Smith. “Membrane separation technology could avoid those phase changes and use far less energy. Polymers can be made defect-free, and you can cast them into selective, 100-nanometer-thick thin films that could cover a football field.”

Many hurdles stand in the way, however. Membrane separations are used in only a tiny fraction of industrial gas separation processes because the polymeric membranes “are often inefficient, and can’t match the performance of distillation,” says Smith. “The current membranes don’t provide enough throughput — called flux — for high volume applications, and they’re often chemically and physically unstable when using more aggressive feed-streams.”

Many of these performance problems stem from the fact that polymers tend to be amorphous, or entropically disordered. “Polymers are easy to process and form into useful geometries, but the spacing where molecules can move through polymeric membranes changes over time,” says Smith. “It’s difficult to control their porous internal free volume.”

The most demanding separations require size selectivity between molecules differing by only a fraction of an angstrom. To address this challenge, the Smith Lab is attempting to add nanoscale features and chemical functionality to polymers to achieve finer-grained separations. The new materials can “soak up one type of molecule and reject another,” says Smith.

To create polymeric membranes with higher throughput and selectivity, Smith’s team is taking new polymers developed at MIT labs that can be reacted to template ordered structure into traditional disordered, amorphous polymers. As he explains, “We then post-synthetically treat them in a way to template in some nanometer sized pockets that create diffusion pathways.”

While the Smith Lab has found success with many of these techniques, achieving the flux required for high-volume applications is still a challenge. The problem is complicated by the fact that there are more than 200 different types of distillation separation processes used by the chemical and petrochemical industry. Yet this can also be an advantage when trying to introduce a new technology — researchers can look for a niche instead of attempting to change the industry overnight.

“We’re looking for targets where we would have the most impact,” says Smith. “Our membrane technology has the advantage of offering a much smaller footprint, so you can use them in remote locations or on offshore oil platforms.”

Due to their small size and weight, membranes are already being used on airplanes to separate nitrogen from air. The nitrogen is then used to blanket the fuel tank to avoid explosions like the one that brought down TWA Flight 800 in 1996. Membranes have also been used for carbon dioxide removal at remote natural gas wells, and have found a niche in a few larger petrochemical applications such as hydrogen removal.

Smith aims to expand into applications that typically use cryogenic distillation towers, which require immense energy to produce extreme cold. In the petrochemical industry, these include ethylene-ethane, nitrogen-methane, and air separations. Many plastic consumer products are made of ethylene, so reducing energy costs in fabrication could generate huge benefits.

“With cryogenic distillation, you not only must separate molecules that are similar in size, but also in thermodynamic properties,” says Smith. “The distillation columns can be 200 or 300 feet tall with very high flow rates, so the separation trains can cost up to billions of dollars. The energy required to pull vacuum and operate the systems at -120 degrees Celsius is enormous.”

Other potential applications for polymer membranes include “finding other ways to remove CO2 from nitrogen or methane or separating different types of paraffins or chemical feedstocks,” says Smith.

Carbon capture and sequestration is also on the radar. “If there was an economic driver for capturing CO2 today, carbon capture would be the largest application by volume for membranes by a factor of 10,” he says. “We could make a sponge-like material that would soak up CO2 and efficiently separate it so you could pressurize it and store it underground.”

One challenge when using polymeric membranes in gas separations is that the polymers are typically made of hydrocarbons. “If you have the same type of hydrocarbon components in your polymer that you have in the feed-stream you’re trying to separate, the polymer can swell or dissolve or lose its separation performance,” says Smith. “We’re looking to introduce non-hydrocarbon-based components such as fluorine into polymers so that the membrane interacts better with hydrocarbon-based mixtures.”

Smith is also experimenting with adding MOFs to polymers. MOFs, which are formed by linking metal ions or metal clusters with organic linkers, may not only solve the hydrocarbon problem, but the entropic disorder issue as well.

“MOFs let you form one, two, or three-dimensional crystal structures that are permanently porous,” says Smith. “A teaspoon of MOFs has an internal surface area of a football field, so you can think about functionalizing the internal surfaces of MOFs to selectively bind to or reject certain molecules. You can also define the pore shape and geometry to allow one molecule to pass while another is rejected.”

Unlike polymers, MOF structures won’t typically change shape, so the pores are far more persistent over time. In addition, “they don’t degrade like certain polymers through a process known as aging,” says Smith. “The challenge is how to incorporate crystalline materials in a process where you can make them as thin films. One approach we’re taking is to disperse MOFs into polymers as nanoparticles. This would let you exploit the MOFs’ efficiency and productivity while maintaining the processability of the polymer.”

One potential advantage of introducing MOF-enhanced polymeric membranes is process intensification: bundling different separation or catalytic processes in a single step to achieve greater efficiencies. “You can think about combining a type of MOF material that could separate a gas mixture and allow the mixture to undergo a catalytic reaction at the same time,” says Smith. “Some MOFs can also act as cross-linking agents. Instead of using polymers directly cross-linked together, you can have links between MOF particles dispersed in a polymer matrix, which would create more stability for separations.”

Due to their porous nature, MOFs may potentially be used for “capturing hydrogen, methane, or even in some cases CO2,” says Smith. “You can get very high uptake if you create the right type of sponge-like structure. It’s a challenge, however, to find materials that selectively bind one of these components in very high capacity.”

A similar application for MOFs would be storing hydrogen or natural gas for fueling a car. “Using a porous material in your fuel tank would let you hold more hydrogen or methane,” says Smith.

Smith cautions that MOF research could take decades to come to fruition. His lab’s polymer research, however, is much further along, with commercial solutions expected in the next five to 10 years.

“It could be a real game changer,” he says.


November 15, 2017

Gut microbes can protect against high blood pressure

Microbes living in your gut may help protect against the effects of a high-salt diet, according to a new study from MIT.

The MIT team, working with researchers in Germany, found that in both mice and humans, a high-salt diet shrinks the population of a certain type of beneficial bacteria. As a result, pro-inflammatory immune cells called Th-17 cells grow in number. These immune cells have been linked with high blood pressure, although the exact mechanism of how they contribute to hypertension is not yet known.

The researchers further showed that treatment with a probiotic could reverse these effects, but they caution that people should not interpret the findings as license to eat as much salt as they want, as long as they take a probiotic.

“I think certainly there’s some promise in developing probiotics that could be targeted to possibly fixing some of the effects of a high-salt diet, but people shouldn’t think they can eat fast food and then pop a probiotic, and it will be canceled out,” says Eric Alm, director of MIT’s Center for Microbiome Informatics and Therapeutics and a professor of biological engineering and civil and environmental engineering at MIT.

Alm, Dominik Muller of the Max-Delbruck Center for Molecular Medicine in Berlin, and Ralf Linker of Friedrich-Alexander University in Erlangen, Germany, are the senior authors of the study, which is published today in Nature. The paper’s lead author is Nicola Wilck of the Max-Delbruck Center for Molecular Medicine. Authors from MIT include graduate students Mariana Matus and Sean Kearney, and recent PhD recipient Scott Olesen.

Too much salt

Scientists have long known that a high-salt diet can lead to cardiovascular disease. As sodium accumulates in the bloodstream, the body retains more fluid to dilute the sodium, and the heart and blood vessels have to work harder to pump the extra volume of water. This can stiffen the blood vessels, potentially leading to high blood pressure, heart attack, and stroke.

Recent evidence has also implicated the body’s immune system in some of the effects of a high-salt diet. Muller’s lab has previously shown that salt increases the population of Th-17 immune cells, which stimulate inflammation and can lead to hypertension. Muller and his colleagues have also found that, in mice, excess salt can drive the development of an autoimmune disease similar to multiple sclerosis.

Meanwhile, Alm’s lab has studied interactions of human gut microbes with populations of different types of immune cells. He has shown that the balance between pro-inflammatory cells such as Th-17 and anti-inflammatory cells is influenced by the composition of the gut microbiome. The researchers have also found that probiotics can tip this balance in favor of anti-inflammatory cells.

In the new study, the researchers teamed up to determine how a high-salt diet would affect the microbiome, and whether those changes might be linked to the detrimental health effects of such a diet.

For two weeks, the researchers fed mice a diet in which sodium chloride (table salt) made up 4 percent of what the animals were eating, compared to 0.5 percent for mice on a normal diet. They found that this diet led to a drop in the population of a type of bacteria called Lactobacillus murinus. These mice also had greater populations of inflammatory Th-17 cells, and their blood pressure went up.

When mice experiencing high blood pressure were given a probiotic containing Lactobacillus murinus, Th-17 populations went down and hypertension was reduced.

In a study of 12 human subjects, the researchers found that adding 6,000 milligrams of sodium chloride per day to the subjects’ diet, for a duration of two weeks, also changed the composition of bacteria in the gut. Populations of lactobacillus bacteria went down, and the subjects’ blood pressure went up along with their counts of Th-17 cells.

When subjects were given a commercially available probiotic for a week before going on a high-salt diet, their gut lactobacillus levels and blood pressure remained normal.

A smoking gun

It is still unclear exactly how Th-17 cells contribute to the development of high blood pressure and other ill effects of a high-salt diet.

“We’re learning that the immune system exerts a lot of control on the body, above and beyond what we generally think of as immunity,” Alm says. “The mechanisms by which it exerts that control are still being unraveled.”

The researchers hope that their findings, along with future studies, will help to shed more light on the mechanism by which a high-salt diet influences disease. “If you can find that smoking gun and uncover the complete molecular details of what’s going on, you may make it more likely that people adhere to a healthy diet,” Alm says.

Alm and others at the Center for Microbiome Informatics and Therapeutics are also studying how other dietary factors such as fiber, fat, and protein affect the microbiome. They have collected thousands of different strains of bacteria representing the most abundant species in the human gut, and they hope to learn more about the relationships between these bacteria, diet, and diseases such as inflammatory bowel disease.

The research was funded by the German Center for Cardiovascular Research, the MIT Center for Microbiome Informatics and Therapeutics, and the MetaCardis consortium.


November 15, 2017

Department of Electrical Engineering and Computer Science announces six new faculty members

MIT’s Department of Electrical Engineering and Computer Science (EECS) has appointed six new faculty members. One has already joined the department, while five others will arrive in 2018 and early 2019.

Song Han will join EECS as an assistant professor in July 2018. His research focuses on energy-efficient deep learning at the intersection of machine learning and computer architecture. He proposed the Deep Compression algorithm, which can compress neural networks by 17 to 49 times while fully preserving prediction accuracy. He also designed the first hardware accelerator that can perform inference directly on a compressed sparse model, which results in significant speedup and energy saving. His work has been featured by O’Reilly, TechEmergence, and The Next Platform, among others. He led research efforts in model compression and hardware acceleration and won best-paper awards at the International Conference on Learning Representations and the International Symposium on Field-Programmable Gate Arrays. Han received a PhD and a master’s degree from Stanford University, both in electrical engineering.
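One ingredient of compression pipelines like the one described above is magnitude-based weight pruning, which can be shown in miniature. This toy sketch is illustrative only; the threshold, weights, and function name are invented and do not reproduce the actual Deep Compression algorithm, which also retrains, quantizes, and entropy-codes the network.

```python
def prune_by_magnitude(weights, threshold):
    """Zero out weights whose magnitude falls below the threshold --
    the first stage of a prune-then-retrain compression pipeline."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

weights = [0.9, -0.02, 0.4, 0.01, -0.7, 0.03, 0.5, -0.05]
pruned = prune_by_magnitude(weights, threshold=0.1)
kept = sum(1 for w in pruned if w != 0.0)

print(pruned)  # small-magnitude weights zeroed out
print(f"{len(weights) / kept:.0f}x fewer nonzero weights to store")  # 2x here
```

Storing only the surviving nonzero weights (plus their indices, in a sparse format) is what lets a hardware accelerator run inference directly on the compressed model.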

Phillip Isola will join EECS as an assistant professor in July 2018. Currently a fellow at OpenAI, he studies visual intelligence from the perspective of both minds and machines. Isola received both a National Science Foundation (NSF) Graduate Fellowship and an NSF Postdoctoral Fellowship. Isola received a PhD in brain and cognitive sciences from MIT and a bachelor’s degree in computer science from Yale University.

Tim Kraska will join EECS as an associate professor in January 2018. Currently an assistant professor in the Department of Computer Science at Brown University, Kraska focuses on building systems for interactive data exploration, machine learning, and transactional systems for modern hardware, especially the next generation of networks. Kraska received a PhD from ETH Zurich, then spent three years as a postdoc in the AMPLab at the University of California at Berkeley, where he worked on hybrid human-machine database systems and cloud-scale data management systems. Kraska was recently selected as a 2017 Alfred P. Sloan Research Fellow in computer science. He has also received a National Science Foundation CAREER Award, an Air Force Young Investigator award, two Very Large Data Bases conference best-demo awards, and a best-paper award from the IEEE International Conference on Data Engineering.

Farnaz Niroui will join EECS as an assistant professor in January 2019. She is currently a Miller Postdoctoral Fellow at the University of California at Berkeley. She received PhD and master’s degrees in electrical engineering from MIT and a bachelor’s degree in nanotechnology engineering from the University of Waterloo. During her graduate studies, Niroui was a recipient of the Natural Sciences and Engineering Research Council of Canada Scholarship, and was selected for the Rising Stars in EECS program in 2015 at MIT and in 2016 at Carnegie Mellon University. Her research integrates electrical engineering with materials science and chemistry to develop hybrid nanofabrication techniques that enable precise yet scalable processing of nanoscale architectures. These architectures can uniquely control light-matter interactions, electronic transport, and exciton dynamics to engineer new paradigms of active nanoscale devices.

Arvind Satyanarayan will join EECS as an assistant professor in July 2018. He focuses on developing new declarative languages for interactive visualization and leveraging them in new systems for visualization design and data analysis. He is currently a postdoc at Google Brain, working on improving the interpretability of deep learning models through visualization. His research has been recognized with a Google PhD Fellowship and best-paper awards at the IEEE InfoVis and the Association for Computing Machinery Computer-Human Interaction conference. His work has also been deployed on Wikipedia to enable interactive visualizations within articles. Satyanarayan received a PhD in computer science from Stanford University, working with Jeffrey Heer and the University of Washington Interactive Data Lab. He also received a master’s degree from Stanford and a bachelor’s degree from the University of California at San Diego, both in computer science.

Julian Shun joined EECS as an assistant professor in September 2017. His research focuses on both the theory and the practice of parallel algorithms and programming. He is particularly interested in designing algorithms and frameworks for large-scale graph analytics. He is also interested in parallel algorithms for text analytics, concurrent data structures, and methods for deterministic parallelism. Shun has received the ACM Doctoral Dissertation Award, the Carnegie Mellon University (CMU) School of Computer Science Doctoral Dissertation Award, a Facebook Graduate Fellowship, and a best-student-paper award at the Data Compression Conference. Before coming to MIT, he was a postdoctoral Miller Research Fellow at the University of California at Berkeley. Shun received a PhD degree in computer science from CMU and a bachelor’s degree in computer science from UC Berkeley.


November 14, 2017

Portuguese delegation led by Minister Manuel Heitor visits MIT

Following previous meetings at MIT and in Portugal, Manuel Heitor, the Portuguese minister for science, technology, and higher education, recently visited MIT to discuss and plan a new phase of the partnership between Portugal and the Institute. The president of the Portuguese Foundation for Science and Technology (FCT), Paulo Ferrão, and a representative of Portuguese universities, António Cunha, accompanied the minister.

During the two-day visit, the Portuguese delegation met with Richard Lester, associate provost at MIT; Dava Newman, the Apollo Program Professor of Astronautics; David Miller, the Jerome Hunsaker Professor of Aeronautics and Astronautics; and Douglas Hart, professor of mechanical engineering. Last May, the same MIT team had been in Portugal to assess MIT Portugal’s role in the academic and business communities, and to analyze areas where the partnership’s actions can be most critical in the future.

This recent visit by the minister is part of a series of contacts between Portuguese stakeholders and the American universities (namely MIT, Carnegie Mellon University, and the University of Texas at Austin) involved in international science and education partnerships, aiming to reinforce the continuity of the scientific and technological cooperation that has marked the relationship between Portugal and the U.S. in recent decades. These contacts have also served to define a new and more ambitious framework for these impactful international partnerships.

Based on many areas of research and development — including bioengineering, sustainable energy, transportation, engineering, and manufacturing — the partnerships established between Portugal and the U.S. are a success story, as they have allowed the development of collaborative scientific research projects between higher education and industry. They are also associated with a range of innovation and technology initiatives that have resulted in business projects and new technology-based businesses.


November 14, 2017

MIT researchers release evaluation of solar pumps for irrigation and salt mining in India

In 2014, the government of India set an ambitious goal: replacing 26 million groundwater pumps that run on costly diesel with more efficient and environmentally friendly options such as solar pumps.

Groundwater pumps are a critical technology in India, especially for small-scale farmers who depend on them for irrigating crops during dry seasons. Given the lack of a reliable electrical grid connection, and the high price and variable supply of diesel fuel, solar-powered pumps have great potential to meet farmers’ needs while reducing costs and better preserving natural resources.

MIT researchers have just released a new report evaluating a range of solar pump technologies and business models available in India for irrigation and salt mining to better understand which technologies can best fit farmers’ needs.

The report, “Solar Water Pumps: Technical, Systems, and Business Model Approaches to Evaluation,” details the study design and findings of the latest experimental evaluation implemented by the Comprehensive Initiative on Technology Evaluation (CITE), a program supported by the U.S. Agency for International Development (USAID) and led by a multidisciplinary team of faculty, staff, and students at MIT.

Launched at MIT in 2012, CITE is a pioneering program dedicated to developing methods for product evaluation in global development. CITE researchers evaluate products from three perspectives: suitability (how well a product performs its purpose), scalability (how well the product’s supply chain effectively reaches consumers), and sustainability (how well the product is used correctly, consistently, and continuously by users over time).

Designing the study to fill information gaps in the market

Despite the tremendous potential for solar pumps to fill a technological need, a wide range of products is available for selection but little information is available to help consumers determine which works best for their needs.

“There’s a lot of potential for these technologies to make a difference, but there is a large variance in the cost and performance of these pumps, and a lot of confusion in finding the right-sized pump for your application,” says Jennifer Green, CITE sustainability research lead and MIT Sociotechnical Systems Research Center research scientist. “In many areas, the only people to turn to for information are the people selling the pumps, so an independent evaluation of the pumps working with our partners provides a third-party, non-biased information alternative.”

To conduct the evaluation, MIT researchers worked closely with the Technology Exchange Lab in Cambridge, Massachusetts, as well as the Gujarat, India-based Self Employed Women’s Association, a trade union that organizes women in India’s informal economy toward full employment and is currently piloting use of solar pumps in their programs.

Researchers tested the technical performance of small solar pump systems in the workshop at MIT D-Lab, and tested larger solar pump systems in communities in India where they were in active use. This allowed for more rigorous, controlled lab testing as well as a more real-life, grounded look at how systems operated in the environment in which they would be deployed. Researchers also used a complex systems modeling technique to examine how the pumps impacted the social, economic, and environmental conditions around them, and how different government policies might impact these conditions at a macro level.

“That was very important because although these are ‘clean pumps’ from the perspective of using solar, there is a concern that there is not a cost incentive to pump less and use less water,” Green says. “When people are using diesel, they pay by the liter, so they use as little as possible. With solar, once people make the capital investment to purchase the equipment, they’re incentivized to pump as much as possible to get a good return on investment and have potential to do serious harm to the groundwater supply.”

Identifying the most appropriate, accessible technologies

In the lab, MIT researchers procured and tested five pumps — the Falcon FCM 115, the Harbor Freight, the Kirloskar SKDS116++, the Rotomag MBP30, and the Shakti SMP1200-20-30. Lab tests on flow rate, priming ease, and overall efficiency demonstrated that two of the lower-cost pumps — the Falcon and the Rotomag — performed the best, and the most expensive pump — the Shakti — performed poorly. MIT researchers also studied pump usage, installing remote sensors in panels and pumps being used in Gujarat, India, to confirm that the pumps were being used consistently over the course of a day and were operating properly.

Because solar pumps are often too expensive for small-scale farmers, CITE also conducted a business case analysis to understand what financing mechanisms might make solar pump technology more affordable for these critical end users. For example, researchers looked at government policies such as subsidizing the cost of solar equipment and paying for excess electricity production as a combination that might help farmers make this transition.
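The economics behind such a business case can be illustrated with a simple payback calculation. This is a minimal sketch of the reasoning, not the CITE analysis itself; the system price, subsidy level, and diesel costs below are hypothetical illustrative numbers.

```python
# Illustrative payback comparison between a diesel-fueled pump and a
# subsidized solar pump. All figures are hypothetical assumptions,
# not values from the CITE report.

def payback_years(solar_capital: float, subsidy_fraction: float,
                  annual_diesel_cost: float,
                  annual_solar_om: float = 0.0) -> float:
    """Years for fuel savings to repay the farmer's share of the
    solar system's capital cost."""
    farmer_share = solar_capital * (1.0 - subsidy_fraction)
    annual_savings = annual_diesel_cost - annual_solar_om
    return farmer_share / annual_savings

# e.g., a 300,000-rupee solar system with a 60 percent government
# subsidy, replacing 40,000 rupees per year of diesel fuel:
years = payback_years(300_000, 0.60, 40_000)
print(round(years, 1))  # → 3.0
```

The calculation shows why subsidies matter: without the subsidy, the same system would take 7.5 years to pay back, well beyond what many small-scale farmers can finance up front.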

“The cost of solar pumps is still prohibitively high for individual farmers to buy them straight out,” Green says. “It will be critical to ensure financing mechanisms are accessible to these users. Coupling solar pump systems with well-thought-out government policies and other technologies for minimizing water use is the best approach to optimizing the food-water-energy nexus.”

In addition to the evaluation, CITE created a pump sizing tool that can be used to help farmers understand what size pump they need given their particular field sizes, water requirements, and other factors.
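The core of any such sizing calculation is the hydraulic power equation. The sketch below shows the kind of arithmetic involved; the function name, parameter values, and efficiency figure are illustrative assumptions, not taken from the CITE tool itself.

```python
# Illustrative pump-sizing arithmetic: how much pump power is needed
# to lift a given daily water volume from a well of a given depth.
# The efficiency and example figures are hypothetical assumptions.

def required_pump_power_w(daily_water_m3: float, head_m: float,
                          pump_hours: float = 6.0,
                          pump_efficiency: float = 0.4) -> float:
    """Pump power (watts) from P = rho * g * Q * H / eta,
    with rho = 1000 kg/m^3, g = 9.81 m/s^2, and Q the flow
    rate in m^3/s spread over the daily pumping hours."""
    RHO, G = 1000.0, 9.81
    flow_m3_per_s = daily_water_m3 / (pump_hours * 3600.0)
    return RHO * G * flow_m3_per_s * head_m / pump_efficiency

# e.g., lifting 100 cubic meters per day from a 10-meter-deep well
# over 6 hours of sunlight calls for roughly 1.1 kW of pump power:
power = required_pump_power_w(daily_water_m3=100.0, head_m=10.0)
print(round(power))
```

A tool built on this kind of calculation lets a farmer translate field size and crop water needs into a pump specification before negotiating with a vendor.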

“That gives them more knowledge and power when they go to talk to the water pump manufacturers,” Green says. “If they know what they need, they’re less likely to be talked into buying something too big for their needs. We don’t want them to overpay.”

“CITE’s evaluation work has been a great value-add [for the Self Employed Women’s Association] because we can better understand which pumps are most efficient,” says Reema Nanavaty, director of the Self Employed Women’s Association. “We’re not a technical organization and we did not want to set back the livelihoods of these poor salt pan workers by bringing in the wrong kind of pump or an inefficient pump.”

CITE’s research is funded by the USAID U.S. Global Development Lab. CITE is led by principal investigator Bishwapriya Sanyal of MIT’s Department of Urban Studies and Planning, and supported by MIT faculty and staff from D-Lab, the Priscilla King Gray Public Service Center, Sociotechnical Systems Research Center, the Center for Transportation and Logistics, School of Engineering, and Sloan School of Management.

In addition to Green, co-authors on this report include CITE research assistants Amit Gandhi, Jonars B. Spielberg, and Christina Sung; Technology Exchange Lab’s Brennan Lake and Éadaoin Ilten; as well as Vandana Pandya and Sara Lynn Pesek.


November 13, 2017 | More

CRISPR-carrying nanoparticles edit the genome

CRISPR-carrying nanoparticles edit the genome

In a new study, MIT researchers have developed nanoparticles that can deliver the CRISPR genome-editing system and specifically modify genes in mice. The team used nanoparticles to carry the CRISPR components, eliminating the need to use viruses for delivery.

Using the new delivery technique, the researchers were able to cut out certain genes in about 80 percent of liver cells, the best success rate ever achieved with CRISPR in adult animals.

“What’s really exciting here is that we’ve shown you can make a nanoparticle that can be used to permanently and specifically edit the DNA in the liver of an adult animal,” says Daniel Anderson, an associate professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES).

One of the genes targeted in this study, known as Pcsk9, regulates cholesterol levels. Mutations in the human version of the gene are associated with a rare disorder called dominant familial hypercholesterolemia, and the FDA recently approved two antibody drugs that inhibit Pcsk9. However, these antibodies must be taken regularly, for the rest of the patient’s life, to provide therapy. The new nanoparticles permanently edit the gene following a single treatment, and the technique also offers promise for treating other liver disorders, according to the MIT team.

Anderson is the senior author of the study, which appears in the Nov. 13 issue of Nature Biotechnology. The paper’s lead author is Koch Institute research scientist Hao Yin. Other authors include David H. Koch Institute Professor Robert Langer of MIT, professors Victor Koteliansky and Timofei Zatsepin of the Skolkovo Institute of Science and Technology, and Professor Wen Xue of the University of Massachusetts Medical School.

Targeting disease

Many scientists are trying to develop safe and efficient ways to deliver the components needed for CRISPR, which consists of a DNA-cutting enzyme called Cas9 and a short RNA that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut.

In most cases, researchers rely on viruses to carry the gene for Cas9, as well as the RNA guide strand. In 2014, Anderson, Yin, and their colleagues developed a nonviral delivery system in the first-ever demonstration of curing a disease (the liver disorder tyrosinemia) with CRISPR in an adult animal. However, this type of delivery requires a high-pressure injection, a method that can also cause some damage to the liver.

Later, the researchers showed they could deliver the components without the high-pressure injection by packaging messenger RNA (mRNA) encoding Cas9 into a nanoparticle instead of a virus. Using this approach, in which the guide RNA was still delivered by a virus, the researchers were able to edit the target gene in about 6 percent of hepatocytes, which is enough to treat tyrosinemia.

While that delivery technique holds promise, in some situations it would be better to have a completely nonviral delivery system, Anderson says. One consideration is that once a particular virus is used, the patient will develop antibodies to it, so it couldn’t be used again. Also, some patients have pre-existing antibodies to the viruses being tested as CRISPR delivery vehicles.

In the new Nature Biotechnology paper, the researchers came up with a system that delivers both Cas9 and the RNA guide using nanoparticles, with no need for viruses. To deliver the guide RNAs, they first had to chemically modify the RNA to protect it from enzymes in the body that would normally break it down before it could reach its destination.

The researchers analyzed the structure of the complex formed by Cas9 and the RNA guide, or sgRNA, to figure out which sections of the guide RNA strand could be chemically modified without interfering with the binding of the two molecules. Based on this analysis, they created and tested many possible combinations of modifications.

“We used the structure of the Cas9 and sgRNA complex as a guide and did tests to figure out that we can modify as much as 70 percent of the guide RNA,” Yin says. “We could heavily modify it and not affect the binding of sgRNA and Cas9, and this enhanced modification really enhances activity.”

Reprogramming the liver

The researchers packaged these modified RNA guides (which they call enhanced sgRNA) into lipid nanoparticles, which they had previously used to deliver other types of RNA to the liver, and injected them into mice along with nanoparticles containing mRNA that encodes Cas9.

They experimented with knocking out a few different genes expressed by hepatocytes, but focused most of their attention on the cholesterol-regulating Pcsk9 gene. The researchers were able to eliminate this gene in more than 80 percent of liver cells, and the Pcsk9 protein was undetectable in these mice. They also found a 35 percent drop in the total cholesterol levels of the treated mice.

The researchers are now working on identifying other liver diseases that might benefit from this approach, and advancing these approaches toward use in patients.

“I think having a fully synthetic nanoparticle that can specifically turn genes off could be a powerful tool not just for Pcsk9 but for other diseases as well,” Anderson says. “The liver is a really important organ and also is a source of disease for many people. If you can reprogram the DNA of your liver while you’re still using it, we think there are many diseases that could be addressed.”

“We are very excited to see this new application of nanotechnology open new avenues for gene editing,” Langer adds.

The research was funded by the National Institutes of Health (NIH), the Russian Scientific Fund, the Skoltech Center, and the Koch Institute Support (core) Grant from the National Cancer Institute.


November 13, 2017 | More