News and Research
Don Rosenfield

Donald Rosenfield, a longtime leader of MIT LGO, dies at 70

With deep sadness, the LGO community mourns its founding program director, Don Rosenfield. He leaves a legacy of over 1,200 LGO alumni and countless colleagues, students, and friends who were touched and inspired by him.


Department of Mechanical Engineering announces new leadership team

Pierre Lermusiaux, LGO thesis advisor and professor of mechanical engineering and ocean science and engineering, will join the MechE department’s leadership team, serving as associate department head for operations.

Evelyn Wang, the Gail E. Kendall Professor, who began her role as head of MIT’s Department of Mechanical Engineering (MechE) on July 1, has announced that Pierre Lermusiaux, professor of mechanical engineering and ocean science and engineering, and Rohit Karnik, associate professor of mechanical engineering, will join her on the department’s leadership team. Lermusiaux will serve as associate department head for operations and Karnik will be the associate department head for education.

“I am delighted to welcome Pierre and Rohit to the department’s leadership team,” says Wang. “They have both made substantial contributions to the department and are well-suited to ensure that it continues to thrive.”

Pierre Lermusiaux, associate department head for operations

Pierre Lermusiaux has been instrumental in developing MechE’s strategic plan over the past several years. In 2015, with Evelyn Wang, he was co-chair of the mechanical engineering strategic planning committee. They were responsible for interviewing individuals across the MechE community, determining priority “grand challenge” research areas, investigating new educational models, and developing mechanisms to enhance community and departmental operations. The resulting strategic plan will inform the future of MechE for years to come.

“Pierre is an asset to our department,” adds Wang. “I look forward to working with him to lead our department toward new research frontiers and cutting-edge discoveries.”

Lermusiaux joined MIT as associate professor in 2007 after serving as a research associate at Harvard University, where he also received his PhD. He is an internationally recognized thought leader at the intersection of ocean modeling and observing. He has developed new uncertainty quantification and data assimilation methods. His research has improved real-time data-driven ocean modeling and has had important implications for marine industries, fisheries, energy, security, and our understanding of human impact on the ocean’s health.

Lermusiaux’s talent as an educator has been recognized with the Ruth and Joel Spira Award for Teaching Excellence. He has been the chair of the graduate admissions committee since 2014. He has served on many MechE and institute committees and is also active in MIT-Woods Hole Oceanographic Institution Joint Program committees.

“Working for the department, from our graduate admission to the strategic planning with Evelyn, has been a pleasure,” says Lermusiaux. “I am thrilled to be continuing such contributions as associate department head for research and operations. I look forward to developing and implementing strategies and initiatives that help our department grow and thrive.”

Lermusiaux succeeds Evelyn Wang, who previously served as associate department head for operations under the former department head Gang Chen.

Rohit Karnik, associate department head for education

Over the past two years, Rohit Karnik has taken an active role in shaping the educational experience at MechE. As the undergraduate officer, he has overseen the operations of the department’s undergraduate office and chaired the undergraduate programs committee. This position has afforded Karnik the opportunity to evaluate and refine the department’s course offerings each year and work closely with undergraduate students to provide the best education.

“Rohit is a model citizen and has provided dedicated service to our department,” says Wang. “I look forward to working with him to create new education initiatives and continue to provide a world-class education for our students.”

Karnik received his PhD from the University of California at Berkeley before joining MIT as a postdoc in 2006, and he subsequently joined the faculty as an assistant professor of mechanical engineering. He is recognized as a leader in the field of micro- and nanofluidics and has made a number of seminal contributions to the fundamental understanding of nanoscale fluid transport. He has been recognized with a National Science Foundation CAREER Award and a Department of Energy Early Career Award.

Karnik’s dedication to his students has been recognized by the Keenan Award for Innovation in Education and the Ruth and Joel Spira Award for Teaching Excellence. He has also served on the graduate admissions committee and various faculty search committees.

“It is a tremendous honor and responsibility to take this position in the top mechanical engineering department in the world,” says Karnik. “I will strive to ensure that we maintain excellence in mechanical engineering education and adapt to the changing times to offer strong and comprehensive degree programs and the best possible experience for our students.”

Karnik succeeds Professor John Brisson, who previously served as associate department head for education.

August 3, 2018

Boeing will be Kendall Square Initiative’s first major tenant

Boeing, the world’s largest aerospace company and an LGO partner company, has announced that it will be part of MIT’s Kendall Square Initiative. The company has agreed to lease approximately 100,000 square feet in MIT’s building to be developed at 314 Main St., in the heart of Kendall Square in Cambridge.

MIT’s Kendall Square Initiative includes six sites slated for housing, retail, research and development, office, academic, and open space uses. The building at 314 Main St. is located between the MBTA Red Line station and the Kendall Hotel. Boeing is expected to occupy its new space by the end of 2020.

“Our focus on advancing the Kendall Square innovation ecosystem includes a deep and historic understanding of what we call the ‘power of proximity’ to address pressing global challenges,” MIT Executive Vice President and Treasurer Israel Ruiz says. “MIT’s president, L. Rafael Reif, has made clear his objective of reducing the time it takes to move ideas from the classroom and lab out to the market. The power of proximity is a dynamic that propels this concept forward: Just as pharmaceutical, biotech, and tech sector scientists in Kendall Square work closely with their nearby MIT colleagues, Boeing and MIT researchers will be able to strengthen their collaborative ties to further chart the course of the aerospace industry.”

Boeing was founded in 1916 — the same year that MIT moved to Cambridge — and marked its recent centennial in a spirit similar to the Institute’s 100-year celebration in 2016, with special events, community activities, and commemorations. That period also represents a century-long research relationship between Boeing and MIT that has helped to advance the global aerospace industry.

Some of Boeing’s founding leaders, as well as engineers, executives, Boeing Technical Fellows, and student interns, are MIT alumni.

Earlier this year, Boeing announced that it will serve as the lead donor for MIT’s $18 million project to replace its 80-year-old Wright Brothers Wind Tunnel. This pledge will help to create, at MIT, the world’s most advanced academic wind tunnel.

In 2017, Boeing acquired MIT spinout Aurora Flight Sciences, which develops advanced aerospace platforms and autonomous systems. Its primary research and development center is located at 90 Broadway in Kendall Square. In the new facility at 314 Main St., Boeing will establish the Aerospace and Autonomy Center, which will focus on advancing enabling technologies for autonomous aircraft.

“Boeing is leading the development of new autonomous vehicles and future transportation systems that will bring flight closer to home,” says Greg Hyslop, Boeing chief technology officer. “By investing in this new research facility, we are creating a hub where our engineers can collaborate with other Boeing engineers and research partners around the world and leverage the Cambridge innovation ecosystem.”

“It’s fitting that Boeing will join the Kendall/MIT innovation family,” MIT Provost Martin Schmidt says. “Our research interests have been intertwined for over 100 years, and we’ve worked together to advance world-changing aerospace technologies and systems. MIT’s Department of Aeronautics and Astronautics is the oldest program of its kind in the United States, and excels at its mission of developing new air transportation concepts, autonomous systems, and small satellites through an intensive focus on cutting-edge education and research. Boeing’s presence will create an unprecedented opportunity for new synergies in this industry.”

The current appearance of the 314 Main St. site belies its future active presence in Kendall Square. The building’s foundation and basement level — which will house loading infrastructure, storage and mechanical space, and bicycle parking — are currently under construction. Adjacent to those functions is an underground parking garage, a network of newly placed utilities, and water and sewer infrastructure. Vertical construction of the building should begin in September.

August 3, 2018

Reliable energy for all

Prosper Nyovanie (LGO ’19) discusses his passion for using engineering and technology to solve global problems.


During high school, Prosper Nyovanie had to alter his daily and nightly schedules to accommodate the frequent power outages that swept cities across Zimbabwe.

“[Power] would go almost every day — it was almost predictable,” Nyovanie recalls. “I’d come back from school at 5 p.m., have dinner, then just go to sleep because the electricity wouldn’t be there. And then I’d wake up at 2 a.m. and start studying … because by then you’d usually have electricity.”

At the time, Nyovanie knew he wanted to study engineering, and upon coming to MIT as an undergraduate, he majored in mechanical engineering. He discovered a new area of interest, however, when he took 15.031J (Energy Decisions, Markets, and Policies), which introduced him to questions of how energy is produced, distributed, and consumed. He went on to minor in energy studies.

Now as a graduate student and fellow in MIT’s Leaders for Global Operations (LGO) program, Nyovanie is on a mission to learn the management skills and engineering knowledge he needs to power off-grid communities around the world through his startup, Voya Sol. The company develops solar electric systems that can be scaled to users’ needs.

Determination and quick thinking

Nyovanie was originally drawn to MIT for its learning-by-doing engineering focus. “I thought engineering was a great way to take all these cool scientific discoveries and technologies and apply them to global problems,” he says. “One of the things that excited me a lot about MIT was the hands-on approach to solving problems. I was super excited about UROP [the Undergraduate Research Opportunities Program]. That program made MIT stick out from all the other universities.”

As a mechanical engineering major, Nyovanie took part in a UROP for 2.5 years in the Laboratory for Manufacturing and Productivity with Professor Martin Culpepper. But his experience in 15.031J made him realize his interests were broader than just research, and included the intersection of technology and business.

“One big thing that I liked about the class was that it introduced this other complexity that I hadn’t paid that much attention to before, because when you’re in the engineering side, you’re really focused on making technology, using science to come up with awesome inventions,” Nyovanie says. “But there are considerations that you need to think about when you’re implementing [such inventions]. You need to think about markets, how policies are structured.”

The class inspired Nyovanie to become a fellow in the LGO program, where he will earn an MBA from the MIT Sloan School of Management and a master’s in mechanical engineering. He is also a fellow of the Legatum Center for Development and Entrepreneurship at MIT.

When Nyovanie prepared for his fellowship interview while at home in Zimbabwe, he faced another electricity interruption: A transformer blew and would take time to repair, leaving him without power before his interview.

“I had to act quickly,” Nyovanie says. “I went and bought a petrol generator just for the interview. … The generator provided power for my laptop and for the Wi-Fi.” He recalls being surrounded by multiple solar lanterns that provided enough light for the video interview.

While Nyovanie’s determination in high school and quick thinking before graduate school enabled him to work around power supply issues, he realizes that luxury doesn’t extend to all those facing similar situations.

“I had enough money to actually go buy a petrol generator. Some of these communities in off-grid areas don’t have the resources they need to be able to get power,” Nyovanie says.

Scaling perspectives

Before co-founding Voya Sol with Stanford University graduate student Caroline Jo, Nyovanie worked at SunEdison, a renewable energy company, for three years. During most of that time, Nyovanie worked as a process engineer and analyst through the Renewable Energy Leadership Development Rotational Program. As part of the program, Nyovanie rotated between different roles at the company around the world.

During his last rotation, Nyovanie worked as a project engineer and oversaw the development of rural minigrids in Tanzania. “That’s where I got firsthand exposure to working with people who don’t have access to electricity and working to develop a solution for them,” Nyovanie says. When SunEdison went bankrupt, Nyovanie wanted to stay involved in developing electricity solutions for off-grid communities. So, he stayed in talks with rural electricity providers in Zimbabwe, Kenya, and Nigeria before eventually founding Voya Sol with Jo.

Voya Sol develops scalable solar home systems that differ from existing solar home system technologies. “A lot of them are fixed,” Nyovanie says. “So if you buy one, and need an additional light, then you have to go buy another whole new system. … The scalable system would take away some of that risk and allow the customer to build their own system so that they buy a system that fits their budget.” By giving users the opportunity to scale their wattage up or down to meet their energy needs, Nyovanie hopes that the solar electric systems will help power off-grid communities across the world.

Nyovanie and his co-founder are currently both full-time graduate students in dual degree programs. But to them, graduate school didn’t necessarily mean an interruption to their company’s operations; it meant new opportunities for learning, mentorship, and team building. Over this past spring break, Nyovanie and Jo traveled to Zimbabwe to perform prototype testing for their solar electric system, and they plan to conduct a second trip soon.

“We’re looking into ways we can aggregate people’s energy demands,” Nyovanie says. “Interconnected systems can bring in additional savings for customers.” In the future, Nyovanie hopes to expand the distribution of scalable solar electric systems through Voya Sol to off-grid communities worldwide. Voya Sol’s ultimate vision is to enable off-grid communities to build their own electricity grids, by allowing individual customers to not only scale their own systems, but also interconnect their systems with their neighbors’. “In other words, Voya Sol’s goal is to enable a completely build-your-own, bottom-up electricity grid,” Nyovanie says.

Supportive communities

During his time as a graduate student at MIT, Nyovanie has found friendship and support among his fellow students.

“The best thing about being at MIT is that people are working on all these cool, different things that they’re passionate about,” Nyovanie says. “I think there’s a lot of clarity that you can get just by going outside of your circle and talking to people.”

Back home in Zimbabwe, Nyovanie’s family cheers him on.

“Even though [my parents] never went to college, they were very supportive and encouraged me to push myself, to do better, and to do well in school, and to apply to the best programs that I could find,” Nyovanie says.

June 12, 2018

LGO Best Thesis 2018 for Predictive Modeling Project at Massachusetts General Hospital

After the official MIT commencement ceremonies, Thomas Roemer, LGO’s executive director, announced the best thesis winner at LGO’s annual post-graduation celebration. This year’s winner was Jonathan Zanger, who developed a predictive model using machine learning at Massachusetts General Hospital. “The thesis describes breakthrough work at MGH that leverages machine learning and deep clinical knowledge to develop a decision support tool to predict discharges from the hospital in the next 24-48 hours and enable a fundamentally new and more effective discharge process,” said MIT Sloan School of Management Professor Retsef Levi, one of Zanger’s thesis advisors and the LGO management faculty co-director.

Applying MIT knowledge in the real world

Best Thesis 2018
Jonathan Zanger won the 2018 LGO best thesis award for his work using machine learning to develop a predictive model for better patient care at MGH

Zanger, who received his MBA and an SM in Electrical Engineering and Computer Science, conducted his six-month LGO internship project at MGH, where he sought to enable a more proactive process for managing the hospital’s bed capacity by identifying which surgical inpatients are likely to be discharged in the next 24 to 48 hours. To do this, Zanger grouped patients by surgery type and worked to define and formalize milestones on the pathway to post-operative recovery, along with barriers that may postpone a patient’s discharge. Finally, he built a deep learning model that uses over 900 features and is trained on 3,000 types of surgeries and 20,000 surgical discharges. LGO thesis advisor Retsef Levi stated that “in my view, this thesis work represents a league of its own in terms of technical depth, creativity and potential impact.” The model correctly predicted discharge within 48 hours for 97% of patients, helping the hospital limit overcrowding and operational disruptions and anticipate capacity crises.

A group of faculty, alumni, and staff review the theses each year to determine the winner. Thomas Sanderson (LGO ’14), an LGO alumnus and thesis reviewer, stated that Zanger’s thesis showed “tremendous extensibility and smart solution architecture decisions to make future work easy. Obvious and strong overlap of engineering, business, and industry. This is potentially revolutionary work; this research advances the current state of the art well beyond anything currently available for large hospital bed management with obvious and immediate impact on healthcare costs and patient outcomes. The theory alone is hugely noteworthy but the fact that the work was also piloted during the thesis period is even more impressive. LGO has done a lot of great work at MGH but this is potentially the widest reaching and most important.”

Zanger, who earned his undergraduate degree in Physics, Computer Science and Mathematics from the Hebrew University of Jerusalem, will return to Israel after graduation and resume service as an Israeli Defense Forces officer.

June 11, 2018

A graphene roll-out

LGO thesis advisor and MIT Mechanical Engineering Professor John Hart led a team that developed a continuous manufacturing process for producing long strips of high-quality graphene.

The team’s results are the first demonstration of an industrial, scalable method for manufacturing high-quality graphene that is tailored for use in membranes that filter a variety of molecules, including salts, larger ions, proteins, or nanoparticles. Such membranes should be useful for desalination, biological separation, and other applications.

“For several years, researchers have thought of graphene as a potential route to ultrathin membranes,” says John Hart, associate professor of mechanical engineering and director of the Laboratory for Manufacturing and Productivity at MIT. “We believe this is the first study that has tailored the manufacturing of graphene toward membrane applications, which require the graphene to be seamless, cover the substrate fully, and be of high quality.”

Hart is the senior author on the paper, which appears online in the journal Applied Materials and Interfaces. The study includes first author Piran Kidambi, a former MIT postdoc who is now an assistant professor at Vanderbilt University; MIT graduate students Dhanushkodi Mariappan and Nicholas Dee; Sui Zhang of the National University of Singapore; Andrey Vyatskikh, a former student at the Skolkovo Institute of Science and Technology who is now at Caltech; and Rohit Karnik, an associate professor of mechanical engineering at MIT.

Growing graphene

For many researchers, graphene is ideal for use in filtration membranes. A single sheet of graphene resembles atomically thin chicken wire and is composed of carbon atoms joined in a pattern that makes the material extremely tough and impervious to even the smallest atom, helium.

Researchers, including Karnik’s group, have developed techniques to fabricate graphene membranes and precisely riddle them with tiny holes, or nanopores, the size of which can be tailored to filter out specific molecules. For the most part, scientists synthesize graphene through a process called chemical vapor deposition, in which they first heat a sample of copper foil and then deposit onto it a combination of carbon and other gases.

Graphene-based membranes have mostly been made in small batches in the laboratory, where researchers can carefully control the material’s growth conditions. However, Hart and his colleagues believe that if graphene membranes are ever to be used commercially they will have to be produced in large quantities, at high rates, and with reliable performance.

“We know that for industrialization, it would need to be a continuous process,” Hart says. “You would never be able to make enough by making just pieces. And membranes that are used commercially need to be fairly big — some so big that you would have to send a poster-wide sheet of foil into a furnace to make a membrane.”

A factory roll-out

The researchers set out to build an end-to-end, start-to-finish manufacturing process to make membrane-quality graphene.

The team’s setup combines a roll-to-roll approach — a common industrial approach for continuous processing of thin foils — with the common graphene-fabrication technique of chemical vapor deposition, to manufacture high-quality graphene in large quantities and at a high rate. The system consists of two spools, connected by a conveyor belt that runs through a small furnace. The first spool unfurls a long strip of copper foil, less than 1 centimeter wide. When it enters the furnace, the foil is fed through first one tube and then another, in a “split-zone” design.

While the foil rolls through the first tube, it heats up to an ideal temperature. It is then ready to roll through the second tube, where the scientists pump in a specified ratio of methane and hydrogen gas, which are deposited onto the heated foil to produce graphene.

“Graphene starts forming in little islands, and then those islands grow together to form a continuous sheet,” Hart says. “By the time it’s out of the oven, the graphene should be fully covering the foil in one layer, kind of like a continuous bed of pizza.”

As the graphene exits the furnace, it’s rolled onto the second spool. The researchers found that they were able to feed the foil continuously through the system, producing high-quality graphene at a rate of 5 centimeters per minute. Their longest run lasted almost four hours, during which they produced about 10 meters of continuous graphene.

“If this were in a factory, it would be running 24-7,” Hart says. “You would have big spools of foil feeding through, like a printing press.”

Flexible design

Once the researchers produced graphene using their roll-to-roll method, they unwound the foil from the second spool and cut small samples out. They cast the samples with a polymer mesh, or support, using a method developed by scientists at Harvard University, and subsequently etched away the underlying copper.

“If you don’t support graphene adequately, it will just curl up on itself,” Kidambi says. “So you etch copper out from underneath and have graphene directly supported by a porous polymer — which is basically a membrane.”

The polymer covering contains holes that are larger than graphene’s pores, which Hart says act as microscopic “drumheads,” keeping the graphene sturdy and its tiny pores open.

The researchers performed diffusion tests with the graphene membranes, flowing a solution of water, salts, and other molecules across each membrane. They found that overall, the membranes were able to withstand the flow while filtering out molecules. Their performance was comparable to graphene membranes made using conventional, small-batch approaches.

The team also ran the process at different speeds, with different ratios of methane and hydrogen gas, and characterized the quality of the resulting graphene after each run. They drew up plots to show the relationship between graphene’s quality and the speed and gas ratios of the manufacturing process. Kidambi says that if other designers can build similar setups, they can use the team’s plots to identify the settings they would need to produce a certain quality of graphene.

“The system gives you a great degree of flexibility in terms of what you’d like to tune graphene for, all the way from electronic to membrane applications,” Kidambi says.

Looking forward, Hart says he would like to integrate polymer casting and other steps that are currently performed by hand into the roll-to-roll system.

“In the end-to-end process, we would need to integrate more operations into the manufacturing line,” Hart says. “For now, we’ve demonstrated that this process can be scaled up, and we hope this increases confidence and interest in graphene-based membrane technologies, and provides a pathway to commercialization.”

May 18, 2018

This MIT program will purchase carbon offsets for student travel

Led by Yakov Berenshteyn (LGO ’19), a new Jetset Offset program will reduce the environmental impact of student travel by purchasing carbon offsets.

In one week about 100 MIT Sloan students will fly around the world to study regional economies, immerse themselves in different cultures, and produce more than 300 metric tons of carbon dioxide.

Because of the air travel required for the study tours, those students are producing the same emissions in two weeks as 1,600 average American car commuters would in that same timeframe, said Yakov Berenshteyn, LGO ’19.
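A rough back-of-the-envelope check shows the comparison is plausible. The inputs below are the article’s own figures; the 26-period annualization is an assumption added for context:

```python
# Sanity check on the commuter comparison; inputs are the article's figures,
# and the annualization factor is an added assumption.
STUDY_TOUR_EMISSIONS_T = 300   # metric tons of CO2 from the study-tour flights
N_CAR_COMMUTERS = 1600         # commuters said to emit the same over two weeks

tons_per_commuter_2wk = STUDY_TOUR_EMISSIONS_T / N_CAR_COMMUTERS
tons_per_commuter_year = tons_per_commuter_2wk * 26  # ~26 two-week periods/year

print(tons_per_commuter_2wk)   # 0.1875
print(tons_per_commuter_year)  # 4.875
```

Roughly 4.9 metric tons per commuter per year is in line with commonly cited estimates for a typical passenger vehicle, so the two-week equivalence holds up.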

While Berenshteyn doesn’t want to do away with student travel at MIT Sloan, he is hoping to lessen the impact on the environment, with the help of his Jetset Offset program.

The pilot involves purchasing carbon offsets for the three MBA and one Master of Finance study tours for spring break 2018.

Carbon offsets are vetted projects that help capture or avoid carbon emissions. These projects can include reforestation and building renewable energy sources. The reductions might not have an immediate impact on emissions, Berenshteyn said, but they are “still the primary best practice for us to use.”

“This is raising awareness of, and starting to account for, our environmental impacts from student travel,” Berenshteyn said. “You don’t get much choice in the efficiency of the airplane that you board.”

The idea for the offset came in October, when Berenshteyn was helping to plan the January Leaders for Global Operations Domestic Plant Trek. He realized that over the two weeks of the trip, the roughly 50 students and staff would log a total of 400,000 air miles.

Berenshteyn spent months researching ways to counterbalance the emissions from burned jet fuel, with input from MIT Sloan professor John Sterman. He said he also considered other options, such as funding more local projects like solar panel installation, but those were too small in scale to make much of a difference.

Universities around the world are applying carbon offsets and carbon-neutral practices in some form to their operations. Berenshteyn said Duke University has something similar to the air travel and carbon offsets that he proposes for MIT Sloan.

The Leaders for Global Operations program purchased 67 metric tons of offsets through Gold Standard for the January student trek, and those offsets are going to reforestation efforts in Panama.

In the case of the four upcoming study trips, MIT Sloan’s student life office is picking up the tab.

“My colleague Paul Buckley (associate director of student life) had an idea for something like this close to a decade ago, when he first arrived in student life, and noted the extent to which our students travel during their time at Sloan,” said Katie Ferrari, associate director of student life. “So this was an especially meaningful partnership for us. Yakov’s idea is exactly the kind of student initiative we love to support. He is practicing principled, innovative leadership with an eye toward improving the world.”

Ferrari said the support for the pilot this semester is a stake in the ground for incorporating carbon offset purchases into future student-organized travel — which is what Berenshteyn said was his hope for launching the pilot.

“It should be at Sloan, if a student is planning a trip, they have their checklist of insurance, emergency numbers, and carbon offsets,” he said.

March 21, 2018

A machine-learning approach to inventory-constrained dynamic pricing

LGO thesis advisor and MIT Civil and Environmental Engineering Professor David Simchi-Levi led a team on a new study showing how a model-based algorithm known as Thompson sampling can be used for revenue management.

In 1933, William R. Thompson published an article on a Bayesian model-based algorithm that would ultimately become known as Thompson sampling. This heuristic was largely ignored by the academic community until recently, when it became the subject of intense study, thanks in part to internet companies that successfully implemented it for online ad display.

Thompson sampling chooses actions to address the exploration-exploitation trade-off in the multiarmed bandit problem: it maximizes immediate performance while continually acquiring new information to improve future performance.
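In the classical Bernoulli-bandit setting the heuristic is simple to state. The sketch below is illustrative only (it is not the paper’s algorithm): each arm gets a Beta posterior over its success probability, we sample from every posterior, and we play the arm with the highest draw, so arms are explored in proportion to how likely they are to be best.

```python
import numpy as np

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Bernoulli Thompson sampling: keep a Beta posterior per arm,
    sample one value from each posterior, and play the highest draw."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_probs)
    successes = np.ones(n_arms)   # Beta(1, 1) uniform priors
    failures = np.ones(n_arms)
    pulls = np.zeros(n_arms, dtype=int)
    for _ in range(n_rounds):
        theta = rng.beta(successes, failures)  # posterior sampling = exploration
        arm = int(np.argmax(theta))            # play the most promising draw
        reward = rng.random() < true_probs[arm]
        successes[arm] += reward
        failures[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# Three hypothetical ad variants with unknown click rates 0.3, 0.5, 0.7:
# the algorithm concentrates its pulls on the best arm over time.
pulls = thompson_sampling([0.3, 0.5, 0.7])
```

After 2,000 rounds the counts in `pulls` are heavily skewed toward the third arm, while the weaker arms still receive a small number of exploratory pulls.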

In a new study, “Online Network Revenue Management Using Thompson Sampling,” MIT Professor David Simchi-Levi and his team have now demonstrated that Thompson sampling can be used for a revenue management problem where the demand function is unknown.

Incorporating inventory constraints

A main challenge to adopting Thompson sampling for revenue management is that the original method does not incorporate inventory constraints. However, the authors show that Thompson sampling can be naturally combined with a classical linear program formulation to include inventory constraints.

The result is a dynamic pricing algorithm that incorporates domain knowledge and has strong theoretical performance guarantees as well as promising numerical performance results.
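The LP step can be sketched concretely. Given one posterior sample of the demand rates at a menu of candidate prices, the seller chooses probabilities of offering each price to maximize expected revenue while keeping expected demand within the per-period inventory budget. The function name, the two-price example, and the numbers below are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def pricing_lp(prices, demand, inventory_rate):
    """One LP step of Thompson sampling with inventory constraints (sketch):
    maximize sum_k p_k * d_k * x_k over offer probabilities x_k, subject to
    expected demand <= inventory_rate and sum_k x_k <= 1, 0 <= x_k <= 1."""
    prices = np.asarray(prices, dtype=float)
    demand = np.asarray(demand, dtype=float)   # one posterior sample of demand
    c = -(prices * demand)                     # linprog minimizes, so negate revenue
    A_ub = np.vstack([demand, np.ones_like(demand)])
    b_ub = [inventory_rate, 1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * len(prices))
    return res.x

# Low price sells faster; high price earns more per unit of inventory,
# so the optimal policy randomizes between the two.
x = pricing_lp(prices=[10.0, 20.0], demand=[0.8, 0.3], inventory_rate=0.5)
```

For these numbers the LP offers the low price 40% of the time and the high price 60%, which exactly exhausts the inventory budget; the full algorithm would re-solve this LP each period with a fresh posterior sample of the demand rates.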

Interestingly, the authors demonstrate that Thompson sampling achieves poor performance when it does not take into account domain knowledge.

Simchi-Levi says, “It is exciting to demonstrate that Thompson sampling can be adapted to combine a classical linear program formulation, to include inventory constraints, and to see that this method can be applied to general revenue management problems in the business-to-consumer and business-to-business environments.”
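The flavor of the approach can be conveyed with a toy single-product simulation. This is my own illustrative sketch, not the authors' algorithm: it assumes Bernoulli demand with Beta priors over a small menu of candidate prices, and it replaces the paper's linear program with a simple inventory-pacing cap.

```python
import numpy as np

rng = np.random.default_rng(0)

prices = np.array([10.0, 15.0, 20.0])  # candidate price points
true_p = np.array([0.8, 0.5, 0.2])     # true purchase probabilities (unknown to the seller)
T, inventory = 500, 150                # selling horizon and starting stock

# Beta(1, 1) priors on the purchase probability at each candidate price
alpha, beta = np.ones(3), np.ones(3)

revenue = 0.0
for t in range(T):
    if inventory == 0:
        break
    # 1. Thompson step: sample a plausible demand rate for each price from the posterior
    d = rng.beta(alpha, beta)
    # 2. Inventory step: a pacing cap stands in for the paper's linear program --
    #    expected sales may not outrun remaining stock per remaining period
    pace = inventory / (T - t)
    k = int(np.argmax(prices * np.minimum(d, pace)))
    # 3. Offer price k, observe a Bernoulli sale, and update the posterior
    sale = rng.random() < true_p[k]
    alpha[k] += sale
    beta[k] += 1 - sale
    if sale:
        inventory -= 1
        revenue += prices[k]
```

In the multi-product setting the paper studies, step 2 becomes a genuine linear program whose solution randomizes over prices while respecting all inventory constraints simultaneously.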

Industry application improves revenue

The proposed dynamic pricing algorithm is highly flexible and is applicable in a range of industries, from airlines and internet advertising all the way to online retailing.

The new study, which has just been accepted by the journal Operations Research, is part of a larger research project by Simchi-Levi that combines machine learning and stochastic optimization to improve revenue, margins, and market share.

Algorithms developed in this research stream have been implemented at companies such as Groupon, a daily deals marketplace; Rue La La, a U.S. online flash sales retailer; B2W Digital, a large online retailer in Latin America; and a large brewing company, where Simchi-Levi and his team optimized the company’s promotion and pricing in various retail channels.

March 19, 2018 | More

A revolutionary model to optimize promotion pricing

William F. Pounds Professor of Management and LGO thesis advisor Georgia Perakis recently authored a Huffington Post article about using a scientific, data-driven approach to determine optimal promotion pricing.

Grocery stores run price promotions all the time. You see them when a particular brand of spaghetti sauce is $1 off or your favorite coffee is buy one get one free. Promotions are used for a variety of reasons from increasing traffic in stores to boosting sales of a particular brand. They are responsible for a lot of revenue, as a 2009 A.C. Nielsen study found that 42.8% of grocery store sales in the U.S. are made during promotions. This raises an important question: How much money does a retailer leave on the table by using current pricing practices as opposed to a more scientific, data-driven approach in order to determine optimal promotional prices?

The promotion planning tools currently available in the industry are mostly manual and based on “what-if” scenarios. In other words, supermarkets tend to use intuition and habit to decide when, how deep, and how often to promote products. Yet promotion pricing is very complicated. Product managers have to solve problems like whether or not to promote an item in a particular week, whether or not to promote two items together, and how to order upcoming discounts ― not to mention incorporating seasonality issues in their decision-making process.

There are plenty of people in the industry with years of experience who are good at this, but their brains are not computers. They can’t process the massive amounts of data available to determine optimal pricing. As a result, lots of money is left on the table.

To revolutionize the field of promotion pricing, my team of PhD students from the Operations Research Center at MIT, our collaborators from Oracle, and I sought to build a model based on several goals. It had to be simple and realistic. It had to be easy to estimate directly from the data, but also computationally easy and scalable. In addition, it had to lead to interesting and valuable results for retailers in practice.
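To make the kind of decisions described above concrete, here is a deliberately tiny, hypothetical promotion-scheduling example. It is my own illustration, not the authors' model: brute-force search over which of 12 weeks to promote (at most three), with an assumed demand lift during a promotion and a pantry-loading dip the week after.

```python
from itertools import combinations

weeks = range(12)
base_demand = 100.0
price, promo_price = 3.0, 2.0
lift, dip = 2.5, 0.7   # assumed demand multipliers during and right after a promotion

def revenue(promo_weeks):
    """Total revenue for a given set of promotion weeks under the toy demand model."""
    total = 0.0
    for w in weeks:
        if w in promo_weeks:
            total += promo_price * base_demand * lift
        elif (w - 1) in promo_weeks:   # pantry-loading dip in the week after a promo
            total += price * base_demand * dip
        else:
            total += price * base_demand
    return total

# Enumerate every schedule with at most three promotion weeks and keep the best
best = max((c for k in range(4) for c in combinations(weeks, k)), key=revenue)
print(best, revenue(best))
```

Even this toy search exhibits the kind of interaction a manager would have to reason about by hand: under these assumed numbers, clustering promotions at the end of the horizon avoids post-promotion dips. Real promotion optimization replaces brute force with scalable estimation and optimization over thousands of products.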

Read the full post at The Huffington Post.

Georgia Perakis is the William F. Pounds Professor of Management and a Professor of Operations Research and Operations Management at the MIT Sloan School of Management.

March 16, 2018 | More

JDA Software collaborates with MIT to advance research in intelligent supply chains

David Simchi-Levi, professor of civil and environmental engineering and LGO thesis advisor, is leading a multiyear collaboration with JDA Software.

MIT will work with JDA, leveraging their business domain expertise and client base, to advance research in intelligent supply chains.

The collaboration aims to improve supply chain performance and customer experiences by leveraging data, computational power, and machine learning.

Professor of civil and environmental engineering David Simchi-Levi says, “I am very pleased JDA has entered into a multiyear research collaboration with MIT, and I look forward to working with the JDA Lab and teams. The collaboration will support our students and advance research in machine learning, optimization, and consumer behavior modeling.”

This collaboration with JDA brings real world challenges, opportunities, and data, and will help to further the advancement of MIT’s world-class research in supply chain and retail analytics.

The MIT and JDA research teams will create real-world use cases to advance predictive demand, intelligent execution, and smart supply chain and retail planning, yielding a unique business strategy. These use cases will explore new data science algorithms that combine natural language processing, predictive behavior, and prescriptive optimization, taking past behaviors into account to predict and shape future behaviors.

“It is more critical than ever to infuse innovation into every aspect of the supply chain, as edge technologies such as the Internet of Things (IoT) and artificial intelligence (AI) are essential to digitally transforming supply chains. This collaboration allows us to tap into the extraordinary mindshare at MIT to accelerate the research into more intelligent and cognitive capabilities moving forward,” says Desikan Madhavanur, executive vice president and chief development officer at JDA.

“We are excited to be working on the future of supply chain with MIT to double down on researching enhanced, innovative, and value-driven supply chain solutions,” Madhavanur says.

The multiyear collaboration will support students on the research teams and the development of knowledge and education.

Simchi-Levi will speak at JDA’s annual customer conference, JDA FOCUS 2018, in Orlando, May 6-9, 2018.

March 16, 2018 | More

Making appliances and energy grids more efficient

Professor of electrical engineering and frequent LGO thesis advisor James Kirtley Jr. is working on a new design for fans that offers high efficiency at an affordable cost, which could have a huge impact for developing countries.

The ceiling fan is one of the most widely used mechanical appliances in the world. It is also, in many cases, one of the least efficient.

In India, ceiling fans have been used for centuries to get relief from the hot, humid climate. Hand-operated fans called punkahs can be traced as far back as 500 BC and were fixtures of life under the British Raj in the 18th and 19th centuries. Today’s ceiling fans run on electricity and are more ubiquitous than ever. The Indian Fan Manufacturers’ Association reported producing 40 million units in 2014 alone, and the number of fans in use nationwide is estimated in the hundreds of millions, perhaps as many as half a billion.

James Kirtley Jr., a professor of electrical engineering at MIT, has been investigating the efficiency of small motors like those found in ceiling fans for more than 30 years.

“A typical ceiling fan in India draws about 80 watts of electricity, and it does less than 10 watts of work on the air,” he says. “That gives you an efficiency of just 12.5 percent.”

Low-efficiency fans pose a variety of energy problems. Consumers don’t get good value for the electricity they buy from the grid, and energy utilities have to deal with the power losses and grid instability that result from low-quality appliances.

But there’s a reason these low-efficiency fans, driven by single-phase induction motors, are so popular: They’re inexpensive. “The best fans on the market in India — those that move a reasonable amount of air and have a low input power — are actually quite costly,” Kirtley says. The high price puts them out of reach for most of India’s population.

Now Kirtley, with support from the Tata Center for Technology and Design, is working on a single-phase motor design that offers high efficiency at an affordable cost. He says the potential impact is huge.

“If every fan in India saved just 2 watts of electricity, that would be the equivalent of a nuclear power plant’s generation capacity,” he says. “If we could make these fans substantially more efficient than they are, operating off of DC electricity, you could imagine extending the use of ceiling fans into rural areas where they could provide a benefit to the quality of life.”

Mohammad Qasim, a graduate student in Kirtley’s research group and a fellow in the Tata Center, says the benefits could reach multiple stakeholders. “Having more efficient appliances means a lower electricity bill for the consumer and fewer power losses on the utility’s side,” he says.

Choosing the right motor

“The idea is to try and hit that high-efficiency mark at a cost that is only a little more than that of existing low-efficiency fans,” Kirtley says. “We imagine a fan that might have an input power of 15 watts and an efficiency of 75 percent.”
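The figures quoted in the article are easy to sanity-check with back-of-envelope arithmetic; the 500-million-fan count is the article's upper estimate, not a measured value.

```python
# Today's typical fan: 80 W drawn from the grid, under 10 W of work on the air
typical_input_w, typical_air_w = 80.0, 10.0
efficiency = typical_air_w / typical_input_w   # 0.125 -> the 12.5 percent quoted

# The target fan: 15 W of input power at 75 percent efficiency
target_input_w, target_eff = 15.0, 0.75
target_air_w = target_input_w * target_eff     # 11.25 W delivered to the air

# Saving 2 W per fan across ~500 million fans
total_savings_gw = 2.0 * 500e6 / 1e9           # 1.0 GW, roughly one nuclear plant

print(efficiency, target_air_w, total_savings_gw)
```

So the proposed design would move slightly more air than today's typical fan while drawing less than a fifth of the power.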

To accomplish that, Kirtley and Qasim are exploring two approaches: creating an improved version of the conventional induction motor, or switching to a brushless DC motor, which may be more expensive but can deliver superior efficiency.

In either case, they plan to use power electronics — devices that control and optimize the flow of electricity through the motor — to improve the power quality and grid compatibility of the fan. Power electronics can also be used to convert AC electricity from the grid into DC, opening up the possibility of using DC motors in ceiling fans.

Brushless DC motors, which are the younger technology, use permanent magnets to establish a magnetic field that creates torque between the motor’s two main components, the rotor and stator. “You can think of it almost like a dog chasing his tail,” Kirtley says. “If I establish the magnetic field in some direction, the magnet turns to align itself in that direction. As I rotate the magnetic field, the magnet moves to align, and that keeps the rotor spinning.”

Induction motors, on the other hand, use no magnets but instead create a rotating magnetic field by flowing current through the stator coils. Because they use AC electricity, they are directly grid compatible, but their efficiency and stability can be improved by using power electronics to optimize the speed of the motor.

International collaboration

In determining which path to take, induction or brushless DC motor, Kirtley and Qasim are leaning on the expertise of Vivek Agarwal, a professor of electrical engineering at the Indian Institute of Technology, Bombay (IITB). Agarwal is a specialist in power electronics.

“The collaboration with Professor Agarwal’s group is so important,” Kirtley says. “They can give us a good idea of what the two different power electronics packages will cost. You would typically think of the brushless motor package as the more expensive option, but it may or may not be.”

Outside of the lab, on-the-ground detective work is key. When Qasim visited India in January 2017, he hit the streets of Mumbai with one of the graduate students from Agarwal’s lab. Together, they visited people across the ceiling fan industry, from manufacturers to repairmen in street-side shops.

“This visit was a big motivation for us,” says Qasim, noting that they were able to glean insights that will help them design a more robust and durable motor. “We want to understand the major maintenance issues that cause these motors to break down so that we can avoid common sources of failure. It was important to make the effort to talk to local people who had real experience repairing these motors.”

Usha International, an appliance manufacturer based in New Delhi, has been a key advisor in the early stages of the project and helped identify ceiling fans as a critical focus area. Engineers at Usha agree with Kirtley’s assessment that there is an unmet need for high-efficiency motors at relatively low cost, and Qasim says the Usha team shared what they had learned from designing their own high-efficiency fans.

Now, Kirtley and Qasim are engaged in the daunting task of envisioning how an ideal motor might look.

“This is a very challenging problem, to design a motor that is both efficient and inexpensive,” Kirtley says. “There’s still a question of which type of motor is going to be the best one to pursue. If we can get a good understanding of what exactly the machine ought to do, we can proceed to do a good machine design.”

Qasim has built a test facility in Kirtley’s laboratory at MIT, which he is using to characterize a variety of existing fans. His experimental data, combined with his fieldwork in India, should provide a set of design requirements for the improved motor. From there, he and Kirtley will work with the IITB researchers to pair the machine with an appropriate power electronics package.

In reducing the power demands of the standard ceiling fan by as much as 65 watts, they hope to have a far-reaching, positive effect on India’s energy system. But that’s only the start. Ultimately, they believe efficient, affordable motors can be applied to a number of common appliances, potentially saving gigawatts of electricity in a country that is working hard to expand reliable energy access for what will soon be the world’s largest population.

This article appeared in the Autumn 2017 issue of Energy Futures, the magazine of the MIT Energy Initiative.

March 2, 2018 | More


This pad-free wireless charger can power multiple devices at once

Imagine charging your phone, tablet, and wearable device at the same time, in any direction from the same power source.

Pi is working to make that a reality.

“Pi is unique in the power space because we are working on multi-device, orientation agnostic, wireless power,” said CEO and co-founder John MacDonald, MBA ’15. “We can charge multiple devices from the same power source — up to four at a time — without requiring precise positioning on a pad, but still being compatible with existing, safe standards.”

Pi’s charging technology can sense a device’s low battery and adjust its magnetic field based on where the device is positioned, rather than requiring the user to place the device on a specific pad or station.

“Our first product applies this to phones and small consumer devices in the United States, but we’re going to bring it to a variety of applications around the world over the next five years,” he added.

Pi’s roots extend back to early 2014, when MacDonald and his future Pi co-founder and chief technology officer, Lixin Shi, PhD ’15, enrolled in MIT Sloan’s New Enterprises course.

The two men were part of a team that pitched a wireless power project during a class competition. While they didn’t win the faculty judges’ favor, they did earn an audience choice award — giving the students the confidence to explore building a commercial wireless charger.

In the summer of 2015, the team decided to try for a first round of funding. By the fall of that year, Pi had officially launched.

August 3, 2018 | More

New ideas are getting harder to find — and more expensive

It’s an age of astonishing technological progress — but are we starting to have a harder time coming up with new ideas?

Yes, argues a group of MIT Sloan and Stanford University researchers, who found in a study published by the National Bureau of Economic Research in March that the productivity of scientific research is falling sharply across the board.

That, they argue, is because researchers are putting in more and more effort to sustain the same — or even a slightly lower — pace of idea generation as we experienced half a century ago.

“Just to sustain the constant growth in GDP per person, the U.S. must double the amount of research effort put into searching for a new idea every 13 years to offset the increased difficulty in finding new ideas,” write MIT Sloan professor of applied economics John Van Reenen, Stanford University professors Nicholas Bloom and Charles I. Jones, and Stanford doctoral candidate Michael Webb.

Moore’s Law — the observed doubling of the number of transistors packed onto new computer central processing units every two years — stands as a prime example. The doubling effect represents a growth rate of 35 percent each year, and that growth is driven only by ever-more-extensive research, the authors write.

“Many commentators note that Moore’s Law is not a law of nature, but instead results from intense research effort: Doubling the transistor density is often viewed as a goal or target for research programs,” they write.

They continue: “The constant exponential growth implied by Moore’s Law has been achieved only by a massive increase in the amount of resources devoted to pushing the frontier forward.”

In fact, research efforts toward semiconductor improvement have risen by a factor of 18 since the early 1970s, the study found, while productivity has fallen by the same factor. Taken together, that means it’s about 18 times harder today to push Moore’s Law forward than it was half a century ago, the authors write.
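The growth figures quoted above follow from simple continuous-growth arithmetic: a doubling every two years corresponds to a continuous rate of ln(2)/2, which is the roughly 35 percent figure, and doubling research effort every 13 years corresponds to about 5 percent annual growth in effort.

```python
import math

# Doubling transistor density every two years is a continuous growth
# rate of ln(2)/2, i.e. about 34.7 percent per year
moore_rate = math.log(2) / 2

# Doubling research effort every 13 years, as the study estimates is
# needed to sustain per-capita GDP growth, is about 5.3 percent per year
effort_rate = math.log(2) / 13

print(round(moore_rate, 3), round(effort_rate, 3))  # -> 0.347 0.053
```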

August 3, 2018 | More

How do online bots shift opinions?

How do you win an election? You can spend money. You can canvass neighborhoods. You can take meals at small-town diners, crisscross the country by bus, parade your charismatic family before the cameras.

But don’t forget the Twitter bots.

“If you’re smart about putting bots in a network in particular places, you can pretty easily manipulate people’s opinions,” said Tauhid Zaman, associate professor of operations management at MIT Sloan. “And whether an election or something else, this can help you achieve the outcomes that you want.” In a new working paper coauthored with MIT Operations Research Center graduate student David Scott Hunter, Zaman outlines how to optimize shifts in ideology using bots in a social network.

They begin with the assumption that, though people update their opinions as they receive new information, this process dampens over time; opinions harden. “You’ll listen to me less and less if you already have a lot of information, and something new won’t likely change your opinion,” Zaman said.

Working from this foundation, Zaman and Hunter built a model of opinion dynamics in social networks and dropped in a handful of bots whose opinions were preset and immutable (so-called “stubborn agents”). They developed an algorithm to identify targets for the bots to influence. These were generally people who didn’t already have firm opinions on a particular issue and who could reach many other people. Once these targets were identified, the bots could go to work, pushing their message on the targets.

One way to measure the effectiveness of this process would be to observe how the average opinion in the network changed as a result of the bots. Overall, were people more inclined to align themselves with the bots after a set period? But for Zaman and Hunter, a more interesting consideration was the specific number of individuals whose opinions shifted over a set threshold. “This is an important measure because once you get over this threshold, maybe then you go and do something like buy a product, watch a movie, or join a protest,” Zaman said. “Or maybe you go vote.”

It turns out the structure of the underlying network has a big impact on how effective bots can be. Zaman found that in polarized networks, a few bots are able to shift a disproportionate number of people over a threshold. This matters because many modern social networks have such a polarized structure, with most people maintaining friendships only with people of similar ideologies. It is even more relevant given the ongoing discussion of foreign meddling in U.S. elections and the upcoming 2018 midterm elections. Because of how polarized the U.S. has become in recent years, the democratic process is highly vulnerable to this type of cyberattack, Zaman said. “When it comes to bots in a polarized network, a little bit goes a long way.”
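A minimal simulation conveys the flavor of the model. This is my own illustrative sketch with assumed network parameters, using a simple 1/t weight as a stand-in for opinion hardening and high-degree targeting as a stand-in for the paper's target-selection algorithm; it is not Zaman and Hunter's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_bots = 50, 3

# Random directed social network among people; everyone listens to themselves
A = (rng.random((n_people, n_people)) < 0.1).astype(float)
np.fill_diagonal(A, 1.0)

# Wire each bot to the ten best-connected people
targets = np.argsort(A.sum(axis=0))[-10:]
bot_edge = np.zeros((n_people, n_bots))
bot_edge[targets, :] = 1.0

W = np.hstack([A, bot_edge])
W = W / W.sum(axis=1, keepdims=True)   # row-stochastic listening weights

x = rng.uniform(-1, 1, n_people)       # initial opinions in [-1, 1]
bot_opinion = np.ones(n_bots)          # "stubborn agents": fixed at +1, never update

for t in range(1, 200):
    # Opinion hardening: new information is down-weighted as 1/t over time
    x = (1 - 1.0 / t) * x + (1.0 / t) * (W @ np.concatenate([x, bot_opinion]))

# Count people pushed past an action threshold, the measure Zaman highlights
shifted = int((x > 0.5).sum())
print(shifted, "of", n_people, "people crossed the threshold")
```

Rerunning the sketch with the people split into two densely connected, loosely bridged clusters is one way to see the polarization effect the researchers describe.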

This is the third in a three-part series examining new work about Twitter, influence, and bots by MIT Sloan associate professor Tauhid Zaman. Read ‘Solving Twitter’s follow-back problem’ and ‘A new method for rooting out social media bots.’

August 3, 2018 | More

Here’s how ‘question bursts’ make better brainstorms

It’s among the largest of projects that Ling Xiang, a director of product management at Oracle, has encountered: helping to lead an organizational change that is part of the company’s transformation from a software developer into a cloud-based service provider.

The transition will require bucking old ways of thinking to adopt new ones. But Xiang expects such a drastic shift won’t come without some measure of resistance, and figuring out how to overcome it will require that she, too, explore new leadership methods and avenues of thought to ensure everyone comes on board.

June 15, 2018 | More

Sheryl Sandberg on Facebook's missteps and what comes next

The more they ask “Could we?” the more creative people become. But the more they ask “Should we?” the more ethical they become.

That was the message from Facebook chief operating officer Sheryl Sandberg at MIT’s 2018 commencement ceremony, held June 8 on campus.

Facebook in the past year has faced a series of privacy scandals and ethical questions, most notably when it was reported that consulting firm Cambridge Analytica had mined the personal data of millions of Facebook users and used it to influence voter opinion. Facebook has also been criticized for failing to contain the spread of fake news.

June 15, 2018 | More

Innovating around the box

Managers today are told that improving their business incrementally each year is no longer good enough. Rather, to succeed they must disrupt themselves — revolutionize their company and their industry — before a competitor beats them to it.

In a May 16 webinar for MIT Sloan Alumni Online, senior lecturer David Robertson discussed a third way that businesses can grow, taken from his 2017 book, “The Power of Little Ideas: A Low-Risk, High-Reward Approach to Innovation.” Rather than disrupt a business, companies can grow by finding ways to innovate around existing products.

“When you have an existing product, and have an existing market, you shouldn’t be quick to jump away from it and explore disruptive, new innovations,” Robertson said. “That’s prone to failure and is often very expensive and risky. Look to see if you can innovate around it.”

In the webinar, Robertson explains:

· What is the third way?

· How is this different than other approaches to innovation?

· Which approaches are the most important for managers to know?

What is the third way?
Robertson said too often he hears stories about mature companies feeling forced to choose between incremental change and disruption when a third way exists: Innovate around existing products and services. Lego chose this path after facing near disaster.

In the late 1990s, Lego got caught up in the disruptive innovation frenzy that gripped corporate thought. After 15 straight years of 14 percent average annual growth, sales plateaued. Lego became convinced that the brick, whose patents had expired in the 1980s, was becoming a commodity. The company’s executives convinced themselves they had to overhaul their business, move away from their iconic brick, and reinvent the future of play before a competitor did. The result was four years of expensive failures. The company almost went bankrupt.

But Lego learned a lesson: when it moved away from the brick, customers had no reason to purchase Lego toys. Offering only a box of bricks wasn’t sufficient, but the brick itself was necessary. When Lego went back to the brick and innovated around it, customers returned to the brand and sales rebounded. (Robertson was the Lego Professor of Innovation and Technology Management at the International Institute for Management Development and wrote “Brick by Brick,” a book about Lego’s success in innovation.)

To pursue this third way, a company must start by defining the product or service it wants to innovate around, then decide its business promise to its customers, then design and deliver those complementary innovations to market.

Lego checked all of those boxes when it introduced Lego Batman in 2006. A major movie followed in 2017. Along with Lego Batman, there were a series of complementary products designed to increase kids’ involvement with the story. There was a comic book, Happy Meal toys, a video game, and an iPhone tie-in. (Open Siri, say “Hey computer,” and see what happens.)

How is this different than other approaches to innovation?
An incremental improvement to current products is a necessary activity for any company, but usually only keeps you abreast of the competition. Disruptive innovations like Uber can change an entire industry. But in between the two is the third way, which any company can pursue. The secret: Build a deep relationship with your customer. Date your customer, and don’t fight your competitor, Robertson said.

From 2010 to 2015, GoPro practiced this third-way approach and achieved five years of 90 percent average annual sales growth. The company developed not only a rugged, waterproof action camera, but also a smartphone app, a variety of camera mounts, desktop software to turn raw footage into polished movies, and a social media site for customers to share their adventures. By “dating the customer,” GoPro was able to understand what its customers wanted to achieve with their cameras and provide the complementary products and services to help them.

Sony thought it could knock GoPro off its perch, and developed a better and less expensive rival camera. Yet, it barely dented GoPro’s market share. Why? Sony fought the competition while GoPro was dating the customer. Sony had a better and cheaper camera, but GoPro had a portfolio of complementary products and services that together helped customers capture their adventures.

Which innovation approaches are the most important for managers to know?
There are several types of innovation, Robertson said: incremental improvements, lean-startup, blue-ocean, disruptive, and Robertson’s third-way. Successful companies cycle through these different types of innovation over the years. They may start as blue-ocean innovators, like GoPro, but end up innovating around a product to hold onto their core markets. “Managers need to know all these different types of innovations and practice them,” Robertson said.

But knowing how to innovate around a product or service is especially important, Robertson said, because it can lead to new opportunities. Consider the company behind the Spin Pop electric lollipops. The Spin Pop has a tiny motor that spins a lollipop, adding a new feature to an existing product. The company then developed the SpinBrush, which had a similar motor-and-battery combination to power an inexpensive electric toothbrush (a “blue ocean” innovation). The SpinBrush was acquired by Procter & Gamble for $475 million. Procter & Gamble then used the SpinBrush to innovate around its Crest brand and expand it from a toothpaste brand to an oral care brand. Crest now has an electric toothbrush, floss, white strips, mouthwash, and other products. By innovating around the core toothpaste product, Crest was able to revive sales for the toothpaste, as well as gain revenues from the complementary products.

“Too often we jump away from our existing customers and existing products,” Robertson said. “Innovating around those can be incredibly valuable and open up new opportunities for growth.”

Watch the full webinar below.

June 8, 2018 | More

What made Kate Spade a great entrepreneur

When Kate Spade, whose name became synonymous with the working woman’s handbag, died this month at 55, the news was a blow to the fashion and corporate worlds. Those who studied her career are certain of the entrepreneurial legacy she left behind.

In 1993, Spade was an accessories editor at Mademoiselle magazine when she started her handbag company with her husband. She was in a great place in her career, said MIT Sloan senior lecturer in managerial communication Neal Hartman, and she took a risk to start her own handbag line.

Hartman, whose teachings include leadership and working in teams, said Spade was playful and creative at heart, and her brand reflected those qualities.

“I think she had a terrific sense of what women wanted, so she knew her customer base and had a good sense of what they wanted more than what they needed,” Hartman said. “She wasn’t going for the $3,000 bag, but she still wanted something that looked good, that was clearly fashionable.”

She told the New York Times in 1999 that she wanted “a functional bag that was sophisticated and had some style.” In an interview with the Toronto Star, she said she wanted her company “to be like a fashion version of L.L. Bean, never in or out.”

A good leader with a good team
Hartman said the other thing that Spade did to ensure her success was assemble a really good team.

“She had a combination of family and non-relative professionals who helped to move the organization forward and Kate paid close attention to both the U.S. and global operations,” Hartman said. “She looked for the right people who fit with the culture and fostered an environment where people wanted to stay with the company.”

Spade left her company in 2007, after then-Liz Claiborne Inc. bought it for $125 million from the Neiman Marcus Group, the Associated Press reported. The company Coach (now known as Tapestry) bought the brand in 2017 for $2.4 billion.

“Her name immediately you associate with her brand, with her product,” Hartman said. “Essentially everyone looked at her as being very successful. Of course it begs the question of were she still with us and continuing in her work, what would be next, where would it go?”

Building an enduring brand
That’s important to note: the question is what’s next, not will the brand survive. Hartman pointed to fashion designer Gianni Versace’s 1997 death as an example — while there were likely some periods of uncertainty for the fashion house, the brand continues today.

The same could be said of the Kate Spade brand, Hartman said, in part because of the team she built at the beginning of the company.

While she was the icon and spokesperson for the brand, others closely connected with her helped make that brand happen, Hartman said, and despite changing hands several times, the brand has endured.

“It’s a brand that people know, it’s a brand that people respect, and again, it’s classy, it’s bright, it’s fun, it’s colorful, it’s functional, it’s high quality, and it’s affordable,” Hartman said. “You essentially have all of the ingredients of a very successful product line.”

June 6, 2018 | More

Defending hospitals against life-threatening cyberattacks

From The Conversation

Like any large company, a modern hospital has hundreds – even thousands – of workers using countless computers, smartphones, and other electronic devices that are vulnerable to security breaches, data thefts, and ransomware attacks.

But hospitals are unlike other companies in two important ways. They keep medical records, which are among the most sensitive data about people. And many hospital electronics help keep patients alive, monitoring vital signs, administering medications, and even breathing and pumping blood for those in the most dire conditions.

A 2013 data breach at the University of Washington Medicine medical group compromised about 90,000 patients’ records and resulted in a US$750,000 fine from federal regulators. In 2015, the UCLA Health system, which includes a number of hospitals, revealed that attackers accessed a part of its network that handled information for 4.5 million patients.

Cyberattacks can interrupt medical devices, close emergency rooms, and cancel surgeries. The WannaCry attack, for instance, disrupted a third of the UK’s National … Read More »

The post Defending hospitals against life-threatening cyberattacks – Mohammad S. Jalali appeared first on MIT Sloan Experts.

May 9, 2018 | More

Beepi aims for overhaul of used car industry

Alejandro Resnik, MBA ’13, has always been driven to solve problems with innovation. So when he learned firsthand the misery of owning a lemon of a used car, Resnik set out to change the way Americans buy automobiles.

On April 15, 2014, Resnik launched Beepi, an online marketplace that enables customers to buy or sell vehicles from home with free delivery, the support of a certified inspection, and a money-back guarantee for buyers. The company has grown quickly in California and last month secured a $60 million funding round to expand across the U.S.

May 6, 2018 | More

Here’s why networking isn’t just about landing your dream job

From Fortune: At a dinner party a few years ago, Salesforce founder Marc Benioff and Dropbox co-founder Drew Houston got to talking. Their conversation led to a new idea, and that idea led to Salesforce’s Chatter, an enterprise social network, Benioff recalled during an interview I had with him two years ago (for an upcoming book about what causes senior leaders, especially CEOs, to ask the right questions – before someone else does it for them). Chatter was not just the result of a chance encounter. At the age of 50, Benioff regularly invites 20- and 30-something entrepreneurs to his house for dinner. It’s in this pursuit of perspectives different from his own that he is able to constantly bring new services and ideas to market. Benioff, who is … Read More »

The post Here’s why networking isn’t just about landing your dream job — Hal Gregersen appeared first on MIT Sloan Experts.

May 2, 2018 | More


China could face deadly heat waves due to climate change

A region that holds one of the biggest concentrations of people on Earth could be pushing against the boundaries of habitability by the latter part of this century, a new study shows.

Research has shown that beyond a certain threshold of temperature and humidity, a person cannot survive unprotected in the open for extended periods — as, for example, farmers must do. Now, a new MIT study shows that unless drastic measures are taken to limit climate-changing emissions, China’s most populous and agriculturally important region could face such deadly conditions repeatedly, suffering the most damaging heat effects, at least as far as human life is concerned, of any place on the planet.

The study shows that the risk of deadly heat waves is significantly increased because of intensive irrigation in this relatively dry but highly fertile region, known as the North China Plain — a region whose role in that country is comparable to that of the Midwest in the U.S. That increased vulnerability to heat arises because the irrigation exposes more water to evaporation, leading to higher humidity in the air than would otherwise be present and exacerbating the physiological stresses of the temperature.

The new findings, by Elfatih Eltahir at MIT and Suchul Kang at the Singapore-MIT Alliance for Research and Technology, are reported in the journal Nature Communications. The study is the third in a set; the previous two projected increases of deadly heat waves in the Persian Gulf area and in South Asia. While the earlier studies found serious looming risks, the new findings show that the North China Plain, or NCP, faces the greatest risks to human life from rising temperatures, of any location on Earth.

“The response is significantly larger than the corresponding response in the other two regions,” says Eltahir, who is the Breene M. Kerr Professor of Hydrology and Climate and Professor of Civil and Environmental Engineering. The three regions the researchers studied were picked because past records indicate that combined temperature and humidity levels reached greater extremes there than on any other land masses. Although some risk factors are clear — low-lying valleys and proximity to warm seas or oceans — “we don’t have a general quantitative theory through which we could have predicted” the location of these global hotspots, he explains. When looking empirically at past climate data, “Asia is what stands out,” he says.

Although the Persian Gulf study found some even greater temperature extremes, those were confined to the area over the water of the Gulf itself, not over the land. In the case of the North China Plain, “This is where people live,” Eltahir says.

The key index for determining survivability in hot weather, Eltahir explains, involves the combination of heat and humidity, as determined by a measurement called the wet-bulb temperature. It is measured by literally wrapping wet cloth around the bulb (or sensor) of a thermometer, so that evaporation of the water can cool the bulb. At 100 percent humidity, with no evaporation possible, the wet-bulb temperature equals the actual temperature.

This measurement reflects the effect of temperature extremes on a person in the open, which depends on the body’s ability to shed heat through the evaporation of sweat from the skin. At a wet-bulb temperature of 35 degrees Celsius (95 F), a healthy person may not be able to survive outdoors for more than six hours, research has shown. The new study shows that under business-as-usual scenarios for greenhouse gas emissions, that threshold will be reached several times in the NCP region between 2070 and 2100.
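To make the heat–humidity combination concrete, the wet-bulb temperature can be estimated from ordinary weather-station readings. The sketch below uses Stull’s (2011) empirical one-line fit; the formula and its validity range are assumptions of this illustration, not part of the MIT study, which worked from full climate-model output:

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%), using Stull's (2011) empirical fit.
    Valid roughly for RH between 5% and 99% and T between -20 and 50 C."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# Near saturation, the wet-bulb temperature approaches the air temperature:
print(round(wet_bulb_stull(35.0, 99.0), 1))  # close to 35 C -- near the survivability limit
# Hot but dry air is far less dangerous at the same thermometer reading:
print(round(wet_bulb_stull(40.0, 20.0), 1))  # well below 35 C
```

The comparison shows why humid heat is so much more dangerous than dry heat: 35 C air at near-total humidity sits at the survivability threshold, while much hotter dry air does not.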

“This spot is just going to be the hottest spot for deadly heat waves in the future, especially under climate change,” Eltahir says. And signs of that future have already begun: There has been a substantial increase in extreme heat waves in the NCP already in the last 50 years, the study shows. Warming in this region over that period has been nearly double the global average — 0.24 degrees Celsius per decade versus 0.13. In 2013, extreme heat waves in the region persisted for up to 50 days, and maximum temperatures topped 38 C in places. Major heat waves occurred in 2006 and 2013, breaking records. Shanghai, East China’s largest city, broke a 141-year temperature record in 2013, and dozens died.

To arrive at their projections, Eltahir and Kang ran detailed climate model simulations of the NCP area — which covers about 4,000 square kilometers — for the past 30 years. They then selected only the models that did the best job of matching the actual observed conditions of the past period, and used those models to project the future climate over 30 years at the end of this century. They used two different future scenarios: business as usual, with no new efforts to reduce emissions; and moderate reductions in emissions, using standard scenarios developed by the Intergovernmental Panel on Climate Change. Each version was run two different ways: one including the effects of irrigation, and one with no irrigation.

One of the surprising findings was the significant contribution by irrigation to the problem — on average, adding about a half-degree Celsius to the overall warming in the region that would occur otherwise. That’s because, even though extra moisture in the air produces some local cooling effect at ground level, this is more than offset by the added physiological stress imposed by the higher humidity, and by the fact that extra water vapor — itself a powerful greenhouse gas — contributes to an overall warming of the air mass.

“Irrigation exacerbates the impact of climate change,” Eltahir says. In fact, the researchers report, the combined effect, as projected by the models, is a bit greater than the sum of the individual impacts of irrigation or climate change alone, for reasons that will require further research.

The bottom line, as the researchers write in the paper, is the importance of reducing greenhouse gas emissions in order to reduce the likelihood of such extreme conditions. They conclude, “China is currently the largest contributor to the emissions of greenhouse gases, with potentially serious implications to its own population: Continuation of the current pattern of global emissions may limit habitability of the most populous region of the most populous country on Earth.”

“This is a solid piece of research, extending and refining some of the previous studies on man-made climate change and its role on heat waves,” says Christoph Schär, a professor of atmospheric and climate science at ETH Zurich, who was not involved in the work. “This is a very useful study. It highlights some of the potentially serious challenges that will emerge with unabated climate change. … These are important and timely results, as they may lead to adequate adaptation measures before potentially serious climate conditions will emerge.”

Schär adds that “While there is overwhelming evidence that climate change has started to affect the frequency and intensity of heat waves, century-scale climate projections imply considerable uncertainties” that will require further study. However, he says, “Regarding the health impact of high wet-bulb temperatures, the applied health threshold (wet-bulb temperatures near the human body temperature) is very solid and it actually derives from fundamental physical principles.”

July 31, 2018 | More

Helping computers perceive human emotions

MIT Media Lab researchers have developed a machine-learning model that takes computers a step closer to interpreting our emotions as naturally as humans do.

In the growing field of “affective computing,” robots and computers are being developed to analyze facial expressions, interpret our emotions, and respond accordingly. Applications include, for instance, monitoring an individual’s health and well-being, gauging student interest in classrooms, helping diagnose signs of certain diseases, and developing helpful robot companions.

A challenge, however, is that people express emotions quite differently, depending on many factors. General differences can be seen among cultures, genders, and age groups. But other differences are even more fine-grained: The time of day, how much you slept, or even your level of familiarity with a conversation partner leads to subtle variations in the way you express, say, happiness or sadness in a given moment.

Human brains instinctively catch these deviations, but machines struggle. Deep-learning techniques were developed in recent years to help catch the subtleties, but they’re still not as accurate or as adaptable across different populations as they could be.

The Media Lab researchers have developed a machine-learning model that outperforms traditional systems in capturing these small facial expression variations, to better gauge mood while training on thousands of images of faces. Moreover, by using a little extra training data, the model can be adapted to an entirely new group of people, with the same efficacy. The aim is to improve existing affective-computing technologies.

“This is an unobtrusive way to monitor our moods,” says Oggi Rudovic, a Media Lab researcher and co-author on a paper describing the model, which was presented last week at the Conference on Machine Learning and Data Mining. “If you want robots with social intelligence, you have to make them intelligently and naturally respond to our moods and emotions, more like humans.”

Co-authors on the paper are: first author Michael Feffer, an undergraduate student in electrical engineering and computer science; and Rosalind Picard, a professor of media arts and sciences and founding director of the Affective Computing research group.

Personalized experts

Traditional affective-computing models use a “one-size-fits-all” concept. They train on one set of images depicting various facial expressions, optimizing features — such as how a lip curls when smiling — and mapping those general feature optimizations across an entire set of new images.

The researchers, instead, combined a technique, called “mixture of experts” (MoE), with model personalization techniques, which helped mine more fine-grained facial-expression data from individuals. This is the first time these two techniques have been combined for affective computing, Rudovic says.

In MoEs, a number of neural network models, called “experts,” are each trained to specialize in a separate processing task and produce one output. The researchers also incorporated a “gating network,” which calculates probabilities of which expert will best detect moods of unseen subjects. “Basically the network can discern between individuals and say, ‘This is the right expert for the given image,’” Feffer says.
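The division of labor described above — several experts plus a gating network that softly weights their outputs per input — can be sketched in miniature. This is a generic MoE toy with linear "experts"; all names, dimensions, and weights here are illustrative, not taken from the paper:

```python
import math
import random

random.seed(0)

# Toy mixture of experts: each "expert" is a tiny linear model, and a
# gating network assigns softmax weights to the experts for each input.
N_EXPERTS, N_FEATURES = 3, 4
w_experts = [[random.gauss(0, 1) for _ in range(N_FEATURES)] for _ in range(N_EXPERTS)]
w_gate = [[random.gauss(0, 1) for _ in range(N_FEATURES)] for _ in range(N_EXPERTS)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def moe_predict(x):
    expert_out = [dot(w, x) for w in w_experts]     # each expert's prediction
    logits = [dot(w, x) for w in w_gate]            # gating network scores
    m = max(logits)                                 # stabilize the softmax
    exp_l = [math.exp(v - m) for v in logits]
    total = sum(exp_l)
    gate = [v / total for v in exp_l]               # softmax: weights sum to 1
    y = sum(g * o for g, o in zip(gate, expert_out))
    return y, gate, expert_out

y, gate, expert_out = moe_predict([0.5, -1.0, 2.0, 0.3])
print(round(sum(gate), 6))  # 1.0 -- the gate is a probability distribution
```

Because the output is a gate-weighted average, it always lies between the lowest and highest expert predictions; training pushes the gate toward whichever expert best fits each input, which is the "this is the right expert for the given image" behavior Feffer describes.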

For their model, the researchers personalized the MoEs by matching each expert to one of 18 individual video recordings in the RECOLA database, a public database of people conversing on a video-chat platform designed for affective-computing applications. They trained the model using nine subjects and evaluated it on the other nine, with all videos broken down into individual frames.

Each expert, and the gating network, tracked facial expressions of each individual, with the help of a residual network (“ResNet”), a neural network used for object classification. In doing so, the model scored each frame based on level of valence (pleasant or unpleasant) and arousal (excitement) — commonly used metrics to encode different emotional states. Separately, six human experts labeled each frame for valence and arousal, based on a scale of -1 (low levels) to 1 (high levels), which the model also used to train.

The researchers then performed further model personalization, where they fed the trained model data from some frames of the remaining videos of subjects, and then tested the model on all unseen frames from those videos. Results showed that, with just 5 to 10 percent of data from the new population, the model outperformed traditional models by a large margin — meaning it scored valence and arousal on unseen images much closer to the interpretations of human experts.

This shows the potential of the models to adapt from population to population, or individual to individual, with very little data, Rudovic says. “That’s key,” he says. “When you have a new population, you have to have a way to account for shifting of data distribution [subtle facial variations]. Imagine a model set to analyze facial expressions in one culture that needs to be adapted for a different culture. Without accounting for this data shift, those models will underperform. But if you just sample a bit from a new culture to adapt our model, these models can do much better, especially on the individual level. This is where the importance of the model personalization can best be seen.”

Currently available data for such affective-computing research isn’t very diverse in skin colors, so the researchers’ training data were limited. But when such data become available, the model can be trained for use on more diverse populations. The next step, Feffer says, is to train the model on “a much bigger dataset with more diverse cultures.”

Better machine-human interactions

Another goal is to train the model to help computers and robots automatically learn from small amounts of changing data to more naturally detect how we feel and better serve human needs, the researchers say.

It could, for example, run in the background of a computer or mobile device to track a user’s video-based conversations and learn subtle facial expression changes under different contexts. “You can have things like smartphone apps or websites be able to tell how people are feeling and recommend ways to cope with stress or pain, and other things that are impacting their lives negatively,” Feffer says.

This could also be helpful in monitoring, say, depression or dementia, as people’s facial expressions tend to subtly change due to those conditions. “Being able to passively monitor our facial expressions,” Rudovic says, “we could over time be able to personalize these models to users and monitor how much deviations they have on daily basis — deviating from the average level of facial expressiveness — and use it for indicators of well-being and health.”

A promising application, Rudovic says, is human-robotic interactions, such as for personal robotics or robots used for educational purposes, where the robots need to adapt to assess the emotional states of many different people. One version, for instance, has been used in helping robots better interpret the moods of children with autism.

Roddy Cowie, professor emeritus of psychology at Queen’s University Belfast and an affective computing scholar, says the MIT work “illustrates where we really are” in the field. “We are edging toward systems that can roughly place, from pictures of people’s faces, where they lie on scales from very positive to very negative, and very active to very passive,” he says. “It seems intuitive that the emotional signs one person gives are not the same as the signs another gives, and so it makes a lot of sense that emotion recognition works better when it is personalized. The method of personalizing reflects another intriguing point, that it is more effective to train multiple ‘experts,’ and aggregate their judgments, than to train a single super-expert. The two together make a satisfying package.”

July 24, 2018 | More

A mathematical view on cell packing

A key challenge in the embryonic development of complex life forms is the correct specification of cell positions so that organs and limbs grow in the right places. To understand how cells arrange themselves at the earliest stages of development, an interdisciplinary team of applied mathematicians at MIT and experimentalists at Princeton University identified mathematical principles governing the packings of interconnected cell assemblies.

In a paper entitled “Entropic effects in cell lineage tree packings,” published this month in Nature Physics, the team reports direct experimental observations and mathematical modeling of cell packings in convex enclosures, a biological packing problem encountered in many complex organisms, including humans.

In their study, the authors investigated multi-cellular packings in the egg chambers of the fruit fly Drosophila melanogaster, an important developmental model organism. Each egg chamber contains exactly 16 germline cells that are linked by cytoplasmic bridges, resulting from a series of incomplete cell divisions. The linkages form a branched cell-lineage tree which is enclosed by an approximately spherical hull. At some later stage, one of the 16 cells develops into the fertilizable egg, and the relative positioning of the cells is thought to be important for the biochemical signal exchange during the early stages of development.

The group run by Princeton’s Stanislav Y. Shvartsman, a professor of chemical and biological engineering and of the Lewis-Sigler Institute for Integrative Genomics, succeeded in measuring the spatial positions of individual cells, and the connectivities between them, in more than 100 egg chambers. The experimentalists found it difficult to explain, however, why certain tree configurations occurred much more frequently than others, says Jörn Dunkel, an associate professor in the MIT Department of Mathematics.

So while Shvartsman’s team was able to visualize the cell connections in complex biological systems, Dunkel and postdoc Norbert Stoop, a recent MIT math instructor, began to develop a mathematical framework to describe the statistics of the observed cell packings.

“This project has been a prime example of an extremely enjoyable interdisciplinary collaboration between cell biology and applied mathematics,” Dunkel says. The experiments were performed by Shvartsman’s PhD student Jasmin Imran Alsous, who will begin a postdoctoral position at Adam Martin’s lab in the MIT Department of Biology this fall. They were analyzed in collaboration with postdoc Paul Villoutreix, who is now at the Weizmann Institute of Science in Israel.

Dunkel points out that while human biology is considerably more complex than a fruit fly’s, the underlying tissue organization processes share many common aspects.

“The cell trees in the egg chamber store the history of the cell divisions, like an ancestry tree in a sense,” he says. “What we were able to do was to map the problem of packing the cell tree into an egg chamber onto a nice and simple mathematical model that basically asks: If you take the fundamental convex polyhedrons with 16 vertices, how many different ways are there to embed 16 cells on them while keeping all the bridges intact?”
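The flavor of that counting question can be seen in a deliberately tiny toy version — an assumed, much-simplified formalization, not the paper’s actual model — that brute-forces placements of a small cell chain on a hull’s vertices such that every bridge lies along a hull edge:

```python
from itertools import permutations

def count_packings(tree_edges, hull_edges, n):
    """Count injective placements of the cell tree onto n hull vertices such
    that every cytoplasmic bridge lies along an edge of the enclosing hull."""
    return sum(
        all(frozenset((p[a], p[b])) in hull_edges for a, b in tree_edges)
        for p in permutations(range(n))
    )

chain = [(0, 1), (1, 2), (2, 3)]  # a 4-cell lineage "tree": a simple chain

# Two toy hulls on 4 vertices: a tetrahedron (all pairs connected) and a ring.
tetra = {frozenset(e) for e in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]}
ring = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

print(count_packings(chain, tetra, 4))  # 24: every placement works on a tetrahedron
print(count_packings(chain, ring, 4))   # 8: a sparser hull rules most placements out
```

Even at this toy scale, the bridge constraints sharply cut down the number of admissible arrangements — the same effect that, in the real 16-cell chambers, makes some tree configurations far more likely than others.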

The presence of rigid physical connections between cells adds interesting new constraints that make the problem different from the most commonly considered packing problems, such as the question of how to arrange oranges efficiently so that they can be transported in as few containers as possible. The interdisciplinary study of Dunkel and his colleagues, which combined modern biochemical protein labelling techniques, 3-D confocal microscopy, computational image analysis, and mathematical modeling, shows that constrained tree packing problems arise naturally in biological systems.

Understanding the packing principles of cells in tissues at the various stages of development remains a major challenge. Depending on a variety of biological and physical factors, cells originating from a single founder cell can develop in vastly different ways to form muscles, bones, and organs such as the brain. While the developmental process “involves a huge number of degrees of freedom, the end result in many cases is highly complex yet also very reproducible and robust,” Dunkel says.

“This raises the question, which many people asked before, whether such robust complexity can be understood in terms of a basic set of biochemical, physical, and mathematical rules,” he says. “Our study shows that simple physical constraints, like cell-cell bridges arising from incomplete divisions, can significantly affect cell packings. In essence, what we are trying to do is to identify relatively simple tractable models that allow us to make predictions about these complex systems. Of course, to fully understand embryonic development, mathematical simplification must go hand-in-hand with experimental insight from biology.”

Since incomplete cell-divisions have also been seen in amphibians, mollusks, birds, and mammals, Dunkel hopes the modeling approach developed in the paper might be applicable to those systems as well.

“Physical constraints could play a significant role in determining the preferences for certain types of multicellular organizations, and that may have secondary implications for larger-scale tissue dynamics which are not yet clear to us. A simple way you can think about it is that these cytoplasmic bridges, or other physical connections, can help the organism to localize cells into desired positions,” he says. “This would appear to be a very robust strategy.”

July 23, 2018 | More

Cell-sized robots can sense their environment

Researchers at MIT have created what may be the smallest robots yet that can sense their environment, store data, and even carry out computational tasks. These devices, which are about the size of a human egg cell, consist of tiny electronic circuits made of two-dimensional materials, piggybacking on minuscule particles called colloids.

Colloids, which are insoluble particles or molecules anywhere from a billionth to a millionth of a meter across, are so small they can stay suspended indefinitely in a liquid or even in air. By coupling these tiny objects to complex circuitry, the researchers hope to lay the groundwork for devices that could be dispersed to carry out diagnostic journeys through anything from the human digestive system to oil and gas pipelines, or perhaps to waft through air to measure compounds inside a chemical processor or refinery.

“We wanted to figure out methods to graft complete, intact electronic circuits onto colloidal particles,” explains Michael Strano, the Carbon C. Dubbs Professor of Chemical Engineering at MIT and senior author of the study, which was published today in the journal Nature Nanotechnology. MIT postdoc Volodymyr Koman is the paper’s lead author.

“Colloids can access environments and travel in ways that other materials can’t,” Strano says. Dust particles, for example, can float indefinitely in the air because they are small enough that the random motions imparted by colliding air molecules are stronger than the pull of gravity. Similarly, colloids suspended in liquid will never settle out.

Researchers produced tiny electronic circuits, just 100 micrometers across, on a substrate material which was then dissolved away to leave the individual devices floating freely in solution. (Courtesy of the researchers)

Strano says that while other groups have worked on the creation of similarly tiny robotic devices, their emphasis has been on developing ways to control movement, for example by replicating the tail-like flagellae that some microbial organisms use to propel themselves. But Strano suggests that may not be the most fruitful approach, since flagellae and other cellular movement systems are primarily used for local-scale positioning, rather than for significant movement. For most purposes, making such devices more functional is more important than making them mobile, he says.

Tiny robots made by the MIT team are self-powered, requiring no external power source or even internal batteries. A simple photodiode provides the trickle of electricity that the tiny robots’ circuits require to power their computation and memory circuits. That’s enough to let them sense information about their environment, store those data in their memory, and then later have the data read out after accomplishing their mission.

The microscopic devices, combining electronic circuits with colloid particles, are aerosolized inside a chamber and then a substance to be analyzed is introduced, where it can interact with the devices. The devices are then collected on microscope slides so they can be tested. (Courtesy of the researchers)

Such devices could ultimately be a boon for the oil and gas industry, Strano says. Currently, the main way of checking for leaks or other issues in pipelines is to have a crew physically drive along the pipe and inspect it with expensive instruments. In principle, the new devices could be inserted into one end of the pipeline, carried along with the flow, and then removed at the other end, providing a record of the conditions they encountered along the way, including the presence of contaminants that could indicate the location of problem areas. The initial proof-of-concept devices didn’t have a timing circuit that would indicate the location of particular data readings, but adding that is part of ongoing work.

Similarly, such particles could potentially be used for diagnostic purposes in the body, for example to pass through the digestive tract searching for signs of inflammation or other disease indicators, the researchers say.

Most conventional microchips, such as silicon-based or CMOS, have a flat, rigid substrate and would not perform properly when attached to colloids that can experience complex mechanical stresses while travelling through the environment. In addition, all such chips are “very energy-thirsty,” Strano says. That’s why Koman decided to try out two-dimensional electronic materials, including graphene and transition-metal dichalcogenides, which he found could be attached to colloid surfaces, remaining operational even after being launched into air or water. And such thin-film electronics require only tiny amounts of energy. “They can be powered by nanowatts with subvolt voltages,” Koman says.

As a demonstration of how such particles might be used to test biological samples, the team placed a solution containing the devices on a leaf, and then used the devices’ internal reflectors to locate them for testing by shining a laser at the leaf. (Courtesy of the researchers)

Why not just use the 2-D electronics alone? Without some substrate to carry them, these tiny materials are too fragile to hold together and function. “They can’t exist without a substrate,” Strano says. “We need to graft them to the particles to give them mechanical rigidity and to make them large enough to get entrained in the flow.”

But the 2-D materials “are strong enough, robust enough to maintain their functionality even on unconventional substrates” such as the colloids, Koman says.

The nanodevices they produced with this method are autonomous particles that contain electronics for power generation, computation, logic, and memory storage. They are powered by light and contain tiny retroreflectors that allow them to be easily located after their travels. They can then be interrogated through probes to deliver their data. In ongoing work, the team hopes to add communications capabilities to allow the particles to deliver their data without the need for physical contact.

Other efforts at nanoscale robotics “haven’t reached that level” of creating complex electronics that are sufficiently small and energy efficient to be aerosolized or suspended in a colloidal liquid. These are “very smart particles, by current standards,” Strano says, adding, “We see this paper as the introduction of a new field” in robotics.

The research team, all at MIT, included Pingwei Liu, Daichi Kozawa, Albert Liu, Anton Cottrill, Youngwoo Son, and Jose Lebron. The work was supported by the U.S. Office of Naval Research and the Swiss National Science Foundation.

July 23, 2018 | More

School of Engineering second quarter 2018 awards

Members of the MIT engineering faculty receive many awards in recognition of their scholarship, service, and overall excellence. Every quarter, the School of Engineering publicly recognizes their achievements by highlighting the honors, prizes, and medals won by faculty working in our academic departments, labs, and centers.

The following awards were given from April through June, 2018. Submissions for future listings are welcome at any time.

Emilio Baglietto, of the Department of Nuclear Science and Engineering, won the Ruth and Joel Spira Award for Distinguished Teaching on May 14.

Hari Balakrishnan, Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won the HKN Best Instructor Award on May 18.

Robert C. Berwick, of the Department of Electrical Engineering and Computer Science, won the Jerome H. Saltzer Award for Excellence in Teaching on May 18.

Michael Birnbaum, of the Department of Biological Engineering and the Koch Institute for Integrative Cancer Research, became a 2018 Pew-Stewart Scholar for Cancer Research on June 14.

Lydia Bourouiba, of the Department of Civil and Environmental Engineering, won the Smith Family Foundation Odyssey Award on June 25.

Michele Bustamante, of the Materials Research Laboratory, was awarded a 2018-19 MRS/TMS Congressional Science and Engineering Fellowship on May 22.

Oral Buyukozturk, of the Department of Civil and Environmental Engineering, won the George W. Housner Medal for Structural Control and Monitoring on May 31.

Luca Carlone, of the Department of Aeronautics and Astronautics, won the IEEE Transactions on Robotics “King-Sun Fu” Best Paper Award on May 24.

Gang Chen, of the Department of Mechanical Engineering, was elected a 2018 fellow to the American Academy of Arts and Sciences on April 18.

Erik Demaine, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, was awarded the Burgess (1952) and Elizabeth Jamieson Prize for Excellence in Teaching on May 18.

Srinivas Devadas, of the Department of Electrical Engineering and Computer Science, won the Bose Award for Excellence in Teaching in May.

Thibaut Divoux, of the Department of Civil and Environmental Engineering, won the 2018 Arthur B. Metzner Early Career Award of the Society of Rheology on May 3.

Dennis M. Freeman, of the Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, won an Innovative Seminar Award on May 16; he also won the Burgess (1952) and Elizabeth Jamieson Prize for Excellence in Teaching on May 18.

Neville Hogan, of the Department of Mechanical Engineering, won the 2018 EMBS Academic Career Achievement Award on May 10.

Gim P. Hom, of the Department of Electrical Engineering and Computer Science, was honored with the IEEE/Association for Computing Machinery Best Advisor Award on May 18.

Rohit Karnik, of the Department of Mechanical Engineering, and Regina Barzilay and John N. Tsitsiklis, of the Department of Electrical Engineering and Computer Science, won the Ruth and Joel Spira Award for Distinguished Teaching in May.

Dina Katabi, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, was presented with an honorary degree from The Catholic University of America on May 12; she also won the Association for Computing Machinery 2017 Prize in Computing on April 4.

Rob Miller, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won the Richard J. Caloggero Award on May 18.

Eytan Modiano, of the Department of Aeronautics and Astronautics and the Laboratory for Information and Decision Systems, won the IEEE INFOCOM Best Paper Award on April 18.

Stefanie Mueller, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, received an honorable mention for the Association for Computing Machinery Doctoral Dissertation Award on June 23. She also won the EECS Outstanding Educator Award on May 18.

Dava J. Newman, of the Department of Aeronautics and Astronautics, won the AIAA Jeffries Aerospace Medicine and Life Sciences Research Award on May 4.

Christine Ortiz, of the Department of Materials Science and Engineering, was awarded a J-WEL Grant on May 7.

Ronitt Rubinfeld, of the Department of Electrical Engineering and Computer Science, won the Capers and Marion McDonald Award for Excellence in Mentoring and Advising in May.

Jennifer Rupp, of the Department of Materials Science and Engineering, won a Displaying Futures Award on June 12.

Alex K. Shalek, of the Institute for Medical Engineering and Science, was named one of the 2018 Pew-Stewart Scholars for Cancer Research on June 14.

Alex Slocum, of the Department of Mechanical Engineering, won the Ruth and Joel Spira Outstanding Design Educator Award on June 11.

Michael P. Short, of the Department of Nuclear Science and Engineering, won the Junior Bose Award in May.

Joseph Steinmeyer, of the Department of Electrical Engineering and Computer Science, won the Louis D. Smullin (’39) Award for Excellence in Teaching on May 18.

Christopher Terman, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won an MIT Gordon Y. Billard Award on May 10.

Tao B. Schardl, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won an EECS Outstanding Educator Award on May 18.

Yang Shao-Horn, of the Department of Mechanical Engineering, won the Faraday Medal on April 19.

Vinod Vaikuntanathan, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won the Harold E. Edgerton Faculty Achievement Award on April 26.

Kripa Varanasi, of the Department of Mechanical Engineering, won the Gustus L. Larson Memorial Award and the Frank E. Perkins Award for Excellence in Graduate Advising on May 10.

David Wallace, of the Department of Mechanical Engineering, was honored with the Ben C. Sparks Medal on April 27.

Amos Winter, of the Department of Mechanical Engineering, was named a leader in New Voices in Sciences, Engineering, and Medicine on June 8.

Bilge Yildiz, of the Department of Nuclear Science and Engineering and the Department of Materials Science and Engineering, won the Ross Coffin Purdy Award on June 22.

Laurence R. Young, of the Department of Aeronautics and Astronautics and the Institute for Medical Engineering and Science, won the Life Sciences and Biomedical Engineering Branch Aerospace Medical Association Professional Excellence Award on April 27.

July 17, 2018 | More

Connor Coley named 2018 DARPA Riser

The U.S. Defense Advanced Research Projects Agency (DARPA) has honored Connor Coley, who is currently pursuing his graduate degree in chemical engineering, as one of 50 DARPA Risers for 2018.

The agency describes its Risers as “up-and-coming standouts in their fields, capable of discovering and leveraging innovative opportunities for technological surprise — the heart of DARPA’s national security mission.”

Currently a member of the Klavs Jensen and William Green research groups, Coley is focused on improving automation and computer assistance in synthesis planning and reaction optimization with medicinal chemistry applications. He is more broadly interested in the design and construction of automated microfluidic platforms for analytics (e.g. kinetic or process understanding) and on-demand synthesis.

The goal of many synthetic efforts, particularly in early stage drug discovery, is to produce a target small molecule of interest. At MIT, Coley’s early graduate research focused on streamlining organic synthesis from an experimental perspective: screening and optimizing chemical reactions in a microfluidic platform using as little material as possible.

But even with an automated platform to do just that, researchers need to know exactly what reaction to run. They must first figure out the best synthetic route to make the target compound and then turn to the chemical literature to define a suitable parameter space to operate within. As part of the DARPA Make-It program, Coley and his colleagues started working toward a much more ambitious goal. Instead of automating only the execution of reactions, could a researcher automate the entire workflow of route identification, process development, and experimental execution?

Coley’s recent research has focused on various aspects of computer-aided synthesis planning to help make a fully autonomous synthetic chemistry platform, leveraging techniques in machine learning to meaningfully generalize historical reaction data. This includes questions of how best to propose novel retrosynthetic pathways and validate those suggestions in silico before carrying them out in the laboratory. The overall goal of his work is to develop models and computational approaches that — in combination with more traditional automation techniques — will improve the efficiency of small molecule discovery.

“It’s been a privilege to participate in the Make-It program and I am grateful for being named a DARPA Riser,” Coley says. “I’m excited to take part in the D60 anniversary event and talk about my ideas for how this work can be extended to more broadly transform the process of molecular discovery.”

Coley received his BS in chemical engineering from Caltech in 2014 and is a recipient of MIT’s Robert T. Haslam Presidential Graduate Fellowship.

Coley will participate in D60, DARPA’s 60th Anniversary Symposium, Sept. 5-7 at Gaylord National Harbor. D60 will provide attendees the opportunity to engage with up-and-coming innovators, including some of today’s most creative and accomplished scientists and technologists. DARPA works to inspire attendees to explore future technologies, their potential application to tomorrow’s technical and societal challenges, and the dilemmas those applications may engender. D60 participants will have the opportunity to be a part of the new relationships, partnerships, and communities of interest that this event aims to foster, and advance dialogue on the pursuit of science in the national interest.

July 16, 2018 | More

Cooling buildings worldwide

About 40 percent of all the energy consumed by buildings worldwide is used for space heating and cooling. With the warming climate as well as growing populations and rising standards of living — especially in hot, humid regions of the developing world — the level of cooling and dehumidification needed to ensure comfort and protect human health is predicted to rise precipitously, pushing up global energy demand.

Much discussion is now focusing on replacing the greenhouse gases frequently used as refrigerants in today’s air conditioners. But another pressing concern is that most existing systems are extremely energy-inefficient.

“The main reason they’re inefficient is that they have two tasks to perform,” says Leslie Norford, the George Macomber (1948) Professor in Construction Management in the Department of Architecture. “They need to lower temperature and remove moisture, and doing both those things together takes a lot of extra energy.”

The standard approach to dehumidification is to run cold water through pipes inside a building space. If that water is colder than the dew-point temperature, water vapor in the air will condense on the outer surfaces of the pipes. (Think of water droplets beading up on a cold soda can on a hot, humid day.) In an air conditioning system, that water may drip outside or, in a large-scale system serving a building, be gathered into a collection pan.

The problem is that running a chiller to get water that cold takes a lot of electricity — and the water is far colder than needed to lower the temperature in the room. Separating the two functions brings energy savings on two fronts. Removing moisture from outdoor air brought into the building requires cold water but far less of it than is needed to remove heat from occupied areas. With that job done, running cool (not cold) water through pipes in the ceiling or floor will maintain a comfortable temperature.

A decade ago, Norford and his colleagues at the Masdar Institute in Abu Dhabi confirmed the energy benefits of maintaining comfortable temperatures using cool-water pipes in the room — especially when indoor spaces are pre-cooled at night, when electricity is cheap and the outside air is cool. But the dehumidification process remained inefficient. Condensing water vapor is inherently energy-intensive, so the researchers needed to find another way to remove humidity.

Borrowing from desalination systems

Two years ago, a promising alternative was brought to Norford’s attention by John Lienhard, MIT’s Abdul Latif Jameel Professor of Water and Mechanical Engineering. Lienhard is Norford’s colleague at the Center for Environmental Sensing and Modeling, a research group at the Singapore-MIT Alliance for Research and Technology. Lienhard was working on energy-efficient technologies for desalination. Boiling seawater to precipitate the salt is very energy-intensive, so Lienhard’s group was looking instead at using semipermeable membranes that let water molecules through but stop salt ions. Norford thought a similar membrane could be designed that allows water vapor molecules to pass through so they can be separated from other, larger molecules that make up the indoor air.

That concept became the subject of a project undertaken by two mechanical engineering graduate students: Tianyi Chen, who was working with Norford on the impacts of outdoor airflows on building energy performance, and Omar Labban, who was collaborating with Lienhard on using membranes in desalination systems. The students met in an advanced energy conversion class taught by Ahmed Ghoniem, the Ronald C. Crane (’72) Professor of Mechanical Engineering. Paired up for a class project, they identified air conditioning as a topic that would draw on their respective areas of research interest and use their newly acquired expertise in thermodynamic modeling and analysis.

Their first task was to develop a thermodynamic model of the fundamental processes involved in air conditioning. Using that model, they calculated the theoretical least work needed to achieve dehumidification and cooling. They could then calculate the so-called second-law efficiency of a given technology, that is, the ratio of the theoretical minimum to its actual energy consumption. Using that metric as a benchmark, they could perform a systematic, consistent comparison of various designs in different climates.

As an industrial benchmark for comparison, they used coefficient of performance (COP), a metric that shows how many units of cooling are provided for each unit of input electricity. The COP is used by today’s manufacturers, so it could show how different designs might perform relative to current equipment. For reference, Norford cites the COP of commercially available systems as ranging from 5 to 7. “But manufacturers are constantly coming up with better equipment, so the goalposts for competitors are continually moving,” he says.
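The relationship between the two metrics the researchers used can be made concrete with a small numerical sketch. The figures below are illustrative only (they are not taken from the study); the helper functions simply encode the two definitions given above.

```python
# Illustrative sketch of the two benchmarks described above.
# COP (coefficient of performance): units of cooling delivered per unit
# of input electricity. Second-law efficiency: the ratio of the
# theoretical minimum work to the work actually consumed.

def cop(cooling_kw: float, electric_kw: float) -> float:
    """Coefficient of performance: cooling output over electrical input."""
    return cooling_kw / electric_kw

def second_law_efficiency(min_work_kw: float, actual_work_kw: float) -> float:
    """Ratio of the thermodynamic minimum work to actual consumption."""
    return min_work_kw / actual_work_kw

# Hypothetical system delivering 35 kW of cooling for 5 kW of electricity:
print(cop(35, 5))  # a COP at the top of the commercial range cited above

# If the theoretical minimum work for the same duty were 1 kW, the system
# would be using five times the minimum:
print(second_law_efficiency(1, 5))
```

A system running at "five to 10 times more energy than the theoretical minimum," as the article puts it, corresponds to a second-law efficiency between 0.1 and 0.2.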

Norford’s earlier research had shown that cool-water pipes in the ceiling or floor can efficiently handle indoor cooling loads — that is, the heat coming from people, computers, sunlight, and so on. The researchers therefore focused on removing heat and moisture from outdoor air brought in for ventilation.

They started by examining the performance of a commercially available air conditioner that uses the standard vapor compression system (VCS) that has been used for the past century. Their analysis quantified the inefficiency of not separating temperature and humidity control. Further, it pinpointed a major source of that inefficiency: the condensation process. Their results showed that the system was least efficient in cool, humid conditions and improved as conditions got hotter and drier. But at its best, it used five to 10 times more energy than the theoretical minimum required. Thus, there was significant opportunity for improvement.

Membranes and desiccants

To explore the use of membrane technology, the researchers began with a simple system incorporating a single membrane-containing unit. Outdoor air enters the unit, and a vacuum pump pulls the water vapor in it across the membrane. The pump then raises the pressure to ambient levels so the water vapor becomes liquid water before being ejected from the system. The no-longer-humid outdoor air passes from the membrane unit through a conventional cooling coil and enters the indoor space, providing fresh air for ventilation and pushing some warmer, humid exhaust air outdoors.

According to their analysis, the system performs best in relatively dry conditions, but even then it achieves a COP of only 1.3 — not high enough to compete with a current system. The problem is that running the vacuum pump with high compression ratios consumes a lot of energy.

To help cool the incoming air stream, the researchers tried adding a heat exchanger to transfer heat from the warm incoming air to the cool exhaust air and a condenser to turn water vapor captured by the membrane unit into cool water for the cooling coil. Those changes pushed the COP up to 2.4 — better but not high enough.

The researchers next considered options using desiccants, materials that have a strong tendency to adsorb water and are often packed with consumer products to keep them dry. In air conditioning systems, a desiccant coating is typically mounted on a wheel that’s positioned between the incoming and exhaust airflows. As the wheel rotates, a portion of the desiccant first passes through the incoming air and adsorbs moisture from it. It then passes through the heated exhaust air, which dries it so it’s ready to adsorb more moisture on its next pass through the incoming air.

The researchers began by analyzing several systems incorporating a desiccant wheel, but the gains in COP were limited. They next tried using the desiccant and membrane technologies together. In this design, a desiccant wheel, a membrane moisture exchanger, and a heat exchanger all transfer moisture and heat from the incoming air to the exhaust air. A cooling coil further cools the incoming air before it’s delivered to the indoor space. A heat pump warms the exhaust air, which then passes through the desiccant to dry and regenerate it for continued use.

This complicated “hybrid” system yields a COP of 4 under a wide range of temperatures and humidity. But that’s still not high enough to compete.

Two-membrane system

The researchers then tried a novel system that omits the desiccant wheel but includes two membrane units, yielding a design that’s relatively simple but more speculative than the others. The key new concept involved the fate of the water vapor in the incoming air stream.

In this system, a vacuum pump pulls the water vapor through a membrane — now called membrane unit 1. But the captured water vapor is then pushed across the membrane in unit 2 and joins the exhaust air stream — without ever turning into liquid water. In this arrangement, the vacuum pump only has to ensure that the vapor pressure is higher on the upstream side of membrane 2 than it is on the downstream side so that the water vapor is pushed through. There’s no need to raise the pressure to ambient levels, which would condense the water vapor, so running the vacuum pump takes less work. That novel approach results in a COP that can reach as high as 10 and achieves a COP of 9 at many combinations of temperature and humidity.

Different options for different cities

For most of the systems analyzed, performance varies at different combinations of ambient temperature and humidity level. To investigate the practical impact of that variability, the researchers examined how selected systems would perform in four cities with different climates. In each case, the analysis assumed an average summertime outdoor temperature and relative humidity.

In general, the systems they considered outperformed the conventional VCS operating at COPs consistent with current practice. For example, in Dubai (representing a tropical desert climate), using the hybrid membrane-desiccant system could reduce energy consumption by as much as 30 percent relative to the standard VCS. In Las Vegas (a subtropical arid climate), where humidity is lower, a desiccant-based system (without the membrane) is the most efficient option, potentially also bringing a 30 percent reduction.

In New York (a subtropical humid climate), all the designs look good, but the desiccant-based system does best with a 70 percent reduction in overall energy consumption. And in Singapore (a tropical oceanic climate), the desiccant system and the combined membrane-desiccant system do equally well, with a potential savings of as much as 40 percent — and given the costs of the two options, the desiccant-alone system emerges as the top choice.

Taken together, the researchers’ findings provide two key messages for achieving more efficient indoor cooling worldwide. First, using membranes and desiccants can push up air conditioner efficiency, but the real performance gains come when such technologies are incorporated into carefully designed and integrated systems. And second, the local climate and the availability of resources — both energy and water — are critical factors to consider when deciding what air conditioning system will deliver the best performance in a given area of the world.

This article appears in the Spring 2018 issue of Energy Futures, the magazine of the MIT Energy Initiative.

July 11, 2018 | More

Networks in aerospace

Along with asteroids, the moon, and the International Space Station, there are hundreds of small, 10-centimeter cubes orbiting planet Earth. Alexa Aguilar, a first-year graduate student in the Department of Aeronautics and Astronautics, is helping these small satellites, called CubeSats, communicate.

“We’d like to expand this to what we call ‘swarm technology.’ Imagine you have three, four, up to, you know, x-amount of little cubes that can talk to each other with lasers,” she explains. “You could have these massive constellations of them! For example, you could have a cluster [of CubeSats] here, a cluster there, and each of the clusters has its own camera. They could talk to each other with lasers, and they could send imaging data, and you could computationally mesh all of the [individual] pictures that they’re taking to form a massive picture in space.”

A picture like this could offer a cost-effective way to monitor Earth. In cases of natural disasters, which require rapid response and constant updates, such observational capabilities could be life-saving.

Aguilar is a builder of connections in many other aspects of her life as well. A space enthusiast, supporter of women in STEM, and mentor to other AeroAstro students, she exudes a natural warmth and self-assurance that brings people together.

Discovering herself in Cambridge

As an electrical engineering student at the University of Idaho, Aguilar didn’t always foresee herself at MIT. “My path to MIT really blossomed out of a summer internship at NASA’s Jet Propulsion Laboratory (JPL) when I was in undergrad,” she says. “I met a lot of awesome people, and everyone was doing something more incredible than the last person I’d heard.”

Some of the friends she made there encouraged her to look into the Space Telecommunications, Astronomy, and Radiation Laboratory (STAR Lab), where Aguilar now does her research.

Despite the many friendships she has forged at MIT, Aguilar sometimes struggles with the distance from her close-knit family in Idaho. Sundays are her unofficial Skype day with her mom, with whom she is particularly close. (They share a hard-headedness, Aguilar says with a smile.) Luckily, New England offers at least one of the comforts of home: Aguilar is an avid skier. This winter she attended the yearly MIT Graduate Student Council ski trip with her AeroAstro and JPL colleagues — along with some new MIT friends.

Aguilar enjoys the modern energy of Cambridge, calling it “a really nice mesh of cutting-edge technology and young, excited people who want to do cool things with it.” She also takes full advantage of the cultural opportunities in the area. She and her boyfriend share an avid interest in Japanese art and culture, and enjoy visiting the Museum of Fine Arts, which houses one of the largest collections of Japanese art in the world outside of Japan. Most recently, they visited a vibrant exhibit by artist Takashi Murakami.

Aguilar also circulates among the area restaurants, which she has thoroughly researched. “I’m trying really hard to be a foodie,” she laughs earnestly. Her favorite area restaurant, Coreanos, offers Korean-Mexican fusion — perhaps not coincidentally, this reflects Aguilar’s own heritage.

Aguilar’s mother is Korean and her father is Mexican and Native American, but Aguilar’s upbringing wasn’t strongly influenced by her parents’ ethnic backgrounds. Recently, though, Aguilar has felt pulled to explore her Mexican heritage more fully.

“It’s actually been a really interesting journey in discovering what my cultural background means to me,” she says. Aguilar recalls her grandfather, a Mexican immigrant, telling her she didn’t need to learn Spanish as a child. However, she also recalls seeing him transform into a new person at his favorite Mexican restaurant, where he would banter with the cooks and servers in his native language.

Though he has recently passed away, Aguilar and her sister are actively trying to use Spanish to feel closer to their grandfather and to understand his heritage: “[I think about] the little ways he did pass his culture on that he didn’t realize. … It’s like discovering a part of myself.”

Women supporting women

Another activity close to Aguilar’s heart is her involvement with Graduate Women in Aerospace Engineering (GWAE), a student group geared toward recruiting and supporting women in the AeroAstro department at MIT. “We have an incredible support group. The GWAE [members] are all pretty tight-knit, which is exactly the kind of community we are trying to foster,” she says.

The group has four arms: community building, a women-in-STEM speaker series, mentorship, and outreach and recruitment. As the current co-president of GWAE, Aguilar is involved in all of these efforts. “It’s important to me that women have this kind of support and encouragement because I wouldn’t be here without the support of women,” she says.

Aguilar welcomes the opportunity to build that same support in AeroAstro today.

She has many thoughts on why more women don’t pursue graduate work in aerospace engineering: “I think what happens … is a lot of women undergraduates go into industry — which is awesome, we’re really happy about that because it means a lot of them get job offers and they’re excited to go out and work. But most of the incoming graduate women come from undergraduate aerospace programs, and I think a lot of women [from other fields] may be intimidated to apply to the program. So then we don’t have the input to compensate for the number of women who have gone off to industry.”

As such, GWAE works hard to make the department seem approachable — a task to which Aguilar seems particularly well-suited.

She is grateful to have a female advisor in Kerri Cahoy, the Rockwell International Career Development Professor, whom she deeply admires. “She’s a rock star. She’s amazing — I don’t know how she does it. She’s involved in multiple flight projects, which are projects that are bound for orbit … she has a career, she has a family, she’s super successful. … [I want to say,] teach me your secrets!”

Aguilar is also a mentor to undergraduate women in the department and delights in helping her mentees secure the best internships and other opportunities that they can.

While Aguilar is still deciding whether to pursue a doctoral degree in the AeroAstro program or to conclude her work with a master’s degree, she is confident that she will remain involved in space research and engineering. Space, to Aguilar, is less a frontier than a dynamo for scientific progress: It generates research that, when applied, will eventually also transform more down-to-Earth technologies.

“Space research is going to be where so much exciting new science and technology comes from,” Aguilar says. “You hear a lot about Mars 2020 … but the technology it takes to get us there is actually really incredible. I’m excited to see emergent applications like how our internet is going to change because we’re trying to get people to Mars. We need the same technology to send a message to the moon that we need to relay to Mars, and it’s that technology that’s going to have global impact.”

June 22, 2018 | More

Daniel Hastings named head of Department of Aeronautics and Astronautics

Daniel E. Hastings, the Cecil and Ida Green Education Professor at MIT, has been named head of the Department of Aeronautics and Astronautics, effective Jan. 1, 2019.

“Dan has a remarkable depth of knowledge about MIT, and has served the Institute in a wide range of capacities,” says Anantha Chandrakasan, dean of the School of Engineering. “He has been a staunch advocate for students, for research, and for MIT’s international activities. We are fortunate to have him join the School of Engineering’s leadership team, and I look forward to working with him.”

Hastings, whose contributions to spacecraft and space system-environment interactions, space system architecture, and leadership in aerospace research and education earned him election to the National Academy of Engineering in 2017, has held a range of roles involving research, education, and administration at MIT.

Hastings has taught courses in space environment interactions, rocket propulsion, advanced space power and propulsion systems, space policy, and space systems engineering since he first joined the faculty in 1985. He became director of the MIT Technology and Policy Program in 2000 and was named director of the Engineering Systems Division in 2004. He served as dean for undergraduate education from 2006 to 2013, and from 2014 to 2018 he has been director of the Singapore-MIT Alliance for Research and Technology (SMART).

Hastings has also had an active career of service outside MIT. His many external appointments include serving as chief scientist from 1997 to 1999 for the U.S. Air Force, where he led influential studies of Air Force investments in space and of preparations for a 21st-century science and technology workforce. He was also the chair of the Air Force Scientific Advisory Board from 2002 to 2005; from 2002 to 2008, he was a member of the National Science Board.

A fellow of the American Institute of Aeronautics and Astronautics (AIAA), Hastings was also awarded the Losey Atmospheric Sciences Award from the AIAA in 2002. He is a fellow (academician) of the International Astronautical Federation and the International Council on Systems Engineering. The U.S. Air Force granted him its Exceptional Service Award in 2008, and in both 1997 and 1999 gave him the Air Force Distinguished Civilian Award. He received the National Reconnaissance Office Distinguished Civilian Award in 2003. He was also the recipient of MIT’s Gordon Billard Award for “special service of outstanding merit performed for the Institute” in 2013.

Hastings received his bachelor’s degree from Oxford University in 1976, and MS and PhD degrees in aeronautics and astronautics from MIT in 1978 and 1980, respectively.

Edward M. Greitzer, the H.N. Slater Professor of Aeronautics and Astronautics, will serve as interim department head from July 1 to Dec. 31, 2018.

Hastings will replace Jaime Peraire, the H. N. Slater Professor in Aeronautics and Astronautics, who has been department head since July 1, 2011. “I am grateful to Jaime for his excellent work over the last seven years,” Chandrakasan noted. “During his tenure as department head, he led the creation of a new strategic plan and made significant steps in its implementation. He addressed the department’s facilities challenges, strengthened student capstone- and research-project experience, and led the 2014 AeroAstro centennial celebrations, which highlighted the tremendous contributions MIT has made to aerospace and national service.”

June 20, 2018 | More

Novel transmitter protects wireless data from hackers

Today, more than 8 billion devices are connected around the world, forming an “internet of things” that includes medical devices, wearables, vehicles, and smart household and city technologies. By 2020, experts estimate that number will rise to more than 20 billion devices, all uploading and sharing data online.

But those devices are vulnerable to hacker attacks that locate, intercept, and overwrite the data, jamming signals and generally wreaking havoc. One method to protect the data is called “frequency hopping,” which sends each data packet, containing thousands of individual bits, on a random, unique radio frequency (RF) channel, so hackers can’t pin down any given packet. Hopping large packets, however, is just slow enough that hackers can still pull off an attack.

Now MIT researchers have developed a novel transmitter that frequency hops each individual 1 or 0 bit of a data packet, every microsecond, which is fast enough to thwart even the quickest hackers.

The transmitter leverages frequency-agile devices called bulk acoustic wave (BAW) resonators and rapidly switches between a wide range of RF channels, sending information for a data bit with each hop. In addition, the researchers incorporated a channel generator that, each microsecond, selects the random channel to send each bit. On top of that, the researchers developed a wireless protocol — different from the protocol used today — to support the ultrafast frequency hopping.

“With the current existing [transmitter] architecture, you wouldn’t be able to hop data bits at that speed with low power,” says Rabia Tugce Yazicigil, a postdoc in the Department of Electrical Engineering and Computer Science and first author on a paper describing the transmitter, which is being presented at the IEEE Radio Frequency Integrated Circuits Symposium. “By developing this protocol and radio frequency architecture together, we offer physical-layer security for connectivity of everything.” Initially, this could mean securing smart meters that read home utilities, control heating, or monitor the grid.

“More seriously, perhaps, the transmitter could help secure medical devices, such as insulin pumps and pacemakers, that could be attacked if a hacker wants to harm someone,” Yazicigil says. “When people start corrupting the messages [of these devices] it starts affecting people’s lives.”

Co-authors on the paper are Anantha P. Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science (EECS); former MIT postdoc Phillip Nadeau; former MIT undergraduate student Daniel Richman; EECS graduate student Chiraag Juvekar; and visiting research student Kapil Vaidya.

Ultrafast frequency hopping

One particularly sneaky attack on wireless devices is called selective jamming, where a hacker intercepts and corrupts data packets transmitting from a single device but leaves all other nearby devices unscathed. Such targeted attacks are difficult to identify, as they’re often mistaken for a poor wireless link, and are difficult to combat with current packet-level frequency-hopping transmitters.

With frequency hopping, a transmitter sends data on various channels, based on a predetermined sequence shared with the receiver. Packet-level frequency hopping sends one data packet at a time, on a single 1-megahertz channel, across a range of 80 channels. A BLE-type transmitter takes around 612 microseconds to send a packet on that channel. But attackers can locate the channel during the first 1 microsecond and then jam the packet.

“Because the packet stays in the channel for a long time, and the attacker only needs a microsecond to identify the frequency, the attacker has enough time to overwrite the data in the remainder of the packet,” Yazicigil says.
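The timing asymmetry behind this attack can be sketched in a few lines. The sketch below is illustrative, not the paper's design: it derives a shared pseudorandom channel sequence from a seed (standing in for whatever sequence the transmitter and receiver actually preshare) and shows how much of the 612-microsecond dwell time remains for a jammer after the 1 microsecond it needs to find the channel.

```python
import random

N_CHANNELS = 80   # 1 MHz channels in the 2.4 GHz band

def hop_sequence(shared_seed, n_hops):
    """Both ends derive the same pseudorandom channel sequence.
    random.Random stands in for the real preshared sequence."""
    rng = random.Random(shared_seed)
    return [rng.randrange(N_CHANNELS) for _ in range(n_hops)]

DWELL_US = 612    # a BLE-type packet occupies its channel this long
DETECT_US = 1     # time an attacker needs to locate the channel

channels = hop_sequence(shared_seed=0xBEEF, n_hops=5)
jam_window_us = DWELL_US - DETECT_US   # time left to overwrite the packet
```

With packet-level hopping, `jam_window_us` is 611 microseconds; with bit-level hopping every microsecond, the window shrinks to essentially nothing.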

To build their ultrafast frequency-hopping method, the researchers first replaced a crystal oscillator — which vibrates to create an electrical signal — with an oscillator based on a BAW resonator. However, the BAW resonators only cover about 4 to 5 megahertz of frequency channels, falling far short of the 80-megahertz range available in the 2.4-gigahertz band designated for wireless communication. Continuing recent work on BAW resonators — in a 2017 paper co-authored by Chandrakasan, Nadeau, and Yazicigil — the researchers incorporated components that divide an input frequency into multiple frequencies. An additional mixer component combines the divided frequencies with the BAW’s radio frequencies to create a host of new radio frequencies that can span about 80 channels.
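The arithmetic of that frequency expansion can be illustrated with a toy model. The numbers below are assumptions chosen to make the tiling clean, not the paper's exact divider ratios: a BAW band tunable over 5 one-megahertz channels, shifted in coarse steps by mixed-in offsets until the copies tile all 80 channels.

```python
BAW_SPAN = 5       # channels the BAW oscillator can tune on its own
TARGET_SPAN = 80   # channels needed across the 2.4 GHz band

# Mixing the BAW band with divided-down offset tones shifts the
# 5-channel band in coarse steps; 16 shifted copies tile the range.
offsets = [k * BAW_SPAN for k in range(TARGET_SPAN // BAW_SPAN)]
covered = {off + fine for off in offsets for fine in range(BAW_SPAN)}
```

Here `covered` equals the full set of 80 channels, which is the role the divider-plus-mixer chain plays in the actual transmitter.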

Randomizing everything

The next step was randomizing how the data is sent. In traditional modulation schemes, when a transmitter sends data on a channel, that channel displays an offset — a slight deviation in frequency. With BLE modulation, that offset is always a fixed 250 kilohertz for a 1 bit and a fixed -250 kilohertz for a 0 bit. A receiver simply notes the channel’s 250-kilohertz or -250-kilohertz offset as each bit is sent and decodes the corresponding bits.
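That fixed mapping between offset sign and bit value can be written out directly. This is a minimal sketch of the BLE-style scheme described above, modeling each symbol as a single tone frequency rather than a full modulated waveform:

```python
OFFSET_HZ = 250_000  # fixed: +250 kHz encodes a 1, -250 kHz encodes a 0

def ble_modulate(carrier_hz, bits):
    """Map each bit to the carrier plus its fixed frequency offset."""
    return [carrier_hz + (OFFSET_HZ if b else -OFFSET_HZ) for b in bits]

def ble_demodulate(carrier_hz, tones):
    """Recover each bit from the sign of the observed offset."""
    return [1 if f > carrier_hz else 0 for f in tones]

tones = ble_modulate(2_414_000_000, [1, 0, 1])
bits = ble_demodulate(2_414_000_000, tones)
```

The weakness is visible in the code: `ble_demodulate` needs nothing secret, so any eavesdropper who knows the carrier frequency can run it too.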

But that means, if hackers can pinpoint the carrier frequency, they too have access to that information. If hackers can see a 250-kilohertz offset on, say, channel 14, they’ll know that’s an incoming 1 and begin messing with the rest of the data packet.

To combat that, the researchers employed a system that, each microsecond, generates a pair of separate channels across the 80-channel spectrum. Using a secret key preshared with the transmitter, the receiver does some calculations to designate one channel to carry a 1 bit and the other to carry a 0 bit. The channel carrying the desired bit will always display more energy. The receiver then compares the energy in those two channels, notes which one has higher energy, and decodes the bit sent on that channel.

For example, by using the preshared key, the receiver will calculate that 1 will be sent on channel 14 and a 0 will be sent on channel 31 for one hop. But the transmitter only wants the receiver to decode a 1. The transmitter will send a 1 on channel 14, and send nothing on channel 31. The receiver sees channel 14 has a higher energy and, knowing that’s a 1-bit channel, decodes a 1. In the next microsecond, the transmitter selects two more random channels for the next bit and repeats the process.
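The pair-selection and energy-comparison steps above can be sketched end to end. This is a hypothetical model, not the paper's circuit: `hashlib` stands in for whatever keyed function the real protocol uses to derive each hop's channel pair, and energy detection is reduced to comparing two numbers.

```python
import hashlib

N_CHANNELS = 80

def channel_pair(key, hop_index):
    """Derive this hop's (one-channel, zero-channel) pair from the
    preshared key. SHA-256 is a stand-in for the real keyed generator."""
    digest = hashlib.sha256(key + hop_index.to_bytes(8, "big")).digest()
    one_ch = digest[0] % N_CHANNELS
    zero_ch = digest[1] % N_CHANNELS
    if zero_ch == one_ch:                 # the pair must be distinct
        zero_ch = (zero_ch + 1) % N_CHANNELS
    return one_ch, zero_ch

def transmit(key, hop_index, bit):
    """Put energy only on the channel designated for this bit value."""
    one_ch, zero_ch = channel_pair(key, hop_index)
    return one_ch if bit else zero_ch

def receive(key, hop_index, energy_by_channel):
    """Decode by comparing energy on the two designated channels."""
    one_ch, zero_ch = channel_pair(key, hop_index)
    return 1 if energy_by_channel[one_ch] > energy_by_channel[zero_ch] else 0

key = b"preshared-secret"
active = transmit(key, hop_index=0, bit=1)
energies = [0.0] * N_CHANNELS
energies[active] = 1.0                    # transmitted energy lands here
decoded = receive(key, 0, energies)
```

Without the key, an observer sees energy on one of 80 channels each microsecond but cannot tell whether that channel was the pair's 1-channel or 0-channel.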

Because the channel selection is quick and random, and there is no fixed frequency offset, a hacker can never tell which bit is going to which channel. “For an attacker, that means they can’t do any better than random guessing, making selective jamming infeasible,” Yazicigil says.

As a final innovation, the researchers integrated two transmitter paths into a time-interleaved architecture. While the active transmitter sends data on the current channel, the inactive transmitter tunes to the next selected channel; then the roles alternate. Doing so sustains a 1-microsecond frequency-hop rate and, in turn, preserves a 1-megabit-per-second data rate, similar to BLE-type transmitters.
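The ping-pong scheduling of the two paths can be sketched as follows. This is a scheduling model only, under the assumption of two identical paths labeled "A" and "B"; `next_channel` is a placeholder for the per-hop channel selection described above.

```python
def interleaved_hops(bits, next_channel):
    """Two transmit paths alternate: while one sends the current bit,
    the other retunes for the next bit, so each hop fits in 1 us."""
    paths = ("A", "B")
    schedule = []
    for i, bit in enumerate(bits):
        active = paths[i % 2]        # this path sends bit i now
        idle = paths[(i + 1) % 2]    # this path pre-tunes for bit i+1
        schedule.append((active, next_channel(i), bit, idle))
    return schedule

# Placeholder channel selector for illustration only.
sched = interleaved_hops([1, 0, 1], next_channel=lambda i: (7 * i) % 80)
```

Because retuning overlaps with transmission instead of following it, the hop time never adds to the bit time.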

“Most of the current vulnerability [to signal jamming] stems from the fact that transmitters hop slowly and dwell on a channel for several consecutive bits. Bit-level frequency hopping makes it very hard to detect and selectively jam the wireless link,” says Peter Kinget, a professor of electrical engineering and chair of the department at Columbia University. “This innovation was only possible by working across the various layers in the communication stack requiring new circuits, architectures, and protocols. It has the potential to address key security challenges in IoT devices across industries.”

The work was supported by the Hong Kong Innovation and Technology Fund, the National Science Foundation, and Texas Instruments. The chip fabrication was supported by the TSMC University Shuttle Program.

June 11, 2018 | More