News and Research
Catherine Iacobo named industry co-director for MIT Leaders for Global Operations

Cathy Iacobo, a lecturer at the MIT Sloan School of Management, has been named the new industry co-director for the MIT Leaders for Global Operations (LGO) program. Read more

Leading to Green

More efficient or more sustainable? Janelle Heslop, LGO ’19, helps businesses achieve both. Heslop is no shrinking violet. She found a voice for herself and the environment when she was in middle school, volunteering as a junior docent for the Hudson River Museum. “I was a 12-year-old giving tours, preaching to people: we’ve got to protect our resources,” Heslop says. “At a very early age, I learned to have a perspective, and assert it.”

February 22, 2019 | More

Winners of inaugural AUS New Venture Challenge Announced

Danielle Castley, a Dartmouth PhD candidate; Jordan Landis, LGO ’20; and Ian McDonald, PhD, of Neutroelectric LLC won the inaugural American University of Sharjah New Ventures Challenge, taking the Chancellor’s Prize of $50,000 for radiation shielding materials developed to improve safety margins and reduce costs both for nuclear power plant operations and for the transport and storage of spent nuclear waste.

February 20, 2019 | More

Tackling greenhouse gases

While a number of other MIT researchers are developing capture and reuse technologies to minimize greenhouse gas emissions, Professor Timothy Gutowski, frequent LGO advisor, is approaching climate change from a completely different angle: the economics of manufacturing.

Gutowski understands manufacturing. He has worked on both the industry and academic side of manufacturing, was the director of MIT’s Laboratory for Manufacturing and Productivity for a decade, and currently leads the Environmentally Benign Manufacturing research group at MIT. His primary research focus is assessing the environmental impact of manufacturing.

January 11, 2019 | More

Department of Mechanical Engineering announces new leadership team

Pierre Lermusiaux, LGO thesis advisor and professor of mechanical engineering and ocean science and engineering, will join the MechE department’s leadership team. Professor Lermusiaux will serve as associate department head for operations.

Evelyn Wang, the Gail E. Kendall Professor, who began her role as head of MIT’s Department of Mechanical Engineering (MechE) on July 1, has announced that Pierre Lermusiaux, professor of mechanical engineering and ocean science and engineering, and Rohit Karnik, associate professor of mechanical engineering, will join her on the department’s leadership team. Lermusiaux will serve as associate department head for operations and Karnik will be the associate department head for education.

“I am delighted to welcome Pierre and Rohit to the department’s leadership team,” says Wang. “They have both made substantial contributions to the department and are well-suited to ensure that it continues to thrive.”

Pierre Lermusiaux, associate department head for operations

Pierre Lermusiaux has been instrumental in developing MechE’s strategic plan over the past several years. In 2015, with Evelyn Wang, he was co-chair of the mechanical engineering strategic planning committee. They were responsible for interviewing individuals across the MechE community, determining priority “grand challenge” research areas, investigating new educational models, and developing mechanisms to enhance community and departmental operations. The resulting strategic plan will inform the future of MechE for years to come.

“Pierre is an asset to our department,” adds Wang. “I look forward to working with him to lead our department toward new research frontiers and cutting-edge discoveries.”

Lermusiaux joined MIT as associate professor in 2007 after serving as a research associate at Harvard University, where he also received his PhD. He is an internationally recognized thought leader at the intersection of ocean modeling and observing. He has developed new uncertainty quantification and data assimilation methods. His research has improved real-time data-driven ocean modeling and has had important implications for marine industries, fisheries, energy, security, and our understanding of human impact on the ocean’s health.

Lermusiaux’s talent as an educator has been recognized with the Ruth and Joel Spira Award for Teaching Excellence. He has been the chair of the graduate admissions committee since 2014. He has served on many MechE and institute committees and is also active in MIT-Woods Hole Oceanographic Institution Joint Program committees.

“Working for the department, from our graduate admission to the strategic planning with Evelyn, has been a pleasure,” says Lermusiaux. “I am thrilled to be continuing such contributions as associate department head for research and operations. I look forward to developing and implementing strategies and initiatives that help our department grow and thrive.”

Lermusiaux succeeds Evelyn Wang, who previously served as associate department head for operations under the former department head Gang Chen.

Rohit Karnik, associate department head for education

Over the past two years, Rohit Karnik has taken an active role in shaping the educational experience at MechE. As the undergraduate officer, he has overseen the operations of the department’s undergraduate office and chaired the undergraduate programs committee. This position has afforded Karnik the opportunity to evaluate and refine the department’s course offerings each year and work closely with undergraduate students to provide the best education.

“Rohit is a model citizen and has provided dedicated service to our department,” says Wang. “I look forward to working with him to create new education initiatives and continue to provide a world-class education for our students.”

Prior to joining MIT as a postdoc in 2006, Karnik received his PhD from the University of California at Berkeley. In 2006, he joined the faculty as an assistant professor of mechanical engineering. He is recognized as a leader in the field of micro- and nanofluidics and has made a number of seminal contributions to the fundamental understanding of nanoscale fluid transport. He has been recognized with a National Science Foundation CAREER Award and a Department of Energy Early Career Award.

Karnik’s dedication to his students has been recognized with the Keenan Award for Innovation in Education and the Ruth and Joel Spira Award for Teaching Excellence. He has also served on the graduate admissions committee and various faculty search committees.

“It is a tremendous honor and responsibility to take this position in the top mechanical engineering department in the world,” says Karnik. “I will strive to ensure that we maintain excellence in mechanical engineering education and adapt to the changing times to offer strong and comprehensive degree programs and the best possible experience for our students.”

Karnik succeeds Professor John Brisson who previously served as associate department head for education.

August 3, 2018 | More

Boeing will be Kendall Square Initiative’s first major tenant

Boeing, the world’s largest aerospace company and an LGO partner company, has announced that it will be part of MIT’s Kendall Square Initiative. The company has agreed to lease approximately 100,000 square feet in MIT’s building to be developed at 314 Main St., in the heart of Kendall Square in Cambridge.

MIT’s Kendall Square Initiative includes six sites slated for housing, retail, research and development, office, academic, and open space uses. The building at 314 Main St. is located between the MBTA Red Line station and the Kendall Hotel. Boeing is expected to occupy its new space by the end of 2020.

“Our focus on advancing the Kendall Square innovation ecosystem includes a deep and historic understanding of what we call the ‘power of proximity’ to address pressing global challenges,” MIT Executive Vice President and Treasurer Israel Ruiz says. “MIT’s president, L. Rafael Reif, has made clear his objective of reducing the time it takes to move ideas from the classroom and lab out to the market. The power of proximity is a dynamic that propels this concept forward: Just as pharmaceutical, biotech, and tech sector scientists in Kendall Square work closely with their nearby MIT colleagues, Boeing and MIT researchers will be able to strengthen their collaborative ties to further chart the course of the aerospace industry.”

Boeing was founded in 1916 — the same year that MIT moved to Cambridge — and marked its recent centennial in a spirit similar to the Institute’s 100-year celebration in 2016, with special events, community activities, and commemorations. That period also represents a century-long research relationship between Boeing and MIT that has helped to advance the global aerospace industry.

Some of Boeing’s founding leaders, as well as engineers, executives, Boeing Technical Fellows, and student interns, are MIT alumni.

Earlier this year, Boeing announced that it will serve as the lead donor for MIT’s $18 million project to replace its 80-year-old Wright Brothers Wind Tunnel. This pledge will help to create, at MIT, the world’s most advanced academic wind tunnel.

In 2017, Boeing acquired MIT spinout Aurora Flight Sciences, which develops advanced aerospace platforms and autonomous systems. Its primary research and development center is located at 90 Broadway in Kendall Square. In the new facility at 314 Main St., Boeing will establish the Aerospace and Autonomy Center, which will focus on advancing enabling technologies for autonomous aircraft.

“Boeing is leading the development of new autonomous vehicles and future transportation systems that will bring flight closer to home,” says Greg Hyslop, Boeing chief technology officer. “By investing in this new research facility, we are creating a hub where our engineers can collaborate with other Boeing engineers and research partners around the world and leverage the Cambridge innovation ecosystem.”

“It’s fitting that Boeing will join the Kendall/MIT innovation family,” MIT Provost Martin Schmidt says. “Our research interests have been intertwined for over 100 years, and we’ve worked together to advance world-changing aerospace technologies and systems. MIT’s Department of Aeronautics and Astronautics is the oldest program of its kind in the United States, and excels at its mission of developing new air transportation concepts, autonomous systems, and small satellites through an intensive focus on cutting-edge education and research. Boeing’s presence will create an unprecedented opportunity for new synergies in this industry.”

The current appearance of the 314 Main St. site belies its future active presence in Kendall Square. The building’s foundation and basement level — which will house loading infrastructure, storage and mechanical space, and bicycle parking — are currently under construction. Adjacent to those functions are an underground parking garage, a network of newly placed utilities, and water and sewer infrastructure. Vertical construction of the building should begin in September.

August 3, 2018 | More

Reliable energy for all

Prosper Nyovanie (LGO ’19) discusses his passion for using engineering and technology to solve global problems.


During high school, Prosper Nyovanie had to alter his daily and nightly schedules to accommodate the frequent power outages that swept cities across Zimbabwe.

“[Power] would go almost every day — it was almost predictable,” Nyovanie recalls. “I’d come back from school at 5 p.m., have dinner, then just go to sleep because the electricity wouldn’t be there. And then I’d wake up at 2 a.m. and start studying … because by then you’d usually have electricity.”

At the time, Nyovanie knew he wanted to study engineering, and upon coming to MIT as an undergraduate, he majored in mechanical engineering. He discovered a new area of interest, however, when he took 15.031J (Energy Decisions, Markets, and Policies), which introduced him to questions of how energy is produced, distributed, and consumed. He went on to minor in energy studies.

Now as a graduate student and fellow in MIT’s Leaders for Global Operations (LGO) program, Nyovanie is on a mission to learn the management skills and engineering knowledge he needs to power off-grid communities around the world through his startup, Voya Sol. The company develops solar electric systems that can be scaled to users’ needs.

Determination and quick thinking

Nyovanie was originally drawn to MIT for its learning-by-doing engineering focus. “I thought engineering was a great way to take all these cool scientific discoveries and technologies and apply them to global problems,” he says. “One of the things that excited me a lot about MIT was the hands-on approach to solving problems. I was super excited about UROP [the Undergraduate Research Opportunities Program]. That program made MIT stick out from all the other universities.”

As a mechanical engineering major, Nyovanie took part in a UROP for 2.5 years in the Laboratory for Manufacturing and Productivity with Professor Martin Culpepper. But his experience in 15.031J made him realize his interests were broader than just research, and included the intersection of technology and business.

“One big thing that I liked about the class was that it introduced this other complexity that I hadn’t paid that much attention to before, because when you’re in the engineering side, you’re really focused on making technology, using science to come up with awesome inventions,” Nyovanie says. “But there are considerations that you need to think about when you’re implementing [such inventions]. You need to think about markets, how policies are structured.”

The class inspired Nyovanie to become a fellow in the LGO program, where he will earn an MBA from the MIT Sloan School of Management and a master’s in mechanical engineering. He is also a fellow of the Legatum Center for Development and Entrepreneurship at MIT.

When Nyovanie prepared for his fellowship interview while at home in Zimbabwe, he faced another electricity interruption: A transformer blew and would take time to repair, leaving him without power before his interview.

“I had to act quickly,” Nyovanie says. “I went and bought a petrol generator just for the interview. … The generator provided power for my laptop and for the Wi-Fi.” He recalls being surrounded by multiple solar lanterns that provided enough light for the video interview.

While Nyovanie’s determination in high school and quick thinking before graduate school enabled him to work around power supply issues, he realizes that luxury doesn’t extend to all those facing similar situations.

“I had enough money to actually go buy a petrol generator. Some of these communities in off-grid areas don’t have the resources they need to be able to get power,” Nyovanie says.

Scaling perspectives

Before co-founding Voya Sol with Stanford University graduate student Caroline Jo, Nyovanie worked at SunEdison, a renewable energy company, for three years. During most of that time, Nyovanie worked as a process engineer and analyst through the Renewable Energy Leadership Development Rotational Program. As part of the program, Nyovanie rotated between different roles at the company around the world.

During his last rotation, Nyovanie worked as a project engineer and oversaw the development of rural minigrids in Tanzania. “That’s where I got firsthand exposure to working with people who don’t have access to electricity and working to develop a solution for them,” Nyovanie says. When SunEdison went bankrupt, Nyovanie wanted to stay involved in developing electricity solutions for off-grid communities. So, he stayed in talks with rural electricity providers in Zimbabwe, Kenya, and Nigeria before eventually founding Voya Sol with Jo.

Voya Sol develops scalable solar home systems that differ from existing solar home system technologies. “A lot of them are fixed,” Nyovanie says. “So if you buy one, and need an additional light, then you have to go buy another whole new system. … The scalable system would take away some of that risk and allow the customer to build their own system so that they buy a system that fits their budget.” By giving users the opportunity to scale their wattage up or down to meet their energy needs, Nyovanie hopes that the solar electric systems will help power off-grid communities across the world.

Nyovanie and his co-founder are currently both full-time graduate students in dual degree programs. But to them, graduate school didn’t necessarily mean an interruption to their company’s operations; it meant new opportunities for learning, mentorship, and team building. Over this past spring break, Nyovanie and Jo traveled to Zimbabwe to perform prototype testing for their solar electric system, and they plan to conduct a second trip soon.

“We’re looking into ways we can aggregate people’s energy demands,” Nyovanie says. “Interconnected systems can bring in additional savings for customers.” In the future, Nyovanie hopes to expand the distribution of scalable solar electric systems through Voya Sol to off-grid communities worldwide. Voya Sol’s ultimate vision is to enable off-grid communities to build their own electricity grids, by allowing individual customers to not only scale their own systems, but also interconnect their systems with their neighbors’. “In other words, Voya Sol’s goal is to enable a completely build-your-own, bottom-up electricity grid,” Nyovanie says.

Supportive communities

During his time as a graduate student at MIT, Nyovanie has found friendship and support among his fellow students.

“The best thing about being at MIT is that people are working on all these cool, different things that they’re passionate about,” Nyovanie says. “I think there’s a lot of clarity that you can get just by going outside of your circle and talking to people.”

Back home in Zimbabwe, Nyovanie’s family cheers him on.

“Even though [my parents] never went to college, they were very supportive and encouraged me to push myself, to do better, and to do well in school, and to apply to the best programs that I could find,” Nyovanie says.

June 12, 2018 | More

LGO Best Thesis 2018 for Predictive Modeling Project at Massachusetts General Hospital

After the official MIT commencement ceremonies, Thomas Roemer, LGO’s executive director, announced the best thesis winner at LGO’s annual post-graduation celebration. This year’s winner was Jonathan Zanger, who developed a predictive model using machine learning at Massachusetts General Hospital. “The thesis describes breakthrough work at MGH that leverages machine learning and deep clinical knowledge to develop a decision support tool to predict discharges from the hospital in the next 24-48 hours and enable a fundamentally new and more effective discharge process,” said MIT Sloan School of Management Professor Retsef Levi, one of Zanger’s thesis advisors and the LGO management faculty co-director.

Applying MIT knowledge in the real world

Best Thesis 2018
Jonathan Zanger won the 2018 LGO best thesis award for his work using machine learning to develop a predictive model for better patient care at MGH

Zanger, who received his MBA and an SM in electrical engineering and computer science, conducted his six-month LGO internship project at MGH, which sought to enable a more proactive process for managing the hospital’s bed capacity by identifying which surgical inpatients are likely to be discharged in the next 24 to 48 hours. To do this, Zanger grouped patients by surgery type and worked to define and formalize milestones on the pathway to post-operative recovery, along with barriers that may postpone discharge. He then used a deep learning algorithm that draws on over 900 features and is trained on 3,000 types of surgeries and 20,000 surgical discharges. LGO thesis advisor Retsef Levi stated that “in my view, this thesis work represents a league of its own in terms of technical depth, creativity and potential impact.” The model correctly predicted 97 percent of the patients who were discharged within 48 hours, helping the hospital limit overcrowding and operational disruptions and anticipate capacity crises.
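The thesis itself is not reproduced here, but the general shape of such an approach (a supervised classifier over engineered recovery features, predicting a binary 48-hour discharge outcome) can be sketched. Everything below, from the feature names to the plain logistic-regression model and the synthetic data, is an illustrative assumption rather than a detail of Zanger’s actual model:

```python
# Illustrative sketch only -- NOT the thesis model. A binary classifier over
# hypothetical recovery features predicting discharge within 48 hours.
import math
import random

random.seed(0)

def make_patient():
    """Synthetic stand-in for one surgical inpatient record."""
    days = random.uniform(0, 5)            # days since surgery
    milestones = random.randint(0, 5)      # recovery milestones met
    barriers = random.randint(0, 3)        # open discharge barriers
    # Purely synthetic ground-truth rule: discharge is likelier with time,
    # milestones met, and few remaining barriers.
    score = 0.7 * days + 0.8 * milestones - 1.2 * barriers - 2.5
    label = 1 if score + random.gauss(0, 1) > 0 else 0
    return [days, milestones, barriers], label

def sigmoid(z):
    z = max(min(z, 30.0), -30.0)           # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=100, lr=0.05):
    """Plain logistic regression fit by stochastic gradient descent."""
    w = [0.0] * 4                          # three weights plus a bias
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[3])
            g = p - y                      # gradient of the log loss
            for i in range(3):
                w[i] -= lr * g * x[i]
            w[3] -= lr * g
    return w

data = [make_patient() for _ in range(2000)]
w = train(data[:1500])
correct = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[3]) > 0.5) == (y == 1)
    for x, y in data[1500:]
)
accuracy = correct / 500
print(f"held-out accuracy: {accuracy:.2f}")
```

A production system like the one described would of course use far richer clinical features and a deeper model; the sketch only shows the supervised-classification framing.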

A group of faculty, alumni, and staff review the theses each year to determine the winner. Thomas Sanderson (LGO ’14), an LGO alumnus and thesis reviewer, stated that Zanger’s thesis showed “tremendous extensibility and smart solution architecture decisions to make future work easy. Obvious and strong overlap of engineering, business, and industry. This is potentially revolutionary work; this research advances the current state of the art well beyond anything currently available for large hospital bed management with obvious and immediate impact on healthcare costs and patient outcomes. The theory alone is hugely noteworthy but the fact that the work was also piloted during the thesis period is even more impressive. LGO has done a lot of great work at MGH but this is potentially the widest reaching and most important.”

Zanger, who earned his undergraduate degree in physics, computer science, and mathematics from the Hebrew University of Jerusalem, will return to Israel after graduation and resume service as an Israel Defense Forces officer.

June 11, 2018 | More

A graphene roll-out

LGO thesis advisor and MIT mechanical engineering Professor John Hart led a team to develop a continuous manufacturing process that produces long strips of high-quality graphene.

The team’s results are the first demonstration of an industrial, scalable method for manufacturing high-quality graphene that is tailored for use in membranes that filter a variety of molecules, including salts, larger ions, proteins, or nanoparticles. Such membranes should be useful for desalination, biological separation, and other applications.

“For several years, researchers have thought of graphene as a potential route to ultrathin membranes,” says John Hart, associate professor of mechanical engineering and director of the Laboratory for Manufacturing and Productivity at MIT. “We believe this is the first study that has tailored the manufacturing of graphene toward membrane applications, which require the graphene to be seamless, cover the substrate fully, and be of high quality.”

Hart is the senior author on the paper, which appears online in the journal Applied Materials and Interfaces. The study includes first author Piran Kidambi, a former MIT postdoc who is now an assistant professor at Vanderbilt University; MIT graduate students Dhanushkodi Mariappan and Nicholas Dee; Sui Zhang of the National University of Singapore; Andrey Vyatskikh, a former student at the Skolkovo Institute of Science and Technology who is now at Caltech; and Rohit Karnik, an associate professor of mechanical engineering at MIT.

Growing graphene

For many researchers, graphene is ideal for use in filtration membranes. A single sheet of graphene resembles atomically thin chicken wire and is composed of carbon atoms joined in a pattern that makes the material extremely tough and impervious to even the smallest atom, helium.

Researchers, including Karnik’s group, have developed techniques to fabricate graphene membranes and precisely riddle them with tiny holes, or nanopores, the size of which can be tailored to filter out specific molecules. For the most part, scientists synthesize graphene through a process called chemical vapor deposition, in which they first heat a sample of copper foil and then deposit onto it a combination of carbon and other gases.

Graphene-based membranes have mostly been made in small batches in the laboratory, where researchers can carefully control the material’s growth conditions. However, Hart and his colleagues believe that if graphene membranes are ever to be used commercially they will have to be produced in large quantities, at high rates, and with reliable performance.

“We know that for industrialization, it would need to be a continuous process,” Hart says. “You would never be able to make enough by making just pieces. And membranes that are used commercially need to be fairly big — some so big that you would have to send a poster-wide sheet of foil into a furnace to make a membrane.”

A factory roll-out

The researchers set out to build an end-to-end, start-to-finish manufacturing process to make membrane-quality graphene.

The team’s setup combines a roll-to-roll approach — a common industrial approach for continuous processing of thin foils — with the common graphene-fabrication technique of chemical vapor deposition, to manufacture high-quality graphene in large quantities and at a high rate. The system consists of two spools, connected by a conveyor belt that runs through a small furnace. The first spool unfurls a long strip of copper foil, less than 1 centimeter wide. When it enters the furnace, the foil is fed through first one tube and then another, in a “split-zone” design.

While the foil rolls through the first tube, it heats up to a certain ideal temperature, at which point it is ready to roll through the second tube, where the scientists pump in a specified ratio of methane and hydrogen gas, which are deposited onto the heated foil to produce graphene.

“Graphene starts forming in little islands, and then those islands grow together to form a continuous sheet,” Hart says. “By the time it’s out of the oven, the graphene should be fully covering the foil in one layer, kind of like a continuous bed of pizza.”

As the graphene exits the furnace, it’s rolled onto the second spool. The researchers found that they were able to feed the foil continuously through the system, producing high-quality graphene at a rate of 5 centimeters per minute. Their longest run lasted almost four hours, during which they produced about 10 meters of continuous graphene.
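As a quick sanity check, the reported rate and run length are consistent with each other (round numbers assumed):

```python
# Back-of-the-envelope check of the roll-to-roll throughput reported above:
# 5 cm/min sustained for almost four hours should give on the order of the
# ~10 meters of graphene-covered foil the team produced.
speed_cm_per_min = 5
run_minutes = 4 * 60              # "almost four hours"

length_m = speed_cm_per_min * run_minutes / 100
print(f"{length_m:.0f} m of foil in a {run_minutes // 60}-hour run")

# At the same rate, a continuous 24-hour "factory" run would yield:
daily_m = speed_cm_per_min * 60 * 24 / 100
print(f"{daily_m:.0f} m per day of continuous operation")
```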

“If this were in a factory, it would be running 24-7,” Hart says. “You would have big spools of foil feeding through, like a printing press.”

Flexible design

Once the researchers produced graphene using their roll-to-roll method, they unwound the foil from the second spool and cut small samples out. They cast the samples with a polymer mesh, or support, using a method developed by scientists at Harvard University, and subsequently etched away the underlying copper.

“If you don’t support graphene adequately, it will just curl up on itself,” Kidambi says. “So you etch copper out from underneath and have graphene directly supported by a porous polymer — which is basically a membrane.”

The polymer covering contains holes that are larger than graphene’s pores, which Hart says act as microscopic “drumheads,” keeping the graphene sturdy and its tiny pores open.

The researchers performed diffusion tests with the graphene membranes, flowing a solution of water, salts, and other molecules across each membrane. They found that overall, the membranes were able to withstand the flow while filtering out molecules. Their performance was comparable to graphene membranes made using conventional, small-batch approaches.

The team also ran the process at different speeds, with different ratios of methane and hydrogen gas, and characterized the quality of the resulting graphene after each run. They drew up plots to show the relationship between graphene’s quality and the speed and gas ratios of the manufacturing process. Kidambi says that if other designers can build similar setups, they can use the team’s plots to identify the settings they would need to produce a certain quality of graphene.
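The article does not reproduce the plots, but the way a designer might use such a calibration map can be sketched: measure quality at a grid of process settings, then interpolate between them to estimate quality at untested settings. All numbers below are invented for illustration:

```python
# Hypothetical sketch of using a process-calibration map like the one the
# team describes: measured (speed, methane:hydrogen ratio) -> quality points,
# with bilinear interpolation inside the measured grid. Numbers are invented.
speeds = [2.0, 5.0]          # cm/min
ratios = [0.5, 1.0]          # methane:hydrogen gas ratio
quality = {                  # e.g. a 0-1 quality score per measured setting
    (2.0, 0.5): 0.90, (2.0, 1.0): 0.80,
    (5.0, 0.5): 0.70, (5.0, 1.0): 0.60,
}

def interpolate_quality(speed, ratio):
    """Bilinear interpolation between the four measured grid points."""
    s0, s1 = speeds
    r0, r1 = ratios
    ts = (speed - s0) / (s1 - s0)    # position between the two speeds
    tr = (ratio - r0) / (r1 - r0)    # position between the two gas ratios
    top = quality[(s0, r0)] * (1 - tr) + quality[(s0, r1)] * tr
    bot = quality[(s1, r0)] * (1 - tr) + quality[(s1, r1)] * tr
    return top * (1 - ts) + bot * ts

print(interpolate_quality(3.5, 0.75))   # estimate at the grid midpoint
```

A real calibration would use many more grid points (and likely a fitted model rather than straight interpolation), but the lookup idea is the same.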

“The system gives you a great degree of flexibility in terms of what you’d like to tune graphene for, all the way from electronic to membrane applications,” Kidambi says.

Looking forward, Hart says he would like to find ways to include polymer casting and other steps that currently are performed by hand, in the roll-to-roll system.

“In the end-to-end process, we would need to integrate more operations into the manufacturing line,” Hart says. “For now, we’ve demonstrated that this process can be scaled up, and we hope this increases confidence and interest in graphene-based membrane technologies, and provides a pathway to commercialization.”

May 18, 2018 | More

This MIT program will purchase carbon offsets for student travel

Led by Yakov Berenshteyn (LGO ’19), a new Jetset Offset program will reduce the environmental impact of student travel by purchasing carbon offsets.

In one week, about 100 MIT Sloan students will fly around the world to study regional economies, immerse themselves in different cultures, and produce more than 300 metric tons of carbon dioxide.

Because of the air travel the study tours require, those students will produce the same emissions in two weeks as 1,600 average American car commuters do in the same timeframe, said Yakov Berenshteyn, LGO ’19.
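That comparison roughly checks out against the EPA’s commonly cited figure of about 4.6 metric tons of CO2 per typical passenger vehicle per year (the per-car figure is an outside assumption, not from the article):

```python
# Sanity check on the commuter comparison above, assuming the EPA's commonly
# cited ~4.6 metric tons of CO2 per typical passenger vehicle per year.
study_tour_tons = 300          # reported emissions from the flights
car_tons_per_year = 4.6        # assumed average passenger vehicle
weeks = 2

car_tons_per_2wk = car_tons_per_year * weeks / 52
commuters = study_tour_tons / car_tons_per_2wk
print(f"equivalent to ~{commuters:.0f} car commuters over {weeks} weeks")
```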

While Berenshteyn doesn’t want to do away with student travel at MIT Sloan, he is hoping to lessen the impact on the environment, with the help of his Jetset Offset program.

The pilot involves purchasing carbon offsets for the three MBA and one Master of Finance study tours for spring break 2018.

Carbon offsets are vetted projects that help capture or avoid carbon emissions. These projects can include reforestation and building renewable energy sources. The reductions might not have an immediate impact on emissions, Berenshteyn said, but they are “still the primary best practice for us to use.”

“This is raising awareness of, and starting to account for, our environmental impacts from student travel,” Berenshteyn said. “You don’t get much choice in the efficiency of the airplane that you board.”

The idea for the offsets came in October, when Berenshteyn was helping to plan the January Leaders for Global Operations Domestic Plant Trek. He realized at the time that, over the two weeks of the trip, the roughly 50 students and staff would log a total of 400,000 air miles.

Berenshteyn spent months researching ways to counterbalance the burned jet fuel, drawing on input from MIT Sloan professor John Sterman. He said he looked at other options, like funding more local projects such as solar panel installations, but those projects were too small in scale to make much of a difference.

Universities around the world are applying carbon offsets and carbon-neutral practices in some form to their operations. Berenshteyn said Duke University has something similar to the air travel and carbon offsets that he proposes for MIT Sloan.

The Leaders for Global Operations program purchased 67 metric tons of offsets through Gold Standard for the January student trek, and those offsets are going to reforestation efforts in Panama.
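Dividing the purchase by the mileage it covered gives the emissions factor implicitly used, about 0.17 kg of CO2 per passenger-mile, which is in line with common estimates for commercial air travel (the interpretation is ours; only the two input figures come from the article):

```python
# Implied emissions factor behind the January trek offset purchase:
# 67 metric tons bought against roughly 400,000 passenger air miles.
offset_tons = 67
air_miles = 400_000

kg_per_mile = offset_tons * 1000 / air_miles
print(f"implied factor: {kg_per_mile:.3f} kg CO2 per passenger-mile")
```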

In the case of the four upcoming study trips, MIT Sloan’s student life office is picking up the tab.

“My colleague Paul Buckley (associate director of student life) had an idea for something like this close to a decade ago, when he first arrived in student life, and noted the extent to which our students travel during their time at Sloan,” said Katie Ferrari, associate director of student life. “So this was an especially meaningful partnership for us. Yakov’s idea is exactly the kind of student initiative we love to support. He is practicing principled, innovative leadership with an eye toward improving the world.”

Ferrari said the support for the pilot this semester is a stake in the ground for incorporating carbon offset purchases into future student-organized travel — which is what Berenshteyn said was his hope for launching the pilot.

“It should be at Sloan, if a student is planning a trip, they have their checklist of insurance, emergency numbers, and carbon offsets,” he said.

March 21, 2018 | More

A machine-learning approach to inventory-constrained dynamic pricing

LGO thesis advisor and MIT Civil and Environmental Engineering Professor David Simchi-Levi led a team on a new study showing how a model-based algorithm known as Thompson sampling can be used for revenue management.

In 1933, William R. Thompson published an article on a Bayesian model-based algorithm that would ultimately become known as Thompson sampling. This heuristic was largely ignored by the academic community until recently, when it became the subject of intense study, thanks in part to internet companies that successfully implemented it for online ad display.

Thompson sampling addresses the exploration-exploitation trade-off in the multiarmed bandit problem: it chooses actions that maximize immediate performance while continually acquiring new information to improve future performance.
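
For readers unfamiliar with the heuristic, here is a minimal sketch of Thompson sampling on a two-armed Bernoulli bandit. It illustrates only the sample-then-act idea, not the revenue-management algorithm from the study.

```python
import random

def thompson_pick(successes, failures):
    """Choose an arm by sampling each arm's Beta posterior and taking
    the argmax; randomness in the draws is what drives exploration."""
    draws = [random.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=lambda i: draws[i])

# Toy run: arm 1 has the higher (unknown) success probability.
random.seed(0)
true_p = [0.3, 0.6]
succ, fail = [0, 0], [0, 0]
for _ in range(2000):
    arm = thompson_pick(succ, fail)
    if random.random() < true_p[arm]:
        succ[arm] += 1
    else:
        fail[arm] += 1
# Pulls concentrate on the better arm as its posterior sharpens.
```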

In a new study, “Online Network Revenue Management Using Thompson Sampling,” MIT Professor David Simchi-Levi and his team have now demonstrated that Thompson sampling can be used for a revenue management problem, where the demand function is unknown.

Incorporating inventory constraints

A main challenge to adopting Thompson sampling for revenue management is that the original method does not incorporate inventory constraints. However, the authors show that Thompson sampling can be naturally combined with a classical linear program formulation to include inventory constraints.

The result is a dynamic pricing algorithm that incorporates domain knowledge and has strong theoretical performance guarantees as well as promising numerical performance results.
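
One way the two ingredients might fit together can be sketched for a single product and a small discrete price set, in which case the linear program collapses to a constrained argmax. Everything below (the prices, the Beta posteriors, the per-period inventory budgeting rule) is an illustrative assumption, not the authors' algorithm.

```python
import random

def choose_price(prices, trials, sales, inventory, periods_left):
    """One round of inventory-aware Thompson sampling (sketch).

    Step 1, sampling: draw a plausible purchase probability for each
    candidate price from a Beta posterior over past observations.
    Step 2, optimization: among prices whose sampled demand fits the
    per-period inventory budget, pick the best revenue rate. (In the
    paper this step is a linear program; the filter-then-argmax here
    is a one-product simplification.)
    """
    budget = inventory / max(periods_left, 1)
    sampled = {p: random.betavariate(sales[p] + 1, trials[p] - sales[p] + 1)
               for p in prices}
    feasible = [p for p in prices if sampled[p] <= budget] or [max(prices)]
    return max(feasible, key=lambda p: p * sampled[p])

# Usage: no data yet, 50 units left, 100 selling periods to go.
random.seed(0)
prices = [10, 20, 30]
trials = {p: 0 for p in prices}
sales = {p: 0 for p in prices}
price = choose_price(prices, trials, sales, inventory=50, periods_left=100)
```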

Interestingly, the authors demonstrate that Thompson sampling achieves poor performance when it does not take into account domain knowledge.

Simchi-Levi says, “It is exciting to demonstrate that Thompson sampling can be adapted to combine a classical linear program formulation, to include inventory constraints, and to see that this method can be applied to general revenue management problems in the business-to-consumer and business-to-business environments.”

Industry application improves revenue

The proposed dynamic pricing algorithm is highly flexible and is applicable in a range of industries, from airlines and internet advertising all the way to online retailing.

The new study, which has just been accepted by the journal Operations Research, is part of a larger research project by Simchi-Levi that combines machine learning and stochastic optimization to improve revenue, margins, and market share.

Algorithms developed in this research stream have been implemented at companies such as Groupon, a daily deals marketplace; Rue La La, a U.S. online flash sales retailer; B2W Digital, a large online retailer in Latin America; and a large brewing company, where Simchi-Levi and his team optimized the company’s promotion and pricing in various retail channels.


March 19, 2018 | More

Sloan

These are the cyberthreats lurking in your supply chain

You’ve got firewalls in place. You have a team dedicated to keeping a careful watch over your networks, 24/7. Everything is under two-factor authentication. Your cyber defenses must be bulletproof.

Then your screen goes dark, and it doesn’t light back up. Soon, your company is offline entirely, and you’re losing money — fast. You didn’t account for the contractor you hired to upgrade your point-of-sale network last month, which required accessing your systems — or for the state of their own cybersecurity.

February 22, 2019 | More

3 new courses cover advances every business should be tracking

MIT Sloan students aren’t the only ones who take interest when new courses are added — they’re often a barometer of what’s about to bubble up in business.

Here’s what MIT Sloan faculty are drilling down on in three new and updated courses for spring 2019 — and why it matters to business leaders.

February 1, 2019 | More

Bye-bye ivory tower: Innovation needs an ecosystem to thrive

If your organization is looking to innovate more in 2019 (and who isn’t?), we have good news and bad for you. The good news: The world is increasingly flat, to riff off the title of Thomas L. Friedman’s seminal 2005 book — meaning innovation isn’t confined to just Silicon Valley anymore.

January 11, 2019 | More

A calm before the AI productivity storm

Despite all the advances in technology designed to streamline work, output per hour has actually been leveling off since around 2006. While some believe that’s the new normal for productivity, new research from MIT Sloan economist Erik Brynjolfsson and his colleagues shows it may just be a temporary lull.

January 11, 2019 | More

8 entrepreneurs’ resolutions for 2019

Batyske spoke at the Martin Trust Center for MIT Entrepreneurship this year, and told students his goal has always been to “never sit behind a desk that isn’t mine.”

Batyske said the book “The Obstacle Is the Way,” has shifted his mindset on handling problems, and that’s something he’ll be taking into 2019.

“[I want] to adopt a more stoic, balanced approach to things that happen to me, both personally and professionally,” he said.

December 28, 2018 | More

Health care data is disconnected. Here’s how to change that.

The disconnectedness of data can be a major drag on health care systems, and it can make effective collaboration much harder. But organizations in both the public and private sectors are finding ways to connect the dots.

Speaking at the MIT Sloan Designing for Health Conference on Dec. 6, here’s what three experts from the health care field had to say about how to make health data work more efficiently.

December 28, 2018 | More

How to motivate people to do good for others

From TEDxCambridge: How can we get people to do more good — to go to the polls, give to charity, conserve resources, or just generally act better towards others? MIT research scientist Erez Yoeli shares a simple checklist for harnessing the power of reputations — our collective desire to be seen as generous and kind instead of selfish — to motivate people to act in the interest of others. Learn more about how small changes to your approach to getting people to do good could yield surprising results. Watch the full TED talk. Erez Yoeli is a research scientist at the MIT Sloan School of Management, where he directs the Applied Cooperation Team.

The post How to motivate people to do good for others – Erez Yoeli appeared first on MIT Sloan Experts.

December 7, 2018 | More

How many undocumented immigrants there really are, and why the number matters – Mohammad Fazel-Zarandi

From Daily News How many undocumented (illegal) immigrants are there in the United States? Previous estimates put the number at around 11-12 million. These estimates are too low. That’s because they are based on surveys that ask individuals where they were born. This approach doesn’t work well for undocumented immigrants. They are hard to track down. They don’t want to be found. And if they are found and asked this question — where are you from? — they have every reason to refuse to answer or answer untruthfully. In a new study, we estimate the number of undocumented immigrants using a different approach that doesn’t rely on surveys. And we get a very different answer. We estimate that there are at least 16.7 million and most likely more than 20 million. Our approach is to estimate the inflows of undocumented immigrants (how many are entering the United States each year) and … Read More »

The post How many undocumented immigrants there really are, and why the number matters – Mohammad Fazel-Zarandi appeared first on MIT Sloan Experts.

December 5, 2018 | More

Voices in AI – Episode 72: A Conversation with Irving Wladawsky-Berger

From GigaOm: In episode 72 of Voices in AI, host Byron Reese and Irving Wladawsky-Berger discuss the complexity of the human brain, the possibility of AGI and its origins, the implications of AI in weapons, and where else AI has taken us and could take us. Irving has a PhD in physics from the University of Chicago, is a research affiliate with the MIT Sloan School of Management, a guest columnist for the Wall Street Journal and CIO Journal, an adjunct professor at Imperial College London, and a fellow of the Center for Global Enterprise. Here is the podcast transcript: Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Irving Wladawsky-Berger. He is a bunch of things. He is a research affiliate with the MIT Sloan School of Management. He is a guest columnist … Read More »

The post Voices in AI – Episode 72: A Conversation with Irving Wladawsky-Berger appeared first on MIT Sloan Experts.

December 3, 2018 | More

This tool is pushing people to take action on climate change

The global temperature is rising and an international agreement is needed to avoid irreversible damage to the planet.

You’ve got two hours to find a solution.

That’s the mission in the role-play simulation World Climate, and according to new research from MIT Sloan professor John Sterman, it might also be the key to understanding and encouraging environmental change.

In World Climate, participants take on the role of delegates to the UN climate change summits and negotiate face-to-face with other participants to reach a climate change agreement. Sterman said the negotiators seek to limit global warming to the 2 degrees Celsius (3.6 degrees Fahrenheit) threshold affirmed at the Paris climate summit, while also taking their economic and political situations into account. Participants get immediate feedback on their proposed agreements from the Climate Rapid Overview and Decision Support (C-ROADS) simulator.

October 16, 2018 | More

Engineering

Giving keener “electric eyesight” to autonomous vehicles

On-chip system that detects signals at sub-terahertz wavelengths could help steer driverless cars through fog and dust

Autonomous vehicles relying on light-based image sensors often struggle to see through blinding conditions, such as fog. But MIT researchers have developed a sub-terahertz-radiation receiving system that could help steer driverless cars when traditional methods fail.

Sub-terahertz wavelengths, which are between microwave and infrared radiation on the electromagnetic spectrum, can be detected through fog and dust clouds with ease, whereas the infrared-based LiDAR imaging systems used in autonomous vehicles struggle. To detect objects, a sub-terahertz imaging system sends an initial signal through a transmitter; a receiver then measures the absorption and reflection of the rebounding sub-terahertz wavelengths. That sends a signal to a processor that recreates an image of the object.

But implementing sub-terahertz sensors into driverless cars is challenging. Sensitive, accurate object-recognition requires a strong output baseband signal from receiver to processor. Traditional systems, made of discrete components that produce such signals, are large and expensive. Smaller, on-chip sensor arrays exist, but they produce weak signals.

In a paper published online on Feb. 8 by the IEEE Journal of Solid-State Circuits, the researchers describe a two-dimensional, sub-terahertz receiving array on a chip that’s orders of magnitude more sensitive, meaning it can better capture and interpret sub-terahertz wavelengths in the presence of a lot of signal noise.

To achieve this, they implemented a scheme of independent signal-mixing pixels — called “heterodyne detectors” — that are usually very difficult to densely integrate into chips. The researchers drastically shrank the size of the heterodyne detectors so that many of them can fit into a chip. The trick was to create a compact, multipurpose component that can simultaneously down-mix input signals, synchronize the pixel array, and produce strong output baseband signals.

The researchers built a prototype, which has a 32-pixel array integrated on a 1.2-square-millimeter device. The pixels are approximately 4,300 times more sensitive than the pixels in today’s best on-chip sub-terahertz array sensors. With a little more development, the chip could potentially be used in driverless cars and autonomous robots.

“A big motivation for this work is having better ‘electric eyes’ for autonomous vehicles and drones,” says co-author Ruonan Han, an associate professor of electrical engineering and computer science, and director of the Terahertz Integrated Electronics Group in the MIT Microsystems Technology Laboratories (MTL). “Our low-cost, on-chip sub-terahertz sensors will play a complementary role to LiDAR for when the environment is rough.”

Joining Han on the paper are first author Zhi Hu and co-author Cheng Wang, both PhD students in the Department of Electrical Engineering and Computer Science working in Han’s research group.

Decentralized design

The key to the design is what the researchers call “decentralization.” In this design, a single pixel — called a “heterodyne” pixel — generates the frequency beat (the frequency difference between two incoming sub-terahertz signals) and the “local oscillation,” an electrical signal that shifts the frequency of the input signal. This “down-mixing” process produces a signal in the megahertz range that can be easily interpreted by a baseband processor.

The output signal can be used to calculate the distance of objects, similar to how LiDAR calculates the time it takes a laser to hit an object and rebound. In addition, combining the output signals of an array of pixels, and steering the pixels in a certain direction, can enable high-resolution images of a scene. This allows for not only the detection but also the recognition of objects, which is critical in autonomous vehicles and robots.
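
The two quantities a heterodyne pixel works with can be made concrete with a toy calculation. The frequencies and timing below are assumed, illustrative values, not figures from the paper.

```python
# Toy numbers for the two quantities a heterodyne pixel produces.
# ASSUMPTION: the frequencies and round-trip time are illustrative.
C = 299_792_458.0            # speed of light, m/s

# Down-mixing: the output is the difference ("beat") between the
# incoming sub-terahertz signal and the pixel's local oscillation.
f_signal = 240.000e9         # incoming sub-THz signal, Hz
f_local = 239.998e9          # local oscillation, Hz
f_beat = abs(f_signal - f_local)   # 2 MHz: easy for a baseband processor

# Ranging, as in LiDAR: distance from round-trip travel time.
round_trip = 200e-9          # measured round-trip time, s
distance_m = C * round_trip / 2    # ~30 m
```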

Heterodyne pixel arrays work only when the local oscillation signals from all pixels are synchronized, meaning that a signal-synchronizing technique is needed. Centralized designs include a single hub that shares local oscillation signals to all pixels.

These designs are usually used by receivers of lower frequencies, and can cause issues at sub-terahertz frequency bands, where generating a high-power signal from a single hub is notoriously difficult. As the array scales up, the power shared by each pixel decreases, reducing the output baseband signal strength, which is highly dependent on the power of local oscillation signal. As a result, a signal generated by each pixel can be very weak, leading to low sensitivity. Some on-chip sensors have started using this design, but are limited to eight pixels.

The researchers’ decentralized design tackles this scale-sensitivity trade-off. Each pixel generates its own local oscillation signal, used for receiving and down-mixing the incoming signal. In addition, an integrated coupler synchronizes its local oscillation signal with that of its neighbor. This gives each pixel more output power, since the local oscillation signal does not flow from a global hub.

A good analogy for the new decentralized design is an irrigation system, Han says. A traditional irrigation system has one pump that directs a powerful stream of water through a pipeline network that distributes water to many sprinkler sites. Each sprinkler spits out water that has a much weaker flow than the initial flow from the pump. If you want the sprinklers to pulse at the exact same rate, that would require another control system.

The researchers’ design, on the other hand, gives each site its own water pump, eliminating the need for connecting pipelines, and gives each sprinkler its own powerful water output. Each sprinkler also communicates with its neighbor to synchronize their pulse rates. “With our design, there’s essentially no boundary for scalability,” Han says. “You can have as many sites as you want, and each site still pumps out the same amount of water … and all pumps pulse together.”

The new architecture, however, potentially makes the footprint of each pixel much larger, which poses a challenge to large-scale, high-density array integration. In their design, the researchers combined various functions of four traditionally separate components — antenna, downmixer, oscillator, and coupler — into a single “multitasking” component given to each pixel. This allows for a decentralized design of 32 pixels.

“We designed a multifunctional component for a [decentralized] design on a chip and combine a few discrete structures to shrink the size of each pixel,” Hu says. “Even though each pixel performs complicated operations, it keeps its compactness, so we can still have a large-scale dense array.”

Guided by frequencies

In order for the system to gauge an object’s distance, the frequency of the local oscillation signal must be stable.

To that end, the researchers incorporated into their chip a component called a phase-locked loop, which locks the sub-terahertz frequency of all 32 local oscillation signals to a stable, low-frequency reference. Because the pixels are coupled, their local oscillation signals all share identical, high-stability phase and frequency. This ensures that meaningful information can be extracted from the output baseband signals. This entire architecture minimizes signal loss and maximizes control.

“In summary, we achieve a coherent array, at the same time with very high local oscillation power for each pixel, so each pixel achieves high sensitivity,” Hu says.

February 14, 2019 | More

Turning desalination waste into a useful resource

Process developed at MIT could turn concentrated brine into useful chemicals, making desalination more efficient.

The rapidly growing desalination industry produces water for drinking and for agriculture in the world’s arid coastal regions. But it leaves behind as a waste product a lot of highly concentrated brine, which is usually disposed of by dumping it back into the sea, a process that requires costly pumping systems and that must be managed carefully to prevent damage to marine ecosystems. Now, engineers at MIT say they have found a better way.

In a new study, they show that through a fairly simple process the waste material can be converted into useful chemicals — including ones that can make the desalination process itself more efficient.

The approach can be used to produce sodium hydroxide, among other products. Otherwise known as caustic soda, sodium hydroxide can be used to pretreat seawater going into the desalination plant. This changes the acidity of the water, which helps to prevent fouling of the membranes used to filter out the salty water — a major cause of interruptions and failures in typical reverse osmosis desalination plants.

The concept is described today in the journal Nature Catalysis and in two other papers by MIT research scientist Amit Kumar, professor of mechanical engineering John H. Lienhard V, and several others. Lienhard is the Jameel Professor of Water and Food and the director of the Abdul Latif Jameel Water and Food Systems Lab.

“The desalination industry itself uses quite a lot of it,” Kumar says of sodium hydroxide. “They’re buying it, spending money on it. So if you can make it in situ at the plant, that could be a big advantage.” The amount needed in the plants themselves is far less than the total that could be produced from the brine, so there is also potential for it to be a saleable product.

Sodium hydroxide is not the only product that can be made from the waste brine: Another important chemical used by desalination plants and many other industrial processes is hydrochloric acid, which can also easily be made on site from the waste brine using established chemical processing methods. The chemical can be used for cleaning parts of the desalination plant, but is also widely used in chemical production and as a source of hydrogen.

Currently, the world produces more than 100 billion liters (about 27 billion gallons) a day of water from desalination, which leaves a similar volume of concentrated brine. Much of that is pumped back out to sea, and current regulations require costly outfall systems to ensure adequate dilution of the salts. Converting the brine can thus be both economically and ecologically beneficial, especially as desalination continues to grow rapidly around the world. “Environmentally safe discharge of brine is manageable with current technology, but it’s much better to recover resources from the brine and reduce the amount of brine released,” Lienhard says.

The method of converting the brine into useful products uses well-known and standard chemical processes, including initial nanofiltration to remove undesirable compounds, followed by one or more electrodialysis stages to produce the desired end product. While the processes being suggested are not new, the researchers have analyzed the potential for production of useful chemicals from brine and proposed a specific combination of products and chemical processes that could be turned into commercial operations to enhance the economic viability of the desalination process, while diminishing its environmental impact.

“This very concentrated brine has to be handled carefully to protect life in the ocean, and it’s a resource waste, and it costs energy to pump it back out to sea,” so turning it into a useful commodity is a win-win, Kumar says. And sodium hydroxide is such a ubiquitous chemical that “every lab at MIT has some,” he says, so finding markets for it should not be difficult.

The researchers have discussed the concept with companies that may be interested in the next step of building a prototype plant to help work out the real-world economics of the process. “One big challenge is cost — both electricity cost and equipment cost,” at this stage, Kumar says.

The team also continues to look at the possibility of extracting other, lower-concentration materials from the brine stream, he says, including various metals and other chemicals, which could make the brine processing an even more economically viable undertaking.

“One aspect that was mentioned … and strongly resonated with me was the proposal for such technologies to support more ‘localized’ or ‘decentralized’ production of these chemicals at the point-of-use,” says Jurg Keller, a professor of water management at the University of Queensland in Australia, who was not involved in this work. “This could have some major energy and cost benefits, since the up-concentration and transport of these chemicals often adds more cost and even higher energy demand than the actual production of these at the concentrations that are typically used.”

The research team also included MIT postdoc Katherine Phillips and undergraduate Janny Cai, and Uwe Schroder at the University of Braunschweig, in Germany. The work was supported by Cadagua, a subsidiary of Ferrovial, through the MIT Energy Initiative.

February 13, 2019 | More

MIT robot combines vision and touch to learn the game of Jenga

Machine-learning approach could help robots assemble cellphones and other small parts in a manufacturing line

In the basement of MIT’s Building 3, a robot is carefully contemplating its next move. It gently pokes at a tower of blocks, looking for the best block to extract without toppling the tower, in a solitary, slow-moving, yet surprisingly agile game of Jenga.

The robot, developed by MIT engineers, is equipped with a soft-pronged gripper, a force-sensing wrist cuff, and an external camera, all of which it uses to see and feel the tower and its individual blocks.

As the robot carefully pushes against a block, a computer takes in visual and tactile feedback from its camera and cuff, and compares these measurements to moves that the robot previously made. It also considers the outcomes of those moves — specifically, whether a block, in a certain configuration and pushed with a certain amount of force, was successfully extracted or not. In real time, the robot then “learns” whether to keep pushing or move to a new block, in order to keep the tower from falling.

Details of the Jenga-playing robot are published today in the journal Science Robotics. Alberto Rodriguez, the Walter Henry Gale Career Development Assistant Professor in the Department of Mechanical Engineering at MIT, says the robot demonstrates something that’s been tricky to attain in previous systems: the ability to quickly learn the best way to carry out a task, not just from visual cues, as it is commonly studied today, but also from tactile, physical interactions.

“Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces. It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks,” Rodriguez says. “This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower. The key challenge is to learn from a relatively small number of experiments by exploiting common sense about objects and physics.”

He says the tactile learning system the researchers have developed can be used in applications beyond Jenga, especially in tasks that need careful physical interaction, including separating recyclable objects from landfill trash and assembling consumer products.

“In a cellphone assembly line, in almost every single step, the feeling of a snap-fit, or a threaded screw, is coming from force and touch rather than vision,” Rodriguez says. “Learning models for those actions is prime real-estate for this kind of technology.”

The paper’s lead author is MIT graduate student Nima Fazeli. The team also includes Miquel Oller, Jiajun Wu, Zheng Wu, and Joshua Tenenbaum, professor of brain and cognitive sciences at MIT.

Push and pull

In the game of Jenga — Swahili for “build” — 54 rectangular blocks are stacked in 18 layers of three blocks each, with the blocks in each layer oriented perpendicular to the blocks below. The aim of the game is to carefully extract a block and place it at the top of the tower, thus building a new level, without toppling the entire structure.

To program a robot to play Jenga, traditional machine-learning schemes might require capturing everything that could possibly happen between a block, the robot, and the tower — an expensive computational task requiring data from thousands if not tens of thousands of block-extraction attempts.

Instead, Rodriguez and his colleagues looked for a more data-efficient way for a robot to learn to play Jenga, inspired by human cognition and the way we ourselves might approach the game.

The team customized an industry-standard ABB IRB 120 robotic arm, then set up a Jenga tower within the robot’s reach, and began a training period in which the robot first chose a random block and a location on the block against which to push. It then exerted a small amount of force in an attempt to push the block out of the tower.

For each block attempt, a computer recorded the associated visual and force measurements, and labeled whether each attempt was a success.

Rather than carry out tens of thousands of such attempts (which would involve reconstructing the tower almost as many times), the robot trained on just about 300, with attempts of similar measurements and outcomes grouped in clusters representing certain block behaviors. For instance, one cluster of data might represent attempts on a block that was hard to move, versus one that was easier to move, or that toppled the tower when moved. For each data cluster, the robot developed a simple model to predict a block’s behavior given its current visual and tactile measurements.

Fazeli says this clustering technique dramatically increases the efficiency with which the robot can learn to play the game, and is inspired by the natural way in which humans cluster similar behavior: “The robot builds clusters and then learns models for each of these clusters, instead of learning a model that captures absolutely everything that could happen.”
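
The cluster-then-model idea can be sketched on synthetic data. Everything below — the (force, displacement) features, the two-cluster k-means, the per-cluster success-rate model — is a hypothetical stand-in, not the authors' implementation.

```python
import random

def two_means(points, iters=10):
    """Tiny 2-cluster k-means over 2-D features; a stand-in for the
    grouping step (the paper's actual clustering may differ)."""
    centroids = [min(points), max(points)]       # deterministic start
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)
        centroids = [(sum(p[0] for p in cl) / len(cl),
                      sum(p[1] for p in cl) / len(cl)) for cl in clusters]
    return clusters

# Synthetic attempts: (push_force, block_displacement, extracted?).
rng = random.Random(1)
attempts = []
for _ in range(300):
    if rng.random() < 0.5:   # "loose" blocks: move far, extraction succeeds
        attempts.append((rng.uniform(0, 1), rng.uniform(2, 3), True))
    else:                    # "stuck" blocks: barely move, extraction fails
        attempts.append((rng.uniform(2, 3), rng.uniform(0, 1), False))

clusters = two_means([(f, d) for f, d, _ in attempts])
# Per-cluster "simple model": the empirical success rate of its attempts.
outcome = {(f, d): ok for f, d, ok in attempts}
rates = [sum(outcome[p] for p in cl) / len(cl) for cl in clusters]
```

With well-separated synthetic behaviors, the two clusters cleanly recover the "loose" and "stuck" block types, and each cluster's success rate becomes a one-number predictive model for attempts that fall into it.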

Stacking up

The researchers tested their approach against other state-of-the-art machine learning algorithms, in a computer simulation of the game using the simulator MuJoCo. The lessons learned in the simulator informed the researchers of the way the robot would learn in the real world.

“We provide to these algorithms the same information our system gets, to see how they learn to play Jenga at a similar level,” Oller says. “Compared with our approach, these algorithms need to explore orders of magnitude more towers to learn the game.”

Curious as to how their machine-learning approach stacks up against actual human players, the team carried out a few informal trials with several volunteers.

“We saw how many blocks a human was able to extract before the tower fell, and the difference was not that much,” Oller says.

But there is still a way to go if the researchers want to competitively pit their robot against a human player. In addition to physical interactions, Jenga requires strategy, such as extracting just the right block that will make it difficult for an opponent to pull out the next block without toppling the tower.

For now, the team is less interested in developing a robotic Jenga champion, and more focused on applying the robot’s new skills to other application domains.

“There are many tasks that we do with our hands where the feeling of doing it ‘the right way’ comes in the language of forces and tactile cues,” Rodriguez says. “For tasks like these, a similar approach to ours could figure it out.”

This research was supported, in part, by the National Science Foundation through the National Robotics Initiative.

January 30, 2019 | More

Engineers program marine robots to take calculated risks

Algorithm could help autonomous underwater vehicles explore risky but scientifically rewarding environments

We know far less about the Earth’s oceans than we do about the surface of the moon or Mars. The sea floor is carved with expansive canyons, towering seamounts, deep trenches, and sheer cliffs, most of which are considered too dangerous or inaccessible for autonomous underwater vehicles (AUVs) to navigate.

But what if the reward for traversing such places was worth the risk?

MIT engineers have now developed an algorithm that lets AUVs weigh the risks and potential rewards of exploring an unknown region. For instance, if a vehicle tasked with identifying underwater oil seeps approached a steep, rocky trench, the algorithm could assess the reward level (the probability that an oil seep exists near this trench), and the risk level (the probability of colliding with an obstacle), if it were to take a path through the trench.

“If we were very conservative with our expensive vehicle, saying its survivability was paramount above all, then we wouldn’t find anything of interest,” says Benjamin Ayton, a graduate student in MIT’s Department of Aeronautics and Astronautics. “But if we understand there’s a tradeoff between the reward of what you gather, and the risk or threat of going toward these dangerous geographies, we can take certain risks when it’s worthwhile.”

Ayton says the new algorithm can compute tradeoffs of risk versus reward in real time, as a vehicle decides where to explore next. He and his colleagues in the lab of Brian Williams, professor of aeronautics and astronautics, are implementing this algorithm and others on AUVs, with the vision of deploying fleets of bold, intelligent robotic explorers for a number of missions, including looking for offshore oil deposits, investigating the impact of climate change on coral reefs, and exploring extreme environments analogous to Europa, an ice-covered moon of Jupiter that the team hopes vehicles will one day traverse.

“If we went to Europa and had a very strong reason to believe that there might be a billion-dollar observation in a cave or crevasse, which would justify sending a spacecraft to Europa, then we would absolutely want to risk going in that cave,” Ayton says. “But algorithms that don’t consider risk are never going to find that potentially history-changing observation.”

Ayton and Williams, along with Richard Camilli of the Woods Hole Oceanographic Institution, will present their new algorithm at the Association for the Advancement of Artificial Intelligence conference this week in Honolulu.

A bold path

The team’s new algorithm is the first to enable “risk-bounded adaptive sampling.” An adaptive sampling mission is designed, for instance, to automatically adapt an AUV’s path, based on new measurements that the vehicle takes as it explores a given region. Adaptive sampling missions that consider risk typically do so by finding paths with a concrete, acceptable level of risk. For instance, AUVs may be programmed to only chart paths with a chance of collision that doesn’t exceed 5 percent.

But the researchers found that accounting for risk alone could severely limit a mission’s potential rewards.

“Before we go into a mission, we want to specify the risk we’re willing to take for a certain level of reward,” Ayton says. “For instance, if a path were to take us to more hydrothermal vents, we would be willing to take this amount of risk, but if we’re not going to see anything, we would be willing to take less risk.”

The team’s algorithm takes in bathymetric data, or information about the ocean topography, including any surrounding obstacles, along with the vehicle’s dynamics and inertial measurements, to compute the level of risk for a certain proposed path. The algorithm also takes in all previous measurements that the AUV has taken, to compute the probability that such high-reward measurements may exist along the proposed path.

If the risk-to-reward ratio meets a certain value, determined by scientists beforehand, then the AUV goes ahead with the proposed path, taking more measurements that feed back into the algorithm to help it evaluate the risk and reward of other paths as the vehicle moves forward.
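The risk-versus-reward tradeoff described above can be sketched as a simple feasibility check, where the acceptable collision risk grows with a path's expected reward. The thresholds, path names, and reward units below are illustrative assumptions, not the paper's actual parameters:

```python
def acceptable_risk(expected_reward, base_risk=0.01, risk_per_reward=0.02):
    """The richer a path's expected reward, the more collision risk we tolerate."""
    return base_risk + risk_per_reward * expected_reward

def choose_path(candidates):
    """candidates: (name, expected_reward, collision_probability) tuples.

    Keep paths whose collision risk fits their reward-scaled budget,
    then pick the highest-reward survivor; None if nothing qualifies.
    """
    feasible = [(name, reward) for name, reward, p_collide in candidates
                if p_collide <= acceptable_risk(reward)]
    return max(feasible, key=lambda t: t[1])[0] if feasible else None

# A risky chasm path is accepted only because its expected reward is high;
# an equally risky cliff with modest reward is rejected.
paths = [("safe_plain", 1.0, 0.005),
         ("narrow_chasm", 5.0, 0.08),
         ("sheer_cliff", 2.0, 0.20)]
```

Calling `choose_path(paths)` picks the chasm: its 8 percent collision chance fits the budget its high reward buys, while the cliff's 20 percent chance does not.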

The researchers tested their algorithm in a simulation of an AUV mission east of Boston Harbor. They used bathymetric data collected from the region during a previous NOAA survey, and simulated an AUV exploring at a depth of 15 meters through regions at relatively high temperatures. They looked at how the algorithm planned out the vehicle’s route under three different scenarios of acceptable risk.

In the scenario with the lowest acceptable risk, meaning that the vehicle should avoid any regions that would have a very high chance of collision, the algorithm mapped out a conservative path, keeping the vehicle in a safe region that also did not have any high rewards — in this case, high temperatures. For scenarios of higher acceptable risk, the algorithm charted bolder paths that took a vehicle through a narrow chasm, and ultimately to a high-reward region.

The team also ran the algorithm through 10,000 numerical simulations, generating random environments in each simulation through which to plan a path, and found that the algorithm “trades off risk against reward intuitively, taking dangerous actions only when justified by the reward.”

A risky slope

Last December, Ayton, Williams, and others spent two weeks on a cruise off the coast of Costa Rica, deploying underwater gliders, on which they tested several algorithms, including this newest one. For the most part, the paths the algorithm planned agreed with those proposed by several onboard geologists who were looking for the best routes to find oil seeps.

Ayton says there was a particular moment when the risk-bounded algorithm proved especially handy. An AUV was making its way up a precarious slump, or landslide, where the vehicle couldn’t take too many risks.

“The algorithm found a method to get us up the slump quickly, while being the most worthwhile,” Ayton says. “It took us up a path that, while it didn’t help us discover oil seeps, it did help us refine our understanding of the environment.”

“What was really interesting was to watch how the machine algorithms began to ‘learn’ after the findings of several dives, and began to choose sites that we geologists might not have chosen initially,” says Lori Summa, a geologist and guest investigator at the Woods Hole Oceanographic Institution, who took part in the cruise.  “This part of the process is still evolving, but it was exciting to watch the algorithms begin to identify the new patterns from large amounts of data, and couple that information to an efficient, ‘safe’ search strategy.”

In their long-term vision, the researchers hope to use such algorithms to help autonomous vehicles explore environments beyond Earth.

“If we went to Europa and weren’t willing to take any risks in order to preserve a probe, then the probability of finding life would be very, very low,” Ayton says. “You have to risk a little to get more reward, which is generally true in life as well.”

This research was supported, in part, by Exxon Mobil, as part of the MIT Energy Initiative, and by NASA.

January 30, 2019 | More

Optimizing solar farms with smart drones MIT spinoff Raptor Maps uses machine-learning software to improve the maintenance of solar panels.

Optimizing solar farms with smart drones

As the solar industry has grown, so have some of its inefficiencies. Smart entrepreneurs see those inefficiencies as business opportunities and try to create solutions around them. Such is the nature of a maturing industry.

One of the biggest complications emerging from the industry’s breakneck growth is the maintenance of solar farms. Historically, technicians have run electrical tests on random sections of solar cells in order to identify problems. In recent years, the use of drones equipped with thermal cameras has improved the speed of data collection, but now technicians are being asked to interpret a never-ending flow of unstructured data.

That’s where Raptor Maps comes in. The company’s software analyzes imagery from drones and diagnoses problems down to the level of individual cells. The system can also estimate the costs associated with each problem it finds, allowing technicians to prioritize their work and owners to decide what’s worth fixing.

“We can enable technicians to cover 10 times the territory and pinpoint the most optimal use of their skill set on any given day,” Raptor Maps co-founder and CEO Nikhil Vadhavkar says. “We came in and said, ‘If solar is going to become the number one source of energy in the world, this process needs to be standardized and scalable.’ That’s what it takes, and our customers appreciate that approach.”

Raptor Maps processed the data of 1 percent of the world’s solar energy in 2018, amounting to the energy generated by millions of panels around the world. And as the industry continues its upward trajectory, with solar farms expanding in size and complexity, the company’s business proposition only becomes more attractive to the people driving that growth.

Picking a path

Raptor Maps was founded by Eddie Obropta ’13 SM ’15, Forrest Meyen SM ’13 PhD ’17, and Vadhavkar, who was a PhD candidate at MIT between 2011 and 2016. The former classmates had worked together in the Human Systems Laboratory of the Department of Aeronautics and Astronautics when Vadhavkar came to them with the idea of starting a drone company in 2015.

The founders were initially focused on the agriculture industry. The plan was to build drones equipped with high-definition thermal cameras to gather data, then create a machine-learning system to gain insights on crops as they grew. While the founders began the arduous process of collecting training data, they received guidance from MIT’s Venture Mentoring Service and the Martin Trust Center. In the spring of 2015, Raptor Maps won the MIT $100K Launch competition.

But even as the company began working with the owners of two large farms, Obropta and Vadhavkar were unsure of their path to scaling the company. (Meyen left the company in 2016.) Then, in 2017, they made their software publicly available and something interesting happened.

They found that most of the people who used the system were applying it to thermal images of solar farms instead of real farms. It was a message the founders took to heart.

“Solar is similar to farming: It’s out in the open, 2-D, and it’s spread over a large area,” Obropta says. “And when you see [an anomaly] in thermal images on solar, it usually means an electrical issue or a mechanical issue — you don’t have to guess as much as in agriculture. So we decided the best use case was solar. And with a big push for clean energy and renewables, that aligned really well with what we wanted to do as a team.”

Obropta and Vadhavkar also found themselves on the right side of several long-term trends as a result of the pivot. The International Energy Agency has proposed that solar power could be the world’s largest source of electricity by 2050. But as demand grows, investors, owners, and operators of solar farms are dealing with an increasingly acute shortage of technicians to keep the panels running near peak efficiency.

Since deciding to focus on solar exclusively around the beginning of 2018, Raptor Maps has found success in the industry by releasing its standards for data collection and letting customers — or the many drone operators the company partners with — use off-the-shelf hardware to gather the data themselves. After the data is submitted to the company, the system creates a detailed map of each solar farm and pinpoints any problems it finds.

“We run analytics so we can tell you, ‘This is how many solar panels have this type of issue; this is how much the power is being affected,’” Vadhavkar says. “And we can put an estimate on how many dollars each issue costs.”
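A dollar-based ranking like the one Vadhavkar describes can be sketched as follows. The capacity factor and electricity price are illustrative assumptions, not Raptor Maps' actual model:

```python
HOURS_PER_YEAR = 8760

def annual_loss_usd(power_loss_kw, capacity_factor=0.2, price_per_kwh=0.06):
    """Estimated yearly revenue lost to one anomaly (illustrative numbers)."""
    return power_loss_kw * HOURS_PER_YEAR * capacity_factor * price_per_kwh

def prioritize(anomalies):
    """Rank anomalies so technicians address the costliest problems first.

    anomalies: dicts with an 'id' and the power loss the defect causes.
    """
    return sorted(anomalies,
                  key=lambda a: annual_loss_usd(a["power_loss_kw"]),
                  reverse=True)

worklist = prioritize([{"id": "cracked_cell", "power_loss_kw": 0.3},
                       {"id": "dead_string", "power_loss_kw": 4.0}])
```

Under these assumed numbers, a whole dead string of panels jumps ahead of a single cracked cell, which is the kind of triage that lets owners decide what is worth fixing.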

The model allows Raptor Maps to stay lean while its software does the heavy lifting. In fact, the company’s current operations involve more servers than people.

The tiny operation belies a company that’s carved out a formidable space for itself in the solar industry. Last year, Raptor Maps processed four gigawatts’ worth of data from customers on six continents. That’s enough energy to power nearly 3 million homes.

Vadhavkar says the company’s goal is to grow at least fivefold in 2019 as several large customers move to make the software a core part of their operations. The team is also working on getting its software to generate insights in real time using graphical processing units on the drone itself as part of a project with the multinational energy company Enel Green Power.

Ultimately, the data Raptor Maps collects is taking the uncertainty out of the solar industry, making it a more attractive space for investors, operators, and everyone in between.

“The growth of the industry is what drives us,” Vadhavkar says. “We’re directly seeing that what we’re doing is impacting the ability of the industry to grow faster. That’s huge. Growing the industry — but also, from the entrepreneurial side, building a profitable business while doing it — that’s always been a huge dream.”

January 30, 2019 | More

Learning to teach to speed up learning An algorithm that teaches robot agents how to exchange advice to complete a task helps them learn faster.

Learning to teach to speed up learning

The first artificial intelligence programs to defeat the world’s best players at chess and the game Go received at least some instruction from humans, and would ultimately prove no match for a new generation of AI programs that learn wholly on their own, through trial and error.

A combination of deep learning and reinforcement learning algorithms is responsible for computers achieving dominance at challenging board games like chess and Go, a growing number of video games, including Ms. Pac-Man, and some card games, including poker. But for all the progress, computers still get stuck the more closely a game resembles real life, with hidden information, multiple players, continuous play, and a mix of short- and long-term rewards that make computing the optimal move hopelessly complex.

To get past these hurdles, AI researchers are exploring complementary techniques to help robot agents learn, modeled after the way humans pick up new information not only on our own, but from the people around us, and from newspapers, books, and other media. A collective-learning strategy developed by the MIT-IBM Watson AI Lab offers a promising new direction. Researchers show that a pair of robot agents can cut the time it takes to learn a simple navigation task by 50 percent or more when the agents learn to leverage each other’s growing body of knowledge.

The algorithm teaches the agents when to ask for help, and how to tailor their advice to what has been learned up until that point. The algorithm is unique in that neither agent is an expert; each is free to act as a student-teacher to request and offer more information. The researchers are presenting their work this week at the AAAI Conference on Artificial Intelligence in Hawaii.

Co-authors on the paper, which received an honorable mention for best student paper at AAAI, are Jonathan How, a professor in MIT’s Department of Aeronautics and Astronautics; Shayegan Omidshafiei, a former MIT graduate student now at Alphabet’s DeepMind; Dong-ki Kim of MIT; Miao Liu, Gerald Tesauro, Matthew Riemer, and Murray Campbell of IBM; and Christopher Amato of Northeastern University.

“This idea of providing actions to most improve the student’s learning, rather than just telling it what to do, is potentially quite powerful,” says Matthew E. Taylor, a research director at Borealis AI, the research arm of the Royal Bank of Canada, who was not involved in the research. “While the paper focuses on relatively simple scenarios, I believe the student/teacher framework could be scaled up and useful in multi-player video games like Dota 2, robot soccer, or disaster-recovery scenarios.”

For now, the pros still have the edge in Dota 2 and other virtual games that favor teamwork and quick, strategic thinking. (Though Alphabet’s AI research arm, DeepMind, recently made news after defeating a professional player at the real-time strategy game StarCraft.) But as machines get better at maneuvering dynamic environments, they may soon be ready for real-world tasks like managing traffic in a big city or coordinating search-and-rescue teams on the ground and in the air.

“Machines lack the common-sense knowledge we develop as children,” says Liu, a former MIT postdoc now at the MIT-IBM lab. “That’s why they need to watch millions of video frames, and spend a lot of computation time, learning to play a game well. Even then, they lack efficient ways to transfer their knowledge to the team, or generalize their skills to a new game. If we can train robots to learn from others, and generalize their learning to other tasks, we can start to better coordinate their interactions with each other, and with humans.”

The MIT-IBM team’s key insight was that a team that divides and conquers to learn a new task — in this case, maneuvering to opposite ends of a room and touching the wall at the same time — will learn faster.

Their teaching algorithm alternates between two phases. In the first, both student and teacher decide with each respective step whether to ask for, or give, advice based on their confidence that the next move, or the advice they are about to give, will bring them closer to their goal. Thus, the student only asks for advice, and the teacher only gives it, when the added information is likely to improve their performance. With each step, the agents update their respective task policies and the process continues until they reach their goal or run out of time.

With each iteration, the algorithm records the student’s decisions, the teacher’s advice, and their learning progress as measured by the game’s final score. In the second phase, a deep reinforcement learning technique uses the previously recorded teaching data to update both advising policies. “With each update the teacher gets better at giving the right advice at the right time,” says Kim, a graduate student at MIT.
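The first phase's ask/give rule can be sketched with a simple confidence proxy: the gap between an agent's best and second-best action values. The thresholds and Q-values below are hypothetical, not taken from the paper:

```python
def confidence(q_values):
    """Gap between best and second-best action value as a confidence proxy."""
    ranked = sorted(q_values.values(), reverse=True)
    return ranked[0] - ranked[1]

def choose_action(student_q, teacher_q, ask_threshold=0.1, give_threshold=0.3):
    """Student asks only when unsure; teacher advises only when confident.

    Returns (action, advised): the move taken and whether it came as advice.
    """
    if (confidence(student_q) < ask_threshold
            and confidence(teacher_q) >= give_threshold):
        return max(teacher_q, key=teacher_q.get), True   # advice given
    return max(student_q, key=student_q.get), False      # student acts alone
```

When the student's action values are nearly tied, it defers to a confident teacher; once its own values separate, it stops asking, which keeps advice flowing only when the added information is likely to help.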

In a follow-up paper to be discussed in a workshop at AAAI, the researchers improve on the algorithm’s ability to track how well the agents are learning the underlying task — in this case, a box-pushing task — to improve the agents’ ability to give and receive advice. It’s another step that takes the team closer to its longer-term goal of entering RoboCup, an annual robotics competition started by academic AI researchers.

“We would need to scale to 11 agents before we can play a game of soccer,” says Tesauro, an IBM researcher who developed the first AI program to master the game of backgammon. “It’s going to take some more work but we’re hopeful.”

January 29, 2019 | More

3Q: Judah Cohen on improving seasonal weather forecasting Machine learning could help improve the accuracy of long-term forecasts, MIT climatologist argues

3Q: Judah Cohen on improving seasonal weather forecasting

Judah Cohen, a climatologist and visiting scientist in the Department of Civil and Environmental Engineering, is focused on improving the science of seasonal weather forecasting, in particular the winter forecasts so many people anxiously (or eagerly) await. Cohen is currently investigating the impacts of snow cover and sea ice variability on the winter climate, and how warming in the Arctic is influencing winter weather around the world. He also maintains an active Twitter account and blog, where he posts real-time weather predictions and delves into the art and science of seasonal weather forecasting.

In a recent opinion article published in WIREs Climate Change, Cohen advocates for more use of machine learning in seasonal forecasting. Machine learning systems, trained on historical data, could be used to build a forecasting model that could make predictions for the coming days, weeks, and even months, if current weather conditions were provided, he explains. MIT News spoke with Cohen about how these techniques could be used to help improve long-term winter weather forecasts.

Q: Why do you think it is important to incorporate machine learning in seasonal weather forecasting?

A: We have had great, easily quantifiable advances in short-term seasonal forecasting using dynamical models. A couple of decades ago, we were confident in the one- to two-day forecast, but by the time you got to day three it was like throwing darts at the board. We have really advanced at forecasting five, six, seven, even up to 10 days out, which I think is huge progress in our accuracy. But, if you look at the longer range, starting at two weeks and going up to three months, we have not had that kind of advance using these same dynamical models.

Progress in subseasonal-to-seasonal forecasting has been disappointing, especially relative to the improvements in short-term forecasting. Given new statistical techniques and increases in computing power, a quicker and cheaper way to make advances in seasonal forecasting might be to use machine learning.

I feel like meteorology is a really great field to apply machine learning to because it’s about pattern recognition. In theory, the atmosphere could break down to an infinite number of patterns, but it seems to want to repeat the same patterns over and over again.

If we can use machine learning to improve the longer-range weather forecasts, this would be helpful for those that have exposure to weather risk. For example, weather managers, utilities, and municipalities can make only limited preparations for impending extreme, damaging, or destructive weather on short notice, but they can make more extensive preparations given a forecast of two weeks or longer. The same could be said for supply chains and merchandise. Also, farmers can use longer-range information for planting, fertilizing, and harvesting. Better long-term forecasting could also allow water managers to better manage dams and hydropower, and energy suppliers to better target areas that need excess supply.

Q: How has the field of weather forecasting changed since you started working in this area, and what factors do you use to determine your seasonal forecasts nowadays?

A: If you go back 40 years, seasonal forecasting was pretty much based on the belief that if it’s cold today it’s going to be cold tomorrow. So, if it was a cold November we would have predicted a cold winter. Even including phenomena like the El Niño-Southern Oscillation and the Madden-Julian Oscillation, not much progress has been made using the dynamical models. I also think the current models are overly sensitive to tropical forcing and insensitive to Arctic forcing.

I have been trying to say that the Arctic is important, and maybe if it wasn’t so important before, it certainly has gotten much more important, because the Arctic has seen greater changes from climate change than any other region in the world. For example, if you used Arctic predictors for last winter, you did a much better job predicting the temperature pattern across the Northern Hemisphere than if you used tropical predictors.

Another reason that the models are not carrying that progress over from the short-term forecast to more long-term forecasts is because a lot of the climate fluctuations or anomalies are related to the polar vortex. The polar vortex is located 20 to 30 kilometers above the surface, and it takes about two weeks for the effects to propagate down. If you can capitalize on that information, you can reach out into the subseasonal timescale.

Last winter was a really great example of that, as when the polar vortex is weak you tend to get more severe winter weather across the whole Northern Hemisphere. Last winter we had a big breakdown of the polar vortex in mid-February, but the dynamical models were initialized on Feb. 1, so they showed a very warm forecast across the Northern Hemisphere. As you may remember, it ended up being much colder in February and in March. The statistical models did a better job capturing the pattern of temperature variability. This highlights one of the shortcomings of the current models that are used, that they aren’t incorporating the signal from the polar vortex.

The current situation with the polar vortex is nearly identical to last winter. The models have really struggled with predicting the disruption of the polar vortex and its impacts on our weather. They have predicted a mild winter from start to finish. The mild start was correct, but they have not predicted a transition to much colder weather until very recently. Also, the tropical predictors are all in the opposite or a different phase from last winter, but the Arctic predictors are all the same, and it is now two winters in a row that we have experienced a polar vortex split.

Q: You have a popular Twitter account where you provide your own forecasts and talk about the science behind your predictions. Do you find social media to be a useful platform for interacting with people about the science of weather forecasting?

A: I first started using social media because my attitude was that the best way to get your message out there in real-time was with social media, whether it be Twitter or my blog. I thought the community was neglecting a source of predictability, the influence of Arctic forcing, so my goal was to use social media to try to demonstrate the need to incorporate the Arctic in seasonal forecasting models.

When I started on Twitter, I figured I would have this targeted audience and narrow focus. Then, all of a sudden, all these people that I never envisioned were following me, like plow operators and people who do grounds maintenance and emergency management. A farmer in Pakistan wrote to me and explained that he depends on the weather, and that he follows me on Twitter and uses the information I share to time the planting and harvesting of his crops. He explained that he has internet access, but he shares my tweets and blog with all of his neighbors who don’t have internet access, and said it’s been very, very helpful. That was really mind-blowing to me.

I thought I would be focused on reaching people in the northeastern U.S. and maybe the U.K. But now I am getting messages from meteorologists and people in Jordan, the United Arab Emirates, and Turkey. It’s a much bigger spectrum than I ever envisioned. To me, that is really kind of amazing, to have that kind of reach.

January 22, 2019 | More

Fortifying the future of cryptography Vinod Vaikuntanathan aims to improve encryption in a world with growing applications and evolving adversaries

Fortifying the future of cryptography

As a boy growing up in a small South Indian village, Vinod Vaikuntanathan taught himself calculus by reading books his grandfather left lying around the house. Years later in college, he toiled away in the library studying number theory, which deals with the properties and relationships of numbers, primarily positive integers.

This field of study naturally steered Vaikuntanathan toward what he calls “the most important application of number theory in the modern world”: cryptography.

Today, Vaikuntanathan, a recently tenured associate professor of electrical engineering and computer science at MIT, is using number theory and other mathematical concepts to fortify encryption so it can be used for new applications and stand up to even the toughest adversaries.

One major focus is developing more efficient encryption techniques that can be scaled to do complex computations on large datasets. That means multiple parties can share data while ensuring the data remains private. For example, if researchers could analyze genomic data and patient data together, they may be able to identify key genome sequences associated with diseases. But the information for genomes and patients is kept private by separate entities, so collaboration is difficult. That’s a gap Vaikuntanathan wants to close.

“Data is available everywhere for these purposes, but it lives in silos. Better encryption is a way to ensure privacy yet allow the person holding the encrypted object to get something useful out of it,” Vaikuntanathan says. “Encrypting data and using data for a valuable purpose don’t have to be opposing constraints. You can achieve the best of both worlds sometimes.”
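Homomorphic properties of the kind Vaikuntanathan describes show up even in textbook RSA, where multiplying two ciphertexts multiplies the hidden plaintexts. The tiny key below is purely illustrative; real deployments use far larger parameters and different, fully homomorphic schemes:

```python
# Toy RSA key: p = 61, q = 53, so n = 3233, phi = 3120, and e*d = 1 (mod phi).
# Never use parameters this small in practice.
n, e, d = 3233, 17, 2753

def encrypt(m):
    """Textbook RSA encryption: c = m^e mod n."""
    return pow(m, e, n)

def decrypt(c):
    """Textbook RSA decryption: m = c^d mod n."""
    return pow(c, d, n)

# Anyone holding only ciphertexts can compute a product of the plaintexts:
# decrypt((encrypt(a) * encrypt(b)) % n) == a * b, as long as a * b < n.
```

This is only multiplicative homomorphism; the fully homomorphic schemes discussed later in the article allow arbitrary computation on encrypted data.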

Part of his work also means “future-proofing” cryptography in a world that may soon see the rise of ultrafast quantum computers. Still in their infancy, quantum computers could one day provide breakthroughs in materials science, drug discovery, and artificial intelligence, to name just a few fields. But, because of their incredible speeds, they could also be used to break through most, if not all, of today’s toughest cryptography schemes.

“All the existing encryption systems you use over the internet are insecure if you can build quantum computers,” Vaikuntanathan says. “This is something that everyone knows at this point. We need to develop other ways of doing cryptography to secure the internet so it stands strong, even in the face of quantum computers.”

Step by step

Vaikuntanathan’s journey to cryptography, and to MIT, was a step-by-step process of following his academic interests to increasingly larger cities and institutes — and teaching himself along the way.

It started in Neyyattinkara, India, a place so small “you’d find it hard to locate on a map,” Vaikuntanathan says. Today, he and his wife still disagree over whether to call it a town or village. But he’s adamant on the latter: “It doesn’t even have a shopping mall — that’s my criteria for calling it a village.”

By age 12, using his grandfather’s old texts, Vaikuntanathan had taught himself an admittedly incomplete understanding of calculus. “It was buggy and error-prone, but as you go along you get better teachers. The best thing one can do is teach oneself these notions, struggle at it — you’ll get it wrong — and then later be enlightened,” he says.

After attending his area’s only high school, Vaikuntanathan, at 15, joined a pre-university program at a technical institute in a nearby bigger city, Trivandrum, about 20 miles away, where he met like-minded classmates. “There weren’t many people who cared about math and science,” he says, “but a few of us banded together and learned advanced math by ourselves.” Of course, there were some disadvantages: “Twenty miles takes an hour in India traffic, on public bus, packed like sardines. Commuting there was not the most pleasant thing in the world.”

Two years later, Vaikuntanathan enrolled in the Indian Institute of Technology (IIT) Madras, in Chennai, a top engineering school in one of the country’s largest cities. “That’s where things started,” Vaikuntanathan says. As he had at his previous institute, Vaikuntanathan formed a “band of brothers” — a trio of students, including himself, who began studying cryptography.

Then, in his junior year, his professor gave him a copy of “Lecture Notes on Cryptography,” about 300 pages of printed-out, compiled notes from a course on cryptography taught at MIT by Shafi Goldwasser and Mihir Bellare. “Our professor gave it to us and said, ‘Go read it and don’t bother me for a year,’” Vaikuntanathan says.

Working with “giants in the field”

Vaikuntanathan sought to carry his interest in cryptography to graduate school. Accepted into MIT and the University of California at Berkeley, Vaikuntanathan recalls asking his father for advice on which to attend: “I showed him pictures from Google of Cambridge, and they’re the dead of winter, with the frozen Charles River; and then Berkeley, which was sunny and full of life. My father said, ‘Go to Berkeley,’ and I said, ‘No, I’m going to MIT.’ It was the obvious choice, because it’s where the giants in the field were.”

One of those giants was Goldwasser, who became a graduate studies advisor: “I learned from her books to begin with, so that was quite fantastic.”

Some of his major MIT work revolved around reinforcing cryptography against the coming age of quantum computing. This involved using lattices, an architecture that uses number theory and hides data inside very complex math problems that even quantum computers can’t crack. His PhD studies culminated in co-inventing lattice-based cryptography schemes; he also developed a toolkit to teach others how to build and modify those schemes, along with former classmate and mentor Chris Peikert and Stanford University’s Craig Gentry.
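The flavor of lattice-based schemes can be sketched with a toy learning-with-errors (LWE) encryptor: a bit is hidden under a noisy inner product with a secret vector, and the noise is what makes recovery hard without the key. The parameters below are far too small to be secure and are purely illustrative:

```python
import random

Q, DIM = 97, 4                                   # tiny modulus and dimension
secret = [random.randrange(Q) for _ in range(DIM)]

def encrypt_bit(bit):
    """Hide a bit near 0 (for 0) or near Q/2 (for 1), plus small noise."""
    a = [random.randrange(Q) for _ in range(DIM)]
    noise = random.randrange(-2, 3)              # small error term in [-2, 2]
    b = (sum(ai * si for ai, si in zip(a, secret))
         + noise + bit * (Q // 2)) % Q
    return a, b

def decrypt_bit(ciphertext):
    """Subtract the inner product with the secret, then round off the noise."""
    a, b = ciphertext
    centered = (b - sum(ai * si for ai, si in zip(a, secret))) % Q
    return 1 if Q // 4 < centered < 3 * Q // 4 else 0
```

Without `secret`, the pair `(a, b)` looks like random numbers mod `Q`; with it, the small noise rounds away cleanly. Schemes like BGV build fully homomorphic encryption on exactly this kind of hardness.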

After earning his PhD, Vaikuntanathan worked briefly as a researcher at IBM and Microsoft. During that time, Gentry invented fully homomorphic encryption, “which changed the world for all of us” working in cryptography, Vaikuntanathan says. But the original model was too computationally expensive to be practical. “For a while, fully homomorphic encryption was nice for cryptography kids to play with, but was useless otherwise,” he says.

In the late 2000s Vaikuntanathan, together with Gentry and Zvika Brakerski of the Weizmann Institute of Science, integrated lattices into fully homomorphic encryption techniques, creating a model that achieved far better security and efficiency. Other researchers have since built on top of the model, which is freely available on GitHub as BGV (Brakerski-Gentry-Vaikuntanathan). “People have refined that system again and again,” Vaikuntanathan says. “It’s interesting to see how far it’s come in nearly 10 years.”
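The core property that makes homomorphic encryption appealing, computing on data without ever decrypting it, can be shown with a deliberately simplified toy. The one-time-pad-style scheme below supports only addition and is nothing like real FHE schemes such as BGV, but it illustrates the idea: a server can add two ciphertexts and the result decrypts to the sum of the plaintexts.

```python
import random

# Toy demonstration of the *idea* behind homomorphic encryption.
# NOT real FHE: it handles only addition, and the key holder must
# track which keys were combined. It shows the core property:
# Enc(a) + Enc(b) decrypts to a + b.
N = 10**9  # all arithmetic is modulo N

def encrypt(m, key):
    return (m + key) % N

def decrypt(c, key):
    return (c - key) % N

# Client encrypts two salaries under fresh random keys.
k1, k2 = random.randrange(N), random.randrange(N)
c1, c2 = encrypt(40000, k1), encrypt(55000, k2)

# An untrusted server adds the ciphertexts without seeing 40000 or 55000.
c_sum = (c1 + c2) % N

# Client decrypts the sum with the combined key.
total = decrypt(c_sum, (k1 + k2) % N)   # 95000
```

Fully homomorphic schemes go much further, supporting both addition and multiplication so that arbitrary computations can run on encrypted data; that generality is what made the early constructions so expensive.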

Vaikuntanathan then taught for a couple of years at the University of Toronto. During a summer as a visiting researcher at MIT, however, he knew he had to return. “I knew this place had people with boundless energy, creativity, enthusiasm, and optimism,” he says. “It drew me back.”

Vaikuntanathan started teaching at MIT in 2013. Two years ago, he co-founded a startup, Duality Technologies, with Goldwasser and others to develop cryptography technologies that enable users to carry out complex computations and analytics on encrypted data. To Vaikuntanathan, the startup represents how the mathematical concepts he delved into all those years ago have come to fruition.

“It’s exciting to see the transition from abstract number theory into these very concrete applications,” he says.

January 16, 2019 | More

School of Engineering welcomes new faculty

Eleven new professors join the MIT community.

The School of Engineering is welcoming 11 new faculty members to its departments, institutes, labs, and centers. With research and teaching activities ranging from the development of novel microscopy techniques to intelligent systems and mixed-autonomy mobility, they are poised to make significant contributions in new directions across the school and to a wide range of research efforts around the Institute.

“I am pleased to welcome our outstanding new faculty,” says Anantha Chandrakasan, dean of the School of Engineering. “Their contributions as educators, researchers, and collaborators will enhance the engineering community and strengthen our global impact.”

Pulkit Agrawal will join the Department of Electrical Engineering and Computer Science as an assistant professor in July. Agrawal earned a BS in electrical engineering from the Indian Institute of Technology, Kanpur, and was awarded the Director’s Gold Medal. He earned a PhD in computer science from the University of California at Berkeley. A co-founder of SafelyYou, Inc., Agrawal researches topics spanning robotics, deep learning, computer vision, and computational neuroscience. His work has appeared multiple times in MIT Technology Review, Quanta, New Scientist, the New York Post, and other outlets. He is a recipient of the Signatures Fellow Award, a Fulbright science and technology award, the Goldman Sachs Global Leadership Award, OPJEMS, the Sridhar Memorial Prize, and IIT Kanpur’s academic excellence awards, among others. Agrawal also holds a “sangeet prabhakar” (the equivalent of bachelor’s degree in Indian classical music) and occasionally performs in music concerts.

Jacob Andreas will join the Department of Electrical Engineering and Computer Science as an assistant professor in July. Andreas received a BS from Columbia University and an MPhil from the University of Cambridge, where he studied as a Churchill Scholar. He earned his PhD from the University of California at Berkeley, where he was a member of the Berkeley Natural Language Processing Group and the Berkeley Artificial Intelligence Research Lab. His work focuses on using language as a scaffold for more efficient learning and as a probe for understanding model behavior. He received the best paper award at the 2016 Annual Conference of the North American Chapter of the Association for Computational Linguistics and an honorable mention at the 2017 International Conference on Machine Learning. He has been an NSF Graduate Fellow, a Huawei-Berkeley AI Fellow, and a Facebook Fellow.

Manya Ghobadi joined the Department of Electrical Engineering and Computer Science as an assistant professor in October. Previously, she was a researcher in the Microsoft Research Mobility and Networking group. Prior to Microsoft, she was a software engineer at Google. Ghobadi received her PhD in computer science at the University of Toronto and her BEng in computer engineering at the Sharif University of Technology. A computer systems researcher with a networking focus, she has worked on a broad set of topics, including data-center networking, optical networks, transport protocols, network measurement, and hardware-software co-design. Many of the technologies she has helped develop are part of real-world systems at Microsoft and Google. She was recognized as an N2women Rising Star in networking and communications in 2017. Her work has won a best dataset award, the Google research excellent-paper award (twice), and the ACM Internet Measurement Conference best-paper award.

Ashwin Gopinath joins the Department of Mechanical Engineering as an assistant professor this month. He received his PhD in electrical engineering from Boston University in 2010 and was awarded his department’s outstanding doctoral thesis award. He is presently a research scientist in the Department of Bioengineering at the California Institute of Technology. His research sits at the intersection of DNA nanotechnology, micro-fabrication, synthetic biology, optical physics, and materials science, focusing on DNA origami and design, up to wafer-scale self-assembly with molecular-scale control, and the possibilities these open for microfabricated devices. His present application areas involve quantum optics, nanophotonics, single-molecule biophysics, and molecular diagnostics. In 2017, he received the Robert Dirks Molecular Programming Prize for his early career contributions to combining DNA nanotechnology and traditional semiconductor nanofabrication.

Richard Linares joined the Department of Aeronautics and Astronautics as an assistant professor last July. Before joining MIT, he was an assistant professor at the University of Minnesota’s aerospace engineering and mechanics department. Linares received his BS, MS, and PhD degrees in aerospace engineering from the State University of New York at Buffalo. He was a Director’s Postdoctoral Fellow at Los Alamos National Laboratory and also held a postdoc appointment at the United States Naval Observatory. His research areas are astrodynamics, estimation and controls, satellite guidance and navigation, space situational awareness, and space-traffic management.

Kevin O’Brien joined the Department of Electrical Engineering and Computer Science as an assistant professor last July. He earned a BS in physics from Purdue University and a PhD in physics from the University of California at Berkeley. He joined the Quantum Nanoelectronics Laboratory (Siddiqi Group) at UC Berkeley as a postdoc to lead development of multiqubit quantum processors. His work has appeared in top journals including Science, Nature Materials, and Nature Communications, among others. He has been an NSF Graduate Fellow. His research bridges nonlinear optics, metamaterials, and quantum engineering.

Negar Reiskarimian will join the Department of Electrical Engineering and Computer Science as an assistant professor in July. She received both a BS and MS degree in electrical engineering from Sharif University of Technology in Iran and is currently a PhD candidate in electrical engineering at Columbia University. She has published in top-tier IEEE IC-related journals and conferences, as well as broader-interest high-impact journals in the Nature family. Her research has been widely covered in the press and featured in IEEE Spectrum, Gizmodo, and EE Times, among others. She is the recipient of numerous awards and fellowships, including Forbes’ “30 under 30,” a Paul Baran Young Scholar award, a Qualcomm Innovation Fellowship, and multiple IEEE awards and fellowships.

Frances M. Ross joined the Department of Materials Science and Engineering as a full professor in December.  Previously she was a member of the nanoscale materials analysis department at IBM’s Thomas J. Watson Research Center, where she performed research on nanostructures using transmission electron microscopes (TEM) that allow researchers to see, in real time, how nanostructures form, and then to see how the growth process is affected by changes in temperature, environment, and other variables. Understanding materials at such a basic level has remarkable implications for many applications including semiconductors, energy storage, and more. Ross earned her BA and PhD at Cambridge University and was a postdoc at AT&T Bell Labs. She has been recognized with many awards and honors, including election to fellow in the American Physical Society, the Materials Research Society, the American Association for the Advancement of Science, the Microscopy Society of America, the American Vacuum Society, and the Royal Microscopical Society. She holds the Ellen Swallow Richards Chair.

Suvrit Sra joins the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society as an assistant professor this month. He was a principal research scientist in the Laboratory for Information and Decision Systems (LIDS) at MIT. He obtained his PhD in computer science from the University of Texas at Austin in 2007. Before joining LIDS, he was a senior research scientist at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. He has also held visiting faculty positions at UC Berkeley and Carnegie Mellon during 2013–14. His research bridges areas such as optimization, matrix theory, geometry, and probability with machine learning. More broadly, he is interested in data-driven questions within engineering, science, and health care. His work has won several awards at machine learning venues, as well as the 2011 SIAM Outstanding Paper Award. He founded the OPT Optimization for Machine Learning series of workshops at the Neural Information Processing Systems conference, which he has co-chaired since 2008; he has also edited a popular book with the same title (MIT Press, 2011).

Giovanni Traverso will join the Department of Mechanical Engineering as an assistant professor in July. He received his PhD in medical sciences from Johns Hopkins University in 2010. He subsequently completed medical school at Cambridge University, an internal medicine residency at the Brigham and Women’s Hospital (BWH), and his gastroenterology fellowship training at Massachusetts General Hospital. He is presently an assistant professor of medicine and associate physician in the division of gastroenterology at BWH. For his postdoctoral research at MIT, he developed a series of novel technologies for drug delivery as well as physiological sensing via the gastrointestinal tract. His present research focuses on developing efficient systems for drug delivery through the gastrointestinal tract, as well as novel ingestible electronic devices for sensing a broad array of physiologic and pathophysiologic parameters. He has been the recipient of the grand prize of the Collegiate Inventors Competition and a research fellowship from Trinity College, and was named one of the most promising innovators under 35 by MIT Technology Review. Traverso is a co-founder of Lyndra, Suono Bio, and Celero Systems, which were established to accelerate the translation of technologies developed by his team for use in medical care.

Cathy Wu will join the Institute as an assistant professor in the Department of Civil and Environmental Engineering, with a core affiliation in the Institute for Data, Systems, and Society, in July. Wu earned a PhD in electrical engineering and computer science at the University of California at Berkeley, where she worked with the Berkeley Artificial Intelligence Research Lab, Berkeley DeepDrive, California Partners for Advanced Transportation Technology, and the Berkeley Real-time Intelligent Secure Explainable Systems Lab. Her research involves machine learning, robotics, intelligent systems, and mixed-autonomy mobility. She is the recipient of several fellowships, including the NSF Graduate Research Fellowship, the Chancellor’s Fellowship for Graduate Study at UC Berkeley, the National Defense Science and Engineering Graduate Fellowship, and the Dwight David Eisenhower Transportation Fellowship. She has been awarded the 2018 Council of University Transportation Centers’s Milton Pikarsky Memorial Award, the 2017 ITS Outstanding Graduate Student Award, and the 2016 IEEE International Conference on Intelligent Transportation Systems Best Paper Award.

January 9, 2019 | More

Tiny satellites could be “guide stars” for huge next-generation telescopes

There are more than 3,900 confirmed planets beyond our solar system. Most of them have been detected because of their “transits” — instances when a planet crosses its star, momentarily blocking its light. These dips in starlight can tell astronomers a bit about a planet’s size and its distance from its star.
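The size measurement follows a standard geometric relation: the fractional dip in starlight is roughly the square of the planet-to-star radius ratio, since it is the ratio of the two disks' areas. A quick illustration with the usual Earth and Sun radii:

```python
# The fractional dip in starlight during a transit is roughly the ratio
# of the planet's disk area to the star's: depth ~ (R_planet / R_star)^2.
# Standard relation; the radii below are the usual Earth/Sun values in km.
R_earth = 6_371.0
R_sun = 696_000.0

depth = (R_earth / R_sun) ** 2
print(f"Earth transiting the Sun dims it by ~{depth:.1e}")  # ~8.4e-05, about 84 ppm
```

A dip of under a hundredth of a percent is why transit surveys demand such precise, stable photometry.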

But knowing more about the planet, including whether it harbors oxygen, water, and other signs of life, requires far more powerful tools. Ideally, these would be much bigger telescopes in space, with light-gathering mirrors as wide as those of the largest ground observatories. NASA engineers are now developing designs for such next-generation space telescopes, including “segmented” telescopes with multiple small mirrors that could be assembled or unfurled to form one very large telescope once launched into space.

NASA’s upcoming James Webb Space Telescope is an example of a segmented primary mirror, with a diameter of 6.5 meters and 18 hexagonal segments. Next-generation space telescopes are expected to be as large as 15 meters, with over 100 mirror segments.

One challenge for segmented space telescopes is how to keep the mirror segments stable and pointing collectively toward an exoplanetary system. Such telescopes would be equipped with coronagraphs — instruments that are sensitive enough to discern between the light given off by a star and the considerably weaker light emitted by an orbiting planet. But the slightest shift in any of the telescope’s parts could throw off a coronagraph’s readings and disrupt measurements of oxygen, water, or other planetary features.
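To see why such sensitivity is needed, a standard back-of-the-envelope estimate puts an Earth-like planet's reflected visible light at roughly one ten-billionth of its star's output:

```python
# Rough reflected-light contrast between a planet and its star:
# contrast ~ albedo * (R_planet / (2 * orbital_distance))^2.
# Standard estimate, evaluated here with Earth/Sun numbers.
albedo = 0.3            # Earth's approximate Bond albedo
R_p = 6.371e6           # Earth radius, m
a = 1.496e11            # Earth-Sun distance, m

contrast = albedo * (R_p / (2 * a)) ** 2
print(f"Earth/Sun contrast ~ {contrast:.1e}")   # on the order of 1e-10
```

Suppressing starlight by ten orders of magnitude leaves essentially no margin for optical drift, which is where the picometer-level stability requirement comes from.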

Now MIT engineers propose that a second, shoebox-sized spacecraft equipped with a simple laser could fly at a distance from the large space telescope and act as a “guide star,” providing a steady, bright light near the target system that the telescope could use as a reference point in space to keep itself stable.

In a paper published today in the Astronomical Journal, the researchers show that the design of such a laser guide star would be feasible with today’s existing technology. The researchers say that using the laser light from the second spacecraft to stabilize the system relaxes the demand for precision in a large segmented telescope, saving time and money, and allowing for more flexible telescope designs.

“This paper suggests that in the future, we might be able to build a telescope that’s a little floppier, a little less intrinsically stable, but could use a bright source as a reference to maintain its stability,” says Ewan Douglas, a postdoc in MIT’s Department of Aeronautics and Astronautics and a lead author on the paper.

The paper also includes Kerri Cahoy, associate professor of aeronautics and astronautics at MIT, along with graduate students James Clark and Weston Marlow at MIT, and Jared Males, Olivier Guyon, and Jennifer Lumbres from the University of Arizona.

In the crosshairs

For over a century, astronomers have been using actual stars as “guides” to stabilize ground-based telescopes.

“If imperfections in the telescope motor or gears were causing your telescope to track slightly faster or slower, you could watch your guide star on a crosshairs by eye, and slowly keep it centered while you took a long exposure,” Douglas says.

In the 1990s, scientists started using lasers on the ground as artificial guide stars by exciting sodium in the upper atmosphere, pointing the lasers into the sky to create a point of light some 40 miles from the ground. Astronomers could then stabilize a telescope using this light source, which could be generated anywhere the astronomer wanted to point the telescope.

“Now we’re extending that idea, but rather than pointing a laser from the ground into space, we’re shining it from space, onto a telescope in space,” Douglas says.  Ground telescopes need guide stars to counter atmospheric effects, but space telescopes for exoplanet imaging have to counter minute changes in the system temperature and any disturbances due to motion.

The space-based laser guide star idea arose out of a project that was funded by NASA. The agency has been considering designs for large, segmented telescopes in space and tasked the researchers with finding ways of bringing down the cost of the massive observatories.

“The reason this is pertinent now is that NASA has to decide in the next couple years whether these large space telescopes will be our priority in the next few decades,” Douglas says. “That decision-making is happening now, just like the decision-making for the Hubble Space Telescope happened in the 1960s, but it didn’t launch until the 1990s.”

Star fleet

Cahoy’s lab has been developing laser communications for use in CubeSats, which are shoebox-sized satellites that can be built and launched into space at a fraction of the cost of conventional spacecraft.

For this new study, the researchers looked at whether a laser, integrated into a CubeSat or slightly larger SmallSat, could be used to maintain the stability of a large, segmented space telescope modeled after NASA’s LUVOIR (for Large UV Optical Infrared Surveyor), a conceptual design that includes multiple mirrors that would be assembled in space.

Researchers have estimated that such a telescope would have to remain perfectly still, within 10 picometers — about a quarter the diameter of a hydrogen atom — in order for an onboard coronagraph to take accurate measurements of a planet’s light, apart from its star.

“Any disturbance on the spacecraft, like a slight change in the angle of the sun, or a piece of electronics turning on and off and changing the amount of heat dissipated across the spacecraft, will cause slight expansion or contraction of the structure,” Douglas says. “If you get disturbances bigger than around 10 picometers, you start seeing a change in the pattern of starlight inside the telescope, and the changes mean that you can’t perfectly subtract the starlight to see the planet’s reflected light.”

The team came up with a general design for a laser guide star that would be far enough away from a telescope to be seen as a fixed star — about tens of thousands of miles away — and that would point back and send its light toward the telescope’s mirrors, each of which would reflect the laser light toward an onboard camera. That camera would measure the phase of this reflected light over time. Any change of 10 picometers or more would signal a compromise to the telescope’s stability that onboard actuators could then quickly correct.
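For a rough sense of the scale of that phase measurement: a mirror motion of d changes the round-trip optical path by 2d, shifting the measured phase by 2π(2d)/λ. The wavelength below is an assumed near-infrared laser line, purely for illustration, not a value from the article.

```python
import math

# A mirror motion of d changes the round-trip path by 2*d, so the
# measured phase shifts by delta_phi = 2*pi*(2*d)/lambda.
wavelength = 1.55e-6    # 1550 nm laser line (assumption, for illustration)
d = 10e-12              # 10-picometer mirror motion (from the article)

delta_phi = 2 * math.pi * (2 * d) / wavelength
print(f"phase shift: {delta_phi:.2e} rad")
```

A shift of order ten-millionths of a radian is tiny but measurable with interferometric averaging, which is why a bright, steady laser reference makes the scheme workable.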

To see if such a laser guide star design would be feasible with today’s laser technology, Douglas and Cahoy worked with colleagues at the University of Arizona to come up with different brightness sources, to figure out, for instance, how bright a laser would have to be to provide a certain amount of information about a telescope’s position, or to provide stability using models of segment stability from large space telescopes. They then drew up a set of existing laser transmitters and calculated how stable, strong, and far away each laser would have to be from the telescope to act as a reliable guide star.
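A toy link-budget calculation of the kind described can be sketched as follows; every number here (transmit power, beam divergence, separation, aperture size) is an illustrative assumption, not a value from the paper.

```python
import math

# Back-of-the-envelope laser guide star link budget: spread the beam
# over its spot at the telescope, then take the fraction the aperture
# captures. All parameter values are illustrative assumptions.
P_tx = 1.0              # transmit power, W (assumption)
theta = 100e-6          # full beam divergence, rad (assumption)
d = 5.0e7               # guide-star separation, m (~50,000 km, assumption)
D_rx = 1.0              # receiving aperture diameter, m (assumption)

spot_radius = d * theta / 2                 # beam radius at the telescope
spot_area = math.pi * spot_radius ** 2      # area the beam has spread over
rx_area = math.pi * (D_rx / 2) ** 2         # collecting area of the aperture
P_rx = P_tx * rx_area / spot_area           # captured power (uniform-beam approx.)
print(f"received power ~ {P_rx:.1e} W")
```

Tens of nanowatts of collected laser light is many orders of magnitude brighter than the starlight being imaged, which is the sense in which a modest CubeSat laser can serve as a very bright reference source.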

In general, they found laser guide star designs are feasible with existing technologies, and that the system could fit entirely within a SmallSat roughly a cubic foot in size. Douglas says that a single guide star could conceivably follow a telescope’s “gaze,” traveling from one star to the next as the telescope switches its observation targets. However, this would require the smaller spacecraft to journey hundreds of thousands of miles paired with the telescope at a distance, as the telescope repositions itself to look at different stars.

Instead, Douglas says a small fleet of guide stars could be deployed, affordably, and spaced across the sky, to help stabilize a telescope as it surveys multiple exoplanetary systems. Cahoy points out that the recent success of NASA’s MarCO CubeSats, which supported the Mars InSight lander as a communications relay, demonstrates that CubeSats with propulsion systems can work in interplanetary space, for longer durations and at large distances.

“Now we’re analyzing existing propulsion systems and figuring out the optimal way to do this, and how many spacecraft we’d want leapfrogging each other in space,” Douglas says. “Ultimately, we think this is a way to bring down the cost of these large, segmented space telescopes.”

This research was funded in part by a NASA Early Stage Innovation Award.

January 4, 2019 | More