News and Research
Catherine Iacobo named industry co-director for MIT Leaders for Global Operations

Cathy Iacobo, a lecturer at the MIT Sloan School of Management, has been named the new industry co-director for the MIT Leaders for Global Operations (LGO) program.

MIT team places first in U.S. Air Force virtual reality competition

When the United States Air Force put out a call for submissions for its first-ever Visionary Q-Prize competition in October 2018, a six-person team of three MIT students and three LGO alumni took up the challenge. Last month, they emerged as a first-place winner for their prototype of a virtual reality tool they called CoSMIC (Command, Sensing, and Mapping Information Center).

The challenge was hosted by the Air Force Research Labs Space Vehicles Directorate and the Wright Brothers Institute to encourage nontraditional sources with innovative products and ideas to engage with military customers to develop solutions for safe and secure operations in space.

April 12, 2019 | More

MIT graduate engineering, business programs earn top rankings from U.S. News for 2020

Graduate engineering program is No. 1 in the nation; MIT Sloan is No. 3.

MIT’s graduate program in engineering has again earned the No. 1 spot in U.S. News & World Report’s annual rankings, a place it has held since 1990, when the magazine first ranked such programs.

The MIT Sloan School of Management also placed highly, occupying the No. 3 spot for the best graduate business program, which it shares with Harvard University and the University of Chicago.

March 22, 2019 | More

Leading to Green

More efficient or more sustainable? Janelle Heslop, LGO ’19, helps businesses achieve both. Heslop is no shrinking violet. She found a voice for herself and the environment when she was in middle school, volunteering as a junior docent for the Hudson River Museum. “I was a 12-year-old giving tours, preaching to people: we’ve got to protect our resources,” Heslop says. “At a very early age, I learned to have a perspective, and assert it.”

February 22, 2019 | More

Winners of inaugural AUS New Venture Challenge announced

Danielle Castley, a Dartmouth PhD candidate; Jordan Landis, LGO ’20; and Ian McDonald, PhD, of Neutroelectric LLC won the inaugural American University of Sharjah New Ventures Challenge, taking the Chancellor’s Prize of $50,000 for radiation shielding materials developed to improve safety margins and reduce costs for nuclear power plant operations and for the transport and storage of spent nuclear fuel.

February 20, 2019 | More

Tackling greenhouse gases

While a number of other MIT researchers are developing capture and reuse technologies to minimize greenhouse gas emissions, Professor Timothy Gutowski, frequent LGO advisor, is approaching climate change from a completely different angle: the economics of manufacturing.

Gutowski understands manufacturing. He has worked on both the industry and academic side of manufacturing, was the director of MIT’s Laboratory for Manufacturing and Productivity for a decade, and currently leads the Environmentally Benign Manufacturing research group at MIT. His primary research focus is assessing the environmental impact of manufacturing.

January 11, 2019 | More

Department of Mechanical Engineering announces new leadership team

Pierre Lermusiaux, LGO thesis advisor and professor of mechanical engineering and ocean science and engineering, will join the MechE department’s leadership team. Professor Lermusiaux will serve as associate department head for operations.

Evelyn Wang, the Gail E. Kendall Professor, who began her role as head of MIT’s Department of Mechanical Engineering (MechE) on July 1, has announced that Pierre Lermusiaux, professor of mechanical engineering and ocean science and engineering, and Rohit Karnik, associate professor of mechanical engineering, will join her on the department’s leadership team. Lermusiaux will serve as associate department head for operations and Karnik will be the associate department head for education.

“I am delighted to welcome Pierre and Rohit to the department’s leadership team,” says Wang. “They have both made substantial contributions to the department and are well-suited to ensure that it continues to thrive.”

Pierre Lermusiaux, associate department head for operations

Pierre Lermusiaux has been instrumental in developing MechE’s strategic plan over the past several years. In 2015, with Evelyn Wang, he was co-chair of the mechanical engineering strategic planning committee. They were responsible for interviewing individuals across the MechE community, determining priority “grand challenge” research areas, investigating new educational models, and developing mechanisms to enhance community and departmental operations. The resulting strategic plan will inform the future of MechE for years to come.

“Pierre is an asset to our department,” adds Wang. “I look forward to working with him to lead our department toward new research frontiers and cutting-edge discoveries.”

Lermusiaux joined MIT as associate professor in 2007 after serving as a research associate at Harvard University, where he also received his PhD. He is an internationally recognized thought leader at the intersection of ocean modeling and observing. He has developed new uncertainty quantification and data assimilation methods. His research has improved real-time data-driven ocean modeling and has had important implications for marine industries, fisheries, energy, security, and our understanding of human impact on the ocean’s health.

Lermusiaux’s talent as an educator has been recognized with the Ruth and Joel Spira Award for Teaching Excellence. He has been the chair of the graduate admissions committee since 2014. He has served on many MechE and institute committees and is also active in MIT-Woods Hole Oceanographic Institution Joint Program committees.

“Working for the department, from our graduate admission to the strategic planning with Evelyn, has been a pleasure,” says Lermusiaux. “I am thrilled to be continuing such contributions as associate department head for research and operations. I look forward to developing and implementing strategies and initiatives that help our department grow and thrive.”

Lermusiaux succeeds Evelyn Wang, who previously served as associate department head for operations under the former department head Gang Chen.

Rohit Karnik, associate department head for education

Over the past two years, Rohit Karnik has taken an active role in shaping the educational experience at MechE. As the undergraduate officer, he has overseen the operations of the department’s undergraduate office and chaired the undergraduate programs committee. This position has afforded Karnik the opportunity to evaluate and refine the department’s course offerings each year and work closely with undergraduate students to provide the best education.

“Rohit is a model citizen and has provided dedicated service to our department,” says Wang. “I look forward to working with him to create new education initiatives and continue to provide a world-class education for our students.”

Prior to joining MIT as a postdoc in 2006, Karnik received his PhD from the University of California at Berkeley. In 2006, he joined the faculty as an assistant professor of mechanical engineering. He is recognized as a leader in the field of micro- and nanofluidics and has made a number of seminal contributions to the fundamental understanding of nanoscale fluid transport. He has been recognized with a National Science Foundation CAREER Award and a Department of Energy Early Career Award.

Karnik’s dedication to his students has been recognized with the Keenan Award for Innovation in Education and the Ruth and Joel Spira Award for Teaching Excellence. He has also served on the graduate admissions committee and various faculty search committees.

“It is a tremendous honor and responsibility to take this position in the top mechanical engineering department in the world,” says Karnik. “I will strive to ensure that we maintain excellence in mechanical engineering education and adapt to the changing times to offer strong and comprehensive degree programs and the best possible experience for our students.”

Karnik succeeds Professor John Brisson who previously served as associate department head for education.

August 3, 2018 | More

Boeing will be Kendall Square Initiative’s first major tenant

Boeing, the world’s largest aerospace company and an LGO partner company, has announced that it will be part of MIT’s Kendall Square Initiative. The company has agreed to lease approximately 100,000 square feet in MIT’s building to be developed at 314 Main St., in the heart of Kendall Square in Cambridge.

MIT’s Kendall Square Initiative includes six sites slated for housing, retail, research and development, office, academic, and open space uses. The building at 314 Main St. (“Site 5”) is located between the MBTA Red Line station and the Kendall Hotel. Boeing is expected to occupy its new space by the end of 2020.

“Our focus on advancing the Kendall Square innovation ecosystem includes a deep and historic understanding of what we call the ‘power of proximity’ to address pressing global challenges,” MIT Executive Vice President and Treasurer Israel Ruiz says. “MIT’s president, L. Rafael Reif, has made clear his objective of reducing the time it takes to move ideas from the classroom and lab out to the market. The power of proximity is a dynamic that propels this concept forward: Just as pharmaceutical, biotech, and tech sector scientists in Kendall Square work closely with their nearby MIT colleagues, Boeing and MIT researchers will be able to strengthen their collaborative ties to further chart the course of the aerospace industry.”

Boeing was founded in 1916 — the same year that MIT moved to Cambridge — and marked its recent centennial in a spirit similar to the Institute’s 100-year celebration in 2016, with special events, community activities, and commemorations. That period also represents a century-long research relationship between Boeing and MIT that has helped to advance the global aerospace industry.

Some of Boeing’s founding leaders, as well as engineers, executives, Boeing Technical Fellows, and student interns, are MIT alumni.

Earlier this year, Boeing announced that it will serve as the lead donor for MIT’s $18 million project to replace its 80-year-old Wright Brothers Wind Tunnel. This pledge will help to create, at MIT, the world’s most advanced academic wind tunnel.

In 2017, Boeing acquired MIT spinout Aurora Flight Sciences, which develops advanced aerospace platforms and autonomous systems. Its primary research and development center is located at 90 Broadway in Kendall Square. In the new facility at 314 Main St., Boeing will establish the Aerospace and Autonomy Center, which will focus on advancing enabling technologies for autonomous aircraft.

“Boeing is leading the development of new autonomous vehicles and future transportation systems that will bring flight closer to home,” says Greg Hyslop, Boeing chief technology officer. “By investing in this new research facility, we are creating a hub where our engineers can collaborate with other Boeing engineers and research partners around the world and leverage the Cambridge innovation ecosystem.”

“It’s fitting that Boeing will join the Kendall/MIT innovation family,” MIT Provost Martin Schmidt says. “Our research interests have been intertwined for over 100 years, and we’ve worked together to advance world-changing aerospace technologies and systems. MIT’s Department of Aeronautics and Astronautics is the oldest program of its kind in the United States, and excels at its mission of developing new air transportation concepts, autonomous systems, and small satellites through an intensive focus on cutting-edge education and research. Boeing’s presence will create an unprecedented opportunity for new synergies in this industry.”

The current appearance of the 314 Main St. site belies its future active presence in Kendall Square. The building’s foundation and basement level — which will house loading infrastructure, storage and mechanical space, and bicycle parking — are currently under construction. Adjacent to those functions is an underground parking garage, a network of newly placed utilities, and water and sewer infrastructure. Vertical construction of the building should begin in September.

August 3, 2018 | More

Reliable energy for all

Prosper Nyovanie (LGO ’19) discusses his passion for using engineering and technology to solve global problems.

During high school, Prosper Nyovanie had to alter his daily and nightly schedules to accommodate the frequent power outages that swept cities across Zimbabwe.

“[Power] would go almost every day — it was almost predictable,” Nyovanie recalls. “I’d come back from school at 5 p.m., have dinner, then just go to sleep because the electricity wouldn’t be there. And then I’d wake up at 2 a.m. and start studying … because by then you’d usually have electricity.”

At the time, Nyovanie knew he wanted to study engineering, and upon coming to MIT as an undergraduate, he majored in mechanical engineering. He discovered a new area of interest, however, when he took 15.031J (Energy Decisions, Markets, and Policies), which introduced him to questions of how energy is produced, distributed, and consumed. He went on to minor in energy studies.

Now as a graduate student and fellow in MIT’s Leaders for Global Operations (LGO) program, Nyovanie is on a mission to learn the management skills and engineering knowledge he needs to power off-grid communities around the world through his startup, Voya Sol. The company develops solar electric systems that can be scaled to users’ needs.

Determination and quick thinking

Nyovanie was originally drawn to MIT for its learning-by-doing engineering focus. “I thought engineering was a great way to take all these cool scientific discoveries and technologies and apply them to global problems,” he says. “One of the things that excited me a lot about MIT was the hands-on approach to solving problems. I was super excited about UROP [the Undergraduate Research Opportunities Program]. That program made MIT stick out from all the other universities.”

As a mechanical engineering major, Nyovanie took part in a UROP for 2.5 years in the Laboratory for Manufacturing and Productivity with Professor Martin Culpepper. But his experience in 15.031J made him realize his interests were broader than just research, and included the intersection of technology and business.

“One big thing that I liked about the class was that it introduced this other complexity that I hadn’t paid that much attention to before, because when you’re in the engineering side, you’re really focused on making technology, using science to come up with awesome inventions,” Nyovanie says. “But there are considerations that you need to think about when you’re implementing [such inventions]. You need to think about markets, how policies are structured.”

The class inspired Nyovanie to become a fellow in the LGO program, where he will earn an MBA from the MIT Sloan School of Management and a master’s in mechanical engineering. He is also a fellow of the Legatum Center for Development and Entrepreneurship at MIT.

When Nyovanie prepared for his fellowship interview while at home in Zimbabwe, he faced another electricity interruption: A transformer blew and would take time to repair, leaving him without power before his interview.

“I had to act quickly,” Nyovanie says. “I went and bought a petrol generator just for the interview. … The generator provided power for my laptop and for the Wi-Fi.” He recalls being surrounded by multiple solar lanterns that provided enough light for the video interview.

While Nyovanie’s determination in high school and quick thinking before graduate school enabled him to work around power supply issues, he realizes that luxury doesn’t extend to all those facing similar situations.

“I had enough money to actually go buy a petrol generator. Some of these communities in off-grid areas don’t have the resources they need to be able to get power,” Nyovanie says.

Scaling perspectives

Before co-founding Voya Sol with Stanford University graduate student Caroline Jo, Nyovanie worked at SunEdison, a renewable energy company, for three years. During most of that time, Nyovanie worked as a process engineer and analyst through the Renewable Energy Leadership Development Rotational Program. As part of the program, Nyovanie rotated between different roles at the company around the world.

During his last rotation, Nyovanie worked as a project engineer and oversaw the development of rural minigrids in Tanzania. “That’s where I got firsthand exposure to working with people who don’t have access to electricity and working to develop a solution for them,” Nyovanie says. When SunEdison went bankrupt, Nyovanie wanted to stay involved in developing electricity solutions for off-grid communities. So, he stayed in talks with rural electricity providers in Zimbabwe, Kenya, and Nigeria before eventually founding Voya Sol with Jo.

Voya Sol develops scalable solar home systems that differ from existing solar home system technologies. “A lot of them are fixed,” Nyovanie says. “So if you buy one, and need an additional light, then you have to go buy another whole new system. … The scalable system would take away some of that risk and allow the customer to build their own system so that they buy a system that fits their budget.” By giving users the opportunity to scale their wattage up or down to meet their energy needs, Nyovanie hopes that the solar electric systems will help power off-grid communities across the world.
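As a back-of-the-envelope illustration of what sizing a system to a customer's loads can involve, a simple calculation adds up daily energy use and divides by the usable solar resource. The devices, wattages, and solar assumptions below are invented for illustration and are not details from Voya Sol.

```python
# Hypothetical sizing sketch: sum device loads, then estimate the minimum
# panel wattage needed to meet them. All numbers are illustrative placeholders.

loads = {  # device: (watts, hours of use per day)
    "led_lights": (10, 5),
    "phone_charger": (5, 2),
    "radio": (8, 3),
}

# Daily energy demand in watt-hours.
daily_wh = sum(watts * hours for watts, hours in loads.values())

peak_sun_hours = 5.0   # assumed average daily solar resource
system_losses = 0.8    # assumed 20% combined wiring/battery/conversion losses

# Minimum panel size that replenishes the daily demand.
panel_watts = daily_wh / (peak_sun_hours * system_losses)
print(f"daily load: {daily_wh} Wh, minimum panel size: {panel_watts:.0f} W")
```

A scalable system in this spirit would let the customer add a device to the list later and grow the panel and battery capacity accordingly, rather than replacing the whole kit.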

Nyovanie and his co-founder are currently both full-time graduate students in dual degree programs. But to them, graduate school didn’t necessarily mean an interruption to their company’s operations; it meant new opportunities for learning, mentorship, and team building. Over this past spring break, Nyovanie and Jo traveled to Zimbabwe to perform prototype testing for their solar electric system, and they plan to conduct a second trip soon.

“We’re looking into ways we can aggregate people’s energy demands,” Nyovanie says. “Interconnected systems can bring in additional savings for customers.” In the future, Nyovanie hopes to expand the distribution of scalable solar electric systems through Voya Sol to off-grid communities worldwide. Voya Sol’s ultimate vision is to enable off-grid communities to build their own electricity grids, by allowing individual customers to not only scale their own systems, but also interconnect their systems with their neighbors’. “In other words, Voya Sol’s goal is to enable a completely build-your-own, bottom-up electricity grid,” Nyovanie says.

Supportive communities

During his time as a graduate student at MIT, Nyovanie has found friendship and support among his fellow students.

“The best thing about being at MIT is that people are working on all these cool, different things that they’re passionate about,” Nyovanie says. “I think there’s a lot of clarity that you can get just by going outside of your circle and talking to people.”

Back home in Zimbabwe, Nyovanie’s family cheers him on.

“Even though [my parents] never went to college, they were very supportive and encouraged me to push myself, to do better, and to do well in school, and to apply to the best programs that I could find,” Nyovanie says.

June 12, 2018 | More

LGO Best Thesis 2018 for Predictive Modeling Project at Massachusetts General Hospital

After the official MIT commencement ceremonies, Thomas Roemer, LGO’s executive director, announced the best thesis winner at LGO’s annual post-graduation celebration. This year’s winner was Jonathan Zanger, who developed a predictive model using machine learning at Massachusetts General Hospital. “The thesis describes breakthrough work at MGH that leverages machine learning and deep clinical knowledge to develop a decision support tool to predict discharges from the hospital in the next 24-48 hours and enable a fundamentally new and more effective discharge process,” said MIT Sloan School of Management Professor Retsef Levi, one of Zanger’s thesis advisors and the LGO management faculty co-director.

Applying MIT knowledge in the real world

Jonathan Zanger won the 2018 LGO best thesis award for his work using machine learning to develop a predictive model for better patient care at MGH.

Zanger, who received his MBA and an SM in Electrical Engineering and Computer Science, conducted his six-month LGO internship project at MGH, which sought to enable a more proactive process of managing the hospital’s bed capacity by identifying which surgical inpatients are likely to be discharged from the hospital in the next 24 to 48 hours. To do this, Zanger grouped patients by surgery type and worked to define and formalize milestones on the pathway to post-operative recovery, including barriers that may postpone a patient’s discharge. Finally, he used a deep learning algorithm that draws on more than 900 features and was trained on 3,000 types of surgeries and 20,000 surgical discharges. LGO thesis advisor Retsef Levi stated that “in my view, this thesis work represents a league of its own in terms of technical depth, creativity and potential impact.” Zanger’s model produced correct predictions for 97 percent of patients discharged within 48 hours, helping the hospital limit overcrowding and operational disruptions and anticipate capacity crises.
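The article describes the model only at a high level. As a rough, hypothetical illustration of the general shape of such a classifier (per-patient features in, probability of discharge within 48 hours out), here is a minimal sketch. The feature names, synthetic data, and plain logistic regression are invented stand-ins, not details from the thesis, which uses a far richer deep learning model with over 900 features.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression (weights + bias)."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j in range(n_features):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

# Synthetic patients: [days_since_surgery, milestones_completed, active_barriers],
# each scaled to [0, 1]. Labels follow a toy rule: likely discharged soon if
# recovery is far along and no barriers remain.
random.seed(0)
X, y = [], []
for _ in range(200):
    days = random.uniform(0, 10)
    milestones = random.uniform(0, 1)
    barriers = random.randint(0, 3)
    label = 1 if (days > 3 and milestones > 0.6 and barriers == 0) else 0
    X.append([days / 10, milestones, barriers / 3])
    y.append(label)

w, b = train_logistic(X, y)

def discharge_probability(patient):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, patient)) + b)

ready = discharge_probability([0.8, 0.9, 0.0])      # far along, no barriers
not_ready = discharge_probability([0.1, 0.2, 1.0])  # early recovery, barriers
print(f"ready={ready:.2f} not_ready={not_ready:.2f}")
```

A real system of this kind would draw its features from clinical records and be validated against held-out discharges; the point here is only the input/output shape of a discharge-probability model.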

A group of faculty, alumni, and staff review the theses each year to determine the winner. Thomas Sanderson (LGO ’14), an LGO alumnus and thesis reviewer, stated that Zanger’s thesis showed “tremendous extensibility and smart solution architecture decisions to make future work easy. Obvious and strong overlap of engineering, business, and industry. This is potentially revolutionary work; this research advances the current state of the art well beyond anything currently available for large hospital bed management with obvious and immediate impact on healthcare costs and patient outcomes. The theory alone is hugely noteworthy but the fact that the work was also piloted during the thesis period is even more impressive. LGO has done a lot of great work at MGH but this is potentially the widest reaching and most important.”

Zanger, who earned his undergraduate degree in Physics, Computer Science and Mathematics from the Hebrew University of Jerusalem, will return to Israel after graduation to resume service as an Israel Defense Forces officer.

June 11, 2018 | More

A graphene roll-out

LGO thesis advisor and MIT mechanical engineering professor John Hart led a team to develop a continuous manufacturing process that produces long strips of high-quality graphene.

The team’s results are the first demonstration of an industrial, scalable method for manufacturing high-quality graphene that is tailored for use in membranes that filter a variety of molecules, including salts, larger ions, proteins, or nanoparticles. Such membranes should be useful for desalination, biological separation, and other applications.

“For several years, researchers have thought of graphene as a potential route to ultrathin membranes,” says John Hart, associate professor of mechanical engineering and director of the Laboratory for Manufacturing and Productivity at MIT. “We believe this is the first study that has tailored the manufacturing of graphene toward membrane applications, which require the graphene to be seamless, cover the substrate fully, and be of high quality.”

Hart is the senior author on the paper, which appears online in the journal Applied Materials and Interfaces. The study includes first author Piran Kidambi, a former MIT postdoc who is now an assistant professor at Vanderbilt University; MIT graduate students Dhanushkodi Mariappan and Nicholas Dee; Sui Zhang of the National University of Singapore; Andrey Vyatskikh, a former student at the Skolkovo Institute of Science and Technology who is now at Caltech; and Rohit Karnik, an associate professor of mechanical engineering at MIT.

Growing graphene

For many researchers, graphene is ideal for use in filtration membranes. A single sheet of graphene resembles atomically thin chicken wire and is composed of carbon atoms joined in a pattern that makes the material extremely tough and impervious to even the smallest atom, helium.

Researchers, including Karnik’s group, have developed techniques to fabricate graphene membranes and precisely riddle them with tiny holes, or nanopores, the size of which can be tailored to filter out specific molecules. For the most part, scientists synthesize graphene through a process called chemical vapor deposition, in which they first heat a sample of copper foil and then deposit onto it a combination of carbon and other gases.

Graphene-based membranes have mostly been made in small batches in the laboratory, where researchers can carefully control the material’s growth conditions. However, Hart and his colleagues believe that if graphene membranes are ever to be used commercially they will have to be produced in large quantities, at high rates, and with reliable performance.

“We know that for industrialization, it would need to be a continuous process,” Hart says. “You would never be able to make enough by making just pieces. And membranes that are used commercially need to be fairly big — some so big that you would have to send a poster-wide sheet of foil into a furnace to make a membrane.”

A factory roll-out

The researchers set out to build an end-to-end, start-to-finish manufacturing process to make membrane-quality graphene.

The team’s setup combines a roll-to-roll approach — a common industrial approach for continuous processing of thin foils — with the common graphene-fabrication technique of chemical vapor deposition, to manufacture high-quality graphene in large quantities and at a high rate. The system consists of two spools, connected by a conveyor belt that runs through a small furnace. The first spool unfurls a long strip of copper foil, less than 1 centimeter wide. When it enters the furnace, the foil is fed through first one tube and then another, in a “split-zone” design.

While the foil rolls through the first tube, it heats up to a certain ideal temperature, at which point it is ready to roll through the second tube, where the scientists pump in a specified ratio of methane and hydrogen gas, which are deposited onto the heated foil to produce graphene.

“Graphene starts forming in little islands, and then those islands grow together to form a continuous sheet,” Hart says. “By the time it’s out of the oven, the graphene should be fully covering the foil in one layer, kind of like a continuous bed of pizza.”

As the graphene exits the furnace, it’s rolled onto the second spool. The researchers found that they were able to feed the foil continuously through the system, producing high-quality graphene at a rate of 5 centimeters per minute. Their longest run lasted almost four hours, during which they produced about 10 meters of continuous graphene.

“If this were in a factory, it would be running 24-7,” Hart says. “You would have big spools of foil feeding through, like a printing press.”

Flexible design

Once the researchers produced graphene using their roll-to-roll method, they unwound the foil from the second spool and cut small samples out. They cast the samples with a polymer mesh, or support, using a method developed by scientists at Harvard University, and subsequently etched away the underlying copper.

“If you don’t support graphene adequately, it will just curl up on itself,” Kidambi says. “So you etch copper out from underneath and have graphene directly supported by a porous polymer — which is basically a membrane.”

The polymer covering contains holes that are larger than graphene’s pores, which Hart says act as microscopic “drumheads,” keeping the graphene sturdy and its tiny pores open.

The researchers performed diffusion tests with the graphene membranes, flowing a solution of water, salts, and other molecules across each membrane. They found that overall, the membranes were able to withstand the flow while filtering out molecules. Their performance was comparable to graphene membranes made using conventional, small-batch approaches.

The team also ran the process at different speeds, with different ratios of methane and hydrogen gas, and characterized the quality of the resulting graphene after each run. They drew up plots to show the relationship between graphene’s quality and the speed and gas ratios of the manufacturing process. Kidambi says that if other designers can build similar setups, they can use the team’s plots to identify the settings they would need to produce a certain quality of graphene.
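As a hedged illustration of how such process maps might be used, the sketch below interpolates a small grid of quality measurements over roll speed and methane-to-hydrogen ratio to estimate quality at an untried setting. The grid values and axis ranges are invented placeholders, not the team's data.

```python
# Hypothetical process map: quality score on a grid of (roll speed, CH4:H2 ratio).
# Bilinear interpolation estimates quality between measured operating points.

speeds = [1.0, 3.0, 5.0]   # roll speed, cm/min (placeholder values)
ratios = [0.5, 1.0, 2.0]   # CH4:H2 gas ratio (placeholder values)
# quality[i][j] = measured quality at (speeds[i], ratios[j]), on a 0..1 scale
quality = [
    [0.95, 0.90, 0.80],
    [0.90, 0.85, 0.70],
    [0.75, 0.65, 0.50],
]

def interpolate(speed, ratio):
    """Bilinear interpolation on the (speed, ratio) -> quality grid."""
    def bracket(vals, x):
        # Find the grid interval containing x and the fractional position in it.
        for i in range(len(vals) - 1):
            if vals[i] <= x <= vals[i + 1]:
                return i, (x - vals[i]) / (vals[i + 1] - vals[i])
        raise ValueError("setting outside the mapped range")

    i, ts = bracket(speeds, speed)
    j, tr = bracket(ratios, ratio)
    top = quality[i][j] * (1 - tr) + quality[i][j + 1] * tr
    bot = quality[i + 1][j] * (1 - tr) + quality[i + 1][j + 1] * tr
    return top * (1 - ts) + bot * ts

print(round(interpolate(2.0, 1.0), 3))
```

In practice a designer would populate the grid from characterization runs like the team's, then read off (or interpolate) the speed and gas ratio that meet a target quality for a given application.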

“The system gives you a great degree of flexibility in terms of what you’d like to tune graphene for, all the way from electronic to membrane applications,” Kidambi says.

Looking forward, Hart says he would like to find ways to include polymer casting and other steps that are currently performed by hand in the roll-to-roll system.

“In the end-to-end process, we would need to integrate more operations into the manufacturing line,” Hart says. “For now, we’ve demonstrated that this process can be scaled up, and we hope this increases confidence and interest in graphene-based membrane technologies, and provides a pathway to commercialization.”

May 18, 2018 | More

Sloan

Setting the record straight on lean

Ask what the term “lean management” means, and you might see the blood drain from someone’s face. “Lay-offs, cost-cutting, downsizing,” they might spit back at you.

But that’s not what the system — modeled after the lean manufacturing practices pioneered in the 1970s by Toyota — is about, according to lean experts John Shook, chairman and CEO of the Lean Enterprise Institute, and Jamie Bonini, vice president of the Toyota Production System Support Center. They recently joined MIT Sloan adjunct associate professor Zeynep Ton’s class to elaborate.

“[Lean management] relates to how we think about the way w

April 16, 2019 | More

To jump-start America, invest (a lot) in science

For nearly 50 years, much of America’s growth has been concentrated in a handful of large and already prosperous coastal cities, widening the nation’s economic and cultural divides. Reversing this trend — and returning to the postwar prosperity that benefited far more Americans — will require an ambitious plan to boost public investment in scientific research and development in dozens of communities.

That’s the word from two MIT economists, Jonathan Gruber and Simon Johnson, in “Jump-Starting America: How Breakthrough Science Can Revive Economic Growth and the American Dream,” out April 9.

April 12, 2019 | More

The unintended consequences of automated vehicles

The promise of automated vehicles must be managed with policies that control demand for more driving and protect public transit, researchers say.

The battery-powered sedan broadcasts a request to merge into the next lane, and other nearby vehicles automatically adjust as it glides over and exits the highway. Inside, the passenger finishes a quick email check, then clicks on a monitor to catch up with the day’s news.

March 29, 2019 | More

Emotion AI, explained

As artificial intelligence learns to interpret and respond to human emotion, senior leaders should consider how it could change their industries and play a critical role in their firms.

What did you think of the last commercial you watched? Was it funny? Confusing? Would you buy the product? You might not remember or know for certain how you felt, but increasingly, machines do. New artificial intelligence technologies are learning and recognizing human emotions, and using that knowledge to improve everything from marketing campaigns to health care.

These technologies are referred to as “emotion AI,” a subset of artificial intelligence …

March 22, 2019 | More

What IKEA and the Instant Pot can teach us about competition

If you want to stay ahead of the market, find a core problem and solve it. Don’t cling to a solution to an outdated need.

Swedish company IKEA bills itself as a furniture retailer — and for some, a purveyor of delicious meatballs — but ask MIT professor Sanjay Sarma, and he’ll tell you IKEA is actually in the business of selling holes.

“You never need a drill to assemble IKEA furniture,” the mechanical engineering professor said during the recent MIT Innovations in Management Conference. “The fact of the matter is, people want a job done. People want holes, they don’t want drills.”

Referencing Harvard professor Theodore Levitt, Sarma pointed out that when someone buys a drill, they don’t actually want the tool, they want what it can do.

March 22, 2019 | More

The one thing missing from your board of directors — and why it matters

It’s a rare organization that can’t find a digital opportunity to harness. A tech-forward board of directors can help you find yours.

Steaming cups of coffee and baked goods smeared with frosting might not be the first products you think of as having the potential to be transformed by digital technologies, but thinking outside the Box O’ Joe — and beyond the physical world — allowed one coffee and baked goods giant to do exactly that.

In 2012, Dunkin’ Brands — owner of Dunkin’ (formerly Dunkin’ Donuts) and Baskin-Robbins — launched a mobile phone app designed to allow customers to pay from their devices and connect to the company’s DD Perks rewards program.

The app solved a range of problems plaguing the company’s vast network of franchisees by shaving time off of service speeds, increasing cross-selling, raising the size of an average customer’s order, and making payment processing easier by reducing credit card fees. It also gave the company an avenue to acquire better data about customer preferences and create deeper relationships with them.

March 22, 2019 | More

These are the cyberthreats lurking in your supply chain

You’ve got firewalls in place. You have a team dedicated to keeping a careful watch over your networks, 24/7. Everything is under two-factor authentication. Your cyber defenses must be bulletproof.

Then your screen goes dark, and it doesn’t light back up. Soon, your company is offline entirely, and you’re losing money — fast. You didn’t account for the contractor you hired to upgrade your point-of-sale network last month, which required accessing your systems — or what the state of their own cybersecurity looked like.

February 22, 2019 | More

3 new courses cover advances every business should be tracking

MIT Sloan students aren’t the only ones who take interest when new courses are added — they’re often a barometer of what’s about to bubble up in business.

Here’s what MIT Sloan faculty are drilling down on in three new and updated courses for spring 2019 — and why it matters to business leaders.

February 1, 2019 | More

Bye-bye ivory tower: Innovation needs an ecosystem to thrive

If your organization is looking to innovate more in 2019 (and who isn’t?), we have good news and bad for you. The good news: The world is increasingly flat, to riff off the title of Thomas L. Friedman’s seminal 2005 book — meaning innovation isn’t confined to just Silicon Valley anymore.

January 11, 2019 | More

A calm before the AI productivity storm

Despite all the advances in technology designed to streamline work, output per hour has actually been leveling off since around 2006. While some believe that’s the new normal for productivity, new research from MIT Sloan economist Erik Brynjolfsson and his colleagues shows it may just be a temporary lull.

January 11, 2019 | More

Engineering

Engineers develop concept for hybrid heavy-duty trucks

Long-haul trucks with electric motors combined with gas-alcohol engines could slash pollution levels and greenhouse gases.

Heavy-duty trucks, such as the 18-wheelers that transport many of the world’s goods from farm or factory to market, are virtually all powered by diesel engines. They account for a significant portion of worldwide greenhouse gas emissions, but little has been done so far to curb their climate-change-inducing exhaust.

Now, researchers at MIT have devised a new way of powering these trucks that could drastically curb pollution, increase efficiency, and reduce or even eliminate their net greenhouse gas emissions.

The concept involves using a plug-in hybrid engine system, in which the truck would be primarily powered by batteries, but with a spark ignition engine (instead of a diesel engine). That engine, which would allow the trucks to conveniently travel the same distances as today’s conventional diesel trucks, would be a flex-fuel model that could run on pure gasoline, pure alcohol, or blends of these fuels.

While the ultimate goal would be to power trucks entirely with batteries, the researchers say, this flex-fuel hybrid option could provide a way for such trucks to gain early entry into the marketplace by overcoming concerns about limited range, cost, or the need for excessive battery weight to achieve longer range.

The new concept was developed by MIT Energy Initiative and Plasma Science and Fusion Center research scientist Daniel Cohn and principal research engineer Leslie Bromberg, who are presenting it at the annual SAE International conference on April 11.

“We’ve been working for a number of years on ways to make engines for cars and trucks cleaner and more efficient, and we’ve been particularly interested in what you can do with spark ignition [as opposed to the compression ignition used in diesels], because it’s intrinsically much cleaner,” Cohn says. Compared to a diesel engine vehicle, a gasoline-powered vehicle produces only a tenth as much nitrogen oxide (NOx) pollution, a major component of air pollution.

In addition, by using a flex-fuel configuration that allows it to run on gasoline, ethanol, methanol, or blends of these, such engines have the potential to emit far less greenhouse gas than pure gasoline engines do, and the incremental cost for the fuel flexibility is very small, Cohn and Bromberg say. If run on pure methanol or ethanol derived from renewable sources such as agricultural waste or municipal trash, the net greenhouse gas emissions could even be zero. “It’s a way of making use of a low-greenhouse-gas fuel” when it’s available, “but always having the option of running it with gasoline” to ensure maximum flexibility, Cohn says.

While Tesla Motors has announced it will be producing an all-electric heavy-duty truck, Cohn says, “we think that’s going to be very challenging, because of the cost and weight of the batteries” needed to provide sufficient range. Meeting the expected driving range of conventional diesel trucks, Cohn and Bromberg estimate, would require somewhere between 10 and 15 tons of batteries. “That’s a significant fraction of the payload” such a truck could otherwise carry, Cohn says.

To get around that, “we think that the way to enable the use of electricity in these vehicles is with a plug-in hybrid,” he says. The engine they propose for such a hybrid is a version of one the two researchers have been working on for years, developing a highly efficient, flexible-fuel gasoline engine that would weigh far less, be more fuel-efficient, and produce a tenth as much air pollution as the best of today’s diesel-powered vehicles.

Cohn and Bromberg did a detailed analysis of both the engineering and the economics of what would be needed to develop such an engine to meet the needs of existing truck operators. In order to match the efficiency of diesels, a mix of alcohol with the gasoline, or even pure alcohol, can be used, and this can be processed using renewable energy sources, they found. Detailed computer modeling of a whole range of desired engine characteristics, combined with screening of the results using an artificial intelligence system, yielded clear indications of the most promising pathways and showed that such substitutions are indeed practically and financially feasible.

In both the present diesel and the proposed flex-fuel vehicles, the emissions are measured at the tailpipe, after a variety of emissions-control systems have done their work in both cases, so the comparison is a realistic measure of real-world emissions. The combination of a hybrid drive and flex-fuel engine is “a way to enable the introduction of electric drive into the heavy truck sector, by making it possible to meet range and cost requirements, and doing it in a way that’s clean,” Cohn says.

Bromberg says that gasoline engines have become much more efficient and clean over the years, and the relative cost of diesel fuel has gone up, so that the cost advantages that led to the near-universal adoption of diesels for heavy trucking no longer prevail. “Over time, gas engines have become more and more efficient, and they have an inherent advantage in producing less air pollution,” he says. And by using the engine in a hybrid system, it can always operate at its optimum speed, maximizing its efficiency.

Methane is an extremely potent greenhouse gas, so if it can be diverted to produce a useful fuel by converting it to methanol through a simple chemical process, “that’s one of the most attractive ways to make a clean fuel,” Bromberg says. “I think the alcohol fuels overall have a lot of promise.”

Already, he points out, California has plans for new regulations on truck emissions that are very difficult to meet with diesel engine vehicles. “We think there’s a significant rationale for trucking companies to go to gasoline or flexible fuel,” Cohn says. “The engines are cheaper, exhaust treatment systems are cheaper, and it’s a way to ensure that they can meet the expected regulations. And combining that with electric propulsion in a hybrid system, given an ever-cleaner electric grid, can further reduce emissions and pollution from the trucking sector.”

Pure electric propulsion for trucks is the ultimate goal, but today’s batteries don’t make that a realistic option yet, Cohn says: “Batteries are great, but let’s be realistic about what they can provide.”

And the combination they propose can address two major challenges at once, they say. “We don’t know which is going to be stronger, the desire to reduce greenhouse gases, or the desire to reduce air pollution.” In the U.S., climate change may be the bigger push, while in India and China air pollution may be more urgent, but “this technology has value for both challenges,” Cohn says.

The research was supported by the MIT Arthur Samberg Energy Innovation Fund.

April 9, 2019 | More

MIT and NASA engineers demonstrate a new kind of airplane wing

Assembled from tiny identical pieces, the wing could enable lighter, more energy-efficient aircraft designs.

A team of engineers has built and tested a radically new kind of airplane wing, assembled from hundreds of tiny identical pieces. The wing can change shape to control the plane’s flight, and could provide a significant boost in aircraft production, flight, and maintenance efficiency, the researchers say.

The new approach to wing construction could afford greater flexibility in the design and manufacturing of future aircraft. The new wing design was tested in a NASA wind tunnel and is described today in a paper in the journal Smart Materials and Structures, co-authored by research engineer Nicholas Cramer at NASA Ames in California; MIT alumnus Kenneth Cheung SM ’07 PhD ’12, now at NASA Ames; Benjamin Jenett, a graduate student in MIT’s Center for Bits and Atoms; and eight others.

Instead of requiring separate movable surfaces such as ailerons to control the roll and pitch of the plane, as conventional wings do, the new assembly system makes it possible to deform the whole wing, or parts of it, by incorporating a mix of stiff and flexible components in its structure. The tiny subassemblies, which are bolted together to form an open, lightweight lattice framework, are then covered with a thin layer of polymer material similar to that of the framework.

The result is a wing that is much lighter, and thus much more energy efficient, than those with conventional designs, whether made from metal or composites, the researchers say. Because the structure, comprising thousands of tiny triangles of matchstick-like struts, is composed mostly of empty space, it forms a mechanical “metamaterial” that combines the structural stiffness of a rubber-like polymer and the extreme lightness and low density of an aerogel.

Jenett explains that each phase of a flight — takeoff and landing, cruising, maneuvering, and so on — has its own set of optimal wing parameters, so a conventional wing is necessarily a compromise that is not optimized for any of them, and therefore sacrifices efficiency. A wing that is constantly deformable could provide a much better approximation of the best configuration for each stage.

While it would be possible to include motors and cables to produce the forces needed to deform the wings, the team has taken this a step further and designed a system that automatically responds to changes in its aerodynamic loading conditions by shifting its shape — a sort of self-adjusting, passive wing-reconfiguration process.

“We’re able to gain efficiency by matching the shape to the loads at different angles of attack,” says Cramer, the paper’s lead author. “We’re able to produce the exact same behavior you would do actively, but we did it passively.”

This is all accomplished by the careful design of the relative positions of struts with different amounts of flexibility or stiffness, designed so that the wing, or sections of it, bend in specific ways in response to particular kinds of stresses.

Cheung and others demonstrated the basic underlying principle a few years ago, producing a wing about a meter long, comparable to the size of typical remote-controlled model aircraft. The new version, about five times as long, is comparable in size to the wing of a real single-seater plane and could be easy to manufacture.

While this version was hand-assembled by a team of graduate students, the repetitive process is designed to be easily accomplished by a swarm of small, simple autonomous assembly robots. The design and testing of the robotic assembly system is the subject of an upcoming paper, Jenett says.

The individual parts for the previous wing were cut using a waterjet system, and it took several minutes to make each part, Jenett says. The new system uses injection molding with polyethylene resin in a complex 3-D mold, and produces each part — essentially a hollow cube made up of matchstick-size struts along each edge — in just 17 seconds, he says, which brings it a long way closer to scalable production levels.

“Now we have a manufacturing method,” he says. While there’s an upfront investment in tooling, once that’s done, “the parts are cheap,” he says. “We have boxes and boxes of them, all the same.”

The resulting lattice, he says, has a density of 5.6 kilograms per cubic meter. By way of comparison, rubber has a density of about 1,500 kilograms per cubic meter. “They have the same stiffness, but ours has less than roughly one-thousandth of the density,” Jenett says.

Because the overall configuration of the wing or other structure is built up from tiny subunits, it really doesn’t matter what the shape is. “You can make any geometry you want,” he says. “The fact that most aircraft are the same shape” — essentially a tube with wings — “is because of expense. It’s not always the most efficient shape.” But massive investments in design, tooling, and production processes make it easier to stay with long-established configurations.

Studies have shown that an integrated body and wing structure could be far more efficient for many applications, he says, and with this system those could be easily built, tested, modified, and retested.

“The research shows promise for reducing cost and increasing the performance for large, lightweight, stiff structures,” says Daniel Campbell, a structures researcher at Aurora Flight Sciences, a Boeing company, who was not involved in this research. “Most promising near-term applications are structural applications for airships and space-based structures, such as antennas.”

The new wing was designed to be as large as could be accommodated in NASA’s high-speed wind tunnel at Langley Research Center, where it performed even a bit better than predicted, Jenett says.

The same system could be used to make other structures as well, Jenett says, including the wing-like blades of wind turbines, where the ability to do on-site assembly could avoid the problems of transporting ever-longer blades. Similar assemblies are being developed to build space structures, and could eventually be useful for bridges and other high performance structures.

The team included researchers at Cornell University, the University of California at Berkeley and at Santa Cruz, NASA Langley Research Center, Kaunas University of Technology in Lithuania, and Qualified Technical Services, Inc., in Moffett Field, California. The work was supported by the NASA ARMD Convergent Aeronautics Solutions Program (MADCAT Project) and the MIT Center for Bits and Atoms.

April 1, 2019 | More

MIT celebrates 50th anniversary of historic moon landing

Symposium featuring former astronauts and other Apollo mission luminaries examines the program’s legacy.

On Sept. 12, 1962, in a speech given in Houston to pump up support for NASA’s Apollo program, President John F. Kennedy shook a stadium crowd with the now-famous quote: “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.”

As he delivered these lines, engineers in MIT’s Instrumentation Laboratory were already taking up the president’s challenge. One year earlier, NASA had awarded MIT the first major contract of the Apollo program, charging the Instrumentation Lab with developing the spacecraft’s guidance, navigation, and control systems that would shepherd astronauts Michael Collins, Buzz Aldrin, and Neil Armstrong to the moon and back.

On July 20, 1969, the hard work of thousands paid off, as Apollo 11 touched down on the lunar surface, safely delivering Armstrong and Aldrin ScD ’63 as the first people to land on the moon.

On Wednesday, MIT’s Department of Aeronautics and Astronautics (AeroAstro) celebrated the 50th anniversary of this historic event with the daylong symposium “Apollo 50+50,” featuring former astronauts, engineers, and NASA administrators who examined the legacy of the Apollo program, and MIT faculty, students, industry leaders, and alumni who envisioned what human space exploration might look like in the next 50 years.

In welcoming a large audience to Kresge Auditorium, some of whom sported NASA regalia for the occasion, Daniel Hastings, head of AeroAstro, said of today’s prospects for space exploration: “It’s the most exciting time since Armstrong and Aldrin landed on the moon.”

The event kicked off three days of programming for MIT Space Week, which also included the Media Lab’s “Beyond the Cradle: Envisioning a New Space Age” on March 14, and the student-led “New Space Age Conference” on March 15.

“We could press on”

As a “baby boomer living through Apollo,” retired astronaut Charles Bolden, NASA’s 12th administrator, said the Apollo program illustrated “how masterful we were at overcoming adversity.” In a keynote address that opened the day’s events, Bolden reminded the audience that, at the time the ambitious program got underway in the 1960s, the country was in the violent thick of the civil rights movement.

“We were killing each other in the streets,” Bolden said. “And yet we had an agency like NASA, and a small group of people, who were able to bear through everything and land on the moon. … We could recognize there were greater things we could do as a people, and we could press on.”

For MIT’s part, the push began with a telegram on Aug. 9, 1961, to Charles Stark Draper, director of the Instrumentation Laboratory, notifying him that NASA had selected the MIT lab “to develop the guidance navigation system of the Project Apollo spacecraft.” Draper, who was known widely as “Doc,” famously assured NASA of MIT’s work by volunteering himself as a crew member on the mission, writing to the agency that “if I am willing to hang my life on our equipment, the whole project will surely have the strongest possible motivation.”

This of course proved unnecessary, and Draper went on to lead the development of the guidance system with “unbounded optimism,” as his former student and colleague Lawrence Young, the MIT Apollo Program Professor, recalled in his remarks.

“We owe the lighting of our fuse to Doc Draper,” Young said.

At the time that MIT took on the Apollo project, the Instrumentation Laboratory, later renamed Draper Laboratory, took up a significant footprint, with 2,000 people and 15 buildings on campus, dedicated largely to the lunar effort.

“The Instrumentation Lab dwarfed the [AeroAstro] department,” said Hastings, joking, “it was more like the department was a small pimple on the Instrumentation Lab.”

Apollo remembered

In a highlight of the day’s events, NASA astronauts Walter Cunningham (Apollo 7) and Charles Duke SM ’64 (Apollo 16), and MIT Instrumentation Laboratory engineers Donald Eyles and William Widnall ’59, SM ’62 — all from the Apollo era — took the stage to reminisce about some of the technical challenges and emotional moments that defined the program.

One of the recurring themes of their conversation was the observation that things simply got done faster back then. For instance, Duke remarked that it took just 8.5 years from when Kennedy first called for the mission, to when Armstrong’s boots hit the lunar surface.

“I would argue the proposal for such a mission would take longer [today],” Duke said to an appreciative rumble from the audience.

The Apollo Guidance Computer, developed at MIT, weighed 70 pounds, consumed 55 watts of power — half the wattage of a regular lightbulb — and took up less than 1 cubic foot inside the spacecraft. The system was one of the first digital flight computers, and one of the first computers to use integrated circuits.

Eyles and Widnall recalled in detail the technical efforts that went into developing the computer’s hardware and software. “If you’re picturing [the computer code] on a monitor, you’d be wrong,” Eyles told the audience. “We were writing the program on IBM punch cards. That clunking mechanical sound of the key-punch machine was the soundtrack to creating the software.”

Written out, that code famously amounted to a stack of paper as tall as lead software engineer Margaret Hamilton — who was not able to participate in Wednesday’s panel but attended the symposium dinner that evening.

In the end, the Apollo Guidance Computer succeeded in steering 15 space flights, including nine to the moon, and six lunar landings. That’s not to say that the system didn’t experience some drama along the way, and Duke, who was the capsule communicator, or CAPCOM, for Apollo 11, remembers having to radio up to the spacecraft during the now-famous rocky landing.

“When I heard the first alarm go off during the braking phase, I thought we were dead in the water,” Duke said of the first in a series of alerts that the Apollo astronauts reported, indicating that the computer was overloaded, during the most computationally taxing phase of the mission. The spacecraft was several miles off course and needed to fly over a “boulder field” and land within 60 seconds, or risk running out of fuel.

Flight controllers in Houston’s Mission Control Center determined that if nothing else went wrong, the astronauts, despite the alarms, could proceed with landing.

“Tension was high,” Duke said of the moment. “You didn’t want to touch down on a boulder and blow a nozzle, and spoil your whole day.”

When the crew finally touched down on the Sea of Tranquility, with Armstrong’s cool report that “the Eagle has landed,” Duke, too wound-up to properly verbalize the callback “Tranquility,” recalls: “I was so excited … it came out as ‘Twang,’ or something like that. The tension — it was like popping a balloon.”

Since the Apollo era, NASA has launched astronauts on numerous missions, many of whom are MIT graduates. On Wednesday, 13 of those graduates came onstage to be recognized along with the Apollo crew.

In introducing them to the audience, Jeffrey Hoffman, a former astronaut and now AeroAstro professor of the practice, noted MIT’s significant representation in the astronaut community. For instance, of the 24 spacewalks performed across the five missions to repair the Hubble Space Telescope, 13 were carried out by MIT graduates.

“That’s pretty cool,” Hoffman said.

On the horizon

The Apollo moon rocks that were brought back to Earth have “evolved our understanding of how the moon formed,” said Maria Zuber, MIT’s vice president for research and the E.A. Griswold Professor of Geophysics in the Department of Earth, Atmospheric and Planetary Sciences. These rocks “vanquished” the idea that the moon originally formed as a cold assemblage of rocks and “foo foo dust,” she said.

Instead, after carefully analyzing samples from Apollo 11 and other missions, scientists at MIT and elsewhere have found that the moon was a dynamic body, with a surface that at one time was entirely molten, and a metallic core, or “dynamo,” powering an early, lunar magnetic field. Even more provocative was the finding that the moon was not in fact “bone-dry,” but actually harbored water — an idea that Zuber said was virtually unpublishable until an MIT graduate reported evidence of water in Apollo samples, after which the floodgates opened in support of the idea.

To consider the next 50 years of space exploration, the MIT symposium featured a panel of faculty members — Paulo Lozano, Danielle Wood, Richard Binzel, and Sara Seager — who highlighted, respectively, the development of tiny thrusters to power miniature spacecraft; an effort to enable wider access to microgravity missions; an MIT student-designed mission (REXIS) that is currently analyzing the near-Earth asteroid Bennu; and TESS and ASTERIA, satellite missions that are currently in orbit, looking for planets and possibly, life, outside our solar system.

Industry leaders also weighed in on the growing commercialization of space exploration, in a panel featuring MIT alums who currently head major aerospace companies.

Keoki Jackson, chief technology officer of Lockheed Martin, noted the pervasiveness of space-based technologies, such as GPS-dependent apps for everything from weather and news, to Uber.

“[Commercial enterprises] have made space a taken-for-granted part of life,” said Jackson, noting later in the panel that in 2015, 1 billion GPS devices had been sold around the world. “This shows you what can happen exponentially when you come up with something truly enabling.”

“The challenge we face is talent, and in particular, diversity,” said John Langford, CEO and founder of Aurora Flight Sciences, who noted the panel’s all-male participants as an example. “It’s an industry-wide challenge. We’re working to reform ourselves, as we move from the brigade-type technologies that we grew up with, to incorporating technologies such as computer technology and artificial intelligence.”

Future missions

In a glimpse of what the future of space exploration might hold, MIT students presented lightning talks on a range of projects, including a custom-designed drill to excavate ice on Mars, a system that makes oxygen on Mars to fuel return missions to Earth, and a plan to send CubeSats around the world to monitor water vapor as a measure of climate change.

Audience members voted online for the best pitch, which ultimately went to Raichelle Aniceto and her presentation of a CubeSat-enabled laser communications system designed to transmit large amounts of data from the moon to Earth in just five minutes.

In the last keynote address of the symposium, Thomas Zurbuchen, associate administrator for NASA’s Science Mission Directorate, told the audience that there is still a lot of research to be done on the moon, which he said is changing, as evidenced by new craters that have formed in the last 50 years.

“The moon of the Apollo era is not the same moon of today,” said Zurbuchen, who noted that just this week, NASA announced it will open previously unopened samples of soil collected by the Apollo missions.

In closing the symposium, Dava Newman, the Apollo Program Professor of Astronautics and former NASA deputy administrator, envisioned a future dedicated to sending humans back to the moon, and ultimately to Mars.

“I’m a rocket scientist. I got here because of Apollo, and Eleanor Roosevelt said it best: Believe in the beauty of your dreams,” Newman said. “The challenge is, within 50 years, to be boots on Mars. I think we have the brains and the doers and inspiration to really make that happen.”

March 15, 2019 | More

CEE event showcases multidisciplinary opportunities

Ninth annual Research Speed Dating event fosters intradepartmental collaboration and facilitates discussion of future efforts to solve global issues.

The Department of Civil and Environmental Engineering’s ninth annual Research Speed Dating event featured a well-rounded display of research from both civil and environmental engineering disciplines, from human microbiomes, carbon partitioning of plants, climate change, and urban pollution to recovering from major storm damage, algorithms for car-sharing networks, and integrating autonomy into transportation systems. The Feb. 15 event brought together a wide range of faculty, research scientists, postdocs, graduate students, and undergraduates to present their research findings.

Assistant Professor Tami Lieberman kicked off the event by highlighting her research that investigates how to add a microbe to an already established microbiome. Lieberman conducts her research through the lens of evolution, using DNA sequencing to identify how bacteria spread between people, and how they evolve within humans as they coexist with other adapted mutations.

Lieberman, who is one of the department’s newest faculty members, explained that she is searching for expertise within lipid characterization, directed evolution, DNA, large data, and high-throughput microbiology. Her lab plans to explore many different areas such as the colonization of new bacterial strains, disease-specific adaptation, colon cancer, host-specificity, and immune responses to bacteria and probiotics.

Switching gears from the world of microbiomes to the biosphere was PhD student Josh Moss in Professor Jesse Kroll’s lab. The Kroll lab studies atmospheric chemistry, and Moss’ research is concerned with various aspects of urban pollution, such as how smog forms, the reactions of certain chemicals in the atmosphere, and secondary organic aerosol.

“When discussing smog, we talk about it in terms of secondary organic aerosol, and look at what humans and the biosphere emit as primary gases that react in the atmosphere with oxidants such as OH radical and ozone to form secondary gases. They tend to either form new particles or condense onto existing particles to create smog, the mixture of particles and gases that forms in the presence of UV light,” Moss explained.

Incorporating computer modeling in conjunction with physical experiments in the lab has elicited many exciting opportunities to explore new avenues within his discipline, Moss said.

Moving the discussion from urban pollution to plant biology was Assistant Professor Dave Des Marais, who gave his talk on how plants respond to environmental stressors. He discussed how climate change is affecting growing seasons, and said he is seeking ways to better understand these effects. He explained that there is a lot of opportunity for collaboration and research within this field, including the chance to work with a lab in Israel to study numerous variables over time.

Incoming Gilbert W. Winslow Assistant Professor Cathy Wu, who will officially begin her position in CEE this June, switched gears and introduced the audience to the world of transportation. Wu’s research is centered around the integration of autonomy into transportation systems.

She said she aims to challenge conventional transportation systems by looking at the ways in which technology will influence certain systems for the better, making a difference in the long run. Wu came to the realization that researchers do not have a strong understanding of the potential impact autonomous vehicles could have on society.

“These vehicles can potentially provide access to one third of the population including the youth, the elderly, and the disabled. So, I set out and studied robotics, and then became determined to understand the impact of autonomous vehicles on the transportation system,” said Wu.

Wu’s interest in transportation continues to grow. “In the U.S., we have 37,000 traffic accidents each year, it is the leading cause of death in young people, and a vast majority of these accidents are caused by human error,” she explained. “Additionally, we waste 7 billion hours from people sitting in traffic each year, and more than a quarter of greenhouse gas emissions comes from transportation.”

Wu is investigating transportation through deep reinforcement learning (learning a policy in order to maximize reward), simulations, and modular “LEGO block” traffic scenarios. Moving forward, she is interested in two technical aspects, reliability and scalability, in terms of the types of decisions that can be made for urban systems. “There are so many rich perspectives in this department, and I am excited to see the interplay of this when it comes to decision-making,” concluded Wu.
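The phrase "learning a policy to maximize reward" can be illustrated with a toy reinforcement-learning loop. The sketch below is a scaled-down, invented example (the three actions, their rewards, and the learning rate are all hypothetical, not anything from Wu's lab):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy setting: a controller picks one of three driving behaviors;
# higher reward stands in for smoother traffic flow.
rewards = np.array([0.1, 1.0, 0.3])

logits = np.zeros(3)                         # policy parameters
for _ in range(3000):
    probs = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(3, p=probs)               # sample an action from the policy
    r = rewards[a] + rng.normal(0, 0.1)      # noisy reward signal
    grad = -probs
    grad[a] += 1.0                           # score-function (REINFORCE) gradient
    logits += 0.1 * r * grad                 # ascend expected reward
```

Over many noisy trials, the update shifts probability toward the highest-reward action; deep reinforcement learning scales this same principle up with neural networks and far richer state.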

Graduate student Andrew Feldman, who later received the best lightning talk award, presented on water exchange patterns in the soil-plant continuum based on SMAP microwave satellite measurements. Feldman works in the lab of Bacardi and Stockholm Water Foundations Professor Dara Entekhabi, studying soil moisture and vegetation water content observations from NASA’s SMAP satellite in order to evaluate plant water storage changes following rainfall.

New to the event this year was a panel discussion on the future of research in infrastructure and environment, moderated by Leon and Anne Goldberg Professor of Humanities, Sociology, and Anthropology Susan Silbey. Silbey serves as chair of the MIT faculty and is also a professor of behavioral and policy sciences within the MIT Sloan School of Management.

“As a social scientist, I have a logical interest in the physical platform of social life, that is the natural and built environment within which social action takes place,” explained Silbey. “My research aims to understand how and when environmental, health, and safety regulations are more or less successful in managing hazards. More specifically, I study how management systems are introduced to control environmental, health, and safety hazards in laboratories. It seems to me my work and CEE address the same or overlapping phenomena.”

The panel included CEE Professor Colette Heald, Paul M. Cook Career Development Assistant Professor Benedetto Marelli, JR East Professor of Engineering Ali Jadbabaie, and Breene M. Kerr Professor Elfatih Eltahir.

The discussion covered crucial topics such as the challenge of connecting engineering disciplines with the humanities in order to accomplish more as a society. A recurring issue that was brought up by the panel was how the department intends to solve environmental problems.

“The climate is evolving and changing and there are increasing needs for adapting to that new climate, involving infrastructure in a major way. How do we engineer a process of societal adaptation? I think we are in a unique position as a department to address this concern,” said Eltahir.

Heald supported that argument. “A National Academy of Sciences report about the future of engineering explained that the 20th century was regulation-driven, and the 21st century will be challenge-driven,” she said. “I think this is a great way to think about our department.”

Heald and Eltahir agreed that the diversity of the department places CEE in a unique position to capitalize on the expertise available when tackling global issues. “Our diverse knowledge is beneficial when taking on climate change, and we can provide opportunity in various disciplines including infrastructure, systems, and the environment,” said Heald.

Eltahir explained that having a connection to the social sciences is something the department should consider seriously. He emphasized the challenge of consumerism as a cultural and societal problem throughout the U.S., Europe, and China. “We live in a world that has limited resources, and the global society behaves with the implicit assumption that there are no limits,” explained Eltahir.

“I thought it was a very interesting and thought-provoking afternoon; I think lots of departments, labs, and centers should stage such inviting and informative events,” said Silbey.

The final portion of the evening included a digital poster presentation, which encouraged networking and collaboration between researchers in the department. More than 20 students, ranging from first-year students to postdocs, presented their digital posters.

Postdoc Fabiola Sanchez won honorable mention for her poster on the dynamics of the actively growing bacterial community in an estuarine environment over a 24-hour period. Her poster showed that the active community differs from the total community, and that its abundance correlates strongly with chlorophyll levels and the day-night cycle.

The runner-up, senior Stephanie Chin, presented a poster on a convolutional neural network (CNN) approach to analyzing noisy images when training data is limited, for the specific application of traffic surveillance cameras. The approach could help adapt general-purpose models to domain-specific content and applications, such as traffic surveillance images.

PhD student Isabelle Su’s winning poster, “Exploring a Spider Web’s Structure with Sound,” described the use of sonification to explore complex 3-D spider web data through sound. Su created an interactive sonification model that can serve as a versatile data exploration tool, for instance to find patterns in spider webs, as a creative platform, or as a template for sonifying similar data networks.

After the prizes were announced, the night concluded with a dinner for the participants and their colleagues, allowing for further networking opportunities. Inspiring research talks, a panel discussion, and a digital poster session successfully displayed the bright future that CEE has ahead.

“Research speed dating gives the community of students, postdocs, staff, and faculty an opportunity to present their research and collaborative opportunities to their colleagues,” said McAfee Professor of Engineering and department head Markus Buehler. “The panel addressed many issues regarding climate change and efforts that call for numerous disciplines to come together when addressing critical challenges of infrastructure and environment. This event certainly highlighted the inspiring research that intends to solve the imperative large-scale societal issues of today, where science and engineering play a crucial role.”

March 12, 2019 | More

Using machine learning to improve subseasonal climate forecasting Professor of biological engineering Ernest Fraenkel and visiting scientist Judah Cohen win the Sub-Seasonal Climate Forecast Rodeo competition

Using machine learning to improve subseasonal climate forecasting

Judah Cohen, director of seasonal forecasting at AER (Atmospheric and Environmental Research) and visiting scientist in MIT’s Department of Civil and Environmental Engineering, and Ernest Fraenkel, professor of biological engineering at MIT, have won first place in three out of four temperature forecasting categories in the Sub-Seasonal Climate Forecast Rodeo competition hosted by the National Oceanic and Atmospheric Administration and sponsored by the U.S. Bureau of Reclamation.

The MIT researchers, who were joined by Stanford University PhD students Jessica Hwang and Paulo Orenstein and Microsoft researcher Lester Mackey, beat the operational long-range forecasting model used by the U.S. government.

To be eligible for the competition, the teams were required to submit their climate predictions every two weeks between April 17, 2017 and April 18, 2018. The goal was to create a model that the western United States would be able to rely on weeks in advance to help manage water resources and prepare for wildfires and drought.

The competition required that the models achieve a higher mean skill than all competing forecasts and than two benchmarks submitted by the U.S. government: unbiased versions of the physics-based U.S. Climate Forecasting System, and a damped persistence forecast, which assumes that current anomalies gradually decay toward the long-term average.
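As a concrete illustration, one common way to score such forecasts is the anomaly correlation between the predicted and observed anomaly maps. A minimal sketch, not the contest's official scoring code:

```python
import numpy as np

def skill(forecast_anom, observed_anom):
    """Uncentered anomaly correlation between a forecast anomaly map and
    the observed one: 1.0 is a perfect pattern match, -1.0 a perfect miss."""
    f = np.ravel(forecast_anom)
    o = np.ravel(observed_anom)
    return float(f @ o / (np.linalg.norm(f) * np.linalg.norm(o)))
```

Mean skill is then this quantity averaged over every forecast date in the contest period.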

“The current weather prediction models are only able to make forecasts about seven to 10 days in advance. By using machine learning techniques like the one we created for this contest, [the new model] is able to help energy companies and cities prepare for severe storms much farther in advance,” says Cohen.

The dynamic team of experts combined historical weather-pattern recognition and machine learning in order to produce real-time predictions of temperature and precipitation anomalies two to six weeks in advance for the western United States.

“We capitalized on the current availability of ample meteorological records and high-performance computing techniques to blend both physics-based or dynamic models and statistical machine learning approaches in order to extend the skillful forecast horizon from days to weeks,” says Cohen.
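One simple way to blend a dynamic model with a statistical one, sketched here with invented data rather than the team's actual method, is a convex combination whose weight is fit on historical forecasts:

```python
import numpy as np

def best_weight(dyn_hist, ml_hist, obs_hist):
    """Closed-form least-squares weight for blending two past forecast
    series against observations (clipped so the blend stays convex)."""
    a = dyn_hist - ml_hist
    b = obs_hist - ml_hist
    w = float(a @ b / (a @ a))
    return min(max(w, 0.0), 1.0)

def blend(dyn_fcst, ml_fcst, w):
    """Weighted average of a physics-based and a machine-learned forecast."""
    return w * dyn_fcst + (1 - w) * ml_fcst
```

If observations historically sat 70 percent of the way toward the dynamic model, the fitted weight recovers 0.7 and the blend leans on the dynamic forecast accordingly.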

The combination of machine learning techniques and historical weather-pattern recognition is very powerful because it can help the government maximize water resources and prepare for natural disasters or extreme weather conditions.

“There are certainly plans to continue this project, as we have been talking about extending the model to the entire U.S. We demonstrated with this contest that there is potential with this model to leapfrog the forecasting process. It can help provide more accuracy at lower costs in the subseasonal forecasts,” explains Cohen.

March 11, 2019 | More

Giving keener “electric eyesight” to autonomous vehicles On-chip system that detects signals at sub-terahertz wavelengths could help steer driverless cars through fog and dust

Giving keener “electric eyesight” to autonomous vehicles

Autonomous vehicles relying on light-based image sensors often struggle to see through blinding conditions, such as fog. But MIT researchers have developed a sub-terahertz-radiation receiving system that could help steer driverless cars when traditional methods fail.

Sub-terahertz wavelengths, which are between microwave and infrared radiation on the electromagnetic spectrum, can be detected through fog and dust clouds with ease, whereas the infrared-based LiDAR imaging systems used in autonomous vehicles struggle. To detect objects, a sub-terahertz imaging system sends an initial signal through a transmitter; a receiver then measures the absorption and reflection of the rebounding sub-terahertz wavelengths. That sends a signal to a processor that recreates an image of the object.

But implementing sub-terahertz sensors into driverless cars is challenging. Sensitive, accurate object-recognition requires a strong output baseband signal from receiver to processor. Traditional systems, made of discrete components that produce such signals, are large and expensive. Smaller, on-chip sensor arrays exist, but they produce weak signals.

In a paper published online on Feb. 8 by the IEEE Journal of Solid-State Circuits, the researchers describe a two-dimensional, sub-terahertz receiving array on a chip that’s orders of magnitude more sensitive, meaning it can better capture and interpret sub-terahertz wavelengths in the presence of a lot of signal noise.

To achieve this, they implemented a scheme of independent signal-mixing pixels — called “heterodyne detectors” — that are usually very difficult to densely integrate into chips. The researchers drastically shrank the size of the heterodyne detectors so that many of them can fit into a chip. The trick was to create a compact, multipurpose component that can simultaneously down-mix input signals, synchronize the pixel array, and produce strong output baseband signals.

The researchers built a prototype, which has a 32-pixel array integrated on a 1.2-square-millimeter device. The pixels are approximately 4,300 times more sensitive than the pixels in today’s best on-chip sub-terahertz array sensors. With a little more development, the chip could potentially be used in driverless cars and autonomous robots.

“A big motivation for this work is having better ‘electric eyes’ for autonomous vehicles and drones,” says co-author Ruonan Han, an associate professor of electrical engineering and computer science, and director of the Terahertz Integrated Electronics Group in the MIT Microsystems Technology Laboratories (MTL). “Our low-cost, on-chip sub-terahertz sensors will play a complementary role to LiDAR for when the environment is rough.”

Joining Han on the paper are first author Zhi Hu and co-author Cheng Wang, both PhD students in the Department of Electrical Engineering and Computer Science working in Han’s research group.

Decentralized design

The key to the design is what the researchers call “decentralization.” In this design, a single pixel — called a “heterodyne” pixel — generates the frequency beat (the frequency difference between two incoming sub-terahertz signals) and the “local oscillation,” an electrical signal that changes the frequency of an input signal. This “down-mixing” process produces a signal in the megahertz range that can be easily interpreted by a baseband processor.

The output signal can be used to calculate the distance of objects, similar to how LiDAR calculates the time it takes a laser to hit an object and rebound. In addition, combining the output signals of an array of pixels, and steering the pixels in a certain direction, can enable high-resolution images of a scene. This allows for not only the detection but also the recognition of objects, which is critical in autonomous vehicles and robots.
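The distance calculation works the same way as in LiDAR: half the round-trip travel time multiplied by the propagation speed. A minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(seconds):
    """Distance to a reflector from a pulse's round-trip travel time,
    the same time-of-flight idea LiDAR uses (halved: out and back)."""
    return C * seconds / 2.0
```

A 2-microsecond round trip, for example, corresponds to a reflector roughly 300 meters away.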

Heterodyne pixel arrays work only when the local oscillation signals from all pixels are synchronized, meaning that a signal-synchronizing technique is needed. Centralized designs include a single hub that shares local oscillation signals to all pixels.

These designs are usually used by receivers of lower frequencies, and can cause issues at sub-terahertz frequency bands, where generating a high-power signal from a single hub is notoriously difficult. As the array scales up, the power shared by each pixel decreases, reducing the output baseband signal strength, which is highly dependent on the power of local oscillation signal. As a result, a signal generated by each pixel can be very weak, leading to low sensitivity. Some on-chip sensors have started using this design, but are limited to eight pixels.

The researchers’ decentralized design tackles this scale-sensitivity trade-off. Each pixel generates its own local oscillation signal, used for receiving and down-mixing the incoming signal. In addition, an integrated coupler synchronizes its local oscillation signal with that of its neighbor. This gives each pixel more output power, since the local oscillation signal does not flow from a global hub.
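The scaling argument behind the two architectures fits in two lines (the power numbers used here are illustrative, not measurements from the paper):

```python
def centralized_lo_per_pixel(hub_power_mw, n_pixels):
    """One hub's local-oscillation power is split across the whole array."""
    return hub_power_mw / n_pixels

def decentralized_lo_per_pixel(pixel_power_mw, n_pixels):
    """Each pixel generates its own signal, so its share never shrinks."""
    return pixel_power_mw
```

Doubling the array halves each pixel's share in the centralized case but leaves the decentralized case untouched, which is why scaling up no longer costs sensitivity.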

A good analogy for the new decentralized design is an irrigation system, Han says. A traditional irrigation system has one pump that directs a powerful stream of water through a pipeline network that distributes water to many sprinkler sites. Each sprinkler spits out water that has a much weaker flow than the initial flow from the pump. If you want the sprinklers to pulse at the exact same rate, that would require another control system.

The researchers’ design, on the other hand, gives each site its own water pump, eliminating the need for connecting pipelines, and gives each sprinkler its own powerful water output. Each sprinkler also communicates with its neighbor to synchronize their pulse rates. “With our design, there’s essentially no boundary for scalability,” Han says. “You can have as many sites as you want, and each site still pumps out the same amount of water … and all pumps pulse together.”

The new architecture, however, potentially makes the footprint of each pixel much larger, which poses a great challenge to large-scale, high-density integration in an array. In their design, the researchers combined various functions of four traditionally separate components — antenna, downmixer, oscillator, and coupler — into a single “multitasking” component given to each pixel. This allows for a decentralized design of 32 pixels.

“We designed a multifunctional component for a [decentralized] design on a chip and combined a few discrete structures to shrink the size of each pixel,” Hu says. “Even though each pixel performs complicated operations, it keeps its compactness, so we can still have a large-scale dense array.”

Guided by frequencies

In order for the system to gauge an object’s distance, the frequency of the local oscillation signal must be stable.

To that end, the researchers incorporated into their chip a component called a phase-locked loop that locks the sub-terahertz frequency of all 32 local oscillation signals to a stable, low-frequency reference. Because the pixels are coupled, their local oscillation signals all share identical, high-stability phase and frequency. This ensures that meaningful information can be extracted from the output baseband signals. This entire architecture minimizes signal loss and maximizes control.

“In summary, we achieve a coherent array, at the same time with very high local oscillation power for each pixel, so each pixel achieves high sensitivity,” Hu says.

February 14, 2019 | More

Turning desalination waste into a useful resource Process developed at MIT could turn concentrated brine into useful chemicals, making desalination more efficient.

Turning desalination waste into a useful resource

The rapidly growing desalination industry produces water for drinking and for agriculture in the world’s arid coastal regions. But it leaves behind as a waste product a lot of highly concentrated brine, which is usually disposed of by dumping it back into the sea, a process that requires costly pumping systems and that must be managed carefully to prevent damage to marine ecosystems. Now, engineers at MIT say they have found a better way.

In a new study, they show that through a fairly simple process the waste material can be converted into useful chemicals — including ones that can make the desalination process itself more efficient.

The approach can be used to produce sodium hydroxide, among other products. Otherwise known as caustic soda, sodium hydroxide can be used to pretreat seawater going into the desalination plant. This changes the acidity of the water, which helps to prevent fouling of the membranes used to filter out the salty water — a major cause of interruptions and failures in typical reverse osmosis desalination plants.

The concept is described today in the journal Nature Catalysis and in two other papers by MIT research scientist Amit Kumar, professor of mechanical engineering John H. Lienhard V, and several others. Lienhard is the Jameel Professor of Water and Food and the director of the Abdul Latif Jameel Water and Food Systems Lab.

“The desalination industry itself uses quite a lot of it,” Kumar says of sodium hydroxide. “They’re buying it, spending money on it. So if you can make it in situ at the plant, that could be a big advantage.” The amount needed in the plants themselves is far less than the total that could be produced from the brine, so there is also potential for it to be a saleable product.

Sodium hydroxide is not the only product that can be made from the waste brine: Another important chemical used by desalination plants and many other industrial processes is hydrochloric acid, which can also easily be made on site from the waste brine using established chemical processing methods. The chemical can be used for cleaning parts of the desalination plant, but is also widely used in chemical production and as a source of hydrogen.

Currently, the world produces more than 100 billion liters (about 27 billion gallons) a day of water from desalination, which leaves a similar volume of concentrated brine. Much of that is pumped back out to sea, and current regulations require costly outfall systems to ensure adequate dilution of the salts. Converting the brine can thus be both economically and ecologically beneficial, especially as desalination continues to grow rapidly around the world. “Environmentally safe discharge of brine is manageable with current technology, but it’s much better to recover resources from the brine and reduce the amount of brine released,” Lienhard says.

The method of converting the brine into useful products uses well-known and standard chemical processes, including initial nanofiltration to remove undesirable compounds, followed by one or more electrodialysis stages to produce the desired end product. While the processes being suggested are not new, the researchers have analyzed the potential for production of useful chemicals from brine and proposed a specific combination of products and chemical processes that could be turned into commercial operations to enhance the economic viability of the desalination process, while diminishing its environmental impact.

“This very concentrated brine has to be handled carefully to protect life in the ocean, and it’s a resource waste, and it costs energy to pump it back out to sea,” so turning it into a useful commodity is a win-win, Kumar says. And sodium hydroxide is such a ubiquitous chemical that “every lab at MIT has some,” he says, so finding markets for it should not be difficult.

The researchers have discussed the concept with companies that may be interested in the next step of building a prototype plant to help work out the real-world economics of the process. “One big challenge is cost — both electricity cost and equipment cost,” at this stage, Kumar says.

The team also continues to look at the possibility of extracting other, lower-concentration materials from the brine stream, he says, including various metals and other chemicals, which could make the brine processing an even more economically viable undertaking.

“One aspect that was mentioned … and strongly resonated with me was the proposal for such technologies to support more ‘localized’ or ‘decentralized’ production of these chemicals at the point-of-use,” says Jurg Keller, a professor of water management at the University of Queensland in Australia, who was not involved in this work. “This could have some major energy and cost benefits, since the up-concentration and transport of these chemicals often adds more cost and even higher energy demand than the actual production of these at the concentrations that are typically used.”

The research team also included MIT postdoc Katherine Phillips and undergraduate Janny Cai, and Uwe Schroder at the University of Braunschweig, in Germany. The work was supported by Cadagua, a subsidiary of Ferrovial, through the MIT Energy Initiative.

February 13, 2019 | More

MIT robot combines vision and touch to learn the game of Jenga Machine-learning approach could help robots assemble cellphones and other small parts in a manufacturing line

MIT robot combines vision and touch to learn the game of Jenga

In the basement of MIT’s Building 3, a robot is carefully contemplating its next move. It gently pokes at a tower of blocks, looking for the best block to extract without toppling the tower, in a solitary, slow-moving, yet surprisingly agile game of Jenga.

The robot, developed by MIT engineers, is equipped with a soft-pronged gripper, a force-sensing wrist cuff, and an external camera, all of which it uses to see and feel the tower and its individual blocks.

As the robot carefully pushes against a block, a computer takes in visual and tactile feedback from its camera and cuff, and compares these measurements to moves that the robot previously made. It also considers the outcomes of those moves — specifically, whether a block, in a certain configuration and pushed with a certain amount of force, was successfully extracted or not. In real time, the robot then “learns” whether to keep pushing or move to a new block, in order to keep the tower from falling.

Details of the Jenga-playing robot are published today in the journal Science Robotics. Alberto Rodriguez, the Walter Henry Gale Career Development Assistant Professor in the Department of Mechanical Engineering at MIT, says the robot demonstrates something that’s been tricky to attain in previous systems: the ability to quickly learn the best way to carry out a task, not just from visual cues, as it is commonly studied today, but also from tactile, physical interactions.

“Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces. It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks,” Rodriguez says. “This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower. The key challenge is to learn from a relatively small number of experiments by exploiting common sense about objects and physics.”

He says the tactile learning system the researchers have developed can be used in applications beyond Jenga, especially in tasks that need careful physical interaction, including separating recyclable objects from landfill trash and assembling consumer products.

“In a cellphone assembly line, in almost every single step, the feeling of a snap-fit, or a threaded screw, is coming from force and touch rather than vision,” Rodriguez says. “Learning models for those actions is prime real-estate for this kind of technology.”

The paper’s lead author is MIT graduate student Nima Fazeli. The team also includes Miquel Oller, Jiajun Wu, Zheng Wu, and Joshua Tenenbaum, professor of brain and cognitive sciences at MIT.

Push and pull

In the game of Jenga — Swahili for “build” — 54 rectangular blocks are stacked in 18 layers of three blocks each, with the blocks in each layer oriented perpendicular to the blocks below. The aim of the game is to carefully extract a block and place it at the top of the tower, thus building a new level, without toppling the entire structure.

To program a robot to play Jenga, traditional machine-learning schemes might require capturing everything that could possibly happen between a block, the robot, and the tower — an expensive computational task requiring data from thousands if not tens of thousands of block-extraction attempts.

Instead, Rodriguez and his colleagues looked for a more data-efficient way for a robot to learn to play Jenga, inspired by human cognition and the way we ourselves might approach the game.

The team customized an industry-standard ABB IRB 120 robotic arm, then set up a Jenga tower within the robot’s reach, and began a training period in which the robot first chose a random block and a location on the block against which to push. It then exerted a small amount of force in an attempt to push the block out of the tower.

For each block attempt, a computer recorded the associated visual and force measurements, and labeled whether each attempt was a success.

Rather than carry out tens of thousands of such attempts (which would involve reconstructing the tower almost as many times), the robot trained on just about 300, with attempts of similar measurements and outcomes grouped in clusters representing certain block behaviors. For instance, one cluster of data might represent attempts on a block that was hard to move, versus one that was easier to move, or that toppled the tower when moved. For each data cluster, the robot developed a simple model to predict a block’s behavior given its current visual and tactile measurements.
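The cluster-then-model idea can be sketched with invented stand-in data. The features, the two regimes, and the tiny k-means below are illustrative only, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: each attempt records (push force, block
# displacement); label 1 means the block came out without toppling the tower.
free = rng.normal([1.0, 5.0], 0.3, size=(60, 2))    # blocks that move freely
stuck = rng.normal([4.0, 0.5], 0.3, size=(60, 2))   # load-bearing blocks
X = np.vstack([free, stuck])
y = np.array([1] * 60 + [0] * 60)

# Group attempts with similar measurements (tiny k-means, k=2)...
centers = X[[0, 60]]                                # one seed from each regime
for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

# ...then fit a simple per-cluster model: here, just the success rate.
success_rate = {k: float(y[labels == k].mean()) for k in range(2)}
```

Each cluster gets its own small predictor, here just an empirical success rate, instead of one monolithic model of everything that could happen between block, robot, and tower.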

Fazeli says this clustering technique dramatically increases the efficiency with which the robot can learn to play the game, and is inspired by the natural way in which humans cluster similar behavior: “The robot builds clusters and then learns models for each of these clusters, instead of learning a model that captures absolutely everything that could happen.”

Stacking up

The researchers tested their approach against other state-of-the-art machine learning algorithms, in a computer simulation of the game using the simulator MuJoCo. The lessons learned in the simulator informed the researchers of the way the robot would learn in the real world.

“We provide to these algorithms the same information our system gets, to see how they learn to play Jenga at a similar level,” Oller says. “Compared with our approach, these algorithms need to explore orders of magnitude more towers to learn the game.”

Curious as to how their machine-learning approach stacks up against actual human players, the team carried out a few informal trials with several volunteers.

“We saw how many blocks a human was able to extract before the tower fell, and the difference was not that much,” Oller says.

But there is still a way to go if the researchers want to competitively pit their robot against a human player. In addition to physical interactions, Jenga requires strategy, such as extracting just the right block that will make it difficult for an opponent to pull out the next block without toppling the tower.

For now, the team is less interested in developing a robotic Jenga champion, and more focused on applying the robot’s new skills to other application domains.

“There are many tasks that we do with our hands where the feeling of doing it ‘the right way’ comes in the language of forces and tactile cues,” Rodriguez says. “For tasks like these, a similar approach to ours could figure it out.”

This research was supported, in part, by the National Science Foundation through the National Robotics Initiative.

January 30, 2019 | More

Engineers program marine robots to take calculated risks

Algorithm could help autonomous underwater vehicles explore risky but scientifically rewarding environments

We know far less about the Earth’s oceans than we do about the surface of the moon or Mars. The sea floor is carved with expansive canyons, towering seamounts, deep trenches, and sheer cliffs, most of which are considered too dangerous or inaccessible for autonomous underwater vehicles (AUVs) to navigate.

But what if the reward for traversing such places was worth the risk?

MIT engineers have now developed an algorithm that lets AUVs weigh the risks and potential rewards of exploring an unknown region. For instance, if a vehicle tasked with identifying underwater oil seeps approached a steep, rocky trench, the algorithm could assess the reward level (the probability that an oil seep exists near the trench) and the risk level (the probability of colliding with an obstacle) if it were to take a path through the trench.

“If we were very conservative with our expensive vehicle, saying its survivability was paramount above all, then we wouldn’t find anything of interest,” says Benjamin Ayton, a graduate student in MIT’s Department of Aeronautics and Astronautics. “But if we understand there’s a tradeoff between the reward of what you gather, and the risk or threat of going toward these dangerous geographies, we can take certain risks when it’s worthwhile.”

Ayton says the new algorithm can compute tradeoffs of risk versus reward in real time, as a vehicle decides where to explore next. He and his colleagues in the lab of Brian Williams, professor of aeronautics and astronautics, are implementing this algorithm and others on AUVs, with the vision of deploying fleets of bold, intelligent robotic explorers for a number of missions, including looking for offshore oil deposits, investigating the impact of climate change on coral reefs, and exploring extreme environments analogous to Europa, an ice-covered moon of Jupiter that the team hopes vehicles will one day traverse.

“If we went to Europa and had a very strong reason to believe that there might be a billion-dollar observation in a cave or crevasse, which would justify sending a spacecraft to Europa, then we would absolutely want to risk going in that cave,” Ayton says. “But algorithms that don’t consider risk are never going to find that potentially history-changing observation.”

Ayton and Williams, along with Richard Camilli of the Woods Hole Oceanographic Institution, will present their new algorithm at the Association for the Advancement of Artificial Intelligence conference this week in Honolulu.

A bold path

The team’s new algorithm is the first to enable “risk-bounded adaptive sampling.” An adaptive sampling mission is designed to automatically adapt an AUV’s path based on new measurements the vehicle takes as it explores a given region. Adaptive sampling missions that consider risk typically do so by finding paths with a fixed, acceptable level of risk. For instance, AUVs may be programmed to chart only paths whose chance of collision does not exceed 5 percent.
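A fixed risk cap of that kind can be checked very simply. The sketch below, which is an illustration rather than the authors' formulation, treats a path as a sequence of independent segments and accepts it only if the accumulated collision probability stays under 5 percent.

```python
# Illustrative check of a fixed risk bound (not the paper's algorithm):
# a path is admissible if its total collision probability, accumulated
# over independent segments, stays under a 5 percent cap.
def path_collision_prob(segment_risks):
    """Probability of at least one collision along the path."""
    survive = 1.0
    for r in segment_risks:
        survive *= (1.0 - r)
    return 1.0 - survive

def admissible(segment_risks, cap=0.05):
    return path_collision_prob(segment_risks) <= cap

safe_path = [0.01, 0.01, 0.02]  # ~3.95% total risk -> allowed
bold_path = [0.03, 0.02, 0.04]  # ~8.74% total risk -> rejected
```

The limitation the researchers point out is visible here: the cap says nothing about what the vehicle stands to gain, so a rejected path is rejected no matter how rewarding it might be.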

But the researchers found that accounting for risk alone could severely limit a mission’s potential rewards.

“Before we go into a mission, we want to specify the risk we’re willing to take for a certain level of reward,” Ayton says. “For instance, if a path were to take us to more hydrothermal vents, we would be willing to take this amount of risk, but if we’re not going to see anything, we would be willing to take less risk.”

The team’s algorithm takes in bathymetric data, or information about the ocean topography, including any surrounding obstacles, along with the vehicle’s dynamics and inertial measurements, to compute the level of risk for a certain proposed path. The algorithm also takes in all previous measurements that the AUV has taken, to compute the probability that such high-reward measurements may exist along the proposed path.

If the risk-to-reward ratio meets a certain value, determined by scientists beforehand, then the AUV goes ahead with the proposed path, taking more measurements that feed back into the algorithm to help it evaluate the risk and reward of other paths as the vehicle moves forward.
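The accept-or-skip decision described above might be sketched as follows; the scoring rule, the threshold value, and the candidate paths are invented placeholders, not the authors' formulation.

```python
# Hedged sketch of the path-selection step: among candidate paths, keep
# those whose reward-to-risk ratio clears a scientist-set threshold,
# then take the most rewarding survivor. Numbers are illustrative.
def choose_path(candidates, min_reward_per_risk=10.0):
    """Return the highest-reward candidate whose reward/risk ratio
    meets the threshold, or None if no candidate qualifies."""
    viable = [p for p in candidates
              if p["reward"] / max(p["risk"], 1e-9) >= min_reward_per_risk]
    if not viable:
        return None
    return max(viable, key=lambda p: p["reward"])

candidates = [
    {"name": "open water", "risk": 0.01, "reward": 0.05},  # ratio 5  -> out
    {"name": "chasm",      "risk": 0.04, "reward": 0.80},  # ratio 20 -> in
    {"name": "trench",     "risk": 0.20, "reward": 1.00},  # ratio 5  -> out
]
```

In this toy setup the narrow chasm wins: it is riskier than open water, but its expected reward justifies the added risk, which is exactly the tradeoff the real algorithm recomputes after every new measurement.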

The researchers tested their algorithm in a simulation of an AUV mission east of Boston Harbor. They used bathymetric data collected from the region during a previous NOAA survey, and simulated an AUV exploring at a depth of 15 meters through regions at relatively high temperatures. They looked at how the algorithm planned out the vehicle’s route under three different scenarios of acceptable risk.

In the scenario with the lowest acceptable risk, meaning that the vehicle should avoid any regions that would have a very high chance of collision, the algorithm mapped out a conservative path, keeping the vehicle in a safe region that also did not have any high rewards — in this case, high temperatures. For scenarios of higher acceptable risk, the algorithm charted bolder paths that took a vehicle through a narrow chasm, and ultimately to a high-reward region.

The team also ran the algorithm through 10,000 numerical simulations, generating random environments in each simulation through which to plan a path, and found that the algorithm “trades off risk against reward intuitively, taking dangerous actions only when justified by the reward.”
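A Monte Carlo evaluation of this kind can be mocked up in miniature: generate a random environment per trial, let a toy planner pick between a safe and a risky option, and tally how often it accepts risk. Everything below, from the environment model to the decision rule, is an assumption for illustration.

```python
# Toy Monte Carlo harness in the spirit of the 10,000-trial evaluation.
# Each trial offers a safe option and a riskier, higher-reward one; the
# toy planner takes the risk only when the extra reward justifies it.
import random

random.seed(1)

def run_trial():
    safe = {"risk": random.uniform(0.0, 0.02),
            "reward": random.uniform(0.0, 0.3)}
    risky = {"risk": random.uniform(0.05, 0.3),
             "reward": random.uniform(0.3, 1.0)}
    gain = risky["reward"] - safe["reward"]
    return risky if gain / risky["risk"] > 2.0 else safe

picks = [run_trial() for _ in range(10_000)]
risky_share = sum(p["risk"] >= 0.05 for p in picks) / len(picks)
```

Across many random environments, a planner behaving as the paper describes should accept danger sometimes but not always, which is what `risky_share` falling strictly between 0 and 1 reflects here.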

A risky slope

Last December, Ayton, Williams, and others spent two weeks on a cruise off the coast of Costa Rica, deploying underwater gliders on which they tested several algorithms, including this newest one. For the most part, the paths the algorithm planned agreed with those proposed by several onboard geologists who were looking for the best routes to find oil seeps.

Ayton says there was a particular moment when the risk-bounded algorithm proved especially handy. An AUV was making its way up a precarious slump, or landslide, where the vehicle couldn’t take too many risks.

“The algorithm found a method to get us up the slump quickly, while being the most worthwhile,” Ayton says. “It took us up a path that, while it didn’t help us discover oil seeps, it did help us refine our understanding of the environment.”

“What was really interesting was to watch how the machine algorithms began to ‘learn’ after the findings of several dives, and began to choose sites that we geologists might not have chosen initially,” says Lori Summa, a geologist and guest investigator at the Woods Hole Oceanographic Institution, who took part in the cruise.  “This part of the process is still evolving, but it was exciting to watch the algorithms begin to identify the new patterns from large amounts of data, and couple that information to an efficient, ‘safe’ search strategy.”

In their long-term vision, the researchers hope to use such algorithms to help autonomous vehicles explore environments beyond Earth.

“If we went to Europa and weren’t willing to take any risks in order to preserve a probe, then the probability of finding life would be very, very low,” Ayton says. “You have to risk a little to get more reward, which is generally true in life as well.”

This research was supported, in part, by Exxon Mobil, as part of the MIT Energy Initiative, and by NASA.

January 30, 2019 | More

Optimizing solar farms with smart drones

MIT spinoff Raptor Maps uses machine-learning software to improve the maintenance of solar panels.

As the solar industry has grown, so have some of its inefficiencies. Smart entrepreneurs see those inefficiencies as business opportunities and try to create solutions around them. Such is the nature of a maturing industry.

One of the biggest complications emerging from the industry’s breakneck growth is the maintenance of solar farms. Historically, technicians have run electrical tests on random sections of solar cells in order to identify problems. In recent years, the use of drones equipped with thermal cameras has improved the speed of data collection, but now technicians are being asked to interpret a never-ending flow of unstructured data.

That’s where Raptor Maps comes in. The company’s software analyzes imagery from drones and diagnoses problems down to the level of individual cells. The system can also estimate the costs associated with each problem it finds, allowing technicians to prioritize their work and owners to decide what’s worth fixing.

“We can enable technicians to cover 10 times the territory and pinpoint the most optimal use of their skill set on any given day,” Raptor Maps co-founder and CEO Nikhil Vadhavkar says. “We came in and said, ‘If solar is going to become the number one source of energy in the world, this process needs to be standardized and scalable.’ That’s what it takes, and our customers appreciate that approach.”

In 2018, Raptor Maps processed data covering 1 percent of the world’s solar energy, the output of millions of panels around the world. And as the industry continues its upward trajectory, with solar farms expanding in size and complexity, the company’s business proposition only becomes more attractive to the people driving that growth.

Picking a path

Raptor Maps was founded by Eddie Obropta ’13 SM ’15, Forrest Meyen SM ’13 PhD ’17, and Vadhavkar, who was a PhD candidate at MIT between 2011 and 2016. The former classmates had worked together in the Human Systems Laboratory of the Department of Aeronautics and Astronautics when Vadhavkar came to them with the idea of starting a drone company in 2015.

The founders were initially focused on the agriculture industry. The plan was to build drones equipped with high-definition thermal cameras to gather data, then create a machine-learning system to gain insights on crops as they grew. While the founders began the arduous process of collecting training data, they received guidance from MIT’s Venture Mentoring Service and the Martin Trust Center. In the spring of 2015, Raptor Maps won the MIT $100K Launch competition.

But even as the company began working with the owners of two large farms, Obropta and Vadhavkar were unsure of their path to scaling the company. (Meyen left the company in 2016.) Then, in 2017, they made their software publicly available and something interesting happened.

They found that most of the people who used the system were applying it to thermal images of solar farms instead of real farms. It was a message the founders took to heart.

“Solar is similar to farming: It’s out in the open, 2-D, and it’s spread over a large area,” Obropta says. “And when you see [an anomaly] in thermal images on solar, it usually means an electrical issue or a mechanical issue — you don’t have to guess as much as in agriculture. So we decided the best use case was solar. And with a big push for clean energy and renewables, that aligned really well with what we wanted to do as a team.”

Obropta and Vadhavkar also found themselves on the right side of several long-term trends as a result of the pivot. The International Energy Agency has proposed that solar power could be the world’s largest source of electricity by 2050. But as demand grows, investors, owners, and operators of solar farms are dealing with an increasingly acute shortage of technicians to keep the panels running near peak efficiency.

Since deciding to focus on solar exclusively around the beginning of 2018, Raptor Maps has found success in the industry by releasing its standards for data collection and letting customers — or the many drone operators the company partners with — use off-the-shelf hardware to gather the data themselves. After the data is submitted to the company, the system creates a detailed map of each solar farm and pinpoints any problems it finds.

“We run analytics so we can tell you, ‘This is how many solar panels have this type of issue; this is how much the power is being affected,’” Vadhavkar says. “And we can put an estimate on how many dollars each issue costs.”
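A cost roll-up of the kind Vadhavkar describes might look like the sketch below. The issue types, per-panel power losses, energy price, and sun hours are all made-up assumptions, not Raptor Maps figures.

```python
# Illustrative analytics step: price each detected issue type by the
# annual energy it forfeits, then rank issues by dollar impact.
ISSUE_POWER_LOSS_W = {       # assumed loss per affected panel, in watts
    "cracked_cell": 15,
    "hot_spot": 40,
    "string_outage": 300,
}

def annual_cost(issues, price_per_kwh=0.05, sun_hours_per_year=1500):
    """Dollar estimate per issue type, sorted most expensive first."""
    costs = {}
    for issue, count in issues.items():
        lost_kwh = ISSUE_POWER_LOSS_W[issue] * count * sun_hours_per_year / 1000
        costs[issue] = round(lost_kwh * price_per_kwh, 2)
    return dict(sorted(costs.items(), key=lambda kv: -kv[1]))

report = annual_cost({"cracked_cell": 120, "hot_spot": 30, "string_outage": 2})
```

Ranking issues by dollars rather than by count is the point: two string outages may matter less than a hundred cracked cells, and the ordering tells technicians where their day is best spent.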

The model allows Raptor Maps to stay lean while its software does the heavy lifting. In fact, the company’s current operations involve more servers than people.

The tiny operation belies a company that’s carved out a formidable space for itself in the solar industry. Last year, Raptor Maps processed four gigawatts’ worth of data from customers on six continents. That’s enough generating capacity to power nearly 3 million homes.

Vadhavkar says the company’s goal is to grow at least fivefold in 2019 as several large customers move to make the software a core part of their operations. The team is also working on getting its software to generate insights in real time, using graphics processing units on the drone itself, as part of a project with the multinational energy company Enel Green Power.

Ultimately, the data Raptor Maps collects is taking the uncertainty out of the solar industry, making it a more attractive space for investors, operators, and everyone in between.

“The growth of the industry is what drives us,” Vadhavkar says. “We’re directly seeing that what we’re doing is impacting the ability of the industry to grow faster. That’s huge. Growing the industry — but also, from the entrepreneurial side, building a profitable business while doing it — that’s always been a huge dream.”

January 30, 2019 | More