News and Research
Healthcare systems

Translating a Biologic Revolution into an Organizational Overhaul

MIT LGO students and professors work with Mass General Hospital to redesign healthcare processes and bring novel therapies to patients.
Read more


MIT Alumna is Industry-Tested, Tesla-Approved

Unlike most automotive manufacturers, Tesla has no status quo. That’s good news for Grace Overlander (LGO ’08) who believes that we can always make things better.

October 17, 2016 | More

How the Chemical Industry Joined the Fight Against Climate Change

Ken Gayer (LGO ’98), VP and General Manager of Honeywell Fluorine Products, is quoted in this NY Times article.

October 16, 2016 | More

Electron-phonon interactions affect heat dissipation in computer chips

LGO professor Gang Chen and his research group say cellphones, laptops, and other electronic devices may face a higher risk of overheating, as a result of interactions between electrons and heat-carrying particles called phonons.

The researchers have found that these previously underestimated interactions can play a significant role in preventing heat dissipation in microelectronic devices. Their results are published today in the journal Nature Communications.

In their experiments, the team used precisely timed laser pulses to measure the interactions between electrons and phonons in a very thin silicon wafer. As the concentration of electrons in the silicon increased, the electrons increasingly scattered phonons, preventing them from carrying heat away.

“When your computer is running, it generates heat, and you want this heat to dissipate, to be carried out by phonons,” says lead author Bolin Liao, a former graduate student in mechanical engineering at MIT. “If phonons are scattered by electrons, they’re not as good as we thought they were in carrying heat out. This will create a problem that we have to solve as chips become smaller.”

On the other hand, Liao says this same effect may benefit thermoelectric generators, which convert heat directly into electrical energy. In such devices, scattering phonons, and thereby reducing heat leakage, would significantly improve their performance.

“Now we know this effect can be significant when the concentration of electrons is high,” Liao says. “We now have to think of how to engineer the electron-phonon interaction in more sophisticated ways to benefit both thermoelectric and microelectronic devices.”

Liao’s co-authors include Gang Chen, the Carl Richard Soderberg Professor in Power Engineering and the head of the Department of Mechanical Engineering; Alexei Maznev, a senior research scientist in the Department of Chemistry; and Keith Nelson, the Haslam and Dewey Professor of Chemistry.

Blocking flow

In transistors made from semiconductor materials such as silicon, and electrical cables made from metals, electrons are the main agents responsible for conducting electricity through a material. A main reason why such materials have a finite electrical resistance is the existence of certain roadblocks to electrons’ flow — namely, interactions with the heat-carrying phonons, which can collide with electrons, throwing them off their electricity-conducting paths.

Scientists have long studied the effect of such electron-phonon interactions on electrons themselves, but how these same interactions affect phonons — and a material’s ability to conduct heat — is less well-understood.

“People hardly studied the effect on phonons because they used to think this effect was not important,” Liao says. “But as we know from Newton’s third law, every action has a reaction. We just didn’t know under what circumstances this effect can become significant.”

Scatter and decay

Liao and his colleagues had previously calculated that in silicon, the most commonly used semiconductor material, when the concentration of electrons rises above 10^19 per cubic centimeter, the interactions between electrons and phonons strongly scatter phonons. Moreover, these interactions would reduce the material’s ability to dissipate heat by as much as 50 percent when the concentration reaches 10^21 per cubic centimeter.

“That’s a really significant effect, but people were skeptical,” Liao says. That’s mainly because, in previous experiments on materials with high electron concentrations, researchers assumed the reduction in heat dissipation was due not to electron-phonon interactions but to defects in the materials. Such defects arise from the process of “doping,” in which additional elements such as phosphorus and boron are added to silicon to increase its electron concentration.

“So the challenge to verify our idea was, we had to separate the contributions from electrons and defects by somehow controlling the electron concentration inside the material, without introducing any defects,” Liao says.

The team developed a technique called three-pulse photoacoustic spectroscopy to precisely increase the number of electrons in a thin wafer of silicon by optical methods, and measure any effect on the material’s phonons. The technique expands on a conventional two-pulse photoacoustic spectroscopy technique, in which scientists shine two precisely tuned and timed lasers on a material. The first laser generates a phonon pulse in the material, while the second measures the activity of the phonon pulse as it scatters, or decays.

Liao added a third laser, which when shone on silicon precisely increased the material’s concentration of electrons, without creating defects. When he measured the phonon pulse after introducing the third laser, he found that it decayed much faster, indicating that the increased concentration of electrons acted to scatter phonons and dampen their activity.

“Very happily, we found the experimental result agrees very well with our previous calculation, and we can now say this effect can be truly significant and we proved it in experiments,” Liao says. “This is among the first experiments to directly probe electron-phonon interactions’ effect on phonons.”

Interestingly, the researchers first started seeing this effect in silicon that was loaded with 10^19 electrons per cubic centimeter — comparable to, or even lower than, the concentrations in some current transistors.

“From our study, we show that this is going to be a really serious problem when the scale of circuits becomes smaller,” Liao says. “Even now, with transistor size being a few nanometers, I think this effect will start to appear, and we really need to seriously consider this effect and think of how to use or avoid it in real devices.”

This research was supported by S3TEC, an Energy Frontier Research Center funded by the U.S. Department of Energy’s Office of Basic Energy Sciences.

October 12, 2016 | More

Creating good jobs at a Texas grocery chain

HEB’s president talks with Good Jobs Strategy author Zeynep Ton (an LGO thesis advisor) about investing in people.

At Texas-based grocery store HEB, applying the good jobs strategy—an approach described by MIT Sloan adjunct associate professor Zeynep Ton in her book, The Good Jobs Strategy—doesn’t mean being sweet, happy, and forgiving, said HEB president and chief operating officer Craig Boyan.

Instead, the company, which has more than 380 locations and 96,000 employees in Texas and Mexico, aims for a feeling of “restless dissatisfaction,” he said. “Think about the teachers you’ve been inspired by—they weren’t the nicest or the sweetest—they set the bar really high and gave you hard problems to solve,” Boyan said. “We have some very happy people, but our culture is restless dissatisfaction.”

Boyan appeared with Ton at a September 28 talk moderated by Don Sull, MIT Sloan senior lecturer and author of Simple Rules: How to Thrive in a Complex World.

With an estimated annual revenue of $23 billion, HEB is ranked number 13 on Forbes’ list of America’s largest private companies. It was rated the most trusted company in America in the 2015 Temkin Trust Ratings.

Ton and Boyan shared five elements of a good jobs company:

Focus on employees.
Boyan called Costco Wholesale Corporation “the best company in America,” saying the company puts employees first and keeps prices competitive. “If you take care of your employees as number one, and then customers as number two, then the shareholders will be served as an outcome of taking care of employees. It makes a huge difference,” Boyan said.

Employees want to work for a company with purpose.
“Investing in people might just mean paying good wages and bonuses and having a really great incentive program … but I would argue that’s not enough for a good jobs strategy,” Boyan said. “What we are motivated by is working for a company of purpose that makes a difference in people’s lives. Instilling pride in regular jobs can be extremely valuable,” Boyan said.

Ton agreed and said it is possible to create an environment where employees can find pride and meaning in their jobs. “We need to pay attention to design the work for them to be able to connect what they do to the purpose of the company,” she said.

Internal competition, done properly, will motivate employees, but incentives may not.
Boyan said HEB fosters competition store by store. The company shares sales information with managers, who will see exactly what is sold in comparable regional stores each day of the week.

“Our employees … are always pushing to get better. That’s the good part,” Boyan said. “The bad part is that peer-to-peer [competition] can be like dog-eat-dog.”

Since incentives indirectly contributed to that negative competition, the company removed them for many employees a decade ago and instead tied buyers’ incentives to total company performance. Incentives for specific jobs and functions can create barriers to collaboration, limit changes to space allocation across departments and categories, and get in the way of cross-merchandising.

Putting people first means putting your money where your mouth is.
HEB starts each year with a commitment to pay employees more money and offer lower prices for customers. “That’s everybody’s challenge. None of us know … in any given year, about how we will solve this year’s ability to make those investments,” Boyan said.

“We make those investments, and we absolutely say that we are going to give more than 5 percent pre-tax to charities … We are absolutely going to pay a certain amount of wages. We are absolutely going to invest in price. And that’s where the strategy starts … it’s a constant effort to try to be better,” Boyan said.

Adaptation is important when striving to be a good jobs company.
“One thing you will see is that things change,” Ton said. Collecting and quantifying data and staying on top of what competitors are doing is one way to stay ahead of the game. “Customers want changes. Technology changes. Regulations change. It is those companies that are able to adapt to those changes … that tend to stay around and thrive.”

October 6, 2016 | More

Financial institutions must fight cybercrime from inside and out

MIT Sloan professor and LGO thesis advisor Stuart Madnick at the Cambridge Cyber Summit.

Bank hacks are inevitable in today’s world of cybercrime. The challenges for financial institutions and government agencies alike are twofold: Minimizing the impact of the hack and also understanding the motivations of the hackers.

“The core of the issue is detection and recovery,” said Stuart Madnick, a professor of information technology and engineering systems at MIT Sloan. “The question is how quickly the number can come back up from $0. [A hack] doesn’t mean the money is gone. It just means you can’t see the money.”

Madnick and several other public and private cybersecurity experts spoke Oct. 5 at the Cambridge Cyber Summit, hosted by The Aspen Institute, CNBC, and MIT. The event was held at Kresge Auditorium on the MIT campus.

Bank security has been in the headlines for several months. The global financial messaging system SWIFT has been subject to a series of hacks, including the theft of $81 million from Bangladesh Bank in February of this year.

The attacks can come in many forms, federal officials said at the summit. Sometimes it’s a lone wolf hacking a single bank throughout the day. Sometimes it’s an organization trying to funnel money to an enemy of the state. Sometimes it’s a state-sponsored group targeting an entire network.

Since SWIFT handles as much as $9 trillion in financial transactions per day, Madnick said, the network is an attractive target. Because of the interconnectedness of today’s banks, an attack on one is an attack on all, he added: “You can’t be secure unless you know your partners are secure.”

Digital currency holds potential
Jeremy Allaire, founder and CEO of digital currency firm Circle, described the technology behind financial transactions as “text files floating around the internet,” which is insecure and prone to hacking.

Allaire said blockchain, the data format behind the Bitcoin payment system, can better protect financial data. In a blockchain, each block holds a set of validated transactions, as well as the cryptographic hash connecting that block to the prior block in the chain. In the context of Bitcoin, the blockchain serves as a digital ledger.
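The hash-linking structure described above can be sketched in a few lines of Python. This is a toy illustration of the ledger-chaining idea only — no proof-of-work, no signatures — and the transaction format and function names are invented for the example, not taken from Circle or Bitcoin:

```python
import hashlib
import json

def block_hash(transactions, prev_hash):
    """Hash a block's transactions together with the previous block's hash."""
    payload = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, transactions):
    """Append a block whose hash commits to the entire chain before it."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"tx": transactions, "prev": prev,
                  "hash": block_hash(transactions, prev)})
    return chain

def verify(chain):
    """Recompute every hash link; tampering anywhere breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["tx"], block["prev"]):
            return False
        prev = block["hash"]
    return True

ledger = []
append_block(ledger, ["alice pays bob 5"])
append_block(ledger, ["bob pays carol 2"])
assert verify(ledger)
ledger[0]["tx"] = ["alice pays mallory 500"]  # tamper with an early entry
assert not verify(ledger)
```

Because each block’s hash covers the previous block’s hash, rewriting an early transaction invalidates every block after it — which is what makes the ledger tamper-evident compared with the “text files floating around the internet” Allaire criticizes.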

Federal banks in a number of countries are considering a digital currency. Don Anderson, senior vice president and CIO of The Federal Reserve Bank of Boston, said he thinks that will happen in the United States “in our lifetime,” but there will be a few challenges along the way.

Digital currency is catching on in countries with limited financial infrastructure, which is not the case in the United States, Anderson said. It’s also more than a question of moving money. “How do you develop monetary policy around a digital currency?” he asked.

Finally, a “new bar” of standards must be set for Circle, Apple Pay, Google Wallet, and other payment systems that seek to replace financial business models that have existed for more than a century, Anderson said.

The biggest threats are inside
During the summit, news broke about the arrest of a National Security Agency contractor for allegedly stealing several terabytes’ worth of top-secret data. The Booz Allen Hamilton contractor, Harold T. Martin of Maryland, is suspected of stealing code, some of it outdated, which could be used to break into other nations’ computer systems.

The incident points to the serious nature of insider threats, and to the need to take a “whole-person approach” to understanding someone’s behavior, said S. Leslie Ireland, assistant secretary for intelligence and analysis in the U.S. Department of the Treasury.

“You look for activity you can’t otherwise explain,” Ireland said. According to The New York Times, Martin’s motives are still unclear; one official said Martin may have simply been a hoarder.

For all the external cyber threats, insiders pose the single greatest risk to financial institutions, the panelists said. In this case, though, it’s a matter of social engineering as opposed to malicious intent.

“You find the naïve people who click on links, find enough ways to communicate with them in earnest, and you get their credentials,” Allaire said. “The more senior person, the better.”

October 6, 2016 | More

Not just the bottom line: Lean manufacturing may help workers too

New research links management practices to compliance with labor standards in Nike’s supply chain, a common LGO internship project focus.

September 1, 2016 | More

Rick Dauch (LGO ’92), Five Lessons: Adventures in the Automotive Supply Chain

From collapsing roofs to late-night phone calls, a veteran automotive executive shares his stories of turning around companies for private equity firms.

August 24, 2016 | More

Duane Boning named head of Leaders for Global Operations program

Duane S. Boning has been named engineering faculty co-director of the Leaders for Global Operations (LGO) program, effective Sept. 1.

August 16, 2016 | More

A revolutionary model to optimize promotion pricing

Georgia Perakis is an LGO advisor and professor of operations management, operations research, and statistics at the MIT Sloan School of Management. In recognition of their work in this area, Perakis, her team of students from the Operations Research Center at MIT, and her collaborators at Oracle received the 2014 INFORMS Service Science Best Student Paper Award and the 2015 NEDSI Best Application of Theory Paper Award. The team was also selected as a finalist for the 2015 INFORMS Revenue Management & Pricing Section Practice Award.

Grocery stores run price promotions all the time. You see them when a particular brand of spaghetti sauce is $1 off or your favorite coffee is buy one get one free. Promotions are used for a variety of reasons from increasing traffic in stores to boosting sales of a particular brand. They are responsible for a lot of revenue, as a 2009 A.C. Nielsen study found that 42.8% of grocery store sales in the U.S. are made during promotions. This raises an important question: How much money does a retailer leave on the table by using current pricing practices as opposed to a more scientific, data-driven approach in order to determine optimal promotional prices?

The promotion planning tools currently available in the industry are mostly manual and based on “what-if” scenarios. In other words, supermarkets tend to use intuition and habit to decide when, how deep, and how often to promote products. Yet promotion pricing is very complicated. Product managers have to solve problems like whether or not to promote an item in a particular week, whether or not to promote two items together, and how to order upcoming discounts ― not to mention incorporating seasonality issues in their decision-making process.

There are plenty of people in the industry with years of experience who are good at this, but their brains are not computers. They can’t process the massive amounts of data available to determine optimal pricing. As a result, lots of money is left on the table.

To revolutionize the field of promotion pricing, my team of PhD students from the Operations Research Center at MIT, our collaborators from Oracle, and I sought to build a model based on several goals. It had to be simple and realistic. It had to be easy to estimate directly from the data, but also computationally easy and scalable. In addition, it had to lead to interesting and valuable results for retailers in practice.

Partnering with Oracle, we began by mining more than two years of sales and promotions data from several of Oracle’s clients. Using that data, our team developed various new demand models that captured price effects, promotion effects, and general consumer behavior. For example, when paper towels are promoted one week, people stockpile them; not surprisingly, the effect of a paper-towel promotion the following week is smaller. Our model took that behavior into account. Furthermore, we developed an optimization model that quickly determines the promotion schedule for every item.
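The stockpiling dynamic described above can be sketched with a toy demand model. All numbers here are hypothetical placeholders for illustration; the actual demand models are estimated from the Oracle sales data:

```python
# Hypothetical weekly demand with a promotion lift that is dampened in the
# week after a promotion, because consumers stockpiled the product.
BASE = 100      # baseline weekly units (made-up figure)
LIFT = 1.8      # demand multiplier in a promotion week (made-up figure)
POST_DIP = 0.7  # demand multiplier the week after a promotion (made-up figure)

def total_units(promo_weeks, horizon=8):
    """Total units sold over the horizon for a given set of promotion weeks."""
    units = 0.0
    for week in range(horizon):
        demand = BASE
        if week in promo_weeks:
            demand *= LIFT
        if (week - 1) in promo_weeks:
            demand *= POST_DIP  # stockpiling suppresses next-week demand
        units += demand
    return units

# Back-to-back promotions cannibalize each other; spaced-out ones do not.
assert total_units({2, 3}) < total_units({2, 5})
```

Even this crude model shows why promotion timing is an optimization problem rather than a habit: the value of a promotion depends on what was promoted the week before, so the schedule must be chosen jointly across weeks (and, in the real models, across items as well).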

The first formulation modeled demand “exactly,” but that model proved extremely difficult to solve in practice. As a result, we created a simpler version that captures more than 90 percent of the exact model’s accuracy and can solve practical problems. This simpler version can run on accessible software such as Excel and provides answers in milliseconds. It allows product managers to test various what-if scenarios quickly and easily – and remain the final decision-makers on promotional pricing.

As for how it works in practice, the simple model is highly effective. When we compared it with what is currently implemented, we found improvements of 3-10 percent in profits. With typical retail store margins close to 1.9 percent, promotions can contribute a significant portion of stores’ profits. For instance, a 5 percent increase can mean $5 million for a retailer with annual profits of $100 million.

So far, together with our Oracle collaborators, we have received very positive feedback on this model and have filed patents for this work. The model has a strong mathematical foundation and can be used by any retailer in any industry. It could be a game changer for retailers, as they seek to optimize promotion pricing.


August 8, 2016 | More

Rod Copes, President, North America For Royal Enfield Motorcycles

Rod Copes graduated from LGO in 1993, and recently discussed his career at Harley-Davidson and Royal Enfield with Forbes Magazine.

August 1, 2016 | More



Economist Bengt Holmström’s Nobel Prize win delights MIT Sloan colleagues

Faculty members cite his transformative work on contract theory. MIT Sloan Professor Bengt Holmström has won the 2016 Nobel Memorial Prize in Economic Sciences. Calling his work “pathbreaking,” Holmström’s colleagues are lauding the decision by the Royal Swedish Academy of Sciences.

Holmström, the Paul A. Samuelson Professor of Economics, was named winner of the award Oct. 10, along with Oliver Hart, a professor and economist at Harvard University.

“MIT has a long and rich tradition of having distinguished economists both on the faculty and amongst its graduates. Nothing is more fitting than having Bengt receive the Nobel Prize, won earlier by Paul Samuelson, who built the MIT economics department to its current greatness and whose professorship Bengt now holds,” said MIT Sloan finance professor Stephen Ross, a longtime colleague and friend of Holmström’s.

The Swedish Academy cited Holmström and Hart for “their contributions to contract theory.” In the late 1970s, Holmström, who has held a joint appointment with MIT Sloan and the MIT Department of Economics since 1994, demonstrated how contracts should be designed to weigh risks and incentives, as well as how they should link pay to performance.

“Professor Holmström’s work holds real insights for business leaders everywhere, in every industry,” said David Schmittlein, MIT Sloan dean. “He continues a long tradition of inventive economic thinking at MIT, and his ideas remain central to how we think about contracts and employee incentives. I know the faculty at MIT Sloan joins me in congratulating Professor Holmström and Professor Hart on this signature achievement.”

Groundbreaking work
“Contract theory is a pathbreaking approach to a whole host of fields in economics and in law. Almost all human interactions in economics can be characterized as contractual—either implicit or explicit,” Ross said. “Bengt and Oliver built on this intuitive insight and developed a set of models and results that allow us to clearly understand their economics from the perspective of the incentives of agents and the information they possess.” The theories developed by Holmström and Hart have provided a framework that enables an understanding of a broad swath of problems, from labor market interactions to the relationship between owners and enterprises to the role of control rights, Ross said.

Holmström’s work has “provided economists with a new framework to think about the role that incentives play in shaping the behavior of employees and how they interact within complex economic organizations,” said MIT Sloan finance professor Antoinette Schoar. “The modern economy is dominated by ever larger and more decentralized firms, where incentive contracts are key in ensuring the successful coordination of the workforce.”

Finance professor David Thesmar, who joined the MIT Sloan faculty from HEC Paris this year, has studied Holmström’s research for years, and said, “I have read, taught, and done research on many of Bengt’s papers, as he is a founding father of contract theory. His insights on relative performance evaluation, the notion of liquidity, and career concerns are essential to our understanding of organizations and finance.”

MIT Sloan professor of applied economics Robert Gibbons, who has worked closely with Holmström for years, said his work is “vitally important” for both economics and management. “I think of contract theory as analyzing how to govern collaboration—how to use all the available instruments to help a fixed set of people collaborate as well as possible. Applications range from small work groups and partnerships, to tussles between divisions of corporations, to joint ventures and other interactions between firms, to public-private partnerships between firms and governments, to interactions between countries or other autonomous entities,” Gibbons said. “I’m delighted that economics has added these issues to its agenda, to complement the analysis of markets. Bengt’s decades of award-winning work played a key role in getting this started.”

Applying contract theory to management practice
MIT Sloan Professor Scott Stern said Holmström’s “clear and deep understanding of how to bring economic theory to bear on the core challenges facing managers has been an inspiration and a guide.”

“Consider the challenge of motivating ‘innovation,’” Stern said. “How can you tell when your worker is someone [who] is working hard at innovation? This is needless to say not an easy problem. Bengt provided the foundational stepping stones that inform our teaching and research in this area. In a series of classic papers, Bengt highlighted how difficult it is to provide incentives that actually encourage the behavior that managers are hoping to encourage. For example, if you are constantly monitoring the progress of innovators, you are likely to end up with a lot of short-term deliverables but few home runs.”

“Bengt’s pioneering work pervades our teaching and our research,” Stern said.

A focus on research
Although the accolades and the Nobel Prize are gratifying, Holmström said in a phone interview Monday, he is looking forward to getting back to his research.

“I don’t have any intention to use this as a platform for throwing myself into a public debate. There are various styles of reacting to [winning] the Nobel Prize. Some people become experts on everything when they get the Prize, and others continue to be themselves, so to speak. I think I will be more of the latter,” Holmström said.

Ross said his friend’s attitude doesn’t surprise him at all. “Given his talents, Bengt is an uncommonly modest man,” he said.

MIT Sloan Professor Robert C. Merton, who won the Nobel Memorial Prize in Economic Sciences along with Myron Scholes in 1997, said he was “extraordinarily excited” Holmström won the Prize.

When asked if it will change Holmström’s life, Merton said, “yes and no.”

“He will forever have access [to meet more people], but I suspect he’ll go right back to doing his research. He’s known [around the world] already, so this will just add to it,” Merton said.

Read more about Holmström’s win here.

October 21, 2016 | More


For many, path to startup success runs through MIT Sloan’s New Enterprises class

Hundreds of companies, such as HubSpot and Lark, have gotten their start in MIT Sloan’s New Enterprises class, a well-known course that began in 1961, said Bill Aulet, managing director of the Martin Trust Center for MIT Entrepreneurship.

The MIT Sloan class, which is open to all students, as well as cross-registrants from Harvard University and Wellesley College, gives students who want to start their own companies the chance to make detailed plans for a startup. It also attracts students who want to know more about how entrepreneurship works.

The elective class, offered each semester, has an average of 250 students enrolled each year. Aulet, who is one of the instructors, said it has been a springboard to successful startups because it is practical and based on “a structured process built on the great academic environment here at MIT.” Students from Harvard and the other schools at MIT contribute to the class’s success, said Aulet, who is teaching the class this semester with professor Christian Catalini.

Three recent MIT Sloan startups all cited New Enterprises as a key component to their early success.

Aulet shared his thoughts on each company:

Ecovent: This MBA-founded startup offers a room-by-room temperature control system operated from a phone. Co-founders Dipul Patel, MBA ’14, CEO, and Yoel Kelman, MBA ’14, COO, met in the New Enterprises class.

“A fantastic team with a skill set very appropriate for an exploding market,” Aulet said.

Read: “Comfort Zone.”

Prepify: Founded by Rena Pacheco-Theard, MBA ’16, and Alexandria Miskho, SB ’17, who also met in the New Enterprises class, Prepify is a web application that connects low-income students to top colleges.

“This team is supremely motivated to solve this major problem in society using business techniques and incentives,” said Aulet.

Read “Startup offers free college prep services.”

Maskerade: A 12-seat virtual reality theater in Walla Walla, Washington, Maskerade opened as an experimental pop-up last summer.

Riley Clubb, MBA ’17, created Maskerade in New Enterprises last year, where he also worked with fellow MBA students Jennifer Lee and Eileen Parra on developing a business plan for a virtual reality production and distribution company that would focus on performing arts and live theater.

Clubb returned to MIT Sloan this semester to work on his next endeavor—a mixed reality theater startup.

“This is a convergence of newly omnipresent available technology combined with the founding team’s passion to create a new opportunity for artists and artistic content that might otherwise die out,” Aulet said.

Read “Lights, camera, action at pop-up virtual reality theater.”

October 20, 2016 | More

Can conversation supplant bureaucracy?

In 2012, Catherine Turco became increasingly intrigued by what she saw as a corporate preoccupation with openness. The open communication, decision processes, innovation and offices in Web 2.0 companies like Facebook and Google were on their way to becoming corporate memes.

October 19, 2016 | More

Bloomberg Video

Video: Erik Brynjolfsson, “I don’t see US economy overheating”

Erik Brynjolfsson, a professor of economics at MIT, discusses the state of the US economy and the future of labor with Bloomberg’s Matt Miller, Scarlet Fu and Joe Weisenthal on “What’d You Miss?”

October 18, 2016 | More

Data for Good Video

Video: Using data for good

Alex “Sandy” Pentland, director of the Internet Trust Consortium and Toshiba Professor of Media Art and Sciences at the MIT Media Lab, gives the keynote speech on how data can be used for public good.

October 18, 2016 | More

MIT Sloan Research Video

Video: The value of negative citations

Understanding which papers attract critical citations, and what effect they have, gives an insight into how science progresses, says Christian Catalini. Science advances through researchers sharing their work for others to extend or improve. As Isaac Newton once said, he could see further by “standing on the shoulders of giants”.

But what happens when those shoulders aren’t as sturdy as we thought? Sometimes, citations are negative, pointing out a study’s flaws or even disproving its findings. What role, relevance and impact do these negative citations have on a field as a whole?

There has been little research in this area, because of the difficulty in identifying and classifying such citations. Thanks to advancements in the ability of computers to understand human language, known as natural-language processing, and in the ability to sort and analyse large bodies of text, this is changing. We can now identify such citations and reconstruct the context in which they were made to understand the author’s intentions better. Using such techniques, my colleagues and I have found evidence to suggest that negative citations play an important role in the advancement of science.


We hypothesised that negative citations help science progress through their role in limiting and correcting previous results. To test this idea, we looked at 762,355 citations from 15,731 articles in The Journal of Immunology. Using a combination of natural-language processing and experts in the field, we identified 18,304 negative citations, or about 2.4 per cent of the total. We also found that about 7 per cent—not a trivial proportion—of these papers received at least one negative citation.
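As a quick sanity check, the proportions quoted above follow directly from the raw counts reported in the study:

```python
# Figures reported for The Journal of Immunology corpus.
total_citations = 762_355
negative_citations = 18_304

share = negative_citations / total_citations
print(f"{share:.1%} of citations are negative")  # about 2.4 per cent
```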

Several features of these negative citations support our hypothesis. A paper is most likely to receive a negative citation in the first few years after publication, probably because the science is newest and least tested then, and so attracts the most attention and scrutiny.

We also found that negatively cited studies were of higher quality and prominence, as captured by the overall number of citations received, a widely used proxy for scientific impact. This might be because scientists pay more attention to recent, high-impact studies, and so are also more likely to offer criticisms, extensions, and qualifications of those results.

Read the full post at Research.

Christian Catalini is the Fred Kayne (1960) Career Development Professor of Entrepreneurship and Assistant Professor of Technological Innovation, Entrepreneurship, and Strategic Management at the MIT Sloan School of Management.

October 17, 2016 | More

Sloan Startup MINT

Bringing the latest tech to the last mile of agriculture

SourceTrace was founded in 2006 by Sandeep Chatterjee and Stephen Sellers. After the founders moved on, Venkat Maroju, an MBA graduate of the MIT Sloan School of Management, took over as CEO in 2012. The company, which had been providing remote transaction solutions for financial services, agriculture, healthcare, and micro-insurance organizations, shifted its focus to agriculture and its allied sectors.

October 16, 2016 | More


Facing down employment discrimination with an algorithm

Getting a good job, getting promoted, getting a raise. These things aren’t easy. In many cases, it may be even harder if you’re not a straight, white, highly-educated man. Vivienne Ming knows this. And at Gild she sought to correct the problem with the help of a massive data set and a proprietary algorithm.

Ming, who spoke with MIT Sloan students Oct. 13 as part of the school’s Innovative Leadership Series, is today the co-founder of Socos, where she has created a machine learning-driven app to help improve a child’s life outcomes.

At Gild from 2012 to 2014, she worked to remove bias from the hiring process. Her example for the audience: Jade Dominguez, who taught himself to program Ruby and began seeking a job in Silicon Valley.

“They give you five seconds,” Ming said of recruiters and hiring managers. “Your name, your school, and your last job. His last name was Dominguez. Let’s be blunt. That is not the right name to get a job in the United States. School: nothing. Last jobs: nothing. Into the shredder.”

Yet Gild’s algorithm—the company builds profiles of candidates that pull from up to 55,000 data points—determined Dominguez was the “second best Ruby developer in Los Angeles,” Ming said. The company needed a developer. It hired Dominguez.

Gild’s data-driven talent acquisition model, Ming said, seeks to answer the question most recruiters ask: “Are you the right person for this particular job?” But it attempts to do that in a way that both strips bias from the hiring process and makes use of more, better predictive information. Is a candidate creative, adaptive, a problem-solver, and good with personal relationships? What sort of work have they done?

Even in a business environment that claims to promote meritocracy, bias remains rampant. According to research by MIT Sloan professor Emilio Castilla, believing you have a culture of meritocracy can make matters even worse.

A tax on being different
In a 2014 experiment, José Zamora changed the name on his résumé to Joe. The offers started rolling in.

To Ming, Zamora’s experience revealed what she called a “tax on being different.” Work she did with Gild’s data showed that someone named José would need a Master’s degree or higher to qualify for the same Silicon Valley software engineering jobs that someone named Joe would qualify for with no education. That sort of discrimination has lasting effects throughout a person’s lifetime, as “discrimination comes with compound interest,” Ming said.

The result of such discrimination is reflected in lifetime earnings. The effective “tax” on gay men in the United Kingdom is about £54,000, Ming said. For women in technology jobs in the United States, it’s $100,000 to $300,000. For men named José: $750,000.

How do people pay that “tax?” To reach equal heights, people in disadvantaged groups must attend more prestigious schools, achieve higher degrees, and have more exceptional experience and a more eye-catching list of previous employers.

“Working extra hard to grow is great,” Ming said. “Working extra hard just to prove who you are? No. That’s crap.”

A life of substance
Ming left Gild in 2014. She holds nine different positions today, according to her LinkedIn profile. Her career is not defined by a single job title or even a single industry. Theoretical neuroscientist. Entrepreneur. Hacker. Executive. Advisor. Data scientist. They all apply.

For Ming, though, it’s not about titles or even about what exactly you do. It’s about why you do it. Her work is aimed at “maximizing human potential” in “tangible, measurable ways.” She spoke, as well, of her father’s charge to “live a life of substance” and of the need for creative and career rebirth.

As parting advice, Ming encouraged students to see themselves as living and dying many times over, each time starting their “next life,” the next phase of their career, one for which they will develop new skills for a new industry in which they will possibly start a new company.

“This is me right now. I’m kind of done with startups. I’m writing six books. Never written one before. I’ve been asked to run for office. All these sorts of things,” Ming said. “So I’m trying to figure out what actually is next. And that’s great. In fact, I would say, pick yourself at your peak. Die. Take it to the next level. Poetry and hoverbikes and physics and philosophy. These are your lifetimes and I beg everyone in this audience to use them.”

October 13, 2016 | More


Financing around a health care bottleneck

A new MIT Sloan course will apply alternative funding techniques to new drug development. What good is discovering a potential cure for cancer if there’s no money to make it happen? Scientists understand more about diseases now than ever before, says MIT Sloan professor Andrew W. Lo, but investment in drug development has stalled at a critical moment. Lo, the director of the MIT Laboratory for Financial Engineering, believes a little creativity will go a long way when it comes to delivering new treatments and cures to patients.

Starting next semester, students across campus can enroll in a new course, Healthcare Finance, to team up and try to solve the problem. We spoke with Lo to ask him why now is the perfect time, and why MIT Sloan is the perfect place, to bust up the clog.

Why is the drug development pipeline experiencing a bottleneck?
Well, one reason it’s happening is increasing risk. The fact that we’ve gotten smarter about the underlying science of various diseases like cancer has ironically meant that the risk of drug discovery and development has actually increased. Usually, when we get smarter about something, the risks go down—that’s certainly true in finance. When you learn more about a company, the risk to you as an investor should decrease. But the more we know about human biology, the more we realize there are so many ways diseases can emerge—and so many options for dealing with them—it becomes harder to pin down how biomedical investments are going to perform.

So the biopharma industry is challenged right now in terms of funding. There’s a so-called “valley of death” at the early stages of drug discovery, between the preclinical and phase I/II stages. A lot of great new science is being done, but we don’t have enough funding to bring these ideas from the laboratory into the clinic. We have to be more creative about how we finance biomedical innovation, otherwise we aren’t going to be able to maintain the current pace of innovation.

Cancer is a good example. We now know it’s not one disease but more like 200 different diseases. There are all sorts of different ways of treating cancer: starving a tumor by restricting the growth of its blood vessels, using the body’s immune system to fight cancer cells, radiation, surgery, and so on. The more possibilities we have, the more difficult it is to predict how an investment in a particular approach is going to turn out.

We seem to be at an inflection point in biomedicine right now. We’ve had an incredible series of breakthroughs over the last 10 years, and in certain fields, just in the last couple of years. For example, scientists and clinicians have developed what looks like a cure for melanoma, an immunotherapy called pembrolizumab that exploits a cancer patient’s own immune system to fight tumors. In August 2015 former president Jimmy Carter was diagnosed with stage IV melanoma, with tumors in his liver and brain, and by January 2016, President Carter reported he was tumor-free. In March 2016, his doctors discontinued all treatments and declared that he was cured. These are the kinds of breakthroughs that are happening every day, yet funding for early stage biomedical R&D is harder and harder to come by. Finance seems to be at the center of these challenges.

Is there capital out there?
Yes. There’s a tremendous amount of capital in the global investment community just waiting to be deployed. World population has grown, savings have increased, and all that money needs to be earning a reasonable rate of return. Sovereign wealth funds, pension funds, banks, and insurance companies all have huge pools of capital that aren’t generating very attractive returns in the current low-yield environment. Recent stock market performance has been relatively disappointing, and most experts feel that bond markets aren’t the answer either.

Health care offers a wonderful set of opportunities for investors, but they need to understand the risks and rewards. That’s why it’s critical to be able to combine the expertise of financial analysts with the scientific expertise of drug developers. We have to collaborate to make the ecosystem work better.

How can finance help?
Financial engineers know how to manage risk by combining many investments into a single portfolio to take “multiple shots on goal.” With drug development, only one or two successes will more than pay for all the rest of the trials.

Imagine financing a large portfolio of drug development projects using debt. We could issue “cancer bonds” to fund these projects, just like we issued war bonds during World War II to finance the war effort. There are a lot more bond holders than equity holders, and the amount of capital invested in bond markets is one or two orders of magnitude larger than in biotech venture capital.
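The “multiple shots on goal” logic is just the arithmetic of independent trials: the chance that at least one project in a large portfolio succeeds rises sharply with portfolio size. A minimal sketch, with a hypothetical 5 percent per-project success rate and a hypothetical 100-project portfolio:

```python
from math import comb

def p_at_least_k_successes(n: int, p: float, k: int = 1) -> float:
    """P(at least k of n independent projects succeed),
    via the binomial distribution."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k, n + 1))

# One project: a 5% chance of paying off.
single = p_at_least_k_successes(1, 0.05)
# A 100-project portfolio: near-certainty of at least one success,
# which is what would back the bond holders' coupons.
portfolio = p_at_least_k_successes(100, 0.05)
```

Under these assumed numbers the portfolio’s probability of at least one success exceeds 99 percent, which is why pooling many risky projects can support debt financing that no single project could.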

Derivative securities like credit default swaps can also play a valuable role in helping the biopharma industry manage risk. Imagine financial institutions issuing “FDA swaps” that pay investors a fixed sum if the FDA doesn’t approve a particular drug. Such contracts are essentially FDA approval insurance and could greatly reduce the risk of the drug development project, which could attract more capital to this industry.
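The payoff of the “FDA swap” described above is simple to state: the protection buyer pays a premium and receives a fixed notional only if the drug is not approved. A sketch with hypothetical premium and notional figures:

```python
def fda_swap_payoff(approved: bool, notional: float,
                    premium: float) -> float:
    """Net payoff to the protection buyer of an 'FDA swap':
    the notional is paid out only on non-approval."""
    payout = 0.0 if approved else notional
    return payout - premium

# Hypothetical contract: $10M notional, $500K premium.
rejected = fda_swap_payoff(approved=False,
                           notional=10_000_000, premium=500_000)
approved = fda_swap_payoff(approved=True,
                           notional=10_000_000, premium=500_000)
```

As the interview notes, such a contract works like approval insurance: the drug developer’s downside is capped, which lowers the project’s risk profile for other investors.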

But your new course, Healthcare Finance, isn’t just for students of finance, right?
Right. The course is for anybody who has an interest in health care, and in applying ideas in financial engineering to biomedical innovation. By bringing together budding entrepreneurs and biomedical experts in this course, we’re hoping to build bridges between finance and the life sciences.

How will you structure the course?
It will be a combination of lectures on health care finance by me and class discussions with practitioners involved in current applications. The lectures will cover conceptual material on various financing methods and the practitioners will provide industry-specific context for these methods. Then, students will work on projects in which they’ll have a chance to apply these ideas to their own challenges, which will be the ultimate test of whether or not finance can have an impact on biomedical innovation.

So the time is right. What about the place?
MIT is absolutely the perfect place to offer this course for several reasons. First, MIT was where financial engineering was born, thanks to [Nobel laureate] Bob Merton and the many other giants of modern finance who were and are faculty here. MIT and the Cambridge/Boston area are also home to many incredible breakthroughs pioneered by biologists, clinicians, biotech entrepreneurs and VCs, and big pharma—we’re at the epicenter of this incredible industry. We also have some of the world’s best hospitals within a five-mile radius of MIT. And finally, Boston happens to be one of the mutual fund capitals of the world. What could be more perfect than to combine finance and biomedicine here at MIT? I feel very privileged to be able to draw on all of these resources for my health care finance course and can’t wait to get started.

October 13, 2016 | More


How a flex-time program at MIT improved productivity, resilience, and trust

In today’s increasingly competitive hiring market, organizations need to think differently about how to attract new employees and retain existing ones. Unfortunately, many of the obvious solutions require a financial investment: increasing salaries, bonuses, medical benefits, or vacation days. And if your “competitive advantage” in hiring simply boils down to throwing money at the problem, your hires are quite possibly going to jump ship when a higher offer or benefits package is put in front of them.

So how can an organization increase its benefits without increasing its budget? Many startups will look to add “fun” into the mix — pool tables, nerf guns, pizza Fridays, and happy hours. But that won’t necessarily appeal to all types of employees, and it may not be a sustainable option. Here at the Executive Education program at MIT Sloan School of Management, we took a different approach: introducing flex time.


Working directly with our human resources department, we launched a remote work pilot for our team of 35 employees. The program has several key principles:

  • Everyone is encouraged to work remotely at least two to three days per week
  • Wednesdays are our “work in the office if you physically can” days
  • You don’t need to work a strict 9-to-5 schedule, but be mindful of regular business hours and don’t expect others to match your unique working hours
  • Don’t feel that you need to be connected 24/7

Read the full post at Harvard Business Review.

Peter Hirst leads the team of professionals who partner with clients and faculty at the MIT Sloan School of Management to develop, design, and deliver innovative executive education programs for individuals and companies.

October 13, 2016 | More



Automating big-data analysis

Last year, MIT researchers presented a system that automated a crucial step in big-data analysis: the selection of a “feature set,” or aspects of the data that are useful for making predictions. The researchers entered the system in several data science contests, where it outperformed most of the human competitors and took only hours instead of months to perform its analyses.

This week, in a pair of papers at the IEEE International Conference on Data Science and Advanced Analytics, the team described an approach to automating most of the rest of the process of big-data analysis — the preparation of the data for analysis and even the specification of problems that the analysis might be able to solve.

The researchers believe that, again, their systems could perform in days tasks that used to take data scientists months.

“The goal of all this is to present the interesting stuff to the data scientists so that they can more quickly address all these new data sets that are coming in,” says Max Kanter MEng ’15, who is first author on last year’s paper and one of this year’s papers. “[Data scientists want to know], ‘Why don’t you show me the top 10 things that I can do the best, and then I’ll dig down into those?’ So [these methods are] shrinking the time between getting a data set and actually producing value out of it.”

Both papers focus on time-varying data, which reflects observations made over time, and they assume that the goal of analysis is to produce a probabilistic model that will predict future events on the basis of current observations.

Real-world problems

The first paper describes a general framework for analyzing time-varying data. It splits the analytic process into three stages: labeling the data, or categorizing salient data points so they can be fed to a machine-learning system; segmenting the data, or determining which time sequences of data points are relevant to which problems; and “featurizing” the data, the step performed by the system the researchers presented last year.

The second paper describes a new language for describing data-analysis problems and a set of algorithms that automatically recombine data in different ways, to determine what types of prediction problems the data might be useful for solving.

According to Kalyan Veeramachaneni, a principal research scientist at MIT’s Laboratory for Information and Decision Systems and senior author on all three papers, the work grew out of his team’s experience with real data-analysis problems brought to it by industry researchers.

“Our experience was, when we got the data, the domain experts and data scientists sat around the table for a couple months to define a prediction problem,” he says. “The reason I think that people did that is they knew that the label-segment-featurize process takes six to eight months. So we better define a good prediction problem to even start that process.”

In 2015, after completing his master’s, Kanter joined Veeramachaneni’s group as a researcher. Then, in the fall of 2015, Kanter and Veeramachaneni founded a company called Feature Labs to commercialize their data-analysis technology. Kanter is now the company’s CEO, and after receiving his master’s in 2016, another master’s student in Veeramachaneni’s group, Benjamin Schreck, joined the company as chief data scientist.

Data preparation

Developed by Schreck and Veeramachaneni, the new language, dubbed Trane, should reduce the time it takes data scientists to define good prediction problems, from months to days. Kanter, Veeramachaneni, and another Feature Labs employee, Owen Gillespie, have also devised a method that should do the same for the label-segment-featurize (LSF) process.

To get a sense of what labeling and segmentation entails, suppose that a data scientist is presented with electroencephalogram (EEG) data for several patients with epilepsy and asked to identify patterns in the data that might signal the onset of seizures.

The first step is to identify the EEG spikes that indicate seizures. The next is to extract a segment of the EEG signal that precedes each seizure. For purposes of comparison, “normal” segments of the signal — segments of similar length but far removed from seizures — should also be extracted. The segments are then labeled as either preceding a seizure or not, information that a machine-learning algorithm can use to identify patterns that indicate seizure onset.
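A toy version of that label-and-segment step can make the idea concrete: find the spikes in a trace, cut out the window preceding each one, and also extract a control window far from any spike. The threshold, window length, and the tiny signal below are all hypothetical; real EEG pipelines are far more elaborate.

```python
def label_segments(signal, threshold=5.0, window=3):
    """Return (segment, label) pairs: windows preceding spikes are
    labeled 1 (pre-seizure); one far-from-spike window is labeled 0."""
    spikes = [i for i, v in enumerate(signal) if v > threshold]
    segments = []
    for i in spikes:
        if i >= window:
            segments.append((signal[i - window:i], 1))  # pre-seizure
    # One "normal" control segment, taken well away from any spike.
    if spikes and spikes[0] > 2 * window:
        segments.append((signal[:window], 0))
    return segments

eeg = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.4, 9.0, 0.2, 0.1]
pairs = label_segments(eeg)
```

The labeled pairs are exactly what a machine-learning algorithm would consume to look for patterns that precede seizure onset.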

In their LSF paper, Kanter, Veeramachaneni, and Gillespie define a general mathematical framework for describing such labeling and segmentation problems. Rather than EEG readings, for instance, the data might be the purchases by customers of a particular company, and the problem might be to determine from a customer’s buying history whether he or she is likely to buy a new product.

In that case, the pertinent data for predictive purposes may be not a customer’s behavior over some time span, but information about his or her three most recent purchases, whenever they occurred. The framework is flexible enough to accommodate such different specifications. But once those specifications are made, the researchers’ algorithm performs the corresponding segmentation and labeling automatically.

Finding problems

With Trane, time-series data is represented in tables, where the columns contain measurements and the times at which they were made. Schreck and Veeramachaneni defined a small set of operations that can be performed on either columns or rows. A row operation is something like determining whether a measurement in one row is greater than some threshold number, or raising it to a particular power. A column operation is something like taking the differences between successive measurements in a column, or summing all the measurements, or taking just the first or last one.

Fed a table of data, Trane exhaustively iterates through combinations of such operations, enumerating a huge number of potential questions that can be asked of the data — whether, for instance, the differences between measurements in successive rows ever exceed a particular value, or whether there are any rows for which it is true that the square of the data equals a particular number.
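The enumeration idea can be sketched with a tiny hypothetical subset of such a language: a couple of row operations, a few column operations, and an exhaustive pass over their combinations. (These operation names and the mini-language itself are illustrative, not Trane’s actual definitions.)

```python
from itertools import product

# Hypothetical row operations: applied to each measurement.
ROW_OPS = {
    "identity": lambda x: x,
    "square": lambda x: x * x,
}
# Hypothetical column operations: applied to the transformed column.
COL_OPS = {
    "diff": lambda col: [b - a for a, b in zip(col, col[1:])],
    "sum": lambda col: [sum(col)],
    "last": lambda col: [col[-1]],
}

def enumerate_questions(column):
    """Apply every (row op, column op) pair to a time-series column,
    yielding a catalog of candidate quantities to predict."""
    results = {}
    for (rname, rop), (cname, cop) in product(ROW_OPS.items(),
                                              COL_OPS.items()):
        results[f"{cname}({rname}(x))"] = cop([rop(v) for v in column])
    return results

qs = enumerate_questions([1.0, 2.0, 4.0])
```

Even this two-by-three toy set generates six derived quantities from one column; with Trane’s six row and 11 column operations composed up to five deep, the space of candidate questions becomes enormous, which is why it could reproduce every question the researchers had posed and hundreds more.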

To test Trane’s utility, the researchers considered a suite of questions that data scientists had posed about roughly 60 real data sets. They limited the number of sequential operations that Trane could perform on the data to five, and those operations were drawn from a set of only six row operations and 11 column operations. Remarkably, that comparatively limited set was enough to reproduce every question that researchers had in fact posed — in addition to hundreds of others that they hadn’t.

“Probably the biggest thing here is that it’s a big step toward enabling us to represent prediction problems in a standard way so that you could share that with other analysts in an abstraction from the problem specifics,” says Kiri Wagstaff, a senior researcher in artificial intelligence and machine learning at NASA’s Jet Propulsion Laboratory. “What I would hope is that this could lead to improved collaboration between whatever domain experts you’re working with and the data analysts. Because now the domain experts, if they could learn and would be willing to use this language, could specify their problems in a much more precise way than they’re currently able to do.”

October 21, 2016 | More


MRIs for fetal health

Researchers from MIT, Boston Children’s Hospital, and Massachusetts General Hospital have joined forces in an ambitious new project to use magnetic resonance imaging (MRI) to evaluate the health of fetuses.

Typically, fetal development is monitored with ultrasound imaging, which is cheap and portable and can gauge blood flow through the placenta, the organ in the uterus that delivers nutrients to the fetus. But MRI could potentially measure the concentration of different chemicals in the placenta and in fetal organs, which may have more diagnostic value.

Earlier this year, in a project led by Ellen Grant’s group in the Fetal-Neonatal Neuroimaging and Developmental Science Center at Boston Children’s Hospital (BCH), members of the research team presented a paper showing that MRI measurements of oxygen absorption rates in the placenta can indicate placental disorders that might endanger the fetus. Grant is a professor of pediatrics and radiology at the Harvard Medical School (HMS).

And at the International Conference on Medical Image Computing and Computer Assisted Intervention this week, a team led by Polina Golland’s group at MIT’s Computer Science and Artificial Intelligence Laboratory presented a paper demonstrating a new algorithm for tracking organs through sequences of MRI scans, which will make MRI monitoring much more useful.

Much of Golland’s prior work has dealt with algorithmic analysis of MRI scans of the brain. “The question is, why can’t you just use everything that we’ve done in the last 25 years in the brain to apply in this case?” says Golland, a professor of electrical engineering and computer science. “And the answer is that for the brain, when the person is performing a particular task in the scanner, they’re lying still. And then after the fact, you can use algorithmic approaches to correct for very small motions. Inside the uterus, well, you can’t tell the mother not to breathe. And you can’t tell the baby not to kick.”

Frame by frame

“When you’re trying to understand whether it’s a healthy intrauterine environment, you look at fetal growth by doing measurements with ultrasound, and you look at the velocities of waveforms in the umbilical arteries,” says Grant. “But neither of those are direct measures of placental function. They’re downstream effects. Our goal is to come up with methods for assessing the spatiotemporal function of the placenta directly. If you really want to intervene, you want to intervene before the placenta fails.”

Grant leads the clinical arm of the project together with Lawrence Wald, a physicist at Massachusetts General Hospital and a professor of radiology at HMS. Elfar Adalsteinsson, an MIT professor of electrical engineering and computer science, is collaborating with colleagues at BCH to develop new MRI technologies for fetal imaging, and Golland’s group is in charge of developing software for interpreting the images.

An MRI image might consist of hundreds of two-dimensional cross sections of an anatomical region, stitched into a three-dimensional whole. Measuring chemical changes over time requires analyzing sequences of such three-dimensional representations — about 300, in the case of the new paper. The researchers refer to each MRI image in a series as a “frame,” analogous to frames of video.

The first step in localizing chemical changes to particular organs, of course, is identifying the organs. That’s where the researchers’ new algorithm comes in.

With MRI images of brain activity, it’s comparatively easy to determine which anatomical features in one frame correspond to which features in the next. The subject’s head is immobilized, and brain regions don’t change shape or location over the course of a scan. Algorithmically, the standard method for coordinating frames is to identify a region in the first frame and then map it separately onto each of the frames that follow.

With fetal MRIs, that won’t work, because the fetus may have moved dramatically between, say, frame one and frame 200. So Golland and her co-authors — including first author Ruizhi Liao, an MIT graduate student in electrical engineering and computer science; Grant; and Adalsteinsson — took a different approach.

On a roll

Their algorithm begins by finding a mathematical function that maps the pixels of the first frame onto those of the second; then it maps the mathematically transformed version of the first frame onto the third, and so on. The end result is a series of mathematical operations that describes the evolution of the scan as a whole. “The way to think about how this algorithm works is, it takes the baseline frame — for example, the first one — and it rolls it down the sequence,” Golland says.

Next, a human expert draws very precise boundaries around the elements of interest in the first frame — in this case, not just the placenta but the brain and liver as well. Those elements’ movements or deformations from frame to frame can then be calculated using the previously determined mathematical operations.
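The “rolling” idea above can be illustrated in one dimension: estimate a mapping between each pair of consecutive frames, compose the mappings down the sequence, and use them to carry the expert’s first-frame labels forward. The brightest-pixel alignment below is a crude hypothetical stand-in for registration; the real algorithm estimates dense 3-D deformations between MRI frames.

```python
def estimate_shift(a, b):
    """Crude 1-D registration stand-in: align the brightest pixel
    of frame b with that of frame a."""
    return b.index(max(b)) - a.index(max(a))

def roll_forward(frames, start_positions):
    """Compose frame-to-frame shifts so that points labeled in the
    first frame can be tracked through the whole sequence."""
    positions = list(start_positions)
    for a, b in zip(frames, frames[1:]):
        delta = estimate_shift(a, b)
        positions = [p + delta for p in positions]
    return positions

# Three toy "frames": a landmark drifting rightward.
frames = [
    [0.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
]
tracked = roll_forward(frames, [2])  # landmark starts at index 2
```

Because each frame is only ever compared with its immediate neighbor, the approach tolerates large cumulative motion between the first and last frames, which is exactly the problem a kicking fetus creates.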

Hand-drawing organ boundaries — or “segmenting” an MRI scan — is a time-consuming process. But performing it only once is much less onerous than performing it 300 times.

In order to evaluate the accuracy of their algorithm, the researchers hand-segmented an additional five frames. “Two members of the team sat there for about a week and drew outlines,” Golland says. “It’s a very painful validation process, but you have to do it to believe the results.” The algorithm’s segmentations accorded very well with those performed by hand.

“One of the big problems in high-speed acquisition and MR [magnetic resonance] acquisition is definitely the incorporation of motion and trying to deal with motion issues,” says Sarang Joshi, a professor of bioengineering at the University of Utah. “Modeling and incorporating the deformation estimation in MR acquisition is a big challenge, and we have been working on it as well, and many other people have been working on it. So this is a really great step forward.”

October 21, 2016 | More


Mapping serotonin dynamics in the living brain

Serotonin is a neurotransmitter that’s partly responsible for feelings of happiness and for mood regulation in humans. This makes it a common target for antidepressants, which block serotonin from being reabsorbed by neurons after it has dispatched its signal, so more of it stays floating around the brain.

Now MIT researchers have developed an imaging technique that, for the first time, enables three-dimensional mapping of serotonin as it’s reabsorbed into neurons, across multiple regions of the living brain. This technique, the researchers say, gives an unprecedented view of serotonin dynamics, and could be a powerful tool for the research and development of antidepressants.

“Until now, it was not possible to look at how neurotransmitters are transported into cells across large regions of the brain,” says Aviad Hai, a postdoc in the Department of Biological Engineering and first author of a paper describing the technique in today’s issue of Neuron. “It’s the first time you can see the inhibitors of serotonin reuptake, like antidepressants, working in different parts of the brain, and you can use this information to analyze all sorts of antidepressant drugs, discover new ones, and see how those drugs affect the serotonin system across the brain.”

The paper’s other authors are Alan Jasanoff, a professor of biological engineering; and three other researchers in Jasanoff’s lab: Lili X. Cai, Taekwan Lee, and Victor S. Lelyveld.

Measuring reuptake

Many antidepressants that target serotonin work by blocking serotonin transporters that reabsorb the neurotransmitter into a neuron, so it can be reused after it has sent a chemical signal. Aptly called “selective serotonin reuptake inhibitors” (SSRIs), these drugs increase levels of serotonin in the brain, alleviating feelings of anxiety and depression caused by low levels of the neurotransmitter.

Researchers most commonly study the effect of antidepressants using a technique known as microdialysis, in which they insert a probe into the brain to take tiny chemical samples from the tissue. But this method is time-consuming and limited in scope, as it allows them to study only a single location at a time.

For the new imaging technique, the researchers engineered a protein to act as a sensor that latches onto serotonin and detaches at the moment of reuptake. The sensor is injected, along with serotonin, and emits a signal that can be read by functional magnetic resonance imaging (fMRI). The trick is that the sensor remains off — emitting a low signal — when bound to serotonin, and turns on — creating a much brighter signal — when serotonin is removed.

In the new system, a mathematical model uses the fMRI signal data to construct a 3-D map that consists of more than 1,000 voxels (pixels in three dimensions), with each voxel representing a single point of measurement of serotonin reuptake. Based on the signal strength at each point, the model calculates the amount of serotonin that gets absorbed, in the presence and absence of SSRIs.
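A heavily simplified version of that per-voxel readout: since the sensor’s signal is low while bound to serotonin and high once serotonin is removed, the fraction reabsorbed in a voxel can be estimated by interpolating between the two calibration levels. All numbers below are hypothetical, and the paper’s actual model is considerably more elaborate.

```python
def reuptake_fraction(signal, s_bound=1.0, s_free=3.0):
    """Linear estimate of the fraction of serotonin removed in a
    voxel, from the fMRI signal and two hypothetical calibration
    levels (sensor fully bound vs. fully free)."""
    frac = (signal - s_bound) / (s_free - s_bound)
    return min(1.0, max(0.0, frac))  # clamp to a valid fraction

# Three voxels of the 3-D map, at different signal strengths.
voxel_signals = [1.0, 2.0, 3.0]
fractions = [reuptake_fraction(s) for s in voxel_signals]
```

Repeating such an estimate across more than 1,000 voxels, with and without an SSRI present, is what turns the raw signal into a 3-D map of drug action.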

“Basically, what we’ve seen in this work is a method for measuring how much of a neurotransmitter is being [absorbed], and how that amount, or rate, is affected by different drugs … in a highly parallel fashion across much of the brain,” Jasanoff says. That information could be very valuable for testing drug efficacy, he says.

Mapping antidepressant dynamics

To validate the sensor, the researchers successfully measured the expected effect of the SSRI fluoxetine, commonly called Prozac, on serotonin transporters in six subregions of a brain area known as the basal ganglia. These subregions are thought to play a role in motivation, reward, cognition, learning, emotion, and other functions and behaviors.

In doing so, the researchers simultaneously recorded a strong decrease in serotonin reuptake in response to Prozac in three of the subregions, along with a very weak response in one other region. These results were largely anticipated, Jasanoff says. “But now we’re able to map that effect in three dimensions, across brain regions,” he says, which could lead to advances in studying the effects of drugs on specific parts of the brain.

But the researchers did uncover a surprising finding. While mapping the effects of a dopamine reuptake inhibitor, a drug made to target only dopamine, they found that it reduced serotonin reuptake, to an extent comparable to that of SSRIs, in three subregions, one of which is known for high dopamine transporter expression. Previous studies had indicated that dopamine transporters can also take up serotonin at low levels, but the new findings show the effect is widespread in the living brain, Jasanoff says.

This experiment provides further proof of a strong interplay between the serotonin and dopamine systems, and indicates that antidepressants may be less effective when targeting just one of the two neurotransmitters, Hai says. “It may not be sufficient to just block serotonin reuptake, because there’s another system — dopamine — that plays a role in serotonin transport as well,” he says. “It’s almost proof that when you use antidepressants that … target both systems, it could be more effective.”

Next steps for the researchers are to explore different regions of the brain with this sensor, including the dorsal raphe, which produces most of the brain’s serotonin. They are also developing another nanoparticle-based sensor that is more sensitive than the one used in this study.

“The ability to simultaneously measure serotonin clearance across a broad cross-section of brain regions is an important, and complementary, addition to the existing tool belt of approaches used to measure serotonin release and clearance kinetics in vivo,” says Lynette Daws, a professor of physiology at the University of Texas Health Science Center. “The application of fMRI to measure these processes affords temporal and spatial resolution not previously possible, and opens new vistas for analyzing drug action on serotonin neurochemistry in living animals.”

The research was funded by the National Institutes of Health, as well as fellowships from the Edmond and Lily Safra Center for Brain Sciences at the Hebrew University of Jerusalem, and the European Molecular Biology Organization.

October 20, 2016 | More


MIT Campaign for a Better World ends FY16 with $2.9 billion

In May, MIT President L. Rafael Reif announced the public launch of the MIT Campaign for a Better World, an ambitious fundraising effort that aims to advance the Institute’s ability to address some of the world’s greatest challenges. The Campaign seeks to raise $5 billion and began its public phase with $2.6 billion already raised toward its goal. Thanks to the enthusiasm of MIT alumni and friends, the Institute had a strong finish to fiscal year 2016, closing with $529 million in new gifts and pledges and $2.9 billion raised in the Campaign.

“We have made great progress toward our very ambitious goal,” says Julie Lucas, MIT’s vice president for resource development. “We benefit from the strong support of alumni and friends around the world who believe in MIT’s mission and ability to have a positive impact on the world. We are very appreciative of that support. It will take the entire MIT community working together to realize our financial goal, and to make progress in our efforts to help create a better world.”

Through the MIT Campaign for a Better World, the Institute is focusing its strengths in education, research, and innovation on six priority areas:

  • Discovery Science: Transforming our world through fundamental scientific research
  • Health of the Planet: Addressing critical environmental and sustainability challenges facing humankind
  • Human Health: Defining the future of health through advances from bench to bedside across a broad range of disciplines
  • Innovation and Entrepreneurship: Accelerating the path from idea to impact
  • Teaching, Learning, and Living: Reimagining education for the 21st-century learner
  • The MIT Core: Attracting extraordinary students and faculty and providing them with the resources they need to thrive

Alumni and friends have responded enthusiastically to each priority area, making contributions to support work in areas ranging from human health to innovation and entrepreneurship. The Institute is already seeing the impact of some early Campaign gifts — including, to name just a few, the launch of the Abdul Latif Jameel World Water and Food Security Lab and a gift from the Leventhal family in support of the Center for Advanced Urbanism. Plans are also under way for the creation of a new makerspace made possible by the Victor and William Fung Foundation. Across campus, new teaching, learning, and research spaces are providing MIT’s outstanding faculty and students with the facilities they need to collaborate and flourish. Funds for student scholarships and faculty research continue to ensure that the heart of the Institute — its exceptionally talented community — remains strong.

MIT will continue to build on the momentum of this past spring by engaging alumni from around the world through a series of Campaign roadshow events. The roadshow kicks off this fall in New York, and then moves to San Francisco. Events are scheduled through spring 2017. In this first year of the roadshow, alumni in Hong Kong, London, Tel Aviv, Los Angeles, Mexico City, and Washington will be invited to hear President Reif’s vision for building a better world. Additional roadshow cities will be announced.

“The Campaign presents inspiring new opportunities to leverage the MIT community’s excellence in research, education, and innovation to do good in the world,” Reif says. “Wherever I travel, I am finding that our aspirations for the Campaign are resonating with both our alumni and others who want to make a difference. A Campaign to address complex global challenges is a perfect extension of MIT’s mission. By advancing the Campaign’s priorities, we also advance the values that matter so deeply to us.”

For more information about the MIT Campaign for a Better World, visit the Campaign website and follow #MITBetterWorld.

October 20, 2016 | More


Gregory Stephanopoulos receives Samson Prize for Innovation in Alternative Fuels

Gregory Stephanopoulos, the Willard Henry Dow Professor of Chemical Engineering and Biotechnology at MIT, has been selected to receive the Eric and Sheila Samson Prime Minister’s Prize for Innovation in Alternative Fuels for Transportation.

Awarded by the prime minister of the state of Israel and totaling $1 million, the Samson Prize is the world’s largest monetary prize awarded in the field of alternative fuels. Stephanopoulos shares the honor with Mercouri G. Kanatzidis of Northwestern University. The two researchers are being honored for “their innovative scientific and technological contributions that have the potential to lead to the development of alternative fuels for transportation, replacing the fast depleting fossil fuels which are the major fuels used nowadays in transportation.”

Stephanopoulos was recognized “for his pioneering work in the field of metabolic engineering which contributed in a major way to the progress in the engineering of microbes for biofuels production.” The prize citation reads:

“Prof. Gregory Stephanopoulos is a pioneer in the field of metabolic engineering and made seminal contributions to the engineering of microbes for biofuels production. He authored the first report on the targeting and engineering of mitochondria as a favorable component for production of biofuels and introduced the concept of global Transcriptional Machinery Engineering (gTME) for improving multigene microbial phenotypes. Of specific relevance are his achievements on xylose isomerase overexpression along with the engineering of the pentose phosphate pathway that enables rapid xylose utilization and ethanol production by Saccharomyces cerevisiae (a species of yeast). He has also developed several strategies for the conversion of natural gas (methane) to liquid fuel with much higher energy density.”

Stephanopoulos’s current research focuses on metabolic engineering and its applications to the production of fuels, biochemicals and specialty chemicals, as well as mammalian cell physiology as it pertains to diabetes and metabolism. He has co-authored or edited five books — including co-authoring the first textbook on metabolic engineering — and some 300 papers, and he holds 25 patents. Stephanopoulos is presently the editor-in-chief of the journal Metabolic Engineering; he also serves on the editorial boards of seven scientific journals. He has been recognized with many awards and honors, including the Founders Award from the American Institute of Chemical Engineers (AIChE), the M.J. Johnson Award of the American Chemical Society, the Merck Award in Metabolic Engineering, and election to the National Academy of Engineering. In 2014, he was the recipient of the Walker Award from AIChE and currently serves as the organization’s president.

Stephanopoulos received his BS from the National Technical University of Athens, in Greece; his MS from the University of Florida; and his PhD from the University of Minnesota. He is presently directing a research group of approximately 25 researchers who work on applications of metabolic engineering for the production of fuels and chemicals.

Kanatzidis and Stephanopoulos were selected by a committee of international experts, who submitted their recommendation to a board of trustees headed by former Technion president Professor Yitzhak Apeloig. This is the fourth time the Samson Prize has been awarded by the prime minister’s office — the Ministry of Science, Technology and Space — and Keren Hayesod, the official fundraising organization for Israel.

The prize ceremony will take place during the Fuel Choices Conference in Tel Aviv on Nov. 2.

October 19, 2016 | More


How to achieve “green” desalination

In one of the most remarkable turnarounds ever achieved in the face of a natural resource crisis, Israel has overcome a looming fresh water shortage in less than a decade. The country now has such a large water surplus that it can sell significant amounts to its parched neighboring countries. The reversal was made possible by the construction of the world’s largest desalination plants, which convert seawater from the Mediterranean into potable water for both domestic use and agriculture.

But while that new glut of water can provide a valuable example for nations and regions around the world that are facing water shortages, it also has an environmental price: Desalination plants are intensive users of energy, the production of which typically requires burning fossil fuels in large power plants.

To address that issue and work toward a roadmap for future research and demonstrations, some of the world’s leading specialists in the technology, economics, and regulatory issues surrounding desalination gathered at MIT this week. They discussed how to get the salt out of seawater or brackish aquifers at all scales, from small, local installations to the kinds of megaprojects that transformed Israel’s situation, while minimizing or eliminating the associated greenhouse gas emissions.

The two-day workshop, organized by MIT’s Abdul Latif Jameel World Water and Food Security Lab (J-WAFS) and its director, John H. Lienhard V, brought together experts from 11 nations to discuss the issues and frame a report to be delivered next month at the 22nd session of the Conference of the Parties to the United Nations Framework Convention on Climate Change, or COP22, in Marrakesh, Morocco. The aim is to map out the areas where research, development, and demonstration projects are most needed and could yield the greatest benefits.

“What you are doing is so crucially important,” said Maria Zuber, MIT’s vice president for research, to the participants at the conclusion of the workshop. She pointed out that while the world population is “going up, up, up, the amount of fresh water is basically a fixed asset.” And yet, there is “an incredible resource in the ocean, all the water you could want, yet it’s not suitable for human needs.” That’s why it is so essential, she said, to find a way to provide “access to clean water that doesn’t impact the environment in a negative way with its carbon footprint.”

“We need breakthroughs on this,” Zuber said, “and thanks to the efforts of all of you, I think we’re going to have it.”

Many potential solutions to that problem — as well as the challenges that need further research — were discussed at length by the participants. Coupling desalination facilities with carbon-free or low-carbon power sources such as solar, wind, or nuclear power plants could make it possible to gain the benefits of clean water without the climate impact. But some of these renewable energy sources do not deliver power continuously, and some types of desalination technology encounter difficulties when their operation is not constant.

For example, variations in the operation of the plants can lead to increased fouling of the membranes that separate the salt from the water. Wind and solar installations produce variable power, so to avoid the ramping up and down of the desalination plants, these power sources might need to be coupled with storage systems, raising the cost. And nuclear plants tend to be larger than needed for desalination, so such facilities might have to be coupled with power production for other uses.

Boris Liberman, vice president and chief technology officer of IDE Technologies, the Israel-based company responsible for the design and construction of that country’s giant new seawater desalination plants, including the largest such plant in the world, said that those plants have now demonstrated that with proper design and operation it is possible to operate efficiently even with power supplies that ebb and flow. The key, he said, is to maintain constant pressure inside the system while allowing the flow rate and freshwater output to rise and fall. The company’s largest plant, called Sorek, which produces 150 million cubic meters of water per year, “has worked for two years with no fouling,” he said.

Jacopo Buongiorno, MIT’s TEPCO Professor of Nuclear Science and Engineering, described a concept for floating offshore desalination plants, which could be coupled to floating offshore nuclear plants that he and his students have been designing. The paired facilities would benefit from serial production: many identical units could be assembled in shipyards and towed to their eventual point of use. This approach allows for controlled construction in facilities that could develop expertise in building those plants, rather than relying on local construction crews and materials at each end-use location.

Creating desalination plants on a floating platform, he said, would also eliminate many of the problems associated with the long and complex intake tubes that bring seawater to the plant and discharge brine back into the sea. These intake and outflow systems, in many locations, can now cost as much as the desalination plants themselves. “Offshore nuclear with water desalination offers a new and flexible deployment and operation paradigm for zero-carbon cogeneration of power and fresh water,” Buongiorno said.

But there are many other potential pairings of power sources with desalination facilities, many workshop participants said. For example, geothermal energy could potentially provide both electricity and heat — the two big requirements for desalination — and could be suitable for many different kinds of locations, since geothermal heat is available anywhere if drills can reach deep enough. Other possibilities include the use of wave or tidal power, or of advanced solar technologies such as thermal plants that store the sun’s heat in vats of molten salt. These plants can then be used to deliver that heat when it’s needed, even during the night, providing a way to get constant output from solar power.

Lienhard and others pointed out that desalination could, in fact, be thought of as a kind of storage technology in itself. That is, in situations where there is a mismatch between the times when renewable power sources are available and when the power is actually needed, the excess power could be used to make and store fresh water, which could then be delivered whenever needed without having to draw power for desalination during periods of peak loads on the grid. “Water is cheap to store,” he says, compared to electricity, which requires expensive battery or pumped-hydro storage systems.
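As a rough illustration of the storage idea Lienhard describes, consider how much water a surplus of renewable power could "bank." The energy intensity used here (about 3.5 kWh per cubic meter for seawater reverse osmosis) is an assumed typical figure, not one cited at the workshop.

```python
# Assumed energy intensity of seawater reverse osmosis (kWh per cubic
# meter); a commonly quoted ballpark, used here only for illustration.
SWRO_KWH_PER_M3 = 3.5

def water_from_surplus(surplus_mwh):
    """Cubic meters of desalinated water producible from surplus
    renewable energy, in megawatt-hours."""
    return surplus_mwh * 1000.0 / SWRO_KWH_PER_M3

# 10 MWh of otherwise-curtailed wind power, banked as fresh water:
m3 = water_from_surplus(10)
print(round(m3))  # 2857 cubic meters stored as water, not electricity
```

The same surplus stored in batteries would require costly electrochemical capacity; as Lienhard notes, a water tank is far cheaper.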

A final report from the workshop, based on the input from all of the participants, will be produced in the next few weeks, aiming to outline “what are the priorities for research funding, what are the barriers, and how to prioritize the work,” Lienhard said.

October 19, 2016 | More


With new model, buildings may “sense” internal damage

When a truck rumbles by a building, vibrations can travel up to the structure’s roof and down again, generating transient tremors through the intervening floors and beams.

Now researchers at MIT have developed a computational model that makes sense of such ambient vibrations, picking out key features in the noise that give indications of a building’s stability. The model may be used to monitor a building over time for signs of damage or mechanical stress. The team’s results are published online in the journal Mechanical Systems and Signal Processing.

“The broader implication is, after an event like an earthquake, we would see immediately the changes of these features, and if and where there is damage in the system,” says Oral Buyukozturk, a professor in MIT’s Department of Civil and Environmental Engineering (CEE). “This provides continuous monitoring and a database that would be like a health book for the building, as a function of time, much like a person’s changing blood pressure with age.”

Buyukozturk’s co-authors are Hao Sun, a CEE postdoc who was the paper’s lead author; Aurélien Mordret, a postdoc in the Department of Earth, Atmospheric and Planetary Sciences (EAPS); Germán Prieto, the Cecil and Ida Green Career Development Assistant Professor in EAPS; and M. Nafi Toksöz, an EAPS professor.

Taking vital signs

The team tested its computational model on MIT’s Green Building — a 21-story research building made completely from reinforced concrete. The building was designed in the 1960s by architect and MIT alum I.M. Pei ’40, and stands as the tallest structure in Cambridge, Massachusetts. In 2010, Toksöz and others at MIT worked with the United States Geological Survey to outfit the Green Building with 36 accelerometers that record vibrations and movements on selected floors, from the building’s foundation to its roof.

“These sensors represent an embedded nervous system,” Buyukozturk says. “The challenge is to extract vital signs from the sensors’ data and link them to health characteristics of a building, which has been a challenge in the engineering community.”

To do this, the team first built a computer simulation of the Green Building, in the form of a finite element model — a numerical simulation that represents a large physical structure, and all its underlying physics, as a collection of smaller, simpler subdivisions. In the case of the Green Building, the researchers built a high-fidelity finite element model, then plugged various parameters into the model, including the strength and density of concrete walls, slabs, beams, and stairs in each floor.

Once the model is built, researchers can introduce an excitation into the simulation, such as a truck-like vibration, and the model predicts how the building and its various elements should respond.
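A full finite element model is vastly more detailed, but the basic prediction step can be illustrated with a toy lumped-mass "shear building": each floor is a mass tied to its neighbors by a lateral stiffness, and the model's natural frequencies follow from an eigenvalue problem. Every number below is invented for illustration.

```python
import numpy as np

# Toy three-story shear building with unit floor masses and a uniform
# interstory stiffness (all values illustrative, not the Green Building).
k = 1000.0
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])    # stiffness matrix, fixed base

# With unit masses, natural frequencies come from the eigenvalues w^2 of K.
w2, modes = np.linalg.eigh(K)
freqs_hz = np.sqrt(w2) / (2.0 * np.pi)
print(np.round(freqs_hz, 2))              # modal frequencies, lowest first

# A uniform drop in stiffness (a crude stand-in for damage) lowers every
# modal frequency -- the kind of feature a monitoring system tracks.
w2_damaged, _ = np.linalg.eigh(0.8 * K)
print(bool(np.all(w2_damaged < w2)))      # True
```

In a real monitoring workflow, the measured frequencies would be compared against the model's predictions, and persistent shifts would flag possible damage.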

“But the model uses a lot of assumptions about the building’s material, its geometry, the thickness of its elements, et cetera, which may not correspond exactly to the structure,” Buyukozturk notes. “So we are updating the model with actual measurements to be able to give better information about what may have happened to the building.”

Mining for features

To more accurately predict a building’s response to ambient vibrations, the group mined data from the Green Building’s accelerometers, looking for key features that correspond directly to a building’s stiffness or other indicators of structural health. To do this efficiently, the team developed a new method based on seismic interferometry, which describes how a vibration’s pattern changes as it travels from the ground level to the roof.

“We look at the foundation level and see what motions a truck, for instance, caused there, and then how that vibration travels upward and horizontally, in speed and direction,” Buyukozturk explains.
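In spirit, the interferometric step estimates the building's impulse response by deconvolving the roof motion with the foundation motion, revealing how long the wave takes to travel up the structure. The sketch below uses a synthetic delay as a stand-in for a real building; the signals and the water-level regularization are assumptions, not the team's actual implementation.

```python
import numpy as np

def impulse_response(foundation, roof, eps=1e-3):
    """Estimate the transfer function between two sensors by spectral
    division, H(f) = Roof(f) / Foundation(f), stabilized with a small
    water-level term eps (a standard regularization trick)."""
    F = np.fft.rfft(foundation)
    R = np.fft.rfft(roof)
    H = R * np.conj(F) / (np.abs(F) ** 2 + eps)
    return np.fft.irfft(H, n=len(foundation))

# Synthetic test: the "roof" sees the foundation motion delayed by
# 5 samples, a stand-in for the wave's travel time up the building.
rng = np.random.default_rng(0)
ground = rng.standard_normal(256)        # ambient excitation at the base
roof = np.roll(ground, 5)                # delayed copy arriving at the roof

h = impulse_response(ground, roof)
print(int(np.argmax(h)))  # recovered travel time in samples: 5
```

A change in that travel time over weeks of monitoring would indicate a change in the structure's stiffness.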

The researchers incorporated this interferometric relationship into their model of the Green Building and ran the model multiple times, each time with a set of measurements taken by the accelerometers at a given point in time. In all, the group fed the model vibration measurements taken continuously over a two-week period in May 2015.

“We are continuously making our computational system more intelligent over time, with more data,” Buyukozturk says. “We’re confident if there is damage in the building, it will show up in our system.”

Intelligent buildings

So how has the Green Building fared since its construction more than 50 years ago?

“The building is safe, but it is subject to quite a bit of vibration, particularly in the upper floors,” Buyukozturk says. “The building, which is built on soft soil, is long in one direction and narrow in the other with stiff concrete walls on each end. Therefore, it manifests torsional movements and rocking, especially on windy days,” he says.

The team plans to verify its computational model with experiments in the lab. The researchers have constructed a 4-meter-tall replica of a building structure, which they will outfit with accelerometers. They will study the effects of ambient vibrations, as well as how the structure responds to hammer strikes and other seismic stimuli. The team is also erecting a large steel structure in Woburn, Massachusetts, about the size of a cellphone tower, and will carry out similar experiments that will ultimately help to refine the researchers’ computational model.

“I would envision that, in the future, such a monitoring system will be instrumented on all our buildings, city-wide,” says lead author Hao Sun. “Outfitted with sensors and central processing algorithms, those buildings will become intelligent, and will feel their own health in real time and possibly be resilient to extreme events.”

This research was funded, in part, by Royal Dutch Shell through the MIT Energy Initiative, and by the Kuwait-MIT Signature Project through the Kuwait Foundation for the Advancement of Sciences and the Kuwait-MIT Center for Natural Resources and the Environment.

October 19, 2016 | More


Algorithm connects students to the most interesting person they’ve never met

It started with a simple Google doc. In the spring of 2015, Mohammad Ghassemi and Tuka Al-Hanai, two graduate students in the MIT Department of Electrical Engineering and Computer Science (EECS), emailed their graduate community a sign-up sheet that read: “MIT Connect pairs members of the graduate student body for platonic, one-on-one lunches once a week over the course of the semester. This form contains a few questions to help match you with others that share your interests and schedule.”

One year, and more than 1,000 lunches later, the two students have launched the “beta edition” of MIT Connect. Grants from the Office of the Dean for Graduate Education (ODGE), the MindHandHeart Initiative, the DeFlorez Fund, and the Legatum Fellowship have all helped Ghassemi and Al-Hanai to expand the program: MIT Connect is now open to graduate students, undergraduates, and postdocs, as well as all MIT alumni and employees wanting to meet others around campus. To sign up, visit the website and enter your name and MIT ID. Users can then provide their schedule and what they like to eat, and the platform tells them whom to meet and when.

“We have been really surprised by the level of interest that Connect has created on campus,” Al-Hanai noted in an evaluation of the program. “We have over 500 students signed up; last fall it was featured in the campus paper; and we were even approached by an investor interested in expanding the service to the public.”

Both student founders specialize in artificial intelligence: Ghassemi in the context of health care, and Al-Hanai in the context of speech. Their project uses an artificial intelligence algorithm they call the Maven. “Thanks to incredible feedback from users, the algorithm does a pretty good job at matching people,” says Ghassemi. Meanwhile, Al-Hanai reports that 93 percent of participants surveyed rate the program four or above on a five-point scale, and that 52 percent made a lasting friend using the program.
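The Maven itself is not described in detail, but the core task, pairing people who share interests and an open lunch slot, can be sketched with a simple greedy matcher. The scoring rule, names, and data below are invented for illustration and are not the actual algorithm.

```python
from itertools import combinations

def pair_for_lunch(people):
    """Greedily pair users by shared interests, requiring an overlapping
    free day. `people` maps name -> (interests, free_days). A simplified
    stand-in for Connect's matching step, not the real Maven."""
    scored = []
    for a, b in combinations(people, 2):
        ints_a, days_a = people[a]
        ints_b, days_b = people[b]
        shared_days = days_a & days_b
        if shared_days:
            scored.append((len(ints_a & ints_b), a, b, min(shared_days)))
    scored.sort(reverse=True)  # best interest overlap first

    matched, pairs = set(), []
    for score, a, b, day in scored:
        if a not in matched and b not in matched:
            matched |= {a, b}
            pairs.append((a, b, day))
    return pairs

# Hypothetical users with interest sets and free lunch days:
people = {
    "ada":   ({"ml", "music"},    {"Mon", "Wed"}),
    "grace": ({"ml", "rockets"},  {"Wed"}),
    "alan":  ({"music", "chess"}, {"Mon", "Tue"}),
    "mary":  ({"chess", "ml"},    {"Tue"}),
}
result = pair_for_lunch(people)
print(result)
```

A production matcher would also need to avoid repeating past pairings from week to week, which is what makes the scheduling problem interesting over a semester.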

Interestingly, Connect was inadvertently inspired by a tragedy. In 2015, Ghassemi was grappling with two deaths that occurred back-to-back, in his hometown and at school. After confiding in his colleague, Al-Hanai, they started asking what personal challenges friends in their community had encountered. Ghassemi and Al-Hanai’s informal polling yielded an unexpected finding: “Time and again,” Ghassemi says, “students would explain how they wanted to find friends, mentors, or co-founders, but found mixers to be impersonal, and sometimes even awkward. One graduate student put it bluntly: ‘Nothing feels worse than going to an event alone and watching everyone else have fun.’”

After discussing with friends ways to reduce isolation and expand social networks on campus, an idea for a solution began to emerge. “One suggestion that came up was to meet with semi-random people during lunch,” says Al-Hanai. “If you were going to take time to eat lunch anyway, you might as well have lunch with someone new. An incredibly interesting person exists out there who might turn out to be your new favorite person.”

The concept moved quickly from idea to action to platform. “We quickly put together a very simple proof-of-concept and received a Graduate Student Life Grant (GSLG) from the Office of the Dean for Graduate Education (ODGE). Then the pieces started coming together — from a barebones Google doc to a full-fledged platform.”

For Al-Hanai, creating MIT Connect has been rewarding on many levels. “It’s wonderful to build a system from the ground up that serves the community. I’ve enjoyed learning new technologies, generating visualizations, raising support, and improving MIT Connect’s communications efforts. It’s broadened my sphere of interest. It’s a platform that I myself would use, and it’s fulfilling to hear that my peers are gaining meaningful experiences, connections, and friends through it.”

While MIT Connect has been focused on peer-to-peer pairing, the creators are currently working to launch a version of the platform to help students find mentors, employers, and entrepreneurial co-founders.

Some of the grant funding that supported the growth of MIT Connect is currently available to students, staff, and faculty at MIT: The MindHandHeart Initiative’s Innovation Fund is open for online applications through Oct. 31, and will open again in the spring.

October 17, 2016 | More


Karen Gleason: Inventing at the nanoscale and growing an innovation ecosystem

With innovation, there are many truisms; here are just two. Progress is often made at the nanoscale, and no one company has a monopoly on good ideas in this arena. Karen Gleason has experienced the former and works to ensure the latter. As a professor of chemical engineering, she’s designed an ultrathin coating process that has benefited rubber manufacturing and holds the potential to speed up computers and help address the global water crisis. As MIT’s associate provost, she manages space issues and industrial relationships, with an eye toward drawing even more companies into the already crowded innovation ecosystem surrounding the MIT campus. That density, she says, is key to continuing to produce technology that works not only in theory but, more importantly, answers real-world needs and problems.

When membranes last longer

Gleason’s academic focus is on modifying surfaces at the nanoscale and then translating those improvements to larger areas and higher-volume commercial uses. Among other things, she’s created a suite of chemical vapor deposition (CVD) processes, which represent, as she says, a platform technology. These microscopic layers have led her to co-found two companies. GVD Corporation started in 2001 and works with rubber manufacturing; applied to molds, GVD’s polymer release coating is durable for months and fully covers the tread features, allowing tires to be easily removed, she says. DropWise, established in 2014, utilizes a related technique on metal surfaces, particularly heat exchangers, improving their heat transfer and thermal performance.

The new CVD process is free of solvent and operates at low temperatures to produce organic materials with a broader range of potential uses. One area with particular potential, Gleason says, is water desalination. In the commonly used method of reverse osmosis, microbes and molecules accumulate on — and end up fouling — the membranes in this filtration process. Her antibiofouling coating wouldn’t negatively affect the speed at which water would be desalinated, but would allow the membranes to last longer and need less maintenance, thus lowering operating costs, she says.

Reverse osmosis, though, isn’t suited to every water purification need. It’s not economical in remote areas, where a big plant is not commercially viable. In these cases, Gleason says that her CVD technology can be applied inside microfluidic devices, which use an electrical current to separate salt ions and are more portable and less expensive since they handle smaller quantities of water. Reverse osmosis is also difficult to use in places where seawater has a high saline percentage, because the required pressure across the membrane would be too large. Instead, multi-stage flash distillation is an alternative: the process essentially boils water in a series of stages without needing membranes, and treating the inside of the distillation unit would slow the formation of salt deposits and lower the frequency of cleaning, she says.

Regardless of the technique, “the idea is to have the process run more reliably and without intervention,” says Gleason, adding that the result would be more drinkable and usable water in more locations around the world for a lower cost. The technology has been lab-tested on 4-by-6-inch samples. Scaling up isn’t a concern, she says, as she’s done it effectively with CVD reactors before. What is needed is to partner with companies that can test particular surface chemistries for reliability and durability.

Outside of water purification applications, Gleason says that CVD polymers could benefit any company making semiconductors. That industry continually shrinks device dimensions in order to improve performance. With devices like laptops and iPads, companies would like to bring the feature size down below 22 nanometers. For perspective, the typical diameter of a human hair is 100 microns, which equates to 100,000 nanometers. Gleason says that CVD polymer technology can hit 7-nanometer feature sizes, adding that her method allows for superior process control and higher-purity films, resulting in a higher, more consistent yield of successful devices. The next step is to partner with a vendor in order to vet the technology in an industrial setting and make commercial tools and processes available to chip manufacturers.
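To make the scale comparison concrete, using only the figures quoted in the article:

```python
# Scale comparison from the article's own numbers: a 100-micron human
# hair versus 22 nm and 7 nm semiconductor feature sizes.
hair_nm = 100 * 1000          # 100 microns = 100,000 nanometers
print(hair_nm // 22)          # a hair's width spans ~4,545 22-nm features
print(hair_nm // 7)           # ...and ~14,285 7-nm features
```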

Growing the field

As MIT’s associate provost for over two years, Gleason says a main focus has been to continue a decades-long effort to enhance the already well-recognized innovation ecosystem in and around MIT. At its root is an interdisciplinary approach that has always been part of the campus. The Institute’s original buildings, she says, were intentionally interconnected when they were designed 100 years ago, so that people would be near each other, inevitably have chance discussions, and blur the boundaries between departments.

That collaborative attitude has continued, but with another component: an expectation that innovation doesn’t stop at the boundaries of campus. “It’s not enough to have an original idea,” Gleason says. “You have to make that idea work for the world.” Not every university embraces this approach; there’s a common concern that pursuing an industrial end use will limit intellectual discovery. Gleason has found the opposite to be true. “By trying to take your idea out, you ask a different set of questions and come back richer for the experience,” she says.

With GVD, for example, she learned that “markets are harder than molecules.” Testing conditions were easy to control in the lab, but talking to companies about their challenges forced her team to address less predictable evaluation parameters in order to produce a product responsive to the market.

There’s always been a draw to the Kendall Square area that surrounds the eastern side of the MIT campus. Companies have access to faculty, students, postdocs, and former CEOs looking for their next endeavor. Tech, biotech, and biopharma companies are heavily represented. Gleason says that last year the Commonwealth of Massachusetts had more biotech initial public offerings than Silicon Valley. The Cambridge Innovation Center, started in 1999, alone houses over 800 companies, mostly startups. MIT’s Kendall Square Initiative will add six buildings, equaling 1.6 million new square feet for academic and commercial uses, over the next four to 10 years. Additionally, in 2018, a new 200,000-square-foot laboratory will open in the center of campus. Known as MIT.nano, it will be able to support 2,000 researchers annually and house shared cleanrooms and state-of-the-art imaging facilities, along with spaces for education, collaboration, and the production of prototypes, all with the intent of advancing the frontiers of technology at the nanoscale.

Gleason says that she would actually like to see more companies adjacent to campus, representing a broader range of industries, including robotics, lighting, and digital health care. This density would make it possible to partner more rapidly and to successfully navigate the shared hurdles that all companies, especially new ones, face. “As Lita Nelsen, the former head of MIT’s Technology Licensing Office, often quipped, ‘Tech transfer is a contact sport.’ You need other players on the field,” Gleason says. “The richer the environment, the better it is for everybody.”

October 17, 2016 | More


Stretchy optical fibers for implanting in the body

Researchers from MIT and Harvard Medical School have developed a biocompatible and highly stretchable optical fiber made from hydrogel — an elastic, rubbery material composed mostly of water. The fiber, which is as bendable as a rope of licorice, may one day be implanted in the body to deliver therapeutic pulses of light or light up at the first sign of disease.

The researchers say the fiber may serve as a long-lasting implant that would bend and twist with the body without breaking down. The team has published its results online in the journal Advanced Materials.

Using light to activate cells, and particularly neurons in the brain, is the focus of a highly active field known as optogenetics, in which researchers typically deliver short pulses of light from an LED source to targeted tissues through needle-like fibers.

“But the brain is like a bowl of Jell-O, whereas these fibers are like glass — very rigid, which can possibly damage brain tissues,” says Xuanhe Zhao, the Robert N. Noyce Career Development Associate Professor in MIT’s Department of Mechanical Engineering. “If these fibers could match the flexibility and softness of the brain, they could provide more effective long-term stimulation and therapy.”

Getting to the core of it

Zhao’s group at MIT, including graduate students Xinyue Liu and Hyunwoo Yuk, specializes in tuning the mechanical properties of hydrogels. The researchers have devised multiple recipes for making tough yet pliable hydrogels out of various biopolymers. The team has also come up with ways to bond hydrogels with various surfaces such as metallic sensors and LEDs, to create stretchable electronics.

The researchers only thought to explore hydrogel’s use in optical fibers after conversations with the bio-optics group at Harvard Medical School, led by Associate Professor Seok-Hyun (Andy) Yun. Yun’s group had previously fabricated an optical fiber from hydrogel material that successfully transmitted light through the fiber. However, the material broke apart when bent or slightly stretched. Zhao’s hydrogels, in contrast, could stretch and bend like taffy. The two groups joined efforts and looked for ways to incorporate Zhao’s hydrogel into Yun’s optical fiber design.

Yun’s design consists of a core material encased in an outer cladding. To transmit the maximum amount of light through the core of the fiber, the core and the cladding should be made of materials with very different refractive indices, or degrees to which they can bend light.

“If these two things are too similar, whatever light source flows through the fiber will just fade away,” Yuk explains. “In optical fibers, people want to have a much higher refractive index in the core, versus cladding, so that when light goes through the core, it bounces off the interface of the cladding and stays within the core.”
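The intuition in that quote can be made concrete with Snell’s law. The sketch below uses hypothetical refractive indices for a water-rich hydrogel core and a lower-index cladding (the study’s actual values are not given here) to compute the critical angle for total internal reflection and the fiber’s numerical aperture:

```python
import math

# Hypothetical refractive indices, for illustration only:
# a water-rich hydrogel core and a lower-index polymer cladding.
n_core = 1.41
n_clad = 1.34

# Light hitting the core-cladding interface closer to the normal than the
# critical angle escapes into the cladding; light at shallower angles is
# totally internally reflected and stays trapped in the core.
critical_angle = math.degrees(math.asin(n_clad / n_core))

# Numerical aperture: how wide a cone of incoming light the fiber accepts.
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)

print(f"critical angle: {critical_angle:.1f} degrees")
print(f"numerical aperture: {numerical_aperture:.2f}")
```

The larger the index contrast between core and cladding, the larger the numerical aperture, and the more of the launched light stays confined to the core rather than fading away.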

Happily, they found that Zhao’s hydrogel material was highly transparent and possessed a refractive index that was ideal as a core material. But when they tried to coat the hydrogel with a cladding polymer solution, the two materials tended to peel apart when the fiber was stretched or bent.

To bond the two materials together, the researchers added conjugation chemicals to the cladding solution, which, when coated over the hydrogel core, generated chemical links between the outer surfaces of both materials.

“It clicks together the carboxyl groups in the cladding, and the amine groups in the core material, like molecular-level glue,” Yuk says.

Sensing strain

The researchers tested the optical fibers’ ability to propagate light by shining a laser through fibers of various lengths. Each fiber transmitted light without significant attenuation, or fading. They also found that fibers could be stretched over seven times their original length without breaking.

Having developed a highly flexible, robust, and biocompatible optical fiber, the researchers began exploring its optical properties, to see if they could design a fiber that could sense when and where it was being stretched.

They first loaded a fiber with red, green, and blue organic dyes, placed at specific spots along the fiber’s length. Next, they shone a laser through the fiber and stretched, for instance, the red region. They measured the spectrum of light that made it all the way through the fiber, and noted the intensity of the red light. They reasoned that this intensity relates directly to the amount of light absorbed by the red dye, as a result of that region being stretched.

In other words, by measuring the amount of light at the far end of the fiber, the researchers can quantitatively determine where and by how much a fiber was stretched.

“When you stretch a certain portion of the fiber, the dimensions of that part of the fiber change, along with the amount of light that region absorbs and scatters, so in this way, the fiber can serve as a sensor of strain,” Liu explains.
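One way to picture this readout is a Beer-Lambert-style attenuation model. The sketch below is purely illustrative: the baseline absorbances and the linear strain sensitivity are invented numbers, not measurements from the study. It shows how the strain applied to one dyed region could be recovered from the transmitted fraction of that region’s color:

```python
import math

# Each dyed region absorbs its own color; stretching a region increases how
# much of that color it absorbs. Coefficients are invented for illustration.
BASE_ABSORBANCE = {"red": 0.10, "green": 0.10, "blue": 0.10}
STRAIN_SENSITIVITY = 0.05  # extra absorbance per unit strain (assumed linear)

def transmitted_fraction(color, strain):
    """Fraction of light of `color` surviving its dyed region at a given strain."""
    absorbance = BASE_ABSORBANCE[color] + STRAIN_SENSITIVITY * strain
    return 10 ** (-absorbance)  # Beer-Lambert: transmittance = 10^(-A)

def infer_strain(color, measured_fraction):
    """Invert the model: recover strain from the measured transmitted fraction."""
    absorbance = -math.log10(measured_fraction)
    return (absorbance - BASE_ABSORBANCE[color]) / STRAIN_SENSITIVITY

# Stretch only the red region by 50%, then read the strain back off the
# measured red intensity at the far end of the fiber.
frac = transmitted_fraction("red", 0.5)
print(round(infer_strain("red", frac), 2))  # recovers 0.5
```

Because each color is attenuated only by its own dyed region, measuring the whole transmitted spectrum lets a single fiber report strain at several locations at once, which is the “multistrain sensor” idea described below.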

“This is like a multistrain sensor through a single fiber,” Yuk adds. “So it can be an implantable or wearable strain gauge.”

The researchers imagine that such stretchable, strain-sensing optical fibers could be implanted or fitted along the length of a patient’s arm or leg, to monitor for signs of improving mobility.

Zhao envisions the fibers may also serve as sensors, lighting up in response to signs of disease.

“We may be able to use optical fibers for long-term diagnostics, to optically monitor tumors or inflammation,” he says. “The applications can be impactful.”

“Hydrogel fibers are very interesting and provide a compelling direction for embedding light within the human body,” says Fiorenzo Omenetto, a professor of biological engineering at Tufts University, who was not involved in the work. “These efforts in optimizing and managing the physical and mechanical properties of fibers are necessary and important next steps that will enable practical applications of medical relevance.”

This research was supported, in part, by the National Institutes of Health and the Department of Defense.

October 17, 2016 | More