Collaborative biophysics research

There’s some exciting collaboration across disciplinary silos at Penn. This article in Omnia, “The physics of us”, is a lightweight overview, but it provides names and research topics worth following up on. Here are a few excerpts:

Mathijssen studied particle physics in college but began working on biophysics while earning his doctorate, “because life itself fascinates me.” He and his lab move back and forth between microorganisms and microrobotics, collaborating with researchers in Penn Engineering and the Singh Center for Nanotechnology. They’ll sometimes build a device based on a biological finding, and sometimes try to build a model to help figure out a biological function.

“It’s entirely possible to come up with a thought on a Monday, and then by Friday you may already have tested something in the lab,” he says. “This close connection of going back and forth all the time and testing your idea and then maybe changing your theory again, that dialogue is something that really attracted me.”

The vasculature in our bodies is not entirely genetically directed, [Katifori] says, but instead self-organizes according to a set of equations she’s trying to discover. “We are asking, is this consistent with optimizing efficiency? Is this consistent with optimizing robustness? And if it’s not, what constraints [got] in the way?”

Along with faster computers and machine learning, new technological tools for biophysics include ways to record and analyze neurons, and to genetically modify them to fire under laser light or to glow when they fire. DNA and nanostructures can be manipulated. Cameras can take hundreds of images per second, and new ways of delivering light to microscopes let researchers study living systems.

Collaboration has its challenges: researchers from different areas need to learn each other’s lingo. “You have to pick these things up by working with somebody, or reading enough papers,” says Balasubramanian. “And you learn how to talk to each other.”

Balasubramanian has long theorized about William of Occam’s “razor,” which states that causes should not be multiplied beyond necessity. Scientists now use it as a rule of thumb that says when you don’t have much data, a simple explanation is better than a more complicated one. He’s working on a paper with former postdoc Philip Fleig, now at a Max Planck institute in Germany, in which they argue that systems making decisions with limited information—for example, bacterial colonies deciding which phenotype to express—do better by implicitly using simpler models of the world.

Beyond bacteria, Balasubramanian says that data are showing that both humans and machine learning systems working with limited information implicitly implement a tendency toward simplicity. “Probability theory says to prefer simple explanations when you don’t have much data. And these two learning systems, the human brain and deep learning networks, seem to learn such biases against complexity without being told that they should. I think these common features of learning in animals and machines are very exciting.”
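To make the “probability theory says to prefer simple explanations” point concrete, here’s a minimal sketch of the Bayesian version of the razor. This is my own toy example, not from the article and not Balasubramanian and Fleig’s actual model: it compares a zero-parameter “fair coin” model against a one-parameter “biased coin” model (with a uniform prior on the bias) by marginal likelihood. With only a few flips, the simpler model wins even when the data lean toward bias; with more data, the flexible model takes over.

```python
import math

def log_evidence_fair(n, k):
    # Model A: the coin is exactly fair, no free parameters.
    # P(D|A) = 0.5**n, independent of the head count k.
    return n * math.log(0.5)

def log_evidence_biased(n, k):
    # Model B: unknown bias p with a uniform prior on [0, 1].
    # P(D|B) = integral of p^k (1-p)^(n-k) dp = Beta(k+1, n-k+1),
    # computed here in log space via lgamma.
    return (math.lgamma(k + 1) + math.lgamma(n - k + 1)
            - math.lgamma(n + 2))

# Same 3:1 ratio of heads to tails at two sample sizes.
for n, k in [(4, 3), (40, 30)]:
    la, lb = log_evidence_fair(n, k), log_evidence_biased(n, k)
    winner = "fair (simpler)" if la > lb else "biased (more complex)"
    print(f"n={n}, heads={k}: log P(D|fair)={la:.2f}, "
          f"log P(D|biased)={lb:.2f} -> prefer {winner}")
```

Running this shows the simpler model preferred at n=4 but rejected at n=40, even though the observed head fraction is the same. The complex model pays an automatic “Occam penalty” because its prior spreads probability over many possible datasets, which is one standard way of reading why limited-information systems do better with simpler models.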
