MIT Algorithm Identifies Influencing Variables in Complex Systems—Access It Here

Untangling which variables drive which is no small feat, especially in complex systems where many variables interact. MIT engineers are on it. They developed an algorithm that identifies the variables likely to influence other variables in a complex system, and they published their results in Nature Communications.

The algorithm takes in data collected over time and measures the interactions between every variable in a system, estimating the degree to which a change in one variable can predict the state of another. From those measurements, it generates a “causality map” linking variables likely to have a cause-and-effect relationship. The algorithm determines whether the link between two variables is unique, synergistic, or redundant, and it can also estimate the “causality leak,” the degree to which the system’s behavior cannot be explained through the variables available; if the leak is large, some unknown influence is at play and more variables must be considered.
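The paper’s own decomposition is more involved, but the general pipeline, scoring how well each variable’s past predicts every other variable’s future and arranging the scores into a map, can be sketched in a few lines. The snippet below is a toy stand-in, not the authors’ SURD code; the synthetic variables, the one-step lag, and the use of time-lagged mutual information as the influence score are all assumptions made for illustration.

```python
# Toy sketch: build a pairwise "influence map" from multivariate time-series data.
# This is NOT the authors' SURD algorithm; it only illustrates the general idea of
# scoring how well one variable's past predicts another variable's future.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)

# Synthetic system: x drives y with a one-step delay; z is independent noise.
n = 2000
x = rng.normal(size=n)
y = np.roll(x, 1) + 0.1 * rng.normal(size=n)   # y(t) is roughly x(t-1)
z = rng.normal(size=n)
data = np.column_stack([x, y, z])
names = ["x", "y", "z"]
lag = 1

# Score each (source -> target) pair by the mutual information between the
# source at time t and the target at time t + lag.
n_vars = data.shape[1]
influence = np.zeros((n_vars, n_vars))
for i in range(n_vars):          # source variable
    for j in range(n_vars):      # target variable
        past = data[:-lag, i].reshape(-1, 1)
        future = data[lag:, j]
        influence[i, j] = mutual_info_regression(past, future, random_state=0)[0]

print("influence map (rows = source, cols = target):", names)
print(np.round(influence, 2))
# The x -> y entry should stand out, matching how the synthetic data was built.
```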

“The significance of our method lies in its versatility across disciplines,” says Álvaro Martínez-Sánchez, a graduate student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “It can be applied to better understand the evolution of species in an ecosystem, the communication of neurons in the brain, and the interplay of climatological variables between regions, to name a few examples.”

Traditional methods don’t distinguish a “unique” causality, in which one variable has an effect on another apart from every other variable, from a “synergistic” or a “redundant” link. An example of a synergistic causality would be one variable (say, the action of drug A) having no effect on another variable (a person’s blood pressure) unless it is paired with a second variable (drug B). A redundant causality, in comparison, would be one variable (a student’s work habits) affecting another variable (their chance of getting good grades), while a second variable (the amount of sleep the student gets) has the same impact on that outcome.
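To see why the distinction matters, consider two tiny synthetic cases: an XOR-style target, where neither driver alone is informative but the pair is (synergy), and a duplicated driver, where a second variable adds nothing the first did not already supply (redundancy). This is only an illustration of the concepts, not the SURD decomposition itself; the mutual-information helper and the binary variables are assumptions chosen for clarity.

```python
# Toy illustration of synergistic vs. redundant influence (not the SURD method itself):
# compare the mutual information of each driver alone with that of both drivers together.
import numpy as np

rng = np.random.default_rng(1)

def mi_discrete(a, b):
    """Mutual information (in bits) between two discrete 1-D integer arrays."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for ai, bi in zip(a, b):
        joint[ai, bi] += 1
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / np.outer(pa, pb)[nz])).sum())

n = 10_000
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)

# Synergistic case: the target is the XOR of a and b, so neither driver alone
# says anything about it, but the pair determines it completely.
t_syn = a ^ b
print("synergy:", mi_discrete(a, t_syn), mi_discrete(b, t_syn),
      mi_discrete(a * 2 + b, t_syn))   # ~0, ~0, ~1 bit

# Redundant case: two drivers carry the same information about the target, so
# adding the second one tells us nothing new.
c = a.copy()                           # c duplicates a
t_red = a
print("redundancy:", mi_discrete(a, t_red), mi_discrete(c, t_red),
      mi_discrete(a * 2 + c, t_red))   # ~1 bit each, and still ~1 bit jointly
```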

The MIT engineers took a page from information theory — the science of how messages are communicated through a network, based on a theory formulated by the late MIT professor emeritus Claude Shannon. The algorithm evaluates multiple variables simultaneously and defines information as the likelihood that a change in one variable will also see a change in another. This likelihood can strengthen or weaken as the algorithm evaluates more system data over time.
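For reference, the quantities in play are the standard Shannon ones: the entropy of a variable and the mutual information shared between two variables. (The exact decomposition SURD performs on top of these is spelled out in the paper.)

```latex
H(X) = -\sum_{x} p(x)\,\log_2 p(x),
\qquad
I(X;Y) = \sum_{x,y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)}
```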

The method generates a map of causality showing which variables in the network are strongly linked, and researchers can then distinguish which variables have a unique, synergistic, or redundant relationship. The algorithm can also estimate the amount of “causality leak” in the system, meaning the degree to which a system’s behavior cannot be predicted based on available information.
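One way to picture the causality leak is as the share of a target’s uncertainty that the observed variables simply cannot account for. The sketch below is a crude stand-in, not the paper’s estimator: it builds a target driven partly by a hidden variable and checks how much of the target’s entropy the observed driver leaves unexplained.

```python
# Crude stand-in for a "causality leak" estimate (not the paper's definition):
# the fraction of the target's uncertainty that the observed drivers fail to explain.
import numpy as np

rng = np.random.default_rng(2)

def entropy(labels):
    """Shannon entropy (bits) of a discrete 1-D array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

n = 10_000
observed = rng.integers(0, 2, n)     # a driver we can measure
hidden = rng.integers(0, 2, n)       # a driver we cannot measure
target = observed ^ hidden           # the target depends on both

# Conditional entropy H(target | observed) = H(target, observed) - H(observed).
joint = observed * 2 + target
h_cond = entropy(joint) - entropy(observed)
leak = h_cond / entropy(target)      # 1.0 = nothing explained, 0.0 = fully explained
print(f"estimated leak: {leak:.2f}")  # close to 1.0 here: the hidden driver does the work
```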

The method, which the team has named SURD, for Synergistic-Unique-Redundant Decomposition of causality, is available online for others to test on their own systems.
