Tuesday, December 8, 2015

CONCERNING CERN: ARTIFICIAL INTELLIGENCE TO UNPLUG DATA FLOOD AT CERN? ...

This is one of those articles that so many of you shared that I simply have to comment on it. There seems to be an expanded role envisioned for AI at CERN, which is the world's largest single-entity generator of "data," according to this article from Scientific American:
Artificial Intelligence Called In to Tackle LHC Data Deluge
Now, before I venture into my high octane speculation of the day, I want the reader to focus on the following paragraphs, which summarize the data filtration and collection system currently in use at CERN's Large Hadron Collider, and which I reviewed in my most recent book, The Third Way:
Driven by an eagerness to make discoveries and the knowledge that they will be hit with unmanageable volumes of data in ten years’ time, physicists who work on the Large Hadron Collider (LHC), near Geneva, Switzerland, are enlisting the help of AI experts.
On November 9-13, leading lights from both communities attended a workshop—the first of its kind—at which they discussed how advanced AI techniques could speed discoveries at the LHC. Particle physicists have “realized that they cannot do it alone”, says Cécile Germain, a computer scientist at the University of Paris South in Orsay, who spoke at the workshop at CERN, the particle-physics lab that hosts the LHC.
Computer scientists are responding in droves. Last year, Germain helped to organize a competition to write programs that could ‘discover’ traces of the Higgs boson in a set of simulated data; it attracted submissions from more than 1,700 teams.
Particle physics is already no stranger to AI. In particular, when ATLAS and CMS, the LHC’s two largest experiments, discovered the Higgs boson in 2012, they did so in part using machine learning—a form of AI that ‘trains’ algorithms to recognize patterns in data. The algorithms were primed using simulations of the debris from particle collisions, and learned to spot the patterns produced by the decay of rare Higgs particles among millions of more mundane events. They were then set to work on the real thing.
But in the near future, the experiments will need to get smarter at collecting their data, not just processing it. CMS and ATLAS each currently produces hundreds of millions of collisions per second, and uses quick and dirty criteria to ignore all but 1 in 1,000 events. Upgrades scheduled for 2025 mean that the number of collisions will grow 20-fold, and that the detectors will have to use more sophisticated methods to choose what they keep, says CMS physicist María Spiropulu of the California Institute of Technology in Pasadena, who helped to organize the CERN workshop. “We’re going into the unknown,” she says.
Inspiration could come from another LHC experiment, LHCb, which is dedicated to studying subtle asymmetries between particles and their antimatter counterparts. In preparation for the second, higher-energy run of the LHC, which began in April, the LHCb team programmed its detector to use machine learning to decide which data to keep.
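To make the excerpt's point about machine learning a bit more concrete for readers who like to see such things spelled out, here is a minimal sketch, in Python, of the kind of event selection being described: a classifier is "primed" on simulated, labelled collisions and then scores real events. The toy data, feature count, and cut value are my own invention for illustration; this is not CERN's actual code.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy "simulated" collisions: a few kinematic features per event,
# labelled 1 if the simulation injected a Higgs-like decay, else 0.
n_sim = 10_000
sim_features = rng.normal(size=(n_sim, 4))          # stand-ins for momenta, angles, energies
sim_labels = (sim_features[:, 0] + 0.5 * sim_features[:, 1] > 1.0).astype(int)

# "Prime" the algorithm on simulation, as the article describes.
clf = GradientBoostingClassifier()
clf.fit(sim_features, sim_labels)

# Then set it loose on the "real thing," keeping only the most signal-like events.
real_features = rng.normal(size=(1_000, 4))
signal_score = clf.predict_proba(real_features)[:, 1]
kept = real_features[signal_score > 0.95]           # arbitrary illustrative threshold
print(f"kept {len(kept)} of {len(real_features)} events")
```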
In effect, what all this means is that the enormous mountain of data that CERN's collider generates is first filtered by computer algorithms programmed to sift through it and pull out, for human analysis and review, only those events which conform to the programmed criteria.
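Reduced to its simplest terms, that kind of programmed filter might look something like the following sketch. The field names and thresholds are hypothetical, chosen only to illustrate a selection that throws away the vast majority of events.

```python
# Hypothetical, hard-coded selection criteria: the field names and numbers are
# invented, meant only to mimic a filter that discards most of what comes in.
def passes_trigger(event: dict) -> bool:
    """Return True if the event conforms to the programmed selection criteria."""
    return (
        event["total_energy_gev"] > 500.0        # only very energetic collisions
        and event["n_muons"] >= 2                # e.g. a two-muon signature
        and event["missing_energy_gev"] < 50.0   # little unexplained energy
    )

def filter_stream(events):
    """Yield only the events that the programmed criteria pull out for human review."""
    for event in events:
        if passes_trigger(event):
            yield event
```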
It was this fact that led me to propose, in The Third Way, the hypothesis that there could be hidden algorithms, buried in all the millions of lines of code, designed to pull anomalous or other types of events and shunt them into a covert program, staffed by covert analysts, for separate review.

Additionally, I suggested that one such program would consist not so much of the experiments themselves as of "data correlation" experiments, pulling data not only from the collider, but from concurrent events, be they geophysical events, events occurring in the magnetosphere of the earth during collider runs, solar events, and so on. In other words, I was, and am, proposing the idea that in addition to the public story of "particle physics," there might be hidden experiments, revealed only by means of such data correlation algorithms, dealing with the macro-systemic effects of the collider's operation.
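To be clear about what I mean by "hidden algorithms," the following is a purely hypothetical sketch of the sort of routine such a hypothesis imagines. Every name in it is invented, and nothing of the kind is known to exist in CERN's software; it simply shows how little code a covert shunt would actually require.

```python
# Purely hypothetical: an extra, undocumented test quietly copies "anomalous"
# events to a second destination while the public pipeline proceeds unchanged.
# Every name here is invented for illustration.
def route_event(event, public_queue, covert_queue, anomaly_score, threshold=0.99):
    """Pass the event to the normal analysis path, and, if a hidden criterion
    fires, shunt a copy to a separate, covert review queue."""
    public_queue.append(event)                # the ordinary, public analysis path
    if anomaly_score(event) > threshold:      # the buried, undisclosed criterion
        covert_queue.append(event)            # copy diverted for covert analysts
```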
With that in mind, consider the very opening paragraph of the article:
The next generation of particle-collider experiments will feature some of the world’s most advanced thinking machines, if links now being forged between particle physicists and artificial intelligence (AI) researchers take off. Such machines could make discoveries with little human input—a prospect that makes some physicists queasy. (Emphasis added)
Such a statement seems to imply the possibility of a hidden program, but more importantly, it throws an intriguing light on my "correlation" experiment idea, for such an experiment would seem, perforce, to demand such vast computational power as only an AI could provide: sifting through reams of data not only from particle collisions, but also "concurrent" data seemingly unrelated save for its occurrence in time frames when the collider is active and its absence when it is not, from data on human behavior trends (if any), to data on alterations in the magnetosphere's shape and behavior (and there is some suggestive stuff out there), to other types of data. Such an effort would require enormous computational ability and considerable skill in designing the algorithms.
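For the sake of concreteness, the core of such a "data correlation" experiment could be as simple as the following sketch, which lines up a stand-in external time series against the hours the collider is running and asks whether the two co-vary. The beam schedule and the external data here are invented placeholders, not real CERN or geophysical feeds.

```python
import numpy as np

def correlation_with_beam(beam_active: np.ndarray, external_series: np.ndarray) -> float:
    """Pearson correlation between a 0/1 'collider running' series and some
    external signal, both sampled on the same hourly grid."""
    return float(np.corrcoef(beam_active, external_series)[0, 1])

# Toy example: 30 days of hourly samples, all invented.
hours = 24 * 30
rng = np.random.default_rng(1)
beam_active = (rng.random(hours) > 0.5).astype(float)   # stand-in run schedule
magnetosphere_index = rng.normal(size=hours)             # stand-in external data
print(correlation_with_beam(beam_active, magnetosphere_index))
```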
So, for my high octane speculation of the day, I suspect that perhaps we've been given a hint of this, and of these types of possibilities, in this article, for reading between the lines a bit, it seems to gesture toward these very things.
