Sunday, June 5, 2022

[2206.00520] Deep Learning Opacity in Scientific Discovery

Eamon Duede

Philosophers have recently focused on critical, epistemological challenges that arise from the opacity of deep neural networks. One might conclude from this literature that doing good science with opaque models is exceptionally challenging, if not impossible. Yet, this is hard to square with the recent boom in optimism for AI in science alongside a flood of recent scientific breakthroughs driven by AI methods.

In this paper, I argue that the disconnect between philosophical pessimism and scientific optimism is driven by a failure to examine how AI is actually used in science. I show that, in order to understand the epistemic justification for AI-powered breakthroughs, philosophers must examine the role played by deep learning as part of a wider process of discovery. The philosophical distinction between the 'context of discovery' and the 'context of justification' is helpful in this regard.

I demonstrate the importance of attending to this distinction with two cases drawn from the scientific literature, and show that epistemic opacity need not diminish AI's capacity to lead scientists to significant and justifiable breakthroughs. 

Discussion

What I hope to have shown in this paper is that, despite their epistemic opacity, deep learning models can be used quite effectively in science, not just for pragmatic ends but for genuine discovery and deeper theoretical understanding as well. This can be accomplished when DLMs are used as guides for exploring promising avenues of pursuit in the context of discovery.

In science, we want to make the best conjectures and pose the best hypotheses that we can. The history of science is replete with efforts to develop processes for arriving at promising ideas. For instance, thought experiments are cognitive devices for hypothesis generation, exploration, and theory selection. In general, we want our processes of discovery to be as reliable or trustworthy as possible. But, here, inductive considerations are, perhaps, sufficient to establish reliability. After all, the processes by which we arrive at our conjectures and hypotheses do not typically serve also to justify them.

While philosophers are right to raise epistemological concerns about neural network opacity, these problems primarily concern the treatment and use of deep learning outputs as findings in their own right that stand, as such, in need of justification which (as of now) only network transparency can provide. Yet, when DLMs serve the more modest (though no less impactful) role of guiding science in the context of discovery, their capacity to lead scientists to significant breakthroughs is in no way diminished.

Comments: 12 pages, 1 figure
Subjects: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Machine Learning (cs.LG); Physics and Society (physics.soc-ph)
Cite as: arXiv:2206.00520 [cs.AI]
  (or arXiv:2206.00520v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2206.00520
