Talk:Sensitivity analysis
Revision as of 12:52, 14 November 2012

Proposal for significant changes to this page - please read

Note: these changes have now been implemented.

I would like to propose some fairly significant changes to the structure of this page. I think the comments on this talk page reflect the fact that at the moment the page is quite poorly organised: some information appears out of place, some is duplicated, and in many cases it is absent. In particular, the methodology section must be extremely confusing to the uninitiated reader, being a list of overlapping classes of methods with no explanation of when or why a practitioner might use each one. Following that, there are several sub-sections with no apparent logical order to them.

I propose to re-organise the page to follow a more logical order, such that the reader is presented with a concise overview, followed by a definitive list of current methodologies, with links to methods. Probably the most widely accepted technique for global sensitivity analysis, Sobol's method based on variance decomposition, is afforded only two sentences here, which is a shame since most modern sensitivity analyses make extensive use of these techniques. I propose therefore to create a separate page outlining these methods in detail, which can be linked to from this page. I would also add more on the use of emulators and HDMR representations (either on this page or a separate one), since these methods are increasingly in demand.

I would like to stress however that I do not propose to alter the text of the page significantly; rather to re-shuffle it into a more logical order. The page already contains a wealth of collective knowledge, but could definitely be presented better in my opinion.

In summary, some key changes that I propose are the following:

  1. Re-organise text. e.g.
    1. Give a new home to lost sections such as "Errors" and "Assumptions vs Inferences"
    2. Collect and merge listed motivations for sensitivity analysis (currently divided between the opening paragraph and "Applications"). Possibly group under a "Motivations" heading.
    3. Remove any duplicated information
  2. Re-organise Methodology section, e.g.
    1. Outline the key situations that a practitioner might face - e.g. computational expense, dimensionality of the model, nonlinearities, correlations, interactions, or the fact that data points may be arbitrarily placed. Then outline methods that can deal with these situations, clearly categorised by type - e.g. HDMR is essentially a type of emulator, and sampling-based sensitivity analysis encompasses pretty much all methods except analytical ones. The structure of this section should reflect these facts.
    2. I would add more information about available emulators (in brief, with links), and also link to a new page on Sobol' indices and methods for their calculation. This is because variance-based SA is a core method which is barely mentioned here, but to include it on the main page would make it a very long page.
    3. I would keep the steps to SA but put it under its own subheading and clear up formatting.
    4. Add FAST to the methodology section, currently only linked to at the end.
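Since variance-based methods come up repeatedly in the proposal, a minimal Monte Carlo sketch may help fix ideas. It estimates first-order Sobol' indices for a toy two-input model; the model, sample size, and variable names are illustrative choices of mine, not from the page, and the estimator is the pick-freeze form discussed in Saltelli et al. (2010).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def model(x1, x2):
    # Toy model: Y = X1 + 2*X2 with X1, X2 ~ U(0, 1) independent.
    # Analytic first-order Sobol' indices: S1 = 0.2, S2 = 0.8.
    return x1 + 2.0 * x2

a1, a2 = rng.random(n), rng.random(n)   # sample matrix A
b1, b2 = rng.random(n), rng.random(n)   # sample matrix B

y_a = model(a1, a2)
y_b = model(b1, b2)
var_y = np.var(np.concatenate([y_a, y_b]))

# Pick-freeze estimator: V_i = mean( f(B) * (f(A_B^(i)) - f(A)) ),
# where A_B^(i) is matrix A with column i replaced by column i of B.
s1 = np.mean(y_b * (model(b1, a2) - y_a)) / var_y
s2 = np.mean(y_b * (model(a1, b2) - y_a)) / var_y

print(s1, s2)  # both close to the analytic values 0.2 and 0.8
```

With a few hundred thousand samples the estimates land within about a percent of the analytic values, which is the kind of detail a dedicated Variance-based sensitivity analysis page could cover.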

Overall the suggestions relate to re-organisation and bringing the page up to date with the state of the art. I am posting the suggestions here because I would like the contributors to this page to give their opinions - whether this is a good idea in general, whether individual suggestions sound reasonable, and whether there should be additional changes. If there are no major objections I will start to make changes within a couple of weeks or so, and take on any suggestions or criticisms.

Please post here any opinions on the changes suggested. Thanks. WillBecker (talk) 11:45, 22 August 2012 (UTC)


Ok so I had no comments on the revisions proposed. I have now uploaded my "cleaned up" version of the page. I have tried to re-organise everything in a logical order, delete duplicated information, add more information where appropriate, and include more links. I have also made a new page called Variance-based sensitivity analysis, which is now linked to.

Please post opinions on this. If you are unhappy with the edit and want to make major changes, please discuss it here. I have deleted very little; instead I have moved things to what I believe to be the appropriate places. Thanks. WillBecker (talk) 16:43, 30 October 2012 (UTC)

Some added stuff

Added some stuff about its relation to businesses, not sure if you'll like it! --lincs_geezer 04:23, 6 December 2005 (UTC)

Complicating

Wikipedia sometimes baffles me. I write a perfectly straightforward section, and then someone comes along and rewords it to mean exactly the same thing but be practically incomprehensible to the passer-by. Simple facts and theories should be left understandable to everyone, in my opinion. --lincs_geezer 00:40, 16 September 2006 (UTC)

What-if analysis

"what-if" analysis???? I have never heard anyone actually use this terminology and it should just be deleted, I am a statistical consultant. peace

'What-if' analysis is a subset of sensitivity analysis: a sensitivity analysis is done using a number of what-if analyses. To clarify, take an event that is affected by two parameters. What if parameter A is altered - what is the effect on the output? That forms the 'what-if' analysis. When a number of such analyses are performed and, based on them, a uniform causal formula or relationship is aimed at, that becomes the sensitivity analysis. Both are needed. Whereas a what-if is a single incident, sensitivity is based on multiple such incidents.

Within microbiological risk assessment there is some debate over whether sensitivity analysis and "what-if" analysis (perhaps more elegantly termed scenario analysis) are two separate things. The aim is perhaps narrowly defined for scenario analysis (what is the effect of changing the value of an input by so much on the output?), but in the end a very simple scenario analysis (e.g. manually changing the values of your model parameters) seems to me a very similar process to a very simple sensitivity analysis (as alluded to above). In my opinion, a passing comment that "what-if" scenarios can be investigated using sensitivity analysis methods would be all that is necessary.
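The "manually changing the value of your model parameters" process described above can be sketched in a few lines. This is purely illustrative: the model, the inputs A and B, and the 10% perturbation are invented for the example, not taken from the discussion.

```python
import numpy as np

def model(x):
    # Hypothetical two-input model (stand-in for any simulation code).
    return x[0] ** 2 + 3.0 * x[1]

baseline = np.array([1.0, 1.0])
y0 = model(baseline)

# One "what-if" question per input: perturb a single input by +10%
# while holding every other input at its baseline value.
deltas = {}
for i, name in enumerate(["A", "B"]):
    x = baseline.copy()
    x[i] *= 1.10
    deltas[name] = model(x) - y0
    print(f"What if {name} rises 10%? Output changes by {deltas[name]:+.3f}")
```

Repeating such what-if runs over many perturbations, and summarising the results, is exactly the step that turns a handful of scenarios into a (simple, local) sensitivity analysis.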

Too wordy for mere mortals

The underlying problem i see with this page is that anyone who understands what:

"However, when the assumptions are uncertain, and/or there are alternative sets of assumptions to chose from, the inference will also be also uncertain. Investigating the uncertainty in the inference (regardless of its source) goes under the name of Uncertainty analysis."

means, without having to go through it veeeeery slowly, probably already knows all about sensitivity analysis, and has no use for this page anyway. I personally do not, and have absolutely no idea what anything says. I think that maybe something a little less wordy would be more appropriate for a site such as wikipedia, with the whole 'available to everyone' thing going on.

Splooj (talk) 12:29, 2 April 2008 (UTC)

Good point! I tried to improve by quoting from Leamer and Kennedy, two econometricians I like.

Saltean (talk) 12:37, 18 August 2008 (UTC)

Problems with OAT section


I don't understand the OAT section figures and discussion. How do I get to x=0.5 and y=0.5 by changing only one factor at a time? And if I can get to the point [0.5,0.5] then what prevents me from getting to [1,1]? At a minimum, this section needs a reference.

-- Skeptdc (talk) 10:47, 17 May 2009 (UTC)

Indeed. Why a hypersphere, out of every inscribed shape that could encompass the points? Why not a cube, a simplex, a star, an arbitrary manifold...? --Livingthingdan (talk) 01:51, 21 April 2011 (UTC)


Skeptdc is right; with OAT in two dimensions one gets neither to [0.5,0.5] nor to [1,1], but only to the points [0,0], [0,1] and [1,0] (I have assumed here the origin of the coordinates in the centre of the square/cube/hypercube). The argument is that all OAT points are at most at a distance of 1 from the origin by design. Given that the diagonal of the hypercube is √k in k dimensions, if the points are distributed randomly there will be points (in the corners) which are at a distance of up to √k from the origin. Hence the paradox of OAT is that all points are in an inscribed volume near the origin. This volume becomes negligible with respect to the total volume as k increases. Think of the corners: in ten dimensions there are 2^10 = 1024 of them.

Another argument against my formulation of the OAT paradox -- which is perhaps implicit in the remark of Skeptdc -- is that when one throws a handful of points into a multidimensional space these points will be sparse, and the space will in no way be fully explored. A retort to this remark is that with OAT one already knows by design that none of the points will be close to the boundary of the region of interest, even by chance. In the end, even if one has only a handful of points at one's disposal, there is no reason why one should concentrate all these points close to the origin.
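A quick numeric sketch of the argument above (illustrative only; the formula is the standard volume of a k-ball): every OAT point lies inside the unit-radius ball inscribed in the hypercube [-1, 1]^k, and the fraction of the cube's volume that this ball occupies collapses as k grows.

```python
from math import gamma, pi, sqrt

# Compare the volume of the unit-radius k-ball (which contains every OAT
# point, by the distance <= 1 argument) with the hypercube [-1, 1]^k.
for k in [2, 3, 5, 10]:
    v_ball = pi ** (k / 2) / gamma(k / 2 + 1)  # volume of unit-radius k-ball
    v_cube = 2.0 ** k                          # volume of [-1, 1]^k
    print(f"k={k:2d}: ball/cube = {v_ball / v_cube:.4f}, "
          f"corner distance = {sqrt(k):.2f}, corners = {2 ** k}")
```

At k = 2 the ball still covers about 79% of the square, but by k = 10 the ratio is roughly 0.25%, while the 1024 corners sit at distance √10 ≈ 3.16 from the origin - entirely out of reach of an OAT design.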

-- Andrea Saltelli 15:27, 29 June 2009 (UTC)


Thanks for the response, Andrea.

I feel that some more clarification is in order to make this clear to the punters, however.

Consider this remark here: > with OAT in two dimensions one neither gets to [0.5,0.5], nor to [1,1], but only to points [0,0], [0,1] and [1,0]

So, from the initial discussion of the OAT business, I understood that the key factor in the OAT technique is that we change only one parameter at a time, not that the change is in increments of 1 *and* that it is only to one parameter at a time. (I gather that 1 is in fact scaled to be the maximal parameter range. Or are you referring to the ranges over which the parameter can be scaled? In which case, we should perhaps be referring not to "points" but to "intervals".)

>The argument is that all OAT points are at most at a distance=1 from the origin by design. Hm - the implication so far sounds like it's stronger than that - it is that the points are constrained to the volume of the, er, "hypercross" (?), defined as the k-dimensional shape containing all points lying on a vector with all components apart from at most one being zero. If that is so, then the volume which we explore is not a hypersphere at all, and the volume calculations are extraneous - since the shape we explore has a volume of zero for all k.

The first point, I suspect, might be a pure talk-page confusion, although I mention it to be sure.

The second one, I am sure, is an important unresolved confusion. Can two or more parameters be varied simultaneously at any point in the OAT process? If so, how do we know that they remain within the unit hypersphere? Do we have any citations on this point?

--Livingthingdan (talk) 06:11, 11 October 2010 (UTC)

The "OAT paradox" may or may not reflect consensus within the discipline. Either way, this section does not give a coherent explanation of it. Moreover, it is only supported by one citation which appears to belong to the author of that section. This may indicate that this section is original research, or that it needs a rewrite. tagging accordingly. --Livingthingdan (talk) 01:51, 21 April 2011 (UTC)[reply]


Hello. We have considerably simplified the discussion of OAT by: 1. removing the unclear figure about the sphere and the cube; 2. discarding the figure showing the ratio between the sphere and cube volumes, and the associated discussion.

We have stressed the two major pitfalls of OAT: it does not explore the input space sufficiently, and it does not take into account the simultaneous variation of inputs (no possibility of highlighting interactions).

We hope that this helps clarify the matter. Thank you for your comments.

Paola Annoni and Stefano Tarantola

15 March 2012

Courses and Conferences on Sensitivity Analysis

draft stefano 10:26, 26 November 2009 (UTC) —Preceding unsigned comment added by 139.191.246.63 (talk)

Undefined acronym

NPV at the end of the first paragraph is undefined. —Preceding unsigned comment added by 139.229.33.6 (talk) 18:46, 10 May 2011 (UTC)