%Copyright (c) 2005 EDF-EADS-PHIMECA.
% Permission is granted to copy, distribute and/or modify this document
% under the terms of the GNU Free Documentation License, Version 1.2
% or any later version published by the Free Software Foundation;
% with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
% Texts. A copy of the license is included in the section entitled "GNU
% Free Documentation License".
\documentclass[11pt]{article}
\usepackage[dvips]{graphicx}
\usepackage{longtable}
\usepackage{Math_Notations}
\newcommand{\espace}{{\strut\\}}
\newcommand{\D}{{\mathrm{d}}}
\newcommand{\requirements}[2]{{
\begin{tabular}{||p{2.3cm}||p{12.7cm}||}
Requirements\vfill & \parbox{12cm}{
Results\vfill & \parbox{12cm}{
\newcommand{\longrequirements}[2]{{
\begin{longtable}{||p{2.3cm}||p{12.7cm}||}
Requirements\vfill & \parbox{12cm}{
Results\vfill & \parbox{12cm}{
\setlength{\textwidth}{18.5cm}
\setlength{\textheight}{23cm}
\setlength{\hoffset}{-1.04cm}
\setlength{\voffset}{-1.54cm}
\setlength{\oddsidemargin}{0cm}
\setlength{\evensidemargin}{0cm}
\setlength{\topmargin}{0cm}
\setlength{\headheight}{1cm}
\setlength{\headsep}{0.5cm}
\setlength{\marginparsep}{0cm}
\setlength{\marginparwidth}{0cm}
\setlength{\footskip}{1cm}
\setlength{\parindent}{0cm}
\fancyhf{} \rhead{\bfseries \thepage} \lhead{\bfseries \nouppercase Open TURNS -- Example Guide}
\rfoot{\bfseries \copyright 2005 EDF - EADS - PhiMeca} \lfoot{}
{\huge \bf Examples Guide}
\input{GenericInformation.tex}
% -------------------------------------------------------------------------------------------------
\section{Example 1 : deviation of a cantilever beam}
\subsection{Presentation of the study case}
This Example Guide gathers several Use Cases described in the Use Cases Guide in order to show an example of a complete study. \\
This example was presented at the ESREL 2007 conference in the paper: {\itshape Open TURNS, an Open source initiative to Treat Uncertainties, Risks'N Statistics in a structured industrial approach}, by A. Dutfoy (EDF R\&D), I. Dutka-Malen (EDF R\&D), R. Lebrun (EADS Innovation Works) et al.\\
Let us consider the following analytical example of a cantilever beam, of Young's modulus $E$, length $L$ and section modulus $I$. One end is fixed in a wall and a concentrated bending load $F$ is applied at the other end of the beam. The deviation (vertical displacement) $y$ of the free end is equal to:
y(E, F, L, I) = \frac{FL^3}{3EI}
\begin{figure}[Hhbtp]
\includegraphics[width=10cm]{poutre.pdf}
\caption{Cantilever beam under a concentrated bending load.}
The objective of this study is to evaluate the influence of the uncertainties of the input data $(E, F, L, I)$ on the deviation $y$.\\
We consider a steel beam with a hollow square section of side $a$ and thickness $e$. Thus, the bending inertia of the beam section is equal to $I = \displaystyle \frac{a^4 - (a-e)^4}{12}$. The beam length is $L$. The Young's modulus is $E$. The applied load is $F$.\\
The values used for the deterministic studies are:
which corresponds to the point $(3.0e7, 30000, 250, 400)$ when the length $L$ is given in $cm$ and not in the standard unit $m$.\\
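As a sanity check, the analytical deviation formula can be evaluated at this deterministic point with a few lines of plain Python (a minimal sketch; the complete study script is listed at the end of this example):

```python
# Deviation of the cantilever beam: y = F * L^3 / (3 * E * I)
def deviation(E, F, L, I):
    return F * L**3 / (3.0 * E * I)

# Deterministic point (E, F, L, I) = (3.0e7, 30000, 250, 400), with L in cm
y = deviation(3.0e7, 30000.0, 250.0, 400.0)
print(y)  # about 13.02 cm
```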
This example treats the following points of the methodology:
\item[$\bullet$] Min/Max approach: evaluation of the range of the output variable of interest (deviation)
\item with a deterministic experiment plane,
\item with a random experiment plane,
\item[$\bullet$] Central tendency approach: evaluation of the central indicators of the output variable of interest (deviation)
\item Taylor variance decomposition,
\item Random sampling,
\item Kernel smoothing of the distribution of the output variable of interest,
\item[$\bullet$] Threshold exceedance approach: evaluation of the probability that the output variable of interest (deviation) exceeds a threshold ($\geq 30$ cm)
\item Monte Carlo simulation method,
\item Directional Sampling method,
\item Importance Sampling method.
\subsection{Probabilistic modelling}
\subsubsection{Marginal distributions}
The probabilistic modelling of the input data is the following:
\item[$\bullet$] $E$ = Beta$(*)$ where $r = 0.93$, $t = 3.2$, $a = 2.8e7$, $b = 4.8e7$,
\item[$\bullet$] $F$ = LogNormal, where the mean value is $E[F] = 30000$, the standard deviation is $\sqrt{Var[F]} = 9000$ and the min value is $\min(F) = 15000$,
\item[$\bullet$] $L$ = Uniform on $[250; 260]$,
\item[$\bullet$] $I$ = Beta$(*)$ where $r = 2.5$, $t = 4.0$, $a = 3.1e2$, $b = 4.5e2$.
(*) We recall here the expression of the probability density function of the Beta distribution:
\displaystyle p(x) = \frac{(x-a)^{(r-1)}(b-x)^{(t-r-1)}}{(b-a)^{(t-1)}B(r,t-r)}\boldsymbol{1}_{[a,b]}(x)
where $r>0$, $t>r$ and $a < b$.
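This density can be sketched in plain Python, using $B(r,t-r) = \Gamma(r)\Gamma(t-r)/\Gamma(t)$; the midpoint-rule normalization check below uses the parameters of $I$, for which the density is bounded on $[a,b]$:

```python
import math

def beta_pdf(x, r, t, a, b):
    """PDF of the Beta(r, t, a, b) distribution supported on [a, b]."""
    if x < a or x > b:
        return 0.0
    B = math.gamma(r) * math.gamma(t - r) / math.gamma(t)  # B(r, t - r)
    return (x - a)**(r - 1) * (b - x)**(t - r - 1) / ((b - a)**(t - 1) * B)

# Midpoint-rule check that the density integrates to 1
# (parameters of I: r = 2.5, t = 4.0, a = 310, b = 450)
r, t, a, b = 2.5, 4.0, 3.1e2, 4.5e2
n_int = 20000
h = (b - a) / n_int
integral = sum(beta_pdf(a + (k + 0.5) * h, r, t, a, b) for k in range(n_int)) * h
print(integral)  # close to 1.0
```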
\subsubsection{Dependence structure}
We suppose that the probabilistic variables $L$ and $I$ are dependent. This dependence may be explained by the manufacturing process of the beam: the thinner the beam has been laminated, the longer it is.\\
We model the dependence structure by a Normal copula, parameterized from the Spearman correlation coefficient of the two correlated variables: $\rho_S = -0.2$.\\
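For a Normal copula, the correlation parameter $\rho$ is obtained from the Spearman coefficient through the closed-form relation $\rho = 2 \sin(\pi \rho_S / 6)$; a minimal sketch:

```python
import math

# For a Normal copula, the Spearman coefficient rho_S and the correlation
# parameter rho are linked by rho = 2 * sin(pi * rho_S / 6).
def normal_copula_param_from_spearman(rho_s):
    return 2.0 * math.sin(math.pi * rho_s / 6.0)

rho = normal_copula_param_from_spearman(-0.2)
print(rho)  # about -0.209
```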
Then, the Spearman correlation matrix of the input random vector $(E,F,L,I)$ is:
\subsection{Min/Max approach}
\subsubsection{Deterministic experiment plane}
We consider a composite experiment plane, where:
\item the levels of the centered and reduced grid are $\pm 0.5$, $\pm 1$, $\pm 3$,
\item the unit per dimension (scaling factor) is given by the standard deviation of the marginal distribution of the corresponding variable,
\item the center is the mean point of the input random vector distribution.
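A composite experiment plane of this kind can be sketched as the center point plus, for each level, the axial points and the vertices of the scaled hypercube. This construction is an assumption, consistent with the 73-point count reported in the results comments; the actual OpenTURNS factory may differ in details:

```python
import itertools

def composite_plane(center, scale, levels):
    """Composite design: the center point plus, for each level,
    the 2*d axial points and the 2**d 'factorial' vertices."""
    d = len(center)
    points = [tuple(center)]
    for lv in levels:
        # axial points: center +/- lv * scale[i] along each axis
        for i in range(d):
            for s in (+1.0, -1.0):
                p = list(center)
                p[i] += s * lv * scale[i]
                points.append(tuple(p))
        # factorial points: all sign combinations on every axis
        for signs in itertools.product((+1.0, -1.0), repeat=d):
            points.append(tuple(c + s * lv * sc
                                for c, s, sc in zip(center, signs, scale)))
    return points

# 4 input variables, 3 levels -> 1 + 3 * (2*4 + 2**4) = 73 points
pts = composite_plane([0.0] * 4, [1.0] * 4, [0.5, 1.0, 3.0])
print(len(pts))  # 73
```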
\subsubsection{Random sampling}
We evaluate the range of the deviation from a random sample of size $10^4$.
\subsection{Central tendency approach}
\subsubsection{Taylor variance decomposition}
We evaluate the mean and the standard deviation of the deviation thanks to the Taylor variance decomposition method. The importance factors of this method rank the influence of the input uncertainties on the mean of the deviation.
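A first-order Taylor sketch in plain Python, using a finite-difference gradient and assuming independent inputs for simplicity (the study actually correlates $L$ and $I$, and the variance values below are illustrative, not those of the study):

```python
def deviation(x):
    E, F, L, I = x
    return F * L**3 / (3.0 * E * I)

def taylor_first_order(f, mean, var_diag, h=1e-6):
    """First-order Taylor approximation: E[y] ~ f(mu) and, for
    independent inputs, Var[y] ~ sum_i (df/dx_i)^2 * Var[x_i]."""
    y0 = f(mean)
    grad = []
    for i in range(len(mean)):
        xp = list(mean)
        xp[i] += h * mean[i]                      # relative finite-difference step
        grad.append((f(xp) - y0) / (h * mean[i]))
    var = sum(g * g * v for g, v in zip(grad, var_diag))
    return y0, var

# illustrative mean point and variances (not the exact values of the study)
mu = [3.0e7, 30000.0, 255.0, 400.0]
mean_y, var_y = taylor_first_order(deviation, mu,
                                   [1e13, 9000.0**2, 100.0 / 12.0, 900.0])
print(mean_y, var_y)
```

The importance factor of each input is then $g_i^2 \, Var[x_i] / Var[y]$, which is how the ranking mentioned above is obtained.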
\subsubsection{Random sampling}
We evaluate the mean and standard deviation of the deviation from a random sample of size $10^4$.
\subsubsection{Kernel smoothing}
We fit the distribution of the deviation with a Normal kernel, whose bandwidth is evaluated with Scott's rule, from a random sample of size $10^4$.\\
We then superpose the kernel smoothing pdf and the normal one whose mean and standard deviation are those of the random sample of the output variable of interest, in order to graphically check whether the Normal model fits the deviation distribution.
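A Gaussian kernel density with one common form of Scott's rule, $h = \hat{\sigma}\, n^{-1/5}$, can be sketched as follows (plain Python, with an illustrative sample rather than the study's output sample):

```python
import math, random

def gaussian_kde(sample):
    """Gaussian kernel density estimator; bandwidth from one common
    form of Scott's rule, h = sigma * n**(-1/5)."""
    n = len(sample)
    mean = sum(sample) / n
    sigma = math.sqrt(sum((x - mean)**2 for x in sample) / (n - 1))
    h = sigma * n**(-0.2)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - xi) / h)**2) for xi in sample) \
               / (n * h * math.sqrt(2.0 * math.pi))
    return pdf

random.seed(0)
sample = [random.gauss(13.0, 2.0) for _ in range(500)]  # illustrative sample
pdf = gaussian_kde(sample)
print(pdf(13.0))
```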
\subsection{Threshold exceedance approach}
We consider the event where the deviation exceeds $30$ cm.\\
We use the Cobyla algorithm to search for the design point; this algorithm requires no evaluation of the gradient of the limit state function. We parameterize the Cobyla algorithm with the following parameters:
\item Maximum Iterations Number = $10^3$,
\item Maximum Absolute Error = $10^{-10}$,
\item Maximum Relative Error = $10^{-10}$,
\item Maximum Residual Error = $10^{-10}$,
\item Maximum Constraint Error = $10^{-10}$.
\subsubsection{Monte Carlo simulation method}
We evaluate the probability with the Monte Carlo method, parameterized as follows:
\item Maximum Outer Sampling = $4 \times 10^4$,
\item Block Size = $10^2$,
\item Maximum Coefficient of Variation = $10^{-1}$.
We evaluate the confidence interval of level $0.95$ and we draw the convergence graph of the Monte Carlo estimator with its confidence interval of level 0.90.
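The estimator can be sketched in plain Python with the block structure and the coefficient-of-variation stopping rule; a toy event of known probability stands in for the beam limit state:

```python
import math, random

def monte_carlo_probability(event, max_outer, block_size, max_cov, seed=42):
    """Block Monte Carlo estimator that stops when the coefficient of
    variation of the probability estimate falls below max_cov."""
    random.seed(seed)
    n = hits = 0
    for _ in range(max_outer):
        hits += sum(event() for _ in range(block_size))
        n += block_size
        p = hits / n
        if 0.0 < p < 1.0:
            cov = math.sqrt((1.0 - p) / (p * n))  # relative std of the estimate
            if cov < max_cov:
                break
    half = 1.96 * math.sqrt(p * (1.0 - p) / n)    # 95% confidence half-width
    return p, (p - half, p + half)

# toy event of known probability 0.05 (the study uses: deviation > 30 cm)
p, ci = monte_carlo_probability(lambda: random.random() < 0.05,
                                max_outer=4 * 10**4, block_size=100, max_cov=0.1)
print(p, ci)
```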
\subsubsection{Directional Sampling method}
We evaluate the probability with the Directional Sampling method, with its default parameters:
\item 'Slow and Safe' for the root strategy,
\item 'Random direction' for the sampling strategy.
We evaluate the confidence interval of level $0.95$ and we draw the convergence graph of the Directional Sampling estimator with its confidence interval of level 0.90.
\subsubsection{Latin Hyper Cube Sampling method}
We evaluate the probability with the Latin Hyper Cube Sampling method with the same parameters as the Monte Carlo method and we draw the convergence graph of the LHS estimator with its confidence interval of level 0.90.
\subsubsection{Importance Sampling method}
We evaluate the probability with the Importance Sampling method in the standard space, with the same parameters as the Monte Carlo method. The importance distribution is the normal one, centered on the standard design point and whose standard deviation is 4.\\
The BlockSize is fixed to 1 and the MaximumOuterIteration to $4 \times 10^4$.\\
We draw the convergence graph of the Importance Sampling estimator with its confidence interval of level 0.90.
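A one-dimensional analogue in the standard space: to estimate $P(U > \beta)$ for a standard normal $U$, sample from a normal importance density centered at the design point (the spread of 4 mirrors the study's choice; the linear limit state here is a stand-in, not the beam model):

```python
import math, random

def importance_sampling(beta, n=20000, spread=4.0, seed=1):
    """Estimate P(U > beta) for a standard normal U by sampling from a
    normal importance density centered at the design point beta."""
    random.seed(seed)
    phi = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    total = 0.0
    for _ in range(n):
        u = random.gauss(beta, spread)
        if u > beta:                               # indicator of the event
            q = phi((u - beta) / spread) / spread  # importance density at u
            total += phi(u) / q                    # likelihood-ratio weight
    return total / n

p_hat = importance_sampling(3.0)
exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))  # P(U > 3), about 1.35e-3
print(p_hat, exact)
```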
\subsection{Response surface by polynomial chaos expansion}
We evaluate the meta model determined thanks to the polynomial chaos expansion technique.
We take the following 1D polynomial families, whose parameters have been determined in order to be adapted to the marginal distributions of the input random vector:
\item $E$: Jacobi($\alpha = 1.3$, $\beta = -0.1$),
\item $F$: Laguerre($k = 1.78$),
\item $L$: Legendre,
\item $I$: Jacobi($\alpha = 0.5$, $\beta = 1.5$).
The truncation strategy of the multivariate orthonormal basis is the Cleaning Strategy: within the $500$ first terms of the multivariate basis, we keep, among the 50 most significant ones, those whose contribution is significant (which means greater than $10^{-4}$).\\
The evaluation strategy of the approximation coefficients is the least squares strategy based on an experiment plane determined with the Monte Carlo sampling technique, of size 100.\\
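The least squares step can be sketched in 1D with a Legendre basis and a Monte Carlo experiment plane of size 100 (plain Python; the Cleaning Strategy is not reproduced here):

```python
import random

def legendre(k, x):
    """Legendre polynomial P_k on [-1, 1] via the three-term recurrence."""
    p0, p1 = 1.0, x
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def chaos_fit(f, degree, n_points, seed=3):
    """Least squares fit of the chaos coefficients on a Monte Carlo plane."""
    random.seed(seed)
    xs = [random.uniform(-1.0, 1.0) for _ in range(n_points)]
    m = degree + 1
    # normal equations A c = b for the m basis polynomials
    A = [[sum(legendre(i, x) * legendre(j, x) for x in xs) for j in range(m)]
         for i in range(m)]
    b = [sum(legendre(i, x) * f(x) for x in xs) for i in range(m)]
    # Gaussian elimination (the Gram matrix is positive definite here)
    for i in range(m):
        for j in range(i + 1, m):
            r = A[j][i] / A[i][i]
            for k in range(i, m):
                A[j][k] -= r * A[i][k]
            b[j] -= r * b[i]
    coef = [0.0] * m
    for i in reversed(range(m)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, m))) / A[i][i]
    return coef, xs

# the meta model reproduces a cubic exactly with a degree-3 basis
f = lambda x: 1.0 + 2.0 * x - x**3
coef, xs = chaos_fit(f, degree=3, n_points=100)
meta = lambda x: sum(c * legendre(k, x) for k, c in enumerate(coef))
print(max(abs(meta(x) - f(x)) for x in xs))  # near 0
```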
Figures (\ref{PCE_E}) to (\ref{ModelsComparison}) show the following graphs:
\item the drawings of some members of the 1D polynomial family,
\item the cloud of points comparing the model values with the meta model ones: if the fit were perfect, the points would lie on the first diagonal.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The Python script}
\lstset{language=python, keywordstyle=\color{black}\bfseries,tabsize=2,framexleftmargin=8mm,frame=shadowbox,rulesepcolor=\color{black},numbers=left,breaklines=true}
\lstinputlisting{scriptExample_beam.py}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Output of the Python script}
\lstinputlisting{resultatExampleBeam}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The probability density function (PDF) of each marginal is given in Figures \ref{pdfE} to \ref{pdfI}.
\begin{figure}[Hhbtp]
\begin{minipage}{9.8cm}
\includegraphics[width=7cm]{distributionE_pdf.pdf}
\caption{Probability density function of the parameter E}
\begin{minipage}{9.8cm}
\includegraphics[width=7cm]{distributionF_pdf.pdf}
\caption{Probability density function of the parameter F}
\begin{figure}[Hhbtp]
\begin{minipage}{9.8cm}
\includegraphics[width=7cm]{distributionL_pdf.pdf}
\caption{PDF of the parameter L}
\begin{minipage}{9.8cm}
\includegraphics[width=7cm]{distributionI_pdf.pdf}
\caption{PDF of the parameter I}
The probability density function (PDF) and the cumulative distribution function (CDF) of the deviation fitted with the kernel smoothing method are drawn in Figures \ref{KernelSmoothing} and \ref{KernelSmoothing2}.
\begin{figure}[Hhbtp]
\begin{minipage}{9.8cm}
\includegraphics[width=7cm]{smoothedPDF.pdf}
\caption{PDF of the deviation with the kernel smoothing method.}
\label{KernelSmoothing}
\begin{minipage}{9.8cm}
\includegraphics[width=7cm]{smoothedCDF.pdf}
\caption{CDF of the deviation with the kernel smoothing method.}
\label{KernelSmoothing2}
The superposition of the kernel smoothed density function and the normal distribution fitted from the same sample with the maximum likelihood method is drawn in Figure \ref{superp}.
\begin{figure}[Hhbtp]
\includegraphics[width=9cm]{smoothedPDF_and_GaussianPDF.pdf}
\caption{Superposition of the kernel smoothed density function and the normal distribution fitted from the same sample.}
The importance factors from the FORM method are given in Figure \ref{FormIF}.
\begin{figure}[Hhbtp]
\includegraphics[width=9cm]{ImportanceFactorsDrawingFORM.pdf}
\caption{FORM importance factors of the event: deviation $> 30$ cm.}
The convergence graphs of the simulation methods are given in Figures \ref{MCConvergence} to \ref{LHSConvergence}.
\begin{figure}[Hhbtp]
\begin{minipage}{9.8cm}
\includegraphics[width=7cm]{convergenceGrapheMonteCarlo.pdf}
\caption{Monte Carlo convergence graph.}
\label{MCConvergence}
\begin{minipage}{9.8cm}
\includegraphics[width=7cm]{convergenceGrapheLHS.pdf}
\caption{LHS convergence graph.}
\label{LHSConvergence}
\begin{figure}[Hhbtp]
\begin{minipage}{9.8cm}
\includegraphics[width=7cm]{convergenceGrapheDS.pdf}
\caption{Directional Sampling convergence graph.}
\label{DSConvergence}
\begin{minipage}{9.8cm}
\includegraphics[width=7cm]{convergenceGrapheIS.pdf}
\caption{Importance Sampling convergence graph.}
\label{ISConvergence}
Figures (\ref{PCE_E}) to (\ref{ModelsComparison}) contain the following graphs:
\item Graph 1: the drawings of the first five members of the 1D polynomial family,
\item Graph 2: the cloud of points comparing the model values with the meta model ones: if the fit were perfect, the points would lie on the first diagonal.
\begin{figure}[Hhbtp]
\begin{minipage}{9cm}
\includegraphics[width=7cm]{PCE_JacobiPolynomials_VariableE.pdf}
\caption{The first five polynomials of the Jacobi family associated to the variable E.}
\begin{minipage}{9cm}
\includegraphics[width=7cm]{PCE_LaguerrePolynomials_VariableF.pdf}
\caption{The first five polynomials of the Laguerre family associated to the variable F.}
\begin{figure}[Hhbtp]
\begin{minipage}{9cm}
\includegraphics[width=7cm]{PCE_LegendrePolynomials_VariableL.pdf}
\caption{The first five polynomials of the Legendre family associated to the variable L.}
\begin{minipage}{9cm}
\includegraphics[width=7cm]{PCE_JacobiPolynomials_VariableI.pdf}
\caption{The first five polynomials of the Jacobi family associated to the variable I.}
\begin{figure}[Hhbtp]
\includegraphics[width=7cm]{PCE_comparisonModels.pdf}
\caption{Comparison of values from the model and the polynomial chaos meta model.}
\label{ModelsComparison}
\subsection{Results comments}
\subsubsection{Min/Max approach}
The Min/Max approach makes it possible to evaluate the range of the deviation.\\
We note that the use of an experiment plane may be beneficial compared to the random sampling technique, as we can catch more easily (which means with fewer evaluations of the limit state function) the extreme values of the output variable of interest: here, we have managed to catch both extreme bounds of the deviation with the composite experiment plane, whereas the random sampling technique did not manage to give a good evaluation of them.\\
Note that the composite experiment plane has 73 points, whereas the random sampling technique was performed with $10^4$ points.
\subsubsection{Central tendency approach}
The Taylor variance decomposition gives a good approximation of the mean value of the deviation: the value is comparable to the one obtained with the random sampling technique. Furthermore, note that the Taylor variance decomposition required only 1 evaluation of the limit state function, whereas the random sampling technique required $10^4$ evaluations.\\
The second order evaluation of the mean by the Taylor variance decomposition method adds no information, which probably means that around the mean point of the input random vector, the limit state function is well approximated by its tangent plane.\\
The importance factors indicate that the mean of the deviation is mostly influenced by the uncertainty of the variable $F$.\\
The kernel smoothing technique gives insight into the distribution shape and provides another approximation of the mean value of the deviation.\\
Note that the normal fit on the sample is not appropriate.
\subsubsection{Threshold exceedance approach}
All the event probabilities evaluated with the simulation methods are equivalent and confirm the event probability evaluated with FORM.\\
Note that the FORM probability required only 176 evaluations of the limit state function, whereas the Monte Carlo probability required 17300 evaluations, the Directional Sampling one 17297 evaluations and the LHS one 20300 evaluations.\\
Importance Sampling is a simulation method, but the importance density has been centered around the design point, where the threshold exceedance is concentrated. That is why chaining the FORM technique with Importance Sampling, where the importance density is a normal distribution centered around the design point and the sampling is performed in the standard space, seems to be the best compromise between the number of limit state function calls and the precision of the probability evaluation.\\
The simulation methods give a confidence interval, which is not possible with FORM.\\
FORM ranks the influence of the input uncertainties on the realisation of the threshold exceedance event: the variable $F$ is by far the most influential. Thus, if the threshold exceedance probability is judged too high, it is recommended to decrease the variability of the variable $F$ first.
\subsubsection{Response surface: polynomial chaos expansion}
The polynomial chaos expansion defined a meta model from only 100 points, which gives very satisfactory results compared to those obtained with the other methods.