1
%% This is soda209.all. This file is to be used for creating a paper
2
%% in the ACM/SIAM Preprint series with LaTeX. It consists of the following
5
%% ltexprt.tex ---- an example and documentation file
6
%% ltexprt.sty ---- the macro file
8
%% To use, cut this file apart at the appropriate places. You can run the
9
%% example file with the macros to get sample output.
11
%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CUT HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
14
%%%%%%%%%%%%%%%%%%%%%%%%%% ltexprt.tex %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
16
% This is ltexprt.tex, an example file for use with the SIAM LaTeX (version 2.09)
17
% Preprint Series macros. It is designed to provide double-column output.
18
% Please take the time to read the following comments, as they document
19
% how to use these macros. This file can be composed and printed out for
20
% use as sample output.
22
% Any comments or questions regarding these macros should be directed to:
26
% 3600 University City Science Center
27
% Philadelphia, PA 19104-2688
29
% Telephone: (215) 382-9800
31
% e-mail: gray@siam.org
34
% This file is to be used as an example for style only. It should not be read
37
%%%%%%%%%%%%%%% PLEASE NOTE THE FOLLOWING STYLE RESTRICTIONS %%%%%%%%%%%%%%%
39
%% 1. There are no new tags. Existing LaTeX tags have been formatted to match
40
%% the Preprint series style.
42
%% 2. You must use \cite in the text to mark your reference citations and
43
%% \bibitem in the listing of references at the end of your chapter. See
44
%% the examples in the following file. If you are using BibTeX, please
45
%% supply the bst file with the manuscript file.
48
%% 3. This macro is set up for two levels of headings (\section and
49
%% \subsection). The macro will automatically number the headings for you.
51
%% 4. No running heads are to be used in this volume.
53
%% 5. Theorems, Lemmas, Definitions, etc. are to be double numbered,
54
%% indicating the section and the occurence of that element
55
%% within that section. (For example, the first theorem in the second
56
%% section would be numbered 2.1. The macro will
57
%% automatically do the numbering for you.
59
%% 6. Figures, equations, and tables must be single-numbered.
60
%% Use existing LaTeX tags for these elements.
61
%% Numbering will be done automatically.
64
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
68
\documentstyle[twoside,leqno,twocolumn,ltexprt]{article}
73
\title{\Large SIAM/ACM Preprint Series Macros for
74
Use With LaTeX\thanks{Supported by GSF grants ABC123, DEF456, and GHI789.}}
75
\author{Corey Gray\thanks{Society for Industrial and Applied Mathematics.} \\
77
Tricia Manning\thanks{Society for Industrial and Applied Mathematics.}}
82
\pagestyle{myheadings}
85
%\pagenumbering{arabic}
88
\begin{abstract} \small\baselineskip=9pt This is the text of my abstract. It is a brief
90
paper, outlining the purposes and goals I am trying to address.\end{abstract}
92
\section{Problem Specification.}In this paper, we consider the solution of the $N \times
95
\begin{equation} \label{e1.1}
98
where $A$ is large, sparse, symmetric, and positive definite. We consider
99
the direct solution of (\ref{e1.1}) by means of general sparse Gaussian
100
elimination. In such a procedure, we find a permutation matrix $P$, and
101
compute the decomposition
103
P A P^{t} = L D L^{t}
105
where $L$ is unit lower triangular and $D$ is diagonal.
108
\section{Design Considerations.}Several good ordering algorithms (nested dissection and
110
are available for computing $P$ \cite{GEORGELIU}, \cite{ROSE72}.
111
Since our interest here does not
112
focus directly on the ordering, we assume for convenience that $P=I$,
113
or that $A$ has been preordered to reflect an appropriate choice of $P$.
115
Our purpose here is to examine the nonnumerical complexity of the
116
sparse elimination algorithm given in \cite{BANKSMITH}.
117
As was shown there, a general sparse elimination scheme based on the
118
bordering algorithm requires less storage for pointers and
119
row/column indices than more traditional implementations of general
120
sparse elimination. This is accomplished by exploiting the m-tree,
121
a particular spanning tree for the graph of the filled-in matrix.
123
\begin{theorem} The method was extended to three
124
dimensions. For the standard multigrid
126
(in which, for a given grid, the next coarser grid has $1/8$
127
as many points), anisotropic problems require plane
129
obtain a good smoothing factor.\end{theorem}
131
Our purpose here is to examine the nonnumerical complexity of the
132
sparse elimination algorithm given in \cite{BANKSMITH}.
133
As was shown there, a general sparse elimination scheme based on the
134
bordering algorithm requires less storage for pointers and
135
row/column indices than more traditional implementations of general
136
sparse elimination. This is accomplished by exploiting the m-tree,
137
a particular spanning tree for the graph of the filled-in matrix.
138
Several good ordering algorithms (nested dissection and minimum degree)
139
are available for computing $P$ \cite{GEORGELIU}, \cite{ROSE72}.
140
Since our interest here does not
141
focus directly on the ordering, we assume for convenience that $P=I$,
142
or that $A$ has been preordered to reflect an appropriate choice of $P$.
144
\begin{proof} In this paper we consider two methods. The first method
146
basically the method considered with two differences:
147
first, we perform plane relaxation by a two-dimensional
148
multigrid method, and second, we use a slightly different
150
interpolation operator, which improves performance
151
for nearly singular problems. In the second method coarsening
152
is done by successively coarsening in each of the three
153
independent variables and then ignoring the intermediate
154
grids; this artifice simplifies coding considerably.
157
Our purpose here is to examine the nonnumerical complexity of the
158
sparse elimination algorithm given in \cite{BANKSMITH}.
159
As was shown there, a general sparse elimination scheme based on the
160
bordering algorithm requires less storage for pointers and
161
row/column indices than more traditional implementations of general
162
sparse elimination. This is accomplished by exploiting the m-tree,
163
a particular spanning tree for the graph of the filled-in matrix.
165
\begin{Definition}{\rm We describe the two methods in \S 1.2. In \S\ 1.3. we
167
some remaining details.}
170
Our purpose here is to examine the nonnumerical complexity of the
171
sparse elimination algorithm given in \cite{BANKSMITH}.
172
As was shown there, a general sparse elimination scheme based on the
173
bordering algorithm requires less storage for pointers and
174
row/column indices than more traditional implementations of general
175
sparse elimination. This is accomplished by exploiting the m-tree,
176
a particular spanning tree for the graph of the filled-in matrix.
177
Several good ordering algorithms (nested dissection and minimum degree)
178
are available for computing $P$ \cite{GEORGELIU}, \cite{ROSE72}.
179
Since our interest here does not
180
focus directly on the ordering, we assume for convenience that $P=I$,
181
or that $A$ has been preordered to reflect an appropriate choice of $P$.
183
Our purpose here is to examine the nonnumerical complexity of the
184
sparse elimination algorithm given in \cite{BANKSMITH}.
185
As was shown there, a general sparse elimination scheme based on the
186
bordering algorithm requires less storage for pointers and
187
row/column indices than more traditional implementations of general
190
\begin{lemma} We discuss first the choice for $I_{k-1}^k$
191
which is a generalization. We assume that $G^{k-1}$ is
194
by standard coarsening; that is, if $G^k$ is a tensor product
195
grid $G_{x}^k \times G_{y}^k \times G_{z}^k$,
196
$G^{k-1}=G_{x}^{k-1} \times G_{y}^{k-1} \times G_{z}^{k-1}$,
197
where $G_{x}^{k-1}$ is obtained by deleting every other grid
198
point of $G_x^k$ and similarly for $G_{y}^k$ and $G_{z}^k$.
201
To our knowledge, the m-tree previously has not been applied in this
202
fashion to the numerical factorization, but it has been used,
203
directly or indirectly, in several optimal order algorithms for
204
computing the fill-in during the symbolic factorization phase
205
[4] - [10], [5], [6]. In \S 1.3., we analyze the complexity of the old and new
206
approaches to the intersection problem for the special case of
207
an $n \times n$ grid ordered by nested dissection. The special
208
structure of this problem allows us to make exact estimates of
209
the complexity. To our knowledge, the m-tree previously has not been applied in this
210
fashion to the numerical factorization, but it has been used,
211
directly or indirectly, in several optimal order algorithms for
212
computing the fill-in during the symbolic factorization phase
213
[4] - [10], [5], [6].
215
In \S 1.2, we review the bordering algorithm, and introduce
216
the sorting and intersection problems that arise in the
217
sparse formulation of the algorithm.
218
In \S 1.3., we analyze the complexity of the old and new
219
approaches to the intersection problem for the special case of
220
an $n \times n$ grid ordered by nested dissection. The special
221
structure of this problem allows us to make exact estimates of
222
the complexity. To our knowledge, the m-tree previously has not been applied in this
223
fashion to the numerical factorization, but it has been used,
224
directly or indirectly, in several optimal order algorithms for
225
computing the fill-in during the symbolic factorization phase
226
[4] - [10], [5], [6].
229
For the old approach, we show that the
230
complexity of the intersection problem is $O(n^{3})$, the same
231
as the complexity of the numerical computations. For the
232
new approach, the complexity of the second part is reduced to
233
$O(n^{2} (\log n)^{2})$.
235
To our knowledge, the m-tree previously has not been applied in this
236
fashion to the numerical factorization, but it has been used,
237
directly or indirectly, in several optimal order algorithms for
238
computing the fill-in during the symbolic factorization phase
239
[4] - [10], [5], [6]. In \S 1.3., we analyze the complexity of the old and new
240
approaches to the intersection problem for the special case of
241
an $n \times n$ grid ordered by nested dissection. The special
242
structure of this problem allows us to make exact estimates of
243
the complexity. To our knowledge, the m-tree previously has not been applied in this
244
fashion to the numerical factorization, but it has been used,
245
directly or indirectly, in several optimal order algorithms for
246
computing the fill-in during the symbolic factorization phase
247
[4] - [10], [5], [6].
248
This is accomplished by exploiting the m-tree,
249
a particular spanning tree for the graph of the filled-in matrix.
250
To our knowledge, the m-tree previously has not been applied in this
251
fashion to the numerical factorization, but it has been used,
252
directly or indirectly, in several optimal order algorithms for
253
computing the fill-in during the symbolic factorization phase
254
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.
256
\subsection{Robustness.}\ We do not
257
attempt to present an overview
258
here, but rather attempt to focus on those results that
259
are relevant to our particular algorithm.
260
This section assumes prior knowledge of the role of graph theory
261
in sparse Gaussian elimination; surveys of this role are
262
available in \cite{ROSE72} and \cite{GEORGELIU}. More general
263
discussions of elimination trees are given in
264
\cite{LAW} - \cite{LIU2}, \cite{SCHREIBER}.
265
Thus, at the $k$th stage, the bordering algorithm consists of
266
solving the lower triangular system
267
\begin{equation} \label{1.2}
272
\ell &=& D^{-1}_{k-1}v , \\
273
\delta &=& \alpha - \ell^{t} v .
278
\caption{This is a figure 1.1.}
281
\section{Robustness.} We do not
282
attempt to present an overview
283
here, but rather attempt to focus on those results that
284
are relevant to our particular algorithm.
286
\subsection{Versatility.}\ The special
287
structure of this problem allows us to make exact estimates of
288
the complexity. For the old approach, we show that the
289
complexity of the intersection problem is $O(n^{3})$, the same
290
as the complexity of the numerical computations
291
\cite{GEORGELIU}, \cite{ROSEWHITTEN}. For the
292
new approach, the complexity of the second part is reduced to
293
$O(n^{2} (\log n)^{2})$.
295
To our knowledge, the m-tree previously has not been applied in this
296
fashion to the numerical factorization, but it has been used,
297
directly or indirectly, in several optimal order algorithms for
298
computing the fill-in during the symbolic factorization phase
299
[4] - [10], [5], [6]. In \S 1.3., we analyze the complexity of the old and new
300
approaches to the intersection problem for the special case of
301
an $n \times n$ grid ordered by nested dissection. The special
302
structure of this problem allows us to make exact estimates of
303
the complexity. To our knowledge, the m-tree previously has not been applied in this
304
fashion to the numerical factorization, but it has been used,
305
directly or indirectly, in several optimal order algorithms for
306
computing the fill-in during the symbolic factorization phase
307
[4] - [10], [5], [6].
309
In \S 1.2, we review the bordering algorithm, and introduce
310
the sorting and intersection problems that arise in the
311
sparse formulation of the algorithm.
312
In \S 1.3., we analyze the complexity of the old and new
313
approaches to the intersection problem for the special case of
314
an $n \times n$ grid ordered by nested dissection. The special
315
structure of this problem allows us to make exact estimates of
316
the complexity. To our knowledge, the m-tree previously has not been applied in this
317
fashion to the numerical factorization, but it has been used,
318
directly or indirectly, in several optimal order algorithms for
319
computing the fill-in during the symbolic factorization phase
320
[4] - [10], [5], [6].
323
For the old approach, we show that the
324
complexity of the intersection problem is $O(n^{3})$, the same
325
as the complexity of the numerical computations. For the
326
new approach, the complexity of the second part is reduced to
327
$O(n^{2} (\log n)^{2})$.
329
To our knowledge, the m-tree previously has not been applied in this
330
fashion to the numerical factorization, but it has been used,
331
directly or indirectly, in several optimal order algorithms for
332
computing the fill-in during the symbolic factorization phase
333
[4] - [10], [5], [6]. In \S 1.3., we analyze the complexity of the old and new
334
approaches to the intersection problem for the special case of
335
an $n \times n$ grid ordered by nested dissection. The special
336
structure of this problem allows us to make exact estimates of
337
the complexity. To our knowledge, the m-tree previously has not been applied in this
338
fashion to the numerical factorization, but it has been used,
339
directly or indirectly, in several optimal order algorithms for
340
computing the fill-in during the symbolic factorization phase
341
[4] - [10], [5], [6].
342
This is accomplished by exploiting the m-tree,
343
a particular spanning tree for the graph of the filled-in matrix.
344
To our knowledge, the m-tree previously has not been applied in this
345
fashion to the numerical factorization, but it has been used,
346
directly or indirectly, in several optimal order algorithms for
347
computing the fill-in during the symbolic factorization phase
348
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.
350
\begin{thebibliography}{99}
353
%R.~E. Bank, {\em PLTMG users' guide, edition 5.0}, tech. report,
354
% Department of Mathematics, University of California, San Diego, CA, 1988.
357
%R.~E. Bank, T.~F. Dupont, and H.~Yserentant, {\em The hierarchical basis
358
% multigrid method}, Numer. Math., 52 (1988), pp.~427--458.
361
R.~E. Bank and R.~K. Smith, {\em General sparse elimination requires no
362
permanent integer storage}, SIAM J. Sci. Stat. Comput., 8 (1987),
366
S.~C. Eisenstat, M.~C. Gursky, M.~Schultz, and A.~Sherman, {\em
367
Algorithms and data structures for sparse symmetric gaussian elimination},
368
SIAM J. Sci. Stat. Comput., 2 (1982), pp.~225--237.
371
A.~George and J.~Liu, {\em Computer Solution of Large Sparse Positive
372
Definite Systems}, Prentice Hall, Englewood Cliffs, NJ, 1981.
375
K.~H. Law and S.~J. Fenves, {\em A node addition model for symbolic
376
factorization}, ACM TOMS, 12 (1986), pp.~37--50.
379
J.~W.~H. Liu, {\em A compact row storage scheme for cholesky factors
380
using elimination trees}, ACM TOMS, 12 (1986), pp.~127--148.
383
\sameauthor , {\em The role of
384
elimination trees in sparse factorization}, Tech. Report CS-87-12,Department
385
of Computer Science, York University, Ontario, Canada, 1987.
388
D.~J. Rose, {\em A graph theoretic study of the numeric solution of
389
sparse positive definite systems}, in Graph Theory and Computing, Academic� Press, New
393
D.~J. Rose, R.~E. Tarjan, and G.~S. Lueker, {\em Algorithmic aspects of
394
vertex elimination on graphs}, SIAM J. Comput., 5 (1976), pp.~226--283.
396
\bibitem{ROSEWHITTEN}
397
D.~J. Rose and G.~F. Whitten, {\em A recursive analysis of disection
398
strategies}, in Sparse Matrix Computations, Academic Press, New York, 1976.
401
R.~Schrieber, {\em A new implementation of sparse gaussian elimination},
402
ACM TOMS, 8 (1982), pp.~256--276.
404
\end{thebibliography}
411
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CUT HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
415
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ltexprt.sty %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
417
% This is ltexprt.sty, a file of macros and definitions for creating a
418
% chapter for publication in the ACM/SIAM Preprint series using LaTeX (version 2.09).
419
% It is designed to produce double-column output.
420
% This file may be freely distributed but may not be altered in any way.
421
% Any comments or questions regarding these macros should be directed to:
425
% 3600 University City Science Center
426
% Philadelphia, PA 19104-2688
428
% Telephone: (215) 382-9800
429
% Fax: (215) 386-7999
430
% e-mail: gray@siam.org
433
% Report the version.
434
\message{*** ACM/SIAM LaTeX 2.09 Preprint Series macro package, version 1.0,
435
September 24,1990 ***}
451
\textheight 52.5pc \advance\textheight by \topskip
459
%% footnotes to be set 8/10
460
\def\footnotesize{\@setsize\footnotesize{10pt}\viiipt\@viiipt
462
\abovedisplayskip \z@
464
\abovedisplayshortskip\abovedisplayskip
465
\belowdisplayshortskip\belowdisplayshortskip
466
\def\@listi{\leftmargin\leftmargini \topsep 3pt plus 1pt minus 1pt
467
\parsep 2pt plus 1pt minus 1pt
470
\let\referencesize\footnotesize
474
\skip\footins 12pt plus 12pt
476
\def\footnoterule{\kern3\p@ \hrule width 3em} % the \hrule is .4pt high
478
\def\ps@plain{\let\@mkboth\@gobbletwo
479
\def\@oddfoot{{\hfil\small\thepage\hfil}}%
481
\def\@evenhead{}\def\@evenfoot{}}
487
\def\ps@headings{\let\@mkboth\markboth
488
\def\@oddfoot{}\def\@evenfoot{}%
489
\def\@evenhead{{\rm\thepage}\hfil{\small\leftmark}}%
490
\def\@oddhead{{\noindent\small\rightmark}\hfil{\rm\thepage}}%
494
\def\ps@myheadings{\let\@mkboth\@gobbletwo
495
\def\@oddfoot{}\def\@evenfoot{}%
496
\def\@oddhead{\rlap{\normalsize\rm\rightmark}\hfil{small\thepage}}%
497
\def\@evenhead%{\hfil{\small\@chapapp}\
498
{\small\thepage}\hfil\llap{\normalsize\rm\leftmark}}%
499
\def\chaptermark##1{}%
500
\def\sectionmark##1{}\def\subsectionmark##1{}}
503
\def\theequation{\arabic{section}.\arabic{equation}}
506
\def\section{\@startsection{section}{1}{0pt}{-12pt}{3pt}{\hyphenpenalty=\@M
507
\exhyphenpenalty=\@M\normalsize\bf}}
508
\def\subsection{\@startsection{subsection}{2}{0pt}{-12pt}{0pt}{\normalsize\bf}
510
\def\subsubsection{\@startsection
511
{subsubsection}{3}{0pt}{-12pt}{0pt}{\normalsize\bf}}
512
\def\paragraph{\@startsection
513
{paragraph}{4}{\parindent}{0pt}{0pt}{\normalsize\bf}}
514
\def\subparagraph{\@startsection
515
{subparagraph}{4}{\parindent}{0pt}{0pt}{\normalsize\bf}}
517
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
519
% THEOREMS, PROOFS, ALGORITHMS %
521
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
523
%%% defined proof environment by theorem model (took out counter)
525
\def\newproof#1{\@nprf{#1}}
527
\def\@nprf#1#2{\@xnprf{#1}{#2}}
529
\def\@xnprf#1#2{\expandafter\@ifdefinable\csname #1\endcsname
530
\global\@namedef{#1}{\@prf{#1}{#2}}\global\@namedef{end#1}{\@endproof}}
532
\def\@prf#1#2{\@xprf{#1}{#2}}
534
\def\@xprf#1#2{\@beginproof{#2}{\csname the#1\endcsname}\ignorespaces}
538
%%% defined algorithm environment by theorem model
540
\def\newalgorithm#1{\@ifnextchar[{\@oalg{#1}}{\@nalg{#1}}}
543
\@ifnextchar[{\@xnalg{#1}{#2}}{\@ynalg{#1}{#2}}}
545
\def\@xnalg#1#2[#3]{\expandafter\@ifdefinable\csname #1\endcsname
546
{\@definecounter{#1}\@addtoreset{#1}{#3}%
547
\expandafter\xdef\csname the#1\endcsname{\expandafter\noexpand
548
\csname the#3\endcsname \@thmcountersep \@thmcounter{#1}}%
549
\global\@namedef{#1}{\@alg{#1}{#2}}\global\@namedef{end#1}{\@endalgorithm}}}
551
\def\@ynalg#1#2{\expandafter\@ifdefinable\csname #1\endcsname
552
{\@definecounter{#1}%
553
\expandafter\xdef\csname the#1\endcsname{\@thmcounter{#1}}%
554
\global\@namedef{#1}{\@alg{#1}{#2}}\global\@namedef{end#1}{\@endalgorithm}}}
556
\def\@oalg#1[#2]#3{\expandafter\@ifdefinable\csname #1\endcsname
557
{\global\@namedef{the#1}{\@nameuse{the#2}}%
558
\global\@namedef{#1}{\@alg{#2}{#3}}%
559
\global\@namedef{end#1}{\@endalgorithm}}}
561
\def\@alg#1#2{\refstepcounter
562
{#1}\@ifnextchar[{\@yalg{#1}{#2}}{\@xalg{#1}{#2}}}
564
\def\@xalg#1#2{\@beginalgorithm{#2}{\csname the#1\endcsname}\ignorespaces}
565
\def\@yalg#1#2[#3]{\@opargbeginalgorithm{#2}{\csname
566
the#1\endcsname}{#3}\ignorespaces}
571
\def\@beginproof#1{\rm \trivlist \item[\hskip \labelsep{\it #1.\/}]}
572
\def\@endproof{\outerparskip 0pt\endtrivlist}
574
\def\@begintheorem#1#2{\it \trivlist \item[\hskip \labelsep{\sc #1\ #2.}]}
575
\def\@opargbegintheorem#1#2#3{\it \trivlist
576
\item[\hskip \labelsep{\sc #1\ #2.\ (#3)}]}
577
\def\@endtheorem{\outerparskip 0pt\endtrivlist}
579
%\def\@begindefinition#1#2{\rm \trivlist \item[\hskip \labelsep{\sc #1\ #2.}]}
580
%\def\@opargbegindefinition#1#2#3{\rm \trivlist
581
% \item[\hskip \labelsep{\sc #1\ #2.\ (#3)}]}
582
%\def\@enddefinition{\outerparskip 0pt\endtrivlist}
585
\def\@beginalgorithm#1#2{\rm \trivlist \item[\hskip \labelsep{\sc #1\ #2.}]}
586
\def\@opargbeginalgorithm#1#2#3{\rm \trivlist
587
\item[\hskip \labelsep{\sc #1\ #2.\ (#3)}]}
588
\def\@endalgorithm{\outerparskip 6pt\endtrivlist}
591
\newskip\outerparskip
593
\def\trivlist{\parsep\outerparskip
594
\@trivlist \labelwidth\z@ \leftmargin\z@
595
\itemindent\parindent \def\makelabel##1{##1}}
597
\def\@trivlist{\topsep=0pt\@topsepadd\topsep
598
\if@noskipsec \leavevmode \fi
599
\ifvmode \advance\@topsepadd\partopsep \else \unskip\par\fi
600
\if@inlabel \@noparitemtrue \@noparlisttrue
601
\else \@noparlistfalse \@topsep\@topsepadd \fi
602
\advance\@topsep \parskip
603
\leftskip\z@\rightskip\@rightskip \parfillskip\@flushglue
604
\@setpar{\if@newlist\else{\@@par}\fi}%
605
\global\@newlisttrue \@outerparskip\parskip}
608
\def\endtrivlist{\if@newlist\@noitemerr\fi
609
\if@inlabel\indent\fi
610
\ifhmode\unskip \par\fi
612
\ifdim\lastskip >\z@ \@tempskipa\lastskip \vskip -\lastskip
613
\advance\@tempskipa\parskip \advance\@tempskipa -\@outerparskip
620
\newproof{@proof}{Proof}
621
\newenvironment{proof}{\begin{@proof}}{\end{@proof}}
623
\newtheorem{@theorem}{Theorem}[section]
624
\newenvironment{theorem}{\begin{@theorem}}{\end{@theorem}}
626
\newalgorithm{@algorithm}{Algorithm}[section]
627
\newenvironment{algorithm}{\begin{@algorithm}}{\end{@algorithm}}
631
\newtheorem{lemma}{Lemma}[section]
632
\newtheorem{fact}{Fact}[section]
633
\newtheorem{corollary}{Corollary}[section]
634
\newtheorem{axiom}{Axiom}[section]
635
\newtheorem{cond}{Condition}[section]
636
\newtheorem{property}{Property}[section]
637
\newtheorem{proposition}{Proposition}[section]
639
\newtheorem{Conjecture}{Conjecture}[section]
640
%\newtheorem{Corollary}[Theorem]{Corollary}
641
\newtheorem{Definition}{Definition}[section]
642
\newtheorem{Lemma}{Lemma}[section]
643
\newtheorem{Remark}{Remark}[section]
645
\newproof{Example}{Example}
646
\newproof{Method}{Method}
647
\newproof{Exercise}{Exercise}
650
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
654
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
657
\def\thebibliography#1{%
661
\begin{flushleft}\normalsize\bf References\end{flushleft}
662
\addvspace{3pt}\nopagebreak\list
663
%% default is no labels, for those not using \cite or BibTeX
664
% {[\arabic{enumi}]} {\settowidth\labelwidth{[#1]}
665
{[\arabic{enumi}]}{\settowidth\labelwidth{mm}
666
\leftmargin\labelwidth
667
\advance\leftmargin\labelsep
668
\usecounter{enumi}\@bibsetup}
669
\def\newblock{\hskip .11em plus .33em minus -.07em}
670
\sloppy\clubpenalty4000\widowpenalty4000
671
\sfcode`\.=1000\relax}
674
\def\@bibsetup{\itemindent=0pt \itemsep=0pt \parsep=0pt
677
\def\sameauthor{\leavevmode\vrule height 2pt depth -1.6pt width 23pt}
680
%% End of ltexprt.sty
682
%%%%%%%%%%%%%%%%%%%%%%%%% End of soda209.all %%%%%%%%%%%%%%%%%%%%%%%