%% This is ltexpprt.all. This file is to be used for creating a paper
%% in the ACM/SIAM Preprint series with LaTeX. It consists of the following
%% ltexpprt.tex ---- an example and documentation file
%% ltexpprt.sty ---- the macro file
%% To use, cut this file apart at the appropriate places. You can run the
%% example file with the macros to get sample output.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CUT HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%% ltexpprt.tex %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This is ltexpprt.tex, an example file for use with the SIAM LaTeX
% Preprint Series macros. It is designed to provide double-column output.
% Please take the time to read the following comments, as they document
% how to use these macros. This file can be composed and printed out for
% use as sample output.

% Any comments or questions regarding these macros should be directed to:

% Society for Industrial and Applied Mathematics
% 3600 University City Science Center
% Philadelphia, PA 19104-2688
% Telephone: (215) 382-9800
% e-mail: gray@siam.org
% This file is to be used as an example for style only. It should not be read
% for content.

%%%%%%%%%%%%%%% PLEASE NOTE THE FOLLOWING STYLE RESTRICTIONS %%%%%%%%%%%%%%%

%% 1. There are no new tags. Existing LaTeX tags have been formatted to match
%% the Preprint series style.
%% 2. You must use \cite in the text to mark your reference citations and
%% \bibitem in the listing of references at the end of your chapter. See
%% the examples in the following file. If you are using BibTeX, please
%% supply the bst file with the manuscript file.
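%% For example (an illustrative sketch; BANKSMITH is one of the citation
%% keys actually used in the sample text below), a citation in the text
%% and its reference entry are paired by key:
%%
%%     ... the algorithm given in \cite{BANKSMITH} ...
%%
%%     \bibitem{BANKSMITH}
%%     R.~E. Bank and R.~K. Smith, {\em General sparse elimination
%%     requires no permanent integer storage}, ...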
%% 3. Unless otherwise stated by your editor, do your chapter as if it
%% were chapter 1.
%% If you know which number your chapter is, you must do the following:
%% Use the \setcounter command to set the counters for chapter,
%% section, and page number to the appropriate number. The counter
%% for chapter is incremental, so it should be set one less than
%% the actual chapter number. The section counter is not
%% incremental. Set the page counter to 1. The following example
%% is set up as if it were chapter 3. Please note the placement of
%% the three \setcounter commands and follow it exactly.
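%% For example (a sketch mirroring the commented \setcounter lines in
%% the preamble below, which are set up for chapter 3):
%%
%%     \setcounter{chapter}{2} % incremental: one less than the chapter number
%%     \setcounter{section}{3} % not incremental
%%     \setcounter{page}{1}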
%% 4. This macro is set up for two levels of headings (\section and
%% \subsection). The macro will automatically number the headings for you.

%% 5. The running heads are defined by the \markboth command. The left running
%% head (the first field of the \markboth command) should be defined with
%% the authors' names. The right running head (the second field of the
%% \markboth command) should be defined with the title (or shortened title)
%% of your chapter. Neither running head may be more than 40 characters.
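%% For example (an illustrative sketch using shortened forms of this
%% file's sample authors and title, each under 40 characters):
%%
%%     \markboth{C. GRAY AND T. MANNING}{SIAM/ACM PREPRINT SERIES MACROS}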
%% 6. Theorems, Lemmas, Definitions, etc. are to be triple numbered,
%% indicating the chapter, section, and the occurrence of that element
%% within that section. (For example, the first theorem in the second
%% section of chapter three would be numbered 3.2.1.) The macro will
%% automatically do the numbering for you.
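%% For example (a sketch; the theorem environment defined by these macros
%% numbers itself, so the author types no number):
%%
%%     \begin{theorem}
%%     The factor $L$ is unit lower triangular.
%%     \end{theorem}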
%% 7. Figures, equations, and tables must be double-numbered, indicating
%% chapter and occurrence. Use existing LaTeX tags for these elements.
%% Numbering will be done automatically.
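%% For example (an illustrative sketch; \label and \ref produce the
%% automatic double numbers, as with equation e1.1 in the sample text):
%%
%%     \begin{equation} \label{e1.1}
%%     A x = b
%%     \end{equation}
%%     ... by solving (\ref{e1.1}) ...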
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentstyle[twoside,leqno,twocolumn,ltexpprt]{article}

\begin{document}

%\setcounter{chapter}{2} % If you are doing your chapter as chapter one,
%\setcounter{section}{3} % comment these two lines out.

\title{\large\bf Chapter 1 \\
\Large SIAM/ACM Preprint Series Macros for
Use With LaTeX\thanks{Supported by GSF grants ABC123, DEF456, and GHI789.}}
\author{Corey Gray\thanks{Society for Industrial and Applied Mathematics.} \\
Tricia Manning\thanks{Society for Industrial and Applied Mathematics.}}

\maketitle

\pagestyle{myheadings}
\markboth{AUTHORS NAMES}{CHAPTER TITLE}

%\pagenumbering{arabic}
%\setcounter{page}{1}%Leave this line commented out.

\begin{abstract} \small\baselineskip=9pt This is the text of my abstract. It is a brief
paper, outlining the purposes and goals I am trying to address.\end{abstract}
\section{Problem Specification.}In this paper, we consider the solution of the $N \times
N$ linear system
\begin{equation} \label{e1.1}
A x = b ,
\end{equation}
where $A$ is large, sparse, symmetric, and positive definite. We consider
the direct solution of (\ref{e1.1}) by means of general sparse Gaussian
elimination. In such a procedure, we find a permutation matrix $P$, and
compute the decomposition
\[
P A P^{t} = L D L^{t} ,
\]
where $L$ is unit lower triangular and $D$ is diagonal.
\section{Design Considerations.}Several good ordering algorithms (nested dissection and
minimum degree)
are available for computing $P$ \cite{GEORGELIU}, \cite{ROSE72}.
Since our interest here does not
focus directly on the ordering, we assume for convenience that $P=I$,
or that $A$ has been preordered to reflect an appropriate choice of $P$.

Our purpose here is to examine the nonnumerical complexity of the
sparse elimination algorithm given in \cite{BANKSMITH}.
As was shown there, a general sparse elimination scheme based on the
bordering algorithm requires less storage for pointers and
row/column indices than more traditional implementations of general
sparse elimination. This is accomplished by exploiting the m-tree,
a particular spanning tree for the graph of the filled-in matrix.
\begin{theorem} The method was extended to three
dimensions. For the standard multigrid
method (in which, for a given grid, the next coarser grid has $1/8$
as many points), anisotropic problems require plane
relaxation to
obtain a good smoothing factor.\end{theorem}
Our purpose here is to examine the nonnumerical complexity of the
sparse elimination algorithm given in \cite{BANKSMITH}.
As was shown there, a general sparse elimination scheme based on the
bordering algorithm requires less storage for pointers and
row/column indices than more traditional implementations of general
sparse elimination. This is accomplished by exploiting the m-tree,
a particular spanning tree for the graph of the filled-in matrix.
Several good ordering algorithms (nested dissection and minimum degree)
are available for computing $P$ \cite{GEORGELIU}, \cite{ROSE72}.
Since our interest here does not
focus directly on the ordering, we assume for convenience that $P=I$,
or that $A$ has been preordered to reflect an appropriate choice of $P$.
\begin{proof} In this paper we consider two methods. The first method
is
basically the method considered with two differences:
first, we perform plane relaxation by a two-dimensional
multigrid method, and second, we use a slightly different
interpolation operator, which improves performance
for nearly singular problems. In the second method coarsening
is done by successively coarsening in each of the three
independent variables and then ignoring the intermediate
grids; this artifice simplifies coding considerably.
\end{proof}

Our purpose here is to examine the nonnumerical complexity of the
sparse elimination algorithm given in \cite{BANKSMITH}.
As was shown there, a general sparse elimination scheme based on the
bordering algorithm requires less storage for pointers and
row/column indices than more traditional implementations of general
sparse elimination. This is accomplished by exploiting the m-tree,
a particular spanning tree for the graph of the filled-in matrix.
\begin{Definition}{\rm We describe the two methods in \S 1.2. In \S 1.3 we
discuss
some remaining details.}
\end{Definition}
Our purpose here is to examine the nonnumerical complexity of the
sparse elimination algorithm given in \cite{BANKSMITH}.
As was shown there, a general sparse elimination scheme based on the
bordering algorithm requires less storage for pointers and
row/column indices than more traditional implementations of general
sparse elimination. This is accomplished by exploiting the m-tree,
a particular spanning tree for the graph of the filled-in matrix.
Several good ordering algorithms (nested dissection and minimum degree)
are available for computing $P$ \cite{GEORGELIU}, \cite{ROSE72}.
Since our interest here does not
focus directly on the ordering, we assume for convenience that $P=I$,
or that $A$ has been preordered to reflect an appropriate choice of $P$.

Our purpose here is to examine the nonnumerical complexity of the
sparse elimination algorithm given in \cite{BANKSMITH}.
As was shown there, a general sparse elimination scheme based on the
bordering algorithm requires less storage for pointers and
row/column indices than more traditional implementations of general
sparse elimination.
\begin{lemma} We discuss first the choice for $I_{k-1}^k$
which is a generalization. We assume that $G^{k-1}$ is
obtained from $G^k$
by standard coarsening; that is, if $G^k$ is a tensor product
grid $G_{x}^k \times G_{y}^k \times G_{z}^k$,
$G^{k-1}=G_{x}^{k-1} \times G_{y}^{k-1} \times G_{z}^{k-1}$,
where $G_{x}^{k-1}$ is obtained by deleting every other grid
point of $G_x^k$ and similarly for $G_{y}^k$ and $G_{z}^k$.
\end{lemma}
To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}. In \S 1.3, we analyze the complexity of the old and new
approaches to the intersection problem for the special case of
an $n \times n$ grid ordered by nested dissection. The special
structure of this problem allows us to make exact estimates of
the complexity. To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.
In \S 1.2, we review the bordering algorithm, and introduce
the sorting and intersection problems that arise in the
sparse formulation of the algorithm.
In \S 1.3, we analyze the complexity of the old and new
approaches to the intersection problem for the special case of
an $n \times n$ grid ordered by nested dissection. The special
structure of this problem allows us to make exact estimates of
the complexity. To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.
For the old approach, we show that the
complexity of the intersection problem is $O(n^{3})$, the same
as the complexity of the numerical computations. For the
new approach, the complexity of the second part is reduced to
$O(n^{2} (\log n)^{2})$.
To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}. In \S 1.3, we analyze the complexity of the old and new
approaches to the intersection problem for the special case of
an $n \times n$ grid ordered by nested dissection. The special
structure of this problem allows us to make exact estimates of
the complexity. To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.
This is accomplished by exploiting the m-tree,
a particular spanning tree for the graph of the filled-in matrix.
To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.
\subsection{Robustness.}\ We do not
attempt to present an overview
here, but rather attempt to focus on those results that
are relevant to our particular algorithm.
This section assumes prior knowledge of the role of graph theory
in sparse Gaussian elimination; surveys of this role are
available in \cite{ROSE72} and \cite{GEORGELIU}. More general
discussions of elimination trees are given in
\cite{LAW} - \cite{LIU2}, \cite{SCHREIBER}.
Thus, at the $k$th stage, the bordering algorithm consists of
solving the lower triangular system
\begin{equation} \label{1.2}
L_{k-1}v = c ,
\end{equation}
and setting
\begin{eqnarray*}
\ell &=& D^{-1}_{k-1}v , \\
\delta &=& \alpha - \ell^{t} v .
\end{eqnarray*}

\begin{figure}
\caption{This is a figure 1.1.}
\end{figure}
\section{Robustness.} We do not
attempt to present an overview
here, but rather attempt to focus on those results that
are relevant to our particular algorithm.

\subsection{Versatility.}\ The special
structure of this problem allows us to make exact estimates of
the complexity. For the old approach, we show that the
complexity of the intersection problem is $O(n^{3})$, the same
as the complexity of the numerical computations
\cite{GEORGELIU}, \cite{ROSEWHITTEN}. For the
new approach, the complexity of the second part is reduced to
$O(n^{2} (\log n)^{2})$.
To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}. In \S 1.3, we analyze the complexity of the old and new
approaches to the intersection problem for the special case of
an $n \times n$ grid ordered by nested dissection. The special
structure of this problem allows us to make exact estimates of
the complexity. To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.
In \S 1.2, we review the bordering algorithm, and introduce
the sorting and intersection problems that arise in the
sparse formulation of the algorithm.
In \S 1.3, we analyze the complexity of the old and new
approaches to the intersection problem for the special case of
an $n \times n$ grid ordered by nested dissection. The special
structure of this problem allows us to make exact estimates of
the complexity. To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.
For the old approach, we show that the
complexity of the intersection problem is $O(n^{3})$, the same
as the complexity of the numerical computations. For the
new approach, the complexity of the second part is reduced to
$O(n^{2} (\log n)^{2})$.
To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}. In \S 1.3, we analyze the complexity of the old and new
approaches to the intersection problem for the special case of
an $n \times n$ grid ordered by nested dissection. The special
structure of this problem allows us to make exact estimates of
the complexity. To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.
This is accomplished by exploiting the m-tree,
a particular spanning tree for the graph of the filled-in matrix.
To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.
\begin{thebibliography}{99}

%R.~E. Bank, {\em PLTMG users' guide, edition 5.0}, tech. report,
% Department of Mathematics, University of California, San Diego, CA, 1988.

%R.~E. Bank, T.~F. Dupont, and H.~Yserentant, {\em The hierarchical basis
% multigrid method}, Numer. Math., 52 (1988), pp.~427--458.

\bibitem{BANKSMITH}
R.~E. Bank and R.~K. Smith, {\em General sparse elimination requires no
permanent integer storage}, SIAM J. Sci. Stat. Comput., 8 (1987).

\bibitem{EISENSTAT}
S.~C. Eisenstat, M.~C. Gursky, M.~Schultz, and A.~Sherman, {\em
Algorithms and data structures for sparse symmetric Gaussian elimination},
SIAM J. Sci. Stat. Comput., 2 (1982), pp.~225--237.

\bibitem{GEORGELIU}
A.~George and J.~Liu, {\em Computer Solution of Large Sparse Positive
Definite Systems}, Prentice Hall, Englewood Cliffs, NJ, 1981.

\bibitem{LAW}
K.~H. Law and S.~J. Fenves, {\em A node addition model for symbolic
factorization}, ACM TOMS, 12 (1986), pp.~37--50.

\bibitem{LIU1}
J.~W.~H. Liu, {\em A compact row storage scheme for Cholesky factors
using elimination trees}, ACM TOMS, 12 (1986), pp.~127--148.

\bibitem{LIU2}
\sameauthor , {\em The role of
elimination trees in sparse factorization}, Tech. Report CS-87-12, Department
of Computer Science, York University, Ontario, Canada, 1987.

\bibitem{ROSE72}
D.~J. Rose, {\em A graph theoretic study of the numeric solution of
sparse positive definite systems}, in Graph Theory and Computing, Academic Press, New
York, 1972.

\bibitem{ROSE76}
D.~J. Rose, R.~E. Tarjan, and G.~S. Lueker, {\em Algorithmic aspects of
vertex elimination on graphs}, SIAM J. Comput., 5 (1976), pp.~226--283.

\bibitem{ROSEWHITTEN}
D.~J. Rose and G.~F. Whitten, {\em A recursive analysis of dissection
strategies}, in Sparse Matrix Computations, Academic Press, New York, 1976.

\bibitem{SCHREIBER}
R.~Schreiber, {\em A new implementation of sparse Gaussian elimination},
ACM TOMS, 8 (1982), pp.~256--276.

\end{thebibliography}
\end{document}

% End of ltexpprt.tex

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CUT HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ltexpprt.sty %%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This is ltexpprt.sty, a file of macros and definitions for creating a
% chapter for publication in the ACM/SIAM Preprint series using LaTeX.
% It is designed to produce double-column output.
% This file may be freely distributed but may not be altered in any way.
% Any comments or questions regarding these macros should be directed to:

% Society for Industrial and Applied Mathematics
% 3600 University City Science Center
% Philadelphia, PA 19104-2688
% Telephone: (215) 382-9800
% Fax: (215) 386-7999
% e-mail: gray@siam.org

% Report the version.
\message{*** ACM/SIAM LaTeX Preprint Series macro package, version 1.0,
September 24, 1990 ***}
\textheight 52.5pc \advance\textheight by \topskip

%% footnotes to be set 8/10
\def\footnotesize{\@setsize\footnotesize{10pt}\viiipt\@viiipt
\abovedisplayskip \z@
\abovedisplayshortskip\abovedisplayskip
\belowdisplayshortskip\abovedisplayskip
\def\@listi{\leftmargin\leftmargini \topsep 3pt plus 1pt minus 1pt
\parsep 2pt plus 1pt minus 1pt
\itemsep \parsep}}

\let\referencesize\footnotesize

\skip\footins 12pt plus 12pt

\def\footnoterule{\kern3\p@ \hrule width 3em} % the \hrule is .4pt high
\def\ps@plain{\let\@mkboth\@gobbletwo
\def\@oddfoot{{\hfil\small\thepage\hfil}}%
\def\@oddhead{}%
\def\@evenhead{}\def\@evenfoot{}}

\def\ps@headings{\let\@mkboth\markboth
\def\@oddfoot{}\def\@evenfoot{}%
\def\@evenhead{{\rm\thepage}\hfil{\small\leftmark}}%
\def\@oddhead{{\noindent\small\rightmark}\hfil{\rm\thepage}}%
\def\chaptermark##1{}%
\def\sectionmark##1{}\def\subsectionmark##1{}}

\def\ps@myheadings{\let\@mkboth\@gobbletwo
\def\@oddfoot{}\def\@evenfoot{}%
\def\@oddhead{\rlap{\normalsize\rm\rightmark}\hfil{\small\thepage}}%
\def\@evenhead{%\hfil{\small\@chapapp}\
{\small\thepage}\hfil\llap{\normalsize\rm\leftmark}}%
\def\chaptermark##1{}%
\def\sectionmark##1{}\def\subsectionmark##1{}}
\def\theequation{\arabic{section}.\arabic{equation}}

\def\section{\@startsection{section}{1}{0pt}{-12pt}{3pt}{\hyphenpenalty=\@M
\exhyphenpenalty=\@M\normalsize\bf}}
\def\subsection{\@startsection{subsection}{2}{0pt}{-12pt}{0pt}{\normalsize\bf}}
\def\subsubsection{\@startsection
{subsubsection}{3}{0pt}{-12pt}{0pt}{\normalsize\bf}}
\def\paragraph{\@startsection
{paragraph}{4}{\parindent}{0pt}{0pt}{\normalsize\bf}}
\def\subparagraph{\@startsection
{subparagraph}{4}{\parindent}{0pt}{0pt}{\normalsize\bf}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% THEOREMS, PROOFS, ALGORITHMS %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%% defined proof environment by theorem model (took out counter)
\def\newproof#1{\@nprf{#1}}
\def\@nprf#1#2{\@xnprf{#1}{#2}}
\def\@xnprf#1#2{\expandafter\@ifdefinable\csname #1\endcsname
\global\@namedef{#1}{\@prf{#1}{#2}}\global\@namedef{end#1}{\@endproof}}
\def\@prf#1#2{\@xprf{#1}{#2}}
\def\@xprf#1#2{\@beginproof{#2}{\csname the#1\endcsname}\ignorespaces}
%%% defined algorithm environment by theorem model
\def\newalgorithm#1{\@ifnextchar[{\@oalg{#1}}{\@nalg{#1}}}
\def\@nalg#1#2{%
\@ifnextchar[{\@xnalg{#1}{#2}}{\@ynalg{#1}{#2}}}
\def\@xnalg#1#2[#3]{\expandafter\@ifdefinable\csname #1\endcsname
{\@definecounter{#1}\@addtoreset{#1}{#3}%
\expandafter\xdef\csname the#1\endcsname{\expandafter\noexpand
\csname the#3\endcsname \@thmcountersep \@thmcounter{#1}}%
\global\@namedef{#1}{\@alg{#1}{#2}}\global\@namedef{end#1}{\@endalgorithm}}}
\def\@ynalg#1#2{\expandafter\@ifdefinable\csname #1\endcsname
{\@definecounter{#1}%
\expandafter\xdef\csname the#1\endcsname{\@thmcounter{#1}}%
\global\@namedef{#1}{\@alg{#1}{#2}}\global\@namedef{end#1}{\@endalgorithm}}}
\def\@oalg#1[#2]#3{\expandafter\@ifdefinable\csname #1\endcsname
{\global\@namedef{the#1}{\@nameuse{the#2}}%
\global\@namedef{#1}{\@alg{#2}{#3}}%
\global\@namedef{end#1}{\@endalgorithm}}}
\def\@alg#1#2{\refstepcounter
{#1}\@ifnextchar[{\@yalg{#1}{#2}}{\@xalg{#1}{#2}}}
\def\@xalg#1#2{\@beginalgorithm{#2}{\csname the#1\endcsname}\ignorespaces}
\def\@yalg#1#2[#3]{\@opargbeginalgorithm{#2}{\csname
the#1\endcsname}{#3}\ignorespaces}
\def\@beginproof#1{\rm \trivlist \item[\hskip \labelsep{\it #1.\/}]}
\def\@endproof{\outerparskip 0pt\endtrivlist}

\def\@begintheorem#1#2{\it \trivlist \item[\hskip \labelsep{\sc #1\ #2.}]}
\def\@opargbegintheorem#1#2#3{\it \trivlist
\item[\hskip \labelsep{\sc #1\ #2.\ (#3)}]}
\def\@endtheorem{\outerparskip 0pt\endtrivlist}

%\def\@begindefinition#1#2{\rm \trivlist \item[\hskip \labelsep{\sc #1\ #2.}]}
%\def\@opargbegindefinition#1#2#3{\rm \trivlist
% \item[\hskip \labelsep{\sc #1\ #2.\ (#3)}]}
%\def\@enddefinition{\outerparskip 0pt\endtrivlist}

\def\@beginalgorithm#1#2{\rm \trivlist \item[\hskip \labelsep{\sc #1\ #2.}]}
\def\@opargbeginalgorithm#1#2#3{\rm \trivlist
\item[\hskip \labelsep{\sc #1\ #2.\ (#3)}]}
\def\@endalgorithm{\outerparskip 6pt\endtrivlist}

\newskip\outerparskip
\newskip\@outerparskip

\def\trivlist{\parsep\outerparskip
\@trivlist \labelwidth\z@ \leftmargin\z@
\itemindent\parindent \def\makelabel##1{##1}}
\def\@trivlist{\topsep=0pt\@topsepadd\topsep
\if@noskipsec \leavevmode \fi
\ifvmode \advance\@topsepadd\partopsep \else \unskip\par\fi
\if@inlabel \@noparitemtrue \@noparlisttrue
\else \@noparlistfalse \@topsep\@topsepadd \fi
\advance\@topsep \parskip
\leftskip\z@\rightskip\@rightskip \parfillskip\@flushglue
\@setpar{\if@newlist\else{\@@par}\fi}%
\global\@newlisttrue \@outerparskip\parskip}

\def\endtrivlist{\if@newlist\@noitemerr\fi
\if@inlabel\indent\fi
\ifhmode\unskip \par\fi
\if@noparlist \else
\ifdim\lastskip >\z@ \@tempskipa\lastskip \vskip -\lastskip
\advance\@tempskipa\parskip \advance\@tempskipa -\@outerparskip
\vskip\@tempskipa
\fi\@endparenv\fi}
\newproof{@proof}{Proof}
\newenvironment{proof}{\begin{@proof}}{\end{@proof}}

\newtheorem{@theorem}{Theorem}[section]
\newenvironment{theorem}{\begin{@theorem}}{\end{@theorem}}

\newalgorithm{@algorithm}{Algorithm}[section]
\newenvironment{algorithm}{\begin{@algorithm}}{\end{@algorithm}}

\newtheorem{lemma}{Lemma}[section]
\newtheorem{fact}{Fact}[section]
\newtheorem{corollary}{Corollary}[section]
\newtheorem{axiom}{Axiom}[section]
\newtheorem{cond}{Condition}[section]
\newtheorem{property}{Property}[section]
\newtheorem{proposition}{Proposition}[section]
\newtheorem{Conjecture}{Conjecture}[section]
%\newtheorem{Corollary}[Theorem]{Corollary}
\newtheorem{Definition}{Definition}[section]
\newtheorem{Lemma}{Lemma}[section]
\newtheorem{Remark}{Remark}[section]
\newproof{Example}{Example}
\newproof{Method}{Method}
\newproof{Exercise}{Exercise}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% BIBLIOGRAPHY %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\def\thebibliography#1{%
\begin{flushleft}\normalsize\bf References\end{flushleft}
\addvspace{3pt}\nopagebreak\list
%% default is no labels, for those not using \cite or BibTeX
% {[\arabic{enumi}]} {\settowidth\labelwidth{[#1]}
{[\arabic{enumi}]}{\settowidth\labelwidth{mm}
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\usecounter{enumi}\@bibsetup}
\def\newblock{\hskip .11em plus .33em minus -.07em}
\sloppy\clubpenalty4000\widowpenalty4000
\sfcode`\.=1000\relax}

\def\@bibsetup{\itemindent=0pt \itemsep=0pt \parsep=0pt}

\def\sameauthor{\leavevmode\vrule height 2pt depth -1.6pt width 23pt}
%% End of ltexpprt.sty

%%%%%%%%%%%%%%%%%%%%%%%%% End of ltexpprt.all %%%%%%%%%%%%%%%%%%%%%%%