PyTables NetCDF version 3 emulation API.

This module provides an API that is nearly identical to Scientific.IO.NetCDF
(http://starship.python.net/~hinsen/ScientificPython/ScientificPythonManual/Scientific.html).

Some key differences between the Scientific.IO.NetCDF API and the pytables
NetCDF emulation API to keep in mind are:

1) data is stored in an HDF5 file instead of a netCDF file.

2) Although each variable can have only one unlimited dimension, it need
not be the first as in a true NetCDF file. Complex data types 'F'
(Complex32) and 'D' (Complex64) are supported in tables.NetCDF, but are
not supported in netCDF (or Scientific.IO.NetCDF). Files with variables
that have these datatypes, or an unlimited dimension other than the
first, cannot be converted to netCDF using h5tonc.

3) variables are compressed on disk by default using HDF5 zlib
compression with the 'shuffle' filter. If the 'least_significant_digit'
keyword is used when a variable is created with the createVariable
method, data will be truncated (quantized) before being written to the
file. This can significantly improve compression. For example, if
least_significant_digit=1, data will be quantized using
numarray.around(scale*data)/scale, where scale = 2**bits, and bits is
determined so that a precision of 0.1 is retained (in this case bits=4).
From http://www.cdc.noaa.gov/cdc/conventions/cdc_netcdf_standard.shtml:
"least_significant_digit -- power of ten of the smallest decimal place
in unpacked data that is a reliable value."
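The quantization scheme in item 3 can be sketched with modern NumPy (an illustration only: numarray is obsolete, the helper name `quantize` is made up here, and for integer values of least_significant_digit the simplified bits formula below agrees with the module's):

```python
import math
import numpy as np

def quantize(data, least_significant_digit):
    # choose bits so that a precision of 10**-least_significant_digit
    # is retained; for least_significant_digit=1, bits=4 and scale=16.
    bits = math.ceil(math.log(10.**least_significant_digit, 2))
    scale = 2.**bits
    return np.around(scale * data) / scale

data = np.array([1.23456, 9.87654])
q = quantize(data, 1)
# every quantized value agrees with the original to within 0.1,
# but the array now contains only multiples of 1/16, which
# compresses much better after the shuffle filter.
```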
4) data must be appended to a variable with an unlimited dimension using
the 'append' method of the netCDF variable object. In
Scientific.IO.NetCDF, data can be added along an unlimited dimension by
assigning it to a slice (there is no append method). The 'sync' method
synchronizes the size of all variables with an unlimited dimension by
filling in data using the default netCDF _FillValue, and is invoked
automatically when the NetCDFFile object is closed. In
Scientific.IO.NetCDF, the 'sync' method flushes the data to disk.
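The fill behaviour of 'sync' can be illustrated with plain NumPy (a sketch with made-up names; the real method works on HDF5 arrays, and the constant below is the netCDF default _FillValue for floats):

```python
import numpy as np

FILL_VALUE = 9.9692099683868690e+36  # netCDF default _FillValue for floats

def sync_fill(variables):
    """Pad every array to the longest length along axis 0 (the
    unlimited dimension) using the default fill value, the way
    sync() pads short variables before the file is closed."""
    len_max = max(v.shape[0] for v in variables.values())
    for name, v in variables.items():
        if v.shape[0] < len_max:
            pad = FILL_VALUE * np.ones((len_max - v.shape[0],) + v.shape[1:])
            variables[name] = np.concatenate([v, pad], axis=0)
    return len_max

variables = {'t': np.zeros((3, 2)), 'p': np.zeros((5, 2))}
n = sync_fill(variables)
# both variables now have length 5 along the unlimited dimension;
# rows 3 and 4 of 't' hold the fill value.
```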
5) the createVariable method has three extra optional keyword arguments
not found in the Scientific.IO.NetCDF interface:
'least_significant_digit' (see item (3) above), 'expectedsize' and
'filters'. The 'expectedsize' keyword applies only to variables with an
unlimited dimension, and is an estimate of the number of entries that
will be added along that dimension (default 1000). This estimate is
used to optimize HDF5 file access and memory usage. The 'filters'
keyword is a PyTables filters instance that describes how to store the
data on disk. The default corresponds to complevel=6, complib='zlib',
shuffle=1 and fletcher32=0.

6) data can be saved to a real netCDF file using the NetCDFFile class
method 'h5tonc' (if Scientific.IO.NetCDF is installed). The unlimited
dimension must be the first (for all variables in the file) in order to
use the 'h5tonc' method. Data can also be imported from a true netCDF
file and saved in an HDF5 file using the 'nctoh5' class method.

7) a list of attributes corresponding to global netCDF attributes
defined in the file can be obtained with the NetCDFFile ncattrs method.
Similarly, netCDF variable attributes can be obtained with the
NetCDFVariable ncattrs method.

8) you should not define global or variable attributes that start with
'_NetCDF_'; those names are reserved for internal use.

9) output similar to 'ncdump -h' can be obtained by simply printing the
NetCDFFile instance.
A tables.NetCDF file consists of array objects (either EArrays or
CArrays) located in the root group of a pytables hdf5 file. Each of the
array objects must have a dimensions attribute, consisting of a tuple
of dimension names (the length of this tuple should be the same as the
rank of the array object). Any such objects with one of the supported
data types in a pytables file that conforms to this simple structure
can be read with the tables.NetCDF module.

Note: This module does not yet create HDF5 files that are compatible
with netCDF version 4.

Datasets created with the PyTables netCDF emulation API can be shared
over the internet with the OPeNDAP protocol (http://opendap.org), via
the python opendap module (http://opendap.oceanografia.org). A plugin
for the python opendap server is included with the pytables
distribution (contrib/h5_dap_plugin.py). Simply copy that file into the
'plugins' directory of the opendap python module source distribution,
run 'setup.py install', point the opendap server to the directory
containing your hdf5 files, and away you go. Any OPeNDAP aware client
(such as Matlab or IDL) can now access your data over http as if it
were a local disk file.

Jeffrey Whitaker <jeffrey.s.whitaker@noaa.gov>

__version__ = '20051110'
import math, tables, numarray

# need Numeric for h5 <--> netCDF conversion.
try:
    import Numeric
    Numeric_imported = True
except ImportError:
    Numeric_imported = False

# need Scientific to convert to/from real netCDF files.
try:
    import Scientific.IO.NetCDF as RealNetCDF
    ScientificIONetCDF_imported = True
except ImportError:
    ScientificIONetCDF_imported = False
# dictionary that maps pytables types to single-character Numeric typecodes.
_typecode_dict = {'Float64':'d',

# dictionary that maps single character Numeric typecodes to netCDF
# data types (False if no corresponding netCDF datatype exists).
_netcdftype_dict = {'s':'short','1':'byte','l':'int','i':'int',
                    'f':'float','d':'double','c':'character','F':False,'D':False}

# values to print out in __repr__ method.
_reprtype_dict = {'s':'short','1':'byte','l':'int','i':'int',
                  'f':'float','d':'double','c':'character','F':'complex','D':'double_complex'}

# _NetCDF_FillValue defaults taken from the netCDF 3.6.1 header file.
_fillvalue_dict = {'f': 9.9692099683868690e+36,
                   'd': 9.9692099683868690e+36, # near 15 * 2^119
                   'F': 9.9692099683868690e+36+0j, # next two I made up
                   'D': 9.9692099683868690e+36+0j, # (no Complex in netCDF)
                   's': -32767, # (short)-32767
                   'l': -2147483647, # (int)-2147483647
                   'i': -2147483647,
                   '1': -127, # (signed char)-127
                   'c': chr(0)} # (char)0
def _quantize(data,least_significant_digit):
    """quantize data to improve compression.

    data is quantized using around(scale*data)/scale, where scale
    is 2**bits, and bits is determined from the
    least_significant_digit. For example, if
    least_significant_digit=1, bits will be 4."""
    precision = 10.**-least_significant_digit
    exp = math.log(precision,10)
    if exp < 0:
        exp = int(math.floor(exp))
    else:
        exp = int(math.ceil(exp))
    bits = math.ceil(math.log(10.**-exp,2))
    scale = pow(2,bits)
    return numarray.around(scale*data)/scale
class NetCDFFile:
    """netCDF file. Constructor: NetCDFFile(filename, mode="r", history=None)

    filename -- Name of hdf5 file to hold data.

    mode -- access mode. "r" means read-only; no data can be modified.
            "w" means write; a new file is created, an existing
            file with the same name is deleted. "a" means append
            (in analogy with serial files); an existing file is
            opened for reading and writing.

    history -- a string that is used to define the global NetCDF
               attribute 'history'.

    A NetCDFFile object has two standard attributes: 'dimensions' and
    'variables'. The values of both are dictionaries, mapping
    dimension names to their associated lengths and variable names to
    variables, respectively. Application programs should never modify
    these dictionaries.

    A list of attributes corresponding to global netCDF attributes
    defined in the file can be obtained with the ncattrs method.
    Global file attributes are created by assigning to an attribute of
    the NetCDFFile object.
    """
    def __init__(self,filename,mode='r',history=None):
        self._NetCDF_h5file = tables.openFile(filename, mode=mode)
        self._NetCDF_mode = mode
        if mode == 'r' or mode == 'a':
            # file already exists, set up variable and dimension dicts.
            self.dimensions = {}
            self.variables = {}
            for var in self._NetCDF_h5file.root:
                if not isinstance(var,tables.CArray) and not isinstance(var,tables.EArray):
                    print 'object',var,'is not an EArray or CArray, skipping ..'
                    continue
                if var.stype not in _typecode_dict.keys():
                    print 'object',var.name,'is not a supported datatype (',var.stype,'), skipping ..'
                    continue
                if var.attrs.__dict__.has_key('dimensions'):
                    for n,dim in enumerate(var.attrs.__dict__['dimensions']):
                        if var.extdim >= 0 and n == var.extdim:
                            # unlimited dimension.
                            val = None
                        else:
                            val = int(var.shape[n])
                        if not self.dimensions.has_key(dim):
                            self.dimensions[dim] = val
                        else:
                            # raise an exception if a dimension of that
                            # name has already been encountered with a
                            # different length.
                            if self.dimensions[dim] != val:
                                raise KeyError,'dimension lengths not consistent'
                else:
                    print 'object',var.name,'does not have a dimensions attribute, skipping ..'
                    continue
                self.variables[var.name]=_NetCDFVariable(var,self)
            if len(self.variables.keys()) == 0:
                raise IOError, 'file does not contain any objects compatible with tables.NetCDF'
        else:
            # initialize dimension and variable dictionaries for a new file.
            self.dimensions = {}
            self.variables = {}
        # set history attribute.
        if history != None:
            self.history = history
    def createDimension(self,dimname,size):
        """Creates a new dimension with the given "dimname" and
        "size". "size" must be a positive integer or 'None',
        which stands for the unlimited dimension. There can
        be only one unlimited dimension per dataset."""
        self.dimensions[dimname] = size
        # make sure there is only one unlimited dimension.
        if self.dimensions.values().count(None) > 1:
            raise ValueError, 'only one unlimited dimension allowed!'
    def createVariable(self,varname,datatype,dimensions,least_significant_digit=None,expectedsize=1000,filters=None):
        """Creates a new variable with the given "varname", "datatype", and
        "dimensions". The "datatype" is a one-letter string with the same
        meaning as the typecodes for arrays in module Numeric; in
        practice the predefined type constants from Numeric should
        be used. "dimensions" must be a tuple containing dimension
        names (strings) that have been defined previously.
        The unlimited dimension must be the first (leftmost)
        dimension of the variable.

        If the optional keyword parameter 'least_significant_digit' is
        specified, multidimensional variables will be truncated
        (quantized). This can significantly improve compression. For
        example, if least_significant_digit=1, data will be quantized
        using Numeric.around(scale*data)/scale, where scale = 2**bits,
        and bits is determined so that a precision of 0.1 is retained
        (in this case bits=4).
        From http://www.cdc.noaa.gov/cdc/conventions/cdc_netcdf_standard.shtml:
        "least_significant_digit -- power of ten of the smallest decimal
        place in unpacked data that is a reliable value."

        The 'expectedsize' keyword applies only to variables with an
        unlimited dimension - it is the expected number of entries
        that will be added along the unlimited dimension (default
        1000). If you think the actual number of entries will be an
        order of magnitude different than the default, consider
        providing a guess; this will optimize HDF5 B-Tree creation
        and management, processing time, and memory usage.

        The 'filters' keyword also applies only to variables with
        an unlimited dimension, and is a PyTables filters instance
        that describes how to store an enlargeable array on disk.
        The default is tables.Filters(complevel=6, complib='zlib',
        shuffle=1, fletcher32=0).

        The return value is the NetCDFVariable object describing the
        new variable."""
        # create NetCDFVariable instance.
        var = NetCDFVariable(varname,self,datatype,dimensions,least_significant_digit=least_significant_digit,expectedsize=expectedsize,filters=filters)
        # update the variable dictionary.
        self.variables[varname] = var
        return var
    def close(self):
        """Closes the file (after calling the sync method)"""
        self.sync()
        self._NetCDF_h5file.close()
    def sync(self):
        """synchronize variables along unlimited dimension, filling in data
        with the default netCDF _FillValue. Returns the length of the
        unlimited dimension. Invoked automatically when the NetCDFFile
        object is closed."""
        # find max length of unlimited dimension.
        len_unlim_dims = []
        for varname,var in self.variables.iteritems():
            if var.extdim >= 0:
                len_unlim_dims.append(var.shape[var.extdim])
        if not len_unlim_dims:
            return 0
        len_max = max(len_unlim_dims)
        if self._NetCDF_mode == 'r':
            return len_max # just return max length of unlim dim if read-only
        # fill in variables that have an unlimited
        # dimension with _FillValue if they have fewer
        # entries along unlimited dimension than the max.
        for varname,var in self.variables.iteritems():
            len_var = var.shape[var.extdim]
            if var.extdim >= 0 and len_var < len_max:
                shp = list(var.shape)
                shp[var.extdim] = len_max-len_var
                var._NetCDF_varobj.append(var._NetCDF_FillValue*numarray.ones(shp,var.typecode()))
        return len_max
    def __repr__(self):
        """produces output similar to 'ncdump -h'."""
        info = [self._NetCDF_h5file.filename+' {\n']
        info.append('dimensions:\n')
        # get current length of unlimited dimension.
        len_unlim = int(self.sync())
        for key,val in self.dimensions.iteritems():
            if val is None:
                info.append(' '+key+' = UNLIMITED ; // ('+repr(len_unlim)+' currently)\n')
            else:
                info.append(' '+key+' = '+repr(val)+' ;\n')
        info.append('variables:\n')
        for varname in self.variables.keys():
            var = self.variables[varname]
            dim = var.dimensions
            type = _reprtype_dict[var.typecode()]
            info.append(' '+type+' '+varname+str(dim)+' ;\n')
            for key in var.ncattrs():
                val = getattr(var,key)
                info.append('  '+varname+':'+key+' = '+repr(val)+' ;\n')
        info.append('// global attributes:\n')
        for key in self.ncattrs():
            val = getattr(self,key)
            info.append('  :'+key+' = '+repr(val)+' ;\n')
        info.append('}\n')
        return ''.join(info)
    def __setattr__(self,name,value):
        # if name is 'dimensions', 'variables', or begins with
        # '_NetCDF_', it is a temporary at the python level
        # (not stored in the hdf5 file).
        if not name.startswith('_') and name not in ['dimensions','variables']:
            setattr(self._NetCDF_h5file.root._v_attrs,name,value)
        elif not name.endswith('__'):
            self.__dict__[name]=value

    def __getattr__(self,name):
        if name.startswith('__') and name.endswith('__'):
            raise AttributeError
        elif name.startswith('_NetCDF_') or name in ['dimensions','variables']:
            return self.__dict__[name]
        else:
            if self.__dict__.has_key(name):
                return self.__dict__[name]
            else:
                return self._NetCDF_h5file.root._v_attrs.__dict__[name]

    def ncattrs(self):
        """return attributes corresponding to netCDF file attributes"""
        return [attr for attr in self._NetCDF_h5file.root._v_attrs._v_attrnamesuser]
    def h5tonc(self,filename,packshort=False,scale_factor=None,add_offset=None):
        """convert to a true netCDF file (filename). Requires the
        Scientific.IO.NetCDF module. If packshort=True, variables are
        packed as short integers using the dictionaries scale_factor
        and add_offset. The dictionary keys are the variable names
        in the hdf5 file to be packed as short integers. Each
        variable's unlimited dimension must be the slowest varying
        (the first dimension for C/Python, the last for Fortran)."""
        if not ScientificIONetCDF_imported or not Numeric_imported:
            print 'Scientific.IO.NetCDF and Numeric must be installed to convert to NetCDF'
            return
        ncfile = RealNetCDF.NetCDFFile(filename,'w')
        # create dimensions.
        for dimname,size in self.dimensions.iteritems():
            ncfile.createDimension(dimname,size)
        # create global attributes.
        for key in self.ncattrs():
            setattr(ncfile,key,getattr(self,key))
        # create variables.
        for varname,varin in self.variables.iteritems():
            dims = varin.dimensions
            dimsizes = [self.dimensions[dim] for dim in dims]
            if None in dimsizes and dimsizes.index(None) != 0:
                raise ValueError,'unlimited or enlargeable dimension must be most significant (slowest changing, or first) one in order to convert to a true netCDF file'
            if packshort and scale_factor.has_key(varname) and add_offset.has_key(varname):
                print 'packing %s as short integers ...'%(varname)
                datatype = 's'
                dopackshort = True
            else:
                datatype = varin.typecode()
                dopackshort = False
            if not _netcdftype_dict[datatype]:
                raise ValueError,'datatype not supported in netCDF, cannot convert to a true netCDF file'
            varout = ncfile.createVariable(varname,datatype,dims)
            for key in varin.ncattrs():
                setattr(varout,key,getattr(varin,key))
            if dopackshort:
                setattr(varout,'scale_factor',scale_factor[varname])
                setattr(varout,'add_offset',add_offset[varname])
            # write data.
            for n in range(varin.shape[0]):
                if dopackshort:
                    varout[n] = ((1./scale_factor[varname])*(varin[n] - add_offset[varname])).astype('s')
                elif datatype == 'c':
                    varout[n] = Numeric.reshape(Numeric.array(varin[n].flat,'c'),varin.shape[1:])
                else:
                    varout[n] = varin[n]
        ncfile.close()
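The scale/offset packing used by h5tonc's packshort option can be sketched in modern NumPy (illustrative only: the function names are made up, Numeric is obsolete, and this sketch rounds before converting where the method above truncates with astype('s')):

```python
import numpy as np

def pack_short(data, scale_factor, add_offset):
    # packed = (data - add_offset) / scale_factor, stored as 16-bit ints,
    # mirroring varout[n] = ((1./scale_factor)*(varin[n]-add_offset)).astype('s')
    return np.round((1. / scale_factor) * (data - add_offset)).astype(np.int16)

def unpack_short(packed, scale_factor, add_offset):
    # inverse transform; recovers the data to within scale_factor/2.
    return scale_factor * packed + add_offset

data = np.array([273.15, 293.65])   # e.g. temperatures in Kelvin
packed = pack_short(data, scale_factor=0.01, add_offset=273.0)
unpacked = unpack_short(packed, 0.01, 273.0)
```

Halving the storage this way costs at most scale_factor/2 of absolute error, so scale_factor is chosen from the physically meaningful precision of the variable.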
    def nctoh5(self,filename,unpackshort=True,filters=None):
        """convert a true netCDF file (filename) to a hdf5 file
        compatible with this module. Requires the Scientific.IO.NetCDF
        module. If unpackshort=True, variables stored as short
        integers with a scale and offset are unpacked to Float32
        variables in the hdf5 file. If the least_significant_digit
        attribute is set, the data is quantized to improve
        compression. Use the filters keyword to change the default
        tables.Filters instance used for compression (see the
        createVariable docstring for details)."""
        if not ScientificIONetCDF_imported or not Numeric_imported:
            print 'Scientific.IO.NetCDF and Numeric must be installed to convert from NetCDF'
            return
        ncfile = RealNetCDF.NetCDFFile(filename,'r')
        # create dimensions.
        for dimname,size in ncfile.dimensions.iteritems():
            self.createDimension(dimname,size)
        # create variables.
        for varname,ncvar in ncfile.variables.iteritems():
            if hasattr(ncvar,'least_significant_digit'):
                lsd = ncvar.least_significant_digit
            else:
                lsd = None
            if unpackshort and hasattr(ncvar,'scale_factor') and hasattr(ncvar,'add_offset'):
                dounpackshort = True
                datatype = 'f'
            else:
                dounpackshort = False
                datatype = ncvar.typecode()
            var = self.createVariable(varname,datatype,ncvar.dimensions,least_significant_digit=lsd,filters=filters)
            for key,val in ncvar.__dict__.iteritems():
                if dounpackshort and key in ['add_offset','scale_factor']: continue
                if dounpackshort and key == 'missing_value': val=1.e30
                # convert rank-0 Numeric array to python float/int/string.
                if isinstance(val,type(Numeric.array([1]))) and len(val)==1:
                    val = val[0]
                setattr(var,key,val)
        # fill variables with data.
        nobjects = 0; nbytes = 0 # Initialize counters
        for varname,ncvar in ncfile.variables.iteritems():
            var = self.variables[varname]
            extdim = var._NetCDF_varobj.extdim
            if unpackshort and hasattr(ncvar,'scale_factor') and hasattr(ncvar,'add_offset'):
                dounpackshort = True
            else:
                dounpackshort = False
            if extdim >= 0:
                # write data to enlargeable array one chunk of records at a
                # time (so the whole array doesn't have to be kept in memory).
                nrowsinbuf = var._NetCDF_varobj._v_maxTuples
                # The slices parameter for var.__getitem__()
                slices = [slice(0, dim, 1) for dim in ncvar.shape]
                start = 0; stop = ncvar.shape[extdim]; step = nrowsinbuf
                if step < 1: step = 1
                # Start the copy itself
                for start2 in range(start, stop, step):
                    # Save the records on disk
                    stop2 = min(start2+step, stop)
                    # Set the proper slice in the extensible dimension
                    slices[extdim] = slice(start2, stop2, 1)
                    idata = ncvar[tuple(slices)]
                    if dounpackshort:
                        # unpack short integers to Float32.
                        tmpdata = (ncvar.scale_factor*idata+ncvar.add_offset).astype('f')
                        # convert missing values to 1.e30.
                        if hasattr(ncvar,'missing_value'):
                            tmpdata = Numeric.where(idata >= ncvar.missing_value, 1.e30, tmpdata)
                    else:
                        tmpdata = idata
                    var.append(tmpdata)
            else:
                # no unlimited dimension, write the data in one go.
                idata = ncvar[:]
                if dounpackshort:
                    # unpack short integers to Float32.
                    tmpdata = (ncvar.scale_factor*idata+ncvar.add_offset).astype('f')
                    # convert missing values to 1.e30.
                    if hasattr(ncvar,'missing_value'):
                        tmpdata = Numeric.where(idata >= ncvar.missing_value, 1.e30, tmpdata)
                else:
                    tmpdata = idata
                if ncvar.typecode() == 'c':
                    # numarray string arrays with itemsize=1 used for netCDF char arrays.
                    # It is important to set the padding character to NULL
                    # in order to avoid the '' string to become a ' '
                    # after de-serializing. See:
                    # http://sourceforge.net/tracker/index.php?func=detail&aid=1304615&group_id=1369&atid=450446
                    # F. Altet 2005-11-07
                    var[:] = numarray.strings.array(tmpdata.tolist(),
                                                    padc=_fillvalue_dict[ncvar.typecode()])
                else:
                    # if data is in a CArray, convert to numarray
                    # (done automatically for EArrays)
                    if isinstance(var._NetCDF_varobj,tables.CArray):
                        tmpdata = tables.utils.convertToNA(tmpdata,var._NetCDF_varobj.atom)
                    var[:] = tmpdata
            # Increment the counters
            nobjects += 1
            nbytes += reduce(lambda x,y:x*y, var._NetCDF_varobj.shape) * var._NetCDF_varobj.itemsize
        # create global attributes.
        for key,val in ncfile.__dict__.iteritems():
            # convert Numeric rank-0 array to a python float/int/string.
            if isinstance(val,type(Numeric.array([1]))) and len(val)==1:
                val = val[0]
            # if attribute is a Numeric array, convert to python list.
            if isinstance(val,type(Numeric.array([1]))) and len(val)>1:
                val = val.tolist()
            setattr(self,key,val)
        ncfile.close()
        return nobjects, nbytes
class NetCDFVariable:
    """Variable in a netCDF file.

    NetCDFVariable objects are constructed by calling the method
    'createVariable' on the NetCDFFile object.

    NetCDFVariable objects behave much like array objects defined in
    module Numeric, except that their data resides in a file. Data is
    read by indexing and written by assigning to an indexed subset;
    the entire array can be accessed by the index '[:]'.

    Variables with an unlimited dimension can be compressed on
    disk (by default, zlib compression (level=6) and the HDF5
    'shuffle' filter are used). The default can be changed by passing
    a tables.Filters instance to createVariable via the filters
    keyword argument. Truncating the data to a precision specified by
    the least_significant_digit optional keyword argument to
    createVariable will significantly improve compression.

    A list of attributes corresponding to variable attributes defined
    in the netCDF file can be obtained with the ncattrs method.
    """
    def __init__(self, varname, NetCDFFile, datatype, dimensions, least_significant_digit=None,expectedsize=1000,filters=None):
        if datatype not in _netcdftype_dict.keys():
            raise ValueError, 'datatype must be one of %s'%_netcdftype_dict.keys()
        self._NetCDF_parent = NetCDFFile
        _NetCDF_FillValue = _fillvalue_dict[datatype]
        vardimsizes = []
        for d in dimensions:
            vardimsizes.append(NetCDFFile.dimensions[d])
        extdim = -1; ndim = 0
        # find the enlargeable (unlimited) dimension, if any.
        for vardim in vardimsizes:
            if vardim == None:
                extdim = ndim
            ndim = ndim + 1
        if extdim >= 0:
            # set shape to 0 for extdim.
            vardimsizes[extdim] = 0
        if datatype == 'c':
            # Special case for Numeric character objects
            # (on which base Scientific.IO.NetCDF works)
            atom = tables.StringAtom(shape=tuple(vardimsizes), length=1)
        else:
            atom = tables.Atom(dtype=datatype, shape=tuple(vardimsizes))
        if filters is None:
            # default filters instance.
            filters = tables.Filters(complevel=6,complib='zlib',shuffle=1)
        # check that unlimited dimension is first (extdim=0).
        if extdim >= 0:
            # enlargeable dimension, use EArray
            self._NetCDF_varobj = NetCDFFile._NetCDF_h5file.createEArray(
                where=NetCDFFile._NetCDF_h5file.root,
                name=varname,atom=atom,title=varname,
                filters=filters,
                expectedrows=expectedsize)
        else:
            # no enlargeable dimension, use CArray
            self._NetCDF_varobj = NetCDFFile._NetCDF_h5file.createCArray(
                where=NetCDFFile._NetCDF_h5file.root,
                name=varname,shape=tuple(vardimsizes),
                atom=atom,title=varname,filters=filters)
            # fill with _FillValue
            if datatype == 'c':
                # numarray string arrays with itemsize=1 used for char arrays.
                self[:] = numarray.strings.array(shape=tuple(vardimsizes),itemsize=1)
            else:
                self[:] = _NetCDF_FillValue*numarray.ones(tuple(vardimsizes),datatype)
        if least_significant_digit != None:
            setattr(self._NetCDF_varobj.attrs,'least_significant_digit',least_significant_digit)
        setattr(self._NetCDF_varobj.attrs,'dimensions',dimensions)
        self._NetCDF_FillValue = _NetCDF_FillValue
    def __setitem__(self,key,data):
        # if assigning to a CArray, convert to numarray.
        # (done automatically for EArrays)
        if isinstance(self._NetCDF_varobj,tables.CArray):
            data = tables.utils.convertToNA(data,self._NetCDF_varobj.atom)
        if hasattr(self,'least_significant_digit'):
            self._NetCDF_varobj[key] = _quantize(data,self.least_significant_digit)
        else:
            self._NetCDF_varobj[key] = data

    def __getitem__(self,key):
        return self._NetCDF_varobj[key]

    def __len__(self):
        return int(self._NetCDF_varobj.shape[0])
    def __setattr__(self,name,value):
        # if name begins with '_NetCDF_', it is a temporary at the python level
        # (not stored in the hdf5 file).
        # dimensions is a read only attribute
        if name in ['dimensions']:
            raise KeyError, '"dimensions" is a read-only attribute - cannot modify'
        if not name.startswith('_NetCDF_'):
            setattr(self._NetCDF_varobj.attrs,name,value)
        elif not name.endswith('__'):
            self.__dict__[name]=value

    def __getattr__(self,name):
        if name.startswith('__') and name.endswith('__'):
            raise AttributeError
        elif name.startswith('_NetCDF_'):
            return self.__dict__[name]
        else:
            if self._NetCDF_varobj.__dict__.has_key(name):
                return self._NetCDF_varobj.__dict__[name]
            else:
                return self._NetCDF_varobj.attrs.__dict__[name]
    def typecode(self):
        """return a single character Numeric typecode.

        'd' == Float64, 'f' == Float32, 'l' == Int32,
        'i' == Int32, 's' == Int16, '1' == Int8,
        'c' == StringType (length 1), 'F' == Complex32 and 'D' == Complex64.
        The corresponding netCDF data types are
        'double', 'float', 'int', 'int', 'short', 'byte' and 'character'
        ('D' and 'F' have no corresponding netCDF data types)."""
        return _typecode_dict[self._NetCDF_varobj.stype]

    def ncattrs(self):
        """return attributes corresponding to netCDF variable attributes"""
        return [attr for attr in self._NetCDF_varobj.attrs._v_attrnamesuser if attr != 'dimensions']
    def append(self,data):
        """
        Append data along unlimited dimension of a NetCDFVariable.

        The data must have either the same number of dimensions as the NetCDFVariable
        instance that it is being appended to, or one less. If it has one less
        dimension, it is assumed that the missing dimension is a singleton dimension
        corresponding to the unlimited dimension of the NetCDFVariable.

        If the NetCDFVariable has a least_significant_digit attribute,
        the data is truncated (quantized) to improve compression.
        """
        if self._NetCDF_parent._NetCDF_mode == 'r':
            raise IOError, 'file is read only'
        # if data is not an array, try to make it so.
        data = numarray.array(data,self.typecode())
        # check to make sure there is an unlimited dimension.
        # (i.e. data is in an EArray).
        extdim = self._NetCDF_varobj.extdim
        if extdim < 0:
            raise IndexError, 'variable has no unlimited dimension'
        # name of unlimited dimension.
        extdim_name = self.dimensions[extdim]
        # special case that data array is same
        # shape as EArray, minus the enlargeable dimension.
        # if so, add an extra singleton dimension.
        if len(data.shape) != len(self._NetCDF_varobj.shape):
            shapem1 = ()
            for n,dim in enumerate(self._NetCDF_varobj.shape):
                if n != extdim:
                    shapem1 = shapem1+(dim,)
            if data.shape == shapem1:
                shapenew = list(self._NetCDF_varobj.shape)
                shapenew[extdim] = 1
                data = numarray.reshape(data,shapenew)
            else:
                raise IndexError,'data must either have same number of dimensions as variable, or one less (excluding unlimited dimension)'
        # append the data to the variable object.
        if hasattr(self,'least_significant_digit'):
            self._NetCDF_varobj.append(_quantize(data,self.least_significant_digit))
        else:
            self._NetCDF_varobj.append(data)
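The singleton-dimension handling in append can be mimicked with plain NumPy (a sketch, not the module's code; the helper name is made up, and extdim plays the role of the unlimited dimension's index):

```python
import numpy as np

def conform_for_append(data, var_shape, extdim=0):
    """Give data a length-1 axis at extdim if it has one fewer
    dimension than the variable, the way NetCDFVariable.append
    treats a single record passed without its unlimited axis."""
    if data.ndim == len(var_shape):
        return data
    # the variable's shape with the unlimited dimension removed.
    shapem1 = var_shape[:extdim] + var_shape[extdim+1:]
    if data.shape == shapem1:
        return np.expand_dims(data, axis=extdim)
    raise IndexError('data must have the same number of dimensions '
                     'as the variable, or one less')

var_shape = (0, 4, 5)       # unlimited (enlargeable) first dimension
rec = np.zeros((4, 5))      # one record, missing the unlimited axis
out = conform_for_append(rec, var_shape)   # shape becomes (1, 4, 5)
```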
    def assignValue(self,value):
        """
        Assigns value to the variable.
        """
        if self._NetCDF_varobj.extdim >= 0:
            raise IndexError, 'assignValue cannot be used with variables that have an unlimited dimension, use append instead'
        self[:] = value

    def getValue(self):
        """
        Returns the value of the variable.
        """
        return self[:]
# the following class is only used internally to create netCDF variable
# objects from Array objects read in from an hdf5 file.
class _NetCDFVariable(NetCDFVariable):
    def __init__(self, var, NetCDFFile):
        self._NetCDF_parent = NetCDFFile
        self._NetCDF_varobj = var
        self._NetCDF_FillValue = _fillvalue_dict[self.typecode()]