Call the Doctor - HDF(5) Clinic

Table of Contents

Clinic 2021-05-11

Your Questions

  • Where is the page that I'm showing?
  • How did we prepare the webinar radial diagrams?

Last week's highlights

Tips, tricks, & insights

Coming soon

  • What happens to open HDF5 handles/IDs when your program ends?
    • Suggested by Quincey Koziol (LBNL)
    • We'll take it in pieces
      • Current behavior
      • How async I/O changes that picture
  • Other topics of interest?

    Let us know!

Clinic 2021-05-04

Your Questions

???

Last week's highlights

Tips, tricks, & insights

  • What is H5S_ALL all about?

    Passed as a dataspace argument, H5S_ALL is a stand-in for an actual dataspace handle. As the file dataspace argument, it means "the dataset's entire dataspace, with all elements selected." As the memory dataspace argument (used below), it means "reuse the file dataspace and its selection for the memory buffer." In the snippet, the hyperslab selection in fspace therefore also determines which elements of new_elts get written.
        {
            __label__ fail_update, fail_fspace, fail_dset, fail_file;
            // ret_val is assumed to be declared in the enclosing scope (e.g., in main)
            hid_t file, dset, fspace;
    
            unsigned mode           = H5F_ACC_RDWR;
            char     file_name[]    = "d1.h5";
            char     dset_name[]    = "σύνολο/δεδομένων";
            int      new_elts[6][2] = {{-1, 1}, {-2, 2}, {-3, 3}, {-4, 4},
                                       {-5, 5}, {-6, 6}};
    
            if ((file = H5Fopen(file_name, mode, H5P_DEFAULT))
                == H5I_INVALID_HID) {
                ret_val = EXIT_FAILURE;
                goto fail_file;
            }
            if ((dset = H5Dopen2(file, dset_name, H5P_DEFAULT))
                == H5I_INVALID_HID) {
                ret_val = EXIT_FAILURE;
                goto fail_dset;
            }
            // get the dataset's dataspace
            if ((fspace = H5Dget_space(dset)) == H5I_INVALID_HID) {
                ret_val = EXIT_FAILURE;
                goto fail_fspace;
            }
            // select the first 5 elements in odd positions
            if (H5Sselect_hyperslab(fspace, H5S_SELECT_SET,
                                    (hsize_t[]){1},
                                    (hsize_t[]){2},
                                    (hsize_t[]){5},
                                    NULL) < 0) {
                ret_val = EXIT_FAILURE;
                goto fail_update;
            }
    
            // (implicitly) select and write the first 5 elements of the second
            // column of NEW_ELTS
            if (H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, fspace, H5P_DEFAULT,
                         new_elts) < 0)
                ret_val = EXIT_FAILURE;
    
    fail_update:
            H5Sclose(fspace);
    fail_fspace:
            H5Dclose(dset);
    fail_dset:
            H5Fclose(file);
    fail_file:;
        }
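
    For comparison, here is a minimal sketch of the explicit memory dataspace that the H5S_ALL argument above stands in for, assuming (as in the snippet) a 12-element dataset and the 6x2 NEW_ELTS buffer:

        // Hypothetical continuation of the snippet above, before the labels:
        // view NEW_ELTS as 6x2 and select the first 5 elements of column 1.
        hid_t mspace = H5Screate_simple(2, (hsize_t[]){6, 2}, NULL);
        if (H5Sselect_hyperslab(mspace, H5S_SELECT_SET,
                                (hsize_t[]){0, 1},  // start: row 0, column 1
                                NULL,               // stride: 1 in each dimension
                                (hsize_t[]){5, 1},  // count: 5 rows, 1 column
                                NULL) < 0)
            ret_val = EXIT_FAILURE;
        else if (H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT,
                          new_elts) < 0)
            ret_val = EXIT_FAILURE;
        H5Sclose(mspace);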
    
    

Coming soon

  • Fixed- vs. variable-length string performance cage match
    • Contributed by Steven (Canada Dry) Varga
    • You don't want to miss that one!
  • What happens to open HDF5 handles/IDs when your program ends?
    • Suggested by Quincey Koziol (LBNL)
    • We'll take it in pieces
      • Current behavior
      • How async I/O changes that picture
  • Other topics of interest?

    Let us know!

Clinic 2021-04-27

Your questions

  • Question 1

    Last week you mentioned that one might use the Fortran version of the HDF5 library from C/C++ when working with column-major data. Could you say more about this? Is the difference simply in how the arguments to the library functions (e.g., H5Screate, H5Sselect_hyperslab) are interpreted, or is it possible to discern from the file itself whether the data is column-major or row-major?

Last week's highlights

Tips, tricks, & insights

  • The h5stat tool
    Usage: h5stat [OPTIONS] file
    
          OPTIONS
         -h, --help            Print a usage message and exit
         -V, --version         Print version number and exit
         -f, --file            Print file information
         -F, --filemetadata    Print file space information for file's metadata
         -g, --group           Print group information
         -l N, --links=N       Set the threshold for the # of links when printing
                               information for small groups.  N is an integer greater
                               than 0.  The default threshold is 10.
         -G, --groupmetadata   Print file space information for groups' metadata
         -d, --dset            Print dataset information
         -m N, --dims=N        Set the threshold for the dimension sizes when printing
                               information for small datasets.  N is an integer greater
                               than 0.  The default threshold is 10.
         -D, --dsetmetadata    Print file space information for datasets' metadata
         -T, --dtypemetadata   Print datasets' datatype information
         -A, --attribute       Print attribute information
         -a N, --numattrs=N    Set the threshold for the # of attributes when printing
                               information for small # of attributes.  N is an integer greater
                               than 0.  The default threshold is 10.
         -s, --freespace       Print free space information
         -S, --summary         Print summary of file space information
         --enable-error-stack  Prints messages from the HDF5 error stack as they occur
         --s3-cred=<cred>      Access file on S3, using provided credential
                               <cred> :: (region,id,key)
                               If <cred> == "(,,)", no authentication is used.
         --hdfs-attrs=<attrs>  Access a file on HDFS with given configuration
                               attributes.
                               <attrs> :: (<namenode name>,<namenode port>,
                                           <kerberos cache path>,<username>,
                                           <buffer size>)
                               If an attribute is empty, a default value will be
                               used.
    

    Let's see this in action:

    File information
            # of unique groups: 718
            # of unique datasets: 351
            # of unique named datatypes: 4
            # of unique links: 353
            # of unique other: 0
            Max. # of links to object: 701
            Max. # of objects in group: 350
    File space information for file metadata (in bytes):
            Superblock: 48
            Superblock extension: 0
            User block: 0
            Object headers: (total/unused)
                    Groups: 156725/16817
                    Datasets(exclude compact data): 129918/538
                    Datatypes: 1474/133
            Groups:
                    B-tree/List: 21656
                    Heap: 33772
            Attributes:
                    B-tree/List: 0
                    Heap: 0
            Chunked datasets:
                    Index: 138
            Datasets:
                    Heap: 0
            Shared Messages:
                    Header: 0
                    B-tree/List: 0
                    Heap: 0
            Free-space managers:
                    Header: 0
                    Amount of free space: 0
    Small groups (with 0 to 9 links):
            # of groups with 0 link(s): 1
            # of groups with 1 link(s): 710
            # of groups with 2 link(s): 1
            # of groups with 3 link(s): 2
            # of groups with 4 link(s): 1
            # of groups with 5 link(s): 1
            Total # of small groups: 716
    Group bins:
            # of groups with 0 link: 1
            # of groups with 1 - 9 links: 715
            # of groups with 100 - 999 links: 2
            Total # of groups: 718
    Dataset dimension information:
            Max. rank of datasets: 1
            Dataset ranks:
                    # of dataset with rank 1: 351
    1-D Dataset information:
            Max. dimension size of 1-D datasets: 736548
            Small 1-D datasets (with dimension sizes 0 to 9):
                    # of datasets with dimension sizes 1: 1
                    Total # of small datasets: 1
            1-D Dataset dimension bins:
                    # of datasets with dimension size 1 - 9: 1
                    # of datasets with dimension size 100000 - 999999: 350
                    Total # of datasets: 351
    Dataset storage information:
            Total raw data size: 9330522
            Total external raw data size: 0
    Dataset layout information:
            Dataset layout counts[COMPACT]: 0
            Dataset layout counts[CONTIG]: 0
            Dataset layout counts[CHUNKED]: 351
            Dataset layout counts[VIRTUAL]: 0
            Number of external files : 0
    Dataset filters information:
            Number of datasets with:
                    NO filter: 1
                    GZIP filter: 0
                    SHUFFLE filter: 350
                    FLETCHER32 filter: 0
                    SZIP filter: 0
                    NBIT filter: 0
                    SCALEOFFSET filter: 0
                    USER-DEFINED filter: 350
    Dataset datatype information:
            # of unique datatypes used by datasets: 4
            Dataset datatype #0:
                    Count (total/named) = (1/1)
                    Size (desc./elmt) = (60/64)
            Dataset datatype #1:
                    Count (total/named) = (347/0)
                    Size (desc./elmt) = (14/1)
            Dataset datatype #2:
                    Count (total/named) = (2/0)
                    Size (desc./elmt) = (14/2)
            Dataset datatype #3:
                    Count (total/named) = (1/1)
                    Size (desc./elmt) = (79/12)
            Total dataset datatype count: 351
    Small # of attributes (objects with 1 to 10 attributes):
            # of objects with 1 attributes: 1
            # of objects with 2 attributes: 551
            # of objects with 3 attributes: 147
            # of objects with 4 attributes: 2
            # of objects with 5 attributes: 4
            # of objects with 6 attributes: 1
            Total # of objects with small # of attributes: 706
    Attribute bins:
            # of objects with 1 - 9 attributes: 706
            Total # of objects with attributes: 706
            Max. # of attributes to objects: 6
    Free-space persist: FALSE
    Free-space section threshold: 1 bytes
    Small size free-space sections (< 10 bytes):
            Total # of small size sections: 0
    Free-space section bins:
            Total # of sections: 0
    File space management strategy: H5F_FSPACE_STRATEGY_FSM_AGGR
    File space page size: 4096 bytes
    Summary of file space information:
      File metadata: 343731 bytes
      Raw data: 9330522 bytes
      Amount/Percent of tracked free space: 0 bytes/0.0%
      Unaccounted space: 5582 bytes
    Total space: 9679835 bytes
    

Coming soon

  • What happens to open HDF5 handles/IDs when your program ends?
    • Suggested by Quincey Koziol (LBNL)
    • We'll take it in pieces
      • Current behavior
      • How async I/O changes that picture
  • Other topics of interest?

    Let us know!

Clinic 2021-04-20

Your questions

Last week's highlights

Tips, tricks, & insights

  • Do I need a degree to use H5Pset_fclose_degree?
    • Identifiers are transient runtime handles to manage HDF5 things
    • Everything begins with a file handle, but how does it end?
      • Files can be re-opened
      • Other files can be mounted in HDF5 groups
      • Traversal of external links may trigger the opening of other files and objects, but see H5Pset_elink_file_cache_size
    • What happens if a file is closed before other (non-file) handles?
      H5F_CLOSE_WEAK
      • The file is closed if the file handle is the last open handle
      • Otherwise, the file handle is invalidated and the actual file close is delayed until the remaining object handles are closed
      H5F_CLOSE_SEMI
      • The file is closed if the file handle is the last open handle
      • Otherwise, H5Fclose generates an error and the file stays open
      H5F_CLOSE_STRONG
      • The file is closed, closing any remaining open object handles if necessary
      H5F_CLOSE_DEFAULT
      The VFD decides: H5F_CLOSE_WEAK for most VFDs. Notable exception: MPI-IO, which uses H5F_CLOSE_SEMI. (See the sketch below.)
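
    A minimal sketch (hypothetical file and group names) of requesting a close degree through a file access property list:

    #include "hdf5.h"

    int main(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fclose_degree(fapl, H5F_CLOSE_STRONG);

        hid_t file  = H5Fcreate("degree.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        hid_t group = H5Gcreate(file, "left_open", H5P_DEFAULT, H5P_DEFAULT,
                                H5P_DEFAULT);
        (void)group;     // deliberately left open

        H5Fclose(file);  // with H5F_CLOSE_STRONG, GROUP is closed here as well
        H5Pclose(fapl);
        return 0;
    }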

Coming soon

  • What happens to open HDF5 handles/IDs when your program ends?
    • Suggested by Quincey Koziol (LBNL)
    • We'll take it in pieces
      • Current behavior
      • How async I/O changes that picture
  • Other topics of interest?

    Let us know!

Clinic 2021-04-06

Your questions

  • Question 1

    We have observed that reading a dataset with variable-length ASCII strings and setting the read mem. type to H5T_C_S1 (size=H5T_VARIABLE / cset=H5T_CSET_UTF8), produces an error with “H5T.c line 4893 in H5T__path_find_real(): no appropriate function for conversion path”. However, if we read first another dataset of the same file that contains UTF8 strings and then the same dataset with ASCII strings, no errors are returned whatsoever and the content seems to be retrieved. Is this an expected behaviour, or are we missing something?

    • As a side note, the same situation can be replicated by setting the cset to H5T_CSET_ASCII and opening first the ASCII-based dataset before the UTF8-dataset, or any other combination, as long as the first call succeeded (e.g., opening the ASCII dataset with cset=H5T_CSET_ASCII, then opening the same ASCII dataset with cset=H5T_CSET_UTF8 also seems to work).
    • Tested using HDF5 v1.10.7, v1.12.0, and manually compiling the most recent commit on the official GitHub repository. The code was compiled with GCC 9.3.0 + HPE-MPI v2.22, but no MPI file access property was given (i.e., using H5P_DEFAULT to avoid MPI-IO).
    • Further information: https://github.com/HDFGroup/hdf5/issues/544

Last week's highlights

  • Announcements
  • Forum
    • How can attributes of an existing object be modified?
      • There are several different "namespaces" in HDF5
      • Examples:
        • Global (=file-level) path names
        • Per object attribute names
        • Per compound type field names
        • Etc.
      • Some have constraints such as reserved characters, character encoding, length, etc.
      • Most importantly, they are disjoint and don't mix
        • Disambiguation would be too costly, if not impossible
    • HDF5DotNet library
      • There's perhaps a place for both wrappers of the HDF5 C API and an independent, fully managed .NET solution (e.g., HDF5.NET)
      • SWIG (Simplified Wrapper and Interface Generator) has come a long way
        • Should that be the path forward for HDF.PInvoke?
        • We need greater automation and (.NET) platform independence
        • Focus on testing
        • Any thoughts/comments?
    • Parallel HDF5 write with irregular size in one dimension
      • Posted an example that shows how different ranks can write varying amounts of data to a chunked dataset in parallel. Some ranks don't write any data. The chunk size is chosen arbitrarily.

Tips & tricks

  • The "mystery" of the HDF5 file format
    • The specification published here can seem overwhelming. Part of the problem is that you are seeing at least three versions layered on top of each other.
    • The first (?) release was a lot simpler and contains all the core ideas
    • Once you've digested that, you are ready for the later releases and might consider writing your own (de-)serializer
    • Don't get carried away: only a tiny fraction of the HDF5 library's code deals w/ serialization

Coming soon

  • What happens to open HDF5 handles/IDs when your program ends?
    • Suggested by Quincey Koziol (LBNL)
    • We'll take it in pieces
      • Current behavior
      • How async I/O changes that picture
  • Other topics of interest?

    Let us know!

Clinic 2021-03-30

Canceled because of ECP event.

Clinic 2021-03-23

Your questions

???

Last week's highlights

  • Announcements
  • Forum
    • How to convert XML to HDF5
      • There is no canonical conversion path, even if you have an XML schema
        • XML is simpler because elements are strictly nested
        • XML can be trickier because of element repetition and the non-obligatory nature of certain elements or attributes
      • Start w/ a scripting language that has XML (parsing) and HDF5 modules
        • libxml2 (XML parsing) and Jansson (JSON) work well if you prefer C
      • Consider XSLT to simplify first
    • HDF5DotNet library
      • It's been out of maintenance for many years
      • Alternatives: HDF.PInvoke (Windows only) and HDF.PInvoke.1.10 (.NET Standard)
        • Both are based on HDF5 1.10.x
      • Note: We (The HDF Group) are neither C# nor .NET experts. PInvoke is about the level of abstraction we can handle. We count on knowledgeable community members for advice and contributions.
      • There are many interesting community projects, for example, HDF5.NET:
        • Based on the HDF5 file format spec. & no HDF5 library dependence!
    • Parallel HDF5 write with irregular size in one dimension
      • Many of our examples s..k, and we have to do a lot better
        • Maybe we created them this way to generate more questions? :-/
      • HDF5 dataspaces are logical, chunks are physical
        • Write a (logically) correct program first and then optimize performance!

Tips & tricks

  • Large (> 64 KiB) HDF5 attributes
    import h5py, numpy as np
    
    with h5py.File('my.h5', 'w', libver='latest') as file:
        file.attrs['random[1024]'] = np.random.random(1024)
        file.attrs['random[1048576]'] = np.random.random(1024*1024)
    
    

    The h5dump output looks like this:

    
    gerd@guix ~/scratch/run$ h5dump -pBH my.h5
    HDF5 "my.h5" {
    SUPER_BLOCK {
       SUPERBLOCK_VERSION 3
       FREELIST_VERSION 0
       SYMBOLTABLE_VERSION 0
       OBJECTHEADER_VERSION 0
       OFFSET_SIZE 8
       LENGTH_SIZE 8
       BTREE_RANK 16
       BTREE_LEAF 4
       ISTORE_K 32
       FILE_SPACE_STRATEGY H5F_FSPACE_STRATEGY_FSM_AGGR
       FREE_SPACE_PERSIST FALSE
       FREE_SPACE_SECTION_THRESHOLD 1
       FILE_SPACE_PAGE_SIZE 4096
       USER_BLOCK {
          USERBLOCK_SIZE 0
       }
    }
    GROUP "/" {
       ATTRIBUTE "random[1024]" {
          DATATYPE  H5T_IEEE_F64LE
          DATASPACE  SIMPLE { ( 1024 ) / ( 1024 ) }
       }
       ATTRIBUTE "random[1048576]" {
          DATATYPE  H5T_IEEE_F64LE
          DATASPACE  SIMPLE { ( 1048576 ) / ( 1048576 ) }
       }
    }
    }
    
    

    The libver='latest' keyword is critical. Running without it produces this error:

    
    gerd@guix ~/scratch/run$ python3 large_attribute.py
    Traceback (most recent call last):
      File "large_attribute.py", line 6, in <module>
        file.attrs['random[1048576]'] = np.random.random(1024*1024)
      File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
      File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
      File "/home/gerd/.guix-profile/lib/python3.8/site-packages/h5py/_hl/attrs.py", line 100, in __setitem__
        self.create(name, data=value)
      File "/home/gerd/.guix-profile/lib/python3.8/site-packages/h5py/_hl/attrs.py", line 201, in create
        attr = h5a.create(self._id, self._e(tempname), htype, space)
      File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
      File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
      File "h5py/h5a.pyx", line 47, in h5py.h5a.create
    RuntimeError: Unable to create attribute (object header message is too large)
    
    

    libver=('v108', 'v108') also works. (v108 corresponds to HDF5 1.8.x).

Clinic 2021-03-16

Your questions

???

Last week's highlights

  • Announcements
  • Forum
    • Multithreaded writing to a single file in C++
      • Beware of non-thread-safe wrappers or language bindings!
      • Compiling the C library with --enable-threadsafe is only the first step
    • Reference Manual in Doxygen
    • H5Iget_name call is very slow for HDF5 file > 5 GB
      • H5Iget_name constructs an HDF5 path name given an object identifier
        • Use Case: You are in a corner of an application where all you've got is a handle (identifier) and you would like to render something meaningful to humans.
      • It's not so much the file size but the number and arrangement of objects that makes H5Iget_name slow
        • See the h5stat output the user provided!
      • What contributes to H5Iget_name being slow?
        • The path names are not stored in an HDF5 file (except in symbolic links…) and are created on-demand
        • In general, HDF5 arrangements are not trees, not even directed graphs, but directed multi-graphs
          • A node can be the target of multiple edges (including from the same source node)
          • Certain nodes (groups) can be source and target of an edge
      • Take-Home-Message: Unless you are certain that your HDF5 arrangement is a tree, you are skating on thin ice with path names!
        • Trying to uniquely identify objects via path name is asking for trouble
          • Use addresses + file IDs (pre-HDF5 1.12) or tokens (HDF5 1.12+) for that! See the sketch below.
      • Quincey points out that
        • The library caches metadata that can accelerate H5Iget_name
        • But there are other complications
          • For example, you can have "anonymous" objects (objects that haven't been linked to any group in the file, i.e., they have no path yet)
          • Another source of trouble: objects that have been unlinked
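
    A sketch of the token-based alternative (HDF5 1.12+; file and object names are hypothetical):

    #include "hdf5.h"
    #include <stdio.h>

    int main(void)
    {
        hid_t file = H5Fopen("d1.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t obj  = H5Oopen(file, "some/object", H5P_DEFAULT);

        H5O_info2_t info;
        H5Oget_info3(obj, &info, H5O_INFO_BASIC);  // info.token identifies OBJ

        char *token_str;  // printable form, e.g., for logging or indexing
        H5Otoken_to_str(obj, &info.token, &token_str);
        printf("Object token: %s\n", token_str);
        H5free_memory(token_str);

        H5Oclose(obj);
        H5Fclose(file);
        return 0;
    }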

Tips & tricks

  • How to open an HDF5 in append mode?

    To be clear, there is no H5F* call that behaves like an append call. But we can mimic one as follows:

    Credits: Werner Benger

     hid_t hid = H5Fcreate(filename, H5F_ACC_EXCL|H5F_ACC_SWMR_WRITE,
                           fcpl_id, fapl_id);
     if (hid < 0)
       {
         // the file exists (or creation failed): fall back to opening it
         hid = H5Fopen(filename, H5F_ACC_RDWR|H5F_ACC_SWMR_WRITE, fapl_id);
       }

     if (hid < 0)
       ;  // something's going on...

    • If the file exists, H5Fcreate will fail and H5Fopen with H5F_ACC_RDWR will kick in.
      • If the file is not an HDF5 file, both calls will fail.
    • If the file does not exist, H5Fcreate will do its job.
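
    A related sketch: newer library versions (1.10.5+) have H5Fis_accessible, which lets you distinguish "no such file" from "exists, but is not an HDF5 file" before deciding between create and open. FILENAME, FCPL_ID, and FAPL_ID are placeholders:

    #include "hdf5.h"

    hid_t open_or_create(const char *filename, hid_t fcpl_id, hid_t fapl_id)
    {
        htri_t is_hdf5 = H5Fis_accessible(filename, fapl_id);
        if (is_hdf5 > 0)   // an HDF5 file: open it for appending
            return H5Fopen(filename, H5F_ACC_RDWR | H5F_ACC_SWMR_WRITE, fapl_id);
        if (is_hdf5 == 0)  // exists, but not an HDF5 file: give up
            return H5I_INVALID_HID;
        // most likely the file does not exist: try to create it
        return H5Fcreate(filename, H5F_ACC_EXCL | H5F_ACC_SWMR_WRITE,
                         fcpl_id, fapl_id);
    }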

Clinic 2021-03-09

Your questions (as of 9:00 a.m. Central Time)

  • Question 1

    Is there a limit on array size if I save an array as an attribute of a dataset?

    In terms of the performance, is there any consequence if I save a large amount of data into an attribute?

    Size limit
    No, not in newer versions (1.8.x+) of HDF5. See What limits are there in HDF5?
    • Make sure that downstream applications can handle such attributes (i.e., use HDF5 1.8.x or later)
    • Remember to tell the library that you want to use the 1.8 or later file format via H5Pset_libver_bounds, e.g., set low to H5F_LIBVER_V18 (see the sketch after Question 2 below)
    • Also keep an eye on H5Pset_attr_phase_change (consider setting max_compact to 0)
    Performance
    It depends. (…on what you mean by performance)
    • Attributes have a different function (from datasets) in HDF5
      • They "decorate" other objects - application metadata
    • Their values are treated as atomic units, i.e., you will always write and read the entire "large" value.
      • In other words, you lose partial I/O
      • Several layouts available for datasets are not supported with attributes
        • No compression
  • Question 2

    A question regarding HDF5 I/O performance: compare saving data into one large array in a single dataset vs. saving it into several smaller arrays in several datasets. Are there any consequences for performance, and is there a sweet spot for best performance? Any tricks to make reading/writing faster? I know about parallel I/O, but parallel I/O needs hardware support, which is not always available. So the question is about tricks to speed up I/O without parallel I/O.

    One large dataset vs. many small datasets, which is faster?
    It depends.
    • How do you access the data?
      • Do you always write/read the entire array in the order it was written?
      • Is it WORM (write once read many)?
        • How and how frequently does it change?
    • How compressible is the data?
      • Do you need to store data at all? E.g., HDF5-UDF
    • What is performance for you and how do you measure it?
    • What percentage of total runtime does your application spend doing I/O?
    • What scalability behavior do you expect?
    • Assuming throughput is the measure, create a baseline for your target system, for example, via FIO or IOR
      • Your goal is to saturate the I/O subsystem
      • Is this a dedicated system?
    • Which other systems do you need to support? Are you the only user? What's the future?
    • What's the budget?
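
    Returning to Question 1, here is a sketch in C of creating a "large" attribute (file and attribute names are made up). The key is H5Pset_libver_bounds; H5F_LIBVER_V18 requires HDF5 1.10.2+, and older versions can use H5F_LIBVER_LATEST for low:

    #include "hdf5.h"
    #include <stdlib.h>

    int main(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        // Ask for the 1.8+ file format; without this, creating the attribute
        // fails with "object header message is too large."
        H5Pset_libver_bounds(fapl, H5F_LIBVER_V18, H5F_LIBVER_LATEST);

        hid_t file  = H5Fcreate("big_attr.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        hid_t space = H5Screate_simple(1, (hsize_t[]){1024 * 1024}, NULL);
        hid_t attr  = H5Acreate(file, "random[1048576]", H5T_IEEE_F64LE, space,
                                H5P_DEFAULT, H5P_DEFAULT);

        double *data = calloc(1024 * 1024, sizeof(double));  // fill as needed
        H5Awrite(attr, H5T_NATIVE_DOUBLE, data);

        free(data);
        H5Aclose(attr); H5Sclose(space); H5Fclose(file); H5Pclose(fapl);
        return 0;
    }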

Last week's highlights

  • Announcements
  • Forum
    • Get Object Header size
      • The user created a compound type with 100s of fields and eventually saw this error:

        H5Oalloc.c line 1312 in H5O__alloc(): object header message is too large
        
      • This issue was first raised (Jira ticket HDFFV-1089) on Jun 08, 2009
      • Root cause: the size of header message data is represented in a 2-byte unsigned integer (see sections IV.A.1.a and IV.A.1.b of the HDF5 file format spec.)
        • Ergo, header messages currently cannot be larger than 64 KiB.
        • Datatype information is stored in a header message (see section IV.A.2.d)
        • This can be fixed with a file format update, but it's fallen through the cracks for over 10 years
      • The customer is always right, but who needs 100s of fields in a compound type?
        • Use Case: You have a large record type and you always (or most of the time) read and write all fields together.
        • Outside this narrow use case you are bound to lose a lot of performance and flexibility
      • You are leaving the mainstream (cf. "You are leaving the American Sector"): not too many tools will be able to handle your data
      • Better approach: divide-and-conquer, i.e., go w/ a group of compounds or individual columns (see the sketch at the end of this section)
    • Using HDF5 in Qt Creator
      • Linker can't find H5::FileAccPropList() and H5::FileCreatPropList()
      • Works fine in release mode, but not in debug mode
      • AFAIK, we don't distribute debug libraries in binary form. That still doesn't explain why the user couldn't use the release binaries in a debug build, unless Qt Creator is extra pedantic.
    • Reference Manual in Doxygen
    • H5Iget_name call is very slow for HDF5 file > 5 GB
      • H5Iget_name constructs an HDF5 path name given an object identifier
        • Use Case: You are in a corner of an application where all you've got is a handle (identifier) and you would like to render something meaningful to humans.
      • It's not so much the file size but the number and arrangement of objects that makes H5Iget_name slow
        • See the h5stat output the user provided!
      • What contributes to H5Iget_name being slow?
        • The path names are not stored in an HDF5 file (except in symbolic links…) and are created on-demand
        • In general, HDF5 arrangements are not trees, not even directed graphs, but directed multi-graphs
          • A node can be the target of multiple edges (including from the same source node)
          • Certain nodes (groups) can be source and target of an edge
      • Take-Home-Message: Unless you are certain that your HDF5 arrangement is a tree, you are skating on thin ice with path names!
        • Trying to uniquely identify objects via path name is asking for trouble
          • Use addresses + file IDs (pre-HDF5 1.12) or tokens (HDF5 1.12+) for that!
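
    Back to the compound-type thread: a divide-and-conquer sketch (all names hypothetical) that stores the "columns" as individual datasets in a group instead of one wide compound:

    #include "hdf5.h"

    int main(void)
    {
        hsize_t nrows = 1000;
        hid_t file  = H5Fcreate("table.h5", H5F_ACC_TRUNC, H5P_DEFAULT,
                                H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, &nrows, NULL);
        hid_t table = H5Gcreate(file, "table", H5P_DEFAULT, H5P_DEFAULT,
                                H5P_DEFAULT);

        // One dataset per "field"; columns can be added, read, and
        // compressed independently.
        hid_t temp = H5Dcreate(table, "temperature", H5T_NATIVE_DOUBLE, space,
                               H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
        hid_t pres = H5Dcreate(table, "pressure", H5T_NATIVE_DOUBLE, space,
                               H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        H5Dclose(pres); H5Dclose(temp);
        H5Gclose(table); H5Sclose(space); H5Fclose(file);
        return 0;
    }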

Clinic 2021-03-02

Your questions

  • h5rnd
    • Question: How are generated HDF5 objects named? An integer name, or can a randomized string be used?
      • h5rnd Generates a pool of random strings as link names
      • Uniform length distribution between 5 and 30 over [a-z][A-Z]
    • Question: Does it create multi-dimensional datasets with a rich set of HDF5 datatypes? Compound datatypes, perhaps?
      • Currently, it creates 1,000 element 1D FP64 datasets (w/ attribute)
      • RE: types - anything is possible. Budget?
    • Question: Are named datatypes generated? If not, are these reasonable types of extensions for h5rnd?
      • Not currently, but anything is possible
  • Other questions?
    • Question: How do these extensions fit with the general intent and extensibility of h5rnd?
      • It was written as an illustration
      • Uses an older version of H5CPP
      • Labeling could be improved
      • Dataset generation under development
      • Some enhancements in a future version

Last week's highlights

  • Forum
    • External link access in parallel HDF5 1.12.0
      • Can't access externally linked datasets in parallel; fine in 1.10.x and in serial
      • It appears that someone encountered a known bug in the field
      • Devs claim it's fixed in develop; waiting for confirmation from the user
    • H5I_dec_ref hangs
      • H5Idec_ref is one of those functions that needs to be used w/ extra care
      • Using mpi4py and h5py
      • User provided an MWE (in Python) and, honestly, there is limited help we can offer (as we are neither mpi4py nor h5py experts)
      • A C or C++ MWE might be the better starting point
    • h5diff exits with 1 but doesn’t print differences
      • Case of out-of-date/poor documentation
      • h5diff is perhaps the most complex tool (multi-graph comparison + what does '=' mean?)
      • Writing code is the easy part
      • We need to do better
    • Independent datasets for MPI processes. Progress?
      • Need some clarification on the problem formulation
      • Current status (w/ MPI): metadata-modifying operations must be collective
      • On the horizon: asynchronous operations (ASYNC VOL)
    • Writing to virtual datasets
      • Apparently broken when a datatype conversion (truncation!) is involved

Clinic 2021-02-23

Your questions

  • How to use H5Ocopy in C++ code?
    • Forum post

      sandhya.v250 (Feb 19)

      Hello Team, I want to copy few groups from one hdf5 file to hdf5 another file which is not yet created and this should be done inside the C++ code..can you please tell me how can I use this inside this tool

    • The function in question (there is also a tool called h5copy):

      herr_t H5Ocopy
      (
       hid_t       src_loc_id,
       const char* src_name,
       hid_t       dst_loc_id,
       const char* dst_name,
       hid_t       ocpypl_id,
       hid_t       lcpl_id
      );
      
      
    • The emphasis appears to be on C++
      • You can do this in C. It's just more boilerplate.
      • Whenever I need something C++, I turn to my colleague Steven Varga (= Mr. H5CPP)
      • He also created a nice random HDF5 file generator/tester (= 'Prüfer' in German)
  • Steven's solution (excerpt)

    The full example can be downloaded from here.

    Basic idea: Visit all objects in the source via H5Ovisit and invoke H5Ocopy in the callback.

     #include "argparse.h"
     #include <h5cpp/all>
     #include <iostream>
     #include <string>

     herr_t ocpy_callback(hid_t src, const char *name, const H5O_info_t *info,
                          void *dst_) {
       hid_t* dst = static_cast<hid_t*>(dst_);
       switch( info->type ){
       case H5O_TYPE_GROUP:
         // copy the group only if the destination has no such link yet
         if(H5Lexists( *dst, name, H5P_DEFAULT) == 0)
           H5Ocopy(src, name, *dst, name, H5P_DEFAULT, H5P_DEFAULT);
         break;
       case H5O_TYPE_DATASET:
         H5Ocopy(src, name, *dst, name, H5P_DEFAULT, H5P_DEFAULT);
         break;
       default: /*H5O_TYPE_NAMED_DATATYPE, H5O_TYPE_NTYPES, H5O_TYPE_UNKNOWN */
         ; // nop to keep compiler happy
       }
       return 0; // keep visiting regardless
     }

     int main(int argc, char **argv)
     {
       argparse::ArgumentParser arg("ocpy", "0.0.1");
       arg.add_argument("-i", "--input")
         .required().help("path to input hdf5 file");
       arg.add_argument("-s", "--source")
         .default_value(std::string("/"))
         .help("path to group within hdf5 container");
       arg.add_argument("-o", "--output").required()
         .help("the new hdf5 will be created/or opened rw");
       arg.add_argument("-d", "--destination")
         .default_value(std::string("/"))
         .help("target group");

       std::string input, output, source, destination;
       try {
         arg.parse_args(argc, argv);
         input = arg.get<std::string>("--input");
         output = arg.get<std::string>("--output");
         source = arg.get<std::string>("--source");
         destination = arg.get<std::string>("--destination");

         h5::fd_t fd_i = h5::open(input, H5F_ACC_RDONLY);
         h5::fd_t fd_o = h5::create(output, H5F_ACC_TRUNC);
         h5::gr_t dgr{H5I_UNINIT}, sgr = h5::gr_t{H5Gopen(fd_i, source.data(),
                                                          H5P_DEFAULT)};
         h5::mute();
         if( destination != "/" ){
           char * gname = destination.data();
           // open the destination group if it already exists, create it otherwise
           dgr = H5Lexists(fd_o, gname, H5P_DEFAULT) > 0 ?
             h5::gr_t{H5Gopen(fd_o, gname, H5P_DEFAULT)}
             : h5::gr_t{H5Gcreate(fd_o, gname, H5P_DEFAULT, H5P_DEFAULT,
                                  H5P_DEFAULT)};
           H5Ovisit(sgr, H5_INDEX_CRT_ORDER, H5_ITER_NATIVE, ocpy_callback, &dgr );
         } else
           H5Ovisit(sgr, H5_INDEX_CRT_ORDER, H5_ITER_NATIVE, ocpy_callback, &fd_o);
         h5::unmute();
       } catch ( const h5::error::any& e ) {
         std::cerr << e.what() << std::endl;
         std::cout << arg;
       }
       return 0;
     }

    
  • Parting thoughts
    • This can be tricky business, depending on how selective you want to be
    • H5Ovisit visits objects and does not account for dangling links, etc.
    • H5Ocopy's behavior is highly customizable. Check the options & play w/ h5copy to see the effect!
  • More Questions
    • Question 1

      I have an unrelated question. I have 7,000 HDF5 files, each 500 MB long. When I use them, should I open them selectively, when I need them, or is it advantageous to make one big file, or to open virtual files? I am interested in the speed of the different approaches.

      • 40 GbE connectivity
      • 10 contiguously laid out Datasets per file => ~50 MB per dataset
      • Always reading full datasets
      • Considerations:
        • If you have the RAM and use all data in an "epoch," just read whole files and use HDF5 file images for "in-memory I/O" (see the sketch below).
        • You could maintain an index file I which contains external links (one for each of the 7,000 files) and a dataset which, for each external file and dataset, contains the offset of the dataset in that file. You would keep I (small!) in memory and, for each dataset request, read the ~50 MB directly w/o the HDF5 library. This assumes that no datatype conversion is necessary and that you have no trouble interpreting the bytes.
        • A variation of the previous approach would be for the stub-file to contain HDF5 virtual datasets, i.e., datasets stitched together from other datasets. This would be a good option if you wanted to simplify your application code and make everything appear as a single large HDF5 file. It'd be important, though, to keep that (small) stub-file in memory on the clients to avoid a high latency penalty.
        • Both approaches can be easily parallelized, assuming read-only access. If there are writers involved, it's still doable, but additional considerations apply.
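
      A sketch of the "in-memory I/O" option using the core file driver (file name hypothetical): on open, the entire file is read into RAM, and all subsequent reads are memory copies:

      #include "hdf5.h"

      int main(void)
      {
          hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
          // 64 MiB allocation increment; no backing store (read-only use)
          H5Pset_fapl_core(fapl, 64 * 1024 * 1024, 0);

          hid_t file = H5Fopen("shard_0001.h5", H5F_ACC_RDONLY, fapl);
          // ... read datasets at memory speed ...
          H5Fclose(file);
          H5Pclose(fapl);
          return 0;
      }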

      Another question: what is the recommended way to combine Python with C++ with C++ reading in and working on large hdf5 files that require a lot of speed.

      • To be honest, we ran out of time and I (GH) didn't fully grasp the question.
      • Steven said something about Julia
      • Henric uses Boost Python. What about Cython?
      • What's the access pattern?

        Let's continue the discussion on the forum or come back next week!

Last week's highlights

Appendix

  • The h5copy command line tool
    gerd@guix ~$ h5copy
    
    usage: h5copy [OPTIONS] [OBJECTS...]
       OBJECTS
          -i, --input        input file name
          -o, --output       output file name
          -s, --source       source object name
          -d, --destination  destination object name
       OPTIONS
          -h, --help         Print a usage message and exit
          -p, --parents      No error if existing, make parent groups as needed
          -v, --verbose      Print information about OBJECTS and OPTIONS
          -V, --version      Print version number and exit
          --enable-error-stack
                      Prints messages from the HDF5 error stack as they occur.
          -f, --flag         Flag type
    
          Flag type is one of the following strings:
    
          shallow     Copy only immediate members for groups
    
          soft        Expand soft links into new objects
    
          ext         Expand external links into new objects
    
          ref         Copy references and any referenced objects, i.e., objects
                      that the references point to.
                        Referenced objects are copied in addition to the objects
                      specified on the command line and reference datasets are
                      populated with correct reference values. Copies of referenced
                      datasets outside the copy range specified on the command line
                      will normally have a different name from the original.
                        (Default:Without this option, reference value(s) in any
                      reference datasets are set to NULL and referenced objects are
                      not copied unless they are otherwise within the copy range
                      specified on the command line.)
    
          noattr      Copy object without copying attributes
    
          allflags    Switches all flags from the default to the non-default setting
    
          These flag types correspond to the following API symbols
    
          H5O_COPY_SHALLOW_HIERARCHY_FLAG
          H5O_COPY_EXPAND_SOFT_LINK_FLAG
          H5O_COPY_EXPAND_EXT_LINK_FLAG
          H5O_COPY_EXPAND_REFERENCE_FLAG
          H5O_COPY_WITHOUT_ATTR_FLAG
          H5O_COPY_ALL
    

Clinic 2021-02-16

Your questions

Last week's highlights

Notes

  • What (if any) are the ACID properties of HDF5 operations?
    • Split-state

      The state of an open (for RW) HDF5 file is split between RAM and persistent storage. Often the partial states will be out of sync. In the event of a "catastrophic" failure (power outage, application crash, system crash), it is impossible to predict what the partial state on disk will be.

      [Figure: hdf5-file-state.png — the state of an open HDF5 file, split between memory and persistent storage]

    • Non-transactional

      The main reason why it is impossible to predict the outcome is that HDF5 operations are non-transactional. By 'transaction' I mean a collection of operations (and the effects of their execution) on the physical and abstract application state. In particular, there are no concepts of beginning a transaction, a commit, or a roll-back. Since they are not transactional, it is not straightforward to speak about the ACID properties of HDF5 operations.

    • File system facilities

      People sometimes speak about ACID properties with respect to file system operations. Although the HDF5 library relies on file system operations to implement HDF5 operations, the correspondence is not as direct as one might wish. For example, what appears to the user as a single HDF5 operation often involves multiple file system operations. And a file system may guarantee a property for a single operation, but not for several operations combined.

    • ACID
      Atomicity
      All changes to an HDF5 file's state must complete or fail as a whole unit.
      • Supported in HDF5? No.
      • Some file systems only support single op. atomicity, if at all.
      • A lot of HDF5 operations are in-place; mixed success -> impossible to recover
      Consistency
      An operation is a correct transformation of the HDF5 file's state.
      • Supported in HDF5? Yes and No
      • Depends on one's definition of HDF5 file/object integrity constraints
      • Assuming we are dealing with a correct program
      • Special case w/ multiple processes: Single Writer Multiple Reader
      Isolation (serialization)
      Even though operations execute concurrently, it appears to each operation, OP, that others executed either before OP or after OP, but not both.
      • Supported in HDF5? No.
      • Depends on concurrency scenario and requires special configuration (e.g., MT, MPI).
      • Time-of-check-time-of-use vulnerability
      Durability
      Once an operation completes successfully, its changes to the file's state survive failure.
      • Supported in HDF5? No.
      • "Split brain"
      • No transaction log

Clinic 2021-02-09

THIS MEETING IS BEING RECORDED and the recording will be available on The HDF Group's YouTube channel. Remember to subscribe!

Goal(s)

This is a meeting dedicated to your questions.

In the unlikely event there aren't any

We have a few prepared topics (forum posts, announcements, etc.)

Sometimes life deals you an HDF5 file

No question is too small. We are here to learn. All of us.

Meeting Etiquette

Be social, turn on your camera (if you've got one)

Talking to black boxes isn't fun.

Raise your hand to signal a contribution (question, comment)

Mute yourself while others are speaking, be ready to participate.

Be mindful of your "airtime"

We want to cover as many of your topics as possible. Be fair to others.

Introduce yourself

  1. Your Name
  2. Your affiliation/organization/group
  3. One reason why you are here today

Use the shared Google doc for questions and code snippets

The link can be found in the chat window.

When the 30 min. timer runs out, this meeting is over.

Continue the discussion on the HDF Forum or come back next week!

Notes

Don't miss our next webinar about data virtualization with HDF5-UDF and how it can streamline your work

  • Presented by Lucas Villa Real (IBM Research)
  • Feb 12, 2021 11:00 AM in Central Time (US and Canada)
  • Sign-up link

Bug-of-the-Week Award (my candidate)

  • Write data to variable length string attribute by Kerim Khemraev
  • Jira issue HDFFV-11215
  • Quick demonstration

    #include "hdf5.h"
    
    #include <filesystem>
    #include <iostream>
    #include <string>
    
    #define H5FILE_NAME "Attributes.h5"
    #define ATTR_NAME   "VarLenAttr"
    
    namespace fs = std::filesystem;
    
    int main(int argc, char *argv[])
    {
      hid_t file, attr;
    
      auto attr_type = H5Tcopy(H5T_C_S1);
      H5Tset_size(attr_type, H5T_VARIABLE);
      H5Tset_cset(attr_type, H5T_CSET_UTF8);
    
      auto make_scalar_attr = [](auto& file, auto& attr_type)
       -> hid_t
      {
        auto attr_space  = H5Screate(H5S_SCALAR);
        auto result = H5Acreate(file, ATTR_NAME,
                                attr_type, attr_space,
                                H5P_DEFAULT, H5P_DEFAULT);
        H5Sclose(attr_space);
        return result;
      };
    
      if( !fs::exists(H5FILE_NAME) )
        { // If the file doesn't exist we create it &
          // add a root group attribute
          std::cout << "Creating file...\n";
          file = H5Fcreate(H5FILE_NAME, H5F_ACC_TRUNC,
                          H5P_DEFAULT, H5P_DEFAULT);
          attr = make_scalar_attr(file, attr_type);
        }
      else
        { // File exists: we either delete the attribute and
          // re-create it, or we just re-write it.
          std::cout << "Opening file...\n";
          file = H5Fopen(H5FILE_NAME, H5F_ACC_RDWR, H5P_DEFAULT);
    
    #ifndef REWRITE_ONLY
          H5Adelete(file, ATTR_NAME);
          attr = make_scalar_attr(file, attr_type);
    #else
          attr = H5Aopen(file, ATTR_NAME, H5P_DEFAULT);
    #endif
        }
    
      // Write or re-write the attribute
      const char* data[1] = { "Let it be λ!" };
      H5Awrite(attr, attr_type, data);
    
      hsize_t size;
      H5Fget_filesize(file, &size);
      std::cout << "File size: " << size << " bytes\n";
    
      H5Tclose(attr_type);
      H5Aclose(attr);
      H5Fclose(file);
    }
    

Documentation update

Author: gerdheber

Created: 2021-05-13 Thu 19:18
