Frequently Asked Questions

TrueVFS Access / File* API

  1. When trying to read an archive file, I get an exception saying I cannot read directories. What's wrong?
  2. When trying to write an archive file, I get an exception saying I cannot write directories. What's wrong?
  3. Copying a file to an archive file does not seem to work. What's wrong?
  4. The set of detected extensions for archive files is too large. How can I change it?
  5. How to install a (custom) file system driver?
  6. The API is not detecting an archive file as a virtual directory. What's wrong?
  7. The API should not detect an individual archive file as a virtual directory. How can I do this?
  8. When I create or update archive entries, the modified archive file gets corrupted. What's wrong?
  9. How can I access an archive file via HTTP(S)?
  10. How can I access entries with absolute entry names in archive files?

TrueVFS Access / Path API

  1. Can I use the APIs of the modules TrueVFS Access Path and TrueVFS Access File* concurrently?
  2. The NIO.2 API and the TrueVFS Access File* API both provide a copy method. Which one should I use?
  3. How to install a (custom) file system driver?
  4. The API should not detect an individual archive file as a virtual directory. How can I do this?

General Questions

  1. Where are the latest news and announcements?
  2. Do I have to use Maven to use TrueVFS?
  3. Where is the Javadoc?
  4. How does TrueVFS deal with binary compatibility?
  5. How does TrueVFS deal with the compression of nested archive files like e.g. app.war/WEB-INF/lib/lib.jar?
  6. How does TrueVFS deal with entries with absolute entry names in archive files?
  7. How does TrueVFS deal with entries with dot "." or dot-dot ".." segments in archive files?
  8. How does TrueVFS deal with entries which use "\" as the separator character in archive files?
  9. How does TrueVFS deal with duplicate entries in archive files?
  10. I have another question or issue. How do I get it answered and resolved?

TrueVFS Access / File* API

When trying to read an archive file, I get an exception saying I cannot read directories. What's wrong?

When configured correctly, the File* API will treat an archive file like a virtual directory (that's what TrueVFS is all about). Like with plain directories, applications cannot read or write virtual directories using an input or output stream.

Use one of the list*(*) methods in the TFile class instead.

For example, to list the contents of the top level (virtual) directory of the archive file archive.zip, you could use...

TFile[] entries = new TFile("archive.zip").listFiles();

When trying to write an archive file, I get an exception saying I cannot write directories. What's wrong?

When configured correctly, the File* API will treat an archive file like a virtual directory (that's what TrueVFS is all about). Like with plain directories, applications cannot read or write virtual directories using an input or output stream.

Use one of the mkdir*(*) methods in the TFile class or directly write to the entry within the archive file using a TFileOutputStream instead. For example, to (over)write the entry entry within the archive file archive.zip, you could use...

OutputStream out = new TFileOutputStream("archive.zip/entry");
try {
    ... // write something here
} finally {
    out.close();
}

This works even if archive.zip does not initially exist, unless TFile.setLenient(false) has been called by your application before. In that case, you would need to create the archive file archive.zip in advance by using...

new TFile("archive.zip").mkdir(false);

Copying a file to an archive file does not seem to work. What's wrong?

Users often assume that when copying a file to an archive file, the File* API would automatically complete the path name of the destination archive file so that it ends with the base name of the source file. This is probably assumed because that's how command line utilities like cp on POSIX or copy on Windows work. However, this is not true: The File* API never does path name completion. Hence, the following code may behave unexpectedly:

TFile src = new TFile(string1); // e.g. "file"
TFile dst = new TFile(string2); // e.g. "archive.zip"
src.cp_rp(dst);

If successful, this would only result in a verbatim copy of file to archive.zip, which is probably unexpected. However, the way the copy command line utilities work can be easily emulated by using the following instead:

TFile src = new TFile(string1); // e.g. "file"
TFile dst = new TFile(string2); // e.g. "archive.zip"
if (TFile.isLenient() && dst.isArchive() || dst.isDirectory())
    dst = new TFile(dst, src.getName());
src.cp_rp(dst);

This will append the base name of the source path to the destination path if either the destination path name ends with a recognized archive file extension like e.g. ".zip" or if the destination file system entry already exists as a directory. If TFile.setLenient(false) is never called by your application, then you could shorten this to...

TFile src = new TFile(string1); // e.g. "file"
TFile dst = new TFile(string2); // e.g. "archive.zip"
if (dst.isArchive() || dst.isDirectory())
    dst = new TFile(dst, src.getName());
src.cp_rp(dst);

If you don't like path name completion for non-existent files which just look like archive files according to their file name, then you could even shorten this to...

TFile src = new TFile(string1); // e.g. "file"
TFile dst = new TFile(string2); // e.g. "archive.zip"
if (dst.isDirectory())
    dst = new TFile(dst, src.getName());
src.cp_rp(dst);

The set of detected extensions for archive files is too large. How can I change it?

You can easily filter the set of canonical extensions installed by the file system driver modules on the run time class path. For example, the TrueVFS Driver ZIP module installs a large set of canonical file extensions for ZIP files. If all you want to detect is *.zip files, however, you can do so easily with the following statement:

TConfig.get().setArchiveDetector(new TArchiveDetector("zip"));

Check the Maven archetype for more options and to help you get started with this quickly.


How to install a (custom) file system driver?

Make the following call early in your application:

TConfig.get().setArchiveDetector(
        new TArchiveDetector(
            TArchiveDetector.NULL,
            "zip", new ZipDriver(IOPoolLocator.SINGLETON)));

This example presumes that you are going to map the file extension .zip to the ZipDriver. This is actually the default if you add the JAR artifact of the module TrueVFS Driver ZIP to the run time class path.

Furthermore, this example presumes that you want no other archive file extensions to get detected, hence the use of TArchiveDetector.NULL as the decorated archive detector. The class TArchiveDetector has many different constructors. Check the Javadoc to make sure you get what you need.


The API is not detecting an archive file as a virtual directory. What's wrong?

Most likely the TrueVFS Access File* module is not set up to detect the file extension of the archive type you want to access. To make sure it is, make the following call early in your application:

TConfig.get().setArchiveDetector(
        new TArchiveDetector(
            TArchiveDetector.NULL,
            new Object[][] {
                { "tar", new TarDriver(IOPoolLocator.SINGLETON) },
                { "tgz|tar.gz", new TarGZipDriver(IOPoolLocator.SINGLETON) },
                { "tbz|tb2|tar.bz2", new TarBZip2Driver(IOPoolLocator.SINGLETON) },
                { "zip", new ZipDriver(IOPoolLocator.SINGLETON)},
            }));

Check the Maven archetype for more options and to help you get started with this quickly.


The API should not detect an individual archive file as a virtual directory. How can I do this?

Every now and then you might want to treat an archive file like a regular file rather than a virtual directory, for example when trying to obtain the length of the archive file in bytes. You would normally do this by calling the method File.length(). However, if the File object is an instance of the TFile class and the path has been detected to name a valid archive file, then this method always returns zero. This is because you might have changed the archive file, in which case it would be impossible to return a precise result until the changes have been committed to the target archive file.

You can easily solve this issue by committing any pending changes and then calling length() on a new TFile object which has been instructed to ignore an archive file name extension in the last path element. This could look as follows:

// Note that the actual path may refer to anything, even a nested archive file.
TFile inner = new TFile("outer.zip/inner.zip");
TFile file = inner.toNonArchiveFile(); // convert
... // there may be some I/O here
TVFS.umount(inner); // unmount potential archive file
// Now you can safely do any I/O to $file.
long length = file.length();

Note that the path outer.zip/inner.zip refers to a nested archive file, so using the TFile* classes is required to access it.

Last but not least, using the object file for any I/O bypasses any state associated with the path outer.zip/inner.zip in the TrueVFS Kernel. This could result in an inconsistent state of the federated file system space and may even incur loss of data! In order to avoid this, it's a good idea not to access the object inner again until you are done with the object file.


When I create or update archive entries, the modified archive file gets corrupted. What's wrong?

The TrueVFS Kernel module applies neat caching and buffering strategies for archive entries. So whenever your application creates or updates archive entries, the changes need to get committed to the archive file when done. On certain events, e.g. whenever the JVM terminates normally (with or without a throwable, i.e. as long as it doesn't crash), this happens automatically. However, in a long running application you may want to do this manually in order to allow third parties to access the archive file. The term third party in this context includes any other process in the operating system, the plain File* API and even a TFile object which uses TArchiveDetector.NULL.

Committing any unsynchronized changes to all archive files is easy - just call...

TVFS.umount();

Please have a look at the method's Javadoc for more options.
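
For example, to commit the pending changes for an individual archive file only, you could pass it as a TFile object, like in the toNonArchiveFile() example above (the path name here is hypothetical):

TVFS.umount(new TFile("archive.zip")); // commits pending changes for this archive file only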


How can I access an archive file via HTTP(S)?

The TrueVFS Access / File* API can only access the platform file system because the TFile class extends the File class. To access an archive file via HTTP(S) or any other protocol scheme, you need to use the TrueVFS Access / Path API (class TPath et al) in the same package.
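
For example, assuming the TrueVFS Driver HTTP(S) module is present on the run time class path, reading an entry of an archive file behind a hypothetical URL could look like this sketch:

Path archive = new TPath(URI.create("http://example.com/archive.zip")); // hypothetical URL
Path entry = archive.resolve("entry"); // hypothetical entry name
InputStream in = Files.newInputStream(entry);
try {
    ... // read something here
} finally {
    in.close();
}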


How can I access entries with absolute entry names in archive files?

You can't because there is no addressing scheme for this. For example, the expression new TFile("archive.zip/entry") gets decomposed into the file system path archive.zip as the mount point of the archive file and the relative entry name entry as the entry name within this archive file. There's no expression to address the absolute entry name /entry within the archive file instead. Even if you tried new TFile("archive.zip//entry"), it would just get normalized to the previous expression.

See also the related question in the General Questions section below.

TrueVFS Access / Path API

Can I use the APIs of the modules TrueVFS Access Path and TrueVFS Access File* concurrently?

Absolutely yes, because both module APIs are just facades for the TrueVFS Kernel. The NIO.2 API defines some methods for the interoperability of File and Path objects:

  • Path.toFile() returns a File object for this Path. However, according to the interface contract for this method, it's supposed to work with the default file system provider only (e.g. if it's a WindowsPath).
  • File.toPath() returns a Path object for this File. However, according to the interface contract for this method, the returned object is associated with the default file system provider (e.g. it's a WindowsPath). Furthermore, this method puts a compile time dependency on the NIO.2 API.

To solve these issues, in TrueVFS this is implemented as follows:

  • TPath.toFile() returns a TFile object for this TPath. This works for any (virtual) file system.
  • To avoid a compile time dependency in the TrueVFS Access File* module on the TrueVFS Access Path module, TFile.toPath() throws an UnsupportedOperationException. To create a TPath from a File object, call new TPath(File) instead. Note that this works for plain File objects too, not just TFile objects.
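
For example, converting back and forth between the two APIs could look like this sketch (the path name is hypothetical):

TPath path = new TPath("archive.zip/entry");
TFile file = path.toFile();   // works for any (virtual) file system
TPath back = new TPath(file); // works for plain File objects, too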

The NIO.2 API and the TrueVFS Access File* API both provide a copy method. Which one should I use?

Currently, the NIO.2 API supports only the copying of a single file: Files.copy(Path, Path, CopyOption...). Unfortunately, this method uses a simple read-then-write-in-a-loop implementation which results in bad performance. Furthermore, there are no methods for recursive copying of directory trees, so you'd have to write this yourself.

With TrueVFS however, you can easily "back out" from a TPath object to a TFile object to use the advanced copy methods of the TrueVFS File* API. So instead of calling...

Path src = ...
Path dst = ...
Files.copy(src, dst, REPLACE_EXISTING);

You could call...

Path src = ...
Path dst = ...
TFile.cp(src.toFile(), dst.toFile());

in order to benefit from its superior performance. Likewise, you could call any other TFile.cp*(*) method, e.g. TFile.cp_r(File src, File dst, TArchiveDetector detector) for recursive copying of a directory tree.
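
For example, a recursive copy of a directory tree into an archive file could look like this sketch, using the instance method style shown earlier (both path names are hypothetical):

TFile src = new TFile("dir");
TFile dst = new TFile("archive.zip");
src.cp_rp(dst); // recursively copies the directory tree and preserves the last modification times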


How to install a (custom) file system driver?

In exactly the same way as with the module TrueVFS Access File* - see the corresponding answer in the section above.


The API should not detect an individual archive file as a virtual directory. How can I do this?

Every now and then you might want to treat an archive file like a regular file rather than a virtual directory, for example when trying to obtain the length of the archive file in bytes. You would normally do this by calling the method Files.size(Path). However, if the Path object is an instance of the TPath class and the path has been detected to name a valid archive file, then this method always returns zero. This is because you might have changed the archive file, in which case it would be impossible to return a precise result until the changes have been committed to the target archive file.

You can easily solve this issue by committing any pending changes and then calling Files.size(Path) with a new TPath object which has been instructed to ignore an archive file name extension in the last path element. This could look as follows:

TPath inner = new TPath("outer.zip/inner.zip");
TPath path = inner.toNonArchivePath(); // convert
... // there may be some I/O here
inner.getFileSystem().close(); // unmount potential archive file
// Now you can safely do any I/O to $path.
long size = Files.size(path);

Note that the path outer.zip/inner.zip refers to a nested archive file, so using the TPath class is required to access it.

Last but not least, using the object path for any I/O bypasses any state associated with the path outer.zip/inner.zip in the TrueVFS Kernel. This could result in an inconsistent state of the federated file system space and may even incur loss of data! In order to avoid this, it's a good idea not to access the object inner again until you are done with the object path.

General Questions

Where are the latest news and announcements?

Starting from version 7.0, the TrueVFS project has its own blog for announcements, release notes, feature showcases and more, named The TrueVFS Blog.


Do I have to use Maven to use TrueVFS?

Absolutely not! To learn about your options, please read the article Using TrueVFS Without Maven.


Where is the Javadoc?

You can find the Javadoc for the entire TrueVFS API, including all modules, in the navigation bar by clicking Project Reports -> Project Reports -> JavaDocs.


How does TrueVFS deal with binary compatibility?

TrueVFS uses the same version numbering scheme as Maven, i.e. <major>.<minor>.<incremental>-<qualifier>. Within the same major version number, binary compatibility should be retained so that recompilation of a client application should not be necessary.

However, there is one exception: Binary compatibility may be broken in a subsequent release if all of the following conditions apply:

  1. A feature's design is broken.
  2. The feature is assumed to be rarely used by client applications or the implications of not changing it are considered to be unacceptable.
  3. This issue is documented as a ticket in the project's Issue Tracking System (ITS) with the tag binary-compatibility.
  4. A workaround is explained in the ITS ticket.
  5. The ITS ticket is referenced in the Release Notes.

In case your client application is affected by a change and the documented workaround is unacceptable for any reason, please address this using the ITS at http://java.net/jira/browse/TRUEVFS.


How does TrueVFS deal with the compression of nested archive files like e.g. app.war/WEB-INF/lib/lib.jar?

With the advent of release 7.1, TrueVFS implements a new strategy to avoid compressing already compressed archive files in an enclosing archive file again. In contrast, the old strategy of TrueZIP 7.0 and earlier was to compress everything - even if it was already compressed.

The new strategy results in a better overall compression ratio than the old strategy because compressing already compressed data again just inflates the data a bit due to some algorithm-specific overhead.

For the example in the question, the new strategy uses the DEFLATE method to compress the entries of the inner archive file lib.jar while it uses the STORE method for the corresponding entry WEB-INF/lib/lib.jar within the outer archive file app.war. This behavior is in conformance with the JEE specs.

The new strategy is implemented by the archive drivers, so it works with all supported archive types. For example, when storing a TAR file within a ZIP file, the ZIP entry for the TAR file would use the DEFLATE method because the TAR driver knows that plain TAR files are not compressed. In contrast, when storing a TAR.GZ file within a ZIP file, the ZIP entry for the TAR.GZ file would use the STORE method because the TAR.GZ driver knows that TAR.GZ files are already compressed.


How does TrueVFS deal with entries with absolute entry names in archive files?

As answered above in the TrueVFS Access / File* API section, you cannot access entries with absolute entry names in archive files. This implies that you cannot create archive files which contain entries with absolute entry names.

However, you can use TrueVFS to read, modify or delete archive files which contain entries with absolute entry names: If you use TrueVFS to modify an archive file which contains entries with absolute entry names, these entry names are preserved. Likewise, an archive file can get deleted like any empty directory if it contains only entries with absolute entry names.


How does TrueVFS deal with entries with dot "." or dot-dot ".." segments in archive files?

Wherever possible, redundant segments are removed by a normalization of the entry name before the corresponding archive entry is mounted into the file system. When updating the archive file however, the original archive entry name is preserved. If a dot-dot segment remains at the start of the entry name, the corresponding entry will not be accessible by the application, but preserved with its original entry name upon an update of its archive file.


How does TrueVFS deal with entries which use "\" as the separator character in archive files?

Any occurrence of this illegal separator character is replaced by the correct separator character "/" before the entry name is normalized and the corresponding archive entry is mounted into the file system. When updating the archive file however, the original archive entry name is preserved.


How does TrueVFS deal with duplicate entries in archive files?

When mounting an archive file system, TrueZIP 7.1 and later use covariant file system entries in order to enable an application to access archive entries of different types (FILE, DIRECTORY, SYMLINK or SPECIAL) which share the same normalized entry name.

For example, the ZIP and TAR file formats use a trailing slash '/' character in entry names to indicate a directory entry. In case an archive file contains two entries which, after normalization of their name, differ only in a trailing slash character, then both archive entries are mounted into a covariant file system entry for the otherwise equal normalized entry name. Then, an application can read the contents of the file entry by using e.g. TFileInputStream and list the members of the directory entry by using TFile.listFiles().
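
For example, if an archive file (the name is hypothetical) contained both a file entry dup and a directory entry dup/, then accessing both could look like this sketch:

TFile dup = new TFile("archive.zip/dup");
InputStream in = new TFileInputStream(dup); // reads the contents of the file entry
try {
    ... // read something here
} finally {
    in.close();
}
TFile[] members = dup.listFiles(); // lists the members of the directory entry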

Note that this feature solely exists to enable applications to read the contents of all archive files, even if they have a strange directory layout. Note again that a TrueVFS application cannot create a covariant archive entry because this is considered to be a bad practice.


I have another question or issue. How do I get it answered and resolved?

For any bug report, improvement request, feature request, task request, help request etc., please post it to the User Mailing List once you have subscribed to it. The User Mailing List is your direct connection to the community. My response time is usually less than a day - but this goes without any warranty!

Once your question or issue has been approved as a bug report, improvement request, feature request or task request it gets tracked in JIRA. You can then use JIRA to monitor and discuss its progress, vote for it, add file attachments to it etc. JIRA is now also used to schedule new TrueVFS versions and prepare their Release Notes.