26.6. Other plug-ins

26.6.1. Authentication plug-ins
Internal vs. external authentication
The Authenticator interface
Configuration settings
26.6.2. Secondary file storage plug-ins
Primary vs. secondary storage
The SecondaryStorageController interface
Configuration settings
26.6.3. File unpacker plug-ins
26.6.4. File packer plug-ins
26.6.5. File validator and metadata reader plug-ins
26.6.6. Logging plug-ins
The LogManagerFactory interface
The LogManager interface
The EntityLogger interface

26.6.1. Authentication plug-ins

BASE provides a plug-in mechanism for authenticating users (validating the username and password) when they are logging in. This plug-in mechanism is not the same as the regular plug-in API. That is, you do not have to worry about user interaction or implementing the Plugin interface.

Internal vs. external authentication

BASE can authenticate users in two ways: either with internal authentication or with external authentication. With internal authentication, BASE stores logins and passwords in its own database. With external authentication, this is handled by some external application. Even with external authentication it is possible to let BASE cache the logins/passwords. This makes it possible to log in to BASE if the external authentication server is down.

[Note] Note

An external authentication server can only be used to grant or deny a user access to BASE. It cannot be used to give a user permissions, or put a user into groups or different roles inside BASE.

The external authentication service is only used when a user logs in. At that point, one or more of the following things can happen:

  • The ROOT user is logging on. Internal authentication is always used for the root user and the authenticator plug-in is never used.

  • The login is correct and the user is already known to BASE. If the plug-in supports extra information (name, email, phone, etc.) and the auth.synchronize setting is TRUE the extra information is copied to the BASE server.

  • The login is correct, but the user is not known to BASE. This happens the first time a user logs in. BASE will create a new user account. If the driver supports extra information, it is copied to the BASE server (even if auth.synchronize is not set). The new user account will get the default quota and be added to all roles and groups that have been marked as default.

    [Note] Note

    Prior to BASE 2.4 it was hardcoded to add the new user to the Users role only.

  • If password caching is enabled, the password is copied to BASE. If an expiration timeout has been set, an expiration date will be calculated and set on the user account. The expiration date is only checked when the external authentication server is down.

  • The authentication server says that the login is invalid or the password is incorrect. The user will not be logged in. If a user account with the specified login already exists in BASE, it will be disabled.

  • The authentication driver says that something else is wrong. If password caching is enabled, internal authentication will be used. Otherwise the user will not be logged in. An already existing account is not modified or disabled.

[Note] Note

The Encrypt password option that is available on the login page does not work with external authentication. The simple reason is that the password is encrypted with a one-way algorithm making it impossible to call Authenticator.authenticate().

The Authenticator interface

To be able to use external authentication you must create a class that implements the net.sf.basedb.core.authentication.Authenticator interface. Specify the name of the class in the auth.driver setting in base.config and its initialisation parameters in the auth.init setting.

Your class must have a public no-argument constructor. The BASE application will create only one instance of the class for the lifetime of the BASE server. It must be thread-safe since it may be invoked by multiple threads at the same time. Here are the methods that you must implement:

public void init(String settings)
    throws AuthenticationException;

This method is called just after the object has been created with its argument taken from the auth.init setting in your base.config file. This method is only called once for an instance of the object. The syntax and meaning of the parameter is driver-dependent and should be documented by the plug-in. It is irrelevant for the BASE core.

public boolean supportsExtraInformation();

This method should simply return TRUE or FALSE depending on whether the plug-in supports extra user information or not. The only required information about a user is a unique ID and the login. Extra information includes name, address, phone, email, etc.

public AuthenticationInformation authenticate(String login,
                                              String password)
    throws UnknownLoginException, InvalidPasswordException, AuthenticationException;

Try to authenticate a login/password combination. The plug-in should return an AuthenticationInformation object if the authentication is successful or throw an exception if not. There are three exceptions to choose from:

  • UnknownLoginException: This exception should be thrown if the login is not known to the external authentication system.

  • InvalidPasswordException: This exception should be thrown if the login is known but the password is invalid. In case it is considered a security issue to reveal that a login exists, the plug-in may throw an UnknownLoginException instead.

  • AuthenticationException: In case there is another problem, such as the authentication service being down. This exception triggers the use of cached passwords if caching has been enabled.
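To make this contract concrete, here is a minimal sketch of the authenticate() logic. It checks a hard-coded credential map instead of a real directory service, and the AuthInfo class and the unchecked exception classes are simplified stand-ins for BASE's AuthenticationInformation and checked exception types:

```java
import java.util.Map;

// Simplified stand-ins for BASE's checked exceptions (unchecked here to
// keep the sketch short and self-contained)
class UnknownLoginException extends RuntimeException {}
class InvalidPasswordException extends RuntimeException {}

// Stand-in for net.sf.basedb.core.authentication.AuthenticationInformation
class AuthInfo {
    final String login;
    final String name;
    AuthInfo(String login, String name) { this.login = login; this.name = name; }
}

/** Sketch of an authenticator that checks a hard-coded credential map. */
class MapAuthenticator {
    // Hypothetical credential store; a real driver would query LDAP,
    // a database, or some other external service instead
    private final Map<String, String> passwords = Map.of("alice", "secret");

    public boolean supportsExtraInformation() {
        return false; // only login and password, no name/email/phone
    }

    public AuthInfo authenticate(String login, String password) {
        String expected = passwords.get(login);
        if (expected == null) throw new UnknownLoginException();
        if (!expected.equals(password)) throw new InvalidPasswordException();
        return new AuthInfo(login, null); // no extra information available
    }
}
```

A production driver would replace the map lookup with a call to the external system and signal AuthenticationException when that system cannot be reached.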

Configuration settings

The configuration settings for the authentication driver are located in the base.config file. Here is an overview of the settings. For more information read the section called “Authentication section”.

auth.driver

The class name of the authentication plug-in.

auth.init

Initialisation parameters sent to the plug-in when calling the Authenticator.init() method.

auth.synchronize

Whether extra user information is synchronized at login time or not. This setting is ignored if the driver does not support extra information.

auth.cachepasswords

Whether passwords should be cached by BASE or not. If the passwords are cached, a user may log in to BASE even if the external authentication server is down.

auth.daystocache

How many days to cache the passwords if caching has been enabled. A value of 0 caches the passwords forever.
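Put together, an external-authentication setup in base.config might look like the following. The driver class name and its init string are hypothetical examples only:

```properties
# Hypothetical authentication driver and its init string
auth.driver = org.example.LdapAuthenticator
auth.init = ldap://ldap.example.org:389
# Copy name/email/phone from the external server at login
auth.synchronize = true
# Allow login when the external server is down
auth.cachepasswords = true
# Cached passwords expire after 7 days (0 = cache forever)
auth.daystocache = 7
```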

26.6.2. Secondary file storage plug-ins

Primary vs. secondary storage

BASE has support for storing files in two locations, the primary storage and the secondary storage. The primary storage is always disk-based and must be accessible by the BASE server as a path on the file system. The path to the primary storage is configured by the userfiles setting in the base.config file. The primary storage is internal to the core. Client applications don't get access to read or manipulate the files directly from the file system.

The secondary storage can be anything that can store files. It could, for example, be another directory, a remote FTP server, or a tape based archiving system. A file located in the secondary storage is not accessible by the core, client applications or plug-ins. The secondary storage can only be accessed by the secondary storage controller. The core and client applications use flags on the file items to handle the interaction with the secondary storage.

Each file has an action attribute which defaults to File.Action.NOTHING. It can take two other values:

  1. File.Action.MOVE_TO_SECONDARY

  2. File.Action.MOVE_TO_PRIMARY

All files with the action attribute set to MOVE_TO_SECONDARY should be moved to the secondary storage by the controller, and all files with the action attribute set to MOVE_TO_PRIMARY should be brought back to primary storage.

The moving of files between primary and secondary storage doesn't happen immediately. It is up to the server administrator to configure how often and at what times the controller should check for files that should be moved. This is configured by the secondary.storage.interval and secondary.storage.time settings in the base.config file.

The SecondaryStorageController interface

All you have to do to create a secondary storage controller is to create a class that implements the net.sf.basedb.core.SecondaryStorageController interface. In your base.config file you then specify the class name in the secondary.storage.driver setting and its initialisation parameters in the secondary.storage.init setting.

Your class must have a public no-argument constructor. The BASE application will create only one instance of the class for the lifetime of the BASE server. Here are the methods that you must implement:

public void init(String settings);

This method is called just after the object has been created with its argument taken from the secondary.storage.init setting in your base.config file. This method is only called once for an object.

public void run();

This method is called whenever the core thinks it is time to do some management of the secondary storage. How often the run() method is called is controlled by the secondary.storage.interval and secondary.storage.time settings in the base.config file. When this method is called the controller should:

  • Move all files that have action=MOVE_TO_SECONDARY to the secondary storage. When a file has been moved, call File.setLocation(Location.SECONDARY) to tell the core that the file is now in the secondary storage. You should also call File.setAction(File.Action.NOTHING) to reset the action attribute.

  • Restore all files that have action=MOVE_TO_PRIMARY. The core will set the location attribute automatically, but you should call File.setAction(File.Action.NOTHING) to reset the action attribute.

  • Delete all files from the secondary storage that are not present in the database with location=Location.SECONDARY. This includes files that have been deleted and files that have been moved offline or re-uploaded.

As a final act, the method should send a message to each user owning files that have been moved from one location to the other. The message should include a list of files moved to the secondary storage, a list of files moved back to the primary storage, and a list of files that have been deleted for the reasons above.

public void close();

This method is called when the server is closing down. After this the object is never used again.

Configuration settings

The configuration settings for the secondary storage controller are located in the base.config file. Here is an overview of the settings. For more information read Appendix C, base.config reference.

secondary.storage.driver

The class name of the secondary storage plug-in.

secondary.storage.init

Initialisation parameters sent to the plug-in by calling the init() method.

secondary.storage.interval

Interval in seconds between each execution of the secondary storage controller plug-in.

secondary.storage.time

Time points during the day when the secondary storage controller plug-in should be executed.
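A corresponding base.config fragment might look like this; the controller class and path are hypothetical examples:

```properties
# Hypothetical secondary storage controller
secondary.storage.driver = org.example.ArchiveController
secondary.storage.init = /mnt/archive
# Run the controller every 6 hours (in seconds)...
secondary.storage.interval = 21600
# ...or at fixed times of day
secondary.storage.time = 03:00,15:00
```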

26.6.3. File unpacker plug-ins

The BASE web client has integrated support for unpacking of compressed files. See Section 8.2.1, “Upload a new file”. Behind the scenes, this support is provided by plug-ins. The standard BASE distribution comes with support for ZIP files (net.sf.basedb.plugins.ZipFileUnpacker) and TAR files (net.sf.basedb.plugins.TarFileUnpacker).

To add support for additional compressed formats you have to create a plug-in that implements the net.sf.basedb.util.zip.FileUnpacker interface. The best way to do this is to extend the net.sf.basedb.util.zip.AbstractFileUnpacker which implements all methods in the Plugin and InteractivePlugin interfaces except Plugin.getAbout(). This leaves you with the actual unpacking of the files as the only thing to implement.

[Note] No support for configurations
The integrated upload in the web interface only works with plug-ins that do not require a configuration to run.

Methods in the FileUnpacker interface

public String getFormatName();

Return a short string naming the file format. For example: ZIP files or TAR files.

public Set<String> getExtensions();

Return a set of strings with the file extensions that are most commonly used with the compressed file format. For example: [zip, jar]. Do not include the dot in the extensions. The web client and the AbstractFileUnpacker.isInContext() method will use this information to automatically guess which plug-in to use for unpacking the files.

public Set<String> getMimeTypes();

Return a set of strings with the MIME types that are commonly used with the compressed file format. For example: [application/zip, application/java-archive]. This information is used by the AbstractFileUnpacker.isInContext() method to automatically guess which plug-in to use for unpacking the files.

public int unpack(DbControl dc,
                  Directory dir,
                  InputStream in,
                  boolean overwrite,
                  AbsoluteProgressReporter progress)
    throws IOException, BaseException;

Unpack the files and store them in the BASE file system.

  • Do not close() or commit() the DbControl passed to this method. This is done automatically by the AbstractFileUnpacker or by the web client.

  • The dir parameter is the root directory where the unpacked files should be placed. If the compressed file contains subdirectories the plug-in must create those subdirectories unless they already exist.

  • If the overwrite parameter is FALSE no existing file should be overwritten unless the file is OFFLINE.

  • The in parameter is the stream containing the compressed data. The stream may come directly from the web upload or from an existing file in the BASE file system.

  • The progress parameter, if not null, should be used to report the progress back to the calling code. The plug-in should count the number of bytes read from the in stream. If it is not possible by other means the stream can be wrapped by a net.sf.basedb.util.InputStreamTracker object which has a getNumRead() method.

When the compressed file is uncompressed during the file upload from the web interface, the call sequence to the plug-in is slightly altered from the standard call sequence described in the section called “Executing a job”.

  • After the plug-in instance has been created, the Plugin.init() method is called with null values for both the configuration and job parameters.

  • Then, the unpack() method is called. The Plugin.run() method is never called in this case.
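The heart of a ZIP-based unpack() implementation can be sketched with the standard java.util.zip classes. The BASE-specific parts (DbControl, Directory, the overwrite flag and progress reporting) are left out, and CountingInputStream is a simplified stand-in for net.sf.basedb.util.InputStreamTracker:

```java
import java.io.*;
import java.util.*;
import java.util.zip.*;

// Stand-in for net.sf.basedb.util.InputStreamTracker: counts bytes read
class CountingInputStream extends FilterInputStream {
    private long numRead = 0;
    CountingInputStream(InputStream in) { super(in); }
    @Override public int read() throws IOException {
        int b = super.read();
        if (b >= 0) numRead++;
        return b;
    }
    @Override public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) numRead += n;
        return n;
    }
    long getNumRead() { return numRead; }
}

/** Sketch of the unpack loop: collects entry name -> uncompressed bytes. */
class ZipUnpackSketch {
    static Map<String, byte[]> unpack(CountingInputStream in) throws IOException {
        Map<String, byte[]> files = new LinkedHashMap<>();
        ZipInputStream zip = new ZipInputStream(in);
        ZipEntry entry;
        while ((entry = zip.getNextEntry()) != null) {
            // A real plug-in would create Directory items for
            // entry.isDirectory() and File items otherwise, honouring
            // the overwrite flag; here we just collect the data
            if (entry.isDirectory()) continue;
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            zip.transferTo(out); // reads the current entry only
            files.put(entry.getName(), out.toByteArray());
            // in.getNumRead() would be reported to the progress reporter here
        }
        return files;
    }
}
```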

26.6.4. File packer plug-ins

BASE has support for compressing and downloading a set of selected files and/or directories. This functionality is provided by a plug-in, the PackedFileExporter. This plug-in doesn't do the actual packing itself. This is delegated to classes implementing the net.sf.basedb.util.zip.FilePacker interface.

BASE ships with a number of packing methods, including ZIP and TAR. To add support for other methods you have to provide an implementation of the FilePacker interface. Then, create a new configuration for the PackedFileExporter and enter the name of your class in the configuration wizard.

The FilePacker interface is not a regular plug-in interface (ie. it is not a subinterface to Plugin). This means that you don't have to mess with configuration or job parameters. Another difference is that your class must be installed in Tomcat's classpath (ie. in one of the WEB-INF/classes or WEB-INF/lib folders).

Methods in the FilePacker interface

public String getDescription();

Return a short description of the file format that is suitable for use in dropdown lists in client applications. For example: Zip-archive (.zip) or TAR-archive (.tar).

public String getFileExtension();

Return the default file extension of the packed format. The returned value should not include the dot. For example: zip or tar.

public String getMimeType();

Return the standard MIME type of the packed file format. For example: application/zip or application/x-tar.

public void setOutputStream(OutputStream out)
    throws IOException;

Sets the output stream that the packer should write the packed files to.

public void pack(String entryName,
                 InputStream in,
                 long size,
                 long lastModified)
    throws IOException;

Add another file or directory to the packed file. The entryName is the name of the new entry, including path information. The in is the stream to read the file data from; if in is null, the entry denotes a directory. The size parameter gives the size in bytes of the file (zero for empty files or directories). The lastModified is the time the file was last modified, or 0 if not known.

public void close()
    throws IOException;

Finish the packing. The packer should release any resources, flush all data and close all output streams, including the out stream set in the setOutputStream method.
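For the ZIP case, the interface maps almost directly onto java.util.zip.ZipOutputStream. The class below mirrors the FilePacker methods but, to stay self-contained, does not implement the actual BASE interface:

```java
import java.io.*;
import java.util.zip.*;

/** Sketch of a ZIP-based packer mirroring the FilePacker methods. */
class ZipPackerSketch {
    private ZipOutputStream zip;

    public String getDescription() { return "Zip-archive (.zip)"; }
    public String getFileExtension() { return "zip"; }
    public String getMimeType() { return "application/zip"; }

    public void setOutputStream(OutputStream out) {
        zip = new ZipOutputStream(out);
    }

    public void pack(String entryName, InputStream in, long size, long lastModified)
            throws IOException {
        // A null input stream denotes a directory entry; ZIP marks
        // directories with a trailing slash
        ZipEntry entry = new ZipEntry(in == null ? entryName + "/" : entryName);
        if (lastModified > 0) entry.setTime(lastModified);
        zip.putNextEntry(entry);
        if (in != null) in.transferTo(zip);
        zip.closeEntry();
    }

    public void close() throws IOException {
        zip.finish();
        zip.close(); // also closes the stream set in setOutputStream
    }
}
```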

26.6.5. File validator and metadata reader plug-ins

In those cases where files are used to store data instead of importing it to the database, BASE can use plug-ins to check that the supplied files are valid and also to extract metadata from the files. For example, the net.sf.basedb.core.filehandler.CelFileHandler is used to check if a file is a valid Affymetrix CEL file and to extract data headers and the number of spots from it.

The validator and metadata reader plug-ins are not regular plug-ins (ie. they don't have to implement the Plugin interface). This means that you don't have to mess with configuration or job parameters.

Validator plug-ins must implement the net.sf.basedb.core.filehandler.DataFileHandler and net.sf.basedb.core.filehandler.DataFileValidator interfaces. Metadata reader plug-ins should implement the net.sf.basedb.core.filehandler.DataFileHandler and net.sf.basedb.core.filehandler.DataFileMetadataReader interfaces.

[Note] Note

Metadata extraction can only be done if the file has first been validated. We recommend that metadata reader plug-ins also take the role of validator plug-ins. This will make BASE re-use the same object instance, and the file doesn't have to be parsed twice.

[Important] Always extend the net.sf.basedb.core.filehandler.AbstractDataFileHandler class

We consider the mentioned interfaces to be part of the public API only from the caller side, not from the implementor side. Thus, we may add methods to those interfaces in the future without prior notice. The AbstractDataFileHandler will provide default implementations of the new methods in order to not break existing plug-ins.

Methods in the DataFileHandler interface

public void setFile(FileSetMember member);

Sets the file that is going to be validated or used for metadata extraction. If the same plug-in can be used for validating more than one type of file, this method will be called one time for each file that is present in the file set.

public void setItem(FileStoreEnabled item);

Sets the item that the files belong to. This method is only called once.

Methods in the DataFileValidator interface

public void validate(DbControl dc)
    throws InvalidDataException, InvalidRelationException;

Validate the file. The file is valid if this method returns successfully. If the file is not valid an InvalidDataException should be thrown. Note that BASE will still accept the file, but will indicate the failure with a flag and also keep the message of the exception in the database to remind the user of the failure.

The InvalidRelationException should be used to indicate a partial success/partial failure, where the file as such is a valid file, but in relation to other files it is not. For example, we may assign a valid CEL file to a raw bioassay, but the chip type doesn't match the chip type of the CDF file of the related array design. This exception will also allow metadata to be extracted from the file.
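As an illustration, a validator for a hypothetical binary format might check a magic number at the start of the file. The file signature is invented for this sketch, and InvalidDataException is declared locally (unchecked) rather than imported from BASE:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

// Local, unchecked stand-in for BASE's InvalidDataException
class InvalidDataException extends RuntimeException {
    InvalidDataException(String msg) { super(msg); }
}

/** Sketch: validate a data stream against an invented 4-byte signature. */
class MagicNumberValidator {
    private static final byte[] MAGIC = { 'D', 'E', 'M', 'O' }; // invented

    public void validate(InputStream in) {
        try {
            byte[] header = in.readNBytes(MAGIC.length);
            if (!Arrays.equals(header, MAGIC)) {
                // BASE keeps the exception message in the database to
                // remind the user why validation failed
                throw new InvalidDataException("Not a valid DEMO file");
            }
        } catch (IOException e) {
            throw new InvalidDataException("Cannot read file: " + e.getMessage());
        }
    }
}
```

A real implementation would obtain the stream from the FileSetMember set in setFile(), and could throw InvalidRelationException after comparing the header against related files.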

Methods in the DataFileMetadataReader interface

public void extractMetadata(DbControl dc);

Extract metadata from the file. It is up to the plug-in to decide what to extract and how to store it. The CelFileHandler will, for example, extract headers and the number of spots from the file and store it with the raw bioassay.

public void resetMetadata(DbControl dc);

Remove all metadata that the plug-in usually can extract. This method is called if a file is unlinked from an item or if the validation fails. It is important that the plug-in cleans up everything so that data from a previous file doesn't remain in the database.

Methods in the AbstractDataFileHandler class

public FileStoreEnabled getItem();

Get the item that was previously set with setItem().

public FileSetMember getMember(String dataFileTypeId);

Get a file that was previously set with setFile(). The dataFileTypeId is the external ID of the DataFileType.

26.6.6. Logging plug-ins

BASE provides a plug-in mechanism for logging changes that are made to items. This plug-in mechanism is not the same as the regular plug-in API. That is, you do not have to worry about user interaction or implementing the Plugin interface.

The logging mechanism works on the data layer level and hooks into callbacks provided by Hibernate. EntityLogger:s are used to extract relevant information from Hibernate and create log entries. While it is possible to have a generic logger it is usually better to have different implementations depending on the type of entity that was changed. For example, a change in a child item should, for usability reasons, be logged as a change in the parent item. Entity loggers are created by a LogManagerFactory. All changes made in a single transaction are usually collected by a LogManager which is also created by the factory.

The LogManagerFactory interface

Which LogManagerFactory to use is configured in base.config (See the section called “Change history logging section”). A single factory instance is created when BASE starts and is used for the lifetime of the virtual machine. The factory implementation must of course be thread-safe. Here is a list of the methods the factory must implement:

public LogManager getLogManager(LogControl logControl);

Creates a log manager for a single transaction. Since a transaction is not thread-safe, the log manager implementation doesn't have to be either. The factory may create a new log manager for each transaction.

public boolean isLoggable(Object entity);

Checks if changes to the given entity should be logged or not. For performance reasons, it usually makes sense to not log everything. For example, the database logger implementation only logs changes if the entity implements the LoggableData interface. The return value of this method should be consistent with getEntityLogger().

public EntityLogger getEntityLogger(LogManager logManager,
                                    Object entity);

Create or get an entity logger that knows how to log changes to the given entity. If the entity should not be logged, null can be returned. This method is called for each modified item in the transaction.
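The requirement that isLoggable() and getEntityLogger() give consistent answers can be illustrated with a stripped-down factory. Both interfaces below are simplified stand-ins for the BASE types:

```java
// Simplified stand-ins for the BASE types
interface LoggableData {}                          // marker: entity wants logging
interface SketchEntityLogger { void logChanges(Object entity); }

/** Sketch of a factory whose isLoggable() agrees with getEntityLogger(). */
class SketchLogManagerFactory {
    public boolean isLoggable(Object entity) {
        // Like the database logger implementation: only log entities
        // that opt in via the marker interface
        return entity instanceof LoggableData;
    }

    public SketchEntityLogger getEntityLogger(Object entity) {
        if (!isLoggable(entity)) return null; // consistent with isLoggable()
        return e -> System.out.println("changed: " + e);
    }
}
```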

The LogManager interface

A new log manager is created for each transaction. The log manager is responsible for collecting all changes made in the transaction and store those changes in the appropriate place. The interface doesn't define any methods for this collection, since each implementation may have very different needs.

public LogControl getLogControl();

Get the log control object that was supplied by the BASE core when the transaction was started. The log controller contains methods for accessing information about the transaction, such as the logged in user, executing plug-in, etc. It can also be used to execute queries against the database to get even more information.

[Warning] Warning

Be careful about the queries that are executed by the log controller. Since all logging code is executed at flush time in callbacks from Hibernate, we are not allowed to use the regular session. Instead, all queries are sent through the stateless session. The stateless session has no caching functionality, which means that Hibernate will use extra queries to load associations. Our recommendation is to avoid queries that return full entities; use scalar queries instead to load only the values that are needed.

public void afterCommit();
public void afterRollback();

Called after a successful commit or after a rollback. Note that the connection to the database has been closed at this point, so it is not possible to save any more information to it.

The EntityLogger interface

An entity logger is responsible for extracting the changes made to an entity and converting them to something that is useful as a log entry. In most cases, this is not very complicated, but in some cases a change in one entity should actually be logged as a change in a different entity. For example, changes to annotations are handled by the AnnotationLogger, which logs them as a change on the parent item.

public void logChanges(LogManager logManager,
                       EntityDetails details);

This method is called whenever a change has been detected in an entity. The details variable contains information about the entity and, to a certain degree, what changes have been made.