Note: this may be converted to an extension point in the future. There are plans to convert the authentication mechanism to an extension point, which would bring several benefits. See ticket #1599: Convert authentication plug-in system to an extension point for more information.
BASE provides a plug-in mechanism for authenticating users
(validating the username and password) when they log in.
This plug-in mechanism is not the same as the regular plug-in API.
That is, you do not have to worry about user interaction or implementing the
Plugin interface.
BASE can authenticate users in two ways: either it uses internal authentication or external authentication. With internal authentication BASE stores logins and passwords in its own database. With external authentication this is handled by some external application. Even with external authentication it is possible to let BASE cache the logins/passwords, which makes it possible to log in to BASE even if the external authentication server is down.
Note: an external authentication server can only be used to grant or deny a user access to BASE. It cannot be used to give a user permissions, or to put a user into groups or different roles inside BASE.
The external authentication service is only used when a user logs in. At that point, one or more of the following things can happen:
The ROOT user is logging on. Internal authentication is always used for the root user and the authenticator plug-in is never used.
The login is correct and the user is already known to BASE.
If the plug-in supports extra information (name, email, phone, etc.) and the auth.synchronize setting is TRUE, the extra information is copied to the BASE server.
The login is correct, but the user is not known to BASE. This happens the first time a user logs in. BASE will create a new user account. If the driver supports extra information, it is copied to the BASE server (even if auth.synchronize is not set). The new user account will get the default quota and be added to all roles and groups that have been marked as default.
If password caching is enabled, the password is copied to BASE. If an expiration timeout has been set, an expiration date will be calculated and set on the user account. The expiration date is only checked when the external authentication server is down.
The authentication server says that the login is invalid or the password is incorrect. The user will not be logged in. If a user account with the specified login already exists in BASE, it will be disabled.
The authentication driver says that something else is wrong. If password caching is enabled, internal authentication will be used. Otherwise the user will not be logged in. An already existing account is not modified or disabled.
To be able to use external authentication you must create a class
that implements the
net.sf.basedb.core.authentication.Authenticator
interface. Specify the name of the class in base.config
and
its initialisation parameters in the auth.init setting.
The class can either be installed on Tomcat's class path (e.g. WEB-INF/lib)
or on an external path. In the latter case the auth.jarpath
setting must be set in base.config.
Your class must have a public no-argument constructor. The BASE application will create only one instance of the class for the lifetime of the BASE server. It must be thread-safe since it may be invoked by multiple threads at the same time. Here are the methods that you must implement (a sketch of a complete implementation follows after the method descriptions):
public void init(String settings)
throws AuthenticationException;
This method is called just after the object has been created with its argument
taken from the auth.init setting in your base.config
file. This method is only called once for an instance of the object. The syntax and meaning of
the parameter is driver-dependent and should be documented by the plug-in.
It is irrelevant for the BASE core.
public boolean supportsExtraInformation();
This method should simply return TRUE or FALSE depending on whether the plug-in supports extra user information or not. The only required information about a user is a unique ID and the login. Extra information includes name, address, phone, email, etc.
public AuthenticationInformation authenticate(String login,
String password)
throws UnknownLoginException, InvalidPasswordException, LoginException, AuthenticationException;
Try to authenticate a login/password combination. The plug-in should return
an AuthenticationInformation object if the authentication is successful, or
throw one of the following exceptions if it is not:
UnknownLoginException: This exception should be thrown if the login is not known to the external authentication system.
InvalidPasswordException: This exception should be thrown if the login is known but the password is invalid. In case it is considered a security issue to reveal that a login exists, the plug-in may throw an UnknownLoginException or a LoginException instead.
LoginException: This exception should be thrown if the login failed but it is not known whether the cause is an incorrect login or password. The authenticator implementation must specify an error message that is displayed to the user.
AuthenticationException: This exception should be thrown in case there is another problem, such as the authentication service being down. This exception triggers the use of cached passwords if caching has been enabled.
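To make this concrete, here is a minimal sketch of an authenticator that validates logins against a Java properties file whose path is given in the auth.init setting. The three method signatures are the ones listed above, but the package names and constructors of the BASE-specific classes (the exceptions and AuthenticationInformation) are assumptions that must be checked against the BASE javadoc, and the interface may declare additional methods in some BASE versions. A real implementation would of course not compare plain-text passwords.

package com.example;

import java.io.FileInputStream;
import java.util.Properties;

import net.sf.basedb.core.authentication.AuthenticationException;
import net.sf.basedb.core.authentication.AuthenticationInformation;
import net.sf.basedb.core.authentication.Authenticator;
import net.sf.basedb.core.authentication.InvalidPasswordException;
import net.sf.basedb.core.authentication.LoginException;
import net.sf.basedb.core.authentication.UnknownLoginException;

public class PropertiesFileAuthenticator implements Authenticator
{
   // Loaded once in init() and only read afterwards, so the class is thread-safe
   private Properties users;

   public PropertiesFileAuthenticator()
   {}

   public void init(String settings)
      throws AuthenticationException
   {
      users = new Properties();
      try (FileInputStream in = new FileInputStream(settings))
      {
         users.load(in);
      }
      catch (Exception ex)
      {
         // Constructor arguments of the exception are an assumption
         throw new AuthenticationException("Cannot load user file: " + settings);
      }
   }

   public boolean supportsExtraInformation()
   {
      // Only login and password are stored in the file
      return false;
   }

   public AuthenticationInformation authenticate(String login, String password)
      throws UnknownLoginException, InvalidPasswordException, LoginException, AuthenticationException
   {
      String expected = users.getProperty(login);
      if (expected == null)
      {
         throw new UnknownLoginException("Unknown login: " + login);
      }
      if (!expected.equals(password))
      {
         throw new InvalidPasswordException("Invalid password for login: " + login);
      }
      // The constructor arguments used here are an assumption;
      // check the AuthenticationInformation javadoc.
      return new AuthenticationInformation(login, login);
   }
}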
The configuration settings for the authentication driver are located
in the base.config
file.
Here is an overview of the settings. For more information read
the section called “Authentication section”.
The class name of the authentication plug-in.
The path to the JAR file containing the authentication plug-in.
This should be left empty if the plug-in is installed in the
WEB-INF/lib
directory.
Initialisation parameters sent to the plug-in when calling the
Authenticator.init()
method.
Whether extra user information is synchronized at login time or not. This setting is ignored if the driver does not support extra information.
Whether passwords should be cached by BASE or not. If the passwords are cached, a user may log in to BASE even if the external authentication server is down.
How many days to cache the passwords if caching has been enabled. A value of 0 caches the passwords forever.
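As an illustration, the authentication part of base.config might then look something like the fragment below. Only auth.jarpath, auth.init and auth.synchronize are named in the text above; the remaining setting names (auth.driver, auth.cachepasswords, auth.cachetimeout) and all values are assumptions that should be checked against the base.config reference.

# External authentication (all values are only examples)
auth.driver = com.example.PropertiesFileAuthenticator
auth.jarpath = /usr/local/base/plugins/example-auth.jar
auth.init = /usr/local/base/conf/users.properties
auth.synchronize = false
auth.cachepasswords = true
# Number of days to cache passwords; 0 = forever (assumed setting name)
auth.cachetimeout = 7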
BASE has support for storing files in two locations, the primary storage and
the secondary storage. The primary storage is always disk-based and must be
accessible by the BASE server as a path on the file system. The path to the
primary storage is configured by the userfiles
setting in the
base.config
file. The primary storage is internal to
the core. Client applications don't get access to read or manipulate the
files directly from the file system.
The secondary storage can be anything that can store files. It could, for example, be another directory, a remote FTP server, or a tape-based archiving system. A file located in the secondary storage is not accessible by the core, client applications or plug-ins. The secondary storage can only be accessed by the secondary storage controller. The core and client applications use flags on the file items to handle the interaction with the secondary storage.
Each file has an action attribute which defaults to
File.Action.NOTHING
. It can take two other values:
File.Action.MOVE_TO_SECONDARY
File.Action.MOVE_TO_PRIMARY
All files with the action attribute set to MOVE_TO_SECONDARY
should be moved to the secondary storage by the controller, and all files
with the action attribute set to MOVE_TO_PRIMARY
should be
brought back to primary storage.
The moving of files between primary and secondary storage doesn't happen
immediately. It is up to the server administrator to configure how often and
at what times the controller should check for files that should be moved.
This is configured by the secondary.storage.interval
and secondary.storage.time
settings in the
base.config
file.
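From the client side, moving a file between the storages is just a matter of setting the action flag and committing the transaction; the controller does the actual work later. Here is a minimal sketch, assuming a File item that has already been loaded for update in an open DbControl transaction and assuming the usual net.sf.basedb.core package for both classes:

package com.example;

import net.sf.basedb.core.DbControl;
import net.sf.basedb.core.File;

public class SecondaryStorageExample
{
   /**
      Flag a file so that the secondary storage controller moves it to
      the secondary storage on its next run.
   */
   public static void moveToSecondary(DbControl dc, File file)
   {
      file.setAction(File.Action.MOVE_TO_SECONDARY);
      // The actual move happens later, when the controller runs
      dc.commit();
   }
}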
All you have to do to create a secondary storage controller is to
create a class that implements the
net.sf.basedb.core.SecondaryStorageController
interface. In the base.config
file you then specify the
class name in the secondary.storage.driver
setting and its
initialisation parameters in the secondary.storage.init
setting.
Your class must have a public no-argument constructor. The BASE application will create only one instance of the class for the lifetime of the BASE server. Here are the methods that you must implement (a skeleton follows after the method descriptions):
public void init(String settings);
This method is called just after the object has been created with its argument
taken from the secondary.storage.init
setting in your
base.config
file. This method is only called once for
an object.
public void run();
This method is called whenever the core thinks it is time to do some
management of the secondary storage. How often the run()
method is called is controlled by the secondary.storage.interval
and secondary.storage.time
settings in the
base.config
file.
When this method is called the controller should:
Move all files which have action=MOVE_TO_SECONDARY
to
the secondary storage. When a file has been moved, call
File.setLocation(Location.SECONDARY)
to tell the
core that the file is now in the secondary storage. You should also call
File.setAction(File.Action.NOTHING)
to reset the
action attribute.
Restore all files which have action=MOVE_TO_PRIMARY
.
The core will set the location attribute automatically, but you should
call File.setAction(File.Action.NOTHING)
to reset
the action attribute.
Delete all files from the secondary storage that are not present
in the database with location=Location.SECONDARY
.
This includes files that have been deleted and files that have been
moved offline or re-uploaded.
As a final act, the method should send a message to each user owning files that have been moved from one location to the other. The message should include a list of files that have been moved to the secondary storage, a list of files moved back from the secondary storage, and a list of files that have been deleted for any of the reasons above.
public void close();
This method is called when the server is closing down. After this the object is never used again.
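Putting the three methods together, a controller skeleton could look like the sketch below. The interface and the File methods are the ones described above; everything inside run() is only outlined as comments, since the file queries and the actual transfer mechanism depend entirely on the storage backend.

package com.example;

import net.sf.basedb.core.SecondaryStorageController;

/**
   Skeleton of a secondary storage controller. The class and method names
   come from the interface described above; the steps inside run() are
   expressed only as comments.
*/
public class MySecondaryStorageController implements SecondaryStorageController
{
   private String settings;

   public MySecondaryStorageController()
   {}

   public void init(String settings)
   {
      // Value of the secondary.storage.init setting, e.g. a path or URL
      // pointing to the secondary storage location (driver-dependent).
      this.settings = settings;
   }

   public void run()
   {
      // 1. Find all files with action == File.Action.MOVE_TO_SECONDARY,
      //    copy the data to the secondary storage, then call
      //    file.setLocation(Location.SECONDARY) and
      //    file.setAction(File.Action.NOTHING).
      // 2. Find all files with action == File.Action.MOVE_TO_PRIMARY,
      //    copy the data back; the core updates the location, but the
      //    controller must call file.setAction(File.Action.NOTHING).
      // 3. Delete files from the secondary storage that no longer have
      //    location == Location.SECONDARY in the database.
      // 4. Send a message to each affected user listing the moved and
      //    deleted files.
   }

   public void close()
   {
      // Release any resources held by the controller.
   }
}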
The configuration settings for the secondary storage controller are located in the
base.config
file. Here is an overview of the settings.
For more information read Appendix B, base.config reference.
secondary.storage.driver: The class name of the secondary storage plug-in.
secondary.storage.init: Initialisation parameters sent to the plug-in by calling the init() method.
secondary.storage.interval: Interval in seconds between each execution of the secondary storage controller plug-in.
secondary.storage.time: Time points during the day when the secondary storage controller plug-in should be executed.
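For example, the relevant part of base.config could look like the fragment below. The setting names are the ones listed earlier in this section; the class name, the values and the exact format of the time points are only examples and should be checked against Appendix B.

# Secondary storage controller (values are only examples)
secondary.storage.driver = com.example.MySecondaryStorageController
secondary.storage.init = /mnt/base-archive
# Interval in seconds between each execution
secondary.storage.interval = 21600
# Fixed time points during the day (assumed HH:MM format)
secondary.storage.time = 03:00,15:00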
The BASE web client has integrated support for unpacking
compressed files. See Section 7.2.1, “Upload a new file”.
Behind the scenes, this support is provided by plug-ins. The standard
BASE distribution comes with support for ZIP files
(net.sf.basedb.plugins.ZipFileUnpacker) and TAR files
(net.sf.basedb.plugins.TarFileUnpacker).
To add support for additional compressed formats you have to create a plug-in that
implements the net.sf.basedb.util.zip.FileUnpacker
interface. The easiest way to do this is to extend the
net.sf.basedb.util.zip.AbstractFileUnpacker
class, which takes care of most of the methods required by the
Plugin
and InteractivePlugin
interfaces.
Note: no support for configurations. The integrated upload in the web interface only works with plug-ins that do not require a configuration to run.
Methods in the FileUnpacker interface:
public String getFormatName();
Return a short string naming the file format. For example:
ZIP files
or TAR files
.
public Set<String> getExtensions();
Return a set of strings with the file extensions that
are most commonly used with the compressed file format.
For example: [zip, jar]
. Do not include
the dot in the extensions. The web client and the
AbstractFileUnpacker.isInContext()
method
will use this information to automatically guess which plug-in to
use for unpacking the files.
public Set<String> getMimeTypes();
Return a set of strings with the MIME types that are commonly used with
the compressed file format. For example:
[application/zip, application/java-archive]
.
This information is used by the
AbstractFileUnpacker.isInContext()
method to automatically guess which plug-in to use for unpacking
the files.
public int unpack(DbControl dc,
Directory dir,
InputStream in,
File sourceFile,
boolean overwrite,
AbsoluteProgressReporter progress)
throws IOException, BaseException;
Unpack the files and store them in the BASE file system.
Do not close() or commit() the DbControl passed to this method;
this is handled by the AbstractFileUnpacker or the calling code.
The dir
parameter is the root directory where
the unpacked files should be placed. If the compressed file
contains subdirectories the plug-in must create those subdirectories
unless they already exist.
If the overwrite
parameter is
FALSE
no existing file should be overwritten
unless the file is OFFLINE
or marked as
removed (do not forget to clear the removed attribute).
The in
parameter is the stream
containing the compressed data. The stream may come
directly from the web upload or from an existing
file in the BASE file system.
The sourceFile
parameter is the file
item representing the compressed file. This item may already be in
the database, or a new item that may or may not be saved in the database
at the end of the transaction. The information in this parameter
can be used to discover the options for file type, character set, MIME
type, etc. that was selected by the user in the upload dialog.
The PackUtil class contains utility methods that may be useful when implementing this method.
The progress
parameter, if not
null
, should be used to report the
progress back to the calling code. The plug-in should count
the number of bytes read from the in
stream. If this is not possible by other means, the stream can
be wrapped in a net.sf.basedb.util.InputStreamTracker,
which has a getNumRead()
method.
When the compressed file is uncompressed during the file upload from the web interface, the call sequence to the plug-in is slightly altered from the standard call sequence described in the section called “Executing a job”.
After the plug-in instance has been created, the
Plugin.init()
method is called with null
values for both the configuration
and job
parameters.
Then, the unpack()
method is called. The
Plugin.run()
method is never called in this case.
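Putting the pieces together, a skeleton for an unpacker plug-in could look like the sketch below. It handles a hypothetical “.foo” archive format: the metadata methods are complete, while the body of unpack() is only outlined since the details depend on the archive format and the BASE file API. The package names of the imported BASE classes are assumptions to verify against the javadoc, and depending on the BASE version AbstractFileUnpacker may leave one or two additional Plugin methods for the subclass to implement.

package com.example;

import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import net.sf.basedb.core.AbsoluteProgressReporter;
import net.sf.basedb.core.BaseException;
import net.sf.basedb.core.DbControl;
import net.sf.basedb.core.Directory;
import net.sf.basedb.core.File;
import net.sf.basedb.util.zip.AbstractFileUnpacker;

/**
   Skeleton of an unpacker for a hypothetical ".foo" archive format.
*/
public class FooFileUnpacker extends AbstractFileUnpacker
{
   public FooFileUnpacker()
   {}

   public String getFormatName()
   {
      return "FOO files";
   }

   public Set<String> getExtensions()
   {
      // No dot in the extensions
      return new HashSet<String>(Arrays.asList("foo"));
   }

   public Set<String> getMimeTypes()
   {
      return new HashSet<String>(Arrays.asList("application/x-foo"));
   }

   public int unpack(DbControl dc, Directory dir, InputStream in,
      File sourceFile, boolean overwrite, AbsoluteProgressReporter progress)
      throws IOException, BaseException
   {
      int numUnpacked = 0;
      // For each entry in the archive read from 'in':
      //  * create missing subdirectories below 'dir'
      //  * skip existing files unless 'overwrite' is true or the file is
      //    OFFLINE / marked as removed
      //  * create the new file item and write the entry data to it
      //  * report the number of bytes read from 'in' to 'progress' (if not null)
      // Do not close or commit the DbControl here.
      // The meaning of the return value (presumably the number of unpacked
      // files) should be checked against the javadoc.
      return numUnpacked;
   }
}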
BASE has support for compressing and downloading a set of selected files and/or
directories. This functionality is provided by a plug-in, the
PackedFileExporter. The plug-in doesn't do the actual packing itself;
this is delegated to classes implementing the
net.sf.basedb.util.zip.FilePacker interface.
BASE ships with a number of packing methods, including ZIP and TAR. To
add support for other methods you have to provide an implementation
of the FilePacker interface that can be used by the
PackedFileExporter.
The FilePacker interface is not a regular
Plugin interface, so the implementing class must be installed on the
class path (in the WEB-INF/classes
or WEB-INF/lib
folders).
Note: this may be converted to an extension point in the future. There are plans to convert the packing mechanism to an extension point. The main reason is easier installation, since code doesn't have to be installed in the WEB-INF/lib or WEB-INF/classes directory. See ticket #1600: Convert file packing plug-in system to an extension point for more information.
Methods in the FilePacker interface:
public String getDescription();
Return a short description of the file format that is suitable for use
in dropdown lists in client applications. For example:
Zip-archive (.zip)
or TAR-archive (.tar)
.
public String getFileExtension();
Return the default file extension of the packed format. The returned
value should not include the dot. For example:
zip
or tar
.
public String getMimeType();
Return the standard MIME type of the packed file format.
For example:
application/zip
or application/x-tar
.
public void setOutputStream(OutputStream out)
throws IOException;
Sets the output stream that the packer should write the packed files to.
public void pack(String entryName,
InputStream in,
long size,
long lastModified)
throws IOException;
Add another file or directory to the packed file. The
entryName
is the name of the new entry, including
path information. The in
parameter is the stream to read
the file data from; if in
is null
the entry denotes a directory. The size
parameter
gives the size in bytes of the file (zero for empty files or directories).
The lastModified
parameter is the time the file was last modified, or 0 if not known.
public void close()
throws IOException;
Finish the packing. The packer should release any resources, flush
all data and close all output streams, including the out
stream
set in the setOutputStream
method.
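As an illustration of how the methods fit together, here is a sketch of a packer that produces ordinary ZIP files using the standard java.util.zip classes. BASE already ships with a ZIP packer, so this is for illustration only; the net.sf.basedb.util.zip.FilePacker import is taken from the description above.

package com.example;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

import net.sf.basedb.util.zip.FilePacker;

/**
   Sketch of a FilePacker that writes ZIP files with the standard
   java.util.zip classes.
*/
public class SimpleZipFilePacker implements FilePacker
{
   private ZipOutputStream zip;

   public SimpleZipFilePacker()
   {}

   public String getDescription()
   {
      return "Simple zip-archive (.zip)";
   }

   public String getFileExtension()
   {
      return "zip";
   }

   public String getMimeType()
   {
      return "application/zip";
   }

   public void setOutputStream(OutputStream out)
      throws IOException
   {
      zip = new ZipOutputStream(out);
   }

   public void pack(String entryName, InputStream in, long size, long lastModified)
      throws IOException
   {
      // A null input stream denotes a directory entry
      ZipEntry entry = new ZipEntry(in == null ? entryName + "/" : entryName);
      if (lastModified > 0) entry.setTime(lastModified);
      zip.putNextEntry(entry);
      if (in != null)
      {
         byte[] buffer = new byte[8192];
         int bytesRead;
         while ((bytesRead = in.read(buffer)) != -1)
         {
            zip.write(buffer, 0, bytesRead);
         }
      }
      zip.closeEntry();
   }

   public void close()
      throws IOException
   {
      // Flushes all data and closes the underlying output stream as well
      zip.close();
   }
}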
BASE provides a plug-in mechanism for logging changes that are made to items.
This plug-in mechanism is not the same as the regular plug-in API. That is, you do not
have to worry about user interaction or implementing the Plugin interface.
Note: this may be converted to an extension point in the future. There are plans to convert the logging mechanism to an extension point, which would bring several benefits. See ticket #1601: Convert logging plug-in system to an extension point for more information.
The logging mechanism works on the data layer level and hooks into
callbacks provided by Hibernate. The work is split between three interfaces:
a LogManagerFactory creates a LogManager for each transaction, and the
log manager hands out EntityLogger instances that know how to log changes
to specific entities. Which LogManagerFactory to use is configured in
base.config
(see the section called “Change history logging section”). A single factory instance is created
when BASE starts and is used for the lifetime of the virtual machine. The
factory implementation must of course be thread-safe. Here is a list of
the methods the factory must implement:
public LogManager getLogManager(LogControl logControl);
Creates a log manager for a single transaction. Since a transaction is not thread-safe, the log manager implementation doesn't have to be either. The factory may therefore create a new log manager for each transaction.
public boolean isLoggable(Object entity);
Checks if changes to the given entity should be
logged or not. For performance reasons, it usually makes sense to
not log everything. For example, the database logger implementation
only logs changes if the entity implements the LoggableData
getEntityLogger()
.
public EntityLogger getEntityLogger(LogManager logManager,
Object entity);
Create or get an entity logger that knows how to log
changes to the given entity. If the entity should not be
logged, null
can be returned. This method
is called for each modified item in the transaction.
A new log manager is created for each transaction. The log manager is responsible for collecting all changes made in the transaction and store those changes in the appropriate place. The interface doesn't define any methods for this collection, since each implementation may have very different needs.
public LogControl getLogControl();
Get the log control object that was supplied by the BASE core when the transaction was started. The log controller contains methods for accessing information about the transaction, such as the logged in user, executing plug-in, etc. It can also be used to execute queries against the database to get even more information.
Warning: be careful about the queries that are executed by the log controller. Since all logging code is executed at flush time in callbacks from Hibernate, we are not allowed to use the regular session. Instead, all queries are sent through the stateless session. The stateless session has no caching functionality, which means that Hibernate will use extra queries to load associations. Our recommendation is to avoid queries that return full entities; use scalar queries instead to load only the values that are needed.
public void afterCommit();
public void afterRollback();
These methods are called after the transaction has been committed or rolled back, respectively, giving the log manager a chance to finalise or discard the log entries it has collected.
An entity logger is responsible for extracting the changes
made to an entity and converting them to something that is useful
as a log entry. In most cases this is not very complicated, but
in some cases a change in one entity should actually be logged
as a change in a different entity. For example, changes to
annotations are handled by the AnnotationLogger, which logs them
as changes on the items the annotations belong to.
public void logChanges(LogManager logManager,
EntityDetails details);
This method is called whenever a change has been detected
in an entity. The details
variable contains
information about the entity and, to a certain degree,
what changes have been made.
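To show how the three interfaces interact, here is a sketch of a logging implementation that simply prints detected changes to standard output. The method signatures are the ones described above, but the package names (net.sf.basedb.core.log for the interfaces and net.sf.basedb.core.data for LoggableData) are assumptions to verify against the javadoc.

package com.example;

import net.sf.basedb.core.data.LoggableData;
import net.sf.basedb.core.log.EntityDetails;
import net.sf.basedb.core.log.EntityLogger;
import net.sf.basedb.core.log.LogControl;
import net.sf.basedb.core.log.LogManager;
import net.sf.basedb.core.log.LogManagerFactory;

/**
   Sketch of a logging implementation that prints detected changes to
   standard output. One factory instance is created at server startup;
   a new manager is created for every transaction.
*/
public class ConsoleLogManagerFactory implements LogManagerFactory
{
   // A single, stateless entity logger can be shared by all transactions
   private final EntityLogger consoleLogger = new ConsoleEntityLogger();

   public ConsoleLogManagerFactory()
   {}

   public LogManager getLogManager(LogControl logControl)
   {
      return new ConsoleLogManager(logControl);
   }

   public boolean isLoggable(Object entity)
   {
      // Same rule as the database logger: only log "loggable" entities
      return entity instanceof LoggableData;
   }

   public EntityLogger getEntityLogger(LogManager logManager, Object entity)
   {
      return isLoggable(entity) ? consoleLogger : null;
   }

   static class ConsoleLogManager implements LogManager
   {
      private final LogControl logControl;

      ConsoleLogManager(LogControl logControl)
      {
         this.logControl = logControl;
      }

      public LogControl getLogControl()
      {
         return logControl;
      }

      public void afterCommit()
      {
         // A real implementation would finalise the collected entries here
         System.out.println("Transaction committed");
      }

      public void afterRollback()
      {
         // ...and discard them here
         System.out.println("Transaction rolled back");
      }
   }

   static class ConsoleEntityLogger implements EntityLogger
   {
      public void logChanges(LogManager logManager, EntityDetails details)
      {
         // EntityDetails describes the modified entity and, to some
         // degree, which properties changed
         System.out.println("Change detected: " + details);
      }
   }
}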