|Conditions Database (COOL) release based on the latest version of RAL including bulk insertion operations and extended tagging functionality.||Completed.
||The implementation port from RAL to CORAL was achieved in COOL 1.2.7 (January 2006).
API still based on POOL, with CORAL extensions for early adopters.
Extended tagging functionalities ("user tags" and "HVS") achieved in COOL 1.3.0 (April 2006). Integration with CORAL connection service functionalities also achieved in COOL 1.3.0 (higher priority for the experiments than multi-channel bulk insertion). New API based on CORAL, no residual POOL dependency.
Multi-channel bulk insertion operations were split off from this milestone and rescheduled as the new milestone COOL-4. Their implementation in COOL depends on the CORAL bulk update/delete functionalities (which were missing in RAL, hence were not available before the move from RAL to CORAL).
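As an illustration of why bulk insertion matters, the sketch below batches IOV rows for several channels into a single `executemany` call instead of issuing one INSERT per row. The table layout and names are hypothetical, not COOL's actual schema or API.

```python
import sqlite3

# Hypothetical IOV table (channel_id, since, until, payload) -- not COOL's real schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE iovs (channel_id INTEGER, since INTEGER, until INTEGER, payload TEXT)"
)

# Rows for several channels are accumulated client-side...
rows = [(ch, t, t + 10, f"value-{ch}-{t}") for ch in range(4) for t in range(0, 100, 10)]

# ...then inserted in one bulk operation (a single prepared statement executed
# over the whole batch), instead of one INSERT statement per row.
conn.executemany("INSERT INTO iovs VALUES (?, ?, ?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM iovs").fetchone()[0]
print(count)  # 4 channels x 10 IOVs = 40
```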
Prototypes of API and command line tools for data extraction and
cross-population of COOL databases. These tools are important for supporting
partial or complete distribution of the experiments' conditions databases
across several database technologies.
||A basic data extraction and copy tool has been implemented in the new package PyCoolUtilities in COOL 1.2.6 (November 2005). A basic data inspection tool has been implemented in the same package.|
|COOL-3||31.03.06||COOL overall performance study and validation of the experiments requirements. This study should identify the areas that will require further work and optimization.||Completed.
||Validation of the ATLAS 'first pass' reconstruction use case
achieved in October 2005 (one retrieval of full DCS data snapshots every 5
seconds from an Oracle RAC cluster, corresponding to sustained data rates of
20 MB/s and 200k table rows/s).
Performance optimizations for single version folders implemented in COOL 1.3.0 (April 2006): removal of linear increase in retrieval time of an IOV with the start-of-validity timestamp.
Further performance optimization for multi version folders delayed because the relevant person has left the development team and needs to be replaced. Rescheduled as new milestone COOL-12.
Also identified potential problem with large number of tables in the schema. New relational schema with fewer tables scheduled as new milestone COOL-13.
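The SV retrieval optimization above amounts to making the lookup cost independent of the requested timestamp. A minimal sketch of the idea, under the assumption of IOVs sorted by start-of-validity (the data layout is illustrative, not COOL's): binary search replaces a scan whose cost grows linearly with the timestamp.

```python
from bisect import bisect_right

# Hypothetical single-version folder: IOVs sorted by start-of-validity ("since").
iovs = [(since, since + 10, f"payload-{since}") for since in range(0, 1000, 10)]
starts = [iov[0] for iov in iovs]

def find_iov(t):
    """Return the IOV valid at time t via binary search (O(log n)),
    instead of scanning IOVs from the beginning -- a cost that would
    grow linearly with the requested timestamp."""
    i = bisect_right(starts, t) - 1
    if i >= 0 and iovs[i][0] <= t < iovs[i][1]:
        return iovs[i]
    return None  # no IOV covers time t

print(find_iov(995))  # (990, 1000, 'payload-990')
```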
|Support for multi-channel bulk insertion operations. This task requires the implementation of a channels table, which is also needed for channel name management.||Completed.||The
implementation of the channels table and channel name management was achieved
in COOL 2.0.0 (January 2007).
Full support for multi-channel bulk operations has been implemented in COOL 2.2.0 (July 2007). Tests have shown that the new implementation does provide a significant performance improvement. This task was rescheduled several times because it was allocated to one of the two ATLAS developers who left the COOL project during Q2 2006. The same developer resumed work on the project in Q4 2006 (even if only at the 20% FTE level) and ensured its completion during Q2 2007.
|COOL-5||30.09.06||Integration of the Frontier backend in COOL.||Completed.
||Support for Frontier was added in COOL 1.3.2 (May 2006).
Frontier integration was completed in COOL 1.3.3a (September 2006). This was only possible after extensive debugging and testing of the Frontier server, client and FrontierAccess software layers, to which the COOL team actively contributed during Q3 2006.
A comprehensive report on COOL performance has been produced for every release since
COOL 1.3.3 (August 2006).
|New RecordSpecification API (to specify the precision of persistent data types) and port to AMD64.||Completed.
||The new RecordSpecification API and the port to AMD64 were
achieved in COOL 2.0.0 (January 2007). This development required a schema
change (the description of user-defined payload specifications is now stored
using a different format). In addition to the record and field specification
classes (and interfaces), the new API also includes the record and field data
classes (and interfaces).
|Dynamic replication (at each replication request, only data inserted in the master database after the previous replication request is replicated).||Completed.
||The dynamic replication tool was released as part of package
PyCoolUtilities in COOL 2.0.0 (January 2007). Its implementation required
several schema changes (a column indicating the last modification date of
each row had to be added to several tables).
This milestone was actually completed in the COOL 2.1.0 release (March 2007), which includes several important bug fixes for COOL dynamic replication.
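The mechanism described above can be sketched as follows: each table carries a last-modification column, and a replication request copies only the rows modified after the previous request. Table and column names are illustrative, not the actual COOL schema.

```python
import sqlite3

master = sqlite3.connect(":memory:")
replica = sqlite3.connect(":memory:")
for db in (master, replica):
    db.execute("CREATE TABLE iovs (id INTEGER PRIMARY KEY, data TEXT, last_modified INTEGER)")

def replicate(since):
    """Copy to the replica only the rows modified after the previous
    replication request, using the last-modification column."""
    rows = master.execute(
        "SELECT id, data, last_modified FROM iovs WHERE last_modified > ?",
        (since,),
    ).fetchall()
    replica.executemany("INSERT OR REPLACE INTO iovs VALUES (?, ?, ?)", rows)
    return max((r[2] for r in rows), default=since)  # new high-water mark

master.executemany("INSERT INTO iovs VALUES (?, ?, ?)", [(1, "a", 100), (2, "b", 200)])
mark = replicate(0)                    # first request copies both rows
master.execute("INSERT INTO iovs VALUES (3, 'c', 300)")
mark = replicate(mark)                 # second request copies only row 3
total = replica.execute("SELECT COUNT(*) FROM iovs").fetchone()[0]
print(total)  # 3
```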
|Deployment of COOL database services at Tier0 (separate instances for online and offline) and Tier1 for Atlas with Streams replication.||Completed.||For Atlas: a test service setup was prepared with two-step
Streams replication between CERN online (IT-PSS 'Atlas-online' RAC), CERN
offline (IT-PSS 'integration' RAC), six 'phase-1' and two 'phase-2' Tier1
sites (BNL, CNAF, Gridka/FZK, IN2P3, RAL, Taiwan/ASGC; Nikhef/SARA, Triumf)
by Q4 2006. Of the two remaining 'phase-2' Tier1 sites, Nordugrid joined in
Q1 2007 and PIC in Q2 2007. The production T0 setup was also completed in Q2
2007, with the move from the 'integration' RAC to the production IT-PSS 'Atlas-offline'
Oracle RAC server.
|Deployment of COOL database services at Tier0 (separate instances
for online and offline) and Tier1 for LHCb with Streams replication.
||Completed.||For LHCb: a test service setup was prepared with two-step
Streams replication between CERN online (private LHCb test single-instance
server at the pit), CERN offline (IT-PSS 'integration' RAC) and three
'phase-1' Tier1 sites (Gridka/FZK, IN2P3, RAL) by Q4 2006. One 'phase-1'
(CNAF) and one 'phase-2' (Nikhef/SARA) Tier1 sites joined in Q1 2007. The
last 'phase-2' site (PIC) joined in Q2 2007. The production 'LHCb-offline'
RAC server replaced the 'integration' RAC in the T0 setup for LHCb in Q2 2007.
The production T0 setup was finally completed in Q3 2008, with the move to the production 'LHCb-online' RAC server, installed and managed by LHCb at the pit. The delay was due entirely to LHCb.
|COOL-10||31.03.07||Implement a tag 'locking' mechanism to prevent changes to locked tags.||Completed.
||All schema changes relevant to this task have been included in
COOL 2.0.0 (January 2007). A 'tag lock status' column has been added to the tags table.
The actual tag locking functionality and the corresponding API extensions were included in the COOL 2.1.0 release (March 2007).
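A minimal sketch of the locking semantics (hypothetical class and method names, not the COOL API): once the lock flag is set, further insertions into the tag are rejected.

```python
class LockedTagError(Exception):
    pass

class Tag:
    """Illustrative tag object with a lock-status flag; the names are
    hypothetical and do not reflect the actual COOL C++ API."""
    def __init__(self, name):
        self.name = name
        self.iovs = []
        self.locked = False   # corresponds to a 'tag lock status' column

    def add_iov(self, iov):
        if self.locked:
            raise LockedTagError(f"tag '{self.name}' is locked")
        self.iovs.append(iov)

tag = Tag("COND-TAG-01")
tag.add_iov((0, 10, "payload"))
tag.locked = True                 # lock the tag...
try:
    tag.add_iov((10, 20, "payload"))
except LockedTagError:
    print("insertion rejected")   # ...further changes are rejected
```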
|Support for MacOSX.||Completed.||A full build of COOL on PowerPC MacOSX (using SCRAM) was first
completed in November 2006, using a private build of CORAL and a private
installation of Oracle.
A public installation of COOL for PowerPC MacOSX (using CMT) has been prepared in the COOL 2.2.0 release (July 2007). All C++ tests are successful on MySQL and SQLite, and all except one on Oracle (where the failure is due to a known bug in the Oracle 10.1 client library - 10.2 is not yet available for PowerPC MacOSX). PyCool could not be ported due to missing support for PyROOT on PowerPC MacOSX.
Support for Intel MacOSX (using CMT) has been achieved in COOL 2.2.1 (October 2007). This includes support for PyCool, thanks to important fixes in the python and ROOT installations for Intel MacOSX. All C++ and PyCool tests are successful on MySQL and SQLite. Oracle and Frontier are not yet supported because no Oracle client library is available for Intel MacOSX.
||Server-side (SQL query) performance optimization for SV
single-channel and MV user-tag retrieval.
||Completed.||Two server-side SQL performance optimizations, for
single-channel retrieval from single-version (SV) folders, and for user tag
retrieval from multi-version (MV) folders, have been included in COOL 2.1.0 (March 2007).
Other important performance optimizations, including those for MV tag retrieval, have been rescheduled as milestones COOL-18, COOL-19 and COOL-20.
|New relational schema with fewer tables.||Removed.
|The COOL 2.0.0 release (January 2007) includes several schema
changes relevant to this task. In addition to the global (database) schema
version, it is now possible to define a schema version at the folder level.
In the future, it will thus be possible to create new 2.x.0 folders (using
fewer relational tables) on a 2.0.0 database which can still be read (except
for the new 2.x.0 folders) using the 2.0.0 software.
This milestone has been removed because there is no agreed date to complete it for the moment.
|Support for simple payload queries (lookup of IOVs by payload data).||Completed.||The implementation of payload queries is based on the new
record and field interfaces described in milestone COOL-7 and released in
COOL 2.0.0 (January 2007).
This milestone was resumed in Q3 2008 after being removed in Q2 2007. The new API and its implementation were released in COOL 2.6.0 (November 2008).
|COOL-15||31.03.07||Move from SCRAM to CMT. Integration with the nightly build system and QMTEST.||Completed.
||The CMT configuration to build COOL and its integration with the nightly build system
and QMTEST were completed in December 2006. Nightly tests of COOL have been
executed against SQLite since December 2006. Since February 2007, they have
also been executed against Oracle, MySQL and Frontier.
While COOL 1.3.4 (December 2006) was released using CMT,
SCRAM developments were not immediately dropped: COOL 2.0.0 (January 2007) was the last release prepared using SCRAM. COOL 2.1.0 (March 2007) and all subsequent releases were again released using CMT.
SCRAM was still kept for a few months, exclusively for internal use by the COOL development team. The use of SCRAM was completely dropped as of COOL 2.2.2 (November 2007).
|Move from the SEAL component model to the new CORAL component model.||Completed.||The COOL team, together with the CORAL and SEAL teams, actively
contributed to the debugging and testing of the SEAL component model in
multi-threaded mode during Q3/Q4 2006. These activities led to the SEAL 1.9.0
and 1.9.1 releases in Q4 2006 and to the decision to drop the SEAL component
model and move it into CORAL.
Most of the C++ API changes relevant to this task were completed in COOL 2.0.0 (January 2007). All SEAL classes were removed from the COOL C++ API (with one minor exception requested by the Atlas users to be able to retrieve the underlying CORAL ConnectionService as long as SEAL was not dropped completely).
The SEAL-free internal reimplementation of COOL was completed in COOL 2.5.0 (June 2008). SEAL message streams have been replaced by CORAL message streams. The plugin loading functionality previously provided by SEAL is not needed anymore as the COOL database service component should now be explicitly linked rather than loaded at runtime. The COOL C++ API has been extended to allow Atlas users to use inside COOL an existing CORAL ConnectionService or to create one inside COOL and then use it in CORAL-only applications.
|COOL-17||31.03.07||Integration with the CORAL LFC-based lookup service.||Completed.
||The integration of COOL (both in C++ and in PyCool) with the
CORAL LFC replica service was completed in COOL 2.1.0 (March 2007). The use
of the LFC replica service with COOL has been tested and a user example has
been included in the COOL Examples package.
Support for LFC in several COOL command line tools has been added in COOL 2.2.0 (July 2007).
||Server-side (SQL query) performance optimization for SV multi-channel retrieval.
||Completed.||The SQL performance optimization for multi-channel retrieval
from SV folders has been included in COOL 2.2.0 (July 2007). This improvement
was the main focus of CORAL and COOL changes in Q2 2007, as the poor data
retrieval performance for this use case was a blocker for the distributed
conditions database stress tests performed by Atlas at T0 and T1 sites.
||Client-side performance (C++ profile) optimization.
||Completed.||Important client-side performance improvements have been
achieved in COOL 2.2.0 (July 2007). This release includes C++ API extensions
that allow data retrieval with minimal overhead with respect to CORAL (the
data retrieved by CORAL are now available to COOL users without any
additional in-memory data copy).
|Server-side (SQL query) performance optimization for MV tag retrieval.||Completed.||The server-side performance optimizations for standard tag
retrieval from MV folders was originally foreseen in the COOL-12 milestone
due in March 2007. It has been rescheduled several times due to more urgent
performance improvements, such as the SQL query optimizations for data
retrieval from SV folders.
This optimization, requested by LHCb, has been achieved in the COOL 2.3.1 release (February 2008), using code which was designed from the start to be reusable also for several other use cases where performance optimization is still needed. These additional performance improvements have been scheduled as a new milestone COOL-31.
|COOL-21||30.10.07||Allow insertion of IOVs in the past for SV folders.||Completed.||Back-insertion of non-overlapping IOVs in single-version folders
has been implemented in COOL 2.2.1 (October 2007), following a requirement
from the Atlas online community. Bulk insertion is not always possible in
this case, but the available functionality is enough to provide a solution
for the Atlas needs.
||Performance optimization for channel metadata retrieval in MV folders.||Completed.||The implementation of channel metadata retrieval has been
improved with functional bug fixes and performance optimizations in the COOL
2.2.2 release (November 2007).
Further performance enhancements were achieved through the API extension to retrieve channel names in bulk in COOL 2.3.0 (January 2008).
|COOL-23||31.01.08||IOV retrieval from one channel selected by its name.
||Completed.||The retrieval of IOVs from a single channel selected by its name has been completed in COOL 2.3.0 (January 2008).|
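Conceptually, selecting a channel by name resolves the name through the channels table (introduced in COOL 2.0.0) and joins it with the IOV table, so the caller no longer needs the numeric channel id. The sketch below uses an illustrative SQLite schema, not COOL's actual one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE channels (channel_id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.execute("CREATE TABLE iovs (channel_id INTEGER, since INTEGER, until INTEGER, payload TEXT)")
conn.executemany("INSERT INTO channels VALUES (?, ?)", [(0, "HV_SECTOR_A"), (1, "HV_SECTOR_B")])
conn.executemany("INSERT INTO iovs VALUES (?, ?, ?, ?)",
                 [(0, 0, 10, "a0"), (1, 0, 10, "b0"), (1, 10, 20, "b1")])

# Select IOVs by channel *name* via a join on the channels table,
# instead of requiring the numeric channel id from the caller.
rows = conn.execute(
    """SELECT i.since, i.until, i.payload
       FROM iovs i JOIN channels c ON i.channel_id = c.channel_id
       WHERE c.name = ? ORDER BY i.since""", ("HV_SECTOR_B",)
).fetchall()
print(rows)  # [(0, 10, 'b0'), (10, 20, 'b1')]
```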
|COOL-24||31.01.08||Remove all warnings and errors from the nightly builds and tests.
||Completed.||Whenever a COOL release has been produced, it has always passed
all functional tests on all supported platforms, executed in the environment
of the development team. A few configuration issues still had to be fixed to
make sure that the same tests would also pass in the environment of the
automatic nightly builds. These issues have all been solved in COOL 2.3.0
(January 2008). All pending build warnings have also been removed at the same time.|
|Implement a 'partial' tag locking mechanism.||Completed.||'Partial' tag locking is meant to prevent the removal but allow
the addition of new IOVs or HVS nodes to partially locked tags.
The generic API for partial tag locking, and its implementation for the additions of new HVS tags, have been completed in COOL 2.3.0 (January 2008). The functionality to allow also the addition of IOVs to partially locked tags was completed in Q3 2008 and released in COOL 2.6.0 (November 2008).
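The intended semantics can be sketched with three lock states (the names below are hypothetical): a fully locked tag rejects all changes, while a partially locked tag accepts new IOVs but rejects removals.

```python
UNLOCKED, PARTIALLY_LOCKED, LOCKED = range(3)

class HvsTag:
    """Sketch of the three lock states (hypothetical names, not the COOL
    API): fully locked rejects any change; partially locked rejects
    removals but still accepts additions of new IOVs."""
    def __init__(self):
        self.iovs = []
        self.lock = UNLOCKED

    def add_iov(self, iov):
        if self.lock == LOCKED:
            raise RuntimeError("tag is fully locked")
        self.iovs.append(iov)

    def remove_iov(self, iov):
        if self.lock != UNLOCKED:
            raise RuntimeError("removal forbidden on a (partially) locked tag")
        self.iovs.remove(iov)

tag = HvsTag()
tag.add_iov((0, 10))
tag.lock = PARTIALLY_LOCKED
tag.add_iov((10, 20))            # addition still allowed
try:
    tag.remove_iov((0, 10))      # removal rejected
except RuntimeError:
    print("removal rejected")
```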
|Support for the gcc4 compiler on Linux.
||Completed.||The port of the COOL code and configuration to support gcc4.1
was completed in COOL 2.3.0 (January 2008). This has never become an
officially supported platform in the LCG AA, because it has been replaced by gcc4.3.
The port of the COOL code to gcc4.3 started in October 2008 and was completed in Q4 2008. This required several API changes ('const int f()' -> 'int f()') to fully comply with gcc4.3, which is stricter than gcc4.1. COOL has been released for gcc4.3 in COOL 2.7.0 (February 2009), also thanks to the completion of the CORAL port (POOL-19).
||Add the option to disable inserting into the global HEAD when
inserting MV IOVs with a 'user tag'.
||Completed.||This simplification of the data model for IOVs inserted with a 'user tag' associated with them has been completed in COOL 2.4.0 (February 2008). This feature allows users to 'clone' existing tags in a simple and efficient way.|
|Support for the 'CORAL server' backend.
||Completed.||Support for 'coral://' URLs was first prototyped in COOL 2.4.0
(February 2008), allowing simple tests against early prototypes of the CORAL
server and the definition of additional constraints on its development for
its integration into COOL. Since then, the COOL read-only tests have been
routinely used to validate the CORAL server implementation (POOL-13).
Full support for this new backend has been achieved in COOL 2.8.1 (June 2009) thanks to the first release of the CORAL server in CORAL 2.3.1 (POOL-16).
|Expose transaction management in the user API.
||Removed.
||Prototypes of the API and implementation for this feature
(requested by ATLAS) were prepared in Q4 2008. The task of reviewing and
releasing this implementation has never been completed, due to more urgent
priorities for the PF in 2009 (such as the CORAL server developments and the
support for new platforms and externals). The milestone has been removed
because this functionality is no longer a priority for ATLAS and because a
more general review of transaction management in CORAL and COOL is likely to take
place in the context of CORAL server developments in 2010.
|Allow session sharing in the user API.
||Removed.
||This milestone depends on transaction management (COOL-29). Both milestones have been removed as they are no longer a priority for ATLAS.|
||Reimplement and optimize all SQL queries for IOV retrieval by
time, reusing the same C++ methods for different SV and MV use cases.
||Completed.||The SQL queries needed to handle the various COOL use cases (SV,
MV tags, MV user tags, MV HEAD...) were originally defined in separate C++
methods, added over time. In order to allow the future maintenance of the
software and further performance optimizations, these pieces of code needed to
be merged together.
Some improvements in this direction were added in the COOL 2.3.1 release (February 2008): the same code is used for IOV retrieval from MV tags and MV user tags. This has allowed the simultaneous performance optimizations of IOV retrieval from MV tags, and IOV insertion with MV user tags. Additional improvements were then added in COOL 2.5.0 (June 2008) to reuse the same code also for some SV and MV 'head' queries.
The major internal refactoring and cleanup that are necessary to achieve this task were finally prepared during Q3 2008. The code was released in COOL 2.6.0 (November 2008).
||Implement the 'tag cloning' functionality.||Completed.
||This functionality has been requested by LHCb. Its
implementation was completed during Q3 2008 and was released in COOL 2.6.0 (November 2008).|
||Avoid unnecessary COUNT(*) queries in IOV retrieval.
||Completed.||This performance optimization has been requested by Atlas as a
result of their distributed stress tests in Q3 2008. Its implementation was
completed and released in COOL 2.6.0 (November 2008). The size of IOV
iterators is now computed only on demand, avoiding unnecessary COUNT(*)
queries against the database server.
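The on-demand size computation can be sketched as follows: the iterator streams rows immediately, and the COUNT(*) query is issued at most once, and only if the caller actually asks for the size (illustrative code, not the COOL API).

```python
import sqlite3

class IovIterator:
    """Iterator whose size is computed only on demand: plain iteration
    never issues the (potentially expensive) COUNT(*) query."""
    def __init__(self, conn, query, count_query):
        self._conn = conn
        self._cursor = conn.execute(query)
        self._count_query = count_query
        self._size = None
        self.count_queries_issued = 0   # instrumentation for the example

    def __iter__(self):
        return iter(self._cursor)

    def size(self):
        if self._size is None:           # COUNT(*) runs at most once...
            self.count_queries_issued += 1
            self._size = self._conn.execute(self._count_query).fetchone()[0]
        return self._size                # ...and only if the caller asks.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE iovs (since INTEGER)")
conn.executemany("INSERT INTO iovs VALUES (?)", [(i,) for i in range(5)])

it = IovIterator(conn, "SELECT since FROM iovs", "SELECT COUNT(*) FROM iovs")
rows = list(it)                    # streams rows; no COUNT(*) issued here
queries_before = it.count_queries_issued
size = it.size()                   # first explicit request triggers COUNT(*)
print(queries_before, size)  # 0 5
```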
|Support for MS VC9 (except for PyCool).
||Completed.||A significant effort was spent during Q3 2008 on the port of the
COOL code and configuration to support the Microsoft Visual Studio Express
2008 (VC9) compiler. In cooperation with the SPI and ROOT teams, this
resulted in good progress also in fixing several issues with gccxml, ROOT and
LCGCMT. COOL could be fully built by November 2008 but several issues still
existed at runtime during tests.
Thanks mainly to the completion of the CORAL port to VC9 (POOL-21) and the rebuilding of several external packages using VC9, COOL has been released with full C++ support for VC9 in COOL 2.7.0 (February 2009).
The only pending problem is PyCool, which cannot be loaded at runtime. The issue persists even after rebuilding Python natively on VC9. This task has been removed from this milestone and will be rescheduled only if PyCool support on VC9 is required by the experiments.
|Migration from CVS to SVN.
||No progress. Rescheduled.
||This task now has a lower priority and has been rescheduled
because the CVS service will be maintained until all experiments have
migrated to SVN, which is not expected to happen before the end of
||Support for Linux SLC5 (except for Oracle).
||Completed.||The port of COOL and all other PF projects to SLC5 has been
relatively smooth, involving only a few configuration changes. COOL has been
released on SLC5 in COOL 2.7.0 (February 2009).
Only partial support is presently provided for Oracle on SLC5, due to incompatibilities of the Oracle client libraries and SELinux. This task has been removed from this milestone and rescheduled as COOL-37.
||Full support for Oracle on Linux SLC5.
||Completed.||For LCG releases using Oracle 10.2, support for Oracle on SLC5
can only be provided if the SELinux security layer is partially disabled.
This is due to a bug in the Oracle 10.2 client libraries, where the presence
of text relocations may result in failures at runtime ('cannot restore
segment prot after relocation') if SELinux is fully enabled. The issue, which
has been followed up with Oracle Support by the PF team, can only be solved
by an upgrade to the 11.2 Oracle client, while no bug fix will be provided by
Oracle for the 10.2 release series.
In Q3 2009, the first version of the 11.2 client libraries for Linux were released by Oracle and were used to prepare the LCG_57 release, including COOL 2.8.3 (September 2009). According to Oracle, the problem in the OCI libraries (used by CORAL) should have been fully solved in the September 2009 release of the 11.2 client. It is worth noting that the issue was instead still unsolved for OCCI-based applications (such as some CMS packages).
New tests in Q4 2009 showed that the bug was still present in the 11.2 OCI libraries for linux64, while it had been solved only for linux32. However the problem was no longer considered a showstopper, thanks to a much better understanding of the causes and possible workarounds for this issue.
In Q1 2010, new patches for the bugs in the linux64 OCI and linux32 OCCI libraries were received from Oracle Support and installed in the '184.108.40.206.0p1' client on AFS. The issue is now fully fixed for CORAL (which uses OCI), although it is not yet solved for the linux64 OCCI libraries.
||New relational schema with a separate payload table.
||Completed.||Support for a new relational schema offering the option to store
conditions data payload in a separate payload table has been implemented in
COOL 2.8.0 (May 2009), following a request from ATLAS. This option can be
especially useful for avoiding the duplication of large payload in MV
folders. It is only available for new folders, as the implementation of
schema evolution tools has not been requested.
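The idea of the split schema is that IOV rows reference a shared payload row by id, so a payload duplicated across many MV versions is stored only once. Illustrative SQLite sketch, not the actual COOL schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical split schema: IOVs reference a shared payload row by id,
# so the same (possibly large) payload is stored once even when many
# multi-version IOVs point to it.
conn.execute("CREATE TABLE payloads (payload_id INTEGER PRIMARY KEY, data TEXT)")
conn.execute("CREATE TABLE iovs (since INTEGER, until INTEGER, payload_id INTEGER)")

conn.execute("INSERT INTO payloads VALUES (1, 'large calibration blob')")
# Three IOV versions share one payload row instead of three copies:
conn.executemany("INSERT INTO iovs VALUES (?, ?, ?)", [(0, 10, 1), (10, 20, 1), (20, 30, 1)])

n_iovs = conn.execute("SELECT COUNT(*) FROM iovs").fetchone()[0]
n_payloads = conn.execute("SELECT COUNT(*) FROM payloads").fetchone()[0]
print(n_iovs, n_payloads)  # 3 1
```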
|Performance improvement for CLOB data (bulk retrieval).||Completed.||During Q2 2009 Atlas reported slow performance for read access
to COOL folders containing CLOB data. The COOL implementation has been
changed so that CLOB data are retrieved in bulk via CORAL rather than row by
row. After being validated through functional and performance tests, the
patch was released in COOL 2.8.4 (December 2009).
|COOL-40||30.09.09||Validate COOL performance against Oracle 11g servers.
||Completed.||The production database servers for physics users are presently
running the Oracle 10.2 software. It is foreseen that from 2010 onwards the
servers will progressively be migrated to the latest Oracle 11.2 version,
released in September 2009. To ensure no disruption in the quality of COOL
services, the performance of COOL queries has been analysed and validated in
Q3 2009 against dedicated test servers running Oracle 11.1, which already
contains most of the new performance-relevant changes in Oracle 11.2.
||Speed up the execution of the COOL nightly tests.||Completed.||The very large COOL test suite has been a key ingredient of the
success of COOL. As new features have been added to COOL, the test suite has
kept growing and its execution time in the nightlies has significantly
increased. Most of this time is spent in tests against Oracle and MySQL
database servers, which were getting increasingly overloaded from the
simultaneous tests on all platforms (~10) and slots (~3). Oracle tests were
especially suffering from repeated DDL operations (table creation and drop)
from concurrent sessions on the same schema.
Several improvements have been prepared in the COOL and CORAL versions for LCG_57 (September 2009). First, COOL tests have been modified to avoid table creation and drop unless strictly necessary. This alone has reduced the single-client execution time by more than a factor 3 for some Oracle tests. Second, the CORAL OracleAccess plugin has been modified to optimize queries against the Oracle data dictionary. Tests against Oracle now run faster than those against MySQL, which are the new bottleneck of the COOL nightlies. Third, MySQL and Oracle tests can now be selectively disabled on test slots not relevant to upcoming releases, keeping only the faster tests against SQLite.
|Oracle partitioning for the COOL relational schema.
||In progress. Rescheduled.||Oracle partitioning is being evaluated as a component of the
strategy for the long term archiving of the large volumes of COOL conditions
data from the LHC experiments. Tests of COOL query performance on partitioned
schema prototypes have been resumed in Q3 2009, giving more optimistic
results than previous tests performed in 2008 using earlier software versions.
During Q1 2010, it was finally understood that the more recent tests gave a better query performance because they had been executed against an Oracle 11g database server. In particular, the bad query performance obtained with the 10.2.0.4 Oracle server version (currently used for all production database servers) is due to a bug which can only be solved by the upgrade to the 10.2.0.5 or 11.2 server versions, which will not be deployed in production at CERN before summer 2010 at least. This task has been postponed accordingly, as it is not a highest-priority issue for the experiments in any case.
|COOL-43||31.01.10||COOL, CORAL and POOL port to the ICC compiler.
||Completed.||A new development platform using the icc compiler on Linux was
introduced in Q4 2009. CORAL, POOL and COOL have been ported to the new
platform and the nightly builds and tests are now successful for all three
projects. The port to icc was useful to improve the code by removing new
build warnings from icc, and to further investigate the issues caused by text
relocations on SLC5 when SELinux is enabled (previously reported for the
Oracle libraries). It was observed that icc 11.1 is necessary because icc 11.0
does not correctly handle text relocations.
|New relational schema where several payload records are
associated to the same IOV.
||In progress. Rescheduled.||A new COOL functionality was requested by ATLAS to allow one IOV
to be associated to a variable-size vector of payload records, rather than to
a single payload record. This is meant to
be functionally equivalent to the ATLAS CoraCool package, but to offer
better performance thanks to the use of SQL joins for retrieving payloads at
the same time as IOV metadata.
The new functionality has been implemented by the ATLAS members of the COOL team during Q2 2010 and is presently being reviewed before its inclusion in the COOL code base. Its release has been rescheduled because ATLAS requested binary compatibility of the CORAL and COOL APIs until late 2010, to be able to run older releases of the ATLAS HLT online software against the more recent CORAL and COOL versions used in the ATLAS offline software.
|POOL (includes CORAL)|
|POOL-1||31.10.05||Production quality release of the relational database API (RAL) package, which should include the new interface recently reviewed.||Completed.||The first
public release of CORAL (version 1.0.0) was made available on 22/11/2005 and
contained all of the new functionality exposed through the reviewed
interfaces. Prior to that there had been a few internal releases allowing
beta-testers from the COOL development team and the experiment software
integrators to send early feedback. Since the first public release, two
further major releases and one minor (bug-fix) release have been made available.
As of CORAL version 1.1.0, a plugin based on the frontier-client is included in the releases. The plugin allows the transparent use of FroNtier servers (possibly cached via SQUID) as a read-only database backend. This new functionality has been tested by the CMS experiment with the full POOL software stack (including object relational mapping) as part of their LCG 3D work, and several enhancements went into the Frontier code and the first production release of the plugin in POOL 2.3.
|POOL-2||31.12.05||POOL framework based on new C++ reflection libraries (Reflex) available for the experiments to be used in production. Validation by the experiments completed.||Completed.||The POOL code is based on SEAL Reflex as of version 2.2.0, released on 21/9/2005. Since then 7 bug-fix releases have been produced which allowed the experiments to gradually pick up this part of the SEAL software through POOL. POOL will migrate to the ROOT version of Reflex (moved with SEAL 1.8) with the upcoming POOL 2.3 release.|
|Finalize the migration POOL/CORAL to the new platforms (MacOSX,
SLC4_amd64) with regular builds, and full running of the functional and data
regression tests. Migration to scram v1.
||Completed.||Regular builds for slc4_amd64 exist for CORAL and POOL as of versions 1.4.1 and 2.4.2 respectively. The support of MacOSX will arrive as soon as the underlying externals become available (expected date 31.02.07). The migration to scram v1 was superseded last quarter by the AF decision to move POOL and CORAL to CMT based builds. This is reflected by the new milestone POOL-9.|
|POOL-4||30.09.06 30.11.06||Development and deployment of LFC-based lookup and DB authentication services of CORAL.||Completed.||The LFC based DB lookup service prototype has been provided and
released in CORAL 1.5.4. The production version, extended to allow
authentication based on LFC, is completed and released with CORAL 1.6.3. It is
to be validated by the experiments and deployed within the first quarter of 2007.
|POOL-5||30.10.06||Complete migration to CORAL (AttributeList) and the SEAL component model of all POOL components||Completed.||The migration to the SEAL component model has been completed
in the POOL CVS repository for most packages. The same applies for the
migration to the CORAL AttributeList. However, no corresponding releases have
been made available yet and the finalization of the development has been
canceled as a consequence of the recent AF decision to migrate the SEAL
functionality into CORAL and to deprecate the use of SEAL in POOL and CORAL.
This is reflected by the new milestone POOL-9.
|Make all CORAL components thread-safe.||Completed.||The work started with updates to the SEAL component model to
make sure that the problems manifesting in multi-threaded applications were fixed.
CORAL has been updated to allow switching off the "cleanup
thread" in the ConnectionService, in case the problems still persist.
The high level CORAL services (ConnectionService, RelationalService) have already been made thread safe, as well as the high level classes (up to ISchema) in OracleAccess. The system tests exercising the relevant use cases defined by the experiments (mainly ATLAS online) are passing, and the new functionality has been released with CORAL 1.7.0.
|POOL-7||31.12.06||Provide a python interface for CORAL.||Completed.||A Python C++ extension module, implemented based on the Python C
API, has been developed in collaboration with RRCAT, Indore, India. It is
available in every CORAL release as of version 1.6.3.
|Provide schema evolution for relational data according to a priority list of required use cases provided by the experiments.||Completed.||The schema evolution features in POOL/ORA have been defined in
collaboration with the CMS team, who has provided a set of concrete use
cases. The implementation of the new features has been completed and tested.
The related code has been included in the POOL release since POOL_2_7_0. The
CMS team is currently working on the testing of the new features.
|POOL-9||31.01.07||CMT migration finished for POOL/CORAL.||Completed||The migration has been achieved with CORAL 1.7.0 and POOL 2.5.0.
In addition, both CORAL and POOL have been integrated into the AA nightly
build system, which involved migrating all integration and system tests to
the use of Qmtest, while the POOL regression tests are still pending.
|POOL and CORAL independent from SEAL.||Completed.||The work of removing the SEAL dependencies from POOL is
completed. The first configuration LCG_54 with these changes has been
recently released and is being integrated by LHCb and ATLAS.
|Complete the porting of the POOL data regression tests into the
nightly build system.
||Completed.||The work on this milestone was completed in August 2007.|
|POOL-12||31.03.07||Extend the CORAL API for the new functionalities requested by
experiments: execution of stored procedure, interface for replica re-ordering.
||Support for stored procedures has been added to the general
RelationalAccess API and fully implemented for the Oracle plug-in. Similar
support for MySQL (and SQLite?) requires further development. The
abstraction for replica re-ordering and its access in the configuration
has also been added to the RelationalAccess API. The corresponding
ConnectionService implementation has been updated accordingly. The new
features were released on 4.6.2007 with CORAL_1_8_0.
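The replica re-ordering abstraction can be illustrated with a small sketch: the client installs its own policy for ranking the database replicas found by the lookup service before a connection is attempted. This is a hypothetical Python illustration (the real CORAL interface is C++ and its names may differ):

```python
# Hypothetical sketch of a replica re-ordering policy: the service calls
# the installed algorithm to reorder the candidate replica list in place
# before trying to connect, so site-local replicas are tried first.
class ReplicaSortingAlgorithm:
    """Base class: implementations reorder the replica list in place."""
    def sort(self, replicas):
        raise NotImplementedError

class PreferLocalSite(ReplicaSortingAlgorithm):
    def __init__(self, site):
        self.site = site
    def sort(self, replicas):
        # Stable sort: replicas at the local site move to the front,
        # the original order is otherwise preserved.
        replicas.sort(key=lambda r: r["site"] != self.site)

replicas = [{"site": "RAL", "url": "oracle://ral/db"},
            {"site": "CERN", "url": "oracle://cern/db"}]
PreferLocalSite("CERN").sort(replicas)
# replicas[0] is now the CERN replica
```

The design choice — an abstract sorting hook rather than a fixed policy — is what lets each experiment plug in its own site-preference logic through the configuration.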
|POOL-13||30.06.08 31.12.08||CORAL server development. COOL read-only tests for selected basic
use cases pass.
||Included in POOL-16. Removed.||This milestone has been included in POOL-16 (release of CORAL server with read-only functionality) and removed.|
|POOL-14||15.08.08 30.04.09||CORAL server development. All CORAL integration tests (including write test) pass. This will also require some extension of the current CORAL tests suite to achieve full coverage.||Included in POOL-17. Removed.||This milestone has been included in POOL-17 (release of CORAL server with update functionality) and removed.|
|CORAL Server (read-only) scalability and stress tests pass. Validation using the Atlas HLT tests.||Completed.||This
milestone has been reduced in scope to tests of the read-only functionality
and performance required by the Atlas HLT team. Good progress has been
achieved in both areas using the new implementation developed in 2009. The
software now passes the basic functionality tests against all three Oracle
databases relevant to the Atlas HLT use case: COOL (achieved in Q1), as well
as geometry and trigger data (achieved in Q2).
The development and testing process was greatly improved in Q2 2009 when our Atlas-HLT collaborators developed a standalone Athena HLT test that could be executed also by the non-Atlas members of the team. This test is being used extensively for functional debugging, but also for measuring and reducing the performance overhead of running through a CORAL server rather than connecting directly to Oracle.
During Q3, the software was installed in the Atlas control room to perform more complete and realistic tests of functionality and performance for the HLT use case, including the validation of the data quality of the software chain for physics processing. All functional tests using a dedicated partition in August were successful, basically on the very first try: the CoralServer and a tree of 839 CoralServerProxies let the 6480 HLT processes configure without any obvious problems. In September, after
several minor enhancements, the CoralServer/CoralServerProxy software has been installed on the production ATLAS partition and is now used instead of M2O/DbProxy for the configuration of the high-level trigger. The software is running smoothly and no problems have been observed. In summary, the CoralServer and CoralServerProxy software is now fully deployed and validated for the ATLAS online system.
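The proxy tree works because HLT configuration is a read-only workload in which thousands of processes issue essentially identical requests: each proxy node only needs to cache results by request key, so the whole fan-out collapses into very few upstream round trips. A minimal Python sketch of this caching idea (names hypothetical, not the CoralServerProxy implementation):

```python
# Hypothetical sketch of a caching read-only proxy node. Each node answers
# a repeated request from its cache; only the first occurrence is forwarded
# upstream (to another proxy, or ultimately to the server).
class QueryCache:
    def __init__(self, upstream):
        self.upstream = upstream    # callable: request -> result
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def query(self, request):
        if request in self.cache:
            self.hits += 1          # answered locally, no upstream traffic
        else:
            self.misses += 1        # first time: one upstream round trip
            self.cache[request] = self.upstream(request)
        return self.cache[request]
```

Chaining nodes (`leaf = QueryCache(root.query)`) mimics the tree of proxies: however many clients ask the leaf the same question, the server is contacted once.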
|First CORAL release with read-only CORAL server support. COOL and CORAL read-only tests pass. Start of experiment validation.||Completed.||This milestone, previously expected for October 2008, has been
reduced in scope to the release of the read-only functionality. The releases
of more complete CORAL server software with secure authentication and full
write functionality have been rescheduled as milestones POOL-17 and POOL-18.
Progress was slow in 2008. An internal review of the software was held in December 2008, leading to a new architecture design. This has significantly sped up the progress of development in 2009, which restarted in Q1 when resources were freed after the LCG_56 release. An implementation with read-only functionalities was completed and passed COOL and CORAL read-only tests by the end of Q1.
During Q2 2009, the software was further improved with several fixes and enhancements, until it successfully passed the basic functionality tests against the COOL, geometry and trigger databases relevant to the Atlas HLT use case. This version of the software has been released in CORAL 2.3.1 (June 2009) and will now be installed in the Atlas control room for more complete performance and functionality tests of the HLT use case (POOL-15).
A locking issue when Oracle connection sharing is enabled has been observed during the tests. Its resolution has been rescheduled as a separate milestone POOL-24. This is very important to fully exploit the multiplexing capabilities of the CORAL server in the general case, but is irrelevant for the Atlas HLT use case which adds an intermediate caching proxy.
|Release of CORAL Server with secure authentication. All functional tests pass.||In progress.
|This is a rescheduled milestone, previously expected for October
2008 as part of POOL-16. A first
implementation of secure data transmission and grid certificate
authentication using VOMS and ssl was prepared in Q1 2009, using the new
design for component architecture. During Q2, the implementation was
completed with the addition of VOMS-based authorization and of a tool for
maintaining a list of connections and credentials.
The package has not yet been released because its external dependencies and integration with LCGCMT still need to be finalised in the wider context of LCG AA dependencies on Grid packages. There was no progress on these issues in Q3 or Q4. The CORAL server software was developed and tested (on SLC4 and SLC5) using a 1.9 VOMS package that uses the system version of ssl and does not depend on Globus. However, this may lead to incompatibilities with other Grid packages (like gfal) that on SLC4 can only be supported using the Globus version of ssl. It is likely that the secure CORAL server will be released either only on SLC5 using the no-Globus VOMS, or also on SLC4 using the Globus-based VOMS.
|Release of CORAL Server with full write functionality (DML and
DDL). All functional tests pass.
||Rescheduled.||This is a rescheduled milestone previously expected for October 2008 as part of POOL-16.|
|CORAL support for gcc4.3.||Completed.||The CORAL port to gcc4.3 was completed in Q4 2008 and is ready
to be released in LCG56. This required several API changes ('const int f()'
-> 'int f()') to fully comply with the gcc4.3 standard.
|POOL support for gcc4.3 (except for build warnings).||Completed.||The POOL port to gcc4.3 was completed in Q1 2009 and released in
POOL 2.8.3 as part of LCG_56 (February 2009). In contrast to what was
incorrectly claimed in the Q1 2009 report, this did include some changes in
the public API required to make it fully compliant with the gcc4.3 standard
('const int f()' -> 'int f()').
Some implementation changes are however still needed to get rid of a few pending warnings in the gcc4.3 build. This task has been rescheduled as POOL-23.
|CORAL support for MS VC9.||Completed.||The CORAL port to VC9 was completed in Q1 2009 and released in CORAL 2.2.0 as part of LCG_56 (February 2009).|
|POOL support for MS VC9.||Completed.||The POOL port to VC9 was completed in Q1 2009 and released in POOL 2.8.3 as part of LCG_56 (February 2009).|
|Remove gcc4.3 build warnings for POOL.||Completed.||This is a rescheduled milestone, previously included in POOL-20.
It consists of removing all build warnings caused by the stricter
gcc4.3 standard. This task has been renamed because, in contrast to what was
incorrectly claimed in the Q1 2009 report, no change was required in the
public POOL API. The relevant changes in the POOL implementation code and
build configuration have been completed in August 2009.
|POOL-24||30.09.09||Full support for Oracle connection sharing in the CORAL server.||Completed.||This is a rescheduled milestone, previously included in POOL-16.
Complete support for Oracle connection sharing is needed to fully exploit the
multiplexing capabilities of the CORAL server in the absence of an
intermediate caching proxy. To achieve this, the deadlock observed in the
CORAL Oracle plugin when connection sharing is enabled must be addressed.
The hang has been investigated and is now fully reproducible in queries against BLOB or CLOB columns from parallel sessions opened on multiple threads but sharing the same physical connection to Oracle. A workaround has been implemented in CORAL to avoid the Oracle OCI calls which cause the hang.
In parallel, the issue has been followed up with Oracle Support and is now confirmed as a bug in the Oracle 10.2.0.4 client libraries. The upgrade to the 11.2.0.1.0 Oracle client has been proposed for the next configuration.
|Performance optimizations in the CORAL LFC replica service.
||Completed.||Performance issues with the LFC replica service were first
reported by LHCb during Q2 2009. A first patch to fix some of these problems
was included in CORAL 2.3.2 (July 2009). In parallel, LHCb implemented a
workaround in its production chain for reconstruction jobs, while additional
patches were being prototyped and tested.
The issue reappeared in Q1 2010 for LHCb user analysis jobs, to which the workaround for production jobs did not apply. A more complete analysis of the LFC server logs was performed by the CORAL team with the help of LFC developers, resulting in a better understanding of the root cause of the problem: the loops on LFC data performed by CORAL were not properly closed, keeping the threads busy until terminated by an LFC server timeout after 5 minutes. A fix for this specific issue, as well as several other minor performance improvements, was finally released for LHCb in CORAL 2.3.9 (April 2010), completing this task. One last patch, further halving the number of LFC server threads used by CORAL, was later added to the code and will be included in the next release for LHCb.
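The root cause described above — iterations over server-side data left open, keeping a server thread busy until a timeout — is a classic resource leak. A minimal Python sketch of the remedy, guaranteeing that every listing is closed promptly; all names here are hypothetical, not the LFC client API:

```python
# Hypothetical sketch: a server-side listing must be explicitly closed, or
# the server thread serving it stays busy until a timeout (5 minutes in the
# LFC case above). A context manager guarantees the close on every path.
class ServerListing:
    open_count = 0                       # server threads currently busy

    def __enter__(self):
        ServerListing.open_count += 1    # listing opened: thread occupied
        return iter([1, 2, 3])           # stand-in for server-side rows

    def __exit__(self, *exc):
        ServerListing.open_count -= 1    # listing closed: thread released
        return False

def read_replicas():
    # Closed immediately even on early return or exception, instead of
    # leaving the loop open and waiting for the server-side timeout.
    with ServerListing() as rows:
        return list(rows)
```

The same discipline applies regardless of language: in the C++ client the equivalent is closing the listing handle in a destructor (RAII) rather than relying on the server to time it out.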
|Monitoring tools for the CORAL server and CORAL server proxy.
|A new package CoralMonitor has been added during Q3 2009. This presently allows the collection of timing and other statistics from the CORAL server and client components and their dump to a csv file or their real-time visualization. More work is needed to allow fine-grained monitoring of individual resource-intensive requests, as well as the monitoring of the CORAL server proxy.|
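The kind of statistics collection described for CoralMonitor can be sketched as a small timing recorder that dumps csv. This is a hypothetical Python illustration of the idea, not the package's actual C++ interface:

```python
# Hypothetical sketch of a timing-statistics collector: wrap an operation,
# record how long it took per component, and dump the records as csv
# (the real CoralMonitor also supports real-time visualization).
import csv
import io
import time

class Monitor:
    def __init__(self):
        self.records = []    # (component, operation, seconds)

    def timed(self, component, operation, fn, *args):
        t0 = time.perf_counter()
        result = fn(*args)
        self.records.append((component, operation,
                             time.perf_counter() - t0))
        return result

    def dump_csv(self, stream):
        writer = csv.writer(stream)
        writer.writerow(["component", "operation", "seconds"])
        writer.writerows(self.records)
```

The finer-grained monitoring mentioned above would amount to recording per-request keys alongside the component name, so individual resource-intensive requests can be singled out.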
|POOL-27||31.07.09||Install new Oracle client libraries to fix the "Cannot
allocate an OCI environment handle" intermittent failure in CORAL
||Completed.||Several CORAL users have reported intermittent failures of their
applications with the "Cannot allocate an OCI environment handle"
error message, since the end of 2007. This problem has been difficult to
reproduce because it does not happen all the time (e.g. during an Atlas data
challenge it only affected 2% of the jobs at a single Grid site). The problem
was reported to Oracle Support and was eventually identified as a bug in the
Oracle 10.2.0.4 client libraries. A patch for Linux was received and new
"10.2.0.4p1" libraries were installed for the LCG56c configuration,
including CORAL 2.3.2 (July 2009). The patch is also included in the 11.2
libraries used for the LCG_57 configuration, including CORAL 2.3.3 (September 2009).|
|POOL-28||31.08.09||Deployment of a CORAL server instance for executing the nightly
CORAL and COOL tests.
||Completed.||A CORAL server instance dedicated to the nightly tests
(coralserver.cern.ch) has been deployed in July 2009. Simple R/O tests have been
executed against it since August 2009, within both the CORAL and COOL nightly
test suites. More tests will be added with time (including R/W tests when
this functionality is implemented for milestone POOL-18).
|POOL-29||28.02.10||Fast merge of POOL files.||Completed.
||Support for fast merge of POOL files has been requested by
ATLAS. The implementation of this feature was released in POOL 2.9.3
(September 2009) and was then tested and validated by ATLAS during Q4 2009.|
|CORAL API for Oracle partitioning.
|Deployment of a general-purpose CORAL server instance for CERN
||In progress. Rescheduled.||A Linux box in the CERN Computer Center was requested by the PF team. This has been allocated in April 2010 and needs to be configured before a CORAL server instance is installed on it.|
|POOL-32||31.12.09||Reduce the high memory footprint of CORAL-based applications
caused by the Oracle instant client libraries.
||All new versions of CORAL released in Q4 2009 have been built
using the "light" version of the Oracle instant client library to
reduce the memory footprint of CORAL-based applications (which was especially
a problem for ATLAS). The full instant client had been needed to support the
character set previously used by the devdb10 Oracle server, which was finally
moved in Q4 2009 to a character set supported by the light instant client.|
|POOL-33||30.06.10||Install new Oracle 11g client libraries to fix the problems
reported by ATLAS on AMD multi-core processors.
||In February 2010, ATLAS reported problems with the 32bit Oracle
11.2 client software on Grid sites using 64bit AMD multi-core processors,
such as Opteron quad-cores. As the issue was preventing the NDGF Tier-1 from
participating in ATLAS reprocessing and no fix could be obtained in a timely
manner from Oracle support, ATLAS moved back to the 10.2 Oracle client in the LCG_56e configuration (February 2010).
In April 2010, a new patch was finally received from Oracle support and successfully passed the tests performed by the PF team on an ATLAS AMD multi-core node in Slovenia. The patch was included in the 11.2.0.1.0p1 version of the Oracle client software for the LCG AA projects. Thanks to this patch, ATLAS could upgrade back to the Oracle 11g client in the CORAL 2.3.10a release in the LCGCMT_59 configuration (July 2010).
|Improve transaction handling when CORAL automatically reconnects to a database after a network glitch.||In progress. Rescheduled.
||A useful feature of CORAL is that it automatically tries to
reconnect to the database server when the connection is lost, for instance
because of a network glitch. However, many users have reported problems in
the way CORAL handles the broken transactions in this case. Several
improvements were added to CORAL already in Q4 2009, for both R/W
transactions and the default 'serializable' R/O transactions. Additional
problems have been reported by ATLAS HLT in Q1 2010 in a third category of
use cases specific to the CORAL server, involving 'non-serializable' R/O transactions.
A first patch for the problems reported by ATLAS in Q1 2010 has been included in the CORAL 2.3.10a release in July 2010. Additional improvements are needed to complete this patch in Q3 2010.
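The safe way to handle a broken transaction after an automatic reconnect is to replay the whole unit of work on a fresh session rather than resume it mid-way. A minimal Python sketch of this retry pattern, with hypothetical names (not the CORAL API):

```python
# Hypothetical sketch: after a connection is lost mid-transaction, the
# broken transaction cannot be resumed; the whole unit of work must be
# replayed on a freshly opened session.
class ConnectionLost(Exception):
    """Raised when the database connection drops, e.g. a network glitch."""

def run_in_transaction(connect, work, retries=1):
    """connect(): opens a new session; work(session): full transaction
    body, including the commit. Replays `work` after a glitch."""
    for attempt in range(retries + 1):
        session = connect()
        try:
            return work(session)     # commit happens inside work()
        except ConnectionLost:
            if attempt == retries:
                raise                # give up after the last retry
```

Replaying the complete body is what makes the pattern correct for both R/W and R/O transactions: no statement from the failed attempt is assumed to have survived.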
|POOL-35||30.06.10||Solve the inconsistency in libexpat versions used by
FrontierAccess and other AA software packages.
|An inconsistency between the libexpat.so versions used by the
Frontier client and other LCG AA libraries was identified as the cause of
some ATLAS job failures observed in Q2 (for instance in conditions POOL file
access via gfal at SARA). The problem was solved in the CORAL 2.3.9a release
in LCG56g (May 2010), primarily by upgrading to a new version 2.7.14 of the
frontier_client library, prepared by the Frontier team. The new library is
now linked to libexpat.so.0 (used by all other LCG AA projects) instead of libexpat.so.1.|
|POOL-36||30.06.10||Performance optimizations in the Frontier plugin for ATLAS.||New.
|Frontier has been used by ATLAS for conditions data access,
mainly for analysis jobs at T2 sites, since Q4 2009. While the switch to
Frontier in ATLAS client applications has been very easy as this backend was
already fully supported both in CORAL (for CMS) and in COOL (just in case one
experiment needed it), Frontier had never been tested against the ATLAS
production use cases and a few ATLAS-specific functional fixes and
performance optimizations have been requested in both CORAL and COOL. The
latest performance optimizations, avoiding duplicated queries in the ATLAS
use cases, have been included in the CORAL 2.3.10a release (July 2010).
|Summary of PF Progress in Q2 2010 (report date 09.07.2010)|
Two new releases of all PF projects have been prepared for ATLAS in Q2 2010. The main motivation for LCG_56g (May 2010) was the CORAL upgrade to version 2.7.14 of the frontier_client library, to fix a wrong libexpat.so dependency which had triggered the failure of some ATLAS jobs accessing conditions data on the Grid. The LCG_59 release (July 2010) was motivated by major upgrades in many external dependencies (including ROOT, Python and Oracle), functionality enhancements in POOL collections and bug fixes in CORAL. Functionality enhancements are being prepared for ATLAS offline users also in CORAL and COOL, but their release has been postponed because they involve API extensions which would break binary compatibility with the ATLAS online software in the HLT system. A new LCG_58d is also being prepared (July 2010) for LHCb, using for all PF projects the same code base used in the ATLAS LCG_59 release. Following the upgrade of ATLAS to the same ROOT 5.26 code base as LHCb, the only difference between these two configurations is that ATLAS and LHCb will use Python 2.6 and 2.5, respectively. In particular, both LCG_59 and LCG_58d use a new '11.2.0.1.0p2' version of the Oracle client software, which completes the SELinux fixes for the 32 and 64 bit versions of the OCI and OCCI libraries and also contains the fix for the Oracle 11g bug on AMD multicore hardware, which had triggered the temporary downgrade of ATLAS to the 10g client in Q1 2010.
The use of Frontier for conditions data access in ATLAS (mainly for analysis jobs at T2 sites) is steadily increasing. While the integration of Frontier into the ATLAS client software in Q4 2009 had been very smooth as this backend was already fully supported both in CORAL (for CMS) and in COOL (just in case one experiment should need it), Frontier had never been tested against the ATLAS production use cases and a few ATLAS-specific optimizations have been requested. The latest such improvements to the FrontierAccess plugin are included in the LCG_59 release of CORAL.
One new developer joined the Persistency team in IT. His main responsibility will be the maintenance of POOL.
|Summary of PF Progress in Q1 2010 (report date 30.04.2010)|
New versions of all PF projects have been released in Q1 2010 for five new configurations. The two configurations LCG_56e (February 2010) and LCG_56f (April 2010), based on ROOT 5.22, were requested by ATLAS: LCG_56e, in particular, was motivated by the need to downgrade CORAL from the Oracle 11g to the Oracle 10g client as a workaround for an issue observed on AMD multicore processors on the ATLAS Grid sites. The other three configurations LCG_58a (February 2010), LCG_58b (March 2010) and LCG_58c (April 2010), based on ROOT 5.26, were requested by LHCb: LCG_58c, in particular, was motivated by the inclusion of a performance fix of the CORAL LFC replica service, needed by LHCb. High-priority bug fixes in ROOT for dcache and xrootd were included in both the ATLAS and LHCb configuration branches. The five new configurations include several other enhancements and bug fixes in all PF projects, such as a CORAL fix for Oracle 11g database servers needed by ATLAS at DESY, as well as other fixes in POOL collections, in CORAL transactions, in COOL retrieval of CLOBs and in COOL handling of NULL string payloads. The code base of all three PF projects has also been ported to the osx10.6 platform (also on 64bit, for the first time on osx) and to the new gcc4.5 and llvm compilers. There was little progress instead in Q1 2010 on the enhancement of the CORAL server software, due to the workload from the releases and more generally from experiment support during data taking.
The new bug in the Oracle 11g client on AMD multicore, as well as the long-standing issues caused by OCI and OCCI text relocations on SLC5 with SELinux enabled, have been followed up with Oracle support. Three new patches (for AMD multicore, for SELinux in OCI linux64 and for SELinux in OCCI linux32) have been received and installed as a new '11.2.0.1.0p1' client. This is already used by the latest LHCb release, while ATLAS will only adopt it after fully validating the fix for AMD multicore.
|Summary of PF Progress in Q4 2009 (report date 25.01.2010)|
New versions of all PF projects have been released in Q4 2009 for the three new configurations LCG_57a (November 2009), LCG_56d (December 2009) and LCG_58 (January 2010). LCG_56d is based on ROOT 5.22 and was requested by ATLAS, while LCG_57a and LCG_58 are based on ROOT 5.24 and ROOT 5.26 respectively and were requested by LHCb. The three releases include several enhancements specific to PF projects, such as a COOL performance fix for CLOB data access and the CORAL move to the "light" version of the Oracle instant client, both requested by ATLAS. Reconnecting to an Oracle database after a connection glitch has been made more robust in CORAL, following many support requests of production users in the experiments at the time of the LHC startup. The POOL fast file merge feature implemented in an earlier release has also been validated by ATLAS during Q4 2009.
Progress was made in Q4 2009 on improving monitoring and performance for the CORAL server software, but these enhancements have not yet been fully tested, therefore their release and deployment has been postponed to avoid disruptions to the ATLAS online system.
New issues have been reported in the Oracle client libraries, caused by text relocations on SLC5 with SELinux enabled, and are being followed up with Oracle support. The port of CORAL to the icc compiler in Q4 2009 was useful to further investigate this problem, as the same symptoms have been observed in the CORAL libraries built using an old version of icc.
The main maintainer of POOL left the PF team in January 2010 as he moved from IT to lead another project in PH-SFT.
|Summary of PF Progress in Q3 2009 (report date 15.10.2009)|
|New versions of all PF projects have been
released in Q3 2009 for the two new configurations LCG_56c (July 2009) and
LCG_57 (September 2009). The latter is based on ROOT 5.24 and is used by
LHCb, while the former is based on ROOT 5.22 and is used by ATLAS and CMS,
which have expressed their intention not to migrate yet to the more recent
ROOT. It is therefore likely that the two branches will have to be maintained
in parallel for several months, which may imply the need to rebuild the same
code base of PF projects for the two different configurations.
The two releases include several enhancements specific to PF projects. The LCG_56c release features a new Oracle client library 10.2.0.4p1 for Linux (fixing a long-standing problem with Oracle client initialization reported by several CORAL users), as well as an optimization of the CORAL LFC replica service (used by LHCb). The LCG_57 release includes a new Oracle client library 11.2.0.1.0 for Linux (fixing both a long-standing incompatibility with SELinux on SLC5 and a blocking issue for connection multiplexing in the CORAL server), as well as substantial performance optimizations of the COOL test suite and the validation of COOL performance against Oracle 11g servers. Since Q3, the CORAL and COOL software is also being tested against a dedicated instance of the CORAL server in the nightlies.
The main achievement for PF in Q3 is the full production deployment and validation of the CORAL server and proxy software for the ATLAS system. Following initial tests in August using a dedicated test partition of the ATLAS online system, which were successful basically on the very first try, the software has been installed on the production ATLAS partition and is now used instead of M2O/DbProxy for the configuration of the high-level trigger. The system is now running smoothly and no problems have been observed.
|Summary of PF Progress in Q2 2009 (report date 05.07.2009)|
New versions of CORAL, COOL and POOL have been released in Q2 2009 against the external dependencies defined by the two new configurations LCG_56a (April 2009) and LCG_56b (June 2009). The main achievement of PF development during this quarter has been the first release of the CORAL server components with read-only functionalities, which passed the first offline validation tests for the Atlas HLT use case and will now be tested more extensively in the experiment control room. An enhanced version supporting secure authentication using Grid certificates and VOMS authorization is undergoing some final tests and configuration fixes and will be included in an upcoming release. New features have also been added to POOL (improved collections) and COOL (new relational schema supporting conditions data payload stored in a different table from IOV metadata). The migration from CVS to SVN has been postponed as the external pressure to complete this task has decreased.
|Summary of PF Progress in Q1 2009 (report date 25.04.2009)|
|The main achievement of the PF projects in Q1
2009 has been the release of new versions of CORAL, COOL and POOL for the
LCG_56 configuration (February 2009), involving major upgrades in the ROOT,
Boost and CMT versions, a new CMT tag policy, and support for several new
platforms such as the gcc4.3 compiler on Linux, the VC9 compiler on Windows,
and the SLC5 Linux operating system. For the time being, support for Oracle
on SLC5 can only be provided if a special installation procedure is used to
bypass the SELinux security constraints for the Oracle client libraries.
The PF effort is presently focusing on the development of the CORAL server software components, which has restarted in February 2009 according to a new architecture design that has significantly sped up its progress. A first implementation with read-only functionalities is being validated for the Atlas HLT use case, and the addition of secure authentication using VOMS and openssl is underway.
|Summary of PF Progress in Q4 2008 (report date 16.01.2009)|
Several new releases with functionality and performance enhancements have been produced for all Persistency Framework projects in the "de-SEALed" LCG_55 release series. No more patches will be produced from now on for the earlier LCG_54 release series, based on SEAL, which has finally been abandoned during Q4 2008.
More recently, fewer feature enhancements have been possible as a large effort has been spent in all PF projects to prepare for the upcoming LCG_56 release (expected in early February 2009), involving major upgrades in the ROOT, Boost and CMT versions, a new CMT tag policy, and support for several new platforms such as the gcc4.3 compiler on Linux, the VC9 compiler on Windows, and the SLC5 Linux operating system.
An internal review of the CORAL server software has been held in December 2008, leading to a new architecture design that is expected to speed up the development progress (when resources are again available after the LCG_56 release). The PF projects are currently facing a temporary manpower shortage due to the departure of several developers.
|Summary of PF Progress in Q3 2008 (report date 17.10.2008)|
No new releases have been produced for any of the Persistency Framework projects since the LCG_55 release in June 2008. The experiments were getting ready for beam and did not require any major changes. On the other hand, substantial progress has been made in preparing for next year's production releases.
Several new functionalities and performance optimizations have been prepared for COOL and are ready to be released in the upcoming COOL 2.6.0 (November 2008). Significant progress was made also in the port of COOL to gcc4.3 and VC9. Progress was made in the development of the initial read-only implementation of the CORAL server, but a few functional and performance issues still need to be addressed before the software can be released. The addition of secure authentication and write functionalities have been postponed and rescheduled as separate milestones to be completed in 2009. A few enhancements of the POOL collections package have also been prepared and will be released in Q4 2008. For all three projects, some bug fixes have also been produced in the LCG_54-patches nightlies as not all the experiments have migrated to the "deSEALed" LCG_55 releases yet.
A new PF project leader was appointed in Q3 2008. There were also several other changes in PF manpower during the whole of 2008.
|Summary of PF Progress in Q1-Q2 2008 (report date 26.06.2008)|
The major change in the Persistency Framework software during this period was the removal of SEAL from the internal implementation and user API. This was first achieved in the LCG_55 release in June 2008, including COOL 2.5.0. Previously the last configuration based on SEAL (LCG_54) had also been released, using the new ROOT production series 5.18.
CORAL and POOL have reviewed their database-related tests and consolidated the test procedure together with SPI. The common part of the database-related set-up has been extracted into a common module, which simplifies running all appropriate tests against all available database back-ends. Some remaining configuration steps previously duplicated in individual tests are now done centrally, which simplifies the reconfiguration of the test set-up and also allows database tests to run in parallel for the different platforms.
A first prototype of the CORAL server has been produced, which handles a significant part of the read-only use cases (including nested queries). This prototype is under test by the ATLAS online developers who contributed a prototype for a caching proxy based on the new CORAL protocol.
A review of the experiment use of POOL components has been initiated to prepare for a cleanup of the Persistency Framework CVS repository, removing obsolete or unused components. The review did not reveal any large unused components at this point. All currently released components are still actively used, but the overlap in the use/dependency matrix between the experiments is limited at this point. Some possibilities were identified to reduce or avoid the use of the Data Service and the associated pool::Ref smart pointers in ATLAS and CMS, as is already the case for LHCb. More detailed discussions between POOL and the experiments will follow to plan for further consolidation in this area.
|Summary of PF Progress in Q4 2007 (report date 03.03.2008)|
Several CORAL, COOL and POOL releases have been produced on request by the experiments. The latest COOL releases include functional enhancements (channel selection by name, partial locking of HVS tags, simpler data model for user tags), performance optimizations (IOV retrieval from MV tags, IOV insertion with user tags, channel metadata retrieval in MV folders) and configuration improvements (removal of all warnings and errors from the nightly builds and tests, port to gcc4.1).
|Summary of PF Progress in Q3 2007 (report date 08.11.2007)|
A number of complete AA software configurations (LCG_53(x)) have been made available, mainly to correct bugs and provide the latest improvements to the experiments. The AA nightly build system is used and monitored regularly. A new procedure based on the nightlies is being put in place to speed up the delivery of validated software releases to the experiments.
The main achievements in COOL are the port to MacOSX/Intel and the enhancement of single-version folders to allow the insertion of IOVs in the past. Several bug fixes relevant to COOL have been released in the latest CORAL plugins for SQLite and Frontier.
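The effect of allowing insertion of IOVs (intervals of validity) "in the past" can be illustrated with a small sketch. This is an assumed model for illustration only (newest insertion wins, overlapping intervals are truncated), not COOL's actual implementation:

```python
def insert_iov(iovs, since, until, payload):
    """Insert the interval [since, until) into a sorted, non-overlapping IOV list.

    Assumed single-version semantics (illustrative only): the newest insertion
    wins, so overlapping parts of existing IOVs are truncated or dropped."""
    out = []
    for s, u, p in iovs:
        if u <= since or s >= until:
            out.append((s, u, p))           # no overlap: keep unchanged
        else:
            if s < since:
                out.append((s, since, p))   # keep the head before the new IOV
            if u > until:
                out.append((until, u, p))   # keep the tail after the new IOV
    out.append((since, until, payload))
    out.sort()
    return out


# Inserting in the past, over two existing IOVs, truncates both neighbours.
print(insert_iov([(0, 10, 'a'), (10, 20, 'b')], 5, 15, 'c'))
```

The point of the enhancement is precisely that such a back-insertion is accepted and the folder remains a consistent, non-overlapping sequence of intervals.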
|Summary of PF Progress in Q2 2007 (report date 26.07.2007)|
A number of complete software configurations (LCG_50, LCG_51 and LCG_52) have been made available, with special emphasis on consolidation and on making the releases as stable as possible for the scheduled LHC technical run.
Several CORAL, COOL and POOL releases have been produced at the request of the experiments. The main focus for CORAL and COOL was server-side improvement of COOL single-version multi-channel queries, to allow the experiments to run scalability tests with realistic conditions data workloads against the Tier 1 database replicas provided by the LCG 3D project. Two other important performance improvements were also implemented in COOL 2.2.0: the long-standing issue of server-side optimization of single-version multi-channel bulk insertion, as well as an improved API to minimise the time spent in client-side data manipulation. With the CORAL 1.9.0 release, support was added for OSX/PPC (OSX/Intel is expected to be added soon). Libraries for OSX/PPC were also built for the first time in COOL 2.2.0, although Oracle support is incomplete due to a bug in the Oracle 10.1 client library, and PyCool could not be ported because of missing support for PyROOT on PPC. The POOL framework released a significant update of the collection implementation, which became available with POOL 2.6.0. The CORAL and POOL project schedules are affected by replacements in the development team.
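The client-side part of a bulk insertion optimization can be sketched as simple batching: rows are buffered and each full batch is shipped to the server in a single call, trading a little memory for far fewer network round trips. This is an illustrative sketch with an invented interface, not COOL's actual API:

```python
class BulkInserter:
    """Client-side batching sketch (invented interface, not COOL's API):
    buffer rows and send each full batch to the server in one call."""
    def __init__(self, send_batch, batch_size=1000):
        self._send_batch = send_batch   # callback sending one batch of rows
        self._batch_size = batch_size
        self._rows = []

    def add(self, row):
        self._rows.append(row)
        if len(self._rows) >= self._batch_size:
            self.flush()                # auto-flush when the buffer is full

    def flush(self):
        if self._rows:
            self._send_batch(self._rows)
            self._rows = []


batches = []
inserter = BulkInserter(batches.append, batch_size=1000)
for i in range(2500):
    inserter.add((i, f"payload-{i}"))
inserter.flush()                        # push the final partial batch
print([len(b) for b in batches])        # -> [1000, 1000, 500]
```

In a real back-end the `send_batch` callback would map to a single multi-row statement execution, which is where the server-side round-trip savings come from.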
|Summary of PF Progress in Q1 2007 (report date 19.04.2007)|
The nightly build and test system for the LCG software stack has been put into production. All PF projects have been adapted to use the CMT build and configuration tool and have standardized on the way to run the tests using the QmTest tool.
CORAL improvements have been delivered, in particular for the online environments, such as thread safety and access to stored procedures. The recent COOL releases include a new API for user payload specification, a port to the AMD64 architecture and new 'locking' and 'dynamic replication' functionalities, as well as examples of how to use the CORAL LFC Replica Service.
|Summary of PF Progress in Q4 2006 (report date 11.01.2007)|
The Relational Abstraction Layer (CORAL) has had the first release of its authentication functionality based on LFC and of the Python API.
The Conditions Database (COOL) version 2.0.0 is almost ready for release and includes a new API for the Record Specification and the port to the AMD64 architecture. This new version is currently being integration-tested by ATLAS and LHCb, since it requires some changes in the DB schema and API. These new versions of the AA packages ROOT, Geant4, CORAL and COOL are currently being integrated by the experiments and will essentially be the versions, apart from possible bug fixes, that are going to be used for the startup of the LHC experiments.
|Summary of PF Progress in Q3 2006|
A couple of new complete software configurations (LCG_46 and LCG_47) have been made available during the last quarter in the Applications Area. They include the new releases of ROOT, CORAL, POOL and COOL packages, which are currently used by the experiments for the various data challenges.
|Summary of PF Progress in Q2 2006|
The POOL/CORAL project has been consolidating the generic RDBMS interface for Oracle, MySQL, SQLite and Frontier.
New functionality has been developed to improve the overall reliability of user applications with database back-ends. This
new functionality consists of database lookup by logical name, failover to other databases, connection pooling, and authentication
and monitoring facilities. In addition, the COOL project (conditions database) has been improving the versioning capabilities
through the use of tags and hierarchical tags.
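The combination of logical-name lookup, failover and connection pooling described above can be sketched as follows. This is an illustrative model with invented names, not CORAL's actual API:

```python
class ConnectionService:
    """Illustrative model (invented names, not CORAL's actual API): resolve a
    logical database name to an ordered list of physical replicas, reuse pooled
    connections, and fail over to the next replica when an attempt fails."""
    def __init__(self, lookup, connector):
        self._lookup = lookup      # logical name -> ordered list of physical URLs
        self._connect = connector  # physical URL -> connection (may raise OSError)
        self._pool = {}            # physical URL -> cached open connection

    def connect(self, logical_name):
        urls = self._lookup[logical_name]
        for url in urls:                    # reuse a pooled connection if any
            if url in self._pool:
                return self._pool[url]
        last_err = None
        for url in urls:                    # otherwise try each replica in order
            try:
                conn = self._connect(url)
            except OSError as err:
                last_err = err              # replica down: fail over to the next
                continue
            self._pool[url] = conn
            return conn
        raise ConnectionError(f"no replica of {logical_name!r} reachable") from last_err


attempts = []
def connector(url):
    attempts.append(url)
    if url == "db1.example.org":
        raise OSError("replica down")      # simulated outage of the first replica
    return f"session@{url}"

service = ConnectionService({"conditions": ["db1.example.org", "db2.example.org"]}, connector)
print(service.connect("conditions"))       # fails over to the second replica
```

Decoupling the logical name from the physical replicas is what lets an application stay unchanged while the deployment (replica list, credentials, monitoring) evolves underneath it.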
|Summary of PF Progress in Q1 2006|
The main activity during this quarter has been the preparation of the software releases that are going to be used in the various data challenges and combined test runs of the LHC experiments during this year. About half of the functionality of SEAL has been completely migrated to ROOT, and the experiments and the AA projects have made considerable efforts in adapting their software to use the packages that have been migrated. Preparation of a detailed plan has been started for the migration of the second half of the functionality. During the quarter many different releases and software configurations have been produced to help the experiments in the preparation of their production releases. POOL products such as CORAL and COOL are coming with new functionality requested by the experiments. The adaptation of the AA software to the AMD64 architecture is almost complete. Certification and preparation for the new Linux SLC4 have been carried out; the next releases will be made available for this new platform.
|Summary of PF Progress in Q4 2005|
The migration to the new Reflex library for the persistency framework (POOL) and Python scripting has been completed. Several iterations of POOL have been made available to the experiments to validate the changes. The final version of Reflex and Cintex has been released as part of ROOT 5.08 in December and will be used instead of the SEAL versions in the coming weeks. At the same time the corresponding components will be removed from the SEAL releases, which will be an important milestone towards the completion of the SEAL and ROOT merge. The first public release of the new re-engineered version of the relational database API package (CORAL) was made available. The adaptation of POOL and COOL (conditions database) to this new package is ongoing and will finish soon. This will be in time for the experiments to integrate it into their production software to be used in this year’s major data challenges.