Minutes by J. Knobloch, 20 January 2004
HTASC meeting, 2-3 October 2003, CERN
Present:
Tobias Haas (Chair), Thomas Kachelhoffer (France), Jürgen Knobloch (CERN and secretary), Milos Lokajicek (Czech Republic), Bjorn Nilsson (Denmark), Rainer Mankel (Germany), Francesco Forti (Italy), Els de Wolf (Netherlands), Nicanor Colino (Spain), Dave Bailey (UK)
Tobias started with remarks on the videoconferencing subgroup established at
Two years ago an internationalization of the HEPCCC had been proposed. A mandate for the new body, called iHEPCCC, has now been approved by ICFA. The first chairperson of iHEPCCC is Guy Wormser. He will select 21 members according to a predefined distribution by area. This change will have an impact on the role of HTASC and on a possible internationalization towards “iHTASC”.
This point was further discussed the following day (see below).
The minutes were approved
without changes.
The minutes
of HEPCCC written by D. Jacobs are available on the HEPCCC web.
Rainer explained the computing challenges after the upgrade from HERA-I to HERA-II, which will lead to a 4-5 fold luminosity increase. Although the increased luminosity would raise the annual number of events from 50 M to 250 M, it is expected that improved online event selection will allow limiting the number of events to about 100 M for both H1 and ZEUS.
DESY has followed the general move towards commodity computing for CPU and disk servers. They use STK Powderhorn tape libraries as mass-storage devices. The move from 50 GB cartridges to the 200 GB 9940 cartridges requires a new disk-caching layer with the corresponding middleware to shield users from tape-handling effects such as increased loading times. DESY uses dCache as mass-storage middleware, a joint development of DESY and FNAL.
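To illustrate the role of such a disk-caching layer, here is a minimal sketch — not DESY's actual dCache implementation; all class, method and file names are invented for illustration. The point is that the slow tape backend is touched only on a cache miss, so users see disk-speed access after the first staging.

```python
# Hypothetical sketch of a read-through disk cache in front of tape,
# illustrating how middleware such as dCache shields users from tape
# latency. Not DESY/FNAL code; all names are invented.

class TapeLibrary:
    """Stands in for a slow tape backend (e.g. a cartridge robot)."""
    def __init__(self, files):
        self.files = files

    def stage(self, name):
        # In reality this involves mounting a cartridge and seeking,
        # which can take minutes.
        return self.files[name]

class DiskCache:
    """Read-through cache: a file is staged from tape once, then
    served from disk on every subsequent read."""
    def __init__(self, tape):
        self.tape = tape
        self.cache = {}
        self.tape_reads = 0

    def read(self, name):
        if name not in self.cache:
            self.cache[name] = self.tape.stage(name)
            self.tape_reads += 1  # only a miss touches the tape robot
        return self.cache[name]

tape = TapeLibrary({"run42.evt": b"event data"})
cache = DiskCache(tape)
cache.read("run42.evt")   # first read: staged from tape
cache.read("run42.evt")   # second read: served from disk
assert cache.tape_reads == 1
```

In a production system the cache would also evict cold files and coordinate concurrent stage requests; the sketch keeps only the read-through behaviour.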
Update on CDF and D0 Planning for Run II (Frank Wuerthwein, FNAL) (transparencies)
Frank presented first
experience with the computing models for Run-II at Fermilab.
CDF and D0 currently have a total of 1 PB of data on tape. The experiments have jointly developed middleware called SAM (Sequential Access via Meta-data), providing grid-like access to detector and Monte Carlo data. They also use the caching software dCache, developed jointly with DESY.
Oracle on Sun systems has been chosen as the database for SAM meta-data as well as for calibration, run and trigger-table tracking, and for luminosity accounting.
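The idea behind metadata-driven data access can be sketched as follows. This is a hedged illustration of the concept only, not the real SAM interface; the catalogue entries, fields and function names are all invented. Users select files by physics metadata rather than by explicit file names or locations.

```python
# Hypothetical sketch of metadata-driven dataset lookup, in the spirit
# of SAM (Sequential Access via Meta-data). Not the real SAM API; all
# entries and names are invented for illustration.

catalog = [
    {"file": "d0_mc_ttbar_001.root",  "site": "FNAL", "type": "mc",  "process": "ttbar"},
    {"file": "d0_raw_run1234.root",   "site": "FNAL", "type": "raw", "process": None},
    {"file": "cdf_mc_ttbar_007.root", "site": "DESY", "type": "mc",  "process": "ttbar"},
]

def find_files(**constraints):
    """Return (site, file) pairs whose metadata match all constraints."""
    return [(e["site"], e["file"]) for e in catalog
            if all(e.get(k) == v for k, v in constraints.items())]

# A user asks for "all ttbar Monte Carlo", not for specific file names;
# the catalogue resolves this to concrete locations, possibly off-site.
hits = find_files(type="mc", process="ttbar")
assert len(hits) == 2
```

Because the catalogue answers with locations, the same query transparently covers files held at remote sites, which is what makes the access "grid-like".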
Work is underway to converge
towards a common system with CMS using the Grid following the Open Science Grid
(OSG) initiative. This will also allow making better use of off-site facilities
which are currently mainly used for
Alan explained the status of HEPiX/HEPNT
summarizing its history and explaining its organizational structure. Meetings
take place twice a year alternating between Europe (in Spring)
and
Concluding, Alan confirmed continuing interest by the attendees and their funding labs. HEPiX/HEPNT plays a role complementary to CHEP, which does not take place often enough to address urgent questions such as security. The HEPiX mailing list is actively used to cover even more urgent issues.
In the discussion, the recent change in support and release
policy by RedHat was raised. Alan announced that
major HEP labs are meeting RedHat representatives to
find an affordable and acceptable solution.
Asked by Tobias why HEPiX was so successful, Alan said that the key to success was self-organization and that the workload was spread such that no single individual was overloaded.
We resumed the discussion on the future of HTASC in the light of the internationalization of HEPCCC towards iHEPCCC. Tobias summarized the discussion in a document reproduced here:
Summary of HTASC Discussion on the Future of HTASC and
iHTASC
First, it was discussed how HTASC currently functions:
· HTASC discusses an extremely broad range of topics in HEP computing.
· Members of HTASC are directly involved at the technical level and therefore discuss topics directly at the technical level with invited expert speakers.
· In addition to topics referred to it by HEPCCC, HTASC picks up topics independently.
· HTASC serves as a forum for information exchange on all topics within HEP computing.
It is the view of the members of HTASC that HTASC functions well in its current mode of operation and serves an important need within the community. It was the unanimous view of the members present at the discussion that HTASC should not be disbanded for the benefit of iHTASC.
Secondly, the discussion turned to what a future iHTASC should look like. Again, it was the unanimous opinion of the members present that a future iHTASC should probably work in a fashion very similar to how HTASC works now. Therefore HTASC should be expanded and merged into iHTASC by adding 5 new members for each of the other 2 major regions and 3 members for the so-called ‘Region 4’ countries. This would add 13 new members to the existing 19 members.
DESY now has a lab-wide spam and virus filter in operation, providing a solution for UNIX as well as Windows mail servers. Viruses have been successfully blocked by a combination of the email virus filter, the firewall and the enforcement of virus scanners.
DESY has opted to base its future Linux systems on SuSE 8.2. The initially favored RedHat was discarded because of the short support time and the cost of the enterprise version.
GridKa now has all compute nodes in water-cooled racks and has upgraded the disk space to 93 TB. The choice of a Linux distribution is being discussed.
CERN (Jürgen Knobloch) (transparencies)
Jürgen re-used slides shown the previous day by W. von Rüden and J-M Jouanigot at the FOCUS meeting. CERN is enforcing a number of security actions: AFS password expiry, requiring hardware-address registration (in particular for portables), closing off-site ftp, and establishing rules for systems connecting to the CERN network.
A proposal for further CASTOR software development has been well received by the users and the operations team. Support for external installations is being discussed in HEPCCC.
The persistency framework project POOL has been
successfully integrated into the ATLAS and CMS frameworks. CMS has already successfully
stored 700k simulated events in POOL. The use of Oracle databases for
physics-related applications is increasing.
The central data recording at CERN was successful with
the main users COMPASS (250 TB in 2003) and NA48 (115 TB).
The deployment of grid middleware LCG-1 to external sites is progressing
rapidly. The next version LCG-2 will be used for the data challenges of the LHC
experiments in 2004.
The IS group is running pilot services on web storage and web access to the Windows DFS file system, as well as on Windows terminal services.
Thomas summarized the CPU and storage capacity at the IN2P3 computing center.
The WAN connectivity is now at the Gbps level, reaching 2.5 Gbps and 10 Gbps to Renater and the GEANT backbone, respectively.
The center runs a large number of services to satisfy the needs
of specific experiments – in particular Objectivity for BaBar
and SAM for D0.
József described recent developments in the KFKI campus infrastructure: the backbone has been upgraded to Gigabit Ethernet and IPv6 has been deployed. All incoming mail traffic is forced through gateways filtering for spam and viruses.
The RMKI grid at KFKI now runs 50 CPUs and 1.8 TB of disk, extending to 150 CPUs and 22 TB of disk by the beginning of 2004. The Hungarian institutes have signed the EGEE project proposal.
Will provide reports next time – in particular the