J. Knobloch, 20th January 2004

Minutes of the XXV HTASC Meeting

2-3 October 2003, CERN

 

Present:

Tobias Haas (Chair), Thomas Kachelhoffer (France), Jürgen Knobloch (CERN and secretary), Milos Lokajicek (Czech Republic), Bjorn Nilsson (Denmark), Rainer Mankel (Germany), Francesco Forti (Italy), Els de Wolf (Netherlands), Nicanor Colino (Spain), Dave Bailey (UK)

 

Introduction (Tobias Haas) (transparencies)

 

Tobias started with remarks on the videoconferencing subgroup established at Pisa and chaired by Christian Helft. Its mandate and membership have in the meantime been finalized (document).

Two years ago an internationalization of the HEPCCC had been proposed. A mandate for the new body, called iHEPCCC, has now been approved by ICFA. The first chairperson of iHEPCCC is Guy Wormser. He will select 21 members according to a predefined distribution by area. This change will have an impact on the role of HTASC and a possible internationalization towards “iHTASC”. This point was further discussed the following day (see below).

Approval of Minutes

The minutes were approved without changes.

 

Report from HEPCCC (Tobias Haas) (transparencies)

The minutes of HEPCCC written by D. Jacobs are available on the HEPCCC web.

 

 

Update on H1 and ZEUS Planning for Run II (Rainer Mankel, DESY) (transparencies)

Rainer explained the computing challenges after the upgrade from HERA-I to HERA-II, which will bring a 4-5 fold luminosity increase. Although the higher luminosity would raise the annual number of events from 50 M to 250 M, it is expected that improved online event selection will limit the number of events to about 100 M for both H1 and ZEUS.

DESY has followed the general move towards commodity computing for CPU and disk servers. They use STK Powderhorn tape libraries as mass-storage devices. The move from 50 GB cartridges to the 200 GB 9940 cartridges requires a new disk-caching layer with corresponding middleware to shield users from tape-handling effects such as increased loading times. DESY uses dCache as mass-storage middleware, a joint development of DESY and FNAL. The need for Monte Carlo production grows with the luminosity increase. While ZEUS uses the distributed Funnel system at many sites, H1 has concentrated production at two sites: DESY and Dortmund. Both experiments are considering a move to grid-based production. For their analysis systems, both experiments make use of ROOT.

 

Update on CDF and D0 Planning for Run II (Frank Wuerthwein, FNAL) (transparencies)

Frank presented first experience with the computing models for Run II at Fermilab. CDF and D0 currently have a total of 1 PB of data on tape. The experiments have jointly developed middleware called SAM (Sequential Access via Meta-data), which provides grid-like access to detector and Monte Carlo data. They also use the caching software dCache, developed jointly with DESY. Oracle on Sun systems was chosen as the database for SAM meta-data as well as for calibration, run and trigger-table tracking, and for luminosity accounting.

Work is underway to converge towards a common system with CMS using the Grid, following the Open Science Grid (OSG) initiative. This will also allow better use of off-site facilities, which are currently used mainly for Monte Carlo production.

Status of HEPiX/HEPNT (Alan Silverman, CERN) (transparencies)

Alan explained the status of HEPiX/HEPNT, summarizing its history and explaining its organizational structure. Meetings take place twice a year, alternating between Europe (in spring) and North America (in autumn), with about 100 participants. The upcoming meeting on 20-24 October 2003 will take place at TRIUMF, Vancouver. Apart from 3 days of normal HEPiX-HEPNT business, there will be 1.5 days of security sessions organized by the “Large Systems Special Interest Group”, which focuses on topics related to large clusters.

In conclusion, Alan confirmed the continuing interest of the attendees and their funding labs. HEPiX/HEPNT plays a role complementary to CHEP, which does not take place often enough to address urgent questions such as security. The HEPiX mailing list is actively used to cover even more urgent issues.

In the discussion, the recent change in support and release policy by RedHat was raised. Alan announced that major HEP labs are meeting RedHat representatives to find an affordable and acceptable solution.

Asked by Tobias why HEPiX was so successful, Alan said that the key to success was self-organization and that the workload was spread such that no single individual was overloaded.

Future of HTASC/HEPCCC

We resumed the discussion on the future of HTASC in the light of the internationalization of HEPCCC towards iHEPCCC. Tobias summarized the discussion in a document, reproduced here:


Summary of HTASC Discussion on the Future of HTASC and iHTASC

 

On 3 October 2003, during its regular XXVth session at CERN, HTASC held a discussion on the future of HTASC and its relationship with the planned international advisory body iHTASC.

 

First, it was discussed how HTASC currently functions:

·  HTASC discusses an extremely broad range of topics in HEP computing.

·  Members of HTASC are directly involved at the technical level and therefore discuss topics at that level with invited expert speakers.

·  In addition to topics referred to it by HEPCCC, HTASC picks up topics independently.

·  HTASC serves as a forum for information exchange on all topics within HEP computing.

It is the view of the members of HTASC that HTASC functions well in its current mode of operation and serves an important need within the community. It is the unanimous view of the members present at the discussion that HTASC should not be disbanded for the benefit of iHTASC.

 

Secondly, the discussion turned to what a future iHTASC should look like:

Again, it was the unanimous opinion of the members present that a future iHTASC should probably work in a fashion very similar to how HTASC works now. Therefore, HTASC should be expanded and merged into iHTASC by adding 5 new members for each of the other 2 major regions and 3 members for so-called ‘Region 4’ countries. This would add 13 new members to the existing 19 members.


Site Reports

Germany (Rainer Mankel) (transparencies)

DESY now has a lab-wide spam and virus filter in operation, providing a solution for UNIX as well as Windows mail servers. Viruses have been successfully blocked by a combination of the email virus filter, the firewall, and the enforcement of virus scanners.

DESY has opted to base its future Linux systems on SuSE 8.2. The initially favored RedHat was discarded because of the short support period and the cost of the enterprise version.

GridKa now has all compute nodes in water-cooled racks and has upgraded the disk space to 93 TB. The choice of a Linux distribution is being discussed.

 

CERN (Jürgen Knobloch) (transparencies)

Jürgen re-used slides that had been shown the previous day by W. von Rüden and J-M Jouanigot at the FOCUS meeting. CERN is enforcing a number of security measures: AFS password expiry, mandatory hardware address registration (in particular for portables), closing off-site FTP, and rules for systems connecting to the CERN network.

The proposal for further CASTOR software development has been well received by the users and the operations team. Support for external installations is being discussed in HEPCCC.

The persistency framework project POOL has been successfully integrated into the ATLAS and CMS frameworks. CMS has already successfully stored 700k simulated events in POOL. The use of Oracle databases for physics-related applications is increasing.

The central data recording at CERN was successful, with the main users COMPASS (250 TB in 2003) and NA48 (115 TB).

The deployment of the grid middleware LCG-1 to external sites is progressing rapidly. The next version, LCG-2, will be used for the data challenges of the LHC experiments in 2004.

The IS group is running pilot services for web storage and web access to the Windows DFS file system, as well as for Windows Terminal Services.

France (Thomas Kachelhoffer, CC-IN2P3) (more information) (transparencies)

Thomas summarized the CPU and storage capacity at the IN2P3 computing center in Lyon. They now have about 900 processors running Linux, mostly RedHat 7.2. They also run 42 Sun Ultra 60 machines and 19 AIX CPUs. They have 6 STK silos with a total of 36000 slots and a DLT robot with 4000 slots. The 60 TB of disk space is used for AFS, HPSS cache, Objectivity, Oracle, Xstage, and NFS.

The WAN connectivity is now at the Gb/s level, reaching 2.5 Gb/s to Renater and 10 Gb/s to the GEANT backbone.

The center runs a large number of services to satisfy the needs of specific experiments – in particular Objectivity for BaBar and SAM for D0.

 

Hungary (József Kadlecsik, KFKI, Budapest) (transparencies)

József described recent developments on the KFKI campus infrastructure: the backbone has been upgraded to Gb Ethernet and IPv6 has been deployed. All incoming mail traffic is forced through gateways filtering for spam and viruses.

The RMKI grid at KFKI now runs 50 CPUs and 1.8 TB of disk, to be extended to 150 CPUs and 22 TB of disk by the beginning of 2004. At the Budapest center, the LCG-1 software has been installed; the site is planned to be part of the LCG certification testbed.

Hungarian institutes have signed the EGEE project proposal.

Other countries

The other countries will provide reports next time; in particular, the UK is considering a more extended presentation.

Next Meeting:

12-13 February 2004 at CERN (Room 40-R-D10)