ATLAS Level-1 Calorimeter
THURSDAY, 4 JULY, MORNING SESSION
Informal online software discussion
THURSDAY, 4 JULY, AFTERNOON SESSION
THURSDAY, 4 JULY, EVENING
Management Committee meeting
FRIDAY, 5 JULY, MORNING SESSION
FRIDAY, 5 JULY, AFTERNOON SESSION
SATURDAY, 6 JULY, MORNING SESSION
These minutes are based on summaries furnished by the individual speakers, with some material on discussions added by Eric Eisenhandler. All slide files are in pdf format.
THURSDAY, 4 JULY 2002, AFTERNOON SESSION
Calorimeter trigger offline simulation - Ed Moyse (slides)
Ed summarised the current status of TrigT1Calo: the e.m./tau ROD simulation is complete, as is the CTP integration. The jet trigger is complete but not yet fully tested. A new RoIDecoder class has been introduced which returns coordinates from RoI words. The energy trigger has been started, and will be finished by the end of July. He also summarised the objects produced by TrigT1Calo and gave an overview of their format.
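As an illustration (not taken from the talk), decoding coordinates from an RoI word amounts to extracting bit fields from a 32-bit word. The field positions and widths below are invented for the sketch; they do not reflect the real TrigT1Calo RoIDecoder or the actual RoI word format.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical RoI word layout, invented purely for illustration:
//   bits 28-31 crate, 24-27 module, 22-23 local eta, 20-21 local phi.
struct RoICoord {
    unsigned crate, module, localEta, localPhi;
};

class RoIDecoderSketch {
public:
    RoICoord coord(uint32_t roiWord) const {
        RoICoord c;
        c.crate    = (roiWord >> 28) & 0xFu;
        c.module   = (roiWord >> 24) & 0xFu;
        c.localEta = (roiWord >> 22) & 0x3u;
        c.localPhi = (roiWord >> 20) & 0x3u;
        return c;
    }
};
```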
The simulation must be validated by re-doing TDR plots; Alan will try to help. Nick said that then the simulation should change to implement the new detector description.
For simulation of the analogue pulses, noise, etc., Nick said that we need to understand both coherent and incoherent noise. We need to help by talking to the people doing this; Alan will act as the contact from our side. The TileCal group have given the job of pulse simulation to Ed Frank; a name is expected soon from the liquid-argon group.
Daniel and Åsa described their work performed so far on the JEP simulation code, using C++ and Together. They have designed the majority of the classes to correspond to hardware on the board, such as individual FPGAs or LVDS cables. The base classes and general design have been based on Steve Hillier's work on the Cluster Processor Module simulation. They are currently performing flow tests to see if input data flows through the program and is written to an output file. The main structure and hierarchy of the classes are finished. Some new classes need to be made and the existing classes expanded before the final release. The board-specific algorithms will be added in the last stage of the development. They plan to put the algorithms in a separate class to make modifications of the code easier.
Secondary RoIs - Alan Watson (slides)
The distinction between "primary" and "secondary" RoIs was reviewed. In a typical trigger menu, some thresholds may be used only for creating secondary RoIs, i.e. will not contribute to the level-1 decision. Nevertheless, the multiplicities of objects passing such thresholds will be reported to the CTP (which will not use the information). This is not a problem provided we have sufficient thresholds, and this usage was included in estimates of the number of thresholds we would need.
Should a future trigger menu demand more thresholds than we anticipated, it should be possible (with firmware and software changes) to provide non-trigger RoIs which do not use up part of the "hit multiplicity" traffic to the CTP. More radical possibilities might also exist. At present, there is no need to believe that either will be needed. However, we should be sure that there is plenty of spare capacity in our algorithmic FPGAs to allow for future development of new ideas.
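For readers unfamiliar with the "hit multiplicity" traffic mentioned above, the reporting to the CTP can be sketched as a saturating count of objects over each programmed threshold. The 3-bit field width (saturating at 7) in this sketch is an assumption based on the TDR-era design, not something stated in the talk.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Saturating multiplicity for one threshold. Every programmed threshold
// occupies such a field in the traffic to the CTP, whether or not the CTP
// uses it in the level-1 decision; this is why thresholds used only for
// secondary RoIs still consume part of the CTP bandwidth. The 3-bit width
// (count saturates at 7) is an assumption for this sketch.
uint8_t multiplicity(const std::vector<int>& objectEt, int threshold) {
    int n = 0;
    for (int et : objectEt)
        if (et > threshold)
            ++n;
    return n > 7 ? 7 : static_cast<uint8_t>(n);
}
```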
LAr front-end electronics ASSO - Tony Gillman (slides)
The review was held at CERN on 17 June, chaired by M Nessi, and focused on issues such as QA, documentation, scheduling, installation and interfaces. Tony Gillman participated only by e-mail, and only about half of the 13 talks presented were made available before the review. Based on the available documentation, Tony sent 20 comments to Marzio, about 40% of them relating to the Level-1 interfaces. Several of these points were included in the draft recommendations.
Although the overall conclusion was positive, the draft report contained a large number of recommendations for further actions, of which Tony highlighted those most relevant to the trigger.
LAr Tower Driver and Tower Builder TDRs - Paul Hanke (slides)
Paul had been a reviewer for Technical Design Reviews of the liquid-argon e.m. calorimeter tower-builder board and the liquid-argon hadronic endcap tower-driver board. These took place at CERN on 13 June 2002. Both items form the final stage of the analogue signal chain for the trigger. They are situated in the front-end crates mounted on the detector. The conclusions from both reviews were summarised from the point of view of the trigger. The reviews proved to be a very useful forum for clearing up remaining uncertainties.
One slightly surprising point is that the timing of different "layers" can only be checked in the Preprocessor. Another thing to note is that the long twisted-pair cables have an impedance of 87 ohms, and not the expected 100 ohms. Finally, an unavoidable discontinuity in impedance in the cabling inside the cryostat gives a bump at varying places on the trailing edge of the pulses.
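A back-of-envelope check (not from the talk) of what the 87-ohm cables imply: the reflection at a step from 100-ohm to 87-ohm impedance follows the standard transmission-line formula and comes out at roughly 7% of the signal amplitude, with inverted polarity.

```cpp
#include <cassert>
#include <cmath>

// Standard reflection coefficient at a step between transmission-line
// impedances: gamma = (Z2 - Z1) / (Z2 + Z1).
// A negative value means the reflected pulse is inverted.
double reflection(double z1, double z2) {
    return (z2 - z1) / (z2 + z1);
}
```

For the 100-ohm to 87-ohm step this gives gamma = -13/187, i.e. about -0.07, a small but non-negligible contribution to pulse-shape distortion.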
Input connectivity document and TileCal cables - Murrough Landon (slides)
TileCal Cables: A meeting with TileCal people was recently held at CERN to discuss the signal cables from them to us. This was mainly prompted by an ATLAS review of cables to USA15. The TileCal group have now provided a document describing the cables, but it doesn't include details of pinouts yet.
The main development at this meeting was that Technical Coordination do not think there is enough space in the cable trays, especially from the extended barrel, to send the calorimeter and muon trigger signals on separate cables. It is therefore proposed to send both on a single 16-pair cable, which could be identical to those chosen for the LAr signals. The muon signals would be split off at a patch panel in USA15. After some discussion we have agreed to this proposal.
Cabling document: One of our actions from the ASSO last year was to document the details of all our input cabling. We chose also to include details of the bulk of our internal cabling. A first version of several spreadsheets was prepared by Steve and Murrough and sent to Bill Cleland for comments at the end of April. Bill responded a few weeks later, correcting some of our misunderstandings and providing a new document which describes the HEC and FCAL cabling in greater detail. Some of our spreadsheets have been updated with these corrections but this process has not been completed. The text of a document to include all the spreadsheet tables has also not been written yet.
It was noted in discussion that in order to proceed with purchase of the cables, we need to specify the connectors at the TileCal end as well as the length. We should also be sure we understand the grounding in this mixed-cable situation.
Note added by EE: our TileCal contact person is now Kerstin Jon-And of Stockholm.
TileCal receiver specifications - Eric Eisenhandler (slides)
Eric said that the U.S. DoE had now approved Pittsburgh to build the LAr receivers. They would also like to build the TileCal receivers, and are the only logical choice to do so. However, they need someone to write a specification for the required signal-handling. In the absence of analogue engineers, Eric will try to do this during the next few weeks.
The receivers should be as similar as possible to the LAr ones. This, plus the mixing of muon and calorimeter signals in the cables now agreed (see previous talk), mandates patch panels upstream of the receivers; these are our responsibility. We need to specify connectors, number of modules needed, etc. We should also decide whether we want a facility to monitor the analogue signals like the one used by the LAr people. More detailed questions, including some from Bill Cleland, include whether transformer coupling is acceptable (for the TileCal's monopolar pulses this could introduce a luminosity-dependent baseline shift), whether to reshape the pulses in any way, whether to integrate to remove high-frequency noise as is done for the LAr, range of gain values for conversion to transverse energy, limits on receiver noise and linearity, etc.
In discussion, the need to understand what 60 metres of cable does to the signals was stressed. It was also mentioned that Svante Berglund still comes in about once per week and should be consulted.
PPr-ASIC and MCM status and test plans - Ralf Achenbach (slides)
Ralf first showed some results on the ASIC heating, including a "movie" taken with an infrared camera.
A rather large set of specialised printed-circuit boards was developed and produced to build up a test facility for the Preprocessor's core components, i.e. the PPrASIC and PPrMCM. The ASICs will be tested in quantity on a wafer-probe station using a "master" MCM without an ASIC; the actual die under test (on the wafer) is accessed from the MCM via a needle-probe. A second application for the boards is the MCM production test, where the bonded MCM can be plugged in for functional testing before it is sealed.
Analogue inputs and software for MCM tests - Karsten Penno (slides)
Both test procedures (ASIC and MCM) described in the previous talk require a large amount of firmware and software. The functional blocks involving FPGAs of various types (XC4010, XCV300, XCV50E) were described. The illustration of data paths underlined the different input/output mechanisms, which all have to be implemented for a full functional test. The status of development was given and the line of work for the immediate future was outlined.
Status of PPM and other Preprocessor items - Paul Hanke (slides)
The ongoing work was described as a decisive step in getting the Preprocessor's functionality tested, since the system relies completely on the sub-components PPrASIC and PPrMCM. Other work on the system revolves around or follows the current test work. The next work items were listed and estimates were given of their schedule. Planning aims towards the joint slice test of all level-1 subsystems.
One important item is to test driving of fanned-out LVDS pulses using a Xilinx XCV50E. A very critical situation concerns the Rem-FPGA firmware, which needs to be reduced in size and taken over by somebody. Plans were also shown for evolving from the present test ROD to a full-size version; this is running late for the slice test.
In discussion, the lack of effort to work on online software for the slice test within our agreed overall framework was emphasised.
FRIDAY, 5 JULY 2002, MORNING SESSION
Summary of Management Committee Meeting - Eric Eisenhandler
The meeting was fairly short. The main items covered concerned the shortage of effort in hardware, firmware, online software, and offline simulation. Each group reported on its situation. The most serious problem concerns the online software, where testing of modules is beginning to be delayed by lack of test software, there is a serious lack of effort to work on HDMC, and the people doing the work are under a lot of pressure and could "burn out". The offline simulation is also critical, since it must be passed on to someone else soon.
We should try to have more publications in refereed journals; NIM takes a lot of effort and time so IEEE might be an easier route until we have some good test results.
The next joint meeting will be from 7 to 9 November in Birmingham; we assume the "usual" format but if the airlines have abandoned the "Saturday night" rule sufficiently by then we might make the meeting all day Thursday and Friday and not Saturday morning.
Future meetings sketched in are March 2003 in Mainz, and possibly July 2003 at Queen Mary.
New ATLAS trigger/DAQ organisation - Nick Ellis (slides)
Nick described the new setup, with three joint project leaders and an expanded Trigger/DAQ Steering Committee. One aim is to make the workings of the TDSG more visible to the community, with agendas and minutes widely circulated.
Nick also showed a preliminary version of the comments of the LHCC referees from the Comprehensive Review that had just taken place. There are no areas of concern that we are involved in.
Cluster Processor Module and test card status - Richard Staley (slides)
One fully assembled CPM, as shown in the slide photograph, has been undergoing tests. Some statistics were given: the PCB has 6 power-plane layers and 10 signal layers, which contain 0.1 mm wide tracks, and there are over 23,000 component pins or soldered connections on the module.
Clock distribution to all devices is running, and the problems with the CAN microcontroller overheating and geographical addressing, as reported in a recent telephone conference, have been resolved. An intermittent connection to a pin on the CAN microcontroller (the oscillator crystal) caused the part to malfunction and drive current onto the module's internal geographical address lines; re-soldering the pin cured both effects.
The VME interface is working without problems, with access to the ID, Firmware Revision, Control and Status registers. The onboard flash memory, which holds the FPGA configuration data, can be erased and re-programmed via VME.
All Serialiser FPGAs configure and are accessible from VME. However, two slides highlighted a problem with the current circuit: groups of FPGAs have their DONE pins tied together. In the light of experience it was discovered that the DONE pin, which signals the state of the configuration process within the FPGA, is also an input and stalls configuration until released externally. This is fine if all FPGAs configure correctly together, but gives poor fault tolerance, and is useless for development work where we may want to configure only a single part. Luckily this behaviour can be changed when producing the configuration bit-stream file. The next revision of the CPM hardware will buffer individual DONE signals so that each FPGA sees only its own DONE signal. The buffered signal will also be conditioned by a masking register before being sampled by the controller.
The mechanical stability of the module needs to be improved, but the current version has no provision for stiffening. Rigidity is improved by fitting the crate with longer guide rails, and shims will be fitted along the top and bottom edges of the module so that it is tightly gripped by the runners. The Birmingham crate now has all the power pins connected to the bus-bars, except for CPU slot 1, which is not used.
Test access to the module is difficult when it is inside the crate. At Birmingham we shall make a test-jig to hold a 9U module, providing power and control signals and thus allowing probe access to the pins of fine-pitch surface-mount devices on a running module.
The power units for the 9U processor crate have caused us some problems at Birmingham. The inrush current on power-up has occasionally caused an adjacent VME crate to reset and even power down. Bruce Barnett remarked that this may have happened also at RAL. The power unit contains two switch-mode power supplies, each capable of drawing up to 35A at switch-on, which is a typical value for this type of supply. A better solution would be to stagger the switch-on for these units, with the 3.3V supply enabled after the 5V. There were various comments about this, and Nick Ellis was concerned about the effects of simultaneously powering-up a large number of these supplies.
A list of tests for the immediate future was presented, which included generating real-time data and validating the transmission of all 160 Mb/s signals on the module.
Test cards will be made to ease the testing of backplane signals to and from various modules. One CMM slot adapter has been made; this, together with the DSS/GIO combination, will be used for checking the processor hit outputs, and five more PCBs will be ordered. One CPM slot adapter has been made; this, together with the DSS/GIO combination, will be used for driving the hit inputs of the CMM. Five CPM loop-back adapter PCBs will also be ordered; these allow the 160 Mb/s signals to be re-routed at the backplane and provide connection for a logic analyser.
The summary stated that steady progress was being made with validating the CPM design. There is a need for better probe access, and the 9U crate power-unit needs taming.
Cluster Processor Module test progress and plans - Gilles Mahout (slides)
A CPM board was received early this year and extensive tests have been performed. The test bench in the lab includes a Bit3 VME interface and a crate CPU. The Bit3 system enables us to switch off the 9U crate without shutting down the CPU; the CPU runs Red Hat Linux 7.2 with HDMC for hardware access.
Simple VME accesses have been performed and the motherboard/firmware IDs read back correctly. Downloading of the FPGAs from flash RAM has worked successfully; so far only the Serialiser chips have been loaded. The next step will be to download test vectors into the Serialiser dual-port RAM and play them back across the board, but the playback enable signal coming from the TTC controller does not seem to be delivered correctly. The problem is still under investigation; meanwhile we have modified the Serialiser firmware and implemented the playback function via a control bit. All these tasks have been performed with the help of new HDMC parts such as CPFpgaFlashRam. This part takes an external binary file created with the Xilinx design software and loads the code into the flash RAM. The code can also be read back from the RAM, to check that it is not corrupted, before it is transferred into the FPGA device itself. Note that although only the Serialisers have been loaded using this new part, the CP chips could also be loaded with it; use with other boards will require some modifications.
Another part has been written to download simple test vectors from an external ASCII file into the available RAM on the board. A final part needs to be implemented to perform I2C access to the TTCrx, according to the I2C controller scheme implemented in the CPM/CMM.
The next steps will be to make the playback memory work. The CP chips will then be loaded and the real-time path data tested together with its connectivity. Other firmware will be downloaded, checked and debugged if problems are met. If everything goes well, external signals will then be brought in via DSSs and the handling of LVDS inputs tested.
Jet algorithm implementation - Torbjörn Söderström (slides)
Torbjörn gave a presentation on the Jet algorithm implementation on behalf of himself and Anders Fern. Their design goal was to produce an algorithm capable of running at 80 MHz with a minimum of latency and FPGA resource usage. They accomplished this using 5-bit arithmetic for much of the design, as well as careful avoidance of unnecessary calculations.
The adder tree that builds the 2x2, 3x3 and 4x4 cluster sums was reduced from a four-step to a three-step process by putting two layers of adders between 80 MHz registers rather than one. Addition steps were staggered to minimise the number of values saved between registers, and redundant summations were eliminated from the design: for instance, the 2x2 summation requires only 95 adders rather than 135.
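As a sketch (not from the talk), the saving from eliminating redundant summations can be counted in software. The 6 x 10 grid size used below is an assumption, chosen only because it reproduces the quoted figures of 95 versus 135 adders for the 2x2 sums; the talk did not state the array dimensions.

```cpp
#include <cassert>

// Counting adders needed for all 2x2 window sums over a W x H grid of
// jet elements. There are (W-1)*(H-1) overlapping 2x2 windows.

// Naive: each window sums its 4 elements independently, using 3 adders.
int naiveAdders(int w, int h) { return 3 * (w - 1) * (h - 1); }

// Shared: form each horizontal pair sum exactly once ((W-1)*H adders),
// then one vertical addition of two pair sums per window
// ((W-1)*(H-1) adders). The pair sums are reused by vertically
// neighbouring windows, which is where the saving comes from.
int sharedAdders(int w, int h) { return (w - 1) * h + (w - 1) * (h - 1); }
```

For a 6-wide by 10-high array this gives 135 adders naively but only 95 with sharing.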
Local-maximum identification takes advantage of the boolean identity A < B = not(B ≤ A). This allows one comparator to be shared by two neighbouring RoI positions when identifying and declustering local maxima, reducing the number of comparators in this section from 256 to 160.
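The comparator-sharing trick can be illustrated in one dimension (a simplification of the real 2D window): a single comparison per neighbouring pair serves both positions, because one position needs the strict test and the other needs the negation of it. In 1D this halves the comparator count; the 256-to-160 figure quoted above is the 2D analogue, where edge effects change the ratio.

```cpp
#include <cassert>
#include <vector>

// 1D declustering sketch (illustrative only). Each neighbouring pair is
// tested by ONE comparator, lt = (sum[i] < sum[i+1]): position i needs the
// non-strict test sum[i] >= sum[i+1], i.e. !lt, while position i+1 needs
// the strict test sum[i+1] > sum[i], i.e. lt. So n-1 comparators suffice
// instead of 2*(n-1) if each position did its own comparisons. The
// asymmetric >/>= convention guarantees that of two equal neighbouring
// sums only one survives.
std::vector<bool> localMaxima(const std::vector<int>& sum) {
    std::vector<bool> isMax(sum.size(), true);
    for (std::size_t i = 0; i + 1 < sum.size(); ++i) {
        bool lt = (sum[i] < sum[i + 1]);  // the single shared comparator
        if (lt)
            isMax[i] = false;      // i fails sum[i] >= sum[i+1]
        else
            isMax[i + 1] = false;  // i+1 fails sum[i+1] > sum[i]
    }
    return isMax;  // true = survives declustering as a local maximum
}
```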
Synthesis-level tests in Leonardo on the Virtex-II (XV1500bg575-4) show roughly 35% usage of function generators and CLB slices, leaving ample room for other logic on the JEM main-processor FPGA.
Jet/Energy Module status and firmware - Andrea Dahlhoff (slides)
At present a standalone test setup exists in Mainz which includes the JEM0.0, a DSS module with LVDS source cards, a crate CPU with Linux and HDMC, and a Link-Replicator Module. All functionalities of the InputFPGA implementation of the real-time data path, except synchronisation and playback memory, were tested and debugged, as well as the implementation of the total energy-sum tree of the MainProcessor. These results were obtained by spy memories capturing the data at different places.
While testing the connectivity between each InputFPGA and the MainProcessor, we discovered unequal behaviour of the incoming data streams from the various InputFPGAs, due to differences in track length. Some timing adjustments are necessary to solve this problem.
For the forthcoming tests of the real-time data path, existing test vectors (binary and real physics patterns) based on the adder tree of the Fast Trigger Simulation will be used.
The treatment of VME access was changed in the firmware. The firmware of the Readout Controller is not yet complete; for the time being only some components exist, e.g. VME access and registers, and activation of the G-links.
Jet/Energy Module plans - Uli Schäfer (slides)
Uli began with a JEM overview, showing a block diagram of JEM0. He pointed out that on the current JEM0 the restricted logic resources on the FPGAs and the limited boundary-scan capabilities are of concern. He showed a photograph of JEM0.0 and identified the devices and data paths concerned. Uli then explained the standalone tests to be done on a single JEM0.0/0.1 at Mainz. Tests of the real-time data path are under way; DAQ and RoI paths might be tested at a later stage. System components that will not be present on the final modules have lowest priority. Testing of the jet data paths is difficult without a TTC system up and running. Some help from Stockholm is required.
The next step in the test programme is a sub-slice test at RAL, where a TTC system is available, thus allowing for inter-JEM communication tests.
In parallel to tests on the current JEM a new module will be designed (JEM1). Start of schematic capture is August 2002. The processors will be VIRTEX2 chips, the de-serialisers are boundary-scan enabled 6-channel devices SCAN921260. Re-targeting the JEM0.1 firmware to Virtex2 is assumed not to be critical. Several options were discussed for the input processors. Due to the 6-channel deserialisers, numerology suggests the use of a single large input processor per 3 phi bins (24 channels). XC2V1500E-4FF896C is not prohibitively expensive and has sufficient pin-count and logic resources to accommodate 24 channels. Due to XCITE/DCI technology the chip provides impedance-matched drivers. A block diagram of the new JEM was shown. It was stressed again that all general control circuitry ought to be compatible with the CPM, so as to make efficient use of hardware and software designers. This module (3 or 4 pcs) will go into the Heidelberg slice test in 2003, where full functionality of the new modules can be tested.
FRIDAY, 5 JULY 2002, AFTERNOON SESSION
Common Merger Module, adapter modules and test cards: status and test plans - Ian Brawn (slides)
One CMM has been manufactured. An error in the CPLD JTAG chain prevents the configuration of one CPLD: the Crate-FPGA Configuration Controller. As a consequence the Crate FPGA is not configured automatically at power up and it must be configured via JTAG. Another design iteration of the CMM will be necessary to correct this. However, testing can continue using the current board. So far the following have been found to work: the VME interface, JTAG programming of FPGAs, RAM access, DLLs, and transfer of data from the Crate to System FPGA in playback mode.
A Rear Transition Module (RTM) is required to re-map the signals that pass from the rear of the CMM, through the backplane, to three SCSI-3 cables. One RTM has just arrived back from the manufacturers; it has not yet been tested. The next stage in the CMM test plan is to transfer real-time data to and from the CMM using the RTM and the DSS with the GIO card. A rack with all of the hardware required for these tests is being assembled. In parallel with this, the Configuration-Controller CPLD for the System FPGA will be tested. Once the real-time data path has been fully tested the readout logic will be tested, unless by that time there is a desire to test the CMM in connection with other pieces of hardware or software.
In discussion after the presentation we were reminded that the use of SCSI-3 cables to transfer data between CMMs is for this prototype system only. The choice of cable for the final system has not yet been made and more communication is required on this issue.
CP/JEP backplane and crate status - Sam Silverstein (slides)
Sam gave a summary of the processor backplane status. So far, the backplane and processor crates appear to work fine, with the exception of one error described below. The backplanes are susceptible to damage during assembly and some unused rear connectors are not very accessible; these will need some consideration before moving to final production. Three of the processor crates have been shipped, two to RAL and one to Birmingham.
At the time of the Heidelberg meeting the four backplanes had been delivered, as well as the high-current pins for 5 and 3.3V power. The first-make, last-break ground pins, however, had been delayed by AMP, so the first two processor crates were shipped to RAL and Birmingham before the ground pins became available. RAL and Birmingham are retrofitting the first two crates with grounds.
Insertion tests at RAL and Birmingham showed some bowing of the modules, which can be partially addressed by fitting longer guide rails in the processor crates. These rails have now been received, and RAL and Birmingham will retrofit the first two crates. Later revisions of the modules should have stiffening hardware added.
An error was found in the geographical addressing for the CMMs; both positions showed the same address in the crate. This is fixed by removing a pin from slot 20 of the backplane. The pin removal has been done at Stockholm for crates 3 and 4, while RAL and Birmingham will modify the first two crates.
The fourth crate will be sent incomplete to RAL for completion and shipped from there to Mainz for summer tests. A crate should be sent back to Stockholm in the autumn for tests on the backplane.
CP/JEP ROD prototype test status - Bruce Barnett (slides)
Bruce presented a review of CP/JEP ROD software, firmware and test status. The software used in the prototyping setup incorporating the DSS and ROD has matured, so that it largely conforms to the planned slice-test model, and the definitions of the software interfaces have matured as well. Concerning firmware, RAL ID (James Edwards) has provided what is necessary for wrap-around at arbitrary DSS source length on the one hand, and for L1A functionality on the other. Both of these require evaluation and the provision of additional software, and parts of this firmware must be tested in conjunction with the ROD. Concerning the ROD itself, soak-testing is still incomplete and new modules await testing, the latter to be accomplished within a 'module acceptance' framework being set up at RAL.
Timing Control Module and VME Mount Module status - Tony Gillman (slides)
TCMs are now in use at RAL, with two more modules available at Birmingham and Heidelberg. A minor design fault has just been observed at RAL, which may require a firmware or hardware modification.
The VMMs are in use at RAL and Birmingham for module testing. Minor design bugs are being addressed.
The General-Purpose I/O (GIO) Card was described, and a photograph of the first card was shown. This DSS-mounted CMC card will be used for testing the CPM, CMM (and RTM) and possibly the JEM. A connector design bug requires a temporary re-mapping interposer adapter card for initial tests. Testing of the first two cards is now well-advanced.
Finally, Tony mentioned the work that Adam Davis has been carrying out to get the CANbus system to operate. He has used the Processor Backplane to transfer error-free CAN data packets successfully between a TCM and a CMM at opposite ends of a crate; see the following talk.
Fujitsu CANbus status - Eric Eisenhandler for David Mills (slides)
Eric gave this talk which was written by Dave, incorporating a contribution from Adam, describing their joint progress.
Adam has now got the Fujitsu microcontroller to both send and accept CAN frames using real modules. His first setup counted how many CAN frames have been seen by a TCM. Next, he sent data from a CMM to a TCM with the modules at opposite ends of the crate with no errors or problems. Then he used three TCMs (not in the crate) to set up a little network, with a master module requesting data from slave modules. Finally, he digitised a voltage using the Fujitsu's ADC and transmitted the reading.
Dave has continued to work on control software, based on CANopen. He will put a GUI onto this. A list of progress and suggested tasks was shown in the slides.
In discussion, there was agreement that the Fujitsu now looks like a viable solution. Once the basic libraries are available, we should pause and only proceed to a full design of the DCS later, when more effort is available. The precise roles of the on-board Fujitsu and the TCM's Fujitsu must be defined, but that is not an urgent decision at this stage.
Sam reminded us of his suggestion that we might be able to diagnose incorrect (and potentially damaging) firmware loads in FPGAs by monitoring currents rather than voltages.
Summary of discussions with calorimeter calibration people, and plans for future joint tests - Thomas Trefzger (slides)
When ATLAS is running we want to calibrate the energy and timing, check the granularity, and pulse different areas of the calorimeter simultaneously. We have had a first meeting with LAr and TileCal people at CERN. The calibration tools in the TileCal are a cesium source, a laser system and charge injection. No detailed scheme has yet been developed for calibrating the TileCal during ATLAS operation, so we can have some influence on that. The calibration tool in the LAr is charge injection; a detailed scheme using an LTP (free from central DAQ) has been developed in Annecy. More work is required on both sides to establish a procedure for doing calibrations together with level-1. A note describing our needs and the calibration procedures foreseen by the TileCal and LAr people will be put on the web for comments.
After the slice test, in 2004 we aim to have an integration test with both calorimeters, level-1, TTC, DAQ, calibration system, etc. This would be a big and complex operation; not everything requires real beams so we have to plan carefully what is needed when, and just what the priorities are.
Nick pointed out that there are also requests to check the calorimeter interfaces to the trigger in 2003. We will be very busy with the slice test then, so it will not be easy to spare any effort and the work would have to be done efficiently. However, this too was agreed to be a good idea.
SATURDAY, 6 JULY 2002, MORNING SESSION
Overview, databases and run control - Murrough Landon (slides)
Murrough reviewed the goals and status of our online software developments. Since the last joint meeting a lot of progress has been made in integrating the core packages which have, up till now, been developed mainly in isolation. New hardware and GUI parts, including use of the CERN VME driver, have been added to HDMC and its CVS repository has been moved to RAL. The set of module services packages is being extended to the CMM and CPM and has (almost) been fully integrated with the database and run control packages. The database package has been extended to cover all the modules in the CP subsystem via a new set of integrated classes which provide access to all the data for a module via a single object. The simulation has been extended to the CPROD and is being developed to handle the proposed scheme for generating L1As in a controlled way. An API for exchanging test vectors between the simulation and module services has been agreed, and a new round of integrating the simulation with the database and run control has started.
Apart from HDMC, all our packages are now managed by CMT, which has proved very helpful. The proposed development path is to complete the work on integrating these packages to operate the present CPROD tests under the full control of the ATLAS run control software. This will then be extended to cover the full CP subsystem, and then to include first the JEP subsystem and finally the PP subsystem.
Module services, ROS, RoIB and system management - Bruce Barnett (slides)
Bruce presented this overview talk, concentrating on the status and technical details of the module services: the software layer intended to provide a software interface to hardware modules, compatible with the GUI interface provided by HDMC. The definition of module services is maturing, with new module variants (TCM, CMM) emerging. The application of module services is being certified in conjunction with Murrough's run control and database packages, in the context of the ROD-DSS test system (proto-slice) software.
Concerning ROS, a public distribution was made available to us in April, with comments being fed back in May. A new distribution which supports Red Hat 7.2 and kernel-2.4 is now available for distribution and test.
Bruce updated the attendees on the RoIB status. At a PDR which took place early in the spring, some technical issues arose which are being taken into account by the designers, in particular in areas pertaining to flow control, timeout and simulation.
On the topic of system management, Bruce reiterated the system and VME driver status. Birmingham, RAL and Mainz now have experience with the CERN VME driver, which supports specific Concurrent hardware as well as generic Pentium/Tundra Universe architecture under Linux (kernel 2.2 and 2.4). The recommendation is to deprecate other drivers and consolidate our operating system strategy to a single Linux version and distribution: Red Hat 7.2, which has recently been certified by CERN.
Subsystem and slice test planning; milestones - Tony Gillman (slides)
Tony showed the first sub-system (CP) that will be assembled in the UK over the next six months or so. He then outlined the new re-baselined schedule that takes into account the delays to the overall ATLAS schedule. He separated the items leading up to the conclusion of the Slice Tests in Heidelberg (Dec 2003) from the post-Slice Tests events starting Jan 2004. The Slice Tests themselves would take place in two phases (although probably in an unbroken period) between March and December 2003. All modules are already to full Module-0 specification, except for the JEM and the 9U CP/JEP ROD, which will both require re-design. Following design iterations of all modules in Q3-4 2003, there will be a lengthy period of FDRs and PRRs leading up to production, with the goal of a fully tested calorimeter trigger system integrated to the calorimeters by April 2006. Finally, Tony showed a heavily revised set of milestones which had been extracted from the previous two Gantt charts, which will be used as the definitive work plan for the next four years.
In discussion, since the Preprocessor information was not complete and had not yet been approved by Heidelberg, it was agreed that the proposed timetable would be circulated for people to digest and approve. A timescale of one month for this was agreed.
The meeting summary talk is available via the link above, and was also circulated by e-mail.
This page was last updated on 16 December, 2013.