The ATLAS Level-1 Calorimeter Trigger



ATLAS Level-1 Calorimeter Trigger
Joint Meeting at Birmingham
7–9 November 2002

AGENDA

THURSDAY, 7 NOVEMBER, MORNING SESSION

Informal online software discussion

THURSDAY, 7 NOVEMBER, AFTERNOON SESSION

Simulation of the trigger

  • Calorimeter trigger offline simulation - Ed Moyse (pdf)
  • Validation of the simulation, and forward rates - Alan Watson (pdf1) and (pdf2)
  • Forward jet trigger for invisible Higgs - Karl Jakobs (pdf)
  • Level-1 simulation and configuration, and its place in the offline computing world - Ed Moyse for Thomas Schoerner-Sadenius (pdf)

Calorimeter signals

  • Calorimeter interface issues - Eric Eisenhandler (pdf)

Preprocessor

  • Testing the PPr-ASIC: status and plans - Karsten Penno (pdf)
  • Preprocessor system: status and plans - Karlheinz Meier (pdf)

THURSDAY, 7 NOVEMBER, EVENING

Management Committee meeting

  • Chaired by Eric Eisenhandler - summary presented on Friday morning

FRIDAY, 8 NOVEMBER, MORNING SESSION

  • Summary of Management Committee meeting - Eric Eisenhandler

Cluster Processor Module

  • CPM and test card status - Richard Staley (pdf)
  • CPM test progress and plans - Gilles Mahout (pdf)

Jet/Energy Module

  • JEM firmware - Andrea Dahlhoff (pdf)
  • JEM prototype tests and test vectors - Jürgen Thomas (pdf)
  • JEM plans - Uli Schäfer (pdf)
  • Jet algorithm implementation - Attila Hidvegi (pdf)

Common modules and backplane

  • Common Merger Module, adapter modules and test cards: status and test plans - Ian Brawn (pdf)
  • CP/JEP backplane and crate status - Sam Silverstein (pdf)
  • CP/JEP ROD firmware - James Edwards (pdf)
  • CP/JEP ROD prototype tests - Bruce Barnett (pdf)
  • TCM, CAN, GIO, DSS, TTCrx - Adam Davis (pdf)
  • Purchasing of CERN standard crates - Norman Gee (pdf)

FRIDAY, 8 NOVEMBER, AFTERNOON SESSION

  • Online software demonstration

DCS

  • DCS status - Uli Schäfer (pdf)

External interfaces and calibration

  • Summary of discussions with calorimeter calibration people, and plans for future joint tests - Thomas Trefzger (pdf)
  • Where to calculate calibration constants - Norman Gee (pdf)
  • Integration of CTPD - Ralf Spiwoks (pdf)
  • Local Trigger Processor - Ralf Spiwoks (pdf)

SATURDAY, 9 NOVEMBER, MORNING SESSION

Online software

  • Preprocessor software - Paul Hanke (pdf)
  • JEM software - Thomas Trefzger (pdf)
  • Module services, ROS, RoIB and system management - Bruce Barnett (pdf)
  • Simulation, test vectors and test organisation - Stephen Hillier (pdf)
  • Databases, status summary and plans - Murrough Landon (pdf)

Subsystem and slice test planning; milestones

  • Test organisation, timescale and milestones - Tony Gillman (pdf)

Summary

  • Main issues, highlights & lowlights of the meeting - Eric Eisenhandler (pdf)

 

MINUTES

These minutes are based on summaries furnished by the individual speakers, with some material on discussions added by Eric Eisenhandler. All slide files are in pdf format.

THURSDAY, 7 NOVEMBER 2002, AFTERNOON SESSION

Physics simulation of the trigger

Calorimeter trigger offline simulation - Ed Moyse (slides)

Ed reported on his progress so far. The jet trigger is finished (but needs validation); the energy trigger needs a little more work, and he thinks the "quirk" in the e.m./tau trigger may be caused by jobOptions errors. Ed has simplified the TrigT1Calo jobOptions files to prevent this happening again. Documentation has been updated. He explained the problems he has had with persistency, and suggests that the code should be finished within a fortnight. Bug fixing and validation may take longer.

Validation of the simulation, and forward rates - Alan Watson (slides1) and (slides2)

Updates on various simulation matters:

  • Validation tests with TrigT1Calo uncovered a few minor bugs, which have now been fixed. Comparisons of results with Atrig show good consistency, though not all distributions are identical (and probably shouldn't be expected to be). New DC1 datasets show the effect of increased material – calibration adjustments will be needed.
  • The new Linux farm has arrived at Birmingham. It has been commissioned, and validation of DC1 phase-2 pileup addition has been performed. Real use should start soon.
  • Requirements for trigger tower simulation have been discussed with the Tile and LAr groups, and some examples of current thinking were shown. PPr simulation will also be needed to accompany this (can be developed in parallel). All of this should be available by about the end of the year.
  • Updated studies of forward jet rates revealed an error in earlier estimates. Both possible implementations show rather similar performance. Standalone forward jet triggers are probably not viable, but in combination with other signatures the rates and thresholds look useful. The most important physics process would prefer the forward jets be integrated with the central jet trigger. This should be feasible, but further studies are required.

Forward jet trigger for invisible Higgs - Karl Jakobs (slides)

The ATLAS Higgs group has investigated the possibility of detecting an invisible Higgs at the LHC using the vector boson production mode. The final state is characterised by two tag jets and a large missing transverse momentum. First studies indicate that after background normalisation a signal can be extracted for invisible branching ratios exceeding about 20–30%, assuming full trigger efficiency.

The kinematic distribution of the tag jets has been investigated in detail. The pseudorapidity distribution of the two tag jets has its maximum at rapidity values around 3.0, i.e. in the endcap calorimeter. A forward FCAL jet trigger, i.e. a trigger which requires a jet in each FCAL, does therefore have too low an acceptance, on the order of only 8%. Extending the forward jet trigger into the endcap region improves the situation. For an FCAL trigger extending down to 1.4 in rapidity, an acceptance around 80% is found, normalising to those events which pass the offline selection (rapidity separation, delta-eta and pt-miss cuts).

It seems that the Higgs events could be better covered by an inclusive 2-jet + pt-miss trigger, which could be realised easily with the existing hardware. More discussions with the PESA group (responsible for the trigger menus) are needed.

Level-1 simulation and configuration, and its place in the offline computing world - Ed Moyse for Thomas Schoerner-Sadenius (slides)

Thomas was unable to come to the meeting, so Ed agreed to present his slides.

The talk began with a description of what is needed to simulate the calorimeter and muon triggers and the Central Trigger Processor, and stated what the status of each was at present. It then moved on to the RoI builder. Next came the task of configuring the trigger: what is needed, how it might be done, and the current status. The role of the level-1 simulation in the overall task of PESA was discussed. Finally, the conversion of level-1 bytestream information for use in level-2 was described.

Calorimeter signals

Calorimeter interface issues - Eric Eisenhandler (slides)

Eric listed some jobs that should be progressing but recently have not been. He first mentioned documentation of the input cabling: the tables are done, but text needs to be added and someone should check it. Then he mentioned the need to purchase long cables from TileCal to the trigger; these should be the same type as used by the LAr calorimeters, and the information holding this up is the maximum length needed. Third is a specification for the TileCal receiver stations, needed by Pittsburgh in order to get their work on them authorised; the specification was started during the summer but work needs to resume. Fourth, Bill Cleland has pointed out that the variable-gain amplifier chips used for the LAr receivers are out of production and advises buying a sufficient stock for the TileCal; we should try to do this urgently. Fifth, tests with calorimeters – not just the large-scale tests mentioned later for 2004, but if at all possible a quick check with the analogue front-end in 2003, mainly to make sure nothing 'obvious' has been overlooked.

Preprocessor

Testing the PPr-ASIC: status and plans - Karsten Penno (slides)

The testing status as presented at the Stockholm meeting was recalled. Since then, a quantitative analysis of pulse height at the analogue input with respect to the digital LVDS output has yielded satisfactory results, considering the fact that all components on the MCM are run with their "power-up" default settings. One flaw has been observed already: the "odd parity" generation does not work for data frames containing only zeroes. However, inserting a non-zero parity bit into such frames was the main reason for choosing "odd parity". The fault has been corrected in the Verilog code. Action will only be taken when the full test programme is completed. Absolute priority in the immediate future will be given to accessing the ASIC's registers for control and readout. Only then can systematic checks with full analysis be performed.
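To make the point about odd parity concrete, the following minimal C++ sketch shows why an all-zero frame must carry a set parity bit. It is only an illustration (the real logic is in the MCM's Verilog), and the 10-bit frame width is an assumption.

    #include <cassert>
    #include <cstdint>

    // Illustration only, not the PPr-ASIC Verilog.  With odd parity the parity
    // bit is chosen so that the total number of set bits (data plus parity) is
    // odd; for an all-zero data frame the parity bit must therefore be 1, which
    // is what distinguishes such frames from a dead or stuck-at-zero link.
    uint16_t oddParityBit(uint16_t frame)
    {
        int ones = 0;
        for (int b = 0; b < 10; ++b)        // assumed 10-bit data frame
            ones += (frame >> b) & 1u;
        return (ones % 2 == 0) ? 1u : 0u;   // make the overall bit count odd
    }

    int main()
    {
        assert(oddParityBit(0x000) == 1);   // all-zero frame still carries a set bit
        assert(oddParityBit(0x001) == 0);
        return 0;
    }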

Preprocessor system: status and plans - Karlheinz Meier (slides)

The planning for the Preprocessor project was revisited as a whole. Two major factors have to be taken into account:

1) a rather dramatic decrease of manpower as compared to earlier days (1998–2001) – not in technical staff but in physicists working on system development and software.

2) the move of the institute to a new building imposed a deadtime of roughly 3 months on all ATLAS activities – much more than ever anticipated. Under these circumstances a new schedule considered "realistic" was presented. Not too many developments can go on in parallel, hence stages are introduced which appear to bring along big gaps in time. A collaborative discussion yielded some scenarios which might shorten certain time gaps. These suggestions, which also involve some engineering changes at the system level, will be examined in the immediate future. Conclusions should be reached within the coming weeks.

FRIDAY, 8 NOVEMBER 2002, MORNING SESSION

Summary of Management Committee Meeting - Eric Eisenhandler

The main points covered at the meeting were as follows:

  • Eric was re-elected as coordinator.
  • For future joint meetings, we will try using the CERN agenda system.
  • There was a discussion on preferred mechanisms for sharing the cost of common and joint items. We will try to minimise cash moved between institutes by having individual groups buy all of certain items.
  • The paper prepared by John Garvey for the Amsterdam conference last summer will be submitted to NIM. We should make an effort to publish more in refereed journals.
  • Future joint meetings will not go into weekends. The next two meetings will be at Mainz (5-7 March) and Queen Mary (2-4 July).
  • There was then a long and very open discussion about the schedule problems due to shortage of effort at Heidelberg. See Tony Gillman's talk at the end of the Saturday morning session.

Cluster Processor Module

CPM and test card status - Richard Staley (slides)

Richard reported steady progress in testing the CPM since the last joint meeting. The majority of CPM functions are operational: clock distribution uses the TTCdec module, all FPGAs configure from flash memory, and there is communication at 160 Mb/s between the Serialiser and the CP chip.

Richard said that it is very difficult to probe the CPM. A backplane extender is being built to allow clear access to both sides of a module. All signals except the 160Mb/s fanin/fanout (FIO) will be connected. The FIO may be looped back at the module connector, allowing the module to test itself. Four PCBs have been ordered, and are expected 28th October.

For driving test signals into a CPM, a new LVDS source card is also being made. This needed a new connector suitable for use with the high-density AMP cable. Ten PCBs are expected at the end of October.

We discovered that the TTCdec cards contain a version 2.2 TTCrx chip, and not the version 3.0 as expected. Version 2.2 does not have an I2C control bus, so Gilles is adjusting the clock phase using TTC commands. Also, the ID is derived from a PROM and not from the crate geographical address. Version 3.0 never made it to fabrication, and the latest version of the TTCrx is only available in a larger package. It was decided to design a new TTCdec module as soon as possible. In the meantime, the original TTCdec cards can still be used for the initial testing of the CPM.

The polarisation shrouds on the present backplane have been fitted either way round in a random manner, but are not very effective; the orientation needs to be defined. A worse problem is that the outer columns of the backplane connector are full length and interfere with the cable connector body. These should really be trimmed back to the backplane. However, the present situation can be tolerated for the slice tests.

There are a few mechanical omissions from the design of the PCB. There is no suitable way of strengthening the current CPM, which bends on insertion into the crate. Also, heatsinks must be removable to allow possible rework of the PCB. This will be fixed in the next PCB design.

More testing is needed before we should give the go-ahead to make further CPMs. The LVDS receivers are still untested, as is the connection into the Serialisers. The 160 Mb/s backplane FIO design needs verifying, and the termination needs checking on all signals. Connectivity into the HitSum FPGAs, ROCs and GLink has to be checked.

For the next revision of the PCB, the CPM should be re-designed to accommodate low-voltage signalling in order to reduce noise. The present design uses TTL/CMOS levels, and this could sum to 10 A over the whole module, switching at 160 MHz. The lower voltage signal levels require certain pins of the CP chip to be fed by an external reference voltage. This addition involves only a minor change to the PCB layout, but cannot be done on the present module as there are no tracks to the Vref pins of the CP chip FPGA.

In summary, Richard repeated that the crate extender is essential to the development and testing of the CPM, and that so far nothing has been found to prevent the present design of the CPM from being used in the slice test.

CPM test progress and plans - Gilles Mahout (slides)

The different CPM firmware has all been loaded and tested. Configurations are stored in flash RAM and downloaded to the FPGAs on VME request. The algorithm of the CP chip has been checked and works correctly. Some channels appear to be corrupted from time to time, and further investigation needs to be done with the help of ChipScope. The corrupted data disappear with some particular timing settings between the CP chip and the serialiser. In the meantime, new Xilinx software has been released, and implementation of the CP chip in this new environment shows some problems.

The serialiser chip is in good shape, but the LVDS path has not been tested yet. The hit merger firmware is working correctly and the use of the extender board for the CPM will help to measure the latency of the hit merging, together with the latency of the CP chip algorithm. The readout controller performs correctly but first analysis of output data on the G-link port shows some timing problems between slices, which could be a problem in the design itself.

Standalone tests are almost complete (nearly 80%); external sources will be needed to finish them fully. Testing of the LVDS receivers has already started, and backplane investigation will be performed with the help of GIO cards, a CMM emulator and additional DSSs. If all perform well, the CPM could be fully tested by the end of the month.

At the software level, all tests have been done using standalone code, written using the moduleServices of the L1Calo software environment. Integration with the L1Calo database scheme has been successful, and it is expected to do the same with the Run Control framework by the end of the month.

Jet/Energy Module

JEM firmware - Andrea Dahlhoff (slides)

The implementation of the JEM firmware has been completed and revised. With the current version, tests of the realtime data path were done, including the Ex, Ey and ET trees, playback, spy, and the TTC interface via VME.

At the same time a test bench of the realtime data path was compiled, to compare the results of the hardware implementation with results based on the adder tree for fast trigger simulation. Both results were found to be the same, and the hardware simulation pointed to a latency of (7 + 2.5) clock cycles. The same latency was observed while testing with playback and spy on the board. The implementation of the readout controller (ROC) has been finalised, simulated and tested.

A temporary version of the jet algorithm was integrated into the framework of the energy code, together with all additional functionality for the jet algorithm according to the specification. A preliminary estimate of the required resources after place and route shows that the Virtex XCV600E-7 device does not meet the requirements. The Virtex XCV1600E-6 device contains sufficient resources, but the maximum frequency is still below 40 MHz.

JEM prototype tests and test vectors - Jürgen Thomas (slides)

The prototypes JEM 0.0 and 0.1 have been tested in Mainz using test vectors for the energy summation trees. The adder tree of the trigger simulation (Fortran version) has been modified to match the firmware implementation with regard to saturation, multiplication for Ex, Ey via LUT, and quad linear encoding. It provides input data for the 8x4x2 (phi x eta x em/had) core region of the JEM and the output for SumEx, SumEy and SumET, equivalent to the JEM output to the Sum Merger CMM. Both physics patterns (ttbar -> 4 jets from PYTHIA) and random patterns covering the full range of 0–511 GeV have been produced; both use a common simple ASCII format.
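To illustrate the kind of software reference the modified adder tree provides, the C++ sketch below computes SumEx, SumEy and SumET over the 8x4x2 core region. The Ex/Ey-via-LUT multiplication follows the description above, while the LUT scale, phi bin centres and integer arithmetic details are illustrative assumptions rather than the real firmware.

    #include <array>
    #include <cmath>

    // Illustrative software reference for the JEM energy sums (not the firmware).
    constexpr int kPhi = 8, kEta = 4, kLayers = 2;   // 8x4x2 (phi x eta x em/had) core region

    struct Sums { long ex, ey, et; };

    Sums referenceSums(const std::array<int, kPhi * kEta * kLayers>& towers)
    {
        // Per-phi-bin cos/sin lookup tables, scaled to integers as a firmware
        // LUT would be; the bin centres and the x1024 scale are assumptions.
        const double pi = 3.14159265358979323846;
        std::array<long, kPhi> cosLut{}, sinLut{};
        for (int p = 0; p < kPhi; ++p) {
            double phi = (p + 0.5) * 2.0 * pi / 64.0;
            cosLut[p] = std::lround(1024.0 * std::cos(phi));
            sinLut[p] = std::lround(1024.0 * std::sin(phi));
        }

        Sums s{0, 0, 0};
        for (int p = 0; p < kPhi; ++p)
            for (int e = 0; e < kEta; ++e)
                for (int l = 0; l < kLayers; ++l) {
                    long et = towers[(p * kEta + e) * kLayers + l];  // tower ET, 0-511 GeV
                    s.et += et;
                    s.ex += (et * cosLut[p]) / 1024;                 // LUT multiply, then rescale
                    s.ey += (et * sinLut[p]) / 1024;
                }
        return s;
    }

Expected values produced this way can then be compared word for word with what is read back from the spy memories, as in the two test modes described below.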

Two test modes have been performed:

(1) Standalone test of the JEM (64 channels): fill the playback memories inside the InputFPGAs, start a test run via TTC command (by VME), read out the spy memories of the JEM FPGAs and compare to the expected values from the simulation. Six million events filled with random data have been processed; all results are identical to the expected values. The same result has been achieved processing 60 million ttbar events (24 passes over an offline-produced library of 2.5 million events).

(2) Test using the DSS with two LVDS data source buffers (16 channels, 15 m AMP cable, LR-precompensation): fill the data buffers of the DSS, write-enable the spy memory for one cycle, read out the spy memory, match it to the continuous data stream from the DSS, and again compare to the expected values from the simulation. This test can only be performed for neighbouring pairs of InputFPGAs, due to the cable mapping and the limited number of channels. The four pairs of InputFPGAs in the JEM's core region have been fed with 1.8 million events of random patterns. Non-fed channels are zeroed in the simulation. However, the test is only successful if just one LVDS data buffer has variable patterns, with static ones on the other; in that case the results are again all identical to the expected values. A problem with the DSS timing may be the cause, and is currently being examined.

The tests will be repeated using the final input synchronisation method, and also on the upcoming prototype JEM 0.2.

Ed would like to get Jürgen's test vectors in order to check his simulator with them.

JEM plans - Uli Schäfer (slides)

Uli started with a summary of the JEP status. Currently two JEMs are up and running; JEM 0.1 is considered a fully functional prototype, ready to go into the sub-slice test. A third module, JEM 0.2, is under production. It carries a main processor of increased logic capacity. The firmware is nearly complete and tested, and online software is available, though largely not yet converted into the module services framework. Work on the realtime data path of the energy-sum merger will start after the Birmingham meeting. Documentation has been updated.

To get ready for the sub-slice test at RAL, a few further tests are required on the input synchronisation circuitry and on the FIO data resynchronisation. The RoI interface has not been implemented yet; since it is a copy of the DAQ interface, no dedicated tests are assumed to be necessary. Uli explained the requirements and goals for the sub-slice test at RAL. The tests will require infrastructure assumed to be available at RAL from the previous and concurrent CPM tests. The JEM should be ready to go into the sub-slice test by the end of the month.

Uli explained that due to the trigger-wide move from 1.6mm card rails to 2mm rails the current JEM, though otherwise functional, cannot act as a basis for the production modules. A few minor bugs will have to be fixed on the production modules and FPGA resources will be allocated differently.

The baseline for a JEM re-design was presented. The detailed design started in September. Uli explained problems encountered due to the high density of vias required on the de-serialisers and FPGAs. The design work is delayed, and together with the PCB manufacturer a solution is being sought. A daughter-board construct is being seriously considered.

Further old-style JEM 0s will be manufactured for the slice test in 2003, to make sure that the algorithms can be tested even if the prototype of the production modules is not available in time.

In discussion, a final specification for JEM 0 was requested. There would be a light review of the JEM 1 design soon.

Jet algorithm implementation - Attila Hidvegi (slides)

Attila reported that the speed of the jet algorithm has been improved, so that the design is capable of running at 80 MHz. Many bugs have been corrected, but some still remain. The jet algorithm has been run on a real FPGA, using a development board with a Virtex-II connected to a computer through RS-232. The jet code is now merged into the main processor, but the speed needs to be improved. Work is starting on other firmware related to the jets.

Common modules and backplane

Common Merger Module, adapter modules and test cards: status and test plans - Ian Brawn (slides)

Ian reported on the status of the CMM and its rear transition module (RTM). The RTM and a complete slice through the real-time path on the CMM have been tested and shown to work. Currently the configuration-control logic on the CMM is being commissioned and the readout logic is being tested. In response to a question from Murrough, Ian said it would be both possible and desirable to send the CMM to the PPD lab at RAL once testing in the electronics lab has been completed. It was also decided that to help Mainz develop the JEP-CMM firmware it would be of benefit to give them some of the existing CP-CMM firmware, together with some tuition.

The design will be updated before making further modules. These should be ready by March.

CP/JEP backplane and crate status - Sam Silverstein (slides)

A summary of the current CP/JEP backplane and crate status was given. Two crates at RAL and one at Birmingham have been completed and used in system tests for some time. The fourth has been fitted with power pins at Birmingham, and will be returned to Stockholm for tests there. The backplane errata contain no showstoppers up to this point. An addressing error in slot 21 can be corrected by removing a single pin from that slot. The crate geographic address pins GA(6:4) are active low, rather than active high as given in the specification; modifications to the interface firmware can compensate for this error. Finally, pin lengths and shroud thicknesses on the rear of the backplane must be better specified in the production version.
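As an aside, the active-low GA(6:4) compensation amounts to a simple bit inversion. The snippet below is a hypothetical C++ illustration of what the firmware workaround does; the assumption that the remaining address bits are unaffected is illustrative.

    #include <cstdint>

    // Illustration only: the backplane drives the geographic address pins
    // GA(6:4) active low, so the interface logic inverts those three bits
    // before using the address.  The other bits are assumed unaffected.
    uint8_t correctedGeoAddr(uint8_t rawGA)
    {
        return rawGA ^ 0x70;   // flip bits 6..4, leave the rest unchanged
    }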

The newest problem discovered is that the cable shrouds in the rear of the backplane are polarised, and that they were installed with the polarisation in a random orientation. During discussion a strong preference was indicated for the use of polarised shrouds in the next backplane revision, and the polarisation direction should be specified. The shield pins on the cable inputs have been shown to be unneeded and obtrusive, and will be omitted in the next revision. There was a comment that the polarisation should be robust enough to prevent mis-connections.

A list of errata and experiences with the backplane should be compiled and made available online.

CP/JEP ROD firmware - James Edwards (slides)

James showed a table giving the status of the various ROD firmware versions. CPM DAQ and RoI f/w is tested. CMM CP f/w and JEM DAQ and RoI f/w is written but needs testing. CMM JEP f/w (jet and energy) is not yet specified and so not written.

CP/JEP ROD prototype test status - Bruce Barnett (slides)

Following a reminder of the hardware test setup (DSS-ROD-DSS) Bruce reported on the status of the tests. ChipScope sessions with James have resulted in diagnosis of the remaining flow-control problems in the S-Link interface of the ROD. There remain a few difficult problems in the design ('sometimes works') but these will be investigated. A few other issues (bit 23 in hit data words, status bit treatment, details of zero suppression) need to be addressed, but are not crucial issues which affect the flow of uncorrupted data through the ROD.

The test software has matured to a point where it is largely integrated within the simulation/test software (l1calo packages) and should be easy for most people to use.

Most urgent on the 'to-do' list is progress on new firmware variants, for which simulation software (from S. Hillier) is now ready for use. CP/RoI firmware should be retested within the current software environment.

Norman noted the system of numbered problem reports. Something like this is essential; casual emails are too easy to lose or forget.

TCM, CAN, GIO, DSS, TTCrx - Adam Davis (slides)

Some improvements have been made to the TCMs. Their VME display problem has been solved. The GIO card is now in use for testing the CMM; a slightly improved version is being laid out and we will probably order eight. Further work has been done on CAN monitoring software for the CMM; Adam is looking ahead to the CPM. Firmware improvements have been put into the DSS modules and they have been tested. New TTCdecoder cards for the new version of TTCrx chips are currently being laid out.

The TCM documentation should be updated and put into EDMS.

Purchasing of CERN standard crates - Norman Gee (slides)

Norman presented various information about the bulk crate purchase organised at CERN, and showed pictures of crates which he and Bruce had seen. One important issue is the positioning of the power supply – we need unrestricted access to the rear of CP and JEP crates, and space for a lot of cables.

There will be just one bulk order per year, with delivery phased over the first six months of the following year. Orders were being collected in November. It was agreed to order two crates (one VME-64x, one without backplane) to evaluate.

In discussion, the different rack depths were mentioned: ATLAS ones are 1 m deep, but for our tests we will use shallower ones. Can we accommodate the crates and cool them?

FRIDAY, 8 NOVEMBER 2002, AFTERNOON SESSION

Online software demonstration

The online software group gave a demonstration of their new package (see Saturday morning), remotely running modules at RAL.

DCS

DCS status - Uli Schäfer (slides)

Uli reported on the DCS meeting Oct. 8th. There were many presentations of limited relevance to the trigger (radiation hardness, cabling …). Some of the more important issues were stressed:

  • Radiation test results led to a re-design of the ELMB. There is no longer a co-processor on the ELMB. As a result the ELMB firmware gets simpler and might be more easily adapted to the trigger needs.
  • CERN has decided on the PCI/CAN interface (Kvaser). Branch test results with the new card were presented. Though the NICAN card continues to be supported, everyone is encouraged to use the Kvaser card.
  • Helfried gave an overview of the complete DCS chain ELMB-OPC-PVSS and explained which services are performed in which part of the chain. The most recent version of the OPC server, as well as the utilities, now supports Kvaser and NICAN concurrently.
  • Fernando Varela explained the structure of the DCS backend (PVSS). Each subsystem needs to find out whether a geographical or a functional substructure is more adequate. Services required/available on the three levels (global, subdetector, subsystem) were explained.
  • There was a short presentation on the synchronisation of LHC states, DAQ states, and DCS states.
  • The status of the DAQ to DCS communication (DDC) was reported. The trigger would have to find out whether any data need to be exchanged between its DCS sub-system at the PVSS level and the DAQ.
  • Helfried announced the procurement of a batch of Kvaser cards. Helfried wants to talk to the subdetectors before Christmas to discuss the DCS sub-structures.

External interfaces and calibration

Summary of discussions with calorimeter calibration people, and plans for future joint tests - Thomas Trefzger (slides)

A group of eight people has been set up to discuss the possibilities of combined trigger and calorimeter runs. A first paper draft has been circulated to the calorimeter trigger and the calorimeter calibration people. A detailed procedure for taking a combined run, including setup steps and steps for each calibration run, was discussed in the talk.

In 2004 a detector/DAQ integration test is planned; goals of this run were presented.

The calorimeter and trigger community would like to make use of the Local Trigger Processor instead of using the CTP, but more discussion is needed with the people from CERN (see Ralf Spiwoks' talk later in this session).

In discussion, the "light" tests in 2003 with analogue front-ends were mentioned. For 2004, Norman said that we must be clear on what we are testing and what we hope to achieve. We must be careful to do what is needed but not over-commit ourselves.

Where to calculate calibration constants - Norman Gee (slides)

Norman started by assuming that joint calibration runs with LAr or TileCal would use the same general structure as existing LAr runs, with nested loops over tower builder settings, patterns, and amplitudes. The obvious place to do the calibration calculations is at the event builder, using built calorimeter and trigger events; it is the only possibility if a laser pulser (with pulse-to-pulse variations) is used.

However, the ROS can probably pass < 1 kHz of events to the event builder, and the bandwidth of the ROS is about to be reduced. The big calibration events will probably lower the event rate further. A solution, as adopted by LAr, is to process data on the Preprocessor RODs, using a CPU daughter board. Simple simulation shows that this will work up to the 10 kHz frequency available from the calibration pulsers. We will proceed on this basis. Another possibility, raised in discussion by Paul Hanke, was to use the VME CPU in each crate.
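The nested-loop run structure mentioned at the start of this talk might look roughly as follows. This is only a schematic C++ sketch: the setting names, value lists and event count per point are placeholders, not the real LAr/TileCal calibration parameters.

    #include <iostream>
    #include <vector>

    // Schematic calibration-run structure: nested loops over tower builder
    // settings, patterns and amplitudes.  All values here are placeholders.
    int main()
    {
        const std::vector<int> towerBuilderSettings{0, 1, 2};
        const std::vector<int> patterns{0, 1, 2, 3};
        const std::vector<int> amplitudes{100, 200, 400, 800};   // pulser DAC counts (assumed)
        const int eventsPerPoint = 1000;                          // assumed

        for (int setting : towerBuilderSettings)
            for (int pattern : patterns)
                for (int amplitude : amplitudes) {
                    // Configure the pulsers, take eventsPerPoint events, and
                    // accumulate the trigger-tower response versus amplitude
                    // for the later fit of calibration constants.
                    std::cout << "calibration point: setting=" << setting
                              << " pattern=" << pattern
                              << " amplitude=" << amplitude
                              << " events=" << eventsPerPoint << '\n';
                }
        return 0;
    }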

Integration of CTPD - Ralf Spiwoks (slides)

Ralf reported on the integration tests of the Central Trigger Processor Demonstrator (CTPD) and the Muon-CTP-Interface (MUCTPI) carried out earlier this year. He further reported on the status of the Programmable Patch Panel and the software, which are both required for the integration of the CTPD in the Level-1 calorimeter trigger slice test foreseen for next year.

Local Trigger Processor - Ralf Spiwoks (slides)

After recalling the current model of the interface between the Central Trigger Processor (CTP) and the individual sub-detector TTC partitions, Ralf reported on a proposal for a Local Trigger Processor (LTP). The LTP will be connected to the CTP and allow it to run in "common mode". It will also contain a pattern generator which will allow it to run in "stand-alone mode" in which it generates its own timing and trigger signals. Provision is also made in the LTP to allow several sets of TTC partitions to run concurrently and independently of the CTP.

 

SATURDAY, 9 NOVEMBER 2002, MORNING SESSION

Online software

Preprocessor software - Paul Hanke (slides)

The software effort was started using the C++ simulation framework established by Steve Hillier. Its purpose is to predict the digital Preprocessor data derived from analogue input at various stages of the hardware chain. Comparison between data from the software emulation and data read from the hardware should ease the testing work significantly. While the part from the analogue front end (a calorimeter pulse library from test beams) through FADC digitisation is complete, the 40 MHz digital part containing the ASIC algorithms still has to be written. However, the framework of data transport from an input reader through Preprocessor classes to an output writer is functioning. The intention is to parallelise this to several (4) PPr channels and to hand data arrays to LabView for comparison with hardware results.
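A rough C++ sketch of the data-transport chain just described (input reader, Preprocessor channel objects, output writer) is given below. All class and method names are hypothetical, and a trivial peak finder stands in for the 40 MHz ASIC algorithms that, as noted, still have to be written.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Hypothetical sketch of the reader -> Preprocessor classes -> writer chain.
    struct FadcSamples { std::vector<int> adc; };    // digitised pulse, one channel

    class PprChannel {
    public:
        int process(const FadcSamples& in) const
        {
            // Stand-in for the ASIC algorithms: simply report the peak FADC value.
            return in.adc.empty() ? 0 : *std::max_element(in.adc.begin(), in.adc.end());
        }
    };

    int main()
    {
        const std::size_t nChannels = 4;             // parallelised to 4 PPr channels
        std::vector<PprChannel> channels(nChannels);
        std::vector<int> results;

        for (std::size_t ch = 0; ch < nChannels; ++ch) {
            FadcSamples input;                        // would come from the input reader
            input.adc = {0, 5, 40, 25, 10};           // assumed FADC pulse shape
            results.push_back(channels[ch].process(input));
        }
        // An output writer would hand 'results' to LabView for comparison with hardware.
        return 0;
    }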

In discussion, it was pointed out that the PPr readout format must be documented. Also, a clear summary of work on data compression should be available as it is not easy to find out what was done.

JEM software - Thomas Trefzger (slides)

In Mainz all software necessary to test the functionality of the JEM is available (configure FPGAs, load DSS buffer memories, load and read playback and spy memories, test vectors). But the software has to be modified to work in the module services framework.

Cano Ay has already rewritten most of the existing software packages (configuring FPGAs, loading LUTs, checking the link status and checking for parity errors) into the HDMC framework. As a next step he will work together for a couple of days with Bruce and Murrough at RAL.

An additional PC has been set up in Mainz and is working with RedHat 7.3. The CERN VME driver is now installed on all existing CPUs.

Module services, ROS, RoIB and system management - Bruce Barnett (slides)

An overview of a number of areas was presented. The module services are progressing well, but work on the 'newer' modules (CPM, CMM) will need to converge soon onto mature test beds for those modules. The module services infrastructure (HDMC) is maturing, with two important bugs having been identified and fixed.

The new RoIBuilder prototyping is advancing well – there is a need to attempt integration tests with the developers in 2003. ROS software availability is a worrisome issue: crucial for the slice testing is a released OO-ROS with support for S-Link input, not much later than the end of 2002. At the very least, S-Link support under RedHat 7.2 (2.4 kernel) must be available in some form.

New S-Link hardware will appear in 2002, which might help in the planning of a late slice test, and in particular the choice of ROS host, but a PICMG form-factor PC with 10 standard (32-bit/33 MHz) or mixed PCI slots is still, apparently, the most appropriate solution.

In the area of system management, Bruce reported that all RAL machines are now based on RedHat 7.2 – the Concurrent ones being diskless and net-booted. Concerning the imminent CERN move to RedHat 7.3 and the required use of gcc 2.95.2 compilation for expected ROS and back-end releases, it was observed that we will need to consider the optimal upgrade timeline. Although inconvenient before a slice test, use of recent releases may necessitate an early move.

Simulation, test vectors and test organisation - Stephen Hillier (slides)

Since the last report in Heidelberg, much progress has been made on many fronts, particularly in the integration of the simulation into the rest of the online software world. Highlights include the migration to CMT, development of common level-1 calorimeter tools (such as TTC information needed by all module simulations), improvement of the CPM and CP/JEP ROD simulations, the development of a common test-vector reading scheme for hardware and simulation, and most importantly the integration with the online database and run control.

The result of all this work is that the simulation can now be performed under the control of the standard ATLAS run control interface, taking essentially all the information for setting up the simulation from the database. This provides a way of interacting with the simulation that is unified with the hardware. A scheme for generating test vectors has also been written to interact with run control in a similar way. Again, the specifications of the generation are entirely governed by the database.

The application of this new software to the DSS/ROD system was illustrated and the results of tests with the hardware at RAL were summarised. After initial tests at the end of August, a few minor improvements were made before more stringent tests by Bruce were used to validate the new software. This ended with the first online software release towards the end of October.

Finally, future plans were presented. The most important immediate work was to extend the ROD tests to more input data types – the software is already in place to do this, but needs testing – and try to transfer the software framework to be used for testing the CPM at Birmingham. The simulation for other modules was being done by Paul for the PPM, the Stockholm students for the JEM and Norman for the CMM, and all of these are progressing well.

Databases, status summary and plans - Murrough Landon (slides)

Murrough gave a summary of the online software status. The most significant recent development was the first release of the software. This allows the simulation and module services to be run via the run control and configured using the database. The release is intended to be used for CPROD tests and feedback from non-expert users is welcome.

Compared with the schedule presented in Stockholm, the integration of the CPROD tests under the run control was delivered late and work on including the CPM and CMM is also delayed as the modules are still being debugged. The aim now is to integrate the CPM by the end of the year and the CMM early next year.

Murrough indicated the missing functionality required for the CP subsystem and gave a general summary of the missing software areas, which are mainly in the JEP and PP subsystems. The previous schedule assumed the JEP subsystem would be tackled after the CP subsystem; however, given progress with the JEM, plans for JEP subsystem software should be accelerated.

Subsystem and slice-test planning; milestones

Test organisation, timescale and milestones - Tony Gillman (slides)

Tony suggested that an appropriate quotation for his talk might be "Plus ça change ...". He reminded everyone of the traditional two stages of slice tests – sub-slices of CP/JEP followed by the full slice with the PPr sub-system – and noted that in the full slice set-up the PPr modules form a relatively small part of the system.

He showed two Gantt charts – before and after the PRR stage – as agreed during the Stockholm Collaboration meeting, which had been submitted to ATLAS in September. The "Pre-PRR" schedule concluded with a 5-month period of FDRs and PRRs in early-2004, following the full slice tests. The "Production" schedule occupied three phases, Production, Standalone Testing and Installation, totalling ~30 months. As part of the September re-baselining exercise, 29 milestones had been re-defined up to 2006. Tony then showed two draft schedules, in which he had tried to accommodate the newly-reported delays to the PPr sub-system. Although the full slice tests would inevitably be delayed, he suggested that there should be correspondingly longer time spent working with the UK sub-slice tests, in particular simultaneous operation of the CP and JEP sub-systems. With relatively few PPr modules in comparison to the CP/JEP modules, it may be more efficient to bring them with the CTPD to the UK, rather than move everything to Heidelberg. The overall effect of the PPr delay would be to move the PRR phase by ~9 months, with a consequent shortening of the Production/Test/Installation phase from ~30 months to ~21 months, which looked very uncomfortable for such a large and complex system. All milestone dates would also have to be re-defined.

In conclusion, he suggested that we must explore ways of reducing the PPr delays (out-sourcing boards, etc.) as well as re-arranging our programme to focus on the sub-slice tests.

Summary

Main issues, highlights and lowlights of the meeting - Eric Eisenhandler (slides)

The meeting summary talk is available via the link above, and was also circulated by e-mail.
