The ATLAS Level-1 Calorimeter Trigger



ATLAS Level-1 Calorimeter Trigger Joint Meeting at Heidelberg
2–4 November 2000

AGENDA

THURSDAY, 2 NOVEMBER, MORNING

Software Meeting

  • Organised by Murrough Landon. Minutes are available here.

THURSDAY, 2 NOVEMBER, AFTERNOON SESSION

Physics simulation

  • LVL1 acceptance of 4b jet final states from Higgs, and an updated fast simulation package: ATLFAST-L1 - Kambiz Mahboubi (pdf)
  • Trigger simulation in the new ATLAS software framework - Ed Moyse (pdf)

Calorimeter signals and cables

  • Trigger cable issues after the PRR - Paul Hanke (html and ps)
  • TileCal receiver tests and status - Svante Berglund (slides not available)

Preprocessor

  • PPr-ASIC status - Cornelius Schumacher (html and ps)
  • PPr-MCM status and preparations for testing - Ullrich Pfeiffer (html and ps)
  • Preprocessor ROD functionality - Bernd Stelzer (html and ps)
  • Preprocessor Module, system issues, timescale and milestones - Paul Hanke (html and ps)

THURSDAY, 2 NOVEMBER, EVENING

Management Committee meeting

  • Chaired by Eric Eisenhandler - summary presented on Friday morning

FRIDAY, 3 NOVEMBER, MORNING SESSION

  • Summary of Management Committee meeting - Eric Eisenhandler

Data transmission

  • UK LVDS tests and new LVDS chips - Richard Staley (pdf)
  • Refining the rack layout and cable lengths - Murrough Landon (pdf)

Cluster Processor

  • Cluster Processor chip - Viraj Perera (pdf)
  • Testing the Serialiser and CP chip - Ian Brawn (pdf)
  • Cluster Processor Module status - Richard Staley (pdf)
  • CPM specification and nomenclature - Paul Bright-Thomas (pdf)
  • Timescale and milestones - Tony Gillman (pdf)

Jet/Energy-sum Processor

  • JEM status, and JEP timescale and milestones - Uli Schäfer (pdf)
  • Implementation of energy summation algorithms - Carsten Nöding (pdf)

FRIDAY, 3 NOVEMBER, AFTERNOON SESSION

Common modules and backplane

  • Common Merger Module specifications - Norman Gee (pdf)
  • CP/JEP ROD prototype - Viraj Perera (pdf)
  • Timing Control Module and TTC situation - Bob Hatley (pdf)
  • CP/JEP backplane - Sam Silverstein (pdf)
  • Timescale and milestones - Tony Gillman (pdf)
  • Re-defining our long-term milestones - Eric Eisenhandler (pdf)
  • Summary of DCS Workshop - Uli Schäfer (pdf)

SATURDAY, 4 NOVEMBER, MORNING SESSION

General Items

  • Integration and tests with level-2, DAQ and CTP, and preparation for T/DAQ workshop 13–17 Nov - Norman Gee and Ralf Spiwoks (pdf)
  • Summary of ROD Workshop - Norman Gee (pdf)

Software

  • Work on DAQ software and DAQ –1 - Bruce Barnett and Scott Talbot (pdf)
  • Software planning for the slice tests - Murrough Landon (pdf)

Summary

  • Main issues, highlights & lowlights of the meeting - Eric Eisenhandler (text)

SATURDAY, 4 NOVEMBER, AFTERNOON

Informal brainstorming on issues related to common modules

  • Organised by Tony Gillman

 

MINUTES

THURSDAY, 2 NOVEMBER 2000, AFTERNOON SESSION

PHYSICS SIMULATION
(minutes by Ed Moyse)

LVL1 acceptance of 4b jet final states from Higgs, and an updated fast simulation package: ATLFAST-L1 - Kambiz Mahboubi

The slides are available here (pdf).

The L1CT (Calorimeter Trigger) package has been around for a while now, and ATLFAST has been modified to include L1CT cell maps and combine them with ATLFAST cell maps to produce calibrated jet maps. ATLFAST entities (except muons) have been corrected for transition-region effects. To illustrate the jet corrections, histograms were shown comparing uncalibrated and calibrated jet ET, the calibration variables alpha and beta at different energies, and low-luminosity pile-up. The aim is to find the ET threshold which is at least 90% efficient in the relevant bin. Doing this at all energies allows a plot to be drawn showing trigger threshold versus jet ET at both low and high luminosity, and the rates for several triggers (1XJ180(113), 3XJ75(31), 4XJ55(17)). Some differences have been noticed compared to the TDR, but (a) Kambiz pointed out that the meaning of these triggers is not clearly defined (i.e. whether it is the energy at which the trigger is 90% efficient or the actual threshold), and (b) Karl Jakobs noted that rates at low energy depend strongly on the choice of structure function. Signal acceptance was shown, with the L1 jet trigger reduction effect being about 20% at low luminosity and about 85% at high luminosity.
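
As an illustration of the threshold-finding procedure described above, the sketch below scans per-bin trigger efficiencies and returns the lowest jet-ET bin in which the efficiency reaches 90%. The names and toy numbers are invented for illustration and are not taken from the talk.

    # Illustrative only: find the lowest jet-ET bin where the trigger is
    # at least 90% efficient. Bin values and efficiencies are toy numbers.
    def turn_on_point(et_bins, efficiencies, target=0.90):
        for et, eff in zip(et_bins, efficiencies):
            if eff >= target:
                return et
        return None

    # Example: turn_on_point([60, 70, 80, 90], [0.55, 0.82, 0.93, 0.99]) -> 80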

Trigger simulation in the new ATLAS software framework - Ed Moyse

The slides are available here (pdf).

Work has now begun on re-writing Atrig in Athena, the new ATLAS offline software framework. Athena has several potential advantages for the eventual user, but imposes constraints on developers which must be followed. The LVL1 algorithm implementation is nevertheless independent of Athena, and the approach being taken was explained. The interface of the RoI object saved in the Transient Event Store will be defined by February 2001, and the e.m. trigger itself will be finished by May 2001. A draft RoI class was shown. There is still much to do, but Athena seems to be gathering momentum now and the pace of development is increasing. The completion of the Athena version of ATLFAST should prove useful both in inspiring further Athena development and as a source of data for Atrig.

CALORIMETER SIGNALS AND CABLES
(minutes by Oliver Stelzer)

Trigger cable issues after the PRR - Paul Hanke

The slides are available, together with the system issues talk below, here (html and ps).

The "Filotex" cable of the LAr calorimeters passed the PRR. It is electrically ok from the LVL1 point of view. But the maximum length of the cable is still not fixed and the installation is an unresolved issue. What is new is that the cable is now more flexible, with a minimum bending radius of less than 10 cm. A lot of studies on cross-talk have been carried out at Saclay, with the important conclusion that most of the cross-talk happens in the connector. Paul then showed a flexible SCSI cable with similar mechanical properties, demonstrating that such a cable could be used for our analogue cabling from the calorimeter receivers to the front of the PPr.

[At the October ROD workshop it was announced that the cables would be installed with connectors only on the detector end. In USA15 they will be cut to length and connectors installed and tested. There is an opportunity to buy our short cables together with the LAr long ones, and perhaps the TileCal long cables.]

TileCal receiver tests and status - Svante Berglund

The slides are not available.

Svante's presentation started with the question of whether we can use the LAr transformer coupling also for the analogue signals from the Tile Calorimeter. This is related to the unipolar shape of the TileCal signal, which results in two complications: it produces a little "bump" on the tail of the signal, and the pedestal level is rate dependent. The latter is less of a problem, as a plot from Martine Bosman showed: high-energy depositions are very rare, and the bulk of signals are low-energy signals which do not change the pedestal much. Signals have been examined using "soft" and "hard" differentiation, for which the relaxation times were found to be 5 and 0.5 microseconds respectively.

Test-beam results were presented showing the influence of the tower-builder electronics. A discussion started on understanding the width of the distribution. There seems to be confusion about the costs and availability of cables: an attempt to obtain a single twisted-pair cable got no response, so a "Tensolite" cable with similar properties but much higher cost was used in these tests.

Discussion:

It is no longer clear that the TileCal group have understood that we want the trigger towers as 9 in the barrel and 6 in the extended barrel, as already agreed, rather than 10 and 7. This must be checked.

PREPROCESSOR
(minutes by Oliver Stelzer)

Slides for all of the Preprocessor talks can be accessed from this Web page.

PPr-ASIC status - Cornelius Schumacher

The slides are available here (html and ps).

New efficiency plots for unsaturated pulses were shown. If the decision logic selects the right algorithm, the overall efficiency is equal to one over the whole energy range. The algorithm is selected depending on the results of the FIR filter when saturation occurs in the tower builder. If saturation occurs in the shaper, it leads to a drop of the FIR filter output at high energies; it might then be better to choose the algorithm from the raw FADC data instead. This situation is very uncommon, as it would mean that all the energy goes into one calorimeter cell, but the possibility to make the selection from the raw FADC data is now implemented in the PPr-ASIC. This is one outcome of the post-FDR studies, as well as the detection of two small bugs in the code. Plots show a maximum gate-processing time of 10 ns on the chip, giving a safe margin with respect to the 25 ns clock. A submission date at the end of December 2000 was announced.
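
The selection logic might be pictured roughly as in the sketch below. This is a loose illustration only: the 10-bit FADC scale, the threshold value and the names are placeholders and are not taken from the PPr-ASIC design.

    # Rough illustration of choosing the BCID algorithm source: normally the
    # FIR filter result is used, but if the shaper saturates (so the FIR
    # output droops) the raw FADC samples are used instead.
    FADC_FULL_SCALE = 1023        # assumed 10-bit FADC
    FIR_SAT_LEVEL   = 50000       # illustrative saturation level of FIR output

    def select_algorithm(fadc_samples, fir_output):
        shaper_saturated = max(fadc_samples) >= FADC_FULL_SCALE
        if shaper_saturated and fir_output < FIR_SAT_LEVEL:
            return "raw-FADC"     # FIR output has drooped; use raw data
        return "FIR"              # normal case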

PPr-MCM status and preparations for testing - Ullrich Pfeiffer

The slides are available here (html and ps).

The layout of the PPr-MCM is not "routed" yet, as the PPr-ASIC layout is still to be finalised. Simulation results were shown in which a PSpice pulse is input and an LVDS output is received; this checks set-up and hold times and ADC phase adjustment, and includes parametric simulations. An MCM test sequence during production was then outlined. A defined digital stimulus vector generates an analogue signal with the help of commercially available video-RAM cards. The digital result vectors from the LVDS receivers can then be checked against the original input and should be equal, neglecting least-significant bits. This test has to be done prior to encapsulation of the MCM, to allow the exchange of defective components. The testing will be set up at Hasec, the chosen assembly company.
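
A minimal sketch of the pass/fail comparison just described, assuming the result is accepted if it matches the stimulus to within a small number of least-significant bits; the tolerance of 1 count is an assumption for illustration.

    # Compare the digital result vector read back from the LVDS receivers
    # with the original stimulus vector, ignoring least-significant bits.
    def vectors_match(stimulus, result, lsb_tolerance=1):
        return len(stimulus) == len(result) and all(
            abs(s - r) <= lsb_tolerance for s, r in zip(stimulus, result))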

Preprocessor ROD functionality - Bernd Stelzer

The slides are available here (html and ps).

Bernd's presentation started with an outline of the ROD requirements. The differences were shown between the prototype –1, a CMC card within the 6U Heidelberg test system, and the prototype, which will be on a 9U board. One important difference is that in the test-system environment the interface from VME to the FPGA has been implemented via a dual-port memory; for the prototype this will be changed to direct access from VME to the FPGA. An SRAM will be added, which can be accessed from VME through the FPGA. The VME access, the programming model, monitoring tasks and the ATLAS DAQ event format were also outlined. A draft specification for the PPr-ROD prototype is in preparation and will be available soon. Performance tests and results on data transmission via S-link, building of ATLAS DAQ fragments and simple monitoring were then presented.

The question was raised whether there should be a PDR for the PPr-ROD. This PDR will now take place together with the PPM PDR at the beginning of 2001.

Preprocessor Module, system issues, timescale and milestones - Paul Hanke

The slides are available, together with the analogue cable talk above, here (html and ps). This talk was actually given on Friday morning, but is placed here because this is more logical.

Paul presented a new PPM block layout. A change might be that only one FPGA is needed for the read-out. A draft specification for the PPM is now available. It was reported that the 60 MHz LVDS transmitter will have a smaller skew of 2 ns, compared with the 5 ns of the 40 MHz version. Two types of LFAN driver chip have been submitted for an MPW run: a voltage driver and a current driver. A draft specification can also be found on the web page of the ATLAS Heidelberg group. The KIP "home-brew" approach for a crate-controller CPU was then presented, with a cost estimate. In the current design the crate has 2×8 PPMs, with three slots in the centre holding the two RODs and a TCM; the CPU (possibly double-width) is at the left. The Preprocessor schedule was updated: the interdependence between the components, from PPr-ASIC to MCM to PPM, shifts the whole timescale.

 

FRIDAY, 3 NOVEMBER 2000, MORNING SESSION

Summary of Management Committee Meeting - Eric Eisenhandler

Some of the main points covered in this short meeting were:

  • Election of coordinator – After accepting nomination for another two years, Eric was unanimously re-elected.
     
  • Payment for prototypes of common modules – Prototype costs shall not be split as long as expenditure does not reach substantial amounts. The current example (TileCal receiver prototype) is non-CORE; it does not seem very expensive but if Stockholm needs financial help with it they should ask. (Big joint items are split equally between the six participating institutes, whereas common module costs are split between groups using them.)
     
  • Next joint meeting(s) – The next meeting is to take place in Birmingham on 1–3 March 2001, tentatively to be followed by Mainz on 28–30 June 2001. This is out of sequence since Stockholm is moving the institute in summer 2001. It was felt appropriate to have a meeting in Heidelberg in autumn (e.g. 11–13 October 2001), by which time the slice test should have taken shape.
     
  • AOB – With Paul Bright-Thomas leaving the Birmingham group, a new contact person to the TileCal must be found. Similarly, with the imminent retirement of Svante Berglund, the TileCal Receiver development must be followed up. No candidates could be nominated yet. The construction and running-in of a full system of TileCal receivers is still open. Contact shall be made with potential new groups who might be willing to take over this task.

DATA TRANSMISSION
(Minutes by Scott Talbot)

UK LVDS tests and new LVDS chips - Richard Staley

The slides are available here (pdf).

Earlier tests of the LVDS links using the TTC clock gave acceptable error rates of < 10^-12. However, the original de-serialisers used in these tests could only be used with a high-stability transmitter, cables which were over-compensated for HF loss, and a heavily filtered receiver supply. Now there are new LVDS chipsets available:

  • An improved 16–40 MHz device with a better timing margin of 450 ps (previously 100 ps), a lower threshold of 50 mV (100 mV) and a slightly larger power consumption of 191 mW (145 mW).
  • A 40–66 MHz device with the same specifications.

The latency of all chipsets is 1.75 ticks. So Richard no longer considers using the original device.

Several overnight tests with the faster LVDS device were done using 4 channels over 15 m AMP cable, 8 channels over 12 m Datwyler cable and 4 channels over 20 m Datwyler cable. 3×10^13 bits were sent over each link, and no errors at all were detected. A further test of 8 channels over two 15 m AMP cables with high VME activity reading the DSS status register was done and no errors were detected over 40 hours (5×10^13 bits per link).
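
As a rough cross-check of these figures (assuming the nominal 400 Mbit/s line rate of these LVDS links): 400×10^6 bit/s × 40 h × 3600 s/h ≈ 5.8×10^13 bits, consistent with the ~5×10^13 bits per link quoted, and zero errors in that many bits corresponds to a bit-error rate comfortably below the 10^-12 level mentioned earlier.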

The new improved LVDS chipsets do not need the high-stability 40 MHz transmitter clock, are tolerant of supply noise, and allow the cable equalisation to be relaxed. The cable compensation is no longer critical but it will still be done for safety. Although the links work using a pick 'n' mix of chipsets, we need to choose one set. The 40–66 MHz transmitter is not available in die form yet.

Future work will repeat the tests with a stressed system to determine the error boundaries (e.g. increasing cable length, reducing the amplitude of the signal or heating the boards). Also, an LVDS source module with fanout will be produced either using the Heidelberg LFAN chip, or Pericom Bus LVDS crosspoint/repeater device.

Richard confirmed that the timing skew is no longer an issue. The current cables are high density so it will not be possible to use cheaper cables, but there is now a clearer idea about the rack layout. As a result the slice test will use 10 m cables.

There is no indication that the current 16–40 MHz receiver will be upgraded and the present version withdrawn. The current transmitter does not need to be upgraded. 600 LVDS transmitters and receivers will be bought soon. The Heidelberg group will get an offer of the cost of a bulk quantity and will buy after the slice tests. As all the ADCs have already been bought, the LVDS chipset cannot simply be changed, since the footprints might be incompatible.

Refining the rack layout and cable lengths - Murrough Landon

The slides are available here (pdf).

There was nothing very different to report, and there had been no new input from technical coordination. As the cables are going to have the connectors fitted on site, we may be able to get away with fewer holes in the shielding wall. The holes will now be filled in with sand. The aim of the rack layout is to minimise the overall latency, by having the shortest possible cable paths, and to keep installation, use and maintenance easy. The CTP and TTC position is considered the most important, although the position of the Silicon RODs may now have priority as they do not use the TTC for signals. The digital cables between the PPM and CPM/JEM are less than 10 m long, so may now be less of an issue.

Constraints on this are the high fanout of equal-length cables between crates, and the calorimeter grounding rules, which mean that the receiver crates must be in separate racks. The suggested layout uses both floors of USA15: the first floor for the muon trigger, CTP and TTC, and the second floor for the calorimeter trigger in a contiguous central block of racks. The analogue cables will arrive through the shielding wall about 5 racks from the centre, so the receiver crates will be on the edges, with front-panel cables to the adjacent PPM racks. The CPM/JEM racks are in the centre. There may also be a central TTCvi and ROD rack. The CTP hits cables will go down through private holes in the floor.

The racks are 52U high (about 2.4 m) and have two centrally positioned 9U crates, with an upper cable tray about 30 cm above the rack for front panel cables only. There are three cable trays 40, 65 and 90 cm under the floor for both front and back cables. Between the receivers and PPM the cables run from front panel to front panel. The shortest cable length is 4.2 m, but this option would prevent replacement of the fan tray from the upper crates and it may be hard to lose the excess cable length. The longest route is about 9 m.

Between the PPMs and the CPMs/JEMs the cables run from backplane to backplane; we have to use the lower cable trays for this. Using the lowest tray gives a length of about 9 m. For the CMMs there is no decision yet on whether to use the front panel or the backplane. If the backplane is used, we could cut a small hole between adjacent CPM racks and use about 2 m of cable; otherwise, using the cable tray would need just over 4 m. The JEM mergers need 1 m of cable. Between the mergers and the CTP we need about 14 m of cable. For the G-links to the RODs, if the RODs are central the length is under 3 m, but if the RODs are at the edge then we need 12 m.

The total latency will be between 6 and 8 ticks. The TDR value of 6 ticks assumed only 2 m for cables to the CTP.
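
For orientation (assuming a typical cable propagation delay of about 5 ns/m and the 25 ns bunch-crossing period): the roughly 14 m run to the CTP quoted above, compared with the 2 m assumed in the TDR, adds about 12 m × 5 ns/m = 60 ns, i.e. roughly 2.5 ticks, which accounts for most of the difference between the 6- and 8-tick estimates.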

Outstanding issues are:

  • Leaving a central rack for RODs will slightly reduce latency, but leave no room for future trigger improvements.
  • The JEM crates are closer to the CTP than the CPM, but if the latency of the JEM is shorter this should be changed.
  • It would be easier if grounding permits the Barrel and Endcap receivers from one side to be in the same crate.
  • Do the cables from the Receiver modules to PPM need to be the same type as from the detector?
  • There is limited space at the front of the racks. As the cables will not have connectors fitted when installed, we may not need all three holes, and could shift the trigger racks to one side (reversing a previous decision) and put the calorimeter trigger above the CTP.
  • As the muon RPC trigger has the lowest latency, the CTP, MuCTPI and TTC should be more central.
  • Changes in the calorimeter trigger do not have much effect and can be left until later. If the USA15-end connectors are not fitted when the cables are bought, can we leave the decision about length until later? Check for developments on the shielding holes during the T/DAQ workshop.
  • What is the last date for changing USA15 rack layout?

Discussion:

There is no chance of more computer/desk space in USA15 than has already been assigned. Nick will follow up on the query of how much room there is for us on the surface.

There should not be any problem putting a limited number of cables through the floor. We should use the same cable type from the receiver as used from the calorimeters, so we need to buy the cables soon.

Should we make a mock-up of the racks to check cable layout? A full version would be difficult.

CLUSTER PROCESSOR
(Minutes by Scott Talbot)

Cluster Processor chip - Viraj Perera

The slides are available here (pdf).

The chip processes a 4 by 2 trigger window. It receives BCmux data at 160 MHz, captures and synchronises the data, performs de-multiplexing and error detection, and applies the trigger algorithm, whose results are sent to the CTP and level-2. The chip can hold a setup/diagnostic algorithm as a second configuration.

All logic blocks have been designed and the algorithm has been simulated and tested using a set of test vectors from Alan. Two vectors out of 1000 failed the test and are being investigated. The logic blocks are in the process of being integrated.

The target device is either the Xilinx XCV1000E or the XCV1600E. The design should fit into the 1000E with 79% CLB utilisation, or into the 1600E with roughly 62%. The high utilisation of the 1000E causes some worry that the chip might run slowly. It was suggested that if the next size of chip does not cost too much more, then we should think about using it anyway, since it will allow more flexibility.

The latency has been calculated from the individual block simulations to be 6±1 ticks, about half for synchronisation/demux and half for the algorithm.

The cost of the devices has decreased significantly in the last six months. The 1000E currently costs £582 (was £1190), the 1600E costs £777 (£1200) and the 100E for the Serialiser costs £20 (£100).

More test vectors, including BCmux test vectors, are needed to complete the design.

Testing the Serialiser and CP chip - Ian Brawn

The slides are available here (pdf).

The serialiser and CP chip are both going to be FPGAs. The device to be tested will not be the final design but is of the same family. The objectives of the test are:

  • To gain experience working with large devices and at 160 MHz.
  • To test the hardware performance compared to the simulation.
  • To demonstrate a working design.

The tests will be done on the Generic Test Module. The specifications of this module are on the UK web pages. This is a joint ATLAS/CMS module, and can use the Xilinx XCV600E–XCV2000E devices. The core of the receiver is built from already-designed blocks and the real-time path will be tested.

The CP chip will be tested on the same module. There are three FPGA designs needed:

  • Board control and VME decoder
  • CP test - this already exists.
  • Transmitter

The board was sent to the manufacturer on 20 October, but was returned because the manufacturer had upgraded their software and needed a different format. It is expected back at the end of November. The firmware will be completed by the time the board arrives. The VME design will be modified as necessary; the CP part is still to be done.

Cluster Processor Module status - Richard Staley

The slides are available here (pdf).

There was a PDR in July. Several points have been improved and the latest specification is available on the UK web pages. The design should be completed by the end of November 2000, the layout by the end of March 2001. Assembly will be during April 2001. It will be tested in Birmingham in May 2001. The CP system will then be tested in June 2001 and should be ready for the slice tests in July 2001.

An FDR is scheduled for early 2002. The module could be used in the final system if all works as designed.

The cables will be impedance compensated, with the G-links soldered directly onto the PCB. This package can easily be replaced with standard tools.

The backplane is packed full of signal-transmission pins, so there are not enough high-current pins available; hence the board will carry its own power converters. The 5 V supply is expected to draw 14.6 A and the 3.3 V supply 4.6 A, so these converters need to be rated above 20 A. A decision needs to be made on which type of converter to use. The total module power will be 90 W. It was suggested that part of the board be set aside as a copper area onto which the power cable could be soldered, to give a good connection.
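
As a quick consistency check of these numbers: 5 V × 14.6 A ≈ 73 W and 3.3 V × 4.6 A ≈ 15 W, about 88 W in total, consistent with the quoted total module power of 90 W.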

Changes from the PDR:

  • Originally EEPROMs were going to be used for the Xilinx configuration memories; however, flash memories will now be used, as these are bigger and cheaper. This causes some delay in loading, but that can be overcome by interleaving the process.
  • The number of configurations has been reduced from 3 to 2, as the third option seems to be very rarely used. The normal running code will always be on the board, and a trial/diagnostic version will only need to be loaded once.

CPM specification and nomenclature - Paul Bright-Thomas

The slides are available here (pdf).

Paul presented his version of the updated CPM specification:

  • TTC broadcast implemented in VME controller.
  • Readout simplified through Read-out Controller.
  • Latency and power estimates have been revised.
  • CP chip and Serialiser are to be configured by flash memory, not EEPROM.
  • Only two configurations resident on the board.
  • Real-time error counting and signal labelling have been defined.
  • Programming model has been revised.
Paul then went on to describe a consistent naming convention for the CPM; he needs to confer with Jürgen to agree a compatible scheme for the JEM.

Timescale and milestones - Tony Gillman

The slides are available here (pdf).

There will be an informal schematic review of the CPM at the end of November. The CPM should be available in April followed by a stage-1 test in the UK and then the slice test in Heidelberg. The GTM will be available at the beginning of next year and the CP FPGA around April. Everything is currently running together, but a couple of months behind schedule.

There will be a design iteration after the slice test, and the CPM FDR is scheduled for March 2002.

Four CPMs (the minimum needed to fully populate a module) will be built for the slice test.

JET/ENERGY-SUM PROCESSOR
(Minutes by Jürgen Thomas)

JEM status, and JEP timescale and milestones - Uli Schäfer

The slides are available here (pdf).

Uli gave a status report on the Jet/Energy-sum Processor system, which will consist of two crates, each covering two phi quadrants. A change is that the two quadrants in a crate are now opposite in phi, to help prevent energy-sum overflows. One quadrant is processed by eight Jet/Energy Modules (JEMs). Each crate hosts two Common Merger Modules (CMMs), one for jet threshold processing and one for energy sums. The data of the first crate are fed into the second crate's CMMs for top-level merging, before being transmitted to the CTP. The functionality of the modules was listed in detail, covering the real-time data path of the JEM and the CMMs, and the DAQ, RoI, timing and diagnostic paths. Saturation in the adder trees will be handled by setting the output to an all-ones state if any input is saturated.
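
A minimal sketch of that saturation behaviour; the 8-bit word width is an assumption chosen only for illustration.

    # If any input to the adder tree is already at full scale, force the
    # output to the all-ones value; otherwise clip the sum at full scale.
    def saturating_sum(inputs, width=8):
        full_scale = (1 << width) - 1
        if any(x >= full_scale for x in inputs):
            return full_scale          # propagate saturation as all ones
        return min(sum(inputs), full_scale)

    # Example: saturating_sum([3, 7, 255]) == 255 for 8-bit inputs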

The JEM architecture has been changed in order to treat FCAL signals correctly without the need for a special FCAL JEM module. This will be done by merging the previously separate processor FPGAs for the jet and energy-sum algorithms into one FPGA. The input signals for this processor will be 80 MHz multiplexed 5-bit jet elements. The cable mapping onto the InputFPGA has also been changed because of the treatment of the FCAL signals. A schematic was shown for the cable inputs of the barrel calorimeter, endcap and FCAL channels. This mapping will allow the use of 4-pair cables for nearly all signals. Remapping of the FCALs to a 0.2 × 0.2 jet-element geometry will increase latency.

A prototype module with JEM functionality is currently being designed at Mainz; a floorplan and implementation details were shown. A latency overview for the missing-ET real-time path was given. Now that implementation work has progressed and timing simulations have been done, an increase in latency in the InputFPGA has become apparent. However, there is room for saving latency in later stages, especially in the Common Merger Module, for which a latency-optimised proposal covering both design and algorithm has been presented. The PDR document on the JEM-0 is currently being written, and will be ready for a review before the end of 2000. For the module currently being designed at Mainz, the specification will be followed as closely as possible; however, it cannot be guaranteed to include all modifications coming out of the review, since the timescale of this module had to be adapted to the availability of human resources in the institutes working on the JEP.

Discussion:

Eric asked if the current prototype module is really much different from a JEM-0. Uli answered that there are some differences due to the choice of parts, DCS and TTC, and that it is currently planned to have a depth of 30 cm for cost reasons. However, it was stated that other prototype modules, such as the CPM prototype, are also not strictly "module-0". It was also pointed out that the JEM nomenclature scheme will be synchronised with the CP system labelling by Paul B-T. Nick asked if the implementation of FCAL triggers is possible with the current specification. Uli answered that this would only need more 2-channel cables, and that otherwise the module is prepared; however, the 25 bits carrying jet multiplicities would have to be rearranged, which would be no technical problem in the FPGAs.

Implementation of energy summation algorithms - Carsten Nöding

The slides are available here (pdf).

Carsten reported on recent work on the implementation of the energy summation algorithms for the InputFPGA and the Jet/Energy-sum FPGA on the JEM. It was done in VHDL using Mentor FPGA Advantage 2000. Simulations of the real-time data path and the DAQ part have been carried out using VHDL model testbenches; random numbers have been used as input data, but it is also possible to use test vectors read in from ASCII files.

The modified data flow in the InputFPGA was shown for the current JEM design presented in Uli's talk. For the InputFPGA, the simulation showed a latency of 4 BCs for the current target device, a Xilinx Spartan2-150. Data-input synchronisation will be adapted from the CP Serialiser. The DAQ readout path has been implemented using BlockRAM. The size of the playback and spy memories will be limited by the resources remaining in the device. The Jet/Energy-sum FPGA real-time data path has been adapted to the current design, with just one FPGA for both the energy-sum and jet algorithms. RTL (register transfer level) and gate-level simulations have been carried out, and showed the algorithm working within 1.5 BCs.

Work on the algorithms will continue, including the processing of FCAL signals, completion of the control path, and a top-level design for the now-combined jet and energy-sum algorithms in one FPGA. Switching from LUTs to BlockRAMs for the Ex, Ey conversion will also be tried, which could save latency.

Nick pointed out that more care has to be taken when calculating the effect of synchronising delays in order to avoid double-counting.

 

FRIDAY, 3 NOVEMBER 2000, AFTERNOON SESSION

COMMON MODULES AND BACKPLANE
(Minutes by Jürgen Thomas)

Common Merger Module specifications - Norman Gee

The slides are available here (pdf).

Note that there was some overlap between Uli's talk on the JEM and this one. Uli's last few slides are relevant to the CMM.

Norman reported the status of the Common Merger Module algorithms and specification document. The jet and cluster final summation is unchanged since the last meeting, whereas the energy final summation has been modified to use a 6+2 quad-linear compressed scheme, which allows much smaller LUTs. The transmission from the crate CMMs to the top-level system CMM was also planned to use quad-linear encoding; however, Uli's recent proposal to save latency will be incorporated, so this will be modified.
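
As a loose illustration of what a "6+2 quad-linear" compression could look like: a 2-bit code selects one of four linear ranges and a 6-bit field gives the value within that range. The range boundaries and scale factors in the sketch below are assumptions for illustration only, not the CMM specification.

    # Purely illustrative 6+2 quad-linear coding: 2-bit range code plus
    # 6-bit mantissa. The per-range scale factors are invented.
    RANGE_SHIFTS = (0, 2, 4, 6)          # each range 4x coarser than the last

    def quad_linear_encode(value):
        for code, shift in enumerate(RANGE_SHIFTS):
            if (value >> shift) < 64:    # fits in 6 bits at this scale
                return (code << 6) | (value >> shift)
        return (3 << 6) | 63             # saturate in the top range

    def quad_linear_decode(word):
        code, mantissa = word >> 6, word & 0x3F
        return mantissa << RANGE_SHIFTS[code]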

A revised draft specification v2.0a was produced in October 2000 and is nearly ready for the review. It is available on the Modules web page. In order to process FCAL triggers, some bits can be picked out by routing blocks, which is implemented with a special block on the CMM. The firmware for the various functions will be produced by the respective institutes: RAL will implement the CP merging functionality, while the jet and energy-sum functions will be done by the Mainz and Stockholm groups.

CP/JEP ROD prototype - Viraj Perera

The slides are available here (pdf).

Viraj presented recent work on the ROD prototype. A 6U module has been produced which receives 4 channels via G-links on a CMC daughter-card. It formats and buffers the data, reads them out via an S-link at 160 Mbyte/s sited on an off-the-shelf S-link card, and in addition stores the data in a spy memory. All processing steps are carried out by FPGAs; the firmware has been adapted to handle data from the CP Serialiser. The same module handles RoI data using different firmware. The module can host a TTCdec card to interface with the TTC system. In order to read out DAQ data from the 20 Serialisers on the CPM, a ROD with 20 G-link input channels and one S-link daughter-card is needed. Readout of the 8 CP chips needs 16 G-links and two S-link cards on the ROD, one sending RoIs and one sending DAQ data. Viraj showed the formats of the incoming Serialiser and CP data and the outgoing DAQ data.

The status as of 2 Nov. 2000 is that four modules have been manufactured, two of which have been fully assembled and boundary scanned. One module is currently under test. The test setup uses a DSS module equipped with a G-link Tx daughter-card providing input data for the prototype ROD, and an S-link Rx daughtercard for reading out. The DSS sends fixed patterns to the prototype ROD via G-link, which are then received, formatted, written into FIFOs, and sent back to the DSS emulating a ROB via the S-link. The data samples imitating events have been received correctly on the DSS and have been stored to spy buffers.

The tests will continue with the TTC system running, application of 'busy' signals and RoI readout. The second module will be set up for test and the further two modules will be assembled. Also the monitoring functions will be completed.

Viraj also mentioned a tool by Xilinx named 'Chipscope', which allows monitoring of single signals inside an FPGA and is especially useful when using BGA packages. Norman asked who will do the firmware for the JEP ROD, since although there are many similarities it has to be adapted to the JEP system. Sam answered that this will be done by the JEP group, which is made easier by the fact that a common design tool (Mentor Renoir) is used.

Timing Control Module and TTC situation - Bob Hatley

The slides are available here (pdf).

Bob presented the status of the TCM. It incorporates three functions: distributing TTC signals in electrical form to the modules, providing a link to the DCS, and displaying VME activity in the crate. It is compatible with all three processor types (PPr, CP, JEP) through the use of two adapter link cards fitting the two different backplanes. It is divided into a timing section, which receives optical signals and converts them into ECL, and a control section hosting a CAN node and fanning out CAN signals. The VME display will have both LEDs and hex front-panel displays. The functionality of the TCM has now been specified completely; the schematic design is in progress but is awaiting some crate parameters, and the same applies to the electrical and mechanical specifications. The RAL drawing office is available as of the end of November, and the PCB layout will take an estimated 6 weeks. The functionality of the adapter link cards has been specified, but the schematics have not yet been started.

We discussed how geographical addressing is applied on the TCM; this is currently not foreseen. Murrough asked how the CAN node addresses are assigned. Norman said that it is unclear how ATLAS assigns CAN node addresses.

Bob then reported on the TTCrx production and packaging situation. He showed a statement by Ph. Farthouat, in which he explained that there have been two versions of the TTCrx which will come back from production in Dec 2000, and afterwards will be shipped for packaging. One version will have a different receiver diode, hopefully curing the loss of data due to SEUs. The package has changed to a 144-pin fine-pitch BGA, making it necessary to modify the layout of the TTCdec. Philippe said that we will get 30 TTCrx chips in March 2001.

Nick then explained the situation and problems with the TTCrx ASIC, which has seen a number of delays, of which the latest has been caused by the necessity to switch the packaging company. Uli emphasized that the JEP system also needs TTCrx for the system tests, and will include a footprint on the prototype, therefore details on the footprint are urgently needed for PCB layout. Nick said he will ask Philippe about this.

Bob then showed the personality card format and layout. This card is used to interface the 6U standard VME processor to the CP/JEP Common Backplane.

CP/JEP backplane - Sam Silverstein

The slides are available here (pdf).

Sam presented the status of the CP/JEP Common Backplane. A PDR specification document has existed for a while and has matured, but is not yet complete; a new draft will be released after the meeting. Most questions have been solved, with the exception of the CMM pinout and some VME issues. An option for a two-slot VME processor has been incorporated. The backplane will have no power lines, internal vias or active components, in order to minimise the possibility of failures. The PCB has 15 layers, of which 8 are signal layers and 7 ground. Sam showed schematic views of the FIO, merger and outer layers. He conducted a market survey, and it seems that a suitable manufacturer has been found: Bustronic Corp. (www.bustronic.com), which specialises in custom backplanes and welcomes small orders. They also do assembly, which they coordinate with external partners. They will soon be asked for a preliminary quote; the company gave a first estimate of about $3000 for a PCB of this complexity.

The Stockholm group has built a crate-mechanics demonstrator, which is 9U with a one-slot Compact-PCI backplane similar to the planned design. A dummy module PCB has been manufactured in order to test insertion and extraction forces; it is equipped with module handles. First tests were successful: insertion and extraction were straightforward, and 30 cycles showed no obvious wear on the handles. However, it is clear that the backplane will have to be braced. Regarding the timescale for the Common Backplane, the PDR will take place on 20 November at Mainz, and the layout will start soon after this is finished.

Eric suggested also asking other manufacturers for offers. Sam explained that it has generally been a problem to find interested companies, owing to the small number of PCBs. The expected cost was discussed: the ATLAS-UK costing gives £3000, and Sten quoted 50000 SKr for one backplane.

Eric asked whether the book could be closed on live insertion of modules. This was generally agreed. Statements by Chris Parkman also suggested not to adopt it.

Timescale and milestones - Tony Gillman

The slides are available here (pdf).

Tony showed the updated timescale and milestones for the common modules. For the stage-one CP prototype test taking place in Birmingham, the CPM, the common backplane, the TCM including the adapter card, the ROD prototype and the CMM will be needed. The CMM is the critical-path item; its PDR will take place in December 2000. He concluded that the current timescale shows that there will be no CMM modules available before June 2001. One very important milestone is the satisfactory completion of the slice test, which is planned for December 2001. Iterations of the modules have been planned, though they will probably (hopefully) not be necessary.

A list of upcoming PDRs was given: The Common Backplane PDR will take place in Mainz on 20 Nov., the PDR for the CMM on 11 Dec. at RAL or Mainz, maybe together with the JEM. The PDR for the PPM is foreseen to be in Jan. 2001 in Heidelberg, which may well also include the PPr-ROD after the discussion in the PPr session yesterday showed many similarities to the PPM.

Murrough mentioned that the personality card interfacing a 6U VME processor into the Common Backplane is also necessary for the slice test. Eric answered that this is indeed important and must be included in the timescale.

Re-defining our long-term milestones - Eric Eisenhandler

The slides are available here (pdf).

Eric emphasized that the long-term milestones need to be redefined. Various items mentioned in the list in the TDR have become outdated or obsolete, e.g. the CP-ASIC or CP-MCM, while other important items, such as the various common modules, are not mentioned. The list is no longer logical, since many items which were separate for the subsystems in the past are now synchronised because of the slice tests. Eric and Tony have written a new proposal for milestones, which everyone should look at within the next two weeks and which will afterwards be submitted to ATLAS. This does not at all mean that our project is late, but that many things have evolved and the milestones have to be fitted to those developments. The PRRs should be combined with the FDRs, which will be based on the results from the slice test, leading to an iteration decision for the modules.

Sten asked about the exact meaning of the PRR. Eric answered that it is intended to follow up the FDR, with the participation of ATLAS Technical Coordination, which has to be convinced that the questions coming out of the FDR have been resolved satisfactorily. Nick explained that PRRs are considered to be useful: a final check to prevent groups from spending money on useless devices. It is formal, but could be light if coupled with the FDR procedure. The PRR should also check compatibility with other systems.

Eric continued explaining that the CP/JEP ROD and the TCM need PDRs since there will be true prototype (non mod-0) versions in the slice test in 2001. Those iterations will eventually need more slice tests.

Murrough mentioned that the TileCal Receivers also have to be included, Eric answered that this will be done.

It was also agreed that our collaboration wants to help T/DAQ, which has a TDR coming up, but that we're very short of effort and can't allow disruption of our time schedule. We also discussed whether software milestones would make sense, but did not reach a conclusion so this will be pursued.

Our new milestones could be handed in to ATLAS together with those of HLT/DAQ/DCS at the end of November, but this was thought to be quite tight. Ralf emphasized that integration with HLT/DAQ/DCS is missing from the milestones. Again it was said that integration tests, e.g. with the RoI Builder in March 2002, could fit into our schedule, but that those issues must not interfere with our internal plans.

Summary of DCS Workshop - Uli Schäfer

The slides are available here (pdf).

Uli reported on the recent DCS workshop at CERN. The standard DCS hardware is the credit-card-sized ELMB board. It has 8 or 64 ADC channels, JTAG and I2C interfaces, a full CANbus interface and two processors. It costs 150 or 250 CHF for the 8- or 64-channel version. All software and firmware is provided by the DCS group. This may be useful to us, and Uli proposes to use it on the JEM.

The commercial SCADA system for all LHC experiments has now been chosen: PVSS, from the Austrian company ETM. It is free to us, but training is expensive (CHF 1000 per week of courses). Subdetectors must provide the software connection to SCADA using the CANopen protocol, unless they use the standard ELMB. The hardware is connected to Windows NT machines, but Linux can be used for higher-level control. Work to define the DCS–DAQ connection is going on, leading to a draft URD.

What should we do? Either use the standard ELMB, or use our own CAN hardware (as presently proposed, via the TCM to e.g. the CPMs) and develop our own software. Paul Hanke wondered whether we could ask for a physically smaller version, perhaps with fewer than 64 ADC channels, since the two options appear to be "overkill" and "underkill". The use or not of the ELMB is a trade-off between cost (least important), board area (the ELMB seems too big for what we need) and effort (in very short supply, where the ELMB would help). [In the brainstorming it was proposed to make a list of what we actually want to monitor on each module.]

 

SATURDAY, 4 NOVEMBER 2000, MORNING SESSION

GENERAL ITEMS
(Minutes by Murrough Landon)

Integration and tests with level-2, DAQ and CTP, and preparation for T/DAQ workshop 13–17 November - Norman Gee and Ralf Spiwoks

The slides are available here (pdf).

Norman gave what was billed as a "provocative" presentation. He outlined the proposals for various integrated tests of level-1 together with the RoI Builder (RoIB), the Readout System, and later with the integrated DAQ/HLT slice. He asked why we should bother, given that we aim to fully test all our modules and associated software before sending them off for joint tests, including stressing the system and operating near the expected margins. However, such tests will not check mutual understanding of our interfaces with other systems. Integration tests are also the only way to really find lurking architectural issues. We need to make these integration tests before designs are frozen.

Norman listed what resources are required: hardware, firmware, software, test vectors, people, and a calorimeter. What we need for RoIB and ROS (Readout System, formerly ROB) tests, we will need anyway (a bit later) for the slice tests. We will probably need 1–2 people for one week at CERN for the RoIB/ROS tests; maybe 2–3 people for six weeks for DAQ/HLT integration.

For the forthcoming TDAQ workshop at CERN, we have to decide on which tests we agree to join and on what timescales.

The discussion (summarised below) was long and involved. The outcome was that:

  • We will take part in ROD–RoIB tests in early 2001, with a ROD and DSS.
  • We will take part in ROD–ROS tests later in 2001, again with minimal hardware and effort on our side.
  • We aim to participate in DAQ–HLT integrated slice, but not before late summer 2002, and subject to a minimum set of tests (still to be defined) having been successfully passed in the internal L1Calo slice tests at Heidelberg.
  • We would strongly like a calorimeter present at the test beam to provide level-1 trigger towers. (We must not forget that we also have a very difficult job to integrate with the calorimeters.)

Summary of ROD Workshop - Norman Gee

The slides are available here (pdf).

Norman gave a summary of the ROD workshop held recently. Most of the talks at the workshop itself are available via the University of Geneva site (see Program).

All subdetector groups presented their RODs, and later talks covered common areas and software. Key points were:

  • BUSY module specification has been agreed.
  • Policy on resets proposed.
  • Very minimal VME64x subset suggested.
  • Various TTC interfaces are being developed.
  • Suggestion for ATLAS standard event time-stamp (may not be possible).
  • An outline specification for 9U × 400mm deep crates exists, but no delivery before Q3 of 2001. This is too late for many groups.
  • New TTCrx chips should arrive in December (after delays), TTCvi version II now made.
  • New faster S-link available, also as VHDL to incorporate onto RODs.
  • L1ID must be unique inside the readout system, which has implications for frequency of resets.
  • Information on resets, initialisation data volume and times etc. was collected - but not yet published on the web.
  • TileCal test beam was successful using DAQ –1 readout crate.
  • There will be a mini Readout crate implemented in a PC with S-link cards.
  • Work towards a DetDAQ implementation as a ROD-fragment collector, which looks to DAQ –1 like a ROB, was proposed.

SOFTWARE
(Minutes by Murrough Landon)

Work on DAQ software and DAQ –1 - Bruce Barnett and Scott Talbot

The slides are available here (pdf).

Bruce described his work on the DAQ system. The goal has been to produce a new DAQ system, integrated with the tools provided by HDMC, using an OO design which should be extendable to the long term. Initially required for ROD tests, it could be used as the basis for the slice test DAQ.

The DAQ is based around our old buffer manager (PBM), with a producer program reading modules and storing events into the buffer manager and analyser program(s) for monitoring. The event format will be the ATLAS event format, with non-ROD data packaged to look like pseudo-ROD fragments. Scott (see below) is writing an event dump for our events. For the DAQ configuration database we use HDMC parts files. We still need to complete the work of integrating the software which has been developed into the DAQ –1 run control framework. More work is also needed to produce test vectors.

Event dump:  Scott supplemented Bruce's talk by showing the ATLAS event structure. He has developed a program to read events, navigate among the levels of event fragment, and display the event structure. More work on decoding the data contents of the ROD fragment would be useful.

Software planning for the slice tests - Murrough Landon

The slides are available here (pdf).

Murrough gave a short recapitulation of his similar talk during the software meeting. It has been difficult to define software milestones, as these usually depend on the hardware schedules, which have proved to be optimistic. Different decisions on the direction of DAQ development might have been taken if the present timescales had been clear.

We have three possible development paths for readout software for the slice tests:

  • Continue our private developments along the lines of Bruce's talk.
  • Push for the new DetDAQ to have multi-ROD-crate event building.
  • Buy lots of S-link-PCI cards and use the mini Readout crate in a PC.
We must decide which of these routes to invest our limited effort in.

There is a large amount of software that we require for the slice tests. This includes:

  • The local run controllers to initialise each crate.
  • Readout software for RODs and other modules (in some framework).
  • Monitoring programs...
  • ...including online simulation of the trigger.
  • Generating test vectors.
  • Calibration programs.
  • Offline analysis of stored data.
  • Definition of the database schema, data access library and tools.
  • Distributed histogramming.
  • Maybe something for DCS.
To carry out this work we have perhaps four people in the UK, possibly plus Cornelius in Heidelberg.

Lastly, Murrough mentioned the imminent TDAQ workshop at CERN. We have to present our requirements on the Online software (use cases), our assessment of the Online packages, and future plans.

SUMMARY

Summary of highlights and "lowlights" of the meeting - Eric Eisenhandler

The summary is available here (text), and was also circulated by e-mail.

This page was last updated on 16 December, 2013