The ATLAS Level-1 Calorimeter Trigger



ATLAS Level-1 Calorimeter Trigger
Joint Meeting at Heidelberg
14–16 March 2002

AGENDA

THURSDAY, 14 MARCH, MORNING SESSION

Technical software meeting

  • Organised by Murrough Landon. Minutes are available here and include
    the informal brainstorming session on Saturday afternoon.

THURSDAY, 14 MARCH, AFTERNOON SESSION

Physics simulation of the trigger

  • Calorimeter trigger offline simulation - Ed Moyse (pdf)
  • RoI-guided B-physics triggers - Alan Watson (pdf)

Calorimeter signals

  • TileCal summing amplifier gain - Eric Eisenhandler (pdf)
  • ATLAS System Status Overview actions - Eric Eisenhandler (pdf)
  • Rack layout - Murrough Landon (pdf)
  • Points of interest from talk at LAr week - Norman Gee (pdf)

Preprocessor

  • PPr-ASIC status and final parameters - Kambiz Mahboubi (pdf)
  • PPr-MCM status and MCM/ASIC test plans - Werner Hinderer (pdf)
  • Analogue inputs and software for MCM tests - Karsten Penno (pdf)
  • REM FPGA status - Dominique Kaiser (pdf)
  • Status of PPM and other Preprocessor items - Paul Hanke (pdf)

THURSDAY, 14 MARCH, EVENING

Management Committee meeting

  • Chaired by Eric Eisenhandler - summary presented on Friday morning

FRIDAY, 15 MARCH, MORNING SESSION

  • Tour of the new KIP building and slice-test area - Heidelberg Group
  • Summary of Management Committee meeting - Eric Eisenhandler

Cluster Processor

  • Cluster Processor FPGA tests using Generic Test Module - Viraj Perera (pdf)
  • Cluster Processor Module and test cards: status and test plans - Gilles Mahout (pdf)

Jet/Energy-sum Processor

  • JEM status, firmware, plans and timescale - Uli Schäfer (pdf)
  • Work on jet algorithm and JEM simulation - Sam Silverstein

FRIDAY, 15 MARCH, AFTERNOON SESSION

Common modules and backplane

  • Common Merger Module, adapter modules and test cards: status and test plans - Tony Gillman (pdf)
  • CP/JEP backplane and crate status - Sam Silverstein (pdf)
  • CP/JEP ROD prototype test status - Bruce Barnett (pdf)
  • Timing Control Module and VME Mount Module status - Tony Gillman (pdf)

DCS etc.

  • Report on DCS FDR/PRR - Uli Schäfer (pdf)
  • Fujitsu CANbus status and other options - David Mills (pdf)
  • FPGA damage from incorrect code - Viraj Perera (pdf)

Central Trigger Processor

  • Integration of CTPd, and final CTP design - Ralf Spiwoks (pdf)
  • Trigger menus and trigger configuration - Thomas Schoerner-Sadenius (pdf)
  • CTP simulation - Thomas Schoerner-Sadenius (pdf)

SATURDAY, 16 MARCH, MORNING SESSION

Online software

  • Overview, databases and run control - Murrough Landon (pdf)
  • Module services, ROS and system management - Bruce Barnett (pdf)
  • Envisaging slice-test DAQ - Oliver Nix (pdf)
  • Simulation, test vectors and test organisation - Stephen Hillier (pdf)
  • Readout, DIG, and RoI-builder issues - Norman Gee (pdf)
  • ROD-crate DAQ - Ralf Spiwoks (pdf)

Slice test planning

  • Slice test organisation and timescale - Tony Gillman (pdf)

Summary

  • Main issues, highlights & lowlights of the meeting - Eric Eisenhandler (pdf)


MINUTES

These minutes are based on summaries furnished by the individual speakers. Some material on discussions has been added by Eric Eisenhandler. All slide files are in pdf format.

THURSDAY, 14 MARCH 2002, AFTERNOON SESSION

PHYSICS SIMULATION OF THE TRIGGER

Calorimeter trigger offline simulation - Ed Moyse (slides)

Ed has moved all his code to CMT, the new ATLAS software management tool. This was a fairly convoluted process, made harder by lack of documentation.

The status of the TrigT1Calo simulation is as follows: the em/tau trigger is functional and can accept information from several inputs; the jet trigger is nearing completion; and the energy-sum triggers have not yet been started, but should be quick to implement.

Following the CERN Level-1/HLT integration meeting, work has started on a ROD simulation for RoIs, and it has been established that interfacing TrigT1Calo with Thomas' trigger menu will be straightforward, but is a very important step.

In discussion, Kambiz said that he is working to improve the fast detector simulation. It will include noise on trigger tower signals.

RoI-guided B-physics triggers - Alan Watson (slides)

ATLAS B-physics plans have long relied on the ability to do a full TRT track scan on muon-triggered events. TDAQ deferrals of the scale currently suggested would threaten this. One possible way of retaining some of our B-physics potential would be to use low-ET calorimeter RoIs to guide level-2 track scans.

The ATLFAST-based calorimeter trigger simulation has been used to investigate the potential for level-1 RoIs to assist in B-physics. Two applications have been considered:

  • Using e/gamma RoIs to guide search for electrons from B decays (B -> e nu X and B -> J/Psi X -> e+ e– X).
  • Using jet RoIs to guide track searches for hadronic modes (B -> pi+ pi–, Bs -> Ds phi).

In each case, a 6 GeV muon would be used to trigger the event. Initial results suggest that with suitably low thresholds, useful efficiency can be achieved for reasonable RoI multiplicities. (The e.m. thresholds are typically a few GeV, and the jet thresholds are typically 15-20 GeV.) Follow-up studies using a full detector model are needed to confirm these results.

CALORIMETER SIGNALS

TileCal summing amplifier gain - Eric Eisenhandler (slides)

Following up on the FDR/PRR for the TileCal trigger summing amplifiers, Eric recalled that we had asked for a somewhat lower gain of about 7 rather than 8 in order to ensure that the extended barrel energy signals did not turn out to saturate below 256 GeV when converted to transverse energy. Rupert Leitner has produced some simulations, first for 1 TeV jets and single pions, and then at our request for more realistic 150 GeV jets and single pions. These show the effect of the large amount of dead material in the TileCal as we move to increasing values of eta. The plots show his results, which demonstrate clearly that a large amount of signal is lost in dead material and that the resulting TileCal energy signal is almost flat with eta, rather than increasing as 1/sin(theta) as expected. The message is that the gain of the amplifiers is not a problem, and will not be altered.
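
For clarity (a note added here, not part of the talk), the 1/sin(theta) scaling is just the familiar relation between the energy deposited in a tower and its transverse energy, which in terms of pseudorapidity reads

    E = E_T / \sin\theta = E_T \cosh\eta

so for a fixed transverse-energy scale, such as the 256 GeV saturation point, the energy a tower has to handle grows roughly as cosh(eta) towards the extended barrel; the simulations show that losses in dead material largely cancel this growth.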

ATLAS System Status Overview actions - Eric Eisenhandler (slides)

Eric showed the list of actions from the ASSO relevant to the calorimeter trigger. We have to document the connections from the calorimeters to the Preprocessor, via the receivers, for both LAr and Tile calorimeters. Murrough and Steve have volunteered to do this, but it is a big job and anyone who wishes to help would be more than welcome. Both calorimeters should name contact people for calibration purposes; LAr have named someone informally but not confirmed it, and TileCal have yet to do so. Our calibration contact person is Thomas Trefzger. Responsibility for the LAr receivers (presumably Bill Cleland and Pittsburgh) needs to be confirmed, and for the Tile receivers we hope it will also be Pittsburgh and new people there to be designated. We will need to specify the TileCal receivers, although this is not a formal action on us.

Rack layout - Murrough Landon (slides)

Murrough reported that our preferred rack layout has been accepted at the recent rack allocation meeting at CERN. The rack layout on the floor below us has been optimised to place the CTP almost directly under our JEP and CP racks.

We now have to consider layout within the racks, in particular cabling between receiver and Preprocessor racks. If we decide to use patch panels rather than special "octopus" cables for the transition and outermost eta regions we will need space for them in the racks, which may be uncomfortable. In discussion, Paul Hanke said that patch panels are nicer but need more effort.

Points of interest from talk at LAr week - Norman Gee (slides)

Norman had described the level-1 trigger in an invited talk at a recent LAr plenary. The talk provided an overview, described the algorithms and hardware implementation, and discussed calibration, software, and the project plans.

On calibration, he proposed that we set up a small group (in order to cover the different areas of expertise needed) to work with the calorimeter teams and decide how the system should run. Our needs include analogue pulse timing (phase and pipeline delay), pulse shape (for BCID coefficients), energy calibration (tower-builder delays, gains, and PPr LUT values), and checks of tower-builder integrity. DIG presentations indicate that the LAr team have a proposed scheme which does not appear to address our needs. The small group should also explore options for an integration test with calorimeters, either in a test beam or equivalent; a likely time might be after the middle of 2003.

PREPROCESSOR

PPr-ASIC status and final parameters - Kambiz Mahboubi (slides)

The ASIC was submitted with final design data for fabrication at the end of January. Before that, intensive simulation studies were done to check all areas, especially including the readout area, in great detail. These timing studies were performed at the netlist level as well as on the level of "sdf" files, where real physical layout is known (e.g. wire lengths). Some of the fixes included:

  • the asynchronous memory reset
  • the alignment of the BCID decision-bits with the BCID data byte
  • a programmable delay for the inclusion of the saturated BCID algorithm’s result
  • widening of the time interval in case FADC data are latched with the negative clock edge

This last modification is the only modification on the real-time data-path.

Test-vector generation for gate-level tests is not possible with our software installation. But functional tests will be carried out with available inputs.

Documentation has undergone major brushing-up, so that the documentation base should enable us to go through PRR procedures:

  • All source files are in the CVS repository
  • All netlist and layout data of the chip are documented as well
  • The User Manual needs approximately one more month of work

The ASIC has meanwhile come back from AMS (the manufacturer). We have 4 wafers (160 ASICs per wafer), one of which has been diced. In discussion, Kambiz said that the latency is as originally planned.

PPr-MCM status and MCM/ASIC test plans - Werner Hinderer (slides)

Due to the high power dissipation of the 4-channel ASIC (2.5-3.0 W) the MCM layout had to be re-designed. Thermal vias are now used under all nine chip dies on the MCM as a safeguard. In addition, minor changes were made, such as better fixing to the PPM board by means of screws, which proved to be the most compact and also most reliable solution. The routing of the FADC inputs and the corresponding "external BCID" inputs has been re-ordered to avoid cross-over routing on the PPM. Six MCMs of this new version will be delivered by Würth Elektronik at the end of March.

The hardware components of the MCM test system were shown. The only hardware component still missing is the MCM test-board, whose layout is currently being designed. A lot of effort is needed for the software of this test system: three FPGA designs have to be written, and HDMC has to be modified so that it can compare the test data with simulation data and configure the test system.

The planned setup for a functional ASIC test was shown. An adapter card and a needle-probe card have to be developed. The printed-circuit board for MCM/ASIC tests was shown; it allows mounting on the wafer-probe station, so ASIC wafer tests can be performed with a functioning "MCM-master". The test-board should be ready for manufacturing in two weeks.

Documentation for the MCM is effectively Werner's thesis, but that is in German. In discussion, Norman confirmed that the 40 MHz clock for the analogue video inputs cannot be synchronised to an external clock.

Analogue inputs and software for MCM tests - Karsten Penno (slides)

Displays of "calorimeter pulses" were shown as "lines of intensity" on a video screen as well as on an oscilloscope. Real pulses from test-beams and analytical functions describing the pulse can be replayed this way. Synchronising the clocking of video output with a digitizing LHC clock should be achievable for tests. Synchronisation of several video cards on a PCIbus is a different (and more difficult) matter. It is probably not necessary. Analogue tests can be done with group of channels sequentially (e.g. 6 = 2*RGB from a dual-head card).

In preparation for MCM/ASIC tests, FPGA code for ModularVME mezzanine cards is being developed.

REM FPGA status - Dominique Kaiser (slides)

An overview of the ReM_FPGA code was given; this is nearing completion. This central readout device of the PPM has to talk to many items via several interfaces: 4*SPI to the AnIn DACs, I2C to the Phos4 and TTCdec, 32 serial interfaces to the ASICs, plus VME and the PipelineBus. The data compression implemented acts on the FADC data as well as on the LUT result. A debug mode can be switched on to check for faulty data on the readout path. The code has been simulated ("MODSIMulated") but has not yet been implemented on a real device; the target is an XCV1000E, in which 40% of the CLB resources and 90% of the memory resources would be used. This has all been documented.

There is an improvement in the compression that now allows one LUT slice plus five raw data slices to be read out at 100 kHz. This needs 227 MB/s out of a possible 240 MB/s on the PipelineBus using a 60 MHz clock. The main reason for the improvement is the removal of some unnecessary flag bits. Each PipelineBus will need two S-links.
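
As a rough cross-check (added here; the bus width is an assumption, not stated in the talk), these figures are consistent with a 32-bit PipelineBus:

    60\,\mathrm{MHz} \times 4\,\mathrm{bytes} = 240\,\mathrm{MB/s}, \qquad
    227\,\mathrm{MB/s} \,/\, 100\,\mathrm{kHz} \approx 2.3\,\mathrm{kB\ per\ event\ on\ the\ bus}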

In discussion, we said that there is not a hard line between what we must be capable of reading out all the time and what extra data we might need for diagnostics that might be permitted to slow the rate down. Dominique said that the ReM_FPGA code was almost done, but the ROD FPGA code would need to be modified and documented by someone else. Nick expressed some concern at the use of a clock rate that is not a multiple of 40 MHz.

Status of PPM and other Preprocessor items - Paul Hanke (slides)

This summing-up presented a real layout floorplan of the PPM. The status of development of the daughterboards (plug-ons) was also given, along with a schedule leading to a fully equipped PPM. A very large portion of the real-time functionality is implemented on the so-called "plug-ons", hence a PPM PCB without these is far from complete for slice-testing. The priority of the work lies with the "plug-ons": AnIn (exists), MCM (with PPrASIC), and LVDS output using an XCV50E on a daughter card. The main-board layout will be carried out in parallel with those tests. The custom part of the backplane for LVDS has been received from TreNew. The TCM adapter is proceeding and should be ready by August. It should also be noted that KIP is moving house to the new campus in the course of the summer. Therefore, the timescale for a working PPM reaches into the autumn.

Software is needed for comparing output vs. input on the real-time data path; it is not currently clear where the effort for this work will come from. The tests will start using the prototype ROD; the final ROD is mostly a copy of part of the PPM (due to how the PipelineBus works) and might be ready by the end of this year.

FRIDAY, 15 MARCH 2002, MORNING SESSION

Summary of Management Committee Meeting - Eric Eisenhandler

The meeting was fairly short. The main items covered were:

  • CORE spending already done and estimates for the future have been submitted to Nick for the RRB.
  • The state of ATLAS budgets in the three countries was summarised briefly. The calorimeter trigger does not have any big new problems.
  • The hardware status telephone conferences are useful, but so far there have only been two. We should aim to do this once per month.
  • Our contact person for calorimeter calibration systems is Thomas Trefzger. A small working group with a wide spread of expertise should be set up to formulate our requirements and to start to define how we need to drive these systems. Suggestions of people for this to Eric, please.
  • A few people (initially Eric, Norman and Tony; others welcome) will try to think of ways to de-scope or stage the slice tests in order to optimise the use of scarce online software writing effort.
  • The next meeting will take place in Stockholm, 4-6 July. The following one is tentatively planned for Birmingham in November. NOTE ADDED: dates are 7-9 November.

CLUSTER PROCESSOR

Cluster Processor FPGA tests using Generic Test Module - Tony Gillman for Viraj Perera (slides)

The CP chip will be implemented in a Xilinx XCV1000E-6BG560. It will receive 108 x 160-Mbit/s BC-multiplexed serial data streams and process a 4 x 2 x 2 trigger-tower window. The serial data are captured and synchronised to the on-chip clock, then BC-demultiplexed before entering the e/gamma and tau/hadron algorithm block to find the cluster hits and the RoIs. The RoIs are read out on each L1A signal.
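
For scale (a back-of-envelope note added here, not from the talk), the quoted link count and speed correspond to

    108 \times 160\,\mathrm{Mbit/s} \approx 17.3\,\mathrm{Gbit/s\ per\ chip}, \qquad
    160\,\mathrm{Mbit/s} \,/\, 40\,\mathrm{MHz} = 4\ \mathrm{bits\ per\ stream\ per\ bunch\ crossing}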

The GTM is similar in architecture to the DSS module, with memory blocks interfaced to an XCV600E and an XCV1600E, and interconnect between the Xilinx devices and CMC connectors.

In one test the CP chip firmware was loaded onto the XCV1600E and the test data were loaded into the internal RAM of the XCV600E. All 108 160-Mbit/s serial data streams were received and latched into the CP chip by automatically selecting the clock phase. These tests helped to debug a number of minor problems and required the 160 MHz serial-to-parallel converters to be "hand placed". In the second test, the test data were loaded into the memory blocks and manipulated by firmware on the XCV600E to obtain the 160-Mbit/s data streams. The GTM was a useful aid to debugging the CP chip firmware and will save time when we get the CPM.

Cluster Processor Module and test cards: status and test plans - Gilles Mahout (slides)

The CPM schematic was sent out before Christmas. The PCB was back by the end of January and assembled during February. After visual inspection, the board was sent back to the assembler due to a problem with the power connector. The board is now back at RAL. Once the backplane is there, the board will be powered up and checked for shorts. Once the preliminary tests are finished and the FPGAs/PLDs have been loaded, the quality of the clocks at different test-points on the board will be checked. VME access will then be started and the different logical device codes will be checked for correct operation.

The testbench will be composed of a PC running Red Hat Linux 7.2 and HDMC for hardware access. Early tests could be done with just one CPM, testing connectivity and calibration, by using the playback memory of the FPGA Serialiser. The quality of the fanned-out signals coming from the backplane will be checked with the help of a small card pluggable into one backplane connector. This card has not been designed yet, but it should be noted that due to the non-modularity of the backplane connectors, only a small group of signals can be probed. At this stage, the CP algorithm and Serialiser code cannot be fully tested; an additional crate holding DSS cards is necessary to deliver external data.

In addition to the previous small card, a similar card, pluggable into the upper or lower end of the CMM slot, has been designed. It will retrieve multiplicity data coming from the backplane and send them to a DSS; the latter needs to be populated with a General I/O card under development at RAL. A "pseudo-realtime" data path between CPM and CMM could then be tested. In a similar way, a CPM emulator card has been designed and built to help in testing the CMM.

The serialisers could be tested, but four CPMs will be required if we want to test the CP algorithm. Once the CP algorithm has been probed, the last stage will be to test the time-slice data with the help of a ROD.

The firmware needed for the CPM is nearly complete. A model of the CP chip has been sent to Birmingham in order to understand the logic of the chip. It will help debugging if a problem occurs in the CP chip, and corrections could be made quickly by James. From the software point of view, hardware access will be performed with HDMC. The required parts file for the CPM is nearly written; it uses the submodule architecture for the CP and serialiser elements. A start has been made on the module services, but a lot more still needs to be done.

JET/ENERGY-SUM PROCESSOR

JEM status, firmware, plans and timescale - Uli Schäfer (slides)

Uli began with a JEM overview, showing a block diagram and a photograph of JEM0. He recounted the history of JEM0.0 module production, the problems encountered before November 2001, and the more recent success of the PCB production run and module assembly (module JEM0.1) at Rohde & Schwarz. He described the JEM hardware status and a task list for production of modules with full "module-0" specifications for use in the slice test. Work on the TTC and DCS interfaces is required, and an FPGA upgrade from XCV600E to XCV1000E is being considered.

On the firmware (and for Andrea) he gave a JEM trigger algorithm overview. The current latency estimate is ~4 ticks each in the input and main processor energy paths. The firmware status is: no change to the real-time code, with work going on for the VME interfaces and the readout controller FPGA.

The status of module tests is that Andrea is doing firmware tests on JEM0.0. Tests of the real-time data path with DSS-generated patterns by Thomas are starting on JEM0.0. Hardware tests (boundary scan) are being done by Bruno on JEM0.1.

The next JEM iteration will need to improve boundary-scan testability; currently not all chips support this. The module size and BGA-pitch related issues seem to be getting less critical with experienced assembly companies. The timescale now shows no particular problems; four JEMs of module-0 specification are expected to be ready for the start of the slice test. A lot of work is required on the firmware. Design work for the new iteration might start early. On documentation, new specifications (version 0.8d) were posted in November.

In discussion, Tony said that in view of the delays to the slice test schedule, the design should certainly be iterated before more boards are made. Also, in view of the Preprocessor schedule, it might be more sensible to first bring the JEM to the UK so it could be tested with the CMM and the ROD.

Work on jet algorithm and JEM simulation - Sam Silverstein

Anders Ferm and Torbjörn Söderström have completed a jet algorithm for the JEM main processor FPGA as a senior thesis project. It is meant for the Virtex XCV600E devices on the prototype JEMs, so some possible final features such as separate FCAL jet handling and multiple sets of 8 jet thresholds for different values of eta were not included. The algorithm does allow free choice of 0.4, 0.6 or 0.8 cluster sizes for each of the eight jet definitions.

The algorithm has been successfully synthesised by Leonardo, and simulated at the algorithmic level using ModelSim. The latency of the algorithm is only four bunch-crossings, and total logic use in the XCV600E is under 50% without any memory blocks used, due to several innovative ideas by Anders and Torbjörn. Timing estimates by Leonardo show a comfortable margin for the 80 MHz clock that will drive the algorithm. However, allowing for FCAL etc. it would probably be safer to upgrade to the XCV1000E. Stockholm and Mainz are now preparing to merge the jet algorithm with the remainder of the main processor FPGA code.
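
For reference (a simple conversion added here, not from the talk), four bunch crossings correspond to

    4 \times 25\,\mathrm{ns} = 100\,\mathrm{ns} = 8\ \mathrm{ticks\ of\ the\ 80\,MHz\ clock}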

Åsa Oscarsson and Daniel Öijerholm-ström are writing a simulation of the JEM for their senior thesis this semester. The simulation will be based on the code for the CPM written by Steve Hillier, who has provided good support and documentation. The aim will be to partition the simulation according to the different FPGAs on the JEM, so that a piece of behavioral code will correspond to one configuration of an FPGA. Existing classes will be used wherever appropriate, and structure will be similar to the CPM code, to simplify future code maintenance and integration. Åsa and Daniel have so far "reverse engineered" the CPM code, including a complete set of class diagrams, and will soon begin to build the framework for the JEM.

FRIDAY, 15 MARCH 2002, AFTERNOON SESSION

COMMON MODULES AND BACKPLANE

Common Merger Module, adapter modules and test cards: status and test plans - Tony Gillman (slides)

One CMM (of 5 PCBs) has been assembled and is ready for JTAG tests. A design problem means one of the Merger FPGAs cannot be configured via Flash RAM, but will use JTAG. The remaining PCBs will be re-made to correct the problem. Electronics Group engineers will carry out initial checks using LabView, through to confirming all VME operations. More hardware will be needed for system tests (e.g. GIO, CPME and RTM cards, all of which are almost ready), which will proceed in four stages:

  • checking readout of slice data into ROD
  • checking real-time crate-level merging with emulated HIT data
  • checking real-time system-level merging operation
  • checking real-time true data-flow operation at full system-level

As an example of these phases, the first will involve pre-loaded data patterns (ramps, etc) being scrolled through the CMM in playback mode under the control of a TCM, and checked in I/O memories via VME when the clocks are stopped. Each channel will be independently checked. These tests should then be repeated in L1A triggered mode via a ROD. The various stages will build up to the final test phase, which will involve the use of a pair of (tested) CPMs sourcing HIT data directly via the backplane into the crate and system CMMs, with a DSS/GIO sinking the resultant merged HITs from the system CMM by emulating the CTP.

In conclusion, the programme will be lengthy, with the various phases being only loosely delineated. Not all the necessary hardware yet exists, and much of the firmware isn't yet written. A dedicated team effort is needed, with hardware, firmware and software expertise.

In discussion, Steve asked whether we have enough DSS modules. Tony said there were 10.

CP/JEP backplane and crate status - Sam Silverstein (slides)

Four prototype backplanes have been received in Stockholm. All were tested by APW, and board and connector integrity were verified. One of the backplanes has a slightly misplaced shroud, which APW will repair quickly. The other three have no known problems. Delivery date was several weeks later than we had hoped, largely due to a late delivery of the 9-pin CAN connector from AMP. The 40A, make-first break-last ground contacts have still not arrived from AMP, due to a "lost tooling" issue on their part. They are expected at the end of March, but no firm date has yet been given. The 3.3V and 5V contacts arrived in Stockholm on March 7, and all other parts and tooling are in hand.

One completed processor crate (minus ground contacts) has already been delivered to RAL. Given the difficulty in working with the thick AWG8 cables between the power pins and bus bars, it would be preferable to wait until the ground contacts are available before shipping further crates. This would push delivery dates into April. The crates can be partially completed (mounting, support hardware, bus bars, 5V power) before the ground contacts are received. Stockholm will ship two more crates to RAL and one to Mainz. The crate already at RAL will then be returned to Stockholm for reworking and backplane tests. If a backplane is needed before the ground contacts are available, it should be possible to ship an incomplete crate within a few days. Arrangements would have to be made to retrofit the grounds later. For the future, we should try to reduce our dependence on one supplier.

CP/JEP ROD prototype test status - Bruce Barnett (slides)

Bruce presented an update on the status of firmware tests of the CP/JEP ROD at RAL. The DSS is used in these tests, and it was mentioned that its firmware has been adjusted to correct an address-counter error in the S-Link receive logic. It was noted that additional firmware appropriate to tests of the ROD in its JEP modes, as well as firmware designed to help solve the 'L1A problem' in the slice tests, would need to be specified, designed and tested.

Although all faults described in the fault table presented at the last meeting have been corrected, there are still remaining problems in the implementation of the flow-control logic at the ROD S-Link interface. The first iteration in solving those provided the opportunity to use the CERN SLIBOX S-Link breakout in combination with a logic analyser in the study of the ROD interface. A second iteration, addressing new flow-control problems in the same design, is in progress. The plans include resolving the current fault, in iteration with James Edwards, retesting the ROD in its RoI mode with the S-Link (the above faults having been identified in slice mode), testing new DSS firmware which will have to appear for the slice tests, and the application of heavy soak-tests.

In discussion, Norman noted that even if firmware faults are diagnosed quickly the turnaround time to do all the necessary steps is at least two days.

Timing Control Module and VME Mount Module status - Tony Gillman (slides)

This talk summarised the recent work of Adam Davis. There are now six tested TCMs with their Adapter Link Cards, which are still awaiting VME-- checks and final CANbus tests. One of them is already in Heidelberg for use in the Preprocessor system, and requires the design and manufacture of the appropriate ALC. A second TCM/ALC is ready to be shipped to Mainz for use in the JEP system.

All six VMMs have been partially tested, after some minor design and assembly modifications. Now that the Crate/Backplane assembly has been delivered from Stockholm to RAL, the Concurrent SBC will be used in the VMM to access the TCM via VME--. Two of the VMMs are reserved for the JEP system in Mainz.

DCS ETC.

Report on DCS FDR/PRR - Uli Schäfer (slides)

Uli had attended the FDR of the DCS system, combined with the PRR of the ELMB. He first showed a diagram of the DCS standard data path, hardware and software, then the makeup of the review panel and the agenda of the FDR. He had given a summary presentation of the calorimeter trigger DCS, including requirements, channel count, ADC resolution, data volume, and hardware and software components. He had said that the current solution was the Fujitsu microcontroller, but listed some alternatives. A reviewer had commented that monitoring of individual modules seemed to be a bit of overkill.

Other presentations showed that we are not the only ones whose solution is not yet final. Many standard and non-standard uses of ELMB were presented. The total number of ELMBs in ATLAS is around 5000.

There are no major problems with the PVSS SCADA system that has been chosen, but it is slow and not very user-friendly. The message is therefore to do as much local processing as possible, to minimise the amount of data to be transmitted. Finally, Uli showed some examples of interesting topics that came up during the review.

We then discussed whether it is worth monitoring each board. Tony pointed out that the CPM, for example, is a very expensive board. The provision of a microcontroller and perhaps a couple of other components on each module is not a big expense, and if we have them we can use them a lot, a little, or even not at all. If we do not have them then we do not have the option. So the consensus seemed to be that it is not a very big overhead and allows us to be flexible.

Fujitsu CANbus status and other options - David Mills (slides)

The work with the Fujitsu chip is going very slowly; there is still no joy in getting the device to receive and acknowledge CAN frames. Using the Zanthic CAN4USB device, Dave has been able to determine that he is generating valid CAN data with the Fujitsu micro.

Several other options were presented as possible replacements for the Fujitsu device; however, all of these except the Hitachi SH7055 are limited compared to the Fujitsu. See the table in the presentation for more details.

UPDATE 21-03-2002: After a phone call to Hitachi and to their distributors Dave has been informed that Hitachi do not intend to supply either evaluation boards or samples of the micros to "non volume" customers.

FPGA damage from incorrect code - Tony Gillman for Viraj Perera (slides)

The more flexible a device is, the greater the chances of inadvertent failure, so some mechanisms should be in place to prevent it. Damage to FPGAs can occur due to:

1. Bad designs.
a. Use of internal busses can cause contention if the external signals enabling the busses are enabled incorrectly on power-up.
b. Not considering the maximum power dissipation of the package.

2. Generating incorrect bit files - if, for example, a design is re-done, the correct pin allocation file must be used to prevent I/O being assigned incorrectly, since the I/O pins are fixed and connected to other devices on the board.

3. Loading incorrect bit files from VME; this can happen in three possible ways and this seems to be the most likely way to damage the FPGAs.
a. Bit files generated for similar devices can be mixed up. For example, on the CMM there are two similar FPGAs, the crate FPGA and the system FPGA. If the two bit files are swapped, the devices can be damaged because the I/O connectivity between the devices and to other devices differs.
b. Bit files generated for different package types can also be loaded incorrectly, for example CP vs. CMM.
c. Bit files generated for different devices can be loaded, for example XCV600 vs. XCV1000. Here, since the internal gates will be enabled and connected in an unknown fashion, internal contention can damage the device through high current draw.

4. How can we prevent damage to FPGAs? Software can check the bit file header before downloading: bit files contain ASCII characters indicating the file name (make it clear and unique), the device and package (which could be the same, e.g. CMM), and a date (a sketch of such a header check is given below). In hardware, the temperature of the device can be monitored by using the internal temperature diode of the FPGA (monitored via CANbus - slow, but still useful!). Other hardware solutions (for general-purpose modules) could interface the temperature diode to the power supply, or to INIT to reset the configuration.
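
A minimal sketch of the software check mentioned in point 4 is given below. Rather than parsing the exact Xilinx header format, it simply looks for the expected device/package string among the printable ASCII fields near the start of the bit file and refuses the download otherwise; the file name and device string used in the example are hypothetical.

    // Minimal sketch of the header check described in point 4 (illustrative only,
    // not our actual download software).  It extracts printable ASCII strings
    // from the first part of a bit file and refuses the download unless the
    // expected device/package string (e.g. "v1000e") is found among them.
    #include <algorithm>
    #include <cctype>
    #include <cstddef>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    std::vector<std::string> asciiStrings(const std::string& fileName,
                                          std::size_t headerBytes = 256) {
        std::ifstream in(fileName, std::ios::binary);
        std::vector<std::string> found;
        std::string current;
        char c;
        for (std::size_t i = 0; i < headerBytes && in.get(c); ++i) {
            if (std::isprint(static_cast<unsigned char>(c))) {
                current += c;
            } else {
                if (current.size() >= 4) found.push_back(current);   // keep words only
                current.clear();
            }
        }
        if (current.size() >= 4) found.push_back(current);
        return found;
    }

    bool safeToDownload(const std::string& bitFile, const std::string& expectedDevice) {
        for (const std::string& s : asciiStrings(bitFile)) {
            std::string lower(s);
            std::transform(lower.begin(), lower.end(), lower.begin(),
                           [](unsigned char ch) { return static_cast<char>(std::tolower(ch)); });
            if (lower.find(expectedDevice) != std::string::npos) return true;
        }
        std::cerr << "Refusing download: \"" << expectedDevice
                  << "\" not found in header of " << bitFile << '\n';
        return false;
    }

    int main() {
        // Hypothetical example: check a crate-FPGA bit file before a VME download.
        if (safeToDownload("cmm_crate.bit", "v1000e"))
            std::cout << "Header check passed - OK to configure over VME\n";
    }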

CENTRAL TRIGGER PROCESSOR

Integration of CTPd, and final CTP design - Ralf Spiwoks (slides)

Ralf gave an overview of the features of the Central Trigger Processor Demonstrator and of the Patch Panel required to connect the CTPD to the calorimeter trigger processors. He summarised the hardware and software required to integrate the CTPD in the calorimeter trigger slice test.

He then continued with an overview of the first ideas for the final design of the CTP, which will be built with different modules for input, core processing and output. There will also be modules for common timing signals and the beam monitoring. Ralf went through the functionality and first ideas on implementation for the different modules. He finished with an overview of the interface between the CTP and the sub-detector TTC partitions.

In discussion, Norman asked how many CTPs would be built since it would be useful when commissioning to have several, for different parts of ATLAS. Ralf said this depended on the use of partitions.

Trigger menus and trigger configuration - Thomas Schoerner-Sadenius (slides)

We will need consistent configuration of LVL1, LVL2, EF and the hardware. It should be easy, transparent and controlled by a user-friendly GUI. A top-down approach will be used: describe the trigger strategy in terms of physics signatures, then, using knowledge of the abilities of the trigger system (in terms of 'algorithms' in the case of LVL2 and EF), recursively build trigger menus for EF, LVL2, LVL1 and the hardware. (This assumes that it is easy to configure LVL1 and that some simple rules can be found which govern the communication between the different components of the trigger.)

Thomas reminded us of the LVL2/EF principle: the decision is taken in steps, following the principles of constant refinement of the trigger elements in question and of fast rejection. A recursive algorithm has been established which builds the necessary input for LVL2/EF configuration and steering (menu and sequence tables), starting from some physics signatures (given as XML files). Derivation of the LVL1 menu should be quite straightforward; however, the algorithm still has to 'learn' which abilities LVL1 offers to it and what boundary conditions are given by the hardware. For the LVL1 menus, code exists which runs on XML trigger menus, derives the CTP configuration (look-up tables and combinatorial devices) and simulates the CTP behaviour.

Next steps are:

  • Integration of global trigger configuration with HLT steering code.
  • Discussions on rules for calo/muon hardware <-> CTP communication.
  • Definition of interfaces with online calo simulation (Murrough).
  • Integration with Ed's offline calo simulation and his way of coding trigger menus.

CTP simulation - Thomas Schoerner-Sadenius (slides)

Most LVL1 trigger components have their own stand-alone simulation software (mainly used for hardware tests up to now). For use as offline simulation software, these pieces have to be adapted and put into one common framework. The calo trigger and CTP simulations are the most advanced, so integration will start with these two, using the Athena framework and Ed's experience with it. Interfaces are supposed to follow the hardware very closely; however, we do not yet have a complete picture of the LVL1 event data model - a document is in preparation.

The question of databases is not yet decided.

For the CTPD, LVL1 accepts will be built from conditions on threshold multiplicities which are delivered by the subsystems. These conditions can be logically combined into so-called trigger items, the 'OR' of which finally decides on the LVL1 Accept. The CTPD simulation cannot be separated from the question of trigger menus!
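
To make the combination logic concrete, here is a toy sketch of it; the class layout, threshold names and example menu are invented for illustration and are not the CTPD simulation code, which works from XML menus and look-up tables as described below.

    // Toy sketch of the combination logic described above (the class layout,
    // threshold names and example menu are invented for illustration; the real
    // CTPD simulation works from XML menus and look-up tables).
    // A condition requires a minimum multiplicity above a named threshold; a
    // trigger item is the AND of its conditions; LVL1 Accept is the OR of all items.
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Condition {
        std::string threshold;   // e.g. "EM20", "MU6"
        int minMultiplicity;     // required number of objects above threshold
    };

    using TriggerItem = std::vector<Condition>;   // AND of conditions

    bool itemFires(const TriggerItem& item,
                   const std::map<std::string, int>& multiplicities) {
        for (const Condition& c : item) {
            auto it = multiplicities.find(c.threshold);
            if (it == multiplicities.end() || it->second < c.minMultiplicity)
                return false;
        }
        return true;
    }

    bool levelOneAccept(const std::vector<TriggerItem>& menu,
                        const std::map<std::string, int>& multiplicities) {
        for (const TriggerItem& item : menu)
            if (itemFires(item, multiplicities)) return true;   // OR of the items
        return false;
    }

    int main() {
        // Example menu: (1 x EM20) OR (1 x MU6 AND 1 x JET15).
        const std::vector<TriggerItem> menu = {
            {{"EM20", 1}},
            {{"MU6", 1}, {"JET15", 1}},
        };
        const std::map<std::string, int> multiplicities = {
            {"EM20", 0}, {"MU6", 1}, {"JET15", 2}};
        std::cout << "L1A = " << std::boolalpha
                  << levelOneAccept(menu, multiplicities) << '\n';
    }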

In the current approach, the XML trigger menus are read in by stand-alone C++ code which, using a certain API, transforms the structure into C++ class objects. The code (and this simulation) is in principle ready; however, no real-life tests have been done up to now (waiting for integration with Ed and for the slice test).


SATURDAY, 16 MARCH 2002, MORNING SESSION

ONLINE SOFTWARE COMPONENT STATUS SUMMARIES

Overview, databases and run control - Murrough Landon (slides)

This talk was actually given at the end of Friday afternoon.

Murrough reminded us of the structure of our software packages and reviewed the status and recent developments of each one.

Bruce has extended the HDMC parts file syntax for the Module Services package which is now being used by Gilles for the CPM.

Oliver and Karsten are working to add new HDMC parts for ASIC/MCM testing and to develop the VME driver for the homebrew CPU.

Murrough has done further work on databases, adding description of connections and has restarted work on calibration data. He has also extended the main run control GUI to show our run control parameters.

Steve has completed work on the core simulation package and the CPM simulation. He has also provided both user and reference documentation. A scheme for organising (sub)system test and test vectors has been discussed. This still needs implementing.

There is still a lot more work to do, in particular integration of the separate developments of our component packages. This is likely to take a couple more months (at least). Any distractions imposed on the software developers will only add additional delay to the timescale.

Module services, ROS and system management - Bruce Barnett (slides)

This talk summarised the presentations which Bruce had made at the Thursday morning s/w session. In addition, he elaborated on the Linux system infrastructure, which Bruce sees as a prototype for both UK and joint slice tests.

Bruce acts as a contact point with the ROS, and has interacted with a subset of their s/w in the course of the ROD tests. In that scope, as described earlier, he has tested the Linux and S-Link aspects of a private development version of the ROS s/w. To date no integrated ROS functionality has been considered. Extending that work, he foresees installing and testing the official ROS distribution when it becomes available. It appears that that version will be better than beta quality, and it is anticipated that although some feedback may be required, major problems are unlikely to arise.

Bruce also reported on the progress of Markus Joos, at CERN, who is evaluating a high PCI-slot-count industrial PC. Such a PC would allow us to use the ROS in our slice tests with only a single PC, and hence without the difficulties and complications arising from a multi-PC architecture. Progress is good, with Markus already benchmarking the new S32PCI64 S-Link, which shows promising performance. Extensive results are expected in a few weeks' time.

The system infrastructure at RAL (intended for UK module and slice testing) currently consists of two 6U VME crates and one (old L1 demonstrator) 9U crate. The 9U crate will be replaced by the recently arrived Stockholm L1 crate. With the arrival of new Concurrent CPUs, all three VME crates will have Linux-based diskless controllers - up from the two described at the meeting. The system is provided with KVM access to the controllers, 100baseTX local connections, a disk and boot server (also hosting an S-Link ODIN destination from the test system), and a dual-boot (Linux/NT) utility PC which can be used for s/w support or for firmware and CAN download/diagnostic support. Although there is a mix of Linux distributions in use, it is planned to rationalise to Red Hat Linux 7.2 (2.4 kernel), which has been recently installed on "atlun01". Some other distributed-system issues (names of mount points, etc.) should be addressed and agreed within the collaboration, but progress has been made.

Finally, Bruce gave an overview of the status of Module Services. He mentioned that some changes had been made to HDMC to allow recursive sub-module descriptions in parts files - an extension of his 'composites' (formerly 'inheritance') syntax. This syntax forms a key part of the module-services design, and its use allows the consistent application of the module representation in both DAQ and diagnostic L1CALO applications. The module services are based on shared-object libraries, with a single library per module. Bruce has provided sample code to help in the use of these constructs, and should provide more extensive documentation soon. The code from the earlier DSS and ROD library services is being migrated from the HDMC directory structure to the module-services tree. In addition, the old ROD test code is gradually merging with the new code.

Envisaging slice-test DAQ - Oliver Nix (slides)

To come.

Simulation, test vectors and test organisation - Stephen Hillier (slides)

The simulation framework is now stable and well documented. The best place to look for the latest information is www.ep.ph.bham.ac.uk/user/hillier/level1/simulation. Reasonable simulations exist for the CPM and CPM RODs, but these need some improvements. Two students from Stockholm are working on the JEM simulation, and it is expected other modules will be implemented in the future.

On test-vectors, currently only a small number of simple types exist, and their uses are fairly basic. In the near future we will need to be able to generate test-vectors and predicted outputs for far more complex situations. The need to organise these test-vectors led to a recent brain-storming session in the UK. A summary of this can be found at hepwww.ph.qmul.ac.uk/l1calo/sweb/meetings/2002/testvectors.html. The main outcome of this was a proposal to integrate the test-vector files into the database, such that for each type of test set-up, the necessary files could be found or generated with the correct conditions. A solution to the L1Accept problem using the DSS was also discussed. Some first attempts to implement this solution have been made by Murrough and Steve.

Readout, DIG, and RoI-builder issues - Norman Gee (slides)

Norman had been looking at data formats, which he is collecting in a compendium and would like them checked/reviewed. Plots were shown for CPM and JEM readout for a range of tower occupancy and slice counts.

The planned CPM readout can read 5 slices zero-suppressed with 10% tower occupancy at 100 kHz, or 1 slice without zero suppression. The JEP system needs twice the number of RODs previously planned, and with 2 RODs per JEP crate (8 S-Links) can read 3 slices at 100 kHz zero-suppressed, or 1 slice without zero suppression. We should decide what maximum parameters the system should handle.
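
As a rule of thumb (added for context, not from the talk), at a 100 kHz L1A rate every kilobyte of event fragment per link costs

    1\,\mathrm{kB/event} \times 100\,\mathrm{kHz} = 100\,\mathrm{MB/s\ per\ link}

which is why the number of readout slices and the zero-suppression scheme dominate the ROD and S-Link count.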

Norman also described the recent review of the proposed new RoI builder. The new design uses an ANL S-Link running over Gigabit Ethernet, using Ethernet switches to copy RoI fragments to several RoIB boards. The reviewers were, however, concerned about flow control in this system, and have also asked for error-handling scenarios and internal monitoring. There will be more RoIB presentations in April. The RoIB team request a further integration test with LVL1 - Norman had said this could not be before about December 2002.

The CERN S-Link group were developing a new S-Link implementation using 2.5 Gbit/s data over 2 fibres (rather than the present 4-fibre links). There is also a fast "FILAR" S-Link-to-PCI interface. LVL1 were asked if we were willing to try this when it had been fully debugged. Norman had again said not before December 2002.

In discussion, it was asked whether the increased number of JEP RODs means that we may not have purchased enough G-links.

ROD-crate DAQ - Ralf Spiwoks (slides)

Ralf repeated his talk given at the third DIG Forum of 28 February 2002. He reminded the audience of the context and of the interfaces of the Read-out Driver System. He also reminded the audience of the history of a common data acquisition for the Read-out Driver System. He reported that a task force has been set up within the Detector Interface Group with the aim of defining ROD Crate DAQ. He further reported on the operation of the task force (see atlasinfo.cern.ch/Atlas/GROUPS/DAQTRIG/DIG/rodtaskforce/) and gave an overview of the ROD Crate DAQ definition document which is in preparation. Note that our need for multi-crate event building is being taken seriously.

In discussion, Norman noted that all of our crates come under this group's definition of "ROD" crates.

SLICE TEST PLANNING

Slice test organisation and planning - Tony Gillman (slides)

The first foil reminded people of what was said just six months ago at the official ASSO Review meeting at CERN, where we defined a number of milestones for the next three years. Four of these related to the slice tests, which we had estimated would start in April 2002 and finish (phase 1) in July 2002. Since then, it would seem we have slipped by a further six months, with a starting date not before October 2002.

The UK-based CP sub-system was used to illustrate the magnitude of the task, which will require a very large number of complex modules and components to be tested and assembled into the full sub-system. A table was presented listing all these elements, and their different stages of readiness. Fortunately, very little hardware design work is now outstanding, although there will need to be a major programme for individual module testing, and also many pieces of firmware are still not yet written.

In conclusion, it was suggested that it would be more efficient for the JEP sub-system to be assembled in the UK, building on the software framework already well advanced for the CP sub-system. The use of the common modules (CMM, ROD, TCM) also suggests that this would be appropriate, as the expertise for these modules is UK-based. The proposal would then be to debug both CP and JEP sub-systems together before moving them to Heidelberg towards the end of 2002 for the complete Slice tests.

As it is clear that the collaboration remains seriously short of software effort, it was suggested that a small group of people should re-visit the definition of the scope of the sub-system and slice tests, in order to prioritise the necessary software packages. (See also summary of Management Committee meeting.)

SUMMARY

Main issues, highlights and "lowlights" of the meeting - Eric Eisenhandler (slides)

This meeting summary talk is available via the link above, and was also circulated by e-mail.
