There was a full-day meeting of the Snowmass/Les Houches QCD group on Thursday Jan. 31 at Fermilab. The following talks were given:

Use of LHC data in PDF fits now and in the future: Juan Rojo
Need for precision PDFs and need for an eLHC: Max Klein
NNLO progress: Frank Petriello
Scale choices for inclusive and non-inclusive cross sections: Joey Huston
Scale choices for complex processes: Kalanand Mishra
MINLO procedure for scale-setting: Keith Hamilton
QCD issues in jet substructure: Liantao Wang

The idea was to concentrate on a few of the issues that we need to address, with timescales both immediate (such as scale choices and 
uncertainties) and long-term (such as the need/plans for an eLHC); other issues will be discussed in subsequent meetings. 

The talks can be accessed at: https://indico.cern.ch/conferenceTimeTable.py?confId=226756#20130131

Here are some brief notes on issues brought up in these talks. 

1) We are just beginning to see the use of LHC data in global PDF fits, most of it from the 2010 low-statistics data at 7 TeV. So far, data from jet production, photon production, and W and Z rapidity distributions have been used. In the near- to medium-term future, we can expect data from:

-W+c
-W/Z+jets
-off-resonance Drell-Yan
-top differential distributions
-Z+c
-single top 
-charmonium production

to be available and to supply useful PDF information. Note that for these data to be useful, full correlated error information must be available. As the LHC has run, and will run, at different energies, ratios of cross sections at those energies may provide useful PDF information, since many of the systematic errors may cancel. It is fairly clear which theoretical errors will cancel, but more thought may be needed on the degree of cancellation for the experimental errors. 
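Schematically, the cancellation in such a ratio works as follows. This is only an illustrative sketch: the function and the numbers below are assumptions, not from any of the talks. Fully correlated relative errors (e.g. a luminosity-like component common to both energies) cancel in the ratio to the extent they are equal, while uncorrelated components add in quadrature.

```python
import math

# Relative error on a ratio R = sigma_a / sigma_b, where each cross section
# carries an uncorrelated relative error and a fully correlated one.
# The correlated parts cancel to the extent they are equal between a and b.
def ratio_rel_error(rel_unc_a, rel_unc_b, rel_corr_a, rel_corr_b):
    uncorrelated = math.hypot(rel_unc_a, rel_unc_b)  # add in quadrature
    correlated = abs(rel_corr_a - rel_corr_b)        # cancels if equal
    return math.hypot(uncorrelated, correlated)

# Hypothetical numbers: 1% uncorrelated at each energy, 2.5% correlated
# (luminosity-like) at each energy -> the 2.5% drops out of the ratio.
print(f"{100 * ratio_rel_error(0.01, 0.01, 0.025, 0.025):.2f}%")
```

With these invented inputs the ratio carries only the ~1.4% uncorrelated part, while each individual cross section would carry ~2.7%.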

The latest PDF benchmarking, mostly at NNLO, can be found in arXiv:1211.5142. The three global fit PDFs (CT10, MSTW08, NNPDF2.3) are in
good agreement with each other for both quark and gluon distributions in most of the kinematic regions of interest at the LHC. HERAPDF1.5 provides
similar results, but with somewhat higher uncertainties due to the non-global nature of the fit. 

2) An eLHC will provide a complete database to determine all PDFs, in x and Q^2 ranges not accessible at HERA, and with greater precision than possible from in situ LHC measurements. A precision of 0.1% on the determination of alpha_s(m_Z) is possible. The timescale for the start of data-taking is in the mid-2020s. 
Any later than that, and the LHC might no longer be operating. 

3) Most cross sections of interest are known at NLO. Even at NLO, though, considerable scale dependence can remain. In many cases there is a motivated 'physical'
scale, such as pT_jet for inclusive jet production. The logarithms that contribute to the scale dependence involve both the renormalization and factorization scales, so it is most
useful (and rarely done nowadays) to plot a QCD cross section in 2-D versus the two scales. If we are lucky, the result is a saddle-shaped surface, with the scale dependence relatively flat in the saddle region. The saddle point is typically near the physical scale (pT_jet for inclusive jet production), within factors of 2. The orientation of the saddle region and the location of the saddle point depend on the kinematics (pT, y) of the cross-section point under consideration: the saddle point moves to lower scales for larger jet sizes and to higher scales for larger rapidities, and the orientation of the saddle region typically rotates by about -45 degrees in going from low pT to high pT. For extreme kinematics, the
saddle point can be at very high scales. 
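As a toy illustration of this kind of 2-D scan, one can locate and classify the stationary point numerically. The cross-section function below is an invented stand-in with a built-in saddle, not a real NLO calculation; only the scan-and-classify logic is the point.

```python
import numpy as np

# Toy stand-in for sigma(muR, muF) as a function of the two scale
# logarithms lr = log(muR/pT), lf = log(muF/pT). It is constructed to
# have a saddle point, mimicking the 2-D scans discussed above.
def sigma(lr, lf):
    return 1.0 + 0.05 * lr * lf + 0.01 * lr - 0.02 * lf

# Scan a grid with each scale varied from pT/4 to 4*pT.
lrs = np.linspace(-np.log(4), np.log(4), 201)
lfs = np.linspace(-np.log(4), np.log(4), 201)
LR, LF = np.meshgrid(lrs, lfs, indexing="ij")
S = sigma(LR, LF)

# Stationary point: where both finite-difference derivatives vanish.
dS_dlr, dS_dlf = np.gradient(S, lrs, lfs)
i, j = np.unravel_index(np.argmin(dS_dlr**2 + dS_dlf**2), S.shape)
lr0, lf0 = lrs[i], lfs[j]

# Classify it: a negative Hessian determinant means a genuine saddle,
# i.e. flat along one direction and curved oppositely along the other.
d2_rr = np.gradient(dS_dlr, lrs, axis=0)[i, j]
d2_ff = np.gradient(dS_dlf, lfs, axis=1)[i, j]
d2_rf = np.gradient(dS_dlr, lfs, axis=1)[i, j]
det_H = d2_rr * d2_ff - d2_rf**2

print(f"stationary point: muR = {np.exp(lr0):.2f} pT, muF = {np.exp(lf0):.2f} pT")
print("saddle" if det_H < 0 else "extremum")
```

The same scan applied to a real NLO grid would show whether a saddle exists at all (it does not for inclusive Higgs at NLO, as noted below) and where it sits relative to the physical scale.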

Two questions were discussed: should the saddle-point scale be treated as a special point (for example, used for the central cross section), and/or should any estimate of the scale uncertainty encompass the saddle point if another scale is used for the central evaluation? This is especially relevant for inclusive Higgs production,
which has particularly bad scale behavior: there is no saddle point at NLO (the scale dependence is monotonic), and the saddle point at NNLO is at small scales (of order 0.1 times the Higgs mass). This lies outside the error range typically chosen (starting from a central scale of m_Higgs or m_Higgs/2), and again it is somewhat controversial whether the
uncertainty range should include this saddle point (which would greatly expand the range of uncertainty for gg->Higgs). There was disagreement at the meeting about including the saddle point in the NNLO uncertainty range, since the scale involved is so much smaller than the Higgs mass. NNNLO will tell us more, but so far only approximate
calculations have been completed, and it is not clear whether these partial cross sections agree. 

Another important question regards the scale uncertainty for exclusive/binned cross sections. In the last few years, the Stewart-Tackmann (S-T) procedure has been
adopted, especially for Higgs(+jets) cross sections. The result is, in most cases, a reasonable extension of the uncertainties, although with strong cuts on jet vetoes or related quantities the uncertainty can grow substantially. For W/Z+jets, however, the 
application of S-T can result in much larger uncertainties than may seem reasonable, especially given the good agreement of the central theory prediction with the measured data. Several other prescriptions for estimating the uncertainty were discussed. A comparison of the alternate prescriptions to the S-T approach, for Higgs + 0 jet production, was made in the 2011 Les Houches NLM writeup, and general agreement was found. Comparisons of the alternate techniques to the S-T approach also need to be carried out for cross sections such as exclusive W/Z + n-jet bins. Much information on the resummation of Higgs + 0 jet and Higgs + 1 jet vetoed cross sections
is available, and detailed comparisons of the impact of jet vetoes, and of resummation, on the relevant cross sections are in progress.  
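For reference, the core of the S-T prescription is simple: the exclusive 0-jet cross section is written as a difference of inclusive cross sections, sigma_0 = sigma_{>=0} - sigma_{>=1}, and the scale uncertainties of the two inclusive pieces are treated as uncorrelated and added in quadrature. A minimal sketch, with purely illustrative numbers (not a real Higgs or W/Z calculation):

```python
import math

# Stewart-Tackmann: build the exclusive 0-jet bin from two inclusive
# cross sections whose scale uncertainties are taken as uncorrelated.
def st_uncertainty(sigma_incl0, d_incl0, sigma_incl1, d_incl1):
    sigma_0 = sigma_incl0 - sigma_incl1          # exclusive 0-jet rate
    d_0 = math.hypot(d_incl0, d_incl1)           # added in quadrature
    return sigma_0, d_0

# Hypothetical numbers: a tight jet veto removes much of the inclusive
# rate, so the absolute uncertainties no longer cancel in the difference
# and the *relative* uncertainty of the 0-jet bin grows.
s0, d0 = st_uncertainty(20.0, 2.0, 12.0, 1.8)
print(f"sigma_0 = {s0:.1f} +- {d0:.1f} pb  ({100 * d0 / s0:.0f}%)")
```

This makes the behavior noted above explicit: each inclusive piece here carries a ~10-15% uncertainty, but the exclusive bin ends up with over 30%, and the effect grows as the veto tightens.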

4) Some processes are so complex that it is not clear what the natural scale is. An example is ttbar+jets, which is typically compared to ME+PS predictions. The LHC data seem to be
best described by larger scale choices (smaller alpha_s, but larger Q^2 for hard gluon emission). For complex processes like W/Z + 1,2,3,4,5 jets, scales of HT/2 work well both in describing the data and in keeping the size of the NLO corrections reasonably small. It is not clear at the moment why such a large scale works so well, but 
comparisons with the MINLO procedure may help. The MINLO procedure aims not at estimating the uncertainty but at a better determination of the central cross section,
accounting for potentially large Sudakov logs that can ruin the predictive power of any calculation. It is basically the application of the CKKW procedure at NLO, with coupling-constant
reweighting at the branching vertices and Sudakov suppression factors for further emissions in the matrix-element region. It can be used with a fixed-order NLO prediction, or with vetoed showers in a parton-shower Monte Carlo. 

MINLO is always NLO-accurate, and for sufficiently inclusive observables it is also accurate to NLL. The MINLO procedure appears to agree with the use of a large conventional scale like HT/2, and it may be that the large scale 'stands in' for the additional Sudakov suppression in MINLO. There is no clear understanding at the moment, but this is clearly something that
should be further investigated. 
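The CKKW-style coupling reweighting at the heart of MINLO can be sketched as follows. This is a simplification for illustration only: one-loop running is used, and the branching scales are invented numbers, not taken from any real clustered event.

```python
import math

MZ, AS_MZ, NF = 91.1876, 0.118, 5
B0 = (33 - 2 * NF) / (12 * math.pi)  # one-loop beta coefficient

# One-loop running coupling (sufficient for a sketch).
def alpha_s(q):
    return AS_MZ / (1 + 2 * B0 * AS_MZ * math.log(q / MZ))

# CKKW-style reweighting: each branching in the clustered event
# contributes alpha_s evaluated at its own branching scale, rather
# than at the single scale mu used in the fixed-order matrix element.
def ckkw_alpha_s_weight(branching_scales, mu):
    w = 1.0
    for q in branching_scales:
        w *= alpha_s(q) / alpha_s(mu)
    return w

# Hypothetical branching scales (GeV) for a W+2-jet-like configuration,
# with a hard matrix-element scale of 200 GeV:
print(ckkw_alpha_s_weight([40.0, 25.0], mu=200.0))
```

Since alpha_s is larger at the lower branching scales, the weight exceeds 1 for soft branchings; in full MINLO this is combined with Sudakov suppression factors, which push in the opposite direction.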

5) Of course, better precision is achieved by going to NNLO. So far, most NNLO calculations have been for 1-body final states, but the steady work on 2->2 processes is close to fruition, with partial NNLO jet cross-section predictions now available, Higgs + 1 jet expected on a timescale of a few months, and W/Z + 1 jet presumably on a similar timescale. The results released so far for inclusive jet production (the leading-color gg contributions) show that the scale dependence becomes very flat at NNLO. This should increase the constraining 
power of jet production in NNLO PDF fits. Higgs + 1 jet at NNLO is needed, among other things, to improve recent efforts at resumming the logs associated with jet vetoes in the 1-jet bin. 
We want to be able to match the NNLL+NNLO precision achieved for zero jets.

6) There has been a great deal of activity in the last several years on jet grooming and jet substructure, with most of the results based on very simple intuition about QCD radiation. There is still work to do on a better QCD understanding of most of these tools, and Monte Carlo comparisons with data remain critical.
