
Dear Martin,

In addition to what Andy said about variables being additive or
overwritten, there is a common pattern for setting all.role and
all.manager in a clustered configuration:

all.role server
all.role manager if headnode.example.com
all.manager headnode.example.com 1213

This way you can use the same configuration file for the head node in
a cluster (here headnode.example.com) as for the worker nodes, and
each will get the correct values for all.role and all.manager. The
server role is the default; if the hostname matches
headnode.example.com, the role is overwritten to manager instead.
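The same additive-versus-overwritten distinction answers your earlier question about ofs.osslib. A short sketch (the library paths below are placeholders for illustration, not the actual DPM plugin paths):

```
# all.export is additive: both paths end up exported
all.export /atlas
all.export /data

# single-valued directives like ofs.osslib are overwritten:
# only the last assignment takes effect
ofs.osslib /opt/example/libOldOss.so
ofs.osslib /usr/lib64/libNewOss.so
```

So for directives that accept only one value, setting them twice simply means the later line wins, just as all.role does in the pattern above.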

Regards,
Henrik

On 24 May 2013 17:08, Martin Philipp Hellmich <[log in to unmask]> wrote:
> Hi Henrik,
>
> another question:
> If I set a variable twice in the config file, e.g. ofs.osslib, will the value get overwritten?
> I know that some vars, like all.export can appear multiple times, but not all, right?
>
> Cheers
> Martin
> On May 24, 2013, at 12:27 PM, Henrik Öhman <[log in to unmask]> wrote:
>
>> Hi Martin,
>>
>> I'm trying to get the DPM xrootd module working in a custom
>> environment, but I am still not entirely sure how to begin. Right now
>> I have a basic VM instance running CentOS 6.4 with puppet installed. I
>> have downloaded the punch modules and the manifests-testbed, but I
>> don't believe that the xrdbase.pp manifest is applicable to this use
>> case. Could you give me some hints on whether I can use the punch
>> modules on a custom VM, or if they depend on some domain-specific
>> requirements for CERN?
>>
>> Secondly, I looked through the configuration template, and so far I've
>> found a couple of things that might not agree with the ATLAS Tier3
>> configuration file I've been using. Perhaps these options are
>> configured in some other way, but here are the differences:
>>
>> oss.localroot: We set this to /data/scratch by convention
>> acc.authdb: This is set to /etc/xrootd/auth_file, which contains the
>> single line 'u * /atlas a' to give users full access to the atlas area
>> frm.xfr.copycmd: This is set to 'in stats /etc/xrootd/stagein.sh $SRC
>> $DST $CGI' on the nodes
>>
>> There are a few other options we use that are not available in the
>> xrootd.cfg.erb template. I'm not an xrootd expert - in fact I barely
>> know it - so I don't know the significance or impact of these options.
>> I have attached xrootd.cfg, auth_file, and stagein.sh.
>>
>> Best regards,
>> Henrik
>>
>> On 22 May 2013 09:53, Martin Philipp Hellmich <[log in to unmask]> wrote:
>>> Hi Henrik,
>>>
>>> did you get a chance to try it?
>>>
>>> Cheers
>>> Martin
>>>
>>> On May 17, 2013, at 3:55 PM, Martin Philipp Hellmich <[log in to unmask]> wrote:
>>>
>>>> Hi all,
>>>>
>>>> So the module should be able to configure a DPM with xrootd as head node or disk node, but _without_ federation.
>>>> It would be great if you could test that.
>>>>
>>>> The federation part is trickier and I am still working on it.
>>>> If you want to have a look at this, I believe it can follow the same scheme as the other configs; look into "modules/dmlite/manifest/xrootd.pp".
>>>>
>>>> Cheers
>>>> Martin
>>>>
>>>> On May 17, 2013, at 11:10 AM, Henrik Öhman <[log in to unmask]> wrote:
>>>>
>>>>> Thanks Martin,
>>>>>
>>>>> I'll have a look over the weekend!
>>>>>
>>>>> Best regards,
>>>>> Henrik
>>>>>
>>>>> On 17 May 2013 10:52, Martin Philipp Hellmich <[log in to unmask]> wrote:
>>>>>> Hi Henrik
>>>>>>
>>>>>> the branch to use is 'test'. The punch-modules are the collection of all puppet modules used at CERN.
>>>>>> This is where the xrootd module should go in the end, too.
>>>>>>
>>>>>> If you look in the folder 'modules', you can find the xrootd and the lcgdm modules.
>>>>>> The templates are inside there, too, in a template folder.
>>>>>> You can find an example of how to use it in a dpm configuration here:
>>>>>> https://svnweb.cern.ch/trac/lcgdm/browser/extras/puppet/manifests-testbed
>>>>>>
>>>>>> The xrootd manifests are in 'xrootd'. The xrdbase.pp holds all the information.
>>>>>>
>>>>>> Hope that helps, please ask if I missed something!
>>>>>>
>>>>>> Cheers
>>>>>> Martin
>>>>>>
>>>>>>
>>>>>> On May 17, 2013, at 10:43 AM, Henrik Öhman <[log in to unmask]>
>>>>>> wrote:
>>>>>>
>>>>>>> Dear Martin,
>>>>>>>
>>>>>>> Thanks for the repo link. I have cloned it, but there is no default
>>>>>>> branch (e.g. master), only 'devel' and 'test'. Which one should I use?
>>>>>>>
>>>>>>> Further I have no experience with punch-modules - could you tell me
>>>>>>> where to start digging? For the ATLAS (US) Tier3 perspective, I'd like
>>>>>>> to examine the configuration file templates and also the manifests
>>>>>>> responsible for creating the configuration files. Could you give me a
>>>>>>> pointer or two?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Henrik
>>>>>>>
>>>>>>> On 16 May 2013 11:06, Martin Philipp Hellmich <[log in to unmask]> wrote:
>>>>>>>> Hi Henrik, Andrea,
>>>>>>>>
>>>>>>>> I am very happy to collaborate.
>>>>>>>> I have started making an xrootd puppet module, which is general and extensible.
>>>>>>>> It now works together with a dpm-xrootd module defining extra variables and parameters and hiding some of the configuration complexity from the user.
>>>>>>>>
>>>>>>>> The xrootd part should be invoked through
>>>>>>>> class{"xrootd::config":}
>>>>>>>> class{"xrootd::install":}
>>>>>>>> class{"xrootd::service":}
>>>>>>>> plus two functions after config, which create the configuration files:
>>>>>>>> create_config{"config_disk":}
>>>>>>>> create_config{"config_redir":}
>>>>>>>> create_sysconfig{"sysconfig":}
>>>>>>>>
>>>>>>>> so in all it looks like this in your manifest:
>>>>>>>> class{"xrootd::config":}
>>>>>>>> create_config{"config_disk":}
>>>>>>>> create_config{"config_redir":}
>>>>>>>> create_sysconfig{"sysconfig":}
>>>>>>>> class{"xrootd::install":}
>>>>>>>> class{"xrootd::service":}
>>>>>>>>
>>>>>>>> If you are interested in having a look at the code, check out the test branch from here:
>>>>>>>> [log in to unmask]:./public/repo/punch-modules
>>>>>>>> It's the punch module repo with the xrootd and lcgdm::xrootd modules added (and some minor changes throughout so it works without hiera)
>>>>>>>>
>>>>>>>> I am happy to help dig through the code and try it!
>>>>>>>>
>>>>>>>> Cheers
>>>>>>>> Martin
>>>>>>>>
>>>>>>>> On May 16, 2013, at 10:00 AM, Henrik Öhman <[log in to unmask]>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Dear Fabrizio,
>>>>>>>>>
>>>>>>>>> I got this mail through Sergey Panitkin, and I believe I can be of some help. Note that I'm not on the xrootd-l list, so please make sure that my email is among the recipients in your reply.
>>>>>>>>>
>>>>>>>>> I have been using puppet to configure xrootd on Google Compute Engine (GCE). In this activity I have based my modules and templates on the ATLAS US Tier3 puppet modules. I am by no means an expert on puppet, but I do have some experience crafting puppet modules and templates for different services. I have some spare time right now, so if you'd like me to take a look at your specific setup let me know.
>>>>>>>>>
>>>>>>>>> Best regards,
>>>>>>>>> Henrik
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> -------- Original Message --------
>>>>>>>>> Subject:      Anybody with puppet for xrootd ?
>>>>>>>>> Date: Wed, 15 May 2013 16:09:38 +0200
>>>>>>>>> From: Fabrizio Furano <[log in to unmask]>
>>>>>>>>> To:   xrootd-l <[log in to unmask]>
>>>>>>>>> CC:   Martin Philipp Hellmich <[log in to unmask]>, Oliver Keeble <[log in to unmask]>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hi xrootd folks,
>>>>>>>>>
>>>>>>>>> as DPM team we are planning to invest into puppet for the setup, and
>>>>>>>>> of course this would cover also the so-called xrootd plugin, that
>>>>>>>>> involves an xrootd setup as a frontend of DPM, plus some DPM-related
>>>>>>>>> Xrd* plugins.
>>>>>>>>>
>>>>>>>>> The question is... is there anybody who has experience/templates/etc.
>>>>>>>>> about setting up xrootd clusters with puppet? Our idea is to be
>>>>>>>>> compatible with existing practices, if any, or, even better, to share
>>>>>>>>> material on this.
>>>>>>>>>
>>>>>>>>> Thank you!
>>>>>>>>>
>>>>>>>>> Fabrizio
>>>>>>>>>
>>>>>>>>> ########################################################################
>>>>>>>>> Use REPLY-ALL to reply to list
>>>>>>>>>
>>>>>>>>> To unsubscribe from the XROOTD-L list, click the following link:
>>>>>>>>>
>>>>>>>>> https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1
>>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Martin Hellmich                    Information Technology Department
>>>>>>>> [log in to unmask]               CERN
>>>>>>>> +41 22 76 765 26                 CH-1211 Geneva 23
>>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Martin Hellmich                    Information Technology Department
>>>>>> [log in to unmask]               CERN
>>>>>> +41 22 76 765 26                 CH-1211 Geneva 23
>>>>>>
>>>>
>>>> --
>>>> Martin Hellmich                    Information Technology Department
>>>> [log in to unmask]               CERN
>>>> +41 22 76 765 26                 CH-1211 Geneva 23
>>>>
>>>
>>> --
>>> Martin Hellmich                    Information Technology Department
>>> [log in to unmask]               CERN
>>> +41 22 76 765 26                 CH-1211 Geneva 23
>>>
>> <auth_file><xrootd-clustered.cfg><stagein.sh>
>
> --
> Martin Hellmich                    Information Technology Department
> [log in to unmask]               CERN
> +41 22 76 765 26                 CH-1211 Geneva 23
>
