This page is about writing puppet code:  how to write it, when to write it, where to put it.  For information about how to install or manage puppet, visit [[Puppet | this page]].
==Getting the source==
As simple as:
  git clone --recursive
(the <code>--recursive</code> flag also checks out the git submodules).
== Set up local environment ==
It is possible to run some tests locally. To do that, you will need to set up an environment.
=== Install rbenv and python tox packages ===
==== MacOS macports ====
  # port install rbenv ruby-build
  # port search tox
  # port install py<ver>-tox
==== Linux ====
On Debian/Ubuntu, the equivalent packages can be installed with (package names may vary by release):
  # apt-get install rbenv ruby-build tox
=== Setup env ===
* Either run <tt>rbenv init</tt> to hook rbenv into your shell, or follow the instructions in the upstream rbenv documentation.
* Go to your local puppet repo and have rbenv install the appropriate ruby version and bundler:
  $ rbenv install
  $ rbenv versions
  $ rbenv exec gem install bundler
  $ rbenv exec bundle install
* Test that everything is OK; the following will show you a list of tasks:
  $ rbenv exec bundle exec rake --tasks
Done, your env is ready!
== When we use Puppet ==
Puppet is our configuration management system. Anything related to the configuration files & state of a server should be puppetized. There are a few cases where configurations are deployed into systems without involving Puppet but these are the exceptions rather than the rule ([[How_to_deploy_code|MediaWiki configuration]] etc.); all package installs and configurations should happen via Puppet in order to ensure peer review and reproducibility.
However, Puppet is ''not'' being used as a deployment system at Wikimedia. Pushing code via Puppet, e.g. with the define <code>git::clone</code> should be avoided. Depending on the case, Debian packages or the use of our deployment system ([[Scap3]]) should be employed. Deploying software via Scap3 can be achieved in Puppet by using the puppet class <code>scap::target</code>.
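As a sketch, a service deployed via Scap3 might declare a target like this (the resource title and parameter are illustrative assumptions, not necessarily the actual <code>scap::target</code> interface):

<syntaxhighlight lang="puppet">
# Hypothetical profile: 'deploy_user' and the resource title are
# illustrative; check the real scap::target define for its parameters.
class profile::myservice::deploy {
    scap::target { 'myservice/deploy':
        deploy_user => 'deploy-service',
    }
}
</syntaxhighlight>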
=== Wikimedia [[Help:Cloud_Services_Introduction#VPS_.3D.3E_Wikimedia_VPS|Cloud VPS]] ===
Cloud VPS users have considerable freedom in the configuration of their systems, and the state of machines is frequently not puppetized.  Specific projects (e.g. Toolforge) often have their own systems for maintaining code and configuration.
That said, any system being used for semi-production purposes (e.g. public test sites, bot hosts, etc.) should be fully puppetized. VPS users should always be ready for their instance to vanish at a moment's notice, and have a plan to reproduce the same functionality on a new instance -- generally this is accomplished using Puppet.
The node definitions for VPS instances are not stored in <code>manifests/site.pp</code> -- they are configured via the [[Horizon|OpenStack Horizon]] user interface and stored in a backing persistent data store.
We maintain certain instance standards that must be preserved, such as: LDAP, DNS, security settings, administrative accounts, etc.  Removing or overriding these settings means an instance is no longer manageable as part of the Cloud VPS environment.  These instances may be removed or turned off out of necessity. This is part of the [[Instance lifecycle|instance lifecycle]].
== Organization ==
As of December 2016, we decided to adopt our own variation of the role/profile pattern that is pretty common in puppet coding nowadays. Please note that existing code might not respect this convention, but any new code should definitely follow this model.
The code should be organized in modules, profiles and roles, where
# '''Modules should be basic units of functionality''' (e.g. "set up, configure and run HHVM")
# '''Profiles are collections of resources from modules that represent a high-level functionality''' (e.g. "a webserver able to serve mediawiki")
# '''Roles represent a collective function of one class of servers''' (e.g. "A mediawiki appserver for the API cluster")
# '''Any node declaration must only include one role, invoked with the role function'''. No exceptions to this rule. If you need to include two roles in a node, that means that's another role including the two.
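Schematically, the layering above can be sketched as follows (all class and node names are illustrative):

<syntaxhighlight lang="puppet">
# One role per node, invoked with the role function
node 'mw1001.eqiad.wmnet' {
    role(mediawiki::appserver)
}

# Roles only include profiles
class role::mediawiki::appserver {
    include profile::mediawiki::web
}

# Profiles pull their configuration from hiera and instantiate module classes
class profile::mediawiki::web (
    $workers = hiera('profile::mediawiki::web::workers'),
) {
    class { 'mediawiki':
        workers => $workers,
    }
}
</syntaxhighlight>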
Let's see in more detail what rules apply to each of these logical divisions.
=== Modules ===
Modules should represent basic units of functionality and should be mostly general-purpose and reusable across very different environments. Rules regarding organizing code in modules are simple enough:
# Any class, define or resource in a module '''must not use classes from other modules''', and should avoid, wherever possible, using defines from other modules as well.
# '''No hiera call, explicit or implicit, should happen within a module.'''
These rules will ensure the amount of WMF-specific code that makes it into modules is minimal, and improve debugging/refactoring/testing of modules, as they don't really depend on each other. Keeping with the HHVM module example, the base class <code>hhvm</code> is a good example of what should be in a module, while <code>hhvm::admin</code> is a good example of what should '''not''' be in a module, surely not in this one: it is a class to configure apache to forward requests to HHVM; it depends mostly on another module (apache) and also adds ferm rules, which of course require the WMF-specific <code>network::constants</code> class.
=== Profiles ===
Profiles are the classes where resources from modules are collected together, organized and configured. There are several rules on how to write a profile, specifically:
# Profile classes should only have '''parameters that default to explicit hiera calls with no fallback value'''.
#*<tt>$web_workers = hiera('profile::ores::web::workers')</tt> is good
#* <tt>$web_workers = hiera('profile::ores::web::workers', 48)</tt> is frowned upon, but is still commonly used.
# No hiera call should be made outside of said parameters.
# No resource should be added to a profile using <code>include</code>; use '''explicit class instantiations''' instead. Only very specific exceptions are allowed, such as global classes like <code>network::constants</code>.
# If a profile needs another one as a precondition, it must be listed with a <code>require ::profile::foo</code> at the start of the class, but '''profile cross-dependencies should be mostly avoided'''.
Most of what we used to call "roles" at the WMF are in fact profiles. Following our example, an apache webserver that proxies to a local HHVM installation should be configured via a <code>profile::hhvm::webproxy</code> class; a mediawiki installation served through such a webserver should be configured via a <code>profile::mediawiki::web</code> class.
=== Roles ===
Roles are the abstraction describing a class of servers, and:
# '''Roles must only include profiles''' via the <code>include</code> keyword, plus a <code>system::role</code> definition describing the role
# A role can include more than one profile, but '''no conditionals, hiera calls, etc. are allowed'''.
# '''Inheritance can be used between roles, but it is strongly discouraged''': for instance, it should be remembered that inheritance will not work in hiera.
# All roles should include the standard profile
Following our example, we should have a <code>role::mediawiki::web</code> that just includes <code>profile::mediawiki::web</code> and <code>profile::hhvm::webproxy</code>.
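Following the rules above, such a role can be sketched as (the <code>system::role</code> description text is an illustrative assumption):

<syntaxhighlight lang="puppet">
class role::mediawiki::web {
    system::role { 'role::mediawiki::web':
        description => 'Mediawiki web server',
    }
    # Roles only include profiles (plus the standard profile)
    include standard
    include profile::hhvm::webproxy
    include profile::mediawiki::web
}
</syntaxhighlight>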
=== Hiera ===
Hiera is a powerful tool to decouple data from code in puppet, but as we saw while transitioning to it, it's not without its dangers: it can easily become a tangled mess. To make it easier to understand and debug, the following rules apply when using it:
# '''No class parameter autolookup is allowed, ever.''' This means you should explicitly declare any variable as a parameter of the profile classes, and pass it along explicitly to the classes/defines within the profile.
# '''Hiera calls can only be defined in profiles, as default values of class parameters'''.
# All hiera definitions for such parameters should be defined in the <code>role</code> hierarchy. The only exceptions are shared data structures that are used by many profiles or feed different modules with their data. Those should go in the <code>common/$site</code> global hierarchy. A good example in our codebase is <code>ganglia_clusters</code>, or any list of servers (the memcached hosts for mediawiki, for example).
# '''Per-host hiera''' should only be used to allow tweaking some knobs for testing or to maybe declare canaries in a cluster. It '''should not be used to add/subtract functionality.''' If you need to do that, add a new role and configure it accordingly, within its own hiera namespace.
# '''Hiera keys must reflect the most specific common shared namespace of the puppet classes trying to look them up'''. This should allow easier grepping and avoid conflicts. Global variables should be avoided as much as possible. This means that a parameter specific to a profile will have its namespace (say <code>profile::hhvm::webproxy::fcgi_settings</code>), while things shared among all profiles for a technology should be at the smallest common level (say the common settings for any hhvm install, <code>profile::hhvm::common_settings</code>), and finally global variables should have no namespace (the <code>ganglia_clusters</code> example above) and use only snake case, with no namespace separators (<code>::</code>).
=== Nodes in site.pp ===
Our traditional node definitions included a lot of global variables and boilerplate. Nowadays, a node definition should just contain a simple one-liner:
<source lang="puppet">
node 'redis01.codfw.wmnet' {
    role(db::redis)
}
</source>
This will include the class role::db::redis and look for role-related hiera configs in <tt>hieradata/role/codfw/db/redis.yaml</tt> and then in <tt>hieradata/role/common/db/redis.yaml</tt>, which may for example be:
<source lang="yaml">
cluster: redis
admin::groups:
  - redis-admins
</source>
=== A working example: deployment host ===
Say we want to set up a deployment host role with the following things:
* A scap 3 master, which includes the scap configuration and a simple http server
* A docker build and deploy environment, which will need the kubernetes client cli tool
So we will want a simple module that installs scap and does the basic setup:<syntaxhighlight lang="puppet">
# Class docs, see below for the format
class scap3 (
    $config = {}
) {
    require_package('python-scap3', 'git')

    scap3::config { 'main':
        config => $config,
    }
}
</syntaxhighlight>As you can see, the only resource referenced comes from the same module. In order for a master to work, we also need apache set up and firewall rules created. This is going to be a profile: we are defining one specific unit of functionality, the scap master.<syntaxhighlight lang="puppet">
# Class docs, as usual
class profile::scap3::master (
    $scap_config       = hiera('profile::scap3::base_config'), # This might be shared with other profiles
    $server_name       = hiera('profile::scap3::master::server_name'),
    $max_post_size     = hiera('profile::scap3::master::max_post_size'),
    $mediawiki_proxies = hiera('scap_mediawiki_proxies'), # This is a global list
) {
    class { 'scap3':
        config => merge({ 'server_name' => $server_name }, $scap_config),
    }

    # Set up apache
    apache::conf { 'Max_post_size':
        content => "LimitRequestBody ${max_post_size}",
    }

    apache::site { 'scap_master':
        content => template('profile/scap/scap_master.conf.erb'),
    }

    # Firewalling
    ferm::service { 'scap_http':
        proto => 'tcp',
        port  => $scap_config['http_port'],
    }

    # Monitoring
    monitoring::service { 'scap_http':
        command => "check_http!${server_name}!/",
    }
}
</syntaxhighlight>For the docker build environment, you will probably want to set up a specific profile, and then set one up for the docker deployment environment. The latter actually depends on the former in order to work. Assuming we already have a docker module that helps install docker and set it up on a server, and a kubernetes module that has a class for installing the cli tools, we can first create a profile for the build environment:<syntaxhighlight lang="puppet">
class profile::docker::builder (
    $proxy_address = hiera('profile::docker::builder::proxy_address'),
    $proxy_port    = hiera('profile::docker::builder::proxy_port'),
    $registry      = hiera('docker_registry'), # this is a global variable again
) {
    # Let's suppose we have a simple class setting up docker, with no params
    # to override in this case
    class { 'docker': }

    # We will need docker baseimages
    class { 'docker::baseimages':
        docker_registry => $registry,
        proxy_address   => $proxy_address,
        proxy_port      => $proxy_port,
        distributions   => ['jessie'],
    }

    # we will need some build scripts; they belong here
    file { '/usr/local/bin/build-in-docker':
        source => 'puppet:///modules/profile/docker/builder/',
    }

    # Monitoring goes here, not in the docker class
    nrpe::monitor_systemd_unit_state { 'docker-engine': }
}
</syntaxhighlight>The deployment profile will then need to add credentials for the docker registry for uploading, the kubernetes cli tools, and a deploy script. It can't work unless <code>profile::docker::builder</code> is also applied, though:<syntaxhighlight lang="puppet">
class profile::docker::deployment_server (
    $registry             = hiera('docker_registry'),
    $registry_credentials = hiera('profile::docker::registry_credentials'),
) {
    # Require a profile needed by this one
    require ::profile::docker::builder

    # auth config
    file { '/root/.docker/config.json':
        # ...
    }

    class { 'kubernetes::cli': }

    # Kubernetes-based deployment script
    # ...
}
</syntaxhighlight>Then the role class for the whole deployment server will just include the relevant profiles:<syntaxhighlight lang="puppet">
class role::deployment_server {
    system::role { 'role::deployment_server':
        description => 'Deployment server for production.',
    }

    # Standard is a profile, to all effects
    include standard
    include profile::scap3::master
    include profile::docker::deployment_server
}
</syntaxhighlight>and in site.pp:<syntaxhighlight lang="puppet">
node /deployment.*/ {
    role(deployment_server)
}
</syntaxhighlight>Configuration will be done via hiera; most of the definitions will go into the role hierarchy, so in this case:
'''hieradata/role/common/deployment_server.yaml'''<syntaxhighlight lang="yaml">
profile::scap3::master::server_name: "deployment.%{::site}.wmnet"
# This could be shared with other roles and thus defined in a common hierarchy in some cases.
profile::scap3::base_config:
    use_proxies: yes
    deployment_dir: "/srv"
profile::scap3::master::max_post_size: "100M"
profile::docker::builder::proxy_address: 'http://webproxy.eqiad.wmnet:3128'
</syntaxhighlight>Some other definitions are global, and they might go in the common hierarchy, so:
'''hieradata/common.yaml'''<syntaxhighlight lang="yaml">
docker_registry: ''
scap_mediawiki_proxies:
    - mw1121.eqiad.wmnet
    - mw2212.eqiad.wmnet
</syntaxhighlight>while others can be shared between multiple profiles (in this case, note the 'private' prefix, as this is supposed to be a secret):
'''hieradata/private/common/profile/docker.yaml'''<syntaxhighlight lang="yaml">
profile::docker::registry_credentials: "some_secret"
</syntaxhighlight>
=== WMF Design conventions ===
* '''Always''' include the 'base' class for every node (note that ''standard'' includes ''base'' and should be used in most cases)
* For every service deployed, please use a <tt>system::role</tt> definition (defined in <tt>modules/system/manifests/role.pp</tt>) to indicate what a server is running. This will be put in the MOTD. As the definition name, you should normally use the relevant puppet class. For example:
system::role { "role::cache::bits": description => "bits Varnish cache server" }
* Files that are fully deployed by Puppet using the ''file'' type should generally use a read-only file mode (i.e., '''0444''' or '''0555'''). This makes it more obvious that the file should not be modified by hand, as Puppet would overwrite any local change anyway.
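For example (the path and source are illustrative):
<syntaxhighlight lang="puppet">
# Fully managed by Puppet, so read-only: hand edits would be
# overwritten on the next puppet run anyway.
file { '/etc/myservice/myservice.conf':
    owner  => 'root',
    group  => 'root',
    mode   => '0444',
    source => 'puppet:///modules/myservice/myservice.conf',
}
</syntaxhighlight>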
* For each service, create a nested class with the name <code>profile::''service''::monitoring</code> (e.g. ''profile::squid::monitoring'') which sets up any required (Nagios) monitoring configuration on the monitoring server.
* Any top-level class definitions should be documented with a descriptive header, like this:
  # Mediawiki_singlenode: A one-step class for setting up a single-node MediaWiki install,
  #  running from a Git tree.
  #  Roles can insert additional lines into LocalSettings.php via the
  #  $role_requires and $role_config_lines vars.
  #  etc.
Such descriptions are especially important for role classes.  Comments like these are used to generate our online puppet documentation.
== Coding Style ==
Please read the upstream style guide, and install [[Puppet_coding#install_puppet-lint|puppet-lint]].
Our codebase is only compatible with Puppet 4.5 and above. The use of puppet 4.x constructs like loops, new functions, and in particular parameter types is strongly encouraged. See the slides linked here for more details. [[File:Puppet 4 - An introduction.pdf|thumb|The slideset for a short presentation about the new things in recent versions of the puppet language.]]
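As a sketch of those constructs (the class and parameter names are made up for illustration):

<syntaxhighlight lang="puppet">
# Hypothetical class showing typed parameters and an each() loop.
class profile::example (
    String        $owner = hiera('profile::example::owner'),
    Array[String] $paths = hiera('profile::example::paths'),
) {
    # Puppet 4 iteration instead of a wrapper defined type
    $paths.each |String $path| {
        file { $path:
            ensure => present,
            owner  => $owner,
            mode   => '0444',
        }
    }
}
</syntaxhighlight>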
Many existing manifests use two-spaces (as suggested in the style guide) instead of our 4 space indent standard; when working on existing code always follow the existing whitespace style of the file or module you are editing. Please do not mix cleanup changes with functional changes in a single patch.
====Spacing, Indentation, & Whitespace====
* Must use four-space soft tabs.
* Must not use literal tab characters.
* Must not contain trailing white space
* Must align fat comma arrows (=>) within blocks of attributes.
* Must use single quotes unless interpolating variables.
* All variables should be enclosed in braces ({}) when being interpolated in a string.
* Variables standing by themselves should not be quoted.
* Must not quote booleans: '''<tt>true</tt>''' is ok, but not '''<tt>'true'</tt>''' or '''<tt>"true"</tt>'''
* Must single quote all resource names and their attributes, except ensure (unless they contain a variable, of course).
* Ensure must always be the first attribute.
* Put a trailing comma after the final resource parameter.
* Again: Must align fat comma arrows (=>) within blocks of attributes.
* Don't group resources of the same type (a.k.a. compression). Do:
<source lang=puppet>
file { '/etc/default/exim4':
    require => Package['exim4-config'],
    owner   => 'root',
    group   => 'root',
    mode    => '0444',
    content => template('exim/exim4.default.erb'),
}

file { '/etc/exim4/aliases/':
    ensure  => directory,
    require => Package['exim4-config'],
    mode    => '0755',
    owner   => 'root',
    group   => 'root',
}
</source>
don't do
<source lang=puppet>
file { '/etc/default/exim4':
    require => Package['exim4-config'],
    owner   => 'root',
    group   => 'root',
    mode    => '0444',
    content => template('exim/exim4.default.erb');
'/etc/exim4/aliases/':
    ensure  => directory,
    require => Package['exim4-config'],
    mode    => '0755',
    owner   => 'root',
    group   => 'root',
}
</source>
* keep the resource name and the resource type on the same line. No need for extra indentation.
* Don't use selectors inside resources. Do:
<source lang=puppet>
$file_mode = $::operatingsystem ? {
    debian => '0007',
    redhat => '0776',
    fedora => '0007',
}

file { '/tmp/readme.txt':
    content => "Hello World\n",
    mode    => $file_mode,
}
</source>
don't do:
<source lang=puppet>
file { '/tmp/readme.txt':
    mode => $::operatingsystem ? {
        debian => '0777',
        redhat => '0776',
        fedora => '0007',
    },
}
</source>
* Case statements should have default cases.
All classes and resource type definitions must be in separate files in the manifests directory of their module.
* Do not nest classes.
* NEVER EVER use inheritance: puppet is not good at that. Also, inheritance will make your life harder when you need to use hiera. Really, don't.
* Try not to use top-scope variables, but if you do use them, scope them correctly:
  $::operatingsystem
but not
  $operatingsystem
* Do not use dashes in class names; preferably use alphabetic names only.
* In parameterized class and defined resource type declarations, parameters that are required should be listed before optional parameters.
* It is in general better to avoid parameters that don't have a default; that will only make your life harder as you need to define that variable for every host that includes it.
* One include per line.
* One class per include.
* Include only the class you need, not the entire scope.
=== Useful global variables ===
These are useful variables you can refer to from anywhere in the Puppet manifests. Most of these get defined in <tt>realm.pp</tt> or <tt>base.pp</tt>.
; <code>$::realm</code> : The "realm" the system belongs to. As of July 2013 we have the realms '''production''', '''fundraising''' and '''labs'''.
; <code>$::site</code> : Contains the 5-letter site name of the server, e.g. "[[pmtpa]]", "[[eqiad]]" or "[[esams]]".
== Testing a patch ==
Before submitting a patch and having Jenkins downvote it, you can run the tests yourself. After committing your changes, run
  $ rbenv exec bundle exec rake test
You should get a <tt>congratulations :)</tt> message
==== Parser validation ====
You can syntax check your changes by running
# puppet parser validate filename-here
==== Lint ====
You can locally install puppet-lint and use it to check your code before submitting, or enhance existing code by fixing puppet-lint errors/warnings.
==== on Debian/Ubuntu ====
apt-get install puppet-lint
===== on Mac OS X =====
sudo gem install puppet-lint
===== generic (Ruby gem) =====
gem install puppet-lint
==== How to use ====
puppet-lint manifest.pp
puppet-lint --with-filename /etc/puppet/modules
==== Ignoring lint warnings and errors ====
If you want you can always ignore specific warnings/errors by surrounding the relevant lines with a special "lint::ignore"-comment. For example, this would ignore "WARNING: top-scope variable being used without an explicit namespace" on line 54.
53        # lint:ignore:variable_scope
54        default    => $fqdn
55        # lint:endignore
You can get all the names of the separate checks with '''puppet-lint --help'''.
==== Find out which warnings/errors are currently ignored ====
There are 2 parts to this. The first is to check the global .puppet-lint.rc file in the root of the puppet repo. You will see that the following 4 checks are currently ignored globally:
*--no-80-chars-check (This is about the lines over 80 chars, we are not planning to remove this exception)
*--no-autoloader_layout-check (This is about moving all remaining classes out of manifests/role/ and we want this fixed)
*--no-puppet-url_without_modules-check (Yes, we want this fixed)
*--no-documentation-check
The second part is checking for individual ignores; find these with a '''grep -r "lint:ignore" *''' in the root of the puppet repo. You can help by fixing and removing any of these remaining issues.
The tracking task for getting this to perfection is
=== [[Portal:Wikimedia VPS|Cloud VPS]] testing ===
Nontrivial puppet changes should be applied to a Cloud VPS instance before being merged into production.  This can uncover some behaviors and code interactions that don't appear during individual file tests -- for example, puppet runs frequently fail due to duplicate resource definitions that aren't obvious to the naked eye.
To test a puppet patch:
1. Create a [[Help:Self-hosted_puppetmaster|self-hosted puppetmaster]]  instance.
2. Configure that instance so that it defines the class you're working on.  You can do this either via the 'configure instance' page or by editing /var/lib/git/operations/puppet/manifests/site.pp to contain something like this:
    node this-is-my-hostname {
        include class::I::am::working::on
    }
3. Run puppet a couple of times (<tt>$ sudo puppetd -tv</tt>) until each subsequent puppet run is clean and doesn't modify anything.
4. Apply your patch to /var/lib/git/operations/puppet.  Do this by cherry-picking from gerrit or by rsyncing from a local working directory.
5. Run puppet again, and note the changes that this puppet run makes.  Does puppet succeed?  Are the changes what you expected?
You could also review the generated catalog and a diff by using [[Help:Puppet-compiler | puppet-compiler]].
=== Jenkins dry run build ===
One way to test a change is to use the puppet catalog compiler job in Jenkins:
* Use your gerrit change number (without the patchset)
* Type into the box the host you want to see changes for (e.g. analytics1003.eqiad.wmnet)
* Run the job, go to it and click on Console Output on the left when it's done
* Navigating this is kind of hard, but all changes are described in detail if you click on Change Catalog from the Compilation Results page for each node.  You can get to that from the Console Output.
=== Manual module testing ===
A relatively simple and crude way of testing is
  puppet apply --noop --modulepath /path/to/modules <manifest>.pp
Do note, however, that this might not work if you reference anything outside of the module hierarchy.
You can get around the missing module hierarchy problem by cloning a local copy of the puppet repo and symlinking in your new module directory.
  git clone --branch production
  cd puppet/modules
  ln -s /path/to/mymodule .
  puppet apply --verbose --noop --modulepath=/home/${USER}/puppet/modules /path/to/mymodule/manifests/init.pp
=== Unit Testing modules ===
==== Rake tests ====
Some modules have unit tests already written -- the other modules still need them!  Modules with tests have a 'Rakefile' in their top dir and a subdir called 'spec'.
Modules imported from upstream typically have tests that run against other upstream modules -- we, however, seek to have tests pass when running only against our own repository.  In order to set that up you'll need to run
  $ rake spec
once in the top level directory to get things set up properly.  After that you can test individual modules by running
  $ rake spec_standalone
in specific module subdirs.
If you want to compare your results with the test results on our official testing box, check out this page:
==== Custom tests ====
If you are testing a module, it makes sense to group these simple tests in a tests/ directory in your module's hierarchy. Your tests can be as simple as
<syntaxhighlight lang=puppet>
include myclass
</syntaxhighlight>
in a file called <code>myclass.pp</code> in the tests directory, or a lot more complex, calling parameterized classes and definitions.
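A slightly more complex test might instantiate a parameterized class directly (the class name and parameters here are hypothetical):

<syntaxhighlight lang="puppet">
# tests/myclass_custom.pp -- hypothetical parameters, for illustration
class { 'myclass':
    port       => 8080,
    config_dir => '/etc/myclass',
}
</syntaxhighlight>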
All this can also be automated by including in the tests directory the following <code>Makefile</code>
<syntaxhighlight lang="make">
MANIFESTS=$(wildcard *.pp)
OBJS=$(MANIFESTS:.pp=.po)

all: test

test: $(OBJS)

# recipe lines must be indented with a tab
%.po: %.pp
	puppet parser validate $<
	puppet apply --noop --modulepath $(MODULES_DIR) $<
</syntaxhighlight>
and running <code>make</code> from the command line (assuming you have make installed).
Please note that <code>--noop</code> does not mean that no code will be executed: it means puppet will not change the state of any resource. So at least exec resources' conditionals, as well as puppet parser functions and facts, will execute normally. Don't go around testing untrusted code.
==== Common errors ====
===== tab character found on line .. =====
FIX: do not use tabs, use 4-space soft tabs
Have this in your vim config (<code>.vimrc</code>):
<syntaxhighlight lang=vim>
set tabstop=4
set shiftwidth=4
set softtabstop=4
set smarttab
set expandtab
</syntaxhighlight>
or use something like this (if your local path is ./wmf/puppet/) to apply it to puppet files only.
<syntaxhighlight lang=vim>
" Wikimedia style uses 4 spaces for indentation
autocmd BufRead */wmf/puppet/* set sw=4 ts=4 et
</syntaxhighlight>
open the file, :retab, :wq, done.  Make sure to review the resulting change carefully before submitting it.
Or put this in your emacs config (.emacs)
<syntaxhighlight lang=emacs>
;; Puppet config with 4 spaces
(setq puppet-indent-level 4)
(setq puppet-include-indent 4)
</syntaxhighlight>
===== double quoted string containing no variables =====
FIX: use single quotes (') for all strings unless there are variables to parse in it
===== unquoted file mode =====
FIX: always quote file modes with single quotes, like: mode => '0750'
===== line has more than 80 characters =====
FIX: wrap your lines to be less than 80 chars; if you have to, there is \<newline>.
Vim can help when writing. Place this in your .vimrc
set textwidth=80
===== not in autoload module layout =====
FIX: turn your code into a puppet module (see the upstream "Module Fundamentals" documentation)
===== ensure found on line but it's not the first attribute =====
FIX: move your "ensure =>" to the top of the resource section. (don't forget to turn a ; into a , if it was the last attribute before)
===== unquoted resource title =====
FIX: quote all resource titles, single quotes
===== top-scope variable being used without an explicit namespace =====
FIX: use an explicit namespace in variable names (see the upstream "Scope and Puppet" article)
===== class defined inside a class =====
FIX: don't define classes inside classes
===== quoted boolean value =====
FIX: do NOT quote boolean values ( => true/ => false)
===== case statement without a default case =====
FIX: add a default case to your case statement
== Puppet modules ==
There are currently two high-level types of modules.  For most things, modules should not contain anything that is specific to the Wikimedia Foundation.  Non-WMF-specific modules could be usable in any other puppet repository at any other organization. A WMF-specific module is different: it may contain configurations specific to WMF (duh), but remember that it is still a module, so it must be usable on its own as well. Users of either type of module should be able to use the module without editing anything inside of it.  WMF-specific modules will probably be higher-level abstractions of services that use and depend on other modules, but they may not refer to anything inside of the top level manifests/ directory.  E.g. the 'applicationserver' module abstracts usages of apache, php and pybal to set up a WMF application server.
Often it will be difficult to choose between creating role classes and creating a WMF specific module.  There isn't a hard rule on this.  You should use your best judgement.  If role classes start to get overly complicated, you might consider creating a WMF specific module instead.
=== 3rd party or upstream modules ===
There are so many great modules out there!  Why spend time writing your own?!
Well, for good reasons.  Puppet runs as root on the production nodes.  We can't import just any 3rd party module, as we can't be sure we can trust it.  Not because the authors would do something malicious (although they might), but because they might do something stupid.
All 3rd party modules must be reviewed in the same manner that we review our own code before it goes to production.
=== git submodules ===
Even so, since puppet modules are supposed to be their own projects, it is sometimes improper to maintain them as subdirectories inside of the operations/puppet repository.  This goes for 3rd party modules as well as non-WMF specific modules.  WMF specific modules can and probably should remain as subdirectories inside of operations/puppet.
We are starting to use git submodules to manage puppet modules.  Puppet modules must go through the same review process as anything in operations/puppet.
=== Adding a new puppet module as a git submodule ===
First up is adding a new puppet module.  'git submodule add' will modify the
.gitmodules file, and also take care of cloning the remote into the local
directory you specify.
  otto@localhost:~/puppet# git submodule add <repository-url>/<my-module> modules/<my-module>
git status shows the modified .gitmodules file, as well as a new 'file' at
modules/<my-module>.  This new file is a pointer to a specific commit in the
<my-module> repository.
  otto@localhost:~/puppet# git status
  # On branch master
  # Changes to be committed:
  #  (use "git reset HEAD <file>..." to unstage)
  # modified:  .gitmodules
  # new file:  modules/<my-module>
Commit the changes and post them to gerrit for review.
  otto@localhost:~/puppet# git commit && git review
This will show up as a change in the operations/puppet repository.  The change will not show the actual code that is added; instead, it will only show a diff of the SHA1s that the submodule points at.  Once this change has been reviewed, approved, and merged, those with operations/puppet checked out will have to run
  git submodule update --init
to be sure that they get the changes required in their local working copies.  This is only really necessary if other users want to view the submodule's content locally.
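For reference, the entry that <code>git submodule add</code> writes to <code>.gitmodules</code> looks roughly like the following. The module name and URL here are invented for illustration; use the actual gerrit URL for your module:
<syntaxhighlight lang=ini>
# Hypothetical .gitmodules entry created by 'git submodule add'
[submodule "modules/mymodule"]
	path = modules/mymodule
	url = https://gerrit.example.org/r/operations/puppet/mymodule
</syntaxhighlight>
Alongside this entry, git records a "gitlink" at <code>modules/mymodule</code> pointing at one specific commit of the submodule's repository; that pointer is what the commit in operations/puppet actually captures.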
=== Making changes to a submodule ===
'''You should never edit a submodule directly in the subdirectory of an operations/puppet working copy.'''  If you want to make changes to a submodule, clone that submodule elsewhere directly.  Edit there and submit changes for review.
  git clone <my-module>
  cd <my-module>
  # edit stuff, push to gerrit for review.
  git commit && git review
Once your module change has been approved, you can update your operations/puppet working copy so that it points to the SHA1 you want it to.
  cd path/to/operations/puppet/modules/<my-module>
  git pull # or whatever you want to check out your desired SHA1.
  cd ../..
  git commit modules/<my-module> && git review
This will push the update of the new SHA1 to operations/puppet for review.
== Miscellaneous ==
=== VIM guidelines ===
The following in <code>~/.vim/ftdetect/puppet.vim</code> can help with a lot of formatting errors
<syntaxhighlight lang=vim>
" detect puppet filetype
autocmd BufRead,BufNewFile *.pp set filetype=puppet
autocmd BufRead,BufNewFile *.pp setlocal tabstop=4 shiftwidth=4 softtabstop=4 expandtab textwidth=80 smarttab
</syntaxhighlight>
And for proper syntax highlighting, the following can be done:
  sudo aptitude install vim-puppet
  mkdir -p ~/.vim/syntax
  cp /usr/share/vim/addons/syntax/puppet.vim ~/.vim/syntax/
And definitely have a look at the vim plugin which will report puppet errors directly in your buffer whenever you save the file (works for python/php etc as well).
Of course symlinks can be used, or you can just install vim-addon-manager to manage plugins. vim-puppet provides an ftplugin and an indent plugin as well. Maybe they are worth the time, but it is up to each user to decide.
=== Emacs guidelines ===
Syntax Highlighting
The puppet-el deb package can be used for emacs syntax highlighting, or the raw emacs libraries can be found
[ here].
<pre>puppet-el - syntax highlighting for puppet manifests in emacs</pre>
The following two sections can be added to a .emacs file to help with 4-space indentation and trailing whitespace.
<syntaxhighlight lang=emacs>
;; Puppet config with 4 spaces
(setq puppet-indent-level 4)
(setq puppet-include-indent 4)
;; Remove Trailing Whitespace on Save
(add-hook 'before-save-hook 'delete-trailing-whitespace)
</syntaxhighlight>
=== Puppet for [[Portal:Wikimedia VPS|Wikimedia VPS Projects]] ===
There is currently only one Puppet repository, and it is applied both to Cloud VPS and production instances.  Classes ''may'' be added to the Operations/Puppet repository that are only intended for use on Cloud VPS instances.  The future is uncertain, though:  code is often reused, and Cloud VPS services are sometimes promoted to production.  For that reason, any changes made to Operations/Puppet must be held to the same style and security standards as code that would be applied on a production service.
=== Packages that use pip, gem, etc. ===
Other than MediaWiki and related extensions, any software installed by puppet should be a Debian package and should come either from the WMF apt repo or from an official upstream repo.  Never use source-based packaging systems like pip or gem, as these options haven't been properly evaluated and approved as secure by WMF Operations staff.
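In practice this means declaring packages with Puppet's <code>package</code> resource and its default (apt) provider, rather than the <code>pip</code> or <code>gem</code> providers. The package name below is just an example:
<syntaxhighlight lang=puppet>
# Install from apt (the default provider on Debian) -- reviewed and reproducible.
package { 'python3-requests':
    ensure => present,
}

# Avoid this: the pip provider fetches unreviewed code from PyPI at run time.
# package { 'requests':
#     ensure   => present,
#     provider => 'pip',
# }
</syntaxhighlight>
If no Debian package exists for the software you need, the usual route is to build and upload one to the WMF apt repo rather than pulling from a language-specific package index.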