Apache Traffic Server
Apache Traffic Server is a caching proxy server.
Architecture
There are three distinct processes in Traffic Server:
- traffic_server
- traffic_manager
- traffic_cop
traffic_server is the process responsible for dealing with user traffic: accepting connections, processing requests, and serving documents from the cache or the origin server. traffic_server is an event-driven, multi-threaded process. Threads are used to take advantage of multiple CPUs, not to handle multiple connections concurrently (e.g. by spawning a thread per connection, or by using a thread pool). Instead, an event system is used to schedule work on threads. ATS uses a state machine to handle each transaction (a single HTTP request from a client and the response Traffic Server sends to that client) and provides a system of hooks where plugins (e.g. Lua) can step in and do things; a minimal sketch is given at the end of this section. Specific timers are used at the various states.
traffic_manager is responsible for launching, monitoring and configuring traffic_server, handling the statistics interface, cluster administration and virtual IP failover.
traffic_cop is a watchdog program monitoring the health of both traffic_manager and traffic_server. It has traditionally been the command used to start ATS. In a systemd world it can probably be avoided, and traffic_manager can be used as the program executed to start the unit.
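As an illustration of the hook system, here is a minimal sketch using the Lua plugin covered in the Lua scripting section below; the script path and the logged message are hypothetical:
-- /var/tmp/ats-log-origin-status.lua (hypothetical example)
-- Called by the transaction state machine once the origin response
-- headers have been read.
function read_response()
    ts.debug("origin status: " .. ts.server_response.get_status())
end

-- Entry point when the script is attached to a remap rule: register
-- the callback on this transaction's read-response hook.
function do_remap()
    ts.hook(TS_LUA_HOOK_READ_RESPONSE_HDR, read_response)
    return 0
end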
Configuration
The changes to the default configuration required to get a caching proxy are:
# /etc/trafficserver/remap.config
map client_url origin_server_url
The following rules map grafana and phabricator to their respective backends and define a catchall for requests that don't match either of the first two rules:
# /etc/trafficserver/remap.config
map http://grafana.wikimedia.org/ http://krypton.eqiad.wmnet/
map http://phabricator.wikimedia.org/ http://iridium.eqiad.wmnet/
map / http://deployment-mediawiki05.deployment-prep.eqiad.wmflabs/
# /etc/trafficserver/records.config
CONFIG proxy.config.http.server_ports STRING 3128 3128:ipv6
CONFIG proxy.config.admin.user_id STRING trafficserver
CONFIG proxy.config.http.cache.required_headers INT 1
CONFIG proxy.config.url_remap.pristine_host_hdr INT 1
CONFIG proxy.config.disable_configuration_modification INT 1
If proxy.config.http.cache.required_headers is set to 2, which is the default, the origin server is required to set an explicit lifetime, from either Expires or Cache-Control: max-age. By setting required_headers to 1, objects with only a Last-Modified header are considered for caching too. Setting the value to 0 means that no headers are required to make documents cacheable.
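For example, with required_headers set to 1 an origin response like the following hypothetical one is considered for caching even though it specifies no explicit lifetime; with the default value of 2 it would not be:
HTTP/1.1 200 OK
Last-Modified: Mon, 01 Apr 2019 10:00:00 GMT
Content-Type: text/html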
TLS
Basic TLS termination can be configured as follows:
# /etc/trafficserver/records.config
CONFIG proxy.config.http.server_ports STRING 3128 3128:ipv6 3129:ssl 3129:ipv6:ssl
CONFIG proxy.config.ssl.server.cert.path STRING /etc/acmecerts/
CONFIG proxy.config.ssl.server.private_key.path STRING /etc/acmecerts/
# /etc/trafficserver/ssl_multicert.config
dest_ip=* ssl_cert_name=rsa.crt,ecdsa.crt ssl_key_name=rsa.key,ecdsa.key
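Once the certificates are in place, termination can be smoke-tested with curl; -k skips certificate verification, which is handy with self-signed test certificates:
$ curl -kv https://localhost:3129/ 2>&1 | grep 'SSL connection'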
Load balancing
In order to load balance requests among origin servers, parent_proxy_routing needs to be enabled in records.config:
# records.config
CONFIG proxy.config.http.parent_proxy_routing_enable INT 1
CONFIG proxy.config.diags.debug.enabled INT 1
CONFIG proxy.config.diags.debug.tags STRING parent_select
A remap rule needs to be configured for the site:
# remap.config
map http://en.wikipedia.org https://enwiki.org
Finally, load balancing can be configured by specifying the nodes and the load balancing policy in parent.config:
# parent.config
dest_domain=enwiki.org parent="mw1261.eqiad.wmnet:443,mw1262.eqiad.wmnet:443" parent_is_proxy=false round_robin=strict
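round_robin=strict cycles through the parents in machine order. Other policies exist as well; for instance, consistent_hash selects the parent based on a hash of the URL, so that each object is fetched from, and cached by, a single parent:
# parent.config
dest_domain=enwiki.org parent="mw1261.eqiad.wmnet:443,mw1262.eqiad.wmnet:443" parent_is_proxy=false round_robin=consistent_hash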
Logging
Diagnostic output can be sent to standard output and standard error instead of the default logfiles, which is a good idea in order to take advantage of systemd's journal.
# /etc/trafficserver/records.config
CONFIG proxy.config.diags.output.status STRING O
CONFIG proxy.config.diags.output.note STRING O
CONFIG proxy.config.diags.output.warning STRING O
CONFIG proxy.config.diags.output.error STRING E
CONFIG proxy.config.diags.output.fatal STRING E
CONFIG proxy.config.diags.output.alert STRING E
CONFIG proxy.config.diags.output.emergency STRING E
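With this configuration, diagnostic output ends up in systemd's journal and can be followed with:
$ sudo journalctl -u trafficserver -f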
Health checks
Load the `healthchecks` plugin:
# /etc/trafficserver/plugin.config
healthchecks.so /etc/trafficserver/healthchecks.conf
Define a health check:
# /etc/trafficserver/healthchecks.conf
/check /etc/trafficserver/ts-alive text/plain 200 403
Response body:
# /etc/trafficserver/ts-alive
All good
With the above configuration, GET requests to `/check` will result in 200 responses from ATS with the response body defined in `/etc/trafficserver/ts-alive`.
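The health check can be exercised with curl; assuming the server_ports configuration shown earlier, ATS listens on port 3128:
$ curl -i http://localhost:3128/check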
Cache inspector
To enable the cache inspector functionality, add the following remap rules:
map /cache-internal/ http://{cache-internal}
map /cache/ http://{cache}
map /stat/ http://{stat}
map /test/ http://{test}
map /hostdb/ http://{hostdb}
map /net/ http://{net}
map /http/ http://{http}
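Depending on the ATS version, the inspector endpoints may also need to be enabled in records.config. A sketch, assuming the http_ui_enabled setting (check the documentation of your ATS version for the exact name and values):
# /etc/trafficserver/records.config
CONFIG proxy.config.http_ui_enabled INT 3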
systemd unit
# /etc/systemd/system/trafficserver.service.d/puppet-override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/traffic_manager --nosyslog
Restart=always
RestartSec=1
ExecReload=
# XXX: `traffic_server -C verify_config` is broken: it causes configuration
# reloads, which cause errors with ascii_pipe logs
#ExecReload=/usr/bin/traffic_server -C verify_config
ExecReload=/usr/bin/traffic_ctl config reload
# traffic_manager is terminated with SIGTERM and exits with the received signal
# number (15)
SuccessExitStatus=15
LimitNOFILE=500000
LimitMEMLOCK=90000
# Security options
ProtectKernelModules=yes
ProtectKernelTunables=yes
PrivateTmp=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX AF_NETLINK
CapabilityBoundingSet=CAP_DAC_OVERRIDE CAP_SETGID CAP_SETUID
SystemCallFilter=~@keyring @clock @cpu-emulation @obsolete @module @raw-io @debug
# The entire file system hierarchy is mounted read-only, except for the API
# file system subtrees /dev, /proc and /sys
ProtectSystem=strict
# Whitelist read/write directories
ReadWritePaths=/var/log/trafficserver
ReadWritePaths=/var/run/trafficserver
ReadWritePaths=/var/cache/trafficserver
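After creating or changing the override file, make systemd reload its configuration and restart the service:
$ sudo systemctl daemon-reload
$ sudo systemctl restart trafficserver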
Additional ATS instances
Traffic Server provides a poorly documented feature called layouts. The ATS layout defines the following paths:
- exec_prefix (TS_BUILD_EXEC_PREFIX)
- bindir (TS_BUILD_BINDIR)
- sbindir (TS_BUILD_SBINDIR)
- sysconfdir (TS_BUILD_SYSCONFDIR)
- datadir (TS_BUILD_DATADIR)
- includedir (TS_BUILD_INCLUDEDIR)
- libdir (TS_BUILD_LIBDIR)
- libexecdir (TS_BUILD_LIBEXECDIR)
- localstatedir (TS_BUILD_LOCALSTATEDIR)
- runtimedir (TS_BUILD_RUNTIMEDIR)
- logdir (TS_BUILD_LOGDIR)
- mandir (TS_BUILD_MANDIR)
- infodir (TS_BUILD_INFODIR)
- cachedir (TS_BUILD_CACHEDIR)
Those paths are defined at build time by their corresponding TS_BUILD_ constants. However, they can be replaced at runtime by using a layout/runroot file. A layout file is a YAML file that defines the paths listed above, with the following syntax:
prefix: ./runroot
exec_prefix: ./runroot
bindir: ./runroot/custom_bin
sbindir: ./runroot/custom_sbin
sysconfdir: ./runroot/custom_sysconf
datadir: ./runroot/custom_data
includedir: ./runroot/custom_include
libdir: ./runroot/custom_lib
libexecdir: ./runroot/custom_libexec
localstatedir: ./runroot/custom_localstate
runtimedir: ./runroot/custom_runtime
logdir: ./runroot/custom_log
cachedir: ./runroot/custom_cache
After defining the layout file, the runroot can be initialized by running traffic_layout:
$ traffic_layout --init --layout="custom.yml" --copy-style=soft
Take into account that the custom layout defines its own bin and sbin directories, so the binaries need to be copied inside the runroot. Fortunately, the --copy-style flag controls how the executables are copied:
- copy: Full copy
- hard: Use hard links
- soft: Use symlinks
Our goal here is to run several instances of the same ATS version; --copy-style=soft makes that possible while still benefiting from system-wide ATS upgrades.
After the layout has been initialized, any Traffic Server CLI tool can use it by adding the --run-root option or setting the TS_RUNROOT environment variable:
$ traffic_ctl --run-root="custom.yml" config reload
$ TS_RUNROOT="custom.yml" traffic_ctl config reload
Debugging
The XDebug plugin allows clients to check various aspects of ATS operation.
To enable the plugin, add xdebug.so to plugin.config, add the following lines to records.config, and restart trafficserver.
CONFIG proxy.config.diags.debug.enabled INT 1
CONFIG proxy.config.diags.debug.tags STRING xdebug
Once the plugin is enabled, clients can specify various values in the X-Debug header and receive the relevant information back.
For example:
# cache hit
$ curl -v -H "X-Debug: X-Milestones" http://localhost 2>&1 | grep Milestones:
< X-Milestones: PLUGIN-TOTAL=0.000022445, PLUGIN-ACTIVE=0.000022445, CACHE-OPEN-READ-END=0.000078570, CACHE-OPEN-READ-BEGIN=0.000078570, UA-BEGIN-WRITE=0.000199094, UA-READ-HEADER-DONE=0.000000000, UA-FIRST-READ=0.000000000, UA-BEGIN=0.000000000
# cache miss
< X-Milestones: PLUGIN-TOTAL=0.000017432, PLUGIN-ACTIVE=0.000017432, DNS-LOOKUP-END=0.091413811, DNS-LOOKUP-BEGIN=0.000148548, CACHE-OPEN-WRITE-END=0.091413811, CACHE-OPEN-WRITE-BEGIN=0.091413811, CACHE-OPEN-READ-END=0.000056997, CACHE-OPEN-READ-BEGIN=0.000056997, SERVER-READ-HEADER-DONE=0.218755336, SERVER-FIRST-READ=0.218755336, SERVER-BEGIN-WRITE=0.091413811, SERVER-CONNECT-END=0.091413811, SERVER-CONNECT=0.091413811, SERVER-FIRST-CONNECT=0.091413811, UA-BEGIN-WRITE=0.218755336, UA-READ-HEADER-DONE=0.000000000, UA-FIRST-READ=0.000000000, UA-BEGIN=0.000000000
The full list of debugging headers is available in the XDebug Plugin documentation.
In the setup at WMF, the plugin can be enabled by setting profile::trafficserver::backend::enable_xdebug to true in Hiera. It can then be used by specifying the X-ATS-Debug request header. For example, to dump all client/intermediary/origin request/response headers:
$ curl -H "X-ATS-Debug: log-headers" http://localhost
Request logs
Non-purge request logs can be inspected by running atslog-backend, a wrapper around fifo-log-tailer:
$ sudo atslog-backend
Call fifo-log-tailer directly to inspect PURGE traffic:
# LOG_SOCKET=/var/run/trafficserver/purge.sock fifo-log-tailer
Building and running from Git
To build trafficserver from git:
autoreconf -if
./configure --enable-layout=Debian --sysconfdir=/etc/trafficserver --libdir=/usr/lib/trafficserver --libexecdir=/usr/lib/trafficserver/modules
make -j8
Add a minimal /etc/trafficserver/records.config:
CONFIG proxy.config.disable_configuration_modification INT 1
# Replace $PATH_TO_REPO!
CONFIG proxy.config.bin_path STRING ${PATH_TO_REPO}/trafficserver/src/traffic_server/
The newly built traffic_server and traffic_manager binaries can be tested as follows:
sudo -u trafficserver ./src/traffic_server/traffic_server
sudo -u trafficserver ./src/traffic_manager/traffic_manager --nosyslog
Packaging
To package a new stable release, download it from https://trafficserver.apache.org/downloads and check its SHA.
Then import it into operations/debs/trafficserver with:
PRISTINE_ALL_XDELTA=xdelta gbp import-orig --pristine-tar /tmp/trafficserver-8.0.2.tar.bz2
This will update the following branches; don't forget to push all of them to the repository:
- master
- upstream
- pristine-tar
Build with:
WIKIMEDIA=yes ARCH=amd64 BACKPORTS=yes DIST=stretch GIT_PBUILDER_AUTOCONF=no gbp buildpackage -jauto -us -uc -sa --git-builder=git-pbuilder
The procedure to package new RC versions is roughly as follows. This assumes that: (1) the new RC artifacts are made available under https://people.apache.org/~bcall/8.0.3-rc0/, and (2) you want to build the new packages on boron.eqiad.wmnet.
https_proxy=http://url-downloader.wikimedia.org:8080 wget https://people.apache.org/~bcall/8.0.3-rc0/trafficserver-8.0.3-rc0.tar.bz2
# Check that the sha512 matches https://people.apache.org/~bcall/8.0.3-rc0/trafficserver-8.0.3-rc0.tar.bz2.sha512
Then obtain our latest prod packages and update them:
apt-get source trafficserver
cd trafficserver-8.0.2/
uupdate -v 8.0.3~rc0 ../trafficserver-8.0.3-rc0.tar.bz2
cd ../trafficserver-8.0.3~rc0
BACKPORTS=yes WIKIMEDIA=yes ARCH=amd64 DIST=stretch GIT_PBUILDER_AUTOCONF=no git-pbuilder
Cheatsheet
Rolling restart in codfw:
sudo cumin -b1 'A:cp-ats-codfw' 'ats-backend-restart ; sleep 30'
Show non-default configuration values:
sudo traffic_ctl config diff
Configuration reload:
sudo traffic_ctl config reload
Check if a reload/restart is needed:
sudo traffic_ctl config status
Start in debugging mode, dumping headers:
sudo traffic_server -T http_hdrs
Access metrics from the CLI:
traffic_ctl metric get proxy.process.http.cache_hit_fresh
Multiple metrics can be accessed with 'match':
traffic_ctl metric match proxy.process.ssl.*
Set the value of a metric to zero:
traffic_ctl metric zero proxy.process.http.completed_requests
Show storage usage:
traffic_server -C check
Lua scripting
ATS plugins can be written in Lua. As an example, this is how to choose an origin server dynamically:
# /etc/trafficserver/remap.config
map http://127.0.0.1:3128/ http://$origin_server_ip/ @plugin=/usr/lib/trafficserver/modules/tslua.so @pparam=/var/tmp/ats-set-backend.lua
reverse_map http://$origin_server_ip/ http://127.0.0.1:3128/
Choosing origin server
Selecting the appropriate origin server for a given request can be done using ATS mapping rules. The same goal can be achieved in Lua:
-- /var/tmp/ats-set-backend.lua
function do_remap()
    local url = ts.client_request.get_url()
    if url:match("/api/rest_v1/") then
        ts.client_request.set_url_host('origin-server.eqiad.wmnet')
        ts.client_request.set_url_port(80)
        ts.client_request.set_url_scheme('http')
        return TS_LUA_REMAP_DID_REMAP
    end
end
Negative response caching
By default ATS caches negative responses such as 404 or 503 only if the response defines a max-age via the Cache-Control header. This behavior can be changed by setting the configuration option proxy.config.http.negative_caching_enabled, which allows caching of negative responses that do NOT specify Cache-Control. If negative caching is enabled, the lifetime of negative responses without Cache-Control is defined by proxy.config.http.negative_caching_lifetime, in seconds, defaulting to 1800.
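For example, a minimal records.config change enabling negative caching with a 10 minute lifetime:
# /etc/trafficserver/records.config
CONFIG proxy.config.http.negative_caching_enabled INT 1
CONFIG proxy.config.http.negative_caching_lifetime INT 600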
One might however want to cache 404 responses that do not send Cache-Control, without caching any 503 responses. Given that proxy.config.http.negative_caching_enabled enables the behavior for a whole set of negative status codes, and that ATS versions below 8.0.0 did not allow specifying the list of negative response status codes to cache, the goal can be achieved by setting Cache-Control in Lua only for certain status codes:
function read_response()
    local status_code = ts.server_response.get_status()
    local cache_control = ts.server_response.header['Cache-Control']
    -- Cache 404 responses without CC for 10s
    if status_code == 404 and not(cache_control) then
        ts.server_response.header['Cache-Control'] = 'max-age=10'
    end
end

function do_remap()
    ts.hook(TS_LUA_HOOK_READ_RESPONSE_HDR, read_response)
    return 0
end
Starting with ATS 8.0.0, the configuration option proxy.config.http.negative_caching_list allows specifying the list of negative response status codes to cache.
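With it, the Lua workaround above can be reduced to a records.config entry listing only the desired status codes, for instance just 404 (a sketch; see the records.config documentation for the exact value format):
# /etc/trafficserver/records.config
CONFIG proxy.config.http.negative_caching_list STRING 404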
Setting X-Cache-Int
As another example, the following script takes care of setting the X-Cache-Int response header:
-- /var/tmp/ats-set-x-cache-int.lua
function cache_lookup()
    local cache_status = ts.http.get_cache_lookup_status()
    ts.ctx['cstatus'] = cache_status
end

function cache_status_to_string(status)
    if status == TS_LUA_CACHE_LOOKUP_MISS then
        return "miss"
    end
    if status == TS_LUA_CACHE_LOOKUP_HIT_FRESH then
        return "hit"
    end
    if status == TS_LUA_CACHE_LOOKUP_HIT_STALE then
        return "miss"
    end
    if status == TS_LUA_CACHE_LOOKUP_SKIPPED then
        return "pass"
    end
    return "bug"
end

function gen_x_cache_int()
    local hostname = "cp4242" -- from puppet
    local cache_status = cache_status_to_string(ts.ctx['cstatus'])
    local v = ts.client_response.header['X-Cache-Int']
    local mine = hostname .. " " .. cache_status
    if (v) then
        v = v .. ", " .. mine
    else
        v = mine
    end
    ts.client_response.header['X-Cache-Int'] = v
    ts.client_response.header['X-Cache-Status'] = cache_status
end

function do_remap()
    ts.hook(TS_LUA_HOOK_CACHE_LOOKUP_COMPLETE, cache_lookup)
    ts.hook(TS_LUA_HOOK_SEND_RESPONSE_HDR, gen_x_cache_int)
    return 0
end
Custom metrics
Ad-hoc metrics can be created, incremented and accessed in Lua. For example, to keep per-origin counters of origin server requests:
function do_global_send_request()
    local ip = ts.server_request.server_addr.get_ip()
    if ip == "0.0.0.0" then
        -- internal stuff, not an actual origin server request
        return 0
    end
    local counter_name = "origin_requests_" .. ip
    local counter = ts.stat_find(counter_name)
    if counter == nil then
        counter = ts.stat_create(counter_name,
                                 TS_LUA_RECORDDATATYPE_INT,
                                 TS_LUA_STAT_PERSISTENT,
                                 TS_LUA_STAT_SYNC_COUNT)
    end
    counter:increment(1)
end
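Counters created this way can then be read like any built-in metric, as shown in the cheatsheet above:
$ traffic_ctl metric match origin_requests_.*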
Forcing a cache miss (similar to ban)
Sometimes it is desirable to ensure that certain cached responses are not returned to clients, and that instead the object is fetched again from the origin server.
This can be done by changing the lookup status in do_global_cache_lookup_complete. For example, to re-fetch all objects with Content-Type 'application/x-www-form-urlencoded':
function do_global_cache_lookup_complete()
    local cache_status = ts.http.get_cache_lookup_status()
    if cache_status == TS_LUA_CACHE_LOOKUP_HIT_FRESH then
        if ts.cached_response.header['Content-Type'] == 'application/x-www-form-urlencoded' then
            ts.http.set_cache_lookup_status(TS_LUA_CACHE_LOOKUP_MISS)
        end
    end
end
Functionally, this is equivalent to the concept of banning in Varnish.
Debugging
Debugging output can be produced from Lua with ts.debug("message"). The following configuration needs to be enabled to log debug output:
CONFIG proxy.config.diags.debug.enabled INT 1
CONFIG proxy.config.diags.debug.tags STRING ts_lua
If other debugging tags need to be enabled as well, for example http_hdrs:
CONFIG proxy.config.diags.debug.tags STRING ts_lua|http_hdrs
See the documentation for more tags.
Unit testing
The busted framework can be used to test Lua scripts. It can be installed as follows:
apt install luarocks
luarocks install busted
luarocks install luacov
The following unit tests cover some of the functionality implemented by ats-set-x-cache-int.lua:
-- unit_test.lua
_G.ts = { client_response = { header = {} }, ctx = {} }

describe("Busted unit testing framework", function()
    describe("script for ATS Lua Plugin", function()
        it("test - hook", function()
            stub(ts, "hook")
            require("ats-set-x-cache-int")
            local result = do_remap()
            assert.are.equals(0, result)
        end)

        it("test - gen_x_cache_int", function()
            stub(ts, "hook")
            require("ats-set-x-cache-int")
            gen_x_cache_int()
            assert.are.equals('miss', ts.client_response.header['X-Cache-Status'])
            assert.are.equals('cp4242 miss', ts.client_response.header['X-Cache-Int'])
        end)
    end)
end)
Run the tests and generate a coverage report with:
$ busted -c unit_test.lua
●●
2 successes / 0 failures / 0 errors / 0 pending : 0.012771 seconds
$ luacov ; cat luacov.report.out
Storage
Information about permanent storage can be obtained by using the python3-superior-cache-analyzer Debian package:
from scan import span
s = span.Span("/dev/nvme0n1p1")
print(s)