Temporal Cluster configuration reference
Much of the behavior of a Temporal Cluster is configured in the development.yaml file, which may contain the following top-level sections:
global
persistence
log
clusterMetadata
services
publicClient
archival
namespaceDefaults
dcRedirectionPolicy
dynamicConfigClient
Changing any properties in the development.yaml file requires a process restart for the changes to take effect.
Configuration parsing code is available in the Temporal Server repository.
global
The global section contains process-wide configuration. See below for a minimal configuration (optional parameters are commented out).
global:
  membership:
    broadcastAddress: '127.0.0.1'
  metrics:
    prometheus:
      framework: 'tally'
      listenAddress: '127.0.0.1:8000'
membership
The membership section controls the following membership layer parameters.
maxJoinDuration
The amount of time the service will attempt to join the gossip layer before failing.
Default is 10s.
broadcastAddress
Used by the gossip protocol to communicate with other hosts in the same Cluster for membership info.
Use an IP address that is reachable by the other hosts in the Cluster.
If there is only one host in the Cluster, you can use 127.0.0.1.
Check net.ParseIP for supported syntax; only IPv4 is supported.
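For example, a minimal membership sketch (the join duration and broadcast address here are illustrative, not defaults):
global:
  membership:
    # How long to attempt to join the gossip layer before failing.
    maxJoinDuration: 30s
    # An address reachable by the other hosts in the Cluster.
    broadcastAddress: '203.0.113.10'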
metrics
Configures the Cluster's metrics subsystem. Specific providers are configured using provider names as the keys.
statsd
prometheus
m3
prefix
The prefix to be applied to all outgoing metrics.
tags
The set of key-value pairs to be reported as part of every metric.
excludeTags
A map from tag name string to tag value string list.
This is useful to exclude tags that might have unbounded cardinality.
The value string list can be used to whitelist values of that excluded tag so that they continue to be included.
For example, you might exclude task_queue because it has unbounded cardinality, while still whitelisting specific task_queue values that you want to see, as in the sketch below.
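A hypothetical sketch (the task queue name is invented for illustration):
global:
  metrics:
    # Exclude the unbounded task_queue tag, but keep reporting one whitelisted value.
    excludeTags:
      task_queue:
        - 'orders-task-queue'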
statsd
The statsd section supports the following settings:
- hostPort: The host:port of the statsd server.
- prefix: Specific prefix in reporting to statsd.
- flushInterval: Maximum interval for sending packets. (Default 300ms.)
- flushBytes: Specifies the maximum UDP packet size you wish to send. (Default 1432 bytes.)
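A minimal statsd sketch (the server address and prefix are assumptions for illustration):
global:
  metrics:
    statsd:
      # Address of the statsd server to report to.
      hostPort: '127.0.0.1:8125'
      prefix: 'temporal'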
prometheus
The prometheus section supports the following settings:
- framework: The framework to use; currently supports opentelemetry and tally, and defaults to tally. We plan to switch the default to opentelemetry once its API becomes stable.
- listenAddress: Address for Prometheus to scrape metrics from. The Temporal Server uses the Prometheus client API, and the listenAddress configuration is used to listen for metrics.
- handlerPath: Metrics handler path for the scraper; default is /metrics.
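A sketch that sets all three values explicitly (the listen address mirrors the global example above; the handler path shown is the default):
global:
  metrics:
    prometheus:
      framework: 'tally'
      # Address on which to expose metrics for scraping.
      listenAddress: '127.0.0.1:8000'
      handlerPath: '/metrics'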
m3
The m3 section supports the following settings:
- hostPort: The host:port of the M3 server.
- service: The service tag that this client emits.
- queue: M3 reporter queue size; default is 4k.
- packetSize: M3 reporter max packet size; default is 32k.
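A minimal m3 sketch (the address and service tag are illustrative):
global:
  metrics:
    m3:
      # Address of the M3 server to report to.
      hostPort: '127.0.0.1:9052'
      service: 'temporal'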
pprof
- port: If specified, this will initialize pprof upon process start on the listed port.
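For example (the port value is an illustrative choice):
global:
  pprof:
    # Start a pprof listener on this port at process start.
    port: 7936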
tls
The tls section controls the SSL/TLS settings for network communication and contains two subsections, internode and frontend.
The internode section governs internal service communication among roles, while the frontend section governs SDK client communication to the Frontend Service role.
Each of these subsections contains a server section and a client section.
The server section contains the following parameters:
- certFile: The path to the file containing the PEM-encoded public key of the certificate to use.
- keyFile: The path to the file containing the PEM-encoded private key of the certificate to use.
- requireClientAuth: boolean - requires clients to authenticate with a certificate when connecting; otherwise known as mutual TLS.
- clientCaFiles: A list of paths to files containing the PEM-encoded public keys of the Certificate Authorities you wish to trust for client authentication. This value is ignored if requireClientAuth is not enabled.
See the server samples repo for sample TLS configurations.
Below is an example enabling Server TLS (https) between SDKs and the Frontend APIs:
global:
  tls:
    frontend:
      server:
        certFile: /path/to/cert/file
        keyFile: /path/to/key/file
      client:
        serverName: dnsSanInFrontendCertificate
Note that the client section generally needs to be provided to specify an expected DNS SubjectName contained in the presented server certificate via the serverName field; this is needed because Temporal uses IP-to-IP communication.
You can avoid specifying this if your server certificates contain the appropriate IP Subject Alternative Names.
Additionally, the rootCaFiles field needs to be provided when the client's host does not trust the Root CA used by the server.
The example below extends the above example to manually specify the Root CA used by the Frontend Service:
global:
  tls:
    frontend:
      server:
        certFile: /path/to/cert/file
        keyFile: /path/to/key/file
      client:
        serverName: dnsSanInFrontendCertificate
        rootCaFiles:
          - /path/to/frontend/server/CA/files
Below is an additional example of a fully secured cluster using mutual TLS for both frontend and internode communication with manually specified CAs:
global:
  tls:
    internode:
      server:
        certFile: /path/to/internode/cert/file
        keyFile: /path/to/internode/key/file
        requireClientAuth: true
        clientCaFiles:
          - /path/to/internode/serverCa
      client:
        serverName: dnsSanInInternodeCertificate
        rootCaFiles:
          - /path/to/internode/serverCa
    frontend:
      server:
        certFile: /path/to/frontend/cert/file
        keyFile: /path/to/frontend/key/file
        requireClientAuth: true
        clientCaFiles:
          - /path/to/internode/serverCa
          - /path/to/sdkClientPool1/ca
          - /path/to/sdkClientPool2/ca
      client:
        serverName: dnsSanInFrontendCertificate
        rootCaFiles:
          - /path/to/frontend/serverCa
Note: When client authentication is enabled, the internode.server certificate is used as the client certificate among services. This adds the following requirements:
- The internode.server certificate must be specified on all roles, even for a frontend-only configuration.
- Internode server certificates must be minted with either no Extended Key Usages or both ServerAuth and ClientAuth EKUs.
- If your Certificate Authorities are untrusted, such as in the previous example, the internode server CA will need to be specified in the following places:
  - internode.server.clientCaFiles
  - internode.client.rootCaFiles
  - frontend.server.clientCaFiles
persistence
The persistence section holds the configuration for the data store/persistence layer.
The following example shows a minimal specification for a password-secured Cluster using Cassandra.
persistence:
  defaultStore: default
  visibilityStore: cass-visibility # The primary Visibility store.
  secondaryVisibilityStore: es-visibility # A secondary Visibility store added to enable Dual Visibility.
  numHistoryShards: 512
  datastores:
    default:
      cassandra:
        hosts: '127.0.0.1'
        keyspace: 'temporal'
        user: 'username'
        password: 'password'
    cass-visibility:
      cassandra:
        hosts: '127.0.0.1'
        keyspace: 'temporal_visibility'
    es-visibility:
      elasticsearch:
        version: 'v7'
        logLevel: 'error'
        url:
          scheme: 'http'
          host: '127.0.0.1:9200'
        indices:
          visibility: temporal_visibility_v1_dev
        closeIdleConnectionsInterval: 15s
The following top-level configuration items are available:
numHistoryShards
Required - The number of history shards to create when initializing the Cluster.
Warning: This value is immutable and will be ignored after the first run. Ensure that you set this value high enough to scale with the worst-case peak load for this Cluster.
defaultStore
Required - The name of the data store definition that should be used by the Temporal server.
visibilityStore
Required - The name of the primary data store definition that should be used to set up Visibility on the Temporal Cluster.
secondaryVisibilityStore
Optional - The name of the secondary data store definition that should be used to set up Dual Visibility on the Temporal Cluster.
datastores
Required - Contains named data store definitions to be referenced.
Each definition is declared under a heading giving its name (e.g., default: and cass-visibility: above) and contains a data store definition.
Data store definitions must be either cassandra or sql.
cassandra
A cassandra data store definition can contain the following values:
- hosts: Required - "," separated Cassandra endpoints, e.g. "192.168.1.2,192.168.1.3,192.168.1.4".
- port: Default: 9042 - Cassandra port used for connection by the gocql client.
- user: Cassandra username used for authentication by the gocql client.
- password: Cassandra password used for authentication by the gocql client.
- keyspace: Required - the Cassandra keyspace.
- datacenter: The data center filter arg for Cassandra.
- maxConns: The max number of connections to this data store for a single TLS configuration.
- tls: See TLS below.
sql
A sql data store definition can contain the following values (a MySQL sketch follows the list):
- user: Username used for authentication.
- password: Password used for authentication.
- pluginName: Required - SQL database type. Valid values: mysql or postgres.
- databaseName: Required - the name of the SQL database to connect to.
- connectAddr: Required - the remote address of the database, e.g. "192.168.1.2".
- connectProtocol: Required - the protocol that goes with the connectAddr. Valid values: tcp or unix.
- connectAttributes: A map of key-value attributes to be sent as part of the connect data_source_name url.
- maxConns: The max number of connections to this data store.
- maxIdleConns: The max number of idle connections to this data store.
- maxConnLifetime: The maximum time a connection can be alive.
- tls: See below.
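A sketch of a MySQL data store definition under datastores (the store name, credentials, address, and limits are placeholders for illustration):
datastores:
  mysql-default:
    sql:
      pluginName: 'mysql'
      databaseName: 'temporal'
      # Remote address and protocol of the database.
      connectAddr: '127.0.0.1:3306'
      connectProtocol: 'tcp'
      user: 'username'
      password: 'password'
      # Connection pool limits.
      maxConns: 20
      maxIdleConns: 20
      maxConnLifetime: '1h'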
tls
The tls and mtls sections can contain the following values:
- enabled: boolean.
- serverName: Name of the server hosting the data store.
- certFile: Path to the cert file.
- keyFile: Path to the key file.
- caFile: Path to the CA file.
- enableHostVerification: boolean - true to verify the hostname and server cert (like a wildcard for a Cassandra cluster). This option is basically the inverse of InsecureSkipVerify. See InsecureSkipVerify in http://golang.org/pkg/crypto/tls/ for more info.
Note: certFile and keyFile are optional depending on the server config, but both fields must be omitted to avoid using a client certificate.
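A sketch enabling TLS on a Cassandra data store (the CA path and server name are placeholders):
datastores:
  default:
    cassandra:
      hosts: '127.0.0.1'
      keyspace: 'temporal'
      tls:
        enabled: true
        # CA to trust for the server certificate; no client certificate is used
        # because certFile and keyFile are both omitted.
        caFile: /path/to/ca/file
        serverName: cassandra.example.internal
        enableHostVerification: true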
log
The log section is optional and contains the following possible values:
- stdout: boolean - true if the output needs to go to standard out.
- level: Sets the logging level. Valid values: debug, info, warn, error, or fatal; defaults to info.
- outputFile: Path to the output log file.
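For example, a minimal sketch that logs at the info level to standard out:
log:
  stdout: true
  level: 'info'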
clusterMetadata
clusterMetadata contains the local cluster information. The information is used in Multi-Cluster Replication.
An example clusterMetadata section:
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 10
  masterClusterName: 'active'
  currentClusterName: 'active'
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 0
      rpcAddress: '127.0.0.1:7233'
  #replicationConsumer:
    #type: kafka
- currentClusterName: Required - The name of the current cluster. Warning: This value is immutable and will be ignored after the first run.
- enableGlobalNamespace: Default: false.
- replicationConsumer: Determines which method to use to consume replication tasks. The type may be either kafka or rpc.
- failoverVersionIncrement: The increment of each cluster version when a failover happens.
- masterClusterName: The master cluster name. Only the master cluster can register/update namespaces; all clusters can do namespace failover.
- clusterInformation: Maps the local cluster name to a ClusterInformation definition. The local cluster name should be consistent with currentClusterName. ClusterInformation sections consist of the following (a two-cluster sketch follows this list):
  - enabled: boolean - Whether a remote cluster is enabled for replication.
  - initialFailoverVersion
  - rpcAddress: Indicates the remote service address (host:port). The host can be a DNS name. Use the dns:/// prefix to enable round-robin between IP addresses for a DNS name.
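A hypothetical two-cluster sketch (the cluster names, failover versions, and the standby address are assumptions for illustration):
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 10
  masterClusterName: 'active'
  currentClusterName: 'active'
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: '127.0.0.1:7233'
    standby:
      enabled: true
      initialFailoverVersion: 2
      # dns:/// enables round-robin between the IP addresses behind the DNS name.
      rpcAddress: 'dns:///standby.example.internal:7233'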
services
The services section contains configuration keyed by service role type.
There are four supported service roles:
frontend
matching
worker
history
Below is a minimal example of a frontend service definition under services:
services:
  frontend:
    rpc:
      grpcPort: 8233
      membershipPort: 8933
      bindOnIP: '0.0.0.0'
The following sections are defined under each service heading:
rpc
Required
rpc contains settings related to the way a service interacts with other services. The following values are supported:
- grpcPort: Port on which gRPC will listen.
- membershipPort: Port used to communicate with other hosts in the same Cluster for membership info. Each service should use a different port. If there are multiple Temporal Clusters in your environment (Kubernetes, for example) and they have network access to each other, each Cluster should use a different membership port.
- bindOnLocalHost: Determines whether to use 127.0.0.1 as the listener address.
- bindOnIP: Used to bind the service to a specific IP, or 0.0.0.0. Check net.ParseIP for supported syntax; only IPv4 is supported. Mutually exclusive with the bindOnLocalHost option.
Note: Port values are currently expected to be consistent among role types across all hosts.
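As a fuller sketch, each service role gets its own rpc block with distinct ports (these port values are illustrative choices, not mandated defaults):
services:
  frontend:
    rpc:
      grpcPort: 7233
      membershipPort: 6933
      bindOnLocalHost: true
  matching:
    rpc:
      grpcPort: 7235
      membershipPort: 6935
      bindOnLocalHost: true
  history:
    rpc:
      grpcPort: 7234
      membershipPort: 6934
      bindOnLocalHost: true
  worker:
    rpc:
      grpcPort: 7239
      membershipPort: 6939
      bindOnLocalHost: true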
publicClient
The publicClient section is required; it describes the configuration needed for workers to connect to the Temporal server for background server maintenance.
hostPort
IPv4 host:port or DNS name used to reach the Temporal Frontend.
Example:
publicClient:
  hostPort: 'localhost:8933'
Use the dns:/// prefix to enable round-robin between IP addresses for a DNS name, as in the sketch below.
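A sketch using the dns:/// prefix (the hostname is a placeholder):
publicClient:
  # Round-robins across the IP addresses behind the DNS name.
  hostPort: 'dns:///frontend.temporal.example.internal:7233'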
archival
Optional
Archival is an optional configuration needed to set up the Archival store.
It can be enabled for history and visibility data.
The following list describes the supported values for each configuration on the history and visibility data.
- state: State of the Archival setting. Supported values are enabled and disabled. This value must be enabled to use Archival with any Namespace in your Cluster.
  - enabled: Enables Archival in your Cluster setup. When set to enabled, URI and namespaceDefaults values must be provided.
  - disabled: Disables Archival in your Cluster setup. When set to disabled, the enableRead value must be set to false, and under namespaceDefaults, state must be set to disabled, with no values set for the provider and URI fields.
- enableRead: Supported values are true or false. Set to true to allow read operations from the archived Event History data.
- provider: Location where data should be archived. Subprovider configs are filestore, gstorage, s3, or your_custom_provider. The default configuration specifies filestore.
Example:
- To enable Archival in your Cluster configuration:

# Cluster-level Archival config enabled
archival:
  # Event History configuration
  history:
    # Archival is enabled for the History Service data.
    state: 'enabled'
    enableRead: true
    # Namespaces can use either the local filestore provider or the Google Cloud provider.
    provider:
      filestore:
        fileMode: '0666'
        dirMode: '0766'
      gstorage:
        credentialsPath: '/tmp/gcloud/keyfile.json'
  # Configuration for archiving Visibility data.
  visibility:
    # Archival is enabled for Visibility data.
    state: 'enabled'
    enableRead: true
    provider:
      filestore:
        fileMode: '0666'
        dirMode: '0766'

- To disable Archival in your Cluster configuration:

# Cluster-level Archival config disabled
archival:
  history:
    state: 'disabled'
    enableRead: false
  visibility:
    state: 'disabled'
    enableRead: false
namespaceDefaults:
  archival:
    history:
      state: 'disabled'
    visibility:
      state: 'disabled'
For more details on Archival setup, see Set up Archival.
namespaceDefaults
Optional
Sets the default Archival configuration for each Namespace, using namespaceDefaults for history and visibility data.
- state: Default state of Archival for the Namespace. Supported values are enabled or disabled.
- URI: Default URI for the Namespace.
For more details on setting Namespace defaults on Archival, see Namespace creation in Archival setup.
Example:
# Default values for a Namespace if none are provided at creation.
namespaceDefaults:
  # Archival defaults.
  archival:
    # Event History defaults.
    history:
      state: 'enabled'
      # New Namespaces will default to the local provider.
      URI: 'file:///tmp/temporal_archival/development'
    visibility:
      state: 'disabled'
      URI: 'file:///tmp/temporal_vis_archival/development'
dcRedirectionPolicy
Optional
Contains the Frontend datacenter API redirection policy that you can use for cross-DC replication.
Supported values:
- policy: Supported values are noop, selected-apis-forwarding, and all-apis-forwarding.
  - noop: Not setting a value or setting noop means no redirection. This is the default value.
  - selected-apis-forwarding: Sets up forwarding for the following APIs to the active Cluster based on the Namespace:
    - StartWorkflowExecution
    - SignalWithStartWorkflowExecution
    - SignalWorkflowExecution
    - RequestCancelWorkflowExecution
    - TerminateWorkflowExecution
    - QueryWorkflow
  - all-apis-forwarding: Sets up forwarding for all APIs on the Namespace in the active Cluster.
Example:
#...
dcRedirectionPolicy:
  policy: 'selected-apis-forwarding'
#...
dynamicConfigClient
Optional
Configuration for setting up the file-based dynamic configuration client for the Cluster.
This section is required if you specify dynamic configuration. Supported configuration values are as follows:
- filepath: Specifies the path where the dynamic configuration YAML file is stored. The path should be relative to the root directory.
- pollInterval: Interval at which the file-based client polls for dynamic configuration updates. The minimum interval you can set is 5 seconds.
Example:
dynamicConfigClient:
  filepath: 'config/dynamicconfig/development-cass.yaml'
  pollInterval: '10s'