How to set up and run a Stellar Core node?
Stellar is a decentralized network of nodes: computers that keep a shared distributed ledger and communicate with each other to validate and add transactions to it. Stellar Core is an implementation of the Stellar Consensus Protocol (SCP); nodes use it to stay in sync while validating transactions and applying them to the ledger.
This article explains the technical and operational aspects of installing, configuring, monitoring and maintaining a Stellar Core node. The basic flow of the article is as follows:
- Choose the type of node to run
- Environment setup
- Install Stellar Core
- Configuration
- Publish History Archives
- Running
- Monitor and maintain node
Types of nodes
All Stellar nodes share the same basic functionality: running Stellar Core, connecting to peers, submitting transactions, storing the ledger's state in a SQL database, and keeping a copy of the ledger in flat XDR files called buckets. Horizon, the Stellar API, can be supported by any node. Beyond these basics, two key configuration options determine a node's behavior. A node can:
- participate in consensus to validate transactions
- publish an archive that other nodes can access to find the complete history of the network.
Based on these two key points, nodes are categorized into the following four types:
- Watcher
- Basic validator
- Full validator
- Archiver
All types of nodes support Horizon and submit transactions to the network; the difference lies in validating transactions and publishing history.
- Watcher
A Watcher keeps track of the ledger and submits transactions, but it is not configured to participate in transaction validation or to publish a history archive, so it does nothing to support the network or increase decentralization. It is the lightest type of node. A Watcher pairs well with Horizon: if all you need is a Horizon instance, a Watcher is the right choice.
- Basic validator
A Basic validator has similar operational requirements and provides similar advantages as a Watcher. The difference is that a Basic validator uses a secret key and is configured to participate in consensus.
- Full validator
Like the Basic validator, the Full validator performs all the same tasks, except that it also publishes a history archive containing snapshots of the ledger. Though it is more expensive and complex to run, it supports the network's resilience and decentralization. New nodes joining the network, or nodes that temporarily fall out of sync, can consult archives published by Full validators to catch up on the network's history.
A Full validator can also support Horizon, but the organizations that run them generally don't use them to query network data or submit transactions; instead, they run a Watcher in addition to the Full validator to handle Horizon.
- Archiver
Like a Full validator, an Archiver publishes the network's activity but doesn't participate in consensus, so its usefulness is relatively limited. You need to run an archive if you run a Stellar-facing service such as a blockchain explorer. Use an Archiver if you want to referee the network, which is quite unlikely.
Environment setup
There are various ways to install Stellar Core. After installation, it can be configured to participate in the network at different levels: Watcher, Basic validator or Full validator. No matter how Stellar Core is installed, the node needs to connect to the peer-to-peer network, store the ledger's state in a database and connect to Horizon, the Stellar API.
- Compute requirements
Stellar Core with PostgreSQL works well on an AWS m5.large, which has a dual-core 2.5GHz Intel Xeon and 8GB of RAM. Storage-wise, 1TB of disk space should be sufficient. If Stellar Core runs in conjunction with Horizon, ensure that the system is also equipped to handle Horizon's compute requirements.
- Network access
Stellar Core connects to a peer-to-peer network to keep the ledger in sync. This means the node must keep specific TCP ports open for inbound and outbound communication.
Inbound: a Stellar Core node allows all IPs to connect to its PEER_PORT over TCP. A different port can be specified when configuring Stellar Core, but the default (11625) is most commonly used.
Outbound: Stellar Core needs to connect to other nodes via their PEER_PORT over TCP. Information about other nodes' PEER_PORT can be found on a network explorer such as Stellarbeat.
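If the host runs a firewall, the inbound peer port has to be opened. A minimal sketch for an Ubuntu host, assuming the default port and that ufw is your firewall tool:
$ sudo ufw allow 11625/tcp   # inbound Stellar Core peer-to-peer traffic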
Install Stellar Core
Developers can install Stellar Core in three ways: using a Docker image, using pre-built packages or building from source. The first two methods are discussed here; installing from source is covered by the project's build instructions.
- Docker-based installation: SDF maintains a quickstart image that bundles Stellar Core with a PostgreSQL database and Horizon. It is a quick way to set up a non-validating, ephemeral configuration that works for most developers. SDF also maintains a standalone Stellar Core-only image that starts a three-node local Stellar Core network, all on the same Docker host. In addition, SatoshiPay maintains separate Docker images for Horizon and Stellar Core. The Stellar Core image comes in a few varieties, including one with the AWS CLI installed and one with the Google Cloud SDK installed.
- Package-based installation: For Ubuntu 16.04 LTS users, the latest releases of Stellar Core and Horizon are available as Debian binary packages. These packages can be installed individually, which offers the most flexibility but requires manually creating the configuration files and the PostgreSQL database. Alternatively, the stellar-quickstart package installs a Testnet Stellar Core and Horizon, both backed by a local PostgreSQL database, and can be modified after installation.
Configuration
After installation, the next step is to complete a configuration file that states important things about the node, such as what database it writes to, whether it connects to the public network or the testnet, and which other nodes are in its quorum set. The file is written in TOML. By default, Stellar Core loads it from ./stellar-core.cfg; a different file can be loaded with the command:
$ stellar-core --conf betterfile.cfg
1. Database
Stellar Core keeps two copies of the ledger: one in a SQL database and one in XDR files on local disk called buckets. The database is consulted during consensus and is modified automatically when a transaction set is applied to the ledger. It provides fast, fine-grained, random access.
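The database is set with the DATABASE entry in the configuration file. A minimal sketch, assuming a local PostgreSQL database named stellar accessed as the stellar user (both names are assumptions):
DATABASE="postgresql://dbname=stellar user=stellar host=localhost"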
2. Buckets
Stellar Core stores a copy of the ledger in flat XDR files called buckets. These files are stored in the directory specified in the configuration file as BUCKET_DIR_PATH, which defaults to buckets. The bucket files are used for hashing and for transmitting ledger differences to history archives, so store them on fast local disks with sufficient space. For the most part, the contents of both the database and the buckets are managed by Stellar Core and can be ignored. When running Stellar Core for the first time, initialize the database and buckets with the following command:
$ stellar-core new-db
Run this command again if the database gets corrupted or you need to restart from scratch.
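If the default buckets directory does not sit on a fast disk, override it in the configuration file. A sketch with an assumed path:
BUCKET_DIR_PATH="/var/lib/stellar/buckets"   # fast local disk with sufficient free space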
3. Network Passphrase
NETWORK_PASSPHRASE specifies whether the node connects to the public network or the testnet. The choices are:
NETWORK_PASSPHRASE="Public Global Stellar Network ; September 2015"
NETWORK_PASSPHRASE="Test SDF Network ; September 2015"
4. Validating
Stellar Core isn't set up to validate by default; it needs to be configured if you want a node to be a Basic or Full validator. That means preparing the node to participate in SCP (the Stellar Consensus Protocol) and to sign messages pledging that the network agreed to a particular transaction set. It is a three-step process:
- Generate a keypair: stellar-core gen-seed
- Add NODE_SEED="SD7DN…" to the configuration file, where SD7DN… stands for the secret key of the generated keypair.
- Add NODE_IS_VALIDATOR=true to the configuration file.
It is crucial to safeguard the node's secret key: anyone who gets access to it can send messages to the network that appear to originate from your node. Every node should have its own secret key. If you run more than one node, set a common HOME_DOMAIN for them using the NODE_HOME_DOMAIN property, so that the nodes group correctly during quorum set generation.
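Putting these settings together, the validating portion of the configuration might look like the following sketch (the seed is the truncated placeholder from above, and example.com is an assumed home domain):
NODE_SEED="SD7DN…"             # secret key from stellar-core gen-seed; keep it safe
NODE_IS_VALIDATOR=true
NODE_HOME_DOMAIN="example.com" # common home domain when running several nodes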
5. Choose quorum set
The selection of the quorum set doesn't depend on the type of node. It consists of validators (grouped by organization) that the node checks with to determine whether to apply a transaction set to a ledger. An ideal quorum set:
- aligns with your organization's priorities
- maintains good quorum intersection
- has enough redundancy to handle a few node failures
Stellar Core automatically generates a quorum set based on structured information provided in the configuration file. The choice of validators is yours; Stellar Core arranges them into an optimal quorum set. To accomplish this, it does the following:
- Classifies validators run by the same organization into a subquorum
- Assigns weights to those subquorums based on quality
- Specifies the threshold for each of those subquorums
Automatic quorum set generation doesn't relieve you of all responsibility: choosing trustworthy validators and keeping an eye on their consistency and reliability remains an important task.
5.1 Validator discovery
It's important to note that when you add a validating node to your quorum set, you are trusting the organization running that node (for example, the SDF), not some anonymous Stellar public key. A validator declares its home domain on-chain using the set_options operation and publishes information in a stellar.toml file hosted on that domain, creating a self-verified link between the node and the organization. This link lets you look up a node by its Stellar public key and check the stellar.toml file to find out who runs it. Rather than doing this manually, it is easier to consult the list of nodes on Stellarbeat.io, which shows that most reliable organizations run more than one validator. The critical point is that you should depend on exactly one entity, or on at least four entities, for automatic quorum set configuration to work efficiently.
5.2 Home domain array
Stellar Core relies on two arrays of tables to create your quorum set: [[HOME_DOMAINS]] and [[VALIDATORS]].
[[HOME_DOMAINS]] defines attributes shared by all validators with the same home domain. When you add several nodes hosted by the same organization to your configuration, the information in the [[HOME_DOMAINS]] table, specifically the quality rating, automatically applies to every one of those validators.
5.3 Validator array
Each node added to the quorum set gets a complete [[VALIDATORS]] table with NAME, QUALITY, HOME_DOMAIN and PUBLIC_KEY fields. If the node's HOME_DOMAIN matches an entry in the [[HOME_DOMAINS]] array, the quality rating defined there applies to the node and QUALITY can be omitted, as in the example below.
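For illustration, a home domain and one of its validators might be declared as follows. The values mirror SDF's published testnet example configuration (verify them against the current documentation before use); the HISTORY field, covered later in this article, tells your node where to fetch that validator's archive:
[[HOME_DOMAINS]]
HOME_DOMAIN="testnet.stellar.org"
QUALITY="HIGH"

[[VALIDATORS]]
NAME="sdftest1"
HOME_DOMAIN="testnet.stellar.org"
PUBLIC_KEY="GDKXE2OZMJIPOSLNA6N6F2BVCI3O777I2OOC4BV7VOYUEHYX7RTRYA7Y"
ADDRESS="core-testnet1.stellar.org"
HISTORY="curl -sf http://history.stellar.org/prd/core-testnet/core_testnet_001/{0} -o {1}"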
5.4 Validator quality
QUALITY is a required field for every node in the quorum set. The quality rating for a node can be HIGH, MEDIUM or LOW. Automatic quorum set configuration gives the most weight to HIGH-quality validators. Before rating a node HIGH, ensure that it has low latency and good uptime and that the organization running it is trustworthy and reliable.
A high-quality validator belongs to a suite of nodes that provide redundancy and publishes archives.
A medium-quality validator is nested under high-quality validators. The combined weight of all medium-quality validators is equivalent to a single high-quality entity.
A low-quality validator is nested under medium-quality validators. The combined weight of all low-quality validators is equivalent to a single medium-quality entity. Low-quality validators should prove reliable over time before being promoted.
5.5 Automatic quorum set generation
Once the validators are added to the configuration, Stellar Core generates a quorum set based on the following rules:
- Validators with the same home domain are grouped together and given a threshold requiring a simple majority.
- Heterogeneous groups of validators are given thresholds that assume Byzantine failure.
- Based on QUALITY, entities are grouped and nested from HIGH to LOW.
- Decision-making priority is given to the HIGH-quality entities at the top.
- A single HIGH-quality entity weighs as much as the combined weight of all MEDIUM-quality entities.
- A single MEDIUM-quality entity weighs as much as the combined weight of all LOW-quality entities.
5.6 Quorum and Overlay network
It's good practice to share information about other reliable validators with your node. Configure KNOWN_PEERS and PREFERRED_PEERS with the addresses of these dependencies, and configure PREFERRED_PEER_KEYS with the keys from your quorum set to prioritize connections to the nodes that help you reach consensus.
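A sketch of these overlay settings, with assumed hostnames and a truncated placeholder key:
KNOWN_PEERS=["core-backup1.example.com","core-backup2.example.com"]
PREFERRED_PEERS=["core-live1.example.com"]
PREFERRED_PEER_KEYS=["GDKXE…"]   # public keys of validators from your quorum set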
5.7 Updating and coordinating quorum set
The best way to keep the quorum set updated and coordinated is to connect to the validators channel on Stellar's Keybase and sign up for the validators Google Group. Whenever the validators or quorum set need to change, such as taking a validator down for maintenance or adding new validators to the quorum set, it is essential to stage the changes to preserve quorum intersection and maintain the good health of the network:
- Do not remove too many nodes from your quorum set before the nodes are taken down. If different validators remove different nodes, the remaining sets may not overlap, causing network splits.
- Do not add too many nodes to your quorum set at the same time; the newly added nodes could overpower your existing configuration.
- Begin by adding or removing nodes in your own quorum set, then coordinate with others so the changes propagate gradually.
6. History
History archives are generally stored on off-site commodity storage services such as Google Cloud Storage, Amazon S3 or custom SFTP/HTTP servers. Regardless of the type of node you run, it should be configured to fetch history from one or more public archives. Any number of archives can be configured, and Stellar Core will automatically round-robin between them.
If you follow the recommendation to include high-quality nodes in your quorum set, history access is already configured: each such validator's entry adds the location of its archive via the HISTORY field in the validators array.
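An archive can also be configured directly with a get command in a [HISTORY] table. A sketch using SDF's public testnet archive as the source (swap in the archive you actually depend on):
[HISTORY.sdf_testnet_1]
get="curl -sf http://history.stellar.org/prd/core-testnet/core_testnet_001/{0} -o {1}"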
7. Automatic maintenance
The Stellar Core database has some tables that act as publishing queues for external systems such as Horizon. These queues hold metadata about changes made to the distributed ledger and can grow without bounds if not appropriately managed.
To avoid this, a built-in scheduler deletes data from old ledgers that are no longer needed. The options AUTOMATIC_MAINTENANCE_PERIOD, AUTOMATIC_MAINTENANCE_COUNT and KNOWN_CURSORS control the automatic maintenance behavior.
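A sketch of these settings; the values shown are illustrative assumptions, not recommendations:
AUTOMATIC_MAINTENANCE_PERIOD=359   # seconds between maintenance runs
AUTOMATIC_MAINTENANCE_COUNT=400    # maximum number of ledgers cleaned per run
KNOWN_CURSORS=["HORIZON"]          # cursors whose pending data must be preserved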
Stellar Core performs this automatic maintenance by default, so ensure that it is disabled until downstream systems have ingested the data they need. If there's a need to regenerate metadata, the easiest way is to clear the database with new-db and then replay ledgers for the required range.
Publishing History Archives
If the node is a Watcher or Basic validator, skip this section. For a Full validator or an Archiver, set up the node to publish a history archive. You can host an archive using a blob store such as Amazon S3 or Digital Ocean Spaces, or serve a local archive directly via an HTTP server such as Apache or Nginx.
1. Caching and history archives
Standard caching techniques reduce the data transfer costs associated with public history archives. Three rules apply when caching history archives:
- Do not cache the archive state file .well-known/stellar-history.json ("Cache-Control: no-cache")
- Cache everything else for as long as possible (> 1 day)
- Do not cache HTTP 4xx responses ("Cache-Control: no-cache")
2. Local history archive using Nginx
The following steps publish a local history archive using Nginx. First, add a [HISTORY.local] section to the Stellar Core configuration:
[HISTORY.local]
get="cp /mnt/xvdf/stellar-core-archive/node_001/{0} {1}"
put="cp {0} /mnt/xvdf/stellar-core-archive/node_001/{1}"
mkdir="mkdir -p /mnt/xvdf/stellar-core-archive/node_001/{0}"
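Next, initialize the archive with new-hist, mirroring the S3 invocation shown later in this article (the configuration path is an assumption):
# sudo -u stellar stellar-core --conf /etc/stellar/stellar-core.cfg new-hist local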
This creates the history archive structure:
# tree -a /mnt/xvdf/stellar-core-archive/
/mnt/xvdf/stellar-core-archive
└── node_001
    ├── history
    │   └── 00
    │       └── 00
    │           └── 00
    │               └── history-00000000.json
    └── .well-known
        └── stellar-history.json

6 directories, 2 files
Then configure an Nginx virtual host to serve the local archive:
server {
    listen 80;
    root /mnt/xvdf/stellar-core-archive/node_001/;

    server_name history.example.com;

    # do not cache 404 errors
    error_page 404 /404.html;
    location = /404.html {
        add_header Cache-Control "no-cache" always;
    }

    # do not cache the history state file
    location ~ ^/.well-known/stellar-history.json$ {
        add_header Cache-Control "no-cache" always;
        try_files $uri =404;
    }

    # cache the entire history archive for 1 day;
    # anything not present in the archive returns 404
    location / {
        add_header Cache-Control "max-age=86400";
        try_files $uri =404;
    }
}
3. Amazon S3 history archive
To publish a history archive using Amazon S3, add a [HISTORY.s3] section to the configuration:
[HISTORY.s3]
get='curl -sf http://history.example.com/{0} -o {1}'        # Cached HTTP endpoint
put='aws s3 cp --region us-east-1 {0} s3://bucket.name/{1}' # Direct S3 access
To create the S3 archive, run new-hist:
# sudo -u stellar stellar-core --conf /etc/stellar/stellar-core.cfg new-hist s3
Serve the archive via the Amazon S3 static site endpoint, fronted by Nginx for caching:
server {
    listen 80;
    root /srv/nginx/history.example.com;
    index index.html index.htm;

    server_name history.example.com;

    # use google nameservers for lookups
    resolver 8.8.8.8 8.8.4.4;

    # bucket.name s3 static site endpoint
    set $s3_bucket "bucket.name.s3-website-us-east-1.amazonaws.com";

    # do not cache 404 errors
    error_page 404 /404.html;
    location = /404.html {
        add_header Cache-Control "no-cache" always;
    }

    # do not cache the history state file
    location ~ ^/.well-known/stellar-history.json$ {
        add_header Cache-Control "no-cache" always;
        proxy_intercept_errors on;
        proxy_pass http://$s3_bucket;
        proxy_read_timeout 120s;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $s3_bucket;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # cache the history archive for 1 day
    location / {
        add_header Cache-Control "max-age=86400";
        proxy_intercept_errors on;
        proxy_pass http://$s3_bucket;
        proxy_read_timeout 120s;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $s3_bucket;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Running
1. Start Stellar Core
Start Stellar Core by running the command:
$ stellar-core run
The node's activity will now be visible as it joins the network.
2. Interact with your instance
Interact with Stellar Core through its administrative HTTP endpoint by running:
$ stellar-core http-command <command>
Do not expose this HTTP endpoint to the public internet; it is meant for administrators to issue commands, including submitting transactions to the Stellar network.
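The same commands can also be sent over plain HTTP. A sketch, assuming the default administrative port (HTTP_PORT, 11626) on the local machine:
$ curl 'http://127.0.0.1:11626/info'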
3. Join the network
There are various phases while joining the network:
Establishing a connection to other peers, reflected in the peers field:
"peers" : { "authenticated_count" : 3, "pending_count" : 4},
Observe Consensus:
Until the node sees a quorum, it reports:
"state" : "Joining SCP"
After observing consensus, a new field, quorum, displays information about network decisions, and the node switches to the Catching up state:
"quorum" : { "qset" : { "ledger" : 22267866, "agree" : 5, "delayed" : 0, "disagree" : 0, "fail_at" : 3, "hash" : "980a24", "missing" : 0, "phase" : "EXTERNALIZE" }, "transitive" : { "intersection" : true, "last_check_ledger" : 22267866, "node_count" : 21 }},"state" : "Catching up",
Catching up:
In this phase, the node downloads data from history archives. The status starts out like:
"state" : "Catching up","status" : [ "Catching up: Awaiting checkpoint (ETA: 35 seconds)" ]Sync:
When catching up is done, the state will change to:
"state" : "Synced!"
4. Logging
By default, Stellar Core logs to standard output and to stellar-core.log; the log file location is configurable via LOG_FILE_PATH.
Log messages are classified by progressive priority levels: TRACE, DEBUG, INFO, WARNING, ERROR and FATAL. The following command controls the log level:
$ stellar-core http-command "ll?level=debug"
Log levels can also be adjusted on a partition-by-partition basis through the administrative interface. For instance, to set the history system to DEBUG level:
$ stellar-core http-command "ll?level=debug&partition=history"
The default log level is INFO.
5. Validator maintenance
Maintaining a validator includes everything from applying security patches and system upgrades to temporarily taking the validator out of the network. The administrator is responsible for keeping the validator safe and for its overall maintenance. Safety means that other validators that depend on yours are not affected when your validator is taken down for maintenance, and that the validator rejoins the network smoothly when it comes back.
Perform the following recommended steps as part of maintenance:
- To avoid a situation where many nodes go down simultaneously, announce the planned downtime to the nodes that depend on yours.
- Dependencies should check the health of their quorum.
- If there are no objections, take your instance down.
- Once modifications are complete, restart the instance to rejoin the network.
Monitor and Maintain node
After the node is live and running, it is essential to ensure that it stays in sync and contributes to the overall health of the network. Stellar Core exposes information that helps in monitoring the node and diagnosing potential problems. This information can be accessed by running commands and inspecting the output.
General node information
Run:
$ stellar-core http-command 'info'
The output will look like this:
{ "build" : "v11.1.0", "history_failure_rate" : "0", "ledger" : { "age" : 3, "baseFee" : 100, "baseReserve" : 5000000, "closeTime" : 1560350852, "hash" : "40d884f6eb105da56bea518513ba9c5cda9a4e45ac824e5eac8f7262c713cc60", "maxTxSetSize" : 1000, "num" : 24311579, "version" : 11 }, "network" : "Public Global Stellar Network ; September 2015", "peers" : { "authenticated_count" : 5, "pending_count" : 0 }, "protocol_version" : 10, "quorum" : { "qset" : { "agree" : 6, "delayed" : 0, "disagree" : 0, "fail_at" : 2, "hash" : "d5c247", "ledger" : 24311579, "missing" : 1, "phase" : "EXTERNALIZE" }, "transitive" : { "critical" : null, "intersection" : true, "last_check_ledger" : 24311536, "node_count" : 21 } }, "startedOn" : "2019-06-10T17:40:29Z", "state" : "Catching up", "status" : [ "Catching up: downloading and verifying buckets: 30/30 (100%)" ] }}
Some notable fields in info are:
- build: the build number of this Stellar Core instance
- ledger: the local state of the node, which may differ from the network state if the node was disconnected
- state: the node's sync status relative to the network
Overlay information
Run the command:
$ stellar-core http-command 'peers'
The output lists both the inbound and outbound connections from the node to its peers:
{ "authenticated_peers": { "inbound": [ { "address": "54.161.82.181:11625", "elapsed": 6, "id": "sdf1", "olver": 5, "ver": "v9.1.0" } ], "outbound": [ { "address": "54.219.174.177:11625", "elapsed": 2303, "id": "sdf2", "olver": 5, "ver": "v9.1.0" }, { "address": "54.160.195.7:11625", "elapsed": 14082, "id": "sdf3", "olver": 5, "ver": "v9.1.0" } ] }, "pending_peers": { "inbound": ["211.249.63.74:11625", "45.77.5.118:11625"], "outbound": ["178.21.47.226:11625", "178.131.109.241:11625"] }}
This list provides information about all the peers connected to the node, which makes it possible to investigate and monitor the network's overall health.
With that, the job of setting up and running a Stellar Core node is done. Stellar continues to gain popularity worldwide because of its unique consensus protocol, built-in order books and connections to existing financial infrastructure. That growing popularity brings the need to learn Stellar development and operations. This article presented the technical and operational aspects of installing, configuring, monitoring and maintaining a Stellar Core node.
If you're looking for a company to develop and run Stellar Core nodes for your organization, we're here to help. Consult our team of Stellar blockchain experts and discuss your requirements.