This document contains solutions to some of the common issues you are likely to encounter as an admin.
Check if the server is running in normal mode in the file /tmp/recovery_data.json
If the server is in "degraded" or "maintenance" mode, RudderStack only stores the events and does not process them.
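The mode check can be scripted by reading the recovery file directly. A minimal sketch, assuming the mode is stored as a top-level key in /tmp/recovery_data.json (the exact key name and file layout may differ in your RudderStack version, so inspect the file on your machine first):

```python
import json

def read_server_mode(path="/tmp/recovery_data.json"):
    """Return the server mode ("normal", "degraded", or "maintenance")
    recorded in the recovery file. The key name is an assumption."""
    with open(path) as f:
        data = json.load(f)
    # The docs refer to "Mode", so try both capitalisations.
    return data.get("mode") or data.get("Mode")
```

For example, an alert script could call `read_server_mode()` and page you whenever the result is not "normal".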
Check if you have provided the right backend token
Check if the Control Plane is up (https://api.rudderlabs.com/health)
Check your internal firewall rules and edit if needed. You need access to outbound HTTP.
Check if those destinations are enabled in the Control Plane
Verify that the config parameters like API key, tracking ID, etc. are correct
There is a possibility that a destination service (Google Analytics, S3, etc.) is down.
Check the number of pending gateway tables (tables that start with gw_), router tables (tables that start with rt_), and batch router tables (tables that start with batch_rt_)
If the count for any of the above table types is high (> 5), then requests are coming in at a higher rate than the server can process. Consider adding another RudderStack node if possible.
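Counting these tables can be scripted. A sketch, assuming a Postgres-backed jobsDB: the SQL lists user tables (run it against the jobsDB with any client, e.g. psql), and the helper tallies the names by prefix. The prefix list mirrors the table types above.

```python
# Lists all user tables in a Postgres database; run against the jobsDB.
LIST_TABLES_SQL = """
SELECT tablename FROM pg_catalog.pg_tables
WHERE schemaname = 'public';
"""

def count_by_prefix(table_names, prefixes=("gw_", "rt_", "batch_rt_")):
    """Return {prefix: number of tables starting with that prefix}."""
    return {p: sum(name.startswith(p) for name in table_names)
            for p in prefixes}
```

If any of the returned counts stays above 5, events are arriving faster than the node can drain them.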
If you have access to RudderStack Enterprise edition, check out the Grafana dashboards
When RudderStack enters "degraded" mode, it only logs the events and does not process them. If the issue that caused the server to enter this mode is temporary (e.g. the Transformer is down), fix the issue and restart the server in normal mode.
When RudderStack enters "maintenance" mode, it takes a backup of the old database and creates a new database in "degraded" mode. In this case too, RudderStack only logs the events and does not process them. Once the underlying issue is fixed, start another instance of the RudderStack server in normal mode but on a different port (say 8081), pointing to the old database. That instance will drain all the events in the old database.
Then restart the actual server in normal mode by updating /tmp/recovery_data.json: set Mode to "normal". It will resume routing pending events, and the ordering of the events is guaranteed.
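The mode switch itself is just an edit to the recovery file. A minimal sketch, assuming the mode lives under a top-level "mode"/"Mode" key in the JSON (an assumption; check the actual file first, and stop the server before editing it):

```python
import json

def set_server_mode(mode, path="/tmp/recovery_data.json"):
    """Rewrite the recovery file with the given mode, preserving the
    rest of its contents. Restart the server afterwards."""
    with open(path) as f:
        data = json.load(f)
    key = "Mode" if "Mode" in data else "mode"  # key name is an assumption
    data[key] = mode
    with open(path, "w") as f:
        json.dump(data, f)
```

For example, `set_server_mode("normal")` followed by a server restart brings the node back into normal operation.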
Check if your system is in "degraded" or "maintenance" mode, which would result in the events only being logged and not processed. If needed, increase the storage capacity of your machine until everything goes green.
Ideally, this should not happen. Restarting the service is recommended in such a scenario.
If you have sessions enabled, RudderStack caches the session information. Configure
If there are tables that start with pre_drop_ but you don't see them being removed, verify the access credentials to your object storage (e.g. S3)
If you have multiple instances of the Data Plane, each table dump will be inside a specific folder named after the
If you have access to RudderStack Enterprise, you already have a visualization of the RudderStack server metrics at your disposal for tracking the health of your server.
Check that the number of jobsDB tables is not continuously increasing.
Verify that Server mode is "normal"
Enable debug logging by setting the following variable in your .env file as shown:
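For example (the variable name below is an assumption; confirm the exact name against the .env.sample shipped with your RudderStack release):

```
# Hypothetical variable name; check .env.sample for your version
LOG_LEVEL=DEBUG
```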
We recommend the following configuration for production deployments. On a Linux machine, add the following lines to /etc/sysctl.conf:
net.ipv4.tcp_max_tw_buckets = 65536
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_max_syn_backlog = 131072
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_synack_retries = 3
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 8
net.ipv4.tcp_rmem = 16384 174760 349520
net.ipv4.tcp_wmem = 16384 131072 262144
net.ipv4.tcp_mem = 262144 524288 1048576
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_low_latency = 1
net.ipv4.tcp_syncookies = 0
If your system is hitting TCP limits and returning HTTP errors, the above configuration will help. After editing the file, apply the settings with sysctl -p.